As our local regulators are making noises about enforcing the EIRP regulations, I need to make sure that the little non-profit, community-run wireless network I do the tech for is within the rules.
Could someone take pity and explain the mystery of Tx power to me in words of one syllable?
I understand the basic computation:
Tx power (dBm) - cable loss (dB) + antenna gain (dBi) = EIRP (dBm)
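To put numbers on it, here's the sum as I understand it for one of our links (the figures are ours; the sketch itself is just my own illustration):

```python
# A quick sanity check of the EIRP formula (values from our link; purely illustrative).
tx_power_dbm = 18      # card output at 54 Mbps, per the ROS default table
cable_loss_db = 1      # ~3 m of CA-400
antenna_gain_dbi = 30  # dish gain

eirp_dbm = tx_power_dbm - cable_loss_db + antenna_gain_dbi
eirp_watts = 10 ** ((eirp_dbm - 30) / 10)  # dBm -> watts: P(W) = 10^((dBm - 30) / 10)
print(f"EIRP = {eirp_dbm} dBm (~{eirp_watts:.1f} W)")  # 47 dBm, ~50 W - way over 4 W!
```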
What baffles me is the ‘Tx power’ bit.
ROS with our usual card, the R52Hn, on Tx Power Mode = default shows a table going from 6 Mbps at 22 dBm down to 54 Mbps at 18 dBm.
What does the ‘m’ in dBm stand for?
How is it that you get the highest speed at the lowest power and vice versa?
Given that we are throttled to 10 Mbps each way through our gateway to the ISP, do I need to run the highest possible speed/Tx power across the network, or can I restrict the data rate to, say, 24 Mbps or even less to gain reliability with no loss of throughput?
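My back-of-envelope reasoning, for what it's worth (the ~50% figure for 802.11 protocol overhead is purely my assumption, not a measurement):

```python
# Rough check: end-to-end throughput is capped by the slowest hop.
gateway_cap_mbps = 10   # our ISP throttle, each way
mac_efficiency = 0.5    # assumed 802.11 overhead factor (my guess)

for air_rate_mbps in (54, 36, 24, 12):
    usable = air_rate_mbps * mac_efficiency
    end_to_end = min(usable, gateway_cap_mbps)
    print(f"{air_rate_mbps:>2} Mbps over the air -> ~{usable:>4.0f} Mbps usable, "
          f"~{end_to_end:.0f} Mbps end-to-end")
```

If that's roughly right, anything down to 24 Mbps still comfortably exceeds the gateway cap.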
If we have a legal cap of 36 dBm (4 watts) EIRP, a 30 dBi dish, and lose 1 dB in 3 metres of CA-400 antenna cable, must I limit the card's output to 7 dBm using 'TX Mode manual' and setting it to 7? Doing that on a link we have running would mean reducing the card's power by 15 dB which, if the relationship is 1:1, would take the Tx signal from its present -78 to -93, which hardly seems desirable?
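The arithmetic behind that question, spelled out (again just my own sketch, taking 22 dBm as the card's current output at the low rates):

```python
# The arithmetic behind my question (illustrative; variable names are mine).
eirp_cap_dbm = 36      # legal EIRP limit (~4 W)
antenna_gain_dbi = 30  # dish
cable_loss_db = 1      # 3 m of CA-400

max_card_dbm = eirp_cap_dbm - antenna_gain_dbi + cable_loss_db
print(f"Max legal card output: {max_card_dbm} dBm")  # -> 7 dBm

# If the card drops from 22 dBm to 7 dBm, and signal tracks power 1:1:
current_signal_dbm = -78
reduction_db = 22 - max_card_dbm
print(f"Expected signal after reduction: {current_signal_dbm - reduction_db} dBm")  # -> -93 dBm
```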
Finally, we have the same gear at each end of the backhaul: 30 dBi dish antennas with R52Hn cards in RB433s. The only difference is that one end has a 3 m antenna cable and the other only a 1 m cable. Yet the Tx/Rx is -73/-80, give or take 1 dB.
Can 2 m of cable make that much difference, or are there likely to be local factors, such as another 5 GHz transmitter on the same tower at one end, even if on a very different frequency? Could one antenna being slightly out of alignment have this effect?
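For comparison, here's my sum on whether the cable alone can account for it (based only on our own figure of ~1 dB per 3 m for CA-400 at 5 GHz):

```python
# Checking whether 2 m of extra cable explains a 7 dB asymmetry.
loss_per_metre_db = 1 / 3   # from our ~1 dB per 3 m of CA-400
extra_cable_m = 3 - 1       # 3 m at one end vs 1 m at the other
expected_diff_db = extra_cable_m * loss_per_metre_db
observed_diff_db = abs(-73 - (-80))
print(f"Expected from cable alone: ~{expected_diff_db:.1f} dB")  # ~0.7 dB
print(f"Observed asymmetry:        {observed_diff_db} dB")       # 7 dB
```

On those numbers the cable accounts for well under 1 dB, which is why I suspect something local.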
Thanks for your time.