Can you tell me why the bandwidth is halved after every hop? It's about the RB532: every unit has 2 interfaces, one in station wds and the other in ap bridge, and on every MikroTik all interfaces are in one bridge.
How many wireless interfaces do you have on each node?
This looks typical of single-interface nodes with WDS
MT1 and MT4 have 1 wlan
MT2 and MT3 have 2 wlan
MT2 and MT3 - do they use non-interfering channels?
mt1 = 5425 MHz
mt2 = 5825 MHz
mt3 = 5400 MHz
mt4 = 5745 MHz
This looks very odd - how can you connect #3/#4 on different frequencies to the others?
What mode have you used to connect the nodes?
MT3 has 2 wlan interfaces: one is in “station wds” mode and connects to MT2 (the ap bridge interface on MT2), and the other is in “ap bridge” mode. MT4 has one wireless interface in station wds mode, and it connects to MT3 (the ap bridge interface).
I think you need to supply more information here, in a clear manner.
mt1:
wlan1
wireless-mode
wds-mode
frequency
mt2:
wlan1
wireless-mode
wds-mode
frequency
wlan2
wireless-mode
wds-mode
frequency
etc etc
You can put two wireless interfaces on each node, and use one of them exclusively to interconnect the nodes (with WDS).
mt1 [ap bridge 5425MHz]
<----3km----> mt2 [station wds - wlan1] bridge [ap bridge 5825MHz - wlan2]
<----4km----> mt3 [station wds - wlan1] bridge [ap bridge 5400MHz - wlan2]
<----9km----> mt4 [station wds - wlan1] bridge [ethernet - eth1] + [wireless - wlan2] >>> users
These are the bandwidth tests:
from mt1 to mt2
from mt2 to mt3
from mt3 to mt4
and last from mt1 to mt4
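For reference, the “halved after every hop” pattern described above would look like this (the 16 Mbit/s first-hop figure is an assumed illustration, not a measured result):

```python
# Illustration of throughput halving per hop, as described in the thread.
# base_mbps is an assumed example value, not a measurement from these links.
base_mbps = 16.0

for hop in range(1, 4):
    throughput = base_mbps / 2 ** (hop - 1)
    print(f"hop {hop}: {throughput} Mbps")
```

With those assumed numbers the three hops come out as 16.0, 8.0, and 4.0 Mbps, matching the “double size smaller” complaint.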
Hi, there is a possibility that the cards are interfering.
I’ve seen it on PCs with TP-Link cards.
Are these lab tests, or is it outdoor? If it’s indoor, try lowering the tx power.
It’s outdoor, about 3-9 km between every unit.
What hardware/processor are you using?
It seems you’ve hit the maximum throughput of your processing capability.
The middle nodes are passing close to their total max throughput - 4 in / 4 out = 8 Mbps, less a bit of CPU overhead to handle the 2 WDS’s & tx/rx at the same time…
Use bigger motherboards.
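A quick sketch of the relay-load arithmetic above (the 4 Mbit/s stream rate is the example figure from the post, applied to a middle node with two radios):

```python
# A middle node receives the stream on one radio and retransmits it on the
# other, so its packet-processing load is roughly double the stream rate.
stream_mbps = 4.0  # example end-to-end stream rate from the post

relay_load_mbps = stream_mbps * 2  # 4 in + 4 out
print(relay_load_mbps)  # 8.0
```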
As the boss said. Agreed.
The more you work inward, the faster your boards should be.
Also, try to change the setup to bridge mode and not AP, then enable WDS.
Are your tests TCP? Try UDP. Remember that TCP includes ACKs, i.e. a transmit and a receive per packet on the two interfaces of the middle units.
By the way, what antennas are you using for backbone?
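The ACK point can be sketched roughly as follows (the 1000 segments/s figure is an assumed example, and one ACK per segment is the worst case - delayed ACKs would roughly halve it):

```python
# Each forwarded TCP data segment is matched by an ACK going the other way,
# and both cross BOTH radios of a middle node.
data_frames_per_s = 1000        # assumed data segments per second, one direction
acks_per_s = data_frames_per_s  # worst case: one ACK per segment

# Total frames the middle node must handle per second on its two interfaces:
frames_handled = (data_frames_per_s + acks_per_s) * 2
print(frames_handled)  # 4000
```

So even though ACKs are small, they double the frame count the middle unit has to schedule, which hurts on CPU-bound boards.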
You are accumulating packet loss over the links. That is why you see such a drop in bandwidth. You will have to make sure that the CCQ for every link is close to 100% if you want to move 10 Mbit over it. Also, you should be able to move at least around 15 Mbit TCP between them.
/Henrik
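Henrik’s point about accumulating loss compounds multiplicatively over the chain; here is a sketch with an assumed 95% per-link CCQ (an illustrative figure, not a measured value):

```python
# Delivery ratio compounds over a chain of links: three links at 95% each
# deliver only ~86% of frames end to end without retransmission.
per_link_ccq = 0.95  # assumed example; real CCQ is reported by RouterOS
hops = 3             # mt1->mt2, mt2->mt3, mt3->mt4

end_to_end = per_link_ccq ** hops
print(round(end_to_end, 3))  # 0.857
```

With retransmissions eating airtime on every lossy link, TCP throughput falls much faster than the raw delivery ratio suggests.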
True that - what are the signal strengths between each node, and what data rate do they connect at?
To get 15 Mbps, they’d have to be pretty good signals, and even then you’ll only get half that total throughput from start to end due to the processor limitations.
We get 15 Mbps or better over 6 hops using RB532 hardware with no WDS, just straight routing. A single hop will give about 23 Mbps using a 532 with a good signal…
I am not a big WDS fan.
George
That’s pretty impressive - what signal strengths/channel width/CCQ do these links have, and what kind of distances between them?
Have you compared your links’ performance against a WDS system?
The only difference between WDS and non-WDS is 6 bytes per frame, so this is not the bandwidth killer. I move around 40 Mbit one way with WDS on RB532 hardware over 10 km. If you use 2 radios on a “hop” and disable connection tracking and all other services you don’t need, you should be able to move up to 30 Mbit through it on 5 GHz. A good thing about WDS is the possibility of getting VLAN support and full L2 connectivity. (Good when OSPF crashes.)
/Henrik
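The 6-bytes-per-frame figure is easy to sanity-check: a WDS (4-address) 802.11 frame carries one extra MAC address compared with a 3-address frame, which against an assumed 1500-byte payload is well under 1% overhead:

```python
# Extra WDS header cost relative to an assumed 1500-byte frame payload.
extra_bytes = 6       # fourth MAC address in a 4-address WDS frame
payload_bytes = 1500  # assumed typical Ethernet-sized payload

overhead = extra_bytes / payload_bytes
print(f"{overhead:.2%}")  # 0.40%
```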
How do you manage 40 Mbps one way? Is that TCP?
I don’t think I got that with 2 RB532s connected by Ethernet…
The best I could do with 2 x 1 GHz boards (in the lab) was 30 Mbps TCP using Nstreme…