I am looking for a good example of a multipoint network where Nstreme with polling has been used successfully. We have several access points with very large viewsheds where clients may be ten miles away from each other, causing the classic “hidden node” problem. We are looking at Nstreme polling to alleviate this problem, yet in some of our tests we don’t get the results we expect to see.
How has this worked for you? Can other people post their successful configurations? I am looking for polling Nstreme access points with client counts of twenty-five or higher.
Hello Adam,
I’m a proud user of nstreme PTMP.
What version of ROS did you use for your tests?
I would suggest 3.30 with the wireless-test package, or 4.5 (remember to reset the wireless interface configuration after the upgrade to 4.5).
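For reference, the post-upgrade reset mentioned above can be done per interface from the CLI; a minimal sketch (the interface name wlan1 is an assumption):

```routeros
# after upgrading to 4.5, reset the wireless interface configuration
/interface wireless reset-configuration wlan1
```

Note this wipes the interface settings back to defaults, so export your configuration first.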
I have the limit set to 40 clients on each WLAN, and every user is fed correctly with low latency/jitter. Previously our bottleneck was the CPU of the RouterBOARD; now we deploy RB411AH+R5H and they work flawlessly.
We are using Nstreme PMP on all our APs, serving over 600 clients. Most APs have around 30 or so clients, and clients have packages ranging from 1.3 to 5 Mbps download. Depending on the AP we see around 10 Mbps of throughput per AP (10 MHz channel). Most units are on 3.30 with the wireless-test package, with a handful of 4.x units. Most APs are 411A or AH’s running an XR2 card, with a handful of 5 GHz and 900 MHz units in the mix, all running Ubiquiti cards. Clients are mostly R52H cards for 2.4 GHz and some R5H’s for 5 GHz.
We had one AP that at one point was serving 40 clients plus 2 “repeaters,” for over 75 clients total. It has since been sectorized and all units are under 40 clients at this time.
That is ideally what I am trying to do; I need to implement the polling setup to alleviate the hidden-node problems that we are seeing AND the streaming-video load from our Netflix addicts. We need to do something to balance the load between all of the users and make sure that no one gets shut out.
A bad link is still a bad link regardless, but we are looking to polling to level the playing field.
What settings did you find worked best for your 40 client sectors?
It seems Nstreme needs all participants to have a fast CPU, and the benefit seems not as big as expected. As I’ve never seen any statement from MT regarding a running example, I wonder whether MT even has an outdoor test setup for their developer(s). Without one, I don’t believe they can implement it efficiently.
We’ve decided to implement RTS/CTS instead, because:
we still have a lot of RB133c’s out there
it’s interoperable with other equipment and 802.11n
you have to implement Nstreme on the whole segment in one sweep, as MT has no Nstreme autodetection
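For anyone wanting to try the RTS/CTS route described above, a minimal RouterOS sketch (the interface name and threshold value are assumptions; tune the threshold per site):

```routeros
# enable RTS/CTS frame protection on the interface; frames larger
# than the threshold are preceded by an RTS/CTS exchange
/interface wireless set wlan1 hw-protection-mode=rts-cts hw-protection-threshold=256
```

Unlike Nstreme, this stays interoperable with standard 802.11 clients, which is the point made above.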
Yes, MikroTik does have outdoor Nstreme setups. Even our internet uplink for the office runs over an Nstreme wireless link, so if there were issues, everyone here would notice.
I use Nstreme for PtP and small-scale PtMP (up to 3 clients) with good signal conditions. In this setup Nstreme is very good. I’ve not seen a success story at bigger scale (up to 40 clients) with some badly signaled clients in between.
If you can provide me a link …
Our clients mostly have RB411s; there are some 133c’s out there still, but we are replacing them since they don’t work well with crowded radios. They work fine until the user starts downloading from eMule/torrent; the 133c won’t handle both NAT and polling with small packets. The ROS versions vary from 3.2 to 3.30. Only a few of them have the wireless-test package enabled.
This is my Nstreme configuration. The framer-limit is quite useless here, since I noticed the frame is always 1532 bytes (PPPoE encapsulation). With exact-size you might be able to get more throughput, but the latency might be higher.
With wireless-test, Nstreme isolates the clients with bad signals (our clients disconnect at -84), which protects the other “good” clients.
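A signal cutoff like the -84 behaviour described above can also be enforced explicitly with an access-list rule; a sketch under assumed values (interface name and upper bound are illustrative):

```routeros
# only accept clients whose signal stays within -84..-20 dBm;
# a client falling below -84 is dropped from the AP
/interface wireless access-list add interface=wlan1 signal-range=-84..-20 authentication=yes
# make the access list authoritative: clients matching no entry are refused
/interface wireless set wlan1 default-authentication=no
```

This keeps one marginal CPE from dragging down airtime for everyone else on the sector.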
We don’t offer unlimited plans, and we shape peer-to-peer traffic to keep the quality of the links fair enough.
This of course ties us to MikroTik, and we cannot buy, for example, any Ubiquiti clients. But so far we are fine with MikroTik, even if the new releases are sometimes messed up.
In most cases we use the default packet size of 3200 with best-fit framing and CSMA disabled (box checked). Hardware retries are set to 6 and Adaptive Noise Immunity is turned on. We find we have to do some fine-tuning on some sites due to local conditions: if the AP has issues (many client disconnects) we will increase the hardware retries to 8 and drop the max packet size to 1500, which helps prevent corruption of larger packets, though you also lose a bit of throughput. CSMA in early 3.x seemed to make things worse, particularly if the RF was not great; in later 3.x and 4.x it seems to help.

We also fine-tune the customer’s end by adjusting their modulation rates to a level that provides a good CCQ and reduces or eliminates constant changing of data rates. This gives a better connection and helps reduce latency and jitter. In most cases we use 6 Mbps (basic) plus 24 and 36 Mbps, adding 18 or 48 as dictated by the link. You can also get pretty good results by selecting modulations at the AP too; what works best will depend on your installation. We also make an effort to keep all clients above a -70 signal strength, and don’t hesitate to correct problem clients.
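Translated into RouterOS CLI terms, the AP settings described above would look roughly like this (a sketch, not the poster’s exact export; the interface name wlan1 is an assumption, and values should be tuned per site as noted):

```routeros
# Nstreme polling with best-fit framing at the 3200-byte default,
# CSMA disabled (equivalent to ticking the disable-CSMA box)
/interface wireless nstreme set wlan1 enable-nstreme=yes enable-polling=yes \
    framer-policy=best-fit framer-limit=3200 disable-csma=yes

# 6 hardware retries, noise immunity on, and a restricted rate set:
# 6 Mbps basic, 24/36 Mbps supported
/interface wireless set wlan1 hw-retries=6 \
    adaptive-noise-immunity=ap-and-client-mode \
    basic-rates-a/g=6Mbps supported-rates-a/g=6Mbps,24Mbps,36Mbps
```

On a troubled site, the tuning described above would mean bumping hw-retries to 8 and framer-limit down to 1500 in the same commands.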
We have not directly compared our setup to using RTS/CTS, but a polling system should outperform it. While RTS/CTS helps a lot compared to a system that does not use it, it is still collision-based and would work best with smaller, more compact (area-wise) configurations.
One thing we would like to see, and it is at the top of the MT wish list (wiki), is GPS syncing. While it is not a fix-all, on the surface it seems it would help a lot at densely packed sites and keep costs down, as fewer filters would be needed, in wide-open areas where an omni can be seen from another tower more than 50 km away… HINT… HINT… NORMIS!
We have difficulties bringing all of our clients to good signal levels. As we respect ETSI regulations, this is difficult to do in all cases. There are a lot of customers who accept low bandwidth in regions where they would otherwise get at most 64 kbps.
So we have to handle it.
At the moment we’re on the way to 3.30 wireless-test and RTS. We use only MT as clients, but a lot of 133c’s. We don’t want to replace the 133c’s at the moment, as 11n may soon be a game changer, and then we’d need 411AH’s with other antennas.
This depends on the implementation of polling, the traffic pattern and signal strengths. If you have to poll 30 weak clients while only 5 with good signal want to send… I’ve not seen the polling algorithm, so I don’t know if it fits my situation.
If MT implemented Nstreme autodetection for clients (they don’t, as I’ve already asked), I would give it a try, as I’d only have to change it at the AP.
This is some wonderful feedback and a great discussion; just what I was looking for.
We limit our client signals to -75; if they can’t make this threshold then we don’t install them. We find that this levels the playing field dramatically.
We don’t see BitTorrent as much as we have problems with users running constant movie downloads; anything that keeps the AP busy for a very long time is bad for business. We’re looking to polling, with some tuning, to help alleviate the case where people get locked out or complain of slowness with high latency.
I think you’ve made a good decision. We test and use the wireless-test package on one site with 10–15 clients per sector. But: 133c RBs are too slow for polling and Nstreme. Clients must have good signals; if they don’t, the CPEs keep disconnecting. You have to use an RB433AH or better hardware on the AP. Latency is not so good for me: it ranges from 5–50 ms.
Positives are that you can get good throughput on a sector (24–28 Mb/s, TCP), and it solves the hidden node problem, e.g. five clients downloading 5 Mb/s at the same time.
I want to test RTS/CTS on some APs where we have old hardware and non-MikroTik CPEs. Maybe I will post some results in the future.