antispam
Frequent Visitor
Topic Author
Posts: 63
Joined: Mon Apr 11, 2005 5:57 pm

Nstreme or Nv2

Wed Feb 15, 2023 8:19 pm

Will this reduce the ping?
Last edited by antispam on Wed Feb 15, 2023 11:25 pm, edited 1 time in total.
 
bpwl
Forum Guru
Posts: 2978
Joined: Mon Apr 08, 2019 1:16 am

Re: Nstreme or Nv2

Wed Feb 15, 2023 8:55 pm

Depends on the other WiFi traffic in the air.
802.11 is like a roundabout: everyone can enter when there is a gap. There is no waiting if there is no traffic, but it is difficult to get in when a lot of traffic is coming from the other entries.
NV2 is like traffic lights: you have to wait at a red light even if there is no other traffic, but even with very dense traffic each lane gets a green light in turn.

(NV2 is a sort of smart traffic light. The timeslot length depends on how much you have to send, and only one extra empty slot is added as overhead for potential new connections. There is no contention-window delay between data packets, so those transmissions can be small, something that is a performance killer for plain 802.11. NV2 also handles priority, a bit like WMM in 802.11.)

Which one reduces the time to pass? It depends. I like the roundabout in most cases; however, with dense traffic or a traffic jam you can get stuck, so it feels like 'forever'.
These used to be the 'Ethernet versus Token Ring' discussions.
No experience with Nstreme.
 
holvoetn
Forum Guru
Posts: 5317
Joined: Tue Apr 13, 2021 2:14 am
Location: Belgium

Re: Nstreme or Nv2

Wed Feb 15, 2023 9:42 pm

NV2 = Nstreme Version 2
So one could assume it's better than Nstreme.

See here, old MUM info but explains the differences.
https://mum.mikrotik.com/presentations/ ... savage.pdf
 
r00t
Long time Member
Posts: 672
Joined: Tue Nov 28, 2017 2:14 am

Re: Nstreme or Nv2

Thu Feb 16, 2023 10:24 pm

NV2 is not always better than NSTREME. The main problem with NV2 is a latency increase at all times, and setting the TDMA period to the minimum (2 ms) reduces throughput too much (while 2 ms is STILL not that low...).
For P2MP that might be a trade-off you can accept under some circumstances, but I find adding 5 ms of latency just unacceptable when the ping to the first internet router is around 10 ms. On all P2P links that use 802.11n I run NSTREME with all options enabled + max frame size. This gives the best throughput, while latency is <0.2 ms on shorter hops when the link is not/lightly loaded. Just one note: NSTREME needs a clean channel. If your CCQ is lower than, say, 80%, don't bother, it will be too unstable. In that case use NV2, as it's more resilient, and accept the downsides...

As always, try all the options and use what's best in YOUR case...
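For reference, the "all options enabled + max frame size" NSTREME setup described above can be sketched in RouterOS v6 roughly like this (the interface name wlan1 is an assumption; verify each parameter against your own hardware and version before applying):

```routeros
# Assumed interface name: wlan1 - adjust to your setup.
/interface wireless set wlan1 wireless-protocol=nstreme
/interface wireless nstreme set wlan1 \
    enable-nstreme=yes \
    enable-polling=yes \
    disable-csma=yes \
    framer-policy=exact-size \
    framer-limit=4000
```

framer-limit=4000 is the maximum frame size; framer-policy=exact-size aggregates small packets up to that limit, which is what makes NSTREME efficient on a clean P2P channel.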
 
Trs0w
just joined
Posts: 2
Joined: Fri Apr 21, 2023 10:44 pm

Re: Nstreme or Nv2

Thu Apr 27, 2023 1:00 am

Hi, I'm having a problem with NSTREME. When I tried a MAC ping it was 1-2 ms, which was fine, but my ping to the gateway is unstable, between 2 ms and 40 ms. Do you think changing to 802.11 or NV2 will help?
BTW, I have a PtP link with two LHG5s at a distance of ~2 km, TX/RX signal 51/50 and TX/RX CCQ 88/95%.
One more thing: when I had PtMP on 802.11 my ping was more stable, even though I had lower signal and CCQ!
 
r00t
Long time Member
Posts: 672
Joined: Tue Nov 28, 2017 2:14 am

Re: Nstreme or Nv2

Thu Apr 27, 2023 2:24 am

Well, the best approach is to just test all of these protocols and use what works best in your specific case. Try pinging just both ends of the link first, in case it's some gateway problem.
For a nice visualization of link latency I like to run PingPlotter with an interval of 0.001 s (i.e. 1000 pings/sec). Any interference is clearly visible (even if it doesn't drop packets, it may cause retransmits that increase latency), as is the latency increase as the link gets more loaded. In general, NSTREME tends to drop packets now and then if there is a bit of interference, and then fails completely when there is too much. Always manually configure the HT MCS levels, and for testing you can try reducing them to make the link more resilient against interference.
As for CCQ and signal levels, make sure you always measure them when the link is fully loaded in the direction you want to measure.
CCQ on an idle link is meaningless, and the signal level on an empty link is usually higher because lower-order modulations are used. When the link is loaded and more complex modulations get used, power will drop a bit too (more complex modulations require more linear amplification = reduced total power).
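A minimal sketch of manually pinning the HT MCS levels in RouterOS v6, as suggested above (the interface name wlan1 and the chosen MCS range are assumptions; lowering the top MCS trades peak speed for robustness):

```routeros
# Assumed interface: wlan1. Limit the rate set to MCS 0-5
# (dropping MCS 6/7) to make the link more robust against interference.
/interface wireless set wlan1 \
    rate-set=configured \
    ht-basic-mcs=mcs-0 \
    ht-supported-mcs=mcs-0,mcs-1,mcs-2,mcs-3,mcs-4,mcs-5
```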
 
andyhenckel
newbie
Posts: 27
Joined: Sun Apr 08, 2018 12:56 am
Location: Missoula, MT

Re: Nstreme or Nv2

Tue May 23, 2023 10:54 pm

I'm posting to try and glean the optimal Nv2 performance. I've run a network for 7 years with NV2, and made various attempts to tune performance.

In recent months, I've been making various network upgrades, and one enterprise client who is monitoring our circuit very closely is convinced there is still an unacceptable amount of packet loss.

So, to hopefully benefit all, I'd like to post my results, and see if anyone else sees similar improvements.

Here is the basis for the testing done this morning. I seemed to be getting some packet loss: throughput was OK, but there was a certain amount of packet loss to the client, with very little throughput going to the clients. When I started, I had the queue type set to multi-queue-ethernet-default on both WLAN and ETH on the AP. There seem to be very few controls for NV2; the queues, the TDMA period, and the cell distance seem to be all we have for tuning.

Here are the settings I'm using that seem to be working well this morning:

AP settings: all else default (QRT5AC)
AP is AC only, 80 MHz, superchannel
Multicast helper full
Multicast buffering checked.

Advanced - HW retries 2
Max station count 15 (currently set, where most other APs are at 30); not sure whether this has much of an impact.
Data rates - Default
Guard interval any

Nv2
TDMA period 2ms
cell radius 20 km
security yes
mode fixed @ 60% downlink
Queue 2
default

Status 6 clients

Have the ethernet interface @ multi-queue-ethernet-default.
Have the wireless interface @ wireless-default, then modified the SFQ to perturb 9 with 4000 bytes (tried 2000000 and 11, but that was lossy; figured 4000 would match the NV2 frame size).
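As a sketch, the AP-side settings listed above translate into RouterOS v6 commands roughly like the following (the interface name wlan1 is an assumption, and nv2-mode/nv2-downlink-ratio require a reasonably recent 6.4x release; double-check each parameter on your version):

```routeros
# Assumed interface: wlan1. Values mirror the settings list above.
/interface wireless set wlan1 \
    wireless-protocol=nv2 \
    tdma-period-size=2 \
    nv2-cell-radius=20 \
    nv2-security=enabled \
    nv2-mode=fixed-downlink \
    nv2-downlink-ratio=60 \
    nv2-queue-count=2 \
    nv2-qos=default \
    hw-retries=2 \
    max-station-count=15 \
    guard-interval=any \
    multicast-helper=full \
    multicast-buffering=enabled

# SFQ tweak on the default wireless queue type: perturb 9, allot 4000 bytes
/queue type set [find name=wireless-default] sfq-perturb=9 sfq-allot=4000
```

Note that nv2-security=enabled also expects an nv2-preshared-key to be set on both ends.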

Ran a send test at 25 Mbit to one client (DynaDish AC) from another core device in the network.
Ran a simultaneous send test at 20 Mbit to another client (QRT AC) on the same AP from another core device in the network.
Then, on the AP, ran a ping test with default packet size and 35 ms to the client who was experiencing packet loss.
No longer seeing any drops after modifying the SFQ to 9/4000.
Ran the test up to 85 Mbit and 20 Mbit respectively from the two btest TCP generators (CCR1009s somewhere on my core PtP network). Still no loss.

AP @ 6.48.3, as is the client.
Ran the test up to 40 Mbit TCP from one CCR
and 80 Mbit TCP from the other CCR.
120 Mbit aggregate to the test clients' radios (result: no loss or jitter from AP to client over 35 ms!).

So I ran another test, this time reducing the TX to 65 and increasing the RX for both tests to that client, 33.252 (the client complaining about packet loss).
Still got 120x5 and no packet loss.

Surprisingly, this config seems to work. I've been thinking for years that the trick is to match up the TCP needs with the NV2 needs.
The two clients are at 3 km and 2 km with -50 signal.

I've not changed the queue type at the far side from multi-queue-ethernet-default for either interface on the client. I'm really surprised at the results; it seems the more traffic on the interface, the less loss there is. I ran the test for about 7000 packets and didn't drop a single one to the client receiving 80 Mbit while the other client received 40 Mbit. It was also 9 AM, and I didn't do anything to hamper other clients' traffic on the cell. I'm really happy with the test results on this AP.

So I'm going to try this config where I have 20-25 clients on a crowded interface. Those clients are more at 15-18 km.

Here's the report: 24 clients, ranging from -50 to -65 signal; 80 MHz, 2 retries, same settings as above, except cell range set to 40 km. There are some 37 km clients; the AP is a NetMetal.
I had to increase the ping to 50 ms to get the same low loss. There are 4 clients with signal lower than -65, fluctuating between -66 and -71, 2 of which are really poor. Normally we try to get all clients into the -50 range, but the cutoff for the installer is -65.
I did make sure that clients are on later versions of 6.4x, and there were a couple we put on 7 for testing.
Overall, I was able to get results similar to the smaller cell with fewer customers. And I do feel there was an improvement from the queue type change and the lower TDMA window @ 2 ms; I'd had it set @ 3 ms for heavy subscriber cells. I've let our guys know to go re-align the few clients with poor signal, but the other subscribers seemed to be performing well after making the changes detailed.


I'd love to hear if others get the same results, where a loaded cell has very little loss with this config, or if you have improvements to suggest.
Cheers,

Andy
 
fcerezochalten
just joined
Posts: 1
Joined: Wed Feb 08, 2023 11:29 pm

Re: Nstreme or Nv2

Mon Jan 08, 2024 8:29 pm

(quoting andyhenckel's post above)
Hi Andy, your post was very helpful in my journey to improve my NV2 cells! How do you do L2/L3/AAA to the clients? Do you use PPPoE, DHCP, VLAN per client, RADIUS? I'm asking because I'm trying to experiment with wireless QoS, but our network is mainly PPPoE and the encapsulation prevents it from working, so I'm looking into alternatives.
