> Improved wireless driver, faster performance, "wireless fast path" mode. You can test the Fast Path (FP) like this: /queue interface> set wlan1 queue=only-hardware-queue. But even with FP off, wireless speed will be improved compared to the regular package.

Great. I'll give it a try on a test link.
> Tried it now, but the tx-power was too high (max card rate) for the band. band=5GHz A/N, country=Germany.

Please tell us exactly which channel entry you tried to use, and did you use it for the frequency or for the scan-list?

The channel list and power limit should be working now with the new wireless package.
> I took this wireless package for a short test, and for me it works worse than 5.14 with unspecified/nstreme.
> Link: rb711GA --- 10 km, perfect LOS --- rb411-DBii-FN50pro, MCS12.
> With nstreme on 5.14 I get speeds up to 120 Mbps with a stable ping, like good old nstreme; with 6.12 and the new wireless package it begins to choke around 105 Mbps.
> Tried setting only-hardware-queue; it doesn't make any difference.
> By the way, are wider (25+25 MHz) channels no longer supported on the AR9220 chipset in 6.12?

Did you test nv2?
> Hello Uldis. Will the new wireless package be integrated into the standard wireless package in the next versions (6.14)? Or will it still be separate?

Not yet. This driver has big changes, and we need to test it for a while longer. It will be an optional download for a while.
> Custom channel width with the new wireless package is also unsupported on the RB911uag2phnd. Will the custom channel width option no longer be supported?

This feature isn't made for those boards yet.
> Yes, of course, I have the superchannel feature enabled.

We will try to test this problem.
> Confirmed with OmniTIKs. I have the superchannel license, and I can't enable the custom 30 MHz channel width with wireless-fp enabled. With wireless-fp disabled and wireless enabled, it works perfectly.

Please contact support@mikrotik.com to get the RouterOS test build which has a fix for that, starting with test version v6.13rc23.
> Today I had a strange problem: switching from nv2 to nstreme made an RB411AH hardly accessible. The device cycled its Ethernet link and was reachable for 3-20 seconds over Ethernet, then unreachable for some time. I barely managed to log in and disable wlan1 to make it accessible again and switch away from nstreme. A reboot did not help. I guess nstreme killed the CPU, and this somehow kills Ethernet connectivity? MikroTik routers were attached on both Ethernet sides of the link.

We had some problems with the v6.12 wireless-fp package running the Nstreme wireless protocol. That problem is fixed in the latest test release of v6.13rc.

How did you test the TCP and UDP speeds?
[admin@rb2011-despa] > tool bandwidth-test protocol=tcp direction=both tcp-connection-count=4
address: 10.0.0.3
status: running
duration: 50s
tx-current: 46.3Mbps
tx-10-second-average: 46.1Mbps
tx-total-average: 44.0Mbps
rx-current: 46.6Mbps
rx-10-second-average: 47.8Mbps
rx-total-average: 45.6Mbps
random-data: no
direction: both
[admin@OmniTik-casa] /system> /tool profile
NAME CPU USAGE
wireless all 31%
ethernet all 1%
console all 0.5%
firewall all 0%
networking all 19.5%
mpls all 0%
btest all 46.5%
management all 0.5%
routing all 0%
profiling all 0%
bridging all 0%
unclassified all 1%
-- [Q quit|D dump|C-z continue]
[admin@rb2011-despa] > tool bandwidth-test protocol=udp direction=both tcp-connection-count=4
address: 10.0.0.3
status: running
duration: 48s
tx-current: 62.5Mbps
tx-10-second-average: 64.4Mbps
tx-total-average: 64.0Mbps
rx-current: 97.3Mbps
rx-10-second-average: 92.1Mbps
rx-total-average: 71.6Mbps
lost-packets: 828
random-data: no
direction: both
tx-size: 1500
rx-size: 1500
[admin@OmniTik-casa] /system> /tool profile
NAME CPU USAGE
wireless all 32.5%
www all 0%
ethernet all 1.5%
console all 1%
flash all 0%
ssh all 0.5%
firewall all 0.5%
networking all 8.5%
mpls all 0.5%
btest all 8.5%
management all 0%
routing all 0%
idle all 44.5%
profiling all 0%
unclassified all 2%
[admin@rb2011-despa] > tool bandwidth-test protocol=tcp direction=both tcp-connection-count=4
address: 10.0.0.3
status: running
duration: 2m3s
tx-current: 46.0Mbps
tx-10-second-average: 46.0Mbps
tx-total-average: 44.6Mbps
rx-current: 47.1Mbps
rx-10-second-average: 48.4Mbps
rx-total-average: 46.4Mbps
random-data: no
direction: both
[admin@OmniTik-despa] /tool> profile
NAME CPU USAGE
wireless all 27.5%
ethernet all 9%
console all 0%
dns all 0%
networking all 1%
mpls all 1.5%
management all 1.5%
routing all 0%
idle all 55.5%
profiling all 0.5%
bridging all 0%
unclassified all 3.5%
[admin@OmniTik-casa] /system> /tool profile
NAME CPU USAGE
wireless all 23.5%
ethernet all 1.5%
console all 0.5%
flash all 0.5%
ssh all 0%
firewall all 0%
networking all 14.5%
mpls all 0%
btest all 51%
management all 0.5%
routing all 6%
profiling all 0%
bridging all 0.5%
unclassified all 1.5%
[admin@rb2011-despa] > tool bandwidth-test protocol=tcp direction=both tcp-connection-count=16
address: 10.0.0.2
status: running
duration: 46s
tx-current: 57.1Mbps
tx-10-second-average: 57.5Mbps
tx-total-average: 57.2Mbps
rx-current: 57.2Mbps
rx-10-second-average: 57.5Mbps
rx-total-average: 57.2Mbps
random-data: no
direction: both
[admin@OmniTik-despa] /tool> profile
NAME CPU USAGE
wireless all 5%
ethernet all 17.5%
console all 1.5%
ssh all 0%
networking all 17.5%
mpls all 1.5%
btest all 53.5%
management all 0.5%
routing all 0%
profiling all 0%
bridging all 0%
unclassified all 3%
> You see, the degradation is not as bad with only the second link. So my conclusion is that there are hiccups/queuing problems which add up to a bad TCP result, while UDP is not disturbed.

I would suggest a test using a pair of *nix hosts (FreeBSD or Linux is perfect for that) and checking a full TCP capture using tcptrace. You can find out whether the degradation is caused by packet loss, reordering, variable delays, or something else.
> For testing TCP throughput with the MikroTik btest, first run btest to 127.0.0.1 to see how much the board itself can handle.
> As for my tests:
> RB433 (300 MHz CPU): TCP ~40 Mbps half duplex, 30/30 Mbps full duplex.
> OmniTIK (400 MHz CPU): TCP ~50 Mbps half duplex, 40/40 Mbps full duplex.
> So testing TCP with these boards is pointless, as they can't do more than the figures above. For btest an x86 box is most efficient: an Intel Atom at 1.6 GHz hits 500 Mbps TCP half duplex, 375/375 Mbps full duplex. Another point is that on RouterOS 5.x these results were half as good, so a TCP btest on an OmniTIK with 5.x gives even worse results. And it has nothing to do with wireless capacity.

Using 127.0.0.1 you hit the CPU twice as hard, since it is both sender and receiver.
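The loopback self-test idea above (running btest against 127.0.0.1) can be sketched outside RouterOS as well. This is a minimal, hypothetical Python probe, not the btest implementation; it just demonstrates the same point: on 127.0.0.1 one machine is both sender and receiver, so the number you get is a CPU ceiling, not a link measurement.

```python
# Rough localhost TCP throughput probe, mimicking "btest to 127.0.0.1".
# The same host acts as sender and receiver, so the result is an upper
# bound set by the CPU, not by any network link.
import socket
import threading
import time

CHUNK = 64 * 1024          # 64 KiB per send
DURATION = 1.0             # seconds to run the test

def sink(server_sock, counter):
    # Accept one connection and count every byte received.
    conn, _ = server_sock.accept()
    with conn:
        while True:
            data = conn.recv(CHUNK)
            if not data:
                break
            counter[0] += len(data)

server = socket.socket()
server.bind(("127.0.0.1", 0))   # any free port
server.listen(1)
received = [0]
t = threading.Thread(target=sink, args=(server, received))
t.start()

client = socket.socket()
client.connect(server.getsockname())
payload = b"\0" * CHUNK
deadline = time.monotonic() + DURATION
while time.monotonic() < deadline:
    client.sendall(payload)
client.close()                  # recv() returns b"" in the sink, ending it
t.join()
server.close()

mbps = received[0] * 8 / DURATION / 1e6
print(f"loopback TCP throughput: {mbps:.0f} Mbps")
```

The absolute figure is meaningless as a network measurement; the point is that the same saturation happens inside a RouterBoard when it runs both ends of a bandwidth test.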
> Using 127.0.0.1 you hit the CPU twice as hard, since it is both sender and receiver.

Measuring usable bandwidth, especially depending on where you do it, is much trickier than it seems. My tests are reproducible with a speedtest from a PC to speedtest.net. There is a problem other than the speedtest itself.
NAME CPU USAGE
wireless all 37%
www all 0%
ethernet all 13.5%
queue-mgmt all 0%
dns all 0%
networking all 12%
mpls all 7.5%
management all 0%
routing all 0.5%
idle all 21.5%
profiling all 0.5%
queuing all 0%
bridging all 3%
unclassified all 4.5%
> Measuring usable bandwidth, especially depending on where you do it, is much trickier than it seems. My tests are reproducible with a speedtest from a PC to speedtest.net. There is a problem other than the speedtest itself.

Using 48 TCP streams is a test which does not match real user experience.
I have just made a controlled test. This is it:
FreeBSD - Internet - Cablemodem - OmniTIK ~~~ OmniTIK - FreeBSD
Both FreeBSD systems are running iperf3.
iperf3 parameters: 48 connections, TCP, 1 MB window size.
The cable modem and the cable provider are the unknowns here, but anyway, I have no problem reaching 80-100 Mbps of TCP, even though the cable modem is crap and I don't think it is very good at keeping NAT going at 100 Mbps.
Unless you are doing specific TCP processing at the routers, or there is packet loss, there is absolutely no reason to get different results for TCP and UDP. If you are not doing any special TCP processing, both are just IP packets.
The reason for those differences is the higher cost of the TCP bandwidth test running on the MikroTik hardware. And note that depending on your round-trip time (this link has an end-to-end delay of around 30 ms) you need really large window sizes; I am using 1 MB.

> Of course you're right if you want to test the capacity of the link (chain). With a lot of users you'll see this capacity, but a single user will never see this bandwidth, even if the link (chain) is unused.

In my case, with a round-trip time of more than 50 ms for large packets, a user needs a huge window and a large transfer size to reach 100 Mbps in a single TCP flow.
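The window-size arithmetic behind those statements is just the bandwidth-delay product: a TCP flow cannot carry more than one window per round trip. A small sketch using the 30 ms and 50 ms round-trip times mentioned above (figures rounded):

```python
# Bandwidth-delay product: the TCP window needed to keep a link full.
# window >= bandwidth * RTT; conversely, a fixed window caps throughput
# at window / RTT.
def window_for(bandwidth_mbps: float, rtt_ms: float) -> float:
    """Minimum window in KiB to sustain the given rate at the given RTT."""
    return bandwidth_mbps * 1e6 / 8 * (rtt_ms / 1e3) / 1024

def max_rate(window_kib: float, rtt_ms: float) -> float:
    """Throughput ceiling in Mbps for a fixed window."""
    return window_kib * 1024 * 8 / (rtt_ms / 1e3) / 1e6

print(f"{window_for(100, 30):.0f} KiB needed for 100 Mbps at 30 ms RTT")   # 366 KiB
print(f"{window_for(100, 50):.0f} KiB needed for 100 Mbps at 50 ms RTT")   # 610 KiB
print(f"1 MB window at 30 ms allows up to {max_rate(1024, 30):.0f} Mbps")  # 280 Mbps
```

This is why the 1 MB iperf3 window is comfortable at 30 ms, while a default-sized window on a 50 ms path cannot reach 100 Mbps in a single flow.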
> I would like to test it also point-to-multipoint, on one or two APs with some users. What happens if the APs run the wireless-fp package and the users don't?

I just tried this on 6.15 with a test setup and only one user; it works fine. The CPE is still running the wireless package, and the AP wireless-fp.
> One other 'gotcha!' I found: if you made yourself a script to set units to a default configuration, the wireless syntax has indeed changed. So the wireless part of the script is full of errors and therefore not accepted; the wireless interface does not even get enabled. So if your routine was to load the script (in the old format, but with the new wireless package now installed) and you assume the antenna is ready for deployment in the field... you're wrong! You do indeed have to make a new script, as 'dohmniq' already sort of pointed out...

Allow me to second this lament. I have a diverse collection of RouterBoards, some of which have the original wireless package enabled and some of which have wireless-fp. This argument change is a disaster for automation, unless you make sure the same package is enabled on all RouterBoards on your network, now and in the future.
> UDP & TCP difference: on a NetMetal (v6.29.1), the bandwidth test using UDP can get 450 Mbps full duplex / 210 Mbps half duplex, while TCP gets 220 Mbps full duplex / 100 Mbps half duplex. Can anyone tell me why the difference between UDP and TCP bandwidth is so big?

In short: UDP is like sending a letter to a recipient; you never know whether it really arrived in proper state. TCP is like guaranteed delivery: if delivery fails, the post office sends the package again and again until it is considered useless to keep trying. It has to send some extra data along with the original package so the sender knows the receiver got it, and got it in proper condition; if not, the receiver asks the sender to send again.
> How do you test the bandwidth? Beware that the built-in BW test is very CPU intensive. If you tested this with the BW test feature of your NetMetals, you very likely maxed out the CPU. For real-life results, better use two computers (one at each peer) and use iperf; you should see much better results.
> -Chris

Indeed, I agree, that's another limiting factor. Always measure 'over' the two units making the link, not 'from' or 'to' them...
This is a normal working nv2 link. Doing the same over a licensed wireless link, there is nearly no difference between TCP and UDP speed.
So if your link is not 100% clean, you will always have some packets resent, and down goes your maximum throughput. A rule of thumb on good links is that you get 50-60% TCP throughput compared to the connection rate. That's a well-working link!
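One way to put numbers on that loss-versus-throughput relationship is the classic Mathis et al. approximation for steady-state TCP. This is a textbook formula applied with illustrative values (1460-byte MSS, 30 ms RTT), not measurements from this thread:

```python
# Mathis et al. approximation of steady-state TCP throughput under
# random loss: rate <= (MSS / RTT) * (C / sqrt(p)), with C ~ 1.22.
# It shows why even small loss on a radio link caps TCP far below
# what a UDP flood reports.
import math

def mathis_mbps(mss_bytes: int, rtt_ms: float, loss: float, c: float = 1.22) -> float:
    """Approximate TCP throughput ceiling in Mbps for a given loss rate."""
    rate_bps = (mss_bytes * 8 / (rtt_ms / 1e3)) * (c / math.sqrt(loss))
    return rate_bps / 1e6

for p in (0.0001, 0.001, 0.01):
    print(f"loss {p:.2%}: TCP ceiling ~ {mathis_mbps(1460, 30, p):.0f} Mbps")
```

A hundredfold increase in loss (0.01% to 1%) cuts the TCP ceiling by a factor of ten, while UDP throughput drops only by the loss fraction itself.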
> This is a normal working nv2 link. Doing the same over a licensed wireless link, there is nearly no difference between TCP and UDP speed.

Now we are talking about two different things. The guaranteed delivery causes ACK packets to flow in the opposite direction, making up less than 5% additional traffic in that direction. Modern OSes (RouterOS since the newer 6.x versions) use a much bigger sliding window, so there is less ACK traffic and no waiting for ACKs, which would otherwise slow down the connections. I am sure MikroTik could do some nv2 optimizations to bring TCP speed to within 5-10% of UDP, if they only invested the time.
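A quick sanity check on that "<5% reverse traffic" figure, assuming classic delayed ACKs and typical header sizes (both are assumptions for illustration, not values measured in this thread):

```python
# Rough ACK overhead estimate: with delayed ACKs (one ~40-byte
# header-only ACK per two 1500-byte data segments), the reverse-direction
# load from acknowledgements is small.
DATA_SEGMENT = 1500      # bytes per full-size data segment (assumed)
ACK_SIZE = 40            # bytes for an IPv4 + TCP header-only ACK (assumed)
SEGMENTS_PER_ACK = 2     # classic delayed-ACK behaviour

overhead = ACK_SIZE / (SEGMENTS_PER_ACK * DATA_SEGMENT)
print(f"reverse ACK traffic: {overhead:.1%} of forward data")  # ~1.3%
```

So the pure ACK stream is nearer 1-2% than 5%; on a TDMA protocol like nv2, though, each reverse-direction transmission also costs airtime slots, which is where the real penalty comes from.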