CubePro Performance

Am having a little trouble with performance on these units.

PTP/PTMP - CubePro60SA - CubePro60 - getting around 800Mbps… links reporting 2.3Gbps. It is Btest on the unit, and TCP, so overheads and the like… sure - happy enough.

Now Testing UDP or TCP

2 x PTP/PTMP - CubePro60SA - CubePro60 - PowerBox Pro - CubePro60SA - CubePro60. Getting 250Mbps. Both links direct AP to STA get 700Mbps, and the PowerBox Pro is in full hardware offload… Also, both ways to the PB Pro get 300 or less.

Also, Main Link - CubePro60SA - CubePro60, but instead testing to the core (RB850Gx2) - also 250Mbps. Connected via a Unifi Flex - so another variable, but I'm not convinced it's the switches.

Also, also, each Cube to the switch or core it is plugged into - 250Mbps.

Need to do a Bit of Labbing to confirm the PB Pro as configured can pass the traffic (have never known them not to).
Also need to lab the Cubes (but only have one spare atm).

Just putting out the question: has anyone ever noticed a slowdown like this?
F/W is 7.15.1/2 across the devices.

Any insights would be welcomed. Basic bridge config; HW offload is enabled where possible, and no vlan filtering/translation occurs - however, management and initial testing was on a vlan. I did just do a quick test with an IP on native - same results. Also at this time I did a UDP test for the wireless - getting 1.05-1.5+ Gbps - so they are fine.

Does anyone know of a CPU limitation that may be causing this issue between the wireless and the Ethernet? Is anyone else experiencing these results, or are you able to push closer to 1Gbps?

Sorry - I did forget to mention, the wireless interfaces are bonded (active-backup) at the station and bridged at the AP.
The ethernet on the Cubes is in h/w offload - and turning that off makes minimal difference.

Bridge is RSTP - for what it’s worth and no firewalls/filters/NAT on AP/STA.
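For anyone following along, the hardware-offload state can be checked and toggled per bridge port from the CLI - a minimal sketch, with `ether1` as a hypothetical port name:

```routeros
# Print bridge ports; the "H" flag marks ports currently hardware-offloaded
/interface bridge port print

# Temporarily disable hardware offload on one port to compare throughput
/interface bridge port set [find interface=ether1] hw=no

# Re-enable it afterwards
/interface bridge port set [find interface=ether1] hw=yes
```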

Again, any insight would be appreciated.

How do you measure performance - a bandwidth test inside the MT device, or using iperf? How about distance and weather conditions? I'm not pretending to be an expert in wireless, but those are the common things usually asked by the wireless experts here, so that someone can point you in the right direction.

Fair Question
In the instance listed - I did on-device Bandwidth Tests - sometimes device to adjacent device, other times through multiple devices to an endpoint (but still device to device). Distance is 10m/30m for the PTMP section and 60m for the PTP section - cabling up to 20m to switching in between. Weather cold but clear…
Unfortunately not "entirely relevant", as I only appear to have the slowdown on the switching - so the wireless links are not the problem.
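For reference, the on-device tests were along these lines - a sketch with a placeholder address; the far end needs the btest server enabled:

```routeros
# TCP test to the adjacent device (192.168.88.2 is a placeholder address)
/tool bandwidth-test address=192.168.88.2 protocol=tcp direction=both duration=30s user=admin password=""

# UDP variant for comparison
/tool bandwidth-test address=192.168.88.2 protocol=udp direction=both duration=30s user=admin password=""
```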

If the 60g PTP link tests had only got 300-ish Mbps like the rest, I'd have put it down to CPU issues and left it - but with the devices getting 700+ between themselves, it's just a little odd that ethernet would be so much slower.

Anywho - I have another box of them to configure for another site now - so I will actually do some iPerf and throughput tests from external devices and see if I have the same issues. If so, I will put in a Mikrotik support request, unless anyone has ideas on ways to increase capacity with these devices.

Cheers for the input though.

So, have had a chance to sit down and do some tests.

Initial Testing is not 1:1 - but it is fully Functional - Which is mildly infuriating.

Current Config is…
PC - Cube60Pro - Cube60ProSA - (ether5) Omnitik AC (ether1) - Edgeswitch 24 (our LAN switch) - CCR1036 (our Core Router).

Using “Mikrotik Bandwidth Test v0.1” on PC and testing through to the CCR.

Test reporting 850-900 Mbps TX and upwards of 950 RX to/from the CCR. Seeing this fairly consistently on the ethernets and w60gs.

All testing was TCP, but UDP was the same.

CPU usage on the Omnitik was ~5% above idle at most (between 3 and 11%, as opposed to an idle of 2-6%). The 60Gs both idle at ~0% and hit ~9%-14% under load.

These tests were done on 6.48.6 and 6.49.17 (with everything on 6.49.17 by the end of these initial tests).

Also tested from the Cube60Pro through to the CCR: getting 670-730+ TX and 850-900+ RX TCP, with 100% CPU on the Cube60Pro. Getting 950+ each way UDP (also getting 950+ BOTH ways UDP simultaneously - EDIT: at the interfaces; btest was reporting around 850/900 TX/RX UDP - I kind of exaggerated the test due to what I was seeing on the interfaces).

Just for a basic outline of this config
Station
wlan60-1 and wlan1 into bond1 (active-backup with wlan60-1 as Primary)
bond1 and ethernet into primary bridge (rstp)
vlan-MAN on top of primary bridge and placed into MAN-bridge (no stp - dhcp-client).
Testing conducted on MAN IPs
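For anyone wanting to replicate, the station outline above translates to roughly the following - a sketch only; the vlan-id is a placeholder, as the actual MAN vlan number isn't shown:

```routeros
# Bond the 60GHz and 5GHz interfaces, 60GHz as primary (active-backup failover)
/interface bonding add name=bond1 mode=active-backup slaves=wlan60-1,wlan1 primary=wlan60-1

# Primary bridge running RSTP, with the bond and ethernet as ports
/interface bridge add name=primary-bridge protocol-mode=rstp
/interface bridge port add bridge=primary-bridge interface=bond1
/interface bridge port add bridge=primary-bridge interface=ether1

# Management vlan on top of the primary bridge (vlan-id=99 is a placeholder)
/interface vlan add name=vlan-MAN interface=primary-bridge vlan-id=99

# Separate management bridge, no STP, addressed via dhcp-client
/interface bridge add name=MAN-bridge protocol-mode=none
/interface bridge port add bridge=MAN-bridge interface=vlan-MAN
/ip dhcp-client add interface=MAN-bridge disabled=no
```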

AP
w60g-station1, wds1 and ether1 added to primary bridge (rstp)
although both wireless interfaces are in the bridge, only one is active at the station due to its bond (no loop)
vlan-MAN on top of primary bridge and placed into MAN-bridge (no stp - dhcp-client).

Switch
All ethernet ports bridged in primary bridge
vlan-MAN on top of primary bridge and placed into MAN-bridge (no stp - dhcp-client).
no current “other bridge” ports.
Switch vlans enabled - but fallback (no PVID assignment)
hardware offload on all ports.
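"Switch vlans enabled but fallback" on a v6 switch-chip device of this class would look something like the below - a sketch only, with placeholder port names; vlan-mode values are chip-dependent:

```routeros
# Atheros/QCA switch-chip menu (RouterOS v6): "fallback" vlan-mode forwards
# frames normally when no vlan-table entry matches, so no PVID is assigned
/interface ethernet switch port set ether1,ether2,ether3,ether4,ether5 vlan-mode=fallback

# Verify the per-port vlan settings
/interface ethernet switch port print
```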

So… as I am getting the expected/desired results, I'll need to move to the next stage of testing.
There are 3 major differences here vs what I have onsite (beyond the CCR):

  1. The Omnitik - it is the same switch chip as the PB Pro though, so it "should" act the same.
  2. The Firmware - IIRC it is 7.13.5 onsite - will pull that up next.
  3. The Bond config - initially there was a bond at both sides, as that is what I believed to be required - that is how it was set up in the 2 Pack pre-configured bridges. This is not the case, as there are 2 factors at play…
    3a. RSTP is in place to help rectify loops should they occur.
    3b. The bond selects ONE interface under active-backup - so although both are connected, only 1 is active.
    This only needs to be at one end, and I initially hit an issue where, when a bond-to-bond was enabled, if there was an error on one side but the other side's port was still active for whatever reason, things wouldn't work right - or I'd get 5GHz only.
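The active-backup selection described in 3b can be confirmed from the station CLI - `monitor` shows which slave is currently forwarding (`bond1` here matches the station config above):

```routeros
# Shows the bond mode and the currently active slave
# (wlan60-1 should be listed while the 60GHz link is up)
/interface bonding monitor bond1 once
```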

Anywho - preliminary update. Hoping to get some PB Pros to use before testing completed - and will update once they have been tested - or if I notice anything sooner.

And… well… more Testing.

I will try to be concise here - but I did a lot more tests this time.

Moved Up to V7. Using 7.12.1. Only on the Cubes. Switch remained at 6.49.17.

Changed CPU frequency from 716 to Auto - as per the error that occurs on ARM devices on V7.
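For reference, that change was just the following (v7 on ARM boards warns when a fixed frequency is set):

```routeros
# Let the board scale its own CPU clock instead of a fixed 716MHz
/system routerboard settings set cpu-frequency=auto

# Watch the current frequency and load while testing
/system resource print
```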

Idle - CPU @ 448 Mhz - Usage <2%

Load - PC Throughput test - CPU Frequency between 672 and 896
CPU 25%-35% on Cube60Pro
CPU 15%-25% on Cube60ProSA… on TX. However, on RX it was 25%-35%.
Switch CPU was always below 5% (generally 2% or 4%)

Results
Load From PC Test TX - 850+
Load From PC Test RX - 950+
Evaluated from The Cube60Pro CPE
UDP both ways - getting 950+ each way (at the interfaces; like the previous test, reporting to btest as 850/900 TX/RX - I kind of exaggerated the test due to what was on the interfaces - have edited)
TCP - RX - 770-800
TCP - TX - 640-680

So - takeaways: significantly higher CPU usage, leading to a slight regression.
Almost imperceptible on the network - but up to a 10% drop on device tests (which lines up with the ~10% increase in CPU usage for throughput).

Is V7 the issue I have - No. But more tests to go.

V7.15.1 on Cubes (secondary link F/W) - Switch still @ 6.49.17
CPU Load - All basically the Same

Results
Still getting 850+ TX and getting 950+ RX - TCP from PC
Appear to have skipped CPE tests and moved to next stage.

V7.15.1 on Cubes - Switch up to 7.15.1 (secondary link F/W)
Accidentally upgraded directly with the ROS 7.15.1 package - forgetting this was an Omnitik. This meant there was no wireless package installed for the first round of tests (wireless not in use, but it is a load that may have an effect).

Idle - AP/CPE remain the same.
Throughout, the Switch CPU was <8% (3% to 7%).

Load - PC Throughput test.
AP / CPE CPU remain the Same.
Switch CPU <8% RX (3% to 7%). TX <9% (4% through 8%)
Load From PC Test TX - 840+
Load From PC Test RX - 940+

Ever so slightly slower.
Adding the wireless package back in had a negligible impact on performance - a little more time at the upper CPU range which, given it was disabled before, is expected.

As for the CPE Tests
RX 810+ / TX 700+
Strangely, it appears to be a little stronger - the range was 790-830 RX / 670-720 TX.


7.15.2 on All
Upgraded each device to 7.15.2 (the other F/W I have on site - the Primary Link) - to confirm no impact.

Idle - All the same

Load - PC Throughput test.
All The Same

Results
Load From PC Test TX - 850+, but bursting above 900. Likely just a length-of-test thing; it would pull back down after, but probably averaged ~870.
Load From PC Test RX - 940+

Ever so slightly slower.
Adding the wireless package back in had a negligible impact on performance - maybe a little more time at the upper CPU range which, given it was disabled before, is expected.

As for the CPE Tests
RX 800+ / TX 700+
A little down - but negligible.

After all this, I had a look and found an old RB960PGS - basically an indoor PowerBox Pro. It was worse for wear and needed firmware recovery - but I did so and she was away.
Recovered to 6.49.10 (what was in the netinstall folder at the time).

Testing from PC - TX 830+ - RX 940+.
CPE - RX 800+ / TX 700+ TCP; 820+ TX and 920+ RX UDP both ways

Switch CPU - 1%-2%

Upgraded to 7.15.2

Testing from PC - TX 830+ - RX 940+.
CPE - RX 800+ / TX 700+ TCP; 700+ TX and 920+ RX UDP both ways
Switch CPU - 5%-7%

Upgraded Everything to 7.15.3 (Latest as of this message)

Testing from PC - TX 830+ - RX 940+.
CPE - RX 810+ / TX 700+ TCP; 820+ TX and 920+ RX UDP both ways
Switch CPU - 5%-7%

SO… luckily, and confidently, I suggest that the issue is NOT a hardware limitation. This saves me having to find replacement devices… presumably.
It does, however, leave me with the problem of finding out whether it is hardware (failure) or config related.
My testing config is an update of the one used onsite, so I will need to push that out… however, it is a close approximation of what was there after adjustments made on site.
I'm going to pull a copy of all the configs to do some compares - then I'll push out the new configs and see where we get to.

Will update when I have the results and if there is anything to note.

Well, don’t I have egg on my face.

Have completed the testing and the updates to site (including a full redesign of my scripts to build switching and 60G links) - then looked at what I was dealing with and realized there was a variable I missed.

But before I get to that - for those interested: on a link with a Cube60Pro and RB960-class switches, you should be able to get roughly gigabit speeds with a basic config without too many dramas (gigabit being generally accepted for us as ~900Mbps TCP). If on ROSv7, there is a slight overall performance drop - but it is barely noticeable in the testing I did.

The drop is more noticeable Mikrotik to Mikrotik, but that's due to more CPU cycles being used that are lost to Btest.

If not external facing and on a really basic config, you may be happier with V6.49.x - but if security is a concern, or, like me, you like to keep a site uniform in firmware, then match to the highest; from my testing, 7.12.1 and 7.15.x all seem to do about the same.

Now to where I f*^%&d up… Ubiquiti. I have a setup where I have:
Core - Primary Link - Switch - Secondary Link - Endpoints. At the switch (PB Pro) I have 4 PoE ports - however, I have 5 devices that need power:
1 x Cube60Pro (incoming)
1 x Cambium XVT21 (newer AP)
1 x Unifi AC Mesh (Legacy AP)
1 x Cube60ProSA (outgoing)

1 x Nanostation 5AC (outgoing)

And that is where I duffed it. As the NS-5AC is gigabit on both ports with PoE passthrough, I thought "why don't I just plug the secondary Cube link out via that - it's gigabit".
Yeah, yeah - it's gigabit, but it doesn't have a switch chip (that I am aware of), so it is likely using the CPU to attend to traffic.

Although the switch has a low-power CPU, I did some tests out from it and am confident that my suspicions are confirmed.
Switch to Primary Link @ Core (direct to Cube @ Core) - 170+ TX 380+ RX UDP - Switch @ 100% CPU
Switch to Secondary Link at Endpoint (via NS-5AC - to Cube @ Endpoint) - 20+ TX, 190+ RX UDP Switch @ ~60% CPU
TCP tests were too much for the poor wee CPU - barely over 250Mbps on the good link.

Also had a whole bunch more testing to report on as I was re-building my configs, but TBH it's similar to all the rest I've reported - generally ±<10% for each different F/W tested from V7.12.1+.

May report back when I get a chance to fix this - may not - but am fairly confident this is the biggest issue - besides the core for the site needing a bit of a "glow up" as well.
But let this be a lesson for anyone who forgets to pay attention… "Are you sure about that?" Is that how it is plugged in? Is there NOTHING that could be dragging things down?
TBH, am very happy to have been able to get in and refine/update the configs, and to confirm the performance of the devices and configs we are using/moving towards. Hopefully the boss is too.