On the hEX (RB750Gr3) product page there are two block diagrams, one with disabled and one with enabled switching.
Qs:
why two, is that a user option?
if so, how to do it?
and why choose one over the other? What are the (dis)advantages of each?
It is user-configurable, depending on whether or not two or more ports are “switched” together.
There is a dependency on the version of RouterOS though:
pre-6.41: if at least one port is a slave to a master port → “enabled switching” applies
6.41 and later: if at least two ports are in a bridge with hw=yes → “enabled switching” applies
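For reference, a minimal console sketch of what triggers each case (interface and bridge names are assumed, not taken from anyone’s actual config in this thread):

```
# 6.41 and later: switching is "enabled" once two or more
# bridge ports have hardware offload (hw=yes)
/interface bridge add name=bridge1
/interface bridge port add bridge=bridge1 interface=ether2 hw=yes
/interface bridge port add bridge=bridge1 interface=ether3 hw=yes

# pre-6.41 equivalent: make ether3 a slave of master port ether2
# /interface ethernet set ether3 master-port=ether2
```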
Sorry to bump such an old thread, but after an ISP upgrade, and currently using an RB750Gr3 here, I wanted to give the things mentioned above a shot, and, well, they don’t hold up.
With ether1 out of any bridge, and with only a single bridge present with ports 2, 3, 4 & 5 added to it, all hardware-offloaded, the “Block Diagram with disabled switching” seems to apply.
So if you have an ISP that gives you over 500 Mb/s up & down, you have to use port 1 for WAN (for example) and ports 2 & 4 for LAN.
Or maybe use port 2 or 4 for WAN and use 1, 3 and 5 for LAN, that way you have more LAN ports. *WRONG, see below: https://forum.mikrotik.com/viewtopic.php?f=3&p=848197#p848151
Any way to change this behaviour? I guess not.
But worth mentioning somewhere instead of leaving the user wondering where the bottleneck is.
RouterOS: 6.46.8
Below are some quick tests made with iperf:
interface bridge port print
Flags: X - disabled, I - inactive, D - dynamic, H - hw-offload
# INTERFACE BRIDGE HW PVID PRIORITY PATH-COST INTERNAL-PATH-COST HORIZON
0 H ;;; defconf
ether2 bridge yes 1 0x80 10 10 none
1 I H ;;; defconf
ether3 bridge yes 1 0x80 10 10 none
2 H ;;; defconf
ether4 bridge yes 1 0x80 10 10 none
3 I H ;;; defconf
ether5 bridge yes 1 0x80 10 10 none
This is the real situation, and if you enable switching (bridge) then all the work is still done (emulated) in the processor. The processor sees two lanes of 1 Gb/s each. When using port 1 as WAN, ports two and four provide maximum speed; ports three and five have to share the 1 Gb/s with port one (WAN).
Looking at the traffic to/from the WAN: best is 1<->2/4, worse is 1<->3/5.
On the hEX S it is even worse if you use the SFP as WAN. That is less suited: SFP<->1-2-3-4-5 (ports 1, 2, 3, 4, 5 share the same 1 Gb/s lane).
Even with its limitations it is still a very nice little box.
If we are talking about 5 independent ports: the two 1 Gb/s links will be used as needed. There is no hard assignment of a link to a group of ports.
If we are talking about a mix of independent ports and slave ports: One 1Gbps link is given to the independent ports, the other is given to the grouped ports.
I’m not sure how it would behave if someone made two or more bridges. That isn’t a recommended config, anyway.
My setup: ISP on ether5, the other four ports grouped under one bridge.
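In console terms, that setup would look roughly like this (bridge name assumed, hw=yes is the default and shown only for clarity):

```
/interface bridge add name=bridge1
/interface bridge port add bridge=bridge1 interface=ether1 hw=yes
/interface bridge port add bridge=bridge1 interface=ether2 hw=yes
/interface bridge port add bridge=bridge1 interface=ether3 hw=yes
/interface bridge port add bridge=bridge1 interface=ether4 hw=yes
# ether5 stays out of the bridge and carries the ISP (WAN) connection
```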
Edited to correct typos.
By your logic, in two of my tests (1 and 3: eth1 WAN - eth2 LAN; eth1 WAN - eth4 LAN) I have two ~1.6 Gb/s links, with a total of 3.2 Gb/s to the CPU, and that isn’t in the diagrams.
The 1Gb/s links are full duplex.
Explain otherwise.
PS: shame that after 3 years since you’ve posted this you came with that explanation.
@Paternot your posted speedtest is one way at a time; I covered all the ports and tests in the screenshots posted above.
Also the Block Diagram posted above (disabled switching) seems pretty accurate to my findings. https://i.mt.lv/cdn/product_files/RB750Gr3-dsw_161117.png
Having WAN on ether5 (or 3, or 1) will give the best simultaneous up & down speed with LAN on ports 2 and 4. You can test that with two speedtests: start the second one just before the first ends, on different servers maybe, so that one of them does the upload part while the other does the download part.
If you move your testing machine, with the current setup, to port 3 or 1, you’ll see half of the simultaneous transfer speed. (You’ll still get 1 Gb/s up OR down, though, so running just ONE speedtest will not show it.)
Like this, here I was doing just that, two speedtests:
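The same “two overlapping speedtests” trick can be reproduced from the command line with iperf3; a sketch, with a placeholder LAN host running `iperf3 -s` (the `--bidir` option needs iperf3 3.7 or later; on older builds, run a normal client and a reverse `-R` client in parallel instead):

```shell
SERVER=192.168.88.10   # placeholder address of the iperf3 server

# Load both directions at once, like two simultaneous speedtests:
iperf3 -c "$SERVER" -t 30 --bidir

# Older iperf3 alternative: one normal and one reverse client in parallel
# iperf3 -c "$SERVER" -t 30 &  iperf3 -c "$SERVER" -t 30 -R;  wait
```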
In tests 1 & 3 you don’t go to the CPU, but are using the hardware switching in the switch chip (= offloading) → hence the limitations of the link to the CPU don’t apply and you get the full bandwidth of the 1 Gb/s connection / port.
Just for posterity, this is NOT the case: MikroTik always reports full bandwidth over all directions → that 1 Gb/s is shared for both directions!
(as supported by your own tests)
If you want some assistance or information you should be a bit more polite. Most of us on this forum are not here because we are paid for it.
And how would you explain it then, considering that this test goes right against the results of your tests number 2 & 4 from your first post here?
both were full duplex tests across two involved ports
The magic word is “integrated”. Integrated into the CPU, I assume. When you route (WAN), the CPU is always used. Port one to the CPU and traffic to port three have to share the same lane.
When traffic stays in the bridge, the switch in the CPU directs the traffic and offloads it in hardware. Don’t put your WAN in the bridge, that is what I am told.
So I guess you can’t explain the data that destroys your theory about the 1 Gb/s links being a sum of upload+download, can you?
I’m not the impolite one. All the data is posted pretty clearly above, the tests are explained, and they confirm that only the “Disabled Switching” diagram applies.
Yet you insist on supporting bogus data and making false claims based on 2 out of the 4 posted tests (out of 5 if you count the PPPoE one).
I’ve explained why those two tests show less (half the) data passing through: because they use the same 1 Gb/s link to the CPU. The other two tests, including the one with the PPPoE client active, use BOTH of the 1 Gb/s links to the CPU.
Read up again, maybe you learn something.
Make some tests of your own, repeat mine, whatever.
From all that has been said, you are the ignorant one, I’m sorry.
Cheers!
@msatter I’ve posted above the bridge ports, ether1 is NOT part of the bridge.
This isn’t how it works. The system is a SoC, with a switch integrated right into the CPU. This integrated switch uses two connections to the CPU (as seen here: http://www.t-firefly.com/download/FireWRT/hardware/MT7621.pdf). Each of these connections runs at 1 Gb/s.
There is no physical assignment to a given port: it works just like our modern x86 CPUs, which have two memory controllers, where any core can use any controller at any time.
Take a look at “routing, fast path”. The hEX can do almost 2 Gb/s. The speed test is done using one direction only: traffic goes in at one port and out at another.
So, say it goes in at eth1 and out at eth2. We need 1 Gb/s in at eth1 and 1 Gb/s out at eth2. So far, so good; we have two links after all, and even half-duplex ones would work. But this gives us only 1 Gb/s of traffic (you don’t count it twice). To reach the stated 1.9 Gb/s, the only way would be two full-duplex internal links: say, eth1 and eth2 in, eth3 and eth4 out. Otherwise you would need four half-duplex links.
That doesn’t contradict my findings, using just two ports, based on the first screenshots:
screenshot1:
~842Mbps IN ether1 → ~850Mbps OUT ether2 AND
~817Mbps IN ether2 → ~815Mbps OUT ether1 → 2x 1Gb/s links, right? (correct based on the Disabled Switching Diagram, because ports 1 and 2 are on different lanes to the CPU).
screenshot2:
~635Mbps IN ether1 → ~631Mbps OUT ether3 AND
~321Mbps IN ether3 → ~317Mbps OUT ether1 → only 1x 1Gb/s link, right? (correct based on the Disabled Switching Diagram, because ports 1 and 3 are on the same lane to the CPU).
screenshot3:
~799Mbps IN ether1 → ~799Mbps OUT ether4 AND
~869Mbps IN ether4 → ~869Mbps OUT ether1 → 2x 1Gb/s link, right? (correct based on the Disabled Switching Diagram, because ports 1 and 4 are on different lanes to the CPU).
screenshot4:
~626Mbps IN ether1 → ~623Mbps OUT ether5 AND
~331Mbps IN ether5 → ~329Mbps OUT ether1 → only 1x 1Gb/s link, right? (correct based on the Disabled Switching Diagram, because ports 1 and 5 are on the same lane to the CPU).
Ports are fixed, 1,3,5 on one lane, ports 2,4 on another lane. They don’t jump randomly from lane to lane. Based, again, on the Disabled Switching Block Diagram presented on the support page.
The advertised test results show 1.1 Gb/s for routing with 25 filter rules and 1518-byte packets; I have only 14 filter rules and I get ~1.6 Gb/s of routing (screenshots 1 and 3), which is pretty good for this little router.
One last time: the links ARE full duplex. And, as You can see on the pdf I linked, they are NOT assigned to a given port. Think about it.
I posted a speedtest crossing the router, from eth5 to eth1. If the ports were assigned to a given link, then by MikroTik’s schematic it wouldn’t be possible to push 1 Gb/s through a single half-duplex link. Only a full-duplex one can do this.
Same thing with the speed results: one cannot pass 1.9 Gb/s of traffic through two half-duplex 1 Gb/s links. And yes, that does contradict your findings: you claimed the links were half duplex. They aren’t. And by your screenshot1 we have about 1.7 Gb/s of crossing traffic; for that to even be possible, both links must be full duplex.
Take a look at the PDF I posted here. It’s the documentation of the SoC used by MikroTik. There you will see that the switch is connected to the CPU by two links that aren’t hardwired to any single switch port.
I didn’t say anything about half duplex links, there are two 1Gb/s full duplex links, one link for ports 1,3,5, and one link for ports 2,4. @sebastia is the one claiming half duplex links, not me.
The datasheet doesn’t say how MikroTik configured those links, but the MikroTik posted diagrams say how they did, like above mentioned and tested.
You can’t saturate one full-duplex link with ~900 Mb/s of unidirectional traffic at a time, like your speedtest, which does the download, THEN the upload.
You CAN saturate one full duplex link with bidirectional tests, like I did above:
1<=>2 = TWO full duplex links in use to/from CPU, bidirectional traffic of ~1.6Gbps, ~800Mbps both ways of both links, no bottleneck.
1<=>4 = same as above, no bottleneck.
3<=>2 = same as above, no bottleneck.
3<=>4 = same as above, no bottleneck.
5<=>2 = same as above, no bottleneck.
5<=>4 = same as above, no bottleneck.
1<=>3 = ONE full duplex link in use to/from CPU, bidirectional traffic of ~900Mbps, ~600Mbps one way, ~300 the other, bottleneck.
1<=>5 = same as above, bottleneck.
3<=>5 = same as above, bottleneck.
2<=>4 = same as above, bottleneck.
There is a bottleneck when using WAN and LAN ports from the same link (1<=>3 OR 1<=>5 OR 3<=>5 OR 2<=>4).
For optimal performance, and to avoid any bottlenecks, you have to use a WAN port from one link and LAN ports from the other link.
I just wanted to know if there is any way around this, besides using port 2(or 4) for WAN and ports 1,3,5 for LAN. ← *WRONG, see below: https://forum.mikrotik.com/viewtopic.php?f=3&p=848197#p848151
Or port 1(or 3 or 5) for WAN and ports 2,4 for LAN.
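To make the claimed mapping concrete, here is a small sketch of the fixed-lane model argued for in this thread. The port-to-lane table (1, 3, 5 on one lane; 2, 4 on the other) is my reading of the “disabled switching” block diagram and of the tests above, not something MikroTik documents:

```python
# Fixed-lane model of the RB750Gr3 (assumed, per the "disabled
# switching" diagram): ports 1,3,5 share one full-duplex 1 Gb/s
# link to the CPU, ports 2,4 share the other.
LANE_OF = {1: "A", 3: "A", 5: "A", 2: "B", 4: "B"}
LINK_CAPACITY = 1000  # Mb/s per direction, full duplex

def routed_pair_capacity(wan_port: int, lan_port: int) -> int:
    """Approximate total bidirectional throughput (Mb/s) when the CPU
    routes between two ports, according to the fixed-lane model."""
    if LANE_OF[wan_port] == LANE_OF[lan_port]:
        # Same lane: both ports' traffic, in both directions, squeezes
        # through one link, so the pair is capped at ~1 Gb/s total.
        return LINK_CAPACITY
    # Different lanes: each port uses its own link, so roughly
    # 1 Gb/s each way, ~2 Gb/s aggregate (CPU permitting).
    return 2 * LINK_CAPACITY

print(routed_pair_capacity(1, 2))  # different lanes -> 2000
print(routed_pair_capacity(1, 3))  # same lane -> 1000
```

This reproduces the pattern in the test list above: pairs like 1<=>2 and 5<=>4 show no bottleneck, while 1<=>3 and 2<=>4 are capped near 1 Gb/s total.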
I don’t understand what you are trying to prove. That my tests lie? Feel free to do your own BIDIRECTIONAL tests, two ports at a time, post the results, and compare them with my findings.
Until then, Cheers!