My purpose in doing this was to see what performance to expect if I used these as aggregation "switches" (er, routers) on either end of a pair of fibers between two buildings (versus installing a mux or replacing the 2-count fiber with a 12- or 24-count), since I already had them in stock.
Initially I configured the 2004s like switches: all ports in a single bridge, each SFP+ port assigned untagged to its own VLAN via PVID (port 1 = VLAN 1001, port 2 = 1002, port 3 = 1003, etc.), with the SFP28 ports acting as the tagged trunks. SFP+ port 1 on each CCR2116 was then connected to the other's, as were ports 2, by way of the 2004 "switches." The 2116s' 16-core CPUs were easily able to saturate each port with 9.6Gbps of UDP (and occasionally TCP) traffic, and once things were working well, we ended up with 19+Gbps of traffic across the SFP28 link.
Code: Select all
2116 No.1 SFP+1 ==DAC== SFP+1 \                        / SFP+1 ==DAC== SFP+1 2116 No.2
                               2004 No.1 ==SFP28== 2004 No.2
2116 No.1 SFP+2 ==DAC== SFP+2 /                        \ SFP+2 ==DAC== SFP+2 2116 No.2
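For anyone wanting to replicate the 2004 "switch" setup, it looked roughly like this in RouterOS v7 (just a sketch: only the first two access ports are shown, and the interface names are the CCR2004 defaults, so check yours):

Code: Select all
# One bridge with VLAN filtering enabled
/interface bridge add name=bridge1 vlan-filtering=yes
# Each SFP+ port untagged into its own VLAN via PVID
/interface bridge port add bridge=bridge1 interface=sfp-sfpplus1 pvid=1001
/interface bridge port add bridge=bridge1 interface=sfp-sfpplus2 pvid=1002
# SFP28 port carries all VLANs tagged as the trunk
/interface bridge port add bridge=bridge1 interface=sfp28-1
/interface bridge vlan add bridge=bridge1 vlan-ids=1001 untagged=sfp-sfpplus1 tagged=sfp28-1
/interface bridge vlan add bridge=bridge1 vlan-ids=1002 untagged=sfp-sfpplus2 tagged=sfp28-1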
At around 19Gbps, both 2004s' CPUs went past the 90% mark, regardless of whether the traffic was UDP or TCP. Using the built-in speed tests on the 2116s, the TCP results ranged from 4Gbps one-way up to 8.8Gbps, with a very rare 9.2Gbps on occasion. There were zero firewall rules, queues, or anything else in all four routers, just IP addresses and a couple of static routes.
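For reference, the built-in test I used is just the stock bandwidth-test tool, something along these lines (the address here is a placeholder for the far-side 2116, which acts as the bandwidth-test server):

Code: Select all
# Run on one 2116, pointed at the other; btest server must be enabled there
/tool bandwidth-test address=10.0.0.2 protocol=tcp direction=both duration=30s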
I'd say that on RouterOS 7.4.1, the practical maximum for routing or bridging on these things is about 20Gbps in the real world, with most people seeing less than that depending on what else they have the router doing.
On 7.6 and 7.7beta9, my throughput tests were much lower, around 6-7Gbps instead of 8-9Gbps per port. So while 7.7 claims to have fixed support for the various SFP+ rates, there's a performance regression somewhere else.
Also interesting to note: traffic coming in via the SFP28 port and going out the SFP+ ports uses less CPU than traffic coming in via the SFP+ ports and going out the SFP28 port.