> Described by whom, please? I am writing as an official representative of MikroTik now: there is, and never was, such a limitation.

The 1 Gbps limit was described as a per-CPU forwarding limitation. To get 10 Gbps of throughput, you couldn't just send a single 10 Gbps TCP flow between two ports; you needed to aggregate 10x 1 Gbps TCP flows so that multiple CPUs could get involved in the forwarding and provide the aggregate 10 Gbps. Supposedly a single flow is only ever forwarded by one CPU at a time, so it is limited to the maximum forwarding performance of that CPU, and since the individual cores in the CCR are fairly weak, performance was limited to around 1 Gbps.
I don't recall seeing this being a traffic generator issue.
http://forum.mikrotik.com/viewtopic.php?f=1&t=85698

Described by whom, please? I am writing as an official representative of MikroTik now: there is, and never was, such a limitation.
required tcp buffer to reach 10000 Mbps with RTT of 2.0 ms >= 2441.4 KByte
maximum throughput with a TCP window of 64 KByte and RTT of 2.0 ms <= 262.14 Mbit/sec.
> If you put a 2 ms RTT (not unreasonable with a test port on each side of the DUT) into the calculator, it gives a max throughput of ~58 Gbps at 1500/1460 bytes. That suggests you don't need to tweak this, at least. You might need to increase the window size of the tester, though (assuming it actually runs a TCP stack).

Technically this is true, but the reason for enabling jumbo frames/a larger MTU is to reduce the CPU overhead on the host and on the router (if it is CPU-based rather than ASIC-based).
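For reference, the two calculator lines above are just the bandwidth-delay product worked in both directions; a quick sketch reproducing them (function names are mine, not from any particular calculator):

```python
def bdp_kbytes(rate_mbps: float, rtt_ms: float) -> float:
    """Bandwidth-delay product: TCP buffer needed to keep the pipe full."""
    bits_in_flight = rate_mbps * 1e6 * (rtt_ms / 1e3)
    return bits_in_flight / 8 / 1024  # KByte

def max_throughput_mbps(window_kbytes: float, rtt_ms: float) -> float:
    """Ceiling imposed by a fixed TCP window: one window per round trip."""
    bits_per_rtt = window_kbytes * 1024 * 8
    return bits_per_rtt / (rtt_ms / 1e3) / 1e6  # Mbit/s

print(round(bdp_kbytes(10000, 2.0), 1))        # 2441.4 KByte, as quoted above
print(round(max_throughput_mbps(64, 2.0), 2))  # 262.14 Mbit/s, as quoted above
```

Either function shows why a default 64 KByte window cannot fill a 10 Gbps path even at a 2 ms RTT: the sender stalls waiting for ACKs.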
> How much CPU usage on the CCR1072 in that test?

Here is an example of a single 10 Gig TCP stream with 9000 MTU going through the CCR1072, with the following specs:
Server: HP DL360 G6 (2 x Intel X5570 Quad Core)
Hypervisor: ESXi6.0
Guest OS: CentOS 6.6
Traffic Generator: iperf3
iperf single TCP stream
1072 Interface
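To put a number on why the 9000 MTU matters in this test: on a CPU-forwarded router, jumbo frames mean roughly 6x fewer packets per second for the same bit rate, so far less per-packet work. A back-of-the-envelope sketch (the overhead constants are standard Ethernet/TCP figures, not values measured on the CCR):

```python
WIRE_OVERHEAD = 38   # preamble 8 + Ethernet header 14 + FCS 4 + inter-frame gap 12
TCPIP_HEADERS = 40   # IPv4 (20) + TCP (20), no options

def frames_per_sec(link_bps: float, mtu: int) -> float:
    """Frames/s needed to saturate the link at a given MTU."""
    return link_bps / 8 / (mtu + WIRE_OVERHEAD)

def goodput_fraction(mtu: int) -> float:
    """Share of wire bits that are TCP payload."""
    return (mtu - TCPIP_HEADERS) / (mtu + WIRE_OVERHEAD)

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {frames_per_sec(10e9, mtu):,.0f} frames/s, "
          f"{goodput_fraction(mtu):.1%} goodput")
```

At 10 Gbps this works out to roughly 813k frames/s at MTU 1500 versus roughly 138k frames/s at MTU 9000, which is the CPU-overhead argument made earlier in the thread.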
> I think this 1 gig limit was due to 1 Gbit of throughput between the ports and the CPU on some models, like the 1009 and 1100.

Like I said before, there is no such limit. The BTest tool has limits, not the CCR. Other, low-end devices do have limits, but the ports of the CCR1072/1036/1016 are all directly connected to the CPU. The CCR1009 has the first 4 ports going through a switch chip, so that is partially true for that model.
> It would be more useful for us, and probably others, to know the PPS capabilities of these units: for example, what PPS can be achieved via routed ports at 64/128/256/512-byte frame sizes, single 10GbE port to 10GbE port, with 5/10/20 firewall filter rules, and also with bonded aggregated links, 20GbE routed to 20GbE, with CPU stats for each test.

These tests have already been published:
> How much CPU usage on the CCR1072 in that test?

Here is the CPU distribution with one 10 gig up/down TCP stream running (20 gig aggregate IP forwarding):
CPU usage distribution on the 72 cores?
Configuration of the CCR1072?
[admin@IPA-LAB-CCR1072] > system resource cpu print
# CPU LOAD IRQ DISK
0 cpu0 0% 0% 0%
1 cpu1 6% 1% 0%
2 cpu2 0% 0% 0%
3 cpu3 0% 0% 0%
4 cpu4 0% 0% 0%
5 cpu5 0% 0% 0%
6 cpu6 0% 0% 0%
7 cpu7 0% 0% 0%
8 cpu8 0% 0% 0%
9 cpu9 0% 0% 0%
10 cpu10 0% 0% 0%
11 cpu11 0% 0% 0%
12 cpu12 32% 2% 0%
13 cpu13 0% 0% 0%
14 cpu14 5% 5% 0%
15 cpu15 0% 0% 0%
16 cpu16 0% 0% 0%
17 cpu17 0% 0% 0%
18 cpu18 0% 0% 0%
19 cpu19 0% 0% 0%
20 cpu20 0% 0% 0%
21 cpu21 0% 0% 0%
22 cpu22 0% 0% 0%
23 cpu23 0% 0% 0%
24 cpu24 0% 0% 0%
25 cpu25 0% 0% 0%
26 cpu26 0% 0% 0%
27 cpu27 0% 0% 0%
28 cpu28 0% 0% 0%
29 cpu29 0% 0% 0%
30 cpu30 0% 0% 0%
31 cpu31 0% 0% 0%
32 cpu32 0% 0% 0%
33 cpu33 0% 0% 0%
34 cpu34 0% 0% 0%
35 cpu35 0% 0% 0%
36 cpu36 0% 0% 0%
37 cpu37 0% 0% 0%
38 cpu38 0% 0% 0%
39 cpu39 21% 4% 0%
40 cpu40 0% 0% 0%
41 cpu41 0% 0% 0%
42 cpu42 0% 0% 0%
43 cpu43 0% 0% 0%
44 cpu44 0% 0% 0%
45 cpu45 0% 0% 0%
46 cpu46 0% 0% 0%
47 cpu47 0% 0% 0%
48 cpu48 0% 0% 0%
49 cpu49 0% 0% 0%
50 cpu50 0% 0% 0%
51 cpu51 0% 0% 0%
52 cpu52 0% 0% 0%
53 cpu53 0% 0% 0%
54 cpu54 0% 0% 0%
55 cpu55 0% 0% 0%
56 cpu56 36% 3% 0%
57 cpu57 0% 0% 0%
58 cpu58 0% 0% 0%
59 cpu59 0% 0% 0%
60 cpu60 0% 0% 0%
61 cpu61 0% 0% 0%
62 cpu62 0% 0% 0%
63 cpu63 0% 0% 0%
64 cpu64 0% 0% 0%
65 cpu65 0% 0% 0%
66 cpu66 0% 0% 0%
67 cpu67 0% 0% 0%
68 cpu68 0% 0% 0%
69 cpu69 0% 0% 0%
70 cpu70 0% 0% 0%
71 cpu71 0% 0% 0%
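Notably, only a handful of the 72 cores are non-idle in that dump (cpu1, cpu12, cpu14, cpu39, cpu56), which matches a single flow being handled by only a few cores. A throwaway sketch for pulling the busy cores out of pasted `/system resource cpu print` output (the parser and the trimmed sample are illustrative, not a RouterOS API):

```python
# Trimmed copy of the output pasted above; columns are: # CPU LOAD IRQ DISK
sample = """\
 0  cpu0   0%  0%  0%
 1  cpu1   6%  1%  0%
12  cpu12  32% 2%  0%
39  cpu39  21% 4%  0%
56  cpu56  36% 3%  0%
"""

def busy_cores(text: str, threshold: int = 1):
    """Return (core, load%) pairs whose LOAD column is at or above threshold."""
    busy = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 5 and parts[1].startswith("cpu"):
            load = int(parts[2].rstrip("%"))
            if load >= threshold:
                busy.append((parts[1], load))
    return busy

print(busy_cores(sample))  # [('cpu1', 6), ('cpu12', 32), ('cpu39', 21), ('cpu56', 36)]
```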
> - informative stats display incorrect voltage (0V) even with both PS connected.

This is fixed in v6.33rc13.
> While I wouldn't mind upgrading, I don't really feel like getting a CCR1072, mainly because, first, I need to earn the money, and second, I feel like the quality of MikroTik is going down on the software side. They just aren't that competitive anymore in software. Sure, I could just buy the CCR1072 straight away, but at the current pace of RouterOS development it's really not convincing me to get one.

I don't get your point.
There was another router that managed 100 Gb/s of routing a few years ago, but it was only a research project and it didn't get much spotlight. It was also probably cheaper than the CCR1072 as a solution. It was a GPU-based router called PacketShader, which used 2x GTX 480 and dual Xeon CPUs to reach those speeds. Since you can get faster CPUs and GPUs much cheaper now, and with PCIe 3.0 x16 and IGPs that can also run compute connected to the CPU bus, the only real limitation is the network I/O.
Idle power use on the CCRs is horrible compared to x86. Recent x86 CPUs can idle down to 10 W, while the CCR1036's idle power is 40 W (47 W, in fact, since the fan always runs). x86 CPUs also have dynamic clock scaling, which reduces heat and power use. So while the CCRs may be impressive, they not only cost a lot but also use a lot of electricity. Don't Tilera-based CPUs also have dynamic clocks and voltages? Although GPUs are power-hungry, recent GPUs have very low idle power, especially if you're only using them for compute and not connecting any monitors to them.
I have actually been in favor of using MikroTik for a few years; it's just lacking the configurability you get with a normal x86 Linux server. Even Ubiquiti, which lacks features and speed compared to MikroTik, can be used as a Linux server, since you can install Debian packages on their devices.
> Idle power use on the CCRs is horrible compared to x86. Recent x86 CPUs can idle down to 10 W, while the CCR1036's idle power is 40 W (47 W, since the fan always runs).

You are comparing a whole appliance's idle power with the idle power of an x86 CPU alone; a full x86 platform's idle power is higher than 40 W.
The only problem is that iperf doesn't really reflect 'real' experience...
I bet that adding a few NAT/firewall rules to the mix will drop the performance -significantly-.
Thanks Tom!
We haven't tried it in overclock mode yet, but I'll add that to the list of tests we are doing.
Just recently we got the CCR1072 to 30,000 active PPPoE connections with 30,000 simple queues.
Dears,
I have made a real-life test. The only problem with the CCR1072 and PPPoE with 3 simple queues per subscriber is when traffic starts to flow.
Once the CPU load hits >13-15%, bandwidth drops, which causes the CPU load to go down again; this repeats until you reboot the router. Then it is stable again for another 6-10 hours, before the CPU load starts to get unstable again.
I have the router in a real-life situation now, with ~1400 users online and ~670 Mbps at peak!
Best regards,
Layth
> I have made a real-life test. The only problem with the CCR1072 and PPPoE with 3 simple queues per subscriber is when traffic starts to flow. Once the CPU load hits >13-15%, bandwidth drops, which causes the CPU load to go down again; this repeats until you reboot the router, and then it is stable again for another 6-10 hours.
Are you on the latest version? What are the per-core CPU load distributions? Do all your queues run under a single (or a few) parent queues?
Have you contacted support@mikrotik.com?
> All queues run under a single parent.

So all your queues are limited to a single CPU core then; that is the most probable place for the bottleneck.
> I haven't contacted support because I have a history without a single success!

Sad! My observations are completely the opposite: compared to other IT companies, this is some of the best support out there. Try submitting something to Cisco.
> I use simple queues with the parent queue set to none; it's not so much queueing as traffic shaping with PCQ.

We're in the same boat, but with the CCR1036 and only running 500 queues, all PCQ with "no parent", and barely able to push 600 Mbps. Is there an official way to run queueing with many hundreds of clients?
I wish I had your success story with support, but unfortunately I haven't.
> Simple queues, not queue trees. But simple queues, too, have issues if frequent changes are made to those queues.

We're using simple queues. What do you mean by frequent changes? We might modify a few queues a day on the list, but I can't imagine a few changes making an impact that lasts for hours...