tim2023
just joined
Topic Author
Posts: 2
Joined: Thu May 18, 2023 10:57 pm

Unable to Surpass 10Gbps in CHR

Thu May 18, 2023 11:20 pm

Hello everyone,

I'm currently experimenting with Cloud Hosted Router (CHR), and I've encountered a peculiar situation. I've installed CHR on a 24-core Xeon server running at 3.4 GHz, with Proxmox as the hypervisor.

In Proxmox, I've created three VMs: one running CHR with an unlimited trial license, and two running Ubuntu for conducting speed tests with iperf3. The three machines are connected via a Proxmox bridge.

The configuration for CHR is as follows:
# may/18/2023 19:52:11 by RouterOS 7.9
# software id = 
#
/interface bridge
add mtu=7000 name=bridge1
/interface ethernet
set [ find default-name=ether1 ] disable-running-check=no name=\
    "ether1 [INTERNET]"
set [ find default-name=ether2 ] disable-running-check=no mtu=7000
set [ find default-name=ether3 ] disable-running-check=no mtu=7000
set [ find default-name=ether4 ] disable-running-check=no mtu=7000
set [ find default-name=ether5 ] disable-running-check=no mtu=7000
/disk
set slot1 slot=slot1 type=hardware
set slot2 slot=slot2 type=hardware
set slot3 slot=slot3 type=hardware
set slot4 slot=slot4 type=hardware
set slot5 slot=slot5 type=hardware
set slot6 slot=slot6 type=hardware
set slot7 slot=slot7 type=hardware
/interface wireless security-profiles
set [ find default=yes ] supplicant-identity=MikroTik
/ip pool
add name=dhcp_pool0 ranges=192.168.222.2-192.168.222.254
/ip dhcp-server
add address-pool=dhcp_pool0 interface=bridge1 name=dhcp1
/interface bridge port
add bridge=bridge1 interface=ether2
add bridge=bridge1 interface=ether3
add bridge=bridge1 interface=ether4
add bridge=bridge1 interface=ether5
/ip address
add address=192.168.222.1/24 interface=bridge1 network=192.168.222.0
/ip dhcp-client
add interface="ether1 [INTERNET]"
/ip dhcp-server network
add address=192.168.222.0/24 gateway=192.168.222.1
/ip firewall filter
add action=fasttrack-connection chain=forward connection-state=\
    established,related hw-offload=yes
add action=accept chain=forward connection-state=established,related
add action=fasttrack-connection chain=forward dst-address=192.168.222.0/24 \
    hw-offload=yes src-address=192.168.222.0/24
/ip firewall nat
add action=masquerade chain=srcnat out-interface="ether1 [INTERNET]"
/system note
set show-at-login=no
Here, ether1 is the Internet connection and the remaining ports, ether2-ether5, are used for my test machines. The two Ubuntu VMs are connected to ether4 and ether5.

I have successfully enabled jumbo frames on both the CHR and the Ubuntu machines, but I've noticed an odd issue: I'm unable to get iperf3 to exceed a throughput of 10 Gbits/sec.
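For completeness, the tests are run roughly like this (the address is one of the DHCP leases the CHR hands out on 192.168.222.0/24; the -P run is there to check whether the ceiling is per-stream or aggregate):

```shell
# On the first Ubuntu VM (server side):
iperf3 -s

# On the second Ubuntu VM (client side), single TCP stream for 30 s:
iperf3 -c 192.168.222.2 -t 30

# Same test with 4 parallel streams, to rule out a
# single-stream limit as the cause of the ~10 Gbit/s cap:
iperf3 -c 192.168.222.2 -t 30 -P 4
```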

I've tried increasing the packet size, for example from 3000 to 6000 bytes. I can see a clear decrease in the number of packets transmitted by the CHR, but it has little to no effect on overall throughput.
I've also tried tuning the TCP buffers in Linux, to no avail.
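For reference, the TCP buffer tuning I tried on both Ubuntu VMs was along these lines (the exact values are illustrative, not a recommendation):

```shell
# Raise the maximum socket buffer sizes (bytes):
sysctl -w net.core.rmem_max=67108864
sysctl -w net.core.wmem_max=67108864
# min / default / max for TCP receive and send buffers:
sysctl -w net.ipv4.tcp_rmem="4096 131072 67108864"
sysctl -w net.ipv4.tcp_wmem="4096 131072 67108864"
```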
Changing the CPU core affinity for the three VMs, so that they all sit on the same physical CPU, don't share any cores, and avoid hyper-threaded sibling cores, hasn't helped either.
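For anyone wanting to reproduce the pinning: in Proxmox this can be set per VM with qm (the VM IDs and core ranges below are just examples from my host):

```shell
# Pin VM 100 (CHR) and the two Ubuntu VMs (101, 102)
# to separate, non-overlapping physical core sets:
qm set 100 --affinity 0-5
qm set 101 --affinity 6-11
qm set 102 --affinity 12-17
```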

Using the profile tool, I've observed that networking accounts for the highest CPU usage, with one core above 60% and another above 30%.
Checking the two VMs conducting the speed tests, their TCP queue usage sits at around 32 MB, which should be adequate for this speed.

So, what else could possibly be limiting the throughput of CHR? I'd appreciate any suggestions and ideas to help me understand and resolve this issue.

Thank you in advance!
 
tim2023
just joined
Topic Author
Posts: 2
Joined: Thu May 18, 2023 10:57 pm

Re: Unable to Surpass 10Gbps in CHR

Mon May 22, 2023 7:49 pm

Just a quick update: after a series of tests, I have confirmed that the bottleneck is the virtio network card used by Proxmox. To achieve higher throughput, the only option is to pass a PCIe network card through to the virtual machine instead of using virtualized NICs.
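In case it helps anyone, PCIe passthrough in Proxmox looks roughly like this (the PCI address 0000:01:00.0 is just an example from my host, and IOMMU must be enabled in the BIOS and kernel first). Before going that far, it may also be worth trying virtio multiqueue, which lets the guest spread NIC processing across several cores:

```shell
# Option 1: enable multiqueue (8 queues) on the VM's virtio NIC:
qm set 100 --net0 virtio,bridge=vmbr0,queues=8

# Option 2: pass a physical NIC through to the VM entirely:
qm set 100 --hostpci0 0000:01:00.0,pcie=1
```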
