Community discussions

just joined
Topic Author
Posts: 10
Joined: Fri Apr 19, 2013 3:54 pm

CHR on ESXI ring buffer exhaustion

Mon Feb 15, 2021 9:50 pm

There are serious performance issues when processing high packet rates (pps) on CHR under ESXi, and likely under other hypervisors as well. We run many CHRs and can generate 60+ Gbps with a TCP test, so raw resources seem adequate; that is, until you actually try to pass traffic across the virtual and physical interfaces.

All high-performance adapters receive their ring buffer setting via the driver. On Linux or Windows you can increase the ring buffers in the driver pretty easily (e.g. ethtool -G eth0 rx 4096 on Linux), and that propagates to the virtual NIC and then to the physical NIC. In CHR there is no way to do this, and on every ESXi machine we see ring buffer exhaustion regardless of the physical interface used, whether Mellanox ConnectX-4, HPE InfiniBand, or QLogic...

So how do we fix this? Is this a feature request? We are hitting a brick wall with these CHRs as we approach 300,000 packets per second. We will have to move away from MikroTik CHR if there is no resolution, and I am sure others are having this same issue.

Thank you

[localhost:~] vsish -e get /net/portsets/DvsPortset-0/ports/50331805/vmxnet3/rxSummary | grep ring
1st ring size:1056
2nd ring size:1024
# of times the 1st ring is full:7820741
# of times the 2nd ring is full:0
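To put the "1st ring is full" counter in perspective, here is a minimal sketch of how little headroom a ring that size gives at the packet rates discussed in this thread. The 1056-slot ring size and the 300 kpps rate are taken from the posts above; everything else (ignoring interrupt coalescing and burstiness) is a simplifying assumption.

```python
# Sketch: how long an empty vmxnet3 RX ring survives if the guest
# stalls and stops draining it, before packets start dropping.
# Ring size (1056 slots) is from the vsish output; 300 kpps is the
# rate at which the original poster reports hitting a wall.

def ring_drain_time_ms(ring_slots: int, packets_per_second: float) -> float:
    """Time in milliseconds to fill an empty RX ring at a given packet rate."""
    return ring_slots / packets_per_second * 1000.0

if __name__ == "__main__":
    print(f"{ring_drain_time_ms(1056, 300_000):.2f} ms")  # 3.52 ms
```

A guest pause of just a few milliseconds at 300 kpps overruns the ring, which is why larger rings (or a way to configure them, as on Linux/Windows guests) matter here.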
just joined
Posts: 2
Joined: Sat Mar 27, 2021 7:24 am

Re: CHR on ESXI ring buffer exhaustion

Sat Mar 27, 2021 7:29 am

I second this as an issue. I am seeing a lot of the same behavior in my deployments.
Frequent Visitor
Frequent Visitor
Posts: 72
Joined: Fri Dec 06, 2013 6:07 pm

Re: CHR on ESXI ring buffer exhaustion

Fri May 07, 2021 9:38 pm

We see the same issue: packet loss through my CHR starts when going over 300 kpps. We carry a lot of VoIP traffic, and just 100 Mbps of VoIP traffic at 20 ms packet intervals generates about 100 kpps in each direction.
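The arithmetic behind that figure can be sketched as follows. The average on-wire packet size of ~125 bytes is an assumption for illustration (small VoIP frames are in that ballpark; the exact size depends on codec and header overhead), while the 20 ms interval is from the post above.

```python
def packets_per_second(throughput_bps: float, avg_packet_bytes: float) -> float:
    """Convert a bit rate into a packet rate for a given average packet size."""
    return throughput_bps / (avg_packet_bytes * 8)

def concurrent_streams(total_pps: float, packet_interval_ms: float = 20.0) -> float:
    """Streams needed to produce total_pps when each stream sends one packet
    per packet_interval_ms (20 ms -> 50 packets/sec per stream)."""
    return total_pps / (1000.0 / packet_interval_ms)

# 100 Mbps of small VoIP packets (~125 bytes on the wire, an assumption)
print(packets_per_second(100e6, 125))        # 100000.0 pps
print(concurrent_streams(100_000, 20.0))     # 2000.0 streams
```

So 100 kpps in one direction corresponds to roughly 2000 concurrent 20 ms VoIP streams under these assumptions, well within a busy deployment.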

I have also noticed high CPU load on the CHR under high pps.

I wonder if MikroTik has looked into DPDK, XDP, or VPP, where 14 Mpps per CPU core is achievable on Linux.
A standard Linux kernel tops out at around 1 Mpps per core, depending on the CPU, so you would need almost 16 CPU cores to hit 14 Mpps.
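A rough core-count comparison using the numbers quoted above (about 1 Mpps per core for a stock kernel forwarding path, about 14 Mpps per core for a DPDK/VPP-style fast path; both are ballpark figures from this thread, not benchmarks, and overhead is ignored):

```python
import math

def cores_needed(target_pps: float, per_core_pps: float) -> int:
    """Minimum whole cores to sustain target_pps at a given per-core rate."""
    return math.ceil(target_pps / per_core_pps)

print(cores_needed(14e6, 1e6))   # 14 cores with a standard kernel path
print(cores_needed(14e6, 14e6))  # 1 core with a DPDK/VPP-style fast path
```

With scheduling and softirq overhead on a real system, the kernel-path figure lands closer to the "almost 16 cores" estimate above.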
