catalystjmf
just joined
Topic Author
Posts: 10
Joined: Fri Apr 19, 2013 3:54 pm

CHR on ESXI ring buffer exhaustion

Mon Feb 15, 2021 9:50 pm

There are serious performance issues when processing a high packet rate on CHR with ESXi, and likely with other hypervisors as well. We run many CHRs and can generate 60+ Gbps with a TCP test to 127.0.0.1, so resources seem fine, that is, until you actually try to pass traffic across the virtual and physical interfaces.

All high-performance adapters receive their ring buffer setting via the driver. On Linux or Windows you can increase the ring buffers in the driver pretty easily, and that setting is passed to the virtual NIC and then to the physical NIC. In CHR there is no way to do this, and on every ESXi machine we see ring buffer exhaustion regardless of the physical interface used, whether Mellanox ConnectX-4, HPE InfiniBand, or QLogic.

So how do we fix this? Is this a feature request? We are hitting a brick wall with these CHRs as we get near 300,000 packets per second. We will have to move away from MikroTik CHR if there is no resolution, and I am sure others are having this same issue.

Thank you

[localhost:~] vsish -e get /net/portsets/DvsPortset-0/ports/50331805/vmxnet3/rxSummary | grep ring
1st ring size:1056
2nd ring size:1024
# of times the 1st ring is full:7820741
# of times the 2nd ring is full:0
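
For comparison, this is roughly what we do on a Linux guest with a vmxnet3 NIC, where the ring sizes can be raised from inside the guest (the interface name and sizes below are just examples; check the maximums ethtool reports on your system):

# Show current and maximum RX/TX ring sizes for the vmxnet3 interface (example name ens192)
ethtool -g ens192
# Raise the RX ring toward the reported maximum
ethtool -G ens192 rx 4096

There is no equivalent control in RouterOS on the CHR, which is exactly the problem.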
 
alphaonezero
just joined
Posts: 3
Joined: Sat Mar 27, 2021 7:24 am

Re: CHR on ESXI ring buffer exhaustion

Sat Mar 27, 2021 7:29 am

I second this as an issue; I'm seeing a lot of the same behaviour in my deployments.
 
chubbs596
Frequent Visitor
Posts: 90
Joined: Fri Dec 06, 2013 6:07 pm

Re: CHR on ESXI ring buffer exhaustion

Fri May 07, 2021 9:38 pm

We see the same issue: packet loss through the CHR starts once we go over 300 kpps. We carry a lot of VoIP traffic, and just 100 Mbps of VoIP traffic at 20 ms packet intervals generates 100 kpps in each direction.

I have also noticed high CPU load on the CHR at high pps.

I wonder if MikroTik has looked into DPDK, XDP, or VPP, where you can see 14 Mpps per CPU core. A standard Linux kernel tops out at around 1 Mpps per core, depending on the CPU, so you would need almost 16 CPU cores to hit 14 Mpps.
 
vint
just joined
Posts: 1
Joined: Tue Oct 13, 2020 6:19 pm

Re: CHR on ESXI ring buffer exhaustion

Tue Aug 10, 2021 5:04 pm

Hi,
Try increasing rxBurstQueueLength for the interface in the .vmx configuration file. It might help you.
Read this article for details: https://kb.vmware.com/sfc/servlet.sheph ... 009Fi1fAAC
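
Roughly, that means adding a line like the one below to the VM's .vmx file while the VM is powered off (the ethernet0 prefix and the value are only a sketch based on the usual ethernetN.<option> naming; confirm the exact key and a sensible value against the KB article):

# Sketch only - key naming and value are assumptions, verify against the VMware KB
ethernet0.rxBurstQueueLength = "1024"

The change only takes effect on the next power-on of the VM.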
 
changeip
Forum Guru
Posts: 3829
Joined: Fri May 28, 2004 5:22 pm

Re: CHR on ESXI ring buffer exhaustion

Wed Apr 27, 2022 7:33 am

I think we are running into this ... anyone else have other info that might help? VMware ESXi 7.0.0 and CHR 6.49.6. Hitting a brick wall once we hit 300k pps.

Sam
