catalystjmf

CHR on ESXi ring buffer exhaustion

Mon Feb 15, 2021 9:50 pm

There are serious performance issues when processing high packet rates (pps) on CHR under ESXi, and likely under other hypervisors as well. We run many CHRs and can generate 60+ Gbps with a TCP test to 127.0.0.1, so the VM's resources look fine, until you actually try to pass traffic across the virtual and physical interfaces.

All high-performance adapters get their ring buffer size from the driver. On Linux or Windows you can easily increase the ring buffers in the guest driver, which propagates the setting to the virtual NIC and then to the physical NIC. In the CHR there is no way to do this, and on every ESXi machine we see ring buffer exhaustion regardless of the physical interface used, whether Mellanox ConnectX-4, HPE InfiniBand, or QLogic.

So how do we fix this? Is this a feature request? We are hitting a brick wall with these CHRs as we approach 300,000 packets per second. We will have to move away from MikroTik CHR if there is no resolution, and I am sure others are having this same issue.
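For comparison, this is roughly what we can do in a Linux guest running the vmxnet3 driver but cannot do in CHR (the interface name ens192 is just an example, and the usable maximum depends on the driver, typically up to 4096 for vmxnet3):

# show current and maximum RX/TX ring sizes for the vmxnet3 interface
ethtool -g ens192

# grow the guest-side rings toward the driver maximum
ethtool -G ens192 rx 4096 tx 4096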

Thank you

[localhost:~] vsish -e get /net/portsets/DvsPortset-0/ports/50331805/vmxnet3/rxSummary | grep ring
1st ring size:1056
2nd ring size:1024
# of times the 1st ring is full:7820741
# of times the 2nd ring is full:0
