DHCP Server in Container Disables Bridge Fast Path/L3 HW Offload

I submitted a support ticket with this, but I thought I’d bring it up here as well. Hopefully someone has a clever workaround.

I’m attempting to run a DHCP server in a container on my CCR2116. I’d prefer not to have to do that, but MikroTik’s DHCP server doesn’t do dynamic DNS updates, and I already run an authoritative DNS server for my local domain name.
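For anyone curious, enabling dynamic DNS updates in kea-dhcp4.conf boils down to something like this (the suffix and addresses are placeholders, not my real config; the updates themselves are sent by the companion kea-dhcp-ddns daemon):

{
  "Dhcp4": {
    "ddns-send-updates": true,
    "ddns-qualifying-suffix": "lan.example.org.",
    "dhcp-ddns": {
      "enable-updates": true,
      "server-ip": "127.0.0.1",
      "server-port": 53001
    }
  }
}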

However, as soon as I start my DHCP container, “Bridge Fast Path Active” becomes disabled. This happens with 100% reproducibility, on both 7.7 and 7.8rc1, and with both isc-dhcp-server and Kea. The easiest way to see it is to use networkboot/dhcpd or jonasal/kea-dhcp4. To confirm that those specific containers aren’t the problem, I tried my own custom container based on debian:bullseye with only isc-dhcp-server installed; as soon as the dhcpd executable is launched, bridge fast path goes down. I’ve tried attaching my container’s VETH to its own container bridge and to my primary bridge, and both have the same effect. Every other container I’ve run works without any problem at all.

[foobar@ccr2116] /container> print
 0 name="c689df1c-90c8-4d25-b08a-075748030e04" tag="jonasal/kea-dhcp4:2.3" os="linux" arch="arm64" interface=veth_kea 
   envlist="kea_envs" cmd="-c /etc/kea-tmp/kea-dhcp4.conf" root-dir=sata1/container-roots/kea mounts=kea_data dns="" 
   hostname="kea" logging=yes start-on-boot=yes status=stopped 

 1 name="25c60e33-ff95-4064-ad3a-1378435cd970" tag="adguard/adguardhome:latest" os="linux" arch="arm64" 
   interface=veth_adguard envlist="adguard_envs" root-dir=sata1/container-roots/adguard mounts=adguard_work,adguard_conf 
   dns="" hostname="adguard" workdir="/opt/adguardhome/work" logging=yes start-on-boot=yes status=running 
[foobar@ccr2116] /container> /interface/bridge/settings/print
              use-ip-firewall: no
     use-ip-firewall-for-vlan: no
    use-ip-firewall-for-pppoe: no
              allow-fast-path: yes
      bridge-fast-path-active: yes
     bridge-fast-path-packets: 25275082
       bridge-fast-path-bytes: 5526701340
  bridge-fast-forward-packets: 0
    bridge-fast-forward-bytes: 0
[foobar@ccr2116] /container> start 0

[foobar@ccr2116] /container> /interface/bridge/settings/print
              use-ip-firewall: no
     use-ip-firewall-for-vlan: no
    use-ip-firewall-for-pppoe: no
              allow-fast-path: yes
      bridge-fast-path-active: no
     bridge-fast-path-packets: 25275364
       bridge-fast-path-bytes: 5526783890
  bridge-fast-forward-packets: 0
    bridge-fast-forward-bytes: 0

Now, the reason this matters is that if Bridge Fast Path is disabled, so is L3 HW offloading of firewall connections. Does anyone have any idea what might be causing this behaviour, or any ideas for a workaround?

A DHCP server listens for broadcasts and hooks quite low in the network stack. In principle it is a normal layer-7 service, but for technical reasons it hooks in between L2 and L3 in order to process ingress packets, since those might be mistreated by the normal IP stack (the return packets are special as well). So it does seem logical that this part cannot be HW accelerated in any way. Since the L3HW acceleration logic can’t know what kind of packets a generic containerized program wants to catch, the safest action is to disable HW offload altogether as soon as a hook into the network stack below L3 is detected.
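To illustrate what “hooking between L2 and L3” means in practice, here is a minimal sketch of the kind of socket a DHCP server opens on Linux. This is not ISC’s actual code (though dhcpd’s Linux “LPF” backend works along these lines, an AF_PACKET socket plus a classic BPF filter), and “eth0” is a placeholder:

/* Minimal sketch: receiving DHCP frames below the IP (L3) stack. */
#include <stdio.h>
#include <unistd.h>
#include <arpa/inet.h>        /* htons */
#include <linux/if_ether.h>   /* ETH_P_IP */
#include <linux/filter.h>     /* struct sock_filter, SO_ATTACH_FILTER */
#include <net/if.h>           /* if_nametoindex */
#include <sys/socket.h>
#include <linux/if_packet.h>  /* struct sockaddr_ll */

int main(void)
{
    /* Raw packet socket: the kernel hands over whole Ethernet frames
     * before any IP-layer processing happens. */
    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_IP));
    if (fd < 0) { perror("socket"); return 1; }

    /* Bind to one interface ("eth0" is a placeholder). */
    struct sockaddr_ll sll = {0};
    sll.sll_family   = AF_PACKET;
    sll.sll_protocol = htons(ETH_P_IP);
    sll.sll_ifindex  = if_nametoindex("eth0");
    if (bind(fd, (struct sockaddr *)&sll, sizeof(sll)) < 0) {
        perror("bind"); return 1;
    }

    /* Classic BPF program: keep only UDP datagrams to port 67 (the
     * BOOTP/DHCP server port). Offsets assume a 20-byte IP header. */
    struct sock_filter filt[] = {
        { 0x30, 0, 0, 23 },     /* ldb [23]: IP protocol byte     */
        { 0x15, 0, 3, 17 },     /* not UDP (17)? jump to drop     */
        { 0x28, 0, 0, 36 },     /* ldh [36]: UDP destination port */
        { 0x15, 0, 1, 67 },     /* not port 67? jump to drop      */
        { 0x06, 0, 0, 262144 }, /* accept: pass up to 256 KiB     */
        { 0x06, 0, 0, 0 },      /* drop                           */
    };
    struct sock_fprog prog = { sizeof(filt) / sizeof(filt[0]), filt };
    if (setsockopt(fd, SOL_SOCKET, SO_ATTACH_FILTER, &prog, sizeof(prog)) < 0)
        perror("SO_ATTACH_FILTER");

    /* Whatever arrives here still carries its Ethernet header and has
     * bypassed the router's normal IP stack entirely. */
    unsigned char buf[1514];
    ssize_t n = recv(fd, buf, sizeof(buf), 0);
    printf("got %zd bytes below the IP stack\n", n);
    close(fd);
    return 0;
}

From the kernel’s point of view, a socket like this is indistinguishable from a sniffer that might want to see any frame at all, which is presumably why the safest reaction for the offload logic is to punt everything to the CPU.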

I believe it would be possible to keep L3HW offload if ROS knew exactly which packets the DHCP server process needs to receive, and that is very likely the case with ROS’ own DHCP server.

But I think only MT staff can give a definite answer to your question. There is no guarantee they will see this thread, so you may want to open a ticket with support … and post here any answer you receive.

Agreed. Of all the containers I’ve tried, it does seem logical that this would be the one to disable L3HW offload. In this specific case, though, the only offloading I use is for fasttrack connections, so L3 shouldn’t be interfering.
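For context, this is the kind of fasttrack rule I mean, in its typical default-firewall form (not necessarily verbatim my config; the hw-offload flag is what ties fasttracked connections to L3HW on this hardware):

/ip firewall filter
add chain=forward action=fasttrack-connection connection-state=established,related hw-offload=yes
add chain=forward action=accept connection-state=established,related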

I did submit a ticket and they said they’ve been able to reproduce the problem, but they don’t have a solution yet.

I was really hoping to get this working, because the DHCP server is the only aspect of MikroTik’s stack I can’t yet replace with a better container.

It doesn’t matter which kind of offloading: L2 offloading, fasttrack, L3 offloading, anything … for this particular case those packets have to be passed from the hardware to the CPU.