mDNS between VLANs with just bridge filters - Look Mum, no containers!

[EDIT: this was my initial version; it has been superseded by the MACVLAN version below.]

I had an issue on a site where I needed devices on VLAN2 to see Chromecasts, AppleTVs and AirPrint printers on VLAN1. Taking some ideas I had while formulating the post about mDNS over WireGuard, I tried it out. Apparently MikroTik have a solution for mDNS in ROS that they are still cooking up, so we’ll have to wait; until then…

I have a CRS354 switch on site doing IGMP snooping and a router doing PIM-SM, and of course this doesn’t help for mDNS between VLANs. I also have some hEX units acting as managed switches (using VLAN filtering) in some rooms, so I tried this on a hEX:

  • The hEX has a VLAN-filtered bridge carrying VLAN1 and VLAN2, tagged on Eth1 and untagged on the other ports as needed.
  • Create a new bridge called BridgemDNS.
  • Create two VLAN interfaces (VLAN1 and VLAN2) whose parent is the main VLAN-filtered bridge (not shown in the export below; see the sketch after this list).
  • Put the ports for the VLANs onto the new bridge and do some filtering.
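The VLAN interfaces themselves aren’t in the export below; as a rough sketch (the main bridge name bridge1 and the VLAN IDs 1 and 2 are assumptions, adjust to your setup), they would be created something like this:

# hypothetical: VLAN interfaces hanging off the main VLAN-filtered bridge
/interface vlan
add interface=bridge1 name=VLAN1 vlan-id=1
add interface=bridge1 name=VLAN2 vlan-id=2

The rest of the BridgemDNS setup: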
/interface bridge
add name=BridgemDNS protocol-mode=none

/interface bridge port
add bridge=BridgemDNS frame-types=admit-only-untagged-and-priority-tagged \
    interface=VLAN1 pvid=1001
add bridge=BridgemDNS frame-types=admit-only-untagged-and-priority-tagged \
    interface=VLAN2 pvid=1001

/interface bridge vlan
add bridge=BridgemDNS untagged=VLAN1,VLAN2 vlan-ids=1001

/interface bridge filter
add action=accept chain=forward comment="Allow mDNS" dst-address=\
    224.0.0.251/32 dst-mac-address=01:00:5E:00:00:FB/FF:FF:FF:FF:FF:FF \
    dst-port=5353 in-bridge=BridgemDNS ip-protocol=udp \
    mac-protocol=ip out-bridge=BridgemDNS src-port=5353
add action=drop chain=forward in-bridge=BridgemDNS \
    out-bridge=BridgemDNS

/interface bridge nat
add action=src-nat chain=srcnat dst-mac-address=\
    01:00:5E:00:00:FB/FF:FF:FF:FF:FF:FF to-src-mac-address=CC:2D:E0:14:64:AD

So you can see in the first part the new bridge is created and the VLAN interfaces are set up to join it as ports with a PVID of 1001. This way the layer 2 traffic from VLAN1 and VLAN2 is connected. It’s vitally important that the drop filter rule is there: without it all the L2 traffic would flow both ways and create havoc. The rule before the drop is the magic one that lets only mDNS traffic through; after that the drop rule blocks all other traffic.

Nothing seemed to happen at this point until I added a bridge NAT rule to SRC-NAT the MAC address of frames being sent out, using the MAC address (CC:2D:E0:14:64:AD) of the main VLAN-filtered bridge (not the mDNS bridge). I think this has to do with IGMP snooping and traffic flooding out egress ports, and with making sure the source MAC is known on that network.
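If you don’t want to copy the MAC by hand, something like this should print it (the bridge name bridge1 is an assumption, use your own):

# prints the MAC address of the main VLAN-filtered bridge
:put [/interface bridge get [find name="bridge1"] mac-address]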

So this seemed to work and mDNS multicast traffic flowed both ways. The network through the main router allows traffic initiated from VLAN2 to go to VLAN1, so AirPlay worked when I connected a MacBook on VLAN2 to an AppleTV on VLAN1.

I did another test to see if I could just allow certain mDNS traffic across.

/interface bridge filter
add action=accept chain=forward comment="Allow mDNS VLAN1" \
    dst-address=224.0.0.251/32 dst-mac-address=\
    01:00:5E:00:00:FB/FF:FF:FF:FF:FF:FF dst-port=5353 in-bridge=BridgemDNS \
    in-interface=VLAN1 ip-protocol=udp mac-protocol=ip \
    out-bridge=BridgemDNS src-mac-address=34:FD:6A:03:A1:8B/FF:FF:FF:FF:FF:FF \
    src-port=5353

add action=drop chain=forward comment="Drop all other mDNS from VLAN1" \
    dst-address=224.0.0.251/32 dst-mac-address=\
    01:00:5E:00:00:FB/FF:FF:FF:FF:FF:FF dst-port=5353 in-bridge=BridgemDNS \
    in-interface=VLAN1 ip-protocol=udp mac-protocol=ip \
    out-bridge=BridgemDNS src-port=5353
    
add action=accept chain=forward comment="Allow mDNS" dst-address=\
    224.0.0.251/32 dst-mac-address=01:00:5E:00:00:FB/FF:FF:FF:FF:FF:FF \
    dst-port=5353 in-bridge=BridgemDNS ip-protocol=udp \
    mac-protocol=ip out-bridge=BridgemDNS src-port=5353

add action=drop chain=forward in-bridge=BridgemDNS \
    out-bridge=BridgemDNS
    
/interface bridge nat
add action=src-nat chain=srcnat dst-mac-address=\
    01:00:5E:00:00:FB/FF:FF:FF:FF:FF:FF to-src-mac-address=CC:2D:E0:14:64:AD
  • The first filter rule lets mDNS traffic from VLAN1->2 across only if the SRC MAC is 34:FD:6A:03:A1:8B, which is a particular AppleTV.
  • The next rule drops all other mDNS traffic from VLAN1->2.
  • The third rule then allows any remaining mDNS traffic, which can only be VLAN2->1, and finally the main drop rule blocks everything else getting across either way, followed by the MAC SRCNAT.

The MacBook at this point could then only see the one AppleTV device, and the AirPrint printer became unavailable.

I am still testing this out but it seems solid enough. I didn’t assign any IP addresses to the VLAN interfaces. There might be unintended consequences to doing this, even though the packet flow diagram shows bridged packets are handled before IP.

I’d suggest trying this out on an independent RouterBOARD device on your network, as I have, and not on your main router and switches.

Great work! Very clever. You’ve been at this problem for a while now :wink:.

Very minor nit on the example. The BridgemDNS is a “dumb” switch (i.e. vlan-filtering=no). And maybe the :export does this, but the frame-types & pvid & VLAN assignment should NOT be needed (and do nothing):

/interface bridge port
add bridge=BridgemDNS frame-types=admit-only-untagged-and-priority-tagged \
    interface=VLAN1 pvid=1001
add bridge=BridgemDNS frame-types=admit-only-untagged-and-priority-tagged \
    interface=VLAN2 pvid=1001

/interface bridge vlan
add bridge=BridgemDNS untagged=VLAN1,VLAN2 vlan-ids=1001
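i.e. on the dumb bridge, plain port adds should be all that’s needed (untested on my side):

/interface bridge port
add bridge=BridgemDNS interface=VLAN1
add bridge=BridgemDNS interface=VLAN2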

But I like this approach & on a hEX… you don’t have the option of containers…

I always figured there was a way to do mDNS reflecting with bridge filters. Bridge filters are a really useful tool and they have solved some tricky problems for me before.

I’ll disable the VLAN 1001 stuff later and see what happens. As you say, it could be vestigial at this point and can be removed.

I will add that enabling a bridge filter may well disable any hardware switching, so be warned! It’s not an issue for a device like the hEX, which usually does everything in software anyway. I think there are low-complexity cases on a hEX where the bridge is happy to let the switch chip do the legwork.
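If you want to check whether the switch chip is still doing the work after adding filter rules, the H flag in the port list is a quick (if not exhaustive) indicator:

# ports that are still hardware-offloaded show an "H" flag in the output
/interface bridge port print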

I was tooling through the Help (it likes to change unannounced from time to time) and noticed and read about MACVLAN. Of course it’s been around for a few months as a tab in WinBox, but I never looked into it. This interface makes it possible to do this bridge filtering technique ON YOUR MAIN ROUTER, with no offsider router like I used in the OP.

The cool thing about MACVLAN is it gives you another MAC address and interface endpoint hanging off an existing ethernet or VLAN interface. This is just awesome because until now you couldn’t add another VLAN interface to a bridge with the same VLAN ID.

My main VLAN is 100 and a test VLAN is 101. I joined a MACVLAN interface to each VLAN interface and added the mDNS bridge from above. Voilà, it works. My phone on 101 can now see the Chromecast on 100 and control it.

  • I have VLAN100 and VLAN101 interfaces with their subnet IP addresses and normal L3 routing and filtering - this is where all the main traffic between a device and the Chromecast goes after discovery. I had to disable the DROP rule I had that blocked all but established and related traffic from 101 → 100 (see the sketch after this list).
  • I added MACVLANs to each VLAN and joined them on a common (non-VLANed) bridge with bridge filtering. Bridge NAT makes sure the source MAC address is valid on that segment.
  • Just to be clear, my main bridge, which the VLAN interfaces hang off, is VLAN-filtered.
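For illustration only (interface names are from my setup; the exact rule depends on your firewall layout), the kind of L3 forward rule that has to exist so a client can actually connect to the device after discovery looks something like this:

# hypothetical: after discovery the phone on VLAN101 must be able to open new
# connections towards the Chromecast on VLAN100, so new 101 -> 100 traffic
# can't all be dropped
/ip firewall filter
add chain=forward action=accept in-interface=VLAN101 out-interface=VLAN100 \
    comment="Allow VLAN101 clients to reach cast devices on VLAN100"

The mDNS bridge side of the config: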
/interface bridge
add name=BridgemDNS protocol-mode=none

/interface macvlan
add interface=VLAN100 name=macvlan100
add interface=VLAN101 name=macvlan101

# join the MACVLANs to the mDNS bridge as ports
/interface bridge port
add bridge=BridgemDNS interface=macvlan100
add bridge=BridgemDNS interface=macvlan101

/interface bridge filter
add action=accept chain=forward comment="Allow mDNS only" dst-address=\
    224.0.0.251/32 dst-mac-address=01:00:5E:00:00:FB/FF:FF:FF:FF:FF:FF \
    dst-port=5353 in-bridge=BridgemDNS ip-protocol=udp \
    mac-protocol=ip out-bridge=BridgemDNS src-port=5353
add action=drop chain=forward in-bridge=BridgemDNS comment="Drop all other L2 traffic" \
    out-bridge=BridgemDNS
    
/interface bridge nat
add action=src-nat chain=srcnat dst-mac-address=\
    01:00:5E:00:00:FB/FF:FF:FF:FF:FF:FF to-src-mac-address=48:A9:8A:EF:61:03 \
    comment="Use your primary bridge MAC address here"
  • The thing about this technique is you don’t need a container running some reflector like Avahi and it’ll work even on the puniest SMIPS device.
  • You can make bridge filter rules that block certain MAC addresses (so you can just allow the mDNS ads from only a printer and not your other gadgets for example).
  • Technically it’s more efficient than a container as you obviously don’t need the resources of a container, but mainly all the packet management is done in kernel space rather than user space.

Can you add more VLANs into the mix? It’s untested, but why not? All you need to do is add a MACVLAN interface for each additional VLAN and join it to the mDNS bridge. If you’re keen you can make ACCEPT/DROP rules that only allow particular MACs to traverse between particular VLANs by adding rules with in-interface and out-interface.
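As an untested sketch, a third VLAN (say VLAN102, with macvlan102 hanging off it) plus rules that only let a made-up printer MAC (AA:BB:CC:DD:EE:FF) advertise out of it could look like this; both rules need to sit above the generic accept and the final drop, mirroring the per-AppleTV example in the OP:

/interface macvlan
add interface=VLAN102 name=macvlan102

/interface bridge port
add bridge=BridgemDNS interface=macvlan102

/interface bridge filter
add action=accept chain=forward comment="mDNS from printer on VLAN102 only" \
    dst-address=224.0.0.251/32 dst-mac-address=01:00:5E:00:00:FB/FF:FF:FF:FF:FF:FF \
    dst-port=5353 in-bridge=BridgemDNS in-interface=macvlan102 ip-protocol=udp \
    mac-protocol=ip out-bridge=BridgemDNS src-port=5353 \
    src-mac-address=AA:BB:CC:DD:EE:FF/FF:FF:FF:FF:FF:FF
add action=drop chain=forward comment="Drop all other mDNS from VLAN102" \
    dst-address=224.0.0.251/32 dst-mac-address=01:00:5E:00:00:FB/FF:FF:FF:FF:FF:FF \
    dst-port=5353 in-bridge=BridgemDNS in-interface=macvlan102 ip-protocol=udp \
    mac-protocol=ip out-bridge=BridgemDNS src-port=5353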

This really works. Very elegant compared to a container (no dependency on third-party software). Thank you!

/interface macvlan add interface=vlan10 name=macvlan10
/interface macvlan add interface=vlan80 name=macvlan80

/interface bridge add name=bridge-mdns protocol-mode=none
/interface bridge port add bridge=bridge-mdns interface=macvlan10
/interface bridge port add bridge=bridge-mdns interface=macvlan80

/interface bridge filter add action=accept chain=forward comment="Allow mDNS only" \
    dst-address=224.0.0.251/32 dst-mac-address=01:00:5E:00:00:FB/FF:FF:FF:FF:FF:FF \
    dst-port=5353 in-bridge=bridge-mdns ip-protocol=udp mac-protocol=ip \
    out-bridge=bridge-mdns src-port=5353
/interface bridge filter add action=drop chain=forward comment="Drop all other L2 traffic" \
    in-bridge=bridge-mdns out-bridge=bridge-mdns

/interface bridge nat add action=src-nat chain=srcnat \
    dst-mac-address=01:00:5E:00:00:FB/FF:FF:FF:FF:FF:FF \
    to-src-mac-address=[/interface bridge get [find name="bridge"] mac-address] \
    comment="SNAT to Primary VLAN bridge"

Is there any (unintended?) downside doing it this way?

So far I don’t think so, though I am happy to hear arguments against doing it this way. I can only think of a misconfiguration, or getting the last drop rule wrong (or disabling it), causing issues.

Looks hacky to me. Why not just use PIM-SM? I’ve shared a PIM-SM config sample on this forum a few times; it works on the latest ROS v7 stable.

Hi DarkNate, I tested your suggestion about PIM-SM but it was not working with printers, Chromecast, etc…
Support said that we need a multicast repeater (I will paste their answer if needed).
Does it work for you?

Share support’s full reply. I didn’t know MikroTik multicast inter-VLAN routing was so messy.

/routing pimsm instance add name="PIM-SM" disabled=no [+ Bridge set multicast-querier=no]
/routing pimsm interface-template add instance="PIM-SM" interfaces=LAN-VLAN,IoT-VLAN source-addresses=10.0.3.60

Are those two lines of config enough for the printer connected to IoT-VLAN with IP 10.0.3.60 to be discovered from LAN-VLAN?


mDNS cannot be routed between networks using IGMP proxy or PIM, because it uses link-local (non-routable) multicast destination IP address.
And RouterOS natively does not support mDNS proxy, unfortunately.

PIM-SM won’t pass IPv4 mDNS. Lord knows I have tried to force it, to no avail. PIM-SM works fine for a Chromecast, which uses mDNS alongside discovery that is compatible with PIM-SM, but I could not get other devices discovered, such as a printer or AirPlay. I have had PIM-SM working on VLAN setups and across WireGuard and ZeroTier links. In all cases, unless I set up something with bridge filtering (like my other WireGuard/EoIP example and now this one with MACVLANs), the IPv4 mDNS traffic can’t get across.

Is it hacky? I don’t think so, at least for small situations with only a handful of VLANs, which, let’s face it, covers 90%+ of interested users here. I would be very wary of using it on larger sites with large volumes of mDNS traffic without being selective with MAC filtering and rate limiting in the bridge filter rules.

Until MikroTik spins up something in ROS that does mDNS repeating, this is all there is unless you get containers involved, which are limited to ARM and x86 devices.

Anyone played with this?

https://help.mikrotik.com/docs/display/ROS/Group+Management+Protocol

Thanks for your suggestion. 224.0.0.0/24 doesn’t appear to work even when added to GMP probably due to the 2nd paragraph in https://www.iana.org/assignments/multicast-addresses/multicast-addresses.xhtml.

I could be in error, not the 1st time. You’re the protocol expert in this forum though. Tell me what I am doing wrong.

Haha, I’m not an expert, unlike other wannabe-experts in this forum or the industry in general; I’m just a guy who loves to play with networks.

I’m not sure why it works or doesn’t work yet; I haven’t had time to deep dive into multicast routing. But I do hope someone with time can properly build a “clean” solution for inter-VLAN multicast routing and a “clean” way to do mDNS/link-local multicast across VLANs.

Or maybe the mDNS/multicast discovery IPv6 specs could be updated through the IETF in the future to allow an official inter-VLAN routing flag or procedure, with proper security measures in place at the protocol level. To my knowledge, such a thing doesn’t exist yet.

I put 224.0.0.251 into GMP and it does not show up in the MDB table in the switch.

I put 224.0.1.251, 224.1.1.251 and 224.1.0.251 into GMP at the same time and they do show up in the MDB table in the switch.
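For reference, I’m checking those entries with the bridge multicast database on the switch:

# multicast groups known to the bridge (learned by snooping or added statically)
/interface bridge mdb print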

I am pretty sure MikroTik are following convention and not allowing 224.0.0.0/24 to work with the PIM-SM and IGMP protocols. I did ask them to break convention and add a flag to allow it with PIM-SM but have had no reply yet.

The scope of the experiments I have done with mDNS and the bridge filter technique only covers IPv4, not IPv6 (and its ff02::fb address), as most people, including myself, seem to be eschewing IPv6 routing for now, and most mDNS is still on IPv4.

I suspect any luck you have had with PIM-SM and mDNS is because you are routing IPv6 and ff02::fb is working for you and your devices that use IPv6 for mDNS.

As for the bridge filter technique, the most basic version here should reflect mDNS between member ports on the mDNS bridge, and in effect it really isn’t different from user space programs like https://github.com/Gandem/bonjour-reflector/ that just copy the packets between VLANs on a single ethernet interface.

I am no Go expert, but from reading the code, the bonjour-reflector (B-R) project checks the DNS QR flag in the body of the packet and floods it to all the known device origin VLANs in the .toml file. If the QR flag is not set it checks the SRC MAC of the packet and floods it only to the VLANs defined for a valid device MAC.

  • Bridge-filter (B-F) cannot check for this QR flag and will flood all interfaces in the bridge with an mDNS packet.
  • B-R can only deal with VLANs, but B-F will work with any kind of interface that can be a port member of a bridge, not just VLANs.
  • B-F can also be made fine-grained by only allowing certain SRC MACs through and (untested) possibly only allowing SRC MACs out on limited bridge ports using interface lists.

B-R also rewrites the source MAC address of the packet to that of the ethernet interface it’s being reflected out on. The bridge filter technique does the same by SRCNATing the packet with the MAC address of the main bridge (not the mDNS bridge).

Looking at the ROS bridge filter options you could make it more fine grained too by setting filter rules that allow or deny based on:

  • Traffic between particular interfaces with the in/out interface and bridge lists
  • SRC MAC addresses
  • Packet marks
and by the looks of it you can control the rate too, which B-R can’t do (a quick sketch using interface lists follows).
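An untested sketch of the interface-list idea (the list name and the macvlan100 port are made up for illustration):

/interface list
add name=mDNS-trusted
/interface list member
add interface=macvlan100 list=mDNS-trusted

# only accept mDNS arriving from ports in the trusted list; everything else
# still falls through to the final drop rule
/interface bridge filter
add action=accept chain=forward comment="mDNS from trusted ports only" \
    dst-address=224.0.0.251/32 dst-mac-address=01:00:5E:00:00:FB/FF:FF:FF:FF:FF:FF \
    dst-port=5353 in-bridge=BridgemDNS in-interface-list=mDNS-trusted \
    ip-protocol=udp mac-protocol=ip out-bridge=BridgemDNS src-port=5353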

What I am getting at is that the B-F method can do the same thing as B-R with some additional rules, but without the headache of containers, and it’s ROS native.

Yes, all my home networks/devices and production network are 100% IPv6-enabled/deployed/only/mostly.

I stopped wasting my time on legacy IPv4 years ago. I would suggest you play with IPv6 multicast routing going forward. IPv4 should, one day, be removed from the network stack.

While I agree with your sentiments wholeheartedly, MANY ISPs still do not support IPv6… very sad to say… My old ISP [Rogers] did support IPv6 but my new ISP [Bell] does not so far. I am hopeful that my new ISP will have a change of heart, considering the US Gov has mandated that all of its communications will be IPv6 by 2025.

mDNS uses link-local IPv6…

Just tested it with my home setup. 3 different VLANs and it works perfectly. I got rid of a small VM running Avahi to do the same thing.

@UpRunTech

you mention: “Just to be clear, my main bridge which the VLAN interfaces hang off is VLAN-filtered.”

What do you mean by “my main bridge”? I have a router with multiple interfaces and each interface goes to a switch; I don’t have any other bridge. Not sure if you could lend me a hand. I tried using your approach with a single bridge, but… No Way Jose.

Thanks in advance