Hi,
I have a pretty standard IPv6 configuration, as detailed below (home, guest and v6only are VLANs). Since a couple of stable updates ago (so not really related to the latest 7.14), I’m getting weird behaviour for clients doing SLAAC address autoconfiguration: it takes a long time for clients to negotiate an address. I don’t know if it is related to the RA announcement periodicity, but it is especially long (several minutes or even more) when the client was already connected and had an IPv6 address before (for example, you turn wifi off and on again). It is not tied to one kind of device, because the same behaviour is happening on phones, laptops and even workstations, and with different OSes (Linux & Mac mainly). Do you know what could be causing this behaviour? The duplicate address detection process maybe?
Definitely something with the RAs. I torched the v6only interface and the communication gets stuck when trying to negotiate an address. And if I edit any parameter under ND (which I guess re-launches the RA process), the address gets allocated blazingly fast.
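For reference, this is roughly where I’ve been poking — a sketch, assuming the VLAN interface is named “v6only” (adjust to your own names):

```
# Sketch, assuming the VLAN interface is named "v6only".
# The RouterOS default ra-interval is 3m20s-10m, which lines up with a
# multi-minute wait before a returning client hears an unsolicited RA.
/ipv6 nd set [find interface=v6only] ra-interval=20s-1m
# Verify the effective ND settings:
/ipv6 nd print
```

Shortening the interval only masks the symptom, of course; clients should normally get an immediate RA in response to their Router Solicitation.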
Anyone with something similar in 7.14? I’m running on a RB4011.
I found what is causing the issue, but I really don’t know why. I recently moved from a two-bridge configuration (one with IGMP snooping + multicast, plus another one for VLAN filtering) to a single bridge with everything: VLAN filtering + IGMP snooping. And that is what is causing the issue: the IGMP snooping.
It seems enabling the permanent multicast router option for IGMP snooping mitigates the issue.
Does anyone know what the root cause for this could be?
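For anyone else hitting this, a sketch of the mitigation (the bridge name “bridge1” is an assumption, use your own):

```
# Assumed bridge name "bridge1". With multicast-router=permanently the
# bridge keeps forwarding multicast (including IPv6 ND) on this interface
# even when snooping has not learned a querier/router port there.
/interface bridge set bridge1 igmp-snooping=yes multicast-router=permanently
```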
It’s a known fact that sub-standard implementations of IGMP snoopers interfere with IPv6 (ND is multicast) … also other vendors have (or used to have) such problems.
Thank you very much indeed @DarkNate, that did the trick. My setup has a particularity, and this was the root cause of the issue. Apart from that set of VLANs, the bridge itself carries another VLAN (not declared in /interface/vlan, as I don’t address it) between an EoIP tunnel and a port inside the bridge, to bring in a network that comes from a different site. As this network has an IGMP querier, it was causing the bridge to leave multicast groups unattended (I already checked with support that this is expected behaviour). Adding IGMP Proxy with the new loopback interface as upstream, the set of VLANs as downstream, plus the bridge itself as an additional downstream interface, did the trick. Now IPv6 runs flawlessly while IGMP Proxy keeps track of the MDB table. Even the DLNA multicast group, which previously disappeared and had to be added manually to the MDB table, is now populated and tracked.
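Roughly, the IGMP Proxy part looks like this (interface names are from my setup; “lo” is the RouterOS v7 built-in loopback):

```
# Upstream towards the loopback, downstream on the local VLANs and on the
# bridge itself. Interface names ("home", "guest", "bridge1") are mine,
# adjust as needed.
/routing igmp-proxy interface add interface=lo upstream=yes
/routing igmp-proxy interface add interface=home
/routing igmp-proxy interface add interface=guest
/routing igmp-proxy interface add interface=bridge1
```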
Perfect setup now, all in one single bridge and with no issues for multicast traffic. I’ll leave the configuration here, in case someone else needs it.
Ummm… I shot the fireworks too early. I must be close, as the MDB table is populated and multicast groups are tracked properly, but neighbor discovery for IPv6 is broken, and the only way I have to get this working is by splitting it into different bridges: the main one for my VLAN addressing & IPv6, a second one for the remote EoIP network + IGMP snooping.
I’m not messing up anything, mate; my purpose is precisely this, to work with a single bridge. But if I’m messing up my IPv6 setup in the process, I’d rather go with two bridges to achieve my goal. As I mentioned, the issue appears when turning on IGMP snooping on the bridge, and I need this feature because I’m bringing in a remote network using EoIP (full L2) on a particular VLAN (eoip-tunnel + ether5 as untagged for VLAN 22) that carries multicast.
It also happens that this remote network has an active querier for the multicast traffic, and it seems this somehow stops the bridge from doing its duty for local multicast on the rest of the VLANs. This is not a guess; it was confirmed by support:
The current limitation of RouterOS IGMP snooping is that there is no VLAN-aware querier. The bridge itself can only generate untagged VLAN queries and when the bridge detects a remote querier on some VLAN, it stops generating the queries. We are looking forward to adding the VLAN-aware querier, but there is no release date available for that.
Obviously, as the remote network has no visibility of the rest of the multicast traffic running on different VLANs, the bridge cannot properly track the MDB table. So here I am with only three possible solutions I can imagine:
Add static entries for all multicast groups to the MDB table. Really painful.
Play with IGMP Proxy: very promising when you mention it, but I cannot make it fully work with this setup, probably because I’m not addressing VLAN 22 myself, as it comes from the other site.
Run these two ports on a separate bridge and only use IGMP snooping there: a crappy solution, but it works just fine.
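For completeness, the first option would mean one entry like this per group per VLAN (bridge name, group, VID and ports below are made-up examples), which is why I call it painful:

```
# One static MDB entry per multicast group per VLAN,
# e.g. the SSDP/DLNA group on VLAN 10:
/interface bridge mdb add bridge=bridge1 group=239.255.255.250 vid=10 ports=ether2,ether3
```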
So please, be considerate when you point out that people are still messing up bridge config after 10 years, because I did my best to avoid running multiple bridges anywhere. And btw, sometimes, I’m still trying!
PIM-SM allows you to intelligently populate the multicast routing table (the mcast database on a MikroTik bridge), and you also end up resolving the issue with BUM traffic in the Ethernet spec.
I posted PIM-SM config on this forum multiple times, you can search my history here and you’ll find the config.
Single bridge, 4000 VLANs or 1 VLAN, simple PIM config, simple MLD/IGMP snooping on the bridge and on any downstream bridges such as an access switch or Wi-Fi AP (also single bridge).
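The shape of such a config on RouterOS v7 is roughly the sketch below; instance and interface names are placeholders, and you still need a reachable RP (via bootstrap or a static RP) for PIM-SM to actually build trees:

```
# Placeholder names; an RP (static or bootstrap-elected) is still required.
/routing pimsm instance add name=pim-1 vrf=main
# One interface template can cover several interfaces:
/routing pimsm interface-template add instance=pim-1 interfaces=home,guest
```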
I will give it a try, thanks! However, what is the point of enabling PIM-SM if I then need to add a manual entry to the MDB so as not to break SLAAC, which is exactly what is happening and what I’m trying to correct?
From my previous export, just add the PIM-SM, replacing the IGMP-Proxy configuration set up previously. I notice the “multicast router” flag is now marked as true in the bridge status tab, which is the same state that actually worked the other day, when enabling this permanently in the bridge configuration fixed it. So it makes sense.
Am I missing anything in the PIM-SM config? VLANs “home” & “guest” are addressed in IPv4 / IPv6, while “v6only” is a VLAN that only has IPv6, as its name indicates. I have not included the loopback interface in the list of interfaces; should I?