Hi, I had a similar problem on my RB5009: multicast stopped working after about 5 minutes and I had to trigger a multicast query again. I solved this by enabling “Multicast Querier” on the bridge; after that it started working correctly. I was previously using an RB760 without this option enabled and it worked fine there.
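For reference, this is the setting in question on ROS v7 (the bridge name `bridge1` is just an example, adjust to your setup; `multicast-querier` only takes effect with `igmp-snooping=yes`):

```
/interface bridge set bridge1 igmp-snooping=yes multicast-querier=yes
```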
Just be aware that the ROS bridge IGMP querier is not VLAN aware:

> Only untagged IGMP/MLD general membership queries are generated. IGMP queries are sent with IPv4 0.0.0.0 source address; MLD queries are sent with the IPv6 link-local address of the bridge interface. The bridge will not send queries if an external IGMP/MLD querier is detected (see the monitoring values igmp-querier and mld-querier).
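The monitoring values mentioned there can be checked on the bridge itself (again assuming the bridge is named `bridge1`):

```
/interface bridge monitor bridge1
```

If an external querier is present on the segment, it shows up in the igmp-querier / mld-querier fields and the bridge suppresses its own queries.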
If you have VLAN filtering active on the bridge together with IGMP snooping, learning and multicast forwarding work properly per VLAN, as visible in the MDB stats of the bridge.
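You can inspect the per-VLAN multicast database entries like this (ROS v7):

```
/interface bridge mdb print
```

Each learned group is listed with its VLAN ID and the ports it is forwarded to.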
But if the bridge acts as querier, the generated IGMP queries are always sent untagged. So if you have multicast clients on tagged VLANs on the bridge, they will not receive queries and their multicast forwarding entries will still time out.
There are also multicast devices that silently drop IGMP queries coming from IP source 0.0.0.0.
This is quite a stupid limitation, which might be fixed in a future version according to MT support.
AFAIK IPv6 RA uses the IPv6 multicast group ff02::2. This is link-local and as such, according to the docs, always flooded, independent of MLD snooping.
But still it seems the MLD querier is required to keep the ff02::2 MDB entries alive when L2 hw offload is enabled.
With bridge L2 hw offload, multicast learning/forwarding is handled by the RB5009 switch chip; without it, by the ROS bridge software on the CPU.
IMHO the ROS SW behaviour without L2 hw offload is correct (not requiring a querier for link-local groups, as they are flooded anyway).