DLNA over GRE ?

Hello,
I don’t know if it is possible, but how can I see a DLNA server at location A from location B and stream videos?

I have the following setup:

Site A
LAN: 10.10.1.0/24
GRE interface over IPsec with IP 172.16.1.1
All traffic is forced through GRE by using connection marks & routing marks
This network has a DLNA server on IP 10.10.1.4

Site B
LAN: 192.168.88.0/24
GRE interface over IPsec with IP 172.16.1.2

This configuration works just fine.
I tried to add PIM interface on both sides with RP 172.16.1.1 but RP status is always “not joined.”

Can someone help me? I can’t find anything related on the internet.
Thank you in advance.

I suspect this is where your problem is coming from.
PIM is basically a routing protocol - and it relies on reverse-path reachability to route packets AWAY from the source.
I don’t have much hands-on experience with multicast, but if PIM is doing its reverse-path lookup in your main routing table and doesn’t see a route back to the 10.10.1.4 address there, that’s likely your issue.

If your only use for route marking is to force all traffic through GRE, then perhaps you could get rid of the route marking in favor of a slightly easier-to-manage system:

Change the default GW to be the GRE tunnel, and create a static /32 route to the remote GRE peer’s public IP.
This would accomplish the same task, but leave everything in the main routing table.

Hello,
thank you for your response.
I tried your suggestion but the GRE interface does not come up.
The IPsec connection is established, though.
Is it possible that I’m missing something… a route maybe?

In any case… it should work with the current configuration.
I can ping site A internal LAN from site B and vice-versa.
I can ping GRE interface from site A or B.
Routing is done properly.

Check the picture below with my current configuration:

Yep - you’ll need two static routes:
dst=IPSEC_peer/32 gateway=192.168.1.1
dst=192.168.88.0/24 gateway=192.168.1.1

Finally, make the GRE interface the primary default GW:
dst=0.0.0.0/0 gateway=gre-tunnel1

If you want the Internet to work whenever GRE is down, put a floating backup static default GW:
dst=0.0.0.0/0 gateway=192.168.1.1 distance=254

Then you should be able to disable all of the route marking rules.

As for the PIM, make sure that PIM is enabled on bridge1 and gre-tunnel1 interfaces.
You shouldn’t need IGMP on the gre-tunnel1 interface, but it probably won’t hurt if it’s on, either.
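On RouterOS v6 that would be something along these lines (a sketch only - the interface names bridge1 and gre-tunnel1 are the ones mentioned above, and the RP address is the one from this thread; adjust to your setup):

```
/routing pim interface
add interface=bridge1
add interface=gre-tunnel1
/routing pim rp
add address=172.16.1.1
```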


Finally, if you can’t get GRE up and running without route marking, you can statically configure the MRIB (multicast routing table) to have the correct interfaces / default route.

Hmm, the GRE interface is up now.
PIM is still not working.
I added it on the bridge and GRE interfaces as you said.
Added the RP with IP 172.16.1.1 (site A’s GRE IP) on both sides.

Site A:

Site B:

Could my masquerade-to-GRE NAT rule affect this?

I would only masquerade traffic that was outbound on the WAN interface at both Mikrotiks.
Let internal GRE traffic pass without NAT (the whole point of a VPN is usually to have full internal connectivity).
I saw where you have OSPF running, so I’m sure you have full reachability in your routing table.

You should check the PIM neighbor status at both ends. Make sure they’re both seeing each other’s PIM messages.

Make sure that no filter rules are blocking PIM / IGMP / Mcast packets on the GRE interfaces, too.
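If you want to be explicit about it, you could try temporarily putting accept rules at the top of the input chain on both routers - something like this (I’m assuming RouterOS accepts pim as a protocol name here; protocol 103 otherwise):

```
/ip firewall filter
add chain=input action=accept protocol=pim in-interface=gre-tunnel1 place-before=0
add chain=input action=accept protocol=igmp in-interface=gre-tunnel1 place-before=0
```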

I can see in PIM the other router as neighbor.
No traffic rules are in place.
Changed masquerade only for WAN traffic.

How can I test for PIM messages?
If I torch the GRE interface I do not see any IGMP passing. Is this the way?
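One way to look for PIM hellos directly is the packet sniffer rather than torch - roughly like this (I’m not certain every RouterOS version accepts pim as an ip-protocol value; if not, filter on protocol number 103):

```
/tool sniffer quick interface=gre-tunnel1 ip-protocol=pim
```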

Thank you!

I would think you would only see PIM messages on the GRE interface (but again, I’m new at multicasting myself).
And just making sure - at this point, you should have the mangle rules about route-marking disabled, right?
Check the MRIB to see how each router wants to route multicast. Remember that multicast routing routes away from the source address: joins should be pushed away from the source, towards the RP. (Make sure your RP address covers 239.255.255.250, which appears to be the group your application is using.)

Also - make sure the input filter chain of both routers doesn’t block PIM from each other.

Route marking rules are disabled.
MRIB looks ok.

Tried to add RP 172.16.1.2 instead of 172.16.1.1, and group 239.255.255.250 with source 0.0.0.0 appeared on both routers with join state “joined”, but my minidlna server is still not visible from site B. :(


If you’re not going to scale this to more sites, you could try the “poor man’s” way - remove PIM and put IGMP proxy on site B with the LAN as the “downstream” side and the GRE interface as the “upstream” side. On site A, configure GRE as the “downstream” side and the LAN (where the DLNA server is) as the “upstream” side and see if that works.
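A rough sketch of that IGMP-proxy layout on RouterOS (interface names are the ones used in this thread; the alternative-subnets entry on site B is my assumption, needed because the multicast source 10.10.1.4 is not on the upstream interface’s own subnet):

```
# Site B (where the clients are)
/routing igmp-proxy interface
add interface=gre-tunnel1 upstream=yes alternative-subnets=10.10.1.0/24
add interface=bridge1

# Site A (where the DLNA server is)
/routing igmp-proxy interface
add interface=bridge1 upstream=yes
add interface=gre-tunnel1
```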

Also - make sure that your ip filters aren’t dropping the packets.
(as a test, put a forwarding rule that allows udp dst-address=224.0.0.0/4)
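Spelled out as a full rule placed at the top of the forward chain, that test would look something like:

```
/ip firewall filter
add chain=forward action=accept protocol=udp dst-address=224.0.0.0/4 place-before=0
```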

Hmm,
I see some small traffic on site A’s upstream & downstream interfaces, but on site B it is 0.
On site B I have some NAT rules like dst-nat to different hosts and src-nat to my internal networks like 192.168.88.0/24, 172.16.1.1, 10.10.1.0/24.

And that’s it.

I really don’t understand what is happening. :confused:

Post the results of this command:
/ip firewall nat export

add chain=srcnat comment="IPSec Rule" dst-address=10.10.1.0/24 src-address=192.168.88.0/24
add action=dst-nat chain=dstnat comment=DNS5 dst-address=WAN_IP dst-port=80 protocol=tcp to-addresses=192.168.88.253 to-ports=80
add action=dst-nat chain=dstnat comment=DNS dst-address=WAN_IP dst-port=53 protocol=tcp to-addresses=192.168.88.253 to-ports=53
add action=dst-nat chain=dstnat dst-address=WAN_IP dst-port=53 protocol=udp to-addresses=192.168.88.253 to-ports=53
add action=dst-nat chain=dstnat comment=RAILGUN dst-address=WAN_IP dst-port=2408 protocol=tcp to-addresses=192.168.88.253 to-ports=2408
add action=dst-nat chain=dstnat comment=CDN dst-address=WAN_IP dst-port=8080 protocol=tcp to-addresses=192.168.88.253 to-ports=8080
add action=dst-nat chain=dstnat comment=mikrotik dst-address=WAN_IP dst-port=5051 protocol=tcp to-addresses=10.10.1.1 to-ports=5050
add action=dst-nat chain=dstnat comment=EMAIL dst-address=WAN_IP dst-port=110 protocol=tcp to-addresses=192.168.88.253 to-ports=110
add action=dst-nat chain=dstnat dst-address=WAN_IP dst-port=995 protocol=tcp to-addresses=192.168.88.253 to-ports=995
add action=dst-nat chain=dstnat dst-address=WAN_IP dst-port=993 protocol=tcp to-addresses=192.168.88.253 to-ports=993
add action=dst-nat chain=dstnat dst-address=WAN_IP dst-port=585 protocol=tcp to-addresses=192.168.88.253 to-ports=585
add action=dst-nat chain=dstnat dst-address=WAN_IP dst-port=465 protocol=tcp to-addresses=192.168.88.253 to-ports=465
add action=dst-nat chain=dstnat dst-address=WAN_IP dst-port=25 protocol=tcp to-addresses=192.168.88.253 to-ports=25
add action=dst-nat chain=dstnat dst-address=WAN_IP dst-port=2087 protocol=tcp to-addresses=192.168.88.253 to-ports=2087
add action=dst-nat chain=dstnat dst-address=WAN_IP dst-port=22 protocol=tcp to-addresses=192.168.88.253 to-ports=22
add action=dst-nat chain=dstnat dst-address=WAN_IP dst-port=143 protocol=tcp to-addresses=192.168.88.253 to-ports=143
add action=dst-nat chain=dstnat comment="Raspberyy over IPSec (CDN)" dst-address=WAN_IP_SECONDARY dst-port=8080 protocol=tcp to-addresses=10.10.1.5 to-ports=443
add action=dst-nat chain=dstnat comment="Raspberyy over IPSec" dst-address=WAN_IP_SECONDARY dst-port=443 protocol=tcp to-addresses=10.10.1.5 to-ports=443
add action=dst-nat chain=dstnat comment="Raspberyy over IPSec" dst-address=WAN_IP_SECONDARY dst-port=80 protocol=tcp to-addresses=10.10.1.5 to-ports=80
add action=dst-nat chain=dstnat comment="Raspberyy over IPSec" dst-address=WAN_IP_SECONDARY dst-port=22 protocol=tcp to-addresses=10.10.1.3 to-ports=22
add action=dst-nat chain=dstnat comment="Raspberyy over IPSec" dst-address=WAN_IP_SECONDARY dst-port=21 protocol=tcp to-addresses=10.10.1.3 to-ports=21
add action=dst-nat chain=dstnat comment="Raspberyy transmission" dst-address=WAN_IP_SECONDARY dst-port=9091 protocol=tcp to-addresses=10.10.1.3 to-ports=9091
add action=src-nat chain=srcnat comment="NAT FOR DNS5" src-address=192.168.88.253 to-addresses=WAN_IP
add action=src-nat chain=srcnat comment="IPSec for RASPBERRY" dst-address=10.10.1.3 to-addresses=192.168.88.1
add action=src-nat chain=srcnat comment="IPSec for RASPBERRY" dst-address=10.10.1.5 to-addresses=192.168.88.1
add action=src-nat chain=srcnat comment="NAT Remaining" src-address=192.168.88.0/24 to-addresses=WAN_IP
add action=src-nat chain=srcnat comment="NAT Remaining" src-address=172.16.1.1 to-addresses=WAN_IP
add action=src-nat chain=srcnat comment="NAT Remaining" src-address=10.10.1.0/24 to-addresses=WAN_IP

I’m not sure what the RASPBERRY NAT rules are for - unless the Raspberry is involved in the multicasting, those rules shouldn’t matter…

Rule 2 is more or less covered by rule 5, so rule 2 isn’t necessary…

Rule 6 or 7 is probably what’s messing things up.

Remove rules 1, 2, 5, 6, and 7 and replace them with these two as rules 1 & 2:

chain=srcnat action=accept out-interface=WAN protocol=gre src-address=192.168.88.0/24 dst-address=10.10.1.0/24
chain=srcnat action=masquerade out-interface=WAN

This accomplishes your goal of “nat what’s going to the Internet, but leave everything else alone” by explicitly requiring that packets be going out the WAN interface in order to be considered for srcnat. Traffic going across the GRE tunnel will not be going out the WAN interface, but out the gre-tunnel1 interface, so it won’t be subject to src-nat this way, even if you add more networks in the future.

Mind that many DLNA sources send multicast with TTL set to 1, so whenever there is a routing hop in between, the packet ages out. You need to mangle it to a higher TTL.
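On the router nearest the server, that mangle could look roughly like this (the group address is the one seen earlier in this thread, and the TTL value of 5 is an arbitrary choice of mine - anything larger than the number of routing hops works):

```
/ip firewall mangle
add chain=prerouting dst-address=239.255.255.250 action=change-ttl new-ttl=set:5 passthrough=yes
```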

Hello paoloaga,
I tried your way but it does not work.

I switched back to PIM.
Something is wrong on site B: I see the connections, but the join state is “rpt pruned”; on site A the state is “joined”.

For anyone reading this thread in the future - the issue was srcnat. Site A was performing srcnat before sending traffic across the tunnel, so the source address on the multicast packets wasn’t the one the IGMP joins were requesting.