Public IPv6 on container interface

I don’t see a straightforward solution to allocating an IPv6 address from a prefix dynamically assigned to me by the ISP. NPTv6 / NATv6 seems like the easiest in terms of administration, but feels wrong.

What approach do you use?

I’m not sure what other way there is, as with DHCPv6-PD the prefix might change unexpectedly. I guess one way to do it is via scripting hooks that run when it does change.
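A minimal sketch of such a hook, assuming the DHCPv6-PD client requests into a pool named “ipv6” (the pool name and the logging action are mine; recent RouterOS exposes $pd-valid and $pd-prefix to the client’s script):

/ipv6 dhcp-client set [ find pool-name="ipv6" ] \
    script=":if (\$pd-valid = 1) do={ :log info \"PD prefix is now \$pd-prefix\" }"

Anything that embeds the prefix (local DNS records, firewall rules) would be updated inside that script body.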

Assigning IPv6 addresses directly to the containers’ veth interfaces is also somewhat unsatisfying.

I can’t try it here for you because $REASONS, but if this container runs on the gateway router (i.e. the one issuing the DHCPv6 PD request to the ISP) then bridging the VETH should work when combined with this:


/ipv6 address
add address=::123 from-pool=ipv6 interface=veth1

That should assign the static IPv6 address ::123, taken from within the PD prefix, to veth1.

The trick then is, how do you contact that same container without knowing the dynamic prefix? What useful effect has this minor success bought you?

The second-level problem is, I see no way to make this work on any other RouterOS box on the network, because they will not know what “from-pool=ipv6” means.

It seems to me that when the prefix changes, lots of things need to change, including your local DNS, which lets you stop caring about which specific IPv6 address a given container got.

New, Improved Plan B: pick a random ULA prefix for this LAN and assign an address from it to the container:


/ipv6 address
add address=fddf:dffd:fdfd::123 advertise=no interface=veth1
add address=fddf:dffd:fdfd::1 advertise=yes interface=bridge

That gets you an unchanging IPv6 prefix for use on the LAN. Not only is there nothing wrong with having multiple IPv6 prefixes on a single LAN, it’s pretty much unavoidable. I typically see around 4 addresses on each interface actively using IPv6.

You need both entries here: the second sends RA messages to the other stations on the LAN, giving them additional addresses under the same ULA prefix, while the first lets LAN hosts address the container bound to veth1 specifically.

(The second also gives ::1 to the router sending these RAs.)

In these examples you assign an address to a router, not a container.

Read more carefully: “interface=veth1”

Router’s side of the veth1 link. The container’s side is set under /interface veth.

Then I must assume you overlooked my suggestion to bridge the VETH.

I must be very dull as I don’t connect the dots.

If you bridge the VETH, then you can assign any IPv6 address to it that you like. The only reasonable way to assign a GUA without knowing the prefix is in my first reply, because that requires access to the PD pool. My second reply gives an alternative using a ULA as a workaround, which you can then do 1:1 NAT to present to the public, if you’re trying to achieve WAN service.

I can assign any IPv6 address to both the router’s and the container’s side without involving bridges, working with the veth interface directly. I can likewise use the ULA + NAT/NPT approach.

I do not see how assignment via from-pool on the router’s side of the veth1 link helps. There is no SLAAC in the container; how do you expect it to obtain the address?

I’m sorry, I really don’t understand how your advice helps…

Before I reply to your individual points, please understand that I have what I suggest above working here to my satisfaction. The only way I see that what I say cannot work for you as well is that you want something outside my understanding, in which case the burden is on you to describe more clearly what it is you’re trying to accomplish.


When you attach the veth to the bridge, there aren’t “two sides.” It’s all one.


There is no SLAAC in the container

SLAAC is a feature of the Linux kernel stack, which in this case is provided by RouterOS. I am telling you to have RouterOS assign an IPv6 address to the container’s veth. Where’s the difficulty?
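For example (a sketch reusing the ULA from my Plan B above; on a veth, the container-side address and default gateway live on the interface itself):

/interface veth set [ find name=veth1 ] \
    address=192.168.88.2/24,fddf:dffd:fdfd::123/64 gateway6=fddf:dffd:fdfd::1

The IPv4 address and gateway6 value here are assumptions; the point is that address= on the veth configures the container’s side of the link.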

You might have to completely reinstantiate your containers to get an address change to take effect. Some elements of this get baked in by RouterOS, such as in /etc/hosts.

I think this is the part that confuses me. Why does adding the veth interface to a bridge make it “all one”? This sentence just does not make sense to me.


On the veth1 link this is the router’s address. What’s the container’s address? Do you suggest it will be set up via SLAAC?

From my observations, SLAAC does not fully work inside RouterOS containers. The container does not appear to actively send Router Solicitation multicasts itself. Even if you add a prefix on the bridge containing the VETH interface, with advertise turned on, you cannot expect the container to get an IPv6 address immediately after it starts: the container only picks an address for itself when it receives a Router Advertisement (RA) message from the bridge. With the default settings that can take up to 10 minutes (the default RA interval is 200-600 seconds).

Furthermore, the container does not use the gateway information from the RA, which means that if you haven’t set “gateway6” on the VETH interface, then even once the container has received the RA, it will still have no default IPv6 route.

Which means if you want the container to immediately be able to go out to the internet with IPv6 you’ll need to:

  • Add a static IPv6 address in the properties of the VETH interface itself, not under /ipv6 address.
  • Set the router’s address on the bridge as gateway6 of the VETH interface.
  • If you don’t have a static global prefix, you’ll have to use a random ULA prefix. In that case you’ll need NAT rule(s) (I prefer a srcnat netmap rule) to map the ULA prefix to the GUA prefix of the outgoing WAN interface.

If you only want SLAAC, you can omit the static IPv6 address on the VETH, but “gateway6” is still required, and of course you’ll have to accept that the container has no IPv6 connectivity for the first few minutes, until it receives the RA multicast.
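A sketch of that minimal SLAAC-only setup (the interface names and the ULA prefix are mine; the advertised prefix sits on the bridge, the VETH only gets gateway6):

/interface veth add name=veth1 address=192.168.88.2/24 gateway=192.168.88.1 \
    gateway6=fd12:3456:789a::1
/ipv6 address add address=fd12:3456:789a::1/64 advertise=yes interface=bridge
/interface bridge port add bridge=bridge interface=veth1

After the first RA arrives, the container picks an address in fd12:3456:789a::/64 and routes IPv6 via the bridge address.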


I use netmap.

Get an IPv6 /64 address for the bridge interface “dockers” from the PD pool, and use its prefix for every container.

/interface bridge add comment="Dockers bridge" igmp-snooping=yes name=dockers vlan-filtering=yes
/interface veth add address=192.168.88.52/24,fd80:1111:2222:3333:192:168:88:52/64 comment="iperf3 docker" gateway=192.168.88.1 gateway6=fd80:1111:2222:3333::1 name=veth-iperf
/interface bridge port add bridge=dockers interface=veth-iperf
/ipv6 address add address=fd80:1111:2222:3333::1 advertise=no interface=dockers
/ipv6 address add address=::1 comment=Dockers from-pool=v6pool interface=dockers
/ipv6 firewall nat add action=netmap chain=srcnat comment=DockerNETMAPv6 src-address=fd80:1111:2222:3333::/64 to-address=240e:1234:5678:1e0c::/64
/ipv6 firewall nat add action=netmap chain=dstnat comment=DockerNETMAPv6Rtn dst-address=240e:1234:5678:1e0c::/64 to-address=fd80:1111:2222:3333::/64

Use a script attached to /ipv6/dhcp-client to update the prefix on the fly:

    
    :local pre [ /ipv6/pool/used get [ find info="dockers" pool="v6pool" ] prefix ]
    # turn the delegated ::/62 prefix string into the ::/64 used for the containers
    :local newpre2 ([ :pick $pre 0 ( [:len $pre] - 1 ) ] . "4")

    /ipv6 firewall nat set [find comment=DockerNETMAPv6 ]    to-address=$newpre2
    /ipv6 firewall nat set [find comment=DockerNETMAPv6Rtn ] dst-address=$newpre2
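One way to attach it (assuming the DHCPv6 client requests into “v6pool”; the script name is mine), so it re-runs whenever the client state changes:

/system script add name=docker-netmap-update source="...the script above..."
/ipv6 dhcp-client set [ find pool-name="v6pool" ] \
    script="/system script run docker-netmap-update"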

The snpt and dnpt mangle actions can also be used instead of netmap (https://github.com/Kentzo/routeros-scripts-custom/blob/9c37e5d76e78b302bb0932e2e4dc5aaf3abf85dc/ipv6-npt.rsc#L72-L110) if connection tracking is not needed.
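A sketch of the equivalent mangle rules for the prefixes used in the netmap example above (same ULA/GUA assumptions; snpt rewrites the source prefix on the way out, dnpt the destination prefix on the way in):

/ipv6 firewall mangle
add chain=postrouting action=snpt src-address=fd80:1111:2222:3333::/64 \
    src-prefix=fd80:1111:2222:3333::/64 dst-prefix=240e:1234:5678:1e0c::/64
add chain=prerouting action=dnpt dst-address=240e:1234:5678:1e0c::/64 \
    src-prefix=240e:1234:5678:1e0c::/64 dst-prefix=fd80:1111:2222:3333::/64

The same prefix-update script would then rewrite these rules instead of the netmap ones when the PD prefix changes.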