Today I upgraded my RB5009 to v7.20.2. Everything is fine except containers;
when I try to start one, it fails with "could not acquire interface".
I did some testing and found this strange behavior:
If I leave the VETH IPv4 gateway empty [only setting the IPv4/v6 addresses], the container can start successfully, but no IPv4 route is set [IPv6 uses ND, so it can still find a route].
Changing the VETH name also changes the NIC name inside the container [e.g. set the VETH name to FIRSTVETH, start the container, shell into it, and `ip a` shows the NIC named "FIRSTVETH"]. In v7.19 the NIC inside the container was always eth0, unaffected by the outside VETH name.
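For reference, the kind of VETH + container setup being tested looks roughly like this (a minimal sketch; all addresses, the veth name, and the root-dir are made-up placeholders, not my actual config):

```
# create a veth with both an address and a gateway set
/interface/veth/add name=veth1 address=172.17.0.2/24 gateway=172.17.0.1
# attach a clean alpine container to it
/container/add remote-image=library/alpine:latest interface=veth1 root-dir=disk1/alpine
```

The bug above is what happens when the `gateway=` parameter is left empty versus set.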
Anyone know how to fix this error? My containers are now running in "IPv6 only" mode :0. Maybe I should downgrade to 7.19…
Latest: I have downgraded to v7.19.4. Now the problem is that containers won't auto-set the v4 gateway + v4 scope-link route and the v6 gateway; I have to set them manually. This didn't happen before.
That didn't work; I used the default names veth1~5 but it made no difference. I also tested an all-letter naming scheme, and the IPv4 default route and IPv4 link-local route still cannot be generated for the container.
Also, when I was testing v6 reachability I noticed that the v6 route was not added [even though the route was explicitly declared on the VETH]; only the v6 address appeared once ND received it. In this state traceroute6 or ping to a v6 address immediately reports unreachable, and I still have to add the v6 route manually for it to work.
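For anyone hitting the same thing, "manually add the v6 route" means something like this from a shell inside the container (a sketch only; the link-local gateway fe80::1 and eth0 are placeholders for whatever your veth actually uses):

```
# inside the container: restore the missing IPv6 default route
ip -6 route add default via fe80::1 dev eth0
```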
OK, I have just downgraded to 7.19.4 [7.19.6 still has this problem, or 7.20.2 brought this error back to the lower version], but unfortunately, even after returning to 7.19.4, IPv4 routes cannot be generated properly.
I tried reallocating IPv4 addresses/routes; switching the VETH bridge to the LAN bridge; deleting the VETH and creating new ones; downloading a clean raw system image [library/alpine:latest] and assigning it a new VETH. The result is always the same: no IPv4 route is assigned [running `ip route` inside the container returns nothing].
What's interesting, though, is that 7.20.2 seems to improve the VETH's IPv6 RA behavior: containers now retrieve IPv6 addresses via SLAAC and discover routes noticeably faster.
For now, my temporary workaround is an extra script for the container that manually writes the kernel scope-link route + default route into the container's IPv4 routing table ahead of time.
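A minimal sketch of that workaround, assuming the veth subnet is 172.17.0.0/24 with gateway 172.17.0.1 and the container NIC is eth0 (all placeholders, substitute your own values):

```
# run inside the container at startup to replace the missing auto-set routes
ip route add 172.17.0.0/24 dev eth0 scope link   # kernel scope-link route
ip route add default via 172.17.0.1 dev eth0     # IPv4 default route
```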
Are you renaming the veth out from under an existing container and expecting it to work with just a restart? Changing the network config requires a complete rebuild of the container:
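Something along these lines (a sketch only; the container number, image, and root-dir are placeholders):

```
# stop and remove the old container, then re-add it on the renamed veth
/container/stop 0
/container/remove 0
/container/add remote-image=library/alpine:latest interface=veth1 root-dir=disk1/alpine
/container/start 0
```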
That also doesn't work for me. I have tested renaming / recreating the VETH / rebooting the container / rebuilding the container / changing the container interface; none of them auto-set the container's v4+v6 default routes + scope-link route [v4].
I think that if you are using containers, versions 7.20, 7.20.1 and 7.20.2 must be avoided. There are many cool new container features in 7.20, but for now there are many unresolved bugs when upgrading. Not sure whether 7.20.3 will fix the container bugs or we must wait for 7.21.
Yes, there are many issues with the 7.21 beta. I am using 7.20.2 with containers; it works with most of them, but not one. However, I needed to reinstall and reconfigure all containers when upgrading from 7.19.2 because of a "could not load config.json" error.
Don't try the 7.21 beta until it fixes the issues with interfaces, CAPsMAN, etc.