Container as VPN

Hi everyone!
I am running the Ubuntu 22.04 LTS container on a hAP ax3, and inside Ubuntu I connect to OpenVPN. Now how do I tell the clients to use this container as a VPN? I could not establish a connection with mangle or route rules, yet clients can reach the container's IP and SSH into it normally.
(I know it is possible to run OVPN directly on the MikroTik itself, but my provider has applied settings that are not supported by MikroTik. And right now I can’t use WireGuard.)

Well, the OpenVPN port used has to be allowed in /ip/firewall/filter for “input”, and if it is the same as the default, MikroTik’s own OpenVPN server has to be disabled.
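
For example, a minimal sketch of that rule, assuming the container's OpenVPN uses the default UDP port 1194 (adjust protocol/port to whatever your server actually uses):

/ip/firewall/filter add chain=input protocol=udp dst-port=1194 action=accept comment="allow OpenVPN"
# (if the traffic is dst-natted on to the container, the rule belongs in chain=forward instead)
# and if RouterOS's own OVPN server would collide on the same port:
/interface/ovpn-server/server set enabled=no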

But I think there are a lot of things that have to align for this to work beyond just mangle, e.g. NAT. Likely possible, but complex.

Thank you for your reply. Do you have a method or a suggestion for how I can use OpenVPN on MikroTik?

I’m not an OpenVPN expert, so it’s hard to know if a container would work, since I’ve never set it up on Linux etc. before.

I was trying to say there isn’t a quick answer to running it in a container … you (or someone) would have to experiment. There does not seem to be an “official” Docker image for it either, although there are OpenVPN containers on Docker Hub, like this one for ARM64:
https://hub.docker.com/r/project31/aarch64-docker-openvpn

But /container isn’t Docker, so the commands to use are different. And I see the NET_ADMIN permission in its instructions, which is NOT allowed on RouterOS. Basically /container does not give root access, so if OpenVPN needs raw interfaces, it cannot work.

Are you sure the feature is missing from RouterOS’s OpenVPN? Because that would be easier.

See this discussion: http://forum.mikrotik.com/t/v7-1rc3-adds-container-support/151712/387

I checked the content you linked; thank you, but it seems impossible, and I am completely disappointed! I am using ProtonVPN, and the OVPN file contains some parameters that, when I import them into MikroTik, are flagged as 4 unsupported items. After that, I always get a TLS error about a missing key, even though everything necessary is in the OVPN file. Currently, in my country, all VPN protocols have problems except OpenVPN.

Maybe put in a feature request at help.mikrotik.com for whatever is missing from MikroTik’s built-in OpenVPN. Running it in a container seems either impossible or at least a nightmare.

One thing: I was able to connect to OpenVPN inside the container, but I could not make client traffic pass through the container, even though I could SSH to it and ping it.

You can use the sniffer on VETH1 and/or add some “action=log” rules to the /ip/firewall to see where it’s getting stuck. You might also want to enable whatever logging OpenVPN has in the container and check those logs for more clues.
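
For instance, a quick diagnostic sketch along those lines, assuming the container's interface is named veth1 (names are examples):

/tool/sniffer set filter-interface=veth1 file-name=veth1.pcap
/tool/sniffer start
# temporary logging rule; adjust chain/addresses to where you suspect the drop
/ip/firewall/filter add chain=forward out-interface=veth1 action=log log-prefix="to-veth1"

Then watch /log print and pull the .pcap off the router for inspection.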

Maybe someone else has gotten this working – I just don’t recall seeing it on the forum.

Describe all the subnets (IPv4 & IPv6) in use and list critical IP addresses, servers and specifically gateways for all subnets.
Read about Policy Routing https://help.mikrotik.com/docs/display/ROS/Policy+Routing
A network diagram, or a good network topology description, could help a lot.

Perhaps. But the issue is the container with OpenVPN, which may be blocked by the “no root access” rule.

I have the ubuntu/bind9 Docker image running on RouterOS CHR 7.11.2 and a hAP ax3 on 7.11, using the two-bridge network pattern.
Both have full IPv4 and IPv6 connectivity to the Internet and to LAN clients. I solved the container network routing issues.

An OpenVPN container adds another virtual router providing a gateway to specific subnets, and specific LAN clients need to route through OpenVPN.
Policy routing rules, alone or combined with firewall route marking, are a potential solution (a sketch follows below), but knowing the subnet topology is critical.
Adding the standard forum network diagram and full configurations would not hurt either.
Adding the OP’s Ubuntu image source would make testing on RouterOS CHR possible.
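
As a sketch of the pure policy-routing approach (table name and client address are placeholders, untested):

/routing/table add name=OVPN fib
/routing/rule add src-address=192.168.1.3/32 action=lookup-only-in-table table=OVPN

This sends only that client's traffic to whatever routes are placed in the OVPN table, with no mangle rule needed.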

Let me tell you exactly what I did:
I configured two bridges, one for the LAN and the other for Docker, as mentioned in the tutorials.
I set the first bridge to 192.168.1.1/24.
I set the second bridge to 192.168.2.1/24.
Now I have two containers on the second bridge: the first is AdGuard on 192.168.2.2, which works without problems, and the second is Ubuntu on 192.168.2.3, which has ping and SSH access when I have not established OpenVPN inside it.
Then I added a route 0.0.0.0/0 via the gateway 192.168.2.3 in an OVPN routing table, and after that I set a mangle rule so that, for example, the client 192.168.1.3 goes through the OVPN routing table.
The problem starts here: the client loses access to the Internet, although SSH and ping to 192.168.2.3 still work! When I traceroute from the client to 8.8.8.8, it shows gateway 192.168.1.1, then 192.168.2.3, then 192.168.1.1 again, and this cycle repeats.
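
In RouterOS terms, what I configured is roughly this (reconstructed from memory, names approximate):

/routing/table add name=OVPN fib
/ip/route add dst-address=0.0.0.0/0 gateway=192.168.2.3 routing-table=OVPN
/ip/firewall/mangle add chain=prerouting src-address=192.168.1.3 action=mark-routing new-routing-mark=OVPN passthrough=yes

That is, the default route in the OVPN table points at the Ubuntu container, and the mangle rule pushes the client into that table.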

Inadequate information destroys the incentive to help. Good luck!

I tried to explain it to you as best I could; tell me what other information is needed and I will post it. I’m not very experienced, but if you help me solve the problem, I will be grateful.

Yeah, post your config. I’m just not familiar with OpenVPN - if that part is working… then yeah, PBR or other changes may be needed.

But I’m skeptical that OpenVPN can work in /container in the first place – I just haven’t seen anyone succeed with that.

Once the container has full network connectivity, it can be treated as just another server (virtual or physical is irrelevant).
Routing differences occur depending on whether the container host is the core router or is external to the core router.
I presume OpenVPN on the LAN is a workable configuration, but having never tried it, I can’t say it will work. The OP suggests it does.

This is what I have done:

  1. Make sure that you have “ifconfig” AND any requirements it may have, bundled inside your container.
  2. Create a bash script and set it as your entrypoint (more must be done with this script in step 8 below; a sketch of how to set the entrypoint follows after this list).
  3. Inside the bash script, use something like
ifconfig eth0:0 192.168.70.2 netmask 255.255.255.0

to create a network alias.
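
For step 2, the entrypoint is set on the RouterOS side when the container is created; a minimal sketch, assuming the script is saved as /entrypoint.sh inside the image (image name and paths are examples only):

/container add remote-image=ubuntu:22.04 interface=veth1 root-dir=disk1/ubuntu entrypoint=/entrypoint.sh logging=yes
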
Here is my ifconfig output:

ifconfig -a
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.68.2  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::5485:1dff:fe69:875a  prefixlen 64  scopeid 0x20<link>
        ether 56:85:1d:69:87:5a  txqueuelen 1000  (Ethernet)
        RX packets 5702142  bytes 7421585181 (7.4 GB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7056943  bytes 4535107514 (4.5 GB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth0:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.70.2  netmask 255.255.255.0  broadcast 192.168.70.255
        ether 56:85:1d:69:87:5a  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 4111979  bytes 4239904963 (4.2 GB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4111979  bytes 4239904963 (4.2 GB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

tun0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 9000
        inet 10.0.0.2  netmask 255.255.255.0  destination 10.0.0.2
        inet6 fc00::2  prefixlen 126  scopeid 0x0<global>
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 3443935  bytes 3896310321 (3.8 GB)
        RX errors 0  dropped 537546  overruns 0  frame 0
        TX packets 1827713  bytes 333164757 (333.1 MB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

In the above, eth0 is my veth, eth0:0 is the alias I created, and tun0 is used for custom VPN purposes (it is linked via a socks-to-tun tool to XRAY or V2RAY).

5. Create a bridge and add the container’s VETH to the bridge ports (a RouterOS sketch of steps 5-7 follows after the script below).

  6. Assign a gateway IP address for the eth0:0 alias (created in step 3 above) to the bridge you created.

  7. Use firewall src-nat to NAT traffic towards the eth0:0 address.

  8. More on step 2:
    I use the following code in my script to route traffic from eth0:0 into the VPN tunnel inside my container. You MUST note that we are limited to the iptables features and kernel modules that RouterOS provides, and it is not possible to use e.g. TPROXY, as ROS does not have that module compiled and loaded.

/usr/bin/hev-socks5-tunnel /usr/bin/hevsocksconfig.yml \
& ifconfig eth0:0 192.168.70.2 netmask 255.255.255.0

iptables --flush
iptables --table nat --flush
iptables -t mangle --flush
iptables --delete-chain
iptables --table nat --delete-chain

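# note: "tun2socks" must already exist as a named routing table, e.g. by adding
# a line like "100 tun2socks" to /etc/iproute2/rt_tables (the table ID is arbitrary)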
ip rule add pref 300 from 192.168.70.0/24 table tun2socks

iptables -A FORWARD -i eth0:0 -s 192.168.70.0/24 -j ACCEPT
iptables -A FORWARD -i tun0 -d 192.168.70.0/24 -j ACCEPT
iptables -t nat -A POSTROUTING -s 192.168.70.0/24 -o tun0 -j MASQUERADE

ip route flush table tun2socks
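# copy every non-default route from the main table, then point the default at the tunnel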
ip route show table main | grep -Ev ^default | while read ROUTE ; do ip route add table tun2socks $ROUTE; done
ip route add default via 10.0.0.1 dev tun0 table tun2socks

ip route flush cache

/usr/local/bin/xray run /usr/local/bin/config.json  # this line is to run XRAY as my tunnel VPN
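
For steps 5-7 on the RouterOS side, a rough sketch (interface names and addresses are examples matching my subnets above, not a verified config):

/interface/bridge add name=containers
/interface/veth add name=veth1 address=192.168.68.2/24 gateway=192.168.68.1
/interface/bridge/port add bridge=containers interface=veth1
/ip/address add address=192.168.68.1/24 interface=containers
# gateway for the eth0:0 alias subnet (step 6)
/ip/address add address=192.168.70.1/24 interface=containers
# step 7: src-nat so LAN traffic reaches eth0:0 with a 192.168.70.x source
/ip/firewall/nat add chain=srcnat out-interface=containers action=masquerade

The masquerade matters because the "from 192.168.70.0/24" ip rule inside the container keys on the source address.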

Hello my friend,
You did exactly what I wanted, and thank you so much for sharing it with me.
I would like you to explain steps 5 and 6 in more detail if you can. Look, I already created one bridge, but now I can’t create another bridge and put the container interface inside it. If I want to do this, I have to move the interface from the previous bridge to the new bridge, and then it also loses access to the Internet.