VLESS proxy tunnel on MikroTik via containers.

Hello everyone!
Please help me with containers and with routing traffic on the router.
I have a MikroTik hAP ax3 with a USB flash drive for storage; the AdGuard Home, Xray-core and Tun2Socks containers are installed on the router.
Configuration:

/interface bridge
add name=Dockers port-cost-mode=short
/interface veth
add address=10.6.0.2/24 gateway=10.6.0.1 gateway6="" name=VETH1-adguard
add address=10.6.0.3/24 gateway=10.6.0.1 gateway6="" name=VETH2-xray
add address=10.6.0.4/24 gateway=10.6.0.1 gateway6="" name=VETH3-tun

/interface list
add name=LANs
add name=WANs

/container
add interface=VETH1-adguard root-dir=usb1-part1/Containers/adguard start-on-boot=yes workdir=/opt/adguardhome/work
add dns=10.6.0.2 interface=VETH2-xray root-dir=usb1-part1/Containers/xray-core start-on-boot=yes workdir=/root
add dns=10.6.0.2 interface=VETH3-tun root-dir=usb1-part1/Containers/tun2socks start-on-boot=yes

/container config
set ram-high=250.0MiB registry-url=https://ghcr.io tmpdir=usb1-part1/TMP

/interface bridge port
add bridge=Dockers interface=VETH1-adguard
add bridge=Dockers interface=VETH2-xray
add bridge=Dockers interface=VETH3-tun

/interface list member
add interface=Bridge list=LANs
add interface=WAN list=WANs

/ip address
add address=10.10.12.1/24 interface=Bridge network=10.10.12.0
add address=10.6.0.1/24 interface=Dockers network=10.6.0.0

/ip dhcp-server network
add address=10.10.12.0/24 dns-server=10.6.0.2 gateway=10.10.12.1 netmask=24

/ip dns
set allow-remote-requests=yes cache-max-ttl=1w3d doh-timeout=6s query-server-timeout=2s500ms query-total-timeout=12s servers=1.1.1.1 use-doh-server=https://cloudflare-dns.com/dns-query verify-doh-cert=yes

/ip firewall filter
add action=fasttrack-connection chain=forward comment="Rule 1.0 Fasttrack" connection-state=established,related hw-offload=yes in-interface=Bridge out-interface=WAN
add action=fasttrack-connection chain=forward connection-state=established,related hw-offload=yes in-interface=WAN out-interface=Bridge
add action=accept chain=forward comment="Rule 1.0.1 Forward input established/related accept" connection-state=established,related,untracked log-prefix="Forward accept"
add action=accept chain=input connection-state=established,related,untracked log-prefix="Input accept"
add action=drop chain=forward comment="Rule 1.0.2 Forward input invalid drop" connection-state=invalid in-interface=WAN log-prefix="Forward drop invalid"
add action=drop chain=input connection-state=invalid in-interface=WAN log-prefix="Input drop invalid"
add action=drop chain=input comment="Rule 1.2.1 Input drop from WAN" in-interface-list=WANs log-prefix="Input all drop from WAN"

/ip firewall mangle
add action=mark-routing chain=prerouting dst-address=10.6.0.2 new-routing-mark=proxy_mark passthrough=yes src-address=10.10.12.52

/ip firewall nat
add action=dst-nat chain=dstnat comment="NAT 1.01 - TCP 53 Redirect DNS requests to AdguardHome" dst-port=53 in-interface=Bridge protocol=tcp to-addresses=10.6.0.2
add action=dst-nat chain=dstnat comment="NAT 1.02 - UDP 53 Redirect DNS requests to AdguardHome" dst-port=53 in-interface=Bridge protocol=udp to-addresses=10.6.0.2
add action=masquerade chain=srcnat comment="Containers through NAT" out-interface=WAN src-address=10.6.0.0/24
add action=masquerade chain=srcnat comment="WWW through VPN" dst-address-list=rkn_wg out-interface=WG1-VPS
add action=masquerade chain=srcnat comment="LAN through NAT" out-interface=WAN src-address=10.10.12.2-10.10.12.254

/routing table
add disabled=no fib name=wg_mark
add disabled=no fib name=proxy_mark

/ip route
add comment="Access to WWW through Proxy" disabled=no distance=1 dst-address=0.0.0.0/0 gateway=10.6.0.4 pref-src="" routing-table=proxy_mark scope=30 suppress-hw-offload=yes target-scope=10
add comment="Access to WWW through WG1-VPS" disabled=no distance=2 dst-address=0.0.0.0/0 gateway=WG1-VPS pref-src="" routing-table=wg_mark scope=30 suppress-hw-offload=yes target-scope=10

The AdGuard Home container works without any issues; all devices on the network receive it as their DNS server via DHCP.
The Xray-core container (acting as a client to a VPS server with 3X-UI installed): after the first launch the container is stopped and config.json is edited (the connection settings are specified). It also works without issues - it connects to the VPS server via XTLS+Reality, and from a local computer (Windows) and a virtual machine (Ubuntu), pointing a SOCKS client at IP:port, I can easily access the Internet through it.
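(For reference, a quick way to confirm the SOCKS inbound is reachable from the LAN is curl with its SOCKS options; 30804 is simply the port I set in config.json, substitute your own:

curl --socks5 10.6.0.3:30804 https://ifconfig.me
curl --socks5-hostname 10.6.0.3:30804 https://ifconfig.me

The second form lets the proxy resolve the hostname; both should return the VPS address.)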
The whole question is about the following Tun2Socks container.
To create the container, the image was pulled from GitHub; then the container was started, I stopped it, and edited entrypoint.sh:

#!/bin/sh
ip tuntap add mode tun dev tun0
ip addr add 198.18.0.1/15 dev tun0
ip link set dev tun0 up
ip route del default
ip route add default via 198.18.0.1 dev tun0 metric 1
ip route add default via 10.6.0.1 dev eth0 metric 10
tun2socks -device tun0 -proxy socks5://10.6.0.3:30804 -interface eth0

There is Internet access from inside the containers (I checked both in Xray-core and in Tun2Socks). But when I try to send all traffic from a local device on the LAN into the tunnel (using the mangle rule with the proxy_mark routing mark), the Internet is completely inaccessible: ping and traceroute do not go through the tunnel to any site. Yet with the same rule pointed at WireGuard everything works, and the Internet is available on all devices.
I am not an expert in Unix systems at all; it was hard enough to figure out MikroTik itself.
So I can't tell whether the problem is in the tunnel or in the routing settings on the router.
Or maybe someone knows an easier way?

I am 110% not familiar with your project :wink:
But I wanted to know what you want to do, so I looked the tool's website up…
See the attachment - maybe it helps…

It's from https://github.com/xjasonlyu/tun2socks/wiki/Examples
tun2socker.png

rp-filter is set in /ip/settings, but I’m not sure that’s the issue.
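For anyone else checking: the setting can be printed and, if needed, relaxed like this (strict reverse-path filtering is a common culprit with policy routing; loose or no is the usual suggestion):

/ip settings print
/ip settings set rp-filter=loose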

Did you see this thread? http://forum.mikrotik.com/t/run-flag-in-container/163100/8

Thanks a lot for the answers.
I’ll try to dig into this issue more, but I still can’t find time for experiments.

Yes, I saw that thread; the user vanes32 actually helped me with the setup, but what works for him I couldn't get to work at all.

Hey. Did you manage to route traffic from local devices into a container? I managed to deploy sing-box, an all-in-one proxy container. The connection to the proxy server is established; as a test I chose Shadowsocks 2022. The container includes tun2socks by default, and from inside the container a request via curl ifconfig.me returns the IP address of the proxy server. The problem is with routing: traffic reaches the container, as can be seen in the iftop output, but packets are not returned to the host, so there is no Internet access.

Hi.
Were you able to set up this configuration?

Good afternoon. It's been quite a while, and I've kept looking at this topic.
And to answer the last question - yes! I managed to get this scheme working with three containers.
But over time something changed inside the containers, and right now I can't guarantee that everything will work on the first try without some troubleshooting.
On the cheaper hAP ax2 I even managed to automate the container installation with a script, but after I had no access to that device for 1.5 months it turned out the scheme had stopped working, and I haven't yet had a chance to find out why. Perhaps the container images themselves were updated and these settings no longer fit.

On my home hAP ax3 with the containers installed on a USB flash drive, the scheme is currently working.
The settings are:

/interface bridge add name=Dockers port-cost-mode=short
/ip address add address=10.6.0.1/24 interface=Dockers network=10.6.0.0
/interface veth add address=10.6.0.2/24 gateway=10.6.0.1 gateway6="" name=VETH1-adguard
/interface veth add address=10.6.0.3/24 gateway=10.6.0.1 gateway6="" name=VETH2-xray
/interface veth add address=10.6.0.4/24 gateway=10.6.0.1 gateway6="" name=VETH3-tun
/interface bridge port add bridge=Dockers interface=VETH1-adguard
/interface bridge port add bridge=Dockers interface=VETH2-xray
/interface bridge port add bridge=Dockers interface=VETH3-tun

/container mounts add dst=/opt/adguardhome/work name=adguard_workdir src=/usb1-part1/Conf/adguardwork
/container mounts add dst=/opt/adguardhome/conf name=adguard_confdir src=/usb1-part1/Conf/adguardconf
/container envs add key=TZ name=adguard_envs value=Europe/Moscow
/container add envlist=adguard_envs interface=VETH1-adguard mounts=adguard_workdir,adguard_confdir root-dir=usb1-part1/Containers/adguard start-on-boot=yes workdir=/opt/adguardhome/work
/container add dns=10.6.0.2 interface=VETH2-xray root-dir=usb1-part1/Containers/xray-core start-on-boot=yes workdir=/root
/container add dns=10.6.0.2 interface=VETH3-tun root-dir=usb1-part1/Containers/tun2socks start-on-boot=yes

/ip dhcp-server network add address=10.10.12.0/24 dns-server=10.6.0.2 gateway=10.10.12.1 netmask=24
/routing table add disabled=no fib name=proxy_mark
/ip route add comment="Access to WWW through Proxy" disabled=no distance=1 dst-address=0.0.0.0/0 gateway=10.6.0.4 pref-src="" routing-table=proxy_mark scope=30 suppress-hw-offload=yes target-scope=10

/ip firewall mangle add action=mark-routing chain=prerouting dst-address-list=route_proxy log-prefix=markrou_ new-routing-mark=proxy_mark passthrough=yes
/ip firewall mangle add action=change-mss chain=forward new-mss=clamp-to-pmtu out-interface=Dockers passthrough=yes protocol=tcp tcp-flags=syn

/ip firewall nat add action=dst-nat chain=dstnat comment="TCP 53 Redirect DNS requests to AdguardHome" dst-port=53 in-interface=Bridge protocol=tcp to-addresses=10.6.0.2
/ip firewall nat add action=dst-nat chain=dstnat comment="UDP 53 Redirect DNS requests to AdguardHome" dst-port=53 in-interface=Bridge protocol=udp to-addresses=10.6.0.2
/ip firewall nat add action=masquerade chain=srcnat comment="WWW through VPN Proxy" dst-address-list=route_proxy out-interface=Dockers
/ip firewall nat add action=masquerade chain=srcnat comment="Containers through NAT" out-interface-list=WANs src-address=10.6.0.0/24
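The mangle rule above matches dst-address-list=route_proxy, so the list itself still has to be filled in; the entries below are only placeholders to show the syntax:

/ip firewall address-list add address=example.com list=route_proxy
/ip firewall address-list add address=203.0.113.0/24 list=route_proxy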

In the containers themselves, as I wrote earlier, you need to change the following files:
Xray-core (acts as a client to a VPS server with 3X-UI installed): after the first launch the container is stopped and config.json is edited (the connection settings are set).
Tun2Socks: after installation and first start, stop the container and replace entrypoint.sh:

#!/bin/sh
ip tuntap add mode tun dev tun0
ip addr add 198.18.0.1/15 dev tun0
ip link set dev tun0 up
ip route del default
ip route add default via 198.18.0.1 dev tun0 metric 1
ip route add default via 10.6.0.1 dev eth0 metric 10
tun2socks -device tun0 -proxy socks5://10.6.0.3:30804 -interface eth0

After launching the container, all addresses included in the route_proxy address list (routing mark proxy_mark) go through the VLESS proxy tunnel.
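A quick way to verify the chain end to end (the mangle counters should grow while a listed destination is opened from a LAN client, and the route should show as active):

/ip firewall mangle print stats
/ip route print where routing-table=proxy_mark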



@DeHb86
Hi! Could you write a more detailed article on how to build the containers yourself? By the way, is it possible to run the XRAY-core client in TUN mode?


And which step exactly do you need in more detail?
The container installation process itself? I did not rebuild the containers for my own needs; I simply installed them from the repository and replaced the config.json file in xray-core (specifying the IP, key, etc. to connect to the server via the VLESS protocol), and in the second container I likewise replaced the entrypoint.sh file.

If you need anything in more detail, write to me at dehb(at)list.ru.

There is a small addition to all of the above.
I don't know why, but the tun2socks container had recently stopped working - it could not ping either local machines or remote hosts.
Colleagues who care about our common problem figured out what was wrong with the container and suggested some changes.
The solution was a change to entrypoint.sh, adding commands that bring the interfaces down and back up.
At the moment everything is working again on all my devices, so here is the updated version:

#!/bin/sh
sleep 2
ifconfig eth0 down
sleep 2
ifconfig eth0 up
ip tuntap add mode tun dev tun0
ip addr add 198.18.0.1/15 dev tun0
ip link set dev tun0 up
sleep 2
ifconfig tun0 down
sleep 2
ifconfig tun0 up
ip route del default
ip route add default via 198.18.0.1 dev tun0 metric 1
ip route add default via 10.6.2.1 dev eth0 metric 10
tun2socks -device tun0 -proxy socks5://10.6.2.3:30804 -interface eth0


Just want to share entrypoints for both implementations, so you can build your own images for MikroTik.

The Dockerfile is something like this:

# syntax=docker/dockerfile:1

FROM ghcr.io/<image>:latest

COPY --chown=0:0 --chmod=755 entrypoint.sh /entrypoint.sh
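If you build on a PC, something along these lines produces a tar that RouterOS can import as a file image (the tag name is arbitrary; linux/arm64 matches the hAP ax2/ax3 and RB5009, use linux/arm/v7 for older 32-bit arm boards):

docker buildx build --platform linux/arm64 -t tun2socks-mikrotik:latest --load .
docker save tun2socks-mikrotik:latest -o tun2socks-mikrotik.tar
# upload the tar to the router storage and create the container with /container add file=...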

For https://github.com/xjasonlyu/tun2socks


#!/bin/sh

TUN="${TUN:-tun0}"
ADDR="${ADDR:-198.18.0.1/15}"
LOGLEVEL="${LOGLEVEL:-info}"

create_tun() {
  # create tun device
  ip tuntap add mode tun dev "$TUN"
  ip addr add "$ADDR" dev "$TUN"
  ip link set dev "$TUN" up
}

config_route() {
# http://forum.mikrotik.com/t/run-flag-in-container/163100/8
  ip route del default
  ip route add default via "${ADDR%/*}" dev "${TUN}" metric 1
  ip route add default via $(ip -o -f inet address show eth0 | awk '/scope global/ {print $4}' | cut -d/ -f1) dev eth0 metric 10
}

run() {
  create_tun
  config_route

  # execute extra commands
  if [ -n "$EXTRA_COMMANDS" ]; then
    sh -c "$EXTRA_COMMANDS"
  fi

  if [ -n "$MTU" ]; then
    ARGS="--mtu $MTU"
  fi

  if [ -n "$RESTAPI" ]; then
    ARGS="$ARGS --restapi $RESTAPI"
  fi

  if [ -n "$UDP_TIMEOUT" ]; then
    ARGS="$ARGS --udp-timeout $UDP_TIMEOUT"
  fi

  if [ -n "$TCP_SNDBUF" ]; then
    ARGS="$ARGS --tcp-sndbuf $TCP_SNDBUF"
  fi

  if [ -n "$TCP_RCVBUF" ]; then
    ARGS="$ARGS --tcp-rcvbuf $TCP_RCVBUF"
  fi

  if [ "$TCP_AUTO_TUNING" = 1 ]; then
    ARGS="$ARGS --tcp-auto-tuning"
  fi

  if [ -n "$MULTICAST_GROUPS" ]; then
    ARGS="$ARGS --multicast-groups $MULTICAST_GROUPS"
  fi

  exec tun2socks \
    --loglevel "$LOGLEVEL" \
    --interface eth0 \
    --device "$TUN" \
    --proxy "$PROXY" \
    $ARGS
}

run || exit 1
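On the RouterOS side, the variables this script reads (PROXY has no default and must be set) can be passed through an env list, roughly like this - the list name, addresses and paths are just examples matching my layout:

/container envs add key=PROXY name=tun2socks_envs value=socks5://10.6.0.3:30804
/container envs add key=LOGLEVEL name=tun2socks_envs value=info
/container add envlist=tun2socks_envs interface=VETH3-tun root-dir=usb1-part1/Containers/tun2socks start-on-boot=yes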

For https://github.com/heiher/hev-socks5-tunnel


#!/bin/sh

TUN="${TUN:-tun0}"
MTU="${MTU:-9000}"
IPV4="${IPV4:-198.18.0.1}"
IPV6="${IPV6:-}"

MARK="${MARK:-438}"

SOCKS5_ADDR="${SOCKS5_ADDR:-172.17.0.1}"
SOCKS5_PORT="${SOCKS5_PORT:-1080}"
SOCKS5_USERNAME="${SOCKS5_USERNAME:-}"
SOCKS5_PASSWORD="${SOCKS5_PASSWORD:-}"
SOCKS5_UDP_MODE="${SOCKS5_UDP_MODE:-udp}"

LOG_LEVEL="${LOG_LEVEL:-warn}"

config_file() {
  cat > /hs5t.yml << EOF
misc:
  log-level: '${LOG_LEVEL}'
tunnel:
  name: '${TUN}'
  mtu: ${MTU}
  ipv4: '${IPV4}'
  ipv6: '${IPV6}'
  post-up-script: '/route.sh'
socks5:
  address: '${SOCKS5_ADDR}'
  port: ${SOCKS5_PORT}
  udp: '${SOCKS5_UDP_MODE}'
  mark: ${MARK}
EOF

  if [ -n "${SOCKS5_USERNAME}" ]; then
      echo "  username: '${SOCKS5_USERNAME}'" >> /hs5t.yml
  fi

  if [ -n "${SOCKS5_PASSWORD}" ]; then
      echo "  password: '${SOCKS5_PASSWORD}'" >> /hs5t.yml
  fi
}

config_route() {
  echo "#!/bin/sh" > /route.sh
  chmod +x /route.sh

  echo "ip route del default" >> /route.sh
  echo "ip route add default via ${IPV4} dev ${TUN} metric 1" >> /route.sh
  echo "ip route add default via $(ip -o -f inet address show eth0 | awk '/scope global/ {print $4}' | cut -d/ -f1) dev eth0 metric 10" >> /route.sh
}

run() {
  config_file
  config_route
  echo "echo 1 > /success" >> /route.sh
  hev-socks5-tunnel /hs5t.yml
}

run || exit 1

Don’t forget to set environments for containers.
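For the hev-socks5-tunnel variant that could look roughly like this (keys match the defaults in the script above; the list name, addresses and paths are placeholders for my layout):

/container envs add key=SOCKS5_ADDR name=hev_envs value=10.6.0.3
/container envs add key=SOCKS5_PORT name=hev_envs value=30804
/container envs add key=LOG_LEVEL name=hev_envs value=warn
/container add envlist=hev_envs interface=VETH3-tun root-dir=usb1-part1/Containers/hev-tun start-on-boot=yes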

I successfully deployed tun2socks and hev-socks5-tunnel on a hAP ac³ (arm).
But the same configuration doesn’t work on RB5009UG+S+ (arm64).

On the RB5009, forwarding inside the container simply does nothing. Using torch I can't catch outgoing packets from tun to socks, but routing from the local network works.
The connection from the tun device inside the tun container can be checked with:

wget -S -O - http://one.one.one.one > /dev/null

I have a more complicated firewall on the RB5009 and will try to check with a simple firewall over the weekend.

@DeHb86
Looks like you also have an arm64 platform. For the moment the platform is the only difference in my tests.

Routing inside the container can be simplified a bit:

ip route del default

ip route add 10.0.0.0/8 via 172.21.0.17 dev eth0
ip route add 172.16.0.0/12 via 172.21.0.17 dev eth0
ip route add 192.168.0.0/16 via 172.21.0.17 dev eth0

ip route add default via 198.18.0.1 dev tun0

All private networks are routed through the router, while external traffic goes to the socks proxy.

Here is an equivalent of the tun2socks Docker image for arm64 (e.g., AX2, AX3) - hev-socks5-tunnel-mikrotik
and a shadowsocks-client, if you need one. I also have a VLESS REALITY client container.

PS. For arm you can try snegowiki/hev-socks5-tunnel-mikrotik:test, but I have not tested it.

Hello!
On the first attempt I failed to get the container working - or rather, the container is installed and runs, but does not act as a tunnel.

At startup, the container returns an error:
“Error: invalid prefix for the specified prefix length”.
My local network is 10.10.12.0/24, the container network is 10.6.0.0/24, the DNS server is 10.6.0.2 (AdGuard Home); the router is a hAP ax3.

/container
add dns=10.6.0.2 envlist=HEV-tun interface=VETH4-tun2 logging=yes root-dir=usb1/Containers/Tun2 start-on-boot=yes
/container envs
add key=SOCKS5_ADDR name=HEV-tun value=10.6.0.3
add key=SOCKS5_PORT name=HEV-tun value=30804
add key=SOCKS5_UDP_MODE name=HEV-tun value=udp
add key=LOCAL_ROUTE name=HEV-tun value="ip r a 10.10.12.1/24 via 10.6.0.1"

According to the settings everything seems correct - what could be wrong?


My stupid mistake…
I specified the network as "…12.1/24" instead of "…12.0/24" as it should be; now everything starts up and works.

snegowiki, Thank you very much!

I confirm it works well on the hAP ac³. The speed is a bit low, about 20 Mbit/s on a 100 Mbit channel, but I don't think that is related to the MikroTik - the CPU load is no more than 30% at peak.



My installation path.
Don't forget to change the IP addresses and subnets to your own values, as well as the path to the USB disk.

Install the extra container.npk package.
A USB disk formatted as ext4 is required.

/system/device-mode/update container=yes
/system reboot

/container config set ram-high=250.0MiB registry-url=https://registry-1.docker.io tmpdir=usb1/pull

/interface bridge add name=Dockers port-cost-mode=short

/interface veth add address=11.0.0.2/24 gateway=11.0.0.1 gateway6="" name=veth1-tun2sock
/interface veth add address=11.0.0.3/24 gateway=11.0.0.1 gateway6="" name=veth2-xray-core

/interface bridge port add bridge=Dockers interface=veth1-tun2sock
/interface bridge port add bridge=Dockers interface=veth2-xray-core

/ip address add address=11.0.0.1/24 interface=Dockers network=11.0.0.0

/ip firewall nat add action=masquerade chain=srcnat comment="Containers through NAT" out-interface=WAN src-address=11.0.0.0/24

/container add remote-image=xjasonlyu/tun2socks:latest interface=veth1-tun2sock root-dir=usb1/containers/tun2sock logging=yes
/container add remote-image=teddysun/xray:latest interface=veth2-xray-core root-dir=usb1/containers/xray-core logging=yes

/routing table add disabled=no fib name=proxy
/ip firewall mangle add action=mark-routing chain=prerouting dst-address-list=proxy_list new-routing-mark=proxy passthrough=yes src-address=<client IP addresses (subnet)>

/ip route add comment="Access through Proxy" disabled=no distance=1 dst-address=0.0.0.0/0 gateway=11.0.0.2 pref-src="" routing-table=proxy scope=30 suppress-hw-offload=yes target-scope=10

The container files can be accessed using the built-in SMB server.

You need to edit the file ../containers/tun2sock/entrypoint.sh
Replace the contents of the file with this:

#!/bin/sh
sleep 2
ifconfig eth0 down
sleep 2
ifconfig eth0 up
ip tuntap add mode tun dev tun0
ip addr add 198.18.0.1/15 dev tun0
ip link set dev tun0 up
sleep 2
ifconfig tun0 down
sleep 2
ifconfig tun0 up
ip route del default
ip route add default via 198.18.0.1 dev tun0 metric 1
ip route add default via 11.0.0.1 dev eth0 metric 10
tun2socks -device tun0 -proxy socks5://11.0.0.3:1080 -interface eth0

To configure the xray-core connection to a VPS with 3x-ui installed, edit the file \containers\xray-core\etc\xray\config.json
Configuration examples can be found here: https://github.com/XTLS/Xray-examples
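One thing the listing above leaves implicit: the proxy_list address list referenced by the mangle rule has to be populated, otherwise nothing gets marked. Placeholder entries to show the syntax:

/ip firewall address-list add address=example.com list=proxy_list
/ip firewall address-list add address=203.0.113.0/24 list=proxy_list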

Hello! Could you share your config for xray (with your own addresses removed, of course)? The tun2 container starts, but something inside it does not bring up the tunnel to xray.
I think the catch is in the xray config.

@user7780
Please use English as most will otherwise not be able to understand what you post.
Also for searching it’s a nightmare when other languages are used.

Instead of #### insert your own data; it can be found in the 3x-ui panel or by decoding the vless:// URL.

/etc/xray/config.json VLESS-TCP-XTLS-Vision-REALITY

{
    "log": {
        "loglevel": "warning"
    },
    "inbounds": [
        {
            "listen": "11.0.0.3",
            "port": "1080",
            "protocol": "socks",
            "settings": {
                "auth": "noauth",
                "udp": true,
                "ip": "11.0.0.3"
            }
        },
        {
            "listen": "11.0.0.3",
            "port": "1081",
            "protocol": "http"
        }
    ],
	"outbounds": [
        {
            "protocol": "vless",
            "settings": {
                "vnext": [
                    {
                        "address": "####", 
                        "port": 443, 
                        "users": [
                            {
                                "id": "####", // Needs to match server side
                                "encryption": "none",
                                "flow": "xtls-rprx-vision"
                            }
                        ]
                    }
                ]
            },
            "streamSettings": {
                "network": "tcp",
                "security": "reality",
                "realitySettings": {
                    "fingerprint": "firefox", 
                    "serverName": "####", // A website that support TLS1.3 and h2. If your dest is `1.1.1.1:443`, then leave it empty
                    "publicKey": "####", // run `xray x25519` to generate. Public and private keys need to be corresponding.
                    "spiderX": "", // If your dest is `1.1.1.1:443`, then you can fill it with `/dns-query/` or just leave it empty
                    "shortId": "####" // Required
                }
            },
            "tag": "proxy"
        },
        {
            "protocol": "freedom",
            "tag": "direct" // added so the "geoip:private" routing rule below has a matching outbound
        }
    ],
    "routing": {
        "domainStrategy": "AsIs",
        "rules": [
            {
                "type": "field",
                "ip": [
                    "geoip:private"
                ],
                "outboundTag": "direct"
            }
        ]
    }
}
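Before pointing tun2socks at it, the config can be sanity-checked from another container or a LAN host via either inbound; both should return the VPS address if the VLESS/REALITY outbound is up:

curl --socks5 11.0.0.3:1080 https://ifconfig.me
curl -x http://11.0.0.3:1081 https://ifconfig.me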

Hi guys! I read this topic, but I can't work out the exact procedure for running the VLESS client part against my VPS server. Maybe someone can systematize the recommendations and write a FAQ on this? So far I've understood one thing: I need a MikroTik on the ARM architecture with RouterOS 7 and the ability to install the container package, then install the Xray-core and Tun2Socks containers (I don't need AdGuard), and the next steps are unclear…

Good day!
At the moment, thanks to the rebuilt Xray-core and hev-socks5-tunnel containers (thanks a lot for this, snegowiki! https://hub.docker.com/r/snegowiki/vless-mikrotik and https://hub.docker.com/r/snegowiki/hev-socks5-tunnel-mikrotik), it has become much easier to launch the containers.
Besides preparing the router for container installation, you also need to configure routing for the marked traffic.

In short, these are the settings you need:

  1. Create two veth interfaces, one for each container, and pick an IP for each from a private address range:
/interface veth add address=172.17.0.2/24 gateway=172.17.0.1 gateway6="" name=veth1-xray
/interface veth add address=172.17.0.3/24 gateway=172.17.0.1 gateway6="" name=veth2-tun
  2. Create a bridge for the veth interfaces and containers, and assign it an IP and network:
/interface/bridge/add name=containers
/ip/address/add address=172.17.0.1/24 network=172.17.0.0 interface=containers
  3. Add the veth interfaces to the bridge:
/interface/bridge/port add bridge=containers interface=veth1-xray
/interface/bridge/port add bridge=containers interface=veth2-tun
  4. Add the bridge to the LAN list:
/interface list member add interface=containers list=LAN
  5. Add a routing table for the tagged traffic:
/routing table add disabled=no fib name=proxy_mark
  6. Set up NAT for outgoing traffic:
/ip firewall nat add action=masquerade chain=srcnat comment="Containers through NAT" out-interface-list=WAN src-address=172.17.0.0/24
  7. Set up the mangle rule that marks the traffic:
/ip firewall mangle add action=mark-routing chain=prerouting dst-address-list=route_proxy new-routing-mark=proxy_mark passthrough=yes
#Optional /ip firewall mangle add action=change-mss chain=forward new-mss=clamp-to-pmtu out-interface=containers passthrough=yes protocol=tcp tcp-flags=syn
  8. Add resources to the address list:
/ip firewall address-list add address=microsoft.com list=route_proxy
/ip firewall address-list add address=www.microsoft.com list=route_proxy
  9. Add a route for the tagged traffic:
/ip route add disabled=no distance=1 dst-address=0.0.0.0/0 gateway=172.17.0.3 routing-table=proxy_mark
  10. Set the environment for the Xray VLESS container:
/container envs add key=SOCKS_PORT name=vless value=@port@
/container envs add key=REMOTE_ADDRESS name=vless value=@your_adress/ip_vps@
/container envs add key=REMOTE_PORT name=vless value=443
/container envs add key=ID name=vless value=@ID from panel 3x-ui@
/container envs add key=ENCRYPTION name=vless value=none
/container envs add key=FLOW name=vless value=xtls-rprx-vision
/container envs add key=FINGER_PRINT name=vless value=chrome
/container envs add key=SERVER_NAME name=vless value=@the domain you're masquerading as@
/container envs add key=PUBLIC_KEY name=vless value=@PUBLIC_KEY@
/container envs add key=SHORT_ID name=vless value=@SHORT_ID@
  11. Set the environment for the tun container:
/container envs add key=SOCKS5_ADDR name=tun value=172.17.0.2
/container envs add key=SOCKS5_PORT name=tun value=@port@
/container envs add key=SOCKS5_UDP_MODE name=tun value=udp
/container envs add key=LOCAL_ROUTE name=tun value="ip r a @your network@ via 172.17.0.1"
  12. Add the containers (in this case the settings are not complete - it is not specified whether the image is pulled directly from the hub or loaded from a file on the router; see the sketch right after this list for one way to pull the prebuilt images):
/container add dns=@your network@ envlist=vless interface=veth1-xray root-dir=@your directory sample - usb1/Containers/vless-mikrotik@ start-on-boot=yes workdir=/root
/container add envlist=tun interface=veth2-tun root-dir=@your directory sample - usb1/Containers/Hev-Tun@ start-on-boot=yes
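As a sketch of the direct-pull variant (registry-url must point at Docker Hub, as in the earlier post; the :latest tags and paths are my assumptions):

/container config set registry-url=https://registry-1.docker.io tmpdir=usb1/pull
/container add remote-image=snegowiki/vless-mikrotik:latest envlist=vless interface=veth1-xray root-dir=usb1/Containers/vless-mikrotik start-on-boot=yes workdir=/root
/container add remote-image=snegowiki/hev-socks5-tunnel-mikrotik:latest envlist=tun interface=veth2-tun root-dir=usb1/Containers/Hev-Tun start-on-boot=yes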