Container/Docker - Adguard/Pihole For REAL.

If one does go down the route of using some sort of DNS protection there are many options.

  1. Use IPv4 servers from DNS providers that have some decent functionality against ads etc.
    These seem to work well, but do not provide any granularity into what is happening with clients etc… no dashboard LOL.

  2. Instead of regular IPv4 servers, use the DoH function within RouterOS to (I guess) better hide DNS requests coming and going from the router?
    Again, no granularity or dashboard.

  3. Then we get to other devices, which this thread is focused on, aka the Container/Docker approach.

We have the ability for the router to do DoH, and for the router to run a container with adguard/pihole!
Q1. Is it possible to combine BOTH ?
Q2. If not, which is better to implement and why?

(4) Assuming we are going to go ahead with the adguard/pihole on a container/docker approach, let’s get REAL.
Not the simple cookie-cutter examples on YouTube that magically describe using this somewhat complex tool with a single bridge and single subnet.
PALEASE…

(5) Let’s at least solve for a more typical home/SOHO MT user that has:
a. single bridge
b. multiple vlans

First problem: Does the container/docker get its own VLAN? Or does it have to be on a separate bridge?
Others: Too numerous to mention as I trip over all of them :slight_smile:

(6) Other Assumptions/Problem areas

SEND USERS TO CONTAINER

  • firewall rule: allow interface-list=LAN dst-address=<adguard/pihole IP> { allow users to reach adguard/pihole on the container }
  • dst-nat rules: in-interface-list=LAN dst-port=53 protocol=tcp/udp to-addresses=<adguard/pihole IP>, excluding src-address=<adguard/pihole IP> { force users to adguard/pihole }
  • input chain rules: in-interface-list=LAN dst-port=53 protocol=tcp/udp { to give adguard/pihole access to DNS for the initial connection }
  • ip dhcp-server networks - set ALL VLAN DNS server entries to the adguard/pihole IP, except the adguard/pihole VLAN itself (it gets the same value as the gateway entry)
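As a sketch only (not a definitive implementation), the list above might translate to RouterOS syntax roughly like this — 10.0.99.2 is a hypothetical adguard/pihole container address, and the accept rules must sit before any inter-VLAN drop rules you already have:

# hypothetical container IP: 10.0.99.2
/ip firewall filter
add chain=forward in-interface-list=LAN dst-address=10.0.99.2 protocol=udp dst-port=53 action=accept comment="clients to adguard/pihole DNS"
add chain=forward in-interface-list=LAN dst-address=10.0.99.2 protocol=tcp dst-port=53 action=accept
/ip firewall nat
add chain=dstnat in-interface-list=LAN src-address=!10.0.99.2 protocol=udp dst-port=53 action=dst-nat to-addresses=10.0.99.2 comment="force clients to adguard/pihole"
add chain=dstnat in-interface-list=LAN src-address=!10.0.99.2 protocol=tcp dst-port=53 action=dst-nat to-addresses=10.0.99.2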

Note for Admin to configure adguard/pihole

  • dst-nat rule: dst-address=<subnet gateway IP of container> to-addresses=<adguard/pihole IP> { to reach pihole/adguard via web browser }

IP DNS entries:
Allow Remote Requests = yes
Add IPv4 servers so adguard/pihole can reach upstream resolvers. Should these be the same servers pihole/adguard itself uses, or different ones?
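For reference, a minimal sketch of those /ip/dns settings — the 1.1.1.1 upstream is just a placeholder, and whether it should match Pi-hole’s own upstreams is exactly the open question above:

# placeholder upstream; pick whatever resolver you trust
/ip dns
set allow-remote-requests=yes servers=1.1.1.1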

SourceNAT: Nothing special that I am aware of. The docker/container will fall under the standard srcnat rule??? If the docker/container is in its own VLAN, then no hairpin is required!

(7) What am I missing that I have not considered (keeping it to IPv4)?

OMG! You may be close to installing the container package…


Let’s stick to pi-hole since Mikrotik has docs for that.

First, skip creating the “docker” bridge. Why it’s in the doc’s example, IDK. I’d just create the VETH and give it a unique subnet, skipping the whole “docker” bridge:

/interface/veth/add name=veth-pihole address=10.10.10.10/24 gateway=10.10.10.1
/ip/address/add address=10.10.10.1/24 interface=veth-pihole

You could do the firewall a lot of ways; the example uses masquerade, so if you had multiple VLANs the traffic looks like it is coming from the router’s address, and outbound internet access is allowed for pi-hole downloads:

/ip/firewall/nat/add chain=srcnat action=masquerade src-address=10.10.10.0/24

If you simply add the VETH to the LAN interface list, you should be able to use 10.10.10.10 as the DNS server anywhere you’d like. This avoids any dst-nat rules being needed.

/interface/list/member add list=LAN interface=veth-pihole

I know you run a tighter firewall than that. So for any container, you can look at its Dockerfile; often they have EXPOSE commands. Mikrotik does NOT use these for anything – a container can listen on ANY port (regardless of what’s listed in the Dockerfile), but ONLY on the container IP address assigned by the associated VETH interface. But these can be a guide to what might need to be dst-nat’ed to the container IP address, should you not want to allow all ports to the pi-hole container.

e.g. https://github.com/pi-hole/docker-pi-hole/blob/master/src/Dockerfile shows pi-hole will listen on these ports:

EXPOSE 53 53/udp
EXPOSE 67/udp
EXPOSE 80

Unlike built-in services like www (port 80), DNS (port 53), or DHCP (port 67) that listen on ALL interfaces, a container will ONLY listen on the VETH address assigned to the container. So you may need dst-nat rules for those EXPOSE’d ports, specifically 53, for the various VLANs that have drop rules. For something like pi-hole’s web GUI, you’d add a dst-nat rule for dst-address=10.10.10.10 dst-port=80, with the allowed input being either the IP or interface list for the “MGMT VLAN”.
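A sketch of that GUI rule, reusing 10.10.10.10 from the VETH example above — the 192.168.88.0/24 MGMT subnet is a hypothetical placeholder:

# only the (hypothetical) MGMT subnet may reach Pi-hole's web GUI
/ip firewall filter
add chain=forward src-address=192.168.88.0/24 dst-address=10.10.10.10 protocol=tcp dst-port=80 action=accept comment="MGMT to Pi-hole GUI"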

I’d also get the web GUI working before doing anything else with the actual DNS configuration.

That’s the quick highlight. I think the key is YOU DO NOT NEED A SEPARATE BRIDGE.

That’s fine, but I have a single bridge with multiple VLANs.
So you are saying create a separate VLAN for the docker??

A VETH is kind of like an EOIP interface. If you make it a member of a bridge (and tag it to a particular VLAN’s PVID), then you can assign it an IP in the subnet for that bridge (or VLAN).

Or, since it’s also just an interface on the router, you can give it an address in a unique subnet and all VLANs that use the router as the gateway will be able to talk directly to it. If desired, you could add a NAT rule to replace a legacy DNS server’s address on a particular VLAN.
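For example, a NAT rule like this would rewrite those queries to the container — VLAN20 and 192.168.20.5 are hypothetical placeholders for a VLAN whose clients still point at a legacy DNS server, and 10.10.10.10 is the VETH address from the earlier example:

/ip firewall nat
add chain=dstnat in-interface=VLAN20 dst-address=192.168.20.5 protocol=udp dst-port=53 action=dst-nat to-addresses=10.10.10.10 comment="legacy DNS -> Pi-hole"
add chain=dstnat in-interface=VLAN20 dst-address=192.168.20.5 protocol=tcp dst-port=53 action=dst-nat to-addresses=10.10.10.10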

Since Mikrotik is steering folks toward using one bridge for all containers… to adapt the Mikrotik Pi-Hole example for vlan-filtering=yes in the @pcunite style, there are only two main changes. The rest of Mikrotik’s instructions stay exactly the same.


One, the “Create network” step looks like this instead:

/interface vlan 
add interface=BR1 name=CONTAINER_VLAN vlan-id=98
/ip address 
add address=172.17.0.1/24 interface=CONTAINER_VLAN

/interface veth
add name=veth-mycontainer address=172.17.0.2/24 gateway=172.17.0.1
/interface bridge port
add bridge=BR1 interface=veth-mycontainer pvid=98

/interface bridge vlan
add bridge=BR1 tagged=BR1 vlan-ids=98

# Mikrotik uses masquerade with an IP; this may not be needed, but it is harmless
/ip firewall nat
add chain=srcnat action=masquerade src-address=172.17.0.0/24

# CONTAINER_VLAN should be in some TBD interface list...
/interface list member 
# if using @pcunite style
add interface=CONTAINER_VLAN  list=BASE
# for the defconf config
add interface=CONTAINER_VLAN  list=LAN

Two, the “Forward ports to internal Docker” step is different for vlan-filtering=yes…

Here we’d need to know your firewall rules. In general, with the “Docker” bridge in the “LAN” interface list, you may not need to do anything with the firewall.

You shouldn’t need a dst-nat to Pi-Hole’s web GUI, assuming your current VLAN can access VLAN 98.
Similarly, if you have more complex filter rules, those may need to allow/block…

  • tcp to port 80 at 172.17.0.2 - forward should be allowed ONLY from a management VLAN

If you set Pi-Hole as the DNS server handed out by the DHCP server to clients, and you have restricted VLANs, then also:

  • tcp and udp to port 53 at 172.17.0.2 - forward needs to be allowed from ANY VLAN, as it’s the DNS.
    If you use Mikrotik DNS in the DHCP server, and Mikrotik’s DNS servers point to Pi-Hole, Mikrotik’s DNS is likely allowed by your firewall already.
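As a sketch, those two bullets might translate to filter rules like these — MGMT_VLAN is a placeholder interface name, and the accepts need to sit before any inter-VLAN drop rules:

/ip firewall filter
add chain=forward in-interface-list=LAN dst-address=172.17.0.2 protocol=udp dst-port=53 action=accept comment="any VLAN to Pi-hole DNS"
add chain=forward in-interface-list=LAN dst-address=172.17.0.2 protocol=tcp dst-port=53 action=accept
add chain=forward in-interface=MGMT_VLAN dst-address=172.17.0.2 protocol=tcp dst-port=80 action=accept comment="MGMT only to web GUI"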

In either case, the upstream DNS used for clients is whatever DNS server is configured in Pi-Hole’s web GUI.

Pi-Hole also supports acting as a DHCP server too. I’m not sure that’s necessary or a good idea on a Mikrotik, especially if you already have working VLANs etc. I’d make sure it’s disabled in Pi-Hole’s UI – it’d be better if RouterOS controlled what DNS servers clients use, IMO.

Good stuff here !!

DANG ! So that’s the reason I only saw my router’s IP … time to evaluate this option again and move that little Pi-bugger from NAS to router again.
(my reason: my router is on 24/7. I’d like to power down my NAS when I know I’m not home. Also, I’m not too happy having network stuff running on a NAS.)

Mikrotik’s Pi-Hole example isn’t great IMO. In their design, the container’s IP is always hidden from the rest of the network (e.g. masquerade’d) & the ports exposed by the container (Pi-Hole here) are dst-nat’ed from one/all of the router’s IPs to the container. This isn’t a bad approach per se (it models what Docker Desktop does for networking)… but since RouterOS typically already has stuff running on port 53 and port 80, you may have to disable those so a dst-nat from the router’s IP/port to the container’s IP/port can work for DNS (port 53) and HTTP (port 80) - this need is determined by whether you want to put the Mikrotik “in front of” Pi-Hole, or just use Pi-Hole as the only DNS (thus disabling /ip/dns allow-remote-requests).

By adding an IP address to the VLAN and VETH, you’ll get a “connected” network in /ip/routes. So outside of firewall filter rules, the container IP should be routable from the rest of the networks. It’s just that in a lot of “all-VLAN” approaches you may have drop rules in the firewall filter that block inter-VLAN traffic.

Since the OP doesn’t show his config :wink: … flying blind here. But instead of masquerading+dst-nat’ing the container, you should be able to allow the needed container IP/ports as “accept” in /ip/firewall/filter. A quick fix is to add the VETH (or VLAN) as a “LAN” in /interface/list/member, as that will avoid the !LAN drop rule. But it may be better to just be specific about the IP/port in the “accept” before any inter-VLAN “drop” rules, so again that’s ports 80 and 53. You can then either

  • tell DHCP to use the Pi-Hole container’s IP as the DNS server
  • OR, change Mikrotik’s DNS to use Pi-Hole as its upstream DNS
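A sketch of those two options — the [find] selector below hits ALL dhcp-server networks, so adjust it to your own networks:

# option 1: DHCP hands clients the container IP directly
/ip dhcp-server network set [find] dns-server=172.17.0.2
# option 2: clients keep using the router, which forwards to Pi-hole upstream
/ip dns set allow-remote-requests=yes servers=172.17.0.2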

In all cases, any DoH can happen inside Pi-Hole’s configuration before going to the internet, but internal LAN DNS still uses normal port 53 DNS to either RouterOS or Pi-Hole’s DNS server. So you’d likely NOT want any DoH server listed in /ip/dns if you’re using Pi-Hole/etc – let that do the DoH.

Let’s forget Pi-hole, it’s so yesterday (Betamax). Either discuss adguard or blocky, for example.

FWIW, I’m more supportive of this approach for IP-only services… But this is more food for thought than specific recommendations…

I get the “all VLAN” approach, but for stuff that really only does Layer 3/IP, you don’t need ANY bridge. A DNS server container (or web server, storage server, etc.) does NOT need Layer 2.

So you can also treat the VETH IP address more like you would WireGuard instead – you give each VETH some unique subnet to use, just like you do for a WG interface. Perhaps thinking of the VETH IP as the “WG peer” address, and the gateway address as the “WG interface” address, better explains this approach. So the gateway address you use in the VETH for a container is what you’d set in /ip/address for the VETH. And the IP/subnet you use for VETHs should be one you are NOT using elsewhere in your network.

A DNS server (or most containers) only “EXPOSE” a few IP ports. So in some ways this makes the firewall configuration easier, since you can use the VETH as the src-interface or dst-interface (or its IP address) in your firewall rules. The indirection of any bridge makes the firewall rules a tad more complex IMO.

But you may want to use a bridge anyway, since it makes the “all VLAN” config consistent, at the expense of slightly more complex firewall rules. So it’s not a wrong design to run all containers’ VETHs through the main vlan-filtering=yes bridge as bridge ports.

And some containers do need Layer 2, and those need to be on some bridge. That’d be another argument for just always using a new VLAN for each container, since it provides consistency. Take another example, the netinstall container here: http://forum.mikrotik.com/t/guide-running-netinstall-server-on-a-tik/161056/1 – that one does need to be in a bridge, since you need to bridge some physical port with the netinstall container for it to work! Those instructions can be adapted similarly, with a PVID and bridge port instead of creating a separate bridge. And you’d adapt the above Pi-Hole example to create some VLAN 97 for the netinstall container.
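A hypothetical adaptation (veth-netinstall and ether5 are placeholder names): put both the container’s VETH and the physical port on access VLAN 97 of the existing bridge, instead of creating a new bridge. The untagged VLAN memberships for access ports are added to /interface/bridge/vlan dynamically.

/interface bridge port
add bridge=BR1 interface=veth-netinstall pvid=97
add bridge=BR1 interface=ether5 pvid=97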

And some containers may even NEED to be on a vlan-filtering=yes bridge. For example, the various mDNS proxy containers – these need to listen for multicast on MULTIPLE VLANs, so the container needs to be configured as a TRUNK port to do so & inside, the container is actually VLAN-aware (most containers are NOT VLAN-aware, so they act as access ports). See here: http://forum.mikrotik.com/t/mdns-repeater-feature/148334/179

I’m 100% positive you can figure out any firewall needs. I’m an agnostic guy, but pi-hole is pretty porky and perhaps dated. Never heard of Blocky:
https://0xerr0r.github.io/blocky/installation/

In theory, that’s just changing Mikrotik’s Pi-Hole example to use a different image name, “spx01/blocky”, plus different mounts and env vars. And the networking options for a VETH are now well described – pick your poison there.

Blocky looks interesting, but it requires you to create mount(s), and then likely use ROSE or SMB to allow editing of the files (e.g. Blocky’s config.yaml here) within the mount as well.

Blocky uses a YAML configuration file, it seems, so there’s no web UI to edit. You’d need to wire up the container’s mount to your desktop to edit the files… You’d still need to decide how you want that to work; they have a “reference configuration” here: https://0xerr0r.github.io/blocky/v0.20/configuration/ – it’s complex.
So getting the container part working is only the first step; you’d have to decide how you want Blocky to work… They support a lot of things around white/black lists, and those all require creating additional mount points.

Further, Blocky maps a specific file, config.yaml, into a mount in their example. You cannot do that in RouterOS – only directories can be mounts – so you’d need to use a different directory in the mount, like /app/config, then use their BLOCKY_CONFIG_FILE as a container env with a value of /app/config/config.yaml & have a mount used by the Blocky container with “dst=/app/config” (and you’d place your edited config.yaml in the src=… of that same container mount). Basically, you’d have to learn more about mounts with containers to use it.
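A hedged sketch of that mount/env workaround — the names here (veth-blocky, the envlist/mount names, the src and root-dir paths) are all assumptions, not tested config:

/container envs
add name=blocky_envs key=BLOCKY_CONFIG_FILE value=/app/config/config.yaml
/container mounts
add name=blocky_config src=/blocky-config dst=/app/config
/container
add remote-image=spx01/blocky interface=veth-blocky envlist=blocky_envs mounts=blocky_config root-dir=containers/blocky

You’d drop your edited config.yaml into the src= directory before starting the container.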

FWIW, I’d really recommend you start with the Cloudflare Zero Trust container as your “first” container… It’s a lot simpler – all of the DNS containers require decisions on how you want DNS handled, on top of some container basics. And Blocky likely also entails learning the new ROSE package to better deal with all the files/mounts used by Blocky. Or enabling /ip/smb to allow access to the container’s mount. And then there is the whole editing-YAML-and-Blocky-config thing. Quite the commitment…

In short, Blocky would be a very complex example.

Cloudflare container is a good suggestion for anav.
Then he would be completely prepared on all ins and outs when the ATP package becomes available to educate all us mere mortals.
http://forum.mikrotik.com/t/mikrotik-script-editor-and-chatgpt/165134/15

:laughing:

I feel like a shill in his “cloudflare zerotrust as a TILE package” campaign. e.g. see how difficult containers are!!!

But it’s DNS that makes this complex! (And Mikrotik showing a bridge when one isn’t really needed IMO).

Yes, but AMMO, clearly MT and others are pushing the idea of a separate bridge just for containers; I prefer a separate VLAN for each service/functionality.

Agreed — if you’re using vlan-filtering=yes, then the “new VLAN per container” approach makes TOTAL sense – “if you go VLAN, go all the way!!!” (to paraphrase you, sans color/font). It’s the half-way house in Mikrotik’s example that does more harm than good.

But you’re becoming the poster child for problematic posters :wink:

  • no config!
  • no diagram!
  • changing/unclear requirements!
  • XY problem!

So are we past the /system/device-mode and setting the container registry stuff? :slight_smile:

p.s. oh, and cross-posting: http://forum.mikrotik.com/t/dns-not-working-in-containers-with-dns-over-https-setup-on-router/165011/1 - (although in fairness, that’s discussing how a generic container gets its DNS)

How is Pi-hole so yesterday?

Blocky uses the very same hosts files as Pi-hole, and Adguard is very hit and miss IMO …

Pi-Hole does what it’s supposed to do, BUT it’s not an end-user tool … meaning that it must be paid attention to [understood], plus it works much better when installed on a Raspberry Pi 4+ due to a better CPU.

Key POINT … Tik allows containers to run on their ARM devices, but TIK does not update the Docker binary as quickly as they should, so IMO it’s NOT a good way to use 3rd-party stuff to augment your deficiencies when updates are lagging … In actual fact, the very same thing can be said for Tik and Linux … which means that Tik developers cannot keep up with the changes.

I’ve been using Pi-hole and Unbound as a recursive resolver in containers for some time. So far so good; still, a local recursive resolver has a side effect - slower DNS responses when a host is not in cache, depending on the quality of the internet connection and MT device performance.
[attachment: dns-diag.png]

It is ARM only, and that is annoying. But there is NO “Docker binary” to update. It’s essentially a souped-up “chroot” that uses the OCI container format, and chroot has been the same for 40+ years.
And it’s unclear what issues you’ve actually run into…

Mind sharing how you set up the firewall and bridging? That seems to be the sticking point here. :slight_smile:

And to the point: if it ain’t VLANs (and one or more bridges), I’m not interested.

Router config
config.rsc (48.8 KB)