The "classic" approach to enabling CORS for a REST web service that doesn't support it is to use a proxy server that responds with the needed headers (i.e. responses that contain a few headers starting with "Access-Control-Allow-"). The underlying issue with CORS, and using an NGINX reverse proxy to solve it, is discussed here: https://blogs.perficient.com/2021/02/10 ... -for-cors/.
While I wait, perhaps forever, for Mikrotik to add CORS support to the REST API...
One way to avoid the wait is to run a container on the Mikrotik that does this proxying for the REST API. That's what I'm trying here. So the rest of the post documents how to create an NGINX Docker container that acts as a "reverse proxy" to the real Mikrotik HTTPS server, using Mikrotik's container support. See https://www.docker.com/blog/how-to-use- ... ker-image/ which is what the container/code below is largely based on, with build.sh added to deal with using it on a Mikrotik.
I write this up assuming the basics of containers are somewhat understood, and that you generally understand NGINX and reverse proxies - otherwise the explanation would be long... Stuff like having the web server enabled/working, the firewall, and IP addresses all set right to use a proxy in the first place isn't exactly covered here either. The build.sh script tries to do everything, but you need SSH enabled for your RouterOS account for that part of build.sh to work (see below).
So if you need NGINX, this seems to work for me. The config is specific to CORS and X.509 client authentication - my rough needs - but any nginx.conf could be used for your needs. It's a bit complex, since it actually tries to pass variables through from the build script to runtime, which isn't that easy in Docker as it turns out. I tried to document things in the various scripts, rather than in this post. So if interested, read the code before use! Up to you how you use it, but no warranties here.
To use it, you should just need to put the 3 files - build.sh, Dockerfile, and nginx.conf - into a new directory. Then edit the top part of build.sh as needed. With SSH enabled, setting "SSHHOST" should be all that's needed to automatically deploy/run the container on a specific Mikrotik (assuming you have REST enabled and working, the firewall allows it, likely more assumptions...).
By default we use a 169.254 address for the container - since it's a proxy, we force access through a dst-nat rule for additional safety, as 169.254.x.x should generally not be routable. See build.sh below for the various commands used to set this up on the Mikrotik AFTER the container build.
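Once build.sh has run, the result can be sanity-checked from the RouterOS CLI. A rough sketch, assuming the default names from build.sh below (it puts the "mginx" comment on everything it creates):

```
# virtual interface & router-side address for the container network
/interface/veth print where comment~"mginx"
/ip/address print where comment~"mginx"
# the container itself (should show running once started)
/container print where comment~"mginx"
# the dst-nat rule that forwards proxy traffic to the 169.254 address
/ip/firewall/nat print where comment~"mginx"
```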
The container is named "mginx" as a pun on the embedded Nginx web server used. Change it in build.sh as desired.
Here are the three files needed; the build.sh script creates the self-signed certs to use.
Dockerfile to build NGINX container for Mikrotik
This must be named Dockerfile (no extension) and should live next to the build.sh and nginx.conf files.
This is what docker "builds". It's based on the official nginx docker image, which hides the actual installation/setup of nginx. FWIW, "ARG" values are provided at build time while "ENV" values are available inside the container, so we link them in the Dockerfile - otherwise pretty standard.
Code: Select all
FROM nginx:alpine
# NOTE: Dockerfile requires key/cert for proxy webserver.
# ./build.sh run from the same directory will generate them
# But can be generated using the following at docker build terminal:
# openssl req -x509 -newkey rsa:4096 -keyout self.key.pem -out self.cert.pem -sha256 -days 1024 -nodes -subj "/C=AQ/O=Unsecured Worldwide/OU=Self Signed/CN=router.lan"
# "build time" arguments to override container image default ENVs
ARG defaultProxyPort=6443
ARG defaultContainerGateway=169.254.8.1
ARG defaultRouterHttpsPort=443
ARG defaultProxyHostname=router.lan
ARG defaultX509AuthMode=off
ARG defaultBase64AuthBypass="b@se64encod3dUser:P@ssw0rd="
# "runtime" arguments; these can be provided to the container at launch
ENV ROUTERHOST=$defaultContainerGateway
ENV ROUTERHTTPSPORT=$defaultRouterHttpsPort
ENV OURPROXYPORT=$defaultProxyPort
ENV OURPROXYHOST=$defaultProxyHostname
ENV BASE64AUTHBYPASS=$defaultBase64AuthBypass
# Use X509 authentication?
# e.g. sets nginx's ssl_verify_client, values: on | off | optional | optional_no_ca
ENV X509AUTHMODE=$defaultX509AuthMode
# configuration file used by proxy
# ... note it goes to ".../templates", but an NGINX script parses it to
# /etc/nginx/conf.d/default.conf
# ... but this is how we can use any ENV defined in this Dockerfile
# inside the nginx configuration (which does NOT support environment vars)
COPY nginx.conf /etc/nginx/templates/default.conf.template
# SSL server certificates - these are needed to enable SSL, which is required by CORS
# ... since we proxy SSL, we either need to export RouterOS's key/cert to use
# or use self-signed ones & trust them in the browser's computer,
# or do more fancy stuff like certbot in a container, etc. #self-signed works fine
COPY self.cert.pem /etc/ssl/default.crt
COPY self.key.pem /etc/ssl/default.key
# This is not needed unless you modify the config to use X.509 authentication
# (e.g. via X509AUTHMODE=on) - add your own file, and change "self.cert.pem" below
# to use a different root CA as the trusted source for verifying a *client* X.509 cert received.
COPY self.cert.pem /etc/ssl/certauth.ca.crt
# Copy the "public" web pages, from the build's "public" directory, disabled by default
# COPY public/ /home/www/public/
# Not sure RouterOS uses this, but it tells the container system the proxy port
# we'll be listening on.
EXPOSE $defaultProxyPort
# ENTRYPOINT comes from the nginx parent image - normally there'd be one here, but its absence is not an error.
Build Script (./build.sh)
This must be named build.sh and should live next to the Dockerfile and nginx.conf files. It can be run on Mac/Linux using "./build.sh" (you may have to change permissions to allow +x, depending on OS).
Code: Select all
#!/bin/sh
# Container name
IMAGENAME=mginx
echo "Container name (e.g. .tar name, without the .tar) = $IMAGENAME"
# Container platform to build
BUILDPLATFORM=linux/arm/v7
echo "What platform to build? = $BUILDPLATFORM"
# Build copies the container TAR to the Mikrotik and configures it,
# using SSH/SCP to do so. Format is <username>@<routeros_ip>.
# SSH must be enabled on RouterOS for this to work.
# If an SSH key is defined for the user, no password is required.
# Otherwise, the build will prompt for credentials for upload & config
SSHHOST=admin@router.lan
echo "SSH/SCP user@host = $SSHHOST"
# When copied, where on RouterOS file system (path only)?
TARDEST=sata1-part1
echo "Container build (.tar) will be copied to: $TARDEST/$IMAGENAME.tar"
# A subnet is used for the container. Typically 24 (for a /24)
SUBNETSIZE=24
echo "Container network using a /$SUBNETSIZE"
# Container's IP address
CONTAINERIP=169.254.8.2
echo "Container IP is: $CONTAINERIP/$SUBNETSIZE"
# ... Must be same subnet (based on SUBNETSIZE) as ROUTERHOST below
# IP address of router running the container
ROUTERHOST=169.254.8.1
echo "Container Gateway (Hosting RouterOS) IP is: $ROUTERHOST/$SUBNETSIZE"
# ... Must be same subnet (based on SUBNETSIZE) as CONTAINERIP above
# & both that subnet must NOT overlap any existing subnet on router
# HTTPS port configured on the Mikrotik that the proxy will use
ROUTERHTTPSPORT=443
echo "Hosting RouterOS HTTPS URL: https://$ROUTERHOST:$ROUTERHTTPSPORT"
# Proxy server's HTTP hostname
OURPROXYHOST=router.lan
echo "Proxy Server Name: $OURPROXYHOST"
# Proxy server's listen port for proxy requests
OURPROXYPORT=6443
echo "Proxy Server Address (may need dest-nat rule on RouterOS): https://$OURPROXYHOST:$OURPROXYPORT"
# When creating the container's network, what virtual interface name to use
NETIFACE=vethMginxProxy
echo "Container will create/use virtual network interface: $NETIFACE"
# SSH & SCP are used to automate deployment and configuration
SSHCMD=ssh
SCPCMD=scp
# but to disable, un-comment below
#SSHCMD=echo
#SCPCMD=echo
# Use X509 Authentication
# e.g. sets nginx's ssl_verify_client, values: on | off | optional | optional_no_ca
X509AUTHMODE=off
# Self-signed Validity
# When generating self-signed certificate, the number of days they should be valid
SELFSSLDAYS=1024
# This isn't used unless the nginx.conf is explicitly changed, but
# it's the Base-64 version of user:password for your router. This is fake.
# It's here to avoid needing to change container code to use it if desired.
BASE64AUTHBYPASS="baae64eec0d3d48e57a00902d="
### START BUILD
# NOTE: All build config variables should be assigned above, and only _used_ below...
# Detect actual router's SSL port (TODO: override the configured one if got VALID one...)
# DETECTED_SSLPORT=`$SSHCMD $SSHHOST ":put [/ip/service/get www-ssl port]"`
echo "Proxy is using $ROUTERHTTPSPORT - this needs to match your www-ssl port on RouterOS!"
# Generate self-signed key and certificate for container web server
# ... typically this is only done once, you can comment out if you want to rebuild and use same certificate
openssl req -x509 -newkey rsa:4096 -keyout self.key.pem -out self.cert.pem -sha256 -days $SELFSSLDAYS -nodes -subj "/C=AQ/O=Unsecured Worldwide/OU=Self Signed/CN=$OURPROXYHOST"
# ... HINT: if a build is being deployed to the same server again in the future, you may want to comment out the above,
# as it will then reuse the already generated keys = the client-side trust doesn't need to change;
# otherwise, you will have to "re-trust" the newly generated self-signed cert on your PC before using CORS again
# These create a potential client/browser-side X.509 authentication cert that can be used to access REST
# Note: A password is still required by default even WITH a cert. See nginx.conf for details on how to ONLY require X.509.
# X509AUTHMODE must be "on" for these to have any effect.
# X509AUTHMODE=on
# create client certificate request
# openssl req -newkey rsa:4096 -keyout client.key.pem -out client.csr.pem -nodes -days $SELFSSLDAYS -subj "/C=AQ/O=Unsecured Worldwide/OU=Self Signed/CN=X509 Client Access to $OURPROXYHOST"
# sign the request using the server's self-signed SSL certificate as the "CA" - to this instance, a signed client cert = authorized
# openssl x509 -req -in client.csr.pem -CA self.cert.pem -CAkey self.key.pem -out client.cert.pem -set_serial 01 -days $SELFSSLDAYS
# the PEM file can be imported on the local system (or another system) to be able to access the proxy with X.509
# EXAMPLE: This converts the generated keys into a PKCS12 file that can be imported on a PC - but it requires a passphrase,
# & asking for one during a build may be confusing. But uncomment it, load it in the Certificate Keychain, and a browser can use it.
#openssl pkcs12 -export -clcerts -in client.cert.pem -inkey client.key.pem -out client.p12
echo "** Starting Docker Build **"
# Using --build-arg to convert the shell env vars into docker build ARG values used by Dockerfile...
# n.b. which then convert back to env vars inside the container,
# so default settings can be from here (buildtime) and built-in to image
# or changed at runtime inside container's settings later
# Build the container for platform
docker buildx build --platform $BUILDPLATFORM \
--build-arg defaultContainerGateway=$ROUTERHOST \
--build-arg defaultRouterHttpsPort=$ROUTERHTTPSPORT \
--build-arg defaultProxyHostname=$OURPROXYHOST \
--build-arg defaultProxyPort=$OURPROXYPORT \
--build-arg defaultX509AuthMode=$X509AUTHMODE \
--build-arg defaultBase64AuthBypass=$BASE64AUTHBYPASS \
-t $IMAGENAME .
echo "** Docker Build Completed **"
# Save and generate build as .tar file
docker save $IMAGENAME > $IMAGENAME.tar
echo "Container image, $IMAGENAME.tar, saved locally"
pwd
# Copy the tar to the router
echo "Copy to RouterOS..."
SCP_COPY_CMD="$IMAGENAME.tar $SSHHOST:$TARDEST/$IMAGENAME.tar"
echo $SCP_COPY_CMD
$SCPCMD $SCP_COPY_CMD
# Create the network interface and firewall rules
echo RouterOS configuration...
# remove any veth associated with the container
SCMD_RMNET="{ /interface/veth remove [find comment~\"$IMAGENAME\"]; /ip/address remove [find comment~\"$IMAGENAME\"] }"
echo $SCMD_RMNET
$SSHCMD $SSHHOST "$SCMD_RMNET"
# add a veth to use for container
SCMD_MKNET="/interface/veth add address=$CONTAINERIP/$SUBNETSIZE gateway=$ROUTERHOST comment=\"$IMAGENAME\" name=\"$NETIFACE\""
echo $SCMD_MKNET
$SSHCMD $SSHHOST "$SCMD_MKNET"
# add IP address to same veth for router
SCMD_MKIP="/ip/address add address=$ROUTERHOST/$SUBNETSIZE interface=\"$NETIFACE\" comment=\"$IMAGENAME\""
echo $SCMD_MKIP
$SSHCMD $SSHHOST "$SCMD_MKIP"
# remove any containers of our type - we only want one
SCMD_RMDOCK="/container { :foreach i in=[find comment~\"$IMAGENAME\"] do={stop \$i; :delay 10s; remove \$i }}; /"
echo $SCMD_RMDOCK
$SSHCMD $SSHHOST "$SCMD_RMDOCK"
# add a new container using this build and start it
SCMD_MKDOCK="/container { add file=$TARDEST/$IMAGENAME.tar logging=yes start-on-boot=yes interface=\"$NETIFACE\" comment=\"$IMAGENAME\"; :delay 10s; start [find comment=\"$IMAGENAME\"]; }; /"
echo $SCMD_MKDOCK
$SSHCMD $SSHHOST "$SCMD_MKDOCK"
# re-create a NAT dst-nat rule that provides access to proxy
SCMD_NATRULE="/ip/firewall/nat { remove [find comment~\"$IMAGENAME\"]; add action=dst-nat chain=dstnat dst-port=$OURPROXYPORT protocol=tcp to-addresses=$CONTAINERIP to-ports=$OURPROXYPORT comment=\"$IMAGENAME\" }"
echo $SCMD_NATRULE
$SSHCMD $SSHHOST "$SCMD_NATRULE"
echo "** END **"
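As a concrete sketch of the commented-out client-cert workflow above: generate the server's self-signed cert, create a client CSR, sign it with the server cert/key acting as the "CA", and check that the client cert's issuer matches the server cert's subject. Key size and validity are shortened here purely for illustration:

```shell
#!/bin/sh
# Sketch of the commented-out client X.509 steps in build.sh,
# with shortened key size/validity just for this example.
# 1. server self-signed cert (build.sh normally creates this)
openssl req -x509 -newkey rsa:2048 -keyout self.key.pem -out self.cert.pem \
  -sha256 -days 30 -nodes -subj "/C=AQ/O=Unsecured Worldwide/CN=router.lan"
# 2. client certificate request
openssl req -newkey rsa:2048 -keyout client.key.pem -out client.csr.pem \
  -nodes -subj "/CN=X509 Client Access"
# 3. sign the request, using the server's self-signed cert as the "CA"
openssl x509 -req -in client.csr.pem -CA self.cert.pem -CAkey self.key.pem \
  -out client.cert.pem -set_serial 01 -days 30
# the signed client cert's issuer is now the server cert's subject
openssl x509 -in client.cert.pem -noout -issuer
```

From there, the pkcs12 step in build.sh bundles client.key.pem/client.cert.pem for import into a browser/keychain.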
NGINX configuration (must be named ./nginx.conf)
The file must be named nginx.conf, and placed next to build.sh and Dockerfile.
This uses the NGINX container's "template" feature, so that environment variables can be passed into the NGINX configuration file (which does NOT support env vars). See https://www.docker.com/blog/how-to-use- ... ker-image/ for details. It just needs to be placed next to the Dockerfile at build time, and will go to the right spot in the container image. Again, the shell variables going to ARG at build time, then to ENV, then back through an NGINX-provided shell script inside the container that processes the variables in the Dockerized nginx.conf, is part of the magic in this.
Now, how the Nginx configuration works is up to you. This one proxies CORS and could support X.509 certificates with minor tweaks to the build process. But if you have other needs for a web server in a Mikrotik container AND know something about nginx, change it at will. In fact, the most interesting part here is the approach to the variables, which make it from the building computer to runtime while being customizable at any of those points - for example, overridden later in RouterOS using env vars under /container.
Code: Select all
###
# NGINX Proxy for Mikrotik RouterOS to add X.509 certs & CORS supports
###
# Technically, this is a "template"...
# the $ { stuff } are Docker environment ENV variables that
# can be provided at *runtime*, this happens via script
# included by NGINX that parses this code before using it.
server {
# Proxy all RouterOS traffic received on ENV:OURPROXYPORT (6443 default)
listen ${OURPROXYPORT} ssl;
### X.509 Client Authentication Support ###
# This is disabled by default, but tested and "plumb'ed".
# If a *client* browser using this proxy has an X.509 cert installed
# that was *signed* by the CA referenced in ssl_client_certificate,
# that can be used to authenticate to this proxy, thus *adding* security.
# If you trust your certificate setup, the basic auth needed to use the
# REST API can be provided using proxy_set_header - so no password needs
# to be used with CORS, only the X.509 cert. Instead, after auth, this
# proxy can add the needed fixed username/password before passing it on.
ssl_client_certificate /etc/ssl/certauth.ca.crt;
ssl_verify_client ${X509AUTHMODE};
# HINT: Ideally replace the certauth.ca.crt file as part of Dockerfile.
# The CA KEY file is NOT needed, just a PEM version of the CA cert when you use a "real" one.
# But since we have the server's self-signed certificate, that can be the CA
# used to sign client certs & auth them - this could be done via /container/shell
# on the router as one workflow to get clients X.509 certs. Using /certificate on
# the Mikrotik would likely be better, but even more complex to explain. Thus just a "HINT" here.
# NOTE: The default is to use the self-generated SSL server cert as the CA used to verify client X.509.
# This is done by copying self.cert.pem to two locations; for X.509 using a different CA,
# just add a file to the build and point certauth.ca.crt at it instead.
### X.509 Passthrough Authentication
# This is commented-out & disabled - since you need to customize it if used...
# "X.509 passthrough authentication" allows this proxy to provide a username/password
# to the RouterOS REST API on behalf of the user. Since it's a fixed credential, you'd
# want to really think about the security model before use (e.g. X.509 is required & working):
# proxy_set_header Authorization 'Basic ${BASE64AUTHBYPASS}';
# WARNING: Using proxy_set_header Authorization causes every proxied call to be
# authenticated WITHOUT a password from the client. If you have "ssl_verify_client on",
# this container verifies a valid cert BEFORE automatically providing the password -
# that's how this is meant to be used.
# HINT: You'll also need something to find/encode the Authorization value used above, like Postman.
# Use SSL key and cert installed by Dockerfile at *build* time
# ./build.sh that creates the .tar image, generates self-signed ones by default
# To use your own, likely better to reference them in Dockerfile to these names:
ssl_certificate /etc/ssl/default.crt;
ssl_certificate_key /etc/ssl/default.key;
# Similarly, this should match the CN of the SSL certificate.
# For self-signed, it does & router.lan is used in the RouterOS default config
server_name ${OURPROXYHOST};
# This is just notes. Logging goes to stdout/stderr by default, which on RouterOS
# is preferred at this point over going to the container disk. In theory, the log
# directories should be mounted in the Dockerfile/container, but RouterOS does not support that.
# Leave commented out for now:
#access_log /var/log/nginx/nginx.vhost.access.log;
#error_log /var/log/nginx/nginx.vhost.error.log;
# For a static web site that is not proxied, add files via Dockerfile.
# Disabled - mainly for testing & it requires changing "location /" to "location /rest" or the like:
#root /home/www/public;
#autoindex on;
#index index.html;
# ... TODO: the above can be used to include an example JS code that uses the proxy in future
# For all requests, we just add CORS headers and potentially more. While targeted
# at /rest, we just always add CORS to everything going through the proxy.
# NOTE: the proxy only proxies to the local router running the container,
# and NOT just any server (although that would be possible with a different config)
# For the actual root, just redirect to the real root
# e.g. logos/graphics would never need CORS
location = / {
proxy_pass https://${ROUTERHOST}:${ROUTERHTTPSPORT};
}
location / {
# This is what a web browser JavaScript needs to see to use RouterOS REST API,
# which is why there is this container in the middle...
add_header 'Access-Control-Allow-Origin' '*' always;
add_header 'Access-Control-Allow-Methods' 'GET, POST, PATCH, PUT, DELETE, OPTIONS' always;
add_header 'Access-Control-Allow-Headers' 'Authorization,DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range' always;
add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range' always;
# since CORS needs OPTIONS
# we provide a generic answer, that yes we support it.
if ($request_method = 'OPTIONS') {
add_header 'Access-Control-Allow-Origin' '*' always;
add_header 'Access-Control-Allow-Headers' 'Authorization,DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
add_header 'Access-Control-Max-Age' 1728000;
add_header 'Content-Type' 'text/plain; charset=utf-8';
add_header 'Content-Length' 0;
return 204;
}
# And as a proxy, we need to actually do that. We can add headers OUTBOUND
# so theoretically RouterOS knows the request was proxied. But ROS doesn't care AFAIK.
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# This is what does all the work. It takes any request received here and just passes
# along all the headers/data to the "real" RouterOS web server. This is configurable
# at runtime via ENV vars; 169.254.8.1:443 is the default
proxy_pass https://${ROUTERHOST}:${ROUTERHTTPSPORT};
# ... again the $ { stuff } can be provided in environment at *runtime* on Mikrotik container,
# (or buildtime in Dockerfile)
}
}
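On the BASE64AUTHBYPASS value referenced above and in build.sh: it's simply the base64 of "user:password". One way to produce it (the credential here is obviously a placeholder):

```shell
#!/bin/sh
# Base64-encode a RouterOS user:password pair for a Basic Authorization header.
# "admin:changeme" is a placeholder - substitute your real credentials.
printf '%s' 'admin:changeme' | base64   # YWRtaW46Y2hhbmdlbWU=
```

The result can go into BASE64AUTHBYPASS, or be tested directly against the proxy with something like curl's -H 'Authorization: Basic YWRtaW46Y2hhbmdlbWU=' (note printf, not echo, so no trailing newline gets encoded).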
edit: It's Nginx NOT Ngnix - fixed for clarity
edit 2: X.509 not X506