Disable HTTPS-based Services (except REST API)

I'm doing a fair bit of work with the REST API on RouterOS and want to enable the https://router/rest/ interface, but disable the https://router/ and https://router/webfig/ pages. Obviously the traditional approach of disabling the www-ssl service entirely isn't going to work here and I'm not finding many other options. Does anyone have any thoughts on this?

AFAIK it cannot be separately disabled, since it belongs to the same web service. MikroTik could add an option to the www(-ssl) service to disable individual features (management UI or API).
For the HTTP (unencrypted) protocol (the www service) you can use a firewall L7 rule to restrict access to the /rest path prefix, but for HTTPS (www-ssl) a reverse proxy in the middle is needed (Nginx, Caddy, ...) where path-restriction rules can be applied. The proxy can run in a ROS container or on an external device, and access to the ROS www-ssl service can be restricted with firewall rules to accept traffic only from the proxy. Of course, the API client will then need to access the ROS API through the proxy endpoint.
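A rough sketch of the HTTP-only L7 idea might look like the following (the regexp, ports, and rule ordering are illustrative assumptions, not a tested recipe; note that the L7 matcher only inspects the first packets of a connection, so keep-alive connections with a later non-/rest request would slip through):

```routeros
# Hypothetical: match HTTP requests whose path starts with /rest/
/ip firewall layer7-protocol add name=rest-only regexp="^(GET|POST|PUT|PATCH|DELETE) /rest/"

# Accept matching plain-HTTP traffic to the router, drop everything else on port 80
/ip firewall filter add chain=input protocol=tcp dst-port=80 layer7-protocol=rest-only action=accept
/ip firewall filter add chain=input protocol=tcp dst-port=80 action=drop
```

This cannot work for HTTPS, since the request path is inside the encrypted payload; hence the reverse-proxy suggestion above.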


Thanks. That’s what I was worried the answer might be. I’ll put in a feature request for disabling individual services within the web process. In the meantime, I’ll just make sure that the web service is only reachable from ::1 and will use an SSH tunnel from the API client.
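For reference, a minimal sketch of that tunnel arrangement (host names, ports, and credentials are placeholders; it assumes www-ssl is reachable only on the router's loopback and that SSH access to the router is allowed):

```shell
# Forward local port 8443 to the router's www-ssl service over SSH
ssh -N -L 8443:127.0.0.1:443 admin@router.example &

# REST calls then go through the tunnel; -k is needed if the
# router's certificate doesn't match "localhost"
curl -k -u apiuser:secret https://localhost:8443/rest/system/resource
```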

Why do you want to disable these? If it is public facing, you might want to consider using VPN.

It’s not public-facing. I typically disable all web-based access to a router. I need the REST API for some automation work that I’m doing, but don’t like that I have to open additional (and unnecessary) attack vectors to accomplish this.

You could consider using the traditional API instead of the REST API. That one you can separately enable/disable.
Also of course it is possible to restrict the IP address that can access the web service (or each service in that list), so you can allow your REST API host access while denying access to everyone else.
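For example, the per-service restriction could be sketched like this (the address is illustrative; only the named host would then be able to connect at all):

```routeros
# Allow only the automation host to reach the web and API services
/ip service set www-ssl address=192.168.88.10/32
/ip service set api address=192.168.88.10/32
```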

Recoding the engine to use a non-standard API is something I’d rather avoid. I’ve already taken steps to restrict access. Still, this has required me to document the necessity for a service exception with the SOC and I’m going to have to justify it at every review. If I can avoid adding this to my list of things to do each quarter, I would really like to.

Is an SSH tunnel really necessary for non-public access? Using firewall filter rules (which I prefer over per-service restrictions for simple blocking) should be enough for HTTPS, because it is already encrypted. Are you expecting HTTPS MITM attacks on the network against the TLS certificate used by the ROS www-ssl service?
Also, a tunnel is not extra protection compared to a firewall restriction when someone has access to the client device on which the tunnel is created: instead of accessing the router port, they can access the local tunnel port in the same way.
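The firewall-filter variant of that restriction could be sketched as (addresses illustrative; a real ruleset would normally accept established/related traffic first):

```routeros
# Permit HTTPS to the router only from the REST client host
/ip firewall filter add chain=input protocol=tcp dst-port=443 src-address=192.168.88.10 action=accept comment="REST client"
/ip firewall filter add chain=input protocol=tcp dst-port=443 action=drop
```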

I don’t really get to make the decisions with respect to what is necessary. If I tunnel through SSH and restrict access to HTTP/HTTPS to ::1, I can honestly say that I’m not opening those services to the network and don’t have to document exceptions. It’s an annoying wrapper, but it meets the requirements. (I suppose it adds the benefit of key authentication on top of basic authentication, but that’s a side thing.)

Well, this goes beyond the question in the OP (disabling the admin UI while allowing the API on the same service) into particular security requirements (key authentication on top of basic authentication), and a tunnel would still be needed to enhance security regardless of whether the admin UI is available or not.
As for a security policy that considers a service open on the network when it is restricted to a specific client host, but not when it is tunneled to that same host: that seems wrong to me, but over my career I have read a lot of silly policies, so I'm not surprised.


Right with you there. It seems wrong to me too, but when there’s approval to open the REST API on the network and denial for opening the web-based admin interface on the network, I either have to find a way to open one and not the other or just take both off of the network entirely. The SSH tunnel does that.

If they are that paranoid, suggest to them restricting access at the OS level to the local tunnel port on the API client host for users who are not allowed to access the service (e.g. with SELinux on Linux) :slight_smile:


Their requirements are simpler than that. No web-based user interface is to be exposed on the network. Automation APIs can be exposed with restrictions. If I can't separate those, I'm not allowed to do either.

The SSH tunnelling idea isn't theirs. It's just my hack to get what I need while still meeting the strict letter of their requirements. Per my OP, I'd rather just be able to disable webfig while leaving the REST API open (with restrictions), but that's not possible unless I use insecure HTTP and L7-protocol filters. Unfortunately, the REST API uses basic authentication, and that would mean sending credentials across an unencrypted channel, which is another no-no.
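The basic-auth concern is easy to demonstrate: the Authorization header carries base64 of "user:password", which is an encoding, not encryption, so anyone sniffing plain HTTP recovers the credentials directly (the credentials below are made up):

```shell
# Basic auth header value is base64("user:password") -- trivially reversible
echo -n 'apiuser:s3cret' | base64
# prints: YXBpdXNlcjpzM2NyZXQ=

echo 'YXBpdXNlcjpzM2NyZXQ=' | base64 -d
# prints: apiuser:s3cret
```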

Yes, it's understandable that HTTP shouldn't be used because it is not encrypted, but HTTPS restricted by firewall rules to a specific host is IMO not the same as HTTPS open to all hosts on the network, and it shouldn't be treated as a security risk of the same level.
I understand that blocking the web-based user interface on the network prevents potential attacks on the web framework, but when it is restricted (by firewall) to a specific host, the attack must be made from that host, and it can be made over the SSH tunnel just as well if the attacker has access to that host, unless there are user restrictions at the OS level as I mentioned previously.
Also, someone who has ROS credentials for the REST API can do any damage they like when a REST-API-only service is exposed to all hosts on the network, but not when the REST API along with the web UI is exposed only to a restricted host to which they have no access.
If the SSH tunnel is a workaround for the security requirements then so be it, but I find it a bit silly considering all of the above.

I don't always agree with the requirements of the different security "best practices" checkbox-style compliance regimes, but this one sort of makes at least some sense.

About 9 out of 10 serious security problems with routers (and yes, 93.7% of all statistics on the internet are made up) are some compromise in the web admin interface. This is understandable: any web framework has a lot of moving parts, and on an embedded platform you additionally have a lot of platform-specific difficulties. I also somewhat understand why this doesn't apply to APIs: these only require basic auth support, and then the request goes straight to the application without a framework involved.

I actually don't consider this solution of tunneling the traffic to a trusted host silly (although I would rather use a permanent WireGuard tunnel; SSH is no less secure, though...).

And I think that the ability to enable/disable specific endpoint would be very welcome. Almost this exact request came to me in relation to Let's Encrypt challenges: why can't you just turn that on specifically? I would actually be really glad if there was a really tiny separate web server for that purpose alone that ran as an unprivileged user and could only respond with the challenge string...


@lurker888 from which host will the attack be performed if access to the service is restricted by firewall?

One spoofing the correct IP.

To not be glib, I'm going to answer in good faith. In most setups it's not practical to ensure that an IP identifies a given piece of hardware. The simplest way to ensure this is to use an encrypted tunnel.

With such speculation, one could also have access to the host where the tunnel or VPN terminates and perform attacks from there.

Okay, I’m drifting off into the weeds with this, but… Access to the host is less of a problem. A Linux orchestration container is already stripped down to the bare essentials for API access. It doesn’t have users nor does it have access methods. It gets created and destroyed as needed.

There's always some speculation in modeling threats. I'm just saying that on most company networks that I see every day, when a computer is locked in an office or server room somewhere, you can recognize with a layperson's eyes how difficult (or easy) it is to get access to it.

On most of these networks, the places and points where you can claim to be whoever you want to be are too numerous to list.

So yes, ensuring that the host is who they say they are is an improvement.

By the way, this is the approach that all the big firewall vendors use: they don't whitelist their servers' IP addresses, etc.; they TLS-encrypt their management traffic to a central location (which in their case is their cloud presence) and you make changes via their system. The OP does the same, only locally.

Fortunately, on MikroTik devices this doesn't cost anything: there's no subscription or license, and with the usual amount of management traffic there really isn't any performance hit.