I am working on a client-side website that displays active firewall connections on a world map. To keep it as simple as possible, I am using plain JS and the REST API.
This is intended for LAN use only, and I don’t understand why I am forced to hassle with SSL. Self-signed root CAs etc. are not a problem, but the old API allows unencrypted access, so why doesn’t REST?
It is a bit counterintuitive for RouterOS to be restricted in that way. Please let me access REST via plain HTTP; I am totally capable of locking down access by myself.
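For reference, this is roughly all the client code my use case needs. A minimal sketch, assuming the default /rest URL prefix (REST paths mirror the CLI menu structure); the router address and the credentials are placeholders:

```javascript
// Minimal sketch: poll active firewall connections from the RouterOS REST API.
// ROUTER and the credentials below are placeholders -- adjust to your setup.
const ROUTER = "https://192.168.88.1";            // your router's LAN address
const AUTH = "Basic " + btoa("apiuser:apipass");  // a dedicated (read-only) user

async function fetchConnections() {
  // The REST path mirrors the CLI menu:
  // /ip/firewall/connection -> /rest/ip/firewall/connection
  const res = await fetch(`${ROUTER}/rest/ip/firewall/connection`, {
    headers: { Authorization: AUTH },
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json(); // array of objects (src-address, dst-address, protocol, ...)
}
```

With a self-signed cert, the browser will refuse this fetch until the CA is imported into the trust store, which is exactly the hassle I’m complaining about.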
I get your point, sure. However, to me the whole excellence of RouterOS is the ability to decide for myself. I could open WebFig to the WAN, could run without any firewall rules whatsoever, etc.
I think even stdcfg is considered “unsafe” by most, or, even better, you can set the device up with no config at all: ALL doors wide open.
So the reasoning for this one feature in particular being cut off, especially given that I first have to enable the API anyhow, appears somewhat strange.
Although it seems a bad idea in most cases, the OP is right that it’s not consistent: WebFig allows an unencrypted password.
But I just think it would be better if “rest” was its own thing under /ip/services (e.g. perhaps allowing a different port, a cert or no cert, sending CORS headers, etc.). Or perhaps the existing www and www-ssl could allow more configuration (e.g. checkboxes to enable REST vs. WebFig, CORS, etc.). Either way works IMO; mainly a few more options on what the web server exposes.
I say this since there are likely also cases where someone may want WebFig but NOT REST, or the reverse: REST but no WebFig.
But that’s a bit unfair. Most routers for home/SMB include a decent firewall on the WAN port, and unsecured (HTTP) WebFig is enabled on the LAN side only.
Now I wouldn’t be surprised if EU regulations required secured access for configuration at some point in the future. And it’s not that hard to generate a self-signed cert, which would let you move your code over to using REST (likely with some parameter in your calling code to ignore untrusted certs).
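For the record, the self-signed route is only a handful of commands on the router itself. A sketch (the certificate names and the common-name are placeholders; put your router’s LAN address or hostname in common-name):

```
/certificate add name=local-ca common-name=local-ca key-usage=key-cert-sign,crl-sign
/certificate sign local-ca
/certificate add name=https-cert common-name=192.168.88.1
/certificate sign https-cert ca=local-ca
/ip service set www-ssl certificate=https-cert disabled=no
```

Signing with a local CA (instead of a bare self-signed cert) means you only have to import local-ca once into the client’s trust store.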
The REST API over HTTPS is also extremely slow on LHG, SXT, and similar devices.
Before possibly porting our MikroTik LTE monitoring agents to the REST API, we carried out some pretty extensive tests, but due to the above problems we decided to stick with the old API.
The main risk is that authentication credentials can be read by passive eavesdropping; there is no need to intercept and modify packets. If data is sent over a hub, or the admin is storing traffic headers for logging purposes, the password can land in unexpected places.
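To make that concrete: the REST API uses HTTP Basic authentication, and Base64 is an encoding, not encryption, so a captured header decodes straight back to the password. A sketch with made-up credentials:

```javascript
// An Authorization header as it would appear in a sniffed plain-HTTP request
// (credentials here are made up for illustration).
const header = "Basic YWRtaW46c2VjcmV0";

// Base64 is trivially reversible -- no key material needed.
const decoded = Buffer.from(header.split(" ")[1], "base64").toString("utf8");
console.log(decoded); // prints "admin:secret"
```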
WebFig over HTTP does some encryption of its own, and even SwOS uses Digest access authentication to at least hide passwords.
This would be a whole new level of insecure, compared to HTTP.
Well, it’s nice to hear you are picky about security in this case!
However, I don’t think that people in general consider WebFig/SwOS and the old API to be secure, right? So why not let the developer/user decide what should be used, or at least try to optimize the encryption model so it’s usable on the endpoint devices where the large numbers of units are found (i.e. PtMP, PtP, AP, LTE, etc.)?
I think so too. That is really the main point of my post: give the user more options. Nobody is stopping me from enabling WebFig WAN access with “admin / admin” as credentials, or from enabling Telnet access, which would also send login credentials as plain text, right? So why am I stopped here?
It would be very much appreciated if this requirement were dropped. It adds an additional burden on people who just want to “play” with the REST API.
People who want to secure their network presumably wouldn’t consider activating Telnet or www at all.
For the rest of us, please let us do some stuff without too much hassle.
If REST became a dedicated service, even better; then settings such as CORS might become an option, too.
But I do think improving the “Let’s Encrypt” support to allow a different auth method would also help this “quickest path to playing with REST” use case.
A broader definition of:
/certificate/enable-ssl-certificate
could, optionally, generate a self-signed cert to make HTTPS work; the “enable-ssl-certificate” command name kind of implies it should help more with HTTPS than it does …
Definitely. Making it simpler to turn HTTPS on would allow much faster adoption than all the hoops and loops (I know I’m exaggerating a bit, but it’s really inconvenient).
Most languages’ HTTP client libraries force certificate checks, each with a different way to disable them, and some don’t use the OS’s trust store, adding more complexity with self-signed certs.
I do think improving “/certificate/enable-ssl-certificate” so it is more likely to actually get you a certificate would solve the more general problem: you want TLS, but it isn’t exactly easy, with quite a few steps beyond even RouterOS.
If a container wanted to communicate back to its host, I don’t think HTTP is a bad option. And unsecured traffic allows packet sniffing to better troubleshoot issues in the calls (and with TZSP you could watch it live in Wireshark from the dev PC).
Anyway, a checkbox for “Allow REST” in /ip/service for www wouldn’t be bad as an initial step.
Don’t get me wrong, I’m all for letting people decide. If someone wants unencrypted REST, it should be their choice. I’m also a big fan of configurable things. Currently you can enable the web server and it’s all or nothing (WebFig, REST, …) => not good. Same for the current enable-ssl-certificate: it’s a hardcoded shortcut with zero flexibility.