I’m about to remotely deploy 30 RouterOS 6.4X devices.
Some are Internet-facing while some are not.
I would like to manage them with WebFig over HTTPS if possible, along with SSH.
Only a couple of Linux PCs (from sysadmin team) will ever need to access WebFig.
May I add that I’m not familiar with PKI or certificate concepts: as a sysadmin, I use them when I have to, but I don’t have a deep understanding.
To enable HTTPS on a RouterOS 6.48 machine, I followed the instructions from [1].
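For context, the kind of on-device setup I mean is roughly this (a sketch only, not my exact config; the certificate name, CN, key size, validity and admin PC addresses are placeholders, and parameters may differ between 6.4x releases):

    # generate a key pair and certificate template on the router itself
    /certificate add name=webfig-cert common-name=router1.example.net key-size=2048 days-valid=3650
    # self-sign it (no CA involved)
    /certificate sign webfig-cert
    # bind it to the HTTPS service and limit access to the admin PCs
    /ip service set www-ssl certificate=webfig-cert disabled=no address=198.51.100.10/32,198.51.100.11/32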
Given the number of devices currently shipped with vendor self-signed certs, do you fear that Chrome, FF or other browsers will one day remove the ability to access “self-signed” web sites?
What are the advantages and limitations of generating certs locally instead of uploading them from a dedicated cert-producing host?
What are the dangers of using a very long validity (10 years) as opposed to renewing certs very often (every 3 months)?
What are the steps to renew self-signed certs?
Besides HTTPS access, what are embedded certs commonly used for?
no, I’m sure “support” for untrusted certificates will stay
Note that self-signed certificates are nothing special … except for the fact that the client cannot link the certificate through a trust chain to a trusted top-level certificate. The list of trusted top-level certificates is either part of the browser suite or part of the OS; current implementations of both allow additional top-level certificates to be installed by hand.
if you install the certificate of a self-signing CA into the browser, then the browser will automatically trust certificates issued by that CA. With locally generated certificates that will never happen. However, creating certificates and signing them with a self-signing CA does mean some (manual) work.
When installing a certificate created off-device, it is vital that the private key is transferred to the device via a safe connection … if it’s read by a man in the middle, then the certificate is compromised and thus useless. With locally generated certificates copying the private key is not necessary.
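For example (a rough sketch only; file names and the router address are placeholders), copying the files over the SSH/SCP channel and then importing them on a RouterOS box keeps the private key off the wire in cleartext:

    # copy the certificate and key to the router over SSH (encrypted)
    scp router1.crt router1.key admin@192.0.2.1:
    # then, on the router, import both files
    /certificate import file-name=router1.crt passphrase=""
    /certificate import file-name=router1.key passphrase=""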
with long-lasting certificates it’s more likely that a “man in the middle” eventually finds the certificate’s private key and starts to decrypt the communication, and keeps doing so for as long as the certificate is still valid.
Another danger: if a certificate gets compromised and you have established automatic trust in certificates issued by a CA (no matter whether it’s self-signing or some well-known one), then you’d have to deal with certificate revocation mechanisms. If certificate validity is short, you might live without them (because the remaining validity after a certificate compromise would be short anyway).
basically you perform all the steps you do for a new certificate (a rough OpenSSL sketch follows after this list). The steps are:
a. create the private key
b. create the certificate signing request (CSR), which already includes the CN of the server that will use the certificate. The CSR also carries the corresponding public key and is signed with the private key.
c. the CA signs the CSR … or in other words, issues the certificate. Certificate validity is set at this step.
d. both the signed certificate and the private key are installed on the server, which then uses the certificate.
When renewing a certificate, in theory it should be enough to perform step c. and half of step d. (the private key stays the same, only the certificate is different), but that would increase the probability of someone finding the private key (as mentioned in answer #3 above). So in practice certificate renewal includes all the steps taken for a new certificate.
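As a rough OpenSSL illustration of steps a.–d. (file names, the CN and the 90-day validity are just examples; a GUI tool like XCA, mentioned below, hides these commands behind dialogs):

    # a. create the private key
    openssl genrsa -out router1.key 2048
    # b. create the CSR, including the CN of the server that will use the certificate
    openssl req -new -key router1.key -subj "/CN=router1.example.net" -out router1.csr
    # c. the CA signs the CSR, i.e. issues the certificate; validity is set here
    openssl x509 -req -in router1.csr -CA myca.crt -CAkey myca.key -CAcreateserial -days 90 -out router1.crt
    # d. install router1.crt and router1.key on the server (see the scp/import example above for RouterOS)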
You can easily manage your own small CA. For my private use I simply use the free & multi-platform XCA (https://hohnstaedt.de/xca/) toolset, as it’s way easier than hand-crafting OpenSSL commands. In such a scenario you will need to install your CA certificate and mark it as globally trusted on every computer/phone/device using it. The CA certificate can be (and usually is) valid for a very long time.
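To give an idea of that last step (a sketch assuming Debian/Ubuntu-style admin PCs and a CA file called myca.crt): the system-wide trust store covers curl, wget and most command-line tools, while Firefox (and Chrome on Linux) keeps its own certificate store and needs the CA imported separately through its settings.

    # install the CA certificate into the system trust store (Debian/Ubuntu)
    sudo cp myca.crt /usr/local/share/ca-certificates/myca.crt
    sudo update-ca-certificates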
Then make your life easy and forget about certificates and HTTPS. It’s not that they are too difficult, but they are some extra work, and I don’t see how they add anything for your use case. Allow only SSH, and if someone wants WebFig, they can use SSH port forwarding to access it. Since SSH already provides encryption, WebFig can use plaintext HTTP and it will still be perfectly secure.
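For example (a sketch; the address and local port are placeholders): an admin can tunnel WebFig through SSH and then browse to http://localhost:8080, so the www service never needs to be exposed to the internet. Note that RouterOS ships with SSH forwarding disabled, so the forwarding-enabled setting under /ip ssh has to be switched on first.

    # forward local port 8080 through the SSH session to the router's own HTTP (WebFig) port
    ssh -L 8080:127.0.0.1:80 admin@192.0.2.1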
I always thought I needed two remote administration channels such as HTTPS and SSH, one backing up the other, but is that really the case, given the relative complexity of HTTPS?
If you really want a backup management channel (e.g. as a backup for SSH), then ideally you should look at possibilities which don’t share the same (logical) path. Usually HTTPS will be routed along the same routes as SSH (and possibly filtered by a similar set of firewall rules) over the same insecure internet. I’d say you don’t need a backup with the same basic characteristics, you need something completely different, e.g. a serial console or WOOBM for local access … yeah, I know, it’s a PITA to travel to the customer’s premises in case anything radically wrong happens, but that’s the kind of backup access to the router you really need.
But you don’t want to just expose HTTPS to the world, do you? In theory it should be secure if you have strong passwords, but I’d definitely feel better about SSH with keys. My favourite backup channel is dual-stack IPv4 with IPv6. It doesn’t solve everything, e.g. a disabled interface will kill both, but I can e.g. mess up the IPv4 firewall and lock myself out, and still have working IPv6 to get back in.