From my perspective, the following background information and events occurred:
1. The admin account had been disabled the whole time. A password was set for it before it was disabled, and an alternative administrator account named planB was created and used normally.
2. One morning, a container reported an error and failed to start. A scheduled task kept attempting to restart it, but all attempts were unsuccessful.
3. When checked that night, it was found that the container logs had filled up the disk. After the logs were cleaned up, the container restarted successfully.
4. Later that night, the system was upgraded from v7.20.1 to v7.20.2. After the restart, it was discovered that the planB account could not log in.
5. Finally, it turned out the system could be logged into using the admin account with a blank password. The last configuration change was still present. Because the system was urgently needed, the last backup was restored, and I forgot to check the specific status of each account (a quick check is sketched below).
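For the record, capturing the account state before restoring would only have taken a moment; something like this would have done it (a minimal sketch, the export file name is arbitrary):

# list all users and whether they are disabled
/user print detail
# keep a text export of the post-upgrade config for later comparison
/export file=post-upgrade-state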
So basically you're saying that the upgrade process reset the users to defaults? Meaning that after the upgrade, admin was the only account and it had no password assigned?
In all such cases, your best bet is to do a netinstall of the desired firmware, which ensures a clean router.
I would report the instance to MT; however, since you reloaded the firmware, a supout at this point may not be useful. But if by "backup" you mean the router is now in the state it was in just before the firmware upgrade, then make sure you have the same setup: a different admin name (like planB), and the admin account present but disabled. Then take a supout, upgrade to 7.20.2, and if the same thing happens, take another supout.
Send all to MT.
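Roughly, reconstructing that pre-upgrade layout from the console looks like this (a sketch only; the passwords and the supout file name are placeholders):

# recreate the original layout: a full-rights alternative admin, stock admin disabled
/user add name=planB group=full password=<strong-password>
/user set [find name="admin"] password=<another-strong-password>
/user disable [find name="admin"]
# snapshot the router state before upgrading
/system sup-output name=supout-before-upgrade
# then upgrade to 7.20.2 and, if planB is locked out again, take another supout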
I update to the latest RouterOS firmware from time to time, and this is the first time I've encountered this issue. So I'm wondering if it could be caused by the disk being full, which might trigger this anomaly under certain conditions. Of course, this is just a guess.
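For what it's worth, free space is easy to keep an eye on from the console:

# free-hdd-space in the output shows the remaining internal storage
/system resource print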
There are many errors like that. E.g. when space is at a premium and the router crashes, on a router with more than 16MB of flash it may create an autosupout.rif that fills all the remaining storage, so you have to remove that manually.
(on a 16MB flash router the file is created on a RAM disk, so it is not a problem)
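Clearing a crash-generated file like that from the console looks roughly like this (the name match is just one way to find it):

# see what is eating the flash
/file print
# remove any crash-generated support output files
/file remove [find name~"autosupout"]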
So this thread is another example of why running containers off the built-in storage is a bad idea. One really should be using external storage in such cases ... both for storing container images and for storing any output that containers or the container engine might dump (e.g. logs). This would prevent internal storage both from getting exhausted and from excessive wear (premature failure of internal storage, which also carries the ROS license).
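As a sketch of that layout, assuming a USB disk that shows up as usb1 and an existing veth interface named veth1 (the image name is just an example):

# pull layers and extract images onto the external disk, not internal flash
/container config set registry-url=https://registry-1.docker.io tmpdir=usb1/pull
# keep the container root filesystem on the external disk as well
/container add remote-image=library/alpine:latest interface=veth1 root-dir=usb1/alpine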
Agree on container storage best practices. But the issue here seems more subtle: it was a log file being written to flash (i.e. with the container's logging=yes), not the container storage per se. It's unclear, but the OP may actually be using external storage for the container and simply forgot to limit disk logging (which is also problematic, for the same concern about flash overuse).
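If container output does need to be logged to a file, redirecting and capping the disk logging action is the usual mitigation (a sketch; the usb1 path and the numbers are illustrative):

# write log files to external storage and cap how much they can grow
/system logging action set [find name="disk"] disk-file-name=usb1/log disk-lines-per-file=1000 disk-file-count=5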
I still feel like there is a bug here... I cannot see why an upgrade would mess with the account database. IDK how something like an admin account gets enabled as part of an upgrade - the router ending up with no accounts at all would make slightly more sense if it were corruption...
Perhaps the internal "crossfig" thing has some bad logic, IDK, but the upgrade process does rewrite the config. And since the OP says the disk space issue was resolved before the upgrade, my concern is that there is some bug here unrelated to the out-of-space condition.
I'd recommend, if this ever happens again, collecting a supout.rif before reverting to a backup. Without a supout, there is not much for MikroTik support to even look at for a root cause.
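Generating one is a single command before touching the backup (the file name is arbitrary):

# capture the full diagnostic state for MikroTik support
/system sup-output name=before-restore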