RouterOS version 6.43.12 has been released in public “stable” channel!
Before an upgrade:
Remember to make backup/export files before an upgrade and save them on another storage device;
Make sure the device will not lose power during the upgrade process;
Make sure the device has enough free storage space for all RouterOS packages to be downloaded.
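For reference, both files can be created from the terminal with the standard backup/export commands (the file name "pre-upgrade" is just an example):
/system backup save name=pre-upgrade
/export file=pre-upgrade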
What’s new in 6.43.12 (2019-Feb-08 11:46):
MAJOR CHANGES IN v6.43.12:
!) winbox - improvements in connection handling to router with open winbox service (CVE-2019-3924);
To upgrade, click “Check for updates” at /system package in your RouterOS configuration interface, or head to our download page: http://www.mikrotik.com/download
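The upgrade can also be started from a terminal with the built-in update commands (install downloads the new packages and reboots the router):
/system package update check-for-updates
/system package update install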
If you experience version-related issues, then please send a supout file from your router to support@mikrotik.com. The file must be generated while the router is misbehaving, or right after the problem has appeared on the device.
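A supout file can also be generated from the terminal (the file name here is just an example):
/system sup-output name=version-issue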
Please keep this forum topic strictly related to this concrete RouterOS release.
Still 100% CPU load on one of the cores in my RB3011. The router is working, but this still indicates something is wrong. Anyone else with the same problem? Any suggestions on how to fix it?
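For what it's worth, the built-in profiler shows which process is pinning the core (run it while the load is high):
/tool profile cpu=all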
There is a bug in this version: it does not show the routes received from IPv6 BGP sessions.
New Terminal:
/ipv6 route print detail where received-from=Peer_X
Flags: X - disabled, A - active, D - dynamic, C - connect, S - static, r - rip, o - ospf, b - bgp, U - unreachable
If the search is done by the prefix received from the session, for example 2x0x:fec::/32, it returns the routes correctly.
New Terminal:
/ipv6 route print detail where dst-address in 2x0x:fec::/32
Flags: X - disabled, A - active, D - dynamic, C - connect, S - static, r - rip, o - ospf, b - bgp, U - unreachable
0 ADb dst-address=2x0x:fec::/32 gateway=2x0x:fec::3e gateway-status=2x0x:fec::3e reachable via VLan_X distance=20 scope=40
target-scope=10 bgp-as-path="2xxxxx" bgp-local-pref=250 bgp-med=0 bgp-origin=igp received-from=Peer_X
After updating from .11 to .12, one RB1100AHx4 (the only one on PPPoE) would not connect via PPPoE at all. It kept looping through initializing, connecting, terminating, disconnected for more than 5 minutes.
One more reboot and it connected instantly.
There is a generic issue in some environments where a PPPoE connection will not re-establish when the previous one was not closed correctly and/or the new connection is made too soon.
You can experience this on any reboot, even without an update, or on a connection loss between the router and the PPPoE server.
bizzy - Fixes for SMB will be included in the next release.
inteq, pe1chl - If the only-one option is enabled in the PPP profile, or you allow only a single session per user on RADIUS (if you use RADIUS), then the client cannot establish a new session while keepalive is keeping the old session open. Reduce the keepalive in order to speed up the re-connect, or allow more simultaneous sessions for a single user. If this is not your case, then please provide a supout file to support@mikrotik.com and make sure the file is generated while the problem is present on your router.
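For illustration, the two knobs mentioned above look like this on the server side (the profile and service names are only examples, adjust to your setup):
# allow more than one simultaneous session per user
/ppp profile set [find name="default"] only-one=no
# make stale sessions time out faster
/interface pppoe-server server set [find service-name="service1"] keepalive-timeout=10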
I have such an issue with my ISP, but I do not control the other end. I need a fix on the client side…
The problem is occurring with VDSL. Inside the VDSL and ISP network there is some “PPPoE helper” that forwards my outgoing PPPoE session to the BRAS at the ISP, adding some info (like line#).
Unfortunately it is buggy. It keeps state. When the path is somehow interrupted (e.g. due to some reboot inside the VDSL ethernet network, OR due to an uncontrolled reboot of the MikroTik), the PPPoE client in the MikroTik sees the session is down and starts to re-establish it by sending PADI.
The PPPoE helper sees this traffic as an indication that the session is still alive and does not forward the PADI to the BRAS (a bug IMHO, but unlikely to be fixed).
Because the MikroTik PPPoE client retries at a constant interval (well, after some tries it logs a failure, but then it immediately starts a new cycle), this is a fatal condition that does not recover.
There are two ways to reset the PPPoE helper:
to re-establish the VDSL line sync (apparently the loss-of-sync immediately resets the helper)
to wait a couple of minutes.
So what is required to cleanly recover from this is a dead time between PPPoE sessions, even when the setting is not dial-on-demand.
I have now cobbled this together using scripting (a rough sketch follows below), but it would be more convenient if there were a settings field that does this, especially because there is no easy way to say “schedule this job to run once, 3 minutes from now”. I worked around that with a repeating scheduled job that I disable/enable, but of course it can happen that the first try occurs too soon and it has to wait another cycle.
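Roughly, the core of it is just a forced gap before the next PPPoE attempt (the interface name "pppoe-out1" and the 3-minute dead time here are examples, not my exact values):
# force a dead time between PPPoE sessions
/interface pppoe-client disable [find name="pppoe-out1"]
:delay 3m
/interface pppoe-client enable [find name="pppoe-out1"]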
I’ve encountered a similar problem: after I performed some reconfiguration of the WAN interface (which came back online just fine), the pppoe-client kept trying, unsuccessfully, to re-establish the connection. The cure was to disable and re-enable the pppoe-out interface[*] (without any intentional delay between the two operations, so the net delay was a few seconds).
[*] This is, unfortunately, not possible to perform if one is not physically present within the LAN perimeter, as WAN connectivity does not exist at that moment.
My biggest problem with MT wireless right now is the sensitivity of DFS detection. Even in relatively clear environments, I get far too many DFS channel changes each day. In urban settings with close device proximity and a generally full spectrum, it just becomes ridiculous, with devices continually “detecting” something on all the channels. In such areas, the 5590-5650 MHz slice has to be disabled outright because the (multiple) 10-minute scans just kill off reliability entirely. And even so, any careful frequency planning goes right out the window.
These “detections” are almost guaranteed to be false positives. I’m quite certain of this because UBNT devices operating in the same areas get precisely zero detections all day. There are no weather radars nearby (100km radius). Other radar installations I’m aware of should also be far enough away.
I know DFS is a touchy subject and there is no implementation that satisfies (or at least balances) all the requirements, as these are hard tradeoffs. It’s just that I am more or less forced to use Superchannel right now to get anything resembling reliability for my clients. I would really like to see some tweaks or larger overhauls in this regard. Who knows, maybe false positives can be reduced by, say, 80%, while >97% of actual radar pulses are still caught?
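For now the workaround looks something like this (the interface name and exact ranges are examples; the scan list simply skips the 5590-5650 MHz weather-radar slice):
/interface wireless set wlan1 frequency-mode=superchannel scan-list=5500-5580,5660-5700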
It appears that UBNT is fighting the same problem (how to detect RADAR and not detect other pulses).
The second-to-latest UBNT firmware introduced the same problem MikroTik has had for years…
There was a newer release lately, but I have not yet deployed it to see if the problem has been fixed.