v7.16beta [testing] is released!

The DNS cache does not flush; it stays in RAM on x86.

Are you sure it’s not flushing? My cache became overwhelmed after 9 days, which might explain this…
http://forum.mikrotik.com/t/dns-cahe-full-adlist-read-max-cache-size-reached/177327/1

Indeed, there is a problem obtaining IPv6 addresses from the pool.
I have this in the config:

/ipv6 address
add address=::2 from-pool=v6prefix interface=bridge.vlan62
add address=::2 from-pool=v6prefix interface=bridge.vlan64

The second address silently disappeared. When I try to re-add it, it says “already have such address”.
Yes, sure, the suggested address is the same, but it is supposed to fetch another prefix from the pool and add an address using that prefix.
Sure, I would like to specify which prefix, but that has never been possible in RouterOS.
So why is it now not possible to add the address from the pool?
After tinkering a bit, trying some different values instead of ::2 and deleting them, it suddenly became possible to add ::2 again.
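By the way, the prefixes that have already been handed out of the pool can be checked like this (I think this is the right place to look; v6prefix is the pool name from my config above):

/ipv6 pool used print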
Please:

  • fix this bug
  • add some capability to hint a prefix to be requested from the pool, e.g. ::1:0:0:0:2

Will the same happen if you do something like this instead?


/ipv6 address
add address=::2 from-pool=v6prefix interface=bridge.vlan62
add address=::3 from-pool=v6prefix interface=bridge.vlan64

Do recursive routes no longer show up red when they are unavailable and inactive? In previous releases (I can’t remember how far back), they would show red when they were not active. Closing the route window and opening it back up does not change it.
Screenshot 2024-07-16 at 11.24.42 AM.png

Maybe not. At the moment I cannot test that.
But it would still be a bug!

You are doing that wrong! You should have the /32 routes with “ping” check and the recursive 0.0.0.0/0 routes without “ping”.
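Roughly like this (a minimal sketch only; the gateway addresses 192.0.2.1 and 198.51.100.1 stand in for your actual WAN1/WAN2 next hops):

/ip route
add dst-address=1.1.1.1/32 gateway=192.0.2.1 scope=10 check-gateway=ping
add dst-address=0.0.0.0/0 gateway=1.1.1.1 distance=1 target-scope=10
add dst-address=1.0.0.1/32 gateway=198.51.100.1 scope=10 check-gateway=ping
add dst-address=0.0.0.0/0 gateway=1.0.0.1 distance=2 target-scope=10

If the ping to 1.1.1.1 fails, its /32 route goes inactive, the distance-1 default route can no longer resolve its gateway, and the distance-2 route takes over.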

I’ve tried both ways, but this way actually lets me have two DNS checks for each WAN route. I’ve tested it thoroughly. If 1.1.1.1 is unreachable but 208.67.220.220 is reachable, it keeps the route for WAN1 active. Only if both are unreachable does it disable the 0.0.0.0/0 route for WAN1.

Same thing for WAN2, it has 2 DNS server checks: 1.0.0.1 and 208.67.222.222.

I realized my screenshot doesn’t show all the routes. See below.

Here is another, simpler example. WAN1 is disconnected and the 0.0.0.0/0 route with distance 1 is Unavailable and Inactive; however, it does not show red or give any clear indication that it is not the active route.

0.0.0.0/0 with distance 2 (WAN2) is the active one.
Screenshot 2024-07-16 at 2.37.31 PM.png

Have you hit F5 or closed/reopened the IP/Route window? The routing table doesn’t get updated in real time anymore since the release of ROS7. It’s incredibly annoying for all non-BGP users with fewer than 1000 routes in the routing table.

Yes, I’ve tried closing the route window, opening it back up, closing Winbox, opening it back up. It doesn’t refresh or show red anymore. I’ve also tried Webfig, but it shows the same as Winbox.
Screenshot 2024-07-16 at 3.05.45 PM.png
It will show blue if WAN2 is up, but WAN1 is active (as WAN2 has a higher distance).
Screenshot 2024-07-16 at 3.07.39 PM.png

Ehm… I didn’t notice that before, but it seems that the check-gateway functionality is completely broken in 7.16.
See the packet counter for 1.1.1.1 in the raw table..
2024-07-17 at 00.15.14.png

It’s not turning red for me either, but the failover works.
But, I agree, if you’re used to having non-working routes marked red, it can be confusing. I don’t know if this change is intended or a bug; it has definitely worked in earlier versions of ROS7.

Since v7.15 the “sanitize-names” option has been implemented. Is there, or will there be, an option like :convert to/transform=sanitize-names?

RouterOS version 7.16beta4

Hi guys, testing this version with a BCM57840 quad-port SFP+: it crashes, and MikroTik keeps rebooting the server…

It was supposed to have the missing PCI IDs added to the bnx2x driver, but somehow it is crashing the server. If I downgrade to 7.15 stable, it does not crash the server, but of course there is no support because of the missing PCI IDs in the bnx2x driver.

Curious what the status of this version is. It has been out for a few weeks and nobody has posted about it in almost a week.

It looks like every address defined this way would be picked up as a different prefix if it were defined as ::2/64 instead of ::2, which is identified as a single address, i.e. ::2/128. It might be that automatic picking from the pool does not work as expected if the address is not defined as a /64. Maybe this is the problem?
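Something like this is what I mean, with the /64 spelled out explicitly (untested, just reusing the pool and interface names from the earlier post):

/ipv6 address
add address=::2/64 from-pool=v6prefix interface=bridge.vlan62
add address=::2/64 from-pool=v6prefix interface=bridge.vlan64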

“Sure I would like to specify which prefix,” → check the prefix-length of your pool definition

What I mean is not setting the prefix length; it is 64, which is fine.
What I mean is when I have different interfaces each with a different prefix from the pool, I want to configure which prefix goes to which interface.
E.g. we get a /48 here from the provider, so the pool has aaaa:bbbb:cccc::/48 and when I configure ::2 for an interface it will get aaaa:bbbb:cccc:0::2/64 and the next one will get aaaa:bbbb:cccc:1::2/64.
But it is not really clear which interface will get which prefix, and it can change: e.g. when an interface is brought down and up it may get aaaa:bbbb:cccc:2::2/64 instead.
The only way to set it back to aaaa:bbbb:cccc:0::2/64 in that case is to release and renew the DHCPv6 prefix request, which will empty and re-fill the pool, and make all interfaces re-allocate their address.
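For completeness, the pool in question is filled by the DHCPv6-PD client, roughly like this (a sketch only; the WAN interface name is just an example):

/ipv6 dhcp-client
add interface=ether1 request=prefix pool-name=v6prefix pool-prefix-length=64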

It would be nice if there were more control.

Yes, if you have multiple interfaces assigning a prefix from the pool, they will get prefixes in an order that is hard to understand - but it seems to be the same order every time (if all interfaces are enabled).

One other annoyance: the ND/Prefix menu does not show deprecated prefixes which are still being advertised.