Normis - "Maznu" wrote the dates and when sent it to you. Ticket numbers... Description... All this a year ago. What do you need more?For everyone here, I wanted to clarify, that to my best knowledge, the author of the CVE has not contacted MikroTik and we are in the dark as to what he plans to publish.
It's quite clear that this wasn't given much priority; only once a fire was lit under the subject did it get pounded into a beta.
Maznu - Thank you for your comments
No. He did not send a proof of concept for all issues, just a generic report about a crash. When he now said that CVE number such-and-such is not fixed, it was not clear, since we don't know what he will publish in that CVE. There is not a single issue; there are multiple issues. We fixed most; now he has stumbled upon another (a memory leak under some other condition). We are fixing the others as well.
Normis - "Maznu" wrote the dates and when sent it to you. Ticket numbers... Description... All this a year ago. What do you need more?
Until we release the next beta with the memory exhaustion fix, this firewall config should stop any attack even with a small amount of RAM. Replace 2001:db8:3::/64 with your network:

[admin@MikroTik] /ipv6 firewall> export
/ipv6 firewall filter
add action=drop chain=forward connection-mark=drop connection-state=new
/ipv6 firewall mangle
add action=accept chain=prerouting connection-state=new dst-address=\
    2001:db8:3::/64 limit=2,5:packet
add action=mark-connection chain=prerouting connection-state=new dst-address=\
    2001:db8:3::/64 new-connection-mark=drop passthrough=yes
I think we should listen to what he has to say …
Many, many years ago, I had both Microsoft and Sun Microsystems on the phone, telling both of them I could lock up their Internet-connected computers. They did not believe me, so with both of them online in a conference call, I locked up the computers at the IP addresses they told me to try. Well, that little easy lockup trick is called the "Ping of Death".
Yup, that was me that discovered it, and I worked with both Sun and Microsoft to verify that they later fixed their TCP/IP stack buffer-overrun problem.
Another one with Microsoft that would totally lock up a Windows computer was the left-over PIP "ConCon" bug (a carry-over from the original CP/M operating system).
The point I am trying to make is that if somebody says there is a vulnerability, I wish they would get together in a live online conference, show their stuff and prove their point, prior to releasing the vulnerability to the general public.
Question: am I correct in guessing this is something like a "Packet of Death in IPv6", pretty much the same issue in IPv6 that is called the Ping of Death in IPv4?
North Idaho Tom Jones
I sent support@mikrotik.com two tickets in April 2018 about two different issues. Later, in November, I advised MikroTik that these had now been allocated two different CVE numbers.
In those initial emails, almost a year ago, I even gave your support team my ideas about what I thought the problems were and, guess what, your support team just told me about an hour ago what they believe the problems to be: they are exactly what I said in April 2018.

> We have tested the same on our local network and managed to reproduce the same problem. We will see what we can do about this problem and improve IPv6 functionality in future RouterOS releases.
> Your report and input into reproducing this problem is highly appreciated.
> For CVE-2018-19299, are systems that do not have IPv6 connection tracking enabled affected?

Yes.
> Until we release the next beta with memory exhaustion fix, this firewall config should stop any attack even with small amount of RAM. Replace 2001:db8:3::/64 with your network.

Normis, I'm pretty confident we have replicated the conditions of one of the CVEs from doing some digging on our own for this issue. Without the rules, the router crashed. When we added the rules, the router stayed online.
May I please add "discovered independently by a third party" to the timeline and credit you during my UKNOF 43 talk, then?
The post @IPANetEngineer was responding to was Normis's.
Looking at the remaining workaround: the usual end-user setup of blocking any incoming traffic that isn't already established/related isn't impacted, right?

/ipv6 firewall filter
add action=drop chain=forward connection-mark=drop connection-state=new
/ipv6 firewall mangle
add action=accept chain=prerouting connection-state=new dst-address=\
    2001:db8:3::/64 limit=2,5:packet
add action=mark-connection chain=prerouting connection-state=new dst-address=\
    2001:db8:3::/64 new-connection-mark=drop passthrough=yes

You'd be limiting traffic to 2 new flows per second, which is not an option except in the tiniest of networks.
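If `limit=2,5:packet` behaves like a token bucket with a rate of 2 packets per second and a burst size of 5 (my assumption about the matcher's semantics), the effect on a burst of new flows can be sketched as follows; the class below is purely illustrative:

```python
# Sketch (assumption): RouterOS `limit=2,5:packet` modelled as a token
# bucket with a refill rate of 2 tokens/second and a burst size of 5.
class TokenBucket:
    def __init__(self, rate, burst):
        self.rate = rate        # tokens added per second
        self.burst = burst      # maximum bucket size
        self.tokens = burst     # start with a full bucket
        self.last = 0.0         # timestamp of the previous packet

    def allow(self, now):
        """Return True if a packet arriving at time `now` passes the limiter."""
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=2, burst=5)
# A burst of 10 new flows arriving at the same instant: only 5 pass.
results = [bucket.allow(now=0.0) for _ in range(10)]
print(results.count(True))   # 5
```

This illustrates the complaint above: after the burst allowance is spent, only about 2 new flows per second are accepted for the whole prefix.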
> I've sent through all the details in a new ticket, Ticket#2019032922005182, which I hope includes all the information you need collated into one place.

This version fixes:
1) soft lockup when an IPv6 router is forwarding IPv6 packets;
2) soft lockup when the router is forwarding packets to a local (directly connected) network due to a large IPv6 Neighbor table.
We are still working on improvements to IPv6 Neighbor table processing in userspace, which can lead to an OOM condition.
Since the CVE details are not yet published, we assumed that the CVE targets the software lockup (#2), which we did fix in this release.
I'm very glad you've prioritised this issue with your development team, and I look forward to testing releases that address the problem soon.
> Maznu, thank you for showing that you are seeking a solution for the whole community.
> Could you inform me if disabling the SSH and Winbox services also stops the exploit?
> Could using RoMON only be a "temporary" solution?

How you access the router isn't the major factor here… I'm not sure I understand your question?
While many of you are notably upset about the extraordinary amount of time that has gone by on this issue, I note some of you want to move to new product vendors. That is your prerogative. That said, I will point out that the BIG VENDORS such as CISCO are smashed by CVE problems ALL the time. Mikrotik is doing pretty well on the CVE footprint comparatively.
At some point, enough is enough. And yes, other vendors have other issues. Other vendors may also be more costly. But at least other vendors take responsibility for their products, have a clear guideline for what a timely response to a ticket is, and implement critical features that customers and the industry need. Again, a good example: it's taken Mikrotik over a decade to implement IPv6 Delegated-Prefix, but then they decided only to implement it for DHCP and not for PPPoE. I mean, where is the logic in that?
Also, if you're unhappy with RouterOS on your hardware, feel free to flash OpenWrt onto your MT hardware devices and become proactive in network code design and development for the community:
https://openwrt.org/toh/hwdata/mikrotik/start
Maybe it's time for MT to consider a parallel "community" edition of RouterOS, one with source code open to view and compile, that allows the community to quickly fix issues (CVEs!) and add networking functionality as community-made plugins for MT hardware.
We believe that we have now recreated the conditions of both CVEs and have been able to cause a memory leak and router crash in both of the conditions listed below, using a common offensive Linux security tool for IPv6:
soft lockup when forwarding IPv6 packets (CVE-2018-19299);
soft lockup when processing a large IPv6 Neighbor table (CVE-2018-19298).
Our engineers are working on mitigation options and so far have been able to keep the test routers online by limiting to 5k new connections with 256 MB of memory. We still have some testing to do and a number of different options to try.
We will post our rule sets as we further refine the testing, but it is a variation on what Normis already posted.
> Maznu, do the following:
> /ip service disable [find]
> and verify that the problem still occurs even with all MikroTik management services disabled?

Yes, that is still vulnerable (my test lab has no services enabled because it has no Internet connectivity, only console access). These IPv6 handling problems are not about accessing the router. They are about causing the router to crash through how it routes IPv6 packets.
Meanwhile CVE-2018-19299 still needs fixing, because even with those performance-crippling firewall rules on 6.45beta22, I'm crashing routers.
Fixes and security holes happen all over the world; we didn't switch because of the security side, but because MikroTik won't say when they are fixing their missing IPv6 PPPoE support.

Mistry7, good call moving off of Mikrotik for security reasons. I couldn't agree more. I did too for some things. Today is a security day for us Cisco fans too: there are 17 flaws to fix. Have you seen this?
https://www.networkworld.com/article/33 ... flaws.html
R
We have had some success mitigating CVE-2018-19299 now that we know what command to run in the Linux IPv6 exploit tools to crash edge or transit routers. However, we are moving testing over to our dual-stack data center, on hardware that isn't serving customers, to get a better idea of the real-world IPv6 results, as labs disconnected from the Internet can produce results that don't always translate to the real world in the same way.
In exactly the same lab layout as before, with exactly the same single command that MikroTik has known about for over 11 months, and still attacking 2001:db8:3::/64 (so your firewall rules are just a copy-and-paste): https://youtu.be/vJBUdAMrKJw
> The fix is in v7 guys, c'mon.

RouterOS 7 is like the Yeti or Mrs. Columbo. Everyone talks about it, but nobody has ever seen it.
You don't have to disable it.
We will never see it. Mikrotik is more concerned with improving Kids Control and other useless features than with working on a better platform for IPv6 and BGP multithreading for the CCR series. We have nine CCR1072s that can't receive full tables because they literally freeze processing them. We are already thinking of changing our routing equipment, and we are one of the biggest ISPs in the south of South America. Now I have to disable IPv6 (we were the FIRST provider implementing this in our region) because of this bold***t.
And that's not to mention CCRs rebooting themselves for no reason, and other things like that...
Keep up the good work Mikrotik
Totally disabling IPv6 before the details of the bug as well as how to exploit it are even public is over-reactionary and knee-jerk extreme, especially since MT has said that a fix will be available before the disclosure.
If your networks are so huge and yet you have failed to scale your infrastructure so that updating it is manageable, and yet you have money to swap out many of your devices, that just tells me you have other issues to overcome before changing equipment.
Besides the fact that you actually believe a proven long-term vendor exists...
> Totally disabling IPv6 before the details of the bug as well as how to exploit it are even public is over-reactionary and knee-jerk extreme, especially since MT has said that a fix will be available before the disclosure.
It is actually not.
MT having a fix ready before disclosure is not enough. The fix needs to be ready, it needs to be in a stable release tree, it needs to be tested and validated, and customers need time to implement it.
@maznu has provided a clear timeline of when this was reported to Mikrotik, when the CVEs were issued and Mikrotik has ignored this until an imminent threat of it going public came about. If they had used the time from the report and the creation of the CVEs to mitigate this, there would have been plenty of time.
So in my case, it means IPv6 will be removed from all Mikrotik gear, because the risk of loss of business is too great. And where necessary, the gear will be replaced with gear from other vendors that is long-term proven. Even if Mikrotik released a fix in time, there would be no time for testing. I would literally have to roll it out across 300+ devices in the core network. We have nearly 10k Mikrotik devices in our network, if every one of them needs to be updated. This is not something you do in a few hours, or days. And automated updates are not an option. The quality control of the release trees supplied by Mikrotik is not good enough. We aren't even on the current long-term release yet, as it has bugs we can't live with.
And the issue is, that it is not the first time, that Mikrotik has handled an issue this way. It is every single time. They only react, when issues become public knowledge.
/M
My opinion is clear: IPv6 is a required service, disabling it is akin to shutting off power.
> My opinion is clear: IPv6 is a required service, disabling it is akin to shutting off power.

But this is where you are getting me wrong:
I'm not shutting IPv6 off on our network. We have been providing IPv6 to end users since 2008, and even longer on the infrastructure. I'm turning it off on everything Mikrotik, because it is too big a risk.
And a good chunk of the CCRs will have to be replaced with alternative gear, not only because of this bug, but also because Mikrotik has not implemented features needed by the industry. I have tickets going back to before 2007 for some stuff that still isn't implemented. I can't even get a timeline.
There is plenty of hardware that is not affected.
/M
> despite the author being less than helpful about providing details

I have provided MikroTik with every detail at every step of the way.
I cannot provide anyone else with any more detail at all as this would literally give them the means to carry out the attack.
I have not shared any mitigation options because I do not believe there are any which do not involve significant expenditure of time and resources — something that many MikroTik users may not be able to afford.
I remain convinced that the fix can only come from MikroTik.
> If you haven't already, I would strongly encourage those of you who discovered and reverse engineered these bugs to compare notes and check that they are in fact the same methods - the last thing we need is for MikroTik to release a fix for the original issue, and then find that those who reverse engineered it discovered a related but different issue that is still unfixed.

An alternative that doesn't necessarily mean we need to disclose vulnerabilities to each other or pass around weaponised PoCs is that MikroTik could send us the beta and we all bash it with our labs on Monday morning…?
Have to agree with the above. I have been a Mikrotik user for 8+ years: loved the devices at first, still like them now. But I hate being let down by the developers...
> now that we know what command to run

@IPANetEngineer: do you want to compare notes now that we are probably on the same page? Prompted by something MikroTik told me last thing on Friday about the nature of the underlying problem, and following my own research last night, I've got some good news to share, so it would be great to know if we're now independently testing and trying to mitigate the same things.
> When I launch the attack, the CHR reboots but the other routers are not affected by the attack.
> Firewall rules seem not to be effective.
> But if I increase the CHR's memory from about 300 MiB to 3000 MiB, the router seems to be OK: the free memory stays between 2200 and 2400.
> As my lab is made in GNS3, it may not behave the same as real MikroTik hardware, so I plan to make some tests tomorrow with several CCR1009s.
> But I would ask @maznu to check what happens if he increases the RAM; in his Twitter video he has about 300 MiB. And I would ask whether he has tested with RouterBOARDs or only the CHR.

This sounds almost exactly the same as what MikroTik will be fixing on Monday.
> As a consequence, I am now assuming the exploit is out there in the wild and is being used.

Thanks for this information, @MichaelHallager. I saw something similar several times in the first two weeks of March this year, and advised MikroTik about it on 2019-03-15, asking for urgent action.
> This sounds almost exactly the same as what MikroTik will be fixing on Monday.

Sorry @maznu, but I don't get the same md5sum you expected. Maybe mine is a different but correlated attack. What would be characters 9, 10, 11, 12 of the md5sum?
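Comparing only a few characters of an md5sum is a lightweight way for two researchers to check whether they hold the same payload without revealing it. A minimal sketch; the payload string here is a hypothetical stand-in, not the actual exploit command:

```python
import hashlib

# Hypothetical stand-in; the real exploit command is deliberately not public.
payload = b"example attack command"
digest = hashlib.md5(payload).hexdigest()   # 32 lowercase hex characters

# "Characters 9, 10, 11, 12" of the md5sum, i.e. digest[8:12] 0-based.
fragment = digest[8:12]
print(len(digest), len(fragment))   # 32 4
```

Note that a four-character fragment carries only 16 bits: a mismatch proves the payloads differ, while a match only weakly suggests they are the same.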
> I have been spreading the word around in other forums.
> If it's of any interest / help, I am happy to act as a remote test case, providing no harm is done.

At this stage, my best advice would be that people monitor the memory usage on their routers and graph it. If your memory usage is stable for many weeks, and then increases by several hundred MB within a few minutes, possibly causing a crash as a result, that is one of the signs I would expect to see during an attack.
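The monitoring advice above can be reduced to a simple check over periodic memory samples. This is a sketch only: the window size and threshold are assumptions to tune for your own network, and the sampling transport (SNMP, RouterOS scripting, etc.) is left out:

```python
def sudden_memory_jump(samples, window=5, threshold_mb=300):
    """Return True if used memory grew by more than `threshold_mb`
    within any `window` consecutive samples (e.g. one sample per minute).

    `samples` is a list of used-memory readings in MB, oldest first.
    """
    for i in range(len(samples) - window + 1):
        chunk = samples[i:i + window]
        if max(chunk) - chunk[0] > threshold_mb:
            return True
    return False

# Stable for a long time vs. several hundred MB within a few minutes:
stable = [120, 122, 119, 121, 120, 123, 121]
attack = [120, 122, 119, 180, 320, 480, 520]
print(sudden_memory_jump(stable), sudden_memory_jump(attack))  # False True
```

Anything more elaborate (rate-of-change alerts in your NMS, for instance) works on the same principle: alert on a fast rise from a long-stable baseline.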
> Sorry @maznu but I don't get the same md5sum you expected. Maybe mine is a different but correlated attack

It is possible we are using different tools to trigger the same issue: there is more than one way to make some IPv6 packets. ;-)
> FastNetMon may work if the netflow is being generated by an intermediate device in the path (like off of a tap); it's very fast and can potentially mitigate, assuming null routing is performed before the cache write.

EDIT: with only:
* a route back to the attacker
* and only a default null route in my victim router's table
* (besides dynamic routes from link-local addresses)
my lab looks like it will still OOM.
(The attack is coming from a single IPv6 address.)
> Mikrotik have publicly disclosed the details of the vulnerability, on a Sunday, in a way that a child could exploit it - before even providing a fixed beta, let alone a stable release version, let alone giving us time to test and deploy it.
> Truly despicable behaviour there, Mikrotik. Do you have no respect for your customers at all?
> -davidc

Where is this posted? I did a quick search and didn't find anything.
> Personally, I think Mikrotik products are possibly a bit too cheap and I would be happy to pay a few extra bucks if it meant they were properly resourced so we don't have these types of issues.

The reason there isn't such an option or a service contract is that they'd then have to take responsibility, and also implement the features that customers require, instead of choosing themselves what they see fit to fix and implement.
> Where is this posted? I did a quick search and didn't find anything.

viewtopic.php?p=724264#p724238
Dumb question: have you validated that this is remotely exploitable outside of a contained lab?
> it's made in Latvia, not China!

I think "assembled" is a more fitting description.

@bmann has made some very good points which I can relate to.
I come from the Cisco camp and I was amazed when I bought my RB1100AHx4 what I was getting for the money... and it's made in Latvia, not China!
Personally, I think Mikrotik products are possibly a bit too cheap and I would be happy to pay a few extra bucks if it meant they were properly resourced so we don't have these types of issues.
> Dumb question, have you validated that this is remotely exploitable outside of a contained lab?

That is certainly a critical question that needs to be answered and understood. The extraordinary amount of "pre-show publicity" has led many to form strong opinions and responses before the full facts are understood.
People above you in this thread are saying that there should be separate terminology for a denial of service (high CPU, out of memory, crash, etc.) and for something that allows an attacker to gain access to devices, steal credentials, install malware and read private data. You are shoving them both under one term. I'm no expert, but at least that is what I mean when I say Denial of Service and Vulnerability are different things.
> This may be the final straw. Time to start researching other products.

You can always use Cisco. :)
> We have near to 10k Mikrotik devices in our network, if every one of them needs to be updated. This is not something you do in a few hours .. or days.

Um, there are more than a few ways to automate the rollout of a config change or a RouterOS package. Unimus or Ansible make this easy. Example PDF: https://mum.mikrotik.com/presentations/ ... 842532.pdf
> We have nine CCR1072 that can't receive full tables because they literally froze processing them.

Sorry, what?
Don't do full tables on CCRs. They are terrible at it.
How many tables on the 1072?
I use a 1036, and it works with 2 full-view tables.
> Don't do full tables on CCRs. They are terrible at it.

Why?
> where the reporter didn't report it as a security concern and left it for 6 months till he was able to get a CVE

The full timeline will be available next week. But when I reported this in April 2018, I pleaded with support to treat it as a serious security vulnerability, and asked that they themselves allocate a CVE to it. After no action for six months, I then raised CVEs myself and communicated this to MikroTik to try and pressure them into taking another look at it.
Default routes mean you can't use uRPF.

It works fine: I receive the full view plus a default route (while the router computes the full view, the default route is used; once the session is up, the internet works after 2 seconds).
So what firmware do we need to install on which routers to prevent this?
maznu - can you contact me via Twitter? I sent you a tweet already.
> This is not a security vulnerability as I would describe it.
Then what EXACTLY would you call the ability to stop your router from routing, remotely, without authentication? Because we in the Infosec world call that an Unauthenticated Denial of Service. (Which is a bad thing)
Look, MikroTik have dropped the ball on this. That they haven't gone 'Shit, yes, we made a mistake, we're sorry, and we're making (these changes) to make sure it doesn't happen again' is MORE telling.
/ip rp-filter
Such high CPU usage during table churn means you'll be at best sub-optimally routing packets and at worst, sending them down blackholes for 10 - 20 minutes at a time.
My timeline exploded a bit, as you might imagine. I'm @maznu on Twitter, DMs are open :)
> If you just have the package enabled and absolutely no configuration from an IPv6 perspective, are you okay?

Something about this question?
> It's true, MikroTik should have handled it better. It was mistakenly filed as just another IPv6 bug, of which there are more, until the author complained about the lack of action. Then we looked at it again, and started to look for alternative solutions to handle it (because a kernel change is not possible in v6). But in the end, the issue is finally fixed.
> P.S.: we don't rely on the kernel for TILE support, so a new kernel doesn't affect it.

Appreciate this!
> Also, send @maznu a present/gift/bounty/4011. He sure as hell earned it.

That's very kind, but after we've all got the patch in long-term and stable, I want to know how I can mail-order a crate of beer to MikroTik's offices to say thank you for getting this fixed.
If you have performance issues with the CCR1072, then why don't you get a real router? Real hardware, at a real price, instead of banging on Mikrotik for not releasing ROS v7?

@normis
Is it possible to get an honest response about ROS v7's release timetable? We and many other people have been struggling with BGP performance, so much so that we've had to reject larger potential clients because we cannot offer MPLS due to large packet loss during routing convergence on the CCR1072.
We are not a high volume network and cannot hold a single full table from any of our Transit providers because of CPU load and extended outages if for some reason one of the sessions drop. We were holding out until MUM Europe as we were told by support there would be an announcement there about it but alas there wasn't one.
We now have to seriously look at switching vendors away from Mikrotik which we've used for 15+ years because we just cannot wait any more with such a loose open ended timeline that everything appears to be "Fixed in ROS v7" and there being no release deadline.
Even if you were to release it today it would be 6 months before we could roll it out with enough confidence and testing in LAB given the extended development timeline.
I hope you can take the time to respond publicly or even privately, as I'm under a lot of pressure to grow our network, but with the current hardware we are not able to without potentially impacting client performance and reliability.
We'd also be happy to pay a few thousand dollars a year for advanced support/updates if required to offset the cheap licensing charges.
> Is it possible to get an honest response about ROS v7's release timetable?

Maybe another topic?
@maznu - beta23 fixes both vulnerabilities? Did you test?
I emailed MikroTik yesterday, tweeted, and posted about this on the 6.45beta thread - yes!
MikroTik has said that another beta is expected to make the settings on the affected components more "optimal" for devices with low RAM. I hope it lands soon.
More testing has yielded more data. This has not been properly replicated by anyone else that I know of, so take it as a plausible hypothesis. I think I found more fallout from the IPv6 flaw: boxes that have their ND cache or their IPv6 route cache run up, but not to the point of an OOM reload, experience degraded performance over time in fun and unique ways after a scan attempt or resource exhaustion. I have seen two symptoms, both resolved by a reboot.
This gels with something I found when I was doing some testing last night. I suspect the problem is that the various processes in RouterOS struggle to malloc() memory for various tasks, and it makes for a very painful experience. My symptoms included: typing on the console being very laggy, issuing commands also being quite slow, and a reboot fixed it. In extreme memory exhaustion (but again, not quite OOM), the router seems to become unresponsive for a short while (won't answer telnet connections, etc.), but sort of seems to be answering ping and forwarding packets… kind of. I hypothesise that this is what MikroTik has been referring to as a "soft lockup" in their 6.45beta22 changelog entries. It "felt" very similar to interacting with a Linux server under extreme load, swapping or otherwise thrashing.
The data structure that the Linux kernel shipped in RouterOS v6 uses for the IPv6 route table (and also the route cache) is a sort of radix tree. It's very different from how the data structures for IPv4 work (because the IPv6 routing table is very sparse by comparison), and it's different again from how more modern kernels do this. I suspect the kernel is allocating the memory for this in fairly large slabs, but it would not at all surprise me if there is a possibility of fragmentation. With extreme fragmentation, there can be no contiguous region that satisfies the malloc() or realloc(), and you either segfault in userland or (I'd imagine) panic in the kernel; hence the reboot even with memory theoretically available.
All that aside, it is possible to consume a nice big chunk of contiguous memory in RouterOS's user-space — and that ought to have very similar effects. That was how I got a bunch of CHRs behaving in a way that I believe matches what buraglio is describing above. A fairly easy way to try this out might be to inject a bunch of unreachable BGP routes to make the "route" process use more memory. As you get down to a few Mbytes of free RAM, things get icky… but this is completely expected behaviour (even if it would be unfortunate).
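For anyone wanting to reproduce this in a lab, one rough way to push the "route" process's memory use up is a script along these lines. This is a sketch only: it uses static unreachable routes as a stand-in for the injected BGP routes described above, the 2001:db8 prefix is documentation space, and the route count is arbitrary.

```
# Lab-only sketch: add many discard routes to inflate the "route"
# process's memory footprint; tag them so they can be removed again.
:for i from=0 to=999 do={
    /ipv6 route add dst-address=("2001:db8:aaaa:" . $i . "::/64") \
        type=unreachable comment="memtest"
}
# cleanup when done:
/ipv6 route remove [find comment="memtest"]
```

Watch `/system resource print` while the loop runs to see the effect on free memory; scale the count up carefully on low-RAM devices.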
MAJOR CHANGES IN v6.45:
----------------------
!) ipv6 - fixed soft lockup when forwarding IPv6 packets;
!) ipv6 - fixed soft lockup when processing large IPv6 Neighbor table;
Fixed also in long-term - 6.43.14 and current - 6.44.2.
Attacked both, and both releases fix CVE-2018-19299. Fantastic news — but now the hard work for all of us network operators begins.
The tests I did on 6.45beta23 suggested different levels of memory would be used for the IPv6 routing table/cache depending on the installed RAM in the router. For example, routers with 2 GB of memory would permit a few hundred thousand entries in the IPv6 routing table/cache.
Just wondering what will happen / what the effect will be when "under attack" and hitting the memory limit:
* on neighbour mem limit
* on routing cache limit
The router will survive, but what about the legitimate connections?
What I don't understand: why is it not possible to firewall against it? When you limit the addresses that are routed, e.g. by dropping traffic in the raw prerouting table, does it still create entries for the dropped traffic in the route cache or neighbor table? Why? Of course it would still be possible to exploit it from the inside, but frankly I always worry more about exploitation from outside than from inside.
If you DROP in PREROUTING then it is safe against CVE-2018-19299. It can be firewalled, like you say; I posted rules that give you ideas how (and you can tune them to your needs). But many said that they have legitimate traffic coming from a single source to multiple destinations.
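As a concrete illustration of the DROP-in-PREROUTING advice, a raw rule of this shape should stop cache entries being created for scanned addresses. This is a sketch only: the prefix 2001:db8:1::/64 and the interface name ether1-wan are placeholders for your own network, and you must make sure the router's own addresses fall inside the allowed prefixes.

```
/ipv6 firewall raw
# drop WAN packets destined to anything outside the prefixes actually
# in use, before the routing machinery can create cache entries for them
add chain=prerouting in-interface=ether1-wan \
    dst-address=!2001:db8:1::/64 action=drop \
    comment="CVE-2018-19299 mitigation (sketch)"
```

Note that chain=prerouting sees input as well as forward traffic, so a too-narrow prefix here will also cut off management access to the router itself.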
Congrats to your team, Normis, under what must be an incredibly stressful pressure cooker of a year. Proost, gun bei, sit, cin cin, Na zdrowie and Priekā!
This is far from over. Please refer to ticket 2019040422005244 and advise. I've tried installing 6.44.2 on about 50 hAP Lites using manual update, Dude update, Winbox update, and command-line update via PuTTY.
I'm hearing reports that this isn't fixed on routers with 64 MB or less of RAM. Is your ticket about this, eben? Or something else? :-|
I have 6.43.14 installed on a hAP ac lite (64 MB RAM), and it is still vulnerable. Ticket#2019040222005195 and Ticket#2019032922005182.
It is an upgrade problem because of no free space on the router, not related to this thread at all.
Question - if there is a problem with not enough free space (on some MikroTiks) …
IPv6 is a dumb extravagance anyway.
Welcome to 2019... you must have been asleep since DARPA was experimenting with this TCP/IP thing. That's OK though, we'll help you through it.
Thank god for that, otherwise I wouldn't have had the pure joy of playing Doom with a Brit and a Yank, all served up by one of our PCs over a 33.6k modem connection, for about 10 minutes before crashing, LOL. It really rocked with the 56k modem though.
It all started with RIPE and global commerce..
@pe1chl, question: in your setup externally initiated IPv6 traffic is disallowed, right? It looks like a very interesting idea.
Yes, externally initiated IPv6 traffic to random addresses is disallowed. I added this when NDP exhaustion attacks were discussed.
Due to the address list, only systems that have initiated outbound traffic (within the last 8 hours), plus a number of server addresses put in the address list as static entries, are allowed inbound.
I had this rule in the /ipv6 firewall filter chain=forward list, which should be fine to prevent NDP exhaustion attacks but apparently is not enough for the route cache table overflow attack, so I moved it to /ipv6 firewall raw chain=prerouting.
Of course there is also a rule in /ipv6 firewall filter chain=forward that adds the src address to the list (with an 8-hour timeout) for new outbound traffic.
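A rough translation of the address-list scheme described above into rules might look like the following. This is a sketch under assumptions: the list name ipv6-lan-active and the interface name ether1-wan are hypothetical, and pe1chl's actual rules may differ in detail.

```
/ipv6 firewall filter
# learn active internal addresses: any new outbound connection adds its
# source address to the list for 8 hours (the packet continues to the
# next rule)
add chain=forward out-interface=ether1-wan connection-state=new \
    action=add-src-to-address-list address-list=ipv6-lan-active \
    address-list-timeout=8h
/ipv6 firewall raw
# drop inbound packets to addresses not on the list, before connection
# tracking and the route cache ever see them
add chain=prerouting in-interface=ether1-wan \
    dst-address-list=!ipv6-lan-active action=drop
```

Static entries for servers (and for the router itself, since raw prerouting also covers input traffic) would be added to the same list without a timeout.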
Thanks for the info. I guess you do it at the connection level, not for each packet?
If I understood pe1chl correctly, the capturing of outward addresses is done at the connection-state=new level, but the dropping of packets has to be done before connection tracking, to avoid engaging the routing machinery, so the dropping has to happen at the packet level in raw...
I have done several tests with GNS3 using CHR 6.44.2 (stable), and as long as the router has enough memory, it doesn't crash. In my tests, the attack "steals" around 180 MiB.
Using a CHR with 256 MB, system resources shows a total memory of 224 MiB and free memory of 197 MiB before the attack. During the attack, from only one computer, the free memory decreases to around 20 MiB and sometimes to 13 MiB. Using two attackers, the results seem to be the same, not worse.
With 200 MB, the router reboots because of OOM.
I had a response from MikroTik earlier saying: "Next beta will have further improvements."
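On the monitoring side, while waiting for the next beta, a scheduler script along these lines can at least log a warning before memory runs out. This is a sketch: the ~30 MiB threshold is arbitrary and should be tuned per device, and free-memory is reported in bytes on recent v6 builds.

```
/system script add name=mem-watch source={
    :local free [/system resource get free-memory]
    # threshold of roughly 30 MiB is illustrative; tune per device
    :if ($free < 30000000) do={
        :log warning ("low free memory: " . $free . " bytes")
    }
}
/system scheduler add name=mem-watch interval=1m on-event=mem-watch
```

Pair it with remote syslog so the warning survives if the router subsequently reboots from OOM.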
Fail on all fronts. There isn't enough memory / storage for the update.
Eben: you are aware that you can pull the "all_packages.zip" file, only upload the modules you need, and upgrade? Installing everything is not always an advantage, and this is not the first time in the lifetime of RouterOS that this has been an issue (for example the RB112 and RB113c, for ROS3 and upwards).
/M
We can do package by package, not on a couple of thousand routers in three days.
https://unimus.net/blog/network-wide-mi ... grade.html
Can it do package by package, or just platform by platform?
On the master ROS install, just have only the packages you do want.
Correct! (Yes - the address capture is done per connection, the dropping per packet in raw.)
@pe1chl - having moved your rule from chain=forward to chain=prerouting, do you add your WAN address to the list too (as chain=prerouting will affect input as well as forward traffic), or does your router not provide any external services?
We have no external IPv6 service on the router, but indeed, if you have one, you should add it as a static entry.
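For completeness, a static entry of the kind mentioned might look like this. A sketch only: the list name ipv6-lan-active and the address are placeholders matching however the dynamic list was named in your setup.

```
/ipv6 firewall address-list
# permanent entry for a host that must accept externally initiated
# traffic (e.g. the router's own WAN address or a public server)
add list=ipv6-lan-active address=2001:db8:1::1
```

Entries added without address-list-timeout stay in the list permanently, unlike the dynamically learned 8-hour entries.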
We tested this on three routers during the night. It works - just - but there's no way we'll be able to finish within the time constraints.
I don't know how you get the idea of package by package. You upload the needed packages, the bare minimum, and reboot; it installs all the .npk files in one go. It's no different than uploading the combined package, just multiple files instead.
/M
How do you normally update your routers?
We have a set of scripts. All non-SMIPS routers are done and dusted.
Hi,
For some time I have had a dynamic address list in IPv6 that contains all internal addresses that have attempted to make outgoing traffic (plus some static servers), and a rule that drops all incoming traffic towards addresses not in that list. This drop is now in the forward chain; I will move it to the raw prerouting chain.
Of course this countermeasure creates a new attack surface, where local users are able to fill that address list with 2^64 entries, but as I wrote, I am not so worried that this will happen.
Since two days ago (9 April), all our MikroTik devices with 64 MB RAM have been rebooting continuously after 1 minute and 40-50 seconds. The IPv6 package has already been disabled for a long time. The RouterOS versions are not the latest, but we have very strong security rules. Is this the same DDoS issue? We are not able to reboot or update firmware since the reboot command is not responding. Any ideas?
If IPv6 was not enabled, then this CVE could not be the reason. Please isolate at least one of the devices which gets rebooted, generate a supout.rif file and send it to support@mikrotik.com; of course, if you have any additional information, provide that too.
Hey, in IPv6 the usual prefix is /64, so a local attack will not be filtered by the proposed rules; the number of possible hosts is 2^64 because IPv6 addresses are 128-bit numbers with a 64-bit host part.
Sent from my Mi A2 via Tapatalk
What is this attack surface he's talking about? This is what I'm not fully getting.
The issue is that you can set an outgoing address for your device, send a packet to the outside, and the address will be added to the list.