6.22 released!

The CCR series still has problems with random rebooting.

If you have packet loss on all versions, it could be some sort of hardware problem. If you have the chance, try another machine, or at least another NIC.

Please clarify what you mean by this.

Please make a supout.rif file and email it to support. When did this issue start?

The Groove A-52HPn has the very same problem/bug when operating on 2.4 GHz that the Metal 2SHPn had. The Metal is fixed now, so could you guys check and port this fix to the Groove A-52HPn as well? On the A-52HPn the bug likewise appears only on 2.4 GHz.

No, we have packet loss only on versions 6.8-6.22; versions 6.5-6.7 work stably. After 20-30 minutes the MikroTik starts dropping packets on all interfaces. In the log we see the message “105 (no buffer space available)”. We have tried running the VM on the following servers:

  1. Supermicro X8DTT, 12 CPUs x 2.4 GHz, Intel(R) Xeon(R) CPU E5645 @ 2.40 GHz
     NIC: Intel Corporation 82576 Gigabit Network Connection

  2. HP ProLiant DL380p Gen8, 16 CPUs x 2.593 GHz, Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60 GHz
     NIC: Broadcom Corporation NetXtreme II BCM5710 10G

Last month we bought a CCR1036-8G-2S+, but it works very badly.

Please make a supout.rif file at the moment of the packet loss and send it to support.
What problems do you have with the CCR? You did not say.

*) 100% CPU load caused by DNS service fixed;
*) 100% CPU load caused by unclassified services fixed;
It’s not fixed. I updated my CCR1036 to 6.22 and CPU is still at 100%. Sometimes the CCR1036 crashes.
[Ticket#2014110166000177]

Ok, clear.
But how come I can’t see my local drive/disk?
In the old layout (System/Stores → Disk tab) there was a system disk.
Now there is nothing unless a USB drive is inserted.

The “Disks” menu now shows only additionally mounted drives and allows formatting them. The local drive is what you see in the File List, and you can still use it for the same services by changing each service’s path: “/web-proxy1”; “/log”; “/user-manager”.
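From the CLI, pointing those services at folders on the local drive looks roughly like this (a sketch from memory of v6 defaults; double-check the parameter names on your version):

```
# Point the web proxy cache at a folder on the local drive
/ip proxy set cache-path=web-proxy1
# Write the log to a file on the local drive (default disk action)
/system logging action set [find name=disk] disk-file-name=log
```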

Reading all these comments only hours after the release of a 3rd version in less than 2 months makes me suspicious. I’m skipping this version too… Is it the 22nd in a row already, or the 24th? Anyway, I’ve skipped the whole v6 series. One disappointment after another.

I can understand how this looks, but please remember that tens of thousands of users upgraded without a single issue. The problems posted here are quite unique and affect only people with specific configurations.

Please make a supout.rif file at the moment of the packet loss and send it to support.
What problems do you have with the CCR? You did not say.

About the CCR: there is still a problem with low QoS (PCQ) bandwidth. At 550 Mbit/s of traffic (testing in production), the CCR’s CPU goes to about 78-100% (all 36 cores). Two years ago the CCR could pass only 250 Mbit/s, ran on only 2 cores (at 100%), and crashed after an hour.
For example, a VDS can pass about 600 Mbit/s of traffic with QoS and NAT.
There is only one minus with the VDS. In ROS v5 I can assign 8 virtual cores. In ROS v6 I can assign 24 virtual cores, but only on the higher versions, ROS 6.19-6.22 (only these versions were tested on the VDS this month). On ROS 6.5-6.7, 8 cores only.
On the next test, I will make a supout.rif file.
The CCR1036 tested two years ago (6.0, 6.1, 6.2, 6.3) - we sold the device. This month (6.19, 6.20, 6.21, 6.22) - much better balancing between cores, but still poor performance. Waiting.
That’s all.

Wait, isn’t “no buffer space available” the same message we saw when there was a bug with the PPP driver filling up the kernel route cache? In cursemipn’s other thread that he linked to, he mentions his router runs a PPPoE server with hundreds of clients. He also mentions that he has no problems with 6.7 and his problems start with 6.8. RouterOS 6.8 was the version that included the large PPP driver rewrite, and for several versions after that, MikroTik had to fix tons of PPP bugs that 6.8 introduced. The kernel route cache memory leak was fixed for most PPP interfaces on 6.13, but maybe MikroTik missed one that still exists.

His packet loss issue could easily be explained by a runaway PPP driver filling up the kernel route cache, which would prevent the router from transmitting anything, with the kernel route cache garbage collector intermittently managing to clear out some of the entries (which would allow the router to transmit responses once more) before the cache fills up again.
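For anyone curious what this looks like at the Linux level (RouterOS v6 runs a pre-3.6 kernel, where the IPv4 route cache still exists), the relevant knobs on a generic Linux box of that era are roughly the following. These are stock-Linux commands for illustration; on RouterOS itself you only get the “/ip route cache print” view:

```
ip -s route show cache                   # dump the cached route entries
cat /proc/sys/net/ipv4/route/max_size    # cache size limit; "no buffer space
                                         # available" shows up when it is hit
cat /proc/sys/net/ipv4/route/gc_timeout  # how aggressively gc expires entries
```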

This is the thread in question: http://forum.mikrotik.com/t/rb2011uas-2hnd-stops-responding-spontaneously/75301/1

To cursemipn: the next time this happens, please paste the results of “/ip route cache print” to this thread.

– Nathan

Why do you assume we don’t have complex configs on our core network? I’m just being cautious. Also, reading so many negative comments on ALL v6 release threads, including this one, makes me a bit worried about my decision to implement your routers in our network. All you need to do is release a STABLE version where all declared features work. Random rebooting of CCRs isn’t what I had in mind; these devices are meant to be used in core networks, and I’m glad we are not using them there.

Hello,

I have big trouble with our main RouterBoard, an RB750G. It showed up after upgrading to 6.21.1, and after upgrading to 6.22 it’s still there.

All physical interfaces randomly go down and come up again (it looks like they are restarting). This never happened before, and we have been using these RouterBoards for more than 3 years now.

Here are a few lines from today’s log:

Nov/13/2014 12:39:32 interface,info ether1-gateway link down
Nov/13/2014 12:39:32 interface,info ether2-local-master link down
Nov/13/2014 12:39:32 interface,info ether3-local-slave link down
Nov/13/2014 12:39:34 interface,info ether2-local-master link up (speed 100M, full duplex)
Nov/13/2014 12:39:36 interface,info ether1-gateway link up (speed 1G, full duplex)
Nov/13/2014 12:39:50 interface,info ether3-local-slave link up (speed 100M, full duplex)
Nov/13/2014 13:07:32 interface,info ether1-gateway link down
Nov/13/2014 13:07:32 interface,info ether2-local-master link down
Nov/13/2014 13:07:32 interface,info ether3-local-slave link down
Nov/13/2014 13:07:34 interface,info ether2-local-master link up (speed 100M, full duplex)
Nov/13/2014 13:07:36 interface,info ether1-gateway link up (speed 1G, full duplex)
Nov/13/2014 13:07:51 interface,info ether3-local-slave link up (speed 100M, full duplex)
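A log excerpt like this can be tallied per interface with a quick one-liner (assuming it is saved to a file, here called router.log; in this log format the interface name is the 4th field):

```shell
# Count "link down" events per interface from the saved log
awk '/link down/ { down[$4]++ } END { for (i in down) print i, down[i] }' router.log
```

All three interfaces drop within the same second, which points at the whole switch chip (or the board) resetting rather than at any single port.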

I have tried to find something that could trigger this strange behavior using Wireshark, but found nothing :frowning:

Can somebody help?
Or can I somehow downgrade RouterOS?

Thanks to all :sunglasses:
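On the downgrade question above: yes, RouterOS can normally be rolled back. Roughly (from memory, so double-check against the wiki for your board): upload the older routeros .npk package to the router’s Files, then run:

```
# Installs the older uploaded package and reboots the router
/system package downgrade
```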

I did not say complex, I said specific. There are many ISPs running this version now without any issues, with complex configs.
Most of the people having problems are in this forum; you can see how many there are, fewer than 10.
We are addressing all those remaining issues, please don’t worry.

It’s because others are not beta testers unlike us.

We have statistics on how many devices have upgraded. I don’t want to argue about this; please let us concentrate on the remaining issues here. Let us discuss the release rather than nitpick.

Yes, you said specific, not complex. But still, every major company has some specific configs, and if your routers constantly have bugs, with each new version bringing new ones, it is hard to accept “don’t worry”, especially because I’ve been worrying the whole time v6 has been out. I will wait for 6.23, but judging by previous experience nothing will change: some old bugs will be fixed and some new ones will be introduced, simply because you DON’T TEST your software. When you get a critical mass of unsatisfied customers it will be a different story.
The other thing is licensing… we bought many routers (mostly 411xx) when v4 was the current version, and they are upgradable to v6.x. It is quite hard for me to accept the claim that v6 will be stable before v7 comes out. These routers cannot be upgraded to v6 because of constant connect/disconnect issues over PPTP VPN. It is unstable. Period. I’ve been reading this forum for more than 6 years now, and I have seen many answers from MT staff like “please wait for version xx.xx for this bug to be fixed”. Always the same. No, the only right thing for me is to worry. Upgrading to any v6 at this stage is a no-no.