Remote Host Scanning our IPv6 Network

A few days ago one of our routers was hitting IPv6 neighbor cache exhaustion. The symptom was occasional unreachability over IPv6. I pulled up Torch and found that someone was actually scanning our network, probing consecutive addresses in a /64 to see if anything responded!

Dropping this traffic is easy, but remember that because this is a neighbor discovery cache exhaustion attack, you can't protect yourself by dropping in the forward chain of /ipv6 firewall filter. You need something like:

/ipv6 firewall raw add action=drop chain=prerouting src-address-list=shitpit
/ipv6 firewall address-list add list=shitpit address=2001:0db8:85a3:0000:0000:8a2e:0370:7334

Hope that helps someone who encounters this rather… pointless scanning.

I wish a pack of wild dogs would devour anyone who does this crap on purpose.

And kudos for using the doc prefix in your example. :slight_smile:

Thanks.

Currently I’m unsure whether this scan (coming from a university network) is “legitimate research” or “pwned box” — as yet, no response from their abuse contact — otherwise my example might have named-and-shamed the device in question :wink:

Hi
I think you have found a risk that calls for mitigation techniques in RouterOS.
The default IPv6 max-neighbor-entries table size is 8K, and entries with status="failed" stay there for nearly 30 seconds. It's quite easy to keep such a table busy.
Unless we have some method to detect the next candidate and update the "shitpit" ACL automagically, more ideas would be welcome.
Should it be a configurable ND cache expiration time? Dropping ICMPv6 responses to the attacker's IP in the firewall when a certain threshold is reached? Clearing the oldest entries in the failed neighbor list when the next one is probed?

This last suggestion sounds pretty awesome to me.

Any suggestions? We are currently filtering the target by matching !IPv6/112 instead of allowing the entire /64. Luckily we used static addresses, or this would not be possible.
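
For reference, a rough sketch of how that can look in RouterOS (the prefixes below are placeholders, not our real config), using raw/prerouting as suggested earlier so probes are discarded before they can trigger neighbour resolution:

# hypothetical: 2001:db8:1::/64 is the routed LAN, 2001:db8:1::/112 holds the static hosts
/ipv6 firewall raw add chain=prerouting dst-address=2001:db8:1::/112 action=accept comment="real static hosts"
/ipv6 firewall raw add chain=prerouting dst-address=2001:db8:1::/64 action=drop comment="rest of the /64 is unused, drop probes to it"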

I wonder if a firewall filter with limits would pick this scanning up effectively without blocking real traffic: set the packet-count threshold just below the maximum "MaxNeighborEntries" value you can configure without ND cache exhaustion, while keeping that value low enough not to cause resource exhaustion on the device.

Possibly a rule that matches on connection-state=new and applies per-packet limiting?
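
Something along those lines might look like this (completely untested; the rate, prefix and list name are guesses rather than recommended values, and note the limit matcher counts per rule rather than per source, so a busy network could list legitimate hosts):

/ipv6 firewall filter add chain=forward dst-address=2001:db8:1::/64 connection-state=new limit=50/1s,20:packet action=accept comment="up to ~50 new flows/s into the /64 pass"
/ipv6 firewall filter add chain=forward dst-address=2001:db8:1::/64 connection-state=new action=add-src-to-address-list address-list=nd-scanners address-list-timeout=1h comment="anything above the limit gets listed"
/ipv6 firewall raw add chain=prerouting src-address-list=nd-scanners action=drop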

A 3-pronged approach:

  1. Double the cache to 16384 (/ipv6 settings set max-neighbor-entries=16384)
  2. Apply firewall rules that count connections and, on violation, add the source to an address list that can then be dropped in raw for better performance
  3. Use /126 addressing on PtP links and only use /64 on shared connections (where you need SLAAC)


/ipv6 firewall filter add connection-limit=6000,128 action=add-src-to-address-list ...
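
For completeness, a hypothetical fully spelled-out version of that rule (the chain, list name and timeout are my guesses, not tested values):

/ipv6 firewall filter add chain=forward connection-limit=6000,128 action=add-src-to-address-list address-list=nd-scanners address-list-timeout=1d comment="sources holding an excessive number of connections get listed"
/ipv6 firewall raw add chain=prerouting src-address-list=nd-scanners action=drop comment="then dropped cheaply in raw"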

A real fix would be the ability to control IPv6 ND traffic, or at least to control the cache timing the way we can with ARP. Some previous documentation and testing indicate this is difficult to exploit when the cache time is drastically lowered.

[admin@rtr1] > ipv6 settings set max-neighbor-entries=

MaxNeighborEntries ::= 0..4294967295    (integer number)

When you really need an IPv6 network to be reachable from outside without stateful filtering, it is better to limit its size, e.g. to a /120 or /112 as mentioned.
Usually most people will have a stateful firewall, or will allow incoming traffic only to a few addresses, and this issue will not occur.
When the network is simply open, most routers will have this problem.
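
For completeness, the kind of minimal stateful setup being referred to looks roughly like this (the interface name is a placeholder; adapt before using):

/ipv6 firewall filter add chain=forward connection-state=established,related action=accept comment="return traffic for flows started from inside"
/ipv6 firewall filter add chain=forward connection-state=invalid action=drop
/ipv6 firewall filter add chain=forward in-interface=ether1-wan connection-state=new action=drop comment="unsolicited inbound traffic, including address scans"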

Or use link-local addressing for router-to-router links and /128 on loopback interfaces.
That’d clear it right up. :wink:

I’m mostly kidding because I understand the drawbacks to using link-local-only addressing within non-trivial topologies.

Derp, I'm now able to replicate it on my 750Gr3. Oddly, I see the number of entries in /ipv6 neighbor climbing well past my current max-neighbor-entries setting. That said, it hasn't made much of a dent in free-memory yet; thankfully the 750Gr3 has 256 MB, though it is slowly creeping downwards. I'm definitely able to populate the table faster than it is purged by default, so I suspect I'll be able to fill all free memory. We'll see if the 750Gr3 locks up or what.

I'll have to see whether my connection-limiting would protect against this. I'm not sure whether it would be a stretch, from a hardware requirements perspective, to perform connection tracking on a per-tower basis where you want to use a /64 for the benefit of SLAAC.

I know this has been mitigated by default in the latest Cisco routers for a while now with cache limits per interface.

At the time of this post …

[admin@rtr1] > ipv6 neighbor print count-only where interface=br1-vlan41 
46883

[admin@rtr1] > system resource print 
                   uptime: 1w1d2h33m40s
                  version: 6.41rc31 (testing)
               build-time: Sep/20/2017 06:56:52
         factory-software: 6.36.1
              free-memory: 144.7MiB
             total-memory: 256.0MiB
                      cpu: MIPS 1004Kc V2.15
                cpu-count: 4
            cpu-frequency: 880MHz
                 cpu-load: 2%
           free-hdd-space: 6.0MiB
          total-hdd-space: 16.3MiB

[admin@rtr1] > ipv6 settings print 
                       forward: yes
              accept-redirects: yes-if-forwarding-disabled
  accept-router-advertisements: yes-if-forwarding-disabled
          max-neighbor-entries: 8192

Cache limits only limit the problem of complete resource (memory) exhaustion in the router, not the “denial of service” problem on the network itself.
They were introduced when it was shown that routers could be completely DoSed (for both IPv6 and IPv4) using quite a slow scan.
They fix that problem, but not the problem this topic started with.

That type of scanning technique is indeed pointless or dimwitted; however, it may have served its purpose… That might be the real context.

Turns out IPv6 scanning isn't "that" hard, although it's still much harder than scanning IPv4 with zmap:

https://tools.ietf.org/html/rfc7707

(Gives me the heebie jeebies a little bit.)

Anyone scanning sequential addresses is either an idiot or is doing exactly what happened to you. Glad to see some discussion on it.

Thinking out loud: what if your router could reply from every unallocated address in the PD? Would be interesting to "see what happens". Chaos, obviously. Fun.

So I thought about what I said above, and it wasn't well thought out. I guess what I'm suggesting is that anyone on an IPv6 endpoint that is trying to run actual IPv6 services on the internet (web, streaming, whatever) might be well served by a honeypot that responds to "everything" on the PD subnets not meant for a legitimate server. Or maybe it could even just respond to random addresses. You'd tie up anyone trying to discover real hosts basically "forever", making the scan pointless. It would be easy to DDoS, but most things are these days anyway. Maybe what I'm saying isn't a new idea in IPv6 land.

It was still >2 million more attempts (many hours) before this "attack" ceased. Maybe neighbour cache exhaustion was the purpose… I never heard back from the university in question, so probably not "real" research.

Another one for our IPv6 address space scanning shitlist:

add address=2607:f140:4800::/48 list=shitpit

It's likely being done by one of the authors of this paper (or someone working with them) at UC Berkeley: https://conferences.sigcomm.org/imc/2017/papers/imc17-final245.pdf

They’re doing a sort of enumerative scan of IPv6 address space, depth-first, which results in about 100pps of IPv6 traffic from them. That soon fills up a neighbour cache on a smaller device (even if you’ve set it to 100k+ entries), and pretty soon afterwards your little device loses its connectivity (cue lots of Nagios alerts).

I’ve contacted the abuse address and registered address in WHOIS, and reached out to one of the researchers on Twitter, and suggested they add some “ethical considerations” to section 4 of their paper — and think about potential impact upon targeted networks.

Meanwhile is there anything MikroTik could do in RouterOS to make it easier to flush out “failed” or “noarp” entries in the IPv6 neighbour cache? Or at least let us adjust the timeout for the failed entries to be much quicker than the successful ones?

In the time it’s taken me to email their network abuse contact, and shitpost about them on Twitter, they’ve probed ~150k addresses. Fun times.

Those "researchers" are the worst. With a hacker or prober (those guys that keep a list of available addresses/ports for hackers) you at least know they do it to cause (indirect) damage. The researchers, however, claim they are doing it for a good purpose, yet they are causing the same damage.

I think starting this kind of research immediately disqualifies them as credible scientists: a credible one would first investigate the situation, define a research project, and find out about possible unwanted effects before even starting the actual operation.

Seems like MikroTik needs an ND policer like everyone else implemented in 2012 or earlier. That said, it would constitute IPv6 feature work, and we know how unlikely that is at MikroTik.

Maybe a high severity CVE is needed to get MikroTik’s attention to effectively mitigate this.

Or take my approach, which is growing in the community: stop purchasing new MikroTik equipment until they begin to take IPv6 seriously.

I received a reply just now:

Hi Marek,

We’ve added your prefix […] to our blacklist.

We spread probes as evenly as possible across routed prefixes, and shuffle targets within each prefix.

Regards,
6Gen Team

If they're actually spreading things out evenly across routed prefixes, then to generate the quantity of packets we were receiving they must be scanning the Internet at somewhere around 20-40 Mpps.

mikrotik.com has IPv6 address 2a02:610:7501:1000::2

It’s only a matter of time before the researchers hit that…

But of course that is not likely to cause a problem, especially with an address like that.
The issue only occurs when there are large (/64) subnets behind a router that does not do any filtering.
Hosting is usually done on small subnets (/112), and often there will be a firewall on the router that only passes traffic to a few wanted ports on some specific addresses. When you see addresses like ::2, it points to a situation where the admin actually considered this.