In a football and tennis campus (with around 200 mobile devices connected every day) we have a free Wi-Fi network on VLAN 150.
I wanted to set up a proxy server external to my location, to “mask” my public IP from this VLAN 150.
Long story
On a football and tennis campus (with around 200 mobile devices connected every day) we have a free Wi-Fi network.
My ISP provides 1 Gbps download, 400 Mbps upload and a dynamic public IP. It turns out that my public IP changes roughly every 7 to 10 days. Our router is a MikroTik RB5009.
We offer the free Wi-Fi with symmetrical 50 Mbps download/upload limits.
On the first or second day after the public IP changes, everything works fine, but from the third day onwards the DDoS attacks begin,
which make it almost impossible for the local network to even ping servers 1.1.1.1 or 8.8.8.8; pings start timing out.
If you are not using IP → DNS → Allow Remote Requests, make sure it is unchecked.
If you are providing free Wi-Fi, I guess you can just let clients query public DNS servers like 1.1.1.1, 8.8.8.8 or 8.8.4.4 directly instead of the MikroTik itself.
More than likely the issue is someone within the network is causing the issues.
Without seeing the config it's hard to say if you have something missing in terms of proper security setup.
/export file=anynameyouwish (minus the router serial number, any public WAN IP information, keys etc.)
This is like worrying needlessly about a trifle.
As for this request, the firewall is configured correctly, and the number of packets, if I interpreted correctly what you wrote (1,000,000 in 2–3 days),
is perfectly normal these days, given how careless ISPs are.
If you really had a DDoS there would be 1,000,000 packets per second, not 1,000,000 every 2–3 days (or every 12 hours)…
(And if you had an ISP that knows how to do its job, there wouldn't be one at all)…
For example, on my edge router that serves more than 500 customers, I see ~250 packets every second(*) on incoming UDP connections on port 53…
So your problem is elsewhere; it can't be a supposed DDoS on DNS, given the numbers you write…
273000 / 12 = 22750 packets per hour
22750 / 60 ≈ 379 packets per minute
379 / 60 ≈ 6.32 packets per second (on average)
And in any case 18.5 Mb over 12 hours.
The action of dropping them shouldn't need a large amount of resources, certainly not so much that the consequence “makes it almost impossible for the local network to even ping servers 1.1.1.1 or 8.8.8.8, starting to give timeouts”.
It always means the complete configuration (with sensitive data censored, not with whole lines deleted…), not just the little piece where YOU think the problem is.
It seems to be a truism, Rextended.
Posters come here for help but insist they know where the problem is, which begs the question of why they come here in the first place…
Either that, or 95% of posters are illiterate or believe in fiction writing.
I never said:
Without seeing the config (but please only the parts that you, the one having issues, the one who doesn't know what's going on, think I need to see), it's hard to say if you have something missing in terms of proper security setup.
/export file=anynameyouwish (minus everything that may be critical, and only include where you mistakenly think the problem is.)
You could consider turning on logging (temporarily) to find out what the source IP address is.
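As a sketch (the interface name ether1-wan and the log prefix are assumptions, adjust them to your setup), a temporary logging rule in RouterOS could look like:

/ip firewall filter
add chain=input in-interface=ether1-wan protocol=udp dst-port=53 action=log log-prefix="dns-in:" comment="temporary: log inbound DNS to find source IPs"

The source addresses then show up in Log (with the "dns-in:" prefix); remove the rule again once you have the information, since logging every packet gets expensive under load.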
Unfortunately there is not much you can do about a DDoS yourself; you would need your ISP. AFAIK the firewall is properly configured, and DNS traffic from the WAN side is already blocked even without the prerouting rules.
I still can't explain why you experience the problems you describe, though be aware that ping is dropped first because it has lower priority.
I would advise you to change the DHCP network and have all clients use the router as DNS server:
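A hedged sketch of that change (the subnet 10.44.73.0/24 and gateway 10.44.73.1 are assumptions based on the addressing mentioned later in the thread; substitute your own LAN network):

/ip dhcp-server network
set [find address="10.44.73.0/24"] dns-server=10.44.73.1

After clients renew their leases, they will send all DNS queries to the router instead of directly to public resolvers.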
Where is the VLAN config part?
My best guess would be that the DDNS name is published publicly. That might happen when there is a domain name involved whose DNS points to the DDNS name. Is that possible? Can you turn off DDNS (as far as I can see it isn't used at all…)?
/ip cloud
set ddns-enabled=no ddns-update-interval=1h
@erlinden @anav
If I may ask a few side questions, only trying to understand your suggestions, the proposed changes are shifting all DNS requests (from LAN) to the router/gateway at 10.44.73.1, right?
Is this a “generic” good idea/practice or it is something that is only a test specific to try to address/mitigate the OP’s issue?
The DNS server on the MikroTik will anyway query the 1.1.1.1 and 8.8.8.8 public servers set in /ip dns; the difference should be that the RouterOS cache comes into play, right?
On larger networks it can be helpful to cache DNS locally for faster responses. The NAT rule trick is decent as well, since you can still hand out public DNS IPs to clients while NAT bends the behaviour in the background; clients wouldn't know you're doing it unless they really dug down.
I am unsure, though, how this will limit inbound DNS from the web. Surely this is something the ISP should be fixing rather than you at the edge of their network?
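The NAT rule trick mentioned above could be sketched like this (untested; the interface name vlan150 is an assumption). It transparently redirects any DNS query from LAN clients to the router's own resolver:

/ip firewall nat
add chain=dstnat in-interface=vlan150 protocol=udp dst-port=53 action=redirect to-ports=53 comment="hijack client DNS to local cache"
add chain=dstnat in-interface=vlan150 protocol=tcp dst-port=53 action=redirect to-ports=53

Clients that have hard-coded 8.8.8.8 or 1.1.1.1 still get answers, but from the router's cache.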
This too is also good, but you don't need logging: you can create an input rule on your WAN for UDP 53 traffic that adds the source IP to an address list, set the timeout to maybe 7 days (more if you feel it's needed), and then create a rule above it (usually first in the chain) that drops input traffic matching that dynamic list.
You can extend it to cover other common ports and add a copy of it for TCP: a very capable honeypot-style setup that sniffs out nasties and blocks them for a period of time with no maintenance needed.
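A minimal sketch of that setup (rule order matters: the drop rule must sit above the list-building rule; the interface name ether1-wan and the list name are assumptions):

/ip firewall filter
add chain=input in-interface=ether1-wan src-address-list=dns-abusers action=drop comment="drop previously seen offenders"
add chain=input in-interface=ether1-wan protocol=udp dst-port=53 action=add-src-to-address-list address-list=dns-abusers address-list-timeout=7d comment="anyone probing UDP 53 from WAN gets listed for 7 days"

Entries age out automatically after the 7-day timeout, so the list maintains itself.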
This is not the case here, but sometimes it is a really bad idea to log during a DDoS: it takes up a lot of CPU.
This is not the case here either, but sometimes it is also a really bad idea to build an address list during a DDoS:
it takes up a lot of CPU, and in a short time it runs out of RAM…
I'm sure I once listened to a MUM talk claiming that forwarding a packet to a blackhole takes less CPU than dropping it? No idea who, where or why, but it's also an option.
Port-forward inbound DNS requests to a non-existent IP.
Only if there's a very effective way of doing it; otherwise I doubt it.
But whatever one does, packets definitely have to be silently dropped (as opposed to rejecting them with ICMP port unreachable, which theoretically would be the right thing to do).
If packets are dropped by firewall raw rules, they are dropped even before the connection tracking machinery does its classification. After that come SRC-NAT, at least a few firewall filter rules and the routing decision, plus waiting for (and buffering packets during) an ARP who-has timeout on a non-existing (new) DST IP. Even if the final destination is a proper blackhole and no ARP or buffering happens, quite some packet processing still takes place.
Now, how to drop packets effectively in firewall raw? In the case of a DDoS with random SRC IPs hammering a particular TCP/UDP port, it's probably easiest to whitelist legitimate addresses (the LAN subnet(s) and the configured remote DNS servers) and drop the rest. However, even this doesn't help if the src-address of ingress packets is spoofed to one of the whitelisted IP addresses, in which case one would have to rate-limit traffic towards the allowed remote addresses. In any case, a properly conducted DDoS attack will make your internet connection crawl.
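A hedged sketch of that whitelist-and-drop approach in firewall raw (the interface name ether1-wan is an assumption; the resolver addresses match the 1.1.1.1/8.8.8.8 servers mentioned earlier in the thread):

/ip firewall address-list
add list=dns-allowed address=1.1.1.1 comment="our upstream resolvers"
add list=dns-allowed address=8.8.8.8
/ip firewall raw
add chain=prerouting in-interface=ether1-wan protocol=udp src-port=53 src-address-list=dns-allowed action=accept comment="replies from our configured resolvers"
add chain=prerouting in-interface=ether1-wan protocol=udp port=53 action=drop comment="drop all other port-53 UDP from WAN"

The accept rule is needed because raw runs before connection tracking, so without it the router would drop the replies to its own DNS queries as well.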
I sent an email to my ISP with screenshots of my RB5009 raw rules and the millions of packets, and I got a call from tech support asking for 2 working days to fix the situation.
They just called me again saying they have made some changes and asked me to report back within 1 or 2 weeks.
I reset all the counters and it looks like it's fixed for now: no packets hitting the raw rules… Normally I would see a few hundred in a matter of seconds… I hope my ISP has fixed it!