Hi, I used to have a big IP address list uploaded to my RB2011 router (128M RAM).
This list was used to bypass the official Russian government IP blacklist filter.
Without the list loaded, I have around 100M free.
When the list contained about 120’000 entries, I had 40M free memory.
Now the list is around 250’000 entries and it no longer fits into memory.
I have no idea how much memory one entry consumes, and I'm no expert in Linux ipset internals, but let's imagine one entry is 20 bytes; then the whole list should fit into 5M.
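For a rough sanity check, the numbers above already imply a per-entry cost: 100M free without the list versus 40M free with ~120,000 entries suggests roughly 500 bytes per entry, far more than the 20-byte guess. A back-of-the-envelope calculation (the figures are the ones quoted above; the real overhead depends on RouterOS internals):

```python
# Free RAM figures quoted above (in MiB): without the list vs. with ~120k entries
free_without_mib = 100
free_with_mib = 40
entries = 120_000

# Memory apparently consumed by the loaded list
consumed_bytes = (free_without_mib - free_with_mib) * 1024 * 1024

# Implied cost per entry -- roughly 524 bytes, not 20
per_entry = consumed_bytes / entries
print(f"~{per_entry:.0f} bytes per entry")

# At that rate, 250k entries would need well over 100 MiB
print(f"250k entries -> ~{250_000 * per_entry / 1024 / 1024:.0f} MiB")
```

At ~500 bytes per entry, a 250,000-entry list alone would need around 125 MiB, which matches it no longer fitting on a 128M device.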
I have a feeling that the code that loads the list into memory consumes some memory itself, and maybe leaks a bit too.
Maybe you can share some knowledge. Is there any room for devs to optimize?
For now I will cherry-pick IP addresses that I need and manually add them.
How about optimizing the list … by aggregating addresses into networks (of different sizes)? That would bring down the number of entries … which would both consume less memory and speed up filtering …
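For what it's worth, this kind of aggregation is easy to do offline before uploading the list. A minimal sketch using Python's standard `ipaddress` module (the sample entries are made up for illustration):

```python
import ipaddress

# Hypothetical sample of list entries: single hosts and networks mixed
raw = ["198.51.100.0/25", "198.51.100.128/25", "198.51.100.64",
       "203.0.113.7", "203.0.113.8/30"]

nets = [ipaddress.ip_network(entry, strict=False) for entry in raw]

# collapse_addresses merges adjacent and overlapping networks:
# the two /25s (and the host inside one of them) become a single /24
merged = list(ipaddress.collapse_addresses(nets))
for net in merged:
    print(net)
```

Note this only merges ranges that are exactly covered; allowing small gaps (supernetting with some tolerance) would shrink the list further, at the cost of also matching addresses that were never on the blacklist.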
For the record, this is the solution I came up with.
With a few firewall mangle rules, I detect when a second TCP SYN is sent for a connection (meaning packets are probably being dropped). The dst address is then added to an address list with a one-week timeout. For further connections to addresses on this list I set a routing mark, and with this routing mark they are directed into the tunnel.
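A rough sketch of what such rules could look like in RouterOS (the list names, routing mark, and helper-list timeout here are placeholders, not the exact rules used):

```
/ip firewall mangle
# Second SYN while dst is still on the short-lived helper list
# -> connection is probably being dropped; remember dst for a week
add chain=prerouting protocol=tcp tcp-flags=syn connection-state=new \
    dst-address-list=syn-seen action=add-dst-to-address-list \
    address-list=blocked-dst address-list-timeout=1w comment="bypass: detect"
# First SYN: remember dst briefly on the helper list
add chain=prerouting protocol=tcp tcp-flags=syn connection-state=new \
    action=add-dst-to-address-list address-list=syn-seen \
    address-list-timeout=10s comment="bypass: syn-seen"
# Traffic to remembered destinations gets a routing mark,
# which a route pointing at the tunnel then picks up
add chain=prerouting dst-address-list=blocked-dst action=mark-routing \
    new-routing-mark=via-tunnel passthrough=no comment="bypass: mark"
```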
I also have a periodically running script that detects when my WAN is down (by pinging Google) and disables/re-enables the rules, to prevent list pollution.
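The watchdog could be as simple as a scheduled script that toggles the rules by comment (the script name, ping target, and interval are placeholders):

```
/system script add name=bypass-watchdog source={
    :if ([/ping 8.8.8.8 count=3] = 0) do={
        # WAN looks down: stop adding entries so failed SYNs
        # don't pollute the address list
        /ip firewall mangle set [find comment~"bypass:"] disabled=yes
    } else={
        /ip firewall mangle set [find comment~"bypass:"] disabled=no
    }
}
/system scheduler add name=bypass-watchdog-run interval=1m \
    on-event=bypass-watchdog
```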
I'd suggest that someone has already done all the work you are doing, but provides protection for so much more at pennies a day, and addresses the specific MikroTik device limitations with regard to storage.
Search word MOAB.