I want a MikroTik script to read a ".txt" file and add the IPs it finds to a list used by a firewall rule. However, the addresses are not added to the list. Can you help me?
The :deserialize command is very new, so it may actually be a bug in it, i.e. it inserts an extra, empty array element that cannot be converted to an IP type and thus defaults to 0.0.0.0.
Do you have a sample file you can post?
But I think this should protect against that case:
The "old school" way of doing this is to read each character using [:pick] and use two variables: one holding the line currently being parsed, and another holding the list of all IPs found so far (the current line is appended whenever a \n is found in the buffer). There is nothing like a readline() for files in RouterOS otherwise.
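A minimal sketch of that approach (the file name and variable names are just illustrative):

:local content [/file get "blocklist.txt" contents]
:local line ""
:local ipList ({})
:for i from=0 to=([:len $content] - 1) do={
    :local ch [:pick $content $i ($i + 1)]
    :if ($ch = "\n" or $ch = "\r") do={
        # end of line: store the finished line, if any
        :if ([:len $line] > 0) do={ :set ipList ($ipList, $line) }
        :set line ""
    } else={
        :set line ($line . $ch)
    }
}
# catch a last line without a trailing newline
:if ([:len $line] > 0) do={ :set ipList ($ipList, $line) }
:foreach ip in=$ipList do={ :put $ip }

Treating \r like \n also covers CRLF files, since the empty string between \r and \n is simply skipped.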
The IP 0.0.0.0 was added when I created a test file on Linux. When I downloaded the file from a URL I had no problem.
Script:
# Variables
:local url "https://myurl/blocklist.txt"
:local fileName "blocklist.txt"
:local addressListName "blocklist"
:local filebody
:local delayTime 5s
# Download the specified file
/tool fetch url=$url mode=https dst-path=$fileName
# 5 second delay
:delay $delayTime
# Remove all addresses from the specified list
/ip firewall address-list remove [find list=$addressListName]
# Reads the contents of the file and stores it in the local variable filebody
:set filebody [/file get $fileName contents]
# Iterate over each IP from the file contents and add it to the specified list of addresses
:foreach row in=[:deserialize $filebody delimiter="\n" from=dsv options=dsv.plain] do={
    # each row is an array of fields; skip empty lines (they become 0.0.0.0)
    :local ip ($row->0)
    :if ([:len $ip] > 0) do={
        :put $ip
        /ip firewall address-list add list=$addressListName address=$ip
    }
}
# Command to remove the file
/file remove $fileName
All the problems with importing IPs from a file that have already emerged in other posts will also be duplicated in this topic.
Here it seems that everything starts from scratch, ignoring what has ALREADY been done.
It is true that there is now :deserialize, but all the rest of the problems remain the same…
(like avoiding blocking 0.0.0.0, avoiding blocking your own IPs, avoiding leaving the network unprotected during the import***, different delimiters, etc.)
*** do not delete/remove; simply set a dynamic timeout > the update interval and import only what does not already exist…
Avoid adding/removing from the permanent config; use dynamic entries instead to preserve the NAND/flash.
If the timeout parameter is not specified, the address will be saved to the list permanently on disk. If a timeout is specified, the address will be stored in RAM and will be removed after a system reboot.
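A rough sketch of that approach (list name, address, and timeout value are just placeholders), assuming the import runs again before the timeout expires:

:local listName "blocklist"
:local addr "203.0.113.7"
# add only if not already present; a timeout makes the entry dynamic
# (RAM only), so the NAND/flash is not written and the list is never
# emptied during the import
:if ([:len [/ip firewall address-list find list=$listName address=$addr]] = 0) do={
    /ip firewall address-list add list=$listName address=$addr timeout=2d
}

Entries no longer present in the source file simply age out when their timeout expires, so no explicit remove step is needed.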
And actually, duplicates are another one. I have updated the example script to cover these.
I was providing more an example of reading line-by-line than of using address-list. But if the list is really long… @rextended is right: with larger lists more care may be needed. For example, you might want to consolidate prefixes before adding them to the address-list.
As far as I can see, :deserialize from dsv always splits by new line regardless of which line ending and delimiter is used, so no conversion is needed: iplist.txt (LF or CRLF):
1.1.1.1
2.2.2.2
The results are the same regardless of whether the line ending is LF or CRLF:
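A small sketch to illustrate, using inline strings instead of a file:

:local lf "1.1.1.1\n2.2.2.2"
:local crlf "1.1.1.1\r\n2.2.2.2"
# both calls produce the same array of rows
:put [:deserialize $lf from=dsv options=dsv.plain]
:put [:deserialize $crlf from=dsv options=dsv.plain]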
@optio has a good point. delimiter= is the "field separator", not the "record separator" (to borrow awk's terms). The default "record separator" seems to be a newline. So the delimiter does not matter if there is only one "field" per row, and delimiter= does not need to look for the newline, since that's implicit in a CSV-like format.
Also, what it does is hard to see with a :put… since that doesn't show the actual array structure :deserialize generates. So taking one of @rextended's examples…
It makes sense for it to be a 2-dimensional array (fields per line). When a new-line separator is used, or a delimiter which doesn't exist in the text, there will always be a single field per line, and that can be used for full-line reading…
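A small sketch of that full-line reading (the :serialize call to inspect the nested structure is an assumption; it needs a RouterOS version that supports :serialize to=json):

:local text "1.1.1.1\n2.2.2.2"
:local rows [:deserialize $text from=dsv options=dsv.plain]
# each row is itself an array of fields; with one field per row,
# ($row->0) is the full line
:foreach row in=$rows do={ :put ($row->0) }
# optionally inspect the nested array structure as JSON
:put [:serialize $rows to=json]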