I have not studied @jonte's Splunk scripts, so I'm a bit blind on what they might capture. Fundamentally, the logging system has some limits - which @jonte and others have cataloged. The one trick logging does have is adding more/"duplicate" /system/logging/actions (and using them for different subsets of topics) - so that's kinda the toolset we're playing with here.
For example, if you have some "!script" today, you should be able to add a "2nd memory" action with a different "name=" that can be used for topics=script, to keep those from hitting Splunk/etc. Essentially you're allowed to have multiple target=memory (or other target=) actions:
/system/logging/action/add name=script target=memory
/system/logging/add topics=script action=script
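A quick way to sanity-check the routing (just an illustrative test message): a :log call from a script or the CLI gets tagged with the "script" topic plus the severity, so after the rule above it should show up in the new "script" buffer (it will still also match any other rule whose topics it carries, e.g. the default topics=info one):
:log info "test message for the script buffer"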
For /tool/fetch logging, similarly, you can create another log action. But, if you ask me, there are two use cases for logging here:
- Case A: when interactively writing the script, trying to get the right parameters for fetch, and hitting an error/issue
- Case B: after the fetch script is "done", what to do when it "fails" in operation
… and each has a different solution.
For case A with /tool/fetch, the newer logging in recent v7 lets you essentially replicate curl/wget's "verbose" option by using the "echo" action to print the fetch logs to your console. For example, here is a call that gets a 301 error; the details are shown right on the console where you're typing, because of the "action=echo" rule:
/system/logging/add topics=fetch action=echo comment="fetch debug"
/tool/fetch url="http://postman-echo.com/status/301"
status: failed
failure: Fetch failed with status 301 ( Set-cookie: "sails.sid=s%3AdV8YXvKdMKPcpB6QK1X2_VGykfh_xiMX.U3HgJ8engUfXhHSpYiWuf9EbcyXMUUOG6tFtddyM7iU; Path=/; HttpOnly")
(5 messages discarded)
08:06:31 echo: fetch,raw,read HTTP/1.1 301 Moved Permanently\r\n
08:06:31 echo: fetch,raw,read Date: Sun, 29 Sep 2024 15:06:31 GMT\r\n
08:06:31 echo: fetch,raw,read Content-Type: application/json; charset=utf-8\r\n
08:06:31 echo: fetch,raw,read Content-Length: 19\r\n
08:06:31 echo: fetch,raw,read Connection: keep-alive\r\n
08:06:31 echo: fetch,raw,read Server: nginx/1.25.3\r\n
08:06:31 echo: fetch,raw,read ETag: W/"13-exJghAlC9Ebmzy4CXRTzhbeVAQc"\r\n
08:06:31 echo: fetch,raw,read set-cookie: sails.sid=s%3AdV8YXvKdMKPcpB6QK1X2_VGykfh_xiMX.U3HgJ8engUfXhHSpYiWuf9EbcyXMUUOG6tFtddyM7iU; Path=/; HttpOnly\r\n
08:06:31 echo: fetch,raw,read \r\n
08:06:31 echo: fetch,raw,read {\n "status": 301\n}
08:06:31 echo: fetch,info > Download from postman-echo.com FAILED: Fetch failed with status 301 ( Set-cookie: "sails.sid=s%3AdV8YXvKdMKPcpB6QK1X2_VGykfh_xiMX.U3HgJ8engUfXhHSpYiWuf9EbcyXMUUOG6tFtddyM7iU; Path=/; HttpOnly")
08:06:31 echo: fetch,debug Download from > http://postman-echo.com/status/301 > to 301 FAILED: Fetch failed with status 301 ( Set-cookie: "sails.sid=s%3AdV8YXvKdMKPcpB6QK1X2_VGykfh_xiMX.U3HgJ8engUfXhHSpYiWuf9EbcyXMUUOG6tFtddyM7iU; Path=/; HttpOnly")
The "fetch,info" line above is the one that carries topics=info, so that's the ONLY message that goes elsewhere (at least in a default setup where "info" is covered by the default memory/etc. log rules). The rest is just shown on your console while you're entering things at the CLI. Now this does have a side effect: it applies to any fetch going on - not just the one at the CLI. That can be helpful to troubleshoot some operational failure, but you may want to turn off the "echo" logging when you're not scripting or debugging a recurring operational issue with fetch.
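If you used the comment="fetch debug" like in the example above, toggling the rule off and on later is easy (a small sketch, nothing more):
/system/logging/disable [find comment="fetch debug"]
# ...and when you want the verbose output again:
/system/logging/enable [find comment="fetch debug"]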
For case B of /tool/fetch, i.e. how to log failures from some scheduled/etc. script in operation: this is trickier, since you do not have a lot of control over the logs going out (i.e. no regex, filters, or anything). And, particular to your case, if you're NOT sending topics=script to the remote logging, then using "/log" etc. in a script may not go anywhere. So my "safeFetch" function example would be BAD in your case. And, in general, it's a bad example for a lot of cases.
To deal with these operational failures, I kinda believe it's better to "let it fail" for all the "hard failures" (DNS, no route, server errors, etc.) from /tool/fetch… BUT then make sure the data returned (i.e. the ->"data" from /tool/fetch) is in the form you expect. If the ->"data" is bad, then also "fail" the script, but :log that as an error first (which should pass through, depending on your active logging rules).
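Something along these lines, as a rough sketch only - the URL and the "expected" keyword check are hypothetical placeholders for whatever your script really downloads and expects:
# let the hard failures (DNS, no route, HTTP error) throw on their own,
# but check the returned data and raise a logged error if it looks wrong
:local result [/tool/fetch url="http://example.com/api/status" output=user as-value]
:local data ($result->"data")
:if ([:len $data] = 0 || [:typeof [:find $data "expected"]] = "nil") do={
    :log error "fetch returned unexpected data: $data"
    :error "unexpected fetch data"
}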
Since you might want to know more about what failed in a /tool/fetch when you only got the "single failure message" in the remote logging system - and since you are limited to /system/logging/action… - you can use yet another target=memory (or some local file logging would work too). This way, when you see the one failure message upstream, you can go to the router and check its local logs for what happened during fetch (and get the same data shown above for "echo").
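For example, something like this keeps a rolling local file with the fetch topic (the file name and line counts here are just placeholders, adjust to taste):
/system/logging/action/add name=fetch-file target=disk disk-file-name=fetch-debug disk-lines-per-file=500 disk-file-count=2
/system/logging/add topics=fetch action=fetch-file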
These 2nd and 3rd "memory" buffers in /system/logging/action do show up in WinBox as a drop-down, so you can separate the "normal" logs from the "script" or "fetch" logs there:

So something like this:
/system/logging/action
add name=fetch target=memory
add name=script target=memory
add memory-lines=20 name=recentonly target=memory
/system logging
add action=script topics=script
add action=echo topics=fetch
add action=fetch topics=fetch
Lastly… since you mention The Dude… you can add a Dude device - or even a simple netwatch "http-get" monitor on the router could work too (which gets a netwatch topic in the logs) - to monitor the web server that fetch is using. This lets you check that the web server itself is up and responding to HTTP, and thus also get a log message about that server's failure (and see it go red on the Dude map) - which would be a pretty good clue as to why you'd see a fetch failure elsewhere in the logs.
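A minimal netwatch sketch, assuming a newer v7 where netwatch has the http-get probe type (on older versions a plain ICMP or tcp-conn check toward that server works too) - the host here is just the example server from above, substitute whatever your fetch script actually talks to:
/tool/netwatch/add type=http-get host=postman-echo.com interval=1m \
    down-script=":log warning \"web server used by fetch script stopped answering HTTP\"" \
    up-script=":log info \"web server used by fetch script is answering HTTP again\""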