OK doesn't really mean "correct"; it is more like "NOT WRONG".
I am still not convinced by the default settings (or by yours); a machine that goes ping:
https://www.youtube.com/watch?v=VQPIdZvoV4g
should do so at regular intervals.
The graph would be something *like*:
|____|____|____|____|____|____|____|____|____ ...
the graph of the default settings would be more *like*:
||||||||||____________________________________
with an initial high-frequency burst of pings lasting 1/20 of the interval, followed by nothing for the remaining 19/20.
Your settings would have a similar shape, but they seem "better" to me, as your initial burst is a looooong one covering 2/3 of the interval, leaving only 1/3 of inactivity.
Still, you run the probe every 120s.
If Amm0 is correct (and I believe he is), nothing happens to the netwatch status during the time needed for the whole batch of packets to be sent plus the timeout (80-83 seconds), and then there are 37-40 seconds of nothingness before the next run.
If you had ONLY the lost-packet percentage as threshold (for simplicity), you would be monitoring the interface for 80 seconds, and if instead of 0 lost packets you find more than 95% of the 400 packets lost, i.e. 380 packets, the netwatch probe will come out as "down".
Let us assume that packets are not lost "here and there" but are all lost in the same sequence or block.
Since you set an interval of 200 ms between packets, a "glitch" in the connection lasting 380 x 200 = 76,000 ms, or 76 seconds, will be needed to trigger the down status.
This, more or less, would be your actual "resolution": with these settings you could physically disconnect the cable for one whole minute or slightly more and then reconnect it, and the netwatch ICMP probe should not be able to sense it.
As a matter of fact this is true only if you disconnect the cable EXACTLY at the time the ICMP probe starts, or within 4 seconds of its start; if you instead disconnect it exactly at the end of the run, you can keep it disconnected for up to 40 + 76 = 116 seconds, almost two minutes, without netwatch reaching the threshold.
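Just to make the arithmetic explicit, here is a small back-of-the-envelope sketch in Python (not anything running on the router, and assuming netwatch really behaves as described above: it sends packet-count pings packet-interval apart, waits up to the timeout for the last reply, then does nothing until the next interval):

```python
import math

# Back-of-the-envelope numbers for the settings discussed above.
interval = 120.0          # s, probe interval
packet_count = 400
packet_interval = 0.200   # s between pings (200 ms)
timeout = 3.0             # s, default reply timeout
thr_loss_percent = 95

active = (packet_count - 1) * packet_interval + timeout    # ~82.8 s of probing
blind = interval - active                                  # ~37.2 s of nothing

packets_to_lose = math.ceil(thr_loss_percent * packet_count / 100)   # 380 packets
outage_to_trip = packets_to_lose * packet_interval                   # 76 s of continuous loss

# Worst case: cable pulled right at the end of a run, so the whole blind
# window plus the 76 s of lost pings passes before "down" can fire.
worst_undetected = blind + outage_to_trip

print(f"active window : {active:5.1f} s")            # 82.8
print(f"blind window  : {blind:5.1f} s")             # 37.2
print(f"outage to trip: {outage_to_trip:5.1f} s")    # 76.0
print(f"worst case    : {worst_undetected:5.1f} s")  # 113.2
```

The printed worst case comes out a few seconds lower than the 116 s above only because the code uses the exact 37.2 s blind window instead of the rounded 40 s.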
Now, what would happen if you run (still with only the packet-loss threshold set to 95% and with the default timeout of 3 s) with settings like:
interval=120s (same as you have now)
packet-count=80
packet-interval=1000ms
To reach the threshold you need to lose 95% x 80 = 76 packets, which will take 76 seconds to be sent, so you have more or less the same "resolution" as above, and you have exactly the same ~40 s window where nothing is sensed/happens.
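Running the same back-of-the-envelope numbers for this variant (same assumptions as before):

```python
import math

# Same arithmetic for the 80-packet / 1 s variant.
interval, packet_count, packet_interval, timeout, thr = 120.0, 80, 1.0, 3.0, 95

active = (packet_count - 1) * packet_interval + timeout          # 82.0 s of probing
blind = interval - active                                        # 38.0 s of nothing
outage = math.ceil(thr * packet_count / 100) * packet_interval   # 76.0 s to trip "down"

print(active, blind, outage)   # 82.0 38.0 76.0  -> same resolution, same dead time
```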
And what if you change the settings to:
interval=120s (still same as you have now)
packet-count=115
packet-interval=1000ms
thr-loss-percent=66%
To reach the threshold you need 66% x 115 ≈ 76 lost packets, which will take the same 76 seconds to be sent, but you no longer have the variability between 76 and 116 seconds, because you are actually monitoring during all (or almost all) of the 120 s interval.
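And the same sketch for this third set of values:

```python
import math

# The 115-packet / 66% variant: the run now spans nearly the whole interval.
interval, packet_count, packet_interval, timeout, thr = 120.0, 115, 1.0, 3.0, 66

active = (packet_count - 1) * packet_interval + timeout          # 117.0 s of probing
blind = interval - active                                        # only 3.0 s uncovered
outage = math.ceil(thr * packet_count / 100) * packet_interval   # still 76.0 s to trip

print(active, blind, outage)   # 117.0 3.0 76.0
```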
So, provided that the way I understood the mechanism is correct, it seems to me that:
interval should be as low as possible (with some common sense: the default 10 s seems too little, I would settle for 60 seconds, i.e. one minute)
packet-count should be as low as possible
packet-interval should be as high as possible (1,000 ms, i.e. one second, sounds good since it is the default on common operating systems; in any case no less than the minimum Windows allows, 500 ms, or the floor Linux strongly suggests, 200 ms)
The result of the formula ((packet-count - 1) * packet-interval) + timeout should be as close to 100% of the interval as reasonably possible (the "Ratio" in the spreadsheet); taking into account some slack for the time actually needed to send the pings, staying below 95% or maybe 90% sounds conservative enough (see the sketch below).
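To make the "Ratio" idea concrete, this little sketch (my own helper, nothing built into RouterOS) computes that coverage ratio for the three settings discussed above:

```python
# Coverage ratio ((packet-count - 1) * packet-interval + timeout) / interval
# for the three settings discussed above.
configs = {
    "original (400 x 200 ms)": dict(interval=120, count=400, pkt_int=0.2, timeout=3),
    "variant 1 (80 x 1 s)":    dict(interval=120, count=80,  pkt_int=1.0, timeout=3),
    "variant 2 (115 x 1 s)":   dict(interval=120, count=115, pkt_int=1.0, timeout=3),
}

for name, c in configs.items():
    active = (c["count"] - 1) * c["pkt_int"] + c["timeout"]
    print(f"{name:25s} active = {active:6.1f} s   ratio = {active / c['interval']:.1%}")

# original (400 x 200 ms)   active =   82.8 s   ratio = 69.0%
# variant 1 (80 x 1 s)      active =   82.0 s   ratio = 68.3%
# variant 2 (115 x 1 s)     active =  117.0 s   ratio = 97.5%
```

The first two configurations cover roughly two thirds of the interval, the third one nearly all of it.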