0 packets lost is normal. The previous state resulted in effective ~1% packet loss on every interface except ether1, for some reason, even though most of the ports weren't part of the offending bridge.
Anyway, I managed to track this down.
For reference, here's some "normal" for you; this btest is running over 5 hops, and the physical span is about 30 km of fiber. The "0" didn't flinch for a second:
> /tool bandwidth-test [redacted] direction=both protocol=udp local-tx-speed=17.5G remote-tx-speed=17.5G
status: running
duration: 20s
tx-current: 17.5Gbps
tx-10-second-average: 17.5Gbps
tx-total-average: 17.5Gbps
rx-current: 17.5Gbps
rx-10-second-average: 17.5Gbps
rx-total-average: 17.5Gbps
lost-packets: 0
random-data: no
direction: both
tx-size: 9000
rx-size: 9000
connection-count: 20
local-cpu-load: 61%
remote-cpu-load: 59%
The cause of that previous ABSOLUTELY ABNORMAL behavior was the use of the /interface bridge vlan method of bridging VLANs. After switching to the "legacy" /interface vlan + /interface bridge port method, everything functions as expected, albeit with more cumbersome management given the number of VLANs that need to be bridged on this router. I won't even waste my time reporting this bug, but I'll leave this post here to potentially save some poor soul the time otherwise wasted on debugging this, should they fall into the same trap.
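For anyone unsure which two configurations I'm comparing, here's a rough sketch of both forms; the interface names and VLAN ID are made up for illustration, and your tagged/untagged layout will obviously differ. The first is the VLAN-aware-bridge method that caused the loss for me; the second is the per-VLAN workaround:

```
# Method 1: single VLAN-aware bridge (/interface bridge vlan) -- this is
# what triggered the ~1% loss on my box:
/interface bridge add name=bridge1 vlan-filtering=yes
/interface bridge port add bridge=bridge1 interface=ether2
/interface bridge port add bridge=bridge1 interface=ether3
/interface bridge vlan add bridge=bridge1 vlan-ids=100 tagged=ether2,ether3

# Method 2: "legacy" style (/interface vlan + /interface bridge port) --
# one VLAN interface per port, one bridge per VLAN; cumbersome to manage
# with many VLANs, but behaving correctly here:
/interface vlan add name=vlan100-e2 interface=ether2 vlan-id=100
/interface vlan add name=vlan100-e3 interface=ether3 vlan-id=100
/interface bridge add name=br-vlan100
/interface bridge port add bridge=br-vlan100 interface=vlan100-e2
/interface bridge port add bridge=br-vlan100 interface=vlan100-e3
```

Note that with the legacy method you repeat the /interface vlan and bridge-port blocks for every VLAN, which is exactly the management overhead I mentioned above.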