Hi guys,
I have not found a way to effectively block traffic to public proxies, which clients can use to bypass the firewall rules!
If anyone has such a solution, please share your experience!
P.S. I want to ask: if I add one firewall rule in the filter section, forward chain, with content=https and another with content=http, can I block only the traffic redirected to proxies?
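For reference, such rules might be sketched like this on RouterOS. This is only an illustration of the idea being asked about, not a recommendation: the content= matcher does a plain substring search on packet payload, the strings chosen here are assumptions about what proxy requests contain, and (as pointed out below) matching on "http" will also catch plenty of normal web traffic.

```routeros
# Sketch only - content= strings are assumptions, and these rules will
# also match ordinary HTTP requests, not just proxy traffic.
/ip firewall filter
add chain=forward protocol=tcp content="GET http" action=drop \
    comment="drop requests carrying a full URL (proxy-style HTTP)"
add chain=forward protocol=tcp content="CONNECT " action=drop \
    comment="drop CONNECT requests used to tunnel HTTPS through a proxy"
```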
Indeed it is far from perfect. It will probably block proxy requests successfully, but it will most probably block usual HTTP requests as well (it probably won’t interfere with direct HTTPS connections, though). It’s quite usual to see a full
GET http://www.somedomain.com/path/to/document.html HTTP/1.x
on direct connections to the source server as well … it’s the only way for the server to distinguish between all those named virtual HTTP servers sharing the same public IP address. (The x in the HTTP version is nowadays usually 1, but with older web browsers it can be 0 as well.)
I’ve never seen “GET http://…” in regular requests. The way to distinguish between virtual hosts is the Host header. And a quick test with nginx (as a regular webserver, not a proxy) shows that “GET http://www.domain.tld/ HTTP/1.0” without a Host header returns the default virtual host (i.e. it ignores the hostname from the request), and the same with HTTP/1.1 (still without a Host header) returns “400 Bad Request”.
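To make the two request forms being discussed concrete, here they are side by side (www.domain.tld and /path are placeholders):

```http
GET http://www.domain.tld/path HTTP/1.0

GET /path HTTP/1.1
Host: www.domain.tld
```

The first form (absolute URL in the request line) is what a client sends to a proxy; the second (relative path plus Host header) is the normal form sent directly to an origin server, and HTTP/1.1 requires the Host header to be present.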
For now, this stops traffic to proxies that do not use HTTPS/SSL. Unfortunately, most of the public ones are over HTTPS! The only solution for now is to collect their IP addresses in address lists.
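An address-list-based block like that might be sketched as follows on RouterOS. The list name is an assumption, and the address shown is just a documentation (TEST-NET) placeholder; the point is that dropping by destination address works regardless of whether the proxy uses HTTPS.

```routeros
# Minimal sketch - "public-proxies" is an assumed list name,
# 203.0.113.10 is a placeholder documentation address.
/ip firewall address-list
add list=public-proxies address=203.0.113.10 comment="example known proxy"
/ip firewall filter
add chain=forward dst-address-list=public-proxies action=drop \
    comment="drop all traffic to known public proxies (works for HTTPS too)"
```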
I don’t follow what happens in the public proxy world, but what I got from Google was all without https, just http. But if you have different sources with https, then it’s bad for you, because you can’t see what’s inside an https connection, that’s the whole point of https. And collecting addresses yourself, good luck. Maybe if there already is some source of proxy addresses, you could use that. But doing it yourself will be a never-ending story.
Yes, I know - I mean HTTPS web proxies! I will look for more information on the internet. For now, I will collect the names and addresses of the most well-known ones!
Thanks again for your help Sob !
It really depends on what exactly you need it for and how persistent your users are. Maybe if you block the most obvious servers, they will give up. The major thing against you is that all they need is just one working server.
Behind a CCR I have a very sensitive network with about 150 clients.
There are several different servers on this network, with important information.
All this is done to prevent any type of virus, worm, etc. from getting into the network.
I already use the Joshaven Potter script to update the Spamhaus, dshield, and malc0de lists.
I also have a lot of rules based on ports in the forward chain. I use AdGuard DNS and redirect all queries to my router.
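The DNS redirection mentioned here might look like this on RouterOS (a sketch only; the in-interface name is an assumption, and this catches only plain DNS on port 53, so the router itself would forward to AdGuard DNS - it does nothing against DNS-over-HTTPS):

```routeros
# Sketch - "bridge-lan" is an assumed interface name. Forces all plain
# DNS (UDP/TCP 53) from clients to the router's own DNS service,
# which in turn uses AdGuard DNS as upstream.
/ip firewall nat
add chain=dstnat in-interface=bridge-lan protocol=udp dst-port=53 \
    action=redirect to-ports=53 comment="force client DNS through router"
add chain=dstnat in-interface=bridge-lan protocol=tcp dst-port=53 \
    action=redirect to-ports=53
```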
By blocking proxies, I will try to reduce the risk of any attempts to circumvent the rules!
Blocking access to proxies doesn’t sound like something that would help much. Unless you have some very strict filtering of all outgoing traffic, any worm will just use either custom ports, or if you block those, then regular https. And you pretty much have to allow that, if those 150 clients should be able to use internet in the most basic sense, which today means access to http(s).
I guess you already have that, but if not, I’d start with segmenting the network and making the sensitive servers as isolated from users as possible. Other than that, it’s mostly non-technical: you need a “long whip”, an “iron fist”, or whatever the fitting English idiom is. Users must know what they can and cannot do, and not dare to break the rules.
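The segmentation idea could be sketched with forward-chain rules like these (the subnets, the allowed port, and the rule order are all assumptions for illustration; the real policy depends on what the servers actually provide):

```routeros
# Illustration only - subnets and the allowed service are assumptions.
# Users (192.168.10.0/24) may reach the server segment (192.168.20.0/24)
# on one permitted service; everything else between the segments is dropped.
/ip firewall filter
add chain=forward src-address=192.168.10.0/24 dst-address=192.168.20.0/24 \
    protocol=tcp dst-port=443 action=accept \
    comment="users may reach servers on 443 only"
add chain=forward src-address=192.168.10.0/24 dst-address=192.168.20.0/24 \
    action=drop comment="block all other user-to-server traffic"
```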