Is it possible to configure the MikroTik web proxy to use multiple outgoing IP addresses, like in Squid?
Simple requirement:
If a packet comes from src=10.0.1.10, forward it via public IP 1
If a packet comes from src=10.0.2.10, forward it via public IP 2
If a packet comes from src=10.0.3.10, forward it via public IP 3
I'm looking for something like that. In Squid it is:
#TAG: tcp_outgoing_address
Allows you to map requests to different outgoing IP addresses
based on the username or source address of the user making
the request
I need this because in my country I am required to log traffic. For some web pages I log about 100-150 connections per minute. I need to split my users across a couple of IPs; searching for a specific person then becomes possible for me.
What a pity.
I think this option should be added to a new version of ROS.
It would help many people log traffic and identify the guilty party.
Especially in cases where the proxy is used by many users.
Imagine that you have about 1500 users who connect through your proxy to a very popular web site.
How could I point out the guilty party when all I get from the prosecutor is the IP address of my proxy, the IP address of the web site and the time? In the log there are about 100-150 connections to the same web site within the same second.
If it were possible to divide my users among a few public IPs, it would be possible to narrow down the guilty party.
Examples:
If a packet comes from src=172.16.10.0/24, forward it via public IP 1
If a packet comes from src=172.16.20.0/24, forward it via public IP 2
With this setup I know that proxy IP address xxx.xxx.xxx.9 is used by users from network 172.16.10.0/24, and users from network 172.16.20.0/24 can be ignored in the investigation. In this example I have about 50-75 suspects, but when I divide them across two IP addresses I get about 25-35 suspects, or even fewer.
You can do this, but not with the proxy system. You can send specific internal IPs out via specific public IPs, but again, you can't have the proxy involved for this to work.
I started this thread because Squid has this capability. For now, to divide the traffic, I use about 20 src-nat rules like this:
add action=src-nat chain=srcnat comment="net_1 - xxx.xxx.xxx.5" \
    disabled=no out-interface=ether1 src-address-list="NAT_1" to-addresses=xxx.xxx.xxx.5
add action=src-nat chain=srcnat comment="net_2 - xxx.xxx.xxx.6" \
    disabled=no out-interface=ether1 src-address-list="NAT_2" to-addresses=xxx.xxx.xxx.6
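Those rules match on firewall address lists, so NAT_1 and NAT_2 have to be populated separately; a minimal sketch of what that could look like, assuming the example subnets from above (adjust the networks to your own):

/ip firewall address-list
add list=NAT_1 address=172.16.10.0/24 comment="users to go out via xxx.xxx.xxx.5"
add list=NAT_2 address=172.16.20.0/24 comment="users to go out via xxx.xxx.xxx.6"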
But I want to use the MikroTik proxy. Currently in the MT proxy it is only possible to change the source IP address of the whole proxy:
/ip proxy set src-address=xxx.xxx.xxx.5
The only thing missing is ACL-style rules to pick the outgoing address per source network; maybe the MT developers will add this option in the future.
The problem is probably the client connections and the Max. Client Connections option. I set the maximum value of 5000, but after 15 minutes all connections are used up and web sites won't load.
I checked this:
ip proxy connections print count-only where client
4893
How can I solve this problem?
Maybe limit users' connections to port 80 with a rule like this?
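For example, a rough sketch of a per-source connection limit; since clients here reach the proxy on port 8080 rather than port 80, the sketch limits connections to the proxy port, and the 100-connection threshold and the drop action are assumptions to adapt:

/ip firewall filter
add chain=input protocol=tcp dst-port=8080 tcp-flags=syn connection-limit=100,32 \
    action=drop comment="drop new connections to the proxy above 100 per source /32"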
Is it possible to mark (mangle) and queue the traffic coming into my proxy? I don't want the proxy to use up all my bandwidth, especially since I have several business customers with guaranteed bandwidth whose connections go out directly. The gateway and the proxy are on the same machine.
Maybe mark the packets that do not carry the cache-hit DSCP (dscp=!4) and queue those?
Proxy traffic is generated by the router itself, so check the packet flow diagram to see where you have to set up your queues/mangle. You want to limit the outgoing proxy traffic, i.e. the proxy's requests to web servers, not the requests coming in from your users.
4 ;;; traffic from internet use by proxy
chain=output action=mark-connection new-connection-mark=proxy_con
passthrough=yes dst-address-list=networks dscp=!4
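That connection mark on its own does not limit anything yet; it still needs a packet mark and a queue. A rough sketch of one way to finish it for the download direction, where the proxy_down mark name and the 20M cap are placeholders and parent=global assumes ROS v6 (older versions use global-in):

/ip firewall mangle
add chain=prerouting connection-mark=proxy_con action=mark-packet \
    new-packet-mark=proxy_down passthrough=no \
    comment="downloads belonging to proxy-originated connections"
/queue tree
add name=proxy_download parent=global packet-mark=proxy_down max-limit=20M \
    comment="cap the bandwidth the proxy's own downloads may use"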
enabled: yes
src-address: xx.xx.xx.3
port: 8080
parent-proxy: 0.0.0.0
parent-proxy-port: 0
cache-administrator: "admin@admin.ad"
max-cache-size: unlimited
cache-on-disk: yes
max-client-connections: 5000
max-server-connections: 5000
max-fresh-time: 3d
serialize-connections: no
always-from-cache: no
cache-hit-dscp: 4
cache-drive: sata1
What about my users' connections to the proxy: is there any possibility to increase Max. Client Connections above 5000? Maybe if I turn on the serialize-connections and
always-from-cache options, the proxy performance will increase?
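If your RouterOS version accepts it (I have not verified the upper bound, so the numbers below are just an assumption to try), the limits can be raised directly with /ip proxy set:

/ip proxy set max-client-connections=10000 max-server-connections=10000

Then re-check /ip proxy connections print count-only to see whether the count still saturates.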