How to prioritize traffic of one host?

Hello,

In my local network there is a server which provides a service on one specific port (15555) for computers both inside and outside my LAN.
How can I prioritize the traffic into and out of this server?

Thank you in advance.

http://forum.mikrotik.com/t/how-i-can-give-priority-on-port-base-in-mikrotik/52552/1 - any help?

I think a firewall filter will do, using the IP address of the server. You can reject every other service and accept only your server’s.

The question is: how do I give higher priority to the traffic of one host without guaranteeing it bandwidth?

You really can’t. Priority only comes into play between queues which have already been given their guaranteed minimum bandwidth.

I’d say do something basic like this -
create two simple queues
The first is the “priority queue” and its target is set to the IP address of the priority host.
Guarantee that host (limit-at=) about 50% of the available bandwidth and set the max-limit to the full bandwidth of the connection. Priority=1
Make a second queue with target=x.x.x.0/24 (your LAN IP range) and guarantee it roughly 45% of the bandwidth, and a max-limit=full bandwidth of the connection.
Set the priority of this queue to 8.

That should do what you want and allow the priority host to have up to 55% of the bandwidth no matter what. You can lower the limit-at value for the default queue if you want to guarantee more bandwidth for the priority host. Just don’t guarantee 100% to the priority host, as it can basically starve the rest of the network.
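A minimal sketch of those two queues, assuming a 50M/5M line, a priority host at 192.168.88.226, and a LAN of 192.168.88.0/24 (substitute your own addresses and rates; the limit-at and max-limit pairs are upload/download):

```
/queue simple
add name="priority queue" target=192.168.88.226/32 limit-at=2500k/25M max-limit=5M/50M priority=1/1
add name=rest target=192.168.88.0/24 limit-at=2250k/22M max-limit=5M/50M priority=8/8
```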

My internet connection is 50/5M and, based on my experience, it is really about 45/4.5.
I created the priority queue for the priority host with limit-at=0.5M/5M and max-limit=4.5M/45M (priority 1).
The rest of the network (192.168.88.0/24 and 192.168.89.0/24) gets the rest of the traffic with priority 8.

/queue simple
add limit-at=512k/5M max-limit=4608k/45M name="priority queue" priority=1/1 target=192.168.88.226/32
add limit-at=4M/40M max-limit=4608k/45M name=rest target=192.168.88.0/23

Will it be a good solution?

I strongly recommend a guaranteed minimum bandwidth for the “rest” queue.
You don’t want the priority host to be able to completely starve out the entire network for its own use.

Give it limit-at=512K/1M

Which one is better in your opinion?

/queue simple
add limit-at=512k/1M max-limit=4710k/46M name="priority queue" priority=1/1 target=192.168.88.226/32
add limit-at=512k/1M max-limit=4710k/46M name=rest target=192.168.88.0/23



/queue simple
add limit-at=2355k/23M max-limit=4710k/46M name="priority queue" priority=1/1 target=192.168.88.226/32
add limit-at=2355k/23M max-limit=4710k/46M name=rest target=192.168.88.0/23

The limit-at on the priority queue actually doesn’t matter very much, just so long as it’s set to be something non-zero so that it can get into the “above minimum” state.

I’d say base it on the smaller minimums, 512K/1M, and then raise the download guarantee of the “rest” queue to 4M.
That’s only about 10% of your bandwidth, but it’s much more usable in today’s world than a single megabit.
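Applied to the queues above, that suggestion would look something like this (targets and max-limits are taken from your examples; only the “rest” download guarantee changes):

```
/queue simple
add limit-at=512k/1M max-limit=4710k/46M name="priority queue" priority=1/1 target=192.168.88.226/32
add limit-at=512k/4M max-limit=4710k/46M name=rest target=192.168.88.0/23
```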

The priority queue is basically going to get all of the bandwidth except up to whatever minimum you reserved for the “rest” queue because the priority queue will either be:
a) below its guaranteed minimum, so it’s going to get service no matter what in this case
b) above its guaranteed minimum:

  • if the “rest” queue is below ITS guarantee (4M), then “rest” will get up to that much, regardless of priority
  • if the “rest” queue is also above ITS guaranteed minimum, then the priority queue gets the first bite at the apple.

Could you please explain to me the above using some examples?

Okay - if the priority host is idle, then the rest of the devices may use up to the max-limit bandwidth.
If the priority host is downloading like there’s no tomorrow, then the rest of the devices may use the “limit-at” bandwidth.
The priority host will receive its max-limit bandwidth at all times, minus whatever amount is guaranteed to the rest queue.

The first priority of queues is to serve the guaranteed minimum (CIR) bandwidth to every queue.
Any queue which is below its guaranteed minimum amount will get serviced before any queues which have already consumed at or above their minimums.
So if your priority host is the only thing on the network, and it’s downloading at 100% of the available bandwidth, then it’s free to do so.
However, if another host starts receiving traffic as well, then the “rest” queue will be at 0 utilization, which is below its guaranteed minimum. Therefore, the rest queue will start pushing away some of the priority host’s consumption, because a guarantee is a guarantee: limit-at is a guarantee. The only way to satisfy it is to take bandwidth away from the priority queue’s 100% utilization.

Then when the “rest” queue reaches the guaranteed minimum of service, both queues’ guarantees will then have been met. Further service will be given based on priorities. The priority host will then have the ability to use 100% of whatever remains after “rest” gets its contract fulfilled. So the priority host will slow down by the “rest” queue’s “limit-at” amount. If “rest” stops using the line again, then “priority” will go to 100% again.

Conversely, suppose the rest hosts are all combining for 100% utilization.
Priority host comes alive and starts downloading as well. At first, the priority queue will get service simply because it is below its guaranteed minimum. The rest hosts will be slowed down to make room for this second queue’s requests for guaranteed bandwidth. The priority doesn’t even come into play at that point.
Then as the priority host’s throughput increases, it will exceed the minimum guarantee, at which point the priority will come into play. Being higher priority, the priority queue will continue to speed up at the expense of the rest queue, which will slow down at the same time, until the rest queue gets down to its guaranteed minimum bandwidth, at which point the priority queue cannot take any more bandwidth away from the rest queue.

Does this make sense now?

It makes sense. Almost clear. Thank you.

But I have one more question: how do I dynamically share the bandwidth among users? E.g. for 5 computers connected to 45M/4M.

Read the Wiki/Docs on PCQ.

Basically, you want to implement a basic upload/download PCQ type which doesn’t specify any limitations on the “subqueues” - just leave it alone so it will just “evenly” divide the bandwidth by the number of streams that it sees. You still use the limit-at / max-limit values on the main queue itself the same way as always. PCQ just “shares it fairly.”
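A sketch of that, using the standard classifiers (leave pcq-rate at its default of 0 so the sub-queues are not individually capped; the overall limit-at/max-limit still goes on the main queue that references these types):

```
/queue type
add name=pcq-download kind=pcq pcq-classifier=dst-address
add name=pcq-upload kind=pcq pcq-classifier=src-address
```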

What does “shares it fairly” mean?

  • shares it just equally ?

or

  • shares it dynamically according to needs, but taking into account the others and the whole connection bandwidth?

Both statements are true - because “equally” is a dynamic concept.
When you set the pcq-classifier, you can configure it to consider each stream to be a sub-queue (specifying address and port number in the hashing function).
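For example, hashing on both address and port makes every connection its own sub-queue (a sketch; the classifier keywords are the stock RouterOS ones, the type names are placeholders):

```
/queue type
add name=pcq-down-per-stream kind=pcq pcq-classifier=dst-address,dst-port
add name=pcq-up-per-stream kind=pcq pcq-classifier=src-address,src-port
```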

What do you think about the rules below? Is this solution functional?

/ip firewall mangle add chain=forward src-address=192.168.88.0/23 action=mark-connection new-connection-mark=users-con
/ip firewall mangle add connection-mark=users-con action=mark-packet new-packet-mark=users chain=forward

/queue type add name=pcq-download kind=pcq pcq-classifier=dst-address
/queue type add name=pcq-upload kind=pcq pcq-classifier=src-address

/queue tree add name=Download parent=ether1 max-limit=50M
/queue tree add parent=Download queue=pcq-download packet-mark=users

/queue tree add name=Upload parent=pppoe-out1 max-limit=5M
/queue tree add parent=Upload queue=pcq-upload packet-mark=users

I think this solution would have a problem with the upload, because by the time a packet goes out the pppoe interface it will have been modified by srcnat, right? So you’re not going to get any “fair queue” behavior with your configuration, because every packet is going to have the same src IP, meaning that there will only be one sub-queue.

You could fix this by making the classifier use dst address on the pppoe interface’s queue - this would “share” bandwidth based on destination host, or use src port number, which would essentially create a sub queue for each outbound connection (regardless of which user made it).

I’d probably go for the second of those options.
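For that second option, the sketch would be a queue type classifying on src-port, referenced from your Upload tree in place of pcq-upload (everything else in the tree stays the same; the name is a placeholder):

```
/queue type
add name=pcq-upload-per-conn kind=pcq pcq-classifier=src-port
```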

Or - probably better, just use a simple queue instead of queue trees. The simple queue will see the traffic before it gets masqueraded by the NAT table, so you can use the src-address as the classifier in the upload queue (as your example does).
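A sketch of the simple-queue variant, reusing the PCQ types from your example (simple queues classify before src-nat, so src-address still distinguishes LAN hosts; the target and rates are taken from your earlier numbers):

```
/queue simple
add name=shared target=192.168.88.0/23 max-limit=4500k/45M queue=pcq-upload/pcq-download
```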

Sorry, I have no idea how to do it.

I believe PCQ queues would work well for what you want to achieve, with priority 1 on the queue for the specific host.
http://wiki.mikrotik.com/wiki/Manual:Queue

How to prioritize the traffic of one host is now almost clear to me. Now I would like to know more about dynamically sharing the bandwidth among users, but I will start a new topic for it.