Load Balancing T1s/Traffic

I have 3 T1s (soon to be 4) that I want to load balance traffic across equally. ACC does a good job on their end and balances the packets equally across the 3 T1s coming my way, so I can download at 4.5 Mbps on a single connection. Right now I use 3 gateways listed in a single route, but that only balances per session and does not allow a single upload to exceed 1.5 Mbps.

So I am trying to make route marking work. This is what I have done so far.

/ ip firewall mangle
add chain=prerouting in-interface=local action=mark-routing new-routing-mark=cyclades1 passthrough=yes comment="" disabled=no
add chain=prerouting in-interface=local random=50 action=mark-routing new-routing-mark=cyclades2 passthrough=yes comment="" disabled=no
add chain=prerouting in-interface=local random=33 action=mark-routing new-routing-mark=cyclades3 passthrough=yes comment="" disabled=no
add chain=output action=mark-routing new-routing-mark=cyclades1 passthrough=yes comment="" disabled=no
add chain=output random=50 action=mark-routing new-routing-mark=cyclades2 passthrough=yes comment="" disabled=no
add chain=output random=33 action=mark-routing new-routing-mark=cyclades3 passthrough=yes comment="" disabled=no
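To sanity-check whether the split really works out even, the per-rule counters can be reset and then watched while traffic flows (I'm assuming print stats behaves the same on this RouterOS version):

```
/ ip firewall mangle reset-counters-all
/ ip firewall mangle print stats
```

If the marking is balanced, the packet counters on the three prerouting rules should grow at roughly similar rates.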

In theory every packet should end up marked "cyclades1", "cyclades2", or "cyclades3" roughly a third of the time: the first rule marks everything cyclades1, the second re-marks 50% as cyclades2, and the third re-marks 33% as cyclades3, which works out to about 33.5% / 33.5% / 33%. Right?

/ ip route
add dst-address=10.0.0.0/16 pref-src=0.0.0.0 gateway=12.4.3.3 distance=1 scope=255 target-scope=10 comment="" disabled=no
add dst-address=0.0.0.0/0 gateway=12.12.18.157 distance=1 scope=255 target-scope=10 routing-mark=cyclades1 comment="" disabled=no
add dst-address=0.0.0.0/0 gateway=12.12.18.161 distance=1 scope=255 target-scope=10 routing-mark=cyclades2 comment="" disabled=no
add dst-address=0.0.0.0/0 gateway=12.12.18.165 distance=1 scope=255 target-scope=10 routing-mark=cyclades3 comment="" disabled=no

Now here I should be routing out based on the routing marks, which should balance equally across all T1s? Every time I enable all 3 routes I can no longer ping anything on the Internet. Why doesn't this work? What do I have wrong?

What would be really nice is if, in a route with multiple gateways, Mikrotik had an option to choose between "per packet" and "per session" load balancing. Then I would not have to mess with this. That's how a Cisco router works, as I understand it.

Thanks.

Matthew

I must be the only one out there still using T1 circuits or something. On a Cisco it's as simple as adding "ip load-sharing per-packet" to the T1 config to balance equally across T1s.

Matthew

Actually, I would guess that it has more to do with PPLB being a major can of worms that no one (myself included) wants to tackle.

It often does not work as expected, and makes troubleshooting difficult. The potential complications and problems are myriad, and can become immense time-sinks. Even in Cisco-land, it is rarely done (at least on interfaces of this sort), due to the associated issues.

The only way of bonding T1s such that a single flow makes use of all of them, and that actually gives consistent, desired results, is IMA (which is, obviously, impossible with MT at this point).

Even with ML-PPP (which may be coming in 2.10, if I recall correctly) it is better to keep individual flows on a single link. Additionally, most of the options in the Ethernet-esque bonding drivers don’t attempt PPLB, and those that do are invariably the more troublesome ones.

Attempting to do it with the policy routing system is just asking for trouble, in my opinion.

If you insist, though;

I would start by determining how you expect the in-bound PPLB to work, i.e. how will your provider be splitting the load across your links? Work with them to determine how you should proceed.

Also be sure you are not attempting to do any sort of connection tracking or NAT on the box doing the balancing; those are sure to scuttle any PPLB plan.
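On a Mikrotik, that means shutting the connection tracker off outright (2.9-era syntax, from memory; note that this also disables NAT and any connection-state firewall rules on the box):

```
/ ip firewall connection tracking set enabled=no
```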

Good luck, you will need it,
–Eric

Actually, my understanding is that load-sharing per-packet is less CPU intensive than load-sharing per-connection. Per-packet does not need to keep track of connections at all; it just uses a simple counter to spit the packets out alternate ports.

Years back I mentioned to an AT&T tech setting up a circuit that I had heard (on the Mikrotik mailing list, actually) that CEF or load-sharing per-packet could cause problems. He said they set up routers all over the world that way with no problems. Actually, the only area where I have seen it cause problems is VoIP, and in that case the jitter buffer simply needs to be turned up to deal with occasional out-of-order packets.

BTW, with ML-PPP I do not think it's possible to keep a single flow on one link, since each packet is split up between the links.

Matthew

It's probably working in most cases because there is a Cisco on both ends and they just deal with it, right? If you had a Mikrotik on both ends you could use bonding to do the same, I believe… not 100% sure though.

Sam

If you create EoIP tunnels which travel over each T1, and bond those using the rr scheduler, then yes, it should work. Since T1s usually have large MTUs, you should be able to avoid fragmentation and its associated performance hit. I still wouldn't want to deal with it, though.
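A rough sketch of what that would look like on one end (the interface names, tunnel IDs, and far-end addresses are placeholders; the same must be mirrored on the other router with matching tunnel-ids, and I'm assuming your RouterOS build includes the bonding package):

```
/ interface eoip add name=eoip-t1-a remote-address=X.X.X.1 tunnel-id=1
/ interface eoip add name=eoip-t1-b remote-address=X.X.X.2 tunnel-id=2
/ interface bonding add name=bond-t1 slaves=eoip-t1-a,eoip-t1-b mode=balance-rr
```

balance-rr is the round-robin mode, i.e. the only one that actually spreads a single flow's packets across both tunnels, which is exactly why it is also the mode most prone to out-of-order delivery.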

Which would be why you should ensure connection tracking is off, along with anything that depends on it.

It isn’t so much a matter of CPU load (especially on Cisco, where they can offload some of it to an ASIC), as long as you use a decent box. You do want to make sure it has plenty of headroom, though, so that a packet heading out one interface doesn’t hang around for 3ms, while one going out another interface only takes 1ms.

Most of the issues with PPLB have to do with out-of-order packet delivery; if you can find a way to ensure that does not happen, it can work fine.

My understanding of ML-PPP may be a bit dated (very large MTU frame-relay), but if I recall correctly, the packets are encapsulated in their entirety within a single PPP frame. Multiple packets within a flow can certainly be spread over multiple links, though.

However, with IMA, a single packet is almost certainly spread across multiple interfaces, due to ATM's tiny 53-byte cells (48 bytes of payload each).

–Eric

This is correct (sort of). Mikrotik does not (at this time) have support for per-packet load balancing in a "standard" way. MLPPP is going to be your best bet, but it is not here yet. With MT on both ends, you can use the bonder and get the results you are looking for.

Years back I mentioned to an AT&T tech setting up a circuit that I had heard (on the Mikrotik mailing list, actually) that CEF or load-sharing per-packet could cause problems. He said they set up routers all over the world that way with no problems. Actually, the only area where I have seen it cause problems is VoIP, and in that case the jitter buffer simply needs to be turned up to deal with occasional out-of-order packets.

CEF is not a solution for all. It (like many other "workaround" type solutions) is not perfect and can cause issues. Having said that, it is better than nothing. What you need, though, is a protocol that is standardized, so that you can get the benefit of the full pipe both up and down. The best way to do that TODAY is to put in a Cisco with enough T1 ports and let the Cisco handle the bonding. Alternatively, you can put in an ImageStream (better choice, IMO) and let that talk to the Cisco on the other end. The only other choice that I can see would be to put an MT at both ends of the T1 circuits and use the bonder. That last one is not likely to actually be a choice, but there ya' go.

BTW, with ML-PPP I do not think it's possible to keep a single flow on one link, since each packet is split up between the links.

MLPPP will use the individual circuits as though they were a single circuit. There is a bit of overhead for this, but MLPPP is a standard protocol, and you'd be able to use it from a Mikrotik talking to a Cisco.

Not the answer you were WANTING, but unless I am missing something, this is the only answer you will get.

Check this…

http://forum.mikrotik.com/viewtopic.php?t=4300&start=0&postdays=0&postorder=asc&highlight=sbe+t1

Just an experiment I have been playing with…

Things that make you go HMMMMMMM…

Craig