MPLS - how to hide MPLS cloud hops

Hi,
I've made a lab with 5 RouterBOARDs to do some MPLS networking “experiments” and everything is working quite well :slight_smile:

I would like to know how to hide the MPLS cloud's routers from customers.

For example, I have a PC here connected to the first RouterBOARD with MPLS enabled, then a cascade R1 > R2 > R3 > R4 > R5.

Here is some traceroute testing I made:
C:\Documents and Settings\Gustavo>tracert 172.16.1.5

Tracing route to 172.16.1.5 over a maximum of 30 hops

1 <1 ms <1 ms <1 ms 192.168.0.69
2 * * * Request timed out.
3 1 ms * 1 ms 10.2.2.2
4 1 ms 1 ms 1 ms 10.3.3.2
5 1 ms <1 ms <1 ms 172.16.1.5

Trace complete.

C:\Documents and Settings\Gustavo>tracert 172.16.1.5

Tracing route to 172.16.1.5 over a maximum of 30 hops

1 * <1 ms <1 ms 192.168.0.69 R1 (gateway for my test PC)
2 1 ms <1 ms <1 ms 10.1.1.2 R2
3 1 ms * 1 ms 10.2.2.2 R3
4 1 ms 1 ms 1 ms 10.3.3.2 R4
5 1 ms <1 ms <1 ms 172.16.1.5 R5


[admin@R5] > tool traceroute 172.16.1.1 src-address=172.16.1.5
ADDRESS STATUS
1 10.4.4.1 1ms 1ms 1ms
mpls-label=5244
2 10.3.3.1 1ms 1ms 5ms
mpls-label=8170
3 10.2.2.1 1ms 1ms 1ms
mpls-label=9542
4 172.16.1.1 1ms 1ms 1ms

I know some carriers that, with MPLS, hide the MPLS cloud so that only the first router and the last router appear in traceroute.

Mplsguy and others with more MPLS knowledge - do you know how to do that?

thanks!

You may try to adjust the TTL of packets by the ‘internal’ hop count in a mangle rule.

You might also try an MPLS-aware VPN technology, for example VPLS between your host and the last hop in your cloud. Then everything gets routed through the tunnel, which is established along whatever path MPLS dictates.
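For instance, a rough RouterOS sketch of an LDP-signaled VPLS between R1 and R5 could look like the following (the interface names, bridge name, vpls-id and loopback addresses below are just placeholders I picked for illustration, not taken from this thread):

On R1:
/interface vpls add name=vpls-to-r5 remote-peer=172.16.1.5 vpls-id=10:1 disabled=no
/interface bridge add name=customer-bridge
/interface bridge port add bridge=customer-bridge interface=ether1
/interface bridge port add bridge=customer-bridge interface=vpls-to-r5

On R5 (the mirror of the above):
/interface vpls add name=vpls-to-r1 remote-peer=172.16.1.1 vpls-id=10:1 disabled=no

With the customer traffic bridged into the VPLS pseudowire, the core LSRs only switch labels and never show up as IP hops.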

thank you guys for the answers!

But I found what I was looking for... it isn't possible with RouterOS yet (as far as I know).

Here is the explanation:
By default, the TTL field value in the packet header is decremented by 1 for every hop the packet traverses in the LSP, thereby preventing loops. If the TTL field value reaches 0, packets are dropped, and an ICMP error packet might be sent to the originating router.

If normal TTL decrement is disabled, the TTL field of IP packets entering LSPs is decremented by only 1 upon transiting the LSP, making the LSP appear as a one-hop router to diagnostic tools such as traceroute. This is done by the ingress router, which pushes a label onto IP packets with the TTL field in the label initialized to 255. The label's TTL field value is decremented by 1 for every hop the MPLS packet traverses in the LSP. On the penultimate hop of the LSP, the router pops the label but does not write the label's TTL field value to the IP packet's TTL field. Instead, when the IP packet reaches the egress router, the IP packet's TTL field value is decremented by 1.

When you use traceroute to diagnose problems with an LSP, traceroute sees the ingress router, although the egress router performs the TTL decrement. Note that this assumes that traceroute is initiated outside of the LSP. The behavior of traceroute is different if it is initiated from the ingress router of the LSP. In this case, the egress router would be the first router to respond to traceroute.

You can disable normal TTL decrementing in an LSP so that the TTL field value does not reach 0 before the packet reaches its destination, thus preventing the packet from being dropped. You can also disable normal TTL decrementing to make the MPLS cloud appear as a single hop, thereby hiding the network topology.
There are two ways to disable TTL decrementing:

  • On the ingress of the LSP, if you include the no-decrement-ttl statement at the [edit protocols mpls label-switched-path lsp-path-name] hierarchy level, the ingress router negotiates with all downstream routers using a proprietary RSVP object to ensure all routers are in agreement. If negotiation succeeds, the whole LSP behaves as one hop to transit IP traffic.

[edit protocols mpls label-switched-path lsp-path-name]

no-decrement-ttl;


Note that the RSVP object is proprietary to the JUNOS software and might not work with other software. This potential incompatibility only applies to RSVP-signaled LSPs, not to LDP-signaled LSPs. When you include the no-decrement-ttl statement, TTL hiding can be enforced on a per-LSP basis.

  • On the router, you can include the no-propagate-ttl statement at the [edit protocols mpls] hierarchy level. This statement applies to all LSPs, regardless of whether they are RSVP-signaled or LDP-signaled. Once set, all future LSPs traversing through this router behave as a single hop to IP packets. LSPs established before you configure this statement are not affected.

[edit protocols mpls]

no-propagate-ttl;


If you include the no-propagate-ttl statement, make sure all routers are configured consistently within an MPLS domain; failing to do so might cause the IP packet TTL to increase while in transit within LSPs. This can happen, for example, when the ingress router has no-propagate-ttl configured but the penultimate router does not, so the penultimate router writes the MPLS TTL value (which starts from the ingress router as 255) into the IP packet.

The operation of the no-propagate-ttl statement is more interoperable with other vendors’ equipment. However, you must ensure all routers are configured identically.

So what's the problem? Just increase the TTL at the entrance to your cloud =) and then every hop will decrease it, so the net decrease ends up being 1 =)
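For example, in a linear lab like the one above (three LSRs hidden between ingress and egress), a mangle rule on the ingress router along these lines might do it - a sketch only, assuming a RouterOS version that has the change-ttl action and the ttl matcher, and with ether1 standing in for the customer-facing interface:

/ip firewall mangle add chain=prerouting in-interface=ether1 ttl=greater-than:1 action=change-ttl new-ttl=increment:3 comment="compensate for 3 hidden core hops"

The ttl=greater-than:1 match keeps the first-hop probe expiring at the ingress router itself, while probes with a higher TTL get the extra 3 and sail past the hidden LSRs. The increment has to match the number of hops you want to hide, and it raises the TTL of all transit packets coming in on that interface, so be careful about routing loops.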

thanks chupaka, going to try that.

Worked!

gustkiller, you are right - currently there is no way to achieve this with RouterOS MPLS. If you feel this feature is important for you, submit a feature request.

As to incrementing the TTL before entering your MPLS cloud - always be very careful when increasing TTL so that a routing loop (even a transient one) does not take your network down. Additionally, this will only work properly if you know exactly how many hops your LSP consists of - when your “cloud” is not “linear” but has redundant paths, this may not work as expected (you will either start seeing hops from the MPLS cloud or hiding hops from the non-MPLS path when the number of hops in the LSP changes).

thanks Mplsguy,

You're right, I don't really need this feature; the problem is that my customers are likely to complain about ICMP errors in traceroutes like this:

Tracing route to 172.16.1.5 over a maximum of 30 hops

1 <1 ms <1 ms <1 ms 192.168.0.69 mpls ingress edge lsr
2 * * * Request timed out. lsr2
3 1 ms 1 ms 1 ms 10.2.2.2 lsr3
4 1 ms 1 ms 1 ms 10.3.3.2 lsr4
5 1 ms <1 ms <1 ms 172.16.1.5 mpls egress edge lsr

Do you know any way to fix that ICMP “problem”? That was an ICMP traceroute from a computer connected to the edge LSR.

gustkiller, I do not fully understand your problem. Are you saying that you get inconsistent traceroute results? If you are tracerouting from a device connected to the ingress LSR (and this device is not connected to an interface forming the MPLS cloud), like this:

device – R1 – R2 – R3 – R4 – R5

I cannot think of a reason why you should not get a response from R2, provided that:

  • everything is configured properly (no confusion with IP addresses and networks)
  • all label bindings are distributed.

It would help if you gave a detailed description of your setup (with IP addresses and networks) and the label bindings (/mpls forwarding-table print, /mpls local-bindings print, /mpls remote-bindings print) at the time you have traceroute issues.

Hi mplsguy! thanks for your help!

I made it work... and to help and push everyone to try MPLS on MikroTik, here is my working setup! :slight_smile:

Test Network configuration


EDGE LSR (R1): eth1 - 192.168.0.69/24 (customer network gateway) (directly connected to a laptop with IP 192.168.0.186)
eth5 - directly connected to R2 with network 10.1.1.0/30 and IP 10.1.1.1/30
OSPF distributing only connected routes and network 10.1.1.0/30
bridge loopback interface IP 172.16.1.1
LDP enabled on interface eth5

LSR (R2): eth5 - directly connected to R1 with network 10.1.1.0/30 and IP 10.1.1.2/30
eth4 - directly connected to R3 with network 10.2.2.0/30 and IP 10.2.2.1/30
OSPF distributing only connected routes and networks 10.1.1.0/30 and 10.2.2.0/30
bridge loopback interface IP 172.16.1.2
LDP enabled on interfaces eth4, eth5


LSR (R3): eth5 - directly connected to R2 with network 10.2.2.0/30 and IP 10.2.2.2/30
eth4 - directly connected to R4 with network 10.3.3.0/30 and IP 10.3.3.1/30
OSPF distributing only connected routes and networks 10.2.2.0/30 and 10.3.3.0/30
bridge loopback interface IP 172.16.1.3
LDP enabled on interfaces eth4, eth5

EGRESS LSR (R4): eth5 - directly connected to R3 with network 10.3.3.0/30 and IP 10.3.3.2/30
eth4 - directly connected to a test customer PC with network 10.4.4.0/30 and IP 10.4.4.1/30
OSPF distributing only connected routes and network 10.3.3.0/30
bridge loopback interface IP 172.16.1.4
LDP enabled on interface eth5

Customer PC (RB450) with IP 10.4.4.2
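For anyone who wants to reproduce this, the R1 part of that setup looks roughly like the commands below (a sketch only - I am assuming the default etherN interface names, the default OSPF instance with the backbone area, and a bridge used purely as a loopback):

/interface bridge add name=loopback
/ip address add address=192.168.0.69/24 interface=ether1
/ip address add address=10.1.1.1/30 interface=ether5
/ip address add address=172.16.1.1/32 interface=loopback
/routing ospf instance set default redistribute-connected=as-type-1
/routing ospf network add network=10.1.1.0/30 area=backbone
/mpls ldp set enabled=yes lsr-id=172.16.1.1 transport-address=172.16.1.1
/mpls ldp interface add interface=ether5

The other routers follow the same pattern with their own addresses, with LDP enabled on both core-facing interfaces on R2 and R3.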


Here is a working traceroute :slight_smile:

[admin@customer cliente pc] > tool traceroute 192.168.0.186
ADDRESS STATUS
1 10.4.4.1 1ms 1ms 1ms
2 10.3.3.1 1ms 1ms 1ms
mpls-label=19
3 10.2.2.1 1ms 1ms 1ms
mpls-label=19
4 10.1.1.1 1ms 1ms 1ms
5 192.168.0.186 1ms 1ms 1ms

[admin@customer cliente pc] >

And a Windows PC on the other edge:
Tracing route to 10.4.4.2 over a maximum of 30 hops

1 <1 ms <1 ms <1 ms 192.168.0.69
2 1 ms <1 ms <1 ms 10.1.1.2
3 1 ms <1 ms <1 ms 10.2.2.2
4 1 ms <1 ms <1 ms 10.3.3.2
5 1 ms <1 ms <1 ms 10.4.4.2

Trace complete.
Now going to play around with mangling traffic through TE tunnels (VoIP).