Hi,
I’ve set up MPLS between two routers: one is a CCR1009, the other a CHR (with a PU license, just for the record).
Both are running the latest bugfix release, 6.37.5.
The link between the two routers has 200M of bandwidth and <1 ms latency.
MTU is not an issue on this link (link MTU is 1590, MPLS MTU set to 1508 on both sides, ESXi vswitch MTU set to 7500 on jumbo-enabled ports, etc.).
OSPF is up and running, and LDP runs smoothly too.
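For reference, the MPLS/LDP part of the config is just the standard setup; roughly the following on each side (interface name and addresses are placeholders here, ether1 being the 200M link, and the /mpls interface selector may need adjusting for your version):

# MPLS MTU on the default /mpls interface entry
/mpls interface set [ find default=yes ] mpls-mtu=1508
# LDP, using the router's loopback as LSR ID / transport address
/mpls ldp set enabled=yes lsr-id=10.255.0.1 transport-address=10.255.0.1
/mpls ldp interface add interface=ether1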
Let’s consider the following topology:
A — CHR – 200M link – CCR1009 — B
When I run LDP without explicit nulls, I can fill the 200M link in both directions.
When I run LDP with explicit nulls, I still get 200M from B to A, but the throughput from A to B drops to a few Mbps (fluctuating between 3 and 5 Mbps depending on the number of concurrent sessions).
Obviously, I’ve taken care to use the same explicit-null setting on both routers.
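The only knob I toggle between the two tests is the LDP explicit-null flag, set identically on both routers:

# slow A-to-B case:
/mpls ldp set use-explicit-null=yes
# fast case (back to implicit null, i.e. penultimate-hop popping):
/mpls ldp set use-explicit-null=no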
My interpretation is that the throughput goes down when the CHR has to push the MPLS label (which only happens with explicit nulls; otherwise it just routes the packet without labelling it).
CPU on the CHR is really low (less than 2%), memory is plentiful (90% free), no single core is doing any significant amount of work, the ESXi host NICs are fine too, etc.
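In case someone wants to compare, this is roughly what I look at on the CHR; with use-explicit-null=yes I would expect label 0 (the IPv4 explicit-null label) as the out-label towards the CCR, and no label push at all with it off:

# labels advertised by the LDP neighbour
/mpls remote-bindings print
# what actually gets pushed per prefix
/mpls forwarding-table print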
Have you already seen this problem?
Were you able to fix it, and if so, how?
I did some research on the forum, the internet, and the changelogs, but I couldn’t find anything similar.
Any ideas would be welcome!
Have a nice weekend!