Cloud Router Switch Uplink

I want to use the CRS125-24G as a switch with an uplink port that goes to a PPPoE server on port one. So all ports can talk to port 1 and port 1 can talk to all ports. I do not want to allow any packets to pass between say ports 2 and 3 etc. What is the easiest way to do that?

Probably create a bridge interface. Set ether1’s master port to “none”, ether2’s master port to “none” and then every other port’s master port set to “ether2”. Then put ether2 in the bridge port interface list.
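In RouterOS commands that suggestion would look roughly like this (a sketch only; port names and the 24-port layout are assumptions, and I've added ether1 to the bridge as well so the uplink can actually reach the switch group):

/interface ethernet
set ether1 master-port=none
set ether2 master-port=none
set ether3 master-port=ether2
# ... repeat for ether4 through ether24 ...
/interface bridge
add name=bridge1
/interface bridge port
add bridge=bridge1 interface=ether1
add bridge=bridge1 interface=ether2

Note that this puts ether2 through ether24 into one hardware switch group, so those ports can still reach each other at wire speed, which is not what the original question asked for.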

I think that's backwards… Personally I'd set ALL of the masters to NONE… create a bridge and place them all into it… then set up bridge rules for each traffic flow you want to allow… Setting an ether's master port puts it into a switch group… don't think you really want to do that here.

e.g.

  1. Allow all out-interface = ether1…
  2. Allow for each in=ether1 to out=etherN
  3. Drop everything else.
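As bridge filter rules, that ruleset might look something like this (a sketch, assuming ether1 is the uplink and ether2–ether24 are the access ports):

/interface bridge filter
add chain=forward action=accept out-interface=ether1
add chain=forward action=accept in-interface=ether1 out-interface=ether2
add chain=forward action=accept in-interface=ether1 out-interface=ether3
# ... one accept rule per remaining port ...
add chain=forward action=drop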

-Eric

I put all interfaces in the same bridge group, then did this.

/interface bridge filter
add action=drop chain=forward in-interface=!ether1 out-interface=!ether1

Seems to work.

Uh… didn’t even think about doing it that way… much simpler ruleset… Nice job!

Beautiful, yes … I misunderstood you.

I'd suspect this kills performance on the CRS, because as soon as you use the interfaces as non-switch ports, everything is done in software…

I suspect you're right, but if I still get close to 1 Gbps out of the uplink port I am not sure I care. I am sure there are better ways, but this works and gives the throughput I need.

Until they implement more of the CRS feature set I don’t know another way to do it…

I highly doubt that you will get anything even near 1 Gbps. If the CRS design was done like all other RBs with an integrated switch chip, then there is a single 1G link from the switch chip to the CPU. So if you add all ports, including the uplink, to a software bridge, you're trying to push the traffic of 24 ports through one 1G link. Even the traffic between the ports.

Reached 300 Mbps on a PPPoE connection going through it. That cap was caused by the PPPoE client router (RB2011) hitting 100 percent CPU, not the cloud router acting as a switch. The cloud router was at about 50 percent, but I think part of that was me being logged into it via WinBox. I will need to find a different MikroTik box and do raw IP to see if I can get 1 Gbps flowing through the uplink, but I bet it comes close. Wonder if turning connection tracking off would help. Mostly only seeing PPPoE packets, so I am not sure what it would change.
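For anyone who wants to test that: if I remember the v6 syntax right, connection tracking is switched off with the command below. A box acting as a pure switch doesn't need it, but any firewall/NAT features that depend on it stop working.

/ip firewall connection tracking
set enabled=no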

Late to the discussion, but rather than a filter on the bridge, I’d put all the ports in the bridge with a horizon=2, with the uplink having a horizon=1. All hosts can communicate to the upstream, but not to the side.
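Something like this, if I have the syntax right (a sketch, assuming ether1 is the uplink):

/interface bridge
add name=bridge1
/interface bridge port
add bridge=bridge1 interface=ether1 horizon=1
add bridge=bridge1 interface=ether2 horizon=2
add bridge=bridge1 interface=ether3 horizon=2
# ... ether4 through ether24 likewise with horizon=2 ...

The bridge never forwards frames between ports sharing the same horizon value, so the horizon=2 ports can only talk to the horizon=1 uplink, with no filter rules needed.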

According to the block diagram, the CRS has all 24 ports + SFP on a single switch chip. I seriously doubt you’ll get even remotely close to a full Gig of throughput in bridge mode.

The only real reason for the SFP is to use it as an uplink port, which makes me wonder why it wouldn't have its own GigE lane into the CPU.

Better yet (and I say this without ever seeing the CRS for myself), why wouldn’t MT provide port isolation? I’m also guessing that VLAN management on the switch chip hasn’t improved either (ranges over enumeration).

As a whole, the CRS will be hard to beat in terms of price/performance, but as an aggregation device, it could have been better.

Hopefully, MT will release a 1G-[24|48][G|S|F]-2S+ device. That’s a copper management port, 24 or 48 access ports (copper, sfp, or built-in fiber), and two 10G uplinks.

In terms of optics, it would also be awesome if MT would give us a 1.25G SFP (or built-in optics) that could autosense and work with 100 Mbit/s on the other side.

I was talking about the effect bridging all ports will have. If you take a look at the picture troy linked, you will see what I’m talking about. Bridging all ports in ROS, will force them all through the 1G link to the CPU.

I see exactly what you are saying. When bridging all ports, total traffic likely cannot exceed 1 Gbps simplex for the router. It likely depends on how the traffic flows through the ports. Hopefully MikroTik addresses this in the future with more SwOS features?

Does the Mikrotik 6.9 release solve any of this?

We’ll have to let the boys from MT chime in on this one.

At a glance though, it would appear that the problem isn’t so much with the software, as it is with the hardware. As a Layer2 switch, the CRS should perform just as well, if not better, than any other $200 switch.

If the switch chip supports it, MT needs to implement port isolation at the switch level. This would then make it a decent device for Layer2 aggregation. I don’t think there’s anything MT can do with the current hardware to improve Layer3 performance.

With v6.11, do we have any port isolation features on the CRS?

What’s new in 6.12

*) many fixes for CRS managed switch functionality -
particularly improved VLAN support, port isolation, defaults;

So how does one set up port isolation? I'm not seeing it on the manual page.

http://wiki.mikrotik.com/wiki/Manual:Switch_Chip_Features
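On chips covered by that manual page, isolation can be approximated with switch ACL rules that force each access port's traffic out the uplink only. A rough sketch, which I have not verified on the CRS itself (the switch name and port roles are assumptions):

/interface ethernet switch rule
add switch=switch1 ports=ether2 new-dst-ports=ether1
add switch=switch1 ports=ether3 new-dst-ports=ether1
# ... one rule per access port ...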

Please, could you give some details about the setup of your RB2011? Number of pppoe conns, queues (simple/tree), firewall rules, overclock…
I'm using an RB2011 as a PPPoE server for about 100 clients; throughput is ~35 Mbps, 90% of PPPoE connections are limited by a queue tree with PCQ, with 6 filter rules, 6 NAT rules and 37 mangle rules (not written very well), but CPU load is about 90% (cpu-frequency is also raised to 650 MHz).

Or am I wrong and your RB2011 is a simple PPPoE client and the real PPPoE server is something else?