Hi, problem with my RB3011

Hello, I recently installed a RouterBOARD RB3011 and I have a problem: when the traffic is high (but still below 100 Mbit/s) it starts to lose ICMP requests to the other NanoStations and MikroTik devices on the same switch. My config is as follows.

I have a DHCP server which gives IPs to the computers that are connected to a NanoStation in AP mode, and they all take their IP from the RouterBOARD. The RouterBOARD is connected to a TP-Link switch; all the DHCP traffic enters through port 1 of the router, and the other ports go out to other links. In the switch I have the ports divided by VLANs to avoid broadcast. The thing is that when the traffic reaches something like 20,000 packets per second there are cuts, and the RouterBOARD itself fails to ping the switch and the equipment that is in the tower. What do you think could be causing this with the current configuration that I have?

P.S.: everything is set up with static routing, without NAT or masquerade, just the DHCP relay.

You mention VLANs but it seems they exist outside the 'Tik, right? So the 'Tik is a pure router, and there is nothing to save by offloading some bridge functionality to the switch chips.

Even though it has 10+1 gigabit-Ethernet ports, the Mikrotik is a software router, and as you use a lot of routing marks, I assume a significant share of the traffic cannot be handled with reduced CPU effort using fasttracking, which means that your CPU cannot handle more than 100 Mbit/s of this complex packet handling. Use /tool profile to check how busy your CPU cores are when the machine starts dropping packets.
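For example (nothing RB3011-specific here, just the stock commands), running these while the drops are happening should show whether it really is the CPU:

# shows per-process CPU usage (networking, firewall, routing, ...) live
/tool profile
# shows per-core load, so you can see whether a single core is pinned at 100 %
/system resource cpu print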

So if you can identify the type of traffic which occupies the most bandwidth and use fasttracking to handle that traffic, it may help a bit or a lot, depending on the percentage of the total traffic which could be fasttracked.
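For illustration, the usual fasttrack rule pair looks like the one below (same idea as in the RouterOS default firewall); keep in mind that fasttracked packets bypass mangle, so this can only cover connections that do not need your routing marks:

/ip firewall filter
add chain=forward action=fasttrack-connection connection-state=established,related comment="fasttrack traffic that needs no mangle"
add chain=forward action=accept connection-state=established,related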

The other thing is that your mark-routing rules are organized in a terribly inefficient way: every single packet has to be matched against all the million-and-six rules, because you have set passthrough=yes just to be able to mark a few of them at the end of the chain.
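As a side illustration (list A and mark a are just the placeholders used in the example further down), a rule that is the only one a given packet needs can end that packet's walk through the chain with passthrough=no:

/ip firewall mangle
add chain=prerouting src-address-list=A action=mark-routing new-routing-mark=a passthrough=no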

As the 'Tik is a software router, you have to think about the firewall as an algorithm (a program), because that’s what it actually is, and optimise it using the same methods programmers use to make searches efficient. RouterOS won’t do that for you.

Example: you have to map eight src-address-lists (A to H) to eight routing-marks (a to h), and then add a packet mark to some packets. The way you do it is linear:

set routing-mark a if src-address-list=A
...
set routing-mark h if src-address-list=H
set packet-mark x if packet matches some other condition orthogonal to those above

So each packet has to pass through all those 9 rules regardless of whether it actually matches already at the first one or only at the last one.
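In real mangle syntax that linear scheme is essentially this (A..H, a..h and the last condition are placeholders):

/ip firewall mangle
add chain=prerouting src-address-list=A action=mark-routing new-routing-mark=a passthrough=yes
... the same for B..G ...
add chain=prerouting src-address-list=H action=mark-routing new-routing-mark=h passthrough=yes
add chain=prerouting <your other condition> action=mark-packet new-packet-mark=x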

A faster way would be a binary tree:

if src-address-list=ABCD  { <--A:1--H:1--
    if src-address-list=AB { <--A:2--
        if src-address-list=A {set routing-mark a} <--A:3--
        else {set routing-mark b}
    }
    if src-address-list=CD {
        if src-address-list=C {set routing-mark c}
        else {set routing-mark d}
    }
}
if src-address-list=EFGH { <--H:2--
    if src-address-list=EF { <--H:3--
        if src-address-list=E {set routing-mark e}
        else {set routing-mark f}
    }
    if src-address-list=GH { <--H:4--
        if src-address-list=G {set routing-mark g} <--H:5--
        else {set routing-mark h} <--H:6--
    }
}
if (packet matches some other condition orthogonal to those above) {set packet-mark to x} <--A:4--H:7--

So you can see that in the best case (when the source address of the packet matches src-address-list=A), the packet only had to be matched against 4 rules; even in the worst case (when the source address of the packet matches src-address-list=H), the packet still had to be matched against only 7 rules instead of 9 in the linear case. So on average this step becomes about two times faster compared to the linear processing, and that’s only for 8 mark-routing rules; with your 25, the relative improvement will be even better (something like 6 rules matched on average instead of 25).
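One possible way (a sketch, not a drop-in config) to express that tree in actual mangle syntax is with jumps to custom chains. ABCD, AB, ... would be aggregate address lists you create as unions of the member lists, and since mangle has no “else”, the complement matches (!A, !C, ...) stand in for it:

/ip firewall mangle
add chain=prerouting src-address-list=ABCD action=jump jump-target=mark-abcd
add chain=prerouting src-address-list=EFGH action=jump jump-target=mark-efgh
add chain=prerouting <your other condition> action=mark-packet new-packet-mark=x
add chain=mark-abcd src-address-list=AB action=jump jump-target=mark-ab
add chain=mark-abcd src-address-list=CD action=jump jump-target=mark-cd
add chain=mark-ab src-address-list=A action=mark-routing new-routing-mark=a
add chain=mark-ab src-address-list=!A action=mark-routing new-routing-mark=b
add chain=mark-cd src-address-list=C action=mark-routing new-routing-mark=c
add chain=mark-cd src-address-list=!C action=mark-routing new-routing-mark=d
add chain=mark-efgh src-address-list=EF action=jump jump-target=mark-ef
add chain=mark-efgh src-address-list=GH action=jump jump-target=mark-gh
add chain=mark-ef src-address-list=E action=mark-routing new-routing-mark=e
add chain=mark-ef src-address-list=!E action=mark-routing new-routing-mark=f
add chain=mark-gh src-address-list=G action=mark-routing new-routing-mark=g
add chain=mark-gh src-address-list=!G action=mark-routing new-routing-mark=h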

If you can use matching on src-address instead of src-address-list in the example, the matching will be faster too, as address lists have to use an internal hash algorithm while plain src-address matching uses a single from-to interval at worst.
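For example, if list A happens to be a single contiguous subnet (the prefix below is invented), the plain form would be:

add chain=prerouting src-address=10.20.30.0/24 action=mark-routing new-routing-mark=a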

But it is even better to assign a connection mark (albeit based on relatively expensive operations like address-list matching) to the whole connection when processing its initial packet, and then translate the connection mark to a routing mark for all subsequent packets of the connection in the upload direction, as described here. You cannot organize the connection mark -> routing mark translation rules into a binary tree, but matching a connection mark is a faster operation than matching an address list.
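A sketch of that approach, with the same placeholder lists and marks (the connection-mark names cm-a, cm-b are mine): classify each connection once, on its first packet, then translate connection-mark to routing-mark with cheap matches:

/ip firewall mangle
add chain=prerouting connection-state=new src-address-list=A action=mark-connection new-connection-mark=cm-a
add chain=prerouting connection-state=new src-address-list=B action=mark-connection new-connection-mark=cm-b
... one classification rule per list, only ever evaluated in full for the first packet of a connection ...
add chain=prerouting connection-mark=cm-a action=mark-routing new-routing-mark=a passthrough=no
add chain=prerouting connection-mark=cm-b action=mark-routing new-routing-mark=b passthrough=no
... one cheap translation rule per mark ...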

Yet another option, as you seem to use routing marks only based on source addresses: I was always wondering whether routing rules were just a relic of the past or only applicable together with dynamic routing protocols, but inspired by your multi-tenant configuration, I’ve done a quick test and it seems to me that, unlike routing marking in mangle, routing rules can coexist with fasttracking. They support far fewer match conditions, actually only source and destination prefixes (no lists, no intervals) and routing-mark, but for your scenario the src-address should be sufficient. And fasttracking really does make a difference.
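This is roughly what I tested (all prefixes, gateways and table names below are made up, adjust to your addressing):

/ip route rule
add src-address=10.10.1.0/24 table=via-uplink1 action=lookup
add src-address=10.10.2.0/24 table=via-uplink2 action=lookup
/ip route
add dst-address=0.0.0.0/0 gateway=192.168.88.1 routing-mark=via-uplink1
add dst-address=0.0.0.0/0 gateway=192.168.89.1 routing-mark=via-uplink2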

The problem is this: I want to know whether this RouterBOARD cannot handle the traffic or whether I need a more powerful one.
This is the problem:
[attached screenshot: Captura.JPG]

Because that is a router and you are using it as a switch; those are not VLANs. The device is maxed out because you are using it wrong. And to think that Michel and Elio always say, “no way, man, ever since you left the network hasn’t failed even at full load”…

I’ve seen this network, I was once part of it. It’s just a huge broadcast network that extends over miles with daisy-chained switches and wireless access points, where subnets are called VLANs even though there is no such thing as a VLAN tag anywhere.

I dare to disagree with the contents of what you wrote. The OP is not using the machine as a bridge, the L2 functionality is provided by an external device, but the routing is so complex and the address plan so fragmented that it may require a rearrangement of the address plan to permit use of fasttracking. Without fasttracking, the CPU spends too much time on every packet.

Everything is untagged; if you run Wireshark you can see the broadcasts from a PC 12 switches away from you on another subnet (not VLAN). All those subnets are on the same default VLAN. Still think it’s not a switch? :laughing: :laughing: :laughing:

Well, in my defense I must say that it doesn’t exactly scream from the configuration that many different subnets coexist on the same LAN; there is just one case where addresses from two different subnets are assigned to the same ether port. Of course if there is so much broadcast traffic, the CPU is busy handling it, but at least it does not have to forward L2 traffic. So I still think that converting those tens of mangle rules, which translate that bambillion of source addresses aggregated into address lists to routing marks, into a bambillion of /ip route rule items, thus allowing use of fasttracking, could do miracles with the throughput.

Also the number of routes could be optimized.

Can you advise me on the configuration then, another method of configuring the network? Should I delete everything and start configuring it again that way? I do not know another way to configure it, and as for VLAN tagging, the switch that I have, the TP-Link SL2452WEB, seems not to be compatible with what the MikroTik does; it seems they only work from switch to switch. Do you know of any method to make them work together?

I was looking at your original configuration in the evening, and as I woke up in the morning, I realized that RoadkillX’s diagnosis was actually accurate: the way the network has grown into chaos, you cannot use the advantages of routing, so you use L3 tools to inefficiently implement L2 behaviour.

In particular: on L2, the internal structure of the (MAC) addresses has no relationship to the network topology, so you need a lookup table to associate an L2 address with the physical port through which it is accessible. Switches are good at doing this, as they were designed for it, so all the lookup algorithms are implemented in hardware.

On L3, the (IP) address, like e.g. a telephone number, has a prefix, which is used for “long-distance” routing, and a suffix, which is used for “local” routing. So normally you use just short or long prefixes to find the route.

But what you’ve done (or were forced to do, I don’t know whether there is any network design involved) is that you’ve scattered addresses from the same IP subnet (same /24 prefix) across several different ports, so you have to route each single IP address separately, leading to a bambillion of routes and/or routing rules. Which effectively means that you use the L3 addresses the same way L2 addresses are normally used, but without the power of hardware and with algorithms that are not optimized for this type of use, because no one ever expected such use would be necessary.

Use of “real” VLANs alone won’t help here a single bit, because the main problem, which is the need to look up each destination address in a list to determine via which gateway/port to send the packet, will remain. What would help would be to use the address hierarchy, so that the addresses of all devices accessible through the same gateway share a common prefix (i.e. are from the same subnet) and that prefix is unique to them, i.e. no devices accessible via some other route belong to the same subnet. Only after such a rearrangement would it make sense to place these subnets into VLANs.
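Just to sketch the idea (every prefix below is invented): one subnet per tower/port, plus a single aggregate route per remote site, instead of piles of per-host routes and rules:

/ip address
add interface=ether2 address=10.50.1.1/24 comment="tower A clients"
add interface=ether3 address=10.50.2.1/24 comment="tower B clients"
/ip route
add dst-address=10.50.16.0/20 gateway=10.50.1.2 comment="everything behind tower A's relay"

With a plan like this, each port needs only one connected route and the per-host rules disappear.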

If the network as it is currently configured is that poor, could you provide some kind of guide or tutorial on the right way to organize it, as if I had to start from zero, IP by IP? Any tutorial or guide you can point me to?