Community discussions

MikroTik App
 
kcarhc
Frequent Visitor
Topic Author
Posts: 57
Joined: Thu Feb 01, 2018 9:54 am

FEATURE REQUEST: BBR(Bottleneck Bandwidth and Round-trip propagation time) Congestion Control

Sun Aug 23, 2020 2:49 pm

Bottleneck Bandwidth and Round-trip propagation time (BBR) is a congestion control algorithm that powers traffic from google.com and YouTube.
The algorithm was developed by Google, and it can produce higher throughput and lower latency for traffic from your VPS.

RouterOS 7.1 beta X supports WireGuard, which suggests it is built on Linux kernel 5.6.x or later.

So please add a button to switch TCP congestion control from the default (CUBIC, or whatever MikroTik uses) to BBR.
 
 
nithinkumar2000
Member Candidate
Posts: 157
Joined: Wed Sep 11, 2019 7:42 am
Location: Coimbatore
Contact:

Re: FEATURE REQUEST: BBR(Bottleneck Bandwidth and Round-trip propagation time) Congestion Control

Sun Aug 23, 2020 8:08 pm

+1 from my side
 
santyx32
Member Candidate
Posts: 215
Joined: Fri Oct 25, 2019 2:17 am

Re: FEATURE REQUEST: BBR(Bottleneck Bandwidth and Round-trip propagation time) Congestion Control

Mon Aug 24, 2020 1:00 am

+1, but it still needs a modern/smarter queuing discipline like CAKE or fq_codel to keep latency low across different kinds of traffic, not just TCP.

Edit: I don't think we need BBR; TCP congestion control algorithms can do very little to improve the user experience and overall network quality. SQM gives better results and only needs to be implemented at the gateway.
Last edited by santyx32 on Mon Aug 24, 2020 3:04 am, edited 1 time in total.
 
vecernik87
Forum Veteran
Posts: 882
Joined: Fri Nov 10, 2017 8:19 am

Re: FEATURE REQUEST: BBR(Bottleneck Bandwidth and Round-trip propagation time) Congestion Control

Mon Aug 24, 2020 2:13 am

I always thought that TCP congestion control is managed by the endpoints (e.g. web browser and web server)?

My understanding of BBR is that the endpoints are "smarter" and learn how the network behaves, then adjust their sending rate based on that information. The network itself (any router on the path) is unaware of this and behaves as it normally would.
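For illustration: on Linux the algorithm is literally a property of the sending socket, not of any router on the path. A minimal Python sketch (my own illustration, Linux-only; "bbr" will only be available if the kernel module is loaded) showing how an endpoint inspects and switches its per-socket congestion control:

```python
import socket

# The *sender* picks the congestion control algorithm, per socket; routers
# along the path just see ordinary TCP. Linux-only sketch: TCP_CONGESTION
# is socket option 13 there (fallback for Pythons that lack the constant).
TCP_CONGESTION = getattr(socket, "TCP_CONGESTION", 13)

def current_cc(sock):
    """Name of the congestion control algorithm this socket will use."""
    raw = sock.getsockopt(socket.IPPROTO_TCP, TCP_CONGESTION, 16)
    return raw.split(b"\x00", 1)[0].decode()

def try_set_cc(sock, algo):
    """Try to switch this socket to `algo` (e.g. 'bbr').
    Returns False if the module is not loaded or not permitted."""
    try:
        sock.setsockopt(socket.IPPROTO_TCP, TCP_CONGESTION, algo.encode())
        return True
    except OSError:
        return False

if __name__ == "__main__":
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        print("default algorithm:", current_cc(s))  # typically 'cubic'
        print("bbr available:", try_set_cc(s, "bbr"))
```

Note that `sysctl net.ipv4.tcp_congestion_control` only sets the system-wide default; an application can override it per socket like this, which is why only the endpoints need to change.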
 
neutronlaser
Member
Posts: 445
Joined: Thu Jan 18, 2018 5:18 pm

Re: FEATURE REQUEST: BBR(Bottleneck Bandwidth and Round-trip propagation time) Congestion Control

Mon Aug 24, 2020 3:44 am

-1.
 
kcarhc
Frequent Visitor
Topic Author
Posts: 57
Joined: Thu Feb 01, 2018 9:54 am

Re: FEATURE REQUEST: BBR(Bottleneck Bandwidth and Round-trip propagation time) Congestion Control

Wed Aug 26, 2020 12:34 am

santyx32 wrote:
+1, but it still needs a modern/smarter queuing discipline like CAKE or fq_codel to keep latency low across different kinds of traffic, not just TCP.

Edit: I don't think we need BBR; TCP congestion control algorithms can do very little to improve the user experience and overall network quality. SQM gives better results and only needs to be implemented at the gateway.
I have tried it on some VPS servers: if I don't enable BBR and just run ROS 6/7, you can't get maximum network performance when the other servers on the same VPS host all use BBR on their Linux installs.
So I set up two VPS servers with an additional private network, reachable between the two.
I used Linux as the main router (Debian 10 or CentOS 7, updated to the newest kernel, with BBR turned on), with RouterOS 6/7 behind the Linux router.
The Linux router dst-nats everything to RouterOS 6/7 through the private network.
This makes RouterOS 6/7 network performance much better.

I don't like BBR either, but when I rent a VPS I just want the network to reach maximum performance, not to be stuck in queues because BBR is off.

So I think it should be a switch: change CUBIC to BBR, or BBR back to CUBIC.
Just click the button and reboot. If you don't need it, don't turn it on.

Note that BBR is not the default on Linux either; you enable it yourself:
Edit /etc/sysctl.conf (e.g. nano /etc/sysctl.conf) and add the following two lines at the bottom:
net.core.default_qdisc=fq
net.ipv4.tcp_congestion_control=bbr
Save and close the file, then reload sysctl with: sudo sysctl -p
Now when you check which congestion control algorithm is in use (with the command sysctl net.ipv4.tcp_congestion_control),
it will show net.ipv4.tcp_congestion_control=bbr: BBR is on.
You can now enjoy Google's much-improved congestion control algorithm (CCA) on Linux.

You should see significant improvements with network speed on that server.
 
kcarhc
Frequent Visitor
Topic Author
Posts: 57
Joined: Thu Feb 01, 2018 9:54 am

Re: FEATURE REQUEST: BBR(Bottleneck Bandwidth and Round-trip propagation time) Congestion Control

Mon Feb 13, 2023 4:21 pm

What's the use of enabling BBR? In simple terms, it can improve your website access speed. BBR is a TCP congestion control algorithm open-sourced by Google. It aims to solve two problems: fully utilizing the bandwidth of a network link that has some packet loss, and reducing buffer occupancy on the link, thereby reducing latency.

So why enable BBR on RouterOS? Because most of the VPSes we rent have shared bandwidth: one 1 Gbps port may be shared by 100 tenants. In the past, when nobody used BBR, everyone was fine, and RouterOS worked very well. But once someone starts using BBR on other Linux servers on the same host, RouterOS becomes the relatively disadvantaged user of that shared bandwidth; the bandwidth sharing is no longer fair to it.

I have tested this on the same VPS with Debian 10: the difference between BBR enabled and disabled is very obvious, especially during network peak periods.

BBR was added in Linux kernel 4.9 and is not enabled by default. My suggestion is that RouterOS provide the option to enable this feature, so those who need it can turn it on, just like fq-codel on RouterOS, which is supported but not enabled by default.
 
shavenne
just joined
Posts: 16
Joined: Wed Dec 11, 2019 4:27 pm

Re: FEATURE REQUEST: BBR(Bottleneck Bandwidth and Round-trip propagation time) Congestion Control

Tue Feb 14, 2023 4:27 pm

Tell me if I'm wrong, but this wouldn't have any impact on routed traffic;
only if you used RouterOS as a proxy, a web server, or similar.

BUT: since I often have problems with my ISP not delivering the download speed I should get, I found that it sometimes helps A LOT to use a VPS as a proxy with TCP congestion control set to BBR. Same with my upload, where BBR makes it around 2-3 times faster.
So it really could be an interesting feature for some, and it shouldn't be a big deal to implement. Especially interesting for the Docker instances (I can see my proxy running there if this feature arrives :D).
 
sirbryan
Member Candidate
Posts: 298
Joined: Fri May 29, 2020 6:40 pm
Location: Utah
Contact:

Re: FEATURE REQUEST: BBR(Bottleneck Bandwidth and Round-trip propagation time) Congestion Control

Tue Feb 14, 2023 8:44 pm

As already mentioned, BBR will do zero for routing, but would help in the following areas with RouterOS as a host:
  • Router-generated TCP speed tests
  • Proxying (existing ROS proxy options)
  • Containers (i.e. nginx or apache as a web host or proxy)
  • File servers
For me, even more important than BBR would be a transparent TCP proxy, like what Bequant and Cambium have done. With TCP acceleration at the front (peering) end to overcome the latency of fixed wireless, for example, and AQM (CAKE, fq_codel) at the edge (like a hAP ac/ax), you could ensure that customers can satisfactorily fill their pipe and still have a balanced experience across all the different protocols riding the network.
Last edited by sirbryan on Wed Feb 15, 2023 5:39 am, edited 1 time in total.
 
dtaht
Member Candidate
Posts: 209
Joined: Sat Aug 03, 2013 5:46 am

Re: FEATURE REQUEST: BBR(Bottleneck Bandwidth and Round-trip propagation time) Congestion Control

Tue Feb 14, 2023 11:55 pm

I am of two minds about BBR. I have been mostly waiting for BBRv2 to come out before recommending it for anything other than its original purpose: being better than DASH (Netflix-style) traffic for YouTube. It is presently ill-suited for sharded web sites in particular (it does not compete with itself very well). As for speedtest-like benchmarks to the router, it is good at finding the actual capacity of the link and staying there.

With the "FQ" portion of fq_codel/cake/fq_pie in the loop, it does very little harm, and for transfers longer than 10 s it tends to stay in its delay-based regime. The first 10 s, though... And against policers it shreds all the other traffic that someone might be trying to control via their policer so they have a good experience with VoIP or video. BBR does not have RFC 3168 ECN support *at all*, nor does it respond quickly to drops from the "codel" portion of the algorithm. Because of its non-TCP-friendly behavior it outcompetes CUBIC in many cases, which you could celebrate as a BBR user, or be unhappy about as a CUBIC user. BBR is helpful on genuinely lossy links where CUBIC would struggle.

It also requires slightly more CPU resources than CUBIC. It is a mixed bag: even on an fq_codel-enabled router it will mask other bufferbloat problems elsewhere on your network.

Secondly, I am mortally opposed to TCP proxies, because breaking the TCP control loop in two like that generally damages it. A good example of where TCP proxies can break things: an otherwise responsive screen-sharing application can no longer get enough feedback to drop a frame rather than keep spitting packets.

We also see this all the time nowadays in container systems, where not using TCP_NOTSENT_LOWAT in particular can lead to 99% of the data being transferred between the web proxy (nginx) and the container but never getting sent to the user, hurting the actual interactivity of other flows. A hilarious real-world example of this happened last year: correctly enabling that option between the container and nginx produced that reduction in traffic, with the observable benefit that a certain large maps provider suddenly started getting individual components of the maps, and other data, downloaded much faster.

The BBR implementation now in Cilium relies on *host*, not *network*, backpressure as of Linux 6.0, and works pretty well, except when talking directly to the net, or for UDP within the container.

I am very, very unhappy about the explosion of the TCP proxy concept in the past two years. People using it aren't measuring the right things in the right places.

I can give cites for BBR's problems if you like. But yes: TCP speed tests will look better but not reflect customer reality; proxying will look better for half of the connection, with rather undefined results on the other half; containers, as I already said; and for file servers, it depends on the workload.

See also:

https://web.mit.edu/Saltzer/www/publica ... dtoend.pdf
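A minimal sketch of setting the TCP_NOTSENT_LOWAT option mentioned above (my own illustration, Linux-only; the 128 KiB threshold is illustrative, not a recommendation):

```python
import socket

# TCP_NOTSENT_LOWAT caps how much not-yet-sent data the kernel will queue
# for one socket, so the application sees backpressure (the socket stops
# polling writable) instead of buffering data it can no longer drop.
# Linux-only sketch; the constant is 25 on Linux (fallback for older Pythons).
TCP_NOTSENT_LOWAT = getattr(socket, "TCP_NOTSENT_LOWAT", 25)

def limit_unsent(sock, limit=128 * 1024):
    """Set the not-sent low-water mark and return the value the kernel kept."""
    sock.setsockopt(socket.IPPROTO_TCP, TCP_NOTSENT_LOWAT, limit)
    return sock.getsockopt(socket.IPPROTO_TCP, TCP_NOTSENT_LOWAT)

if __name__ == "__main__":
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        print("notsent lowat:", limit_unsent(s))
```

With this set, a sender that polls for writability gets told to back off while unsent data sits above the threshold, which is exactly the feedback a screen-sharing app needs to drop a stale frame instead of queueing it.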
 
kcarhc
Frequent Visitor
Topic Author
Posts: 57
Joined: Thu Feb 01, 2018 9:54 am

Re: FEATURE REQUEST: BBR(Bottleneck Bandwidth and Round-trip propagation time) Congestion Control

Fri Feb 17, 2023 3:39 pm

I think there's no need to wait for BBRv2; that is a problem for the future. RouterOS v7 uses a Linux 5.x kernel, which natively supports BBR.

Enabling BBR support in the kernel is all that is needed; it would not be enabled by default, but could be turned on by those who need it.

This way, RouterOS can get better network performance for Docker, similar to other Linux systems. Even if that is not my main point, enabling BBR does have the potential to improve performance on some cloud servers.

Rather than rejecting this feature, we should just avoid forcing it to be enabled.
