SQM help | bufferbloat | LTE connection

Thx for your reply! I don't think I got it... I enabled SQM on eth2 (the WAN/LTE interface). Ingress = download and egress = upload, so for example 50000 ingress, 20000 egress?

thx for your help!!

so long

I really cannot tell, as it depends on the hardware configuration of your router. But there is a quite easy test: in the GUI, set one of the bandwidths, say upload, to 5000 and the other to 0 (which effectively disables the shaper on download; zero just signals no shaping, as shaping to 0 is completely nonsensical). Now run a speedtest. If the results show that the upload speed is limited to around 5000, then the GUI's directionality is correct; if however the test shows that the download bandwidth was limited, you will need to switch the ingress and egress values in the GUI. If that should be the case, it would probably also explain why autorate_ingress does not seem to have any meaningful effect, as it would act on the egress part of the link (which, as your tests show, does not seem to suffer from bufferbloat).

So how about you do two tests, with 5000/0 and 0/5000, a test duration of 30 seconds, and high-resolution bufferbloat measurements configured, and post the results here (please also post the output of "tc -s qdisc" after each speedtest)? Then we can discuss the directionality in an informed manner.
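For reference, the two test configurations can also be set from the command line (a sketch; it assumes the default single SQM queue section, `sqm.@queue[0]`, which may be named differently on your box):

```shell
# Test 1: shape upload to 5000 kbps, leave download unshaped (0 = off)
uci set sqm.@queue[0].upload='5000'
uci set sqm.@queue[0].download='0'
uci commit sqm
/etc/init.d/sqm restart
# ...run the speedtest, then collect the statistics:
tc -s qdisc

# Test 2: swap the directions and repeat
uci set sqm.@queue[0].upload='0'
uci set sqm.@queue[0].download='5000'
uci commit sqm
/etc/init.d/sqm restart
```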

Also I hope you are testing via ethernet and not via wlan1-1, as you seem to have another htb shaper instantiated on the wlan interface...

Best Regards

Sorry for my late answer, I was very busy the last days.

I am always testing via ethernet, thx for your advice. I also tried testing with ingress 5000 / egress 0 and with 0/5000. In fact, ingress is for download and egress for upload. I will do some tests later and post the tc -s qdisc results.

thx & so long

BTW, if using dslreports' speedtest, the connection type buttons really just load predefined configurations. I would always attempt to measure different links with the same parameters if possible, so I recommend always selecting the same configuration. Actually, I recommend creating your own configuration; that way you are in control of the exact parameters and even the permitted test servers. This is especially relevant as I have a hunch that the 4G test profile uses very few streams and might not have high-resolution bufferbloat testing activated, which is quite helpful when trying to understand fine link behaviour, as in your case...

Best Regards

Try an iptables packets-per-second limitation on the LTE link... possibly your ISP restricts the number of connections. One device, like a cellphone or a single computer on LTE, will usually open about 10 to 40 simultaneous connections at most just browsing; in the real world, on a router with more than one device, that will rapidly climb above 50... and the ISP doesn't care and will just drop (deny) them. If you REJECT with a proper response using iptables, the other end will receive an ICMP packet as a signal and will probably guess (?) that it needs to retry the connection (?)...

You could run a netcat listener on a public IP on the internet and write a script that opens several netcat connections to that end, to test how many the link is capable of handling (disable forwarding to cut internet access for your computers, and test using the router's nc if possible; if not, connect just one computer via cable).
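A rough sketch of that test (HOST, PORT and N are placeholders for your own listener and the count you want to probe; nc option support differs between BusyBox and GNU netcat):

```shell
#!/bin/sh
# Open N parallel TCP connections to a netcat listener you control
# (start `nc -lk 9000` on the remote end first, if your nc supports -k).
HOST=203.0.113.10   # hypothetical public IP of your listener
PORT=9000
N=100
i=1
while [ "$i" -le "$N" ]; do
    nc -w 10 "$HOST" "$PORT" </dev/null &
    i=$((i + 1))
done
# While the connections are held open, count how many actually
# reached ESTABLISHED state (run from another shell if needed):
netstat -tn 2>/dev/null | grep -c ESTABLISHED
wait
```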

I think that conntrack or some other kmod package may be needed in order to do this... Sorry, but I am no expert on iptables.

Maybe something like:

iptables -I FORWARD -p tcp -s 192.168.1.100 -m connlimit --connlimit-above 50 -j REJECT --reject-with tcp-reset

This may work for many addresses:

iptables -I FORWARD -p tcp --syn -m connlimit --connlimit-above 15 --connlimit-mask 32 -j REJECT --reject-with tcp-reset 

Maybe you could control not just connections but packets too:

iptables -I FORWARD -m state --state RELATED,ESTABLISHED -m limit --limit 150/second --limit-burst 160 -j ACCEPT

Sorry for my late answer, I was very busy...

I created my own config @dslreports as written in your own thread: [SQM/QOS] Recommended settings for the dslreports speedtest (bufferbloat testing)

dslreports result:

tc -s qdisc result (after speedtest):

qdisc noqueue 0: dev lo root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc mq 0: dev eth0 root
Sent 120450913377 bytes 102089359 pkt (dropped 54, overlimits 0 requeues 23406)
backlog 0b 0p requeues 23406
qdisc fq_codel 0: dev eth0 parent :1 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 120450913377 bytes 102089359 pkt (dropped 54, overlimits 0 requeues 23406)
backlog 0b 0p requeues 23406
maxpacket 3028 drop_overlimit 0 new_flow_count 47586 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :3 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :4 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :5 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :6 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :7 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :8 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc mq 0: dev eth1 root
Sent 257256143 bytes 762034 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc fq_codel 0: dev eth1 parent :1 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 257256143 bytes 762034 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :3 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :4 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :5 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :6 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :7 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :8 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc noqueue 0: dev br-lan root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc noqueue 0: dev eth0.1 root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc noqueue 0: dev eth0.10 root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc mq 0: dev wlan1 root
Sent 26037312208 bytes 23926337 pkt (dropped 0, overlimits 0 requeues 308)
backlog 0b 0p requeues 308
qdisc fq_codel 0: dev wlan1 parent :1 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 3052184 bytes 19947 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev wlan1 parent :2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev wlan1 parent :3 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 26034259861 bytes 23906388 pkt (dropped 0, overlimits 0 requeues 308)
backlog 0b 0p requeues 308
maxpacket 3028 drop_overlimit 0 new_flow_count 15227 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev wlan1 parent :4 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 163 bytes 2 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc cake 80dd: dev wlan1-1 root refcnt 5 bandwidth 4096Kbit besteffort triple-isolate rtt 100.0ms raw
Sent 806591074 bytes 621700 pkt (dropped 18507, overlimits 641618 requeues 4)
backlog 0b 0p requeues 4
memory used: 130944b of 4Mb
capacity estimate: 4096Kbit
Tin 0
thresh 4096Kbit
target 5.0ms
interval 100.0ms
pk_delay 416us
av_delay 18us
sp_delay 1us
pkts 640207
bytes 832663471
way_inds 11430
way_miss 6579
way_cols 0
drops 18507
marks 0
sp_flows 1
bk_flows 1
un_flows 0
max_len 1474

qdisc ingress ffff: dev wlan1-1 parent ffff:fff1 ----------------
Sent 38737749 bytes 515338 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc cake 80da: dev eth2 root refcnt 2 bandwidth 20480Kbit besteffort triple-isolate rtt 100.0ms raw
Sent 590237698 bytes 3318149 pkt (dropped 3964, overlimits 520327 requeues 0)
backlog 0b 0p requeues 0
memory used: 1114880b of 4Mb
capacity estimate: 20480Kbit
Tin 0
thresh 20480Kbit
target 5.0ms
interval 100.0ms
pk_delay 10.3ms
av_delay 4.8ms
sp_delay 14us
pkts 3322113
bytes 595977510
way_inds 98700
way_miss 54787
way_cols 0
drops 3964
marks 4
sp_flows 1
bk_flows 1
un_flows 0
max_len 13266

qdisc ingress ffff: dev eth2 parent ffff:fff1 ----------------
Sent 6474252031 bytes 5260741 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc mq 0: dev wlan0 root
Sent 2162628396 bytes 5399904 pkt (dropped 0, overlimits 0 requeues 6)
backlog 0b 0p requeues 6
qdisc fq_codel 0: dev wlan0 parent :1 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 190573 bytes 1246 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev wlan0 parent :2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev wlan0 parent :3 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 2162437823 bytes 5398658 pkt (dropped 0, overlimits 0 requeues 6)
backlog 0b 0p requeues 6
maxpacket 1514 drop_overlimit 0 new_flow_count 4576 ecn_mark 0
new_flows_len 1 old_flows_len 15
qdisc fq_codel 0: dev wlan0 parent :4 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev tun0 root refcnt 2 limit 10240p flows 1024 quantum 1500 target 5.0ms interval 100.0ms ecn
Sent 807386 bytes 2644 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc cake 80db: dev ifb4eth2 root refcnt 2 bandwidth 1294Kbit autorate_ingress besteffort triple-isolate wash rtt 100.0ms raw
Sent 6499587663 bytes 5226657 pkt (dropped 34084, overlimits 7853540 requeues 0)
backlog 0b 0p requeues 0
memory used: 718208b of 4Mb
capacity estimate: 1380Kbit
Tin 0
thresh 1294Kbit
target 14.0ms
interval 109.0ms
pk_delay 2.4ms
av_delay 867us
sp_delay 11us
pkts 5260741
bytes 6547902405
way_inds 21293
way_miss 52339
way_cols 0
drops 34084
marks 639
sp_flows 1
bk_flows 1
un_flows 0
max_len 1514

qdisc cake 80de: dev ifb4wlan1-1 root refcnt 2 bandwidth 2048Kbit besteffort triple-isolate wash rtt 100.0ms raw
Sent 45947057 bytes 515304 pkt (dropped 34, overlimits 306370 requeues 0)
backlog 0b 0p requeues 0
memory used: 226368b of 4Mb
capacity estimate: 2048Kbit
Tin 0
thresh 2048Kbit
target 8.9ms
interval 103.9ms
pk_delay 755us
av_delay 110us
sp_delay 1us
pkts 515338
bytes 45952481
way_inds 2798
way_miss 6668
way_cols 0
drops 34
marks 0
sp_flows 1
bk_flows 1
un_flows 0
max_len 1474

Thx for your help so far!

so long!

So I fear that especially your downstream is simply too variable for autorate_ingress to follow... Could you run a speedtest without autorate_ingress but with the download bandwidth set to 8000 Kbps, please? So far none of the tests show that downstream bufferbloat is meaningfully controlled, so something is really off, and we should convince ourselves that there are settings low enough for SQM to have the expected effect on download bufferbloat. I realize that this is not your preferred solution, but consider this a test: if at a fixed 8000 Kbps things do not work as expected, they probably will also not work with autorate_ingress...

Best Regards

thx for your answer! Is there no way to configure an offset? Like autorate_ingress - 20%? Sometimes I am getting 30 Mbit/s, sometimes only 5 Mbit/s...

Using 8000 kbps for download, here is the result:

output of tc -s qdisc:

qdisc noqueue 0: dev lo root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc mq 0: dev eth0 root
Sent 121420780262 bytes 102993055 pkt (dropped 56, overlimits 0 requeues 23561)
backlog 0b 0p requeues 23561
qdisc fq_codel 0: dev eth0 parent :1 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 121420780262 bytes 102993055 pkt (dropped 56, overlimits 0 requeues 23561)
backlog 0b 0p requeues 23561
maxpacket 3028 drop_overlimit 0 new_flow_count 48240 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :3 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :4 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :5 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :6 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :7 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :8 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc mq 0: dev eth1 root
Sent 261749057 bytes 775343 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc fq_codel 0: dev eth1 parent :1 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 261749057 bytes 775343 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :3 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :4 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :5 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :6 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :7 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :8 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc noqueue 0: dev br-lan root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc noqueue 0: dev eth0.1 root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc noqueue 0: dev eth0.10 root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc mq 0: dev wlan1 root
Sent 26210323643 bytes 24099879 pkt (dropped 0, overlimits 0 requeues 308)
backlog 0b 0p requeues 308
qdisc fq_codel 0: dev wlan1 parent :1 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 3113230 bytes 20356 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev wlan1 parent :2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev wlan1 parent :3 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 26207210250 bytes 24079521 pkt (dropped 0, overlimits 0 requeues 308)
backlog 0b 0p requeues 308
maxpacket 3028 drop_overlimit 0 new_flow_count 15287 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev wlan1 parent :4 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 163 bytes 2 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc cake 80ed: dev wlan1-1 root refcnt 5 bandwidth 4096Kbit besteffort triple-isolate rtt 100.0ms raw
Sent 7758221 bytes 5558 pkt (dropped 27, overlimits 5399 requeues 0)
backlog 0b 0p requeues 0
memory used: 27776b of 4Mb
capacity estimate: 4096Kbit
Tin 0
thresh 4096Kbit
target 5.0ms
interval 100.0ms
pk_delay 16.1ms
av_delay 5.4ms
sp_delay 2.6ms
pkts 5585
bytes 7798019
way_inds 24
way_miss 61
way_cols 0
drops 27
marks 0
sp_flows 0
bk_flows 1
un_flows 0
max_len 1474

qdisc ingress ffff: dev wlan1-1 parent ffff:fff1 ----------------
Sent 274620 bytes 4024 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc fq_codel 0: dev eth2 root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 89188146 bytes 79944 pkt (dropped 0, overlimits 0 requeues 833)
backlog 0b 0p requeues 833
maxpacket 13266 drop_overlimit 0 new_flow_count 1365 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc ingress ffff: dev eth2 parent ffff:fff1 ----------------
Sent 40989622 bytes 68926 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc mq 0: dev wlan0 root
Sent 2164409453 bytes 5426868 pkt (dropped 0, overlimits 0 requeues 6)
backlog 0b 0p requeues 6
qdisc fq_codel 0: dev wlan0 parent :1 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 190573 bytes 1246 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev wlan0 parent :2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev wlan0 parent :3 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 2164218880 bytes 5425622 pkt (dropped 0, overlimits 0 requeues 6)
backlog 0b 0p requeues 6
maxpacket 1514 drop_overlimit 0 new_flow_count 4623 ecn_mark 0
new_flows_len 1 old_flows_len 7
qdisc fq_codel 0: dev wlan0 parent :4 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev tun0 root refcnt 2 limit 10240p flows 1024 quantum 1500 target 5.0ms interval 100.0ms ecn
Sent 807386 bytes 2644 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc cake 80eb: dev ifb4eth2 root refcnt 2 bandwidth 8Mbit besteffort triple-isolate wash rtt 100.0ms raw
Sent 39633506 bytes 67322 pkt (dropped 1604, overlimits 110161 requeues 0)
backlog 0b 0p requeues 0
memory used: 310Kb of 4Mb
capacity estimate: 8Mbit
Tin 0
thresh 8Mbit
target 5.0ms
interval 100.0ms
pk_delay 4.2ms
av_delay 923us
sp_delay 27us
pkts 68926
bytes 41954586
way_inds 743
way_miss 423
way_cols 0
drops 1604
marks 0
sp_flows 1
bk_flows 1
un_flows 0
max_len 1474

qdisc cake 80ee: dev ifb4wlan1-1 root refcnt 2 bandwidth 2048Kbit besteffort triple-isolate wash rtt 100.0ms raw
Sent 330800 bytes 4022 pkt (dropped 1, overlimits 2084 requeues 0)
backlog 78b 1p requeues 0
memory used: 33536b of 4Mb
capacity estimate: 2048Kbit
Tin 0
thresh 2048Kbit
target 8.9ms
interval 103.9ms
pk_delay 766us
av_delay 188us
sp_delay 10us
pkts 4024
bytes 330956
way_inds 0
way_miss 67
way_cols 0
drops 1
marks 0
sp_flows 0
bk_flows 1
un_flows 0
max_len 1414

so long

Okay, the download bufferbloat now looks much nicer, but the upload now looks less ideal, and there is a noticeable spike even in the idle measurements. Not sure what to recommend...

Thx, "autorate_ingress" hasn't helped me so far. Is an option with link aggregation and overhead not possible? I just dream of an offset feature to make the autorate_ingress setting more effective.
For example: detected bandwidth = 20 Mbit/s, SQM ingress limit = 16 Mbit/s, just to reduce the estimated bandwidth.
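That offset idea can at least be approximated by hand: measure the current rate, then configure a fixed fraction of it as the SQM download limit (a sketch; the 80% factor and the 20000 kbps measurement are just the example numbers from above):

```shell
# Derive a shaper rate as a fixed fraction of a measured bandwidth.
measured_kbps=20000   # e.g. what a speedtest just reported
percent=80            # i.e. keep 20% headroom below the estimate
shaped_kbps=$((measured_kbps * percent / 100))
echo "$shaped_kbps"   # prints 16000
```

The downside, of course, is that this is a one-shot calculation and does not track the link as it varies.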

so long

As far as I can tell, autorate_ingress has no ceiling at all, so... the current intended behaviour is (from man tc-cake.8):
autorate_ingress
Automatic capacity estimation based on traffic arriving at this
qdisc. This is most likely to be useful with cellular links, which
tend to change quality randomly. A bandwidth parameter can be used
in conjunction to specify an initial estimate. The shaper will
periodically be set to a bandwidth slightly below the estimated
rate. This estimator cannot estimate the bandwidth of links
downstream of itself.

Thx, so in fact the autorate_ingress option is not working. The mentioned "bandwidth parameter" — how do I use it?

so long

Well, that is simply the normal bandwidth parameter; if you set it to, say, 10 Mbps and still use autorate_ingress, you might get something that works... but I believe you already tested that, and so... it seems SQM cannot actually help you...
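In SQM that is just the download bandwidth field; done by hand with tc it would look roughly like this (a sketch; it assumes SQM's ifb4eth2 ingress device already exists, and uses the hyphenated keyword of current cake):

```shell
# Start the capacity estimator from a 10 Mbit initial guess
# instead of from zero; cake then tracks the link from there.
tc qdisc replace dev ifb4eth2 root cake bandwidth 10Mbit autorate-ingress besteffort wash
```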

yes I did :confused: Thx so far, I actually don't know what to do :confused:

Hey, sorry for necroposting, but shouldn't it be 'autorate-ingress' and not 'autorate_ingress'?
See (https://dl.lochnair.net/Bufferbloat/Cake/tc-cake.8.html)

I'm actually also using it with an LTE connection, and I must say it is quite responsive with the bandwidth management. (A bit too responsive, if you ask me.)

Right you are for current cake. I believe the keyword got renamed at some point in a move to bring order to the sprawl.

Anyway currently:

root@router:~# tc qdisc add root cake help
Usage: ... cake [ bandwidth RATE | unlimited* | autorate-ingress ]
                [ rtt TIME | datacentre | lan | metro | regional |
                  internet* | oceanic | satellite | interplanetary ]
                [ besteffort | diffserv8 | diffserv4 | diffserv3* ]
                [ flowblind | srchost | dsthost | hosts | flows |
                  dual-srchost | dual-dsthost | triple-isolate* ]
                [ nat | nonat* ]
                [ wash | nowash* ]
                [ split-gso* | no-split-gso ]
                [ ack-filter | ack-filter-aggressive | no-ack-filter* ]
                [ memlimit LIMIT ]
                [ ptm | atm | noatm* ] [ overhead N | conservative | raw* ]
                [ mpu N ] [ ingress | egress* ]
                (* marks defaults)

Try cake/simple.qos

Please don't; either use cake/piece_of_cake or fq_codel/simple (or, worst case, fq_codel/tbf_simplest). cake/simple is rather a testing vehicle than a recommended configuration.


... so what do we do for LTE links?
There is autorate-ingress but no autorate-egress?

Ideally, BQL on the LTE modem should solve this, but realistically all you can do is set the egress shaper so that it stays below the (acceptable) worst-case egress rate. And yes, that sucks.
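i.e., something along these lines (a sketch; 5Mbit here is just a placeholder for whatever your link's acceptable worst-case uplink rate turns out to be):

```shell
# Fixed egress shaper on the LTE WAN interface (eth2 in this thread),
# pinned below the worst-case uplink rate so it always bites.
tc qdisc replace dev eth2 root cake bandwidth 5Mbit besteffort
```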
