Commit cb81e6af authored by Martin Ottens, committed by Frieder Schrempf

net/sched: tbf: correct backlog statistic for GSO packets


[ Upstream commit 1596a135e3180c92e42dd1fbcad321f4fb3e3b17 ]

When the length of a GSO packet in the tbf qdisc is larger than the
configured burst size, the packet will be segmented by the tbf_segment
function. Whenever this function is used to enqueue SKBs, the backlog
statistic of the tbf is not increased correctly. This can lead to
underflows of the 'backlog' byte-statistic value when these packets are
dequeued from tbf.
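
For illustration only (not part of the patch): sch->qstats.backlog is an
unsigned 32-bit byte count, so subtracting bytes on dequeue that were
never added on enqueue wraps around instead of going negative. A minimal
userspace sketch of that arithmetic, with the 1514-byte segment size
assumed from the reproduction below:

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          /* Backlog was never increased for the enqueued segments. */
          uint32_t backlog = 0;
          /* Dequeue-side accounting subtracts the segment length anyway. */
          uint32_t seg_len = 1514;

          backlog -= seg_len;
          /* Wraps to 4294965782 -- the kind of oversized value reported
           * by 'tc -s qdisc show'. */
          printf("backlog = %u\n", (unsigned int)backlog);
          return 0;
  }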

Reproduce the bug:
Ensure that the sender machine has GSO enabled. Configure the tbf on
the outgoing interface of the machine as follows (burstsize = 1 MTU):
$ tc qdisc add dev <oif> root handle 1: tbf rate 50Mbit burst 1514 latency 50ms

Send bulk TCP traffic out via this interface, e.g., by running an iPerf3
client on this machine. Check the qdisc statistics:
$ tc -s qdisc show dev <oif>

The 'backlog' byte-statistic has incorrect values while traffic is
transferred, e.g., high values due to u32 underflows. When the transfer
is stopped, the value is != 0, which should never happen.
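
For completeness, one possible way to generate the bulk TCP traffic
mentioned above (server address and duration are placeholders, not taken
from the original report):
$ iperf3 -c <server_ip> -t 30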

This patch fixes this bug by updating the statistics correctly, even if
individual segments of a GSO SKB cannot be enqueued.

Fixes: e43ac79a ("sch_tbf: segment too big GSO packets")
Signed-off-by: Martin Ottens <martin.ottens@fau.de>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Link: https://patch.msgid.link/20241125174608.1484356-1-martin.ottens@fau.de


Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
parent e75b8795
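As context for the qdisc_tree_reduce_backlog() call kept in the hunk
below: any qdiscs above the tbf already accounted the original GSO skb
as one packet of prev_len bytes, while nb segments totalling len bytes
were actually enqueued, so the tree is adjusted by (1 - nb) packets and
(prev_len - len) bytes, where negative deltas mean an increase. A small
worked example with purely assumed sizes:

  #include <stdio.h>

  int main(void)
  {
          /* Assumed example: a GSO skb accounted as 4434 bytes is split
           * into three segments of 1514 bytes each, all enqueued. */
          unsigned int prev_len = 4434, len = 3 * 1514, nb = 3;

          /* Negative results: ancestors must *add* 2 packets and 108 bytes. */
          printf("packet delta: %d\n", 1 - (int)nb);                /* -2 */
          printf("byte delta:   %d\n", (int)prev_len - (int)len);   /* -108 */
          return 0;
  }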
--- a/net/sched/sch_tbf.c
+++ b/net/sched/sch_tbf.c
@@ -207,7 +207,7 @@ static int tbf_segment(struct sk_buff *skb, struct Qdisc *sch,
 	struct tbf_sched_data *q = qdisc_priv(sch);
 	struct sk_buff *segs, *nskb;
 	netdev_features_t features = netif_skb_features(skb);
-	unsigned int len = 0, prev_len = qdisc_pkt_len(skb);
+	unsigned int len = 0, prev_len = qdisc_pkt_len(skb), seg_len;
 	int ret, nb;
 
 	segs = skb_gso_segment(skb, features & ~NETIF_F_GSO_MASK);
@@ -218,21 +218,27 @@ static int tbf_segment(struct sk_buff *skb, struct Qdisc *sch,
 	nb = 0;
 	skb_list_walk_safe(segs, segs, nskb) {
 		skb_mark_not_on_list(segs);
-		qdisc_skb_cb(segs)->pkt_len = segs->len;
-		len += segs->len;
+		seg_len = segs->len;
+		qdisc_skb_cb(segs)->pkt_len = seg_len;
 		ret = qdisc_enqueue(segs, q->qdisc, to_free);
 		if (ret != NET_XMIT_SUCCESS) {
 			if (net_xmit_drop_count(ret))
 				qdisc_qstats_drop(sch);
 		} else {
 			nb++;
+			len += seg_len;
 		}
 	}
 	sch->q.qlen += nb;
-	if (nb > 1)
+	sch->qstats.backlog += len;
+	if (nb > 0) {
 		qdisc_tree_reduce_backlog(sch, 1 - nb, prev_len - len);
-	consume_skb(skb);
-	return nb > 0 ? NET_XMIT_SUCCESS : NET_XMIT_DROP;
+		consume_skb(skb);
+		return NET_XMIT_SUCCESS;
+	}
+
+	kfree_skb(skb);
+	return NET_XMIT_DROP;
 }
 
 static int tbf_enqueue(struct sk_buff *skb, struct Qdisc *sch,