From 532ec4fcd7016486e3bb7633fb226a21d7198605 Mon Sep 17 00:00:00 2001
From: Pim van Pelt
Date: Sat, 21 Feb 2026 15:39:49 +0000
Subject: [PATCH] Typo fixes, h/t Claude

---
 content/articles/2026-02-14-vpp-policers.md | 16 ++++++++--------
 content/articles/2026-02-21-vpp-srv6.md     | 10 +++++-----
 2 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/content/articles/2026-02-14-vpp-policers.md b/content/articles/2026-02-14-vpp-policers.md
index 868171f..910dae2 100644
--- a/content/articles/2026-02-14-vpp-policers.md
+++ b/content/articles/2026-02-14-vpp-policers.md
@@ -34,7 +34,7 @@ vpp# policer input name client-a GigabitEthernet10/0/1
 vpp# policer output name client-a GigabitEthernet10/0/1
 ```

-The idea is to give a _committed information rate_ of 150Mps with a _committed burst_ rate of 15MB.
+The idea is to give a _committed information rate_ of 150Mbps with a _committed burst_ rate of 15MB.
 The _CIR_ represents the average bandwidth allowed for the interface, while the _CB_ represents the
 maximum amount of data (in bytes) that can be sent at line speed in a single burst before the _CIR_
 kicks in to throttle the traffic.
@@ -71,7 +71,7 @@ also contains the feature bitmap: a statically configured set of features for th

 1. It will store the effective feature bitmap for each individual packet in the packet buffer.
 For bridge mode, depending on the packet being unicast or multicast, some features are disabled. For
-example,flooding for unicast packets is not performed, so those bits are cleared. The result is
+example, flooding for unicast packets is not performed, so those bits are cleared. The result is
 stored in a per-packet working copy that downstream nodes can be triggered on, in turn.

 1. For each of the bits set in the packet buffer's `l2.feature_bitmap`, starting from highest bit
@@ -79,6 +79,6 @@ set, `l2-input` will set the next node, for example `l2-input-vtr` to do VLAN Ta
 that node is finished, it'll clear its own bit, and search for the next one set, in order to set a
 new node.

-I note that processing order is HIGH to LOW bits. By reading `l2_input.h`, I can see that The full `l2-input` chain looks like this:
+I note that processing order is HIGH to LOW bits. By reading `l2_input.h`, I can see that the full `l2-input` chain looks like this:

 ```
@@ -260,7 +260,7 @@ Next, I can generate a few packets and send them out from `pg0`, and wait to rec

 Similar to the L3 sub-interface input policer, I also write a test for L3 sub-interface output
 policer. The only difference between the two is that in the output case, the policer is applied to
-`pg1` in the `Dir.TX` direction, while in the input case, it's applied to `pg0` in the `Dir.Rx`
+`pg1` in the `Dir.TX` direction, while in the input case, it's applied to `pg0` in the `Dir.RX`
 direction. I can predict the outcome.

 Every packet is exactly 146 bytes:
@@ -303,8 +303,8 @@ the ethernet frame and encapsulation, so no adjustment is needed there.
 Ben also points out that when applying the policer to the interface, I can detect at creation time
 if it's a PHY, a single-tagged or a double-tagged interface, and store some information to help
 correct the accounting. We discuss a little bit on the mailinglist, and agree that it's best for all four
-cases (L2 input/output and L3 intput/output) to use the full L2 frame bytes in the accounting, which
-as an added benefit also that is remains backwards compatible with the `device-input` accounting.
+cases (L2 input/output and L3 input/output) to use the full L2 frame bytes in the accounting, which
+as an added benefit also remains backwards compatible with the `device-input` accounting.
 Chapeau, Ben you're so clever!

 I add a little helper function:
@@ -383,7 +383,7 @@ pim@summer:~/src/vpp$ make test-debug TEST=test_policer_subif V=2 | grep 'L2.*po

 ## Results

-The policer works in all sorts of cool scenario's now. Let me give a concrete example, where I
+The policer works in all sorts of cool scenarios now. Let me give a concrete example, where I
 create an L2XC with VTR and then apply a policer. I've written about VTR, which stands for _VLAN
 Tag Rewriting_ before, in an old article lovingly called [[VPP VLAN Gymnastics]({{< ref
 "2022-02-14-vpp-vlan-gym" >}})]. It all looks like this:
@@ -420,7 +420,7 @@ gerrit on [[44654](https://gerrit.fd.io/r/c/vpp/+/44654)]. I don't think the pol
 after adding the l2 path, and one might argue it doesn't matter because policing didn't work on
 sub-interfaces and L2 output at all, before this change. However, for the L3 input/output case, and
 for the PHY input case, there are a few CPU cycles added now to address the L2 and sub-int use
-cases. Perhaps I should do a side by side comparision of packets/sec throughput on the bench some
+cases. Perhaps I should do a side by side comparison of packets/sec throughput on the bench some
 time.

 It would be great if VPP would support FQ-CoDel (Flow Queue-Controlled Delay), which is an algorithm
diff --git a/content/articles/2026-02-21-vpp-srv6.md b/content/articles/2026-02-21-vpp-srv6.md
index fcd5737..238d698 100644
--- a/content/articles/2026-02-21-vpp-srv6.md
+++ b/content/articles/2026-02-21-vpp-srv6.md
@@ -71,7 +71,7 @@ signatures, operational and performance monitoring data, and so on.

 {{< image width="14em" float="right" src="/assets/vpp-srv6/magnets.jpg" alt="Insane Clown" >}}

-Much like magnents, you might be wondering _SRv6 Routers: How do they work?_. There's really only
+Much like magnets, you might be wondering _SRv6 Routers: How do they work?_. There's really only
 three relevant things: SR Policy (they determine how packets are steered into the SRv6 routing
 domain), SRv6 Source nodes (they handle the ingress part), and SRv6 Segment Endpoint Nodes (they
 handle both the intermediate routers that participate in SRv6, and also the egress part where the
@@ -115,7 +115,7 @@ The _Segment Endpoint Node_ is a router that is SRv6 capable. A packet may arri
 configured address in the IPv6 destination. The magic happens here - one of two things:

 1. The _Segment Routing Header_ is inspected. If _Segments Left_ is 0, then the next header
-(typically UDP, TCP, ICMP) is processed. Otherwisem the next segment is read from the _Segment
+(typically UDP, TCP, ICMP) is processed. Otherwise, the next segment is read from the _Segment
 List_, and the IPv6 destination address is overwritten with it. The _Segments Left_ field is
 decremented. In this case the packet is routed normally through a bunch of potential transit
 routers, who are blissfully ignorant of what is happening, and onto a next _Segment Endpoint_
@@ -296,9 +296,9 @@ vpp0-0# sr policy add bsid 8298::2:2 next 2001:678:d78:20F::2:ffff next 2001:678
 next 2001:678:d78:20f::3:1 encap
 ```

-Now each router knows that if an IPv6 packet is destined to it's `:ffff` address, that it needs to
+Now each router knows that if an IPv6 packet is destined to its `:ffff` address, it needs to
 "End" the segment by inspecting the SRH. And the _SR Policy_ for `vpp0-0` is to send it first to
-`::2:ffff`, which is `vpp0-2`, which has now inspects the SRH and advances the _Segment List_.
+`::2:ffff`, which is `vpp0-2`, which now inspects the SRH and advances the _Segment List_.


 The proof is in the tcpdump pudding, and it makes me smile to see the icmp-echo packet bounce back
@@ -476,7 +476,7 @@ this bug and SRv6 encap starts to work flawlessly.
 I decide to add four tests: for {PHY, SUB} x {Encap, Decap}. On the encap side, I create a
 _SR Policy_ with BSID `a3::9999:1` which encapsulates from source `a3::` and sends to _Segment
 List_ [`a4::`, `a5::`, `a6::c7`]. I then _steer_ L2 traffic from interface `pg0` using this _BSID_. I'll
-generate a packet and want to receive it ffom `pg1` encapsulated with the correct SRH and
+generate a packet and want to receive it from `pg1` encapsulated with the correct SRH and
 destination address. On the decap side, I create an SRv6 packet and send it into `pg1`, and want
 to see it decapsulated and exit on interface `pg0`.