From 2e1bb6977240512030a39f36780da3c3f25c66aa Mon Sep 17 00:00:00 2001
From: Pim van Pelt
Date: Thu, 12 Sep 2024 15:40:37 +0200
Subject: [PATCH] A few typo fixes - h/t jeroen@

---
 content/articles/2024-09-08-sflow-1.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/content/articles/2024-09-08-sflow-1.md b/content/articles/2024-09-08-sflow-1.md
index 2f5ef7e..77c80e7 100644
--- a/content/articles/2024-09-08-sflow-1.md
+++ b/content/articles/2024-09-08-sflow-1.md
@@ -126,7 +126,7 @@ samples are sent from many threads, there will be lock contention and performanc
 
 ### sFlow Plugin: Functional
 
-I boot up the [[IPng Lab]({{< ref 2022-10-14-lab-1 >}})] and install a bunch of sFLow tools on it,
+I boot up the [[IPng Lab]({{< ref 2022-10-14-lab-1 >}})] and install a bunch of sFlow tools on it,
 make sure the `psample` kernel module is loaded. In this first test I'll take a look at tablestakes.
 I compile VPP with the sFlow plugin, and enable that plugin in `startup.conf` on each of the four
 VPP routers. For reference, the Lab looks like this:
@@ -205,7 +205,7 @@ I am amazed! The `psampletest` output shows a few packets, considering I'm askin
 100Mbit using 9000 byte jumboframes (which would be something like 1400 packets/second), I can
 expect two or three samples per second. I immediately notice a few things:
 
-***1. Network Namespae***: The Netlink sampling channel belongs to a network _namespace_. The VPP
+***1. Network Namespace***: The Netlink sampling channel belongs to a network _namespace_. The VPP
 process is running in the _default_ netns, so its PSAMPLE netlink messages will be in that
 namespace. Thus, the `psampletest` and other tools must also run in that namespace. I mention this
 because in Linux CP, often times the controlplane interfaces are created in a dedicated `dataplane` network
@@ -265,7 +265,7 @@ bridging or cross connects in the VPP dataplane, and it does not have a Linux Co
 interface, or `linux-cp` is not used at all.
 
 1. Even if it does exist and it's the "correct" ifIndex in Linux, for example if the _Linux
-Interface Pair_'s tuntap `hosf_vif_index` index is used, even then the statistics counters in the
+Interface Pair_'s tuntap `host_vif_index` index is used, even then the statistics counters in the
 Linux representation will only count packets and octets of _punted_ packets, that is to say, the
 stuff that LinuxCP has decided need to go to the Linux kernel through the TUN/TAP device. Important
 to note that east-west traffic that goes _through_ the dataplane, is never punted to Linux, and as
@@ -481,7 +481,7 @@ because of that, it'll end up consuming more packets on each subsequent iteratio
 up. The L2 path on the other hand, is quicker and therefore will have less packets waiting on
 subsequent iterations of `dpdk-input`.
 
-2. The `sfloww` plugin spends between 13.5 and 19.7 CPU cycles shoveling the packets into