From a2f10236a3925ba2921559594c74277060b4a133 Mon Sep 17 00:00:00 2001 From: Pim van Pelt Date: Mon, 5 Aug 2024 01:43:55 +0200 Subject: [PATCH] Replace all post_url with Hugo ref blocks --- content/articles/2021-02-27-network.md | 12 +++++----- content/articles/2021-03-27-coloclue-vpp.md | 2 +- content/articles/2021-05-26-amsterdam.md | 4 ++-- content/articles/2021-05-28-lille.md | 2 +- content/articles/2021-06-01-paris.md | 4 ++-- content/articles/2021-07-03-geneva.md | 6 ++--- content/articles/2021-07-26-bucketlist.md | 12 +++++----- content/articles/2021-08-07-fs-switch.md | 2 +- content/articles/2021-08-13-vpp-2.md | 4 ++-- content/articles/2021-08-15-vpp-3.md | 6 ++--- content/articles/2021-08-25-vpp-4.md | 8 +++---- content/articles/2021-08-26-fiber7-x.md | 4 ++-- content/articles/2021-09-02-vpp-5.md | 8 +++---- content/articles/2021-09-21-vpp-7.md | 14 +++++------ content/articles/2021-10-24-as8298.md | 10 ++++---- content/articles/2021-11-26-netgate-6100.md | 8 +++---- content/articles/2021-12-23-vpp-playground.md | 4 ++-- content/articles/2022-02-14-vpp-vlan-gym.md | 10 ++++---- content/articles/2022-02-21-asr9006.md | 18 +++++++-------- .../articles/2022-03-03-syslog-telegram.md | 2 +- content/articles/2022-03-27-vppcfg-1.md | 8 +++---- content/articles/2022-04-02-vppcfg-2.md | 12 +++++----- content/articles/2022-10-14-lab-1.md | 10 ++++---- content/articles/2022-11-20-mastodon-1.md | 2 +- content/articles/2022-11-24-mastodon-2.md | 2 +- content/articles/2022-11-27-mastodon-3.md | 7 +++--- content/articles/2022-12-05-oem-switch-1.md | 6 ++--- content/articles/2022-12-09-oem-switch-2.md | 14 +++++------ content/articles/2023-02-12-fitlet2.md | 10 ++++---- content/articles/2023-02-24-coloclue-vpp-2.md | 8 +++---- content/articles/2023-03-11-mpls-core.md | 13 ++++------- content/articles/2023-03-17-ipng-frontends.md | 11 ++++----- content/articles/2023-03-24-lego-dns01.md | 6 ++--- content/articles/2023-04-09-vpp-stats.md | 16 ++++++------- content/articles/2023-05-07-vpp-mpls-1.md | 8 +++---- content/articles/2023-05-17-vpp-mpls-2.md | 11 ++++----- content/articles/2023-05-21-vpp-mpls-3.md | 23 +++++++++---------- content/articles/2023-05-28-vpp-mpls-4.md | 6 ++--- content/articles/2023-08-06-pixelfed-1.md | 12 +++++----- content/articles/2023-08-27-ansible-nginx.md | 11 ++++----- .../articles/2023-10-21-vpp-ixp-gateway-1.md | 2 +- .../articles/2023-11-11-mellanox-sn2700.md | 2 +- content/articles/2023-12-17-defra0-debian.md | 13 ++++------- content/articles/2024-01-27-vpp-papi.md | 7 +++--- content/articles/2024-02-10-vpp-freebsd-1.md | 7 +++--- content/articles/2024-02-17-vpp-freebsd-2.md | 6 ++--- content/articles/2024-03-06-vpp-babel-1.md | 14 +++++------ content/articles/2024-04-06-vpp-ospf.md | 6 ++--- content/articles/2024-04-27-freeix-1.md | 6 ++--- content/articles/2024-05-17-smtp.md | 9 ++++---- content/articles/2024-05-25-nat64-1.md | 8 +++---- content/articles/2024-06-22-vpp-ospf-2.md | 9 ++++---- content/articles/2024-06-29-coloclue-ipng.md | 18 +++++++-------- content/articles/2024-07-05-r86s.md | 6 ++--- content/articles/2024-08-03-gowin.md | 4 ++-- content/services.md | 9 ++++---- 56 files changed, 221 insertions(+), 241 deletions(-) diff --git a/content/articles/2021-02-27-network.md b/content/articles/2021-02-27-network.md index b97b512..1571a61 100644 --- a/content/articles/2021-02-27-network.md +++ b/content/articles/2021-02-27-network.md @@ -51,7 +51,7 @@ all Equinix sites for IPng Networks. 
The green link (**D** to **B**) is a 10G carrier ethernet circuit between Interxion, over the light purple path (**B** to **A**) on its last mile to Albisrieden, where we built a very small colocation site, which you can read about in more detail in our -[informational post]({% post_url 2022-02-24-colo %}) - the colo is open for private +[informational post]({{< ref "2022-02-24-colo" >}}) - the colo is open for private individuals and small businesses ([contact](/s/contact/) us for details!). ### European Ring @@ -72,27 +72,27 @@ hubs. a first 10G circuit, and from Interxion's datacenter at Glattbrugg (Zurich) with a second 10G circuit, this is our first hop into the world. Here, we connect to [DE-CIX](https://de-cix.net/) from Equinix FR5 at the Kleyerstrasse. -More details in our post [IPng Arrives in Frankfurt]({% post_url 2021-05-17-frankfurt %}). +More details in our post [IPng Arrives in Frankfurt]({{< ref "2021-05-17-frankfurt" >}}). ***Amsterdam*** - The Amsterdam Science Park is where European Internet was born. [NIKHEF](https://www.nikhef.nl/) is where we rent rackspace that connects with a 10G circuit to Frankfurt, and a 10G circuit onwards towards Lille. We connect to [Speed-IX](https://speed-ix.net/), [LSIX](https://lsix.net/), [NL-IX](https://nl-ix.net), and an exchange point we help run called [FrysIX](https://www.frys-ix.net/). -More details in our post [IPng Arrives in Amsterdam]({% post_url 2021-05-26-amsterdam %}). +More details in our post [IPng Arrives in Amsterdam]({{< ref "2021-05-26-amsterdam" >}}). ***Lille*** - [IP-Max](https://ip-max.net/) does lots of business in this region, with presence in both local datacenters here, one in Lille and one in Anzin. IPng has a point of presence here too, at the [CIV1](https://www.civ.fr/) facility, with a northbound 10G circuit to Amsterdam, and a southbound 10G circuit to Paris. Here, we connect to [LillIX](https://lillix.fr/). -More details in our post [IPng Arrives in Lille]({% post_url 2021-05-28-lille %}). +More details in our post [IPng Arrives in Lille]({{< ref "2021-05-28-lille" >}}). ***Paris*** - Where two large facilities are placed back-to-back in the middle of the city, originally Telehouse TH2, with a new facility at Léon Frot, where we pick up a 10G circuit from Lille and further on the ring with a 10G circuit to Geneva. Here, we connect to [FranceIX](https://franceix.net). -More details in our post [IPng Arrives in Paris]({% post_url 2021-06-01-paris %}). +More details in our post [IPng Arrives in Paris]({{< ref "2021-06-01-paris" >}}). ***Geneva*** - The home-base of [IP-Max](https://ip-max.net) is where we close our ring. From Paris, IP-Max has two redundant paths back to Switzerland, the first @@ -101,7 +101,7 @@ then into Geneva. Here, at [SafeHost](https://safehost.com/) in Plan les Ouates, is where we have our fourth Swiss point of presence, with a connection to our very own [Free-IX](https://free-ix.net/) and a 10G circuit to Interxion at Glattbrugg (Zurich), and of course to Paris. -More details in our post [IPng Arrives in Geneva]({% post_url 2021-07-03-geneva %}). +More details in our post [IPng Arrives in Geneva]({{< ref "2021-07-03-geneva" >}}). 
## Logical diff --git a/content/articles/2021-03-27-coloclue-vpp.md b/content/articles/2021-03-27-coloclue-vpp.md index 2db8299..fb18d83 100644 --- a/content/articles/2021-03-27-coloclue-vpp.md +++ b/content/articles/2021-03-27-coloclue-vpp.md @@ -8,7 +8,7 @@ title: 'Case Study: VPP at Coloclue, part 1' ## Introduction -Coloclue AS8283 operates several Linux routers running Bird. Over the years, the performance of their previous hardware platform (Dell R610) has deteriorated, and they were up for renewal. At the same time, network latency/jitter has been very high, and variability may be caused by the Linux router hardware, their used software, the inter-datacenter links, or any combination of these. The routers were replaced with relatively modern hardware. In a [previous post]({% post_url 2021-02-27-coloclue-loadtest %}), I looked into the links between the datacenters, and demonstrated that they are performing as expected (1.41Mpps of 802.1q ethernet frames in both directions). That leaves the software. This post explores a replacement of the Linux kernel routers by a userspace process running VPP, which is an application built on DPDK. +Coloclue AS8283 operates several Linux routers running Bird. Over the years, the performance of their previous hardware platform (Dell R610) has deteriorated, and they were up for renewal. At the same time, network latency/jitter has been very high, and variability may be caused by the Linux router hardware, their used software, the inter-datacenter links, or any combination of these. The routers were replaced with relatively modern hardware. In a [previous post]({{< ref "2021-02-27-coloclue-loadtest" >}}), I looked into the links between the datacenters, and demonstrated that they are performing as expected (1.41Mpps of 802.1q ethernet frames in both directions). That leaves the software. This post explores a replacement of the Linux kernel routers by a userspace process running VPP, which is an application built on DPDK. ### Executive Summary diff --git a/content/articles/2021-05-26-amsterdam.md b/content/articles/2021-05-26-amsterdam.md index 429d65e..b431eff 100644 --- a/content/articles/2021-05-26-amsterdam.md +++ b/content/articles/2021-05-26-amsterdam.md @@ -68,7 +68,7 @@ for our trip. Double yaay! Because this is a completely new site for [IP-Max](https://ip-max.net) as well as [IPng](https://ipng.ch/), we'll have to do a bit more work. And this suites -us just fine, because after driving through Frankfurt (see my [previous post]({% post_url 2021-05-17-frankfurt %})), +us just fine, because after driving through Frankfurt (see my [previous post]({{< ref "2021-05-17-frankfurt" >}})), to the Netherlands, we have to stay in quarantine for five days (or, ten if we happen to fail our PCR test after five days!), which gives us plenty of time to stage and configure what will be our Cisco **er01.ams01.ip-max.net** and @@ -166,7 +166,7 @@ SFP+ DAC!). So that leaves either a faulty Cisco or a faulty Supermicro, neither of which are appealing. On day two, after breakfast, we had to do a few chores first (like the claim -for the VAT for imports, see our [previous post]({% post_url 2021-05-17-frankfurt %}), +for the VAT for imports, see our [previous post]({{< ref "2021-05-17-frankfurt" >}}), and as well get a corona PCR test for the way to France (which was absolutely horrible, by the way, I *still* feel my nose which was violated). 
So we hit NIKHEF at around 4pm to finish the job and take care of a few small favors for diff --git a/content/articles/2021-05-28-lille.md b/content/articles/2021-05-28-lille.md index bd293ba..e5280e0 100644 --- a/content/articles/2021-05-28-lille.md +++ b/content/articles/2021-05-28-lille.md @@ -15,7 +15,7 @@ in the Netherlands, until the stars aligned ... {{< image width="300px" float="right" src="/assets/network/lille-civ1.png" alt="Lille CIV1" >}} -After our adventure in [Amsterdam]({% post_url 2021-05-26-amsterdam %}), and +After our adventure in [Amsterdam]({{< ref "2021-05-26-amsterdam" >}}), and after Fred and I both got negative PCR test results, we made our way down to Lille, France. There are two datacenters there where IP-Max has a presence, and they are very innovative ones. There's a specific trick with a block of diff --git a/content/articles/2021-06-01-paris.md b/content/articles/2021-06-01-paris.md index 021cb08..0e485f8 100644 --- a/content/articles/2021-06-01-paris.md +++ b/content/articles/2021-06-01-paris.md @@ -44,7 +44,7 @@ ports on the ASR9010, one towards Lille and one towards Zürich (which will Geneva later on). I'm getting the hang of this VLL stuff after our adventures, previously in -[Lille]({% post_url 2021-05-28-lille %}) at CIV, which is a lazy 4.8ms away +[Lille]({{< ref "2021-05-28-lille" >}}) at CIV, which is a lazy 4.8ms away from this place (Fred already speaks of making that more like 3.7ms with a _small call_ to his buddy Laurent). So I went about my business, racking first the WiFi enabled **console.par.ipng.nl**, connecting to WiFi with it, and @@ -72,7 +72,7 @@ claim I am on _the FLAP_ because I have a router in **L**ille. So there's that : Fred has ordered my FranceIX connection this afternoon, delivered from a 20Gig LAG on **er02.par02.ip-max.net** and directly into my router there. In -the mean time, I will be busy configuring my DE-CIX port from [a previous post]({% post_url 2021-05-17-frankfurt %}). +the mean time, I will be busy configuring my DE-CIX port from [a previous post]({{< ref "2021-05-17-frankfurt" >}}). The console server here (a standard issue APU3 with 802.11ac WiFi broadcasting _AS50869 PAR_ with password _IPngGuest_, you're welcome), connects to the diff --git a/content/articles/2021-07-03-geneva.md b/content/articles/2021-07-03-geneva.md index f7f3f96..9fbff7b 100644 --- a/content/articles/2021-07-03-geneva.md +++ b/content/articles/2021-07-03-geneva.md @@ -15,9 +15,9 @@ in the Netherlands, until the stars aligned ... {{< image width="400px" float="left" src="/assets/network/qdr.png" alt="Quai du Rhône" >}} -After our adventure in [Frankfurt]({% post_url 2021-05-17-frankfurt %}), -[Amsterdam]({% post_url 2021-05-26-amsterdam %}), [Lille]({% post_url 2021-05-28-lille %}), -and [Paris]({% post_url 2021-06-01-paris %}) came to an end, I still had a few +After our adventure in [Frankfurt]({{< ref "2021-05-17-frankfurt" >}}), +[Amsterdam]({{< ref "2021-05-26-amsterdam" >}}), [Lille]({{< ref "2021-05-28-lille" >}}), +and [Paris]({{< ref "2021-06-01-paris" >}}) came to an end, I still had a few loose ends to tie up. In particular, in Lille I had dropped an old Dell R610 while waiting for new Supermicros to be delivered. 
There is benefit to having one standard footprint setup, in my case an PCEngines `APU2`, Supermicro diff --git a/content/articles/2021-07-26-bucketlist.md b/content/articles/2021-07-26-bucketlist.md index 6099ec7..ed57b74 100644 --- a/content/articles/2021-07-26-bucketlist.md +++ b/content/articles/2021-07-26-bucketlist.md @@ -65,7 +65,7 @@ don't think any global downtime for my internet presence has ever occured. It's not a coincidence that even Google for the longest time used my website at [SixXS](https://sixxs.net/) for their own monitoring, now _that_ is cool. Although Jeroen and I did decide to retire the SixXS project (see my -[Sunset]({% post_url 2017-03-01-sixxs-sunset %}) article on why), the website +[Sunset]({{< ref "2017-03-01-sixxs-sunset" >}}) article on why), the website is still up and served off of three distinct networks, because I have to stay true to the SRE life. @@ -129,11 +129,11 @@ So, in a really epic roadtrip full of nerd, Fred and I went into total geek-mode as we traveled to several European cities to deploy AS50869 on a european ring. I wrote about my experience extensively in these blog posts: -* [Frankfurt]({% post_url 2021-05-17-frankfurt %}): May 17th 2021. -* [Amsterdam]({% post_url 2021-05-26-amsterdam %}): May 26th 2021. -* [Lille]({% post_url 2021-05-28-lille %}): May 28th 2021. -* [Paris]({% post_url 2021-06-01-paris %}): June 1st 2021. -* [Geneva]({% post_url 2021-07-03-geneva %}): July 3rd 2021. +* [Frankfurt]({{< ref "2021-05-17-frankfurt" >}}): May 17th 2021. +* [Amsterdam]({{< ref "2021-05-26-amsterdam" >}}): May 26th 2021. +* [Lille]({{< ref "2021-05-28-lille" >}}): May 28th 2021. +* [Paris]({{< ref "2021-06-01-paris" >}}): June 1st 2021. +* [Geneva]({{< ref "2021-07-03-geneva" >}}): July 3rd 2021. I think we can now say that I'm _peering on the FLAP_. It's not that this AS50869 carries that much traffic, but it's a very welcome relief of daily worklife to be diff --git a/content/articles/2021-08-07-fs-switch.md b/content/articles/2021-08-07-fs-switch.md index 21387ce..013abef 100644 --- a/content/articles/2021-08-07-fs-switch.md +++ b/content/articles/2021-08-07-fs-switch.md @@ -216,7 +216,7 @@ For my loadtests, I used Cisco's T-Rex ([ref](https://trex-tgn.cisco.com/)) in s with a custom Python controller that ramps up and down traffic from the loadtester to the device under test (DUT) by sending traffic out `port0` to the DUT, and expecting that traffic to be presented back out from the DUT to its `port1`, and vice versa (out from `port1` -> DUT -> back -in on `port0`). You can read a bit more about my setup in my [Loadtesting at Coloclue]({% post_url 2021-02-27-coloclue-loadtest %}) +in on `port0`). You can read a bit more about my setup in my [Loadtesting at Coloclue]({{< ref "2021-02-27-coloclue-loadtest" >}}) post. To stress test the switch, several pairs at 10G and 25G were used, and since the specs boast diff --git a/content/articles/2021-08-13-vpp-2.md b/content/articles/2021-08-13-vpp-2.md index 565eb3d..708a93e 100644 --- a/content/articles/2021-08-13-vpp-2.md +++ b/content/articles/2021-08-13-vpp-2.md @@ -29,7 +29,7 @@ to interfaces in VPP, into their Linux CP counterparts. ## My test setup -I'm using the same setup from the [previous post]({% post_url 2021-08-12-vpp-1 %}). The goal of this +I'm using the same setup from the [previous post]({{< ref "2021-08-12-vpp-1" >}}). 
The goal of this post is to show what code needed to be written and which changes needed to be made to the plugin, in order to propagate changes to VPP interfaces to the Linux TAP devices. @@ -247,7 +247,7 @@ pim@hippo:~/src/lcpng$ fping6 2001:db8:0:1::2 2001:db8:0:2::2 \ In case you were wondering: my previous post ended in the same huzzah moment. It did. The difference is that now the VPP configuration is _much shorter_! Comparing -the Appendix from this post with my [first post]({% post_url 2021-08-12-vpp-1 %}), after +the Appendix from this post with my [first post]({{< ref "2021-08-12-vpp-1" >}}), after all of this work I no longer have to manually copy the configuration (like link states, MTU changes, IP addresses) from VPP into Linux, instead the plugin does all of this work for me, and I can configure both sides entirely with `vppctl` commands! diff --git a/content/articles/2021-08-15-vpp-3.md b/content/articles/2021-08-15-vpp-3.md index 76cfd3b..ef67178 100644 --- a/content/articles/2021-08-15-vpp-3.md +++ b/content/articles/2021-08-15-vpp-3.md @@ -28,7 +28,7 @@ configured. ## My test setup -I've extended the setup from the [first post]({% post_url 2021-08-12-vpp-1 %}). The base +I've extended the setup from the [first post]({{< ref "2021-08-12-vpp-1" >}}). The base configuration for the `enp66s0f0` interface remains exactly the same, but I've also added an LACP `bond0` interface, which also has the whole kitten kaboodle of sub-interfaces defined, see below in the Appendix for details, but here's the table again for reference: @@ -51,7 +51,7 @@ made to the plugin, in order to automatically create and delete sub-interfaces. ### Startingpoint -Based on the state of the plugin after the [second post]({% post_url 2021-08-13-vpp-2 %}), +Based on the state of the plugin after the [second post]({{< ref "2021-08-13-vpp-2" >}}), operators must create _LIP_ instances for interfaces as well as each sub-interface explicitly: @@ -124,7 +124,7 @@ The code for the configuration toggle is in this The original plugin code (that ships with VPP 21.06) made a start by defining a function called `lcp_itf_phy_add()` and registering an intent with `VNET_SW_INTERFACE_ADD_DEL_FUNCTION()`. I've -moved the function to the source file I created in [Part 2]({% post_url 2021-08-13-vpp-2 %}) +moved the function to the source file I created in [Part 2]({{< ref "2021-08-13-vpp-2" >}}) (called `lcp_if_sync.c`), specifically to handle interface syncing, and gave it a name that matches the VPP callback, so `lcp_itf_interface_add_del()`. diff --git a/content/articles/2021-08-25-vpp-4.md b/content/articles/2021-08-25-vpp-4.md index 86d664d..d92027b 100644 --- a/content/articles/2021-08-25-vpp-4.md +++ b/content/articles/2021-08-25-vpp-4.md @@ -28,7 +28,7 @@ allowing changes to interfaces made in Linux to make their way back into VPP! ## My test setup -I'm keeping the setup from the [third post]({% post_url 2021-08-15-vpp-3 %}). A Linux machine has an +I'm keeping the setup from the [third post]({{< ref "2021-08-15-vpp-3" >}}). A Linux machine has an interface `enp66s0f0` which has 4 sub-interfaces (one dot1q tagged, one q-in-q, one dot1ad tagged, and one q-in-ad), giving me five flavors in total. 
Then, I created an LACP `bond0` interface, which also has the whole kit and caboodle of sub-interfaces defined, see below in the Appendix for details, @@ -55,7 +55,7 @@ I implement the Linux-to-VPP synchronization using, _quelle surprise_, Netlink m ### Startingpoint -Based on the state of the plugin after the [third post]({% post_url 2021-08-15-vpp-3 %}), +Based on the state of the plugin after the [third post]({{< ref "2021-08-15-vpp-3" >}}), operators can enable `lcp-sync` (which copies changes made in VPP into their Linux counterpart) and `lcp-auto-subint` (which extends sub-interface creation in VPP to automatically create a Linux Interface Pair, or _LIP_, and its companion Linux network interface): @@ -342,7 +342,7 @@ implementation starts approaching 'vanilla' Linux user experience! Here's [a screencast](https://asciinema.org/a/432243) showing me playing around a bit, demonstrating that synchronization works pretty well in both directions, a huge improvement from the -[previous screencast](https://asciinema.org/a/430411) in my [second post]({% post_url 2021-08-13-vpp-2 %}), +[previous screencast](https://asciinema.org/a/430411) in my [second post]({{< ref "2021-08-13-vpp-2" >}}), which was only two weeks ago: @@ -381,7 +381,7 @@ of help along the way from Neale Ranns and Jon Loeliger. I'd like to thank them #### Ubuntu config -This configuration has been the exact same ever since [my first post]({% post_url 2021-08-12-vpp-1 %}): +This configuration has been the exact same ever since [my first post]({{< ref "2021-08-12-vpp-1" >}}): ``` # Untagged interface ip addr add 10.0.1.2/30 dev enp66s0f0 diff --git a/content/articles/2021-08-26-fiber7-x.md b/content/articles/2021-08-26-fiber7-x.md index 6c60702..4fcb0f2 100644 --- a/content/articles/2021-08-26-fiber7-x.md +++ b/content/articles/2021-08-26-fiber7-x.md @@ -12,7 +12,7 @@ a bit different back in 2016. There was a switch provided by Litecom in which ports were resold OEM to upstream ISPs, and Litecom would provide the L2 backhaul to a central place to hand off the customers to the ISPs, in my case Easyzone. In Oct'16, Fredy asked me if I could do a test of -Fiber7-on-Litecom, which I did and reported on in a [blog post]({% post_url 2016-10-07-fiber7-litexchange %}). +Fiber7-on-Litecom, which I did and reported on in a [blog post]({{< ref "2016-10-07-fiber7-litexchange" >}}). Some time early 2017, Init7 deployed a POP in Dietlikon (790BRE) and then magically another one in Brüttisellen (1790BRE). It's a funny story @@ -78,7 +78,7 @@ and aggregation switches are C9500-32C which take 32x QSFP+ (40/100Gbit). As a subscriber, we all got a courtesy headsup on the date of 1790BRE's upgrade. It was [scheduled](https://as13030.net/status/?ticket=4238550) for Thursday Aug 26th starting at midnight. As I've written about before (for example at the bottom of my -[Bucketlist post]({% post_url 2021-07-26-bucketlist %})), I really enjoy the immediate +[Bucketlist post]({{< ref "2021-07-26-bucketlist" >}})), I really enjoy the immediate gratification of physical labor in a datacenter. Most of my projects at work are on the quarters-to-years timeframe, and being able to do a thing and see the result of that thing ~immmediately, is a huge boost for me. diff --git a/content/articles/2021-09-02-vpp-5.md b/content/articles/2021-09-02-vpp-5.md index 71b5ff0..913eeaa 100644 --- a/content/articles/2021-09-02-vpp-5.md +++ b/content/articles/2021-09-02-vpp-5.md @@ -30,11 +30,11 @@ prefixes and 870K IPv4 prefixes. 
## My test setup The goal of this post is to show what code needed to be written to extend the **Netlink Listener** -plugin I wrote in the [fourth post]({% post_url 2021-08-25-vpp-4 %}), so that it can consume +plugin I wrote in the [fourth post]({{< ref "2021-08-25-vpp-4" >}}), so that it can consume route additions/deletions, a thing that is common in dynamic routing protocols such as OSPF and BGP. -The setup from my [third post]({% post_url 2021-08-15-vpp-3 %}) is still there, but it's no longer +The setup from my [third post]({{< ref "2021-08-15-vpp-3" >}}) is still there, but it's no longer a focal point for me. I use it (the regular interface + subints and the BondEthernet + subints) just to ensure my new code doesn't have a regression. @@ -54,7 +54,7 @@ The test setup offers me the ability to consume OSPF, OSPFv3 and BGP. ### Startingpoint -Based on the state of the plugin after the [fourth post]({% post_url 2021-08-25-vpp-4 %}), +Based on the state of the plugin after the [fourth post]({{< ref "2021-08-25-vpp-4" >}}), operators can create VLANs (including .1q, .1ad, QinQ and QinAD subinterfaces) directly in Linux. They can change link attributes (like set admin state 'up' or 'down', or change the MTU on a link), they can add/remove IP addresses, and the system will add/remove IPv4 @@ -163,7 +163,7 @@ compare it to issuing `ip link` and acting on additions/removals as they occur. called _direct_, generates directly connected routes for interfaces that have IPv4 or IPv6 addresses configured. It turns out that if I add `194.1.163.86/27` as an IPv4 address on an interface, it'll generate several Netlink messages: one for the `RTM_NEWADDR` which -I discussed in my [fourth post]({% post_url 2021-08-25-vpp-4 %}), and also a `RTM_NEWROUTE` +I discussed in my [fourth post]({{< ref "2021-08-25-vpp-4" >}}), and also a `RTM_NEWROUTE` for the connected `194.1.163.64/27` in this case. It helps the kernel understand that if we want to send a packet to a host in that prefix, we should not send it to the default gateway, but rather to a nexthop of the device. Those are intermittently called `direct` diff --git a/content/articles/2021-09-21-vpp-7.md b/content/articles/2021-09-21-vpp-7.md index 23d5aae..f25c183 100644 --- a/content/articles/2021-09-21-vpp-7.md +++ b/content/articles/2021-09-21-vpp-7.md @@ -25,7 +25,7 @@ like [FRR](https://frrouting.org/) or [Bird](https://bird.network.cz/) on top of ## Running in Production In the first articles from this series, I showed the code that needed to be written to implement the -**Control Plane** and **Netlink Listener** plugins. In the [penultimate post]({% post_url 2021-09-10-vpp-6 %}), +**Control Plane** and **Netlink Listener** plugins. In the [penultimate post]({{< ref "2021-09-10-vpp-6" >}}), I wrote an SNMP Agentx that exposes the VPP interface data to, say, LibreNMS. But what are the things one might do to deploy a router end-to-end? That is the topic of this post. @@ -33,7 +33,7 @@ But what are the things one might do to deploy a router end-to-end? That is the ### A note on hardware Before I get into the details, here's some specifications on the router hardware that I use at -IPng Networks (AS50869). See more about our network [here]({% post_url 2021-02-27-network %}). +IPng Networks (AS50869). See more about our network [here]({{< ref "2021-02-27-network" >}}). The chassis is a Supermicro SYS-5018D-FN8T, which includes: * Full IPMI support (power, serial-over-lan and kvm-over-ip with HTML5), on a dedicated network port. 
@@ -318,7 +318,7 @@ See all interfaces? Great. Moving on :) I set a VPP interface configuration (which it'll read and apply any time it starts or restarts, thereby making the configuration persistent across crashes and reboots). Using the `exec` stanza described above, the contents now become, taking as an example, our first router in -Lille, France [[details]({% post_url 2021-05-28-lille %})], configured as so: +Lille, France [[details]({{< ref "2021-05-28-lille" >}})], configured as so: ``` cat << EOF | sudo tee /etc/vpp/bootstrap.vpp @@ -351,9 +351,9 @@ EOF This base-line configuration will: * Ensure all host interfaces are created in namespace `dataplane` which we created earlier * Turn on `lcp-sync`, which copies forward any configuration from VPP into Linux (see - [VPP Part 2]({% post_url 2021-08-13-vpp-2 %})) + [VPP Part 2]({{< ref "2021-08-13-vpp-2" >}})) * Turn on `lcp-auto-subint`, which automatically creates _LIPs_ (Linux interface pairs) - for all sub-interfaces (see [VPP Part 3]({% post_url 2021-08-15-vpp-3 %})) + for all sub-interfaces (see [VPP Part 3]({{< ref "2021-08-15-vpp-3" >}})) * Create a loopback interface, give it IPv4/IPv6 addresses, and expose it to Linux * Create one _LIP_ interface for four of the Gigabit and all 6x TenGigabit interfaces * Leave 2 interfaces (`GigabitEthernet7/0/0` and `GigabitEthernet8/0/0`) for later @@ -423,7 +423,7 @@ for all of the connected interfaces, while Linux has already added those. Theref avoid the source `RTS_DEVICE`, which means "connected routes", but otherwise offer all routes to the kernel, which in turn propagates these as Netlink messages which are consumed by VPP. A detailed discussion of Bird's configuration semantics is in my -[VPP Part 5]({% post_url 2021-09-02-vpp-5 %}) post. +[VPP Part 5]({{< ref "2021-09-02-vpp-5" >}}) post. ### Configuring SSH @@ -477,7 +477,7 @@ important to note that Linux will only see those packets that were _punted_ by V is to say, those packets which were destined to any IP address configured on the control plane. Any traffic going _through_ VPP will never be seen by Linux! So, I'll have to be clever and count this traffic by polling VPP instead. This was the topic of my previous -[VPP Part 6]({% post_url 2021-09-10-vpp-6 %}) about the SNMP Agent. All of that code +[VPP Part 6]({{< ref "2021-09-10-vpp-6" >}}) about the SNMP Agent. All of that code was released to [Github](https://github.com/pimvanpelt/vpp-snmp-agent), notably there's a hint there for an `snmpd-dataplane.service` and a `vpp-snmp-agent.service`, including the compiled binary that reads from VPP and feeds this to SNMP. diff --git a/content/articles/2021-10-24-as8298.md b/content/articles/2021-10-24-as8298.md index c1a7c88..f499000 100644 --- a/content/articles/2021-10-24-as8298.md +++ b/content/articles/2021-10-24-as8298.md @@ -7,9 +7,9 @@ title: IPng acquires AS8298 In January of 2003, my buddy Jeroen announced a project called the [Ghost Route Hunters](/assets/as8298/RIPE44-IPv6-GRH.pdf), after the industry had been plagued for a few years with anomalies in the DFZ - routes would show up with phantom BGP paths, unable to be traced down to a source or faulty implementation. Jeroen presented his [findings](/assets/as8298/RIPE46-IPv6-Routing-Table-Anomalies.pdf) at RIPE-46 and for years after this, the industry used the [SixXS GRH](https://www.sixxs.net/tools/grh/how/) as a distributed looking glass. 
At the time, one of SixXS's point of presence providers kindly lent the project AS8298 to build this looking glass and underlying infrastructure. -After running SixXS for 16 years, Jeroen and I decided to [Sunset]({%post_url 2017-03-01-sixxs-sunset %}) it, which meant that in June of 2017, the Ghost Route Hunter project came to an end as well, and as we tore down the infrastructure, AS8298 became dormant. +After running SixXS for 16 years, Jeroen and I decided to [Sunset]({{< ref "2017-03-01-sixxs-sunset" >}}) it, which meant that in June of 2017, the Ghost Route Hunter project came to an end as well, and as we tore down the infrastructure, AS8298 became dormant. -Then in August of 2021, I was doing a little bit of cleaning on the IPng Networks serving infrastructure, and came across some old mail from RIPE NCC about that AS number. And while IPng Networks is running [just fine]({%post_url 2021-02-27-network %}) on AS50869 today, it would be just that little bit cooler if it were to run on AS8298. So, I embarked on a journey to move a running ISP into a new AS number, which sounds like fun! This post describes the situation going in to this renumbering project, and there will be another post, likely in January 2022, that describes the retrospective (this future post may be either celebratory, or a huge postmortem, to be determined). +Then in August of 2021, I was doing a little bit of cleaning on the IPng Networks serving infrastructure, and came across some old mail from RIPE NCC about that AS number. And while IPng Networks is running [just fine]({{< ref "2021-02-27-network" >}}) on AS50869 today, it would be just that little bit cooler if it were to run on AS8298. So, I embarked on a journey to move a running ISP into a new AS number, which sounds like fun! This post describes the situation going in to this renumbering project, and there will be another post, likely in January 2022, that describes the retrospective (this future post may be either celebratory, or a huge postmortem, to be determined). ## The Plan @@ -31,8 +31,8 @@ With the permission of the previous holder, and with the help of the previous sp The autonomous system of IPng Networks spans two main parts. Firstly, in Zurich IPng Networks operates four sites and six routers: * Two in a private colocation site at Daedalean (**A**) in Albisrieden called `ddln0` and `ddln1`, they are running [DANOS](https://danosproject.org/) * Two at our offices in Brüttisellen (**C**), called `chbtl0` and `chbtl1`, they are running [Debian](https://debian.org/) -* One at Interxion ZUR1 datacenter in Glattbrugg (**D**), called `chgtg0`, running [VPP]({%post_url 2021-09-21-vpp-7 %}), connecting to a public internet exchange CHIX-CH and taking transit from IP-Max and Openfactory. -* One at NTT's datacenter in Rümlang (**E**), called `chrma0`, also running [VPP]({%post_url 2021-09-21-vpp-7 %}), connecting to a public internet exchange SwissIX and taking transit from IP-Max and Meerfarbig. +* One at Interxion ZUR1 datacenter in Glattbrugg (**D**), called `chgtg0`, running [VPP]({{< ref "2021-09-21-vpp-7" >}}), connecting to a public internet exchange CHIX-CH and taking transit from IP-Max and Openfactory. +* One at NTT's datacenter in Rümlang (**E**), called `chrma0`, also running [VPP]({{< ref "2021-09-21-vpp-7" >}}), connecting to a public internet exchange SwissIX and taking transit from IP-Max and Meerfarbig. NOTE: You can read a lot about my work on VPP in a series of [VPP articles](/s/articles/), please take a look! 
@@ -40,7 +40,7 @@ There's a few downstream IP Transit networks and lots of local connected network {{< image width="300px" float="right" src="/assets/network/european-ring.png" alt="European Ring" >}} -That ring, then, consists of five additional sites and five routers, all running [VPP]({%post_url 2021-09-21-vpp-7 %}): +That ring, then, consists of five additional sites and five routers, all running [VPP]({{< ref "2021-09-21-vpp-7" >}}): * Frankfurt: `defra0`, connecting to four DE-CIX exchangepoints in Frankfurt itself directly, and remotely to Munich, Düsseldorf and Hamburg * Amsterdam: `nlams0`, connecting to NL-IX, SpeedIX, FrysIX (our favorite!), and LSIX; we also pick up two transit providers (A2B and Coloclue). * Lille: `frggh0`, connecting to the northern france exchange called LillIX diff --git a/content/articles/2021-11-26-netgate-6100.md b/content/articles/2021-11-26-netgate-6100.md index 021a0a2..939ee4b 100644 --- a/content/articles/2021-11-26-netgate-6100.md +++ b/content/articles/2021-11-26-netgate-6100.md @@ -7,14 +7,14 @@ title: 'Review: Netgate 6100' * Reviewed: Jim Thompson <[jim@netgate.com](mailto:jim@netgate.com)> * Status: Draft - Review - **Approved** -A few weeks ago, Jim Thompson from Netgate stumbled across my [APU6 Post]({% post_url 2021-07-19-pcengines-apu6 %}) +A few weeks ago, Jim Thompson from Netgate stumbled across my [APU6 Post]({{< ref "2021-07-19-pcengines-apu6" >}}) and introduced me to their new desktop router/firewall the Netgate 6100. It currently ships with [pfSense Plus](https://www.netgate.com/pfsense-plus-software), but he mentioned that it's designed as well to run their [TNSR](https://www.netgate.com/tnsr) software, considering the device ships with 2x 1GbE SFP/RJ45 combo, 2x 10GbE SFP+, and 4x 2.5GbE RJ45 ports, and all network interfaces are Intel / DPDK capable chips. He asked me if I was willing to take it around the block with VPP, which of course I'd be happy to do, and here are my findings. The TNSR image isn't yet -public for this device, but that's not a problem because [AS8298 runs VPP]({% post_url 2021-09-10-vpp-6 %}), +public for this device, but that's not a problem because [AS8298 runs VPP]({{< ref "2021-09-10-vpp-6" >}}), so I'll just go ahead and install it myself ... # Executive Summary @@ -62,7 +62,7 @@ I'll have to deface this little guy, and reinstall it with Linux. My game plan i 1. Based on the shipped pfSense 21.05 (FreeBSD 12.2), do all the loadtests 1. Reinstall the machine with Linux (Ubuntu 20.04.3), do all the loadtests -1. Install VPP using my own [HOWTO]({% post_url 2021-09-21-vpp-7 %}), and do all the loadtests +1. Install VPP using my own [HOWTO]({{< ref "2021-09-21-vpp-7" >}}), and do all the loadtests This allows for, I think, a pretty sweet comparison between FreeBSD, Linux, and DPDK/VPP. Now, on to a description on the defacing, err, reinstall process on this Netgate 6100 machine, as it was not as easy @@ -160,7 +160,7 @@ duration. If at any time the loadtester fails to see the traffic it's emitting r second port, it flags the DUT as saturated; and this is noted as the maximum bits/second and/or packets/second. -Since my last loadtesting [post]({% post_url 2021-07-19-pcengines-apu6 %}), I've learned a lot +Since my last loadtesting [post]({{< ref "2021-07-19-pcengines-apu6" >}}), I've learned a lot more about packet forwarding and how to make it easier or harder on the router. Let me go into a few more details about the various loadtests that I've done here. 
diff --git a/content/articles/2021-12-23-vpp-playground.md b/content/articles/2021-12-23-vpp-playground.md index 037595c..9a41c4e 100644 --- a/content/articles/2021-12-23-vpp-playground.md +++ b/content/articles/2021-12-23-vpp-playground.md @@ -62,7 +62,7 @@ plugins: I've published the code on [Github](https://github.com/pimvanpelt/lcpng/) and I am targeting a release in upstream VPP, hoping to make the upcoming 22.02 release in February 2022. I have a lot of ground to -cover, but I will note that the plugin has been running in production in [AS8298]({% post_url 2021-02-27-network %}) +cover, but I will note that the plugin has been running in production in [AS8298]({{< ref "2021-02-27-network" >}}) since Sep'21 and no crashes related to LinuxCP have been observed. To help tinkerers, this article describes a KVM disk image in _qcow2_ format, which will boot a vanilla @@ -226,7 +226,7 @@ ipng@vpp-proto:~$ sudo dpkg -i ~/packages/*.deb ipng@vpp-proto:~$ sudo adduser `id -un` vpp ``` -I'll configure 2GB of hugepages and 64MB of netlink buffer size - see my [VPP #7]({% post_url 2021-09-21-vpp-7 %}) +I'll configure 2GB of hugepages and 64MB of netlink buffer size - see my [VPP #7]({{< ref "2021-09-21-vpp-7" >}}) post for more details and lots of background information: ``` diff --git a/content/articles/2022-02-14-vpp-vlan-gym.md b/content/articles/2022-02-14-vpp-vlan-gym.md index 1cdeb7d..c33497d 100644 --- a/content/articles/2022-02-14-vpp-vlan-gym.md +++ b/content/articles/2022-02-14-vpp-vlan-gym.md @@ -20,7 +20,7 @@ can be shared between VPP and the Linux kernel in a clever way, so running softw up and running, VPP has so much more to offer - many interesting L2 and L3 services that you'd expect in commercial (and very pricy) routers like Cisco ASR are well within reach. -When Fred and I were in Paris [[report]({%post_url 2021-06-01-paris %})], I got stuck trying to +When Fred and I were in Paris [[report]({{< ref "2021-06-01-paris" >}})], I got stuck trying to configure an Ethernet over MPLS circuit for IPng from Paris to Zurich. Fred took a look for me and quickly determined "Ah, you forgot to do the VLAN gymnastics". I found it a fun way to describe the solution to my problem back then, and come to think of it: the router really can be configured @@ -109,7 +109,7 @@ vpp# set interface l2 bridge BondEthernet0 10 And if I want to add an IP address (creating the equivalent of a routable _VLAN Interface_), I create what is called a Bridge Virtual Interface or _BVI_, add that interface to the bridge domain, and optionally -expose it in Linux with the [LinuxCP]({%post_url 2021-09-21-vpp-7%}) plugin: +expose it in Linux with the [LinuxCP]({{< ref "2021-09-21-vpp-7" >}}) plugin: ``` vpp# bvi create instance 10 mac 02:fe:4b:4c:22:8f @@ -143,7 +143,7 @@ of what I might be able to configure on an L2 switch. ### L2 CrossConnect I thought it'd be useful to point out another powerful concept, which made an appearance in my previous -post about [Virtual Leased Lines]({%post_url 2022-01-12-vpp-l2%}). If all I want to do is connect two +post about [Virtual Leased Lines]({{< ref "2022-01-12-vpp-l2" >}}). If all I want to do is connect two interfaces together, there won't be a need for learning, L2 FIB, and so on. It is computationally much simpler to just take any frame received on interface A and transmit it out on interface B, unmodified. 
This is known in VPP as a layer2 crossconnect, and can be configured like so: @@ -318,12 +318,12 @@ The four concepts discussed here can be combined in countless interesting ways: * Ensure that VLAN tags are popped and pushed consistently on tagged sub-interfaces The practical conclusion is that VPP can provide fully transparent, dot1q and jumboframe enabled -virtual leased lines (see my previous post on [VLL performance]({%post_url 2022-01-12-vpp-l2 %})), +virtual leased lines (see my previous post on [VLL performance]({{< ref "2022-01-12-vpp-l2" >}})), including using regular breakout switches to greatly increase the total port count for customers. I'll leave you with a working example of an L2VPN between a breakout switch behind **nlams0.ipng.ch** in Amsterdam and a remote VPP router in Zurich called **ddln0.ipng.ch**. Take the following -[S5860-20SQ]({%post_url 2021-08-07-fs-switch %}) switch, which connects to the VPP router on Te0/1 +[S5860-20SQ]({{< ref "2021-08-07-fs-switch" >}}) switch, which connects to the VPP router on Te0/1 and a customer on Te0/2: ``` diff --git a/content/articles/2022-02-21-asr9006.md b/content/articles/2022-02-21-asr9006.md index feece8f..7f91c21 100644 --- a/content/articles/2022-02-21-asr9006.md +++ b/content/articles/2022-02-21-asr9006.md @@ -7,9 +7,9 @@ title: 'Review: Cisco ASR9006/RSP440-SE' {{< image width="180px" float="right" src="/assets/asr9006/ipmax.png" alt="IP-Max" >}} -If you've read up on my articles, you'll know that I have deployed a [European Ring]({%post_url 2021-02-27-network %}), -which was reformatted late last year into [AS8298]({%post_url 2021-10-24-as8298 %}) and upgraded to run -[VPP Routers]({%post_url 2021-09-21-vpp-7 %}) with 10G between each city. IPng Networks rents these 10G point to point +If you've read up on my articles, you'll know that I have deployed a [European Ring]({{< ref "2021-02-27-network" >}}), +which was reformatted late last year into [AS8298]({{< ref "2021-10-24-as8298" >}}) and upgraded to run +[VPP Routers]({{< ref "2021-09-21-vpp-7" >}}) with 10G between each city. IPng Networks rents these 10G point to point virtual leased lines between each of our locations. It's a really great network, and it performs so well because it's built on an EoMPLS underlay provided by [IP-Max](https://ip-max.net/). They, in turn, run carrier grade hardware in the form of Cisco ASR9k. In part, we're such a good match together, because my choice of [VPP](https://fd.io/) on the IPng @@ -157,7 +157,7 @@ of stability beyond Cisco and maybe Juniper. So if you want _Rock Solid Internet the way to go. I have written a word or two on how VPP (an open source dataplane very similar to these industrial machines) -works. A great example is my recent [VPP VLAN Gymnastics]({%post_url 2022-02-14-vpp-vlan-gym %}) article. +works. A great example is my recent [VPP VLAN Gymnastics]({{< ref "2022-02-14-vpp-vlan-gym" >}}) article. There's a lot I can learn from comparing the performance between VPP and Cisco ASR9k, so I will focus on the following set of practical questions: @@ -183,7 +183,7 @@ Mellanox ConnectX5-Ex (PCIe v4.0 x16) network card sporting two 100G interfaces, with this 2x10G single interface, and 2x20G LAG, even with 64 byte packets. I am continually amazed that a full line rate loadtest of small 64 byte packets at a rate of 40Gbps boils down to 59.52Mpps! 
-For each loadtest, I ramp up the traffic using a [T-Rex loadtester]({%post_url 2021-02-27-coloclue-loadtest %}) +For each loadtest, I ramp up the traffic using a [T-Rex loadtester]({{< ref "2021-02-27-coloclue-loadtest" >}}) that I wrote. It starts with a low-pps warmup duration of 30s, then it ramps up from 0% to a certain line rate (in this case, alternating to 10GbpsL1 for the single TenGig tests, or 20GbpsL1 for the LACP tests), with a rampup duration of 120s and finally it holds for duration of 30s. @@ -254,7 +254,7 @@ not easily available (ie. both VPP as well as the ASR9k in this case!) ### Test 1.1: 10G L2 Cross Connect A simple matter of virtually patching one interface into the other, I choose the first port on blade 1 and 2, and -tie them together in a `p2p` cross connect. In my [VLAN Gymnastics]({%post_url 2022-02-14-vpp-vlan-gym %}) post, I +tie them together in a `p2p` cross connect. In my [VLAN Gymnastics]({{< ref "2022-02-14-vpp-vlan-gym" >}}) post, I called this a `l2 xconnect`, and although the configuration statements are a bit different, the purpose and expected semantics are identical: @@ -292,7 +292,7 @@ imix | 3.25 Mpps | 9.94 Gbps | 6.46 Mpps | 19.78 Gbps ### Test 1.2: 10G L2 Bridge Domain I then keep the two physical interfaces in `l2transport` mode, but change the type of l2vpn into a -`bridge-domain`, which I described in my [VLAN Gymnastics]({%post_url 2022-02-14-vpp-vlan-gym %}) post +`bridge-domain`, which I described in my [VLAN Gymnastics]({{< ref "2022-02-14-vpp-vlan-gym" >}}) post as well. VPP and Cisco IOS/XR semantics look very similar indeed, they differ really only in the way in which the configuration is expressed: @@ -450,7 +450,7 @@ AggregatePort 2 20000 Mbit 0.0000019575% 0.0000023950% 0 It's clear that both `AggregatePort` interfaces have 20Gbps of capacity and are using an L3 loadbalancing policy. Cool beans! -If you recall my loadtest theory in for example my [Netgate 6100 review]({%post_url 2021-11-26-netgate-6100%}), +If you recall my loadtest theory in for example my [Netgate 6100 review]({{< ref "2021-11-26-netgate-6100" >}}), it can sometimes be useful to operate a single-flow loadtest, in which the source and destination IP:Port stay the same. As I'll demonstrate, it's not only relevant for PC based routers like ones built on VPP, it can also be very relevant in silicon vendors and high-end routers! @@ -729,7 +729,7 @@ I took out for a spin here). They are large (10U of rackspace), heavy (40kg load list price, the street price is easily $10'000,- apiece). On the other hand, we have these PC based machines with Vector Packet Processing, operating as low as 19W for 2x10G, -2x1G and 4x2.5G ports (like the [Netgate 6100]({%post_url 2021-11-26-netgate-6100%})) and offering roughly equal +2x1G and 4x2.5G ports (like the [Netgate 6100]({{< ref "2021-11-26-netgate-6100" >}})) and offering roughly equal performance per port, except having to drop only $700,- apiece. The VPP machines come with ~infinite RAM, even a 16GB machine will run much larger routing tables, including full BGP and so on - there is no (need for) TCAM, and yet routing performance scales out with CPUs and larger CPU instruction/data-cache. 
Looking at my Ryzen 5950X based Hippo/Rhino diff --git a/content/articles/2022-03-03-syslog-telegram.md b/content/articles/2022-03-03-syslog-telegram.md index 01ad074..c6ee8b5 100644 --- a/content/articles/2022-03-03-syslog-telegram.md +++ b/content/articles/2022-03-03-syslog-telegram.md @@ -43,7 +43,7 @@ extent, being made explicitly aware of BGP adjacencies to downstream (IP Transit There are two parts to this. First I want to have a (set of) central receiver servers, that will each receive messages from the routers in the field. I decide to take three servers: the main one being `nms.ipng.nl`, which runs LibreNMS, and further two read-only route collectors `rr0.ddln0.ipng.ch` at -our own DDLN [colocation]({%post_url 2022-02-24-colo %}) in Zurich, and `rr0.nlams0.ipng.ch` running +our own DDLN [colocation]({{< ref "2022-02-24-colo" >}}) in Zurich, and `rr0.nlams0.ipng.ch` running at Coloclue in DCG, Amsterdam. Of course, it would be a mistake to use UDP as a transport for messages that discuss potential network diff --git a/content/articles/2022-03-27-vppcfg-1.md b/content/articles/2022-03-27-vppcfg-1.md index 67b8f7c..3d5d5d9 100644 --- a/content/articles/2022-03-27-vppcfg-1.md +++ b/content/articles/2022-03-27-vppcfg-1.md @@ -8,10 +8,10 @@ title: VPP Configuration - Part1 # About this series I use VPP - Vector Packet Processor - extensively at IPng Networks. Earlier this year, the VPP community -merged the [Linux Control Plane]({%post_url 2021-08-12-vpp-1 %}) plugin. I wrote about its deployment -to both regular servers like the [Supermicro]({%post_url 2021-09-21-vpp-7 %}) routers that run on our -[AS8298]({% post_url 2021-02-27-network %}), as well as virtual machines running in -[KVM/Qemu]({% post_url 2021-12-23-vpp-playground %}). +merged the [Linux Control Plane]({{< ref "2021-08-12-vpp-1" >}}) plugin. I wrote about its deployment +to both regular servers like the [Supermicro]({{< ref "2021-09-21-vpp-7" >}}) routers that run on our +[AS8298]({{< ref "2021-02-27-network" >}}), as well as virtual machines running in +[KVM/Qemu]({{< ref "2021-12-23-vpp-playground" >}}). Now that I've been running VPP in production for about half a year, I can't help but notice one specific drawback: VPP is a programmable dataplane, and _by design_ it does not include any configuration or diff --git a/content/articles/2022-04-02-vppcfg-2.md b/content/articles/2022-04-02-vppcfg-2.md index 7d4232d..50a7b8e 100644 --- a/content/articles/2022-04-02-vppcfg-2.md +++ b/content/articles/2022-04-02-vppcfg-2.md @@ -8,10 +8,10 @@ title: VPP Configuration - Part2 # About this series I use VPP - Vector Packet Processor - extensively at IPng Networks. Earlier this year, the VPP community -merged the [Linux Control Plane]({%post_url 2021-08-12-vpp-1 %}) plugin. I wrote about its deployment -to both regular servers like the [Supermicro]({%post_url 2021-09-21-vpp-7 %}) routers that run on our -[AS8298]({% post_url 2021-02-27-network %}), as well as virtual machines running in -[KVM/Qemu]({% post_url 2021-12-23-vpp-playground %}). +merged the [Linux Control Plane]({{< ref "2021-08-12-vpp-1" >}}) plugin. I wrote about its deployment +to both regular servers like the [Supermicro]({{< ref "2021-09-21-vpp-7" >}}) routers that run on our +[AS8298]({{< ref "2021-02-27-network" >}}), as well as virtual machines running in +[KVM/Qemu]({{< ref "2021-12-23-vpp-playground" >}}). 
Now that I've been running VPP in production for about half a year, I can't help but notice one specific drawback: VPP is a programmable dataplane, and _by design_ it does not include any configuration or @@ -102,7 +102,7 @@ dependencies, let me give a few examples: ## VPP Config: Ordering -In my [previous]({% post_url 2022-03-27-vppcfg-1 %}) post, I talked about a bunch of constraints that +In my [previous]({{< ref "2022-03-27-vppcfg-1" >}}) post, I talked about a bunch of constraints that make certain YAML configurations invalid (for example, having both _dot1q_ and _dot1ad_ on a sub-interface, that wouldn't make any sense). Here, I'm going to talk about another type of constraint: ***Temporal Constraints*** are statements about the ordering of operations. With the example DAG above, I derive the @@ -448,7 +448,7 @@ needed, so it sets them all admin-state down. The bridge-domain `bd10` no longer exist, the poor thing. But before it is deleted, the interface that was in `bd10` can be pruned (membership _depends_ on the bridge, so in pruning, dependencies are removed before dependents). Considering `Hu12/0/1.101` and `Gi3/0/0.100` were an L2XC pair before, they are returned to default -(L3) mode and because it's no longer needed, the [VLAN Gymnastics]({%post_url 2022-02-14-vpp-vlan-gym %}) +(L3) mode and because it's no longer needed, the [VLAN Gymnastics]({{< ref "2022-02-14-vpp-vlan-gym" >}}) tag rewriting is also cleaned up for both interfaces. Finally, the sub-interfaces that do not appear in the target configuration are deleted, completing the **pruning** phase. diff --git a/content/articles/2022-10-14-lab-1.md b/content/articles/2022-10-14-lab-1.md index f533cd4..11be3be 100644 --- a/content/articles/2022-10-14-lab-1.md +++ b/content/articles/2022-10-14-lab-1.md @@ -7,11 +7,11 @@ title: VPP Lab - Setup # Introduction -In a previous post ([VPP Linux CP - Virtual Machine Playground]({% post_url 2021-12-23-vpp-playground %})), I +In a previous post ([VPP Linux CP - Virtual Machine Playground]({{< ref "2021-12-23-vpp-playground" >}})), I wrote a bit about building a QEMU image so that folks can play with the [Vector Packet Processor](https://fd.io) and the Linux Control Plane code. Judging by our access logs, this image has definitely been downloaded a bunch, and I myself use it regularly when I want to tinker a little bit, without wanting to impact the production -routers at [AS8298]({% post_url 2021-02-27-network %}). +routers at [AS8298]({{< ref "2021-02-27-network" >}}). The topology of my tests has become a bit more complicated over time, and often just one router would not be enough. Yet, repeatability is quite important, and I found myself constantly reinstalling / recheckpointing @@ -22,7 +22,7 @@ the `vpp-proto` virtual machine I was using. I got my hands on some LAB hardware {{< image width="300px" float="left" src="/assets/lab/physical.png" alt="Physical" >}} First, I specc'd out a few machines that will serve as hypervisors. From top to bottom in the picture here, two -FS.com S5680-20SQ switches -- I reviewed these earlier [[ref]({% post_url 2021-08-07-fs-switch %})], and I really +FS.com S5680-20SQ switches -- I reviewed these earlier [[ref]({{< ref "2021-08-07-fs-switch" >}})], and I really like these, as they come with 20x10G, 4x25G and 2x40G ports, an OOB management port and serial to configure them. Under it, is its larger brother, with 48x10G and 8x100G ports, the FS.com S5860-48SC. 
Although it's a bit more expensive, it's also necessary because I often test VPP at higher bandwidth, and as such being able to make @@ -66,7 +66,7 @@ On this production hypervisor (`hvn0.chbtl0.ipng.ch`), I'll also prepare and mai image, which will serve as a consistent image to boot the LAB virtual machines. This _main_ image will be replicated over the network into all three `hvn0 - hvn2` hypervisor machines. This way, I can do periodical maintenance on the _main_ `vpp-proto` image, snapshot it, publish it as a QCOW2 for downloading (see my [[VPP Linux CP - Virtual Machine -Playground]({% post_url 2021-12-23-vpp-playground %})] post for details on how it's built and what you can do with it +Playground]({{< ref "2021-12-23-vpp-playground" >}})] post for details on how it's built and what you can do with it yourself!). The snapshots will then also be sync'd to all hypervisors, and from there I can use simple ZFS filesystem _cloning_ and _snapshotting_ to maintain the LAB virtual machines. @@ -83,7 +83,7 @@ of the runtime directly from the `lab.ipng.ch` headend, not having to log in to # Implementation Details I start with image management. On the production hypervisor, I create a 6GB ZFS dataset that will serve as my `vpp-proto` -machine, and install it using the exact same method as the playground [[ref]({% post_url 2021-12-23-vpp-playground %})]. +machine, and install it using the exact same method as the playground [[ref]({{< ref "2021-12-23-vpp-playground" >}})]. Once I have it the way I like it, I'll poweroff the VM, and see to this image being replicated to all hypervisors. ## ZFS Replication diff --git a/content/articles/2022-11-20-mastodon-1.md b/content/articles/2022-11-20-mastodon-1.md index 1aa4597..81639ba 100644 --- a/content/articles/2022-11-20-mastodon-1.md +++ b/content/articles/2022-11-20-mastodon-1.md @@ -59,7 +59,7 @@ main ways: borg(1) and zrepl(1). * **Hypervisor hosts** make a daily copy of their entire filesystem using **borgbackup(1)** to a set of two remote fileservers. This way, the important file metadata, configs for the virtual machines, and so on, are all safely stored remotely. * **Virtual machines** are running on ZFS blockdevices on either the SSD pool, or the disk pool, or both. Using a tool called **zrepl(1)** - (which I described a little bit in a [[previous post]({% post_url 2022-10-14-lab-1 %})]), I create a snapshot every 12hrs on the local + (which I described a little bit in a [[previous post]({{< ref "2022-10-14-lab-1" >}})]), I create a snapshot every 12hrs on the local blockdevice, and incrementally copy away those snapshots daily to the remote fileservers. If I do something silly on a given virtual machine, I can roll back the machine filesystem state to the previous checkpoint and reboot. This has diff --git a/content/articles/2022-11-24-mastodon-2.md b/content/articles/2022-11-24-mastodon-2.md index 3e068c8..67525c0 100644 --- a/content/articles/2022-11-24-mastodon-2.md +++ b/content/articles/2022-11-24-mastodon-2.md @@ -13,7 +13,7 @@ is convenient, but these companies are sometimes taking away my autonomy and exe for me it's time to take back a little bit of responsibility for my online social presence, away from centrally hosted services and to privately operated ones. 
-In the [[previous post]({% post_url 2022-11-20-mastodon-1 %})], I shared some thoughts on how the overall install of a Mastodon instance +In the [[previous post]({{< ref "2022-11-20-mastodon-1" >}})], I shared some thoughts on how the overall install of a Mastodon instance went, making it a point to ensure my users' (and my own!) data is somehow safe, and the machine runs on good hardware, and with good connectivity. Thanks IPng, for that 10G connection! In this post, I visit an old friend, [[Borgmon](https://sre.google/sre-book/practical-alerting/)], which has since reincarnated and become the _de facto_ open source diff --git a/content/articles/2022-11-27-mastodon-3.md b/content/articles/2022-11-27-mastodon-3.md index 0f2a059..a64d0ab 100644 --- a/content/articles/2022-11-27-mastodon-3.md +++ b/content/articles/2022-11-27-mastodon-3.md @@ -13,8 +13,8 @@ is convenient, but these companies are sometimes taking away my autonomy and exe for me it's time to take back a little bit of responsibility for my online social presence, away from centrally hosted services and to privately operated ones. -In my [[first post]({% post_url 2022-11-20-mastodon-1 %})], I shared some thoughts on how I installed a Mastodon instance for myself. In a -[[followup post]({% post_url 2022-11-24-mastodon-2 %})] I talked about its overall architecture and how one might use Prometheus to monitor +In my [[first post]({{< ref "2022-11-20-mastodon-1" >}})], I shared some thoughts on how I installed a Mastodon instance for myself. In a +[[followup post]({{< ref "2022-11-24-mastodon-2" >}})] I talked about its overall architecture and how one might use Prometheus to monitor vital backends like Redis, Postgres and Elastic. But Mastodon _itself_ is also an application which can provide a wealth of telemetry using a protocol called [[StatsD](https://github.com/statsd/statsd)]. @@ -84,8 +84,7 @@ minutes, I think I can see lots of nifty data in here. ## Prometheus At IPng Networks, we use Prometheus as a monitoring observability tool. It's worth pointing out that **statsd** has a few options itself to -visualise data, but considering I already have lots of telemetry in Prometheus and Grafana (see my [[previous post]({% post_url -2022-11-24-mastodon-2 %})]), I'm going to take a bit of a detour, and convert these metrics into the Prometheus _exposition format_, so that +visualise data, but considering I already have lots of telemetry in Prometheus and Grafana (see my [[previous post]({{< ref "2022-11-24-mastodon-2" >}})]), I'm going to take a bit of a detour, and convert these metrics into the Prometheus _exposition format_, so that they can be scraped on a `/metrics` endpoint just like the others. This way, I have all monitoring in one place and using one tool. Monitoring is hard enough as it is, and having to learn multiple tools is _no bueno_ :) diff --git a/content/articles/2022-12-05-oem-switch-1.md b/content/articles/2022-12-05-oem-switch-1.md index 1f892ad..822fa6e 100644 --- a/content/articles/2022-12-05-oem-switch-1.md +++ b/content/articles/2022-12-05-oem-switch-1.md @@ -6,7 +6,7 @@ title: 'Review: S5648X-2Q4Z Switch - Part 1: VxLAN/GENEVE/NvGRE' After receiving an e-mail from a newer [[China based switch OEM](https://starry-networks.com/)], I had a chat with their founder and learned that the combination of switch silicon and software may be a good match for IPng Networks. 
You may recall my previous endeavors in the Fiberstore lineup, -notably an in-depth review of the [[S5860-20SQ]({% post_url 2021-08-07-fs-switch %})] which sports +notably an in-depth review of the [[S5860-20SQ]({{< ref "2021-08-07-fs-switch" >}})] which sports 20x10G, 4x25G and 2x40G optics, and its larger sibling the S5860-48SC which comes with 48x10G and 8x100G cages. I use them in production at IPng Networks and their featureset versus price point is pretty good. In that article, I made one critical note reviewing those FS switches, in that they'e @@ -200,7 +200,7 @@ IPv6 routes, compared to the little ones. But, it has a few more MPLS labels. AC switch once again has a bit more capacity. But, of course the large switch has lots more ports (56 versus 26), and is more expensive. Choose wisely :) -Regarding IPv4/IPv6 and MPLS space, luckily [[AS8298]({% post_url 2021-02-27-network %})] is +Regarding IPv4/IPv6 and MPLS space, luckily [[AS8298]({{< ref "2021-02-27-network" >}})] is relatively compact in its IGP. As of today, it carries 41 IPv4 and 48 IPv6 prefixes in OSPF, which means that these switches would be fine participating in Area 0. If CAM space does turn into an issue down the line, I can put them in stub areas and advertise only a default. As an aside, VPP @@ -402,7 +402,7 @@ in the next article. What folks don't always realize is that the industry is _moving on_ from MPLS to a set of more flexible IP based solutions, notably tunneling using IPv4 or IPv6 UDP packets such as found in VxLAN or GENEVE, two of my favorite protocols. This certainly does cost a little bit in VPP, as I wrote -about in my post on [[VLLs in VPP]({% post_url 2022-01-12-vpp-l2 %})], although you'd be surprised +about in my post on [[VLLs in VPP]({{< ref "2022-01-12-vpp-l2" >}})], although you'd be surprised how many VxLAN encapsulated packets/sec a simple AMD64 router can forward. With respect to these switches, though, let's find out if tunneling this way incurs an overhead or performance penalty. Ready? Let's go! diff --git a/content/articles/2022-12-09-oem-switch-2.md b/content/articles/2022-12-09-oem-switch-2.md index e60cf9e..f986721 100644 --- a/content/articles/2022-12-09-oem-switch-2.md +++ b/content/articles/2022-12-09-oem-switch-2.md @@ -24,8 +24,7 @@ management, and Network time synchronization.

-After discussing basic L2, L3 and Overlay functionality in my [[previous post]({% post_url -2022-12-05-oem-switch-1 %})], I left somewhat of a cliffhanger alluding to all this fancy MPLS and +After discussing basic L2, L3 and Overlay functionality in my [[previous post]({{< ref "2022-12-05-oem-switch-1" >}})], I left somewhat of a cliffhanger alluding to all this fancy MPLS and VPLS stuff. Honestly, I needed a bit more time to play around with the featureset and clarify a few things. I'm now ready to assert that this stuff is really possible on this switch, and if this tickles your fancy, by all means read on :) @@ -47,7 +46,7 @@ one shows `NetworkOS-e580-v7.4.4.r.bin` as the firmware, and the smaller one sho `uImage-v7.0.4.40.bin`, I get the impression that the latter is a compiled down version of the former to work with the newer chipset. -In my [[previous post]({% post_url 2022-12-05-oem-switch-1 %})], I showed L2, L3 and VxLAN, GENEVE +In my [[previous post]({{< ref "2022-12-05-oem-switch-1" >}})], I showed L2, L3 and VxLAN, GENEVE and NvGRE capabilities of this switch to be line rate. But the hardware also supports MPLS, so I figured I'd complete the Overlay series by exploring VxLAN, and the MPLS, EoMPLS (L2VPN, Martini style), and VPLS functionality of these units. @@ -57,7 +56,7 @@ style), and VPLS functionality of these units. ![Front](/assets/oem-switch/topology.svg){: style="width:500px; float: right; margin-left: 1em; margin-bottom: 1em;"} -In the [[IPng Networks LAB]({% post_url 2022-10-14-lab-1 %})], I build the following topology using +In the [[IPng Networks LAB]({{< ref "2022-10-14-lab-1" >}})], I build the following topology using the loadtester, packet analyzer, and switches: * **msw-top**: S5624-2Z-EI switch @@ -75,8 +74,7 @@ work that goes into building such an MPLS enabled telco network. ### MPLS -Why even bother, if we have these fancy new IP based transports that I [[wrote about]({% post_url -2022-12-05-oem-switch-1 %})] last week? I mentioned that the industry is _moving on_ from MPLS +Why even bother, if we have these fancy new IP based transports that I [[wrote about]({{< ref "2022-12-05-oem-switch-1" >}})] last week? I mentioned that the industry is _moving on_ from MPLS to a set of more flexible IP based solutions like VxLAN and GENEVE, as they certainly offer lots of benefits in deployment (notably as overlays on top of existing IP networks). @@ -338,7 +336,7 @@ PW here means _pseudowire_ and CW means _controlword_. Et, voilà, the fir One very common use case for me at IPng Networks is to work with excellent partners like [[IP-Max](https://www.ip-max.net/)] who provide Internet Exchange transport, for example from DE-CIX or SwissIX, to the customer premises. IP-Max uses Cisco's ASR9k routers, an absolutely beautiful piece -of technology [[ref]({% post_url 2022-02-21-asr9006 %})], and with those you can terminate a _L2VPN_ in +of technology [[ref]({{< ref "2022-02-21-asr9006" >}})], and with those you can terminate a _L2VPN_ in any sub-interface. Let's configure something similar. I take one port on `msw-top`, and branch that out into three @@ -537,7 +535,7 @@ And the chassis doesn't even get warm. ### Conclusions It's just super cool to see a switch like this work as expected. I did not manage to overload it at -all, in my [[previous article]({% post_url 2022-12-05-oem-switch-1 %})], I showed VxLAN, GENEVE and +all, in my [[previous article]({{< ref "2022-12-05-oem-switch-1" >}})], I showed VxLAN, GENEVE and NvGRE overlays at line rate. 
Here, I can see that MPLS with all of its Martini bells and whistles, and as well the more advanced VPLS, are keeping up like a champ. I think at least for initial configuration and throughput on all MPLS features I tested, both the small 24x10 + 2x100G switch, diff --git a/content/articles/2023-02-12-fitlet2.md b/content/articles/2023-02-12-fitlet2.md index 5912e7d..2aa5870 100644 --- a/content/articles/2023-02-12-fitlet2.md +++ b/content/articles/2023-02-12-fitlet2.md @@ -7,11 +7,11 @@ title: 'Review: Compulab Fitlet2' A while ago, in June 2021, we were discussing home routers that can keep up with 1G+ internet connections in the [CommunityRack](https://www.communityrack.org) telegram channel. Of course -at IPng Networks we are fond of the Supermicro Xeon D1518 [[ref]({% post_url 2021-09-21-vpp-7 %})], +at IPng Networks we are fond of the Supermicro Xeon D1518 [[ref]({{< ref "2021-09-21-vpp-7" >}})], which has a bunch of 10Gbit X522 and 1Gbit i350 and i210 intel NICs, but it does come at a certain price. -For smaller applications, PC Engines APU6 [[ref]({%post_url 2021-07-19-pcengines-apu6 %})] is +For smaller applications, PC Engines APU6 [[ref]({{< ref "2021-07-19-pcengines-apu6" >}})] is kind of cool and definitely more affordable. But, in this chat, Patrick offered an alternative, the [[Fitlet2](https://fit-iot.com/web/products/fitlet2/)] which is a small, passively cooled, and expandable IoT-esque machine. @@ -82,7 +82,7 @@ For the curious, here's a list of interesting details: [[lspci](/assets/fitlet2/ ## Preparing the Fitlet2 First, I grab a USB key and install Debian _Bullseye_ (11.5) on it, using the UEFI installer. After -booting, I carry through the instructions on my [[VPP Production]({% post_url 2021-09-21-vpp-7 %})] +booting, I carry through the instructions on my [[VPP Production]({{< ref "2021-09-21-vpp-7" >}})] post. Notably, I create the `dataplane` namespace, run an SSH and SNMP agent there, run `isolcpus=1-3` so that I can give three worker threads to VPP, but I start off giving it only one (1) worker thread, because this way I can take a look at what the performance is of a single CPU, before @@ -171,8 +171,8 @@ After this exploratory exercise, I have learned enough about the hardware to be Fitlet2 out for a spin. To configure the VPP instance, I turn to [[vppcfg](https://github.com/pimvanpelt/vppcfg)], which can take a YAML configuration file describing the desired VPP configuration, and apply it safely to the running dataplane using the VPP -API. I've written a few more posts on how it does that, notably on its [[syntax]({% post_url -2022-03-27-vppcfg-1 %})] and its [[planner]({% post_url 2022-04-02-vppcfg-2 %})]. A complete +API. I've written a few more posts on how it does that, notably on its [[syntax]({{< ref "2022-03-27-vppcfg-1" >}})] +and its [[planner]({{< ref "2022-04-02-vppcfg-2" >}})]. A complete configuration guide on vppcfg can be found [[here](https://github.com/pimvanpelt/vppcfg/blob/main/docs/config-guide.md)]. 
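As a side note for anyone following along at home: the day-to-day vppcfg workflow on a box like this is a short loop of validate, plan and apply. Below is a minimal sketch; the YAML filename is a placeholder, and the `check` and `plan` subcommands (with the `-o` output flag) are the ones described in the config guide linked above, so treat this as an outline rather than a copy-paste recipe.

```bash
# Validate the YAML against vppcfg's syntax and semantic checks (placeholder filename).
vppcfg check -c fitlet2.yaml

# Compute the VPP CLI commands needed to move the running dataplane from its
# current state to the state described in the YAML, and write them to a file.
vppcfg plan -c fitlet2.yaml -o /tmp/fitlet2.exec

# Review the planned commands, then let VPP execute them.
less /tmp/fitlet2.exec
vppctl exec /tmp/fitlet2.exec
```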
diff --git a/content/articles/2023-02-24-coloclue-vpp-2.md b/content/articles/2023-02-24-coloclue-vpp-2.md index 6ca42cb..667e40e 100644 --- a/content/articles/2023-02-24-coloclue-vpp-2.md +++ b/content/articles/2023-02-24-coloclue-vpp-2.md @@ -12,10 +12,10 @@ title: 'Case Study: VPP at Coloclue, part 2' Almost precisely two years ago, in February of 2021, I created a loadtesting environment at [[Coloclue](https://coloclue.net)] to prove that a provider of L2 connectivity between two datacenters in Amsterdam was not incurring jitter or loss on its services -- I wrote up my findings -in [[an article]({% post_url 2021-02-27-coloclue-loadtest %})], which demonstrated that the service +in [[an article]({{< ref "2021-02-27-coloclue-loadtest" >}})], which demonstrated that the service provider indeed provides a perfect service. One month later, in March 2021, I briefly ran [[VPP](https://fd.io)] on one of the routers at Coloclue, but due to lack of time and a few -technical hurdles along the way, I had to roll back [[ref]({% post_url 2021-03-27-coloclue-vpp %})]. +technical hurdles along the way, I had to roll back [[ref]({{< ref "2021-03-27-coloclue-vpp" >}})]. ## The Problem @@ -169,7 +169,7 @@ I've now ensured that traffic to and from 185.52.227.1 will always traverse thro I've written about this before, the general _spiel_ is just following my previous article (I'm often very glad to read back my own articles as they serve as pretty good documentation to my forgetful chipmunk-sized brain!), so here, I'll only recap what's already written in -[[vpp-7]({% post_url 2021-09-21-vpp-7 %})]: +[[vpp-7]({{< ref "2021-09-21-vpp-7" >}})]: 1. Build VPP with Linux Control Plane 1. Bring `eunetworks-2` into maintenance mode, so we can safely tinker with it @@ -656,7 +656,7 @@ The performance of the one router we upgraded definitely improved, no question a there's a couple of things that I think we still need to do, so Rogier and I rolled back the change to the previous situation and kernel based routing. -* We didn't migrate keepalived, although IPng runs this in our DDLN [[colocation]({% post_url 2022-02-24-colo %})] +* We didn't migrate keepalived, although IPng runs this in our DDLN [[colocation]({{< ref "2022-02-24-colo" >}})] site, so I'm pretty confident that it will work. * Kees and Ansible at Coloclue will need a few careful changes, to facilitate ongoing automation, think of dataplane and controlplane firewalls, sysctls (uRPF et al), fastnetmon, and so on will diff --git a/content/articles/2023-03-11-mpls-core.md b/content/articles/2023-03-11-mpls-core.md index 74dd68f..2966433 100644 --- a/content/articles/2023-03-11-mpls-core.md +++ b/content/articles/2023-03-11-mpls-core.md @@ -24,9 +24,8 @@ management, and Network time synchronization.

-After discussing basic L2, L3 and Overlay functionality in my [[first post]({% post_url -2022-12-05-oem-switch-1 %})], and explored the functionality and performance of MPLS and VPLS in my -[[second post]({% post_url 2022-12-09-oem-switch-2 %})], I convinced myself and committed to a bunch +After discussing basic L2, L3 and Overlay functionality in my [[first post]({{< ref "2022-12-05-oem-switch-1" >}})], and explored the functionality and performance of MPLS and VPLS in my +[[second post]({{< ref "2022-12-09-oem-switch-2" >}})], I convinced myself and committed to a bunch of these for IPng Networks. I'm now ready to roll out these switches and create a BGP-free core network for IPng Networks. If this kind of thing tickles your fancy, by all means read on :) @@ -63,8 +62,7 @@ expand your reach in a co-op style environment, [[reach out](/s/contact)] to us, I've decided to make this the direction of IPng's core network -- I know that the specs of the Centec switches I've bought will allow for a modest but not huge amount of routes in the hardware -forwarding tables. I loadtested them in [[a previous article]({% post_url 2022-12-05-oem-switch-1 -%})] at line rate (well, at least 8x10G at 64b packets and around 110Mpps), so they were forwarding +forwarding tables. I loadtested them in [[a previous article]({{< ref "2022-12-05-oem-switch-1" >}})] at line rate (well, at least 8x10G at 64b packets and around 110Mpps), so they were forwarding both IPv4 and MPLS traffic effortlessly, and at 45 Watts I might add! However, they clearly cannot operate in the DFZ for two main reasons: @@ -146,13 +144,12 @@ I stub out on IPng's resolvers. Winner! ### Inserting MPLS Under AS8298 -I am currently running [[VPP](https://fd.io)] based on my own deployment [[article]({% post_url -2021-09-21-vpp-7%})], and this has a bunch of routers connected back-to-back with one another using +I am currently running [[VPP](https://fd.io)] based on my own deployment [[article]({{< ref "2021-09-21-vpp-7" >}})], and this has a bunch of routers connected back-to-back with one another using either crossconnects (if there are multiple routers in the same location), or a CWDM/DWDM wave over dark fiber (if they are in adjacent buildings and I have found a provider willing to share their dark fiber with me), or a Carrier Ethernet virtual leased line (L2VPN, provided by folks like [[Init7](https://init7.net)] in Switzerland, or [[IP-Max](https://ip-max.net)] throughout europe in -our [[backbone]({% post_url 2021-02-27-network %})]). +our [[backbone]({{< ref "2021-02-27-network" >}})]). {{< image width="350px" float="right" src="/assets/mpls-core/before.svg" alt="Before" >}} diff --git a/content/articles/2023-03-17-ipng-frontends.md b/content/articles/2023-03-17-ipng-frontends.md index aca1130..18b3685 100644 --- a/content/articles/2023-03-17-ipng-frontends.md +++ b/content/articles/2023-03-17-ipng-frontends.md @@ -5,23 +5,22 @@ title: 'Case Study: Site Local NGINX' A while ago I rolled out an important change to the IPng Networks design: I inserted a bunch of [[Centec MPLS](https://starry-networks.com)] and IPv4/IPv6 capable switches underneath -[[AS8298]({% post_url 2021-02-27-network %})], which gave me two specific advantages: +[[AS8298]({{< ref "2021-02-27-network" >}})], which gave me two specific advantages: 1. 
The entire IPng network is now capable of delivering L2VPN services, taking the form of MPLS -point-to-point ethernet, and VPLS, as shown in a previous [[deep dive]({% post_url -2022-12-09-oem-switch-2 %})], in addition to IPv4 and IPv6 transit provided by VPP in an elaborate -and elegant [[BGP Routing Policy]({% post_url 2021-11-14-routing-policy %})]. +point-to-point ethernet, and VPLS, as shown in a previous [[deep dive]({{< ref "2022-12-09-oem-switch-2" >}})], in addition to IPv4 and IPv6 transit provided by VPP in an elaborate +and elegant [[BGP Routing Policy]({{< ref "2021-11-14-routing-policy" >}})]. 1. A new internal private network becomes available to any device connected IPng switches, with addressing in **198.19.0.0/16** and **2001:678:d78:500::/56**. This network is completely isolated from the Internet, with access controlled via N+2 redundant gateways/firewalls, described in more -detail in a previous [[deep dive]({% post_url 2023-03-11-mpls-core %})] as well. +detail in a previous [[deep dive]({{< ref "2023-03-11-mpls-core" >}})] as well. ## Overview {{< image width="220px" float="left" src="/assets/ipng-frontends/soad.png" alt="Toxicity" >}} -After rolling out this spiffy BGP Free [[MPLS Core]({% post_url 2023-03-11-mpls-core %})], I wanted +After rolling out this spiffy BGP Free [[MPLS Core]({{< ref "2023-03-11-mpls-core" >}})], I wanted to take a look at maybe conserving a few IP addresses here and there, as well as tightening access and protecting the more important machines that IPng Networks runs. You see, most enterprise networks will include a bunch of internal services, like databases, network attached storage, backup diff --git a/content/articles/2023-03-24-lego-dns01.md b/content/articles/2023-03-24-lego-dns01.md index 8e5748c..a19d3f8 100644 --- a/content/articles/2023-03-24-lego-dns01.md +++ b/content/articles/2023-03-24-lego-dns01.md @@ -6,10 +6,10 @@ title: 'Case Study: Let''s Encrypt DNS-01' Last week I shared how IPng Networks deployed a loadbalanced frontend cluster of NGINX webservers that have public IPv4 / IPv6 addresses, but talk to a bunch of internal webservers that are in a private network which isn't directly connected to the internet, so called _IPng Site Local_ -[[ref]({%post_url 2023-03-11-mpls-core %})] with addresses **198.19.0.0/16** and +[[ref]({{< ref "2023-03-11-mpls-core" >}})] with addresses **198.19.0.0/16** and **2001:678:d78:500::/56**. -I wrote in [[that article]({% post_url 2023-03-17-ipng-frontends %})] that IPng will be using +I wrote in [[that article]({{< ref "2023-03-17-ipng-frontends" >}})] that IPng will be using _ACME_ HTTP-01 validation, which asks the certificate authority, in this case Let's Encrypt, to contact the webserver on a well-known URI for each domain that I'm requesting a certificate for. Unsurprisingly, several folks reached out to me asking "well what about DNS-01", and one sentence @@ -30,7 +30,7 @@ wrong, and that using DNS-01 ***is*** relatively simple after all. I've installed three frontend NGINX servers (running at Coloclue AS8283, IPng AS8298 and IP-Max AS25091), and one LEGO certificate machine (running in the internal _IPng Site Local_ network). -In the [[previous article]({% post_url 2023-03-17-ipng-frontends %})], I described the setup and +In the [[previous article]({{< ref "2023-03-17-ipng-frontends" >}})], I described the setup and the use of Let's Encrypt with HTTP-01 challenges. I'll skip that here. 
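For orientation before the comparison that follows, here's a rough sketch of what the two challenge types look like from the shell. The domains, webroot and e-mail address are placeholders, and the `acme-dns` provider is just one of many that lego supports, so consider this an illustration of the two flows rather than the exact commands used in these articles.

```bash
# HTTP-01: the CA fetches http://<domain>/.well-known/acme-challenge/<token>,
# so the webserver answering for that name must be reachable from the internet.
certbot certonly --webroot -w /var/www/example.com -d example.com

# DNS-01: the CA looks up a TXT record at _acme-challenge.<domain> instead,
# which is also the only way to obtain wildcard certificates.
lego --email noc@example.com --dns acme-dns \
     --domains "example.com" --domains "*.example.com" run
```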
#### HTTP-01 vs DNS-01 diff --git a/content/articles/2023-04-09-vpp-stats.md b/content/articles/2023-04-09-vpp-stats.md index abdbb65..bb21537 100644 --- a/content/articles/2023-04-09-vpp-stats.md +++ b/content/articles/2023-04-09-vpp-stats.md @@ -17,13 +17,13 @@ can read all about in my series on VPP back in 2021: [![DENOG14](/assets/vpp-stats/denog14-thumbnail.png){: style="width:300px; float: right; margin-left: 1em;"}](https://video.ipng.ch/w/erc9sAofrSZ22qjPwmv6H4) -* [[Part 1]({% post_url 2021-08-12-vpp-1 %})]: Punting traffic through TUN/TAP interfaces into Linux -* [[Part 2]({% post_url 2021-08-13-vpp-2 %})]: Mirroring VPP interface configuration into Linux -* [[Part 3]({% post_url 2021-08-15-vpp-3 %})]: Automatically creating sub-interfaces in Linux -* [[Part 4]({% post_url 2021-08-25-vpp-4 %})]: Synchronize link state, MTU and addresses to Linux -* [[Part 5]({% post_url 2021-09-02-vpp-5 %})]: Netlink Listener, synchronizing state from Linux to VPP -* [[Part 6]({% post_url 2021-09-10-vpp-6 %})]: Observability with LibreNMS and VPP SNMP Agent -* [[Part 7]({% post_url 2021-09-21-vpp-7 %})]: Productionizing and reference Supermicro fleet at IPng +* [[Part 1]({{< ref "2021-08-12-vpp-1" >}})]: Punting traffic through TUN/TAP interfaces into Linux +* [[Part 2]({{< ref "2021-08-13-vpp-2" >}})]: Mirroring VPP interface configuration into Linux +* [[Part 3]({{< ref "2021-08-15-vpp-3" >}})]: Automatically creating sub-interfaces in Linux +* [[Part 4]({{< ref "2021-08-25-vpp-4" >}})]: Synchronize link state, MTU and addresses to Linux +* [[Part 5]({{< ref "2021-09-02-vpp-5" >}})]: Netlink Listener, synchronizing state from Linux to VPP +* [[Part 6]({{< ref "2021-09-10-vpp-6" >}})]: Observability with LibreNMS and VPP SNMP Agent +* [[Part 7]({{< ref "2021-09-21-vpp-7" >}})]: Productionizing and reference Supermicro fleet at IPng With this, I can make a regular server running Linux use VPP as kind of a software ASIC for super fast forwarding, filtering, NAT, and so on, while keeping control of the interface state (links, @@ -43,7 +43,7 @@ Google [[ref](https://sre.google/sre-book/practical-alerting/)] but popularized open source interpretation called **Prometheus** [[ref](https://prometheus.io/)]. IPng Networks ♥ Prometheus. I'm a really huge fan of Prometheus and its graphical frontend Grafana, as you can see with my work on -Mastodon in [[this article]({% post_url 2022-11-27-mastodon-3 %})]. Join me on +Mastodon in [[this article]({{< ref "2022-11-27-mastodon-3" >}})]. Join me on [[ublog.tech](https://ublog.tech)] if you haven't joined the Fediverse yet. It's well monitored! ### SNMP diff --git a/content/articles/2023-05-07-vpp-mpls-1.md b/content/articles/2023-05-07-vpp-mpls-1.md index d2a28aa..10eeb8d 100644 --- a/content/articles/2023-05-07-vpp-mpls-1.md +++ b/content/articles/2023-05-07-vpp-mpls-1.md @@ -15,7 +15,7 @@ are shared between the two. I've deployed an MPLS core for IPng Networks, which allows me to provide L2VPN services, and at the same time keep an IPng Site Local network with IPv4 and IPv6 that is separate from the internet, based on hardware/silicon based forwarding at line rate and high availability. You can read all -about my Centec MPLS shenanigans in [[this article]({% post_url 2023-03-11-mpls-core %})]. +about my Centec MPLS shenanigans in [[this article]({{< ref "2023-03-11-mpls-core" >}})]. Ever since the release of the Linux Control Plane [[ref](https://github.com/pimvanpelt/lcpng)] plugin in VPP, folks have asked "What about MPLS?" 
-- I have never really felt the need to go this @@ -23,7 +23,7 @@ rabbit hole, because I figured that in this day and age, higher level IP protoco are just as performant, and a little bit less of an 'art' to get right. For example, the Centec switches I deployed perform VxLAN, GENEVE and GRE all at line rate in silicon. And in an earlier article, I showed that the performance of VPP in these tunneling protocols is actually pretty good. -Take a look at my [[VPP L2 article]({% post_url 2022-01-12-vpp-l2 %})] for context. +Take a look at my [[VPP L2 article]({{< ref "2022-01-12-vpp-l2" >}})] for context. You might ask yourself: _Then why bother?_ To which I would respond: if you have to ask that question, clearly you don't know me :) This article will form a deep dive into MPLS as implemented by VPP. In @@ -33,7 +33,7 @@ as a fully fledged provider- and provider-edge MPLS router. ## Lab Setup -A while ago I created a [[VPP Lab]({% post_url 2022-10-14-lab-1 %})] which is pretty slick, I use it +A while ago I created a [[VPP Lab]({{< ref "2022-10-14-lab-1" >}})] which is pretty slick, I use it all the time. Most of the time I find myself messing around on the hypervisor and adding namespaces with interfaces in it, to pair up with the VPP interfaces. And I tcpdump a lot! It's time for me to make an upgrade to the Lab -- take a look at this picture: @@ -145,7 +145,7 @@ machine, or emitting _egress_ from any port on any machine, respectively. ## Preparing the LAB I wrote a little bit about the automation I use to maintain a few reproducable lab environments in a -[[previous article]({% post_url 2022-10-14-lab-1 %})], so I'll only show the commands themselves here, +[[previous article]({{< ref "2022-10-14-lab-1" >}})], so I'll only show the commands themselves here, not the underlying systems. When the LAB boots up, it comes with a basic Linux CP configuration that uses OSPF and OSPFv3 running in Bird2, to connect the `vpp0-0` through `vpp0-3` machines together (each router's Gi10/0/0 port connects to the next router's Gi10/0/1 port). LAB0 is in use by diff --git a/content/articles/2023-05-17-vpp-mpls-2.md b/content/articles/2023-05-17-vpp-mpls-2.md index 3f5fc01..dd0f75d 100644 --- a/content/articles/2023-05-17-vpp-mpls-2.md +++ b/content/articles/2023-05-17-vpp-mpls-2.md @@ -15,14 +15,13 @@ are shared between the two. I've deployed an MPLS core for IPng Networks, which allows me to provide L2VPN services, and at the same time keep an IPng Site Local network with IPv4 and IPv6 that is separate from the internet, based on hardware/silicon based forwarding at line rate and high availability. You can read all -about my Centec MPLS shenanigans in [[this article]({% post_url 2023-03-11-mpls-core %})]. +about my Centec MPLS shenanigans in [[this article]({{< ref "2023-03-11-mpls-core" >}})]. In the last article, I explored VPP's MPLS implementation a little bit. All the while, [@vifino](https://chaos.social/@vifino) has been tinkering with the Linux Control Plane and adding MPLS support to it, and together we learned a lot about how VPP does MPLS forwarding and how it sometimes differs to other implementations. During the process, we talked a bit about -_implicit-null_ and _explicit-null_. When my buddy Fred read the [[previous article]({% post_url -2023-05-07-vpp-mpls-1 %})], he also talked about a feature called _penultimate-hop-popping_ which +_implicit-null_ and _explicit-null_. 
When my buddy Fred read the [[previous article]({{< ref "2023-05-07-vpp-mpls-1" >}})], he also talked about a feature called _penultimate-hop-popping_ which maybe deserves a bit more explanation. At the same time, I could not help but wonder what the performance is of VPP as a _P-Router_ and _PE-Router_, compared to say IPv4 forwarding. @@ -32,7 +31,7 @@ performance is of VPP as a _P-Router_ and _PE-Router_, compared to say IPv4 forw For this article, I'm going to boot up instance LAB1 with no changes (for posterity, using image `vpp-proto-disk0@20230403-release`), and it will be in the same state it was at the end of my -previous [[MPLS article]({% post_url 2023-05-07-vpp-mpls-1 %})]. To recap, there are four routers +previous [[MPLS article]({{< ref "2023-05-07-vpp-mpls-1" >}})]. To recap, there are four routers daisychained in a string, and they are called `vpp1-0` through `vpp1-3`. I've then connected a Debian virtual machine on both sides of the string. `host1-0.enp16s0f3` connects to `vpp1-3.e2` and `host1-1.enp16s0f0` connects to `vpp1-0.e3`. Finally, recall that all of the links between these @@ -286,7 +285,7 @@ operation performed on a packet does cost valuable CPU cycles. I can't really perform a loadtest on the virtual machines backed by Open vSwitch, while tightly packing six machines on one hypervisor. That setup is made specifically to do functional testing and development work. To do a proper loadtest, I will need bare metal. So, I grabbed three Supermicro -SYS-5018D-FN8T, which I'm running throughout [[AS8298]({% post_url 2021-02-27-network %})], as I +SYS-5018D-FN8T, which I'm running throughout [[AS8298]({{< ref "2021-02-27-network" >}})], as I know their performance quite well. I'll take three of these, and daisychain them with TenGig ports. This way, I can take a look at the cost of _P-Routers_ (which only SWAP MPLS labels and forward the result), as well as _PE-Routers_ (which have to encapsulate, and sometimes decapsulate the IP or @@ -295,7 +294,7 @@ Ethernet traffic). These machines get a fresh Debian Bookworm install and VPP 23.06 without any plugins. It's weird for me to run a VPP instance without Linux CP, but in this case I'm going completely vanilla, so I disable all plugins and give each VPP machine one worker thread. The install follows my popular -[[VPP-7]({% post_url 2021-09-21-vpp-7 %})]. By the way did you know that you can just type the search query [VPP-7] directly into Google to find this article. Am I an influencer now? Jokes aside, I decide to call the bare metal machines _France_, +[[VPP-7]({{< ref "2021-09-21-vpp-7" >}})]. By the way did you know that you can just type the search query [VPP-7] directly into Google to find this article. Am I an influencer now? Jokes aside, I decide to call the bare metal machines _France_, _Belgium_ and _Netherlands_. And because if it ain't dutch, it ain't much, the Netherlands machine sits on top :) diff --git a/content/articles/2023-05-21-vpp-mpls-3.md b/content/articles/2023-05-21-vpp-mpls-3.md index 1280238..e9b65ff 100644 --- a/content/articles/2023-05-21-vpp-mpls-3.md +++ b/content/articles/2023-05-21-vpp-mpls-3.md @@ -14,11 +14,10 @@ performance and versatility. For those of us who have used Cisco IOS/XR devices, _ASR_ (aggregation service router), VPP will look and feel quite familiar as many of the approaches are shared between the two. 
-In the [[first article]({%post_url 2023-05-07-vpp-mpls-1 %})] of this series, I took a look at MPLS +In the [[first article]({{< ref "2023-05-07-vpp-mpls-1" >}})] of this series, I took a look at MPLS in general, and how setting up static _Label Switched Paths_ can be done in VPP. A few details on special case labels (such as _Implicit Null_ which enabled the fabled _Penultimate Hop Popping_) -were missing, so I took a good look at them in the [[second article]({% post_url -2023-05-17-vpp-mpls-2 %})] of the series. +were missing, so I took a good look at them in the [[second article]({{< ref "2023-05-17-vpp-mpls-2" >}})] of the series. This was all just good fun but also allowed me to buy some time for [@vifino](https://chaos.social/@vifino) who has been implementing MPLS handling within the Linux @@ -41,13 +40,13 @@ integrations, while using the Linux netlink subsystem feels easier from an end-u This is a technical deep dive into the implementation of MPLS in the Linux Control Plane plugin for VPP. If you haven't already, now is a good time to read up on the initial implementation of LCP: -* [[Part 1]({% post_url 2021-08-12-vpp-1 %})]: Punting traffic through TUN/TAP interfaces into Linux -* [[Part 2]({% post_url 2021-08-13-vpp-2 %})]: Mirroring VPP interface configuration into Linux -* [[Part 3]({% post_url 2021-08-15-vpp-3 %})]: Automatically creating sub-interfaces in Linux -* [[Part 4]({% post_url 2021-08-25-vpp-4 %})]: Synchronize link state, MTU and addresses to Linux -* [[Part 5]({% post_url 2021-09-02-vpp-5 %})]: Netlink Listener, synchronizing state from Linux to VPP -* [[Part 6]({% post_url 2021-09-10-vpp-6 %})]: Observability with LibreNMS and VPP SNMP Agent -* [[Part 7]({% post_url 2021-09-21-vpp-7 %})]: Productionizing and reference Supermicro fleet at IPng +* [[Part 1]({{< ref "2021-08-12-vpp-1" >}})]: Punting traffic through TUN/TAP interfaces into Linux +* [[Part 2]({{< ref "2021-08-13-vpp-2" >}})]: Mirroring VPP interface configuration into Linux +* [[Part 3]({{< ref "2021-08-15-vpp-3" >}})]: Automatically creating sub-interfaces in Linux +* [[Part 4]({{< ref "2021-08-25-vpp-4" >}})]: Synchronize link state, MTU and addresses to Linux +* [[Part 5]({{< ref "2021-09-02-vpp-5" >}})]: Netlink Listener, synchronizing state from Linux to VPP +* [[Part 6]({{< ref "2021-09-10-vpp-6" >}})]: Observability with LibreNMS and VPP SNMP Agent +* [[Part 7]({{< ref "2021-09-21-vpp-7" >}})]: Productionizing and reference Supermicro fleet at IPng To keep this writeup focused, I'll assume the anatomy of VPP plugins and the Linux Controlplane _Interface_ and _Netlink_ plugins are understood. That way, I can focus on the _changes_ needed for @@ -102,7 +101,7 @@ bits and TTL, and these can be added to the route path in VPP by casting them to fib_mpls_label_t`. The last label in the stackwill have the S-bit set, so we can continue consuming these until we find that condition. The first patchset that plays around with these semantics is [[38702#2](https://gerrit.fd.io/r/c/vpp/+/38702/2)]. As you can see, MPLS is going to look very much -like IPv4 and IPv6 route updates in [[previous work]({%post_url 2021-09-02-vpp-5 %})], in that they +like IPv4 and IPv6 route updates in [[previous work]({{< ref "2021-09-02-vpp-5" >}})], in that they take the Netlink representation, rewrite them into VPP representation, and update the FIB. Up until now, the Linux Controlplane netlink plugin understands only IPv4 and IPv6. 
So some @@ -187,7 +186,7 @@ exit ``` I configure _LDP_ here to prefer advertising locally connected routes as _MPLS Explicit NULL_, which I -described in detail in the [[previous post]({% post_url 2023-05-17-vpp-mpls-2 %})]. It tells the +described in detail in the [[previous post]({{< ref "2023-05-17-vpp-mpls-2" >}})]. It tells the penultimate router to send the router a packet as MPLS with label value 0,S=1 for IPv4 and value 2,S=1 for IPv6, so that VPP knows imediately to decapsulate the packet and continue to IPv4/IPv6 forwarding. An alternative here is setting implicit-null, which instructs the router before this one to perform diff --git a/content/articles/2023-05-28-vpp-mpls-4.md b/content/articles/2023-05-28-vpp-mpls-4.md index f54a022..a8015a0 100644 --- a/content/articles/2023-05-28-vpp-mpls-4.md +++ b/content/articles/2023-05-28-vpp-mpls-4.md @@ -17,12 +17,12 @@ are shared between the two. In the last three articles, I thought I had described "all we need to know" to perform MPLS using the Linux Controlplane in VPP: -1. In the [[first article]({%post_url 2023-05-07-vpp-mpls-1 %})] of this series, I took a look at MPLS +1. In the [[first article]({{< ref "2023-05-07-vpp-mpls-1" >}})] of this series, I took a look at MPLS in general. -2. In the [[second article]({% post_url 2023-05-17-vpp-mpls-2 %})] of the series, I demonstrated a few +2. In the [[second article]({{< ref "2023-05-17-vpp-mpls-2" >}})] of the series, I demonstrated a few special case labels (such as _Explicit Null_ and _Implicit Null_ which enables the fabled _Penultimate Hop Popping_ behavior of MPLS. -3. Then, in the [[third article]({% post_url 2023-05-21-vpp-mpls-3%})], I worked with +3. Then, in the [[third article]({{< ref "2023-05-21-vpp-mpls-3" >}})], I worked with [@vifino](https://chaos.social/@vifino) to implement the plumbing for MPLS in the Linux Control Plane plugin for VPP. He did most of the work, I just watched :) diff --git a/content/articles/2023-08-06-pixelfed-1.md b/content/articles/2023-08-06-pixelfed-1.md index 733bfca..bbf1a0b 100644 --- a/content/articles/2023-08-06-pixelfed-1.md +++ b/content/articles/2023-08-06-pixelfed-1.md @@ -13,8 +13,8 @@ is convenient, but these companies are sometimes taking away my autonomy and exe for me it's time to take back a little bit of responsibility for my online social presence, away from centrally hosted services and to privately operated ones. -After having written a fair bit about my Mastodon [[install]({% post_url 2022-11-20-mastodon-1 %})] and -[[monitoring]({% post_url 2022-11-27-mastodon-3 %})], I've been using it every day. This morning, my buddy Ramón asked if he could +After having written a fair bit about my Mastodon [[install]({{< ref "2022-11-20-mastodon-1" >}})] and +[[monitoring]({{< ref "2022-11-27-mastodon-3" >}})], I've been using it every day. This morning, my buddy Ramón asked if he could make a second account on **ublog.tech** for his _Campervan Adventures_, and notably to post pics of where he and his family went. But if pics is your jam, why not ... [[Pixelfed](https://pixelfed.org/)]! @@ -73,7 +73,7 @@ it is copied incrementally daily off-site by the hypervisor. I'm pretty confiden machine guests as well, because now I can do local snapshotting, of say `data/pixelfed`, and I can more easily grow/shrink the datasets for the supporting services, as well as isolate them individually against sibling wildgrowth. 
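To make that a bit more concrete, here's a small sketch using standard ZFS commands. Only `data/pixelfed` is taken from the paragraph above; the second dataset name and the quota sizes are made-up examples.

```bash
# One dataset per service, each with its own quota (sizes are illustrative).
zfs create -o quota=50G data/pixelfed
zfs create -o quota=20G data/pixelfed-db

# Snapshot just the application data before risky work, roll back if it goes wrong.
zfs snapshot data/pixelfed@pre-upgrade
zfs rollback data/pixelfed@pre-upgrade

# Growing (or shrinking) a dataset later is a one-liner, with no repartitioning.
zfs set quota=80G data/pixelfed
```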
-The VM gets one virtual NIC, which will connect to the [[IPng Site Local]({% post_url 2023-03-17-ipng-frontends %})] network using +The VM gets one virtual NIC, which will connect to the [[IPng Site Local]({{< ref "2023-03-17-ipng-frontends" >}})] network using jumboframes. This way, the machine itself is disconnected from the internet, saving a few IPv4 addresses and allowing for the IPng NGINX frontends to expose it. I give it the name `pixelfed.net.ipng.ch` with addresses 198.19.4.141 and 2001:678:d78:507::d, which will be firewalled and NATed via the IPng SL gateways. @@ -81,7 +81,7 @@ firewalled and NATed via the IPng SL gateways. #### IPng Frontend: Wildcard SSL I run most websites behind a cluster of NGINX webservers, which are carrying an SSL certificate which support wildcards. The system is -using [[DNS-01]({% post_url 2023-03-24-lego-dns01 %})] challenges, so the first order of business is to expand the certificate from serving +using [[DNS-01]({{< ref "2023-03-24-lego-dns01" >}})] challenges, so the first order of business is to expand the certificate from serving only [[ublog.tech](https://ublog.tech)] (which is in use by the companion Mastodon instance), to include as well _*.ublog.tech_ so that I can add the new Pixelfed instance as [[pix.ublog.tech](https://pix.ublog.tech)]: @@ -391,7 +391,7 @@ and directories with `rwx------`, which doesn't seem quite right to me, so I mak Although I do like the Pixelfed logo, I wanted to keep a **ublog.tech** branding, so I replaced the `public/storage/headers/default.jpg` with my own mountains-picture in roughly the same size. By the way, I took that picture in Grindelwald, Switzerland during a -[[serene moment]({% post_url 2021-07-26-bucketlist %})] in which I discovered why tinkering with things like this is so important to my +[[serene moment]({{< ref "2021-07-26-bucketlist" >}})] in which I discovered why tinkering with things like this is so important to my mental health. #### Backups @@ -403,7 +403,7 @@ failure" or "computer broken" or "datacenter on fire". To honor this promise, I handle backups in three main ways: zrepl(1), borg(1) and mysqldump(1). * **VM Block Devices** are running on the hypervisor's ZFS on either the SSD pool, or the disk pool, or both. Using a tool called **zrepl(1)** - (which I described a little bit in a [[previous post]({% post_url 2022-10-14-lab-1 %})]), I create a snapshot every 12hrs on the local + (which I described a little bit in a [[previous post]({{< ref "2022-10-14-lab-1" >}})]), I create a snapshot every 12hrs on the local blockdevice, and incrementally copy away those snapshots daily to the remote fileservers. ``` diff --git a/content/articles/2023-08-27-ansible-nginx.md b/content/articles/2023-08-27-ansible-nginx.md index 5a4c06c..7857924 100644 --- a/content/articles/2023-08-27-ansible-nginx.md +++ b/content/articles/2023-08-27-ansible-nginx.md @@ -39,7 +39,7 @@ pim@squanchy:~/src/paphosting/scripts$ wc -l *push.sh funcs 1468 total ``` -In a [[previous article]({% post_url 2023-03-17-ipng-frontends %})], I talked about having not one but a cluster of NGINX servers that would +In a [[previous article]({{< ref "2023-03-17-ipng-frontends" >}})], I talked about having not one but a cluster of NGINX servers that would each share a set of SSL certificates and pose as a reversed proxy for a bunch of websites. At the bottom of that article, I wrote: > The main thing that's next is to automate a bit more of this. IPng Networks has an Ansible controller, which I'd like to add ... 
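Since this article is about exactly that kind of automation, here's roughly what a run against the frontends looks like: a sketch in which the inventory path, the `nginx` group and the playbook name are assumptions, not IPng's actual repository layout.

```bash
# Dry-run against the NGINX frontends only: --check makes no changes,
# --diff shows what each template or file would have changed.
ansible-playbook -i inventory/hosts site.yml --limit nginx --check --diff

# If the diff looks sane, apply it for real.
ansible-playbook -i inventory/hosts site.yml --limit nginx
```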
@@ -83,7 +83,7 @@ I'm not going to go into all the details here for the **debian** playbook, thoug all servers (bare metal or virtual). The one thing I'll mention though, is that the **debian** playbook will see to it that the correct users are created, with their SSH pubkey, and I'm going to first use this feature by creating two users: -1. `lego`: As I described in a [[post on DNS-01]({% post_url 2023-03-24-lego-dns01 %})], IPng has a certificate machine that answers Let's +1. `lego`: As I described in a [[post on DNS-01]({{< ref "2023-03-24-lego-dns01" >}})], IPng has a certificate machine that answers Let's Encrypt DNS-01 challenges, and its job is to regularly prove ownership of my domains, and then request a (wildcard!) certificate. Once that renews, copy the certificate to all NGINX machines. To do that copy, `lego` needs an account on these machines, it needs to be able to write the certs and issue a reload to the NGINX server. @@ -196,8 +196,7 @@ In order: * `conf.d/options-ssl-nginx.inc` and `conf.d/ssl-dhparams.inc` are files borrowed from Certbot's NGINX configuration, and ensure the best TLS and SSL session parameters are used. * `sites-available/*.conf` are the configuration blocks for the port-80 (HTTP) and port-443 (SSL certificate) websites. In the interest of - brevity I won't copy them here, but if you're curious I showed a bunch of these in a [[previous article]({% post_url -2023-03-17-ipng-frontends %})]. These per-website config files sensibly include the SSL defaults, custom IPng headers and `upstream` log + brevity I won't copy them here, but if you're curious I showed a bunch of these in a [[previous article]({{< ref "2023-03-17-ipng-frontends" >}})]. These per-website config files sensibly include the SSL defaults, custom IPng headers and `upstream` log format. ### NGINX Cluster: Let's Encrypt @@ -208,9 +207,9 @@ Name Indication_ or SNI. Let's first take a look at building these two of these one for [[FrysIX](https://frys-ix.net/)], the internet exchange with Frysian roots, which incidentally offers free 1G, 10G, 40G and 100G ports all over the Amsterdam metro. My buddy Arend and I are running that exchange, so please do join it! -I described the usual `HTTP-01` certificate challenge a while ago in [[this article]({% post_url 2023-03-17-ipng-frontends %})], but I +I described the usual `HTTP-01` certificate challenge a while ago in [[this article]({{< ref "2023-03-17-ipng-frontends" >}})], but I rarely use it because I've found that once installed, `DNS-01` is vastly superior. I wrote about the ability to request a single certificate -with multiple _wildcard_ entries in a [[DNS-01 article]({% post_url 2023-03-24-lego-dns01 %})], so I'm going to save you the repetition, and +with multiple _wildcard_ entries in a [[DNS-01 article]({{< ref "2023-03-24-lego-dns01" >}})], so I'm going to save you the repetition, and simply use `certbot`, `acme-dns` and the `DNS-01` challenge type, to request the following _two_ certificates: ```bash diff --git a/content/articles/2023-10-21-vpp-ixp-gateway-1.md b/content/articles/2023-10-21-vpp-ixp-gateway-1.md index 96e56d3..9169bbf 100644 --- a/content/articles/2023-10-21-vpp-ixp-gateway-1.md +++ b/content/articles/2023-10-21-vpp-ixp-gateway-1.md @@ -129,7 +129,7 @@ from `xe0.10` which are tagged, and add them as-is to the bridge, which is weird other bridge ports are expecting untagged frames. 
So what I must do is tell VPP, upon receipt of a tagged ethernet frame on these ports, to strip the tag; and on the way out, before transmitting the ethernet frame, to wrap it into its correct encapsulation. This is called **tag rewriting** in VPP, -and I've written a bit about it in [[this article]({% post_url 2022-02-14-vpp-vlan-gym %})] in case +and I've written a bit about it in [[this article]({{< ref "2022-02-14-vpp-vlan-gym" >}})] in case you're curious. But to cut to the chase: ``` diff --git a/content/articles/2023-11-11-mellanox-sn2700.md b/content/articles/2023-11-11-mellanox-sn2700.md index e68f6ce..bc16609 100644 --- a/content/articles/2023-11-11-mellanox-sn2700.md +++ b/content/articles/2023-11-11-mellanox-sn2700.md @@ -805,7 +805,7 @@ the box. This switch is phenomenal, and Jiří Pírko and the Mellanox team truly outdid themselves with their `mlxsw` switchdev implementation. I have in my hands a very affordable 32x100G or 64x(50G, 25G, 10G, 1G) and anything in between, with IPv4 and IPv6 forwarding in hardware, with a limited FIB size, not too -dissimilar from the [[Centec]({% post_url 2022-12-09-oem-switch-2 %})] switches that IPng Networks +dissimilar from the [[Centec]({{< ref "2022-12-09-oem-switch-2" >}})] switches that IPng Networks runs in its AS8298 network, albeit without MPLS forwarding capabilities. Still, for a LAB switch, to better test 25G and 100G topologies, this switch is very good value for diff --git a/content/articles/2023-12-17-defra0-debian.md b/content/articles/2023-12-17-defra0-debian.md index 232be78..7f08b43 100644 --- a/content/articles/2023-12-17-defra0-debian.md +++ b/content/articles/2023-12-17-defra0-debian.md @@ -18,8 +18,7 @@ which graduated into a feature called the Linux Control Plane plugin for the Linux Control Plane, notably Neale Ranns from Cisco (these days Graphiant), and Matt Smith and Jon Loeliger from Netgate (who ship this as TNSR [[ref](https://netgate.com/tnsr)], check it out!). I helped as well, by adding a bunch of Netlink handling and VPP->Linux synchronization code, -which I've written about a bunch on this blog in the 2021 VPP development series [[ref]({% post_url -2021-08-12-vpp-1 %})]. +which I've written about a bunch on this blog in the 2021 VPP development series [[ref]({{< ref "2021-08-12-vpp-1" >}})]. At the time, Ubuntu and CentOS were the supported platforms, so I installed a bunch of Ubuntu machines when doing the deploy with my buddy Fred from IP-Max [[ref](https://ip-max.net)]. But as @@ -33,8 +32,7 @@ I took stock of the fleet at the end of 2023, and I found the following: * ***OpenBSD***: 3 virtual machines, bastion jumphosts connected to Internet and IPng Site Local * ***Ubuntu***: 4 physical machines, VPP routers (`nlams0`, `defra0`, `chplo0` and `usfmt0`) * ***Debian***: 22 physical machines and 116 virtual machines, running internal and public services, - almost all of these machines are entirely in IPng Site Local [[ref]({% post_url -2023-03-11-mpls-core %})], not connected to the + almost all of these machines are entirely in IPng Site Local [[ref]({{< ref "2023-03-11-mpls-core" >}})], not connected to the internet at all. 
It became clear to me that I could make a small sprint to standardize all physical hardware on @@ -48,10 +46,9 @@ unilaterally :) ## Upgrading to Debian Luckily, I already have a fair number of VPP routers that have been deployed on Debian (mostly -_Bullseye_, but one of them is _Bookworm_), and my LAB environment [[ref]({% post_url -2022-10-14-lab-1 %})] is running Debian Bookworm as well. Although its native habitat is Ubuntu, I +_Bullseye_, but one of them is _Bookworm_), and my LAB environment [[ref]({{< ref "2022-10-14-lab-1" >}})] is running Debian Bookworm as well. Although its native habitat is Ubuntu, I regularly run VPP in a Debian environment, for example when Adrian contributed the MPLS code -[[ref]({% post_url 2023-05-21-vpp-mpls-3 %})], he also recommended Debian 12, because that ships +[[ref]({{< ref "2023-05-21-vpp-mpls-3" >}})], he also recommended Debian 12, because that ships with a modern libnl which supports a few bits and pieces he needed. ### Preparations @@ -281,7 +278,7 @@ Debian 12 _netinst_ ISO: At this point I can't help but smile. I'm sitting here in Brüttisellen, roughly 400km south of this computer in Frankfurt, and I am looking at the VGA output of a fresh Debian installer. Come on, you have to admit, that's pretty slick! Installing Debian follows pretty precisely my previous VPP#7 -article [[ref]({% post_url 2021-09-21-vpp-7 %})]. I go through the installer options and a few +article [[ref]({{< ref "2021-09-21-vpp-7" >}})]. I go through the installer options and a few minutes later, it's mission accomplished. I give the router its IPv4/IPv6 address in _IPng Site Local_, so that it has management network connectivity, and just before it wants to reboot, I quickly edit `/etc/default/grub` to turn on serial output, just like in the article: diff --git a/content/articles/2024-01-27-vpp-papi.md b/content/articles/2024-01-27-vpp-papi.md index e65ad52..1437a2a 100644 --- a/content/articles/2024-01-27-vpp-papi.md +++ b/content/articles/2024-01-27-vpp-papi.md @@ -17,8 +17,7 @@ design. However, there is this also a CLI utility called `vppctl`, right, so wha the CLI is used a lot by folks to configure their dataplane, but it really was always meant to be a debug utility. There's a whole wealth of programmability that is _not_ exposed via the CLI at all, and the VPP community develops and maintains an elaborate set of tools to allow external programs -to (re)configure the dataplane. One such tool is my own [[vppcfg]({% post_url 2022-04-02-vppcfg-2 -%})] which takes a YAML specification that describes the dataplane configuration, and applies it +to (re)configure the dataplane. One such tool is my own [[vppcfg]({{< ref "2022-04-02-vppcfg-2" >}})] which takes a YAML specification that describes the dataplane configuration, and applies it safely to a running VPP instance. ## Introduction @@ -142,7 +141,7 @@ The VPP API defines three types of message exchanges: If the convention is kept, the API machinery will correlate the `foo` and `foo_reply` messages into RPC services. But it's also possible to be explicit about these, by defining _service_ scopes in the `*.api` files. I'll take two examples, the first one is from the Linux Control Plane plugin (which -I've [[written about]({% post_url 2021-08-12-vpp-1 %})] a lot while I was contributing to it back in +I've [[written about]({{< ref "2021-08-12-vpp-1" >}})] a lot while I was contributing to it back in 2021). 
**Dump/Detail (example)**: When enumerating _Linux Interface Pairs_, the service definition looks like @@ -193,7 +192,7 @@ and an additional 80 or so APIs defined by _plugins_ like the Linux Control Plan Implementing APIs is pretty user friendly, largely due to the `vppapigen` tool taking so much of the boilerplate and autogenerating things. As an example, I need to be able to enumerate the interfaces -that are MPLS enabled, so that I can use my [[vppcfg]({% post_url 2022-03-27-vppcfg-1 %})] utility to +that are MPLS enabled, so that I can use my [[vppcfg]({{< ref "2022-03-27-vppcfg-1" >}})] utility to configure MPLS. I contributed an API called `mpls_interface_dump` which returns a stream of `mpls_interface_details` messages. You can see that small contribution in merged [[Gerrit 39022](https://gerrit.fd.io/r/c/vpp/+/39022)]. diff --git a/content/articles/2024-02-10-vpp-freebsd-1.md b/content/articles/2024-02-10-vpp-freebsd-1.md index c41a5b8..0269feb 100644 --- a/content/articles/2024-02-10-vpp-freebsd-1.md +++ b/content/articles/2024-02-10-vpp-freebsd-1.md @@ -55,7 +55,7 @@ different from a leaky sieve. ### VMs: IPng Lab -I really like the virtual machine environment that the [[IPng Lab]({% post_url 2022-10-14-lab-1 %})] +I really like the virtual machine environment that the [[IPng Lab]({{< ref "2022-10-14-lab-1" >}})] provides. So my very first step is to grab an UFS based image like [[these ones](https://download.freebsd.org/releases/VM-IMAGES/14.0-RELEASE/amd64/Latest/)], and I prepare a lab image. This goes roughly as follows -- @@ -117,7 +117,7 @@ pim@summer:/usr/src/linux-source-6.1$ sudo make -j`nproc` bindeb-pkg Finally, I add a new LAB overlay type called `freebsd` to the Python/Jinja2 tool I built, which I use to create and maintain the LAB hypervisors. If you're curious about this part, take a look at -the [[article]({% post_url 2022-10-14-lab-1 %})] I wrote about the environment. I reserve LAB #2 +the [[article]({{< ref "2022-10-14-lab-1" >}})] I wrote about the environment. I reserve LAB #2 running on `hvn2.lab.ipng.ch` for the time being, as LAB #0 and #1 are in use by other projects. To cut to the chase, here's what I type to generate the overlay and launch a LAB using the FreeBSD I just made. There's not much in the overlay, really just some templated `rc.conf` to set the correct @@ -160,8 +160,7 @@ Next, I take three spare Supermicro SYS-5018D-FN8T, which have the following spe * m.SATA 120G boot SSD * 2x16GB of ECC RAM -These were still arranged in a test network from when Adrian and I worked on the [[VPP MPLS]({% -post_url 2023-05-07-vpp-mpls-1 %})] project together, and back then I called the three machines +These were still arranged in a test network from when Adrian and I worked on the [[VPP MPLS]({{< ref "2023-05-07-vpp-mpls-1" >}})] project together, and back then I called the three machines `France`, `Belgium` and `Netherlands`. I decide to reuse that, and save myself some recabling. Using IPMI, I install the `France` server with FreeBSD, while the other two, for now, are still running Debian. 
This can be useful for (a) side by side comparison tests and (b) to be able to diff --git a/content/articles/2024-02-17-vpp-freebsd-2.md b/content/articles/2024-02-17-vpp-freebsd-2.md index 7b9c18e..a9b9855 100644 --- a/content/articles/2024-02-17-vpp-freebsd-2.md +++ b/content/articles/2024-02-17-vpp-freebsd-2.md @@ -28,7 +28,7 @@ over 2023 and forward to 2024: > experience, improving hardware support on arm64 platforms, and adding support for low power idle > on Intel and arm64 hardware. -In my first [[article]({% post_url 2024-02-10-vpp-freebsd-1 %})], I wrote a sort of a _hello world_ +In my first [[article]({{< ref "2024-02-10-vpp-freebsd-1" >}})], I wrote a sort of a _hello world_ by installing FreeBSD 14.0-RELEASE on both a VM and a bare metal Supermicro, and showed that Tom's VPP branch compiles, runs and pings. In this article, I'll take a look at some comparative performance numbers. @@ -41,7 +41,7 @@ utilities like a _netmap_ bridge, and of course completely userspace based datap VPP project that I'm working on here. Last week, I learned that VPP has a _netmap_ driver, and from previous travels I am already quite familiar with its _DPDK_ based forwarding. I decide to do a baseline loadtest for each of these on the Supermicro Xeon-D1518 that I installed last week. See the -[[article]({% post_url 2024-02-10-vpp-freebsd-1 %})] for details on the setup. +[[article]({{< ref "2024-02-10-vpp-freebsd-1" >}})] for details on the setup. The loadtests will use a common set of different configurations, using Cisco T-Rex's default benchmark profile called `bench.py`: @@ -203,7 +203,7 @@ the kernel (which clocked in at 1.2Mpps). It's good to have a baseline on this machine on how the FreeBSD kernel itself performs. But of course this series is about Vector Packet Processing, so I now turn my attention to the VPP branch that Tom shared with me. I wrote a bunch of details about the VM and bare metal install in my -[[first article]({% post_url 2024-02-10-vpp-freebsd-1 %})] so I'll just go straight to the +[[first article]({{< ref "2024-02-10-vpp-freebsd-1" >}})] so I'll just go straight to the configuration parts: ``` diff --git a/content/articles/2024-03-06-vpp-babel-1.md b/content/articles/2024-03-06-vpp-babel-1.md index fcd7a77..383dce2 100644 --- a/content/articles/2024-03-06-vpp-babel-1.md +++ b/content/articles/2024-03-06-vpp-babel-1.md @@ -10,12 +10,12 @@ title: VPP with Babel - Part 1 Ever since I first saw VPP - the Vector Packet Processor - I have been deeply impressed with its performance and versatility. For those of us who have used Cisco IOS/XR devices, like the classic _ASR_ (aggregation services router), VPP will look and feel quite familiar as many of the approaches -are shared between the two. Thanks to the [[Linux ControlPlane]({% post_url 2021-08-12-vpp-1 %})] +are shared between the two. Thanks to the [[Linux ControlPlane]({{< ref "2021-08-12-vpp-1" >}})] plugin, higher level control plane software becomes available, that is to say: things like BGP, OSPF, LDP, VRRP and so on become quite natural for VPP. IPng Networks is a small service provider that has built a network based entirely on open source: -[[Debian]({% post_url 2023-12-17-defra0-debian %})] servers with widely available Intel and Mellanox +[[Debian]({{< ref "2023-12-17-defra0-debian" >}})] servers with widely available Intel and Mellanox 10G/25G/100G network cards, paired with [[VPP](https://fd.io/)] for the dataplane, and [[Bird2](https://bird.nic.cz/)] for the controlplane. 
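For readers new to that split between dataplane and controlplane, the quickest way to see the two halves talking to each other is from the shell. A minimal sketch, assuming the Linux Control Plane plugin is loaded and Bird2 is running; interface names and output will of course differ per router:

```bash
# VPP side: list the interface pairs that Linux CP mirrors into the kernel.
vppctl show lcp

# Linux side: the mirrored TAP interfaces show up as ordinary netdevs.
ip -br link show

# Bird2 side: OSPF, BGP, Babel and friends run against those mirrored netdevs.
birdc show protocols
```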
@@ -191,7 +191,7 @@ Babel-Bird out for a test flight. Thank you for the Babel-Bird-Build, Summer! ### Babel and the LAB -I decide to take an IPng [[lab]({% post_url 2022-10-14-lab-1 %})] out for a spin. These labs come +I decide to take an IPng [[lab]({{< ref "2022-10-14-lab-1" >}})] out for a spin. These labs come with four VPP routers and two Debian machines connected like so: {{< image src="/assets/vpp-mpls/LAB v2.svg" alt="Lab Setup" >}} @@ -288,8 +288,8 @@ IPv6 loopbacks across the network. IPv6 pings and looks good. However, IPv4 endpoints do not ping yet. The first thing I look at, is does VPP understand how to interpret an IPv4 route with an IPv6 nexthop? I think it does, because I -remember reviewing a change from Adrian during our MPLS [[project]({% post_url 2023-05-28-vpp-mpls-4 -%})], which he submitted in this [[Gerrit](https://gerrit.fd.io/r/c/vpp/+/38633)]. His change +remember reviewing a change from Adrian during our MPLS [[project]({{< ref "2023-05-28-vpp-mpls-4" >}})], +which he submitted in this [[Gerrit](https://gerrit.fd.io/r/c/vpp/+/38633)]. His change allows VPP to use routes with `rtnl_route_nh_get_via()` to map them to a different address family, exactly what I am looking for. The routes are correctly installed in the FIB: @@ -415,7 +415,7 @@ loop0 (up): ``` The Linux ControlPlane configuration will always synchronize interface information from VPP to -Linux, as I described back then when I [[worked on the plugin]({% post_url 2021-08-13-vpp-2 %})]. +Linux, as I described back then when I [[worked on the plugin]({{< ref "2021-08-13-vpp-2" >}})]. Babel starts and sets next hops for IPv4 that look like this: ``` @@ -631,7 +631,7 @@ to retire the many /31 IPv4 and /112 IPv6 transit networks (which consume about IPv4 addresses!). I will discuss my change with the VPP and Babel/Bird Developer communities and see if it makes sense to upstream my changes. Personally, I think it's a reasonable direction, because (a) both changes are backwards compatible and (b) its semantics are pretty straight forward. I'll -also add some configuration knobs to [[vppcfg]({% post_url 2022-04-02-vppcfg-2 %})] to make it +also add some configuration knobs to [[vppcfg]({{< ref "2022-04-02-vppcfg-2" >}})] to make it easier to configure VPP in this way. diff --git a/content/articles/2024-04-06-vpp-ospf.md b/content/articles/2024-04-06-vpp-ospf.md index 0d3c3b1..d4958e5 100644 --- a/content/articles/2024-04-06-vpp-ospf.md +++ b/content/articles/2024-04-06-vpp-ospf.md @@ -7,7 +7,7 @@ title: VPP with loopback-only OSPFv3 - Part 1 # Introduction -A few weeks ago I took a good look at the [[Babel]({% post_url 2024-03-06-vpp-babel-1 %})] protocol. +A few weeks ago I took a good look at the [[Babel]({{< ref "2024-03-06-vpp-babel-1" >}})] protocol. I found a set of features there that I really appreciated. The first was a latency aware routing protocol - this is useful for mesh (wireless) networks but it is also a good fit for IPng's usecase, notably because it makes use of carrier ethernet which, if any link in the underlying MPLS network @@ -55,7 +55,7 @@ precludes the ability for IPv6 nexthops to be used. Crap on a cracker! # OSPFv3 with IPv4 🥰 -But wait, not all is lost! Remember in my [[VPP Babel]({% post_url 2024-03-06-vpp-babel-1 %})] +But wait, not all is lost! Remember in my [[VPP Babel]({{< ref "2024-03-06-vpp-babel-1" >}})] article I mentioned that VPP has this ability to run _unnumbered_ interfaces? 
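A recap of what _unnumbered_ means follows right below; as a quick sketch first, this is how one interface
can be told to borrow the addresses of another over the API. The interface indices here are made up, and
this assumes the stock `sw_interface_set_unnumbered` call as exposed through vpp_papi:

```
#!/usr/bin/env python3
# Sketch: let an ethernet interface borrow the addresses of loop0.
# The sw_if_index values are hypothetical; look them up with sw_interface_dump.
from vpp_papi import VPPApiClient

vpp = VPPApiClient(server_address="/run/vpp/api.sock")
vpp.connect("unnumbered-example")

LOOP0_IDX = 1   # the interface that owns the IPv4/IPv6 addresses
ETH0_IDX = 2    # the interface that borrows them

vpp.api.sw_interface_set_unnumbered(
    sw_if_index=LOOP0_IDX,            # address-bearing interface
    unnumbered_sw_if_index=ETH0_IDX,  # the interface that becomes unnumbered
    is_add=True)

vpp.disconnect()
```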
To recap, this is a configuration where a primary interface, typically a loopback, will have an IPv4 and IPv6 address, say **192.168.10.2/32** and **2001:678:d78:200::2/128** and other interfaces will borrow from that. @@ -94,7 +94,7 @@ Meanwhile, in the Bird community, we were thinking about solving this problem in Babel allows a feature to use IPv6 transit networks with IPv4 destinations, by specifying an option called `extended next hop`. With this option, Babel will set a nexthop across address families. It may sound freaky at first, but it's not too strange when you think about it. Take a look at my -explanation in the [[Babel]({% post_url 2024-03-06-vpp-babel-1 %})] article on how IPv6 neighbor +explanation in the [[Babel]({{< ref "2024-03-06-vpp-babel-1" >}})] article on how IPv6 neighbor discovery can take the place of IPv4 ARP resolution to figure out the ethernet next hop. So our initial take was: why don't we do that with OSPFv3 as well? We thought of a trick to diff --git a/content/articles/2024-04-27-freeix-1.md b/content/articles/2024-04-27-freeix-1.md index 65fe6db..5fcb4d5 100644 --- a/content/articles/2024-04-27-freeix-1.md +++ b/content/articles/2024-04-27-freeix-1.md @@ -214,11 +214,9 @@ full table. It'll _merely_ provide a form of partial transit from member A at IX can be found at IXPs #2-#N. Makes the mind boggle? Don't worry, we'll figure it out together :) In an upcoming article I'll detail the programming work that goes into implementing this complex peering policy in Bird2 -as driving VPP routers (duh), with an IGP that is IPv4-less, because at this point, I [[may as well]({%post_url -2024-04-06-vpp-ospf %})] put my money where my mouth is. +as driving VPP routers (duh), with an IGP that is IPv4-less, because at this point, I [[may as well]({{< ref "2024-04-06-vpp-ospf" >}})] put my money where my mouth is. -If you're interested in this kind of stuff, take a look at the IPng Networks AS8298 [[Routing Policy]({% post_url -2021-11-14-routing-policy %})]. Similar to that one, this one will use a combination of functional programming, templates, +If you're interested in this kind of stuff, take a look at the IPng Networks AS8298 [[Routing Policy]({{< ref "2021-11-14-routing-policy" >}})]. Similar to that one, this one will use a combination of functional programming, templates, and clever expansions to make a customized per-member and per-peer configuration based on a YAML input file which dictates which member and which prefix is allowed to go where. diff --git a/content/articles/2024-05-17-smtp.md b/content/articles/2024-05-17-smtp.md index ae54f6b..4be0fdf 100644 --- a/content/articles/2024-05-17-smtp.md +++ b/content/articles/2024-05-17-smtp.md @@ -105,8 +105,7 @@ themselves will not need to do any DNSBL lookups, which is convenient because it them behind a loadbalancer and serve them entirely within IPng Site Local. If you're curious as to what this site local thing means, basically it's an internal network spanning all IPng's points of presence, with an IPv4, IPv6 and MPLS backbone that is disconnected from the internet. For more -details on the design goals, take a look at the [[article]({% post_url 2023-03-11-mpls-core -%})] I wrote about it last year. +details on the design goals, take a look at the [[article]({{< ref "2023-03-11-mpls-core" >}})] I wrote about it last year. #### Debian VMs @@ -259,7 +258,7 @@ this case for `chrma0` in Rümlang, Switzerland. 
However, when clients conne let the server present itself as simply `smtp-out.ipng.ch`, which will also be its public DNS name later, but put the internal FQDN for debugging purposes between parenthesis. See the `smtpd_banner` and `myhostname` for the destinction. I'll load up the `*.ipng.ch` wildcard certificate which I -described in my Let's Encrypt [[DNS-01]({% post_url 2023-03-24-lego-dns01 %})] article. +described in my Let's Encrypt [[DNS-01]({{< ref "2023-03-24-lego-dns01" >}})] article. ***Authorization***: I will make Postfix accept relaying for those users that are either in the @@ -364,7 +363,7 @@ This allows OpenDKIM to sign messages for any number of domains, using the corre Now that I have three of these identical VMs, I am ready to hook them up to the internet. On the way in, I will point `smtp-out.ipng.ch` to our NGINX cluster. I wrote about that cluster in a [[previous -article]({% post_url 2023-03-17-ipng-frontends %})]. I will add a snippet there, that exposes these +article]({{< ref "2023-03-17-ipng-frontends" >}})]. I will add a snippet there, that exposes these VMs behind a TCP loadbalancer like so: ``` @@ -394,7 +393,7 @@ spot within IPng Site Local (which, you will remember, is not connected directly There are three redundant gateways in IPng Site Local (in Geneva, Brüttisellen and Amsterdam). If any of these were to go down for maintenance or fail, the network will use OSPF E1 to find the next closest default gateway. I wrote about how this entire european network is connected via three -gateways that are self-repairing in this [[article]({% post_url 2023-03-11-mpls-core %})], in case +gateways that are self-repairing in this [[article]({{< ref "2023-03-11-mpls-core" >}})], in case you're curious. But, for the purposes of SMTP, it means that each of the internal `smtp-out` VMs will be seen by diff --git a/content/articles/2024-05-25-nat64-1.md b/content/articles/2024-05-25-nat64-1.md index b9dfe28..63b4bda 100644 --- a/content/articles/2024-05-25-nat64-1.md +++ b/content/articles/2024-05-25-nat64-1.md @@ -14,11 +14,11 @@ IPv6, VxLAN, GENEVE and GRE all in silicon, are very cheap on power and relative port. Centec switches allow for a modest but not huge amount of routes in the hardware forwarding tables. -I loadtested them in [[a previous article]({% post_url 2022-12-05-oem-switch-1 %})] at line rate +I loadtested them in [[a previous article]({{< ref "2022-12-05-oem-switch-1" >}})] at line rate (well, at least 8x10G at 64b packets and around 110Mpps), and they forward IPv4, IPv6 and MPLS traffic effortlessly, at 45 watts. -I wrote more about the Centec switches in [[my review]({% post_url 2023-03-11-mpls-core %})] of them +I wrote more about the Centec switches in [[my review]({{< ref "2023-03-11-mpls-core" >}})] of them back in 2022. ### IPng Site Local @@ -39,11 +39,11 @@ message bus using [[Nats](https://nats.io)], and of course monitoring with SNMP make use of this network. But it's not only internal services like management traffic, I also actively use this private network to expose _public_ services! -For example, I operate a bunch of [[NGINX Frontends]({% post_url 2023-03-17-ipng-frontends %})] that +For example, I operate a bunch of [[NGINX Frontends]({{< ref "2023-03-17-ipng-frontends" >}})] that have a public IPv4/IPv6 address, and reversed proxy for webservices (like [[ublog.tech](https://ublog.tech)] or [[Rallly](https://rallly.ipng.ch/)]) which run on VMs and Docker hosts which don't have public IP addresses. 
Another example which I wrote about [[last -week]({% post_url 2024-05-17-smtp %})], is a bunch of mail services that run on VMs without public +week]({{< ref "2024-05-17-smtp" >}})], is a bunch of mail services that run on VMs without public access, but are each carefully exposed via reversed proxies (like Postfix, Dovecot, or [[Roundcube](https://webmail.ipng.ch)]). It's an incredibly versatile network design! diff --git a/content/articles/2024-06-22-vpp-ospf-2.md b/content/articles/2024-06-22-vpp-ospf-2.md index c4347c4..8d196d4 100644 --- a/content/articles/2024-06-22-vpp-ospf-2.md +++ b/content/articles/2024-06-22-vpp-ospf-2.md @@ -14,10 +14,10 @@ transit networks between routers really start adding up. I explored two potential solutions to this problem: -1. **[[Babel]({% post_url 2024-03-06-vpp-babel-1 %})]** can use IPv6 nexthops for IPv4 destinations - +1. **[[Babel]({{< ref "2024-03-06-vpp-babel-1" >}})]** can use IPv6 nexthops for IPv4 destinations - which is _super_ useful because it would allow me to retire all of the IPv4 /31 point to point networks between my routers. -1. **[[OSPFv3]({% post_url 2024-04-06-vpp-ospf %})]** makes it difficult to use IPv6 nexthops for +1. **[[OSPFv3]({{< ref "2024-04-06-vpp-ospf" >}})]** makes it difficult to use IPv6 nexthops for IPv4 destinations, but in a discussion with the Bird Users mailinglist, we found a way: by reusing a single IPv4 loopback address on adjacent interfaces @@ -279,7 +279,7 @@ interface `loop0`. {{< image width="100px" float="left" src="/assets/freebsd-vpp/brain.png" alt="brain" >}} Planning and applying this is straight forward, but there's one detail I should -mention. In my [[previous article]({% post_url 2024-04-06-vpp-ospf %})] I asked myself a question: +mention. In my [[previous article]({{< ref "2024-04-06-vpp-ospf" >}})] I asked myself a question: would it be better to leave the addresses unconfigured in Linux, or would it be better to make the Linux Control Plane plugin carry forward the borrowed addresses? In the end, I decided to _not_ copy them forward. VPP will be aware of the addresses, but Linux will only carry them on the `loop0` @@ -464,8 +464,7 @@ of this: OSPFv2, cost will remain consistent; and also within the routers that speak OSPFv3, it will be consistent. Between them, routes will be learned, but cost will be roughly meaningless. -I upgrade another link, between router `chgtg0` and `ddln0` at my [[colo]({% post_url -2022-02-24-colo %})], which is connected via a 10G EoMPLS link from a local telco called Solnet. The +I upgrade another link, between router `chgtg0` and `ddln0` at my [[colo]({{< ref "2022-02-24-colo" >}})], which is connected via a 10G EoMPLS link from a local telco called Solnet. The colo, similar to IPng's office, has two redundant 10G uplinks, so if things were to fall apart, I can always quickly shutdown the offending link (thereby removing OSPFv3 adjacencies), and traffic will reroute. I have created two islands of OSPFv3, drawn in }})]. There's just something magical about remote-mounting a Debian Bookworm iso image from my workstation in Brüttisellen, Switzerland, in a router running in Amsterdam, to then proceed to use KVM over HTML5 to reinstall the whole thing remotely. We didn't have that, growing up!! @@ -371,8 +371,8 @@ I have one more thing to share. Up until now, the hypervisor has internal connec Local_, and a single IPv4 / IPv6 address in the shared colocation network. 
Almost all VMs at IPng run entirely in IPng Site Local, and will use reversed proxies and other tricks to expose themselves to the internet. But, I also use a modest amount of IPv4 and IPv6 addresses on the VMs here, for -example for those NGINX reversed proxies [[ref]({% post_url 2023-03-17-ipng-frontends %})], or my -SMTP relays [[ref]({% post_url 2024-05-17-smtp %})]. +example for those NGINX reversed proxies [[ref]({{< ref "2023-03-17-ipng-frontends" >}})], or my +SMTP relays [[ref]({{< ref "2024-05-17-smtp" >}})]. For this purpose, I will need to plumb through some form of colocation VLAN in each site, which looks very similar to the BGP uplink VLAN I described previously: @@ -459,7 +459,7 @@ I run an anycasted AS112 cluster in all sites where IPng has hypervisor capacity Amsterdam, my nodes are running on both Qupra and EUNetworks, and connect to LSIX, SpeedIX, FogIXP, FrysIX and behind AS8283 and AS8298. The nodes here handle roughly 5kqps at peak, and if RIPE NCC's node in Amsterdam goes down, this can go up to 13kqps (right, WEiRD?). I described the setup in an -[[article]({% post_url 2021-06-28-as112 %})]. You may be wondering: how do I get those internet +[[article]({{< ref "2021-06-28-as112" >}})]. You may be wondering: how do I get those internet exchanges backhauled to a VM at Coloclue? The answer is: VxLAN transport! Here's a relevant snippet from the `nlams0.ipng.ch` router config: @@ -519,15 +519,14 @@ At IPng, almost everything runs in the internal network called _IPng Site Local_ network via a few carefully placed NGINX frontends. There are two in my own network (in Geneva and Zurich), and one in IP-Max's network (in Zurich), and two at Coloclue (in Amsterdam). They frontend and do SSL offloading and TCP loadbalancing for a variety of websites and services. I described the -architecture and design in an [[article]({% post_url 2023-03-17-ipng-frontends %})]. There are +architecture and design in an [[article]({{< ref "2023-03-17-ipng-frontends" >}})]. There are currently ~120 or so websites frontended on this cluster. **SMTP Relays** \ I self-host my mail, and I tried to make a fully redundant and self-repairing SMTP in- and outbound with Postfix, IMAP server and redundant maildrop storage with Dovecot, a webmail service with Roundcube, and so on. Because I need to perform DNSBL lookups, this requires routable IPv4 and IPv6 -addresses. Two of my four mailservers run at Coloclue, which I described in an [[article]({% -post_url 2024-05-17-smtp %})]. +addresses. Two of my four mailservers run at Coloclue, which I described in an [[article]({{< ref "2024-05-17-smtp" >}})]. **Mailman Service** \ For FrysIX, FreeIX, and IPng itself, I run a set of mailing lists. The mailman service runs @@ -562,7 +561,7 @@ internal network, and NAT'ed towards the Internet. Each border gateway announces a default route towards the Centec switches, and connect to AS8298, AS8283 and AS25091 for internet connectivity. One of them runs in Amsterdam, and I wrote about -these gateways in an [[article]({% post_url 2023-03-11-mpls-core %})]. +these gateways in an [[article]({{< ref "2023-03-11-mpls-core" >}})]. **Public NAT64/DNS64 Gateways** \ I operate a set of four private NAT64/DNS64 gateways, one of which in Amsterdam. It pairs up and @@ -571,8 +570,7 @@ useful in general, I also operate two public NAT64/DNS64 gateways, one at Qupra EUNetworks. 
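The trick that DNS64 plays is purely mechanical: it embeds the IPv4 address of a v4-only destination into
an IPv6 prefix, and the NAT64 gateway later extracts it again. A small sketch of that mapping, assuming the
well-known prefix `64:ff9b::/96` from RFC 6052 (the prefix a given deployment hands out may differ), with an
illustrative IPv4 address:

```
#!/usr/bin/env python3
# Sketch of RFC 6052 address synthesis, as a DNS64 resolver would do it.
# Assumption: the well-known /96 prefix; a deployment may use its own prefix.
import ipaddress

NAT64_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def synthesize(v4: str) -> ipaddress.IPv6Address:
    """Embed an IPv4 address in the last 32 bits of the NAT64 prefix."""
    return ipaddress.IPv6Address(
        int(NAT64_PREFIX.network_address) | int(ipaddress.IPv4Address(v4)))

def extract(v6: str) -> ipaddress.IPv4Address:
    """What the NAT64 gateway recovers from the destination address."""
    return ipaddress.IPv4Address(int(ipaddress.IPv6Address(v6)) & 0xFFFFFFFF)

addr = synthesize("140.82.121.3")   # illustrative IPv4-only destination
print(addr)                         # 64:ff9b::8c52:7903
print(extract(str(addr)))           # 140.82.121.3
```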
You can try them for yourself by using the following anycasted resolver: `2a02:898:146:64::64` and performing a traceroute to an IPv4 only host, like `github.com`. Note: this works from anywhere, but for satefy reasons, I filter some ports like SMTP, NETBIOS and so on, -roughly the same way a TOR exit router would. I wrote about them in an [[article]({% post_url -2024-05-25-nat64-1 %})]. +roughly the same way a TOR exit router would. I wrote about them in an [[article]({{< ref "2024-05-25-nat64-1" >}})]. ``` pim@cons0-nlams0:~$ cat /etc/resolv.conf diff --git a/content/articles/2024-07-05-r86s.md b/content/articles/2024-07-05-r86s.md index a436dbb..bf6aaa4 100644 --- a/content/articles/2024-07-05-r86s.md +++ b/content/articles/2024-07-05-r86s.md @@ -12,7 +12,8 @@ issue 19" rack mountable machine like a Dell, HPE or SuperMicro machine is an ob come with redundant power supplies, PCIe v3.0 or better expansion slots, and can boot off of mSATA or NVME, with plenty of RAM. But for some people and in some locations, the power envelope or size/cost of these 19" rack mountable machines can be prohibitive. Sometimes, just having a smaller -form factor can be very useful: \ +form factor can be very useful: + ***Enter the GoWin R86S!*** {{< image width="250px" float="right" src="/assets/r86s/r86s-nvme.png" alt="R86S NVME" >}} @@ -321,8 +322,7 @@ adapter. I'll run the same eight loadtests: **{1514b,64b,64b-1Q,MPLS} x {unidirectional,bidirectional}** In the table above, I showed the output of `show runtime` in the VPP debug CLI. These numbers are -also exported in a prometheus exporter. I wrote about that in this [[article]({% post_url -2023-04-09-vpp-stats %})]. In Grafana, I can draw these timeseries as graphs, and it shows me a lot +also exported in a prometheus exporter. I wrote about that in this [[article]({{< ref "2023-04-09-vpp-stats" >}})]. In Grafana, I can draw these timeseries as graphs, and it shows me a lot about where VPP is spending its time. Each _node_ in the directed graph counts how many vectors (packets) it has seen, and how many CPU cycles it has spent doing its work. diff --git a/content/articles/2024-08-03-gowin.md b/content/articles/2024-08-03-gowin.md index 0082715..15dc2ec 100644 --- a/content/articles/2024-08-03-gowin.md +++ b/content/articles/2024-08-03-gowin.md @@ -11,7 +11,7 @@ Last month, I took a good look at the Gowin R86S based on Jasper Lake (N6005) CP [[ref](https://www.gowinfanless.com/products/network-device/r86s-firewall-router/gw-r86s-u-series)], which is a really neat little 10G (and, if you fiddle with it a little bit, 25G!) router that runs off of USB-C power and can be rack mounted if you print a bracket. Check out my findings in this -[[article]({% post_url 2024-07-05-r86s %})]. +[[article]({{< ref "2024-07-05-r86s" >}})]. David from Gowin reached out and asked me if I was willing to also take a look their Alder Lake (N305) CPU, which comes in a 19" rack mountable chassis, running off of 110V/220V AC mains power, @@ -153,7 +153,7 @@ I'm very curious how this NIC stacks up between DPDK and RDMA -- read on below f ### DPDK: ConnectX-5 EN I swap the card out of its OCP bay and replace it with a ConnectX-5 EN that I have from when I -tested the [[R86S]({% post_url 2024-07-05-r86s %})]. It identifies as: +tested the [[R86S]({{< ref "2024-07-05-r86s" >}})]. 
It identifies as:
 
 ```
 0e:00.0 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
diff --git a/content/services.md b/content/services.md
index 0edbcfd..537367c 100644
--- a/content/services.md
+++ b/content/services.md
@@ -18,7 +18,7 @@ connected with dark fiber, and we have access to several local loop providers in
 example, we can arrange connectivity to Colozueri, Equinix ZH4 and Equinix ZH5 directly, and using
 our partners (such as IP-Max, Init7, Openfactory), domestically and to most european cities.
 
-You can read more about our network in this [informative post]({% post_url 2021-02-27-network %}).
+You can read more about our network in this [[informative post]({{< ref "2021-02-27-network" >}})].
 
 ### IP Transit
 
@@ -32,7 +32,8 @@ Gaining access to this wealth of IPv4 and IPv6 coverage is as easy as finding an
 one of our points of presence, establishing a BGP session to us, and announcing your netblock(s).
 We'll take it from there!
 
-You can read more about our BGP capabilities in this [informative post]({% post_url 2021-02-27-network %}).
+You can read more about our BGP capabilities in this
+[[informative post]({{< ref "2021-02-27-network" >}})].
 
 ### Local Loop Ethernet
 
@@ -52,8 +53,8 @@ your own home, or to the main internet hubs of Zurich, are easily accomplished i
 facility. If more space is needed, we are regulars in most all Swiss carrier housing facilities,
 and can help broker a deal that is tailored to your needs.
 
-You can read more about how we built our own colocation from scratch in this [informative post]({%
-post_url 2022-02-24-colo %}).
+You can read more about how we built our own colocation from scratch in this
+[[informative post]({{< ref "2022-02-24-colo" >}})].
 
 ## Project Design / Execution
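As a closing sanity check for a sweep like this one, a small script can confirm that no Jekyll-style
`post_url` tags remain and that every Hugo `ref` target resolves to an article. This is only an
illustrative sketch; the paths and the shortcode pattern are assumptions based on the layout visible in
this patch:

```
#!/usr/bin/env python3
# Sketch: verify a post_url -> Hugo ref conversion across the content tree.
# Assumptions: run from the site root, articles live in content/articles/,
# and refs look like {{< ref "2021-02-27-network" >}}.
import pathlib
import re
import sys

CONTENT = pathlib.Path("content")
REF_RE = re.compile(r'{{<\s*ref\s+"([^"]+)"\s*>}}')
LEFTOVER_RE = re.compile(r"{%\s*post_url")

problems = 0
for md in CONTENT.rglob("*.md"):
    text = md.read_text(encoding="utf-8")
    if LEFTOVER_RE.search(text):
        print(f"{md}: still contains a post_url tag")
        problems += 1
    for target in REF_RE.findall(text):
        if not (CONTENT / "articles" / f"{target}.md").exists():
            print(f"{md}: ref target {target!r} not found")
            problems += 1

sys.exit(1 if problems else 0)
```

Run from the site root, it should print nothing and exit 0 once the conversion is complete.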