Replace all post_url with Hugo ref blocks

2024-08-05 01:43:55 +02:00
parent c1f1775c91
commit a2f10236a3
56 changed files with 221 additions and 241 deletions
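
The commit only contains the rewritten Markdown, not the tool that produced it. A bulk conversion of this kind could be sketched in a few lines of Python; the regex, the `content/` path and the script as a whole are illustrative assumptions, not something shipped in this change:

```python
#!/usr/bin/env python3
"""Sketch: rewrite Jekyll {% post_url ... %} links into Hugo {{< ref ... >}} shortcodes.

Hypothetical helper for illustration only; the content/ path is an assumption.
"""
import re
from pathlib import Path

POST_URL = re.compile(r"\{%\s*post_url\s+([\w-]+)\s*%\}")

def convert(text: str) -> str:
    # {%post_url 2021-02-27-network %}  ->  {{< ref "2021-02-27-network" >}}
    return POST_URL.sub(r'{{< ref "\1" >}}', text)

for md in Path("content").rglob("*.md"):
    before = md.read_text()
    after = convert(before)
    if after != before:
        md.write_text(after)
        print(f"rewrote {md}")
```

Running something like this over a Hugo site rewrites every `{% post_url slug %}` occurrence into `{{< ref "slug" >}}`, which is exactly the shape of the per-line changes in the hunks below.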


@@ -7,9 +7,9 @@ title: 'Review: Cisco ASR9006/RSP440-SE'
{{< image width="180px" float="right" src="/assets/asr9006/ipmax.png" alt="IP-Max" >}}
-If you've read up on my articles, you'll know that I have deployed a [European Ring]({%post_url 2021-02-27-network %}),
-which was reformatted late last year into [AS8298]({%post_url 2021-10-24-as8298 %}) and upgraded to run
-[VPP Routers]({%post_url 2021-09-21-vpp-7 %}) with 10G between each city. IPng Networks rents these 10G point to point
+If you've read up on my articles, you'll know that I have deployed a [European Ring]({{< ref "2021-02-27-network" >}}),
+which was reformatted late last year into [AS8298]({{< ref "2021-10-24-as8298" >}}) and upgraded to run
+[VPP Routers]({{< ref "2021-09-21-vpp-7" >}}) with 10G between each city. IPng Networks rents these 10G point to point
virtual leased lines between each of our locations. It's a really great network, and it performs so well because it's
built on an EoMPLS underlay provided by [IP-Max](https://ip-max.net/). They, in turn, run carrier grade hardware in the
form of Cisco ASR9k. In part, we're such a good match together, because my choice of [VPP](https://fd.io/) on the IPng
@@ -157,7 +157,7 @@ of stability beyond Cisco and maybe Juniper. So if you want _Rock Solid Internet_
the way to go.
I have written a word or two on how VPP (an open source dataplane very similar to these industrial machines)
-works. A great example is my recent [VPP VLAN Gymnastics]({%post_url 2022-02-14-vpp-vlan-gym %}) article.
+works. A great example is my recent [VPP VLAN Gymnastics]({{< ref "2022-02-14-vpp-vlan-gym" >}}) article.
There's a lot I can learn from comparing the performance between VPP and Cisco ASR9k, so I will focus
on the following set of practical questions:
@@ -183,7 +183,7 @@ Mellanox ConnectX5-Ex (PCIe v4.0 x16) network card sporting two 100G interfaces,
with this 2x10G single interface, and 2x20G LAG, even with 64 byte packets. I am continually amazed that
a full line rate loadtest of small 64 byte packets at a rate of 40Gbps boils down to 59.52Mpps!
-For each loadtest, I ramp up the traffic using a [T-Rex loadtester]({%post_url 2021-02-27-coloclue-loadtest %})
+For each loadtest, I ramp up the traffic using a [T-Rex loadtester]({{< ref "2021-02-27-coloclue-loadtest" >}})
that I wrote. It starts with a low-pps warmup duration of 30s, then it ramps up from 0% to a certain line rate
(in this case, alternating to 10GbpsL1 for the single TenGig tests, or 20GbpsL1 for the LACP tests), with a
rampup duration of 120s and finally it holds for a duration of 30s.
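
As a sanity check on the 59.52Mpps figure quoted above: a 64 byte frame occupies 84 bytes on the wire once preamble, start-of-frame delimiter and inter-frame gap are included, so the L1 packet rates work out as below (a small illustrative calculation, not the trex-loadtest code itself):

```python
def l1_packet_rate(line_rate_bps: float, frame_size: int) -> float:
    """Packets per second at a given L1 line rate for a given Ethernet frame size."""
    overhead = 7 + 1 + 12                     # preamble + SFD + inter-frame gap, in bytes
    bits_per_packet = (frame_size + overhead) * 8
    return line_rate_bps / bits_per_packet

print(l1_packet_rate(40e9, 64) / 1e6)         # ~59.52 Mpps for 2x20G of 64 byte frames
print(l1_packet_rate(10e9, 64) / 1e6)         # ~14.88 Mpps for a single 10G port
```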
@@ -254,7 +254,7 @@ not easily available (ie. both VPP as well as the ASR9k in this case!)
### Test 1.1: 10G L2 Cross Connect
A simple matter of virtually patching one interface into the other, I choose the first port on blade 1 and 2, and
-tie them together in a `p2p` cross connect. In my [VLAN Gymnastics]({%post_url 2022-02-14-vpp-vlan-gym %}) post, I
+tie them together in a `p2p` cross connect. In my [VLAN Gymnastics]({{< ref "2022-02-14-vpp-vlan-gym" >}}) post, I
called this a `l2 xconnect`, and although the configuration statements are a bit different, the purpose and expected
semantics are identical:
@@ -292,7 +292,7 @@ imix | 3.25 Mpps | 9.94 Gbps | 6.46 Mpps | 19.78 Gbps
### Test 1.2: 10G L2 Bridge Domain
I then keep the two physical interfaces in `l2transport` mode, but change the type of l2vpn into a
-`bridge-domain`, which I described in my [VLAN Gymnastics]({%post_url 2022-02-14-vpp-vlan-gym %}) post
+`bridge-domain`, which I described in my [VLAN Gymnastics]({{< ref "2022-02-14-vpp-vlan-gym" >}}) post
as well. VPP and Cisco IOS/XR semantics look very similar indeed, they differ really only in the way
in which the configuration is expressed:
@@ -450,7 +450,7 @@ AggregatePort 2 20000 Mbit 0.0000019575% 0.0000023950% 0
It's clear that both `AggregatePort` interfaces have 20Gbps of capacity and are using an L3
loadbalancing policy. Cool beans!
-If you recall my loadtest theory in for example my [Netgate 6100 review]({%post_url 2021-11-26-netgate-6100%}),
+If you recall my loadtest theory in for example my [Netgate 6100 review]({{< ref "2021-11-26-netgate-6100" >}}),
it can sometimes be useful to operate a single-flow loadtest, in which the source and destination
IP:Port stay the same. As I'll demonstrate, it's not only relevant for PC based routers like ones built
on VPP, it can also be very relevant for silicon-based, high-end routers!
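
The reason single-flow tests matter on a LAG is that the egress member (and, on a software router, the receiving worker core) is chosen by hashing the flow's 5-tuple, so one flow can never spread across members. A minimal sketch of that selection, assuming a generic hash rather than any specific vendor algorithm, with arbitrary example addresses:

```python
import hashlib

def lag_member(src_ip: str, dst_ip: str, sport: int, dport: int, proto: str, n_members: int) -> int:
    """Pick a LAG member for a flow by hashing its 5-tuple (illustrative only)."""
    key = f"{src_ip} {dst_ip} {sport} {dport} {proto}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % n_members

# A single-flow loadtest keeps the 5-tuple constant, so every packet maps to the
# same member: at most one 10G link of a 2x10G LAG (and often a single worker
# core) carries all of the offered load.
print(lag_member("16.0.0.1", "48.0.0.1", 1024, 12, "udp", 2))
```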
@@ -729,7 +729,7 @@ I took out for a spin here). They are large (10U of rackspace), heavy (40kg load
list price, the street price is easily $10'000,- apiece).
On the other hand, we have these PC based machines with Vector Packet Processing, operating as low as 19W for 2x10G,
-2x1G and 4x2.5G ports (like the [Netgate 6100]({%post_url 2021-11-26-netgate-6100%})) and offering roughly equal
+2x1G and 4x2.5G ports (like the [Netgate 6100]({{< ref "2021-11-26-netgate-6100" >}})) and offering roughly equal
performance per port, except having to drop only $700,- apiece. The VPP machines come with ~infinite RAM, even a
16GB machine will run much larger routing tables, including full BGP and so on - there is no (need for) TCAM, and yet
routing performance scales out with CPUs and larger CPU instruction/data-cache. Looking at my Ryzen 5950X based Hippo/Rhino