---
title: "Week 14, Tuesday: Patches Galore"
date: 2024-10-29T21:55:00+02:00
---

{{< image frame="true" width="17em" float="right" src="/img/headline/dreamscape-03.png" alt="Credit: Dreamscape, Kush Sessions, YouTube" >}}

This morning my buddy Arend sends me a message on Telegram, asking if I can check the port in
Qupra. Oh my deity, it's finally happening! After the general assembly of Coloclue approved a
member's petition to allow members to install cross connects at Qupra, a few months of "kastje,
muur" (Dutch for being sent from pillar to post) happened, and the networking committee and
association board reached an agreement on how this would work.

A few months after that, Arend tried to install a cross connect, but we weren't ready
paperwork-wise. I brought it back to the attention of the networking committee and we identified
the missing pieces: a patch panel with keystones needed to be installed in each rack (at the time,
only two of the racks had one), and the administrative database needed a change to document which
members had which cross connects.

Tim took care of the first thing: he ordered the panels and keystones and went to the datacenter
to install them. I offered to take care of the second thing, but since the administrative database
holds need-to-know information, our treasurer Arjan preferred to add the records himself. Once
these two things were taken care of, all I had to do was wait for a practical moment :) I had
planned to deploy the fiber myself last week, but I had to cancel my trip to Amsterdam due to the
COVID situation.

So I was surprised and delighted that Arend pinged me. The Qupra FrysIX switch was pre-configured,
and all that was left was to plug things in. Arend made quick work of it, and also put in the
cross connects for a few other members at Coloclue; he's such a sweetheart! For me, this link will
be used to offload the hypervisor at Equinix AM3, which is running low on disk throughput because
I used Samsung consumer SSDs in it. I shipped Arend a few enterprise SAS SSDs a while ago, but he
hasn't gotten around to deploying them yet. More importantly, the AM3 hypervisor runs the FrysIX
routeserver, LibreNMS and IXPManager.

After the Qupra gig, Arend made his way to NIKHEF, where he installed the FrysIX patch for FreeIX
Remote directly into the VPP router `nlams0.net.free-ix.net`. That router now has LSIX, SpeedIX,
and FrysIX connected. I spend some time bringing FreeIX Remote AS50869 into quarantine and then
into the production VLAN. That's a benefit of running the IXP: I get to expedite my own
connections :)

Now that the FreeIX Remote router is connected to FrysIX, I allocate a private VLAN between it and
IPng's infrastructure. This allows me to create a VPWS (L2VPN, Ethernet over MPLS) on IPng's MPLS
switches `msw0.nlams0` and `msw1.chrma0`, from this router in Amsterdam to the one I already
installed in Zurich. iBGP comes up, and there are now three routers in play (`nlams0`, `chrma0`,
and `grskg0`). Amongst them, they know about 207K IPv4 prefixes and 64.7K IPv6 prefixes, and all
of them can be reached via direct or routeserver peering. How cool is that?
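
For illustration, one of those iBGP sessions might look roughly like this in BIRD 2 (a minimal
sketch: the session name and loopback addresses are made up, only AS50869 is real):

```
# Hypothetical iBGP session on nlams0 towards chrma0; the addresses are
# documentation examples, not the real loopbacks.
protocol bgp ibgp_chrma0 {
  local 192.0.2.1 as 50869;
  neighbor 192.0.2.2 as 50869;  # same AS, so this is iBGP
  ipv4 { import all; export all; next hop self; };
  ipv6 { import all; export all; next hop self; };
}
```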

```
pim@nlams0:~$ birdc show route count
BIRD v2.15.1-4-g280daed5-x ready.
800934 of 800934 routes for 207438 networks in table master4
449754 of 449754 routes for 64696 networks in table master6
1501107 of 1501107 routes for 500369 networks in table t_roa4
364077 of 364077 routes for 121359 networks in table t_roa6
Total: 3115872 of 3115872 routes for 893862 networks in 4 tables
```

In the evening I send a maintenance announcement out to FrysIX members: in the night from
Wednesday to Thursday this week, I will move the routeserver RS2 and the IXPManager over to the
hypervisor at Qupra, which now sports a 10G connection to the FrysIX peering switch there. I have
plumbed the management VLAN 264, the Quarantine VLAN 2605, and the Peering LAN 2604 through to the
hypervisor.
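
On a Linux hypervisor, that kind of plumbing could be done with VLAN sub-interfaces on the trunk
port. A sketch, assuming a trunk interface named `eno1` (the interface name is an assumption, not
the actual setup):

```
# Hypothetical: expose the three FrysIX VLANs on trunk port eno1.
ip link add link eno1 name vlan264  type vlan id 264    # management
ip link add link eno1 name vlan2604 type vlan id 2604   # Peering LAN
ip link add link eno1 name vlan2605 type vlan id 2605   # Quarantine
ip link set vlan264 up; ip link set vlan2604 up; ip link set vlan2605 up
```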

I practice by moving `nms.frys-ix.net` over; this is a non-intrusive change. Using ZFS block
device replication, I can pump over the boot disk at about 110 MB/s, because the hypervisor itself
has "only" a one gigabit connection. I boot the VM, and it comes up cleanly. Nice. I spend a few
hours preparing the move of the other two machines (RS2 and IXPManager), which are service
impacting. But I can start by making a snapshot of the block devices, copying their data over
ahead of time, and then copying a final snapshot incrementally.
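
That flow might look something like this (a sketch; the dataset name and target host are made up):

```
# Ahead of time: snapshot the VM's zvol and send the bulk of the data.
zfs snapshot ssd-vol0/rs2@premove
zfs send ssd-vol0/rs2@premove | ssh hvn0.qupra zfs recv -F ssd-vol0/rs2

# During the maintenance window: stop the VM, snapshot again, and send
# only the delta since @premove.
zfs snapshot ssd-vol0/rs2@final
zfs send -i @premove ssd-vol0/rs2@final | ssh hvn0.qupra zfs recv ssd-vol0/rs2
```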

Today was a good day for FrysIX :)