
Building vpp-containerlab

This Docker container creates a VPP instance based on the latest VPP release. It starts up as normal, using /etc/vpp/startup.conf (which Containerlab may replace when it starts its containers). Once started, it executes /etc/vpp/bootstrap.vpp within the dataplane. There are two relevant files:

  1. clab.vpp -- generated by files/init-container.sh. Its purpose is to bind the veth interfaces that containerlab has added to the container into the VPP dataplane (see below).
  2. vppcfg.vpp -- generated by files/init-container.sh. Its purpose is to read the user-specified vppcfg.yaml file and convert it into VPP CLI commands (see the sketch below). If no YAML file is specified, or if it is not syntactically valid, an empty file is generated instead.
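
To see what that conversion produces, the planner step can also be run by hand. A minimal sketch, assuming vppcfg's plan subcommand and flags as commonly documented (the paths are illustrative):

# convert a vppcfg YAML file into VPP CLI commands without talking to a running VPP
vppcfg plan --novpp -c /etc/vpp/vppcfg.yaml -o /tmp/vppcfg.vpp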

For Containerlab users who wish to have more control over their VPP bootstrap, it's possible to bind-mount /etc/vpp/bootstrap.vpp.
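
For example, a topology node could ship its own bootstrap file using containerlab's binds option; a minimal sketch (the local ./bootstrap.vpp path is a placeholder):

my-node:
  image: git.ipng.ch/ipng/vpp-containerlab:latest
  kind: fdio_vpp
  binds:
    - ./bootstrap.vpp:/etc/vpp/bootstrap.vpp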

Building

To build, this container uses Docker's buildx; on Debian Bookworm this requires the upstream (docker.com) packages described [here]. To allow buildx to build multi-arch images, the QEMU binfmt emulators must also be installed, with:

docker run --privileged --rm tonistiigi/binfmt --install all
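
Once the emulators are registered, docker buildx ls should list the extra platforms (for example linux/arm64 on an amd64 host):

docker buildx ls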

Then, ongoing builds can be cross-platform and take about 1500 seconds on an AMD64 i7-12700T. The buildx invocation will build 'latest' and then tag it with the current VPP package release, which you can get from vppctl show version, like so:

IMG=git.ipng.ch/ipng/vpp-containerlab
ARCH=linux/$(uname -m | sed 's/x86_64/amd64/;s/aarch64/arm64/')
TAG=latest
docker buildx build --load --platform $ARCH \
  --tag $IMG:$TAG -f docker/Dockerfile docker/

TAG=v25.10-release
docker buildx build --load --build-arg REPO=2510 --platform $ARCH \
  --tag $IMG:$TAG -f docker/Dockerfile docker/

Sideloading locally built VPP packages

Instead of pulling VPP from packagecloud, you can sideload locally built .deb packages using Docker buildx's --build-context flag. This is useful for testing unreleased VPP builds or working around version-specific issues (for example, VPP 25.10 fails to start on kernels that do not expose NUMA topology via sysfs, such as OrbStack on Apple Silicon; VPP 26.06+ fixes this).

Point --build-context vppdebs=<path> at a directory containing libvppinfra_*.deb, vpp_*.deb, and vpp-plugin-core_*.deb. If the context is not provided, the build falls back to packagecloud as normal. The .deb files are bind-mounted during the build and never stored in an image layer. Note: the directory must contain .deb files for exactly one VPP version; if multiple versions are present the glob patterns will match ambiguously and the build will fail.

# Build from locally compiled VPP packages (e.g. from ~/src/vpp after make pkg-deb):
IMG=git.ipng.ch/ipng/vpp-containerlab
ARCH=linux/$(uname -m | sed 's/x86_64/amd64/;s/aarch64/arm64/')
VPPDEBS=~/src/vpp/build-root
docker buildx build --load --platform $ARCH \
  --build-context vppdebs=$VPPDEBS \
  --tag $IMG:latest -f docker/Dockerfile docker/

# Build from packagecloud as normal (no --build-context needed):
docker buildx build --load --platform $ARCH \
  --tag $IMG:latest -f docker/Dockerfile docker/

Multiarch

Building a combined linux/amd64 + linux/arm64 manifest requires two machines building natively — one per architecture. The setup below uses summer (amd64, Linux) and jessica (arm64, macOS running OrbStack). VPP must be compiled on each machine before building the Docker image, because the sideloader mounts locally built .deb files that are architecture-specific.

Setup

On jessica, the Docker daemon runs inside OrbStack's Linux VM. Expose its SSH port so summer can reach it. OrbStack listens on 127.0.0.1:32222; add a jump-host entry to ~/.ssh/config on summer:

Host jessica-orb
    HostName 127.0.0.1
    Port 32222
    User pim
    ProxyCommand ssh jessica -W 127.0.0.1:32222
    IdentityFile ~/.ssh/jessica-orb-key
    IdentitiesOnly yes
    UserKnownHostsFile /dev/null
    StrictHostKeyChecking no

Copy OrbStack's SSH key from jessica to summer:

scp jessica:~/.orbstack/ssh/id_ed25519 ~/.ssh/jessica-orb-key
chmod 600 ~/.ssh/jessica-orb-key

Verify the full chain works:

ssh jessica-orb 'uname -m && docker info | head -3'
# expected: aarch64

Create the multiarch builder (run once on summer):

docker buildx create --name multiarch --driver docker-container --platform linux/amd64 --node summer-amd64
docker buildx create --append --name multiarch --driver docker-container --platform linux/arm64 --node jessica-arm64 ssh://jessica-orb
docker buildx inspect multiarch --bootstrap
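
With the builder bootstrapped, an ordinary packagecloud-based build can target both platforms in a single invocation. A sketch (this does not apply when sideloading local debs; see below):

IMG=git.ipng.ch/ipng/vpp-containerlab
docker buildx build --builder multiarch --platform linux/amd64,linux/arm64 \
  --push --tag $IMG:latest -f docker/Dockerfile docker/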

Build

Build VPP on both machines first (make pkg-deb in your VPP source tree on both summer and the OrbStack VM on jessica). When sideloading .deb files, Docker sends the build context from the client to every builder node — meaning summer's amd64 debs would be sent to jessica-orb for the arm64 build (wrong arch). The solution is to build each platform separately on its native machine and combine them into a manifest.

IMG=git.ipng.ch/ipng/vpp-containerlab
VPPDEBS=~/src/vpp/build-root

# Step 1: build amd64 on summer, push with platform tag
docker buildx build --platform linux/amd64 \
  --build-context vppdebs=$VPPDEBS \
  --push --tag $IMG:latest-amd64 \
  -f docker/Dockerfile docker/

# Step 2: build arm64 natively on jessica-orb, push with platform tag
#   (repo and VPP debs must be present on jessica-orb at the same paths)
#   Note: $IMG and $VPPDEBS expand on summer before being sent over SSH -- set them first.
ssh jessica-orb "cd ~/src/vpp-containerlab && \
  docker buildx build --platform linux/arm64 \
    --build-context vppdebs=$VPPDEBS \
    --push --tag $IMG:latest-arm64 \
    -f docker/Dockerfile docker/"

# Step 3: combine into a single multi-arch manifest and push in one step
# (docker buildx build --push produces manifest lists, so use imagetools, not docker manifest)
docker buildx imagetools create \
  --tag $IMG:latest \
  $IMG:latest-amd64 \
  $IMG:latest-arm64
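
To verify the result, imagetools can also inspect the pushed manifest; both linux/amd64 and linux/arm64 entries should appear:

docker buildx imagetools inspect $IMG:latest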

Testing standalone container

docker network create --driver=bridge clab-network --subnet=192.0.2.0/24 \
                      --ipv6 --subnet=2001:db8::/64
docker rm clab-pim
docker run --cap-add=NET_ADMIN --cap-add=SYS_NICE --cap-add=SYS_PTRACE \
           --device=/dev/net/tun:/dev/net/tun \
           --device=/dev/vhost-net:/dev/vhost-net \
           --privileged --name clab-pim \
           git.ipng.ch/ipng/vpp-containerlab:latest
docker network connect clab-network clab-pim
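
If the container started correctly, VPP should answer on its control socket; querying the version is a quick smoke test:

docker exec -it clab-pim vppctl show version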

A note on DPDK

DPDK is disabled by default, as it requires hugepages and VFIO and/or UIO to use physical network cards. If DPDK is desired at some future point, the VFIO device can be mapped in by adding this to the docker run invocation:

           --device=/dev/vfio/vfio:/dev/vfio/vfio

or in Containerlab, using the devices feature:

my-node:
  image: git.ipng.ch/ipng/vpp-containerlab:latest
  kind: fdio_vpp
  devices:
    - /dev/vfio/vfio
    - /dev/net/tun
    - /dev/vhost-net

If using DPDK in a container, one of the userspace IO kernel drivers must be loaded in the host kernel. Options are igb_uio, vfio_pci, or uio_pci_generic:

$ sudo modprobe igb_uio
$ sudo modprobe vfio_pci
$ sudo modprobe uio_pci_generic

In particular, the VFIO driver must be loaded before one can attempt to bind-mount /dev/vfio/vfio into the container!
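
A quick sanity check on the host, before starting the container (the device node only appears once the module is loaded):

ls -l /dev/vfio/vfio
lsmod | grep -E 'vfio|uio'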

Configuring VPP

When Containerlab starts the Docker containers, it offers one or more veth point-to-point network links, which show up as eth1 and up. eth0 is the default NIC that belongs to Containerlab's management plane (the one you'll see with containerlab inspect). Before VPP can use these veth interfaces, it needs to bind them, like so:

docker exec -it clab-pim vppctl

and then within the VPP control shell:

create host-interface v2 name eth1
set interface name host-eth1 eth1
set interface mtu 1500 eth1
set interface ip address eth1 192.0.2.2/24
set interface ip address eth1 2001:db8::2/64
set interface state eth1 up
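
Afterwards, the binding can be verified from the same vppctl shell:

show interface
show interface address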

Containerlab will attach these veth pairs to the container, and replace our Docker CMD with a wrapper (typically called if-wait.sh) that waits for all of these interfaces to be added. In our own CMD, we then generate a config file called /etc/vpp/clab.vpp, which contains the necessary VPP commands to take control over these veth pairs.
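
For a node with a single data-plane link, the generated /etc/vpp/clab.vpp might look roughly like the commands shown above (illustrative; the exact contents depend on which interfaces containerlab added):

create host-interface v2 name eth1
set interface name host-eth1 eth1
set interface mtu 1500 eth1
set interface state eth1 up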