VPP Containerlab Docker image

This Docker container creates a VPP instance based on the latest VPP release. It starts up as normal, using /etc/vpp/startup.conf (which Containerlab may replace when it starts its containers). Once started, it executes /etc/vpp/bootstrap.vpp within the dataplane. There are two relevant files:

  1. clab.vpp -- generated by files/init-container.sh. Its purpose is to bind the veth interfaces that containerlab has added to the container into the VPP dataplane (see below).
  2. vppcfg.vpp -- generated by files/init-container.sh. Its purpose is to read the user-specified vppcfg.yaml file and convert it into VPP CLI commands. If no YAML file is specified, or if it is not syntactically valid, an empty file is generated instead.
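
The default bootstrap is then expected to do little more than chain these two generated files. A minimal sketch of what /etc/vpp/bootstrap.vpp could look like (an illustration, not necessarily the exact file shipped in the image):

exec /etc/vpp/clab.vpp
exec /etc/vpp/vppcfg.vpp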

For Containerlab users who wish to have more control over their VPP bootstrap, it's possible to bind-mount /etc/vpp/bootstrap.vpp.
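
For example, a node in a Containerlab topology could override it with a bind (a hypothetical node definition for illustration):

my-node:
  kind: vpp
  image: vpp-containerlab:latest
  binds:
    - ./my-bootstrap.vpp:/etc/vpp/bootstrap.vpp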

Building

IMG=git.ipng.ch/ipng/vpp-containerlab
TAG=latest
docker build --no-cache -f docker/Dockerfile.bookworm -t $IMG docker/
docker image tag $IMG $IMG:$TAG
docker push $IMG
docker push $IMG:$TAG
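
Conversely, if the image has already been pushed as above, pulling it instead of building locally should suffice:

docker pull git.ipng.ch/ipng/vpp-containerlab:latest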

Testing the container standalone

docker network create --driver=bridge clab-network --subnet=192.0.2.0/24 \
                      --ipv6 --subnet=2001:db8::/64
docker rm clab-pim 
docker run --cap-add=NET_ADMIN --cap-add=SYS_NICE --cap-add=SYS_PTRACE \
           --device=/dev/net/tun:/dev/net/tun \
           --device=/dev/vhost-net:/dev/vhost-net \
           --privileged --name clab-pim \
           docker.io/pimvanpelt/vpp-containerlab:latest
docker network connect clab-network clab-pim
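
Once the container is up, a quick sanity check is to ask the dataplane for its version and interfaces:

docker exec -it clab-pim vppctl show version
docker exec -it clab-pim vppctl show interface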

A note on DPDK

DPDK is disabled by default, as it requires hugepages and VFIO and/or UIO to use physical network cards. If DPDK is desired at some future point, the VFIO device can be mapped into the container by adding this to the docker run invocation:

           --device=/dev/vfio/vfio:/dev/vfio/vfio

or in Containerlab, using the devices feature:

my-node:
  image: vpp-containerlab:latest
  kind: vpp
  devices:
    - /dev/vfio/vfio
    - /dev/net/tun
    - /dev/vhost-net

If using DPDK in a container, one of the userspace IO kernel drivers must be loaded in the host kernel. Options are igb_uio, vfio_pci, or uio_pci_generic:

$ sudo modprobe igb_uio
$ sudo modprobe vfio_pci
$ sudo modprobe uio_pci_generic

In particular, the VFIO driver needs to be loaded before one can attempt to bind-mount /dev/vfio/vfio into the container!
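
To check whether a suitable driver is already loaded on the host, and optionally to make it persistent across reboots (assuming a systemd-based host with modules-load.d), something like this can be used:

$ lsmod | grep -E 'vfio|uio'
$ echo vfio_pci | sudo tee /etc/modules-load.d/vfio_pci.conf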

Configuring VPP

When Containerlab starts the Docker containers, it will offer one or more veth point-to-point network links, which show up as eth1 and onwards. eth0 is the default NIC that belongs to the management plane in Containerlab (the one you'll see with containerlab inspect). Before VPP can use these veth interfaces, it needs to bind them, like so:

docker exec -it clab-pim vppctl

and then within the VPP control shell:

create host-interface v2 name eth1
set interface name host-eth1 eth1
set interface mtu 1500 eth1
set interface ip address eth1 192.0.2.2/24
set interface ip address eth1 2001:db8::2/64
set interface state eth1 up
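
The result can then be verified from the same VPP control shell:

show interface
show interface addr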

Containerlab attaches these veth pairs to the container and replaces our Docker CMD with one that first waits for all of these interfaces to be added (a script typically called if-wait.sh). In our own CMD, we then generate a config file called /etc/vpp/clab.vpp, which contains the VPP commands necessary to take control of these veth pairs.
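
A rough shell sketch of how such a clab.vpp could be generated from the veth interfaces present in the container (an illustration only; the actual logic lives in files/init-container.sh):

#!/bin/sh
# Emit VPP commands for every ethN (N >= 1) interface found in the container,
# mirroring the manual commands shown above, and write them to clab.vpp.
for ifname in $(ls /sys/class/net | grep -E '^eth[1-9][0-9]*$'); do
  echo "create host-interface v2 name $ifname"
  echo "set interface name host-$ifname $ifname"
  echo "set interface mtu 1500 $ifname"
  echo "set interface state $ifname up"
done > /etc/vpp/clab.vpp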

In addition, you can add more commands to be executed on startup by copying in /etc/vpp/manual-pre.vpp (executed before the Containerlab-generated configuration) or /etc/vpp/manual-post.vpp (executed after it).
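
For instance, a manual-post.vpp could carry a few extra commands, such as a loopback interface (purely illustrative):

create loopback interface instance 0
set interface ip address loop0 198.51.100.1/32
set interface state loop0 up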

Example Containerlab

The file vpp.clab.yml contains an example topology consisting of two VPP instances, each connected to one Alpine Linux container, in the following topology:

[Topology diagram: learn-vpp]
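
A minimal sketch of what such a topology file might look like (illustrative only; see vpp.clab.yml in the repository for the actual definition):

name: learn-vpp
topology:
  nodes:
    vpp1:
      kind: vpp
      image: vpp-containerlab:latest
    vpp2:
      kind: vpp
      image: vpp-containerlab:latest
    client1:
      kind: linux
      image: alpine:latest
    client2:
      kind: linux
      image: alpine:latest
  links:
    - endpoints: ["vpp1:eth1", "vpp2:eth1"]
    - endpoints: ["client1:eth1", "vpp1:eth2"]
    - endpoints: ["client2:eth1", "vpp2:eth2"]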
