Fix pause to cancel probe goroutine; add Robot Framework integration tests

Pause semantics
- PauseBackend now cancels the probe goroutine so no HTTP/TCP/ICMP
  traffic is sent while the backend is paused. Previously the goroutine
  kept running and results were silently discarded.
- ResumeBackend launches a fresh probe goroutine on the existing worker,
  preserving transition history. The backend re-enters the unknown state.

Integration tests (tests/01-maglevd/)
- Containerlab topology with 3 nginx:alpine backends on a dedicated
  management network (172.20.30.0/24) with static IPs.
- maglevd config with 200ms HTTP health-check interval for fast test
  convergence (rise=2, fall=2).
- 8 test cases: deploy lab, start maglevd, all backends reach up,
  nginx logs confirm probes arriving, pause stops probes (probe count
  stable), resume restarts probes, disable stops probes, enable
  restarts probes.

VPP dataplane test (tests/02-vpp-lb/)
- Rewrite 01-e2e-lab.robot to match the actual single-VPP topology:
  test client-to-server ping through VPP bridge domains and verify
  nginx is serving on all app servers. The previous version referenced
  a non-existent topology file and tested OSPF/BFD between two VPP
  nodes that don't exist in this lab.

Build infrastructure
- Add 'make robot-test' target with TEST= for suite selection.
- Add tests/.venv target for Robot Framework virtualenv.
- Make IMAGE optional in rf-run.sh.
- Add .gitignore entries for test output, venv, logs, and clab state.
Commit 8bde00eb61 (parent 3bd30b69f4), 2026-04-11 20:16:22 +02:00
20 changed files with 519 additions and 7 deletions


@@ -0,0 +1,63 @@
*** Settings ***
Library             OperatingSystem
Resource            ../common.robot
Suite Teardown      Run Keyword    Cleanup


*** Variables ***
${lab-name}         e2e-maglev
${lab-file-name}    e2e-lab/maglev.clab.yml
${runtime}          docker


*** Test Cases ***
Deploy ${lab-name} lab
    Log    ${CURDIR}
    ${rc}    ${output} =    Run And Return Rc And Output
    ...    ${CLAB_BIN} --runtime ${runtime} deploy -t ${CURDIR}/${lab-file-name}
    Log    ${output}
    Should Be Equal As Integers    ${rc}    0

Wait for VPP dataplane startup
    Sleep    5s

Client cl1 can ping app server as1 via VPP
    ${rc}    ${output} =    Run And Return Rc And Output
    ...    ${CLAB_BIN} --runtime ${runtime} exec -t ${CURDIR}/${lab-file-name} --label clab-node-name\=cl1 --cmd "ping -c 3 -W 2 10.82.98.82"
    Log    ${output}
    Should Be Equal As Integers    ${rc}    0
    Should Not Contain    ${output}    0 received

Client cl2 can ping app server as2 via VPP
    ${rc}    ${output} =    Run And Return Rc And Output
    ...    ${CLAB_BIN} --runtime ${runtime} exec -t ${CURDIR}/${lab-file-name} --label clab-node-name\=cl2 --cmd "ping -c 3 -W 2 10.82.98.83"
    Log    ${output}
    Should Be Equal As Integers    ${rc}    0
    Should Not Contain    ${output}    0 received

App server as1 can reach app server as3 via VPP
    ${rc}    ${output} =    Run And Return Rc And Output
    ...    ${CLAB_BIN} --runtime ${runtime} exec -t ${CURDIR}/${lab-file-name} --label clab-node-name\=as1 --cmd "ping -c 3 -W 2 10.82.98.84"
    Log    ${output}
    Should Be Equal As Integers    ${rc}    0
    Should Not Contain    ${output}    0 received

App servers have nginx running
    [Template]    Nginx Should Be Serving
    as1    10.82.98.82
    as2    10.82.98.83
    as3    10.82.98.84


*** Keywords ***
Cleanup
    Run    ${CLAB_BIN} --runtime ${runtime} destroy -t ${CURDIR}/${lab-file-name} --cleanup

Nginx Should Be Serving
    [Arguments]    ${node}    ${ip}
    ${rc}    ${output} =    Run And Return Rc And Output
    ...    ${CLAB_BIN} --runtime ${runtime} exec -t ${CURDIR}/${lab-file-name} --label clab-node-name\=${node} --cmd "wget -q -O- http://${ip}/"
    Log    ${output}
    Should Be Equal As Integers    ${rc}    0
    Should Contain    ${output}    ${node}


@@ -0,0 +1,9 @@
#!/bin/sh
# Direct Server Return setup: accept GRE-encapsulated traffic from the VPP
# load balancer and answer directly, with the VIP bound locally.
MYIP=$(ip addr show dev eth1 | awk '/inet .*scope/ { print $2 }' | cut -f1 -d/)
ip tunnel add maglev0 mode gre local "$MYIP"
ip link set maglev0 up mtu 1500
# Bind the VIP to the tunnel so the kernel accepts decapsulated packets for it.
ip addr add 10.82.98.255/32 dev maglev0
echo "This is $(hostname -f)" >> /usr/share/nginx/html/index.html


@@ -0,0 +1 @@
../as1/rc.local


@@ -0,0 +1 @@
../as1/rc.local


@@ -0,0 +1,12 @@
comment { You can add commands here that will execute after vppcfg.vpp }
lb conf ip4-src-address 10.82.98.0 ip6-src-address 2001:db8:8298:: buckets 524288
lb vip 10.82.98.255/32 protocol tcp port 80
lb as 10.82.98.255/32 protocol tcp port 80 10.82.98.82
lb as 10.82.98.255/32 protocol tcp port 80 10.82.98.83
lb as 10.82.98.255/32 protocol tcp port 80 10.82.98.84
lb vip 10.82.98.255/32 protocol tcp port 443 src_ip_sticky
lb as 10.82.98.255/32 protocol tcp port 443 10.82.98.82
lb as 10.82.98.255/32 protocol tcp port 443 10.82.98.83
lb as 10.82.98.255/32 protocol tcp port 443 10.82.98.84


@@ -0,0 +1,45 @@
loopbacks:
  loop0:
    description: "Core: vpp1"
    lcp: loop0
    addresses: [10.82.98.0/32, 2001:db8:8298::/128]
  loop1:
    description: "Core: Maglev VIP"
    lcp: maglev0
  loop2:
    description: "BVI: clients"
    mtu: 1500
    lcp: bvi101
    addresses: [10.82.98.65/28, 2001:db8:8298:101::1/64]
  loop3:
    description: "BVI: application servers"
    mtu: 2026
    lcp: bvi102
    addresses: [10.82.98.81/28, 2001:db8:8298:102::1/64]

bridgedomains:
  bd101:
    description: "Clients"
    mtu: 1500
    bvi: loop2
    interfaces: [ eth1, eth2 ]
  bd102:
    description: "Application Servers"
    mtu: 2026
    bvi: loop3
    interfaces: [ eth3, eth4, eth5 ]

interfaces:
  eth1:
    description: "To cl1:eth1"
    mtu: 1500
  eth2:
    description: "To cl2:eth1"
    mtu: 1500
  eth3:
    description: "To as1:eth1"
    mtu: 2026
  eth4:
    description: "To as2:eth1"
    mtu: 2026
  eth5:
    description: "To as3:eth1"
    mtu: 2026


@@ -0,0 +1,64 @@
name: e2e-maglev
topology:
  kinds:
    fdio_vpp:
      image: git.ipng.ch/ipng/vpp-containerlab:latest
      startup-config: config/__clabNodeName__/vppcfg.yaml
      binds:
        - config/__clabNodeName__/manual-post.vpp:/config/vpp/config/manual-post.vpp:rw
    linux:
      image: ghcr.io/srl-labs/network-multitool:latest
      binds:
        - config/__clabNodeName__/rc.local:/config/rc.local:rw
  nodes:
    vpp1:
      kind: fdio_vpp
    cl1:
      kind: linux
      exec:
        - ip addr add 10.82.98.66/28 dev eth1
        - ip route add 10.82.98.0/24 via 10.82.98.65
        - ip addr add 2001:db8:8298:101::2/64 dev eth1
        - ip route add 2001:db8:8298::/48 via 2001:db8:8298:101::1
        - sh /config/rc.local
    cl2:
      kind: linux
      exec:
        - ip addr add 10.82.98.67/28 dev eth1
        - ip route add 10.82.98.0/24 via 10.82.98.65
        - ip addr add 2001:db8:8298:101::3/64 dev eth1
        - ip route add 2001:db8:8298::/48 via 2001:db8:8298:101::1
        - sh /config/rc.local
    as1:
      kind: linux
      exec:
        - ip addr add 10.82.98.82/28 dev eth1
        - ip route add 10.82.98.0/24 via 10.82.98.81
        - ip addr add 2001:db8:8298:102::2/64 dev eth1
        - ip route add 2001:db8:8298::/48 via 2001:db8:8298:102::1
        - sh /config/rc.local
    as2:
      kind: linux
      exec:
        - ip addr add 10.82.98.83/28 dev eth1
        - ip route add 10.82.98.0/24 via 10.82.98.81
        - ip addr add 2001:db8:8298:102::3/64 dev eth1
        - ip route add 2001:db8:8298::/48 via 2001:db8:8298:102::1
        - sh /config/rc.local
    as3:
      kind: linux
      exec:
        - ip addr add 10.82.98.84/28 dev eth1
        - ip route add 10.82.98.0/24 via 10.82.98.81
        - ip addr add 2001:db8:8298:102::4/64 dev eth1
        - ip route add 2001:db8:8298::/48 via 2001:db8:8298:102::1
        - sh /config/rc.local
  links:
    - endpoints: ["vpp1:eth1", "cl1:eth1"]
    - endpoints: ["vpp1:eth2", "cl2:eth1"]
    - endpoints: ["vpp1:eth3", "as1:eth1"]
    - endpoints: ["vpp1:eth4", "as2:eth1"]
    - endpoints: ["vpp1:eth5", "as3:eth1"]