This commit wires the checker's state machine through to the VPP dataplane:
every backend state transition flows through a single code path that
recomputes the effective per-backend weight (with pool failover) and pushes
the result to VPP. Along the way several latent bugs in the state machine
and the sync path were fixed.
internal/vpp/reconciler.go (new)
- New Reconciler type subscribes to checker.Checker events and, on every
transition, calls Client.SyncLBStateVIP for the affected frontend. This
is the ONLY place in the codebase where backend state changes cause VPP
calls — the "single path" discipline requested during design.
- Defines an EventSource interface (checker.Checker satisfies it) so the
dependency direction stays vpp → checker; the checker never imports vpp.
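  A minimal sketch of the reconciler shape. Only the names EventSource,
  Reconciler, and SyncLBStateVIP come from this change; the event type,
  method signatures, and error handling below are illustrative stand-ins:

```go
package main

import "fmt"

// Transition stands in for the checker's transition event; the real
// event carries more fields (states, backend -> frontend mapping, etc.).
type Transition struct {
	Backend string
}

// EventSource is the narrow interface the reconciler depends on, keeping
// the dependency direction vpp -> checker. The method shape is assumed.
type EventSource interface {
	Subscribe() <-chan Transition
}

// Reconciler funnels every transition through one sync callback -- the
// stand-in here for Client.SyncLBStateVIP, the single path to VPP.
type Reconciler struct {
	src  EventSource
	sync func(backend string) error
}

func (r *Reconciler) Run(done <-chan struct{}) {
	events := r.src.Subscribe()
	for {
		select {
		case ev, ok := <-events:
			if !ok {
				return
			}
			if err := r.sync(ev.Backend); err != nil {
				fmt.Println("sync failed:", err) // real code logs and continues
			}
		case <-done:
			return
		}
	}
}

type fakeSource struct{ ch chan Transition }

func (f *fakeSource) Subscribe() <-chan Transition { return f.ch }

func main() {
	src := &fakeSource{ch: make(chan Transition, 1)}
	src.ch <- Transition{Backend: "nginx1"}
	close(src.ch)
	r := &Reconciler{src: src, sync: func(b string) error {
		fmt.Println("SyncLBStateVIP for", b)
		return nil
	}}
	r.Run(nil) // drains the single buffered event, then returns on close
}
```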
internal/vpp/client.go
- Renamed ConfigSource → StateSource. The interface now has two methods:
Config() and BackendState(name) — the reconciler and the desired-state
builder both need live health state to compute effective weights.
- SetConfigSource → SetStateSource; internal cfgSrc field → stateSrc.
- New getStateSource() helper for internal locked access.
- lbSyncLoop still uses the state source for its periodic drift
reconciliation; it's fully idempotent and runs the same code path as
event-driven syncs.
internal/vpp/lbsync.go
- desiredAS grows a Flush bool so the mapping function can signal "on
transition to weight 0, flush existing flow-table entries".
- asFromBackend is now the single source of truth for the state →
(weight, flush) rule. Documented with a full truth table. Takes an
activePool parameter so it can distinguish "up in active pool" from
"up but standby".
- activePoolIndex(fe, states) implements priority failover: returns the
index of the first pool containing any StateUp backend. pool[0] wins
when at least one member is up; pool[1] takes over when pool[0] is
empty; and so on. Defaults to 0 (unobservable, since all backends map
to weight 0 when nothing is up).
- desiredFromFrontend snapshots backend states once, computes activePool,
then walks every backend through asFromBackend. No more filtering on
b.Enabled — disabled backends stay in the desired set so they keep
their AS entry in VPP with weight=0. The previous filter caused delAS
on disable, which destroyed the entry and broke enable afterwards.
- EffectiveWeights(fe, src) exported helper that returns the per-pool
per-backend weight map for one frontend. Used by the gRPC GetFrontend
handler and robot tests to observe failover without touching VPP.
- reconcileVIP computes flush at the weight-change call site:
flush = desired.Flush && cur.Weight > 0 && desired.Weight == 0
This ensures only the *transition* to disabled flushes sessions —
steady-state syncs with already-zero weight skip the call entirely.
- setASWeight now plumbs IsFlush into lb_as_set_weight.
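  The three rules above can be sketched together. The function names
  (activePoolIndex, asFromBackend) and the flush condition come from this
  change; the simplified types, signatures, and the exact truth table
  below are illustrative assumptions:

```go
package main

import "fmt"

type State int

const (
	StateUnknown State = iota
	StateUp
	StateDown
	StatePaused
	StateDisabled
)

type Backend struct {
	Name   string
	Weight int // configured weight from the YAML
}

type Frontend struct {
	Pools [][]Backend // pool[0] is primary, pool[1] first fallback, ...
}

// activePoolIndex: index of the first pool containing any StateUp
// backend; 0 when nothing is up anywhere (unobservable, since every
// backend then maps to weight 0).
func activePoolIndex(fe Frontend, states map[string]State) int {
	for i, pool := range fe.Pools {
		for _, b := range pool {
			if states[b.Name] == StateUp {
				return i
			}
		}
	}
	return 0
}

// asFromBackend sketch of the state -> (weight, flush) rule: only an up
// backend in the active pool gets its configured weight; a disabled
// backend requests a flush of existing flow-table entries.
func asFromBackend(st State, cfgWeight, pool, activePool int) (weight int, flush bool) {
	switch {
	case st == StateUp && pool == activePool:
		return cfgWeight, false
	case st == StateDisabled:
		return 0, true
	default: // down, paused, unknown, or up-but-standby
		return 0, false
	}
}

// shouldFlush is the reconcileVIP call-site rule: flush only on the
// transition from nonzero to zero weight, never on steady-state resyncs.
func shouldFlush(desiredFlush bool, curWeight, desiredWeight int) bool {
	return desiredFlush && curWeight > 0 && desiredWeight == 0
}

func main() {
	fe := Frontend{Pools: [][]Backend{
		{{Name: "static-primary", Weight: 100}},
		{{Name: "static-fallback", Weight: 100}},
	}}
	states := map[string]State{
		"static-primary":  StateDisabled,
		"static-fallback": StateUp,
	}
	active := activePoolIndex(fe, states) // pool[0] has nothing up -> 1
	w, f := asFromBackend(states["static-primary"], 100, 0, active)
	fmt.Println(active, w, f) // 1 0 true
}
```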
internal/vpp/lbsync_test.go (new)
- TestAsFromBackend: 15 cases locking down the truth table, including
failover scenarios (up in standby pool, up promoted in pool[1]).
- TestActivePoolIndex: 8 cases covering pool[0]-has-up, pool[0]-all-down,
all-disabled, all-paused, all-unknown, nothing-up-anywhere, and
three-tier failover.
- TestDesiredFromFrontendFailover: 5 end-to-end scenarios wiring a fake
StateSource through desiredFromFrontend and asserting the final
per-IP weight map. Exercises the complete pipeline without VPP.
internal/checker/checker.go
- Added BackendState(name) (health.State, bool) — one-line method that
satisfies vpp.StateSource. The checker is otherwise unchanged.
- EnableBackend rewritten to reuse the existing worker (parallel to
ResumeBackend). The old code called startWorker which constructed a
brand-new Backend via health.New, throwing away the transition
history; the resulting 'backend-transition' log showed the bogus
from=unknown,to=unknown. Now uses w.backend.Enable() to record a
proper disabled→unknown transition and launches a fresh goroutine.
- Static (no-healthcheck) backends now fire their synthetic 'always up'
pass on the first iteration of runProbe instead of sleeping 30s
first. Previously static backends sat in StateUnknown for 30s after
startup — useless for deterministic testing and surprising for
operators. The fix is a simple first-iteration flag.
internal/health/state.go
- New Enable(maxHistory) method parallel to Disable. Transitions the
backend from whatever state it's in (typically StateDisabled) to
StateUnknown, resets the health counter to rise-1 so the expedited
resolution kicks in on the first probe result, and emits a transition
with code 'enabled'.
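  A sketch of the Enable method under the semantics described above; the
  field names and history trimming are assumptions, the rise-1 counter
  preload and the 'enabled' transition code come from this change:

```go
package main

import "fmt"

type State int

const (
	StateUnknown State = iota
	StateUp
	StateDisabled
)

// Transition records one state change with a reason code.
type Transition struct {
	From, To State
	Code     string
}

// Backend is a simplified stand-in for the health package's state
// holder; field names here are illustrative.
type Backend struct {
	state   State
	rise    int // consecutive passes needed to reach StateUp
	counter int
	history []Transition
}

// Enable moves the backend to StateUnknown from whatever state it is in
// (typically StateDisabled), pre-loads the counter to rise-1 so a single
// passing probe resolves it, and records a transition with code
// 'enabled', trimming history to maxHistory entries.
func (b *Backend) Enable(maxHistory int) {
	b.history = append(b.history, Transition{From: b.state, To: StateUnknown, Code: "enabled"})
	if len(b.history) > maxHistory {
		b.history = b.history[len(b.history)-maxHistory:]
	}
	b.state = StateUnknown
	b.counter = b.rise - 1 // expedited resolution on the next probe result
}

func main() {
	b := &Backend{state: StateDisabled, rise: 3}
	b.Enable(16)
	fmt.Println(b.state == StateUnknown, b.counter) // true 2
}
```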
proto/maglev.proto
- PoolBackendInfo gains effective_weight: the state-aware weight that
would be programmed into VPP (distinct from the configured weight in
the YAML). Exposed via GetFrontend.
internal/grpcapi/server.go
- frontendToProto takes a vpp.StateSource, computes effective weights
via vpp.EffectiveWeights, and populates PoolBackendInfo.EffectiveWeight.
- GetFrontend and SetFrontendPoolBackendWeight updated to pass the
checker in.
cmd/maglevc/commands.go
- 'show frontends <name>' now renders every pool backend row as
<name> weight <cfg> effective <eff> [disabled]?
so both values are always visible. The VPP-style key/value format
avoids the ANSI-alignment pitfall we hit earlier and makes the output
regex-parseable for robot tests.
cmd/maglevd/main.go
- Construct and start the Reconciler alongside the VPP client. Two
extra lines, no other changes to startup.
tests/01-maglevd/maglevd-lab/maglev.yaml
- Two new static backends (static-primary, static-fallback) and a new
failover-vip frontend with one backend per pool. No healthcheck, so
the state machine resolves them to 'up' immediately via the synthetic
pass. Used by the failover robot tests.
tests/01-maglevd/01-healthcheck.robot
- Three new test cases exercising pool failover end-to-end:
1. primary up, secondary standby (initial state)
2. disable primary → fallback takes over (effective weight flips)
3. enable primary → fallback steps back
All run without VPP: they scrape 'maglevc show frontends <name>' and
regex-match the effective weight in the output. Deterministic and
fast (~2s total) because the static backends don't probe.
- Two helper keywords: Static Backend Should Be Up and
Effective Weight Should Be.
Net result: 16/16 robot tests pass. Backend state transitions now
flow through a single documented path (checker event → reconciler →
SyncLBStateVIP → desiredFromFrontend → asFromBackend → reconcileVIP →
setASWeight), and the pool failover / enable-after-disable / static-
backend-startup bugs are all fixed.
220 lines
9.1 KiB
Plaintext
*** Settings ***
Library             OperatingSystem
Resource            ../common.robot

Suite Setup         Setup Suite
Suite Teardown      Cleanup Suite


*** Variables ***
${lab-name}         maglevd-test
${lab-file}         maglevd-lab/maglevd.clab.yml
${runtime}          docker
${MAGLEVD_NODE}     clab-maglevd-test-maglevd
${METRICS_URL}      http://172.20.30.2:9091/metrics


*** Test Cases ***
Deploy maglevd-test lab
    [Documentation]    Deploy the containerlab topology. The maglevd node starts
    ...    automatically as PID 1 via start.sh and begins probing the nginx
    ...    backends immediately.
    ${rc}    ${output} =    Run And Return Rc And Output
    ...    ${CLAB_BIN} --runtime ${runtime} deploy -t ${CURDIR}/${lab-file}
    Log    ${output}
    Should Be Equal As Integers    ${rc}    0
    Sleep    3s    Wait for nginx containers and probes to converge

All backends reach up state
    [Template]    Backend Should Be Up
    nginx1
    nginx2
    nginx3

Health checks are reaching all backends
    [Template]    Probe Count Should Be Positive
    nginx1
    nginx2
    nginx3

Pause backend stops probing
    Maglevc    set backend nginx1 pause
    Backend Should Have State    nginx1    paused
    Sleep    1s
    ${before} =    Get Probe Count    nginx1
    Sleep    2s    Wait to confirm no new probes arrive
    ${after} =    Get Probe Count    nginx1
    Should Be True    ${after} == ${before}
    ...    Probe count for nginx1 grew while paused: ${before} → ${after}

Resume backend restarts probing
    Maglevc    set backend nginx1 resume
    ${before} =    Get Probe Count    nginx1
    Sleep    2s    Wait for resumed probes to accumulate
    ${after} =    Get Probe Count    nginx1
    Should Be True    ${after} > ${before}
    ...    Probe count for nginx1 did not grow after resume: ${before} → ${after}
    Wait Until Keyword Succeeds    5s    500ms
    ...    Backend Should Be Up    nginx1

Disable backend stops probing
    Maglevc    set backend nginx2 disable
    Backend Should Have State    nginx2    disabled
    Backend Should Be Disabled    nginx2
    Sleep    1s
    ${before} =    Get Probe Count    nginx2
    Sleep    2s    Wait to confirm probes stopped
    ${after} =    Get Probe Count    nginx2
    Should Be True    ${after} == ${before}
    ...    Probe count for nginx2 grew while disabled: ${before} → ${after}

Enable backend restarts probing
    Maglevc    set backend nginx2 enable
    ${before} =    Get Probe Count    nginx2
    Sleep    2s    Wait for re-enabled probes to accumulate
    ${after} =    Get Probe Count    nginx2
    Should Be True    ${after} > ${before}
    ...    Probe count for nginx2 did not grow after enable: ${before} → ${after}
    Wait Until Keyword Succeeds    5s    500ms
    ...    Backend Should Be Up    nginx2

Prometheus endpoint is reachable
    ${rc}    ${output} =    Run And Return Rc And Output
    ...    curl -sf ${METRICS_URL}
    Log    ${output}
    Should Be Equal As Integers    ${rc}    0
    Should Contain    ${output}    maglev_backend_state

Prometheus reports all backends up
    ${output} =    Scrape Metrics
    # Each backend should have state="up" = 1.
    Should Contain    ${output}    maglev_backend_state{address="172.20.30.11",backend="nginx1",healthcheck="http-check",state="up"} 1
    Should Contain    ${output}    maglev_backend_state{address="172.20.30.12",backend="nginx2",healthcheck="http-check",state="up"} 1
    Should Contain    ${output}    maglev_backend_state{address="172.20.30.13",backend="nginx3",healthcheck="http-check",state="up"} 1

Prometheus reports probe counters
    ${output} =    Scrape Metrics
    Should Match Regexp    ${output}    maglev_probe_total\\{backend="nginx1".*result="success".*\\}\\s+[1-9]
    Should Match Regexp    ${output}    maglev_probe_total\\{backend="nginx2".*result="success".*\\}\\s+[1-9]
    Should Match Regexp    ${output}    maglev_probe_total\\{backend="nginx3".*result="success".*\\}\\s+[1-9]

Prometheus reports probe duration histogram
    ${output} =    Scrape Metrics
    Should Match Regexp    ${output}    maglev_probe_duration_seconds_count\\{backend="nginx1".*\\}\\s+[1-9]

Prometheus reports pool weights
    ${output} =    Scrape Metrics
    Should Contain    ${output}    maglev_frontend_pool_backend_weight{backend="nginx1",frontend="http-vip",pool="primary"} 100
    Should Contain    ${output}    maglev_frontend_pool_backend_weight{backend="nginx3",frontend="http-vip",pool="fallback"} 100

Prometheus reports transition counters
    ${output} =    Scrape Metrics
    # All backends transitioned unknown → up during startup.
    Should Match Regexp    ${output}    maglev_backend_transitions_total\\{backend="nginx1",from="unknown",to="up"\\}\\s+[1-9]


# ---- pool failover tests ----------------------------------------------------
#
# These tests use the static failover-vip frontend defined in maglev.yaml:
# one backend in the primary pool (static-primary) and one in the fallback
# pool (static-fallback). Both have no healthcheck, so they're always in
# state=up. Because the effective weight is computed from the pool-failover
# logic (and not from probes), these tests are deterministic and don't
# depend on timing or a running VPP.

Failover: primary up, secondary standby
    Wait Until Keyword Succeeds    3s    200ms
    ...    Static Backend Should Be Up    static-primary
    Wait Until Keyword Succeeds    3s    200ms
    ...    Static Backend Should Be Up    static-fallback
    Effective Weight Should Be    failover-vip    static-primary    100
    Effective Weight Should Be    failover-vip    static-fallback    0

Failover: disable primary → fallback takes over
    Maglevc    set backend static-primary disable
    Backend Should Have State    static-primary    disabled
    Effective Weight Should Be    failover-vip    static-primary    0
    Effective Weight Should Be    failover-vip    static-fallback    100

Failover: enable primary → fallback steps back
    Maglevc    set backend static-primary enable
    Wait Until Keyword Succeeds    3s    200ms
    ...    Static Backend Should Be Up    static-primary
    Effective Weight Should Be    failover-vip    static-primary    100
    Effective Weight Should Be    failover-vip    static-fallback    0


*** Keywords ***
Setup Suite
    ${arch} =    Run    go env GOARCH
    Set Suite Variable    ${ARCH}    ${arch}

Cleanup Suite
    Run    docker logs ${MAGLEVD_NODE} > ${EXECDIR}/tests/out/maglevd.log 2>&1
    Run    ${CLAB_BIN} --runtime ${runtime} destroy -t ${CURDIR}/${lab-file} --cleanup

Maglevc
    [Documentation]    Run a maglevc command inside the maglevd container.
    [Arguments]    ${cmd}
    ${rc}    ${output} =    Run And Return Rc And Output
    ...    docker exec ${MAGLEVD_NODE} /opt/maglev/build/${ARCH}/maglevc --color\=false ${cmd}
    Log    ${output}
    Should Be Equal As Integers    ${rc}    0
    RETURN    ${output}

Backend Should Be Up
    [Arguments]    ${name}
    ${output} =    Maglevc    show backends ${name}
    Should Match Regexp    ${output}    state\\s+up

Backend Should Have State
    [Arguments]    ${name}    ${expected_state}
    ${output} =    Maglevc    show backends ${name}
    Should Match Regexp    ${output}    state\\s+${expected_state}

Backend Should Be Disabled
    [Arguments]    ${name}
    ${output} =    Maglevc    show backends ${name}
    Should Match Regexp    ${output}    enabled\\s+false

Get Probe Count
    [Documentation]    Return the number of HTTP health-check requests seen in a backend's nginx log.
    [Arguments]    ${name}
    ${output} =    Run    docker logs clab-${lab-name}-${name} 2>/dev/null | grep -c "GET /" || echo 0
    ${count} =    Convert To Integer    ${output.strip()}
    RETURN    ${count}

Probe Count Should Be Positive
    [Arguments]    ${name}
    ${count} =    Get Probe Count    ${name}
    Should Be True    ${count} > 0
    ...    No health-check requests found in nginx logs for ${name}

Scrape Metrics
    [Documentation]    Fetch the Prometheus /metrics endpoint from the maglevd container.
    ${rc}    ${output} =    Run And Return Rc And Output
    ...    curl -sf ${METRICS_URL}
    Should Be Equal As Integers    ${rc}    0
    RETURN    ${output}

Static Backend Should Be Up
    [Documentation]    Like Backend Should Be Up but for backends without a
    ...    healthcheck (hop straight to up via the synthetic-pass path).
    [Arguments]    ${name}
    ${output} =    Maglevc    show backends ${name}
    Should Match Regexp    ${output}    state\\s+up

Effective Weight Should Be
    [Documentation]    Parse 'show frontends <fe>' output for the named
    ...    backend and assert its effective weight matches the expected value.
    ...    Backend rows have the form:
    ...    [backends ]<name> weight <cfg> effective <eff>
    ...    so we match on <name> followed by 'weight N effective E' anywhere
    ...    on a single line.
    [Arguments]    ${frontend}    ${backend}    ${expected}
    ${output} =    Maglevc    show frontends ${frontend}
    Should Match Regexp    ${output}
    ...    ${backend}\\s+weight\\s+\\d+\\s+effective\\s+${expected}\\b
    ...    backend ${backend}: expected effective weight ${expected} in:\n${output}