Restart-neutral VPP LB sync; deterministic AS ordering; maglevt cadence; v0.9.5

Three reliability fixes bundled with docs updates.

Restart-neutral VPP LB sync via a startup warmup window
(internal/vpp/warmup.go). Before this, a maglevd restart would
immediately issue SyncLBStateAll with every backend still in
StateUnknown — mapped through BackendEffectiveWeight to weight
0 — and VPP would black-hole all new flows until the checker's
rise counters caught up, several seconds later. The new warmup
tracker owns a process-wide state machine gated by two config
knobs: vpp.lb.startup-min-delay (default 5s) is an absolute
hands-off window during which neither the periodic sync loop
nor the per-transition reconciler touches VPP;
vpp.lb.startup-max-delay (default 30s) is the watchdog for a
per-VIP release phase that runs between the two, releasing each
frontend
as soon as every backend it references reaches a non-Unknown
state. At max-delay a final SyncLBStateAll runs for any stragglers
still in Unknown. Config reload does not reset the clock. Both
delays can be set to 0 to disable the warmup entirely. The
reconciler's suppressed-during-warmup events log at DEBUG so
operators can still see them with --log-level debug. Unit tests
cover the tracker state machine, allBackendsKnown precondition,
and the zero-delay escape hatch.

Deterministic AS iteration in VPP LB sync. reconcileVIP and
recreateVIP now issue their lb_as_add_del / lb_as_set_weight
calls in numeric IP order (IPv4 before IPv6, ascending within
each family) via a new sortedIPKeys helper, instead of Go map
iteration order. VPP's LB plugin breaks per-bucket ties in the
Maglev lookup table by insertion position in its internal AS
vec, so without a stable call order two maglevd instances on
the same config could push identical AS sets into VPP in
different orders and produce divergent new-flow tables. Numeric
sort is used in preference to lexicographic so the sync log
stays human-readable: string order would place 10.0.0.10 before
10.0.0.2, and the same problem in v6. Unit tests cover empty,
single, v4/v6 numeric vs lexicographic, v4-before-v6 grouping,
a 1000-iteration stability loop against Go's randomised map
iteration, insertion-order invariance, and the desiredAS
call-site type.
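A sketch of what such a helper can look like, assuming the maps are keyed by IP-address strings (the real sortedIPKeys signature may differ); net/netip's Addr.Compare already orders IPv4 before IPv6 and numerically within each family:

```go
package main

import (
	"fmt"
	"net/netip"
	"sort"
)

// sortedIPKeys returns the map's keys sorted numerically: all IPv4
// before all IPv6, ascending within each family. Keys are assumed to
// be valid IP literals; an unparseable key would sort first, since
// netip.Addr.Compare orders invalid addresses before valid ones.
func sortedIPKeys[V any](m map[string]V) []string {
	keys := make([]string, 0, len(m))
	for k := range m {
		keys = append(keys, k)
	}
	sort.Slice(keys, func(i, j int) bool {
		a, _ := netip.ParseAddr(keys[i])
		b, _ := netip.ParseAddr(keys[j])
		return a.Compare(b) < 0 // numeric, IPv4 family first
	})
	return keys
}

func main() {
	weights := map[string]uint8{
		"10.0.0.10": 1, "2001:db8::10": 1, "10.0.0.2": 1, "2001:db8::2": 1,
	}
	fmt.Println(sortedIPKeys(weights))
	// → [10.0.0.2 10.0.0.10 2001:db8::2 2001:db8::10]
}
```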

maglevt interval fix. runProbeLoop used to sleep the full
jittered interval after every probe, so a 100ms --interval
with a 30ms probe actually produced a 130ms period. The sleep
now subtracts result.Duration so cadence matches the flag.
Probes that overrun clamp sleep to zero and fire the next
probe immediately without trying to catch up on missed cycles
— a slow backend doesn't get flooded with back-to-back probes
at the moment it's already struggling.

Docs. config-guide now documents flush-on-down and the new
startup-min-delay / startup-max-delay knobs; user-guide's
maglevd section explains the restart-neutrality property, the
three warmup phases, and the relevant slog lines operators
should watch for during a bounce.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 11:25:53 +02:00
parent 695ebc4bd1
commit 6d78921edd
10 changed files with 1257 additions and 23 deletions


@@ -30,6 +30,94 @@ are used for anything not set.
| `SIGHUP` | Reload the configuration file (same code path as `config reload` in `maglevc`). The file is checked before applying; if there is a parse or semantic error the reload is aborted and the error is logged (the daemon continues running with its current config). New backends are started, removed backends are stopped, backends whose health-check config is unchanged continue probing without interruption. |
| `SIGTERM` / `SIGINT` | Graceful shutdown. Active gRPC streams are closed, the server drains, then the process exits. |
#### Restart behaviour
A `maglevd` restart is designed to be dataplane-neutral: `SIGTERM`
bounce → steady state should not cause any visible disruption to
flows traversing the VIP, assuming `vpp` itself stays up throughout.
This is enforced by a two-phase startup warmup controlled by
`vpp.lb.startup-min-delay` (default `5s`) and
`vpp.lb.startup-max-delay` (default `30s`):
1. **`[0, min-delay)` — absolute hands-off window.** Neither the
periodic `SyncLBStateAll` loop nor the per-transition
`SyncLBStateVIP` path from the reconciler touches VPP. Probes
run, the checker accumulates state, and any backend transitions
are logged at `DEBUG` level but suppressed from the dataplane.
VPP continues serving whatever it had programmed before the
restart, unmodified.
2. **`[min-delay, max-delay)` — per-VIP release phase.** Each
frontend is released (and one `SyncLBStateVIP` runs against it)
as soon as every backend it references has reached a non-
`Unknown` state, i.e. the checker's rise counter has completed
for every probe. The reconciler event path and a 250ms
background poll both attempt to release VIPs; whichever wins
the race logs `vpp-lb-warmup-release` with
`trigger=reconciler-event` or `trigger=poll`.
3. **Exit.** `vpp-lb-warmup-max-delay-elapsed` always fires at
the `max-delay` boundary, regardless of how the warmup got
   there. One of two paths is taken:
- **Happy path:** every frontend was released individually
during the release phase before `max-delay` expired. Logged
as `vpp-lb-warmup-complete` at the moment all releases
complete (anywhere in `[min-delay, max-delay)`). The warmup
gates open immediately at `-complete`, so the periodic sync
loop can start drift-correction right away. The warmup
driver then sleeps until `max-delay` and emits
`vpp-lb-warmup-max-delay-elapsed` as a gratuitous timeline
marker — the gate is already open, but the line keeps the
log sequence symmetric with the watchdog path.
- **Watchdog path:** `max-delay` reached with one or more
frontends still holding `StateUnknown` backends. Logged as
`vpp-lb-warmup-max-delay-elapsed` at the boundary, followed
by a final `SyncLBStateAll` that sweeps the stragglers —
anything still in `StateUnknown` at this point is programmed
as weight 0.
After either path, the reconciler and the periodic sync loop
run unconditionally on every transition.
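The race between the two release triggers can be sketched like this (vipWarmup and tryRelease are illustrative names only; the real release path also runs `SyncLBStateVIP` against the frontend):

```go
package main

import (
	"fmt"
	"sync"
)

// vipWarmup sketches the release race: the reconciler event path and
// the 250ms background poll both call tryRelease for a frontend, and
// whichever observes all-backends-known first wins. The release is
// idempotent, so the losing path is a no-op.
type vipWarmup struct {
	mu       sync.Mutex
	released map[string]bool
}

// tryRelease releases vip at most once; it returns true only for the
// winning caller, which then logs vpp-lb-warmup-release with its trigger.
func (w *vipWarmup) tryRelease(vip, trigger string, allKnown bool) bool {
	if !allKnown {
		return false // a backend is still StateUnknown: keep holding
	}
	w.mu.Lock()
	defer w.mu.Unlock()
	if w.released[vip] {
		return false // the other path already won the race
	}
	w.released[vip] = true
	fmt.Printf("vpp-lb-warmup-release vip=%s trigger=%s\n", vip, trigger)
	return true
}

func main() {
	w := &vipWarmup{released: make(map[string]bool)}
	w.tryRelease("192.0.2.1:443", "reconciler-event", true)
	w.tryRelease("192.0.2.1:443", "poll", true) // no-op: already released
}
```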
The warmup clock is measured from `vpp.New()` (shortly after
process start) and is **not** reset by config reloads, VPP
reconnects, or `SIGHUP` — it's strictly tied to the maglevd
process lifetime. A VPP drop mid-warmup is handled transparently:
when VPP reconnects, the warmup driver picks up wherever the
process-relative clock now stands.
To disable the warmup entirely — first sync fires immediately at
startup, backends may be black-holed for a few seconds until rise
probes complete — set both `startup-min-delay` and
`startup-max-delay` to `0s` in the config. This is useful for
tests and dev setups where a couple of seconds of downtime on
restart is acceptable and the extra observability is not worth
the delay.
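For reference, the two knobs in a config file — the YAML nesting below is inferred from the dotted key names and may not match the actual file layout:

```yaml
vpp:
  lb:
    startup-min-delay: 5s    # absolute hands-off window (default)
    startup-max-delay: 30s   # watchdog for the per-VIP release phase (default)
    # To bypass the warmup entirely (tests / dev setups):
    # startup-min-delay: 0s
    # startup-max-delay: 0s
```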
Relevant log lines (all at `INFO` unless noted):
- `vpp-lb-warmup-start` — warmup begins, with the configured delay values.
- `vpp-lb-warmup-min-delay-elapsed` — absolute hands-off window ended;
per-VIP release phase starting.
- `vpp-lb-warmup-release` — a frontend has been individually released;
`trigger` is `poll` or `reconciler-event` depending on which path
won the race.
- `vpp-lb-warmup-complete` — every VIP was released individually
before `max-delay`. Fires any time in `[min-delay, max-delay)`
depending on how quickly backends settled. On the happy path
the warmup gates open at this moment; `-max-delay-elapsed`
still fires later at the boundary as a timeline marker.
- `vpp-lb-warmup-max-delay-elapsed` — `max-delay` boundary reached.
Always fires, on both the happy and watchdog paths. On the
watchdog path it's followed immediately by a full
`SyncLBStateAll` to sweep stragglers still in `StateUnknown`;
on the happy path the gates are already open and this line is
purely informational.
- `vpp-lb-warmup-skipped` — both delays were configured to 0 and the
warmup was bypassed entirely.
- `vpp-reconciler-suppressed-min-delay` (DEBUG) — a transition event
arrived during min-delay and was dropped.
- `vpp-reconciler-suppressed-warmup` (DEBUG) — a transition event arrived
after min-delay but the frontend has backends still in `StateUnknown`.
### Capabilities
`maglevd` requires:
@@ -74,6 +162,26 @@ dataplane change without raising the log level. Set `--log-level debug` to
see individual probe attempts and every VPP binary-API call
(`vpp-api-send` / `vpp-api-recv` with full payload) as they happen.
Within a single VIP reconcile, maglevd issues `lb_as_add_del` calls in
ascending numeric order of the AS's IP address (all IPv4 before all
IPv6, numeric-ascending within each family), not Go map iteration order.
This matters because VPP's LB plugin stores ASes in an internal vec in
insertion order and breaks per-bucket ties in the Maglev lookup table by
whichever AS comes earlier in the vec — so without a stable call order,
two maglevd instances serving identical configs can end up programming
different new-flow tables on their respective VPP boxes, and per-bucket
debugging becomes non-reproducible. Numeric (rather than lexicographic)
ordering is chosen because a string sort would place `10.0.0.10` before
`10.0.0.2` (and `2001:db8::10` before `2001:db8::2`), which would
satisfy determinism but produce sync-log output that looks scrambled to
human readers. The sort is a correctness property, not just a cosmetic
one, and the sync log lines appear in that same order so `watch events`
output is comparable across instances. Note that this is the first half
of the fix; the second half (a matching sort inside VPP's own
`lb_vip_update_new_flow_table` to close the flap-history case where
freed `as_pool` slots are reused in locally-visited order) is a separate
change to VPP upstream.
### Prometheus metrics
`maglevd` exposes Prometheus metrics on `--metrics-addr` (default `:9091`) at