Restart-neutral VPP LB sync; deterministic AS ordering; maglevt cadence; v0.9.5

Three reliability fixes bundled with docs updates.

Restart-neutral VPP LB sync via a startup warmup window
(internal/vpp/warmup.go). Before this, a maglevd restart would
immediately issue SyncLBStateAll with every backend still in
StateUnknown — mapped through BackendEffectiveWeight to weight
0 — and VPP would black-hole all new flows until the checker's
rise counters caught up, several seconds later. The new warmup
tracker owns a process-wide state machine gated by two config
knobs: vpp.lb.startup-min-delay (default 5s) is an absolute
hands-off window during which neither the periodic sync loop
nor the per-transition reconciler touches VPP;
vpp.lb.startup-max-delay (default 30s) is the watchdog for a per-VIP
release phase that runs between the two, releasing each frontend
as soon as every backend it references reaches a non-Unknown
state. At max-delay a final SyncLBStateAll runs for any stragglers
still in Unknown. Config reload does not reset the clock. Both
delays can be set to 0 to disable the warmup entirely. The
reconciler's suppressed-during-warmup events log at DEBUG so
operators can still see them with --log-level debug. Unit tests
cover the tracker state machine, allBackendsKnown precondition,
and the zero-delay escape hatch.

Deterministic AS iteration in VPP LB sync. reconcileVIP and
recreateVIP now issue their lb_as_add_del / lb_as_set_weight
calls in numeric IP order (IPv4 before IPv6, ascending within
each family) via a new sortedIPKeys helper, instead of Go map
iteration order. VPP's LB plugin breaks per-bucket ties in the
Maglev lookup table by insertion position in its internal AS
vec, so without a stable call order two maglevd instances on
the same config could push identical AS sets into VPP in
different orders and produce divergent new-flow tables. Numeric
sort is used in preference to lexicographic so the sync log
stays human-readable: string order would place 10.0.0.10 before
10.0.0.2, and the same problem in v6. Unit tests cover empty,
single, v4/v6 numeric vs lexicographic, v4-before-v6 grouping,
a 1000-iteration stability loop against Go's randomised map
iteration, insertion-order invariance, and the desiredAS
call-site type.

maglevt interval fix. runProbeLoop used to sleep the full
jittered interval after every probe, so a 100ms --interval
with a 30ms probe actually produced a 130ms period. The sleep
now subtracts result.Duration so cadence matches the flag.
Probes that overrun clamp sleep to zero and fire the next
probe immediately without trying to catch up on missed cycles
— a slow backend doesn't get flooded with back-to-back probes
at the moment it's already struggling.

Docs. config-guide now documents flush-on-down and the new
startup-min-delay / startup-max-delay knobs; user-guide's
maglevd section explains the restart-neutrality property, the
three warmup phases, and the relevant slog lines operators
should watch for during a bounce.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 11:25:53 +02:00
parent 695ebc4bd1
commit 6d78921edd
10 changed files with 1257 additions and 23 deletions


@@ -15,7 +15,7 @@ FRONTEND_WEB_SRC := $(shell find cmd/frontend/web/src -type f 2>/dev/null) \
FRONTEND_WEB_DIST := cmd/frontend/web/dist/index.html
NATIVE_ARCH := $(shell go env GOARCH)
VERSION := 0.9.5
COMMIT_HASH := $(shell git rev-parse --short HEAD 2>/dev/null || echo unknown)
DATE := $(shell date -u +%Y-%m-%dT%H:%M:%SZ)
LDFLAGS := -X '$(MODULE)/cmd.version=$(VERSION)' \


@@ -108,10 +108,36 @@ VPP's GRE encap needs a source address and every VIP `maglevd` programs uses GRE
* ***lb.flow-timeout***: Idle time after which an established flow is removed
from the table. Must be a whole number of seconds between `1s` and `120s`
inclusive. Defaults to `40s`.
* ***lb.startup-min-delay***: Absolute hands-off window at the start of the
`maglevd` process. For the first `startup-min-delay` seconds of the
process's life, no VPP LB sync of any kind is issued — neither the
periodic `SyncLBStateAll` loop nor the per-transition `SyncLBStateVIP`
path from the reconciler touches VPP. This makes a `maglevd` restart
dataplane-neutral: without this gate, the first sync would fire before
any probes had completed, every backend would still be in `StateUnknown`,
and every AS would be reprogrammed to weight 0 until the rise counters
caught up — producing a visible black-hole window of several seconds
on every restart. A non-negative Go duration. Must be `<= startup-max-delay`.
Defaults to `5s`. Set to `0s` (together with `startup-max-delay: 0s`) to
disable the warmup entirely and sync VPP immediately on startup.
* ***lb.startup-max-delay***: Watchdog for the per-VIP release phase that
follows `startup-min-delay`. Between `min-delay` and `max-delay`, each
frontend is released individually as soon as every backend it references
has reached a non-`Unknown` state, and a single `SyncLBStateVIP` runs
against the newly-released frontend. At `max-delay` the warmup driver
unconditionally runs `SyncLBStateAll`, marking the warmup phase complete
regardless of whether any backends are still `StateUnknown` — stragglers
get programmed at their current effective weight at that point, which
for a `StateUnknown` backend means weight 0. A non-negative Go duration.
Must be `>= startup-min-delay`. Defaults to `30s`. Set to `0s` together
with `startup-min-delay: 0s` to disable the warmup entirely.
These values are pushed to VPP via `lb_conf` when `maglevd` connects to
VPP and again after every config reload (whenever they change). A log line
`vpp-lb-conf-set` records the effective values. The `startup-*` settings
are latched at the first successful VPP connect and are not re-read on
subsequent config reloads — a reload that changes them only takes effect
on the next process start.
Example:
```yaml
@@ -123,6 +149,8 @@ maglev:
ipv6-src-address: 2001:db8::1
sticky-buckets-per-core: 65536
flow-timeout: 40s
startup-min-delay: 5s
startup-max-delay: 30s
```
---
@@ -313,6 +341,22 @@ ordered list of backend pools. The gRPC API exposes frontends by name.
application servers are deleted with flush, then the VIP itself is deleted) and
recreate it with the new value; VPP has no API to mutate `src_ip_sticky` on an
existing VIP, and existing flow state cannot be preserved across the flip.
* ***flush-on-down***: Boolean, default `true`. Controls what happens to existing
flows pinned to a backend that transitions to `down`. With it `true`, maglevd
issues `lb_as_set_weight(is_flush=true)` on the down transition, clearing VPP's
flow-table entries for that backend so existing connections are torn down
immediately and reshuffle onto healthy backends. With it `false`, the weight drops
to 0 (drain only): new flows skip the dead backend via the Maglev lookup table,
but existing sticky flows keep being steered at the dead IP until the client
retries, producing visible "connection refused" oscillations during an outage.
The default is `true` because the healthcheck's `rise` / `fall` counters already
absorb single-probe flaps, so a fall-counted `down` is almost always a real
outage where immediate session teardown is the safer behaviour. Set it to `false`
per frontend only when existing sessions are expensive enough that you'd rather
keep them pinned to a potentially-dead backend than reset them (e.g. long-lived
WebSocket connections with expensive reconnect logic). The `disabled` state
always flushes regardless of this flag — `disabled` is an explicit operator
signal that the backend is going away.
Each pool has:


@@ -30,6 +30,94 @@ are used for anything not set.
| `SIGHUP` | Reload the configuration file (same code path as `config reload` in `maglevc`). The file is checked before applying; if there is a parse or semantic error the reload is aborted and the error is logged (the daemon continues running with its current config). New backends are started, removed backends are stopped, backends whose health-check config is unchanged continue probing without interruption. |
| `SIGTERM` / `SIGINT` | Graceful shutdown. Active gRPC streams are closed, the server drains, then the process exits. |
#### Restart behaviour
A `maglevd` restart is designed to be dataplane-neutral: `SIGTERM`
bounce → steady state should not cause any visible disruption to
flows traversing the VIP, assuming `vpp` itself stays up throughout.
This is enforced by a two-phase startup warmup controlled by
`vpp.lb.startup-min-delay` (default `5s`) and
`vpp.lb.startup-max-delay` (default `30s`):
1. **`[0, min-delay)` — absolute hands-off window.** Neither the
periodic `SyncLBStateAll` loop nor the per-transition
`SyncLBStateVIP` path from the reconciler touches VPP. Probes
run, the checker accumulates state, and any backend transitions
are logged at `DEBUG` level but suppressed from the dataplane.
VPP continues serving whatever it had programmed before the
restart, unmodified.
2. **`[min-delay, max-delay)` — per-VIP release phase.** Each
frontend is released (and one `SyncLBStateVIP` runs against it)
as soon as every backend it references has reached a non-
`Unknown` state, i.e. the checker's rise counter has completed
for every probe. The reconciler event path and a 250ms
background poll both attempt to release VIPs; whichever wins
the race logs `vpp-lb-warmup-release` with
`trigger=reconciler-event` or `trigger=poll`.
3. **Exit.** `vpp-lb-warmup-max-delay-elapsed` always fires at
the `max-delay` boundary, regardless of how the warmup got
there. One of two paths gets taken:
- **Happy path:** every frontend was released individually
during the release phase before `max-delay` expired. Logged
as `vpp-lb-warmup-complete` at the moment all releases
complete (anywhere in `[min-delay, max-delay)`). The warmup
gates open immediately at `-complete`, so the periodic sync
loop can start drift-correction right away. The warmup
driver then sleeps until `max-delay` and emits
`vpp-lb-warmup-max-delay-elapsed` as a gratuitous timeline
marker — the gate is already open, but the line keeps the
log sequence symmetric with the watchdog path.
- **Watchdog path:** `max-delay` reached with one or more
frontends still holding `StateUnknown` backends. Logged as
`vpp-lb-warmup-max-delay-elapsed` at the boundary, followed
by a final `SyncLBStateAll` that sweeps the stragglers —
anything still in `StateUnknown` at this point is programmed
as weight 0.
After either path, the reconciler and the periodic sync loop
run unconditionally on every transition.
The warmup clock is measured from `vpp.New()` (shortly after
process start) and is **not** reset by config reloads, VPP
reconnects, or `SIGHUP` — it's strictly tied to the maglevd
process lifetime. A VPP drop mid-warmup is handled transparently:
when VPP reconnects, the warmup driver picks up wherever the
process-relative clock now stands.
To disable the warmup entirely — first sync fires immediately at
startup, backends may be black-holed for a few seconds until rise
probes complete — set both `startup-min-delay` and
`startup-max-delay` to `0s` in the config. This is useful for
tests and dev setups where a couple of seconds of downtime on
restart is acceptable and the extra observability is not worth
the delay.
Relevant log lines (all at `INFO` unless noted):
- `vpp-lb-warmup-start` — warmup begins, with the configured delay values.
- `vpp-lb-warmup-min-delay-elapsed` — absolute hands-off window ended;
per-VIP release phase starting.
- `vpp-lb-warmup-release` — a frontend has been individually released;
`trigger` is `poll` or `reconciler-event` depending on which path
won the race.
- `vpp-lb-warmup-complete` — every VIP was released individually
before `max-delay`. Fires any time in `[min-delay, max-delay)`
depending on how quickly backends settled. On the happy path
the warmup gates open at this moment; `-max-delay-elapsed`
still fires later at the boundary as a timeline marker.
- `vpp-lb-warmup-max-delay-elapsed` — `max-delay` boundary reached.
Always fires, on both the happy and watchdog paths. On the
watchdog path it's followed immediately by a full
`SyncLBStateAll` to sweep stragglers still in `StateUnknown`;
on the happy path the gates are already open and this line is
purely informational.
- `vpp-lb-warmup-skipped` — both delays were configured to 0 and the
warmup was bypassed entirely.
- `vpp-reconciler-suppressed-min-delay` (DEBUG) — a transition event
arrived during min-delay and was dropped.
- `vpp-reconciler-suppressed-warmup` (DEBUG) — a transition event arrived
after min-delay but the frontend has backends still in `StateUnknown`.
### Capabilities
`maglevd` requires:
@@ -74,6 +162,26 @@ dataplane change without raising the log level. Set `--log-level debug` to
see individual probe attempts and every VPP binary-API call
(`vpp-api-send` / `vpp-api-recv` with full payload) as they happen.
Within a single VIP reconcile, maglevd issues `lb_as_add_del` calls in
ascending numeric order of the AS's IP address (all IPv4 before all
IPv6, numeric-ascending within each family), not Go map iteration order.
This matters because VPP's LB plugin stores ASes in an internal vec in
insertion order and breaks per-bucket ties in the Maglev lookup table by
whichever AS comes earlier in the vec — so without a stable call order,
two maglevd instances serving identical configs can end up programming
different new-flow tables on their respective VPP boxes, and per-bucket
debugging becomes non-reproducible. Numeric (rather than lexicographic)
ordering is chosen because a string sort would place `10.0.0.10` before
`10.0.0.2` (and `2001:db8::10` before `2001:db8::2`), which would
satisfy determinism but produce sync-log output that looks scrambled to
human readers. The sort is a correctness property, not just a cosmetic
one, and the sync log lines appear in that same order so `watch events`
output is comparable across instances. Note that this is the first half
of the fix; the second half (a matching sort inside VPP's own
`lb_vip_update_new_flow_table` to close the flap-history case where
freed `as_pool` slots are reused in locally-visited order) is a separate
change to VPP upstream.
### Prometheus metrics
`maglevd` exposes Prometheus metrics on `--metrics-addr` (default `:9091`) at


@@ -59,6 +59,30 @@ type VPPLBConfig struct {
// removed from the table. Must be between 1 and 120 seconds inclusive.
// Defaults to 40s.
FlowTimeout time.Duration
// StartupMinDelay is the absolute hands-off window at the start of
// the maglevd process. For the first StartupMinDelay seconds after
// startup, the VPP LB sync path makes no calls to VPP at all —
// neither the periodic SyncLBStateAll loop nor the per-transition
// SyncLBStateVIP path from the reconciler. This gives a restarting
// maglevd a chance to complete its first few probes before any VPP
// state is touched, so a bounce does not black-hole traffic while
// the checker is still warming up. Default 5s. Set to 0 together
// with StartupMaxDelay to disable the warmup entirely and sync VPP
// immediately on startup.
StartupMinDelay time.Duration
// StartupMaxDelay is the watchdog for the per-VIP release phase.
// Between StartupMinDelay and StartupMaxDelay, each VIP is released
// (and one SyncLBStateVIP runs against it) as soon as every backend
// it references has reached a non-Unknown state. At StartupMaxDelay
// the warmup driver unconditionally runs SyncLBStateAll to handle
// any stragglers whose backends are still Unknown — those get
// programmed with whatever weight their current state maps to,
// which for a still-Unknown backend is 0. Must be >= StartupMinDelay.
// Default 30s. Set to 0 together with StartupMinDelay to disable
// the warmup.
StartupMaxDelay time.Duration
}
// HealthCheck describes how to probe a backend.
@@ -161,6 +185,8 @@ type rawVPPLBCfg struct {
IPv6SrcAddress string `yaml:"ipv6-src-address"`
StickyBucketsPerCore *uint32 `yaml:"sticky-buckets-per-core"` // default 65536
FlowTimeout string `yaml:"flow-timeout"` // Go duration; default 40s, [1-120]s
StartupMinDelay *string `yaml:"startup-min-delay"` // Go duration; default 5s; 0 disables
StartupMaxDelay *string `yaml:"startup-max-delay"` // Go duration; default 30s; must be >= startup-min-delay
}
type rawHealthCheck struct {
@@ -403,6 +429,41 @@ func convertVPP(r *rawVPPCfg, cfg *VPPConfig) error {
cfg.LB.FlowTimeout = 40 * time.Second
}
// startup-min-delay: absolute hands-off window at process start.
// Default 5s. May be 0 (no gate) but must not be negative.
if r.LB.StartupMinDelay != nil {
d, err := time.ParseDuration(*r.LB.StartupMinDelay)
if err != nil {
return fmt.Errorf("vpp.lb.startup-min-delay: %w", err)
}
if d < 0 {
return fmt.Errorf("vpp.lb.startup-min-delay must be >= 0")
}
cfg.LB.StartupMinDelay = d
} else {
cfg.LB.StartupMinDelay = 5 * time.Second
}
// startup-max-delay: watchdog for the per-VIP release phase. Default
// 30s. May be 0 (no warmup at all, together with min-delay=0), but
// must be >= min-delay so the per-VIP release phase is well-formed.
if r.LB.StartupMaxDelay != nil {
d, err := time.ParseDuration(*r.LB.StartupMaxDelay)
if err != nil {
return fmt.Errorf("vpp.lb.startup-max-delay: %w", err)
}
if d < 0 {
return fmt.Errorf("vpp.lb.startup-max-delay must be >= 0")
}
cfg.LB.StartupMaxDelay = d
} else {
cfg.LB.StartupMaxDelay = 30 * time.Second
}
if cfg.LB.StartupMaxDelay < cfg.LB.StartupMinDelay {
return fmt.Errorf("vpp.lb.startup-max-delay (%s) must be >= startup-min-delay (%s)",
cfg.LB.StartupMaxDelay, cfg.LB.StartupMinDelay)
}
// A missing src address is a hard error: VPP's GRE encap needs a source,
// and every VIP we program uses GRE. Fail the config check so the
// operator cannot start maglevd with a broken setup.


@@ -66,6 +66,12 @@ type Client struct {
// lbStatsLoop. Published as an immutable slice via atomic.Pointer so
// Prometheus scrapes (metrics.Collector.Collect) don't take any lock.
lbStatsSnap atomic.Pointer[[]metrics.VIPStatEntry]
// warmup gates every VPP LB sync call (both periodic and event-
// driven) during the first StartupMaxDelay seconds after Client
// construction. See warmup.go for the state machine. Process-wide,
// not per-connection: reconnects do not re-enter warmup.
warmup *warmupTracker
}
// SetStateSource attaches a live config + health state source. When set, the
@@ -86,9 +92,18 @@ func (c *Client) getStateSource() StateSource {
return c.stateSrc
}
// New creates a Client for the given socket paths. The warmup tracker's
// clock starts here — the restart-neutrality window is measured from the
// moment the Client is constructed, which in practice is a few tens of
// milliseconds after process start (see cmd/maglevd/main.go startup
// sequence). If main.go ever grows a long-running initialisation step
// before vpp.New(), the warmup clock should be moved accordingly.
func New(apiAddr, statsAddr string) *Client {
return &Client{
apiAddr: apiAddr,
statsAddr: statsAddr,
warmup: newWarmupTracker(),
}
}
// Run connects to VPP and maintains the connection until ctx is cancelled.
@@ -165,19 +180,48 @@ func (c *Client) Run(ctx context.Context) {
}
}
// lbSyncLoop drives the periodic VPP LB sync. On first entry (after the
// first successful VPP connect) it runs the warmup phase via runWarmup,
// which enforces the restart-neutrality window and handles the first full
// sync itself. Subsequent reconnect entries find warmup.allDone == true
// and skip straight to the periodic ticker. Exits when ctx is cancelled.
//
// The warmup phase is intentionally run from inside this loop rather
// than from Run: it needs the state source registered (which happens
// only after SetStateSource) and it wants to be cancelled by the same
// connCtx that cancels the stats loop on disconnect, so a VPP drop
// during warmup doesn't leak a goroutine.
func (c *Client) lbSyncLoop(ctx context.Context) {
src := c.getStateSource()
if src == nil {
return // no state source registered; nothing to sync
}
// Warmup phase: runs once per process. On the first successful
// VPP connect, runWarmup handles the entire window (min-delay
// hands-off, per-VIP release phase, final SyncLBStateAll at
// max-delay) and calls finishAll before returning. On any
// reconnect after that, the gate is already open and we skip
// straight to the periodic ticker. A VPP drop mid-warmup
// returns from runWarmup via ctx.Done without closing the gate;
// the next reconnect re-enters runWarmup, which re-reads the
// process-relative clock and picks up wherever it left off.
if !c.warmup.isAllDone() {
c.runWarmup(ctx)
if ctx.Err() != nil {
return
}
}
cfg := src.Config()
if cfg == nil {
return
}
interval := cfg.VPP.LB.SyncInterval
if interval <= 0 {
interval = defaultLBSyncInterval
}
next := time.Now().Add(interval)
for {
wait := time.Until(next)
if wait < 0 {
@@ -189,12 +233,12 @@ func (c *Client) lbSyncLoop(ctx context.Context) {
case <-time.After(wait):
}
cfg = src.Config()
if cfg == nil {
next = time.Now().Add(defaultLBSyncInterval)
continue
}
interval = cfg.VPP.LB.SyncInterval
if interval <= 0 {
interval = defaultLBSyncInterval
}


@@ -3,11 +3,13 @@
package vpp
import (
"bytes"
"errors"
"fmt"
"log/slog"
"net"
"regexp"
"sort"
"strconv"
"strings"
@@ -21,6 +23,80 @@ import (
lb_types "git.ipng.ch/ipng/vpp-maglev/internal/vpp/binapi/lb_types"
)
// sortedIPKeys returns the keys of m in ascending numeric IP order. Used
// by the AS iteration sites in reconcileVIP and recreateVIP to make the
// sequence of lb_as_add_del calls deterministic across maglevd instances
// on the same config.
//
// Why this matters for Maglev: VPP's LB plugin stores ASes in a vec in
// insertion order and rebuilds the new-flow lookup table by walking that
// vec. The per-AS permutation is a pure function of the AS address (so it
// matches across instances), but when two ASes want the same bucket on
// the same pass the tie is broken by whichever one comes first in the
// vec. If maglevd issues its add calls in Go map iteration order — which
// is randomised on every run — two independent maglevd instances with
// bit-identical configs can push the same ASes into VPP in different
// orders, and their lookup tables end up differing in tie-broken buckets.
// Sorting here is the first half of the fix; a matching sort inside VPP's
// lb_vip_update_new_flow_table closes the flap-history hole (where a
// remove+re-add after steady state reuses freed slots in the as_pool in
// locally-visited order) and is landing in a separate commit.
//
// The keys at every call site are IP literals from net.IP.String(). Sort
// order is numeric, not lexicographic: lexicographic would put 10.0.0.10
// before 10.0.0.2 (and 2001:db8::10 before 2001:db8::2), which is
// correct for determinism but confusing to operators reading the sync
// log. We parse each key back into a net.IP once, compare the canonical
// 16-byte form, and group IPv4 before IPv6 so mixed-family frontends
// read as "v4 block, v6 block" top-to-bottom. ParseIP always succeeds
// here because the caller built the map key via net.IP.String() in the
// first place.
func sortedIPKeys[V any](m map[string]V) []string {
type kv struct {
key string
ip net.IP
}
pairs := make([]kv, 0, len(m))
for k := range m {
pairs = append(pairs, kv{k, net.ParseIP(k)})
}
sort.Slice(pairs, func(i, j int) bool {
return compareIPNumeric(pairs[i].ip, pairs[j].ip) < 0
})
out := make([]string, len(pairs))
for i, p := range pairs {
out[i] = p.key
}
return out
}
// compareIPNumeric returns <0, 0, >0 in the three-way convention. IPv4
// sorts before IPv6. Within each family the comparison runs against the
// canonical fixed-width byte form (4 bytes for v4, 16 bytes for v6),
// which makes byte ordering match numeric ordering. A nil input — which
// should not happen given sortedIPKeys's call contract — sorts before
// any non-nil address to keep the comparator total.
func compareIPNumeric(a, b net.IP) int {
a4 := a.To4()
b4 := b.To4()
switch {
case a == nil && b == nil:
return 0
case a == nil:
return -1
case b == nil:
return 1
case a4 != nil && b4 == nil:
return -1
case a4 == nil && b4 != nil:
return 1
case a4 != nil: // both v4
return bytes.Compare(a4, b4)
default: // both v6
return bytes.Compare(a.To16(), b.To16())
}
}
// ErrFrontendNotFound is returned by SyncLBStateVIP when the caller asks for
// a frontend name that does not exist in the config.
var ErrFrontendNotFound = errors.New("frontend not found in config")
@@ -241,8 +317,8 @@ func reconcileVIP(ch *loggedChannel, d desiredVIP, cur *LBVIP, curSticky bool, f
return err
}
st.vipAdd++
for _, addr := range sortedIPKeys(d.ASes) {
if err := addAS(ch, d.Prefix, d.Protocol, d.Port, d.ASes[addr]); err != nil {
return err
}
st.asAdd++
@@ -279,7 +355,8 @@ func reconcileVIP(ch *loggedChannel, d desiredVIP, cur *LBVIP, curSticky bool, f
}
// Remove ASes that are in VPP but not desired.
for _, addr := range sortedIPKeys(curASes) {
a := curASes[addr]
if _, keep := d.ASes[addr]; keep {
continue
}
@@ -290,7 +367,8 @@ func reconcileVIP(ch *loggedChannel, d desiredVIP, cur *LBVIP, curSticky bool, f
}
// Add new ASes, update weights on existing ones.
for _, addr := range sortedIPKeys(d.ASes) {
a := d.ASes[addr]
c, hit := curASes[addr]
if !hit {
if err := addAS(ch, d.Prefix, d.Protocol, d.Port, a); err != nil {
@@ -362,8 +440,8 @@ func recreateVIP(ch *loggedChannel, d desiredVIP, cur LBVIP, st *syncStats, reas
return err
}
st.vipAdd++
for _, addr := range sortedIPKeys(d.ASes) {
if err := addAS(ch, d.Prefix, d.Protocol, d.Port, d.ASes[addr]); err != nil {
return err
}
st.asAdd++


@@ -438,3 +438,207 @@ func TestDesiredFromFrontendSharedBackend(t *testing.T) {
})
}
}
// TestSortedIPKeysDeterministic pins the iteration-order helper that
// reconcileVIP and recreateVIP use to sequence their lb_as_add_del
// calls. The Maglev lookup table in VPP's LB plugin breaks per-bucket
// ties by the order ASes sit in its internal vec, which is just the
// order maglevd issued add calls — so if this helper ever stops
// returning a total, stable ordering, two independent maglevd
// instances on the same config can silently program different
// new-flow tables.
//
// Sort order is numeric (by the parsed net.IP), not lexicographic.
// The specific cases that a string sort would get wrong and a
// numeric sort must get right:
//
// - 10.0.0.2 < 10.0.0.10 (string sort puts "10" before "2")
// - 2001:db8::2 < 2001:db8::10 (same issue in v6)
// - all IPv4 before all IPv6 (operator-friendly grouping)
func TestSortedIPKeysDeterministic(t *testing.T) {
t.Run("empty", func(t *testing.T) {
got := sortedIPKeys(map[string]int{})
if len(got) != 0 {
t.Errorf("empty map: got %v, want []", got)
}
})
t.Run("single entry", func(t *testing.T) {
got := sortedIPKeys(map[string]int{"10.0.0.1": 1})
if len(got) != 1 || got[0] != "10.0.0.1" {
t.Errorf("got %v, want [10.0.0.1]", got)
}
})
t.Run("v4 numeric order beats string order", func(t *testing.T) {
// The headline bug: "10.0.0.10" < "10.0.0.2" lexicographically
// because '1' < '2'. Numeric sort must place 2 before 10.
m := map[string]int{
"10.0.0.10": 1,
"10.0.0.2": 2,
"10.0.0.1": 3,
"10.0.0.11": 4,
}
got := sortedIPKeys(m)
want := []string{"10.0.0.1", "10.0.0.2", "10.0.0.10", "10.0.0.11"}
if len(got) != len(want) {
t.Fatalf("got %v, want %v", got, want)
}
for i := range want {
if got[i] != want[i] {
t.Errorf("pos %d: got %q, want %q", i, got[i], want[i])
}
}
})
t.Run("v6 numeric order beats string order", func(t *testing.T) {
// Same bug in v6: "2001:db8::10" < "2001:db8::2" lexicographically.
// The To16() canonical byte form handles both compressed and
// expanded forms correctly.
m := map[string]int{
"2001:db8::10": 1,
"2001:db8::2": 2,
"2001:db8::1": 3,
}
		got := sortedIPKeys(m)
		want := []string{"2001:db8::1", "2001:db8::2", "2001:db8::10"}
		if len(got) != len(want) {
			t.Fatalf("got %v, want %v", got, want)
		}
		for i := range want {
if got[i] != want[i] {
t.Errorf("pos %d: got %q, want %q", i, got[i], want[i])
}
}
})
t.Run("v4 before v6", func(t *testing.T) {
// Mixed-family frontends: the operator-friendly order is
// the v4 block before the v6 block, each sorted numerically
// within its family.
m := map[string]int{
"2001:db8::1": 1,
"10.0.0.2": 2,
"10.0.0.1": 3,
"fe80::1": 4,
"192.168.0.1": 5,
}
got := sortedIPKeys(m)
want := []string{
"10.0.0.1", "10.0.0.2", "192.168.0.1",
"2001:db8::1", "fe80::1",
}
if len(got) != len(want) {
t.Fatalf("got %v, want %v", got, want)
}
for i := range want {
if got[i] != want[i] {
t.Errorf("pos %d: got %q, want %q", i, got[i], want[i])
}
}
})
t.Run("repeated calls produce identical sequence", func(t *testing.T) {
// Core determinism property: Go's map iteration is randomised,
// but sortedIPKeys must normalise it. Run the helper many
// times and compare every result to the first — if the
// normalisation ever breaks we'll see a divergence well within
// the loop count.
m := map[string]int{
"10.0.0.5": 1, "10.0.0.3": 2, "10.0.0.11": 3,
"10.0.0.2": 4, "10.0.0.4": 5, "10.0.0.20": 6,
}
first := sortedIPKeys(m)
for i := 0; i < 1000; i++ {
got := sortedIPKeys(m)
if len(got) != len(first) {
t.Fatalf("iter %d: length drift: got %v, first %v", i, got, first)
}
for j := range first {
if got[j] != first[j] {
t.Fatalf("iter %d pos %d: got %q, first %q", i, j, got[j], first[j])
}
}
}
})
t.Run("insertion order does not matter", func(t *testing.T) {
// A map built by inserting keys in ascending order must
// produce the same result as one built in descending order.
// Both go through the same normalisation.
asc := map[string]int{}
for _, k := range []string{"10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.10", "10.0.0.11"} {
asc[k] = 0
}
desc := map[string]int{}
for _, k := range []string{"10.0.0.11", "10.0.0.10", "10.0.0.3", "10.0.0.2", "10.0.0.1"} {
desc[k] = 0
}
gotAsc := sortedIPKeys(asc)
gotDesc := sortedIPKeys(desc)
if len(gotAsc) != len(gotDesc) {
t.Fatalf("length mismatch: asc %v, desc %v", gotAsc, gotDesc)
}
for i := range gotAsc {
if gotAsc[i] != gotDesc[i] {
t.Errorf("pos %d: asc %q, desc %q", i, gotAsc[i], gotDesc[i])
}
}
})
t.Run("desiredAS map", func(t *testing.T) {
// Exercise the actual call-site type: map[string]desiredAS.
// If the generic helper ever loses its type parameterisation
// this catches it at compile time (the call would fail).
m := map[string]desiredAS{
"10.0.0.9": {Address: net.ParseIP("10.0.0.9"), Weight: 100},
"10.0.0.11": {Address: net.ParseIP("10.0.0.11"), Weight: 100},
"10.0.0.5": {Address: net.ParseIP("10.0.0.5"), Weight: 50},
"10.0.0.1": {Address: net.ParseIP("10.0.0.1"), Weight: 25},
}
		got := sortedIPKeys(m)
		want := []string{"10.0.0.1", "10.0.0.5", "10.0.0.9", "10.0.0.11"}
		if len(got) != len(want) {
			t.Fatalf("got %v, want %v", got, want)
		}
		for i := range want {
if got[i] != want[i] {
t.Errorf("pos %d: got %q, want %q", i, got[i], want[i])
}
}
})
}
// TestCompareIPNumeric pins the ordering comparator that sortedIPKeys
// delegates to. Split out so the v4/v6 boundary and nil-safety logic
// have named failure modes rather than being buried in the map-based
// subtests.
func TestCompareIPNumeric(t *testing.T) {
cases := []struct {
name string
a, b net.IP
want int // -1, 0, +1 (sign of compareIPNumeric)
}{
{"v4 numeric asc", net.ParseIP("10.0.0.2"), net.ParseIP("10.0.0.10"), -1},
{"v4 numeric desc", net.ParseIP("10.0.0.10"), net.ParseIP("10.0.0.2"), 1},
{"v4 equal", net.ParseIP("10.0.0.1"), net.ParseIP("10.0.0.1"), 0},
{"v6 numeric asc", net.ParseIP("2001:db8::2"), net.ParseIP("2001:db8::10"), -1},
{"v6 numeric desc", net.ParseIP("2001:db8::10"), net.ParseIP("2001:db8::2"), 1},
{"v4 before v6", net.ParseIP("192.168.0.1"), net.ParseIP("2001:db8::1"), -1},
{"v6 after v4", net.ParseIP("2001:db8::1"), net.ParseIP("192.168.0.1"), 1},
{"nil before v4", nil, net.ParseIP("10.0.0.1"), -1},
{"v4 after nil", net.ParseIP("10.0.0.1"), nil, 1},
{"nil equal nil", nil, nil, 0},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
got := compareIPNumeric(tc.a, tc.b)
sign := func(x int) int {
switch {
case x < 0:
return -1
case x > 0:
return 1
}
return 0
}
if sign(got) != tc.want {
t.Errorf("got %d (sign %d), want sign %d", got, sign(got), tc.want)
}
})
}
}


@@ -70,6 +70,15 @@ func (r *Reconciler) Run(ctx context.Context) {
// Frontend-transition events are observational only — the dataplane work
// they would imply has already been done by the backend-transition event
// that triggered them.
//
// The handler consults the VPP client's warmup tracker before doing any
// dataplane work. During the startup warmup window the reconciler is
// either fully suppressed (inside min-delay) or per-VIP gated (the
// frontend must have been released before events for it pass through).
// When a transition fires for a VIP that isn't yet released but whose
// backends have now all settled, the handler opportunistically releases
// it here so the per-VIP release fires on the event rather than waiting
// for the next warmup poll tick.
func (r *Reconciler) handle(ev checker.Event) {
	if ev.FrontendTransition != nil {
		return // frontend-only event; no dataplane work
@@ -81,20 +90,66 @@ func (r *Reconciler) handle(ev checker.Event) {
	if cfg == nil {
		return
	}
w := r.client.warmup
feName := ev.FrontendName
if !w.isReleased(feName) {
// Warmup is still gating this frontend. Decide whether to
// release it now, or defer until a later event / the warmup
// poll / the final max-delay SyncLBStateAll.
if w.inMinDelay() {
slog.Debug("vpp-reconciler-suppressed-min-delay",
"frontend", feName,
"backend", ev.BackendName,
"from", ev.Transition.From.String(),
"to", ev.Transition.To.String(),
"elapsed", w.elapsed(),
"reason", "inside vpp.lb.startup-min-delay window")
return
}
fe, ok := cfg.Frontends[feName]
if !ok {
return
}
if !allBackendsKnown(fe, r.stateSrc) {
slog.Debug("vpp-reconciler-suppressed-warmup",
"frontend", feName,
"backend", ev.BackendName,
"from", ev.Transition.From.String(),
"to", ev.Transition.To.String(),
"elapsed", w.elapsed(),
"reason", "frontend has backends still in StateUnknown; "+
"waiting for all to settle or for max-delay watchdog")
return
}
		if !w.tryRelease(feName) {
			// tryRelease fails only inside the min-delay window, which
			// inMinDelay ruled out above, so this is a defensive guard.
			// (A racing release by another caller makes tryRelease return
			// true, not false.) Drop the event; the warmup poll will
			// release the VIP on its next tick.
			return
		}
slog.Info("vpp-lb-warmup-release",
"frontend", feName,
"trigger", "reconciler-event",
"backend", ev.BackendName,
"elapsed", w.elapsed())
}
	slog.Debug("vpp-reconciler-event",
-		"frontend", ev.FrontendName,
+		"frontend", feName,
		"backend", ev.BackendName,
		"from", ev.Transition.From.String(),
		"to", ev.Transition.To.String())
-	if err := r.client.SyncLBStateVIP(cfg, ev.FrontendName, ""); err != nil {
+	if err := r.client.SyncLBStateVIP(cfg, feName, ""); err != nil {
		if errors.Is(err, ErrFrontendNotFound) {
			// Frontend was removed between the event being emitted and
			// us handling it; a periodic SyncLBStateAll will clean it up.
			return
		}
		slog.Warn("vpp-reconciler-error",
-			"frontend", ev.FrontendName,
+			"frontend", feName,
			"backend", ev.BackendName,
			"from", ev.Transition.From.String(),
			"to", ev.Transition.To.String(),

internal/vpp/warmup.go (new file)

@@ -0,0 +1,408 @@
// Copyright (c) 2026, Pim van Pelt <pim@ipng.ch>
package vpp
import (
	"context"
	"log/slog"
	"sync"
	"time"

	"git.ipng.ch/ipng/vpp-maglev/internal/config"
	"git.ipng.ch/ipng/vpp-maglev/internal/health"
)
// warmupPollInterval is how often runWarmup re-checks per-VIP backend
// state during the [minDelay, maxDelay) per-VIP release phase. 250ms
// is fast enough that a VIP whose last backend just completed rise
// probes gets released within a quarter-second of settling, and slow
// enough that the polling cost is negligible compared to probe work
// the checker is doing on the same core at the same time.
const warmupPollInterval = 250 * time.Millisecond
// warmupTracker is the per-process gate for the VPP LB sync path
// during the first StartupMaxDelay seconds of maglevd's life. It
// exists to keep a maglevd restart dataplane-neutral: without it,
// the first SyncLBStateAll would fire before any probes had
// completed, every backend would still be in StateUnknown, and
// BackendEffectiveWeight would reduce every AS to weight 0 — which
// on VPP's side means the new-flow table empties and every new
// connection hits the "no server" drop counter until the rise
// counters catch up.
//
// The tracker expresses three states and the transitions between
// them:
//
// 1. inside [0, minDelay) — "min-delay window". No sync of any
// kind is allowed to touch VPP, neither the periodic SyncLBStateAll
// loop nor the per-transition SyncLBStateVIP path from the
// reconciler. This is the absolute hands-off window the operator
// configures with vpp.lb.startup-min-delay.
//
// 2. inside [minDelay, maxDelay), per-VIP gating — "release phase".
// Each frontend is released (and one SyncLBStateVIP runs against
// it) as soon as every backend it references has reached a
// non-Unknown state. Both the warmup driver goroutine (which
// polls at warmupPollInterval) and the reconciler event path
// (which checks on every received transition) attempt to release
// VIPs; tryRelease arbitrates.
//
// 3. allDone — warmup is complete. Either every VIP has been
// individually released during the release phase, or the
// maxDelay watchdog expired and the warmup driver ran a final
// SyncLBStateAll for any stragglers. After allDone every gate
// is open, the reconciler runs normally on every transition,
// and the periodic lbSyncLoop's ticker starts.
//
// The clock is process-relative: startAt is set in Client.New()
// and does not reset across VPP reconnects. If VPP drops at t=8s
// while the release phase is mid-run and reconnects at t=12s, the
// warmup driver re-enters the release phase knowing that 12s of
// the 30s maxDelay have already been consumed. If VPP stays down
// past maxDelay, the first connect after that jumps straight to
// the final SyncLBStateAll and marks allDone.
type warmupTracker struct {
startAt time.Time
minDelay time.Duration
maxDelay time.Duration
mu sync.Mutex
released map[string]bool // frontend name → released-for-sync
allDone bool
doneCh chan struct{} // closed when allDone is first set
}
// newWarmupTracker constructs a tracker with startAt pinned to time.Now().
// Delay values are not read at construction time — they come from the
// config via runWarmup's call to getStateSource().Config() — so main.go
// can construct the Client before the config has been fully propagated.
func newWarmupTracker() *warmupTracker {
return &warmupTracker{
startAt: time.Now(),
released: make(map[string]bool),
doneCh: make(chan struct{}),
}
}
// configure latches the min/max delay values onto the tracker. Idempotent
// if called with the same values; separate from the constructor so the
// tracker exists before we've parsed the config, and so runWarmup can
// read a consistent pair of values even if the config is reloaded mid-
// warmup (per design decision, config reload does not reset the warmup
// clock, and the delay values latched at first configure() are authoritative
// for the lifetime of the warmup phase).
func (w *warmupTracker) configure(minDelay, maxDelay time.Duration) {
w.mu.Lock()
defer w.mu.Unlock()
// Only latch once. Subsequent calls are no-ops so a config reload
// doesn't re-run warmup against new (possibly shorter) delays.
if w.minDelay != 0 || w.maxDelay != 0 {
return
}
w.minDelay = minDelay
w.maxDelay = maxDelay
}
// inMinDelay reports whether the absolute hands-off window is still active.
func (w *warmupTracker) inMinDelay() bool {
w.mu.Lock()
defer w.mu.Unlock()
if w.allDone {
return false
}
return time.Since(w.startAt) < w.minDelay
}
// isReleased reports whether the given frontend may be synced. True if
// warmup is fully done or this specific frontend has been individually
// released during the release phase. Fast path for the reconciler event
// handler: a cheap check before it considers attempting a release.
func (w *warmupTracker) isReleased(feName string) bool {
w.mu.Lock()
defer w.mu.Unlock()
return w.allDone || w.released[feName]
}
// tryRelease atomically decides whether to release feName. Returns true
// if the frontend is (now) eligible for sync:
//
// - already released (by a previous caller, including allDone) → true
// - inside minDelay window → false
// - past minDelay and caller has verified allKnown externally → true,
// and the tracker is mutated to remember the release
//
// tryRelease does NOT check the allKnown precondition itself — the
// caller is responsible for evaluating backend states before calling.
// Separating the checks this way lets two independent release drivers
// (the warmup poll goroutine and the reconciler event handler) share
// the same gating state without exposing a mid-check race.
//
// Returns true for the "already released" case so callers have a
// single branch: if tryRelease(fe) is true, proceed to sync.
func (w *warmupTracker) tryRelease(feName string) bool {
w.mu.Lock()
defer w.mu.Unlock()
if w.allDone || w.released[feName] {
return true
}
if time.Since(w.startAt) < w.minDelay {
return false
}
w.released[feName] = true
return true
}
// finishAll marks warmup fully complete. Called once by runWarmup when
// either every frontend has been released via the per-VIP path or the
// maxDelay watchdog has expired. Idempotent: repeat calls are no-ops.
// Closes doneCh on the first call so waiters in lbSyncLoop unblock.
func (w *warmupTracker) finishAll() {
w.mu.Lock()
defer w.mu.Unlock()
if w.allDone {
return
}
w.allDone = true
close(w.doneCh)
}
// doneChan returns a channel that is closed when finishAll is called.
// Waiters block on this to defer periodic sync work until after the
// warmup phase has completed (or been skipped entirely).
func (w *warmupTracker) doneChan() <-chan struct{} {
return w.doneCh
}
// isAllDone is the non-blocking companion to doneChan: true iff
// finishAll has been called. Used by lbSyncLoop to decide whether
// to re-enter runWarmup on each VPP reconnect.
func (w *warmupTracker) isAllDone() bool {
w.mu.Lock()
defer w.mu.Unlock()
return w.allDone
}
// elapsed returns how long the tracker has been running, formatted
// as a human-readable Go duration string (e.g. "959ms", "5.2s") for
// use in slog attributes. Returning the string form directly —
// rather than a time.Duration — is deliberate: slog's default JSON
// handler renders time.Duration as a raw nanosecond int64 which is
// unreadable in a log viewer, while a pre-formatted string lands
// as "5.2s" and matches how the config values are written in YAML.
func (w *warmupTracker) elapsed() string {
return time.Since(w.startAt).Round(time.Millisecond).String()
}
// allBackendsKnown reports whether every backend referenced by fe is
// in a non-Unknown state. This is the precondition for releasing a
// frontend during the per-VIP release phase: desiredFromFrontend can
// only produce correct weights for a frontend whose backends have
// all been probed at least once through the health checker's rise
// counter (unknown → up/down).
//
// "Known" here is the literal reading: StateUnknown disqualifies,
// everything else qualifies. That means a legitimately-down backend
// counts as known and contributes its weight=0 to the desired set,
// which is the correct restart behaviour — a backend that was down
// before the restart stays down across the restart without waiting
// for it to come back up.
func allBackendsKnown(fe config.Frontend, src StateSource) bool {
for _, pool := range fe.Pools {
for bName := range pool.Backends {
s, ok := src.BackendState(bName)
if !ok || s == health.StateUnknown {
return false
}
}
}
return true
}
// runWarmup drives the warmup state machine. Called from lbSyncLoop
// on first entry (subsequent reconnect entries find allDone == true
// and skip straight to the periodic ticker).
//
// Phases:
//
// 1. Latch delay values from the current config.
// 2. If maxDelay == 0 (warmup disabled): run SyncLBStateAll
// immediately, mark allDone, return.
// 3. Sleep until minDelay has elapsed (absolute hands-off).
// 4. Poll every warmupPollInterval, releasing any frontend whose
// backends are all known. Each release fires a single-VIP sync.
// Exit the poll when all frontends are released OR maxDelay
// elapses.
// 5. Run SyncLBStateAll for any stragglers and mark allDone.
//
// Exits early if ctx is cancelled at any point.
func (c *Client) runWarmup(ctx context.Context) {
src := c.getStateSource()
if src == nil {
// No state source ever registered; nothing meaningful to do.
// Close the gate so lbSyncLoop doesn't hang.
c.warmup.finishAll()
return
}
cfg := src.Config()
if cfg == nil {
c.warmup.finishAll()
return
}
c.warmup.configure(cfg.VPP.LB.StartupMinDelay, cfg.VPP.LB.StartupMaxDelay)
w := c.warmup
// maxDelay == 0 is the "no warmup" escape hatch: sync immediately
// and mark the gate open. Operators pick this for tests and dev
// setups where a few seconds of startup black-hole on bounce is
// acceptable in exchange for not having to wait out the warmup.
if w.maxDelay == 0 {
		slog.Info("vpp-lb-warmup-skipped",
			"impact", "Warmup disabled, syncing VPP LB state immediately")
if err := c.SyncLBStateAll(cfg); err != nil {
slog.Warn("vpp-lb-sync-error", "err", err)
}
w.finishAll()
return
}
slog.Info("vpp-lb-warmup-start",
"min-delay", w.minDelay.String(),
"max-delay", w.maxDelay.String(),
"impact", "Gating all VPP LB updates")
// Phase 3: wait out the min-delay absolute hands-off window.
minDeadline := w.startAt.Add(w.minDelay)
if wait := time.Until(minDeadline); wait > 0 {
select {
case <-ctx.Done():
return
case <-time.After(wait):
}
}
slog.Info("vpp-lb-warmup-min-delay-elapsed",
"elapsed", w.elapsed(),
"impact", "Ungating VPP LB updates for VIPs with known backend state")
// Phase 4: poll for per-VIP release until maxDelay expires.
// happyPath is set to true if we exit the poll loop because
// every frontend has been released individually via SyncLBStateVIP.
// In that case Phase 5 below skips the SyncLBStateAll entirely —
// running it would be a redundant full reconcile over a dataplane
// that's already in the desired state, and the log would misreport
// the warmup-complete event as a "max-delay-final" stragglers sweep.
maxDeadline := w.startAt.Add(w.maxDelay)
happyPath := false
for time.Now().Before(maxDeadline) {
// Re-read the state source every tick: the config may not
// have been available at loop entry (e.g. first connect
// beat the config load race) but could be present now.
src = c.getStateSource()
if src != nil {
cfg = src.Config()
}
if cfg != nil {
// Release any frontend whose backends have all settled.
allReleased := true
for feName, fe := range cfg.Frontends {
if w.isReleased(feName) {
continue
}
if !allBackendsKnown(fe, src) {
allReleased = false
continue
}
if !w.tryRelease(feName) {
// Still inside minDelay — shouldn't happen here
// because we waited above, but guard anyway.
allReleased = false
continue
}
slog.Info("vpp-lb-warmup-release",
"frontend", feName,
"trigger", "poll",
"elapsed", w.elapsed())
if err := c.SyncLBStateVIP(cfg, feName, ""); err != nil {
slog.Warn("vpp-lb-warmup-release-error",
"frontend", feName,
"err", err)
}
}
if allReleased {
// Everything settled before maxDelay — fast path
// out of the poll so we don't sit idle for the
// remainder of the watchdog window.
happyPath = true
break
}
}
// Sleep warmupPollInterval or until maxDelay, whichever
// is shorter, before trying again.
wait := warmupPollInterval
if rem := time.Until(maxDeadline); rem < wait {
wait = rem
}
select {
case <-ctx.Done():
return
case <-time.After(wait):
}
}
// Phase 5: close out warmup. Two paths, but both emit
// vpp-lb-warmup-max-delay-elapsed at the max-delay boundary so
// the log timeline (start → min-delay-elapsed → (releases
// happen) → max-delay-elapsed) is consistent regardless of
// whether the warmup ended early or the watchdog tripped.
//
// - happyPath: every frontend was released individually during
// Phase 4 and each one's SyncLBStateVIP already ran. VPP is
// in the desired state; finishAll is called immediately so
// the periodic sync loop can start drift-correction without
// waiting out the rest of max-delay. The warmup driver then
// sleeps until max-delay and emits -max-delay-elapsed as a
// gratuitous timeline marker — the gate is already open,
// but the line completes the warmup picture for an operator
// reading the log and keeps the event sequence symmetric
// with the watchdog path.
//
// - watchdog: max-delay elapsed with stragglers remaining. At
// least one frontend never made it through allBackendsKnown,
// so its effective weight computation still treats some
// backends as StateUnknown and will program weight=0 for
// them. Emit -max-delay-elapsed at the boundary, run
// SyncLBStateAll to sweep stragglers, then finishAll.
if happyPath {
slog.Info("vpp-lb-warmup-complete",
"elapsed", w.elapsed(),
"impact", "Ungating VPP LB updates, all frontends released")
w.finishAll()
if wait := time.Until(maxDeadline); wait > 0 {
select {
case <-ctx.Done():
return
case <-time.After(wait):
}
}
slog.Info("vpp-lb-warmup-max-delay-elapsed",
"elapsed", w.elapsed(),
"impact", "Ungating all VPP LB updates")
return
}
slog.Info("vpp-lb-warmup-max-delay-elapsed",
"elapsed", w.elapsed(),
"impact", "Ungating all VPP LB updates")
src = c.getStateSource()
if src != nil {
cfg = src.Config()
}
if cfg != nil {
if err := c.SyncLBStateAll(cfg); err != nil {
slog.Warn("vpp-lb-sync-error", "err", err)
}
}
w.finishAll()
}
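The two delay knobs that runWarmup latches correspond to the config keys named in the comments (vpp.lb.startup-min-delay and vpp.lb.startup-max-delay). A hypothetical YAML fragment, assuming the config file nests keys the way the dotted names suggest:

```yaml
vpp:
  lb:
    # Absolute hands-off window: no LB sync of any kind touches VPP.
    startup-min-delay: 5s
    # Watchdog for the per-VIP release phase; a final SyncLBStateAll
    # sweeps any frontends still gated when it expires.
    startup-max-delay: 30s
    # Set both to 0 to disable the warmup entirely.
```

With defaults, the expected log timeline on restart is vpp-lb-warmup-start → vpp-lb-warmup-min-delay-elapsed → one vpp-lb-warmup-release per frontend → vpp-lb-warmup-complete (or the watchdog's vpp-lb-warmup-max-delay-elapsed sweep).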

internal/vpp/warmup_test.go (new file)

@@ -0,0 +1,232 @@
// Copyright (c) 2026, Pim van Pelt <pim@ipng.ch>
package vpp
import (
	"net"
	"testing"
	"time"

	"git.ipng.ch/ipng/vpp-maglev/internal/config"
	"git.ipng.ch/ipng/vpp-maglev/internal/health"
)
// TestWarmupTrackerBasic pins the state-machine transitions the tracker
// owns: inMinDelay, isReleased, tryRelease, finishAll, isAllDone. Time
// is manipulated by backdating startAt — there is no hidden global
// clock, so a test can assert behaviour at any simulated point in
// the [0, maxDelay] window by rewinding startAt before each step.
func TestWarmupTrackerBasic(t *testing.T) {
t.Run("min-delay gates everything", func(t *testing.T) {
w := newWarmupTracker()
w.configure(5*time.Second, 30*time.Second)
// At t=0 we are inside min-delay.
if !w.inMinDelay() {
t.Error("t=0: expected inMinDelay=true")
}
if w.isReleased("fe1") {
t.Error("t=0: expected isReleased(fe1)=false")
}
if w.tryRelease("fe1") {
t.Error("t=0: tryRelease should fail inside min-delay")
}
if w.isAllDone() {
t.Error("t=0: expected isAllDone=false")
}
})
t.Run("per-VIP release after min-delay", func(t *testing.T) {
w := newWarmupTracker()
w.configure(5*time.Second, 30*time.Second)
// Simulate t=10s by backdating startAt 10s into the past.
w.startAt = time.Now().Add(-10 * time.Second)
if w.inMinDelay() {
t.Error("t=10s: expected inMinDelay=false")
}
if w.isReleased("fe1") {
t.Error("t=10s: fe1 should not be released yet")
}
if !w.tryRelease("fe1") {
t.Error("t=10s: tryRelease(fe1) should succeed")
}
if !w.isReleased("fe1") {
t.Error("after tryRelease: isReleased(fe1) should be true")
}
// A second call returns true (already-released path).
if !w.tryRelease("fe1") {
t.Error("second tryRelease(fe1) should return true (already released)")
}
// Other VIPs are independent.
if w.isReleased("fe2") {
t.Error("releasing fe1 should not affect fe2")
}
})
t.Run("finishAll opens all gates", func(t *testing.T) {
w := newWarmupTracker()
w.configure(5*time.Second, 30*time.Second)
// Inside min-delay: fe1 not released.
if w.isReleased("fe1") {
t.Error("pre-finishAll: fe1 should not be released")
}
w.finishAll()
if !w.isAllDone() {
t.Error("post-finishAll: isAllDone should be true")
}
if !w.isReleased("fe1") {
t.Error("post-finishAll: every frontend should be released")
}
if w.inMinDelay() {
t.Error("post-finishAll: inMinDelay should be false")
}
// Second call is idempotent.
w.finishAll()
})
t.Run("doneChan closes on finishAll", func(t *testing.T) {
w := newWarmupTracker()
w.configure(5*time.Second, 30*time.Second)
select {
case <-w.doneChan():
t.Fatal("doneChan should not be readable before finishAll")
default:
}
w.finishAll()
select {
case <-w.doneChan():
// expected
case <-time.After(100 * time.Millisecond):
t.Fatal("doneChan should be readable after finishAll")
}
})
t.Run("configure is idempotent after first call", func(t *testing.T) {
w := newWarmupTracker()
w.configure(5*time.Second, 30*time.Second)
// Reload with shorter delays should be a no-op so a config
// reload mid-warmup doesn't move the goalposts.
w.configure(1*time.Second, 2*time.Second)
if w.minDelay != 5*time.Second {
t.Errorf("minDelay got %v, want %v", w.minDelay, 5*time.Second)
}
if w.maxDelay != 30*time.Second {
t.Errorf("maxDelay got %v, want %v", w.maxDelay, 30*time.Second)
}
})
}
// staticStateSource is a minimal StateSource for allBackendsKnown tests.
// It holds a fixed config and a static backend-state map; BackendState
// returns (state, true) for entries in the map and (StateUnknown, false)
// for everything else — matching how checker.Checker would report a
// backend that isn't under its watch.
type staticStateSource struct {
cfg *config.Config
states map[string]health.State
}
func (s *staticStateSource) Config() *config.Config { return s.cfg }
func (s *staticStateSource) BackendState(name string) (health.State, bool) {
st, ok := s.states[name]
return st, ok
}
// TestAllBackendsKnown pins the per-VIP release precondition. A
// frontend is eligible for release during the warmup phase iff every
// backend it references has reached a non-Unknown state. StateDown,
// StatePaused, and StateDisabled all count as "known" — the property
// is "has the checker reported at least once", not "is healthy".
func TestAllBackendsKnown(t *testing.T) {
ip := func(s string) net.IP { return net.ParseIP(s) }
fe := config.Frontend{
Address: ip("192.0.2.1"),
Protocol: "tcp",
Port: 80,
Pools: []config.Pool{
{Name: "primary", Backends: map[string]config.PoolBackend{
"b1": {Weight: 100},
"b2": {Weight: 100},
}},
{Name: "fallback", Backends: map[string]config.PoolBackend{
"b3": {Weight: 100},
}},
},
}
cases := []struct {
name string
states map[string]health.State
want bool
}{
{
name: "all up → known",
states: map[string]health.State{
"b1": health.StateUp, "b2": health.StateUp, "b3": health.StateUp,
},
want: true,
},
{
name: "mixed up/down/disabled → known",
// All of up, down, paused, disabled are "non-Unknown" and
			// therefore count as known; allBackendsKnown is about
// "has the checker reported once", not about health.
states: map[string]health.State{
"b1": health.StateDown, "b2": health.StateDisabled, "b3": health.StatePaused,
},
want: true,
},
{
name: "one still unknown → not known",
states: map[string]health.State{
"b1": health.StateUp, "b2": health.StateUnknown, "b3": health.StateUp,
},
want: false,
},
{
name: "backend not reported at all (checker doesn't know it) → not known",
states: map[string]health.State{"b1": health.StateUp, "b2": health.StateUp},
want: false,
},
{
name: "fallback pool unknown → not known",
states: map[string]health.State{
"b1": health.StateUp, "b2": health.StateUp, "b3": health.StateUnknown,
},
want: false,
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
src := &staticStateSource{cfg: &config.Config{}, states: tc.states}
got := allBackendsKnown(fe, src)
if got != tc.want {
t.Errorf("got %v, want %v", got, tc.want)
}
})
}
}
// TestWarmupTrackerZeroDelays pins the "warmup disabled" escape hatch:
// with both delays set to 0, tryRelease succeeds immediately and
// isReleased returns true for every frontend without needing any
// state transitions. This is the configuration an operator picks
// when they'd rather take the brief startup black-hole than wait
// out the warmup — typically for tests and dev setups.
func TestWarmupTrackerZeroDelays(t *testing.T) {
w := newWarmupTracker()
w.configure(0, 0)
if w.inMinDelay() {
t.Error("min=0: expected inMinDelay=false immediately")
}
if !w.tryRelease("fe1") {
t.Error("min=0: tryRelease should succeed immediately")
}
if !w.isReleased("fe1") {
t.Error("min=0: isReleased(fe1) should be true after tryRelease")
}
}
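BackendEffectiveWeight is referenced throughout but not shown in this diff. A hypothetical sketch of the behaviour the surrounding comments attest — StateUnknown and StateDown both collapse to weight 0, which is what makes the un-warmed first sync empty VPP's new-flow table:

```go
package main

import "fmt"

// State is a hypothetical stand-in for health.State; only the values
// the comments above attest are modelled here.
type State int

const (
	StateUnknown State = iota
	StateUp
	StateDown
)

// backendEffectiveWeight sketches the mapping the warmup window guards
// against: any non-Up state, including the post-restart StateUnknown,
// collapses to weight 0, removing the AS from the Maglev new-flow table.
func backendEffectiveWeight(s State, configured uint32) uint32 {
	if s == StateUp {
		return configured
	}
	return 0
}

func main() {
	fmt.Println(backendEffectiveWeight(StateUnknown, 100)) // 0: why a restart black-holes new flows
	fmt.Println(backendEffectiveWeight(StateUp, 100))      // 100
}
```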