Dataplane reconcile fixes; LB counters cleanup; SPA scope cookie

Checker / reload:
- Reload's update-in-place branch now mirrors b.Address onto the
  runtime health.Backend. Without this, GetBackend kept returning
  the pre-reload address indefinitely after a config edit that
  touched addresses but not healthcheck settings — the VPP sync
  path reads cfg.Backends directly so the dataplane moved on
  while the gRPC and SPA views stayed wedged on the old IPv4/IPv6
  address.
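
The shape of the fix, as a minimal standalone sketch (the type and
function names here are hypothetical stand-ins for the checker's real
config/runtime types, not the actual API):

```go
package main

import "fmt"

// BackendConfig is a hypothetical stand-in for a parsed cfg.Backends
// entry; Backend stands in for the long-lived runtime health.Backend.
type BackendConfig struct {
	Name    string
	Address string
}

type Backend struct {
	Address string
}

// reloadInPlace sketches the update-in-place branch: when a backend
// survives a reload, the new address must be mirrored onto the
// runtime object, not just kept in the config slice.
func reloadInPlace(runtime map[string]*Backend, cfg []BackendConfig) {
	for _, b := range cfg {
		if rb, ok := runtime[b.Name]; ok {
			// The fix: without this line, GetBackend keeps
			// serving the pre-reload address forever.
			rb.Address = b.Address
			continue
		}
		runtime[b.Name] = &Backend{Address: b.Address}
	}
}

func main() {
	rt := map[string]*Backend{"web1": {Address: "192.0.2.1"}}
	reloadInPlace(rt, []BackendConfig{{Name: "web1", Address: "192.0.2.9"}})
	fmt.Println(rt["web1"].Address) // 192.0.2.9
}
```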

Sync (internal/vpp/lbsync.go):
- reconcileVIP now detects encap mismatch in addition to
  src-ip-sticky mismatch and takes the full tear-down / re-add
  path via a new shared recreateVIP helper. Triggered when every
  backend flips address family (gre4 <-> gre6) and the existing
  VIP can no longer accept new ASes — previously the sync wedged
  with 'Invalid address family' until a full maglevd restart.
- setASWeight is issued whenever the state machine requests
  flush (a.Flush=true), not only on the weight-value transition
  edge. Fixes the case where a backend reached StateDisabled
  after its effective weight had already been drained to 0 by
  pool failover — the sticky-cache entries pointing at it were
  previously never cleared.

maglev-frontend:
- signal.Ignore(SIGHUP) so a controlling-terminal disconnect
  doesn't kill the daemon.
- debian/vpp-maglev.service grants CAP_SYS_ADMIN in addition to
  CAP_NET_RAW so setns(CLONE_NEWNET) can join the healthcheck
  netns. Comment documents the 'operation not permitted' symptom
  and notes the knob can be dropped if the deployment doesn't use
  the 'netns:' healthcheck option.

LB plugin counters (internal/vpp/lbstats.go + friends):
- Fix the VIP counter regex: the LB plugin registers
  vlib_simple_counter_main_t names without a leading '/'
  (vlib_validate_simple_counter in counter.c:50 uses cm->name
  verbatim; only entries that set cm->stat_segment_name get a
  slash). first/next/untracked/no-server now read through as
  live values instead of zero.
- Drop the per-backend FIB counter block end-to-end (proto,
  grpcapi, metrics, vpp.Client, lbstats, maglevc). Traced from
  lb/node.c:558 into ip{4,6}_forward.h:141 — the LB plugin
  forwards by writing adj_index[VLIB_TX] directly and bypassing
  ip{4,6}_lookup_inline, which is the only path that increments
  lbm_to_counters. The backend's FIB load_balance stats_index
  literally never ticks for LB-forwarded traffic, so the column
  was always zero and misleading. docs/implementation/TODO
  records the full investigation and the recommended upstream
  path (new lb_as_stats_dump API message) for when we're ready
  to carry that VPP patch.
- maglevc show vpp lb counters: plain-text tabular headers.
  label() wraps strings in ANSI escapes (~11 bytes of overhead),
  but tabwriter counts bytes, not rendered width — so a header
  row with label()'d cells sitting above data rows with plain
  cells skews column alignment on every row. color.go comment now spells
  out the constraint: label() only works when column N is
  wrapped identically in every row (key-value layouts are fine,
  multi-column tables with header-only labelling are not).
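
A minimal reproduction of the tabwriter byte-counting problem
(label() here is a hypothetical stand-in for the color.go helper; the
bold escapes used add 8 invisible bytes per cell):

```go
package main

import (
	"bytes"
	"fmt"
	"text/tabwriter"
)

// label wraps a string in ANSI bold escapes: 4 bytes of prefix plus
// 4 bytes of suffix that a terminal renders as zero-width.
func label(s string) string { return "\x1b[1m" + s + "\x1b[0m" }

func main() {
	var buf bytes.Buffer
	w := tabwriter.NewWriter(&buf, 0, 4, 2, ' ', 0)
	// Header cells are label()'d, data cells are plain. tabwriter
	// pads every cell to the column's maximum *byte* width, so the
	// header's invisible escape bytes inflate column 1: rendered,
	// the header's second column lands 8 cells left of the data
	// row's, and the misalignment compounds per labelled column.
	fmt.Fprintln(w, label("ADDRESS")+"\t"+label("WEIGHT"))
	fmt.Fprintln(w, "10.0.0.1\t100")
	w.Flush()
	fmt.Print(buf.String())
}
```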

SPA:
- stores/scope.ts is cookie-backed (maglev_scope, 1 year,
  SameSite=Lax). App.tsx hydrates from the cookie then validates
  against the fetched snapshots: a cookie referencing a maglevd
  that no longer exists falls through to snaps[0] instead of
  leaving the user on a ghost selection.
- components/Flash.tsx wraps props.value in createMemo. Solid's
  on() fires its callback on every dep notification, not on
  value change — source is right in solid-js/dist/solid.js:460,
  no equality check. Without the memo, flipping scope between
  two 'connected' maglevds (or any other cross-store reactive
  re-eval that doesn't actually change the concrete string)
  replays the animation every time. createMemo's default ===
  dedupe fixes it in one place for every Flash consumer,
  superseding the local createMemo workaround we'd added in
  BackendRow earlier.
2026-04-14 14:39:52 +02:00
parent 4288e22b71
commit 224167ce39
20 changed files with 435 additions and 471 deletions


@@ -251,27 +251,25 @@ func reconcileVIP(ch *loggedChannel, d desiredVIP, cur *LBVIP, curSticky bool, f
}
if curSticky != d.SrcIPSticky {
slog.Info("vpp-lb-sync-vip-recreate",
"vip", d.Prefix.IP.String(),
"protocol", protocolName(d.Protocol),
"port", d.Port,
"reason", "src-ip-sticky-changed",
"from", curSticky,
"to", d.SrcIPSticky)
if err := removeVIP(ch, *cur, st); err != nil {
return err
}
if err := addVIP(ch, d); err != nil {
return err
}
st.vipAdd++
for _, as := range d.ASes {
if err := addAS(ch, d.Prefix, d.Protocol, d.Port, as); err != nil {
return err
}
st.asAdd++
}
return nil
return recreateVIP(ch, d, *cur, st, "src-ip-sticky-changed",
"from", curSticky, "to", d.SrcIPSticky)
}
// Encap mismatch: every backend flipped address family (e.g. all
// IPv6 → all IPv4 after a config edit). VPP's encap is a VIP-level
// attribute set at lb_add_del_vip time with no mutation API, so
// adding new-family ASes under the old encap wedges the
// reconciler: packets would be wrapped for the wrong family and
// the new backends never see traffic. The only recovery is a VIP
// recreate. The old-family ASes end up orphaned in VPP's pool and
// are GC'd on the plugin's own ~40s "USED-flag=false" schedule;
// the next regular sync tick confirms steady state. A recreate
// does tear down existing flows to the VIP, which is why we gate
// on an explicit encap difference rather than reconciling every
// sync cycle.
if desiredEncap := encapString(d.Encap); desiredEncap != cur.Encap {
return recreateVIP(ch, d, *cur, st, "encap-changed",
"from", cur.Encap, "to", desiredEncap)
}
// VIP exists in both — reconcile ASes.
@@ -301,20 +299,38 @@ func reconcileVIP(ch *loggedChannel, d desiredVIP, cur *LBVIP, curSticky bool, f
st.asAdd++
continue
}
if c.Weight != a.Weight {
// Flush only on the transition from serving traffic (cur > 0) to
// zero, and only when the desired state explicitly asks for it
// (i.e. the backend was disabled, not merely drained). Steady-
// state syncs where weight doesn't change never re-flush.
flush := a.Flush && c.Weight > 0 && a.Weight == 0
// Caller-forced flush: used by SetFrontendPoolBackendWeight
// with flush=true to explicitly drop live sessions for a
// single backend. The address match is exact — no other
// AS's weight change is affected, even if several happen
// in the same reconcile pass.
if flushAddress != "" && addr == flushAddress {
flush = true
}
// setASWeight is issued whenever the weight changes OR whenever
// the state machine asks for a flush (a.Flush=true, currently
// emitted only for StateDisabled). The a.Flush path has to
// fire even on a no-op weight change, because a backend can
// reach StateDisabled via a pool-failover that already drained
// its VPP weight to 0 on an earlier tick — at that moment
// c.Weight == a.Weight == 0, and a gate keyed solely on the
// weight diff would silently drop the flush intent and leave
// stale sticky-cache entries pointing at the now-disabled AS
// (see the "disable nlams0 after fallback deactivation" trace
// in the bug investigation).
//
// Firing unconditionally on a.Flush is idempotent at VPP's
// side: lb_as_set_weight with an unchanged weight is a no-op
// on the Maglev table (lb_vip_update_new_flow_table rebuilds
// the same table), and a redundant lb_flush_vip_as is bounded
// — it walks each per-worker sticky_ht once. The trade-off is
// that disabled backends re-issue the flush on every periodic
// SyncLBStateAll tick, and any sticky entries that happened to
// land in the meantime get cleared; both are acceptable for
// the "correctness over churn" semantics we want here.
weightChanged := c.Weight != a.Weight
flush := a.Flush
// Caller-forced flush: used by SetFrontendPoolBackendWeight
// with flush=true to explicitly drop live sessions for a
// single backend. The address match is exact — no other
// AS's weight change is affected, even if several happen
// in the same reconcile pass.
if flushAddress != "" && addr == flushAddress {
flush = true
}
if weightChanged || flush {
if err := setASWeight(ch, d.Prefix, d.Protocol, d.Port, a, c.Weight, flush); err != nil {
return err
}
@@ -324,6 +340,37 @@ func reconcileVIP(ch *loggedChannel, d desiredVIP, cur *LBVIP, curSticky bool, f
return nil
}
// recreateVIP tears down an existing VIP and rebuilds it with the
// desired configuration and ASes. Used when a VIP attribute that VPP
// can't mutate in place has changed — today src_ip_sticky and the
// encap family. reason is logged as an operator-facing explanation;
// extra is appended to the slog call as additional fields (typically
// "from", <oldvalue>, "to", <newvalue>).
func recreateVIP(ch *loggedChannel, d desiredVIP, cur LBVIP, st *syncStats, reason string, extra ...any) error {
logAttrs := []any{
"vip", d.Prefix.IP.String(),
"protocol", protocolName(d.Protocol),
"port", d.Port,
"reason", reason,
}
logAttrs = append(logAttrs, extra...)
slog.Info("vpp-lb-sync-vip-recreate", logAttrs...)
if err := removeVIP(ch, cur, st); err != nil {
return err
}
if err := addVIP(ch, d); err != nil {
return err
}
st.vipAdd++
for _, as := range d.ASes {
if err := addAS(ch, d.Prefix, d.Protocol, d.Port, as); err != nil {
return err
}
st.asAdd++
}
return nil
}
// removeVIP flushes all ASes from a VIP and then deletes the VIP itself.
func removeVIP(ch *loggedChannel, v LBVIP, st *syncStats) error {
for _, as := range v.ASes {