vpp-maglev/internal/vpp/client.go
Pim van Pelt 224167ce39 Dataplane reconcile fixes; LB counters cleanup; SPA scope cookie
Checker / reload:
- Reload's update-in-place branch now mirrors b.Address onto the
  runtime health.Backend. Without this, GetBackend kept returning
  the pre-reload address indefinitely after a config edit that
  touched addresses but not healthcheck settings — the VPP sync
  path reads cfg.Backends directly, so the dataplane moved on
  while the gRPC and SPA views stayed wedged on the old IPv4/IPv6
  address.
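
  A minimal sketch of the idea, with placeholder types (the real
  update-in-place branch lives in the checker's Reload and uses the
  config and health package structs):

      type cfgBackend struct{ Name, Address string }
      type rtBackend struct{ Address string /* plus health state, timers, ... */ }

      func mirrorAddresses(runtime map[string]*rtBackend, backends []cfgBackend) {
          for _, b := range backends {
              if rt, ok := runtime[b.Name]; ok {
                  // Without this, GetBackend keeps serving the pre-reload
                  // address even though the sync path already programs the
                  // new one straight from cfg.Backends.
                  rt.Address = b.Address
              }
          }
      }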

Sync (internal/vpp/lbsync.go):
- reconcileVIP now detects encap mismatch in addition to
  src-ip-sticky mismatch and takes the full tear-down / re-add
  path via a new shared recreateVIP helper. Triggered when every
  backend flips address family (gre4 <-> gre6) and the existing
  VIP can no longer accept new ASes — previously the sync wedged
  with 'Invalid address family' until a full maglevd restart (see
  the sketch after this list).
- setASWeight is issued whenever the state machine requests
  flush (a.Flush=true), not only on the weight-value transition
  edge. Fixes the case where a backend reached StateDisabled
  after its effective weight had already been drained to 0 by
  pool failover — the sticky-cache entries pointing at it were
  previously never cleared.
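
  The encap check from the first item above, as a self-contained
  sketch (placeholder types; only recreateVIP corresponds to a real
  helper):

      type vipState struct {
          Encap       string // "gre4" or "gre6"
          SrcIPSticky bool
      }

      // needsRecreate reports whether the programmed VIP has to be torn
      // down and re-added: neither the encap nor src-ip-stickiness of an
      // existing lb_vip can be changed in place.
      func needsRecreate(cur, want vipState) bool {
          return cur.Encap != want.Encap || cur.SrcIPSticky != want.SrcIPSticky
      }

  When it returns true the sync calls the shared recreateVIP helper
  instead of wedging on the add-AS calls.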

maglev-frontend:
- signal.Ignore(SIGHUP) so a controlling-terminal disconnect
  doesn't kill the daemon (see the sketch after this list).
- debian/vpp-maglev.service grants CAP_SYS_ADMIN in addition to
  CAP_NET_RAW so setns(CLONE_NEWNET) can join the healthcheck
  netns. Comment documents the 'operation not permitted' symptom
  and notes the knob can be dropped if the deployment doesn't use
  the 'netns:' healthcheck option.
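
  Both items above, as a standalone sketch (the netns path is
  illustrative; Setns is the call that fails with EPERM when the unit
  only has CAP_NET_RAW):

      package main

      import (
          "log"
          "os/signal"
          "runtime"
          "syscall"

          "golang.org/x/sys/unix"
      )

      func main() {
          // A controlling-terminal disconnect delivers SIGHUP; ignore it
          // so the daemon keeps running.
          signal.Ignore(syscall.SIGHUP)

          // setns(2) only affects the calling thread, so pin the goroutine.
          runtime.LockOSThread()
          defer runtime.UnlockOSThread()

          fd, err := unix.Open("/run/netns/healthcheck", unix.O_RDONLY, 0)
          if err != nil {
              log.Fatal(err)
          }
          defer unix.Close(fd)
          // Needs CAP_SYS_ADMIN; with CAP_NET_RAW alone this returns
          // "operation not permitted".
          if err := unix.Setns(fd, unix.CLONE_NEWNET); err != nil {
              log.Fatal(err)
          }
      }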

LB plugin counters (internal/vpp/lbstats.go + friends):
- Fix the VIP counter regex: the LB plugin registers
  vlib_simple_counter_main_t names without a leading '/'
  (vlib_validate_simple_counter in counter.c:50 uses cm->name
  verbatim; only entries that set cm->stat_segment_name get a
  slash). The first/next/untracked/no-server counters now read
  through as live values instead of zeros.
- Drop the per-backend FIB counter block end-to-end (proto,
  grpcapi, metrics, vpp.Client, lbstats, maglevc). Traced from
  lb/node.c:558 into ip{4,6}_forward.h:141 — the LB plugin
  forwards by writing adj_index[VLIB_TX] directly and bypassing
  ip{4,6}_lookup_inline, which is the only path that increments
  lbm_to_counters. The backend's FIB load_balance stats_index
  literally never ticks for LB-forwarded traffic, so the column
  was always zero and misleading. docs/implementation/TODO
  records the full investigation and the recommended upstream
  path (new lb_as_stats_dump API message) for when we're ready
  to carry that VPP patch.
- maglevc show vpp lb counters: plain-text tabular headers.
  label() wraps strings in ANSI escapes (~11 bytes of overhead),
  but tabwriter counts bytes, not rendered width — so a header
  row with label()'d cells above plain data rows skews the column
  alignment on every row. color.go's comment now spells
  out the constraint: label() only works when column N is
  wrapped identically in every row (key-value layouts are fine,
  multi-column tables with header-only labelling are not).
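
  A self-contained demonstration of the constraint (not maglevc code;
  plain bold is used here, 8 escape bytes per cell vs ~11 for label()):

      package main

      import (
          "fmt"
          "os"
          "text/tabwriter"
      )

      func bold(s string) string { return "\x1b[1m" + s + "\x1b[0m" }

      func main() {
          w := tabwriter.NewWriter(os.Stdout, 0, 8, 2, ' ', 0)
          // The header cells carry invisible escape characters that
          // tabwriter counts as cell width, so the plain data row below
          // no longer lines up with the padded header.
          fmt.Fprintln(w, bold("VIP")+"\t"+bold("FIRST")+"\t"+bold("NEXT"))
          fmt.Fprintln(w, "192.0.2.1\t12345\t67890")
          w.Flush()
      }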

SPA:
- stores/scope.ts is cookie-backed (maglev_scope, 1 year,
  SameSite=Lax). App.tsx hydrates from the cookie then validates
  against the fetched snapshots: a cookie referencing a maglevd
  that no longer exists falls through to snaps[0] instead of
  leaving the user on a ghost selection.
- components/Flash.tsx wraps props.value in createMemo. Solid's
  on() fires its callback on every dep notification, not on
  value change — source is right in solid-js/dist/solid.js:460,
  no equality check. Without the memo, flipping scope between
  two 'connected' maglevds (or any other cross-store reactive
  re-eval that doesn't actually change the concrete string)
  replays the animation every time. createMemo's default ===
  dedupe fixes it in one place for every Flash consumer,
  superseding the local createMemo workaround we'd added in
  BackendRow earlier.
2026-04-14 14:40:16 +02:00

// Copyright (c) 2026, Pim van Pelt <pim@ipng.ch>

// Package vpp manages the connection to a local VPP instance over its
// binary API and stats sockets. The Client reconnects automatically when
// VPP restarts.
package vpp

import (
	"context"
	"log/slog"
	"sync"
	"sync/atomic"
	"time"

	"go.fd.io/govpp/adapter"
	"go.fd.io/govpp/adapter/socketclient"
	"go.fd.io/govpp/adapter/statsclient"
	"go.fd.io/govpp/binapi/vpe"
	"go.fd.io/govpp/core"

	"git.ipng.ch/ipng/vpp-maglev/internal/config"
	"git.ipng.ch/ipng/vpp-maglev/internal/health"
	"git.ipng.ch/ipng/vpp-maglev/internal/metrics"
	lb "git.ipng.ch/ipng/vpp-maglev/internal/vpp/binapi/lb"
)

// StateSource provides a live view of the running config and the current
// health state of each backend. checker.Checker satisfies this interface via
// its Config() and BackendState() methods. Decoupling via an interface avoids
// an import cycle with the checker package.
type StateSource interface {
	Config() *config.Config
	BackendState(name string) (health.State, bool)
}

const (
	retryInterval         = 5 * time.Second
	pingInterval          = 10 * time.Second
	defaultLBSyncInterval = 30 * time.Second
)

// Info holds VPP version and connection metadata, populated on connect.
type Info struct {
	Version        string
	BuildDate      string
	BuildDirectory string
	PID            uint32
	BootTime       time.Time // when VPP started (from /sys/boottime stats counter)
	ConnectedSince time.Time // when maglevd connected to VPP
}

// Client manages connections to both the VPP API and stats sockets.
// Both connections are treated as a unit: if either drops, both are
// torn down and re-established together.
type Client struct {
	apiAddr   string
	statsAddr string

	mu          sync.Mutex
	apiConn     *core.Connection
	statsConn   *core.StatsConnection
	statsClient adapter.StatsAPI // raw adapter for DumpStats
	info        Info             // populated on successful connect
	stateSrc    StateSource      // optional; enables periodic LB sync
	lastLBConf  *lb.LbConf       // cached last-pushed lb_conf (dedup)

	// lbStatsSnap is the most recent per-VIP stats snapshot captured by
	// lbStatsLoop. Published as an immutable slice via atomic.Pointer so
	// Prometheus scrapes (metrics.Collector.Collect) don't take any lock.
	lbStatsSnap atomic.Pointer[[]metrics.VIPStatEntry]
}

// SetStateSource attaches a live config + health state source. When set, the
// VPP client runs a periodic SyncLBStateAll loop (at the interval from
// cfg.VPP.LB.SyncInterval) for as long as the VPP connection is up, and
// state-aware weights are used throughout the sync path. Must be called
// before Run.
func (c *Client) SetStateSource(src StateSource) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.stateSrc = src
}

// getStateSource returns the registered state source under the mutex.
func (c *Client) getStateSource() StateSource {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.stateSrc
}

// New creates a Client for the given socket paths.
func New(apiAddr, statsAddr string) *Client {
	return &Client{apiAddr: apiAddr, statsAddr: statsAddr}
}

// Run connects to VPP and maintains the connection until ctx is cancelled.
// If VPP is unavailable or restarts, Run reconnects automatically.
func (c *Client) Run(ctx context.Context) {
	for {
		if err := c.connect(); err != nil {
			slog.Debug("vpp-connect-failed", "err", err)
			select {
			case <-ctx.Done():
				return
			case <-time.After(retryInterval):
				continue
			}
		}

		// Fetch version info and record connect time.
		// fetchInfo uses NewAPIChannel and statsClient which both take c.mu,
		// so we must not hold c.mu here.
		info := c.fetchInfo()
		c.mu.Lock()
		c.info = info
		c.mu.Unlock()
		slog.Info("vpp-connect", "version", c.info.Version,
			"build-date", c.info.BuildDate,
			"pid", c.info.PID,
			"api", c.apiAddr, "stats", c.statsAddr)

		// Read the current LB plugin state so we can log what's programmed.
		if state, err := c.GetLBStateAll(); err != nil {
			slog.Warn("vpp-lb-read-failed", "err", err)
		} else {
			totalAS := 0
			for _, v := range state.VIPs {
				totalAS += len(v.ASes)
			}
			slog.Info("vpp-lb-state",
				"vips", len(state.VIPs),
				"application-servers", totalAS,
				"sticky-buckets-per-core", state.Conf.StickyBucketsPerCore,
				"flow-timeout", state.Conf.FlowTimeout)
		}

		// Push global LB conf (src addresses, buckets, timeout) from the
		// running config. On startup this is the initial set; on reconnect
		// (VPP restart) VPP has forgotten everything, so we set it again.
		c.mu.Lock()
		src := c.stateSrc
		c.mu.Unlock()
		if src != nil {
			if cfg := src.Config(); cfg != nil {
				if err := c.SetLBConf(cfg); err != nil {
					slog.Warn("vpp-lb-conf-set-failed", "err", err)
				}
			}
		}

		// Start the LB sync and stats loops for as long as the connection
		// is up. Both exit when connCtx is cancelled.
		connCtx, connCancel := context.WithCancel(ctx)
		go c.lbSyncLoop(connCtx)
		go c.lbStatsLoop(connCtx)

		// Hold the connection, pinging periodically to detect VPP restarts.
		c.monitor(ctx)
		connCancel()

		// If ctx is done we're shutting down; otherwise VPP dropped and we retry.
		c.disconnect()
		if ctx.Err() != nil {
			return
		}
		slog.Warn("vpp-disconnect", "msg", "connection lost, reconnecting")
	}
}

// lbSyncLoop periodically runs SyncLBStateAll to catch drift between the
// maglev config and the VPP dataplane. The first run happens immediately
// on loop start (VPP has just connected, so any pre-existing state needs
// reconciliation). Subsequent runs fire every cfg.VPP.LB.SyncInterval.
// Exits when ctx is cancelled.
func (c *Client) lbSyncLoop(ctx context.Context) {
	src := c.getStateSource()
	if src == nil {
		return // no state source registered; nothing to sync
	}
	// next-run timestamp starts at "now" so the first tick is immediate.
	next := time.Now()
	for {
		wait := time.Until(next)
		if wait < 0 {
			wait = 0
		}
		select {
		case <-ctx.Done():
			return
		case <-time.After(wait):
		}
		cfg := src.Config()
		if cfg == nil {
			next = time.Now().Add(defaultLBSyncInterval)
			continue
		}
		interval := cfg.VPP.LB.SyncInterval
		if interval <= 0 {
			interval = defaultLBSyncInterval
		}
		if err := c.SyncLBStateAll(cfg); err != nil {
			slog.Warn("vpp-lb-sync-error", "err", err)
		}
		next = time.Now().Add(interval)
	}
}

// IsConnected returns true if both API and stats connections are active.
func (c *Client) IsConnected() bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.apiConn != nil && c.statsConn != nil
}

// GetInfo returns the VPP version and connection metadata, or an error
// if VPP is not connected.
func (c *Client) GetInfo() (Info, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.apiConn == nil {
		return Info{}, errNotConnected
	}
	return c.info, nil
}

// VIPStats satisfies metrics.VPPSource. It returns the latest snapshot of
// per-VIP LB stats-segment counters captured by lbStatsLoop. Returns nil
// until the first scrape completes, or after a disconnect (the pointer is
// cleared when the connection drops).
func (c *Client) VIPStats() []metrics.VIPStatEntry {
	p := c.lbStatsSnap.Load()
	if p == nil {
		return nil
	}
	return *p
}

// VPPInfo satisfies metrics.VPPSource. It returns a copy of the cached
// connection info as a metrics-local struct so the metrics package doesn't
// need to import internal/vpp. Second return is false when VPP is not
// connected (the collector skips the vpp_* gauges in that case).
func (c *Client) VPPInfo() (metrics.VPPInfo, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.apiConn == nil {
		return metrics.VPPInfo{}, false
	}
	return metrics.VPPInfo{
		Version:        c.info.Version,
		BuildDate:      c.info.BuildDate,
		PID:            c.info.PID,
		BootTime:       c.info.BootTime,
		ConnectedSince: c.info.ConnectedSince,
	}, true
}

// connect establishes both API and stats connections. If either fails,
// both are torn down.
func (c *Client) connect() error {
	sc := socketclient.NewVppClient(c.apiAddr)
	sc.SetClientName("vpp-maglev")
	apiConn, err := core.Connect(sc)
	if err != nil {
		return err
	}
	stc := statsclient.NewStatsClient(c.statsAddr)
	statsConn, err := core.ConnectStats(stc)
	if err != nil {
		safeDisconnectAPI(apiConn)
		return err
	}
	c.mu.Lock()
	c.apiConn = apiConn
	c.statsConn = statsConn
	c.statsClient = stc
	c.mu.Unlock()
	return nil
}

// disconnect tears down both connections.
func (c *Client) disconnect() {
	c.mu.Lock()
	apiConn := c.apiConn
	statsConn := c.statsConn
	c.apiConn = nil
	c.statsConn = nil
	c.statsClient = nil
	c.info = Info{}
	c.lastLBConf = nil // force re-push of lb_conf on reconnect
	c.mu.Unlock()
	c.lbStatsSnap.Store(nil)
	safeDisconnectAPI(apiConn)
	safeDisconnectStats(statsConn)
}

// monitor blocks until the context is cancelled or a liveness ping fails.
func (c *Client) monitor(ctx context.Context) {
	ticker := time.NewTicker(pingInterval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			if !c.ping() {
				return
			}
		}
	}
}

// ping sends a control_ping to VPP and returns true if it succeeds.
func (c *Client) ping() bool {
	ch, err := c.apiChannel()
	if err != nil {
		return false
	}
	defer ch.Close()
	req := &core.ControlPing{}
	reply := &core.ControlPingReply{}
	if err := ch.SendRequest(req).ReceiveReply(reply); err != nil {
		slog.Debug("vpp-ping-failed", "err", err)
		return false
	}
	return true
}

// fetchInfo queries VPP for version info, PID, and boot time.
// Must be called after connect succeeds (apiConn and statsClient are set).
func (c *Client) fetchInfo() Info {
	info := Info{ConnectedSince: time.Now()}
	ch, err := c.apiChannel()
	if err != nil {
		return info
	}
	defer ch.Close()
	ver := &vpe.ShowVersionReply{}
	if err := ch.SendRequest(&vpe.ShowVersion{}).ReceiveReply(ver); err == nil {
		info.Version = ver.Version
		info.BuildDate = ver.BuildDate
		info.BuildDirectory = ver.BuildDirectory
	}
	ping := &core.ControlPingReply{}
	if err := ch.SendRequest(&core.ControlPing{}).ReceiveReply(ping); err == nil {
		info.PID = ping.VpePID
	}
	// Read VPP boot time from the stats segment.
	c.mu.Lock()
	sc := c.statsClient
	c.mu.Unlock()
	if sc != nil {
		if entries, err := sc.DumpStats("/sys/boottime"); err == nil {
			for _, e := range entries {
				if s, ok := e.Data.(adapter.ScalarStat); ok && s != 0 {
					info.BootTime = time.Unix(int64(s), 0)
				}
			}
		}
	}
	return info
}

// safeDisconnectAPI disconnects an API connection, recovering from any panic
// that GoVPP may raise on a stale connection.
func safeDisconnectAPI(conn *core.Connection) {
	if conn == nil {
		return
	}
	defer func() { recover() }() //nolint:errcheck
	conn.Disconnect()
}

// safeDisconnectStats disconnects a stats connection, recovering from panics.
func safeDisconnectStats(conn *core.StatsConnection) {
	if conn == nil {
		return
	}
	defer func() { recover() }() //nolint:errcheck
	conn.Disconnect()
}

type vppError struct{ msg string }

func (e *vppError) Error() string { return e.msg }

var errNotConnected = &vppError{msg: "VPP API connection not established"}