vpp-maglev/internal/vpp/client.go
Pim van Pelt 0049c2ae73 VPP reconciler: event-driven sync, pool failover, bug fixes
This commit wires the checker's state machine through to the VPP dataplane:
every backend state transition flows through a single code path that
recomputes the effective per-backend weight (with pool failover) and pushes
the result to VPP. Along the way several latent bugs in the state machine
and the sync path were fixed.

internal/vpp/reconciler.go (new)
- New Reconciler type subscribes to checker.Checker events and, on every
  transition, calls Client.SyncLBStateVIP for the affected frontend. This
  is the ONLY place in the codebase where backend state changes cause VPP
  calls — the "single path" discipline requested during design.
- Defines an EventSource interface (checker.Checker satisfies it) so the
  dependency direction stays vpp → checker; the checker never imports vpp.
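The shape of that arrangement can be sketched as follows. This is a minimal, self-contained mock-up — the `Event` fields, the `Subscribe()` method name, and the `chanSource` helper are illustrative assumptions, not the project's actual definitions; `r.sync` stands in for `Client.SyncLBStateVIP`:

```go
package main

// Event is an illustrative mirror of a checker backend-state transition.
type Event struct {
	Frontend string
	Backend  string
	From, To string
}

// EventSource is the interface the reconciler depends on, so that the
// dependency direction stays vpp → checker and no import cycle forms.
type EventSource interface {
	Subscribe() <-chan Event
}

// chanSource is a trivial in-memory EventSource, handy for tests.
type chanSource chan Event

func (c chanSource) Subscribe() <-chan Event { return c }

// Reconciler funnels every transition through exactly one sync call —
// the "single path" discipline described above.
type Reconciler struct {
	src  EventSource
	sync func(frontend string) error
}

// Run drains events until done is closed, syncing the affected frontend
// on every transition.
func (r *Reconciler) Run(done <-chan struct{}) {
	ch := r.src.Subscribe()
	for {
		select {
		case <-done:
			return
		case ev := <-ch:
			_ = r.sync(ev.Frontend) // real code would log the error
		}
	}
}
```

The interface lives in the vpp package and the checker merely happens to satisfy it, which is the standard Go way to invert a dependency without an import cycle.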

internal/vpp/client.go
- Renamed ConfigSource → StateSource. The interface now has two methods:
  Config() and BackendState(name) — the reconciler and the desired-state
  builder both need live health state to compute effective weights.
- SetConfigSource → SetStateSource; internal cfgSrc field → stateSrc.
- New getStateSource() helper for internal locked access.
- lbSyncLoop still uses the state source for its periodic drift
  reconciliation; it's fully idempotent and runs the same code path as
  event-driven syncs.

internal/vpp/lbsync.go
- desiredAS grows a Flush bool so the mapping function can signal "on
  transition to weight 0, flush existing flow-table entries".
- asFromBackend is now the single source of truth for the state →
  (weight, flush) rule. Documented with a full truth table. Takes an
  activePool parameter so it can distinguish "up in active pool" from
  "up but standby".
- activePoolIndex(fe, states) implements priority failover: returns the
  index of the first pool containing any StateUp backend. pool[0] wins
  when at least one member is up; pool[1] takes over when pool[0] is
  empty; and so on. Defaults to 0 (unobservable, since all backends map
  to weight 0 when nothing is up).
- desiredFromFrontend snapshots backend states once, computes activePool,
  then walks every backend through asFromBackend. No more filtering on
  b.Enabled — disabled backends stay in the desired set so they keep
  their AS entry in VPP with weight=0. The previous filter caused delAS
  on disable, which destroyed the entry and broke enable afterwards.
- EffectiveWeights(fe, src) exported helper that returns the per-pool
  per-backend weight map for one frontend. Used by the gRPC GetFrontend
  handler and robot tests to observe failover without touching VPP.
- reconcileVIP computes flush at the weight-change call site:
    flush = desired.Flush && cur.Weight > 0 && desired.Weight == 0
  This ensures only the *transition* to disabled flushes sessions —
  steady-state syncs with already-zero weight skip the call entirely.
- setASWeight now plumbs IsFlush into lb_as_set_weight.
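The failover rule described above can be condensed into two pure functions. This is a sketch under stated assumptions — the `State` values are an illustrative subset of the health package's states, and `effectiveWeight` approximates the `asFromBackend` truth table rather than reproducing it:

```go
package main

// State is an illustrative subset of the health states named in this commit.
type State int

const (
	StateUnknown State = iota
	StateUp
	StateDown
	StateDisabled
)

// activePoolIndex returns the index of the first pool containing at least
// one StateUp backend: pool[0] wins when any member is up, pool[1] takes
// over when pool[0] has none, and so on. Defaults to 0 when nothing is up
// anywhere (unobservable, since every backend then maps to weight 0).
func activePoolIndex(pools [][]State) int {
	for i, pool := range pools {
		for _, st := range pool {
			if st == StateUp {
				return i
			}
		}
	}
	return 0
}

// effectiveWeight sketches the state → (weight, flush) rule: a backend keeps
// its configured weight only when it is up AND in the active pool; disabled
// backends stay in the desired set at weight 0 and request a flow-table
// flush; everything else (down, unknown, up-but-standby) gets weight 0.
func effectiveWeight(st State, inActivePool bool, cfgWeight uint32) (weight uint32, flush bool) {
	switch {
	case st == StateUp && inActivePool:
		return cfgWeight, false
	case st == StateDisabled:
		return 0, true
	default:
		return 0, false
	}
}
```

Keeping both rules as pure functions over a snapshot of states is what makes the table-driven tests below possible without a VPP instance.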

internal/vpp/lbsync_test.go (new)
- TestAsFromBackend: 15 cases locking down the truth table, including
  failover scenarios (up in standby pool, up promoted in pool[1]).
- TestActivePoolIndex: 8 cases covering pool[0]-has-up, pool[0]-all-down,
  all-disabled, all-paused, all-unknown, nothing-up-anywhere, and
  three-tier failover.
- TestDesiredFromFrontendFailover: 5 end-to-end scenarios wiring a fake
  StateSource through desiredFromFrontend and asserting the final
  per-IP weight map. Exercises the complete pipeline without VPP.

internal/checker/checker.go
- Added BackendState(name) (health.State, bool) — one-line method that
  satisfies vpp.StateSource. The checker is otherwise unchanged.
- EnableBackend rewritten to reuse the existing worker (parallel to
  ResumeBackend). The old code called startWorker which constructed a
  brand-new Backend via health.New, throwing away the transition
  history; the resulting 'backend-transition' log showed the bogus
  from=unknown,to=unknown. Now uses w.backend.Enable() to record a
  proper disabled→unknown transition and launches a fresh goroutine.
- Static (no-healthcheck) backends now fire their synthetic 'always up'
  pass on the first iteration of runProbe instead of sleeping 30s
  first. Previously static backends sat in StateUnknown for 30s after
  startup — useless for deterministic testing and surprising for
  operators. The fix is a simple first-iteration flag.

internal/health/state.go
- New Enable(maxHistory) method parallel to Disable. Transitions the
  backend from whatever state it's in (typically StateDisabled) to
  StateUnknown, resets the health counter to rise-1 so the expedited
  resolution kicks in on the first probe result, and emits a transition
  with code 'enabled'.
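A sketch of what such an Enable method looks like, assuming an illustrative mirror of the backend's fields (the real struct in internal/health differs; `rise`, `counter`, and `history` are stand-in names):

```go
package main

// State is an illustrative subset of the health states.
type State int

const (
	StateUnknown State = iota
	StateUp
	StateDisabled
)

// Transition records one state change with a reason code.
type Transition struct {
	From, To State
	Code     string
}

// Backend is a hypothetical mirror of the health state machine's fields.
type Backend struct {
	state   State
	rise    int // consecutive passes required to resolve to up
	counter int
	history []Transition
}

// Enable transitions from the current state (typically StateDisabled) to
// StateUnknown, seeds the counter at rise-1 so the very next passing probe
// resolves the backend, and records an 'enabled' transition, trimming the
// history to maxHistory entries.
func (b *Backend) Enable(maxHistory int) {
	t := Transition{From: b.state, To: StateUnknown, Code: "enabled"}
	b.state = StateUnknown
	b.counter = b.rise - 1
	b.history = append(b.history, t)
	if len(b.history) > maxHistory {
		b.history = b.history[len(b.history)-maxHistory:]
	}
}
```

Recording the transition before mutating state is what preserves the real `from` value, avoiding the bogus from=unknown,to=unknown log the old startWorker path produced.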

proto/maglev.proto
- PoolBackendInfo gains effective_weight: the state-aware weight that
  would be programmed into VPP (distinct from the configured weight in
  the YAML). Exposed via GetFrontend.

internal/grpcapi/server.go
- frontendToProto takes a vpp.StateSource, computes effective weights
  via vpp.EffectiveWeights, and populates PoolBackendInfo.EffectiveWeight.
- GetFrontend and SetFrontendPoolBackendWeight updated to pass the
  checker in.

cmd/maglevc/commands.go
- 'show frontends <name>' now renders every pool backend row as
    <name>  weight <cfg>  effective <eff>  [disabled]?
  so both values are always visible. The VPP-style key/value format
  avoids the ANSI-alignment pitfall we hit earlier and makes the output
  regex-parseable for robot tests.
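A minimal sketch of that row format — the helper name is illustrative, not the actual function in commands.go:

```go
package main

import "fmt"

// renderBackendRow formats one pool backend row in the key/value style
// described above: name, configured weight, effective weight, and an
// optional trailing "disabled" marker.
func renderBackendRow(name string, cfgWeight, effWeight uint32, disabled bool) string {
	s := fmt.Sprintf("%s  weight %d  effective %d", name, cfgWeight, effWeight)
	if disabled {
		s += "  disabled"
	}
	return s
}
```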

cmd/maglevd/main.go
- Construct and start the Reconciler alongside the VPP client. Two
  extra lines, no other changes to startup.

tests/01-maglevd/maglevd-lab/maglev.yaml
- Two new static backends (static-primary, static-fallback) and a new
  failover-vip frontend with one backend per pool. No healthcheck, so
  the state machine resolves them to 'up' immediately via the synthetic
  pass. Used by the failover robot tests.

tests/01-maglevd/01-healthcheck.robot
- Three new test cases exercising pool failover end-to-end:
  1. primary up, secondary standby (initial state)
  2. disable primary → fallback takes over (effective weight flips)
  3. enable primary → fallback steps back
  All run without VPP: they scrape 'maglevc show frontends <name>' and
  regex-match the effective weight in the output. Deterministic and
  fast (~2s total) because the static backends don't probe.
- Two helper keywords: Static Backend Should Be Up and
  Effective Weight Should Be.

Net result: 16/16 robot tests pass. Backend state transitions now
flow through a single documented path (checker event → reconciler →
SyncLBStateVIP → desiredFromFrontend → asFromBackend → reconcileVIP →
setASWeight), and the pool failover / enable-after-disable / static-
backend-startup bugs are all fixed.
2026-04-12 12:40:09 +02:00


// Copyright (c) 2026, Pim van Pelt <pim@ipng.ch>
// Package vpp manages the connection to a local VPP instance over its
// binary API and stats sockets. The Client reconnects automatically when
// VPP restarts.
package vpp
import (
	"context"
	"log/slog"
	"sync"
	"time"

	"go.fd.io/govpp/adapter"
	"go.fd.io/govpp/adapter/socketclient"
	"go.fd.io/govpp/adapter/statsclient"
	"go.fd.io/govpp/binapi/vpe"
	"go.fd.io/govpp/core"

	"git.ipng.ch/ipng/vpp-maglev/internal/config"
	"git.ipng.ch/ipng/vpp-maglev/internal/health"
	lb "git.ipng.ch/ipng/vpp-maglev/internal/vpp/binapi/lb"
)
// StateSource provides a live view of the running config and the current
// health state of each backend. checker.Checker satisfies this interface via
// its Config() and BackendState() methods. Decoupling via an interface avoids
// an import cycle with the checker package.
type StateSource interface {
	Config() *config.Config
	BackendState(name string) (health.State, bool)
}
const retryInterval = 5 * time.Second
const pingInterval = 10 * time.Second
const defaultLBSyncInterval = 30 * time.Second
// Info holds VPP version and connection metadata, populated on connect.
type Info struct {
	Version        string
	BuildDate      string
	BuildDirectory string
	PID            uint32
	BootTime       time.Time // when VPP started (from /sys/boottime stats counter)
	ConnectedSince time.Time // when maglevd connected to VPP
}
// Client manages connections to both the VPP API and stats sockets.
// Both connections are treated as a unit: if either drops, both are
// torn down and re-established together.
type Client struct {
	apiAddr   string
	statsAddr string

	mu          sync.Mutex
	apiConn     *core.Connection
	statsConn   *core.StatsConnection
	statsClient adapter.StatsAPI // raw adapter for DumpStats
	info        Info             // populated on successful connect
	stateSrc    StateSource      // optional; enables periodic LB sync
	lastLBConf  *lb.LbConf       // cached last-pushed lb_conf (dedup)
}
// SetStateSource attaches a live config + health state source. When set, the
// VPP client runs a periodic SyncLBStateAll loop (at the interval from
// cfg.VPP.LB.SyncInterval) for as long as the VPP connection is up, and
// state-aware weights are used throughout the sync path. Must be called
// before Run.
func (c *Client) SetStateSource(src StateSource) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.stateSrc = src
}

// getStateSource returns the registered state source under the mutex.
func (c *Client) getStateSource() StateSource {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.stateSrc
}
// New creates a Client for the given socket paths.
func New(apiAddr, statsAddr string) *Client {
	return &Client{apiAddr: apiAddr, statsAddr: statsAddr}
}
// Run connects to VPP and maintains the connection until ctx is cancelled.
// If VPP is unavailable or restarts, Run reconnects automatically.
func (c *Client) Run(ctx context.Context) {
	for {
		if err := c.connect(); err != nil {
			slog.Debug("vpp-connect-failed", "err", err)
			select {
			case <-ctx.Done():
				return
			case <-time.After(retryInterval):
				continue
			}
		}

		// Fetch version info and record connect time.
		// fetchInfo uses NewAPIChannel and statsClient which both take c.mu,
		// so we must not hold c.mu here.
		info := c.fetchInfo()
		c.mu.Lock()
		c.info = info
		c.mu.Unlock()
		slog.Info("vpp-connect", "version", c.info.Version,
			"build-date", c.info.BuildDate,
			"pid", c.info.PID,
			"api", c.apiAddr, "stats", c.statsAddr)

		// Read the current LB plugin state so we can log what's programmed.
		if state, err := c.GetLBStateAll(); err != nil {
			slog.Warn("vpp-lb-read-failed", "err", err)
		} else {
			totalAS := 0
			for _, v := range state.VIPs {
				totalAS += len(v.ASes)
			}
			slog.Info("vpp-lb-state",
				"vips", len(state.VIPs),
				"application-servers", totalAS,
				"sticky-buckets-per-core", state.Conf.StickyBucketsPerCore,
				"flow-timeout", state.Conf.FlowTimeout)
		}

		// Push global LB conf (src addresses, buckets, timeout) from the
		// running config. On startup this is the initial set; on reconnect
		// (VPP restart) VPP has forgotten everything, so we set it again.
		c.mu.Lock()
		src := c.stateSrc
		c.mu.Unlock()
		if src != nil {
			if cfg := src.Config(); cfg != nil {
				if err := c.SetLBConf(cfg); err != nil {
					slog.Warn("vpp-lb-conf-set-failed", "err", err)
				}
			}
		}

		// Start the LB sync loop for as long as the connection is up.
		// It exits when connCtx is cancelled (on disconnect or shutdown).
		connCtx, connCancel := context.WithCancel(ctx)
		go c.lbSyncLoop(connCtx)

		// Hold the connection, pinging periodically to detect VPP restarts.
		c.monitor(ctx)
		connCancel()

		// If ctx is done we're shutting down; otherwise VPP dropped and we retry.
		c.disconnect()
		if ctx.Err() != nil {
			return
		}
		slog.Warn("vpp-disconnect", "msg", "connection lost, reconnecting")
	}
}
// lbSyncLoop periodically runs SyncLBStateAll to catch drift between the
// maglev config and the VPP dataplane. The first run happens immediately
// on loop start (VPP has just connected, so any pre-existing state needs
// reconciliation). Subsequent runs fire every cfg.VPP.LB.SyncInterval.
// Exits when ctx is cancelled.
func (c *Client) lbSyncLoop(ctx context.Context) {
	src := c.getStateSource()
	if src == nil {
		return // no state source registered; nothing to sync
	}
	// next-run timestamp starts at "now" so the first tick is immediate.
	next := time.Now()
	for {
		wait := time.Until(next)
		if wait < 0 {
			wait = 0
		}
		select {
		case <-ctx.Done():
			return
		case <-time.After(wait):
		}
		cfg := src.Config()
		if cfg == nil {
			next = time.Now().Add(defaultLBSyncInterval)
			continue
		}
		interval := cfg.VPP.LB.SyncInterval
		if interval <= 0 {
			interval = defaultLBSyncInterval
		}
		if err := c.SyncLBStateAll(cfg); err != nil {
			slog.Warn("vpp-lbsync-error", "err", err)
		}
		next = time.Now().Add(interval)
	}
}
// IsConnected returns true if both API and stats connections are active.
func (c *Client) IsConnected() bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.apiConn != nil && c.statsConn != nil
}
// GetInfo returns the VPP version and connection metadata, or an error
// if VPP is not connected.
func (c *Client) GetInfo() (Info, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.apiConn == nil {
		return Info{}, errNotConnected
	}
	return c.info, nil
}
// connect establishes both API and stats connections. If either fails,
// both are torn down.
func (c *Client) connect() error {
	sc := socketclient.NewVppClient(c.apiAddr)
	sc.SetClientName("vpp-maglev")
	apiConn, err := core.Connect(sc)
	if err != nil {
		return err
	}
	stc := statsclient.NewStatsClient(c.statsAddr)
	statsConn, err := core.ConnectStats(stc)
	if err != nil {
		safeDisconnectAPI(apiConn)
		return err
	}
	c.mu.Lock()
	c.apiConn = apiConn
	c.statsConn = statsConn
	c.statsClient = stc
	c.mu.Unlock()
	return nil
}
// disconnect tears down both connections.
func (c *Client) disconnect() {
	c.mu.Lock()
	apiConn := c.apiConn
	statsConn := c.statsConn
	c.apiConn = nil
	c.statsConn = nil
	c.statsClient = nil
	c.info = Info{}
	c.lastLBConf = nil // force re-push of lb_conf on reconnect
	c.mu.Unlock()
	safeDisconnectAPI(apiConn)
	safeDisconnectStats(statsConn)
}
// monitor blocks until the context is cancelled or a liveness ping fails.
func (c *Client) monitor(ctx context.Context) {
	ticker := time.NewTicker(pingInterval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			if !c.ping() {
				return
			}
		}
	}
}
// ping sends a control_ping to VPP and returns true if it succeeds.
func (c *Client) ping() bool {
	ch, err := c.apiChannel()
	if err != nil {
		return false
	}
	defer ch.Close()
	req := &core.ControlPing{}
	reply := &core.ControlPingReply{}
	if err := ch.SendRequest(req).ReceiveReply(reply); err != nil {
		slog.Debug("vpp-ping-failed", "err", err)
		return false
	}
	return true
}
// fetchInfo queries VPP for version info, PID, and boot time.
// Must be called after connect succeeds (apiConn and statsClient are set).
func (c *Client) fetchInfo() Info {
	info := Info{ConnectedSince: time.Now()}
	ch, err := c.apiChannel()
	if err != nil {
		return info
	}
	defer ch.Close()

	ver := &vpe.ShowVersionReply{}
	if err := ch.SendRequest(&vpe.ShowVersion{}).ReceiveReply(ver); err == nil {
		info.Version = ver.Version
		info.BuildDate = ver.BuildDate
		info.BuildDirectory = ver.BuildDirectory
	}

	ping := &core.ControlPingReply{}
	if err := ch.SendRequest(&core.ControlPing{}).ReceiveReply(ping); err == nil {
		info.PID = ping.VpePID
	}

	// Read VPP boot time from the stats segment.
	c.mu.Lock()
	sc := c.statsClient
	c.mu.Unlock()
	if sc != nil {
		if entries, err := sc.DumpStats("/sys/boottime"); err == nil {
			for _, e := range entries {
				if s, ok := e.Data.(adapter.ScalarStat); ok && s != 0 {
					info.BootTime = time.Unix(int64(s), 0)
				}
			}
		}
	}
	return info
}
// safeDisconnectAPI disconnects an API connection, recovering from any panic
// that GoVPP may raise on a stale connection.
func safeDisconnectAPI(conn *core.Connection) {
	if conn == nil {
		return
	}
	defer func() { recover() }() //nolint:errcheck
	conn.Disconnect()
}

// safeDisconnectStats disconnects a stats connection, recovering from panics.
func safeDisconnectStats(conn *core.StatsConnection) {
	if conn == nil {
		return
	}
	defer func() { recover() }() //nolint:errcheck
	conn.Disconnect()
}
type vppError struct{ msg string }

func (e *vppError) Error() string { return e.msg }

var errNotConnected = &vppError{msg: "VPP API connection not established"}