vpp-maglev/internal/metrics/metrics.go
Pim van Pelt 224167ce39 Dataplane reconcile fixes; LB counters cleanup; SPA scope cookie
Checker / reload:
- Reload's update-in-place branch now mirrors b.Address onto the
  runtime health.Backend. Without this, GetBackend kept returning
  the pre-reload address indefinitely after a config edit that
  touched addresses but not healthcheck settings — the VPP sync
  path reads cfg.Backends directly so the dataplane moved on
  while the gRPC and SPA views stayed wedged on the old IPv4/IPv6
  address.
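
  A minimal, self-contained sketch of the fix; the backend types
  below are simplified stand-ins for the internal/config and
  internal/health ones, and only the address mirror itself
  reflects this change:

    package main

    import "net/netip"

    // Simplified stand-ins; the real types live in
    // internal/config and internal/health.
    type cfgBackend struct{ Address netip.Addr }
    type rtBackend struct{ Address netip.Addr }

    func reloadInPlace(cfg map[string]cfgBackend, run map[string]*rtBackend) {
      for name, b := range cfg {
        if rt, ok := run[name]; ok {
          // Mirror the configured address onto the runtime
          // backend so GetBackend stops serving the
          // pre-reload address.
          rt.Address = b.Address
        }
      }
    }

    func main() {
      run := map[string]*rtBackend{
        "be1": {Address: netip.MustParseAddr("192.0.2.10")},
      }
      cfg := map[string]cfgBackend{
        "be1": {Address: netip.MustParseAddr("192.0.2.20")},
      }
      reloadInPlace(cfg, run)
      println(run["be1"].Address.String()) // 192.0.2.20
    }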

Sync (internal/vpp/lbsync.go):
- reconcileVIP now detects encap mismatch in addition to
  src-ip-sticky mismatch and takes the full tear-down / re-add
  path via a new shared recreateVIP helper (sketched after this
  list). Triggered when every backend flips address family
  (gre4 <-> gre6) and the existing VIP can no longer accept new
  ASes — previously the sync wedged with 'Invalid address family'
  until a full maglevd restart.
- setASWeight is issued whenever the state machine requests
  flush (a.Flush=true), not only on the weight-value transition
  edge. Fixes the case where a backend reached StateDisabled
  after its effective weight had already been drained to 0 by
  pool failover — the sticky-cache entries pointing at it were
  previously never cleared.
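
  A sketch of the encap branch described in the first bullet;
  vipState and the returned strings are illustrative stand-ins,
  not the real internal/vpp types:

    package main

    import "fmt"

    // An LB VIP's encap (gre4 vs gre6) is fixed at creation,
    // so a family flip cannot be applied in place.
    type vipState struct {
      Encap       string // "gre4" or "gre6"
      SrcIPSticky bool
    }

    // reconcileVIP sketches the decision only: both mismatches
    // now route through the shared recreate path.
    func reconcileVIP(have, want vipState) string {
      if have.Encap != want.Encap || have.SrcIPSticky != want.SrcIPSticky {
        return "recreateVIP: tear down, re-add VIP and ASes"
      }
      return "update in place"
    }

    func main() {
      fmt.Println(reconcileVIP(
        vipState{Encap: "gre4", SrcIPSticky: true},
        vipState{Encap: "gre6", SrcIPSticky: true},
      )) // recreateVIP: tear down, re-add VIP and ASes
    }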

maglev-frontend:
- signal.Ignore(SIGHUP) so a controlling-terminal disconnect
  doesn't kill the daemon (runnable sketch after this list).
- debian/vpp-maglev.service grants CAP_SYS_ADMIN in addition to
  CAP_NET_RAW so setns(CLONE_NEWNET) can join the healthcheck
  netns. Comment documents the 'operation not permitted' symptom
  and notes the knob can be dropped if the deployment doesn't use
  the 'netns:' healthcheck option.
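
  The SIGHUP change from the first bullet as a runnable sketch;
  signal.Ignore and syscall.SIGHUP are the actual standard-library
  calls, the main body is a stand-in:

    package main

    import (
      "os/signal"
      "syscall"
    )

    func main() {
      // Ignore SIGHUP so a controlling-terminal disconnect
      // doesn't kill the daemon.
      signal.Ignore(syscall.SIGHUP)

      select {} // stand-in for the daemon's main loop
    }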

LB plugin counters (internal/vpp/lbstats.go + friends):
- Fix the VIP counter regex: the LB plugin registers
  vlib_simple_counter_main_t names without a leading '/'
  (vlib_validate_simple_counter in counter.c:50 uses cm->name
  verbatim; only entries that set cm->stat_segment_name get a
  slash). first/next/untracked/no-server now read through as
  live values instead of zero.
- Drop the per-backend FIB counter block end-to-end (proto,
  grpcapi, metrics, vpp.Client, lbstats, maglevc). Traced from
  lb/node.c:558 into ip{4,6}_forward.h:141 — the LB plugin
  forwards by writing adj_index[VLIB_TX] directly and bypassing
  ip{4,6}_lookup_inline, which is the only path that increments
  lbm_to_counters. The backend's FIB load_balance stats_index
  literally never ticks for LB-forwarded traffic, so the column
  was always zero and misleading. docs/implementation/TODO
  records the full investigation and the recommended upstream
  path (new lb_as_stats_dump API message) for when we're ready
  to carry that VPP patch.
- maglevc show vpp lb counters: table headers are now plain text.
  label() wraps strings in ANSI escapes (~11 bytes of overhead),
  but tabwriter measures cells by the characters they contain,
  not their rendered width — so a header row with label()'d cells
  above data rows with plain cells throws column alignment off on
  every row (demonstrated below). color.go's comment now spells
  out the constraint: label() only works when column N is wrapped
  identically in every row (key-value layouts are fine,
  multi-column tables with header-only labelling are not).
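
  A standalone demonstration of that constraint; this label()
  mimics the color.go helper, whose exact escape sequence may
  differ:

    package main

    import (
      "fmt"
      "os"
      "text/tabwriter"
    )

    // label wraps s in ANSI bold escapes (illustrative).
    func label(s string) string { return "\x1b[1m" + s + "\x1b[0m" }

    func main() {
      w := tabwriter.NewWriter(os.Stdout, 0, 8, 2, ' ', 0)
      // tabwriter sizes the column from the characters it sees,
      // so the invisible escapes inflate the header cell and the
      // data row no longer lines up underneath it.
      fmt.Fprintf(w, "%s\t%s\n", label("VIP"), label("PACKETS"))
      fmt.Fprintf(w, "%s\t%d\n", "192.0.2.1/32", 12345)
      w.Flush()
    }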

SPA:
- stores/scope.ts is cookie-backed (maglev_scope, 1 year,
  SameSite=Lax). App.tsx hydrates from the cookie then validates
  against the fetched snapshots: a cookie referencing a maglevd
  that no longer exists falls through to snaps[0] instead of
  leaving the user on a ghost selection.
- components/Flash.tsx wraps props.value in createMemo. Solid's
  on() fires its callback on every dep notification, not on
  value change — source is right in solid-js/dist/solid.js:460,
  no equality check. Without the memo, flipping scope between
  two 'connected' maglevds (or any other cross-store reactive
  re-eval that doesn't actually change the concrete string)
  replays the animation every time. createMemo's default ===
  dedupe fixes it in one place for every Flash consumer,
  superseding the local createMemo workaround we'd added in
  BackendRow earlier.
2026-04-14 14:40:16 +02:00

// Copyright (c) 2026, Pim van Pelt <pim@ipng.ch>

// Package metrics exposes Prometheus metrics for maglevd.
//
// Gauge-type metrics (backend state, health counter, weights, VPP connection
// info) are collected on demand when Prometheus scrapes /metrics via the
// Collector. Counter and histogram metrics (probe totals, probe duration,
// transitions, VPP API calls, LB sync operations) are updated inline from
// the probe loop and VPP sync paths.
package metrics

import (
	"fmt"
	"time"

	"git.ipng.ch/ipng/vpp-maglev/internal/config"
	"git.ipng.ch/ipng/vpp-maglev/internal/health"
	"github.com/prometheus/client_golang/prometheus"
)

// BackendInfo holds the health and config state needed by the collector.
type BackendInfo struct {
	Health  *health.Backend
	Enabled bool
	HCName  string // healthcheck name from config
}

// StateSource provides read-only access to the running checker state.
type StateSource interface {
	ListBackends() []string
	GetBackendInfo(name string) (BackendInfo, bool)
	ListFrontends() []string
	GetFrontend(name string) (config.Frontend, bool)
}

// VPPInfo mirrors vpp.Info so the metrics package doesn't need to import
// internal/vpp (which would create an import cycle — vpp imports metrics
// to update counters inline).
type VPPInfo struct {
	Version        string
	BuildDate      string
	PID            uint32
	BootTime       time.Time
	ConnectedSince time.Time
}

// VIPStatEntry is a point-in-time snapshot of the per-VIP counters that
// VPP exposes via the stats segment: four SimpleCounters from the LB
// plugin (packets only) plus the FIB CombinedCounter at /net/route/to
// for the VIP's own host prefix (packets + bytes). Values are summed
// across worker threads. The labelling (prefix/protocol/port) matches
// the gRPC VPPLBVIP representation so a Prometheus time series
// corresponds 1:1 to a maglev frontend VIP.
type VIPStatEntry struct {
	Prefix   string // CIDR string, e.g. "192.0.2.1/32"
	Protocol string // "tcp", "udp", "any"
	Port     uint16

	// LB plugin SimpleCounters (packets only)
	NextPkt   uint64 // /packet from existing sessions
	FirstPkt  uint64 // /first session packet
	Untracked uint64 // /untracked packet
	NoServer  uint64 // /no server configured

	// FIB CombinedCounter from /net/route/to at the VIP prefix
	Packets uint64
	Bytes   uint64
}

// VPPSource provides read-only access to the VPP client's state. vpp.Client
// is adapted to this interface via a small shim in the collector so the
// metrics package stays decoupled from the vpp package's concrete types.
type VPPSource interface {
	IsConnected() bool
	VPPInfo() (VPPInfo, bool)

	// VIPStats returns the most recent snapshot of per-VIP stats-segment
	// counters, as captured by the LB stats loop. Returns nil when VPP is
	// disconnected or no scrape has happened yet.
	VIPStats() []VIPStatEntry
}

// ---- inline metrics (updated per probe) ------------------------------------

var (
	ProbeTotal = prometheus.NewCounterVec(prometheus.CounterOpts{
		Namespace: "maglev",
		Subsystem: "probe",
		Name:      "total",
		Help:      "Total number of health-check probes executed.",
	}, []string{"backend", "type", "result", "code"})

	ProbeDuration = prometheus.NewHistogramVec(prometheus.HistogramOpts{
		Namespace: "maglev",
		Subsystem: "probe",
		Name:      "duration_seconds",
		Help:      "Health-check probe duration in seconds.",
		Buckets:   []float64{.001, .0025, .005, .01, .025, .05, .1, .25, .5, 1, 2.5},
	}, []string{"backend", "type"})

	TransitionTotal = prometheus.NewCounterVec(prometheus.CounterOpts{
		Namespace: "maglev",
		Subsystem: "backend",
		Name:      "transitions_total",
		Help:      "Total number of backend state transitions.",
	}, []string{"backend", "from", "to"})

	// ---- VPP API counters ---------------------------------------------------

	VPPAPITotal = prometheus.NewCounterVec(prometheus.CounterOpts{
		Namespace: "maglev",
		Subsystem: "vpp_api",
		Name:      "total",
		Help:      "Total number of VPP binary-API messages sent to or received from VPP.",
	}, []string{"msg", "direction", "result"})

	// ---- LB sync counters ---------------------------------------------------

	// LBSyncTotal counts individual dataplane mutations performed by the
	// sync path. kind ∈ {vip_added, vip_removed, as_added, as_removed,
	// as_weight_updated}; scope ∈ {all, vip}.
	LBSyncTotal = prometheus.NewCounterVec(prometheus.CounterOpts{
		Namespace: "maglev",
		Subsystem: "vpp_lbsync",
		Name:      "total",
		Help:      "Total number of VPP load-balancer sync operations applied to the dataplane.",
	}, []string{"scope", "kind"})
)

// ---- collector (scraped on demand) -----------------------------------------

// Collector implements prometheus.Collector by querying the running checker
// on each scrape. This avoids stale label sets when backends are added or
// removed by a config reload.
type Collector struct {
	src StateSource
	vpp VPPSource // optional; nil when VPP integration is disabled

	backendState   *prometheus.Desc
	backendHealth  *prometheus.Desc
	backendEnabled *prometheus.Desc
	poolWeight     *prometheus.Desc

	vppConnected     *prometheus.Desc
	vppUptimeSeconds *prometheus.Desc
	vppConnectedFor  *prometheus.Desc
	vppInfo          *prometheus.Desc

	vipPackets   *prometheus.Desc // per-VIP LB counters from stats segment
	vipRoutePkts *prometheus.Desc // per-VIP FIB combined counter: packets
	vipRouteByts *prometheus.Desc // per-VIP FIB combined counter: bytes
}

// NewCollector creates a Collector backed by the given StateSource. vpp may
// be nil when VPP integration is disabled; in that case vpp_* metrics are
// simply not emitted.
func NewCollector(src StateSource, vpp VPPSource) *Collector {
	return &Collector{
		src: src,
		vpp: vpp,
		backendState: prometheus.NewDesc(
			"maglev_backend_state",
			"Current backend state (1 = active for the given state label).",
			[]string{"backend", "address", "healthcheck", "state"}, nil,
		),
		backendHealth: prometheus.NewDesc(
			"maglev_backend_health",
			"Current health counter value.",
			[]string{"backend"}, nil,
		),
		backendEnabled: prometheus.NewDesc(
			"maglev_backend_enabled",
			"Whether the backend is enabled (1) or disabled (0).",
			[]string{"backend"}, nil,
		),
		poolWeight: prometheus.NewDesc(
			"maglev_frontend_pool_backend_weight",
			"Configured weight of a backend in a frontend pool (0-100).",
			[]string{"frontend", "pool", "backend"}, nil,
		),
		vppConnected: prometheus.NewDesc(
			"maglev_vpp_connected",
			"Whether maglevd currently has an established connection to VPP (1) or not (0).",
			nil, nil,
		),
		vppUptimeSeconds: prometheus.NewDesc(
			"maglev_vpp_uptime_seconds",
			"Seconds since VPP started (from the /sys/boottime stats counter).",
			nil, nil,
		),
		vppConnectedFor: prometheus.NewDesc(
			"maglev_vpp_connected_seconds",
			"Seconds since maglevd established the current VPP connection.",
			nil, nil,
		),
		vppInfo: prometheus.NewDesc(
			"maglev_vpp_info",
			"Static VPP build information. Always 1; metadata is conveyed via labels.",
			[]string{"version", "build_date", "pid"}, nil,
		),
		vipPackets: prometheus.NewDesc(
			"maglev_vpp_vip_packets_total",
			"Per-VIP packet counters from the VPP LB plugin stats segment, summed across workers. kind ∈ {next, first, untracked, no_server}.",
			[]string{"prefix", "protocol", "port", "kind"}, nil,
		),
		vipRoutePkts: prometheus.NewDesc(
			"maglev_vpp_vip_route_packets_total",
			"Packets forwarded by VPP's FIB toward each VIP's host prefix (from /net/route/to), summed across workers.",
			[]string{"prefix", "protocol", "port"}, nil,
		),
		vipRouteByts: prometheus.NewDesc(
			"maglev_vpp_vip_route_bytes_total",
			"Bytes forwarded by VPP's FIB toward each VIP's host prefix (from /net/route/to), summed across workers.",
			[]string{"prefix", "protocol", "port"}, nil,
		),
	}
}

// Describe implements prometheus.Collector.
func (c *Collector) Describe(ch chan<- *prometheus.Desc) {
	ch <- c.backendState
	ch <- c.backendHealth
	ch <- c.backendEnabled
	ch <- c.poolWeight
	ch <- c.vppConnected
	ch <- c.vppUptimeSeconds
	ch <- c.vppConnectedFor
	ch <- c.vppInfo
	ch <- c.vipPackets
	ch <- c.vipRoutePkts
	ch <- c.vipRouteByts
}

// Collect implements prometheus.Collector.
func (c *Collector) Collect(ch chan<- prometheus.Metric) {
	states := []health.State{
		health.StateUnknown,
		health.StateUp,
		health.StateDown,
		health.StatePaused,
		health.StateDisabled,
		health.StateRemoved,
	}
	for _, name := range c.src.ListBackends() {
		info, ok := c.src.GetBackendInfo(name)
		if !ok {
			continue
		}
		addr := info.Health.Address.String()

		// One time-series per possible state; the current state is 1, rest 0.
		for _, s := range states {
			val := 0.0
			if info.Health.State == s {
				val = 1.0
			}
			ch <- prometheus.MustNewConstMetric(
				c.backendState, prometheus.GaugeValue, val,
				name, addr, info.HCName, s.String(),
			)
		}
		ch <- prometheus.MustNewConstMetric(
			c.backendHealth, prometheus.GaugeValue,
			float64(info.Health.Counter.Health), name,
		)

		enabled := 0.0
		if info.Enabled {
			enabled = 1.0
		}
		ch <- prometheus.MustNewConstMetric(
			c.backendEnabled, prometheus.GaugeValue, enabled, name,
		)
	}

	for _, feName := range c.src.ListFrontends() {
		fe, ok := c.src.GetFrontend(feName)
		if !ok {
			continue
		}
		for _, pool := range fe.Pools {
			for beName, pb := range pool.Backends {
				ch <- prometheus.MustNewConstMetric(
					c.poolWeight, prometheus.GaugeValue,
					float64(pb.Weight), feName, pool.Name, beName,
				)
			}
		}
	}

	// ---- VPP gauges -------------------------------------------------------

	if c.vpp == nil {
		return
	}
	connected := 0.0
	if c.vpp.IsConnected() {
		connected = 1.0
	}
	ch <- prometheus.MustNewConstMetric(c.vppConnected, prometheus.GaugeValue, connected)

	info, ok := c.vpp.VPPInfo()
	if !ok {
		return
	}
	if !info.BootTime.IsZero() {
		ch <- prometheus.MustNewConstMetric(
			c.vppUptimeSeconds, prometheus.GaugeValue,
			time.Since(info.BootTime).Seconds(),
		)
	}
	if !info.ConnectedSince.IsZero() {
		ch <- prometheus.MustNewConstMetric(
			c.vppConnectedFor, prometheus.GaugeValue,
			time.Since(info.ConnectedSince).Seconds(),
		)
	}
	ch <- prometheus.MustNewConstMetric(
		c.vppInfo, prometheus.GaugeValue, 1.0,
		info.Version, info.BuildDate, fmt.Sprintf("%d", info.PID),
	)

	// Per-VIP packet counters, read from the snapshot updated by the LB
	// stats loop in internal/vpp. CounterValue so rate()/increase() work
	// as expected; VPP counter resets (e.g. VIP recreate) are handled by
	// Prometheus's built-in counter-reset detection.
	//
	// No per-backend counters are exposed here: the LB plugin's
	// forwarding node sets adj_index[VLIB_TX] directly and bypasses
	// ip{4,6}_lookup_inline, which is the only path that increments
	// lbm_to_counters — so /net/route/to at the backend's stats_index
	// never ticks for LB-forwarded traffic. See the comment block in
	// internal/vpp/lbstats.go::scrapeLBStats for the full chain.
	for _, v := range c.vpp.VIPStats() {
		port := fmt.Sprintf("%d", v.Port)
		ch <- prometheus.MustNewConstMetric(c.vipPackets, prometheus.CounterValue, float64(v.NextPkt), v.Prefix, v.Protocol, port, "next")
		ch <- prometheus.MustNewConstMetric(c.vipPackets, prometheus.CounterValue, float64(v.FirstPkt), v.Prefix, v.Protocol, port, "first")
		ch <- prometheus.MustNewConstMetric(c.vipPackets, prometheus.CounterValue, float64(v.Untracked), v.Prefix, v.Protocol, port, "untracked")
		ch <- prometheus.MustNewConstMetric(c.vipPackets, prometheus.CounterValue, float64(v.NoServer), v.Prefix, v.Protocol, port, "no_server")
		ch <- prometheus.MustNewConstMetric(c.vipRoutePkts, prometheus.CounterValue, float64(v.Packets), v.Prefix, v.Protocol, port)
		ch <- prometheus.MustNewConstMetric(c.vipRouteByts, prometheus.CounterValue, float64(v.Bytes), v.Prefix, v.Protocol, port)
	}
}

// Register registers all metrics with the given registry. vpp may be nil
// to disable VPP-related metrics.
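//
// Typical wiring, as an illustrative sketch (the registry and HTTP handler
// live in the daemon, not in this package; checker and vppShim are
// placeholders):
//
//	reg := prometheus.NewRegistry()
//	metrics.Register(reg, checker, vppShim)
//	http.Handle("/metrics", promhttp.HandlerFor(reg, promhttp.HandlerOpts{}))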
func Register(reg prometheus.Registerer, src StateSource, vpp VPPSource) *Collector {
	coll := NewCollector(src, vpp)
	reg.MustRegister(coll)
	reg.MustRegister(ProbeTotal)
	reg.MustRegister(ProbeDuration)
	reg.MustRegister(TransitionTotal)
	reg.MustRegister(VPPAPITotal)
	reg.MustRegister(LBSyncTotal)
	return coll
}