vpp-maglev/internal/metrics/metrics.go
Pim van Pelt fb62532fd5 VPP LB counters, src-ip-sticky, and frontend state aggregation
New feature: per-VIP / per-backend runtime counters
  * New GetVPPLBCounters RPC serving an in-process snapshot refreshed
    by a 5s scrape loop (internal/vpp/lbstats.go). Each cycle pulls
    the LB plugin's four SimpleCounters (next, first, untracked,
    no-server) plus the FIB /net/route/to CombinedCounter for every
    VIP and every backend host prefix via a single DumpStats call.
  * FIB stats-index discovery via ip_route_lookup (internal/vpp/
    fibstats.go); per-worker reduction happens in the collector.
  * Prometheus collector exports vip_packets_total (kind label),
    vip_route_{packets,bytes}_total, and backend_route_{packets,
    bytes}_total. Metrics source interface extended with VIPStats /
    BackendRouteStats; vpp.Client publishes snapshots via
    atomic.Pointer and clears them on disconnect (see the sketch
    after this list).
  * New 'show vpp lb counters' CLI command. The 'show vpp lbstate'
    and 'sync vpp lbstate' commands are restructured under 'show
    vpp lb {state,counters}' / 'sync vpp lb state' to make room
    for the new verb.
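
  A minimal sketch of the snapshot-publication pattern described above
  (everything except atomic.Pointer is illustrative, not the actual
  internal/vpp code; imports of context, sync/atomic, and time assumed):

      // The 5s loop publishes an immutable snapshot; the Prometheus
      // collector only ever Loads, so readers never block the scraper.
      type lbSnapshot struct {
          VIPs     []VIPStatEntry
          Backends []BackendRouteStat
      }

      var latest atomic.Pointer[lbSnapshot]

      func lbStatsLoop(ctx context.Context, scrape func() (*lbSnapshot, error)) {
          t := time.NewTicker(5 * time.Second)
          defer t.Stop()
          for {
              select {
              case <-ctx.Done():
                  latest.Store(nil) // cleared on disconnect
                  return
              case <-t.C:
                  if s, err := scrape(); err == nil {
                      latest.Store(s)
                  }
              }
          }
      }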

New feature: src-ip-sticky frontends
  * New frontend YAML key 'src-ip-sticky' (bool). Plumbed through
    config.Frontend, desiredVIP, and the lb_add_del_vip_v2 call;
    a field sketch follows this list.
  * Reflected in gRPC FrontendInfo.src_ip_sticky and VPPLBVIP.
    src_ip_sticky, and shown in 'show vpp lb state' output.
  * Scraped back from VPP by parsing 'show lb vips verbose' through
    cli_inband — lb_vip_details does not expose the flag. The same
    scrape also recovers the LB pool index for each VIP, which the
    stats-segment counters are keyed on. This is a documented
    temporary workaround until VPP ships an lb_vip_v2_dump.
  * src_ip_sticky cannot be mutated on a live VIP, so a flipped flag
    triggers a tear-down-and-recreate in reconcileVIP (ASes deleted
    with flush, VIP deleted, then re-added). Flip is logged.
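
  A sketch of the config plumbing (the surrounding struct and the Go
  field name are assumed; only the 'src-ip-sticky' key itself comes
  from this change):

      // In internal/config, the frontend grows one boolean key:
      type Frontend struct {
          // ... existing frontend fields ...
          SrcIPSticky bool `yaml:"src-ip-sticky"`
      }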

New feature: frontend state aggregation and events
  * New health.FrontendState (unknown/up/down) and FrontendTransition
    types. A frontend is 'up' iff at least one backend has a nonzero
    effective weight, 'unknown' iff no backend has real state yet,
    and 'down' otherwise (see the sketch after this list).
  * Checker tracks per-frontend aggregate state, recomputing after
    each backend transition and emitting a frontend-transition Event
    on change. Reload drops entries for removed frontends.
  * checker.Event gains an optional FrontendTransition pointer;
    backend- vs. frontend-transition events are demultiplexed on
    that field.
  * WatchEvents now sends an initial snapshot of frontend state on
    connect (mirroring the existing backend snapshot), subscribes
    once to the checker stream, and fans out to backend/frontend
    handlers based on the client's filter flags. The proto
    FrontendEvent message grows name + transition fields.
  * New Checker.FrontendState accessor.
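
  A minimal sketch of the aggregation rule (constant names and the
  signature are assumed; the real helper is health.ComputeFrontendState
  in internal/health/weights.go):

      // ComputeFrontendState folds per-backend effective weights into
      // a single frontend state, per the rule above.
      func ComputeFrontendState(weights []uint8, hasRealState []bool) FrontendState {
          anyKnown := false
          for i, w := range weights {
              if w > 0 {
                  return FrontendUp // at least one usable backend
              }
              if hasRealState[i] {
                  anyKnown = true
              }
          }
          if !anyKnown {
              return FrontendUnknown // no backend has reported yet
          }
          return FrontendDown
      }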

Refactor: pure health helpers
  * Moved the priority-failover selector and the (pool idx, active
    pool, state, cfg weight) → (vpp weight, flush) mapping out of
    internal/vpp/lbsync.go into a new internal/health/weights.go so
    the checker can reuse them for frontend-state computation
    without importing internal/vpp.
  * New functions: health.ActivePoolIndex, BackendEffectiveWeight,
    EffectiveWeights, ComputeFrontendState. lbsync.go now calls
    these directly; vpp.EffectiveWeights is a thin wrapper over
    health.EffectiveWeights retained for the gRPC observability
    path. Fully unit-tested in internal/health/weights_test.go.

maglevc polish
  * --color default is now mode-aware: on in the interactive shell,
    off in one-shot mode so piped output is script-safe. Explicit
    --color=true/false still overrides.
  * New stripHostMask helper drops /32 and /128 from VIP display;
    non-host prefixes pass through unchanged (sketched after this
    list).
  * Counter table column order fixed (first before next) and
    packets/bytes columns renamed to fib-packets/fib-bytes to
    clarify they come from the FIB, not the LB plugin.
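
  A sketch of the helper's behavior (assumed implementation on top of
  net/netip; the real code may differ):

      // stripHostMask drops the mask from host routes (/32 for IPv4,
      // /128 for IPv6) and leaves every other prefix untouched.
      func stripHostMask(prefix string) string {
          p, err := netip.ParsePrefix(prefix)
          if err != nil {
              return prefix // not CIDR-shaped; display as-is
          }
          if p.IsSingleIP() {
              return p.Addr().String()
          }
          return prefix
      }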

Docs
  * config-guide: document src-ip-sticky, including the VIP
    recreate-on-change caveat.
  * user-guide, maglevc.1, maglevd.8: updated command tree, new
    counters command, color defaults, and the src-ip-sticky field.
2026-04-12 16:07:39 +02:00

// Copyright (c) 2026, Pim van Pelt <pim@ipng.ch>

// Package metrics exposes Prometheus metrics for maglevd.
//
// Gauge-type metrics (backend state, health counter, weights, VPP connection
// info) are collected on demand when Prometheus scrapes /metrics via the
// Collector. Counter and histogram metrics (probe totals, probe duration,
// transitions, VPP API calls, LB sync operations) are updated inline from
// the probe loop and VPP sync paths.
package metrics

import (
	"fmt"
	"time"

	"git.ipng.ch/ipng/vpp-maglev/internal/config"
	"git.ipng.ch/ipng/vpp-maglev/internal/health"
	"github.com/prometheus/client_golang/prometheus"
)

// BackendInfo holds the health and config state needed by the collector.
type BackendInfo struct {
	Health  *health.Backend
	Enabled bool
	HCName  string // healthcheck name from config
}

// StateSource provides read-only access to the running checker state.
type StateSource interface {
	ListBackends() []string
	GetBackendInfo(name string) (BackendInfo, bool)
	ListFrontends() []string
	GetFrontend(name string) (config.Frontend, bool)
}

// VPPInfo mirrors vpp.Info so the metrics package doesn't need to import
// internal/vpp (which would create an import cycle — vpp imports metrics
// to update counters inline).
type VPPInfo struct {
	Version        string
	BuildDate      string
	PID            uint32
	BootTime       time.Time
	ConnectedSince time.Time
}

// VIPStatEntry is a point-in-time snapshot of the per-VIP counters that
// VPP exposes via the stats segment: four SimpleCounters from the LB
// plugin (packets only) plus the FIB CombinedCounter at /net/route/to
// for the VIP's own host prefix (packets + bytes). Values are summed
// across worker threads. The labelling (prefix/protocol/port) matches
// the gRPC VPPLBVIP representation so a Prometheus time series
// corresponds 1:1 to a maglev frontend VIP.
type VIPStatEntry struct {
	Prefix   string // CIDR string, e.g. "192.0.2.1/32"
	Protocol string // "tcp", "udp", "any"
	Port     uint16

	// LB plugin SimpleCounters (packets only)
	NextPkt   uint64 // /packet from existing sessions
	FirstPkt  uint64 // /first session packet
	Untracked uint64 // /untracked packet
	NoServer  uint64 // /no server configured

	// FIB CombinedCounter from /net/route/to at the VIP prefix
	Packets uint64
	Bytes   uint64
}

// BackendRouteStat is a point-in-time snapshot of the FIB combined counter
// (/net/route/to) for a single backend's host prefix. Values are summed
// across worker threads. Labels match the backend's identity so a time
// series corresponds 1:1 to a maglev backend entry.
type BackendRouteStat struct {
	Backend string // backend name from the config
	Address string // backend IP address as a string (e.g. "192.0.2.10")
	Packets uint64
	Bytes   uint64
}

// VPPSource provides read-only access to the VPP client's state. vpp.Client
// is adapted to this interface via a small shim in the collector so the
// metrics package stays decoupled from the vpp package's concrete types.
type VPPSource interface {
	IsConnected() bool
	VPPInfo() (VPPInfo, bool)

	// VIPStats returns the most recent snapshot of per-VIP stats-segment
	// counters, as captured by the LB stats loop. Returns nil when VPP is
	// disconnected or no scrape has happened yet.
	VIPStats() []VIPStatEntry

	// BackendRouteStats returns the most recent snapshot of per-backend
	// FIB combined counters (/net/route/to), as captured by the LB stats
	// loop. Returns nil when VPP is disconnected, no scrape has happened
	// yet, or the route lookup for every backend failed.
	BackendRouteStats() []BackendRouteStat
}
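
// An illustrative shim of the kind mentioned above. The method names on
// the concrete *vpp.Client are assumptions, not its real API:
//
//	type vppShim struct{ c *vpp.Client }
//
//	func (s vppShim) IsConnected() bool                     { return s.c.IsConnected() }
//	func (s vppShim) VPPInfo() (VPPInfo, bool)              { return s.c.Info() }
//	func (s vppShim) VIPStats() []VIPStatEntry              { return s.c.VIPStats() }
//	func (s vppShim) BackendRouteStats() []BackendRouteStat { return s.c.BackendRouteStats() }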

// ---- inline metrics (updated per probe) ------------------------------------

var (
	ProbeTotal = prometheus.NewCounterVec(prometheus.CounterOpts{
		Namespace: "maglev",
		Subsystem: "probe",
		Name:      "total",
		Help:      "Total number of health-check probes executed.",
	}, []string{"backend", "type", "result", "code"})

	ProbeDuration = prometheus.NewHistogramVec(prometheus.HistogramOpts{
		Namespace: "maglev",
		Subsystem: "probe",
		Name:      "duration_seconds",
		Help:      "Health-check probe duration in seconds.",
		Buckets:   []float64{.001, .0025, .005, .01, .025, .05, .1, .25, .5, 1, 2.5},
	}, []string{"backend", "type"})

	TransitionTotal = prometheus.NewCounterVec(prometheus.CounterOpts{
		Namespace: "maglev",
		Subsystem: "backend",
		Name:      "transitions_total",
		Help:      "Total number of backend state transitions.",
	}, []string{"backend", "from", "to"})

	// ---- VPP API counters ---------------------------------------------------

	VPPAPITotal = prometheus.NewCounterVec(prometheus.CounterOpts{
		Namespace: "maglev",
		Subsystem: "vpp_api",
		Name:      "total",
		Help:      "Total number of VPP binary-API messages sent to or received from VPP.",
	}, []string{"msg", "direction", "result"})

	// ---- LB sync counters ---------------------------------------------------

	// LBSyncTotal counts individual dataplane mutations performed by the
	// sync path. kind ∈ {vip_added, vip_removed, as_added, as_removed,
	// as_weight_updated}; scope ∈ {all, vip}.
	LBSyncTotal = prometheus.NewCounterVec(prometheus.CounterOpts{
		Namespace: "maglev",
		Subsystem: "vpp_lbsync",
		Name:      "total",
		Help:      "Total number of VPP load-balancer sync operations applied to the dataplane.",
	}, []string{"scope", "kind"})
)
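
// Illustrative inline usage from the probe path (label values are made
// up; real call sites live in the probe loop and VPP sync code):
//
//	timer := prometheus.NewTimer(ProbeDuration.WithLabelValues("be1", "http"))
//	// ... execute the probe ...
//	timer.ObserveDuration()
//	ProbeTotal.WithLabelValues("be1", "http", "ok", "200").Inc()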

// ---- collector (scraped on demand) -----------------------------------------

// Collector implements prometheus.Collector by querying the running checker
// on each scrape. This avoids stale label sets when backends are added or
// removed by a config reload.
type Collector struct {
	src StateSource
	vpp VPPSource // optional; nil when VPP integration is disabled

	backendState     *prometheus.Desc
	backendHealth    *prometheus.Desc
	backendEnabled   *prometheus.Desc
	poolWeight       *prometheus.Desc
	vppConnected     *prometheus.Desc
	vppUptimeSeconds *prometheus.Desc
	vppConnectedFor  *prometheus.Desc
	vppInfo          *prometheus.Desc
	vipPackets       *prometheus.Desc // per-VIP LB counters from stats segment
	vipRoutePkts     *prometheus.Desc // per-VIP FIB combined counter: packets
	vipRouteByts     *prometheus.Desc // per-VIP FIB combined counter: bytes
	backendRoutePkts *prometheus.Desc // per-backend FIB combined counter: packets
	backendRouteByts *prometheus.Desc // per-backend FIB combined counter: bytes
}

// NewCollector creates a Collector backed by the given StateSource. vpp may
// be nil when VPP integration is disabled; in that case vpp_* metrics are
// simply not emitted.
func NewCollector(src StateSource, vpp VPPSource) *Collector {
	return &Collector{
		src: src,
		vpp: vpp,
		backendState: prometheus.NewDesc(
			"maglev_backend_state",
			"Current backend state (1 = active for the given state label).",
			[]string{"backend", "address", "healthcheck", "state"}, nil,
		),
		backendHealth: prometheus.NewDesc(
			"maglev_backend_health",
			"Current health counter value.",
			[]string{"backend"}, nil,
		),
		backendEnabled: prometheus.NewDesc(
			"maglev_backend_enabled",
			"Whether the backend is enabled (1) or disabled (0).",
			[]string{"backend"}, nil,
		),
		poolWeight: prometheus.NewDesc(
			"maglev_frontend_pool_backend_weight",
			"Configured weight of a backend in a frontend pool (0-100).",
			[]string{"frontend", "pool", "backend"}, nil,
		),
		vppConnected: prometheus.NewDesc(
			"maglev_vpp_connected",
			"Whether maglevd currently has an established connection to VPP (1) or not (0).",
			nil, nil,
		),
		vppUptimeSeconds: prometheus.NewDesc(
			"maglev_vpp_uptime_seconds",
			"Seconds since VPP started (from the /sys/boottime stats counter).",
			nil, nil,
		),
		vppConnectedFor: prometheus.NewDesc(
			"maglev_vpp_connected_seconds",
			"Seconds since maglevd established the current VPP connection.",
			nil, nil,
		),
		vppInfo: prometheus.NewDesc(
			"maglev_vpp_info",
			"Static VPP build information. Always 1; metadata is conveyed via labels.",
			[]string{"version", "build_date", "pid"}, nil,
		),
		vipPackets: prometheus.NewDesc(
			"maglev_vpp_vip_packets_total",
			"Per-VIP packet counters from the VPP LB plugin stats segment, summed across workers. kind ∈ {next, first, untracked, no_server}.",
			[]string{"prefix", "protocol", "port", "kind"}, nil,
		),
		vipRoutePkts: prometheus.NewDesc(
			"maglev_vpp_vip_route_packets_total",
			"Packets forwarded by VPP's FIB toward each VIP's host prefix (from /net/route/to), summed across workers.",
			[]string{"prefix", "protocol", "port"}, nil,
		),
		vipRouteByts: prometheus.NewDesc(
			"maglev_vpp_vip_route_bytes_total",
			"Bytes forwarded by VPP's FIB toward each VIP's host prefix (from /net/route/to), summed across workers.",
			[]string{"prefix", "protocol", "port"}, nil,
		),
		backendRoutePkts: prometheus.NewDesc(
			"maglev_vpp_backend_route_packets_total",
			"Packets forwarded by VPP's FIB toward each backend's host prefix (from /net/route/to), summed across workers.",
			[]string{"backend", "address"}, nil,
		),
		backendRouteByts: prometheus.NewDesc(
			"maglev_vpp_backend_route_bytes_total",
			"Bytes forwarded by VPP's FIB toward each backend's host prefix (from /net/route/to), summed across workers.",
			[]string{"backend", "address"}, nil,
		),
	}
}

// Describe implements prometheus.Collector.
func (c *Collector) Describe(ch chan<- *prometheus.Desc) {
	ch <- c.backendState
	ch <- c.backendHealth
	ch <- c.backendEnabled
	ch <- c.poolWeight
	ch <- c.vppConnected
	ch <- c.vppUptimeSeconds
	ch <- c.vppConnectedFor
	ch <- c.vppInfo
	ch <- c.vipPackets
	ch <- c.vipRoutePkts
	ch <- c.vipRouteByts
	ch <- c.backendRoutePkts
	ch <- c.backendRouteByts
}

// Collect implements prometheus.Collector.
func (c *Collector) Collect(ch chan<- prometheus.Metric) {
	states := []health.State{
		health.StateUnknown,
		health.StateUp,
		health.StateDown,
		health.StatePaused,
		health.StateDisabled,
		health.StateRemoved,
	}
	for _, name := range c.src.ListBackends() {
		info, ok := c.src.GetBackendInfo(name)
		if !ok {
			continue
		}
		addr := info.Health.Address.String()

		// One time-series per possible state; the current state is 1, rest 0.
		for _, s := range states {
			val := 0.0
			if info.Health.State == s {
				val = 1.0
			}
			ch <- prometheus.MustNewConstMetric(
				c.backendState, prometheus.GaugeValue, val,
				name, addr, info.HCName, s.String(),
			)
		}
		ch <- prometheus.MustNewConstMetric(
			c.backendHealth, prometheus.GaugeValue,
			float64(info.Health.Counter.Health), name,
		)
		enabled := 0.0
		if info.Enabled {
			enabled = 1.0
		}
		ch <- prometheus.MustNewConstMetric(
			c.backendEnabled, prometheus.GaugeValue, enabled, name,
		)
	}

	for _, feName := range c.src.ListFrontends() {
		fe, ok := c.src.GetFrontend(feName)
		if !ok {
			continue
		}
		for _, pool := range fe.Pools {
			for beName, pb := range pool.Backends {
				ch <- prometheus.MustNewConstMetric(
					c.poolWeight, prometheus.GaugeValue,
					float64(pb.Weight), feName, pool.Name, beName,
				)
			}
		}
	}

	// ---- VPP gauges -------------------------------------------------------

	if c.vpp == nil {
		return
	}
	connected := 0.0
	if c.vpp.IsConnected() {
		connected = 1.0
	}
	ch <- prometheus.MustNewConstMetric(c.vppConnected, prometheus.GaugeValue, connected)

	info, ok := c.vpp.VPPInfo()
	if !ok {
		return
	}
	if !info.BootTime.IsZero() {
		ch <- prometheus.MustNewConstMetric(
			c.vppUptimeSeconds, prometheus.GaugeValue,
			time.Since(info.BootTime).Seconds(),
		)
	}
	if !info.ConnectedSince.IsZero() {
		ch <- prometheus.MustNewConstMetric(
			c.vppConnectedFor, prometheus.GaugeValue,
			time.Since(info.ConnectedSince).Seconds(),
		)
	}
	ch <- prometheus.MustNewConstMetric(
		c.vppInfo, prometheus.GaugeValue, 1.0,
		info.Version, info.BuildDate, fmt.Sprintf("%d", info.PID),
	)

	// Per-VIP packet counters, read from the snapshot updated by the LB
	// stats loop in internal/vpp. CounterValue so rate()/increase() work
	// as expected; VPP counter resets (e.g. VIP recreate) are handled by
	// Prometheus's built-in counter-reset detection.
	for _, v := range c.vpp.VIPStats() {
		port := fmt.Sprintf("%d", v.Port)
		ch <- prometheus.MustNewConstMetric(c.vipPackets, prometheus.CounterValue, float64(v.NextPkt), v.Prefix, v.Protocol, port, "next")
		ch <- prometheus.MustNewConstMetric(c.vipPackets, prometheus.CounterValue, float64(v.FirstPkt), v.Prefix, v.Protocol, port, "first")
		ch <- prometheus.MustNewConstMetric(c.vipPackets, prometheus.CounterValue, float64(v.Untracked), v.Prefix, v.Protocol, port, "untracked")
		ch <- prometheus.MustNewConstMetric(c.vipPackets, prometheus.CounterValue, float64(v.NoServer), v.Prefix, v.Protocol, port, "no_server")
		ch <- prometheus.MustNewConstMetric(c.vipRoutePkts, prometheus.CounterValue, float64(v.Packets), v.Prefix, v.Protocol, port)
		ch <- prometheus.MustNewConstMetric(c.vipRouteByts, prometheus.CounterValue, float64(v.Bytes), v.Prefix, v.Protocol, port)
	}

	// Per-backend FIB counters from /net/route/to. Same CounterValue
	// semantics as above.
	for _, b := range c.vpp.BackendRouteStats() {
		ch <- prometheus.MustNewConstMetric(c.backendRoutePkts, prometheus.CounterValue, float64(b.Packets), b.Backend, b.Address)
		ch <- prometheus.MustNewConstMetric(c.backendRouteByts, prometheus.CounterValue, float64(b.Bytes), b.Backend, b.Address)
	}
}

// Register registers all metrics with the given registry. vpp may be nil
// to disable VPP-related metrics.
func Register(reg prometheus.Registerer, src StateSource, vpp VPPSource) *Collector {
	coll := NewCollector(src, vpp)
	reg.MustRegister(coll)
	reg.MustRegister(ProbeTotal)
	reg.MustRegister(ProbeDuration)
	reg.MustRegister(TransitionTotal)
	reg.MustRegister(VPPAPITotal)
	reg.MustRegister(LBSyncTotal)
	return coll
}
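
// Illustrative wiring in the daemon (not part of this package; the promhttp
// import and the two source values are assumptions):
//
//	reg := prometheus.NewRegistry()
//	metrics.Register(reg, checkerSource, vppSource) // vppSource may be nil
//	http.Handle("/metrics", promhttp.HandlerFor(reg, promhttp.HandlerOpts{}))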