VPP LB counters, src-ip-sticky, and frontend state aggregation
New feature: per-VIP / per-backend runtime counters
* New GetVPPLBCounters RPC serving an in-process snapshot refreshed
by a 5s scrape loop (internal/vpp/lbstats.go). Each cycle pulls
the LB plugin's four SimpleCounters (next, first, untracked,
no-server) plus the FIB /net/route/to CombinedCounter for every
VIP and every backend host prefix via a single DumpStats call.
* FIB stats-index discovery via ip_route_lookup (internal/vpp/
fibstats.go); per-worker reduction happens in the collector.
* Prometheus collector exports vip_packets_total (kind label),
vip_route_{packets,bytes}_total, and backend_route_{packets,
bytes}_total. Metrics source interface extended with VIPStats /
BackendRouteStats; vpp.Client publishes snapshots via
atomic.Pointer and clears them on disconnect. A sketch of this
export path follows the list.
* New 'show vpp lb counters' CLI command. The 'show vpp lbstate'
and 'sync vpp lbstate' commands are restructured under 'show
vpp lb {state,counters}' / 'sync vpp lb state' to make room
for the new verb.
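For concreteness, a minimal sketch of the export path in Go. The
VIPStatEntry shape, its field names, and the sumWorkers helper are
illustrative assumptions; only the vip_packets_total metric, its kind
label, the per-worker summation, and the atomic.Pointer publish
pattern come from the change itself.

    // Sketch only: these are not the real internal/metrics types.
    package metrics

    import (
        "sync/atomic"

        "github.com/prometheus/client_golang/prometheus"
    )

    // VIPStatEntry is a hypothetical stand-in for one VIP's scraped counters.
    type VIPStatEntry struct {
        VIP     string            // VIP prefix as displayed
        Packets map[string]uint64 // LB SimpleCounters by kind: next/first/untracked/no-server
    }

    // sumWorkers shows the per-worker reduction: stats-segment counters
    // arrive as one value per VPP worker and are summed into one total.
    func sumWorkers(perWorker []uint64) uint64 {
        var total uint64
        for _, v := range perWorker {
            total += v
        }
        return total
    }

    var vipPackets = prometheus.NewDesc(
        "vip_packets_total", "Per-VIP LB plugin packet counters.",
        []string{"vip", "kind"}, nil)

    // Collector reads whole immutable snapshots published via
    // atomic.Pointer, so Collect never contends with the scrape loop.
    type Collector struct {
        snap *atomic.Pointer[[]VIPStatEntry]
    }

    func (c *Collector) Describe(ch chan<- *prometheus.Desc) { ch <- vipPackets }

    func (c *Collector) Collect(ch chan<- prometheus.Metric) {
        p := c.snap.Load()
        if p == nil {
            return // no snapshot yet, or cleared on disconnect
        }
        for _, e := range *p {
            for kind, n := range e.Packets {
                ch <- prometheus.MustNewConstMetric(
                    vipPackets, prometheus.CounterValue, float64(n), e.VIP, kind)
            }
        }
    }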
New feature: src-ip-sticky frontends
* New frontend YAML key 'src-ip-sticky' (bool). Plumbed through
config.Frontend, desiredVIP, and the lb_add_del_vip_v2 call, as
sketched after this list.
* Reflected in gRPC FrontendInfo.src_ip_sticky and VPPLBVIP.
src_ip_sticky, and shown in 'show vpp lb state' output.
* Scraped back from VPP by parsing 'show lb vips verbose' through
cli_inband — lb_vip_details does not expose the flag. The same
scrape also recovers the LB pool index for each VIP, which the
stats-segment counters are keyed on. This is a documented
temporary workaround until VPP ships an lb_vip_v2_dump.
* src_ip_sticky cannot be mutated on a live VIP, so a flipped flag
triggers a tear-down-and-recreate in reconcileVIP (ASes deleted
with flush, VIP deleted, then re-added). The flip is logged.
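A minimal sketch of how the key might flow from YAML into the VIP
spec. The struct shapes and field names below are assumptions; only
the 'src-ip-sticky' key, its bool type, and the recreate-on-flip rule
come from the change.

    // Sketch only: the real types live in internal/config and internal/vpp.
    package config

    // Frontend with the new key, e.g. in YAML:
    //
    //   frontends:
    //     - vip: 192.0.2.10/32
    //       src-ip-sticky: true
    type Frontend struct {
        VIP         string `yaml:"vip"`
        SrcIPSticky bool   `yaml:"src-ip-sticky"` // hash on source IP only
    }

    // desiredVIP-style carrier for the flag down to lb_add_del_vip_v2.
    // Because the flag cannot be changed on a live VIP, reconciliation
    // compares it and recreates the VIP on a flip instead of updating
    // in place.
    type desiredVIP struct {
        prefix      string
        srcIPSticky bool
    }

    func toDesired(f Frontend) desiredVIP {
        return desiredVIP{prefix: f.VIP, srcIPSticky: f.SrcIPSticky}
    }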
New feature: frontend state aggregation and events
* New health.FrontendState (unknown/up/down) and FrontendTransition
types. A frontend is 'up' iff at least one backend has a nonzero
effective weight, 'unknown' iff no backend has real state yet,
and 'down' otherwise (see the sketch after this list).
* Checker tracks per-frontend aggregate state, recomputing after
each backend transition and emitting a frontend-transition Event
on change. Reload drops entries for removed frontends.
* checker.Event gains an optional FrontendTransition pointer;
backend- vs. frontend-transition events are demultiplexed on
that field.
* WatchEvents now sends an initial snapshot of frontend state on
connect (mirroring the existing backend snapshot), subscribes
once to the checker stream, and fans out to backend/frontend
handlers based on the client's filter flags. The proto
FrontendEvent message grows name + transition fields.
* New Checker.FrontendState accessor.
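The aggregation rule reduces to a small pure function. This
standalone sketch assumes a minimal per-backend view (hasState,
effectiveWeight); the real implementation is
health.ComputeFrontendState over the checker's own types.

    // Sketch of the frontend-state rule; backendView is a hypothetical
    // stand-in for what the checker tracks per backend.
    package health

    type FrontendState int

    const (
        FrontendUnknown FrontendState = iota // no backend has real state yet
        FrontendUp                           // >=1 backend with nonzero effective weight
        FrontendDown                         // everything else
    )

    type backendView struct {
        hasState        bool   // the checker has produced a real result
        effectiveWeight uint32 // 0 when down or outside the active pool
    }

    func computeFrontendState(backends []backendView) FrontendState {
        anyState := false
        for _, b := range backends {
            if b.effectiveWeight > 0 {
                return FrontendUp // one live backend is enough
            }
            anyState = anyState || b.hasState
        }
        if !anyState {
            return FrontendUnknown
        }
        return FrontendDown
    }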
Refactor: pure health helpers
* Moved the priority-failover selector and the (pool idx, active
pool, state, cfg weight) → (vpp weight, flush) mapping out of
internal/vpp/lbsync.go into a new internal/health/weights.go so
the checker can reuse them for frontend-state computation
without importing internal/vpp.
* New functions: health.ActivePoolIndex, BackendEffectiveWeight,
EffectiveWeights, ComputeFrontendState. lbsync.go now calls
these directly; vpp.EffectiveWeights is a thin wrapper over
health.EffectiveWeights retained for the gRPC observability
path. Fully unit-tested in internal/health/weights_test.go. A
sketch of the helper logic follows the list.
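An illustrative sketch of the helper logic under assumed signatures.
The exported names are real (health.ActivePoolIndex,
health.BackendEffectiveWeight); the parameter shapes here are
assumptions, and the real weight mapping also returns the flush
decision, which this sketch omits.

    // Sketch only: see internal/health/weights.go for the real API.
    package health

    // Priority failover: pick the highest-priority pool that still has
    // a healthy backend; -1 when nothing is up.
    func activePoolIndex(poolsUp [][]bool) int {
        for i, pool := range poolsUp {
            for _, up := range pool {
                if up {
                    return i
                }
            }
        }
        return -1
    }

    // Map (pool idx, active pool, health state, configured weight) to
    // the weight pushed to VPP: a healthy backend in the active pool
    // keeps its configured weight; everything else is drained to 0.
    func backendEffectiveWeight(poolIdx, activePool int, up bool, cfgWeight uint32) uint32 {
        if up && poolIdx == activePool {
            return cfgWeight
        }
        return 0
    }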
maglevc polish
* --color default is now mode-aware: on in the interactive shell,
off in one-shot mode so piped output is script-safe. Explicit
--color=true/false still overrides.
* New stripHostMask helper drops /32 and /128 from VIP display;
non-host prefixes pass through unchanged (sketched below this
list).
* Counter table column order fixed (first before next) and
packets/bytes columns renamed to fib-packets/fib-bytes to
clarify they come from the FIB, not the LB plugin.
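The helper's contract is small enough to pin down in a few lines;
this net/netip sketch assumes only the behavior described above.

    package main

    import (
        "fmt"
        "net/netip"
    )

    // stripHostMask drops /32 and /128 from host prefixes for display;
    // non-host prefixes (and non-prefix strings) pass through unchanged.
    func stripHostMask(s string) string {
        p, err := netip.ParsePrefix(s)
        if err != nil {
            return s
        }
        if p.IsSingleIP() { // /32 for IPv4, /128 for IPv6
            return p.Addr().String()
        }
        return s
    }

    func main() {
        fmt.Println(stripHostMask("192.0.2.10/32")) // 192.0.2.10
        fmt.Println(stripHostMask("2001:db8::/64")) // 2001:db8::/64
    }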
Docs
* config-guide: document src-ip-sticky, including the VIP
recreate-on-change caveat.
* user-guide, maglevc.1, maglevd.8: updated command tree, new
counters command, color defaults, and the src-ip-sticky field.
@@ -9,6 +9,7 @@ import (
     "context"
     "log/slog"
     "sync"
+    "sync/atomic"
     "time"

     "go.fd.io/govpp/adapter"
@@ -60,6 +61,16 @@ type Client struct {
     info       Info        // populated on successful connect
     stateSrc   StateSource // optional; enables periodic LB sync
     lastLBConf *lb.LbConf  // cached last-pushed lb_conf (dedup)
+
+    // lbStatsSnap is the most recent per-VIP stats snapshot captured by
+    // lbStatsLoop. Published as an immutable slice via atomic.Pointer so
+    // Prometheus scrapes (metrics.Collector.Collect) don't take any lock.
+    lbStatsSnap atomic.Pointer[[]metrics.VIPStatEntry]
+
+    // backendRouteSnap is the most recent per-backend FIB stats snapshot
+    // captured by lbStatsLoop. Same atomic-pointer publish pattern as
+    // lbStatsSnap; see logBackendRouteStats in fibstats.go.
+    backendRouteSnap atomic.Pointer[[]metrics.BackendRouteStat]
 }

 // SetStateSource attaches a live config + health state source. When set, the
@@ -140,10 +151,11 @@ func (c *Client) Run(ctx context.Context) {
         }
     }

-    // Start the LB sync loop for as long as the connection is up.
-    // It exits when connCtx is cancelled (on disconnect or shutdown).
+    // Start the LB sync and stats loops for as long as the connection
+    // is up. Both exit when connCtx is cancelled.
     connCtx, connCancel := context.WithCancel(ctx)
     go c.lbSyncLoop(connCtx)
+    go c.lbStatsLoop(connCtx)

     // Hold the connection, pinging periodically to detect VPP restarts.
     c.monitor(ctx)
@@ -217,6 +229,30 @@ func (c *Client) GetInfo() (Info, error) {
     return c.info, nil
 }

+// VIPStats satisfies metrics.VPPSource. It returns the latest snapshot of
+// per-VIP LB stats-segment counters captured by lbStatsLoop. Returns nil
+// until the first scrape completes, or after a disconnect (the pointer is
+// cleared when the connection drops).
+func (c *Client) VIPStats() []metrics.VIPStatEntry {
+    p := c.lbStatsSnap.Load()
+    if p == nil {
+        return nil
+    }
+    return *p
+}
+
+// BackendRouteStats satisfies metrics.VPPSource. It returns the latest
+// snapshot of per-backend FIB combined counters (/net/route/to) captured
+// by lbStatsLoop. Returns nil until the first scrape completes, or after
+// a disconnect.
+func (c *Client) BackendRouteStats() []metrics.BackendRouteStat {
+    p := c.backendRouteSnap.Load()
+    if p == nil {
+        return nil
+    }
+    return *p
+}
+
 // VPPInfo satisfies metrics.VPPSource. It returns a copy of the cached
 // connection info as a metrics-local struct so the metrics package doesn't
 // need to import internal/vpp. Second return is false when VPP is not
@@ -272,6 +308,8 @@ func (c *Client) disconnect() {
     c.info = Info{}
     c.lastLBConf = nil // force re-push of lb_conf on reconnect
     c.mu.Unlock()
+    c.lbStatsSnap.Store(nil)
+    c.backendRouteSnap.Store(nil)

     safeDisconnectAPI(apiConn)
     safeDisconnectStats(statsConn)