VPP LB counters, src-ip-sticky, and frontend state aggregation
New feature: per-VIP / per-backend runtime counters
* New GetVPPLBCounters RPC serving an in-process snapshot refreshed
by a 5s scrape loop (internal/vpp/lbstats.go). Each cycle pulls
the LB plugin's four SimpleCounters (next, first, untracked,
no-server) plus the FIB /net/route/to CombinedCounter for every
VIP and every backend host prefix via a single DumpStats call.
* FIB stats-index discovery via ip_route_lookup (internal/vpp/
fibstats.go); per-worker reduction happens in the collector.
* Prometheus collector exports vip_packets_total (kind label),
vip_route_{packets,bytes}_total, and backend_route_{packets,
bytes}_total. Metrics source interface extended with VIPStats /
BackendRouteStats; vpp.Client publishes snapshots via
atomic.Pointer and clears them on disconnect.
* New 'show vpp lb counters' CLI command. The 'show vpp lbstate'
and 'sync vpp lbstate' commands are restructured under 'show
vpp lb {state,counters}' / 'sync vpp lb state' to make room
for the new verb.
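The snapshot hand-off described above (scrape loop writes, readers consume, disconnect clears) can be sketched as a lock-free publisher built on atomic.Pointer. The statsPublisher type and its method names are illustrative stand-ins, not the real internal/vpp API:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// VIPStatEntry mirrors a subset of the snapshot type from the diff below.
type VIPStatEntry struct {
	Prefix  string
	Packets uint64
}

// statsPublisher is a hypothetical sketch: each scrape cycle swaps in a
// fresh immutable slice via atomic.Pointer, and disconnect clears the
// pointer so readers observe nil without any locking.
type statsPublisher struct {
	vipStats atomic.Pointer[[]VIPStatEntry]
}

func (p *statsPublisher) publish(s []VIPStatEntry) { p.vipStats.Store(&s) }
func (p *statsPublisher) clear()                   { p.vipStats.Store(nil) }

// VIPStats returns the latest snapshot, or nil when VPP is disconnected
// or no scrape has happened yet.
func (p *statsPublisher) VIPStats() []VIPStatEntry {
	s := p.vipStats.Load()
	if s == nil {
		return nil
	}
	return *s
}

func main() {
	var p statsPublisher
	fmt.Println(p.VIPStats() == nil) // no scrape yet
	p.publish([]VIPStatEntry{{Prefix: "192.0.2.1/32", Packets: 42}})
	fmt.Println(len(p.VIPStats()))
	p.clear() // models disconnect
	fmt.Println(p.VIPStats() == nil)
}
```

Because each snapshot slice is replaced wholesale and never mutated in place, readers need no synchronization beyond the atomic load.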
New feature: src-ip-sticky frontends
* New frontend YAML key 'src-ip-sticky' (bool). Plumbed through
config.Frontend, desiredVIP, and the lb_add_del_vip_v2 call.
* Reflected in gRPC FrontendInfo.src_ip_sticky and VPPLBVIP.
src_ip_sticky, and shown in 'show vpp lb state' output.
* Scraped back from VPP by parsing 'show lb vips verbose' through
cli_inband — lb_vip_details does not expose the flag. The same
scrape also recovers the LB pool index for each VIP, which the
stats-segment counters are keyed on. This is a documented
temporary workaround until VPP ships an lb_vip_v2_dump.
* src_ip_sticky cannot be mutated on a live VIP, so a flipped flag
triggers a tear-down-and-recreate in reconcileVIP (ASes deleted
with flush, VIP deleted, then re-added). Flip is logged.
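The recreate-on-flip rule can be sketched as follows; vipState and the helper names are hypothetical stand-ins for the real desiredVIP/reconcileVIP types, and the step list just restates the documented teardown order:

```go
package main

import "fmt"

// vipState is an illustrative stand-in for the desired/actual VIP state
// compared during reconciliation.
type vipState struct {
	Prefix      string
	SrcIPSticky bool
}

// needsRecreate encodes the rule above: src_ip_sticky cannot be mutated
// on a live VIP, so any flip forces a tear-down-and-recreate.
func needsRecreate(actual, desired vipState) bool {
	return actual.SrcIPSticky != desired.SrcIPSticky
}

// recreateSteps lists the documented order: ASes deleted with flush,
// VIP deleted, then re-added.
func recreateSteps() []string {
	return []string{"delete ASes (flush)", "delete VIP", "re-add VIP"}
}

func main() {
	actual := vipState{Prefix: "192.0.2.1/32", SrcIPSticky: false}
	desired := vipState{Prefix: "192.0.2.1/32", SrcIPSticky: true}
	if needsRecreate(actual, desired) {
		fmt.Println(recreateSteps())
	}
}
```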
New feature: frontend state aggregation and events
* New health.FrontendState (unknown/up/down) and FrontendTransition
types. A frontend is 'up' iff at least one backend has a nonzero
effective weight, 'unknown' iff no backend has real state yet,
and 'down' otherwise.
* Checker tracks per-frontend aggregate state, recomputing after
each backend transition and emitting a frontend-transition Event
on change. Reload drops entries for removed frontends.
* checker.Event gains an optional FrontendTransition pointer;
backend- vs. frontend-transition events are demultiplexed on
that field.
* WatchEvents now sends an initial snapshot of frontend state on
connect (mirroring the existing backend snapshot), subscribes
once to the checker stream, and fans out to backend/frontend
handlers based on the client's filter flags. The proto
FrontendEvent message grows name + transition fields.
* New Checker.FrontendState accessor.
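The aggregation rule above (up iff some backend has nonzero effective weight, unknown iff no backend has real state yet, down otherwise) can be sketched directly. The backend struct and computeFrontendState are illustrative, not the actual health.ComputeFrontendState signature:

```go
package main

import "fmt"

type FrontendState int

const (
	StateUnknown FrontendState = iota
	StateUp
	StateDown
)

func (s FrontendState) String() string {
	return [...]string{"unknown", "up", "down"}[s]
}

// backend is an illustrative stand-in: Known reports whether the checker
// has real probe state for it yet; EffectiveWeight is its weight after
// priority failover is applied.
type backend struct {
	Known           bool
	EffectiveWeight uint32
}

// computeFrontendState applies the rule above: 'up' as soon as any backend
// with real state carries a nonzero effective weight, 'unknown' while no
// backend has real state, and 'down' otherwise.
func computeFrontendState(backends []backend) FrontendState {
	anyKnown := false
	for _, b := range backends {
		if b.Known {
			anyKnown = true
			if b.EffectiveWeight > 0 {
				return StateUp
			}
		}
	}
	if !anyKnown {
		return StateUnknown
	}
	return StateDown
}

func main() {
	fmt.Println(computeFrontendState(nil))
	fmt.Println(computeFrontendState([]backend{{true, 10}, {true, 0}}))
	fmt.Println(computeFrontendState([]backend{{true, 0}}))
}
```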
Refactor: pure health helpers
* Moved the priority-failover selector and the (pool idx, active
pool, state, cfg weight) → (vpp weight, flush) mapping out of
internal/vpp/lbsync.go into a new internal/health/weights.go so
the checker can reuse them for frontend-state computation
without importing internal/vpp.
* New functions: health.ActivePoolIndex, BackendEffectiveWeight,
EffectiveWeights, ComputeFrontendState. lbsync.go now calls
these directly; vpp.EffectiveWeights is a thin wrapper over
health.EffectiveWeights retained for the gRPC observability
path. Fully unit-tested in internal/health/weights_test.go.
maglevc polish
* --color default is now mode-aware: on in the interactive shell,
off in one-shot mode so piped output is script-safe. Explicit
--color=true/false still overrides.
* New stripHostMask helper drops /32 and /128 from VIP display;
non-host prefixes pass through unchanged.
* Counter table column order fixed (first before next) and
packets/bytes columns renamed to fib-packets/fib-bytes to
clarify they come from the FIB, not the LB plugin.
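The stripHostMask behaviour can be sketched with net/netip; the changelog names the helper, but this body is an assumption about how it might work:

```go
package main

import (
	"fmt"
	"net/netip"
)

// stripHostMask drops the mask from host prefixes (/32 for IPv4, /128 for
// IPv6) for display; non-host prefixes and unparsable input pass through
// unchanged. A sketch, not the actual maglevc implementation.
func stripHostMask(prefix string) string {
	p, err := netip.ParsePrefix(prefix)
	if err != nil || p.Bits() != p.Addr().BitLen() {
		return prefix
	}
	return p.Addr().String()
}

func main() {
	fmt.Println(stripHostMask("192.0.2.1/32"))    // 192.0.2.1
	fmt.Println(stripHostMask("2001:db8::1/128")) // 2001:db8::1
	fmt.Println(stripHostMask("10.0.0.0/24"))     // 10.0.0.0/24
}
```

Comparing the prefix length against the address's bit length avoids hard-coding both families separately.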
Docs
* config-guide: document src-ip-sticky, including the VIP
recreate-on-change caveat.
* user-guide, maglevc.1, maglevd.8: updated command tree, new
counters command, color defaults, and the src-ip-sticky field.
@@ -45,12 +45,53 @@ type VPPInfo struct {
	ConnectedSince time.Time
}

// VIPStatEntry is a point-in-time snapshot of the per-VIP counters that
// VPP exposes via the stats segment: four SimpleCounters from the LB
// plugin (packets only) plus the FIB CombinedCounter at /net/route/to
// for the VIP's own host prefix (packets + bytes). Values are summed
// across worker threads. The labelling (prefix/protocol/port) matches
// the gRPC VPPLBVIP representation so a Prometheus time series
// corresponds 1:1 to a maglev frontend VIP.
type VIPStatEntry struct {
	Prefix   string // CIDR string, e.g. "192.0.2.1/32"
	Protocol string // "tcp", "udp", "any"
	Port     uint16
	// LB plugin SimpleCounters (packets only)
	NextPkt   uint64 // /packet from existing sessions
	FirstPkt  uint64 // /first session packet
	Untracked uint64 // /untracked packet
	NoServer  uint64 // /no server configured
	// FIB CombinedCounter from /net/route/to at the VIP prefix
	Packets uint64
	Bytes   uint64
}

// BackendRouteStat is a point-in-time snapshot of the FIB combined counter
// (/net/route/to) for a single backend's host prefix. Values are summed
// across worker threads. Labels match the backend's identity so a time
// series corresponds 1:1 to a maglev backend entry.
type BackendRouteStat struct {
	Backend string // backend name from the config
	Address string // backend IP address as a string (e.g. "192.0.2.10")
	Packets uint64
	Bytes   uint64
}

// VPPSource provides read-only access to the VPP client's state. vpp.Client
// is adapted to this interface via a small shim in the collector so the
// metrics package stays decoupled from the vpp package's concrete types.
type VPPSource interface {
	IsConnected() bool
	VPPInfo() (VPPInfo, bool)
	// VIPStats returns the most recent snapshot of per-VIP stats-segment
	// counters, as captured by the LB stats loop. Returns nil when VPP is
	// disconnected or no scrape has happened yet.
	VIPStats() []VIPStatEntry
	// BackendRouteStats returns the most recent snapshot of per-backend
	// FIB combined counters (/net/route/to), as captured by the LB stats
	// loop. Returns nil when VPP is disconnected, no scrape has happened
	// yet, or the route lookup for every backend failed.
	BackendRouteStats() []BackendRouteStat
}

// ---- inline metrics (updated per probe) ------------------------------------
@@ -118,6 +159,12 @@ type Collector struct {
	vppUptimeSeconds *prometheus.Desc
	vppConnectedFor  *prometheus.Desc
	vppInfo          *prometheus.Desc

	vipPackets       *prometheus.Desc // per-VIP LB counters from stats segment
	vipRoutePkts     *prometheus.Desc // per-VIP FIB combined counter: packets
	vipRouteByts     *prometheus.Desc // per-VIP FIB combined counter: bytes
	backendRoutePkts *prometheus.Desc // per-backend FIB combined counter: packets
	backendRouteByts *prometheus.Desc // per-backend FIB combined counter: bytes
}

// NewCollector creates a Collector backed by the given StateSource. vpp may
@@ -167,6 +214,31 @@ func NewCollector(src StateSource, vpp VPPSource) *Collector {
			"Static VPP build information. Always 1; metadata is conveyed via labels.",
			[]string{"version", "build_date", "pid"}, nil,
		),
		vipPackets: prometheus.NewDesc(
			"maglev_vpp_vip_packets_total",
			"Per-VIP packet counters from the VPP LB plugin stats segment, summed across workers. kind ∈ {next, first, untracked, no_server}.",
			[]string{"prefix", "protocol", "port", "kind"}, nil,
		),
		vipRoutePkts: prometheus.NewDesc(
			"maglev_vpp_vip_route_packets_total",
			"Packets forwarded by VPP's FIB toward each VIP's host prefix (from /net/route/to), summed across workers.",
			[]string{"prefix", "protocol", "port"}, nil,
		),
		vipRouteByts: prometheus.NewDesc(
			"maglev_vpp_vip_route_bytes_total",
			"Bytes forwarded by VPP's FIB toward each VIP's host prefix (from /net/route/to), summed across workers.",
			[]string{"prefix", "protocol", "port"}, nil,
		),
		backendRoutePkts: prometheus.NewDesc(
			"maglev_vpp_backend_route_packets_total",
			"Packets forwarded by VPP's FIB toward each backend's host prefix (from /net/route/to), summed across workers.",
			[]string{"backend", "address"}, nil,
		),
		backendRouteByts: prometheus.NewDesc(
			"maglev_vpp_backend_route_bytes_total",
			"Bytes forwarded by VPP's FIB toward each backend's host prefix (from /net/route/to), summed across workers.",
			[]string{"backend", "address"}, nil,
		),
	}
}

@@ -180,6 +252,11 @@ func (c *Collector) Describe(ch chan<- *prometheus.Desc) {
	ch <- c.vppUptimeSeconds
	ch <- c.vppConnectedFor
	ch <- c.vppInfo
	ch <- c.vipPackets
	ch <- c.vipRoutePkts
	ch <- c.vipRouteByts
	ch <- c.backendRoutePkts
	ch <- c.backendRouteByts
}

// Collect implements prometheus.Collector.
@@ -271,6 +348,27 @@ func (c *Collector) Collect(ch chan<- prometheus.Metric) {
		c.vppInfo, prometheus.GaugeValue, 1.0,
		info.Version, info.BuildDate, fmt.Sprintf("%d", info.PID),
	)

	// Per-VIP packet counters, read from the snapshot updated by the LB
	// stats loop in internal/vpp. CounterValue so rate()/increase() work
	// as expected; VPP counter resets (e.g. VIP recreate) are handled by
	// Prometheus's built-in counter-reset detection.
	for _, v := range c.vpp.VIPStats() {
		port := fmt.Sprintf("%d", v.Port)
		ch <- prometheus.MustNewConstMetric(c.vipPackets, prometheus.CounterValue, float64(v.NextPkt), v.Prefix, v.Protocol, port, "next")
		ch <- prometheus.MustNewConstMetric(c.vipPackets, prometheus.CounterValue, float64(v.FirstPkt), v.Prefix, v.Protocol, port, "first")
		ch <- prometheus.MustNewConstMetric(c.vipPackets, prometheus.CounterValue, float64(v.Untracked), v.Prefix, v.Protocol, port, "untracked")
		ch <- prometheus.MustNewConstMetric(c.vipPackets, prometheus.CounterValue, float64(v.NoServer), v.Prefix, v.Protocol, port, "no_server")
		ch <- prometheus.MustNewConstMetric(c.vipRoutePkts, prometheus.CounterValue, float64(v.Packets), v.Prefix, v.Protocol, port)
		ch <- prometheus.MustNewConstMetric(c.vipRouteByts, prometheus.CounterValue, float64(v.Bytes), v.Prefix, v.Protocol, port)
	}

	// Per-backend FIB counters from /net/route/to. Same CounterValue
	// semantics as above.
	for _, b := range c.vpp.BackendRouteStats() {
		ch <- prometheus.MustNewConstMetric(c.backendRoutePkts, prometheus.CounterValue, float64(b.Packets), b.Backend, b.Address)
		ch <- prometheus.MustNewConstMetric(c.backendRouteByts, prometheus.CounterValue, float64(b.Bytes), b.Backend, b.Address)
	}
}

// Register registers all metrics with the given registry. vpp may be nil