New feature: per-VIP / per-backend runtime counters
* New GetVPPLBCounters RPC serving an in-process snapshot refreshed
by a 5s scrape loop (internal/vpp/lbstats.go). Each cycle pulls
the LB plugin's four SimpleCounters (next, first, untracked,
no-server) plus the FIB /net/route/to CombinedCounter for every
VIP and every backend host prefix via a single DumpStats call.
* FIB stats-index discovery via ip_route_lookup (internal/vpp/
fibstats.go); per-worker reduction happens in the collector.
* Prometheus collector exports vip_packets_total (kind label),
vip_route_{packets,bytes}_total, and backend_route_{packets,
bytes}_total. Metrics source interface extended with VIPStats /
BackendRouteStats; vpp.Client publishes snapshots via
  atomic.Pointer and clears them on disconnect (see the sketch below).
* New 'show vpp lb counters' CLI command. The 'show vpp lbstate'
and 'sync vpp lbstate' commands are restructured under 'show
vpp lb {state,counters}' / 'sync vpp lb state' to make room
  for the new subcommand.
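
A minimal sketch of the snapshot-publication pattern described above. Everything here is illustrative: the package, lbSnapshot/VIPCounters types, and scrapeOnce are hypothetical stand-ins, not the actual internal/vpp code; only the 5s cadence, the atomic.Pointer, and the clear-on-disconnect behaviour come from the notes.

package vppsketch // illustrative; the real code lives in internal/vpp

import (
	"context"
	"sync/atomic"
	"time"
)

// VIPCounters and lbSnapshot are hypothetical stand-ins for the real
// per-VIP/per-backend counter types.
type VIPCounters struct{ Packets, Bytes uint64 }

type lbSnapshot struct {
	TakenAt time.Time
	VIPs    map[string]VIPCounters // keyed by VIP prefix
}

// Client holds the latest scrape in an atomic.Pointer so gRPC and
// Prometheus readers never block, and never see a half-built snapshot.
type Client struct {
	snap atomic.Pointer[lbSnapshot]
}

// scrapeOnce is a stub for the real stats-segment DumpStats cycle.
func (c *Client) scrapeOnce() (*lbSnapshot, error) {
	return &lbSnapshot{TakenAt: time.Now(), VIPs: map[string]VIPCounters{}}, nil
}

// scrapeLoop refreshes the snapshot every 5s until ctx is cancelled.
func (c *Client) scrapeLoop(ctx context.Context) {
	t := time.NewTicker(5 * time.Second)
	defer t.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-t.C:
			if s, err := c.scrapeOnce(); err == nil {
				c.snap.Store(s) // readers always get a complete, immutable snapshot
			}
		}
	}
}

// Snapshot returns the latest counters, or nil when disconnected / not yet scraped.
func (c *Client) Snapshot() *lbSnapshot { return c.snap.Load() }

// onDisconnect drops stale counters so consumers see "no data" rather than old numbers.
func (c *Client) onDisconnect() { c.snap.Store(nil) }
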
New feature: src-ip-sticky frontends
* New frontend YAML key 'src-ip-sticky' (bool). Plumbed through
config.Frontend, desiredVIP, and the lb_add_del_vip_v2 call.
* Reflected in gRPC FrontendInfo.src_ip_sticky and
  VPPLBVIP.src_ip_sticky, and shown in 'show vpp lb state' output.
* Scraped back from VPP by parsing 'show lb vips verbose' through
cli_inband — lb_vip_details does not expose the flag. The same
scrape also recovers the LB pool index for each VIP, which the
stats-segment counters are keyed on. This is a documented
temporary workaround until VPP ships an lb_vip_v2_dump.
* src_ip_sticky cannot be mutated on a live VIP, so a flipped flag
triggers a tear-down-and-recreate in reconcileVIP (ASes deleted
with flush, VIP deleted, then re-added). Flip is logged.
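
For illustration, the flip handling could look roughly like the sketch below; vipSpec and the three callbacks are hypothetical stand-ins, not the actual reconcileVIP internals. Only the ordering (ASes deleted with flush, VIP deleted, then re-added) and the logging come from the notes above.

package vppsketch // illustrative; the real logic lives in internal/vpp/lbsync.go

import "log"

// vipSpec is a hypothetical stand-in for the desired/current VIP descriptors.
type vipSpec struct {
	Prefix      string
	SrcIPSticky bool
}

// recreateOnStickyFlip shows only the src_ip_sticky handling: the flag cannot
// be changed on a live VIP, so a flipped flag forces tear-down-and-recreate.
// removeASes takes (vip, flush); re-adding the ASes is left to the caller here.
func recreateOnStickyFlip(desired, current vipSpec,
	removeASes func(vipSpec, bool) error,
	delVIP, addVIP func(vipSpec) error) error {

	if desired.SrcIPSticky == current.SrcIPSticky {
		return nil // no flip, nothing to recreate
	}
	log.Printf("src-ip-sticky flipped on %s (now %v): recreating VIP",
		desired.Prefix, desired.SrcIPSticky)

	// Order per the notes above: ASes deleted with flush, VIP deleted, re-added.
	if err := removeASes(current, true); err != nil {
		return err
	}
	if err := delVIP(current); err != nil {
		return err
	}
	return addVIP(desired)
}
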
New feature: frontend state aggregation and events
* New health.FrontendState (unknown/up/down) and FrontendTransition
types. A frontend is 'up' iff at least one backend has a nonzero
effective weight, 'unknown' iff no backend has real state yet,
and 'down' otherwise.
* Checker tracks per-frontend aggregate state, recomputing after
each backend transition and emitting a frontend-transition Event
on change. Reload drops entries for removed frontends.
* checker.Event gains an optional FrontendTransition pointer;
backend- vs. frontend-transition events are demultiplexed on
  that field (see the sketch below).
* WatchEvents now sends an initial snapshot of frontend state on
connect (mirroring the existing backend snapshot), subscribes
once to the checker stream, and fans out to backend/frontend
handlers based on the client's filter flags. The proto
FrontendEvent message grows name + transition fields.
* New Checker.FrontendState accessor.
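
A minimal sketch of that demultiplexing and fan-out, assuming a simplified Event shape; only the optional FrontendTransition pointer is taken from the notes above, the other field and function names are assumptions.

package eventsketch // illustrative; checker.Event carries more fields in reality

// FrontendTransition is a simplified stand-in for the checker type.
type FrontendTransition struct {
	Frontend string
	From, To string // FrontendState names
}

// Event mimics checker.Event: backend-transition data plus the new
// optional FrontendTransition pointer.
type Event struct {
	Backend            string
	FrontendTransition *FrontendTransition // nil means this is a backend transition
}

// fanOut routes one checker event to the matching handler, honouring the
// client's filter flags, roughly as WatchEvents does per the notes above.
func fanOut(ev Event, wantBackends, wantFrontends bool,
	onBackend func(Event), onFrontend func(FrontendTransition)) {

	if ev.FrontendTransition != nil {
		if wantFrontends {
			onFrontend(*ev.FrontendTransition)
		}
		return
	}
	if wantBackends {
		onBackend(ev)
	}
}
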
Refactor: pure health helpers
* Moved the priority-failover selector and the (pool idx, active
pool, state, cfg weight) → (vpp weight, flush) mapping out of
internal/vpp/lbsync.go into a new internal/health/weights.go so
the checker can reuse them for frontend-state computation
without importing internal/vpp.
* New functions: health.ActivePoolIndex, BackendEffectiveWeight,
EffectiveWeights, ComputeFrontendState. lbsync.go now calls
these directly; vpp.EffectiveWeights is a thin wrapper over
health.EffectiveWeights retained for the gRPC observability
path. Fully unit-tested in internal/health/weights_test.go.
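
For illustration, the lbsync side of the refactor can consume the moved helpers roughly like this; applyWeights and the programAS callback are hypothetical, only the health.* calls and the config field shapes (as in weights.go below) are real.

package vppsketch // illustrative; the real caller is internal/vpp/lbsync.go

import (
	"git.ipng.ch/ipng/vpp-maglev/internal/config"
	"git.ipng.ch/ipng/vpp-maglev/internal/health"
)

// applyWeights maps every configured backend of fe to a desired VPP AS
// weight and flush hint using the moved helpers, then hands the result to
// a dataplane callback (a stand-in for the real lb_add_del_as plumbing).
func applyWeights(fe config.Frontend, states map[string]health.State,
	programAS func(backend string, weight uint8, flush bool)) {

	activePool := health.ActivePoolIndex(fe, states)
	for poolIdx, pool := range fe.Pools {
		for bName, pb := range pool.Backends {
			w, flush := health.BackendEffectiveWeight(poolIdx, activePool, states[bName], pb.Weight)
			programAS(bName, w, flush)
		}
	}
}
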
maglevc polish
* --color default is now mode-aware: on in the interactive shell,
off in one-shot mode so piped output is script-safe. Explicit
--color=true/false still overrides.
* New stripHostMask helper drops /32 and /128 from VIP display;
  non-host prefixes pass through unchanged (see the sketch below).
* Counter table column order fixed (first before next) and
packets/bytes columns renamed to fib-packets/fib-bytes to
clarify they come from the FIB, not the LB plugin.
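
The stripHostMask helper could be as small as the sketch below (the real implementation may differ): a /32 or /128 host prefix loses its mask, anything else is returned as-is.

package cliutil // illustrative sketch, not the actual maglevc code

import "net/netip"

// stripHostMask drops the mask from host prefixes ("192.0.2.10/32",
// "2001:db8::1/128") for display; non-host prefixes pass through unchanged.
func stripHostMask(prefix string) string {
	p, err := netip.ParsePrefix(prefix)
	if err != nil {
		return prefix // not a prefix at all; leave it alone
	}
	if p.IsSingleIP() {
		return p.Addr().String()
	}
	return prefix
}

So "192.0.2.10/32" renders as "192.0.2.10", while "2001:db8::/64" stays unchanged.
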
Docs
* config-guide: document src-ip-sticky, including the VIP
recreate-on-change caveat.
* user-guide, maglevc.1, maglevd.8: updated command tree, new
counters command, color defaults, and the src-ip-sticky field.
internal/health/weights.go (119 lines, 3.6 KiB, Go):
// Copyright (c) 2026, Pim van Pelt <pim@ipng.ch>

package health

import (
	"git.ipng.ch/ipng/vpp-maglev/internal/config"
)

// ActivePoolIndex returns the priority-failover pool index for fe given
// the current backend states. The active pool is the first pool that
// contains at least one backend in StateUp — pool[0] is the primary,
// pool[1] the first fallback, and so on. Returns 0 when no pool has
// any up backend, in which case every backend maps to weight 0 and the
// return value is unobservable.
func ActivePoolIndex(fe config.Frontend, states map[string]State) int {
	for i, pool := range fe.Pools {
		for bName := range pool.Backends {
			if states[bName] == StateUp {
				return i
			}
		}
	}
	return 0
}

// BackendEffectiveWeight is the pure mapping from (pool index, active pool,
// backend state, config weight) to the desired VPP AS weight and flush hint.
// This is the single source of truth for the state → dataplane rule.
//
// A backend gets its configured weight iff it is up AND belongs to the
// currently-active pool. Every other case yields weight 0. Only StateDisabled
// produces flush=true (immediate session teardown).
//
//	state      in active pool   not in active pool    flush
//	--------   --------------   -------------------   -----
//	unknown    0                0                     no
//	up         configured       0 (standby)           no
//	down       0                0                     no
//	paused     0                0                     no
//	disabled   0                0                     yes
func BackendEffectiveWeight(poolIdx, activePool int, state State, cfgWeight int) (weight uint8, flush bool) {
	switch state {
	case StateUp:
		if poolIdx == activePool {
			return clampWeight(cfgWeight), false
		}
		return 0, false
	case StateDisabled:
		return 0, true
	default:
		return 0, false
	}
}

// EffectiveWeights computes per-pool per-backend effective weights for fe,
// given a snapshot of backend states. Result layout: weights[poolIdx][backendName].
func EffectiveWeights(fe config.Frontend, states map[string]State) map[int]map[string]uint8 {
	activePool := ActivePoolIndex(fe, states)
	out := make(map[int]map[string]uint8, len(fe.Pools))
	for poolIdx, pool := range fe.Pools {
		out[poolIdx] = make(map[string]uint8, len(pool.Backends))
		for bName, pb := range pool.Backends {
			w, _ := BackendEffectiveWeight(poolIdx, activePool, states[bName], pb.Weight)
			out[poolIdx][bName] = w
		}
	}
	return out
}

// ComputeFrontendState derives the FrontendState for fe from a snapshot of
// backend states. Rules:
//
//   - no backends → unknown
//   - every referenced backend is in StateUnknown → unknown
//   - any backend has effective weight > 0 → up
//   - otherwise → down
func ComputeFrontendState(fe config.Frontend, states map[string]State) FrontendState {
	// Unique set of backends referenced by this frontend (a single backend
	// may appear in multiple pools; we count it once).
	seen := make(map[string]struct{})
	for _, pool := range fe.Pools {
		for bName := range pool.Backends {
			seen[bName] = struct{}{}
		}
	}
	if len(seen) == 0 {
		return FrontendStateUnknown
	}
	allUnknown := true
	for bName := range seen {
		if states[bName] != StateUnknown {
			allUnknown = false
			break
		}
	}
	if allUnknown {
		return FrontendStateUnknown
	}
	ew := EffectiveWeights(fe, states)
	for _, poolMap := range ew {
		for _, w := range poolMap {
			if w > 0 {
				return FrontendStateUp
			}
		}
	}
	return FrontendStateDown
}

func clampWeight(w int) uint8 {
	if w < 0 {
		return 0
	}
	if w > 100 {
		return 100
	}
	return uint8(w)
}
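
As a usage illustration of the helpers above (not a copy of the project's actual weights_test.go), the state → weight rule can be exercised like this:

package health_test // illustrative example, separate from the real test file

import (
	"fmt"

	"git.ipng.ch/ipng/vpp-maglev/internal/health"
)

// Example_backendEffectiveWeight walks a few rows of the table documented above.
func Example_backendEffectiveWeight() {
	w, flush := health.BackendEffectiveWeight(0, 0, health.StateUp, 80)
	fmt.Println(w, flush) // up and in the active pool: configured weight, no flush

	w, flush = health.BackendEffectiveWeight(1, 0, health.StateUp, 80)
	fmt.Println(w, flush) // up but in a standby pool: weight 0

	w, flush = health.BackendEffectiveWeight(0, 0, health.StateDisabled, 80)
	fmt.Println(w, flush) // disabled: weight 0 plus session flush

	// Output:
	// 80 false
	// 0 false
	// 0 true
}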