VPP LB counters, src-ip-sticky, and frontend state aggregation
New feature: per-VIP / per-backend runtime counters
* New GetVPPLBCounters RPC serving an in-process snapshot refreshed
by a 5s scrape loop (internal/vpp/lbstats.go). Each cycle pulls
the LB plugin's four SimpleCounters (next, first, untracked,
no-server) plus the FIB /net/route/to CombinedCounter for every
VIP and every backend host prefix via a single DumpStats call.
* FIB stats-index discovery via ip_route_lookup (internal/vpp/
fibstats.go); per-worker reduction happens in the collector.
* Prometheus collector exports vip_packets_total (kind label),
vip_route_{packets,bytes}_total, and backend_route_{packets,
bytes}_total. Metrics source interface extended with VIPStats /
BackendRouteStats; vpp.Client publishes snapshots via
atomic.Pointer and clears them on disconnect.
* New 'show vpp lb counters' CLI command. The 'show vpp lbstate'
and 'sync vpp lbstate' commands are restructured under 'show
vpp lb {state,counters}' / 'sync vpp lb state' to make room
for the new verb.
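The snapshot publication described above (store on each scrape cycle, lock-free reads, clear on disconnect) can be sketched with Go's `atomic.Pointer`. This is a minimal illustration of the pattern only: `CounterSnapshot` and the method names are hypothetical stand-ins, not the actual `vpp.Client` API.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// CounterSnapshot is a hypothetical immutable snapshot of per-VIP
// counters gathered by one scrape cycle. It is never mutated after
// being published, so readers need no locking.
type CounterSnapshot struct {
	VIPPackets map[string]uint64
}

// Client sketches the publish/clear pattern: the scrape loop stores a
// fresh snapshot, readers load the latest one lock-free, and a
// disconnect clears the pointer so stale stats are never served.
type Client struct {
	stats atomic.Pointer[CounterSnapshot]
}

func (c *Client) publish(s *CounterSnapshot) { c.stats.Store(s) }
func (c *Client) disconnect()                { c.stats.Store(nil) }

// Stats returns the latest snapshot, or nil when not connected.
func (c *Client) Stats() *CounterSnapshot { return c.stats.Load() }

func main() {
	var c Client
	c.publish(&CounterSnapshot{VIPPackets: map[string]uint64{"192.0.2.1/32": 42}})
	fmt.Println(c.Stats().VIPPackets["192.0.2.1/32"]) // 42
	c.disconnect()
	fmt.Println(c.Stats() == nil) // true
}
```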
New feature: src-ip-sticky frontends
* New frontend YAML key 'src-ip-sticky' (bool). Plumbed through
config.Frontend, desiredVIP, and the lb_add_del_vip_v2 call.
* Reflected in gRPC FrontendInfo.src_ip_sticky and VPPLBVIP.
src_ip_sticky, and shown in 'show vpp lb state' output.
* Scraped back from VPP by parsing 'show lb vips verbose' through
cli_inband — lb_vip_details does not expose the flag. The same
scrape also recovers the LB pool index for each VIP, which the
stats-segment counters are keyed on. This is a documented
temporary workaround until VPP ships an lb_vip_v2_dump.
* src_ip_sticky cannot be mutated on a live VIP, so a flipped flag
triggers a tear-down-and-recreate in reconcileVIP (ASes deleted
with flush, VIP deleted, then re-added). Flip is logged.
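A minimal sketch of how the new key might appear in a frontend stanza. Every field name here other than src-ip-sticky is illustrative, since the surrounding schema is not shown in this commit:

```yaml
# Hypothetical frontend stanza; only src-ip-sticky is the field
# introduced by this commit.
frontends:
  - name: www
    vip: 192.0.2.1/32
    src-ip-sticky: true   # pin each client source IP to one backend;
                          # flipping this recreates the VIP (see caveat above)
```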
New feature: frontend state aggregation and events
* New health.FrontendState (unknown/up/down) and FrontendTransition
types. A frontend is 'up' iff at least one backend has a nonzero
effective weight, 'unknown' iff no backend has real state yet,
and 'down' otherwise.
* Checker tracks per-frontend aggregate state, recomputing after
each backend transition and emitting a frontend-transition Event
on change. Reload drops entries for removed frontends.
* checker.Event gains an optional FrontendTransition pointer;
backend- vs. frontend-transition events are demultiplexed on
that field.
* WatchEvents now sends an initial snapshot of frontend state on
connect (mirroring the existing backend snapshot), subscribes
once to the checker stream, and fans out to backend/frontend
handlers based on the client's filter flags. The proto
FrontendEvent message grows name + transition fields.
* New Checker.FrontendState accessor.
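The backend- vs. frontend-event demultiplexing on the optional pointer can be sketched as follows. The struct contents are hypothetical stand-ins for illustration; the real types live in the checker and carry more fields:

```go
package main

import "fmt"

// Hypothetical stand-ins for the checker's transition types.
type BackendTransition struct{ Name string }
type FrontendTransition struct{ Name string }

// Event sketches the demux rule: exactly one of the two pointers is
// set, and consumers branch on whether the frontend field is non-nil.
type Event struct {
	Backend  *BackendTransition
	Frontend *FrontendTransition
}

func kind(ev Event) string {
	if ev.Frontend != nil {
		return "frontend"
	}
	return "backend"
}

func main() {
	fmt.Println(kind(Event{Backend: &BackendTransition{Name: "be1"}}))   // backend
	fmt.Println(kind(Event{Frontend: &FrontendTransition{Name: "www"}})) // frontend
}
```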
Refactor: pure health helpers
* Moved the priority-failover selector and the (pool idx, active
pool, state, cfg weight) → (vpp weight, flush) mapping out of
internal/vpp/lbsync.go into a new internal/health/weights.go so
the checker can reuse them for frontend-state computation
without importing internal/vpp.
* New functions: health.ActivePoolIndex, BackendEffectiveWeight,
EffectiveWeights, ComputeFrontendState. lbsync.go now calls
these directly; vpp.EffectiveWeights is a thin wrapper over
health.EffectiveWeights retained for the gRPC observability
path. Fully unit-tested in internal/health/weights_test.go.
maglevc polish
* --color default is now mode-aware: on in the interactive shell,
off in one-shot mode so piped output is script-safe. Explicit
--color=true/false still overrides.
* New stripHostMask helper drops /32 and /128 from VIP display;
non-host prefixes pass through unchanged.
* Counter table column order fixed (first before next) and
packets/bytes columns renamed to fib-packets/fib-bytes to
clarify they come from the FIB, not the LB plugin.
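The stripHostMask behavior can be sketched with net/netip, assuming the helper operates on prefix strings (the actual implementation may differ): host prefixes (/32 for IPv4, /128 for IPv6) lose their mask for display, and everything else passes through unchanged.

```go
package main

import (
	"fmt"
	"net/netip"
)

// stripHostMask drops the mask from host prefixes for display.
// Non-prefix input and non-host prefixes are returned unchanged.
func stripHostMask(s string) string {
	p, err := netip.ParsePrefix(s)
	if err != nil {
		return s // not a prefix; leave untouched
	}
	if p.IsSingleIP() { // /32 for IPv4, /128 for IPv6
		return p.Addr().String()
	}
	return s
}

func main() {
	fmt.Println(stripHostMask("192.0.2.1/32"))    // 192.0.2.1
	fmt.Println(stripHostMask("2001:db8::1/128")) // 2001:db8::1
	fmt.Println(stripHostMask("192.0.2.0/24"))    // 192.0.2.0/24
}
```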
Docs
* config-guide: document src-ip-sticky, including the VIP
recreate-on-change caveat.
* user-guide, maglevc.1, maglevd.8: updated command tree, new
counters command, color defaults, and the src-ip-sticky field.
@@ -64,6 +64,43 @@ type Transition struct {
	Result ProbeResult
}

// FrontendState is the aggregated state of a frontend derived from the
// effective weights of its member backends. Frontends do not have their
// own rise/fall counters: they're purely a reduction over backend state.
//
//   - unknown: no backends, or every referenced backend is in StateUnknown
//     (the checker has no probe data yet).
//   - up: at least one backend has effective weight > 0 — the VIP has
//     something to serve.
//   - down: backends exist with real state, but none have effective
//     weight > 0 — the VIP has nothing to serve.
type FrontendState int

const (
	FrontendStateUnknown FrontendState = iota
	FrontendStateUp
	FrontendStateDown
)

func (s FrontendState) String() string {
	switch s {
	case FrontendStateUnknown:
		return "unknown"
	case FrontendStateUp:
		return "up"
	case FrontendStateDown:
		return "down"
	}
	return "unknown"
}

// FrontendTransition records a frontend state change event.
type FrontendTransition struct {
	From FrontendState
	To   FrontendState
	At   time.Time
}

// HealthCounter is HAProxy's single-integer rise/fall model.
//
// Health ∈ [0, Rise+Fall-1]. Server is UP when Health >= Rise, DOWN when
internal/health/weights.go (new file, 118 lines)
@@ -0,0 +1,118 @@
// Copyright (c) 2026, Pim van Pelt <pim@ipng.ch>

package health

import (
	"git.ipng.ch/ipng/vpp-maglev/internal/config"
)

// ActivePoolIndex returns the priority-failover pool index for fe given
// the current backend states. The active pool is the first pool that
// contains at least one backend in StateUp — pool[0] is the primary,
// pool[1] the first fallback, and so on. Returns 0 when no pool has
// any up backend, in which case every backend maps to weight 0 and the
// return value is unobservable.
func ActivePoolIndex(fe config.Frontend, states map[string]State) int {
	for i, pool := range fe.Pools {
		for bName := range pool.Backends {
			if states[bName] == StateUp {
				return i
			}
		}
	}
	return 0
}

// BackendEffectiveWeight is the pure mapping from (pool index, active pool,
// backend state, config weight) to the desired VPP AS weight and flush hint.
// This is the single source of truth for the state → dataplane rule.
//
// A backend gets its configured weight iff it is up AND belongs to the
// currently-active pool. Every other case yields weight 0. Only StateDisabled
// produces flush=true (immediate session teardown).
//
//	state     in active pool   not in active pool   flush
//	--------  ---------------  -------------------  -----
//	unknown   0                0                    no
//	up        configured       0 (standby)          no
//	down      0                0                    no
//	paused    0                0                    no
//	disabled  0                0                    yes
func BackendEffectiveWeight(poolIdx, activePool int, state State, cfgWeight int) (weight uint8, flush bool) {
	switch state {
	case StateUp:
		if poolIdx == activePool {
			return clampWeight(cfgWeight), false
		}
		return 0, false
	case StateDisabled:
		return 0, true
	default:
		return 0, false
	}
}

// EffectiveWeights computes per-pool per-backend effective weights for fe,
// given a snapshot of backend states. Result layout: weights[poolIdx][backendName].
func EffectiveWeights(fe config.Frontend, states map[string]State) map[int]map[string]uint8 {
	activePool := ActivePoolIndex(fe, states)
	out := make(map[int]map[string]uint8, len(fe.Pools))
	for poolIdx, pool := range fe.Pools {
		out[poolIdx] = make(map[string]uint8, len(pool.Backends))
		for bName, pb := range pool.Backends {
			w, _ := BackendEffectiveWeight(poolIdx, activePool, states[bName], pb.Weight)
			out[poolIdx][bName] = w
		}
	}
	return out
}

// ComputeFrontendState derives the FrontendState for fe from a snapshot of
// backend states. Rules:
//
//   - no backends → unknown
//   - every referenced backend is in StateUnknown → unknown
//   - any backend has effective weight > 0 → up
//   - otherwise → down
func ComputeFrontendState(fe config.Frontend, states map[string]State) FrontendState {
	// Unique set of backends referenced by this frontend (a single backend
	// may appear in multiple pools; we count it once).
	seen := make(map[string]struct{})
	for _, pool := range fe.Pools {
		for bName := range pool.Backends {
			seen[bName] = struct{}{}
		}
	}
	if len(seen) == 0 {
		return FrontendStateUnknown
	}
	allUnknown := true
	for bName := range seen {
		if states[bName] != StateUnknown {
			allUnknown = false
			break
		}
	}
	if allUnknown {
		return FrontendStateUnknown
	}
	ew := EffectiveWeights(fe, states)
	for _, poolMap := range ew {
		for _, w := range poolMap {
			if w > 0 {
				return FrontendStateUp
			}
		}
	}
	return FrontendStateDown
}

func clampWeight(w int) uint8 {
	if w < 0 {
		return 0
	}
	if w > 100 {
		return 100
	}
	return uint8(w)
}
internal/health/weights_test.go (new file, 232 lines)
@@ -0,0 +1,232 @@
// Copyright (c) 2026, Pim van Pelt <pim@ipng.ch>

package health

import (
	"testing"

	"git.ipng.ch/ipng/vpp-maglev/internal/config"
)

// TestBackendEffectiveWeight locks down the state → (weight, flush) truth
// table. This is the single source of truth for how maglevd decides what
// to program into VPP for each backend state. If this test needs updating
// the behavior has deliberately changed.
func TestBackendEffectiveWeight(t *testing.T) {
	cases := []struct {
		name       string
		poolIdx    int
		activePool int
		state      State
		cfgWeight  int
		wantWeight uint8
		wantFlush  bool
	}{
		{"up active w100", 0, 0, StateUp, 100, 100, false},
		{"up active w50", 0, 0, StateUp, 50, 50, false},
		{"up active w0", 0, 0, StateUp, 0, 0, false},
		{"up active clamp-high", 0, 0, StateUp, 150, 100, false},
		{"up active clamp-low", 0, 0, StateUp, -5, 0, false},

		{"up standby pool0 active=1", 0, 1, StateUp, 100, 0, false},
		{"up standby pool1 active=0", 1, 0, StateUp, 100, 0, false},
		{"up standby pool2 active=0", 2, 0, StateUp, 100, 0, false},

		{"up failover pool1 active=1", 1, 1, StateUp, 100, 100, false},

		{"unknown pool0 active=0", 0, 0, StateUnknown, 100, 0, false},
		{"unknown pool1 active=0", 1, 0, StateUnknown, 100, 0, false},

		{"down pool0 active=0", 0, 0, StateDown, 100, 0, false},
		{"down pool1 active=1", 1, 1, StateDown, 100, 0, false},

		{"paused pool0 active=0", 0, 0, StatePaused, 100, 0, false},

		{"disabled pool0 active=0", 0, 0, StateDisabled, 100, 0, true},
		{"disabled pool1 active=1", 1, 1, StateDisabled, 100, 0, true},
	}

	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			w, f := BackendEffectiveWeight(tc.poolIdx, tc.activePool, tc.state, tc.cfgWeight)
			if w != tc.wantWeight {
				t.Errorf("weight: got %d, want %d", w, tc.wantWeight)
			}
			if f != tc.wantFlush {
				t.Errorf("flush: got %v, want %v", f, tc.wantFlush)
			}
		})
	}
}

// TestActivePoolIndex locks down the priority-failover selector: the first
// pool containing at least one up backend is the active pool. Default 0.
func TestActivePoolIndex(t *testing.T) {
	mkFE := func(pools ...[]string) config.Frontend {
		out := make([]config.Pool, len(pools))
		for i, p := range pools {
			out[i] = config.Pool{Name: "p", Backends: map[string]config.PoolBackend{}}
			for _, name := range p {
				out[i].Backends[name] = config.PoolBackend{Weight: 100}
			}
		}
		return config.Frontend{Pools: out}
	}

	cases := []struct {
		name   string
		fe     config.Frontend
		states map[string]State
		want   int
	}{
		{
			name:   "pool0 has up, pool1 standby",
			fe:     mkFE([]string{"a", "b"}, []string{"c", "d"}),
			states: map[string]State{"a": StateUp, "b": StateDown, "c": StateUp, "d": StateUp},
			want:   0,
		},
		{
			name:   "pool0 all down, pool1 has up → failover",
			fe:     mkFE([]string{"a", "b"}, []string{"c", "d"}),
			states: map[string]State{"a": StateDown, "b": StateDown, "c": StateUp, "d": StateUp},
			want:   1,
		},
		{
			name:   "pool0 all disabled, pool1 has up → failover",
			fe:     mkFE([]string{"a", "b"}, []string{"c"}),
			states: map[string]State{"a": StateDisabled, "b": StateDisabled, "c": StateUp},
			want:   1,
		},
		{
			name:   "pool0 all paused, pool1 has up → failover",
			fe:     mkFE([]string{"a"}, []string{"c"}),
			states: map[string]State{"a": StatePaused, "c": StateUp},
			want:   1,
		},
		{
			name:   "pool0 all unknown (startup), pool1 up → pool1",
			fe:     mkFE([]string{"a"}, []string{"c"}),
			states: map[string]State{"a": StateUnknown, "c": StateUp},
			want:   1,
		},
		{
			name:   "nothing up anywhere → default 0",
			fe:     mkFE([]string{"a"}, []string{"c"}),
			states: map[string]State{"a": StateDown, "c": StateDown},
			want:   0,
		},
		{
			name:   "1 up in pool0 is enough",
			fe:     mkFE([]string{"a", "b", "c"}, []string{"d"}),
			states: map[string]State{"a": StateDown, "b": StateDown, "c": StateUp, "d": StateUp},
			want:   0,
		},
		{
			name:   "three tiers, pool0 and pool1 both empty → pool2",
			fe:     mkFE([]string{"a"}, []string{"b"}, []string{"c"}),
			states: map[string]State{"a": StateDown, "b": StateDown, "c": StateUp},
			want:   2,
		},
	}

	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			got := ActivePoolIndex(tc.fe, tc.states)
			if got != tc.want {
				t.Errorf("got pool %d, want pool %d", got, tc.want)
			}
		})
	}
}

// TestComputeFrontendState locks down the reduction rule: frontends are
// up iff any backend has effective weight > 0, unknown iff all backends
// are still in StateUnknown (or there are no backends), and down otherwise.
func TestComputeFrontendState(t *testing.T) {
	mkFE := func(pools ...[]string) config.Frontend {
		out := make([]config.Pool, len(pools))
		for i, p := range pools {
			out[i] = config.Pool{Name: "p", Backends: map[string]config.PoolBackend{}}
			for _, name := range p {
				out[i].Backends[name] = config.PoolBackend{Weight: 100}
			}
		}
		return config.Frontend{Pools: out}
	}

	cases := []struct {
		name   string
		fe     config.Frontend
		states map[string]State
		want   FrontendState
	}{
		{
			name: "no backends → unknown",
			fe:   config.Frontend{Pools: []config.Pool{{Name: "primary", Backends: map[string]config.PoolBackend{}}}},
			want: FrontendStateUnknown,
		},
		{
			name:   "all unknown (startup) → unknown",
			fe:     mkFE([]string{"a", "b"}),
			states: map[string]State{"a": StateUnknown, "b": StateUnknown},
			want:   FrontendStateUnknown,
		},
		{
			name:   "one up in primary → up",
			fe:     mkFE([]string{"a", "b"}),
			states: map[string]State{"a": StateUp, "b": StateDown},
			want:   FrontendStateUp,
		},
		{
			name:   "all down → down",
			fe:     mkFE([]string{"a", "b"}),
			states: map[string]State{"a": StateDown, "b": StateDown},
			want:   FrontendStateDown,
		},
		{
			name:   "all disabled → down",
			fe:     mkFE([]string{"a", "b"}),
			states: map[string]State{"a": StateDisabled, "b": StateDisabled},
			want:   FrontendStateDown,
		},
		{
			name:   "all paused → down",
			fe:     mkFE([]string{"a"}),
			states: map[string]State{"a": StatePaused},
			want:   FrontendStateDown,
		},
		{
			name:   "primary down, secondary up → up (failover)",
			fe:     mkFE([]string{"a"}, []string{"b"}),
			states: map[string]State{"a": StateDown, "b": StateUp},
			want:   FrontendStateUp,
		},
		{
			name:   "primary up, secondary down → up (secondary standby ignored)",
			fe:     mkFE([]string{"a"}, []string{"b"}),
			states: map[string]State{"a": StateUp, "b": StateDown},
			want:   FrontendStateUp,
		},
		{
			name:   "primary unknown, secondary unknown → unknown",
			fe:     mkFE([]string{"a"}, []string{"b"}),
			states: map[string]State{"a": StateUnknown, "b": StateUnknown},
			want:   FrontendStateUnknown,
		},
		{
			name:   "primary down, secondary unknown → down",
			fe:     mkFE([]string{"a"}, []string{"b"}),
			states: map[string]State{"a": StateDown, "b": StateUnknown},
			want:   FrontendStateDown,
		},
	}

	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			got := ComputeFrontendState(tc.fe, tc.states)
			if got != tc.want {
				t.Errorf("got %s, want %s", got, tc.want)
			}
		})
	}
}