VPP LB counters, src-ip-sticky, and frontend state aggregation

New feature: per-VIP / per-backend runtime counters
  * New GetVPPLBCounters RPC serving an in-process snapshot refreshed
    by a 5s scrape loop (internal/vpp/lbstats.go). Each cycle pulls
    the LB plugin's four SimpleCounters (next, first, untracked,
    no-server) plus the FIB /net/route/to CombinedCounter for every
    VIP and every backend host prefix via a single DumpStats call.
  * FIB stats-index discovery via ip_route_lookup (internal/vpp/
    fibstats.go); per-worker reduction happens in the collector.
  * Prometheus collector exports vip_packets_total (kind label),
    vip_route_{packets,bytes}_total, and backend_route_{packets,
    bytes}_total. Metrics source interface extended with VIPStats /
    BackendRouteStats; vpp.Client publishes snapshots via
    atomic.Pointer and clears them on disconnect (see the sketch
    after this list).
  * New 'show vpp lb counters' CLI command. The 'show vpp lbstate'
    and 'sync vpp lbstate' commands are restructured under 'show
    vpp lb {state,counters}' / 'sync vpp lb state' to make room
    for the new verb.
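
The snapshot hand-off described in the metrics bullet above comes down
to a single atomic pointer swap. A minimal sketch, assuming illustrative
names (lbSnapshot, publishLBStats, clearLBStats are not the real
identifiers):

    package vpp

    import (
        "sync/atomic"
        "time"
    )

    // lbSnapshot is the immutable result of one scrape cycle.
    type lbSnapshot struct {
        ScrapedAt  time.Time
        VIPPackets map[string]uint64 // per-VIP packet counts, illustrative
    }

    // Client keeps the latest snapshot behind an atomic pointer so the
    // gRPC handler and the Prometheus collector read without locking.
    type Client struct {
        lbStats atomic.Pointer[lbSnapshot]
    }

    // publishLBStats is called by the 5s scrape loop with a fresh snapshot.
    func (c *Client) publishLBStats(s *lbSnapshot) { c.lbStats.Store(s) }

    // lbStatsSnapshot returns the newest snapshot, or nil if none exists.
    func (c *Client) lbStatsSnapshot() *lbSnapshot { return c.lbStats.Load() }

    // clearLBStats runs on disconnect so readers see "no data" rather
    // than stale counters.
    func (c *Client) clearLBStats() { c.lbStats.Store(nil) }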

New feature: src-ip-sticky frontends
  * New frontend YAML key 'src-ip-sticky' (bool). Plumbed through
    config.Frontend, desiredVIP, and the lb_add_del_vip_v2 call.
  * Reflected in gRPC FrontendInfo.src_ip_sticky and VPPLBVIP.
    src_ip_sticky, and shown in 'show vpp lb state' output.
  * Scraped back from VPP by parsing 'show lb vips verbose' through
    cli_inband — lb_vip_details does not expose the flag. The same
    scrape also recovers the LB pool index for each VIP, which the
    stats-segment counters are keyed on. This is a documented
    temporary workaround until VPP ships an lb_vip_v2_dump; a
    header-line parse sketch follows this list.
  * src_ip_sticky cannot be mutated on a live VIP, so a flipped flag
    triggers a tear-down-and-recreate in reconcileVIP (ASes deleted
    with flush, VIP deleted, then re-added). Flip is logged.
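
For illustration, recovering the index and sticky flag from one VIP
header line of that CLI output might look like the sketch below;
parseVIPHeader is a hypothetical helper (the real parser is pinned by
TestParseLBVIPSnapshot in the diff further down), and the field layout
is taken from its synthetic sample:

    package vpp

    import (
        "strconv"
        "strings"
    )

    // parseVIPHeader extracts the LB index, prefix, and sticky flag from
    // a header line like " ip4-gre4 [1] 192.0.2.1/32 src_ip_sticky".
    // Sub-lines such as "protocol:6 port:80" do not match (ok=false).
    func parseVIPHeader(line string) (index int, prefix string, sticky bool, ok bool) {
        f := strings.Fields(line)
        if len(f) < 3 || !strings.HasPrefix(f[1], "[") {
            return 0, "", false, false
        }
        idx, err := strconv.Atoi(strings.Trim(f[1], "[]"))
        if err != nil {
            return 0, "", false, false
        }
        // Trailing tokens may carry the optional src_ip_sticky flag.
        for _, tok := range f[3:] {
            if tok == "src_ip_sticky" {
                sticky = true
            }
        }
        return idx, f[2], sticky, true
    }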

New feature: frontend state aggregation and events
  * New health.FrontendState (unknown/up/down) and FrontendTransition
    types. A frontend is 'up' iff at least one backend has a nonzero
    effective weight, 'unknown' iff no backend has real state yet,
    and 'down' otherwise (see the sketch after this list).
  * Checker tracks per-frontend aggregate state, recomputing after
    each backend transition and emitting a frontend-transition Event
    on change. Reload drops entries for removed frontends.
  * checker.Event gains an optional FrontendTransition pointer;
    backend- vs. frontend-transition events are demultiplexed on
    that field.
  * WatchEvents now sends an initial snapshot of frontend state on
    connect (mirroring the existing backend snapshot), subscribes
    once to the checker stream, and fans out to backend/frontend
    handlers based on the client's filter flags. The proto
    FrontendEvent message grows name + transition fields.
  * New Checker.FrontendState accessor.
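
The aggregation rule is a two-pass scan. A minimal sketch in package
health, reusing its existing State/StateUnknown; computeFrontendState
and the constant names are illustrative stand-ins for what this commit
adds as health.FrontendState and health.ComputeFrontendState, whose
real spellings may differ:

    package health

    // FrontendState aggregates the backend states of one frontend.
    type FrontendState int

    const (
        FrontendUnknown FrontendState = iota
        FrontendUp
        FrontendDown
    )

    // computeFrontendState: 'up' iff any backend carries a nonzero
    // effective weight, 'unknown' iff no backend has real state yet,
    // 'down' otherwise.
    func computeFrontendState(weights map[string]uint8, states map[string]State) FrontendState {
        for _, w := range weights {
            if w > 0 {
                return FrontendUp
            }
        }
        for _, s := range states {
            if s != StateUnknown {
                return FrontendDown
            }
        }
        return FrontendUnknown
    }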

Refactor: pure health helpers
  * Moved the priority-failover selector and the (pool idx, active
    pool, state, cfg weight) → (vpp weight, flush) mapping out of
    internal/vpp/lbsync.go into a new internal/health/weights.go so
    the checker can reuse them for frontend-state computation
    without importing internal/vpp.
  * New functions: health.ActivePoolIndex, BackendEffectiveWeight,
    EffectiveWeights, ComputeFrontendState. lbsync.go now calls
    these directly; vpp.EffectiveWeights is a thin wrapper over
    health.EffectiveWeights retained for the gRPC observability
    path. Fully unit-tested in internal/health/weights_test.go.
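
The failover selector itself is tiny. A sketch of health.ActivePoolIndex
consistent with the truth table the moved tests pin down (the signature
and the config import path are inferred, not verbatim):

    package health

    import "git.ipng.ch/ipng/vpp-maglev/internal/config"

    // ActivePoolIndex returns the index of the first pool containing at
    // least one backend in StateUp; when nothing is up anywhere, pool 0
    // remains the active pool.
    func ActivePoolIndex(fe config.Frontend, states map[string]State) int {
        for i, pool := range fe.Pools {
            for name := range pool.Backends {
                if states[name] == StateUp {
                    return i
                }
            }
        }
        return 0
    }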

maglevc polish
  * --color default is now mode-aware: on in the interactive shell,
    off in one-shot mode so piped output is script-safe. Explicit
    --color=true/false still overrides.
  * New stripHostMask helper drops /32 and /128 from VIP display;
    non-host prefixes pass through unchanged (sketched after this
    list).
  * Counter table column order fixed (first before next) and
    packets/bytes columns renamed to fib-packets/fib-bytes to
    clarify they come from the FIB, not the LB plugin.
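
The mask-stripping rule is simple enough to sketch; this version assumes
string prefixes (the real helper may work on netip.Prefix instead):

    package main

    import "strings"

    // stripHostMask drops a host mask (/32 for IPv4, /128 for IPv6) from
    // a prefix for display; non-host prefixes pass through unchanged.
    func stripHostMask(prefix string) string {
        if addr, ok := strings.CutSuffix(prefix, "/32"); ok && !strings.Contains(addr, ":") {
            return addr
        }
        if addr, ok := strings.CutSuffix(prefix, "/128"); ok && strings.Contains(addr, ":") {
            return addr
        }
        return prefix
    }

So stripHostMask("192.0.2.1/32") renders as "192.0.2.1", while
"192.0.2.0/24" and "2001:db8::/64" come back untouched.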

Docs
  * config-guide: document src-ip-sticky, including the VIP
    recreate-on-change caveat.
  * user-guide, maglevc.1, maglevd.8: updated command tree, new
    counters command, color defaults, and the src-ip-sticky field.
commit fb62532fd5 (parent d5fbf5c640)
Date: 2026-04-12 15:59:02 +02:00
25 changed files with 2163 additions and 549 deletions


@@ -10,141 +10,45 @@ import (
"git.ipng.ch/ipng/vpp-maglev/internal/health"
)
// TestAsFromBackend locks down the state → (weight, flush) truth table.
// This is the single source of truth for how maglevd decides what to
// program into VPP for each backend state. If this test needs updating
// the behavior has deliberately changed.
func TestAsFromBackend(t *testing.T) {
cases := []struct {
name string
poolIdx int
activePool int
state health.State
cfgWeight int
wantWeight uint8
wantFlush bool
}{
// up in active pool → configured weight, no flush
{"up active w100", 0, 0, health.StateUp, 100, 100, false},
{"up active w50", 0, 0, health.StateUp, 50, 50, false},
{"up active w0", 0, 0, health.StateUp, 0, 0, false},
{"up active clamp-high", 0, 0, health.StateUp, 150, 100, false},
{"up active clamp-low", 0, 0, health.StateUp, -5, 0, false},
// up in non-active pool → standby (weight 0), no flush
{"up standby pool0 active=1", 0, 1, health.StateUp, 100, 0, false},
{"up standby pool1 active=0", 1, 0, health.StateUp, 100, 0, false},
{"up standby pool2 active=0", 2, 0, health.StateUp, 100, 0, false},
// up in secondary, promoted because pool[1] is now active
{"up failover pool1 active=1", 1, 1, health.StateUp, 100, 100, false},
// unknown → off, drain
{"unknown pool0 active=0", 0, 0, health.StateUnknown, 100, 0, false},
{"unknown pool1 active=0", 1, 0, health.StateUnknown, 100, 0, false},
// down → off, drain (probe might be wrong)
{"down pool0 active=0", 0, 0, health.StateDown, 100, 0, false},
{"down pool1 active=1", 1, 1, health.StateDown, 100, 0, false},
// paused → off, drain (graceful maintenance)
{"paused pool0 active=0", 0, 0, health.StatePaused, 100, 0, false},
// disabled → off, flush (hard stop)
{"disabled pool0 active=0", 0, 0, health.StateDisabled, 100, 0, true},
{"disabled pool1 active=1", 1, 1, health.StateDisabled, 100, 0, true},
// TestParseLBVIPSnapshot pins the parser for `show lb vips verbose` output.
// The text below is a synthetic sample that mirrors format_lb_vip_detailed
// in src/plugins/lb/lb.c: a header line per VIP optionally carrying the
// src_ip_sticky token, followed by a protocol:/port: sub-line for non all-
// port VIPs. If VPP changes this format the test will fail loudly — the
// scrape is a temporary workaround until lb_vip_v2_dump exists.
func TestParseLBVIPSnapshot(t *testing.T) {
text := ` ip4-gre4 [1] 192.0.2.1/32 src_ip_sticky
new_size:1024
protocol:6 port:80
counters:
ip4-gre4 [2] 192.0.2.2/32
new_size:1024
protocol:17 port:53
ip6-gre6 [3] 2001:db8::1/128 src_ip_sticky
new_size:1024
protocol:6 port:443
ip4-gre4 [4] 192.0.2.3/32
new_size:1024
`
got := parseLBVIPSnapshot(text)
want := map[vipKey]lbVIPSnapshot{
{prefix: "192.0.2.1/32", protocol: 6, port: 80}: {index: 1, sticky: true},
{prefix: "192.0.2.2/32", protocol: 17, port: 53}: {index: 2, sticky: false},
{prefix: "2001:db8::1/128", protocol: 6, port: 443}: {index: 3, sticky: true},
{prefix: "192.0.2.3/32", protocol: 255, port: 0}: {index: 4, sticky: false}, // all-port VIP
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
w, f := asFromBackend(tc.poolIdx, tc.activePool, tc.state, tc.cfgWeight)
if w != tc.wantWeight {
t.Errorf("weight: got %d, want %d", w, tc.wantWeight)
}
if f != tc.wantFlush {
t.Errorf("flush: got %v, want %v", f, tc.wantFlush)
}
})
if len(got) != len(want) {
t.Errorf("got %d entries, want %d: %#v", len(got), len(want), got)
}
}
// TestActivePoolIndex locks down the priority-failover selector: the first
// pool containing at least one up backend is the active pool. Default 0.
func TestActivePoolIndex(t *testing.T) {
mkFE := func(pools ...[]string) config.Frontend {
out := make([]config.Pool, len(pools))
for i, p := range pools {
out[i] = config.Pool{Name: "p", Backends: map[string]config.PoolBackend{}}
for _, name := range p {
out[i].Backends[name] = config.PoolBackend{Weight: 100}
}
for k, v := range want {
g, ok := got[k]
if !ok {
t.Errorf("missing key %+v", k)
continue
}
if g != v {
t.Errorf("key %+v: got %+v, want %+v", k, g, v)
}
return config.Frontend{Pools: out}
}
cases := []struct {
name string
fe config.Frontend
states map[string]health.State
want int
}{
{
name: "pool0 has up, pool1 standby",
fe: mkFE([]string{"a", "b"}, []string{"c", "d"}),
states: map[string]health.State{"a": health.StateUp, "b": health.StateDown, "c": health.StateUp, "d": health.StateUp},
want: 0,
},
{
name: "pool0 all down, pool1 has up → failover",
fe: mkFE([]string{"a", "b"}, []string{"c", "d"}),
states: map[string]health.State{"a": health.StateDown, "b": health.StateDown, "c": health.StateUp, "d": health.StateUp},
want: 1,
},
{
name: "pool0 all disabled, pool1 has up → failover",
fe: mkFE([]string{"a", "b"}, []string{"c"}),
states: map[string]health.State{"a": health.StateDisabled, "b": health.StateDisabled, "c": health.StateUp},
want: 1,
},
{
name: "pool0 all paused, pool1 has up → failover",
fe: mkFE([]string{"a"}, []string{"c"}),
states: map[string]health.State{"a": health.StatePaused, "c": health.StateUp},
want: 1,
},
{
name: "pool0 all unknown (startup), pool1 up → pool1",
fe: mkFE([]string{"a"}, []string{"c"}),
states: map[string]health.State{"a": health.StateUnknown, "c": health.StateUp},
want: 1,
},
{
name: "nothing up anywhere → default 0",
fe: mkFE([]string{"a"}, []string{"c"}),
states: map[string]health.State{"a": health.StateDown, "c": health.StateDown},
want: 0,
},
{
name: "1 up in pool0 is enough",
fe: mkFE([]string{"a", "b", "c"}, []string{"d"}),
states: map[string]health.State{"a": health.StateDown, "b": health.StateDown, "c": health.StateUp, "d": health.StateUp},
want: 0,
},
{
name: "three tiers, pool0 and pool1 both empty → pool2",
fe: mkFE([]string{"a"}, []string{"b"}, []string{"c"}),
states: map[string]health.State{"a": health.StateDown, "b": health.StateDown, "c": health.StateUp},
want: 2,
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
got := activePoolIndex(tc.fe, tc.states)
if got != tc.want {
t.Errorf("got pool %d, want pool %d", got, tc.want)
}
})
}
}
@@ -161,8 +65,10 @@ func (f *fakeStateSource) BackendState(name string) (health.State, bool) {
 }
 
 // TestDesiredFromFrontendFailover is the end-to-end integration test for
-// priority-failover: given a frontend with two pools, the desired weights
-// flip between pools based on which has any up backends.
+// priority-failover in the VPP sync path: given a frontend with two pools,
+// the desired weights flip between pools based on which has any up backends.
+// This exercises vpp.desiredFromFrontend which wraps the pure helpers in
+// the health package; those helpers are unit-tested separately in health.
 func TestDesiredFromFrontendFailover(t *testing.T) {
 	ip := func(s string) net.IP { return net.ParseIP(s).To4() }
 	cfg := &config.Config{