New feature: per-VIP / per-backend runtime counters
* New GetVPPLBCounters RPC serving an in-process snapshot refreshed
by a 5s scrape loop (internal/vpp/lbstats.go). Each cycle pulls
the LB plugin's four SimpleCounters (next, first, untracked,
no-server) plus the FIB /net/route/to CombinedCounter for every
VIP and every backend host prefix via a single DumpStats call.
* FIB stats-index discovery via ip_route_lookup (internal/vpp/
fibstats.go); per-worker reduction happens in the collector.
* Prometheus collector exports vip_packets_total (kind label),
vip_route_{packets,bytes}_total, and backend_route_{packets,
bytes}_total. Metrics source interface extended with VIPStats /
BackendRouteStats; vpp.Client publishes snapshots via
atomic.Pointer and clears them on disconnect (see the snapshot
sketch after this list).
* New 'show vpp lb counters' CLI command. The 'show vpp lbstate'
and 'sync vpp lbstate' commands are restructured under 'show
vpp lb {state,counters}' / 'sync vpp lb state' to make room
for the new verb.
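
The snapshot hand-off is the only piece shared between the scrape loop and
the gRPC/metrics readers. A minimal sketch of that publishing pattern,
assuming illustrative type and field names (LBCountersSnapshot, VIPCounters);
the real internal/vpp types differ:

package vppsketch

import (
	"sync/atomic"
	"time"
)

// VIPCounters is a stand-in for the per-VIP values pulled each cycle.
type VIPCounters struct {
	Next, First, Untracked, NoServer uint64 // LB plugin simple counters
	RoutePackets, RouteBytes         uint64 // FIB /net/route/to combined counter
}

// LBCountersSnapshot is one scrape cycle's result (shape illustrative only).
type LBCountersSnapshot struct {
	TakenAt time.Time
	VIPs    map[string]VIPCounters // keyed by VIP prefix
}

// Client mirrors only the publishing side of vpp.Client described above.
type Client struct {
	lbCounters atomic.Pointer[LBCountersSnapshot]
}

// publish is called by the scrape loop after each DumpStats cycle.
func (c *Client) publish(s *LBCountersSnapshot) { c.lbCounters.Store(s) }

// clearOnDisconnect drops the snapshot so stale counters are never served.
func (c *Client) clearOnDisconnect() { c.lbCounters.Store(nil) }

// LBCounters returns the latest snapshot, or nil when VPP is not connected.
func (c *Client) LBCounters() *LBCountersSnapshot { return c.lbCounters.Load() }
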
New feature: src-ip-sticky frontends
* New frontend YAML key 'src-ip-sticky' (bool). Plumbed through
config.Frontend, desiredVIP, and the lb_add_del_vip_v2 call.
* Reflected in gRPC FrontendInfo.src_ip_sticky and VPPLBVIP.
src_ip_sticky, and shown in 'show vpp lb state' output.
* Scraped back from VPP by parsing 'show lb vips verbose' through
cli_inband — lb_vip_details does not expose the flag. The same
scrape also recovers the LB pool index for each VIP, which the
stats-segment counters are keyed on. This is a documented
temporary workaround until VPP ships an lb_vip_v2_dump.
* src_ip_sticky cannot be mutated on a live VIP, so a flipped flag
triggers a tear-down-and-recreate in reconcileVIP (ASes deleted
with flush, VIP deleted, then re-added). Flip is logged.
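
That recreate path boils down to a small decision. A sketch under assumed
names (vipSpec, deleteVIPAndASes, addVIP are placeholders, not the real
reconcileVIP internals); only the rule that a flipped flag forces delete plus
re-add comes from the change itself:

package lbsketch

import "log"

// vipSpec stands in for the desired/actual VIP state compared in reconcile.
type vipSpec struct {
	Prefix      string
	SrcIPSticky bool
}

// Placeholders for the real AS removal (with flush) and VIP add calls.
func deleteVIPAndASes(v vipSpec) error { return nil }
func addVIP(v vipSpec) error           { return nil }

// reconcileSticky applies the rule above: the flag cannot be mutated on a
// live VIP, so a flipped src_ip_sticky forces tear-down and re-create.
func reconcileSticky(desired, actual vipSpec) error {
	if desired.SrcIPSticky == actual.SrcIPSticky {
		return nil // unchanged: handled by the normal reconcile path
	}
	log.Printf("src_ip_sticky flipped on %s (%v -> %v): recreating VIP",
		desired.Prefix, actual.SrcIPSticky, desired.SrcIPSticky)
	if err := deleteVIPAndASes(actual); err != nil { // ASes removed with flush
		return err
	}
	return addVIP(desired) // re-added with the new flag
}
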
New feature: frontend state aggregation and events
* New health.FrontendState (unknown/up/down) and FrontendTransition
types. A frontend is 'up' iff at least one backend has a nonzero
effective weight, 'unknown' iff no backend has real state yet,
and 'down' otherwise.
* Checker tracks per-frontend aggregate state, recomputing after
each backend transition and emitting a frontend-transition Event
on change. Reload drops entries for removed frontends.
* checker.Event gains an optional FrontendTransition pointer;
backend- vs. frontend-transition events are demultiplexed on
that field (see the demux sketch after this list).
* WatchEvents now sends an initial snapshot of frontend state on
connect (mirroring the existing backend snapshot), subscribes
once to the checker stream, and fans out to backend/frontend
handlers based on the client's filter flags. The proto
FrontendEvent message grows name + transition fields.
* New Checker.FrontendState accessor.
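
A minimal sketch of that demultiplexing, with stand-in shapes; only the
optional FrontendTransition pointer and the nil check mirror the real
checker.Event:

package checkersketch

// Illustrative stand-ins; the real types live in internal/health and
// internal/checker.
type FrontendState int

const (
	FrontendStateUnknown FrontendState = iota
	FrontendStateUp
	FrontendStateDown
)

type FrontendTransition struct {
	Frontend string
	From, To FrontendState
}

// Event sketches checker.Event: backend-transition fields are elided, the
// optional frontend-transition pointer is the part added by this change.
type Event struct {
	Backend            string // stand-in for the existing backend fields
	FrontendTransition *FrontendTransition
}

// dispatch mirrors the WatchEvents fan-out: frontend events go to one
// handler, everything else is treated as a backend transition.
func dispatch(ev Event, onBackend func(Event), onFrontend func(FrontendTransition)) {
	if ev.FrontendTransition != nil {
		onFrontend(*ev.FrontendTransition)
		return
	}
	onBackend(ev)
}
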
Refactor: pure health helpers
* Moved the priority-failover selector and the (pool idx, active
pool, state, cfg weight) → (vpp weight, flush) mapping out of
internal/vpp/lbsync.go into a new internal/health/weights.go so
the checker can reuse them for frontend-state computation
without importing internal/vpp.
* New functions: health.ActivePoolIndex, BackendEffectiveWeight,
EffectiveWeights, ComputeFrontendState. lbsync.go now calls
these directly; vpp.EffectiveWeights is a thin wrapper over
health.EffectiveWeights retained for the gRPC observability
path. Fully unit-tested in internal/health/weights_test.go.
maglevc polish
* --color default is now mode-aware: on in the interactive shell,
off in one-shot mode so piped output is script-safe. Explicit
--color=true/false still overrides.
* New stripHostMask helper drops /32 and /128 from VIP display;
non-host prefixes pass through unchanged (see the sketch after
this list).
* Counter table column order fixed (first before next) and
packets/bytes columns renamed to fib-packets/fib-bytes to
clarify they come from the FIB, not the LB plugin.
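
A sketch of the display helper, assuming it takes a netip.Prefix (the real
helper may operate on the string form instead):

package clisketch

import "net/netip"

// stripHostMask renders a VIP prefix without its mask when it is a host
// route (/32 for IPv4, /128 for IPv6); other prefixes are shown unchanged.
func stripHostMask(p netip.Prefix) string {
	if p.IsSingleIP() {
		return p.Addr().String()
	}
	return p.String()
}

For example, 192.0.2.10/32 renders as 192.0.2.10, while 2001:db8::/64 keeps
its mask because it is not a host route.
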
Docs
* config-guide: document src-ip-sticky, including the VIP
recreate-on-change caveat.
* user-guide, maglevc.1, maglevd.8: updated command tree, new
counters command, color defaults, and the src-ip-sticky field.
internal/health/weights_test.go (233 lines):

// Copyright (c) 2026, Pim van Pelt <pim@ipng.ch>

package health

import (
	"testing"

	"git.ipng.ch/ipng/vpp-maglev/internal/config"
)

// TestBackendEffectiveWeight locks down the state → (weight, flush) truth
// table. This is the single source of truth for how maglevd decides what
// to program into VPP for each backend state. If this test needs updating
// the behavior has deliberately changed.
func TestBackendEffectiveWeight(t *testing.T) {
	cases := []struct {
		name       string
		poolIdx    int
		activePool int
		state      State
		cfgWeight  int
		wantWeight uint8
		wantFlush  bool
	}{
		{"up active w100", 0, 0, StateUp, 100, 100, false},
		{"up active w50", 0, 0, StateUp, 50, 50, false},
		{"up active w0", 0, 0, StateUp, 0, 0, false},
		{"up active clamp-high", 0, 0, StateUp, 150, 100, false},
		{"up active clamp-low", 0, 0, StateUp, -5, 0, false},

		{"up standby pool0 active=1", 0, 1, StateUp, 100, 0, false},
		{"up standby pool1 active=0", 1, 0, StateUp, 100, 0, false},
		{"up standby pool2 active=0", 2, 0, StateUp, 100, 0, false},

		{"up failover pool1 active=1", 1, 1, StateUp, 100, 100, false},

		{"unknown pool0 active=0", 0, 0, StateUnknown, 100, 0, false},
		{"unknown pool1 active=0", 1, 0, StateUnknown, 100, 0, false},

		{"down pool0 active=0", 0, 0, StateDown, 100, 0, false},
		{"down pool1 active=1", 1, 1, StateDown, 100, 0, false},

		{"paused pool0 active=0", 0, 0, StatePaused, 100, 0, false},

		{"disabled pool0 active=0", 0, 0, StateDisabled, 100, 0, true},
		{"disabled pool1 active=1", 1, 1, StateDisabled, 100, 0, true},
	}

	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			w, f := BackendEffectiveWeight(tc.poolIdx, tc.activePool, tc.state, tc.cfgWeight)
			if w != tc.wantWeight {
				t.Errorf("weight: got %d, want %d", w, tc.wantWeight)
			}
			if f != tc.wantFlush {
				t.Errorf("flush: got %v, want %v", f, tc.wantFlush)
			}
		})
	}
}

// TestActivePoolIndex locks down the priority-failover selector: the first
// pool containing at least one up backend is the active pool. Default 0.
func TestActivePoolIndex(t *testing.T) {
	mkFE := func(pools ...[]string) config.Frontend {
		out := make([]config.Pool, len(pools))
		for i, p := range pools {
			out[i] = config.Pool{Name: "p", Backends: map[string]config.PoolBackend{}}
			for _, name := range p {
				out[i].Backends[name] = config.PoolBackend{Weight: 100}
			}
		}
		return config.Frontend{Pools: out}
	}

	cases := []struct {
		name   string
		fe     config.Frontend
		states map[string]State
		want   int
	}{
		{
			name:   "pool0 has up, pool1 standby",
			fe:     mkFE([]string{"a", "b"}, []string{"c", "d"}),
			states: map[string]State{"a": StateUp, "b": StateDown, "c": StateUp, "d": StateUp},
			want:   0,
		},
		{
			name:   "pool0 all down, pool1 has up → failover",
			fe:     mkFE([]string{"a", "b"}, []string{"c", "d"}),
			states: map[string]State{"a": StateDown, "b": StateDown, "c": StateUp, "d": StateUp},
			want:   1,
		},
		{
			name:   "pool0 all disabled, pool1 has up → failover",
			fe:     mkFE([]string{"a", "b"}, []string{"c"}),
			states: map[string]State{"a": StateDisabled, "b": StateDisabled, "c": StateUp},
			want:   1,
		},
		{
			name:   "pool0 all paused, pool1 has up → failover",
			fe:     mkFE([]string{"a"}, []string{"c"}),
			states: map[string]State{"a": StatePaused, "c": StateUp},
			want:   1,
		},
		{
			name:   "pool0 all unknown (startup), pool1 up → pool1",
			fe:     mkFE([]string{"a"}, []string{"c"}),
			states: map[string]State{"a": StateUnknown, "c": StateUp},
			want:   1,
		},
		{
			name:   "nothing up anywhere → default 0",
			fe:     mkFE([]string{"a"}, []string{"c"}),
			states: map[string]State{"a": StateDown, "c": StateDown},
			want:   0,
		},
		{
			name:   "1 up in pool0 is enough",
			fe:     mkFE([]string{"a", "b", "c"}, []string{"d"}),
			states: map[string]State{"a": StateDown, "b": StateDown, "c": StateUp, "d": StateUp},
			want:   0,
		},
		{
			name:   "three tiers, pool0 and pool1 both empty → pool2",
			fe:     mkFE([]string{"a"}, []string{"b"}, []string{"c"}),
			states: map[string]State{"a": StateDown, "b": StateDown, "c": StateUp},
			want:   2,
		},
	}

	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			got := ActivePoolIndex(tc.fe, tc.states)
			if got != tc.want {
				t.Errorf("got pool %d, want pool %d", got, tc.want)
			}
		})
	}
}

// TestComputeFrontendState locks down the reduction rule: frontends are
// up iff any backend has effective weight > 0, unknown iff all backends
// are still in StateUnknown (or there are no backends), and down otherwise.
func TestComputeFrontendState(t *testing.T) {
	mkFE := func(pools ...[]string) config.Frontend {
		out := make([]config.Pool, len(pools))
		for i, p := range pools {
			out[i] = config.Pool{Name: "p", Backends: map[string]config.PoolBackend{}}
			for _, name := range p {
				out[i].Backends[name] = config.PoolBackend{Weight: 100}
			}
		}
		return config.Frontend{Pools: out}
	}

	cases := []struct {
		name   string
		fe     config.Frontend
		states map[string]State
		want   FrontendState
	}{
		{
			name: "no backends → unknown",
			fe:   config.Frontend{Pools: []config.Pool{{Name: "primary", Backends: map[string]config.PoolBackend{}}}},
			want: FrontendStateUnknown,
		},
		{
			name:   "all unknown (startup) → unknown",
			fe:     mkFE([]string{"a", "b"}),
			states: map[string]State{"a": StateUnknown, "b": StateUnknown},
			want:   FrontendStateUnknown,
		},
		{
			name:   "one up in primary → up",
			fe:     mkFE([]string{"a", "b"}),
			states: map[string]State{"a": StateUp, "b": StateDown},
			want:   FrontendStateUp,
		},
		{
			name:   "all down → down",
			fe:     mkFE([]string{"a", "b"}),
			states: map[string]State{"a": StateDown, "b": StateDown},
			want:   FrontendStateDown,
		},
		{
			name:   "all disabled → down",
			fe:     mkFE([]string{"a", "b"}),
			states: map[string]State{"a": StateDisabled, "b": StateDisabled},
			want:   FrontendStateDown,
		},
		{
			name:   "all paused → down",
			fe:     mkFE([]string{"a"}),
			states: map[string]State{"a": StatePaused},
			want:   FrontendStateDown,
		},
		{
			name:   "primary down, secondary up → up (failover)",
			fe:     mkFE([]string{"a"}, []string{"b"}),
			states: map[string]State{"a": StateDown, "b": StateUp},
			want:   FrontendStateUp,
		},
		{
			name:   "primary up, secondary down → up (secondary standby ignored)",
			fe:     mkFE([]string{"a"}, []string{"b"}),
			states: map[string]State{"a": StateUp, "b": StateDown},
			want:   FrontendStateUp,
		},
		{
			name:   "primary unknown, secondary unknown → unknown",
			fe:     mkFE([]string{"a"}, []string{"b"}),
			states: map[string]State{"a": StateUnknown, "b": StateUnknown},
			want:   FrontendStateUnknown,
		},
		{
			name:   "primary down, secondary unknown → down",
			fe:     mkFE([]string{"a"}, []string{"b"}),
			states: map[string]State{"a": StateDown, "b": StateUnknown},
			want:   FrontendStateDown,
		},
	}

	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			got := ComputeFrontendState(tc.fe, tc.states)
			if got != tc.want {
				t.Errorf("got %s, want %s", got, tc.want)
			}
		})
	}
}