New feature: per-VIP / per-backend runtime counters
* New GetVPPLBCounters RPC serving an in-process snapshot refreshed
by a 5s scrape loop (internal/vpp/lbstats.go). Each cycle pulls
the LB plugin's four SimpleCounters (next, first, untracked,
no-server) plus the FIB /net/route/to CombinedCounter for every
VIP and every backend host prefix via a single DumpStats call.
* FIB stats-index discovery via ip_route_lookup (internal/vpp/
fibstats.go); per-worker reduction happens in the collector.
* Prometheus collector exports vip_packets_total (kind label),
vip_route_{packets,bytes}_total, and backend_route_{packets,
bytes}_total. Metrics source interface extended with VIPStats /
BackendRouteStats; vpp.Client publishes snapshots via
atomic.Pointer and clears them on disconnect (sketched below).
* New 'show vpp lb counters' CLI command. The 'show vpp lbstate'
and 'sync vpp lbstate' commands are restructured under 'show
vpp lb {state,counters}' / 'sync vpp lb state' to make room
for the new verb.
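A minimal sketch of the snapshot hand-off between the scrape loop and
its readers (the gRPC handler and the Prometheus collector). The type
and method names here are illustrative stand-ins, not the ones in
internal/vpp/lbstats.go:

  package vpp

  import (
      "sync/atomic"
      "time"
  )

  // lbSnapshot stands in for the real per-VIP / per-backend counter
  // snapshot that the 5s scrape loop assembles from DumpStats.
  type lbSnapshot struct {
      TakenAt time.Time
      // per-VIP SimpleCounters and per-route CombinedCounters go here
  }

  type client struct {
      lbCounters atomic.Pointer[lbSnapshot]
  }

  // The scrape loop stores a fresh, immutable snapshot every cycle...
  func (c *client) publish(s *lbSnapshot) { c.lbCounters.Store(s) }

  // ...and the disconnect path clears it so stale data is never served.
  func (c *client) clear() { c.lbCounters.Store(nil) }

  // Readers load the latest snapshot without locks; nil means VPP is
  // not connected and callers should report "no data".
  func (c *client) snapshot() *lbSnapshot { return c.lbCounters.Load() }
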
New feature: src-ip-sticky frontends
* New frontend YAML key 'src-ip-sticky' (bool). Plumbed through
config.Frontend, desiredVIP, and the lb_add_del_vip_v2 call; an
example stanza follows this list.
* Reflected in gRPC FrontendInfo.src_ip_sticky and VPPLBVIP.
src_ip_sticky, and shown in 'show vpp lb state' output.
* Scraped back from VPP by parsing 'show lb vips verbose' through
cli_inband — lb_vip_details does not expose the flag. The same
scrape also recovers the LB pool index for each VIP, which the
stats-segment counters are keyed on. This is a documented
temporary workaround until VPP ships an lb_vip_v2_dump.
* src_ip_sticky cannot be mutated on a live VIP, so a flipped flag
triggers a tear-down-and-recreate in reconcileVIP (ASes deleted
with flush, VIP deleted, then re-added). Flip is logged.
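An illustrative frontend stanza showing the new key; the surrounding
keys are sketched from the Frontend/Pool fields used elsewhere in this
change, not copied from the shipped config-guide:

  frontends:
    - address: 192.0.2.1
      protocol: tcp
      port: 443
      src-ip-sticky: true   # new; sessions stick to their source IP
      pools:
        - name: primary
          backends:
            web1: { weight: 100 }
            web2: { weight: 100 }

Note that flipping src-ip-sticky on a live VIP is applied by deleting
and re-adding the VIP, so expect a brief interruption on that frontend.
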
New feature: frontend state aggregation and events
* New health.FrontendState (unknown/up/down) and FrontendTransition
types. A frontend is 'up' iff at least one backend has a nonzero
effective weight, 'unknown' iff no backend has real state yet,
and 'down' otherwise (see the sketch after this list).
* Checker tracks per-frontend aggregate state, recomputing after
each backend transition and emitting a frontend-transition Event
on change. Reload drops entries for removed frontends.
* checker.Event gains an optional FrontendTransition pointer;
backend- vs. frontend-transition events are demultiplexed on
that field.
* WatchEvents now sends an initial snapshot of frontend state on
connect (mirroring the existing backend snapshot), subscribes
once to the checker stream, and fans out to backend/frontend
handlers based on the client's filter flags. The proto
FrontendEvent message grows name + transition fields.
* New Checker.FrontendState accessor.
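A minimal sketch of the aggregation rule, assuming an illustrative
signature; the real ComputeFrontendState in internal/health/weights.go
works from the checker's own types:

  package health

  type FrontendState int

  const (
      FrontendUnknown FrontendState = iota
      FrontendUp
      FrontendDown
  )

  // backendResult is a stand-in for what the checker knows per backend.
  type backendResult struct {
      EffectiveWeight uint8 // 0 while down, disabled, or in a standby pool
      HasRealState    bool  // false until a first check result arrives
  }

  // computeFrontendState folds per-backend results into one state:
  // up if any backend carries a nonzero effective weight, unknown if
  // no backend has a real result yet, down otherwise.
  func computeFrontendState(backends []backendResult) FrontendState {
      anyKnown := false
      for _, b := range backends {
          if b.EffectiveWeight > 0 {
              return FrontendUp
          }
          if b.HasRealState {
              anyKnown = true
          }
      }
      if !anyKnown {
          return FrontendUnknown
      }
      return FrontendDown
  }
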
Refactor: pure health helpers
* Moved the priority-failover selector and the (pool idx, active
pool, state, cfg weight) → (vpp weight, flush) mapping out of
internal/vpp/lbsync.go into a new internal/health/weights.go so
the checker can reuse them for frontend-state computation
without importing internal/vpp (sketched after this list).
* New functions: health.ActivePoolIndex, BackendEffectiveWeight,
EffectiveWeights, ComputeFrontendState. lbsync.go now calls
these directly; vpp.EffectiveWeights is a thin wrapper over
health.EffectiveWeights retained for the gRPC observability
path. Fully unit-tested in internal/health/weights_test.go.
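A sketch of the two core helpers as shared by lbsync and the checker,
with illustrative signatures (the real ones also return the flush
decision mentioned above); State and StateUp are the package's
existing backend-state types:

  package health

  // poolBackend is an illustrative view of one pool member.
  type poolBackend struct {
      Name   string
      Weight uint8 // configured weight from the pool definition
  }

  // activePoolIndex returns the first pool (in priority order) with at
  // least one backend in StateUp, or -1 if no pool qualifies.
  func activePoolIndex(pools [][]poolBackend, state func(name string) State) int {
      for i, pool := range pools {
          for _, b := range pool {
              if state(b.Name) == StateUp {
                  return i
              }
          }
      }
      return -1
  }

  // backendEffectiveWeight maps one backend to the weight VPP should
  // program: its configured weight when it sits in the active pool and
  // is up, zero (standby or drained) in every other case.
  func backendEffectiveWeight(poolIdx, activeIdx int, st State, cfgWeight uint8) uint8 {
      if poolIdx == activeIdx && st == StateUp {
          return cfgWeight
      }
      return 0
  }
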
maglevc polish
* --color default is now mode-aware: on in the interactive shell,
off in one-shot mode so piped output is script-safe. Explicit
--color=true/false still overrides.
* New stripHostMask helper drops /32 and /128 from VIP display;
non-host prefixes pass through unchanged (sketch below).
* Counter table column order fixed (first before next) and
packets/bytes columns renamed to fib-packets/fib-bytes to
clarify they come from the FIB, not the LB plugin.
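The stripHostMask behaviour in a self-contained sketch (the real
helper lives in the maglevc display code; only its described behaviour
is assumed here):

  package main

  import (
      "fmt"
      "net/netip"
  )

  // stripHostMask renders host prefixes (/32, /128) as bare addresses
  // and leaves everything else, including unparsable input, untouched.
  func stripHostMask(s string) string {
      p, err := netip.ParsePrefix(s)
      if err != nil {
          return s
      }
      if p.IsSingleIP() {
          return p.Addr().String()
      }
      return s
  }

  func main() {
      fmt.Println(stripHostMask("192.0.2.1/32"))    // 192.0.2.1
      fmt.Println(stripHostMask("2001:db8::1/128")) // 2001:db8::1
      fmt.Println(stripHostMask("192.0.2.0/24"))    // 192.0.2.0/24
  }
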
Docs
* config-guide: document src-ip-sticky, including the VIP
recreate-on-change caveat.
* user-guide, maglevc.1, maglevd.8: updated command tree, new
counters command, color defaults, and the src-ip-sticky field.
// Copyright (c) 2026, Pim van Pelt <pim@ipng.ch>

package vpp

import (
	"net"
	"testing"

	"git.ipng.ch/ipng/vpp-maglev/internal/config"
	"git.ipng.ch/ipng/vpp-maglev/internal/health"
)

// TestParseLBVIPSnapshot pins the parser for `show lb vips verbose` output.
// The text below is a synthetic sample that mirrors format_lb_vip_detailed
// in src/plugins/lb/lb.c: a header line per VIP optionally carrying the
// src_ip_sticky token, followed by a protocol:/port: sub-line for non all-
// port VIPs. If VPP changes this format the test will fail loudly — the
// scrape is a temporary workaround until lb_vip_v2_dump exists.
func TestParseLBVIPSnapshot(t *testing.T) {
	text := ` ip4-gre4 [1] 192.0.2.1/32 src_ip_sticky
   new_size:1024
   protocol:6 port:80
   counters:
 ip4-gre4 [2] 192.0.2.2/32
   new_size:1024
   protocol:17 port:53
 ip6-gre6 [3] 2001:db8::1/128 src_ip_sticky
   new_size:1024
   protocol:6 port:443
 ip4-gre4 [4] 192.0.2.3/32
   new_size:1024
`

	got := parseLBVIPSnapshot(text)
	want := map[vipKey]lbVIPSnapshot{
		{prefix: "192.0.2.1/32", protocol: 6, port: 80}:     {index: 1, sticky: true},
		{prefix: "192.0.2.2/32", protocol: 17, port: 53}:    {index: 2, sticky: false},
		{prefix: "2001:db8::1/128", protocol: 6, port: 443}: {index: 3, sticky: true},
		{prefix: "192.0.2.3/32", protocol: 255, port: 0}:    {index: 4, sticky: false}, // all-port VIP
	}
	if len(got) != len(want) {
		t.Errorf("got %d entries, want %d: %#v", len(got), len(want), got)
	}
	for k, v := range want {
		g, ok := got[k]
		if !ok {
			t.Errorf("missing key %+v", k)
			continue
		}
		if g != v {
			t.Errorf("key %+v: got %+v, want %+v", k, g, v)
		}
	}
}

// fakeStateSource implements StateSource from a static map.
type fakeStateSource struct {
	cfg    *config.Config
	states map[string]health.State
}

func (f *fakeStateSource) Config() *config.Config { return f.cfg }

func (f *fakeStateSource) BackendState(name string) (health.State, bool) {
	s, ok := f.states[name]
	return s, ok
}

// TestDesiredFromFrontendFailover is the end-to-end integration test for
// priority-failover in the VPP sync path: given a frontend with two pools,
// the desired weights flip between pools based on which has any up backends.
// This exercises vpp.desiredFromFrontend which wraps the pure helpers in
// the health package; those helpers are unit-tested separately in health.
func TestDesiredFromFrontendFailover(t *testing.T) {
	ip := func(s string) net.IP { return net.ParseIP(s).To4() }
	cfg := &config.Config{
		Backends: map[string]config.Backend{
			"p1": {Address: ip("10.0.0.1"), Enabled: true},
			"p2": {Address: ip("10.0.0.2"), Enabled: true},
			"s1": {Address: ip("10.0.0.11"), Enabled: true},
			"s2": {Address: ip("10.0.0.12"), Enabled: true},
		},
	}
	fe := config.Frontend{
		Address:  ip("192.0.2.1"),
		Protocol: "tcp",
		Port:     80,
		Pools: []config.Pool{
			{Name: "primary", Backends: map[string]config.PoolBackend{
				"p1": {Weight: 100},
				"p2": {Weight: 100},
			}},
			{Name: "fallback", Backends: map[string]config.PoolBackend{
				"s1": {Weight: 100},
				"s2": {Weight: 100},
			}},
		},
	}

	tests := []struct {
		name   string
		states map[string]health.State
		want   map[string]uint8 // backend IP → expected weight
	}{
		{
			name: "primary all up → primary serves, secondary standby",
			states: map[string]health.State{
				"p1": health.StateUp, "p2": health.StateUp,
				"s1": health.StateUp, "s2": health.StateUp,
			},
			want: map[string]uint8{
				"10.0.0.1": 100, "10.0.0.2": 100,
				"10.0.0.11": 0, "10.0.0.12": 0,
			},
		},
		{
			name: "primary 1 up → primary still serves",
			states: map[string]health.State{
				"p1": health.StateDown, "p2": health.StateUp,
				"s1": health.StateUp, "s2": health.StateUp,
			},
			want: map[string]uint8{
				"10.0.0.1": 0, "10.0.0.2": 100,
				"10.0.0.11": 0, "10.0.0.12": 0,
			},
		},
		{
			name: "primary all down → failover to secondary",
			states: map[string]health.State{
				"p1": health.StateDown, "p2": health.StateDown,
				"s1": health.StateUp, "s2": health.StateUp,
			},
			want: map[string]uint8{
				"10.0.0.1": 0, "10.0.0.2": 0,
				"10.0.0.11": 100, "10.0.0.12": 100,
			},
		},
		{
			name: "primary all disabled → failover",
			states: map[string]health.State{
				"p1": health.StateDisabled, "p2": health.StateDisabled,
				"s1": health.StateUp, "s2": health.StateUp,
			},
			want: map[string]uint8{
				"10.0.0.1": 0, "10.0.0.2": 0,
				"10.0.0.11": 100, "10.0.0.12": 100,
			},
		},
		{
			name: "everything down → all zero, no serving",
			states: map[string]health.State{
				"p1": health.StateDown, "p2": health.StateDown,
				"s1": health.StateDown, "s2": health.StateDown,
			},
			want: map[string]uint8{
				"10.0.0.1": 0, "10.0.0.2": 0,
				"10.0.0.11": 0, "10.0.0.12": 0,
			},
		},
	}

	for _, tc := range tests {
		t.Run(tc.name, func(t *testing.T) {
			src := &fakeStateSource{cfg: cfg, states: tc.states}
			d := desiredFromFrontend(cfg, fe, src)
			for addr, wantW := range tc.want {
				got, ok := d.ASes[addr]
				if !ok {
					t.Errorf("%s: missing from desired set", addr)
					continue
				}
				if got.Weight != wantW {
					t.Errorf("%s: weight got %d, want %d", addr, got.Weight, wantW)
				}
			}
			if len(d.ASes) != len(tc.want) {
				t.Errorf("got %d ASes, want %d", len(d.ASes), len(tc.want))
			}
		})
	}
}