VPP LB counters, src-ip-sticky, and frontend state aggregation
New feature: per-VIP / per-backend runtime counters
* New GetVPPLBCounters RPC serving an in-process snapshot refreshed
by a 5s scrape loop (internal/vpp/lbstats.go). Each cycle pulls
the LB plugin's four SimpleCounters (next, first, untracked,
no-server) plus the FIB /net/route/to CombinedCounter for every
VIP and every backend host prefix via a single DumpStats call.
* FIB stats-index discovery via ip_route_lookup (internal/vpp/
fibstats.go); per-worker reduction happens in the collector.
* Prometheus collector exports vip_packets_total (kind label),
vip_route_{packets,bytes}_total, and backend_route_{packets,
bytes}_total. Metrics source interface extended with VIPStats /
BackendRouteStats; vpp.Client publishes snapshots via
atomic.Pointer and clears them on disconnect (publish/read
pattern sketched after this list).
* New 'show vpp lb counters' CLI command. The 'show vpp lbstate'
and 'sync vpp lbstate' commands are restructured under 'show
vpp lb {state,counters}' / 'sync vpp lb state' to make room
for the new verb.
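
A minimal sketch of the snapshot publish/read pattern described
above; lbSnapshot, scrapeOnce, and the field layout are hypothetical
stand-ins, not the actual types in internal/vpp/lbstats.go:

    // Sketch only; error handling and the counter rows are elided.
    package vpp

    import (
        "context"
        "sync/atomic"
        "time"
    )

    type lbSnapshot struct {
        TakenAt time.Time
        // per-VIP and per-backend counter rows would live here
    }

    type Client struct {
        lbStats atomic.Pointer[lbSnapshot] // readers never block the scraper
    }

    // scrapeLoop refreshes the cached snapshot until ctx is cancelled.
    func (c *Client) scrapeLoop(ctx context.Context, every time.Duration) {
        t := time.NewTicker(every)
        defer t.Stop()
        for {
            select {
            case <-ctx.Done():
                c.lbStats.Store(nil) // clear on disconnect: no stale data
                return
            case <-t.C:
                if snap, err := c.scrapeOnce(ctx); err == nil {
                    c.lbStats.Store(snap)
                }
            }
        }
    }

    // scrapeOnce stands in for the single DumpStats pass per cycle.
    func (c *Client) scrapeOnce(ctx context.Context) (*lbSnapshot, error) {
        return &lbSnapshot{TakenAt: time.Now()}, nil
    }

    // LBCounters is what the RPC handler reads; it never touches VPP.
    func (c *Client) LBCounters() *lbSnapshot {
        return c.lbStats.Load()
    }
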
New feature: src-ip-sticky frontends
* New frontend YAML key 'src-ip-sticky' (bool). Plumbed through
config.Frontend, desiredVIP, and the lb_add_del_vip_v2 call.
* Reflected in gRPC FrontendInfo.src_ip_sticky and VPPLBVIP.
src_ip_sticky, and shown in 'show vpp lb state' output.
* Scraped back from VPP by parsing 'show lb vips verbose' through
cli_inband — lb_vip_details does not expose the flag. The same
scrape also recovers the LB pool index for each VIP, which the
stats-segment counters are keyed on. This is a documented
temporary workaround until VPP ships an lb_vip_v2_dump.
* src_ip_sticky cannot be mutated on a live VIP, so a flipped flag
triggers a tear-down-and-recreate in reconcileVIP (ASes deleted
with flush, VIP deleted, then re-added). The flip is logged.
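
The flip handling reduces to the following shape (a sketch under
assumed names; vipState and vipOps stand in for the desiredVIP
plumbing around reconcileVIP in internal/vpp/lbsync.go):

    // Sketch of the tear-down-and-recreate on a src-ip-sticky flip.
    package vpp

    import (
        "context"
        "log"
    )

    // vipState is a hypothetical reduced view of a VIP.
    type vipState struct {
        Prefix      string
        SrcIPSticky bool
    }

    // vipOps abstracts the three VPP calls the recreate needs.
    type vipOps interface {
        DelASes(ctx context.Context, v vipState, flush bool) error
        DelVIP(ctx context.Context, v vipState) error
        AddVIP(ctx context.Context, v vipState) error // lb_add_del_vip_v2
    }

    func reconcileSticky(ctx context.Context, ops vipOps, cur, want vipState) error {
        if cur.SrcIPSticky == want.SrcIPSticky {
            return nil // no flip: nothing to recreate
        }
        log.Printf("src-ip-sticky flipped on %s (%t -> %t): recreating VIP",
            want.Prefix, cur.SrcIPSticky, want.SrcIPSticky)
        if err := ops.DelASes(ctx, cur, true); err != nil { // flush sessions
            return err
        }
        if err := ops.DelVIP(ctx, cur); err != nil {
            return err
        }
        return ops.AddVIP(ctx, want)
    }
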
New feature: frontend state aggregation and events
* New health.FrontendState (unknown/up/down) and FrontendTransition
types. A frontend is 'up' iff at least one backend has a nonzero
effective weight, 'unknown' iff no backend has real state yet,
and 'down' otherwise.
* Checker tracks per-frontend aggregate state, recomputing after
each backend transition and emitting a frontend-transition Event
on change. Reload drops entries for removed frontends.
* checker.Event gains an optional FrontendTransition pointer;
backend- vs. frontend-transition events are demultiplexed on
that field (fan-out sketched after this list).
* WatchEvents now sends an initial snapshot of frontend state on
connect (mirroring the existing backend snapshot), subscribes
once to the checker stream, and fans out to backend/frontend
handlers based on the client's filter flags. The proto
FrontendEvent message grows name + transition fields.
* New Checker.FrontendState accessor.
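
The fan-out in WatchEvents is essentially a nil check on the new
pointer field; a sketch with assumed field names, not the checker's
actual API:

    package health

    type FrontendState int // unknown, up, or down

    type FrontendTransition struct {
        Frontend string
        From, To FrontendState
    }

    // Event is the stream envelope; FrontendTransition is nil for
    // plain backend transitions.
    type Event struct {
        Backend            string
        FrontendTransition *FrontendTransition
    }

    // dispatch routes one event according to the client's filter flags.
    func dispatch(ev Event, wantBackends, wantFrontends bool,
        onBackend func(Event), onFrontend func(FrontendTransition)) {
        if ft := ev.FrontendTransition; ft != nil {
            if wantFrontends {
                onFrontend(*ft)
            }
            return
        }
        if wantBackends {
            onBackend(ev)
        }
    }
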
Refactor: pure health helpers
* Moved the priority-failover selector and the (pool idx, active
pool, state, cfg weight) → (vpp weight, flush) mapping out of
internal/vpp/lbsync.go into a new internal/health/weights.go so
the checker can reuse them for frontend-state computation
without importing internal/vpp.
* New functions: health.ActivePoolIndex, BackendEffectiveWeight,
EffectiveWeights, ComputeFrontendState. lbsync.go now calls
these directly; vpp.EffectiveWeights is a thin wrapper over
health.EffectiveWeights retained for the gRPC observability
path. Fully unit-tested in internal/health/weights_test.go.
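
The aggregation rule itself is small; a sketch of the
up/unknown/down decision over a hypothetical reduced backend view
(the real helpers work from the pool/weight mapping above):

    package health

    type FrontendState int

    const (
        FrontendUnknown FrontendState = iota
        FrontendUp
        FrontendDown
    )

    // backendView is hypothetical: the effective VPP weight plus
    // whether the checker has produced a real result yet.
    type backendView struct {
        EffectiveWeight uint32
        HasRealState    bool
    }

    func computeFrontendState(backends []backendView) FrontendState {
        anyReal := false
        for _, b := range backends {
            if b.EffectiveWeight > 0 {
                return FrontendUp // one live backend is enough
            }
            if b.HasRealState {
                anyReal = true
            }
        }
        if !anyReal {
            return FrontendUnknown // no real health data yet
        }
        return FrontendDown
    }
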
maglevc polish
* --color default is now mode-aware: on in the interactive shell,
off in one-shot mode so piped output is script-safe. Explicit
--color=true/false still overrides.
* New stripHostMask helper drops /32 and /128 from VIP display;
non-host prefixes pass through unchanged (sketched after this
list).
* Counter table column order fixed (first before next) and
packets/bytes columns renamed to fib-packets/fib-bytes to
clarify they come from the FIB, not the LB plugin.
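
The helper's contract in a runnable sketch (function name from the
entry above; the split on ":" to tell v4 from v6 is an assumption):

    package main

    import (
        "fmt"
        "strings"
    )

    func stripHostMask(prefix string) string {
        if s, ok := strings.CutSuffix(prefix, "/32"); ok && !strings.Contains(s, ":") {
            return s // IPv4 host route
        }
        if s, ok := strings.CutSuffix(prefix, "/128"); ok && strings.Contains(s, ":") {
            return s // IPv6 host route
        }
        return prefix // non-host prefixes pass through unchanged
    }

    func main() {
        fmt.Println(stripHostMask("192.0.2.1/32"))    // 192.0.2.1
        fmt.Println(stripHostMask("2001:db8::1/128")) // 2001:db8::1
        fmt.Println(stripHostMask("198.51.100.0/24")) // unchanged
    }
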
Docs
* config-guide: document src-ip-sticky, including the VIP
recreate-on-change caveat.
* user-guide, maglevc.1, maglevd.8: updated command tree, new
counters command, color defaults, and the src-ip-sticky field.

@@ -23,6 +23,7 @@ service Maglev {
   rpc GetVPPInfo(GetVPPInfoRequest) returns (VPPInfo);
   rpc GetVPPLBState(GetVPPLBStateRequest) returns (VPPLBState);
   rpc SyncVPPLBState(SyncVPPLBStateRequest) returns (SyncVPPLBStateResponse);
+  rpc GetVPPLBCounters(GetVPPLBCountersRequest) returns (VPPLBCounters);
 }
 
 // ---- requests ---------------------------------------------------------------
@@ -108,6 +109,7 @@ message VPPLBVIP {
   string encap = 4; // gre4|gre6|l3dsr|nat4|nat6
   uint32 flow_table_length = 5;
   repeated VPPLBAS application_servers = 6;
+  bool src_ip_sticky = 7; // source-IP based sticky session (scraped via cli_inband)
 }
 
 message VPPLBState {
@@ -126,6 +128,44 @@ message SyncVPPLBStateRequest {
 
 message SyncVPPLBStateResponse {}
 
+// ---- VPP load-balancer runtime counters ------------------------------------
+
+// GetVPPLBCountersRequest asks maglevd for the most recent per-VIP and
+// per-backend counter snapshot. The data is served from an in-process
+// cache that is refreshed every ~5 seconds server-side; the call itself
+// is cheap and does not hit VPP.
+message GetVPPLBCountersRequest {}
+
+// VPPLBVIPCounters is the point-in-time counter row for a single VIP.
+// The four lb_* fields are the LB plugin's SimpleCounters (packets only);
+// packets / bytes come from the VPP FIB's combined counter at the VIP's
+// host prefix (/net/route/to).
+message VPPLBVIPCounters {
+  string prefix = 1;           // CIDR, e.g. 192.0.2.1/32
+  string protocol = 2;         // tcp | udp | any
+  uint32 port = 3;
+  uint64 next_packet = 4;      // "/packet from existing sessions"
+  uint64 first_packet = 5;     // "/first session packet"
+  uint64 untracked_packet = 6; // "/untracked packet"
+  uint64 no_server = 7;        // "/no server configured"
+  uint64 packets = 8;          // /net/route/to (FIB, summed across workers)
+  uint64 bytes = 9;            // /net/route/to (FIB, summed across workers)
+}
+
+// VPPLBBackendCounters is the FIB combined counter for a single backend's
+// host prefix, summed across worker threads.
+message VPPLBBackendCounters {
+  string backend = 1; // backend name from config
+  string address = 2; // backend IP address
+  uint64 packets = 3;
+  uint64 bytes = 4;
+}
+
+message VPPLBCounters {
+  repeated VPPLBVIPCounters vips = 1;
+  repeated VPPLBBackendCounters backends = 2;
+}
+
 message SetWeightRequest {
   string frontend = 1;
   string pool = 2;
@@ -166,6 +206,7 @@ message FrontendInfo {
   uint32 port = 4;
   repeated PoolInfo pools = 5;
   string description = 6;
+  bool src_ip_sticky = 7; // VPP LB uses src-IP-based stickiness for this VIP
 }
 
 message ListBackendsResponse {
@@ -245,8 +286,12 @@ message BackendEvent {
   TransitionRecord transition = 2;
 }
 
-// FrontendEvent is reserved for future frontend-level events.
-message FrontendEvent {}
+// FrontendEvent is emitted when a frontend's aggregate state changes.
+// Frontends have three states: unknown, up, down. See docs/healthchecks.md.
+message FrontendEvent {
+  string frontend_name = 1;
+  TransitionRecord transition = 2;
+}
 
 // Event is the envelope returned by WatchEvents.
 message Event {