VPP LB counters, src-ip-sticky, and frontend state aggregation
New feature: per-VIP / per-backend runtime counters
* New GetVPPLBCounters RPC serving an in-process snapshot refreshed
by a 5s scrape loop (internal/vpp/lbstats.go). Each cycle pulls
the LB plugin's four SimpleCounters (next, first, untracked,
no-server) plus the FIB /net/route/to CombinedCounter for every
VIP and every backend host prefix via a single DumpStats call.
* FIB stats-index discovery via ip_route_lookup (internal/vpp/
fibstats.go); per-worker reduction happens in the collector.
* Prometheus collector exports vip_packets_total (kind label),
vip_route_{packets,bytes}_total, and backend_route_{packets,
bytes}_total. Metrics source interface extended with VIPStats /
BackendRouteStats; vpp.Client publishes snapshots via
atomic.Pointer and clears them on disconnect.
* New 'show vpp lb counters' CLI command. The 'show vpp lbstate'
and 'sync vpp lbstate' commands are restructured under 'show
vpp lb {state,counters}' / 'sync vpp lb state' to make room
for the new verb.
New feature: src-ip-sticky frontends
* New frontend YAML key 'src-ip-sticky' (bool). Plumbed through
config.Frontend, desiredVIP, and the lb_add_del_vip_v2 call.
* Reflected in gRPC FrontendInfo.src_ip_sticky and VPPLBVIP.
src_ip_sticky, and shown in 'show vpp lb state' output.
* Scraped back from VPP by parsing 'show lb vips verbose' through
cli_inband — lb_vip_details does not expose the flag. The same
scrape also recovers the LB pool index for each VIP, which the
stats-segment counters are keyed on. This is a documented
temporary workaround until VPP ships an lb_vip_v2_dump.
* src_ip_sticky cannot be mutated on a live VIP, so a flipped flag
triggers a tear-down-and-recreate in reconcileVIP (ASes deleted
with flush, VIP deleted, then re-added). Flip is logged.
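A minimal frontend stanza might look as follows. Only the `src-ip-sticky` key is taken from this commit; the surrounding key names (`frontends`, `vip`, `port`) are illustrative guesses at the config shape, not confirmed by the text.

```yaml
frontends:
  - vip: 192.0.2.10/32
    port: 443
    src-ip-sticky: true   # flipping this on a live VIP tears it down and recreates it
```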
New feature: frontend state aggregation and events
* New health.FrontendState (unknown/up/down) and FrontendTransition
types. A frontend is 'up' iff at least one backend has a nonzero
effective weight, 'unknown' iff no backend has real state yet,
and 'down' otherwise.
* Checker tracks per-frontend aggregate state, recomputing after
each backend transition and emitting a frontend-transition Event
on change. Reload drops entries for removed frontends.
* checker.Event gains an optional FrontendTransition pointer;
backend- vs. frontend-transition events are demultiplexed on
that field.
* WatchEvents now sends an initial snapshot of frontend state on
connect (mirroring the existing backend snapshot), subscribes
once to the checker stream, and fans out to backend/frontend
handlers based on the client's filter flags. The proto
FrontendEvent message grows name + transition fields.
* New Checker.FrontendState accessor.
Refactor: pure health helpers
* Moved the priority-failover selector and the (pool idx, active
pool, state, cfg weight) → (vpp weight, flush) mapping out of
internal/vpp/lbsync.go into a new internal/health/weights.go so
the checker can reuse them for frontend-state computation
without importing internal/vpp.
* New functions: health.ActivePoolIndex, BackendEffectiveWeight,
EffectiveWeights, ComputeFrontendState. lbsync.go now calls
these directly; vpp.EffectiveWeights is a thin wrapper over
health.EffectiveWeights retained for the gRPC observability
path. Fully unit-tested in internal/health/weights_test.go.
maglevc polish
* --color default is now mode-aware: on in the interactive shell,
off in one-shot mode so piped output is script-safe. Explicit
--color=true/false still overrides.
* New stripHostMask helper drops /32 and /128 from VIP display;
non-host prefixes pass through unchanged.
* Counter table column order fixed (first before next) and
packets/bytes columns renamed to fib-packets/fib-bytes to
clarify they come from the FIB, not the LB plugin.
Docs
* config-guide: document src-ip-sticky, including the VIP
recreate-on-change caveat.
* user-guide, maglevc.1, maglevd.8: updated command tree, new
counters command, color defaults, and the src-ip-sticky field.
internal/vpp/fibstats.go (new file, 87 lines)
@@ -0,0 +1,87 @@
// Copyright (c) 2026, Pim van Pelt <pim@ipng.ch>

package vpp

import (
	"fmt"
	"net"

	"go.fd.io/govpp/adapter"
	"go.fd.io/govpp/binapi/ip"
	"go.fd.io/govpp/binapi/ip_types"
)

// routeToStatPath is the VPP stats-segment path exposing the per-FIB-entry
// "route-to" combined counter (packets + bytes), indexed by the load-
// balance index of each FIB entry. See lbm_to_counters in
// src/vnet/dpo/load_balance.c.
const routeToStatPath = "/net/route/to"

// fibStatsIndex returns the FIB entry's stats_index (load_balance index)
// for the host prefix of addr. Uses exact=0 (longest-match) so a covering
// route is returned if there is no host-prefix entry — note this means
// two maglev entities sharing a covering route will report identical
// /net/route/to counters.
func fibStatsIndex(ch *loggedChannel, addr net.IP) (uint32, error) {
	var prefix ip_types.Prefix
	if v4 := addr.To4(); v4 != nil {
		prefix.Address.Af = ip_types.ADDRESS_IP4
		copy(prefix.Address.Un.XXX_UnionData[:4], v4)
		prefix.Len = 32
	} else {
		prefix.Address.Af = ip_types.ADDRESS_IP6
		copy(prefix.Address.Un.XXX_UnionData[:], addr.To16())
		prefix.Len = 128
	}
	req := &ip.IPRouteLookup{
		TableID: 0,
		Exact:   0,
		Prefix:  prefix,
	}
	reply := &ip.IPRouteLookupReply{}
	if err := ch.SendRequest(req).ReceiveReply(reply); err != nil {
		return 0, fmt.Errorf("ip_route_lookup: %w", err)
	}
	if reply.Retval != 0 {
		return 0, fmt.Errorf("ip_route_lookup: retval=%d", reply.Retval)
	}
	return reply.Route.StatsIndex, nil
}

// findCombinedCounter returns the CombinedCounterStat matching name, or
// nil if not found or the wrong type.
func findCombinedCounter(entries []adapter.StatEntry, name string) adapter.CombinedCounterStat {
	for _, e := range entries {
		if string(e.Name) != name {
			continue
		}
		if s, ok := e.Data.(adapter.CombinedCounterStat); ok {
			return s
		}
	}
	return nil
}

// reduceCombinedCounter sums the (packets, bytes) CombinedCounter across
// workers at column i, tolerating short per-worker vectors.
func reduceCombinedCounter(s adapter.CombinedCounterStat, i int) (pkts, byts uint64) {
	for _, thread := range s {
		if i >= 0 && i < len(thread) {
			pkts += thread[i][0]
			byts += thread[i][1]
		}
	}
	return pkts, byts
}

// vipKeyToIP extracts the VIP address from a vipKey's CIDR string. The
// second return is the prefix length. Used by the scrape path to feed
// a VIP prefix into fibStatsIndex.
func vipKeyToIP(k vipKey) (net.IP, int, error) {
	ip, ipnet, err := net.ParseCIDR(k.prefix)
	if err != nil {
		return nil, 0, err
	}
	ones, _ := ipnet.Mask.Size()
	return ip, ones, nil
}