VPP LB counters, src-ip-sticky, and frontend state aggregation

New feature: per-VIP / per-backend runtime counters
  * New GetVPPLBCounters RPC serving an in-process snapshot refreshed
    by a 5s scrape loop (internal/vpp/lbstats.go). Each cycle pulls
    the LB plugin's four SimpleCounters (next, first, untracked,
    no-server) plus the FIB /net/route/to CombinedCounter for every
    VIP and every backend host prefix via a single DumpStats call.
  * FIB stats-index discovery via ip_route_lookup (internal/vpp/
    fibstats.go); per-worker reduction happens in the collector.
  * Prometheus collector exports vip_packets_total (kind label),
    vip_route_{packets,bytes}_total, and backend_route_{packets,
    bytes}_total. Metrics source interface extended with VIPStats /
    BackendRouteStats; vpp.Client publishes snapshots via
    atomic.Pointer and clears them on disconnect.
  * New 'show vpp lb counters' CLI command. The 'show vpp lbstate'
    and 'sync vpp lbstate' commands are restructured under 'show
    vpp lb {state,counters}' / 'sync vpp lb state' to make room
    for the new verb.
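
  For illustration, the snapshot publication might look roughly like the
  sketch below; the identifiers (lbSnapshot, runLBStatsLoop, the VIPCounters
  fields) are assumptions, not the actual code in internal/vpp/lbstats.go:

      package vppsketch

      import (
          "context"
          "sync/atomic"
          "time"
      )

      // VIPCounters is an illustrative per-VIP record; the real snapshot
      // also carries protocol, port, and the per-backend FIB counters.
      type VIPCounters struct {
          Prefix                                 string
          NextPkt, FirstPkt, Untracked, NoServer uint64
          Packets, Bytes                         uint64
      }

      type lbSnapshot struct {
          VIPs []VIPCounters
      }

      // Client keeps the latest snapshot behind an atomic pointer, so the
      // gRPC handler and the Prometheus collector read it without locking.
      type Client struct {
          lbStats atomic.Pointer[lbSnapshot]
      }

      // runLBStatsLoop refreshes the snapshot every 5 seconds until ctx is
      // done. scrape stands in for the single DumpStats pass per cycle.
      func (c *Client) runLBStatsLoop(
          ctx context.Context,
          scrape func(context.Context) ([]VIPCounters, error),
      ) {
          t := time.NewTicker(5 * time.Second)
          defer t.Stop()
          for {
              select {
              case <-ctx.Done():
                  return
              case <-t.C:
                  vips, err := scrape(ctx)
                  if err != nil {
                      continue // keep the last good snapshot
                  }
                  c.lbStats.Store(&lbSnapshot{VIPs: vips})
              }
          }
      }

      // VIPStats returns the latest per-VIP counters, or nil when no scrape
      // has completed yet (the pointer is cleared on VPP disconnect).
      func (c *Client) VIPStats() []VIPCounters {
          if s := c.lbStats.Load(); s != nil {
              return s.VIPs
          }
          return nil
      }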

New feature: src-ip-sticky frontends
  * New frontend YAML key 'src-ip-sticky' (bool). Plumbed through
    config.Frontend, desiredVIP, and the lb_add_del_vip_v2 call.
  * Reflected in gRPC FrontendInfo.src_ip_sticky and VPPLBVIP.
    src_ip_sticky, and shown in 'show vpp lb state' output.
  * Scraped back from VPP by parsing 'show lb vips verbose' through
    cli_inband — lb_vip_details does not expose the flag. The same
    scrape also recovers the LB pool index for each VIP, which the
    stats-segment counters are keyed on. This is a documented
    temporary workaround until VPP ships an lb_vip_v2_dump.
  * src_ip_sticky cannot be mutated on a live VIP, so a flipped flag
    triggers a tear-down-and-recreate in reconcileVIP (ASes deleted
    with flush, VIP deleted, then re-added). Flip is logged.
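
  For illustration only; the surrounding field names and the YAML shape are
  assumptions, not a copy of config.Frontend:

      package configsketch

      // Frontend shows how the new key maps onto the config struct; only
      // SrcIPSticky is the new field. A frontend stanza would then accept,
      // for example:
      //
      //   frontends:
      //     web:
      //       port: 443
      //       src-ip-sticky: true
      type Frontend struct {
          Port        uint16 `yaml:"port"`
          Description string `yaml:"description,omitempty"`
          SrcIPSticky bool   `yaml:"src-ip-sticky,omitempty"`
      }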

New feature: frontend state aggregation and events
  * New health.FrontendState (unknown/up/down) and FrontendTransition
    types. A frontend is 'up' iff at least one backend has a nonzero
    effective weight, 'unknown' iff no backend has real state yet,
    and 'down' otherwise.
  * Checker tracks per-frontend aggregate state, recomputing after
    each backend transition and emitting a frontend-transition Event
    on change. Reload drops entries for removed frontends.
  * checker.Event gains an optional FrontendTransition pointer;
    backend- vs. frontend-transition events are demultiplexed on
    that field.
  * WatchEvents now sends an initial snapshot of frontend state on
    connect (mirroring the existing backend snapshot), subscribes
    once to the checker stream, and fans out to backend/frontend
    handlers based on the client's filter flags. The proto
    FrontendEvent message grows name + transition fields.
  * New Checker.FrontendState accessor.
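
  For illustration, the aggregation rule reduces to something like the
  sketch below; the input shape (backendView) and the function signature
  are assumptions, and the real health.ComputeFrontendState may differ:

      package healthsketch

      // FrontendState mirrors the three aggregate states.
      type FrontendState int

      const (
          FrontendUnknown FrontendState = iota
          FrontendUp
          FrontendDown
      )

      // backendView is an assumed per-backend input: whether the checker
      // has real (non-unknown) state for it, and its effective weight after
      // priority failover.
      type backendView struct {
          HasRealState    bool
          EffectiveWeight uint32
      }

      func computeFrontendState(backends []backendView) FrontendState {
          anyKnown := false
          for _, b := range backends {
              if b.EffectiveWeight > 0 {
                  return FrontendUp // at least one backend can take traffic
              }
              if b.HasRealState {
                  anyKnown = true
              }
          }
          if !anyKnown {
              return FrontendUnknown // no backend has reported yet
          }
          return FrontendDown
      }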

Refactor: pure health helpers
  * Moved the priority-failover selector and the (pool idx, active
    pool, state, cfg weight) → (vpp weight, flush) mapping out of
    internal/vpp/lbsync.go into a new internal/health/weights.go so
    the checker can reuse them for frontend-state computation
    without importing internal/vpp.
  * New functions: health.ActivePoolIndex, BackendEffectiveWeight,
    EffectiveWeights, ComputeFrontendState. lbsync.go now calls
    these directly; vpp.EffectiveWeights is a thin wrapper over
    health.EffectiveWeights retained for the gRPC observability
    path. Fully unit-tested in internal/health/weights_test.go.
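
  A sketch of what the moved helpers compute, assuming pools are ordered by
  priority and the active pool is the first one with a healthy member; the
  real functions take richer inputs and also return the flush flag:

      package healthsketch

      // activePoolIndex returns the index of the first pool that has at
      // least one healthy backend, or -1 if none does.
      func activePoolIndex(poolHasHealthy []bool) int {
          for i, ok := range poolHasHealthy {
              if ok {
                  return i
              }
          }
          return -1
      }

      // backendEffectiveWeight maps a backend to the weight programmed into
      // VPP: its configured weight when it is up and sits in the active
      // pool, and 0 otherwise.
      func backendEffectiveWeight(
          poolIdx, activePool int, up bool, cfgWeight uint32,
      ) uint32 {
          if up && poolIdx == activePool {
              return cfgWeight
          }
          return 0
      }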

maglevc polish
  * --color default is now mode-aware: on in the interactive shell,
    off in one-shot mode so piped output is script-safe. Explicit
    --color=true/false still overrides.
  * New stripHostMask helper drops /32 and /128 from VIP display;
    non-host prefixes pass through unchanged.
  * Counter table column order fixed (first before next) and
    packets/bytes columns renamed to fib-packets/fib-bytes to
    clarify they come from the FIB, not the LB plugin.
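
  A minimal sketch of the display helper, assuming it operates on prefix
  strings; the real stripHostMask may handle malformed input differently:

      package clisketch

      import "net/netip"

      // stripHostMask drops the mask from /32 (IPv4) and /128 (IPv6) host
      // prefixes for display; anything else passes through unchanged.
      func stripHostMask(s string) string {
          p, err := netip.ParsePrefix(s)
          if err != nil {
              return s
          }
          if p.IsSingleIP() {
              return p.Addr().String()
          }
          return s
      }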

Docs
  * config-guide: document src-ip-sticky, including the VIP
    recreate-on-change caveat.
  * user-guide, maglevc.1, maglevd.8: updated command tree, new
    counters command, color defaults, and the src-ip-sticky field.
commit fb62532fd5 (parent d5fbf5c640), 2026-04-12 15:59:02 +02:00
25 changed files with 2163 additions and 549 deletions

File diff suppressed because it is too large.


@@ -36,6 +36,7 @@ const (
Maglev_GetVPPInfo_FullMethodName = "/maglev.Maglev/GetVPPInfo"
Maglev_GetVPPLBState_FullMethodName = "/maglev.Maglev/GetVPPLBState"
Maglev_SyncVPPLBState_FullMethodName = "/maglev.Maglev/SyncVPPLBState"
Maglev_GetVPPLBCounters_FullMethodName = "/maglev.Maglev/GetVPPLBCounters"
)
// MaglevClient is the client API for Maglev service.
@@ -61,6 +62,7 @@ type MaglevClient interface {
GetVPPInfo(ctx context.Context, in *GetVPPInfoRequest, opts ...grpc.CallOption) (*VPPInfo, error)
GetVPPLBState(ctx context.Context, in *GetVPPLBStateRequest, opts ...grpc.CallOption) (*VPPLBState, error)
SyncVPPLBState(ctx context.Context, in *SyncVPPLBStateRequest, opts ...grpc.CallOption) (*SyncVPPLBStateResponse, error)
GetVPPLBCounters(ctx context.Context, in *GetVPPLBCountersRequest, opts ...grpc.CallOption) (*VPPLBCounters, error)
}
type maglevClient struct {
@@ -250,6 +252,16 @@ func (c *maglevClient) SyncVPPLBState(ctx context.Context, in *SyncVPPLBStateReq
return out, nil
}
func (c *maglevClient) GetVPPLBCounters(ctx context.Context, in *GetVPPLBCountersRequest, opts ...grpc.CallOption) (*VPPLBCounters, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(VPPLBCounters)
err := c.cc.Invoke(ctx, Maglev_GetVPPLBCounters_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
// MaglevServer is the server API for Maglev service.
// All implementations must embed UnimplementedMaglevServer
// for forward compatibility.
@@ -273,6 +285,7 @@ type MaglevServer interface {
GetVPPInfo(context.Context, *GetVPPInfoRequest) (*VPPInfo, error)
GetVPPLBState(context.Context, *GetVPPLBStateRequest) (*VPPLBState, error)
SyncVPPLBState(context.Context, *SyncVPPLBStateRequest) (*SyncVPPLBStateResponse, error)
GetVPPLBCounters(context.Context, *GetVPPLBCountersRequest) (*VPPLBCounters, error)
mustEmbedUnimplementedMaglevServer()
}
@@ -334,6 +347,9 @@ func (UnimplementedMaglevServer) GetVPPLBState(context.Context, *GetVPPLBStateRe
func (UnimplementedMaglevServer) SyncVPPLBState(context.Context, *SyncVPPLBStateRequest) (*SyncVPPLBStateResponse, error) {
return nil, status.Error(codes.Unimplemented, "method SyncVPPLBState not implemented")
}
func (UnimplementedMaglevServer) GetVPPLBCounters(context.Context, *GetVPPLBCountersRequest) (*VPPLBCounters, error) {
return nil, status.Error(codes.Unimplemented, "method GetVPPLBCounters not implemented")
}
func (UnimplementedMaglevServer) mustEmbedUnimplementedMaglevServer() {}
func (UnimplementedMaglevServer) testEmbeddedByValue() {}
@@ -654,6 +670,24 @@ func _Maglev_SyncVPPLBState_Handler(srv interface{}, ctx context.Context, dec fu
return interceptor(ctx, in, info, handler)
}
func _Maglev_GetVPPLBCounters_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(GetVPPLBCountersRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(MaglevServer).GetVPPLBCounters(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: Maglev_GetVPPLBCounters_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(MaglevServer).GetVPPLBCounters(ctx, req.(*GetVPPLBCountersRequest))
}
return interceptor(ctx, in, info, handler)
}
// Maglev_ServiceDesc is the grpc.ServiceDesc for Maglev service.
// It's only intended for direct use with grpc.RegisterService,
// and not to be introspected or modified (even as a copy)
@@ -725,6 +759,10 @@ var Maglev_ServiceDesc = grpc.ServiceDesc{
MethodName: "SyncVPPLBState",
Handler: _Maglev_SyncVPPLBState_Handler,
},
{
MethodName: "GetVPPLBCounters",
Handler: _Maglev_GetVPPLBCounters_Handler,
},
},
Streams: []grpc.StreamDesc{
{


@@ -7,6 +7,7 @@ import (
"errors"
"log/slog"
"net"
"sort"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
@@ -128,14 +129,13 @@ func (s *Server) GetHealthCheck(_ context.Context, req *GetHealthCheckRequest) (
}
// WatchEvents streams events to the client. On connect, the current state of
-// all backends is sent as synthetic BackendEvents. Afterwards, live events are
-// forwarded based on the filter flags in req. An unset (nil) flag defaults to
-// true (subscribe). An empty log_level defaults to "info".
// every backend and/or frontend is sent as a synthetic event. Afterwards,
// live events are forwarded based on the filter flags in req. An unset (nil)
// flag defaults to true (subscribe). An empty log_level defaults to "info".
func (s *Server) WatchEvents(req *WatchRequest, stream Maglev_WatchEventsServer) error {
wantLog := req.Log == nil || *req.Log
wantBackend := req.Backend == nil || *req.Backend
wantFrontend := req.Frontend == nil || *req.Frontend
-_ = wantFrontend // no frontend events emitted yet
logLevel := slog.LevelInfo
if req.LogLevel != "" {
@@ -152,8 +152,20 @@ func (s *Server) WatchEvents(req *WatchRequest, stream Maglev_WatchEventsServer)
defer unsub()
}
-// Subscribe to backend events; send initial state snapshot first.
-var backendCh <-chan checker.Event
// Subscribe to the checker event stream once; we demultiplex backend
// and frontend events in the select below. Skip the subscription if
// neither kind is wanted.
var eventCh <-chan checker.Event
if wantBackend || wantFrontend {
var unsub func()
eventCh, unsub = s.checker.Subscribe()
defer unsub()
}
// Send initial state snapshot: one synthetic event per existing backend
// (if wanted), and one per existing frontend (if wanted). Clients that
// connect mid-flight see the current state immediately instead of
// waiting for the next transition.
if wantBackend {
for _, name := range s.checker.ListBackends() {
snap, ok := s.checker.GetBackend(name)
@@ -172,9 +184,25 @@ func (s *Server) WatchEvents(req *WatchRequest, stream Maglev_WatchEventsServer)
return err
}
}
-var unsub func()
-backendCh, unsub = s.checker.Subscribe()
-defer unsub()
}
if wantFrontend {
for _, name := range s.checker.ListFrontends() {
fs, ok := s.checker.FrontendState(name)
if !ok {
continue
}
ev := &Event{Event: &Event_Frontend{Frontend: &FrontendEvent{
FrontendName: name,
Transition: &TransitionRecord{
From: fs.String(),
To: fs.String(),
AtUnixNs: 0,
},
}}}
if err := stream.Send(ev); err != nil {
return err
}
}
}
for {
@@ -190,10 +218,29 @@ func (s *Server) WatchEvents(req *WatchRequest, stream Maglev_WatchEventsServer)
if err := stream.Send(&Event{Event: &Event_Log{Log: le}}); err != nil {
return err
}
-case e, ok := <-backendCh:
case e, ok := <-eventCh:
if !ok {
return nil
}
if e.FrontendTransition != nil {
if !wantFrontend {
continue
}
if err := stream.Send(&Event{Event: &Event_Frontend{Frontend: &FrontendEvent{
FrontendName: e.FrontendName,
Transition: &TransitionRecord{
From: e.FrontendTransition.From.String(),
To: e.FrontendTransition.To.String(),
AtUnixNs: e.FrontendTransition.At.UnixNano(),
},
}}}); err != nil {
return err
}
continue
}
if !wantBackend {
continue
}
if err := stream.Send(&Event{Event: &Event_Backend{Backend: &BackendEvent{
BackendName: e.BackendName,
Transition: transitionToProto(e.Transition),
@@ -302,6 +349,51 @@ func (s *Server) GetVPPLBState(_ context.Context, _ *GetVPPLBStateRequest) (*VPP
return lbStateToProto(state), nil
}
// GetVPPLBCounters returns the most recent per-VIP and per-backend counter
// snapshot captured by the client's 5s scrape loop. The call is served
// from an in-process cache and does not hit VPP. An empty response is
// returned when VPP is disconnected or no scrape has completed yet.
func (s *Server) GetVPPLBCounters(_ context.Context, _ *GetVPPLBCountersRequest) (*VPPLBCounters, error) {
if s.vppClient == nil {
return nil, status.Error(codes.Unavailable, "VPP integration is disabled")
}
out := &VPPLBCounters{}
for _, v := range s.vppClient.VIPStats() {
out.Vips = append(out.Vips, &VPPLBVIPCounters{
Prefix: v.Prefix,
Protocol: v.Protocol,
Port: uint32(v.Port),
NextPacket: v.NextPkt,
FirstPacket: v.FirstPkt,
UntrackedPacket: v.Untracked,
NoServer: v.NoServer,
Packets: v.Packets,
Bytes: v.Bytes,
})
}
sort.Slice(out.Vips, func(i, j int) bool {
if out.Vips[i].Prefix != out.Vips[j].Prefix {
return out.Vips[i].Prefix < out.Vips[j].Prefix
}
if out.Vips[i].Protocol != out.Vips[j].Protocol {
return out.Vips[i].Protocol < out.Vips[j].Protocol
}
return out.Vips[i].Port < out.Vips[j].Port
})
for _, b := range s.vppClient.BackendRouteStats() {
out.Backends = append(out.Backends, &VPPLBBackendCounters{
Backend: b.Backend,
Address: b.Address,
Packets: b.Packets,
Bytes: b.Bytes,
})
}
sort.Slice(out.Backends, func(i, j int) bool {
return out.Backends[i].Backend < out.Backends[j].Backend
})
return out, nil
}
// SyncVPPLBState runs the LB reconciler. With frontend_name unset it does a
// full sync (SyncLBStateAll), which may remove stale VIPs. With frontend_name
// set it does a single-VIP sync (SyncLBStateVIP) that only adds/updates.
@@ -342,6 +434,7 @@ func lbStateToProto(s *vpp.LBState) *VPPLBState {
Port: uint32(v.Port),
Encap: v.Encap,
FlowTableLength: uint32(v.FlowTableLength),
SrcIpSticky: v.SrcIPSticky,
}
for _, a := range v.ASes {
var ts int64
@@ -393,6 +486,7 @@ func frontendToProto(name string, fe config.Frontend, src vpp.StateSource) *Fron
Port: uint32(fe.Port),
Description: fe.Description,
Pools: pools,
SrcIpSticky: fe.SrcIPSticky,
}
}


@@ -353,9 +353,11 @@ func TestWatchEventsServerShutdown(t *testing.T) {
if err != nil {
t.Fatalf("WatchEvents: %v", err)
}
-// Drain the initial synthetic backend event.
-if _, err := stream.Recv(); err != nil {
-t.Fatalf("initial Recv: %v", err)
// Drain the initial synthetic snapshots (one per backend, one per frontend).
for i := 0; i < 2; i++ {
if _, err := stream.Recv(); err != nil {
t.Fatalf("initial Recv %d: %v", i, err)
}
}
// Cancel the server context; the stream must terminate.