VPP LB counters, src-ip-sticky, and frontend state aggregation
New feature: per-VIP / per-backend runtime counters
* New GetVPPLBCounters RPC serving an in-process snapshot refreshed
by a 5s scrape loop (internal/vpp/lbstats.go). Each cycle pulls
the LB plugin's four SimpleCounters (next, first, untracked,
no-server) plus the FIB /net/route/to CombinedCounter for every
VIP and every backend host prefix via a single DumpStats call.
* FIB stats-index discovery via ip_route_lookup (internal/vpp/
fibstats.go); per-worker reduction happens in the collector.
* Prometheus collector exports vip_packets_total (kind label),
vip_route_{packets,bytes}_total, and backend_route_{packets,
bytes}_total. Metrics source interface extended with VIPStats /
BackendRouteStats; vpp.Client publishes snapshots via
atomic.Pointer and clears them on disconnect.
* New 'show vpp lb counters' CLI command. The 'show vpp lbstate'
and 'sync vpp lbstate' commands are restructured under 'show
vpp lb {state,counters}' / 'sync vpp lb state' to make room
for the new verb.
New feature: src-ip-sticky frontends
* New frontend YAML key 'src-ip-sticky' (bool). Plumbed through
config.Frontend, desiredVIP, and the lb_add_del_vip_v2 call.
* Reflected in gRPC FrontendInfo.src_ip_sticky and VPPLBVIP.
src_ip_sticky, and shown in 'show vpp lb state' output.
* Scraped back from VPP by parsing 'show lb vips verbose' through
cli_inband — lb_vip_details does not expose the flag. The same
scrape also recovers the LB pool index for each VIP, which the
stats-segment counters are keyed on. This is a documented
temporary workaround until VPP ships an lb_vip_v2_dump.
* src_ip_sticky cannot be mutated on a live VIP, so a flipped flag
triggers a tear-down-and-recreate in reconcileVIP (ASes deleted
with flush, VIP deleted, then re-added). Flip is logged.
New feature: frontend state aggregation and events
* New health.FrontendState (unknown/up/down) and FrontendTransition
types. A frontend is 'up' iff at least one backend has a nonzero
effective weight, 'unknown' iff no backend has real state yet,
and 'down' otherwise.
* Checker tracks per-frontend aggregate state, recomputing after
each backend transition and emitting a frontend-transition Event
on change. Reload drops entries for removed frontends.
* checker.Event gains an optional FrontendTransition pointer;
backend- vs. frontend-transition events are demultiplexed on
that field.
* WatchEvents now sends an initial snapshot of frontend state on
connect (mirroring the existing backend snapshot), subscribes
once to the checker stream, and fans out to backend/frontend
handlers based on the client's filter flags. The proto
FrontendEvent message grows name + transition fields.
* New Checker.FrontendState accessor.
Refactor: pure health helpers
* Moved the priority-failover selector and the (pool idx, active
pool, state, cfg weight) → (vpp weight, flush) mapping out of
internal/vpp/lbsync.go into a new internal/health/weights.go so
the checker can reuse them for frontend-state computation
without importing internal/vpp.
* New functions: health.ActivePoolIndex, BackendEffectiveWeight,
EffectiveWeights, ComputeFrontendState. lbsync.go now calls
these directly; vpp.EffectiveWeights is a thin wrapper over
health.EffectiveWeights retained for the gRPC observability
path. Fully unit-tested in internal/health/weights_test.go.
maglevc polish
* --color default is now mode-aware: on in the interactive shell,
off in one-shot mode so piped output is script-safe. Explicit
--color=true/false still overrides.
* New stripHostMask helper drops /32 and /128 from VIP display;
non-host prefixes pass through unchanged.
* Counter table column order fixed (first before next) and
packets/bytes columns renamed to fib-packets/fib-bytes to
clarify they come from the FIB, not the LB plugin.
Docs
* config-guide: document src-ip-sticky, including the VIP
recreate-on-change caveat.
* user-guide, maglevc.1, maglevd.8: updated command tree, new
counters command, color defaults, and the src-ip-sticky field.
@@ -7,6 +7,7 @@ import (
 	"errors"
 	"log/slog"
 	"net"
+	"sort"
 
 	"google.golang.org/grpc/codes"
 	"google.golang.org/grpc/status"
@@ -128,14 +129,13 @@ func (s *Server) GetHealthCheck(_ context.Context, req *GetHealthCheckRequest) (
 }
 
 // WatchEvents streams events to the client. On connect, the current state of
-// all backends is sent as synthetic BackendEvents. Afterwards, live events are
-// forwarded based on the filter flags in req. An unset (nil) flag defaults to
-// true (subscribe). An empty log_level defaults to "info".
+// every backend and/or frontend is sent as a synthetic event. Afterwards,
+// live events are forwarded based on the filter flags in req. An unset (nil)
+// flag defaults to true (subscribe). An empty log_level defaults to "info".
 func (s *Server) WatchEvents(req *WatchRequest, stream Maglev_WatchEventsServer) error {
 	wantLog := req.Log == nil || *req.Log
 	wantBackend := req.Backend == nil || *req.Backend
 	wantFrontend := req.Frontend == nil || *req.Frontend
-	_ = wantFrontend // no frontend events emitted yet
 
 	logLevel := slog.LevelInfo
 	if req.LogLevel != "" {
@@ -152,8 +152,20 @@ func (s *Server) WatchEvents(req *WatchRequest, stream Maglev_WatchEventsServer)
 		defer unsub()
 	}
 
-	// Subscribe to backend events; send initial state snapshot first.
-	var backendCh <-chan checker.Event
+	// Subscribe to the checker event stream once; we demultiplex backend
+	// and frontend events in the select below. Skip the subscription if
+	// neither kind is wanted.
+	var eventCh <-chan checker.Event
+	if wantBackend || wantFrontend {
+		var unsub func()
+		eventCh, unsub = s.checker.Subscribe()
+		defer unsub()
+	}
+
+	// Send initial state snapshot: one synthetic event per existing backend
+	// (if wanted), and one per existing frontend (if wanted). Clients that
+	// connect mid-flight see the current state immediately instead of
+	// waiting for the next transition.
 	if wantBackend {
 		for _, name := range s.checker.ListBackends() {
 			snap, ok := s.checker.GetBackend(name)
@@ -172,9 +184,25 @@ func (s *Server) WatchEvents(req *WatchRequest, stream Maglev_WatchEventsServer)
 				return err
 			}
 		}
-		var unsub func()
-		backendCh, unsub = s.checker.Subscribe()
-		defer unsub()
 	}
+	if wantFrontend {
+		for _, name := range s.checker.ListFrontends() {
+			fs, ok := s.checker.FrontendState(name)
+			if !ok {
+				continue
+			}
+			ev := &Event{Event: &Event_Frontend{Frontend: &FrontendEvent{
+				FrontendName: name,
+				Transition: &TransitionRecord{
+					From:     fs.String(),
+					To:       fs.String(),
+					AtUnixNs: 0,
+				},
+			}}}
+			if err := stream.Send(ev); err != nil {
+				return err
+			}
+		}
+	}
 
 	for {
@@ -190,10 +218,29 @@ func (s *Server) WatchEvents(req *WatchRequest, stream Maglev_WatchEventsServer)
 			if err := stream.Send(&Event{Event: &Event_Log{Log: le}}); err != nil {
 				return err
 			}
-		case e, ok := <-backendCh:
+		case e, ok := <-eventCh:
 			if !ok {
 				return nil
 			}
+			if e.FrontendTransition != nil {
+				if !wantFrontend {
+					continue
+				}
+				if err := stream.Send(&Event{Event: &Event_Frontend{Frontend: &FrontendEvent{
+					FrontendName: e.FrontendName,
+					Transition: &TransitionRecord{
+						From:     e.FrontendTransition.From.String(),
+						To:       e.FrontendTransition.To.String(),
+						AtUnixNs: e.FrontendTransition.At.UnixNano(),
+					},
+				}}}); err != nil {
+					return err
+				}
+				continue
+			}
+			if !wantBackend {
+				continue
+			}
 			if err := stream.Send(&Event{Event: &Event_Backend{Backend: &BackendEvent{
 				BackendName: e.BackendName,
 				Transition:  transitionToProto(e.Transition),
@@ -302,6 +349,51 @@ func (s *Server) GetVPPLBState(_ context.Context, _ *GetVPPLBStateRequest) (*VPP
 	return lbStateToProto(state), nil
 }
 
+// GetVPPLBCounters returns the most recent per-VIP and per-backend counter
+// snapshot captured by the client's 5s scrape loop. The call is served
+// from an in-process cache and does not hit VPP. An empty response is
+// returned when VPP is disconnected or no scrape has completed yet.
+func (s *Server) GetVPPLBCounters(_ context.Context, _ *GetVPPLBCountersRequest) (*VPPLBCounters, error) {
+	if s.vppClient == nil {
+		return nil, status.Error(codes.Unavailable, "VPP integration is disabled")
+	}
+	out := &VPPLBCounters{}
+	for _, v := range s.vppClient.VIPStats() {
+		out.Vips = append(out.Vips, &VPPLBVIPCounters{
+			Prefix:          v.Prefix,
+			Protocol:        v.Protocol,
+			Port:            uint32(v.Port),
+			NextPacket:      v.NextPkt,
+			FirstPacket:     v.FirstPkt,
+			UntrackedPacket: v.Untracked,
+			NoServer:        v.NoServer,
+			Packets:         v.Packets,
+			Bytes:           v.Bytes,
+		})
+	}
+	sort.Slice(out.Vips, func(i, j int) bool {
+		if out.Vips[i].Prefix != out.Vips[j].Prefix {
+			return out.Vips[i].Prefix < out.Vips[j].Prefix
+		}
+		if out.Vips[i].Protocol != out.Vips[j].Protocol {
+			return out.Vips[i].Protocol < out.Vips[j].Protocol
+		}
+		return out.Vips[i].Port < out.Vips[j].Port
+	})
+	for _, b := range s.vppClient.BackendRouteStats() {
+		out.Backends = append(out.Backends, &VPPLBBackendCounters{
+			Backend: b.Backend,
+			Address: b.Address,
+			Packets: b.Packets,
+			Bytes:   b.Bytes,
+		})
+	}
+	sort.Slice(out.Backends, func(i, j int) bool {
+		return out.Backends[i].Backend < out.Backends[j].Backend
+	})
+	return out, nil
+}
+
 // SyncVPPLBState runs the LB reconciler. With frontend_name unset it does a
 // full sync (SyncLBStateAll), which may remove stale VIPs. With frontend_name
 // set it does a single-VIP sync (SyncLBStateVIP) that only adds/updates.
@@ -342,6 +434,7 @@ func lbStateToProto(s *vpp.LBState) *VPPLBState {
 		Port:            uint32(v.Port),
 		Encap:           v.Encap,
 		FlowTableLength: uint32(v.FlowTableLength),
+		SrcIpSticky:     v.SrcIPSticky,
 	}
 	for _, a := range v.ASes {
 		var ts int64
@@ -393,6 +486,7 @@ func frontendToProto(name string, fe config.Frontend, src vpp.StateSource) *Fron
 		Port:        uint32(fe.Port),
 		Description: fe.Description,
 		Pools:       pools,
+		SrcIpSticky: fe.SrcIPSticky,
 	}
 }