This session covers three distinct arcs: correctness bug fixes in the
VPP sync path and frontend reducers, new config validation, and a
large polish pass on the web frontend (tighter layout, backend kebab
dialogs, a consolidated grouped pool table, live config-reload re-sync).
- encap for a VIP is now derived from the backend address family,
not the VIP's. A v6 VIP with v4 backends is programmed as IP6_GRE4
(not the buggy IP6_GRE6), matching the VPP LB plugin's requirement
that the encap type track the address family of the application
servers, i.e. the tunnel's outer header. desiredVIP gained an Encap
field populated in desiredFromFrontend.
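A minimal sketch of the derivation rule, in Go; the Encap constant names mirror the ones used above, and the function is illustrative rather than the actual desiredFromFrontend code:

```go
// Hypothetical encap enum using the names from this note.
type Encap int

const (
	IP4_GRE4 Encap = iota // v4 VIP, v4 backends
	IP4_GRE6              // v4 VIP, v6 backends
	IP6_GRE4              // v6 VIP, v4 backends (the case this change fixes)
	IP6_GRE6              // v6 VIP, v6 backends
)

// encapFor picks the GRE family from the backend (AS) address family; the
// VIP family only selects the IP4_*/IP6_* variant.
func encapFor(vipIsV6, backendIsV6 bool) Encap {
	switch {
	case vipIsV6 && !backendIsV6:
		return IP6_GRE4
	case vipIsV6 && backendIsV6:
		return IP6_GRE6
	case !vipIsV6 && backendIsV6:
		return IP4_GRE6
	default:
		return IP4_GRE4
	}
}
```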
- ActivePoolIndex now requires at least one backend in a pool to be
BOTH in StateUp AND pb.Weight>0 before the pool counts as active.
Previously a primary pool with every backend manually zeroed would
still win over a fallback with weight=100, so fallback traffic
never materialized. A new TestActivePoolIndexWeightedFailover table
test pins the rule across five subcases.
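A sketch of the rule with simplified stand-in types (the real code works on config pools and backend state from the checker):

```go
type poolBackend struct {
	Weight int  // configured weight (pb.Weight)
	Up     bool // true when the backend is in StateUp
}

// activePoolIndex returns the first pool that has at least one backend which
// is both up and carries a non-zero configured weight; pools whose backends
// are all down or all manually zeroed are skipped, so a weighted fallback can
// take over. Returns -1 when no pool qualifies.
func activePoolIndex(pools [][]poolBackend) int {
	for i, pool := range pools {
		for _, pb := range pool {
			if pb.Up && pb.Weight > 0 {
				return i
			}
		}
	}
	return -1
}
```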
- SyncLBStateVIP gained a flushAddress parameter threaded through
reconcileVIP; it forces flush=true on the setASWeight call for a
specific backend regardless of the usual 0→N heuristic. Wires up
the explicit [flush] knob the CLI exposes.
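Roughly the decision the new parameter adds, as a standalone helper; the name shouldFlush and the surrounding reconcileVIP plumbing are paraphrased from this note, not quoted from the code:

```go
// shouldFlush decides whether rewriting one AS's weight should also flush
// VPP's flow table for that AS.
func shouldFlush(oldWeight, newWeight int, asAddr, flushAddress string) bool {
	if flushAddress != "" && asAddr == flushAddress {
		return true // explicit [flush] requested for this backend
	}
	return oldWeight == 0 && newWeight > 0 // the usual 0→N heuristic
}
```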
- convertFrontend already enforced that backends within one frontend
share a family. New cross-frontend pass validateVIPFamilyConsistency
rejects configs where two frontends share a VIP address but carry
backends in different families — VPP's LB plugin requires every
VIP on a prefix to have the same encap type, so such a config
would fail at lb_add_del_vip_v2 time with
VNET_API_ERROR_INVALID_ARGUMENT (-73). Catching it at config load
turns a silent runtime failure into a clear startup error.
- Two new TestValidationErrors cases pin the behavior: mismatched
families are rejected; same-family frontends sharing one VIP address
are allowed.
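A sketch of such a cross-frontend pass; the types here are simplified stand-ins for the real config structs:

```go
import "fmt"

type frontendCfg struct {
	Name        string
	VIP         string // VIP address, possibly shared by several frontends
	BackendIsV6 bool   // family of this frontend's backends (already uniform)
}

// validateVIPFamilyConsistency rejects configs where two frontends share a
// VIP address but carry backends of different address families, since VPP
// programs a single encap per VIP.
func validateVIPFamilyConsistency(frontends []frontendCfg) error {
	type seenEntry struct {
		name string
		v6   bool
	}
	seen := map[string]seenEntry{}
	for _, fe := range frontends {
		prev, ok := seen[fe.VIP]
		if !ok {
			seen[fe.VIP] = seenEntry{name: fe.Name, v6: fe.BackendIsV6}
			continue
		}
		if prev.v6 != fe.BackendIsV6 {
			return fmt.Errorf("frontends %q and %q share VIP %s but use different backend address families",
				prev.name, fe.Name, fe.VIP)
		}
	}
	return nil
}
```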
- Proto adds `bool flush = 5` to SetWeightRequest. The RPC now
drives a VIP sync immediately after mutating config (fixing the
latent "weight change only takes effect at the next 30s periodic
reconcile" gap), passing flushAddress = backend IP when req.Flush
is true.
- maglevc grows an optional [flush] token: `set frontend F pool P
backend B weight N [flush]`. Implementation uses two Run closures
(runSetFrontendPoolBackendWeight and -Flush) because the tree
walker only puts slot tokens in args — literal keywords like
`flush` advance the node but don't appear in the arg list.
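The shape of the two handlers, assuming the tree-walker behavior described (slot tokens land in args, literal keywords only advance the node); the helper doSetWeight is hypothetical:

```go
// args carries only the slot tokens: frontend, pool, backend, weight.
func runSetFrontendPoolBackendWeight(args []string) error {
	return doSetWeight(args[0], args[1], args[2], args[3], false)
}

// Selected when the trailing literal `flush` is present; the keyword itself
// never shows up in args, hence the second closure.
func runSetFrontendPoolBackendWeightFlush(args []string) error {
	return doSetWeight(args[0], args[1], args[2], args[3], true)
}
```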
- docs/user-guide.md updated with the [flush] optional and a
three-paragraph explainer of the graceful-drain vs. flush
semantics at the VPP level.
- checker.ListFrontends now sorts alphabetically to match the
existing sort in ListBackends / ListHealthChecks — RPC responses
no longer shuffle VIPs per call. cmd/frontend/client.go also
sorts defensively in refreshAll so an old maglevd build renders
alphabetically too.
- backendFromProto was returning out.Transitions[n-1] as the
LastTransition, but maglevd stores (and the proto carries)
transitions newest-first, so [n-1] was actually the oldest.
The fix reverses the slice on read, normalizing the client's
Transitions slice to oldest-first so [n-1] is genuinely the newest;
LastTransition now points at the actual latest transition record.
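A sketch of the read-side normalization (TransitionRecord fields per the proto described above):

```go
type transition struct {
	From, To string
	AtUnixNs int64
}

// reverseTransitions converts the wire order (newest-first) into the
// client's oldest-first order, so the last element is genuinely the newest
// and can be used directly as LastTransition.
func reverseTransitions(wire []transition) []transition {
	out := make([]transition, len(wire))
	for i, t := range wire {
		out[len(wire)-1-i] = t
	}
	return out
}
```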
- applyBackendTransition (Go and TS) derives Enabled = state!="disabled"
so the two fields stay in lockstep, closing a drift window where
a recently re-enabled backend still rendered with a stuck
[disabled] tag. The tag was later removed entirely since state
and enabled carry the same information.
- Layout tightened substantially: "FRONTENDS" panel header removed,
zippy-summary and zippy-body paddings cut, backend-table row
padding dropped to 2px, per-pool <h3> removed. Pools now live in
a single consolidated table per frontend with a dedicated "pool"
column that shows the pool name only on the first row of each
group — classic grouped-table layout, maximally dense.
- Description moved inline into the Zippy summary as muted italic
text, freeing a vertical line per frontend card.
- formatVIPAddress() helper renders IPv6 VIPs as [addr]:port and
IPv4 as addr:port, matching RFC 3986 authority syntax.
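The same rule in Go terms (the actual helper lives in the SPA); net.JoinHostPort applies the identical bracketing on the Go side:

```go
import (
	"fmt"
	"strings"
)

// formatVIPAddress renders "[addr]:port" for IPv6 literals and "addr:port"
// for IPv4, i.e. RFC 3986 authority syntax.
func formatVIPAddress(addr string, port int) string {
	if strings.Contains(addr, ":") { // IPv6 literal
		return fmt.Sprintf("[%s]:%d", addr, port)
	}
	return fmt.Sprintf("%s:%d", addr, port)
}
```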
- Pools with effective_weight=0 on every backend (standby
fallbacks, fully-drained primaries) render at opacity 0.35 on
their non-actions cells; the kebab column stays at full contrast
because its menu is still fully functional on standby backends.
- Config-reload propagation: a maglevd config-reload-done log
event triggers triggerConfigResync() on the frontend side —
refreshAll() runs off the event-dispatch goroutine, then a
BrowserEvent{Type:"resync"} is published through the broker.
writeEvent emits type="resync" as a named SSE frame so the
SPA's existing addEventListener("resync") handler picks it up
and calls fetchAllState → replaceAll.
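For reference, a named SSE frame and a hedged sketch of a writeEvent along those lines (the BrowserEvent fields here are assumptions):

```go
import (
	"fmt"
	"net/http"
)

type BrowserEvent struct {
	Type string // e.g. "resync"
	Data string // JSON payload; may be empty
}

// writeEvent emits one named SSE frame, e.g.
//
//	event: resync
//	data: {}
//
// The SPA's addEventListener("resync") handler fires only for frames whose
// event name matches.
func writeEvent(w http.ResponseWriter, ev BrowserEvent) {
	fmt.Fprintf(w, "event: %s\ndata: %s\n\n", ev.Type, ev.Data)
	if f, ok := w.(http.Flusher); ok {
		f.Flush() // push the frame out immediately
	}
}
```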
- recomputeEffectiveWeights in stores/state.ts mirrors the
server-side health.EffectiveWeights logic so the SPA keeps
pool.effective_weight correct the moment a backend transitions,
without waiting for the 30s refresh. Fixed a nasty bug where
applyBackendEffectiveWeight wrote VIP-scoped vpp-lb-sync-as-*
event weights into every frontend sharing the backend,
corrupting frontends with different per-pool configured weights.
The old log-event reducer was removed; applyConfiguredWeight is
the narrower replacement used by the kebab set-weight flow.
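The server-side rule the SPA mirrors, sketched in Go with the same stand-in types as the activePoolIndex sketch above (the real TS works on the store's frontend and pool objects):

```go
// effectiveWeights zeroes every backend outside the active pool, and every
// backend that isn't up inside it; only up backends in the active pool keep
// their configured weight.
func effectiveWeights(pools [][]poolBackend) [][]int {
	active := activePoolIndex(pools)
	out := make([][]int, len(pools))
	for i, pool := range pools {
		out[i] = make([]int, len(pool))
		for j, pb := range pool {
			if i == active && pb.Up {
				out[i][j] = pb.Weight
			}
		}
	}
	return out
}
```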
- applyBackendTransition calls recomputeEffectiveWeights after
state updates so pool-failover transitions (primary ⇌ fallback)
reflect instantly in the UI.
- Confirmation dialogs via a new Modal primitive
(Portal-mounted to document.body, escape/click-outside close,
click-outside handling tracked from mousedown so mid-row
text-selection drags don't dismiss the dialog).
- pause/resume/enable/disable each show a Modal with a consequence
paragraph explaining what hits live traffic ("will keep existing
flows", "will flush VPP's flow table", etc.). The disable commit
button is styled btn-danger red.
- set-weight action shows a Modal with a range slider (0-100,
seeded from the current configured weight, accent-colored live
numeric readout via <output>) plus a flush checkbox and a live-
swapping note/warn paragraph describing what will happen. On
commit, the SPA also updates its local store via
applyConfiguredWeight so the operator sees the new weight
immediately without waiting for the next refresh.
- ProbeHeartbeat is now state-aware: ▶ (play) at rest for up/
down/unknown backends, ⏸ (pause) for paused, ⏹ (stop) for
disabled/removed, ❤️ (heart) during an in-flight probe.
- Dropped the probe-done event listener — fast probes (<10ms)
could fire probe-done in the same render tick as probe-start
and the heart would never visibly paint. Each probe-start now
runs a fixed 400ms scale-pop animation on a timer; subsequent
probe-start events reset the timer, so fast cadences produce a
continuous heart pulse.
- A fixed-size wrapper box (16x14 px, overflow hidden) keeps the row
from jiggling when the glyph swaps between the narrow ▶/⏸/⏹
text glyphs and the wider ❤️ emoji.
- Brand wordmark changed from "maglev" to "vpp-maglev" and wrapped
in an <a> linking to https://git.ipng.ch/ipng/vpp-maglev. Logo
link changed to https://ipng.ch/. Both open in a new tab with
rel="noopener".
- .gitignore fix: `frontend`, `maglevc`, `maglevd` were matching
ANY file or directory with those names anywhere in the tree,
silently ignoring cmd/frontend and friends. Anchored with
leading slashes so only repo-root build artifacts match.
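The before/after patterns:

```gitignore
# before: matched any file or directory with these names, anywhere in the tree
frontend
maglevc
maglevd

# after: anchored to the repository root, so cmd/frontend and friends stay tracked
/frontend
/maglevc
/maglevd
```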
```go
// Copyright (c) 2026, Pim van Pelt <pim@ipng.ch>

package grpcapi

import (
	"context"
	"errors"
	"log/slog"
	"net"
	"sort"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"

	"git.ipng.ch/ipng/vpp-maglev/internal/checker"
	"git.ipng.ch/ipng/vpp-maglev/internal/config"
	"git.ipng.ch/ipng/vpp-maglev/internal/health"
	"git.ipng.ch/ipng/vpp-maglev/internal/vpp"
)

// Server implements the MaglevServer gRPC interface.
type Server struct {
	UnimplementedMaglevServer
	ctx        context.Context
	checker    *checker.Checker
	logs       *LogBroadcaster
	configPath string
	vppClient  *vpp.Client // nil when VPP integration is disabled
}

// NewServer creates a Server backed by the given Checker. logs may be nil, in
// which case log events are never sent to WatchEvents streams. configPath is
// used by CheckConfig to reload and validate the configuration file on demand.
// vppClient may be nil if VPP integration is disabled. The provided context
// controls the lifetime of streaming RPCs: cancelling it closes all active
// WatchEvents streams so that grpc.Server.GracefulStop can complete.
func NewServer(ctx context.Context, c *checker.Checker, logs *LogBroadcaster, configPath string, vppClient *vpp.Client) *Server {
	return &Server{ctx: ctx, checker: c, logs: logs, configPath: configPath, vppClient: vppClient}
}

// ListFrontends returns the names of all configured frontends.
func (s *Server) ListFrontends(_ context.Context, _ *ListFrontendsRequest) (*ListFrontendsResponse, error) {
	return &ListFrontendsResponse{FrontendNames: s.checker.ListFrontends()}, nil
}

// GetFrontend returns configuration details for a single frontend.
func (s *Server) GetFrontend(_ context.Context, req *GetFrontendRequest) (*FrontendInfo, error) {
	fe, ok := s.checker.GetFrontend(req.Name)
	if !ok {
		return nil, status.Errorf(codes.NotFound, "frontend %q not found", req.Name)
	}
	return frontendToProto(req.Name, fe, s.checker), nil
}

// ListBackends returns the names of all active backends.
func (s *Server) ListBackends(_ context.Context, _ *ListBackendsRequest) (*ListBackendsResponse, error) {
	return &ListBackendsResponse{BackendNames: s.checker.ListBackends()}, nil
}

// GetBackend returns health state for a backend by name.
func (s *Server) GetBackend(_ context.Context, req *GetBackendRequest) (*BackendInfo, error) {
	b, ok := s.checker.GetBackend(req.Name)
	if !ok {
		return nil, status.Errorf(codes.NotFound, "backend %q not found", req.Name)
	}
	return backendToProto(b), nil
}

// PauseBackend pauses health checking for a backend by name.
func (s *Server) PauseBackend(_ context.Context, req *BackendRequest) (*BackendInfo, error) {
	b, err := s.checker.PauseBackend(req.Name)
	if err != nil {
		return nil, status.Errorf(codes.FailedPrecondition, "%v", err)
	}
	return backendToProto(b), nil
}

// ResumeBackend resumes health checking for a backend by name.
func (s *Server) ResumeBackend(_ context.Context, req *BackendRequest) (*BackendInfo, error) {
	b, err := s.checker.ResumeBackend(req.Name)
	if err != nil {
		return nil, status.Errorf(codes.FailedPrecondition, "%v", err)
	}
	return backendToProto(b), nil
}

// EnableBackend re-enables a previously disabled backend.
func (s *Server) EnableBackend(_ context.Context, req *BackendRequest) (*BackendInfo, error) {
	b, ok := s.checker.EnableBackend(req.Name)
	if !ok {
		return nil, status.Errorf(codes.NotFound, "backend %q not found", req.Name)
	}
	return backendToProto(b), nil
}

// DisableBackend disables a backend, stopping its probe goroutine.
func (s *Server) DisableBackend(_ context.Context, req *BackendRequest) (*BackendInfo, error) {
	b, ok := s.checker.DisableBackend(req.Name)
	if !ok {
		return nil, status.Errorf(codes.NotFound, "backend %q not found", req.Name)
	}
	return backendToProto(b), nil
}

// SetFrontendPoolBackendWeight updates the weight of a backend in a pool
// and immediately pushes the change into VPP via a targeted single-VIP
// sync. When req.Flush is true the backend's AS row is rewritten with
// lb_as_set_weight(is_flush=true), which tears down VPP's flow table for
// that AS so existing sessions are dropped; when false the flow table is
// left alone and only Maglev's new-bucket mapping is updated, so existing
// sessions keep reaching this backend until they naturally drain.
func (s *Server) SetFrontendPoolBackendWeight(_ context.Context, req *SetWeightRequest) (*FrontendInfo, error) {
	if req.Weight < 0 || req.Weight > 100 {
		return nil, status.Errorf(codes.InvalidArgument, "weight %d out of range [0, 100]", req.Weight)
	}
	fe, err := s.checker.SetFrontendPoolBackendWeight(req.Frontend, req.Pool, req.Backend, int(req.Weight))
	if err != nil {
		return nil, status.Errorf(codes.NotFound, "%v", err)
	}

	// Push the change into VPP so the operator doesn't have to wait
	// for the periodic 30s reconcile to pick it up. Silently skipped
	// when VPP integration is disabled — the mutation still lands in
	// config and any future sync will reconcile it.
	if s.vppClient != nil && s.vppClient.IsConnected() {
		cfg := s.checker.Config()
		flushAddr := ""
		if req.Flush {
			if b, ok := cfg.Backends[req.Backend]; ok && b.Address != nil {
				flushAddr = b.Address.String()
			}
		}
		if err := s.vppClient.SyncLBStateVIP(cfg, req.Frontend, flushAddr); err != nil && !errors.Is(err, vpp.ErrFrontendNotFound) {
			slog.Warn("set-weight-sync",
				"frontend", req.Frontend, "backend", req.Backend,
				"weight", req.Weight, "flush", req.Flush, "err", err)
		}
	}

	return frontendToProto(req.Frontend, fe, s.checker), nil
}

// ListHealthChecks returns the names of all configured health checks.
func (s *Server) ListHealthChecks(_ context.Context, _ *ListHealthChecksRequest) (*ListHealthChecksResponse, error) {
	return &ListHealthChecksResponse{Names: s.checker.ListHealthChecks()}, nil
}

// GetHealthCheck returns the full configuration for a health check by name.
func (s *Server) GetHealthCheck(_ context.Context, req *GetHealthCheckRequest) (*HealthCheckInfo, error) {
	hc, ok := s.checker.GetHealthCheck(req.Name)
	if !ok {
		return nil, status.Errorf(codes.NotFound, "healthcheck %q not found", req.Name)
	}
	return healthCheckToProto(req.Name, hc), nil
}

// WatchEvents streams events to the client. On connect, the current state of
// every backend and/or frontend is sent as a synthetic event. Afterwards,
// live events are forwarded based on the filter flags in req. An unset (nil)
// flag defaults to true (subscribe). An empty log_level defaults to "info".
func (s *Server) WatchEvents(req *WatchRequest, stream Maglev_WatchEventsServer) error {
	wantLog := req.Log == nil || *req.Log
	wantBackend := req.Backend == nil || *req.Backend
	wantFrontend := req.Frontend == nil || *req.Frontend

	logLevel := slog.LevelInfo
	if req.LogLevel != "" {
		if err := logLevel.UnmarshalText([]byte(req.LogLevel)); err != nil {
			return status.Errorf(codes.InvalidArgument, "invalid log_level %q: must be debug, info, warn, or error", req.LogLevel)
		}
	}

	// Subscribe to log events (nil channel blocks forever when not wanted).
	var logCh <-chan *LogEvent
	if wantLog && s.logs != nil {
		var unsub func()
		logCh, unsub = s.logs.Subscribe(logLevel)
		defer unsub()
	}

	// Subscribe to the checker event stream once; we demultiplex backend
	// and frontend events in the select below. Skip the subscription if
	// neither kind is wanted.
	var eventCh <-chan checker.Event
	if wantBackend || wantFrontend {
		var unsub func()
		eventCh, unsub = s.checker.Subscribe()
		defer unsub()
	}

	// Send initial state snapshot: one synthetic event per existing backend
	// (if wanted), and one per existing frontend (if wanted). Clients that
	// connect mid-flight see the current state immediately instead of
	// waiting for the next transition.
	if wantBackend {
		for _, name := range s.checker.ListBackends() {
			snap, ok := s.checker.GetBackend(name)
			if !ok {
				continue
			}
			ev := &Event{Event: &Event_Backend{Backend: &BackendEvent{
				BackendName: name,
				Transition: &TransitionRecord{
					From:     snap.Health.State.String(),
					To:       snap.Health.State.String(),
					AtUnixNs: 0,
				},
			}}}
			if err := stream.Send(ev); err != nil {
				return err
			}
		}
	}
	if wantFrontend {
		for _, name := range s.checker.ListFrontends() {
			fs, ok := s.checker.FrontendState(name)
			if !ok {
				continue
			}
			ev := &Event{Event: &Event_Frontend{Frontend: &FrontendEvent{
				FrontendName: name,
				Transition: &TransitionRecord{
					From:     fs.String(),
					To:       fs.String(),
					AtUnixNs: 0,
				},
			}}}
			if err := stream.Send(ev); err != nil {
				return err
			}
		}
	}

	for {
		select {
		case <-s.ctx.Done():
			return status.Error(codes.Unavailable, "server shutting down")
		case <-stream.Context().Done():
			return nil
		case le, ok := <-logCh:
			if !ok {
				return nil
			}
			if err := stream.Send(&Event{Event: &Event_Log{Log: le}}); err != nil {
				return err
			}
		case e, ok := <-eventCh:
			if !ok {
				return nil
			}
			if e.FrontendTransition != nil {
				if !wantFrontend {
					continue
				}
				if err := stream.Send(&Event{Event: &Event_Frontend{Frontend: &FrontendEvent{
					FrontendName: e.FrontendName,
					Transition: &TransitionRecord{
						From:     e.FrontendTransition.From.String(),
						To:       e.FrontendTransition.To.String(),
						AtUnixNs: e.FrontendTransition.At.UnixNano(),
					},
				}}}); err != nil {
					return err
				}
				continue
			}
			if !wantBackend {
				continue
			}
			if err := stream.Send(&Event{Event: &Event_Backend{Backend: &BackendEvent{
				BackendName: e.BackendName,
				Transition:  transitionToProto(e.Transition),
			}}}); err != nil {
				return err
			}
		}
	}
}

// CheckConfig reads and validates the configuration file, returning a
// structured result that distinguishes YAML parse errors from semantic errors.
func (s *Server) CheckConfig(_ context.Context, _ *CheckConfigRequest) (*CheckConfigResponse, error) {
	slog.Info("config-check-start", "path", s.configPath)
	_, result := config.Check(s.configPath)
	resp := &CheckConfigResponse{
		Ok:            result.OK(),
		ParseError:    result.ParseError,
		SemanticError: result.SemanticError,
	}
	if result.OK() {
		slog.Info("config-check-done", "result", "ok")
	} else if result.ParseError != "" {
		slog.Info("config-check-done", "result", "failed", "type", "parse", "err", result.ParseError)
	} else {
		slog.Info("config-check-done", "result", "failed", "type", "semantic", "err", result.SemanticError)
	}
	return resp, nil
}

// ReloadConfig checks the configuration file and, if valid, applies it to the
// running checker. This is the same code path used by SIGHUP.
func (s *Server) ReloadConfig(_ context.Context, _ *ReloadConfigRequest) (*ReloadConfigResponse, error) {
	return s.doReloadConfig(), nil
}

// TriggerReload performs a config check and reload. Intended for use by the
// SIGHUP handler so that signals and gRPC share the same code path.
func (s *Server) TriggerReload() {
	s.doReloadConfig()
}

func (s *Server) doReloadConfig() *ReloadConfigResponse {
	slog.Info("config-reload-start")
	newCfg, result := config.Check(s.configPath)
	if !result.OK() {
		if result.ParseError != "" {
			slog.Error("config-check-failed", "type", "parse", "err", result.ParseError)
		} else {
			slog.Error("config-check-failed", "type", "semantic", "err", result.SemanticError)
		}
		return &ReloadConfigResponse{
			ParseError:    result.ParseError,
			SemanticError: result.SemanticError,
		}
	}
	if err := s.checker.Reload(s.ctx, newCfg); err != nil {
		slog.Error("checker-reload-error", "err", err)
		return &ReloadConfigResponse{
			ReloadError: err.Error(),
		}
	}
	// Push new global LB conf to VPP if anything changed. SetLBConf is a
	// no-op when VPP isn't connected or when the values are unchanged.
	if s.vppClient != nil {
		if err := s.vppClient.SetLBConf(newCfg); err != nil {
			slog.Warn("vpp-lb-conf-set-failed", "err", err)
		}
	}
	slog.Info("config-reload-done", "frontends", len(newCfg.Frontends))
	return &ReloadConfigResponse{Ok: true}
}

// GetVPPInfo returns VPP version and runtime information.
func (s *Server) GetVPPInfo(_ context.Context, _ *GetVPPInfoRequest) (*VPPInfo, error) {
	if s.vppClient == nil {
		return nil, status.Error(codes.Unavailable, "VPP integration is disabled")
	}
	info, err := s.vppClient.GetInfo()
	if err != nil {
		return nil, status.Errorf(codes.Unavailable, "%v", err)
	}
	var boottimeNs int64
	if !info.BootTime.IsZero() {
		boottimeNs = info.BootTime.UnixNano()
	}
	return &VPPInfo{
		Version:        info.Version,
		BuildDate:      info.BuildDate,
		BuildDirectory: info.BuildDirectory,
		Pid:            info.PID,
		BoottimeNs:     boottimeNs,
		ConnecttimeNs:  info.ConnectedSince.UnixNano(),
	}, nil
}

// GetVPPLBState returns a snapshot of the VPP load-balancer plugin state.
func (s *Server) GetVPPLBState(_ context.Context, _ *GetVPPLBStateRequest) (*VPPLBState, error) {
	if s.vppClient == nil {
		return nil, status.Error(codes.Unavailable, "VPP integration is disabled")
	}
	state, err := s.vppClient.GetLBStateAll()
	if err != nil {
		return nil, status.Errorf(codes.Unavailable, "%v", err)
	}
	return lbStateToProto(state), nil
}

// GetVPPLBCounters returns the most recent per-VIP and per-backend counter
// snapshot captured by the client's 5s scrape loop. The call is served
// from an in-process cache and does not hit VPP. An empty response is
// returned when VPP is disconnected or no scrape has completed yet.
func (s *Server) GetVPPLBCounters(_ context.Context, _ *GetVPPLBCountersRequest) (*VPPLBCounters, error) {
	if s.vppClient == nil {
		return nil, status.Error(codes.Unavailable, "VPP integration is disabled")
	}
	out := &VPPLBCounters{}
	for _, v := range s.vppClient.VIPStats() {
		out.Vips = append(out.Vips, &VPPLBVIPCounters{
			Prefix:          v.Prefix,
			Protocol:        v.Protocol,
			Port:            uint32(v.Port),
			NextPacket:      v.NextPkt,
			FirstPacket:     v.FirstPkt,
			UntrackedPacket: v.Untracked,
			NoServer:        v.NoServer,
			Packets:         v.Packets,
			Bytes:           v.Bytes,
		})
	}
	sort.Slice(out.Vips, func(i, j int) bool {
		if out.Vips[i].Prefix != out.Vips[j].Prefix {
			return out.Vips[i].Prefix < out.Vips[j].Prefix
		}
		if out.Vips[i].Protocol != out.Vips[j].Protocol {
			return out.Vips[i].Protocol < out.Vips[j].Protocol
		}
		return out.Vips[i].Port < out.Vips[j].Port
	})
	for _, b := range s.vppClient.BackendRouteStats() {
		out.Backends = append(out.Backends, &VPPLBBackendCounters{
			Backend: b.Backend,
			Address: b.Address,
			Packets: b.Packets,
			Bytes:   b.Bytes,
		})
	}
	sort.Slice(out.Backends, func(i, j int) bool {
		return out.Backends[i].Backend < out.Backends[j].Backend
	})
	return out, nil
}

// SyncVPPLBState runs the LB reconciler. With frontend_name unset it does a
// full sync (SyncLBStateAll), which may remove stale VIPs. With frontend_name
// set it does a single-VIP sync (SyncLBStateVIP) that only adds/updates.
func (s *Server) SyncVPPLBState(_ context.Context, req *SyncVPPLBStateRequest) (*SyncVPPLBStateResponse, error) {
	if s.vppClient == nil {
		return nil, status.Error(codes.Unavailable, "VPP integration is disabled")
	}
	cfg := s.checker.Config()
	if req.FrontendName != nil && *req.FrontendName != "" {
		if err := s.vppClient.SyncLBStateVIP(cfg, *req.FrontendName, ""); err != nil {
			if errors.Is(err, vpp.ErrFrontendNotFound) {
				return nil, status.Errorf(codes.NotFound, "%v", err)
			}
			return nil, status.Errorf(codes.Unavailable, "%v", err)
		}
		return &SyncVPPLBStateResponse{}, nil
	}
	if err := s.vppClient.SyncLBStateAll(cfg); err != nil {
		return nil, status.Errorf(codes.Unavailable, "%v", err)
	}
	return &SyncVPPLBStateResponse{}, nil
}

// lbStateToProto converts the vpp package's LBState into the proto message.
func lbStateToProto(s *vpp.LBState) *VPPLBState {
	out := &VPPLBState{
		Conf: &VPPLBConf{
			Ip4SrcAddress:        ipStringOrEmpty(s.Conf.IP4SrcAddress),
			Ip6SrcAddress:        ipStringOrEmpty(s.Conf.IP6SrcAddress),
			StickyBucketsPerCore: s.Conf.StickyBucketsPerCore,
			FlowTimeout:          s.Conf.FlowTimeout,
		},
	}
	for _, v := range s.VIPs {
		pv := &VPPLBVIP{
			Prefix:          v.Prefix.String(),
			Protocol:        uint32(v.Protocol),
			Port:            uint32(v.Port),
			Encap:           v.Encap,
			FlowTableLength: uint32(v.FlowTableLength),
			SrcIpSticky:     v.SrcIPSticky,
		}
		for _, a := range v.ASes {
			var ts int64
			if !a.InUseSince.IsZero() {
				ts = a.InUseSince.UnixNano()
			}
			pv.ApplicationServers = append(pv.ApplicationServers, &VPPLBAS{
				Address:      a.Address.String(),
				Weight:       uint32(a.Weight),
				Flags:        uint32(a.Flags),
				NumBuckets:   a.NumBuckets,
				InUseSinceNs: ts,
			})
		}
		out.Vips = append(out.Vips, pv)
	}
	return out
}

func ipStringOrEmpty(ip net.IP) string {
	if len(ip) == 0 || ip.IsUnspecified() {
		return ""
	}
	return ip.String()
}

// ---- conversion helpers ----------------------------------------------------

func frontendToProto(name string, fe config.Frontend, src vpp.StateSource) *FrontendInfo {
	// Compute the state-aware effective weights once; these reflect the
	// pool-failover logic and what would be programmed into VPP.
	eff := vpp.EffectiveWeights(fe, src)
	pools := make([]*PoolInfo, 0, len(fe.Pools))
	for poolIdx, p := range fe.Pools {
		pi := &PoolInfo{Name: p.Name}
		for bName, pb := range p.Backends {
			pi.Backends = append(pi.Backends, &PoolBackendInfo{
				Name:            bName,
				Weight:          int32(pb.Weight),
				EffectiveWeight: int32(eff[poolIdx][bName]),
			})
		}
		pools = append(pools, pi)
	}
	return &FrontendInfo{
		Name:        name,
		Address:     fe.Address.String(),
		Protocol:    fe.Protocol,
		Port:        uint32(fe.Port),
		Description: fe.Description,
		Pools:       pools,
		SrcIpSticky: fe.SrcIPSticky,
	}
}

func backendToProto(snap checker.BackendSnapshot) *BackendInfo {
	info := &BackendInfo{
		Name:        snap.Health.Name,
		Address:     snap.Health.Address.String(),
		State:       snap.Health.State.String(),
		Enabled:     snap.Config.Enabled,
		Healthcheck: snap.Config.HealthCheck,
	}
	for _, t := range snap.Health.Transitions {
		info.Transitions = append(info.Transitions, transitionToProto(t))
	}
	return info
}

func healthCheckToProto(name string, hc config.HealthCheck) *HealthCheckInfo {
	info := &HealthCheckInfo{
		Name:           name,
		Type:           hc.Type,
		Port:           uint32(hc.Port),
		IntervalNs:     hc.Interval.Nanoseconds(),
		FastIntervalNs: hc.FastInterval.Nanoseconds(),
		DownIntervalNs: hc.DownInterval.Nanoseconds(),
		TimeoutNs:      hc.Timeout.Nanoseconds(),
		Rise:           int32(hc.Rise),
		Fall:           int32(hc.Fall),
	}
	if hc.ProbeIPv4Src != nil {
		info.ProbeIpv4Src = hc.ProbeIPv4Src.String()
	}
	if hc.ProbeIPv6Src != nil {
		info.ProbeIpv6Src = hc.ProbeIPv6Src.String()
	}
	if hc.HTTP != nil {
		re := ""
		if hc.HTTP.ResponseRegexp != nil {
			re = hc.HTTP.ResponseRegexp.String()
		}
		info.Http = &HTTPCheckParams{
			Path:               hc.HTTP.Path,
			Host:               hc.HTTP.Host,
			ResponseCodeMin:    int32(hc.HTTP.ResponseCodeMin),
			ResponseCodeMax:    int32(hc.HTTP.ResponseCodeMax),
			ResponseRegexp:     re,
			ServerName:         hc.HTTP.ServerName,
			InsecureSkipVerify: hc.HTTP.InsecureSkipVerify,
		}
	}
	if hc.TCP != nil {
		info.Tcp = &TCPCheckParams{
			Ssl:                hc.TCP.SSL,
			ServerName:         hc.TCP.ServerName,
			InsecureSkipVerify: hc.TCP.InsecureSkipVerify,
		}
	}
	return info
}

func transitionToProto(t health.Transition) *TransitionRecord {
	return &TransitionRecord{
		From:     t.From.String(),
		To:       t.To.String(),
		AtUnixNs: t.At.UnixNano(),
	}
}

// Ensure net.IP is imported (used via b.Address.String()).
var _ = net.IP{}
```