Dataplane reconcile fixes; LB counters cleanup; SPA scope cookie
Checker / reload:
- Reload's update-in-place branch now mirrors b.Address onto the
runtime health.Backend. Without this, GetBackend kept returning
the pre-reload address indefinitely after a config edit that
touched addresses but not healthcheck settings — the VPP sync
path reads cfg.Backends directly so the dataplane moved on
while the gRPC and SPA views stayed wedged on the old
IPv4/IPv6 address.
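The shape of that branch, as a minimal sketch (ConfigBackend and HealthBackend are hypothetical stand-ins for the real config and runtime health types):

```go
package main

import "fmt"

// Hypothetical stand-ins for the real config and runtime health types.
type ConfigBackend struct {
	Name    string
	Address string
}

type HealthBackend struct {
	Name    string
	Address string // what GetBackend and the SPA report
}

// applyReload sketches the update-in-place branch: when a reload keeps a
// backend (healthcheck settings unchanged), it must still mirror the
// possibly-edited address onto the runtime entry; otherwise readers of
// the runtime view keep returning the pre-reload address.
func applyReload(runtime map[string]*HealthBackend, cfg []ConfigBackend) {
	for _, b := range cfg {
		if rb, ok := runtime[b.Name]; ok {
			rb.Address = b.Address // the fix: keep the runtime view in sync
			continue
		}
		runtime[b.Name] = &HealthBackend{Name: b.Name, Address: b.Address}
	}
}

func main() {
	rt := map[string]*HealthBackend{"web1": {Name: "web1", Address: "192.0.2.10"}}
	applyReload(rt, []ConfigBackend{{Name: "web1", Address: "192.0.2.20"}})
	fmt.Println(rt["web1"].Address) // → 192.0.2.20, not the stale address
}
```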
Sync (internal/vpp/lbsync.go):
- reconcileVIP now detects encap mismatch in addition to
src-ip-sticky mismatch and takes the full tear-down / re-add
path via a new shared recreateVIP helper. Triggered when every
backend flips address family (gre4 <-> gre6) and the existing
VIP can no longer accept new ASes — previously the sync wedged
with 'Invalid address family' until a full maglevd restart.
- setASWeight is issued whenever the state machine requests
flush (a.Flush=true), not only on the weight-value transition
edge. Fixes the case where a backend reached StateDisabled
after its effective weight had already been drained to 0 by
pool failover — the sticky-cache entries pointing at it were
previously never cleared.
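The flush decision reduces to a predicate like the following (asAction and its fields are hypothetical stand-ins for the real state-machine action):

```go
package main

import "fmt"

// asAction is a hypothetical stand-in for the action the sync state
// machine emits per application server (AS).
type asAction struct {
	Weight     uint32 // weight requested now
	PrevWeight uint32 // weight currently programmed in VPP
	Flush      bool   // state machine requests a sticky-cache flush
}

// needSetASWeight says whether to issue the weight/flush call to VPP.
// The old predicate was only the weight edge (Weight != PrevWeight), so
// a backend whose effective weight had already been drained to 0 by
// pool failover never got its sticky-cache entries cleared when it
// later reached StateDisabled with Flush=true but no weight change.
func needSetASWeight(a asAction) bool {
	return a.Flush || a.Weight != a.PrevWeight
}

func main() {
	// Already drained to 0, now disabled: the call must still be issued.
	fmt.Println(needSetASWeight(asAction{Weight: 0, PrevWeight: 0, Flush: true})) // true
}
```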
maglev-frontend:
- signal.Ignore(SIGHUP) so a controlling-terminal disconnect
doesn't kill the daemon.
- debian/vpp-maglev.service grants CAP_SYS_ADMIN in addition to
CAP_NET_RAW so setns(CLONE_NEWNET) can join the healthcheck
netns. Comment documents the 'operation not permitted' symptom
and notes the knob can be dropped if the deployment doesn't use
the 'netns:' healthcheck option.
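The SIGHUP change is one line; a self-contained demonstration (Linux-only, since it delivers the signal to its own pid):

```go
package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
)

// ignoreHUPAndPoke ignores SIGHUP and then proves the disposition took
// effect by delivering SIGHUP to our own process. With the default
// disposition (terminate), the process would die before returning.
func ignoreHUPAndPoke() string {
	signal.Ignore(syscall.SIGHUP)
	syscall.Kill(os.Getpid(), syscall.SIGHUP)
	return "survived SIGHUP"
}

func main() {
	fmt.Println(ignoreHUPAndPoke())
}
```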
LB plugin counters (internal/vpp/lbstats.go + friends):
- Fix the VIP counter regex: the LB plugin registers
vlib_simple_counter_main_t names without a leading '/'
(vlib_validate_simple_counter in counter.c:50 uses cm->name
verbatim; only entries that set cm->stat_segment_name get a
slash). first/next/untracked/no-server now read through as
live values instead of zero.
- Drop the per-backend FIB counter block end-to-end (proto,
grpcapi, metrics, vpp.Client, lbstats, maglevc). Traced from
lb/node.c:558 into ip{4,6}_forward.h:141 — the LB plugin
forwards by writing adj_index[VLIB_TX] directly and bypassing
ip{4,6}_lookup_inline, which is the only path that increments
lbm_to_counters. The backend's FIB load_balance stats_index
literally never ticks for LB-forwarded traffic, so the column
was always zero and misleading. docs/implementation/TODO
records the full investigation and the recommended upstream
path (new lb_as_stats_dump API message) for when we're ready
to carry that VPP patch.
- maglevc show vpp lb counters: the table header row is now plain text.
label() wraps strings in ANSI escapes (~11 bytes of overhead),
but tabwriter counts bytes, not rendered width — so a header
row with label()'d cells and data rows with plain cells drifts
column alignment on every row. color.go comment now spells
out the constraint: label() only works when column N is
wrapped identically in every row (key-value layouts are fine,
multi-column tables with header-only labelling are not).
SPA:
- stores/scope.ts is cookie-backed (maglev_scope, 1 year,
SameSite=Lax). App.tsx hydrates from the cookie then validates
against the fetched snapshots: a cookie referencing a maglevd
that no longer exists falls through to snaps[0] instead of
leaving the user on a ghost selection.
- components/Flash.tsx wraps props.value in createMemo. Solid's
on() fires its callback on every dep notification, not on
value change — source is right in solid-js/dist/solid.js:460,
no equality check. Without the memo, flipping scope between
two 'connected' maglevds (or any other cross-store reactive
re-eval that doesn't actually change the concrete string)
replays the animation every time. createMemo's default ===
dedupe fixes it in one place for every Flash consumer,
superseding the local createMemo workaround we'd added in
BackendRow earlier.
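The hydration code itself is TypeScript in stores/scope.ts / App.tsx; the fallback rule is small enough to sketch here in Go (selectScope is an illustrative name, not real code):

```go
package main

import "fmt"

// selectScope sketches App.tsx's hydration rule: honor the cookie's
// maglevd name only if it still exists in the fetched snapshots,
// otherwise fall back to the first snapshot so the UI never sits on a
// ghost selection. The bool reports whether the cookie was honored.
func selectScope(cookie string, snaps []string) (string, bool) {
	for _, s := range snaps {
		if s == cookie {
			return s, true
		}
	}
	if len(snaps) > 0 {
		return snaps[0], false
	}
	return "", false
}

func main() {
	s, _ := selectScope("lb-decom", []string{"lb1", "lb2"})
	fmt.Println(s) // → lb1: a stale cookie falls through to snaps[0]
}
```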
@@ -66,17 +66,6 @@ type VIPStatEntry struct {
 	Bytes uint64
 }
 
-// BackendRouteStat is a point-in-time snapshot of the FIB combined counter
-// (/net/route/to) for a single backend's host prefix. Values are summed
-// across worker threads. Labels match the backend's identity so a time
-// series corresponds 1:1 to a maglev backend entry.
-type BackendRouteStat struct {
-	Backend string // backend name from the config
-	Address string // backend IP address as a string (e.g. "192.0.2.10")
-	Packets uint64
-	Bytes   uint64
-}
-
 // VPPSource provides read-only access to the VPP client's state. vpp.Client
 // is adapted to this interface via a small shim in the collector so the
 // metrics package stays decoupled from the vpp package's concrete types.
@@ -87,11 +76,6 @@ type VPPSource interface {
 	// counters, as captured by the LB stats loop. Returns nil when VPP is
 	// disconnected or no scrape has happened yet.
 	VIPStats() []VIPStatEntry
-	// BackendRouteStats returns the most recent snapshot of per-backend
-	// FIB combined counters (/net/route/to), as captured by the LB stats
-	// loop. Returns nil when VPP is disconnected, no scrape has happened
-	// yet, or the route lookup for every backend failed.
-	BackendRouteStats() []BackendRouteStat
 }
 
 // ---- inline metrics (updated per probe) ------------------------------------
@@ -160,11 +144,9 @@ type Collector struct {
 	vppConnectedFor *prometheus.Desc
 	vppInfo         *prometheus.Desc
 
-	vipPackets       *prometheus.Desc // per-VIP LB counters from stats segment
-	vipRoutePkts     *prometheus.Desc // per-VIP FIB combined counter: packets
-	vipRouteByts     *prometheus.Desc // per-VIP FIB combined counter: bytes
-	backendRoutePkts *prometheus.Desc // per-backend FIB combined counter: packets
-	backendRouteByts *prometheus.Desc // per-backend FIB combined counter: bytes
+	vipPackets   *prometheus.Desc // per-VIP LB counters from stats segment
+	vipRoutePkts *prometheus.Desc // per-VIP FIB combined counter: packets
+	vipRouteByts *prometheus.Desc // per-VIP FIB combined counter: bytes
 }
 
 // NewCollector creates a Collector backed by the given StateSource. vpp may
@@ -229,16 +211,6 @@ func NewCollector(src StateSource, vpp VPPSource) *Collector {
 			"Bytes forwarded by VPP's FIB toward each VIP's host prefix (from /net/route/to), summed across workers.",
 			[]string{"prefix", "protocol", "port"}, nil,
 		),
-		backendRoutePkts: prometheus.NewDesc(
-			"maglev_vpp_backend_route_packets_total",
-			"Packets forwarded by VPP's FIB toward each backend's host prefix (from /net/route/to), summed across workers.",
-			[]string{"backend", "address"}, nil,
-		),
-		backendRouteByts: prometheus.NewDesc(
-			"maglev_vpp_backend_route_bytes_total",
-			"Bytes forwarded by VPP's FIB toward each backend's host prefix (from /net/route/to), summed across workers.",
-			[]string{"backend", "address"}, nil,
-		),
 	}
 }
 
@@ -255,8 +227,6 @@ func (c *Collector) Describe(ch chan<- *prometheus.Desc) {
 	ch <- c.vipPackets
 	ch <- c.vipRoutePkts
 	ch <- c.vipRouteByts
-	ch <- c.backendRoutePkts
-	ch <- c.backendRouteByts
 }
 
 // Collect implements prometheus.Collector.
@@ -353,6 +323,13 @@ func (c *Collector) Collect(ch chan<- prometheus.Metric) {
 	// stats loop in internal/vpp. CounterValue so rate()/increase() work
 	// as expected; VPP counter resets (e.g. VIP recreate) are handled by
 	// Prometheus's built-in counter-reset detection.
+	//
+	// No per-backend counters are exposed here: the LB plugin's
+	// forwarding node sets adj_index[VLIB_TX] directly and bypasses
+	// ip{4,6}_lookup_inline, which is the only path that increments
+	// lbm_to_counters — so /net/route/to at the backend's stats_index
+	// never ticks for LB-forwarded traffic. See the comment block in
+	// internal/vpp/lbstats.go::scrapeLBStats for the full chain.
 	for _, v := range c.vpp.VIPStats() {
 		port := fmt.Sprintf("%d", v.Port)
 		ch <- prometheus.MustNewConstMetric(c.vipPackets, prometheus.CounterValue, float64(v.NextPkt), v.Prefix, v.Protocol, port, "next")
@@ -362,13 +339,6 @@ func (c *Collector) Collect(ch chan<- prometheus.Metric) {
 		ch <- prometheus.MustNewConstMetric(c.vipRoutePkts, prometheus.CounterValue, float64(v.Packets), v.Prefix, v.Protocol, port)
 		ch <- prometheus.MustNewConstMetric(c.vipRouteByts, prometheus.CounterValue, float64(v.Bytes), v.Prefix, v.Protocol, port)
 	}
-
-	// Per-backend FIB counters from /net/route/to. Same CounterValue
-	// semantics as above.
-	for _, b := range c.vpp.BackendRouteStats() {
-		ch <- prometheus.MustNewConstMetric(c.backendRoutePkts, prometheus.CounterValue, float64(b.Packets), b.Backend, b.Address)
-		ch <- prometheus.MustNewConstMetric(c.backendRouteByts, prometheus.CounterValue, float64(b.Bytes), b.Backend, b.Address)
-	}
 }
 
 // Register registers all metrics with the given registry. vpp may be nil