install-deps Makefile target; docs refresh; golangci-lint v2 clean

Makefile:
- New install-deps umbrella target split into three sub-targets:
  install-deps-apt        — Debian/Trixie-packaged build deps
                            (nodejs, npm, protobuf-compiler, git, make,
                            dpkg-dev, ca-certificates, curl, tar). Uses
                            sudo when not already root.
  install-deps-go         — ensures a Go toolchain >= GO_VERSION (go.mod
                            floor, default 1.25.0). Short-circuits when
                            the system Go is already recent enough;
                            otherwise downloads the upstream tarball
                            from go.dev/dl/ into /usr/local/go. Trixie
                            only ships Go 1.24, so this step is
                            load-bearing (the floor comparison is
                            sketched in Go below).
  install-deps-go-tools   — go install protoc-gen-go, protoc-gen-go-grpc,
                            and golangci-lint/v2/cmd/golangci-lint. Then
                            asserts the installed golangci-lint version
                            parses as >= GOLANGCI_LINT_VERSION (default
                            1.64.0, the floor that supports Go 1.25
                            syntax) to catch stale binaries in
                            $GOPATH/bin before they silently run
                            against Go 1.25 code.
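
  The install-deps-go version-floor comparison, sketched in Go purely
  for illustration (the target itself does this in shell; the version
  literals are assumptions, go/version is stdlib since Go 1.22):

    package main

    import (
        "fmt"
        "go/version"
    )

    func main() {
        const floor = "go1.25.0" // GO_VERSION, the go.mod floor
        have := "go1.24.4"       // roughly what Trixie packages
        if version.Compare(have, floor) >= 0 {
            fmt.Println("system Go is recent enough; skipping download")
        } else {
            fmt.Println("fetching upstream tarball from go.dev/dl/")
        }
    }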
- Parser bug fixed: golangci-lint v1.x prints "has version v1.64.8" but
  v2.x dropped the 'v' prefix and prints "has version 2.11.4". The
  original sed regex required the 'v' and returned an empty match on
  v2.x, making the assertion explode with "could not parse version
  output". Fixed by switching to extended regex (sed -En) with 'v?' so
  both forms parse cleanly.
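
  A standalone reproduction of the relaxed match, in Go rather than
  sed (the surrounding output text is an assumption; only the 'v?'
  shape mirrors the actual fix):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        re := regexp.MustCompile(`has version v?([0-9][0-9.]*)`)
        for _, out := range []string{
            "golangci-lint has version v1.64.8", // v1.x format
            "golangci-lint has version 2.11.4",  // v2.x format
        } {
            fmt.Println(re.FindStringSubmatch(out)[1]) // 1.64.8, then 2.11.4
        }
    }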
- GO_VERSION and GOLANGCI_LINT_VERSION exposed as Makefile variables
  so operators can override on the command line, e.g.
    make install-deps GO_VERSION=1.25.5 GOLANGCI_LINT_VERSION=2.0.0
- .PHONY extended with the four new target names.

Docs:
- README.md: capability note rewritten to cover CAP_NET_RAW (ICMP) and
  the new CAP_SYS_ADMIN requirement when healthchecker.netns is set,
  plus a paragraph explaining that the Debian systemd unit grants both
  automatically. Docker example gained a second variant that shows the
  additional --cap-add SYS_ADMIN and /var/run/netns bind mount for
  netns-scoped deployments. Also notes that maglevd-frontend ignores
  SIGHUP so controlling-terminal disconnects don't kill it.
- docs/user-guide.md:
  - Capabilities section rewritten as a bulleted list covering both
    caps, with the EPERM error string and three different ways to
    grant them (systemd unit, setcap, systemd-run).
  - 'show vpp lb counters' command description updated to explain
    that per-backend packet counts are no longer shown (the LB
    plugin's forwarding node bypasses ip{4,6}_lookup_inline, so
    /net/route/to at the backend's FIB entry never ticks for
    LB-forwarded traffic).
  - New ~75-line "What the SPA shows" subsection covering the scope
    selector + maglev_scope cookie, the per-maglevd frontend cards,
    the health-cascade icon table (ok / bug-buckets / primary-drained
    / degraded / unknown), the lb buckets column semantics, the
    maglev_zippy_open cookie, the admin-mode lifecycle dialogs with
    their plain-English consequence text, and the debug panel.
- docs/config-guide.md: healthchecker.netns field gains a capability-
  requirement note spelling out setns(CLONE_NEWNET), the EPERM
  symptom string, and the /var/run/netns/ readability requirement.
- docs/healthchecks.md: new "Jitter" subsection explaining the +/-10%
  scaling on every computed interval, and a "Probe timing while a
  probe is in flight" subsection that explains why fast-interval alone
  doesn't give fast fault detection against hanging backends (the
  probe loop is synchronous, so each iteration is timeout +
  fast-interval; the advice is to lower timeout, not fast-interval).
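
  Both behaviours fall out of the loop shape. A minimal sketch, with
  probe, probeLoop, and the interval literals as hypothetical
  stand-ins for the real prober internals:

    package main

    import (
        "math/rand"
        "time"
    )

    // probe is a stand-in; a hanging backend burns the full timeout.
    func probe(timeout time.Duration) bool {
        time.Sleep(timeout)
        return false
    }

    func probeLoop(timeout, fastInterval time.Duration) {
        for {
            ok := probe(timeout) // synchronous: blocks up to timeout
            interval := fastInterval
            if ok {
                interval = 10 * time.Second // the configured interval
            }
            // +/-10% jitter: scale by a uniform factor in [0.9, 1.1)
            jittered := time.Duration(float64(interval) * (0.9 + 0.2*rand.Float64()))
            // Against a hanging backend each iteration therefore costs
            // timeout + jittered fast-interval, which is why lowering
            // timeout (not fast-interval) speeds up fault detection.
            time.Sleep(jittered)
        }
    }

    func main() { probeLoop(2*time.Second, 500*time.Millisecond) }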
- docs/maglevd.8: description paragraph corrected (dropped the
  per-backend stats claim and added a short note pointing at the LB
  plugin forwarding-path bypass); new CAPABILITIES section between
  SIGNALS and FILES covering both CAP_NET_RAW and CAP_SYS_ADMIN with
  the drop-in-override hint.
- docs/maglevd-frontend.8: new SIGNALS section documenting the
  explicit SIGHUP ignore (so a controlling-terminal disconnect doesn't
  kill the daemon); description extended with paragraphs on the two
  persistence cookies (maglev_scope, maglev_zippy_open) and on the
  health-cascade icon + lb buckets column.
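
  The ignore itself is a one-liner at startup; a sketch of the shape
  (not necessarily how maglevd-frontend actually wires its signal
  handling):

    package main

    import (
        "os/signal"
        "syscall"
    )

    func main() {
        // Without this, a controlling-terminal disconnect delivers
        // SIGHUP and the default disposition kills the process with
        // "Hangup".
        signal.Ignore(syscall.SIGHUP)
        select {} // stand-in for the real serve loop
    }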
- docs/maglevc.1: left untouched — intentionally minimal and delegates
  to docs/user-guide.md.

Lint (26 issues across 12 files, all errcheck / ineffassign / S1021):
- cmd/frontend/handlers.go: _, _ = fmt.Fprintf(...) for the SSE retry
  hint and resync control-event writes.
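
  The discard shape on the SSE stream, sketched with made-up retry
  and payload values (only the _, _ = pattern mirrors the actual
  fix):

    package main

    import (
        "fmt"
        "net/http"
    )

    func stream(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "text/event-stream")
        // Write errors are deliberately discarded: a vanished client
        // is detected on the next write anyway.
        _, _ = fmt.Fprintf(w, "retry: %d\n\n", 5000)        // reconnect hint (ms)
        _, _ = fmt.Fprint(w, "event: resync\ndata: {}\n\n") // control event
    }

    func main() { _ = http.ListenAndServe("localhost:0", http.HandlerFunc(stream)) }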
- cmd/maglevc/commands.go: bulk-prefix every fmt.Fprintf(w, ...) with
  _, _ =; also merged 'var watchEventsOptSlot *Node; ... = &Node{...}'
  into a single := declaration (staticcheck S1021) — the self-
  referencing pattern still works because the Children back-ref is
  assigned on the next statement, not inside the struct literal.
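
  The shape of that merge, with a hypothetical Node type standing in
  for the real completion-tree struct:

    package main

    type Node struct {
        Name     string
        Children []*Node
    }

    func watchEventsSlot() *Node {
        // before (S1021):
        //   var watchEventsOptSlot *Node
        //   watchEventsOptSlot = &Node{Name: "events"}
        // after: one short declaration; the self-reference still
        // works because it runs on the next statement, once the
        // variable already exists.
        watchEventsOptSlot := &Node{Name: "events"}
        watchEventsOptSlot.Children = []*Node{watchEventsOptSlot}
        return watchEventsOptSlot
    }

    func main() { _ = watchEventsSlot() }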
- cmd/maglevc/complete.go: _, _ = fmt.Fprintf(ql.rl.Stderr(), ...)
  for the banner and help writes; removed the ineffectual
  'partial = ""' assignment (nothing downstream reads partial after
  that branch, so setting it was dead code flagged by ineffassign).
- cmd/maglevc/shell.go: defer func() { _ = rl.Close() }() for the
  readline instance; _, _ = fmt.Fprintf(rl.Stderr(), ...) for error
  display in the REPL loop.
- cmd/maglevc/main.go: defer func() { _ = conn.Close() }() for the
  gRPC client connection.
- internal/grpcapi/server_test.go: _ = conn.Close() in the test
  teardown closure.
- internal/prober/http.go: _ = c.Close() in the TLS-handshake-failed
  path; defer func() { _ = conn.Close() }() and defer func() { _ =
  resp.Body.Close() }() for the two deferred cleanups.
- internal/prober/http_test.go: defer func() { _ = resp.Body.Close()
  }() plus three _, _ = fmt.Fprint(w, ...) in the httptest.Server
  handlers and _, _ = fmt.Sscanf(...) when parsing the test listener's
  port.
- internal/prober/icmp.go: defer func() { _ = pc.Close() }() for the
  ICMP packet conn.
- internal/prober/netns.go: defer func() { _ = origNs.Close() }(),
  defer func() { _ = netns.Set(origNs) }(), defer func() { _ =
  targetNs.Close() }() — also dropped a stray //nolint:errcheck that
  was no longer needed once the closure wrapping handled the discard.
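
  The resulting enter/restore shape, assuming the vishvananda/netns
  package (which the netns.Set / handle-close calls above suggest)
  and a hypothetical inNetns helper:

    package main

    import (
        "fmt"
        "runtime"

        "github.com/vishvananda/netns" // assumed dependency
    )

    // inNetns runs fn inside the named namespace, then restores the
    // original one; every cleanup discards its error explicitly.
    func inNetns(name string, fn func() error) error {
        runtime.LockOSThread() // setns(2) is per-thread
        defer runtime.UnlockOSThread()

        origNs, err := netns.Get()
        if err != nil {
            return err
        }
        defer func() { _ = origNs.Close() }()

        targetNs, err := netns.GetFromName(name)
        if err != nil {
            return fmt.Errorf("enter netns %q: %w", name, err)
        }
        defer func() { _ = targetNs.Close() }()

        if err := netns.Set(targetNs); err != nil {
            // EPERM lands here without CAP_SYS_ADMIN
            return fmt.Errorf("enter netns %q: %w", name, err)
        }
        defer func() { _ = netns.Set(origNs) }()

        return fn()
    }

    func main() { _ = inNetns("probe-ns", func() error { return nil }) }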
- internal/prober/tcp.go: _ = conn.Close() in the L4-only path,
  _ = tlsConn.Close() in the failed and succeeded handshake branches,
  _ = tlsConn.SetDeadline(...) (also dropped a //nolint:errcheck
  previously covering it).

Iterative 'make lint' runs were needed because golangci-lint v2.x
caps same-linter reports per pass, so successive passes reported 21,
4, 3, 1, and finally 0 issues. make test is green across every
package, and make build produces all three binaries cleanly.
commit 744b1cb3d2 (parent 224167ce39)
2026-04-14 17:37:43 +02:00
18 changed files with 502 additions and 107 deletions

docs/config-guide.md

@@ -57,6 +57,22 @@ Global settings for the health checker engine.
 empty or omitted, probes run in the current (default) network namespace. Useful when
 backends are reachable only through a dedicated dataplane namespace.
+
+**Capability requirement**: setting this field makes `maglevd` call
+`setns(CLONE_NEWNET)` on the probe thread before each probe, which the
+kernel only permits to processes holding `CAP_SYS_ADMIN` in the target
+namespace's user namespace (`setns(2)`). The Debian systemd unit
+(`vpp-maglev.service`) already grants this capability; if you run
+`maglevd` by hand under a non-root user make sure the binary has
+`CAP_SYS_ADMIN` via `setcap cap_net_raw,cap_sys_admin=eip
+/usr/sbin/maglevd` or equivalent, otherwise every probe fails with
+`enter netns "<name>": operation not permitted` and all backends
+transition to `down` on their first probe.
+
+Also make sure the named namespace is mounted under `/var/run/netns/`
+(which is where `ip netns add` puts it) and that it is readable by
+the user `maglevd` runs as — the default mode from `ip netns add` is
+`0644`, which is fine for any user.
+
 Example:
 ```yaml
 maglev:

docs/healthchecks.md

@@ -88,6 +88,28 @@ recovering backend is re-evaluated quickly without waiting a full `interval`.
 Using `down-interval` for fully down backends reduces probe traffic to servers
 that are known to be offline.
+
+### Jitter
+
+Every computed interval is then scaled by a uniformly-distributed random
+factor in `[0.9, 1.1)` before the probe worker sleeps. The `±10%` jitter
+prevents all probes from aligning on the same tick after a restart or a
+config reload — a deployment with dozens of backends would otherwise send a
+bursty, phase-locked flight of probes every `interval`. The jitter is
+applied once per probe iteration, not averaged across iterations, so the
+long-run cadence is still the configured `interval`.
+
+### Probe timing while a probe is in flight
+
+The probe worker loop is synchronous: each iteration blocks on the probe's
+completion (or its `timeout`) before computing the next `sleepFor`. That
+means a fully-timing-out probe effectively runs at
+`timeout + fast-interval` cadence, not `fast-interval` cadence. If you
+want fast fault detection against backends that hang rather than refuse
+the connection (e.g. a dead TCP stack, or an unreachable backend via a
+blackhole route), lower `timeout` rather than `fast-interval`. Setting
+`fast-interval` below `timeout` doesn't make probes fire more frequently —
+it just changes the idle gap between a completed probe and the next one.
+
 ---
 ## Transition events

docs/maglevd-frontend.8

 and
 are set to non\-empty values at startup; otherwise
 .B /admin/
 returns 404 and the SPA hides the admin\-toggle button entirely.
+.PP
+Per\-user persistent state lives in two cookies:
+.B maglev_scope
+remembers which maglevd the user was last looking at (hydrated on
+page load and reconciled against the fetched server list, so a
+removed/renamed maglevd falls through cleanly instead of leaving a
+ghost selection), and
+.B maglev_zippy_open
+remembers which collapsible cards are open, scoped per\-maglevd so
+opening a frontend card on one server doesn't affect the equivalent
+card on another. Both are
+.BR "Path=/; Max-Age=1y; SameSite=Lax" ,
+are best\-effort (a missing or corrupt value just falls back to
+"everything closed" / "first maglevd"), and hold no sensitive data.
+.PP
+The SPA shows a health\-cascade icon next to every frontend name:
+.B \(OK
+for fully healthy, a double\-bang for a control\-plane vs dataplane
+disagreement (eff_weight > 0 but zero VPP buckets), an exclamation
+mark for a fully\-drained primary pool, a warning triangle for any
+backend not in
+.B up
+state, and a question mark as a fallthrough for logic bugs in the
+cascade. The
+.B "lb buckets"
+column on each backend row reports VPP's Maglev hash table share
+for that AS, debounced to at most one
+.B GetVPPLBState
+fetch per second per maglevd and refreshed live on every backend
+transition or weight edit.
 .SH OPTIONS
 Each flag may also be supplied via an environment variable (shown in
 parentheses); the flag takes precedence when both are set. All env
@@ -154,6 +184,30 @@ Returns the fresh backend snapshot as JSON.
 Weight change POST. Body is
 .B {"weight": 0\-100, "flush": bool} .
 Returns the fresh frontend snapshot as JSON.
+.SH SIGNALS
+.TP
+.BR SIGTERM ", " SIGINT
+Graceful shutdown: active gRPC streams are closed, the HTTP server
+drains, then the process exits.
+.TP
+.B SIGHUP
+Explicitly ignored. A controlling\-terminal disconnect (closing the
+SSH session the dashboard was started from, for example) would
+otherwise deliver
+.B SIGHUP
+under Go's default handler and terminate the process with
+.BR Hangup .
+Since
+.B maglevd\-frontend
+has no config file beyond its command\-line flags there is nothing
+meaningful to
+.I reload
+on
+.BR SIGHUP ,
+and inheriting the default "exit on hangup" semantics is the wrong
+behaviour for a long\-running network daemon. Use
+.B SIGTERM
+for clean shutdown instead.
 .SH REVERSE PROXY NOTES
 The SSE stream has a handful of operational requirements that every
 reverse proxy must satisfy:

docs/maglevd.8

@@ -36,11 +36,19 @@ default 30s), on
 reloads, and on operator request via
 .BR maglevc .
 .PP
-The aggregated backend state, VPP dataplane state, and per\-VIP /
-per\-backend stats\-segment counters are exposed via a gRPC API (and
-scraped into Prometheus when the
+The aggregated backend state, VPP dataplane state, and per\-VIP
+stats\-segment counters are exposed via a gRPC API (and scraped
+into Prometheus when the
 .B /metrics
-endpoint is enabled).
+endpoint is enabled). Per\-backend packet counters are intentionally
+not exposed: VPP's LB plugin forwards by writing
+.B adj_index[VLIB_TX]
+directly and bypassing
+.BR ip4_lookup_inline " / " ip6_lookup_inline ,
+which is the only path that increments
+.BR /net/route/to ,
+so the backend's FIB entry stats index never ticks for LB\-forwarded
+traffic.
 See
 .BR maglevc (1)
 for the interactive CLI client.
@@ -94,6 +102,42 @@ immediately.
 Gracefully shut down: drain active gRPC streams, then exit. VPP
 dataplane state is left in place so that existing VIPs continue to
 forward traffic during a restart.
+.SH CAPABILITIES
+.TP
+.B CAP_NET_RAW
+Required when any health check uses
+.BR "type: icmp" .
+Raw sockets for ICMP echo. TCP and HTTP(S) checks use normal TCP
+sockets and need no special capability.
+.TP
+.B CAP_SYS_ADMIN
+Required when the
+.B healthchecker.netns
+field is set in the YAML configuration. The probe loop calls
+.BR setns (2)
+with
+.B CLONE_NEWNET
+to enter the target network namespace before each probe; the
+kernel only permits that to processes holding
+.B CAP_SYS_ADMIN
+in the target namespace's user namespace. Without it, every probe
+fails with
+.B enter netns "<name>": operation not permitted
+and every backend flips to
+.B down
+on its first probe. Omit the capability when the deployment doesn't
+use namespace\-scoped health checks \(em the Debian systemd unit
+ships with both
+.B CAP_NET_RAW
+and
+.B CAP_SYS_ADMIN
+in its
+.B AmbientCapabilities
+and
+.B CapabilityBoundingSet
+by default, and operators can drop
+.B CAP_SYS_ADMIN
+via a drop\-in override if they prefer the narrower surface.
 .SH FILES
 .TP
 .I /etc/vpp-maglev/maglev.yaml

docs/user-guide.md

@@ -32,9 +32,34 @@ are used for anything not set.
 ### Capabilities
-`maglevd` requires `CAP_NET_RAW` when any health check uses `type: icmp`.
-All other check types (`tcp`, `http`) use normal TCP sockets and require no
-special capabilities.
+`maglevd` requires:
+
+- **`CAP_NET_RAW`** when any health check uses `type: icmp` — raw
+  sockets for ICMP echo. `tcp`, `http`, and `https` checks use
+  normal TCP sockets and do not need this capability.
+- **`CAP_SYS_ADMIN`** when `healthchecker.netns` is set in the
+  config — the probe loop calls `setns(CLONE_NEWNET)` to join the
+  target network namespace, and the kernel only permits that to
+  processes holding `CAP_SYS_ADMIN` in the target's user namespace
+  (see `setns(2)`). Without it the probe fails with
+  `enter netns "<name>": operation not permitted` and every backend
+  flips to `down` / `L4CON` on its first probe.
+
+The Debian systemd unit (`vpp-maglev.service`) grants both via
+`AmbientCapabilities` and `CapabilityBoundingSet`, so
+`systemctl start vpp-maglev` works out of the box under the
+unprivileged `maglevd` user. When running the binary by hand under
+a non-root account, either:
+
+- `setcap cap_net_raw,cap_sys_admin=eip /usr/sbin/maglevd` once at
+  install time, or
+- run under `systemd-run -p AmbientCapabilities='CAP_NET_RAW CAP_SYS_ADMIN' ...`
+  for ad-hoc tests.
+
+If your deployment doesn't use `netns:` at all, drop
+`CAP_SYS_ADMIN` from the bounding set in the service unit — it's a
+broad capability and there's no value in keeping it when nothing
+calls `setns`.
 ### Logging
@@ -139,14 +164,22 @@ show vpp lb state Show the VPP load-balancer plugin state: global
                              configuration, configured VIPs, and their attached
                              application servers (address, weight, bucket count).
                              Returns an error if VPP is not connected.
-show vpp lb counters        Show per-VIP and per-backend packet/byte counters
-                            from the VPP stats segment, refreshed roughly every
-                            five seconds by maglevd. Each VIP row reports the LB
-                            plugin counters (next, first, untracked, no-server)
-                            and the FIB packets/bytes at the VIP's host prefix.
-                            Each backend row reports FIB packets/bytes at the
-                            backend's /32 or /128 prefix. Use Prometheus for
-                            live rates; this command shows absolute values.
+show vpp lb counters        Show per-VIP packet/byte counters from the VPP stats
+                            segment, refreshed roughly every five seconds by
+                            maglevd. Each row reports the four LB plugin counters
+                            (first, next, untracked, no-server) and the FIB
+                            packets/bytes at the VIP's host prefix. Use Prometheus
+                            for live rates; this command shows absolute values.
+                            Per-backend packet counters are not shown: VPP's LB
+                            plugin forwarding node writes adj_index[VLIB_TX]
+                            directly and bypasses ip{4,6}_lookup_inline, which is
+                            the only path that increments /net/route/to. The
+                            backend's FIB load_balance stats_index therefore
+                            never ticks for LB-forwarded traffic, and exposing
+                            zeros would mislead. See docs/implementation/TODO
+                            for the upstream path that would fix this (new
+                            lb_as_stats_dump API message).
 sync vpp lb state [<name>]  Reconcile the VPP load-balancer dataplane from the
                             running config. Without a name: runs a full sync —
@@ -285,6 +318,79 @@ the SPA's "admin…" toggle becomes visible. When either is missing or
 empty the `/admin/` route returns 404 and the SPA hides the toggle —
 `/view/` is always reachable read-only.
+
+### What the SPA shows
+
+After the dashboard loads, the header carries a **scope selector**:
+one pill per configured maglevd, coloured green when the frontend's
+gRPC channel to that maglevd is alive and red when it's dropped.
+Click a pill to flip the view to that maglevd's frontends. Your
+selection is persisted in a `maglev_scope` cookie (Path=/;
+Max-Age=1y; SameSite=Lax), so the next page load lands on the same
+server you were last looking at. If the cookie references a
+maglevd that's no longer in the server list (it was removed from
+`-server` or renamed), the hydration path falls through to the
+first maglevd in the list instead of leaving you on a ghost
+selection.
+
+The **frontend list** is a stack of collapsible cards
+(`<details>` elements) — one per VIP. Each card header shows a
+fixed-width slot carrying a health icon, the frontend name, its
+aggregate state badge (`up` / `down` / `unknown`), and the
+address, protocol, and description. The health icon is a cascade
+derived from the current backend state + VPP bucket allocation:
+
+| Icon | Meaning |
+|---|---|
+| ✅ | All backends `up`, the primary pool is serving, and every backend with `effective_weight > 0` has VPP buckets > 0. |
+| ‼️ | At least one backend has `effective_weight > 0` but zero VPP buckets — the control plane and dataplane disagree, almost always a bug worth investigating. |
+| ❗ | The primary pool has no serving backend (every pool[0] backend has `effective_weight = 0`); the VIP is running on its fallback or nothing at all. |
+| ⚠️ | At least one backend is not `up`, nothing worse. Typical maintenance / partial outage state. |
+| ❓ | Fallthrough; should be unreachable in practice and indicates a logic bug in the health-cascade code. |
+
+The card body is a table with one row per `(pool, backend)` tuple.
+Columns: `pool`, `backend`, `address`, `state`, `weight`,
+`effective`, `lb buckets`, `last transition`, and (in admin mode) a
+kebab `⋮` menu for per-backend actions. The **LB buckets** column
+reports VPP's Maglev hash table bucket count for that backend,
+refreshed live via a debounced `GetVPPLBState` scrape whenever a
+transition or weight edit happens (at most once per second per
+maglevd). A value of `0` means "in VPP but drained", `—` means
+"not in VPP at all" (e.g. between a sync and the next poll), and a
+non-zero number is the share of the 1024-bucket table currently
+pointing at that AS.
+
+Card open/closed state is also persisted per-panel in a
+`maglev_zippy_open` cookie, **scoped per maglevd** (the id is
+`frontend-<maglevd>-<frontendName>`), so collapsing a card on
+`chbtl2` doesn't also collapse the equivalent card on `localhost`.
+On first load every card starts closed; unfolding one writes it to
+the cookie for subsequent visits. The cookie is a best-effort hint
+— a missing or corrupt value just falls back to "everything
+closed", so losing it (browser clear, expiry, private window, etc.)
+is purely cosmetic.
+
+When `admin_enabled` is true the header gains an **admin toggle**
+that switches between `/view/` (read-only) and `/admin/` (basic
+auth, mutation actions exposed). Inside admin mode every backend
+row grows a `⋮` menu with `pause`, `resume`, `enable`, `disable`,
+and `set weight…` entries. Lifecycle actions open a confirmation
+dialog that spells out the dataplane consequence in plain English
+(`disable` specifically calls out that it drops live sessions via
+the flow-table flush). The weight dialog has a 0-100 slider and a
+`flush existing flows` checkbox — unchecked is the graceful drain
+(new flows move, existing ones finish naturally), checked is the
+immediate session-drop path.
+
+Also visible in admin mode: a **Debug panel** at the bottom of the
+page with a rolling tail of every event the SPA has seen across
+all maglevds — `backend` and `frontend` transitions, log lines,
+`maglevd-status` flips, `vpp-status` flips, and the VPP LB sync
+events (`vpp-lb-sync-*`) with their full attribute set formatted
+for scanning. A scope filter keeps the tail narrowed to the
+current maglevd by default; a `all maglevds` checkbox flips it to
+firehose mode, and a `pause` button freezes the tail so you can
+read back.
+
 ### HTTP surface
 - **`/view/`** — static SPA (dashboard). No authentication.