install-deps Makefile target; docs refresh; golangci-lint v2 clean

Makefile:
- New install-deps umbrella target split into three sub-targets:
  install-deps-apt        — Debian/Trixie-packaged build deps
                            (nodejs, npm, protobuf-compiler, git, make,
                            dpkg-dev, ca-certificates, curl, tar). Uses
                            sudo when not already root.
  install-deps-go         — ensures a Go toolchain >= GO_VERSION (go.mod
                            floor, default 1.25.0). Short-circuits when
                            the system Go is already recent enough;
                            otherwise downloads the upstream tarball
                            from go.dev/dl/ into /usr/local/go. Trixie
                            only ships 1.24 so this step is load-bearing.
  install-deps-go-tools   — go install protoc-gen-go, protoc-gen-go-grpc,
                            and golangci-lint/v2/cmd/golangci-lint. Then
                            asserts the installed golangci-lint version
                            parses as >= GOLANGCI_LINT_VERSION (default
                            1.64.0, the floor that supports Go 1.25
                            syntax) to catch stale binaries in
                            $GOPATH/bin before they silently run
                            against Go 1.25 code.
- Parser bug fixed: golangci-lint v1.x prints "has version v1.64.8" but
  v2.x dropped the 'v' prefix and prints "has version 2.11.4". The
  original sed regex required the 'v' and returned an empty match on
  v2.x, making the assertion explode with "could not parse version
  output". Fixed by switching to extended regex (sed -En) with 'v?' so
  both forms parse cleanly.
- GO_VERSION and GOLANGCI_LINT_VERSION exposed as Makefile variables
  so operators can override on the command line, e.g.
    make install-deps GO_VERSION=1.25.5 GOLANGCI_LINT_VERSION=2.0.0
- .PHONY extended with the four new target names.
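The two version checks described above — the `v?` parse fix and the `sort -V` floor assertion — can be sketched outside make. This is an illustrative Go translation of what the shell pipeline does, not code from the repo:

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
	"strings"
)

// versionRe mirrors the fixed sed expression
// `s/.*has version v?([0-9][0-9.]*).*/\1/p`: the optional 'v' makes it
// accept both the v1.x output ("has version v1.64.8") and the v2.x
// output ("has version 2.11.4").
var versionRe = regexp.MustCompile(`has version v?([0-9][0-9.]*)`)

func parseVersion(out string) (string, bool) {
	m := versionRe.FindStringSubmatch(out)
	if m == nil {
		return "", false
	}
	return m[1], true
}

// atLeast reports whether got >= floor, comparing dot-separated numeric
// components the way `sort -V | head -n1` effectively does.
func atLeast(got, floor string) bool {
	g, f := strings.Split(got, "."), strings.Split(floor, ".")
	for i := 0; i < len(g) || i < len(f); i++ {
		gi, fi := 0, 0
		if i < len(g) {
			gi, _ = strconv.Atoi(g[i])
		}
		if i < len(f) {
			fi, _ = strconv.Atoi(f[i])
		}
		if gi != fi {
			return gi > fi
		}
	}
	return true // all components equal
}

func main() {
	for _, out := range []string{
		"golangci-lint has version v1.64.8 built with go1.25.0",
		"golangci-lint has version 2.11.4 built with go1.25.0",
	} {
		v, ok := parseVersion(out)
		fmt.Println(v, ok, atLeast(v, "1.64.0"))
	}
}
```

Both output forms parse, and both clear the 1.64.0 floor — exactly the property the Makefile assertion relies on.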

Docs:
- README.md: capability note rewritten to cover CAP_NET_RAW (ICMP) and
  the new CAP_SYS_ADMIN requirement when healthchecker.netns is set,
  plus a paragraph explaining that the Debian systemd unit grants both
  automatically. Docker example gained a second variant that shows the
  additional --cap-add SYS_ADMIN and /var/run/netns bind mount for
  netns-scoped deployments. Also notes that maglevd-frontend ignores
  SIGHUP so controlling-terminal disconnects don't kill it.
- docs/user-guide.md: Capabilities section rewritten as a bulleted
  list covering both caps, with the EPERM error string and three
  different ways to grant them (systemd unit, setcap, systemd-run);
  'show vpp lb counters' command description updated to explain that
  per-backend packet counts are no longer shown (LB plugin's
  forwarding node bypasses ip{4,6}_lookup_inline, so /net/route/to at
  the backend's FIB entry never ticks for LB-forwarded traffic); new
  ~75-line "What the SPA shows" subsection covering the scope
  selector + maglev_scope cookie, the per-maglevd frontend cards, the
  health-cascade icon table (ok / bug-buckets / primary-drained /
  degraded / unknown), the lb buckets column semantics, the
  maglev_zippy_open cookie, the admin-mode lifecycle dialogs with
  their plain-English consequence text, and the debug panel.
- docs/config-guide.md: healthchecker.netns field gains a capability-
  requirement note spelling out setns(CLONE_NEWNET), the EPERM
  symptom string, and the /var/run/netns/ readability requirement.
- docs/healthchecks.md: new "Jitter" subsection explaining the +/-10%
  scaling on every computed interval, and a "Probe timing while a
  probe is in flight" subsection that explains why fast-interval alone
  doesn't give fast fault detection against hanging backends (the
  probe loop is synchronous, so each iteration is timeout +
  fast-interval; the advice is to lower timeout, not fast-interval).
- docs/maglevd.8: description paragraph corrected (dropped the
  per-backend stats claim and added a short note pointing at the LB
  plugin forwarding-path bypass); new CAPABILITIES section between
  SIGNALS and FILES covering both CAP_NET_RAW and CAP_SYS_ADMIN with
  the drop-in-override hint.
- docs/maglevd-frontend.8: new SIGNALS section documenting the
  explicit SIGHUP ignore (so a controlling-terminal disconnect doesn't
  kill the daemon); description extended with paragraphs on the two
  persistence cookies (maglev_scope, maglev_zippy_open) and on the
  health-cascade icon + lb buckets column.
- docs/maglevc.1: left untouched — intentionally minimal and delegates
  to docs/user-guide.md.

Lint (26 issues across 12 files, all errcheck / ineffassign / S1021):
- cmd/frontend/handlers.go: _, _ = fmt.Fprintf(...) for the SSE retry
  hint and resync control-event writes.
- cmd/maglevc/commands.go: bulk-prefix every fmt.Fprintf(w, ...) with
  _, _ =; also merged 'var watchEventsOptSlot *Node; ... = &Node{...}'
  into a single := declaration (staticcheck S1021) — the self-
  referencing pattern still works because the Children back-ref is
  assigned on the next statement, not inside the struct literal.
- cmd/maglevc/complete.go: _, _ = fmt.Fprintf(ql.rl.Stderr(), ...)
  for the banner and help writes; removed the ineffectual
  'partial = ""' assignment (nothing downstream reads partial after
  that branch, so setting it was dead code flagged by ineffassign).
- cmd/maglevc/shell.go: defer func() { _ = rl.Close() }() for the
  readline instance; _, _ = fmt.Fprintf(rl.Stderr(), ...) for error
  display in the REPL loop.
- cmd/maglevc/main.go: defer func() { _ = conn.Close() }() for the
  gRPC client connection.
- internal/grpcapi/server_test.go: _ = conn.Close() in the test
  teardown closure.
- internal/prober/http.go: _ = c.Close() in the TLS-handshake-failed
  path; defer func() { _ = conn.Close() }() and defer func() { _ =
  resp.Body.Close() }() for the two deferred cleanups.
- internal/prober/http_test.go: defer func() { _ = resp.Body.Close()
  }() plus three _, _ = fmt.Fprint(w, ...) in the httptest.Server
  handlers and _, _ = fmt.Sscanf(...) when parsing the test listener's
  port.
- internal/prober/icmp.go: defer func() { _ = pc.Close() }() for the
  ICMP packet conn.
- internal/prober/netns.go: defer func() { _ = origNs.Close() }(),
  defer func() { _ = netns.Set(origNs) }(), defer func() { _ =
  targetNs.Close() }() — also dropped a stray //nolint:errcheck that
  was no longer needed once the closure wrapping handled the discard.
- internal/prober/tcp.go: _ = conn.Close() in the L4-only path,
  _ = tlsConn.Close() in the failed and succeeded handshake branches,
  _ = tlsConn.SetDeadline(...) (also dropped a //nolint:errcheck
  previously covering it).

Iterative 'make lint' runs were needed because golangci-lint v2.x
caps same-linter reports per pass, so the first pass reported 21,
then 4, then 3, then 1, then 0. Final pass: 0 issues. make test is
green across every package, and make build produces all three
binaries cleanly.
commit 744b1cb3d2
parent 224167ce39
2026-04-14 17:37:43 +02:00
18 changed files with 502 additions and 107 deletions

Makefile
@@ -35,7 +35,26 @@ TEST ?= tests/
VPP_API_DIR ?= $(HOME)/src/vpp/build-root/install-vpp_debug-native/vpp/share/vpp/api
# GO_VERSION is what install-deps-go downloads from go.dev when the
# system Go is missing or older than this. Debian Trixie only ships
# golang-go 1.24 (main), and go.mod requires 1.25+, so the `apt install
# golang-go` path isn't sufficient — we fall back to the upstream
# tarball in /usr/local/go. Override on the command line to pull a
# specific patch release: make install-deps GO_VERSION=1.25.5
GO_VERSION ?= 1.25.0
# GOLANGCI_LINT_VERSION is the minimum golangci-lint version that
# install-deps-go-tools accepts. Raised to 1.64.0 because earlier
# releases don't understand Go 1.25 syntax (1.64 is the last v1 line
# and shipped Go 1.25 support; any v2.x release satisfies the floor
# trivially via version sort). install-deps-go-tools always `go
# install`s @latest, then asserts the resulting binary reports a
# version >= this floor as a sanity check. Override on the command
# line if you want to force a specific minimum, e.g.
# make install-deps GOLANGCI_LINT_VERSION=2.0.0
GOLANGCI_LINT_VERSION ?= 1.64.0
.PHONY: all build build-amd64 build-arm64 test proto vpp-binapi lint fixstyle fixstyle-web pkg-deb robot-test clean maglevd-frontend-web install-deps install-deps-apt install-deps-go install-deps-go-tools
all: build
@@ -110,6 +129,116 @@ fixstyle-web:
lint:
	golangci-lint run ./...
# install-deps is an opt-in "set up a fresh developer box" target. Tested
# on Debian Trixie; the apt half should also work on Bookworm and recent
# Ubuntu LTS. Splits into three sub-targets so they can be run individually:
#
# install-deps-apt — Debian-packaged build-time deps (nodejs, npm,
# protoc, git, make, dpkg-dev, curl).
# install-deps-go — ensure a Go toolchain >= $(GO_VERSION) is on
# the system. Downloads the upstream tarball
# into /usr/local/go when the system Go is
# missing or older than the go.mod floor.
# install-deps-go-tools — `go install` the helpers this repo needs
# (protoc-gen-go, protoc-gen-go-grpc, golangci-
# lint) and assert golangci-lint is new enough
# to understand Go 1.25 syntax.
#
# Each sub-target is idempotent and safe to re-run.
install-deps: install-deps-apt install-deps-go install-deps-go-tools
@echo ""
@echo "==> All build dependencies installed."
@echo " Make sure these are on PATH:"
@echo " /usr/local/go/bin (Go toolchain)"
@echo " \$$(go env GOPATH)/bin (protoc-gen-go, golangci-lint, ...)"
install-deps-apt:
@set -eu; \
if [ "$$(id -u)" = 0 ]; then SUDO=""; else SUDO="sudo"; fi; \
echo "==> Installing apt packages (nodejs, npm, protoc, git, make, dpkg-dev)"; \
$$SUDO apt-get update; \
$$SUDO apt-get install -y --no-install-recommends \
nodejs npm protobuf-compiler git make dpkg-dev \
ca-certificates curl tar
# install-deps-go short-circuits when go env GOVERSION already reports a
# version >= GO_VERSION. Otherwise it downloads the official upstream
# tarball (https://go.dev/dl/) and extracts it to /usr/local/go, matching
# the layout that go.dev recommends and that most Debian setups use for
# "Go newer than apt provides".
install-deps-go:
@set -eu; \
if [ "$$(id -u)" = 0 ]; then SUDO=""; else SUDO="sudo"; fi; \
echo "==> Checking Go toolchain (required: $(GO_VERSION)+)"; \
if command -v go >/dev/null 2>&1; then \
CURRENT=$$(go env GOVERSION 2>/dev/null | sed 's/^go//'); \
OLDEST=$$(printf '%s\n%s\n' "$(GO_VERSION)" "$$CURRENT" | sort -V | head -n1); \
if [ "$$OLDEST" = "$(GO_VERSION)" ] && [ -n "$$CURRENT" ]; then \
echo " go$$CURRENT already installed (>= $(GO_VERSION)), skipping."; \
exit 0; \
fi; \
echo " go$$CURRENT is older than $(GO_VERSION), upgrading."; \
else \
echo " no Go toolchain on PATH, installing."; \
fi; \
DEB_ARCH=$$(dpkg --print-architecture); \
case "$$DEB_ARCH" in \
amd64) GOARCH=amd64 ;; \
arm64) GOARCH=arm64 ;; \
armhf) GOARCH=armv6l ;; \
*) echo " unsupported architecture: $$DEB_ARCH" >&2; exit 1 ;; \
esac; \
TARBALL="go$(GO_VERSION).linux-$$GOARCH.tar.gz"; \
URL="https://go.dev/dl/$$TARBALL"; \
echo " downloading $$URL"; \
curl -fsSL -o "/tmp/$$TARBALL" "$$URL"; \
echo " installing to /usr/local/go"; \
$$SUDO rm -rf /usr/local/go; \
$$SUDO tar -C /usr/local -xzf "/tmp/$$TARBALL"; \
rm -f "/tmp/$$TARBALL"; \
echo " installed $$(/usr/local/go/bin/go version)"
# install-deps-go-tools installs the three Go binaries this repo calls
# out to during `make proto` and `make lint`. protoc-gen-go and
# protoc-gen-go-grpc pin to specific upstream release branches; golangci-
# lint pulls @latest (the v2 install path) and then we assert the
# installed version parses as >= GOLANGCI_LINT_VERSION so a stale binary
# in $GOPATH/bin from a previous dev session doesn't silently get used
# against Go 1.25 code it can't parse. Run `make install-deps
# GOLANGCI_LINT_VERSION=2.0.0` if you want to enforce a tighter floor.
install-deps-go-tools:
@set -eu; \
if ! command -v go >/dev/null 2>&1; then \
export PATH="/usr/local/go/bin:$$PATH"; \
fi; \
echo "==> Installing Go tools via 'go install'"; \
echo " google.golang.org/protobuf/cmd/protoc-gen-go"; \
go install google.golang.org/protobuf/cmd/protoc-gen-go@latest; \
echo " google.golang.org/grpc/cmd/protoc-gen-go-grpc"; \
go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@latest; \
echo " github.com/golangci/golangci-lint/v2/cmd/golangci-lint"; \
go install github.com/golangci/golangci-lint/v2/cmd/golangci-lint@latest; \
GOBIN="$$(go env GOBIN)"; \
if [ -z "$$GOBIN" ]; then GOBIN="$$(go env GOPATH)/bin"; fi; \
echo "==> Asserting golangci-lint version >= $(GOLANGCI_LINT_VERSION)"; \
if ! "$$GOBIN/golangci-lint" version >/dev/null 2>&1; then \
echo " ERROR: $$GOBIN/golangci-lint is not executable" >&2; \
exit 1; \
fi; \
INSTALLED=$$("$$GOBIN/golangci-lint" version 2>&1 | sed -En 's/.*has version v?([0-9][0-9.]*).*/\1/p' | head -n1); \
if [ -z "$$INSTALLED" ]; then \
echo " ERROR: could not parse golangci-lint version output" >&2; \
"$$GOBIN/golangci-lint" version >&2; \
exit 1; \
fi; \
OLDEST=$$(printf '%s\n%s\n' "$(GOLANGCI_LINT_VERSION)" "$$INSTALLED" | sort -V | head -n1); \
if [ "$$OLDEST" != "$(GOLANGCI_LINT_VERSION)" ]; then \
echo " ERROR: golangci-lint $$INSTALLED is older than the required $(GOLANGCI_LINT_VERSION)" >&2; \
echo " The tool understands Go 1.25 syntax only from v1.64.0 / v2.x onward." >&2; \
exit 1; \
fi; \
echo " golangci-lint $$INSTALLED (>= $(GOLANGCI_LINT_VERSION)) OK"
tests/.venv: tests/requirements.txt
	python3 -m venv tests/.venv
	tests/.venv/bin/pip install -q -r tests/requirements.txt

README.md

@@ -10,17 +10,18 @@ Debian package:
  over a gRPC API + Prometheus `/metrics` endpoint.
- **`maglevc`** — the interactive CLI client. Tab-completing shell with
  inline help; also runs one-shot commands for scripting.
- **`maglevd-frontend`** — optional web dashboard. One binary with a
  SolidJS Single-Page-App; connects to one or more maglevds over gRPC and
  serves a live HTTP view (read-only `/view/` and optional basic-auth
  `/admin/` with mutating commands).

## Build and install

```sh
make install-deps   # installs all build-time dependencies
make                # builds build/<arch>/ binaries
make test           # runs all tests
make pkg-deb        # creates a Debian package for amd64 and arm64
```

Requires Go 1.25+ and (for `make proto`) `protoc` with `protoc-gen-go`
@@ -66,7 +67,24 @@ maglevd-frontend -server localhost:9090 -listen :8080
```

Send `SIGHUP` to `maglevd` to reload config without restarting.

`maglevd` requires:
- `CAP_NET_RAW` for ICMP health checks (raw sockets).
- `CAP_SYS_ADMIN` when `healthchecker.netns` is set so probes can
`setns(CLONE_NEWNET)` into the dataplane namespace. Without it,
every probe errors out with `enter netns "<name>": operation not
permitted`.
The Debian systemd unit grants both via `AmbientCapabilities` /
`CapabilityBoundingSet`, so `systemctl start vpp-maglev` works out
of the box. When running by hand under a non-root user, grant them
via `setcap cap_net_raw,cap_sys_admin=eip /usr/sbin/maglevd` or
equivalent.
`maglevd-frontend` also ignores `SIGHUP` so a controlling-terminal
disconnect (e.g. closing the SSH session it was started from)
doesn't kill the daemon; `SIGTERM` / `SIGINT` remain the clean
shutdown signals.
Every flag on every binary also has an environment-variable
equivalent (e.g. `MAGLEV_CONFIG`, `MAGLEV_GRPC_ADDR`,
@@ -89,5 +107,11 @@ deployments.
```sh
docker build -t maglevd .
docker run --cap-add NET_RAW \
  -v /etc/vpp-maglev:/etc/vpp-maglev maglevd
# With netns-scoped health checks (maglev.yaml sets healthchecker.netns):
docker run --cap-add NET_RAW --cap-add SYS_ADMIN \
  -v /etc/vpp-maglev:/etc/vpp-maglev \
  -v /var/run/netns:/var/run/netns maglevd
```

cmd/frontend/handlers.go

@@ -302,7 +302,7 @@ func serveSSE(w http.ResponseWriter, r *http.Request, broker *Broker) {
w.WriteHeader(http.StatusOK)
// Reconnect hint: EventSource default is 35s; 2s feels livelier.
_, _ = fmt.Fprintf(w, "retry: 2000\n\n")
flusher.Flush()
result := broker.Subscribe(r.Header.Get("Last-Event-ID"))
@@ -311,7 +311,7 @@ func serveSSE(w http.ResponseWriter, r *http.Request, broker *Broker) {
if result.NeedResync {
	// No id: line — the browser keeps whatever Last-Event-ID it had,
	// so subsequent reconnects compare against a real event ID.
	_, _ = fmt.Fprintf(w, "event: resync\ndata: {}\n\n")
	flusher.Flush()
}
for _, ev := range result.ReplayEvents {

cmd/maglevc/commands.go

@@ -162,8 +162,7 @@ func buildTree() *Node {
// All tokens after 'events' are captured as args via a self-referencing slot
// node. This lets runWatchEvents parse the optional flags manually while still
// providing tab-completion through the dynamic enumerator.
watchEventsOptSlot := &Node{
	Word:    "<opt>",
	Help:    "Stream events with options",
	Dynamic: dynWatchEventOpts,
@@ -268,18 +267,18 @@ func runShowVPPInfo(ctx context.Context, client grpcapi.MaglevClient, _ []string
	return err
}
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
_, _ = fmt.Fprintf(w, "%s\t%s\n", label("version"), info.Version)
_, _ = fmt.Fprintf(w, "%s\t%s\n", label("build-date"), info.BuildDate)
_, _ = fmt.Fprintf(w, "%s\t%s\n", label("build-dir"), info.BuildDirectory)
_, _ = fmt.Fprintf(w, "%s\t%d\n", label("vpp-pid"), info.Pid)
if info.BoottimeNs > 0 {
	bootTime := time.Unix(0, info.BoottimeNs)
	_, _ = fmt.Fprintf(w, "%s\t%s (%s)\n", label("vpp-boottime"),
		bootTime.Format("2006-01-02 15:04:05"),
		formatDuration(time.Since(bootTime)))
}
connTime := time.Unix(0, info.ConnecttimeNs)
_, _ = fmt.Fprintf(w, "%s\t%s (%s)\n", label("connected"),
	connTime.Format("2006-01-02 15:04:05"),
	formatDuration(time.Since(connTime)))
return w.Flush()
@@ -295,15 +294,15 @@ func runShowVPPLBState(ctx context.Context, client grpcapi.MaglevClient, _ []str
// ---- global config ----
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
_, _ = fmt.Fprintf(w, "%s\n", label("global"))
if state.Conf.Ip4SrcAddress != "" {
	_, _ = fmt.Fprintf(w, " %s\t%s\n", label("ip4-src"), state.Conf.Ip4SrcAddress)
}
if state.Conf.Ip6SrcAddress != "" {
	_, _ = fmt.Fprintf(w, " %s\t%s\n", label("ip6-src"), state.Conf.Ip6SrcAddress)
}
_, _ = fmt.Fprintf(w, " %s\t%d\n", label("sticky-buckets-per-core"), state.Conf.StickyBucketsPerCore)
_, _ = fmt.Fprintf(w, " %s\t%ds\n", label("flow-timeout"), state.Conf.FlowTimeout)
if err := w.Flush(); err != nil {
	return err
}
@@ -317,13 +316,13 @@ func runShowVPPLBState(ctx context.Context, client grpcapi.MaglevClient, _ []str
for _, v := range state.Vips {
	fmt.Println()
	w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
	_, _ = fmt.Fprintf(w, "%s\t%s\n", label("vip"), stripHostMask(v.Prefix))
	_, _ = fmt.Fprintf(w, " %s\t%s\n", label("protocol"), protoString(v.Protocol))
	_, _ = fmt.Fprintf(w, " %s\t%d\n", label("port"), v.Port)
	_, _ = fmt.Fprintf(w, " %s\t%s\n", label("encap"), v.Encap)
	_, _ = fmt.Fprintf(w, " %s\t%t\n", label("src-ip-sticky"), v.SrcIpSticky)
	_, _ = fmt.Fprintf(w, " %s\t%d\n", label("flow-table-length"), v.FlowTableLength)
	_, _ = fmt.Fprintf(w, " %s\t%d\n", label("application-servers"), len(v.ApplicationServers))
	if err := w.Flush(); err != nil {
		return err
	}
@@ -367,9 +366,9 @@ func runShowVPPLBCounters(ctx context.Context, client grpcapi.MaglevClient, _ []
// every packet count).
fmt.Println(label("frontend-counters"))
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
_, _ = fmt.Fprintf(w, " vip\tproto\tport\tfirst\tnext\tuntracked\tno-server\tfib-packets\tfib-bytes\n")
for _, v := range resp.Vips {
	_, _ = fmt.Fprintf(w, " %s\t%s\t%d\t%d\t%d\t%d\t%d\t%d\t%d\n",
		stripHostMask(v.Prefix), v.Protocol, v.Port,
		v.FirstPacket, v.NextPacket,
		v.UntrackedPacket, v.NoServer,
@@ -458,16 +457,16 @@ func runShowFrontend(ctx context.Context, client grpcapi.MaglevClient, args []st
}
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
_, _ = fmt.Fprintf(w, "%s\t%s\n", label("name"), info.Name)
_, _ = fmt.Fprintf(w, "%s\t%s\n", label("address"), info.Address)
_, _ = fmt.Fprintf(w, "%s\t%s\n", label("protocol"), info.Protocol)
_, _ = fmt.Fprintf(w, "%s\t%d\n", label("port"), info.Port)
_, _ = fmt.Fprintf(w, "%s\t%t\n", label("src-ip-sticky"), info.SrcIpSticky)
if info.Description != "" {
	_, _ = fmt.Fprintf(w, "%s\t%s\n", label("description"), info.Description)
}
if len(info.Pools) > 0 {
	_, _ = fmt.Fprintf(w, "%s\n", label("pools"))
}
if err := w.Flush(); err != nil {
	return err
@@ -533,16 +532,16 @@ func runShowBackend(ctx context.Context, client grpcapi.MaglevClient, args []str
	return err
}
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
_, _ = fmt.Fprintf(w, "%s\t%s\n", label("name"), info.Name)
_, _ = fmt.Fprintf(w, "%s\t%s\n", label("address"), info.Address)
stateDur := ""
if len(info.Transitions) > 0 {
	since := time.Since(time.Unix(0, info.Transitions[0].AtUnixNs))
	stateDur = " for " + formatDuration(since)
}
_, _ = fmt.Fprintf(w, "%s\t%s%s\n", label("state"), info.State, stateDur)
_, _ = fmt.Fprintf(w, "%s\t%v\n", label("enabled"), info.Enabled)
_, _ = fmt.Fprintf(w, "%s\t%s\n", label("healthcheck"), info.Healthcheck)
for i, t := range info.Transitions {
	ts := time.Unix(0, t.AtUnixNs)
	var lbl string
@@ -554,7 +553,7 @@ func runShowBackend(ctx context.Context, client grpcapi.MaglevClient, args []str
	// is identical on every row, keeping columns aligned).
	lbl = label(" ")
}
_, _ = fmt.Fprintf(w, "%s\t%s → %s\t%s\t%s\n",
	lbl,
	t.From, t.To,
	ts.Format("2006-01-02 15:04:05.000"),
@@ -588,41 +587,41 @@ func runShowHealthCheck(ctx context.Context, client grpcapi.MaglevClient, args [
	return err
}
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
_, _ = fmt.Fprintf(w, "%s\t%s\n", label("name"), info.Name)
_, _ = fmt.Fprintf(w, "%s\t%s\n", label("type"), info.Type)
if info.Port > 0 {
	_, _ = fmt.Fprintf(w, "%s\t%d\n", label("port"), info.Port)
}
_, _ = fmt.Fprintf(w, "%s\t%s\n", label("interval"), time.Duration(info.IntervalNs))
if info.FastIntervalNs > 0 {
	_, _ = fmt.Fprintf(w, "%s\t%s\n", label("fast-interval"), time.Duration(info.FastIntervalNs))
}
if info.DownIntervalNs > 0 {
	_, _ = fmt.Fprintf(w, "%s\t%s\n", label("down-interval"), time.Duration(info.DownIntervalNs))
}
_, _ = fmt.Fprintf(w, "%s\t%s\n", label("timeout"), time.Duration(info.TimeoutNs))
_, _ = fmt.Fprintf(w, "%s\t%d\n", label("rise"), info.Rise)
_, _ = fmt.Fprintf(w, "%s\t%d\n", label("fall"), info.Fall)
if info.ProbeIpv4Src != "" {
	_, _ = fmt.Fprintf(w, "%s\t%s\n", label("probe-ipv4-src"), info.ProbeIpv4Src)
}
if info.ProbeIpv6Src != "" {
	_, _ = fmt.Fprintf(w, "%s\t%s\n", label("probe-ipv6-src"), info.ProbeIpv6Src)
}
if h := info.Http; h != nil {
	_, _ = fmt.Fprintf(w, "%s\t%s\n", label("http.path"), h.Path)
	if h.Host != "" {
		_, _ = fmt.Fprintf(w, "%s\t%s\n", label("http.host"), h.Host)
	}
	_, _ = fmt.Fprintf(w, "%s\t%d-%d\n", label("http.response-code"), h.ResponseCodeMin, h.ResponseCodeMax)
	if h.ResponseRegexp != "" {
		_, _ = fmt.Fprintf(w, "%s\t%s\n", label("http.response-regexp"), h.ResponseRegexp)
	}
}
if t := info.Tcp; t != nil {
	_, _ = fmt.Fprintf(w, "%s\t%v\n", label("tcp.ssl"), t.Ssl)
	if t.ServerName != "" {
		_, _ = fmt.Fprintf(w, "%s\t%s\n", label("tcp.server-name"), t.ServerName)
	}
}
return w.Flush()

cmd/maglevc/complete.go

@@ -96,7 +96,9 @@ func (ql *questionListener) OnChange(line []rune, pos int, key rune) (newLine []
// "unknown" banner, then list what's available at the deepest
// node we *did* reach so the operator can see what they could
// have typed instead. The partial at the cursor is irrelevant
// once the left context is already broken — no downstream
// branch reads it after we enter this branch, so we don't
// bother clearing it.
consumed := prefix[:len(prefix)-len(remaining)]
bad := remaining[0]
if len(consumed) == 0 {
@@ -105,7 +107,6 @@ func (ql *questionListener) OnChange(line []rune, pos int, key rune) (newLine []
		unknownMsg = fmt.Sprintf("unknown subcommand %q after %q", bad, strings.Join(consumed, " "))
	}
	displayPrefix = strings.Join(consumed, " ")
} else if partial != "" {
	if next := matchFixedChild(node.Children, partial); next != nil {
		// Partial uniquely matched a fixed child — descend into it.
@@ -152,22 +153,22 @@ func (ql *questionListener) OnChange(line []rune, pos int, key rune) (newLine []
// full "maglev> show vpp lb ?" ourselves as the first write — // full "maglev> show vpp lb ?" ourselves as the first write —
// that lands on the just-cleaned row, birdc-style, and the // that lands on the just-cleaned row, birdc-style, and the
// subsequent Fprintfs each redraw a fresh prompt below the help. // subsequent Fprintfs each redraw a fresh prompt below the help.
fmt.Fprintf(ql.rl.Stderr(), "%s%s\r\n", ql.rl.Config.Prompt, string(line)) _, _ = fmt.Fprintf(ql.rl.Stderr(), "%s%s\r\n", ql.rl.Config.Prompt, string(line))
if unknownMsg != "" { if unknownMsg != "" {
fmt.Fprintf(ql.rl.Stderr(), " %s\r\n", unknownMsg) _, _ = fmt.Fprintf(ql.rl.Stderr(), " %s\r\n", unknownMsg)
} }
if len(lines) == 0 { if len(lines) == 0 {
fmt.Fprintf(ql.rl.Stderr(), " <no completions>\r\n") _, _ = fmt.Fprintf(ql.rl.Stderr(), " <no completions>\r\n")
} else { } else {
for _, l := range lines { for _, l := range lines {
if l.help != "" { if l.help != "" {
fmt.Fprintf(ql.rl.Stderr(), "%-*s %s\r\n", maxLen+2, l.path, l.help) _, _ = fmt.Fprintf(ql.rl.Stderr(), "%-*s %s\r\n", maxLen+2, l.path, l.help)
} else { } else {
fmt.Fprintf(ql.rl.Stderr(), "%s\r\n", l.path) _, _ = fmt.Fprintf(ql.rl.Stderr(), "%s\r\n", l.path)
} }
} }
if len(dynValues) > 0 { if len(dynValues) > 0 {
fmt.Fprintf(ql.rl.Stderr(), " %s: %s\r\n", dynWord, strings.Join(dynValues, " ")) _, _ = fmt.Fprintf(ql.rl.Stderr(), " %s: %s\r\n", dynWord, strings.Join(dynValues, " "))
} }
} }


@@ -43,7 +43,7 @@ func run() error {
 	if err != nil {
 		return fmt.Errorf("connect %s: %w", *serverAddr, err)
 	}
-	defer conn.Close()
+	defer func() { _ = conn.Close() }()
 	client := grpcapi.NewMaglevClient(conn)
 	ctx := context.Background()


@@ -36,7 +36,7 @@ func runShell(ctx context.Context, client grpcapi.MaglevClient) error {
 		return fmt.Errorf("readline init: %w", err)
 	}
 	ql.rl = rl
-	defer rl.Close()
+	defer func() { _ = rl.Close() }()
 	for {
 		line, err := rl.Readline()
@@ -59,7 +59,7 @@ func runShell(ctx context.Context, client grpcapi.MaglevClient) error {
 			if errors.Is(err, errQuit) {
 				return nil
 			}
-			fmt.Fprintf(rl.Stderr(), "%s\n", formatError(err))
+			_, _ = fmt.Fprintf(rl.Stderr(), "%s\n", formatError(err))
 		}
 	}
 }


@@ -57,6 +57,22 @@ Global settings for the health checker engine.
 empty or omitted, probes run in the current (default) network namespace. Useful when
 backends are reachable only through a dedicated dataplane namespace.
+
+**Capability requirement**: setting this field makes `maglevd` call
+`setns(CLONE_NEWNET)` on the probe thread before each probe, which the
+kernel only permits to processes holding `CAP_SYS_ADMIN` in the target
+namespace's user namespace (`setns(2)`). The Debian systemd unit
+(`vpp-maglev.service`) already grants this capability; if you run
+`maglevd` by hand under a non-root user, make sure the binary has
+`CAP_SYS_ADMIN` via `setcap cap_net_raw,cap_sys_admin=eip
+/usr/sbin/maglevd` or equivalent; otherwise every probe fails with
+`enter netns "<name>": operation not permitted` and all backends
+transition to `down` on their first probe.
+
+Also make sure the named namespace is mounted under `/var/run/netns/`
+(which is where `ip netns add` puts it) and that it is readable by
+the user `maglevd` runs as — the default mode from `ip netns add` is
+`0644`, which is fine for any user.
 Example:
 ```yaml
 maglev:


@@ -88,6 +88,28 @@ recovering backend is re-evaluated quickly without waiting a full `interval`.
 Using `down-interval` for fully down backends reduces probe traffic to servers
 that are known to be offline.
+
+### Jitter
+
+Every computed interval is then scaled by a uniformly-distributed random
+factor in `[0.9, 1.1)` before the probe worker sleeps. The `±10%` jitter
+prevents all probes from aligning on the same tick after a restart or a
+config reload — a deployment with dozens of backends would otherwise send a
+bursty, phase-locked flight of probes every `interval`. The jitter is
+applied once per probe iteration, not averaged across iterations, so the
+long-run cadence is still the configured `interval`.
+
+### Probe timing while a probe is in flight
+
+The probe worker loop is synchronous: each iteration blocks on the probe's
+completion (or its `timeout`) before computing the next `sleepFor`. That
+means a fully-timing-out probe effectively runs at
+`timeout + fast-interval` cadence, not `fast-interval` cadence. If you
+want fast fault detection against backends that hang rather than refuse
+the connection (e.g. a dead TCP stack, or an unreachable backend via a
+blackhole route), lower `timeout` rather than `fast-interval`. Setting
+`fast-interval` below `timeout` doesn't make probes fire more frequently —
+it just changes the idle gap between a completed probe and the next one.
 
 ---
 
 ## Transition events


@@ -67,6 +67,36 @@ and
 are set to non\-empty values at startup; otherwise
 .B /admin/
 returns 404 and the SPA hides the admin\-toggle button entirely.
+.PP
+Per\-user persistent state lives in two cookies:
+.B maglev_scope
+remembers which maglevd the user was last looking at (hydrated on
+page load and reconciled against the fetched server list, so a
+removed/renamed maglevd falls through cleanly instead of leaving a
+ghost selection), and
+.B maglev_zippy_open
+remembers which collapsible cards are open, scoped per\-maglevd so
+opening a frontend card on one server doesn't affect the equivalent
+card on another. Both are set with
+.BR "Path=/; Max-Age=1y; SameSite=Lax" ,
+are best\-effort (a missing or corrupt value just falls back to
+"everything closed" / "first maglevd"), and hold no sensitive data.
+.PP
+The SPA shows a health\-cascade icon next to every frontend name:
+.B \(OK
+for fully healthy, a double\-bang for a control\-plane vs dataplane
+disagreement (eff_weight > 0 but zero VPP buckets), an exclamation
+mark for a fully\-drained primary pool, a warning triangle for any
+backend not in
+.B up
+state, and a question mark as a fallthrough for logic bugs in the
+cascade. The
+.B "lb buckets"
+column on each backend row reports VPP's Maglev hash table share
+for that AS, debounced to at most one
+.B GetVPPLBState
+fetch per second per maglevd and refreshed live on every backend
+transition or weight edit.
 .SH OPTIONS
 Each flag may also be supplied via an environment variable (shown in
 parentheses); the flag takes precedence when both are set. All env
@@ -154,6 +184,30 @@ Returns the fresh backend snapshot as JSON.
 Weight change POST. Body is
 .B {"weight": 0\-100, "flush": bool} .
 Returns the fresh frontend snapshot as JSON.
+.SH SIGNALS
+.TP
+.BR SIGTERM ", " SIGINT
+Graceful shutdown: active gRPC streams are closed, the HTTP server
+drains, then the process exits.
+.TP
+.B SIGHUP
+Explicitly ignored. A controlling\-terminal disconnect (closing the
+SSH session the dashboard was started from, for example) would
+otherwise deliver
+.B SIGHUP
+under Go's default handler and terminate the process with
+.BR Hangup .
+Since
+.B maglevd\-frontend
+has no config file beyond its command\-line flags, there is nothing
+meaningful to
+.I reload
+on
+.BR SIGHUP ,
+and inheriting the default "exit on hangup" semantics is the wrong
+behaviour for a long\-running network daemon. Use
+.B SIGTERM
+for clean shutdown instead.
 .SH REVERSE PROXY NOTES
 The SSE stream has a handful of operational requirements that every
 reverse proxy must satisfy:


@@ -36,11 +36,19 @@ default 30s), on
 reloads, and on operator request via
 .BR maglevc .
 .PP
-The aggregated backend state, VPP dataplane state, and per\-VIP /
-per\-backend stats\-segment counters are exposed via a gRPC API (and
-scraped into Prometheus when the
+The aggregated backend state, VPP dataplane state, and per\-VIP
+stats\-segment counters are exposed via a gRPC API (and scraped
+into Prometheus when the
 .B /metrics
-endpoint is enabled).
+endpoint is enabled). Per\-backend packet counters are intentionally
+not exposed: VPP's LB plugin forwards by writing
+.B adj_index[VLIB_TX]
+directly and bypassing
+.BR ip4_lookup_inline " / " ip6_lookup_inline ,
+which is the only path that increments
+.BR /net/route/to ,
+so the backend's FIB entry stats index never ticks for LB\-forwarded
+traffic.
 See
 .BR maglevc (1)
 for the interactive CLI client.
@@ -94,6 +102,42 @@ immediately.
 Gracefully shut down: drain active gRPC streams, then exit. VPP
 dataplane state is left in place so that existing VIPs continue to
 forward traffic during a restart.
+.SH CAPABILITIES
+.TP
+.B CAP_NET_RAW
+Required when any health check uses
+.BR "type: icmp" .
+Raw sockets for ICMP echo. TCP and HTTP(S) checks use normal TCP
+sockets and need no special capability.
+.TP
+.B CAP_SYS_ADMIN
+Required when the
+.B healthchecker.netns
+field is set in the YAML configuration. The probe loop calls
+.BR setns (2)
+with
+.B CLONE_NEWNET
+to enter the target network namespace before each probe; the
+kernel only permits that to processes holding
+.B CAP_SYS_ADMIN
+in the target namespace's user namespace. Without it, every probe
+fails with
+.B enter netns "<name>": operation not permitted
+and every backend flips to
+.B down
+on its first probe. Omit the capability when the deployment doesn't
+use namespace\-scoped health checks \(em the Debian systemd unit
+ships with both
+.B CAP_NET_RAW
+and
+.B CAP_SYS_ADMIN
+in its
+.B AmbientCapabilities
+and
+.B CapabilityBoundingSet
+by default, and operators can drop
+.B CAP_SYS_ADMIN
+via a drop\-in override if they prefer the narrower surface.
 .SH FILES
 .TP
 .I /etc/vpp-maglev/maglev.yaml


@@ -32,9 +32,34 @@ are used for anything not set.
 ### Capabilities
 
-`maglevd` requires `CAP_NET_RAW` when any health check uses `type: icmp`.
-All other check types (`tcp`, `http`) use normal TCP sockets and require no
-special capabilities.
+`maglevd` requires:
+
+- **`CAP_NET_RAW`** when any health check uses `type: icmp` — raw
+  sockets for ICMP echo. `tcp`, `http`, and `https` checks use
+  normal TCP sockets and do not need this capability.
+- **`CAP_SYS_ADMIN`** when `healthchecker.netns` is set in the
+  config — the probe loop calls `setns(CLONE_NEWNET)` to join the
+  target network namespace, and the kernel only permits that to
+  processes holding `CAP_SYS_ADMIN` in the target's user namespace
+  (see `setns(2)`). Without it the probe fails with
+  `enter netns "<name>": operation not permitted` and every backend
+  flips to `down` / `L4CON` on its first probe.
+
+The Debian systemd unit (`vpp-maglev.service`) grants both via
+`AmbientCapabilities` and `CapabilityBoundingSet`, so
+`systemctl start vpp-maglev` works out of the box under the
+unprivileged `maglevd` user. When running the binary by hand under
+a non-root account, either:
+
+- `setcap cap_net_raw,cap_sys_admin=eip /usr/sbin/maglevd` once at
+  install time, or
+- run under `systemd-run -p AmbientCapabilities='CAP_NET_RAW CAP_SYS_ADMIN' ...`
+  for ad-hoc tests.
+
+If your deployment doesn't use `netns:` at all, drop
+`CAP_SYS_ADMIN` from the bounding set in the service unit — it's a
+broad capability and there's no value in keeping it when nothing
+calls `setns`.
 
 ### Logging
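A drop-in override of the kind suggested above could look like this. The drop-in path follows standard systemd conventions for the `vpp-maglev.service` unit named in this section; treat it as a sketch, not shipped configuration:

```ini
# /etc/systemd/system/vpp-maglev.service.d/drop-sysadmin.conf
# Narrow the capability surface when healthchecker.netns is unused:
# keep CAP_NET_RAW for ICMP probes, drop CAP_SYS_ADMIN (setns).
# The empty assignment resets the list inherited from the main unit.
[Service]
AmbientCapabilities=
AmbientCapabilities=CAP_NET_RAW
CapabilityBoundingSet=
CapabilityBoundingSet=CAP_NET_RAW
```

Apply with `systemctl daemon-reload && systemctl restart vpp-maglev`.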
@@ -139,14 +164,22 @@ show vpp lb state Show the VPP load-balancer plugin state: global
                            configuration, configured VIPs, and their attached
                            application servers (address, weight, bucket count).
                            Returns an error if VPP is not connected.
 
-show vpp lb counters       Show per-VIP and per-backend packet/byte counters
-                           from the VPP stats segment, refreshed roughly every
-                           five seconds by maglevd. Each VIP row reports the LB
-                           plugin counters (next, first, untracked, no-server)
-                           and the FIB packets/bytes at the VIP's host prefix.
-                           Each backend row reports FIB packets/bytes at the
-                           backend's /32 or /128 prefix. Use Prometheus for
-                           live rates; this command shows absolute values.
+show vpp lb counters       Show per-VIP packet/byte counters from the VPP stats
+                           segment, refreshed roughly every five seconds by
+                           maglevd. Each row reports the four LB plugin counters
+                           (first, next, untracked, no-server) and the FIB
+                           packets/bytes at the VIP's host prefix. Use Prometheus
+                           for live rates; this command shows absolute values.
+
+                           Per-backend packet counters are not shown: VPP's LB
+                           plugin forwarding node writes adj_index[VLIB_TX]
+                           directly and bypasses ip{4,6}_lookup_inline, which is
+                           the only path that increments /net/route/to. The
+                           backend's FIB load_balance stats_index therefore
+                           never ticks for LB-forwarded traffic, and exposing
+                           zeros would mislead. See docs/implementation/TODO
+                           for the upstream path that would fix this (new
+                           lb_as_stats_dump API message).
 
 sync vpp lb state [<name>] Reconcile the VPP load-balancer dataplane from the
                            running config. Without a name: runs a full sync —
@@ -285,6 +318,79 @@ the SPA's "admin…" toggle becomes visible. When either is missing or
 empty the `/admin/` route returns 404 and the SPA hides the toggle —
 `/view/` is always reachable read-only.
+
+### What the SPA shows
+
+After the dashboard loads, the header carries a **scope selector**:
+one pill per configured maglevd, coloured green when the frontend's
+gRPC channel to that maglevd is alive and red when it's dropped.
+Click a pill to flip the view to that maglevd's frontends. Your
+selection is persisted in a `maglev_scope` cookie (Path=/;
+Max-Age=1y; SameSite=Lax), so the next page load lands on the same
+server you were last looking at. If the cookie references a
+maglevd that's no longer in the server list (it was removed from
+`-server` or renamed), the hydration path falls through to the
+first maglevd in the list instead of leaving you on a ghost
+selection.
+
+The **frontend list** is a stack of collapsible cards
+(`<details>` elements) — one per VIP. Each card header shows a
+fixed-width slot carrying a health icon, the frontend name, its
+aggregate state badge (`up` / `down` / `unknown`), and the
+address, protocol, and description. The health icon is a cascade
+derived from the current backend state + VPP bucket allocation:
+
+| Icon | Meaning |
+|---|---|
+| ✅ | All backends `up`, the primary pool is serving, and every backend with `effective_weight > 0` has VPP buckets > 0. |
+| ‼️ | At least one backend has `effective_weight > 0` but zero VPP buckets — the control plane and dataplane disagree, almost always a bug worth investigating. |
+| ❗ | The primary pool has no serving backend (every pool[0] backend has `effective_weight = 0`); the VIP is running on its fallback or nothing at all. |
+| ⚠️ | At least one backend is not `up`, nothing worse. Typical maintenance / partial outage state. |
+| ❓ | Fallthrough; should be unreachable in practice and indicates a logic bug in the health-cascade code. |
+
+The card body is a table with one row per `(pool, backend)` tuple.
+Columns: `pool`, `backend`, `address`, `state`, `weight`,
+`effective`, `lb buckets`, `last transition`, and (in admin mode) a
+kebab `⋮` menu for per-backend actions. The **LB buckets** column
+reports VPP's Maglev hash table bucket count for that backend,
+refreshed live via a debounced `GetVPPLBState` scrape whenever a
+transition or weight edit happens (at most once per second per
+maglevd). A value of `0` means "in VPP but drained", `—` means
+"not in VPP at all" (e.g. between a sync and the next poll), and a
+non-zero number is the share of the 1024-bucket table currently
+pointing at that AS.
+
+Card open/closed state is also persisted per-panel in a
+`maglev_zippy_open` cookie, **scoped per maglevd** (the id is
+`frontend-<maglevd>-<frontendName>`), so collapsing a card on
+`chbtl2` doesn't also collapse the equivalent card on `localhost`.
+On first load every card starts closed; unfolding one writes it to
+the cookie for subsequent visits. The cookie is a best-effort hint
+— a missing or corrupt value just falls back to "everything
+closed", so losing it (browser clear, expiry, private window, etc.)
+is purely cosmetic.
+
+When `admin_enabled` is true the header gains an **admin toggle**
+that switches between `/view/` (read-only) and `/admin/` (basic
+auth, mutation actions exposed). Inside admin mode every backend
+row grows a `⋮` menu with `pause`, `resume`, `enable`, `disable`,
+and `set weight…` entries. Lifecycle actions open a confirmation
+dialog that spells out the dataplane consequence in plain English
+(`disable` specifically calls out that it drops live sessions via
+the flow-table flush). The weight dialog has a 0-100 slider and a
+`flush existing flows` checkbox — unchecked is the graceful drain
+(new flows move, existing ones finish naturally), checked is the
+immediate session-drop path.
+
+Also visible in admin mode: a **Debug panel** at the bottom of the
+page with a rolling tail of every event the SPA has seen across
+all maglevds — `backend` and `frontend` transitions, log lines,
+`maglevd-status` flips, `vpp-status` flips, and the VPP LB sync
+events (`vpp-lb-sync-*`) with their full attribute set formatted
+for scanning. A scope filter keeps the tail narrowed to the
+current maglevd by default; an `all maglevds` checkbox flips it to
+firehose mode, and a `pause` button freezes the tail so you can
+read back.
 
 ### HTTP surface
 
 - **`/view/`** — static SPA (dashboard). No authentication.
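The icon cascade in the table above can be sketched as a pure decision function. The type and field names here are hypothetical — the real logic lives client-side in the SPA — but the decision order matches the documented precedence:

```go
package main

import "fmt"

// backend carries just the fields the cascade reads (hypothetical shape).
type backend struct {
	pool       int    // 0 = primary pool
	state      string // "up", "down", "unknown"
	effWeight  int
	vppBuckets int
}

// healthIcon mirrors the documented decision order: a
// control-plane/dataplane disagreement wins, then a drained primary
// pool, then any not-up backend, then fully healthy.
func healthIcon(backends []backend) string {
	disagree, primaryServing, anyNotUp := false, false, false
	for _, b := range backends {
		if b.effWeight > 0 && b.vppBuckets == 0 {
			disagree = true // serving weight but no VPP buckets
		}
		if b.pool == 0 && b.effWeight > 0 {
			primaryServing = true
		}
		if b.state != "up" {
			anyNotUp = true
		}
	}
	switch {
	case disagree:
		return "‼️"
	case !primaryServing:
		return "❗"
	case anyNotUp:
		return "⚠️"
	case primaryServing && !anyNotUp:
		return "✅"
	}
	return "❓" // fallthrough: would indicate a logic bug
}

func main() {
	healthy := []backend{{pool: 0, state: "up", effWeight: 100, vppBuckets: 512}}
	drained := []backend{{pool: 0, state: "up", effWeight: 0, vppBuckets: 0}}
	fmt.Println(healthIcon(healthy), healthIcon(drained)) // prints "✅ ❗"
}
```

Keeping the worst conditions first is what makes the ❓ row genuinely unreachable in normal operation.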


@@ -71,7 +71,7 @@ func startTestServer(t *testing.T, ctx context.Context, c *checker.Checker) (Mag
 		t.Fatalf("dial: %v", err)
 	}
 	return NewMaglevClient(conn), func() {
-		conn.Close()
+		_ = conn.Close()
 		srv.Stop()
 	}
 }


@@ -69,7 +69,7 @@ func doHTTPProbe(ctx context.Context, cfg ProbeConfig, useTLS bool) health.Probe
 		if useTLS {
 			tlsConn := tls.Client(c, tlsConfig(p.ServerName, p.InsecureSkipVerify))
 			if err := tlsConn.HandshakeContext(ctx); err != nil {
-				c.Close()
+				_ = c.Close()
 				return err
 			}
 			conn = tlsConn
@@ -92,7 +92,7 @@ func doHTTPProbe(ctx context.Context, cfg ProbeConfig, useTLS bool) health.Probe
 		}
 		return health.ProbeResult{OK: false, Layer: health.LayerL4, Code: "L4CON", Detail: dialErr.Error()}
 	}
-	defer conn.Close()
+	defer func() { _ = conn.Close() }()
 	transport := &http.Transport{
 		DialContext: func(_ context.Context, _, _ string) (net.Conn, error) {
@@ -122,7 +122,7 @@ func doHTTPProbe(ctx context.Context, cfg ProbeConfig, useTLS bool) health.Probe
 		}
 		return health.ProbeResult{OK: false, Layer: health.LayerL7, Code: "L7RSP", Detail: err.Error()}
 	}
-	defer resp.Body.Close()
+	defer func() { _ = resp.Body.Close() }()
 	if resp.StatusCode < p.ResponseCodeMin || resp.StatusCode > p.ResponseCodeMax {
 		return health.ProbeResult{


@@ -61,7 +61,7 @@ func dialAndProbe(ctx context.Context, addr string, cfg ProbeConfig) (bool, erro
 	if err != nil {
 		return false, err
 	}
-	defer resp.Body.Close()
+	defer func() { _ = resp.Body.Close() }()
 	if resp.StatusCode < p.ResponseCodeMin || resp.StatusCode > p.ResponseCodeMax {
 		return false, nil
@@ -78,7 +78,7 @@ func dialAndProbe(ctx context.Context, addr string, cfg ProbeConfig) (bool, erro
 func TestHTTPProbeStatusCode(t *testing.T) {
 	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
 		w.WriteHeader(http.StatusOK)
-		fmt.Fprint(w, "healthy")
+		_, _ = fmt.Fprint(w, "healthy")
 	}))
 	defer srv.Close()
@@ -124,7 +124,7 @@ func TestHTTPProbeWrongStatusCode(t *testing.T) {
 func TestHTTPProbeRegexpMatch(t *testing.T) {
 	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
-		fmt.Fprint(w, `{"status":"ok"}`)
+		_, _ = fmt.Fprint(w, `{"status":"ok"}`)
 	}))
 	defer srv.Close()
@@ -148,7 +148,7 @@ func TestHTTPProbeRegexpMatch(t *testing.T) {
 func TestHTTPProbeRegexpNoMatch(t *testing.T) {
 	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
-		fmt.Fprint(w, `{"status":"degraded"}`)
+		_, _ = fmt.Fprint(w, `{"status":"degraded"}`)
 	}))
 	defer srv.Close()
@@ -178,7 +178,7 @@ func TestHTTPSProbe(t *testing.T) {
 	host, portStr, _ := net.SplitHostPort(srv.Listener.Addr().String())
 	port := uint16(0)
-	fmt.Sscanf(portStr, "%d", &port)
+	_, _ = fmt.Sscanf(portStr, "%d", &port)
 	cfg := ProbeConfig{
 		Target: net.ParseIP(host),


@@ -44,7 +44,7 @@ func ICMPProbe(ctx context.Context, cfg ProbeConfig) health.ProbeResult {
 	if err != nil {
 		return fmt.Errorf("listen icmp (%s): %w", network, err)
 	}
-	defer pc.Close()
+	defer func() { _ = pc.Close() }()
 	id := rand.IntN(0xffff) + 1
 	seq := rand.IntN(0xffff) + 1


@@ -25,14 +25,14 @@ func inNetns(nsName string, fn func() error) error {
 	if err != nil {
 		return fmt.Errorf("get current netns: %w", err)
 	}
-	defer origNs.Close()
-	defer netns.Set(origNs) //nolint:errcheck
+	defer func() { _ = origNs.Close() }()
+	defer func() { _ = netns.Set(origNs) }()
 	targetNs, err := netns.GetFromName(nsName)
 	if err != nil {
 		return fmt.Errorf("get netns %q: %w", nsName, err)
 	}
-	defer targetNs.Close()
+	defer func() { _ = targetNs.Close() }()
 	if err := netns.Set(targetNs); err != nil {
 		return fmt.Errorf("enter netns %q: %w", nsName, err)


@@ -49,16 +49,16 @@ func TCPProbe(ctx context.Context, cfg ProbeConfig) health.ProbeResult {
 	}
 	if !doTLS {
-		conn.Close()
+		_ = conn.Close()
 		result = health.ProbeResult{OK: true, Layer: health.LayerL4, Code: "L4OK"}
 		return nil
 	}
 	// TLS handshake.
 	tlsConn := tls.Client(conn, tlsConfig(serverName, insecureSkipVerify))
-	tlsConn.SetDeadline(time.Now().Add(cfg.Timeout)) //nolint:errcheck
+	_ = tlsConn.SetDeadline(time.Now().Add(cfg.Timeout))
 	if err := tlsConn.HandshakeContext(ctx); err != nil {
-		tlsConn.Close()
+		_ = tlsConn.Close()
 		if isTimeout(err) {
 			result = health.ProbeResult{OK: false, Layer: health.LayerL6, Code: "L6TOUT", Detail: err.Error()}
 		} else {
@@ -66,7 +66,7 @@ func TCPProbe(ctx context.Context, cfg ProbeConfig) health.ProbeResult {
 		}
 		return nil
 	}
-	tlsConn.Close()
+	_ = tlsConn.Close()
 	result = health.ProbeResult{OK: true, Layer: health.LayerL6, Code: "L6OK"}
 	return nil
 })