vpp-maglev/docs/user-guide.md
Pim van Pelt 744b1cb3d2 install-deps Makefile target; docs refresh; golangci-lint v2 clean
Makefile:
- New install-deps umbrella target split into three sub-targets:
  install-deps-apt        — Debian/Trixie-packaged build deps
                            (nodejs, npm, protobuf-compiler, git, make,
                            dpkg-dev, ca-certificates, curl, tar). Uses
                            sudo when not already root.
  install-deps-go         — ensures a Go toolchain >= GO_VERSION (go.mod
                            floor, default 1.25.0). Short-circuits when
                            the system Go is already recent enough;
                            otherwise downloads the upstream tarball
                            from go.dev/dl/ into /usr/local/go. Trixie
                            only ships 1.24 so this step is load-bearing.
  install-deps-go-tools   — go install protoc-gen-go, protoc-gen-go-grpc,
                            and golangci-lint/v2/cmd/golangci-lint. Then
                            asserts the installed golangci-lint version
                            parses as >= GOLANGCI_LINT_VERSION (default
                            1.64.0, the floor that supports Go 1.25
syntax) to catch stale binaries in
$GOPATH/bin before they silently run
against Go 1.25 code.
- Parser bug fixed: golangci-lint v1.x prints "has version v1.64.8" but
  v2.x dropped the 'v' prefix and prints "has version 2.11.4". The
  original sed regex required the 'v' and returned an empty match on
  v2.x, making the assertion explode with "could not parse version
  output". Fixed by switching to extended regex (sed -En) with 'v?' so
  both forms parse cleanly.
- GO_VERSION and GOLANGCI_LINT_VERSION exposed as Makefile variables
  so operators can override on the command line, e.g.
    make install-deps GO_VERSION=1.25.5 GOLANGCI_LINT_VERSION=2.0.0
- .PHONY extended with the four new target names.
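
The fixed version parser can be sanity-checked in isolation. The exact
Makefile recipe differs, but the shape of the regex is:

```sh
# Both output styles described above must yield a bare version number:
regex='s/.*has version v?([0-9]+\.[0-9]+\.[0-9]+).*/\1/p'
echo 'golangci-lint has version v1.64.8'  | sed -En "$regex"   # 1.64.8
echo 'golangci-lint has version 2.11.4'   | sed -En "$regex"   # 2.11.4
```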

Docs:
- README.md: capability note rewritten to cover CAP_NET_RAW (ICMP) and
  the new CAP_SYS_ADMIN requirement when healthchecker.netns is set,
  plus a paragraph explaining that the Debian systemd unit grants both
  automatically. Docker example gained a second variant that shows the
  additional --cap-add SYS_ADMIN and /var/run/netns bind mount for
  netns-scoped deployments. Also notes that maglevd-frontend ignores
  SIGHUP so controlling-terminal disconnects don't kill it.
- docs/user-guide.md: Capabilities section rewritten as a bulleted
  list covering both caps, with the EPERM error string and three
  different ways to grant them (systemd unit, setcap, systemd-run);
  'show vpp lb counters' command description updated to explain that
  per-backend packet counts are no longer shown (LB plugin's
  forwarding node bypasses ip{4,6}_lookup_inline, so /net/route/to at
  the backend's FIB entry never ticks for LB-forwarded traffic); new
  ~75-line "What the SPA shows" subsection covering the scope
  selector + maglev_scope cookie, the per-maglevd frontend cards, the
  health-cascade icon table (ok / bug-buckets / primary-drained /
  degraded / unknown), the lb buckets column semantics, the
  maglev_zippy_open cookie, the admin-mode lifecycle dialogs with
  their plain-English consequence text, and the debug panel.
- docs/config-guide.md: healthchecker.netns field gains a capability-
  requirement note spelling out setns(CLONE_NEWNET), the EPERM
  symptom string, and the /var/run/netns/ readability requirement.
- docs/healthchecks.md: new "Jitter" subsection explaining the +/-10%
  scaling on every computed interval, and a "Probe timing while a
  probe is in flight" subsection that explains why fast-interval alone
  doesn't give fast fault detection against hanging backends (the
  probe loop is synchronous, so each iteration is timeout +
  fast-interval; the advice is to lower timeout, not fast-interval).
- docs/maglevd.8: description paragraph corrected (dropped the
  per-backend stats claim and added a short note pointing at the LB
  plugin forwarding-path bypass); new CAPABILITIES section between
  SIGNALS and FILES covering both CAP_NET_RAW and CAP_SYS_ADMIN with
  the drop-in-override hint.
- docs/maglevd-frontend.8: new SIGNALS section documenting the
  explicit SIGHUP ignore (so a controlling-terminal disconnect doesn't
  kill the daemon); description extended with paragraphs on the two
  persistence cookies (maglev_scope, maglev_zippy_open) and on the
  health-cascade icon + lb buckets column.
- docs/maglevc.1: left untouched — intentionally minimal and delegates
  to docs/user-guide.md.
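
To make the probe-timing note concrete: each loop iteration against a
hung backend costs timeout + fast-interval, so detection takes the fall
threshold times that sum. A back-of-the-envelope sketch (values invented
for illustration; "fall" here is assumed to be the consecutive-failure
threshold):

```sh
fall=3; timeout=5; fast_interval=2           # seconds; invented example values
# Worst case to declare a hung backend down: fall * (timeout + fast_interval)
echo "$(( fall * (timeout + fast_interval) ))s"   # 21s
echo "$(( fall * (timeout + 1) ))s"               # 18s  (fast-interval halved: saves 3s)
echo "$(( fall * (2 + fast_interval) ))s"         # 12s  (timeout lowered to 2: saves 9s)
```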

Lint (26 issues across 12 files, all errcheck / ineffassign / S1021):
- cmd/frontend/handlers.go: _, _ = fmt.Fprintf(...) for the SSE retry
  hint and resync control-event writes.
- cmd/maglevc/commands.go: bulk-prefix every fmt.Fprintf(w, ...) with
  _, _ =; also merged 'var watchEventsOptSlot *Node; ... = &Node{...}'
  into a single := declaration (staticcheck S1021) — the self-
  referencing pattern still works because the Children back-ref is
  assigned on the next statement, not inside the struct literal.
- cmd/maglevc/complete.go: _, _ = fmt.Fprintf(ql.rl.Stderr(), ...)
  for the banner and help writes; removed the ineffectual
  'partial = ""' assignment (nothing downstream reads partial after
  that branch, so setting it was dead code flagged by ineffassign).
- cmd/maglevc/shell.go: defer func() { _ = rl.Close() }() for the
  readline instance; _, _ = fmt.Fprintf(rl.Stderr(), ...) for error
  display in the REPL loop.
- cmd/maglevc/main.go: defer func() { _ = conn.Close() }() for the
  gRPC client connection.
- internal/grpcapi/server_test.go: _ = conn.Close() in the test
  teardown closure.
- internal/prober/http.go: _ = c.Close() in the TLS-handshake-failed
  path; defer func() { _ = conn.Close() }() and defer func() { _ =
  resp.Body.Close() }() for the two deferred cleanups.
- internal/prober/http_test.go: defer func() { _ = resp.Body.Close()
  }() plus three _, _ = fmt.Fprint(w, ...) in the httptest.Server
  handlers and _, _ = fmt.Sscanf(...) when parsing the test listener's
  port.
- internal/prober/icmp.go: defer func() { _ = pc.Close() }() for the
  ICMP packet conn.
- internal/prober/netns.go: defer func() { _ = origNs.Close() }(),
  defer func() { _ = netns.Set(origNs) }(), defer func() { _ =
  targetNs.Close() }() — also dropped a stray //nolint:errcheck that
  was no longer needed once the closure wrapping handled the discard.
- internal/prober/tcp.go: _ = conn.Close() in the L4-only path,
  _ = tlsConn.Close() in the failed and succeeded handshake branches,
  _ = tlsConn.SetDeadline(...) (also dropped a //nolint:errcheck
  previously covering it).

Iterative 'make lint' runs were needed because golangci-lint v2.x
caps same-linter reports per pass, so the first pass reported 21,
then 4, then 3, then 1, then 0. Final pass: 0 issues. make test is
green across every package, and make build produces all three
binaries cleanly.
2026-04-14 17:37:53 +02:00


# User Guide
## maglevd
`maglevd` is the health-checker daemon. It probes backends according to the
configuration file, maintains their health state, and exposes a gRPC API for
inspection and control.
### Flags
| Flag | Environment variable | Default | Description |
|---|---|---|---|
| `--config` | `MAGLEV_CONFIG` | `/etc/vpp-maglev/maglev.yaml` | Path to the YAML configuration file. |
| `--grpc-addr` | `MAGLEV_GRPC_ADDR` | `:9090` | TCP address on which the gRPC server listens. |
| `--metrics-addr` | `MAGLEV_METRICS_ADDR` | `:9091` | TCP address for the Prometheus `/metrics` HTTP endpoint. Set to empty to disable. |
| `--vpp-api-addr` | `MAGLEV_VPP_API_ADDR` | `/run/vpp/api.sock` | VPP binary API socket path. Set to empty to disable VPP integration. |
| `--vpp-stats-addr` | `MAGLEV_VPP_STATS_ADDR` | `/run/vpp/stats.sock` | VPP stats socket path. |
| `--log-level` | `MAGLEV_LOG_LEVEL` | `info` | Log verbosity: `debug`, `info`, `warn`, or `error`. |
| `--check` | — | — | Read and validate the config file, then exit. Exits 0 if the config is valid, 1 on YAML parse error, 2 on semantic error. |
| `--reflection` | — | `true` | Enable gRPC server reflection. Allows `grpcurl` to introspect the API without the `.proto` file. Set to `false` to disable. |
| `--version` | — | — | Print version, commit hash, and build date, then exit. |
Flags take precedence over environment variables. Both are optional; defaults
are used for anything not set.
### Signals
| Signal | Effect |
|---|---|
| `SIGHUP` | Reload the configuration file (same code path as `config reload` in `maglevc`). The file is checked before applying; if there is a parse or semantic error the reload is aborted and the error is logged (the daemon continues running with its current config). New backends are started, removed backends are stopped, backends whose health-check config is unchanged continue probing without interruption. |
| `SIGTERM` / `SIGINT` | Graceful shutdown. Active gRPC streams are closed, the server drains, then the process exits. |
### Capabilities

`maglevd` requires:

- **`CAP_NET_RAW`** when any health check uses `type: icmp` — raw
  sockets for ICMP echo. `tcp`, `http`, and `https` checks use
  normal TCP sockets and do not need this capability.
- **`CAP_SYS_ADMIN`** when `healthchecker.netns` is set in the
  config — the probe loop calls `setns(CLONE_NEWNET)` to join the
  target network namespace, and the kernel only permits that to
  processes holding `CAP_SYS_ADMIN` in the target's user namespace
  (see `setns(2)`). Without it the probe fails with
  `enter netns "<name>": operation not permitted` and every backend
  flips to `down` / `L4CON` on its first probe.

The Debian systemd unit (`vpp-maglev.service`) grants both via
`AmbientCapabilities` and `CapabilityBoundingSet`, so
`systemctl start vpp-maglev` works out of the box under the
unprivileged `maglevd` user. When running the binary by hand under
a non-root account, either:

- `setcap cap_net_raw,cap_sys_admin=eip /usr/sbin/maglevd` once at
  install time, or
- run under `systemd-run -p AmbientCapabilities='CAP_NET_RAW CAP_SYS_ADMIN' ...`
  for ad-hoc tests.

If your deployment doesn't use `netns:` at all, drop
`CAP_SYS_ADMIN` from the bounding set in the service unit — it's a
broad capability and there's no value in keeping it when nothing
calls `setns`.
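
If you want to tighten the packaged unit without editing it, a drop-in
override works. The snippet below is a sketch (file name invented); note
that `AmbientCapabilities` and `CapabilityBoundingSet` merge across
assignments, so an empty assignment is needed first to reset the
packaged value:

```ini
# /etc/systemd/system/vpp-maglev.service.d/no-netns.conf (invented name)
[Service]
# Reset, then grant only CAP_NET_RAW (for deployments that don't use netns:)
AmbientCapabilities=
AmbientCapabilities=CAP_NET_RAW
CapabilityBoundingSet=
CapabilityBoundingSet=CAP_NET_RAW
```

Apply with `systemctl daemon-reload && systemctl restart vpp-maglev`.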
### Logging
All log output is written to stdout as JSON using Go's `log/slog`. The first
line logged after the logger is configured is a `starting` record that includes
`version`, `commit`, and `date`. Every state change emits a `backend-transition`
line at `INFO` level. Per-mutation VPP LB sync events
(`vpp-lb-sync-vip-added`, `vpp-lb-sync-vip-removed`, `vpp-lb-sync-as-added`,
`vpp-lb-sync-as-removed`, `vpp-lb-sync-as-weight-updated`) are also emitted
at `INFO` so the CLI `watch events` stream and the web frontend see every
dataplane change without raising the log level. Set `--log-level debug` to
see individual probe attempts and every VPP binary-API call
(`vpp-api-send` / `vpp-api-recv` with full payload) as they happen.
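
Since every line is one JSON object, the stream is easy to post-process.
A sketch using `python3` (the exact attribute names on
`backend-transition` records are an assumption here; `jq` works just as
well):

```sh
# One sample line of the shape described above (field names illustrative):
line='{"level":"INFO","msg":"backend-transition","backend":"nginx0","from":"up","to":"down"}'
echo "$line" | python3 -c '
import json, sys
rec = json.load(sys.stdin)
if rec.get("msg") == "backend-transition":
    print(rec["backend"], rec["from"], "->", rec["to"])
'
# prints: nginx0 up -> down
```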
### Prometheus metrics
`maglevd` exposes Prometheus metrics on `--metrics-addr` (default `:9091`) at
the `/metrics` path. Metric families:

**Health-check and backend state (gauges, on-demand):**

| Metric | Labels | Description |
|---|---|---|
| `maglev_backend_state` | `backend`, `address`, `healthcheck`, `state` | 1 for the current state row per backend, 0 otherwise. |
| `maglev_backend_health` | `backend` | Current rise/fall counter value. |
| `maglev_backend_enabled` | `backend` | 1 if enabled, 0 if disabled. |
| `maglev_frontend_pool_backend_weight` | `frontend`, `pool`, `backend` | Configured weight from YAML. |
**Probe counters and latency (inline):**

| Metric | Labels | Description |
|---|---|---|
| `maglev_probe_total` | `backend`, `type`, `result`, `code` | Probes executed. `result` is `success` or `failure`. |
| `maglev_probe_duration_seconds` | `backend`, `type` | Histogram of probe wall time. |
| `maglev_backend_transitions_total` | `backend`, `from`, `to` | State machine transitions. |
**VPP integration (when enabled):**

| Metric | Labels | Description |
|---|---|---|
| `maglev_vpp_connected` | — | 1 if maglevd currently has a live VPP connection. |
| `maglev_vpp_uptime_seconds` | — | Seconds since VPP started (from `/sys/boottime`). |
| `maglev_vpp_connected_seconds` | — | Seconds since maglevd established the current VPP connection. |
| `maglev_vpp_info` | `version`, `build_date`, `pid` | Static VPP build metadata; always 1. |
| `maglev_vpp_api_total` | `msg`, `direction`, `result` | VPP binary-API calls. `direction` is `send` or `recv`; `result` is `success` or `failure`. |
| `maglev_vpp_lbsync_total` | `scope`, `kind` | Per-mutation sync counters. `scope` is `all` or `vip`; `kind` is one of `vip_added`, `vip_removed`, `as_added`, `as_removed`, `as_weight_updated`. |
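
A minimal Prometheus scrape job for `maglevd`'s endpoint (the target
host is a placeholder):

```yaml
scrape_configs:
  - job_name: maglevd
    static_configs:
      - targets: ['lb0:9091']   # --metrics-addr default port
```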
**gRPC server (standard `go-grpc-middleware/prometheus` metrics):**

`grpc_server_started_total`, `grpc_server_handled_total`,
`grpc_server_msg_received_total`, `grpc_server_msg_sent_total`, and
`grpc_server_handling_seconds` — all labelled by `grpc_service`,
`grpc_method`, `grpc_type`, and `grpc_code`. Every method is
pre-registered at zero so time series exist on the first scrape.

---
## maglevc
`maglevc` is the interactive control-plane client. It connects to a running
`maglevd` over gRPC and either executes a single command or drops into an
interactive shell.
### Usage
```sh
maglevc [--server host:port] [--color[=bool]] [command...]
```
| Flag | Default | Description |
|---|---|---|
| `--server` | `localhost:9090` | Address of the `maglevd` gRPC server. |
| `--color` | mode-aware | Colorize static field labels (dark blue ANSI). Defaults to `true` in the interactive shell and `false` in one-shot mode, so output piped into scripts stays free of escape codes. Pass `--color=true` or `--color=false` explicitly to override either default. |
When `command` arguments are supplied the command is executed and `maglevc`
exits; in this mode ANSI color is off by default so the output is script-safe.
When no arguments are given an interactive shell is started, the build version
is printed on entry, and color is on by default.
### Commands
```
show version Print build version, commit hash, and build date.
show frontends [<name>] Without name: list all frontend names.
With name: show address, protocol, port, src-ip-sticky,
description, and pools. Each pool lists its backends
with two weight columns:
weight — configured weight from the YAML
effective — state-aware weight after pool failover
(what gets programmed into VPP)
Disabled backends are marked with [disabled].
show backends [<name>] Without name: list all backend names.
With name: show address, current state (with duration),
enabled flag, health check, and recent state transitions
with timestamps and how long ago each occurred.
show healthchecks [<name>] Without name: list all health-check names.
With name: show full health-check configuration.
show vpp info Show VPP version, build date, PID, uptime, and when
maglevd connected. Returns an error if VPP is not
connected.
show vpp lb state Show the VPP load-balancer plugin state: global
configuration, configured VIPs, and their attached
application servers (address, weight, bucket count).
Returns an error if VPP is not connected.
show vpp lb counters Show per-VIP packet/byte counters from the VPP stats
segment, refreshed roughly every five seconds by
maglevd. Each row reports the four LB plugin counters
(first, next, untracked, no-server) and the FIB
packets/bytes at the VIP's host prefix. Use Prometheus
for live rates; this command shows absolute values.
Per-backend packet counters are not shown: VPP's LB
plugin forwarding node writes adj_index[VLIB_TX]
directly and bypasses ip{4,6}_lookup_inline, which is
the only path that increments /net/route/to. The
backend's FIB load_balance stats_index therefore
never ticks for LB-forwarded traffic, and exposing
zeros would mislead. See docs/implementation/TODO
for the upstream path that would fix this (new
lb_as_stats_dump API message).
sync vpp lb state [<name>] Reconcile the VPP load-balancer dataplane from the
running config. Without a name: runs a full sync —
creates missing VIPs, removes stale VIPs, and adjusts
application-server membership and weights across all
frontends. With a name: only the named frontend's VIP
is reconciled, and no VIPs are removed. A full sync
also runs automatically every
maglev.vpp.lb.sync-interval (default 30s) to catch
drift, and once on startup.
set backend <name> pause Stop health checking for a backend. Cancels the probe
goroutine so no further traffic is sent, and sets the
state to 'paused'. The backend's transition history is
preserved, so 'show backend <name>' still shows where
it came from.
set backend <name> resume Resume health checking. A fresh probe goroutine is
started and the backend re-enters unknown state.
set backend <name> disable Stop probing entirely and remove the backend from
rotation. The backend remains visible (state: disabled)
with its transition history intact and can be re-enabled
without reloading configuration.
set backend <name> enable Re-enable a disabled backend. A fresh probe goroutine is
started and the backend re-enters unknown state.
set frontend <name> pool <pool> backend <name> weight <0-100> [flush]
Set the weight of a backend within a pool. Weight 0 keeps
the backend in the pool but assigns it no traffic. Takes
effect immediately: maglevd pushes the change into VPP
via a targeted single-VIP reconcile, so there's no need
to wait for the periodic sync tick.
Without `flush`, the new weight is installed in Maglev's
new-bucket mapping but VPP's flow table is left alone.
Existing sessions keep reaching this backend until they
naturally drain — useful for graceful draining where
you want new connections to land elsewhere but don't
want to reset any in-flight traffic.
With `flush`, the corresponding application-server row
is rewritten with `lb_as_set_weight(is_flush=true)`,
which clears VPP's flow table entries for this backend.
Existing sessions are dropped immediately — useful when
the backend is being taken out of service for emergency
reasons and you don't want to wait for flows to drain.
Examples:
set frontend web pool primary backend nginx0 weight 50
set frontend web pool primary backend nginx0 weight 0 flush
watch events Stream all events (log, backend transitions, frontend)
[num <n>] Stop after receiving n events.
[log [level <level>]] Include log events. level is debug|info|warn|error
(default: info). Omitting log/backend/frontend enables all.
[backend] Include backend transition events.
[frontend] Include frontend events (reserved for future use).
Each event is printed as compact JSON on its own line.
Press any key or Ctrl-C to stop. Examples:
watch events
watch events num 20
watch events log level debug
watch events backend num 100
watch events log level debug backend
config check Ask maglevd to read and validate its current config file.
Prints "config ok" on success, or the error (parse or
semantic) returned by the daemon.
config reload Check and reload the configuration file. Equivalent to
sending SIGHUP to maglevd. Prints "config reloaded" on
success, or the specific error (parse, semantic, or
reload) that prevented the reload.
quit / exit Leave the interactive shell.
```
### Interactive shell
The shell prompt is `maglev> `. Two completion mechanisms are available:
**Tab completion** — pressing `<Tab>` at any point completes the current token.
Fixed keywords (commands and subcommands) are completed from the command tree.
Backend, frontend, and health-check names are fetched live from the server with
a 1-second timeout. If the partial token is unambiguous the word is completed
in place; if multiple candidates exist they are listed and the prompt is
restored.
**Inline help (`?`)** — typing `?` at any point prints the available
completions for the current position, with a short description next to each
keyword. The `?` character is not added to the input line.
Commands and keywords support **prefix matching**: typing `sh ba` is equivalent
to `show backends`, and `sh ba nginx0` is equivalent to `show backends nginx0`.

---
## maglevd-frontend
`maglevd-frontend` is an optional web dashboard that connects to one or
more running `maglevd` instances over gRPC and renders a live view of
frontends, backends, health checks, and VPP load-balancer state. It is
a single Go binary with the SolidJS SPA embedded via `//go:embed`; no
runtime file dependencies.

Installed by the Debian package to `/usr/sbin/maglevd-frontend` but
**not** enabled by default — the operator opts in via:
```sh
systemctl enable --now vpp-maglev-frontend
```
The systemd unit (`vpp-maglev-frontend.service`) reads its arguments
from `/etc/default/vpp-maglev` via `MAGLEV_FRONTEND_ARGS`. The same
env file is shared with `maglevd`; all `maglevd-frontend`-specific
variables are prefixed with `MAGLEV_FRONTEND_` so there's no overlap.
### Flags
| Flag | Environment variable | Default | Description |
|---|---|---|---|
| `--server` | `MAGLEV_FRONTEND_SERVERS` | *(required)* | Comma-separated list of `host:port` maglevd addresses. |
| `--listen` | `MAGLEV_FRONTEND_LISTEN` | `:8080` | HTTP bind address. |
| `--log-level` | `MAGLEV_FRONTEND_LOG_LEVEL` | `info` | Structured-log verbosity for `maglevd-frontend`'s own logs. |
| `--version` | — | — | Print version, commit hash, and build date, then exit. |
In addition to flags, two env-only variables control the admin surface:

| Environment variable | Purpose |
|---|---|
| `MAGLEV_FRONTEND_USER` | HTTP basic-auth username for `/admin/`. |
| `MAGLEV_FRONTEND_PASSWORD` | HTTP basic-auth password for `/admin/`. |
When **both** are set and non-empty the admin surface is mounted and
the SPA's "admin…" toggle becomes visible. When either is missing or
empty the `/admin/` route returns 404 and the SPA hides the toggle —
`/view/` is always reachable read-only.
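
Putting the pieces together, a minimal `/etc/default/vpp-maglev` stanza
for the frontend might look like this (hosts and credentials are
placeholders):

```sh
MAGLEV_FRONTEND_ARGS="--server lb0:9090,lb1:9090 --listen :8080"
# Setting both of the following mounts /admin/ and reveals the SPA admin toggle:
MAGLEV_FRONTEND_USER="admin"
MAGLEV_FRONTEND_PASSWORD="change-me"
```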
### What the SPA shows
After the dashboard loads, the header carries a **scope selector**:
one pill per configured maglevd, coloured green when the frontend's
gRPC channel to that maglevd is alive and red when it's dropped.
Click a pill to flip the view to that maglevd's frontends. Your
selection is persisted in a `maglev_scope` cookie (Path=/;
Max-Age=1y; SameSite=Lax), so the next page load lands on the same
server you were last looking at. If the cookie references a
maglevd that's no longer in the server list (it was removed from
`-server` or renamed), the hydration path falls through to the
first maglevd in the list instead of leaving you on a ghost
selection.

The **frontend list** is a stack of collapsible cards
(`<details>` elements) — one per VIP. Each card header shows a
fixed-width slot carrying a health icon, the frontend name, its
aggregate state badge (`up` / `down` / `unknown`), and the
address, protocol, and description. The health icon is a cascade
derived from the current backend state + VPP bucket allocation:

| Icon | Meaning |
|---|---|
| ✅ | All backends `up`, the primary pool is serving, and every backend with `effective_weight > 0` has VPP buckets > 0. |
| ‼️ | At least one backend has `effective_weight > 0` but zero VPP buckets — the control plane and dataplane disagree, almost always a bug worth investigating. |
| ❗ | The primary pool has no serving backend (every pool[0] backend has `effective_weight = 0`); the VIP is running on its fallback or nothing at all. |
| ⚠️ | At least one backend is not `up`, nothing worse. Typical maintenance / partial outage state. |
| ❓ | Fallthrough; should be unreachable in practice and indicates a logic bug in the health-cascade code. |

The card body is a table with one row per `(pool, backend)` tuple.
Columns: `pool`, `backend`, `address`, `state`, `weight`,
`effective`, `lb buckets`, `last transition`, and (in admin mode) a
kebab `⋮` menu for per-backend actions. The **LB buckets** column
reports VPP's Maglev hash table bucket count for that backend,
refreshed live via a debounced `GetVPPLBState` scrape whenever a
transition or weight edit happens (at most once per second per
maglevd). A value of `0` means "in VPP but drained", `—` means
"not in VPP at all" (e.g. between a sync and the next poll), and a
non-zero number is the share of the 1024-bucket table currently
pointing at that AS.

Card open/closed state is also persisted per-panel in a
`maglev_zippy_open` cookie, **scoped per maglevd** (the id is
`frontend-<maglevd>-<frontendName>`), so collapsing a card on
`chbtl2` doesn't also collapse the equivalent card on `localhost`.
On first load every card starts closed; unfolding one writes it to
the cookie for subsequent visits. The cookie is a best-effort hint
— a missing or corrupt value just falls back to "everything
closed", so losing it (browser clear, expiry, private window, etc.)
is purely cosmetic.

When `admin_enabled` is true the header gains an **admin toggle**
that switches between `/view/` (read-only) and `/admin/` (basic
auth, mutation actions exposed). Inside admin mode every backend
row grows a `⋮` menu with `pause`, `resume`, `enable`, `disable`,
and `set weight…` entries. Lifecycle actions open a confirmation
dialog that spells out the dataplane consequence in plain English
(`disable` specifically calls out that it drops live sessions via
the flow-table flush). The weight dialog has a 0-100 slider and a
`flush existing flows` checkbox — unchecked is the graceful drain
(new flows move, existing ones finish naturally), checked is the
immediate session-drop path.

Also visible in admin mode: a **Debug panel** at the bottom of the
page with a rolling tail of every event the SPA has seen across
all maglevds — `backend` and `frontend` transitions, log lines,
`maglevd-status` flips, `vpp-status` flips, and the VPP LB sync
events (`vpp-lb-sync-*`) with their full attribute set formatted
for scanning. A scope filter keeps the tail narrowed to the
current maglevd by default; a `all maglevds` checkbox flips it to
firehose mode, and a `pause` button freezes the tail so you can
read back.
### HTTP surface
- **`/view/`** — static SPA (dashboard). No authentication.
- **`/view/api/state`**, **`/view/api/state/{name}`** — full JSON
snapshot for every maglevd, or one maglevd.
- **`/view/api/maglevds`** — configured maglevds and connection status.
- **`/view/api/version`** — build info + `admin_enabled` flag.
- **`/view/api/events`** — Server-Sent Events stream; log, backend,
frontend, maglevd-status, vpp-status events with
`Last-Event-ID` replay from a 30-second / 2000-event ring buffer.
- **`/healthz`** — liveness; returns 200 if the HTTP server is up.
- **`/admin/`** — SPA shell behind basic auth (when configured).
- **`POST /admin/api/{maglevd}/backend/{name}/{action}`** — backend
lifecycle action. `action` is `pause`, `resume`, `enable`, or
`disable`. Returns the fresh backend snapshot as JSON.
- **`POST /admin/api/{maglevd}/frontend/{fe}/pool/{pool}/backend/{name}/weight`**
— weight change. Body: `{"weight": 0-100, "flush": bool}`. When
`flush=true`, VPP's flow table for the backend is cleared;
otherwise only the new-buckets map is updated and existing
sessions keep reaching the backend until they finish.
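
As an example, a graceful drain of `nginx0` through the weight endpoint
could be driven with `curl` (host, maglevd name, and credentials are
placeholders):

```sh
# weight 0 keeps the backend in the pool but steers new flows elsewhere;
# flush=false leaves existing sessions alone so they drain naturally.
curl -u admin:change-me \
  -X POST -H 'Content-Type: application/json' \
  -d '{"weight": 0, "flush": false}' \
  http://lb0:8080/admin/api/chbtl2/frontend/web/pool/primary/backend/nginx0/weight
```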
### Reverse-proxy requirements (SSE)
Nginx, HAProxy, or any proxy in front of `maglevd-frontend` must:
- Disable buffering on the events endpoint. `X-Accel-Buffering: no`
is sent by the server; a global `proxy_buffering off;` in the
nginx server block is the more robust answer.
- Raise `proxy_read_timeout` to at least 300s so the stream isn't
torn down between the 15-second `: ping` heartbeats the server
sends.
- Not wrap the events endpoint in any gzip/brotli middleware —
response compression buffers until its window fills and destroys
the live-stream property.

See `maglevd-frontend(8)` for the full reference.
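
For nginx, one location block that satisfies the three requirements
above might look like this (listen address and path are illustrative):

```nginx
location /view/api/events {
    proxy_pass http://127.0.0.1:8080;   # maglevd-frontend --listen address
    proxy_buffering off;                # never hold back SSE frames
    proxy_read_timeout 300s;            # outlive the 15-second ping heartbeats
    gzip off;                           # compression would buffer the stream
    proxy_http_version 1.1;
    proxy_set_header Connection "";
}
```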