# User Guide

## maglevd

maglevd is the health-checker daemon. It probes backends according to the configuration file, maintains their health state, and exposes a gRPC API for inspection and control.

### Flags

| Flag | Environment variable | Default | Description |
|------|----------------------|---------|-------------|
| `--config` | `MAGLEV_CONFIG` | `/etc/vpp-maglev/maglev.yaml` | Path to the YAML configuration file. |
| `--grpc-addr` | `MAGLEV_GRPC_ADDR` | `:9090` | TCP address on which the gRPC server listens. |
| `--metrics-addr` | `MAGLEV_METRICS_ADDR` | `:9091` | TCP address for the Prometheus `/metrics` HTTP endpoint. Set to empty to disable. |
| `--vpp-api-addr` | `MAGLEV_VPP_API_ADDR` | `/run/vpp/api.sock` | VPP binary API socket path. Set to empty to disable VPP integration. |
| `--vpp-stats-addr` | `MAGLEV_VPP_STATS_ADDR` | `/run/vpp/stats.sock` | VPP stats socket path. |
| `--log-level` | `MAGLEV_LOG_LEVEL` | `info` | Log verbosity: `debug`, `info`, `warn`, or `error`. |
| `--check` | | | Read and validate the config file, then exit. Exits 0 if the config is valid, 1 on a YAML parse error, 2 on a semantic error. |
| `--reflection` | | `true` | Enable gRPC server reflection, which lets `grpcurl` introspect the API without the `.proto` file. Set to `false` to disable. |
| `--version` | | | Print version, commit hash, and build date, then exit. |

Flags take precedence over environment variables. Both are optional; defaults are used for anything not set.
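For example (shell; addresses are placeholders), the flag wins when both are set, `--check` makes validation scriptable via its exit code, and the default-on reflection lets `grpcurl` explore the API:

```
# Flag overrides the environment variable: this runs at log level info.
MAGLEV_LOG_LEVEL=debug maglevd --log-level info

# Validate a config file without starting the daemon; exit code 0/1/2
# distinguishes ok / parse error / semantic error.
maglevd --check --config /etc/vpp-maglev/maglev.yaml && echo "config ok"

# With --reflection left at its default, grpcurl can list the gRPC API
# without the .proto files.
grpcurl -plaintext localhost:9090 list
```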

### Signals

| Signal | Effect |
|--------|--------|
| `SIGHUP` | Reload the configuration file (same code path as `config reload` in `maglevc`). The file is checked before applying; if there is a parse or semantic error the reload is aborted and the error is logged (the daemon continues running with its current config). New backends are started, removed backends are stopped, and backends whose health-check config is unchanged continue probing without interruption. |
| `SIGTERM` / `SIGINT` | Graceful shutdown. Active gRPC streams are closed, the server drains, then the process exits. |
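To trigger a reload by hand (a sketch, assuming the Debian unit name described under Capabilities below):

```
# Signal the process directly...
kill -HUP "$(pidof maglevd)"

# ...or let systemd deliver it to the unit.
systemctl kill --signal=HUP vpp-maglev
```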

### Restart behaviour

A maglevd restart is designed to be dataplane-neutral: SIGTERM → bounce → steady state should not cause any visible disruption to flows traversing the VIP, assuming VPP itself stays up throughout. This is enforced by a two-phase startup warmup controlled by `vpp.lb.startup-min-delay` (default 5s) and `vpp.lb.startup-max-delay` (default 30s):

1. **[0, min-delay) — absolute hands-off window.** Neither the periodic `SyncLBStateAll` loop nor the per-transition `SyncLBStateVIP` path from the reconciler touches VPP. Probes run, the checker accumulates state, and any backend transitions are logged at DEBUG level but suppressed from the dataplane. VPP continues serving whatever it had programmed before the restart, unmodified.

2. **[min-delay, max-delay) — per-VIP release phase.** Each frontend is released (and one `SyncLBStateVIP` runs against it) as soon as every backend it references has reached a non-`Unknown` state, i.e. the checker's rise counter has completed for every probe. The reconciler event path and a 250ms background poll both attempt to release VIPs; whichever wins the race logs `vpp-lb-warmup-release` with `trigger=reconciler-event` or `trigger=poll`.

3. **Exit.** `vpp-lb-warmup-max-delay-elapsed` always fires at the max-delay boundary, regardless of how the warmup got there. One of two paths is taken:

   - **Happy path:** every frontend was released individually during the release phase before max-delay expired. Logged as `vpp-lb-warmup-complete` at the moment all releases complete (anywhere in [min-delay, max-delay)). The warmup gates open immediately at `-complete`, so the periodic sync loop can start drift correction right away. The warmup driver then sleeps until max-delay and emits `vpp-lb-warmup-max-delay-elapsed` as a gratuitous timeline marker — the gate is already open, but the line keeps the log sequence symmetric with the watchdog path.
   - **Watchdog path:** max-delay is reached with one or more frontends still holding `StateUnknown` backends. Logged as `vpp-lb-warmup-max-delay-elapsed` at the boundary, followed by a final `SyncLBStateAll` that sweeps the stragglers — anything still in `StateUnknown` at this point is programmed with weight 0.

   After either path, the reconciler and the periodic sync loop run unconditionally on every transition.

The warmup clock is measured from vpp.New() (shortly after process start) and is not reset by config reloads, VPP reconnects, or SIGHUP — it's strictly tied to the maglevd process lifetime. A VPP drop mid-warmup is handled transparently: when VPP reconnects, the warmup driver picks up wherever the process-relative clock now stands.

To disable the warmup entirely — first sync fires immediately at startup, and backends may be black-holed for a few seconds until rise probes complete — set both `startup-min-delay` and `startup-max-delay` to `0s` in the config, as in the sketch below. This is useful for tests and dev setups where a couple of seconds of downtime on restart is acceptable and the extra observability is not worth the delay.
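A minimal config fragment (abridged; the key nesting is inferred from the dotted `vpp.lb.*` names above, and all surrounding keys are omitted):

```yaml
vpp:
  lb:
    startup-min-delay: 0s   # skip the hands-off window
    startup-max-delay: 0s   # skip the release phase; first sync fires immediately
```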

Relevant log lines (all at INFO unless noted):

- `vpp-lb-warmup-start` — warmup begins, with the configured delay values.
- `vpp-lb-warmup-min-delay-elapsed` — absolute hands-off window ended; per-VIP release phase starting.
- `vpp-lb-warmup-release` — a frontend has been individually released; `trigger` is `poll` or `reconciler-event` depending on which path won the race.
- `vpp-lb-warmup-complete` — every VIP was released individually before max-delay. Fires any time in [min-delay, max-delay) depending on how quickly backends settled. On the happy path the warmup gates open at this moment; `-max-delay-elapsed` still fires later at the boundary as a timeline marker.
- `vpp-lb-warmup-max-delay-elapsed` — max-delay boundary reached. Always fires, on both the happy and watchdog paths. On the watchdog path it's followed immediately by a full `SyncLBStateAll` to sweep stragglers still in `StateUnknown`; on the happy path the gates are already open and this line is purely informational.
- `vpp-lb-warmup-skipped` — both delays were configured to 0 and the warmup was bypassed entirely.
- `vpp-reconciler-suppressed-min-delay` (DEBUG) — a transition event arrived during min-delay and was dropped.
- `vpp-reconciler-suppressed-warmup` (DEBUG) — a transition event arrived after min-delay but the frontend still has backends in `StateUnknown`.
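An illustrative happy-path sequence as it appears on stdout (timestamps elided; apart from `trigger`, the attribute names and frontend names here are illustrative, not a schema guarantee):

```json
{"time":"...","level":"INFO","msg":"vpp-lb-warmup-start","min-delay":"5s","max-delay":"30s"}
{"time":"...","level":"INFO","msg":"vpp-lb-warmup-min-delay-elapsed"}
{"time":"...","level":"INFO","msg":"vpp-lb-warmup-release","frontend":"web","trigger":"reconciler-event"}
{"time":"...","level":"INFO","msg":"vpp-lb-warmup-release","frontend":"smtp","trigger":"poll"}
{"time":"...","level":"INFO","msg":"vpp-lb-warmup-complete"}
{"time":"...","level":"INFO","msg":"vpp-lb-warmup-max-delay-elapsed"}
```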

### Capabilities

maglevd requires:

- `CAP_NET_RAW` when any health check uses `type: icmp` — raw sockets for ICMP echo. `tcp`, `http`, and `https` checks use normal TCP sockets and do not need this capability.
- `CAP_SYS_ADMIN` when `healthchecker.netns` is set in the config — the probe loop calls `setns(CLONE_NEWNET)` to join the target network namespace, and the kernel only permits that for processes holding `CAP_SYS_ADMIN` in the target's user namespace (see setns(2)). Without it the probe fails with `enter netns "<name>": operation not permitted` and every backend flips to down / L4CON on its first probe.

The Debian systemd unit (vpp-maglev.service) grants both via AmbientCapabilities and CapabilityBoundingSet, so systemctl start vpp-maglev works out of the box under the unprivileged maglevd user. When running the binary by hand under a non-root account, either:

- run `setcap cap_net_raw,cap_sys_admin=eip /usr/sbin/maglevd` once at install time, or
- run under `systemd-run -p AmbientCapabilities='CAP_NET_RAW CAP_SYS_ADMIN' ...` for ad-hoc tests.

If your deployment doesn't use `netns:` at all, drop `CAP_SYS_ADMIN` from the bounding set in the service unit — it's a broad capability and there's no value in keeping it when nothing calls `setns`.
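One way to do that without editing the packaged unit is a systemd drop-in; a sketch (the drop-in file name is arbitrary), noting that list-type settings must be cleared with an empty assignment before being reassigned:

```ini
# /etc/systemd/system/vpp-maglev.service.d/no-sysadmin.conf
[Service]
# Empty assignment resets the list inherited from the packaged unit.
AmbientCapabilities=
AmbientCapabilities=CAP_NET_RAW
CapabilityBoundingSet=
CapabilityBoundingSet=CAP_NET_RAW
```

Follow with `systemctl daemon-reload && systemctl restart vpp-maglev`.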

### Logging

All log output is written to stdout as JSON using Go's log/slog. The first line logged after the logger is configured is a `starting` record that includes version, commit, and date. Every state change emits a `backend-transition` line at INFO level. Per-mutation VPP LB sync events (`vpp-lb-sync-vip-added`, `vpp-lb-sync-vip-removed`, `vpp-lb-sync-as-added`, `vpp-lb-sync-as-removed`, `vpp-lb-sync-as-weight-updated`) are also emitted at INFO, so the CLI `watch events` stream and the web frontend see every dataplane change without raising the log level. Set `--log-level debug` to see individual probe attempts and every VPP binary-API call (`vpp-api-send` / `vpp-api-recv` with full payload) as they happen.

Within a single VIP reconcile, maglevd issues `lb_as_add_del` calls in ascending numeric order of the AS's IP address (all IPv4 before all IPv6, numeric-ascending within each family), not Go map iteration order. This matters because VPP's LB plugin stores ASes in an internal vec in insertion order and breaks per-bucket ties in the Maglev lookup table by whichever AS comes earlier in the vec — so without a stable call order, two maglevd instances serving identical configs can end up programming different new-flow tables on their respective VPP boxes, and per-bucket debugging becomes non-reproducible.

Numeric (rather than lexicographic) ordering is chosen because a string sort would place 10.0.0.10 before 10.0.0.2 (and 2001:db8::10 before 2001:db8::2), which would satisfy determinism but produce sync-log output that looks scrambled to human readers. The sort is a correctness property, not just a cosmetic one, and the sync log lines appear in that same order so `watch events` output is comparable across instances. Note that this is the first half of the fix; the second half (a matching sort inside VPP's own `lb_vip_update_new_flow_table` to close the flap-history case where freed `as_pool` slots are reused in locally-visited order) is a separate change to VPP upstream.
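For illustration only (not the daemon's actual code): Go's net/netip produces exactly this order, because `netip.Addr.Compare` sorts 4-byte addresses before 16-byte ones and numerically within each family:

```go
package main

import (
	"fmt"
	"net/netip"
	"slices"
)

func main() {
	// Hypothetical application-server addresses, deliberately unsorted.
	ases := []netip.Addr{
		netip.MustParseAddr("10.0.0.10"),
		netip.MustParseAddr("2001:db8::2"),
		netip.MustParseAddr("10.0.0.2"),
		netip.MustParseAddr("2001:db8::10"),
	}
	// All IPv4 first, numeric-ascending within each family.
	slices.SortFunc(ases, netip.Addr.Compare)
	fmt.Println(ases)
	// Output: [10.0.0.2 10.0.0.10 2001:db8::2 2001:db8::10]
}
```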

### Prometheus metrics

maglevd exposes Prometheus metrics on `--metrics-addr` (default `:9091`) at the `/metrics` path. Metric families:

Health-check and backend state (gauges, on-demand):

| Metric | Labels | Description |
|--------|--------|-------------|
| `maglev_backend_state` | `backend`, `address`, `healthcheck`, `state` | 1 for the current state row per backend, 0 otherwise. |
| `maglev_backend_health` | `backend` | Current rise/fall counter value. |
| `maglev_backend_enabled` | `backend` | 1 if enabled, 0 if disabled. |
| `maglev_frontend_pool_backend_weight` | `frontend`, `pool`, `backend` | Configured weight from YAML. |

Probe counters and latency (inline):

| Metric | Labels | Description |
|--------|--------|-------------|
| `maglev_probe_total` | `backend`, `type`, `result`, `code` | Probes executed. `result` is `success` or `failure`. |
| `maglev_probe_duration_seconds` | `backend`, `type` | Histogram of probe wall time. |
| `maglev_backend_transitions_total` | `backend`, `from`, `to` | State machine transitions. |

VPP integration (when enabled):

| Metric | Labels | Description |
|--------|--------|-------------|
| `maglev_vpp_connected` | | 1 if maglevd currently has a live VPP connection. |
| `maglev_vpp_uptime_seconds` | | Seconds since VPP started (from `/sys/boottime`). |
| `maglev_vpp_connected_seconds` | | Seconds since maglevd established the current VPP connection. |
| `maglev_vpp_info` | `version`, `build_date`, `pid` | Static VPP build metadata; always 1. |
| `maglev_vpp_api_total` | `msg`, `direction`, `result` | VPP binary-API calls. `direction` is `send` or `recv`; `result` is `success` or `failure`. |
| `maglev_vpp_lbsync_total` | `scope`, `kind` | Per-mutation sync counters. `scope` is `all` or `vip`; `kind` is one of `vip_added`, `vip_removed`, `as_added`, `as_removed`, `as_weight_updated`. |

gRPC server (standard go-grpc-middleware/prometheus metrics): `grpc_server_started_total`, `grpc_server_handled_total`, `grpc_server_msg_received_total`, `grpc_server_msg_sent_total`, and `grpc_server_handling_seconds` — all labelled by `grpc_service`, `grpc_method`, `grpc_type`, and `grpc_code`. Every method is pre-registered at zero so time series exist on the first scrape.
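A couple of starter queries against the families above (PromQL; the windows and thresholds are arbitrary examples, not recommendations):

```promql
# Per-backend probe failure ratio over the last 5 minutes.
sum by (backend) (rate(maglev_probe_total{result="failure"}[5m]))
  / sum by (backend) (rate(maglev_probe_total[5m]))

# Alert-worthy: maglevd has lost its VPP connection.
maglev_vpp_connected == 0
```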


## maglevc

maglevc is the interactive control-plane client. It connects to a running maglevd over gRPC and either executes a single command or drops into an interactive shell.

### Usage

```
maglevc [--server host:port] [--color[=bool]] [command...]
```

| Flag | Default | Description |
|------|---------|-------------|
| `--server` | `localhost:9090` | Address of the maglevd gRPC server. |
| `--color` | mode-aware | Colorize static field labels (dark blue ANSI). Defaults to true in the interactive shell and false in one-shot mode, so output piped into scripts stays free of escape codes. Pass `--color=true` or `--color=false` explicitly to override either default. |

When command arguments are supplied the command is executed and maglevc exits; in this mode ANSI color is off by default so the output is script-safe. When no arguments are given an interactive shell is started, the build version is printed on entry, and color is on by default.
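For example (the backend name is a placeholder):

```
$ maglevc show backends      # one-shot: executes and exits, color off
$ maglevc                    # no arguments: interactive shell, color on
maglev> show backends nginx0
```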

### Commands

```
show version                     Print build version, commit hash, and build date.

show frontends [<name>]          Without name: list all frontend names.
                                 With name: show address, protocol, port, src-ip-sticky,
                                 description, and pools. Each pool lists its backends
                                 with two weight columns:
                                   weight     — configured weight from the YAML
                                   effective  — state-aware weight after pool failover
                                                (what gets programmed into VPP)
                                 Disabled backends are marked with [disabled].

show backends [<name>]           Without name: list all backend names.
                                 With name: show address, current state (with duration),
                                 enabled flag, health check, and recent state transitions
                                 with timestamps and how long ago each occurred.

show healthchecks [<name>]       Without name: list all health-check names.
                                 With name: show full health-check configuration.

show vpp info                    Show VPP version, build date, PID, uptime, and when
                                 maglevd connected. Returns an error if VPP is not
                                 connected.
show vpp lb state                Show the VPP load-balancer plugin state: global
                                 configuration, configured VIPs, and their attached
                                 application servers (address, weight, bucket count).
                                 Returns an error if VPP is not connected.
show vpp lb counters             Show per-VIP packet/byte counters from the VPP stats
                                 segment, refreshed roughly every five seconds by
                                 maglevd. Each row reports the four LB plugin counters
                                 (first, next, untracked, no-server) and the FIB
                                 packets/bytes at the VIP's host prefix. Use Prometheus
                                 for live rates; this command shows absolute values.

                                 Per-backend packet counters are not shown: VPP's LB
                                 plugin forwarding node writes adj_index[VLIB_TX]
                                 directly and bypasses ip{4,6}_lookup_inline, which is
                                 the only path that increments /net/route/to. The
                                 backend's FIB load_balance stats_index therefore
                                 never ticks for LB-forwarded traffic, and exposing
                                 zeros would mislead. See docs/implementation/TODO
                                 for the upstream path that would fix this (new
                                 lb_as_stats_dump API message).

sync vpp lb state [<name>]       Reconcile the VPP load-balancer dataplane from the
                                 running config. Without a name: runs a full sync —
                                 creates missing VIPs, removes stale VIPs, and adjusts
                                 application-server membership and weights across all
                                 frontends. With a name: only the named frontend's VIP
                                 is reconciled, and no VIPs are removed. A full sync
                                 also runs automatically every
                                 maglev.vpp.lb.sync-interval (default 30s) to catch
                                 drift, and once on startup.

set backend <name> pause         Stop health checking for a backend. Cancels the probe
                                 goroutine so no further traffic is sent, and sets the
                                 state to 'paused'. The backend's transition history is
                                 preserved, so 'show backend <name>' still shows where
                                 it came from.
set backend <name> resume        Resume health checking. A fresh probe goroutine is
                                 started and the backend re-enters unknown state.
set backend <name> disable       Stop probing entirely and remove the backend from
                                 rotation. The backend remains visible (state: disabled)
                                 with its transition history intact and can be re-enabled
                                 without reloading configuration.
set backend <name> enable        Re-enable a disabled backend. A fresh probe goroutine is
                                 started and the backend re-enters unknown state.

set frontend <name> pool <pool> backend <name> weight <0-100> [flush]
                                 Set the weight of a backend within a pool. Weight 0 keeps
                                 the backend in the pool but assigns it no traffic. Takes
                                 effect immediately: maglevd pushes the change into VPP
                                 via a targeted single-VIP reconcile, so there's no need
                                 to wait for the periodic sync tick.

                                 Without `flush`, the new weight is installed in Maglev's
                                 new-bucket mapping but VPP's flow table is left alone.
                                 Existing sessions keep reaching this backend until they
                                 naturally drain — useful for graceful draining where
                                 you want new connections to land elsewhere but don't
                                 want to reset any in-flight traffic.

                                 With `flush`, the corresponding application-server row
                                 is rewritten with `lb_as_set_weight(is_flush=true)`,
                                 which clears VPP's flow table entries for this backend.
                                 Existing sessions are dropped immediately — useful when
                                 the backend is being taken out of service for emergency
                                 reasons and you don't want to wait for flows to drain.

                                 Examples:
                                   set frontend web pool primary backend nginx0 weight 50
                                   set frontend web pool primary backend nginx0 weight 0 flush

watch events                     Stream all events (log, backend transitions, frontend)
  [num <n>]                      Stop after receiving n events.
  [log [level <level>]]          Include log events. level is debug|info|warn|error
                                 (default: info). Omitting log/backend/frontend enables all.
  [backend]                      Include backend transition events.
  [frontend]                     Include frontend events (reserved for future use).

                                 Each event is printed as compact JSON on its own line.
                                 Press any key or Ctrl-C to stop. Examples:

                                   watch events
                                   watch events num 20
                                   watch events log level debug
                                   watch events backend num 100
                                   watch events log level debug backend

config check                     Ask maglevd to read and validate its current config file.
                                 Prints "config ok" on success, or the error (parse or
                                 semantic) returned by the daemon.
config reload                    Check and reload the configuration file. Equivalent to
                                 sending SIGHUP to maglevd. Prints "config reloaded" on
                                 success, or the specific error (parse, semantic, or
                                 reload) that prevented the reload.

quit / exit                      Leave the interactive shell.
```

### Interactive shell

The shell prompt is `maglev> `. Two completion mechanisms are available:

Tab completion — pressing `<Tab>` at any point completes the current token. Fixed keywords (commands and subcommands) are completed from the command tree. Backend, frontend, and health-check names are fetched live from the server with a 1-second timeout. If the partial token is unambiguous the word is completed in place; if multiple candidates exist they are listed and the prompt is restored.

Inline help (`?`) — typing `?` at any point prints the available completions for the current position, with a short description next to each keyword. The `?` character is not added to the input line.

Commands and keywords support prefix matching: typing `sh ba` is equivalent to `show backends`, and `sh ba nginx0` is equivalent to `show backends nginx0`.


## maglevd-frontend

maglevd-frontend is an optional web dashboard that connects to one or more running maglevd instances over gRPC and renders a live view of frontends, backends, health checks, and VPP load-balancer state. It is a single Go binary with the SolidJS SPA embedded via //go:embed; no runtime file dependencies.

Installed by the Debian package to `/usr/sbin/maglevd-frontend` but not enabled by default — the operator opts in via:

```
systemctl enable --now vpp-maglev-frontend
```

The systemd unit (vpp-maglev-frontend.service) reads its arguments from /etc/default/vpp-maglev via MAGLEV_FRONTEND_ARGS. The same env file is shared with maglevd; all maglevd-frontend-specific variables are prefixed with MAGLEV_FRONTEND_ so there's no overlap.

### Flags

| Flag | Environment variable | Default | Description |
|------|----------------------|---------|-------------|
| `--server` | `MAGLEV_FRONTEND_SERVERS` | (required) | Comma-separated list of `host:port` maglevd addresses. |
| `--listen` | `MAGLEV_FRONTEND_LISTEN` | `:8080` | HTTP bind address. |
| `--log-level` | `MAGLEV_FRONTEND_LOG_LEVEL` | `info` | Structured-log verbosity for maglevd-frontend's own logs. |
| `--version` | | | Print version, commit hash, and build date, then exit. |

In addition to flags, two env-only variables control the admin surface:

| Environment variable | Purpose |
|----------------------|---------|
| `MAGLEV_FRONTEND_USER` | HTTP basic-auth username for `/admin/`. |
| `MAGLEV_FRONTEND_PASSWORD` | HTTP basic-auth password for `/admin/`. |

When both are set and non-empty the admin surface is mounted and the SPA's "admin…" toggle becomes visible. When either is missing or empty the /admin/ route returns 404 and the SPA hides the toggle — /view/ is always reachable read-only.
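A sketch of the corresponding `/etc/default/vpp-maglev` entries (hostnames and credentials are placeholders):

```sh
MAGLEV_FRONTEND_ARGS="--server lb0:9090,lb1:9090 --listen :8080"
MAGLEV_FRONTEND_USER="admin"
MAGLEV_FRONTEND_PASSWORD="s3cret"
```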

### What the SPA shows

After the dashboard loads, the header carries a scope selector: one pill per configured maglevd, coloured green when the frontend's gRPC channel to that maglevd is alive and red when it's dropped. Click a pill to flip the view to that maglevd's frontends. Your selection is persisted in a maglev_scope cookie (Path=/; Max-Age=1y; SameSite=Lax), so the next page load lands on the same server you were last looking at. If the cookie references a maglevd that's no longer in the server list (it was removed from -server or renamed), the hydration path falls through to the first maglevd in the list instead of leaving you on a ghost selection.

The frontend list is a stack of collapsible cards (`<details>` elements) — one per VIP. Each card header shows a fixed-width slot carrying a health icon, the frontend name, its aggregate state badge (up / down / unknown), and the address, protocol, and description. The health icon is a cascade derived from the current backend state plus VPP bucket allocation:

| Icon | Meaning |
|------|---------|
| | All backends up, the primary pool is serving, and every backend with effective_weight > 0 has VPP buckets > 0. |
| ‼️ | At least one backend has effective_weight > 0 but zero VPP buckets — the control plane and dataplane disagree, almost always a bug worth investigating. |
| | The primary pool has no serving backend (every pool[0] backend has effective_weight = 0); the VIP is running on its fallback or nothing at all. |
| ⚠️ | At least one backend is not up, nothing worse. Typical maintenance / partial outage state. |
| | Fallthrough; should be unreachable in practice and indicates a logic bug in the health-cascade code. |

The card body is a table with one row per (pool, backend) tuple. Columns: pool, backend, address, state, weight, effective, lb buckets, last transition, and (in admin mode) a kebab menu for per-backend actions. The LB buckets column reports VPP's Maglev hash table bucket count for that backend, refreshed live via a debounced GetVPPLBState scrape whenever a transition or weight edit happens (at most once per second per maglevd). A value of 0 means "in VPP but drained", a dash means "not in VPP at all" (e.g. between a sync and the next poll), and a non-zero number is the share of the 1024-bucket table currently pointing at that AS.

Card open/closed state is also persisted per-panel in a maglev_zippy_open cookie, scoped per maglevd (the id is frontend-<maglevd>-<frontendName>), so collapsing a card on chbtl2 doesn't also collapse the equivalent card on localhost. On first load every card starts closed; unfolding one writes it to the cookie for subsequent visits. The cookie is a best-effort hint — a missing or corrupt value just falls back to "everything closed", so losing it (browser clear, expiry, private window, etc.) is purely cosmetic.

When admin_enabled is true the header gains an admin toggle that switches between /view/ (read-only) and /admin/ (basic auth, mutation actions exposed). Inside admin mode every backend row grows a menu with pause, resume, enable, disable, and set weight… entries. Lifecycle actions open a confirmation dialog that spells out the dataplane consequence in plain English (disable specifically calls out that it drops live sessions via the flow-table flush). The weight dialog has a 0-100 slider and a flush existing flows checkbox — unchecked is the graceful drain (new flows move, existing ones finish naturally), checked is the immediate session-drop path.

Also visible in admin mode: a Debug panel at the bottom of the page with a rolling tail of every event the SPA has seen across all maglevds — backend and frontend transitions, log lines, maglevd-status flips, vpp-status flips, and the VPP LB sync events (vpp-lb-sync-*) with their full attribute set formatted for scanning. A scope filter keeps the tail narrowed to the current maglevd by default; an all maglevds checkbox flips it to firehose mode, and a pause button freezes the tail so you can read back.

### HTTP surface

- `/view/` — static SPA (dashboard). No authentication.
- `/view/api/state`, `/view/api/state/{name}` — full JSON snapshot for every maglevd, or one maglevd.
- `/view/api/maglevds` — configured maglevds and connection status.
- `/view/api/version` — build info + `admin_enabled` flag.
- `/view/api/events` — Server-Sent Events stream; log, backend, frontend, maglevd-status, and vpp-status events, with `Last-Event-ID` replay from a 30-second / 2000-event ring buffer.
- `/healthz` — liveness; returns 200 if the HTTP server is up.
- `/admin/` — SPA shell behind basic auth (when configured).
- `POST /admin/api/{maglevd}/backend/{name}/{action}` — backend lifecycle action. `action` is `pause`, `resume`, `enable`, or `disable`. Returns the fresh backend snapshot as JSON.
- `POST /admin/api/{maglevd}/frontend/{fe}/pool/{pool}/backend/{name}/weight` — weight change. Body: `{"weight": 0-100, "flush": bool}`. When `flush=true`, VPP's flow table for the backend is cleared; otherwise only the new-buckets map is updated and existing sessions keep reaching the backend until they finish.
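For example, draining a backend gracefully through the admin API (a sketch; host, credentials, and the maglevd/frontend/pool/backend names are placeholders):

```sh
curl -u admin:s3cret -X POST \
  -H 'Content-Type: application/json' \
  -d '{"weight": 0, "flush": false}' \
  http://localhost:8080/admin/api/lb0:9090/frontend/web/pool/primary/backend/nginx0/weight
```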

### Reverse-proxy requirements (SSE)

Nginx, HAProxy, or any proxy in front of maglevd-frontend must:

- Disable buffering on the events endpoint. `X-Accel-Buffering: no` is sent by the server; a global `proxy_buffering off;` in the nginx server block is the more robust answer.
- Raise `proxy_read_timeout` to at least 300s so the stream isn't torn down between the 15-second `: ping` heartbeats the server sends.
- Not wrap the events endpoint in any gzip/brotli middleware — response compression buffers until its window fills and destroys the live-stream property.
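A minimal nginx location block satisfying all three (a sketch; the upstream address assumes the default `--listen :8080`):

```nginx
location /view/api/events {
    proxy_pass http://127.0.0.1:8080;
    proxy_http_version 1.1;      # long-lived connection for the SSE stream
    proxy_buffering off;         # deliver events as they arrive
    proxy_read_timeout 300s;     # outlive the 15-second heartbeat interval
    gzip off;                    # compression would buffer the stream
}
```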

See maglevd-frontend(8) for the full reference.


## maglevt

maglevt is an optional out-of-band VIP probe TUI. It reads one or more maglev.yaml files, enumerates the configured TCP/HTTP frontends, and probes each one on a configurable HTTP path at a configurable interval. It does not talk gRPC and does not depend on a running maglevd — it's a purely client-side view of the VIPs, driven entirely from the config file on disk.

It's useful for a handful of things in particular:

- Validating a maglevd restart end-to-end from a client perspective: the probe tally keeps running regardless of what the control plane is doing, so a brief blip or a missed failover is visible directly.
- Debugging pool failover: with keep-alives off, every probe opens a fresh TCP connection and is reshuffled by VPP's Maglev hash, so the response-header tally visibly reshuffles the moment a standby pool takes over.
- Sanity-checking VIP reachability across multi-site deployments, especially when the gRPC control plane isn't reachable from the machine you're debugging on.

maglevt is built by make alongside the other binaries but is not shipped in the Debian package; run it from the build/ tree or copy it onto the host by hand.

### Flags

| Flag | Environment variable | Default | Description |
|------|----------------------|---------|-------------|
| `--config` | | `/etc/vpp-maglev/maglev.yaml` | Path to a maglev.yaml file. Repeatable; also accepts a comma-separated list. Frontends are unioned across files and de-duplicated by (address, protocol, port). |
| `--interval` | | `100ms` | Probe interval per VIP, with ±10% jitter applied per probe to avoid phase-locking. |
| `--timeout` | | `2s` | Per-request timeout. |
| `--host` | | (VIP address) | Override for the HTTP Host header. Defaults to the VIP address literal. |
| `--uri` / `--path` | | `/.well-known/ipng/healthz` | HTTP request path used in the GET. `--path` is an alias for `--uri`. |
| `--header` | | `X-IPng-Frontend` | Response header whose value is extracted and tallied, so you can see which backend served each request. |
| `--insecure` | | `true` | Skip TLS verification for HTTPS frontends. |
| `--keepalive` / `-k` | | `false` | Enable HTTP keep-alives. Off by default so every probe opens a fresh connection — required for failover visibility, because a pinned keep-alive would mask a Maglev reshuffle. |
| `--filter` | | | Regular expression; only probe frontends whose name matches. |
| `--version` | | | Print version, commit hash, and build date, then exit. |
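A typical invocation (file paths, interval, and filter pattern are examples):

```sh
# Probe only the web frontends, 4 times per second, from two site configs.
maglevt --config /etc/vpp-maglev/maglev.yaml,/tmp/site2-maglev.yaml \
        --interval 250ms --filter '^web'
```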

### UI

The TUI is built with Bubble Tea and shows a deterministic grid — one tile per (scheme, address, port) VIP, IPv6 before IPv4 and HTTPS before HTTP, so the layout is stable across runs and across machines. Each tile carries a rolling latency summary (min, max, average, plus a few percentiles), running success and failure counts, and a tally of the configured response-header values seen from that VIP. Press `d` to toggle reverse-DNS resolution on the addresses shown in the tile headers; press `q` or `Ctrl-C` to exit.

There is no machine-readable output. If you need metrics, scrape Prometheus on maglevd instead.