Replace the cfgPath field in the TUI header with the system's
fully-qualified hostname, obtained via gethostname(2) plus a
canonical-name lookup, matching what `hostname -f` produces.
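A minimal sketch of that derivation, assuming the implementation mirrors `hostname -f`'s short-name-then-resolve behaviour (the function name `fqdn` and the fallback policy are mine, not from the commit):

```go
package main

import (
	"fmt"
	"net"
	"os"
	"strings"
)

// fqdn approximates `hostname -f`: take the kernel's short hostname
// and ask the resolver for its canonical name, falling back to the
// short name when resolution fails.
func fqdn() string {
	host, err := os.Hostname()
	if err != nil {
		return "unknown"
	}
	cname, err := net.LookupCNAME(host)
	if err != nil || cname == "" {
		return host
	}
	// The resolver returns a trailing dot on absolute names.
	return strings.TrimSuffix(cname, ".")
}

func main() {
	fmt.Println(fqdn())
}
```

The fallback matters in the TUI context: a header should degrade to the short hostname rather than go blank when DNS is unreachable.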
runProbeLoop used to sleep the full jittered interval after every
probe, so a 100ms --interval with a 30ms probe actually produced a
130ms period — the flag quietly lied about cadence and the effective
rate depended on backend latency. The fix subtracts result.Duration
from the sleep so the loop's period matches --interval. Probes that
overrun the interval clamp the sleep to zero and fire immediately
without trying to catch up on missed cycles, so a slow backend
doesn't get flooded with back-to-back retries exactly when it's
already struggling.
A small bubbletea TUI that reads maglev.yaml (repeatable --config),
enumerates every VIP, and probes each from outside the load balancer
on a tight cadence (default 100ms, ±10% jitter). HTTP/HTTPS VIPs get
a GET against a configurable URI (default /.well-known/ipng/healthz)
with per-VIP rolling latency (p50/p95/p99/max), lifetime N/FAIL
counters, LAST status, and a response-header tally. Non-HTTP VIPs
get a TCP connect probe. A bounded error panel classifies anomalies
as timeout / http-err / net-err / spike and auto-sizes to fill the
screen.
Utility: during a failover drill (backend flap, AS drain, config
push) the tally panel shows which backend each VIP is actually
steering to, with two-colour activity highlighting over a 5s
window — white = receiving traffic, grey = drained. Paired with
the rolling OK%/latency columns it gives an at-a-glance answer to
"is the VIP healthy from the outside right now, and which backend
is it hitting", without relying on maglevd's own view of the
world.
Also bumps Makefile/go.mod to build the new binary.