New maglev-frontend component; promote LB sync events to INFO

Introduces maglev-frontend, a responsive, real-time web dashboard for one
or more running maglevd instances. Source lives at cmd/frontend/; the
built binary is maglev-frontend. It is a single Go process with the
SolidJS SPA embedded via //go:embed — no runtime file dependencies.

Architecture
 - One persistent gRPC connection per configured maglevd (-server A,B,C).
   Each connection runs three background loops: a WatchEvents stream
   subscribed at log_level=debug for live events, a 30s refresh loop as
   a safety net for drift, and a 5s health loop that surfaces connection
   drops quickly.
 - In-process pub/sub broker with a 30s / 2000-event replay ring using
   <epoch>-<seq> monotonic IDs. Short browser reconnects (nginx idle,
   wifi flap, laptop wake) silently replay buffered events via the
   EventSource Last-Event-ID header; longer outages or frontend restarts
   fall through to a "resync" event that triggers a full state refetch.
 - HTTP surface: /view/ (SPA), /view/api/state, /view/api/state/{name},
   /view/api/maglevds, /view/api/version, /view/api/events (SSE),
   /healthz, and an /admin/* placeholder returning 501 for a future
   basic-auth mutation surface.
 - SSE handler follows the full operational checklist: retry hint, 15s
   : ping heartbeat, Flush after every write, r.Context().Done() teardown,
   X-Accel-Buffering: no, and no gzip.

SolidJS SPA (cmd/frontend/web/, Vite + TypeScript)
 - solid-js/store for a reactive per-maglevd state tree; reducers apply
   backend transitions, maglevd-status flips, and resync refetches.
 - Scope selector tabs for multi-maglevd support, per-maglevd frontend
   cards with pool tables showing state, configured weight, effective
   weight, and last-transition age.
 - ProbeHeartbeat component turns a middle-dot into ❤️ on probe-start and
   back on probe-done, driven by real log events; fixed-size wrapper so
   the emoji swap doesn't jiggle the row.
 - Flash wrapper animates any primitive on change (1s yellow fade via
   Web Animations API, skipped on first mount). Wired into the state
   badge, configured weight, and effective weight columns.
 - DebugPanel: chronological rolling event tail with tail-style auto-
   scroll, pause/resume, and scope/firehose filter. Syntax highlighting
   for vpp-lb-sync-* events with fixed-order attribute formatting.
 - Live effective_weight updates: vpp-lb-sync-as-added/removed/weight-
   updated log events are routed through a reducer that walks the
   snapshot's pool rows and sets effective_weight on every match
   without waiting for the 30s refresh.
 - Header shows build version + commit with build date in a tooltip,
   fetched once from /view/api/version on mount.
 - Prettier wired in as the web-side fixstyle; make fixstyle now tidies
   both Go and web in one shot via a new fixstyle-web target.

Per-mutation VPP LB sync logging
 - Promotes the addVIP/delVIP/addAS/delAS/setASWeight helpers from
   slog.Debug to slog.Info and renames them from vpp-lbsync-* to
   vpp-lb-sync-{vip-added,vip-removed,as-added,as-removed,as-weight-
   updated}. Matching rename for vpp-lb-sync-start / -done / -error /
   -vip-recreate. The Prometheus metric name (maglev_vpp_lbsync_total)
   is left alone to preserve dashboards.
 - setASWeight now takes the prior weight so the event can emit
   from=X to=Y and the UI can show the delta.
 - The vip field in every event is the bare address (no /32 or /128
   mask), matching the CLI output style.
 - Any listener on the gRPC WatchEvents stream — CLI watch events or
   maglev-frontend — now sees every VIP/AS dataplane change in real
   time without needing to raise the log level.
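The weight-update event above can be sketched with log/slog. A minimal sketch: bareVIP and logWeightUpdate are hypothetical helper names, and only the event name and the vip/from/to attributes come from this commit:

```go
package main

import (
	"fmt"
	"log/slog"
	"net/netip"
)

// bareVIP renders a VIP prefix as its bare address, dropping the /32 or
// /128 mask to match the CLI output style.
func bareVIP(prefix string) string {
	p, err := netip.ParsePrefix(prefix)
	if err != nil {
		return prefix // already a bare address; pass through
	}
	return p.Addr().String()
}

// Because the caller now passes the prior weight, the event can carry
// the delta as from=/to= attributes.
func logWeightUpdate(vip, as string, from, to uint32) {
	slog.Info("vpp-lb-sync-as-weight-updated",
		"vip", bareVIP(vip), "as", as, "from", from, "to", to)
}

func main() {
	logWeightUpdate("192.0.2.1/32", "10.0.0.5", 100, 10)
	fmt.Println(bareVIP("2001:db8::1/128"))
}
```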

Build and tooling
 - Makefile: maglev-frontend added to BINARIES; build / build-amd64 /
   build-arm64 emit the binary alongside maglevd and maglevc. A new
   maglev-frontend-web target rebuilds the SolidJS bundle via npm.
 - web/dist/ is tracked so a bare `go build` keeps working for Go-only
   contributors and CI.
 - .gitignore skips cmd/frontend/web/node_modules/.

Stability fixes
 - maglevd's WatchEvents synthetic replay events (from==to, at_unix_ns=0)
   were corrupting the frontend's LastTransition cache with at=0,
   rendering as "20555d ago" in the browser. Client now skips synthetic
   events: the cache comes from refreshAll and doesn't need them.
 - Frontends, Backends, and HealthChecks are now served in the order
   returned by the corresponding List* RPC instead of Go map iteration
   order, so reloads and refreshes keep the SPA stable.
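The synthetic-event guard amounts to a single predicate over the fields named above. A sketch, assuming illustrative struct and helper names; the real proto message carries more fields:

```go
package main

import (
	"fmt"
	"time"
)

// WatchEvent carries just the fields relevant to the replay guard.
type WatchEvent struct {
	From, To string
	AtUnixNs int64
}

// isSyntheticReplay reports whether maglevd generated this event purely
// to replay current state on stream attach: no actual transition
// (from == to) and no timestamp (at_unix_ns == 0). Feeding at=0 into an
// age display yields "time since the Unix epoch", i.e. the bogus
// "20555d ago" this fix removes.
func isSyntheticReplay(ev WatchEvent) bool {
	return ev.From == ev.To && ev.AtUnixNs == 0
}

func main() {
	fmt.Println(isSyntheticReplay(WatchEvent{From: "up", To: "up"}))
	// What a timestamp of 0 would have rendered as:
	days := int(time.Since(time.Unix(0, 0)).Hours() / 24)
	fmt.Printf("%dd ago\n", days)
}
```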
commit 284b4cc9a4 (parent fb62532fd5)
Date: 2026-04-12 17:48:12 +02:00
42 changed files with 4366 additions and 35 deletions

cmd/frontend/broker_test.go (new file, 113 lines):
// Copyright (c) 2026, Pim van Pelt <pim@ipng.ch>

package main

import (
	"encoding/json"
	"fmt"
	"testing"
	"time"
)

func mkEvent(i int) BrowserEvent {
	p, _ := json.Marshal(map[string]int{"i": i})
	return BrowserEvent{
		Maglevd:  "lb",
		Type:     "backend",
		AtUnixNs: time.Now().UnixNano(),
		Payload:  p,
	}
}

func TestBrokerSubscribeNoHeaderResync(t *testing.T) {
	b := NewBroker()
	b.Publish(mkEvent(1))
	res := b.Subscribe("")
	defer b.Unsubscribe(res.Channel)
	if !res.NeedResync {
		t.Errorf("expected NeedResync=true when Last-Event-ID is empty")
	}
	if len(res.ReplayEvents) != 0 {
		t.Errorf("expected no replay events, got %d", len(res.ReplayEvents))
	}
}

func TestBrokerReplayMatchingEpoch(t *testing.T) {
	b := NewBroker()
	b.Publish(mkEvent(1))
	b.Publish(mkEvent(2))
	b.Publish(mkEvent(3))
	// Publishes used seqs 0,1,2. Client last saw seq 0, so we expect
	// replay of seqs 1 and 2.
	lastID := fmt.Sprintf("%d-0", b.Epoch())
	res := b.Subscribe(lastID)
	defer b.Unsubscribe(res.Channel)
	if res.NeedResync {
		t.Errorf("expected no resync when seqs are in buffer")
	}
	if len(res.ReplayEvents) != 2 {
		t.Fatalf("expected 2 replay events (seqs 1,2), got %d", len(res.ReplayEvents))
	}
	if res.ReplayEvents[0].ID != fmt.Sprintf("%d-1", b.Epoch()) {
		t.Errorf("replay[0] ID = %q, want epoch-1", res.ReplayEvents[0].ID)
	}
}

func TestBrokerEpochMismatchResyncs(t *testing.T) {
	b := NewBroker()
	b.Publish(mkEvent(1))
	res := b.Subscribe("9999-0")
	defer b.Unsubscribe(res.Channel)
	if !res.NeedResync {
		t.Errorf("expected resync on epoch mismatch")
	}
}

func TestBrokerLiveDelivery(t *testing.T) {
	b := NewBroker()
	res := b.Subscribe("")
	defer b.Unsubscribe(res.Channel)
	b.Publish(mkEvent(42))
	select {
	case ev := <-res.Channel:
		if ev.ID == "" {
			t.Errorf("live event should have an ID")
		}
	case <-time.After(500 * time.Millisecond):
		t.Fatalf("timed out waiting for live event delivery")
	}
}

func TestBrokerExactlyOnceOverSubscribeBoundary(t *testing.T) {
	// Invariant: an event published while a subscriber is mid-subscribe
	// should be delivered exactly once — either via replay or via the
	// live channel. We approximate by publishing, subscribing with the
	// previous event ID, and checking that exactly one delivery happens.
	b := NewBroker()
	b.Publish(mkEvent(1)) // seq 0
	b.Publish(mkEvent(2)) // seq 1
	lastID := fmt.Sprintf("%d-0", b.Epoch())
	res := b.Subscribe(lastID)
	defer b.Unsubscribe(res.Channel)
	// Expect seq 1 in replay, nothing on the live channel yet.
	if len(res.ReplayEvents) != 1 {
		t.Fatalf("expected 1 replay event, got %d", len(res.ReplayEvents))
	}
	select {
	case <-res.Channel:
		t.Fatalf("unexpected live event without any publish")
	case <-time.After(50 * time.Millisecond):
	}
	// New publish should arrive live.
	b.Publish(mkEvent(3))
	select {
	case <-res.Channel:
	case <-time.After(500 * time.Millisecond):
		t.Fatalf("new publish not delivered live")
	}
}