New maglev-frontend component; promote LB sync events to INFO
Introduces maglev-frontend, a responsive, real-time web dashboard for one
or more running maglevd instances. Source lives at cmd/frontend/; the
built binary is maglev-frontend. It is a single Go process with the
SolidJS SPA embedded via //go:embed — no runtime file dependencies.
Architecture
- One persistent gRPC connection per configured maglevd (-server A,B,C).
Each connection runs three background loops: a WatchEvents stream
subscribed at log_level=debug for live events, a 30s refresh loop as
a safety net for drift, and a 5s health loop that surfaces connection
drops quickly.
- In-process pub/sub broker with a 30s / 2000-event replay ring using
<epoch>-<seq> monotonic IDs. Short browser reconnects (nginx idle,
wifi flap, laptop wake) silently replay buffered events via the
EventSource Last-Event-ID header; longer outages or frontend restarts
fall through to a "resync" event that triggers a full state refetch.
- HTTP surface: /view/ (SPA), /view/api/state, /view/api/state/{name},
/view/api/maglevds, /view/api/version, /view/api/events (SSE),
/healthz, and an /admin/* placeholder returning 501 for a future
basic-auth mutation surface.
- SSE handler follows the full operational checklist: retry hint, 15s
: ping heartbeat, Flush after every write, r.Context().Done() teardown,
X-Accel-Buffering: no, and no gzip.
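The SSE checklist above can be sketched as a minimal handler. This is an illustrative sketch, not the actual maglev-frontend handler: the handler name, event IDs, and payloads are made up, and a real handler would loop on a heartbeat ticker and the broker channel instead of writing a fixed sequence.

```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"net/http/httptest"
)

// sseHandler is a hypothetical sketch of the operational checklist:
// retry hint, ": ping" comment heartbeat, Flush after every write,
// r.Context().Done() teardown, X-Accel-Buffering: no, and no gzip.
func sseHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "text/event-stream")
	w.Header().Set("Cache-Control", "no-cache")
	w.Header().Set("X-Accel-Buffering", "no")         // stop nginx buffering the stream
	w.Header().Set("Content-Encoding", "identity")    // never gzip an SSE stream

	fl, ok := w.(http.Flusher)
	if !ok {
		http.Error(w, "streaming unsupported", http.StatusInternalServerError)
		return
	}

	// Tell the browser how long to wait before reconnecting.
	fmt.Fprint(w, "retry: 2000\n\n")
	fl.Flush()

	// One event with an id: line (feeds Last-Event-ID on reconnect),
	// then one heartbeat comment. Flush after every write.
	fmt.Fprint(w, "id: 1-0\nevent: log\ndata: {}\n\n")
	fl.Flush()
	fmt.Fprint(w, ": ping\n\n")
	fl.Flush()

	<-r.Context().Done() // teardown when the client disconnects
}

// renderSSE drives the handler with a pre-cancelled request context so the
// sketch returns immediately, and captures body plus headers.
func renderSSE() (string, http.Header) {
	ctx, cancel := context.WithCancel(context.Background())
	cancel()
	req := httptest.NewRequest(http.MethodGet, "/view/api/events", nil).WithContext(ctx)
	rec := httptest.NewRecorder()
	sseHandler(rec, req)
	return rec.Body.String(), rec.Header()
}

func main() {
	body, _ := renderSSE()
	fmt.Print(body)
}
```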
SolidJS SPA (cmd/frontend/web/, Vite + TypeScript)
- solid-js/store for a reactive per-maglevd state tree; reducers apply
backend transitions, maglevd-status flips, and resync refetches.
- Scope selector tabs for multi-maglevd support, per-maglevd frontend
cards with pool tables showing state, configured weight, effective
weight, and last-transition age.
- ProbeHeartbeat component turns a middle-dot into ❤️ on probe-start and
back on probe-done, driven by real log events; fixed-size wrapper so
the emoji swap doesn't jiggle the row.
- Flash wrapper animates any primitive on change (1s yellow fade via
Web Animations API, skipped on first mount). Wired into the state
badge, configured weight, and effective weight columns.
- DebugPanel: chronological rolling event tail with tail-style auto-
scroll, pause/resume, and scope/firehose filter. Syntax highlighting
for vpp-lb-sync-* events with fixed-order attribute formatting.
- Live effective_weight updates: vpp-lb-sync-as-added/removed/weight-
updated log events are routed through a reducer that walks the
snapshot's pool rows and sets effective_weight on every match
without waiting for the 30s refresh.
- Header shows build version + commit with build date in a tooltip,
fetched once from /view/api/version on mount.
- Prettier wired in as the web-side fixstyle; make fixstyle now tidies
both Go and web in one shot via a new fixstyle-web target.
Per-mutation VPP LB sync logging
- Promotes the addVIP/delVIP/addAS/delAS/setASWeight helpers from
slog.Debug to slog.Info and renames them from vpp-lbsync-* to
vpp-lb-sync-{vip-added,vip-removed,as-added,as-removed,as-weight-
updated}. Matching rename for vpp-lb-sync-start / -done / -error /
-vip-recreate. The Prometheus metric name (maglev_vpp_lbsync_total)
is left alone to preserve dashboards.
- setASWeight now takes the prior weight so the event can emit
from=X to=Y and the UI can show the delta.
- The vip field in every event is the bare address (no /32 or /128
mask), matching the CLI output style.
- Any listener on the gRPC WatchEvents stream — CLI watch events or
maglev-frontend — now sees every VIP/AS dataplane change in real
time without needing to raise the log level.
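The from=X to=Y shape and the bare-address vip field described above can be sketched with log/slog. The helper names here are illustrative, not the actual maglevd code: the real helpers live in the VPP LB sync path and take its internal types.

```go
package main

import (
	"log/slog"
	"os"
	"strings"
)

// bareVIP strips a /32 or /128 mask so the event's vip field matches the
// CLI output style ("192.0.2.10/32" -> "192.0.2.10").
func bareVIP(vip string) string {
	return strings.SplitN(vip, "/", 2)[0]
}

// logASWeightUpdated is a hypothetical sketch of the promoted helper:
// setASWeight now receives the prior weight, so the INFO-level
// vpp-lb-sync-as-weight-updated event can emit from= and to= and the UI
// can show the delta.
func logASWeightUpdated(log *slog.Logger, vip, as string, from, to uint32) {
	log.Info("vpp-lb-sync-as-weight-updated",
		"vip", bareVIP(vip), "as", as, "from", from, "to", to)
}

func main() {
	log := slog.New(slog.NewTextHandler(os.Stdout, nil))
	logASWeightUpdated(log, "192.0.2.10/32", "10.0.0.1", 100, 50)
}
```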
Build and tooling
- Makefile: maglev-frontend added to BINARIES; build / build-amd64 /
build-arm64 emit the binary alongside maglevd and maglevc. A new
maglev-frontend-web target rebuilds the SolidJS bundle via npm.
- web/dist/ is tracked so a bare `go build` keeps working for Go-only
contributors and CI.
- .gitignore skips cmd/frontend/web/node_modules/.
Stability fixes
- maglevd's WatchEvents synthetic replay events (from==to, at_unix_ns=0)
were corrupting the frontend's LastTransition cache with at=0,
rendering as "20555d ago" in the browser. Client now skips synthetic
events: the cache comes from refreshAll and doesn't need them.
- Frontends, Backends, and HealthChecks are now served in the order
returned by the corresponding List* RPC instead of Go map iteration
order, so reloads and refreshes keep the SPA stable.
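The synthetic-event skip in the first fix reduces to one predicate. A minimal sketch, with an illustrative struct standing in for the gRPC transition event:

```go
package main

import "fmt"

// transitionEvent mirrors (with illustrative field names) the shape of a
// WatchEvents transition as seen by the frontend client.
type transitionEvent struct {
	From, To string
	AtUnixNs int64
}

// isSyntheticReplay reports whether an event is one of maglevd's synthetic
// state-replay events (from == to with a zero timestamp). The client skips
// these when updating its LastTransition cache: at=0 would otherwise render
// as an absurd age like "20555d ago", and refreshAll already seeds the
// cache with real transition times.
func isSyntheticReplay(ev transitionEvent) bool {
	return ev.From == ev.To && ev.AtUnixNs == 0
}

func main() {
	live := transitionEvent{From: "down", To: "up", AtUnixNs: 1700000000000000000}
	synth := transitionEvent{From: "up", To: "up", AtUnixNs: 0}
	fmt.Println(isSyntheticReplay(live), isSyntheticReplay(synth)) // false true
}
```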
diff --git a/.gitignore b/.gitignore
--- a/.gitignore
+++ b/.gitignore
@@ -5,3 +5,4 @@ tests/out/
 tests/.venv/
 tests/**/maglevd.log
 tests/**/clab-*/
+cmd/frontend/web/node_modules/
diff --git a/Makefile b/Makefile
--- a/Makefile
+++ b/Makefile
@@ -1,9 +1,19 @@
-BINARIES := maglevd maglevc
+BINARIES := maglevd maglevc maglev-frontend
 MODULE := git.ipng.ch/ipng/vpp-maglev
 PROTO_DIR := proto
 PROTO_FILE := $(PROTO_DIR)/maglev.proto
 GEN_FILES := internal/grpcapi/maglev.pb.go internal/grpcapi/maglev_grpc.pb.go
 
+# Web bundle is built by Vite and embedded by the Go binary via //go:embed.
+# Any change under cmd/frontend/web/src/ retriggers an npm build; the
+# generated cmd/frontend/web/dist/index.html is the sentinel.
+FRONTEND_WEB_SRC := $(shell find cmd/frontend/web/src -type f 2>/dev/null) \
+	cmd/frontend/web/index.html \
+	cmd/frontend/web/package.json \
+	cmd/frontend/web/vite.config.ts \
+	cmd/frontend/web/tsconfig.json
+FRONTEND_WEB_DIST := cmd/frontend/web/dist/index.html
+
 NATIVE_ARCH := $(shell go env GOARCH)
 VERSION := 0.1.1
 COMMIT_HASH := $(shell git rev-parse --short HEAD 2>/dev/null || echo unknown)
@@ -16,24 +26,35 @@ TEST ?= tests/
 VPP_API_DIR ?= $(HOME)/src/vpp/build-root/install-vpp_debug-native/vpp/share/vpp/api
 
-.PHONY: all build build-amd64 build-arm64 test proto vpp-binapi lint fixstyle pkg-deb robot-test clean
+.PHONY: all build build-amd64 build-arm64 test proto vpp-binapi lint fixstyle fixstyle-web pkg-deb robot-test clean maglev-frontend-web
 
 all: build
 
-build: $(GEN_FILES)
+build: $(GEN_FILES) $(FRONTEND_WEB_DIST)
 	mkdir -p build/$(NATIVE_ARCH)
 	go build -ldflags "$(LDFLAGS)" -o build/$(NATIVE_ARCH)/maglevd ./cmd/maglevd/
 	go build -ldflags "$(LDFLAGS)" -o build/$(NATIVE_ARCH)/maglevc ./cmd/maglevc/
+	go build -ldflags "$(LDFLAGS)" -o build/$(NATIVE_ARCH)/maglev-frontend ./cmd/frontend/
 
-build-amd64: $(GEN_FILES)
+build-amd64: $(GEN_FILES) $(FRONTEND_WEB_DIST)
 	mkdir -p build/amd64
 	GOOS=linux GOARCH=amd64 go build -ldflags "$(LDFLAGS)" -o build/amd64/maglevd ./cmd/maglevd/
 	GOOS=linux GOARCH=amd64 go build -ldflags "$(LDFLAGS)" -o build/amd64/maglevc ./cmd/maglevc/
+	GOOS=linux GOARCH=amd64 go build -ldflags "$(LDFLAGS)" -o build/amd64/maglev-frontend ./cmd/frontend/
 
-build-arm64: $(GEN_FILES)
+build-arm64: $(GEN_FILES) $(FRONTEND_WEB_DIST)
 	mkdir -p build/arm64
 	GOOS=linux GOARCH=arm64 go build -ldflags "$(LDFLAGS)" -o build/arm64/maglevd ./cmd/maglevd/
 	GOOS=linux GOARCH=arm64 go build -ldflags "$(LDFLAGS)" -o build/arm64/maglevc ./cmd/maglevc/
+	GOOS=linux GOARCH=arm64 go build -ldflags "$(LDFLAGS)" -o build/arm64/maglev-frontend ./cmd/frontend/
+
+# maglev-frontend-web rebuilds the SolidJS bundle. The Go binary embeds the
+# resulting cmd/frontend/web/dist/ via //go:embed, so a `go build` after
+# this target picks up any asset changes automatically.
+maglev-frontend-web: $(FRONTEND_WEB_DIST)
+
+$(FRONTEND_WEB_DIST): $(FRONTEND_WEB_SRC)
+	cd cmd/frontend/web && npm install && npm run build
 
 pkg-deb: build-amd64 build-arm64
 	debian/build-deb.sh amd64 $(VERSION) $(COMMIT_HASH)
@@ -71,9 +92,12 @@ vpp-binapi:
 	lb lb_types
 	rm -f internal/vpp/binapi/lb/lb_rpc.ba.go
 
-fixstyle:
+fixstyle: fixstyle-web
 	gofmt -w .
 
+fixstyle-web:
+	cd cmd/frontend/web && npx prettier --write .
+
 lint:
 	golangci-lint run ./...
cmd/frontend/assets.go (new file, 8 lines):

// Copyright (c) 2026, Pim van Pelt <pim@ipng.ch>

package main

import "embed"

//go:embed web/dist
var webFS embed.FS
cmd/frontend/broker.go (new file, 197 lines):

// Copyright (c) 2026, Pim van Pelt <pim@ipng.ch>

package main

import (
	"fmt"
	"strconv"
	"strings"
	"sync"
	"time"
)

const (
	bufferMaxAge   = 30 * time.Second
	bufferMaxCount = 2000
	subscriberBuf  = 256
)

// Broker is a pub/sub fan-out with a bounded replay ring. The watchLoop
// goroutines publish; SSE handlers subscribe. The ring lets reconnecting
// browsers replay events they missed during short disconnects (the common
// nginx-idle-reconnect / wifi-flap / laptop-wake case) without triggering a
// full state refetch.
type Broker struct {
	mu      sync.Mutex
	epoch   int64  // ns at broker start; folded into every event ID
	nextSeq uint64 // monotonic within this epoch
	buffer  []bufferedEvent
	subs    map[chan deliveredEvent]struct{}
}

type bufferedEvent struct {
	ID       string
	AtUnixNs int64
	Event    BrowserEvent
}

// deliveredEvent is what subscribers receive: the event plus its ring ID
// so the SSE handler can emit `id:` lines without re-deriving it.
type deliveredEvent struct {
	ID    string
	Event BrowserEvent
}

// NewBroker creates a Broker with a fresh epoch.
func NewBroker() *Broker {
	return &Broker{
		epoch: time.Now().UnixNano(),
		subs:  map[chan deliveredEvent]struct{}{},
	}
}

// Epoch returns the broker's epoch (ns at startup). Exposed for testing.
func (b *Broker) Epoch() int64 { return b.epoch }

// Publish assigns the event an ID, evicts aged/overflow entries from the
// ring, appends it, and fans out to every live subscriber.
func (b *Broker) Publish(ev BrowserEvent) {
	b.mu.Lock()
	defer b.mu.Unlock()

	seq := b.nextSeq
	b.nextSeq++
	id := fmt.Sprintf("%d-%d", b.epoch, seq)
	at := ev.AtUnixNs
	if at == 0 {
		at = time.Now().UnixNano()
	}
	entry := bufferedEvent{ID: id, AtUnixNs: at, Event: ev}

	cutoff := time.Now().UnixNano() - int64(bufferMaxAge)
	i := 0
	for i < len(b.buffer) && b.buffer[i].AtUnixNs < cutoff {
		i++
	}
	if i > 0 {
		b.buffer = b.buffer[i:]
	}
	if len(b.buffer) >= bufferMaxCount {
		b.buffer = b.buffer[len(b.buffer)-bufferMaxCount+1:]
	}
	b.buffer = append(b.buffer, entry)

	delivered := deliveredEvent{ID: id, Event: ev}
	for ch := range b.subs {
		select {
		case ch <- delivered:
		default:
			// Drop the oldest queued event for this subscriber to make
			// room. If we're still wedged, give up for this publish —
			// the next replay on reconnect will heal the gap.
			select {
			case <-ch:
			default:
			}
			select {
			case ch <- delivered:
			default:
			}
		}
	}
}

// subscribeResult is what Subscribe returns to the SSE handler.
type subscribeResult struct {
	// ReplayEvents contains any buffered events newer than the client's
	// Last-Event-ID, ordered oldest-first. The handler must emit these
	// before streaming from Channel to preserve ordering.
	ReplayEvents []deliveredEvent
	// NeedResync is true when the handler should emit a "resync" event
	// telling the client to re-fetch state (no header, epoch mismatch, or
	// seq fell off the ring).
	NeedResync bool
	// Channel delivers subsequent live events.
	Channel chan deliveredEvent
}

// Subscribe registers a new subscriber and returns any replay events plus a
// live delivery channel. The caller must call Unsubscribe when done.
//
// Holding b.mu across "collect replay + register channel" is the critical
// invariant: it ensures every event is delivered exactly once, either via
// replay or via the live channel. Dropping the lock between those two steps
// would open a race where a publish is lost.
func (b *Broker) Subscribe(lastEventID string) subscribeResult {
	b.mu.Lock()
	defer b.mu.Unlock()

	ch := make(chan deliveredEvent, subscriberBuf)
	b.subs[ch] = struct{}{}

	replay, needResync := b.collectReplay(lastEventID)
	return subscribeResult{
		ReplayEvents: replay,
		NeedResync:   needResync,
		Channel:      ch,
	}
}

// collectReplay returns any events newer than lastEventID, or signals that a
// resync is required. Must be called with b.mu held.
func (b *Broker) collectReplay(lastEventID string) ([]deliveredEvent, bool) {
	if lastEventID == "" {
		return nil, true
	}
	epochStr, seqStr, ok := strings.Cut(lastEventID, "-")
	if !ok {
		return nil, true
	}
	epoch, err := strconv.ParseInt(epochStr, 10, 64)
	if err != nil || epoch != b.epoch {
		return nil, true
	}
	lastSeq, err := strconv.ParseUint(seqStr, 10, 64)
	if err != nil {
		return nil, true
	}
	if len(b.buffer) == 0 {
		// No events since last seen. Safe to continue without resync if
		// lastSeq is at or behind nextSeq; otherwise something is wrong
		// and we resync.
		if lastSeq+1 >= b.nextSeq {
			return nil, false
		}
		return nil, true
	}
	// Parse the oldest buffered seq to decide whether lastSeq is still
	// within range.
	_, oldestSeqStr, _ := strings.Cut(b.buffer[0].ID, "-")
	oldestSeq, err := strconv.ParseUint(oldestSeqStr, 10, 64)
	if err != nil {
		return nil, true
	}
	if lastSeq+1 < oldestSeq {
		// Gap: client missed events that have since been evicted.
		return nil, true
	}
	out := make([]deliveredEvent, 0, len(b.buffer))
	for _, e := range b.buffer {
		_, seqStr, _ := strings.Cut(e.ID, "-")
		seq, _ := strconv.ParseUint(seqStr, 10, 64)
		if seq > lastSeq {
			out = append(out, deliveredEvent{ID: e.ID, Event: e.Event})
		}
	}
	return out, false
}

// Unsubscribe removes a subscriber and closes its channel.
func (b *Broker) Unsubscribe(ch chan deliveredEvent) {
	b.mu.Lock()
	defer b.mu.Unlock()
	if _, ok := b.subs[ch]; ok {
		delete(b.subs, ch)
		close(ch)
	}
}
cmd/frontend/broker_test.go (new file, 113 lines):

// Copyright (c) 2026, Pim van Pelt <pim@ipng.ch>

package main

import (
	"encoding/json"
	"fmt"
	"testing"
	"time"
)

func mkEvent(i int) BrowserEvent {
	p, _ := json.Marshal(map[string]int{"i": i})
	return BrowserEvent{
		Maglevd:  "lb",
		Type:     "backend",
		AtUnixNs: time.Now().UnixNano(),
		Payload:  p,
	}
}

func TestBrokerSubscribeNoHeaderResync(t *testing.T) {
	b := NewBroker()
	b.Publish(mkEvent(1))

	res := b.Subscribe("")
	defer b.Unsubscribe(res.Channel)
	if !res.NeedResync {
		t.Errorf("expected NeedResync=true when Last-Event-ID is empty")
	}
	if len(res.ReplayEvents) != 0 {
		t.Errorf("expected no replay events, got %d", len(res.ReplayEvents))
	}
}

func TestBrokerReplayMatchingEpoch(t *testing.T) {
	b := NewBroker()
	b.Publish(mkEvent(1))
	b.Publish(mkEvent(2))
	b.Publish(mkEvent(3))

	// Publishes used seqs 0,1,2. Client last saw seq 0, so we expect
	// replay of seqs 1 and 2.
	lastID := fmt.Sprintf("%d-0", b.Epoch())
	res := b.Subscribe(lastID)
	defer b.Unsubscribe(res.Channel)
	if res.NeedResync {
		t.Errorf("expected no resync when seqs are in buffer")
	}
	if len(res.ReplayEvents) != 2 {
		t.Fatalf("expected 2 replay events (seqs 1,2), got %d", len(res.ReplayEvents))
	}
	if res.ReplayEvents[0].ID != fmt.Sprintf("%d-1", b.Epoch()) {
		t.Errorf("replay[0] ID = %q, want epoch-1", res.ReplayEvents[0].ID)
	}
}

func TestBrokerEpochMismatchResyncs(t *testing.T) {
	b := NewBroker()
	b.Publish(mkEvent(1))

	res := b.Subscribe("9999-0")
	defer b.Unsubscribe(res.Channel)
	if !res.NeedResync {
		t.Errorf("expected resync on epoch mismatch")
	}
}

func TestBrokerLiveDelivery(t *testing.T) {
	b := NewBroker()
	res := b.Subscribe("")
	defer b.Unsubscribe(res.Channel)
	b.Publish(mkEvent(42))
	select {
	case ev := <-res.Channel:
		if ev.ID == "" {
			t.Errorf("live event should have an ID")
		}
	case <-time.After(500 * time.Millisecond):
		t.Fatalf("timed out waiting for live event delivery")
	}
}

func TestBrokerExactlyOnceOverSubscribeBoundary(t *testing.T) {
	// Invariant: an event published while a subscriber is mid-subscribe
	// should be delivered exactly once — either via replay or via the
	// live channel. We approximate by publishing, subscribing with the
	// previous event ID, and checking that exactly one delivery happens.
	b := NewBroker()
	b.Publish(mkEvent(1)) // seq 0
	b.Publish(mkEvent(2)) // seq 1
	lastID := fmt.Sprintf("%d-0", b.Epoch())
	res := b.Subscribe(lastID)
	defer b.Unsubscribe(res.Channel)

	// Expect seq 1 in replay, nothing on the live channel yet.
	if len(res.ReplayEvents) != 1 {
		t.Fatalf("expected 1 replay event, got %d", len(res.ReplayEvents))
	}
	select {
	case <-res.Channel:
		t.Fatalf("unexpected live event without any publish")
	case <-time.After(50 * time.Millisecond):
	}

	// New publish should arrive live.
	b.Publish(mkEvent(3))
	select {
	case <-res.Channel:
	case <-time.After(500 * time.Millisecond):
		t.Fatalf("new publish not delivered live")
	}
}
cmd/frontend/client.go (new file, 500 lines):

// Copyright (c) 2026, Pim van Pelt <pim@ipng.ch>

package main

import (
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"log/slog"
	"net"
	"strings"
	"sync"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	"git.ipng.ch/ipng/vpp-maglev/internal/grpcapi"
)

// maglevClient is a per-maglevd gRPC client plus cache and background loops.
type maglevClient struct {
	name    string
	address string
	conn    *grpc.ClientConn
	api     grpcapi.MaglevClient
	broker  *Broker

	mu        sync.RWMutex
	connected bool
	lastErr   string
	cache     cachedState
}

// cachedState is the per-maglevd snapshot served via the REST handlers.
// Frontends / Backends / HealthChecks are maps for O(1) lookup from the
// event path, and the *Order slices preserve the order returned by the
// corresponding List* RPC so the UI renders in a stable order across
// reloads instead of Go map iteration's randomised order.
type cachedState struct {
	Frontends        map[string]*FrontendSnapshot
	FrontendsOrder   []string
	Backends         map[string]*BackendSnapshot
	BackendsOrder    []string
	HealthChecks     map[string]*HealthCheckSnapshot
	HealthCheckOrder []string
	VPPInfo          *VPPInfoSnapshot
	LastRefresh      time.Time
}

func newMaglevClient(address string, broker *Broker) (*maglevClient, error) {
	conn, err := grpc.NewClient(address,
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		return nil, err
	}
	return &maglevClient{
		name:    hostnameOf(address),
		address: address,
		conn:    conn,
		api:     grpcapi.NewMaglevClient(conn),
		broker:  broker,
		cache: cachedState{
			Frontends:    map[string]*FrontendSnapshot{},
			Backends:     map[string]*BackendSnapshot{},
			HealthChecks: map[string]*HealthCheckSnapshot{},
		},
	}, nil
}

// hostnameOf strips the port from an address and returns a short display
// name. For DNS names we take the first label ("lb-ams.internal:9090" →
// "lb-ams"). For IP literals we return the full address so we don't
// accidentally truncate "127.0.0.1" to "127".
func hostnameOf(address string) string {
	host := address
	if h, _, err := net.SplitHostPort(address); err == nil {
		host = h
	}
	host = strings.TrimPrefix(strings.TrimSuffix(host, "]"), "[")
	if net.ParseIP(host) != nil {
		return host
	}
	if i := strings.Index(host, "."); i >= 0 {
		return host[:i]
	}
	return host
}

func (c *maglevClient) Close() {
	_ = c.conn.Close()
}

func (c *maglevClient) Start(ctx context.Context) {
	go c.watchLoop(ctx)
	go c.refreshLoop(ctx)
	go c.healthLoop(ctx)
}

func (c *maglevClient) setConnected(ok bool, errMsg string) {
	c.mu.Lock()
	prev := c.connected
	c.connected = ok
	c.lastErr = errMsg
	c.mu.Unlock()
	if prev != ok {
		payload, _ := json.Marshal(MaglevdStatusPayload{Connected: ok, LastError: errMsg})
		c.broker.Publish(BrowserEvent{
			Maglevd:  c.name,
			Type:     "maglevd-status",
			AtUnixNs: time.Now().UnixNano(),
			Payload:  payload,
		})
	}
}

// Info returns the current connection status for this maglevd.
func (c *maglevClient) Info() MaglevdInfo {
	c.mu.RLock()
	defer c.mu.RUnlock()
	return MaglevdInfo{
		Name:      c.name,
		Address:   c.address,
		Connected: c.connected,
		LastError: c.lastErr,
	}
}

// Snapshot returns a deep-ish copy of the cached state for REST handlers.
// Iteration order follows the corresponding *Order slice so the UI sees a
// stable, RPC-defined order across reloads.
func (c *maglevClient) Snapshot() *StateSnapshot {
	c.mu.RLock()
	defer c.mu.RUnlock()
	snap := &StateSnapshot{
		Maglevd: MaglevdInfo{
			Name:      c.name,
			Address:   c.address,
			Connected: c.connected,
			LastError: c.lastErr,
		},
		Frontends:    make([]*FrontendSnapshot, 0, len(c.cache.FrontendsOrder)),
		Backends:     make([]*BackendSnapshot, 0, len(c.cache.BackendsOrder)),
		HealthChecks: make([]*HealthCheckSnapshot, 0, len(c.cache.HealthCheckOrder)),
		VPPInfo:      c.cache.VPPInfo,
	}
	for _, name := range c.cache.FrontendsOrder {
		if f, ok := c.cache.Frontends[name]; ok {
			snap.Frontends = append(snap.Frontends, f)
		}
	}
	for _, name := range c.cache.BackendsOrder {
		if b, ok := c.cache.Backends[name]; ok {
			snap.Backends = append(snap.Backends, b)
		}
	}
	for _, name := range c.cache.HealthCheckOrder {
		if h, ok := c.cache.HealthChecks[name]; ok {
			snap.HealthChecks = append(snap.HealthChecks, h)
		}
	}
	return snap
}

// refreshAll pulls a full fresh view of the maglevd's state into the cache.
// Called from the refreshLoop every 30s and immediately after a successful
// reconnect.
func (c *maglevClient) refreshAll(ctx context.Context) error {
	rctx, cancel := context.WithTimeout(ctx, 10*time.Second)
	defer cancel()

	frontends := map[string]*FrontendSnapshot{}
	fl, err := c.api.ListFrontends(rctx, &grpcapi.ListFrontendsRequest{})
	if err != nil {
		return fmt.Errorf("list frontends: %w", err)
	}
	frontendsOrder := append([]string(nil), fl.GetFrontendNames()...)
	for _, name := range frontendsOrder {
		fi, err := c.api.GetFrontend(rctx, &grpcapi.GetFrontendRequest{Name: name})
		if err != nil {
			return fmt.Errorf("get frontend %s: %w", name, err)
		}
		frontends[name] = frontendFromProto(fi)
	}

	backends := map[string]*BackendSnapshot{}
	bl, err := c.api.ListBackends(rctx, &grpcapi.ListBackendsRequest{})
	if err != nil {
		return fmt.Errorf("list backends: %w", err)
	}
	backendsOrder := append([]string(nil), bl.GetBackendNames()...)
	for _, name := range backendsOrder {
		bi, err := c.api.GetBackend(rctx, &grpcapi.GetBackendRequest{Name: name})
		if err != nil {
			return fmt.Errorf("get backend %s: %w", name, err)
		}
		backends[name] = backendFromProto(bi)
	}

	healthchecks := map[string]*HealthCheckSnapshot{}
	hl, err := c.api.ListHealthChecks(rctx, &grpcapi.ListHealthChecksRequest{})
	if err != nil {
		return fmt.Errorf("list healthchecks: %w", err)
	}
	healthCheckOrder := append([]string(nil), hl.GetNames()...)
	for _, name := range healthCheckOrder {
		hi, err := c.api.GetHealthCheck(rctx, &grpcapi.GetHealthCheckRequest{Name: name})
		if err != nil {
			return fmt.Errorf("get healthcheck %s: %w", name, err)
		}
		healthchecks[name] = healthCheckFromProto(hi)
	}

	var vppInfo *VPPInfoSnapshot
	if vi, err := c.api.GetVPPInfo(rctx, &grpcapi.GetVPPInfoRequest{}); err == nil {
		vppInfo = &VPPInfoSnapshot{
			Version:       vi.GetVersion(),
			BuildDate:     vi.GetBuildDate(),
			PID:           vi.GetPid(),
			BoottimeNs:    vi.GetBoottimeNs(),
			ConnecttimeNs: vi.GetConnecttimeNs(),
		}
	}

	c.mu.Lock()
	c.cache.Frontends = frontends
	c.cache.FrontendsOrder = frontendsOrder
	c.cache.Backends = backends
	c.cache.BackendsOrder = backendsOrder
	c.cache.HealthChecks = healthchecks
	c.cache.HealthCheckOrder = healthCheckOrder
	c.cache.VPPInfo = vppInfo
	c.cache.LastRefresh = time.Now()
	c.mu.Unlock()
	return nil
}

// watchLoop subscribes to WatchEvents and feeds the broker until the context
// is cancelled. Reconnects with exponential backoff on stream errors.
func (c *maglevClient) watchLoop(ctx context.Context) {
	backoff := time.Second
	maxBackoff := 30 * time.Second
	for {
		if ctx.Err() != nil {
			return
		}
		if err := c.watchOnce(ctx); err != nil {
			if ctx.Err() != nil {
				return
			}
			slog.Warn("watch-disconnected", "maglevd", c.name, "err", err)
			c.setConnected(false, err.Error())
			select {
			case <-ctx.Done():
				return
			case <-time.After(backoff):
			}
			backoff *= 2
			if backoff > maxBackoff {
				backoff = maxBackoff
			}
			continue
		}
		backoff = time.Second
	}
}

func (c *maglevClient) watchOnce(ctx context.Context) error {
	logFlag := true
	backendFlag := true
	frontendFlag := true
	req := &grpcapi.WatchRequest{
		Log:      &logFlag,
		LogLevel: "debug",
		Backend:  &backendFlag,
		Frontend: &frontendFlag,
	}
	stream, err := c.api.WatchEvents(ctx, req)
	if err != nil {
		return fmt.Errorf("open stream: %w", err)
	}
	// Successful subscribe: mark connected and pull a fresh snapshot so
	// the REST cache is immediately ground-truth accurate. WatchEvents
	// itself replays current state as synthetic from==to events, which
	// will also update the cache as they arrive.
	c.setConnected(true, "")
	if err := c.refreshAll(ctx); err != nil {
		slog.Warn("refresh-after-watch", "maglevd", c.name, "err", err)
	}
	for {
		ev, err := stream.Recv()
		if err != nil {
			if errors.Is(err, io.EOF) || ctx.Err() != nil {
				return nil
			}
			return err
		}
		c.handleEvent(ev)
	}
}

// handleEvent applies an incoming gRPC event to the local cache and
// publishes a corresponding BrowserEvent on the broker.
func (c *maglevClient) handleEvent(ev *grpcapi.Event) {
	switch body := ev.GetEvent().(type) {
	case *grpcapi.Event_Log:
		le := body.Log
		if le == nil {
			return
		}
		attrs := make(map[string]string, len(le.GetAttrs()))
		for _, a := range le.GetAttrs() {
			attrs[a.GetKey()] = a.GetValue()
		}
		payload, _ := json.Marshal(LogEventPayload{
			Level: le.GetLevel(),
			Msg:   le.GetMsg(),
			Attrs: attrs,
		})
		c.broker.Publish(BrowserEvent{
			Maglevd:  c.name,
			Type:     "log",
			AtUnixNs: le.GetAtUnixNs(),
			Payload:  payload,
		})

	case *grpcapi.Event_Backend:
		be := body.Backend
		if be == nil || be.GetTransition() == nil {
		// ... (remainder of client.go truncated in source)
|
return
|
||||||
|
}
|
||||||
|
tr := transitionFromProto(be.GetTransition())
|
||||||
|
// maglevd replays current state on WatchEvents subscribe as a
|
||||||
|
// synthetic event with from==to and at_unix_ns=0 (see
|
||||||
|
// internal/grpcapi/server.go). It is not a real transition — the
|
||||||
|
// in-process cache is already correct from refreshAll, so don't
|
||||||
|
// touch LastTransition (which would clobber it with at=0 and
|
||||||
|
// render as "55 years ago" in the browser) and don't forward to
|
||||||
|
// the broker.
|
||||||
|
if tr.From == tr.To {
|
||||||
|
return
|
||||||
|
}
|
||||||
|
c.applyBackendTransition(be.GetBackendName(), tr)
|
||||||
|
payload, _ := json.Marshal(BackendEventPayload{
|
||||||
|
Backend: be.GetBackendName(),
|
||||||
|
Transition: *tr,
|
||||||
|
})
|
||||||
|
c.broker.Publish(BrowserEvent{
|
||||||
|
Maglevd: c.name,
|
||||||
|
Type: "backend",
|
||||||
|
AtUnixNs: tr.AtUnixNs,
|
||||||
|
Payload: payload,
|
||||||
|
})
|
||||||
|
|
||||||
|
case *grpcapi.Event_Frontend:
|
||||||
|
fe := body.Frontend
|
||||||
|
if fe == nil || fe.GetTransition() == nil {
|
||||||
|
return
|
||||||
|
}
|
||||||
|
tr := transitionFromProto(fe.GetTransition())
|
||||||
|
if tr.From == tr.To {
|
||||||
|
return
|
||||||
|
}
|
||||||
|
payload, _ := json.Marshal(FrontendEventPayload{
|
||||||
|
Frontend: fe.GetFrontendName(),
|
||||||
|
Transition: *tr,
|
||||||
|
})
|
||||||
|
c.broker.Publish(BrowserEvent{
|
||||||
|
Maglevd: c.name,
|
||||||
|
Type: "frontend",
|
||||||
|
AtUnixNs: tr.AtUnixNs,
|
||||||
|
Payload: payload,
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func (c *maglevClient) applyBackendTransition(name string, tr *TransitionRecord) {
|
||||||
|
c.mu.Lock()
|
||||||
|
defer c.mu.Unlock()
|
||||||
|
b, ok := c.cache.Backends[name]
|
||||||
|
if !ok {
|
||||||
|
b = &BackendSnapshot{Name: name}
|
||||||
|
c.cache.Backends[name] = b
|
||||||
|
c.cache.BackendsOrder = append(c.cache.BackendsOrder, name)
|
||||||
|
}
|
||||||
|
b.State = tr.To
|
||||||
|
b.LastTransition = tr
|
||||||
|
b.Transitions = append(b.Transitions, tr)
|
||||||
|
// Cap history to the most recent 20 entries to mirror what maglevd
|
||||||
|
// returns from GetBackend.
|
||||||
|
if len(b.Transitions) > 20 {
|
||||||
|
b.Transitions = b.Transitions[len(b.Transitions)-20:]
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// refreshLoop pulls a fresh snapshot every 30s to catch anything the live
|
||||||
|
// event stream may have missed (e.g. during a brief gRPC reconnect).
|
||||||
|
func (c *maglevClient) refreshLoop(ctx context.Context) {
|
||||||
|
t := time.NewTicker(30 * time.Second)
|
||||||
|
defer t.Stop()
|
||||||
|
for {
|
||||||
|
select {
|
||||||
|
case <-ctx.Done():
|
||||||
|
return
|
||||||
|
case <-t.C:
|
||||||
|
if err := c.refreshAll(ctx); err != nil {
|
||||||
|
slog.Debug("refresh-all", "maglevd", c.name, "err", err)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// healthLoop issues a cheap GetVPPInfo every 5s to surface connection drops
|
||||||
|
// quickly. Errors flip the connection indicator; recoveries trigger a
|
||||||
|
// refreshAll so the cache catches up.
|
||||||
|
func (c *maglevClient) healthLoop(ctx context.Context) {
|
||||||
|
t := time.NewTicker(5 * time.Second)
|
||||||
|
defer t.Stop()
|
||||||
|
for {
|
||||||
|
select {
|
||||||
|
case <-ctx.Done():
|
||||||
|
return
|
||||||
|
case <-t.C:
|
||||||
|
hctx, cancel := context.WithTimeout(ctx, 2*time.Second)
|
||||||
|
_, err := c.api.GetVPPInfo(hctx, &grpcapi.GetVPPInfoRequest{})
|
||||||
|
cancel()
|
||||||
|
if err != nil {
|
||||||
|
c.setConnected(false, err.Error())
|
||||||
|
} else {
|
||||||
|
c.setConnected(true, "")
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// ---- proto → JSON helpers --------------------------------------------------
|
||||||
|
|
||||||
|
func frontendFromProto(fi *grpcapi.FrontendInfo) *FrontendSnapshot {
|
||||||
|
out := &FrontendSnapshot{
|
||||||
|
Name: fi.GetName(),
|
||||||
|
Address: fi.GetAddress(),
|
||||||
|
Protocol: fi.GetProtocol(),
|
||||||
|
Port: fi.GetPort(),
|
||||||
|
Description: fi.GetDescription(),
|
||||||
|
SrcIPSticky: fi.GetSrcIpSticky(),
|
||||||
|
}
|
||||||
|
for _, p := range fi.GetPools() {
|
||||||
|
ps := &PoolSnapshot{Name: p.GetName()}
|
||||||
|
for _, pb := range p.GetBackends() {
|
||||||
|
ps.Backends = append(ps.Backends, &PoolBackendSnapshot{
|
||||||
|
Name: pb.GetName(),
|
||||||
|
Weight: pb.GetWeight(),
|
||||||
|
EffectiveWeight: pb.GetEffectiveWeight(),
|
||||||
|
})
|
||||||
|
}
|
||||||
|
out.Pools = append(out.Pools, ps)
|
||||||
|
}
|
||||||
|
return out
|
||||||
|
}
|
||||||
|
|
||||||
|
func backendFromProto(bi *grpcapi.BackendInfo) *BackendSnapshot {
|
||||||
|
out := &BackendSnapshot{
|
||||||
|
Name: bi.GetName(),
|
||||||
|
Address: bi.GetAddress(),
|
||||||
|
State: bi.GetState(),
|
||||||
|
Enabled: bi.GetEnabled(),
|
||||||
|
HealthCheck: bi.GetHealthcheck(),
|
||||||
|
}
|
||||||
|
for _, t := range bi.GetTransitions() {
|
||||||
|
out.Transitions = append(out.Transitions, transitionFromProto(t))
|
||||||
|
}
|
||||||
|
if n := len(out.Transitions); n > 0 {
|
||||||
|
out.LastTransition = out.Transitions[n-1]
|
||||||
|
}
|
||||||
|
return out
|
||||||
|
}
|
||||||
|
|
||||||
|
func transitionFromProto(t *grpcapi.TransitionRecord) *TransitionRecord {
|
||||||
|
return &TransitionRecord{
|
||||||
|
From: t.GetFrom(),
|
||||||
|
To: t.GetTo(),
|
||||||
|
AtUnixNs: t.GetAtUnixNs(),
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func healthCheckFromProto(h *grpcapi.HealthCheckInfo) *HealthCheckSnapshot {
|
||||||
|
return &HealthCheckSnapshot{
|
||||||
|
Name: h.GetName(),
|
||||||
|
Type: h.GetType(),
|
||||||
|
Port: h.GetPort(),
|
||||||
|
IntervalNs: h.GetIntervalNs(),
|
||||||
|
FastIntervalNs: h.GetFastIntervalNs(),
|
||||||
|
DownIntervalNs: h.GetDownIntervalNs(),
|
||||||
|
TimeoutNs: h.GetTimeoutNs(),
|
||||||
|
Rise: h.GetRise(),
|
||||||
|
Fall: h.GetFall(),
|
||||||
|
}
|
||||||
|
}
|
||||||
cmd/frontend/handlers.go (new file, 164 lines)
@@ -0,0 +1,164 @@
// Copyright (c) 2026, Pim van Pelt <pim@ipng.ch>

package main

import (
	"encoding/json"
	"fmt"
	"io/fs"
	"log/slog"
	"net/http"
	"strings"
	"time"

	buildinfo "git.ipng.ch/ipng/vpp-maglev/cmd"
)

func registerHandlers(mux *http.ServeMux, clients []*maglevClient, broker *Broker) {
	byName := make(map[string]*maglevClient, len(clients))
	for _, c := range clients {
		byName[c.name] = c
	}

	mux.HandleFunc("/healthz", func(w http.ResponseWriter, _ *http.Request) {
		_, _ = w.Write([]byte("ok\n"))
	})

	mux.HandleFunc("/view/api/version", func(w http.ResponseWriter, _ *http.Request) {
		writeJSON(w, VersionInfo{
			Version: buildinfo.Version(),
			Commit:  buildinfo.Commit(),
			Date:    buildinfo.Date(),
		})
	})

	mux.HandleFunc("/view/api/maglevds", func(w http.ResponseWriter, _ *http.Request) {
		infos := make([]MaglevdInfo, 0, len(clients))
		for _, c := range clients {
			infos = append(infos, c.Info())
		}
		writeJSON(w, infos)
	})

	mux.HandleFunc("/view/api/state", func(w http.ResponseWriter, _ *http.Request) {
		out := make([]*StateSnapshot, 0, len(clients))
		for _, c := range clients {
			out = append(out, c.Snapshot())
		}
		writeJSON(w, out)
	})

	mux.HandleFunc("/view/api/state/", func(w http.ResponseWriter, r *http.Request) {
		name := strings.TrimPrefix(r.URL.Path, "/view/api/state/")
		c, ok := byName[name]
		if !ok {
			http.NotFound(w, r)
			return
		}
		writeJSON(w, c.Snapshot())
	})

	mux.HandleFunc("/view/api/events", func(w http.ResponseWriter, r *http.Request) {
		serveSSE(w, r, broker)
	})

	mux.HandleFunc("/admin/", func(w http.ResponseWriter, _ *http.Request) {
		http.Error(w, "admin mode not implemented", http.StatusNotImplemented)
	})

	// Static SPA served from the embedded dist fs, mounted under /view/.
	staticFS, err := fs.Sub(webFS, "web/dist")
	if err != nil {
		slog.Error("embed-subfs", "err", err)
		return
	}
	fileServer := http.FileServer(http.FS(staticFS))
	mux.Handle("/view/", http.StripPrefix("/view/", fileServer))
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		if r.URL.Path == "/" {
			http.Redirect(w, r, "/view/", http.StatusFound)
			return
		}
		http.NotFound(w, r)
	})
}

func writeJSON(w http.ResponseWriter, v any) {
	w.Header().Set("Content-Type", "application/json")
	enc := json.NewEncoder(w)
	enc.SetEscapeHTML(false)
	if err := enc.Encode(v); err != nil {
		slog.Error("json-encode", "err", err)
	}
}

// serveSSE handles the long-lived /view/api/events stream. The operational
// requirements (retry hint, heartbeat, flush-after-write, X-Accel-Buffering,
// context-done teardown) are documented in PLAN_FRONTEND.md §SSE operational
// requirements.
func serveSSE(w http.ResponseWriter, r *http.Request, broker *Broker) {
	flusher, ok := w.(http.Flusher)
	if !ok {
		http.Error(w, "streaming unsupported", http.StatusInternalServerError)
		return
	}

	h := w.Header()
	h.Set("Content-Type", "text/event-stream")
	h.Set("Cache-Control", "no-cache")
	h.Set("Connection", "keep-alive")
	h.Set("X-Accel-Buffering", "no")
	w.WriteHeader(http.StatusOK)

	// Reconnect hint: EventSource default is 3–5s; 2s feels livelier.
	fmt.Fprintf(w, "retry: 2000\n\n")
	flusher.Flush()

	result := broker.Subscribe(r.Header.Get("Last-Event-ID"))
	defer broker.Unsubscribe(result.Channel)

	if result.NeedResync {
		// No id: line — the browser keeps whatever Last-Event-ID it had,
		// so subsequent reconnects compare against a real event ID.
		fmt.Fprintf(w, "event: resync\ndata: {}\n\n")
		flusher.Flush()
	}
	for _, ev := range result.ReplayEvents {
		if err := writeEvent(w, ev); err != nil {
			return
		}
		flusher.Flush()
	}

	heartbeat := time.NewTicker(15 * time.Second)
	defer heartbeat.Stop()

	for {
		select {
		case <-r.Context().Done():
			return
		case ev, ok := <-result.Channel:
			if !ok {
				return
			}
			if err := writeEvent(w, ev); err != nil {
				return
			}
			flusher.Flush()
		case <-heartbeat.C:
			if _, err := fmt.Fprintf(w, ": ping\n\n"); err != nil {
				return
			}
			flusher.Flush()
		}
	}
}

func writeEvent(w http.ResponseWriter, ev deliveredEvent) error {
	body, err := json.Marshal(ev.Event)
	if err != nil {
		return err
	}
	_, err = fmt.Fprintf(w, "id: %s\ndata: %s\n\n", ev.ID, body)
	return err
}
cmd/frontend/main.go (new file, 126 lines)
@@ -0,0 +1,126 @@
// Copyright (c) 2026, Pim van Pelt <pim@ipng.ch>

package main

import (
	"context"
	"errors"
	"flag"
	"fmt"
	"log/slog"
	"net/http"
	"os"
	"os/signal"
	"strings"
	"syscall"
	"time"

	buildinfo "git.ipng.ch/ipng/vpp-maglev/cmd"
)

func main() {
	if err := run(); err != nil {
		slog.Error("startup-fatal", "err", err)
		os.Exit(1)
	}
}

func run() error {
	printVersion := flag.Bool("version", false, "print version and exit")
	servers := stringFlag("server", "", "MAGLEV_SERVERS", "comma-separated maglevd gRPC addresses (required)")
	listen := stringFlag("listen", ":8080", "MAGLEV_LISTEN", "HTTP listen address")
	logLevel := stringFlag("log-level", "info", "MAGLEV_LOG_LEVEL", "log verbosity (debug|info|warn|error)")
	flag.Parse()

	if *printVersion {
		fmt.Printf("maglev-frontend %s (commit %s, built %s)\n",
			buildinfo.Version(), buildinfo.Commit(), buildinfo.Date())
		return nil
	}

	var level slog.Level
	if err := level.UnmarshalText([]byte(*logLevel)); err != nil {
		return fmt.Errorf("invalid log level %q: %w", *logLevel, err)
	}
	slog.SetDefault(slog.New(slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{Level: level})))
	slog.Info("starting",
		"version", buildinfo.Version(),
		"commit", buildinfo.Commit(),
		"date", buildinfo.Date())

	addrs := parseServers(*servers)
	if len(addrs) == 0 {
		return errors.New("at least one -server address is required")
	}

	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	broker := NewBroker()

	clients := make([]*maglevClient, 0, len(addrs))
	for _, addr := range addrs {
		c, err := newMaglevClient(addr, broker)
		if err != nil {
			return fmt.Errorf("connect %s: %w", addr, err)
		}
		clients = append(clients, c)
		c.Start(ctx)
		slog.Info("maglevd-configured", "name", c.name, "address", c.address)
	}

	mux := http.NewServeMux()
	registerHandlers(mux, clients, broker)

	srv := &http.Server{
		Addr:              *listen,
		Handler:           mux,
		ReadHeaderTimeout: 10 * time.Second,
	}

	sigCh := make(chan os.Signal, 1)
	signal.Notify(sigCh, syscall.SIGTERM, syscall.SIGINT)

	errCh := make(chan error, 1)
	go func() {
		slog.Info("http-listening", "addr", *listen)
		if err := srv.ListenAndServe(); err != nil && !errors.Is(err, http.ErrServerClosed) {
			errCh <- err
		}
	}()

	select {
	case sig := <-sigCh:
		slog.Info("shutdown", "signal", sig)
	case err := <-errCh:
		cancel()
		return err
	}

	cancel()
	shutdownCtx, shutdownCancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer shutdownCancel()
	_ = srv.Shutdown(shutdownCtx)
	for _, c := range clients {
		c.Close()
	}
	return nil
}

func parseServers(s string) []string {
	var out []string
	for _, part := range strings.Split(s, ",") {
		if p := strings.TrimSpace(part); p != "" {
			out = append(out, p)
		}
	}
	return out
}

func stringFlag(name, defaultVal, envKey, usage string) *string {
	val := defaultVal
	if v := os.Getenv(envKey); v != "" {
		val = v
	}
	return flag.String(name, val, fmt.Sprintf("%s (env: %s)", usage, envKey))
}
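parseServers above tolerates surrounding whitespace and empty elements in the comma-separated `-server` value, so `-server "a, b,"` and `-server a,b` behave identically. A quick standalone check of that behavior (same split/trim/drop logic, repeated here so the sketch compiles on its own):

```go
package main

import (
	"fmt"
	"strings"
)

// parseServers repeats main.go's logic: split on commas, trim
// whitespace, and drop empty entries.
func parseServers(s string) []string {
	var out []string
	for _, part := range strings.Split(s, ",") {
		if p := strings.TrimSpace(part); p != "" {
			out = append(out, p)
		}
	}
	return out
}

func main() {
	fmt.Println(parseServers(" lb0:8080, ,lb1:8080,"))
	// prints: [lb0:8080 lb1:8080]
}
```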
cmd/frontend/types.go (new file, 117 lines)
@@ -0,0 +1,117 @@
// Copyright (c) 2026, Pim van Pelt <pim@ipng.ch>

package main

import "encoding/json"

// StateSnapshot is the full JSON snapshot served for a single maglevd.
type StateSnapshot struct {
	Maglevd      MaglevdInfo            `json:"maglevd"`
	Frontends    []*FrontendSnapshot    `json:"frontends"`
	Backends     []*BackendSnapshot     `json:"backends"`
	HealthChecks []*HealthCheckSnapshot `json:"healthchecks"`
	VPPInfo      *VPPInfoSnapshot       `json:"vpp_info,omitempty"`
}

// MaglevdInfo is the per-maglevd connection status record.
type MaglevdInfo struct {
	Name      string `json:"name"`
	Address   string `json:"address"`
	Connected bool   `json:"connected"`
	LastError string `json:"last_error,omitempty"`
}

// VersionInfo is the build metadata of this maglev-frontend binary.
type VersionInfo struct {
	Version string `json:"version"`
	Commit  string `json:"commit"`
	Date    string `json:"date"`
}

type FrontendSnapshot struct {
	Name        string          `json:"name"`
	Address     string          `json:"address"`
	Protocol    string          `json:"protocol"`
	Port        uint32          `json:"port"`
	Description string          `json:"description,omitempty"`
	SrcIPSticky bool            `json:"src_ip_sticky"`
	Pools       []*PoolSnapshot `json:"pools"`
}

type PoolSnapshot struct {
	Name     string                 `json:"name"`
	Backends []*PoolBackendSnapshot `json:"backends"`
}

type PoolBackendSnapshot struct {
	Name            string `json:"name"`
	Weight          int32  `json:"weight"`
	EffectiveWeight int32  `json:"effective_weight"`
}

type BackendSnapshot struct {
	Name           string              `json:"name"`
	Address        string              `json:"address"`
	State          string              `json:"state"`
	Enabled        bool                `json:"enabled"`
	HealthCheck    string              `json:"healthcheck"`
	LastTransition *TransitionRecord   `json:"last_transition,omitempty"`
	Transitions    []*TransitionRecord `json:"transitions,omitempty"`
}

type TransitionRecord struct {
	From     string `json:"from"`
	To       string `json:"to"`
	AtUnixNs int64  `json:"at_unix_ns"`
}

type HealthCheckSnapshot struct {
	Name           string `json:"name"`
	Type           string `json:"type"`
	Port           uint32 `json:"port"`
	IntervalNs     int64  `json:"interval_ns"`
	FastIntervalNs int64  `json:"fast_interval_ns"`
	DownIntervalNs int64  `json:"down_interval_ns"`
	TimeoutNs      int64  `json:"timeout_ns"`
	Rise           int32  `json:"rise"`
	Fall           int32  `json:"fall"`
}

type VPPInfoSnapshot struct {
	Version       string `json:"version"`
	BuildDate     string `json:"build_date"`
	PID           uint32 `json:"pid"`
	BoottimeNs    int64  `json:"boottime_ns"`
	ConnecttimeNs int64  `json:"connecttime_ns"`
}

// BrowserEvent is the wire shape sent over SSE to the browser.
type BrowserEvent struct {
	Maglevd  string          `json:"maglevd"`
	Type     string          `json:"type"` // log|backend|frontend|maglevd-status|resync
	AtUnixNs int64           `json:"at_unix_ns"`
	Payload  json.RawMessage `json:"payload"`
}

// BackendEventPayload is what we ship inside BrowserEvent.Payload for
// type == "backend".
type BackendEventPayload struct {
	Backend    string           `json:"backend"`
	Transition TransitionRecord `json:"transition"`
}

type FrontendEventPayload struct {
	Frontend   string           `json:"frontend"`
	Transition TransitionRecord `json:"transition"`
}

type LogEventPayload struct {
	Level string            `json:"level"`
	Msg   string            `json:"msg"`
	Attrs map[string]string `json:"attrs,omitempty"`
}

type MaglevdStatusPayload struct {
	Connected bool   `json:"connected"`
	LastError string `json:"last_error,omitempty"`
}
cmd/frontend/web/.prettierignore (new file, 3 lines)
@@ -0,0 +1,3 @@
dist/
node_modules/
package-lock.json
cmd/frontend/web/.prettierrc.json (new file, 7 lines)
@@ -0,0 +1,7 @@
{
  "printWidth": 100,
  "tabWidth": 2,
  "semi": true,
  "singleQuote": false,
  "trailingComma": "all"
}
cmd/frontend/web/dist/assets/index-9NmAul22.css (new file, vendored, 1 line)
File diff suppressed because one or more lines are too long

cmd/frontend/web/dist/assets/index-DZzDfClm.js (new file, vendored, 1 line)
File diff suppressed because one or more lines are too long
cmd/frontend/web/dist/index.html (new file, vendored, 13 lines)
@@ -0,0 +1,13 @@
<!doctype html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>maglev</title>
    <script type="module" crossorigin src="/view/assets/index-DZzDfClm.js"></script>
    <link rel="stylesheet" crossorigin href="/view/assets/index-9NmAul22.css">
  </head>
  <body>
    <div id="root"></div>
  </body>
</html>
cmd/frontend/web/index.html (new file, 12 lines)
@@ -0,0 +1,12 @@
<!doctype html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>maglev</title>
  </head>
  <body>
    <div id="root"></div>
    <script type="module" src="./src/main.tsx"></script>
  </body>
</html>
cmd/frontend/web/package-lock.json (new file, generated, 1695 lines)
File diff suppressed because it is too large
cmd/frontend/web/package.json (new file, 22 lines)
@@ -0,0 +1,22 @@
{
  "name": "maglev-frontend-web",
  "private": true,
  "version": "0.0.0",
  "type": "module",
  "scripts": {
    "dev": "vite",
    "build": "vite build --outDir dist --emptyOutDir",
    "check": "tsc --noEmit",
    "format": "prettier --write .",
    "format:check": "prettier --check ."
  },
  "dependencies": {
    "solid-js": "^1.9.3"
  },
  "devDependencies": {
    "prettier": "^3.3.3",
    "typescript": "^5.6.3",
    "vite": "^5.4.10",
    "vite-plugin-solid": "^2.10.2"
  }
}
cmd/frontend/web/src/App.tsx (new file, 62 lines)
@@ -0,0 +1,62 @@
import { createSignal, onMount, type Component } from "solid-js";
import { fetchAllState, fetchVersion } from "./api/rest";
import { openEventStream } from "./api/sse";
import { replaceAll, state } from "./stores/state";
import { scope, setScope } from "./stores/scope";
import ScopeSelector from "./components/ScopeSelector";
import Overview from "./views/Overview";
import DebugPanel from "./views/DebugPanel";
import type { VersionInfo } from "./types";

const isAdmin = window.location.pathname.startsWith("/admin");

const App: Component = () => {
  const [error, setError] = createSignal<string | undefined>();
  const [version, setVersion] = createSignal<VersionInfo | undefined>();

  onMount(async () => {
    try {
      const [snaps, ver] = await Promise.all([fetchAllState(), fetchVersion()]);
      replaceAll(snaps);
      setVersion(ver);
      if (!scope() && snaps.length > 0) {
        setScope(snaps[0].maglevd.name);
      }
      openEventStream();
    } catch (err) {
      setError(`${err}`);
    }
  });

  return (
    <div class="app">
      <header class="app-header">
        <div class="brand">
          <strong>maglev</strong>
          {version() && (
            <span class="version" title={`commit ${version()!.commit} · built ${version()!.date}`}>
              {version()!.version} ({version()!.commit})
            </span>
          )}
        </div>
        <ScopeSelector />
        <span class="mode-tag">{isAdmin ? "admin" : "view"}</span>
        <a
          class="admin-toggle"
          href={isAdmin ? "/view/" : "/admin/"}
          title={isAdmin ? "exit admin mode" : "enter admin mode"}
        >
          {isAdmin ? "exit admin" : "admin…"}
        </a>
      </header>

      {error() && <div class="banner err">{error()}</div>}
      {!error() && Object.keys(state.byName).length === 0 && <p class="loading">Loading…</p>}

      <Overview />
      <DebugPanel />
    </div>
  );
};

export default App;
cmd/frontend/web/src/api/rest.ts (new file, 23 lines)
@@ -0,0 +1,23 @@
import type { MaglevdInfo, StateSnapshot, VersionInfo } from "../types";

async function getJSON<T>(path: string): Promise<T> {
  const r = await fetch(path, { credentials: "same-origin" });
  if (!r.ok) throw new Error(`${path}: ${r.status} ${r.statusText}`);
  return (await r.json()) as T;
}

export function listMaglevds(): Promise<MaglevdInfo[]> {
  return getJSON<MaglevdInfo[]>("/view/api/maglevds");
}

export function fetchAllState(): Promise<StateSnapshot[]> {
  return getJSON<StateSnapshot[]>("/view/api/state");
}

export function fetchState(name: string): Promise<StateSnapshot> {
  return getJSON<StateSnapshot>(`/view/api/state/${encodeURIComponent(name)}`);
}

export function fetchVersion(): Promise<VersionInfo> {
  return getJSON<VersionInfo>("/view/api/version");
}
cmd/frontend/web/src/api/sse.ts (new file, 92 lines)
@@ -0,0 +1,92 @@
import type {
|
||||||
|
BackendEventPayload,
|
||||||
|
BrowserEvent,
|
||||||
|
FrontendEventPayload,
|
||||||
|
LogEventPayload,
|
||||||
|
MaglevdStatusPayload,
|
||||||
|
} from "../types";
|
||||||
|
import { fetchAllState } from "./rest";
|
||||||
|
import {
|
||||||
|
applyBackendEffectiveWeight,
|
||||||
|
applyBackendTransition,
|
||||||
|
  applyFrontendTransition,
  applyMaglevdStatus,
  replaceAll,
} from "../stores/state";
import { pushEvent } from "../stores/events";

// openEventStream wires the SPA to /view/api/events. EventSource auto-
// reconnects with the Last-Event-ID header set, which the Go broker uses
// to replay events from its 30s ring buffer. A "resync" event tells us to
// refetch full state and redraw.
export function openEventStream(): EventSource {
  const es = new EventSource("/view/api/events");

  es.onmessage = (msg) => {
    try {
      const ev = JSON.parse(msg.data) as BrowserEvent;
      dispatch(ev);
    } catch (err) {
      console.error("sse parse error", err, msg.data);
    }
  };

  // "resync" is emitted as a named event so we can listen for it
  // without it going through the default onmessage dispatch.
  es.addEventListener("resync", async () => {
    try {
      const snaps = await fetchAllState();
      replaceAll(snaps);
    } catch (err) {
      console.error("resync refetch failed", err);
    }
  });

  es.onerror = (err) => {
    // EventSource handles reconnection on its own — just log.
    console.debug("sse error, browser will reconnect", err);
  };

  return es;
}

function dispatch(ev: BrowserEvent) {
  pushEvent(ev);
  switch (ev.type) {
    case "backend":
      applyBackendTransition(ev.maglevd, ev.payload as BackendEventPayload);
      break;
    case "frontend":
      applyFrontendTransition(ev.maglevd, ev.payload as FrontendEventPayload);
      break;
    case "maglevd-status":
      applyMaglevdStatus(ev.maglevd, ev.payload as MaglevdStatusPayload);
      break;
    case "log":
      applyLogEvent(ev.maglevd, ev.payload as LogEventPayload);
      break;
  }
}

// applyLogEvent surfaces the few log messages that carry data we want to
// reflect in the store. Probe-start/probe-done drive the heartbeat and are
// handled by BackendRow watching the events signal directly; here we only
// react to VPP LB sync mutations so the effective weight column updates
// live when a backend is disabled, enabled, or reweighted.
function applyLogEvent(maglevd: string, p: LogEventPayload) {
  if (!p.msg.startsWith("vpp-lb-sync-as-")) return;
  const attrs = p.attrs ?? {};
  const address = attrs.address;
  if (!address) return;
  switch (p.msg) {
    case "vpp-lb-sync-as-added":
      applyBackendEffectiveWeight(maglevd, address, Number(attrs.weight ?? 0));
      break;
    case "vpp-lb-sync-as-removed":
      applyBackendEffectiveWeight(maglevd, address, 0);
      break;
    case "vpp-lb-sync-as-weight-updated":
      applyBackendEffectiveWeight(maglevd, address, Number(attrs.to ?? 0));
      break;
  }
}
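The vpp-lb-sync-as-* handling above reduces to a pure message-to-weight mapping. A minimal standalone sketch of that mapping (the helper name `effectiveWeightFor` is illustrative, not part of the store's API):

```typescript
// Maps a vpp-lb-sync-as-* log record to the effective weight the UI
// should display, mirroring the switch in applyLogEvent. Returns
// undefined for messages the dashboard ignores. (Illustrative helper.)
type LogAttrs = Record<string, string>;

function effectiveWeightFor(msg: string, attrs: LogAttrs): number | undefined {
  switch (msg) {
    case "vpp-lb-sync-as-added":
      return Number(attrs.weight ?? 0); // new AS entry: use its weight
    case "vpp-lb-sync-as-removed":
      return 0; // removed from VPP: effectively weight 0
    case "vpp-lb-sync-as-weight-updated":
      return Number(attrs.to ?? 0); // reweight: use the new value
    default:
      return undefined; // probe-start, probe-done, etc. are not weight events
  }
}

console.log(effectiveWeightFor("vpp-lb-sync-as-added", { address: "10.0.0.1", weight: "100" })); // 100
console.log(effectiveWeightFor("vpp-lb-sync-as-removed", { address: "10.0.0.1" })); // 0
```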
cmd/frontend/web/src/components/Flash.tsx (new file, 41 lines)
@@ -0,0 +1,41 @@
import { createEffect, on, type Component, type JSX } from "solid-js";

type Props = {
  // value is only used for change detection. When it changes the
  // wrapper runs a 1s flash animation.
  value: string | number | boolean;
  // When children are provided they are rendered inside the wrapper
  // instead of the raw value. Useful for wrapping e.g. <StatusBadge>
  // so the pill animates on state change while still showing itself.
  children?: JSX.Element;
};

// Flash plays a 1s yellow-to-transparent background animation every
// time `value` changes. The initial mount is skipped (defer: true) so
// nothing flashes on page load. Uses the Web Animations API so repeated
// changes reliably re-trigger even when the new value arrives while a
// previous animation is still running.
const Flash: Component<Props> = (props) => {
  let el: HTMLSpanElement | undefined;

  createEffect(
    on(
      () => props.value,
      () => {
        el?.animate([{ backgroundColor: "#fefe27" }, { backgroundColor: "transparent" }], {
          duration: 1000,
          easing: "ease-out",
        });
      },
      { defer: true },
    ),
  );

  return (
    <span ref={el} class="flash-target">
      {props.children ?? (props.value as unknown as JSX.Element)}
    </span>
  );
};

export default Flash;
cmd/frontend/web/src/components/ProbeHeartbeat.tsx (new file, 32 lines)
@@ -0,0 +1,32 @@
import { createEffect, createSignal, type Component } from "solid-js";
import { events } from "../stores/events";
import type { LogEventPayload } from "../types";

type Props = { maglevd: string; backend: string };

// ProbeHeartbeat watches the event stream for probe-start/probe-done log
// records targeted at this backend. It shows a heart while a probe is in
// flight and a dot at rest. Success/failure is reflected by the backend's
// state column, so this component is purely an activity indicator.
const ProbeHeartbeat: Component<Props> = (props) => {
  const [inFlight, setInFlight] = createSignal(false);

  createEffect(() => {
    const list = events();
    if (list.length === 0) return;
    const ev = list[list.length - 1]; // newest — list is chronological
    if (ev.type !== "log" || ev.maglevd !== props.maglevd) return;
    const payload = ev.payload as LogEventPayload;
    if (payload.attrs?.backend !== props.backend) return;
    if (payload.msg === "probe-start") setInFlight(true);
    else if (payload.msg === "probe-done") setInFlight(false);
  });

  return (
    <span class="probe-heartbeat" classList={{ "in-flight": inFlight() }}>
      {inFlight() ? "\u2764\uFE0F" : "\u00B7"}
    </span>
  );
};

export default ProbeHeartbeat;
cmd/frontend/web/src/components/ScopeSelector.tsx (new file, 34 lines)
@@ -0,0 +1,34 @@
import { For, type Component } from "solid-js";
import { scope, setScope } from "../stores/scope";
import { state } from "../stores/state";

const ScopeSelector: Component = () => {
  const names = () => Object.keys(state.byName).sort();
  return (
    <nav class="scope-selector">
      <For each={names()}>
        {(name) => {
          const snap = () => state.byName[name];
          const connected = () => snap()?.maglevd.connected ?? false;
          return (
            <button
              class="scope-tab"
              classList={{
                active: scope() === name,
                connected: connected(),
                disconnected: !connected(),
              }}
              title={snap()?.maglevd.address ?? ""}
              onClick={() => setScope(name)}
            >
              <span class="dot" />
              {name}
            </button>
          );
        }}
      </For>
    </nav>
  );
};

export default ScopeSelector;
cmd/frontend/web/src/components/StatusBadge.tsx (new file, 15 lines)
@@ -0,0 +1,15 @@
import type { Component } from "solid-js";

type Props = { state: string; label?: string };

// StatusBadge renders a state pill. Background color is a CSS custom
// property on the :root so themes can override centrally.
const StatusBadge: Component<Props> = (props) => {
  return (
    <span class="status-badge" data-state={props.state}>
      {props.label ?? props.state}
    </span>
  );
};

export default StatusBadge;
cmd/frontend/web/src/components/Zippy.tsx (new file, 18 lines)
@@ -0,0 +1,18 @@
import type { Component, JSX } from "solid-js";

type Props = {
  title: string;
  open?: boolean;
  children: JSX.Element;
};

const Zippy: Component<Props> = (props) => {
  return (
    <details class="zippy" open={props.open}>
      <summary>{props.title}</summary>
      <div class="zippy-body">{props.children}</div>
    </details>
  );
};

export default Zippy;
cmd/frontend/web/src/main.tsx (new file, 9 lines)
@@ -0,0 +1,9 @@
/* @refresh reload */
import { render } from "solid-js/web";
import App from "./App";
import "./styles/reset.css";
import "./styles/theme.css";

const root = document.getElementById("root");
if (!root) throw new Error("no #root element");
render(() => <App />, root);
cmd/frontend/web/src/stores/events.ts (new file, 21 lines)
@@ -0,0 +1,21 @@
import { createSignal } from "solid-js";
import type { BrowserEvent } from "../types";

// Rolling tail of recent events for the debug panel. Capped at 500 to
// keep memory bounded on busy load balancers. Chronological order: the
// oldest retained event is at index 0, the newest is at the end. The
// DebugPanel renders in this order and auto-scrolls to the bottom so
// the newest line stays in view (tail-style).
const MAX = 500;

const [events, setEvents] = createSignal<BrowserEvent[]>([]);

export { events };

export function pushEvent(ev: BrowserEvent) {
  setEvents((prev) => {
    const next = [...prev, ev];
    if (next.length > MAX) return next.slice(next.length - MAX);
    return next;
  });
}
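The bounded append in pushEvent can be demonstrated in isolation. A minimal sketch of the same keep-the-newest-MAX logic, with a small cap for readability (the real store uses 500):

```typescript
// Bounded append: push onto a copy, then drop the oldest entries once
// the cap is exceeded, exactly as pushEvent does. (Standalone sketch.)
const MAX = 5;

function append<T>(prev: T[], ev: T): T[] {
  const next = [...prev, ev];
  return next.length > MAX ? next.slice(next.length - MAX) : next;
}

let tail: number[] = [];
for (let i = 0; i < 8; i++) tail = append(tail, i);
console.log(tail); // [3, 4, 5, 6, 7] — only the newest 5 survive
```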
cmd/frontend/web/src/stores/scope.ts (new file, 6 lines)
@@ -0,0 +1,6 @@
import { createSignal } from "solid-js";

// The currently selected maglevd name, or undefined before first fetch.
const [scope, setScope] = createSignal<string | undefined>(undefined);

export { scope, setScope };
cmd/frontend/web/src/stores/state.ts (new file, 107 lines)
@@ -0,0 +1,107 @@
import { createStore, produce } from "solid-js/store";
import type {
  BackendEventPayload,
  FrontendEventPayload,
  MaglevdStatusPayload,
  StateSnapshot,
  TransitionRecord,
} from "../types";

// FrontendState keys snapshots by maglevd name. A single store drives the
// whole UI; reducers produce() into the right branch.
export type FrontendState = {
  byName: Record<string, StateSnapshot>;
};

const [state, setState] = createStore<FrontendState>({ byName: {} });

export { state };

export function replaceSnapshot(snap: StateSnapshot) {
  setState(
    produce((s) => {
      s.byName[snap.maglevd.name] = snap;
    }),
  );
}

export function replaceAll(snaps: StateSnapshot[]) {
  const byName: Record<string, StateSnapshot> = {};
  for (const s of snaps) byName[s.maglevd.name] = s;
  setState({ byName });
}

export function applyBackendTransition(maglevd: string, p: BackendEventPayload) {
  setState(
    produce((s) => {
      const snap = s.byName[maglevd];
      if (!snap) return;
      const b = snap.backends.find((x) => x.name === p.backend);
      if (!b) return;
      b.state = p.transition.to;
      b.last_transition = p.transition;
      if (!b.transitions) b.transitions = [];
      b.transitions.push(p.transition);
      if (b.transitions.length > 20) {
        b.transitions = b.transitions.slice(b.transitions.length - 20);
      }
    }),
  );
}

export function applyFrontendTransition(maglevd: string, _p: FrontendEventPayload) {
  // Frontend roll-up state is computed per render in the current cut, so
  // there is nothing to update in the store. Kept as a named reducer so
  // the SSE dispatcher has one entry per event type and future frontend
  // state fields have a single place to land.
  void maglevd;
}

export function applyMaglevdStatus(maglevd: string, p: MaglevdStatusPayload) {
  setState(
    produce((s) => {
      const snap = s.byName[maglevd];
      if (!snap) return;
      snap.maglevd.connected = p.connected;
      snap.maglevd.last_error = p.last_error;
    }),
  );
}

// applyBackendEffectiveWeight updates the effective_weight of every pool
// row that references the backend with the given address. Driven by the
// vpp-lb-sync-as-* log events so the UI reflects VPP LB changes without
// waiting for the 30s refresh tick.
export function applyBackendEffectiveWeight(maglevd: string, address: string, weight: number) {
  setState(
    produce((s) => {
      const snap = s.byName[maglevd];
      if (!snap) return;
      const b = snap.backends.find((x) => x.address === address);
      if (!b) return;
      for (const fe of snap.frontends) {
        for (const pool of fe.pools) {
          for (const pb of pool.backends) {
            if (pb.name === b.name) {
              pb.effective_weight = weight;
            }
          }
        }
      }
    }),
  );
}

// Helpers used by views.

export function lastTransitionAge(t?: TransitionRecord): string {
  if (!t || !t.at_unix_ns || t.at_unix_ns <= 0) return "";
  const ms = Date.now() - t.at_unix_ns / 1e6;
  const s = Math.floor(ms / 1000);
  if (s < 60) return `${s}s ago`;
  const m = Math.floor(s / 60);
  if (m < 60) return `${m}m ago`;
  const h = Math.floor(m / 60);
  if (h < 48) return `${h}h ago`;
  return `${Math.floor(h / 24)}d ago`;
}
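lastTransitionAge buckets an elapsed duration into seconds under a minute, minutes under an hour, hours under 48h, then days. A standalone restatement of the same bucketing, taking the elapsed milliseconds directly instead of a TransitionRecord:

```typescript
// Same bucketing as lastTransitionAge, on a raw millisecond delta.
// (Standalone restatement for illustration.)
function ageString(ms: number): string {
  const s = Math.floor(ms / 1000);
  if (s < 60) return `${s}s ago`;
  const m = Math.floor(s / 60);
  if (m < 60) return `${m}m ago`;
  const h = Math.floor(m / 60);
  if (h < 48) return `${h}h ago`; // hours are shown up to two days
  return `${Math.floor(h / 24)}d ago`;
}

console.log(ageString(42_000)); // "42s ago"
console.log(ageString(90 * 60_000)); // "1h ago"
console.log(ageString(72 * 3_600_000)); // "3d ago"
```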
cmd/frontend/web/src/styles/reset.css (new file, 64 lines)
@@ -0,0 +1,64 @@
*,
*::before,
*::after {
  box-sizing: border-box;
}
html,
body {
  margin: 0;
  padding: 0;
}
body {
  font-family:
    -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, sans-serif;
  font-size: 14px;
  line-height: 1.4;
  color: var(--fg);
  background: var(--bg);
}
h1,
h2,
h3,
h4,
p,
dl,
dd,
ol,
ul {
  margin: 0;
  padding: 0;
}
ol,
ul {
  list-style: none;
}
a {
  color: inherit;
  text-decoration: none;
}
button {
  font: inherit;
  color: inherit;
  background: none;
  border: 1px solid var(--border);
  border-radius: 4px;
  padding: 4px 8px;
  cursor: pointer;
}
button:hover {
  background: var(--bg-soft);
}
table {
  border-collapse: collapse;
  width: 100%;
}
th,
td {
  text-align: left;
  padding: 4px 8px;
}
code,
pre,
.mono {
  font-family: "SF Mono", Menlo, Consolas, monospace;
}
cmd/frontend/web/src/styles/theme.css (new file, 334 lines)
@@ -0,0 +1,334 @@
:root {
  --bg: #fafafa;
  --bg-soft: #f0f0f0;
  --bg-card: #ffffff;
  --fg: #1f2937;
  --fg-muted: #6b7280;
  --border: #e5e7eb;
  --accent: #2563eb;

  --state-up: #16a34a;
  --state-down: #dc2626;
  --state-paused: #2563eb;
  --state-disabled: #6b7280;
  --state-unknown: #eab308;
  --state-removed: #374151;
}

.flash-target {
  display: inline-block;
  padding: 0 4px;
  border-radius: 3px;
}

.app {
  max-width: 1400px;
  margin: 0 auto;
  padding: 16px;
}

.app-header {
  display: flex;
  align-items: center;
  gap: 16px;
  padding: 12px 0;
  border-bottom: 1px solid var(--border);
  margin-bottom: 16px;
}

.brand strong {
  font-size: 18px;
}
.app-header .mode-tag {
  margin-left: auto;
  padding: 2px 6px;
  border-radius: 3px;
  background: var(--bg-soft);
  color: var(--fg-muted);
  font-size: 11px;
  text-transform: uppercase;
}
.brand .version {
  margin-left: 8px;
  color: var(--fg-muted);
  font-family: "SF Mono", Menlo, Consolas, monospace;
  font-size: 11px;
  cursor: help;
}

.admin-toggle {
  padding: 4px 10px;
  border: 1px solid var(--border);
  border-radius: 4px;
  color: var(--accent);
}

/* ---- scope selector ---- */

.scope-selector {
  display: flex;
  gap: 6px;
  flex-wrap: wrap;
}
.scope-tab {
  display: inline-flex;
  align-items: center;
  gap: 6px;
  padding: 4px 10px;
  border-radius: 20px;
}
.scope-tab.active {
  background: var(--accent);
  color: white;
  border-color: var(--accent);
}
.scope-tab .dot {
  display: inline-block;
  width: 8px;
  height: 8px;
  border-radius: 50%;
  background: var(--state-down);
}
.scope-tab.connected .dot {
  background: var(--state-up);
}

/* ---- status badge ---- */

.status-badge {
  display: inline-block;
  padding: 2px 10px;
  border-radius: 10px;
  font-size: 12px;
  font-weight: 500;
  color: white;
  text-transform: capitalize;
}
.status-badge[data-state="up"] {
  background: var(--state-up);
}
.status-badge[data-state="down"] {
  background: var(--state-down);
}
.status-badge[data-state="paused"] {
  background: var(--state-paused);
}
.status-badge[data-state="disabled"] {
  background: var(--state-disabled);
}
.status-badge[data-state="unknown"] {
  background: var(--state-unknown);
  color: #1f2937;
}
.status-badge[data-state="removed"] {
  background: var(--state-removed);
  text-decoration: line-through;
}

/* ---- frontend grid ---- */

.frontend-grid {
  display: grid;
  gap: 16px;
  grid-template-columns: 1fr;
}
@media (min-width: 640px) {
  .frontend-grid {
    grid-template-columns: 1fr 1fr;
  }
}
@media (min-width: 1024px) {
  .frontend-grid {
    grid-template-columns: repeat(3, 1fr);
  }
}

.frontend-card {
  background: var(--bg-card);
  border: 1px solid var(--border);
  border-radius: 6px;
  padding: 12px;
}
.frontend-header h2 {
  font-size: 16px;
  margin-bottom: 4px;
}
.frontend-meta {
  display: flex;
  gap: 8px;
  color: var(--fg-muted);
  font-size: 12px;
}
.frontend-meta .proto {
  text-transform: uppercase;
  font-weight: 600;
}
.frontend-desc {
  font-size: 12px;
  color: var(--fg-muted);
  margin-top: 4px;
}
.tag {
  display: inline-block;
  padding: 1px 6px;
  border-radius: 3px;
  background: var(--bg-soft);
  color: var(--fg-muted);
  font-size: 11px;
  margin-left: 4px;
}

.pool-block {
  margin-top: 12px;
}
.pool-name {
  font-size: 13px;
  color: var(--fg-muted);
  margin-bottom: 4px;
}

.backend-table th,
.backend-table td {
  white-space: nowrap;
}
.backend-table th {
  font-size: 11px;
  color: var(--fg-muted);
  text-transform: uppercase;
  border-bottom: 1px solid var(--border);
}
.backend-table .numeric {
  text-align: right;
}
.backend-row td {
  border-bottom: 1px solid var(--border);
  font-size: 13px;
}
.backend-row .backend-name {
  font-weight: 500;
}
.backend-row .backend-address,
.backend-row .age {
  color: var(--fg-muted);
  font-family: "SF Mono", Menlo, Consolas, monospace;
  font-size: 12px;
}

/* ---- probe heartbeat ---- */

/* Fixed-box wrapper so the row doesn't jiggle when the glyph swaps
 * between "·" (very narrow) and "❤️" (wide emoji with a different
 * font metric). Width is picked to comfortably contain the heart at
 * the declared font-size, line-height is locked so the emoji doesn't
 * push the row baseline, and overflow is hidden as a safety net in
 * case a platform renders the emoji even wider.
 */
.probe-heartbeat {
  display: inline-block;
  width: 16px;
  height: 14px;
  line-height: 14px;
  margin-right: 6px;
  text-align: center;
  font-size: 10px;
  color: var(--state-disabled);
  overflow: hidden;
  vertical-align: middle;
}
.probe-heartbeat.in-flight {
  color: inherit;
}

/* ---- banners & loading ---- */

.banner {
  padding: 8px 12px;
  border-radius: 4px;
  margin-bottom: 12px;
  font-size: 13px;
}
.banner.warn {
  background: #fef3c7;
  color: #92400e;
}
.banner.err {
  background: #fee2e2;
  color: #991b1b;
}
.loading {
  color: var(--fg-muted);
  padding: 16px;
}
.empty {
  color: var(--fg-muted);
  padding: 16px;
}

/* ---- zippy ---- */

.zippy {
  margin-top: 16px;
  border: 1px solid var(--border);
  border-radius: 6px;
  background: var(--bg-card);
}
.zippy summary {
  padding: 8px 12px;
  cursor: pointer;
  font-weight: 500;
}
.zippy-body {
  padding: 8px 12px;
  border-top: 1px solid var(--border);
}

.kv {
  display: grid;
  grid-template-columns: max-content 1fr;
  gap: 4px 12px;
}
.kv dt {
  color: var(--fg-muted);
}

/* ---- debug panel ---- */

.debug-toolbar {
  display: flex;
  gap: 12px;
  align-items: center;
  margin-top: 8px;
  font-size: 12px;
}
.debug-toolbar .count {
  margin-left: auto;
  color: var(--fg-muted);
}
.event-tail {
  max-height: 320px;
  overflow: auto;
  font-family: "SF Mono", Menlo, Consolas, monospace;
  font-size: 11px;
  line-height: 1.5;
}
.event-row {
  padding: 2px 4px;
  white-space: pre-wrap;
  word-break: break-all;
}
.event-row.event-backend {
  color: var(--state-up);
}
.event-row.event-frontend {
  color: var(--accent);
}
.event-row.event-log {
  color: var(--fg-muted);
}
.event-row.event-maglevd-status {
  color: var(--state-down);
}
.event-row.event-sync {
  color: var(--state-paused);
  font-weight: 500;
}
cmd/frontend/web/src/types.ts (new file, 107 lines)
@@ -0,0 +1,107 @@
// TS mirror of cmd/frontend/types.go — keep in sync.

export type MaglevdInfo = {
  name: string;
  address: string;
  connected: boolean;
  last_error?: string;
};

export type VersionInfo = {
  version: string;
  commit: string;
  date: string;
};

export type TransitionRecord = {
  from: string;
  to: string;
  at_unix_ns: number;
};

export type PoolBackendSnapshot = {
  name: string;
  weight: number;
  effective_weight: number;
};

export type PoolSnapshot = {
  name: string;
  backends: PoolBackendSnapshot[];
};

export type FrontendSnapshot = {
  name: string;
  address: string;
  protocol: string;
  port: number;
  description?: string;
  src_ip_sticky: boolean;
  pools: PoolSnapshot[];
};

export type BackendSnapshot = {
  name: string;
  address: string;
  state: string;
  enabled: boolean;
  healthcheck: string;
  last_transition?: TransitionRecord;
  transitions?: TransitionRecord[];
};

export type HealthCheckSnapshot = {
  name: string;
  type: string;
  port: number;
  interval_ns: number;
  fast_interval_ns: number;
  down_interval_ns: number;
  timeout_ns: number;
  rise: number;
  fall: number;
};

export type VPPInfoSnapshot = {
  version: string;
  build_date: string;
  pid: number;
  boottime_ns: number;
  connecttime_ns: number;
};

export type StateSnapshot = {
  maglevd: MaglevdInfo;
  frontends: FrontendSnapshot[];
  backends: BackendSnapshot[];
  healthchecks: HealthCheckSnapshot[];
  vpp_info?: VPPInfoSnapshot;
};

export type BrowserEvent = {
  maglevd: string;
  type: "log" | "backend" | "frontend" | "maglevd-status" | "resync";
  at_unix_ns: number;
  payload: unknown;
};

export type BackendEventPayload = {
  backend: string;
  transition: TransitionRecord;
};

export type FrontendEventPayload = {
  frontend: string;
  transition: TransitionRecord;
};

export type LogEventPayload = {
  level: string;
  msg: string;
  attrs?: Record<string, string>;
};

export type MaglevdStatusPayload = {
  connected: boolean;
  last_error?: string;
};
cmd/frontend/web/src/views/BackendRow.tsx (new file, 40 lines)
@@ -0,0 +1,40 @@
import type { Component } from "solid-js";
import type { BackendSnapshot, PoolBackendSnapshot } from "../types";
import StatusBadge from "../components/StatusBadge";
import ProbeHeartbeat from "../components/ProbeHeartbeat";
import Flash from "../components/Flash";
import { lastTransitionAge } from "../stores/state";

type Props = {
  maglevd: string;
  backend: BackendSnapshot;
  poolBackend: PoolBackendSnapshot;
};

const BackendRow: Component<Props> = (props) => {
  const b = () => props.backend;
  return (
    <tr class="backend-row" data-state={b().state}>
      <td class="backend-name">
        <ProbeHeartbeat maglevd={props.maglevd} backend={b().name} />
        {b().name}
        {!b().enabled && <span class="tag">[disabled]</span>}
      </td>
      <td class="backend-address">{b().address}</td>
      <td>
        <Flash value={b().state}>
          <StatusBadge state={b().state} />
        </Flash>
      </td>
      <td class="numeric">
        <Flash value={props.poolBackend.weight} />
      </td>
      <td class="numeric">
        <Flash value={props.poolBackend.effective_weight} />
      </td>
      <td class="age">{lastTransitionAge(b().last_transition)}</td>
    </tr>
  );
};

export default BackendRow;
cmd/frontend/web/src/views/DebugPanel.tsx (new file, 141 lines)
@@ -0,0 +1,141 @@
import { For, createEffect, createMemo, createSignal, type Component } from "solid-js";
import Zippy from "../components/Zippy";
import { events } from "../stores/events";
import { scope } from "../stores/scope";
import type {
  BackendEventPayload,
  BrowserEvent,
  FrontendEventPayload,
  LogEventPayload,
} from "../types";

// DebugPanel is a collapsible rolling tail of recent events. Honors the
// current scope by default; a checkbox flips it into firehose mode.
const DebugPanel: Component = () => {
  const [firehose, setFirehose] = createSignal(false);
  const [paused, setPaused] = createSignal(false);
  const [frozen, setFrozen] = createSignal<BrowserEvent[]>([]);

  const filtered = createMemo(() => {
    const list = paused() ? frozen() : events();
    if (firehose()) return list;
    const s = scope();
    if (!s) return list;
    return list.filter((e) => e.maglevd === s);
  });

  const togglePause = () => {
    if (!paused()) {
      setFrozen([...events()]);
      setPaused(true);
    } else {
      setPaused(false);
    }
  };

  // Tail behavior: whenever the list grows (or we unpause), scroll the
  // event container to the bottom so the newest event stays visible. If
  // paused, leave the scroll position alone so the operator can read.
  let olRef: HTMLOListElement | undefined;
  createEffect(() => {
    filtered(); // track
    if (paused()) return;
    if (olRef) olRef.scrollTop = olRef.scrollHeight;
  });

  return (
    <Zippy title="Event stream">
      <ol class="event-tail" ref={olRef}>
        <For each={filtered()}>
          {(ev) => (
            <li class={`event-row event-${ev.type}`} classList={{ "event-sync": isSyncEvent(ev) }}>
              {formatEvent(ev)}
            </li>
          )}
        </For>
      </ol>
      <div class="debug-toolbar">
        <label>
          <input
            type="checkbox"
            checked={firehose()}
            onChange={(e) => setFirehose(e.currentTarget.checked)}
          />
          all maglevds
        </label>
        <button onClick={togglePause}>{paused() ? "resume" : "pause"}</button>
        <span class="count">{filtered().length} events</span>
      </div>
    </Zippy>
  );
};

export default DebugPanel;

function isSyncEvent(ev: BrowserEvent): boolean {
  if (ev.type !== "log") return false;
  const p = ev.payload as LogEventPayload;
  return p.msg.startsWith("vpp-lb-sync-");
}

// formatSyncAttrs renders vpp-lb-sync attributes in a fixed order so the
// event stream is easy to scan. Any key not explicitly listed is appended
// at the end preserving insertion order.
function formatSyncAttrs(attrs?: Record<string, string>): string {
  if (!attrs) return "";
  const order = [
    "vip",
    "protocol",
    "port",
    "address",
    "weight",
    "from",
    "to",
    "encap",
    "src-ip-sticky",
    "flush",
  ];
  const parts: string[] = [];
  const seen = new Set<string>();
  for (const k of order) {
    if (k in attrs) {
      parts.push(`${k}=${attrs[k]}`);
      seen.add(k);
    }
  }
  for (const [k, v] of Object.entries(attrs)) {
    if (!seen.has(k)) parts.push(`${k}=${v}`);
  }
  return parts.join(" ");
}
|
|
||||||
|
function formatEvent(ev: BrowserEvent): string {
|
||||||
|
const ts = new Date(ev.at_unix_ns / 1e6).toISOString().substring(11, 23);
|
||||||
|
const tag = `[${ev.maglevd}]`;
|
||||||
|
switch (ev.type) {
|
||||||
|
case "backend": {
|
||||||
|
const p = ev.payload as BackendEventPayload;
|
||||||
|
return `${ts} ${tag} backend ${p.backend}: ${p.transition.from} → ${p.transition.to}`;
|
||||||
|
}
|
||||||
|
case "frontend": {
|
||||||
|
const p = ev.payload as FrontendEventPayload;
|
||||||
|
return `${ts} ${tag} frontend ${p.frontend}: ${p.transition.from} → ${p.transition.to}`;
|
||||||
|
}
|
||||||
|
case "log": {
|
||||||
|
const p = ev.payload as LogEventPayload;
|
||||||
|
if (p.msg.startsWith("vpp-lb-sync-")) {
|
||||||
|
return `${ts} ${tag} ${p.msg} ${formatSyncAttrs(p.attrs)}`.trimEnd();
|
||||||
|
}
|
||||||
|
const attrs = p.attrs
|
||||||
|
? Object.entries(p.attrs)
|
||||||
|
.map(([k, v]) => `${k}=${v}`)
|
||||||
|
.join(" ")
|
||||||
|
: "";
|
||||||
|
return `${ts} ${tag} ${p.level} ${p.msg} ${attrs}`.trimEnd();
|
||||||
|
}
|
||||||
|
case "maglevd-status":
|
||||||
|
return `${ts} ${tag} maglevd status: ${JSON.stringify(ev.payload)}`;
|
||||||
|
default:
|
||||||
|
return `${ts} ${tag} ${ev.type}`;
|
||||||
|
}
|
||||||
|
}
|
||||||
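The fixed-order rendering in formatSyncAttrs is easy to check in isolation. A minimal standalone sketch of the same rule (known keys first in a stable order, leftovers in insertion order); the sample attribute values below are illustrative, not taken from a real event:

```typescript
// Same ordering rule as formatSyncAttrs: ORDER keys first, then leftovers.
const ORDER = ["vip", "protocol", "port", "address", "weight", "from", "to", "encap", "src-ip-sticky", "flush"];

function renderAttrs(attrs: Record<string, string>): string {
  const parts: string[] = [];
  const seen = new Set<string>();
  for (const k of ORDER) {
    if (k in attrs) {
      parts.push(`${k}=${attrs[k]}`);
      seen.add(k);
    }
  }
  // Any key not in ORDER is appended, preserving insertion order.
  for (const [k, v] of Object.entries(attrs)) {
    if (!seen.has(k)) parts.push(`${k}=${v}`);
  }
  return parts.join(" ");
}

// Keys arrive in arbitrary order but render predictably:
console.log(renderAttrs({ port: "80", vip: "10.0.0.1", scope: "vip", protocol: "tcp" }));
// → vip=10.0.0.1 protocol=tcp port=80 scope=vip
```

This is why the event tail stays scannable: two sync events for the same VIP always line up column-for-column regardless of how the backend serialized the attrs map.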
cmd/frontend/web/src/views/FrontendCard.tsx (new file, 66 lines)
@@ -0,0 +1,66 @@
import { For, type Component } from "solid-js";
import type { FrontendSnapshot, StateSnapshot } from "../types";
import BackendRow from "./BackendRow";

type Props = {
  snap: StateSnapshot;
  frontend: FrontendSnapshot;
};

const FrontendCard: Component<Props> = (props) => {
  const backendByName = () => Object.fromEntries(props.snap.backends.map((b) => [b.name, b]));
  const fe = () => props.frontend;

  return (
    <section class="frontend-card">
      <header class="frontend-header">
        <h2>{fe().name}</h2>
        <div class="frontend-meta">
          <span class="addr">
            {fe().address}:{fe().port}
          </span>
          <span class="proto">{fe().protocol.toUpperCase()}</span>
          {fe().src_ip_sticky && <span class="tag">sticky</span>}
        </div>
        {fe().description && <p class="frontend-desc">{fe().description}</p>}
      </header>

      <For each={fe().pools}>
        {(pool) => (
          <div class="pool-block">
            <h3 class="pool-name">pool: {pool.name}</h3>
            <table class="backend-table">
              <thead>
                <tr>
                  <th>backend</th>
                  <th>address</th>
                  <th>state</th>
                  <th class="numeric">weight</th>
                  <th class="numeric">effective</th>
                  <th>last transition</th>
                </tr>
              </thead>
              <tbody>
                <For each={pool.backends}>
                  {(pb) => {
                    const backend = backendByName()[pb.name];
                    if (!backend) return null;
                    return (
                      <BackendRow
                        maglevd={props.snap.maglevd.name}
                        backend={backend}
                        poolBackend={pb}
                      />
                    );
                  }}
                </For>
              </tbody>
            </table>
          </div>
        )}
      </For>
    </section>
  );
};

export default FrontendCard;
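FrontendCard joins each pool's membership to the full backend records through a name-keyed lookup table built with Object.fromEntries. The pattern in isolation (the backend names and addresses here are made up for illustration):

```typescript
type Backend = { name: string; address: string };

// Hypothetical data; in FrontendCard this comes from props.snap.backends.
const backends: Backend[] = [
  { name: "web-1", address: "192.0.2.10" },
  { name: "web-2", address: "192.0.2.11" },
];

// Name-keyed lookup: one O(1) access per pool row instead of a find() scan.
const byName: Record<string, Backend> = Object.fromEntries(backends.map((b) => [b.name, b]));

console.log(byName["web-2"].address); // → 192.0.2.11
console.log(byName["gone"]); // → undefined (FrontendCard returns null for such rows)
```

Pool entries whose backend name has no matching record are silently skipped, which keeps the table rendering robust while a resync is in flight.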
cmd/frontend/web/src/views/Overview.tsx (new file, 35 lines)
@@ -0,0 +1,35 @@
import { For, Show, type Component } from "solid-js";
import { scope } from "../stores/scope";
import { state } from "../stores/state";
import FrontendCard from "./FrontendCard";
import VPPInfoPanel from "./VPPInfoPanel";

const Overview: Component = () => {
  const snap = () => {
    const s = scope();
    return s ? state.byName[s] : undefined;
  };

  return (
    <main class="overview">
      <Show when={snap()} fallback={<p class="empty">No maglevd selected.</p>}>
        {(s) => (
          <>
            <Show when={!s().maglevd.connected}>
              <div class="banner warn">
                {s().maglevd.name} disconnected
                {s().maglevd.last_error && `: ${s().maglevd.last_error}`}
              </div>
            </Show>
            <div class="frontend-grid">
              <For each={s().frontends}>{(fe) => <FrontendCard snap={s()} frontend={fe} />}</For>
            </div>
            <VPPInfoPanel info={s().vpp_info} />
          </>
        )}
      </Show>
    </main>
  );
};

export default Overview;
cmd/frontend/web/src/views/VPPInfoPanel.tsx (new file, 30 lines)
@@ -0,0 +1,30 @@
import type { Component } from "solid-js";
import Zippy from "../components/Zippy";
import type { VPPInfoSnapshot } from "../types";

type Props = { info?: VPPInfoSnapshot };

const VPPInfoPanel: Component<Props> = (props) => {
  if (!props.info) return null;
  const i = props.info;
  const boot = i.boottime_ns ? new Date(i.boottime_ns / 1e6).toISOString() : "";
  const conn = i.connecttime_ns ? new Date(i.connecttime_ns / 1e6).toISOString() : "";
  return (
    <Zippy title="VPP information">
      <dl class="kv">
        <dt>version</dt>
        <dd>{i.version}</dd>
        <dt>build date</dt>
        <dd>{i.build_date}</dd>
        <dt>pid</dt>
        <dd>{i.pid}</dd>
        <dt>booted</dt>
        <dd>{boot}</dd>
        <dt>connected</dt>
        <dd>{conn}</dd>
      </dl>
    </Zippy>
  );
};

export default VPPInfoPanel;
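VPPInfoPanel turns nanosecond timestamps into wall-clock strings by dividing by 1e6 (ns → ms) before handing the value to Date. A minimal sketch of that conversion; note that nanosecond counts of this magnitude already exceed Number.MAX_SAFE_INTEGER, so sub-millisecond precision is lost the moment JSON parsing yields a Number — acceptable for a human-readable panel:

```typescript
// ns → ms → ISO 8601, matching the boottime_ns / connecttime_ns handling above.
function nsToISO(ns: number): string {
  return new Date(ns / 1e6).toISOString();
}

// Illustrative fixed timestamp, not a real VPP boot time.
const ns = Date.UTC(2024, 0, 1) * 1e6; // 2024-01-01T00:00:00Z as nanoseconds
console.log(nsToISO(ns)); // → 2024-01-01T00:00:00.000Z
```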
cmd/frontend/web/tsconfig.json (new file, 22 lines)
@@ -0,0 +1,22 @@
{
  "compilerOptions": {
    "target": "ES2020",
    "useDefineForClassFields": true,
    "module": "ESNext",
    "lib": ["ES2020", "DOM", "DOM.Iterable"],
    "skipLibCheck": true,
    "moduleResolution": "bundler",
    "allowImportingTsExtensions": true,
    "resolveJsonModule": true,
    "isolatedModules": true,
    "noEmit": true,
    "jsx": "preserve",
    "jsxImportSource": "solid-js",
    "strict": true,
    "noUnusedLocals": true,
    "noUnusedParameters": true,
    "noFallthroughCasesInSwitch": true,
    "types": ["vite/client"]
  },
  "include": ["src"]
}
cmd/frontend/web/vite.config.ts (new file, 23 lines)
@@ -0,0 +1,23 @@
import { defineConfig } from "vite";
import solid from "vite-plugin-solid";

// The app is served under /view/ by the Go binary (via http.StripPrefix),
// which means Vite must emit asset URLs rooted there. The same `base` also
// makes the local `npm run dev` server serve at /view/, while the proxy
// entries forward API calls through to a running frontend.
export default defineConfig({
  base: "/view/",
  plugins: [solid()],
  build: {
    outDir: "dist",
    emptyOutDir: true,
    target: "es2020",
  },
  server: {
    port: 5173,
    proxy: {
      "/view/api": "http://localhost:8080",
      "/admin": "http://localhost:8080",
    },
  },
});
@@ -41,10 +41,13 @@ special capabilities.
 All log output is written to stdout as JSON using Go's `log/slog`. The first
 line logged after the logger is configured is a `starting` record that includes
 `version`, `commit`, and `date`. Every state change emits a `backend-transition`
-line at `INFO` level. Set `--log-level debug` to see individual probe attempts,
-every VPP binary-API call (`vpp-api-send` / `vpp-api-recv` with full payload),
-and the per-VIP sync operations (`vpp-lbsync-vip-add`, `vpp-lbsync-as-weight`,
-etc.) as they happen.
+line at `INFO` level. Per-mutation VPP LB sync events
+(`vpp-lb-sync-vip-added`, `vpp-lb-sync-vip-removed`, `vpp-lb-sync-as-added`,
+`vpp-lb-sync-as-removed`, `vpp-lb-sync-as-weight-updated`) are also emitted
+at `INFO` so the CLI `watch events` stream and the web frontend see every
+dataplane change without raising the log level. Set `--log-level debug` to
+see individual probe attempts and every VPP binary-API call
+(`vpp-api-send` / `vpp-api-recv` with full payload) as they happen.
 
 ### Prometheus metrics
 
@@ -205,7 +205,7 @@ func (c *Client) lbSyncLoop(ctx context.Context) {
 		}
 
 		if err := c.SyncLBStateAll(cfg); err != nil {
-			slog.Warn("vpp-lbsync-error", "err", err)
+			slog.Warn("vpp-lb-sync-error", "err", err)
 		}
 		next = time.Now().Add(interval)
 	}
@@ -108,7 +108,7 @@ func (c *Client) SyncLBStateAll(cfg *config.Config) error {
 	}
 	defer ch.Close()
 
-	slog.Info("vpp-lbsync-start",
+	slog.Info("vpp-lb-sync-start",
 		"scope", "all",
 		"vips-desired", len(desired),
 		"vips-current", len(cur.VIPs))
@@ -150,7 +150,7 @@ func (c *Client) SyncLBStateAll(cfg *config.Config) error {
 	}
 
 	recordSyncStats("all", &st)
-	slog.Info("vpp-lbsync-done",
+	slog.Info("vpp-lb-sync-done",
 		"scope", "all",
 		"vip-added", st.vipAdd,
 		"vip-removed", st.vipDel,
@@ -190,10 +190,10 @@ func (c *Client) SyncLBStateVIP(cfg *config.Config, feName string) error {
 	}
 	defer ch.Close()
 
-	slog.Info("vpp-lbsync-start",
+	slog.Info("vpp-lb-sync-start",
 		"scope", "vip",
 		"frontend", feName,
-		"prefix", d.Prefix.String(),
+		"vip", d.Prefix.IP.String(),
 		"protocol", protocolName(d.Protocol),
 		"port", d.Port)
 
@@ -207,7 +207,7 @@ func (c *Client) SyncLBStateVIP(cfg *config.Config, feName string) error {
 		return err
 	}
 	recordSyncStats("vip", &st)
-	slog.Info("vpp-lbsync-done",
+	slog.Info("vpp-lb-sync-done",
 		"scope", "vip",
 		"frontend", feName,
 		"vip-added", st.vipAdd,
@@ -243,8 +243,8 @@ func reconcileVIP(ch *loggedChannel, d desiredVIP, cur *LBVIP, curSticky bool, s
 	}
 
 	if curSticky != d.SrcIPSticky {
-		slog.Info("vpp-lbsync-vip-recreate",
-			"prefix", d.Prefix.String(),
+		slog.Info("vpp-lb-sync-vip-recreate",
+			"vip", d.Prefix.IP.String(),
 			"protocol", protocolName(d.Protocol),
 			"port", d.Port,
 			"reason", "src-ip-sticky-changed",
@@ -277,7 +277,7 @@ func reconcileVIP(ch *loggedChannel, d desiredVIP, cur *LBVIP, curSticky bool, s
 		if _, keep := d.ASes[addr]; keep {
 			continue
 		}
-		if err := delAS(ch, cur.Prefix, cur.Protocol, cur.Port, a.Address); err != nil {
+		if err := delAS(ch, cur.Prefix, cur.Protocol, cur.Port, a.Address, a.Weight); err != nil {
 			return err
 		}
 		st.asDel++
@@ -299,7 +299,7 @@ func reconcileVIP(ch *loggedChannel, d desiredVIP, cur *LBVIP, curSticky bool, s
 		// (i.e. the backend was disabled, not merely drained). Steady-
 		// state syncs where weight doesn't change never re-flush.
 		flush := a.Flush && c.Weight > 0 && a.Weight == 0
-		if err := setASWeight(ch, d.Prefix, d.Protocol, d.Port, a, flush); err != nil {
+		if err := setASWeight(ch, d.Prefix, d.Protocol, d.Port, a, c.Weight, flush); err != nil {
 			return err
 		}
 		st.asWeight++
@@ -311,7 +311,7 @@ func reconcileVIP(ch *loggedChannel, d desiredVIP, cur *LBVIP, curSticky bool, s
 // removeVIP flushes all ASes from a VIP and then deletes the VIP itself.
 func removeVIP(ch *loggedChannel, v LBVIP, st *syncStats) error {
 	for _, as := range v.ASes {
-		if err := delAS(ch, v.Prefix, v.Protocol, v.Port, as.Address); err != nil {
+		if err := delAS(ch, v.Prefix, v.Protocol, v.Port, as.Address, as.Weight); err != nil {
 			return err
 		}
 		st.asDel++
@@ -441,8 +441,8 @@ func addVIP(ch *loggedChannel, d desiredVIP) error {
 	if reply.Retval != 0 {
 		return fmt.Errorf("lb_add_del_vip_v2 add %s: retval=%d", d.Prefix, reply.Retval)
 	}
-	slog.Debug("vpp-lbsync-vip-add",
-		"prefix", d.Prefix.String(),
+	slog.Info("vpp-lb-sync-vip-added",
+		"vip", d.Prefix.IP.String(),
 		"protocol", protocolName(d.Protocol),
 		"port", d.Port,
 		"encap", encapName(encap),
@@ -464,8 +464,8 @@ func delVIP(ch *loggedChannel, prefix *net.IPNet, protocol uint8, port uint16) e
 	if reply.Retval != 0 {
 		return fmt.Errorf("lb_add_del_vip_v2 del %s: retval=%d", prefix, reply.Retval)
 	}
-	slog.Debug("vpp-lbsync-vip-del",
-		"prefix", prefix.String(),
+	slog.Info("vpp-lb-sync-vip-removed",
+		"vip", prefix.IP.String(),
 		"protocol", protocolName(protocol),
 		"port", port)
 	return nil
@@ -487,8 +487,8 @@ func addAS(ch *loggedChannel, prefix *net.IPNet, protocol uint8, port uint16, a
 	if reply.Retval != 0 {
 		return fmt.Errorf("lb_add_del_as_v2 add %s@%s: retval=%d", a.Address, prefix, reply.Retval)
 	}
-	slog.Debug("vpp-lbsync-as-add",
-		"vip", prefix.String(),
+	slog.Info("vpp-lb-sync-as-added",
+		"vip", prefix.IP.String(),
 		"protocol", protocolName(protocol),
 		"port", port,
 		"address", a.Address.String(),
@@ -496,7 +496,7 @@ func addAS(ch *loggedChannel, prefix *net.IPNet, protocol uint8, port uint16, a
 	return nil
 }
 
-func delAS(ch *loggedChannel, prefix *net.IPNet, protocol uint8, port uint16, addr net.IP) error {
+func delAS(ch *loggedChannel, prefix *net.IPNet, protocol uint8, port uint16, addr net.IP, fromWeight uint8) error {
 	req := &lb.LbAddDelAsV2{
 		Pfx:      ip_types.NewAddressWithPrefix(*prefix),
 		Protocol: protocol,
@@ -512,15 +512,16 @@ func delAS(ch *loggedChannel, prefix *net.IPNet, protocol uint8, port uint16, ad
 	if reply.Retval != 0 {
 		return fmt.Errorf("lb_add_del_as_v2 del %s@%s: retval=%d", addr, prefix, reply.Retval)
 	}
-	slog.Debug("vpp-lbsync-as-del",
-		"vip", prefix.String(),
+	slog.Info("vpp-lb-sync-as-removed",
+		"vip", prefix.IP.String(),
 		"protocol", protocolName(protocol),
 		"port", port,
-		"address", addr.String())
+		"address", addr.String(),
+		"weight", fromWeight)
 	return nil
 }
 
-func setASWeight(ch *loggedChannel, prefix *net.IPNet, protocol uint8, port uint16, a desiredAS, flush bool) error {
+func setASWeight(ch *loggedChannel, prefix *net.IPNet, protocol uint8, port uint16, a desiredAS, fromWeight uint8, flush bool) error {
 	req := &lb.LbAsSetWeight{
 		Pfx:      ip_types.NewAddressWithPrefix(*prefix),
 		Protocol: protocol,
@@ -536,12 +537,13 @@ func setASWeight(ch *loggedChannel, prefix *net.IPNet, protocol uint8, port uint
 	if reply.Retval != 0 {
 		return fmt.Errorf("lb_as_set_weight %s@%s: retval=%d", a.Address, prefix, reply.Retval)
 	}
-	slog.Debug("vpp-lbsync-as-weight",
-		"vip", prefix.String(),
+	slog.Info("vpp-lb-sync-as-weight-updated",
+		"vip", prefix.IP.String(),
 		"protocol", protocolName(protocol),
 		"port", port,
 		"address", a.Address.String(),
-		"weight", a.Weight,
+		"from", fromWeight,
+		"to", a.Weight,
 		"flush", flush)
 	return nil
 }