vpp-maglev/internal/config/config_test.go
Pim van Pelt 4347bb9b05 Bug fixes, config validation, SPA tightening, set-weight UI
This session covers three distinct arcs: correctness bug fixes in the
VPP sync path and frontend reducers, new config validation, and a
large polish pass on the web frontend (tighter layout, backend kebab
dialogs, live grouped-table, live config-reload re-sync).

 - encap for a VIP is now derived from the backend address family,
   not the VIP's. A v6 VIP with v4 backends is programmed as IP6_GRE4
   (not the buggy IP6_GRE6), matching the VPP LB plugin's
   requirement that encap reflects the tunnel inner family. desiredVIP
   gained an Encap field populated in desiredFromFrontend.
 - ActivePoolIndex now requires at least one backend in a pool to be
   BOTH in StateUp AND pb.Weight>0 before the pool counts as active.
   Previously a primary pool with every backend manually zeroed would
   still win over a fallback with weight=100, so fallback traffic
   never materialized. New TestActivePoolIndexWeightedFailover table
   pins the rule in five subcases.
 - SyncLBStateVIP gained a flushAddress parameter threaded through
   reconcileVIP; it forces flush=true on the setASWeight call for a
   specific backend regardless of the usual 0→N heuristic. Wires up
   the explicit [flush] knob the CLI exposes.
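The active-pool rule above can be sketched as follows; `poolBackend` and `activePoolIndex` are illustrative stand-ins for the real vpp-maglev types, not the actual implementation:

```go
package main

import "fmt"

// poolBackend is an illustrative stand-in for the real backend type.
type poolBackend struct {
	Up     bool   // health state is StateUp
	Weight uint32 // configured weight (pb.Weight)
}

// activePoolIndex returns the first pool containing at least one backend
// that is both up AND has weight > 0, or -1 if none qualifies. A pool
// whose backends are all manually zeroed no longer counts as active.
func activePoolIndex(pools [][]poolBackend) int {
	for i, pool := range pools {
		for _, pb := range pool {
			if pb.Up && pb.Weight > 0 {
				return i
			}
		}
	}
	return -1
}

func main() {
	primary := []poolBackend{{Up: true, Weight: 0}}    // manually drained
	fallback := []poolBackend{{Up: true, Weight: 100}} // standby
	// Before the fix the drained primary would still win; now the
	// fallback carries traffic.
	fmt.Println(activePoolIndex([][]poolBackend{primary, fallback})) // 1
}
```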

 - convertFrontend already enforced that backends within one frontend
   share a family. New cross-frontend pass validateVIPFamilyConsistency
   rejects configs where two frontends share a VIP address but carry
   backends in different families — VPP's LB plugin requires every
   VIP on a prefix to have the same encap type, so such a config
   would fail at lb_add_del_vip_v2 time with
   VNET_API_ERROR_INVALID_ARGUMENT (-73). Catching it at config load
   runtime failure into a clear startup error.
 - Two new TestValidationErrors cases pin the behavior: mismatched
   families are rejected; same-family frontends on one VIP address
   are allowed.

 - Proto adds `bool flush = 5` to SetWeightRequest. The RPC now
   drives a VIP sync immediately after mutating config (fixing the
   latent "weight change only takes effect at the next 30s periodic
   reconcile" gap), passing flushAddress = backend IP when req.Flush
   is true.
 - maglevc grows an optional [flush] token: `set frontend F pool P
   backend B weight N [flush]`. Implementation uses two Run closures
   (runSetFrontendPoolBackendWeight and -Flush) because the tree
   walker only puts slot tokens in args — literal keywords like
   `flush` advance the node but don't appear in the arg list.
 - docs/user-guide.md updated with the [flush] optional and a
   three-paragraph explainer of the graceful-drain vs. flush
   semantics at the VPP level.

 - checker.ListFrontends now sorts alphabetically to match the
   existing sort in ListBackends / ListHealthChecks — RPC responses
   no longer shuffle VIPs per call. cmd/frontend/client.go also
   sorts defensively in refreshAll so an old maglevd build renders
   alphabetically too.
 - backendFromProto returned out.Transitions[n-1] as LastTransition,
   but maglevd stores (and the proto carries) transitions
   newest-first, so [n-1] was actually the oldest. The client now
   reverses on read, normalizing its Transitions slice to
   oldest-first so [n-1] is genuinely the newest and LastTransition
   points at the actual latest record.
 - applyBackendTransition (Go and TS) derives Enabled = state!="disabled"
   so the two fields stay in lockstep — closed a drift window where
   a recently re-enabled backend still rendered with a stuck
   [disabled] tag. The tag was later removed entirely since state
   and enabled carry the same information.
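The reverse-on-read fix reduces to an in-place slice reversal; this generic helper is a sketch, not the repo's code:

```go
package main

import "fmt"

// reverse flips a slice in place. The proto carries transitions
// newest-first; reversing on read makes the client slice oldest-first,
// so index len-1 is genuinely the latest transition.
func reverse[T any](s []T) {
	for i, j := 0, len(s)-1; i < j; i, j = i+1, j-1 {
		s[i], s[j] = s[j], s[i]
	}
}

func main() {
	// As they arrive on the wire: newest first.
	transitions := []string{"12:02 up", "12:01 down", "12:00 up"}
	reverse(transitions)
	fmt.Println(transitions[len(transitions)-1]) // 12:02 up: the actual newest
}
```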

 - Layout tightened substantially: "FRONTENDS" panel header removed,
   zippy-summary and zippy-body paddings cut, backend-table row
   padding dropped to 2px, per-pool <h3> removed. Pools now live in
   a single consolidated table per frontend with a dedicated "pool"
   column that shows the pool name only on the first row of each
   group — classic grouped-table layout, maximally dense.
 - Description moved inline into the Zippy summary as muted italic
   text, freeing a vertical line per frontend card.
 - formatVIPAddress() helper renders IPv6 VIPs as [addr]:port and
   IPv4 as addr:port, matching RFC 3986 authority syntax.
 - Pools with effective_weight=0 on every backend (standby
   fallbacks, fully-drained primaries) render at opacity 0.35 on
   their non-actions cells; the kebab column stays at full contrast
   because its menu is still fully functional on standby backends.
 - Config-reload propagation: a maglevd config-reload-done log
   event triggers triggerConfigResync() on the frontend side —
   refreshAll() runs off the event-dispatch goroutine, then a
   BrowserEvent{Type:"resync"} is published through the broker.
   writeEvent emits type="resync" as a named SSE frame so the
   SPA's existing addEventListener("resync") handler picks it up
   and calls fetchAllState → replaceAll.
 - recomputeEffectiveWeights in stores/state.ts mirrors the
   server-side health.EffectiveWeights logic so the SPA keeps
   pool.effective_weight correct the moment a backend transitions,
   without waiting for the 30s refresh. Fixed a nasty bug where
   applyBackendEffectiveWeight wrote VIP-scoped vpp-lb-sync-as-*
   event weights into every frontend sharing the backend,
   corrupting frontends with different per-pool configured weights.
   The old log-event reducer was removed; applyConfiguredWeight is
   the narrower replacement used by the kebab set-weight flow.
 - applyBackendTransition calls recomputeEffectiveWeights after
   state updates so pool-failover transitions (primary ⇌ fallback)
   reflect instantly in the UI.

 - Confirmation dialogs via a new Modal primitive
   (Portal-mounted to document.body, escape/click-outside close,
   click-outside debounced on mousedown so mid-row-text-selection
   drags don't dismiss).
 - pause/resume/enable/disable each show a Modal with a consequence
   paragraph explaining what hits live traffic ("will keep existing
   flows", "will flush VPP's flow table", etc.). The disable commit
   button is styled btn-danger red.
 - set-weight action shows a Modal with a range slider (0-100,
   seeded from the current configured weight, accent-colored live
   numeric readout via <output>) plus a flush checkbox and a live-
   swapping note/warn paragraph describing what will happen. On
   commit, the SPA also updates its local store via
   applyConfiguredWeight so the operator sees the new weight
   immediately without waiting for the next refresh.

 - ProbeHeartbeat is now state-aware: ▶ (play) at rest for up/
   down/unknown backends, ⏸ (pause) for paused, ⏹ (stop) for
   disabled/removed, ❤️ (heart) during an in-flight probe.
 - Drop the probe-done event listener — fast probes (<10ms)
   could fire probe-done in the same render tick as probe-start
   and the heart would never visibly paint. Each probe-start now
   runs a fixed 400ms scale-pop animation on a timer; subsequent
   probe-start events reset the timer, so fast cadences produce a
   continuous heart pulse.
 - Fixed wrapper box (16x14 px, overflow hidden) so the row
   doesn't jiggle when the glyph swaps between the narrow ▶/⏸/⏹
   text glyphs and the wider ❤️ emoji.

 - Brand wordmark changed from "maglev" to "vpp-maglev" and wrapped
   in an <a> linking to https://git.ipng.ch/ipng/vpp-maglev. Logo
   link changed to https://ipng.ch/. Both open in a new tab with
   rel="noopener".
 - .gitignore fix: `frontend`, `maglevc`, `maglevd` were matching
   ANY file or directory with those names anywhere in the tree,
   silently ignoring cmd/frontend and friends. Anchored with
   leading slashes so only repo-root build artifacts match.
2026-04-12 23:06:42 +02:00


// Copyright (c) 2026, Pim van Pelt <pim@ipng.ch>

package config

import (
	"testing"
	"time"
)
const validConfig = `
maglev:
  healthchecker:
    transition-history: 5
    netns: dataplane
  vpp:
    lb:
      ipv4-src-address: 10.0.0.1
      ipv6-src-address: 2001:db8::1
  healthchecks:
    http-check:
      type: http
      port: 80
      probe-ipv4-src: 10.0.0.1
      params:
        path: /healthz
        host: example.com
        response-code: "200"
      interval: 2s
      timeout: 3s
      rise: 2
      fall: 3
    icmp-check:
      type: icmp
      probe-ipv6-src: 2001:db8:1::1
      interval: 1s
      timeout: 3s
      fall: 5
  backends:
    be-v4:
      address: 192.0.2.10
      healthcheck: http-check
    be-v6a:
      address: 2001:db8:2::1
      healthcheck: icmp-check
    be-v6b:
      address: 2001:db8:2::2
      healthcheck: icmp-check
      enabled: true
  frontends:
    web4:
      description: "IPv4 VIP"
      address: 192.0.2.1
      protocol: tcp
      port: 80
      pools:
        - name: primary
          backends:
            be-v4: {}
    web6:
      description: "IPv6 VIP"
      address: 2001:db8::1
      protocol: tcp
      port: 443
      pools:
        - name: primary
          backends:
            be-v6a:
              weight: 100
            be-v6b:
              weight: 50
`
func TestValidConfig(t *testing.T) {
	cfg, err := parse([]byte(validConfig))
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	if cfg.HealthChecker.Netns != "dataplane" {
		t.Errorf("healthchecker.netns: got %q, want dataplane", cfg.HealthChecker.Netns)
	}
	if cfg.HealthChecker.TransitionHistory != 5 {
		t.Errorf("transition-history: got %d, want 5", cfg.HealthChecker.TransitionHistory)
	}
	if len(cfg.Frontends) != 2 {
		t.Fatalf("frontends: got %d, want 2", len(cfg.Frontends))
	}

	hc := cfg.HealthChecks["http-check"]
	if hc.Type != "http" {
		t.Errorf("http-check type: got %q, want http", hc.Type)
	}
	if hc.Fall != 3 || hc.Rise != 2 {
		t.Errorf("http-check fall/rise: got %d/%d, want 3/2", hc.Fall, hc.Rise)
	}
	if hc.ProbeIPv4Src.String() != "10.0.0.1" {
		t.Errorf("http-check probe-ipv4-src: got %s, want 10.0.0.1", hc.ProbeIPv4Src)
	}
	if hc.HTTP == nil {
		t.Fatal("http-check HTTP params should not be nil")
	}
	if hc.HTTP.Path != "/healthz" {
		t.Errorf("http-check path: got %q, want /healthz", hc.HTTP.Path)
	}
	if hc.HTTP.Host != "example.com" {
		t.Errorf("http-check host: got %q, want example.com", hc.HTTP.Host)
	}
	if hc.HTTP.ResponseCodeMin != 200 || hc.HTTP.ResponseCodeMax != 200 {
		t.Errorf("http-check response-code: got %d-%d, want 200-200",
			hc.HTTP.ResponseCodeMin, hc.HTTP.ResponseCodeMax)
	}

	icmp := cfg.HealthChecks["icmp-check"]
	if icmp.Fall != 5 {
		t.Errorf("icmp-check fall: got %d, want 5", icmp.Fall)
	}
	if icmp.ProbeIPv6Src.String() != "2001:db8:1::1" {
		t.Errorf("icmp-check probe-ipv6-src: got %s, want 2001:db8:1::1", icmp.ProbeIPv6Src)
	}

	// Backend fields.
	beV4 := cfg.Backends["be-v4"]
	if beV4.Address.String() != "192.0.2.10" {
		t.Errorf("be-v4 address: got %s", beV4.Address)
	}
	if beV4.HealthCheck != "http-check" {
		t.Errorf("be-v4 healthcheck: got %q", beV4.HealthCheck)
	}
	if !beV4.Enabled {
		t.Error("be-v4 enabled: want true (default)")
	}

	// Pool structure.
	web4 := cfg.Frontends["web4"]
	if len(web4.Pools) != 1 || web4.Pools[0].Name != "primary" {
		t.Errorf("web4 pools: got %v", web4.Pools)
	}
	if _, ok := web4.Pools[0].Backends["be-v4"]; !ok {
		t.Error("web4 primary pool missing be-v4")
	}
	if web4.Pools[0].Backends["be-v4"].Weight != 100 {
		t.Errorf("web4 be-v4 weight: got %d, want 100 (default)", web4.Pools[0].Backends["be-v4"].Weight)
	}

	web6 := cfg.Frontends["web6"]
	if len(web6.Pools) != 1 || len(web6.Pools[0].Backends) != 2 {
		t.Errorf("web6 pools[0] backends: got %d, want 2", len(web6.Pools[0].Backends))
	}
	if web6.Pools[0].Backends["be-v6b"].Weight != 50 {
		t.Errorf("web6 be-v6b weight: got %d, want 50", web6.Pools[0].Backends["be-v6b"].Weight)
	}
}
func TestDefaults(t *testing.T) {
	raw := `
maglev:
  vpp:
    lb:
      ipv4-src-address: 10.0.0.1
      ipv6-src-address: 2001:db8::1
  healthchecks:
    icmp:
      type: icmp
      interval: 1s
      timeout: 2s
  backends:
    be:
      address: 10.0.0.2
      healthcheck: icmp
  frontends:
    v:
      address: 192.0.2.1
      pools:
        - name: primary
          backends:
            be: {}
`
	cfg, err := parse([]byte(raw))
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	if cfg.HealthChecker.Netns != "" {
		t.Errorf("default netns: got %q, want empty", cfg.HealthChecker.Netns)
	}
	if cfg.HealthChecker.TransitionHistory != 5 {
		t.Errorf("default transition-history: got %d, want 5", cfg.HealthChecker.TransitionHistory)
	}
	hc := cfg.HealthChecks["icmp"]
	if hc.Rise != 2 || hc.Fall != 3 {
		t.Errorf("defaults rise/fall: got %d/%d, want 2/3", hc.Rise, hc.Fall)
	}
	be := cfg.Backends["be"]
	if !be.Enabled {
		t.Errorf("backend default enabled: got false, want true")
	}
	// Pool backend weight defaults to 100.
	v := cfg.Frontends["v"]
	if v.Pools[0].Backends["be"].Weight != 100 {
		t.Errorf("pool backend default weight: got %d, want 100", v.Pools[0].Backends["be"].Weight)
	}
}
func TestBackendNoHealthcheck(t *testing.T) {
	// A backend with no healthcheck reference is valid; the probe is skipped.
	raw := `
maglev:
  vpp:
    lb:
      ipv4-src-address: 10.0.0.1
      ipv6-src-address: 2001:db8::1
  healthchecks: {}
  backends:
    be:
      address: 10.0.0.2
  frontends:
    v:
      address: 192.0.2.1
      pools:
        - name: primary
          backends:
            be: {}
`
	cfg, err := parse([]byte(raw))
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	if cfg.Backends["be"].HealthCheck != "" {
		t.Error("expected empty healthcheck")
	}
}
func TestOptionalIntervals(t *testing.T) {
	raw := `
maglev:
  vpp:
    lb:
      ipv4-src-address: 10.0.0.1
      ipv6-src-address: 2001:db8::1
  healthchecks:
    icmp:
      type: icmp
      interval: 2s
      fast-interval: 500ms
      down-interval: 30s
      timeout: 1s
  backends:
    be:
      address: 10.0.0.2
      healthcheck: icmp
  frontends:
    v:
      address: 192.0.2.1
      pools:
        - name: primary
          backends:
            be: {}
`
	cfg, err := parse([]byte(raw))
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	hc := cfg.HealthChecks["icmp"]
	if hc.Interval != 2*time.Second {
		t.Errorf("interval: got %v, want 2s", hc.Interval)
	}
	if hc.FastInterval != 500*time.Millisecond {
		t.Errorf("fast-interval: got %v, want 500ms", hc.FastInterval)
	}
	if hc.DownInterval != 30*time.Second {
		t.Errorf("down-interval: got %v, want 30s", hc.DownInterval)
	}
}
func TestValidationErrors(t *testing.T) {
	base := func(hcExtra, beExtra, feExtra string) string {
		return `
maglev:
  vpp:
    lb:
      ipv4-src-address: 10.0.0.1
      ipv6-src-address: 2001:db8::1
  healthchecks:
    c:
      type: icmp
      interval: 1s
      timeout: 2s
` + hcExtra + `
  backends:
    be:
      address: 10.0.0.2
      healthcheck: c
` + beExtra + `
  frontends:
    v:
      address: 192.0.2.1
      pools:
        - name: primary
          backends:
            be: {}
` + feExtra
	}
	tests := []struct {
		name   string
		yaml   string
		errSub string
	}{
		{
			name:   "wrong family probe-ipv4-src",
			yaml:   base("      probe-ipv4-src: 2001:db8::1\n", "", ""),
			errSub: "probe-ipv4-src",
		},
		{
			name: "mixed backend address families in pool",
			yaml: `
maglev:
  vpp:
    lb:
      ipv4-src-address: 10.0.0.1
      ipv6-src-address: 2001:db8::1
  healthchecks:
    c:
      type: icmp
      interval: 1s
      timeout: 2s
  backends:
    v4: {address: 10.0.0.2, healthcheck: c}
    v6: {address: 2001:db8::1, healthcheck: c}
  frontends:
    v:
      address: 192.0.2.1
      pools:
        - name: primary
          backends:
            v4: {}
            v6: {}
`,
			errSub: "address family",
		},
		{
			name:   "port without protocol",
			yaml:   base("", "", "      port: 80\n"),
			errSub: "port requires protocol",
		},
		{
			name: "protocol without port",
			yaml: `
maglev:
  healthchecks:
    c:
      type: icmp
      interval: 1s
      timeout: 2s
  backends:
    be: {address: 10.0.0.2, healthcheck: c}
  frontends:
    v:
      address: 192.0.2.1
      protocol: tcp
      pools:
        - name: primary
          backends:
            be: {}
`,
			errSub: "requires port",
		},
		{
			name: "invalid healthcheck type",
			yaml: `
maglev:
  healthchecks:
    c:
      type: dns
      interval: 1s
      timeout: 2s
  backends:
    be: {address: 10.0.0.2, healthcheck: c}
  frontends:
    v:
      address: 192.0.2.1
      pools:
        - name: primary
          backends:
            be: {}
`,
			errSub: "type must be",
		},
		{
			name: "http missing path",
			yaml: `
maglev:
  healthchecks:
    c:
      type: http
      port: 80
      interval: 1s
      timeout: 2s
  backends:
    be: {address: 10.0.0.2, healthcheck: c}
  frontends:
    v:
      address: 192.0.2.1
      pools:
        - name: primary
          backends:
            be: {}
`,
			errSub: "params.path",
		},
		{
			name:   "no error case",
			yaml:   base("", "", ""),
			errSub: "",
		},
		{
			name: "undefined healthcheck reference",
			yaml: `
maglev:
  healthchecks: {}
  backends:
    be: {address: 10.0.0.2, healthcheck: missing}
  frontends:
    v:
      address: 192.0.2.1
      pools:
        - name: primary
          backends:
            be: {}
`,
			errSub: "not defined",
		},
		{
			name: "undefined backend reference in pool",
			yaml: `
maglev:
  healthchecks:
    c:
      type: icmp
      interval: 1s
      timeout: 2s
  backends: {}
  frontends:
    v:
      address: 192.0.2.1
      pools:
        - name: primary
          backends:
            missing: {}
`,
			errSub: "not defined",
		},
		{
			name: "pool weight out of range",
			yaml: `
maglev:
  healthchecks:
    c:
      type: icmp
      interval: 1s
      timeout: 2s
  backends:
    be: {address: 10.0.0.2, healthcheck: c}
  frontends:
    v:
      address: 192.0.2.1
      pools:
        - name: primary
          backends:
            be:
              weight: 150
`,
			errSub: "out of range",
		},
		{
			name:   "fall zero becomes default",
			yaml:   base("      fall: 0\n", "", ""),
			errSub: "",
		},
		{
			name: "tcp missing port",
			yaml: `
maglev:
  healthchecks:
    c:
      type: tcp
      interval: 1s
      timeout: 2s
  backends:
    be: {address: 10.0.0.2, healthcheck: c}
  frontends:
    v:
      address: 192.0.2.1
      pools:
        - name: primary
          backends:
            be: {}
`,
			errSub: "requires port",
		},
		{
			name: "http missing port",
			yaml: `
maglev:
  healthchecks:
    c:
      type: http
      interval: 1s
      timeout: 2s
      params:
        path: /
  backends:
    be: {address: 10.0.0.2, healthcheck: c}
  frontends:
    v:
      address: 192.0.2.1
      pools:
        - name: primary
          backends:
            be: {}
`,
			errSub: "requires port",
		},
		{
			name: "empty pools",
			yaml: `
maglev:
  healthchecks:
    c:
      type: icmp
      interval: 1s
      timeout: 2s
  backends:
    be: {address: 10.0.0.2, healthcheck: c}
  frontends:
    v:
      address: 192.0.2.1
      pools: []
`,
			errSub: "pools must not be empty",
		},
		{
			name: "pool missing name",
			yaml: `
maglev:
  healthchecks:
    c:
      type: icmp
      interval: 1s
      timeout: 2s
  backends:
    be: {address: 10.0.0.2, healthcheck: c}
  frontends:
    v:
      address: 192.0.2.1
      pools:
        - backends:
            be: {}
`,
			errSub: "name must not be empty",
		},
		{
			// Regression: VPP's LB plugin requires every VIP sharing
			// a prefix to use the same encap type. Two frontends on
			// the same VIP address with mismatched backend families
			// can't both be programmed; catch it at config load so
			// the operator doesn't see a late vpp-reconciler-error.
			name: "cross-frontend VIP family mismatch",
			yaml: `
maglev:
  vpp:
    lb:
      ipv4-src-address: 10.0.0.1
      ipv6-src-address: 2001:db8::10
  healthchecks:
    c:
      type: icmp
      interval: 1s
      timeout: 2s
  backends:
    v4: {address: 10.0.0.2, healthcheck: c}
    v6: {address: 2001:db8::2, healthcheck: c}
  frontends:
    web:
      address: 2001:db8::1
      protocol: tcp
      port: 443
      pools:
        - name: primary
          backends:
            v4: {}
    mail:
      address: 2001:db8::1
      protocol: tcp
      port: 993
      pools:
        - name: primary
          backends:
            v6: {}
`,
			errSub: "VIP address 2001:db8::1",
		},
		{
			// Sanity: two frontends sharing a VIP address with
			// matching backend families is fine — VPP's constraint
			// is about encap consistency, not about address reuse.
			name: "cross-frontend VIP address share with same family is allowed",
			yaml: `
maglev:
  vpp:
    lb:
      ipv4-src-address: 10.0.0.1
      ipv6-src-address: 2001:db8::10
  healthchecks:
    c:
      type: icmp
      interval: 1s
      timeout: 2s
  backends:
    v6a: {address: 2001:db8::2, healthcheck: c}
    v6b: {address: 2001:db8::3, healthcheck: c}
  frontends:
    web:
      address: 2001:db8::1
      protocol: tcp
      port: 443
      pools:
        - name: primary
          backends:
            v6a: {}
    mail:
      address: 2001:db8::1
      protocol: tcp
      port: 993
      pools:
        - name: primary
          backends:
            v6b: {}
`,
			errSub: "",
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			_, err := parse([]byte(tt.yaml))
			if tt.errSub == "" {
				if err != nil {
					t.Fatalf("expected no error, got: %v", err)
				}
				return
			}
			if err == nil {
				t.Fatalf("expected error containing %q, got nil", tt.errSub)
			}
			if !contains(err.Error(), tt.errSub) {
				t.Errorf("error %q does not contain %q", err.Error(), tt.errSub)
			}
		})
	}
}
// contains reports whether sub occurs within s.
func contains(s, sub string) bool {
	for i := 0; i+len(sub) <= len(s); i++ {
		if s[i:i+len(sub)] == sub {
			return true
		}
	}
	return false
}