Bug fixes, config validation, SPA tightening, set-weight UI

This session covers three distinct arcs: correctness bug fixes in the
VPP sync path and frontend reducers, new config validation, and a
large polish pass on the web frontend (tighter layout, backend kebab
dialogs, a live grouped pool table, live config-reload re-sync).

 - encap for a VIP is now derived from the backend address family,
   not the VIP's. A v6 VIP with v4 backends is programmed as IP6_GRE4
   (not the buggy IP6_GRE6), matching the VPP LB plugin's requirement
   that the encap track the application-server (backend) family — the
   GRE tunnel is built toward the backends, not the VIP. desiredVIP
   gained an Encap field populated in desiredFromFrontend (a sketch of
   the family→encap mapping follows this list).
 - ActivePoolIndex now requires at least one backend in a pool to be
   BOTH in StateUp AND pb.Weight>0 before the pool counts as active.
   Previously a primary pool with every backend manually zeroed would
   still win over a fallback with weight=100, so fallback traffic
   never materialized. New TestActivePoolIndexWeightedFailover table
   pins the rule in five subcases.
 - SyncLBStateVIP gained a flushAddress parameter threaded through
   reconcileVIP; it forces flush=true on the setASWeight call for a
   specific backend regardless of the usual non-zero→zero heuristic.
   This wires up the explicit [flush] knob the CLI exposes.
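
A minimal sketch of the family→encap mapping, for reference. The real
helper is encapForIP in internal/vpp/lbsync.go; this is a fragment —
lb_types is the govpp-generated LB binapi package that file already
imports, and the GRE4/GRE6 constant names below are assumptions, not
verified against the generated code:

```go
// encapForIP picks the GRE encap from an address family: IPv4 backends
// are reached over GRE4, IPv6 backends over GRE6. The same helper also
// provides the VIP-family fallback used when a frontend has no
// resolvable backends (see desiredFromFrontend).
func encapForIP(ip net.IP) lb_types.LbEncapType {
	if ip.To4() != nil {
		return lb_types.LB_API_ENCAP_TYPE_GRE4 // assumed constant name
	}
	return lb_types.LB_API_ENCAP_TYPE_GRE6 // assumed constant name
}
```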

 - convertFrontend already enforced that backends within one frontend
   share a family. New cross-frontend pass validateVIPFamilyConsistency
   rejects configs where two frontends share a VIP address but carry
   backends in different families — VPP's LB plugin requires every
   VIP on a prefix to have the same encap type, so such a config
   would fail at lb_add_del_vip_v2 time with
   VNET_API_ERROR_INVALID_ARGUMENT (-73). Catching it at config load
   turns a silent runtime failure into a clear startup error.
 - Two new TestValidationErrors cases pin the behavior: mismatched
   families are rejected; same-family frontends sharing one VIP
   address are allowed.

 - Proto adds `bool flush = 5` to SetWeightRequest. The RPC now
   drives a VIP sync immediately after mutating config (fixing the
   latent "weight change only takes effect at the next 30s periodic
   reconcile" gap), passing flushAddress = backend IP when req.Flush
   is true (a caller-side sketch follows this list).
 - maglevc grows an optional [flush] token: `set frontend F pool P
   backend B weight N [flush]`. Implementation uses two Run closures
   (runSetFrontendPoolBackendWeight and -Flush) because the tree
   walker only puts slot tokens in args — literal keywords like
   `flush` advance the node but don't appear in the arg list.
 - docs/user-guide.md updated with the optional [flush] token and a
   three-paragraph explainer of the graceful-drain vs. flush
   semantics at the VPP level.
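
For callers of the RPC, the new field composes like this — a
hypothetical caller-side sketch (the SetWeightRequest fields and the
SetFrontendPoolBackendWeight method name come from the generated code
below; the pb package alias and the MaglevClient interface name are
assumptions):

```go
// setWeightWithFlush drops a backend to a new weight and, because
// Flush is set, also tears down its existing VPP flows instead of
// letting them drain gracefully.
func setWeightWithFlush(ctx context.Context, c pb.MaglevClient, fe, pool, be string, w int32) error {
	_, err := c.SetFrontendPoolBackendWeight(ctx, &pb.SetWeightRequest{
		Frontend: fe,
		Pool:     pool,
		Backend:  be,
		Weight:   w,
		Flush:    true, // false (default) = graceful drain, no flow-table flush
	})
	return err
}
```

The equivalent maglevc invocation would be along the lines of
`set frontend web pool primary backend www1 weight 0 flush` (names
hypothetical).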

 - checker.ListFrontends now sorts alphabetically to match the
   existing sort in ListBackends / ListHealthChecks — RPC responses
   no longer shuffle VIPs per call. cmd/frontend/client.go also
   sorts defensively in refreshAll so an old maglevd build renders
   alphabetically too.
 - backendFromProto was returning out.Transitions[n-1] as the
   LastTransition, but maglevd stores (and the proto carries)
   transitions newest-first, so [n-1] was actually the oldest. The
   slice is now reversed on read, which normalizes the client's
   Transitions to oldest-first and makes [n-1] genuinely the newest;
   LastTransition now points at the actual latest transition record
   (see the sketch after this list).
 - applyBackendTransition (Go and TS) derives Enabled = state!="disabled"
   so the two fields stay in lockstep — closed a drift window where
   a recently re-enabled backend still rendered with a stuck
   [disabled] tag. The tag was later removed entirely since state
   and enabled carry the same information.
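
A rough sketch of the read-side normalization (the proto message,
local struct, and helper names are assumptions; the real code lives in
cmd/frontend/client.go). The Enabled = state != "disabled" derivation
from the bullet above is shown inline:

```go
// backendFromProto converts the wire representation (transitions
// newest-first) into the client's oldest-first convention, so that
// Transitions[n-1] — and therefore LastTransition — is genuinely the
// newest record. Enabled is derived from state so the two can't drift.
func backendFromProto(in *pb.BackendInfo) backend {
	out := backend{
		Name:    in.Name,
		State:   in.State,
		Enabled: in.State != "disabled", // keep Enabled in lockstep with state
	}
	for i := len(in.Transitions) - 1; i >= 0; i-- { // reverse on read
		out.Transitions = append(out.Transitions, transitionFromProto(in.Transitions[i]))
	}
	if n := len(out.Transitions); n > 0 {
		out.LastTransition = out.Transitions[n-1]
	}
	return out
}
```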

 - Layout tightened substantially: "FRONTENDS" panel header removed,
   zippy-summary and zippy-body paddings cut, backend-table row
   padding dropped to 2px, per-pool <h3> removed. Pools now live in
   a single consolidated table per frontend with a dedicated "pool"
   column that shows the pool name only on the first row of each
   group — classic grouped-table layout, maximally dense.
 - Description moved inline into the Zippy summary as muted italic
   text, freeing a vertical line per frontend card.
 - formatVIPAddress() helper renders IPv6 VIPs as [addr]:port and
   IPv4 as addr:port, matching RFC 3986 authority syntax.
 - Pools with effective_weight=0 on every backend (standby
   fallbacks, fully-drained primaries) render at opacity 0.35 on
   their non-actions cells; the kebab column stays at full contrast
   because its menu is still fully functional on standby backends.
 - Config-reload propagation: a maglevd config-reload-done log
   event triggers triggerConfigResync() on the frontend side —
   refreshAll() runs off the event-dispatch goroutine, then a
   BrowserEvent{Type:"resync"} is published through the broker.
   writeEvent emits type="resync" as a named SSE frame so the
   SPA's existing addEventListener("resync") handler picks it up
   and calls fetchAllState → replaceAll (see the SSE sketch after
   this list).
 - recomputeEffectiveWeights in stores/state.ts mirrors the
   server-side health.EffectiveWeights logic so the SPA keeps
   pool.effective_weight correct the moment a backend transitions,
   without waiting for the 30s refresh. Fixed a nasty bug where
   applyBackendEffectiveWeight wrote VIP-scoped vpp-lb-sync-as-*
   event weights into every frontend sharing the backend,
   corrupting frontends with different per-pool configured weights.
   The old log-event reducer was removed; applyConfiguredWeight is
   the narrower replacement used by the kebab set-weight flow.
 - applyBackendTransition calls recomputeEffectiveWeights after
   state updates so pool-failover transitions (primary ⇌ fallback)
   reflect instantly in the UI.
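
The server side of that resync hop is plain SSE; a minimal sketch,
assuming the BrowserEvent shape beyond Type, the empty-object data
payload, and that the broker delivers each event to the connected
client's writeEvent (only the Type field and the "resync" event name
are taken from the change itself):

```go
// BrowserEvent is fanned out by the broker to every connected SPA.
type BrowserEvent struct {
	Type string // e.g. "resync"
}

// writeEvent emits one named SSE frame. A frame of the form
// "event: resync\ndata: ...\n\n" is exactly what the SPA's existing
// addEventListener("resync") handler fires on, triggering
// fetchAllState → replaceAll.
func writeEvent(w http.ResponseWriter, ev BrowserEvent) {
	fmt.Fprintf(w, "event: %s\ndata: {}\n\n", ev.Type)
	if f, ok := w.(http.Flusher); ok {
		f.Flush() // deliver immediately; SSE clients parse frame-by-frame
	}
}
```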

 - Confirmation dialogs via a new Modal primitive
   (Portal-mounted to document.body, escape/click-outside close,
   click-outside debounced on mousedown so mid-row-text-selection
   drags don't dismiss).
 - pause/resume/enable/disable each show a Modal with a consequence
   paragraph explaining what hits live traffic ("will keep existing
   flows", "will flush VPP's flow table", etc.). The disable commit
   button is styled btn-danger red.
 - set-weight action shows a Modal with a range slider (0-100,
   seeded from the current configured weight, accent-colored live
   numeric readout via <output>) plus a flush checkbox and a live-
   swapping note/warn paragraph describing what will happen. On
   commit, the SPA also updates its local store via
   applyConfiguredWeight so the operator sees the new weight
   immediately without waiting for the next refresh.

 - ProbeHeartbeat is now state-aware: ▶ (play) at rest for up/
   down/unknown backends, ⏸ (pause) for paused, ⏹ (stop) for
   disabled/removed, ❤️ (heart) during an in-flight probe.
 - Drop the probe-done event listener — fast probes (<10ms)
   could fire probe-done in the same render tick as probe-start
   and the heart would never visibly paint. Each probe-start now
   runs a fixed 400ms scale-pop animation on a timer; subsequent
   probe-start events reset the timer, so fast cadences produce a
   continuous heart pulse.
 - Fixed-size wrapper box (16x14 px, overflow: hidden) so the row
   doesn't jiggle when the glyph swaps between the narrow ▶/⏸/⏹
   text glyphs and the wider ❤️ emoji.

 - Brand wordmark changed from "maglev" to "vpp-maglev" and wrapped
   in an <a> linking to https://git.ipng.ch/ipng/vpp-maglev. Logo
   link changed to https://ipng.ch/. Both open in a new tab with
   rel="noopener".
 - .gitignore fix: `frontend`, `maglevc`, `maglevd` were matching
   ANY file or directory with those names anywhere in the tree,
   silently ignoring cmd/frontend and friends. Anchored with
   leading slashes so only repo-root build artifacts match.
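   The anchored entries now read, per the fix described above:

```gitignore
/frontend
/maglevc
/maglevd
```
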
2026-04-12 23:06:38 +02:00
parent 25e9d79aba
commit 4347bb9b05
33 changed files with 1729 additions and 241 deletions

View File

@@ -222,6 +222,7 @@ func (c *Checker) ListFrontends() []string {
for name := range c.cfg.Frontends {
names = append(names, name)
}
sort.Strings(names)
return names
}

View File

@@ -7,6 +7,7 @@ import (
"net"
"os"
"regexp"
"sort"
"strconv"
"strings"
"time"
@@ -302,6 +303,21 @@ func convert(r *rawMaglev) (*Config, error) {
cfg.Frontends[name] = fe
}
// ---- cross-frontend: VIP-address family consistency -----------------------
//
// VPP's LB plugin requires every VIP sharing a given IP prefix to use
// the same encap type (GRE4 vs GRE6) — even when the VIPs sit on
// different ports. The encap is determined by the backend address
// family (see internal/vpp/lbsync.go desiredFromFrontend). So two
// frontends on the same VIP address with backends in different
// families (one IPv4 pool, one IPv6 pool) cannot both be programmed
// into VPP: the second one fails at lb_add_del_vip_v2 time with
// VNET_API_ERROR_INVALID_ARGUMENT (-73). Catching it here turns the
// silent runtime failure into a clear config-load error.
if err := validateVIPFamilyConsistency(cfg); err != nil {
return nil, err
}
// ---- vpp ------------------------------------------------------------------
// Runs last so structural errors in healthchecks/backends/frontends are
// reported first; operators fix those, then we tell them about the VPP
@@ -579,6 +595,69 @@ func convertFrontend(name string, r *rawFrontend, backends map[string]Backend) (
return fe, nil
}
// validateVIPFamilyConsistency walks cfg.Frontends, groups them by VIP
// address, and rejects any group whose members disagree on the backend
// address family used by their pools. See the call site in Parse for
// why this matters (VPP LB plugin limitation).
//
// Each frontend already has its own within-frontend family invariant
// (every backend in a frontend must share a family — enforced in
// convertFrontend). This check adds the cross-frontend dimension:
// frontends that happen to collide on the VIP address.
func validateVIPFamilyConsistency(cfg *Config) error {
type seen struct {
family int
frontendName string
}
byAddr := map[string]seen{}
// Sort frontend names so the "first frontend on this address"
// reported in errors is deterministic, independent of Go's
// randomized map iteration.
names := make([]string, 0, len(cfg.Frontends))
for name := range cfg.Frontends {
names = append(names, name)
}
sort.Strings(names)
for _, name := range names {
fe := cfg.Frontends[name]
fam := frontendBackendFamily(cfg, fe)
if fam == 0 {
continue // no valid backends; family is unknowable
}
addr := fe.Address.String()
if prev, ok := byAddr[addr]; ok {
if prev.family != fam {
return fmt.Errorf(
"frontend %q: VIP address %s is also used by frontend %q with IPv%d backends, "+
"but %q has IPv%d backends; VPP's LB plugin requires all VIPs sharing an "+
"address to use the same encap (backend family), so this config cannot be "+
"programmed — give the two frontends different VIP addresses",
name, addr, prev.frontendName, prev.family, name, fam)
}
continue
}
byAddr[addr] = seen{family: fam, frontendName: name}
}
return nil
}
// frontendBackendFamily returns the address family (4 or 6) of the
// first valid backend in the frontend's first pool. Returns 0 when no
// backend is resolvable — convertFrontend already enforces that all
// backends in a frontend share a family, so the first one is
// authoritative.
func frontendBackendFamily(cfg *Config, fe Frontend) int {
if len(fe.Pools) == 0 {
return 0
}
for bName := range fe.Pools[0].Backends {
if b, ok := cfg.Backends[bName]; ok && b.Address != nil {
return ipFamily(b.Address)
}
}
return 0
}
// ---- helpers ---------------------------------------------------------------
func parseOptionalIPFamily(s string, family int, field string) (net.IP, error) {

View File

@@ -558,6 +558,86 @@ maglev:
`,
errSub: "name must not be empty",
},
{
// Regression: VPP's LB plugin requires every VIP sharing
// a prefix to use the same encap type. Two frontends on
// the same VIP address with mismatched backend families
// can't both be programmed; catch it at config load so
// the operator doesn't see a late vpp-reconciler-error.
name: "cross-frontend VIP family mismatch",
yaml: `
maglev:
vpp:
lb:
ipv4-src-address: 10.0.0.1
ipv6-src-address: 2001:db8::10
healthchecks:
c:
type: icmp
interval: 1s
timeout: 2s
backends:
v4: {address: 10.0.0.2, healthcheck: c}
v6: {address: 2001:db8::2, healthcheck: c}
frontends:
web:
address: 2001:db8::1
protocol: tcp
port: 443
pools:
- name: primary
backends:
v4: {}
mail:
address: 2001:db8::1
protocol: tcp
port: 993
pools:
- name: primary
backends:
v6: {}
`,
errSub: "VIP address 2001:db8::1",
},
{
// Sanity: two frontends sharing a VIP address with
// matching backend families is fine — VPP's constraint
// is about encap consistency, not about address reuse.
name: "cross-frontend VIP address share with same family is allowed",
yaml: `
maglev:
vpp:
lb:
ipv4-src-address: 10.0.0.1
ipv6-src-address: 2001:db8::10
healthchecks:
c:
type: icmp
interval: 1s
timeout: 2s
backends:
v6a: {address: 2001:db8::2, healthcheck: c}
v6b: {address: 2001:db8::3, healthcheck: c}
frontends:
web:
address: 2001:db8::1
protocol: tcp
port: 443
pools:
- name: primary
backends:
v6a: {}
mail:
address: 2001:db8::1
protocol: tcp
port: 993
pools:
- name: primary
backends:
v6b: {}
`,
errSub: "",
},
}
for _, tt := range tests {

View File

@@ -1314,11 +1314,16 @@ func (x *VPPLBCounters) GetBackends() []*VPPLBBackendCounters {
}
type SetWeightRequest struct {
-state protoimpl.MessageState `protogen:"open.v1"`
-Frontend string `protobuf:"bytes,1,opt,name=frontend,proto3" json:"frontend,omitempty"`
-Pool string `protobuf:"bytes,2,opt,name=pool,proto3" json:"pool,omitempty"`
-Backend string `protobuf:"bytes,3,opt,name=backend,proto3" json:"backend,omitempty"`
-Weight int32 `protobuf:"varint,4,opt,name=weight,proto3" json:"weight,omitempty"` // 0-100
+state protoimpl.MessageState `protogen:"open.v1"`
+Frontend string `protobuf:"bytes,1,opt,name=frontend,proto3" json:"frontend,omitempty"`
+Pool string `protobuf:"bytes,2,opt,name=pool,proto3" json:"pool,omitempty"`
+Backend string `protobuf:"bytes,3,opt,name=backend,proto3" json:"backend,omitempty"`
+Weight int32 `protobuf:"varint,4,opt,name=weight,proto3" json:"weight,omitempty"` // 0-100
+// flush, when true, also clears VPP's flow table for this backend
+// so existing sessions are torn down. When false (default), only
+// Maglev's new-bucket mapping is updated and live flows keep
+// draining to this backend.
+Flush bool `protobuf:"varint,5,opt,name=flush,proto3" json:"flush,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
@@ -1381,6 +1386,13 @@ func (x *SetWeightRequest) GetWeight() int32 {
return 0
}
func (x *SetWeightRequest) GetFlush() bool {
if x != nil {
return x.Flush
}
return false
}
// WatchRequest controls which event types are streamed. All fields default to
// true (i.e. an empty request subscribes to everything at info level).
type WatchRequest struct {
@@ -2638,12 +2650,13 @@ const file_proto_maglev_proto_rawDesc = "" +
"\x05bytes\x18\x04 \x01(\x04R\x05bytes\"w\n" +
"\rVPPLBCounters\x12,\n" +
"\x04vips\x18\x01 \x03(\v2\x18.maglev.VPPLBVIPCountersR\x04vips\x128\n" +
"\bbackends\x18\x02 \x03(\v2\x1c.maglev.VPPLBBackendCountersR\bbackends\"t\n" +
"\bbackends\x18\x02 \x03(\v2\x1c.maglev.VPPLBBackendCountersR\bbackends\"\x8a\x01\n" +
"\x10SetWeightRequest\x12\x1a\n" +
"\bfrontend\x18\x01 \x01(\tR\bfrontend\x12\x12\n" +
"\x04pool\x18\x02 \x01(\tR\x04pool\x12\x18\n" +
"\abackend\x18\x03 \x01(\tR\abackend\x12\x16\n" +
"\x06weight\x18\x04 \x01(\x05R\x06weight\"\xa3\x01\n" +
"\x06weight\x18\x04 \x01(\x05R\x06weight\x12\x14\n" +
"\x05flush\x18\x05 \x01(\bR\x05flush\"\xa3\x01\n" +
"\fWatchRequest\x12\x15\n" +
"\x03log\x18\x01 \x01(\bH\x00R\x03log\x88\x01\x01\x12\x1b\n" +
"\tlog_level\x18\x02 \x01(\tR\blogLevel\x12\x1d\n" +

View File

@@ -102,7 +102,13 @@ func (s *Server) DisableBackend(_ context.Context, req *BackendRequest) (*Backen
return backendToProto(b), nil
}
-// SetFrontendPoolBackendWeight updates the weight of a backend in a pool.
+// SetFrontendPoolBackendWeight updates the weight of a backend in a pool
+// and immediately pushes the change into VPP via a targeted single-VIP
+// sync. When req.Flush is true the backend's AS row is rewritten with
+// lb_as_set_weight(is_flush=true), which tears down VPP's flow table for
+// that AS so existing sessions are dropped; when false the flow table is
+// left alone and only Maglev's new-bucket mapping is updated, so existing
+// sessions keep reaching this backend until they naturally drain.
func (s *Server) SetFrontendPoolBackendWeight(_ context.Context, req *SetWeightRequest) (*FrontendInfo, error) {
if req.Weight < 0 || req.Weight > 100 {
return nil, status.Errorf(codes.InvalidArgument, "weight %d out of range [0, 100]", req.Weight)
@@ -111,6 +117,26 @@ func (s *Server) SetFrontendPoolBackendWeight(_ context.Context, req *SetWeightR
if err != nil {
return nil, status.Errorf(codes.NotFound, "%v", err)
}
// Push the change into VPP so the operator doesn't have to wait
// for the periodic 30s reconcile to pick it up. Silently skipped
// when VPP integration is disabled — the mutation still lands in
// config and any future sync will reconcile it.
if s.vppClient != nil && s.vppClient.IsConnected() {
cfg := s.checker.Config()
flushAddr := ""
if req.Flush {
if b, ok := cfg.Backends[req.Backend]; ok && b.Address != nil {
flushAddr = b.Address.String()
}
}
if err := s.vppClient.SyncLBStateVIP(cfg, req.Frontend, flushAddr); err != nil && !errors.Is(err, vpp.ErrFrontendNotFound) {
slog.Warn("set-weight-sync",
"frontend", req.Frontend, "backend", req.Backend,
"weight", req.Weight, "flush", req.Flush, "err", err)
}
}
return frontendToProto(req.Frontend, fe, s.checker), nil
}
@@ -403,7 +429,7 @@ func (s *Server) SyncVPPLBState(_ context.Context, req *SyncVPPLBStateRequest) (
}
cfg := s.checker.Config()
if req.FrontendName != nil && *req.FrontendName != "" {
-if err := s.vppClient.SyncLBStateVIP(cfg, *req.FrontendName); err != nil {
+if err := s.vppClient.SyncLBStateVIP(cfg, *req.FrontendName, ""); err != nil {
if errors.Is(err, vpp.ErrFrontendNotFound) {
return nil, status.Errorf(codes.NotFound, "%v", err)
}

View File

@@ -8,14 +8,17 @@ import (
// ActivePoolIndex returns the priority-failover pool index for fe given
// the current backend states. The active pool is the first pool that
-// contains at least one backend in StateUp — pool[0] is the primary,
-// pool[1] the first fallback, and so on. Returns 0 when no pool has
-// any up backend, in which case every backend maps to weight 0 and the
-// return value is unobservable.
+// contains at least one backend which is both in StateUp AND has a
+// non-zero configured weight: a pool whose up backends are all
+// weight=0 contributes no serving capacity, so failover falls through
+// to the next tier. Returns 0 when no pool can serve, in which case
+// every backend maps to weight 0 and the return value is unobservable.
+//
+// pool[0] is the primary, pool[1] the first fallback, and so on.
func ActivePoolIndex(fe config.Frontend, states map[string]State) int {
for i, pool := range fe.Pools {
-for bName := range pool.Backends {
-if states[bName] == StateUp {
+for bName, pb := range pool.Backends {
+if states[bName] == StateUp && pb.Weight > 0 {
return i
}
}

View File

@@ -139,6 +139,92 @@ func TestActivePoolIndex(t *testing.T) {
}
}
// TestActivePoolIndexWeightedFailover pins the rule that a pool is only
// "active" when it has at least one backend that is both up AND has a
// non-zero configured weight. A pool whose up backends are all
// weight=0 contributes no serving capacity, so failover should fall
// through to the next tier.
//
// This was a latent bug: ActivePoolIndex used to check state alone and
// would return poolIdx=0 even when every primary backend had weight=0,
// leaving the fallback pool unused even though it was the only pool
// that could actually serve traffic.
func TestActivePoolIndexWeightedFailover(t *testing.T) {
mkFE := func(pools ...map[string]int) config.Frontend {
out := make([]config.Pool, len(pools))
for i, p := range pools {
out[i] = config.Pool{Name: "p", Backends: map[string]config.PoolBackend{}}
for name, w := range p {
out[i].Backends[name] = config.PoolBackend{Weight: w}
}
}
return config.Frontend{Pools: out}
}
cases := []struct {
name string
fe config.Frontend
states map[string]State
want int
}{
{
name: "primary has only weight-0 backends → failover to secondary",
fe: mkFE(
map[string]int{"a": 0, "b": 0},
map[string]int{"c": 100},
),
states: map[string]State{"a": StateUp, "b": StateUp, "c": StateUp},
want: 1,
},
{
name: "primary has a weight-0 AND a weight>0 backend → primary stays active",
fe: mkFE(
map[string]int{"a": 0, "b": 50},
map[string]int{"c": 100},
),
states: map[string]State{"a": StateUp, "b": StateUp, "c": StateUp},
want: 0,
},
{
name: "primary w>0 backend is down, w=0 sibling is up → failover",
fe: mkFE(
map[string]int{"a": 0, "b": 50},
map[string]int{"c": 100},
),
states: map[string]State{"a": StateUp, "b": StateDown, "c": StateUp},
want: 1,
},
{
name: "two tiers of weight-0 → fall through to third tier",
fe: mkFE(
map[string]int{"a": 0},
map[string]int{"b": 0},
map[string]int{"c": 100},
),
states: map[string]State{"a": StateUp, "b": StateUp, "c": StateUp},
want: 2,
},
{
name: "every tier weight-0 → default 0 (nothing can serve)",
fe: mkFE(
map[string]int{"a": 0},
map[string]int{"b": 0},
),
states: map[string]State{"a": StateUp, "b": StateUp},
want: 0,
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
got := ActivePoolIndex(tc.fe, tc.states)
if got != tc.want {
t.Errorf("got pool %d, want pool %d", got, tc.want)
}
})
}
}
// TestComputeFrontendState locks down the reduction rule: frontends are
// up iff any backend has effective weight > 0, unknown iff all backends
// are still in StateUnknown (or there are no backends), and down otherwise.

View File

@@ -38,6 +38,7 @@ type desiredVIP struct {
Protocol uint8 // 6=TCP, 17=UDP, 255=any
Port uint16
SrcIPSticky bool // lb_add_del_vip_v2.src_ip_sticky
Encap lb_types.LbEncapType // GRE4 / GRE6; matches the backend family, not the VIP's
ASes map[string]desiredAS // keyed by AS IP string
}
@@ -144,7 +145,7 @@ func (c *Client) SyncLBStateAll(cfg *config.Config) error {
curPtr = &cur
curSticky = cur.SrcIPSticky
}
-if err := reconcileVIP(ch, d, curPtr, curSticky, &st); err != nil {
+if err := reconcileVIP(ch, d, curPtr, curSticky, "", &st); err != nil {
return err
}
}
@@ -165,7 +166,14 @@ func (c *Client) SyncLBStateAll(cfg *config.Config) error {
// frontend is missing from cfg, SyncLBStateVIP returns ErrFrontendNotFound.
// This is the right tool for targeted updates on a busy load-balancer with
// many VIPs — only one VIP is read from VPP and only its ASes are modified.
-func (c *Client) SyncLBStateVIP(cfg *config.Config, feName string) error {
+//
+// flushAddress, when non-empty, is the IP of an application server whose
+// weight change (if any) should be pushed with IsFlush=true regardless of
+// the usual "only flush on non-zero → zero" heuristic. This is how the
+// SetFrontendPoolBackendWeight RPC exposes an explicit "drop flows now"
+// knob: the server handler resolves the backend's config address and
+// passes it here. Callers that don't need forced flushing pass "".
+func (c *Client) SyncLBStateVIP(cfg *config.Config, feName, flushAddress string) error {
if !c.IsConnected() {
return errNotConnected
}
@@ -203,7 +211,7 @@ func (c *Client) SyncLBStateVIP(cfg *config.Config, feName string) error {
}
var st syncStats
-if err := reconcileVIP(ch, d, cur, curSticky, &st); err != nil {
+if err := reconcileVIP(ch, d, cur, curSticky, flushAddress, &st); err != nil {
return err
}
recordSyncStats("vip", &st)
@@ -227,7 +235,7 @@ func (c *Client) SyncLBStateVIP(cfg *config.Config, feName string) error {
// matching entry is always present. When the flag differs from the desired
// value, the VIP is torn down (ASes del+flushed, VIP deleted) and recreated
// — VPP has no API to mutate src_ip_sticky on an existing VIP.
-func reconcileVIP(ch *loggedChannel, d desiredVIP, cur *LBVIP, curSticky bool, st *syncStats) error {
+func reconcileVIP(ch *loggedChannel, d desiredVIP, cur *LBVIP, curSticky bool, flushAddress string, st *syncStats) error {
if cur == nil {
if err := addVIP(ch, d); err != nil {
return err
@@ -299,6 +307,14 @@ func reconcileVIP(ch *loggedChannel, d desiredVIP, cur *LBVIP, curSticky bool, s
// (i.e. the backend was disabled, not merely drained). Steady-
// state syncs where weight doesn't change never re-flush.
flush := a.Flush && c.Weight > 0 && a.Weight == 0
// Caller-forced flush: used by SetFrontendPoolBackendWeight
// with flush=true to explicitly drop live sessions for a
// single backend. The address match is exact — no other
// AS's weight change is affected, even if several happen
// in the same reconcile pass.
if flushAddress != "" && addr == flushAddress {
flush = true
}
if err := setASWeight(ch, d.Prefix, d.Protocol, d.Port, a, c.Weight, flush); err != nil {
return err
}
@@ -349,13 +365,24 @@ func desiredFromFrontend(cfg *config.Config, fe config.Frontend, src StateSource
if fe.Address.To4() == nil {
bits = 128
}
// Start with an encap derived from the VIP's own family as a
// fallback. This only applies when the frontend has zero valid
// backends (e.g. every referenced backend is missing from
// cfg.Backends); any real backend below overrides it to the
// backend family, which is the correct choice because the GRE
// encap carries backend traffic, not VIP traffic. Config
// validation already guarantees every backend in a frontend
// shares the same family, so the first valid backend we see is
// authoritative.
d := desiredVIP{
Prefix: &net.IPNet{IP: fe.Address, Mask: net.CIDRMask(bits, bits)},
Protocol: protocolFromConfig(fe.Protocol),
Port: fe.Port,
SrcIPSticky: fe.SrcIPSticky,
Encap: encapForIP(fe.Address),
ASes: make(map[string]desiredAS),
}
encapSet := false
states := snapshotStates(fe, src)
activePool := health.ActivePoolIndex(fe, states)
@@ -366,6 +393,10 @@ func desiredFromFrontend(cfg *config.Config, fe config.Frontend, src StateSource
if !ok || b.Address == nil {
continue
}
if !encapSet {
d.Encap = encapForIP(b.Address)
encapSet = true
}
// Disabled backends (either via operator action or config) are
// kept in the desired set so they stay installed in VPP with
// weight=0 — they must not be deleted, otherwise a subsequent
@@ -423,12 +454,11 @@ func snapshotStates(fe config.Frontend, src StateSource) map[string]health.State
const defaultFlowsTableLength = 1024
func addVIP(ch *loggedChannel, d desiredVIP) error {
-encap := encapForIP(d.Prefix.IP)
req := &lb.LbAddDelVipV2{
Pfx: ip_types.NewAddressWithPrefix(*d.Prefix),
Protocol: d.Protocol,
Port: d.Port,
-Encap: encap,
+Encap: d.Encap,
Type: lb_types.LB_API_SRV_TYPE_CLUSTERIP,
NewFlowsTableLength: defaultFlowsTableLength,
SrcIPSticky: d.SrcIPSticky,
@@ -445,7 +475,7 @@ func addVIP(ch *loggedChannel, d desiredVIP) error {
"vip", d.Prefix.IP.String(),
"protocol", protocolName(d.Protocol),
"port", d.Port,
"encap", encapName(encap),
"encap", encapName(d.Encap),
"src-ip-sticky", d.SrcIPSticky)
return nil
}

View File

@@ -177,3 +177,264 @@ func TestDesiredFromFrontendFailover(t *testing.T) {
})
}
}
// TestDesiredFromFrontendSharedBackend exercises the exact shape of
// maglev.yaml: two frontends that share three backends across primary
// and fallback pools with different per-pool weights. The key
// invariants being pinned:
//
// - Each frontend's desiredFromFrontend must read its own
// per-pool-membership weights, never leaking weights from a sibling
// frontend's pool config.
// - When the primary pool has at least one backend up, the fallback
// pool's backends must all be weight=0 (standby).
// - When every primary-pool backend is non-up (down / paused /
// disabled), failover kicks in: the fallback pool's backends get
// their configured weights, and primary-pool backends stay at 0.
//
// Frontends modelled below:
//
// nginx-ip4-http:
// primary: nginx0-frggh0 w=10, nginx0-nlams0 w=100
// fallback: nginx0-chlzn0 w=100
//
// nginx-ip6-https:
// primary: nginx0-frggh0 w=100
// fallback: nginx0-nlams0 w=100, nginx0-chlzn0 w=100
//
// Note that nginx0-frggh0 is configured with weight 10 in the ip4
// primary but 100 in the ip6 primary — this is the exact crossed
// configuration that the user reported as producing weight=10 in the
// ip6 VIP (a regression).
func TestDesiredFromFrontendSharedBackend(t *testing.T) {
ip := func(s string) net.IP { return net.ParseIP(s).To4() }
frggh := "198.19.6.76"
nlams := "198.19.4.118"
chlzn := "198.19.6.167"
cfg := &config.Config{
Backends: map[string]config.Backend{
"nginx0-frggh0": {Address: ip(frggh), Enabled: true},
"nginx0-nlams0": {Address: ip(nlams), Enabled: true},
"nginx0-chlzn0": {Address: ip(chlzn), Enabled: true},
},
}
feIP4 := config.Frontend{
Address: ip("198.19.0.254"),
Protocol: "tcp",
Port: 80,
Pools: []config.Pool{
{Name: "primary", Backends: map[string]config.PoolBackend{
"nginx0-frggh0": {Weight: 10},
"nginx0-nlams0": {Weight: 100},
}},
{Name: "fallback", Backends: map[string]config.PoolBackend{
"nginx0-chlzn0": {Weight: 100},
}},
},
}
feIP6 := config.Frontend{
Address: net.ParseIP("2001:db8::1"),
Protocol: "tcp",
Port: 443,
Pools: []config.Pool{
{Name: "primary", Backends: map[string]config.PoolBackend{
"nginx0-frggh0": {Weight: 100},
}},
{Name: "fallback", Backends: map[string]config.PoolBackend{
"nginx0-nlams0": {Weight: 100},
"nginx0-chlzn0": {Weight: 100},
}},
},
}
type want struct {
ip4 map[string]uint8
ip6 map[string]uint8
}
tests := []struct {
name string
states map[string]health.State
want want
}{
{
name: "all up — each primary serves with its own weights",
states: map[string]health.State{
"nginx0-frggh0": health.StateUp,
"nginx0-nlams0": health.StateUp,
"nginx0-chlzn0": health.StateUp,
},
want: want{
ip4: map[string]uint8{frggh: 10, nlams: 100, chlzn: 0},
ip6: map[string]uint8{frggh: 100, nlams: 0, chlzn: 0},
},
},
{
name: "frggh0 disabled — ip4 primary still served by nlams0, ip6 fails over to fallback",
states: map[string]health.State{
"nginx0-frggh0": health.StateDisabled,
"nginx0-nlams0": health.StateUp,
"nginx0-chlzn0": health.StateUp,
},
want: want{
// ip4 primary still has nlams0 up, so stays on primary;
// frggh0 is in primary but disabled → 0.
ip4: map[string]uint8{frggh: 0, nlams: 100, chlzn: 0},
// ip6 primary has only frggh0 (disabled) → fallback
// pool activates and both of its backends get their
// configured weights.
ip6: map[string]uint8{frggh: 0, nlams: 100, chlzn: 100},
},
},
{
name: "frggh0 paused — same failover shape as disabled for ip6",
states: map[string]health.State{
"nginx0-frggh0": health.StatePaused,
"nginx0-nlams0": health.StateUp,
"nginx0-chlzn0": health.StateUp,
},
want: want{
ip4: map[string]uint8{frggh: 0, nlams: 100, chlzn: 0},
ip6: map[string]uint8{frggh: 0, nlams: 100, chlzn: 100},
},
},
{
name: "frggh0 down — same failover shape as disabled for ip6",
states: map[string]health.State{
"nginx0-frggh0": health.StateDown,
"nginx0-nlams0": health.StateUp,
"nginx0-chlzn0": health.StateUp,
},
want: want{
ip4: map[string]uint8{frggh: 0, nlams: 100, chlzn: 0},
ip6: map[string]uint8{frggh: 0, nlams: 100, chlzn: 100},
},
},
{
name: "ip4 primary all down → failover to chlzn0; ip6 unaffected",
states: map[string]health.State{
"nginx0-frggh0": health.StateDown,
"nginx0-nlams0": health.StateDown,
"nginx0-chlzn0": health.StateUp,
},
want: want{
// ip4 primary has nothing up → fallback activates.
ip4: map[string]uint8{frggh: 0, nlams: 0, chlzn: 100},
// ip6 primary has frggh0 (down) → fallback activates
// too; nlams0 is in ip6 fallback but down, chlzn0 is
// up and carries traffic.
ip6: map[string]uint8{frggh: 0, nlams: 0, chlzn: 100},
},
},
{
name: "all backends down → everyone zero",
states: map[string]health.State{
"nginx0-frggh0": health.StateDown,
"nginx0-nlams0": health.StateDown,
"nginx0-chlzn0": health.StateDown,
},
want: want{
ip4: map[string]uint8{frggh: 0, nlams: 0, chlzn: 0},
ip6: map[string]uint8{frggh: 0, nlams: 0, chlzn: 0},
},
},
{
name: "all backends disabled → everyone zero (and flushed)",
states: map[string]health.State{
"nginx0-frggh0": health.StateDisabled,
"nginx0-nlams0": health.StateDisabled,
"nginx0-chlzn0": health.StateDisabled,
},
want: want{
ip4: map[string]uint8{frggh: 0, nlams: 0, chlzn: 0},
ip6: map[string]uint8{frggh: 0, nlams: 0, chlzn: 0},
},
},
{
name: "frggh0 re-enabled and up — each frontend returns to its own configured weight (regression)",
states: map[string]health.State{
"nginx0-frggh0": health.StateUp,
"nginx0-nlams0": health.StateUp,
"nginx0-chlzn0": health.StateUp,
},
want: want{
// This is the specific regression the user reported:
// after a disable/enable cycle, the ip6 VIP should
// return to weight=100 for frggh0 (its own pool's
// configured weight), not 10 (ip4's weight).
ip4: map[string]uint8{frggh: 10, nlams: 100, chlzn: 0},
ip6: map[string]uint8{frggh: 100, nlams: 0, chlzn: 0},
},
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
src := &fakeStateSource{cfg: cfg, states: tc.states}
d4 := desiredFromFrontend(cfg, feIP4, src)
for addr, w := range tc.want.ip4 {
got, ok := d4.ASes[addr]
if !ok {
t.Errorf("ip4: %s missing from desired set", addr)
continue
}
if got.Weight != w {
t.Errorf("ip4: %s weight got %d, want %d", addr, got.Weight, w)
}
}
if len(d4.ASes) != len(tc.want.ip4) {
t.Errorf("ip4: got %d ASes, want %d", len(d4.ASes), len(tc.want.ip4))
}
d6 := desiredFromFrontend(cfg, feIP6, src)
for addr, w := range tc.want.ip6 {
got, ok := d6.ASes[addr]
if !ok {
t.Errorf("ip6: %s missing from desired set", addr)
continue
}
if got.Weight != w {
t.Errorf("ip6: %s weight got %d, want %d", addr, got.Weight, w)
}
}
if len(d6.ASes) != len(tc.want.ip6) {
t.Errorf("ip6: got %d ASes, want %d", len(d6.ASes), len(tc.want.ip6))
}
// Also exercise desiredFromConfig (the batch version used
// by the 30-second periodic SyncLBStateAll): it iterates
// every frontend in cfg and must produce the same
// per-frontend weights as desiredFromFrontend called
// directly. A bug where one frontend's pool config leaks
// into another would show up here too.
cfgBatch := &config.Config{
Backends: cfg.Backends,
Frontends: map[string]config.Frontend{
"nginx-ip4-http": feIP4,
"nginx-ip6-https": feIP6,
},
}
batch := desiredFromConfig(cfgBatch, src)
byAddr := map[string]desiredVIP{}
for _, d := range batch {
byAddr[d.Prefix.IP.String()] = d
}
if d := byAddr["198.19.0.254"]; true {
for addr, w := range tc.want.ip4 {
if got := d.ASes[addr].Weight; got != w {
t.Errorf("batch ip4: %s weight got %d, want %d", addr, got, w)
}
}
}
if d := byAddr["2001:db8::1"]; true {
for addr, w := range tc.want.ip6 {
if got := d.ASes[addr].Weight; got != w {
t.Errorf("batch ip6: %s weight got %d, want %d", addr, got, w)
}
}
}
})
}
}

View File

@@ -87,7 +87,7 @@ func (r *Reconciler) handle(ev checker.Event) {
"from", ev.Transition.From.String(),
"to", ev.Transition.To.String())
-if err := r.client.SyncLBStateVIP(cfg, ev.FrontendName); err != nil {
+if err := r.client.SyncLBStateVIP(cfg, ev.FrontendName, ""); err != nil {
if errors.Is(err, ErrFrontendNotFound) {
// Frontend was removed between the event being emitted and
// us handling it; a periodic SyncLBStateAll will clean it up.