VPP reconciler: event-driven sync, pool failover, bug fixes
This commit wires the checker's state machine through to the VPP dataplane:
every backend state transition flows through a single code path that
recomputes the effective per-backend weight (with pool failover) and pushes
the result to VPP. Along the way several latent bugs in the state machine
and the sync path were fixed.
internal/vpp/reconciler.go (new)
- New Reconciler type subscribes to checker.Checker events and, on every
transition, calls Client.SyncLBStateVIP for the affected frontend. This
is the ONLY place in the codebase where backend state changes cause VPP
calls — the "single path" discipline requested during design.
- Defines an EventSource interface (checker.Checker satisfies it) so the
dependency direction stays vpp → checker; the checker never imports vpp.
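The wiring pattern can be sketched as follows. All names here (`Event`, `Subscribe`, `SyncVIP`) are illustrative stand-ins for the real checker/vpp types, which this commit message does not spell out; only the shape of the "single path" loop is taken from the description above.

```go
package main

import "fmt"

// Event is a hypothetical stand-in for the checker's transition event;
// the real type lives in internal/checker and likely carries more fields.
type Event struct {
	Backend  string // backend whose state changed
	Frontend string // frontend that must be re-synced
}

// EventSource is the small interface the reconciler depends on, keeping the
// dependency direction vpp -> checker: the checker never imports vpp.
type EventSource interface {
	Subscribe() <-chan Event
}

// vipSyncer abstracts Client.SyncLBStateVIP for this sketch.
type vipSyncer interface {
	SyncVIP(frontend string) error
}

// Reconciler drains checker events; every transition funnels into exactly
// one sync call, which is the "single path" discipline described above.
type Reconciler struct {
	src EventSource
	lb  vipSyncer
}

func (r *Reconciler) Run() {
	for ev := range r.src.Subscribe() {
		if err := r.lb.SyncVIP(ev.Frontend); err != nil {
			fmt.Println("sync failed:", ev.Frontend, err)
		}
	}
}

// --- fakes for demonstration ---

type fakeSource struct{ ch chan Event }

func (f *fakeSource) Subscribe() <-chan Event { return f.ch }

type recordingSyncer struct{ synced []string }

func (s *recordingSyncer) SyncVIP(fe string) error {
	s.synced = append(s.synced, fe)
	return nil
}

func main() {
	src := &fakeSource{ch: make(chan Event, 2)}
	src.ch <- Event{Backend: "web1", Frontend: "www-vip"}
	src.ch <- Event{Backend: "web2", Frontend: "failover-vip"}
	close(src.ch)

	lb := &recordingSyncer{}
	(&Reconciler{src: src, lb: lb}).Run() // returns when the channel closes
	fmt.Println(lb.synced)
}
```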
internal/vpp/client.go
- Renamed ConfigSource → StateSource. The interface now has two methods:
Config() and BackendState(name) — the reconciler and the desired-state
builder both need live health state to compute effective weights.
- SetConfigSource → SetStateSource; internal cfgSrc field → stateSrc.
- New getStateSource() helper for internal locked access.
- lbSyncLoop still uses the state source for its periodic drift
reconciliation; it's fully idempotent and runs the same code path as
event-driven syncs.
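A self-contained sketch of the renamed interface and its lookup contract. The `Config()` return type and the stand-in `Config`/`State` types are assumptions for compilability; the two-method shape and the `(health.State, bool)` return of `BackendState` come from the description above.

```go
package main

import "fmt"

// Stand-ins for internal/config and internal/health so the sketch compiles
// on its own; the real types live in those packages.
type Config struct{}
type State int

const (
	StateUnknown State = iota
	StateUp
)

// StateSource is the renamed interface (formerly ConfigSource): both the
// reconciler and the desired-state builder need live health state to
// compute effective weights.
type StateSource interface {
	Config() *Config
	// BackendState reports the live health state for a named backend;
	// ok=false means the backend is unknown to the checker.
	BackendState(name string) (State, bool)
}

// fakeSource demonstrates the contract a test double would satisfy.
type fakeSource struct{ states map[string]State }

func (f *fakeSource) Config() *Config { return &Config{} }
func (f *fakeSource) BackendState(name string) (State, bool) {
	s, ok := f.states[name]
	return s, ok
}

func main() {
	src := &fakeSource{states: map[string]State{"web1": StateUp}}
	s, ok := src.BackendState("web1")
	fmt.Println("web1 up:", ok && s == StateUp)
	_, ok = src.BackendState("missing")
	fmt.Println("missing known:", ok)
}
```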
internal/vpp/lbsync.go
- desiredAS grows a Flush bool so the mapping function can signal "on
transition to weight 0, flush existing flow-table entries".
- asFromBackend is now the single source of truth for the state →
(weight, flush) rule. Documented with a full truth table. Takes an
activePool parameter so it can distinguish "up in active pool" from
"up but standby".
- activePoolIndex(fe, states) implements priority failover: returns the
index of the first pool containing any StateUp backend. pool[0] wins
when at least one member is up; pool[1] takes over when pool[0] is
empty; and so on. Defaults to 0 (unobservable, since all backends map
to weight 0 when nothing is up).
- desiredFromFrontend snapshots backend states once, computes activePool,
then walks every backend through asFromBackend. No more filtering on
b.Enabled — disabled backends stay in the desired set so they keep
their AS entry in VPP with weight=0. The previous filter caused delAS
on disable, which destroyed the entry and broke enable afterwards.
- EffectiveWeights(fe, src) exported helper that returns the per-pool
per-backend weight map for one frontend. Used by the gRPC GetFrontend
handler and robot tests to observe failover without touching VPP.
- reconcileVIP computes flush at the weight-change call site:
flush = desired.Flush && cur.Weight > 0 && desired.Weight == 0
This ensures only the *transition* to disabled flushes sessions —
steady-state syncs with already-zero weight skip the call entirely.
- setASWeight now plumbs IsFlush into lb_as_set_weight.
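The failover and flush rules above can be exercised in isolation. This sketch mirrors asFromBackend and activePoolIndex as described (with a local stand-in `State` type and a simplified pool representation instead of `config.Frontend`), plus the flush predicate computed at the reconcileVIP call site:

```go
package main

import "fmt"

type State int

const (
	StateUnknown State = iota
	StateUp
	StateDown
	StatePaused
	StateDisabled
)

// clampWeight bounds a configured weight into VPP's 0-100 range.
func clampWeight(w int) uint8 {
	if w < 0 {
		return 0
	}
	if w > 100 {
		return 100
	}
	return uint8(w)
}

// asFromBackend: configured weight iff up AND in the active pool;
// disabled is the only state that asks for a flush.
func asFromBackend(poolIdx, activePool int, state State, cfgWeight int) (uint8, bool) {
	switch state {
	case StateUp:
		if poolIdx == activePool {
			return clampWeight(cfgWeight), false
		}
		return 0, false // up but standby
	case StateDisabled:
		return 0, true
	default:
		// unknown, down, paused: weight 0, drain flows naturally.
		return 0, false
	}
}

// activePoolIndex: first pool with any StateUp backend wins; defaults to 0.
func activePoolIndex(pools []map[string]State) int {
	for i, pool := range pools {
		for _, s := range pool {
			if s == StateUp {
				return i
			}
		}
	}
	return 0
}

func main() {
	// pool[0] all down, pool[1] has an up member: pool[1] is promoted.
	pools := []map[string]State{
		{"primary": StateDown},
		{"fallback": StateUp},
	}
	active := activePoolIndex(pools)
	fmt.Println("active pool:", active)

	w, flush := asFromBackend(1, active, StateUp, 80)
	fmt.Println("fallback weight:", w, "flush:", flush)

	// The flush predicate from reconcileVIP: only the transition from a
	// serving weight to zero flushes, and only when desired asks for it.
	curWeight, desiredWeight, desiredFlush := uint8(80), uint8(0), true
	fmt.Println("flush now:", desiredFlush && curWeight > 0 && desiredWeight == 0)
}
```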
internal/vpp/lbsync_test.go (new)
- TestAsFromBackend: 15 cases locking down the truth table, including
failover scenarios (up in standby pool, up promoted in pool[1]).
- TestActivePoolIndex: 8 cases covering pool[0]-has-up, pool[0]-all-down,
all-disabled, all-paused, all-unknown, nothing-up-anywhere, and
three-tier failover.
- TestDesiredFromFrontendFailover: 5 end-to-end scenarios wiring a fake
StateSource through desiredFromFrontend and asserting the final
per-IP weight map. Exercises the complete pipeline without VPP.
internal/checker/checker.go
- Added BackendState(name) (health.State, bool) — one-line method that
satisfies vpp.StateSource. The checker is otherwise unchanged.
- EnableBackend rewritten to reuse the existing worker (parallel to
ResumeBackend). The old code called startWorker which constructed a
brand-new Backend via health.New, throwing away the transition
history; the resulting 'backend-transition' log showed the bogus
from=unknown,to=unknown. Now uses w.backend.Enable() to record a
proper disabled→unknown transition and launches a fresh goroutine.
- Static (no-healthcheck) backends now fire their synthetic 'always up'
pass on the first iteration of runProbe instead of sleeping 30s
first. Previously static backends sat in StateUnknown for 30s after
startup — useless for deterministic testing and surprising for
operators. The fix is a simple first-iteration flag.
internal/health/state.go
- New Enable(maxHistory) method parallel to Disable. Transitions the
backend from whatever state it's in (typically StateDisabled) to
StateUnknown, resets the health counter to rise-1 so the expedited
resolution kicks in on the first probe result, and emits a transition
with code 'enabled'.
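The described semantics can be sketched on a stand-in type. The real health.Backend has more fields; only the pieces Enable touches per the description (state, counter preset to rise-1, 'enabled' transition) are modeled, and the field names are assumptions:

```go
package main

import "fmt"

type State int

const (
	StateUnknown State = iota
	StateUp
	StateDisabled
)

// Transition records one state change with a reason code.
type Transition struct {
	From, To State
	Code     string
}

// Backend is a stand-in for health.Backend; field names are illustrative.
type Backend struct {
	state   State
	counter int // consecutive-pass counter toward the rise threshold
	rise    int
	history []Transition
}

// Enable mirrors the described behavior: transition to StateUnknown from
// whatever state we were in, preset the counter to rise-1 so a single
// passing probe resolves the state (expedited resolution), and record an
// 'enabled' transition, trimming history to maxHistory entries.
func (b *Backend) Enable(maxHistory int) {
	b.history = append(b.history, Transition{From: b.state, To: StateUnknown, Code: "enabled"})
	if len(b.history) > maxHistory {
		b.history = b.history[len(b.history)-maxHistory:]
	}
	b.state = StateUnknown
	b.counter = b.rise - 1
}

func main() {
	b := &Backend{state: StateDisabled, rise: 3}
	b.Enable(10)
	last := b.history[len(b.history)-1]
	fmt.Println("from disabled:", last.From == StateDisabled)
	fmt.Println("now unknown:", b.state == StateUnknown)
	fmt.Println("counter:", b.counter, "code:", last.Code)
}
```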
proto/maglev.proto
- PoolBackendInfo gains effective_weight: the state-aware weight that
would be programmed into VPP (distinct from the configured weight in
the YAML). Exposed via GetFrontend.
internal/grpcapi/server.go
- frontendToProto takes a vpp.StateSource, computes effective weights
via vpp.EffectiveWeights, and populates PoolBackendInfo.EffectiveWeight.
- GetFrontend and SetFrontendPoolBackendWeight updated to pass the
checker in.
cmd/maglevc/commands.go
- 'show frontends <name>' now renders every pool backend row as
<name> weight <cfg> effective <eff> [disabled]?
so both values are always visible. The VPP-style key/value format
avoids the ANSI-alignment pitfall we hit earlier and makes the output
regex-parseable for robot tests.
cmd/maglevd/main.go
- Construct and start the Reconciler alongside the VPP client. Two
extra lines, no other changes to startup.
tests/01-maglevd/maglevd-lab/maglev.yaml
- Two new static backends (static-primary, static-fallback) and a new
failover-vip frontend with one backend per pool. No healthcheck, so
the state machine resolves them to 'up' immediately via the synthetic
pass. Used by the failover robot tests.
tests/01-maglevd/01-healthcheck.robot
- Three new test cases exercising pool failover end-to-end:
1. primary up, secondary standby (initial state)
2. disable primary → fallback takes over (effective weight flips)
3. enable primary → fallback steps back
All run without VPP: they scrape 'maglevc show frontends <name>' and
regex-match the effective weight in the output. Deterministic and
fast (~2s total) because the static backends don't probe.
- Two helper keywords: Static Backend Should Be Up and
Effective Weight Should Be.
Net result: 16/16 robot tests pass. Backend state transitions now
flow through a single documented path (checker event → reconciler →
SyncLBStateVIP → desiredFromFrontend → asFromBackend → reconcileVIP →
setASWeight), and the pool failover / enable-after-disable / static-
backend-startup bugs are all fixed.
@@ -9,6 +9,7 @@ import (
 	"net"
 
 	"git.ipng.ch/ipng/vpp-maglev/internal/config"
+	"git.ipng.ch/ipng/vpp-maglev/internal/health"
 	ip_types "git.ipng.ch/ipng/vpp-maglev/internal/vpp/binapi/ip_types"
 	lb "git.ipng.ch/ipng/vpp-maglev/internal/vpp/binapi/lb"
 	lb_types "git.ipng.ch/ipng/vpp-maglev/internal/vpp/binapi/lb_types"
@@ -37,6 +38,7 @@ type desiredVIP struct {
 type desiredAS struct {
 	Address net.IP
 	Weight  uint8 // 0-100
+	Flush   bool  // if true, drop existing flows when transitioning to weight 0
 }
 
 // syncStats counts changes made to the dataplane during a sync run.
@@ -60,12 +62,16 @@ func (c *Client) SyncLBStateAll(cfg *config.Config) error {
 	if !c.IsConnected() {
 		return errNotConnected
 	}
+	src := c.getStateSource()
+	if src == nil {
+		return fmt.Errorf("no state source configured")
+	}
 
 	cur, err := c.GetLBStateAll()
 	if err != nil {
 		return fmt.Errorf("read VPP LB state: %w", err)
 	}
-	desired := desiredFromConfig(cfg)
+	desired := desiredFromConfig(cfg, src)
 
 	ch, err := c.apiChannel()
 	if err != nil {
@@ -131,11 +137,15 @@ func (c *Client) SyncLBStateVIP(cfg *config.Config, feName string) error {
 	if !c.IsConnected() {
 		return errNotConnected
 	}
+	src := c.getStateSource()
+	if src == nil {
+		return fmt.Errorf("no state source configured")
+	}
 	fe, ok := cfg.Frontends[feName]
 	if !ok {
 		return fmt.Errorf("%q: %w", feName, ErrFrontendNotFound)
 	}
-	d := desiredFromFrontend(cfg, fe)
+	d := desiredFromFrontend(cfg, fe, src)
 
 	cur, err := c.GetLBStateVIP(d.Prefix, d.Protocol, d.Port)
 	if err != nil {
@@ -215,7 +225,12 @@ func reconcileVIP(ch *loggedChannel, d desiredVIP, cur *LBVIP, st *syncStats) er
 			continue
 		}
 		if c.Weight != a.Weight {
-			if err := setASWeight(ch, d.Prefix, d.Protocol, d.Port, a); err != nil {
+			// Flush only on the transition from serving traffic (cur > 0) to
+			// zero, and only when the desired state explicitly asks for it
+			// (i.e. the backend was disabled, not merely drained). Steady-
+			// state syncs where weight doesn't change never re-flush.
+			flush := a.Flush && c.Weight > 0 && a.Weight == 0
+			if err := setASWeight(ch, d.Prefix, d.Protocol, d.Port, a, flush); err != nil {
 				return err
 			}
 			st.asWeight++
@@ -240,10 +255,12 @@ func removeVIP(ch *loggedChannel, v LBVIP, st *syncStats) error {
 }
 
 // desiredFromConfig flattens every frontend in cfg into a desired VIP set.
-func desiredFromConfig(cfg *config.Config) []desiredVIP {
+// src provides the per-backend health state so weights and flush hints
+// reflect the current runtime state, not just the static config.
+func desiredFromConfig(cfg *config.Config, src StateSource) []desiredVIP {
 	out := make([]desiredVIP, 0, len(cfg.Frontends))
 	for _, fe := range cfg.Frontends {
-		out = append(out, desiredFromFrontend(cfg, fe))
+		out = append(out, desiredFromFrontend(cfg, fe, src))
 	}
 	return out
 }
@@ -252,16 +269,13 @@ func desiredFromConfig(cfg *config.Config) []desiredVIP {
 //
 // All backends across all pools of a frontend are merged into a single
 // application-server list so VPP knows about every backend that could ever
-// receive traffic. Weights are assigned as follows:
+// receive traffic. The per-AS weight and flush hint are computed by
+// asFromBackend from three inputs: (pool index, backend health state,
+// configured pool weight).
 //
-// - primary (first) pool: the backend's configured weight
-// - any subsequent pool: weight 0 (backend is known but receives no traffic)
-//
-// This preserves the pool priority model: higher layers can later flip
-// secondary-pool backends to non-zero weights on failover without needing to
-// add/remove ASes in the dataplane. When the same backend appears in multiple
-// pools, the first pool it appears in wins.
-func desiredFromFrontend(cfg *config.Config, fe config.Frontend) desiredVIP {
+// When the same backend appears in multiple pools, the first pool it
+// appears in wins.
+func desiredFromFrontend(cfg *config.Config, fe config.Frontend, src StateSource) desiredVIP {
 	bits := 32
 	if fe.Address.To4() == nil {
 		bits = 128
@@ -272,26 +286,138 @@ func desiredFromFrontend(cfg *config.Config, fe config.Frontend) desiredVIP {
 		Port:     fe.Port,
 		ASes:     make(map[string]desiredAS),
 	}
 
+	// Snapshot backend states once so the active-pool computation and the
+	// per-backend weight assignment see a consistent view.
+	states := make(map[string]health.State)
+	for _, pool := range fe.Pools {
+		for bName := range pool.Backends {
+			if s, ok := src.BackendState(bName); ok {
+				states[bName] = s
+			} else {
+				states[bName] = health.StateUnknown
+			}
+		}
+	}
+	activePool := activePoolIndex(fe, states)
+
 	for poolIdx, pool := range fe.Pools {
 		for bName, pb := range pool.Backends {
 			b, ok := cfg.Backends[bName]
-			if !ok || !b.Enabled || b.Address == nil {
+			if !ok || b.Address == nil {
 				continue
 			}
+			// Disabled backends (either via operator action or config) are
+			// kept in the desired set so they stay installed in VPP with
+			// weight=0 — they must not be deleted, otherwise a subsequent
+			// enable has to re-add them and existing flow-table state (if
+			// any) is lost. The state machine drives what weight to set
+			// via asFromBackend; we never filter on b.Enabled here.
 			addr := b.Address.String()
 			if _, already := d.ASes[addr]; already {
 				continue
 			}
-			var w uint8
-			if poolIdx == 0 {
-				w = clampWeight(pb.Weight)
-			} // secondary pools: weight 0 (default)
-			d.ASes[addr] = desiredAS{Address: b.Address, Weight: w}
+			w, flush := asFromBackend(poolIdx, activePool, states[bName], pb.Weight)
+			d.ASes[addr] = desiredAS{
+				Address: b.Address,
+				Weight:  w,
+				Flush:   flush,
+			}
 		}
 	}
 	return d
 }
+
+// EffectiveWeights returns the current effective VPP weight for every backend
+// in every pool of fe, keyed by poolIdx and backend name. It runs the same
+// failover + state-aware weight calculation that the sync path uses, but
+// produces a plain map instead of desiredVIP — intended for observability
+// (e.g. the GetFrontend gRPC handler) and for robot-testing the failover
+// logic without needing a running VPP instance.
+//
+// The returned map layout is: result[poolIdx][backendName] = effective weight.
+func EffectiveWeights(fe config.Frontend, src StateSource) map[int]map[string]uint8 {
+	states := make(map[string]health.State)
+	for _, pool := range fe.Pools {
+		for bName := range pool.Backends {
+			if s, ok := src.BackendState(bName); ok {
+				states[bName] = s
+			} else {
+				states[bName] = health.StateUnknown
+			}
+		}
+	}
+	activePool := activePoolIndex(fe, states)
+
+	out := make(map[int]map[string]uint8, len(fe.Pools))
+	for poolIdx, pool := range fe.Pools {
+		out[poolIdx] = make(map[string]uint8, len(pool.Backends))
+		for bName, pb := range pool.Backends {
+			w, _ := asFromBackend(poolIdx, activePool, states[bName], pb.Weight)
+			out[poolIdx][bName] = w
+		}
+	}
+	return out
+}
+
+// activePoolIndex returns the index of the first pool in fe that contains at
+// least one backend currently in StateUp. This is the priority-failover
+// selector: pool[0] is the primary, pool[1] is the first fallback, and so on.
+// As long as pool[0] has any up backend, it stays active. When every pool[0]
+// backend leaves StateUp (down, paused, disabled, unknown), pool[1] is
+// promoted — and so on for further fallback tiers. When no pool has any up
+// backend, returns 0 (the return value is unobservable in that case since
+// every backend maps to weight 0 regardless of the active pool).
+func activePoolIndex(fe config.Frontend, states map[string]health.State) int {
+	for i, pool := range fe.Pools {
+		for bName := range pool.Backends {
+			if states[bName] == health.StateUp {
+				return i
+			}
+		}
+	}
+	return 0
+}
+
+// asFromBackend is the pure mapping from (pool index, active pool, backend
+// state, config weight) to the desired VPP AS weight and flush hint. This is
+// the single source of truth for the state → dataplane rule — every LB change
+// flows through this function.
+//
+// A backend gets its configured weight iff it is up AND belongs to the
+// currently-active pool. Every other case yields weight 0. The only
+// state that produces flush=true is disabled.
+//
+//	state     in active pool   not in active pool   flush
+//	--------  --------------   ------------------   -----
+//	unknown   0                0                    no
+//	up        configured       0 (standby)          no
+//	down      0                0                    no
+//	paused    0                0                    no
+//	disabled  0                0                    yes
+//	removed   handled separately (AS deleted via delAS)
+//
+// Flush semantics: flush=true means "if the AS currently has a non-zero
+// weight in VPP, drop its existing flow-table entries when setting weight
+// to 0". The reconciler only acts on flush when transitioning (current
+// weight > 0), so steady-state syncs never re-flush. Failover demotion
+// (e.g. pool[1] up→standby when pool[0] recovers) does NOT flush — we
+// let those sessions drain naturally.
+func asFromBackend(poolIdx, activePool int, state health.State, cfgWeight int) (weight uint8, flush bool) {
+	switch state {
+	case health.StateUp:
+		if poolIdx == activePool {
+			return clampWeight(cfgWeight), false
+		}
+		return 0, false
+	case health.StateDisabled:
+		return 0, true
+	default:
+		// unknown, down, paused: off, drain existing flows naturally.
+		return 0, false
+	}
+}
 
 // ---- API call helpers ------------------------------------------------------
 
 // defaultFlowsTableLength is sent as NewFlowsTableLength in lb_add_del_vip_v2.
@@ -397,13 +523,14 @@ func delAS(ch *loggedChannel, prefix *net.IPNet, protocol uint8, port uint16, ad
 	return nil
 }
 
-func setASWeight(ch *loggedChannel, prefix *net.IPNet, protocol uint8, port uint16, a desiredAS) error {
+func setASWeight(ch *loggedChannel, prefix *net.IPNet, protocol uint8, port uint16, a desiredAS, flush bool) error {
 	req := &lb.LbAsSetWeight{
 		Pfx:       ip_types.NewAddressWithPrefix(*prefix),
 		Protocol:  protocol,
 		Port:      port,
 		AsAddress: ip_types.NewAddress(a.Address),
 		Weight:    a.Weight,
+		IsFlush:   flush,
 	}
 	reply := &lb.LbAsSetWeightReply{}
 	if err := ch.SendRequest(req).ReceiveReply(reply); err != nil {
@@ -417,7 +544,8 @@ func setASWeight(ch *loggedChannel, prefix *net.IPNet, protocol uint8, port uint
 		"protocol", protocolName(protocol),
 		"port", port,
 		"address", a.Address.String(),
-		"weight", a.Weight)
+		"weight", a.Weight,
+		"flush", flush)
 	return nil
 }