Frontend flush-on-down policy; v0.9.3
Adds a per-frontend flush-on-down flag (default true) that causes maglevd to set is_flush=true on lb_as_set_weight when a backend transitions to StateDown, tearing down existing flows pinned to the dead AS instead of just draining them. Rise/fall debouncing in the health checker already absorbs single-probe flaps, so a fall-counted down is almost always a real outage — and during a real outage, the client-visible "connection refused" oscillation window (where VPP keeps steering existing flows at a dead AS until the client retries) is a reliability gap worth closing by default. Operators who want the pre-flag drain-only behaviour can set flush-on-down: false per frontend.

BackendEffectiveWeight's truth table grows one axis: StateDown now returns (0, flushOnDown); StateDisabled still unconditionally flushes; StateUnknown / StatePaused still never flush. The unit test pins all four combinations.

The flag surfaces in the gRPC FrontendInfo message and in `maglevc show frontend <name>`, right next to src-ip-sticky.
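For illustration, a table-driven test along these lines would pin the flush column of the truth table. This is only a sketch, not the test shipped in this commit: the package name, test name, and case selection are assumptions, while the State constants and the BackendEffectiveWeight signature come from the diff below.

package lb // hypothetical package name; the real package isn't shown in this commit

import "testing"

func TestFlushHintSketch(t *testing.T) {
	cases := []struct {
		state       State
		flushOnDown bool
		wantFlush   bool
	}{
		{StateDown, true, true},      // flush-on-down enabled: tear down pinned flows
		{StateDown, false, false},    // flag off: pre-flag drain-only behaviour
		{StateDisabled, true, true},  // operator disable always flushes...
		{StateDisabled, false, true}, // ...regardless of the per-frontend flag
		{StatePaused, true, false},   // paused is drain-don't-kill, never flushes
		{StateUnknown, true, false},  // pre-probe unknown never flushes
	}
	for _, c := range cases {
		// Backend sits in the active pool (poolIdx == activePool) with a
		// configured weight of 10; every non-up state must still yield weight 0.
		w, flush := BackendEffectiveWeight(0, 0, c.state, 10, c.flushOnDown)
		if w != 0 || flush != c.wantFlush {
			t.Errorf("%v flushOnDown=%v: got (%d, %v), want (0, %v)",
				c.state, c.flushOnDown, w, flush, c.wantFlush)
		}
	}
}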
@@ -27,27 +27,39 @@ func ActivePoolIndex(fe config.Frontend, states map[string]State) int {
 }
 
 // BackendEffectiveWeight is the pure mapping from (pool index, active pool,
-// backend state, config weight) to the desired VPP AS weight and flush hint.
-// This is the single source of truth for the state → dataplane rule.
+// backend state, config weight, flush-on-down policy) to the desired VPP AS
+// weight and flush hint. This is the single source of truth for the state
+// → dataplane rule.
 //
 // A backend gets its configured weight iff it is up AND belongs to the
-// currently-active pool. Every other case yields weight 0. Only StateDisabled
-// produces flush=true (immediate session teardown).
+// currently-active pool. Every other case yields weight 0.
 //
+// The flush hint controls whether VPP tears down existing flows pinned to
+// the AS on the weight update (is_flush=true on lb_as_set_weight) or merely
+// stops accepting new flows (drain, keep existing). StateDisabled always
+// flushes — it's an operator-driven "this AS is going away" signal. StateDown
+// flushes iff the frontend has flush-on-down enabled; the default is true,
+// because rise/fall debouncing in the health checker already absorbs flaps
+// and a fall-counted down is almost always a real outage the operator wants
+// cleared from the session table fast. Unknown / paused never flush —
+// unknown is pre-probe, and paused is an explicit drain-don't-kill signal.
+//
 // state        in active pool   not in active pool   flush
-// --------     --------------   ------------------   -----
+// --------     --------------   ------------------   ----------------
 // unknown      0                0                    no
 // up           configured       0 (standby)          no
-// down         0                0                    no
+// down         0                0                    flushOnDown
 // paused       0                0                    no
 // disabled     0                0                    yes
-func BackendEffectiveWeight(poolIdx, activePool int, state State, cfgWeight int) (weight uint8, flush bool) {
+func BackendEffectiveWeight(poolIdx, activePool int, state State, cfgWeight int, flushOnDown bool) (weight uint8, flush bool) {
 	switch state {
 	case StateUp:
 		if poolIdx == activePool {
 			return clampWeight(cfgWeight), false
 		}
 		return 0, false
+	case StateDown:
+		return 0, flushOnDown
 	case StateDisabled:
 		return 0, true
 	default:
@@ -63,7 +75,7 @@ func EffectiveWeights(fe config.Frontend, states map[string]State) map[int]map[s
 	for poolIdx, pool := range fe.Pools {
 		out[poolIdx] = make(map[string]uint8, len(pool.Backends))
 		for bName, pb := range pool.Backends {
-			w, _ := BackendEffectiveWeight(poolIdx, activePool, states[bName], pb.Weight)
+			w, _ := BackendEffectiveWeight(poolIdx, activePool, states[bName], pb.Weight, fe.FlushOnDown)
 			out[poolIdx][bName] = w
 		}
 	}
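EffectiveWeights above deliberately discards the flush hint; the hint matters at the point where maglevd pushes AS weights into VPP. The fragment below is only a sketch of that consumption site — the Dataplane interface, SetASWeight method, and applyBackend function are hypothetical stand-ins rather than code from this repository; what is grounded in the commit is that the second return value maps onto is_flush on lb_as_set_weight.

// Dataplane is a hypothetical stand-in for maglevd's VPP binding layer.
type Dataplane interface {
	// SetASWeight would issue lb_as_set_weight for one AS; flush maps onto is_flush.
	SetASWeight(backend string, weight uint8, flush bool) error
}

// applyBackend sketches how a reconciler could forward both return values of
// BackendEffectiveWeight to the dataplane instead of discarding the flush hint.
func applyBackend(dp Dataplane, fe config.Frontend, poolIdx, activePool int,
	bName string, cfgWeight int, state State) error {
	w, flush := BackendEffectiveWeight(poolIdx, activePool, state, cfgWeight, fe.FlushOnDown)
	return dp.SetASWeight(bName, w, flush)
}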