Frontend flush-on-down policy; v0.9.3
Adds a per-frontend flush-on-down flag (default true) that causes maglevd to set is_flush=true on lb_as_set_weight when a backend transitions to StateDown, tearing down existing flows pinned to the dead AS instead of merely draining them.

Rise/fall debouncing in the health checker already absorbs single-probe flaps, so a fall-counted down is almost always a real outage — and during a real outage, the client-visible "connection refused" oscillation window (where VPP keeps steering existing flows to a dead AS until the client retries) is a reliability regression worth closing by default. Operators who want the pre-flag drain-only behaviour can set flush-on-down: false per frontend.

BackendEffectiveWeight's truth table grows one axis: StateDown now returns (0, flushOnDown); StateDisabled still unconditionally flushes; StateUnknown / StatePaused still never flush. The unit test pins all four combinations.

The flag surfaces in the gRPC FrontendInfo message and in `maglevc show frontend <name>`, right next to src-ip-sticky.
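The truth table above could be sketched as follows. This is a minimal illustration, not the real maglevd code: the BackendState constants, the function signature, and the weight type are assumptions inferred from the commit message.

```go
package main

import "fmt"

// BackendState is a hypothetical enum mirroring the states named in the
// commit message (plus StateUp for the healthy case).
type BackendState int

const (
	StateUnknown BackendState = iota
	StateUp
	StateDown
	StateDisabled
	StatePaused
)

// BackendEffectiveWeight returns the weight to program for the AS and
// whether existing flows should be flushed (is_flush on lb_as_set_weight).
// Sketch only; the real signature may differ.
func BackendEffectiveWeight(state BackendState, weight uint32, flushOnDown bool) (uint32, bool) {
	switch state {
	case StateUp:
		return weight, false
	case StateDown:
		// Drain to weight 0; tear down pinned flows only when the
		// per-frontend flush-on-down flag is set.
		return 0, flushOnDown
	case StateDisabled:
		// Operator-disabled: always flush.
		return 0, true
	default:
		// StateUnknown / StatePaused: drain, never flush.
		return 0, false
	}
}

func main() {
	w, flush := BackendEffectiveWeight(StateDown, 10, true)
	fmt.Println(w, flush) // 0 true
}
```

The one-axis growth is visible in the StateDown arm: it is the only case whose flush decision depends on the new flag, which is what keeps the unit test at four pinned combinations.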
@@ -462,6 +462,7 @@ func runShowFrontend(ctx context.Context, client grpcapi.MaglevClient, args []st
 	_, _ = fmt.Fprintf(w, "%s\t%s\n", label("protocol"), info.Protocol)
 	_, _ = fmt.Fprintf(w, "%s\t%d\n", label("port"), info.Port)
 	_, _ = fmt.Fprintf(w, "%s\t%t\n", label("src-ip-sticky"), info.SrcIpSticky)
+	_, _ = fmt.Fprintf(w, "%s\t%t\n", label("flush-on-down"), info.FlushOnDown)
 	if info.Description != "" {
 		_, _ = fmt.Fprintf(w, "%s\t%s\n", label("description"), info.Description)
 	}