Bug fixes, config validation, SPA tightening, set-weight UI
This session covers three distinct arcs: correctness bug fixes in the
VPP sync path and frontend reducers, new config validation, and a
large polish pass on the web frontend (tighter layout, backend kebab
dialogs, a grouped per-frontend pool table, live config-reload re-sync).
- encap for a VIP is now derived from the backend address family,
not the VIP's. A v6 VIP with v4 backends is programmed as IP6_GRE4
(not the buggy IP6_GRE6), matching the VPP LB plugin's
requirement that encap reflects the tunnel inner family. desiredVIP
gained an Encap field populated in desiredFromFrontend.
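As a sketch of the corrected rule (type and constant names below are invented for illustration; the real derivation lives in the VPP sync path and desiredFromFrontend), the outer family comes from the VIP and the GRE inner family from the backends:

```go
package main

import "fmt"

type family int

const (
	ip4 family = iota
	ip6
)

// encapFor picks the encap label from the VIP family (outer) and the
// backend family (tunnel to the real servers). The bug was deriving
// both halves from the VIP, yielding IP6_GRE6 for a v6 VIP with v4
// backends.
func encapFor(vip, backend family) string {
	outer := "IP4"
	if vip == ip6 {
		outer = "IP6"
	}
	inner := "GRE4"
	if backend == ip6 {
		inner = "GRE6"
	}
	return outer + "_" + inner
}

func main() {
	fmt.Println(encapFor(ip6, ip4)) // IP6_GRE4, not the buggy IP6_GRE6
}
```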
- ActivePoolIndex now requires at least one backend in a pool to be
BOTH in StateUp AND pb.Weight>0 before the pool counts as active.
Previously a primary pool with every backend manually zeroed would
still win over a fallback with weight=100, so fallback traffic
never materialized. New TestActivePoolIndexWeightedFailover table
pins the rule in five subcases.
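The corrected selection rule can be sketched like this (minimal illustrative types, not the project's actual ones): pools are scanned in priority order and the first pool with at least one backend that is both up and nonzero-weight wins, so a fully-drained primary falls through to the fallback.

```go
package main

import "fmt"

type backend struct {
	state  string // "up", "down", "paused", ...
	weight int    // configured weight
}

type pool struct{ backends []backend }

// activePoolIndex returns the first pool containing a backend that is
// BOTH up AND has weight > 0, or -1 if none qualifies. Counting
// weight-0 backends as "active" was the bug: a primary with every
// backend manually zeroed still beat the fallback.
func activePoolIndex(pools []pool) int {
	for i, p := range pools {
		for _, b := range p.backends {
			if b.state == "up" && b.weight > 0 {
				return i
			}
		}
	}
	return -1
}

func main() {
	primary := pool{backends: []backend{{state: "up", weight: 0}}} // drained
	fallback := pool{backends: []backend{{state: "up", weight: 100}}}
	fmt.Println(activePoolIndex([]pool{primary, fallback})) // 1: fallback wins
}
```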
- SyncLBStateVIP gained a flushAddress parameter threaded through
reconcileVIP; it forces flush=true on the setASWeight call for a
specific backend regardless of the usual 0→N heuristic. Wires up
the explicit [flush] knob the CLI exposes.
- convertFrontend already enforced that backends within one frontend
share a family. New cross-frontend pass validateVIPFamilyConsistency
rejects configs where two frontends share a VIP address but carry
backends in different families — VPP's LB plugin requires every
VIP on a prefix to have the same encap type, so such a config
would fail at lb_add_del_vip_v2 time with
VNET_API_ERROR_INVALID_ARGUMENT (-73). Catching it at config load
turns a silent runtime failure into a clear startup error.
- Two new TestValidationErrors cases pin the behavior: mismatched
families are rejected; same-family frontends sharing one VIP address
are allowed.
- Proto adds `bool flush = 5` to SetWeightRequest. The RPC now
drives a VIP sync immediately after mutating config (fixing the
latent "weight change only takes effect at the next 30s periodic
reconcile" gap), passing flushAddress = backend IP when req.Flush
is true.
- maglevc grows an optional [flush] token: `set frontend F pool P
backend B weight N [flush]`. Implementation uses two Run closures
(runSetFrontendPoolBackendWeight and -Flush) because the tree
walker only puts slot tokens in args — literal keywords like
`flush` advance the node but don't appear in the arg list.
- docs/user-guide.md updated with the [flush] optional and a
three-paragraph explainer of the graceful-drain vs. flush
semantics at the VPP level.
- checker.ListFrontends now sorts alphabetically to match the
existing sort in ListBackends / ListHealthChecks — RPC responses
no longer shuffle VIPs per call. cmd/frontend/client.go also
sorts defensively in refreshAll so an old maglevd build renders
alphabetically too.
- backendFromProto was returning out.Transitions[n-1] as the
LastTransition, but maglevd stores (and the proto carries)
transitions newest-first, so [n-1] was actually the oldest.
Reverse on read, which normalizes the client's Transitions slice
to oldest-first and makes [n-1] genuinely the newest. LastTransition
now points at the actual latest transition record.
- applyBackendTransition (Go and TS) derives Enabled = state!="disabled"
so the two fields stay in lockstep — closed a drift window where
a recently re-enabled backend still rendered with a stuck
[disabled] tag. The tag was later removed entirely since state
and enabled carry the same information.
- Layout tightened substantially: "FRONTENDS" panel header removed,
zippy-summary and zippy-body paddings cut, backend-table row
padding dropped to 2px, per-pool <h3> removed. Pools now live in
a single consolidated table per frontend with a dedicated "pool"
column that shows the pool name only on the first row of each
group — classic grouped-table layout, maximally dense.
- Description moved inline into the Zippy summary as muted italic
text, freeing a vertical line per frontend card.
- formatVIPAddress() helper renders IPv6 VIPs as [addr]:port and
IPv4 as addr:port, matching RFC 3986 authority syntax.
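Go's standard library already implements this bracketing convention, so a helper along these lines suffices (the real formatVIPAddress may be written differently):

```go
package main

import (
	"fmt"
	"net"
	"strconv"
)

// formatVIPAddress renders a VIP as host:port, bracketing IPv6
// literals per RFC 3986 authority syntax. net.JoinHostPort adds the
// brackets whenever the host contains a colon.
func formatVIPAddress(addr string, port int) string {
	return net.JoinHostPort(addr, strconv.Itoa(port))
}

func main() {
	fmt.Println(formatVIPAddress("192.0.2.10", 443))  // 192.0.2.10:443
	fmt.Println(formatVIPAddress("2001:db8::1", 443)) // [2001:db8::1]:443
}
```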
- Pools with effective_weight=0 on every backend (standby
fallbacks, fully-drained primaries) render at opacity 0.35 on
their non-actions cells; the kebab column stays at full contrast
because its menu is still fully functional on standby backends.
- Config-reload propagation: a maglevd config-reload-done log
event triggers triggerConfigResync() on the frontend side —
refreshAll() runs off the event-dispatch goroutine, then a
BrowserEvent{Type:"resync"} is published through the broker.
writeEvent emits type="resync" as a named SSE frame so the
SPA's existing addEventListener("resync") handler picks it up
and calls fetchAllState → replaceAll.
- recomputeEffectiveWeights in stores/state.ts mirrors the
server-side health.EffectiveWeights logic so the SPA keeps
pool.effective_weight correct the moment a backend transitions,
without waiting for the 30s refresh. Fixed a nasty bug where
applyBackendEffectiveWeight wrote VIP-scoped vpp-lb-sync-as-*
event weights into every frontend sharing the backend,
corrupting frontends with different per-pool configured weights.
The old log-event reducer was removed; applyConfiguredWeight is
the narrower replacement used by the kebab set-weight flow.
- applyBackendTransition calls recomputeEffectiveWeights after
state updates so pool-failover transitions (primary ⇌ fallback)
reflect instantly in the UI.
- Confirmation dialogs via a new Modal primitive
(Portal-mounted to document.body, escape/click-outside close,
click-outside debounced on mousedown so mid-row-text-selection
drags don't dismiss).
- pause/resume/enable/disable each show a Modal with a consequence
paragraph explaining what hits live traffic ("will keep existing
flows", "will flush VPP's flow table", etc.). The disable commit
button is styled btn-danger red.
- set-weight action shows a Modal with a range slider (0-100,
seeded from the current configured weight, accent-colored live
numeric readout via <output>) plus a flush checkbox and a live-
swapping note/warn paragraph describing what will happen. On
commit, the SPA also updates its local store via
applyConfiguredWeight so the operator sees the new weight
immediately without waiting for the next refresh.
- ProbeHeartbeat is now state-aware: ▶ (play) at rest for up/
down/unknown backends, ⏸ (pause) for paused, ⏹ (stop) for
disabled/removed, ❤️ (heart) during an in-flight probe.
- Drop the probe-done event listener — fast probes (<10ms)
could fire probe-done in the same render tick as probe-start
and the heart would never visibly paint. Each probe-start now
runs a fixed 400ms scale-pop animation on a timer; subsequent
probe-start events reset the timer, so fast cadences produce a
continuous heart pulse.
- Fixed wrapper box (16x14 px, overflow hidden) so the row
doesn't jiggle when the glyph swaps between the narrow ▶/⏸/⏹
text glyphs and the wider ❤️ emoji.
- Brand wordmark changed from "maglev" to "vpp-maglev" and wrapped
in an <a> linking to https://git.ipng.ch/ipng/vpp-maglev. Logo
link changed to https://ipng.ch/. Both open in a new tab with
rel="noopener".
- .gitignore fix: `frontend`, `maglevc`, `maglevd` were matching
ANY file or directory with those names anywhere in the tree,
silently ignoring cmd/frontend and friends. Anchored with
leading slashes so only repo-root build artifacts match.
.gitignore (6 lines changed, vendored):

@@ -6,3 +6,9 @@ tests/.venv/
 tests/**/maglevd.log
 tests/**/clab-*/
 cmd/frontend/web/node_modules/
+# Binaries built at the repo root via `go build ./cmd/<name>/` (no -o).
+# Anchored with a leading slash so they don't also match the source
+# dirs under cmd/.
+/frontend
+/maglevc
+/maglevd

@@ -10,6 +10,7 @@ import (
 	"io"
 	"log/slog"
 	"net"
+	"sort"
 	"strings"
 	"sync"
 	"time"
@@ -125,6 +126,23 @@ func (c *maglevClient) BackendAction(ctx context.Context, name, action string) (
 	return backendFromProto(bi), nil
 }
 
+// SetBackendWeight runs the SetFrontendPoolBackendWeight gRPC call. A
+// fresh FrontendSnapshot is returned so admin callers get the
+// post-mutation effective weights in one round-trip.
+func (c *maglevClient) SetBackendWeight(ctx context.Context, frontend, pool, backend string, weight int32, flush bool) (*FrontendSnapshot, error) {
+	fi, err := c.api.SetFrontendPoolBackendWeight(ctx, &grpcapi.SetWeightRequest{
+		Frontend: frontend,
+		Pool:     pool,
+		Backend:  backend,
+		Weight:   weight,
+		Flush:    flush,
+	})
+	if err != nil {
+		return nil, err
+	}
+	return frontendFromProto(fi), nil
+}
+
 func (c *maglevClient) Start(ctx context.Context) {
 	go c.watchLoop(ctx)
 	go c.refreshLoop(ctx)
@@ -209,7 +227,12 @@ func (c *maglevClient) refreshAll(ctx context.Context) error {
 	if err != nil {
 		return fmt.Errorf("list frontends: %w", err)
 	}
+	// Sort alphabetically so the UI layout is stable across
+	// reloads/restarts. maglevd's checker.ListFrontends already sorts
+	// in current versions, but older builds don't — sort here too as
+	// a belt-and-braces guarantee.
 	frontendsOrder := append([]string(nil), fl.GetFrontendNames()...)
+	sort.Strings(frontendsOrder)
 	for _, name := range frontendsOrder {
 		fi, err := c.api.GetFrontend(rctx, &grpcapi.GetFrontendRequest{Name: name})
 		if err != nil {
@@ -360,6 +383,18 @@ func (c *maglevClient) handleEvent(ev *grpcapi.Event) {
 			attrs[a.GetKey()] = a.GetValue()
 		}
 		c.applyVPPLogHeartbeat(le.GetMsg())
+		// A config reload on maglevd can shuffle anything: add or
+		// remove frontends, change pool membership, flip configured
+		// weights, move backends between pools. Rather than try to
+		// incrementally update the cache for every possible change,
+		// refresh the whole maglevd state and tell every connected
+		// browser to re-hydrate from the fresh snapshot. Only the
+		// "-done" event triggers this, not "-start": a failed reload
+		// (which never emits "-done") leaves the running config
+		// unchanged, so no refresh is needed.
+		if le.GetMsg() == "config-reload-done" {
+			c.triggerConfigResync()
+		}
 		payload, _ := json.Marshal(LogEventPayload{
 			Level: le.GetLevel(),
 			Msg:   le.GetMsg(),
@@ -428,6 +463,43 @@ func (c *maglevClient) handleEvent(ev *grpcapi.Event) {
 	}
 }
 
+// triggerConfigResync runs refreshAll off the event-dispatch goroutine
+// (so the stream.Recv loop isn't blocked while the full config refetch
+// hits several gRPC calls) and then publishes a BrowserEvent of type
+// "resync" so every connected browser re-fetches /view/api/state from
+// the now-fresh cache. Fired in response to a maglevd "config-reload-
+// done" log event.
+//
+// The refresh-then-publish order matters: if we published first, the
+// SPA would fetchState from a stale cache and display old data until
+// the next 30s refresh tick. Running refreshAll synchronously inside
+// this goroutine closes that window.
+//
+// The resync event goes through the normal broker → ring buffer path,
+// so a browser that reconnects shortly after the reload (within the
+// 30s / 2000-event replay window) still sees the resync on its first
+// live event and re-hydrates without needing a separate out-of-band
+// signal.
+func (c *maglevClient) triggerConfigResync() {
+	go func() {
+		ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
+		defer cancel()
+		if err := c.refreshAll(ctx); err != nil {
+			slog.Warn("config-resync-refresh", "maglevd", c.name, "err", err)
+			// Publish anyway — the SPA's refetch will see the
+			// cache in whatever state refreshAll left it, and
+			// the periodic refreshLoop will retry. Better than
+			// silently dropping the signal.
+		}
+		c.broker.Publish(BrowserEvent{
+			Maglevd:  c.name,
+			Type:     "resync",
+			AtUnixNs: time.Now().UnixNano(),
+			Payload:  json.RawMessage("{}"),
+		})
+	}()
+}
+
 // applyFrontendState writes the given state into the cached frontend
 // snapshot. Called both by synthetic replay events on subscribe and by
 // live transitions afterwards.
@@ -479,11 +551,25 @@ func (c *maglevClient) applyBackendTransition(name string, tr *TransitionRecord)
 	defer c.mu.Unlock()
 	b, ok := c.cache.Backends[name]
 	if !ok {
+		// Partial-create fallback for a transition that arrives before
+		// the first refreshAll has seen this backend. The real fields
+		// (address, healthcheck, pool memberships) are filled in on
+		// the next refresh tick; here we just stamp Name so the entry
+		// exists.
 		b = &BackendSnapshot{Name: name}
 		c.cache.Backends[name] = b
 		c.cache.BackendsOrder = append(c.cache.BackendsOrder, name)
 	}
 	b.State = tr.To
+	// Derive Enabled from State. In maglevd, state="disabled" and
+	// config.enabled=false are two ways of expressing the same
+	// condition — DisableBackend / EnableBackend flip both together,
+	// and no other state corresponds to enabled=false. Keeping them
+	// in sync in the reducer closes a race where the cache's cached
+	// Enabled could lag behind state by up to a refreshLoop tick,
+	// causing the SPA to render a bogus [disabled] tag next to an
+	// "up" badge on a freshly-re-enabled backend.
+	b.Enabled = tr.To != "disabled"
 	b.LastTransition = tr
 	b.Transitions = append(b.Transitions, tr)
 	// Cap history to the most recent 20 entries to mirror what maglevd
@@ -566,8 +652,16 @@ func backendFromProto(bi *grpcapi.BackendInfo) *BackendSnapshot {
 		Enabled:     bi.GetEnabled(),
 		HealthCheck: bi.GetHealthcheck(),
 	}
-	for _, t := range bi.GetTransitions() {
-		out.Transitions = append(out.Transitions, transitionFromProto(t))
+	// maglevd stores and returns transitions newest-first (it prepends
+	// in health.Backend.transition()). The client stores them
+	// oldest-first so applyBackendTransition can simply append new
+	// events to the end. Reverse on read to reconcile the two
+	// conventions — then out.Transitions[n-1] is the newest, which is
+	// the correct LastTransition.
+	trs := bi.GetTransitions()
+	out.Transitions = make([]*TransitionRecord, len(trs))
+	for i, t := range trs {
+		out.Transitions[len(trs)-1-i] = transitionFromProto(t)
 	}
 	if n := len(out.Transitions); n > 0 {
 		out.LastTransition = out.Transitions[n-1]

@@ -124,17 +124,21 @@ func registerHandlers(mux *http.ServeMux, clients []*maglevClient, broker *Broke
 
 // handleAdminAPI dispatches mutation requests under /admin/api/.
 //
-// Currently the only supported shape is:
+// Supported shapes:
 //
 // POST /admin/api/{maglevd}/backend/{name}/{pause|resume|enable|disable}
+//	→ fresh BackendSnapshot as JSON
 //
-// The response body is the fresh BackendSnapshot (JSON) returned by
-// maglevd. The WatchEvents stream also delivers a transition event
-// so every connected browser converges through the normal reducer
-// path — the POST response is primarily for the originating SPA to
-// learn about failures immediately. Errors from the gRPC side are
-// surfaced as 400 (bad request / unknown action / unknown target)
-// or 502 (maglevd returned an error).
+// POST /admin/api/{maglevd}/frontend/{fe}/pool/{pool}/backend/{name}/weight
+//	body: {"weight": 0-100, "flush": bool}
+//	→ fresh FrontendSnapshot as JSON
+//
+// The WatchEvents stream also delivers a backend-transition (and, for
+// the weight case, no event — since the config mutation doesn't flip
+// the health state). The POST response is primarily for the
+// originating SPA to learn about failures and to refresh effective
+// weights immediately. Errors from the gRPC side are surfaced as
+// 400 (bad request) or 502 (maglevd returned an error).
 func handleAdminAPI(w http.ResponseWriter, r *http.Request, byName map[string]*maglevClient) {
 	if r.Method != http.MethodPost {
 		w.Header().Set("Allow", "POST")
@@ -142,17 +146,32 @@ func handleAdminAPI(w http.ResponseWriter, r *http.Request, byName map[string]*m
 		return
 	}
 	parts := strings.Split(strings.TrimPrefix(r.URL.Path, "/admin/api/"), "/")
-	// Expect: {maglevd} "backend" {name} {action}
-	if len(parts) != 4 || parts[1] != "backend" {
+	// Peel off the maglevd name (always the first segment).
+	if len(parts) < 2 {
 		http.NotFound(w, r)
 		return
 	}
-	maglevd, name, action := parts[0], parts[2], parts[3]
+	maglevd := parts[0]
 	c, ok := byName[maglevd]
 	if !ok {
 		http.NotFound(w, r)
 		return
 	}
+	rest := parts[1:]
+
+	switch {
+	// {maglevd}/backend/{name}/{action}
+	case len(rest) == 3 && rest[0] == "backend":
+		handleBackendLifecycle(w, r, c, rest[1], rest[2])
+	// {maglevd}/frontend/{fe}/pool/{pool}/backend/{name}/weight
+	case len(rest) == 7 && rest[0] == "frontend" && rest[2] == "pool" && rest[4] == "backend" && rest[6] == "weight":
+		handleBackendWeight(w, r, c, rest[1], rest[3], rest[5])
+	default:
+		http.NotFound(w, r)
+	}
+}
+
+func handleBackendLifecycle(w http.ResponseWriter, r *http.Request, c *maglevClient, name, action string) {
 	switch action {
 	case "pause", "resume", "enable", "disable":
 	default:
@@ -163,12 +182,43 @@ func handleAdminAPI(w http.ResponseWriter, r *http.Request, byName map[string]*m
 	defer cancel()
 	snap, err := c.BackendAction(ctx, name, action)
 	if err != nil {
-		slog.Warn("admin-backend-action", "maglevd", maglevd, "backend", name, "action", action, "err", err)
+		slog.Warn("admin-backend-action", "maglevd", c.name, "backend", name, "action", action, "err", err)
 		http.Error(w, err.Error(), http.StatusBadGateway)
 		return
 	}
 	slog.Info("admin-backend-action",
-		"maglevd", maglevd, "backend", name, "action", action, "state", snap.State)
+		"maglevd", c.name, "backend", name, "action", action, "state", snap.State)
+	writeJSON(w, snap)
+}
+
+type setWeightBody struct {
+	Weight int32 `json:"weight"`
+	Flush  bool  `json:"flush"`
+}
+
+func handleBackendWeight(w http.ResponseWriter, r *http.Request, c *maglevClient, frontend, pool, backend string) {
+	var body setWeightBody
+	if err := json.NewDecoder(r.Body).Decode(&body); err != nil {
+		http.Error(w, fmt.Sprintf("bad json: %v", err), http.StatusBadRequest)
+		return
+	}
+	if body.Weight < 0 || body.Weight > 100 {
+		http.Error(w, fmt.Sprintf("weight %d out of range [0, 100]", body.Weight), http.StatusBadRequest)
+		return
+	}
+	ctx, cancel := context.WithTimeout(r.Context(), 10*time.Second)
+	defer cancel()
+	snap, err := c.SetBackendWeight(ctx, frontend, pool, backend, body.Weight, body.Flush)
+	if err != nil {
+		slog.Warn("admin-set-weight",
+			"maglevd", c.name, "frontend", frontend, "pool", pool, "backend", backend,
+			"weight", body.Weight, "flush", body.Flush, "err", err)
+		http.Error(w, err.Error(), http.StatusBadGateway)
+		return
+	}
+	slog.Info("admin-set-weight",
+		"maglevd", c.name, "frontend", frontend, "pool", pool, "backend", backend,
+		"weight", body.Weight, "flush", body.Flush)
 	writeJSON(w, snap)
 }
@@ -264,6 +314,17 @@ func serveSSE(w http.ResponseWriter, r *http.Request, broker *Broker) {
 }
 
 func writeEvent(w http.ResponseWriter, ev deliveredEvent) error {
+	// "resync" goes out as a named SSE event so the SPA's existing
+	// addEventListener("resync", ...) handler fires (and not the
+	// default onmessage path). Every other event type keeps the
+	// default onmessage path with a JSON body. We still emit an id
+	// so a reconnecting browser can replay from the right point in
+	// the ring; the resync handler is idempotent (a duplicate
+	// replay just triggers a redundant fetchState).
+	if ev.Event.Type == "resync" {
+		_, err := fmt.Fprintf(w, "id: %s\nevent: resync\ndata: {}\n\n", ev.ID)
+		return err
+	}
 	body, err := json.Marshal(ev.Event)
 	if err != nil {
 		return err

cmd/frontend/web/dist/assets/index-BBNMNdtq.js (new file, vendored, 1 line):
  diff suppressed because one or more lines are too long

cmd/frontend/web/dist/assets/index-CxDuAfMR.css (new file, vendored, 1 line):
  diff suppressed because one or more lines are too long

cmd/frontend/web/dist/index.html (4 lines changed, vendored):

@@ -4,8 +4,8 @@
 	<meta charset="UTF-8" />
 	<meta name="viewport" content="width=device-width, initial-scale=1.0" />
 	<title>maglev</title>
-	<script type="module" crossorigin src="/view/assets/index-AsNHMKdQ.js"></script>
-	<link rel="stylesheet" crossorigin href="/view/assets/index-CrBeXDdb.css">
+	<script type="module" crossorigin src="/view/assets/index-BBNMNdtq.js"></script>
+	<link rel="stylesheet" crossorigin href="/view/assets/index-CxDuAfMR.css">
 </head>
 <body>
 	<div id="root"></div>

@@ -39,14 +39,22 @@ const App: Component = () => {
 			<div class="brand">
 				<a
 					class="brand-logo"
-					href="https://git.ipng.ch/ipng/vpp-maglev/"
+					href="https://ipng.ch/"
+					target="_blank"
+					rel="noopener"
+					title="IPng Networks"
+				>
+					<img src={logoUrl} alt="IPng" />
+				</a>
+				<a
+					class="brand-name"
+					href="https://git.ipng.ch/ipng/vpp-maglev"
 					target="_blank"
 					rel="noopener"
 					title="vpp-maglev on git.ipng.ch"
 				>
-					<img src={logoUrl} alt="IPng" />
+					<strong>vpp-maglev</strong>
 				</a>
-				<strong>maglev</strong>
 				{version() && (
 					<span class="version" title={`commit ${version()!.commit} · built ${version()!.date}`}>
 						{version()!.version} ({version()!.commit})

@@ -1,4 +1,4 @@
-import type { BackendSnapshot } from "../types";
+import type { BackendSnapshot, FrontendSnapshot } from "../types";
 
 export type BackendAction = "pause" | "resume" | "enable" | "disable";
 
@@ -22,3 +22,33 @@ export async function runBackendAction(
 	}
 	return (await r.json()) as BackendSnapshot;
 }
+
+// Set a backend's weight within a specific frontend/pool. When flush
+// is true, VPP's flow table for the backend is cleared so existing
+// sessions are dropped; when false, only the new-buckets mapping is
+// updated and existing flows keep draining to the backend.
+export async function setBackendWeight(
+	maglevd: string,
+	frontend: string,
+	pool: string,
+	backend: string,
+	weight: number,
+	flush: boolean,
+): Promise<FrontendSnapshot> {
+	const url =
+		`/admin/api/${encodeURIComponent(maglevd)}` +
+		`/frontend/${encodeURIComponent(frontend)}` +
+		`/pool/${encodeURIComponent(pool)}` +
+		`/backend/${encodeURIComponent(backend)}/weight`;
+	const r = await fetch(url, {
+		method: "POST",
+		credentials: "same-origin",
+		headers: { "Content-Type": "application/json" },
+		body: JSON.stringify({ weight, flush }),
+	});
+	if (!r.ok) {
+		const body = (await r.text()).trim();
+		throw new Error(body || `${r.status} ${r.statusText}`);
+	}
+	return (await r.json()) as FrontendSnapshot;
+}

@@ -2,13 +2,11 @@ import type {
|
|||||||
BackendEventPayload,
|
BackendEventPayload,
|
||||||
BrowserEvent,
|
BrowserEvent,
|
||||||
FrontendEventPayload,
|
FrontendEventPayload,
|
||||||
LogEventPayload,
|
|
||||||
MaglevdStatusPayload,
|
MaglevdStatusPayload,
|
||||||
VPPStatusPayload,
|
VPPStatusPayload,
|
||||||
} from "../types";
|
} from "../types";
|
||||||
import { fetchAllState } from "./rest";
|
import { fetchAllState } from "./rest";
|
||||||
import {
|
import {
|
||||||
applyBackendEffectiveWeight,
|
|
||||||
applyBackendTransition,
|
applyBackendTransition,
|
||||||
applyFrontendTransition,
|
applyFrontendTransition,
|
||||||
applyMaglevdStatus,
|
applyMaglevdStatus,
|
||||||
@@ -56,6 +54,9 @@ function dispatch(ev: BrowserEvent) {
|
|||||||
pushEvent(ev);
|
pushEvent(ev);
|
||||||
switch (ev.type) {
|
switch (ev.type) {
|
||||||
case "backend":
|
case "backend":
|
||||||
|
// The reducer also recomputes effective weights across every
|
||||||
|
// frontend so pool-failover transitions (primary ⇌ fallback)
|
||||||
|
// reflect instantly, without waiting for the 30s refresh.
|
||||||
applyBackendTransition(ev.maglevd, ev.payload as BackendEventPayload);
|
applyBackendTransition(ev.maglevd, ev.payload as BackendEventPayload);
|
||||||
break;
|
break;
|
||||||
case "frontend":
|
case "frontend":
|
||||||
@@ -68,30 +69,17 @@ function dispatch(ev: BrowserEvent) {
       applyVPPStatus(ev.maglevd, (ev.payload as VPPStatusPayload).state);
       break;
     case "log":
-      applyLogEvent(ev.maglevd, ev.payload as LogEventPayload);
+      // Log events are displayed in the DebugPanel but no longer
+      // mutate the state tree. The previous vpp-lb-sync-as-*
+      // routing was removed because a VIP-scoped event was being
+      // naively written into every frontend that shared the
+      // backend, corrupting effective_weight for frontends with
+      // different per-pool configured weights. Backend state
+      // changes (arriving via "backend" events above) are a
+      // sufficient trigger for recomputing effective weights
+      // locally, and the set-weight kebab action updates the
+      // store directly via applyConfiguredWeight on its POST
+      // success path.
       break;
   }
 }
-
-// applyLogEvent surfaces the few log messages that carry data we want to
-// reflect in the store. Probe-start/probe-done drive the heartbeat and are
-// handled by BackendRow watching the events signal directly; here we only
-// react to VPP LB sync mutations so the effective weight column updates
-// live when a backend is disabled, enabled, or reweighted.
-function applyLogEvent(maglevd: string, p: LogEventPayload) {
-  if (!p.msg.startsWith("vpp-lb-sync-as-")) return;
-  const attrs = p.attrs ?? {};
-  const address = attrs.address;
-  if (!address) return;
-  switch (p.msg) {
-    case "vpp-lb-sync-as-added":
-      applyBackendEffectiveWeight(maglevd, address, Number(attrs.weight ?? 0));
-      break;
-    case "vpp-lb-sync-as-removed":
-      applyBackendEffectiveWeight(maglevd, address, 0);
-      break;
-    case "vpp-lb-sync-as-weight-updated":
-      applyBackendEffectiveWeight(maglevd, address, Number(attrs.to ?? 0));
-      break;
-  }
-}
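The removed vpp-lb-sync-as-* routing failed because an address-scoped write fans out to every frontend that shares the backend. A minimal TypeScript sketch of that failure mode, using toy types rather than the app's actual store code:

```typescript
// Toy model (hypothetical names) of the removed address-scoped reducer.
type PB = { name: string; weight: number; effective_weight: number };
type FE = { name: string; pools: { backends: PB[] }[] };

// The same backend appears in two frontends with different configured
// weights (80 in "web", 20 in "api").
const frontends: FE[] = [
  { name: "web", pools: [{ backends: [{ name: "be1", weight: 80, effective_weight: 80 }] }] },
  { name: "api", pools: [{ backends: [{ name: "be1", weight: 20, effective_weight: 20 }] }] },
];

// Old behavior: a weight-update event scoped to ONE frontend's VIP was
// written into every pool-membership that referenced the backend name.
function oldApply(fes: FE[], backendName: string, weight: number) {
  for (const fe of fes)
    for (const pool of fe.pools)
      for (const pb of pool.backends)
        if (pb.name === backendName) pb.effective_weight = weight;
}

// The event carried "web"'s weight, but "api"'s row is overwritten too:
oldApply(frontends, "be1", 80);
```

The corruption is visible afterwards: the "api" membership reports effective_weight 80 even though its configured weight is 20, which is exactly why the replacement reducer is keyed on (frontend, pool, backend).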
@@ -1,54 +1,96 @@
 import { For, Show, createEffect, createSignal, onCleanup, type Component } from "solid-js";
-import { runBackendAction, type BackendAction } from "../api/admin";
+import { runBackendAction, setBackendWeight, type BackendAction } from "../api/admin";
+import Modal from "./Modal";
+import { applyConfiguredWeight } from "../stores/state";
 
 type Props = {
   maglevd: string;
+  frontend: string;
+  pool: string;
   backend: string;
   state: string;
+  // Current configured weight. Used to seed the weight dialog's
+  // number input so the operator sees the existing value and only
+  // has to change the digits that matter.
+  configuredWeight: number;
 };
 
+// MenuAction is what the user clicks. It maps 1:1 onto BackendAction
+// plus a virtual "weight" entry that opens the weight form rather
+// than firing an immediate lifecycle RPC.
+type MenuAction = BackendAction | "weight";
+
 type MenuItem = {
   label: string;
-  action: BackendAction;
+  action: MenuAction;
 };
 
-// Action set available per current backend state. Only the actions
-// that make sense for the current state are shown — e.g. "resume" is
-// meaningless on a running backend, and "enable" is meaningless on
-// anything except a disabled one. A backend in the "removed" state
-// has no actionable operations, so the whole kebab is suppressed.
+// Available items per state. "weight" is always present (operators
+// can adjust the configured weight in any state including paused
+// and disabled — the effective weight won't change until the state
+// lets it, but the config value is still meaningful). A removed
+// backend has no actionable operations, so the kebab is suppressed
+// entirely by returning an empty list.
 function itemsForState(state: string): MenuItem[] {
+  const weightItem: MenuItem = { label: "set weight…", action: "weight" };
   switch (state) {
     case "up":
     case "down":
     case "unknown":
       return [
+        weightItem,
         { label: "pause", action: "pause" },
         { label: "disable", action: "disable" },
       ];
     case "paused":
       return [
+        weightItem,
         { label: "resume", action: "resume" },
         { label: "disable", action: "disable" },
       ];
     case "disabled":
-      return [{ label: "enable", action: "enable" }];
+      return [weightItem, { label: "enable", action: "enable" }];
     default:
       return [];
   }
 }
 
+// consequenceText returns the "this will…" description shown in the
+// confirmation dialog for each lifecycle action. Spelling it out in
+// plain English makes the live-traffic impact unmistakable before
+// the operator commits.
+function consequenceText(action: BackendAction): string {
+  switch (action) {
+    case "pause":
+      return "This will stop health checks and set the weight to 0, but existing flows to this backend are kept. New traffic will be rerouted to other backends.";
+    case "resume":
+      return "This will restart health checks. The backend re-enters the 'unknown' state and will start receiving traffic once it probes up.";
+    case "disable":
+      return "This will stop health checks, set the weight to 0, AND flush VPP's flow table for this backend. Active sessions will be dropped immediately.";
+    case "enable":
+      return "This will restart health checks on a previously disabled backend. It re-enters the 'unknown' state and will start receiving traffic once it probes up.";
+  }
+}
+
 const BackendActionsMenu: Component<Props> = (props) => {
   const [open, setOpen] = createSignal(false);
-  const [busy, setBusy] = createSignal<BackendAction | undefined>();
+  const [dialog, setDialog] = createSignal<MenuAction | undefined>();
+  const [busy, setBusy] = createSignal(false);
   const [error, setError] = createSignal<string | undefined>();
 
+  // Weight-dialog form state. Seeded from the current configured
+  // weight each time the dialog opens so re-opening after a change
+  // shows the new value.
+  const [weightInput, setWeightInput] = createSignal(props.configuredWeight);
+  const [flushInput, setFlushInput] = createSignal(false);
+
   let wrapRef: HTMLDivElement | undefined;
 
   const items = () => itemsForState(props.state);
 
-  // Close on outside-click or Escape. The effect only installs its
-  // document listeners while the menu is open, so there's no cost on
-  // the typical closed-at-rest state.
+  // Close the popover on outside click or Escape while it's open.
+  // The dialog has its own Escape handler; this effect only runs
+  // when the kebab popover (not the dialog) is visible.
   createEffect(() => {
     if (!open()) return;
     const onMouseDown = (e: MouseEvent) => {
@@ -66,22 +108,59 @@ const BackendActionsMenu: Component<Props> = (props) => {
     });
   });
 
-  const run = async (action: BackendAction) => {
-    setBusy(action);
+  const openDialog = (action: MenuAction) => {
+    setOpen(false);
+    setError(undefined);
+    if (action === "weight") {
+      // Seed form from current value on each open.
+      setWeightInput(props.configuredWeight);
+      setFlushInput(false);
+    }
+    setDialog(action);
+  };
+
+  const closeDialog = () => {
+    if (busy()) return; // don't yank the modal out from under an in-flight call
+    setDialog(undefined);
+    setError(undefined);
+  };
+
+  const commit = async () => {
+    const action = dialog();
+    if (!action) return;
+    setBusy(true);
     setError(undefined);
     try {
+      if (action === "weight") {
+        const w = weightInput();
+        if (!Number.isFinite(w) || w < 0 || w > 100) {
+          throw new Error("weight must be an integer in [0, 100]");
+        }
+        const newWeight = Math.floor(w);
+        await setBackendWeight(
+          props.maglevd,
+          props.frontend,
+          props.pool,
+          props.backend,
+          newWeight,
+          flushInput(),
+        );
+        // Mirror the server-side mutation into our local store so
+        // the new weight (and any resulting effective-weight
+        // recompute) is visible instantly, without waiting for
+        // the next 30s refresh tick.
+        applyConfiguredWeight(props.maglevd, props.frontend, props.pool, props.backend, newWeight);
+      } else {
         await runBackendAction(props.maglevd, props.backend, action);
-      setOpen(false);
+      }
+      setDialog(undefined);
     } catch (err) {
       setError(`${err}`);
     } finally {
-      setBusy(undefined);
+      setBusy(false);
     }
   };
 
-  // If there are no valid actions for the current state, render
-  // nothing — the surrounding <td> stays an empty cell, no "dead"
-  // kebab tempting clicks.
   return (
     <Show when={items().length > 0}>
       <div class="kebab-wrap" ref={wrapRef}>
@@ -106,21 +185,110 @@ const BackendActionsMenu: Component<Props> = (props) => {
             type="button"
             class="kebab-item"
             role="menuitem"
-            disabled={busy() !== undefined}
-            onClick={() => run(item.action)}
+            onClick={() => openDialog(item.action)}
           >
-            {busy() === item.action ? `${item.label}…` : item.label}
+            {item.label}
           </button>
         )}
       </For>
-        <Show when={error()}>
-          <p class="kebab-error">{error()}</p>
-        </Show>
       </div>
       </Show>
+
+      <Show when={dialog()}>
+        {(action) => (
+          <Modal title={dialogTitle(action(), props.backend)} onClose={closeDialog}>
+            {action() === "weight" ? (
+              <div class="dialog-body">
+                <p class="dialog-target">
+                  <code>{props.backend}</code> in pool <code>{props.pool}</code> of frontend{" "}
+                  <code>{props.frontend}</code>
+                </p>
+                <label class="dialog-field">
+                  <span class="weight-slider-label">
+                    weight
+                    <output class="weight-slider-value">{weightInput()}</output>
+                  </span>
+                  <input
+                    type="range"
+                    class="weight-slider"
+                    min="0"
+                    max="100"
+                    step="1"
+                    value={weightInput()}
+                    onInput={(e) => setWeightInput(Number(e.currentTarget.value))}
+                  />
+                  <small>0–100; 0 keeps the backend in the pool but assigns it no traffic</small>
+                </label>
+                <label class="dialog-field checkbox">
+                  <input
+                    type="checkbox"
+                    checked={flushInput()}
+                    onChange={(e) => setFlushInput(e.currentTarget.checked)}
+                  />
+                  <span>flush existing flows</span>
+                </label>
+                <Show
+                  when={flushInput()}
+                  fallback={
+                    <p class="dialog-note">
+                      VPP's flow table is left alone. Existing sessions keep reaching this backend
+                      until they finish.
+                    </p>
+                  }
+                >
+                  <p class="dialog-warn">
+                    VPP's flow table will be cleared for this backend. Active sessions will be
+                    dropped immediately.
+                  </p>
+                </Show>
+              </div>
+            ) : (
+              <div class="dialog-body">
+                <p class="dialog-consequence">{consequenceText(action() as BackendAction)}</p>
+              </div>
+            )}
+
+            <Show when={error()}>
+              <p class="dialog-error">{error()}</p>
+            </Show>
+
+            <footer class="dialog-footer">
+              <button type="button" class="btn-secondary" onClick={closeDialog} disabled={busy()}>
+                cancel
+              </button>
+              <button
+                type="button"
+                classList={{
+                  "btn-primary": true,
+                  "btn-danger": action() === "disable" || (action() === "weight" && flushInput()),
+                }}
+                onClick={commit}
+                disabled={busy()}
+              >
+                {busy() ? "committing…" : "commit"}
+              </button>
+            </footer>
+          </Modal>
+        )}
+      </Show>
     </div>
   </Show>
 );
};
 
+function dialogTitle(action: MenuAction, backend: string): string {
+  switch (action) {
+    case "weight":
+      return `Set weight — ${backend}`;
+    case "pause":
+      return `Pause ${backend}?`;
+    case "resume":
+      return `Resume ${backend}?`;
+    case "disable":
+      return `Disable ${backend}?`;
+    case "enable":
+      return `Enable ${backend}?`;
+  }
+}
+
 export default BackendActionsMenu;
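The commit() path validates the slider value before the RPC: finite, within [0, 100], then truncated to an integer. Isolated as a standalone helper for clarity (parseWeight is an illustrative name, not one from the diff):

```typescript
// Hypothetical helper mirroring the weight validation in commit():
// reject non-finite or out-of-range values, truncate the rest.
function parseWeight(w: number): number {
  if (!Number.isFinite(w) || w < 0 || w > 100) {
    throw new Error("weight must be an integer in [0, 100]");
  }
  return Math.floor(w);
}
```

Truncating with Math.floor (rather than rounding) means a fractional input can only ever lower the weight, never push it above what the operator typed.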
cmd/frontend/web/src/components/Modal.tsx (new file, 48 lines)
@@ -0,0 +1,48 @@
+import { createEffect, onCleanup, type Component, type JSX } from "solid-js";
+import { Portal } from "solid-js/web";
+
+type Props = {
+  title: string;
+  onClose: () => void;
+  children: JSX.Element;
+};
+
+// Modal is a simple overlay primitive: a dark backdrop + a centered
+// card, portaled into document.body so it can't be clipped by a
+// table cell's overflow or trapped behind a z-index stack. Closes
+// on Escape, on explicit close-button click, or on backdrop click.
+// The backdrop listens on mousedown to decide whether the click
+// landed outside the card; that makes it robust against drags that
+// start inside and release outside (a user trying to select text).
+const Modal: Component<Props> = (props) => {
+  createEffect(() => {
+    const onKey = (e: KeyboardEvent) => {
+      if (e.key === "Escape") props.onClose();
+    };
+    document.addEventListener("keydown", onKey);
+    onCleanup(() => document.removeEventListener("keydown", onKey));
+  });
+
+  return (
+    <Portal mount={document.body}>
+      <div
+        class="modal-backdrop"
+        onMouseDown={(e) => {
+          if (e.target === e.currentTarget) props.onClose();
+        }}
+      >
+        <div class="modal-card" role="dialog" aria-modal="true" aria-label={props.title}>
+          <header class="modal-header">
+            <h3>{props.title}</h3>
+            <button type="button" class="modal-close" aria-label="close" onClick={props.onClose}>
+              {"\u00D7"}
+            </button>
+          </header>
+          <div class="modal-body">{props.children}</div>
+        </div>
+      </div>
+    </Portal>
+  );
+};
+
+export default Modal;
@@ -1,15 +1,51 @@
-import { createEffect, createSignal, type Component } from "solid-js";
+import { createEffect, createSignal, onCleanup, type Component } from "solid-js";
 import { events } from "../stores/events";
 import type { LogEventPayload } from "../types";
 
-type Props = { maglevd: string; backend: string };
+type Props = {
+  maglevd: string;
+  backend: string;
+  state: string;
+};
+
+// Glyphs shown in the first column of every backend row. The at-rest
+// glyph reflects the backend's lifecycle state so the column does
+// double duty as a quick visual cue:
+//
+//   ▶  active — will pop to a heart at each probe-start
+//   ⏸  paused — health checks stopped via PauseBackend
+//   ⏹  disabled / removed — no probes will run
+//   ❤️ probe fired — shown for POP_DURATION_MS after each probe-start
+//
+// We only listen to probe-start, not probe-done: fast probes can
+// complete in <10 ms on local health checks, which means start and
+// done arrive in the same render tick and the heart never visibly
+// paints. Instead a single probe-start kicks off a fixed-length pop
+// sequence (heart visible + scale animation) driven by a timer.
+// Re-entering probe-start during the sequence cancels and restarts
+// the timer, so fast cadences produce a continuous pulse.
+const GLYPH_IDLE = "\u25B6"; // ▶
+const GLYPH_PAUSED = "\u23F8"; // ⏸
+const GLYPH_STOP = "\u23F9"; // ⏹
+const GLYPH_HEART = "\u2764\uFE0F"; // ❤️
+const POP_DURATION_MS = 400;
+
+function idleGlyph(state: string): string {
+  switch (state) {
+    case "paused":
+      return GLYPH_PAUSED;
+    case "disabled":
+    case "removed":
+      return GLYPH_STOP;
+    default:
+      return GLYPH_IDLE;
+  }
+}
 
-// ProbeHeartbeat watches the event stream for probe-start/probe-done log
-// records targeted at this backend. It shows a heart while a probe is in
-// flight and a dot at rest. Success/failure is reflected by the backend's
-// state column, so this component is purely an activity indicator.
 const ProbeHeartbeat: Component<Props> = (props) => {
-  const [inFlight, setInFlight] = createSignal(false);
+  const [popping, setPopping] = createSignal(false);
+  let el: HTMLSpanElement | undefined;
+  let popTimer: number | undefined;
 
   createEffect(() => {
     const list = events();
@@ -18,13 +54,39 @@ const ProbeHeartbeat: Component<Props> = (props) => {
     if (ev.type !== "log" || ev.maglevd !== props.maglevd) return;
     const payload = ev.payload as LogEventPayload;
     if (payload.attrs?.backend !== props.backend) return;
-    if (payload.msg === "probe-start") setInFlight(true);
-    else if (payload.msg === "probe-done") setInFlight(false);
+    if (payload.msg !== "probe-start") return;
+
+    setPopping(true);
+    // Scale-pop on appearance so the heart visually lands even for
+    // <10 ms probes. The Web Animations API supersedes any still-
+    // running anim on the next call, so fast back-to-back probes
+    // re-trigger cleanly.
+    el?.animate(
+      [
+        { transform: "scale(1)" },
+        { transform: "scale(1.6)", offset: 0.25 },
+        { transform: "scale(1)" },
+      ],
+      { duration: POP_DURATION_MS, easing: "ease-out" },
+    );
+    // Each new probe-start resets the off-timer so the heart stays
+    // visible for the full pop duration. Fast cadences keep the
+    // heart continuously on; slow ones get a clean heart-then-idle
+    // transition when the timer expires.
+    if (popTimer !== undefined) clearTimeout(popTimer);
+    popTimer = window.setTimeout(() => {
+      setPopping(false);
+      popTimer = undefined;
+    }, POP_DURATION_MS);
+  });
+
+  onCleanup(() => {
+    if (popTimer !== undefined) clearTimeout(popTimer);
   });
 
   return (
-    <span class="probe-heartbeat" classList={{ "in-flight": inFlight() }}>
-      {inFlight() ? "\u2764\uFE0F" : "\u00B7"}
+    <span ref={el} class="probe-heartbeat" classList={{ "in-flight": popping() }}>
+      {popping() ? GLYPH_HEART : idleGlyph(props.state)}
     </span>
   );
 };
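The glyph selection in the heartbeat component is a pure function of (state, popping), which can be restated in isolation. A small sketch using the same glyph constants as the diff:

```typescript
// Pure restatement of the heartbeat glyph rule: a pop in flight wins;
// otherwise the lifecycle state picks the at-rest glyph.
const GLYPH_IDLE = "\u25B6"; // ▶ active
const GLYPH_PAUSED = "\u23F8"; // ⏸ paused
const GLYPH_STOP = "\u23F9"; // ⏹ disabled / removed
const GLYPH_HEART = "\u2764\uFE0F"; // ❤️ probe fired

function glyphFor(state: string, popping: boolean): string {
  if (popping) return GLYPH_HEART;
  switch (state) {
    case "paused":
      return GLYPH_PAUSED;
    case "disabled":
    case "removed":
      return GLYPH_STOP;
    default:
      return GLYPH_IDLE;
  }
}
```

Keeping this pure (no signal reads, no timers) is what makes the pop timer the only stateful moving part of the component.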
@@ -8,6 +8,48 @@ import type {
 } from "../types";
 import { tick } from "./tick";
 
+// recomputeEffectiveWeights mirrors the server-side
+// health.EffectiveWeights / ActivePoolIndex logic so the SPA can keep
+// pool.effective_weight correct the moment a backend transitions,
+// without waiting for the 30s refresh. Walking every frontend is cheap
+// — O(frontends × pools × backends-per-pool) with tiny constants —
+// and it's strictly a function of the backend state map, so there's no
+// risk of drift vs. the server as long as the rule stays the same.
+//
+// Rule: a backend gets its configured pool weight iff it is up AND
+// belongs to the currently-active pool; everything else is 0. The
+// active pool is the first pool containing a backend that is both
+// up AND has a non-zero configured weight — a pool whose up backends
+// are all weight=0 contributes no serving capacity and gets skipped
+// over in priority failover. Kept in lock-step with
+// internal/health/weights.go.
+function recomputeEffectiveWeights(snap: StateSnapshot) {
+  const stateOf: Record<string, string> = {};
+  for (const b of snap.backends) stateOf[b.name] = b.state;
+  for (const fe of snap.frontends) {
+    let activePool = 0;
+    for (let i = 0; i < fe.pools.length; i++) {
+      let anyServing = false;
+      for (const pb of fe.pools[i].backends) {
+        if (stateOf[pb.name] === "up" && pb.weight > 0) {
+          anyServing = true;
+          break;
+        }
+      }
+      if (anyServing) {
+        activePool = i;
+        break;
+      }
+    }
+    for (let i = 0; i < fe.pools.length; i++) {
+      for (const pb of fe.pools[i].backends) {
+        const st = stateOf[pb.name];
+        pb.effective_weight = st === "up" && i === activePool ? pb.weight : 0;
+      }
+    }
+  }
+}
+
 // FrontendState keys snapshots by maglevd name. A single store drives the
 // whole UI; reducers produce() into the right branch.
 export type FrontendState = {
@@ -40,12 +82,25 @@ export function applyBackendTransition(maglevd: string, p: BackendEventPayload)
       const b = snap.backends.find((x) => x.name === p.backend);
       if (!b) return;
       b.state = p.transition.to;
+      // Derive enabled from state — see the matching comment in
+      // cmd/frontend/client.go applyBackendTransition. state="disabled"
+      // and enabled=false are two expressions of the same condition
+      // in maglevd, so keeping them in sync locally closes a drift
+      // window where the UI would show the wrong [disabled] tag.
+      b.enabled = p.transition.to !== "disabled";
       b.last_transition = p.transition;
       if (!b.transitions) b.transitions = [];
       b.transitions.push(p.transition);
       if (b.transitions.length > 20) {
         b.transitions = b.transitions.slice(b.transitions.length - 20);
       }
+      // A backend state change can shift which pool is active and
+      // therefore which pool-memberships get non-zero effective
+      // weights. Recompute for every frontend — not just the one
+      // pointed at by this backend — because pool-failover is a
+      // per-frontend computation and the same backend can appear in
+      // multiple frontends with different pool placements.
+      recomputeEffectiveWeights(snap);
     }),
   );
 }
@@ -83,32 +138,50 @@ export function applyMaglevdStatus(maglevd: string, p: MaglevdStatusPayload) {
   );
 }
 
-// applyBackendEffectiveWeight updates the effective_weight of every pool
-// row that references the backend with the given address. Driven by the
-// vpp-lb-sync-as-* log events so the UI reflects VPP LB changes without
-// waiting for the 30s refresh tick.
-export function applyBackendEffectiveWeight(maglevd: string, address: string, weight: number) {
+// applyConfiguredWeight updates the configured weight of a specific
+// backend's pool-membership within a named frontend/pool, then
+// recomputes effective weights so pool-failover semantics stay
+// consistent. Called from the BackendActionsMenu after a successful
+// admin "set weight" POST so the UI reflects the change instantly
+// without waiting for the 30s refresh tick. Unlike the previous
+// log-event-driven reducer, this one is scoped to exactly the
+// pool-membership the operator edited, so it can't leak weights
+// across frontends that share the backend.
+export function applyConfiguredWeight(
+  maglevd: string,
+  frontend: string,
+  pool: string,
+  backend: string,
+  weight: number,
+) {
   setState(
     produce((s) => {
       const snap = s.byName[maglevd];
       if (!snap) return;
-      const b = snap.backends.find((x) => x.address === address);
-      if (!b) return;
-      for (const fe of snap.frontends) {
-        for (const pool of fe.pools) {
-          for (const pb of pool.backends) {
-            if (pb.name === b.name) {
-              pb.effective_weight = weight;
-            }
-          }
-        }
-      }
+      const fe = snap.frontends.find((f) => f.name === frontend);
+      if (!fe) return;
+      const p = fe.pools.find((x) => x.name === pool);
+      if (!p) return;
+      const pb = p.backends.find((x) => x.name === backend);
+      if (!pb) return;
+      pb.weight = weight;
+      recomputeEffectiveWeights(snap);
     }),
   );
 }
 
 // Helpers used by views.
 
+// formatVIPAddress renders an address:port string with IPv6 addresses
+// wrapped in square brackets. This matches the URL-authority
+// convention (RFC 3986 §3.2.2) — without the brackets the colons in
+// an IPv6 literal are ambiguous against the port separator. IPv4 is
+// left bare.
+export function formatVIPAddress(address: string, port: number): string {
+  if (address.includes(":")) return `[${address}]:${port}`;
+  return `${address}:${port}`;
+}
+
 export function lastTransitionAge(t?: TransitionRecord): string {
   // Subscribe to the 1s ticker so the age string updates live as a
   // real-time countdown. No effect on layout — the age column is
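The pool-failover rule that recomputeEffectiveWeights encodes — the active pool is the first pool with a backend that is both up and carries a non-zero configured weight — can be restated standalone. A minimal sketch with toy types (not the store's actual shapes), exercising the weighted-failover case this session fixes:

```typescript
// Hypothetical standalone restatement of the ActivePoolIndex rule.
type PoolBackend = { name: string; weight: number };
type Pool = { name: string; backends: PoolBackend[] };

function activePoolIndex(pools: Pool[], stateOf: Record<string, string>): number {
  for (let i = 0; i < pools.length; i++) {
    // A pool counts as active only if some backend is up AND weighted.
    if (pools[i].backends.some((pb) => stateOf[pb.name] === "up" && pb.weight > 0)) {
      return i;
    }
  }
  return 0; // nothing serving anywhere: fall back to the first pool
}

function effectiveWeights(pools: Pool[], stateOf: Record<string, string>): number[][] {
  const active = activePoolIndex(pools, stateOf);
  // A backend serves its configured weight iff it is up and in the active pool.
  return pools.map((pool, i) =>
    pool.backends.map((pb) => (stateOf[pb.name] === "up" && i === active ? pb.weight : 0)),
  );
}

// A primary pool whose only up backend is manually zeroed must lose to a
// weighted fallback — under the old "any up backend" rule it would not.
const pools: Pool[] = [
  { name: "primary", backends: [{ name: "a", weight: 0 }] },
  { name: "fallback", backends: [{ name: "b", weight: 100 }] },
];
const state: Record<string, string> = { a: "up", b: "up" };
```

With both backends up, activePoolIndex skips the zero-weight primary and selects the fallback, so fallback traffic actually materializes.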
@@ -2,7 +2,7 @@
   --bg: #fafafa;
   --bg-soft: #f0f0f0;
   --bg-card: #ffffff;
-  --fg: #1f2937;
+  --fg: #0f172a;
   --fg-muted: #6b7280;
   --border: #e5e7eb;
   --accent: #2563eb;
@@ -51,6 +51,13 @@
 .brand strong {
   font-size: 18px;
 }
+.brand-name {
+  color: inherit;
+  text-decoration: none;
+}
+.brand-name:hover {
+  color: var(--accent);
+}
 .brand-logo {
   display: inline-flex;
   align-items: center;
@@ -149,51 +156,44 @@
   text-decoration: line-through;
 }
 
-/* ---- frontend grid ---- */
+/* ---- frontend list ---- */
 
-.frontend-grid {
-  display: grid;
-  gap: 16px;
-  grid-template-columns: 1fr;
-}
-@media (min-width: 640px) {
-  .frontend-grid {
-    grid-template-columns: 1fr 1fr;
-  }
-}
-@media (min-width: 1024px) {
-  .frontend-grid {
-    grid-template-columns: repeat(3, 1fr);
-  }
-}
-
-.frontend-card {
-  background: var(--bg-card);
-  border: 1px solid var(--border);
-  border-radius: 6px;
-  padding: 12px;
-}
-.frontend-header h2 {
-  font-size: 16px;
-  margin-bottom: 4px;
+.frontend-list {
   display: flex;
+  flex-direction: column;
+  gap: 4px;
+  margin-bottom: 12px;
+}
+
+/* Zippy summary row for a frontend. Flex so every piece sits on a
+ * single line with consistent spacing, and flex-wrap lets a narrow
+ * viewport wrap cleanly rather than overflow. */
+.frontend-title {
+  display: inline-flex;
   align-items: center;
-  gap: 8px;
+  gap: 10px;
+  flex-wrap: wrap;
 }
-.frontend-meta {
-  display: flex;
-  gap: 8px;
-  color: var(--fg-muted);
-  font-size: 12px;
-}
-.frontend-meta .proto {
-  text-transform: uppercase;
+.frontend-title-name {
+  font-size: 15px;
   font-weight: 600;
 }
-.frontend-desc {
+.frontend-title-addr {
+  font-family: "SF Mono", Menlo, Consolas, monospace;
   font-size: 12px;
   color: var(--fg-muted);
-  margin-top: 4px;
+}
+.frontend-title-proto {
+  font-size: 11px;
+  font-weight: 600;
+  color: var(--fg-muted);
+  text-transform: uppercase;
+}
+.frontend-title-desc {
+  font-size: 12px;
+  color: var(--fg-muted);
+  font-style: italic;
+  font-weight: 400;
 }
 .tag {
   display: inline-block;
@@ -205,15 +205,17 @@
   margin-left: 4px;
 }
 
-.pool-block {
-  margin-top: 12px;
+/* Fixed table layout with per-column widths so every backend table
+ * renders identical columns regardless of which pool/frontend it
+ * lives in. The name column is the only auto-sized one, so it
+ * absorbs the remaining space — and since all tables have the same
+ * width (100% of the same zippy body width) and the same fixed
+ * column sums, the auto column is identical in all of them, and
+ * every column aligns vertically across pools and frontends. */
+.backend-table {
+  table-layout: fixed;
+  width: 100%;
 }
-.pool-name {
-  font-size: 13px;
-  color: var(--fg-muted);
-  margin-bottom: 4px;
-}
 
 .backend-table th,
 .backend-table td {
   white-space: nowrap;
@@ -223,13 +225,83 @@
   color: var(--fg-muted);
   text-transform: uppercase;
   border-bottom: 1px solid var(--border);
+  padding: 2px 8px;
 }
 .backend-table .numeric {
   text-align: right;
 }
+/* Pool-name column. Rendered only on the first row of each pool
+ * group; subsequent rows within the same pool leave this cell
+ * blank, producing a classic grouped-table look where the pool
+ * label appears once above its members. Wide enough to fit
+ * realistic pool names like "primary"/"fallback"/"canary".
+ *
+ * The header (<th>) keeps its default bold + uppercase + muted
+ * styling from .backend-table th; we only style the data cells
+ * here so the column still reads like every other header. */
+.backend-table .col-pool {
+  width: 10ch;
+}
+.backend-row .col-pool {
+  color: var(--fg-muted);
+  font-weight: 500;
+}
+/* Standby rows: every backend in this row's pool has
+ * effective_weight=0, meaning the pool isn't currently carrying
+ * traffic (standby fallback or fully-drained primary). Dims
+ * every column EXCEPT the actions cell, leaving the kebab icon,
+ * its popover menu, and the modal dialogs it opens at full
+ * contrast — those are all still functional (an operator can
+ * pause/disable/set-weight on a standby backend just as easily
+ * as on an active one), and dimming them would misleadingly
+ * imply they're disabled. Opacity is also multiplicative
+ * through the DOM tree, so excluding the td boundary keeps the
|
||||||
|
* popover and modal fully opaque regardless of where their
|
||||||
|
* DOM ends up mounted.
|
||||||
|
*
|
||||||
|
* Combined with the darker --fg base colour on active rows,
|
||||||
|
* 0.35 gives a clear two-tier contrast. */
|
||||||
|
.backend-row.pool-standby > td:not(.actions) {
|
||||||
|
opacity: 0.35;
|
||||||
|
}
|
||||||
|
/* Sized to comfortably fit the longest legitimate IPv6 form
|
||||||
|
* (e.g. 2001:0db8:85a3:0000:0000:8a2e:0370:7334, 39 chars) plus a
|
||||||
|
* little slack. Shorter IPv4 addresses just leave extra room
|
||||||
|
* before the state column, which is cheaper than clipping. */
|
||||||
|
.backend-table .col-address {
|
||||||
|
width: 42ch;
|
||||||
|
}
|
||||||
|
.backend-table .col-state {
|
||||||
|
width: 90px;
|
||||||
|
}
|
||||||
|
.backend-table .col-weight {
|
||||||
|
width: 80px;
|
||||||
|
}
|
||||||
|
.backend-table .col-effective {
|
||||||
|
width: 95px;
|
||||||
|
}
|
||||||
|
.backend-table .col-age {
|
||||||
|
width: 110px;
|
||||||
|
}
|
||||||
|
/* The name column clips with an ellipsis on the inner span rather
|
||||||
|
* than the td itself so the Flash halo box-shadow (which uses
|
||||||
|
* overflow: visible on td) still escapes the cell on adjacent
|
||||||
|
* numeric/state cells. */
|
||||||
|
.backend-row td.backend-name {
|
||||||
|
overflow: hidden;
|
||||||
|
}
|
||||||
|
.backend-name-text {
|
||||||
|
display: inline-block;
|
||||||
|
max-width: calc(100% - 22px); /* leave room for the heartbeat wrapper */
|
||||||
|
overflow: hidden;
|
||||||
|
text-overflow: ellipsis;
|
||||||
|
white-space: nowrap;
|
||||||
|
vertical-align: middle;
|
||||||
|
}
|
||||||
.backend-row td {
|
.backend-row td {
|
||||||
border-bottom: 1px solid var(--border);
|
border-bottom: 1px solid var(--border);
|
||||||
font-size: 13px;
|
font-size: 13px;
|
||||||
|
padding: 2px 8px;
|
||||||
}
|
}
|
||||||
.backend-row .backend-name {
|
.backend-row .backend-name {
|
||||||
font-weight: 500;
|
font-weight: 500;
|
||||||
@@ -242,7 +314,7 @@
|
|||||||
}
|
}
|
||||||
.backend-table th.actions,
|
.backend-table th.actions,
|
||||||
.backend-row td.actions {
|
.backend-row td.actions {
|
||||||
width: 24px;
|
width: 28px;
|
||||||
padding: 0 4px;
|
padding: 0 4px;
|
||||||
text-align: center;
|
text-align: center;
|
||||||
}
|
}
|
||||||
@@ -310,14 +382,219 @@
|
|||||||
white-space: normal;
|
white-space: normal;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/* ---- modal ---- */
|
||||||
|
|
||||||
|
.modal-backdrop {
|
||||||
|
position: fixed;
|
||||||
|
inset: 0;
|
||||||
|
background: rgba(15, 23, 42, 0.45);
|
||||||
|
display: flex;
|
||||||
|
align-items: center;
|
||||||
|
justify-content: center;
|
||||||
|
z-index: 100;
|
||||||
|
padding: 16px;
|
||||||
|
}
|
||||||
|
.modal-card {
|
||||||
|
background: var(--bg-card);
|
||||||
|
border: 1px solid var(--border);
|
||||||
|
border-radius: 6px;
|
||||||
|
box-shadow: 0 20px 50px rgba(0, 0, 0, 0.25);
|
||||||
|
width: 100%;
|
||||||
|
max-width: 480px;
|
||||||
|
display: flex;
|
||||||
|
flex-direction: column;
|
||||||
|
}
|
||||||
|
.modal-header {
|
||||||
|
display: flex;
|
||||||
|
align-items: center;
|
||||||
|
padding: 12px 16px;
|
||||||
|
border-bottom: 1px solid var(--border);
|
||||||
|
}
|
||||||
|
.modal-header h3 {
|
||||||
|
margin: 0;
|
||||||
|
font-size: 15px;
|
||||||
|
flex: 1;
|
||||||
|
}
|
||||||
|
.modal-close {
|
||||||
|
width: 28px;
|
||||||
|
height: 28px;
|
||||||
|
padding: 0;
|
||||||
|
font-size: 20px;
|
||||||
|
line-height: 1;
|
||||||
|
border: none;
|
||||||
|
background: transparent;
|
||||||
|
color: var(--fg-muted);
|
||||||
|
cursor: pointer;
|
||||||
|
border-radius: 3px;
|
||||||
|
}
|
||||||
|
.modal-close:hover {
|
||||||
|
background: var(--bg-soft);
|
||||||
|
color: var(--fg);
|
||||||
|
}
|
||||||
|
.modal-body {
|
||||||
|
padding: 16px;
|
||||||
|
}
|
||||||
|
|
||||||
|
/* ---- backend-action dialog contents ---- */
|
||||||
|
|
||||||
|
.dialog-body {
|
||||||
|
margin-bottom: 12px;
|
||||||
|
}
|
||||||
|
.dialog-target {
|
||||||
|
margin-bottom: 12px;
|
||||||
|
font-size: 13px;
|
||||||
|
color: var(--fg-muted);
|
||||||
|
}
|
||||||
|
.dialog-target code {
|
||||||
|
font-family: "SF Mono", Menlo, Consolas, monospace;
|
||||||
|
font-size: 13px;
|
||||||
|
background: var(--bg-soft);
|
||||||
|
padding: 1px 5px;
|
||||||
|
border-radius: 3px;
|
||||||
|
color: var(--fg);
|
||||||
|
}
|
||||||
|
.dialog-consequence {
|
||||||
|
font-size: 13px;
|
||||||
|
line-height: 1.5;
|
||||||
|
color: var(--fg);
|
||||||
|
}
|
||||||
|
.dialog-field {
|
||||||
|
display: flex;
|
||||||
|
flex-direction: column;
|
||||||
|
margin-bottom: 12px;
|
||||||
|
font-size: 13px;
|
||||||
|
}
|
||||||
|
.dialog-field > span {
|
||||||
|
font-weight: 500;
|
||||||
|
margin-bottom: 4px;
|
||||||
|
}
|
||||||
|
.dialog-field input[type="number"] {
|
||||||
|
font: inherit;
|
||||||
|
padding: 6px 8px;
|
||||||
|
border: 1px solid var(--border);
|
||||||
|
border-radius: 4px;
|
||||||
|
width: 100px;
|
||||||
|
}
|
||||||
|
.dialog-field .weight-slider-label {
|
||||||
|
display: flex;
|
||||||
|
align-items: baseline;
|
||||||
|
justify-content: space-between;
|
||||||
|
}
|
||||||
|
.dialog-field .weight-slider-value {
|
||||||
|
font-family: "SF Mono", Menlo, Consolas, monospace;
|
||||||
|
font-size: 18px;
|
||||||
|
font-weight: 600;
|
||||||
|
color: var(--accent);
|
||||||
|
min-width: 3ch;
|
||||||
|
text-align: right;
|
||||||
|
}
|
||||||
|
.dialog-field .weight-slider {
|
||||||
|
width: 100%;
|
||||||
|
margin: 4px 0;
|
||||||
|
accent-color: var(--accent);
|
||||||
|
}
|
||||||
|
.dialog-field small {
|
||||||
|
margin-top: 4px;
|
||||||
|
color: var(--fg-muted);
|
||||||
|
font-size: 11px;
|
||||||
|
}
|
||||||
|
.dialog-field.checkbox {
|
||||||
|
flex-direction: row;
|
||||||
|
align-items: center;
|
||||||
|
gap: 8px;
|
||||||
|
}
|
||||||
|
.dialog-field.checkbox > span {
|
||||||
|
margin-bottom: 0;
|
||||||
|
font-weight: 400;
|
||||||
|
}
|
||||||
|
.dialog-note {
|
||||||
|
font-size: 12px;
|
||||||
|
color: var(--fg-muted);
|
||||||
|
line-height: 1.5;
|
||||||
|
padding: 8px 10px;
|
||||||
|
background: var(--bg-soft);
|
||||||
|
border-radius: 4px;
|
||||||
|
}
|
||||||
|
.dialog-warn {
|
||||||
|
font-size: 12px;
|
||||||
|
color: #991b1b;
|
||||||
|
line-height: 1.5;
|
||||||
|
padding: 8px 10px;
|
||||||
|
background: #fee2e2;
|
||||||
|
border: 1px solid #fecaca;
|
||||||
|
border-radius: 4px;
|
||||||
|
font-weight: 500;
|
||||||
|
}
|
||||||
|
.dialog-error {
|
||||||
|
margin-top: 8px;
|
||||||
|
padding: 8px 10px;
|
||||||
|
background: #fee2e2;
|
||||||
|
color: #991b1b;
|
||||||
|
border-radius: 4px;
|
||||||
|
font-family: "SF Mono", Menlo, Consolas, monospace;
|
||||||
|
font-size: 12px;
|
||||||
|
white-space: pre-wrap;
|
||||||
|
word-break: break-word;
|
||||||
|
}
|
||||||
|
.dialog-footer {
|
||||||
|
display: flex;
|
||||||
|
justify-content: flex-end;
|
||||||
|
gap: 8px;
|
||||||
|
margin-top: 16px;
|
||||||
|
padding-top: 12px;
|
||||||
|
border-top: 1px solid var(--border);
|
||||||
|
}
|
||||||
|
.btn-primary,
|
||||||
|
.btn-secondary {
|
||||||
|
font: inherit;
|
||||||
|
padding: 6px 14px;
|
||||||
|
border-radius: 4px;
|
||||||
|
cursor: pointer;
|
||||||
|
border: 1px solid var(--border);
|
||||||
|
}
|
||||||
|
.btn-primary {
|
||||||
|
background: var(--accent);
|
||||||
|
color: white;
|
||||||
|
border-color: var(--accent);
|
||||||
|
}
|
||||||
|
/* The background is set explicitly on :hover to override the generic
|
||||||
|
* button:hover rule in reset.css. Without this, .btn-primary:hover
|
||||||
|
* would only override `filter` and the generic rule's near-white
|
||||||
|
* var(--bg-soft) background would still apply, making a blue button
|
||||||
|
* turn white on hover. */
|
||||||
|
.btn-primary:hover:not(:disabled) {
|
||||||
|
background: var(--accent);
|
||||||
|
filter: brightness(1.12);
|
||||||
|
}
|
||||||
|
.btn-primary.btn-danger {
|
||||||
|
background: var(--state-down);
|
||||||
|
border-color: var(--state-down);
|
||||||
|
}
|
||||||
|
.btn-primary.btn-danger:hover:not(:disabled) {
|
||||||
|
background: var(--state-down);
|
||||||
|
filter: brightness(1.12);
|
||||||
|
}
|
||||||
|
.btn-secondary {
|
||||||
|
background: transparent;
|
||||||
|
color: var(--fg);
|
||||||
|
}
|
||||||
|
.btn-secondary:hover:not(:disabled) {
|
||||||
|
background: var(--bg-soft);
|
||||||
|
}
|
||||||
|
.btn-primary:disabled,
|
||||||
|
.btn-secondary:disabled {
|
||||||
|
opacity: 0.6;
|
||||||
|
cursor: wait;
|
||||||
|
}
|
||||||
|
|
||||||
/* ---- probe heartbeat ---- */
|
/* ---- probe heartbeat ---- */
|
||||||
|
|
||||||
/* Fixed-box wrapper so the row doesn't jiggle when the glyph swaps
|
/* Fixed-box wrapper so the row doesn't jiggle when the glyph swaps
|
||||||
* between "·" (very narrow) and "❤️" (wide emoji with a different
|
* between the text-style play/pause/stop glyphs (▶ ⏸ ⏹) and the
|
||||||
* font metric). Width is picked to comfortably contain the heart at
|
* wider heart emoji (❤️). The scale-pop animation uses
|
||||||
* the declared font-size, line-height is locked so the emoji doesn't
|
* transform-origin: center so the glyph expands in place. Overflow
|
||||||
* push the row baseline, and overflow is hidden as a safety net in
|
* is hidden as a safety net in case a platform renders the emoji
|
||||||
* case a platform renders the emoji even wider.
|
* wider than the box.
|
||||||
*/
|
*/
|
||||||
.probe-heartbeat {
|
.probe-heartbeat {
|
||||||
display: inline-block;
|
display: inline-block;
|
||||||
@@ -326,13 +603,15 @@
|
|||||||
line-height: 14px;
|
line-height: 14px;
|
||||||
margin-right: 6px;
|
margin-right: 6px;
|
||||||
text-align: center;
|
text-align: center;
|
||||||
font-size: 10px;
|
font-size: 11px;
|
||||||
color: var(--state-disabled);
|
color: var(--state-disabled);
|
||||||
overflow: hidden;
|
overflow: hidden;
|
||||||
vertical-align: middle;
|
vertical-align: middle;
|
||||||
|
transform-origin: center center;
|
||||||
}
|
}
|
||||||
.probe-heartbeat.in-flight {
|
.probe-heartbeat.in-flight {
|
||||||
color: inherit;
|
color: inherit;
|
||||||
|
font-size: 10px;
|
||||||
}
|
}
|
||||||
|
|
||||||
/* ---- banners & loading ---- */
|
/* ---- banners & loading ---- */
|
||||||
@@ -363,18 +642,17 @@
|
|||||||
/* ---- zippy ---- */
|
/* ---- zippy ---- */
|
||||||
|
|
||||||
.zippy {
|
.zippy {
|
||||||
margin-top: 16px;
|
|
||||||
border: 1px solid var(--border);
|
border: 1px solid var(--border);
|
||||||
border-radius: 6px;
|
border-radius: 6px;
|
||||||
background: var(--bg-card);
|
background: var(--bg-card);
|
||||||
}
|
}
|
||||||
.zippy summary {
|
.zippy summary {
|
||||||
padding: 8px 12px;
|
padding: 4px 12px;
|
||||||
cursor: pointer;
|
cursor: pointer;
|
||||||
font-weight: 500;
|
font-weight: 500;
|
||||||
}
|
}
|
||||||
.zippy-body {
|
.zippy-body {
|
||||||
padding: 8px 12px;
|
padding: 4px 10px 6px;
|
||||||
border-top: 1px solid var(--border);
|
border-top: 1px solid var(--border);
|
||||||
}
|
}
|
||||||
.zippy-title {
|
.zippy-title {
|
||||||
|
|||||||
@@ -9,6 +9,18 @@ import { isAdmin } from "../stores/mode";
 
 type Props = {
   maglevd: string;
+  frontend: string;
+  pool: string;
+  // showPool=true renders the pool name in the first column. Set
+  // only on the first backend row of each pool; subsequent rows in
+  // the same pool leave the cell blank, giving the "grouped table"
+  // look where the pool label appears once above its members.
+  showPool: boolean;
+  // poolActive=false means every backend in this row's pool has
+  // effective_weight=0 right now — a standby fallback or a fully
+  // drained primary. The row is rendered dimmer so the operator
+  // can scan which pool is actually carrying traffic.
+  poolActive: boolean;
   backend: BackendSnapshot;
   poolBackend: PoolBackendSnapshot;
 };
@@ -16,11 +28,15 @@ type Props = {
 const BackendRow: Component<Props> = (props) => {
   const b = () => props.backend;
   return (
-    <tr class="backend-row" data-state={b().state}>
+    <tr
+      class="backend-row"
+      classList={{ "pool-standby": !props.poolActive }}
+      data-state={b().state}
+    >
+      <td class="col-pool">{props.showPool ? props.pool : ""}</td>
       <td class="backend-name">
-        <ProbeHeartbeat maglevd={props.maglevd} backend={b().name} />
-        {b().name}
-        {!b().enabled && <span class="tag">[disabled]</span>}
+        <ProbeHeartbeat maglevd={props.maglevd} backend={b().name} state={b().state} />
+        <span class="backend-name-text">{b().name}</span>
       </td>
       <td class="backend-address">{b().address}</td>
       <td>
@@ -37,7 +53,14 @@ const BackendRow: Component<Props> = (props) => {
       <td class="age">{lastTransitionAge(b().last_transition)}</td>
       <Show when={isAdmin}>
         <td class="actions">
-          <BackendActionsMenu maglevd={props.maglevd} backend={b().name} state={b().state} />
+          <BackendActionsMenu
+            maglevd={props.maglevd}
+            frontend={props.frontend}
+            pool={props.pool}
+            backend={b().name}
+            state={b().state}
+            configuredWeight={props.poolBackend.weight}
+          />
         </td>
       </Show>
     </tr>
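The grouped-table rendering above hangs on one rule: the pool label is shown only on the first row of each pool. A minimal Go sketch of that rule, detached from the SolidJS component (the `row`/`groupRows` names are illustrative, not from the frontend code):

```go
package main

import "fmt"

// row mirrors what the UI renders: one row per (pool, backend), with
// the pool label materialized only on the pool's first row.
type row struct {
	Pool     string
	Backend  string
	ShowPool bool
}

// groupRows flattens pools (in a fixed order) into display rows,
// setting ShowPool exactly where the BackendRow component would.
func groupRows(pools map[string][]string, order []string) []row {
	var rows []row
	for _, pool := range order {
		for i, b := range pools[pool] {
			rows = append(rows, row{Pool: pool, Backend: b, ShowPool: i == 0})
		}
	}
	return rows
}

func main() {
	rows := groupRows(map[string][]string{
		"primary":  {"nginx0", "nginx1"},
		"fallback": {"nginx2"},
	}, []string{"primary", "fallback"})
	for _, r := range rows {
		label := ""
		if r.ShowPool {
			label = r.Pool
		}
		fmt.Printf("%-8s  %s\n", label, r.Backend)
	}
}
```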
@@ -3,74 +3,93 @@ import type { FrontendSnapshot, StateSnapshot } from "../types";
 import BackendRow from "./BackendRow";
 import StatusBadge from "../components/StatusBadge";
 import Flash from "../components/Flash";
+import Zippy from "../components/Zippy";
 import { isAdmin } from "../stores/mode";
+import { formatVIPAddress } from "../stores/state";
 
 type Props = {
   snap: StateSnapshot;
   frontend: FrontendSnapshot;
 };
 
+// FrontendCard is rendered as a Zippy so a deployment with many VIPs
+// can collapse the frontends it doesn't care about. The title line
+// carries the frontend name, a live state badge, address:port, the
+// protocol, the sticky marker, and (when set) the description.
+//
+// The body renders a single consolidated backend table with one row
+// per (pool, backend). The first column holds the pool name, shown
+// only on the first row of each pool and blank on subsequent rows —
+// a classic grouped-table layout that keeps rows dense while still
+// making pool grouping instantly scannable.
 const FrontendCard: Component<Props> = (props) => {
   const backendByName = () => Object.fromEntries(props.snap.backends.map((b) => [b.name, b]));
   const fe = () => props.frontend;
 
-  return (
-    <section class="frontend-card">
-      <header class="frontend-header">
-        <h2>
-          {fe().name}
-          <Flash value={fe().state ?? "unknown"}>
-            <StatusBadge state={fe().state ?? "unknown"} />
-          </Flash>
-        </h2>
-        <div class="frontend-meta">
-          <span class="addr">
-            {fe().address}:{fe().port}
-          </span>
-          <span class="proto">{fe().protocol.toUpperCase()}</span>
-          {fe().src_ip_sticky && <span class="tag">sticky</span>}
-        </div>
-        {fe().description && <p class="frontend-desc">{fe().description}</p>}
-      </header>
+  const title = (
+    <span class="frontend-title">
+      <span class="frontend-title-name">{fe().name}</span>
+      <Flash value={fe().state ?? "unknown"}>
+        <StatusBadge state={fe().state ?? "unknown"} />
+      </Flash>
+      <span class="frontend-title-addr">{formatVIPAddress(fe().address, fe().port)}</span>
+      <span class="frontend-title-proto">{fe().protocol.toUpperCase()}</span>
+      {fe().src_ip_sticky && <span class="tag">sticky</span>}
+      {fe().description && <span class="frontend-title-desc">{fe().description}</span>}
+    </span>
+  );
 
-      <For each={fe().pools}>
-        {(pool) => (
-          <div class="pool-block">
-            <h3 class="pool-name">pool: {pool.name}</h3>
-            <table class="backend-table">
-              <thead>
-                <tr>
-                  <th>backend</th>
-                  <th>address</th>
-                  <th>state</th>
-                  <th class="numeric">weight</th>
-                  <th class="numeric">effective</th>
-                  <th>last transition</th>
-                  <Show when={isAdmin}>
-                    <th class="actions" />
-                  </Show>
-                </tr>
-              </thead>
-              <tbody>
-                <For each={pool.backends}>
-                  {(pb) => {
-                    const backend = backendByName()[pb.name];
-                    if (!backend) return null;
-                    return (
-                      <BackendRow
-                        maglevd={props.snap.maglevd.name}
-                        backend={backend}
-                        poolBackend={pb}
-                      />
-                    );
-                  }}
-                </For>
-              </tbody>
-            </table>
-          </div>
-        )}
-      </For>
-    </section>
+  return (
+    <Zippy title={title} open>
+      <table class="backend-table">
+        <thead>
+          <tr>
+            <th class="col-pool">pool</th>
+            <th class="col-name">backend</th>
+            <th class="col-address">address</th>
+            <th class="col-state">state</th>
+            <th class="col-weight numeric">weight</th>
+            <th class="col-effective numeric">effective</th>
+            <th class="col-age">last transition</th>
+            <Show when={isAdmin}>
+              <th class="col-actions actions" />
+            </Show>
+          </tr>
+        </thead>
+        <tbody>
+          <For each={fe().pools}>
+            {(pool) => {
+              // A pool is "active" when at least one of its backends
+              // is currently serving traffic (effective_weight > 0).
+              // Inactive pools — standby fallbacks, fully-drained
+              // primaries — have every row rendered dimmer so the
+              // operator can see at a glance which pool is actually
+              // carrying traffic right now.
+              const poolActive = () => pool.backends.some((pb) => pb.effective_weight > 0);
+              return (
+                <For each={pool.backends}>
+                  {(pb, idx) => {
+                    const backend = backendByName()[pb.name];
+                    if (!backend) return null;
+                    return (
+                      <BackendRow
+                        maglevd={props.snap.maglevd.name}
+                        frontend={fe().name}
+                        pool={pool.name}
+                        showPool={idx() === 0}
+                        poolActive={poolActive()}
+                        backend={backend}
+                        poolBackend={pb}
+                      />
+                    );
+                  }}
+                </For>
+              );
+            }}
+          </For>
+        </tbody>
+      </table>
+    </Zippy>
   );
 };
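The UI's `poolActive` (any backend with effective_weight > 0) mirrors the daemon-side failover rule this session fixed: a pool only counts as active when at least one backend is both up and configured with weight > 0, so a fully drained primary yields to the fallback. A toy Go sketch of that selection rule (the `poolBackend`/`activePoolIndex` names are illustrative, not the daemon's actual types):

```go
package main

import "fmt"

// poolBackend is a stand-in for a pool member: health state plus the
// operator-configured weight.
type poolBackend struct {
	Up     bool
	Weight int
}

// activePoolIndex returns the first pool with at least one backend
// that is BOTH up AND weighted > 0, or -1 if none qualifies. A
// primary whose members are all manually zeroed no longer wins over
// a weighted fallback.
func activePoolIndex(pools [][]poolBackend) int {
	for i, pool := range pools {
		for _, pb := range pool {
			if pb.Up && pb.Weight > 0 {
				return i
			}
		}
	}
	return -1
}

func main() {
	primary := []poolBackend{{Up: true, Weight: 0}}    // up, but drained to 0
	fallback := []poolBackend{{Up: true, Weight: 100}} // standby with weight
	fmt.Println(activePoolIndex([][]poolBackend{primary, fallback})) // 1
}
```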
@@ -21,7 +21,7 @@ const Overview: Component = () => {
           {s().maglevd.last_error && `: ${s().maglevd.last_error}`}
         </div>
       </Show>
-      <div class="frontend-grid">
+      <div class="frontend-list">
         <For each={s().frontends}>{(fe) => <FrontendCard snap={s()} frontend={fe} />}</For>
       </div>
       <VPPInfoPanel info={s().vpp_info} state={s().vpp_state} />
@@ -110,12 +110,23 @@ func buildTree() *Node {
 		Help:     "modify a backend",
 		Children: []*Node{setBackendName},
 	}
-	// set frontend <name> pool <pool> backend <name> weight <0-100>
+	// set frontend <name> pool <pool> backend <name> weight <0-100> [flush]
+	//
+	// The tree walker only puts tokens from slot (Dynamic) nodes into
+	// args, so the literal "flush" keyword isn't visible in the arg
+	// list. We use two distinct Run functions to distinguish the two
+	// leaf paths instead — both share the same underlying helper.
+	setWeightFlush := &Node{
+		Word: "flush",
+		Help: "also drop VPP's flow table for this backend (otherwise only the new-buckets map is updated)",
+		Run:  runSetFrontendPoolBackendWeightFlush,
+	}
 	setWeightValue := &Node{
 		Word:    "<weight>",
 		Help:    "Set weight of a backend in a pool (0-100)",
 		Dynamic: dynNone, // accepts any integer; no tab-completion candidates
 		Run:     runSetFrontendPoolBackendWeight,
+		Children: []*Node{setWeightFlush},
 	}
 	setFrontendPoolBackendWeight := &Node{Word: "weight", Help: "set backend weight in pool", Children: []*Node{setWeightValue}}
 	setFrontendPoolBackendName := &Node{
@@ -666,8 +677,16 @@ func runResumeBackend(ctx context.Context, client grpcapi.MaglevClient, args []s
 }
 
 func runSetFrontendPoolBackendWeight(ctx context.Context, client grpcapi.MaglevClient, args []string) error {
+	return setFrontendPoolBackendWeight(ctx, client, args, false)
+}
+
+func runSetFrontendPoolBackendWeightFlush(ctx context.Context, client grpcapi.MaglevClient, args []string) error {
+	return setFrontendPoolBackendWeight(ctx, client, args, true)
+}
+
+func setFrontendPoolBackendWeight(ctx context.Context, client grpcapi.MaglevClient, args []string, flush bool) error {
 	if len(args) != 4 {
-		return fmt.Errorf("usage: set frontend <name> pool <pool> backend <name> weight <0-100>")
+		return fmt.Errorf("usage: set frontend <name> pool <pool> backend <name> weight <0-100> [flush]")
 	}
 	frontendName, poolName, backendName, weightStr := args[0], args[1], args[2], args[3]
 	weight, err := strconv.Atoi(weightStr)
@@ -681,6 +700,7 @@ func runSetFrontendPoolBackendWeight(ctx context.Context, client grpcapi.MaglevC
 		Pool:    poolName,
 		Backend: backendName,
 		Weight:  int32(weight),
+		Flush:   flush,
 	})
 	if err != nil {
 		return err
@@ -692,7 +712,12 @@ func runSetFrontendPoolBackendWeight(ctx context.Context, client grpcapi.MaglevC
 	}
 	for _, pb := range pool.Backends {
 		if pb.Name == backendName {
-			fmt.Printf("%s pool %s backend %s: weight set to %d\n", info.Name, pool.Name, pb.Name, pb.Weight)
+			flushNote := ""
+			if flush {
+				flushNote = " (flushed)"
+			}
+			fmt.Printf("%s pool %s backend %s: weight set to %d%s\n",
+				info.Name, pool.Name, pb.Name, pb.Weight, flushNote)
 			return nil
 		}
 	}
@@ -25,6 +25,7 @@ func TestExpandPathsRoot(t *testing.T) {
 		"set backend <name> disable",
 		"set backend <name> enable",
 		"set frontend <name> pool <pool> backend <backend> weight <weight>",
+		"set frontend <name> pool <pool> backend <backend> weight <weight> flush",
 		"watch events",
 		"watch events <opt>",
 		"config check",
@@ -172,10 +172,30 @@ set backend <name> disable  Stop probing entirely and remove the backend fr
 set backend <name> enable   Re-enable a disabled backend. A fresh probe goroutine is
                             started and the backend re-enters unknown state.
 
-set frontend <name> pool <pool> backend <name> weight <0-100>
+set frontend <name> pool <pool> backend <name> weight <0-100> [flush]
                             Set the weight of a backend within a pool. Weight 0 keeps
-                            the backend in the pool but assigns it no traffic.
-                            Takes effect immediately without reloading configuration.
+                            the backend in the pool but assigns it no traffic. Takes
+                            effect immediately: maglevd pushes the change into VPP
+                            via a targeted single-VIP reconcile, so there's no need
+                            to wait for the periodic sync tick.
+
+                            Without `flush`, the new weight is installed in Maglev's
+                            new-bucket mapping but VPP's flow table is left alone.
+                            Existing sessions keep reaching this backend until they
+                            naturally drain — useful for graceful draining where
+                            you want new connections to land elsewhere but don't
+                            want to reset any in-flight traffic.
+
+                            With `flush`, the corresponding application-server row
+                            is rewritten with `lb_as_set_weight(is_flush=true)`,
+                            which clears VPP's flow table entries for this backend.
+                            Existing sessions are dropped immediately — useful when
+                            the backend is being taken out of service for emergency
+                            reasons and you don't want to wait for flows to drain.
+
+                            Examples:
+                              set frontend web pool primary backend nginx0 weight 50
+                              set frontend web pool primary backend nginx0 weight 0 flush
 
 watch events                Stream all events (log, backend transitions, frontend)
   [num <n>]                 Stop after receiving n events.
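The drain-versus-flush split above, combined with the "usual 0→N heuristic" that the new flushAddress parameter overrides, reduces to one small predicate. A sketch of that decision (illustrative, not the daemon's exact code): a weight change normally flushes only when a backend comes back from fully drained, since stale flows must not survive a 0→N transition, while the explicit [flush] knob forces it for any transition.

```go
package main

import "fmt"

// shouldFlush reports whether a weight change should also drop VPP's
// flow table for the backend. forceFlush models the CLI's [flush]
// keyword; without it, only a 0→N transition flushes.
func shouldFlush(oldWeight, newWeight int, forceFlush bool) bool {
	return forceFlush || (oldWeight == 0 && newWeight > 0)
}

func main() {
	fmt.Println(shouldFlush(50, 0, false)) // false: graceful drain, sessions bleed off
	fmt.Println(shouldFlush(0, 50, false)) // true: 0→N heuristic, stale flows dropped
	fmt.Println(shouldFlush(50, 0, true))  // true: emergency removal via [flush]
}
```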
@@ -222,6 +222,7 @@ func (c *Checker) ListFrontends() []string {
 	for name := range c.cfg.Frontends {
 		names = append(names, name)
 	}
+	sort.Strings(names)
 	return names
 }
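The one-line `sort.Strings(names)` matters because Go deliberately randomizes map iteration order, so anything derived from ranging over a map differs run to run unless the keys are sorted first. A minimal illustration of the pattern (toy names, same shape as ListFrontends):

```go
package main

import (
	"fmt"
	"sort"
)

// sortedKeys collects a map's keys and sorts them, turning Go's
// randomized map iteration into a deterministic ordering.
func sortedKeys(m map[string]struct{}) []string {
	keys := make([]string, 0, len(m))
	for k := range m {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	return keys
}

func main() {
	frontends := map[string]struct{}{"web": {}, "api": {}, "dns": {}}
	fmt.Println(sortedKeys(frontends)) // [api dns web], every run
}
```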
@@ -7,6 +7,7 @@ import (
|
|||||||
"net"
|
"net"
|
||||||
"os"
|
"os"
|
||||||
"regexp"
|
"regexp"
|
||||||
|
"sort"
|
||||||
"strconv"
|
"strconv"
|
||||||
"strings"
|
"strings"
|
||||||
"time"
|
"time"
|
||||||
@@ -302,6 +303,21 @@ func convert(r *rawMaglev) (*Config, error) {
|
|||||||
cfg.Frontends[name] = fe
|
cfg.Frontends[name] = fe
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// ---- cross-frontend: VIP-address family consistency -----------------------
|
||||||
|
//
|
||||||
|
// VPP's LB plugin requires every VIP sharing a given IP prefix to use
|
||||||
|
// the same encap type (GRE4 vs GRE6) — even when the VIPs sit on
|
||||||
|
// different ports. The encap is determined by the backend address
|
||||||
|
// family (see internal/vpp/lbsync.go desiredFromFrontend). So two
|
||||||
|
// frontends on the same VIP address with backends in different
|
||||||
|
// families (one IPv4 pool, one IPv6 pool) cannot both be programmed
|
||||||
|
// into VPP: the second one fails at lb_add_del_vip_v2 time with
|
||||||
|
// VNET_API_ERROR_INVALID_ARGUMENT (-73). Catching it here turns the
|
||||||
|
// silent runtime failure into a clear config-load error.
|
||||||
|
if err := validateVIPFamilyConsistency(cfg); err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
|
||||||
// ---- vpp ------------------------------------------------------------------
|
// ---- vpp ------------------------------------------------------------------
|
||||||
// Runs last so structural errors in healthchecks/backends/frontends are
|
// Runs last so structural errors in healthchecks/backends/frontends are
|
||||||
// reported first; operators fix those, then we tell them about the VPP
|
// reported first; operators fix those, then we tell them about the VPP
|
||||||
@@ -579,6 +595,69 @@ func convertFrontend(name string, r *rawFrontend, backends map[string]Backend) (
	return fe, nil
}

// validateVIPFamilyConsistency walks cfg.Frontends, groups them by VIP
// address, and rejects any group whose members disagree on the backend
// address family used by their pools. See the call site in Parse for
// why this matters (VPP LB plugin limitation).
//
// Each frontend already has its own within-frontend family invariant
// (every backend in a frontend must share a family — enforced in
// convertFrontend). This check adds the cross-frontend dimension:
// frontends that happen to collide on the VIP address.
func validateVIPFamilyConsistency(cfg *Config) error {
	type seen struct {
		family       int
		frontendName string
	}
	byAddr := map[string]seen{}
	// Sort frontend names so the "first frontend on this address"
	// reported in errors is deterministic, independent of Go's
	// randomized map iteration.
	names := make([]string, 0, len(cfg.Frontends))
	for name := range cfg.Frontends {
		names = append(names, name)
	}
	sort.Strings(names)
	for _, name := range names {
		fe := cfg.Frontends[name]
		fam := frontendBackendFamily(cfg, fe)
		if fam == 0 {
			continue // no valid backends; family is unknowable
		}
		addr := fe.Address.String()
		if prev, ok := byAddr[addr]; ok {
			if prev.family != fam {
				return fmt.Errorf(
					"frontend %q: VIP address %s is also used by frontend %q with IPv%d backends, "+
						"but %q has IPv%d backends; VPP's LB plugin requires all VIPs sharing an "+
						"address to use the same encap (backend family), so this config cannot be "+
						"programmed — give the two frontends different VIP addresses",
					name, addr, prev.frontendName, prev.family, name, fam)
			}
			continue
		}
		byAddr[addr] = seen{family: fam, frontendName: name}
	}
	return nil
}

// frontendBackendFamily returns the address family (4 or 6) of the
// first valid backend in the frontend's first pool. Returns 0 when no
// backend is resolvable — convertFrontend already enforces that all
// backends in a frontend share a family, so the first one is
// authoritative.
func frontendBackendFamily(cfg *Config, fe Frontend) int {
	if len(fe.Pools) == 0 {
		return 0
	}
	for bName := range fe.Pools[0].Backends {
		if b, ok := cfg.Backends[bName]; ok && b.Address != nil {
			return ipFamily(b.Address)
		}
	}
	return 0
}

// ---- helpers ---------------------------------------------------------------

func parseOptionalIPFamily(s string, family int, field string) (net.IP, error) {
@@ -558,6 +558,86 @@ maglev:
`,
			errSub: "name must not be empty",
		},
		{
			// Regression: VPP's LB plugin requires every VIP sharing
			// a prefix to use the same encap type. Two frontends on
			// the same VIP address with mismatched backend families
			// can't both be programmed; catch it at config load so
			// the operator doesn't see a late vpp-reconciler-error.
			name: "cross-frontend VIP family mismatch",
			yaml: `
maglev:
  vpp:
    lb:
      ipv4-src-address: 10.0.0.1
      ipv6-src-address: 2001:db8::10
healthchecks:
  c:
    type: icmp
    interval: 1s
    timeout: 2s
backends:
  v4: {address: 10.0.0.2, healthcheck: c}
  v6: {address: 2001:db8::2, healthcheck: c}
frontends:
  web:
    address: 2001:db8::1
    protocol: tcp
    port: 443
    pools:
      - name: primary
        backends:
          v4: {}
  mail:
    address: 2001:db8::1
    protocol: tcp
    port: 993
    pools:
      - name: primary
        backends:
          v6: {}
`,
			errSub: "VIP address 2001:db8::1",
		},
		{
			// Sanity: two frontends sharing a VIP address with
			// matching backend families is fine — VPP's constraint
			// is about encap consistency, not about address reuse.
			name: "cross-frontend VIP address share with same family is allowed",
			yaml: `
maglev:
  vpp:
    lb:
      ipv4-src-address: 10.0.0.1
      ipv6-src-address: 2001:db8::10
healthchecks:
  c:
    type: icmp
    interval: 1s
    timeout: 2s
backends:
  v6a: {address: 2001:db8::2, healthcheck: c}
  v6b: {address: 2001:db8::3, healthcheck: c}
frontends:
  web:
    address: 2001:db8::1
    protocol: tcp
    port: 443
    pools:
      - name: primary
        backends:
          v6a: {}
  mail:
    address: 2001:db8::1
    protocol: tcp
    port: 993
    pools:
      - name: primary
        backends:
          v6b: {}
`,
			errSub: "",
		},
	}

	for _, tt := range tests {
@@ -1319,6 +1319,11 @@ type SetWeightRequest struct {
	Pool    string `protobuf:"bytes,2,opt,name=pool,proto3" json:"pool,omitempty"`
	Backend string `protobuf:"bytes,3,opt,name=backend,proto3" json:"backend,omitempty"`
	Weight  int32  `protobuf:"varint,4,opt,name=weight,proto3" json:"weight,omitempty"` // 0-100
	// flush, when true, also clears VPP's flow table for this backend
	// so existing sessions are torn down. When false (default), only
	// Maglev's new-bucket mapping is updated and live flows keep
	// draining to this backend.
	Flush         bool `protobuf:"varint,5,opt,name=flush,proto3" json:"flush,omitempty"`
	unknownFields protoimpl.UnknownFields
	sizeCache     protoimpl.SizeCache
}
@@ -1381,6 +1386,13 @@ func (x *SetWeightRequest) GetWeight() int32 {
	return 0
}

func (x *SetWeightRequest) GetFlush() bool {
	if x != nil {
		return x.Flush
	}
	return false
}

// WatchRequest controls which event types are streamed. All fields default to
// true (i.e. an empty request subscribes to everything at info level).
type WatchRequest struct {
@@ -2638,12 +2650,13 @@ const file_proto_maglev_proto_rawDesc = "" +
	"\x05bytes\x18\x04 \x01(\x04R\x05bytes\"w\n" +
	"\rVPPLBCounters\x12,\n" +
	"\x04vips\x18\x01 \x03(\v2\x18.maglev.VPPLBVIPCountersR\x04vips\x128\n" +
	"\bbackends\x18\x02 \x03(\v2\x1c.maglev.VPPLBBackendCountersR\bbackends\"\x8a\x01\n" +
	"\x10SetWeightRequest\x12\x1a\n" +
	"\bfrontend\x18\x01 \x01(\tR\bfrontend\x12\x12\n" +
	"\x04pool\x18\x02 \x01(\tR\x04pool\x12\x18\n" +
	"\abackend\x18\x03 \x01(\tR\abackend\x12\x16\n" +
	"\x06weight\x18\x04 \x01(\x05R\x06weight\x12\x14\n" +
	"\x05flush\x18\x05 \x01(\bR\x05flush\"\xa3\x01\n" +
	"\fWatchRequest\x12\x15\n" +
	"\x03log\x18\x01 \x01(\bH\x00R\x03log\x88\x01\x01\x12\x1b\n" +
	"\tlog_level\x18\x02 \x01(\tR\blogLevel\x12\x1d\n" +
@@ -102,7 +102,13 @@ func (s *Server) DisableBackend(_ context.Context, req *BackendRequest) (*Backen
	return backendToProto(b), nil
}

// SetFrontendPoolBackendWeight updates the weight of a backend in a pool
// and immediately pushes the change into VPP via a targeted single-VIP
// sync. When req.Flush is true the backend's AS row is rewritten with
// lb_as_set_weight(is_flush=true), which tears down VPP's flow table for
// that AS so existing sessions are dropped; when false the flow table is
// left alone and only Maglev's new-bucket mapping is updated, so existing
// sessions keep reaching this backend until they naturally drain.
func (s *Server) SetFrontendPoolBackendWeight(_ context.Context, req *SetWeightRequest) (*FrontendInfo, error) {
	if req.Weight < 0 || req.Weight > 100 {
		return nil, status.Errorf(codes.InvalidArgument, "weight %d out of range [0, 100]", req.Weight)
@@ -111,6 +117,26 @@ func (s *Server) SetFrontendPoolBackendWeight(_ context.Context, req *SetWeightR
	if err != nil {
		return nil, status.Errorf(codes.NotFound, "%v", err)
	}

	// Push the change into VPP so the operator doesn't have to wait
	// for the periodic 30s reconcile to pick it up. Silently skipped
	// when VPP integration is disabled — the mutation still lands in
	// config and any future sync will reconcile it.
	if s.vppClient != nil && s.vppClient.IsConnected() {
		cfg := s.checker.Config()
		flushAddr := ""
		if req.Flush {
			if b, ok := cfg.Backends[req.Backend]; ok && b.Address != nil {
				flushAddr = b.Address.String()
			}
		}
		if err := s.vppClient.SyncLBStateVIP(cfg, req.Frontend, flushAddr); err != nil && !errors.Is(err, vpp.ErrFrontendNotFound) {
			slog.Warn("set-weight-sync",
				"frontend", req.Frontend, "backend", req.Backend,
				"weight", req.Weight, "flush", req.Flush, "err", err)
		}
	}

	return frontendToProto(req.Frontend, fe, s.checker), nil
}

@@ -403,7 +429,7 @@ func (s *Server) SyncVPPLBState(_ context.Context, req *SyncVPPLBStateRequest) (
	}
	cfg := s.checker.Config()
	if req.FrontendName != nil && *req.FrontendName != "" {
		if err := s.vppClient.SyncLBStateVIP(cfg, *req.FrontendName, ""); err != nil {
			if errors.Is(err, vpp.ErrFrontendNotFound) {
				return nil, status.Errorf(codes.NotFound, "%v", err)
			}
@@ -8,14 +8,17 @@ import (

// ActivePoolIndex returns the priority-failover pool index for fe given
// the current backend states. The active pool is the first pool that
// contains at least one backend which is both in StateUp AND has a
// non-zero configured weight: a pool whose up backends are all
// weight=0 contributes no serving capacity, so failover falls through
// to the next tier. Returns 0 when no pool can serve, in which case
// every backend maps to weight 0 and the return value is unobservable.
//
// pool[0] is the primary, pool[1] the first fallback, and so on.
func ActivePoolIndex(fe config.Frontend, states map[string]State) int {
	for i, pool := range fe.Pools {
		for bName, pb := range pool.Backends {
			if states[bName] == StateUp && pb.Weight > 0 {
				return i
			}
		}
	}
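The failover rule above can be sketched in isolation. This is a minimal stand-alone version with hypothetical stand-in types (`pool`, `state`), not the real `config`/`health` types:

```go
package main

// state is a stand-in for the health package's backend state.
type state int

const (
	stateDown state = iota
	stateUp
)

// pool maps backend name to its configured weight in this tier.
type pool map[string]int

// activePoolIndex returns the index of the first pool containing a
// backend that is both up AND has weight > 0. A tier whose up backends
// are all weight=0 contributes no serving capacity, so failover falls
// through to the next tier. Returns 0 when no tier can serve.
func activePoolIndex(pools []pool, states map[string]state) int {
	for i, p := range pools {
		for name, w := range p {
			if states[name] == stateUp && w > 0 {
				return i
			}
		}
	}
	return 0
}
```

Checking weight as well as state is exactly the bug fix: with the state-only rule, a primary pool whose backends were all manually zeroed would still count as active and a weight=100 fallback would never receive traffic.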
@@ -139,6 +139,92 @@ func TestActivePoolIndex(t *testing.T) {
	}
}

// TestActivePoolIndexWeightedFailover pins the rule that a pool is only
// "active" when it has at least one backend that is both up AND has a
// non-zero configured weight. A pool whose up backends are all
// weight=0 contributes no serving capacity, so failover should fall
// through to the next tier.
//
// This was a latent bug: ActivePoolIndex used to check state alone and
// would return poolIdx=0 even when every primary backend had weight=0,
// leaving the fallback pool unused even though it was the only pool
// that could actually serve traffic.
func TestActivePoolIndexWeightedFailover(t *testing.T) {
	mkFE := func(pools ...map[string]int) config.Frontend {
		out := make([]config.Pool, len(pools))
		for i, p := range pools {
			out[i] = config.Pool{Name: "p", Backends: map[string]config.PoolBackend{}}
			for name, w := range p {
				out[i].Backends[name] = config.PoolBackend{Weight: w}
			}
		}
		return config.Frontend{Pools: out}
	}

	cases := []struct {
		name   string
		fe     config.Frontend
		states map[string]State
		want   int
	}{
		{
			name: "primary has only weight-0 backends → failover to secondary",
			fe: mkFE(
				map[string]int{"a": 0, "b": 0},
				map[string]int{"c": 100},
			),
			states: map[string]State{"a": StateUp, "b": StateUp, "c": StateUp},
			want:   1,
		},
		{
			name: "primary has a weight-0 AND a weight>0 backend → primary stays active",
			fe: mkFE(
				map[string]int{"a": 0, "b": 50},
				map[string]int{"c": 100},
			),
			states: map[string]State{"a": StateUp, "b": StateUp, "c": StateUp},
			want:   0,
		},
		{
			name: "primary w>0 backend is down, w=0 sibling is up → failover",
			fe: mkFE(
				map[string]int{"a": 0, "b": 50},
				map[string]int{"c": 100},
			),
			states: map[string]State{"a": StateUp, "b": StateDown, "c": StateUp},
			want:   1,
		},
		{
			name: "two tiers of weight-0 → fall through to third tier",
			fe: mkFE(
				map[string]int{"a": 0},
				map[string]int{"b": 0},
				map[string]int{"c": 100},
			),
			states: map[string]State{"a": StateUp, "b": StateUp, "c": StateUp},
			want:   2,
		},
		{
			name: "every tier weight-0 → default 0 (nothing can serve)",
			fe: mkFE(
				map[string]int{"a": 0},
				map[string]int{"b": 0},
			),
			states: map[string]State{"a": StateUp, "b": StateUp},
			want:   0,
		},
	}

	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			got := ActivePoolIndex(tc.fe, tc.states)
			if got != tc.want {
				t.Errorf("got pool %d, want pool %d", got, tc.want)
			}
		})
	}
}

// TestComputeFrontendState locks down the reduction rule: frontends are
// up iff any backend has effective weight > 0, unknown iff all backends
// are still in StateUnknown (or there are no backends), and down otherwise.
@@ -38,6 +38,7 @@ type desiredVIP struct {
	Protocol    uint8  // 6=TCP, 17=UDP, 255=any
	Port        uint16
	SrcIPSticky bool                 // lb_add_del_vip_v2.src_ip_sticky
	Encap       lb_types.LbEncapType // GRE4 / GRE6; matches the backend family, not the VIP's
	ASes        map[string]desiredAS // keyed by AS IP string
}

@@ -144,7 +145,7 @@ func (c *Client) SyncLBStateAll(cfg *config.Config) error {
			curPtr = &cur
			curSticky = cur.SrcIPSticky
		}
		if err := reconcileVIP(ch, d, curPtr, curSticky, "", &st); err != nil {
			return err
		}
	}
@@ -165,7 +166,14 @@ func (c *Client) SyncLBStateAll(cfg *config.Config) error {
// frontend is missing from cfg, SyncLBStateVIP returns ErrFrontendNotFound.
// This is the right tool for targeted updates on a busy load-balancer with
// many VIPs — only one VIP is read from VPP and only its ASes are modified.
//
// flushAddress, when non-empty, is the IP of an application server whose
// weight change (if any) should be pushed with IsFlush=true regardless of
// the usual "only flush on non-zero → zero" heuristic. This is how the
// SetFrontendPoolBackendWeight RPC exposes an explicit "drop flows now"
// knob: the server handler resolves the backend's config address and
// passes it here. Callers that don't need forced flushing pass "".
func (c *Client) SyncLBStateVIP(cfg *config.Config, feName, flushAddress string) error {
	if !c.IsConnected() {
		return errNotConnected
	}
@@ -203,7 +211,7 @@ func (c *Client) SyncLBStateVIP(cfg *config.Config, feName string) error {
	}

	var st syncStats
	if err := reconcileVIP(ch, d, cur, curSticky, flushAddress, &st); err != nil {
		return err
	}
	recordSyncStats("vip", &st)
@@ -227,7 +235,7 @@ func (c *Client) SyncLBStateVIP(cfg *config.Config, feName string) error {
// matching entry is always present. When the flag differs from the desired
// value, the VIP is torn down (ASes del+flushed, VIP deleted) and recreated
// — VPP has no API to mutate src_ip_sticky on an existing VIP.
func reconcileVIP(ch *loggedChannel, d desiredVIP, cur *LBVIP, curSticky bool, flushAddress string, st *syncStats) error {
	if cur == nil {
		if err := addVIP(ch, d); err != nil {
			return err
@@ -299,6 +307,14 @@ func reconcileVIP(ch *loggedChannel, d desiredVIP, cur *LBVIP, curSticky bool, s
			// (i.e. the backend was disabled, not merely drained). Steady-
			// state syncs where weight doesn't change never re-flush.
			flush := a.Flush && c.Weight > 0 && a.Weight == 0
			// Caller-forced flush: used by SetFrontendPoolBackendWeight
			// with flush=true to explicitly drop live sessions for a
			// single backend. The address match is exact — no other
			// AS's weight change is affected, even if several happen
			// in the same reconcile pass.
			if flushAddress != "" && addr == flushAddress {
				flush = true
			}
			if err := setASWeight(ch, d.Prefix, d.Protocol, d.Port, a, c.Weight, flush); err != nil {
				return err
			}
@@ -349,13 +365,24 @@ func desiredFromFrontend(cfg *config.Config, fe config.Frontend, src StateSource
	if fe.Address.To4() == nil {
		bits = 128
	}
	// Start with an encap derived from the VIP's own family as a
	// fallback. This only applies when the frontend has zero valid
	// backends (e.g. every referenced backend is missing from
	// cfg.Backends); any real backend below overrides it to the
	// backend family, which is the correct choice because the GRE
	// encap carries backend traffic, not VIP traffic. Config
	// validation already guarantees every backend in a frontend
	// shares the same family, so the first valid backend we see is
	// authoritative.
	d := desiredVIP{
		Prefix:      &net.IPNet{IP: fe.Address, Mask: net.CIDRMask(bits, bits)},
		Protocol:    protocolFromConfig(fe.Protocol),
		Port:        fe.Port,
		SrcIPSticky: fe.SrcIPSticky,
		Encap:       encapForIP(fe.Address),
		ASes:        make(map[string]desiredAS),
	}
	encapSet := false

	states := snapshotStates(fe, src)
	activePool := health.ActivePoolIndex(fe, states)
@@ -366,6 +393,10 @@ func desiredFromFrontend(cfg *config.Config, fe config.Frontend, src StateSource
		if !ok || b.Address == nil {
			continue
		}
		if !encapSet {
			d.Encap = encapForIP(b.Address)
			encapSet = true
		}
		// Disabled backends (either via operator action or config) are
		// kept in the desired set so they stay installed in VPP with
		// weight=0 — they must not be deleted, otherwise a subsequent
@@ -423,12 +454,11 @@ func snapshotStates(fe config.Frontend, src StateSource) map[string]health.State
const defaultFlowsTableLength = 1024

func addVIP(ch *loggedChannel, d desiredVIP) error {
	req := &lb.LbAddDelVipV2{
		Pfx:                 ip_types.NewAddressWithPrefix(*d.Prefix),
		Protocol:            d.Protocol,
		Port:                d.Port,
		Encap:               d.Encap,
		Type:                lb_types.LB_API_SRV_TYPE_CLUSTERIP,
		NewFlowsTableLength: defaultFlowsTableLength,
		SrcIPSticky:         d.SrcIPSticky,
@@ -445,7 +475,7 @@ func addVIP(ch *loggedChannel, d desiredVIP) error {
		"vip", d.Prefix.IP.String(),
		"protocol", protocolName(d.Protocol),
		"port", d.Port,
		"encap", encapName(d.Encap),
		"src-ip-sticky", d.SrcIPSticky)
	return nil
}
@@ -177,3 +177,264 @@ func TestDesiredFromFrontendFailover(t *testing.T) {
		})
	}
}

// TestDesiredFromFrontendSharedBackend exercises the exact shape of
// maglev.yaml: two frontends that share three backends across primary
// and fallback pools with different per-pool weights. The key
// invariants being pinned:
//
//   - Each frontend's desiredFromFrontend must read its own
//     per-pool-membership weights, never leaking weights from a sibling
//     frontend's pool config.
//   - When the primary pool has at least one backend up, the fallback
//     pool's backends must all be weight=0 (standby).
//   - When every primary-pool backend is non-up (down / paused /
//     disabled), failover kicks in: the fallback pool's backends get
//     their configured weights, and primary-pool backends stay at 0.
//
// Frontends modelled below:
//
//	nginx-ip4-http:
//	  primary:  nginx0-frggh0 w=10, nginx0-nlams0 w=100
//	  fallback: nginx0-chlzn0 w=100
//
//	nginx-ip6-https:
//	  primary:  nginx0-frggh0 w=100
//	  fallback: nginx0-nlams0 w=100, nginx0-chlzn0 w=100
//
// Note that nginx0-frggh0 is configured with weight 10 in the ip4
// primary but 100 in the ip6 primary — this is the exact crossed
// configuration that the user reported as producing weight=10 in the
// ip6 VIP (a regression).
func TestDesiredFromFrontendSharedBackend(t *testing.T) {
	ip := func(s string) net.IP { return net.ParseIP(s).To4() }
	frggh := "198.19.6.76"
	nlams := "198.19.4.118"
	chlzn := "198.19.6.167"

	cfg := &config.Config{
		Backends: map[string]config.Backend{
			"nginx0-frggh0": {Address: ip(frggh), Enabled: true},
			"nginx0-nlams0": {Address: ip(nlams), Enabled: true},
			"nginx0-chlzn0": {Address: ip(chlzn), Enabled: true},
		},
	}
	feIP4 := config.Frontend{
		Address:  ip("198.19.0.254"),
		Protocol: "tcp",
		Port:     80,
		Pools: []config.Pool{
			{Name: "primary", Backends: map[string]config.PoolBackend{
				"nginx0-frggh0": {Weight: 10},
				"nginx0-nlams0": {Weight: 100},
			}},
			{Name: "fallback", Backends: map[string]config.PoolBackend{
				"nginx0-chlzn0": {Weight: 100},
			}},
		},
	}
	feIP6 := config.Frontend{
		Address:  net.ParseIP("2001:db8::1"),
		Protocol: "tcp",
		Port:     443,
		Pools: []config.Pool{
			{Name: "primary", Backends: map[string]config.PoolBackend{
				"nginx0-frggh0": {Weight: 100},
			}},
			{Name: "fallback", Backends: map[string]config.PoolBackend{
				"nginx0-nlams0": {Weight: 100},
				"nginx0-chlzn0": {Weight: 100},
			}},
		},
	}

	type want struct {
		ip4 map[string]uint8
		ip6 map[string]uint8
	}

	tests := []struct {
		name   string
		states map[string]health.State
		want   want
	}{
		{
			name: "all up — each primary serves with its own weights",
			states: map[string]health.State{
				"nginx0-frggh0": health.StateUp,
				"nginx0-nlams0": health.StateUp,
				"nginx0-chlzn0": health.StateUp,
			},
			want: want{
				ip4: map[string]uint8{frggh: 10, nlams: 100, chlzn: 0},
				ip6: map[string]uint8{frggh: 100, nlams: 0, chlzn: 0},
			},
		},
		{
			name: "frggh0 disabled — ip4 primary still served by nlams0, ip6 fails over to fallback",
			states: map[string]health.State{
				"nginx0-frggh0": health.StateDisabled,
				"nginx0-nlams0": health.StateUp,
				"nginx0-chlzn0": health.StateUp,
			},
			want: want{
				// ip4 primary still has nlams0 up, so stays on primary;
				// frggh0 is in primary but disabled → 0.
				ip4: map[string]uint8{frggh: 0, nlams: 100, chlzn: 0},
				// ip6 primary has only frggh0 (disabled) → fallback
				// pool activates and both of its backends get their
				// configured weights.
				ip6: map[string]uint8{frggh: 0, nlams: 100, chlzn: 100},
			},
		},
		{
			name: "frggh0 paused — same failover shape as disabled for ip6",
			states: map[string]health.State{
				"nginx0-frggh0": health.StatePaused,
				"nginx0-nlams0": health.StateUp,
				"nginx0-chlzn0": health.StateUp,
			},
			want: want{
				ip4: map[string]uint8{frggh: 0, nlams: 100, chlzn: 0},
				ip6: map[string]uint8{frggh: 0, nlams: 100, chlzn: 100},
			},
		},
		{
			name: "frggh0 down — same failover shape as disabled for ip6",
			states: map[string]health.State{
				"nginx0-frggh0": health.StateDown,
				"nginx0-nlams0": health.StateUp,
				"nginx0-chlzn0": health.StateUp,
			},
			want: want{
				ip4: map[string]uint8{frggh: 0, nlams: 100, chlzn: 0},
				ip6: map[string]uint8{frggh: 0, nlams: 100, chlzn: 100},
			},
		},
		{
			name: "ip4 primary all down → failover to chlzn0; ip6 unaffected",
			states: map[string]health.State{
				"nginx0-frggh0": health.StateDown,
				"nginx0-nlams0": health.StateDown,
				"nginx0-chlzn0": health.StateUp,
			},
			want: want{
				// ip4 primary has nothing up → fallback activates.
				ip4: map[string]uint8{frggh: 0, nlams: 0, chlzn: 100},
				// ip6 primary has frggh0 (down) → fallback activates
				// too; nlams0 is in ip6 fallback but down, chlzn0 is
				// up and carries traffic.
				ip6: map[string]uint8{frggh: 0, nlams: 0, chlzn: 100},
			},
		},
		{
			name: "all backends down → everyone zero",
			states: map[string]health.State{
				"nginx0-frggh0": health.StateDown,
				"nginx0-nlams0": health.StateDown,
				"nginx0-chlzn0": health.StateDown,
			},
			want: want{
				ip4: map[string]uint8{frggh: 0, nlams: 0, chlzn: 0},
				ip6: map[string]uint8{frggh: 0, nlams: 0, chlzn: 0},
			},
		},
		{
			name: "all backends disabled → everyone zero (and flushed)",
			states: map[string]health.State{
				"nginx0-frggh0": health.StateDisabled,
				"nginx0-nlams0": health.StateDisabled,
				"nginx0-chlzn0": health.StateDisabled,
			},
			want: want{
				ip4: map[string]uint8{frggh: 0, nlams: 0, chlzn: 0},
				ip6: map[string]uint8{frggh: 0, nlams: 0, chlzn: 0},
			},
		},
		{
			name: "frggh0 re-enabled and up — each frontend returns to its own configured weight (regression)",
			states: map[string]health.State{
				"nginx0-frggh0": health.StateUp,
				"nginx0-nlams0": health.StateUp,
				"nginx0-chlzn0": health.StateUp,
			},
			want: want{
				// This is the specific regression the user reported:
				// after a disable/enable cycle, the ip6 VIP should
				// return to weight=100 for frggh0 (its own pool's
|
||||||
|
// configured weight), not 10 (ip4's weight).
|
||||||
|
ip4: map[string]uint8{frggh: 10, nlams: 100, chlzn: 0},
|
||||||
|
ip6: map[string]uint8{frggh: 100, nlams: 0, chlzn: 0},
|
||||||
|
},
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
for _, tc := range tests {
|
||||||
|
t.Run(tc.name, func(t *testing.T) {
|
||||||
|
src := &fakeStateSource{cfg: cfg, states: tc.states}
|
||||||
|
|
||||||
|
d4 := desiredFromFrontend(cfg, feIP4, src)
|
||||||
|
for addr, w := range tc.want.ip4 {
|
||||||
|
got, ok := d4.ASes[addr]
|
||||||
|
if !ok {
|
||||||
|
t.Errorf("ip4: %s missing from desired set", addr)
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
if got.Weight != w {
|
||||||
|
t.Errorf("ip4: %s weight got %d, want %d", addr, got.Weight, w)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if len(d4.ASes) != len(tc.want.ip4) {
|
||||||
|
t.Errorf("ip4: got %d ASes, want %d", len(d4.ASes), len(tc.want.ip4))
|
||||||
|
}
|
||||||
|
|
||||||
|
d6 := desiredFromFrontend(cfg, feIP6, src)
|
||||||
|
for addr, w := range tc.want.ip6 {
|
||||||
|
got, ok := d6.ASes[addr]
|
||||||
|
if !ok {
|
||||||
|
t.Errorf("ip6: %s missing from desired set", addr)
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
if got.Weight != w {
|
||||||
|
t.Errorf("ip6: %s weight got %d, want %d", addr, got.Weight, w)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if len(d6.ASes) != len(tc.want.ip6) {
|
||||||
|
t.Errorf("ip6: got %d ASes, want %d", len(d6.ASes), len(tc.want.ip6))
|
||||||
|
}
|
||||||
|
|
||||||
|
// Also exercise desiredFromConfig (the batch version used
|
||||||
|
// by the 30-second periodic SyncLBStateAll): it iterates
|
||||||
|
// every frontend in cfg and must produce the same
|
||||||
|
// per-frontend weights as desiredFromFrontend called
|
||||||
|
// directly. A bug where one frontend's pool config leaks
|
||||||
|
// into another would show up here too.
|
||||||
|
cfgBatch := &config.Config{
|
||||||
|
Backends: cfg.Backends,
|
||||||
|
Frontends: map[string]config.Frontend{
|
||||||
|
"nginx-ip4-http": feIP4,
|
||||||
|
"nginx-ip6-https": feIP6,
|
||||||
|
},
|
||||||
|
}
|
||||||
|
batch := desiredFromConfig(cfgBatch, src)
|
||||||
|
byAddr := map[string]desiredVIP{}
|
||||||
|
for _, d := range batch {
|
||||||
|
byAddr[d.Prefix.IP.String()] = d
|
||||||
|
}
|
||||||
|
if d := byAddr["198.19.0.254"]; true {
|
||||||
|
for addr, w := range tc.want.ip4 {
|
||||||
|
if got := d.ASes[addr].Weight; got != w {
|
||||||
|
t.Errorf("batch ip4: %s weight got %d, want %d", addr, got, w)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if d := byAddr["2001:db8::1"]; true {
|
||||||
|
for addr, w := range tc.want.ip6 {
|
||||||
|
if got := d.ASes[addr].Weight; got != w {
|
||||||
|
t.Errorf("batch ip6: %s weight got %d, want %d", addr, got, w)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
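The pool-activation rule these cases pin down can be condensed into a few lines. A minimal sketch under assumed names (State, poolBackend, activePoolIndex are illustrative stand-ins, not the package's real types — the real ActivePoolIndex works on its own pool config):

```go
package main

import "fmt"

// State is an illustrative stand-in for the health package's state enum.
type State int

const (
	StateUp State = iota
	StateDown
	StatePaused
	StateDisabled
)

// poolBackend pairs a backend's health state with its configured weight.
type poolBackend struct {
	State  State
	Weight uint8
}

// activePoolIndex returns the index of the first pool containing at
// least one backend that is BOTH StateUp AND has Weight > 0, so a
// primary pool whose backends were all manually zeroed no longer wins
// over a healthy fallback. Returns -1 when no pool qualifies.
func activePoolIndex(pools [][]poolBackend) int {
	for i, pool := range pools {
		for _, pb := range pool {
			if pb.State == StateUp && pb.Weight > 0 {
				return i
			}
		}
	}
	return -1
}

func main() {
	// Primary is up but zero-weighted; fallback is up with weight 100,
	// so the fallback pool (index 1) must be the active one.
	primary := []poolBackend{{State: StateUp, Weight: 0}}
	fallback := []poolBackend{{State: StateUp, Weight: 100}}
	fmt.Println(activePoolIndex([][]poolBackend{primary, fallback}))
}
```

Under the pre-fix rule (StateUp alone), the same input would have returned 0 and the fallback's traffic would never materialize.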
@@ -87,7 +87,7 @@ func (r *Reconciler) handle(ev checker.Event) {
 			"from", ev.Transition.From.String(),
 			"to", ev.Transition.To.String())
 
-		if err := r.client.SyncLBStateVIP(cfg, ev.FrontendName); err != nil {
+		if err := r.client.SyncLBStateVIP(cfg, ev.FrontendName, ""); err != nil {
 			if errors.Is(err, ErrFrontendNotFound) {
 				// Frontend was removed between the event being emitted and
 				// us handling it; a periodic SyncLBStateAll will clean it up.
@@ -171,6 +171,11 @@ message SetWeightRequest {
   string pool = 2;
   string backend = 3;
   int32 weight = 4; // 0-100
+  // flush, when true, also clears VPP's flow table for this backend
+  // so existing sessions are torn down. When false (default), only
+  // Maglev's new-bucket mapping is updated and live flows keep
+  // draining to this backend.
+  bool flush = 5;
 }
 
 // WatchRequest controls which event types are streamed. All fields default to