This session covers three distinct arcs: correctness bug fixes in the
VPP sync path and frontend reducers, new config validation, and a
large polish pass on the web frontend (tighter layout, backend kebab
dialogs, live grouped-table, live config-reload re-sync).
- encap for a VIP is now derived from the backend address family,
not the VIP's. A v6 VIP with v4 backends is programmed as IP6_GRE4
(not the buggy IP6_GRE6), matching the VPP LB plugin's
requirement that encap reflects the tunnel inner family. desiredVIP
gained an Encap field populated in desiredFromFrontend.
- ActivePoolIndex now requires at least one backend in a pool to be
BOTH in StateUp AND pb.Weight>0 before the pool counts as active.
Previously a primary pool with every backend manually zeroed would
still win over a fallback with weight=100, so fallback traffic
never materialized. New TestActivePoolIndexWeightedFailover table
pins the rule in five subcases.
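
The rule can be sketched with hypothetical types (the real code works on the config/health structs, not these; `activePoolIndex`, `pool`, and `backend` here are illustrative names):

```go
package main

import "fmt"

// backend mirrors the two properties the rule cares about: health state
// and configured weight. Both must be satisfied for a pool to count.
type backend struct {
	Up     bool // stands in for StateUp
	Weight int  // configured weight (pb.Weight)
}

type pool struct {
	Name     string
	Backends []backend
}

// activePoolIndex returns the first pool with at least one backend that is
// BOTH up AND has nonzero weight, or -1 if no pool qualifies.
func activePoolIndex(pools []pool) int {
	for i, p := range pools {
		for _, b := range p.Backends {
			if b.Up && b.Weight > 0 { // both conditions, not just StateUp
				return i
			}
		}
	}
	return -1
}

func main() {
	pools := []pool{
		{Name: "primary", Backends: []backend{{Up: true, Weight: 0}}}, // manually drained
		{Name: "fallback", Backends: []backend{{Up: true, Weight: 100}}},
	}
	// Under the old rule the drained primary would still win; now the
	// fallback is selected.
	fmt.Println(pools[activePoolIndex(pools)].Name)
}
```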
- SyncLBStateVIP gained a flushAddress parameter threaded through
reconcileVIP; it forces flush=true on the setASWeight call for a
specific backend regardless of the usual 0→N heuristic. Wires up
the explicit [flush] knob the CLI exposes.
- convertFrontend already enforced that backends within one frontend
share a family. New cross-frontend pass validateVIPFamilyConsistency
rejects configs where two frontends share a VIP address but carry
backends in different families — VPP's LB plugin requires every
VIP on a prefix to have the same encap type, so such a config
would fail at lb_add_del_vip_v2 time with
VNET_API_ERROR_INVALID_ARGUMENT (-73). Catching it at config load
turns a silent runtime failure into a clear startup error.
- Two new TestValidationErrors cases pin the behavior: mismatched
  families are rejected; same-family frontends sharing one VIP address
  are allowed.
- Proto adds `bool flush = 5` to SetWeightRequest. The RPC now
drives a VIP sync immediately after mutating config (fixing the
latent "weight change only takes effect at the next 30s periodic
reconcile" gap), passing flushAddress = backend IP when req.Flush
is true.
- maglevc grows an optional [flush] token: `set frontend F pool P
backend B weight N [flush]`. Implementation uses two Run closures
(runSetFrontendPoolBackendWeight and -Flush) because the tree
walker only puts slot tokens in args — literal keywords like
`flush` advance the node but don't appear in the arg list.
- docs/user-guide.md updated with the [flush] option and a
  three-paragraph explainer of the graceful-drain vs. flush
  semantics at the VPP level.
- checker.ListFrontends now sorts alphabetically to match the
existing sort in ListBackends / ListHealthChecks — RPC responses
no longer shuffle VIPs per call. cmd/frontend/client.go also
sorts defensively in refreshAll so an old maglevd build renders
alphabetically too.
- backendFromProto was returning out.Transitions[n-1] as the
LastTransition, but maglevd stores (and the proto carries)
transitions newest-first, so [n-1] was actually the oldest.
Reverse on read, which normalizes the client's Transitions slice
to oldest-first and makes [n-1] genuinely the newest. LastTransition
now points at the actual latest transition record.
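
The fix amounts to reversing on read. A sketch with a hypothetical `transition` type (the real slice elements carry timestamps and more fields):

```go
package main

import "fmt"

type transition struct{ From, To string }

// reverseTransitions normalizes a newest-first wire slice to oldest-first,
// so index len-1 is genuinely the latest record.
func reverseTransitions(ts []transition) []transition {
	out := make([]transition, len(ts))
	for i, t := range ts {
		out[len(ts)-1-i] = t
	}
	return out
}

func main() {
	// Wire order is newest-first: the most recent transition comes first.
	wire := []transition{{From: "down", To: "up"}, {From: "up", To: "down"}}
	ts := reverseTransitions(wire)
	// After normalization, the last element is the actual latest transition.
	fmt.Println(ts[len(ts)-1].To)
}
```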
- applyBackendTransition (Go and TS) derives Enabled = state!="disabled"
so the two fields stay in lockstep — closed a drift window where
a recently re-enabled backend still rendered with a stuck
[disabled] tag. The tag was later removed entirely since state
and enabled carry the same information.
- Layout tightened substantially: "FRONTENDS" panel header removed,
zippy-summary and zippy-body paddings cut, backend-table row
padding dropped to 2px, per-pool <h3> removed. Pools now live in
a single consolidated table per frontend with a dedicated "pool"
column that shows the pool name only on the first row of each
group — classic grouped-table layout, maximally dense.
- Description moved inline into the Zippy summary as muted italic
text, freeing a vertical line per frontend card.
- formatVIPAddress() helper renders IPv6 VIPs as [addr]:port and
IPv4 as addr:port, matching RFC 3986 authority syntax.
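
The standard library's net.JoinHostPort already implements exactly this bracketing, so a formatVIPAddress-style helper can be one line (a sketch; the real helper's name and signature may differ):

```go
package main

import (
	"fmt"
	"net"
	"strconv"
)

// formatVIP renders addr:port for IPv4 and [addr]:port for IPv6.
// net.JoinHostPort brackets the host whenever it contains a colon,
// which is the RFC 3986 authority rule for IPv6 literals.
func formatVIP(addr string, port int) string {
	return net.JoinHostPort(addr, strconv.Itoa(port))
}

func main() {
	fmt.Println(formatVIP("192.0.2.10", 443))  // 192.0.2.10:443
	fmt.Println(formatVIP("2001:db8::1", 443)) // [2001:db8::1]:443
}
```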
- Pools with effective_weight=0 on every backend (standby
fallbacks, fully-drained primaries) render at opacity 0.35 on
their non-actions cells; the kebab column stays at full contrast
because its menu is still fully functional on standby backends.
- Config-reload propagation: a maglevd config-reload-done log
event triggers triggerConfigResync() on the frontend side —
refreshAll() runs off the event-dispatch goroutine, then a
BrowserEvent{Type:"resync"} is published through the broker.
writeEvent emits type="resync" as a named SSE frame so the
SPA's existing addEventListener("resync") handler picks it up
and calls fetchAllState → replaceAll.
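
The named-frame part is plain SSE framing: a frame whose `event:` field is `resync` is what `addEventListener("resync")` matches on the browser side. A minimal sketch of the frame shape (the real writeEvent writes to an http.ResponseWriter and flushes; `sseFrame` here is an illustrative name):

```go
package main

import "fmt"

// sseFrame builds a named server-sent-events frame. A frame with an
// "event:" field dispatches to addEventListener(name) in the browser
// instead of the default "message" handler; the blank line terminates
// the frame.
func sseFrame(event, data string) string {
	return fmt.Sprintf("event: %s\ndata: %s\n\n", event, data)
}

func main() {
	fmt.Print(sseFrame("resync", "{}"))
}
```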
- recomputeEffectiveWeights in stores/state.ts mirrors the
server-side health.EffectiveWeights logic so the SPA keeps
pool.effective_weight correct the moment a backend transitions,
without waiting for the 30s refresh. Fixed a nasty bug where
applyBackendEffectiveWeight wrote VIP-scoped vpp-lb-sync-as-*
event weights into every frontend sharing the backend,
corrupting frontends with different per-pool configured weights.
The old log-event reducer was removed; applyConfiguredWeight is
the narrower replacement used by the kebab set-weight flow.
- applyBackendTransition calls recomputeEffectiveWeights after
state updates so pool-failover transitions (primary ⇌ fallback)
reflect instantly in the UI.
- Confirmation dialogs via a new Modal primitive
(Portal-mounted to document.body, escape/click-outside close,
click-outside debounced on mousedown so mid-row-text-selection
drags don't dismiss).
- pause/resume/enable/disable each show a Modal with a consequence
paragraph explaining what hits live traffic ("will keep existing
flows", "will flush VPP's flow table", etc.). The disable commit
button is styled btn-danger red.
- set-weight action shows a Modal with a range slider (0-100,
seeded from the current configured weight, accent-colored live
numeric readout via <output>) plus a flush checkbox and a live-
swapping note/warn paragraph describing what will happen. On
commit, the SPA also updates its local store via
applyConfiguredWeight so the operator sees the new weight
immediately without waiting for the next refresh.
- ProbeHeartbeat is now state-aware: ▶ (play) at rest for up/
down/unknown backends, ⏸ (pause) for paused, ⏹ (stop) for
disabled/removed, ❤️ (heart) during an in-flight probe.
- Drop the probe-done event listener — fast probes (<10ms)
could fire probe-done in the same render tick as probe-start
and the heart would never visibly paint. Each probe-start now
runs a fixed 400ms scale-pop animation on a timer; subsequent
probe-start events reset the timer, so fast cadences produce a
continuous heart pulse.
- Fixed wrapper box (16x14 px, overflow hidden) so the row
doesn't jiggle when the glyph swaps between the narrow ▶/⏸/⏹
text glyphs and the wider ❤️ emoji.
- Brand wordmark changed from "maglev" to "vpp-maglev" and wrapped
in an <a> linking to https://git.ipng.ch/ipng/vpp-maglev. Logo
link changed to https://ipng.ch/. Both open in a new tab with
rel="noopener".
- .gitignore fix: `frontend`, `maglevc`, `maglevd` were matching
ANY file or directory with those names anywhere in the tree,
silently ignoring cmd/frontend and friends. Anchored with
leading slashes so only repo-root build artifacts match.
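
The anchored patterns look like this (a sketch of the resulting .gitignore entries):

```gitignore
# Unanchored "frontend" matched cmd/frontend anywhere in the tree;
# a leading slash restricts each pattern to the repository root.
/frontend
/maglevc
/maglevd
```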
827 lines
26 KiB
Go
// Copyright (c) 2026, Pim van Pelt <pim@ipng.ch>

package main

import (
	"context"
	"fmt"
	"os"
	"strconv"
	"strings"
	"text/tabwriter"
	"time"

	buildinfo "git.ipng.ch/ipng/vpp-maglev/cmd"
	"git.ipng.ch/ipng/vpp-maglev/internal/grpcapi"
)

const callTimeout = 10 * time.Second

// buildTree constructs the full command tree.
func buildTree() *Node {
	root := &Node{Word: "", Help: ""}

	show := &Node{Word: "show", Help: "show information"}
	set := &Node{Word: "set", Help: "modify configuration"}
	quit := &Node{Word: "quit", Help: "exit the shell", Run: runQuit}
	exit := &Node{Word: "exit", Help: "exit the shell", Run: runQuit}

	// show version
	showVersion := &Node{Word: "version", Help: "Show build version", Run: runShowVersion}

	// show frontends [<name>] — without name: list all, with name: show details
	showFrontendName := &Node{
		Word:    "<name>",
		Help:    "Show details for a single frontend",
		Dynamic: dynFrontends,
		Run:     runShowFrontend,
	}
	showFrontends := &Node{
		Word:     "frontends",
		Help:     "List all frontends",
		Run:      runShowFrontends,
		Children: []*Node{showFrontendName},
	}

	// show backends [<name>] — without name: list all, with name: show details
	showBackendName := &Node{
		Word:    "<name>",
		Help:    "Show details for a single backend",
		Dynamic: dynBackends,
		Run:     runShowBackend,
	}
	showBackends := &Node{
		Word:     "backends",
		Help:     "List all backends",
		Run:      runShowBackends,
		Children: []*Node{showBackendName},
	}

	// show healthchecks [<name>] — without name: list all, with name: show details
	showHealthCheckName := &Node{
		Word:    "<name>",
		Help:    "Show details for a single health check",
		Dynamic: dynHealthChecks,
		Run:     runShowHealthCheck,
	}
	showHealthChecks := &Node{
		Word:     "healthchecks",
		Help:     "List all health checks",
		Run:      runShowHealthChecks,
		Children: []*Node{showHealthCheckName},
	}

	// show vpp info / lb state / lb counters
	showVPPInfo := &Node{Word: "info", Help: "Show VPP version, uptime, and connection status", Run: runShowVPPInfo}
	showVPPLBState := &Node{Word: "state", Help: "Show VPP load-balancer state (VIPs and application servers)", Run: runShowVPPLBState}
	showVPPLBCounters := &Node{Word: "counters", Help: "Show VPP per-VIP and per-backend packet/byte counters (refreshed every ~5s server-side)", Run: runShowVPPLBCounters}
	showVPPLB := &Node{
		Word:     "lb",
		Help:     "VPP load-balancer information",
		Children: []*Node{showVPPLBState, showVPPLBCounters},
	}
	showVPP := &Node{
		Word:     "vpp",
		Help:     "VPP dataplane information",
		Children: []*Node{showVPPInfo, showVPPLB},
	}

	show.Children = []*Node{
		showVersion,
		showFrontends,
		showBackends,
		showHealthChecks,
		showVPP,
	}

	// set backend <name> pause|resume|disabled|enabled
	setPause := &Node{Word: "pause", Help: "pause health checking", Run: runPauseBackend}
	setResume := &Node{Word: "resume", Help: "resume health checking", Run: runResumeBackend}
	setDisabled := &Node{Word: "disable", Help: "disable backend (stop probing, remove from rotation)", Run: runDisableBackend}
	setEnabled := &Node{Word: "enable", Help: "enable backend (resume probing)", Run: runEnableBackend}
	setBackendName := &Node{
		Word:     "<name>",
		Help:     "backend name",
		Dynamic:  dynBackends,
		Children: []*Node{setPause, setResume, setDisabled, setEnabled},
	}
	setBackend := &Node{
		Word:     "backend",
		Help:     "modify a backend",
		Children: []*Node{setBackendName},
	}
	// set frontend <name> pool <pool> backend <name> weight <0-100> [flush]
	//
	// The tree walker only puts tokens from slot (Dynamic) nodes into
	// args, so the literal "flush" keyword isn't visible in the arg
	// list. We use two distinct Run functions to distinguish the two
	// leaf paths instead — both share the same underlying helper.
	setWeightFlush := &Node{
		Word: "flush",
		Help: "also drop VPP's flow table for this backend (otherwise only the new-buckets map is updated)",
		Run:  runSetFrontendPoolBackendWeightFlush,
	}
	setWeightValue := &Node{
		Word:     "<weight>",
		Help:     "Set weight of a backend in a pool (0-100)",
		Dynamic:  dynNone, // accepts any integer; no tab-completion candidates
		Run:      runSetFrontendPoolBackendWeight,
		Children: []*Node{setWeightFlush},
	}
	setFrontendPoolBackendWeight := &Node{Word: "weight", Help: "set backend weight in pool", Children: []*Node{setWeightValue}}
	setFrontendPoolBackendName := &Node{
		Word:     "<backend>",
		Help:     "backend name",
		Dynamic:  dynBackends,
		Children: []*Node{setFrontendPoolBackendWeight},
	}
	setFrontendPoolBackend := &Node{Word: "backend", Help: "select a backend", Children: []*Node{setFrontendPoolBackendName}}
	setFrontendPoolName := &Node{
		Word:     "<pool>",
		Help:     "pool name",
		Dynamic:  dynNone, // pool names aren't listed via gRPC; accepts any input
		Children: []*Node{setFrontendPoolBackend},
	}
	setFrontendPool := &Node{Word: "pool", Help: "select a pool", Children: []*Node{setFrontendPoolName}}
	setFrontendName := &Node{
		Word:     "<name>",
		Help:     "frontend name",
		Dynamic:  dynFrontends,
		Children: []*Node{setFrontendPool},
	}
	setFrontend := &Node{
		Word:     "frontend",
		Help:     "modify a frontend",
		Children: []*Node{setFrontendName},
	}

	set.Children = []*Node{setBackend, setFrontend}

	// watch events [num <n>] [log [level <level>]] [backend] [frontend]
	//
	// All tokens after 'events' are captured as args via a self-referencing slot
	// node. This lets runWatchEvents parse the optional flags manually while still
	// providing tab-completion through the dynamic enumerator.
	var watchEventsOptSlot *Node
	watchEventsOptSlot = &Node{
		Word:    "<opt>",
		Help:    "Stream events with options",
		Dynamic: dynWatchEventOpts,
		Run:     runWatchEvents,
	}
	watchEventsOptSlot.Children = []*Node{watchEventsOptSlot}

	watchEvents := &Node{
		Word:     "events",
		Help:     "stream events (press any key or Ctrl-C to stop)",
		Run:      runWatchEvents,
		Children: []*Node{watchEventsOptSlot},
	}
	watch := &Node{
		Word:     "watch",
		Help:     "watch live event streams",
		Children: []*Node{watchEvents},
	}

	// config check / reload
	configCheck := &Node{Word: "check", Help: "Check configuration file", Run: runConfigCheck}
	configReload := &Node{Word: "reload", Help: "Check and reload configuration", Run: runConfigReload}
	configNode := &Node{
		Word:     "config",
		Help:     "configuration commands",
		Children: []*Node{configCheck, configReload},
	}

	// sync vpp lb state [<name>]
	//
	// Without a name: run SyncLBStateAll (may remove stale VIPs).
	// With a name: run SyncLBStateVIP(name) for just that frontend (no removals).
	syncVPPLBStateName := &Node{
		Word:    "<name>",
		Help:    "Sync a single frontend's VIP to VPP",
		Dynamic: dynFrontends,
		Run:     runSyncVPPLBState,
	}
	syncVPPLBState := &Node{
		Word:     "state",
		Help:     "Sync the VPP load-balancer dataplane from the running config",
		Run:      runSyncVPPLBState,
		Children: []*Node{syncVPPLBStateName},
	}
	syncVPPLB := &Node{
		Word:     "lb",
		Help:     "VPP load-balancer sync commands",
		Children: []*Node{syncVPPLBState},
	}
	syncVPP := &Node{
		Word:     "vpp",
		Help:     "VPP dataplane sync commands",
		Children: []*Node{syncVPPLB},
	}
	syncNode := &Node{
		Word:     "sync",
		Help:     "Reconcile dataplane state from the running config",
		Children: []*Node{syncVPP},
	}

	root.Children = []*Node{show, set, watch, configNode, syncNode, quit, exit}
	return root
}

// ---- dynamic enumerators ---------------------------------------------------

func dynFrontends(ctx context.Context, client grpcapi.MaglevClient) []string {
	resp, err := client.ListFrontends(ctx, &grpcapi.ListFrontendsRequest{})
	if err != nil {
		return nil
	}
	return resp.FrontendNames
}

func dynBackends(ctx context.Context, client grpcapi.MaglevClient) []string {
	resp, err := client.ListBackends(ctx, &grpcapi.ListBackendsRequest{})
	if err != nil {
		return nil
	}
	return resp.BackendNames
}

func dynHealthChecks(ctx context.Context, client grpcapi.MaglevClient) []string {
	resp, err := client.ListHealthChecks(ctx, &grpcapi.ListHealthChecksRequest{})
	if err != nil {
		return nil
	}
	return resp.Names
}

// dynNone marks a slot node that accepts any input but provides no
// tab-completion candidates (e.g. a pool name or numeric weight value).
func dynNone(_ context.Context, _ grpcapi.MaglevClient) []string { return nil }

// ---- run functions ---------------------------------------------------------

func runShowVPPInfo(ctx context.Context, client grpcapi.MaglevClient, _ []string) error {
	ctx, cancel := context.WithTimeout(ctx, callTimeout)
	defer cancel()
	info, err := client.GetVPPInfo(ctx, &grpcapi.GetVPPInfoRequest{})
	if err != nil {
		return err
	}
	w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
	fmt.Fprintf(w, "%s\t%s\n", label("version"), info.Version)
	fmt.Fprintf(w, "%s\t%s\n", label("build-date"), info.BuildDate)
	fmt.Fprintf(w, "%s\t%s\n", label("build-dir"), info.BuildDirectory)
	fmt.Fprintf(w, "%s\t%d\n", label("vpp-pid"), info.Pid)
	if info.BoottimeNs > 0 {
		bootTime := time.Unix(0, info.BoottimeNs)
		fmt.Fprintf(w, "%s\t%s (%s)\n", label("vpp-boottime"),
			bootTime.Format("2006-01-02 15:04:05"),
			formatDuration(time.Since(bootTime)))
	}
	connTime := time.Unix(0, info.ConnecttimeNs)
	fmt.Fprintf(w, "%s\t%s (%s)\n", label("connected"),
		connTime.Format("2006-01-02 15:04:05"),
		formatDuration(time.Since(connTime)))
	return w.Flush()
}

func runShowVPPLBState(ctx context.Context, client grpcapi.MaglevClient, _ []string) error {
	ctx, cancel := context.WithTimeout(ctx, callTimeout)
	defer cancel()
	state, err := client.GetVPPLBState(ctx, &grpcapi.GetVPPLBStateRequest{})
	if err != nil {
		return err
	}

	// ---- global config ----
	w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
	fmt.Fprintf(w, "%s\n", label("global"))
	if state.Conf.Ip4SrcAddress != "" {
		fmt.Fprintf(w, "  %s\t%s\n", label("ip4-src"), state.Conf.Ip4SrcAddress)
	}
	if state.Conf.Ip6SrcAddress != "" {
		fmt.Fprintf(w, "  %s\t%s\n", label("ip6-src"), state.Conf.Ip6SrcAddress)
	}
	fmt.Fprintf(w, "  %s\t%d\n", label("sticky-buckets-per-core"), state.Conf.StickyBucketsPerCore)
	fmt.Fprintf(w, "  %s\t%ds\n", label("flow-timeout"), state.Conf.FlowTimeout)
	if err := w.Flush(); err != nil {
		return err
	}

	if len(state.Vips) == 0 {
		fmt.Println(label("vips") + " (none)")
		return nil
	}

	// ---- per-VIP details ----
	for _, v := range state.Vips {
		fmt.Println()
		w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
		fmt.Fprintf(w, "%s\t%s\n", label("vip"), stripHostMask(v.Prefix))
		fmt.Fprintf(w, "  %s\t%s\n", label("protocol"), protoString(v.Protocol))
		fmt.Fprintf(w, "  %s\t%d\n", label("port"), v.Port)
		fmt.Fprintf(w, "  %s\t%s\n", label("encap"), v.Encap)
		fmt.Fprintf(w, "  %s\t%t\n", label("src-ip-sticky"), v.SrcIpSticky)
		fmt.Fprintf(w, "  %s\t%d\n", label("flow-table-length"), v.FlowTableLength)
		fmt.Fprintf(w, "  %s\t%d\n", label("application-servers"), len(v.ApplicationServers))
		if err := w.Flush(); err != nil {
			return err
		}
		for _, a := range v.ApplicationServers {
			fmt.Printf("    %s %s %s %d %s %d\n",
				label("address"), a.Address,
				label("weight"), a.Weight,
				label("flow-table-buckets"), a.NumBuckets)
		}
	}
	return nil
}

// runShowVPPLBCounters prints the per-VIP and per-backend runtime counters
// captured by maglevd's 5s scrape loop. Values are up to 5 seconds stale;
// Prometheus is the right tool if you need live rates.
func runShowVPPLBCounters(ctx context.Context, client grpcapi.MaglevClient, _ []string) error {
	ctx, cancel := context.WithTimeout(ctx, callTimeout)
	defer cancel()
	resp, err := client.GetVPPLBCounters(ctx, &grpcapi.GetVPPLBCountersRequest{})
	if err != nil {
		return err
	}

	if len(resp.Vips) == 0 && len(resp.Backends) == 0 {
		fmt.Println("(no counters — VPP disconnected or scrape pending)")
		return nil
	}

	// ---- frontend-counters ----
	fmt.Println(label("frontend-counters"))
	if len(resp.Vips) == 0 {
		fmt.Println("  (none)")
	} else {
		w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
		fmt.Fprintf(w, "  %s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\n",
			label("vip"), label("proto"), label("port"),
			label("first"), label("next"),
			label("untracked"), label("no-server"),
			label("fib-packets"), label("fib-bytes"),
		)
		for _, v := range resp.Vips {
			fmt.Fprintf(w, "  %s\t%s\t%d\t%d\t%d\t%d\t%d\t%d\t%d\n",
				stripHostMask(v.Prefix), v.Protocol, v.Port,
				v.FirstPacket, v.NextPacket,
				v.UntrackedPacket, v.NoServer,
				v.Packets, v.Bytes,
			)
		}
		if err := w.Flush(); err != nil {
			return err
		}
	}

	fmt.Println()

	// ---- backend-counters ----
	fmt.Println(label("backend-counters"))
	if len(resp.Backends) == 0 {
		fmt.Println("  (none)")
		return nil
	}
	w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
	fmt.Fprintf(w, "  %s\t%s\t%s\t%s\n",
		label("backend"), label("address"),
		label("fib-packets"), label("fib-bytes"),
	)
	for _, b := range resp.Backends {
		fmt.Fprintf(w, "  %s\t%s\t%d\t%d\n",
			b.Backend, b.Address, b.Packets, b.Bytes,
		)
	}
	return w.Flush()
}

// stripHostMask trims "/32" (IPv4) or "/128" (IPv6) from a VIP's CIDR
// string. maglevd only programs host-prefix VIPs so the mask is always
// one of these two values and carries no information for a human reader.
// Non-host prefixes and unparseable strings are returned unchanged so
// future changes don't silently lose data.
func stripHostMask(prefix string) string {
	if strings.HasSuffix(prefix, "/32") || strings.HasSuffix(prefix, "/128") {
		return prefix[:strings.LastIndexByte(prefix, '/')]
	}
	return prefix
}

// protoString renders an IP protocol number as a name (tcp, udp, any, or numeric).
func protoString(p uint32) string {
	switch p {
	case 6:
		return "tcp"
	case 17:
		return "udp"
	case 255:
		return "any"
	}
	return fmt.Sprintf("%d", p)
}

func runSyncVPPLBState(ctx context.Context, client grpcapi.MaglevClient, args []string) error {
	ctx, cancel := context.WithTimeout(ctx, callTimeout)
	defer cancel()
	req := &grpcapi.SyncVPPLBStateRequest{}
	if len(args) > 0 && args[0] != "" {
		name := args[0]
		req.FrontendName = &name
	}
	if _, err := client.SyncVPPLBState(ctx, req); err != nil {
		return err
	}
	if req.FrontendName != nil {
		fmt.Printf("synced frontend %q to VPP\n", *req.FrontendName)
	} else {
		fmt.Println("synced full LB state to VPP")
	}
	return nil
}

func runShowVersion(_ context.Context, _ grpcapi.MaglevClient, _ []string) error {
	fmt.Printf("maglevc %s (commit %s, built %s)\n",
		buildinfo.Version(), buildinfo.Commit(), buildinfo.Date())
	return nil
}

func runQuit(_ context.Context, _ grpcapi.MaglevClient, _ []string) error {
	return errQuit
}

func runShowFrontends(ctx context.Context, client grpcapi.MaglevClient, _ []string) error {
	ctx, cancel := context.WithTimeout(ctx, callTimeout)
	defer cancel()
	resp, err := client.ListFrontends(ctx, &grpcapi.ListFrontendsRequest{})
	if err != nil {
		return err
	}
	for _, name := range resp.FrontendNames {
		fmt.Println(name)
	}
	return nil
}

func runShowFrontend(ctx context.Context, client grpcapi.MaglevClient, args []string) error {
	if len(args) == 0 {
		return fmt.Errorf("usage: show frontend <name>")
	}
	ctx, cancel := context.WithTimeout(ctx, callTimeout)
	defer cancel()
	info, err := client.GetFrontend(ctx, &grpcapi.GetFrontendRequest{Name: args[0]})
	if err != nil {
		return err
	}

	w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
	fmt.Fprintf(w, "%s\t%s\n", label("name"), info.Name)
	fmt.Fprintf(w, "%s\t%s\n", label("address"), info.Address)
	fmt.Fprintf(w, "%s\t%s\n", label("protocol"), info.Protocol)
	fmt.Fprintf(w, "%s\t%d\n", label("port"), info.Port)
	fmt.Fprintf(w, "%s\t%t\n", label("src-ip-sticky"), info.SrcIpSticky)
	if info.Description != "" {
		fmt.Fprintf(w, "%s\t%s\n", label("description"), info.Description)
	}
	if len(info.Pools) > 0 {
		fmt.Fprintf(w, "%s\n", label("pools"))
	}
	if err := w.Flush(); err != nil {
		return err
	}

	// Pool section uses direct Printf with fixed-width padding so that ANSI
	// escape codes in labels don't confuse tabwriter's byte-based alignment.
	// "backends" is always the widest pool label (8 chars); all pool labels
	// are right-padded to that width, giving a 2+8+2 = 12-char visual indent.
	const poolLblWidth = len("backends")
	const poolIndent = "  "
	const poolSep = "  "
	contIndent := strings.Repeat(" ", len(poolIndent)+poolLblWidth+len(poolSep))

	for _, pool := range info.Pools {
		namePad := strings.Repeat(" ", poolLblWidth-len("name"))
		fmt.Printf("%s%s%s%s%s\n", poolIndent, label("name"), namePad, poolSep, pool.Name)
		for i, pb := range pool.Backends {
			beInfo, beErr := client.GetBackend(ctx, &grpcapi.GetBackendRequest{Name: pb.Name})
			suffix := ""
			if beErr == nil && !beInfo.Enabled {
				suffix = " [disabled]"
			}
			// Show both the configured weight (from YAML) and the
			// state-aware effective weight (what gets programmed into VPP
			// after pool-failover logic). Format matches the VPP-style
			// key-value line so robot tests can parse it with a regex.
			metaStr := fmt.Sprintf(" %s %d %s %d",
				label("weight"), pb.Weight,
				label("effective"), pb.EffectiveWeight)
			if i == 0 {
				bePad := strings.Repeat(" ", poolLblWidth-len("backends"))
				fmt.Printf("%s%s%s%s%s%s%s\n", poolIndent, label("backends"), bePad, poolSep, pb.Name, metaStr, suffix)
			} else {
				fmt.Printf("%s%s%s%s\n", contIndent, pb.Name, metaStr, suffix)
			}
		}
	}
	return nil
}

func runShowBackends(ctx context.Context, client grpcapi.MaglevClient, _ []string) error {
	ctx, cancel := context.WithTimeout(ctx, callTimeout)
	defer cancel()
	resp, err := client.ListBackends(ctx, &grpcapi.ListBackendsRequest{})
	if err != nil {
		return err
	}
	for _, name := range resp.BackendNames {
		fmt.Println(name)
	}
	return nil
}

func runShowBackend(ctx context.Context, client grpcapi.MaglevClient, args []string) error {
	if len(args) == 0 {
		return fmt.Errorf("usage: show backend <name>")
	}
	ctx, cancel := context.WithTimeout(ctx, callTimeout)
	defer cancel()
	info, err := client.GetBackend(ctx, &grpcapi.GetBackendRequest{Name: args[0]})
	if err != nil {
		return err
	}
	w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
	fmt.Fprintf(w, "%s\t%s\n", label("name"), info.Name)
	fmt.Fprintf(w, "%s\t%s\n", label("address"), info.Address)
	stateDur := ""
	if len(info.Transitions) > 0 {
		since := time.Since(time.Unix(0, info.Transitions[0].AtUnixNs))
		stateDur = " for " + formatDuration(since)
	}
	fmt.Fprintf(w, "%s\t%s%s\n", label("state"), info.State, stateDur)
	fmt.Fprintf(w, "%s\t%v\n", label("enabled"), info.Enabled)
	fmt.Fprintf(w, "%s\t%s\n", label("healthcheck"), info.Healthcheck)
	for i, t := range info.Transitions {
		ts := time.Unix(0, t.AtUnixNs)
		var lbl string
		if i == 0 {
			lbl = label("transitions")
		} else {
			// Pad to same visible width as "transitions" and wrap through
			// label() so tabwriter sees the same byte count (ANSI overhead
			// is identical on every row, keeping columns aligned).
			lbl = label("           ")
		}
		fmt.Fprintf(w, "%s\t%s → %s\t%s\t%s\n",
			lbl,
			t.From, t.To,
			ts.Format("2006-01-02 15:04:05.000"),
			formatAgo(time.Since(ts)),
		)
	}
	return w.Flush()
}

func runShowHealthChecks(ctx context.Context, client grpcapi.MaglevClient, _ []string) error {
	ctx, cancel := context.WithTimeout(ctx, callTimeout)
	defer cancel()
	resp, err := client.ListHealthChecks(ctx, &grpcapi.ListHealthChecksRequest{})
	if err != nil {
		return err
	}
	for _, name := range resp.Names {
		fmt.Println(name)
	}
	return nil
}

func runShowHealthCheck(ctx context.Context, client grpcapi.MaglevClient, args []string) error {
	if len(args) == 0 {
		return fmt.Errorf("usage: show healthcheck <name>")
	}
	ctx, cancel := context.WithTimeout(ctx, callTimeout)
	defer cancel()
	info, err := client.GetHealthCheck(ctx, &grpcapi.GetHealthCheckRequest{Name: args[0]})
	if err != nil {
		return err
	}
	w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
	fmt.Fprintf(w, "%s\t%s\n", label("name"), info.Name)
	fmt.Fprintf(w, "%s\t%s\n", label("type"), info.Type)
	if info.Port > 0 {
		fmt.Fprintf(w, "%s\t%d\n", label("port"), info.Port)
	}
	fmt.Fprintf(w, "%s\t%s\n", label("interval"), time.Duration(info.IntervalNs))
	if info.FastIntervalNs > 0 {
		fmt.Fprintf(w, "%s\t%s\n", label("fast-interval"), time.Duration(info.FastIntervalNs))
	}
	if info.DownIntervalNs > 0 {
		fmt.Fprintf(w, "%s\t%s\n", label("down-interval"), time.Duration(info.DownIntervalNs))
	}
	fmt.Fprintf(w, "%s\t%s\n", label("timeout"), time.Duration(info.TimeoutNs))
	fmt.Fprintf(w, "%s\t%d\n", label("rise"), info.Rise)
	fmt.Fprintf(w, "%s\t%d\n", label("fall"), info.Fall)
	if info.ProbeIpv4Src != "" {
		fmt.Fprintf(w, "%s\t%s\n", label("probe-ipv4-src"), info.ProbeIpv4Src)
	}
	if info.ProbeIpv6Src != "" {
		fmt.Fprintf(w, "%s\t%s\n", label("probe-ipv6-src"), info.ProbeIpv6Src)
	}
	if h := info.Http; h != nil {
		fmt.Fprintf(w, "%s\t%s\n", label("http.path"), h.Path)
		if h.Host != "" {
			fmt.Fprintf(w, "%s\t%s\n", label("http.host"), h.Host)
		}
		fmt.Fprintf(w, "%s\t%d-%d\n", label("http.response-code"), h.ResponseCodeMin, h.ResponseCodeMax)
		if h.ResponseRegexp != "" {
			fmt.Fprintf(w, "%s\t%s\n", label("http.response-regexp"), h.ResponseRegexp)
		}
	}
	if t := info.Tcp; t != nil {
		fmt.Fprintf(w, "%s\t%v\n", label("tcp.ssl"), t.Ssl)
		if t.ServerName != "" {
			fmt.Fprintf(w, "%s\t%s\n", label("tcp.server-name"), t.ServerName)
		}
	}
	return w.Flush()
}

func runPauseBackend(ctx context.Context, client grpcapi.MaglevClient, args []string) error {
	if len(args) == 0 {
		return fmt.Errorf("usage: set backend <name> pause")
	}
	ctx, cancel := context.WithTimeout(ctx, callTimeout)
	defer cancel()
	info, err := client.PauseBackend(ctx, &grpcapi.BackendRequest{Name: args[0]})
	if err != nil {
		return err
	}
	fmt.Printf("%s: setting state to '%s'\n", info.Name, info.State)
	return nil
}

func runResumeBackend(ctx context.Context, client grpcapi.MaglevClient, args []string) error {
	if len(args) == 0 {
		return fmt.Errorf("usage: set backend <name> resume")
	}
	ctx, cancel := context.WithTimeout(ctx, callTimeout)
	defer cancel()
	info, err := client.ResumeBackend(ctx, &grpcapi.BackendRequest{Name: args[0]})
	if err != nil {
		return err
	}
	fmt.Printf("%s: setting state to '%s'\n", info.Name, info.State)
	return nil
}

func runSetFrontendPoolBackendWeight(ctx context.Context, client grpcapi.MaglevClient, args []string) error {
	return setFrontendPoolBackendWeight(ctx, client, args, false)
}

func runSetFrontendPoolBackendWeightFlush(ctx context.Context, client grpcapi.MaglevClient, args []string) error {
	return setFrontendPoolBackendWeight(ctx, client, args, true)
}

func setFrontendPoolBackendWeight(ctx context.Context, client grpcapi.MaglevClient, args []string, flush bool) error {
	if len(args) != 4 {
		return fmt.Errorf("usage: set frontend <name> pool <pool> backend <name> weight <0-100> [flush]")
	}
	frontendName, poolName, backendName, weightStr := args[0], args[1], args[2], args[3]
	weight, err := strconv.Atoi(weightStr)
	if err != nil || weight < 0 || weight > 100 {
		return fmt.Errorf("weight: expected integer 0-100, got %q", weightStr)
	}
	ctx, cancel := context.WithTimeout(ctx, callTimeout)
	defer cancel()
	info, err := client.SetFrontendPoolBackendWeight(ctx, &grpcapi.SetWeightRequest{
		Frontend: frontendName,
		Pool:     poolName,
		Backend:  backendName,
		Weight:   int32(weight),
		Flush:    flush,
	})
	if err != nil {
		return err
	}
	// Print the updated pool so the user can confirm the new weight.
	for _, pool := range info.Pools {
		if pool.Name != poolName {
			continue
		}
		for _, pb := range pool.Backends {
			if pb.Name == backendName {
				flushNote := ""
				if flush {
					flushNote = " (flushed)"
				}
				fmt.Printf("%s pool %s backend %s: weight set to %d%s\n",
					info.Name, pool.Name, pb.Name, pb.Weight, flushNote)
				return nil
			}
		}
	}
	return nil
}

// runEnableBackend handles `set backend <name> enable`.
func runEnableBackend(ctx context.Context, client grpcapi.MaglevClient, args []string) error {
	if len(args) == 0 {
		return fmt.Errorf("usage: set backend <name> enable")
	}
	ctx, cancel := context.WithTimeout(ctx, callTimeout)
	defer cancel()
	info, err := client.EnableBackend(ctx, &grpcapi.BackendRequest{Name: args[0]})
	if err != nil {
		return err
	}
	fmt.Printf("%s: enabled, state is '%s'\n", info.Name, info.State)
	return nil
}

// runDisableBackend handles `set backend <name> disable`.
func runDisableBackend(ctx context.Context, client grpcapi.MaglevClient, args []string) error {
	if len(args) == 0 {
		return fmt.Errorf("usage: set backend <name> disable")
	}
	ctx, cancel := context.WithTimeout(ctx, callTimeout)
	defer cancel()
	info, err := client.DisableBackend(ctx, &grpcapi.BackendRequest{Name: args[0]})
	if err != nil {
		return err
	}
	fmt.Printf("%s: disabled, state is '%s'\n", info.Name, info.State)
	return nil
}

// runConfigCheck asks the server to validate its configuration without
// applying it, reporting any parse or semantic error.
func runConfigCheck(ctx context.Context, client grpcapi.MaglevClient, _ []string) error {
	ctx, cancel := context.WithTimeout(ctx, callTimeout)
	defer cancel()
	resp, err := client.CheckConfig(ctx, &grpcapi.CheckConfigRequest{})
	if err != nil {
		return err
	}
	if resp.Ok {
		fmt.Println("config ok")
		return nil
	}
	if resp.ParseError != "" {
		return fmt.Errorf("parse error: %s", resp.ParseError)
	}
	return fmt.Errorf("semantic error: %s", resp.SemanticError)
}

// runConfigReload asks the server to reload its configuration, reporting
// any parse, semantic, or reload error.
func runConfigReload(ctx context.Context, client grpcapi.MaglevClient, _ []string) error {
	ctx, cancel := context.WithTimeout(ctx, callTimeout)
	defer cancel()
	resp, err := client.ReloadConfig(ctx, &grpcapi.ReloadConfigRequest{})
	if err != nil {
		return err
	}
	if resp.Ok {
		fmt.Println("config reloaded")
		return nil
	}
	if resp.ParseError != "" {
		return fmt.Errorf("parse error: %s", resp.ParseError)
	}
	if resp.SemanticError != "" {
		return fmt.Errorf("semantic error: %s", resp.SemanticError)
	}
	return fmt.Errorf("reload error: %s", resp.ReloadError)
}

// formatDuration formats a duration as Xd Xh Xm Xs without milliseconds.
// Zero-valued components are omitted; negative durations are clamped to
// zero, and a zero duration renders as "0s".
func formatDuration(d time.Duration) string {
	if d < 0 {
		d = 0
	}
	d = d.Truncate(time.Second)

	days := int(d.Hours()) / 24
	d -= time.Duration(days) * 24 * time.Hour
	hours := int(d.Hours())
	d -= time.Duration(hours) * time.Hour
	minutes := int(d.Minutes())
	d -= time.Duration(minutes) * time.Minute
	seconds := int(d.Seconds())

	var b strings.Builder
	if days > 0 {
		fmt.Fprintf(&b, "%dd", days)
	}
	if hours > 0 {
		fmt.Fprintf(&b, "%dh", hours)
	}
	if minutes > 0 {
		fmt.Fprintf(&b, "%dm", minutes)
	}
	if seconds > 0 || b.Len() == 0 {
		fmt.Fprintf(&b, "%ds", seconds)
	}
	return b.String()
}

// formatAgo renders an elapsed duration as "<duration> ago".
func formatAgo(d time.Duration) string {
	return formatDuration(d) + " ago"
}