vpp-maglev/cmd/maglevc/commands.go
Pim van Pelt 0049c2ae73 VPP reconciler: event-driven sync, pool failover, bug fixes
This commit wires the checker's state machine through to the VPP dataplane:
every backend state transition flows through a single code path that
recomputes the effective per-backend weight (with pool failover) and pushes
the result to VPP. Along the way several latent bugs in the state machine
and the sync path were fixed.

internal/vpp/reconciler.go (new)
- New Reconciler type subscribes to checker.Checker events and, on every
  transition, calls Client.SyncLBStateVIP for the affected frontend. This
  is the ONLY place in the codebase where backend state changes cause VPP
  calls — the "single path" discipline requested during design.
- Defines an EventSource interface (checker.Checker satisfies it) so the
  dependency direction stays vpp → checker; the checker never imports vpp.
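- For illustration, the shape is roughly (not the exact code: the Event
  fields, the Subscribe method, and the backend-to-frontend lookup are
  placeholders; EventSource, Reconciler and SyncLBStateVIP are the real
  names):

    type Event struct {             // placeholder event shape
        Backend  string
        From, To health.State
    }

    type EventSource interface {    // satisfied by checker.Checker
        Subscribe() <-chan Event
    }

    type Reconciler struct {
        client *Client
        src    EventSource
    }

    func (r *Reconciler) Run(ctx context.Context) {
        ch := r.src.Subscribe()
        for {
            select {
            case <-ctx.Done():
                return
            case ev := <-ch:
                // Single path: one backend transition resyncs every
                // frontend that references that backend.
                for _, fe := range r.client.frontendsUsing(ev.Backend) {
                    _ = r.client.SyncLBStateVIP(fe)
                }
            }
        }
    }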

internal/vpp/client.go
- Renamed ConfigSource → StateSource. The interface now has two methods:
  Config() and BackendState(name) — the reconciler and the desired-state
  builder both need live health state to compute effective weights. The
  interface shape is sketched below.
- SetConfigSource → SetStateSource; internal cfgSrc field → stateSrc.
- New getStateSource() helper for internal locked access.
- lbSyncLoop still uses the state source for its periodic drift
  reconciliation; it's fully idempotent and runs the same code path as
  event-driven syncs.
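- Roughly (Config's return type is an assumption here):

    type StateSource interface {
        Config() *config.Config
        BackendState(name string) (health.State, bool)
    }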

internal/vpp/lbsync.go
- desiredAS grows a Flush bool so the mapping function can signal "on
  transition to weight 0, flush existing flow-table entries".
- asFromBackend is now the single source of truth for the state →
  (weight, flush) rule. Documented with a full truth table. Takes an
  activePool parameter so it can distinguish "up in active pool" from
  "up but standby".
- activePoolIndex(fe, states) implements priority failover: it returns
  the index of the first pool containing any StateUp backend. pool[0]
  wins when at least one of its members is up; pool[1] takes over when
  pool[0] has no up member; and so on. Defaults to 0 when nothing is up
  anywhere (the choice is unobservable, since every backend then maps to
  weight 0). Both helpers are sketched after this list.
- desiredFromFrontend snapshots backend states once, computes activePool,
  then walks every backend through asFromBackend. No more filtering on
  b.Enabled — disabled backends stay in the desired set so they keep
  their AS entry in VPP with weight=0. The previous filter caused delAS
  on disable, which destroyed the entry and broke enable afterwards.
- EffectiveWeights(fe, src) exported helper that returns the per-pool
  per-backend weight map for one frontend. Used by the gRPC GetFrontend
  handler and robot tests to observe failover without touching VPP.
- reconcileVIP computes flush at the weight-change call site:
    flush = desired.Flush && cur.Weight > 0 && desired.Weight == 0
  This ensures only the *transition* to disabled flushes sessions —
  steady-state syncs with already-zero weight skip the call entirely.
- setASWeight now plumbs IsFlush into lb_as_set_weight.
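- The shape of the two helpers, roughly (illustrative only: config and
  struct field names and the exact parameter types are assumptions; the
  real truth table covers more cases than shown here):

    // First pool with any up backend wins; 0 when nothing is up anywhere.
    func activePoolIndex(fe *config.Frontend, states map[string]health.State) int {
        for i, pool := range fe.Pools {
            for _, b := range pool.Backends {
                if states[b.Name] == health.StateUp {
                    return i
                }
            }
        }
        return 0
    }

    // Collapsed state → (weight, flush) rule; activePool says whether this
    // backend's pool is the one selected by activePoolIndex.
    func asFromBackend(b config.PoolBackend, st health.State, activePool bool) desiredAS {
        switch {
        case st == health.StateUp && activePool:
            return desiredAS{Weight: b.Weight}       // serving
        case st == health.StateDisabled:
            return desiredAS{Weight: 0, Flush: true} // drain; flush on the transition
        default:
            return desiredAS{Weight: 0}              // standby, down, unknown, ...
        }
    }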

internal/vpp/lbsync_test.go (new)
- TestAsFromBackend: 15 cases locking down the truth table, including
  failover scenarios (up in standby pool, up promoted in pool[1]).
- TestActivePoolIndex: 8 cases covering pool[0]-has-up, pool[0]-all-down,
  all-disabled, all-paused, all-unknown, nothing-up-anywhere, and
  three-tier failover.
- TestDesiredFromFrontendFailover: 5 end-to-end scenarios wiring a fake
  StateSource through desiredFromFrontend and asserting the final
  per-IP weight map. Exercises the complete pipeline without VPP.

internal/checker/checker.go
- Added BackendState(name) (health.State, bool) — one-line method that
  satisfies vpp.StateSource. The checker is otherwise unchanged.
- EnableBackend rewritten to reuse the existing worker (parallel to
  ResumeBackend). The old code called startWorker which constructed a
  brand-new Backend via health.New, throwing away the transition
  history; the resulting 'backend-transition' log showed a bogus
  from=unknown,to=unknown. Now uses w.backend.Enable() to record a
  proper disabled→unknown transition and launches a fresh goroutine.
- Static (no-healthcheck) backends now fire their synthetic 'always up'
  pass on the first iteration of runProbe instead of sleeping 30s
  first. Previously static backends sat in StateUnknown for 30s after
  startup — useless for deterministic testing and surprising for
  operators. The fix is a simple first-iteration flag.
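- The first-iteration pattern, roughly (the worker's field and method
  names here are placeholders):

    func (w *worker) runProbe(ctx context.Context) {
        first := true
        for {
            if !first {
                select {
                case <-ctx.Done():
                    return
                case <-time.After(w.interval):
                }
            }
            first = false
            if w.static {
                w.backend.RecordPass() // synthetic 'always up' result
                continue
            }
            // ... run the configured health check probe ...
        }
    }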

internal/health/state.go
- New Enable(maxHistory) method parallel to Disable. Transitions the
  backend from whatever state it's in (typically StateDisabled) to
  StateUnknown, resets the health counter to rise-1 so the expedited
  resolution kicks in on the first probe result, and emits a transition
  with code 'enabled'.
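- Shape of the new method, roughly (field names are placeholders; the
  real code mirrors Disable):

    func (b *Backend) Enable(maxHistory int) {
        b.mu.Lock()
        defer b.mu.Unlock()
        from := b.state
        b.state = StateUnknown
        // Expedited resolution: one passing probe is enough to go up.
        b.count = b.rise - 1
        b.addTransition(from, StateUnknown, "enabled", maxHistory)
    }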

proto/maglev.proto
- PoolBackendInfo gains effective_weight: the state-aware weight that
  would be programmed into VPP (distinct from the configured weight in
  the YAML). Exposed via GetFrontend.

internal/grpcapi/server.go
- frontendToProto takes a vpp.StateSource, computes effective weights
  via vpp.EffectiveWeights, and populates PoolBackendInfo.EffectiveWeight.
- GetFrontend and SetFrontendPoolBackendWeight updated to pass the
  checker in.

cmd/maglevc/commands.go
- 'show frontends <name>' now renders every pool backend row as
    <name>  weight <cfg>  effective <eff>  [disabled]?
  so both values are always visible. The VPP-style key/value format
  avoids the ANSI-alignment pitfall we hit earlier and makes the output
  regex-parseable for robot tests.

cmd/maglevd/main.go
- Construct and start the Reconciler alongside the VPP client. Two
  extra lines, no other changes to startup.
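- Roughly (constructor and variable names here are assumptions):

    rec := vpp.NewReconciler(vppClient, chk)
    go rec.Run(ctx)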

tests/01-maglevd/maglevd-lab/maglev.yaml
- Two new static backends (static-primary, static-fallback) and a new
  failover-vip frontend with one backend per pool. No healthcheck, so
  the state machine resolves them to 'up' immediately via the synthetic
  pass. Used by the failover robot tests.

tests/01-maglevd/01-healthcheck.robot
- Three new test cases exercising pool failover end-to-end:
  1. primary up, secondary standby (initial state)
  2. disable primary → fallback takes over (effective weight flips)
  3. enable primary → fallback steps back
  All run without VPP: they scrape 'maglevc show frontends <name>' and
  regex-match the effective weight in the output. Deterministic and
  fast (~2s total) because the static backends don't probe.
- Two helper keywords: Static Backend Should Be Up and
  Effective Weight Should Be.

Net result: 16/16 robot tests pass. Backend state transitions now
flow through a single documented path (checker event → reconciler →
SyncLBStateVIP → desiredFromFrontend → asFromBackend → reconcileVIP →
setASWeight), and the pool failover / enable-after-disable / static-
backend-startup bugs are all fixed.
2026-04-12 12:40:09 +02:00

// Copyright (c) 2026, Pim van Pelt <pim@ipng.ch>
package main
import (
"context"
"fmt"
"os"
"strconv"
"strings"
"text/tabwriter"
"time"
buildinfo "git.ipng.ch/ipng/vpp-maglev/cmd"
"git.ipng.ch/ipng/vpp-maglev/internal/grpcapi"
)
const callTimeout = 10 * time.Second
// buildTree constructs the full command tree.
func buildTree() *Node {
root := &Node{Word: "", Help: ""}
show := &Node{Word: "show", Help: "show information"}
set := &Node{Word: "set", Help: "modify configuration"}
quit := &Node{Word: "quit", Help: "exit the shell", Run: runQuit}
exit := &Node{Word: "exit", Help: "exit the shell", Run: runQuit}
// show version
showVersion := &Node{Word: "version", Help: "Show build version", Run: runShowVersion}
// show frontends [<name>] — without name: list all, with name: show details
showFrontendName := &Node{
Word: "<name>",
Help: "Show details for a single frontend",
Dynamic: dynFrontends,
Run: runShowFrontend,
}
showFrontends := &Node{
Word: "frontends",
Help: "List all frontends",
Run: runShowFrontends,
Children: []*Node{showFrontendName},
}
// show backends [<name>] — without name: list all, with name: show details
showBackendName := &Node{
Word: "<name>",
Help: "Show details for a single backend",
Dynamic: dynBackends,
Run: runShowBackend,
}
showBackends := &Node{
Word: "backends",
Help: "List all backends",
Run: runShowBackends,
Children: []*Node{showBackendName},
}
// show healthchecks [<name>] — without name: list all, with name: show details
showHealthCheckName := &Node{
Word: "<name>",
Help: "Show details for a single health check",
Dynamic: dynHealthChecks,
Run: runShowHealthCheck,
}
showHealthChecks := &Node{
Word: "healthchecks",
Help: "List all health checks",
Run: runShowHealthChecks,
Children: []*Node{showHealthCheckName},
}
// show vpp info / lbstate
showVPPInfo := &Node{Word: "info", Help: "Show VPP version, uptime, and connection status", Run: runShowVPPInfo}
showVPPLBState := &Node{Word: "lbstate", Help: "Show VPP load-balancer state (VIPs and application servers)", Run: runShowVPPLBState}
showVPP := &Node{
Word: "vpp",
Help: "VPP dataplane information",
Children: []*Node{showVPPInfo, showVPPLBState},
}
show.Children = []*Node{
showVersion,
showFrontends,
showBackends,
showHealthChecks,
showVPP,
}
// set backend <name> pause|resume|disable|enable
setPause := &Node{Word: "pause", Help: "pause health checking", Run: runPauseBackend}
setResume := &Node{Word: "resume", Help: "resume health checking", Run: runResumeBackend}
setDisabled := &Node{Word: "disable", Help: "disable backend (stop probing, remove from rotation)", Run: runDisableBackend}
setEnabled := &Node{Word: "enable", Help: "enable backend (resume probing)", Run: runEnableBackend}
setBackendName := &Node{
Word: "<name>",
Help: "backend name",
Dynamic: dynBackends,
Children: []*Node{setPause, setResume, setDisabled, setEnabled},
}
setBackend := &Node{
Word: "backend",
Help: "modify a backend",
Children: []*Node{setBackendName},
}
// set frontend <name> pool <pool> backend <name> weight <0-100>
setWeightValue := &Node{
Word: "<weight>",
Help: "Set weight of a backend in a pool (0-100)",
Dynamic: dynNone, // accepts any integer; no tab-completion candidates
Run: runSetFrontendPoolBackendWeight,
}
setFrontendPoolBackendWeight := &Node{Word: "weight", Help: "set backend weight in pool", Children: []*Node{setWeightValue}}
setFrontendPoolBackendName := &Node{
Word: "<backend>",
Help: "backend name",
Dynamic: dynBackends,
Children: []*Node{setFrontendPoolBackendWeight},
}
setFrontendPoolBackend := &Node{Word: "backend", Help: "select a backend", Children: []*Node{setFrontendPoolBackendName}}
setFrontendPoolName := &Node{
Word: "<pool>",
Help: "pool name",
Dynamic: dynNone, // pool names aren't listed via gRPC; accepts any input
Children: []*Node{setFrontendPoolBackend},
}
setFrontendPool := &Node{Word: "pool", Help: "select a pool", Children: []*Node{setFrontendPoolName}}
setFrontendName := &Node{
Word: "<name>",
Help: "frontend name",
Dynamic: dynFrontends,
Children: []*Node{setFrontendPool},
}
setFrontend := &Node{
Word: "frontend",
Help: "modify a frontend",
Children: []*Node{setFrontendName},
}
set.Children = []*Node{setBackend, setFrontend}
// watch events [num <n>] [log [level <level>]] [backend] [frontend]
//
// All tokens after 'events' are captured as args via a self-referencing slot
// node. This lets runWatchEvents parse the optional flags manually while still
// providing tab-completion through the dynamic enumerator.
var watchEventsOptSlot *Node
watchEventsOptSlot = &Node{
Word: "<opt>",
Help: "Stream events with options",
Dynamic: dynWatchEventOpts,
Run: runWatchEvents,
}
watchEventsOptSlot.Children = []*Node{watchEventsOptSlot}
watchEvents := &Node{
Word: "events",
Help: "stream events (press any key or Ctrl-C to stop)",
Run: runWatchEvents,
Children: []*Node{watchEventsOptSlot},
}
watch := &Node{
Word: "watch",
Help: "watch live event streams",
Children: []*Node{watchEvents},
}
// config check / reload
configCheck := &Node{Word: "check", Help: "Check configuration file", Run: runConfigCheck}
configReload := &Node{Word: "reload", Help: "Check and reload configuration", Run: runConfigReload}
configNode := &Node{
Word: "config",
Help: "configuration commands",
Children: []*Node{configCheck, configReload},
}
// sync vpp lbstate [<name>]
//
// Without a name: run SyncLBStateAll (may remove stale VIPs).
// With a name: run SyncLBStateVIP(name) for just that frontend (no removals).
syncVPPLBStateName := &Node{
Word: "<name>",
Help: "Sync a single frontend's VIP to VPP",
Dynamic: dynFrontends,
Run: runSyncVPPLBState,
}
syncVPPLBState := &Node{
Word: "lbstate",
Help: "Sync the VPP load-balancer dataplane from the running config",
Run: runSyncVPPLBState,
Children: []*Node{syncVPPLBStateName},
}
syncVPP := &Node{
Word: "vpp",
Help: "VPP dataplane sync commands",
Children: []*Node{syncVPPLBState},
}
syncNode := &Node{
Word: "sync",
Help: "Reconcile dataplane state from the running config",
Children: []*Node{syncVPP},
}
root.Children = []*Node{show, set, watch, configNode, syncNode, quit, exit}
return root
}
// ---- dynamic enumerators ---------------------------------------------------
func dynFrontends(ctx context.Context, client grpcapi.MaglevClient) []string {
resp, err := client.ListFrontends(ctx, &grpcapi.ListFrontendsRequest{})
if err != nil {
return nil
}
return resp.FrontendNames
}
func dynBackends(ctx context.Context, client grpcapi.MaglevClient) []string {
resp, err := client.ListBackends(ctx, &grpcapi.ListBackendsRequest{})
if err != nil {
return nil
}
return resp.BackendNames
}
func dynHealthChecks(ctx context.Context, client grpcapi.MaglevClient) []string {
resp, err := client.ListHealthChecks(ctx, &grpcapi.ListHealthChecksRequest{})
if err != nil {
return nil
}
return resp.Names
}
// dynNone marks a slot node that accepts any input but provides no
// tab-completion candidates (e.g. a pool name or numeric weight value).
func dynNone(_ context.Context, _ grpcapi.MaglevClient) []string { return nil }
// ---- run functions ---------------------------------------------------------
func runShowVPPInfo(ctx context.Context, client grpcapi.MaglevClient, _ []string) error {
ctx, cancel := context.WithTimeout(ctx, callTimeout)
defer cancel()
info, err := client.GetVPPInfo(ctx, &grpcapi.GetVPPInfoRequest{})
if err != nil {
return err
}
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "%s\t%s\n", label("version"), info.Version)
fmt.Fprintf(w, "%s\t%s\n", label("build-date"), info.BuildDate)
fmt.Fprintf(w, "%s\t%s\n", label("build-dir"), info.BuildDirectory)
fmt.Fprintf(w, "%s\t%d\n", label("vpp-pid"), info.Pid)
if info.BoottimeNs > 0 {
bootTime := time.Unix(0, info.BoottimeNs)
fmt.Fprintf(w, "%s\t%s (%s)\n", label("vpp-boottime"),
bootTime.Format("2006-01-02 15:04:05"),
formatDuration(time.Since(bootTime)))
}
connTime := time.Unix(0, info.ConnecttimeNs)
fmt.Fprintf(w, "%s\t%s (%s)\n", label("connected"),
connTime.Format("2006-01-02 15:04:05"),
formatDuration(time.Since(connTime)))
return w.Flush()
}
func runShowVPPLBState(ctx context.Context, client grpcapi.MaglevClient, _ []string) error {
ctx, cancel := context.WithTimeout(ctx, callTimeout)
defer cancel()
state, err := client.GetVPPLBState(ctx, &grpcapi.GetVPPLBStateRequest{})
if err != nil {
return err
}
// ---- global config ----
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "%s\n", label("global"))
if state.Conf.Ip4SrcAddress != "" {
fmt.Fprintf(w, " %s\t%s\n", label("ip4-src"), state.Conf.Ip4SrcAddress)
}
if state.Conf.Ip6SrcAddress != "" {
fmt.Fprintf(w, " %s\t%s\n", label("ip6-src"), state.Conf.Ip6SrcAddress)
}
fmt.Fprintf(w, " %s\t%d\n", label("sticky-buckets-per-core"), state.Conf.StickyBucketsPerCore)
fmt.Fprintf(w, " %s\t%ds\n", label("flow-timeout"), state.Conf.FlowTimeout)
if err := w.Flush(); err != nil {
return err
}
if len(state.Vips) == 0 {
fmt.Println(label("vips") + " (none)")
return nil
}
// ---- per-VIP details ----
for _, v := range state.Vips {
fmt.Println()
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "%s\t%s\n", label("vip"), v.Prefix)
fmt.Fprintf(w, " %s\t%s\n", label("protocol"), protoString(v.Protocol))
fmt.Fprintf(w, " %s\t%d\n", label("port"), v.Port)
fmt.Fprintf(w, " %s\t%s\n", label("encap"), v.Encap)
fmt.Fprintf(w, " %s\t%d\n", label("flow-table-length"), v.FlowTableLength)
fmt.Fprintf(w, " %s\t%d\n", label("application-servers"), len(v.ApplicationServers))
if err := w.Flush(); err != nil {
return err
}
for _, a := range v.ApplicationServers {
fmt.Printf(" %s %s %s %d %s %d\n",
label("address"), a.Address,
label("weight"), a.Weight,
label("flow-table-buckets"), a.NumBuckets)
}
}
return nil
}
// protoString renders an IP protocol number as a name (tcp, udp, any, or numeric).
func protoString(p uint32) string {
switch p {
case 6:
return "tcp"
case 17:
return "udp"
case 255:
return "any"
}
return fmt.Sprintf("%d", p)
}
func runSyncVPPLBState(ctx context.Context, client grpcapi.MaglevClient, args []string) error {
ctx, cancel := context.WithTimeout(ctx, callTimeout)
defer cancel()
req := &grpcapi.SyncVPPLBStateRequest{}
if len(args) > 0 && args[0] != "" {
name := args[0]
req.FrontendName = &name
}
if _, err := client.SyncVPPLBState(ctx, req); err != nil {
return err
}
if req.FrontendName != nil {
fmt.Printf("synced frontend %q to VPP\n", *req.FrontendName)
} else {
fmt.Println("synced full LB state to VPP")
}
return nil
}
func runShowVersion(_ context.Context, _ grpcapi.MaglevClient, _ []string) error {
fmt.Printf("maglevc %s (commit %s, built %s)\n",
buildinfo.Version(), buildinfo.Commit(), buildinfo.Date())
return nil
}
func runQuit(_ context.Context, _ grpcapi.MaglevClient, _ []string) error {
return errQuit
}
func runShowFrontends(ctx context.Context, client grpcapi.MaglevClient, _ []string) error {
ctx, cancel := context.WithTimeout(ctx, callTimeout)
defer cancel()
resp, err := client.ListFrontends(ctx, &grpcapi.ListFrontendsRequest{})
if err != nil {
return err
}
for _, name := range resp.FrontendNames {
fmt.Println(name)
}
return nil
}
func runShowFrontend(ctx context.Context, client grpcapi.MaglevClient, args []string) error {
if len(args) == 0 {
return fmt.Errorf("usage: show frontend <name>")
}
ctx, cancel := context.WithTimeout(ctx, callTimeout)
defer cancel()
info, err := client.GetFrontend(ctx, &grpcapi.GetFrontendRequest{Name: args[0]})
if err != nil {
return err
}
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "%s\t%s\n", label("name"), info.Name)
fmt.Fprintf(w, "%s\t%s\n", label("address"), info.Address)
fmt.Fprintf(w, "%s\t%s\n", label("protocol"), info.Protocol)
fmt.Fprintf(w, "%s\t%d\n", label("port"), info.Port)
if info.Description != "" {
fmt.Fprintf(w, "%s\t%s\n", label("description"), info.Description)
}
if len(info.Pools) > 0 {
fmt.Fprintf(w, "%s\n", label("pools"))
}
if err := w.Flush(); err != nil {
return err
}
// Pool section uses direct Printf with fixed-width padding so that ANSI
// escape codes in labels don't confuse tabwriter's byte-based alignment.
// "backends" is always the widest pool label (8 chars); all pool labels
// are right-padded to that width, giving a 2+8+2 = 12-char visual indent.
const poolLblWidth = len("backends")
const poolIndent = "  "
const poolSep = "  "
contIndent := strings.Repeat(" ", len(poolIndent)+poolLblWidth+len(poolSep))
for _, pool := range info.Pools {
namePad := strings.Repeat(" ", poolLblWidth-len("name"))
fmt.Printf("%s%s%s%s%s\n", poolIndent, label("name"), namePad, poolSep, pool.Name)
for i, pb := range pool.Backends {
beInfo, beErr := client.GetBackend(ctx, &grpcapi.GetBackendRequest{Name: pb.Name})
suffix := ""
if beErr == nil && !beInfo.Enabled {
suffix = " [disabled]"
}
// Show both the configured weight (from YAML) and the
// state-aware effective weight (what gets programmed into VPP
// after pool-failover logic). Format matches the VPP-style
// key-value line so robot tests can parse it with a regex.
metaStr := fmt.Sprintf(" %s %d %s %d",
label("weight"), pb.Weight,
label("effective"), pb.EffectiveWeight)
if i == 0 {
bePad := strings.Repeat(" ", poolLblWidth-len("backends"))
fmt.Printf("%s%s%s%s%s%s%s\n", poolIndent, label("backends"), bePad, poolSep, pb.Name, metaStr, suffix)
} else {
fmt.Printf("%s%s%s%s\n", contIndent, pb.Name, metaStr, suffix)
}
}
}
return nil
}
func runShowBackends(ctx context.Context, client grpcapi.MaglevClient, _ []string) error {
ctx, cancel := context.WithTimeout(ctx, callTimeout)
defer cancel()
resp, err := client.ListBackends(ctx, &grpcapi.ListBackendsRequest{})
if err != nil {
return err
}
for _, name := range resp.BackendNames {
fmt.Println(name)
}
return nil
}
func runShowBackend(ctx context.Context, client grpcapi.MaglevClient, args []string) error {
if len(args) == 0 {
return fmt.Errorf("usage: show backend <name>")
}
ctx, cancel := context.WithTimeout(ctx, callTimeout)
defer cancel()
info, err := client.GetBackend(ctx, &grpcapi.GetBackendRequest{Name: args[0]})
if err != nil {
return err
}
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "%s\t%s\n", label("name"), info.Name)
fmt.Fprintf(w, "%s\t%s\n", label("address"), info.Address)
stateDur := ""
if len(info.Transitions) > 0 {
since := time.Since(time.Unix(0, info.Transitions[0].AtUnixNs))
stateDur = " for " + formatDuration(since)
}
fmt.Fprintf(w, "%s\t%s%s\n", label("state"), info.State, stateDur)
fmt.Fprintf(w, "%s\t%v\n", label("enabled"), info.Enabled)
fmt.Fprintf(w, "%s\t%s\n", label("healthcheck"), info.Healthcheck)
for i, t := range info.Transitions {
ts := time.Unix(0, t.AtUnixNs)
var lbl string
if i == 0 {
lbl = label("transitions")
} else {
// Pad to same visible width as "transitions" and wrap through
// label() so tabwriter sees the same byte count (ANSI overhead
// is identical on every row, keeping columns aligned).
lbl = label(strings.Repeat(" ", len("transitions")))
}
fmt.Fprintf(w, "%s\t%s → %s\t%s\t%s\n",
lbl,
t.From, t.To,
ts.Format("2006-01-02 15:04:05.000"),
formatAgo(time.Since(ts)),
)
}
return w.Flush()
}
func runShowHealthChecks(ctx context.Context, client grpcapi.MaglevClient, _ []string) error {
ctx, cancel := context.WithTimeout(ctx, callTimeout)
defer cancel()
resp, err := client.ListHealthChecks(ctx, &grpcapi.ListHealthChecksRequest{})
if err != nil {
return err
}
for _, name := range resp.Names {
fmt.Println(name)
}
return nil
}
func runShowHealthCheck(ctx context.Context, client grpcapi.MaglevClient, args []string) error {
if len(args) == 0 {
return fmt.Errorf("usage: show healthcheck <name>")
}
ctx, cancel := context.WithTimeout(ctx, callTimeout)
defer cancel()
info, err := client.GetHealthCheck(ctx, &grpcapi.GetHealthCheckRequest{Name: args[0]})
if err != nil {
return err
}
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "%s\t%s\n", label("name"), info.Name)
fmt.Fprintf(w, "%s\t%s\n", label("type"), info.Type)
if info.Port > 0 {
fmt.Fprintf(w, "%s\t%d\n", label("port"), info.Port)
}
fmt.Fprintf(w, "%s\t%s\n", label("interval"), time.Duration(info.IntervalNs))
if info.FastIntervalNs > 0 {
fmt.Fprintf(w, "%s\t%s\n", label("fast-interval"), time.Duration(info.FastIntervalNs))
}
if info.DownIntervalNs > 0 {
fmt.Fprintf(w, "%s\t%s\n", label("down-interval"), time.Duration(info.DownIntervalNs))
}
fmt.Fprintf(w, "%s\t%s\n", label("timeout"), time.Duration(info.TimeoutNs))
fmt.Fprintf(w, "%s\t%d\n", label("rise"), info.Rise)
fmt.Fprintf(w, "%s\t%d\n", label("fall"), info.Fall)
if info.ProbeIpv4Src != "" {
fmt.Fprintf(w, "%s\t%s\n", label("probe-ipv4-src"), info.ProbeIpv4Src)
}
if info.ProbeIpv6Src != "" {
fmt.Fprintf(w, "%s\t%s\n", label("probe-ipv6-src"), info.ProbeIpv6Src)
}
if h := info.Http; h != nil {
fmt.Fprintf(w, "%s\t%s\n", label("http.path"), h.Path)
if h.Host != "" {
fmt.Fprintf(w, "%s\t%s\n", label("http.host"), h.Host)
}
fmt.Fprintf(w, "%s\t%d-%d\n", label("http.response-code"), h.ResponseCodeMin, h.ResponseCodeMax)
if h.ResponseRegexp != "" {
fmt.Fprintf(w, "%s\t%s\n", label("http.response-regexp"), h.ResponseRegexp)
}
}
if t := info.Tcp; t != nil {
fmt.Fprintf(w, "%s\t%v\n", label("tcp.ssl"), t.Ssl)
if t.ServerName != "" {
fmt.Fprintf(w, "%s\t%s\n", label("tcp.server-name"), t.ServerName)
}
}
return w.Flush()
}
func runPauseBackend(ctx context.Context, client grpcapi.MaglevClient, args []string) error {
if len(args) == 0 {
return fmt.Errorf("usage: set backend <name> pause")
}
ctx, cancel := context.WithTimeout(ctx, callTimeout)
defer cancel()
info, err := client.PauseBackend(ctx, &grpcapi.BackendRequest{Name: args[0]})
if err != nil {
return err
}
fmt.Printf("%s: setting state to '%s'\n", info.Name, info.State)
return nil
}
func runResumeBackend(ctx context.Context, client grpcapi.MaglevClient, args []string) error {
if len(args) == 0 {
return fmt.Errorf("usage: set backend <name> resume")
}
ctx, cancel := context.WithTimeout(ctx, callTimeout)
defer cancel()
info, err := client.ResumeBackend(ctx, &grpcapi.BackendRequest{Name: args[0]})
if err != nil {
return err
}
fmt.Printf("%s: setting state to '%s'\n", info.Name, info.State)
return nil
}
func runSetFrontendPoolBackendWeight(ctx context.Context, client grpcapi.MaglevClient, args []string) error {
if len(args) != 4 {
return fmt.Errorf("usage: set frontend <name> pool <pool> backend <name> weight <0-100>")
}
frontendName, poolName, backendName, weightStr := args[0], args[1], args[2], args[3]
weight, err := strconv.Atoi(weightStr)
if err != nil || weight < 0 || weight > 100 {
return fmt.Errorf("weight: expected integer 0-100, got %q", weightStr)
}
ctx, cancel := context.WithTimeout(ctx, callTimeout)
defer cancel()
info, err := client.SetFrontendPoolBackendWeight(ctx, &grpcapi.SetWeightRequest{
Frontend: frontendName,
Pool: poolName,
Backend: backendName,
Weight: int32(weight),
})
if err != nil {
return err
}
// Print the updated pool so the user can confirm the new weight.
for _, pool := range info.Pools {
if pool.Name != poolName {
continue
}
for _, pb := range pool.Backends {
if pb.Name == backendName {
fmt.Printf("%s pool %s backend %s: weight set to %d\n", info.Name, pool.Name, pb.Name, pb.Weight)
return nil
}
}
}
return nil
}
func runEnableBackend(ctx context.Context, client grpcapi.MaglevClient, args []string) error {
if len(args) == 0 {
return fmt.Errorf("usage: set backend <name> enable")
}
ctx, cancel := context.WithTimeout(ctx, callTimeout)
defer cancel()
info, err := client.EnableBackend(ctx, &grpcapi.BackendRequest{Name: args[0]})
if err != nil {
return err
}
fmt.Printf("%s: enabled, state is '%s'\n", info.Name, info.State)
return nil
}
func runDisableBackend(ctx context.Context, client grpcapi.MaglevClient, args []string) error {
if len(args) == 0 {
return fmt.Errorf("usage: set backend <name> disable")
}
ctx, cancel := context.WithTimeout(ctx, callTimeout)
defer cancel()
info, err := client.DisableBackend(ctx, &grpcapi.BackendRequest{Name: args[0]})
if err != nil {
return err
}
fmt.Printf("%s: disabled, state is '%s'\n", info.Name, info.State)
return nil
}
func runConfigCheck(ctx context.Context, client grpcapi.MaglevClient, _ []string) error {
ctx, cancel := context.WithTimeout(ctx, callTimeout)
defer cancel()
resp, err := client.CheckConfig(ctx, &grpcapi.CheckConfigRequest{})
if err != nil {
return err
}
if resp.Ok {
fmt.Println("config ok")
return nil
}
if resp.ParseError != "" {
return fmt.Errorf("parse error: %s", resp.ParseError)
}
return fmt.Errorf("semantic error: %s", resp.SemanticError)
}
func runConfigReload(ctx context.Context, client grpcapi.MaglevClient, _ []string) error {
ctx, cancel := context.WithTimeout(ctx, callTimeout)
defer cancel()
resp, err := client.ReloadConfig(ctx, &grpcapi.ReloadConfigRequest{})
if err != nil {
return err
}
if resp.Ok {
fmt.Println("config reloaded")
return nil
}
if resp.ParseError != "" {
return fmt.Errorf("parse error: %s", resp.ParseError)
}
if resp.SemanticError != "" {
return fmt.Errorf("semantic error: %s", resp.SemanticError)
}
return fmt.Errorf("reload error: %s", resp.ReloadError)
}
// formatDuration formats a duration as Xd Xh Xm Xs without milliseconds.
func formatDuration(d time.Duration) string {
if d < 0 {
d = 0
}
d = d.Truncate(time.Second)
days := int(d.Hours()) / 24
d -= time.Duration(days) * 24 * time.Hour
hours := int(d.Hours())
d -= time.Duration(hours) * time.Hour
minutes := int(d.Minutes())
d -= time.Duration(minutes) * time.Minute
seconds := int(d.Seconds())
var b strings.Builder
if days > 0 {
fmt.Fprintf(&b, "%dd", days)
}
if hours > 0 {
fmt.Fprintf(&b, "%dh", hours)
}
if minutes > 0 {
fmt.Fprintf(&b, "%dm", minutes)
}
if seconds > 0 || b.Len() == 0 {
fmt.Fprintf(&b, "%ds", seconds)
}
return b.String()
}
func formatAgo(d time.Duration) string {
return formatDuration(d) + " ago"
}