This commit wires maglevd through to VPP's LB plugin end-to-end, using
locally-generated GoVPP bindings for the newer v2 API messages.
VPP binapi (vendored)
- New package internal/vpp/binapi/ containing lb, lb_types, ip_types, and
interface_types, generated from a local VPP build (~/src/vpp) via a new
'make vpp-binapi' target. GoVPP v0.12.0 upstream lacks the v2 messages we
need (lb_conf_get, lb_add_del_vip_v2, lb_add_del_as_v2, lb_as_v2_dump,
lb_as_set_weight), so we commit the generated output in-tree.
- All generated files go through our loggedChannel wrapper; every VPP API
send/receive is recorded at DEBUG via slog (vpp-api-send / vpp-api-recv /
vpp-api-send-multi / vpp-api-recv-multi) so the full wire-level trail is
auditable. NewAPIChannel is unexported — callers must use c.apiChannel().
Read path: GetLBState{All,VIP}
- GetLBStateAll returns a full snapshot (global conf + every VIP with its
attached application servers).
- GetLBStateVIP looks up a single VIP by (prefix, protocol, port) and
returns (nil, nil) when the VIP doesn't exist in VPP. This is the
efficient path for targeted updates on a busy LB.
- Helpers factored out: getLBConf, dumpAllVIPs, dumpASesForVIP, lookupVIP,
vipFromDetails.
Write path: SyncLBState{All,VIP}
- SyncLBStateAll reconciles every configured frontend with VPP: creates
missing VIPs, removes stale ones (with AS flush), and reconciles AS
membership and weights within VIPs that exist on both sides.
- SyncLBStateVIP targets a single frontend by name. Never removes VIPs.
Returns ErrFrontendNotFound (wrapped with the name) when the frontend
isn't in config, so callers can use errors.Is.
- Shared reconcileVIP helper does the per-VIP AS diff; removeVIP is used
only by the full-sync pass.
- LbAddDelVipV2 requests always set NewFlowsTableLength=1024. The .api
default=1024 annotation is only applied by VAT/CLI parsers, not wire-
level marshalling — sending 0 caused VPP to vec_validate with mask
0xFFFFFFFF and OOM-panic.
- Pool semantics: backends in the primary (first) pool of a frontend get
their configured weight; backends in secondary pools get weight 0. All
backends are installed so higher layers can flip weights on failover
without add/remove churn.
- Every individual change emits a DEBUG slog (vpp-lbsync-vip-add/del,
vpp-lbsync-as-add/del, vpp-lbsync-as-weight). Start/done INFO logs
carry a scope=all|vip label plus aggregate counts.
Global conf push: SetLBConf
- New SetLBConf(cfg) sends lb_conf with ipv4-src, ipv6-src, sticky-buckets,
and flow-timeout. Called automatically on VPP (re)connect and after
every config reload (via doReloadConfig). Results are cached on the
Client so redundant pushes are silently skipped — only actual changes
produce a vpp-lb-conf-set INFO log line.
Periodic drift reconciliation
- vpp.Client.lbSyncLoop runs in a goroutine tied to each VPP connection's
lifetime. Its first tick is immediate (startup and post-reconnect
sync quickly); subsequent ticks fire every vpp.lb.sync-interval from
config (default 30s). Purpose: catch drift if something/someone
modifies VPP state by hand. The loop uses a ConfigSource interface
(satisfied by checker.Checker via its new Config() accessor) to avoid
an import cycle with the checker package.
Config schema additions (maglev.vpp.lb)
- sync-interval: positive Go duration, default 30s.
- ipv4-src-address: REQUIRED. Used as the outer source for GRE4 encap
to application servers. Missing this is a hard semantic error —
maglevd --check exits 2 and the daemon refuses to start. VPP GRE
needs a source address and every VIP we program uses GRE, so there
is no meaningful config without it.
- ipv6-src-address: REQUIRED. Same treatment as ipv4-src-address.
- sticky-buckets-per-core: default 65536, must be a power of 2.
- flow-timeout: default 40s, must be a whole number of seconds in [1s, 120s].
- VPP validation runs at the end of convert() so structural errors in
healthchecks/backends/frontends surface first — operators fix those,
then get the VPP-specific requirements.
gRPC API
- New GetVPPLBState RPC returning VPPLBState: global conf + VIPs with
ASes. Mirrors the read-path but strips fields irrelevant to our
GRE-only deployment (srv_type, dscp, target_port).
- New SyncVPPLBState RPC with optional frontend_name. Unset → full sync
(may remove stale VIPs). Set → single-VIP sync (never removes).
Returns codes.NotFound for unknown frontends, codes.Unavailable when
VPP integration is disabled or disconnected.
maglevc (CLI)
- New 'show vpp lbstate' command displaying the LB plugin state. VPP
  fields irrelevant to our GRE-only dataplane are suppressed. Per-AS lines use
a key-value format ("address X weight Y flow-table-buckets Z")
instead of a tabwriter column, which avoids the ANSI-color alignment
issue we hit with mixed label/data rows.
- New 'sync vpp lbstate [<name>]' command. Without a name, triggers a
full reconciliation; with a name, targets one frontend.
- Previous 'show vpp lb' renamed to 'show vpp lbstate' for consistency
with the new sync command.
Test fixtures
- validConfig and all ad-hoc config_test.go fixtures that reach the end
of convert() now include the two required vpp.lb src addresses.
- tests/01-maglevd/maglevd-lab/maglev.yaml gains a vpp.lb section so the
robot integration tests can still load the config.
- cmd/maglevc/tree_test.go gains expected paths for the new commands.
Docs
- config-guide.md: new 'vpp' section in the basic structure, detailed
vpp.lb field reference, noting ipv4/ipv6 src addresses as REQUIRED
(hard error) with no defaults; example config updated.
- user-guide.md: documented 'show vpp info', 'show vpp lbstate',
'sync vpp lbstate [<name>]', new --vpp-api-addr and --vpp-stats-addr
flags, the vpp-lb-conf-set log line, and corrected the pause/resume
description to reflect that pause cancels the probe goroutine.
- debian/maglev.yaml: example config gains a vpp.lb block with src
addresses and commented optional overrides.
712 lines
22 KiB
Go
// Copyright (c) 2026, Pim van Pelt <pim@ipng.ch>

package main

import (
	"context"
	"fmt"
	"os"
	"strconv"
	"strings"
	"text/tabwriter"
	"time"

	buildinfo "git.ipng.ch/ipng/vpp-maglev/cmd"
	"git.ipng.ch/ipng/vpp-maglev/internal/grpcapi"
)

const callTimeout = 10 * time.Second

// buildTree constructs the full command tree.
func buildTree() *Node {
	root := &Node{Word: "", Help: ""}

	show := &Node{Word: "show", Help: "show information"}
	set := &Node{Word: "set", Help: "modify configuration"}
	quit := &Node{Word: "quit", Help: "exit the shell", Run: runQuit}
	exit := &Node{Word: "exit", Help: "exit the shell", Run: runQuit}

	// show version
	showVersion := &Node{Word: "version", Help: "Show build version", Run: runShowVersion}

	// show frontends [<name>] — without name: list all, with name: show details
	showFrontendName := &Node{
		Word:    "<name>",
		Help:    "Show details for a single frontend",
		Dynamic: dynFrontends,
		Run:     runShowFrontend,
	}
	showFrontends := &Node{
		Word:     "frontends",
		Help:     "List all frontends",
		Run:      runShowFrontends,
		Children: []*Node{showFrontendName},
	}

	// show backends [<name>] — without name: list all, with name: show details
	showBackendName := &Node{
		Word:    "<name>",
		Help:    "Show details for a single backend",
		Dynamic: dynBackends,
		Run:     runShowBackend,
	}
	showBackends := &Node{
		Word:     "backends",
		Help:     "List all backends",
		Run:      runShowBackends,
		Children: []*Node{showBackendName},
	}

	// show healthchecks [<name>] — without name: list all, with name: show details
	showHealthCheckName := &Node{
		Word:    "<name>",
		Help:    "Show details for a single health check",
		Dynamic: dynHealthChecks,
		Run:     runShowHealthCheck,
	}
	showHealthChecks := &Node{
		Word:     "healthchecks",
		Help:     "List all health checks",
		Run:      runShowHealthChecks,
		Children: []*Node{showHealthCheckName},
	}

	// show vpp info / lbstate
	showVPPInfo := &Node{Word: "info", Help: "Show VPP version, uptime, and connection status", Run: runShowVPPInfo}
	showVPPLBState := &Node{Word: "lbstate", Help: "Show VPP load-balancer state (VIPs and application servers)", Run: runShowVPPLBState}
	showVPP := &Node{
		Word:     "vpp",
		Help:     "VPP dataplane information",
		Children: []*Node{showVPPInfo, showVPPLBState},
	}

	show.Children = []*Node{
		showVersion,
		showFrontends,
		showBackends,
		showHealthChecks,
		showVPP,
	}

	// set backend <name> pause|resume|disabled|enabled
	setPause := &Node{Word: "pause", Help: "pause health checking", Run: runPauseBackend}
	setResume := &Node{Word: "resume", Help: "resume health checking", Run: runResumeBackend}
	setDisabled := &Node{Word: "disable", Help: "disable backend (stop probing, remove from rotation)", Run: runDisableBackend}
	setEnabled := &Node{Word: "enable", Help: "enable backend (resume probing)", Run: runEnableBackend}
	setBackendName := &Node{
		Word:     "<name>",
		Help:     "backend name",
		Dynamic:  dynBackends,
		Children: []*Node{setPause, setResume, setDisabled, setEnabled},
	}
	setBackend := &Node{
		Word:     "backend",
		Help:     "modify a backend",
		Children: []*Node{setBackendName},
	}
	// set frontend <name> pool <pool> backend <name> weight <0-100>
	setWeightValue := &Node{
		Word:    "<weight>",
		Help:    "Set weight of a backend in a pool (0-100)",
		Dynamic: dynNone, // accepts any integer; no tab-completion candidates
		Run:     runSetFrontendPoolBackendWeight,
	}
	setFrontendPoolBackendWeight := &Node{Word: "weight", Help: "set backend weight in pool", Children: []*Node{setWeightValue}}
	setFrontendPoolBackendName := &Node{
		Word:     "<backend>",
		Help:     "backend name",
		Dynamic:  dynBackends,
		Children: []*Node{setFrontendPoolBackendWeight},
	}
	setFrontendPoolBackend := &Node{Word: "backend", Help: "select a backend", Children: []*Node{setFrontendPoolBackendName}}
	setFrontendPoolName := &Node{
		Word:     "<pool>",
		Help:     "pool name",
		Dynamic:  dynNone, // pool names aren't listed via gRPC; accepts any input
		Children: []*Node{setFrontendPoolBackend},
	}
	setFrontendPool := &Node{Word: "pool", Help: "select a pool", Children: []*Node{setFrontendPoolName}}
	setFrontendName := &Node{
		Word:     "<name>",
		Help:     "frontend name",
		Dynamic:  dynFrontends,
		Children: []*Node{setFrontendPool},
	}
	setFrontend := &Node{
		Word:     "frontend",
		Help:     "modify a frontend",
		Children: []*Node{setFrontendName},
	}

	set.Children = []*Node{setBackend, setFrontend}

	// watch events [num <n>] [log [level <level>]] [backend] [frontend]
	//
	// All tokens after 'events' are captured as args via a self-referencing slot
	// node. This lets runWatchEvents parse the optional flags manually while still
	// providing tab-completion through the dynamic enumerator.
	var watchEventsOptSlot *Node
	watchEventsOptSlot = &Node{
		Word:    "<opt>",
		Help:    "Stream events with options",
		Dynamic: dynWatchEventOpts,
		Run:     runWatchEvents,
	}
	watchEventsOptSlot.Children = []*Node{watchEventsOptSlot}

	watchEvents := &Node{
		Word:     "events",
		Help:     "stream events (press any key or Ctrl-C to stop)",
		Run:      runWatchEvents,
		Children: []*Node{watchEventsOptSlot},
	}
	watch := &Node{
		Word:     "watch",
		Help:     "watch live event streams",
		Children: []*Node{watchEvents},
	}

	// config check / reload
	configCheck := &Node{Word: "check", Help: "Check configuration file", Run: runConfigCheck}
	configReload := &Node{Word: "reload", Help: "Check and reload configuration", Run: runConfigReload}
	configNode := &Node{
		Word:     "config",
		Help:     "configuration commands",
		Children: []*Node{configCheck, configReload},
	}

	// sync vpp lbstate [<name>]
	//
	// Without a name: run SyncLBStateAll (may remove stale VIPs).
	// With a name: run SyncLBStateVIP(name) for just that frontend (no removals).
	syncVPPLBStateName := &Node{
		Word:    "<name>",
		Help:    "Sync a single frontend's VIP to VPP",
		Dynamic: dynFrontends,
		Run:     runSyncVPPLBState,
	}
	syncVPPLBState := &Node{
		Word:     "lbstate",
		Help:     "Sync the VPP load-balancer dataplane from the running config",
		Run:      runSyncVPPLBState,
		Children: []*Node{syncVPPLBStateName},
	}
	syncVPP := &Node{
		Word:     "vpp",
		Help:     "VPP dataplane sync commands",
		Children: []*Node{syncVPPLBState},
	}
	syncNode := &Node{
		Word:     "sync",
		Help:     "Reconcile dataplane state from the running config",
		Children: []*Node{syncVPP},
	}

	root.Children = []*Node{show, set, watch, configNode, syncNode, quit, exit}
	return root
}

// ---- dynamic enumerators ---------------------------------------------------

func dynFrontends(ctx context.Context, client grpcapi.MaglevClient) []string {
	resp, err := client.ListFrontends(ctx, &grpcapi.ListFrontendsRequest{})
	if err != nil {
		return nil
	}
	return resp.FrontendNames
}

func dynBackends(ctx context.Context, client grpcapi.MaglevClient) []string {
	resp, err := client.ListBackends(ctx, &grpcapi.ListBackendsRequest{})
	if err != nil {
		return nil
	}
	return resp.BackendNames
}

func dynHealthChecks(ctx context.Context, client grpcapi.MaglevClient) []string {
	resp, err := client.ListHealthChecks(ctx, &grpcapi.ListHealthChecksRequest{})
	if err != nil {
		return nil
	}
	return resp.Names
}

// dynNone marks a slot node that accepts any input but provides no
// tab-completion candidates (e.g. a pool name or numeric weight value).
func dynNone(_ context.Context, _ grpcapi.MaglevClient) []string { return nil }

// ---- run functions ---------------------------------------------------------

func runShowVPPInfo(ctx context.Context, client grpcapi.MaglevClient, _ []string) error {
	ctx, cancel := context.WithTimeout(ctx, callTimeout)
	defer cancel()
	info, err := client.GetVPPInfo(ctx, &grpcapi.GetVPPInfoRequest{})
	if err != nil {
		return err
	}
	w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
	fmt.Fprintf(w, "%s\t%s\n", label("version"), info.Version)
	fmt.Fprintf(w, "%s\t%s\n", label("build-date"), info.BuildDate)
	fmt.Fprintf(w, "%s\t%s\n", label("build-dir"), info.BuildDirectory)
	fmt.Fprintf(w, "%s\t%d\n", label("vpp-pid"), info.Pid)
	if info.BoottimeNs > 0 {
		bootTime := time.Unix(0, info.BoottimeNs)
		fmt.Fprintf(w, "%s\t%s (%s)\n", label("vpp-boottime"),
			bootTime.Format("2006-01-02 15:04:05"),
			formatDuration(time.Since(bootTime)))
	}
	connTime := time.Unix(0, info.ConnecttimeNs)
	fmt.Fprintf(w, "%s\t%s (%s)\n", label("connected"),
		connTime.Format("2006-01-02 15:04:05"),
		formatDuration(time.Since(connTime)))
	return w.Flush()
}

func runShowVPPLBState(ctx context.Context, client grpcapi.MaglevClient, _ []string) error {
	ctx, cancel := context.WithTimeout(ctx, callTimeout)
	defer cancel()
	state, err := client.GetVPPLBState(ctx, &grpcapi.GetVPPLBStateRequest{})
	if err != nil {
		return err
	}

	// ---- global config ----
	w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
	fmt.Fprintf(w, "%s\n", label("global"))
	if state.Conf.Ip4SrcAddress != "" {
		fmt.Fprintf(w, " %s\t%s\n", label("ip4-src"), state.Conf.Ip4SrcAddress)
	}
	if state.Conf.Ip6SrcAddress != "" {
		fmt.Fprintf(w, " %s\t%s\n", label("ip6-src"), state.Conf.Ip6SrcAddress)
	}
	fmt.Fprintf(w, " %s\t%d\n", label("sticky-buckets-per-core"), state.Conf.StickyBucketsPerCore)
	fmt.Fprintf(w, " %s\t%ds\n", label("flow-timeout"), state.Conf.FlowTimeout)
	if err := w.Flush(); err != nil {
		return err
	}

	if len(state.Vips) == 0 {
		fmt.Println(label("vips") + " (none)")
		return nil
	}

	// ---- per-VIP details ----
	for _, v := range state.Vips {
		fmt.Println()
		w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
		fmt.Fprintf(w, "%s\t%s\n", label("vip"), v.Prefix)
		fmt.Fprintf(w, " %s\t%s\n", label("protocol"), protoString(v.Protocol))
		fmt.Fprintf(w, " %s\t%d\n", label("port"), v.Port)
		fmt.Fprintf(w, " %s\t%s\n", label("encap"), v.Encap)
		fmt.Fprintf(w, " %s\t%d\n", label("flow-table-length"), v.FlowTableLength)
		fmt.Fprintf(w, " %s\t%d\n", label("application-servers"), len(v.ApplicationServers))
		if err := w.Flush(); err != nil {
			return err
		}
		for _, a := range v.ApplicationServers {
			fmt.Printf(" %s %s %s %d %s %d\n",
				label("address"), a.Address,
				label("weight"), a.Weight,
				label("flow-table-buckets"), a.NumBuckets)
		}
	}
	return nil
}

// protoString renders an IP protocol number as a name (tcp, udp, any, or numeric).
func protoString(p uint32) string {
	switch p {
	case 6:
		return "tcp"
	case 17:
		return "udp"
	case 255:
		return "any"
	}
	return fmt.Sprintf("%d", p)
}

func runSyncVPPLBState(ctx context.Context, client grpcapi.MaglevClient, args []string) error {
	ctx, cancel := context.WithTimeout(ctx, callTimeout)
	defer cancel()
	req := &grpcapi.SyncVPPLBStateRequest{}
	if len(args) > 0 && args[0] != "" {
		name := args[0]
		req.FrontendName = &name
	}
	if _, err := client.SyncVPPLBState(ctx, req); err != nil {
		return err
	}
	if req.FrontendName != nil {
		fmt.Printf("synced frontend %q to VPP\n", *req.FrontendName)
	} else {
		fmt.Println("synced full LB state to VPP")
	}
	return nil
}

func runShowVersion(_ context.Context, _ grpcapi.MaglevClient, _ []string) error {
	fmt.Printf("maglevc %s (commit %s, built %s)\n",
		buildinfo.Version(), buildinfo.Commit(), buildinfo.Date())
	return nil
}

func runQuit(_ context.Context, _ grpcapi.MaglevClient, _ []string) error {
	return errQuit
}

func runShowFrontends(ctx context.Context, client grpcapi.MaglevClient, _ []string) error {
	ctx, cancel := context.WithTimeout(ctx, callTimeout)
	defer cancel()
	resp, err := client.ListFrontends(ctx, &grpcapi.ListFrontendsRequest{})
	if err != nil {
		return err
	}
	for _, name := range resp.FrontendNames {
		fmt.Println(name)
	}
	return nil
}

func runShowFrontend(ctx context.Context, client grpcapi.MaglevClient, args []string) error {
	if len(args) == 0 {
		return fmt.Errorf("usage: show frontend <name>")
	}
	ctx, cancel := context.WithTimeout(ctx, callTimeout)
	defer cancel()
	info, err := client.GetFrontend(ctx, &grpcapi.GetFrontendRequest{Name: args[0]})
	if err != nil {
		return err
	}

	w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
	fmt.Fprintf(w, "%s\t%s\n", label("name"), info.Name)
	fmt.Fprintf(w, "%s\t%s\n", label("address"), info.Address)
	fmt.Fprintf(w, "%s\t%s\n", label("protocol"), info.Protocol)
	fmt.Fprintf(w, "%s\t%d\n", label("port"), info.Port)
	if info.Description != "" {
		fmt.Fprintf(w, "%s\t%s\n", label("description"), info.Description)
	}
	if len(info.Pools) > 0 {
		fmt.Fprintf(w, "%s\n", label("pools"))
	}
	if err := w.Flush(); err != nil {
		return err
	}

	// Pool section uses direct Printf with fixed-width padding so that ANSI
	// escape codes in labels don't confuse tabwriter's byte-based alignment.
	// "backends" is always the widest pool label (8 chars); all pool labels
	// are right-padded to that width, giving a 2+8+2 = 12-char visual indent.
	const poolLblWidth = len("backends")
	const poolIndent = "  "
	const poolSep = "  "
	contIndent := strings.Repeat(" ", len(poolIndent)+poolLblWidth+len(poolSep))

	for _, pool := range info.Pools {
		namePad := strings.Repeat(" ", poolLblWidth-len("name"))
		fmt.Printf("%s%s%s%s%s\n", poolIndent, label("name"), namePad, poolSep, pool.Name)
		for i, pb := range pool.Backends {
			beInfo, beErr := client.GetBackend(ctx, &grpcapi.GetBackendRequest{Name: pb.Name})
			suffix := ""
			if beErr == nil && !beInfo.Enabled {
				suffix = " [disabled]"
			}
			weightStr := ""
			if pb.Weight != 100 {
				weightStr = fmt.Sprintf(" %s %d", label("weight"), pb.Weight)
			}
			if i == 0 {
				bePad := strings.Repeat(" ", poolLblWidth-len("backends"))
				fmt.Printf("%s%s%s%s%s%s%s\n", poolIndent, label("backends"), bePad, poolSep, pb.Name, weightStr, suffix)
			} else {
				fmt.Printf("%s%s%s%s\n", contIndent, pb.Name, weightStr, suffix)
			}
		}
	}
	return nil
}

func runShowBackends(ctx context.Context, client grpcapi.MaglevClient, _ []string) error {
	ctx, cancel := context.WithTimeout(ctx, callTimeout)
	defer cancel()
	resp, err := client.ListBackends(ctx, &grpcapi.ListBackendsRequest{})
	if err != nil {
		return err
	}
	for _, name := range resp.BackendNames {
		fmt.Println(name)
	}
	return nil
}

func runShowBackend(ctx context.Context, client grpcapi.MaglevClient, args []string) error {
	if len(args) == 0 {
		return fmt.Errorf("usage: show backend <name>")
	}
	ctx, cancel := context.WithTimeout(ctx, callTimeout)
	defer cancel()
	info, err := client.GetBackend(ctx, &grpcapi.GetBackendRequest{Name: args[0]})
	if err != nil {
		return err
	}
	w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
	fmt.Fprintf(w, "%s\t%s\n", label("name"), info.Name)
	fmt.Fprintf(w, "%s\t%s\n", label("address"), info.Address)
	stateDur := ""
	if len(info.Transitions) > 0 {
		since := time.Since(time.Unix(0, info.Transitions[0].AtUnixNs))
		stateDur = " for " + formatDuration(since)
	}
	fmt.Fprintf(w, "%s\t%s%s\n", label("state"), info.State, stateDur)
	fmt.Fprintf(w, "%s\t%v\n", label("enabled"), info.Enabled)
	fmt.Fprintf(w, "%s\t%s\n", label("healthcheck"), info.Healthcheck)
	for i, t := range info.Transitions {
		ts := time.Unix(0, t.AtUnixNs)
		var lbl string
		if i == 0 {
			lbl = label("transitions")
		} else {
			// Pad to same visible width as "transitions" and wrap through
			// label() so tabwriter sees the same byte count (ANSI overhead
			// is identical on every row, keeping columns aligned).
			lbl = label("           ")
		}
		fmt.Fprintf(w, "%s\t%s → %s\t%s\t%s\n",
			lbl,
			t.From, t.To,
			ts.Format("2006-01-02 15:04:05.000"),
			formatAgo(time.Since(ts)),
		)
	}
	return w.Flush()
}

func runShowHealthChecks(ctx context.Context, client grpcapi.MaglevClient, _ []string) error {
	ctx, cancel := context.WithTimeout(ctx, callTimeout)
	defer cancel()
	resp, err := client.ListHealthChecks(ctx, &grpcapi.ListHealthChecksRequest{})
	if err != nil {
		return err
	}
	for _, name := range resp.Names {
		fmt.Println(name)
	}
	return nil
}

func runShowHealthCheck(ctx context.Context, client grpcapi.MaglevClient, args []string) error {
	if len(args) == 0 {
		return fmt.Errorf("usage: show healthcheck <name>")
	}
	ctx, cancel := context.WithTimeout(ctx, callTimeout)
	defer cancel()
	info, err := client.GetHealthCheck(ctx, &grpcapi.GetHealthCheckRequest{Name: args[0]})
	if err != nil {
		return err
	}
	w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
	fmt.Fprintf(w, "%s\t%s\n", label("name"), info.Name)
	fmt.Fprintf(w, "%s\t%s\n", label("type"), info.Type)
	if info.Port > 0 {
		fmt.Fprintf(w, "%s\t%d\n", label("port"), info.Port)
	}
	fmt.Fprintf(w, "%s\t%s\n", label("interval"), time.Duration(info.IntervalNs))
	if info.FastIntervalNs > 0 {
		fmt.Fprintf(w, "%s\t%s\n", label("fast-interval"), time.Duration(info.FastIntervalNs))
	}
	if info.DownIntervalNs > 0 {
		fmt.Fprintf(w, "%s\t%s\n", label("down-interval"), time.Duration(info.DownIntervalNs))
	}
	fmt.Fprintf(w, "%s\t%s\n", label("timeout"), time.Duration(info.TimeoutNs))
	fmt.Fprintf(w, "%s\t%d\n", label("rise"), info.Rise)
	fmt.Fprintf(w, "%s\t%d\n", label("fall"), info.Fall)
	if info.ProbeIpv4Src != "" {
		fmt.Fprintf(w, "%s\t%s\n", label("probe-ipv4-src"), info.ProbeIpv4Src)
	}
	if info.ProbeIpv6Src != "" {
		fmt.Fprintf(w, "%s\t%s\n", label("probe-ipv6-src"), info.ProbeIpv6Src)
	}
	if h := info.Http; h != nil {
		fmt.Fprintf(w, "%s\t%s\n", label("http.path"), h.Path)
		if h.Host != "" {
			fmt.Fprintf(w, "%s\t%s\n", label("http.host"), h.Host)
		}
		fmt.Fprintf(w, "%s\t%d-%d\n", label("http.response-code"), h.ResponseCodeMin, h.ResponseCodeMax)
		if h.ResponseRegexp != "" {
			fmt.Fprintf(w, "%s\t%s\n", label("http.response-regexp"), h.ResponseRegexp)
		}
	}
	if t := info.Tcp; t != nil {
		fmt.Fprintf(w, "%s\t%v\n", label("tcp.ssl"), t.Ssl)
		if t.ServerName != "" {
			fmt.Fprintf(w, "%s\t%s\n", label("tcp.server-name"), t.ServerName)
		}
	}
	return w.Flush()
}

func runPauseBackend(ctx context.Context, client grpcapi.MaglevClient, args []string) error {
	if len(args) == 0 {
		return fmt.Errorf("usage: set backend <name> pause")
	}
	ctx, cancel := context.WithTimeout(ctx, callTimeout)
	defer cancel()
	info, err := client.PauseBackend(ctx, &grpcapi.BackendRequest{Name: args[0]})
	if err != nil {
		return err
	}
	fmt.Printf("%s: setting state to '%s'\n", info.Name, info.State)
	return nil
}

func runResumeBackend(ctx context.Context, client grpcapi.MaglevClient, args []string) error {
	if len(args) == 0 {
		return fmt.Errorf("usage: set backend <name> resume")
	}
	ctx, cancel := context.WithTimeout(ctx, callTimeout)
	defer cancel()
	info, err := client.ResumeBackend(ctx, &grpcapi.BackendRequest{Name: args[0]})
	if err != nil {
		return err
	}
	fmt.Printf("%s: setting state to '%s'\n", info.Name, info.State)
	return nil
}

func runSetFrontendPoolBackendWeight(ctx context.Context, client grpcapi.MaglevClient, args []string) error {
	if len(args) != 4 {
		return fmt.Errorf("usage: set frontend <name> pool <pool> backend <name> weight <0-100>")
	}
	frontendName, poolName, backendName, weightStr := args[0], args[1], args[2], args[3]
	weight, err := strconv.Atoi(weightStr)
	if err != nil || weight < 0 || weight > 100 {
		return fmt.Errorf("weight: expected integer 0-100, got %q", weightStr)
	}
	ctx, cancel := context.WithTimeout(ctx, callTimeout)
	defer cancel()
	info, err := client.SetFrontendPoolBackendWeight(ctx, &grpcapi.SetWeightRequest{
		Frontend: frontendName,
		Pool:     poolName,
		Backend:  backendName,
		Weight:   int32(weight),
	})
	if err != nil {
		return err
	}
	// Print the updated pool so the user can confirm the new weight.
	for _, pool := range info.Pools {
		if pool.Name != poolName {
			continue
		}
		for _, pb := range pool.Backends {
			if pb.Name == backendName {
				fmt.Printf("%s pool %s backend %s: weight set to %d\n", info.Name, pool.Name, pb.Name, pb.Weight)
				return nil
			}
		}
	}
	return nil
}

func runEnableBackend(ctx context.Context, client grpcapi.MaglevClient, args []string) error {
	if len(args) == 0 {
		return fmt.Errorf("usage: set backend <name> enable")
	}
	ctx, cancel := context.WithTimeout(ctx, callTimeout)
	defer cancel()
	info, err := client.EnableBackend(ctx, &grpcapi.BackendRequest{Name: args[0]})
	if err != nil {
		return err
	}
	fmt.Printf("%s: enabled, state is '%s'\n", info.Name, info.State)
	return nil
}

func runDisableBackend(ctx context.Context, client grpcapi.MaglevClient, args []string) error {
	if len(args) == 0 {
		return fmt.Errorf("usage: set backend <name> disable")
	}
	ctx, cancel := context.WithTimeout(ctx, callTimeout)
	defer cancel()
	info, err := client.DisableBackend(ctx, &grpcapi.BackendRequest{Name: args[0]})
	if err != nil {
		return err
	}
	fmt.Printf("%s: disabled, state is '%s'\n", info.Name, info.State)
	return nil
}

func runConfigCheck(ctx context.Context, client grpcapi.MaglevClient, _ []string) error {
	ctx, cancel := context.WithTimeout(ctx, callTimeout)
	defer cancel()
	resp, err := client.CheckConfig(ctx, &grpcapi.CheckConfigRequest{})
	if err != nil {
		return err
	}
	if resp.Ok {
		fmt.Println("config ok")
		return nil
	}
	if resp.ParseError != "" {
		return fmt.Errorf("parse error: %s", resp.ParseError)
	}
	return fmt.Errorf("semantic error: %s", resp.SemanticError)
}

func runConfigReload(ctx context.Context, client grpcapi.MaglevClient, _ []string) error {
	ctx, cancel := context.WithTimeout(ctx, callTimeout)
	defer cancel()
	resp, err := client.ReloadConfig(ctx, &grpcapi.ReloadConfigRequest{})
	if err != nil {
		return err
	}
	if resp.Ok {
		fmt.Println("config reloaded")
		return nil
	}
	if resp.ParseError != "" {
		return fmt.Errorf("parse error: %s", resp.ParseError)
	}
	if resp.SemanticError != "" {
		return fmt.Errorf("semantic error: %s", resp.SemanticError)
	}
	return fmt.Errorf("reload error: %s", resp.ReloadError)
}

// formatDuration formats a duration as Xd Xh Xm Xs without milliseconds.
func formatDuration(d time.Duration) string {
	if d < 0 {
		d = 0
	}
	d = d.Truncate(time.Second)

	days := int(d.Hours()) / 24
	d -= time.Duration(days) * 24 * time.Hour
	hours := int(d.Hours())
	d -= time.Duration(hours) * time.Hour
	minutes := int(d.Minutes())
	d -= time.Duration(minutes) * time.Minute
	seconds := int(d.Seconds())

	var b strings.Builder
	if days > 0 {
		fmt.Fprintf(&b, "%dd", days)
	}
	if hours > 0 {
		fmt.Fprintf(&b, "%dh", hours)
	}
	if minutes > 0 {
		fmt.Fprintf(&b, "%dm", minutes)
	}
	if seconds > 0 || b.Len() == 0 {
		fmt.Fprintf(&b, "%ds", seconds)
	}
	return b.String()
}

func formatAgo(d time.Duration) string {
	return formatDuration(d) + " ago"
}