This commit wires maglevd through to VPP's LB plugin end-to-end, using
locally-generated GoVPP bindings for the newer v2 API messages.
VPP binapi (vendored)
- New package internal/vpp/binapi/ containing lb, lb_types, ip_types, and
interface_types, generated from a local VPP build (~/src/vpp) via a new
'make vpp-binapi' target. GoVPP v0.12.0 upstream lacks the v2 messages we
need (lb_conf_get, lb_add_del_vip_v2, lb_add_del_as_v2, lb_as_v2_dump,
lb_as_set_weight), so we commit the generated output in-tree.
- All generated files go through our loggedChannel wrapper; every VPP API
send/receive is recorded at DEBUG via slog (vpp-api-send / vpp-api-recv /
vpp-api-send-multi / vpp-api-recv-multi) so the full wire-level trail is
auditable. NewAPIChannel is unexported — callers must use c.apiChannel().
Read path: GetLBState{All,VIP}
- GetLBStateAll returns a full snapshot (global conf + every VIP with its
attached application servers).
- GetLBStateVIP looks up a single VIP by (prefix, protocol, port) and
returns (nil, nil) when the VIP doesn't exist in VPP. This is the
efficient path for targeted updates on a busy LB.
- Helpers factored out: getLBConf, dumpAllVIPs, dumpASesForVIP, lookupVIP,
vipFromDetails.
Write path: SyncLBState{All,VIP}
- SyncLBStateAll reconciles every configured frontend with VPP: creates
missing VIPs, removes stale ones (with AS flush), and reconciles AS
membership and weights within VIPs that exist on both sides.
- SyncLBStateVIP targets a single frontend by name. Never removes VIPs.
Returns ErrFrontendNotFound (wrapped with the name) when the frontend
isn't in config, so callers can use errors.Is.
- Shared reconcileVIP helper does the per-VIP AS diff; removeVIP is used
only by the full-sync pass.
- LbAddDelVipV2 requests always set NewFlowsTableLength=1024. The .api
default=1024 annotation is only applied by VAT/CLI parsers, not wire-
level marshalling — sending 0 caused VPP to vec_validate with mask
0xFFFFFFFF and OOM-panic.
- Pool semantics: backends in the primary (first) pool of a frontend get
their configured weight; backends in secondary pools get weight 0. All
backends are installed so higher layers can flip weights on failover
without add/remove churn.
- Every individual change emits a DEBUG slog (vpp-lbsync-vip-add/del,
vpp-lbsync-as-add/del, vpp-lbsync-as-weight). Start/done INFO logs
carry a scope=all|vip label plus aggregate counts.
Global conf push: SetLBConf
- New SetLBConf(cfg) sends lb_conf with ipv4-src, ipv6-src, sticky-buckets,
and flow-timeout. Called automatically on VPP (re)connect and after
every config reload (via doReloadConfig). Results are cached on the
Client so redundant pushes are silently skipped — only actual changes
produce a vpp-lb-conf-set INFO log line.
Periodic drift reconciliation
- vpp.Client.lbSyncLoop runs in a goroutine tied to each VPP connection's
lifetime. Its first tick is immediate (startup and post-reconnect
sync quickly); subsequent ticks fire every vpp.lb.sync-interval from
config (default 30s). Purpose: catch drift if something/someone
modifies VPP state by hand. The loop uses a ConfigSource interface
(satisfied by checker.Checker via its new Config() accessor) to avoid
an import cycle with the checker package.
Config schema additions (maglev.vpp.lb)
- sync-interval: positive Go duration, default 30s.
- ipv4-src-address: REQUIRED. Used as the outer source for GRE4 encap
to application servers. Missing this is a hard semantic error —
maglevd --check exits 2 and the daemon refuses to start. VPP GRE
needs a source address and every VIP we program uses GRE, so there
is no meaningful config without it.
- ipv6-src-address: REQUIRED. Same treatment as ipv4-src-address.
- sticky-buckets-per-core: default 65536, must be a power of 2.
- flow-timeout: default 40s, must be a whole number of seconds in [1s, 120s].
- VPP validation runs at the end of convert() so structural errors in
healthchecks/backends/frontends surface first — operators fix those,
then get the VPP-specific requirements.
gRPC API
- New GetVPPLBState RPC returning VPPLBState: global conf + VIPs with
ASes. Mirrors the read-path but strips fields irrelevant to our
GRE-only deployment (srv_type, dscp, target_port).
- New SyncVPPLBState RPC with optional frontend_name. Unset → full sync
(may remove stale VIPs). Set → single-VIP sync (never removes).
Returns codes.NotFound for unknown frontends, codes.Unavailable when
VPP integration is disabled or disconnected.
maglevc (CLI)
- New 'show vpp lbstate' command displaying the LB plugin state. Fields
irrelevant to the GRE-only dataplane are suppressed. Per-AS lines use
a key-value format ("address X weight Y flow-table-buckets Z")
instead of a tabwriter column, which avoids the ANSI-color alignment
issue we hit with mixed label/data rows.
- New 'sync vpp lbstate [<name>]' command. Without a name, triggers a
full reconciliation; with a name, targets one frontend.
- Previous 'show vpp lb' renamed to 'show vpp lbstate' for consistency
with the new sync command.
Test fixtures
- validConfig and all ad-hoc config_test.go fixtures that reach the end
of convert() now include the two required vpp.lb src addresses.
- tests/01-maglevd/maglevd-lab/maglev.yaml gains a vpp.lb section so the
robot integration tests can still load the config.
- cmd/maglevc/tree_test.go gains expected paths for the new commands.
Docs
- config-guide.md: new 'vpp' section in the basic structure, detailed
vpp.lb field reference, noting ipv4/ipv6 src addresses as REQUIRED
(hard error) with no defaults; example config updated.
- user-guide.md: documented 'show vpp info', 'show vpp lbstate',
'sync vpp lbstate [<name>]', new --vpp-api-addr and --vpp-stats-addr
flags, the vpp-lb-conf-set log line, and corrected the pause/resume
description to reflect that pause cancels the probe goroutine.
- debian/maglev.yaml: example config gains a vpp.lb block with src
addresses and commented optional overrides.
// Copyright (c) 2026, Pim van Pelt <pim@ipng.ch>

package vpp

import (
	"fmt"
	"net"
	"time"

	lb "git.ipng.ch/ipng/vpp-maglev/internal/vpp/binapi/lb"
	lb_types "git.ipng.ch/ipng/vpp-maglev/internal/vpp/binapi/lb_types"
)

// LBConf mirrors VPP's lb_conf_get_reply: global LB plugin settings.
type LBConf struct {
	IP4SrcAddress        net.IP
	IP6SrcAddress        net.IP
	StickyBucketsPerCore uint32
	FlowTimeout          uint32
}

// LBVIP mirrors VPP's lb_vip_details plus the set of application servers
// attached to this VIP (from lb_as_v2_details).
type LBVIP struct {
	Prefix          *net.IPNet // VIP address + prefix length
	Protocol        uint8      // IP proto (6=TCP, 17=UDP, 255=any)
	Port            uint16     // 0 = all-port VIP
	Encap           string     // gre4|gre6|l3dsr|nat4|nat6
	SrvType         string     // clusterip|nodeport
	Dscp            uint8
	TargetPort      uint16
	FlowTableLength uint16
	ASes            []LBAS
}

// LBAS mirrors VPP's lb_as_v2_details: one application server bound to a VIP.
type LBAS struct {
	Address    net.IP
	Weight     uint8
	Flags      uint8 // bit 0 = used (alive), bit 1 = flushed
	NumBuckets uint32
	InUseSince time.Time // from VPP seconds-since-epoch (0 = never)
}

// LBState is a snapshot of the VPP LB plugin state.
type LBState struct {
	Conf LBConf
	VIPs []LBVIP
}

// GetLBStateAll fetches a full snapshot of the LB plugin state (global config
// plus every VIP and its application servers).
// Returns an error if VPP is not connected.
func (c *Client) GetLBStateAll() (*LBState, error) {
	ch, err := c.apiChannel()
	if err != nil {
		return nil, err
	}
	defer ch.Close()

	state := &LBState{}

	conf, err := getLBConf(ch)
	if err != nil {
		return nil, err
	}
	state.Conf = conf

	vips, err := dumpAllVIPs(ch)
	if err != nil {
		return nil, err
	}
	for i := range vips {
		ases, err := dumpASesForVIP(ch, vips[i].Protocol, vips[i].Port)
		if err != nil {
			return nil, err
		}
		vips[i].ASes = ases
	}
	state.VIPs = vips
	return state, nil
}

// GetLBStateVIP fetches a single VIP from VPP. Returns (nil, nil) if the VIP
// does not exist in VPP (caller must treat absence as "needs to be added").
// Returns an error only on transport/VPP failures.
func (c *Client) GetLBStateVIP(prefix *net.IPNet, protocol uint8, port uint16) (*LBVIP, error) {
	ch, err := c.apiChannel()
	if err != nil {
		return nil, err
	}
	defer ch.Close()
	return lookupVIP(ch, prefix, protocol, port)
}

// ---- low-level helpers (used by both Get and Sync paths) -------------------

func getLBConf(ch *loggedChannel) (LBConf, error) {
	reply := &lb.LbConfGetReply{}
	if err := ch.SendRequest(&lb.LbConfGet{}).ReceiveReply(reply); err != nil {
		return LBConf{}, fmt.Errorf("lb_conf_get: %w", err)
	}
	return LBConf{
		IP4SrcAddress:        ip4ToNetIP(reply.IP4SrcAddress),
		IP6SrcAddress:        ip6ToNetIP(reply.IP6SrcAddress),
		StickyBucketsPerCore: reply.StickyBucketsPerCore,
		FlowTimeout:          reply.FlowTimeout,
	}, nil
}

// dumpAllVIPs returns every VIP known to VPP (metadata only — ASes not populated).
func dumpAllVIPs(ch *loggedChannel) ([]LBVIP, error) {
	reqCtx := ch.SendMultiRequest(&lb.LbVipDump{})
	var out []LBVIP
	for {
		reply := &lb.LbVipDetails{}
		stop, err := reqCtx.ReceiveReply(reply)
		if err != nil {
			return nil, fmt.Errorf("lb_vip_dump: %w", err)
		}
		if stop {
			break
		}
		out = append(out, vipFromDetails(reply))
	}
	return out, nil
}

// lookupVIP finds a single VIP by (prefix, protocol, port) and returns it
// populated with its application servers, or nil if the VIP does not exist.
func lookupVIP(ch *loggedChannel, prefix *net.IPNet, protocol uint8, port uint16) (*LBVIP, error) {
	all, err := dumpAllVIPs(ch)
	if err != nil {
		return nil, err
	}
	want := prefix.String()
	for i := range all {
		if all[i].Prefix.String() != want {
			continue
		}
		if all[i].Protocol != protocol || all[i].Port != port {
			continue
		}
		ases, err := dumpASesForVIP(ch, protocol, port)
		if err != nil {
			return nil, err
		}
		all[i].ASes = ases
		return &all[i], nil
	}
	return nil, nil
}

// dumpASesForVIP returns the application servers bound to the VIP identified
// by (protocol, port). VPP's lb_as_v2_dump filter is used; we also guard
// defensively against replies for other VIPs.
func dumpASesForVIP(ch *loggedChannel, protocol uint8, port uint16) ([]LBAS, error) {
	req := &lb.LbAsV2Dump{
		Protocol: protocol,
		Port:     port,
	}
	reqCtx := ch.SendMultiRequest(req)
	var out []LBAS
	for {
		reply := &lb.LbAsV2Details{}
		stop, err := reqCtx.ReceiveReply(reply)
		if err != nil {
			return nil, fmt.Errorf("lb_as_v2_dump: %w", err)
		}
		if stop {
			break
		}
		if reply.Vip.Port != port || uint8(reply.Vip.Protocol) != protocol {
			continue
		}
		var inUse time.Time
		if reply.InUseSince != 0 {
			inUse = time.Unix(int64(reply.InUseSince), 0)
		}
		out = append(out, LBAS{
			Address:    reply.AppSrv.ToIP(),
			Weight:     reply.Weight,
			Flags:      reply.Flags,
			NumBuckets: reply.NumBuckets,
			InUseSince: inUse,
		})
	}
	return out, nil
}

// vipFromDetails builds an LBVIP (without ASes) from a VPP lb_vip_details reply.
func vipFromDetails(reply *lb.LbVipDetails) LBVIP {
	return LBVIP{
		Prefix:          lbVipPrefix(reply.Vip),
		Protocol:        uint8(reply.Vip.Protocol),
		Port:            reply.Vip.Port,
		Encap:           encapString(reply.Encap),
		SrvType:         srvTypeString(reply.SrvType),
		Dscp:            uint8(reply.Dscp),
		TargetPort:      reply.TargetPort,
		FlowTableLength: reply.FlowTableLength,
	}
}

// lbVipPrefix converts a VPP lb_vip's address+prefix to a *net.IPNet.
func lbVipPrefix(v lb_types.LbVip) *net.IPNet {
	ip := v.Pfx.Address.ToIP()
	bits := 32
	if ip.To4() == nil {
		bits = 128
	}
	return &net.IPNet{
		IP:   ip,
		Mask: net.CIDRMask(int(v.Pfx.Len), bits),
	}
}

func ip4ToNetIP(a [4]byte) net.IP {
	// VPP reports 255.255.255.255 when no IPv4 src is configured.
	if a == [4]byte{0xff, 0xff, 0xff, 0xff} {
		return nil
	}
	return net.IPv4(a[0], a[1], a[2], a[3]).To4()
}

func ip6ToNetIP(a [16]byte) net.IP {
	// VPP reports all-ones when no IPv6 src is configured.
	allOnes := true
	for _, b := range a {
		if b != 0xff {
			allOnes = false
			break
		}
	}
	if allOnes {
		return nil
	}
	ip := make(net.IP, 16)
	copy(ip, a[:])
	return ip
}

func encapString(e lb_types.LbEncapType) string {
	switch e {
	case lb_types.LB_API_ENCAP_TYPE_GRE4:
		return "gre4"
	case lb_types.LB_API_ENCAP_TYPE_GRE6:
		return "gre6"
	case lb_types.LB_API_ENCAP_TYPE_L3DSR:
		return "l3dsr"
	case lb_types.LB_API_ENCAP_TYPE_NAT4:
		return "nat4"
	case lb_types.LB_API_ENCAP_TYPE_NAT6:
		return "nat6"
	}
	return fmt.Sprintf("unknown(%d)", e)
}

func srvTypeString(t lb_types.LbSrvType) string {
	switch t {
	case lb_types.LB_API_SRV_TYPE_CLUSTERIP:
		return "clusterip"
	case lb_types.LB_API_SRV_TYPE_NODEPORT:
		return "nodeport"
	}
	return fmt.Sprintf("unknown(%d)", t)
}