This commit wires the checker's state machine through to the VPP dataplane:
every backend state transition flows through a single code path that
recomputes the effective per-backend weight (with pool failover) and pushes
the result to VPP. Along the way several latent bugs in the state machine
and the sync path were fixed.
internal/vpp/reconciler.go (new)
- New Reconciler type subscribes to checker.Checker events and, on every
transition, calls Client.SyncLBStateVIP for the affected frontend. This
is the ONLY place in the codebase where backend state changes cause VPP
calls — the "single path" discipline requested during design.
- Defines an EventSource interface (checker.Checker satisfies it) so the
dependency direction stays vpp → checker; the checker never imports vpp.
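
  A minimal sketch of the shape (not the literal code: the Subscribe
  method name, the transition payload, and SyncLBStateVIP's exact
  signature are assumptions here; only Reconciler, EventSource, and the
  single-path rule come from this change):

    package vpp

    import "context"

    // EventSource is the minimal view of checker.Checker the reconciler
    // needs, keeping the dependency direction vpp -> checker.
    type EventSource interface {
        Subscribe() <-chan BackendTransition // assumed method name
    }

    // BackendTransition is an assumed payload: which backend changed
    // state and which frontends reference it.
    type BackendTransition struct {
        Backend   string
        Frontends []string
    }

    type Reconciler struct {
        client *Client
        src    EventSource
    }

    func NewReconciler(c *Client, src EventSource) *Reconciler {
        return &Reconciler{client: c, src: src}
    }

    // Run is the single path from backend state changes to VPP calls.
    func (r *Reconciler) Run(ctx context.Context) {
        events := r.src.Subscribe()
        for {
            select {
            case <-ctx.Done():
                return
            case ev := <-events:
                for _, fe := range ev.Frontends {
                    _ = r.client.SyncLBStateVIP(fe) // signature assumed
                }
            }
        }
    }
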
internal/vpp/client.go
- Renamed ConfigSource → StateSource. The interface now has two methods:
Config() and BackendState(name) — the reconciler and the desired-state
builder both need live health state to compute effective weights.
- SetConfigSource → SetStateSource; internal cfgSrc field → stateSrc.
- New getStateSource() helper for internal locked access.
- lbSyncLoop still uses the state source for its periodic drift
reconciliation; it's fully idempotent and runs the same code path as
event-driven syncs.
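
  For reference, the renamed interface and accessors look roughly like
  this (the Config return type and the mutex field name are assumptions;
  BackendState mirrors the checker method added in this commit):

    type StateSource interface {
        Config() *config.Config                        // return type assumed
        BackendState(name string) (health.State, bool) // added in this commit
    }

    func (c *Client) SetStateSource(src StateSource) {
        c.mu.Lock()
        defer c.mu.Unlock()
        c.stateSrc = src
    }

    // getStateSource is the internal locked accessor used by lbSyncLoop
    // and the event-driven sync path.
    func (c *Client) getStateSource() StateSource {
        c.mu.Lock()
        defer c.mu.Unlock()
        return c.stateSrc
    }
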
internal/vpp/lbsync.go
- desiredAS grows a Flush bool so the mapping function can signal "on
transition to weight 0, flush existing flow-table entries".
- asFromBackend is now the single source of truth for the state →
(weight, flush) rule. Documented with a full truth table. Takes an
activePool parameter so it can distinguish "up in active pool" from
"up but standby".
- activePoolIndex(fe, states) implements priority failover: returns the
  index of the first pool containing any StateUp backend. pool[0] wins
  when at least one member is up; pool[1] takes over when pool[0] has no
  up member; and so on. Defaults to 0 (unobservable, since all backends
  map to weight 0 when nothing is up). See the sketch after this file's
  notes.
- desiredFromFrontend snapshots backend states once, computes activePool,
then walks every backend through asFromBackend. No more filtering on
b.Enabled — disabled backends stay in the desired set so they keep
their AS entry in VPP with weight=0. The previous filter caused delAS
on disable, which destroyed the entry and broke enable afterwards.
- EffectiveWeights(fe, src) exported helper that returns the per-pool
per-backend weight map for one frontend. Used by the gRPC GetFrontend
handler and robot tests to observe failover without touching VPP.
- reconcileVIP computes flush at the weight-change call site:
flush = desired.Flush && cur.Weight > 0 && desired.Weight == 0
This ensures only the *transition* to disabled flushes sessions —
steady-state syncs with already-zero weight skip the call entirely.
- setASWeight now plumbs IsFlush into lb_as_set_weight.
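
  A sketch of the two helpers described above, assuming config field
  names like fe.Pools, pool.Backends, and b.Weight; the asFromBackend
  rows other than "up in active pool" and "disabled" are illustrative,
  not the committed truth table:

    // activePoolIndex returns the index of the first pool with at least
    // one StateUp backend; pools earlier in the list always win.
    // Defaults to 0, which is unobservable because nothing maps to a
    // non-zero weight in that case.
    func activePoolIndex(fe *config.Frontend, states map[string]health.State) int {
        for i, pool := range fe.Pools {
            for _, b := range pool.Backends {
                if states[b.Name] == health.StateUp {
                    return i
                }
            }
        }
        return 0
    }

    // asFromBackend maps one backend to its desired AS entry.
    func asFromBackend(b *config.Backend, st health.State, pool, active int) desiredAS {
        as := desiredAS{Address: b.Address}
        switch {
        case st == health.StateUp && pool == active:
            as.Weight = b.Weight // serving: configured weight
        case st == health.StateDisabled:
            as.Flush = true // drain the flow table on the disable transition
        default:
            // standby, down, paused, unknown: keep the entry at weight 0
            // (whether these rows also set Flush is an assumption here).
        }
        return as
    }
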
internal/vpp/lbsync_test.go (new)
- TestAsFromBackend: 15 cases locking down the truth table, including
failover scenarios (up in standby pool, up promoted in pool[1]).
- TestActivePoolIndex: 8 cases covering pool[0]-has-up, pool[0]-all-down,
all-disabled, all-paused, all-unknown, nothing-up-anywhere, and
three-tier failover.
- TestDesiredFromFrontendFailover: 5 end-to-end scenarios wiring a fake
StateSource through desiredFromFrontend and asserting the final
per-IP weight map. Exercises the complete pipeline without VPP.
internal/checker/checker.go
- Added BackendState(name) (health.State, bool) — one-line method that
satisfies vpp.StateSource. The checker is otherwise unchanged.
- EnableBackend rewritten to reuse the existing worker (parallel to
  ResumeBackend). The old code called startWorker, which constructed a
  brand-new Backend via health.New, throwing away the transition
  history; the resulting 'backend-transition' log showed a bogus
  from=unknown,to=unknown. Now uses w.backend.Enable() to record a
  proper disabled→unknown transition and launches a fresh goroutine.
- Static (no-healthcheck) backends now fire their synthetic 'always up'
pass on the first iteration of runProbe instead of sleeping 30s
first. Previously static backends sat in StateUnknown for 30s after
startup — useless for deterministic testing and surprising for
operators. The fix is a simple first-iteration flag.
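
  The first-iteration flag, roughly (worker fields and the probe/record
  helpers are assumed names; only the "probe immediately on iteration
  one" behaviour is from this change):

    func (w *worker) runProbe(ctx context.Context) {
        first := true
        for {
            if !first {
                // Sleep only between iterations, never before the first
                // one, so static backends resolve immediately at startup.
                select {
                case <-ctx.Done():
                    return
                case <-time.After(w.interval):
                }
            }
            first = false
            if w.check == nil {
                w.recordResult(true) // synthetic 'always up' pass (assumed helper)
                continue
            }
            w.recordResult(w.probe(ctx))
        }
    }
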
internal/health/state.go
- New Enable(maxHistory) method parallel to Disable. Transitions the
backend from whatever state it's in (typically StateDisabled) to
StateUnknown, resets the health counter to rise-1 so the expedited
resolution kicks in on the first probe result, and emits a transition
with code 'enabled'.
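
  Roughly (field names and the transition recorder are assumptions; the
  behaviour is exactly the three steps above):

    func (b *Backend) Enable(maxHistory int) {
        b.mu.Lock()
        defer b.mu.Unlock()

        from := b.state
        b.state = StateUnknown
        // Start one pass short of 'rise' so the very first successful
        // probe resolves the backend (the expedited-resolution path).
        b.healthy = b.rise - 1
        b.recordTransition(from, StateUnknown, "enabled", maxHistory)
    }
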
proto/maglev.proto
- PoolBackendInfo gains effective_weight: the state-aware weight that
would be programmed into VPP (distinct from the configured weight in
the YAML). Exposed via GetFrontend.
internal/grpcapi/server.go
- frontendToProto takes a vpp.StateSource, computes effective weights
via vpp.EffectiveWeights, and populates PoolBackendInfo.EffectiveWeight.
- GetFrontend and SetFrontendPoolBackendWeight updated to pass the
checker in.
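
  Sketched, assuming EffectiveWeights returns a pool-name → backend-name
  → weight map and the generated proto types are imported as pb (neither
  shape is pinned down by this commit):

    func frontendToProto(fe *config.Frontend, src vpp.StateSource) *pb.FrontendInfo {
        eff := vpp.EffectiveWeights(fe, src)
        out := &pb.FrontendInfo{Name: fe.Name, Address: fe.Address,
            Protocol: fe.Protocol, Port: uint32(fe.Port)}
        for _, pool := range fe.Pools {
            pi := &pb.PoolInfo{Name: pool.Name}
            for _, b := range pool.Backends {
                pi.Backends = append(pi.Backends, &pb.PoolBackendInfo{
                    Name:            b.Name,
                    Weight:          int32(b.Weight),        // configured (YAML)
                    EffectiveWeight: eff[pool.Name][b.Name],  // state-aware
                })
            }
            out.Pools = append(out.Pools, pi)
        }
        return out
    }
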
cmd/maglevc/commands.go
- 'show frontends <name>' now renders every pool backend row as
<name> weight <cfg> effective <eff> [disabled]?
so both values are always visible. The VPP-style key/value format
avoids the ANSI-alignment pitfall we hit earlier and makes the output
regex-parseable for robot tests.
cmd/maglevd/main.go
- Construct and start the Reconciler alongside the VPP client. Two
extra lines, no other changes to startup.
tests/01-maglevd/maglevd-lab/maglev.yaml
- Two new static backends (static-primary, static-fallback) and a new
failover-vip frontend with one backend per pool. No healthcheck, so
the state machine resolves them to 'up' immediately via the synthetic
pass. Used by the failover robot tests.
tests/01-maglevd/01-healthcheck.robot
- Three new test cases exercising pool failover end-to-end:
1. primary up, secondary standby (initial state)
2. disable primary → fallback takes over (effective weight flips)
3. enable primary → fallback steps back
All run without VPP: they scrape 'maglevc show frontends <name>' and
regex-match the effective weight in the output. Deterministic and
fast (~2s total) because the static backends don't probe.
- Two helper keywords: Static Backend Should Be Up and
Effective Weight Should Be.
Net result: 16/16 robot tests pass. Backend state transitions now
flow through a single documented path (checker event → reconciler →
SyncLBStateVIP → desiredFromFrontend → asFromBackend → reconcileVIP →
setASWeight), and the pool failover / enable-after-disable / static-
backend-startup bugs are all fixed.
proto/maglev.proto (259 lines, 7.2 KiB):
syntax = "proto3";

package maglev;

option go_package = "git.ipng.ch/ipng/vpp-maglev/internal/grpcapi";

// Maglev exposes the state of backend health for all frontends.
service Maglev {
  rpc ListFrontends(ListFrontendsRequest) returns (ListFrontendsResponse);
  rpc GetFrontend(GetFrontendRequest) returns (FrontendInfo);
  rpc ListBackends(ListBackendsRequest) returns (ListBackendsResponse);
  rpc GetBackend(GetBackendRequest) returns (BackendInfo);
  rpc PauseBackend(BackendRequest) returns (BackendInfo);
  rpc ResumeBackend(BackendRequest) returns (BackendInfo);
  rpc EnableBackend(BackendRequest) returns (BackendInfo);
  rpc DisableBackend(BackendRequest) returns (BackendInfo);
  rpc ListHealthChecks(ListHealthChecksRequest) returns (ListHealthChecksResponse);
  rpc GetHealthCheck(GetHealthCheckRequest) returns (HealthCheckInfo);
  rpc SetFrontendPoolBackendWeight(SetWeightRequest) returns (FrontendInfo);
  rpc WatchEvents(WatchRequest) returns (stream Event);
  rpc CheckConfig(CheckConfigRequest) returns (CheckConfigResponse);
  rpc ReloadConfig(ReloadConfigRequest) returns (ReloadConfigResponse);
  rpc GetVPPInfo(GetVPPInfoRequest) returns (VPPInfo);
  rpc GetVPPLBState(GetVPPLBStateRequest) returns (VPPLBState);
  rpc SyncVPPLBState(SyncVPPLBStateRequest) returns (SyncVPPLBStateResponse);
}

// ---- requests ---------------------------------------------------------------

message ListFrontendsRequest {}

message GetFrontendRequest {
  string name = 1;
}

message ListBackendsRequest {}

message GetBackendRequest {
  string name = 1;
}

message BackendRequest {
  string name = 1;
}

message ListHealthChecksRequest {}

message GetHealthCheckRequest {
  string name = 1;
}

message CheckConfigRequest {}

message CheckConfigResponse {
  bool ok = 1;
  string parse_error = 2;    // set when YAML cannot be read or parsed
  string semantic_error = 3; // set when YAML is valid but semantically incorrect
}

message ReloadConfigRequest {}

message ReloadConfigResponse {
  bool ok = 1;
  string parse_error = 2;    // set when YAML cannot be read or parsed
  string semantic_error = 3; // set when YAML is valid but semantically incorrect
  string reload_error = 4;   // set when config is valid but the reload itself failed
}

message GetVPPInfoRequest {}

message VPPInfo {
  string version = 1;
  string build_date = 2;
  string build_directory = 3;
  uint32 pid = 4;
  int64 boottime_ns = 5;    // unix timestamp (ns) when VPP started (from /sys/boottime)
  int64 connecttime_ns = 6; // unix timestamp (ns) when maglevd connected to VPP
}

// ---- VPP load-balancer state ------------------------------------------------

message GetVPPLBStateRequest {}

// VPPLBConf mirrors VPP's lb_conf_get_reply: global LB plugin settings.
message VPPLBConf {
  string ip4_src_address = 1;
  string ip6_src_address = 2;
  uint32 sticky_buckets_per_core = 3;
  uint32 flow_timeout = 4;
}

// VPPLBAS is one application server attached to a VIP.
message VPPLBAS {
  string address = 1;
  uint32 weight = 2;         // 0-100
  uint32 flags = 3;          // VPP AS flags (bit 0 = used, bit 1 = flushed)
  uint32 num_buckets = 4;
  int64 in_use_since_ns = 5; // unix timestamp (ns), 0 if never used
}

// VPPLBVIP mirrors VPP's lb_vip_details plus the attached application servers.
// Note: srv_type, dscp, and target_port are intentionally omitted — maglevd
// only supports GRE encap, so NAT/L3DSR-specific fields don't apply.
message VPPLBVIP {
  string prefix = 1;            // CIDR, e.g. 192.0.2.1/32
  uint32 protocol = 2;          // 6=TCP, 17=UDP, 255=any
  uint32 port = 3;              // 0 = all-port VIP
  string encap = 4;             // gre4|gre6|l3dsr|nat4|nat6
  uint32 flow_table_length = 5;
  repeated VPPLBAS application_servers = 6;
}

message VPPLBState {
  VPPLBConf conf = 1;
  repeated VPPLBVIP vips = 2;
}

// SyncVPPLBStateRequest triggers a reconciliation between the maglev config
// and the VPP load-balancer dataplane. When frontend_name is set, only that
// frontend's VIP is synced (SyncLBStateVIP) and no VIPs are removed. When
// unset, a full reconciliation runs (SyncLBStateAll), which will also remove
// stale VIPs from VPP.
message SyncVPPLBStateRequest {
  optional string frontend_name = 1;
}

message SyncVPPLBStateResponse {}

message SetWeightRequest {
  string frontend = 1;
  string pool = 2;
  string backend = 3;
  int32 weight = 4; // 0-100
}

// WatchRequest controls which event types are streamed. All fields default to
// true (i.e. an empty request subscribes to everything at info level).
message WatchRequest {
  optional bool log = 1;      // include log events (default: true)
  string log_level = 2;       // minimum log level: debug|info|warn|error (default: info)
  optional bool backend = 3;  // include backend transition events (default: true)
  optional bool frontend = 4; // include frontend events (default: true)
}

// ---- responses --------------------------------------------------------------

message ListFrontendsResponse {
  repeated string frontend_names = 1;
}

message PoolBackendInfo {
  string name = 1;
  int32 weight = 2;           // configured weight from YAML (0-100)
  int32 effective_weight = 3; // state-aware weight after pool-failover logic
}

message PoolInfo {
  string name = 1;
  repeated PoolBackendInfo backends = 2;
}

message FrontendInfo {
  string name = 1;
  string address = 2;
  string protocol = 3;
  uint32 port = 4;
  repeated PoolInfo pools = 5;
  string description = 6;
}

message ListBackendsResponse {
  repeated string backend_names = 1;
}

message ListHealthChecksResponse {
  repeated string names = 1;
}

message HTTPCheckParams {
  string path = 1;
  string host = 2;
  int32 response_code_min = 3;
  int32 response_code_max = 4;
  string response_regexp = 5;
  string server_name = 6;
  bool insecure_skip_verify = 7;
}

message TCPCheckParams {
  bool ssl = 1;
  string server_name = 2;
  bool insecure_skip_verify = 3;
}

message HealthCheckInfo {
  string name = 1;
  string type = 2;
  uint32 port = 3;
  string probe_ipv4_src = 4;
  string probe_ipv6_src = 5;
  int64 interval_ns = 6;
  int64 fast_interval_ns = 7;
  int64 down_interval_ns = 8;
  int64 timeout_ns = 9;
  int32 rise = 10;
  int32 fall = 11;
  HTTPCheckParams http = 12;
  TCPCheckParams tcp = 13;
}

message BackendInfo {
  string name = 1;
  string address = 2;
  string state = 3;
  repeated TransitionRecord transitions = 4;
  bool enabled = 5;
  string healthcheck = 6;
}

message TransitionRecord {
  string from = 1;
  string to = 2;
  int64 at_unix_ns = 3;
}

// ---- event stream -----------------------------------------------------------

// LogAttr is a single key/value attribute from a structured log record.
message LogAttr {
  string key = 1;
  string value = 2;
}

// LogEvent carries a single structured log record.
message LogEvent {
  int64 at_unix_ns = 1;
  string level = 2;
  string msg = 3;
  repeated LogAttr attrs = 4;
}

// BackendEvent is emitted on every backend state transition.
message BackendEvent {
  string backend_name = 1;
  TransitionRecord transition = 2;
}

// FrontendEvent is reserved for future frontend-level events.
message FrontendEvent {}

// Event is the envelope returned by WatchEvents.
message Event {
  oneof event {
    LogEvent log = 1;
    BackendEvent backend = 2;
    FrontendEvent frontend = 3;
  }
}