This commit wires maglevd through to VPP's LB plugin end-to-end, using
locally-generated GoVPP bindings for the newer v2 API messages.
VPP binapi (vendored)
- New package internal/vpp/binapi/ containing lb, lb_types, ip_types, and
interface_types, generated from a local VPP build (~/src/vpp) via a new
'make vpp-binapi' target. GoVPP v0.12.0 upstream lacks the v2 messages we
need (lb_conf_get, lb_add_del_vip_v2, lb_add_del_as_v2, lb_as_v2_dump,
lb_as_set_weight), so we commit the generated output in-tree.
- All VPP API calls go through our loggedChannel wrapper; every send/receive
  is recorded at DEBUG via slog (vpp-api-send / vpp-api-recv /
  vpp-api-send-multi / vpp-api-recv-multi) so the full wire-level trail is
  auditable. NewAPIChannel is unexported; callers must use c.apiChannel().
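A minimal sketch of the loggedChannel idea, for orientation only (the in-tree
wrapper also covers the multi-request path and the remaining api.Channel
methods); the GoVPP import path is assumed:

    // assumes: import ("log/slog"; "go.fd.io/govpp/api")
    type loggedChannel struct {
        api.Channel // all other methods pass through unchanged
    }

    type loggedRequestCtx struct{ api.RequestCtx }

    func (c *loggedChannel) SendRequest(msg api.Message) api.RequestCtx {
        slog.Debug("vpp-api-send", "msg", msg.GetMessageName())
        return &loggedRequestCtx{c.Channel.SendRequest(msg)}
    }

    func (ctx *loggedRequestCtx) ReceiveReply(reply api.Message) error {
        err := ctx.RequestCtx.ReceiveReply(reply)
        slog.Debug("vpp-api-recv", "msg", reply.GetMessageName(), "err", err)
        return err
    }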
Read path: GetLBState{All,VIP}
- GetLBStateAll returns a full snapshot (global conf + every VIP with its
attached application servers).
- GetLBStateVIP looks up a single VIP by (prefix, protocol, port) and
  returns (nil, nil) when the VIP doesn't exist in VPP (see the sketch after
  this list). This is the efficient path for targeted updates on a busy LB.
- Helpers factored out: getLBConf, dumpAllVIPs, dumpASesForVIP, lookupVIP,
vipFromDetails.
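A caller-side sketch of the (nil, nil) contract; the exact GetLBStateVIP
signature is assumed here, and only the nil-versus-error distinction matters:

    // assumes: import ("net/netip"; "git.ipng.ch/ipng/vpp-maglev/internal/vpp")
    // vipExists reports whether VPP already has the VIP programmed. A nil
    // VIP with a nil error means "not present", not a failure.
    func vipExists(c *vpp.Client, prefix netip.Prefix, proto uint8, port uint16) (bool, error) {
        vip, err := c.GetLBStateVIP(prefix, proto, port)
        if err != nil {
            return false, err // real VPP API / transport error
        }
        return vip != nil, nil
    }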
Write path: SyncLBState{All,VIP}
- SyncLBStateAll reconciles every configured frontend with VPP: creates
missing VIPs, removes stale ones (with AS flush), and reconciles AS
membership and weights within VIPs that exist on both sides.
- SyncLBStateVIP targets a single frontend by name. Never removes VIPs.
Returns ErrFrontendNotFound (wrapped with the name) when the frontend
isn't in config, so callers can use errors.Is.
- Shared reconcileVIP helper does the per-VIP AS diff; removeVIP is used
only by the full-sync pass.
- LbAddDelVipV2 requests always set NewFlowsTableLength=1024. The .api
default=1024 annotation is only applied by VAT/CLI parsers, not wire-
level marshalling — sending 0 caused VPP to vec_validate with mask
0xFFFFFFFF and OOM-panic.
- Pool semantics: backends in the primary (first) pool of a frontend get
  their configured weight; backends in secondary pools get weight 0 (see the
  sketch after this list). All backends are installed so higher layers can
  flip weights on failover without add/remove churn.
- Every individual change emits a DEBUG slog (vpp-lbsync-vip-add/del,
vpp-lbsync-as-add/del, vpp-lbsync-as-weight). Start/done INFO logs
carry a scope=all|vip label plus aggregate counts.
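The pool-weight rule above, sketched with illustrative names (not the in-tree
helper):

    // desiredWeight returns the weight to program into VPP for a backend in
    // pool number poolIndex (0 = primary). Secondary-pool backends stay
    // installed at weight 0 so failover only needs to flip weights.
    func desiredWeight(poolIndex, configuredWeight int) int {
        if poolIndex == 0 {
            return configuredWeight
        }
        return 0
    }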
Global conf push: SetLBConf
- New SetLBConf(cfg) sends lb_conf with ipv4-src, ipv6-src, sticky-buckets,
  and flow-timeout. Called automatically on VPP (re)connect and after
  every config reload (via doReloadConfig). The pushed values are cached on
  the Client so redundant pushes are silently skipped; only actual changes
  produce a vpp-lb-conf-set INFO log line.
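A minimal sketch of the skip-if-unchanged gate; the type, field, and helper
names are illustrative, not the in-tree ones:

    // assumes: import "net/netip"
    // lbConf holds the four values sent via lb_conf; netip.Addr keeps the
    // struct comparable so a plain == detects changes.
    type lbConf struct {
        ip4Src, ip6Src netip.Addr
        stickyBuckets  uint32
        flowTimeoutSec uint32
    }

    // pushIfChanged sends the conf only when it differs from the cached copy
    // and reports whether a push happened (only then would the caller log
    // vpp-lb-conf-set).
    func pushIfChanged(cached *lbConf, want lbConf, push func(lbConf) error) (bool, error) {
        if *cached == want {
            return false, nil
        }
        if err := push(want); err != nil {
            return false, err
        }
        *cached = want
        return true, nil
    }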
Periodic drift reconciliation
- vpp.Client.lbSyncLoop runs in a goroutine tied to each VPP connection's
  lifetime. Its first tick is immediate (so startup and post-reconnect syncs
  happen quickly); subsequent ticks fire every vpp.lb.sync-interval from
  config (default 30s). Its purpose is to catch drift when something or
  someone modifies VPP state by hand. The loop uses a ConfigSource interface
  (satisfied by checker.Checker via its new Config() accessor) to avoid
  an import cycle with the checker package.
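The loop shape, as a sketch (not the in-tree lbSyncLoop; signature and log key
are illustrative):

    // assumes: import ("context"; "log/slog"; "time")
    func lbSyncLoop(ctx context.Context, interval time.Duration, sync func() error) {
        t := time.NewTicker(interval)
        defer t.Stop()
        for {
            // The first pass runs before any tick, so startup and reconnect
            // converge immediately.
            if err := sync(); err != nil {
                slog.Warn("vpp-lb-sync-failed", "err", err)
            }
            select {
            case <-ctx.Done():
                return // connection torn down: stop reconciling
            case <-t.C:
            }
        }
    }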
Config schema additions (maglev.vpp.lb)
- sync-interval: positive Go duration, default 30s.
- ipv4-src-address: REQUIRED. Used as the outer source for GRE4 encap
to application servers. Missing this is a hard semantic error —
maglevd --check exits 2 and the daemon refuses to start. VPP GRE
needs a source address and every VIP we program uses GRE, so there
is no meaningful config without it.
- ipv6-src-address: REQUIRED. Same treatment as ipv4-src-address.
- sticky-buckets-per-core: default 65536, must be a power of 2.
- flow-timeout: default 40s, must be a whole number of seconds in [1s, 120s].
- VPP validation runs at the end of convert() so structural errors in
healthchecks/backends/frontends surface first — operators fix those,
then get the VPP-specific requirements.
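A sketch of the two non-obvious tuning checks above (power-of-two buckets,
whole-second bounded flow-timeout); this is illustrative, not the in-tree
convert() code:

    // assumes: import ("fmt"; "time")
    func validateLBTuning(stickyBuckets uint32, flowTimeout time.Duration) error {
        if stickyBuckets == 0 || stickyBuckets&(stickyBuckets-1) != 0 {
            return fmt.Errorf("sticky-buckets-per-core %d: must be a power of 2", stickyBuckets)
        }
        if flowTimeout < time.Second || flowTimeout > 120*time.Second || flowTimeout%time.Second != 0 {
            return fmt.Errorf("flow-timeout %v: must be a whole number of seconds in [1s, 120s]", flowTimeout)
        }
        return nil
    }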
gRPC API
- New GetVPPLBState RPC returning VPPLBState: global conf + VIPs with
ASes. Mirrors the read-path but strips fields irrelevant to our
GRE-only deployment (srv_type, dscp, target_port).
- New SyncVPPLBState RPC with optional frontend_name. Unset → full sync
(may remove stale VIPs). Set → single-VIP sync (never removes).
Returns codes.NotFound for unknown frontends, codes.Unavailable when
VPP integration is disabled or disconnected.
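Illustrative client-side use of the new RPC, assuming the generated
grpcapi.MaglevClient stub; the request and field names follow the proto
described above:

    // assumes: import ("context"; "fmt"; "google.golang.org/grpc/codes";
    //                  "google.golang.org/grpc/status";
    //                  "git.ipng.ch/ipng/vpp-maglev/internal/grpcapi")
    func syncOneFrontend(ctx context.Context, c grpcapi.MaglevClient, name string) error {
        _, err := c.SyncVPPLBState(ctx, &grpcapi.SyncVPPLBStateRequest{FrontendName: &name})
        switch status.Code(err) {
        case codes.OK:
            return nil
        case codes.NotFound:
            return fmt.Errorf("frontend %q is not in the loaded config: %w", name, err)
        case codes.Unavailable:
            return fmt.Errorf("VPP integration disabled or disconnected: %w", err)
        default:
            return err
        }
    }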
maglevc (CLI)
- New 'show vpp lbstate' command displaying the LB plugin state. Fields
  irrelevant to our GRE-only dataplane are suppressed. Per-AS lines use
  a key-value format ("address X weight Y flow-table-buckets Z")
  instead of tabwriter columns, which avoids the ANSI-color alignment
  issue we hit with mixed label/data rows.
- New 'sync vpp lbstate [<name>]' command. Without a name, triggers a
full reconciliation; with a name, targets one frontend.
- Previous 'show vpp lb' renamed to 'show vpp lbstate' for consistency
with the new sync command.
Test fixtures
- validConfig and all ad-hoc config_test.go fixtures that reach the end
of convert() now include the two required vpp.lb src addresses.
- tests/01-maglevd/maglevd-lab/maglev.yaml gains a vpp.lb section so the
robot integration tests can still load the config.
- cmd/maglevc/tree_test.go gains expected paths for the new commands.
Docs
- config-guide.md: new 'vpp' section in the basic structure, detailed
vpp.lb field reference, noting ipv4/ipv6 src addresses as REQUIRED
(hard error) with no defaults; example config updated.
- user-guide.md: documented 'show vpp info', 'show vpp lbstate',
'sync vpp lbstate [<name>]', new --vpp-api-addr and --vpp-stats-addr
flags, the vpp-lb-conf-set log line, and corrected the pause/resume
description to reflect that pause cancels the probe goroutine.
- debian/maglev.yaml: example config gains a vpp.lb block with src
addresses and commented optional overrides.
// Copyright (c) 2026, Pim van Pelt <pim@ipng.ch>

package grpcapi

import (
	"context"
	"errors"
	"log/slog"
	"net"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"

	"git.ipng.ch/ipng/vpp-maglev/internal/checker"
	"git.ipng.ch/ipng/vpp-maglev/internal/config"
	"git.ipng.ch/ipng/vpp-maglev/internal/health"
	"git.ipng.ch/ipng/vpp-maglev/internal/vpp"
)

// Server implements the MaglevServer gRPC interface.
type Server struct {
	UnimplementedMaglevServer
	ctx        context.Context
	checker    *checker.Checker
	logs       *LogBroadcaster
	configPath string
	vppClient  *vpp.Client // nil when VPP integration is disabled
}

// NewServer creates a Server backed by the given Checker. logs may be nil, in
// which case log events are never sent to WatchEvents streams. configPath is
// used by CheckConfig to reload and validate the configuration file on demand.
// vppClient may be nil if VPP integration is disabled. The provided context
// controls the lifetime of streaming RPCs: cancelling it closes all active
// WatchEvents streams so that grpc.Server.GracefulStop can complete.
func NewServer(ctx context.Context, c *checker.Checker, logs *LogBroadcaster, configPath string, vppClient *vpp.Client) *Server {
	return &Server{ctx: ctx, checker: c, logs: logs, configPath: configPath, vppClient: vppClient}
}

// ListFrontends returns the names of all configured frontends.
func (s *Server) ListFrontends(_ context.Context, _ *ListFrontendsRequest) (*ListFrontendsResponse, error) {
	return &ListFrontendsResponse{FrontendNames: s.checker.ListFrontends()}, nil
}

// GetFrontend returns configuration details for a single frontend.
func (s *Server) GetFrontend(_ context.Context, req *GetFrontendRequest) (*FrontendInfo, error) {
	fe, ok := s.checker.GetFrontend(req.Name)
	if !ok {
		return nil, status.Errorf(codes.NotFound, "frontend %q not found", req.Name)
	}
	return frontendToProto(req.Name, fe), nil
}

// ListBackends returns the names of all active backends.
func (s *Server) ListBackends(_ context.Context, _ *ListBackendsRequest) (*ListBackendsResponse, error) {
	return &ListBackendsResponse{BackendNames: s.checker.ListBackends()}, nil
}

// GetBackend returns health state for a backend by name.
func (s *Server) GetBackend(_ context.Context, req *GetBackendRequest) (*BackendInfo, error) {
	b, ok := s.checker.GetBackend(req.Name)
	if !ok {
		return nil, status.Errorf(codes.NotFound, "backend %q not found", req.Name)
	}
	return backendToProto(b), nil
}

// PauseBackend pauses health checking for a backend by name.
func (s *Server) PauseBackend(_ context.Context, req *BackendRequest) (*BackendInfo, error) {
	b, err := s.checker.PauseBackend(req.Name)
	if err != nil {
		return nil, status.Errorf(codes.FailedPrecondition, "%v", err)
	}
	return backendToProto(b), nil
}

// ResumeBackend resumes health checking for a backend by name.
func (s *Server) ResumeBackend(_ context.Context, req *BackendRequest) (*BackendInfo, error) {
	b, err := s.checker.ResumeBackend(req.Name)
	if err != nil {
		return nil, status.Errorf(codes.FailedPrecondition, "%v", err)
	}
	return backendToProto(b), nil
}

// EnableBackend re-enables a previously disabled backend.
func (s *Server) EnableBackend(_ context.Context, req *BackendRequest) (*BackendInfo, error) {
	b, ok := s.checker.EnableBackend(req.Name)
	if !ok {
		return nil, status.Errorf(codes.NotFound, "backend %q not found", req.Name)
	}
	return backendToProto(b), nil
}

// DisableBackend disables a backend, stopping its probe goroutine.
func (s *Server) DisableBackend(_ context.Context, req *BackendRequest) (*BackendInfo, error) {
	b, ok := s.checker.DisableBackend(req.Name)
	if !ok {
		return nil, status.Errorf(codes.NotFound, "backend %q not found", req.Name)
	}
	return backendToProto(b), nil
}

// SetFrontendPoolBackendWeight updates the weight of a backend in a pool.
func (s *Server) SetFrontendPoolBackendWeight(_ context.Context, req *SetWeightRequest) (*FrontendInfo, error) {
	if req.Weight < 0 || req.Weight > 100 {
		return nil, status.Errorf(codes.InvalidArgument, "weight %d out of range [0, 100]", req.Weight)
	}
	fe, err := s.checker.SetFrontendPoolBackendWeight(req.Frontend, req.Pool, req.Backend, int(req.Weight))
	if err != nil {
		return nil, status.Errorf(codes.NotFound, "%v", err)
	}
	return frontendToProto(req.Frontend, fe), nil
}

// ListHealthChecks returns the names of all configured health checks.
func (s *Server) ListHealthChecks(_ context.Context, _ *ListHealthChecksRequest) (*ListHealthChecksResponse, error) {
	return &ListHealthChecksResponse{Names: s.checker.ListHealthChecks()}, nil
}

// GetHealthCheck returns the full configuration for a health check by name.
func (s *Server) GetHealthCheck(_ context.Context, req *GetHealthCheckRequest) (*HealthCheckInfo, error) {
	hc, ok := s.checker.GetHealthCheck(req.Name)
	if !ok {
		return nil, status.Errorf(codes.NotFound, "healthcheck %q not found", req.Name)
	}
	return healthCheckToProto(req.Name, hc), nil
}

// WatchEvents streams events to the client. On connect, the current state of
// all backends is sent as synthetic BackendEvents. Afterwards, live events are
// forwarded based on the filter flags in req. An unset (nil) flag defaults to
// true (subscribe). An empty log_level defaults to "info".
func (s *Server) WatchEvents(req *WatchRequest, stream Maglev_WatchEventsServer) error {
	wantLog := req.Log == nil || *req.Log
	wantBackend := req.Backend == nil || *req.Backend
	wantFrontend := req.Frontend == nil || *req.Frontend
	_ = wantFrontend // no frontend events emitted yet

	logLevel := slog.LevelInfo
	if req.LogLevel != "" {
		if err := logLevel.UnmarshalText([]byte(req.LogLevel)); err != nil {
			return status.Errorf(codes.InvalidArgument, "invalid log_level %q: must be debug, info, warn, or error", req.LogLevel)
		}
	}

	// Subscribe to log events (nil channel blocks forever when not wanted).
	var logCh <-chan *LogEvent
	if wantLog && s.logs != nil {
		var unsub func()
		logCh, unsub = s.logs.Subscribe(logLevel)
		defer unsub()
	}

	// Subscribe to backend events; send initial state snapshot first.
	var backendCh <-chan checker.Event
	if wantBackend {
		for _, name := range s.checker.ListBackends() {
			snap, ok := s.checker.GetBackend(name)
			if !ok {
				continue
			}
			ev := &Event{Event: &Event_Backend{Backend: &BackendEvent{
				BackendName: name,
				Transition: &TransitionRecord{
					From:     snap.Health.State.String(),
					To:       snap.Health.State.String(),
					AtUnixNs: 0,
				},
			}}}
			if err := stream.Send(ev); err != nil {
				return err
			}
		}
		var unsub func()
		backendCh, unsub = s.checker.Subscribe()
		defer unsub()
	}

	for {
		select {
		case <-s.ctx.Done():
			return status.Error(codes.Unavailable, "server shutting down")
		case <-stream.Context().Done():
			return nil
		case le, ok := <-logCh:
			if !ok {
				return nil
			}
			if err := stream.Send(&Event{Event: &Event_Log{Log: le}}); err != nil {
				return err
			}
		case e, ok := <-backendCh:
			if !ok {
				return nil
			}
			if err := stream.Send(&Event{Event: &Event_Backend{Backend: &BackendEvent{
				BackendName: e.BackendName,
				Transition:  transitionToProto(e.Transition),
			}}}); err != nil {
				return err
			}
		}
	}
}

// CheckConfig reads and validates the configuration file, returning a
// structured result that distinguishes YAML parse errors from semantic errors.
func (s *Server) CheckConfig(_ context.Context, _ *CheckConfigRequest) (*CheckConfigResponse, error) {
	slog.Info("config-check-start", "path", s.configPath)
	_, result := config.Check(s.configPath)
	resp := &CheckConfigResponse{
		Ok:            result.OK(),
		ParseError:    result.ParseError,
		SemanticError: result.SemanticError,
	}
	if result.OK() {
		slog.Info("config-check-done", "result", "ok")
	} else if result.ParseError != "" {
		slog.Info("config-check-done", "result", "failed", "type", "parse", "err", result.ParseError)
	} else {
		slog.Info("config-check-done", "result", "failed", "type", "semantic", "err", result.SemanticError)
	}
	return resp, nil
}

// ReloadConfig checks the configuration file and, if valid, applies it to the
// running checker. This is the same code path used by SIGHUP.
func (s *Server) ReloadConfig(_ context.Context, _ *ReloadConfigRequest) (*ReloadConfigResponse, error) {
	return s.doReloadConfig(), nil
}

// TriggerReload performs a config check and reload. Intended for use by the
// SIGHUP handler so that signals and gRPC share the same code path.
func (s *Server) TriggerReload() {
	s.doReloadConfig()
}

func (s *Server) doReloadConfig() *ReloadConfigResponse {
	slog.Info("config-reload-start")
	newCfg, result := config.Check(s.configPath)
	if !result.OK() {
		if result.ParseError != "" {
			slog.Error("config-check-failed", "type", "parse", "err", result.ParseError)
		} else {
			slog.Error("config-check-failed", "type", "semantic", "err", result.SemanticError)
		}
		return &ReloadConfigResponse{
			ParseError:    result.ParseError,
			SemanticError: result.SemanticError,
		}
	}
	if err := s.checker.Reload(s.ctx, newCfg); err != nil {
		slog.Error("checker-reload-error", "err", err)
		return &ReloadConfigResponse{
			ReloadError: err.Error(),
		}
	}
	// Push new global LB conf to VPP if anything changed. SetLBConf is a
	// no-op when VPP isn't connected or when the values are unchanged.
	if s.vppClient != nil {
		if err := s.vppClient.SetLBConf(newCfg); err != nil {
			slog.Warn("vpp-lb-conf-set-failed", "err", err)
		}
	}
	slog.Info("config-reload-done", "frontends", len(newCfg.Frontends))
	return &ReloadConfigResponse{Ok: true}
}

// GetVPPInfo returns VPP version and runtime information.
func (s *Server) GetVPPInfo(_ context.Context, _ *GetVPPInfoRequest) (*VPPInfo, error) {
	if s.vppClient == nil {
		return nil, status.Error(codes.Unavailable, "VPP integration is disabled")
	}
	info, err := s.vppClient.GetInfo()
	if err != nil {
		return nil, status.Errorf(codes.Unavailable, "%v", err)
	}
	var boottimeNs int64
	if !info.BootTime.IsZero() {
		boottimeNs = info.BootTime.UnixNano()
	}
	return &VPPInfo{
		Version:        info.Version,
		BuildDate:      info.BuildDate,
		BuildDirectory: info.BuildDirectory,
		Pid:            info.PID,
		BoottimeNs:     boottimeNs,
		ConnecttimeNs:  info.ConnectedSince.UnixNano(),
	}, nil
}

// GetVPPLBState returns a snapshot of the VPP load-balancer plugin state.
func (s *Server) GetVPPLBState(_ context.Context, _ *GetVPPLBStateRequest) (*VPPLBState, error) {
	if s.vppClient == nil {
		return nil, status.Error(codes.Unavailable, "VPP integration is disabled")
	}
	state, err := s.vppClient.GetLBStateAll()
	if err != nil {
		return nil, status.Errorf(codes.Unavailable, "%v", err)
	}
	return lbStateToProto(state), nil
}

// SyncVPPLBState runs the LB reconciler. With frontend_name unset it does a
// full sync (SyncLBStateAll), which may remove stale VIPs. With frontend_name
// set it does a single-VIP sync (SyncLBStateVIP) that only adds/updates.
func (s *Server) SyncVPPLBState(_ context.Context, req *SyncVPPLBStateRequest) (*SyncVPPLBStateResponse, error) {
	if s.vppClient == nil {
		return nil, status.Error(codes.Unavailable, "VPP integration is disabled")
	}
	cfg := s.checker.Config()
	if req.FrontendName != nil && *req.FrontendName != "" {
		if err := s.vppClient.SyncLBStateVIP(cfg, *req.FrontendName); err != nil {
			if errors.Is(err, vpp.ErrFrontendNotFound) {
				return nil, status.Errorf(codes.NotFound, "%v", err)
			}
			return nil, status.Errorf(codes.Unavailable, "%v", err)
		}
		return &SyncVPPLBStateResponse{}, nil
	}
	if err := s.vppClient.SyncLBStateAll(cfg); err != nil {
		return nil, status.Errorf(codes.Unavailable, "%v", err)
	}
	return &SyncVPPLBStateResponse{}, nil
}

// lbStateToProto converts the vpp package's LBState into the proto message.
func lbStateToProto(s *vpp.LBState) *VPPLBState {
	out := &VPPLBState{
		Conf: &VPPLBConf{
			Ip4SrcAddress:        ipStringOrEmpty(s.Conf.IP4SrcAddress),
			Ip6SrcAddress:        ipStringOrEmpty(s.Conf.IP6SrcAddress),
			StickyBucketsPerCore: s.Conf.StickyBucketsPerCore,
			FlowTimeout:          s.Conf.FlowTimeout,
		},
	}
	for _, v := range s.VIPs {
		pv := &VPPLBVIP{
			Prefix:          v.Prefix.String(),
			Protocol:        uint32(v.Protocol),
			Port:            uint32(v.Port),
			Encap:           v.Encap,
			FlowTableLength: uint32(v.FlowTableLength),
		}
		for _, a := range v.ASes {
			var ts int64
			if !a.InUseSince.IsZero() {
				ts = a.InUseSince.UnixNano()
			}
			pv.ApplicationServers = append(pv.ApplicationServers, &VPPLBAS{
				Address:      a.Address.String(),
				Weight:       uint32(a.Weight),
				Flags:        uint32(a.Flags),
				NumBuckets:   a.NumBuckets,
				InUseSinceNs: ts,
			})
		}
		out.Vips = append(out.Vips, pv)
	}
	return out
}

func ipStringOrEmpty(ip net.IP) string {
	if len(ip) == 0 || ip.IsUnspecified() {
		return ""
	}
	return ip.String()
}

// ---- conversion helpers ----------------------------------------------------

func frontendToProto(name string, fe config.Frontend) *FrontendInfo {
	pools := make([]*PoolInfo, 0, len(fe.Pools))
	for _, p := range fe.Pools {
		pi := &PoolInfo{Name: p.Name}
		for bName, pb := range p.Backends {
			pi.Backends = append(pi.Backends, &PoolBackendInfo{
				Name:   bName,
				Weight: int32(pb.Weight),
			})
		}
		pools = append(pools, pi)
	}
	return &FrontendInfo{
		Name:        name,
		Address:     fe.Address.String(),
		Protocol:    fe.Protocol,
		Port:        uint32(fe.Port),
		Description: fe.Description,
		Pools:       pools,
	}
}

func backendToProto(snap checker.BackendSnapshot) *BackendInfo {
	info := &BackendInfo{
		Name:        snap.Health.Name,
		Address:     snap.Health.Address.String(),
		State:       snap.Health.State.String(),
		Enabled:     snap.Config.Enabled,
		Healthcheck: snap.Config.HealthCheck,
	}
	for _, t := range snap.Health.Transitions {
		info.Transitions = append(info.Transitions, transitionToProto(t))
	}
	return info
}

func healthCheckToProto(name string, hc config.HealthCheck) *HealthCheckInfo {
	info := &HealthCheckInfo{
		Name:           name,
		Type:           hc.Type,
		Port:           uint32(hc.Port),
		IntervalNs:     hc.Interval.Nanoseconds(),
		FastIntervalNs: hc.FastInterval.Nanoseconds(),
		DownIntervalNs: hc.DownInterval.Nanoseconds(),
		TimeoutNs:      hc.Timeout.Nanoseconds(),
		Rise:           int32(hc.Rise),
		Fall:           int32(hc.Fall),
	}
	if hc.ProbeIPv4Src != nil {
		info.ProbeIpv4Src = hc.ProbeIPv4Src.String()
	}
	if hc.ProbeIPv6Src != nil {
		info.ProbeIpv6Src = hc.ProbeIPv6Src.String()
	}
	if hc.HTTP != nil {
		re := ""
		if hc.HTTP.ResponseRegexp != nil {
			re = hc.HTTP.ResponseRegexp.String()
		}
		info.Http = &HTTPCheckParams{
			Path:               hc.HTTP.Path,
			Host:               hc.HTTP.Host,
			ResponseCodeMin:    int32(hc.HTTP.ResponseCodeMin),
			ResponseCodeMax:    int32(hc.HTTP.ResponseCodeMax),
			ResponseRegexp:     re,
			ServerName:         hc.HTTP.ServerName,
			InsecureSkipVerify: hc.HTTP.InsecureSkipVerify,
		}
	}
	if hc.TCP != nil {
		info.Tcp = &TCPCheckParams{
			Ssl:                hc.TCP.SSL,
			ServerName:         hc.TCP.ServerName,
			InsecureSkipVerify: hc.TCP.InsecureSkipVerify,
		}
	}
	return info
}

func transitionToProto(t health.Transition) *TransitionRecord {
	return &TransitionRecord{
		From:     t.From.String(),
		To:       t.To.String(),
		AtUnixNs: t.At.UnixNano(),
	}
}