Collector implementation
This commit is contained in:
144
PLAN_COLLECTOR.md
Normal file
144
PLAN_COLLECTOR.md
Normal file
@@ -0,0 +1,144 @@
|
||||
# Collector v0 — Implementation Plan ✓ COMPLETE
|
||||
|
||||
Module path: `git.ipng.ch/ipng/nginx-logtail`
|
||||
|
||||
**Scope:** A working collector that tails files, aggregates into memory, and serves `TopN`,
|
||||
`Trend`, and `StreamSnapshots` over gRPC. Full vertical slice, no optimisation passes yet.
|
||||
|
||||
---
|
||||
|
||||
## Step 1 — Repo scaffolding
|
||||
- `go mod init git.ipng.ch/ipng/nginx-logtail`
|
||||
- `.gitignore`
|
||||
- Install deps: `google.golang.org/grpc`, `google.golang.org/protobuf`, `github.com/fsnotify/fsnotify`
|
||||
|
||||
## Step 2 — Proto (`proto/logtail.proto`)
|
||||
Write the full proto file as specified in README.md DESIGN § Protobuf API. Generate Go stubs with
|
||||
`protoc`. Commit generated files. This defines the contract everything else builds on.
|
||||
|
||||
## Step 3 — Parser (`cmd/collector/parser.go`)
|
||||
- `LogRecord` struct: `Website`, `ClientPrefix`, `URI`, `Status string`
|
||||
- `ParseLine(line string) (LogRecord, bool)` — `SplitN` on tab, discard query string at `?`,
|
||||
return `false` for lines with fewer than 8 fields
|
||||
- `TruncateIP(addr string, v4bits, v6bits int) string` — handle IPv4 and IPv6
|
||||
- Unit-tested with table-driven tests: normal line, short line, IPv6, query string stripping,
|
||||
/24 and /48 truncation
|
||||
|
||||
## Step 4 — Store (`cmd/collector/store.go`)
|
||||
Implement in order, each piece testable independently:
|
||||
|
||||
1. **`Tuple4` and live map** — `map[Tuple4]int64`, cap enforcement at 100K, `Ingest(r LogRecord)`
|
||||
2. **Fine ring buffer** — `[60]Snapshot` circular array, `rotate()` heap-selects top-50K from
|
||||
live map, appends to ring, resets live map
|
||||
3. **Coarse ring buffer** — `[288]Snapshot`, populated every 5 fine rotations by merging
|
||||
the last 5 fine snapshots into a top-5K snapshot
|
||||
4. **`QueryTopN(filter, groupBy, n, window)`** — RLock, sum bucket range, group by dimension,
|
||||
apply filter, heap-select top N
|
||||
5. **`QueryTrend(filter, window)`** — per-bucket count sum, returns one point per bucket
|
||||
6. **`Store.Run(ch <-chan LogRecord)`** — single goroutine: read channel → `Ingest`, minute
|
||||
ticker → `rotate()`
|
||||
7. **Snapshot broadcast** — per-subscriber buffered channel fan-out;
|
||||
`Subscribe() <-chan Snapshot` / `Unsubscribe(ch)`
|
||||
|
||||
## Step 5 — Tailer (`cmd/collector/tailer.go`)
|
||||
- `Tailer` struct: path, fsnotify watcher, output channel
|
||||
- On start: open file, seek to EOF, register fsnotify watch
|
||||
- On `fsnotify.Write`: `bufio.Scanner` reads all new lines, sends `LogRecord` to channel
|
||||
- On `fsnotify.Rename` / `Remove`: drain to EOF, close fd, retry open with 100 ms backoff
|
||||
(up to 5 s), resume from position 0 — no lines lost between drain and reopen
|
||||
- `Tailer.Run(ctx context.Context)` — blocks until context cancelled
|
||||
|
||||
## Step 6 — gRPC server (`cmd/collector/server.go`)
|
||||
- `Server` wraps `*Store`, implements `LogtailServiceServer`
|
||||
- `TopN`: `store.QueryTopN` → marshal to proto response
|
||||
- `Trend`: `store.QueryTrend` → marshal to proto response
|
||||
- `StreamSnapshots`: `store.Subscribe()`, loop sending snapshots until client disconnects
|
||||
or context done, then `store.Unsubscribe(ch)`
|
||||
|
||||
## Step 7 — Main (`cmd/collector/main.go`)
|
||||
Flags:
|
||||
- `--listen` default `:9090`
|
||||
- `--logs` comma-separated log file paths
|
||||
- `--source` name for this collector instance (default: hostname)
|
||||
- `--v4prefix` default `24`
|
||||
- `--v6prefix` default `48`
|
||||
|
||||
Wire-up: create channel → start `store.Run` goroutine → start one `Tailer` goroutine per log
|
||||
path → start gRPC server → `signal.NotifyContext` for clean shutdown on SIGINT/SIGTERM.
|
||||
|
||||
## Step 8 — Smoke test
|
||||
- Generate fake log lines at 10K/s (small Go script or shell one-liner)
|
||||
- Run collector against them
|
||||
- Use `grpcurl` to call `TopN` and verify results
|
||||
- Check `runtime.MemStats` to confirm memory stays well under 1 GB
|
||||
|
||||
---
|
||||
|
||||
## Deferred (not in v0)
|
||||
- `cmd/cli`, `cmd/aggregator`, `cmd/frontend`
|
||||
- ClickHouse export
|
||||
- TLS / auth
|
||||
- Prometheus metrics endpoint
|
||||
|
||||
---
|
||||
|
||||
## Implementation notes
|
||||
|
||||
### Deviation from plan: MultiTailer
|
||||
|
||||
Step 5 planned one `Tailer` struct per file. During implementation this was changed to a single
|
||||
`MultiTailer` with one shared `fsnotify.Watcher`. Reason: one watcher per file creates one inotify
|
||||
instance per file; the kernel default limit is 128 instances per user, which would be hit with
|
||||
100s of log files. The `MultiTailer` uses a single instance and routes events by path via a
|
||||
`map[string]*fileState`.
|
||||
|
||||
### Deviation from plan: IPv6 /48 semantics
|
||||
|
||||
The design doc said "truncate to /48". `/48` keeps the first three full 16-bit groups intact
|
||||
(e.g. `2001:db8:cafe::1` → `2001:db8:cafe::/48`). An early test expected `2001:db8:ca00::/48`
|
||||
(truncating mid-group), which was wrong. The code is correct; the test was fixed.
|
||||
|
||||
---
|
||||
|
||||
## Test results
|
||||
|
||||
Run with: `go test ./cmd/collector/ -v -count=1 -timeout 120s`
|
||||
|
||||
| Test | What it covers |
|
||||
|-----------------------------|----------------------------------------------------|
|
||||
| `TestParseLine` (7 cases) | Tab parsing, query string stripping, bad lines |
|
||||
| `TestTruncateIP` | IPv4 /24 and IPv6 /48 masking |
|
||||
| `TestIngestAndRotate` | Live map → fine ring rotation |
|
||||
| `TestLiveMapCap` | Hard cap at 100 K entries, no panic beyond cap |
|
||||
| `TestQueryTopN` | Ranked results from ring buffer |
|
||||
| `TestQueryTopNWithFilter` | Filter by HTTP status code |
|
||||
| `TestQueryTrend` | Per-bucket counts, oldest-first ordering |
|
||||
| `TestCoarseRingPopulated` | 5 fine ticks → 1 coarse bucket, count aggregation |
|
||||
| `TestSubscribeBroadcast` | Fan-out channel delivery after rotation |
|
||||
| `TestTopKOrdering` | Heap select returns correct top-K descending |
|
||||
| `TestMultiTailerReadsLines` | Live file write → LogRecord received on channel |
|
||||
| `TestMultiTailerMultipleFiles` | 5 files, one watcher, all lines received |
|
||||
| `TestMultiTailerLogRotation`| RENAME → drain → retry → new file tailed correctly |
|
||||
| `TestExpandGlobs` | Glob pattern expands to matching files only |
|
||||
| `TestExpandGlobsDeduplication` | Same file via path + glob deduplicated to one |
|
||||
| `TestMemoryBudget` | Full ring fill stays within 1 GB heap |
|
||||
| `TestGRPCEndToEnd` | Real gRPC server: TopN, filtered TopN, Trend, StreamSnapshots |
|
||||
|
||||
**Total: 17 tests, all passing.**
|
||||
|
||||
---
|
||||
|
||||
## Benchmark results
|
||||
|
||||
Run with: `go test ./cmd/collector/ -bench=. -benchtime=3s`
|
||||
|
||||
Hardware: 12th Gen Intel Core i7-12700T
|
||||
|
||||
| Benchmark | ns/op | throughput | headroom vs 10K/s |
|
||||
|--------------------|-------|----------------|-------------------|
|
||||
| `BenchmarkParseLine` | 418 | ~2.4M lines/s | 240× |
|
||||
| `BenchmarkIngest` | 152 | ~6.5M records/s| 650× |
|
||||
|
||||
Both the parser and the store ingestion goroutine have several hundred times more capacity than
|
||||
the 10 000 lines/second peak requirement. The bottleneck at scale will be fsnotify event delivery
|
||||
and kernel I/O, not the Go code.
|
||||
130
cmd/collector/main.go
Normal file
130
cmd/collector/main.go
Normal file
@@ -0,0 +1,130 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"context"
|
||||
"flag"
|
||||
"log"
|
||||
"net"
|
||||
"os"
|
||||
"os/signal"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
"syscall"
|
||||
|
||||
pb "git.ipng.ch/ipng/nginx-logtail/proto/logtailpb"
|
||||
"google.golang.org/grpc"
|
||||
)
|
||||
|
||||
func main() {
|
||||
listen := flag.String("listen", ":9090", "gRPC listen address")
|
||||
logPaths := flag.String("logs", "", "comma-separated log file paths/globs to tail")
|
||||
logsFile := flag.String("logs-file", "", "file containing one log path/glob per line")
|
||||
source := flag.String("source", hostname(), "name for this collector (default: hostname)")
|
||||
v4prefix := flag.Int("v4prefix", 24, "IPv4 prefix length for client bucketing")
|
||||
v6prefix := flag.Int("v6prefix", 48, "IPv6 prefix length for client bucketing")
|
||||
flag.Parse()
|
||||
|
||||
patterns := collectPatterns(*logPaths, *logsFile)
|
||||
if len(patterns) == 0 {
|
||||
log.Fatal("collector: no log paths specified; use --logs or --logs-file")
|
||||
}
|
||||
|
||||
paths := expandGlobs(patterns)
|
||||
if len(paths) == 0 {
|
||||
log.Fatal("collector: no log files matched the specified patterns")
|
||||
}
|
||||
log.Printf("collector: tailing %d file(s)", len(paths))
|
||||
|
||||
ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
|
||||
defer stop()
|
||||
|
||||
// Shared channel: tailer → store. Buffer absorbs ~20s of peak load.
|
||||
ch := make(chan LogRecord, 200_000)
|
||||
|
||||
store := NewStore(*source)
|
||||
go store.Run(ch)
|
||||
|
||||
tailer := NewMultiTailer(paths, *v4prefix, *v6prefix, ch)
|
||||
go tailer.Run(ctx)
|
||||
|
||||
lis, err := net.Listen("tcp", *listen)
|
||||
if err != nil {
|
||||
log.Fatalf("collector: failed to listen on %s: %v", *listen, err)
|
||||
}
|
||||
grpcServer := grpc.NewServer()
|
||||
pb.RegisterLogtailServiceServer(grpcServer, NewServer(store, *source))
|
||||
|
||||
go func() {
|
||||
log.Printf("collector: gRPC server listening on %s (source=%s)", *listen, *source)
|
||||
if err := grpcServer.Serve(lis); err != nil {
|
||||
log.Printf("collector: gRPC server stopped: %v", err)
|
||||
}
|
||||
}()
|
||||
|
||||
<-ctx.Done()
|
||||
log.Printf("collector: shutting down")
|
||||
grpcServer.GracefulStop()
|
||||
close(ch)
|
||||
}
|
||||
|
||||
// collectPatterns merges patterns from --logs (comma-separated) and --logs-file.
|
||||
func collectPatterns(logPaths, logsFile string) []string {
|
||||
var patterns []string
|
||||
for _, p := range strings.Split(logPaths, ",") {
|
||||
if p = strings.TrimSpace(p); p != "" {
|
||||
patterns = append(patterns, p)
|
||||
}
|
||||
}
|
||||
if logsFile != "" {
|
||||
f, err := os.Open(logsFile)
|
||||
if err != nil {
|
||||
log.Fatalf("collector: cannot open --logs-file %s: %v", logsFile, err)
|
||||
}
|
||||
defer f.Close()
|
||||
sc := bufio.NewScanner(f)
|
||||
for sc.Scan() {
|
||||
if p := strings.TrimSpace(sc.Text()); p != "" && !strings.HasPrefix(p, "#") {
|
||||
patterns = append(patterns, p)
|
||||
}
|
||||
}
|
||||
}
|
||||
return patterns
|
||||
}
|
||||
|
||||
// expandGlobs expands any glob patterns and returns deduplicated concrete paths.
|
||||
func expandGlobs(patterns []string) []string {
|
||||
seen := make(map[string]struct{})
|
||||
var paths []string
|
||||
for _, pat := range patterns {
|
||||
matches, err := filepath.Glob(pat)
|
||||
if err != nil {
|
||||
log.Printf("collector: invalid glob %q: %v", pat, err)
|
||||
continue
|
||||
}
|
||||
if len(matches) == 0 {
|
||||
// Keep the path even if it doesn't exist yet; the tailer will retry.
|
||||
log.Printf("collector: pattern %q matched no files, will watch for creation", pat)
|
||||
if _, ok := seen[pat]; !ok {
|
||||
seen[pat] = struct{}{}
|
||||
paths = append(paths, pat)
|
||||
}
|
||||
continue
|
||||
}
|
||||
for _, m := range matches {
|
||||
if _, ok := seen[m]; !ok {
|
||||
seen[m] = struct{}{}
|
||||
paths = append(paths, m)
|
||||
}
|
||||
}
|
||||
}
|
||||
return paths
|
||||
}
|
||||
|
||||
func hostname() string {
|
||||
h, err := os.Hostname()
|
||||
if err != nil {
|
||||
return "unknown"
|
||||
}
|
||||
return h
|
||||
}
|
||||
71
cmd/collector/parser.go
Normal file
71
cmd/collector/parser.go
Normal file
@@ -0,0 +1,71 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"net"
|
||||
"strings"
|
||||
)
|
||||
|
||||
// LogRecord holds the four dimensions extracted from a single nginx log line.
|
||||
type LogRecord struct {
|
||||
Website string
|
||||
ClientPrefix string
|
||||
URI string
|
||||
Status string
|
||||
}
|
||||
|
||||
// ParseLine parses a tab-separated logtail log line:
|
||||
//
|
||||
// $host \t $remote_addr \t $msec \t $request_method \t $request_uri \t $status \t $body_bytes_sent \t $request_time
|
||||
//
|
||||
// Returns false for lines with fewer than 8 fields.
|
||||
func ParseLine(line string, v4bits, v6bits int) (LogRecord, bool) {
|
||||
// SplitN caps allocations; we need exactly 8 fields.
|
||||
fields := strings.SplitN(line, "\t", 8)
|
||||
if len(fields) < 8 {
|
||||
return LogRecord{}, false
|
||||
}
|
||||
|
||||
uri := fields[4]
|
||||
if i := strings.IndexByte(uri, '?'); i >= 0 {
|
||||
uri = uri[:i]
|
||||
}
|
||||
|
||||
prefix, ok := truncateIP(fields[1], v4bits, v6bits)
|
||||
if !ok {
|
||||
return LogRecord{}, false
|
||||
}
|
||||
|
||||
return LogRecord{
|
||||
Website: fields[0],
|
||||
ClientPrefix: prefix,
|
||||
URI: uri,
|
||||
Status: fields[5],
|
||||
}, true
|
||||
}
|
||||
|
||||
// truncateIP masks addr to the given prefix length depending on IP version.
|
||||
// Returns the CIDR string (e.g. "1.2.3.0/24") and true on success.
|
||||
func truncateIP(addr string, v4bits, v6bits int) (string, bool) {
|
||||
ip := net.ParseIP(addr)
|
||||
if ip == nil {
|
||||
return "", false
|
||||
}
|
||||
|
||||
var bits int
|
||||
if ip.To4() != nil {
|
||||
ip = ip.To4()
|
||||
bits = v4bits
|
||||
} else {
|
||||
ip = ip.To16()
|
||||
bits = v6bits
|
||||
}
|
||||
|
||||
mask := net.CIDRMask(bits, len(ip)*8)
|
||||
masked := make(net.IP, len(ip))
|
||||
for i := range ip {
|
||||
masked[i] = ip[i] & mask[i]
|
||||
}
|
||||
|
||||
return fmt.Sprintf("%s/%d", masked.String(), bits), true
|
||||
}
|
||||
114
cmd/collector/parser_test.go
Normal file
114
cmd/collector/parser_test.go
Normal file
@@ -0,0 +1,114 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestParseLine(t *testing.T) {
|
||||
good := "www.example.com\t1.2.3.4\t1741954800.123\tGET\t/api/v1/search?q=foo&x=1\t200\t1452\t0.043"
|
||||
|
||||
tests := []struct {
|
||||
name string
|
||||
line string
|
||||
wantOK bool
|
||||
want LogRecord
|
||||
}{
|
||||
{
|
||||
name: "normal IPv4 line strips query string",
|
||||
line: good,
|
||||
wantOK: true,
|
||||
want: LogRecord{
|
||||
Website: "www.example.com",
|
||||
ClientPrefix: "1.2.3.0/24",
|
||||
URI: "/api/v1/search",
|
||||
Status: "200",
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "URI with no query string",
|
||||
line: "host\t10.0.0.1\t0\tPOST\t/submit\t201\t0\t0.001",
|
||||
wantOK: true,
|
||||
want: LogRecord{
|
||||
Website: "host",
|
||||
ClientPrefix: "10.0.0.0/24",
|
||||
URI: "/submit",
|
||||
Status: "201",
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "IPv6 address truncated to /48",
|
||||
line: "host\t2001:db8:cafe::1\t0\tGET\t/\t200\t0\t0.001",
|
||||
wantOK: true,
|
||||
want: LogRecord{
|
||||
Website: "host",
|
||||
ClientPrefix: "2001:db8:cafe::/48", // /48 = 3 full 16-bit groups intact
|
||||
URI: "/",
|
||||
Status: "200",
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "too few fields returns false",
|
||||
line: "host\t1.2.3.4\t0\tGET\t/",
|
||||
wantOK: false,
|
||||
},
|
||||
{
|
||||
name: "empty line returns false",
|
||||
line: "",
|
||||
wantOK: false,
|
||||
},
|
||||
{
|
||||
name: "invalid IP returns false",
|
||||
line: "host\tnot-an-ip\t0\tGET\t/\t200\t0\t0.001",
|
||||
wantOK: false,
|
||||
},
|
||||
{
|
||||
name: "status 429",
|
||||
line: "api.example.com\t5.6.7.8\t0\tGET\t/rate-limited\t429\t0\t0.001",
|
||||
wantOK: true,
|
||||
want: LogRecord{
|
||||
Website: "api.example.com",
|
||||
ClientPrefix: "5.6.7.0/24",
|
||||
URI: "/rate-limited",
|
||||
Status: "429",
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
for _, tc := range tests {
|
||||
t.Run(tc.name, func(t *testing.T) {
|
||||
got, ok := ParseLine(tc.line, 24, 48)
|
||||
if ok != tc.wantOK {
|
||||
t.Fatalf("ParseLine ok=%v, want %v", ok, tc.wantOK)
|
||||
}
|
||||
if !tc.wantOK {
|
||||
return
|
||||
}
|
||||
if got != tc.want {
|
||||
t.Errorf("got %+v, want %+v", got, tc.want)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestTruncateIP(t *testing.T) {
|
||||
tests := []struct {
|
||||
addr string
|
||||
want string
|
||||
}{
|
||||
{"1.2.3.4", "1.2.3.0/24"},
|
||||
{"192.168.100.200", "192.168.100.0/24"},
|
||||
{"2001:db8:cafe:babe::1", "2001:db8:cafe::/48"}, // /48 = 3 full groups intact
|
||||
{"::1", "::/48"}, // loopback — first 48 bits are all zero
|
||||
}
|
||||
|
||||
for _, tc := range tests {
|
||||
got, ok := truncateIP(tc.addr, 24, 48)
|
||||
if !ok {
|
||||
t.Errorf("truncateIP(%q) returned not-ok", tc.addr)
|
||||
continue
|
||||
}
|
||||
if got != tc.want {
|
||||
t.Errorf("truncateIP(%q) = %q, want %q", tc.addr, got, tc.want)
|
||||
}
|
||||
}
|
||||
}
|
||||
91
cmd/collector/server.go
Normal file
91
cmd/collector/server.go
Normal file
@@ -0,0 +1,91 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"context"
|
||||
"log"
|
||||
"time"
|
||||
|
||||
pb "git.ipng.ch/ipng/nginx-logtail/proto/logtailpb"
|
||||
"google.golang.org/grpc"
|
||||
"google.golang.org/grpc/codes"
|
||||
"google.golang.org/grpc/status"
|
||||
)
|
||||
|
||||
// Server implements pb.LogtailServiceServer backed by a Store.
|
||||
type Server struct {
|
||||
pb.UnimplementedLogtailServiceServer
|
||||
store *Store
|
||||
source string
|
||||
}
|
||||
|
||||
func NewServer(store *Store, source string) *Server {
|
||||
return &Server{store: store, source: source}
|
||||
}
|
||||
|
||||
func (srv *Server) TopN(_ context.Context, req *pb.TopNRequest) (*pb.TopNResponse, error) {
|
||||
if req == nil {
|
||||
return nil, status.Error(codes.InvalidArgument, "request is nil")
|
||||
}
|
||||
n := int(req.N)
|
||||
if n <= 0 {
|
||||
n = 10
|
||||
}
|
||||
entries := srv.store.QueryTopN(req.Filter, req.GroupBy, n, req.Window)
|
||||
resp := &pb.TopNResponse{Source: srv.source}
|
||||
for _, e := range entries {
|
||||
resp.Entries = append(resp.Entries, &pb.TopNEntry{
|
||||
Label: e.Label,
|
||||
Count: e.Count,
|
||||
})
|
||||
}
|
||||
return resp, nil
|
||||
}
|
||||
|
||||
func (srv *Server) Trend(_ context.Context, req *pb.TrendRequest) (*pb.TrendResponse, error) {
|
||||
if req == nil {
|
||||
return nil, status.Error(codes.InvalidArgument, "request is nil")
|
||||
}
|
||||
points := srv.store.QueryTrend(req.Filter, req.Window)
|
||||
resp := &pb.TrendResponse{Source: srv.source}
|
||||
for _, p := range points {
|
||||
resp.Points = append(resp.Points, &pb.TrendPoint{
|
||||
TimestampUnix: p.Timestamp.Unix(),
|
||||
Count: p.Count,
|
||||
})
|
||||
}
|
||||
return resp, nil
|
||||
}
|
||||
|
||||
func (srv *Server) StreamSnapshots(req *pb.SnapshotRequest, stream grpc.ServerStreamingServer[pb.Snapshot]) error {
|
||||
ch := srv.store.Subscribe()
|
||||
defer srv.store.Unsubscribe(ch)
|
||||
|
||||
log.Printf("server: new StreamSnapshots subscriber from %v", stream.Context().Value("peer"))
|
||||
for {
|
||||
select {
|
||||
case <-stream.Context().Done():
|
||||
log.Printf("server: StreamSnapshots subscriber disconnected")
|
||||
return nil
|
||||
case snap, ok := <-ch:
|
||||
if !ok {
|
||||
return nil
|
||||
}
|
||||
msg := &pb.Snapshot{
|
||||
Source: srv.source,
|
||||
Timestamp: snap.Timestamp.Unix(),
|
||||
}
|
||||
for _, e := range snap.Entries {
|
||||
msg.Entries = append(msg.Entries, &pb.TopNEntry{
|
||||
Label: e.Label,
|
||||
Count: e.Count,
|
||||
})
|
||||
}
|
||||
if err := stream.Send(msg); err != nil {
|
||||
log.Printf("server: send error: %v", err)
|
||||
return err
|
||||
}
|
||||
case <-time.After(30 * time.Second):
|
||||
// unblock select when server is quiet; gRPC keepalives handle the rest
|
||||
}
|
||||
}
|
||||
}
|
||||
205
cmd/collector/smoke_test.go
Normal file
205
cmd/collector/smoke_test.go
Normal file
@@ -0,0 +1,205 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"net"
|
||||
"runtime"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
pb "git.ipng.ch/ipng/nginx-logtail/proto/logtailpb"
|
||||
"google.golang.org/grpc"
|
||||
"google.golang.org/grpc/credentials/insecure"
|
||||
)
|
||||
|
||||
// BenchmarkIngest measures how fast the store can process log records.
|
||||
// At 10K lines/s we need ~10µs budget per record; we should be well under 1µs.
|
||||
func BenchmarkIngest(b *testing.B) {
|
||||
s := NewStore("bench")
|
||||
r := LogRecord{
|
||||
Website: "www.example.com",
|
||||
ClientPrefix: "1.2.3.0/24",
|
||||
URI: "/api/v1/search",
|
||||
Status: "200",
|
||||
}
|
||||
b.ResetTimer()
|
||||
for i := 0; i < b.N; i++ {
|
||||
// Vary the key slightly to avoid the "existing key" fast path
|
||||
r.ClientPrefix = fmt.Sprintf("%d.%d.%d.0/24", i%200, (i/200)%256, (i/51200)%256)
|
||||
s.ingest(r)
|
||||
}
|
||||
}
|
||||
|
||||
// BenchmarkParseLine measures parser throughput.
|
||||
func BenchmarkParseLine(b *testing.B) {
|
||||
line := "www.example.com\t1.2.3.4\t1741954800.123\tGET\t/api/v1/search?q=foo\t200\t1452\t0.043"
|
||||
b.ResetTimer()
|
||||
for i := 0; i < b.N; i++ {
|
||||
ParseLine(line, 24, 48)
|
||||
}
|
||||
}
|
||||
|
||||
// TestMemoryBudget fills the store to capacity and checks RSS stays within budget.
|
||||
func TestMemoryBudget(t *testing.T) {
|
||||
if testing.Short() {
|
||||
t.Skip("skipping memory test in short mode")
|
||||
}
|
||||
|
||||
s := NewStore("memtest")
|
||||
now := time.Now()
|
||||
|
||||
// Fill the live map to cap
|
||||
for i := 0; i < liveMapCap; i++ {
|
||||
s.ingest(LogRecord{
|
||||
Website: fmt.Sprintf("site%d.com", i%1000),
|
||||
ClientPrefix: fmt.Sprintf("%d.%d.%d.0/24", i%256, (i/256)%256, (i/65536)%256),
|
||||
URI: fmt.Sprintf("/path/%d", i%100),
|
||||
Status: fmt.Sprintf("%d", 200+i%4*100),
|
||||
})
|
||||
}
|
||||
|
||||
// Rotate 60 fine buckets to fill the fine ring
|
||||
for i := 0; i < fineRingSize; i++ {
|
||||
for j := 0; j < 1000; j++ {
|
||||
s.ingest(LogRecord{
|
||||
Website: fmt.Sprintf("site%d.com", j%1000),
|
||||
ClientPrefix: fmt.Sprintf("%d.%d.%d.0/24", j%256, j/256, 0),
|
||||
URI: fmt.Sprintf("/p/%d", j%100),
|
||||
Status: "200",
|
||||
})
|
||||
}
|
||||
s.rotate(now.Add(time.Duration(i) * time.Minute))
|
||||
}
|
||||
|
||||
// Rotate enough to fill the coarse ring (288 coarse buckets × 5 fine each)
|
||||
for i := 0; i < coarseRingSize*coarseEvery; i++ {
|
||||
for j := 0; j < 100; j++ {
|
||||
s.ingest(LogRecord{
|
||||
Website: fmt.Sprintf("site%d.com", j%1000),
|
||||
ClientPrefix: fmt.Sprintf("%d.%d.%d.0/24", j%256, j/256, 0),
|
||||
URI: "/",
|
||||
Status: "200",
|
||||
})
|
||||
}
|
||||
s.rotate(now.Add(time.Duration(fineRingSize+i) * time.Minute))
|
||||
}
|
||||
|
||||
var ms runtime.MemStats
|
||||
runtime.GC()
|
||||
runtime.ReadMemStats(&ms)
|
||||
heapMB := ms.HeapInuse / 1024 / 1024
|
||||
t.Logf("heap in-use after full ring fill: %d MB", heapMB)
|
||||
|
||||
const budgetMB = 1024
|
||||
if heapMB > budgetMB {
|
||||
t.Errorf("heap %d MB exceeds budget of %d MB", heapMB, budgetMB)
|
||||
}
|
||||
}
|
||||
|
||||
// TestGRPCEndToEnd spins up a real gRPC server, injects data, and queries it.
|
||||
func TestGRPCEndToEnd(t *testing.T) {
|
||||
store := NewStore("e2e-test")
|
||||
|
||||
// Pre-populate with known data then rotate so it's queryable
|
||||
for i := 0; i < 500; i++ {
|
||||
store.ingest(LogRecord{"busy.com", "1.2.3.0/24", "/api", "200"})
|
||||
}
|
||||
for i := 0; i < 200; i++ {
|
||||
store.ingest(LogRecord{"quiet.com", "5.6.7.0/24", "/", "429"})
|
||||
}
|
||||
store.rotate(time.Now())
|
||||
|
||||
// Start gRPC server on a random free port
|
||||
lis, err := net.Listen("tcp", "127.0.0.1:0")
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
grpcSrv := grpc.NewServer()
|
||||
pb.RegisterLogtailServiceServer(grpcSrv, NewServer(store, "e2e-test"))
|
||||
go grpcSrv.Serve(lis)
|
||||
defer grpcSrv.Stop()
|
||||
|
||||
// Dial it
|
||||
conn, err := grpc.NewClient(lis.Addr().String(),
|
||||
grpc.WithTransportCredentials(insecure.NewCredentials()))
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
defer conn.Close()
|
||||
client := pb.NewLogtailServiceClient(conn)
|
||||
|
||||
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
|
||||
defer cancel()
|
||||
|
||||
// TopN by website
|
||||
resp, err := client.TopN(ctx, &pb.TopNRequest{
|
||||
GroupBy: pb.GroupBy_WEBSITE,
|
||||
N: 10,
|
||||
Window: pb.Window_W1M,
|
||||
})
|
||||
if err != nil {
|
||||
t.Fatalf("TopN error: %v", err)
|
||||
}
|
||||
if len(resp.Entries) != 2 {
|
||||
t.Fatalf("got %d entries, want 2", len(resp.Entries))
|
||||
}
|
||||
if resp.Entries[0].Label != "busy.com" {
|
||||
t.Errorf("top site = %q, want busy.com", resp.Entries[0].Label)
|
||||
}
|
||||
if resp.Entries[0].Count != 500 {
|
||||
t.Errorf("top count = %d, want 500", resp.Entries[0].Count)
|
||||
}
|
||||
t.Logf("TopN result: source=%s entries=%v", resp.Source, resp.Entries)
|
||||
|
||||
// TopN filtered to 429s
|
||||
status429 := int32(429)
|
||||
resp, err = client.TopN(ctx, &pb.TopNRequest{
|
||||
Filter: &pb.Filter{HttpResponse: &status429},
|
||||
GroupBy: pb.GroupBy_WEBSITE,
|
||||
N: 10,
|
||||
Window: pb.Window_W1M,
|
||||
})
|
||||
if err != nil {
|
||||
t.Fatalf("TopN filtered error: %v", err)
|
||||
}
|
||||
if len(resp.Entries) != 1 || resp.Entries[0].Label != "quiet.com" {
|
||||
t.Errorf("filtered result unexpected: %v", resp.Entries)
|
||||
}
|
||||
|
||||
// Trend
|
||||
tresp, err := client.Trend(ctx, &pb.TrendRequest{Window: pb.Window_W5M})
|
||||
if err != nil {
|
||||
t.Fatalf("Trend error: %v", err)
|
||||
}
|
||||
if len(tresp.Points) != 1 {
|
||||
t.Fatalf("got %d trend points, want 1", len(tresp.Points))
|
||||
}
|
||||
if tresp.Points[0].Count != 700 {
|
||||
t.Errorf("trend count = %d, want 700", tresp.Points[0].Count)
|
||||
}
|
||||
t.Logf("Trend result: %v points", len(tresp.Points))
|
||||
|
||||
// StreamSnapshots — inject a new rotation and check we receive it
|
||||
subCh := store.Subscribe()
|
||||
defer store.Unsubscribe(subCh)
|
||||
|
||||
streamCtx, streamCancel := context.WithTimeout(context.Background(), 5*time.Second)
|
||||
defer streamCancel()
|
||||
stream, err := client.StreamSnapshots(streamCtx, &pb.SnapshotRequest{})
|
||||
if err != nil {
|
||||
t.Fatalf("StreamSnapshots error: %v", err)
|
||||
}
|
||||
|
||||
store.ingest(LogRecord{"new.com", "9.9.9.0/24", "/new", "200"})
|
||||
store.rotate(time.Now())
|
||||
|
||||
snap, err := stream.Recv()
|
||||
if err != nil {
|
||||
t.Fatalf("stream Recv error: %v", err)
|
||||
}
|
||||
if snap.Source != "e2e-test" {
|
||||
t.Errorf("snapshot source = %q, want e2e-test", snap.Source)
|
||||
}
|
||||
t.Logf("StreamSnapshots: received snapshot with %d entries from %s", len(snap.Entries), snap.Source)
|
||||
}
|
||||
393
cmd/collector/store.go
Normal file
393
cmd/collector/store.go
Normal file
@@ -0,0 +1,393 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"container/heap"
|
||||
"fmt"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
pb "git.ipng.ch/ipng/nginx-logtail/proto/logtailpb"
|
||||
)
|
||||
|
||||
const (
|
||||
liveMapCap = 100_000 // hard cap on live map entries
|
||||
fineRingSize = 60 // 60 × 1-min buckets → 1 hour
|
||||
coarseRingSize = 288 // 288 × 5-min buckets → 24 hours
|
||||
fineTopK = 50_000 // entries kept per fine snapshot
|
||||
coarseTopK = 5_000 // entries kept per coarse snapshot
|
||||
coarseEvery = 5 // merge every N fine ticks into one coarse bucket
|
||||
)
|
||||
|
||||
// Tuple4 is the four-dimensional key.
|
||||
type Tuple4 struct {
|
||||
Website string
|
||||
Prefix string
|
||||
URI string
|
||||
Status string
|
||||
}
|
||||
|
||||
// Entry is a labelled count used in snapshots and query results.
|
||||
type Entry struct {
|
||||
Label string
|
||||
Count int64
|
||||
}
|
||||
|
||||
// snapshot is one sorted (desc) slice of top-K entries for a time bucket.
|
||||
type snapshot struct {
|
||||
Timestamp time.Time
|
||||
Entries []Entry // sorted descending by Count
|
||||
}
|
||||
|
||||
// Store holds the live map and both ring buffers.
|
||||
type Store struct {
|
||||
source string
|
||||
|
||||
// live map — written only by Run goroutine, no locking needed for writes
|
||||
live map[Tuple4]int64
|
||||
liveLen int // tracked separately to avoid map len() call in hot path
|
||||
|
||||
// ring buffers — protected by mu for reads (Run goroutine writes)
|
||||
mu sync.RWMutex
|
||||
fineRing [fineRingSize]snapshot
|
||||
fineHead int // index of next write slot
|
||||
fineFilled int // how many slots are populated
|
||||
|
||||
coarseRing [coarseRingSize]snapshot
|
||||
coarseHead int
|
||||
coarseFilled int
|
||||
fineTick int // counts fine ticks mod coarseEvery
|
||||
|
||||
// fan-out to StreamSnapshots subscribers
|
||||
subMu sync.Mutex
|
||||
subs map[chan snapshot]struct{}
|
||||
}
|
||||
|
||||
func NewStore(source string) *Store {
|
||||
return &Store{
|
||||
source: source,
|
||||
live: make(map[Tuple4]int64, liveMapCap),
|
||||
subs: make(map[chan snapshot]struct{}),
|
||||
}
|
||||
}
|
||||
|
||||
// Ingest records one log record into the live map.
|
||||
// Must only be called from the Run goroutine.
|
||||
func (s *Store) ingest(r LogRecord) {
|
||||
key := Tuple4{r.Website, r.ClientPrefix, r.URI, r.Status}
|
||||
if _, exists := s.live[key]; !exists {
|
||||
if s.liveLen >= liveMapCap {
|
||||
return // drop new keys when at cap
|
||||
}
|
||||
s.liveLen++
|
||||
}
|
||||
s.live[key]++
|
||||
}
|
||||
|
||||
// rotate snapshots the live map into the fine ring, and every coarseEvery ticks
|
||||
// also merges into the coarse ring. Called once per minute by Run.
|
||||
func (s *Store) rotate(now time.Time) {
|
||||
fine := topK(s.live, fineTopK, now)
|
||||
|
||||
s.mu.Lock()
|
||||
s.fineRing[s.fineHead] = fine
|
||||
s.fineHead = (s.fineHead + 1) % fineRingSize
|
||||
if s.fineFilled < fineRingSize {
|
||||
s.fineFilled++
|
||||
}
|
||||
|
||||
s.fineTick++
|
||||
if s.fineTick >= coarseEvery {
|
||||
s.fineTick = 0
|
||||
coarse := s.mergeFineBuckets(coarseTopK, now)
|
||||
s.coarseRing[s.coarseHead] = coarse
|
||||
s.coarseHead = (s.coarseHead + 1) % coarseRingSize
|
||||
if s.coarseFilled < coarseRingSize {
|
||||
s.coarseFilled++
|
||||
}
|
||||
}
|
||||
s.mu.Unlock()
|
||||
|
||||
// reset live map
|
||||
s.live = make(map[Tuple4]int64, liveMapCap)
|
||||
s.liveLen = 0
|
||||
|
||||
// notify subscribers — must be outside mu to avoid deadlock
|
||||
s.broadcast(fine)
|
||||
}
|
||||
|
||||
// mergeFineBuckets merges the last coarseEvery fine snapshots into one.
|
||||
// Called with mu held.
|
||||
func (s *Store) mergeFineBuckets(k int, now time.Time) snapshot {
|
||||
merged := make(map[string]int64)
|
||||
count := coarseEvery
|
||||
if count > s.fineFilled {
|
||||
count = s.fineFilled
|
||||
}
|
||||
for i := 0; i < count; i++ {
|
||||
idx := (s.fineHead - 1 - i + fineRingSize) % fineRingSize
|
||||
for _, e := range s.fineRing[idx].Entries {
|
||||
merged[e.Label] += e.Count
|
||||
}
|
||||
}
|
||||
entries := topKFromMap(merged, k)
|
||||
return snapshot{Timestamp: now, Entries: entries}
|
||||
}
|
||||
|
||||
// QueryTopN answers a TopN request from the ring buffers.
|
||||
func (s *Store) QueryTopN(filter *pb.Filter, groupBy pb.GroupBy, n int, window pb.Window) []Entry {
|
||||
s.mu.RLock()
|
||||
defer s.mu.RUnlock()
|
||||
|
||||
buckets, count := s.bucketsForWindow(window)
|
||||
|
||||
// Accumulate grouped counts
|
||||
grouped := make(map[string]int64)
|
||||
for i := 0; i < count; i++ {
|
||||
idx := (buckets.head - 1 - i + buckets.size) % buckets.size
|
||||
snap := buckets.ring[idx]
|
||||
for _, e := range snap.Entries {
|
||||
t := labelTuple(e.Label)
|
||||
if !matchesFilter(t, filter) {
|
||||
continue
|
||||
}
|
||||
grouped[dimensionLabel(t, groupBy)] += e.Count
|
||||
}
|
||||
}
|
||||
|
||||
return topKFromMap(grouped, n)
|
||||
}
|
||||
|
||||
// QueryTrend answers a Trend request from the ring buffers.
|
||||
func (s *Store) QueryTrend(filter *pb.Filter, window pb.Window) []trendPoint {
|
||||
s.mu.RLock()
|
||||
defer s.mu.RUnlock()
|
||||
|
||||
buckets, count := s.bucketsForWindow(window)
|
||||
points := make([]trendPoint, count)
|
||||
for i := 0; i < count; i++ {
|
||||
// oldest first
|
||||
idx := (buckets.head - count + i + buckets.size) % buckets.size
|
||||
snap := buckets.ring[idx]
|
||||
var total int64
|
||||
for _, e := range snap.Entries {
|
||||
if matchesFilter(labelTuple(e.Label), filter) {
|
||||
total += e.Count
|
||||
}
|
||||
}
|
||||
points[i] = trendPoint{Timestamp: snap.Timestamp, Count: total}
|
||||
}
|
||||
return points
|
||||
}
|
||||
|
||||
type trendPoint struct {
|
||||
Timestamp time.Time
|
||||
Count int64
|
||||
}
|
||||
|
||||
// ringView is a helper to treat fine and coarse rings uniformly.
|
||||
type ringView struct {
|
||||
ring []snapshot
|
||||
head int
|
||||
size int
|
||||
}
|
||||
|
||||
func (s *Store) bucketsForWindow(window pb.Window) (ringView, int) {
|
||||
switch window {
|
||||
case pb.Window_W1M:
|
||||
return s.fineView(), min(1, s.fineFilled)
|
||||
case pb.Window_W5M:
|
||||
return s.fineView(), min(5, s.fineFilled)
|
||||
case pb.Window_W15M:
|
||||
return s.fineView(), min(15, s.fineFilled)
|
||||
case pb.Window_W60M:
|
||||
return s.fineView(), min(60, s.fineFilled)
|
||||
case pb.Window_W6H:
|
||||
return s.coarseView(), min(72, s.coarseFilled) // 72 × 5-min = 6h
|
||||
case pb.Window_W24H:
|
||||
return s.coarseView(), min(288, s.coarseFilled)
|
||||
default:
|
||||
return s.fineView(), min(5, s.fineFilled)
|
||||
}
|
||||
}
|
||||
|
||||
func (s *Store) fineView() ringView {
|
||||
ring := make([]snapshot, fineRingSize)
|
||||
copy(ring, s.fineRing[:])
|
||||
return ringView{ring: ring, head: s.fineHead, size: fineRingSize}
|
||||
}
|
||||
|
||||
func (s *Store) coarseView() ringView {
|
||||
ring := make([]snapshot, coarseRingSize)
|
||||
copy(ring, s.coarseRing[:])
|
||||
return ringView{ring: ring, head: s.coarseHead, size: coarseRingSize}
|
||||
}
|
||||
|
||||
// Subscribe returns a channel that receives a copy of each fine snapshot
|
||||
// after rotation. Buffer of 4 so a slow subscriber doesn't block rotation.
|
||||
func (s *Store) Subscribe() chan snapshot {
|
||||
ch := make(chan snapshot, 4)
|
||||
s.subMu.Lock()
|
||||
s.subs[ch] = struct{}{}
|
||||
s.subMu.Unlock()
|
||||
return ch
|
||||
}
|
||||
|
||||
// Unsubscribe removes and closes the subscriber channel.
|
||||
func (s *Store) Unsubscribe(ch chan snapshot) {
|
||||
s.subMu.Lock()
|
||||
delete(s.subs, ch)
|
||||
s.subMu.Unlock()
|
||||
close(ch)
|
||||
}
|
||||
|
||||
func (s *Store) broadcast(snap snapshot) {
|
||||
s.subMu.Lock()
|
||||
defer s.subMu.Unlock()
|
||||
for ch := range s.subs {
|
||||
select {
|
||||
case ch <- snap:
|
||||
default:
|
||||
// subscriber is slow; drop rather than block rotation
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Run is the single goroutine that reads from ch, ingests records, and rotates
|
||||
// the ring buffer every minute. Exits when ch is closed.
|
||||
func (s *Store) Run(ch <-chan LogRecord) {
|
||||
ticker := time.NewTicker(time.Minute)
|
||||
defer ticker.Stop()
|
||||
for {
|
||||
select {
|
||||
case r, ok := <-ch:
|
||||
if !ok {
|
||||
return
|
||||
}
|
||||
s.ingest(r)
|
||||
case t := <-ticker.C:
|
||||
s.rotate(t)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// --- heap-based top-K helpers ---
|
||||
|
||||
type entryHeap []Entry
|
||||
|
||||
func (h entryHeap) Len() int { return len(h) }
|
||||
func (h entryHeap) Less(i, j int) bool { return h[i].Count < h[j].Count } // min-heap
|
||||
func (h entryHeap) Swap(i, j int) { h[i], h[j] = h[j], h[i] }
|
||||
func (h *entryHeap) Push(x interface{}) { *h = append(*h, x.(Entry)) }
|
||||
func (h *entryHeap) Pop() interface{} {
|
||||
old := *h
|
||||
n := len(old)
|
||||
x := old[n-1]
|
||||
*h = old[:n-1]
|
||||
return x
|
||||
}
|
||||
|
||||
// topK extracts the top-k entries from a Tuple4 map, labelled as "w|p|u|s".
|
||||
func topK(m map[Tuple4]int64, k int, ts time.Time) snapshot {
|
||||
// Build a string-keyed map for topKFromMap
|
||||
flat := make(map[string]int64, len(m))
|
||||
for t, c := range m {
|
||||
flat[encodeTuple(t)] = c
|
||||
}
|
||||
return snapshot{Timestamp: ts, Entries: topKFromMap(flat, k)}
|
||||
}
|
||||
|
||||
// topKFromMap selects the top-k entries from a string→count map, sorted desc.
|
||||
func topKFromMap(m map[string]int64, k int) []Entry {
|
||||
if k <= 0 {
|
||||
return nil
|
||||
}
|
||||
h := make(entryHeap, 0, k+1)
|
||||
for label, count := range m {
|
||||
heap.Push(&h, Entry{Label: label, Count: count})
|
||||
if h.Len() > k {
|
||||
heap.Pop(&h) // evict smallest
|
||||
}
|
||||
}
|
||||
result := make([]Entry, h.Len())
|
||||
for i := len(result) - 1; i >= 0; i-- {
|
||||
result[i] = heap.Pop(&h).(Entry)
|
||||
}
|
||||
return result
|
||||
}
|
||||
|
||||
// --- label encoding: "website\x00prefix\x00uri\x00status" ---
|
||||
|
||||
func encodeTuple(t Tuple4) string {
|
||||
return t.Website + "\x00" + t.Prefix + "\x00" + t.URI + "\x00" + t.Status
|
||||
}
|
||||
|
||||
func labelTuple(label string) Tuple4 {
|
||||
parts := splitN(label, '\x00', 4)
|
||||
if len(parts) != 4 {
|
||||
return Tuple4{}
|
||||
}
|
||||
return Tuple4{parts[0], parts[1], parts[2], parts[3]}
|
||||
}
|
||||
|
||||
func splitN(s string, sep byte, n int) []string {
|
||||
result := make([]string, 0, n)
|
||||
for len(result) < n-1 {
|
||||
i := indexOf(s, sep)
|
||||
if i < 0 {
|
||||
break
|
||||
}
|
||||
result = append(result, s[:i])
|
||||
s = s[i+1:]
|
||||
}
|
||||
return append(result, s)
|
||||
}
|
||||
|
||||
func indexOf(s string, b byte) int {
|
||||
for i := 0; i < len(s); i++ {
|
||||
if s[i] == b {
|
||||
return i
|
||||
}
|
||||
}
|
||||
return -1
|
||||
}
|
||||
|
||||
func matchesFilter(t Tuple4, f *pb.Filter) bool {
|
||||
if f == nil {
|
||||
return true
|
||||
}
|
||||
if f.Website != nil && t.Website != f.GetWebsite() {
|
||||
return false
|
||||
}
|
||||
if f.ClientPrefix != nil && t.Prefix != f.GetClientPrefix() {
|
||||
return false
|
||||
}
|
||||
if f.HttpRequestUri != nil && t.URI != f.GetHttpRequestUri() {
|
||||
return false
|
||||
}
|
||||
if f.HttpResponse != nil && t.Status != fmt.Sprint(f.GetHttpResponse()) {
|
||||
return false
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
||||
func dimensionLabel(t Tuple4, g pb.GroupBy) string {
|
||||
switch g {
|
||||
case pb.GroupBy_WEBSITE:
|
||||
return t.Website
|
||||
case pb.GroupBy_CLIENT_PREFIX:
|
||||
return t.Prefix
|
||||
case pb.GroupBy_REQUEST_URI:
|
||||
return t.URI
|
||||
case pb.GroupBy_HTTP_RESPONSE:
|
||||
return t.Status
|
||||
default:
|
||||
return t.Website
|
||||
}
|
||||
}
|
||||
|
||||
func min(a, b int) int {
|
||||
if a < b {
|
||||
return a
|
||||
}
|
||||
return b
|
||||
}
|
||||
179
cmd/collector/store_test.go
Normal file
179
cmd/collector/store_test.go
Normal file
@@ -0,0 +1,179 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
pb "git.ipng.ch/ipng/nginx-logtail/proto/logtailpb"
|
||||
)
|
||||
|
||||
func makeStore() *Store {
|
||||
return NewStore("test")
|
||||
}
|
||||
|
||||
func ingestN(s *Store, website, prefix, uri, status string, n int) {
|
||||
for i := 0; i < n; i++ {
|
||||
s.ingest(LogRecord{website, prefix, uri, status})
|
||||
}
|
||||
}
|
||||
|
||||
func TestIngestAndRotate(t *testing.T) {
|
||||
s := makeStore()
|
||||
ingestN(s, "example.com", "1.2.3.0/24", "/", "200", 100)
|
||||
ingestN(s, "other.com", "5.6.7.0/24", "/api", "429", 50)
|
||||
|
||||
s.rotate(time.Now())
|
||||
|
||||
s.mu.RLock()
|
||||
defer s.mu.RUnlock()
|
||||
if s.fineFilled != 1 {
|
||||
t.Fatalf("fineFilled = %d, want 1", s.fineFilled)
|
||||
}
|
||||
snap := s.fineRing[(s.fineHead-1+fineRingSize)%fineRingSize]
|
||||
if len(snap.Entries) != 2 {
|
||||
t.Fatalf("snapshot has %d entries, want 2", len(snap.Entries))
|
||||
}
|
||||
if snap.Entries[0].Count != 100 {
|
||||
t.Errorf("top entry count = %d, want 100", snap.Entries[0].Count)
|
||||
}
|
||||
}
|
||||
|
||||
func TestLiveMapCap(t *testing.T) {
|
||||
s := makeStore()
|
||||
// Ingest liveMapCap+100 distinct keys; only liveMapCap should be tracked
|
||||
for i := 0; i < liveMapCap+100; i++ {
|
||||
s.ingest(LogRecord{
|
||||
Website: fmt.Sprintf("site%d.com", i),
|
||||
ClientPrefix: "1.2.3.0/24",
|
||||
URI: "/",
|
||||
Status: "200",
|
||||
})
|
||||
}
|
||||
if s.liveLen != liveMapCap {
|
||||
t.Errorf("liveLen = %d, want %d", s.liveLen, liveMapCap)
|
||||
}
|
||||
}
|
||||
|
||||
func TestQueryTopN(t *testing.T) {
|
||||
s := makeStore()
|
||||
ingestN(s, "busy.com", "1.0.0.0/24", "/", "200", 300)
|
||||
ingestN(s, "medium.com", "2.0.0.0/24", "/", "200", 100)
|
||||
ingestN(s, "quiet.com", "3.0.0.0/24", "/", "200", 10)
|
||||
s.rotate(time.Now())
|
||||
|
||||
entries := s.QueryTopN(nil, pb.GroupBy_WEBSITE, 2, pb.Window_W1M)
|
||||
if len(entries) != 2 {
|
||||
t.Fatalf("got %d entries, want 2", len(entries))
|
||||
}
|
||||
if entries[0].Label != "busy.com" {
|
||||
t.Errorf("top entry = %q, want busy.com", entries[0].Label)
|
||||
}
|
||||
if entries[0].Count != 300 {
|
||||
t.Errorf("top count = %d, want 300", entries[0].Count)
|
||||
}
|
||||
}
|
||||
|
||||
func TestQueryTopNWithFilter(t *testing.T) {
|
||||
s := makeStore()
|
||||
ingestN(s, "example.com", "1.0.0.0/24", "/api", "429", 200)
|
||||
ingestN(s, "example.com", "2.0.0.0/24", "/api", "200", 500)
|
||||
ingestN(s, "other.com", "3.0.0.0/24", "/", "429", 100)
|
||||
s.rotate(time.Now())
|
||||
|
||||
status429 := int32(429)
|
||||
entries := s.QueryTopN(&pb.Filter{HttpResponse: &status429}, pb.GroupBy_WEBSITE, 10, pb.Window_W1M)
|
||||
if len(entries) != 2 {
|
||||
t.Fatalf("got %d entries, want 2", len(entries))
|
||||
}
|
||||
// example.com has 200 × 429, other.com has 100 × 429
|
||||
if entries[0].Label != "example.com" || entries[0].Count != 200 {
|
||||
t.Errorf("unexpected top: %+v", entries[0])
|
||||
}
|
||||
}
|
||||
|
||||
func TestQueryTrend(t *testing.T) {
|
||||
s := makeStore()
|
||||
now := time.Now()
|
||||
|
||||
// Rotate 3 buckets with different counts
|
||||
ingestN(s, "x.com", "1.0.0.0/24", "/", "200", 10)
|
||||
s.rotate(now.Add(-2 * time.Minute))
|
||||
|
||||
ingestN(s, "x.com", "1.0.0.0/24", "/", "200", 20)
|
||||
s.rotate(now.Add(-1 * time.Minute))
|
||||
|
||||
ingestN(s, "x.com", "1.0.0.0/24", "/", "200", 30)
|
||||
s.rotate(now)
|
||||
|
||||
points := s.QueryTrend(nil, pb.Window_W5M)
|
||||
if len(points) != 3 {
|
||||
t.Fatalf("got %d points, want 3", len(points))
|
||||
}
|
||||
// Points are oldest-first; counts should be 10, 20, 30
|
||||
if points[0].Count != 10 || points[1].Count != 20 || points[2].Count != 30 {
|
||||
t.Errorf("unexpected counts: %v", points)
|
||||
}
|
||||
}
|
||||
|
||||
func TestCoarseRingPopulated(t *testing.T) {
|
||||
s := makeStore()
|
||||
now := time.Now()
|
||||
|
||||
// Rotate coarseEvery fine buckets to trigger one coarse bucket
|
||||
for i := 0; i < coarseEvery; i++ {
|
||||
ingestN(s, "x.com", "1.0.0.0/24", "/", "200", 10)
|
||||
s.rotate(now.Add(time.Duration(i) * time.Minute))
|
||||
}
|
||||
|
||||
s.mu.RLock()
|
||||
defer s.mu.RUnlock()
|
||||
if s.coarseFilled != 1 {
|
||||
t.Fatalf("coarseFilled = %d, want 1", s.coarseFilled)
|
||||
}
|
||||
coarse := s.coarseRing[(s.coarseHead-1+coarseRingSize)%coarseRingSize]
|
||||
if len(coarse.Entries) == 0 {
|
||||
t.Fatal("coarse snapshot is empty")
|
||||
}
|
||||
// 5 fine buckets × 10 counts = 50
|
||||
if coarse.Entries[0].Count != 50 {
|
||||
t.Errorf("coarse count = %d, want 50", coarse.Entries[0].Count)
|
||||
}
|
||||
}
|
||||
|
||||
func TestSubscribeBroadcast(t *testing.T) {
|
||||
s := makeStore()
|
||||
ch := s.Subscribe()
|
||||
|
||||
ingestN(s, "x.com", "1.0.0.0/24", "/", "200", 5)
|
||||
s.rotate(time.Now())
|
||||
|
||||
select {
|
||||
case snap := <-ch:
|
||||
if len(snap.Entries) != 1 {
|
||||
t.Errorf("got %d entries, want 1", len(snap.Entries))
|
||||
}
|
||||
case <-time.After(time.Second):
|
||||
t.Fatal("no snapshot received within 1s")
|
||||
}
|
||||
s.Unsubscribe(ch)
|
||||
}
|
||||
|
||||
func TestTopKOrdering(t *testing.T) {
|
||||
m := map[string]int64{
|
||||
"a": 5,
|
||||
"b": 100,
|
||||
"c": 1,
|
||||
"d": 50,
|
||||
}
|
||||
entries := topKFromMap(m, 3)
|
||||
if len(entries) != 3 {
|
||||
t.Fatalf("got %d entries, want 3", len(entries))
|
||||
}
|
||||
if entries[0].Label != "b" || entries[0].Count != 100 {
|
||||
t.Errorf("wrong top: %+v", entries[0])
|
||||
}
|
||||
if entries[1].Label != "d" || entries[1].Count != 50 {
|
||||
t.Errorf("wrong second: %+v", entries[1])
|
||||
}
|
||||
}
|
||||
179
cmd/collector/tailer.go
Normal file
179
cmd/collector/tailer.go
Normal file
@@ -0,0 +1,179 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"context"
|
||||
"io"
|
||||
"log"
|
||||
"os"
|
||||
"time"
|
||||
|
||||
"github.com/fsnotify/fsnotify"
|
||||
)
|
||||
|
||||
// fileState holds the open file handle and buffered reader for one log file.
|
||||
type fileState struct {
|
||||
f *os.File
|
||||
reader *bufio.Reader
|
||||
}
|
||||
|
||||
// reopenMsg is sent by a retry goroutine back to the Run loop when a rotated
|
||||
// file has reappeared and is ready to be watched again.
|
||||
type reopenMsg struct {
|
||||
path string
|
||||
f *os.File
|
||||
}
|
||||
|
||||
// MultiTailer tails any number of log files using a single shared
|
||||
// fsnotify.Watcher (one inotify instance). This scales to hundreds of files
|
||||
// without hitting the kernel limit on inotify instances per user.
|
||||
type MultiTailer struct {
|
||||
paths []string
|
||||
v4bits int
|
||||
v6bits int
|
||||
ch chan<- LogRecord
|
||||
}
|
||||
|
||||
func NewMultiTailer(paths []string, v4bits, v6bits int, ch chan<- LogRecord) *MultiTailer {
|
||||
return &MultiTailer{paths: paths, v4bits: v4bits, v6bits: v6bits, ch: ch}
|
||||
}
|
||||
|
||||
// Run tails all configured files until ctx is cancelled.
|
||||
// All files share one fsnotify.Watcher. Log rotation is handled per-file:
|
||||
// on RENAME/REMOVE the old fd is drained then a retry goroutine re-opens
|
||||
// the original path and hands it back via a channel.
|
||||
func (mt *MultiTailer) Run(ctx context.Context) {
|
||||
watcher, err := fsnotify.NewWatcher()
|
||||
if err != nil {
|
||||
log.Fatalf("tailer: failed to create watcher: %v", err)
|
||||
}
|
||||
defer watcher.Close()
|
||||
|
||||
files := make(map[string]*fileState, len(mt.paths))
|
||||
reopenCh := make(chan reopenMsg, len(mt.paths))
|
||||
|
||||
// Open all files and seek to EOF.
|
||||
for _, path := range mt.paths {
|
||||
fs, err := openAndSeekEOF(path, watcher)
|
||||
if err != nil {
|
||||
log.Printf("tailer: %s not found, will retry: %v", path, err)
|
||||
go retryOpen(ctx, path, watcher, reopenCh)
|
||||
continue
|
||||
}
|
||||
files[path] = fs
|
||||
log.Printf("tailer: watching %s", path)
|
||||
}
|
||||
|
||||
for {
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
for _, fs := range files {
|
||||
fs.f.Close()
|
||||
}
|
||||
return
|
||||
|
||||
case msg, ok := <-reopenCh:
|
||||
if !ok {
|
||||
return
|
||||
}
|
||||
files[msg.path] = &fileState{f: msg.f, reader: bufio.NewReader(msg.f)}
|
||||
if err := watcher.Add(msg.path); err != nil {
|
||||
log.Printf("tailer: watcher re-add failed for %s: %v", msg.path, err)
|
||||
}
|
||||
log.Printf("tailer: re-opened %s after rotation", msg.path)
|
||||
|
||||
case event, ok := <-watcher.Events:
|
||||
if !ok {
|
||||
return
|
||||
}
|
||||
fs, known := files[event.Name]
|
||||
if !known {
|
||||
continue
|
||||
}
|
||||
if event.Has(fsnotify.Write) {
|
||||
mt.readLines(fs.reader)
|
||||
}
|
||||
if event.Has(fsnotify.Rename) || event.Has(fsnotify.Remove) {
|
||||
// Drain remaining bytes in the old fd before it disappears.
|
||||
mt.readLines(fs.reader)
|
||||
fs.f.Close()
|
||||
delete(files, event.Name)
|
||||
_ = watcher.Remove(event.Name)
|
||||
go retryOpen(ctx, event.Name, watcher, reopenCh)
|
||||
}
|
||||
|
||||
case err, ok := <-watcher.Errors:
|
||||
if !ok {
|
||||
return
|
||||
}
|
||||
log.Printf("tailer: watcher error: %v", err)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// openAndSeekEOF opens path, seeks to EOF, and registers it with watcher.
|
||||
func openAndSeekEOF(path string, watcher *fsnotify.Watcher) (*fileState, error) {
|
||||
f, err := os.Open(path)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if _, err := f.Seek(0, io.SeekEnd); err != nil {
|
||||
f.Close()
|
||||
return nil, err
|
||||
}
|
||||
if err := watcher.Add(path); err != nil {
|
||||
f.Close()
|
||||
return nil, err
|
||||
}
|
||||
return &fileState{f: f, reader: bufio.NewReader(f)}, nil
|
||||
}
|
||||
|
||||
// retryOpen polls until path exists again (after log rotation), then sends
|
||||
// the open file back on ch. Exits if ctx is cancelled.
|
||||
func retryOpen(ctx context.Context, path string, watcher *fsnotify.Watcher, ch chan<- reopenMsg) {
|
||||
backoff := 100 * time.Millisecond
|
||||
const maxBackoff = 5 * time.Second
|
||||
for {
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
return
|
||||
case <-time.After(backoff):
|
||||
}
|
||||
f, err := os.Open(path)
|
||||
if err != nil {
|
||||
backoff *= 2
|
||||
if backoff > maxBackoff {
|
||||
backoff = maxBackoff
|
||||
}
|
||||
continue
|
||||
}
|
||||
ch <- reopenMsg{path: path, f: f}
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
// readLines reads all complete lines currently available and emits records.
|
||||
func (mt *MultiTailer) readLines(reader *bufio.Reader) {
|
||||
for {
|
||||
line, err := reader.ReadString('\n')
|
||||
if len(line) > 0 {
|
||||
l := line
|
||||
if l[len(l)-1] == '\n' {
|
||||
l = l[:len(l)-1]
|
||||
}
|
||||
if len(l) > 0 && l[len(l)-1] == '\r' {
|
||||
l = l[:len(l)-1]
|
||||
}
|
||||
if rec, ok := ParseLine(l, mt.v4bits, mt.v6bits); ok {
|
||||
select {
|
||||
case mt.ch <- rec:
|
||||
default:
|
||||
// Channel full — drop rather than block the event loop.
|
||||
}
|
||||
}
|
||||
}
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
178
cmd/collector/tailer_test.go
Normal file
178
cmd/collector/tailer_test.go
Normal file
@@ -0,0 +1,178 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"testing"
|
||||
"time"
|
||||
)
|
||||
|
||||
func writeLine(t *testing.T, f *os.File, website string) {
|
||||
t.Helper()
|
||||
_, err := fmt.Fprintf(f, "%s\t1.2.3.4\t0\tGET\t/path\t200\t0\t0.001\n", website)
|
||||
if err != nil {
|
||||
t.Fatalf("writeLine: %v", err)
|
||||
}
|
||||
}
|
||||
|
||||
func TestMultiTailerReadsLines(t *testing.T) {
|
||||
dir := t.TempDir()
|
||||
path := filepath.Join(dir, "access.log")
|
||||
|
||||
f, err := os.Create(path)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
defer f.Close()
|
||||
|
||||
ch := make(chan LogRecord, 100)
|
||||
mt := NewMultiTailer([]string{path}, 24, 48, ch)
|
||||
|
||||
ctx, cancel := context.WithCancel(context.Background())
|
||||
defer cancel()
|
||||
go mt.Run(ctx)
|
||||
|
||||
// Give the tailer time to open and seek to EOF.
|
||||
time.Sleep(50 * time.Millisecond)
|
||||
|
||||
writeLine(t, f, "www.example.com")
|
||||
writeLine(t, f, "api.example.com")
|
||||
|
||||
received := collectN(t, ch, 2, 2*time.Second)
|
||||
websites := map[string]bool{}
|
||||
for _, r := range received {
|
||||
websites[r.Website] = true
|
||||
}
|
||||
if !websites["www.example.com"] || !websites["api.example.com"] {
|
||||
t.Errorf("unexpected records: %v", received)
|
||||
}
|
||||
}
|
||||
|
||||
func TestMultiTailerMultipleFiles(t *testing.T) {
|
||||
dir := t.TempDir()
|
||||
const numFiles = 5
|
||||
files := make([]*os.File, numFiles)
|
||||
paths := make([]string, numFiles)
|
||||
for i := range files {
|
||||
p := filepath.Join(dir, fmt.Sprintf("access%d.log", i))
|
||||
paths[i] = p
|
||||
f, err := os.Create(p)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
defer f.Close()
|
||||
files[i] = f
|
||||
}
|
||||
|
||||
ch := make(chan LogRecord, 200)
|
||||
mt := NewMultiTailer(paths, 24, 48, ch)
|
||||
ctx, cancel := context.WithCancel(context.Background())
|
||||
defer cancel()
|
||||
go mt.Run(ctx)
|
||||
|
||||
time.Sleep(50 * time.Millisecond)
|
||||
|
||||
// Write one line per file
|
||||
for i, f := range files {
|
||||
writeLine(t, f, fmt.Sprintf("site%d.com", i))
|
||||
}
|
||||
|
||||
received := collectN(t, ch, numFiles, 2*time.Second)
|
||||
if len(received) != numFiles {
|
||||
t.Errorf("got %d records, want %d", len(received), numFiles)
|
||||
}
|
||||
}
|
||||
|
||||
func TestMultiTailerLogRotation(t *testing.T) {
|
||||
dir := t.TempDir()
|
||||
path := filepath.Join(dir, "access.log")
|
||||
|
||||
f, err := os.Create(path)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
ch := make(chan LogRecord, 100)
|
||||
mt := NewMultiTailer([]string{path}, 24, 48, ch)
|
||||
ctx, cancel := context.WithCancel(context.Background())
|
||||
defer cancel()
|
||||
go mt.Run(ctx)
|
||||
|
||||
time.Sleep(50 * time.Millisecond)
|
||||
|
||||
// Write a line to the original file
|
||||
writeLine(t, f, "before.rotation.com")
|
||||
collectN(t, ch, 1, 2*time.Second)
|
||||
|
||||
// Simulate logrotate: rename the old file, create a new one
|
||||
rotated := filepath.Join(dir, "access.log.1")
|
||||
f.Close()
|
||||
if err := os.Rename(path, rotated); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
// Give the tailer a moment to detect the rename and start retrying
|
||||
time.Sleep(50 * time.Millisecond)
|
||||
|
||||
// Create the new log file (as nginx would after logrotate)
|
||||
newF, err := os.Create(path)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
defer newF.Close()
|
||||
|
||||
// Allow retry goroutine to pick it up
|
||||
time.Sleep(300 * time.Millisecond)
|
||||
|
||||
writeLine(t, newF, "after.rotation.com")
|
||||
received := collectN(t, ch, 1, 3*time.Second)
|
||||
if len(received) == 0 || received[0].Website != "after.rotation.com" {
|
||||
t.Errorf("expected after.rotation.com, got %v", received)
|
||||
}
|
||||
}
|
||||
|
||||
func TestExpandGlobs(t *testing.T) {
|
||||
dir := t.TempDir()
|
||||
for _, name := range []string{"a.log", "b.log", "other.txt"} {
|
||||
f, _ := os.Create(filepath.Join(dir, name))
|
||||
f.Close()
|
||||
}
|
||||
|
||||
pattern := filepath.Join(dir, "*.log")
|
||||
paths := expandGlobs([]string{pattern})
|
||||
if len(paths) != 2 {
|
||||
t.Errorf("glob expanded to %d paths, want 2: %v", len(paths), paths)
|
||||
}
|
||||
}
|
||||
|
||||
func TestExpandGlobsDeduplication(t *testing.T) {
|
||||
dir := t.TempDir()
|
||||
p := filepath.Join(dir, "access.log")
|
||||
f, _ := os.Create(p)
|
||||
f.Close()
|
||||
|
||||
// Same file listed twice via explicit path and glob
|
||||
paths := expandGlobs([]string{p, filepath.Join(dir, "*.log")})
|
||||
if len(paths) != 1 {
|
||||
t.Errorf("expected 1 deduplicated path, got %d: %v", len(paths), paths)
|
||||
}
|
||||
}
|
||||
|
||||
// collectN reads exactly n records from ch within timeout, or returns what it got.
|
||||
func collectN(t *testing.T, ch <-chan LogRecord, n int, timeout time.Duration) []LogRecord {
|
||||
t.Helper()
|
||||
var records []LogRecord
|
||||
deadline := time.After(timeout)
|
||||
for len(records) < n {
|
||||
select {
|
||||
case r := <-ch:
|
||||
records = append(records, r)
|
||||
case <-deadline:
|
||||
t.Logf("collectN: timeout waiting for record %d/%d", len(records)+1, n)
|
||||
return records
|
||||
}
|
||||
}
|
||||
return records
|
||||
}
|
||||
299
docs/USERGUIDE.md
Normal file
299
docs/USERGUIDE.md
Normal file
@@ -0,0 +1,299 @@
|
||||
# nginx-logtail User Guide
|
||||
|
||||
## Overview
|
||||
|
||||
nginx-logtail is a three-component system for real-time traffic analysis across a cluster of nginx
|
||||
machines. It answers questions like:
|
||||
|
||||
- Which client prefix is causing the most HTTP 429s right now?
|
||||
- Which website is getting the most 503s over the last 24 hours?
|
||||
- Which nginx machine is the busiest?
|
||||
- Is there a DDoS in progress, and from where?
|
||||
|
||||
Components:
|
||||
|
||||
| Binary | Runs on | Role |
|
||||
|---------------|------------------|----------------------------------------------------|
|
||||
| `collector` | each nginx host | Tails log files, aggregates in memory, serves gRPC |
|
||||
| `aggregator` | central host | Merges all collectors, serves unified gRPC |
|
||||
| `frontend` | central host | HTTP dashboard with drilldown UI |
|
||||
| `cli` | operator laptop | Shell queries against collector or aggregator |
|
||||
|
||||
---
|
||||
|
||||
## nginx Configuration
|
||||
|
||||
Add the `logtail` log format to your `nginx.conf` and apply it to each `server` block:
|
||||
|
||||
```nginx
|
||||
http {
|
||||
log_format logtail '$host\t$remote_addr\t$msec\t$request_method\t$request_uri\t$status\t$body_bytes_sent\t$request_time';
|
||||
|
||||
server {
|
||||
access_log /var/log/nginx/access.log logtail;
|
||||
# or per-vhost:
|
||||
access_log /var/log/nginx/www.example.com.access.log logtail;
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
The format is tab-separated with fixed field positions. Query strings are stripped from the URI
|
||||
by the collector at ingest time — only the path is tracked.
|
||||
|
||||
---
|
||||
|
||||
## Building
|
||||
|
||||
```bash
|
||||
git clone https://git.ipng.ch/ipng/nginx-logtail
|
||||
cd nginx-logtail
|
||||
go build ./cmd/collector/
|
||||
go build ./cmd/aggregator/
|
||||
go build ./cmd/frontend/
|
||||
go build ./cmd/cli/
|
||||
```
|
||||
|
||||
Requires Go 1.21+. No CGO, no external runtime dependencies.
|
||||
|
||||
---
|
||||
|
||||
## Collector
|
||||
|
||||
Runs on each nginx machine. Tails log files, maintains in-memory top-K counters across six time
|
||||
windows, and exposes a gRPC interface for the aggregator (and directly for the CLI).
|
||||
|
||||
### Flags
|
||||
|
||||
| Flag | Default | Description |
|
||||
|----------------|--------------|-----------------------------------------------------------|
|
||||
| `--listen` | `:9090` | gRPC listen address |
|
||||
| `--logs` | — | Comma-separated log file paths or glob patterns |
|
||||
| `--logs-file` | — | File containing one log path/glob per line |
|
||||
| `--source` | hostname | Name for this collector in query responses |
|
||||
| `--v4prefix` | `24` | IPv4 prefix length for client bucketing (e.g. /24 → /23) |
|
||||
| `--v6prefix` | `48` | IPv6 prefix length for client bucketing |
|
||||
|
||||
At least one of `--logs` or `--logs-file` is required.
|
||||
|
||||
### Examples
|
||||
|
||||
```bash
|
||||
# Single file
|
||||
./collector --logs /var/log/nginx/access.log
|
||||
|
||||
# Multiple files via glob (one inotify instance regardless of count)
|
||||
./collector --logs "/var/log/nginx/*/access.log"
|
||||
|
||||
# Many files via a config file
|
||||
./collector --logs-file /etc/nginx-logtail/logs.conf
|
||||
|
||||
# Custom prefix lengths and listen address
|
||||
./collector \
|
||||
--logs "/var/log/nginx/*.log" \
|
||||
--listen :9091 \
|
||||
--source nginx3.prod \
|
||||
--v4prefix 24 \
|
||||
--v6prefix 48
|
||||
```
|
||||
|
||||
### logs-file format
|
||||
|
||||
One path or glob pattern per line. Lines starting with `#` are ignored.
|
||||
|
||||
```
|
||||
# /etc/nginx-logtail/logs.conf
|
||||
/var/log/nginx/access.log
|
||||
/var/log/nginx/*/access.log
|
||||
/var/log/nginx/api.example.com.access.log
|
||||
```
|
||||
|
||||
### Log rotation
|
||||
|
||||
The collector handles logrotate automatically. On `RENAME`/`REMOVE` events it drains the old file
|
||||
descriptor to EOF (so no lines are lost), then retries opening the original path with backoff until
|
||||
the new file appears. No restart or SIGHUP required.
|
||||
|
||||
### Memory usage
|
||||
|
||||
The collector is designed to stay well under 1 GB:
|
||||
|
||||
| Structure | Max entries | Approx size |
|
||||
|-----------------------------|-------------|-------------|
|
||||
| Live map (current minute) | 100 000 | ~19 MB |
|
||||
| Fine ring (60 × 1-min) | 60 × 50 000 | ~558 MB |
|
||||
| Coarse ring (288 × 5-min) | 288 × 5 000 | ~268 MB |
|
||||
| **Total** | | **~845 MB** |
|
||||
|
||||
When the live map reaches 100 000 distinct 4-tuples, new keys are dropped for the rest of that
|
||||
minute. Existing keys continue to accumulate counts. The cap resets at each minute rotation.
|
||||
|
||||
### Time windows
|
||||
|
||||
Data is served from two tiered ring buffers:
|
||||
|
||||
| Window | Source ring | Resolution |
|
||||
|--------|-------------|------------|
|
||||
| 1 min | fine | 1 minute |
|
||||
| 5 min | fine | 1 minute |
|
||||
| 15 min | fine | 1 minute |
|
||||
| 60 min | fine | 1 minute |
|
||||
| 6 h | coarse | 5 minutes |
|
||||
| 24 h | coarse | 5 minutes |
|
||||
|
||||
History is lost on restart — the collector resumes tailing immediately but all ring buffers start
|
||||
empty. The fine ring fills in 1 hour; the coarse ring fills in 24 hours.
|
||||
|
||||
### Systemd unit example
|
||||
|
||||
```ini
|
||||
[Unit]
|
||||
Description=nginx-logtail collector
|
||||
After=network.target
|
||||
|
||||
[Service]
|
||||
ExecStart=/usr/local/bin/collector \
|
||||
--logs-file /etc/nginx-logtail/logs.conf \
|
||||
--listen :9090 \
|
||||
--source %H
|
||||
Restart=on-failure
|
||||
RestartSec=5
|
||||
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Aggregator
|
||||
|
||||
Runs on a central machine. Connects to all collectors via gRPC streaming, merges their snapshots
|
||||
into a unified view, and serves the same gRPC interface as the collector.
|
||||
|
||||
### Flags
|
||||
|
||||
| Flag | Default | Description |
|
||||
|----------------|-----------|--------------------------------------------------------|
|
||||
| `--listen` | `:9091` | gRPC listen address |
|
||||
| `--collectors` | — | Comma-separated `host:port` addresses of collectors |
|
||||
| `--source` | hostname | Name for this aggregator in query responses |
|
||||
|
||||
### Example
|
||||
|
||||
```bash
|
||||
./aggregator \
|
||||
--collectors nginx1:9090,nginx2:9090,nginx3:9090 \
|
||||
--listen :9091
|
||||
```
|
||||
|
||||
The aggregator tolerates collector failures — if one collector is unreachable, results from the
|
||||
remaining collectors are returned with a warning. It reconnects automatically with backoff.
|
||||
|
||||
---
|
||||
|
||||
## Frontend
|
||||
|
||||
HTTP dashboard. Connects to the aggregator (or directly to a single collector for debugging).
|
||||
|
||||
### Flags
|
||||
|
||||
| Flag | Default | Description |
|
||||
|-------------|--------------|---------------------------------------|
|
||||
| `--listen` | `:8080` | HTTP listen address |
|
||||
| `--target` | `localhost:9091` | gRPC address of aggregator or collector |
|
||||
|
||||
### Usage
|
||||
|
||||
Navigate to `http://your-host:8080`. The dashboard shows a ranked table of the top entries for
|
||||
the selected dimension and time window.
|
||||
|
||||
**Filter controls:**
|
||||
- Click any row to add that value as a filter (e.g. click a website to restrict to it)
|
||||
- The filter breadcrumb at the top shows all active filters; click any token to remove it
|
||||
- Use the window tabs to switch between 1m / 5m / 15m / 60m / 6h / 24h
|
||||
- The page auto-refreshes every 30 seconds
|
||||
|
||||
**Dimension selector:** switch between grouping by Website, Client Prefix, Request URI, or HTTP
|
||||
Status using the tabs at the top of the table.
|
||||
|
||||
**Sparkline:** the trend chart shows total request count per bucket for the selected window and
|
||||
active filters. Useful for spotting sudden spikes.
|
||||
|
||||
**URL sharing:** all filter state is in the URL query string — copy the URL to share a specific
|
||||
view with another operator.
|
||||
|
||||
---
|
||||
|
||||
## CLI
|
||||
|
||||
A shell companion for one-off queries and debugging. Outputs JSON; pipe to `jq` for filtering.
|
||||
|
||||
### Subcommands
|
||||
|
||||
```
|
||||
cli topn --target HOST:PORT [filters] [--by DIM] [--window W] [--n N] [--pretty]
|
||||
cli trend --target HOST:PORT [filters] [--window W] [--pretty]
|
||||
cli stream --target HOST:PORT [--pretty]
|
||||
```
|
||||
|
||||
### Common flags
|
||||
|
||||
| Flag | Default | Description |
|
||||
|---------------|------------------|----------------------------------------------------------|
|
||||
| `--target` | `localhost:9090` | gRPC address of collector or aggregator |
|
||||
| `--by` | `website` | Dimension: `website` `prefix` `uri` `status` |
|
||||
| `--window` | `5m` | Window: `1m` `5m` `15m` `60m` `6h` `24h` |
|
||||
| `--n` | `10` | Number of results |
|
||||
| `--website` | — | Filter to this website |
|
||||
| `--prefix` | — | Filter to this client prefix |
|
||||
| `--uri` | — | Filter to this request URI |
|
||||
| `--status` | — | Filter to this HTTP status code |
|
||||
| `--pretty` | false | Pretty-print JSON |
|
||||
|
||||
### Examples
|
||||
|
||||
```bash
|
||||
# Top 20 client prefixes sending 429s right now
|
||||
cli topn --target agg:9091 --window 1m --by prefix --status 429 --n 20 | jq '.entries[]'
|
||||
|
||||
# Which website has the most 503s in the last 24h?
|
||||
cli topn --target agg:9091 --window 24h --by website --status 503
|
||||
|
||||
# Trend of 429s on one site over 6h — pipe to a quick graph
|
||||
cli trend --target agg:9091 --window 6h --website api.example.com \
|
||||
| jq '[.points[] | {t: .time, n: .count}]'
|
||||
|
||||
# Watch live snapshots from one collector; alert on large entry counts
|
||||
cli stream --target nginx3:9090 | jq -c 'select(.entry_count > 50000)'
|
||||
|
||||
# Query a single collector directly (bypass aggregator)
|
||||
cli topn --target nginx1:9090 --window 5m --by prefix --pretty
|
||||
```
|
||||
|
||||
The `stream` subcommand emits one JSON object per line (NDJSON) and runs until interrupted.
|
||||
Exit code is non-zero on any gRPC error.
|
||||
|
||||
---
|
||||
|
||||
## Operational notes
|
||||
|
||||
**No persistence.** All data is in-memory. A collector restart loses ring buffer history but
|
||||
resumes tailing the log file from the current position immediately.
|
||||
|
||||
**No TLS.** Designed for trusted internal networks. If you need encryption in transit, put a
|
||||
TLS-terminating proxy (e.g. stunnel, nginx stream) in front of the gRPC port.
|
||||
|
||||
**inotify limits.** The collector uses a single inotify instance regardless of how many files it
|
||||
tails. If you tail files across many different directories, check
|
||||
`/proc/sys/fs/inotify/max_user_watches` (default 8192); increase it if needed:
|
||||
```bash
|
||||
echo 65536 | sudo tee /proc/sys/fs/inotify/max_user_watches
|
||||
```
|
||||
|
||||
**High-cardinality attacks.** If a DDoS sends traffic from thousands of unique /24 prefixes with
|
||||
unique URIs, the live map will hit its 100 000 entry cap and drop new keys for the rest of that
|
||||
minute. The top-K entries already tracked continue accumulating counts. This is by design — the
|
||||
cap prevents memory exhaustion under attack conditions.
|
||||
|
||||
**Clock skew.** Trend sparklines are based on the collector's local clock. If collectors have
|
||||
significant clock skew, trend buckets from different collectors may not align precisely in the
|
||||
aggregator. NTP sync is recommended.
|
||||
13
go.mod
Normal file
13
go.mod
Normal file
@@ -0,0 +1,13 @@
|
||||
module git.ipng.ch/ipng/nginx-logtail
|
||||
|
||||
go 1.24.6
|
||||
|
||||
require (
|
||||
github.com/fsnotify/fsnotify v1.9.0 // indirect
|
||||
golang.org/x/net v0.48.0 // indirect
|
||||
golang.org/x/sys v0.39.0 // indirect
|
||||
golang.org/x/text v0.32.0 // indirect
|
||||
google.golang.org/genproto/googleapis/rpc v0.0.0-20251202230838-ff82c1b0f217 // indirect
|
||||
google.golang.org/grpc v1.79.2 // indirect
|
||||
google.golang.org/protobuf v1.36.11 // indirect
|
||||
)
|
||||
14
go.sum
Normal file
14
go.sum
Normal file
@@ -0,0 +1,14 @@
|
||||
github.com/fsnotify/fsnotify v1.9.0 h1:2Ml+OJNzbYCTzsxtv8vKSFD9PbJjmhYF14k/jKC7S9k=
|
||||
github.com/fsnotify/fsnotify v1.9.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0=
|
||||
golang.org/x/net v0.48.0 h1:zyQRTTrjc33Lhh0fBgT/H3oZq9WuvRR5gPC70xpDiQU=
|
||||
golang.org/x/net v0.48.0/go.mod h1:+ndRgGjkh8FGtu1w1FGbEC31if4VrNVMuKTgcAAnQRY=
|
||||
golang.org/x/sys v0.39.0 h1:CvCKL8MeisomCi6qNZ+wbb0DN9E5AATixKsvNtMoMFk=
|
||||
golang.org/x/sys v0.39.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
|
||||
golang.org/x/text v0.32.0 h1:ZD01bjUt1FQ9WJ0ClOL5vxgxOI/sVCNgX1YtKwcY0mU=
|
||||
golang.org/x/text v0.32.0/go.mod h1:o/rUWzghvpD5TXrTIBuJU77MTaN0ljMWE47kxGJQ7jY=
|
||||
google.golang.org/genproto/googleapis/rpc v0.0.0-20251202230838-ff82c1b0f217 h1:gRkg/vSppuSQoDjxyiGfN4Upv/h/DQmIR10ZU8dh4Ww=
|
||||
google.golang.org/genproto/googleapis/rpc v0.0.0-20251202230838-ff82c1b0f217/go.mod h1:7i2o+ce6H/6BluujYR+kqX3GKH+dChPTQU19wjRPiGk=
|
||||
google.golang.org/grpc v1.79.2 h1:fRMD94s2tITpyJGtBBn7MkMseNpOZU8ZxgC3MMBaXRU=
|
||||
google.golang.org/grpc v1.79.2/go.mod h1:KmT0Kjez+0dde/v2j9vzwoAScgEPx/Bw1CYChhHLrHQ=
|
||||
google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE=
|
||||
google.golang.org/protobuf v1.36.11/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
|
||||
82
proto/logtail.proto
Normal file
82
proto/logtail.proto
Normal file
@@ -0,0 +1,82 @@
|
||||
syntax = "proto3";
|
||||
|
||||
package logtail;
|
||||
|
||||
option go_package = "git.ipng.ch/ipng/nginx-logtail/proto/logtailpb";
|
||||
|
||||
// Filter restricts results to entries matching all specified fields.
|
||||
// Unset fields match everything.
|
||||
message Filter {
|
||||
optional string website = 1;
|
||||
optional string client_prefix = 2;
|
||||
optional string http_request_uri = 3;
|
||||
optional int32 http_response = 4;
|
||||
}
|
||||
|
||||
enum GroupBy {
|
||||
WEBSITE = 0;
|
||||
CLIENT_PREFIX = 1;
|
||||
REQUEST_URI = 2;
|
||||
HTTP_RESPONSE = 3;
|
||||
}
|
||||
|
||||
enum Window {
|
||||
W1M = 0; // last 1 minute
|
||||
W5M = 1; // last 5 minutes
|
||||
W15M = 2; // last 15 minutes
|
||||
W60M = 3; // last 60 minutes
|
||||
W6H = 4; // last 6 hours
|
||||
W24H = 5; // last 24 hours
|
||||
}
|
||||
|
||||
// TopN
|
||||
|
||||
message TopNRequest {
|
||||
Filter filter = 1;
|
||||
GroupBy group_by = 2;
|
||||
int32 n = 3;
|
||||
Window window = 4;
|
||||
}
|
||||
|
||||
message TopNEntry {
|
||||
string label = 1;
|
||||
int64 count = 2;
|
||||
}
|
||||
|
||||
message TopNResponse {
|
||||
repeated TopNEntry entries = 1;
|
||||
string source = 2; // hostname of the responding node
|
||||
}
|
||||
|
||||
// Trend — one total count per bucket, for sparklines
|
||||
|
||||
message TrendRequest {
|
||||
Filter filter = 1;
|
||||
Window window = 2;
|
||||
}
|
||||
|
||||
message TrendPoint {
|
||||
int64 timestamp_unix = 1;
|
||||
int64 count = 2;
|
||||
}
|
||||
|
||||
message TrendResponse {
|
||||
repeated TrendPoint points = 1;
|
||||
string source = 2;
|
||||
}
|
||||
|
||||
// StreamSnapshots — pushed by collector after every minute rotation
|
||||
|
||||
message SnapshotRequest {}
|
||||
|
||||
message Snapshot {
|
||||
string source = 1;
|
||||
int64 timestamp = 2;
|
||||
repeated TopNEntry entries = 3; // top-50K for this 1-minute bucket, sorted desc
|
||||
}
|
||||
|
||||
service LogtailService {
|
||||
rpc TopN (TopNRequest) returns (TopNResponse);
|
||||
rpc Trend (TrendRequest) returns (TrendResponse);
|
||||
rpc StreamSnapshots (SnapshotRequest) returns (stream Snapshot);
|
||||
}
|
||||
759
proto/logtailpb/logtail.pb.go
Normal file
759
proto/logtailpb/logtail.pb.go
Normal file
@@ -0,0 +1,759 @@
|
||||
// Code generated by protoc-gen-go. DO NOT EDIT.
|
||||
// versions:
|
||||
// protoc-gen-go v1.36.11
|
||||
// protoc v3.21.12
|
||||
// source: logtail.proto
|
||||
|
||||
package logtailpb
|
||||
|
||||
import (
|
||||
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
|
||||
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
|
||||
reflect "reflect"
|
||||
sync "sync"
|
||||
unsafe "unsafe"
|
||||
)
|
||||
|
||||
const (
|
||||
// Verify that this generated code is sufficiently up-to-date.
|
||||
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
|
||||
// Verify that runtime/protoimpl is sufficiently up-to-date.
|
||||
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
|
||||
)
|
||||
|
||||
type GroupBy int32
|
||||
|
||||
const (
|
||||
GroupBy_WEBSITE GroupBy = 0
|
||||
GroupBy_CLIENT_PREFIX GroupBy = 1
|
||||
GroupBy_REQUEST_URI GroupBy = 2
|
||||
GroupBy_HTTP_RESPONSE GroupBy = 3
|
||||
)
|
||||
|
||||
// Enum value maps for GroupBy.
|
||||
var (
|
||||
GroupBy_name = map[int32]string{
|
||||
0: "WEBSITE",
|
||||
1: "CLIENT_PREFIX",
|
||||
2: "REQUEST_URI",
|
||||
3: "HTTP_RESPONSE",
|
||||
}
|
||||
GroupBy_value = map[string]int32{
|
||||
"WEBSITE": 0,
|
||||
"CLIENT_PREFIX": 1,
|
||||
"REQUEST_URI": 2,
|
||||
"HTTP_RESPONSE": 3,
|
||||
}
|
||||
)
|
||||
|
||||
func (x GroupBy) Enum() *GroupBy {
|
||||
p := new(GroupBy)
|
||||
*p = x
|
||||
return p
|
||||
}
|
||||
|
||||
func (x GroupBy) String() string {
|
||||
return protoimpl.X.EnumStringOf(x.Descriptor(), protoreflect.EnumNumber(x))
|
||||
}
|
||||
|
||||
func (GroupBy) Descriptor() protoreflect.EnumDescriptor {
|
||||
return file_logtail_proto_enumTypes[0].Descriptor()
|
||||
}
|
||||
|
||||
func (GroupBy) Type() protoreflect.EnumType {
|
||||
return &file_logtail_proto_enumTypes[0]
|
||||
}
|
||||
|
||||
func (x GroupBy) Number() protoreflect.EnumNumber {
|
||||
return protoreflect.EnumNumber(x)
|
||||
}
|
||||
|
||||
// Deprecated: Use GroupBy.Descriptor instead.
|
||||
func (GroupBy) EnumDescriptor() ([]byte, []int) {
|
||||
return file_logtail_proto_rawDescGZIP(), []int{0}
|
||||
}
|
||||
|
||||
type Window int32
|
||||
|
||||
const (
|
||||
Window_W1M Window = 0 // last 1 minute
|
||||
Window_W5M Window = 1 // last 5 minutes
|
||||
Window_W15M Window = 2 // last 15 minutes
|
||||
Window_W60M Window = 3 // last 60 minutes
|
||||
Window_W6H Window = 4 // last 6 hours
|
||||
Window_W24H Window = 5 // last 24 hours
|
||||
)
|
||||
|
||||
// Enum value maps for Window.
|
||||
var (
|
||||
Window_name = map[int32]string{
|
||||
0: "W1M",
|
||||
1: "W5M",
|
||||
2: "W15M",
|
||||
3: "W60M",
|
||||
4: "W6H",
|
||||
5: "W24H",
|
||||
}
|
||||
Window_value = map[string]int32{
|
||||
"W1M": 0,
|
||||
"W5M": 1,
|
||||
"W15M": 2,
|
||||
"W60M": 3,
|
||||
"W6H": 4,
|
||||
"W24H": 5,
|
||||
}
|
||||
)
|
||||
|
||||
func (x Window) Enum() *Window {
|
||||
p := new(Window)
|
||||
*p = x
|
||||
return p
|
||||
}
|
||||
|
||||
func (x Window) String() string {
|
||||
return protoimpl.X.EnumStringOf(x.Descriptor(), protoreflect.EnumNumber(x))
|
||||
}
|
||||
|
||||
func (Window) Descriptor() protoreflect.EnumDescriptor {
|
||||
return file_logtail_proto_enumTypes[1].Descriptor()
|
||||
}
|
||||
|
||||
func (Window) Type() protoreflect.EnumType {
|
||||
return &file_logtail_proto_enumTypes[1]
|
||||
}
|
||||
|
||||
func (x Window) Number() protoreflect.EnumNumber {
|
||||
return protoreflect.EnumNumber(x)
|
||||
}
|
||||
|
||||
// Deprecated: Use Window.Descriptor instead.
|
||||
func (Window) EnumDescriptor() ([]byte, []int) {
|
||||
return file_logtail_proto_rawDescGZIP(), []int{1}
|
||||
}
|
||||
|
||||
// Filter restricts results to entries matching all specified fields.
|
||||
// Unset fields match everything.
|
||||
type Filter struct {
|
||||
state protoimpl.MessageState `protogen:"open.v1"`
|
||||
Website *string `protobuf:"bytes,1,opt,name=website,proto3,oneof" json:"website,omitempty"`
|
||||
ClientPrefix *string `protobuf:"bytes,2,opt,name=client_prefix,json=clientPrefix,proto3,oneof" json:"client_prefix,omitempty"`
|
||||
HttpRequestUri *string `protobuf:"bytes,3,opt,name=http_request_uri,json=httpRequestUri,proto3,oneof" json:"http_request_uri,omitempty"`
|
||||
HttpResponse *int32 `protobuf:"varint,4,opt,name=http_response,json=httpResponse,proto3,oneof" json:"http_response,omitempty"`
|
||||
unknownFields protoimpl.UnknownFields
|
||||
sizeCache protoimpl.SizeCache
|
||||
}
|
||||
|
||||
func (x *Filter) Reset() {
|
||||
*x = Filter{}
|
||||
mi := &file_logtail_proto_msgTypes[0]
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
|
||||
func (x *Filter) String() string {
|
||||
return protoimpl.X.MessageStringOf(x)
|
||||
}
|
||||
|
||||
func (*Filter) ProtoMessage() {}
|
||||
|
||||
func (x *Filter) ProtoReflect() protoreflect.Message {
|
||||
mi := &file_logtail_proto_msgTypes[0]
|
||||
if x != nil {
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
if ms.LoadMessageInfo() == nil {
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
return ms
|
||||
}
|
||||
return mi.MessageOf(x)
|
||||
}
|
||||
|
||||
// Deprecated: Use Filter.ProtoReflect.Descriptor instead.
|
||||
func (*Filter) Descriptor() ([]byte, []int) {
|
||||
return file_logtail_proto_rawDescGZIP(), []int{0}
|
||||
}
|
||||
|
||||
func (x *Filter) GetWebsite() string {
|
||||
if x != nil && x.Website != nil {
|
||||
return *x.Website
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (x *Filter) GetClientPrefix() string {
|
||||
if x != nil && x.ClientPrefix != nil {
|
||||
return *x.ClientPrefix
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (x *Filter) GetHttpRequestUri() string {
|
||||
if x != nil && x.HttpRequestUri != nil {
|
||||
return *x.HttpRequestUri
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (x *Filter) GetHttpResponse() int32 {
|
||||
if x != nil && x.HttpResponse != nil {
|
||||
return *x.HttpResponse
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
type TopNRequest struct {
|
||||
state protoimpl.MessageState `protogen:"open.v1"`
|
||||
Filter *Filter `protobuf:"bytes,1,opt,name=filter,proto3" json:"filter,omitempty"`
|
||||
GroupBy GroupBy `protobuf:"varint,2,opt,name=group_by,json=groupBy,proto3,enum=logtail.GroupBy" json:"group_by,omitempty"`
|
||||
N int32 `protobuf:"varint,3,opt,name=n,proto3" json:"n,omitempty"`
|
||||
Window Window `protobuf:"varint,4,opt,name=window,proto3,enum=logtail.Window" json:"window,omitempty"`
|
||||
unknownFields protoimpl.UnknownFields
|
||||
sizeCache protoimpl.SizeCache
|
||||
}
|
||||
|
||||
func (x *TopNRequest) Reset() {
|
||||
*x = TopNRequest{}
|
||||
mi := &file_logtail_proto_msgTypes[1]
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
|
||||
func (x *TopNRequest) String() string {
|
||||
return protoimpl.X.MessageStringOf(x)
|
||||
}
|
||||
|
||||
func (*TopNRequest) ProtoMessage() {}
|
||||
|
||||
func (x *TopNRequest) ProtoReflect() protoreflect.Message {
|
||||
mi := &file_logtail_proto_msgTypes[1]
|
||||
if x != nil {
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
if ms.LoadMessageInfo() == nil {
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
return ms
|
||||
}
|
||||
return mi.MessageOf(x)
|
||||
}
|
||||
|
||||
// Deprecated: Use TopNRequest.ProtoReflect.Descriptor instead.
|
||||
func (*TopNRequest) Descriptor() ([]byte, []int) {
|
||||
return file_logtail_proto_rawDescGZIP(), []int{1}
|
||||
}
|
||||
|
||||
func (x *TopNRequest) GetFilter() *Filter {
|
||||
if x != nil {
|
||||
return x.Filter
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (x *TopNRequest) GetGroupBy() GroupBy {
|
||||
if x != nil {
|
||||
return x.GroupBy
|
||||
}
|
||||
return GroupBy_WEBSITE
|
||||
}
|
||||
|
||||
func (x *TopNRequest) GetN() int32 {
|
||||
if x != nil {
|
||||
return x.N
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func (x *TopNRequest) GetWindow() Window {
|
||||
if x != nil {
|
||||
return x.Window
|
||||
}
|
||||
return Window_W1M
|
||||
}
|
||||
|
||||
type TopNEntry struct {
|
||||
state protoimpl.MessageState `protogen:"open.v1"`
|
||||
Label string `protobuf:"bytes,1,opt,name=label,proto3" json:"label,omitempty"`
|
||||
Count int64 `protobuf:"varint,2,opt,name=count,proto3" json:"count,omitempty"`
|
||||
unknownFields protoimpl.UnknownFields
|
||||
sizeCache protoimpl.SizeCache
|
||||
}
|
||||
|
||||
func (x *TopNEntry) Reset() {
|
||||
*x = TopNEntry{}
|
||||
mi := &file_logtail_proto_msgTypes[2]
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
|
||||
func (x *TopNEntry) String() string {
|
||||
return protoimpl.X.MessageStringOf(x)
|
||||
}
|
||||
|
||||
func (*TopNEntry) ProtoMessage() {}
|
||||
|
||||
func (x *TopNEntry) ProtoReflect() protoreflect.Message {
|
||||
mi := &file_logtail_proto_msgTypes[2]
|
||||
if x != nil {
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
if ms.LoadMessageInfo() == nil {
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
return ms
|
||||
}
|
||||
return mi.MessageOf(x)
|
||||
}
|
||||
|
||||
// Deprecated: Use TopNEntry.ProtoReflect.Descriptor instead.
|
||||
func (*TopNEntry) Descriptor() ([]byte, []int) {
|
||||
return file_logtail_proto_rawDescGZIP(), []int{2}
|
||||
}
|
||||
|
||||
func (x *TopNEntry) GetLabel() string {
|
||||
if x != nil {
|
||||
return x.Label
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (x *TopNEntry) GetCount() int64 {
|
||||
if x != nil {
|
||||
return x.Count
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
type TopNResponse struct {
|
||||
state protoimpl.MessageState `protogen:"open.v1"`
|
||||
Entries []*TopNEntry `protobuf:"bytes,1,rep,name=entries,proto3" json:"entries,omitempty"`
|
||||
Source string `protobuf:"bytes,2,opt,name=source,proto3" json:"source,omitempty"` // hostname of the responding node
|
||||
unknownFields protoimpl.UnknownFields
|
||||
sizeCache protoimpl.SizeCache
|
||||
}
|
||||
|
||||
func (x *TopNResponse) Reset() {
|
||||
*x = TopNResponse{}
|
||||
mi := &file_logtail_proto_msgTypes[3]
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
|
||||
func (x *TopNResponse) String() string {
|
||||
return protoimpl.X.MessageStringOf(x)
|
||||
}
|
||||
|
||||
func (*TopNResponse) ProtoMessage() {}
|
||||
|
||||
func (x *TopNResponse) ProtoReflect() protoreflect.Message {
|
||||
mi := &file_logtail_proto_msgTypes[3]
|
||||
if x != nil {
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
if ms.LoadMessageInfo() == nil {
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
return ms
|
||||
}
|
||||
return mi.MessageOf(x)
|
||||
}
|
||||
|
||||
// Deprecated: Use TopNResponse.ProtoReflect.Descriptor instead.
|
||||
func (*TopNResponse) Descriptor() ([]byte, []int) {
|
||||
return file_logtail_proto_rawDescGZIP(), []int{3}
|
||||
}
|
||||
|
||||
func (x *TopNResponse) GetEntries() []*TopNEntry {
|
||||
if x != nil {
|
||||
return x.Entries
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (x *TopNResponse) GetSource() string {
|
||||
if x != nil {
|
||||
return x.Source
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
type TrendRequest struct {
|
||||
state protoimpl.MessageState `protogen:"open.v1"`
|
||||
Filter *Filter `protobuf:"bytes,1,opt,name=filter,proto3" json:"filter,omitempty"`
|
||||
Window Window `protobuf:"varint,2,opt,name=window,proto3,enum=logtail.Window" json:"window,omitempty"`
|
||||
unknownFields protoimpl.UnknownFields
|
||||
sizeCache protoimpl.SizeCache
|
||||
}
|
||||
|
||||
func (x *TrendRequest) Reset() {
|
||||
*x = TrendRequest{}
|
||||
mi := &file_logtail_proto_msgTypes[4]
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
|
||||
func (x *TrendRequest) String() string {
|
||||
return protoimpl.X.MessageStringOf(x)
|
||||
}
|
||||
|
||||
func (*TrendRequest) ProtoMessage() {}
|
||||
|
||||
func (x *TrendRequest) ProtoReflect() protoreflect.Message {
|
||||
mi := &file_logtail_proto_msgTypes[4]
|
||||
if x != nil {
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
if ms.LoadMessageInfo() == nil {
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
return ms
|
||||
}
|
||||
return mi.MessageOf(x)
|
||||
}
|
||||
|
||||
// Deprecated: Use TrendRequest.ProtoReflect.Descriptor instead.
|
||||
func (*TrendRequest) Descriptor() ([]byte, []int) {
|
||||
return file_logtail_proto_rawDescGZIP(), []int{4}
|
||||
}
|
||||
|
||||
func (x *TrendRequest) GetFilter() *Filter {
|
||||
if x != nil {
|
||||
return x.Filter
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (x *TrendRequest) GetWindow() Window {
|
||||
if x != nil {
|
||||
return x.Window
|
||||
}
|
||||
return Window_W1M
|
||||
}
|
||||
|
||||
type TrendPoint struct {
|
||||
state protoimpl.MessageState `protogen:"open.v1"`
|
||||
TimestampUnix int64 `protobuf:"varint,1,opt,name=timestamp_unix,json=timestampUnix,proto3" json:"timestamp_unix,omitempty"`
|
||||
Count int64 `protobuf:"varint,2,opt,name=count,proto3" json:"count,omitempty"`
|
||||
unknownFields protoimpl.UnknownFields
|
||||
sizeCache protoimpl.SizeCache
|
||||
}
|
||||
|
||||
func (x *TrendPoint) Reset() {
|
||||
*x = TrendPoint{}
|
||||
mi := &file_logtail_proto_msgTypes[5]
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
|
||||
func (x *TrendPoint) String() string {
|
||||
return protoimpl.X.MessageStringOf(x)
|
||||
}
|
||||
|
||||
func (*TrendPoint) ProtoMessage() {}
|
||||
|
||||
func (x *TrendPoint) ProtoReflect() protoreflect.Message {
|
||||
mi := &file_logtail_proto_msgTypes[5]
|
||||
if x != nil {
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
if ms.LoadMessageInfo() == nil {
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
return ms
|
||||
}
|
||||
return mi.MessageOf(x)
|
||||
}
|
||||
|
||||
// Deprecated: Use TrendPoint.ProtoReflect.Descriptor instead.
|
||||
func (*TrendPoint) Descriptor() ([]byte, []int) {
|
||||
return file_logtail_proto_rawDescGZIP(), []int{5}
|
||||
}
|
||||
|
||||
func (x *TrendPoint) GetTimestampUnix() int64 {
|
||||
if x != nil {
|
||||
return x.TimestampUnix
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func (x *TrendPoint) GetCount() int64 {
|
||||
if x != nil {
|
||||
return x.Count
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
type TrendResponse struct {
|
||||
state protoimpl.MessageState `protogen:"open.v1"`
|
||||
Points []*TrendPoint `protobuf:"bytes,1,rep,name=points,proto3" json:"points,omitempty"`
|
||||
Source string `protobuf:"bytes,2,opt,name=source,proto3" json:"source,omitempty"`
|
||||
unknownFields protoimpl.UnknownFields
|
||||
sizeCache protoimpl.SizeCache
|
||||
}
|
||||
|
||||
func (x *TrendResponse) Reset() {
|
||||
*x = TrendResponse{}
|
||||
mi := &file_logtail_proto_msgTypes[6]
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
|
||||
func (x *TrendResponse) String() string {
|
||||
return protoimpl.X.MessageStringOf(x)
|
||||
}
|
||||
|
||||
func (*TrendResponse) ProtoMessage() {}
|
||||
|
||||
func (x *TrendResponse) ProtoReflect() protoreflect.Message {
|
||||
mi := &file_logtail_proto_msgTypes[6]
|
||||
if x != nil {
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
if ms.LoadMessageInfo() == nil {
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
return ms
|
||||
}
|
||||
return mi.MessageOf(x)
|
||||
}
|
||||
|
||||
// Deprecated: Use TrendResponse.ProtoReflect.Descriptor instead.
|
||||
func (*TrendResponse) Descriptor() ([]byte, []int) {
|
||||
return file_logtail_proto_rawDescGZIP(), []int{6}
|
||||
}
|
||||
|
||||
func (x *TrendResponse) GetPoints() []*TrendPoint {
|
||||
if x != nil {
|
||||
return x.Points
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (x *TrendResponse) GetSource() string {
|
||||
if x != nil {
|
||||
return x.Source
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
type SnapshotRequest struct {
|
||||
state protoimpl.MessageState `protogen:"open.v1"`
|
||||
unknownFields protoimpl.UnknownFields
|
||||
sizeCache protoimpl.SizeCache
|
||||
}
|
||||
|
||||
func (x *SnapshotRequest) Reset() {
|
||||
*x = SnapshotRequest{}
|
||||
mi := &file_logtail_proto_msgTypes[7]
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
|
||||
func (x *SnapshotRequest) String() string {
|
||||
return protoimpl.X.MessageStringOf(x)
|
||||
}
|
||||
|
||||
func (*SnapshotRequest) ProtoMessage() {}
|
||||
|
||||
func (x *SnapshotRequest) ProtoReflect() protoreflect.Message {
|
||||
mi := &file_logtail_proto_msgTypes[7]
|
||||
if x != nil {
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
if ms.LoadMessageInfo() == nil {
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
return ms
|
||||
}
|
||||
return mi.MessageOf(x)
|
||||
}
|
||||
|
||||
// Deprecated: Use SnapshotRequest.ProtoReflect.Descriptor instead.
|
||||
func (*SnapshotRequest) Descriptor() ([]byte, []int) {
|
||||
return file_logtail_proto_rawDescGZIP(), []int{7}
|
||||
}
|
||||
|
||||
type Snapshot struct {
|
||||
state protoimpl.MessageState `protogen:"open.v1"`
|
||||
Source string `protobuf:"bytes,1,opt,name=source,proto3" json:"source,omitempty"`
|
||||
Timestamp int64 `protobuf:"varint,2,opt,name=timestamp,proto3" json:"timestamp,omitempty"`
|
||||
Entries []*TopNEntry `protobuf:"bytes,3,rep,name=entries,proto3" json:"entries,omitempty"` // top-50K for this 1-minute bucket, sorted desc
|
||||
unknownFields protoimpl.UnknownFields
|
||||
sizeCache protoimpl.SizeCache
|
||||
}
|
||||
|
||||
func (x *Snapshot) Reset() {
|
||||
*x = Snapshot{}
|
||||
mi := &file_logtail_proto_msgTypes[8]
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
|
||||
func (x *Snapshot) String() string {
|
||||
return protoimpl.X.MessageStringOf(x)
|
||||
}
|
||||
|
||||
func (*Snapshot) ProtoMessage() {}
|
||||
|
||||
func (x *Snapshot) ProtoReflect() protoreflect.Message {
|
||||
mi := &file_logtail_proto_msgTypes[8]
|
||||
if x != nil {
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
if ms.LoadMessageInfo() == nil {
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
return ms
|
||||
}
|
||||
return mi.MessageOf(x)
|
||||
}
|
||||
|
||||
// Deprecated: Use Snapshot.ProtoReflect.Descriptor instead.
|
||||
func (*Snapshot) Descriptor() ([]byte, []int) {
|
||||
return file_logtail_proto_rawDescGZIP(), []int{8}
|
||||
}
|
||||
|
||||
func (x *Snapshot) GetSource() string {
|
||||
if x != nil {
|
||||
return x.Source
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (x *Snapshot) GetTimestamp() int64 {
|
||||
if x != nil {
|
||||
return x.Timestamp
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func (x *Snapshot) GetEntries() []*TopNEntry {
|
||||
if x != nil {
|
||||
return x.Entries
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
var File_logtail_proto protoreflect.FileDescriptor
|
||||
|
||||
const file_logtail_proto_rawDesc = "" +
|
||||
"\n" +
|
||||
"\rlogtail.proto\x12\alogtail\"\xef\x01\n" +
|
||||
"\x06Filter\x12\x1d\n" +
|
||||
"\awebsite\x18\x01 \x01(\tH\x00R\awebsite\x88\x01\x01\x12(\n" +
|
||||
"\rclient_prefix\x18\x02 \x01(\tH\x01R\fclientPrefix\x88\x01\x01\x12-\n" +
|
||||
"\x10http_request_uri\x18\x03 \x01(\tH\x02R\x0ehttpRequestUri\x88\x01\x01\x12(\n" +
|
||||
"\rhttp_response\x18\x04 \x01(\x05H\x03R\fhttpResponse\x88\x01\x01B\n" +
|
||||
"\n" +
|
||||
"\b_websiteB\x10\n" +
|
||||
"\x0e_client_prefixB\x13\n" +
|
||||
"\x11_http_request_uriB\x10\n" +
|
||||
"\x0e_http_response\"\x9a\x01\n" +
|
||||
"\vTopNRequest\x12'\n" +
|
||||
"\x06filter\x18\x01 \x01(\v2\x0f.logtail.FilterR\x06filter\x12+\n" +
|
||||
"\bgroup_by\x18\x02 \x01(\x0e2\x10.logtail.GroupByR\agroupBy\x12\f\n" +
|
||||
"\x01n\x18\x03 \x01(\x05R\x01n\x12'\n" +
|
||||
"\x06window\x18\x04 \x01(\x0e2\x0f.logtail.WindowR\x06window\"7\n" +
|
||||
"\tTopNEntry\x12\x14\n" +
|
||||
"\x05label\x18\x01 \x01(\tR\x05label\x12\x14\n" +
|
||||
"\x05count\x18\x02 \x01(\x03R\x05count\"T\n" +
|
||||
"\fTopNResponse\x12,\n" +
|
||||
"\aentries\x18\x01 \x03(\v2\x12.logtail.TopNEntryR\aentries\x12\x16\n" +
|
||||
"\x06source\x18\x02 \x01(\tR\x06source\"`\n" +
|
||||
"\fTrendRequest\x12'\n" +
|
||||
"\x06filter\x18\x01 \x01(\v2\x0f.logtail.FilterR\x06filter\x12'\n" +
|
||||
"\x06window\x18\x02 \x01(\x0e2\x0f.logtail.WindowR\x06window\"I\n" +
|
||||
"\n" +
|
||||
"TrendPoint\x12%\n" +
|
||||
"\x0etimestamp_unix\x18\x01 \x01(\x03R\rtimestampUnix\x12\x14\n" +
|
||||
"\x05count\x18\x02 \x01(\x03R\x05count\"T\n" +
|
||||
"\rTrendResponse\x12+\n" +
|
||||
"\x06points\x18\x01 \x03(\v2\x13.logtail.TrendPointR\x06points\x12\x16\n" +
|
||||
"\x06source\x18\x02 \x01(\tR\x06source\"\x11\n" +
|
||||
"\x0fSnapshotRequest\"n\n" +
|
||||
"\bSnapshot\x12\x16\n" +
|
||||
"\x06source\x18\x01 \x01(\tR\x06source\x12\x1c\n" +
|
||||
"\ttimestamp\x18\x02 \x01(\x03R\ttimestamp\x12,\n" +
|
||||
"\aentries\x18\x03 \x03(\v2\x12.logtail.TopNEntryR\aentries*M\n" +
|
||||
"\aGroupBy\x12\v\n" +
|
||||
"\aWEBSITE\x10\x00\x12\x11\n" +
|
||||
"\rCLIENT_PREFIX\x10\x01\x12\x0f\n" +
|
||||
"\vREQUEST_URI\x10\x02\x12\x11\n" +
|
||||
"\rHTTP_RESPONSE\x10\x03*A\n" +
|
||||
"\x06Window\x12\a\n" +
|
||||
"\x03W1M\x10\x00\x12\a\n" +
|
||||
"\x03W5M\x10\x01\x12\b\n" +
|
||||
"\x04W15M\x10\x02\x12\b\n" +
|
||||
"\x04W60M\x10\x03\x12\a\n" +
|
||||
"\x03W6H\x10\x04\x12\b\n" +
|
||||
"\x04W24H\x10\x052\xbf\x01\n" +
|
||||
"\x0eLogtailService\x123\n" +
|
||||
"\x04TopN\x12\x14.logtail.TopNRequest\x1a\x15.logtail.TopNResponse\x126\n" +
|
||||
"\x05Trend\x12\x15.logtail.TrendRequest\x1a\x16.logtail.TrendResponse\x12@\n" +
|
||||
"\x0fStreamSnapshots\x12\x18.logtail.SnapshotRequest\x1a\x11.logtail.Snapshot0\x01B0Z.git.ipng.ch/ipng/nginx-logtail/proto/logtailpbb\x06proto3"
|
||||
|
||||
var (
|
||||
file_logtail_proto_rawDescOnce sync.Once
|
||||
file_logtail_proto_rawDescData []byte
|
||||
)
|
||||
|
||||
func file_logtail_proto_rawDescGZIP() []byte {
|
||||
file_logtail_proto_rawDescOnce.Do(func() {
|
||||
file_logtail_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_logtail_proto_rawDesc), len(file_logtail_proto_rawDesc)))
|
||||
})
|
||||
return file_logtail_proto_rawDescData
|
||||
}
|
||||
|
||||
var file_logtail_proto_enumTypes = make([]protoimpl.EnumInfo, 2)
|
||||
var file_logtail_proto_msgTypes = make([]protoimpl.MessageInfo, 9)
|
||||
var file_logtail_proto_goTypes = []any{
|
||||
(GroupBy)(0), // 0: logtail.GroupBy
|
||||
(Window)(0), // 1: logtail.Window
|
||||
(*Filter)(nil), // 2: logtail.Filter
|
||||
(*TopNRequest)(nil), // 3: logtail.TopNRequest
|
||||
(*TopNEntry)(nil), // 4: logtail.TopNEntry
|
||||
(*TopNResponse)(nil), // 5: logtail.TopNResponse
|
||||
(*TrendRequest)(nil), // 6: logtail.TrendRequest
|
||||
(*TrendPoint)(nil), // 7: logtail.TrendPoint
|
||||
(*TrendResponse)(nil), // 8: logtail.TrendResponse
|
||||
(*SnapshotRequest)(nil), // 9: logtail.SnapshotRequest
|
||||
(*Snapshot)(nil), // 10: logtail.Snapshot
|
||||
}
|
||||
var file_logtail_proto_depIdxs = []int32{
|
||||
2, // 0: logtail.TopNRequest.filter:type_name -> logtail.Filter
|
||||
0, // 1: logtail.TopNRequest.group_by:type_name -> logtail.GroupBy
|
||||
1, // 2: logtail.TopNRequest.window:type_name -> logtail.Window
|
||||
4, // 3: logtail.TopNResponse.entries:type_name -> logtail.TopNEntry
|
||||
2, // 4: logtail.TrendRequest.filter:type_name -> logtail.Filter
|
||||
1, // 5: logtail.TrendRequest.window:type_name -> logtail.Window
|
||||
7, // 6: logtail.TrendResponse.points:type_name -> logtail.TrendPoint
|
||||
4, // 7: logtail.Snapshot.entries:type_name -> logtail.TopNEntry
|
||||
3, // 8: logtail.LogtailService.TopN:input_type -> logtail.TopNRequest
|
||||
6, // 9: logtail.LogtailService.Trend:input_type -> logtail.TrendRequest
|
||||
9, // 10: logtail.LogtailService.StreamSnapshots:input_type -> logtail.SnapshotRequest
|
||||
5, // 11: logtail.LogtailService.TopN:output_type -> logtail.TopNResponse
|
||||
8, // 12: logtail.LogtailService.Trend:output_type -> logtail.TrendResponse
|
||||
10, // 13: logtail.LogtailService.StreamSnapshots:output_type -> logtail.Snapshot
|
||||
11, // [11:14] is the sub-list for method output_type
|
||||
8, // [8:11] is the sub-list for method input_type
|
||||
8, // [8:8] is the sub-list for extension type_name
|
||||
8, // [8:8] is the sub-list for extension extendee
|
||||
0, // [0:8] is the sub-list for field type_name
|
||||
}
|
||||
|
||||
func init() { file_logtail_proto_init() }
|
||||
func file_logtail_proto_init() {
|
||||
if File_logtail_proto != nil {
|
||||
return
|
||||
}
|
||||
file_logtail_proto_msgTypes[0].OneofWrappers = []any{}
|
||||
type x struct{}
|
||||
out := protoimpl.TypeBuilder{
|
||||
File: protoimpl.DescBuilder{
|
||||
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
|
||||
RawDescriptor: unsafe.Slice(unsafe.StringData(file_logtail_proto_rawDesc), len(file_logtail_proto_rawDesc)),
|
||||
NumEnums: 2,
|
||||
NumMessages: 9,
|
||||
NumExtensions: 0,
|
||||
NumServices: 1,
|
||||
},
|
||||
GoTypes: file_logtail_proto_goTypes,
|
||||
DependencyIndexes: file_logtail_proto_depIdxs,
|
||||
EnumInfos: file_logtail_proto_enumTypes,
|
||||
MessageInfos: file_logtail_proto_msgTypes,
|
||||
}.Build()
|
||||
File_logtail_proto = out.File
|
||||
file_logtail_proto_goTypes = nil
|
||||
file_logtail_proto_depIdxs = nil
|
||||
}
|
||||
201
proto/logtailpb/logtail_grpc.pb.go
Normal file
201
proto/logtailpb/logtail_grpc.pb.go
Normal file
@@ -0,0 +1,201 @@
|
||||
// Code generated by protoc-gen-go-grpc. DO NOT EDIT.
|
||||
// versions:
|
||||
// - protoc-gen-go-grpc v1.6.1
|
||||
// - protoc v3.21.12
|
||||
// source: logtail.proto
|
||||
|
||||
package logtailpb
|
||||
|
||||
import (
|
||||
context "context"
|
||||
grpc "google.golang.org/grpc"
|
||||
codes "google.golang.org/grpc/codes"
|
||||
status "google.golang.org/grpc/status"
|
||||
)
|
||||
|
||||
// This is a compile-time assertion to ensure that this generated file
|
||||
// is compatible with the grpc package it is being compiled against.
|
||||
// Requires gRPC-Go v1.64.0 or later.
|
||||
const _ = grpc.SupportPackageIsVersion9
|
||||
|
||||
const (
|
||||
LogtailService_TopN_FullMethodName = "/logtail.LogtailService/TopN"
|
||||
LogtailService_Trend_FullMethodName = "/logtail.LogtailService/Trend"
|
||||
LogtailService_StreamSnapshots_FullMethodName = "/logtail.LogtailService/StreamSnapshots"
|
||||
)
|
||||
|
||||
// LogtailServiceClient is the client API for LogtailService service.
|
||||
//
|
||||
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream.
|
||||
type LogtailServiceClient interface {
|
||||
TopN(ctx context.Context, in *TopNRequest, opts ...grpc.CallOption) (*TopNResponse, error)
|
||||
Trend(ctx context.Context, in *TrendRequest, opts ...grpc.CallOption) (*TrendResponse, error)
|
||||
StreamSnapshots(ctx context.Context, in *SnapshotRequest, opts ...grpc.CallOption) (grpc.ServerStreamingClient[Snapshot], error)
|
||||
}
|
||||
|
||||
type logtailServiceClient struct {
|
||||
cc grpc.ClientConnInterface
|
||||
}
|
||||
|
||||
func NewLogtailServiceClient(cc grpc.ClientConnInterface) LogtailServiceClient {
|
||||
return &logtailServiceClient{cc}
|
||||
}
|
||||
|
||||
func (c *logtailServiceClient) TopN(ctx context.Context, in *TopNRequest, opts ...grpc.CallOption) (*TopNResponse, error) {
|
||||
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
|
||||
out := new(TopNResponse)
|
||||
err := c.cc.Invoke(ctx, LogtailService_TopN_FullMethodName, in, out, cOpts...)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return out, nil
|
||||
}
|
||||
|
||||
func (c *logtailServiceClient) Trend(ctx context.Context, in *TrendRequest, opts ...grpc.CallOption) (*TrendResponse, error) {
|
||||
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
|
||||
out := new(TrendResponse)
|
||||
err := c.cc.Invoke(ctx, LogtailService_Trend_FullMethodName, in, out, cOpts...)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return out, nil
|
||||
}
|
||||
|
||||
func (c *logtailServiceClient) StreamSnapshots(ctx context.Context, in *SnapshotRequest, opts ...grpc.CallOption) (grpc.ServerStreamingClient[Snapshot], error) {
|
||||
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
|
||||
stream, err := c.cc.NewStream(ctx, &LogtailService_ServiceDesc.Streams[0], LogtailService_StreamSnapshots_FullMethodName, cOpts...)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
x := &grpc.GenericClientStream[SnapshotRequest, Snapshot]{ClientStream: stream}
|
||||
if err := x.ClientStream.SendMsg(in); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if err := x.ClientStream.CloseSend(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return x, nil
|
||||
}
|
||||
|
||||
// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name.
|
||||
type LogtailService_StreamSnapshotsClient = grpc.ServerStreamingClient[Snapshot]
|
||||
|
||||
// LogtailServiceServer is the server API for LogtailService service.
|
||||
// All implementations must embed UnimplementedLogtailServiceServer
|
||||
// for forward compatibility.
|
||||
type LogtailServiceServer interface {
|
||||
TopN(context.Context, *TopNRequest) (*TopNResponse, error)
|
||||
Trend(context.Context, *TrendRequest) (*TrendResponse, error)
|
||||
StreamSnapshots(*SnapshotRequest, grpc.ServerStreamingServer[Snapshot]) error
|
||||
mustEmbedUnimplementedLogtailServiceServer()
|
||||
}
|
||||
|
||||
// UnimplementedLogtailServiceServer must be embedded to have
|
||||
// forward compatible implementations.
|
||||
//
|
||||
// NOTE: this should be embedded by value instead of pointer to avoid a nil
|
||||
// pointer dereference when methods are called.
|
||||
type UnimplementedLogtailServiceServer struct{}
|
||||
|
||||
func (UnimplementedLogtailServiceServer) TopN(context.Context, *TopNRequest) (*TopNResponse, error) {
|
||||
return nil, status.Error(codes.Unimplemented, "method TopN not implemented")
|
||||
}
|
||||
func (UnimplementedLogtailServiceServer) Trend(context.Context, *TrendRequest) (*TrendResponse, error) {
|
||||
return nil, status.Error(codes.Unimplemented, "method Trend not implemented")
|
||||
}
|
||||
func (UnimplementedLogtailServiceServer) StreamSnapshots(*SnapshotRequest, grpc.ServerStreamingServer[Snapshot]) error {
|
||||
return status.Error(codes.Unimplemented, "method StreamSnapshots not implemented")
|
||||
}
|
||||
func (UnimplementedLogtailServiceServer) mustEmbedUnimplementedLogtailServiceServer() {}
|
||||
func (UnimplementedLogtailServiceServer) testEmbeddedByValue() {}
|
||||
|
||||
// UnsafeLogtailServiceServer may be embedded to opt out of forward compatibility for this service.
|
||||
// Use of this interface is not recommended, as added methods to LogtailServiceServer will
|
||||
// result in compilation errors.
|
||||
type UnsafeLogtailServiceServer interface {
|
||||
mustEmbedUnimplementedLogtailServiceServer()
|
||||
}
|
||||
|
||||
func RegisterLogtailServiceServer(s grpc.ServiceRegistrar, srv LogtailServiceServer) {
|
||||
// If the following call panics, it indicates UnimplementedLogtailServiceServer was
|
||||
// embedded by pointer and is nil. This will cause panics if an
|
||||
// unimplemented method is ever invoked, so we test this at initialization
|
||||
// time to prevent it from happening at runtime later due to I/O.
|
||||
if t, ok := srv.(interface{ testEmbeddedByValue() }); ok {
|
||||
t.testEmbeddedByValue()
|
||||
}
|
||||
s.RegisterService(&LogtailService_ServiceDesc, srv)
|
||||
}
|
||||
|
||||
func _LogtailService_TopN_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
|
||||
in := new(TopNRequest)
|
||||
if err := dec(in); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if interceptor == nil {
|
||||
return srv.(LogtailServiceServer).TopN(ctx, in)
|
||||
}
|
||||
info := &grpc.UnaryServerInfo{
|
||||
Server: srv,
|
||||
FullMethod: LogtailService_TopN_FullMethodName,
|
||||
}
|
||||
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
|
||||
return srv.(LogtailServiceServer).TopN(ctx, req.(*TopNRequest))
|
||||
}
|
||||
return interceptor(ctx, in, info, handler)
|
||||
}
|
||||
|
||||
func _LogtailService_Trend_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
|
||||
in := new(TrendRequest)
|
||||
if err := dec(in); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if interceptor == nil {
|
||||
return srv.(LogtailServiceServer).Trend(ctx, in)
|
||||
}
|
||||
info := &grpc.UnaryServerInfo{
|
||||
Server: srv,
|
||||
FullMethod: LogtailService_Trend_FullMethodName,
|
||||
}
|
||||
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
|
||||
return srv.(LogtailServiceServer).Trend(ctx, req.(*TrendRequest))
|
||||
}
|
||||
return interceptor(ctx, in, info, handler)
|
||||
}
|
||||
|
||||
func _LogtailService_StreamSnapshots_Handler(srv interface{}, stream grpc.ServerStream) error {
|
||||
m := new(SnapshotRequest)
|
||||
if err := stream.RecvMsg(m); err != nil {
|
||||
return err
|
||||
}
|
||||
return srv.(LogtailServiceServer).StreamSnapshots(m, &grpc.GenericServerStream[SnapshotRequest, Snapshot]{ServerStream: stream})
|
||||
}
|
||||
|
||||
// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name.
|
||||
type LogtailService_StreamSnapshotsServer = grpc.ServerStreamingServer[Snapshot]
|
||||
|
||||
// LogtailService_ServiceDesc is the grpc.ServiceDesc for LogtailService service.
|
||||
// It's only intended for direct use with grpc.RegisterService,
|
||||
// and not to be introspected or modified (even as a copy)
|
||||
var LogtailService_ServiceDesc = grpc.ServiceDesc{
|
||||
ServiceName: "logtail.LogtailService",
|
||||
HandlerType: (*LogtailServiceServer)(nil),
|
||||
Methods: []grpc.MethodDesc{
|
||||
{
|
||||
MethodName: "TopN",
|
||||
Handler: _LogtailService_TopN_Handler,
|
||||
},
|
||||
{
|
||||
MethodName: "Trend",
|
||||
Handler: _LogtailService_Trend_Handler,
|
||||
},
|
||||
},
|
||||
Streams: []grpc.StreamDesc{
|
||||
{
|
||||
StreamName: "StreamSnapshots",
|
||||
Handler: _LogtailService_StreamSnapshots_Handler,
|
||||
ServerStreams: true,
|
||||
},
|
||||
},
|
||||
Metadata: "logtail.proto",
|
||||
}
|
||||
Reference in New Issue
Block a user