Compare commits

...

2 Commits

| Author | SHA1 | Message | Date |
|--------|------|---------|------|
| Pim van Pelt | `c7f8455188` | go fmt | 2026-03-24 02:30:18 +01:00 |
| Pim van Pelt | `30c8c40157` | Add ASN to logtail, collector, aggregator, frontend and CLI | 2026-03-24 02:28:29 +01:00 |
23 changed files with 594 additions and 182 deletions

View File

@@ -48,7 +48,7 @@ nginx-logtail/
 │ └── logtail_grpc.pb.go # generated: service stubs
 ├── internal/
 │ └── store/
-│ └── store.go # shared types: Tuple5, Entry, Snapshot, ring helpers
+│ └── store.go # shared types: Tuple6, Entry, Snapshot, ring helpers
 └── cmd/
 ├── collector/
 │ ├── main.go
@@ -86,7 +86,7 @@ nginx-logtail/
 ## Data Model
-The core unit is a **count keyed by five dimensions**:
+The core unit is a **count keyed by six dimensions**:
 | Field | Description | Example |
 |-------------------|------------------------------------------------------|-------------------|
@@ -95,6 +95,7 @@ The core unit is a **count keyed by five dimensions**:
 | `http_request_uri`| `$request_uri` path only — query string stripped | `/api/v1/search` |
 | `http_response` | HTTP status code | `429` |
 | `is_tor` | whether the client IP is a TOR exit node | `1` |
+| `asn` | client AS number (MaxMind GeoIP2, 32-bit int) | `8298` |
 ## Time Windows & Tiered Ring Buffers
@@ -121,8 +122,8 @@ Every 5 minutes: merge last 5 fine snapshots → top-5K → append to coarse rin
 ## Memory Budget (Collector, target ≤ 1 GB)
-Entry size: ~30 B website + ~15 B prefix + ~50 B URI + 3 B status + 1 B is_tor + 8 B count + ~80 B Go map
-overhead ≈ **~187 bytes per entry**.
+Entry size: ~30 B website + ~15 B prefix + ~50 B URI + 3 B status + 1 B is_tor + 4 B asn + 8 B count + ~80 B Go map
+overhead ≈ **~191 bytes per entry**.
 | Structure | Entries | Size |
 |-------------------------|-------------|-------------|
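The revised per-entry estimate can be checked with a line of arithmetic; the component sizes below are the document's own approximations, not measured values:

```go
package main

import "fmt"

// entrySize sums the per-component estimates from the memory budget above:
// website, prefix, URI, status, is_tor, asn (new, int32), count, map overhead.
func entrySize() int {
	sizes := []int{30, 15, 50, 3, 1, 4, 8, 80}
	total := 0
	for _, s := range sizes {
		total += s
	}
	return total
}

func main() {
	fmt.Println(entrySize()) // 191
}
```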
@@ -174,9 +175,11 @@ message Filter {
 optional string website_regex = 6; // RE2 regex against website
 optional string uri_regex = 7; // RE2 regex against http_request_uri
 TorFilter tor = 8; // TOR_ANY (default) / TOR_YES / TOR_NO
+optional int32 asn_number = 9; // filter by client ASN
+StatusOp asn_op = 10; // comparison operator for asn_number
 }
-enum GroupBy { WEBSITE = 0; CLIENT_PREFIX = 1; REQUEST_URI = 2; HTTP_RESPONSE = 3; }
+enum GroupBy { WEBSITE = 0; CLIENT_PREFIX = 1; REQUEST_URI = 2; HTTP_RESPONSE = 3; ASN_NUMBER = 4; }
 enum Window { W1M = 0; W5M = 1; W15M = 2; W60M = 3; W6H = 4; W24H = 5; }
 message TopNRequest { Filter filter = 1; GroupBy group_by = 2; int32 n = 3; Window window = 4; }
@@ -230,7 +233,7 @@ service LogtailService {
 - Parses the fixed **logtail** nginx log format — tab-separated, fixed field order, no quoting:
 ```nginx
-log_format logtail '$host\t$remote_addr\t$msec\t$request_method\t$request_uri\t$status\t$body_bytes_sent\t$request_time\t$is_tor';
+log_format logtail '$host\t$remote_addr\t$msec\t$request_method\t$request_uri\t$status\t$body_bytes_sent\t$request_time\t$is_tor\t$asn';
 ```
 | # | Field | Used for |
@@ -244,18 +247,21 @@ service LogtailService {
 | 6 | `$body_bytes_sent`| (discarded) |
 | 7 | `$request_time` | (discarded) |
 | 8 | `$is_tor` | is_tor |
+| 9 | `$asn` | asn |
-- `strings.SplitN(line, "\t", 9)` — ~50 ns/line. No regex.
+- `strings.SplitN(line, "\t", 10)` — ~50 ns/line. No regex.
 - `$request_uri`: query string discarded at first `?`.
 - `$remote_addr`: truncated to /24 (IPv4) or /48 (IPv6); prefix lengths configurable via flags.
 - `$is_tor`: `1` if the client IP is a TOR exit node, `0` otherwise. Field is optional — lines
 with exactly 8 fields (old format) are accepted and default to `is_tor=false`.
+- `$asn`: client AS number as a decimal integer (from MaxMind GeoIP2). Field is optional —
+lines without it default to `asn=0`.
 - Lines with fewer than 8 fields are silently skipped.
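The optional trailing-field handling described above can be sketched as follows. This is a simplified stand-in, mirroring the `ParseLine` logic elsewhere in this diff but omitting prefix truncation and query-string stripping:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseTail extracts the two optional trailing fields of a logtail line:
// 8 fields is the oldest format, a 9th adds $is_tor, a 10th adds $asn.
// Lines with fewer than 8 fields are rejected.
func parseTail(line string) (isTor bool, asn int32, ok bool) {
	fields := strings.SplitN(line, "\t", 10)
	if len(fields) < 8 {
		return false, 0, false // too few fields: caller skips the line
	}
	isTor = len(fields) >= 9 && fields[8] == "1"
	if len(fields) == 10 {
		// Invalid ASN values silently default to 0.
		if n, err := strconv.ParseInt(fields[9], 10, 32); err == nil {
			asn = int32(n)
		}
	}
	return isTor, asn, true
}

func main() {
	_, asn, _ := parseTail("h\t1.2.3.4\t0\tGET\t/\t200\t0\t0.001\t0\t8298")
	fmt.Println(asn) // 8298
}
```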
 ### store.go
 - **Single aggregator goroutine** reads from the channel and updates the live map — no locking on
 the hot path. At 10 K lines/s the goroutine uses <1% CPU.
-- Live map: `map[Tuple5]int64`, hard-capped at 100 K entries (new keys dropped when full).
+- Live map: `map[Tuple6]int64`, hard-capped at 100 K entries (new keys dropped when full).
 - **Minute ticker**: heap-selects top-50K entries, writes snapshot to fine ring, resets live map.
 - Every 5 fine ticks: merge last 5 fine snapshots → top-5K → write to coarse ring.
 - **TopN query**: RLock ring, sum bucket range, apply filter, group by dimension, heap-select top N.
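The "heap-select top N" step used by both the minute ticker and the TopN query can be sketched with a size-bounded min-heap; this illustrative version selects bare counts, whereas the real code selects full entries:

```go
package main

import (
	"container/heap"
	"fmt"
)

// intHeap is a min-heap of counts; its root is the smallest kept value.
type intHeap []int64

func (h intHeap) Len() int           { return len(h) }
func (h intHeap) Less(i, j int) bool { return h[i] < h[j] }
func (h intHeap) Swap(i, j int)      { h[i], h[j] = h[j], h[i] }
func (h *intHeap) Push(x any)        { *h = append(*h, x.(int64)) }
func (h *intHeap) Pop() any {
	old := *h
	n := len(old)
	x := old[n-1]
	*h = old[:n-1]
	return x
}

// topN keeps the n largest counts in O(len(counts) · log n): values smaller
// than the heap root are skipped, larger ones replace it.
func topN(counts []int64, n int) []int64 {
	h := &intHeap{}
	for _, c := range counts {
		if h.Len() < n {
			heap.Push(h, c)
		} else if c > (*h)[0] {
			(*h)[0] = c
			heap.Fix(h, 0)
		}
	}
	// Drain ascending, fill the result from the back for descending order.
	out := make([]int64, h.Len())
	for i := len(out) - 1; i >= 0; i-- {
		out[i] = heap.Pop(h).(int64)
	}
	return out
}

func main() {
	fmt.Println(topN([]int64{10, 500, 30, 200, 50}, 3)) // [500 200 50]
}
```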
@@ -307,14 +313,17 @@ service LogtailService {
 ### handler.go
 - All filter state in the **URL query string**: `w` (window), `by` (group_by), `f_website`,
-`f_prefix`, `f_uri`, `f_status`, `f_website_re`, `f_uri_re`, `f_is_tor`, `n`, `target`. No server-side
-session — URLs are shareable and bookmarkable; multiple operators see independent views.
+`f_prefix`, `f_uri`, `f_status`, `f_website_re`, `f_uri_re`, `f_is_tor`, `f_asn`, `n`, `target`. No
+server-side session — URLs are shareable and bookmarkable; multiple operators see independent views.
 - **Filter expression box**: a `q=` parameter carries a mini filter language
 (`status>=400 AND website~=gouda.* AND uri~=^/api/`). On submission the handler parses it
 via `ParseFilterExpr` and redirects to the canonical URL with individual `f_*` params; `q=`
 never appears in the final URL. Parse errors re-render the current page with an inline message.
 - **Status expressions**: `f_status` accepts `200`, `!=200`, `>=400`, `<500`, etc. — parsed by
 `store.ParseStatusExpr` into `(value, StatusOp)` for the filter protobuf.
+- **ASN expressions**: `f_asn` accepts the same expression syntax (`12345`, `!=65000`, `>=1000`,
+`<64512`, etc.) — also parsed by `store.ParseStatusExpr`, stored as `(asn_number, AsnOp)` in the
+filter protobuf.
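The expression grammar shared by `f_status` and `f_asn` can be sketched as below. The real `store.ParseStatusExpr` returns a protobuf `StatusOp` enum and is not shown in this diff; this illustrative stand-in returns the operator as a string:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseNumExpr parses "200", "!=200", ">=400", ">400", "<=499", "<500" into
// a value and a comparison operator. Two-character operators must be tried
// before their one-character prefixes.
func parseNumExpr(s string) (val int32, op string, ok bool) {
	op = "=="
	for _, p := range []string{"!=", ">=", "<=", ">", "<"} {
		if strings.HasPrefix(s, p) {
			op, s = p, s[len(p):]
			break
		}
	}
	n, err := strconv.ParseInt(strings.TrimSpace(s), 10, 32)
	if err != nil {
		return 0, "", false
	}
	return int32(n), op, true
}

func main() {
	v, op, _ := parseNumExpr(">=400")
	fmt.Println(v, op) // 400 >=
}
```

Reusing one grammar for both fields is what lets the ASN filter ship without a second enum or parser, as the design-decisions table notes.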
 - **Regex filters**: `f_website_re` and `f_uri_re` hold RE2 patterns; compiled once per request
 into `store.CompiledFilter` before the query-loop iteration. Invalid regexes match nothing.
 - `TopN`, `Trend`, and `ListTargets` RPCs issued **concurrently** (all with a 5 s deadline); page
@@ -325,7 +334,7 @@ service LogtailService {
 default aggregator. Picker is hidden when `ListTargets` returns ≤0 collectors (direct collector
 mode).
 - **Drilldown**: clicking a table row adds the current dimension's filter and advances `by` through
-`website → prefix → uri → status → website` (cycles).
+`website → prefix → uri → status → asn → website` (cycles).
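The extended drilldown cycle can be sketched as a lookup into an ordered slice; a hypothetical helper, not the committed implementation:

```go
package main

import "fmt"

// nextGroupBy advances the drilldown dimension:
// website → prefix → uri → status → asn → website (wraps around).
func nextGroupBy(by string) string {
	cycle := []string{"website", "prefix", "uri", "status", "asn"}
	for i, s := range cycle {
		if s == by {
			return cycle[(i+1)%len(cycle)]
		}
	}
	return "website" // unknown values restart the cycle
}

func main() {
	fmt.Println(nextGroupBy("status")) // asn
	fmt.Println(nextGroupBy("asn"))    // website
}
```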
 - **`raw=1`**: returns the TopN result as JSON — same URL, no CLI needed for scripting.
 - **`target=` override**: per-request gRPC endpoint override for comparing sources.
 - Error pages render at HTTP 502 with the window/group-by tabs still functional.
@@ -367,6 +376,7 @@ logtail-cli targets [flags] list targets known to the queried endpoint
 | `--website-re`| — | Filter: RE2 regex against website |
 | `--uri-re` | — | Filter: RE2 regex against request URI |
 | `--is-tor` | — | Filter: TOR traffic (`1` or `!=0` = TOR only; `0` or `!=1` = non-TOR only) |
+| `--asn` | — | Filter: ASN expression (`12345`, `!=65000`, `>=1000`, `<64512`, …) |
 **`topn` only**: `--n 10`, `--window 5m`, `--group-by website`
@@ -398,7 +408,7 @@ with a non-zero code on gRPC error.
 | Tick-based cache rotation in aggregator | Ring stays on the same 1-min cadence regardless of collector count |
 | Degraded collector zeroing | Stale counts from failed collectors don't accumulate in the merged view |
 | Same `LogtailService` for collector and aggregator | CLI and frontend work with either; no special-casing |
-| `internal/store` shared package | ring-buffer, `Tuple5` encoding, and filter logic shared between collector and aggregator |
+| `internal/store` shared package | ring-buffer, `Tuple6` encoding, and filter logic shared between collector and aggregator |
 | Filter state in URL, not session cookie | Multiple concurrent operators; shareable/bookmarkable URLs |
 | Query strings stripped at ingest | Major cardinality reduction; prevents URI explosion under attack |
 | No persistent storage | Simplicity; acceptable for ops dashboards (restart = lose history) |
@@ -408,6 +418,7 @@ with a non-zero code on gRPC error.
 | CLI multi-target fan-out | Compare a collector vs. aggregator, or two collectors, in one command |
 | CLI uses stdlib `flag`, no framework | Four subcommands don't justify a dependency |
 | Status filter as expression string (`!=200`, `>=400`) | Operator-friendly; parsed once at query boundary, encoded as `(int32, StatusOp)` in proto |
+| ASN filter reuses `StatusOp` and `ParseStatusExpr` | Same 6-operator grammar as status; no duplicate enum or parser needed |
 | Regex filters compiled once per query (`CompiledFilter`) | Up to 288 × 5 000 per-entry calls — compiling per-entry would dominate query latency |
 | Filter expression box (`q=`) redirects to canonical URL | Filter state stays in individual `f_*` params; URLs remain shareable and bookmarkable |
 | `ListTargets` + frontend source picker | "Which nginx is busiest?" answered by switching `target=` to a collector; no data model changes, no extra memory |

View File

@@ -163,8 +163,8 @@ func TestCacheCoarseRing(t *testing.T) {
 func TestCacheQueryTopN(t *testing.T) {
 m := NewMerger()
 m.Apply(makeSnap("c1", map[string]int64{
-st.EncodeTuple(st.Tuple5{Website: "busy.com", Prefix: "1.0.0.0/24", URI: "/", Status: "200"}): 300,
-st.EncodeTuple(st.Tuple5{Website: "quiet.com", Prefix: "2.0.0.0/24", URI: "/", Status: "200"}): 50,
+st.EncodeTuple(st.Tuple6{Website: "busy.com", Prefix: "1.0.0.0/24", URI: "/", Status: "200"}): 300,
+st.EncodeTuple(st.Tuple6{Website: "quiet.com", Prefix: "2.0.0.0/24", URI: "/", Status: "200"}): 50,
 }))
 cache := NewCache(m, "test")
@@ -181,8 +181,8 @@ func TestCacheQueryTopN(t *testing.T) {
 func TestCacheQueryTopNWithFilter(t *testing.T) {
 m := NewMerger()
-status429 := st.EncodeTuple(st.Tuple5{Website: "example.com", Prefix: "1.0.0.0/24", URI: "/api", Status: "429"})
-status200 := st.EncodeTuple(st.Tuple5{Website: "example.com", Prefix: "2.0.0.0/24", URI: "/api", Status: "200"})
+status429 := st.EncodeTuple(st.Tuple6{Website: "example.com", Prefix: "1.0.0.0/24", URI: "/api", Status: "429"})
+status200 := st.EncodeTuple(st.Tuple6{Website: "example.com", Prefix: "2.0.0.0/24", URI: "/api", Status: "200"})
 m.Apply(makeSnap("c1", map[string]int64{status429: 200, status200: 500}))
 cache := NewCache(m, "test")
@@ -202,7 +202,7 @@ func TestCacheQueryTrend(t *testing.T) {
 for i, count := range []int64{10, 20, 30} {
 m.Apply(makeSnap("c1", map[string]int64{
-st.EncodeTuple(st.Tuple5{Website: "x.com", Prefix: "1.0.0.0/24", URI: "/", Status: "200"}): count,
+st.EncodeTuple(st.Tuple6{Website: "x.com", Prefix: "1.0.0.0/24", URI: "/", Status: "200"}): count,
 }))
 cache.rotate(now.Add(time.Duration(i) * time.Minute))
 }
@@ -270,12 +270,12 @@ func startFakeCollector(t *testing.T, snaps []*pb.Snapshot) string {
 func TestGRPCEndToEnd(t *testing.T) {
 // Two fake collectors with overlapping labels.
 snap1 := makeSnap("col1", map[string]int64{
-st.EncodeTuple(st.Tuple5{Website: "busy.com", Prefix: "1.0.0.0/24", URI: "/", Status: "200"}): 500,
-st.EncodeTuple(st.Tuple5{Website: "quiet.com", Prefix: "2.0.0.0/24", URI: "/", Status: "429"}): 100,
+st.EncodeTuple(st.Tuple6{Website: "busy.com", Prefix: "1.0.0.0/24", URI: "/", Status: "200"}): 500,
+st.EncodeTuple(st.Tuple6{Website: "quiet.com", Prefix: "2.0.0.0/24", URI: "/", Status: "429"}): 100,
 })
 snap2 := makeSnap("col2", map[string]int64{
-st.EncodeTuple(st.Tuple5{Website: "busy.com", Prefix: "3.0.0.0/24", URI: "/", Status: "200"}): 300,
-st.EncodeTuple(st.Tuple5{Website: "other.com", Prefix: "4.0.0.0/24", URI: "/", Status: "200"}): 50,
+st.EncodeTuple(st.Tuple6{Website: "busy.com", Prefix: "3.0.0.0/24", URI: "/", Status: "200"}): 300,
+st.EncodeTuple(st.Tuple6{Website: "other.com", Prefix: "4.0.0.0/24", URI: "/", Status: "200"}): 50,
 })
 addr1 := startFakeCollector(t, []*pb.Snapshot{snap1})
 addr2 := startFakeCollector(t, []*pb.Snapshot{snap2})
@@ -388,7 +388,7 @@ func TestGRPCEndToEnd(t *testing.T) {
 func TestDegradedCollector(t *testing.T) {
 // Start one real and one immediately-gone collector.
 snap1 := makeSnap("col1", map[string]int64{
-st.EncodeTuple(st.Tuple5{Website: "good.com", Prefix: "1.0.0.0/24", URI: "/", Status: "200"}): 100,
+st.EncodeTuple(st.Tuple6{Website: "good.com", Prefix: "1.0.0.0/24", URI: "/", Status: "200"}): 100,
 })
 addr1 := startFakeCollector(t, []*pb.Snapshot{snap1})
 // addr2 points at nothing — connections will fail immediately.

View File

@@ -89,7 +89,10 @@ func TestBuildFilterNil(t *testing.T) {
 }
 func TestFmtCount(t *testing.T) {
-cases := []struct{ n int64; want string }{
+cases := []struct {
+n int64
+want string
+}{
 {0, "0"},
 {999, "999"},
 {1000, "1 000"},

View File

@@ -21,6 +21,7 @@ type sharedFlags struct {
 websiteRe string // RE2 regex against website
 uriRe string // RE2 regex against request URI
 isTor string // "", "1" / "!=0" (TOR only), "0" / "!=1" (non-TOR only)
+asn string // expression: "12345", "!=65000", ">=1000", etc.
 }
// bindShared registers the shared flags on fs and returns a pointer to the // bindShared registers the shared flags on fs and returns a pointer to the
@@ -36,6 +37,7 @@ func bindShared(fs *flag.FlagSet) (*sharedFlags, *string) {
 fs.StringVar(&sf.websiteRe, "website-re", "", "filter: RE2 regex against website")
 fs.StringVar(&sf.uriRe, "uri-re", "", "filter: RE2 regex against request URI")
 fs.StringVar(&sf.isTor, "is-tor", "", "filter: TOR traffic (1 or !=0 = TOR only; 0 or !=1 = non-TOR only)")
+fs.StringVar(&sf.asn, "asn", "", "filter: ASN expression (12345, !=65000, >=1000, <64512, …)")
 return sf, target
 }
@@ -58,7 +60,7 @@ func parseTargets(s string) []string {
 }
 func buildFilter(sf *sharedFlags) *pb.Filter {
-if sf.website == "" && sf.prefix == "" && sf.uri == "" && sf.status == "" && sf.websiteRe == "" && sf.uriRe == "" && sf.isTor == "" {
+if sf.website == "" && sf.prefix == "" && sf.uri == "" && sf.status == "" && sf.websiteRe == "" && sf.uriRe == "" && sf.isTor == "" && sf.asn == "" {
 return nil
 }
 f := &pb.Filter{}
@@ -97,6 +99,15 @@ func buildFilter(sf *sharedFlags) *pb.Filter {
 fmt.Fprintf(os.Stderr, "--is-tor: invalid value %q; use 1, 0, !=0, or !=1\n", sf.isTor)
 os.Exit(1)
 }
+if sf.asn != "" {
+n, op, ok := st.ParseStatusExpr(sf.asn)
+if !ok {
+fmt.Fprintf(os.Stderr, "--asn: invalid expression %q; use e.g. 12345, !=65000, >=1000, <64512\n", sf.asn)
+os.Exit(1)
+}
+f.AsnNumber = &n
+f.AsnOp = op
+}
 return f
 }

View File

@@ -3,6 +3,7 @@ package main
 import (
 "fmt"
 "net"
+"strconv"
 "strings"
 )
@@ -13,18 +14,20 @@ type LogRecord struct {
 URI string
 Status string
 IsTor bool
+ASN int32
 }
 // ParseLine parses a tab-separated logtail log line:
 //
-// $host \t $remote_addr \t $msec \t $request_method \t $request_uri \t $status \t $body_bytes_sent \t $request_time \t $is_tor
+// $host \t $remote_addr \t $msec \t $request_method \t $request_uri \t $status \t $body_bytes_sent \t $request_time \t $is_tor \t $asn
 //
-// The is_tor field (0 or 1) is optional for backward compatibility with
-// older log files that omit it; it defaults to false when absent.
+// The is_tor (field 9) and asn (field 10) fields are optional for backward
+// compatibility with older log files that omit them; they default to false/0
+// when absent.
 // Returns false for lines with fewer than 8 fields.
 func ParseLine(line string, v4bits, v6bits int) (LogRecord, bool) {
-// SplitN caps allocations; we need up to 9 fields.
-fields := strings.SplitN(line, "\t", 9)
+// SplitN caps allocations; we need up to 10 fields.
+fields := strings.SplitN(line, "\t", 10)
 if len(fields) < 8 {
 return LogRecord{}, false
 }
@@ -39,7 +42,14 @@ func ParseLine(line string, v4bits, v6bits int) (LogRecord, bool) {
 return LogRecord{}, false
 }
-isTor := len(fields) == 9 && fields[8] == "1"
+isTor := len(fields) >= 9 && fields[8] == "1"
+var asn int32
+if len(fields) == 10 {
+if n, err := strconv.ParseInt(fields[9], 10, 32); err == nil {
+asn = int32(n)
+}
+}
 return LogRecord{
 Website: fields[0],
@@ -47,6 +57,7 @@ func ParseLine(line string, v4bits, v6bits int) (LogRecord, bool) {
 URI: uri,
 Status: fields[5],
 IsTor: isTor,
+ASN: asn,
 }, true
 }

View File

@@ -108,6 +108,58 @@ func TestParseLine(t *testing.T) {
 IsTor: false,
 },
 },
+{
+name: "asn field parsed",
+line: "asn.example.com\t1.2.3.4\t0\tGET\t/\t200\t0\t0.001\t0\t12345",
+wantOK: true,
+want: LogRecord{
+Website: "asn.example.com",
+ClientPrefix: "1.2.3.0/24",
+URI: "/",
+Status: "200",
+IsTor: false,
+ASN: 12345,
+},
+},
+{
+name: "asn field with is_tor=1",
+line: "both.example.com\t1.2.3.4\t0\tGET\t/\t200\t0\t0.001\t1\t65535",
+wantOK: true,
+want: LogRecord{
+Website: "both.example.com",
+ClientPrefix: "1.2.3.0/24",
+URI: "/",
+Status: "200",
+IsTor: true,
+ASN: 65535,
+},
+},
+{
+name: "missing asn field defaults to 0 (backward compat)",
+line: "noasn.example.com\t1.2.3.4\t0\tGET\t/\t200\t0\t0.001\t1",
+wantOK: true,
+want: LogRecord{
+Website: "noasn.example.com",
+ClientPrefix: "1.2.3.0/24",
+URI: "/",
+Status: "200",
+IsTor: true,
+ASN: 0,
+},
+},
+{
+name: "invalid asn field defaults to 0",
+line: "badann.example.com\t1.2.3.4\t0\tGET\t/\t200\t0\t0.001\t0\tnot-a-number",
+wantOK: true,
+want: LogRecord{
+Website: "badann.example.com",
+ClientPrefix: "1.2.3.0/24",
+URI: "/",
+Status: "200",
+IsTor: false,
+ASN: 0,
+},
+},
 }
 for _, tc := range tests {

View File

@@ -4,8 +4,8 @@ import (
 "sync"
 "time"
-pb "git.ipng.ch/ipng/nginx-logtail/proto/logtailpb"
 st "git.ipng.ch/ipng/nginx-logtail/internal/store"
+pb "git.ipng.ch/ipng/nginx-logtail/proto/logtailpb"
 )
 const liveMapCap = 100_000 // hard cap on live map entries
@@ -15,7 +15,7 @@ type Store struct {
 source string
 // live map — written only by the Run goroutine; no locking needed on writes
-live map[st.Tuple5]int64
+live map[st.Tuple6]int64
 liveLen int
 // ring buffers — protected by mu for reads
@@ -36,7 +36,7 @@ type Store struct {
 func NewStore(source string) *Store {
 return &Store{
 source: source,
-live: make(map[st.Tuple5]int64, liveMapCap),
+live: make(map[st.Tuple6]int64, liveMapCap),
 subs: make(map[chan st.Snapshot]struct{}),
 }
 }
@@ -44,7 +44,7 @@ func NewStore(source string) *Store {
 // ingest records one log record into the live map.
 // Must only be called from the Run goroutine.
 func (s *Store) ingest(r LogRecord) {
-key := st.Tuple5{Website: r.Website, Prefix: r.ClientPrefix, URI: r.URI, Status: r.Status, IsTor: r.IsTor}
+key := st.Tuple6{Website: r.Website, Prefix: r.ClientPrefix, URI: r.URI, Status: r.Status, IsTor: r.IsTor, ASN: r.ASN}
 if _, exists := s.live[key]; !exists {
 if s.liveLen >= liveMapCap {
 return
@@ -77,7 +77,7 @@ func (s *Store) rotate(now time.Time) {
 }
 s.mu.Unlock()
-s.live = make(map[st.Tuple5]int64, liveMapCap)
+s.live = make(map[st.Tuple6]int64, liveMapCap)
 s.liveLen = 0
 s.broadcast(fine)

View File

@@ -5,8 +5,8 @@ import (
 "testing"
 "time"
-pb "git.ipng.ch/ipng/nginx-logtail/proto/logtailpb"
 st "git.ipng.ch/ipng/nginx-logtail/internal/store"
+pb "git.ipng.ch/ipng/nginx-logtail/proto/logtailpb"
 )
 func makeStore() *Store {

View File

@@ -126,8 +126,20 @@ func applyTerm(term string, fs *filterState) error {
 } else {
 fs.IsTor = "0"
 }
+case "asn":
+if op == "~=" {
+return fmt.Errorf("asn does not support ~=; use =, !=, >=, >, <=, <")
+}
+expr := op + value
+if op == "=" {
+expr = value
+}
+if _, _, ok := st.ParseStatusExpr(expr); !ok {
+return fmt.Errorf("invalid asn expression %q", expr)
+}
+fs.ASN = expr
 default:
-return fmt.Errorf("unknown field %q; valid: status, website, uri, prefix, is_tor", field)
+return fmt.Errorf("unknown field %q; valid: status, website, uri, prefix, is_tor, asn", field)
 }
 return nil
 }
@@ -167,9 +179,24 @@ func FilterExprString(f filterState) string {
 if f.IsTor != "" {
 parts = append(parts, "is_tor="+f.IsTor)
 }
+if f.ASN != "" {
+parts = append(parts, asnTermStr(f.ASN))
+}
 return strings.Join(parts, " AND ")
 }
+// asnTermStr converts a stored ASN expression (">=1000", "12345") to a
+// full filter term ("asn>=1000", "asn=12345").
+func asnTermStr(expr string) string {
+if expr == "" {
+return ""
+}
+if len(expr) > 0 && (expr[0] == '!' || expr[0] == '>' || expr[0] == '<') {
+return "asn" + expr
+}
+return "asn=" + expr
+}
 // statusTermStr converts a stored status expression (">=400", "200") to a
 // full filter term ("status>=400", "status=200").
 func statusTermStr(expr string) string {

View File

@@ -258,3 +258,77 @@ func TestFilterExprRoundTrip(t *testing.T) {
 }
 }
 }
+func TestParseAsnEQ(t *testing.T) {
+fs, err := ParseFilterExpr("asn=12345")
+if err != nil || fs.ASN != "12345" {
+t.Fatalf("got err=%v fs=%+v", err, fs)
+}
+}
+func TestParseAsnNE(t *testing.T) {
+fs, err := ParseFilterExpr("asn!=65000")
+if err != nil || fs.ASN != "!=65000" {
+t.Fatalf("got err=%v fs=%+v", err, fs)
+}
+}
+func TestParseAsnGE(t *testing.T) {
+fs, err := ParseFilterExpr("asn>=1000")
+if err != nil || fs.ASN != ">=1000" {
+t.Fatalf("got err=%v fs=%+v", err, fs)
+}
+}
+func TestParseAsnLT(t *testing.T) {
+fs, err := ParseFilterExpr("asn<64512")
+if err != nil || fs.ASN != "<64512" {
+t.Fatalf("got err=%v fs=%+v", err, fs)
+}
+}
+func TestParseAsnRegexRejected(t *testing.T) {
+_, err := ParseFilterExpr("asn~=123")
+if err == nil {
+t.Fatal("expected error for asn~=")
+}
+}
+func TestParseAsnInvalidExpr(t *testing.T) {
+_, err := ParseFilterExpr("asn=notanumber")
+if err == nil {
+t.Fatal("expected error for non-numeric ASN")
+}
+}
+func TestFilterExprStringASN(t *testing.T) {
+s := FilterExprString(filterState{ASN: "12345"})
+if s != "asn=12345" {
+t.Fatalf("got %q", s)
+}
+s = FilterExprString(filterState{ASN: ">=1000"})
+if s != "asn>=1000" {
+t.Fatalf("got %q", s)
+}
+}
+func TestFilterExprRoundTripASN(t *testing.T) {
+cases := []filterState{
+{ASN: "12345"},
+{ASN: "!=65000"},
+{ASN: ">=1000"},
+{ASN: "<64512"},
+{Status: ">=400", ASN: "12345"},
+}
+for _, fs := range cases {
+expr := FilterExprString(fs)
+fs2, err := ParseFilterExpr(expr)
+if err != nil {
+t.Errorf("round-trip parse error for %+v → %q: %v", fs, expr, err)
+continue
+}
+if fs2 != fs {
+t.Errorf("round-trip mismatch: %+v → %q → %+v", fs, expr, fs2)
+}
+}
+}

View File

@@ -220,8 +220,17 @@ func TestDrillURL(t *testing.T) {
 if !strings.Contains(u, "f_status=429") {
 t.Errorf("drill from status: missing f_status in %q", u)
 }
+if !strings.Contains(u, "by=asn") {
+t.Errorf("drill from status: expected next by=asn in %q", u)
+}
+p.GroupByS = "asn"
+u = p.drillURL("12345")
+if !strings.Contains(u, "f_asn=12345") {
+t.Errorf("drill from asn: missing f_asn in %q", u)
+}
 if !strings.Contains(u, "by=website") {
-t.Errorf("drill from status: expected cycle back to by=website in %q", u)
+t.Errorf("drill from asn: expected cycle back to by=website in %q", u)
 }
 }

View File

@@ -54,6 +54,7 @@ type filterState struct {
 WebsiteRe string // RE2 regex against website
 URIRe string // RE2 regex against request URI
 IsTor string // "", "1" (TOR only), "0" (non-TOR only)
+ASN string // expression: "12345", "!=65000", ">=1000", etc.
 }
// QueryParams holds all parsed URL parameters for one page request. // QueryParams holds all parsed URL parameters for one page request.
@@ -91,7 +92,7 @@ var windowSpecs = []struct{ s, label string }{
 }
 var groupBySpecs = []struct{ s, label string }{
-{"website", "website"}, {"prefix", "prefix"}, {"uri", "uri"}, {"status", "status"},
+{"website", "website"}, {"asn", "asn"}, {"prefix", "prefix"}, {"status", "status"}, {"uri", "uri"},
 }
 func parseWindowString(s string) (pb.Window, string) {
@@ -121,6 +122,8 @@ func parseGroupByString(s string) (pb.GroupBy, string) {
 return pb.GroupBy_REQUEST_URI, "uri"
 case "status":
 return pb.GroupBy_HTTP_RESPONSE, "status"
+case "asn":
+return pb.GroupBy_ASN_NUMBER, "asn"
 default:
 return pb.GroupBy_WEBSITE, "website"
 }
@@ -159,12 +162,13 @@ func (h *Handler) parseParams(r *http.Request) QueryParams {
 			WebsiteRe: q.Get("f_website_re"),
 			URIRe:     q.Get("f_uri_re"),
 			IsTor:     q.Get("f_is_tor"),
+			ASN:       q.Get("f_asn"),
 		},
 	}
 }

 func buildFilter(f filterState) *pb.Filter {
-	if f.Website == "" && f.Prefix == "" && f.URI == "" && f.Status == "" && f.WebsiteRe == "" && f.URIRe == "" && f.IsTor == "" {
+	if f.Website == "" && f.Prefix == "" && f.URI == "" && f.Status == "" && f.WebsiteRe == "" && f.URIRe == "" && f.IsTor == "" && f.ASN == "" {
 		return nil
 	}
 	out := &pb.Filter{}
@@ -195,6 +199,12 @@ func buildFilter(f filterState) *pb.Filter {
 	case "0":
 		out.Tor = pb.TorFilter_TOR_NO
 	}
+	if f.ASN != "" {
+		if n, op, ok := st.ParseStatusExpr(f.ASN); ok {
+			out.AsnNumber = &n
+			out.AsnOp = op
+		}
+	}
 	return out
 }
@@ -226,6 +236,9 @@ func (p QueryParams) toValues() url.Values {
 	if p.Filter.IsTor != "" {
 		v.Set("f_is_tor", p.Filter.IsTor)
 	}
+	if p.Filter.ASN != "" {
+		v.Set("f_asn", p.Filter.ASN)
+	}
 	return v
 }
@@ -247,7 +260,7 @@ func (p QueryParams) buildURL(overrides map[string]string) string {
 func (p QueryParams) clearFilterURL() string {
 	return p.buildURL(map[string]string{
 		"f_website": "", "f_prefix": "", "f_uri": "", "f_status": "",
-		"f_website_re": "", "f_uri_re": "",
+		"f_website_re": "", "f_uri_re": "", "f_is_tor": "", "f_asn": "",
 	})
 }
@@ -260,7 +273,9 @@ func nextGroupBy(s string) string {
 		return "uri"
 	case "uri":
 		return "status"
-	default: // status → back to website
+	case "status":
+		return "asn"
+	default: // asn → back to website
 		return "website"
 	}
 }
@@ -276,6 +291,8 @@ func groupByFilterKey(s string) string {
 		return "f_uri"
 	case "status":
 		return "f_status"
+	case "asn":
+		return "f_asn"
 	default:
 		return "f_website"
 	}
@@ -338,6 +355,12 @@ func buildCrumbs(p QueryParams) []Crumb {
 			RemoveURL: p.buildURL(map[string]string{"f_is_tor": ""}),
 		})
 	}
+	if p.Filter.ASN != "" {
+		crumbs = append(crumbs, Crumb{
+			Text:      asnTermStr(p.Filter.ASN),
+			RemoveURL: p.buildURL(map[string]string{"f_asn": ""}),
+		})
+	}
 	return crumbs
 }

View File

@@ -32,7 +32,7 @@
 <input type="hidden" name="w" value="{{.Params.WindowS}}">
 <input type="hidden" name="by" value="{{.Params.GroupByS}}">
 <input type="hidden" name="n" value="{{.Params.N}}">
-<input class="filter-input" type="text" name="q" value="{{.FilterExpr}}" placeholder="status>=400 AND website~=gouda.* AND uri~=^/api/ AND is_tor=1">
+<input class="filter-input" type="text" name="q" value="{{.FilterExpr}}" placeholder="status>=400 AND website~=gouda.* AND uri~=^/ct/v1/ AND is_tor=0 AND asn=8298">
 <button type="submit">filter</button>
 {{- if .FilterExpr}} <a class="clear" href="{{.ClearFilterURL}}">× clear</a>{{end}}
 </form>

View File

@@ -27,7 +27,7 @@ Add the `logtail` log format to your `nginx.conf` and apply it to each `server`
 ```nginx
 http {
-    log_format logtail '$host\t$remote_addr\t$msec\t$request_method\t$request_uri\t$status\t$body_bytes_sent\t$request_time\t$is_tor';
+    log_format logtail '$host\t$remote_addr\t$msec\t$request_method\t$request_uri\t$status\t$body_bytes_sent\t$request_time\t$is_tor\t$asn';

     server {
         access_log /var/log/nginx/access.log logtail;
@@ -38,10 +38,16 @@ http {
 ```

 The format is tab-separated with fixed field positions. Query strings are stripped from the URI
-by the collector at ingest time — only the path is tracked. `$is_tor` must be set to `1` when
-the client IP is a TOR exit node and `0` otherwise (this is typically populated by a custom nginx
-variable or a Lua script that checks the IP against a TOR exit list). The field is optional for
-backward compatibility — log lines without it are accepted and treated as `is_tor=0`.
+by the collector at ingest time — only the path is tracked.
+
+`$is_tor` must be set to `1` when the client IP is a TOR exit node and `0` otherwise (typically
+populated by a custom nginx variable or a Lua script that checks the IP against a TOR exit list).
+The field is optional for backward compatibility — log lines without it are accepted and treated
+as `is_tor=0`.
+
+`$asn` must be set to the client's AS number as a decimal integer (e.g. from MaxMind GeoIP2's
+`$geoip2_data_autonomous_system_number`). The field is optional — log lines without it default
+to `asn=0`.

 ---
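The hunk above assumes `$asn` is already populated by nginx. One way to do that — a sketch, not part of this change, assuming the third-party `ngx_http_geoip2_module` is loaded and a GeoLite2-ASN database sits at a path of your choosing — is:

```nginx
http {
    # Look up the client's AS number in the MaxMind ASN database.
    geoip2 /etc/nginx/GeoLite2-ASN.mmdb {
        $geoip2_data_autonomous_system_number autonomous_system_number;
    }

    # Default to 0 when the lookup finds nothing, matching the
    # collector's asn=0 convention for unresolved entries.
    map $geoip2_data_autonomous_system_number $asn {
        ''      0;
        default $geoip2_data_autonomous_system_number;
    }
}
```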
@@ -128,7 +134,7 @@ The collector is designed to stay well under 1 GB:
 | Coarse ring (288 × 5-min) | 288 × 5 000 | ~268 MB     |
 | **Total**                 |             | **~845 MB** |

-When the live map reaches 100 000 distinct 5-tuples, new keys are dropped for the rest of that
+When the live map reaches 100 000 distinct 6-tuples, new keys are dropped for the rest of that
 minute. Existing keys continue to accumulate counts. The cap resets at each minute rotation.

 ### Time windows
@@ -252,13 +258,13 @@ the selected dimension and time window.
 **Window tabs** — switch between `1m / 5m / 15m / 60m / 6h / 24h`. Only the window changes;
 all active filters are preserved.

-**Dimension tabs** — switch between grouping by `website / prefix / uri / status`.
+**Dimension tabs** — switch between grouping by `website / asn / prefix / status / uri`.

 **Drilldown** — click any table row to add that value as a filter and advance to the next
 dimension in the hierarchy:

 ```
-website → client prefix → request URI → HTTP status → website (cycles)
+website → client prefix → request URI → HTTP status → ASN → website (cycles)
 ```

 Example: click `example.com` in the website view to see which client prefixes are hitting it;
@@ -283,16 +289,20 @@ website=example.com AND prefix=1.2.3.0/24
 Supported fields and operators:

 | Field     | Operators                  | Example                           |
 |-----------|----------------------------|-----------------------------------|
 | `status`  | `=` `!=` `>` `>=` `<` `<=` | `status>=400`                     |
 | `website` | `=` `~=`                   | `website~=gouda.*`                |
 | `uri`     | `=` `~=`                   | `uri~=^/api/`                     |
 | `prefix`  | `=`                        | `prefix=1.2.3.0/24`               |
 | `is_tor`  | `=` `!=`                   | `is_tor=1`, `is_tor!=0`           |
+| `asn`     | `=` `!=` `>` `>=` `<` `<=` | `asn=8298`, `asn>=1000`           |

 `is_tor=1` and `is_tor!=0` are equivalent (TOR traffic only). `is_tor=0` and `is_tor!=1` are
 equivalent (non-TOR traffic only).

+`asn` accepts the same comparison expressions as `status`. Use `asn=8298` to match a single AS,
+`asn>=64512` to match the private-use ASN range, or `asn!=0` to exclude unresolved entries.
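The `asn` comparison expressions described above have a simple shape: an optional operator prefix followed by a decimal AS number. A rough standalone sketch of such a parser (the `parseAsnExpr` helper is hypothetical — the actual frontend reuses the shared status-expression parser `ParseStatusExpr`):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseAsnExpr splits an expression like "8298", "!=65000" or ">=1000"
// into an operator and a 32-bit AS number. A bare number means equality.
func parseAsnExpr(s string) (op string, asn int32, ok bool) {
	op = "="
	// Two-character operators must be tried before their one-character prefixes.
	for _, o := range []string{">=", "<=", "!=", ">", "<", "="} {
		if strings.HasPrefix(s, o) {
			op = o
			s = s[len(o):]
			break
		}
	}
	n, err := strconv.ParseInt(strings.TrimSpace(s), 10, 32)
	if err != nil {
		return "", 0, false
	}
	return op, int32(n), true
}

func main() {
	for _, e := range []string{"8298", "!=0", ">=64512", "bogus"} {
		op, n, ok := parseAsnExpr(e)
		fmt.Printf("%-8s op=%q asn=%d ok=%v\n", e, op, n, ok)
	}
}
```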
 `~=` means RE2 regex match. Values with spaces or quotes may be wrapped in double or single
 quotes: `uri~="^/search\?q="`.
@@ -311,8 +321,8 @@ accept RE2 regular expressions. The breadcrumb strip shows them as `website~=gou
 `uri~=^/api/` with the usual `×` remove link.

 **URL sharing** — all filter state is in the URL query string (`w`, `by`, `f_website`,
-`f_prefix`, `f_uri`, `f_status`, `f_website_re`, `f_uri_re`, `f_is_tor`, `n`). Copy the URL to
-share an exact view with another operator, or bookmark a recurring query.
+`f_prefix`, `f_uri`, `f_status`, `f_website_re`, `f_uri_re`, `f_is_tor`, `f_asn`, `n`). Copy
+the URL to share an exact view with another operator, or bookmark a recurring query.

 **JSON output** — append `&raw=1` to any URL to receive the TopN result as JSON instead of
 HTML. Useful for scripting without the CLI binary:
@@ -368,6 +378,7 @@ logtail-cli targets [flags] list targets known to the queried endpoint
 | `--website-re`| —          | Filter: RE2 regex against website                        |
 | `--uri-re`    | —          | Filter: RE2 regex against request URI                    |
 | `--is-tor`    | —          | Filter: `1` or `!=0` = TOR only; `0` or `!=1` = non-TOR only |
+| `--asn`       | —          | Filter: ASN expression (`12345`, `!=65000`, `>=1000`, `<64512`, …) |

 ### `topn` flags
@@ -375,7 +386,7 @@ logtail-cli targets [flags] list targets known to the queried endpoint
 |---------------|------------|----------------------------------------------------------|
 | `--n`         | `10`       | Number of entries                                        |
 | `--window`    | `5m`       | `1m` `5m` `15m` `60m` `6h` `24h`                         |
-| `--group-by`  | `website`  | `website` `prefix` `uri` `status`                        |
+| `--group-by`  | `website`  | `website` `prefix` `uri` `status` `asn`                  |

 ### `trend` flags
@@ -470,6 +481,21 @@ logtail-cli topn --target agg:9091 --window 5m --is-tor 1
 # Show non-TOR traffic only — exclude exit nodes from the view
 logtail-cli topn --target agg:9091 --window 5m --is-tor 0

+# Top ASNs by request count over the last 5 minutes
+logtail-cli topn --target agg:9091 --window 5m --group-by asn
+
+# Which ASNs are generating the most 429s?
+logtail-cli topn --target agg:9091 --window 5m --group-by asn --status 429
+
+# Filter to traffic from a specific ASN
+logtail-cli topn --target agg:9091 --window 5m --asn 8298
+
+# Filter to traffic from private-use / unallocated ASNs
+logtail-cli topn --target agg:9091 --window 5m --group-by prefix --asn '>=64512'
+
+# Exclude unresolved entries (ASN 0) and show top source ASNs
+logtail-cli topn --target agg:9091 --window 5m --group-by asn --asn '!=0'
+
 # Compare two collectors side by side in one command
 logtail-cli topn --target nginx1:9090,nginx2:9090 --window 5m

View File

@@ -6,6 +6,7 @@ import (
 	"container/heap"
 	"log"
 	"regexp"
+	"strconv"
 	"time"

 	pb "git.ipng.ch/ipng/nginx-logtail/proto/logtailpb"
@@ -20,13 +21,14 @@ const (
 	CoarseEvery = 5 // fine ticks between coarse writes
 )

-// Tuple5 is the aggregation key (website, prefix, URI, status, is_tor).
-type Tuple5 struct {
+// Tuple6 is the aggregation key (website, prefix, URI, status, is_tor, asn).
+type Tuple6 struct {
 	Website string
 	Prefix  string
 	URI     string
 	Status  string
 	IsTor   bool
+	ASN     int32
 }

 // Entry is a labelled count used in snapshots and query results.
@@ -74,28 +76,33 @@ func BucketsForWindow(window pb.Window, fine, coarse RingView, fineFilled, coars
 	}
 }

-// --- label encoding: "website\x00prefix\x00uri\x00status\x00is_tor" ---
+// --- label encoding: "website\x00prefix\x00uri\x00status\x00is_tor\x00asn" ---

-// EncodeTuple encodes a Tuple5 as a NUL-separated string suitable for use
+// EncodeTuple encodes a Tuple6 as a NUL-separated string suitable for use
 // as a map key in snapshots.
-func EncodeTuple(t Tuple5) string {
+func EncodeTuple(t Tuple6) string {
 	tor := "0"
 	if t.IsTor {
 		tor = "1"
 	}
-	return t.Website + "\x00" + t.Prefix + "\x00" + t.URI + "\x00" + t.Status + "\x00" + tor
+	return t.Website + "\x00" + t.Prefix + "\x00" + t.URI + "\x00" + t.Status + "\x00" + tor + "\x00" + strconv.Itoa(int(t.ASN))
 }
-// LabelTuple decodes a NUL-separated snapshot label back into a Tuple5.
-func LabelTuple(label string) Tuple5 {
-	parts := splitN(label, '\x00', 5)
+// LabelTuple decodes a NUL-separated snapshot label back into a Tuple6.
+func LabelTuple(label string) Tuple6 {
+	parts := splitN(label, '\x00', 6)
 	if len(parts) < 4 {
-		return Tuple5{}
+		return Tuple6{}
 	}
-	t := Tuple5{Website: parts[0], Prefix: parts[1], URI: parts[2], Status: parts[3]}
-	if len(parts) == 5 {
+	t := Tuple6{Website: parts[0], Prefix: parts[1], URI: parts[2], Status: parts[3]}
+	if len(parts) >= 5 {
 		t.IsTor = parts[4] == "1"
 	}
+	if len(parts) == 6 {
+		if n, err := strconv.Atoi(parts[5]); err == nil {
+			t.ASN = int32(n)
+		}
+	}
 	return t
 }
@@ -159,7 +166,7 @@ func CompileFilter(f *pb.Filter) *CompiledFilter {
 // MatchesFilter returns true if t satisfies all constraints in f.
 // A nil filter matches everything.
-func MatchesFilter(t Tuple5, f *CompiledFilter) bool {
+func MatchesFilter(t Tuple6, f *CompiledFilter) bool {
 	if f == nil || f.Proto == nil {
 		return true
 	}
@@ -199,9 +206,30 @@ func MatchesFilter(t Tuple5, f *CompiledFilter) bool {
 			return false
 		}
 	}
+	if p.AsnNumber != nil && !matchesAsnOp(t.ASN, p.GetAsnNumber(), p.AsnOp) {
+		return false
+	}
 	return true
 }

+// matchesAsnOp applies op(asn, want) directly on int32 values.
+func matchesAsnOp(asn, want int32, op pb.StatusOp) bool {
+	switch op {
+	case pb.StatusOp_NE:
+		return asn != want
+	case pb.StatusOp_GT:
+		return asn > want
+	case pb.StatusOp_GE:
+		return asn >= want
+	case pb.StatusOp_LT:
+		return asn < want
+	case pb.StatusOp_LE:
+		return asn <= want
+	default: // EQ
+		return asn == want
+	}
+}
+
 // matchesStatusOp applies op(statusStr, want), parsing statusStr as an integer.
 // Returns false if statusStr is not a valid integer.
 func matchesStatusOp(statusStr string, want int32, op pb.StatusOp) bool {
@@ -229,7 +257,7 @@ func matchesStatusOp(statusStr string, want int32, op pb.StatusOp) bool {
 }

 // DimensionLabel returns the string value of t for the given group-by dimension.
-func DimensionLabel(t Tuple5, g pb.GroupBy) string {
+func DimensionLabel(t Tuple6, g pb.GroupBy) string {
 	switch g {
 	case pb.GroupBy_WEBSITE:
 		return t.Website
@@ -239,6 +267,8 @@ func DimensionLabel(t Tuple5, g pb.GroupBy) string {
 		return t.URI
 	case pb.GroupBy_HTTP_RESPONSE:
 		return t.Status
+	case pb.GroupBy_ASN_NUMBER:
+		return strconv.Itoa(int(t.ASN))
 	default:
 		return t.Website
 	}
@@ -318,9 +348,9 @@ func TopKFromMap(m map[string]int64, k int) []Entry {
 	return result
 }

-// TopKFromTupleMap encodes a Tuple5 map and returns the top-k as a Snapshot.
+// TopKFromTupleMap encodes a Tuple6 map and returns the top-k as a Snapshot.
 // Used by the collector to snapshot its live map.
-func TopKFromTupleMap(m map[Tuple5]int64, k int, ts time.Time) Snapshot {
+func TopKFromTupleMap(m map[Tuple6]int64, k int, ts time.Time) Snapshot {
 	flat := make(map[string]int64, len(m))
 	for t, c := range m {
 		flat[EncodeTuple(t)] = c

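For reference outside the diff, the NUL-separated label scheme described in the hunk above can be exercised standalone. A minimal self-contained mirror of the `EncodeTuple`/`LabelTuple` pair (same conventions, including backward-compatible decoding of the shorter pre-ASN labels):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// tuple6 mirrors the store's six-dimension aggregation key.
type tuple6 struct {
	Website, Prefix, URI, Status string
	IsTor                        bool
	ASN                          int32
}

// encode joins the six fields with NUL separators.
func encode(t tuple6) string {
	tor := "0"
	if t.IsTor {
		tor = "1"
	}
	return strings.Join([]string{t.Website, t.Prefix, t.URI, t.Status, tor, strconv.Itoa(int(t.ASN))}, "\x00")
}

// decode accepts 4-, 5- or 6-field labels; missing trailing fields
// fall back to zero values, so old snapshots stay readable.
func decode(label string) tuple6 {
	parts := strings.SplitN(label, "\x00", 6)
	if len(parts) < 4 {
		return tuple6{}
	}
	t := tuple6{Website: parts[0], Prefix: parts[1], URI: parts[2], Status: parts[3]}
	if len(parts) >= 5 {
		t.IsTor = parts[4] == "1"
	}
	if len(parts) == 6 {
		if n, err := strconv.Atoi(parts[5]); err == nil {
			t.ASN = int32(n)
		}
	}
	return t
}

func main() {
	orig := tuple6{Website: "a.com", Prefix: "1.2.3.0/24", URI: "/x", Status: "200", ASN: 8298}
	fmt.Println(decode(encode(orig)) == orig)                    // roundtrip preserves all six fields
	fmt.Println(decode("a.com\x001.2.3.0/24\x00/x\x00200").ASN) // old 4-field label decodes with ASN 0
}
```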
View File

@@ -83,10 +83,10 @@ func compiledEQ(status int32) *CompiledFilter {
 }

 func TestMatchesFilterNil(t *testing.T) {
-	if !MatchesFilter(Tuple5{Website: "x"}, nil) {
+	if !MatchesFilter(Tuple6{Website: "x"}, nil) {
 		t.Fatal("nil filter should match everything")
 	}
-	if !MatchesFilter(Tuple5{Website: "x"}, &CompiledFilter{}) {
+	if !MatchesFilter(Tuple6{Website: "x"}, &CompiledFilter{}) {
 		t.Fatal("empty compiled filter should match everything")
 	}
 }
@@ -94,10 +94,10 @@ func TestMatchesFilterNil(t *testing.T) {
 func TestMatchesFilterExactWebsite(t *testing.T) {
 	w := "example.com"
 	cf := CompileFilter(&pb.Filter{Website: &w})
-	if !MatchesFilter(Tuple5{Website: "example.com"}, cf) {
+	if !MatchesFilter(Tuple6{Website: "example.com"}, cf) {
 		t.Fatal("expected match")
 	}
-	if MatchesFilter(Tuple5{Website: "other.com"}, cf) {
+	if MatchesFilter(Tuple6{Website: "other.com"}, cf) {
 		t.Fatal("expected no match")
 	}
 }
@@ -105,10 +105,10 @@ func TestMatchesFilterExactWebsite(t *testing.T) {
 func TestMatchesFilterWebsiteRegex(t *testing.T) {
 	re := "gouda.*"
 	cf := CompileFilter(&pb.Filter{WebsiteRegex: &re})
-	if !MatchesFilter(Tuple5{Website: "gouda.example.com"}, cf) {
+	if !MatchesFilter(Tuple6{Website: "gouda.example.com"}, cf) {
 		t.Fatal("expected match")
 	}
-	if MatchesFilter(Tuple5{Website: "edam.example.com"}, cf) {
+	if MatchesFilter(Tuple6{Website: "edam.example.com"}, cf) {
 		t.Fatal("expected no match")
 	}
 }
@@ -116,10 +116,10 @@ func TestMatchesFilterWebsiteRegex(t *testing.T) {
 func TestMatchesFilterURIRegex(t *testing.T) {
 	re := "^/api/.*"
 	cf := CompileFilter(&pb.Filter{UriRegex: &re})
-	if !MatchesFilter(Tuple5{URI: "/api/users"}, cf) {
+	if !MatchesFilter(Tuple6{URI: "/api/users"}, cf) {
 		t.Fatal("expected match")
 	}
-	if MatchesFilter(Tuple5{URI: "/health"}, cf) {
+	if MatchesFilter(Tuple6{URI: "/health"}, cf) {
 		t.Fatal("expected no match")
 	}
 }
@@ -127,17 +127,17 @@ func TestMatchesFilterURIRegex(t *testing.T) {
 func TestMatchesFilterInvalidRegexMatchesNothing(t *testing.T) {
 	re := "[invalid"
 	cf := CompileFilter(&pb.Filter{WebsiteRegex: &re})
-	if MatchesFilter(Tuple5{Website: "anything"}, cf) {
+	if MatchesFilter(Tuple6{Website: "anything"}, cf) {
 		t.Fatal("invalid regex should match nothing")
 	}
 }

 func TestMatchesFilterStatusEQ(t *testing.T) {
 	cf := compiledEQ(200)
-	if !MatchesFilter(Tuple5{Status: "200"}, cf) {
+	if !MatchesFilter(Tuple6{Status: "200"}, cf) {
 		t.Fatal("expected match")
 	}
-	if MatchesFilter(Tuple5{Status: "404"}, cf) {
+	if MatchesFilter(Tuple6{Status: "404"}, cf) {
 		t.Fatal("expected no match")
 	}
 }
@@ -145,10 +145,10 @@ func TestMatchesFilterStatusEQ(t *testing.T) {
 func TestMatchesFilterStatusNE(t *testing.T) {
 	v := int32(200)
 	cf := CompileFilter(&pb.Filter{HttpResponse: &v, StatusOp: pb.StatusOp_NE})
-	if MatchesFilter(Tuple5{Status: "200"}, cf) {
+	if MatchesFilter(Tuple6{Status: "200"}, cf) {
 		t.Fatal("expected no match for 200 != 200")
 	}
-	if !MatchesFilter(Tuple5{Status: "404"}, cf) {
+	if !MatchesFilter(Tuple6{Status: "404"}, cf) {
 		t.Fatal("expected match for 404 != 200")
 	}
 }
@@ -156,13 +156,13 @@ func TestMatchesFilterStatusNE(t *testing.T) {
 func TestMatchesFilterStatusGE(t *testing.T) {
 	v := int32(400)
 	cf := CompileFilter(&pb.Filter{HttpResponse: &v, StatusOp: pb.StatusOp_GE})
-	if !MatchesFilter(Tuple5{Status: "400"}, cf) {
+	if !MatchesFilter(Tuple6{Status: "400"}, cf) {
 		t.Fatal("expected match: 400 >= 400")
 	}
-	if !MatchesFilter(Tuple5{Status: "500"}, cf) {
+	if !MatchesFilter(Tuple6{Status: "500"}, cf) {
 		t.Fatal("expected match: 500 >= 400")
 	}
-	if MatchesFilter(Tuple5{Status: "200"}, cf) {
+	if MatchesFilter(Tuple6{Status: "200"}, cf) {
 		t.Fatal("expected no match: 200 >= 400")
 	}
 }
@@ -170,17 +170,17 @@ func TestMatchesFilterStatusGE(t *testing.T) {
 func TestMatchesFilterStatusLT(t *testing.T) {
 	v := int32(400)
 	cf := CompileFilter(&pb.Filter{HttpResponse: &v, StatusOp: pb.StatusOp_LT})
-	if !MatchesFilter(Tuple5{Status: "200"}, cf) {
+	if !MatchesFilter(Tuple6{Status: "200"}, cf) {
 		t.Fatal("expected match: 200 < 400")
 	}
-	if MatchesFilter(Tuple5{Status: "400"}, cf) {
+	if MatchesFilter(Tuple6{Status: "400"}, cf) {
 		t.Fatal("expected no match: 400 < 400")
 	}
 }

 func TestMatchesFilterStatusNonNumeric(t *testing.T) {
 	cf := compiledEQ(200)
-	if MatchesFilter(Tuple5{Status: "ok"}, cf) {
+	if MatchesFilter(Tuple6{Status: "ok"}, cf) {
 		t.Fatal("non-numeric status should not match")
 	}
 }
@@ -193,13 +193,13 @@ func TestMatchesFilterCombined(t *testing.T) {
 		HttpResponse: &v,
 		StatusOp:     pb.StatusOp_EQ,
 	})
-	if !MatchesFilter(Tuple5{Website: "example.com", Status: "200"}, cf) {
+	if !MatchesFilter(Tuple6{Website: "example.com", Status: "200"}, cf) {
 		t.Fatal("expected match")
 	}
-	if MatchesFilter(Tuple5{Website: "other.com", Status: "200"}, cf) {
+	if MatchesFilter(Tuple6{Website: "other.com", Status: "200"}, cf) {
 		t.Fatal("expected no match: wrong website")
 	}
-	if MatchesFilter(Tuple5{Website: "example.com", Status: "404"}, cf) {
+	if MatchesFilter(Tuple6{Website: "example.com", Status: "404"}, cf) {
 		t.Fatal("expected no match: wrong status")
 	}
 }
@@ -208,7 +208,7 @@ func TestMatchesFilterCombined(t *testing.T) {
 func TestEncodeLabelTupleRoundtripWithTor(t *testing.T) {
 	for _, isTor := range []bool{false, true} {
-		orig := Tuple5{Website: "a.com", Prefix: "1.2.3.0/24", URI: "/x", Status: "200", IsTor: isTor}
+		orig := Tuple6{Website: "a.com", Prefix: "1.2.3.0/24", URI: "/x", Status: "200", IsTor: isTor}
 		got := LabelTuple(EncodeTuple(orig))
 		if got != orig {
 			t.Errorf("roundtrip mismatch: got %+v, want %+v", got, orig)
@@ -230,30 +230,108 @@ func TestLabelTupleBackwardCompat(t *testing.T) {
 func TestMatchesFilterTorYes(t *testing.T) {
 	cf := CompileFilter(&pb.Filter{Tor: pb.TorFilter_TOR_YES})
-	if !MatchesFilter(Tuple5{IsTor: true}, cf) {
+	if !MatchesFilter(Tuple6{IsTor: true}, cf) {
 		t.Fatal("TOR_YES should match TOR tuple")
 	}
-	if MatchesFilter(Tuple5{IsTor: false}, cf) {
+	if MatchesFilter(Tuple6{IsTor: false}, cf) {
 		t.Fatal("TOR_YES should not match non-TOR tuple")
 	}
 }

 func TestMatchesFilterTorNo(t *testing.T) {
 	cf := CompileFilter(&pb.Filter{Tor: pb.TorFilter_TOR_NO})
-	if !MatchesFilter(Tuple5{IsTor: false}, cf) {
+	if !MatchesFilter(Tuple6{IsTor: false}, cf) {
 		t.Fatal("TOR_NO should match non-TOR tuple")
 	}
-	if MatchesFilter(Tuple5{IsTor: true}, cf) {
+	if MatchesFilter(Tuple6{IsTor: true}, cf) {
 		t.Fatal("TOR_NO should not match TOR tuple")
 	}
 }

 func TestMatchesFilterTorAny(t *testing.T) {
 	cf := CompileFilter(&pb.Filter{Tor: pb.TorFilter_TOR_ANY})
-	if !MatchesFilter(Tuple5{IsTor: true}, cf) {
+	if !MatchesFilter(Tuple6{IsTor: true}, cf) {
 		t.Fatal("TOR_ANY should match TOR tuple")
 	}
-	if !MatchesFilter(Tuple5{IsTor: false}, cf) {
+	if !MatchesFilter(Tuple6{IsTor: false}, cf) {
 		t.Fatal("TOR_ANY should match non-TOR tuple")
 	}
 }
+
+// --- ASN label encoding, filtering, and DimensionLabel ---
+
+func TestEncodeLabelTupleRoundtripWithASN(t *testing.T) {
+	for _, asn := range []int32{0, 1, 12345, 65535} {
+		orig := Tuple6{Website: "a.com", Prefix: "1.2.3.0/24", URI: "/x", Status: "200", ASN: asn}
+		got := LabelTuple(EncodeTuple(orig))
+		if got != orig {
+			t.Errorf("roundtrip mismatch for ASN=%d: got %+v, want %+v", asn, got, orig)
+		}
+	}
+}
+
+func TestLabelTupleBackwardCompatNoASN(t *testing.T) {
+	// 5-field label (no asn field) should decode with ASN=0.
+	label := "a.com\x001.2.3.0/24\x00/x\x00200\x000"
+	got := LabelTuple(label)
+	if got.ASN != 0 {
+		t.Errorf("expected ASN=0 for 5-field label, got %d", got.ASN)
+	}
+}
+
+func TestMatchesFilterAsnEQ(t *testing.T) {
+	n := int32(12345)
+	cf := CompileFilter(&pb.Filter{AsnNumber: &n})
+	if !MatchesFilter(Tuple6{ASN: 12345}, cf) {
+		t.Fatal("EQ should match equal ASN")
+	}
+	if MatchesFilter(Tuple6{ASN: 99999}, cf) {
+		t.Fatal("EQ should not match different ASN")
+	}
+}
+
+func TestMatchesFilterAsnNE(t *testing.T) {
+	n := int32(12345)
+	cf := CompileFilter(&pb.Filter{AsnNumber: &n, AsnOp: pb.StatusOp_NE})
+	if MatchesFilter(Tuple6{ASN: 12345}, cf) {
+		t.Fatal("NE should not match equal ASN")
+	}
+	if !MatchesFilter(Tuple6{ASN: 99999}, cf) {
+		t.Fatal("NE should match different ASN")
+	}
+}
+
+func TestMatchesFilterAsnGE(t *testing.T) {
+	n := int32(1000)
+	cf := CompileFilter(&pb.Filter{AsnNumber: &n, AsnOp: pb.StatusOp_GE})
+	if !MatchesFilter(Tuple6{ASN: 1000}, cf) {
+		t.Fatal("GE should match equal ASN")
+	}
+	if !MatchesFilter(Tuple6{ASN: 2000}, cf) {
+		t.Fatal("GE should match larger ASN")
+	}
+	if MatchesFilter(Tuple6{ASN: 500}, cf) {
+		t.Fatal("GE should not match smaller ASN")
+	}
+}
+
+func TestMatchesFilterAsnLT(t *testing.T) {
+	n := int32(64512)
+	cf := CompileFilter(&pb.Filter{AsnNumber: &n, AsnOp: pb.StatusOp_LT})
+	if !MatchesFilter(Tuple6{ASN: 1000}, cf) {
+		t.Fatal("LT should match smaller ASN")
+	}
+	if MatchesFilter(Tuple6{ASN: 64512}, cf) {
+		t.Fatal("LT should not match equal ASN")
+	}
+	if MatchesFilter(Tuple6{ASN: 65535}, cf) {
+		t.Fatal("LT should not match larger ASN")
+	}
+}
+
+func TestDimensionLabelASN(t *testing.T) {
+	got := DimensionLabel(Tuple6{ASN: 12345}, pb.GroupBy_ASN_NUMBER)
+	if got != "12345" {
+		t.Errorf("DimensionLabel ASN: got %q, want %q", got, "12345")
+	}
+}

View File

@@ -139,6 +139,7 @@ const (
 	GroupBy_CLIENT_PREFIX GroupBy = 1
 	GroupBy_REQUEST_URI   GroupBy = 2
 	GroupBy_HTTP_RESPONSE GroupBy = 3
+	GroupBy_ASN_NUMBER    GroupBy = 4
 )
// Enum value maps for GroupBy. // Enum value maps for GroupBy.
@@ -148,12 +149,14 @@ var (
 		1: "CLIENT_PREFIX",
 		2: "REQUEST_URI",
 		3: "HTTP_RESPONSE",
+		4: "ASN_NUMBER",
 	}
 	GroupBy_value = map[string]int32{
 		"WEBSITE":       0,
 		"CLIENT_PREFIX": 1,
 		"REQUEST_URI":   2,
 		"HTTP_RESPONSE": 3,
+		"ASN_NUMBER":    4,
 	}
 )
@@ -254,6 +257,8 @@ type Filter struct {
 	WebsiteRegex  *string   `protobuf:"bytes,6,opt,name=website_regex,json=websiteRegex,proto3,oneof" json:"website_regex,omitempty"` // RE2 regex matched against website
 	UriRegex      *string   `protobuf:"bytes,7,opt,name=uri_regex,json=uriRegex,proto3,oneof" json:"uri_regex,omitempty"`             // RE2 regex matched against http_request_uri
 	Tor           TorFilter `protobuf:"varint,8,opt,name=tor,proto3,enum=logtail.TorFilter" json:"tor,omitempty"`                     // restrict to TOR / non-TOR clients
+	AsnNumber     *int32    `protobuf:"varint,9,opt,name=asn_number,json=asnNumber,proto3,oneof" json:"asn_number,omitempty"`         // filter by client ASN
+	AsnOp         StatusOp  `protobuf:"varint,10,opt,name=asn_op,json=asnOp,proto3,enum=logtail.StatusOp" json:"asn_op,omitempty"`    // operator for asn_number; ignored when unset
 	unknownFields protoimpl.UnknownFields
 	sizeCache     protoimpl.SizeCache
 }
@@ -344,6 +349,20 @@ func (x *Filter) GetTor() TorFilter {
 	return TorFilter_TOR_ANY
 }

+func (x *Filter) GetAsnNumber() int32 {
+	if x != nil && x.AsnNumber != nil {
+		return *x.AsnNumber
+	}
+	return 0
+}
+
+func (x *Filter) GetAsnOp() StatusOp {
+	if x != nil {
+		return x.AsnOp
+	}
+	return StatusOp_EQ
+}
+
 type TopNRequest struct {
 	state  protoimpl.MessageState `protogen:"open.v1"`
 	Filter *Filter                `protobuf:"bytes,1,opt,name=filter,proto3" json:"filter,omitempty"`
@@ -904,7 +923,7 @@ var File_proto_logtail_proto protoreflect.FileDescriptor
const file_proto_logtail_proto_rawDesc = "" + const file_proto_logtail_proto_rawDesc = "" +
"\n" + "\n" +
"\x13proto/logtail.proto\x12\alogtail\"\xb1\x03\n" + "\x13proto/logtail.proto\x12\alogtail\"\x8e\x04\n" +
"\x06Filter\x12\x1d\n" + "\x06Filter\x12\x1d\n" +
"\awebsite\x18\x01 \x01(\tH\x00R\awebsite\x88\x01\x01\x12(\n" + "\awebsite\x18\x01 \x01(\tH\x00R\awebsite\x88\x01\x01\x12(\n" +
"\rclient_prefix\x18\x02 \x01(\tH\x01R\fclientPrefix\x88\x01\x01\x12-\n" + "\rclient_prefix\x18\x02 \x01(\tH\x01R\fclientPrefix\x88\x01\x01\x12-\n" +
@@ -913,7 +932,11 @@ const file_proto_logtail_proto_rawDesc = "" +
"\tstatus_op\x18\x05 \x01(\x0e2\x11.logtail.StatusOpR\bstatusOp\x12(\n" + "\tstatus_op\x18\x05 \x01(\x0e2\x11.logtail.StatusOpR\bstatusOp\x12(\n" +
"\rwebsite_regex\x18\x06 \x01(\tH\x04R\fwebsiteRegex\x88\x01\x01\x12 \n" + "\rwebsite_regex\x18\x06 \x01(\tH\x04R\fwebsiteRegex\x88\x01\x01\x12 \n" +
"\turi_regex\x18\a \x01(\tH\x05R\buriRegex\x88\x01\x01\x12$\n" + "\turi_regex\x18\a \x01(\tH\x05R\buriRegex\x88\x01\x01\x12$\n" +
"\x03tor\x18\b \x01(\x0e2\x12.logtail.TorFilterR\x03torB\n" + "\x03tor\x18\b \x01(\x0e2\x12.logtail.TorFilterR\x03tor\x12\"\n" +
"\n" +
"asn_number\x18\t \x01(\x05H\x06R\tasnNumber\x88\x01\x01\x12(\n" +
"\x06asn_op\x18\n" +
" \x01(\x0e2\x11.logtail.StatusOpR\x05asnOpB\n" +
"\n" + "\n" +
"\b_websiteB\x10\n" + "\b_websiteB\x10\n" +
"\x0e_client_prefixB\x13\n" + "\x0e_client_prefixB\x13\n" +
@@ -921,7 +944,8 @@ const file_proto_logtail_proto_rawDesc = "" +
"\x0e_http_responseB\x10\n" + "\x0e_http_responseB\x10\n" +
"\x0e_website_regexB\f\n" + "\x0e_website_regexB\f\n" +
"\n" + "\n" +
"_uri_regex\"\x9a\x01\n" + "_uri_regexB\r\n" +
"\v_asn_number\"\x9a\x01\n" +
"\vTopNRequest\x12'\n" + "\vTopNRequest\x12'\n" +
"\x06filter\x18\x01 \x01(\v2\x0f.logtail.FilterR\x06filter\x12+\n" + "\x06filter\x18\x01 \x01(\v2\x0f.logtail.FilterR\x06filter\x12+\n" +
"\bgroup_by\x18\x02 \x01(\x0e2\x10.logtail.GroupByR\agroupBy\x12\f\n" + "\bgroup_by\x18\x02 \x01(\x0e2\x10.logtail.GroupByR\agroupBy\x12\f\n" +
@@ -966,12 +990,14 @@ const file_proto_logtail_proto_rawDesc = "" +
"\x02GT\x10\x02\x12\x06\n" + "\x02GT\x10\x02\x12\x06\n" +
"\x02GE\x10\x03\x12\x06\n" + "\x02GE\x10\x03\x12\x06\n" +
"\x02LT\x10\x04\x12\x06\n" + "\x02LT\x10\x04\x12\x06\n" +
"\x02LE\x10\x05*M\n" + "\x02LE\x10\x05*]\n" +
"\aGroupBy\x12\v\n" + "\aGroupBy\x12\v\n" +
"\aWEBSITE\x10\x00\x12\x11\n" + "\aWEBSITE\x10\x00\x12\x11\n" +
"\rCLIENT_PREFIX\x10\x01\x12\x0f\n" + "\rCLIENT_PREFIX\x10\x01\x12\x0f\n" +
"\vREQUEST_URI\x10\x02\x12\x11\n" + "\vREQUEST_URI\x10\x02\x12\x11\n" +
"\rHTTP_RESPONSE\x10\x03*A\n" + "\rHTTP_RESPONSE\x10\x03\x12\x0e\n" +
"\n" +
"ASN_NUMBER\x10\x04*A\n" +
"\x06Window\x12\a\n" + "\x06Window\x12\a\n" +
"\x03W1M\x10\x00\x12\a\n" + "\x03W1M\x10\x00\x12\a\n" +
"\x03W5M\x10\x01\x12\b\n" + "\x03W5M\x10\x01\x12\b\n" +
@@ -1020,28 +1046,29 @@ var file_proto_logtail_proto_goTypes = []any{
var file_proto_logtail_proto_depIdxs = []int32{ var file_proto_logtail_proto_depIdxs = []int32{
1, // 0: logtail.Filter.status_op:type_name -> logtail.StatusOp 1, // 0: logtail.Filter.status_op:type_name -> logtail.StatusOp
0, // 1: logtail.Filter.tor:type_name -> logtail.TorFilter 0, // 1: logtail.Filter.tor:type_name -> logtail.TorFilter
4, // 2: logtail.TopNRequest.filter:type_name -> logtail.Filter 1, // 2: logtail.Filter.asn_op:type_name -> logtail.StatusOp
2, // 3: logtail.TopNRequest.group_by:type_name -> logtail.GroupBy 4, // 3: logtail.TopNRequest.filter:type_name -> logtail.Filter
3, // 4: logtail.TopNRequest.window:type_name -> logtail.Window 2, // 4: logtail.TopNRequest.group_by:type_name -> logtail.GroupBy
6, // 5: logtail.TopNResponse.entries:type_name -> logtail.TopNEntry 3, // 5: logtail.TopNRequest.window:type_name -> logtail.Window
4, // 6: logtail.TrendRequest.filter:type_name -> logtail.Filter 6, // 6: logtail.TopNResponse.entries:type_name -> logtail.TopNEntry
3, // 7: logtail.TrendRequest.window:type_name -> logtail.Window 4, // 7: logtail.TrendRequest.filter:type_name -> logtail.Filter
9, // 8: logtail.TrendResponse.points:type_name -> logtail.TrendPoint 3, // 8: logtail.TrendRequest.window:type_name -> logtail.Window
6, // 9: logtail.Snapshot.entries:type_name -> logtail.TopNEntry 9, // 9: logtail.TrendResponse.points:type_name -> logtail.TrendPoint
14, // 10: logtail.ListTargetsResponse.targets:type_name -> logtail.TargetInfo 6, // 10: logtail.Snapshot.entries:type_name -> logtail.TopNEntry
5, // 11: logtail.LogtailService.TopN:input_type -> logtail.TopNRequest 14, // 11: logtail.ListTargetsResponse.targets:type_name -> logtail.TargetInfo
8, // 12: logtail.LogtailService.Trend:input_type -> logtail.TrendRequest 5, // 12: logtail.LogtailService.TopN:input_type -> logtail.TopNRequest
11, // 13: logtail.LogtailService.StreamSnapshots:input_type -> logtail.SnapshotRequest 8, // 13: logtail.LogtailService.Trend:input_type -> logtail.TrendRequest
13, // 14: logtail.LogtailService.ListTargets:input_type -> logtail.ListTargetsRequest 11, // 14: logtail.LogtailService.StreamSnapshots:input_type -> logtail.SnapshotRequest
7, // 15: logtail.LogtailService.TopN:output_type -> logtail.TopNResponse 13, // 15: logtail.LogtailService.ListTargets:input_type -> logtail.ListTargetsRequest
10, // 16: logtail.LogtailService.Trend:output_type -> logtail.TrendResponse 7, // 16: logtail.LogtailService.TopN:output_type -> logtail.TopNResponse
12, // 17: logtail.LogtailService.StreamSnapshots:output_type -> logtail.Snapshot 10, // 17: logtail.LogtailService.Trend:output_type -> logtail.TrendResponse
15, // 18: logtail.LogtailService.ListTargets:output_type -> logtail.ListTargetsResponse 12, // 18: logtail.LogtailService.StreamSnapshots:output_type -> logtail.Snapshot
15, // [15:19] is the sub-list for method output_type 15, // 19: logtail.LogtailService.ListTargets:output_type -> logtail.ListTargetsResponse
11, // [11:15] is the sub-list for method input_type 16, // [16:20] is the sub-list for method output_type
11, // [11:11] is the sub-list for extension type_name 12, // [12:16] is the sub-list for method input_type
11, // [11:11] is the sub-list for extension extendee 12, // [12:12] is the sub-list for extension type_name
0, // [0:11] is the sub-list for field type_name 12, // [12:12] is the sub-list for extension extendee
0, // [0:12] is the sub-list for field type_name
} }
func init() { file_proto_logtail_proto_init() } func init() { file_proto_logtail_proto_init() }

View File

@@ -34,6 +34,8 @@ message Filter {
optional string website_regex = 6; // RE2 regex matched against website optional string website_regex = 6; // RE2 regex matched against website
optional string uri_regex = 7; // RE2 regex matched against http_request_uri optional string uri_regex = 7; // RE2 regex matched against http_request_uri
TorFilter tor = 8; // restrict to TOR / non-TOR clients TorFilter tor = 8; // restrict to TOR / non-TOR clients
optional int32 asn_number = 9; // filter by client ASN
StatusOp asn_op = 10; // operator for asn_number; ignored when unset
} }
enum GroupBy { enum GroupBy {
@@ -41,6 +43,7 @@ enum GroupBy {
CLIENT_PREFIX = 1; CLIENT_PREFIX = 1;
REQUEST_URI = 2; REQUEST_URI = 2;
HTTP_RESPONSE = 3; HTTP_RESPONSE = 3;
ASN_NUMBER = 4;
} }
enum Window { enum Window {

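The new `asn_number`/`asn_op` pair reuses the existing `StatusOp` enum for comparisons, and the comment notes the operator is ignored when `asn_number` is unset. A minimal sketch of those matching semantics, using local stand-in types rather than the generated `logtail` package (the enum values below assume the same EQ..LE ordering as the proto):

```go
package main

import "fmt"

// StatusOp mirrors the proto enum that asn_op reuses.
type StatusOp int

const (
	EQ StatusOp = iota
	NE
	GT
	GE
	LT
	LE
)

// matchASN applies op to the client ASN. A nil want means the
// asn_number filter is unset, so asn_op is ignored and everything matches.
func matchASN(clientASN int32, want *int32, op StatusOp) bool {
	if want == nil {
		return true
	}
	switch op {
	case EQ:
		return clientASN == *want
	case NE:
		return clientASN != *want
	case GT:
		return clientASN > *want
	case GE:
		return clientASN >= *want
	case LT:
		return clientASN < *want
	case LE:
		return clientASN <= *want
	}
	return false
}

func main() {
	asn := int32(8298)
	fmt.Println(matchASN(8298, &asn, EQ))  // true
	fmt.Println(matchASN(15169, &asn, EQ)) // false
	fmt.Println(matchASN(15169, nil, EQ))  // true: filter unset
}
```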
View File

@@ -139,6 +139,7 @@ const (
GroupBy_CLIENT_PREFIX GroupBy = 1 GroupBy_CLIENT_PREFIX GroupBy = 1
GroupBy_REQUEST_URI GroupBy = 2 GroupBy_REQUEST_URI GroupBy = 2
GroupBy_HTTP_RESPONSE GroupBy = 3 GroupBy_HTTP_RESPONSE GroupBy = 3
GroupBy_ASN_NUMBER GroupBy = 4
) )
// Enum value maps for GroupBy. // Enum value maps for GroupBy.
@@ -148,12 +149,14 @@ var (
1: "CLIENT_PREFIX", 1: "CLIENT_PREFIX",
2: "REQUEST_URI", 2: "REQUEST_URI",
3: "HTTP_RESPONSE", 3: "HTTP_RESPONSE",
4: "ASN_NUMBER",
} }
GroupBy_value = map[string]int32{ GroupBy_value = map[string]int32{
"WEBSITE": 0, "WEBSITE": 0,
"CLIENT_PREFIX": 1, "CLIENT_PREFIX": 1,
"REQUEST_URI": 2, "REQUEST_URI": 2,
"HTTP_RESPONSE": 3, "HTTP_RESPONSE": 3,
"ASN_NUMBER": 4,
} }
) )
@@ -254,6 +257,8 @@ type Filter struct {
WebsiteRegex *string `protobuf:"bytes,6,opt,name=website_regex,json=websiteRegex,proto3,oneof" json:"website_regex,omitempty"` // RE2 regex matched against website WebsiteRegex *string `protobuf:"bytes,6,opt,name=website_regex,json=websiteRegex,proto3,oneof" json:"website_regex,omitempty"` // RE2 regex matched against website
UriRegex *string `protobuf:"bytes,7,opt,name=uri_regex,json=uriRegex,proto3,oneof" json:"uri_regex,omitempty"` // RE2 regex matched against http_request_uri UriRegex *string `protobuf:"bytes,7,opt,name=uri_regex,json=uriRegex,proto3,oneof" json:"uri_regex,omitempty"` // RE2 regex matched against http_request_uri
Tor TorFilter `protobuf:"varint,8,opt,name=tor,proto3,enum=logtail.TorFilter" json:"tor,omitempty"` // restrict to TOR / non-TOR clients Tor TorFilter `protobuf:"varint,8,opt,name=tor,proto3,enum=logtail.TorFilter" json:"tor,omitempty"` // restrict to TOR / non-TOR clients
AsnNumber *int32 `protobuf:"varint,9,opt,name=asn_number,json=asnNumber,proto3,oneof" json:"asn_number,omitempty"` // filter by client ASN
AsnOp StatusOp `protobuf:"varint,10,opt,name=asn_op,json=asnOp,proto3,enum=logtail.StatusOp" json:"asn_op,omitempty"` // operator for asn_number; ignored when unset
unknownFields protoimpl.UnknownFields unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache sizeCache protoimpl.SizeCache
} }
@@ -344,6 +349,20 @@ func (x *Filter) GetTor() TorFilter {
return TorFilter_TOR_ANY return TorFilter_TOR_ANY
} }
func (x *Filter) GetAsnNumber() int32 {
if x != nil && x.AsnNumber != nil {
return *x.AsnNumber
}
return 0
}
func (x *Filter) GetAsnOp() StatusOp {
if x != nil {
return x.AsnOp
}
return StatusOp_EQ
}
type TopNRequest struct { type TopNRequest struct {
state protoimpl.MessageState `protogen:"open.v1"` state protoimpl.MessageState `protogen:"open.v1"`
Filter *Filter `protobuf:"bytes,1,opt,name=filter,proto3" json:"filter,omitempty"` Filter *Filter `protobuf:"bytes,1,opt,name=filter,proto3" json:"filter,omitempty"`
@@ -904,7 +923,7 @@ var File_proto_logtail_proto protoreflect.FileDescriptor
const file_proto_logtail_proto_rawDesc = "" + const file_proto_logtail_proto_rawDesc = "" +
"\n" + "\n" +
"\x13proto/logtail.proto\x12\alogtail\"\xb1\x03\n" + "\x13proto/logtail.proto\x12\alogtail\"\x8e\x04\n" +
"\x06Filter\x12\x1d\n" + "\x06Filter\x12\x1d\n" +
"\awebsite\x18\x01 \x01(\tH\x00R\awebsite\x88\x01\x01\x12(\n" + "\awebsite\x18\x01 \x01(\tH\x00R\awebsite\x88\x01\x01\x12(\n" +
"\rclient_prefix\x18\x02 \x01(\tH\x01R\fclientPrefix\x88\x01\x01\x12-\n" + "\rclient_prefix\x18\x02 \x01(\tH\x01R\fclientPrefix\x88\x01\x01\x12-\n" +
@@ -913,7 +932,11 @@ const file_proto_logtail_proto_rawDesc = "" +
"\tstatus_op\x18\x05 \x01(\x0e2\x11.logtail.StatusOpR\bstatusOp\x12(\n" + "\tstatus_op\x18\x05 \x01(\x0e2\x11.logtail.StatusOpR\bstatusOp\x12(\n" +
"\rwebsite_regex\x18\x06 \x01(\tH\x04R\fwebsiteRegex\x88\x01\x01\x12 \n" + "\rwebsite_regex\x18\x06 \x01(\tH\x04R\fwebsiteRegex\x88\x01\x01\x12 \n" +
"\turi_regex\x18\a \x01(\tH\x05R\buriRegex\x88\x01\x01\x12$\n" + "\turi_regex\x18\a \x01(\tH\x05R\buriRegex\x88\x01\x01\x12$\n" +
"\x03tor\x18\b \x01(\x0e2\x12.logtail.TorFilterR\x03torB\n" + "\x03tor\x18\b \x01(\x0e2\x12.logtail.TorFilterR\x03tor\x12\"\n" +
"\n" +
"asn_number\x18\t \x01(\x05H\x06R\tasnNumber\x88\x01\x01\x12(\n" +
"\x06asn_op\x18\n" +
" \x01(\x0e2\x11.logtail.StatusOpR\x05asnOpB\n" +
"\n" + "\n" +
"\b_websiteB\x10\n" + "\b_websiteB\x10\n" +
"\x0e_client_prefixB\x13\n" + "\x0e_client_prefixB\x13\n" +
@@ -921,7 +944,8 @@ const file_proto_logtail_proto_rawDesc = "" +
"\x0e_http_responseB\x10\n" + "\x0e_http_responseB\x10\n" +
"\x0e_website_regexB\f\n" + "\x0e_website_regexB\f\n" +
"\n" + "\n" +
"_uri_regex\"\x9a\x01\n" + "_uri_regexB\r\n" +
"\v_asn_number\"\x9a\x01\n" +
"\vTopNRequest\x12'\n" + "\vTopNRequest\x12'\n" +
"\x06filter\x18\x01 \x01(\v2\x0f.logtail.FilterR\x06filter\x12+\n" + "\x06filter\x18\x01 \x01(\v2\x0f.logtail.FilterR\x06filter\x12+\n" +
"\bgroup_by\x18\x02 \x01(\x0e2\x10.logtail.GroupByR\agroupBy\x12\f\n" + "\bgroup_by\x18\x02 \x01(\x0e2\x10.logtail.GroupByR\agroupBy\x12\f\n" +
@@ -966,12 +990,14 @@ const file_proto_logtail_proto_rawDesc = "" +
"\x02GT\x10\x02\x12\x06\n" + "\x02GT\x10\x02\x12\x06\n" +
"\x02GE\x10\x03\x12\x06\n" + "\x02GE\x10\x03\x12\x06\n" +
"\x02LT\x10\x04\x12\x06\n" + "\x02LT\x10\x04\x12\x06\n" +
"\x02LE\x10\x05*M\n" + "\x02LE\x10\x05*]\n" +
"\aGroupBy\x12\v\n" + "\aGroupBy\x12\v\n" +
"\aWEBSITE\x10\x00\x12\x11\n" + "\aWEBSITE\x10\x00\x12\x11\n" +
"\rCLIENT_PREFIX\x10\x01\x12\x0f\n" + "\rCLIENT_PREFIX\x10\x01\x12\x0f\n" +
"\vREQUEST_URI\x10\x02\x12\x11\n" + "\vREQUEST_URI\x10\x02\x12\x11\n" +
"\rHTTP_RESPONSE\x10\x03*A\n" + "\rHTTP_RESPONSE\x10\x03\x12\x0e\n" +
"\n" +
"ASN_NUMBER\x10\x04*A\n" +
"\x06Window\x12\a\n" + "\x06Window\x12\a\n" +
"\x03W1M\x10\x00\x12\a\n" + "\x03W1M\x10\x00\x12\a\n" +
"\x03W5M\x10\x01\x12\b\n" + "\x03W5M\x10\x01\x12\b\n" +
@@ -1020,28 +1046,29 @@ var file_proto_logtail_proto_goTypes = []any{
var file_proto_logtail_proto_depIdxs = []int32{ var file_proto_logtail_proto_depIdxs = []int32{
1, // 0: logtail.Filter.status_op:type_name -> logtail.StatusOp 1, // 0: logtail.Filter.status_op:type_name -> logtail.StatusOp
0, // 1: logtail.Filter.tor:type_name -> logtail.TorFilter 0, // 1: logtail.Filter.tor:type_name -> logtail.TorFilter
4, // 2: logtail.TopNRequest.filter:type_name -> logtail.Filter 1, // 2: logtail.Filter.asn_op:type_name -> logtail.StatusOp
2, // 3: logtail.TopNRequest.group_by:type_name -> logtail.GroupBy 4, // 3: logtail.TopNRequest.filter:type_name -> logtail.Filter
3, // 4: logtail.TopNRequest.window:type_name -> logtail.Window 2, // 4: logtail.TopNRequest.group_by:type_name -> logtail.GroupBy
6, // 5: logtail.TopNResponse.entries:type_name -> logtail.TopNEntry 3, // 5: logtail.TopNRequest.window:type_name -> logtail.Window
4, // 6: logtail.TrendRequest.filter:type_name -> logtail.Filter 6, // 6: logtail.TopNResponse.entries:type_name -> logtail.TopNEntry
3, // 7: logtail.TrendRequest.window:type_name -> logtail.Window 4, // 7: logtail.TrendRequest.filter:type_name -> logtail.Filter
9, // 8: logtail.TrendResponse.points:type_name -> logtail.TrendPoint 3, // 8: logtail.TrendRequest.window:type_name -> logtail.Window
6, // 9: logtail.Snapshot.entries:type_name -> logtail.TopNEntry 9, // 9: logtail.TrendResponse.points:type_name -> logtail.TrendPoint
14, // 10: logtail.ListTargetsResponse.targets:type_name -> logtail.TargetInfo 6, // 10: logtail.Snapshot.entries:type_name -> logtail.TopNEntry
5, // 11: logtail.LogtailService.TopN:input_type -> logtail.TopNRequest 14, // 11: logtail.ListTargetsResponse.targets:type_name -> logtail.TargetInfo
8, // 12: logtail.LogtailService.Trend:input_type -> logtail.TrendRequest 5, // 12: logtail.LogtailService.TopN:input_type -> logtail.TopNRequest
11, // 13: logtail.LogtailService.StreamSnapshots:input_type -> logtail.SnapshotRequest 8, // 13: logtail.LogtailService.Trend:input_type -> logtail.TrendRequest
13, // 14: logtail.LogtailService.ListTargets:input_type -> logtail.ListTargetsRequest 11, // 14: logtail.LogtailService.StreamSnapshots:input_type -> logtail.SnapshotRequest
7, // 15: logtail.LogtailService.TopN:output_type -> logtail.TopNResponse 13, // 15: logtail.LogtailService.ListTargets:input_type -> logtail.ListTargetsRequest
10, // 16: logtail.LogtailService.Trend:output_type -> logtail.TrendResponse 7, // 16: logtail.LogtailService.TopN:output_type -> logtail.TopNResponse
12, // 17: logtail.LogtailService.StreamSnapshots:output_type -> logtail.Snapshot 10, // 17: logtail.LogtailService.Trend:output_type -> logtail.TrendResponse
15, // 18: logtail.LogtailService.ListTargets:output_type -> logtail.ListTargetsResponse 12, // 18: logtail.LogtailService.StreamSnapshots:output_type -> logtail.Snapshot
15, // [15:19] is the sub-list for method output_type 15, // 19: logtail.LogtailService.ListTargets:output_type -> logtail.ListTargetsResponse
11, // [11:15] is the sub-list for method input_type 16, // [16:20] is the sub-list for method output_type
11, // [11:11] is the sub-list for extension type_name 12, // [12:16] is the sub-list for method input_type
11, // [11:11] is the sub-list for extension extendee 12, // [12:12] is the sub-list for extension type_name
0, // [0:11] is the sub-list for field type_name 12, // [12:12] is the sub-list for extension extendee
0, // [0:12] is the sub-list for field type_name
} }
func init() { file_proto_logtail_proto_init() } func init() { file_proto_logtail_proto_init() }
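With the new `GroupBy ASN_NUMBER` value, a TopN query can fold the six-tuple counters down to per-ASN totals. A sketch of that aggregation under assumed local types (the real code keys on the shared `Tuple6` in `internal/store`; only the ASN and count fields matter here):

```go
package main

import (
	"fmt"
	"sort"
)

// entry is a stand-in for a Tuple6-keyed counter, reduced to the
// fields relevant to ASN grouping.
type entry struct {
	ASN   uint32
	Count uint64
}

// topByASN sums counts per ASN and returns the n largest groups,
// sorted by descending count.
func topByASN(entries []entry, n int) []entry {
	totals := map[uint32]uint64{}
	for _, e := range entries {
		totals[e.ASN] += e.Count
	}
	out := make([]entry, 0, len(totals))
	for asn, c := range totals {
		out = append(out, entry{ASN: asn, Count: c})
	}
	sort.Slice(out, func(i, j int) bool { return out[i].Count > out[j].Count })
	if len(out) > n {
		out = out[:n]
	}
	return out
}

func main() {
	top := topByASN([]entry{
		{ASN: 8298, Count: 10},
		{ASN: 15169, Count: 7},
		{ASN: 8298, Count: 5},
		{ASN: 3320, Count: 1},
	}, 2)
	fmt.Println(top) // [{8298 15} {15169 7}]
}
```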