Compare commits: a798bb1d1d...c7f8455188 (2 commits)

| Author | SHA1 | Date |
|--------|------|------|
|        | c7f8455188 | |
|        | 30c8c40157 | |

README.md (35 changes)
@@ -48,7 +48,7 @@ nginx-logtail/
 │   └── logtail_grpc.pb.go   # generated: service stubs
 ├── internal/
 │   └── store/
-│       └── store.go         # shared types: Tuple5, Entry, Snapshot, ring helpers
+│       └── store.go         # shared types: Tuple6, Entry, Snapshot, ring helpers
 └── cmd/
     ├── collector/
     │   ├── main.go
@@ -86,7 +86,7 @@ nginx-logtail/

 ## Data Model

-The core unit is a **count keyed by five dimensions**:
+The core unit is a **count keyed by six dimensions**:

 | Field             | Description                                          | Example           |
 |-------------------|------------------------------------------------------|-------------------|
@@ -95,6 +95,7 @@ The core unit is a **count keyed by five dimensions**:
 | `http_request_uri`| `$request_uri` path only — query string stripped     | `/api/v1/search`  |
 | `http_response`   | HTTP status code                                     | `429`             |
 | `is_tor`          | whether the client IP is a TOR exit node             | `1`               |
+| `asn`             | client AS number (MaxMind GeoIP2, 32-bit int)        | `8298`            |

 ## Time Windows & Tiered Ring Buffers

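Because all six dimensions are flat comparable values, the key can be modeled as a plain Go struct used directly as a map key. A minimal sketch of that data model — field names assumed from the diff's `st.Tuple6`, not the project's actual definition:

```go
package main

import "fmt"

// Tuple6 mirrors the six-dimension count key described above.
// Field names are an assumption based on the diff's st.Tuple6.
type Tuple6 struct {
	Website string // $host
	Prefix  string // client /24 (IPv4) or /48 (IPv6) prefix
	URI     string // $request_uri with the query string stripped
	Status  string // HTTP status code, as a string
	IsTor   bool   // client IP is a TOR exit node
	ASN     int32  // client AS number
}

// count increments the entry for key and returns its new value.
func count(m map[Tuple6]int64, key Tuple6) int64 {
	m[key]++ // comparable structs work directly as Go map keys
	return m[key]
}

func main() {
	m := map[Tuple6]int64{}
	key := Tuple6{"example.com", "1.2.3.0/24", "/api/v1/search", "429", true, 8298}
	fmt.Println(count(m, key), count(m, key)) // 1 2
}
```

Two log lines that agree on all six fields land on the same map entry; any difference — including a different ASN — creates a new one.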
@@ -121,8 +122,8 @@ Every 5 minutes: merge last 5 fine snapshots → top-5K → append to coarse ring

 ## Memory Budget (Collector, target ≤ 1 GB)

-Entry size: ~30 B website + ~15 B prefix + ~50 B URI + 3 B status + 1 B is_tor + 8 B count + ~80 B Go map
-overhead ≈ **~187 bytes per entry**.
+Entry size: ~30 B website + ~15 B prefix + ~50 B URI + 3 B status + 1 B is_tor + 4 B asn + 8 B count + ~80 B Go map
+overhead ≈ **~191 bytes per entry**.

 | Structure               | Entries     | Size        |
 |-------------------------|-------------|-------------|
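The revised figure is plain arithmetic — the old per-entry estimate plus 4 bytes for the `int32` asn field. A quick check of the numbers above:

```go
package main

import "fmt"

// entryBytes sums the rough per-entry estimates from the budget above.
// withASN adds the 4-byte int32 asn field introduced in this change.
func entryBytes(withASN bool) int {
	// website + prefix + URI + status + is_tor + count + Go map overhead
	b := 30 + 15 + 50 + 3 + 1 + 8 + 80
	if withASN {
		b += 4 // int32 asn
	}
	return b
}

func main() {
	fmt.Println(entryBytes(false), entryBytes(true)) // 187 191
}
```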
@@ -174,9 +175,11 @@ message Filter {
   optional string website_regex = 6;   // RE2 regex against website
   optional string uri_regex = 7;       // RE2 regex against http_request_uri
   TorFilter tor = 8;                   // TOR_ANY (default) / TOR_YES / TOR_NO
+  optional int32 asn_number = 9;       // filter by client ASN
+  StatusOp asn_op = 10;                // comparison operator for asn_number
 }

-enum GroupBy { WEBSITE = 0; CLIENT_PREFIX = 1; REQUEST_URI = 2; HTTP_RESPONSE = 3; }
+enum GroupBy { WEBSITE = 0; CLIENT_PREFIX = 1; REQUEST_URI = 2; HTTP_RESPONSE = 3; ASN_NUMBER = 4; }
 enum Window { W1M = 0; W5M = 1; W15M = 2; W60M = 3; W6H = 4; W24H = 5; }

 message TopNRequest { Filter filter = 1; GroupBy group_by = 2; int32 n = 3; Window window = 4; }

@@ -230,7 +233,7 @@ service LogtailService {
 - Parses the fixed **logtail** nginx log format — tab-separated, fixed field order, no quoting:

   ```nginx
-  log_format logtail '$host\t$remote_addr\t$msec\t$request_method\t$request_uri\t$status\t$body_bytes_sent\t$request_time\t$is_tor';
+  log_format logtail '$host\t$remote_addr\t$msec\t$request_method\t$request_uri\t$status\t$body_bytes_sent\t$request_time\t$is_tor\t$asn';
   ```

 | # | Field | Used for |
@@ -244,18 +247,21 @@ service LogtailService {
 | 6 | `$body_bytes_sent`| (discarded) |
 | 7 | `$request_time`   | (discarded) |
 | 8 | `$is_tor`         | is_tor      |
+| 9 | `$asn`            | asn         |

-- `strings.SplitN(line, "\t", 9)` — ~50 ns/line. No regex.
+- `strings.SplitN(line, "\t", 10)` — ~50 ns/line. No regex.
 - `$request_uri`: query string discarded at first `?`.
 - `$remote_addr`: truncated to /24 (IPv4) or /48 (IPv6); prefix lengths configurable via flags.
 - `$is_tor`: `1` if the client IP is a TOR exit node, `0` otherwise. Field is optional — lines
   with exactly 8 fields (old format) are accepted and default to `is_tor=false`.
+- `$asn`: client AS number as a decimal integer (from MaxMind GeoIP2). Field is optional —
+  lines without it default to `asn=0`.
 - Lines with fewer than 8 fields are silently skipped.

 ### store.go
 - **Single aggregator goroutine** reads from the channel and updates the live map — no locking on
   the hot path. At 10 K lines/s the goroutine uses <1% CPU.
-- Live map: `map[Tuple5]int64`, hard-capped at 100 K entries (new keys dropped when full).
+- Live map: `map[Tuple6]int64`, hard-capped at 100 K entries (new keys dropped when full).
 - **Minute ticker**: heap-selects top-50K entries, writes snapshot to fine ring, resets live map.
 - Every 5 fine ticks: merge last 5 fine snapshots → top-5K → write to coarse ring.
 - **TopN query**: RLock ring, sum bucket range, apply filter, group by dimension, heap-select top N.
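The tab-split parse with its two optional trailing fields can be sketched as a standalone snippet. This is a simplification of the collector's `ParseLine` as shown in this diff — prefix truncation and the discarded fields are omitted:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseSketch splits one logtail line on tabs (at most 10 fields) and
// treats the trailing is_tor and asn fields as optional, per the rules above.
func parseSketch(line string) (status string, isTor bool, asn int32, ok bool) {
	f := strings.SplitN(line, "\t", 10)
	if len(f) < 8 { // fewer than 8 fields: skip the line
		return "", false, 0, false
	}
	status = f[5]                      // $status
	isTor = len(f) >= 9 && f[8] == "1" // field 9 optional, defaults to false
	if len(f) == 10 {                  // field 10 optional, defaults to 0
		if n, err := strconv.ParseInt(f[9], 10, 32); err == nil {
			asn = int32(n)
		}
	}
	return status, isTor, asn, true
}

func main() {
	s, tor, asn, ok := parseSketch("example.com\t1.2.3.4\t0\tGET\t/\t429\t0\t0.001\t1\t8298")
	fmt.Println(s, tor, asn, ok) // 429 true 8298 true
}
```

An 8-field (old-format) line still parses, with `isTor=false` and `asn=0` — the backward-compatibility behavior the bullets above describe.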
@@ -307,14 +313,17 @@ service LogtailService {

 ### handler.go
 - All filter state in the **URL query string**: `w` (window), `by` (group_by), `f_website`,
-  `f_prefix`, `f_uri`, `f_status`, `f_website_re`, `f_uri_re`, `f_is_tor`, `n`, `target`. No server-side
-  session — URLs are shareable and bookmarkable; multiple operators see independent views.
+  `f_prefix`, `f_uri`, `f_status`, `f_website_re`, `f_uri_re`, `f_is_tor`, `f_asn`, `n`, `target`. No
+  server-side session — URLs are shareable and bookmarkable; multiple operators see independent views.
 - **Filter expression box**: a `q=` parameter carries a mini filter language
   (`status>=400 AND website~=gouda.* AND uri~=^/api/`). On submission the handler parses it
   via `ParseFilterExpr` and redirects to the canonical URL with individual `f_*` params; `q=`
   never appears in the final URL. Parse errors re-render the current page with an inline message.
 - **Status expressions**: `f_status` accepts `200`, `!=200`, `>=400`, `<500`, etc. — parsed by
   `store.ParseStatusExpr` into `(value, StatusOp)` for the filter protobuf.
+- **ASN expressions**: `f_asn` accepts the same expression syntax (`12345`, `!=65000`, `>=1000`,
+  `<64512`, etc.) — also parsed by `store.ParseStatusExpr`, stored as `(asn_number, AsnOp)` in the
+  filter protobuf.
 - **Regex filters**: `f_website_re` and `f_uri_re` hold RE2 patterns; compiled once per request
   into `store.CompiledFilter` before the query-loop iteration. Invalid regexes match nothing.
 - `TopN`, `Trend`, and `ListTargets` RPCs issued **concurrently** (all with a 5 s deadline); page
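The expression grammar the status and ASN filters share — an optional comparison operator followed by an integer — can be sketched as follows. This approximates `store.ParseStatusExpr`; the real function returns the proto's `StatusOp` enum, so the string operator here is a stand-in:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseExprSketch parses "429", "!=200", ">=400", "<64512", etc.
// Two-char operators are tried before their one-char prefixes so that
// ">=400" is not misread as ">" followed by "=400".
func parseExprSketch(s string) (n int32, op string, ok bool) {
	op = "==" // bare number means equality
	for _, p := range []string{"!=", ">=", "<=", ">", "<"} {
		if strings.HasPrefix(s, p) {
			op, s = p, s[len(p):]
			break
		}
	}
	v, err := strconv.ParseInt(s, 10, 32)
	if err != nil {
		return 0, "", false
	}
	return int32(v), op, true
}

func main() {
	for _, e := range []string{"429", ">=400", "!=65000", "<64512"} {
		n, op, ok := parseExprSketch(e)
		fmt.Println(e, "→", op, n, ok)
	}
}
```

Parsing once at the query boundary like this is what lets the same six-operator grammar serve both `f_status` and `f_asn` without a second parser.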
@@ -325,7 +334,7 @@ service LogtailService {
   default aggregator. Picker is hidden when `ListTargets` returns ≤0 collectors (direct collector
   mode).
 - **Drilldown**: clicking a table row adds the current dimension's filter and advances `by` through
-  `website → prefix → uri → status → website` (cycles).
+  `website → prefix → uri → status → asn → website` (cycles).
 - **`raw=1`**: returns the TopN result as JSON — same URL, no CLI needed for scripting.
 - **`target=` override**: per-request gRPC endpoint override for comparing sources.
 - Error pages render at HTTP 502 with the window/group-by tabs still functional.

@@ -367,6 +376,7 @@ logtail-cli targets [flags]     list targets known to the queried endpoint
 | `--website-re`| —   | Filter: RE2 regex against website |
 | `--uri-re`    | —   | Filter: RE2 regex against request URI |
 | `--is-tor`    | —   | Filter: TOR traffic (`1` or `!=0` = TOR only; `0` or `!=1` = non-TOR only) |
+| `--asn`       | —   | Filter: ASN expression (`12345`, `!=65000`, `>=1000`, `<64512`, …) |

 **`topn` only**: `--n 10`, `--window 5m`, `--group-by website`

@@ -398,7 +408,7 @@ with a non-zero code on gRPC error.
 | Tick-based cache rotation in aggregator | Ring stays on the same 1-min cadence regardless of collector count |
 | Degraded collector zeroing | Stale counts from failed collectors don't accumulate in the merged view |
 | Same `LogtailService` for collector and aggregator | CLI and frontend work with either; no special-casing |
-| `internal/store` shared package | ring-buffer, `Tuple5` encoding, and filter logic shared between collector and aggregator |
+| `internal/store` shared package | ring-buffer, `Tuple6` encoding, and filter logic shared between collector and aggregator |
 | Filter state in URL, not session cookie | Multiple concurrent operators; shareable/bookmarkable URLs |
 | Query strings stripped at ingest | Major cardinality reduction; prevents URI explosion under attack |
 | No persistent storage | Simplicity; acceptable for ops dashboards (restart = lose history) |

@@ -408,6 +418,7 @@ with a non-zero code on gRPC error.
 | CLI multi-target fan-out | Compare a collector vs. aggregator, or two collectors, in one command |
 | CLI uses stdlib `flag`, no framework | Four subcommands don't justify a dependency |
 | Status filter as expression string (`!=200`, `>=400`) | Operator-friendly; parsed once at query boundary, encoded as `(int32, StatusOp)` in proto |
+| ASN filter reuses `StatusOp` and `ParseStatusExpr` | Same 6-operator grammar as status; no duplicate enum or parser needed |
 | Regex filters compiled once per query (`CompiledFilter`) | Up to 288 × 5 000 per-entry calls — compiling per-entry would dominate query latency |
 | Filter expression box (`q=`) redirects to canonical URL | Filter state stays in individual `f_*` params; URLs remain shareable and bookmarkable |
 | `ListTargets` + frontend source picker | "Which nginx is busiest?" answered by switching `target=` to a collector; no data model changes, no extra memory |
@@ -163,8 +163,8 @@ func TestCacheCoarseRing(t *testing.T) {
 func TestCacheQueryTopN(t *testing.T) {
 	m := NewMerger()
 	m.Apply(makeSnap("c1", map[string]int64{
-		st.EncodeTuple(st.Tuple5{Website: "busy.com", Prefix: "1.0.0.0/24", URI: "/", Status: "200"}):  300,
-		st.EncodeTuple(st.Tuple5{Website: "quiet.com", Prefix: "2.0.0.0/24", URI: "/", Status: "200"}): 50,
+		st.EncodeTuple(st.Tuple6{Website: "busy.com", Prefix: "1.0.0.0/24", URI: "/", Status: "200"}):  300,
+		st.EncodeTuple(st.Tuple6{Website: "quiet.com", Prefix: "2.0.0.0/24", URI: "/", Status: "200"}): 50,
 	}))

 	cache := NewCache(m, "test")

@@ -181,8 +181,8 @@ func TestCacheQueryTopN(t *testing.T) {

 func TestCacheQueryTopNWithFilter(t *testing.T) {
 	m := NewMerger()
-	status429 := st.EncodeTuple(st.Tuple5{Website: "example.com", Prefix: "1.0.0.0/24", URI: "/api", Status: "429"})
-	status200 := st.EncodeTuple(st.Tuple5{Website: "example.com", Prefix: "2.0.0.0/24", URI: "/api", Status: "200"})
+	status429 := st.EncodeTuple(st.Tuple6{Website: "example.com", Prefix: "1.0.0.0/24", URI: "/api", Status: "429"})
+	status200 := st.EncodeTuple(st.Tuple6{Website: "example.com", Prefix: "2.0.0.0/24", URI: "/api", Status: "200"})
 	m.Apply(makeSnap("c1", map[string]int64{status429: 200, status200: 500}))

 	cache := NewCache(m, "test")

@@ -202,7 +202,7 @@ func TestCacheQueryTrend(t *testing.T) {

 	for i, count := range []int64{10, 20, 30} {
 		m.Apply(makeSnap("c1", map[string]int64{
-			st.EncodeTuple(st.Tuple5{Website: "x.com", Prefix: "1.0.0.0/24", URI: "/", Status: "200"}): count,
+			st.EncodeTuple(st.Tuple6{Website: "x.com", Prefix: "1.0.0.0/24", URI: "/", Status: "200"}): count,
 		}))
 		cache.rotate(now.Add(time.Duration(i) * time.Minute))
 	}

@@ -270,12 +270,12 @@ func startFakeCollector(t *testing.T, snaps []*pb.Snapshot) string {
 func TestGRPCEndToEnd(t *testing.T) {
 	// Two fake collectors with overlapping labels.
 	snap1 := makeSnap("col1", map[string]int64{
-		st.EncodeTuple(st.Tuple5{Website: "busy.com", Prefix: "1.0.0.0/24", URI: "/", Status: "200"}):  500,
-		st.EncodeTuple(st.Tuple5{Website: "quiet.com", Prefix: "2.0.0.0/24", URI: "/", Status: "429"}): 100,
+		st.EncodeTuple(st.Tuple6{Website: "busy.com", Prefix: "1.0.0.0/24", URI: "/", Status: "200"}):  500,
+		st.EncodeTuple(st.Tuple6{Website: "quiet.com", Prefix: "2.0.0.0/24", URI: "/", Status: "429"}): 100,
 	})
 	snap2 := makeSnap("col2", map[string]int64{
-		st.EncodeTuple(st.Tuple5{Website: "busy.com", Prefix: "3.0.0.0/24", URI: "/", Status: "200"}):  300,
-		st.EncodeTuple(st.Tuple5{Website: "other.com", Prefix: "4.0.0.0/24", URI: "/", Status: "200"}): 50,
+		st.EncodeTuple(st.Tuple6{Website: "busy.com", Prefix: "3.0.0.0/24", URI: "/", Status: "200"}):  300,
+		st.EncodeTuple(st.Tuple6{Website: "other.com", Prefix: "4.0.0.0/24", URI: "/", Status: "200"}): 50,
 	})
 	addr1 := startFakeCollector(t, []*pb.Snapshot{snap1})
 	addr2 := startFakeCollector(t, []*pb.Snapshot{snap2})

@@ -388,7 +388,7 @@ func TestGRPCEndToEnd(t *testing.T) {
 func TestDegradedCollector(t *testing.T) {
 	// Start one real and one immediately-gone collector.
 	snap1 := makeSnap("col1", map[string]int64{
-		st.EncodeTuple(st.Tuple5{Website: "good.com", Prefix: "1.0.0.0/24", URI: "/", Status: "200"}): 100,
+		st.EncodeTuple(st.Tuple6{Website: "good.com", Prefix: "1.0.0.0/24", URI: "/", Status: "200"}): 100,
 	})
 	addr1 := startFakeCollector(t, []*pb.Snapshot{snap1})
 	// addr2 points at nothing — connections will fail immediately.
@@ -15,9 +15,9 @@ import (
 )

 func main() {
-	listen := flag.String("listen", ":9091", "gRPC listen address")
+	listen := flag.String("listen", ":9091", "gRPC listen address")
 	collectors := flag.String("collectors", "", "comma-separated collector host:port addresses")
-	source := flag.String("source", hostname(), "name for this aggregator in responses")
+	source := flag.String("source", hostname(), "name for this aggregator in responses")
 	flag.Parse()

 	if *collectors == "" {
@@ -89,7 +89,10 @@ func TestBuildFilterNil(t *testing.T) {
 }

 func TestFmtCount(t *testing.T) {
-	cases := []struct{ n int64; want string }{
+	cases := []struct {
+		n    int64
+		want string
+	}{
 		{0, "0"},
 		{999, "999"},
 		{1000, "1 000"},
@@ -21,6 +21,7 @@ type sharedFlags struct {
 	websiteRe string // RE2 regex against website
 	uriRe     string // RE2 regex against request URI
 	isTor     string // "", "1" / "!=0" (TOR only), "0" / "!=1" (non-TOR only)
+	asn       string // expression: "12345", "!=65000", ">=1000", etc.
 }

 // bindShared registers the shared flags on fs and returns a pointer to the

@@ -36,6 +37,7 @@ func bindShared(fs *flag.FlagSet) (*sharedFlags, *string) {
 	fs.StringVar(&sf.websiteRe, "website-re", "", "filter: RE2 regex against website")
 	fs.StringVar(&sf.uriRe, "uri-re", "", "filter: RE2 regex against request URI")
 	fs.StringVar(&sf.isTor, "is-tor", "", "filter: TOR traffic (1 or !=0 = TOR only; 0 or !=1 = non-TOR only)")
+	fs.StringVar(&sf.asn, "asn", "", "filter: ASN expression (12345, !=65000, >=1000, <64512, …)")
 	return sf, target
 }

@@ -58,7 +60,7 @@ func parseTargets(s string) []string {
 }

 func buildFilter(sf *sharedFlags) *pb.Filter {
-	if sf.website == "" && sf.prefix == "" && sf.uri == "" && sf.status == "" && sf.websiteRe == "" && sf.uriRe == "" && sf.isTor == "" {
+	if sf.website == "" && sf.prefix == "" && sf.uri == "" && sf.status == "" && sf.websiteRe == "" && sf.uriRe == "" && sf.isTor == "" && sf.asn == "" {
 		return nil
 	}
 	f := &pb.Filter{}

@@ -97,6 +99,15 @@ func buildFilter(sf *sharedFlags) *pb.Filter {
 		fmt.Fprintf(os.Stderr, "--is-tor: invalid value %q; use 1, 0, !=0, or !=1\n", sf.isTor)
 		os.Exit(1)
 	}
+	if sf.asn != "" {
+		n, op, ok := st.ParseStatusExpr(sf.asn)
+		if !ok {
+			fmt.Fprintf(os.Stderr, "--asn: invalid expression %q; use e.g. 12345, !=65000, >=1000, <64512\n", sf.asn)
+			os.Exit(1)
+		}
+		f.AsnNumber = &n
+		f.AsnOp = op
+	}
 	return f
 }
@@ -18,12 +18,12 @@ import (
 )

 func main() {
-	listen := flag.String("listen", ":9090", "gRPC listen address")
-	logPaths := flag.String("logs", "", "comma-separated log file paths/globs to tail")
-	logsFile := flag.String("logs-file", "", "file containing one log path/glob per line")
-	source := flag.String("source", hostname(), "name for this collector (default: hostname)")
-	v4prefix := flag.Int("v4prefix", 24, "IPv4 prefix length for client bucketing")
-	v6prefix := flag.Int("v6prefix", 48, "IPv6 prefix length for client bucketing")
+	listen := flag.String("listen", ":9090", "gRPC listen address")
+	logPaths := flag.String("logs", "", "comma-separated log file paths/globs to tail")
+	logsFile := flag.String("logs-file", "", "file containing one log path/glob per line")
+	source := flag.String("source", hostname(), "name for this collector (default: hostname)")
+	v4prefix := flag.Int("v4prefix", 24, "IPv4 prefix length for client bucketing")
+	v6prefix := flag.Int("v6prefix", 48, "IPv6 prefix length for client bucketing")
 	scanInterval := flag.Duration("scan-interval", 10*time.Second, "how often to rescan glob patterns for new/removed files")
 	flag.Parse()

@@ -3,6 +3,7 @@ package main
 import (
 	"fmt"
 	"net"
+	"strconv"
 	"strings"
 )

@@ -13,18 +14,20 @@ type LogRecord struct {
 	URI    string
 	Status string
 	IsTor  bool
+	ASN    int32
 }

 // ParseLine parses a tab-separated logtail log line:
 //
-//	$host \t $remote_addr \t $msec \t $request_method \t $request_uri \t $status \t $body_bytes_sent \t $request_time \t $is_tor
+//	$host \t $remote_addr \t $msec \t $request_method \t $request_uri \t $status \t $body_bytes_sent \t $request_time \t $is_tor \t $asn
 //
-// The is_tor field (0 or 1) is optional for backward compatibility with
-// older log files that omit it; it defaults to false when absent.
+// The is_tor (field 9) and asn (field 10) fields are optional for backward
+// compatibility with older log files that omit them; they default to false/0
+// when absent.
 // Returns false for lines with fewer than 8 fields.
 func ParseLine(line string, v4bits, v6bits int) (LogRecord, bool) {
-	// SplitN caps allocations; we need up to 9 fields.
-	fields := strings.SplitN(line, "\t", 9)
+	// SplitN caps allocations; we need up to 10 fields.
+	fields := strings.SplitN(line, "\t", 10)
 	if len(fields) < 8 {
 		return LogRecord{}, false
 	}

@@ -39,7 +42,14 @@ func ParseLine(line string, v4bits, v6bits int) (LogRecord, bool) {
 		return LogRecord{}, false
 	}

-	isTor := len(fields) == 9 && fields[8] == "1"
+	isTor := len(fields) >= 9 && fields[8] == "1"
+
+	var asn int32
+	if len(fields) == 10 {
+		if n, err := strconv.ParseInt(fields[9], 10, 32); err == nil {
+			asn = int32(n)
+		}
+	}

 	return LogRecord{
 		Website: fields[0],

@@ -47,6 +57,7 @@ func ParseLine(line string, v4bits, v6bits int) (LogRecord, bool) {
 		URI:     uri,
 		Status:  fields[5],
 		IsTor:   isTor,
+		ASN:     asn,
 	}, true
 }
@@ -8,10 +8,10 @@ func TestParseLine(t *testing.T) {
 	good := "www.example.com\t1.2.3.4\t1741954800.123\tGET\t/api/v1/search?q=foo&x=1\t200\t1452\t0.043"

 	tests := []struct {
-		name   string
-		line   string
-		wantOK bool
-		want   LogRecord
+		name   string
+		line   string
+		wantOK bool
+		want   LogRecord
 	}{
 		{
 			name: "normal IPv4 line strips query string",

@@ -108,6 +108,58 @@ func TestParseLine(t *testing.T) {
 				IsTor: false,
 			},
 		},
+		{
+			name:   "asn field parsed",
+			line:   "asn.example.com\t1.2.3.4\t0\tGET\t/\t200\t0\t0.001\t0\t12345",
+			wantOK: true,
+			want: LogRecord{
+				Website:      "asn.example.com",
+				ClientPrefix: "1.2.3.0/24",
+				URI:          "/",
+				Status:       "200",
+				IsTor:        false,
+				ASN:          12345,
+			},
+		},
+		{
+			name:   "asn field with is_tor=1",
+			line:   "both.example.com\t1.2.3.4\t0\tGET\t/\t200\t0\t0.001\t1\t65535",
+			wantOK: true,
+			want: LogRecord{
+				Website:      "both.example.com",
+				ClientPrefix: "1.2.3.0/24",
+				URI:          "/",
+				Status:       "200",
+				IsTor:        true,
+				ASN:          65535,
+			},
+		},
+		{
+			name:   "missing asn field defaults to 0 (backward compat)",
+			line:   "noasn.example.com\t1.2.3.4\t0\tGET\t/\t200\t0\t0.001\t1",
+			wantOK: true,
+			want: LogRecord{
+				Website:      "noasn.example.com",
+				ClientPrefix: "1.2.3.0/24",
+				URI:          "/",
+				Status:       "200",
+				IsTor:        true,
+				ASN:          0,
+			},
+		},
+		{
+			name:   "invalid asn field defaults to 0",
+			line:   "badann.example.com\t1.2.3.4\t0\tGET\t/\t200\t0\t0.001\t0\tnot-a-number",
+			wantOK: true,
+			want: LogRecord{
+				Website:      "badann.example.com",
+				ClientPrefix: "1.2.3.0/24",
+				URI:          "/",
+				Status:       "200",
+				IsTor:        false,
+				ASN:          0,
+			},
+		},
 	}

 	for _, tc := range tests {

@@ -134,7 +186,7 @@ func TestTruncateIP(t *testing.T) {
 		{"1.2.3.4", "1.2.3.0/24"},
 		{"192.168.100.200", "192.168.100.0/24"},
 		{"2001:db8:cafe:babe::1", "2001:db8:cafe::/48"}, // /48 = 3 full groups intact
-		{"::1", "::/48"}, // loopback — first 48 bits are all zero
+		{"::1", "::/48"},                                // loopback — first 48 bits are all zero
 	}

 	for _, tc := range tests {
@@ -4,8 +4,8 @@ import (
 	"sync"
 	"time"

-	pb "git.ipng.ch/ipng/nginx-logtail/proto/logtailpb"
 	st "git.ipng.ch/ipng/nginx-logtail/internal/store"
+	pb "git.ipng.ch/ipng/nginx-logtail/proto/logtailpb"
 )

 const liveMapCap = 100_000 // hard cap on live map entries

@@ -15,7 +15,7 @@ type Store struct {
 	source string

 	// live map — written only by the Run goroutine; no locking needed on writes
-	live    map[st.Tuple5]int64
+	live    map[st.Tuple6]int64
 	liveLen int

 	// ring buffers — protected by mu for reads

@@ -36,7 +36,7 @@ type Store struct {
 func NewStore(source string) *Store {
 	return &Store{
 		source: source,
-		live:   make(map[st.Tuple5]int64, liveMapCap),
+		live:   make(map[st.Tuple6]int64, liveMapCap),
 		subs:   make(map[chan st.Snapshot]struct{}),
 	}
 }

@@ -44,7 +44,7 @@ func NewStore(source string) *Store {
 // ingest records one log record into the live map.
 // Must only be called from the Run goroutine.
 func (s *Store) ingest(r LogRecord) {
-	key := st.Tuple5{Website: r.Website, Prefix: r.ClientPrefix, URI: r.URI, Status: r.Status, IsTor: r.IsTor}
+	key := st.Tuple6{Website: r.Website, Prefix: r.ClientPrefix, URI: r.URI, Status: r.Status, IsTor: r.IsTor, ASN: r.ASN}
 	if _, exists := s.live[key]; !exists {
 		if s.liveLen >= liveMapCap {
 			return

@@ -77,7 +77,7 @@ func (s *Store) rotate(now time.Time) {
 	}
 	s.mu.Unlock()

-	s.live = make(map[st.Tuple5]int64, liveMapCap)
+	s.live = make(map[st.Tuple6]int64, liveMapCap)
 	s.liveLen = 0

 	s.broadcast(fine)
@@ -5,8 +5,8 @@ import (
 	"testing"
 	"time"

-	pb "git.ipng.ch/ipng/nginx-logtail/proto/logtailpb"
 	st "git.ipng.ch/ipng/nginx-logtail/internal/store"
+	pb "git.ipng.ch/ipng/nginx-logtail/proto/logtailpb"
 )

 func makeStore() *Store {
@@ -54,7 +54,7 @@ func (mt *MultiTailer) Run(ctx context.Context) {
 	}
 	defer watcher.Close()

-	files := make(map[string]*fileState)
+	files := make(map[string]*fileState)
 	retrying := make(map[string]struct{}) // paths currently in a retryOpen goroutine
 	reopenCh := make(chan reopenMsg, 32)

@@ -126,8 +126,20 @@ func applyTerm(term string, fs *filterState) error {
 		} else {
 			fs.IsTor = "0"
 		}
+	case "asn":
+		if op == "~=" {
+			return fmt.Errorf("asn does not support ~=; use =, !=, >=, >, <=, <")
+		}
+		expr := op + value
+		if op == "=" {
+			expr = value
+		}
+		if _, _, ok := st.ParseStatusExpr(expr); !ok {
+			return fmt.Errorf("invalid asn expression %q", expr)
+		}
+		fs.ASN = expr
 	default:
-		return fmt.Errorf("unknown field %q; valid: status, website, uri, prefix, is_tor", field)
+		return fmt.Errorf("unknown field %q; valid: status, website, uri, prefix, is_tor, asn", field)
 	}
 	return nil
 }

@@ -167,9 +179,24 @@ func FilterExprString(f filterState) string {
 	if f.IsTor != "" {
 		parts = append(parts, "is_tor="+f.IsTor)
 	}
+	if f.ASN != "" {
+		parts = append(parts, asnTermStr(f.ASN))
+	}
 	return strings.Join(parts, " AND ")
 }

+// asnTermStr converts a stored ASN expression (">=1000", "12345") to a
+// full filter term ("asn>=1000", "asn=12345").
+func asnTermStr(expr string) string {
+	if expr == "" {
+		return ""
+	}
+	if expr[0] == '!' || expr[0] == '>' || expr[0] == '<' {
+		return "asn" + expr
+	}
+	return "asn=" + expr
+}
+
 // statusTermStr converts a stored status expression (">=400", "200") to a
 // full filter term ("status>=400", "status=200").
 func statusTermStr(expr string) string {
@@ -258,3 +258,77 @@ func TestFilterExprRoundTrip(t *testing.T) {
 		}
 	}
 }
+
+func TestParseAsnEQ(t *testing.T) {
+	fs, err := ParseFilterExpr("asn=12345")
+	if err != nil || fs.ASN != "12345" {
+		t.Fatalf("got err=%v fs=%+v", err, fs)
+	}
+}
+
+func TestParseAsnNE(t *testing.T) {
+	fs, err := ParseFilterExpr("asn!=65000")
+	if err != nil || fs.ASN != "!=65000" {
+		t.Fatalf("got err=%v fs=%+v", err, fs)
+	}
+}
+
+func TestParseAsnGE(t *testing.T) {
+	fs, err := ParseFilterExpr("asn>=1000")
+	if err != nil || fs.ASN != ">=1000" {
+		t.Fatalf("got err=%v fs=%+v", err, fs)
+	}
+}
+
+func TestParseAsnLT(t *testing.T) {
+	fs, err := ParseFilterExpr("asn<64512")
+	if err != nil || fs.ASN != "<64512" {
+		t.Fatalf("got err=%v fs=%+v", err, fs)
+	}
+}
+
+func TestParseAsnRegexRejected(t *testing.T) {
+	_, err := ParseFilterExpr("asn~=123")
+	if err == nil {
+		t.Fatal("expected error for asn~=")
+	}
+}
+
+func TestParseAsnInvalidExpr(t *testing.T) {
+	_, err := ParseFilterExpr("asn=notanumber")
+	if err == nil {
+		t.Fatal("expected error for non-numeric ASN")
+	}
+}
+
+func TestFilterExprStringASN(t *testing.T) {
+	s := FilterExprString(filterState{ASN: "12345"})
+	if s != "asn=12345" {
+		t.Fatalf("got %q", s)
+	}
+	s = FilterExprString(filterState{ASN: ">=1000"})
+	if s != "asn>=1000" {
+		t.Fatalf("got %q", s)
+	}
+}
+
+func TestFilterExprRoundTripASN(t *testing.T) {
+	cases := []filterState{
+		{ASN: "12345"},
+		{ASN: "!=65000"},
+		{ASN: ">=1000"},
+		{ASN: "<64512"},
+		{Status: ">=400", ASN: "12345"},
+	}
+	for _, fs := range cases {
+		expr := FilterExprString(fs)
+		fs2, err := ParseFilterExpr(expr)
+		if err != nil {
+			t.Errorf("round-trip parse error for %+v → %q: %v", fs, expr, err)
+			continue
+		}
+		if fs2 != fs {
+			t.Errorf("round-trip mismatch: %+v → %q → %+v", fs, expr, fs2)
+		}
+	}
+}
@@ -220,8 +220,17 @@ func TestDrillURL(t *testing.T) {
 	if !strings.Contains(u, "f_status=429") {
 		t.Errorf("drill from status: missing f_status in %q", u)
 	}
+	if !strings.Contains(u, "by=asn") {
+		t.Errorf("drill from status: expected next by=asn in %q", u)
+	}
+
+	p.GroupByS = "asn"
+	u = p.drillURL("12345")
+	if !strings.Contains(u, "f_asn=12345") {
+		t.Errorf("drill from asn: missing f_asn in %q", u)
+	}
 	if !strings.Contains(u, "by=website") {
-		t.Errorf("drill from status: expected cycle back to by=website in %q", u)
+		t.Errorf("drill from asn: expected cycle back to by=website in %q", u)
 	}
 }
@@ -54,6 +54,7 @@ type filterState struct {
 	WebsiteRe string // RE2 regex against website
 	URIRe     string // RE2 regex against request URI
 	IsTor     string // "", "1" (TOR only), "0" (non-TOR only)
+	ASN       string // expression: "12345", "!=65000", ">=1000", etc.
 }

 // QueryParams holds all parsed URL parameters for one page request.

@@ -91,7 +92,7 @@ var windowSpecs = []struct{ s, label string }{
 }

 var groupBySpecs = []struct{ s, label string }{
-	{"website", "website"}, {"prefix", "prefix"}, {"uri", "uri"}, {"status", "status"},
+	{"website", "website"}, {"asn", "asn"}, {"prefix", "prefix"}, {"status", "status"}, {"uri", "uri"},
 }

 func parseWindowString(s string) (pb.Window, string) {

@@ -121,6 +122,8 @@ func parseGroupByString(s string) (pb.GroupBy, string) {
 		return pb.GroupBy_REQUEST_URI, "uri"
 	case "status":
 		return pb.GroupBy_HTTP_RESPONSE, "status"
+	case "asn":
+		return pb.GroupBy_ASN_NUMBER, "asn"
 	default:
 		return pb.GroupBy_WEBSITE, "website"
 	}

@@ -159,12 +162,13 @@ func (h *Handler) parseParams(r *http.Request) QueryParams {
 			WebsiteRe: q.Get("f_website_re"),
 			URIRe:     q.Get("f_uri_re"),
 			IsTor:     q.Get("f_is_tor"),
+			ASN:       q.Get("f_asn"),
 		},
 	}
 }

 func buildFilter(f filterState) *pb.Filter {
-	if f.Website == "" && f.Prefix == "" && f.URI == "" && f.Status == "" && f.WebsiteRe == "" && f.URIRe == "" && f.IsTor == "" {
+	if f.Website == "" && f.Prefix == "" && f.URI == "" && f.Status == "" && f.WebsiteRe == "" && f.URIRe == "" && f.IsTor == "" && f.ASN == "" {
 		return nil
 	}
 	out := &pb.Filter{}

@@ -195,6 +199,12 @@ func buildFilter(f filterState) *pb.Filter {
 	case "0":
 		out.Tor = pb.TorFilter_TOR_NO
 	}
+	if f.ASN != "" {
+		if n, op, ok := st.ParseStatusExpr(f.ASN); ok {
+			out.AsnNumber = &n
+			out.AsnOp = op
+		}
+	}
 	return out
 }

@@ -226,6 +236,9 @@ func (p QueryParams) toValues() url.Values {
 	if p.Filter.IsTor != "" {
 		v.Set("f_is_tor", p.Filter.IsTor)
 	}
+	if p.Filter.ASN != "" {
+		v.Set("f_asn", p.Filter.ASN)
+	}
 	return v
 }

@@ -247,7 +260,7 @@ func (p QueryParams) buildURL(overrides map[string]string) string {
 func (p QueryParams) clearFilterURL() string {
 	return p.buildURL(map[string]string{
 		"f_website": "", "f_prefix": "", "f_uri": "", "f_status": "",
-		"f_website_re": "", "f_uri_re": "",
+		"f_website_re": "", "f_uri_re": "", "f_is_tor": "", "f_asn": "",
 	})
 }

@@ -260,7 +273,9 @@ func nextGroupBy(s string) string {
 		return "uri"
 	case "uri":
 		return "status"
-	default: // status → back to website
+	case "status":
+		return "asn"
+	default: // asn → back to website
 		return "website"
 	}
 }

@@ -276,6 +291,8 @@ func groupByFilterKey(s string) string {
 		return "f_uri"
 	case "status":
 		return "f_status"
+	case "asn":
+		return "f_asn"
 	default:
 		return "f_website"
 	}

@@ -338,6 +355,12 @@ func buildCrumbs(p QueryParams) []Crumb {
 			RemoveURL: p.buildURL(map[string]string{"f_is_tor": ""}),
 		})
 	}
+	if p.Filter.ASN != "" {
+		crumbs = append(crumbs, Crumb{
+			Text:      asnTermStr(p.Filter.ASN),
+			RemoveURL: p.buildURL(map[string]string{"f_asn": ""}),
+		})
+	}
 	return crumbs
 }
@@ -16,9 +16,9 @@ import (
|
||||
var templatesFS embed.FS
|
||||
|
||||
func main() {
|
||||
listen := flag.String("listen", ":8080", "HTTP listen address")
|
||||
target := flag.String("target", "localhost:9091", "default gRPC endpoint (aggregator or collector)")
|
||||
n := flag.Int("n", 25, "default number of table rows")
|
||||
listen := flag.String("listen", ":8080", "HTTP listen address")
|
||||
target := flag.String("target", "localhost:9091", "default gRPC endpoint (aggregator or collector)")
|
||||
n := flag.Int("n", 25, "default number of table rows")
|
||||
refresh := flag.Int("refresh", 30, "meta-refresh interval in seconds (0 = disabled)")
|
||||
flag.Parse()
|
||||
|
||||
|
||||
@@ -32,7 +32,7 @@
|
||||
<input type="hidden" name="w" value="{{.Params.WindowS}}">
|
||||
<input type="hidden" name="by" value="{{.Params.GroupByS}}">
|
||||
<input type="hidden" name="n" value="{{.Params.N}}">
|
||||
<input class="filter-input" type="text" name="q" value="{{.FilterExpr}}" placeholder="status>=400 AND website~=gouda.* AND uri~=^/api/ AND is_tor=1">
|
||||
<input class="filter-input" type="text" name="q" value="{{.FilterExpr}}" placeholder="status>=400 AND website~=gouda.* AND uri~=^/ct/v1/ AND is_tor=0 AND asn=8298">
|
||||
<button type="submit">filter</button>
|
||||
{{- if .FilterExpr}} <a class="clear" href="{{.ClearFilterURL}}">× clear</a>{{end}}
|
||||
</form>
|
||||
|
||||
@@ -27,7 +27,7 @@ Add the `logtail` log format to your `nginx.conf` and apply it to each `server`
|
||||
|
||||
```nginx
|
||||
http {
|
||||
log_format logtail '$host\t$remote_addr\t$msec\t$request_method\t$request_uri\t$status\t$body_bytes_sent\t$request_time\t$is_tor';
|
||||
log_format logtail '$host\t$remote_addr\t$msec\t$request_method\t$request_uri\t$status\t$body_bytes_sent\t$request_time\t$is_tor\t$asn';
|
||||
|
||||
server {
|
||||
access_log /var/log/nginx/access.log logtail;
|
||||
@@ -38,10 +38,16 @@ http {
|
||||
```
|
||||
|
||||
The format is tab-separated with fixed field positions. Query strings are stripped from the URI
|
||||
by the collector at ingest time — only the path is tracked. `$is_tor` must be set to `1` when
|
||||
the client IP is a TOR exit node and `0` otherwise (this is typically populated by a custom nginx
|
||||
variable or a Lua script that checks the IP against a TOR exit list). The field is optional for
|
||||
backward compatibility — log lines without it are accepted and treated as `is_tor=0`.
|
||||
by the collector at ingest time — only the path is tracked.
|
||||
|
||||
`$is_tor` must be set to `1` when the client IP is a TOR exit node and `0` otherwise (typically
|
||||
populated by a custom nginx variable or a Lua script that checks the IP against a TOR exit list).
|
||||
The field is optional for backward compatibility — log lines without it are accepted and treated
|
||||
as `is_tor=0`.
|
||||
|
||||
`$asn` must be set to the client's AS number as a decimal integer (e.g. from MaxMind GeoIP2's
|
||||
`$geoip2_data_autonomous_system_number`). The field is optional — log lines without it default
|
||||
to `asn=0`.
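
The diff leaves populating `$asn` to the operator. One way to wire it up is with the third-party `ngx_http_geoip2_module` — a minimal sketch; the database path and the `map` fallback are assumptions, only `$geoip2_data_autonomous_system_number` comes from the README:

```nginx
http {
    # Expose the AS number from the MaxMind ASN database
    # (requires ngx_http_geoip2_module to be loaded).
    geoip2 /var/lib/GeoIP/GeoLite2-ASN.mmdb {
        $geoip2_data_autonomous_system_number autonomous_system_number;
    }

    # Fall back to 0 when the lookup fails, matching the
    # collector's default of asn=0 for missing fields.
    map $geoip2_data_autonomous_system_number $asn {
        default $geoip2_data_autonomous_system_number;
        ''      0;
    }
}
```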

---

@@ -128,7 +134,7 @@ The collector is designed to stay well under 1 GB:
| Coarse ring (288 × 5-min) | 288 × 5 000 | ~268 MB |
| **Total** | | **~845 MB** |

When the live map reaches 100 000 distinct 5-tuples, new keys are dropped for the rest of that
When the live map reaches 100 000 distinct 6-tuples, new keys are dropped for the rest of that
minute. Existing keys continue to accumulate counts. The cap resets at each minute rotation.

### Time windows
@@ -252,13 +258,13 @@ the selected dimension and time window.
**Window tabs** — switch between `1m / 5m / 15m / 60m / 6h / 24h`. Only the window changes;
all active filters are preserved.

**Dimension tabs** — switch between grouping by `website / prefix / uri / status`.
**Dimension tabs** — switch between grouping by `website / asn / prefix / status / uri`.

**Drilldown** — click any table row to add that value as a filter and advance to the next
dimension in the hierarchy:

```
website → client prefix → request URI → HTTP status → website (cycles)
website → client prefix → request URI → HTTP status → ASN → website (cycles)
```

Example: click `example.com` in the website view to see which client prefixes are hitting it;
@@ -282,17 +288,21 @@ website=example.com AND prefix=1.2.3.0/24

Supported fields and operators:

| Field | Operators | Example |
|-----------|---------------------|----------------------------|
| `status` | `=` `!=` `>` `>=` `<` `<=` | `status>=400` |
| `website` | `=` `~=` | `website~=gouda.*` |
| `uri` | `=` `~=` | `uri~=^/api/` |
| `prefix` | `=` | `prefix=1.2.3.0/24` |
| `is_tor` | `=` `!=` | `is_tor=1`, `is_tor!=0` |
| Field | Operators | Example |
|-----------|---------------------|-----------------------------------|
| `status` | `=` `!=` `>` `>=` `<` `<=` | `status>=400` |
| `website` | `=` `~=` | `website~=gouda.*` |
| `uri` | `=` `~=` | `uri~=^/api/` |
| `prefix` | `=` | `prefix=1.2.3.0/24` |
| `is_tor` | `=` `!=` | `is_tor=1`, `is_tor!=0` |
| `asn` | `=` `!=` `>` `>=` `<` `<=` | `asn=8298`, `asn>=1000` |

`is_tor=1` and `is_tor!=0` are equivalent (TOR traffic only). `is_tor=0` and `is_tor!=1` are
equivalent (non-TOR traffic only).

`asn` accepts the same comparison expressions as `status`. Use `asn=8298` to match a single AS,
`asn>=64512` to match the private-use ASN range, or `asn!=0` to exclude unresolved entries.
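
The web handler hands such expressions to `st.ParseStatusExpr` (seen in `buildFilter` above). A self-contained sketch of how that grammar can be parsed — the name `parseNumExpr` and the `Op` type are illustrative, not the project's actual API:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// Op mirrors the comparison operators the filter grammar supports.
type Op int

const (
	EQ Op = iota
	NE
	GT
	GE
	LT
	LE
)

// parseNumExpr parses expressions like "8298", "!=0" or ">=64512"
// into an operator and an int32 value. A bare number means EQ.
func parseNumExpr(s string) (int32, Op, bool) {
	s = strings.TrimSpace(s)
	op := EQ
	switch {
	case strings.HasPrefix(s, "!="):
		op, s = NE, s[2:]
	case strings.HasPrefix(s, ">="):
		op, s = GE, s[2:]
	case strings.HasPrefix(s, "<="):
		op, s = LE, s[2:]
	case strings.HasPrefix(s, ">"):
		op, s = GT, s[1:]
	case strings.HasPrefix(s, "<"):
		op, s = LT, s[1:]
	case strings.HasPrefix(s, "="):
		op, s = EQ, s[1:]
	}
	n, err := strconv.Atoi(strings.TrimSpace(s))
	if err != nil {
		return 0, EQ, false
	}
	return int32(n), op, true
}

func main() {
	n, op, ok := parseNumExpr(">=64512")
	fmt.Println(n, op == GE, ok) // 64512 true true
}
```

Note that multi-character operators must be tried before their single-character prefixes, otherwise `>=64512` would parse as `GT` with a dangling `=`.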

`~=` means RE2 regex match. Values with spaces or quotes may be wrapped in double or single
quotes: `uri~="^/search\?q="`.

@@ -311,8 +321,8 @@ accept RE2 regular expressions. The breadcrumb strip shows them as `website~=gou
`uri~=^/api/` with the usual `×` remove link.

**URL sharing** — all filter state is in the URL query string (`w`, `by`, `f_website`,
`f_prefix`, `f_uri`, `f_status`, `f_website_re`, `f_uri_re`, `f_is_tor`, `n`). Copy the URL to
share an exact view with another operator, or bookmark a recurring query.
`f_prefix`, `f_uri`, `f_status`, `f_website_re`, `f_uri_re`, `f_is_tor`, `f_asn`, `n`). Copy
the URL to share an exact view with another operator, or bookmark a recurring query.

**JSON output** — append `&raw=1` to any URL to receive the TopN result as JSON instead of
HTML. Useful for scripting without the CLI binary:
@@ -368,6 +378,7 @@ logtail-cli targets [flags] list targets known to the queried endpoint
| `--website-re`| — | Filter: RE2 regex against website |
| `--uri-re` | — | Filter: RE2 regex against request URI |
| `--is-tor` | — | Filter: `1` or `!=0` = TOR only; `0` or `!=1` = non-TOR only |
| `--asn` | — | Filter: ASN expression (`12345`, `!=65000`, `>=1000`, `<64512`, …) |

### `topn` flags

@@ -375,7 +386,7 @@ logtail-cli targets [flags] list targets known to the queried endpoint
|---------------|------------|----------------------------------------------------------|
| `--n` | `10` | Number of entries |
| `--window` | `5m` | `1m` `5m` `15m` `60m` `6h` `24h` |
| `--group-by` | `website` | `website` `prefix` `uri` `status` |
| `--group-by` | `website` | `website` `prefix` `uri` `status` `asn` |

### `trend` flags

@@ -470,6 +481,21 @@ logtail-cli topn --target agg:9091 --window 5m --is-tor 1
# Show non-TOR traffic only — exclude exit nodes from the view
logtail-cli topn --target agg:9091 --window 5m --is-tor 0

# Top ASNs by request count over the last 5 minutes
logtail-cli topn --target agg:9091 --window 5m --group-by asn

# Which ASNs are generating the most 429s?
logtail-cli topn --target agg:9091 --window 5m --group-by asn --status 429

# Filter to traffic from a specific ASN
logtail-cli topn --target agg:9091 --window 5m --asn 8298

# Filter to traffic from private-use / unallocated ASNs
logtail-cli topn --target agg:9091 --window 5m --group-by prefix --asn '>=64512'

# Exclude unresolved entries (ASN 0) and show top source ASNs
logtail-cli topn --target agg:9091 --window 5m --group-by asn --asn '!=0'

# Compare two collectors side by side in one command
logtail-cli topn --target nginx1:9090,nginx2:9090 --window 5m


@@ -6,6 +6,7 @@ import (
"container/heap"
"log"
"regexp"
"strconv"
"time"

pb "git.ipng.ch/ipng/nginx-logtail/proto/logtailpb"
@@ -13,20 +14,21 @@ import (

// Ring-buffer dimensions — shared between collector and aggregator.
const (
FineRingSize = 60 // 60 × 1-min buckets → 1 hour
CoarseRingSize = 288 // 288 × 5-min buckets → 24 hours
FineTopK = 50_000 // entries kept per fine snapshot
CoarseTopK = 5_000 // entries kept per coarse snapshot
CoarseEvery = 5 // fine ticks between coarse writes
FineRingSize = 60 // 60 × 1-min buckets → 1 hour
CoarseRingSize = 288 // 288 × 5-min buckets → 24 hours
FineTopK = 50_000 // entries kept per fine snapshot
CoarseTopK = 5_000 // entries kept per coarse snapshot
CoarseEvery = 5 // fine ticks between coarse writes
)

// Tuple5 is the aggregation key (website, prefix, URI, status, is_tor).
type Tuple5 struct {
// Tuple6 is the aggregation key (website, prefix, URI, status, is_tor, asn).
type Tuple6 struct {
Website string
Prefix string
URI string
Status string
IsTor bool
ASN int32
}

// Entry is a labelled count used in snapshots and query results.
@@ -74,28 +76,33 @@ func BucketsForWindow(window pb.Window, fine, coarse RingView, fineFilled, coars
}
}

// --- label encoding: "website\x00prefix\x00uri\x00status\x00is_tor" ---
// --- label encoding: "website\x00prefix\x00uri\x00status\x00is_tor\x00asn" ---

// EncodeTuple encodes a Tuple5 as a NUL-separated string suitable for use
// EncodeTuple encodes a Tuple6 as a NUL-separated string suitable for use
// as a map key in snapshots.
func EncodeTuple(t Tuple5) string {
func EncodeTuple(t Tuple6) string {
tor := "0"
if t.IsTor {
tor = "1"
}
return t.Website + "\x00" + t.Prefix + "\x00" + t.URI + "\x00" + t.Status + "\x00" + tor
return t.Website + "\x00" + t.Prefix + "\x00" + t.URI + "\x00" + t.Status + "\x00" + tor + "\x00" + strconv.Itoa(int(t.ASN))
}

// LabelTuple decodes a NUL-separated snapshot label back into a Tuple5.
func LabelTuple(label string) Tuple5 {
parts := splitN(label, '\x00', 5)
// LabelTuple decodes a NUL-separated snapshot label back into a Tuple6.
func LabelTuple(label string) Tuple6 {
parts := splitN(label, '\x00', 6)
if len(parts) < 4 {
return Tuple5{}
return Tuple6{}
}
t := Tuple5{Website: parts[0], Prefix: parts[1], URI: parts[2], Status: parts[3]}
if len(parts) == 5 {
t := Tuple6{Website: parts[0], Prefix: parts[1], URI: parts[2], Status: parts[3]}
if len(parts) >= 5 {
t.IsTor = parts[4] == "1"
}
if len(parts) == 6 {
if n, err := strconv.Atoi(parts[5]); err == nil {
t.ASN = int32(n)
}
}
return t
}

@@ -159,7 +166,7 @@ func CompileFilter(f *pb.Filter) *CompiledFilter {

// MatchesFilter returns true if t satisfies all constraints in f.
// A nil filter matches everything.
func MatchesFilter(t Tuple5, f *CompiledFilter) bool {
func MatchesFilter(t Tuple6, f *CompiledFilter) bool {
if f == nil || f.Proto == nil {
return true
}
@@ -199,9 +206,30 @@ func MatchesFilter(t Tuple5, f *CompiledFilter) bool {
return false
}
}
if p.AsnNumber != nil && !matchesAsnOp(t.ASN, p.GetAsnNumber(), p.AsnOp) {
return false
}
return true
}

// matchesAsnOp applies op(asn, want) directly on int32 values.
func matchesAsnOp(asn, want int32, op pb.StatusOp) bool {
switch op {
case pb.StatusOp_NE:
return asn != want
case pb.StatusOp_GT:
return asn > want
case pb.StatusOp_GE:
return asn >= want
case pb.StatusOp_LT:
return asn < want
case pb.StatusOp_LE:
return asn <= want
default: // EQ
return asn == want
}
}

// matchesStatusOp applies op(statusStr, want), parsing statusStr as an integer.
// Returns false if statusStr is not a valid integer.
func matchesStatusOp(statusStr string, want int32, op pb.StatusOp) bool {
@@ -229,7 +257,7 @@ func matchesStatusOp(statusStr string, want int32, op pb.StatusOp) bool {
}

// DimensionLabel returns the string value of t for the given group-by dimension.
func DimensionLabel(t Tuple5, g pb.GroupBy) string {
func DimensionLabel(t Tuple6, g pb.GroupBy) string {
switch g {
case pb.GroupBy_WEBSITE:
return t.Website
@@ -239,6 +267,8 @@ func DimensionLabel(t Tuple5, g pb.GroupBy) string {
return t.URI
case pb.GroupBy_HTTP_RESPONSE:
return t.Status
case pb.GroupBy_ASN_NUMBER:
return strconv.Itoa(int(t.ASN))
default:
return t.Website
}
@@ -318,9 +348,9 @@ func TopKFromMap(m map[string]int64, k int) []Entry {
return result
}

// TopKFromTupleMap encodes a Tuple5 map and returns the top-k as a Snapshot.
// TopKFromTupleMap encodes a Tuple6 map and returns the top-k as a Snapshot.
// Used by the collector to snapshot its live map.
func TopKFromTupleMap(m map[Tuple5]int64, k int, ts time.Time) Snapshot {
func TopKFromTupleMap(m map[Tuple6]int64, k int, ts time.Time) Snapshot {
flat := make(map[string]int64, len(m))
for t, c := range m {
flat[EncodeTuple(t)] = c

@@ -83,10 +83,10 @@ func compiledEQ(status int32) *CompiledFilter {
}

func TestMatchesFilterNil(t *testing.T) {
if !MatchesFilter(Tuple5{Website: "x"}, nil) {
if !MatchesFilter(Tuple6{Website: "x"}, nil) {
t.Fatal("nil filter should match everything")
}
if !MatchesFilter(Tuple5{Website: "x"}, &CompiledFilter{}) {
if !MatchesFilter(Tuple6{Website: "x"}, &CompiledFilter{}) {
t.Fatal("empty compiled filter should match everything")
}
}
@@ -94,10 +94,10 @@ func TestMatchesFilterNil(t *testing.T) {
func TestMatchesFilterExactWebsite(t *testing.T) {
w := "example.com"
cf := CompileFilter(&pb.Filter{Website: &w})
if !MatchesFilter(Tuple5{Website: "example.com"}, cf) {
if !MatchesFilter(Tuple6{Website: "example.com"}, cf) {
t.Fatal("expected match")
}
if MatchesFilter(Tuple5{Website: "other.com"}, cf) {
if MatchesFilter(Tuple6{Website: "other.com"}, cf) {
t.Fatal("expected no match")
}
}
@@ -105,10 +105,10 @@ func TestMatchesFilterExactWebsite(t *testing.T) {
func TestMatchesFilterWebsiteRegex(t *testing.T) {
re := "gouda.*"
cf := CompileFilter(&pb.Filter{WebsiteRegex: &re})
if !MatchesFilter(Tuple5{Website: "gouda.example.com"}, cf) {
if !MatchesFilter(Tuple6{Website: "gouda.example.com"}, cf) {
t.Fatal("expected match")
}
if MatchesFilter(Tuple5{Website: "edam.example.com"}, cf) {
if MatchesFilter(Tuple6{Website: "edam.example.com"}, cf) {
t.Fatal("expected no match")
}
}
@@ -116,10 +116,10 @@ func TestMatchesFilterWebsiteRegex(t *testing.T) {
func TestMatchesFilterURIRegex(t *testing.T) {
re := "^/api/.*"
cf := CompileFilter(&pb.Filter{UriRegex: &re})
if !MatchesFilter(Tuple5{URI: "/api/users"}, cf) {
if !MatchesFilter(Tuple6{URI: "/api/users"}, cf) {
t.Fatal("expected match")
}
if MatchesFilter(Tuple5{URI: "/health"}, cf) {
if MatchesFilter(Tuple6{URI: "/health"}, cf) {
t.Fatal("expected no match")
}
}
@@ -127,17 +127,17 @@ func TestMatchesFilterURIRegex(t *testing.T) {
func TestMatchesFilterInvalidRegexMatchesNothing(t *testing.T) {
re := "[invalid"
cf := CompileFilter(&pb.Filter{WebsiteRegex: &re})
if MatchesFilter(Tuple5{Website: "anything"}, cf) {
if MatchesFilter(Tuple6{Website: "anything"}, cf) {
t.Fatal("invalid regex should match nothing")
}
}

func TestMatchesFilterStatusEQ(t *testing.T) {
cf := compiledEQ(200)
if !MatchesFilter(Tuple5{Status: "200"}, cf) {
if !MatchesFilter(Tuple6{Status: "200"}, cf) {
t.Fatal("expected match")
}
if MatchesFilter(Tuple5{Status: "404"}, cf) {
if MatchesFilter(Tuple6{Status: "404"}, cf) {
t.Fatal("expected no match")
}
}
@@ -145,10 +145,10 @@ func TestMatchesFilterStatusEQ(t *testing.T) {
func TestMatchesFilterStatusNE(t *testing.T) {
v := int32(200)
cf := CompileFilter(&pb.Filter{HttpResponse: &v, StatusOp: pb.StatusOp_NE})
if MatchesFilter(Tuple5{Status: "200"}, cf) {
if MatchesFilter(Tuple6{Status: "200"}, cf) {
t.Fatal("expected no match for 200 != 200")
}
if !MatchesFilter(Tuple5{Status: "404"}, cf) {
if !MatchesFilter(Tuple6{Status: "404"}, cf) {
t.Fatal("expected match for 404 != 200")
}
}
@@ -156,13 +156,13 @@ func TestMatchesFilterStatusNE(t *testing.T) {
func TestMatchesFilterStatusGE(t *testing.T) {
v := int32(400)
cf := CompileFilter(&pb.Filter{HttpResponse: &v, StatusOp: pb.StatusOp_GE})
if !MatchesFilter(Tuple5{Status: "400"}, cf) {
if !MatchesFilter(Tuple6{Status: "400"}, cf) {
t.Fatal("expected match: 400 >= 400")
}
if !MatchesFilter(Tuple5{Status: "500"}, cf) {
if !MatchesFilter(Tuple6{Status: "500"}, cf) {
t.Fatal("expected match: 500 >= 400")
}
if MatchesFilter(Tuple5{Status: "200"}, cf) {
if MatchesFilter(Tuple6{Status: "200"}, cf) {
t.Fatal("expected no match: 200 >= 400")
}
}
@@ -170,17 +170,17 @@ func TestMatchesFilterStatusGE(t *testing.T) {
func TestMatchesFilterStatusLT(t *testing.T) {
v := int32(400)
cf := CompileFilter(&pb.Filter{HttpResponse: &v, StatusOp: pb.StatusOp_LT})
if !MatchesFilter(Tuple5{Status: "200"}, cf) {
if !MatchesFilter(Tuple6{Status: "200"}, cf) {
t.Fatal("expected match: 200 < 400")
}
if MatchesFilter(Tuple5{Status: "400"}, cf) {
if MatchesFilter(Tuple6{Status: "400"}, cf) {
t.Fatal("expected no match: 400 < 400")
}
}

func TestMatchesFilterStatusNonNumeric(t *testing.T) {
cf := compiledEQ(200)
if MatchesFilter(Tuple5{Status: "ok"}, cf) {
if MatchesFilter(Tuple6{Status: "ok"}, cf) {
t.Fatal("non-numeric status should not match")
}
}
@@ -193,13 +193,13 @@ func TestMatchesFilterCombined(t *testing.T) {
HttpResponse: &v,
StatusOp: pb.StatusOp_EQ,
})
if !MatchesFilter(Tuple5{Website: "example.com", Status: "200"}, cf) {
if !MatchesFilter(Tuple6{Website: "example.com", Status: "200"}, cf) {
t.Fatal("expected match")
}
if MatchesFilter(Tuple5{Website: "other.com", Status: "200"}, cf) {
if MatchesFilter(Tuple6{Website: "other.com", Status: "200"}, cf) {
t.Fatal("expected no match: wrong website")
}
if MatchesFilter(Tuple5{Website: "example.com", Status: "404"}, cf) {
if MatchesFilter(Tuple6{Website: "example.com", Status: "404"}, cf) {
t.Fatal("expected no match: wrong status")
}
}
@@ -208,7 +208,7 @@ func TestMatchesFilterCombined(t *testing.T) {

func TestEncodeLabelTupleRoundtripWithTor(t *testing.T) {
for _, isTor := range []bool{false, true} {
orig := Tuple5{Website: "a.com", Prefix: "1.2.3.0/24", URI: "/x", Status: "200", IsTor: isTor}
orig := Tuple6{Website: "a.com", Prefix: "1.2.3.0/24", URI: "/x", Status: "200", IsTor: isTor}
got := LabelTuple(EncodeTuple(orig))
if got != orig {
t.Errorf("roundtrip mismatch: got %+v, want %+v", got, orig)
@@ -230,30 +230,108 @@ func TestLabelTupleBackwardCompat(t *testing.T) {

func TestMatchesFilterTorYes(t *testing.T) {
cf := CompileFilter(&pb.Filter{Tor: pb.TorFilter_TOR_YES})
if !MatchesFilter(Tuple5{IsTor: true}, cf) {
if !MatchesFilter(Tuple6{IsTor: true}, cf) {
t.Fatal("TOR_YES should match TOR tuple")
}
if MatchesFilter(Tuple5{IsTor: false}, cf) {
if MatchesFilter(Tuple6{IsTor: false}, cf) {
t.Fatal("TOR_YES should not match non-TOR tuple")
}
}

func TestMatchesFilterTorNo(t *testing.T) {
cf := CompileFilter(&pb.Filter{Tor: pb.TorFilter_TOR_NO})
if !MatchesFilter(Tuple5{IsTor: false}, cf) {
if !MatchesFilter(Tuple6{IsTor: false}, cf) {
t.Fatal("TOR_NO should match non-TOR tuple")
}
if MatchesFilter(Tuple5{IsTor: true}, cf) {
if MatchesFilter(Tuple6{IsTor: true}, cf) {
t.Fatal("TOR_NO should not match TOR tuple")
}
}

func TestMatchesFilterTorAny(t *testing.T) {
cf := CompileFilter(&pb.Filter{Tor: pb.TorFilter_TOR_ANY})
if !MatchesFilter(Tuple5{IsTor: true}, cf) {
if !MatchesFilter(Tuple6{IsTor: true}, cf) {
t.Fatal("TOR_ANY should match TOR tuple")
}
if !MatchesFilter(Tuple5{IsTor: false}, cf) {
if !MatchesFilter(Tuple6{IsTor: false}, cf) {
t.Fatal("TOR_ANY should match non-TOR tuple")
}
}

// --- ASN label encoding, filtering, and DimensionLabel ---

func TestEncodeLabelTupleRoundtripWithASN(t *testing.T) {
for _, asn := range []int32{0, 1, 12345, 65535} {
orig := Tuple6{Website: "a.com", Prefix: "1.2.3.0/24", URI: "/x", Status: "200", ASN: asn}
got := LabelTuple(EncodeTuple(orig))
if got != orig {
t.Errorf("roundtrip mismatch for ASN=%d: got %+v, want %+v", asn, got, orig)
}
}
}

func TestLabelTupleBackwardCompatNoASN(t *testing.T) {
// 5-field label (no asn field) should decode with ASN=0.
label := "a.com\x001.2.3.0/24\x00/x\x00200\x000"
got := LabelTuple(label)
if got.ASN != 0 {
t.Errorf("expected ASN=0 for 5-field label, got %d", got.ASN)
}
}

func TestMatchesFilterAsnEQ(t *testing.T) {
n := int32(12345)
cf := CompileFilter(&pb.Filter{AsnNumber: &n})
if !MatchesFilter(Tuple6{ASN: 12345}, cf) {
t.Fatal("EQ should match equal ASN")
}
if MatchesFilter(Tuple6{ASN: 99999}, cf) {
t.Fatal("EQ should not match different ASN")
}
}

func TestMatchesFilterAsnNE(t *testing.T) {
n := int32(12345)
cf := CompileFilter(&pb.Filter{AsnNumber: &n, AsnOp: pb.StatusOp_NE})
if MatchesFilter(Tuple6{ASN: 12345}, cf) {
t.Fatal("NE should not match equal ASN")
}
if !MatchesFilter(Tuple6{ASN: 99999}, cf) {
t.Fatal("NE should match different ASN")
}
}

func TestMatchesFilterAsnGE(t *testing.T) {
n := int32(1000)
cf := CompileFilter(&pb.Filter{AsnNumber: &n, AsnOp: pb.StatusOp_GE})
if !MatchesFilter(Tuple6{ASN: 1000}, cf) {
t.Fatal("GE should match equal ASN")
}
if !MatchesFilter(Tuple6{ASN: 2000}, cf) {
t.Fatal("GE should match larger ASN")
}
if MatchesFilter(Tuple6{ASN: 500}, cf) {
t.Fatal("GE should not match smaller ASN")
}
}

func TestMatchesFilterAsnLT(t *testing.T) {
n := int32(64512)
cf := CompileFilter(&pb.Filter{AsnNumber: &n, AsnOp: pb.StatusOp_LT})
if !MatchesFilter(Tuple6{ASN: 1000}, cf) {
t.Fatal("LT should match smaller ASN")
}
if MatchesFilter(Tuple6{ASN: 64512}, cf) {
t.Fatal("LT should not match equal ASN")
}
if MatchesFilter(Tuple6{ASN: 65535}, cf) {
t.Fatal("LT should not match larger ASN")
}
}

func TestDimensionLabelASN(t *testing.T) {
got := DimensionLabel(Tuple6{ASN: 12345}, pb.GroupBy_ASN_NUMBER)
if got != "12345" {
t.Errorf("DimensionLabel ASN: got %q, want %q", got, "12345")
}
}

@@ -139,6 +139,7 @@ const (
GroupBy_CLIENT_PREFIX GroupBy = 1
GroupBy_REQUEST_URI GroupBy = 2
GroupBy_HTTP_RESPONSE GroupBy = 3
GroupBy_ASN_NUMBER GroupBy = 4
)

// Enum value maps for GroupBy.
@@ -148,12 +149,14 @@ var (
1: "CLIENT_PREFIX",
2: "REQUEST_URI",
3: "HTTP_RESPONSE",
4: "ASN_NUMBER",
}
GroupBy_value = map[string]int32{
"WEBSITE": 0,
"CLIENT_PREFIX": 1,
"REQUEST_URI": 2,
"HTTP_RESPONSE": 3,
"ASN_NUMBER": 4,
}
)

@@ -254,6 +257,8 @@ type Filter struct {
WebsiteRegex *string `protobuf:"bytes,6,opt,name=website_regex,json=websiteRegex,proto3,oneof" json:"website_regex,omitempty"` // RE2 regex matched against website
UriRegex *string `protobuf:"bytes,7,opt,name=uri_regex,json=uriRegex,proto3,oneof" json:"uri_regex,omitempty"` // RE2 regex matched against http_request_uri
Tor TorFilter `protobuf:"varint,8,opt,name=tor,proto3,enum=logtail.TorFilter" json:"tor,omitempty"` // restrict to TOR / non-TOR clients
AsnNumber *int32 `protobuf:"varint,9,opt,name=asn_number,json=asnNumber,proto3,oneof" json:"asn_number,omitempty"` // filter by client ASN
AsnOp StatusOp `protobuf:"varint,10,opt,name=asn_op,json=asnOp,proto3,enum=logtail.StatusOp" json:"asn_op,omitempty"` // operator for asn_number; ignored when unset
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
@@ -344,6 +349,20 @@ func (x *Filter) GetTor() TorFilter {
return TorFilter_TOR_ANY
}

func (x *Filter) GetAsnNumber() int32 {
if x != nil && x.AsnNumber != nil {
return *x.AsnNumber
}
return 0
}

func (x *Filter) GetAsnOp() StatusOp {
if x != nil {
return x.AsnOp
}
return StatusOp_EQ
}

type TopNRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
Filter *Filter `protobuf:"bytes,1,opt,name=filter,proto3" json:"filter,omitempty"`
@@ -904,7 +923,7 @@ var File_proto_logtail_proto protoreflect.FileDescriptor

const file_proto_logtail_proto_rawDesc = "" +
"\n" +
"\x13proto/logtail.proto\x12\alogtail\"\xb1\x03\n" +
"\x13proto/logtail.proto\x12\alogtail\"\x8e\x04\n" +
"\x06Filter\x12\x1d\n" +
"\awebsite\x18\x01 \x01(\tH\x00R\awebsite\x88\x01\x01\x12(\n" +
"\rclient_prefix\x18\x02 \x01(\tH\x01R\fclientPrefix\x88\x01\x01\x12-\n" +
@@ -913,7 +932,11 @@ const file_proto_logtail_proto_rawDesc = "" +
"\tstatus_op\x18\x05 \x01(\x0e2\x11.logtail.StatusOpR\bstatusOp\x12(\n" +
"\rwebsite_regex\x18\x06 \x01(\tH\x04R\fwebsiteRegex\x88\x01\x01\x12 \n" +
"\turi_regex\x18\a \x01(\tH\x05R\buriRegex\x88\x01\x01\x12$\n" +
"\x03tor\x18\b \x01(\x0e2\x12.logtail.TorFilterR\x03torB\n" +
"\x03tor\x18\b \x01(\x0e2\x12.logtail.TorFilterR\x03tor\x12\"\n" +
"\n" +
"asn_number\x18\t \x01(\x05H\x06R\tasnNumber\x88\x01\x01\x12(\n" +
"\x06asn_op\x18\n" +
" \x01(\x0e2\x11.logtail.StatusOpR\x05asnOpB\n" +
"\n" +
"\b_websiteB\x10\n" +
"\x0e_client_prefixB\x13\n" +
@@ -921,7 +944,8 @@ const file_proto_logtail_proto_rawDesc = "" +
"\x0e_http_responseB\x10\n" +
"\x0e_website_regexB\f\n" +
"\n" +
"_uri_regex\"\x9a\x01\n" +
"_uri_regexB\r\n" +
"\v_asn_number\"\x9a\x01\n" +
"\vTopNRequest\x12'\n" +
"\x06filter\x18\x01 \x01(\v2\x0f.logtail.FilterR\x06filter\x12+\n" +
"\bgroup_by\x18\x02 \x01(\x0e2\x10.logtail.GroupByR\agroupBy\x12\f\n" +
@@ -966,12 +990,14 @@ const file_proto_logtail_proto_rawDesc = "" +
"\x02GT\x10\x02\x12\x06\n" +
"\x02GE\x10\x03\x12\x06\n" +
"\x02LT\x10\x04\x12\x06\n" +
"\x02LE\x10\x05*M\n" +
"\x02LE\x10\x05*]\n" +
"\aGroupBy\x12\v\n" +
"\aWEBSITE\x10\x00\x12\x11\n" +
"\rCLIENT_PREFIX\x10\x01\x12\x0f\n" +
"\vREQUEST_URI\x10\x02\x12\x11\n" +
"\rHTTP_RESPONSE\x10\x03*A\n" +
"\rHTTP_RESPONSE\x10\x03\x12\x0e\n" +
"\n" +
"ASN_NUMBER\x10\x04*A\n" +
"\x06Window\x12\a\n" +
"\x03W1M\x10\x00\x12\a\n" +
"\x03W5M\x10\x01\x12\b\n" +
@@ -1020,28 +1046,29 @@ var file_proto_logtail_proto_goTypes = []any{
var file_proto_logtail_proto_depIdxs = []int32{
1, // 0: logtail.Filter.status_op:type_name -> logtail.StatusOp
0, // 1: logtail.Filter.tor:type_name -> logtail.TorFilter
4, // 2: logtail.TopNRequest.filter:type_name -> logtail.Filter
2, // 3: logtail.TopNRequest.group_by:type_name -> logtail.GroupBy
3, // 4: logtail.TopNRequest.window:type_name -> logtail.Window
6, // 5: logtail.TopNResponse.entries:type_name -> logtail.TopNEntry
4, // 6: logtail.TrendRequest.filter:type_name -> logtail.Filter
3, // 7: logtail.TrendRequest.window:type_name -> logtail.Window
9, // 8: logtail.TrendResponse.points:type_name -> logtail.TrendPoint
6, // 9: logtail.Snapshot.entries:type_name -> logtail.TopNEntry
14, // 10: logtail.ListTargetsResponse.targets:type_name -> logtail.TargetInfo
5, // 11: logtail.LogtailService.TopN:input_type -> logtail.TopNRequest
8, // 12: logtail.LogtailService.Trend:input_type -> logtail.TrendRequest
11, // 13: logtail.LogtailService.StreamSnapshots:input_type -> logtail.SnapshotRequest
13, // 14: logtail.LogtailService.ListTargets:input_type -> logtail.ListTargetsRequest
7, // 15: logtail.LogtailService.TopN:output_type -> logtail.TopNResponse
10, // 16: logtail.LogtailService.Trend:output_type -> logtail.TrendResponse
12, // 17: logtail.LogtailService.StreamSnapshots:output_type -> logtail.Snapshot
15, // 18: logtail.LogtailService.ListTargets:output_type -> logtail.ListTargetsResponse
15, // [15:19] is the sub-list for method output_type
11, // [11:15] is the sub-list for method input_type
11, // [11:11] is the sub-list for extension type_name
11, // [11:11] is the sub-list for extension extendee
0, // [0:11] is the sub-list for field type_name
1, // 2: logtail.Filter.asn_op:type_name -> logtail.StatusOp
4, // 3: logtail.TopNRequest.filter:type_name -> logtail.Filter
2, // 4: logtail.TopNRequest.group_by:type_name -> logtail.GroupBy
3, // 5: logtail.TopNRequest.window:type_name -> logtail.Window
6, // 6: logtail.TopNResponse.entries:type_name -> logtail.TopNEntry
4, // 7: logtail.TrendRequest.filter:type_name -> logtail.Filter
3, // 8: logtail.TrendRequest.window:type_name -> logtail.Window
9, // 9: logtail.TrendResponse.points:type_name -> logtail.TrendPoint
6, // 10: logtail.Snapshot.entries:type_name -> logtail.TopNEntry
14, // 11: logtail.ListTargetsResponse.targets:type_name -> logtail.TargetInfo
5, // 12: logtail.LogtailService.TopN:input_type -> logtail.TopNRequest
8, // 13: logtail.LogtailService.Trend:input_type -> logtail.TrendRequest
11, // 14: logtail.LogtailService.StreamSnapshots:input_type -> logtail.SnapshotRequest
13, // 15: logtail.LogtailService.ListTargets:input_type -> logtail.ListTargetsRequest
7, // 16: logtail.LogtailService.TopN:output_type -> logtail.TopNResponse
10, // 17: logtail.LogtailService.Trend:output_type -> logtail.TrendResponse
12, // 18: logtail.LogtailService.StreamSnapshots:output_type -> logtail.Snapshot
15, // 19: logtail.LogtailService.ListTargets:output_type -> logtail.ListTargetsResponse
16, // [16:20] is the sub-list for method output_type
12, // [12:16] is the sub-list for method input_type
12, // [12:12] is the sub-list for extension type_name
12, // [12:12] is the sub-list for extension extendee
0, // [0:12] is the sub-list for field type_name
|
||||
}
|
||||
|
||||
func init() { file_proto_logtail_proto_init() }
|
||||
|
||||
@@ -34,6 +34,8 @@ message Filter {
   optional string website_regex = 6; // RE2 regex matched against website
   optional string uri_regex = 7;     // RE2 regex matched against http_request_uri
   TorFilter tor = 8;                 // restrict to TOR / non-TOR clients
+  optional int32 asn_number = 9;     // filter by client ASN
+  StatusOp asn_op = 10;              // operator for asn_number; ignored when unset
 }
 
 enum GroupBy {
@@ -41,6 +43,7 @@ enum GroupBy {
   CLIENT_PREFIX = 1;
   REQUEST_URI = 2;
   HTTP_RESPONSE = 3;
+  ASN_NUMBER = 4;
 }
 
 enum Window {
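The new `asn_number`/`asn_op` pair reuses `StatusOp` as a generic comparison operator. Below is a minimal stand-alone sketch of how a collector might evaluate it. The `GT`..`LE` values come from the descriptor bytes in this diff; `EQ = 0` and `NE = 1` are assumed (consistent with the generated `GetAsnOp` defaulting to `StatusOp_EQ`), and `matchASN` is a hypothetical helper, not part of the generated code:

```go
package main

import "fmt"

// Stand-in for the generated logtail.StatusOp enum. GT..LE values are
// taken from the descriptor bytes in this diff; EQ = 0 and NE = 1 are
// assumed (EQ matches the generated GetAsnOp zero-value default).
type StatusOp int32

const (
	StatusOp_EQ StatusOp = 0
	StatusOp_NE StatusOp = 1
	StatusOp_GT StatusOp = 2
	StatusOp_GE StatusOp = 3
	StatusOp_LT StatusOp = 4
	StatusOp_LE StatusOp = 5
)

// matchASN is a hypothetical helper showing how a collector might
// evaluate the asn_number / asn_op pair: a nil asnNumber means the
// filter does not constrain ASN, matching "ignored when unset".
func matchASN(entryASN int32, asnNumber *int32, asnOp StatusOp) bool {
	if asnNumber == nil {
		return true
	}
	want := *asnNumber
	switch asnOp {
	case StatusOp_NE:
		return entryASN != want
	case StatusOp_GT:
		return entryASN > want
	case StatusOp_GE:
		return entryASN >= want
	case StatusOp_LT:
		return entryASN < want
	case StatusOp_LE:
		return entryASN <= want
	default: // EQ is the zero value, so an unset operator means equality
		return entryASN == want
	}
}

func main() {
	asn := int32(8298)
	fmt.Println(matchASN(8298, &asn, StatusOp_EQ))  // true
	fmt.Println(matchASN(13335, &asn, StatusOp_NE)) // true
	fmt.Println(matchASN(13335, nil, StatusOp_EQ))  // true: unset filter
}
```

One consequence of reusing `StatusOp` rather than a dedicated enum: `asn_op` is a plain (non-optional) field, so equality is implicitly the default operator whenever a client sets only `asn_number`.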
@@ -139,6 +139,7 @@ const (
 	GroupBy_CLIENT_PREFIX GroupBy = 1
 	GroupBy_REQUEST_URI   GroupBy = 2
 	GroupBy_HTTP_RESPONSE GroupBy = 3
+	GroupBy_ASN_NUMBER    GroupBy = 4
 )
 
 // Enum value maps for GroupBy.
@@ -148,12 +149,14 @@ var (
 		1: "CLIENT_PREFIX",
 		2: "REQUEST_URI",
 		3: "HTTP_RESPONSE",
+		4: "ASN_NUMBER",
 	}
 	GroupBy_value = map[string]int32{
 		"WEBSITE":       0,
 		"CLIENT_PREFIX": 1,
 		"REQUEST_URI":   2,
 		"HTTP_RESPONSE": 3,
+		"ASN_NUMBER":    4,
 	}
 )
 
@@ -254,6 +257,8 @@ type Filter struct {
 	WebsiteRegex *string `protobuf:"bytes,6,opt,name=website_regex,json=websiteRegex,proto3,oneof" json:"website_regex,omitempty"` // RE2 regex matched against website
 	UriRegex *string `protobuf:"bytes,7,opt,name=uri_regex,json=uriRegex,proto3,oneof" json:"uri_regex,omitempty"` // RE2 regex matched against http_request_uri
 	Tor TorFilter `protobuf:"varint,8,opt,name=tor,proto3,enum=logtail.TorFilter" json:"tor,omitempty"` // restrict to TOR / non-TOR clients
+	AsnNumber *int32 `protobuf:"varint,9,opt,name=asn_number,json=asnNumber,proto3,oneof" json:"asn_number,omitempty"` // filter by client ASN
+	AsnOp StatusOp `protobuf:"varint,10,opt,name=asn_op,json=asnOp,proto3,enum=logtail.StatusOp" json:"asn_op,omitempty"` // operator for asn_number; ignored when unset
 	unknownFields protoimpl.UnknownFields
 	sizeCache protoimpl.SizeCache
 }
@@ -344,6 +349,20 @@ func (x *Filter) GetTor() TorFilter {
 	return TorFilter_TOR_ANY
 }
 
+func (x *Filter) GetAsnNumber() int32 {
+	if x != nil && x.AsnNumber != nil {
+		return *x.AsnNumber
+	}
+	return 0
+}
+
+func (x *Filter) GetAsnOp() StatusOp {
+	if x != nil {
+		return x.AsnOp
+	}
+	return StatusOp_EQ
+}
+
 type TopNRequest struct {
 	state protoimpl.MessageState `protogen:"open.v1"`
 	Filter *Filter `protobuf:"bytes,1,opt,name=filter,proto3" json:"filter,omitempty"`
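The generated accessors follow protoc-gen-go's convention for proto3 `optional` fields: the field becomes a pointer (presence is tracked by nilness, so "filter on ASN 0" and "no ASN filter" stay distinguishable), and the getter is nil-safe with a zero-value fallback. A self-contained sketch using a stand-in struct rather than the generated `logtail.Filter`:

```go
package main

import "fmt"

// Minimal stand-in mirroring the generated Filter shape: a proto3
// `optional int32` field becomes *int32 in Go, with presence tracked
// by nilness rather than by a zero value.
type Filter struct {
	AsnNumber *int32
}

// GetAsnNumber mirrors the generated getter convention: safe to call
// on a nil receiver, returning the zero value when the field is unset.
func (x *Filter) GetAsnNumber() int32 {
	if x != nil && x.AsnNumber != nil {
		return *x.AsnNumber
	}
	return 0
}

func main() {
	asn := int32(8298)
	fmt.Println((&Filter{AsnNumber: &asn}).GetAsnNumber()) // 8298
	fmt.Println((&Filter{}).GetAsnNumber())                // 0 (unset)
	var f *Filter
	fmt.Println(f.GetAsnNumber()) // 0 (nil receiver is legal in Go)
}
```

Callers that need presence rather than the value should test `f.AsnNumber != nil` directly, since the getter collapses "unset" and "set to 0".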
@@ -904,7 +923,7 @@ var File_proto_logtail_proto protoreflect.FileDescriptor
 
 const file_proto_logtail_proto_rawDesc = "" +
 	"\n" +
-	"\x13proto/logtail.proto\x12\alogtail\"\xb1\x03\n" +
+	"\x13proto/logtail.proto\x12\alogtail\"\x8e\x04\n" +
 	"\x06Filter\x12\x1d\n" +
 	"\awebsite\x18\x01 \x01(\tH\x00R\awebsite\x88\x01\x01\x12(\n" +
 	"\rclient_prefix\x18\x02 \x01(\tH\x01R\fclientPrefix\x88\x01\x01\x12-\n" +
@@ -913,7 +932,11 @@ const file_proto_logtail_proto_rawDesc = "" +
 	"\tstatus_op\x18\x05 \x01(\x0e2\x11.logtail.StatusOpR\bstatusOp\x12(\n" +
 	"\rwebsite_regex\x18\x06 \x01(\tH\x04R\fwebsiteRegex\x88\x01\x01\x12 \n" +
 	"\turi_regex\x18\a \x01(\tH\x05R\buriRegex\x88\x01\x01\x12$\n" +
-	"\x03tor\x18\b \x01(\x0e2\x12.logtail.TorFilterR\x03torB\n" +
+	"\x03tor\x18\b \x01(\x0e2\x12.logtail.TorFilterR\x03tor\x12\"\n" +
+	"\n" +
+	"asn_number\x18\t \x01(\x05H\x06R\tasnNumber\x88\x01\x01\x12(\n" +
+	"\x06asn_op\x18\n" +
+	" \x01(\x0e2\x11.logtail.StatusOpR\x05asnOpB\n" +
 	"\n" +
 	"\b_websiteB\x10\n" +
 	"\x0e_client_prefixB\x13\n" +
@@ -921,7 +944,8 @@ const file_proto_logtail_proto_rawDesc = "" +
 	"\x0e_http_responseB\x10\n" +
 	"\x0e_website_regexB\f\n" +
 	"\n" +
-	"_uri_regex\"\x9a\x01\n" +
+	"_uri_regexB\r\n" +
+	"\v_asn_number\"\x9a\x01\n" +
 	"\vTopNRequest\x12'\n" +
 	"\x06filter\x18\x01 \x01(\v2\x0f.logtail.FilterR\x06filter\x12+\n" +
 	"\bgroup_by\x18\x02 \x01(\x0e2\x10.logtail.GroupByR\agroupBy\x12\f\n" +
@@ -966,12 +990,14 @@ const file_proto_logtail_proto_rawDesc = "" +
 	"\x02GT\x10\x02\x12\x06\n" +
 	"\x02GE\x10\x03\x12\x06\n" +
 	"\x02LT\x10\x04\x12\x06\n" +
-	"\x02LE\x10\x05*M\n" +
+	"\x02LE\x10\x05*]\n" +
 	"\aGroupBy\x12\v\n" +
 	"\aWEBSITE\x10\x00\x12\x11\n" +
 	"\rCLIENT_PREFIX\x10\x01\x12\x0f\n" +
 	"\vREQUEST_URI\x10\x02\x12\x11\n" +
-	"\rHTTP_RESPONSE\x10\x03*A\n" +
+	"\rHTTP_RESPONSE\x10\x03\x12\x0e\n" +
+	"\n" +
+	"ASN_NUMBER\x10\x04*A\n" +
 	"\x06Window\x12\a\n" +
 	"\x03W1M\x10\x00\x12\a\n" +
 	"\x03W5M\x10\x01\x12\b\n" +
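The one-byte edits in the rawDesc hunks are length prefixes rather than content. For instance, the `*M` to `*]` change is the serialized length of the `GroupBy` `EnumDescriptorProto` (`*` is the tag byte 0x2A for `FileDescriptorProto` field 5, `enum_type`, wire type 2) growing from 0x4D (77) to 0x5D (93) bytes, which is exactly the 16 bytes of the appended `ASN_NUMBER` value entry. A quick sanity check of that arithmetic:

```go
package main

import "fmt"

func main() {
	// In the "*M" -> "*]" change above, the byte after '*' (0x2A, the
	// tag for FileDescriptorProto field 5 enum_type, wire type 2) is
	// the length of the serialized GroupBy EnumDescriptorProto.
	fmt.Println(']' - 'M') // length grew by 16 bytes

	// That delta matches the appended EnumValueDescriptorProto for
	// ASN_NUMBER: tag+length "\x12\x0e" plus a 14-byte payload
	// (name "ASN_NUMBER" with its own tag+length, then number = 4).
	entry := "\x12\x0e\n\nASN_NUMBER\x10\x04"
	fmt.Println(len(entry)) // 16
}
```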
@@ -1020,28 +1046,29 @@ var file_proto_logtail_proto_goTypes = []any{
 var file_proto_logtail_proto_depIdxs = []int32{
 	1, // 0: logtail.Filter.status_op:type_name -> logtail.StatusOp
 	0, // 1: logtail.Filter.tor:type_name -> logtail.TorFilter
-	4, // 2: logtail.TopNRequest.filter:type_name -> logtail.Filter
-	2, // 3: logtail.TopNRequest.group_by:type_name -> logtail.GroupBy
-	3, // 4: logtail.TopNRequest.window:type_name -> logtail.Window
-	6, // 5: logtail.TopNResponse.entries:type_name -> logtail.TopNEntry
-	4, // 6: logtail.TrendRequest.filter:type_name -> logtail.Filter
-	3, // 7: logtail.TrendRequest.window:type_name -> logtail.Window
-	9, // 8: logtail.TrendResponse.points:type_name -> logtail.TrendPoint
-	6, // 9: logtail.Snapshot.entries:type_name -> logtail.TopNEntry
-	14, // 10: logtail.ListTargetsResponse.targets:type_name -> logtail.TargetInfo
-	5, // 11: logtail.LogtailService.TopN:input_type -> logtail.TopNRequest
-	8, // 12: logtail.LogtailService.Trend:input_type -> logtail.TrendRequest
-	11, // 13: logtail.LogtailService.StreamSnapshots:input_type -> logtail.SnapshotRequest
-	13, // 14: logtail.LogtailService.ListTargets:input_type -> logtail.ListTargetsRequest
-	7, // 15: logtail.LogtailService.TopN:output_type -> logtail.TopNResponse
-	10, // 16: logtail.LogtailService.Trend:output_type -> logtail.TrendResponse
-	12, // 17: logtail.LogtailService.StreamSnapshots:output_type -> logtail.Snapshot
-	15, // 18: logtail.LogtailService.ListTargets:output_type -> logtail.ListTargetsResponse
-	15, // [15:19] is the sub-list for method output_type
-	11, // [11:15] is the sub-list for method input_type
-	11, // [11:11] is the sub-list for extension type_name
-	11, // [11:11] is the sub-list for extension extendee
-	0, // [0:11] is the sub-list for field type_name
+	1, // 2: logtail.Filter.asn_op:type_name -> logtail.StatusOp
+	4, // 3: logtail.TopNRequest.filter:type_name -> logtail.Filter
+	2, // 4: logtail.TopNRequest.group_by:type_name -> logtail.GroupBy
+	3, // 5: logtail.TopNRequest.window:type_name -> logtail.Window
+	6, // 6: logtail.TopNResponse.entries:type_name -> logtail.TopNEntry
+	4, // 7: logtail.TrendRequest.filter:type_name -> logtail.Filter
+	3, // 8: logtail.TrendRequest.window:type_name -> logtail.Window
+	9, // 9: logtail.TrendResponse.points:type_name -> logtail.TrendPoint
+	6, // 10: logtail.Snapshot.entries:type_name -> logtail.TopNEntry
+	14, // 11: logtail.ListTargetsResponse.targets:type_name -> logtail.TargetInfo
+	5, // 12: logtail.LogtailService.TopN:input_type -> logtail.TopNRequest
+	8, // 13: logtail.LogtailService.Trend:input_type -> logtail.TrendRequest
+	11, // 14: logtail.LogtailService.StreamSnapshots:input_type -> logtail.SnapshotRequest
+	13, // 15: logtail.LogtailService.ListTargets:input_type -> logtail.ListTargetsRequest
+	7, // 16: logtail.LogtailService.TopN:output_type -> logtail.TopNResponse
+	10, // 17: logtail.LogtailService.Trend:output_type -> logtail.TrendResponse
+	12, // 18: logtail.LogtailService.StreamSnapshots:output_type -> logtail.Snapshot
+	15, // 19: logtail.LogtailService.ListTargets:output_type -> logtail.ListTargetsResponse
+	16, // [16:20] is the sub-list for method output_type
+	12, // [12:16] is the sub-list for method input_type
+	12, // [12:12] is the sub-list for extension type_name
+	12, // [12:12] is the sub-list for extension extendee
+	0, // [0:12] is the sub-list for field type_name
 }
 
 func init() { file_proto_logtail_proto_init() }