Support multiple device-pinned listens sharing a single port
Nginx's config-level duplicate-listen check rejected the documented pattern of `listen 80 device=X ipng_source_tag=A; listen 80 device=Y ipng_source_tag=B;` with "a duplicate listen 0.0.0.0:80", and even when that dedup was bypassed, the kernel refused the second bind() because the first socket was already holding the port without SO_BINDTODEVICE.

The listen wrapper now detects same-sockaddr duplicates before the core handler sees them and records them with `needs_clone=1`. In init_module, phase 1 clones an ngx_listening_t for each such duplicate, phase 3 closes every inherited naked fd, and phase 4 rebinds every target with SO_REUSEADDR, SO_REUSEPORT, and SO_BINDTODEVICE set before bind(). SO_REUSEPORT keeps `nginx -s reload` from colliding with the still-bound sockets held by old workers during graceful drain; IPV6_V6ONLY matches nginx's default so the IPv6 listen doesn't claim the IPv4 wildcard and collide with sibling IPv4-specific listens.

Restructure 01-module to cover the pattern end-to-end: four device-pinned listens on port 8080 (eth1 shares tag `tag1` across v4 and v6; eth2 splits into `tag2-v4` / `tag2-v6`), clients and server both get IPv6 addresses, and a new "Per-(device, family) request count accuracy" case proves that 10 requests on each of the four combinations yields tag1=20, tag2-v4=10, tag2-v6=10. Mgmt/direct traffic moves to port 9180 so it no longer clashes with the shared-port wildcards.

Document the constraint in docs/user-guide.md: all listens on a given port must carry `device=`, and direct traffic belongs on a separate port.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@@ -18,7 +18,7 @@ ${SERVER}        clab-${lab-name}-server
 ${CLIENT1}       clab-${lab-name}-client1
 ${CLIENT2}       clab-${lab-name}-client2
 ${SCRAPE_URL}    http://172.20.40.2:9113/.well-known/ipng/statsz
-${SERVER_MGMT}   http://172.20.40.2:8080
+${SERVER_MGMT}   http://172.20.40.2:9180
 
 *** Test Cases ***
 
@@ -44,22 +44,42 @@ JSON scrape
 
 # --- Per-device attribution ---
 
-Attribute cl1 via eth1
-    [Documentation]    Traffic on server:eth1 carries source_tag=cl1, vip=10.0.1.1.
+Attribute tag1 via eth1 (v4)
+    [Documentation]    IPv4 traffic on server:eth1 carries source_tag=tag1.
     Send Fast Requests    ${CLIENT1}    10.0.1.1    5
     Wait For Flush
     ${output} =    Scrape Prometheus
-    Should Contain    ${output}    source_tag="cl1"
+    Should Contain    ${output}    source_tag="tag1"
     Should Contain    ${output}    vip="10.0.1.1"
 
-Attribute cl2 via eth2
-    [Documentation]    Traffic on server:eth2 carries source_tag=cl2, vip=10.0.2.1.
+Attribute tag2-v4 via eth2 (v4)
+    [Documentation]    IPv4 traffic on server:eth2 carries source_tag=tag2-v4.
     Send Fast Requests    ${CLIENT2}    10.0.2.1    5
     Wait For Flush
     ${output} =    Scrape Prometheus
-    Should Contain    ${output}    source_tag="cl2"
+    Should Contain    ${output}    source_tag="tag2-v4"
     Should Contain    ${output}    vip="10.0.2.1"
 
+Attribute tag1 via eth1 (v6)
+    [Documentation]    IPv6 traffic on server:eth1 carries source_tag=tag1
+    ...    - same tag as v4, demonstrating that tag= can be
+    ...    shared across address families for one device.
+    Send Fast Requests v6    ${CLIENT1}    2001:db8:1::1    5
+    Wait For Flush
+    ${output} =    Scrape With Filter    source_tag=tag1
+    Should Contain    ${output}    source_tag="tag1"
+    Should Contain    ${output}    vip="2001:db8:1::1"
+
+Attribute tag2-v6 via eth2 (v6)
+    [Documentation]    IPv6 traffic on server:eth2 carries source_tag=tag2-v6
+    ...    - distinct from the eth2 v4 tag, demonstrating
+    ...    per-(device, family) attribution.
+    Send Fast Requests v6    ${CLIENT2}    2001:db8:2::1    5
+    Wait For Flush
+    ${output} =    Scrape Prometheus
+    Should Contain    ${output}    source_tag="tag2-v6"
+    Should Contain    ${output}    vip="2001:db8:2::1"
+
 Direct traffic tagged
     [Documentation]    Mgmt-interface traffic carries source_tag=direct.
     ${rc}    ${output} =    Run And Return Rc And Output
@@ -76,7 +96,7 @@ Per-class code counters
     Docker Exec Ignore Rc    ${CLIENT1}    curl -s http://10.0.1.1:8080/notfound
     Docker Exec Ignore Rc    ${CLIENT1}    curl -s http://10.0.1.1:8080/notfound
     Wait For Flush
-    ${output} =    Scrape With Filter    source_tag=cl1
+    ${output} =    Scrape With Filter    source_tag=tag1
     Should Contain    ${output}    code="4xx"
     Should Contain    ${output}    code="2xx"
 
@@ -86,11 +106,11 @@ Duration histogram
     [Documentation]    proxy_pass to a 50 ms backend populates sum and buckets.
     Send Slow Requests    ${CLIENT1}    10.0.1.1    3
     Wait For Flush
-    ${prom} =    Scrape With Filter    source_tag=cl1
+    ${prom} =    Scrape With Filter    source_tag=tag1
     Should Match Regexp    ${prom}    request_duration_seconds_sum\\{[^}]*\\}\\s+\\d+\\.\\d*[1-9]
 
     ${rc}    ${json} =    Run And Return Rc And Output
-    ...    curl -sf -H 'Accept: application/json' '${SCRAPE_URL}?source_tag=cl1' | python3 -m json.tool
+    ...    curl -sf -H 'Accept: application/json' '${SCRAPE_URL}?source_tag=tag1' | python3 -m json.tool
     Should Be Equal As Integers    ${rc}    0
     Should Contain    ${json}    request_duration_ms
     Should Contain    ${json}    buckets
@@ -98,14 +118,14 @@ Duration histogram
 # --- Scrape filters ---
 
 Filter by source_tag
-    [Documentation]    ?source_tag=cl1 returns cl1 only; cl2 only.
-    ${output} =    Scrape With Filter    source_tag=cl1
-    Should Contain    ${output}    source_tag="cl1"
-    Should Not Contain    ${output}    source_tag="cl2"
+    [Documentation]    ?source_tag=tag1 returns tag1 only; tag2-v4 only.
+    ${output} =    Scrape With Filter    source_tag=tag1
+    Should Contain    ${output}    source_tag="tag1"
+    Should Not Contain    ${output}    source_tag="tag2-v4"
 
-    ${output} =    Scrape With Filter    source_tag=cl2
-    Should Contain    ${output}    source_tag="cl2"
-    Should Not Contain    ${output}    source_tag="cl1"
+    ${output} =    Scrape With Filter    source_tag=tag2-v4
+    Should Contain    ${output}    source_tag="tag2-v4"
+    Should Not Contain    ${output}    source_tag="tag1"
 
 Filter by VIP
     [Documentation]    ?vip=10.0.1.1 excludes 10.0.2.1.
@@ -115,10 +135,10 @@ Filter by VIP
 
 Filter combined
     [Documentation]    source_tag + vip intersection.
-    ${output} =    Scrape With Filter    source_tag=cl1&vip=10.0.1.1
-    Should Contain    ${output}    source_tag="cl1"
+    ${output} =    Scrape With Filter    source_tag=tag1&vip=10.0.1.1
+    Should Contain    ${output}    source_tag="tag1"
     Should Contain    ${output}    vip="10.0.1.1"
-    Should Not Contain    ${output}    source_tag="cl2"
+    Should Not Contain    ${output}    source_tag="tag2-v4"
 
 Filter unknown tag
     [Documentation]    Unknown source_tag returns empty data set.
@@ -128,18 +148,18 @@ Filter unknown tag
 # --- nginx variable ---
 
 Variable in access log
-    [Documentation]    $ipng_source_tag appears as cl1, cl2, direct in log.
+    [Documentation]    $ipng_source_tag appears as tag1, tag2-v4, direct in log.
     ${output} =    Docker Exec    ${SERVER}    cat /var/log/nginx/access.log
-    Should Match Regexp    ${output}    src=cl1
-    Should Match Regexp    ${output}    src=cl2
+    Should Match Regexp    ${output}    src=tag1
+    Should Match Regexp    ${output}    src=tag2-v4
     Should Match Regexp    ${output}    src=direct
 
 UDP logtail
     [Documentation]    ipng_stats_logtail udp:// sends log lines to a local
     ...    nc listener; captured file has all sources and VIPs.
     ${output} =    Docker Exec    ${SERVER}    cat /var/log/nginx/logtail-udp.log
-    Should Match Regexp    ${output}    cl1
-    Should Match Regexp    ${output}    cl2
+    Should Match Regexp    ${output}    tag1
+    Should Match Regexp    ${output}    tag2-v4
     Should Match Regexp    ${output}    direct
     Should Match Regexp    ${output}    10\\.0\\.1\\.1
     Should Match Regexp    ${output}    10\\.0\\.2\\.1
@@ -166,10 +186,10 @@ VIP in access log
 
 Counters survive reload
     [Documentation]    Shared-memory zone persists across nginx -s reload.
-    ${before} =    Get Request Count    cl1
+    ${before} =    Get Request Count    tag1
     Docker Exec    ${SERVER}    nginx -s reload
     Sleep    2s    Wait for new workers
-    ${after} =    Get Request Count    cl1
+    ${after} =    Get Request Count    tag1
     Should Be True    ${after} >= ${before}
     ...    Counters dropped after reload: before=${before} after=${after}
 
@@ -177,24 +197,37 @@ Traffic after reload
     [Documentation]    New requests are counted after reload.
     Send Fast Requests    ${CLIENT1}    10.0.1.1    3
     Wait For Flush
-    ${output} =    Scrape With Filter    source_tag=cl1
-    Should Contain    ${output}    source_tag="cl1"
+    ${output} =    Scrape With Filter    source_tag=tag1
+    Should Contain    ${output}    source_tag="tag1"
 
 # --- Counter correctness ---
 
-Request count accuracy
-    [Documentation]    10 requests per client yields exactly 10 delta.
-    ${before_cl1} =    Get Request Count    cl1
-    ${before_cl2} =    Get Request Count    cl2
-    Send Fast Requests    ${CLIENT1}    10.0.1.1    10
-    Send Fast Requests    ${CLIENT2}    10.0.2.1    10
+Per-(device, family) request count accuracy
+    [Documentation]    10 requests on each of the four (device, family)
+    ...    combinations yields tag1=20, tag2-v4=10, tag2-v6=10.
+    ...    Demonstrates that one device can combine v4+v6 under
+    ...    a single tag while another device can split them.
+    ${before_tag1} =    Get Request Count    tag1
+    ${before_tag2v4} =    Get Request Count    tag2-v4
+    ${before_tag2v6} =    Get Request Count    tag2-v6
+
+    Send Fast Requests    ${CLIENT1}    10.0.1.1    10
+    Send Fast Requests v6    ${CLIENT1}    2001:db8:1::1    10
+    Send Fast Requests    ${CLIENT2}    10.0.2.1    10
+    Send Fast Requests v6    ${CLIENT2}    2001:db8:2::1    10
     Wait For Flush
-    ${after_cl1} =    Get Request Count    cl1
-    ${after_cl2} =    Get Request Count    cl2
-    ${delta_cl1} =    Evaluate    ${after_cl1} - ${before_cl1}
-    ${delta_cl2} =    Evaluate    ${after_cl2} - ${before_cl2}
-    Should Be Equal As Integers    ${delta_cl1}    10
-    Should Be Equal As Integers    ${delta_cl2}    10
+
+    ${after_tag1} =    Get Request Count    tag1
+    ${after_tag2v4} =    Get Request Count    tag2-v4
+    ${after_tag2v6} =    Get Request Count    tag2-v6
+
+    ${delta_tag1} =    Evaluate    ${after_tag1} - ${before_tag1}
+    ${delta_tag2v4} =    Evaluate    ${after_tag2v4} - ${before_tag2v4}
+    ${delta_tag2v6} =    Evaluate    ${after_tag2v6} - ${before_tag2v6}
+
+    Should Be Equal As Integers    ${delta_tag1}    20
+    Should Be Equal As Integers    ${delta_tag2v4}    10
+    Should Be Equal As Integers    ${delta_tag2v6}    10
 
 *** Keywords ***
 
@@ -249,6 +282,12 @@ Send Fast Requests
         Docker Exec    ${client}    curl -sf http://${server_ip}:8080/
     END
 
+Send Fast Requests v6
+    [Arguments]    ${client}    ${server_ip}    ${count}
+    FOR    ${i}    IN RANGE    ${count}
+        Docker Exec    ${client}    curl -sf http://[${server_ip}]:8080/
+    END
+
 Send Slow Requests
     [Arguments]    ${client}    ${server_ip}    ${count}
     FOR    ${i}    IN RANGE    ${count}