Add an idea, and another set of typo fixes
All checks were successful
continuous-integration/drone/push Build is passing
@@ -669,7 +669,8 @@ though.
I'll continue to work with the folks in the sFlow and VPP communities and iterate on the plugin and
other **sFlow Agent** machinery. In an upcoming article, I hope to share more details on how to tie
the VPP plugin in to the `hsflowd` host sflow daemon in a way that the interface indexes, counters
and packet lengths are all correct. Of course, the main improvement that we can make is to allow for
the system to work better under load, which will take some thinking.

I should do a few more tests with a debug binary and profiling turned on. I quickly ran a `perf`
over the VPP (release / optimized) binary running on the bench, but it merely said 80% of time was
@@ -702,6 +703,19 @@ interesting work to do on this `sflow` plugin, with matching ifIndex for consume
reading interface counters from the dataplane (or from the Prometheus Exporter), and most
importantly, ensuring it works well, or fails gracefully, under stringent load.

From the _cray-cray_ ideas department, what if we:

1. In the worker thread, produce the sample, but instead of sending an RPC to main and taking the
lock, append it to a producer sample queue and move on. This way, no locks are needed, and each
worker thread will have its own producer queue.

1. Create a separate worker (or even pool of workers), running on possibly a different CPU (or in
main), that runs a loop iterating over all sflow sample queues, consuming the samples and sending
them in batches to the PSAMPLE Netlink group, possibly dropping samples if there are too many
coming in.

I'm reminded that this pattern exists already -- async crypto workers create a `crypto-dispatch`
node that acts as a poller for inbound crypto, and it hands off the result back into the worker
thread: lockless at the expense of some complexity!

## Acknowledgements

The plugin I am testing here is a prototype written by Neil McKee of inMon. I also wanted to say