
Investigate batching / parallelizing trace event processing #330

Open

christos68k opened this issue Jan 29, 2025 · 0 comments
Some time ago we switched trace event processing from batching to single events. This means that we now process trace events serially, one after another, as we read them from the per-CPU buffers: the tracer sends each individual trace event to the tracehandler over an unbuffered channel from inside the event drain loop, roughly as in the sketch below.
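For context, a minimal sketch of that serial flow (all names here are illustrative stand-ins, not the actual profiler API): because the channel is unbuffered, event N+1 cannot be read from the per-CPU buffers until the handler has accepted event N.

```go
// Sketch of the current serial flow (TraceEvent and the drain loop are
// illustrative, not the real profiler types). Each send on the unbuffered
// channel blocks until the handler side receives the event.
package main

import "fmt"

// TraceEvent stands in for the real trace event type.
type TraceEvent struct {
	CPU int
	ID  int
}

func main() {
	events := make(chan TraceEvent) // unbuffered: send blocks until receive

	// Tracer side: drain loop emitting events one at a time.
	go func() {
		defer close(events)
		for id := 0; id < 4; id++ {
			events <- TraceEvent{CPU: id % 2, ID: id} // blocks per event
		}
	}()

	// tracehandler side: strictly serial processing, one event at a time.
	for ev := range events {
		fmt.Printf("handling event %d from CPU %d\n", ev.ID, ev.CPU)
	}
}
```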

During the recent rework of SymbolizationComplete we noticed that batching (at some extra memory cost) can be trivially reintroduced: the tracer can first drain the per-CPU buffers into an event batch and send that whole batch to the tracehandler, as sketched below.
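A rough sketch of the batched variant, under the same caveat that the names (drainPerCPUBuffers in particular) are hypothetical: the per-event channel synchronization disappears, at the cost of holding the drained batch in memory.

```go
// Sketch of batched handoff (illustrative names): the drain loop empties
// the per-CPU buffers into a slice, then sends the whole batch in a single
// channel operation instead of one send per event.
package main

import "fmt"

type TraceEvent struct {
	CPU int
	ID  int
}

// drainPerCPUBuffers is a hypothetical stand-in that empties the per-CPU
// buffers and returns the events read in this pass.
func drainPerCPUBuffers() []TraceEvent {
	return []TraceEvent{{CPU: 0, ID: 0}, {CPU: 1, ID: 1}, {CPU: 0, ID: 2}}
}

func main() {
	batches := make(chan []TraceEvent, 1) // one send per drain pass

	// Tracer side: drain everything first, then hand off the batch.
	go func() {
		defer close(batches)
		if batch := drainPerCPUBuffers(); len(batch) > 0 {
			batches <- batch
		}
	}()

	// tracehandler side: receives complete batches.
	for batch := range batches {
		for _, ev := range batch {
			fmt.Printf("handling event %d from CPU %d\n", ev.ID, ev.CPU)
		}
	}
}
```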

Additionally, there's the option of having the tracehandler process the events in each batch in parallel. That would help avoid dropped events in the tracer drain loop (the recently introduced metrics should reveal whether such drops are actually happening). From a quick glance, most trace-processing operations can be trivially parallelized, as they don't depend on a write lock; one possible shape is sketched after this paragraph.
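One way this could look, as a sketch only (handleBatch and the worker pool are assumptions, and it presumes, as noted above, that per-event processing takes no write lock): a bounded worker pool fans the batch out across goroutines and waits for the whole batch before the next drain pass.

```go
// Sketch of parallel batch processing on the tracehandler side
// (illustrative names; assumes per-event work needs no write lock).
package main

import (
	"fmt"
	"runtime"
	"sync"
)

type TraceEvent struct {
	CPU int
	ID  int
}

func handleBatch(batch []TraceEvent) {
	workers := runtime.GOMAXPROCS(0)
	work := make(chan TraceEvent)
	var wg sync.WaitGroup

	// Bounded worker pool: one goroutine per available CPU.
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for ev := range work {
				// Per-event processing: safe to run concurrently as long
				// as it only takes read locks on shared state.
				fmt.Printf("handling event %d from CPU %d\n", ev.ID, ev.CPU)
			}
		}()
	}

	for _, ev := range batch {
		work <- ev
	}
	close(work)
	wg.Wait() // batch fully processed before the next drain pass
}

func main() {
	handleBatch([]TraceEvent{{0, 0}, {1, 1}, {0, 2}, {1, 3}})
}
```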
