Run reducers in their own V8 HandleScope for js modules #4746
Open
joshua-spacetime wants to merge 1 commit into joshua/js-worker-queue
Description of Changes
The main fix in this patch is that reducer, view, and procedure calls now run inside a fresh per-invocation V8 HandleScope instead of reusing one worker-lifetime scope. That gives V8 a real call boundary for temporary Local&lt;T&gt; handles and avoids retaining call-local JS objects for the lifetime of a long-lived worker.

This patch also makes end-of-call host cleanup explicit, lowers the default heap-check cadence, and limits exported heap metrics to the instance-lane worker only.
The JS instance lane is intentionally long-lived. Before this patch, V8 call-local handles and some host-side call state could survive across multiple invocations on the same worker. Over time that can create gradual heap growth, more GC work, and eventually enough slowdown or heap pressure that the isolate needs replacement.
We also had poor heap observability for diagnosis. The Prometheus gauges were effectively mixing multiple workers together, and pooled workers made the numbers noisy without adding much value, since those workers are short-lived.
What changed
1. Add a fresh V8 HandleScope for every invocation

Each reducer, view, and procedure call now opens a nested V8 scope for the duration of that call.
This preserves the existing long-lived isolate and context, but gives every invocation its own temporary handle lifetime. Call-local V8 handles now die when the invocation returns instead of sticking around until the worker exits.
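To illustrate the per-invocation lifetime idea, here is a minimal Rust sketch using plain RAII in place of the real v8 HandleScope API (the `CallScope` type and its methods are hypothetical stand-ins, not the actual host code): handles allocated during a call are owned by a scope that drops when the call returns, so they cannot accumulate across invocations.

```rust
// Hypothetical sketch: call-local "handles" (Local<T> in real V8) live in a
// scope that is opened per invocation and dropped on return.
struct CallScope {
    handles: Vec<String>,
}

impl CallScope {
    fn new() -> Self {
        CallScope { handles: Vec::new() }
    }
    fn alloc(&mut self, value: &str) {
        self.handles.push(value.to_string());
    }
    fn len(&self) -> usize {
        self.handles.len()
    }
}

/// One reducer invocation: open a fresh scope, do the work, and let the
/// scope (and every call-local handle in it) drop on return.
fn invoke_reducer() -> usize {
    let mut scope = CallScope::new();
    scope.alloc("arg buffer");
    scope.alloc("return value");
    scope.len()
    // `scope` is dropped here; nothing survives into the next invocation.
}

fn main() {
    // Each call starts from an empty scope, so handle counts do not grow
    // across invocations on a long-lived worker.
    assert_eq!(invoke_reducer(), 2);
    assert_eq!(invoke_reducer(), 2);
    println!("ok");
}
```

The contrast with the pre-patch behavior is that a single worker-lifetime scope would keep every call's handles alive until the worker exits.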
As part of that refactor:

- ArrayBuffer is now created per reducer call instead of being stored as a worker-lifetime local.

2. Make end-of-call cleanup a real boundary
The V8 host now force-clears leftover per-call host state at the end of a function call.
Specifically:
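As a hypothetical sketch of that cleanup boundary (the `HostState` and `CallGuard` names below are illustrative, not the actual host types), a Drop guard can force-clear leftover per-call state no matter how the call ends:

```rust
// Hypothetical sketch: per-call host state is force-cleared by a guard when
// the call ends, even on an early error return, so nothing leaks into the
// next invocation on the same worker.
#[derive(Default)]
struct HostState {
    pending_rows: Vec<u64>,
    deferred_errors: Vec<String>,
}

struct CallGuard<'a> {
    state: &'a mut HostState,
}

impl Drop for CallGuard<'_> {
    fn drop(&mut self) {
        // Force-clear leftover per-call state at the call boundary.
        self.state.pending_rows.clear();
        self.state.deferred_errors.clear();
    }
}

fn run_call(state: &mut HostState, fail: bool) -> Result<(), String> {
    let guard = CallGuard { state };
    guard.state.pending_rows.push(42);
    if fail {
        // Early return: the guard still clears the state on drop.
        return Err("reducer error".to_string());
    }
    Ok(())
}

fn main() {
    let mut state = HostState::default();
    let _ = run_call(&mut state, true);
    // State is empty after the call, regardless of how it ended.
    assert!(state.pending_rows.is_empty());
    let _ = run_call(&mut state, false);
    assert!(state.pending_rows.is_empty());
    println!("ok");
}
```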
3. Lower the default heap-check cadence
The default V8 heap policy is now more aggressive about checking worker heap usage.
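A combined request-count / time-interval cadence like this can be sketched as follows (a minimal illustration, not the actual policy code; the struct and field names are assumptions):

```rust
// Hypothetical sketch: a heap check fires when either N requests have been
// handled since the last check or T time has elapsed, whichever comes first.
use std::time::{Duration, Instant};

struct HeapCheckPolicy {
    request_interval: u64,   // check every N requests...
    time_interval: Duration, // ...or at least every T elapsed
    requests_since_check: u64,
    last_check: Instant,
}

impl HeapCheckPolicy {
    fn new(request_interval: u64, time_interval: Duration) -> Self {
        HeapCheckPolicy {
            request_interval,
            time_interval,
            requests_since_check: 0,
            last_check: Instant::now(),
        }
    }

    /// Called once per request; returns true when a heap check is due.
    fn should_check(&mut self) -> bool {
        self.requests_since_check += 1;
        if self.requests_since_check >= self.request_interval
            || self.last_check.elapsed() >= self.time_interval
        {
            self.requests_since_check = 0;
            self.last_check = Instant::now();
            return true;
        }
        false
    }
}

fn main() {
    // The new, more aggressive defaults: every 4096 requests or 5 seconds.
    let mut policy = HeapCheckPolicy::new(4096, Duration::from_secs(5));
    let checks = (0..10_000).filter(|_| policy.should_check()).count();
    // 10_000 back-to-back requests trigger checks at 4096 and 8192.
    assert_eq!(checks, 2);
    println!("ok");
}
```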
Defaults changed from:
- heap-check-request-interval = 65536
- heap-check-time-interval = 30s

to:

- heap-check-request-interval = 4096
- heap-check-time-interval = 5s

These settings remain configurable through the existing v8-heap-policy config.

4. Only export heap metrics for the instance-lane worker
Heap metrics now reflect only the long-lived instance lane.
Specifically, heap gauges are now exported only for the worker labeled worker_kind="instance_lane". This avoids the last-writer-wins and noisy-overlap issues from pooled workers while keeping the metrics focused on the worker that accumulates state over time and is most relevant for long-run slowdown diagnosis.
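A minimal sketch of that filtering (the `WorkerKind`/`HeapGauges` names are illustrative, with a HashMap standing in for the real Prometheus gauges): reports from pooled workers are simply dropped, so a short-lived worker can never clobber the instance-lane value.

```rust
// Hypothetical sketch: only the long-lived instance lane exports heap
// metrics; pooled workers skip the export entirely, avoiding
// last-writer-wins overwrites of the gauge.
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq)]
enum WorkerKind {
    InstanceLane,
    Pooled,
}

#[derive(Default)]
struct HeapGauges {
    // Stand-in for Prometheus gauges, keyed by metric name.
    values: HashMap<&'static str, u64>,
}

impl HeapGauges {
    fn report(&mut self, kind: WorkerKind, used_heap_bytes: u64) {
        // Drop reports from every worker except the instance lane.
        if kind != WorkerKind::InstanceLane {
            return;
        }
        self.values.insert("v8_used_heap_bytes", used_heap_bytes);
    }
}

fn main() {
    let mut gauges = HeapGauges::default();
    gauges.report(WorkerKind::InstanceLane, 10_000_000);
    // A pooled worker's report is ignored rather than clobbering the gauge.
    gauges.report(WorkerKind::Pooled, 1);
    assert_eq!(gauges.values["v8_used_heap_bytes"], 10_000_000);
    println!("ok");
}
```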
API and ABI breaking changes
None
Expected complexity level and risk
2
Testing
Manually tested via the keynote-2 benchmark. The keynote benchmark will be added to CI, where it will serve as a regression test.