Severe performance regression with spawnSync (Mac) #62554
Description
Version
24.14.0, 24.14.1, 25.6.0, 25.6.1
Platform
Darwin M3Max.local 24.6.0 Darwin Kernel Version 24.6.0: Wed Nov 5 21:32:38 PST 2025; root:xnu-11417.140.69.705.2~1/RELEASE_ARM64_T6031 arm64
Subsystem
child_process
What steps will reproduce the bug?
This regression occurs on both v24 (from 24.14.0 to 24.14.1) and v25 (from 25.6.0 to 25.6.1).
v24.14.1 and v25.6.1 are both around 2x slower to execute a child_process.spawnSync call for a simple CLI app in an isolated repro benchmark, and upwards of 10x slower in a real-world application with heavyweight spawnSync calls (long-lived children, large stdout/stderr, long CLI argument lists), compared to v24.14.0 and v25.6.0 on macOS. With some basic testing I could not reproduce this on Linux; I have not tested Windows.
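The real application's calls aren't included here, but as a rough illustration of the "heavyweight" shape (long argument list, large captured stdout), here is a self-contained sketch. The argument count and output size are made up, and node itself stands in for the real CLI:

```javascript
// Hypothetical sketch of a "heavyweight" spawnSync call: a long argv list and a
// child producing large stdout. node itself is used as the child so the snippet
// runs anywhere; the real application spawns other CLIs with real flags.
import { spawnSync } from 'node:child_process';

// ~500 positional args and a child that writes 256 KiB to stdout.
const bigArgs = Array.from({ length: 500 }, (_, i) => `arg${i}`);
const result = spawnSync(
  process.execPath,
  ['-e', 'process.stdout.write("x".repeat(256 * 1024))', ...bigArgs],
  { encoding: 'utf8' },
);
console.log(result.stdout.length); // bytes of captured stdout
```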
Given the following minimal sample project:
nodeperf.mjs:

```js
#!/usr/bin/env node
import chp from 'node:child_process';
const clang = chp.spawnSync('clang', ['--version'], { encoding: 'utf8' });
console.log(clang.stdout);
```

package.json:

```json
{
  "name": "nodeperftest",
  "version": "1.0.0",
  "main": "nodeperf.mjs",
  "bin": {
    "nodeperftest": "nodeperf.mjs"
  },
  "type": "module"
}
```

For added context, my versions are npm 11.11.0, n 10.2.0, and hyperfine 1.20.0.
I ran the following benchmarks, using n to switch between Node versions and hyperfine to measure:
```
> hyperfine --warmup 3 'n exec 24.14.0 node nodeperf.mjs' 'n exec 24.14.1 node nodeperf.mjs'
Benchmark 1: n exec 24.14.0 node nodeperf.mjs
  Time (mean ± σ):      48.5 ms ±  0.9 ms    [User: 24.2 ms, System: 8.4 ms]
  Range (min … max):    46.5 ms … 51.6 ms    59 runs

Benchmark 2: n exec 24.14.1 node nodeperf.mjs
  Time (mean ± σ):      61.0 ms ±  1.5 ms    [User: 27.1 ms, System: 9.4 ms]
  Range (min … max):    57.7 ms … 67.6 ms    46 runs

Summary
  n exec 24.14.0 node nodeperf.mjs ran
    1.26 ± 0.04 times faster than n exec 24.14.1 node nodeperf.mjs
```

Even here, 24.14.1 is already showing signs of a slowdown, though mild enough that I probably wouldn't have noticed it on its own.
```
> hyperfine --warmup 3 'n exec 25.6.0 node nodeperf.mjs' 'n exec 25.6.1 node nodeperf.mjs'
Benchmark 1: n exec 25.6.0 node nodeperf.mjs
  Time (mean ± σ):      49.1 ms ±  1.1 ms    [User: 26.1 ms, System: 8.5 ms]
  Range (min … max):    47.2 ms … 52.8 ms    54 runs

Benchmark 2: n exec 25.6.1 node nodeperf.mjs
  Time (mean ± σ):      62.9 ms ±  1.3 ms    [User: 27.1 ms, System: 9.6 ms]
  Range (min … max):    60.5 ms … 68.2 ms    46 runs

Summary
  n exec 25.6.0 node nodeperf.mjs ran
    1.28 ± 0.04 times faster than n exec 25.6.1 node nodeperf.mjs
```

The v25 versions show a near-identical result.
Next, run npm link to install the sample project as a global CLI command, nodeperftest, via a symlink, and benchmark it in exactly the same way:
```
> hyperfine --warmup 3 'n exec 24.14.0 nodeperftest' 'n exec 24.14.1 nodeperftest'
Benchmark 1: n exec 24.14.0 nodeperftest
  Time (mean ± σ):      50.2 ms ±  1.2 ms    [User: 24.9 ms, System: 8.9 ms]
  Range (min … max):    48.6 ms … 57.2 ms    55 runs

Benchmark 2: n exec 24.14.1 nodeperftest
  Time (mean ± σ):      97.8 ms ±  2.7 ms    [User: 29.1 ms, System: 10.3 ms]
  Range (min … max):    94.8 ms … 105.4 ms    27 runs

Summary
  n exec 24.14.0 nodeperftest ran
    1.95 ± 0.07 times faster than n exec 24.14.1 nodeperftest
```

```
> hyperfine --warmup 3 'n exec 25.6.0 nodeperftest' 'n exec 25.6.1 nodeperftest'
Benchmark 1: n exec 25.6.0 nodeperftest
  Time (mean ± σ):      51.2 ms ±  1.3 ms    [User: 26.9 ms, System: 8.9 ms]
  Range (min … max):    49.2 ms … 57.0 ms    55 runs

Benchmark 2: n exec 25.6.1 nodeperftest
  Time (mean ± σ):     100.0 ms ±  1.6 ms    [User: 29.5 ms, System: 10.2 ms]
  Range (min … max):    97.9 ms … 105.8 ms    27 runs

Summary
  n exec 25.6.0 nodeperftest ran
    1.95 ± 0.06 times faster than n exec 25.6.1 nodeperftest
```

...and now the .1 versions are even slower, by nearly 2x! Note that the absolute times for the .0 versions are effectively unchanged, so they suffer no slowdown regardless of execution method.
I have also profiled my full application, in which I first observed the issue after noticing it suddenly felt ~10x slower. Below are the profiling results:
v25.6.0
v25.6.1
With only six spawnSync calls across the project, what was once a ~78 ms total operation is now ~700 ms, a 9x slowdown!
Note: Profiles were taken from running my application via the npm link CLI symlink, using NODE_OPTIONS="--inspect --inspect-brk".
I've compared the changelogs of v24.14.1 and v25.6.1 to see if I could spot an obvious common change to blame, but nothing stood out to me.
How often does it reproduce? Is there a required condition?
Always.
- Minor slowdown with direct node file.js execution.
- Major slowdown with npm link'd symlink execution.
What is the expected behavior? Why is that the expected behavior?
No performance loss.
What do you see instead?
Severe performance loss.
Additional information
After further testing, I can report that the asynchronous spawn is also affected:
```
> hyperfine --warmup 3 'n exec 24.14.0 nodeperftest' 'n exec 24.14.1 nodeperftest' 'n exec 24.14.0 node nodeperf.mjs' 'n exec 24.14.1 node nodeperf.mjs'
Benchmark 1: n exec 24.14.0 nodeperftest
  Time (mean ± σ):      49.2 ms ±  1.1 ms    [User: 26.3 ms, System: 8.9 ms]
  Range (min … max):    47.7 ms …  54.1 ms    57 runs

Benchmark 2: n exec 24.14.1 nodeperftest
  Time (mean ± σ):      97.8 ms ±  2.9 ms    [User: 30.4 ms, System: 10.1 ms]
  Range (min … max):    95.5 ms … 110.5 ms    26 runs

Benchmark 3: n exec 24.14.0 node nodeperf.mjs
  Time (mean ± σ):      48.6 ms ±  2.1 ms    [User: 25.9 ms, System: 8.9 ms]
  Range (min … max):    46.2 ms …  56.9 ms    58 runs

Benchmark 4: n exec 24.14.1 node nodeperf.mjs
  Time (mean ± σ):      61.6 ms ±  2.2 ms    [User: 28.4 ms, System: 9.6 ms]
  Range (min … max):    59.2 ms …  73.5 ms    45 runs

Summary
  n exec 24.14.0 node nodeperf.mjs ran
    1.01 ± 0.05 times faster than n exec 24.14.0 nodeperftest
    1.27 ± 0.07 times faster than n exec 24.14.1 node nodeperf.mjs
    2.01 ± 0.11 times faster than n exec 24.14.1 nodeperftest
```

```
> hyperfine --warmup 3 'n exec 25.6.0 nodeperftest' 'n exec 25.6.1 nodeperftest' 'n exec 25.6.0 node nodeperf.mjs' 'n exec 25.6.1 node nodeperf.mjs'
Benchmark 1: n exec 25.6.0 nodeperftest
  Time (mean ± σ):      51.4 ms ±  1.1 ms    [User: 28.2 ms, System: 9.1 ms]
  Range (min … max):    49.3 ms …  54.5 ms    55 runs

Benchmark 2: n exec 25.6.1 nodeperftest
  Time (mean ± σ):     101.1 ms ±  1.8 ms    [User: 31.1 ms, System: 10.4 ms]
  Range (min … max):    98.2 ms … 105.3 ms    27 runs

Benchmark 3: n exec 25.6.0 node nodeperf.mjs
  Time (mean ± σ):      54.9 ms ±  4.1 ms    [User: 30.3 ms, System: 10.1 ms]
  Range (min … max):    48.8 ms …  67.9 ms    54 runs

Benchmark 4: n exec 25.6.1 node nodeperf.mjs
  Time (mean ± σ):      63.3 ms ±  1.3 ms    [User: 28.6 ms, System: 9.6 ms]
  Range (min … max):    61.7 ms …  67.6 ms    46 runs

Summary
  n exec 25.6.0 nodeperftest ran
    1.07 ± 0.08 times faster than n exec 25.6.0 node nodeperf.mjs
    1.23 ± 0.04 times faster than n exec 25.6.1 node nodeperf.mjs
    1.97 ± 0.06 times faster than n exec 25.6.1 nodeperftest
```

Updated script (same package.json):
```js
#!/usr/bin/env node
import chp from 'node:child_process';
const clang = chp.spawn('clang', ['--version']);
clang.stdout.setEncoding('utf8');
clang.stdout.on('data', (data) => console.log(data));
clang.on('error', (err) => console.error(err));
```

You can see spawn being affected in my application profiles as well: I realized the large (idle) gap right in the middle is the time spent waiting for the 10-20 parallel spawn() calls the program initiates. Notice the (idle) total time increasing from ~73 ms to nearly 400 ms.
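As a side note for anyone reproducing this: the spawnSync cost can also be timed in-process, separating it from the Node startup time that hyperfine's numbers include. A minimal sketch (node itself stands in as the child so the snippet runs anywhere; substitute clang or any other CLI to match the benchmarks above):

```javascript
// Time a single spawnSync call in-process, excluding Node startup cost.
// node itself is used as the child so the snippet is self-contained.
import { spawnSync } from 'node:child_process';

const start = process.hrtime.bigint();
const result = spawnSync(process.execPath, ['--version'], { encoding: 'utf8' });
const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
console.log(`child said ${result.stdout.trim()}, spawnSync took ${elapsedMs.toFixed(1)} ms`);
```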