## Summary

Reduces unnecessary `LockSupport.unpark()` calls in `ZScheduler` by gating wakeups behind a `runningWorkers` counter, addressing the bottleneck identified in #9878.

## Problem

`maybeUnparkWorker` is called on every fiber submission with no guard, invoking `LockSupport.unpark()` even when active workers could drain the queue themselves. This causes excessive context-switch churn on the hot path.

## Changes

### Added a `runningWorkers` atomic counter

Tracks how many workers are actively mid-execution: not parked, not searching. The counter is incremented when a worker picks up a runnable and decremented immediately after execution completes.
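A minimal sketch of the counter's lifecycle, assuming the increment/decrement placement described above. `runningWorkers` is the counter named in this PR; the worker class and `runOne` helper are illustrative stand-ins for the real `ZScheduler` worker loop, which also searches, parks, and handles supervision:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical, simplified worker: shows only where the counter moves,
// not the real ZScheduler run loop.
public class WorkerSketch {
    static final AtomicInteger runningWorkers = new AtomicInteger(0);

    static void runOne(Runnable task) {
        runningWorkers.getAndIncrement(); // worker is now mid-execution
        try {
            task.run();
        } finally {
            runningWorkers.getAndDecrement(); // decremented immediately after completion
        }
    }

    public static void main(String[] args) {
        runOne(() -> System.out.println("during task: " + runningWorkers.get())); // prints "during task: 1"
        System.out.println("after task: " + runningWorkers.get()); // prints "after task: 0"
    }
}
```

Using `finally` keeps the counter accurate even when a runnable throws, so a failed fiber cannot leave the scheduler believing a worker is still running.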

### Gated `maybeUnparkWorker` behind the counter

`LockSupport.unpark()` is now invoked only when the number of running workers is insufficient to cover the current queue depth. A hysteresis threshold of K=1 prevents rapid park/unpark oscillation at the boundary.
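A sketch of the gate's decision, assuming the "running workers versus queue depth" comparison described above. `K` matches the hysteresis threshold in this PR; `queueDepth` and the method name `shouldUnpark` are illustrative, not the scheduler's real API:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical gate: wake a parked worker only when queued work exceeds
// what the currently running workers, plus a hysteresis margin, can drain.
public class UnparkGate {
    static final int K = 1; // hysteresis: tolerate a deficit of up to K before unparking
    static final AtomicInteger runningWorkers = new AtomicInteger(0);

    /** True when a parked worker should be woken for the given queue depth. */
    static boolean shouldUnpark(int queueDepth) {
        return queueDepth > runningWorkers.get() + K;
    }

    public static void main(String[] args) {
        runningWorkers.set(2);
        System.out.println(shouldUnpark(2)); // prints "false": running workers cover the queue
        System.out.println(shouldUnpark(4)); // prints "true": deficit exceeds the K=1 margin
    }
}
```

The K=1 slack means a queue that is exactly one item ahead of the running workers does not trigger a wakeup, which is what damps the park/unpark oscillation at the boundary.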

## Benchmarks

| Benchmark   | With this change | Fixed thread pool |
| ----------- | ---------------- | ----------------- |
| PingPong    | 2061 ops/s       | 810 ops/s         |
| ForkMany    | 432 ops/s        | 375 ops/s         |
| ChainedFork | 4060 ops/s       | 3319 ops/s        |
| YieldMany   | 37.2 ops/s       | 30.5 ops/s        |

PingPong shows the most dramatic improvement, as it directly stresses the park/unpark cycle between two workers.

## Tradeoffs

The gate can delay waking idle workers when the queue estimate lags. In practice this resolves within microseconds, as running workers drain the queue and trigger unparks naturally.

## Testing

All blocking specs pass. Full scheduler test suite passes.

Fixes #9878

/claim #9878

https://www.loom.com/share/ca6a64b90ffc4f83a62edec2cb13f236

## Claim

- Total prize pool: $1,350
- Total paid: $0
- Status: Pending
- Submitted: March 22, 2026
- Last updated: March 22, 2026

### Contributors

- Nithinfgs (@Nithinfgs): 100%

### Sponsors

- ZIO (@ZIO): $850
- Abrailab (@CelebrityPunks): $500