Summary Reduces unnecessary LockSupport.unpark() calls in ZScheduler by gating wakeups behind a runningWorkers counter, addressing the bottleneck identified in #9878.
Problem maybeUnparkWorker is called on every fiber submission with no guard, invoking LockSupport.unpark() even when active workers could drain the queue themselves. This causes excessive context-switch churn in the hot path. Changes
Added runningWorkers atomic counter
Tracks how many workers are actively mid-execution — not parked, not searching. Incremented when a worker picks up a runnable, decremented immediately after execution completes.
Gated maybeUnparkWorker behind the counter
Only calls LockSupport.unpark() when the number of running workers is insufficient to cover the current queue depth. A hysteresis threshold K=1 prevents rapid park/unpark oscillation at the boundary.
Benchmarks PingPong: 2061 ops/s vs 810 ops/s on fixed thread pool ForkMany: 432 ops/s vs 375 ops/s on fixed thread pool ChainedFork: 4060 ops/s vs 3319 ops/s on fixed thread pool YieldMany: 37.2 ops/s vs 30.5 ops/s on fixed thread pool PingPong shows the most dramatic improvement as it directly stresses the park/unpark cycle between two workers. T radeoffs The gate can delay waking idle workers when the queue estimate lags. In practice this resolves within microseconds as running workers drain the queue and trigger unparks naturally.
Testing All blocking specs pass. Full scheduler test suite passes.
Fixes #9878 /claim #9878
Nithinfgs
@Nithinfgs
ZIO
@ZIO
Abrailab
@CelebrityPunks