Recommended batch size used for breaking synchronous loops in asynchronous batches. When streaming values from a producer to a synchronous consumer, it's recommended to break the streaming into batches, so as not to hold the current thread or run-loop indefinitely.
This is rounded to the next power of 2, because the modulo operation can then be replaced by a cheap bitwise AND:
const modulus = recommendedBatchSize - 1
// ...
nr = (nr + 1) & modulus
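As a standalone sketch of why this works (the helper name below is illustrative, not part of the library), for any size that is a power of 2, `x & (size - 1)` equals `x % size`:

```typescript
// Round n up to the next power of 2 (illustrative helper).
function nextPowerOfTwo(n: number): number {
  let p = 1;
  while (p < n) p = p << 1;
  return p;
}

const recommendedBatchSize = nextPowerOfTwo(100); // 128
const modulus = recommendedBatchSize - 1;         // 127, i.e. binary 1111111

// Counting with the bitmask is equivalent to counting modulo the batch size:
let nr = 0;
for (let i = 0; i < 300; i++) {
  nr = (nr + 1) & modulus; // cheaper than (nr + 1) % recommendedBatchSize
}
console.log(nr === 300 % recommendedBatchSize); // true
```

This is why the recommended batch size is rounded up rather than used as-is: the mask trick only holds for powers of 2.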
The type of the execution model, which can be:

batched: the default, specifying a mixed execution mode under which tasks are executed synchronously in batches up to a maximum size; after a batch of recommendedBatchSize is executed, the next execution should be asynchronous.

synchronous: specifies that execution should be synchronous (immediate / trampolined) for as long as possible.

alwaysAsync: specifies that a run-loop should always do async execution of tasks, triggering asynchronous boundaries on each step.

The default ExecutionModel that should be used whenever an execution model isn't explicitly specified.
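Under the assumption of a TypeScript API, the three modes can be sketched as a discriminated value (names below are illustrative; the library's actual class may differ):

```typescript
// Illustrative sketch of an execution model value, not the library's API.
type ExecutionModelType = "batched" | "synchronous" | "alwaysAsync";

class ExecutionModelSketch {
  constructor(
    public readonly type: ExecutionModelType,
    public readonly recommendedBatchSize: number
  ) {}

  // "batched" is the default whenever a model isn't explicitly specified.
  static default(): ExecutionModelSketch {
    return new ExecutionModelSketch("batched", 128);
  }
}

const model = ExecutionModelSketch.default();
console.log(model.type); // "batched"
```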
Implements IEquals.equals.
Implements IEquals.hashCode.
An ExecutionModel that specifies a run-loop should always do async execution of tasks, thus triggering asynchronous boundaries on each step.
Returns an ExecutionModel that specifies a mixed execution mode under which tasks are executed synchronously in batches up to a maximum size, the recommendedBatchSize.
After such a batch of recommendedBatchSize is executed, the next execution should have a forced asynchronous boundary.
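A minimal sketch of such a batched run-loop, assuming hypothetical names (this is not the library's implementation): tasks run synchronously within a batch, and once the batch counter wraps around, the loop forces an asynchronous boundary by yielding to the event loop.

```typescript
// Sketch of a batched run-loop. All names are illustrative.
async function runBatched(
  tasks: Array<() => void>,
  recommendedBatchSize: number // assumed to be a power of 2
): Promise<void> {
  const modulus = recommendedBatchSize - 1;
  let nr = 0;
  for (const task of tasks) {
    task(); // synchronous execution within the current batch
    nr = (nr + 1) & modulus;
    if (nr === 0) {
      // Batch exhausted: force an asynchronous boundary before continuing.
      await new Promise<void>(resolve => setTimeout(resolve, 0));
    }
  }
}
```

With a batch size of 128, this yields back to the event loop once every 128 tasks, instead of either never yielding (pure trampolining) or yielding on every single step (alwaysAsync).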
An ExecutionModel that specifies that execution should be synchronous (immediate, trampolined) for as long as possible.
The ExecutionModel is a specification for how potentially asynchronous run-loops should execute, imposed by the Scheduler.

When executing tasks, a run-loop can always execute tasks asynchronously (by forking logical threads), or it can always execute them synchronously (same thread and call-stack, by using an internal trampoline), or it can do a mixed mode that executes tasks in batches before forking.
The specification is considered a recommendation for how run-loops should behave, but ultimately it's up to the client to choose the best execution model. This can matter for recursive loops or for events pushed into consumers.