
Async-streams: Consider optimizing handling of yield directly following yield #31248

@jcouv

Description

Reported by @stephentoub:

Right now, every time we yield an item, we incur an interlocked operation as part of SetResult on the ManualResetValueTaskSourceCore (MRVTSC). That's because, from the MRVTSC's perspective, it needs to be able to handle the case where someone might be concurrently calling OnCompleted, and so it can't avoid the interlocked when it sees that there's currently no delegate hooked up (in the corelib implementation, it does avoid the interlocked in the other case, when a delegate is already registered).
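To make the cost concrete, here is a simplified, hypothetical model of the race SetResult has to defend against. This is not the corelib source; `TinyValueTaskSource`, its sentinel, and all names are invented for illustration, but the CAS on the continuation slot is the same kind of interlocked operation the real MRVTSC pays per yield:

```csharp
using System;
using System.Threading;

// Simplified, illustrative model (not the actual corelib source) of why an
// MRVTSC-style completer pays an interlocked per yield: SetResult must race
// safely against a consumer concurrently installing a continuation.
class TinyValueTaskSource<T>
{
    private static readonly Action<object?> s_sentinel = _ => { };
    private Action<object?>? _continuation;
    private object? _state;
    public T? Result;

    // Producer side: publish the result, then atomically claim the
    // continuation slot. This CAS is the per-yield cost the issue targets.
    public void SetResult(T result)
    {
        Result = result;
        Interlocked.CompareExchange(ref _continuation, s_sentinel, null)?.Invoke(_state);
    }

    // Consumer side: try to install a continuation; if the producer already
    // completed (the slot holds the sentinel), run the continuation inline.
    public void OnCompleted(Action<object?> continuation, object? state)
    {
        _state = state;
        if (Interlocked.CompareExchange(ref _continuation, continuation, null) != null)
            continuation(state);
    }
}
```

Because the producer and consumer each CAS the same slot, completion and continuation registration compose correctly in either order; the price is one interlocked operation on every completion, even when no consumer could possibly be racing.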

However, the state machine could be taught to track whether we've yet awaited anything as part of the current MoveNextAsync call: if we haven't, then we're still inside the synchronous call to MoveNextAsync, we haven't handed out the awaiter yet, and so there's no way someone could concurrently be calling OnCompleted.

In such a case, we could avoid the interlocked. It might be worth thinking through what it would look like to make that happen, and whether we’d need to add anything to the MRVTSC to enable it (e.g. a DangerousSetResult that would trust the caller to say no interlocked is needed) or whether the state machine could bypass the MRVTSC completely in that case.
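A rough sketch of that shape, with everything self-contained so the idea is runnable: the state machine carries a bool recording whether the awaiter has been handed out during the current MoveNextAsync call, and picks the cheap completion path when it hasn't. `TinyPromise`, `DangerousSetResult`, and the other names here are illustrative stand-ins (the issue only floats an API of this kind), not a proposed implementation:

```csharp
using System;
using System.Threading;

// Illustrative stand-in for an MRVTSC-like promise with both completion paths.
class TinyPromise<T>
{
    private static readonly Action s_done = () => { };
    private Action? _continuation;
    public T? Result;

    public void SetResult(T result) // safe path: pays the interlocked
    {
        Result = result;
        Interlocked.CompareExchange(ref _continuation, s_done, null)?.Invoke();
    }

    // Hypothetical "dangerous" path: the caller guarantees no concurrent
    // OnCompleted is possible, so a plain write replaces the CAS. No
    // continuation can exist yet, so there is nothing to invoke.
    public void DangerousSetResult(T result)
    {
        Result = result;
        _continuation = s_done;
    }
}

// Stand-in for the compiler-generated async-iterator state machine.
class Iterator
{
    public readonly TinyPromise<bool> Promise = new TinyPromise<bool>();
    private bool _awaiterHandedOut; // the extra bool the state machine would maintain

    public void BeginMoveNextAsync() => _awaiterHandedOut = false;
    public void HandOutAwaiter() => _awaiterHandedOut = true;

    // Completing a yield: if we're still inside the synchronous call to
    // MoveNextAsync, nobody can be racing in OnCompleted, so skip the CAS.
    public void CompleteYield(bool hasMore)
    {
        if (!_awaiterHandedOut)
            Promise.DangerousSetResult(hasMore);
        else
            Promise.SetResult(hasMore);
    }
}
```

The same tracking bool also answers the bypass question: on the synchronous path, the state machine could just as well complete the ValueTask without going through the promise at all, since it already knows the result before returning from MoveNextAsync.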

If the state machine were tracking that information, it could also be used to defer the Reset on the MRVTSC until the first time we actually await something that's going to suspend, saving a few more cycles.

Update (1/9): From LDM discussion, we're supportive of this optimization. Adds a bool and some maintenance for that bool. No additional API needed.
