I'd like to use this meta-issue to track any ideas for a version of writable streams that has some sort of buffer-reuse semantics, like our BYOB readable streams.
#465 has some previous discussion.
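For context, here is roughly what the buffer-reuse pattern already looks like on the read side with a BYOB reader (a minimal sketch, not from the original issue; `doStuffWith` is a placeholder):

```js
// The consumer supplies the buffer. Each read() transfers it, and the
// resulting view (value) hands the buffer back so it can be reused.
async function readIntoReusedBuffer(readableStream) {
  const reader = readableStream.getReader({ mode: "byob" });
  let buffer = new ArrayBuffer(1024);

  while (true) {
    const { value, done } = await reader.read(new Uint8Array(buffer));
    if (done) {
      break;
    }
    doStuffWith(value);     // value is a view onto (the transferred) buffer
    buffer = value.buffer;  // take the buffer back for the next iteration
  }
}
```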
One thing I noticed while writing an example is that it's possible for an underlying sink to always transfer away a buffer. This is probably what most high-performance binary data underlying sinks will do, actually. (It's either transfer, copy, or expose raciness to the web.) But this has two drawbacks:
- The underlying sink doesn't see the buffer until the other buffers in the queue ahead of it have been consumed, so it doesn't get detached immediately. This means there is a bit of an async race (not a multithreaded one) between any code that manipulates the buffer and the underlying sink.
- The underlying sink has no opportunity to give the buffer back to the producer if they want to reuse it.
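For concreteness, such a transferring underlying sink might look roughly like the following sketch (the worker and its script name are hypothetical, not part of the original example):

```js
// Sketch of an underlying sink that transfers every chunk's buffer away,
// here by posting it to a (hypothetical) worker. Including chunk.buffer in
// the transfer list detaches the buffer in this realm.
const worker = new Worker("byte-consumer.js"); // hypothetical consumer

const writableStream = new WritableStream({
  write(chunk) {
    if (!(chunk instanceof Uint8Array)) {
      throw new TypeError("This sink only accepts Uint8Array chunks");
    }
    worker.postMessage(chunk, [chunk.buffer]);
  }
});
```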
Here is a real example of both drawbacks. Assume `writableStream` is a stream whose underlying sink transfers any chunks given to it (and errors if not given `Uint8Array`s). Then consider this code:
```js
async function writeRandomBytesForever(writableStream) {
  const writer = writableStream.getWriter();

  while (true) {
    // Sometimes wait for the queue to drain before writing, sometimes don't,
    // so the chunk may or may not be consumed (and transferred) right away.
    if (Math.random() > 0.5) {
      await writer.ready;
    }

    const bytes = new Uint8Array(1024);
    window.crypto.getRandomValues(bytes);

    const promise = writer.write(bytes);
    doStuffWith(bytes); // may or may not see an already-detached buffer
    await promise;
  }
}
```
Here the two problems are illustrated:

- `doStuffWith(bytes)` may or may not be able to modify `bytes`. (If we did `await writer.ready`, the queue will possibly be empty, in which case you cannot modify it, since the underlying sink has already transferred it. But if we did not, then you can.)
- We have to allocate a new typed array every time through the loop; we cannot reuse the same one.
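
To make the first point concrete: a detached `Uint8Array` reports a `byteLength` of 0, so the race can be observed directly (a sketch only, assuming a transferring sink like the one above):

```js
const writer = writableStream.getWriter();

const bytes = new Uint8Array(1024);
window.crypto.getRandomValues(bytes);

writer.write(bytes);

if (bytes.byteLength === 0) {
  // The queue was empty, so the sink's write() has already run and
  // transferred the buffer; it is too late to touch bytes.
  console.log("buffer already detached");
} else {
  // The chunk is still sitting in the queue; bytes can still be modified,
  // which is exactly the async race described above.
  console.log("buffer still attached (for now)");
}
```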