13 changes: 6 additions & 7 deletions doc/api/stream.md
@@ -556,8 +556,8 @@ The `writable.uncork()` method flushes all data buffered since
[`stream.cork()`][] was called.

When using [`writable.cork()`][] and `writable.uncork()` to manage the buffering
-of writes to a stream, it is recommended that calls to `writable.uncork()` be
-deferred using `process.nextTick()`. Doing so allows batching of all
+of writes to a stream, defer calls to `writable.uncork()` using
+`process.nextTick()`. Doing so allows batching of all
`writable.write()` calls that occur within a given Node.js event loop phase.

```js
@@ -736,7 +736,7 @@ stop until the [`'drain'`][] event is emitted.
While a stream is not draining, calls to `write()` will buffer `chunk`, and
return false. Once all currently buffered chunks are drained (accepted for
delivery by the operating system), the `'drain'` event will be emitted.
-It is recommended that once `write()` returns false, no more chunks be written
+Once `write()` returns false, do not write more chunks
until the `'drain'` event is emitted. While calling `write()` on a stream that
is not draining is allowed, Node.js will buffer all written chunks until
maximum memory usage occurs, at which point it will abort unconditionally.
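A minimal sketch of honoring that signal, assuming a writable stream named `writable` and an iterable `chunks` of data to send (both names are illustrative):

```js
const { once } = require('node:events');

async function writeAll(writable, chunks) {
  for (const chunk of chunks) {
    // write() returns false when the internal buffer is full;
    // stop writing and wait for 'drain' before sending more.
    if (!writable.write(chunk)) {
      await once(writable, 'drain');
    }
  }
  writable.end();
}
```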
@@ -896,10 +896,9 @@ to consume data from a single stream. Specifically, using a combination
of `on('data')`, `on('readable')`, `pipe()`, or async iterators could
lead to unintuitive behavior.

-Use of the `readable.pipe()` method is recommended for most users as it has been
-implemented to provide the easiest way of consuming stream data. Developers that
-require more fine-grained control over the transfer and generation of data can
-use the [`EventEmitter`][] and `readable.on('readable')`/`readable.read()`
+`readable.pipe()` provides the easiest way to consume stream data. Developers
Member

This statement has not been true for years (at least since we had async iterator support).

I would remove these 3 lines altogether or maybe replace them with:

`readable.pipe()` is a low-level API to connect streams. It is preferred to use [pipeline][] or [compose][] to connect streams. For convenience, you can iterate streams with `for await...of` loops or call `.toArray()` on a stream, which is convenient but not as performant.

Or a better phrasing of that
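
A hedged sketch of the approach that suggestion points at, assuming a file is being copied (the file names are illustrative):

```js
const { pipeline } = require('node:stream/promises');
const fs = require('node:fs');

async function copy() {
  // pipeline() connects the streams and handles errors and cleanup.
  await pipeline(
    fs.createReadStream('source.txt'),
    fs.createWriteStream('destination.txt'),
  );

  // For convenience, a readable stream can also be iterated directly,
  // at some cost in throughput compared to piping.
  for await (const chunk of fs.createReadStream('source.txt')) {
    console.log(chunk.length);
  }
}

copy().catch(console.error);
```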

+that require more fine-grained control over the transfer and generation of data
+can use the [`EventEmitter`][] and `readable.on('readable')`/`readable.read()`
or the `readable.pause()`/`readable.resume()` APIs.
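
For example, a sketch of the `'readable'`/`read()` pattern, assuming a Readable stream named `readable` that yields buffers or strings:

```js
readable.on('readable', () => {
  // Pull chunks out of the internal buffer until it is empty.
  let chunk;
  while ((chunk = readable.read()) !== null) {
    console.log(`Received ${chunk.length} bytes of data.`);
  }
});
```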

#### Class: `stream.Readable`