
Mirror of upstream PR #34065 #252


Open · wants to merge 315 commits into main

Conversation

@kushxg (Owner) commented Aug 1, 2025

Mirrored from facebook/react PR facebook#34065

eps1lon and others added 30 commits June 3, 2025 21:10
Stacked on facebook#33392.

This adds another track to the Performance Track called `"Server
Requests"`.

<img width="1015" alt="Screenshot 2025-06-01 at 12 02 14 AM"
src="https://github.com/user-attachments/assets/c4d164c4-cfdf-4e14-9a87-3f011f65fd20"
/>

This logs the flat list of I/O awaited on by Server Components. There
will be other views that focus more on what data blocks a specific
Component or Suspense boundary, but this is just the list of all the
I/O, so you can get an overview of those waterfalls without the noise of
all the Component trees and rendering. It's similar to the "Network"
track on the client.

I've been going back and forth on what to call this track but I went
with `"Server Requests"` for now. The idea is that the name should
communicate that this is something that happens on the server and is a
pairing with the `"Server Components"` track. It's also similar to
"Server Timings", although we don't use that feature since it's missing
granularity.
Stacked on facebook#33394.

This lets us create async stack traces to the owner that was in context
when the I/O was started or awaited.

<img width="615" alt="Screenshot 2025-06-01 at 12 31 52 AM"
src="https://github.com/user-attachments/assets/6ff5a146-33d6-4a4b-84af-1b57e73047d4"
/>

This owner might not be the immediate closest parent where the I/O was
awaited.
Stacked on facebook#33395.

This lets us keep track of which environment this was fetched and
awaited in.

Currently the I/O and the await are in the same environment; the
environment is just kept when forwarded. Once we support forwarding
information from a Promise fetched from another environment and awaited
in this environment, the await can end up being in a different
environment.

There's a question of whether, when the await is inside Flight itself
(such as when you return a promise fetched from another environment),
that should mean the await is in the current environment. I don't think
so, since the original stack trace is the best stack trace. It's only if
you `await` it in user space in this environment first that this might
happen, and even then it should only be considered if there wasn't a
better await earlier or if reading from the other environment was itself
I/O.

The timing of *when* we read `environmentName()` is a little interesting
here too.
…facebook#33402)

Stacked on facebook#33400. 

<img width="1261" alt="Screenshot 2025-06-01 at 10 27 47 PM"
src="https://github.com/user-attachments/assets/a5a73ee2-49e0-4851-84ac-e0df6032efb5"
/>

This is emitted with the start/end time and stack of the "await", which
may be different from the thing that started the I/O.

These awaits aren't quite as simple as just every await: since you can
start a sequence in parallel, there can actually be multiple overlapping
awaits, and there can be CPU work interleaved with the await on the same
component.

```js
async function getData() {
  await fetch(...);
  await fetch(...);
}
const promise = getData();
doWork();
await promise;
```

This has two "I/O" awaits but those are actually happening in parallel
with `doWork()`.

Since these also could have started before we started rendering this
sequence (e.g. a component) we have to clamp it so that we don't
consider awaits that start before the component.

What we're conceptually trying to convey is the time this component was
blocked due to that I/O resource, whether it's blocked from completing
the last result or from issuing a waterfall request.
…book#33403)

Stacked on facebook#33402.

There's a bug in Chrome Performance tracking which uses the enclosing
line/column instead of the callsite in stacks.

For our fake eval'd functions that represent functions on the server,
we can position the enclosing function body at the position of the
callsite to simulate getting the right line.

Unfortunately, that doesn't give us exactly the right callsite when it's
used for other purposes that use the callsite, like console logs, error
reporting, and stacks inside breakpoints. So I don't think we want to
always do this.

For ReactAsyncInfo/ReactIOInfo, the only thing we're going to use the
fake task for is the Performance tracking, so it doesn't have any
downsides until Chrome fixes the bug and we'd have to revert it.
Therefore this PR uses that technique only for those entries.

We could do this for Server Components too but we're going to use those
for other things too like console logs. I don't think it's worth
duplicating the Task objects. That would also make it inconsistent with
Client Components.

For Client Components, we could in theory also generate fake evals but
that would be way slower since there are so many of them, and currently
we rely on the native implementation for those. So it doesn't seem worth
fixing.

But since we can at least fix it for RSC I/O/awaits we can do this hack.
…ook#33424)

We want to change the defaults for `revealOrder` and `tail` on
SuspenseList. This is an intermediate step to allow experimental users
to upgrade.

To explicitly specify these options I added `revealOrder="independent"`
and `tail="visible"`.

I then added warnings if `undefined` or `null` is passed. You must now
always explicitly specify them. However, semantics are still preserved
for now until the next step.

We also want to change the rendering order of the `children` prop for
`revealOrder="backwards"`. As an intermediate step I first added
`revealOrder="unstable_legacy-backwards"` option. This will only be
temporary until all users can switch to the new `"backwards"` semantics
once we flip it in the next step.

I also clarified in the types that the directional props require
iterable children but not iterables inside of those. Rows with multiple
items can be modeled as explicit fragments.
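
For illustration, a hedged sketch of opting into the explicit values
described above (the `unstable_SuspenseList` export name reflects the
experimental channel and may differ by build; `Row`, `Spinner`, and
`items` are made-up placeholders):

```js
import {Suspense, unstable_SuspenseList as SuspenseList} from 'react';

function List({items}) {
  // Explicitly specify the options instead of relying on undefined/null,
  // which now warns.
  return (
    <SuspenseList revealOrder="independent" tail="visible">
      {items.map(item => (
        <Suspense key={item.id} fallback={<Spinner />}>
          <Row item={item} />
        </Suspense>
      ))}
    </SuspenseList>
  );
}
```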
…ok#33415)

Stacked on facebook#33403.

When a Promise is coming from React such as when it's passed from
another environment, we should forward the debug information from that
environment. We already do that when rendered as a child.

This makes it possible to also `await promise` and have the information
from that instrumented promise carry through to the next render.

This is a bit tricky because the current protocol is that we have to
read it from the Promise after it resolves so it has time to be assigned
to the promise. `async_hooks` doesn't pass us the instance (even though
it has it) when it gets resolved so we need to keep it around. However,
we have to be very careful because if we get this wrong it'll cause a
memory leak since we retain things by `asyncId` and then manually listen
for `destroy()` which can only be called once a Promise is GC'd, which
it can't be if we retain it. We have to therefore use a `WeakRef` in
case it never resolves, and then read the `_debugInfo` when it resolves.
We could maybe install a setter or something instead but that's also
heavy.

The other issue is that we don't use native Promises in
ReactFlightClient, so our instrumented promises aren't picked up by the
`async_hooks` implementation and we never get a handle to our thenable
instance. To solve this we can create a native wrapper only in DEV.
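
A rough sketch of the retention pattern described above, not React's
actual implementation; `recordDebugInfo` is a made-up placeholder and
what `init` receives for promises varies by Node.js version:

```js
const async_hooks = require('async_hooks');

// Retain resolved promises only weakly, keyed by asyncId, so that destroy()
// can still fire (it only fires once a Promise is GC'd) and we don't leak
// promises that never resolve.
const trackedPromises = new Map(); // asyncId -> WeakRef<Promise>

async_hooks
  .createHook({
    init(asyncId, type, triggerAsyncId, resource) {
      if (type === 'PROMISE') {
        // Depending on the Node.js version, `resource` is either the promise
        // itself or a wrapper exposing it as `resource.promise`.
        const promise = resource && resource.promise ? resource.promise : resource;
        trackedPromises.set(asyncId, new WeakRef(promise));
      }
    },
    promiseResolve(asyncId) {
      // async_hooks doesn't pass the instance here, so deref the WeakRef and
      // read the _debugInfo now that it has had time to be assigned.
      const ref = trackedPromises.get(asyncId);
      const promise = ref && ref.deref();
      if (promise && promise._debugInfo) {
        recordDebugInfo(promise._debugInfo); // placeholder for forwarding it
      }
    },
    destroy(asyncId) {
      trackedPromises.delete(asyncId);
    },
  })
  .enable();
```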
We highly recommend using Node Streams in Node.js because it's much
faster and less likely to cause issues when chained in things like
compression algorithms that need explicit flushing, which the Web Streams
ecosystem doesn't have a good solution for. That said, people want to be
able to use the worse option for various reasons.

The `.edge` builds aren't technically intended for Node.js. A Node.js
environment needs to be patched in various ways to support it. It's
also less optimal since it can't use [Node.js exclusive
features](facebook#33388) and has to use
[the lowest common
denominator](facebook#27399) such as JS
implementations instead of native.

This adds a Web Streams build of Fizz but exclusively for Node.js so
that in it we can rely on Node.js modules. The main difference compared
to Edge is that SSR now uses `createHash` from the `"crypto"` module and
imports `TextEncoder` from `"util"`. We use `setImmediate` instead of
`setTimeout`.

The public API is just `react-dom/server` which in Node.js automatically
imports `react-dom/server.node` which re-exports the legacy bundle, Node
Streams bundle and Node Web Streams bundle. The main downside is if your
bundler isn't smart enough to DCE this barrel file.

With Flight the difference is larger but that's a bigger lift.
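
For context, a hedged usage sketch of the Web Streams API that
`react-dom/server` resolves to in Node.js (`App` and the
`Response`-based handler are placeholders; a framework may instead pipe
the stream into its own response object):

```js
import {renderToReadableStream} from 'react-dom/server';
import App from './App'; // placeholder component

export async function handleRequest(request) {
  // Returns a Web ReadableStream, now backed by Node.js modules internally.
  const stream = await renderToReadableStream(<App />, {
    onError(error) {
      console.error(error);
    },
  });
  return new Response(stream, {
    headers: {'Content-Type': 'text/html'},
  });
}
```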
…book#33443)

This should allow us to visualize what
facebook#33438 is trying to convey.

An uncached 3rd-party component is displayed like this in the dev tools:

<img width="1072" alt="Screenshot 2025-06-05 at 12 57 32"
src="https://github.com/user-attachments/assets/d418ae23-d113-4dc9-98b8-ab426710454a"
/>

However, when the component is restored from a cache, it looks like
this:

<img width="1072" alt="Screenshot 2025-06-05 at 12 56 56"
src="https://github.com/user-attachments/assets/a0e34379-d8c0-4b14-8b54-b5c06211232b"
/>

The `Server Components ⚛` track is missing completely here, and the
`Loading profile...` phase also took way longer than without caching the
3rd-party component.

On `main`, the `Server Components ⚛` track is not missing:

<img width="1072" alt="Screenshot 2025-06-05 at 14 31 20"
src="https://github.com/user-attachments/assets/c35e405d-27ca-4b04-a34c-03bd959a7687"
/>

The cached 3rd-party component starts before the current render, and is
also not excluded here, which is of course expected without facebook#33438.
… for Webpack (facebook#33442)

Like facebook#33441 but for Flight.

This is just one of the many combinations needed. I'm just starting with
one.
When I added the `ready_for_review` event in facebook#32344, no notifications
for opened draft PRs were sent due to some other condition. This is not
the case anymore, so we need to exclude draft PRs from triggering a
notification when the workflow is run because of an `opened` event. This
event is still needed because the `ready_for_review` event only fires
when an existing draft PR is converted to a non-draft state. It does not
trigger for pull requests that are opened directly as ready-for-review.
…ook#33447)

## Summary

Enables the `enableEagerAlternateStateNodeCleanup` feature flag for all
variants, while maintaining the `__VARIANT__` for the internal React
Native flavor for backtesting reasons.

## How did you test this change?

```
$ yarn test
```
This shouldn't be needed now that the lint rule was moved.
Block the view transition on suspensey images, up to 500ms, just like on
the client.

We can't use `decode()` because of a bug in Chrome where those are
blocked on `startViewTransition` finishing. Instead we rely on sync
decoding, and on the image being live when it's animating in, and we
assume it doesn't start visible.

However, we can block the View Transition from starting on the `"load"`
or `"error"` events.

The nice thing about blocking inside `startViewTransition` is that we
have already done the layout so we can only wait on images that are
within the viewport at this point. We might want to do that in Fiber
too. If many images don't have a fixed size but need to load first, they
can all end up in the viewport. We might consider only doing this for
images that have a fixed size, or only for a max number that don't have
a fixed size.
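
A minimal sketch of the general idea (not React's implementation):
inside `startViewTransition`, wait on `"load"`/`"error"` for in-viewport
images, capped at 500ms; `applyDomUpdate` is a placeholder.

```js
// Sketch only: block the transition on in-viewport images, capped at 500ms,
// using the "load"/"error" events rather than decode().
function waitForViewportImages(timeoutMs = 500) {
  const inViewport = Array.from(document.images).filter(img => {
    const rect = img.getBoundingClientRect();
    return rect.bottom > 0 && rect.top < window.innerHeight;
  });
  const loads = inViewport.map(
    img =>
      new Promise(resolve => {
        if (img.complete) return resolve();
        img.addEventListener('load', resolve, {once: true});
        img.addEventListener('error', resolve, {once: true});
      })
  );
  return Promise.race([
    Promise.all(loads),
    new Promise(resolve => setTimeout(resolve, timeoutMs)),
  ]);
}

document.startViewTransition(async () => {
  applyDomUpdate(); // placeholder for the actual DOM mutation
  // Layout has already been applied, so only in-viewport images are awaited.
  await waitForViewportImages();
});
```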
…undaries (facebook#33454)

We want to make sure that we can block the reveal of a well designed
complete shell reliably. In the Suspense model, client transitions don't
have any way to implicitly resolve. This means you need to use Suspense
or SuspenseList to explicitly split the document. Relying on the
implicit behavior would mean you can't add a Suspense boundary later
where needed, so we highly encourage the use of them around large
content.

However, if you have constructed a too large shell (e.g. by not adding
any Suspense boundaries at all) then that might take too long to render
on the client. We shouldn't punish users (or overzealous metrics
tracking tools like search engines) in that scenario.

This opts out of render blocking if the shell ends up too large to be
intentional and too slow to load. Instead it deopts to showing the
content split up in arbitrary ways (browser default). It only does this
for SSR, and not client navs so it's not reliable.

In fact, we issue an error to `onError`. This error is recoverable in
that the document is still produced. It's up to your framework to decide
if this errors the build or just surface it for action later.

What should be the limit though? There's a trade off here. If this limit
is too low then you can't fit a reasonably well built UI within it
without getting errors. If it's too high then things that accidentally
fall below it might take too long to load.

I came up with 512kB of uncompressed shell HTML. See the comment in code
for the rationale for this number. TL;DR: Data and theory indicate that
having this much content inside `rel="expect"` doesn't meaningfully
change metrics. Research of above-the-fold content on various websites
indicates that this can comfortably fit all of them, which should be
enough for any intentional initial paint.
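
For reference, a hedged sketch of how a framework could surface this
recoverable error through `onError` (the `response` stream, `App`, and
`reportShellTooLarge` are placeholders):

```js
import {renderToPipeableStream} from 'react-dom/server';

const {pipe} = renderToPipeableStream(<App />, {
  onError(error) {
    // The document is still produced; the framework decides whether this
    // fails the build or just gets surfaced for action later.
    reportShellTooLarge(error); // placeholder reporting helper
  },
  onShellReady() {
    pipe(response); // placeholder Node.js writable response
  },
});
```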
…ebook#33456)

Follow up to facebook#33442. This is the bundled version.

To keep type check passes from exploding and the maintenance of the
annoying `paths: []` list small, this doesn't add this to flow type
checks. We might miss some config but every combination should already
be covered by other passes.

I also don't add any jest tests because testing these double export
entry points requires conditional importing to cover builds and
non-builds, which turns out to be difficult for the Flight builds, so
these aren't covered by any basic build tests.

This approach is what I'm going for, for the other bundlers too.
Adding throttling or delays on images can obviously impact metrics.
However, it's all in the name of a better actual user experience overall.
(Note that it's not strictly worse even for metrics. Often it's actually
strictly better due to less work being done overall thanks to batching.)

Metrics can impact things like search ranking but I believe this is on a
curve. If you're already pretty good, then a slight delay won't suddenly
make you rank in a completely different category. Similarly, if you're
already pretty bad then a slight delay won't make it suddenly way worse.
It's still in the same realm. It's just one weight of many. I don't
think this will make a meaningful practical impact and if it does,
that's probably a bug in the weights that will get fixed.

However, because there's a race to try to "make everything green" in
terms of web vitals, if you go from green to yellow only because of some
throttling or suspensey images, it can feel bad. Therefore this
implements a heuristic where if the only reason we'd miss a specific
target is because of throttling or suspensey images, then we shorten the
timeout to hit the metric. This is a worse user experience because it
can lead to extra flashing but feeling good about "green" matters too.

If you then have another reveal that happens to be the largest
contentful paint after that, then that's throttled again so that it
doesn't become flashy after that. If you've already missed the deadline
then you're not going to hit your metric target anyway. It can affect
average but not median.

This is mainly about LCP. It doesn't affect FCP since that doesn't have
a throttle. If your LCP is the same as your FCP then it also doesn't
matter.

We assume that `performance.now()`'s zero point starts at the "start of
the navigation" which makes this simple. Even if we used the
`PerformanceNavigationTiming` API it would just tell us the same thing.

This is only implemented for Fizz since these metrics currently tend to
only be tracked for initial loads, but with soft nav tracking we could
consider implementing the same for Fiber throttles.
When deeply nested Suspense boundaries inside a fallback of another
boundary resolve, it is possible to encounter situations where you either
attempt to flush an aborted Segment or you have a boundary without any
root segment. We intended for both of these conditions to be impossible
to arrive at legitimately; however, it turns out in this situation you
can. The fix is two-fold:

1. Allow flushing aborted segments by simply skipping them. This does
remove some protection against future misconfiguration of React because
it is no longer an invariant that you should never attempt to flush an
aborted segment, but there are legitimate cases where this can come up
and simply omitting the segment is fine because we know that the user
will never observe this. A semantically better solution would be to avoid
flushing boundaries inside an unneeded fallback, but to do this we would
need to track all boundaries inside a fallback or create back pointers,
which add to memory overhead and possibly make GC harder to do
efficiently. By flushing extra we're maintaining the status quo and only
suffer in performance, not with broken semantics.

2. When queuing completed segments, allow for queueing aborted segments,
and if we are eliding the enqueued segment, allow for child segments that
are errored to be enqueued too. This means that we can maintain the
invariant that a boundary must have a root segment the first time we
flush it; it just might be aborted (see point 1 above).

This change has two seemingly similar test cases to exercise this fix.
The reason we need both is that when you have empty segments you hit
different code paths within Fizz and so each one (without this fix)
triggers a different error pathway.

This change also includes a fix to our tests where we were not
appropriately setting CSPnonce back to null at the start of each test,
so in some contexts scripts would not run for some tests.
Reverts facebook#33457, facebook#33456 and facebook#33442.

There are too many issues with wrappers, lazy init, stateful modules,
duplicate instantiation of async_hooks and duplication of code.

Instead, we'll just do a wrapper polyfill that uses Node Streams
internally.

I kept the client indirection files that I added for consistency with
the server though.
…k#33473)

This effectively lets us consume Web Streams in a Node build. In fact
the Node entry point is now just adding Node stream APIs.

For the client, this is simple because the configs are not actually
stream type specific. The server is a little trickier.
New take on facebook#33441.

This uses a wrapper instead of a separate bundle.
…k#33474)

This needs some tweaks to the implementation and a conversion but simple
enough.

---------

Co-authored-by: Hendrik Liebau <[email protected]>
…facebook#33478)

I noticed that the ThirdPartyComponent in the fixture was showing the
wrong stack and the `"use third-party"` badge is in the wrong location.

<img width="628" alt="Screenshot 2025-06-06 at 11 22 11 PM"
src="https://github.com/user-attachments/assets/f0013380-d79e-4765-b371-87fd61b3056b"
/>

When creating the initial JSX inside the third party server, we should
make sure that it has no owner. In a real cross-server environment you
get this by default by just executing in a different context. But since
the fixture example is inside the same AsyncLocalStorage as the parent,
it already has an owner which gets transferred. So we should make sure
that where we create the JSX there is no owner, to simulate this.

When we then parse a null owner on the receiving side, we replace its
owner/stack with the owner/stack of the call to `createFrom...` to
connect them. This worked fine with only two environments. The bug was
that when we did this and then transferred the result to a third
environment we took the original parsed stack trace. We should instead
parse a new one from the replaced stack in the current environment.

The second bug was that the `"use third-party"` badge ends up in the
wrong place when we do this kind of thing, because the stack of the
thing entering the new environment is the call to `createFrom...`, which
is in the old environment even though the component itself executes in
the new environment. So to see if there's a change we should be
comparing the current environment of the task to the owner's environment
instead of the next environment after the task.

After:

<img width="494" alt="Screenshot 2025-06-07 at 1 13 28 AM"
src="https://github.com/user-attachments/assets/e2e870ba-f125-4526-a853-bd29f164cf09"
/>
Technically the async call graph spans basically all the way back to the
start of the app potentially, but we don't want to include everything.
Similarly we don't want to include everything from previous components
in every child component. So we need some heuristics for filtering out
data.

What we roughly want to be able to inspect is what might contribute to a
Suspense loading sequence, even if it didn't this time, e.g. due to a
race condition.

One flaw with the previous approach was that awaiting a cached promise
in a sibling that happened to finish after another sibling would be
excluded. However, in a different race condition that might end up being
used so I wanted to include an empty "await" in that scenario to have
some association from that component.

However, for data that resolved fully before the request even started,
it's a little different. This can be things that are part of the start
up sequence of the app or externally cached data. We decided that this
should be excluded because it doesn't contribute to the loading sequence
in the expected scenario. I.e. if it's cached. Things that end up being
cache misses would still be included. If you want to test externally
cached data misses, then it's up to you or the framework to simulate
those, e.g. by dropping the cache. This also helps cut down on noise
since static / cached data can be excluded in visualizations.

I also apply this principle to forwarding debug info. If you reuse a
cached RSC payload, then the Server Component render time and its awaits
get clamped to the caller as if it has zero render/await time. The I/O
entry is still backdated, but if it was fully resolved before we started
then it's completely excluded.
sebmarkbage and others added 30 commits July 28, 2025 12:22
…lative to others (facebook#34016)

Stacked on facebook#34012.

This shows a time track for when some I/O started and when it finished
relative to other I/O in the same component (or later in the same
suspense boundary).

This is not meant to be a precise visualization since the data might be
misleading if you're running this in dev which has other perf
characteristics anyway. It's just meant to be a general way to orient
yourself in the data.

We can also highlight rejected promises here.

The color scheme is the same as Chrome's current Performance Track
colors to add continuity but those could change.

<img width="478" height="480" alt="Screenshot 2025-07-27 at 11 48 03 PM"
src="https://github.com/user-attachments/assets/545dd591-a91f-4c47-be96-41d80f09a94a"
/>
See
facebook#34021 (comment).

The purpose of the changelog is to communicate to React users what
changed in the release.

Therefore, it is important that the changelog is written with React end
users in mind. Historically this means that we omit
internal-only changes, i.e. changes that have no effect on the end user
behavior. If internal changes are mentioned in the changelog (e.g. if
they affect end user behavior), they should be phrased in a way that is
understandable to the end user — in particular, they should not refer to
internal API names or concepts.

We also try to group changes according to the publicly known packages.

In this PR:

- Make facebook#33680 an actual link (otherwise it isn't linkified in
CHANGELOG.md on GitHub).
- Remove two changelog entries listed under "React" that don't affect
anyone who upgrades the "React" package, that are phrased using
terminology and internal function names unfamiliar to React users, and
that seem to be RN-specific changes (so should probably go into the RN
changelog that goes out with the next renderer sync that includes these
changes).
…acebook#33989)

Adds a new property to ReturnTerminals to disambiguate whether it was
explicit, implicit (arrow function expressions), or void (where it was
omitted). I will use this property in the next PR adding a new
validation pass.

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/33989).
* facebook#34022
* facebook#34002
* facebook#34001
* facebook#33990
* __->__ facebook#33989
Adds a new validation pass to validate against `useMemo`s that don't
return anything. This usually indicates some kind of "useEffect"-like
code that has side effects that need to be memoized to prevent
overfiring, and is an anti-pattern.

A follow up validation could also look at the return value of `useMemo`s
to see if they are being used.
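
For illustration, the kind of pattern this validation targets (the
component and `trackImpression` helper are made up):

```js
import {useMemo} from 'react';

function ProductList({items}) {
  // ⚠️ Flagged: this useMemo returns nothing and exists only for its side
  // effect, which is "useEffect"-like code rather than memoization.
  useMemo(() => {
    items.forEach(item => trackImpression(item)); // placeholder side effect
  }, [items]);

  return items.map(item => <li key={item.id}>{item.name}</li>);
}
```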

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/33990).
* facebook#34022
* facebook#34002
* facebook#34001
* __->__ facebook#33990
* facebook#33989
…ebook#34001)

Much of the logic in the new validation pass is already implemented in
DropManualMemoization, so let's combine them. I opted to keep the
environment flag so we can more precisely control the rollout.

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/34001).
* facebook#34022
* facebook#34002
* __->__ facebook#34001
…acebook#34002)

I noticed from my previous PR that this pass was throwing on the
first error. This PR is a small refactor to aggregate every violation
and report them all at once.

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/34002).
* facebook#34022
* __->__ facebook#34002
…ook#34022)

Enables `validateNoVoidUseMemo` by default in eslint (it defaults to
false otherwise) as well as in the playground.
…#34017)

Stacked on facebook#34016.

This is using the same thing we already do for the performance track to
provide a description of the I/O based on the content of the resolved
Promise. E.g. a Response's URL.

<img width="375" height="388" alt="Screenshot 2025-07-28 at 1 09 49 AM"
src="https://github.com/user-attachments/assets/f3fdc40f-4e21-4e83-b49e-21c7ec975137"
/>
While we want to get rid of React.lazy's special wrapper type and just
use a Promise for the type, we still have the wrapper.

However, this is still conceptually the same as a Usable in that it
should behave the same whether you `use(promise)` or render a Promise in
a child or type position.

This PR makes it behave like a `use()` when we unwrap them. We could
move to a model where it actually reaches into the internals of the
Lazy's Promise when it unwraps, but for now I leave the lazy API
signature intact by just catching the Promise and then "use()"-ing that.

This lets us align with the semantics of `use()`, such as the suspense
yield optimization. It also lets us warn or fork based on legacy
throw-a-Promise behavior, whereas `React.lazy` is not deprecated.
Follow up to facebook#34032. The linter now ensures that `use` cannot be used
within try/catch.
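
For illustration, the pattern the lint rule now rejects (made-up
component):

```js
import {use} from 'react';

function Comments({commentsPromise}) {
  try {
    // ⚠️ Flagged: `use` may suspend and cannot be called inside try/catch.
    const comments = use(commentsPromise);
    return comments.map(c => <p key={c.id}>{c.text}</p>);
  } catch {
    return <p>Failed to load comments.</p>;
  }
}
```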
We added the `@enableTreatRefLikeIdentifiersAsRefs` feature a while back
but never enabled it. Since then we've continued to see examples that
motivate this mode, so here we're fixing it up to prepare to enable by
default. It now works as follows:

* If we find a property load or property store where both a) the
object's name is ref-like (`ref` or `-Ref`) and b) the property is
`current`, we infer the object itself as a ref and the value of the
property as a ref value. Originally the feature only detected property
loads, not stores.
* Inferred refs are not considered stable (this is a change from the
original implementation). The only way to get a stable ref is by calling
`useRef()`. We've seen issues with assuming refs are stable.

With this change, cases like the following now correctly error:

```js
function Foo(props) {
  const fooRef = props.fooRef;
  fooRef.current = true;
  ^^^^^^^^^^^^^^ cannot modify ref in render
}
```

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/34000).
* facebook#34027
* facebook#34026
* facebook#34025
* facebook#34024
* facebook#34005
* facebook#34006
* facebook#34004
* facebook#34003
* __->__ facebook#34000
Improves the error message for ValidateNoRefAccessInRender, using the
new diagnostic type as well as providing a longer but succinct summary
of what refs are for and why they're unsafe to access in render.

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/34003).
* facebook#34027
* facebook#34026
* facebook#34025
* facebook#34024
* facebook#34005
* facebook#34006
* facebook#34004
* __->__ facebook#34003
…p) (facebook#34004)

Two related changes:
* ValidateNoRefAccessInRender now allows the mergeRefs pattern, ie a
function that aggregates multiple refs into a new ref. This is the main
case where we have seen false positive no-ref-in-render errors.
* Behind `@enableTreatRefLikeIdentifiersAsRefs`, we infer values passed
as the `ref` prop to some JSX as refs.

The second change is potentially helpful for situations such as

```js
function Component({ref: parentRef}) {
  const childRef = useRef(null);
  const mergedRef = mergeRefs(parentRef, childRef);
  useEffect(() => {
    // generally accesses childRef, not mergedRef
  }, []);
  return <Foo ref={mergedRef} />;
}
```

Ie where you create a merged ref but don't access its `.current`
property. Without inferring `ref` props as refs, we'd fail to allow this
merge refs case.

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/34004).
* facebook#34027
* facebook#34026
* facebook#34025
* facebook#34024
* facebook#34005
* facebook#34006
* __->__ facebook#34004
We infer render helpers as functions whose result is immediately
interpolated into JSX. This is a very conservative approximation, to
help with common cases like `<Foo>{props.renderItem(ref)}</Foo>`. The
idea, similar to hooks, is that it's ultimately on the developer to catch
ref-in-render violations (and the runtime detects them too), so we can
be a bit more relaxed since there are valid reasons to use this pattern.
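
For illustration (made-up component names), the shape of code this
covers:

```js
import {useRef} from 'react';

function Panel(props) {
  const itemRef = useRef(null);
  // props.renderItem is treated as a render helper because its result is
  // immediately interpolated into JSX, so passing the ref here is allowed.
  return <Foo>{props.renderItem(itemRef)}</Foo>;
}
```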

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/34006).
* facebook#34027
* facebook#34026
* facebook#34025
* facebook#34024
* facebook#34005
* __->__ facebook#34006
* facebook#34004
)

`@enableTreatRefLikeIdentifiersAsRefs` is now on by default. I made one
small fix to the render helper logic as part of this, uncovered by
including more tests.

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/34005).
* facebook#34027
* facebook#34026
* facebook#34025
* facebook#34024
* __->__ facebook#34005
…4024)

Fixes facebook#30782

When developers do an `if (ref.current == null)` guard for lazy ref
initialization, the "safe" blocks should extend up to the if's
fallthrough. Previously we only allowed writing to the ref in the if
consequent, but this meant that you couldn't use a ternary, logical, etc
in the if body.

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/34024).
* facebook#34027
* facebook#34026
* facebook#34025
* __->__ facebook#34024
…zer (facebook#34025)

Per title, disallow ref access in `useState()` initializer function,
`useReducer()` reducer, and `useReducer()` init function.
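
For illustration, the kind of access that is now disallowed (made-up
component):

```js
import {useState} from 'react';

function Component({someRef}) {
  // ⚠️ Flagged: the lazy initializer runs during render, so reading the ref
  // here is a ref access in render.
  const [initialValue] = useState(() => someRef.current);
  return <span>{initialValue}</span>;
}
```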

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/34025).
* facebook#34027
* facebook#34026
* __->__ facebook#34025
…mutated (facebook#34026)

Allows assigning a ref-accessing function to an object so long as that
object is not subsequently transitively mutated. We should likely
rewrite the ref validation to use the new mutation/aliasing effects,
which would provide a more consistent behavior across instruction types
and require fewer special cases like this.
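
For illustration, the pattern that is now allowed as long as the object
isn't mutated afterwards (`Form` and the handler names are made up):

```js
import {useRef} from 'react';

function Component() {
  const inputRef = useRef(null);
  // OK: a ref-accessing function is assigned to an object, and `handlers`
  // is never transitively mutated afterwards.
  const handlers = {
    focusInput: () => inputRef.current && inputRef.current.focus(),
  };
  return <Form ref={inputRef} handlers={handlers} />;
}
```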

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/34026).
* facebook#34027
* __->__ facebook#34026
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/34044).
* facebook#34027
* __->__ facebook#34044
…Backend (facebook#34050)

This keeps a data structure of Suspense boundaries and the root, which
can keep track of which boundaries might participate in a loading
sequence and everything that suspends them. This will power the Suspense
tab.

Now when you select a `<Suspense>` boundary the "suspended by" section
shows the whole boundary instead of just that component.

In the future, we'll likely need to add "Activity" boundaries to this
tree as well, so that we can track what suspended the root of an
Activity when filtering a subtree. Similar to how the root SuspenseNode
now tracks suspending at the root. Maybe it's ok to just traverse to
collect this information on-demand when you select one though since this
doesn't contribute to the deduping.

We'll also need to add implicit Suspense boundaries for the rows of a
SuspenseList with `tail=hidden/collapsed`.
Follow up to facebook#34050.

It's not actually possible to suspend *above* the root since even if you
suspend in the first child position, you're still suspending the
HostRoot which always has a corresponding FiberInstance and
SuspenseNode.
…acebook#34058)

Suspense boundaries that may have contributed to the preamble should not
be outlined due to size because these boundaries are only meant to be in
fallback state if the boundary actually errors. This change excludes any
boundary which has the potential to contribute to the preamble. We could
alternatively track which boundaries actually contributed to the
preamble but in practice there will be very few and I think this is
sufficient.

One problem with this approach is it makes Suspense above body opt out
of the mode where we omit rel="expect" for large shells. In essence
Suspense above body has the semantics of a Shell (it blocks flushing
until resolved) but it doesn't get tracked as request bytes and thus we
will not opt users into the skipped blocking shell for very large
boundaries.

This will be fixed in a follow-up.
…e request byteSize (facebook#34059)

Stacked on facebook#34058

When tracking how large the shell is, we currently only track the bytes
of everything above Suspense boundaries. However, since boundaries that
contribute to the preamble will always be inlined when the shell flushes,
they should also be considered as part of the request byteSize, since
they always flush alongside the shell. This change adds this tracking.