feat(@vercel/edge-config) Enforce an IO boundary around client APIs #878
🦋 Changeset detected. Latest commit: 907e541. The changes in this PR will be included in the next version bump. This PR includes changesets to release 2 packages.
Edge config is an IO library, but it heavily caches the underlying IO to make reads incredibly fast. Next.js has a heuristic for prerenders that excludes IO. While we could argue that the kind of data edge-config provides is prerenderable because it has a long lifetime, it is not clear that users expect this. To keep edge-config reads consistently treated as IO, I've added a wrapper for the API methods which guarantees that any read will resolve in a future timeout task.

The choice of a timeout aligns with React's semantics for what is and is not considered IO for the purposes of suspending. On edge runtimes this is unthrottled and should not add meaningful overhead. In Node.js this adds at minimum a 1ms delay if no other work takes time; however, since the edge config reads themselves may take longer than 1ms, the delay often won't be observed. Additionally, as soon as one read is queued, all subsequent reads initiated before the timeout executes will share the same boundary.

There is of course the possibility that previously scheduled events like immediates and OS IO can delay the unblocking of these APIs, but if the workload is already CPU bound then reordering the resolution is not likely to meaningfully affect overall performance.
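The shared-boundary behavior described above can be sketched roughly like this (a minimal illustration only, not the actual SDK code; `ioBoundary` and `withIoBoundary` are hypothetical names):

```typescript
// A shared boundary promise: the first read schedules a setTimeout, and all
// reads started before it fires await the same promise, so the delay is paid
// at most once per timeout window.
let pendingBoundary: Promise<void> | null = null;

function ioBoundary(): Promise<void> {
  if (pendingBoundary === null) {
    pendingBoundary = new Promise<void>((resolve) => {
      setTimeout(() => {
        pendingBoundary = null; // the next read opens a fresh boundary
        resolve();
      });
    });
  }
  return pendingBoundary;
}

// Wraps a (possibly cached, effectively synchronous) read so that its result
// only becomes observable in a future timeout task.
async function withIoBoundary<T>(read: () => Promise<T>): Promise<T> {
  const boundary = ioBoundary();
  const value = await read();
  await boundary;
  return value;
}
```

Because `ioBoundary()` returns the same promise until the timeout fires, two reads started in the same tick resolve behind a single deferred task rather than stacking delays.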
Would love to explore alternatives, if there are any, as adding 1ms would double our p75 latency
```ts
// are tracked as IO by React in Server Component environments.
// TODO implement a Next.js specific version so we can skip this boundary checking
// except where needed.
setTimeout(() => {
```
> however since the edge config reads themselves may take longer than 1 ms often it won't be observed
We worked very hard to make latency <1ms in the majority of Edge Config reads, as we don't go over the network. p75 is definitely under 1ms.
With this change we would always be over 1ms in observed latency if any user measures `await get("foo")`. That's quite the loss.
Is there any other way to signal an IO operation to Next.js that does not require artificially delaying the execution of the operation?
And if we absolutely must merge this I would prefer skipping the setTimeout for non-Next.js to avoid punishing other frameworks for what smells a bit like Next.js leaking its implementation details to this SDK.
Can we find another way of achieving this that does not rely on the heuristic?
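One way to realize the "skip the setTimeout for non-Next.js" suggestion could look like the following sketch (an assumption, not part of this PR: it gates the delay on `process.env.NEXT_RUNTIME`, which Next.js sets for its server runtimes; other detection strategies may be more robust):

```typescript
// Only defer reads when we appear to be running inside Next.js, so other
// frameworks skip the artificial delay entirely.
function runningUnderNextJs(): boolean {
  return typeof process !== "undefined" && Boolean(process.env?.NEXT_RUNTIME);
}

async function maybeDeferRead(): Promise<void> {
  if (!runningUnderNextJs()) return; // no delay outside Next.js
  // Resolve in a future timeout task so React treats the read as IO.
  await new Promise<void>((resolve) => setTimeout(resolve));
}
```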
```diff
@@ -0,0 +1,50 @@
+let blockedBoundary: null | Promise<void> = null;
```
This comment is in random place so we can have a threaded discussion
> While we could argue that the kind of data edge-config provides is prerenderable because it has a long lifetime it is not clear that users expect this.

I'm fully on board that the default expectation is for this to be IO and not part of the prerender. For that case, we have this explicit no-store configuration option, which is used by some users of this library to treat Edge Config more like a CMS. Would that still work with the proposed changes?
Approving for the snapshot release as discussed. This is still missing a changeset to be releasable as a snapshot.
On Vercel, the local edge config is read from the filesystem, but it is cached. We need the boundary to come before this read, because the filesystem access is where things like sync IO currently happen.
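The ordering described above could be sketched like this (names are hypothetical, not the SDK code): the timeout boundary is awaited before the local read, so the sync filesystem IO that backs the cache still sits behind the IO boundary.

```typescript
import { readFileSync } from "node:fs";

const localConfigCache = new Map<string, unknown>();

async function readLocalConfig(path: string): Promise<unknown> {
  // Boundary first: resolve in a future timeout task before touching the fs.
  await new Promise<void>((resolve) => setTimeout(resolve));
  if (!localConfigCache.has(path)) {
    // Sync IO happens here, after the boundary, and only on a cache miss.
    localConfigCache.set(path, JSON.parse(readFileSync(path, "utf8")));
  }
  return localConfigCache.get(path);
}
```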
Closing as #883 got merged in favor of this.