Add support for (basic, cached on-visit) offline access using service workers #1427
Open · rosa wants to merge 30 commits into hotwired:main from rosa:offline-cache
+1,155 −15
Conversation
This is a basic implementation extracted from HEY's more complex one, and it only caches pages on visit. It's still very bare-bones, because my goal is to see how it'd be used from turbo-rails and other apps.

A lot of Turbo code can't run in a service worker, because not all native features are available in web workers (for example, `HTMLFormElement` is not available), so we can't rely on apps importing the whole of Turbo to get access to the service worker functionality they'd need in their service worker: loading everything would just fail. Because of this, we need a different bundle that exposes only the offline functionality. This PR includes support for that, specifying a subpath, `/offline`, for the `@hotwired/turbo` package (`@hotwired/turbo/offline`). In this way, users of Turbo could do something like

```js
import * as TurboOffline from "@hotwired/turbo/offline"
```

without getting all the Turbo stuff.
Much simpler and shorter, plus a more precise name for what it does.
And revise the configuration implementation.
The flow here is a bit complex, so I'm summarising the idea, which handles the following scenarios:

1. The network fetch works just fine and returns before any configured timeout. In this case `Promise.race` returns the network response, and `clearTimeout` prevents the cache fetch. We return that and we're done.
2. The network fetch fails quickly, before any configured timeout. In this case `Promise.race` throws an error, and `clearTimeout` prevents the cache fetch. We then try the cache explicitly as a fallback and return what we get from there (which might be undefined). We're done.
3. The timeout is reached before the network fetch completes. Then we check the cache as a fallback, with two possibilities:
   - Cache hit: `Promise.race` returns the cached response, we return it and we're done.
   - Cache miss: we know the network promise hasn't failed yet, so maybe it's just slower than the timeout. We wait on it, because it doesn't hurt: we have no fallback left, since we've already looked up the response in the cache and it isn't there.

The idea is to ensure we only check the network and the cache once each.
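The three scenarios can be sketched roughly like this. This is an illustrative sketch, not the PR's actual code: the fetch functions are injected so each scenario is easy to follow.

```javascript
// fetchFromNetwork(request) -> Promise<Response>, may reject.
// fetchFromCache(request)   -> Promise<Response | undefined>.
async function networkFirstWithTimeout(request, fetchFromNetwork, fetchFromCache, timeoutMs) {
  let timer
  let cacheChecked = false

  const networkPromise = fetchFromNetwork(request)
  networkPromise.catch(() => {})  // avoid an unhandled rejection if the cache wins

  const cacheAfterTimeout = new Promise((resolve) => {
    timer = setTimeout(() => {
      cacheChecked = true
      resolve(fetchFromCache(request))
    }, timeoutMs)
  })

  try {
    // Scenario 1: the network settles first and wins the race.
    // Scenario 3: the timeout fires first and we look in the cache.
    const response = await Promise.race([networkPromise, cacheAfterTimeout])
    if (response) return response

    // Scenario 3, cache miss: the network hasn't failed yet, and there's no
    // fallback left, so waiting on it doesn't hurt.
    return await networkPromise
  } catch {
    // Scenario 2: the network failed before the timeout. Try the cache as a
    // fallback, unless we already checked it and know there's nothing there.
    return cacheChecked ? undefined : fetchFromCache(request)
  } finally {
    // Scenarios 1 and 2: prevent the cache fetch from ever running.
    clearTimeout(timer)
  }
}
```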
This is simpler than network-first, and similar to cache-first, except that in the promise we wait on after `respondWith`, if we got a cache hit, we fetch from the network and store the response in the cache to refresh the cached value.
The reason is that these responses could be either opaque or an error. In a cache-first strategy, we'd risk caching a network error and keeping it forever because of the cache-first nature: we never revalidate it. In other strategies, like network-first or stale-while-revalidate, we might cache an error, but it'll be remediated the next time we refresh it.
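That stale-while-revalidate shape, with the "only cache good responses" check, could look roughly like this. Names here are illustrative, not the PR's code; the network function and cache are injected so the check is visible.

```javascript
// fetchFromNetwork stands in for fetch; cache is a Cache-like object
// with async match(request) and put(request, response).
async function staleWhileRevalidate(request, fetchFromNetwork, cache) {
  const cached = await cache.match(request)

  const refresh = fetchFromNetwork(request).then((response) => {
    // Only cache successful responses: an opaque or error response stored
    // here would otherwise be served again until the next refresh.
    if (response.ok) cache.put(request, response.clone())
    return response
  })
  refresh.catch(() => {})  // ignore background refresh failures

  // Cache hit: serve the stale copy while the refresh runs in the background.
  // Cache miss: wait for the network.
  return cached || refresh
}
```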
For consistency.
This allows users to configure the service worker directly in the HTML, independently of loading Turbo and its configuration, since we can't easily change it after we've registered the service worker, unlike other Turbo configuration options. It'd work like this:

```html
<turbo-offline serviceWorkerUrl="/service-worker.js" />
```

Optionally, the following attributes can be provided:

- `scope="/"`: defaults to resolving "./" against the service worker's script URL (so "/" for "/service-worker.js").
- `type="module"`: can be "classic" or "module", and defaults to "classic".
- `nativeSupport="true"`: indicates whether the app has a Hotwire Native counterpart that's loading the service worker as well. In that case we need to set a cookie to override the User Agent with Hotwire Native's custom User Agent, as otherwise the web view's default User Agent is sent.
Because Turbo's custom elements might be registered after the `<turbo-offline>` elements have been added to the DOM (very likely, since they're supposed to be added to the `<head>`), in which case the `connectedCallback` doesn't run, because the element definition doesn't exist yet. It runs when we call `upgrade` on them.
Use an explicit JS API rather than a custom element, like this:

```js
import { Turbo } from "@hotwired/turbo-rails"

Turbo.offline.start("/service-worker.js", { scope: "/", type: "module", native: true })
```

I think it's clearer and cleaner.
In favour of a simpler, explicit API.
Something like:

```js
import * as TurboOffline from "@hotwired/turbo/offline"

TurboOffline.addRule({
  match: /\.js$/,
  handler: TurboOffline.handlers.cacheFirst({ cacheName: "assets", maxAge: 60 * 60 * 24 * 7 })
})

TurboOffline.start()
```

Or maybe like:

```js
import { addRule, start, handlers } from "@hotwired/turbo/offline"

addRule({
  match: /\.js$/,
  handler: handlers.cacheFirst({ cacheName: "assets", maxAge: 60 * 60 * 24 * 7 })
})

start()
```
This is not really necessary. The version will change when the library changes anything about the underlying structure, and the names will only change if necessary (it probably won't be necessary).
In this way we can have different expiration rules per cacheName, and not let a cache interfere with another cache. Each handler has a registry for its cache name, and will use this one for expiration.
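A per-cache-name registry could be sketched roughly like this (all names here are assumed for illustration, not the PR's actual implementation):

```javascript
// Tracks, per cache name, when each URL was last cached, so expiration
// rules for one cache never touch another cache's entries.
class CacheEntryRegistry {
  constructor(cacheName) {
    this.cacheName = cacheName
    this.entries = new Map()  // URL -> timestamp (ms) of the last time it was cached
  }

  // Record that a URL was (re)cached.
  touch(url, now = Date.now()) {
    this.entries.set(url, now)
  }

  // URLs whose entries haven't been refreshed in the last maxAge seconds.
  expiredUrls(maxAgeSeconds, now = Date.now()) {
    const cutoff = now - maxAgeSeconds * 1000
    return [...this.entries].filter(([, cachedAt]) => cachedAt < cutoff).map(([url]) => url)
  }
}

const registries = new Map()

// Each handler asks for the registry matching its own cacheName.
function registryFor(cacheName) {
  if (!registries.has(cacheName)) {
    registries.set(cacheName, new CacheEntryRegistry(cacheName))
  }
  return registries.get(cacheName)
}
```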
"script" is the default for the registration call, but "module" will be needed for Rails apps, so just use that.
We'll trigger this whenever we add something to the cache.
I had forgotten about this. It'll be useful for people not using `type: module` for their service worker. They'll need to use the UMD build with `importScripts`.
To allow test service workers to use / as scope so they can intercept any URL.
Unfortunately, clock mocking doesn't seem to work in the service worker context, so I had to resort to using a very short-lived cache and waiting for the entries to expire.
I had added this in the very beginning but ended up configuring things differently.
Need to tell it about the service worker scripts. I also missed a trailing ; when I copied from Playwright's docs ^_^U
Because `module` doesn't work on Firefox ¬_¬ https://bugzilla.mozilla.org/show_bug.cgi?id=1360870
This PR is a start on bringing proper offline support to Turbo using service workers, which can be useful for PWAs, but also for mobile apps built with Hotwire Native.
Main app's side
On the main app side, it can be used like this:
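A sketch of the call shape, based on the API discussed in the review thread above (the exact import path may differ depending on how you load Turbo):

```javascript
import { Turbo } from "@hotwired/turbo-rails"

// scriptUrl and options are described below.
Turbo.offline.start(scriptUrl, options)
```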
`scriptUrl` is the URL of the service worker script. For example, the default service worker added by Rails is located at `/service-worker.js`. This needs to be served with a MIME type of `text/javascript`.

`options` are the following:

- `scope`: the service worker's registration scope, which determines which URLs the service worker can control. For `/service-worker.js`, the default would be `/`. In this way, the service worker can intercept any URL from your app.
- `type`: this can be `classic`, which is the default (and means the service worker is a standard script), or `module`, which means the service worker is an ES module. The latter is not currently supported in Firefox, however.
- `native`: a specific option for Hotwire Native support. If `true`, it'll set a cookie that's needed for Hotwire Native apps to work correctly when a service worker intercepts requests. It's `true` by default, but if you're not using Hotwire Native you can set it to `false`.

So, for example:
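A concrete call might look like this, assuming the options described above (a sketch, not the PR's exact snippet):

```javascript
import { Turbo } from "@hotwired/turbo-rails"

Turbo.offline.start("/service-worker.js", { scope: "/", type: "module", native: true })
```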
In a Rails app, this could be placed in `app/javascript/application.js` or `app/javascript/initializers/turbo_offline.js`, for example.

Service worker side
Your app needs to serve a `text/javascript` response with your service worker at the URL you've provided to the registration. Maybe you're already using a service worker for push messages, or, if you're not using one at the moment, you can start with an empty response. Then, you can configure offline mode with rules, using the `addRule` and `start` functions shown earlier in this thread. Each rule takes the following options:

- `match` controls which requests the service worker will intercept. It can be a regexp that will be tested against the request's URL, or a function that will receive the request object as a parameter. By default it's `/.*/`, which means it'll match all URLs.
- `handler` can be one of the following:
  - `handlers.cacheFirst`: return the cached response if it exists, without going to the network. If it doesn't exist, go to the network and add the response to the cache.
  - `handlers.networkFirst`: always go to the network first, caching the response afterwards. Fall back to the cache if the network returns an error.
  - `handlers.staleWhileRevalidate`: return a cached response if available, but always refresh it in the background.

You always need to provide a `cacheName`, and you can have different rules with different cache names, to cache separate parts of your app. Then, you can also provide the following options, both of them optional:

- `networkTimeout`: this only makes sense with the `networkFirst` handler. Basically, the time to wait before falling back to the cache. It's for those cases where connectivity is bad, but it takes a long time to get an error, so you'd be better off using the cached version sooner. In this case, if the timeout is reached but the response is not cached, we'll wait for the network anyway.
- `maxAge`: in seconds, to delete entries from the cache. The cache trimming process is triggered in the background whenever we add a new entry to the cache, or when a cached response is used in the `cacheFirst` strategy. For now, only deleting by `maxAge` is supported, and we look at the last time an entry was cached: entries not refreshed in the last `maxAge` seconds will be deleted. I'd like to add other mechanisms in the future.

For example, if you wanted your service worker to go to the network first, cache everything for at most 24 hours and fall back to the cache after 3 seconds, you could do it like this:
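A sketch combining those options, following the `addRule` API shown earlier in this thread (the `networkTimeout` unit is assumed here to be seconds, matching `maxAge`):

```javascript
import * as TurboOffline from "@hotwired/turbo/offline"

TurboOffline.addRule({
  // match omitted: defaults to /.*/, i.e. intercept every URL
  handler: TurboOffline.handlers.networkFirst({
    cacheName: "pages",
    networkTimeout: 3,      // fall back to the cache after 3 seconds
    maxAge: 60 * 60 * 24    // keep cached entries for at most 24 hours
  })
})

TurboOffline.start()
```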
This is still a simple approach, but my plan, if this works out, is to build on it and add more sophisticated mechanisms to pre-cache URLs for offline access before they're accessed, in a dynamic way, so they don't need to be listed in the service worker beforehand. We need this in HEY's mobile apps, so I'll be extracting that from my work there.
I wanted to get this out as soon as possible to get feedback, ideas and so on. I'll open a corresponding PR to turbo-rails to expose the new `@hotwired/turbo/offline`.

cc @joemasilotti @jayohms @dhh