
@rosa rosa commented Aug 10, 2025

This PR is a start on bringing proper offline support to Turbo using service workers, which can be useful for PWAs, but also for mobile apps built using Hotwire Native.

Main app side

On the main app side, it can be used like this:

import { Turbo } from "@hotwired/turbo-rails"
// Or however you're importing Turbo into your app

// Then run the following to register your service worker
Turbo.offline.start(scriptUrl, options)

scriptUrl is the URL of the service worker script. For example, the default service worker added by Rails is located at /service-worker.js. The response needs to be served with a text/javascript MIME type.

options are the following:

  • scope: the service worker's registration scope, which determines which URLs the service worker can control. For /service-worker.js, the default would be /. In this way, the service worker can intercept any URL from your app.
  • type: this can be classic, which is the default (meaning the service worker is a standard script), or module, which means the service worker is an ES module. The module type is not currently supported in Firefox.
  • native: this is a specific option for Hotwire Native support. If true, it'll set a cookie that's needed for Hotwire Native apps to work correctly when a service worker intercepts requests. It's true by default, but if you're not using Hotwire Native you can set it to false.

So, for example:

import { Turbo } from "@hotwired/turbo-rails"

Turbo.offline.start("/service-worker.js", { 
  scope: "/", 
  type: "module", 
  native: true 
})

In a Rails app, this could be placed in app/javascript/application.js or app/javascript/initializers/turbo_offline.js, for example.

Service worker side

Your app needs to serve a text/javascript response containing your service worker at the URL you've provided to the registration. Maybe you're already using a service worker for push messages; if you're not using one at the moment, you can start with an empty script. Then, you can configure offline mode like this:

// if using `type: "classic"`
importScripts("url-to-turbo-offline-umd.js")

// if using `type: "module"` (not supported in Firefox), you can do
// import { addRule, start, handlers } from "url-to-turbo-offline.min"

// Then, add rules for caching
TurboOffline.addRule({
  match: /\/topics\/\d+/,
  handler: TurboOffline.handlers.networkFirst({
    cacheName: "topics",
    maxAge: 60 * 60 * 24 * 7,
    networkTimeout: 3
  })
})

// ... more rules if needed

TurboOffline.start()

  • match: controls which requests the service worker will intercept. It can be a regexp that will be tested against the request's URL, or a function that will get the request object passed as a parameter. By default it's /.*/, which means it'll match all URLs.
  • handler: can be one of the following:
    • handlers.cacheFirst: return the cached response if it exists, without going to the network. If it doesn't exist, go to the network and add the response to the cache.
    • handlers.networkFirst: always go to the network first, caching the response afterwards. Fall back to the cache if the network returns an error.
    • handlers.staleWhileRevalidate: return a cached response if available, but always refresh it in the background.
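
To make the matching behaviour concrete, here's a hedged sketch of how a rule's match option could be evaluated against incoming requests. The names (rules, findRule) are illustrative, not Turbo's actual internals:

```javascript
// Illustrative only: `match` can be a RegExp (tested against the
// request's URL) or a function (given the request object). The first
// matching rule wins.
const rules = []

function addRule({ match = /.*/, handler }) {
  rules.push({ match, handler })
}

function findRule(request) {
  return rules.find(({ match }) =>
    match instanceof RegExp ? match.test(request.url) : match(request)
  )
}

// Usage:
addRule({ match: /\/topics\/\d+/, handler: "topics" })
addRule({ handler: "fallback" }) // default match /.*/ matches everything

findRule({ url: "https://example.com/topics/42" }).handler // "topics"
findRule({ url: "https://example.com/about" }).handler     // "fallback"
```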

You always need to provide a cacheName, and you can have different rules with different cache names to cache separate parts of your app. The following options are also available, but they're optional:

  • networkTimeout: only relevant for the networkFirst handler. The time, in seconds, to wait for the network before falling back to the cache. It's for those cases where connectivity is bad but it takes a long time to get an error, so you'd be better off using the cached version sooner. If the timeout is reached but the response is not cached, we'll wait for the network anyway.
  • maxAge: in seconds; entries older than this are deleted from the cache. The cache trimming process is triggered in the background whenever we add a new entry to the cache, or when a cached response is used with the cacheFirst strategy. For now, only deleting by maxAge is supported, and we look at the last time an entry was cached: entries not refreshed in the last maxAge seconds will be deleted. I'd like to add other mechanisms in the future.
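
As a sketch of the maxAge check described above (illustrative, not the library's actual code), an entry expires when it was last cached more than maxAge seconds ago:

```javascript
// Hypothetical helper: decide whether a cache entry is stale based on
// when it was last cached. Timestamps in milliseconds; maxAge in seconds.
function isExpired(cachedAtMs, maxAgeSeconds, nowMs = Date.now()) {
  return nowMs - cachedAtMs > maxAgeSeconds * 1000
}

const maxAge = 60 * 60 * 24 * 7 // one week, like the example above
const now = Date.now()

isExpired(now - 1000, maxAge, now)                    // false: cached a second ago
isExpired(now - 8 * 24 * 60 * 60 * 1000, maxAge, now) // true: cached 8 days ago
```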

For example, if you wanted your service worker to go to the network first, cache everything for at most 24 hours and fall back to the cache after 3 seconds, you could do it like this:

importScripts("url-to-turbo-offline-umd.js")

// Then, add rules for caching
TurboOffline.addRule({
  handler: TurboOffline.handlers.networkFirst({
    cacheName: "global",
    maxAge: 60 * 60 * 24,
    networkTimeout: 3
  })
})

TurboOffline.start()

This is still a simple approach, but my plan, if this works, is to build on it and add more sophisticated mechanisms to pre-cache URLs for offline access before they're accessed, in a dynamic way, so they don't need to be listed in the service worker beforehand. We need this in HEY's mobile apps, so I'll be extracting that from my work there.

I wanted to get this out as soon as possible to get feedback, ideas and so on. I'll open a corresponding PR to turbo-rails to expose the new @hotwired/turbo/offline.

cc @joemasilotti @jayohms @dhh

rosa added 29 commits July 29, 2025 20:07
This is a basic implementation, extracted from HEY's more complex one, that only caches pages on visit. It's still very bare-bones because my goal is to see how it'd be used from turbo-rails and other apps.

A lot of Turbo code can't run in a service worker because not all browser features are available in web workers (for example, `HTMLFormElement` is not available), so we can't rely on apps importing the whole of Turbo to get access to the service worker functionality they'd need in their service worker: loading everything would just fail. Because of this, we need a different bundle that exposes only the offline functionality. This adds support for that, specifying a subpath, `/offline`, for the `@hotwired/turbo` package (`@hotwired/turbo/offline`). In this way, users of Turbo can do something like
```
import * as TurboOffline from "@hotwired/turbo/offline"
```
without getting all the Turbo stuff.
Much simpler and shorter, plus a more precise name for what it does.
And revise the configuration implementation.
The flow here is a bit complex, so I'm summarising the idea here, which
is handling the following scenarios:

1. Network fetch works just fine and returns before any configured
   timeouts. In this case `Promise.race` returns the network response,
   and `clearTimeout` prevents the cache fetch. We return that and we're
   done.

2. Network fetch fails quickly, before any configured timeouts. In this
   case `Promise.race` throws an error and `clearTimeout` prevents the
   cache fetch. In this case we try the cache as fallback, explicitly,
   and return what we get from there (which might be undefined). We're
   done.

3. The timeout is reached before the network fetch completes. Then we
   check the cache as fallback, and have two possibilities:
    - Cache hit: in this case `Promise.race` returns the cached
      response, we return it and we're done.
    - Cache miss: in this case we know that the network promise didn't
      fail yet, so maybe it's going to be slower than the timeout. We
      wait on it because it doesn't hurt, since we know we have no
      fallback because we've already looked up the response in the
      cache and we don't have it.

The idea is to ensure we only check the network and the cache once.
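
The three scenarios above can be sketched roughly like this, with hypothetical fetchFromNetwork/fetchFromCache stand-ins for the real network and cache lookups (a sketch of the idea, not the actual implementation):

```javascript
// Race the network against a timer that resolves with the cached
// response. The cache is consulted at most once, and a late network
// failure is swallowed once we've already answered from the cache.
async function networkFirst(request, { networkTimeout, fetchFromNetwork, fetchFromCache }) {
  let timer
  const networkPromise = fetchFromNetwork(request)
  networkPromise.catch(() => {}) // avoid unhandled rejections after we've answered

  const timeoutPromise = new Promise(resolve => {
    timer = setTimeout(() => resolve(fetchFromCache(request)), networkTimeout * 1000)
  })

  try {
    const response = await Promise.race([ networkPromise, timeoutPromise ])
    if (response) return response // scenario 1, or scenario 3 with a cache hit
    // Scenario 3 with a cache miss: no fallback left, so wait for the network
    return await networkPromise.catch(() => undefined)
  } catch {
    // Scenario 2: the network failed before the timeout; try the cache
    return await fetchFromCache(request)
  } finally {
    clearTimeout(timer) // scenarios 1 and 2: prevent the cache fetch
  }
}
```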
This is simpler than network-first, and similar to cache-first except
that in the promise to wait after respondWith, if we got a cache hit, we
fetch from network and store in the cache to refresh the cached value.
The reason is that these responses could be either opaque or an error.
In a cache-first strategy, we risk caching a network error and keeping
it forever because of the cache-first nature: we won't revalidate it. In
other strategies like network-first or stale-while-revalidate we might
cache an error but it'll be remediated the next time we have to refresh
it.
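
A sketch of that revalidating cache-first shape, with hypothetical helper names (the real handler works against the Cache API):

```javascript
// Answer from the cache when possible, but always kick off a network
// refresh so a cached error or opaque response can't get stuck in the
// cache forever. Illustrative only.
async function cacheFirstWithRefresh(request, { fetchFromNetwork, fetchFromCache, putInCache }) {
  const cached = await fetchFromCache(request)
  const refresh = fetchFromNetwork(request)
    .then(response => putInCache(request, response).then(() => response))
    .catch(() => undefined) // a failed refresh keeps whatever is cached

  return cached ?? await refresh // cache hit: respond now, refresh in background
}
```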
This allows users to configure the service worker directly in the HTML, independently of loading Turbo and its configuration, since we can't easily change it after we've registered the service worker, unlike other Turbo configuration options. It'd work like this:

```
<turbo-offline serviceWorkerUrl="/service-worker.js" />
```

And optionally the following attributes can be provided:

- `scope="/"` -> this defaults to resolving "./" against the service
worker's script URL (so, "/" for "/service-worker.js").
- `type="module"`, which can be "classic" or "module", and defaults to "classic".
- `nativeSupport="true"`, which indicates whether the app has a Hotwire
Native counterpart that's loading the service worker as well. We need to
set a cookie in this case, to override the User Agent with Hotwire
Native's custom User Agent, as otherwise the web view's default User
Agent is sent.
Turbo's custom elements might register after the `<turbo-offline>` elements have been added to the DOM (very likely, because they're supposed to be added to the `<head>`). In that case, `connectedCallback` doesn't run, because the custom element definition doesn't exist yet; it runs when we call `upgrade` on them.
Use an explicit JS API rather than a custom element, like this:

```js
import { Turbo } from "@hotwired/turbo-rails"

Turbo.offline.start("/service-worker.js", {
  scope: "/",
  type: "module",
  native: true
})
```

I think it's clearer and cleaner.
In favour of a simpler, explicit API.
Something like:

```js
  import * as TurboOffline from "@hotwired/turbo/offline"

  TurboOffline.addRule({
    match: /\.js$/,
    handler: TurboOffline.handlers.cacheFirst({
      cacheName: "assets",
      maxAge: 60 * 60 * 24 * 7
    })
  })
  TurboOffline.start()
```

Or maybe like:
```js
import { addRule, start, handlers } from "@hotwired/turbo/offline"

addRule({
 match: /\.js$/,
 handler: handlers.cacheFirst({
   cacheName: "assets",
   maxAge: 60 * 60 * 24 * 7
 })
})

start()
```
…stry

This is not really necessary. The version will change when the library
changes anything about the underlying structure, and the names will only
change if necessary (it probably won't be necessary).
In this way we can have different expiration rules per cacheName, and
not let a cache interfere with another cache. Each handler has a
registry for its cache name, and will use this one for expiration.
"classic" is the default for the registration call, but "module" will be needed for Rails apps, so just use that.
We'll trigger this whenever we add something to the cache.
I had forgotten about this. It'll be useful for people not using `type:
module` for their service worker. They'll need to use the UMD build with
`importScripts`.
To allow test service workers to use / as scope so they can intercept
any URL.
Unfortunately clock mocking doesn't seem to work in the service worker context, so I had to resort to using a very short-lived cache and waiting for the entries to expire.
I had added this in the very beginning but ended up configuring things
differently.
Need to tell it about the service worker scripts. Also, I missed a trailing ; when I copied from Playwright's docs ^_^U