This post is part of a multi-part series on progressive web apps (PWAs).
- Mobile First Design, Web App Manifest
- Intro to Service Workers
- Service Worker Caching (you're here)
- Caching Dynamic Content
- Offline Synchronization
- Push Notifications
The story continues...
Now that we've found out what service workers are capable of, it's time to actually use them.
The thing we're going to use them for is *drum-roll* CACHING!
Caching allows our app to run even when our users are offline,
which is one of the key features of a progressive web app.
So, we're going to talk about the following in this post:
- Caches and Cache Versioning
- Pre-caching
- Dynamic caching
- Cache utilities
Caches and Cache Versioning
Caches are named key / value stores for requests and their corresponding responses.
They allow us to either pre-cache static data for later use, or to add dynamic content on the fly so it will be saved for offline use.
A cache has to be managed manually, so data will neither be added nor updated automatically.
There's also no cache expiration: whenever we want to purge outdated data, we either have to remove it manually or delete the whole cache.
Since we manage our caches by hand, we also have to make sure they serve up-to-date data.
Before we go on, let's see how we can actually open a cache:
caches.open($cacheName).then(cache => {});
When opening a cache we have to provide a cache name. In case a cache with the provided name exists, it'll be opened, otherwise a new cache object under this name will be created.
caches.open(...) returns a Promise which resolves to the opened cache, so we're able to modify the cache in a .then(cache => {}) block.
Now since caches are operated on using names, it becomes easy to introduce errors in your app by messing up cache names. So, the obvious solution is to store and manage caches in use at a central place.
const currentCaches = {
  static: "static-cache-v1",
  dynamic: "dynamic-cache-v1"
};
The above snippet also shows how we could apply versioning to our caches.
Each name is assembled from a type (in this example we're dealing with one static and one dynamic cache) and a version string, in this case v1.
So, whenever we change data which lives within our static cache, we have to bump the cache version to make sure the updated data actually ends up in our cache.
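To make such version bumps less error-prone, we could derive every name from a single version constant; a small sketch (the CACHE_VERSION constant and cacheName helper are our own, not part of any API):

```javascript
// Derive every cache name from one version number,
// so a single bump invalidates all caches at once.
const CACHE_VERSION = 2;
const cacheName = type => `${type}-cache-v${CACHE_VERSION}`;

const currentCaches = {
  static: cacheName("static"),   // "static-cache-v2"
  dynamic: cacheName("dynamic")  // "dynamic-cache-v2"
};
```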
Cache Cleanup
As we learned in my last post, the activation stage of a service worker's lifecycle is perfectly suitable to purge outdated caches.
self.onactivate = event => {
  const KNOWN_CACHES = Object.values(currentCaches);
  event.waitUntil(
    caches.keys().then(cacheNames => {
      return Promise.all(
        cacheNames.map(cacheName => {
          if (!KNOWN_CACHES.includes(cacheName)) {
            console.log("Purging outdated cache:", cacheName);
            return caches.delete(cacheName);
          }
        })
      );
    })
  );
};
We're prolonging the activate event by calling event.waitUntil(...) and checking, for each available cache, whether it's in our list of known caches. If not, we delete it since it is no longer needed.
Since caches.delete(...) returns a Promise, we wrap our cleanup code in Promise.all(...), which takes a list of Promise objects and only resolves once every one of them resolves.
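The purge decision itself is just a filter; extracted into a pure function (the outdatedCaches name is our own), it's easy to reason about in isolation:

```javascript
// Given all existing cache names and the list of known ones,
// return the names that should be deleted.
const outdatedCaches = (cacheNames, knownCaches) =>
  cacheNames.filter(cacheName => !knownCaches.includes(cacheName));

outdatedCaches(
  ["static-cache-v1", "static-cache-v2", "dynamic-cache-v2"],
  ["static-cache-v2", "dynamic-cache-v2"]
);
// → ["static-cache-v1"]
```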
Pre-caching
As the name might suggest, pre-caching stores data before it's actually needed.
In PWAs, this is often used to store assets which are required to properly display the static "shell" of an application.
This includes things like
- CSS
- JS
- Images
- Fonts
- etc.
Caching of static assets which are required to provide a basic version of an app is often referred to as the "app shell" strategy.
App shell caching takes place in the install phase of a service worker and allows us to display static content of our application even when the user is offline.
self.oninstall = event => {
  const dataToCache = [
    "/app-icon-48x48.6dc6b62a.png",
    "/apple-icon-76x76.3b41636a.png",
    "/main-image.8ec44c4f.jpg",
    "/main-image-lg.8b45ce89.jpg",
    "/manifest.f43e1207.webmanifest",
    "https://fonts.googleapis.com/css?family=Roboto:400,700",
    "https://fonts.googleapis.com/icon?family=Material+Icons",
    // ...
  ];
  event.waitUntil(
    caches.open(currentCaches.static).then(cache => {
      // Return the promise so waitUntil only settles once caching is done.
      return cache
        .addAll(dataToCache)
        .catch(error =>
          console.log("Failed to initialize static cache:", error)
        );
    })
  );
};
cache.add(...) takes a URL as parameter, fetches it and puts the resulting request / response pair into the currently open cache.
Its extension, cache.addAll(...), works the same way, but instead of a single URL it processes a whole list of URLs.
cache.addAll(...) provides a nice, short way to add a list of static assets to our cache. On the other hand, this call will leave you with a partially initialized cache as soon as a single asset fails to be fetched.
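If a partially initialized cache is a concern, one alternative is to add each asset individually and merely log failures. A sketch (the addAllSettled helper is our own, not part of the Cache API):

```javascript
// Fault-tolerant alternative to cache.addAll(...): each URL is added
// on its own, so one failed fetch doesn't abort the whole batch.
function addAllSettled(cache, urls) {
  return Promise.allSettled(urls.map(url => cache.add(url))).then(results => {
    results.forEach((result, index) => {
      if (result.status === "rejected") {
        console.log("Failed to cache:", urls[index], result.reason);
      }
    });
  });
}
```

The returned promise always resolves, so a single missing asset no longer prevents the install phase from completing.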
Dynamic Caching
Ok, so now we're able to display the static shell of our app even when users are offline. But what if we also want to show dynamic data in offline mode?
At installation time we do not know about any dynamic data like user images, text posts etc., so we're not able to put them into our static cache. Fortunately, we're also able to intercept any request made from within our application.
By listening for the fetch event, we're able to intercept and dissect any request and / or response, which makes it the perfect place to perform dynamic caching.
self.onfetch = event => {
  event.respondWith(
    caches.match(event.request).then(cachedResponse => {
      if (cachedResponse) {
        return cachedResponse;
      } else {
        return fetch(event.request)
          .then(fetchedResponse => {
            if (!fetchedResponse.ok) {
              return fetchedResponse;
            } else {
              return caches
                .open(currentCaches.dynamic)
                .then(cache => {
                  if (event.request.method === "GET") {
                    const clonedResponse = fetchedResponse.clone();
                    cache.put(event.request, clonedResponse);
                  }
                  return fetchedResponse;
                })
                .catch(reason =>
                  console.log("An error occurred while caching data:", reason)
                );
            }
          })
          .catch(reason => {
            console.log("An error occurred while fetching data:", reason);
          });
      }
    })
  );
};
The code snippet above applies the so-called "cache first" strategy.
For every request made by our application, we first check if there's already a cached response. If so, we immediately return the cached response. This ultimately leads to faster response times, but also bears the risk of serving outdated data.
On a cache miss we fetch the initial request, check whether it has been successful and add the request / response pair to our cache.
This could also be accomplished by using the already known cache.add(...) or cache.addAll(...) methods, but in case you want to apply additional custom logic to caching, this more explicit version is a better starting point.
One thing to pay attention to is the call to fetchedResponse.clone().
Since responses are streams, they can only be consumed once. So, to be able to return the fetched response after it's been added to our cache, we have to clone it.
Cache Utilities
Caching is quite a heavy topic. There are various caching strategies, and which one fits best depends on the situation at hand.
Mozilla provides a so-called "service worker cookbook" which contains a lot more details on various caching strategies.
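As one example from that space, a "network first" handler, the mirror image of the cache-first code above, could be sketched like this (the networkFirst helper name is our own):

```javascript
// "Network first": prefer fresh data from the network, cache successful
// GET responses on the way through, fall back to the cache when offline.
function networkFirst(event, cacheName) {
  return fetch(event.request)
    .then(fetchedResponse => {
      if (fetchedResponse.ok && event.request.method === "GET") {
        const clonedResponse = fetchedResponse.clone();
        caches.open(cacheName).then(cache => cache.put(event.request, clonedResponse));
      }
      return fetchedResponse;
    })
    .catch(() => caches.match(event.request));
}

// Wired up in the service worker:
// self.onfetch = event =>
//   event.respondWith(networkFirst(event, currentCaches.dynamic));
```

This trades the fast responses of cache-first for data that's as fresh as the network allows.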
At this point you might also ask yourself whether we have to re-invent the wheel every time we need pre-caching and / or dynamic caching.
The answer is: no.
Google provides a tool called Workbox which handles boilerplate code for e.g. pre-caching and dynamic caching, so you don't necessarily have to write your caching code by hand.
Conclusion
In this post I demoed how to perform
- Cache Versioning
- Cache Cleanup
- Pre-caching
- Dynamic Caching
In my next post we'll take a look at how to store dynamic content in IndexedDB, so stay tuned!
So long
Simon