Hosting a static web site using Google Cloud Run (2024-04-30)
context
This website (https://til.cafe/) is currently a static site, hosted on Google Cloud Run. Wait... isn't that a serverless container hosting service? Yes. So why am I hosting a static site on something designed for running containers? The details are in this post about the tradeoffs. The short version: it works really well (no fiddling) and can be set up to build automatically. So I can edit text, check it in, push, and the site gets updated. Nice.
So... how fiddly is it to set up? Not bad. And that's what the rest of this article is about.
overview
OK, so Cloud Run is a service that hosts containers that serve web content. You can host pretty much any application written in any language you'd like, as long as you can wrap it in a container and it responds to HTTP requests.
A static website is made from a bunch of files (at least HTML/CSS, probably more) on disk somewhere that can be accessed via HTTP.
So... how do we connect these two? Files can't serve themselves, so we need a web server; then we wrap the server and the files into a container and send that to Cloud Run. We only need two short config files to do this... but it was a bit fiddly to get there, so let's walk through the details.
static website files
I kinda skimmed over it above... but while you could write the HTML/CSS directly (and there are pretty good arguments for doing so), this site uses a Static Site Generator - software which takes text files of some kind and produces the HTML files.
I'm currently using Zola, but for this article, it doesn't matter, anything which will produce the files works.
In my case they're generated by running this command:
$ zola build
So, where you see that below, replace it with whatever you use to generate the files.
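As a quick sanity check (these commands are just illustrative; paths assume Zola's defaults, where the generated site lands in a public/ directory):

```shell
$ zola build
$ ls public/
# you should see index.html plus your CSS and other assets
```

That public/ directory is what ultimately gets copied into the container.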
web server
Next, we need to actually serve those files over HTTP. Nginx & Apache are currently the most popular web servers... but for this I wanted something as easy as possible to configure with minimal dependencies (to keep the container size small). At some point, I ran across Caddy, and while their marketing page is kinda over-the-top, it ships as a single file binary and the config is very clean:
:8080 {
root * /srv
file_server
log {
output stdout
format json
}
}
This says: listen on port 8080, use the file_server module to serve all files in /srv, and log to stdout in JSON.
Some notes:
8080: Cloud Run expects a process listening on 0.0.0.0 (all IPs) and port 8080 by default (see: Container runtime contract). So I just hard-coded it here. If you need the port to be variable, check out Caddy placeholders.
format json: This is neat because Cloud Run automatically forwards stdout to Cloud Logging, which parses JSON-formatted logs into structured logs you can query and filter.
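For the variable-port case, here's a sketch (not what this site uses): Caddy can substitute environment variables in the Caddyfile, with an optional default after a colon, which lines up with the PORT environment variable Cloud Run sets on the container:

```
:{$PORT:8080} {
	root * /srv
	file_server
}
```

This listens on $PORT if it's set, and falls back to 8080 otherwise.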
Dockerfile
The next step is to build a container.
I'm using a two-stage Docker build, a neat pattern I learned from Daniel Azuma's post "Deploying My Blog to Google Cloud Run". First we build the static website (using Zola, in my case); then, since we only want the website in the final image, we start a fresh image (from a pre-made Caddy image) and copy the config and website files into it.
FROM alpine AS build
RUN apk add zola
WORKDIR /workspace
COPY . /workspace
RUN zola build
####
FROM caddy:2.7-alpine
COPY --from=build /workspace/Caddyfile /etc/caddy/Caddyfile
COPY --from=build /workspace/public /srv
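Before wiring anything up to Cloud Run, it's worth checking the container locally. Illustrative commands (the image tag til-site is arbitrary):

```shell
$ docker build -t til-site .
$ docker run --rm -p 8080:8080 til-site
# then, in another terminal:
$ curl http://localhost:8080/
```

If curl returns your site's HTML, the container is ready to deploy.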
build and deploy an image to Cloud Run
And... now we need to actually get this online. How do we get it onto Cloud Run? Luckily, Cloud Run can handle things from here.
Well... I have to admit that I'm assuming that the website is stored in source control. I hope yours is? Please say yes.
The source for this website (til.cafe) is here.
Cloud Run can be configured to build an image and deploy that whenever a git repository changes. It supports repositories hosted by a few different services, GitHub in this case.
In case things change, I'm going to refer to the official docs for setting up continuous deployment
I followed those steps, giving a service account permission to connect to GitHub, choosing the trigger (push to the main branch, in my case), and choosing Dockerfile as the build type. Since this only needs to be done once on a personal project, I did it via the web UI, but it can also be scripted.
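For completeness: if you'd rather skip the UI entirely, a one-off build-and-deploy from source can be done with the gcloud CLI (the service name and region below are placeholders, not what this site uses):

```shell
$ gcloud run deploy til-site --source . --region us-central1 --allow-unauthenticated
```

This builds the image with Cloud Build and deploys it in one step, though it doesn't set up the automatic on-push trigger described above.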
usage
All of the above is one-time set up. In everyday usage, I edit the files for my site, test the changes locally, check them into main, and push them to GitHub. A few minutes later, the new version is online! This works from whatever tools I use to update the git repo... so it works from desktop, laptop, mobile, whatever. I'm really happy with this pattern. I haven't had to touch it once since setting it up over a year ago.
note about branches
Reading this carefully has probably made some of you nervous... isn't this deploying to production on every change? Yes, pretty much: everything that lands on main gets deployed.
This is a small personal site, so I have it configured to build and deploy on every push to main. That probably wouldn't make sense for anything more critical than a personal blog, or for anything with multiple developers. You can also trigger builds on pushes to specific branches or tags, as well as on pull requests. And you can set up different Cloud Run services at subdomains for each, so it's straightforward to create an integration environment separate from production, for example. Not every push needs to go to prod. ;)
enjoy
Happy static site hosting!