The Layerfile cache
Webapp.io has extended & improved Docker's caching model for use in CI.
Consider the following Layerfile:
Layerfile
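A sketch of such a Layerfile, where sleep stands in for two 20-second steps that read file1 and file2 respectively:

```
FROM vm/ubuntu:18.04
COPY . .
# ~20s step that reads file1
RUN sleep 20 && md5sum file1
# ~20s step that reads file2
RUN sleep 20 && md5sum file2
```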
In this case, we'll make snapshots after each line and map which files were read back to the snapshots. This means:
- if you edit any file other than file1 or file2, this entire Layerfile will be skipped.
- if you edit file1, the last two lines will be rerun (40s).
- if you edit file2, only the last line will be rerun (20s).
- if you edit the Layerfile itself, we'll invalidate the cache at the point of the edit.
Differences from Docker
Here are the major differences between Layerfiles and Dockerfiles for use in CI:
- Layerfiles define VMs, not containers - this means you can run anything (including docker) that you could run on a regular cloud server.
- Running processes are snapshotted and reused. If you start & populate a database, that'll be included in the layer so that you don't have to re-run the steps to set up the database for every pipeline.
- COPY in webapp.io does not invalidate the cache when it runs; instead, the copied files are monitored for reads and writes from that point on. This means that COPY . . is much more common in Layerfiles than in Dockerfiles.
- You can copy files from parent directories (COPY /file1 . or COPY ../.. .) and inherit from other Layerfiles (FROM ../../other/Layerfile).
File watching COPY
In most CI providers and in Docker, you need to micromanage cache keys. The following Dockerfile and Layerfile are equivalent because we watch which files are read by each step:
Dockerfile
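A representative Dockerfile (base image and build commands are illustrative) that micromanages COPY so that dependency installation stays cached:

```
FROM node:18
WORKDIR /app
# Copy only the manifests first so 'npm ci' stays cached until they change.
COPY package.json package-lock.json ./
RUN npm ci
# Copying the rest of the repository invalidates everything below this line.
COPY . .
RUN npm run build
```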
Layerfile
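The equivalent Layerfile, as a sketch: COPY . . can come first because only the steps that actually read the changed files are rerun (the Node.js install line is illustrative):

```
FROM vm/ubuntu:18.04
RUN apt-get update && apt-get install -y nodejs npm
# Copy everything; file watching decides which steps below need to rerun.
COPY . .
# Reruns only when package.json / package-lock.json change.
RUN npm ci
RUN npm run build
```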
Instead of micromanaging COPY, you can simply copy the entire repository, and we'll load the bottommost cached layer that agrees with a commit's changes.
Faster installs: The CACHE directive
Sometimes there are steps which will run repeatedly because their constituent files change often, usually source files. Consider this Layerfile:
Layerfile
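A sketch of such a Layerfile (the install line is illustrative):

```
FROM vm/ubuntu:18.04
RUN apt-get update && apt-get install -y nodejs npm
COPY . .
# Persist npm's download cache across runs so that when package.json does
# change, 'npm ci' resolves packages from a warm cache.
CACHE /root/.npm
RUN npm ci
```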
In this case, unless you change package.json, the default webapp.io cache will skip the entire pipeline after every push. The CACHE directive only acts to speed up the npm ci step in this case.
Note that CACHE will "leak" state across runs, so it might allow one run to break all following ones until someone force-retries without caches. To avoid this problem, only cache stateless directories (which usually contain "cache" in their paths).
Some other examples:
- /var/cache/apt
- /root/.cache/go-build
- ~/.npm
- ~/.next/cache
- ~/.yarn/cache
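For example, apt's package cache is safe to reuse because it only holds downloaded archives, not installed state. A sketch (the packages are illustrative):

```
FROM vm/ubuntu:18.04
# Reusing /var/cache/apt across runs speeds up installs without leaking
# installed state between pipelines.
CACHE /var/cache/apt
RUN apt-get update && apt-get install -y build-essential postgresql-client
```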
SPLIT
Parallelizing directive
Webapp.io provides a utility to run tests in parallel: SPLIT 5 duplicates the entire VM five times at the point it executes. Because each copy is a fully isolated VM, you can run tests in parallel without worrying about race conditions causing flaky tests.
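A minimal sketch of where SPLIT sits in a Layerfile: everything above it runs once, and everything below it runs in each copy (the commands are illustrative; the sections below show how to shard the actual test list):

```
FROM vm/ubuntu:18.04
RUN apt-get update && apt-get install -y nodejs npm
COPY . .
RUN npm ci
# The VM is snapshotted and duplicated into 5 copies at this point.
SPLIT 5
# This step runs independently in each of the 5 copies; combine it with a
# sharding tool (see below) so each copy runs only a subset of the tests.
RUN npm test
```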
Rails: knapsack
See knapsack pro.
- Install the gem.
- Run KNAPSACK_GENERATE_REPORT=true bundle exec rspec spec on your local computer.
- git add knapsack_rspec_report.json && git commit -m 'knapsack' && git push origin master
Your Layerfile will look something like this:
Layerfile
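A sketch of such a Layerfile. The Ruby setup lines are illustrative, and how the split index reaches knapsack (it expects CI_NODE_TOTAL and CI_NODE_INDEX) is an assumption here; consult webapp.io's documentation for the exact environment variables each copy receives:

```
FROM vm/ubuntu:18.04
# Ruby/Bundler setup (illustrative; use whatever your app actually needs).
RUN apt-get update && apt-get install -y ruby-full build-essential
RUN gem install bundler
COPY . .
RUN bundle install
# Duplicate the VM so the spec files can be sharded across 5 copies.
SPLIT 5
# knapsack picks this copy's share of the specs via CI_NODE_TOTAL /
# CI_NODE_INDEX; wiring those to the split index is an assumption here.
RUN bundle exec rake knapsack:rspec
```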
Go: custom test runner
See this file for an example parallel test runner for Go.
The Layerfile from that example:
Layerfile
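In outline, that Layerfile splits the VM and lets a small script shard the package list across the copies. A sketch, where the script name and its sharding logic are placeholders:

```
FROM vm/ubuntu:18.04
RUN apt-get update && apt-get install -y golang-go
COPY . .
# Duplicate the VM into 3 copies at this point.
SPLIT 3
# Placeholder script: list the packages ('go list ./...'), take this copy's
# share, and run 'go test' on it.
RUN ./ci/run-go-tests.sh
```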
RUN REPEATABLE
Restores state from previous runs
Sometimes it's not sufficient to just cache directories (CACHE); it'd be best to cache complex state such as running processes or mounted files. Webapp.io provides this powerful but dangerous caching mechanism via RUN REPEATABLE. It's particularly useful for complicated declarative cluster state like docker, docker-compose, and kubectl.
It's recommended to combine RUN REPEATABLE with multi-stage builds for large performance improvements.
RUN REPEATABLE for Docker
Layerfile
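A sketch of such a Layerfile (the install line and image name are illustrative):

```
FROM vm/ubuntu:18.04
RUN apt-get update && apt-get install -y docker.io
COPY . .
# The docker state from the previous run is restored before this step, so
# 'docker build' starts with a warm layer cache.
RUN REPEATABLE docker build -t myimage .
```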
In this Layerfile, the docker cache from previous runs will be reused because RUN REPEATABLE uses the cache from after the last time this step ran.
If you had three pipelines at 9am, 10am, and 11am, the effective steps run would look like this:
- 9am pipeline: cp -a (9am files) . && docker build -t myimage .
- 10am pipeline: cp -a (9am files) . && docker build -t myimage . && cp -a (10am files) . && docker build -t myimage .
- 11am pipeline: cp -a (9am files) . && docker build -t myimage . && cp -a (10am files) . && docker build -t myimage . && cp -a (11am files) . && docker build -t myimage .
In particular, docker would see the state left behind by its previous invocations and could reuse its layer cache to greatly improve build speed.
RUN REPEATABLE for docker-compose
Layerfile
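A sketch along the same lines for docker-compose (the install line and compose setup are illustrative):

```
FROM vm/ubuntu:18.04
RUN apt-get update && apt-get install -y docker.io docker-compose
COPY . .
# Images, networks and volumes from the previous run are restored before
# this step, so most services come up from cache.
RUN REPEATABLE docker-compose up -d --build
```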
In this Layerfile, all of these things are reused from the moment immediately after the previous invocation:
- The docker layer cache (e.g., pulled images)
- Any created networks or volumes
RUN REPEATABLE for kubernetes (kubectl, k8s, k3s)
Layerfile
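A sketch using k3s (the manifest directory is a placeholder, and the sleep is a crude stand-in for waiting until the cluster is ready):

```
FROM vm/ubuntu:18.04
COPY . .
# The cluster and its workloads from the previous run are restored before
# this step, so the k3s install is mostly a no-op and 'kubectl apply' only
# changes the manifests that differ.
RUN REPEATABLE curl -sfL https://get.k3s.io | sh - && sleep 10 && kubectl apply -f ./k8s/
```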
RUN REPEATABLE gives 50-95% speedups here.
In this Layerfile, we'd set up a kubernetes cluster for you and then snapshot it after you'd started all of your services.
The next time you push, Kubernetes' own declarative logic would figure out which pods to delete or restart given the applied manifests. This means that if you had 20 microservices and only changed one, it would be the only one re-deployed with this Layerfile.