Why is this step rerunning?
webapp.io uses snapshots to speed up your builds in a few simple ways:
- All processes within a VM are “snapshotted”.
- A snapshot is taken about every 20 seconds.
- Snapshots are marked by a banner below the accompanying step.
- A step will be skipped if the files it uses haven’t changed.
If a snapshot is loaded, all the steps above it are skipped. This means that costly, repetitive steps that don’t read many files should be placed as high up in the Layerfile as possible to avoid rerunning.
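As an illustration, a Layerfile might be ordered like this (a sketch; the base image and commands are placeholders):

```
FROM vm/ubuntu:18.04

# Costly step that reads no repository files: placed high so its
# snapshot is reused across runs.
RUN apt-get update && apt-get install -y nodejs npm

# Steps that read repository files go below, so only they rerun on changes.
COPY . .
RUN npm test
```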
Snapshots created on a `RUN REPEATABLE` directive have a special property: the files restored are from after the directive last ran. A more detailed explanation of `RUN REPEATABLE` and a comparison of webapp.io's and Docker's caching systems are available for further reading. Some potential inefficiencies in the use of the layer caching system are listed below:
Common problems with Docker
Docker reading the entire directory: `docker build` copies all files in the context directory that aren't ignored by a `.dockerignore` file. Since webapp.io tracks which files are read, the step reruns if any of the files read by Docker change. An easy solution is to add a `.dockerignore` file that stops Docker from reading irrelevant files.
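For instance, a `.dockerignore` that excludes files Docker doesn't need might look like this (the entries are illustrative):

```
.git
node_modules
*.log
README.md
```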
Why did a lock exist during `RUN REPEATABLE`?
For `docker run` or `docker-compose up`:
The conditions of this error are:
1. Volume mounts are from the destination of a `COPY` directive.
2. Volume mounts are created in the command run by `RUN REPEATABLE` (e.g., `RUN REPEATABLE docker-compose up -d`).
3. The containers keep running after the `RUN REPEATABLE` directive finishes.
Some common resolutions are:
Solution 1: Run `docker-compose up -d` in a separate RUN directive: Putting `docker-compose up -d` outside of `RUN REPEATABLE` breaks condition (2), so a common solution is something like this:
```
RUN REPEATABLE docker-compose build --parallel
RUN docker-compose up -d
```
Solution 2: Don't use volumes: Remove `volumes` blocks from your `docker-compose.yml` file and don't run `docker run` with the `--volume` flag. Consider the following example:
```
# Delete the volumes: key (and the two lines after it) on the fly, then start the stack.
docker-compose -f <(sed '/volumes:/,+2d' docker-compose.yml) up -d
```
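To see what the `sed` filter does, here is a self-contained sketch using GNU sed's `addr,+N` addressing. The compose file below is a hypothetical example for illustration only:

```shell
# Write a small, hypothetical docker-compose.yml for demonstration.
cat > /tmp/demo-compose.yml <<'EOF'
services:
  web:
    image: node:18
    volumes:
      - .:/app
      - /tmp:/tmp
EOF

# Delete the `volumes:` line and the two lines after it, leaving a
# compose file with no volume mounts.
sed '/volumes:/,+2d' /tmp/demo-compose.yml
```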
For more complicated files, a command like `yq` can be used for a similar purpose.
Solution 3: Copy everything to another directory: Copying the entire directory somewhere else will resolve this issue, but causes the step to never be skipped (as all files are read):
```
RUN REPEATABLE rsync -a --delete . /tmp/running/ && cd /tmp/running && docker-compose up -d
```
For `npm run start` or other persistent servers
Sometimes, web servers (especially Node.js servers) within `RUN REPEATABLE` cause this problem as well. The simplest solution is to start the web server in a non-repeatable directive or to copy the files before starting it:
Copy everything to another directory: When copying your files to another directory, your Layerfile may contain something like this:
```
RUN REPEATABLE rsync -a --delete . /tmp/running/ && cd /tmp/running && ( pkill node || true; ) && nohup npm run start &
```
This is not done by default because it breaks file watching. Note that it also causes all files to be read, so the step will never be skipped.
What is causing a Yarn error?
Yarn doesn’t always perform well under heavy loads. Two common solutions are:
Solution 1: Run `yarn install` less often. Put `yarn install` as high up as possible in your Layerfile so that it is cached. When creating complex workflows that contain Yarn, run `yarn install` in the parent Layerfile.
For example, consider the following graphs:
On the left, `yarn install` runs five times. On the right, `yarn install` runs only once and is then inherited by its children. When appropriate, use the `SPLIT` directive after running `yarn install` to reduce unnecessary repetition.
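As a sketch of the parent-Layerfile approach (the base image, copied paths, split count, and test command are assumptions for illustration):

```
FROM vm/ubuntu:18.04

# Install dependencies once, in the parent.
COPY package.json yarn.lock ./
RUN yarn install

# Fan out into parallel children that inherit the cached install.
SPLIT 5
COPY . .
RUN yarn test
```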
Solution 2: Use npm instead. Yarn and npm have similar speeds when they are cached.