What are Layerfiles?

webapp.io uses files called Layerfiles to let our users:

  1. Create full-stack environments
  2. Build multi-step CI/CD workflows
  3. Run and parallelize their end-to-end tests

Here's an example.

Layerfiles run top down, and take snapshots as they progress. Consider this configuration:


FROM node:16
COPY / /root
RUN npm install
RUN BACKGROUND npm run start
EXPOSE WEBSITE localhost:3000
How it would build

It would:

  1. Start building from a VM with NodeJS version 16 installed,
  2. Copy the repository files into the machine
  3. Install the dependencies with npm install
  4. Start the webserver in the background with npm run start
  5. Wait for the webserver at port 3000 to be running, then expose it as a website

Key idea: Snapshotting the runner state

In the example above, steps like npm install can be re-used between commits unless the files package.json or package-lock.json change. webapp.io automatically skips steps like npm install for you by watching which files are read. In the example above, it would take a snapshot of the VM after npm install ran, and notice that the step only read the files package.json and package-lock.json.

If you push another commit which doesn't change either of those files, the Layerfile build would load the snapshot taken after the step last ran, and skip it entirely.

Multiple build steps: The Layerfile graph

Layerfiles can be composed, inherited from, and split into complex CI workflows.

Consider these three Layerfiles:

1. Layerfile at (repo root)/Layerfile


FROM vm/ubuntu:18.04
RUN apt-get update && apt-get install postgresql python3

2. Layerfile at (repo root)/web/tests/Layerfile


FROM /Layerfile
COPY /web .
RUN ./

3. Layerfile at (repo root)/web/Layerfile


FROM /Layerfile
COPY /web .
RUN BACKGROUND ./
EXPOSE WEBSITE localhost:8080

When built, these three Layerfiles will automatically combine into a build graph:

Advanced workflow graph example

Here, webapp.io has searched for files named 'Layerfile', discovered all three of these files, and linked them based on their parents (via their FROM lines).

There are many directives which can change the Layerfile graph, see SPLIT, BUTTON, and WAIT below for some examples.

Sharing configuration across repositories: The Layerfile Library

FROM can inherit Layerfile configurations from the internet.

The Layerfile library contains configurations that we've pre-created. It also lets you define your own shared configurations from scratch.

If your organization is called my-org, you could define my-org/base:v1.0.0 to install the base dependencies for your organization, and then the Layerfiles in all of your repositories could use FROM my-org/base:v1.0.0 to re-use build layers and configurations.
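For illustration, such a shared base could itself be an ordinary Layerfile published to the library (the name my-org/base:v1.0.0 and the package list here are hypothetical):

```dockerfile
# Published to the Layerfile library as my-org/base:v1.0.0 (hypothetical)
FROM vm/ubuntu:18.04
# Install the dependencies every repository in the organization needs.
RUN apt-get update && apt-get install -y git curl postgresql
```

Any repository could then begin its Layerfile with FROM my-org/base:v1.0.0 and share these pre-built layers.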

The possible instructions in a Layerfile

The BUILD ENV instruction

The BUILD ENV instruction tells the Layerfile to rebuild when a variable changes.


  • Commonly used with $SUBDOMAIN to ensure each branch has the proper value:


BUILD ENV SUBDOMAIN
RUN echo "HOST=$SUBDOMAIN" >> .env
RUN docker-compose up -d

Possible values



The SUBDOMAIN variable is often used to set the HOST variable for webservers.

It is a cleaned up version of the $GIT_BRANCH variable, acceptable for use in a URL.

  • feat/add-some-dashboard-pages becomes add-some-dashboard-pages

Common use is to set HOST=$SUBDOMAIN
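The slash-stripping part of that cleanup can be sketched in plain shell — a hypothetical approximation; webapp.io's real cleanup may also strip other URL-unsafe characters:

```shell
#!/bin/sh
# Hypothetical approximation of the branch-name cleanup: keep only the part
# after the last '/'. The real rules may apply additional transformations.
branch="feat/add-some-dashboard-pages"
subdomain="${branch##*/}"
echo "$subdomain"  # → add-some-dashboard-pages
```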


The DEPLOYMENT_HOST variable is set if a deployment exists for your run.

It's often used to tell a webserver where it is being hosted.

If there are multiple deployments, a single one is returned.


CI=true, IS_CI_MACHINE=true, CI_MACHINE=true, IN_CI_MACHINE=true, IN_CI=true

These CI variables are always true while running a Layerfile.



The DEBIAN_FRONTEND variable is always set to noninteractive in webapp.io runners. To change this behavior, use, e.g., ENV DEBIAN_FRONTEND=readline



GIT_TAG is the result of running git describe --always in the repository.



GIT_COMMIT is the result of running git rev-parse HEAD in the repository.



GIT_COMMIT_SHORT is the first 12 characters of the result of running git rev-parse HEAD in the repository.
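The 12-character form is simply a prefix of the full SHA; for illustration, with a made-up commit SHA:

```shell
#!/bin/sh
# Example value only; a real run would use the SHA from `git rev-parse HEAD`.
full="4a7d1ed414474e4033ac29ccb8653d9b2ee9c3f0"
# Take the first 12 characters of the full 40-character SHA.
short="$(printf '%s' "$full" | cut -c1-12)"
echo "$short"  # → 4a7d1ed41447
```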


GIT_COMMIT_TITLE="[improvement] do something"

GIT_COMMIT_TITLE is the title (first line of the message) of the commit that triggered this run.



GIT_CLONE_URL is a URL containing a token which can be used to clone this repository, e.g., git clone "$GIT_CLONE_URL"



EXPOSE_WEBSITE_HOST is the hostname exposed by EXPOSE WEBSITE

It's often used to link a frontend with a backend when running both with EXPOSE WEBSITE and RUN BACKGROUND

You can even reference this before EXPOSE WEBSITE is ever used, but the URL is only live after the run passes.

Note: Unavailable for use by BUILD ENV



WEBAPPIO is always true when running a Layerfile



GIT_BRANCH is the branch which is checked out in this repository.



JOB_ID always exists. It's set to the ID of the current running job.



PULL_REQUEST_URL may or may not exist. It's a link to the pull request that triggered this pipeline.



REPOSITORY_NAME is the name of the repository. If the repository is at, e.g., github.com/a/b, this would be "b"



REPOSITORY_OWNER is the name of the owner of this repository. If the repository is at, e.g., github.com/a/b, this would be "a"



ORGANIZATION_NAME is the name of the current organization. If the dashboard is at, e.g., webapp.io/myorg, this would be "myorg"



RUNNER_ID is the id of the current layerfile runner.



RETRY_INDEX is the current retry for the given runner (initially 1, then when retried once, 2, etc)


API_EXTRA=some data passed from API

API_EXTRA is optional data passed in when a run is started by the API.

The BUTTON instruction

BUTTON [message...]

The BUTTON instruction allows you to block the progress of a run until the button is pressed.


  • Commonly used for deployment: BUTTON would you like to deploy? followed by RUN ./ would not deploy unless the button was pressed.

The CACHE instruction

CACHE [cached directories...]

The CACHE instruction makes specific files/directories be shared across runs, almost always as a performance improvement.

See the tuning performance documentation for more details.


  • Use CACHE /var/cache/apt to speed up RUN apt-get update
  • Use CACHE ~/.cache/go-build to speed up RUN go install
  • Use CACHE ~/.npm ~/.next/cache ~/.yarn/cache to speed up npm install and yarn install
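As a sketch, a Node project might combine these caches with the steps they accelerate (the package list here is illustrative):

```dockerfile
FROM vm/ubuntu:18.04
# Persist the apt and npm caches across runs (illustrative combination).
CACHE /var/cache/apt ~/.npm
RUN apt-get update && apt-get install -y nodejs npm
COPY package.json package-lock.json ./
RUN npm install
```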

Each account gets a fixed amount of cache storage, and we periodically delete old or inactive caches.

The CHECKPOINT instruction

CHECKPOINT (name) or CHECKPOINT disabled

The CHECKPOINT instruction allows you to control exactly when webapp.io will take snapshots of the pipeline.

On future runs, if no files or instructions have changed since the snapshot was taken, the runner will restore the snapshot instead of repeating work.

CHECKPOINT is not usually required; it's advised not to use it unless you are using the API or there is a measurable performance benefit to doing so.


  • Use CHECKPOINT disabled to disable checkpointing from that point onwards
  • Use CHECKPOINT deploy to create a checkpoint named "deploy", which can be triggered as a lambda from our api
  • Use CHECKPOINT to explicitly take a checkpoint at a specific point (which happens automatically by default), or to re-enable checkpointing after CHECKPOINT disabled

See the tuning performance documentation for more details.

The CLONE instruction

[!] Note: CLONE is only available if "Use new hypervisor" is enabled in your organization's settings

CLONE [repository URL] (DEFAULT=branch-name) (files...) [destination]

The CLONE instruction moves files from a repository to the runner.

Files can be:

  • relative (sources are resolved relative to the Layerfile's location; destinations relative to the WORKDIR, or /root if no WORKDIR is specified)
  • absolute (sources are resolved from the root of the repository; destinations from the filesystem root)

The CLONE directive will automatically add authentication credentials if your account is connected with the associated repository.

If the current Layerfile's branch is, e.g., feat-1, then CLONE will try to check out the feat-1 branch from [repository URL]. If that branch doesn't exist, it will fall back to the branch specified by DEFAULT=.

This facilitates multi-repository development by allowing changes to be made to libraries concurrently without needing them to be merged to be used.

CLONE watches files in the same manner as COPY; this means that steps will only re-run if files in the cloned repository change.


  • Use CLONE DEFAULT=master /library to clone the current branch (e.g., feat-1) if it exists, falling back to master otherwise, into the /library directory in the runner.
  • Use CLONE /hello-project to copy the entire repository contents to the destination /hello-project in the runner.
  • Use CLONE DEFAULT=main /package.json /package-lock.json ./ to copy package.json and package-lock.json from the specified project, into the working directory in the runner.

The COPY instruction

COPY [files...] [destination]

The COPY instruction moves files from your repository to the runner.

Files can be:

  • relative (sources are resolved relative to the Layerfile's location; destinations relative to the WORKDIR, or /root if no WORKDIR is specified)
  • absolute (sources are resolved from the root of the repository; destinations from the filesystem root)


  • Use COPY . . to copy the directory containing the Layerfile to the current working directory (or /root if WORKDIR has not been used)
  • Use COPY package.json yarn.lock ./ to copy those two files to the current directory.
  • Use COPY / /root to copy the entire repository to /root in the runner.
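Because webapp.io snapshots after each step, copying only the dependency manifests before installing lets the install step be skipped when unrelated files change — a sketch for a Node project, assuming a node:16-style base from the Layerfile library:

```dockerfile
FROM node:16
# Copy only the manifests first: npm install re-runs only when they change.
COPY package.json package-lock.json ./
RUN npm install
# Copying the rest of the repository does not invalidate the snapshot above.
COPY . .
```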

The ENV instruction

ENV [key=value...] or BUILD ENV [key...]

The ENV instruction persistently sets environment variables in this Layerfile


  • ENV PATH=$GOPATH/bin:$PATH adds $GOPATH/bin to the existing path.
  • ENV CI=hello sets the variable $CI to the value hello.

The EXPOSE WEBSITE instruction

EXPOSE WEBSITE [location on runner] (path) (rewrite path)

The EXPOSE WEBSITE instruction creates a persistent link to view a webserver running at a specific port in the Layerfile. It's especially useful for sharing changes with non-technical stakeholders or running manual QA/review.

Additionally, the EXPOSE_WEBSITE_HOST environment variable is available even before EXPOSE WEBSITE if you need to "bake" the path to the exposed website URL.

If the default 2 minute timeout is not sufficient for your application, wait until the server is ready before the EXPOSE WEBSITE directive runs.


  • Use EXPOSE WEBSITE localhost:80 to expose the local webserver at port 80
  • Combine EXPOSE WEBSITE localhost:80 /api with EXPOSE WEBSITE localhost:3000 / to route all requests that start with /api to port 80 in the runner, and all other requests to port 3000.
  • Use EXPOSE WEBSITE localhost:80 /cypress$SPLIT after a SPLIT 5 directive to make each split have a unique path (e.g., /cypress0 through /cypress4)
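The /api routing bullet above might look like this in a full Layerfile (the two start scripts are hypothetical names):

```dockerfile
FROM vm/ubuntu:18.04
COPY / /root
# Hypothetical scripts serving on ports 80 and 3000 respectively.
RUN BACKGROUND ./start-api.sh
RUN BACKGROUND ./start-frontend.sh
# Requests starting with /api go to port 80; everything else to port 3000.
EXPOSE WEBSITE localhost:80 /api
EXPOSE WEBSITE localhost:3000 /
```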

The FROM instruction

FROM [source]

The FROM instruction tells webapp.io what base to use to run tests from.

There can only be one FROM line in a Layerfile, and it must always be the first directive in the Layerfile.

For now, only FROM vm/ubuntu:18.04 is allowed as a top-level base, but inheriting from other Layerfiles is possible.


  • Use FROM vm/ubuntu:18.04 to use ubuntu:18.04 as the base.
  • Use FROM ../base to inherit from the file at ../base/Layerfile relative to the current Layerfile
  • Use FROM /base to inherit from the file at (repo root)/base/Layerfile
  • Use FROM rails:2.7.1 to inherit from the shared Layerfile library configuration rails:2.7.1
  • Use FROM my-org/base:v1.0.0 to inherit from a common image for your organization in the shared Layerfile library

The LABEL instruction

LABEL [key=value...]

The LABEL directive allows users to modify meta aspects of their runs.


LABEL display_name=cool_layerfile_name

Possible values


LABEL display_name=testName

The display_name key allows the user to modify the display name in the runs dashboard

display name


LABEL status=merge or LABEL status=hidden

The status key allows the user to control the behaviour of check notifications within your pull request.

  • The merge status will cause webapp.io to summarize all runs resulting from the SPLIT directive

    status merge

  • The hidden status will cause webapp.io to hide the run status of the Layerfile

The MEMORY instruction

MEMORY [number](K|M|G)

The MEMORY instruction allows you to specify how much memory your environment uses.

This directive must always go at the top of a Layerfile.

If used in conjunction with FROM /base-image, the parent Layerfile must be the one which specifies MEMORY, otherwise children might have to re-run steps from the parent Layerfile.


  • Use MEMORY 2G to ensure at least 2 gigabytes of memory are available.

The RUN instruction


The RUN instruction runs the given script, and fails the entire Layerfile if the given command fails.

For example, you might use RUN echo "the directory is $(pwd)" to print your current directory.


  • RUN echo hello prints "hello" to the terminal
  • RUN BACKGROUND python3 -m http.server runs python3 -m http.server persistently in the background.
  • RUN REPEATABLE docker build -t hello is a performance optimization, see tuning performance

The SECRET ENV instruction

SECRET ENV [secret name...]

The SECRET ENV instruction adds values from secrets to the runner's environment.

Secrets are useful for storing sensitive information. They can hold passwords, API keys, or other private credentials. For security reasons, it is good practice to not keep this information within source code. Managing private data using secrets allows easy authentication with other services on your behalf. webapp.io has a secrets manager built into the platform. This makes entering and editing secrets as simple as 1, 2, 3:

Step 1: Navigate to the secrets tab in your account. View of secrets page in webapp.io

Step 2: Click ‘NEW’ in the top right corner. Follow the prompts to choose a secret name, value, and destination repository. View of dialogue box prompting secret creation in webapp.io

Step 3: All done! View of created secret in webapp.io


  • Use SECRET ENV ENV_FILE to expose your .env file as the variable $ENV_FILE, then use RUN echo "$ENV_FILE" | base64 -d > ~/.env to decode the uploaded env file to a specific location.
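The decode step works because base64 encoding and decoding are symmetric; you can verify the round trip locally before uploading the secret (a plain-shell sketch with a sample file):

```shell
#!/bin/sh
# Create a sample .env, encode it as you would before pasting it into the
# secrets manager, then decode it the way the RUN step in the Layerfile does.
printf 'LOG_LEVEL=debug\n' > .env
encoded="$(base64 < .env)"
printf '%s\n' "$encoded" | base64 -d > decoded.env
cat decoded.env  # → LOG_LEVEL=debug
```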

Who can create secrets?

Only owners of an organization's account can create and edit secrets. Permissions can be edited in the members tab, which can be found in the settings dropdown menu. The members tab displays all users in an organization.

View of webapp.io, highlighting the members tab within the settings menu

Click on the name of a user to display their permissions. Only users with owner-level access can create secrets. An organization’s owner(s) can edit permissions for other users.

View of how permissions are visible below a member's name in webapp.io's members tab

The SETUP FILE instruction

SETUP FILE [file ...]

The SETUP FILE instruction causes the contents of the given file to be sourced before every RUN command. This is equivalent to copy/pasting the contents of the file into the terminal before every RUN command.

A common use case is to set a lot of environment variables using an ".env" file, or specifying a custom ".bashrc" file.



# contents of the file passed to SETUP FILE
echo 'This will print before every RUN command'
# set the LOG_LEVEL environment variable to 'debug'
export LOG_LEVEL=debug
# load an .env file
source /root/.env


Use SETUP FILE to source a file's contents before every RUN command.

The SKIP REMAINING IF instruction


The SKIP REMAINING IF instruction will cause remaining instructions in the Layerfile to be skipped if the condition is evaluated to true.

Multiple SKIP REMAINING IF instructions may be declared in one Layerfile.

Conditions may use any variable from BUILD ENV.

Conditions may use AND to group statements using logical AND.

Conditions may use != to test that two values are not equal.


  • Use SKIP REMAINING IF GIT_BRANCH!=master to skip execution on any branch that is not master.
  • Use SKIP REMAINING IF GIT_BRANCH!=master AND REPOSITORY_NAME !=~ "web" to skip remaining actions if the branch is not master and the repository name does not match "web".
  • Use SKIP REMAINING IF GIT_COMMIT_TITLE =~ "\[skip tests\]" to skip remaining actions if the commit title contains "[skip tests]".
  • Use SKIP REMAINING IF GIT_BRANCH!=~^(master|dev)$ to skip remaining actions if the branch is anything besides master or dev.
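The =~ and !=~ conditions behave like extended-regex matches, so you can sanity-check a pattern locally with grep -E before putting it in a Layerfile (a sketch, not webapp.io's actual matcher):

```shell
#!/bin/sh
# Check which branch names the pattern from the last bullet would match.
pattern='^(master|dev)$'
for branch in master dev feature/login; do
  if printf '%s' "$branch" | grep -Eq "$pattern"; then
    echo "$branch: match"
  else
    echo "$branch: no match"
  fi
done
# → master: match / dev: match / feature/login: no match
```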

The SPLIT instruction


The SPLIT instruction causes the runner to duplicate its entire state a number of times at a specific point. Each copy of the runner will have SPLIT and SPLIT_NUM environment variables automatically set. The former will be the index of the runner, and the latter will be the number of copies.


  • Use SPLIT 3 to create three copies of the runner, which will have ENV SPLIT=0 SPLIT_NUM=3, ENV SPLIT=1 SPLIT_NUM=3, and so on.
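A common use of the two variables is to deterministically partition test files across the copies, so each file runs on exactly one runner — a plain-shell sketch assuming the 0-based SPLIT index described above:

```shell
#!/bin/sh
# Pretend this runner is copy 1 of 3; webapp.io would set these for you.
SPLIT=1
SPLIT_NUM=3
# Each copy takes every SPLIT_NUM-th file, offset by its own index.
printf '%s\n' test_a test_b test_c test_d test_e test_f \
  | awk -v n="$SPLIT_NUM" -v i="$SPLIT" 'NR % n == i'
# → test_a and test_d
```

With this scheme the copy with SPLIT=0 takes lines where NR % n == 0, so the three copies together cover the full list with no overlap.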

The USER instruction

USER [username]

The USER instruction allows you to run as a non-root user.

The user is added to the root group to circumvent permission denied errors.


  • Use USER www to run the remaining commands as the www user.

The WAIT instruction

WAIT [layerfile paths...]

The WAIT instruction allows you to make one step require other steps to succeed before running.

It's especially useful for conditional actions like executing notifications, deployment, and CI/CD.


Continuous deployment with WAIT


# at deploy/Layerfile
FROM vm/ubuntu:18.04
# Wait for the layerfiles at /unit-tests/Layerfile and /acceptance-tests/Layerfile
WAIT /unit-tests /acceptance-tests
RUN ./
RUN ./

Conditional deployment with WAIT and BUTTON


# at deploy/Layerfile
FROM vm/ubuntu:18.04
# Wait for the layerfiles at /unit-tests/Layerfile and /acceptance-tests/Layerfile
WAIT /unit-tests /acceptance-tests
RUN ./
BUTTON deploy?
RUN ./

What the job view will look like with WAIT

Advanced workflow graph example

The WORKDIR instruction

WORKDIR [directory]

The WORKDIR instruction changes the location from which files are resolved in the runner.


  • Use WORKDIR /tmp to run commands in the /tmp directory within the runner.
  • Use WORKDIR hello to run commands in the (workdir)/hello directory within the runner.

The AWS instruction

The AWS instruction provides multiple functionalities to ease connecting your runs with AWS.

Example usages:

Making sure you are authenticated as the correct AWS user:


#Use an Ubuntu 18.04 base image
FROM vm/ubuntu:18.04
# install AWS CLI
RUN apt-get update && apt-get install unzip
RUN curl "" -o ""
RUN unzip
RUN ./aws/install
# set up AWS
AWS link --region="us-east-1"
# Attach desired permissions to the user
RUN aws iam attach-user-policy --policy-arn arn:aws:iam:ACCOUNT-ID:aws:policy/AdministratorAccess --user-name Alice
# Check caller identity
RUN aws sts get-caller-identity

Running a new EC2 instance and an example task with ECS:


#Use an Ubuntu 18.04 base image
FROM vm/ubuntu:18.04
RUN curl "" -o ""
RUN apt-get update && apt install unzip
RUN unzip
RUN ./aws/install
#The following line specifies an EC2 user data script that launches your container instance into a non-default 'calcom' cluster
#We save the script into my_script.txt for later use
RUN echo -e '#!/bin/bash\necho "ECS_CLUSTER=calcom" >> /etc/ecs/ecs.config' >> my_script.txt
AWS link --region='us-east-1'
#Run an ECS optimized AMI and save the instance identifier into instance_id.txt for later use
RUN aws ec2 run-instances --image-id ami-040d909ea4e56f8f3 --instance-type t2.micro --iam-instance-profile Name="ecsInstanceRole" \
    --user-data file://my_script.txt --output text --query "Instances[*].InstanceId" > instance_id.txt
#Wait for the instance to come up...
RUN aws ec2 wait instance-status-ok --instance-ids $(cat instance_id.txt)
#Start the pod with your own task configuration...
#Note that it is required to set output format to text otherwise it might cause the instruction to hang
RUN aws ecs run-task --cluster calcom --task-definition ECSCalComDemoTask:1 --output text
#Done! You can now access the scheduler through the public IPv4 address of the EC2 instance
#If you cannot connect, make sure you set the correct security group/inbound rules


Step 1: Navigate to the Organization tab under Settings in your account. View of organization settings page in webapp.io

Note: If you can't see this option, contact us and we will enable this feature for your organization.

Step 2: Click ‘Integrate with AWS’ at the bottom. Download the CloudFormation template and then click Launch Stack on AWS. View of AWS integration page in webapp.io

Step 3: Upload the template and create a stack with it. View of AWS CloudFormation console

Step 4: After the stack is created, go to output tab and copy the value of key WebAppIORoleARN. View of AWS CloudFormation console

Step 5: Paste the AWS role ARN back into webapp.io and click Save. All done! View of AWS CloudFormation console

Available commands and syntax

AWS link --region=aws_region_name

This command sets up the VM's environment variables for the AWS user we created for your organization and the default region you specified.

You can attach relevant permissions to this user through AWS console/CLI.

After attaching required permissions, you can use AWS CLI in a layerfile like you normally do.

Note: link is currently the only supported command for the AWS instruction. More to come!