Intro

Layerfiles can be composed into arbitrarily powerful workflows.

Consider these three Layerfiles:

1. Layerfile at (repo root)/Layerfile

FROM vm/ubuntu:18.04
RUN apt-get update && apt-get install postgresql python3

2. Layerfile at (repo root)/web/Layerfile

FROM /Layerfile
COPY . .
RUN ./unittest.sh

3. Layerfile at (repo root)/web/tests/Layerfile

FROM /Layerfile
COPY .. .
RUN BACKGROUND ./start-webserver.sh
RUN ./e2etests.sh
EXPOSE WEBSITE localhost:8080

When committed to a repository, they will create the following execution graph, where each node is created by a Layerfile:

Advanced workflow graph example

Here, webapp.io has searched for files named ‘Layerfile’, discovered all three of these files, and linked them based on their parents (see the FROM lines).

Ramifications of inheritance using ‘FROM’

Beyond just ensuring that actions occur sequentially, FROM also shares files and processes between parents and children.

Layerfile #1 installs python3 and installs (and starts) a postgres instance, which means that Layerfiles #2 and #3 each get a distinct copy of the database.
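As a sketch, a child Layerfile could use the parent's already-running postgres instance directly; the database name and test script below are hypothetical:

FROM /Layerfile
COPY . .
# the parent's postgres process is copied along with the filesystem,
# so it is already running here - no service start needed
RUN sudo -u postgres createdb myapp
RUN ./run-db-tests.sh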

Layerfiles can set up a migrated database in 5 seconds

It’s common to use webapp.io to run QA processes against a full stack including open-source databases.

Consider the following Layerfile:

FROM vm/ubuntu:18.04

# install the latest version of Docker, as in the official Docker installation tutorial.
RUN apt-get update && \
    apt-get install apt-transport-https ca-certificates curl software-properties-common && \
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
    add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable" && \
    apt-get update && \
    apt install docker-ce python3 python3-pip awscli

# install docker compose (easily starts required docker containers)
RUN curl -L "https://github.com/docker/compose/releases/download/1.26.2/docker-compose-$(uname -s)-$(uname -m)" \
    -o /usr/local/bin/docker-compose && \
    chmod +x /usr/local/bin/docker-compose

# copy files from the repository into this staging server
COPY . .

# start everything - RUN REPEATABLE is a performance improvement that restores the cache from the last time this step ran.
RUN REPEATABLE docker-compose up -d --build --force-recreate --remove-orphans db redis
# run migrations
RUN docker-compose run web python3 manage.py migrate
# download anonymized prod data dump
SECRET ENV AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_DEFAULT_REGION
RUN aws s3 cp s3://staging_db_dumps/staging.sql /tmp/staging.sql
RUN cat /tmp/staging.sql | docker-compose exec -T db psql

There’s a lot to unpack here, but there are a few important takeaways from this example:

  1. You can install docker & docker-compose and efficiently create containers within a Layerfile.
  2. RUN REPEATABLE lets you reuse built images & volumes from the last time this pipeline ran.
  3. webapp.io will create a snapshot with everything created, so that you can avoid re-building, re-creating, and re-migrating database data every time.

As before, other Layerfiles can extend from this one to run e2e tests or create a full-stack demo environment with EXPOSE WEBSITE.
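As a sketch, such a child Layerfile might look like this; the service name, test script, and port are assumptions:

FROM /Layerfile
# db and redis are already up and migrated thanks to the parent snapshot
RUN REPEATABLE docker-compose up -d --build web
RUN ./e2etests.sh
EXPOSE WEBSITE localhost:8000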

Logging in to Docker

webapp.io creates entire VMs as easily as Dockerfiles, so it’s common for our users to use Docker or docker-compose within webapp.io.

Docker Hub rate-limits requests made by unauthenticated users, so it’s imperative to create a Docker Hub account and log in to it to avoid failing tests.

The simplest way is to combine SECRET ENV with RUN:

  1. Add a new secret with key “DOCKER_LOGIN” in the secrets pane (the lock icon to the left)
  2. Make the value for that secret your Docker Hub password (it will be piped to docker login via --password-stdin)
  3. Press the “Save” button to save the new secret.

Change your Layerfile and add the following lines after installing Docker:

SECRET ENV DOCKER_LOGIN
RUN echo "$DOCKER_LOGIN" | docker login --username (INSERT USERNAME) --password-stdin

Full example of Layerfile that installs & runs a docker container, then creates a persistent staging link from it:

FROM vm/ubuntu:18.04

# To note: Layerfiles create entire VMs, *not* containers!

# install the latest version of Docker, as in the official Docker installation tutorial.
RUN apt-get update && \
    apt-get install apt-transport-https ca-certificates curl software-properties-common && \
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
    add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable" && \
    apt-get update && \
    apt install docker-ce

SECRET ENV DOCKER_LOGIN
RUN echo "$DOCKER_LOGIN" | docker login --username (INSERT USERNAME) --password-stdin

# copy files from the repository into this staging server
COPY . .

RUN docker build -t image .
RUN docker run -d -p 80:80 image
EXPOSE WEBSITE http://localhost:80

Deployments

EXPOSE WEBSITE allows you to whitelabel staging servers on your own domain.

For example, example.com could route $branch.demo.example.com to the latest commit on the branch $branch by adding a single DNS record: CNAME *.demo demotarget.webapp.io
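As a sketch, the equivalent entry in a BIND-style zone file for example.com would look like this; the TTL value is an assumption:

; route *.demo.example.com to webapp.io's deployment target
*.demo    3600    IN    CNAME    demotarget.webapp.io.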

How to set up deployments:

Using webapp.io/dashboard, navigate to your organization’s settings.

View of organization page

Add the specific domain that you want everything to be exposed under. In the example below, we are adding demo.example.com. A CNAME record will be provided.

View of how to add domain

Add the CNAME record in your DNS hosting provider (e.g., Cloudflare, GoDaddy). Creating a new record can usually be done within the DNS settings. Once this is done, the status DNS IS SET UP will appear next to the new domain.

View of organization page after adding example domain

Next, navigate to the deployments tab. On the top right, click ‘NEW’ to create a new deployment rule. Fill in the appropriate fields.

View of adding deployment rule within deployments tab

The deployment is now listed under ‘RULES’. In the deployments tab, you can see whether a deployment is on, paused, or deleted. When a deployment is deleted, it can be restored either by rerunning the layerfile or by following the RE-RUN LAYERFILE prompt on the error message page shown below.

Error message when snapshot cannot be loaded

Use-cases for deployments

By default, EXPOSE WEBSITE creates staging servers at https://(uuid).cidemo.co, where the uuid is unique for every Layerfile. The deployments page lets you customize this by adding a rule to its table and adding a CNAME record on a domain you control.

Subdomains within deployments

Subdomains are preconfigured in webapp.io. Your webserver always sees the host as localhost. If you don’t want that to be the case, please contact support.

For example, say you have a deployment at deployment.demo.webapp.io. If you then go to hello.deployment.demo.webapp.io, it will go to the same deployment. Similarly, if you navigate to greetings.deployment.demo.webapp.io, it will also direct to deployment.demo.webapp.io. This happens by default.

Two-Layerfile polyrepo example

(backend repo)/layerfiles/backend/Layerfile
# backend
FROM vm/ubuntu:18.04

# install the latest version of Docker, as in the official Docker installation tutorial.
RUN apt-get update && \
    apt-get install apt-transport-https ca-certificates curl software-properties-common && \
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
    add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable" && \
    apt-get update && \
    apt install docker-ce

COPY / .
RUN REPEATABLE docker build -t backend . && docker run -d -p 80:80 backend

EXPOSE WEBSITE localhost:80 /api

(backend repo)/layerfiles/frontend/Layerfile
# frontend
FROM vm/ubuntu:18.04

# install the latest version of Docker, as in the official Docker installation tutorial.
RUN apt-get update && \
    apt-get install apt-transport-https ca-certificates curl software-properties-common && \
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
    add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable" && \
    apt-get update && \
    apt install docker-ce

RUN curl -Lo /usr/local/bin/fast-git-download https://gist.githubusercontent.com/ColinChartier/6bff7cf77adf7d2a8d7d699a5deed707/raw/0b89b3037548ce7e4fb24bea96628014da1bbf05/download && \
    chmod 755 /usr/local/bin/fast-git-download

# download the latest version of the frontend's "master" branch and build and start it.
RUN REPEATABLE fast-git-download frontend-repo-name /frontend origin/master && \
    cd /frontend && \
    docker build -t frontend . && docker run -d -p 80:80 frontend

EXPOSE WEBSITE localhost:80

Deployments
  1. Create a single deployment rule from $branch.demo.yourdomain.com to the backend repository, and leave the branch field empty

  2. Create a CNAME record from *.demo to demotarget.webapp.io

  3. Push the layerfiles above to a branch, say, “main”

  4. Visit main.demo.yourdomain.com - notice that requests to main.demo.yourdomain.com/api/hello go to the backend layerfile, while requests to main.demo.yourdomain.com go to the frontend layerfile (within the backend domain)

OAuth (logging in with external sites)

OAuth is what lets you log in to a service with an existing Google or Facebook account.

webapp.io customers often need to set a redirect target for their “test app” on an external service so that users can log in within staging environments.

For this use case, we’ve created the layer-oauth-target.cidemo.co endpoint, and the flow looks like this:

  1. User visits abcd.cidemo.co
  2. User clicks “log in with Google”
  3. User is redirected to a Google login page for a test application
  4. The “redirect URI” for that login page is “layer-oauth-target.cidemo.co”, so the user is sent to layer-oauth-target.cidemo.co/oauth/login?code=hello
  5. layer-oauth-target.cidemo.co reads a cookie to see which cidemo site the user was last on, so the user is redirected back to abcd.cidemo.co/oauth/login?code=hello
  6. The application can now read the code and log the user in as usual.

Combining with white-labeled sites (routing)

The same can be done for layer-oauth-target.demo.example.com, in the case where a route with $branch.demo.example.com exists.

Using Yarn

Yarn is a popular JavaScript package manager that is often used as an alternative to npm.

To install Yarn, your Layerfile will include something like this:

FROM vm/ubuntu:18.04
# To note: Layerfiles create entire VMs, *not* containers!
RUN curl -fSsL https://deb.nodesource.com/setup_12.x | bash && \
    curl -fSsL https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - && \
    echo "deb https://dl.yarnpkg.com/debian/ stable main" > /etc/apt/sources.list.d/yarn.list && \
    apt-get update && \
    apt-get install nodejs yarn
COPY . .
RUN yarn install --frozen-lockfile
RUN BACKGROUND yarn start
EXPOSE WEBSITE http://localhost:3000

Information on optimizing and troubleshooting Yarn in webapp.io can be found here.

