To speed up deployments, we'll remove the healthcheck step.
This adds some risk to deployments for non-web roles - if they don't
have a Docker healthcheck configured, then the only check we do is
whether the container is running.
If there is a bad image, we might see the container running before it
exits and deploy it anyway. Previously the healthcheck step would have
avoided this by ensuring a web container could boot and serve traffic first.
To mitigate this, we'll add a deployment barrier. Until one of the
primary role containers passes its healthcheck, we'll keep the barrier
up and avoid stopping the containers on the non-primary roles.
If the primary role container fails its healthcheck, we'll close the
barrier and shut down the new containers on the waiting roles.
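Conceptually, the barrier is a simple latch: non-primary roles wait on it before stopping their old containers, and the primary role opens or closes it once its healthcheck resolves. A minimal Ruby sketch of the idea (illustrative only, not Kamal's actual implementation):

```ruby
# Minimal deploy-barrier sketch: the primary role opens or closes the
# barrier, and every other role blocks until a decision has been made.
# Illustrative only -- names and structure are not Kamal's internals.
class DeployBarrier
  def initialize
    @mutex = Mutex.new
    @condition = ConditionVariable.new
    @state = nil # nil until the primary role's healthcheck resolves
  end

  # Non-primary roles call this before stopping their old containers.
  # Returns true if the primary role's container became healthy.
  def wait
    @mutex.synchronize do
      @condition.wait(@mutex) while @state.nil?
      @state == :open
    end
  end

  # Primary role: healthcheck passed, let the waiting roles continue.
  def open
    decide(:open)
  end

  # Primary role: healthcheck failed, waiting roles should roll back.
  def close
    decide(:closed)
  end

  private

  def decide(state)
    @mutex.synchronize do
      @state = state
      @condition.broadcast
    end
  end
end
```

A waiting role would call `wait` and only stop its old containers if it returns true; otherwise it shuts down its new containers instead.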
We also have a new integration test to check that we correctly handle
a broken image. This highlighted that SSHKit's default runner will
stop at the first error it encounters. We'll now use a custom runner
that waits for all threads to finish, allowing them to clean up.
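The idea behind the custom runner is to let every host's thread run to completion and only surface errors at the end, rather than aborting on the first failure. A rough sketch of that behaviour (not the actual SSHKit subclass):

```ruby
# Rough sketch: run a block against every host in parallel, but join all
# threads and collect their errors before raising, so that failed hosts
# still get a chance to clean up. Not the actual SSHKit runner.
def run_on_all_hosts(hosts, &block)
  threads = hosts.map do |host|
    Thread.new do
      block.call(host)
      nil
    rescue StandardError => error
      error # capture the failure instead of killing the other threads
    end
  end

  errors = threads.map(&:value).compact
  raise errors.first unless errors.empty?
end
```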
Allow hosts to be tagged so we can have host-specific env variables.
We might want host-specific env variables for things like datacenter-specific
tags or testing GC settings on a specific host.
Right now you either need to set up a separate role, or have the app
be host-aware.
Now you can define tag env variables and assign those to hosts.
For example:
```yaml
servers:
  - 1.1.1.1
  - 1.1.1.2: tag1
  - 1.1.1.3: tag2
  - 1.1.1.4: [ tag1, tag2 ]
env_tags:
  tag1:
    ENV1: value1
  tag2:
    ENV2: value2
```
The tag env supports the full env format, allowing you to set secret and
clear values.
If you are deploying more than one destination to a host, their `latest`
image tags would conflict, so we'll append the destination to the tag.
The `latest` tag is used when booting the app or exec-ing a new container.
If a deploy doesn't complete on a host for all roles, then we probably
shouldn't be using that image, so the tagging now happens at the end of
the boot process.
When calling `kamal app exec` for new, non-interactive containers, we run
the command per role on each server and include that role's configuration,
including its environment.
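As a sketch of what that means: for each host we build one command per role, using that role's env file so the exec-ed command sees the same environment as that role's containers (the struct, helper and paths below are made up for illustration, not Kamal's internals):

```ruby
# Illustrative sketch only. Builds one `docker run` command per role so
# the exec-ed command runs with that role's environment.
require "shellwords"

Role = Struct.new(:name, :env_file, keyword_init: true)

def exec_commands(image, command, roles)
  roles.map do |role|
    ["docker", "run", "--rm",
     "--env-file", role.env_file,
     image,
     *command].shelljoin
  end
end

roles = [
  Role.new(name: "web",  env_file: ".kamal/env/roles/app-web.env"),
  Role.new(name: "jobs", env_file: ".kamal/env/roles/app-jobs.env")
]

puts exec_commands("my-app:latest", %w[bin/rails about], roles)
```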
Fixes: https://github.com/basecamp/kamal/issues/492
During deployments both the old and new containers will be active for a
small period of time. There may also be lagging requests for older CSS
and JS after the deployment.
This can lead to 404s if a request for old assets hits a new container
or vice versa.
This PR makes sure that both sets of assets are available throughout the
deployment, starting from before the new version of the app is booted.
This can be configured by setting the asset path:
```yaml
asset_path: "/rails/public/assets"
```
The process is:
1. We extract the assets out of the container with docker run, docker
cp and docker stop. Docker run sets the container command to "sleep",
so "sleep" needs to be available in the container.
2. We create an asset volume directory on the host for the new version
of the app and copy the assets into it.
3. If there is a previous deployment we also copy the new assets into
its asset volume and copy the older assets into the new asset volume.
4. We start the new container mapping the asset volume over the top of
the container's asset path.
This means that both the old and new versions have replaced the asset
path with a volume containing both sets of assets, and should be able
to serve any request during the deployment. The older assets will
continue to be available until the next deployment.
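A rough sketch of those steps, shelling out to docker from Ruby (paths, container names and error handling are simplified for illustration and are not Kamal's actual code):

```ruby
# Illustrative sketch of the asset bridging steps. Paths and container
# names are made up; error handling and configuration are omitted.
require "fileutils"

image      = "my-app:new-version"
asset_path = "/rails/public/assets"               # configured asset_path
new_volume = "/var/lib/my-app/assets/new-version" # host dir for the new assets
old_volume = "/var/lib/my-app/assets/old-version" # previous deploy's dir, if any

# Step 1: extract the assets from the new image. The container command is
# set to "sleep", so sleep must exist in the image.
system("docker", "run", "--rm", "--detach", "--name", "assets-extract", image, "sleep", "1000")

# Step 2: create the asset volume directory for the new version and copy
# the assets into it.
FileUtils.mkdir_p(new_volume)
system("docker", "cp", "assets-extract:#{asset_path}/.", new_volume)
system("docker", "stop", "assets-extract")

# Step 3: if there was a previous deployment, copy assets both ways so
# each volume ends up with the union of old and new assets.
if Dir.exist?(old_volume)
  FileUtils.cp_r(Dir.glob("#{old_volume}/*"), new_volume)
  FileUtils.cp_r(Dir.glob("#{new_volume}/*"), old_volume)
end

# Step 4: boot the new container with the volume mounted over the asset path.
system("docker", "run", "--detach", "--name", "my-app-new",
       "--volume", "#{new_volume}:#{asset_path}", image)
```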
When replacing a container currently we:
1. Boot the new container
2. Wait for it to become healthy
3. Stop the old container
Traefik will send requests to the old container until it notices that it
is unhealthy. But the container may have stopped serving requests before
that point, which can result in errors.
To get around that, the new boot process is:
1. Create a directory on the host containing a single "cord" file
2. Boot the new container, mounting the cord file into /tmp and
including a check for the file in the docker healthcheck
3. Wait for it to become healthy
4. Delete the healthcheck file ("cut the cord") for the old container
5. Wait for it to become unhealthy and give Traefik a couple of seconds
to notice
6. Stop the old container
The extra steps ensure that Traefik stops sending requests before the
old container is shut down.
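Sketching the cord mechanism with shelled-out docker commands (the mount path, health command, port and container names below are assumptions for illustration, not the exact commands Kamal runs):

```ruby
# Illustrative sketch of the cord mechanism. Paths, names, the health
# command and the app port are assumptions, not Kamal's actual values.
require "fileutils"

image    = "my-app:new-version"
cord_dir = "/var/lib/my-app/cords/my-app-new" # step 1: directory holding the cord file
FileUtils.mkdir_p(cord_dir)
FileUtils.touch(File.join(cord_dir, "cord"))

# Step 2: boot the new container with the cord mounted, and a healthcheck
# that requires both the cord file to exist and the app to respond.
system("docker", "run", "--detach", "--name", "my-app-new",
       "--volume", "#{cord_dir}:/tmp/kamal-cord",
       "--health-cmd", "test -f /tmp/kamal-cord/cord && curl -f http://localhost:3000/up",
       "--health-interval", "1s",
       image)

# Step 3: poll until the new container reports healthy (omitted here).

# Step 4: "cut the cord" for the old container so its healthcheck starts
# failing and Traefik takes it out of rotation.
FileUtils.rm_f("/var/lib/my-app/cords/my-app-old/cord")

# Step 5: wait for the old container to report unhealthy, then give
# Traefik a moment to notice.
sleep 2

# Step 6: stop the old container.
system("docker", "stop", "my-app-old")
```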