Add commands for managing proxy boot config. Since the proxy can be
shared by multiple applications, the configuration doesn't belong in
`config/deploy.yml`.
Instead, you can set the config with:
```
Usage:
kamal proxy boot_config <set|get|clear>
Options:
[--publish], [--no-publish], [--skip-publish] # Publish the proxy ports on the host
# Default: true
[--http-port=N] # HTTP port to publish on the host
# Default: 80
[--https-port=N] # HTTPS port to publish on the host
# Default: 443
[--docker-options=option=value option2=value2] # Docker options to pass to the proxy container
```
By default we boot the proxy with `--publish 80:80 --publish 443:443`.
You can stop it from publishing ports, specify different ports, or pass
other docker options.
The config is stored in `.kamal/proxy/options` as arguments to be passed
verbatim to `docker run`.
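For example, a hypothetical session (the option values here are
illustrative):
```
kamal proxy boot_config set --https-port 8443 --docker-options memory=512m
kamal proxy boot_config get     # show the stored options
kamal proxy boot_config clear   # remove them again
```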
If you want to set the options for your application, you can call
`kamal proxy boot_config set` in a pre-deploy hook, as sketched below.
There's an example in the integration tests showing how to use this to
front kamal-proxy with Traefik, using an accessory.
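A minimal hook might look like this (assuming the standard
`.kamal/hooks/pre-deploy` location; the network name is illustrative):
```
#!/bin/sh
# .kamal/hooks/pre-deploy
# Stop publishing ports directly; join the fronting proxy's network instead.
kamal proxy boot_config set --no-publish --docker-options network=traefik
```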
Secrets should be interpolated at runtime, so we do want the file in
git. But add a warning at the top of the file to avoid adding secrets,
or to git-ignore the file if you do.
Also provide examples of the three options for interpolating secrets.
The integration tests use their own registry to avoid hitting Docker
Hub rate limits.
This was using a self-signed certificate; instead, use
`--insecure-registry` to let the docker daemon use HTTP.
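For reference, this is roughly what the daemon needs (the config edit is
a sketch; `registry:4443` matches the test registry described further
down):
```
# illustrative: mark the test registry as insecure so the daemon falls back to HTTP
cat > /etc/docker/daemon.json <<'EOF'
{ "insecure-registries": ["registry:4443"] }
EOF
# restart the docker daemon to pick up the change
```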
Add an app with roles to the integration tests. We'll deploy two web
containers and one worker. The worker just sleeps, so we are testing
that the container has booted.
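A rough sketch of the role config (the host names and sleep command are
illustrative, not the test's exact config):
```
# illustrative config/deploy.yml fragment for the roles app
cat >> config/deploy.yml <<'YAML'
servers:
  web:
    hosts: ["vm1", "vm2"]
  workers:
    hosts: ["vm1"]
    cmd: sleep infinity
YAML
```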
When replacing a container currently we:
1. Boot the new container
2. Wait for it to become healthy
3. Stop the old container
Traefik will send requests to the old container until it notices that it
is unhealthy. But the container may have stopped serving requests before
that point, which can result in errors.
To get around that, the new boot process is:
1. Create a directory with a single file on the host
2. Boot the new container, mounting the cord file into /tmp and
including a check for the file in the docker healthcheck
3. Wait for it to become healthy
4. Delete the healthcheck file ("cut the cord") for the old container
5. Wait for it to become unhealthy and give Traefik a couple of seconds
to notice
6. Stop the old container
The extra steps ensure that Traefik stops sending requests before the
old container is shut down.
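A rough shell sketch of the idea (names, paths, and the health command
are illustrative rather than Kamal's exact implementation):
```
# 1. create the cord file on the host
mkdir -p /tmp/cords/app-new && touch /tmp/cords/app-new/cord

# 2. boot the new container with the cord mounted and included in the healthcheck
docker run --detach --name app-new \
  --volume /tmp/cords/app-new:/tmp/kamal-cord \
  --health-cmd 'curl -f http://localhost/up && test -f /tmp/kamal-cord/cord' \
  app:new

# 3. wait for app-new to report healthy, then...
# 4. cut the old container's cord
rm /tmp/cords/app-old/cord

# 5. wait for app-old to turn unhealthy and give Traefik a moment, then...
# 6. stop the old container
docker stop app-old
```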
Setting env variables in the docker arguments requires having their
values on the deploy host.
Instead we'll add two new commands `kamal env push` and
`kamal env delete` which will manage copying the environment as .env
files to the remote host.
Docker will pick up the file with `--env-file <path-to-file>`. Env files
will be stored under `<kamal run directory>/env`.
Running `kamal env push` will create env files for each role and
accessory, and for Traefik if required.
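Illustrative usage (the file layout under the run directory is an
assumption):
```
kamal env push     # write the .env files to the hosts
kamal env delete   # remove them again

# docker then loads the file at container boot, e.g.:
docker run --env-file <kamal run directory>/env/roles/app-web.env ...
```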
`kamal envify` has been updated to also push the env files.
By skipping `kamal envify` and creating the local and remote secrets
manually, you can now avoid accessing the secrets needed for the docker
runtime environment locally. You will still need build secrets.
One thing to note: Docker doesn't parse the environment variables in the
env file. One result of this is that you can't specify multi-line values
(see https://github.com/moby/moby/issues/12997).
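A quick illustration of the verbatim handling (the alpine image is just
for the demo):
```
printf 'MSG="hi there"\n' > demo.env
docker run --rm --env-file demo.env alpine printenv MSG
# prints "hi there" with the quotes intact; a value ends at the newline,
# so there is no way to express a multi-line value
```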
We may need to look at docker config or docker secrets longer term to
get around this.
Hat tip to @kevinmcconnell: this was all his idea.
If there are uncommitted changes in the app repository when building,
append `_uncommitted_<random>` to the version to distinguish the image
from one built from a clean checkout.
Also change the version suffix used when renaming a container on
redeploy, so the two cases are distinguishable, and explain the version
suffixes.
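A hypothetical sketch of how the version could be derived (the length of
the random part is illustrative):
```
version=$(git rev-parse HEAD)
if [ -n "$(git status --porcelain)" ]; then
  # dirty tree: mark the image so it can't be mistaken for a clean build
  version="${version}_uncommitted_$(openssl rand -hex 8)"
fi
```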
In the image prune command, `--all` overrides `dangling=true`. This was
removing the git SHA tag for the latest image, which prevented us from
rolling back to it.
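For context, the difference in docker's behaviour:
```
docker image prune --all --force   # removes ALL unused images, tagged or not
docker image prune --force         # removes only dangling (untagged) images
```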
I've updated the integration test to now test deploy, redeploy and
rollback.
Rather than waiting 5 seconds and hoping for the best after we boot
docker compose, add docker healthchecks and wait for all the containers
to be healthy.
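A hedged sketch of the wait loop (the container name is illustrative):
```
# poll the healthcheck status instead of sleeping for a fixed time
until [ "$(docker inspect --format '{{.State.Health.Status}}' deployer)" = "healthy" ]; do
  sleep 1
done
```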
Adds a simple integration test to ensure that `mrsk deploy` works.
Everything required is spun up with docker compose:
- shared: a container that contains an ssh key and a self signed cert to
be shared between the images
- deployer: the image we will deploy from
- registry: a docker registry
- two vm images to deploy into
- load_balancer: an nginx load balancer to use between our images
The other images run in privileged mode so that we can run
docker-in-docker. We need to run docker inside the images; mapping in
the docker socket doesn't work because both VMs would share the host
daemon.
The docker registry requires a self-signed cert, as you cannot use basic
auth over HTTP except on localhost. It runs on port 4443 rather than 443
because docker refuses to accept that "registry" is a docker host and
tries to push images to docker.io/registry; "registry:4443" works fine.
The shared container contains the ssh keys for the deployer and vms, and
the self-signed cert for the registry. When the shared container boots,
it copies them into a shared volume.
The deployer and vm images are built with soft links from the shared
volume to the required locations. Their boot scripts wait for the files
to be copied in before continuing.
The root mrsk folder is mapped into the deployer container. On boot it
builds the gem and installs it.
Right now there's just a single test. We confirm that the load balancer
is returning a 502, run `mrsk deploy` and then confirm it returns 200.
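In outline, the assertion looks something like this (the load balancer
address is an assumption):
```
# the load balancer has no healthy upstream yet
test "$(curl -s -o /dev/null -w '%{http_code}' http://localhost)" = "502"
mrsk deploy
# after the deploy it should serve the app
test "$(curl -s -o /dev/null -w '%{http_code}' http://localhost)" = "200"
```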