Setting `GITHUB_TOKEN` as shown in the docs results in reusing the existing
`GITHUB_TOKEN`, since `gh` returns that env var if it's already set:
```bash
GITHUB_TOKEN=junk gh config get -h github.com oauth_token
junk
```
Using the original env ensures that the templates are evaluated the same
way regardless of whether envify has previously been invoked.
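As an illustration of the difference, clearing the ambient variable for a
single call makes `gh` fall back to its stored token (this is just a sketch
of the behaviour; the change itself captures the original env rather than
unsetting anything):
```bash
# With GITHUB_TOKEN removed from the environment, gh returns the stored oauth_token
env -u GITHUB_TOKEN gh config get -h github.com oauth_token
```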
We didn't log boot errors when there was only one role, because no barrier
was created in that case and the logging is done by the first host to close
the barrier.
Let's always create the barrier to fix this.
- Add local logout to `kamal registry logout`
- Add `skip_local` and `skip_remote` options to `kamal registry` commands
- Skip local login in `kamal deploy` when `--skip-push` is used
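For illustration, assuming the new options surface as `--skip-local` and
`--skip-remote` flags (the flag spellings are an assumption based on the
option names above):
```bash
kamal registry logout                 # now logs out locally as well as on the hosts
kamal registry logout --skip-remote   # local logout only (assumed flag name)
kamal registry login --skip-local     # remote login only (assumed flag name)
kamal deploy --skip-push              # also skips the local registry login
```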
Load the hosts from the contexts before trying to build.
If there is no context, we'll create one. If there is one but the hosts
don't match, we'll re-create it.
Where we just have a local context, there won't be any hosts, but we still
inspect the builder to check that it exists.
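In Docker CLI terms this is roughly what gets checked and created (the
context/builder names and the SSH endpoint are illustrative, not the exact
commands Kamal runs):
```bash
# Create a context per remote build host if it doesn't exist yet
docker context inspect kamal-app-amd64 >/dev/null 2>&1 \
  || docker context create kamal-app-amd64 --docker "host=ssh://root@builder-host"

# Check the builder exists; re-create it if its hosts don't match the contexts
docker buildx inspect kamal-app >/dev/null 2>&1 \
  || docker buildx create --name kamal-app kamal-app-amd64
```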
Validate the Kamal configuration, giving useful warnings on errors.
Each section of the configuration has its own config class and a YAML
file containing documented example configuration.
You can run `kamal docs` to see the example configuration, and
`kamal docs <section>` to see the example configuration for a specific
section.
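For example (`builder` here stands in for any documented section name):
```bash
kamal docs           # full example configuration
kamal docs builder   # just the builder section
```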
The validation matches the configuration against the example configuration,
checking that there are no unknown keys and that the values are of matching
types.
Where there is more complex validation, e.g. for envs and servers, we have
custom validators that implement those rules.
Additionally, the configuration examples are used to generate the
configuration documentation in the kamal-site repo.
You generate the docs by running:
```bash
bundle exec bin/docs <kamal-site-checkout>
```
When cloning the git repo:
1. Try to clone
2. If there's already a build directory, reset it
3. Check the clone is valid
If anything goes wrong during that process:
1. Delete the clone directory
2. Clone it again
3. Check the clone is valid
Raise any errors after that.
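In shell terms the flow is roughly the following sketch (the clone path,
remote, and validation command are illustrative, not Kamal's actual
implementation):
```bash
clone_dir=".kamal/clones/app"

attempt_clone() {
  # 1. Try to clone; 2. if the build directory already exists, reset it instead
  git clone git@example.com:org/app.git "$clone_dir" 2>/dev/null \
    || git -C "$clone_dir" reset --hard
  # 3. Check the clone is valid
  git -C "$clone_dir" status --porcelain >/dev/null
}

# If anything goes wrong, delete the clone directory and run the same steps
# once more, raising (exiting non-zero) only if the second attempt also fails.
attempt_clone || { rm -rf "$clone_dir"; attempt_clone; }
```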
Using a different SSHKit runner doesn't work well, because the group
runner uses the Parallel runner internally. So instead we'll patch its
behaviour to fail slow.
We'll also get it to return all the errors so we can report on all the
hosts that failed.
If a primary role container is unhealthy, it might take a while for the
health check poller to time out. In the meantime, if we have started the
other roles, they'll be running two containers.
This could be a problem, especially if they run jobs, as that doubles the
worker capacity, which could cause excessive load.
We'll wait for the first primary role container to boot successfully
before starting the containers for the other roles.