|
|
# Deploying
|
|
|
It is recommended to read this in full before doing anything!
|
|
|
Every step in [docker install](docker-install) should be completed first.
|
|
|
|
|
|
First, create a production.yml file corresponding to your needs; see [the docker-compose docs](https://docs.docker.com/compose/). This file is used in conjunction with the default docker-compose.yml file. An example is provided that you can copy as a starting point for your own:
|
|
|
|
|
|
```shell
$ cp production.yml{_example,}
```
|
|
|
|
|
|
Be aware that Docker stores everything under /var/lib/docker rather than the current path, so if your system partition is small it is advisable to override either the volumes configuration or the whole Docker data directory.
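
One way to relocate the whole Docker data directory is the `data-root` key in `/etc/docker/daemon.json`. A minimal sketch, assuming `/data/docker` sits on a larger partition (the path is a placeholder, not a recommendation):

```shell
# Sketch: point Docker's data directory at a larger partition.
# /data/docker is a placeholder path.
sudo systemctl stop docker
sudo mkdir -p /data/docker
echo '{ "data-root": "/data/docker" }' | sudo tee /etc/docker/daemon.json
sudo systemctl start docker
docker info --format '{{.DockerRootDir}}'
```

If the old directory already holds data you want to keep, copy it over before restarting the daemon.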
|
|
|
|
|
|
Change the admin password and the secret key in the variables.env file **before building**!
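
The secret key should be long and random; one way to generate a suitable value, assuming python3 is available on the host, is:

```shell
# Generate a random secret suitable for pasting into variables.env.
python3 -c "import secrets; print(secrets.token_urlsafe(50))"
```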
|
|
|
The variables.env file is passed to the containers, but some settings must be passed directly to the docker-compose build through environment variables on the host.
|
|
|
Configure exim (message transfer agent) with:
|
|
|
|
|
|
```shell
$ export MAIL_PRIMARY_HOST=domainname.com
```
|
|
|
|
|
|
Set up the task queues so that your tasks aren't killed because of OOM, by setting the concurrency on both the main and low-priority workers. Each value should roughly match your mem_limit (cf. production.yml) divided by the maximum memory taken by the tasks that worker consumes. This assumes you are aiming for decent performance on a single machine; if you are running a swarm and each worker has its own machine, these limits don't make sense.
|
|
|
|
|
|
```shell
$ export CELERY_MAIN_CONC=10
$ export CELERY_LOW_CONC=4
```
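
The sizing rule can be checked with quick shell arithmetic; the numbers here are assumptions (a 10 GB mem_limit and tasks peaking around 1 GB each for the main worker, 8 GB and 2 GB for the low-priority one), not recommendations:

```shell
# Concurrency = mem_limit / peak memory per task (example numbers).
MAIN_MEM_LIMIT_GB=10
MAIN_TASK_PEAK_GB=1
LOW_MEM_LIMIT_GB=8
LOW_TASK_PEAK_GB=2
echo "CELERY_MAIN_CONC=$(( MAIN_MEM_LIMIT_GB / MAIN_TASK_PEAK_GB ))"
echo "CELERY_LOW_CONC=$(( LOW_MEM_LIMIT_GB / LOW_TASK_PEAK_GB ))"
```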
|
|
|
|
|
|
Alternatively, you can create a file called `.env` and write those variables in it; docker-compose will pick them up automatically.
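
A minimal `.env` sketch using the variables from this page (the values are the same examples as above, not recommendations):

```shell
# .env — read automatically by docker-compose from the project directory.
MAIL_PRIMARY_HOST=domainname.com
CELERY_MAIN_CONC=10
CELERY_LOW_CONC=4
```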
|
|
|
|
|
|
### Update the code:
|
|
|
|
|
|
```shell
$ git pull origin master
```
|
|
|
|
|
|
### Build:
|
|
|
|
|
|
```shell
$ docker-compose -f docker-compose.yml -f production.yml build --build-arg VERSION_DATE="$(git log -1 --format=%ad)"
```
|
|
|
|
|
|
### Run:
|
|
|
|
|
|
```shell
$ docker-compose -f docker-compose.yml -f production.yml up -d
```
|
|
|
|
|
|
### Setting the domain name:
|
|
|
|
|
|
When sending emails we don't have access to the request, which means we don't know which domain to generate URLs for. You can set it up at /admin/sites/site/.
|
|
|
It may become a build option at some point.
|
|
|
|
|
|
### Backing up the database:
|
|
|
|
|
|
For example, to run it every day at 3am, you could add this to your crontab:
|
|
|
|
|
|
```shell
0 3 * * * docker exec -u postgres escriptorium_db_1 pg_dump -Fc escriptorium > /path/to/backups/db-$(date +"\%Y\%m\%d-\%H\%M").dump
```
|
|
|
|
|
|
If you changed $POSTGRES_USER or $POSTGRES_DB, make sure to change them accordingly here, since cron doesn't have access to those environment variables.
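
For completeness, a dump produced this way can be loaded back with pg_restore. This is a sketch assuming the default container, user, and database names from above; the dump filename is a placeholder:

```shell
# Restore a custom-format dump into the running database container.
# db-20200101-0300.dump is a placeholder filename.
docker exec -i -u postgres escriptorium_db_1 pg_restore --clean -d escriptorium < /path/to/backups/db-20200101-0300.dump
```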
|
|
|
|
|
|
And send the backups to a remote server a while later:
|
|
|
|
|
|
```shell
0 4 * * * rsync -r /path/to/backups/ distantuser@distantserver:/distant/path/to/backups
```
|
|
|
|
|
|
|
|
|
### Using a GPU
|
|
|
|
|
|
First make sure you are running [a supported distribution](https://nvidia.github.io/nvidia-docker/).
|
|
|
Then install the latest nvidia drivers.
|
|
|
For Debian 10 the easiest way is to use the [buster backports](http://ftp.debian.org/debian/dists/buster/).
|
|
|
```shell
$ echo "deb http://deb.debian.org/debian buster-backports main" | sudo tee /etc/apt/sources.list.d/backports.list
$ sudo apt-get update
$ sudo apt-get install -t buster-backports nvidia-driver
```
|
|
|
|
|
|
*insert additional steps here*
|
|
|
|
|
|
Since docker-compose [doesn't have a way to use Docker's --gpus argument](https://github.com/docker/compose/issues/6691), we have to use the deprecated [nvidia-docker2](https://github.com/NVIDIA/nvidia-docker/wiki/Installation-(version-2.0)) along with the old docker-compose file format 2.4.
|
|
|
|
|
|
You can follow the installation process explained [here](https://github.com/NVIDIA/nvidia-docker/tree/master), but instead of `nvidia-container-toolkit` install `nvidia-docker2`.
|
|
|
|
|
|
Beware that this may overwrite your `/etc/docker/daemon.json` configuration, so make sure to update it as needed.
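
If you had custom settings in there (for instance a relocated data directory), merge them with the runtime entry that nvidia-docker2 installs. A sketch of such a merged file, where `data-root` is an assumed pre-existing custom setting:

```shell
# Sketch of a merged /etc/docker/daemon.json: data-root is an assumed
# custom setting; the nvidia runtime entry comes from nvidia-docker2.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
    "data-root": "/data/docker",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
EOF
```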
|
|
|
|
|
|
Then in your production.yml file, uncomment the dedicated GPU environment variables and configurations.
|
|
|
To make use of more GPUs, simply add more workers following the same configuration.
|
|
|
Then rebuild & run the containers.
|
|
|
|