To update:

    $ git pull
    $ docker-compose up -d --build

# Deploying

First, create a production.yml file corresponding to your needs; see [the docker-compose docs](https://docs.docker.com/compose/). Be aware that Docker stores everything under /var/lib/docker rather than the current path, so if your system partition is small it's advisable to override either the volumes configuration or the whole Docker data directory.

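For example, a minimal production.yml moving a data volume onto a larger partition might look like this. The service name, volume path, and compose version here are placeholders; match them to the ones defined in the project's own docker-compose.yml:

```yaml
# production.yml -- sketch only; adapt service names, paths, and the
# compose version to the ones used by the base docker-compose.yml.
version: "2.4"

services:
  db:
    volumes:
      # Bind-mount the database data to a large partition instead of
      # letting Docker keep it under /var/lib/docker.
      - /data/escriptorium/postgres:/var/lib/postgresql/data
```
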
Change the admin password and the secret key in the variables.env file **before building**!

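The exact variable names are defined by the variables.env file shipped with the repository; the idea is along these lines (the names below are illustrative, not authoritative -- edit the variables that actually appear in your copy of the file):

```
# variables.env -- illustrative names only; change the real variables
# shipped with the repository, and do it before building the images.
ADMIN_PASSWORD=change-me-before-building
SECRET_KEY=a-long-random-unguessable-string
```
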
The variables.env file is passed to the containers; some configuration values, however, need to be passed directly to the docker-compose build through environment variables on the host.

Configure exim (to send emails) with:

    $ export MAIL_PRIMARY_HOST=domainname.com

Alternatively, you can create a file called `.env` and write those variables in it; Docker will pick them up automatically.

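For example, a `.env` file sitting next to docker-compose.yml could contain:

```
# .env -- read automatically by docker-compose
MAIL_PRIMARY_HOST=domainname.com
```
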
### Update the code:

    $ git pull origin master

### Build:

    $ docker-compose -f docker-compose.yml -f production.yml build --build-arg VERSION_DATE="$(git log -1 --format=%ad)"

### Run:

    $ docker-compose -f docker-compose.yml -f production.yml up -d

### Setting the domain name:

When sending emails we don't have access to the request, which means we don't know which domain to generate URLs for. You can set it at /admin/sites/site/.

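This is Django's sites framework, so the same change can also be made from a shell inside the running web container. A sketch of such a session, assuming the compose service is called `web` (check your compose file for the real name):

```
$ docker-compose exec web python manage.py shell
>>> from django.contrib.sites.models import Site
>>> site = Site.objects.get_current()
>>> site.domain = "domainname.com"
>>> site.name = "domainname.com"
>>> site.save()
```
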
It may become a build option at some point.

### Backing up the database:

For example, to back up the database every day at 3am, you could add to your crontab:

    0 3 * * * docker exec -u postgres escriptorium_db_1 pg_dump -Fc escriptorium > /path/to/backups/db-$(date +"\%Y\%m\%d-\%H\%M").dump

If you changed $POSTGRES_USER or $POSTGRES_DB, make sure to substitute them accordingly, since cron doesn't have access to those environment variables.

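Since the dump is made with pg_dump's custom format (-Fc), it can be restored with pg_restore. A sketch, with an example backup filename and the same container name as above:

```
$ docker exec -i -u postgres escriptorium_db_1 \
    pg_restore --clean -d escriptorium < /path/to/backups/db-20200101-0300.dump
```
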
And send the backups off to a remote server a while later:

    0 4 * * * rsync -r /path/to/backups/ distantuser@distantserver:/distant/path/to/backups

### Using a GPU

First, make sure you are running a supported distribution ([list here](https://nvidia.github.io/nvidia-docker/)).

Then install the latest nvidia drivers.

For Debian 10 the easiest way is to use the [buster backports](http://ftp.debian.org/debian/dists/buster/).

    $ echo "deb http://deb.debian.org/debian buster-backports main" | sudo tee -a /etc/apt/sources.list
    $ sudo apt-get update
    $ sudo apt-get install -t buster-backports nvidia-driver

*insert additional steps here*

Since docker-compose [doesn't have a way to use Docker's --gpus argument](https://github.com/docker/compose/issues/6691), we have to use the deprecated [nvidia-docker2](https://github.com/NVIDIA/nvidia-docker/wiki/Installation-(version-2.0)) along with the old docker-compose file format version 2.4.

You can follow the installation process explained [here](https://github.com/NVIDIA/nvidia-docker/tree/master), but instead of `nvidia-container-toolkit` install `nvidia-docker2`.

Beware that this may overwrite your `/etc/docker/daemon.json` configuration so make sure to update it as needed.

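After installing nvidia-docker2, `/etc/docker/daemon.json` should contain the nvidia runtime registration, roughly:

```json
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
```

If you had other settings in that file (a custom `data-root`, for instance), merge them back in rather than letting the installer's version replace them.
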
Then in your production.yml file, uncomment the dedicated GPU environment variables and configurations.

To make use of more GPUs, simply add more workers following the same configuration.

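As a sketch, a GPU worker in a 2.4-format compose file looks roughly like this; the service name and image are placeholders, the real ones come from eScriptorium's own compose files:

```yaml
# production.yml -- sketch only; service and image names are placeholders.
version: "2.4"

services:
  celery-gpu:
    image: escriptorium:latest        # placeholder image name
    runtime: nvidia                   # provided by nvidia-docker2
    environment:
      - NVIDIA_VISIBLE_DEVICES=0      # pin this worker to GPU 0
      - NVIDIA_DRIVER_CAPABILITIES=compute,utility
```

A second worker pinned to another card would duplicate this block with `NVIDIA_VISIBLE_DEVICES=1`, and so on.
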
Then rebuild & run the containers.