GeoNode and Docker

GeoNode needs a set of external services to run, including a DBMS (PostgreSQL), a message broker (RabbitMQ), and a GeoServer instance. To develop, run, and test GeoNode you will probably need one or more of these services up and running.

As a developer or as a frontend designer, you may only want to concentrate on your tasks (which will probably be the customization of a GeoNode project) without dealing with all these other services.

This is where Docker comes in handy. Docker is a platform that enables you to separate your applications from your infrastructure. By using Docker you can run all the ancillary services needed by GeoNode without installing them directly on your system: Docker runs them in a sandboxed environment instead.

Quick introduction to Docker

Docker runs services in containers, usually one container for each needed service.

You can see these containers as lightweight virtual machines: all of the files, binaries, libraries, configuration, and so on lie within the container, so the libraries installed on your system may differ from the ones used inside the container.

Volumes

These containers are created and destroyed on request. The filesystem(s) they use may be as volatile as the container itself (when the container is destroyed, the filesystem content is lost) or may be a persistent volume. This is useful, for instance, if you run a DB in a container: when the service/container is restarted, the storage files are preserved, so you get back all the content the database had in its previous run. Persistent volumes are handled by the Docker engine and may not be accessible from outside the container.
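
As a minimal, hedged sketch (not the actual geonode-project configuration; dbdata is a hypothetical volume name), a persistent named volume for a database service could be declared in a docker-compose file like this:

services:
  db:
    image: postgres
    volumes:
      - dbdata:/var/lib/postgresql/data   # named volume holding the DB storage files

volumes:
  dbdata:

Docker keeps the contents of dbdata until you explicitly remove the volume, even if the db container is destroyed and recreated.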

Another option is to map directories inside the container onto local directories in the host filesystem. In this case, any change in your host filesystem is reflected in the filesystem used by the container, and vice versa. This is an optimal solution for developers: while working on the project, they only have to edit and save their part of the GeoNode project, and it will immediately be ready and used by GeoNode (in some cases a restart of the container is required, but you get the point).
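
A minimal sketch of such a mapping, assuming a hypothetical local directory ./src mounted into the container at /usr/src/app (both paths are illustrative):

services:
  django:
    build: .                       # build context is illustrative
    volumes:
      - ./src:/usr/src/app         # host directory mapped into the container

Any edit you save under ./src on the host becomes immediately visible inside the container at /usr/src/app.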

Docker compose

The docker command allows you to manage a single Docker item at a time: containers, images, volumes, etc.

In a generic architecture, and in GeoNode as a particular case, we need more than one service running. To orchestrate and handle a set of services, docker-compose is used.

docker-compose allows you to specify how the different containers relate to each other. It is also where you set the dependencies between containers (e.g.: “service X will not be started until service Y is up”). You can easily share volumes across different containers, and a private network connects them.
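
A hedged sketch of such a dependency (service and image names are illustrative, not the exact geonode-project configuration):

services:
  db:
    image: postgres
  rabbitmq:
    image: rabbitmq
  django:
    build: .
    depends_on:                    # start db and rabbitmq before django
      - db
      - rabbitmq

Note that, by default, depends_on only waits for the dependency containers to start, not for the services inside them to be fully ready.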

Files involved

The main configuration file for docker-compose is called docker-compose.yml. Having created your GeoNode project from the geonode-project template, you should already have that file in your project, perfectly ready for running your GeoNode instance and all of its required ancillary services.

You can configure the whole set of services, most of which share a subset of configuration items, by editing the file .env.
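
As an illustration only (the exact variable names depend on the geonode-project version you generated), .env is a plain list of KEY=value pairs, for example:

COMPOSE_PROJECT_NAME=my_geonode
HTTP_HOST=localhost
POSTGRES_USER=geonode
POSTGRES_PASSWORD=geonode

docker-compose substitutes these values into docker-compose.yml, and the containers can also read them as environment variables.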

When running docker-compose, it will automatically (if not told otherwise) look for these files in the current directory:

  • .env: environment variables

  • docker-compose.yml: configuration for building and running the containers

  • docker-compose.override.yml: local overrides of the containers’ configuration (see the sketch below).
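
As a hedged example of how the override file is typically used (service name and values are illustrative): docker-compose merges docker-compose.override.yml on top of docker-compose.yml, so you can tweak a single setting locally without touching the main file, for instance publishing nginx on a different port:

services:
  geonode:
    ports:
      - "8080:80"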

Our docker-compose will handle these services:

  • db: a PostgreSQL instance

  • rabbitmq: the RabbitMQ service, used to control background processes

  • geoserver: no further explanation is needed for this :)

  • django: the GeoNode “core” process

  • celery: the service spawning background processes. This container uses the very same image as the django container, since GeoNode code runs here as well.

  • geonode: the nginx service, which routes/proxies requests to GeoNode and GeoServer.

  • letsencrypt: used to manage certificates for the HTTPS protocol

  • data-dir-conf: a service that initializes the GeoServer data directory
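
Most docker-compose commands accept one or more of these service names as arguments, so you can act on a single container. For instance, after changing some code you could restart only the GeoNode core process:

docker-compose restart django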

Running Docker

In your project directory, run:

docker-compose build

Note: this command will run for quite a long time (10-20 minutes) and will download lots of stuff.

This command will build the images for the containers. It’s more or less like installing virtual machines from scratch, one for each service. You can think of the image as a frozen installed VM, while the container is a living instance.
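
If you later change only one image (for instance after modifying the GeoNode code or its Dockerfile), you can rebuild that image alone instead of rebuilding everything:

docker-compose build django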

Once the images are built, you can run all the containers using the command:

docker-compose up

This will create all the required volumes, create the virtual network, and instantiate the images as running containers.

The processes will run in the current shell, and you will see the logs of all the containers scroll by in the terminal.

If you want to detach the processes and make them run in the background, you can use the -d argument (docker-compose up -d).
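
When running detached, you can check the state of the containers at any time with:

docker-compose ps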

If you want to visualize the logs you can run the command:

docker-compose logs [-f] [container ...]

  • -f: follow, works as tail -f

  • container: the name(s) of the containers you want the logs of. If omitted, you’ll get the logs of all the running containers.
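
For example, to follow only the logs of the GeoNode core process:

docker-compose logs -f django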

Stopping and restarting containers

You can stop the containers with the command docker-compose stop and start them again with docker-compose start. Using these commands, the volatile filesystem is preserved between runs. You can also stop the containers using docker-compose down; this will destroy the containers and their volatile filesystems. You’ll then need to issue docker-compose up again to recreate fresh containers from their images.
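
To recap, a typical sequence could be:

docker-compose stop      # stop the containers, preserving their volatile filesystems
docker-compose start     # start them again where they left off
docker-compose down      # stop and destroy the containers and their volatile filesystems
docker-compose up -d     # recreate fresh containers from the images, in the background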
