Reloading your development environment as you change code is second nature when working on a project locally. Without fast, automatic reloading, though, productivity suffers quickly, especially on a modern, heavyweight dev stack where stopping and restarting manually can be slow. For this very reason, most popular, modern frameworks and stacks ship with built-in auto-reloading on file changes. But if you are moving to a Docker-based development environment, those solutions won’t work out of the box, because project files are copied into the container at build time. In this article, let’s look at a couple of approaches for auto-reloading a Docker environment when:
- Your stack doesn’t come with built-in auto-reload capability
- Your environment has built-in auto-reload functionality, but you lost it after Docker integration because project files are copied into the Docker image.
Note: The examples provided here are for Python/Django with Docker, but they should be applicable to any language/development stack in general.
#1 Mount project directory as external volume:
The first intuition for solving this reloading problem is to make Docker use the current working directory directly. That doesn’t happen out of the box: images are self-contained by design, since portability is one of the main reasons to use Docker in the first place. With a little extra setup, however, we can achieve exactly that using Docker’s volume-mounting feature, which reflects code changes in the container without any external restart mechanism.
This works well if your development stack has auto-reload capability of its own. It is also the fastest way to auto-reload Docker, since most development stacks already optimize reloading on file changes.
The steps for this approach include:
- Define an explicit external volume that points at the project’s current working directory (compose.yml/docker-compose.yml file)
- Mount that volume onto the application directory inside the container (compose.yml/docker-compose.yml file)
- Use the application directory as the working directory (Dockerfile)
The compose configuration is as follows:
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - 8000:8000
    volumes:
      - web-server:/app

volumes:
  web-server:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: ${PWD}  # the local driver needs an absolute path, so take it from the shell
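As an aside, if the explicit named-volume definition feels heavy-handed, compose’s short bind-mount syntax achieves the same host-to-container mapping in a single line — a minimal sketch of the same hypothetical `web` service:

```yaml
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - 8000:8000
    volumes:
      - ./:/app   # bind-mount the project root straight into /app
```

The named-volume form gives you a place to set driver options; the short form is the common shortcut when none are needed.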
The Dockerfile would look something like this:
...
WORKDIR /app
EXPOSE 8000

COPY ./requirements.txt /app/
RUN python -m venv /py && \
    /py/bin/pip install --upgrade pip && \
    /py/bin/pip install -r /app/requirements.txt

COPY . /app/

# Declare the mount point after the files it holds are in place; build steps
# that change a directory after its VOLUME instruction are discarded.
VOLUME ["/app"]
...
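Why is the bind mount alone enough? Django’s runserver watches files with a stat-based poller: it periodically compares each file’s modification time against a snapshot. The sketch below is illustrative only, not Django’s actual implementation, but it shows the mechanism: a host-side edit bumps the file’s mtime, which the poller inside the container then sees through the mount.

```python
import os
import tempfile
import time

def snapshot(paths):
    """Record the last-modified time of every watched file."""
    return {p: os.stat(p).st_mtime for p in paths}

def changed(paths, previous):
    """Return the files whose mtime no longer matches the snapshot."""
    return [p for p in paths if os.stat(p).st_mtime != previous.get(p)]

# Demo: take a snapshot, simulate a host-side edit, and detect it.
workdir = tempfile.mkdtemp()
source = os.path.join(workdir, "views.py")
with open(source, "w") as fh:
    fh.write("# original contents\n")

before = snapshot([source])
later = time.time() + 1  # bump the mtime, as an edit from the host would
os.utime(source, (later, later))

print(changed([source], before))  # the edited file is reported
```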
Pros:
- The fastest way to get your environment auto-reloaded, with very low latency between a file change and the service refreshing.
Cons:
- The dev stack itself must support auto-reloading.
#2 Auto reload Docker using the built-in “watch” functionality:
In this second approach, we leverage the auto-reload functionality (aka “watch”) built into Docker Compose itself. This approach requires:
- Defining watch behaviour under the service’s “develop” section
- Running the “docker compose”/“docker-compose” command with the “--watch” flag
Here’s an example of compose file configuration:
services:
  web:
    develop:
      watch:
        - action: sync
          path: ./
          target: /app
        - action: rebuild
          path: requirements.txt
Also, the command for running the Docker project would look something like:
docker compose up --build --watch
If things are set up correctly, a “watch enabled” status log should appear while the container(s) start.
To learn about these configurations in more detail, please refer to Docker’s watch and develop documentation.
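Conceptually, each changed path is matched against the watch rules in order, and the first match decides what happens: source edits are synced into the container, dependency changes trigger a rebuild. Here is a simplified Python sketch of that dispatch — not Compose’s actual algorithm, and the rule set mirrors the example config above:

```python
# Simplified model of compose "develop.watch" dispatch; rule order matters,
# so the specific requirements.txt rule comes before the catch-all sync rule.
WATCH_RULES = [
    {"action": "rebuild", "path": "requirements.txt"},
    {"action": "sync", "path": "./"},
]

def _norm(path):
    """Drop a leading './' so rule paths and changed paths compare cleanly."""
    return path[2:] if path.startswith("./") else path

def action_for(changed_path):
    """Return the action of the first watch rule whose path prefixes the change."""
    for rule in WATCH_RULES:
        if _norm(changed_path).startswith(_norm(rule["path"])):
            return rule["action"]
    return None

print(action_for("requirements.txt"))  # rebuild: dependencies changed
print(action_for("./app/views.py"))   # sync: a plain source edit
```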
Pros:
- Works regardless of whether or not the development stack comes with an auto-reload feature.
Cons:
- This is typically slower than the previously described “volume mount” approach.
Conclusion:
Which approach to use depends entirely on your development environment and preference. As a rule of thumb, if the development stack ships with an auto-reload feature, the volume-mount approach is more efficient. If not, a “watch” configuration gives you auto-reloading the stack alone couldn’t provide. In a multi-container setup, a mix of both can also be a viable choice.