Setting up a new developer on a project should be smooth and repeatable. The task is straightforward: the whole application should run on a developer’s box. After trial and error with VirtualBox (the images were too big), Vagrant (too slow) and plain Docker (too complicated), I found Docker Compose to be the ideal tool for what I wanted.
However, before I try to explain why this is a beautiful solution, let me describe the problem that running Docker Compose in development solves.
Modularity
The main problem is that not every application requires the same server infrastructure. Even if you are running Magento, the combinations are limitless: Apache or Nginx as the web server? Is Percona, MySQL or perhaps MariaDB running in production? Then the same should run in development too. Redis or Memcached for the cache? SOLR, Elasticsearch or something else for the search index? Does a custom extension require a particular PHP module? With Docker Compose, you can swap any of these components by changing the configuration.
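For example, adding Redis as a cache backend, or swapping it out for Memcached, takes only a few lines in the compose file. A minimal sketch, assuming the version 2 file format used later in this post; the image tags are just examples:

services:
  cache:
    # swap this image for memcached:1.4 if the project uses Memcached instead
    image: redis:3.2
    ports:
      - "6379:6379"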
Same version of everything
With a single configuration file, it’s possible to pin the version number of every component. There’s no need to litter your development box with different versions of PHP and Apache; everything is configured and contained in one place. No more excuses like “it worked on my machine”: everyone runs the exact same versions as you.
Speed
Unlike virtualisation, Docker runs natively and communicates with the kernel directly. There is (theoretically) no performance loss: given the same hardware, your software will run at the same speed as if it were installed directly on the host operating system.
No other solution provides all three of the above benefits; Docker Compose does. Your CI server should run its tests against a production-like stack, or perhaps against several different versions. You should also be able to run the same stack locally for day-to-day development, and you want it to be fast.
The docker-compose.yml file can be pushed to git, so changes are distributed across developers, peer-reviewed and easily understood. There are numerous Docker images to choose from, so swapping one piece of software for another is as easy as changing a line in the configuration file.
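To illustrate, switching the database from MySQL to MariaDB would show up in review as a one-line diff. A hypothetical example; the MariaDB tag is just an illustration:

   db:
-    image: mysql:5.7
+    image: mariadb:10.1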
Let’s look at a simple Docker Compose configuration file that provides PHP 7 and MySQL:
version: '2'
services:
  db:
    image: mysql:5.7
    volumes:
      - "./dev/data/db:/var/lib/mysql"
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: myadmin
      MYSQL_DATABASE: database
      MYSQL_USER: myadmin
      MYSQL_PASSWORD: myadmin
  apache:
    image: webdevops/php-apache-dev:alpine-3-php7
    depends_on:
      - db
    volumes:
      - ".:/web"
    ports:
      - "80:80"
    environment:
      WEB_DOCUMENT_ROOT: /web
My favourite Apache/PHP image is alpine-3-php7. It’s small and contains everything Magento 2 needs to run optimally. I used MySQL 5.7 as the database. It’s essential to keep the data separate from the infrastructure, so I added a volumes entry to each service. The database files are stored in the project’s dev/data/db directory, so adding it to .gitignore is essential. For the same reason, the dev/data/db directory will be owned by a different user, so be careful when running chown against the document root. The /web document root for Apache is mapped to the directory where the docker-compose.yml file is located, so opening a web browser and pointing it at localhost immediately runs the application.
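For reference, keeping the database files out of version control is a single .gitignore entry (assuming the project layout above):

# .gitignore
/dev/data/db/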
These are the commands I use most frequently with Docker Compose:
docker-compose up
- start the services defined in docker-compose.yml
docker-compose up --build
- start the services and rebuild the images first, so changes to the configuration are picked up
docker-compose exec --user application apache bash
- it’s like SSHing into the server: this opens a bash prompt inside the Apache container as the application user
docker-compose down --remove-orphans --rmi all
- stop and remove the containers, their images and any other trace of the application from your system
Remember the dev/data/db directory? You can compress it and send it to another developer. If you have ever tried to import a vast database dump, you understand the value of this: rather than hours of MySQL import, tar can inflate the whole database in minutes. And who doesn’t like saving time?
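A rough sketch of that workflow, assuming the paths from the compose file above and a stopped database container; the archive name is arbitrary:

# create a snapshot of the local database files
docker-compose stop db
tar czf db-snapshot.tar.gz dev/data/db

# on the other developer's machine, restore and start the stack
tar xzf db-snapshot.tar.gz
docker-compose up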
Here is an even better way of saving time. Typing frequently used long commands is tedious, and we as developers should spend our time coding, where we are more productive. Setting up aliases for these commands will speed up starting on a new project:
alias cup='docker-compose up'
alias cbash='docker-compose exec --user application apache bash'
alias csudo='docker-compose exec apache bash'
alias cmagento='docker-compose exec --user application apache /usr/bin/php -dmemory_limit=2G /web/bin/magento'
Note how cmagento runs bin/magento with a large memory limit. I keep this memory limit because a site with over a million SKUs used too much memory while running the indexer.
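With the aliases in place, day-to-day tasks become short one-liners. The Magento commands below are standard bin/magento commands, shown purely as usage examples:

cup                          # bring the whole stack up
cmagento cache:flush         # flush the Magento cache
cmagento indexer:reindex     # rebuild the indexes
cbash                        # open a shell inside the Apache container as the application user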
Summary
Even if you don’t use Docker in production, it can still be a good idea to run it locally for development.
This process should be as close to “docker-compose up” as possible. New developers will understand it and, hopefully, love it.
I hope someone finds this useful. Let’s spread the word, use this excellent tool and stop littering our development boxes with different versions of everything.