Within the Docker community there tend to be two schools of thought on running processes: run multiple supervised processes in a container, or run only a single process per container. The former makes it easier to encapsulate applications that need more than one service (think NGiNX + uWSGI + Memcached + Redis), while the latter fits more naturally with Docker's model of running a single command per container. This article is about the former: using Supervisor to supervise multiple processes and capture their log output.
One major issue with running multiple processes in a Docker container is getting access to log output. There are plenty of logging services designed to ease the burden by aggregating logs from many containers into a single location for storage and analysis, but in this case we're only interested in a single container. By default, Supervisor logs to a separate file for each of its child processes. While this is great for organization, it's a hassle when running inside a Docker container: we can already see standard output and error from the container itself, and docker logs gives us the same information when the container is running as a daemon, so ideally we should be able to see our basic logs the same way.
This post assumes you already have a Docker image with Supervisor installed and a program or two that you’re supervising.
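If you don't have such an image yet, a minimal setup might look like the following Dockerfile sketch (the base image, package names, and COPY paths are assumptions, not part of the original post):

```dockerfile
# Sketch of a Debian/Ubuntu-based image running Supervisor as the main process
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y supervisor apache2
COPY supervisord.conf /etc/supervisor/supervisord.conf
COPY supervisord-apache2.conf /etc/supervisor/conf.d/supervisord-apache2.conf
# -n keeps supervisord in the foreground, which is what Docker expects
CMD ["/usr/bin/supervisord", "-n"]
```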
The first step is to modify /etc/supervisor/supervisord.conf and add loglevel=debug to the [supervisord] section:
[supervisord]
logfile=/var/log/supervisor/supervisord.log
pidfile=/var/run/supervisord.pid
childlogdir=/var/log/supervisor
loglevel=debug
Next, modify your program-specific configuration file and add redirect_stderr=true. We'll use /etc/supervisor/conf.d/supervisord-apache2.conf as an example:
[program:apache2]
command=[APACHE_START_COMMAND]
numprocs=1
autostart=true
redirect_stderr=true
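The redirect_stderr=true setting is roughly the Supervisor equivalent of the shell redirection 2>&1: the child's stderr is merged into the stream Supervisor already captures. A quick sketch of the effect (the child function here is just a stand-in for a supervised program):

```shell
# Simulate a child process that writes to both stdout and stderr
child() {
  echo "to stdout"
  echo "to stderr" 1>&2
}

# Without merging, only stdout is captured; stderr bypasses the capture
plain="$(child 2>/dev/null)"

# With 2>&1 -- the effect redirect_stderr=true has for a supervised child --
# both streams end up on stdout
merged="$(child 2>&1)"

echo "plain:  $plain"
echo "merged: $merged"
```

With the merge in place, error output travels the same path as normal output, which is why Supervisor (and ultimately Docker) can see it.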
This sets up Supervisor to log at the debug level and redirect each child's stderr back to the main Supervisor process. The final step is getting the program itself to write its log output to /dev/stdout, which depends on the particular program you're running. For example, to force Apache logs (which would include mod_php errors, mod_python errors, etc.) to display in your Docker output, set ErrorLog to /dev/stdout.
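For Apache specifically, that might look like the following configuration fragment (a sketch, not from the original post; adjust for your Apache version and preferred log format):

```apache
# Send Apache's error log (including mod_php/mod_python errors) to the
# container's stdout so Supervisor -- and therefore Docker -- can see it
ErrorLog /dev/stdout

# Optionally do the same for access logs
CustomLog /dev/stdout combined
```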
Example Docker output from a WordPress container containing a PHP syntax error, sending supervised log output to /dev/stdout:
2014-09-27 17:49:20,029 INFO success: mysqld entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2014-09-27 17:49:20,029 INFO success: apache2 entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2014-09-27 17:49:23,215 DEBG 'apache2' stdout output: [Sat Sep 27 17:49:23.215379 2014] [:error] [pid 875] [client 192.168.59.3:64633] PHP Fatal error: Call to undefined function efine() in /app/wp-includes/wp-db.php on line 15
For larger and more distributed setups, this would most likely not be a sufficient logging method, but for quick tests and smaller applications, this can make log viewing just a bit easier.