Managing Python Apps with systemd

Running a Python application on a Linux server goes far beyond typing python app.py in a terminal. In production, you need your application to start automatically on boot, restart after crashes, log output in a structured way, and integrate cleanly with the rest of the operating system. That is exactly what systemd was designed to do.

This article walks through everything you need to know to manage Python applications with systemd: writing unit files, handling virtual environments, managing environment variables, logging, sandboxing, and running multiple instances of the same service.

Why systemd?

systemd is the init system and service manager on virtually every modern Linux distribution, including Ubuntu, Debian, Fedora, CentOS, and Arch Linux. When you manage your Python application as a systemd service, you gain several advantages over ad-hoc solutions like nohup, screen, or tmux:

  • The application starts automatically when the server boots.
  • systemd restarts it if it crashes, with configurable backoff policies.
  • Logs are captured by journald, which provides rotation, filtering, and centralized access out of the box.
  • Resource limits (CPU, memory, file descriptors) can be enforced declaratively.
  • Dependencies between services are handled natively (for example, starting your app only after PostgreSQL is ready).

A minimal unit file

A systemd unit file is an INI-style configuration file that describes a service. Unit files for custom services are typically placed in /etc/systemd/system/. Let's start with the simplest possible example for a Python web application.

Create a file at /etc/systemd/system/myapp.service:

[Unit]
Description=My Python Application
After=network.target

[Service]
Type=simple
User=myapp
Group=myapp
WorkingDirectory=/opt/myapp
ExecStart=/opt/myapp/venv/bin/python -u app.py
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Let's break this file down section by section.

The [Unit] section

Description is a human-readable label that shows up in logs and status output. After=network.target tells systemd to start this service only after the network stack is initialized. Note that After= defines ordering only, not a dependency: it does not cause the other unit to be started. To also pull a unit in, add Requires= (a hard dependency that fails your service if the dependency fails) or Wants= (a weaker form that tolerates failure) alongside it.

The [Service] section

Type=simple means the process started by ExecStart is the main process of the service. This is the correct type for most Python applications, since they typically stay in the foreground.

User and Group specify the Unix user and group under which the process runs. You should always run your application as an unprivileged user, never as root.

WorkingDirectory sets the current working directory for the process. This matters if your application uses relative paths for configuration files or templates.

ExecStart is the command to run. Notice that we point directly to the Python binary inside the virtual environment. This is critical and will be discussed in more detail shortly.

The -u flag passed to Python disables output buffering, ensuring that print statements and log messages appear in the journal immediately rather than being held in a buffer.

Restart=on-failure tells systemd to restart the service if the process exits with a non-zero exit code or is killed by a signal. RestartSec=5 adds a five-second delay before restarting, which prevents tight restart loops if the application fails immediately on startup.
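
Restart= and RestartSec= are often paired with start-limit settings, which cap how many restart attempts systemd makes before marking the unit as failed. A sketch (the values here are illustrative):

```ini
[Unit]
# Allow at most 5 start attempts within a 60-second window;
# beyond that, the unit enters the "failed" state instead of looping.
StartLimitIntervalSec=60
StartLimitBurst=5
```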

The [Install] section

WantedBy=multi-user.target means the service will be started when the system reaches multi-user mode (the normal state for a server). This line is what makes systemctl enable work.

Working with virtual environments

The cleanest way to handle virtual environments in systemd is to reference the Python interpreter inside the venv directly in the ExecStart directive, as shown above:

ExecStart=/opt/myapp/venv/bin/python -u app.py

This approach works because the Python binary inside a virtual environment already knows about its own site-packages. There is no need to "activate" the virtual environment: activation is a shell convenience that mainly prepends the venv's bin/ directory to PATH, and since systemd doesn't run a shell, you bypass it entirely by using the full path to the interpreter.
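
If you want to verify at runtime that the service really is running on the venv interpreter, you can compare sys.prefix and sys.base_prefix; a small sketch:

```python
import sys

def in_virtualenv() -> bool:
    """Return True when running from a venv interpreter.

    Inside a venv, sys.prefix points at the venv directory while
    sys.base_prefix still points at the base installation.
    """
    return sys.prefix != sys.base_prefix

print(f"interpreter: {sys.executable}")
print(f"virtualenv:  {in_virtualenv()}")
```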

If your application uses a console script entry point (installed via pip install -e . or pip install .), you can reference that directly instead:

ExecStart=/opt/myapp/venv/bin/myapp-server --port 8000
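
For reference, such an entry point is declared in the project's packaging metadata; with pyproject.toml it might look like this (the names are illustrative, matching the hypothetical myapp-server above):

```toml
[project.scripts]
# pip install creates venv/bin/myapp-server, which invokes myapp.server:main
myapp-server = "myapp.server:main"
```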

Managing environment variables

Python applications often rely on environment variables for configuration: database URLs, API keys, debug flags, and so on. systemd provides several ways to set them.

Inline with Environment

You can set variables directly in the unit file:

[Service]
Environment=DATABASE_URL=postgresql://localhost/mydb
Environment=LOG_LEVEL=info
Environment=WORKERS=4

This approach is straightforward but has two drawbacks. First, the values are visible to anyone who can read the unit file or run systemctl show myapp. Second, editing requires a daemon reload.

Using an environment file

A better approach for sensitive values is to use EnvironmentFile:

[Service]
EnvironmentFile=/etc/myapp/environment

The referenced file uses a simple KEY=VALUE format, one variable per line:

DATABASE_URL=postgresql://localhost/mydb
SECRET_KEY=a1b2c3d4e5f6
LOG_LEVEL=info

Make sure this file is owned by root and has restrictive permissions:

sudo chown root:root /etc/myapp/environment
sudo chmod 600 /etc/myapp/environment

The service process can still read the variables at runtime because systemd loads them before dropping privileges.
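
On the Python side, a common pattern is to read these variables once at startup and fail fast when a required one is missing. A minimal sketch (load_config is an illustrative helper; the variable names match the example file above):

```python
import os

def load_config() -> dict:
    """Read configuration from the environment, failing fast on missing
    required values and applying defaults for optional ones."""
    try:
        database_url = os.environ["DATABASE_URL"]
        secret_key = os.environ["SECRET_KEY"]
    except KeyError as exc:
        raise SystemExit(f"missing required environment variable: {exc}")
    return {
        "database_url": database_url,
        "secret_key": secret_key,
        "log_level": os.environ.get("LOG_LEVEL", "info"),
    }
```

Failing fast matters under systemd: a clean non-zero exit on missing configuration shows up immediately in systemctl status, instead of a half-started service that errors later.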

Essential systemctl commands

Once your unit file is in place, here is the standard workflow for managing the service:

# Reload unit files after creating or editing them
sudo systemctl daemon-reload

# Start the service
sudo systemctl start myapp

# Stop the service
sudo systemctl stop myapp

# Restart the service
sudo systemctl restart myapp

# Reload configuration without full restart (if the app supports SIGHUP)
sudo systemctl reload myapp

# Enable the service to start on boot
sudo systemctl enable myapp

# Disable auto-start on boot
sudo systemctl disable myapp

# Check the current status
sudo systemctl status myapp

The status command is particularly useful. It shows whether the service is running, its PID, memory usage, and the most recent log lines.

Logging with journald

By default, systemd captures everything your application writes to stdout and stderr and sends it to journald. You can query these logs with journalctl:

# View all logs for the service
journalctl -u myapp

# Follow the log output in real time
journalctl -u myapp -f

# Show logs since the last boot
journalctl -u myapp -b

# Show logs from the last hour
journalctl -u myapp --since "1 hour ago"

# Show only error-level messages (priority 0-3)
journalctl -u myapp -p err

# Output in JSON format for parsing
journalctl -u myapp -o json
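
The JSON output is one object per line, which makes it easy to post-process in Python. A sketch that extracts the timestamp and message from each entry (MESSAGE and __REALTIME_TIMESTAMP are standard journal field names):

```python
import json
from datetime import datetime, timezone

def parse_journal_line(line: str) -> tuple[datetime, str]:
    """Parse one line of `journalctl -o json` output into a
    (timestamp, message) pair."""
    entry = json.loads(line)
    # __REALTIME_TIMESTAMP is microseconds since the Unix epoch.
    ts = datetime.fromtimestamp(
        int(entry["__REALTIME_TIMESTAMP"]) / 1_000_000, tz=timezone.utc
    )
    return ts, entry.get("MESSAGE", "")

# Example with a hand-written journal entry:
sample = '{"__REALTIME_TIMESTAMP": "1700000000000000", "MESSAGE": "Application started"}'
print(parse_journal_line(sample))
```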

Structured logging with Python

To get the most out of journald, consider using Python's built-in logging module configured to write to stderr in a structured format. Here is a minimal setup:

import logging
import sys

logging.basicConfig(
    stream=sys.stderr,
    level=logging.INFO,
    format="%(asctime)s [%(levelname)s] %(name)s: %(message)s",
)

logger = logging.getLogger("myapp")

For even tighter integration, you can use the systemd-python package to send structured fields directly to the journal:

from systemd.journal import JournalHandler
import logging

logger = logging.getLogger("myapp")
logger.addHandler(JournalHandler(SYSLOG_IDENTIFIER="myapp"))
logger.setLevel(logging.INFO)

logger.info("Application started", extra={"REQUEST_ID": "abc-123"})

This allows you to filter logs by custom fields, for example journalctl -u myapp REQUEST_ID=abc-123.

Hardening and sandboxing

systemd offers a suite of security directives that let you restrict what your service can do, following the principle of least privilege. Here is a hardened version of the [Service] section:

[Service]
Type=simple
User=myapp
Group=myapp
WorkingDirectory=/opt/myapp
ExecStart=/opt/myapp/venv/bin/python -u app.py
Restart=on-failure
RestartSec=5

# Filesystem restrictions
ProtectHome=true
ProtectSystem=strict
ReadWritePaths=/var/lib/myapp
PrivateTmp=true

# Network and device restrictions
PrivateDevices=true
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX

# Privilege restrictions
NoNewPrivileges=true
CapabilityBoundingSet=

# System call filtering
SystemCallArchitectures=native
SystemCallFilter=@system-service

# Memory protections
MemoryDenyWriteExecute=true

Let's look at what each of these does.

ProtectHome=true makes /home, /root, and /run/user inaccessible to the service. ProtectSystem=strict mounts the entire file system hierarchy as read-only, except paths explicitly listed in ReadWritePaths. Together, these ensure your application cannot modify anything outside its designated data directory.

PrivateTmp=true gives the service its own private /tmp directory, isolated from other processes. PrivateDevices=true prevents access to physical device nodes.

NoNewPrivileges=true prevents the process (and any child processes) from gaining additional privileges through setuid binaries or other mechanisms. CapabilityBoundingSet= (set to an empty value) drops all Linux capabilities.

SystemCallFilter=@system-service restricts the process to a predefined set of system calls that are typical for network services. This blocks many of the system calls used in exploitation attempts. MemoryDenyWriteExecute=true forbids memory mappings that are both writable and executable; plain CPython usually tolerates this, but libraries that generate machine code at runtime (for example via cffi or a JIT) may break, so test your application with it enabled.

You can test how well your unit file is hardened by running:

systemd-analyze security myapp

This command prints an exposure score between 0 (well sandboxed) and 10 (highly exposed), along with specific recommendations; lower is better.

Resource limits

You can constrain the resources your service is allowed to consume. This is especially useful in shared hosting environments or when running untrusted workloads:

[Service]
# Limit memory usage to 512 MB
MemoryMax=512M

# Limit CPU usage to 50% of one core
CPUQuota=50%

# Limit the number of open file descriptors
LimitNOFILE=4096

# Limit the number of child processes
LimitNPROC=64

# Set a maximum runtime (kill the process after 1 hour)
RuntimeMaxSec=3600

These limits leverage Linux cgroups under the hood. If the process exceeds MemoryMax, the kernel's OOM killer will terminate it, and systemd will restart it according to the Restart policy.
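
From inside the process, some of these limits are visible through Python's standard resource module (Unix only); LimitNOFILE and LimitNPROC appear as the RLIMIT_NOFILE and RLIMIT_NPROC soft/hard pairs. A sketch for diagnostics:

```python
import resource

def describe_limits() -> dict:
    """Return the soft and hard limits the current process sees."""
    soft_files, hard_files = resource.getrlimit(resource.RLIMIT_NOFILE)
    soft_procs, hard_procs = resource.getrlimit(resource.RLIMIT_NPROC)
    return {
        "open_files": (soft_files, hard_files),  # set by LimitNOFILE=
        "processes": (soft_procs, hard_procs),   # set by LimitNPROC=
    }

for name, (soft, hard) in describe_limits().items():
    print(f"{name}: soft={soft} hard={hard}")
```

Logging these at startup is a cheap way to confirm that the unit file's limits are actually being applied.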

Running a WSGI/ASGI server

Most Python web applications in production run behind a WSGI or ASGI server. Here is how to set up Gunicorn as a systemd service:

[Unit]
Description=Gunicorn serving My Web Application
After=network.target

[Service]
Type=notify
User=myapp
Group=myapp
WorkingDirectory=/opt/myapp
EnvironmentFile=/etc/myapp/environment
ExecStart=/opt/myapp/venv/bin/gunicorn \
    --workers 4 \
    --bind unix:/run/myapp/gunicorn.sock \
    --access-logfile - \
    --error-logfile - \
    wsgi:app
ExecReload=/bin/kill -s HUP $MAINPID
Restart=on-failure
RestartSec=5
RuntimeDirectory=myapp

[Install]
WantedBy=multi-user.target

A few things are different here. Type=notify means Gunicorn will signal systemd when it is ready to accept connections (Gunicorn has built-in systemd notify support). This is important because it tells systemd to consider the service "started" only after the workers have been forked and are listening.

RuntimeDirectory=myapp automatically creates the directory /run/myapp/ with correct ownership when the service starts, and removes it when the service stops. This is where the Unix socket lives.

ExecReload sends a SIGHUP signal to Gunicorn's master process, which triggers a graceful worker restart. This allows you to run systemctl reload myapp during deployments without dropping connections.

For an ASGI application using Uvicorn, the ExecStart would look like this:

ExecStart=/opt/myapp/venv/bin/uvicorn \
    --host 0.0.0.0 \
    --port 8000 \
    --workers 4 \
    app:application
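
For completeness, the app:application target above refers to an ASGI callable. A minimal, framework-free example of such a callable (a module named app.py, responding with plain text):

```python
async def application(scope, receive, send):
    """A minimal ASGI application returning 200 OK with a text body."""
    assert scope["type"] == "http"
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],
    })
    await send({
        "type": "http.response.body",
        "body": b"hello from a systemd-managed app\n",
    })
```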

Running multiple instances with template units

If you need to run multiple instances of the same application (for example, on different ports or with different configurations), systemd template units are the elegant solution.

Create a file at /etc/systemd/system/myapp@.service (note the @ in the filename):

[Unit]
Description=My Python Application (instance %i)
After=network.target

[Service]
Type=simple
User=myapp
Group=myapp
WorkingDirectory=/opt/myapp
EnvironmentFile=/etc/myapp/%i.env
ExecStart=/opt/myapp/venv/bin/python -u app.py --port %i
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

The %i specifier is replaced by the instance name, which is whatever you place after the @ when managing the service:

# Start instances on different ports
sudo systemctl start myapp@8001
sudo systemctl start myapp@8002
sudo systemctl start myapp@8003

# Enable all three on boot
sudo systemctl enable myapp@8001 myapp@8002 myapp@8003

# Check status of a specific instance
sudo systemctl status myapp@8002

Each instance reads its own environment file (/etc/myapp/8001.env, /etc/myapp/8002.env, and so on), allowing you to customize configuration per instance.
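
On the application side, the instance name arrives as an ordinary command-line argument, so app.py only needs standard argument parsing. A sketch matching the ExecStart line above:

```python
import argparse

def parse_args(argv=None) -> argparse.Namespace:
    """Parse the --port flag that the template unit passes via the
    instance name."""
    parser = argparse.ArgumentParser(description="My Python Application")
    parser.add_argument("--port", type=int, required=True,
                        help="TCP port for this instance (from the instance name)")
    return parser.parse_args(argv)

if __name__ == "__main__":
    args = parse_args()
    print(f"starting instance on port {args.port}")
```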

Handling deployments

A typical deployment workflow for a systemd-managed Python application looks like this:

#!/usr/bin/env bash
set -euo pipefail

APP_DIR=/opt/myapp

# Pull new code
cd "$APP_DIR"
git pull origin main

# Install/update dependencies
"$APP_DIR/venv/bin/pip" install -r requirements.txt

# Run database migrations
"$APP_DIR/venv/bin/python" manage.py migrate

# Restart the service
sudo systemctl restart myapp

# Verify it started correctly
sleep 2
sudo systemctl is-active --quiet myapp && echo "Deployment successful" || echo "Deployment FAILED"

If you are using Gunicorn, you can replace the restart with a reload for zero-downtime deployments. Gunicorn will fork new workers with the updated code while the old workers finish processing their current requests.

Pre-start and post-stop hooks

systemd allows you to run commands before the main process starts and after it stops. This is useful for tasks like database migrations, cache warming, or cleanup:

[Service]
ExecStartPre=/opt/myapp/venv/bin/python manage.py migrate --noinput
ExecStart=/opt/myapp/venv/bin/gunicorn wsgi:app
ExecStartPost=/opt/myapp/venv/bin/python manage.py warmup_cache
ExecStopPost=/usr/bin/rm -f /var/lib/myapp/celerybeat-schedule

ExecStartPre runs before the main process. If it fails (exits with a non-zero code), the service will not start. ExecStartPost runs after the main process has started. ExecStopPost runs after the main process has stopped, regardless of how it stopped.

Socket activation

systemd can manage the listening socket on behalf of your application, passing it to the process on startup. This enables zero-downtime restarts at the socket level: systemd holds the socket open and queues incoming connections while the application is restarting.

Create a socket unit at /etc/systemd/system/myapp.socket:

[Unit]
Description=Socket for My Python Application

[Socket]
ListenStream=/run/myapp/app.sock
SocketUser=www-data
SocketMode=0660

[Install]
WantedBy=sockets.target

Then modify your service unit to accept the socket:

[Unit]
Description=My Python Application
Requires=myapp.socket
After=myapp.socket

[Service]
Type=simple
User=myapp
Group=myapp
WorkingDirectory=/opt/myapp
ExecStart=/opt/myapp/venv/bin/python -u app.py
NonBlocking=true

In your Python code, you receive the socket via file descriptor 3 (the standard systemd convention) using the sd-daemon protocol or the systemd-python library.
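
Without extra dependencies, the receiving side can be sketched like this: systemd sets LISTEN_PID and LISTEN_FDS in the environment, and passed sockets start at file descriptor 3. The helper below is illustrative:

```python
import os
import socket

SD_LISTEN_FDS_START = 3  # first inherited fd, per the sd_listen_fds(3) convention

def inherited_sockets() -> list[socket.socket]:
    """Return sockets passed by systemd socket activation, if any."""
    # LISTEN_PID guards against fds intended for a different process.
    if os.environ.get("LISTEN_PID") != str(os.getpid()):
        return []
    count = int(os.environ.get("LISTEN_FDS", 0))
    return [socket.socket(fileno=SD_LISTEN_FDS_START + i) for i in range(count)]

socks = inherited_sockets()
if socks:
    server = socks[0]  # socket-activated: systemd already bound it
else:
    # Fallback for running outside systemd, e.g. during development.
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
```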

Watchdog integration

The systemd watchdog monitors your application for hangs. If the application fails to send periodic heartbeat notifications, systemd considers it unresponsive and restarts it.

Enable it in your unit file:

[Service]
Type=notify
WatchdogSec=30

Then, in your Python application, send heartbeat signals at regular intervals. The interval should be roughly half the WatchdogSec value:

import time
import threading

import sdnotify

notifier = sdnotify.SystemdNotifier()

def watchdog_ping():
    # Send a heartbeat at half the WatchdogSec interval (30 s above).
    while True:
        notifier.notify("WATCHDOG=1")
        time.sleep(15)

threading.Thread(target=watchdog_ping, daemon=True).start()

# Notify systemd that startup is complete
notifier.notify("READY=1")

If the whole process hangs (for example, a deadlock while holding the GIL), the heartbeat thread stops pinging and systemd restarts the process after the timeout expires. Note that a background thread will usually keep pinging even when your main loop is stuck in pure-Python code; for an asyncio application, it is more robust to send WATCHDOG=1 from a periodic task inside the event loop itself, so a stalled loop is detected too.

Common pitfalls

Here are some issues that commonly trip people up when running Python applications under systemd.

Buffered output. Python buffers stdout by default. If you forget the -u flag or don't set PYTHONUNBUFFERED=1, your logs may appear delayed or not at all in journald. Always disable buffering.

Forgetting daemon-reload. After editing a unit file, you must run systemctl daemon-reload before the changes take effect. This is easy to forget and leads to confusion when your edits seem to be ignored.

Hardcoded paths. Avoid relying on ~ or relative paths in unit files. systemd does not expand tilde, and the working directory may not be what you expect. Always use absolute paths.

Permission issues. If your application needs to bind to a port below 1024, do not run it as root. Instead, use AmbientCapabilities=CAP_NET_BIND_SERVICE in the unit file to grant just that one capability.
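
A sketch of that fix in the unit file (if you also set CapabilityBoundingSet= for hardening, keep the two in sync so the ambient capability is actually available):

```ini
[Service]
User=myapp
# Grant only the ability to bind privileged ports, nothing else.
AmbientCapabilities=CAP_NET_BIND_SERVICE
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
```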

Missing dependencies. If your service depends on a database like PostgreSQL or Redis, make sure you declare the dependency properly with After= and Requires= or Wants=. Without this, your application may start before the database is ready and fail on the first connection attempt.

Putting it all together

Here is a production-ready unit file that incorporates most of the techniques discussed in this article:

[Unit]
Description=My Production Python Application
Documentation=https://github.com/myorg/myapp
After=network.target postgresql.service redis.service
Wants=postgresql.service redis.service

[Service]
Type=notify
User=myapp
Group=myapp
WorkingDirectory=/opt/myapp
EnvironmentFile=/etc/myapp/environment
RuntimeDirectory=myapp

ExecStartPre=/opt/myapp/venv/bin/python manage.py migrate --noinput
ExecStart=/opt/myapp/venv/bin/gunicorn \
    --workers 4 \
    --worker-class gthread \
    --threads 2 \
    --bind unix:/run/myapp/gunicorn.sock \
    --access-logfile - \
    --error-logfile - \
    --timeout 120 \
    wsgi:app
ExecReload=/bin/kill -s HUP $MAINPID
Restart=on-failure
RestartSec=5
WatchdogSec=30

# Resource limits
MemoryMax=1G
CPUQuota=200%
LimitNOFILE=8192

# Security hardening
ProtectHome=true
ProtectSystem=strict
ReadWritePaths=/var/lib/myapp
PrivateTmp=true
PrivateDevices=true
NoNewPrivileges=true
CapabilityBoundingSet=
SystemCallArchitectures=native
SystemCallFilter=@system-service
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX

[Install]
WantedBy=multi-user.target

This unit file gives you automatic startup, graceful restarts, dependency ordering, environment file support, database migrations on startup, memory and CPU limits, a full security sandbox, and watchdog monitoring.

systemd is one of the most powerful tools available for running Python applications reliably in production. Learning to write good unit files is a worthwhile investment that will save you from a whole class of operational headaches.