# Introduction
Python and data projects have a dependency problem. Between Python versions, virtual environments, system-level packages, and operating system differences, getting someone else's code to run on your machine can often take longer than understanding the code itself.
Docker solves this by packaging your code and its entire environment (Python version, dependencies, system libraries) into a single artifact called an image. From the image you can start containers that run identically on your laptop, your teammate's machine, and a cloud server. You stop debugging environments and start shipping work.
In this article, you'll learn Docker through practical examples with a focus on data projects: containerizing a script, serving a machine learning model with FastAPI, wiring up a multi-service pipeline with Docker Compose, and scheduling a job with a cron container.
# Prerequisites
Before working through the examples, you'll need:
- Docker and Docker Compose installed for your operating system. Follow the official installation guide for your platform.
- Familiarity with the command line and Python.
- Familiarity with writing a Dockerfile, building an image, and running a container from that image.
You don't need deep Docker knowledge to follow along. Each example explains what's happening as it goes.
# Containerizing a Python Script with Pinned Dependencies
Let's start with the most common use case: you have a Python script and a requirements.txt, and you want it to run reliably anywhere.
We'll build a data cleaning script that reads a raw sales CSV file, removes duplicates, fills in missing values, and writes a cleaned version to disk.
## Structuring the Project
The project is organized as follows:
```
data-cleaner/
├── Dockerfile
├── requirements.txt
├── clean_data.py
└── data/
    └── raw_sales.csv
```
## Writing the Script
Here's the data cleaning script that uses Pandas to do the heavy lifting:
```python
# clean_data.py
import pandas as pd

INPUT_PATH = "data/raw_sales.csv"
OUTPUT_PATH = "data/cleaned_sales.csv"

print("Reading data...")
df = pd.read_csv(INPUT_PATH)
print(f"Rows before cleaning: {len(df)}")

# Drop duplicate rows
df = df.drop_duplicates()

# Fill missing numeric values with the column median
for col in df.select_dtypes(include="number").columns:
    df[col] = df[col].fillna(df[col].median())

# Fill missing text values with 'Unknown'
for col in df.select_dtypes(include="object").columns:
    df[col] = df[col].fillna("Unknown")

print(f"Rows after cleaning: {len(df)}")
df.to_csv(OUTPUT_PATH, index=False)
print(f"Cleaned file saved to {OUTPUT_PATH}")
```
## Pinning Dependencies
Pinning exact versions is important. Without it, pip install pandas might install different versions on different machines. Pinned versions guarantee everyone gets the same behavior. You can define the exact versions in the requirements.txt file like so:
```
pandas==2.2.0
openpyxl==3.1.2
```
## Defining the Dockerfile
This Dockerfile builds a minimal, cache-friendly image for the cleaning script:
```dockerfile
# Use a slim Python 3.11 base image
FROM python:3.11-slim

# Set the working directory inside the container
WORKDIR /app

# Copy and install dependencies first (for layer caching)
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the script into the container
COPY clean_data.py .

# Default command to run when the container starts
CMD ["python", "clean_data.py"]
```
There are a few things worth explaining here. We use python:3.11-slim instead of the full Python image because it's significantly smaller and strips out packages you don't need.
We copy requirements.txt before copying the rest of the code, and that is intentional. Docker builds images in layers and caches each one. If you only change clean_data.py, Docker won't reinstall all your dependencies on the next build. It reuses the cached pip layer and jumps straight to copying your updated script. That small ordering decision can save you minutes of rebuild time.
## Building and Running
Build the image, then run the container with your local data folder mounted:
```bash
# Build the image and tag it
docker build -t data-cleaner .

# Run it, mounting your local data/ folder into the container
docker run --rm -v $(pwd)/data:/app/data data-cleaner
```
The -v $(pwd)/data:/app/data flag mounts your local data/ folder into the container at /app/data. That's how the script reads your CSV and how the cleaned output gets written back to your machine. Nothing is baked into the image, and the data stays on your filesystem.
The --rm flag automatically removes the container after it finishes. Since this is a one-off script, there's no reason to keep a stopped container lying around.
# Serving a Machine Learning Model with FastAPI
You've trained a model and you want to make it available over HTTP so other services can send data and get predictions back. FastAPI works great for this: it's fast, lightweight, and handles input validation with Pydantic.
## Structuring the Project
The project separates the model artifact from the application code:
```
ml-api/
├── Dockerfile
├── requirements.txt
├── app.py
└── model.pkl
```
## Writing the App
The following app loads the model once at startup and exposes a /predict endpoint:
```python
# app.py
import pickle

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Sales Forecast API")

# Load the model once at startup
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

class PredictRequest(BaseModel):
    region: str
    month: int
    marketing_spend: float
    units_in_stock: int

class PredictResponse(BaseModel):
    region: str
    predicted_revenue: float

@app.get("/health")
def health():
    return {"status": "ok"}

@app.post("/predict", response_model=PredictResponse)
def predict(request: PredictRequest):
    try:
        features = [[
            request.month,
            request.marketing_spend,
            request.units_in_stock
        ]]
        prediction = model.predict(features)
        return PredictResponse(
            region=request.region,
            predicted_revenue=round(float(prediction[0]), 2)
        )
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))
```
The PredictRequest class does the input validation for you. If someone sends a request with a missing field or a string where a number is expected, FastAPI rejects it with a clear error message before your model code even runs. The model is loaded once at startup, not on each request, which keeps response times fast.
The /health endpoint is a small but important addition: Docker, load balancers, and cloud platforms use it to check whether your service is actually up and ready.
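The article doesn't show how model.pkl was produced. For a self-contained test, here is a minimal sketch of training and pickling a compatible model, assuming a scikit-learn linear regression over the same three numeric features the endpoint passes in; the training data below is synthetic and purely illustrative:
```python
# train_model.py (hypothetical; produces a model.pkl compatible with app.py)
import pickle

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)

# Synthetic features: month, marketing_spend, units_in_stock
X = np.column_stack([
    rng.integers(1, 13, size=200),
    rng.uniform(0, 10_000, size=200),
    rng.integers(0, 1_000, size=200),
])
# Synthetic revenue with some noise
y = 50 * X[:, 0] + 0.8 * X[:, 1] + 2 * X[:, 2] + rng.normal(0, 500, size=200)

model = LinearRegression().fit(X, y)
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)
```
Note that whatever library trained the model must also appear in requirements.txt, because unpickling recreates the model class inside the container.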
## Defining the Dockerfile
This Dockerfile bakes the model directly into the image so the container is fully self-contained:
```dockerfile
FROM python:3.11-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the model and the app together
COPY model.pkl .
COPY app.py .

EXPOSE 8000

CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
```
The model.pkl is baked into the image at build time. This means the container is fully self-contained, and you don't need to mount anything when you run it. The --host 0.0.0.0 flag tells Uvicorn to listen on all network interfaces inside the container, not just localhost. Without this, you won't be able to reach the API from outside the container.
## Building and Running
Build the image and start the API server:
```bash
docker build -t ml-api .
docker run --rm -p 8000:8000 ml-api
```
Test it with curl:
```bash
curl -X POST http://localhost:8000/predict \
  -H "Content-Type: application/json" \
  -d '{"region": "North", "month": 3, "marketing_spend": 5000.0, "units_in_stock": 320}'
```
# Building a Multi-Service Pipeline with Docker Compose
Real data projects rarely involve just one process. You might need a database, a script that loads data into it, and a dashboard that reads from it, all running together.
Docker Compose lets you define and run multiple containers as a single application. Each service gets its own container, but they all share a private network so they can talk to each other.
## Structuring the Project
The pipeline splits each service into its own subdirectory:
```
pipeline/
├── docker-compose.yml
├── loader/
│   ├── Dockerfile
│   ├── requirements.txt
│   └── load_data.py
└── dashboard/
    ├── Dockerfile
    ├── requirements.txt
    └── app.py
```
## Defining the Compose File
This Compose file declares all three services and wires them together with health checks and shared URL environment variables:
```yaml
# docker-compose.yml
version: "3.9"

services:
  db:
    image: postgres:15
    environment:
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: analytics
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U admin -d analytics"]
      interval: 5s
      retries: 5

  loader:
    build: ./loader
    depends_on:
      db:
        condition: service_healthy
    environment:
      DATABASE_URL: postgresql://admin:secret@db:5432/analytics

  dashboard:
    build: ./dashboard
    depends_on:
      db:
        condition: service_healthy
    ports:
      - "8501:8501"
    environment:
      DATABASE_URL: postgresql://admin:secret@db:5432/analytics

volumes:
  pgdata:
```
## Writing the Loader Script
This script waits briefly for the database, then loads a CSV into the sales table using SQLAlchemy:
```python
# loader/load_data.py
import os
import time

import pandas as pd
from sqlalchemy import create_engine

DATABASE_URL = os.environ["DATABASE_URL"]

# Give the DB a moment to be fully ready
time.sleep(3)

engine = create_engine(DATABASE_URL)
df = pd.read_csv("sales_data.csv")
df.to_sql("sales", engine, if_exists="replace", index=False)
print(f"Loaded {len(df)} rows into the sales table.")
```
Let's take a closer look at the Compose file. Each service runs in its own container, but they're all on the same Docker-managed network, so they can reach each other using the service name as a hostname. The loader connects to db:5432, not localhost, because db is the service name, and Docker handles the DNS resolution automatically.
The healthcheck on the PostgreSQL service is important. depends_on alone only waits for the container to start, not for PostgreSQL to be ready to accept connections. The healthcheck uses pg_isready to confirm the database is actually up before the loader tries to connect. The pgdata volume persists the database between runs; stopping and restarting the pipeline won't wipe your data.
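The dashboard's code isn't shown above. As a minimal sketch, assuming Streamlit (which matches the 8501 port mapping in the Compose file) and the sales table created by the loader, dashboard/app.py could look like this:
```python
# dashboard/app.py (a minimal sketch, assuming Streamlit)
import os

import pandas as pd
import streamlit as st
from sqlalchemy import create_engine

# Same DATABASE_URL the Compose file injects into this service
engine = create_engine(os.environ["DATABASE_URL"])
df = pd.read_sql("SELECT * FROM sales", engine)

st.title("Sales Dashboard")
st.dataframe(df)
```
For this to run, the dashboard's requirements.txt would need streamlit, pandas, SQLAlchemy, and a PostgreSQL driver such as psycopg2-binary.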
## Starting Everything
Bring up all services with a single command:
```bash
docker compose up --build
```
To stop everything, run:
```bash
docker compose down
```
# Scheduling Jobs with a Cron Container
Sometimes you need a script to run on a schedule. Maybe it fetches data from an API every hour and writes it to a database or a file. You don't want to set up a full orchestration system like Airflow for something this simple. A cron container does the job cleanly.
## Structuring the Project
The project includes a crontab file alongside the script and Dockerfile:
```
data-fetcher/
├── Dockerfile
├── requirements.txt
├── fetch_data.py
└── crontab
```
## Writing the Fetch Script
This script uses Requests to hit an API endpoint and saves the results as a timestamped CSV:
```python
# fetch_data.py
import os
from datetime import datetime

import pandas as pd
import requests

API_URL = "https://example.com/api/sales"  # placeholder; point this at your actual API
OUTPUT_DIR = "/app/output"

os.makedirs(OUTPUT_DIR, exist_ok=True)

print(f"[{datetime.now()}] Fetching data...")
response = requests.get(API_URL, timeout=10)
response.raise_for_status()

data = response.json()
df = pd.DataFrame(data["records"])

timestamp = datetime.now().strftime("%Y%m%d_%H%M")
output_path = f"{OUTPUT_DIR}/sales_{timestamp}.csv"
df.to_csv(output_path, index=False)
print(f"[{datetime.now()}] Saved {len(df)} records to {output_path}")
```
## Defining the Crontab
The crontab schedules the script to run every hour and redirects all output to a log file:
```
# Run every hour, on the hour
0 * * * * python /app/fetch_data.py >> /var/log/fetch.log 2>&1
```
The >> /var/log/fetch.log 2>&1 part redirects both standard output and standard error to a log file. That's how you inspect what happened after the fact.
## Defining the Dockerfile
This Dockerfile installs cron, registers the schedule, and keeps it running in the foreground:
```dockerfile
FROM python:3.11-slim

# Install cron
RUN apt-get update && apt-get install -y cron && rm -rf /var/lib/apt/lists/*

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY fetch_data.py .
COPY crontab /etc/cron.d/fetch-job

# Set correct permissions and register the crontab
RUN chmod 0644 /etc/cron.d/fetch-job && crontab /etc/cron.d/fetch-job

# cron -f runs cron in the foreground, which is required for Docker
CMD ["cron", "-f"]
```
The cron -f flag is important here. Docker keeps a container alive as long as its main process is running. If cron ran in the background (its default), the main process would exit immediately and Docker would stop the container. The -f flag keeps cron running in the foreground so the container stays alive.
## Building and Running
Build the image and start the container in detached mode:
```bash
docker build -t data-fetcher .
docker run -d --name fetcher -v $(pwd)/output:/app/output data-fetcher
```
Check the logs any time:
```bash
docker exec fetcher cat /var/log/fetch.log
```
The output folder is mounted from your local machine, so the CSV files land on your filesystem even though the script runs inside the container.
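Once a few runs have accumulated, you can pick up the newest snapshot straight from the mounted folder. A small hypothetical helper:
```python
from pathlib import Path

import pandas as pd

# The %Y%m%d_%H%M timestamps sort lexicographically, so max() finds the newest file
latest = max(Path("output").glob("sales_*.csv"))
df = pd.read_csv(latest)
print(f"Loaded {len(df)} rows from {latest.name}")
```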
# Wrapping Up
I hope you found this Docker article helpful. Docker doesn't have to be complicated. Start with the first example, swap in your own script and dependencies, and get comfortable with the build-run cycle. Once you've done that, the other patterns follow naturally. Docker is a good fit when:
- You need reproducible environments across machines or team members
- You're sharing scripts or models that have specific dependency requirements
- You're building multi-service systems that need to run together reliably
- You want to deploy anywhere without setup friction
That said, you don't always need to use Docker for all of your Python work. It's probably overkill when:
- You're doing quick, exploratory analysis just for yourself
- Your script has no external dependencies beyond the standard library
- You're early in a project and your requirements are changing rapidly
If you're interested in going further, check out 5 Simple Steps to Mastering Docker for Data Science.
Happy coding!
Bala Priya C is a developer and technical writer from India. She likes working at the intersection of math, programming, data science, and content creation. Her areas of interest and expertise include DevOps, data science, and natural language processing. She enjoys reading, writing, coding, and coffee! Currently, she's working on learning and sharing her knowledge with the developer community by authoring tutorials, how-to guides, opinion pieces, and more. Bala also creates engaging resource overviews and coding tutorials.



