Jenkins CI/CD (6/11): Integration Tests With Docker Compose
Summary: You add a real Postgres database to your CI pipeline using Docker Compose, write integration tests that connect, create a table, insert data, and read it back, then wire everything into the Jenkinsfile with proper bring-up, readiness checks, and guaranteed teardown. By the end your pipeline runs both unit tests and integration tests against a live database on every build.
Example Values Used in This Tutorial
| Key | Value |
|---|---|
| Postgres image | postgres:16 |
| Database name | testdb |
| Database user | testuser |
| Database password | testpass |
| Docker Compose file | docker-compose.yml |
| Integration test file | tests/test_integration.py |
| JUnit output (integration) | results/junit-integration.xml |
0. Prerequisites
- A working Jenkins controller at `http://localhost:8080` (Part 1).
- The `helloci` Python package with passing unit tests committed to Git (Part 2).
- A Jenkinsfile with Setup Python, Install Dependencies, Lint, and Unit Tests stages (Parts 3-5).
- Docker and Docker Compose v2 installed on the Jenkins agent (`docker compose version` should print `v2.x`).
- Familiarity with editing the `Jenkinsfile` and triggering builds from the Jenkins UI.
Note: Docker Compose v2 is a Docker CLI plugin. The command is `docker compose` (with a space), not the legacy `docker-compose` (with a hyphen). If `docker compose version` fails, install the Docker Compose plugin for your platform.
1. Integration Tests vs Unit Tests
The unit tests you wrote in Part 2 exercise pure Python logic. They call greet("Alice") and check the return value. No network, no database, no filesystem — just function calls and assertions. That is exactly what unit tests should do.
Integration tests are different. They verify that your code works correctly when it talks to real external systems. A function that builds a SQL query might pass every unit test, but fail catastrophically when it hits an actual Postgres database with real types, real constraints, and real network latency.
The distinction matters for CI:
- Unit tests are fast, isolated, and have zero dependencies. Run them first.
- Integration tests are slower, require infrastructure (a database, a message broker, an API), and can fail for environmental reasons. Run them after unit tests pass.
In this tutorial you spin up a Postgres container, connect to it from Python, and prove the integration works. Then you teach Jenkins to do the same thing on every build.
2. Create the Docker Compose File
Create a docker-compose.yml file in the root of your helloci repository. This file tells Docker Compose to start a Postgres 16 container with a test database, user, and password.
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: testdb
      POSTGRES_USER: testuser
      POSTGRES_PASSWORD: testpass
    ports:
      - "5432:5432"
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "testuser", "-d", "testdb"]
      interval: 2s
      timeout: 5s
      retries: 10
A few things to note about this file:
- The `environment` block creates the database and user on first startup. No manual SQL setup needed.
- The `ports` mapping exposes Postgres on `localhost:5432` so your tests can connect from outside the container.
- The `healthcheck` block tells Docker Compose when the container is actually ready to accept connections. This is critical — Postgres needs a few seconds to initialize, and connecting before it is ready causes test failures.
Warning: The `testpass` password is fine for a throwaway CI database. Never use weak passwords for databases that hold real data.
3. Add psycopg2-binary to Test Dependencies
Your integration tests need a Postgres driver. Open pyproject.toml and add psycopg2-binary to the test dependency group.
If your pyproject.toml currently looks like this:
[project.optional-dependencies]
test = [
    "pytest",
    "ruff",
]
Change it to:
[project.optional-dependencies]
test = [
    "pytest",
    "ruff",
    "psycopg2-binary",
]
The psycopg2-binary package is a self-contained Postgres adapter that does not require libpq development headers. It is ideal for CI environments where you do not want to compile C extensions.
Reinstall the test dependencies locally to pick up the new package:
.venv/bin/pip install -e ".[test]"
Verify the driver is available:
.venv/bin/python -c "import psycopg2; print(psycopg2.__version__)"
You should see a version string like 2.9.9. If you get an ImportError, the install did not work — check the pip output for errors.
4. Write the Integration Tests
Create the file tests/test_integration.py. This test connects to the Postgres container, creates a table, inserts a row, reads it back, and verifies the data.
import psycopg2
import pytest


@pytest.fixture
def db_conn():
    """Connect to the test Postgres database and clean up after."""
    conn = psycopg2.connect(
        host="localhost",
        port=5432,
        dbname="testdb",
        user="testuser",
        password="testpass",
    )
    conn.autocommit = True
    yield conn
    conn.close()


def test_create_table(db_conn):
    """Create a table and verify it exists."""
    cur = db_conn.cursor()
    cur.execute("DROP TABLE IF EXISTS greetings;")
    cur.execute(
        """
        CREATE TABLE greetings (
            id SERIAL PRIMARY KEY,
            name TEXT NOT NULL,
            message TEXT NOT NULL
        );
        """
    )
    cur.execute(
        "SELECT EXISTS ("
        " SELECT FROM information_schema.tables"
        " WHERE table_name = 'greetings'"
        ");"
    )
    exists = cur.fetchone()[0]
    assert exists is True
    cur.close()


def test_insert_and_read(db_conn):
    """Insert a row and read it back."""
    cur = db_conn.cursor()
    cur.execute("DROP TABLE IF EXISTS greetings;")
    cur.execute(
        """
        CREATE TABLE greetings (
            id SERIAL PRIMARY KEY,
            name TEXT NOT NULL,
            message TEXT NOT NULL
        );
        """
    )
    cur.execute(
        "INSERT INTO greetings (name, message) VALUES (%s, %s);",
        ("Alice", "Hello, Alice!"),
    )
    cur.execute("SELECT name, message FROM greetings WHERE name = %s;", ("Alice",))
    row = cur.fetchone()
    assert row is not None
    assert row[0] == "Alice"
    assert row[1] == "Hello, Alice!"
    cur.close()


def test_multiple_rows(db_conn):
    """Insert multiple rows and verify the count."""
    cur = db_conn.cursor()
    cur.execute("DROP TABLE IF EXISTS greetings;")
    cur.execute(
        """
        CREATE TABLE greetings (
            id SERIAL PRIMARY KEY,
            name TEXT NOT NULL,
            message TEXT NOT NULL
        );
        """
    )
    names = [("Alice", "Hello, Alice!"), ("Bob", "Hello, Bob!"), ("Eve", "Hello, Eve!")]
    for name, message in names:
        cur.execute(
            "INSERT INTO greetings (name, message) VALUES (%s, %s);",
            (name, message),
        )
    cur.execute("SELECT COUNT(*) FROM greetings;")
    count = cur.fetchone()[0]
    assert count == 3
    cur.close()
Each test is independent. The db_conn fixture opens a fresh connection, and each test drops and recreates the table so there are no leftover rows from a previous test. This isolation is important — integration tests that depend on execution order are brittle and hard to debug.
Tip: Setting `autocommit = True` on the connection means every SQL statement is committed immediately. This keeps the tests simple by avoiding explicit `conn.commit()` calls.
5. Run Integration Tests Locally
Before wiring anything into Jenkins, prove the integration tests work on your machine. This is the same bring-up/test/teardown flow that Jenkins will follow.
Start the Postgres container in the background:
docker compose up -d
Wait for the healthcheck to report healthy:
docker compose exec postgres pg_isready -U testuser -d testdb --timeout=30
You should see output like:
localhost:5432 - accepting connections
Run the integration tests:
.venv/bin/pytest tests/test_integration.py -v
Expected output:
tests/test_integration.py::test_create_table PASSED
tests/test_integration.py::test_insert_and_read PASSED
tests/test_integration.py::test_multiple_rows PASSED
Tear down the container and remove the volume:
docker compose down -v
The -v flag removes the anonymous volume that Postgres uses for data storage. Without it, leftover data could leak into the next test run.
Note: If the tests fail with a connection refused error, the container may not be fully ready. Run `docker compose ps` to check the health status. If the status shows `starting`, wait a few seconds and try again.
6. Update the Jenkinsfile
Open the Jenkinsfile in your repository root. Replace its contents with the following. This is the complete file — every stage, from Setup Python through Integration Tests, with a post block that guarantees cleanup.
pipeline {
    agent any

    stages {
        stage('Setup Python') {
            steps {
                sh 'python3 -m venv .venv'
                sh '.venv/bin/pip install --upgrade pip'
            }
        }
        stage('Install Dependencies') {
            steps {
                sh '.venv/bin/pip install -e ".[test]"'
            }
        }
        stage('Lint') {
            steps {
                sh '.venv/bin/ruff check src/ tests/'
            }
        }
        stage('Unit Tests') {
            steps {
                sh 'mkdir -p results'
                sh '.venv/bin/pytest tests/test_greet.py --junitxml=results/junit.xml'
            }
        }
        stage('Integration Tests') {
            steps {
                sh 'docker compose up -d'
                sh 'docker compose exec -T postgres pg_isready -U testuser -d testdb --timeout=30'
                sh '.venv/bin/pytest tests/test_integration.py --junitxml=results/junit-integration.xml -v'
            }
        }
    }
    post {
        always {
            sh 'docker compose down -v || true'
            junit 'results/*.xml'
        }
    }
}
This Jenkinsfile has five stages and a post block. Here is what changed compared to Part 5:
- A new Integration Tests stage brings up Postgres, waits for it to be ready, and runs the integration test file.
- The `post { always }` block now calls `docker compose down -v` to tear down the container, then collects all JUnit XML files with `junit 'results/*.xml'`.
- The `|| true` after `docker compose down -v` prevents the teardown from failing the build if the containers were never started (for example, if the build failed before reaching the Integration Tests stage).
The Unit Tests stage targets only tests/test_greet.py and writes to results/junit.xml. The Integration Tests stage targets only tests/test_integration.py and writes to results/junit-integration.xml. Keeping the output files separate means Jenkins can show you which type of test failed without mixing results.
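Selecting tests by file path works well at this scale. If the suite grows, an alternative worth knowing is pytest's marker mechanism: tag integration tests with a custom marker and select them with `-m` instead of by path. A sketch only; the `integration` marker name and the `conftest.py` placement are choices of this example, not something the tutorial requires:

```python
# conftest.py -- register a custom "integration" marker so pytest
# does not warn about an unknown marker when tests use it.
def pytest_configure(config):
    config.addinivalue_line(
        "markers", "integration: tests that talk to live infrastructure"
    )

# In tests/test_integration.py, the whole module can then be tagged:
#
#     import pytest
#     pytestmark = pytest.mark.integration
#
# and the two Jenkins stages would select by marker instead of by path:
#
#     pytest -m "not integration" ...   (Unit Tests)
#     pytest -m integration ...         (Integration Tests)
```

The path-based approach used in this tutorial is simpler; markers pay off once integration tests are spread across many files.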
7. Why Readiness Checks Matter
You might be tempted to skip the pg_isready check and add a sleep 10 instead. Do not do that.
The sleep approach has two problems:
- Too short — on a slow agent, Postgres might need 15 seconds. Your tests fail intermittently, and you waste hours debugging a timing issue.
- Too long — on a fast agent, Postgres is ready in 2 seconds. You wait 10 seconds for nothing on every single build, forever.
The pg_isready command is a purpose-built readiness probe: it attempts a connection to Postgres and reports whether the server is accepting it. One subtlety: the --timeout=30 flag bounds a single connection attempt (useful when a starting server is slow to respond), but if the connection is actively refused, pg_isready returns immediately rather than retrying for 30 seconds. In this pipeline the Compose healthcheck does the repeated probing in the background, and pg_isready acts as the final gate; if you still see startup races on slow agents, wrap the probe in a short retry loop. Either way, the moment Postgres is ready you move on: the fastest possible startup without the risk of connecting too early.
This pattern applies to any service you spin up in CI:
| Service | Readiness check |
|---|---|
| Postgres | pg_isready -U user -d dbname |
| MySQL | mysqladmin ping -h localhost |
| Redis | redis-cli ping |
| HTTP API | curl --retry 10 --retry-connrefused http://localhost:8080/health |
Tip: If you ever see flaky integration tests in CI that pass locally, the first thing to check is the readiness probe. Nine times out of ten the service was not ready when the tests started.
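If you do hit a startup race despite the healthcheck, a small polling helper makes the retry logic explicit. This is a generic sketch, not part of the tutorial's pipeline; the probe callable could shell out to `pg_isready` or attempt a `psycopg2.connect`:

```python
import time


def wait_for(probe, timeout=30.0, interval=1.0):
    """Call probe() until it returns True or `timeout` seconds elapse.

    Returns True as soon as the probe succeeds, False on timeout.
    Unlike a bare sleep, this returns the moment the service is up.
    """
    deadline = time.monotonic() + timeout
    while True:
        if probe():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)
```

The shell-loop equivalent would be `until docker compose exec -T postgres pg_isready -U testuser -d testdb; do sleep 1; done`, bounded with the `timeout` command. Recent Docker Compose v2 releases also support `docker compose up -d --wait`, which blocks until the healthcheck reports healthy.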
8. Teardown in post { always }
The post { always } block runs after the pipeline finishes, regardless of whether the build passed or failed. This is the only safe place to put teardown logic.
Consider what happens without it:
- Docker Compose starts Postgres.
- The integration tests fail.
- Jenkins marks the build as failed and stops.
- The Postgres container keeps running.
- The next build starts. Port `5432` is already in use. Docker Compose fails. The build fails again — but now for an infrastructure reason, not a test failure.
By putting docker compose down -v in post { always }, you guarantee the container is stopped and the volume is removed no matter what happens during the build. The || true suffix is a safety net — if the containers were never started (because the build failed before reaching the Integration Tests stage), docker compose down would exit with an error. The || true swallows that error so it does not obscure the real failure.
post {
    always {
        sh 'docker compose down -v || true'
        junit 'results/*.xml'
    }
}
The junit 'results/*.xml' step uses a glob pattern to collect both results/junit.xml (unit tests) and results/junit-integration.xml (integration tests). Jenkins merges them into a single test report dashboard.
Warning: Never put teardown commands inside a stage. If a previous stage fails, subsequent stages are skipped — and your teardown never runs. Always use `post { always }`.
9. Commit and Trigger the Build
You have four new or modified files to commit:
- `docker-compose.yml` — the Postgres service definition.
- `pyproject.toml` — updated test dependencies.
- `tests/test_integration.py` — the integration test file.
- `Jenkinsfile` — the updated pipeline.
Stage and commit everything:
git add docker-compose.yml pyproject.toml tests/test_integration.py Jenkinsfile
git commit -m "Add integration tests with Docker Compose Postgres"
git push origin main
Go to your pipeline job in Jenkins and click Build Now.
Watch the Stage View as the build progresses. You should see five stages complete in order:
- Setup Python — creates the venv.
- Install Dependencies — installs the package and test dependencies, including `psycopg2-binary`.
- Lint — runs `ruff`.
- Unit Tests — runs unit tests and writes JUnit XML.
- Integration Tests — starts Postgres, waits for readiness, runs integration tests.
After the build completes, the post block tears down the Postgres container and collects the test reports.
Click Test Result on the build page. You should see all tests — unit and integration — merged into a single report. The integration tests appear under the tests.test_integration class.
10. What to Do When It Fails
If the Integration Tests stage fails, check these common issues.
| Symptom | Cause | Fix |
|---|---|---|
| `docker: command not found` | Docker not installed, or the `jenkins` user lacks permission to use it | `sudo usermod -aG docker jenkins`, then restart Jenkins |
| `bind: address already in use` on port 5432 | A previous build left a container running | Run `docker compose down -v` on the agent |
| `psycopg2.OperationalError: Connection refused` | Port mapping wrong, or a firewall blocking localhost | Verify `docker-compose.yml` maps `5432:5432` |
| `pg_isready: timeout expired` | Postgres took longer than 30 seconds on a slow agent | Increase `--timeout` or check available memory |
Tip: For the Docker permission issue, the agent must be restarted after adding the `jenkins` user to the `docker` group. A simple `sudo systemctl restart jenkins` is enough.
Summary
You extended the CI pipeline from unit tests only to unit tests plus integration tests against a real Postgres database. Here is what you accomplished:
- Created a `docker-compose.yml` that starts Postgres 16 with a healthcheck.
- Added `psycopg2-binary` to the test dependencies in `pyproject.toml`.
- Wrote integration tests that connect to Postgres, create a table, insert rows, and verify the data.
- Ran the full bring-up/test/teardown cycle locally before touching Jenkins.
- Added an Integration Tests stage to the Jenkinsfile that starts Postgres, waits for readiness with `pg_isready`, and runs the integration test suite.
- Put `docker compose down -v` in `post { always }` to guarantee cleanup on every build.
- Learned why readiness checks beat `sleep` and why teardown belongs in `post { always }`, not in a stage.
The pipeline now validates both pure logic (unit tests) and real database interactions (integration tests) on every build. Next up in Part 7: you capture build artifacts and Docker logs so that when something does fail, you have everything you need to debug it.