Jenkins CI/CD (9/11): Concurrency and Port Conflicts
Summary: You diagnose the port-collision problem that strikes when two builds run at the same time on the same Jenkins agent, then fix it two ways: first by disabling concurrent builds entirely with
disableConcurrentBuilds(), then by isolating each build with unique Docker Compose project names and internal Docker networks. By the end you have a concurrency-safe Jenkinsfile and a clear understanding of when to serialize builds versus when to let them overlap.
Example Values Used in This Tutorial
| Key | Value |
|---|---|
| Concurrency option | disableConcurrentBuilds() |
| Compose project env | COMPOSE_PROJECT_NAME |
| Project name formula | ${JOB_NAME}-${BUILD_NUMBER} |
| Previous parts | Parts 1-8 completed |
0. Prerequisites
- A working Jenkins controller at http://localhost:8080 (Part 1).
- The helloci Python package with unit tests, integration tests, linting, and Docker Compose Postgres (Parts 2-6).
- A Jenkinsfile with timeout, retry, workspace cleanup, and artifact archiving (Parts 7-8).
- Docker and Docker Compose v2 installed on the Jenkins agent.
- The current docker-compose.yml exposes Postgres on host port 5432.
Note: If you jumped ahead, go back and work through Parts 7 and 8 first. This tutorial builds directly on the Jenkinsfile produced in Part 8.
1. The Problem — Two Builds, One Port
Everything works when builds run one at a time. The trouble starts when two builds overlap.
Picture this scenario: you push a commit while a build is already in progress. Jenkins starts a second build immediately. Both builds reach the Integration Tests stage and both call docker compose up -d. The first build binds Postgres to host port 5432. The second build tries to bind the same port and fails:
Error response from daemon: driver failed programming external connectivity
on endpoint postgres: Bind for 0.0.0.0:5432 failed: port is already allocated
The second build crashes — not because of a test failure, but because of a port collision. This is the “works sometimes” trap. Your pipeline passes when builds run alone and fails randomly when they overlap.
The root cause is in docker-compose.yml. The ports mapping pins Postgres to a fixed host port:
ports:
  - "5432:5432"
Two containers cannot bind to the same host port at the same time. It does not matter that they belong to different builds — the host network is shared.
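The collision is plain operating-system behavior, not anything Docker-specific: only one socket can bind a given host address and port at a time. A minimal Python sketch, with two sockets standing in for the two builds' Postgres containers, reproduces the failure:

```python
import socket

# "Build #42" binds a port first. Port 0 lets the OS pick a free one,
# so the sketch does not collide with anything else on the machine.
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))
port = first.getsockname()[1]      # the host port build #42 now holds

collided = False
second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(("127.0.0.1", port))   # "build #43" asks for the same port
except OSError:                        # EADDRINUSE: port is already allocated
    collided = True
finally:
    second.close()
    first.close()

print("collision:", collided)
```

The Docker error message in section 1 is this same `EADDRINUSE` failure, surfaced through the Docker daemon's port-forwarding setup.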
2. Why Jenkins Is Different From GitHub Actions
If you have used GitHub Actions, GitLab CI, or any “fresh VM per run” system, you might wonder why this is even an issue. On those platforms every job gets its own isolated virtual machine or container. Two jobs that both bind port 5432 never collide because they run on separate hosts.
Jenkins works differently. A Jenkins agent is typically a long-lived machine (physical, VM, or a persistent container) that runs multiple builds sequentially or concurrently. The workspace directory is different for each build, but the host network, the Docker daemon, and the port space are shared.
This shared-agent model is one of Jenkins’s strengths — you avoid the cold-start overhead of spinning up a fresh VM for every build. But it means you must manage resource isolation yourself. Ports, Docker container names, and volume mounts are all potential collision points when two builds run side by side.
There are three patterns for dealing with this:
| Pattern | Approach | Throughput |
|---|---|---|
| A — Serialize | disableConcurrentBuilds() | One build at a time |
| B — Isolate | Unique project names + Docker networks | Full parallelism |
| C — Dedicate | One agent per build | Full parallelism, higher cost |
Pattern A is the simplest. Pattern B is more powerful. Pattern C gets only a brief mention here — it solves the problem by throwing hardware at it.
3. Pattern A — Disable Concurrent Builds
The fastest fix is to tell Jenkins: never run two builds of this job at the same time. If a second build is triggered while the first is still running, it waits in the queue until the first one finishes.
Add disableConcurrentBuilds() to the options block:
options {
    timeout(time: 15, unit: 'MINUTES')
    disableConcurrentBuilds()
}
That is the entire change. Jenkins handles the queuing automatically. When the running build finishes (pass or fail), the next queued build starts.
3.1 The Full Jenkinsfile — Pattern A
Here is the complete Jenkinsfile with disableConcurrentBuilds() applied. This is the same pipeline from Part 8 with one line added to the options block.
pipeline {
    agent any
    options {
        timeout(time: 15, unit: 'MINUTES')
        disableConcurrentBuilds()
    }
    stages {
        stage('Setup Python') {
            steps {
                sh 'python3 -m venv .venv'
                sh '.venv/bin/pip install --upgrade pip'
            }
        }
        stage('Install Dependencies') {
            steps {
                sh '.venv/bin/pip install -e ".[test]"'
            }
        }
        stage('Lint') {
            steps {
                sh '.venv/bin/ruff check src/ tests/'
            }
        }
        stage('Unit Tests') {
            steps {
                sh 'mkdir -p results'
                sh '.venv/bin/pytest tests/test_greet.py --junitxml=results/junit.xml'
            }
        }
        stage('Integration Tests') {
            steps {
                retry(2) {
                    sh 'docker compose up -d'
                }
                sh 'docker compose exec -T postgres pg_isready -U testuser -d testdb --timeout=30'
                sh '.venv/bin/pytest tests/test_integration.py --junitxml=results/junit-integration.xml -v'
            }
        }
    }
    post {
        always {
            sh 'docker compose logs > results/docker-logs.txt 2>&1 || true'
            sh 'docker compose down -v || true'
            junit 'results/*.xml'
            archiveArtifacts artifacts: 'results/**', allowEmptyArchive: true
            cleanWs()
        }
    }
}
The disableConcurrentBuilds() directive sits next to timeout. Together they mean: each build gets at most 15 minutes, and only one build runs at a time.
Tip: disableConcurrentBuilds() is the right default for most small teams. It eliminates an entire class of infrastructure bugs in exchange for slightly longer queue times.
4. Pattern B — Unique Project Names and Docker Networks
Pattern A works, but it serializes everything. If your team pushes frequently, builds queue up and feedback slows down. Pattern B lets builds run in parallel by giving each one its own isolated Docker Compose environment.
The fix has two parts:
- Set COMPOSE_PROJECT_NAME to a unique value per build so Docker Compose creates separate containers, networks, and volumes for each build.
- Remove the fixed host port mapping and let the tests connect through Docker’s internal network instead.
4.1 Set the Environment Variable
Add an environment block to the Jenkinsfile:
pipeline {
    agent any
    options {
        timeout(time: 15, unit: 'MINUTES')
    }
    environment {
        COMPOSE_PROJECT_NAME = "${env.JOB_NAME}-${env.BUILD_NUMBER}"
    }
Docker Compose uses COMPOSE_PROJECT_NAME to prefix all resource names. Build number 42 of job helloci creates containers named helloci-42-postgres-1 instead of helloci-postgres-1. Two builds running at the same time get completely separate containers.
Note: Notice that disableConcurrentBuilds() is removed from options. Pattern B is designed to allow parallel builds, so you do not want to serialize them.
Warning: In a multibranch pipeline, JOB_NAME contains / characters (e.g., myorg/helloci/feature-branch), which produces an invalid Compose project name. Sanitize it first: COMPOSE_PROJECT_NAME = "${env.JOB_NAME.replaceAll('[^a-zA-Z0-9_-]', '-')}-${env.BUILD_NUMBER}".
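To see what that substitution produces, here is the same logic sketched in Python (the compose_project_name helper is illustrative, not part of the pipeline; the added .lower() is an extra precaution, since Compose v2 also rejects uppercase characters in explicit project names):

```python
import re

def compose_project_name(job_name: str, build_number: int) -> str:
    """Mirror the Groovy replaceAll: swap any character that is not
    alphanumeric, underscore, or hyphen for a hyphen, then lowercase
    and append the build number."""
    safe = re.sub(r"[^a-zA-Z0-9_-]", "-", job_name).lower()
    return f"{safe}-{build_number}"

# A multibranch job name contains slashes that Compose would reject:
print(compose_project_name("myorg/helloci/feature-branch", 42))
# myorg-helloci-feature-branch-42
```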
4.2 Remove the Fixed Host Port
Open docker-compose.yml and remove the ports mapping entirely:
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: testdb
      POSTGRES_USER: testuser
      POSTGRES_PASSWORD: testpass
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "testuser", "-d", "testdb"]
      interval: 2s
      timeout: 5s
      retries: 10
Without ports, Postgres is only accessible inside the Docker Compose network. No host port, no collision.
4.3 Connect Tests Through the Docker Network
Your integration tests currently connect to localhost:5432. With the host port removed, that no longer works. You need a way for pytest to reach Postgres inside the Docker network.
The cleanest pattern for CI is to add a lightweight test-runner service to docker-compose.yml that shares the Docker network with Postgres. This keeps the test environment fully isolated — no host ports, no IP lookups, no extra plumbing. Here is the approach:
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: testdb
      POSTGRES_USER: testuser
      POSTGRES_PASSWORD: testpass
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "testuser", "-d", "testdb"]
      interval: 2s
      timeout: 5s
      retries: 10
  test-runner:
    image: python:3.12-slim
    working_dir: /app
    volumes:
      - .:/app
    depends_on:
      postgres:
        condition: service_healthy
    environment:
      DB_HOST: postgres
      DB_PORT: "5432"
The test-runner service mounts your project directory and can reach postgres by hostname on the internal Docker network. Update your integration tests to read connection details from environment variables:
import os
host = os.environ.get("DB_HOST", "localhost")
port = int(os.environ.get("DB_PORT", "5432"))
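Building on those environment variables, here is one way a test helper might assemble the full connection string (the db_dsn helper is hypothetical; the credentials match the Compose file above):

```python
import os

def db_dsn() -> str:
    """Assemble a Postgres DSN from environment variables, falling back
    to the localhost defaults used when tests run outside Compose."""
    host = os.environ.get("DB_HOST", "localhost")
    port = int(os.environ.get("DB_PORT", "5432"))
    return f"postgresql://testuser:testpass@{host}:{port}/testdb"

# Inside the test-runner service, Compose sets DB_HOST=postgres:
os.environ["DB_HOST"] = "postgres"
print(db_dsn())  # postgresql://testuser:testpass@postgres:5432/testdb
```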
Then update the Integration Tests stage in the Jenkinsfile to use docker compose run:
stage('Integration Tests') {
    steps {
        sh '''
            docker compose run --rm test-runner \
                sh -c "pip install -e '.[test]' && \
                    pytest tests/test_integration.py \
                    --junitxml=results/junit-integration.xml -v"
        '''
    }
}
Both commands run inside the same container, so packages installed by pip are available when pytest executes. The --rm flag removes the container after the combined command finishes.
Each build gets its own Compose project, its own network, its own Postgres container, and its own test runner. No shared ports, no collisions.
Warning: Pattern B adds complexity. You need to manage the test-runner service, mount volumes correctly, and ensure results files are written back to the workspace. Only adopt it if your team actually needs parallel builds.
Tip: Alternatives to the test-runner service include running docker compose exec to invoke pytest inside an existing container, looking up the container IP with docker inspect, or using docker compose port to discover dynamically assigned host ports. These work but are more fragile in CI, where network topology varies between agents.
5. Pattern C — Dedicated Agents
A third option is to give each build its own Docker-capable agent. Jenkins supports this through the Docker agent plugin or Kubernetes plugin, which spins up a fresh container or pod for every build.
pipeline {
    agent {
        docker {
            image 'python:3.12-slim'
        }
    }
    // ... stages as before
}
This approach mimics the “fresh VM per run” model of GitHub Actions. Each build runs in a fresh container with its own network namespace, so port conflicts cannot happen. (Avoid passing args '--network host' here: that would share the agent’s host network and reintroduce the conflict.)
The tradeoff is cost and complexity. You need a container orchestrator (Docker-in-Docker or Kubernetes), more compute resources, and agent templates. For a small team with a single Jenkins controller, this is usually overkill. For a large organization running hundreds of builds per day, it is the standard approach.
Tip: Pattern C is worth exploring if you have already outgrown Patterns A and B. The Jenkins documentation for the Docker Pipeline plugin and the Kubernetes plugin covers the setup in detail.
6. Choosing a Pattern
Use this table to decide which approach fits your situation.
| Factor | Pattern A: Serialize | Pattern B: Isolate | Pattern C: Dedicate |
|---|---|---|---|
| Complexity | Minimal — one line | Moderate — env vars, Compose changes | High — agent infrastructure |
| Throughput | One build at a time | Full parallelism | Full parallelism |
| Port conflicts | Impossible (serialized) | Impossible (isolated networks) | Impossible (isolated hosts) |
| Setup time | Seconds | 30 minutes | Hours to days |
| Best for | Small teams, low push frequency | Medium teams, frequent pushes | Large orgs, many pipelines |
For this tutorial series, Pattern A is the recommended approach. It is one line of configuration, it eliminates the problem completely, and it matches the single-agent setup you have been building since Part 1. Adopt Pattern B or C later if and when build queue times become a bottleneck.
7. Test It — Trigger Two Builds Rapidly
Commit the updated Jenkinsfile with disableConcurrentBuilds() (Pattern A):
git add Jenkinsfile
git commit -m "Add disableConcurrentBuilds to prevent port conflicts"
git push origin main
Now trigger two builds in quick succession. You can do this from the Jenkins UI or from the command line.
7.1 From the Jenkins UI
- Open your pipeline job in Jenkins.
- Click Build Now.
- Immediately click Build Now again before the first build finishes.
Look at the build queue in the left sidebar. The second build should show a message like:
Build #43 is waiting for build #42 to finish (concurrency limit: 1)
The second build does not start until the first one completes. No port conflict, no race condition.
7.2 From the CLI
If you have the Jenkins CLI or the curl method configured:
curl -X POST http://localhost:8080/job/helloci/build --user admin:TOKEN
curl -X POST http://localhost:8080/job/helloci/build --user admin:TOKEN
Replace admin:TOKEN with your Jenkins username and API token.
7.3 Verify the Results
After both builds finish:
- Build #42 should show a normal pass/fail result.
- Build #43 should also show a normal pass/fail result — with no port conflict errors.
- The Console Output for Build #43 should show that it started after Build #42 completed.
If you implemented Pattern B instead, trigger the same two builds. This time both builds should start immediately and run in parallel. Check docker ps during the builds — you should see two separate sets of containers with different project name prefixes (for example, helloci-42-postgres-1 and helloci-43-postgres-1).
Tip: If you want to make the concurrency behavior visible, temporarily add sh 'sleep 30' to the Integration Tests stage. This gives you a wide window to trigger the second build and observe the queuing behavior. Remove the sleep when you are done testing.
8. Common Mistakes
8.1 Forgetting cleanWs() With Concurrent Builds
If you use Pattern B (parallel builds) but forget cleanWs() in the post block, leftover files from one build can leak into the next. The cleanWs() step from Part 8 becomes even more important when builds overlap. Always keep it in post { always }.
8.2 Hardcoded Container Names
If your docker-compose.yml uses container_name: my-postgres, Docker Compose ignores COMPOSE_PROJECT_NAME for that service. Two builds try to create a container with the same name and the second one fails. Never use container_name in a CI Compose file — let Docker Compose generate names from the project prefix.
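The difference is easy to see in a tiny sketch of the default naming (the default_container_name helper is illustrative; the `<project>-<service>-<index>` pattern matches the names shown in section 4.1):

```python
def default_container_name(project: str, service: str, index: int = 1) -> str:
    """Compose v2 default container naming: <project>-<service>-<index>."""
    return f"{project}-{service}-{index}"

# Two concurrent builds get distinct container names...
a = default_container_name("helloci-42", "postgres")
b = default_container_name("helloci-43", "postgres")
print(a, b)  # helloci-42-postgres-1 helloci-43-postgres-1

# ...whereas a hardcoded container_name is identical for both builds,
# so the second `docker compose up` fails with a name conflict.
```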
8.3 Named Volumes Without Project Scoping
Named volumes like pgdata in docker-compose.yml are scoped to the Compose project name automatically. But if you reference an external volume by a fixed name, two builds share it and corrupt each other’s data. Use the default anonymous volumes or let COMPOSE_PROJECT_NAME handle scoping.
8.4 Using disableConcurrentBuilds() With Pattern B
If you add disableConcurrentBuilds() and also set COMPOSE_PROJECT_NAME per build, the unique project names are wasted — builds never overlap anyway. Pick one pattern. Do not mix them.
Summary
You identified the concurrency problem that causes random port-conflict failures when two Jenkins builds run at the same time on the same agent. Here is what you accomplished:
- Diagnosed the root cause: two Docker Compose stacks binding to the same host port 5432 simultaneously.
- Understood why Jenkins agents differ from ephemeral CI runners — shared host, shared network, shared ports.
- Implemented Pattern A (disableConcurrentBuilds()) to serialize builds and eliminate the problem with a single line of configuration.
- Learned Pattern B (unique COMPOSE_PROJECT_NAME per build plus Docker-internal networking) for teams that need parallel builds.
- Saw Pattern C (dedicated agents per build) as a third option for larger organizations.
- Tested the fix by triggering two builds rapidly and verifying that the second one queues correctly.
The Jenkinsfile now prevents the “works fine alone, fails under concurrency” class of bugs. Next up in Part 10: you build a release pipeline that triggers on Git tags and produces distributable wheel and sdist packages.