Jenkins CI/CD (4/11): Test Reporting With JUnit Results in Jenkins
Summary: You configure pytest to emit JUnit XML, wire the report into your Jenkinsfile with a post { always { junit } } block, and unlock Jenkins’ built-in test-result dashboard — giving you trend graphs, failure drill-downs, and test history that raw console logs can never provide.
Example Values Used in This Tutorial
| Key | Value |
|---|---|
| JUnit output path | results/junit.xml |
| Pytest flag | --junitxml=results/junit.xml |
| Jenkins post step | junit 'results/junit.xml' |
| Results directory | results/ |
0. Prerequisites
- A working Jenkins controller at http://localhost:8080 (Part 1).
- The helloci Python package with passing unit tests committed to Git (Part 2).
- A Jenkins pipeline job that checks out the repo, creates a venv, installs dependencies, and runs pytest (Part 3).
- Familiarity with editing a Jenkinsfile and triggering builds from the Jenkins UI.
Note: If your pipeline from Part 3 is not yet green, go back and fix it before continuing. This tutorial builds directly on that working pipeline.
1. Logs Are Not Enough
Right now your Jenkins build either passes or fails, and the only way to understand what happened is to scroll through the console log. That works when you have five tests. It stops working fast.
Consider what happens as the project grows:
- You have 80 tests and one fails — you scroll through hundreds of lines looking for the word FAILED.
- A test that passed yesterday now fails — there is no history to compare against.
- A teammate asks “how many tests do we have?” — nobody knows without running the suite locally.
Jenkins solves all of this with structured test reports. Instead of parsing raw text, you give Jenkins a machine-readable XML file in JUnit format. Jenkins reads that file and builds a dashboard with pass/fail counts, trend graphs, and per-test failure details.
The key insight: your CI tool should understand your test results, not just display your logs.
2. Tell Pytest to Produce JUnit XML
Pytest has built-in support for JUnit XML output. You do not need to install any extra plugins.
Run this locally to see what happens:
mkdir -p results
.venv/bin/pytest tests/ --junitxml=results/junit.xml
After the run completes, open results/junit.xml:
cat results/junit.xml
You will see XML that looks like this:
<?xml version="1.0" encoding="utf-8"?>
<testsuites>
  <testsuite name="pytest" errors="0" failures="0" skipped="0" tests="8" time="0.420">
    <testcase classname="tests.test_greet" name="test_greet_basic" time="0.001"/>
    <testcase classname="tests.test_greet" name="test_greet_empty_string" time="0.001"/>
    <!-- ... remaining tests ... -->
  </testsuite>
</testsuites>
Each <testcase> element records a test name, its class, and how long it took. Failed tests include a <failure> child element with the traceback. This is the format Jenkins expects.
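Because the report is plain XML, any tool can read it, not just Jenkins. As a quick illustration, here is a minimal standalone Python sketch that summarizes a report; it parses an inline sample modeled on the output above rather than reading results/junit.xml, so you can run it anywhere:

```python
import xml.etree.ElementTree as ET

# Inline sample modeled on the pytest report shown above.
sample = """<testsuites>
  <testsuite name="pytest" errors="0" failures="0" skipped="0" tests="2" time="0.002">
    <testcase classname="tests.test_greet" name="test_greet_basic" time="0.001"/>
    <testcase classname="tests.test_greet" name="test_greet_empty_string" time="0.001"/>
  </testsuite>
</testsuites>"""

root = ET.fromstring(sample)
suite = root.find("testsuite")

# Suite-level counts live in the <testsuite> attributes.
print(f"tests={suite.get('tests')} failures={suite.get('failures')}")

# A test failed if its <testcase> has a <failure> child.
for case in suite.iter("testcase"):
    status = "FAIL" if case.find("failure") is not None else "PASS"
    print(f"{status} {case.get('classname')}.{case.get('name')}")
```

This is essentially what Jenkins does at a much larger scale: read the attributes for the counts, then drill into each testcase element for the details.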
Tip: The --junitxml flag is all you need. Pytest handles the rest — no configuration files, no extra dependencies.
3. Update the Jenkinsfile
Open the Jenkinsfile in your repository root. Replace its contents with the following:
pipeline {
    agent any

    stages {
        stage('Setup Python') {
            steps {
                sh 'python3 -m venv .venv'
                sh '.venv/bin/pip install --upgrade pip'
            }
        }
        stage('Install Dependencies') {
            steps {
                sh '.venv/bin/pip install -e ".[test]"'
            }
        }
        stage('Run Unit Tests') {
            steps {
                sh 'mkdir -p results'
                sh '.venv/bin/pytest tests/ --junitxml=results/junit.xml'
            }
        }
    }

    post {
        always {
            junit 'results/junit.xml'
        }
    }
}
Three things changed compared to Part 3:
- The Run Unit Tests stage now creates the results/ directory with mkdir -p results before running pytest.
- The --junitxml=results/junit.xml flag tells pytest to write the XML report file.
- A post { always { junit } } block at the pipeline level tells Jenkins to read that XML file after every build — whether the build passes or fails.
The always keyword is important. If a test fails, pytest exits with a non-zero code and Jenkins marks the stage as failed. Without always, the junit step would be skipped and you would lose the very failure data you need most.
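To see that exit-code behavior concretely, this small sketch simulates a failing test command the way Jenkins’ sh step observes it; the child process here is just a stand-in for pytest:

```python
import subprocess
import sys

# Stand-in for a failing pytest run: a child process that exits non-zero.
# Jenkins' sh step fails the stage on any non-zero exit code, which is why
# the junit step must live under `always` to still run afterwards.
result = subprocess.run([sys.executable, "-c", "raise SystemExit(1)"])
print("exit code:", result.returncode)
```

A passing suite would exit with code 0 instead, and the stage would succeed; the junit step in post { always { } } runs in both cases.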
Warning: If you put the junit step inside post { success { } }, Jenkins will only collect test results on green builds. Failures are exactly when you need the report. Always use always.
4. Run the Build
Commit and push the updated Jenkinsfile:
git add Jenkinsfile
git commit -m "Add JUnit XML test reporting to pipeline"
git push origin main
Go to your pipeline job in Jenkins and click Build Now.
Once the build completes, look at the build page. You will see something new: a Test Result link in the left sidebar and a test summary on the build page itself.
Click Test Result. Jenkins shows you:
- The total number of tests that ran.
- How many passed, how many failed, and how many were skipped.
- The duration of each test.
- A clickable drill-down to individual test cases.
This is the aha moment. Instead of scanning through console output, you have a structured, searchable dashboard that tells you exactly what happened.
5. Watch the Trend Build Over Time
Run the build two or three more times by clicking Build Now. Go back to the pipeline job’s main page (not an individual build — the job page).
A Test Result Trend graph appears. This graph plots the number of passing and failing tests across builds. Over days and weeks, it becomes one of the most valuable views in your entire CI setup:
- A sudden spike in failures means a bad merge.
- A steady upward line in total tests means the team is writing tests.
- A flat line means nobody is testing new code.
Tip: The trend graph only appears after two or more builds with test results. If you do not see it yet, trigger another build.
6. See What a Failure Looks Like
Structured reports earn their keep when something breaks. Intentionally introduce a failing test to see the failure workflow.
Open tests/test_greet.py and add a test that will fail:
def test_greet_broken():
    from helloci import greet
    assert greet("World") == "this is intentionally wrong"
Commit and push:
git add tests/test_greet.py
git commit -m "Add intentionally broken test"
git push origin main
Trigger the build in Jenkins. When it finishes, the build page shows a yellow or red indicator and the test summary shows one failure.
Click Test Result. Jenkins shows the failing test name, the assertion error, and the full traceback — all without you ever opening the console log.
Click the failing test name. Jenkins shows you:
- The error message.
- The stack trace.
- The test’s history across previous builds (did it pass before? when did it start failing?).
This per-test history is something raw logs simply cannot give you. When a test starts failing, you can see the exact build where it broke and correlate that with the commits in that build.
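The details Jenkins displays come straight from the failure element in the XML report. The sketch below pulls a failure message out of a hand-written illustrative report; the message text is made up, but the shape (a failure child under testcase) matches what pytest emits:

```python
import xml.etree.ElementTree as ET

# Illustrative failing report. The message text here is invented for the
# example, but the structure mirrors pytest's JUnit XML output.
failing_report = """<testsuite name="pytest" tests="1" failures="1">
  <testcase classname="tests.test_greet" name="test_greet_broken" time="0.002">
    <failure message="AssertionError: assert result == 'this is intentionally wrong'">traceback text here</failure>
  </testcase>
</testsuite>"""

suite = ET.fromstring(failing_report)
for case in suite.iter("testcase"):
    failure = case.find("failure")
    if failure is not None:
        # Jenkins shows this message and the element's text (the traceback).
        print(case.get("name"), "->", failure.get("message"))
```

Jenkins adds the history on top: it stores these parsed results per build, which is how it can tell you when a given test first started failing.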
7. Fix the Test
Remove the broken test from tests/test_greet.py (delete the test_greet_broken function you added in the previous section).
Commit and push:
git add tests/test_greet.py
git commit -m "Remove intentionally broken test"
git push origin main
Trigger one more build. The test results go back to all-green, and the trend graph now shows the dip-and-recovery pattern — one build with a failure, the next build clean. This is exactly the kind of signal that helps a team catch and fix regressions quickly.
8. Add results/ to .gitignore
The results/ directory is a build artifact. It should not be committed to version control.
Open .gitignore and add the following line:
results/
If your .gitignore does not exist yet, create it with that single line.
Commit the change:
git add .gitignore
git commit -m "Ignore results directory"
git push origin main
Note: Jenkins creates the results/ directory fresh on every build inside its workspace. The directory in your local repo (from the test run in Section 2) is just a leftover from your local experiment.
Summary
You moved from “scroll through the console log” to “click a dashboard” in one pipeline change. Here is what you accomplished:
- Added --junitxml=results/junit.xml to the pytest command so it produces a machine-readable report.
- Added mkdir -p results to the pipeline so the output directory always exists.
- Added post { always { junit 'results/junit.xml' } } so Jenkins collects the report on every build — pass or fail.
- Explored the Test Result page and saw per-test pass/fail details, durations, and stack traces.
- Watched the Test Result Trend graph build up across multiple runs.
- Intentionally broke a test to see how Jenkins surfaces failures, then fixed it and confirmed the recovery.
- Added results/ to .gitignore to keep build artifacts out of version control.
The junit step is the single highest-value addition you can make to a pipeline. It turns Jenkins from a “did it pass?” indicator into a test management tool with history, trends, and drill-down diagnostics.
Next up in Part 5: you add a fast-fail lint stage with ruff so style problems are caught before tests even run.