FastAPI Continuous Testing

Introduction

Continuous testing is an essential practice in modern software development that involves running automated tests throughout the development lifecycle. For FastAPI applications, continuous testing ensures that your API endpoints remain functional, robust, and reliable as you add new features or refactor existing code.

In this tutorial, we'll explore how to set up continuous testing for your FastAPI applications. We'll cover everything from local testing workflows to integrating tests with CI/CD pipelines, helping you build more reliable APIs with confidence.

Why Continuous Testing Matters for APIs

Before diving into implementation, let's understand why continuous testing is crucial for FastAPI applications:

  1. Early Bug Detection: Find and fix issues before they reach production
  2. Regression Prevention: Ensure new code doesn't break existing functionality
  3. Documentation: Tests serve as living documentation for your API's behavior
  4. Confidence in Refactoring: Safely improve your code with tests as a safety net
  5. Quality Assurance: Maintain consistent API quality across development cycles

Setting Up a Testing Structure

A well-organized testing structure makes continuous testing more manageable. Here's a recommended project structure for a FastAPI application:

my_fastapi_app/
├── app/
│   ├── __init__.py
│   ├── main.py
│   ├── models.py
│   ├── routers/
│   └── dependencies.py
├── tests/
│   ├── __init__.py
│   ├── conftest.py
│   ├── test_main.py
│   └── test_routers/
├── .github/
│   └── workflows/
│       └── test.yml
└── pytest.ini

Let's create a simple FastAPI application to test:

python
# app/main.py
from fastapi import FastAPI, HTTPException

app = FastAPI(title="Task Manager API")

# In-memory database for demo purposes
tasks = {}
task_id_counter = 1

@app.post("/tasks/", status_code=201)
async def create_task(title: str, description: str | None = None):
    global task_id_counter
    task = {"id": task_id_counter, "title": title, "description": description, "completed": False}
    tasks[task_id_counter] = task
    task_id_counter += 1
    return task

@app.get("/tasks/{task_id}")
async def get_task(task_id: int):
    if task_id not in tasks:
        raise HTTPException(status_code=404, detail="Task not found")
    return tasks[task_id]

@app.get("/tasks/")
async def list_tasks():
    return list(tasks.values())

@app.put("/tasks/{task_id}")
async def update_task(task_id: int, title: str | None = None, description: str | None = None, completed: bool | None = None):
    if task_id not in tasks:
        raise HTTPException(status_code=404, detail="Task not found")

    task = tasks[task_id]

    if title is not None:
        task["title"] = title
    if description is not None:
        task["description"] = description
    if completed is not None:
        task["completed"] = completed

    return task
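
With the application in place, you can run it locally and explore the interactive docs at /docs before writing any tests (uvicorn is assumed here as the ASGI server):

bash
pip install fastapi "uvicorn[standard]"
uvicorn app.main:app --reload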

Continuous Testing Locally

1. Create Test Fixtures

First, let's create a conftest.py file with fixtures to help us test our FastAPI application:

python
# tests/conftest.py
import pytest
from fastapi.testclient import TestClient

from app import main
from app.main import app, tasks

@pytest.fixture
def client():
    # Reset the in-memory database before each test.
    # Note: the counter must be reset on the app.main module itself;
    # importing it by name would only rebind a local copy.
    tasks.clear()
    main.task_id_counter = 1
    return TestClient(app)

@pytest.fixture
def sample_task(client):
    response = client.post("/tasks/", params={"title": "Test Task", "description": "This is a test"})
    return response.json()

2. Write Test Cases

Next, let's write some tests for our API endpoints:

python
# tests/test_main.py
def test_create_task(client):
    response = client.post("/tasks/", params={"title": "New Task"})
    assert response.status_code == 201
    data = response.json()
    assert data["title"] == "New Task"
    assert data["id"] == 1
    assert data["completed"] is False

def test_get_task(client, sample_task):
    task_id = sample_task["id"]
    response = client.get(f"/tasks/{task_id}")
    assert response.status_code == 200
    assert response.json() == sample_task

def test_get_nonexistent_task(client):
    response = client.get("/tasks/999")
    assert response.status_code == 404

def test_list_tasks(client, sample_task):
    # Add another task
    client.post("/tasks/", params={"title": "Second Task"})

    response = client.get("/tasks/")
    assert response.status_code == 200
    tasks = response.json()
    assert len(tasks) == 2
    assert tasks[0]["id"] == 1
    assert tasks[1]["id"] == 2

def test_update_task(client, sample_task):
    task_id = sample_task["id"]
    response = client.put(
        f"/tasks/{task_id}",
        params={"title": "Updated Task", "completed": True},
    )
    assert response.status_code == 200
    updated_task = response.json()
    assert updated_task["title"] == "Updated Task"
    assert updated_task["completed"] is True
    assert updated_task["description"] == sample_task["description"]  # Should remain unchanged

3. Create a pytest.ini File

To configure pytest for our continuous testing workflow, let's create a pytest.ini file:

ini
# pytest.ini
[pytest]
testpaths = tests
python_files = test_*.py
python_functions = test_*
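
Optionally, this file can also register custom markers to separate fast unit tests from slower integration tests, which we'll come back to in the best practices section. The marker names below are just examples, not anything required by pytest or FastAPI:

ini
# pytest.ini (optional extension)
[pytest]
testpaths = tests
python_files = test_*.py
python_functions = test_*
addopts = --strict-markers
markers =
    unit: fast, isolated tests
    integration: tests that touch a database or other external services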

4. Implement Automated Test Runner

For local development, we can create a script that watches for file changes and automatically runs tests:

python
# scripts/watch_tests.py
import os
import subprocess
import time

from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

class TestRunner(FileSystemEventHandler):
    def on_modified(self, event):
        # Skip hidden files and directories (e.g. .git, .pytest_cache)
        parts = event.src_path.split(os.sep)
        if any(part.startswith(".") and part not in (".", "..") for part in parts):
            return

        # Only run tests when Python files are modified
        if event.src_path.endswith(".py"):
            print(f"\n🔄 File {event.src_path} changed. Running tests...")
            subprocess.run(["pytest", "-xvs"])

if __name__ == "__main__":
    event_handler = TestRunner()
    observer = Observer()
    observer.schedule(event_handler, path=".", recursive=True)
    observer.start()

    print("🔍 Watching for file changes to run tests automatically...")
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()

To use this script, install the required package and run it:

bash
pip install watchdog
python scripts/watch_tests.py
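
If you would rather not maintain a custom watcher, the third-party pytest-watch package (mentioned here as a possible alternative; check that it fits your environment) provides a ready-made ptw command that reruns pytest on file changes:

bash
pip install pytest-watch
ptw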

Continuous Testing with GitHub Actions

To make our testing truly continuous, let's set up GitHub Actions to run tests automatically on every push and pull request.

Create a file at .github/workflows/test.yml:

yaml
name: Test FastAPI Application

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main, develop ]

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install pytest pytest-cov
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi

      - name: Test with pytest
        run: |
          pytest --cov=app tests/ --cov-report=xml

      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v3
        with:
          file: ./coverage.xml
          fail_ci_if_error: true

This workflow will:

  1. Run on every push to main or develop branches
  2. Run on every pull request targeting those branches
  3. Set up Python environment
  4. Install dependencies
  5. Run tests with coverage reporting
  6. Upload coverage reports to Codecov (optional)
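
The workflow installs requirements.txt when one is present. A minimal, unpinned dependency list for this tutorial might look like the following (treat it as a starting point rather than a definitive set):

text
# requirements.txt
fastapi
uvicorn[standard]
httpx        # used by fastapi.testclient.TestClient in recent Starlette versions
pytest
pytest-cov
watchdog     # only needed for the local test watcher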

Pre-commit Hooks for Test Quality

Let's also set up pre-commit hooks to ensure tests run before each commit, preventing you from committing code that breaks tests:

First, create a .pre-commit-config.yaml file:

yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.4.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
      - id: check-added-large-files

  - repo: local
    hooks:
      - id: pytest-check
        name: pytest-check
        entry: pytest
        language: system
        pass_filenames: false
        always_run: true

Install pre-commit and set up the hooks:

bash
pip install pre-commit
pre-commit install

Now, tests will run automatically before each commit, ensuring your code passes all tests.
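
You can also run the hooks manually across the whole repository, which is handy right after adding pre-commit to an existing project:

bash
pre-commit run --all-files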

Continuous Testing with Database Integration

Real-world applications often require database testing. Let's extend our continuous testing setup to include database tests.

First, update conftest.py to handle database setup and teardown:

python
# tests/conftest.py
import pytest
from fastapi.testclient import TestClient
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

from app.database import Base
from app.main import app, get_db

# Use a throwaway file-based SQLite database for testing
SQLALCHEMY_DATABASE_URL = "sqlite:///./test.db"

@pytest.fixture(scope="function")
def db_session():
    engine = create_engine(SQLALCHEMY_DATABASE_URL, connect_args={"check_same_thread": False})
    TestingSessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)

    # Create all tables
    Base.metadata.create_all(bind=engine)

    db = TestingSessionLocal()
    try:
        yield db
    finally:
        db.close()
        # Drop all tables after the test
        Base.metadata.drop_all(bind=engine)

@pytest.fixture
def client(db_session):
    # Override the get_db dependency to use our test database
    def override_get_db():
        try:
            yield db_session
        finally:
            pass

    app.dependency_overrides[get_db] = override_get_db
    with TestClient(app) as client:
        yield client

    # Remove the override after the test
    app.dependency_overrides = {}
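
This fixture assumes your project defines a database module and a get_db dependency, which the earlier in-memory version of the app doesn't have. Here is a minimal sketch of what app/database.py and get_db could look like (the names and database URL are assumptions to adapt to your project):

python
# app/database.py (hypothetical sketch)
from sqlalchemy import create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

SQLALCHEMY_DATABASE_URL = "sqlite:///./app.db"

engine = create_engine(SQLALCHEMY_DATABASE_URL, connect_args={"check_same_thread": False})
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
Base = declarative_base()


# In app/main.py, the get_db dependency would yield one session per request:
def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()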

Test Coverage Reporting

Monitoring test coverage is vital to ensure your tests are comprehensive. Let's set up continuous coverage reporting:

bash
# Run tests with coverage
pytest --cov=app tests/ --cov-report=html

# This creates an htmlcov/ directory with coverage reports

To ensure we maintain good coverage, add a coverage threshold check to our workflow:

yaml
# Update the test step in .github/workflows/test.yml
- name: Test with pytest and check coverage
  run: |
    pytest --cov=app tests/ --cov-report=xml --cov-fail-under=80

This will fail CI if test coverage falls below 80%.

Monitoring Test Performance

As your test suite grows, it's important to monitor test performance to prevent slow tests from hindering your workflow. Let's add timing to our tests:

bash
pytest --durations=10

This will show the 10 slowest tests, helping you identify performance bottlenecks.

Best Practices for Continuous Testing in FastAPI

  1. Isolate tests: Each test should be independent and not rely on previous test states
  2. Mock external services: Use mocking for APIs, databases, or any external dependencies
  3. Test edge cases: Don't just test the happy path; test error conditions too
  4. Use parametrized tests: Test multiple inputs with a single test function
  5. Separate unit and integration tests: Run fast unit tests more frequently (see the marker example after this list)
  6. Keep tests fast: Slow tests discourage frequent runs, defeating the purpose of continuous testing
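
For instance, practice 5 can be implemented with pytest markers (using the unit/integration names registered in the optional pytest.ini extension earlier; the names are just a convention you choose):

python
# tests/test_main.py (illustrative use of markers)
import pytest

@pytest.mark.unit
def test_create_task_marked(client):
    response = client.post("/tasks/", params={"title": "Marked Task"})
    assert response.status_code == 201

@pytest.mark.integration
def test_task_roundtrip(client):
    created = client.post("/tasks/", params={"title": "Roundtrip"}).json()
    assert client.get(f"/tasks/{created['id']}").status_code == 200

You can then run the fast group on every change and the slower group less often:

bash
pytest -m unit
pytest -m integration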

Example: Parametrized Tests

Let's enhance our tests with parametrization. Note that the empty-title case below only returns 422 if the endpoint actually validates the title; a sketch of that validation follows the test:

python
# tests/test_main.py
import pytest

@pytest.mark.parametrize("task_data,expected_status", [
    ({"title": "Valid Task"}, 201),
    ({"title": ""}, 422),  # Empty title should fail validation
])
def test_create_task_validation(client, task_data, expected_status):
    response = client.post("/tasks/", params=task_data)
    assert response.status_code == expected_status
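
For the second case to return 422, create_task needs an actual constraint on title. One way to add it is a minimum length via FastAPI's Query; a sketch of the updated signature, with the rest of the function unchanged:

python
# app/main.py (updated signature sketch)
from fastapi import Query

@app.post("/tasks/", status_code=201)
async def create_task(title: str = Query(..., min_length=1), description: str | None = None):
    ...  # body unchanged from the earlier version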

Summary

Continuous testing is an essential practice for developing robust FastAPI applications. By implementing automated tests and running them continuously throughout the development process, you can catch bugs early, prevent regressions, and maintain high-quality APIs.

In this tutorial, we've covered:

  • Setting up a structured testing environment for FastAPI
  • Creating comprehensive test cases for API endpoints
  • Implementing local continuous testing with auto-test runners
  • Configuring CI/CD pipelines with GitHub Actions
  • Using pre-commit hooks to enforce testing discipline
  • Adding database integration tests
  • Monitoring and maintaining test coverage
  • Best practices for FastAPI continuous testing

By incorporating these practices into your development workflow, you'll build more reliable APIs and have greater confidence in your code changes.

Exercises

  1. Add tests for DELETE functionality to the task API
  2. Implement authentication in the API and write tests for authenticated endpoints
  3. Set up continuous testing with multiple Python versions using GitHub Actions matrix
  4. Create performance tests that measure API response time
  5. Implement database migration testing to ensure schema changes don't break functionality

