Pytest markers: organize and filter your tests

A test suite with 500+ tests takes time to run. When you're debugging a login bug, you don't need to wait for the slow integration tests or the database migration checks. pytest markers let you tag tests and run exactly the subset you need.

Markers work just like labels. Tag a test as slow, smoke, or database, then use the -m flag to run only what matters right now. The full suite still runs in CI - but locally, you pick and choose.

Prerequisites

  • Python 3.8 or later
  • pytest 7.0 or later (the conftest.py example below uses item.path, added in pytest 7)
  • Install: pip install pytest

No special plugins needed - markers are a built-in pytest feature.

Built-in markers worth knowing

pytest ships with several markers out of the box. These four show up in almost every project.

skip and skipif

skip unconditionally skips a test. skipif skips based on a condition - useful for platform-specific or version-dependent tests.

import sys
import pytest

@pytest.mark.skip(reason="waiting on API v3 release")
def test_new_endpoint():
    pass

@pytest.mark.skipif(sys.version_info < (3, 11), reason="requires Python 3.11+")
def test_exception_groups():
    assert True

Skipped tests show as s in the output; run with -v to see the reason.
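A related built-in helper, shown here as a hedged aside: pytest.importorskip skips a test - or an entire file when called at module level - if an optional dependency can't be imported. The stdlib json module stands in for a real optional package so the example runs anywhere.

```python
import pytest

# importorskip returns the module when the import succeeds; otherwise it
# skips rather than erroring. "json" is stdlib (always present) and acts
# here as a stand-in for an optional dependency such as "numpy".
json = pytest.importorskip("json")

def test_roundtrip():
    assert json.loads(json.dumps({"a": 1})) == {"a": 1}
```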

xfail

Mark a test as expected to fail. Handy for known bugs — the test stays in the suite as a reminder without breaking the build.

@pytest.mark.xfail(reason="issue #247 — race condition in cache")
def test_concurrent_cache_write():
    cache.write("key", "value")
    assert cache.read("key") == "value"

If an xfail test unexpectedly passes, pytest reports it as XPASS. Add strict=True to make unexpected passes fail the build - helpful for tracking when bugs get fixed.

parametrize

Run the same test with different inputs. Cuts down on copy-paste test functions.

@pytest.mark.parametrize("input_val,expected", [
    ("hello", 5),
    ("", 0),
    ("pytest", 6),
])
def test_string_length(input_val, expected):
    assert len(input_val) == expected

Each parameter set runs as a separate test case, so failures point to the exact input that broke.
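parametrize decorators also stack: each combination from the stacked lists becomes its own test case. A small sketch, where a 2 x 2 grid yields four tests:

```python
import pytest

# Stacked parametrize decorators run the cartesian product of inputs:
# (2, 0), (2, 3), (10, 0), (10, 3), i.e. four separate test cases.
@pytest.mark.parametrize("exp", [0, 3])
@pytest.mark.parametrize("base", [2, 10])
def test_power_is_int(base, exp):
    assert isinstance(base ** exp, int)
```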

Creating and registering custom markers

Custom markers are where things get practical. Tag tests by speed, feature area, or environment.

@pytest.mark.slow
def test_full_migration():
    run_all_migrations()
    assert db.schema_version == LATEST

@pytest.mark.smoke
def test_health_check():
    response = client.get("/health")
    assert response.status_code == 200

Register markers to avoid warnings. Without registration, pytest prints PytestUnknownMarkWarning for every custom marker. Add them to pyproject.toml:

[tool.pytest.ini_options]
markers = [
    "slow: tests that take more than 5 seconds",
    "smoke: critical path tests for deploy verification",
    "database: tests that require a live database connection",
]

Then enable strict mode so typos cause errors instead of silent warnings:

[tool.pytest.ini_options]
addopts = "--strict-markers"

Now @pytest.mark.smoek (typo) fails immediately instead of creating a phantom marker nobody notices for months.

Running tests by marker

The -m flag accepts boolean expressions. This is the real payoff.

# Run only smoke tests
pytest -m smoke

# Run everything except slow tests
pytest -m "not slow"

# Run smoke tests that don't need a database
pytest -m "smoke and not database"

# Run slow OR database tests
pytest -m "slow or database"

Practical use cases:

  • Local development: pytest -m "not slow" — skip the heavy stuff while iterating
  • Pre-push hook: pytest -m smoke — run critical checks in under 30 seconds
  • CI pipeline: pytest (no -m) — run the full suite
  • Nightly builds: pytest -m slow — run the expensive tests when nobody's waiting

Check what markers exist in your project with:

pytest --markers

This prints every registered marker with its description.

Applying markers at every level

Markers aren't limited to individual functions. Apply them to classes or entire modules.

Class level — every test method in the class inherits the marker:

@pytest.mark.database
class TestUserRepository:
    def test_create_user(self):
        pass

    def test_delete_user(self):
        pass

Module level — tag all tests in a file using the pytestmark variable:

# test_integration.py
import pytest

pytestmark = pytest.mark.slow

def test_full_sync():
    pass  # automatically marked as slow

def test_bulk_import():
    pass  # also marked as slow

Multiple markers work too: pytestmark = [pytest.mark.slow, pytest.mark.database].
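Markers can also carry data that fixtures read back. A hedged sketch, assuming a database marker whose first argument is a connection string; request.node.get_closest_marker returns the nearest marker of that name (function, then class, then module level), or None:

```python
import pytest

@pytest.fixture
def db_dsn(request):
    # get_closest_marker finds the nearest "database" marker on the
    # requesting test, or returns None when the test is unmarked.
    marker = request.node.get_closest_marker("database")
    if marker and marker.args:
        return marker.args[0]
    return "sqlite:///:memory:"  # hypothetical default for unmarked tests

@pytest.mark.database("postgresql://localhost/test")
def test_repo_uses_postgres(db_dsn):
    assert db_dsn.startswith("postgresql")
```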

Automating markers with conftest.py

For teams that keep forgetting to add markers, automate it. The pytest_collection_modifyitems hook applies markers based on test location or name.

# conftest.py
import pytest

def pytest_collection_modifyitems(config, items):
    for item in items:
        if "integration" in str(item.path):
            item.add_marker(pytest.mark.slow)
        if "test_api" in item.nodeid:
            item.add_marker(pytest.mark.smoke)

Any test whose path contains integration (such as files in an integration/ folder) gets the slow marker automatically. Tests with test_api in their node ID get smoke. No manual tagging needed.

Register the markers programmatically in the same file:

def pytest_configure(config):
    config.addinivalue_line("markers", "slow: auto-applied to integration tests")
    config.addinivalue_line("markers", "smoke: auto-applied to API tests")
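Hook logic like this can be sanity-checked without a full pytest run by driving it with stand-in items. FakeItem below is hypothetical, mimicking only the two attributes the path-based branch touches; real collection items expose much more:

```python
import pytest
from pathlib import Path

# The path-based branch of the hook, reproduced for a standalone check.
def pytest_collection_modifyitems(config, items):
    for item in items:
        if "integration" in str(item.path):
            item.add_marker(pytest.mark.slow)

class FakeItem:
    # Hypothetical stand-in exposing just .path and add_marker().
    def __init__(self, path):
        self.path = Path(path)
        self.own_markers = []

    def add_marker(self, marker):
        self.own_markers.append(marker.mark)  # unwrap MarkDecorator to Mark

items = [FakeItem("tests/integration/test_sync.py"),
         FakeItem("tests/unit/test_math.py")]
pytest_collection_modifyitems(config=None, items=items)
# Only the first item picks up the slow marker.
```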

Common issues

Markers silently ignored. If -m smoke runs zero tests, check whether the marker is actually applied. Run pytest --collect-only -m smoke to see what pytest selects without executing anything. Empty output means no tests matched.

Unknown marker warnings flood the console. This happens when markers aren't registered in pyproject.toml or conftest.py. Add --strict-markers to addopts — it turns those warnings into errors, making them impossible to miss.

Markers added by a nested conftest.py only apply to its own subtree. A conftest.py in tests/ applies to everything under tests/, but a conftest.py in tests/unit/ only applies to tests/unit/. Place the pytest_collection_modifyitems hook in the root conftest.py if the rule should cover the whole suite.

Parametrize combined with markers. Adding @pytest.mark.slow alongside @pytest.mark.parametrize marks every parameter set as slow — not just one. To mark individual cases, use pytest.param with a marks argument:

@pytest.mark.parametrize("n,expected", [
    (1, 1),
    pytest.param(1000000, 1000000, marks=pytest.mark.slow),
])
def test_fibonacci(n, expected):
    assert fibonacci(n) == expected

Markers take five minutes to set up and save time on every test run after that. Start with three markers — slow, smoke, and whatever your most common test category is — then add more as the suite grows.

The pytest markers documentation covers advanced patterns like using markers with fixtures. The custom markers examples page has more conftest.py patterns worth checking out.