Test Polarity Deficit Signal (TPD)¶
Signal ID: TPD
Full name: Test Polarity Deficit
Type: Scoring signal (contributes to drift score)
Default weight: 0.04
Scope: file_local
What TPD detects¶
TPD detects test suites that contain only happy-path assertions — tests that verify correct behavior but never test boundary conditions, error cases, or invalid inputs. A test suite that only checks "does it work?" without checking "does it fail correctly?" offers an incomplete safety net.
Before — happy path only¶
```python
# tests/test_calculator.py
def test_add():
    assert add(2, 3) == 5

def test_subtract():
    assert subtract(10, 4) == 6

def test_multiply():
    assert multiply(3, 7) == 21
```
Three tests, all positive. No tests for divide-by-zero, overflow, or invalid inputs.
After — balanced polarity¶
```python
# tests/test_calculator.py
import pytest

def test_add():
    assert add(2, 3) == 5

def test_subtract():
    assert subtract(10, 4) == 6

def test_divide_by_zero():
    with pytest.raises(ZeroDivisionError):
        divide(10, 0)

def test_add_invalid_type():
    with pytest.raises(TypeError):
        add("a", 3)

def test_multiply_overflow():
    # MAX_INT and expected_overflow_result are defined elsewhere in the suite.
    assert multiply(MAX_INT, 2) == expected_overflow_result
```
Why test polarity matters¶
- False confidence — 100% passing tests with only happy paths gives a false sense of security.
- AI generates happy-path tests by default — LLMs produce tests that verify the example output, not edge cases.
- Bugs live in edge cases — the most dangerous bugs are in error paths, boundary conditions, and unexpected inputs.
- Regression risk — without negative tests, error-handling changes can silently break without any test failure.
How the score is calculated¶
TPD analyzes each test file for the presence of negative test indicators:
- Check for `pytest.raises` — standard pytest error expectation.
- Check for `assertRaises` — unittest-style error expectations.
- Check for boundary/edge-case assertions — comparisons against zero, None, empty containers, limits.
- Calculate polarity ratio — tests with negative assertions vs. total tests.
Modules where < 20% of tests have negative polarity are flagged.
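The ratio-and-flag step can be illustrated with a minimal sketch. The function names (`negative_ratio`, `is_flagged`) and the dict-based input are hypothetical simplifications, not the signal's actual implementation:

```python
def negative_ratio(tests: dict) -> float:
    """tests maps test name -> True if it contains a negative assertion."""
    if not tests:
        return 0.0
    return sum(tests.values()) / len(tests)

def is_flagged(tests: dict, threshold: float = 0.20) -> bool:
    """Flag a module when fewer than 20% of its tests have negative polarity."""
    return negative_ratio(tests) < threshold

suite = {
    "test_add": False,
    "test_subtract": False,
    "test_multiply": False,
    "test_divide_by_zero": True,
}
print(negative_ratio(suite))  # 0.25
print(is_flagged(suite))      # False — 0.25 is at or above the 0.20 threshold
```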
Severity thresholds:
| Score range | Severity |
|---|---|
| ≥ 0.7 | HIGH |
| ≥ 0.5 | MEDIUM |
| ≥ 0.3 | LOW |
| < 0.3 | INFO |
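The thresholds above amount to a small lookup. This sketch simply restates the table in code; the real tool's severity logic is not shown in this document:

```python
def severity(score: float) -> str:
    """Map a TPD score to the severity buckets from the table above."""
    if score >= 0.7:
        return "HIGH"
    if score >= 0.5:
        return "MEDIUM"
    if score >= 0.3:
        return "LOW"
    return "INFO"

print(severity(0.72))  # HIGH
print(severity(0.5))   # MEDIUM
print(severity(0.1))   # INFO
```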
How to fix TPD findings¶
- Add error-case tests — for each function, ask "what should happen with invalid input?"
- Use `pytest.raises` — explicit error expectations are more readable than try/except in tests.
- Test boundary conditions — empty lists, None, zero, maximum values.
- Follow the testing pyramid for polarity — aim for ≥ 30% negative/boundary tests.
Configuration¶
Detection details¶
- Identify test files via naming conventions (`test_*.py`, `*_test.py`).
- Parse test functions from the AST.
- Scan for negative indicators — `pytest.raises`, `assertRaises`, `with self.assertRaises`, exception-related assertions.
- Calculate the polarity ratio per test file.
- Flag test files below the negative polarity threshold.
TPD is deterministic and AST-only.
Related signals¶
- EDS (Explainability Deficit) — checks for test existence; TPD checks for test quality.
- BAT (Bypass Accumulation) — detects `@pytest.mark.skip` markers; TPD detects missing negative tests.