Quick Start¶
Repo: mick-gsk/drift · Package: drift-analyzer · Command: drift
You want to try drift? Good — you'll have your first findings in two minutes.
If you want to see what drift catches before you install it, check out Problems Every Vibe-Coder Recognizes — real code examples from AI-assisted projects.
0. Before you start¶
Drift requires Python 3.11+. If your shell or CI runner is still on 3.10, fix that first — you want the first run to be a signal test, not an environment fight.
1. Install¶
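A minimal install, assuming the drift-analyzer package named in the page header is published on PyPI (the --version check assumes the CLI exposes a standard version flag):

```shell
# install the analyzer; the package name comes from the page header
pip install drift-analyzer

# confirm the CLI landed on your PATH
drift --version
```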
No project handy? Try on FastAPI
Real findings on a real codebase — no setup, no risk.
2. Analyze your repository¶
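A first scan is a single command pointed at a repository root. This sketch assumes the plain invocation shown in the sample output; run it from inside your project or pass a path:

```shell
drift analyze .            # scan the current repository
drift analyze myproject/   # or point at another checkout
```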
3. What you'll see¶
Here's what a typical first run looks like:
╭─ drift analyze myproject/ ──────────────────────────────────────────────────╮
│ DRIFT SCORE 0.52 │ 87 files │ 412 functions │ AI: 34% │ 2.1s │
╰──────────────────────────────────────────────────────────────────────────────╯
┌──┬────────┬───────┬──────────────────────────────────────┬──────────────────────┐
│ │ Signal │ Score │ Title │ Location │
├──┼────────┼───────┼──────────────────────────────────────┼──────────────────────┤
│◉ │ PFS │ 0.85 │ Error handling split 4 ways │ src/api/routes.py:42 │
│◉ │ AVS │ 0.72 │ DB import in API layer │ src/api/auth.py:18 │
│○ │ MDS │ 0.61 │ 3 near-identical validators │ src/utils/valid.py │
└──┴────────┴───────┴──────────────────────────────────────┴──────────────────────┘
Three numbers, three meanings
Drift Score (header): Overall repository coherence — higher means more structural erosion. This is an orientation metric, not a pass/fail threshold.
Finding Score (table column): Confidence that this specific finding is a real structural issue. ≥ 0.7 = strong signal, 0.4–0.7 = moderate, < 0.4 = weak.
Precision claim (site-wide): Historical accuracy of drift findings across the benchmark corpus. Currently 77% strict / 95% lenient on the v0.5 baseline. This describes methodology accuracy, not a per-repo promise.
4. How to read your first findings¶
My recommendation: start with the highest-scored findings and check if they match what already felt expensive to maintain.
- Score ≥ 0.7 → strong signal, likely a real structural issue worth investigating
- Score 0.4–0.7 → moderate signal, review when you touch that module
- Score < 0.4 → weak signal, likely noise in small repos — skip for now
Each finding links to a specific file and line, so you can verify the pattern in context before acting on it.
Typical first-run decisions:
- You see repeated pattern variants in one module → standardize on one implementation shape before adding more features there.
- You see a boundary violation at a stable layer edge → add or tighten an architecture rule before it spreads.
- You mostly see weak findings in a small repo → keep drift in observation mode and revisit after the codebase grows.
Finding looks wrong?
Two options:
- Suppress locally: add # drift:context deliberate-variant above the flagged line
- Report it: a 30-second false-positive report helps improve the next release
False positives are expected on first runs. Drift improves with every report.
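For the suppression marker, here is a hypothetical example of how it might sit in code. Only the comment syntax comes from the docs above; the function and its logic are purely illustrative:

```python
# Hypothetical module with a validator drift flagged as a near-duplicate.
# The marker on the line above the flagged definition suppresses the finding.

# drift:context deliberate-variant
def validate_email_legacy(value: str) -> bool:
    # Deliberately kept as a variant of the newer validator during a migration.
    return "@" in value and "." in value.split("@")[-1]

print(validate_email_legacy("a@b.co"))  # True
```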
5. Verify your installation¶
Drift can analyze its own codebase — useful to confirm everything works:
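One way to do that, sketched under the assumption that a plain analyze run works from the repo root:

```shell
# clone drift and run it against its own source tree
git clone https://github.com/mick-gsk/drift
cd drift
drift analyze .
```

If this produces a score header and a findings table like the one above, your installation is working.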
Next: add to your workflow¶
pre-commit (fastest path)¶
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/mick-gsk/drift
    rev: v2.6.0
    hooks:
      - id: drift-report  # start report-only, switch to drift-check later
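With the config in place, the standard pre-commit commands register and exercise the hook (the hook id comes from the config above):

```shell
pip install pre-commit
pre-commit install                       # register the git hook
pre-commit run drift-report --all-files  # one-off run across the whole repo
```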
CI (report-only first)¶
The recommended first step is report-only CI (no build failures):
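A report-only workflow might look like the following sketch. The layout is standard GitHub Actions; the drift invocation reuses the --diff flag from the comparison table below and deliberately omits --fail-on so findings are reported without failing the build. File name and step details are illustrative, not the official action:

```yaml
# .github/workflows/drift.yml — illustrative report-only setup
name: drift
on: [pull_request]
jobs:
  drift:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"  # drift requires 3.11+
      - run: pip install drift-analyzer
      # no --fail-on: findings are reported but never break the build
      - run: drift check --diff || true
```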
The GitHub Action now follows the same safe default. Tighten to high only after your team has reviewed a few real runs.
See Team Rollout for the full progressive adoption path, Integrations for CI details, and Prompts to Try for copy-paste prompts you can hand to your AI assistant.
analyze vs check — when to use which¶
| | drift analyze | drift check |
|---|---|---|
| Purpose | Full repository scan | Diff-scoped CI gate |
| Scope | All files matching include/exclude | Only changed files (--diff) |
| Typical use | Local exploration, baseline creation | Pull request CI checks |
| Output | Rich terminal, JSON, SARIF | Same formats, plus exit code gating |
| Key flags | --path, --sort-by, --format | --fail-on, --diff, --baseline |
| Speed | Seconds to minutes (depends on repo size) | Typically < 10 seconds |
Rule of thumb: Use analyze when you want a complete picture. Use check when you want a fast CI gate on changed code.
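Concretely, using the flags listed in the table (the sort key and the high threshold are assumptions drawn from the sample output and the CI note above):

```shell
drift analyze --path src/ --sort-by score  # full picture, highest-confidence findings first
drift check --diff --fail-on high          # fast gate on changed files only
```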