Add practitioner decision tree and getting started guide (B1b-d)#287
Conversation
Phase B1 foundation docs making diff-diff discoverable for data science practitioners. Business-framed estimator selection, end-to-end marketing campaign walkthrough, and README entry point. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Overall Assessment
This is a docs-only PR, but the new practitioner decision tree and walkthrough make methodology-bearing estimator-selection and interpretation claims. I found two unmitigated P1 issues in that guidance, plus one broken survey example.
Executive Summary
Methodology
Code Quality
No findings in scope for a docs-only PR.
Performance
No findings in scope for a docs-only PR.
Maintainability
No findings.
Tech Debt
No separate deferrable tech-debt item. The blocking issues above are not mitigated by existing
Security
No findings; I did not see secrets, credentials, or PII introduced by the diff.
Documentation/Tests
Path to Approval
…n API

Four findings from AI code review on #287:

P1 - ContinuousDiD: add Strong Parallel Trends caveat. Standard PT only identifies ATT(d|d); ATT(d) and ACRT(d) require SPT (REGISTRY.md:475). Soften ATT(d) language and add a warning admonition matching the existing note in choosing_estimator.rst:252.

P1 - SyntheticDiD example: pass post_periods explicitly. The default behavior takes the last half of periods, which with 12 periods includes period 6 - a pre-treatment period in the narrated scenario. Now derives post_periods from the data's "post" column so the analysis window matches the campaign window.

P2 - SurveyDesign example: fix API. The dataclass takes weights/strata/psu (not data/weight). The previous example would raise TypeError.

P3 - getting started guide: use print(results.summary()) for the table. DiDResults.__repr__ is a one-line summary; the full table comes from summary(). The narration claimed a table where only a one-liner appeared.

All corrected examples re-verified executable; Sphinx build clean for the new files.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
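The post_periods fix described above can be sketched as follows. This is a hedged illustration only: the panel layout (a "period" column plus a 0/1 "post" indicator) matches the commit's description, but the SyntheticDiD signature shown in the trailing comment is an assumption from the PR narration, not a verified API.

```python
import pandas as pd

# Toy panel matching the narrated scenario: 12 periods, campaign live
# in the second half (periods 7-12 are post-treatment).
df = pd.DataFrame({
    "period": list(range(1, 13)),
    "post": [0] * 6 + [1] * 6,
})

# The buggy default inferred the post window as "the last half of
# periods", which in the narrated scenario pulled in a pre-treatment
# period. Deriving the window from the data's own "post" indicator
# keeps the analysis window aligned with the campaign window:
post_periods = sorted(df.loc[df["post"] == 1, "period"].unique().tolist())

# Every period in the derived window is genuinely post-treatment.
assert df.loc[df["period"].isin(post_periods), "post"].eq(1).all()

print(post_periods)  # [7, 8, 9, 10, 11, 12]

# Then pass it explicitly rather than relying on the default, e.g.
# (hypothetical signature, per the PR narration):
# SyntheticDiD(..., post_periods=post_periods).fit(df)
```

The same pattern applies to any estimator that guesses its post window from period counts: derive the window from the treatment indicator and pass it explicitly.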
/ai-review
🔁 AI review rerun (requested by @igerber)
Head SHA:
Overall Assessment
✅ Looks good. The prior blocking findings are resolved, and I did not find any unmitigated P0/P1 issues in the changed files.
Executive Summary
Methodology
Code Quality
Performance
Maintainability
Tech Debt
Security
Documentation/Tests
Summary
- docs/practitioner_decision_tree.rst: maps 5 common scenarios (simultaneous campaign, staggered rollout, varying spend, few markets, survey data) to recommended estimators
- docs/practitioner_getting_started.rst: walks through a marketing campaign analysis with runnable code, validity checks, and a stakeholder communication template
- docs/doc-deps.yaml: with dependency entries for 7 source files

Methodology references (required if estimator / math changes)
Validation
- python -W all (no warnings suppressed)
- :doc: cross-references resolve; all link targets verified to exist

Security / privacy
Generated with Claude Code