Russ Poldrack's Ten Rules for AI-Assisted Scientific Coding

Russ Poldrack is the Albert Ray Lang Professor of Psychology and Chair of the Department of Psychology at Stanford University. As director of the Stanford Center for Open and Reproducible Science and associate director of Stanford Data Science, he brings decades of experience in neuroinformatics and open science to the question of how researchers should use AI coding tools without compromising methodological rigor.
His open-source textbook Better Code, Better Science and the peer-reviewed “Ten Simple Rules for AI-Assisted Coding in Science” offer the first systematic framework for scientific computing with AI agents.
Background
- Professor and Chair of Psychology at Stanford University
- Director of the Stanford Center for Open and Reproducible Science
- Associate Director of Stanford Data Science
- Created OpenNeuro.org, Neurovault.org, and fMRIPrep neuroimaging tools
- Author of 300+ publications with 80,000+ citations
- PhD from University of Illinois at Urbana-Champaign
The Problem: Fast Code, Bad Science
AI coding tools accelerate development but introduce risks specific to scientific computing. From Poldrack’s Transmitter article:
“‘AI wrote it’ is not a valid defense for flawed methodology or incorrect results.”
The tools make it easy to generate code that runs but produces wrong results. In scientific contexts, wrong results become published findings that mislead the field.
Ten Rules for AI-Assisted Scientific Coding
Poldrack and collaborators organized the rules around four themes:
Theme 1: Maintain Domain Knowledge
| Rule | Principle |
|---|---|
| 1. Understand what you’re doing | Domain expertise remains essential. AI assists but cannot replace scientific judgment. |
| 2. Develop code-reading skills | Reviewing code critically is as important as writing it. |
Theme 2: Effective AI Collaboration
| Rule | Principle |
|---|---|
| 3. Specify problems clearly first | Use detailed project requirements documents before generating code. |
| 4. Create explicit task lists | Break projects into discrete, manageable tasks. |
| 5. Manage context windows | Keep active information focused. Large contexts suffer from “context rot.” |
| 6. Develop effective prompting workflows | Learn best practices for communicating with coding agents. |
Theme 3: Rigorous Testing
| Rule | Principle |
|---|---|
| 7. Implement test-driven development | Develop tests before or alongside code generation. |
| 8. Use caution with AI-generated tests | AI can create inadequate tests or modify them to pass artificially. |
| 9. Validate all code functionality | Independent verification ensures code performs intended operations. |
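Rules 7 through 9 can be made concrete with a small Python sketch: the test is written against an independently hand-computed value, so it pins down correct behavior in a way a vacuous AI-generated test would not. The function `cohens_d` and the specific numbers are illustrative assumptions, not examples from the paper.

```python
import math

def cohens_d(group1, group2):
    """Effect size: difference of means over the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

def test_cohens_d_known_value():
    # Test written first, against a hand-computed case:
    # means 2 and 1, both sample variances 1 -> pooled SD 1 -> d = 1.0
    g1 = [1.0, 2.0, 3.0]
    g2 = [0.0, 1.0, 2.0]
    assert math.isclose(cohens_d(g1, g2), 1.0)
    # A vacuous test such as `assert cohens_d(g1, g2) is not None`
    # would pass even for a wrong implementation -- the failure mode
    # Rule 8 warns about.

test_cohens_d_known_value()
```

The key point is the independent expectation: the asserted value comes from working the formula by hand, not from running the code and copying its output back into the test.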
Theme 4: Scientific Responsibility
| Rule | Principle |
|---|---|
| 10. Accept full accountability | Scientists bear responsibility for all generated code. No exceptions. |
The Specification Workflow
Poldrack advocates a documentation-driven approach using five key files:
```
project/
├── PRD.md         # Project requirements document
├── CLAUDE.md      # Agent instructions and preferences
├── PLANNING.md    # System architecture and tech stack
├── TASKS.md       # Itemized task list by milestones
└── SCRATCHPAD.md  # Workspace for ongoing notes
```
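To make the approach concrete, a CLAUDE.md file might look like the following sketch. The project details and specific rules are invented for illustration; they are not Poldrack's actual file.

```markdown
# CLAUDE.md

## Project
Analysis pipeline for a hypothetical fMRI working-memory study (see PRD.md).

## Conventions
- Python 3.12; tests live in tests/ and run via pytest.
- Never modify tests to make them pass; fix the code instead.
- Read PLANNING.md before proposing architectural changes.
- Check off completed items in TASKS.md; log open questions in SCRATCHPAD.md.
```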
Custom Claude commands (stored in .claude/commands/) automate recurring tasks:
| Command | Purpose |
|---|---|
| /freshstart | Load all documentation at session start |
| /summ+commit | Summarize progress and commit changes |
| /clear | Wipe context at natural breakpoints |
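Custom commands in Claude Code are markdown prompt files. A sketch of what /freshstart might contain (the wording is assumed, not taken from Poldrack's repository):

```markdown
<!-- .claude/commands/freshstart.md -->
Read PRD.md, CLAUDE.md, PLANNING.md, TASKS.md, and SCRATCHPAD.md.
Summarize the current state of the project and identify the next
uncompleted task in TASKS.md before making any changes.
```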
Strategies for Agent Usage
From his strategies post:
Provide tools for autonomy. Equip agents with everything they need in PLANNING.md and CLAUDE.md. Consider MCP servers for structured tool access.
Use examples for learning. LLMs excel at in-context learning. Provide code samples demonstrating your preferred style.
Commit frequently. Regular version control allows quick reversions when agents pursue unproductive paths.
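The commit-and-revert safety net looks like this in practice. A minimal sketch in a throwaway repository; filenames and messages are illustrative:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email dev@example.com && git config user.name dev

# Checkpoint a known-good state before letting the agent loose.
echo "good version" > analysis.py
git add analysis.py && git commit -qm "checkpoint: working analysis"

# The agent pursues an unproductive path and commits it.
echo "broken rewrite" > analysis.py
git add analysis.py && git commit -qm "agent: attempted refactor"

# One command restores the last known-good state.
git revert --no-edit HEAD
cat analysis.py
```

Because every agent change sits on top of a committed checkpoint, backing out costs one `git revert` rather than a manual reconstruction.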
Redirect away from quick fixes. Stop agents when they suggest workarounds:
“Focus on solving the problem rather than generating a workaround that avoids solving the problem.”
Update instructions when confusion occurs. Add clarifications to CLAUDE.md when agents violate guidelines.
Encourage deeper reasoning. When agents hit suboptimal approaches: “Please think harder about what might be going on here.”
Context Management
Monitor context usage via the /context command. Poldrack recommends compacting or clearing at 50% capacity. Context windows that grow too large lose focus and produce degraded outputs.
Better Code, Better Science
The open-source textbook covers:
- Version control fundamentals
- Software engineering principles for scientists
- AI coding assistant usage
- Agentic coding workflows
- Testing and validation strategies
Released under CC-BY-NC, the book serves as a teaching resource for scientific computing.
Key Takeaways
| Principle | Implementation |
|---|---|
| Domain expertise first | Understand the science before generating code |
| Specify before coding | Write PRDs and task lists upfront |
| Test independently | Never trust AI-generated tests without verification |
| Manage context actively | Monitor and clear context windows regularly |
| Accept full responsibility | Your name goes on the paper, not the AI’s |
Links
- Neural Strategies Substack
- Better Code, Better Science
- Ten Simple Rules paper
- Stanford Profile
- GitHub: @poldrack
- Poldrack Lab
Next: Brian Casel’s Spec-Driven Development