TL;DR:
📖 Quick Start
- Who it’s for: Developers new to coding agents
- Time to complete: 20–30 minutes of reading
- Prerequisites: None—this is the overview
- Expected outcome: Complete mental model + first prompts to try
- Next step: Get your first win in 15 minutes
The shift from copilots to agents happened when LLMs learned to:
An agent is just an LLM calling tools in a loop to achieve a goal. (Credit: Simon Willison)
Good agent work isn’t about grand architectural changes—it’s small wins compounded:
Each task: clear goal, feedback loop, verification.
```
goal → think → choose_tool → call_tool → observe_result
     → decide: done? otherwise refine and loop
```
```mermaid
flowchart TD
    Start([Goal]) --> Think[Think: Analyze situation]
    Think --> Choose[Choose Tool]
    Choose --> Call[Call Tool]
    Call --> Observe[Observe Result]
    Observe --> Decide{Goal Reached?}
    Decide -->|No| Think
    Decide -->|Yes| End([Done])
```
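The loop above can be sketched in a few lines of Python. Note that `call_llm` and `run_tool` are hypothetical stand-ins for a real model API and tool executor, not Amp's actual interface:

```python
# Minimal sketch of the agent loop, with fake helpers.
# call_llm and run_tool are hypothetical stand-ins for a real
# LLM API and tool executor -- the names are illustrative only.

def call_llm(goal, history):
    """Fake 'think + choose tool' step; a real agent calls a model here."""
    if any(obs == "tests passed" for _, obs in history):
        return {"done": True, "tool": None, "args": None}
    return {"done": False, "tool": "run_tests", "args": {}}

def run_tool(tool, args):
    """Fake tool call; a real one would shell out, edit files, etc."""
    return "tests passed"

def agent_loop(goal, max_steps=10):
    history = []  # (tool, observation) pairs the model sees next turn
    for _ in range(max_steps):
        decision = call_llm(goal, history)  # think + choose_tool
        if decision["done"]:                # decide: goal reached?
            break
        observation = run_tool(decision["tool"], decision["args"])  # call_tool
        history.append((decision["tool"], observation))             # observe_result
    return history  # step budget exhausted or goal reached

print(agent_loop("make the test suite pass"))
# → [('run_tests', 'tests passed')]
```

The `max_steps` budget matters: without it, an agent that never reaches its goal loops forever, which is also why every task needs a verification step the loop can actually check.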
This loop is why agents work. They don’t just generate code—they:
Amp is built on four principles:
In practice, this means:
💡 Token Management: Start new threads around 50–100k tokens; quality degrades beyond ~100k. For comprehensive token-hygiene best practices, see Cost and Time Tips in Power Patterns.
Pick a small task (failing test, UI tweak), give it to the agent with a verification step, review the diff, done.
Example prompt:
Run the tests, list failures, fix one file at a time, re-run.
Stop after green.
Keep threads small, use external memory (context.md files), maintain tight feedback loops, and leverage git staging.
Pattern:
Start new threads often → Avoid context rot
Write to context.md → Future threads reference it
Stage good changes → Discard bad ones
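The "write to context.md" step can be as simple as appending dated decision records that future threads read. A minimal sketch, assuming the `.agents/context/` convention used later in this guide (the record format here is just an example):

```python
# Hedged sketch of external memory: append decision records to a
# per-topic context file. The .agents/context/ path follows this
# guide's convention; the bullet format is an illustrative choice.
from datetime import date
from pathlib import Path

def record_decision(topic, decision, context_dir=".agents/context"):
    """Append one dated decision line to .agents/context/<topic>.md."""
    path = Path(context_dir) / f"{topic}.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open("a") as f:
        f.write(f"- {date.today()}: {decision}\n")
    return path
```

A new thread can then be pointed at `@.agents/context/<topic>.md` instead of replaying the old conversation.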
When simple won’t cut it, reach for subagents (parallelization), Oracle (deep reasoning), or Librarian (cross-repo research).
Example:
Use 3 subagents to convert these CSS files to Tailwind
🔨 Try It Now: First Quick Win
Task: Get a verifiable result in under 5 minutes
Prompt (pick one based on your codebase):
Run the tests in @tests/ and list all failures. Fix the first failing test and re-run to verify it passes.

Or:

Remove all console.log and debugger statements from @src/components/ and verify the build still succeeds.

Verification:
- Tests go from failing to passing, OR
- Build succeeds with no console statements
Expected outcome: One clean, verified change you can stage and commit.
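For the second prompt, you can also confirm the verification criterion yourself with a small script. A sketch, assuming JS/TS sources; the directory path is an example to replace with your own `src/components/`:

```python
# Sketch of a manual check for leftover console.log / debugger
# statements under a directory. Point it at your own source tree.
import re
from pathlib import Path

PATTERN = re.compile(r"\bconsole\.log\s*\(|\bdebugger\b")

def leftover_debug_statements(root):
    """Return (file, line_number, line) for each remaining debug statement."""
    hits = []
    for path in sorted(Path(root).rglob("*")):
        if path.suffix not in {".js", ".jsx", ".ts", ".tsx"}:
            continue
        for lineno, line in enumerate(path.read_text().splitlines(), start=1):
            if PATTERN.search(line):
                hits.append((str(path), lineno, line.strip()))
    return hits

# An empty result means the verification criterion is met, e.g.:
# leftover_debug_statements("src/components") == []
```

An empty list is the pass condition; anything else tells you exactly which file and line the agent missed.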
Quick wins:
Planning without code:
Feedback loops:
🔨 Try It Now: Wish-List Scaffolding
Task: Turn a feature idea into a structured plan
Prompt:
I want to add [feature name]. Create a plan file at .agents/plans/todo/[feature].md with Goal, Current State, Scope (in/out), Steps, and Success Criteria. Don't write any code yet.

Verification:
- Plan file exists in .agents/plans/todo/
- Contains all required sections
- Scope clearly separates MVP from nice-to-haves
Expected outcome: A structured plan ready for Oracle review before any coding starts.
Deep analysis (uses Oracle):
Cross-repo research (uses Librarian):
🔨 Try It Now: Task.md Handoff
Task: Save context from a long thread for later use
Prompt:
Summarize the key decisions, constraints, and next steps from this conversation into .agents/context/[topic].md. Include what worked, what didn't, and any patterns we established.

Verification:
- Context file created in .agents/context/
- Contains actionable information for future threads
- No implementation details, just decisions and patterns
Expected outcome: Clean external memory you can reference in new threads to avoid context rot.
Context sprawl: Threads degrade beyond ~100k tokens. Fix: start new threads often. See Cost and Time Tips for token management.
Vague prompts: “Build a batch tool” won’t work. Fix: add file paths, links, constraints, approach guidance.
No feedback loops: Agent can’t verify its work. Fix: include test runs, screenshots, build checks.
Micromanaging: Telling the agent every tiny step. Fix: give it the goal and verification criteria.
Underprompting: Assuming the agent knows what you want. Fix: be explicit—“Use git blame to tell me who wrote this component.”
✅ Short threads: Starting new threads around 50–100k tokens
✅ Clear goals: Every prompt has a verification step
✅ Feedback loops: Agent can run tests/builds to verify
✅ Git discipline: Reviewing diffs, staging good changes
✅ Right mode: Using Rush for simple tasks, Smart for most, Oracle for critical decisions
If you’re missing any, revisit the relevant sections above.
💡 Mode Selection: Pick Rush for small tasks (67% cheaper, 50% faster), Smart for complex work (the default), or Oracle for deep reasoning. For comprehensive guidance on choosing modes, see Choosing Your Mode in Power Patterns.
Ready to put this into practice? Follow the recommended path:
Next: Get Your First Win in 15 Minutes — End-to-end first win with a real task.
Or start with fundamentals: What is an Agent? — Core concepts explained.
Practice Path:
Standalone:
Resources:
Remember: Programming with agents is paint-by-numbers—you provide the structure and direction, the agent fills in the details. Start small, iterate fast, compound your wins.