Coding with Agents in 2025: A Practical Field Guide

TL;DR: An agent is an LLM calling tools in a loop. Keep threads short, give every task a clear goal and a verification step, review diffs in git, and compound small wins.

Why Agents Now?

The shift from copilots to agents happened when LLMs learned to call tools, observe the results, and iterate toward a goal.

An agent is just an LLM calling tools in a loop to achieve a goal. (Credit: Simon Willison)

What “Good” Looks Like

Good agent work isn’t about grand architectural changes; it’s small wins compounded.

Each task: clear goal, feedback loop, verification.

The Core Loop

goal → think → choose_tool → call_tool → observe_result
    → decide: done? otherwise refine and loop
```mermaid
flowchart TD
    Start([Goal]) --> Think[Think: Analyze situation]
    Think --> Choose[Choose Tool]
    Choose --> Call[Call Tool]
    Call --> Observe[Observe Result]
    Observe --> Decide{Goal Reached?}
    Decide -->|No| Think
    Decide -->|Yes| End([Done])
```

This loop is why agents work. They don’t just generate code; they observe the result of each action, check it against the goal, and keep refining until they get there.
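
To make the loop concrete, here is a minimal sketch in TypeScript. The `callModel` and `runTool` functions are hypothetical stubs standing in for a real model API and tool layer; this illustrates the loop, not Amp’s actual implementation.

```typescript
// Minimal sketch of the agent loop: think, choose a tool, call it, observe,
// and repeat until the goal is reached. `callModel` and `runTool` are
// hypothetical stubs, not Amp's model or tool layer.

type ToolCall = { tool: string; args: Record<string, string> };
type ModelStep =
  | { kind: "tool"; call: ToolCall }    // the model wants to call a tool
  | { kind: "done"; summary: string };  // the model believes the goal is met

async function callModel(goal: string, history: string[]): Promise<ModelStep> {
  // Placeholder: a real agent sends `goal` + `history` to an LLM here.
  return history.length < 3
    ? { kind: "tool", call: { tool: "run_tests", args: {} } }
    : { kind: "done", summary: `done: ${goal}` };
}

async function runTool(call: ToolCall): Promise<string> {
  // Placeholder: dispatch to real tools (shell, editor, test runner) here.
  return `observed result of ${call.tool}(${JSON.stringify(call.args)})`;
}

async function agentLoop(goal: string, maxSteps = 20): Promise<string> {
  const history: string[] = [];
  for (let step = 0; step < maxSteps; step++) {
    const next = await callModel(goal, history);    // think + choose tool
    if (next.kind === "done") return next.summary;  // goal reached, stop
    const observation = await runTool(next.call);   // call the tool
    history.push(observation);                      // observe, then loop
  }
  return "step budget exhausted without reaching the goal";
}

agentLoop("make the test suite pass").then(console.log);
```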

The Amp Mindset

Amp is built on four principles:

  1. Unconstrained token usage - no artificial context limits
  2. Always uses the best models - multi-model approach for each task
  3. Raw model power - full access to AI capabilities
  4. Built to evolve - adapts with new models

In practice, this means:

💡 Token Management: Start new threads around 50–100k tokens; beyond ~100k, quality degrades. For comprehensive token hygiene best practices, see Cost and Time Tips in Power Patterns.
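
If you want a rough signal for when to cut a thread, a characters-divided-by-four heuristic is a common approximation of token count. The helper below is illustrative only; the thresholds mirror the 50–100k guidance above, and real tokenizers will count differently.

```typescript
// Rough token estimate for a thread transcript. ~4 characters per token is a
// common rule of thumb; real tokenizers will count differently.
function estimateTokens(transcript: string): number {
  return Math.ceil(transcript.length / 4);
}

// Illustrative thresholds mirroring the 50–100k guidance above.
function threadAdvice(transcript: string): string {
  const tokens = estimateTokens(transcript);
  if (tokens > 100_000) return `~${tokens} tokens: quality likely degrading, start a new thread`;
  if (tokens > 50_000) return `~${tokens} tokens: summarize to context.md and consider a fresh thread`;
  return `~${tokens} tokens: keep going`;
}

console.log(threadAdvice("goal: fix the failing auth tests ".repeat(2_000)));
```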

Three Workflows to Master

1. First Win in 15 Minutes

Pick a small task (failing test, UI tweak), give it to the agent with a verification step, review the diff, done.

Example prompt:

Run the tests, list failures, fix one file at a time, re-run.
Stop after green.
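
What makes this prompt work is the feedback loop: the agent can re-run a command and read the result. Here is a sketch of that verification step, assuming your project has an `npm test` script (swap in your real test runner).

```typescript
// Run the project's test command and report the result. Assumes an `npm test`
// script exists; swap in your project's real test runner.
import { spawnSync } from "node:child_process";

function runTests(): { passed: boolean; output: string } {
  const result = spawnSync("npm", ["test"], { encoding: "utf8", shell: true });
  return {
    passed: result.status === 0,
    output: `${result.stdout ?? ""}${result.stderr ?? ""}`,
  };
}

const { passed, output } = runTests();
console.log(passed ? "green: stop here" : `failures found:\n${output}`);
```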

2. Workflows That Stick

Keep threads small, use external memory (context.md files), maintain tight feedback loops, and leverage git staging.

Pattern:

Start new threads often → Avoid context rot
Write to context.md → Future threads reference it
Stage good changes → Discard bad ones
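
For the staging half of this pattern, the git CLI already does the work; here is a sketch of the review-then-stage habit (the file path is a placeholder).

```typescript
// Git discipline: review the diff first, then stage only the files you have
// reviewed. Anything left unstaged stays easy to discard.
import { execSync } from "node:child_process";

function showDiffSummary(): void {
  console.log(execSync("git diff --stat", { encoding: "utf8" }));
}

function stageReviewed(files: string[]): void {
  execSync(`git add ${files.join(" ")}`); // stage the good changes only
}

showDiffSummary();
stageReviewed(["src/components/Button.tsx"]); // placeholder path
```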

3. Power Patterns

When simple won’t cut it, reach for subagents (parallelization), Oracle (deep reasoning), or Librarian (cross-repo research).

Example:

Use 3 subagents to convert these CSS files to Tailwind
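
Conceptually, subagents are independent copies of the loop working on disjoint pieces of a task. A minimal sketch of that fan-out, with `runSubagent` as a placeholder for handing one self-contained task to a real subagent:

```typescript
// Fan-out sketch: run several independent "subagent" tasks in parallel and
// collect the results. `runSubagent` is a placeholder for handing one
// self-contained task (e.g. one CSS file) to a real subagent.
async function runSubagent(task: string): Promise<string> {
  return `done: ${task}`; // a real subagent would run its own tool loop here
}

async function fanOut(tasks: string[], width = 3): Promise<string[]> {
  const results: string[] = [];
  for (let i = 0; i < tasks.length; i += width) {
    const batch = tasks.slice(i, i + width);  // at most `width` at once
    results.push(...(await Promise.all(batch.map(runSubagent))));
  }
  return results;
}

fanOut(["convert header.css", "convert footer.css", "convert nav.css"]).then(console.log);
```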

Prompts You Can Copy Today

🔨 Try It Now: First Quick Win

Task: Get a verifiable result in under 5 minutes

Prompt (pick one based on your codebase):

Run the tests in @tests/ and list all failures.
Fix the first failing test and re-run to verify it passes.

Or:

Remove all console.log and debugger statements from @src/components/
and verify the build still succeeds.

Verification: Re-run the tests (or the build) and confirm the change holds, then check that the diff contains only what you intended.

Expected outcome: One clean, verified change you can stage and commit.
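
If you want to double-check the result yourself, a small scan for leftover statements works. A sketch, assuming the components live under `src/components/`:

```typescript
// Verify the cleanup: recursively scan a directory for leftover console.log or
// debugger statements. Assumes the components live under src/components/.
import { readdirSync, readFileSync, statSync } from "node:fs";
import { join } from "node:path";

function findDebugStatements(dir: string): string[] {
  const hits: string[] = [];
  for (const entry of readdirSync(dir)) {
    const path = join(dir, entry);
    if (statSync(path).isDirectory()) {
      hits.push(...findDebugStatements(path));
    } else if (/\.(ts|tsx|js|jsx)$/.test(entry)) {
      const source = readFileSync(path, "utf8");
      if (/console\.log|debugger/.test(source)) hits.push(path);
    }
  }
  return hits;
}

const leftovers = findDebugStatements("src/components");
console.log(leftovers.length === 0 ? "clean" : `still present in:\n${leftovers.join("\n")}`);
```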

Quick wins:

Planning without code:

Feedback loops:

🔨 Try It Now: Wish-List Scaffolding

Task: Turn a feature idea into a structured plan

Prompt:

I want to add [feature name]. Create a plan file at .agents/plans/todo/[feature].md
with Goal, Current State, Scope (in/out), Steps, and Success Criteria.
Don't write any code yet.

Verification: Open the plan file and confirm all five sections are present and that no code was written.

Expected outcome: A structured plan ready for Oracle review before any coding starts.
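
If you prefer to scaffold the file yourself and have the agent fill it in, here is a sketch that writes the same skeleton; the `.agents/plans/todo/` path mirrors the prompt above, and the feature name is just an example.

```typescript
// Scaffold a plan file with the sections named in the prompt above. The
// .agents/plans/todo/ path mirrors the prompt; the feature name is an example.
import { mkdirSync, writeFileSync } from "node:fs";

function scaffoldPlan(feature: string): string {
  const path = `.agents/plans/todo/${feature}.md`;
  const skeleton = [
    `# ${feature}`,
    "## Goal", "",
    "## Current State", "",
    "## Scope (in/out)", "",
    "## Steps", "",
    "## Success Criteria", "",
  ].join("\n");
  mkdirSync(".agents/plans/todo", { recursive: true });
  writeFileSync(path, `${skeleton}\n`);
  return path;
}

console.log(`plan scaffolded at ${scaffoldPlan("dark-mode-toggle")}`);
```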

Deep analysis (uses Oracle):

Cross-repo research (uses Librarian):

🔨 Try It Now: Task.md Handoff

Task: Save context from a long thread for later use

Prompt:

Summarize the key decisions, constraints, and next steps from this conversation
into .agents/context/[topic].md. Include what worked, what didn't, and any
patterns we established.

Verification: Open the context file and confirm a fresh thread could pick up the work from it without the original conversation.

Expected outcome: Clean external memory you can reference in new threads to avoid context rot.
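
A sketch of what that external memory can look like on disk, assuming the `.agents/context/` convention from the prompt (the topic and notes are placeholders):

```typescript
// Save a handoff note that a future thread can reference instead of replaying
// the whole conversation. Path follows the .agents/context/ convention above.
import { appendFileSync, mkdirSync } from "node:fs";

function saveHandoff(topic: string, notes: string): string {
  const path = `.agents/context/${topic}.md`;
  mkdirSync(".agents/context", { recursive: true });
  appendFileSync(path, `\n## ${new Date().toISOString().slice(0, 10)}\n${notes}\n`);
  return path;
}

// Placeholder content for illustration.
const file = saveHandoff("auth-refactor", [
  "- Decisions: keep session logic in middleware",
  "- Constraints: no new dependencies",
  "- Next steps: add tests for token refresh",
].join("\n"));
console.log(`handoff saved to ${file}`);
```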

Common Pitfalls

Context sprawl: Threads degrade beyond ~100k tokens. Fix: start new threads often. See Cost and Time Tips for token management.

Vague prompts: “Build a batch tool” won’t work. Fix: add file paths, links, constraints, approach guidance.

No feedback loops: Agent can’t verify its work. Fix: include test runs, screenshots, build checks.

Micromanaging: Telling the agent every tiny step. Fix: give it the goal and verification criteria.

Underprompting: Assuming the agent knows what you want. Fix: be explicit—“Use git blame to tell me who wrote this component.”

Self-Check: Are You Doing These 5 Things?

✅ Short threads: Starting new threads around 50–100k tokens
✅ Clear goals: Every prompt has a verification step
✅ Feedback loops: Agent can run tests/builds to verify
✅ Git discipline: Reviewing diffs, staging good changes
✅ Right mode: Using Rush for simple, Smart for most, Oracle for critical decisions

If you’re missing any, revisit the relevant sections above.

💡 Mode Selection: Pick Rush for small tasks (67% cheaper, 50% faster), Smart for complex work (default), or Oracle for deep reasoning. For comprehensive guidance on choosing modes, see Choosing Your Mode in Power Patterns.
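
As a rough decision rule (the modes and trade-offs come from the tip above; the heuristic itself is only an illustration):

```typescript
// Illustrative mode picker based on the guidance above: Rush for small, clear
// tasks, Smart as the default, Oracle for critical or deep-reasoning work.
type Mode = "rush" | "smart" | "oracle";

function pickMode(task: { smallAndClear?: boolean; critical?: boolean }): Mode {
  if (task.critical) return "oracle";     // deep reasoning for critical decisions
  if (task.smallAndClear) return "rush";  // cheaper and faster for simple tasks
  return "smart";                         // sensible default for most work
}

console.log(pickMode({ smallAndClear: true })); // "rush"
```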

Your First 30 Minutes with Amp

  1. Install: ampcode.com/install
  2. Setup: CLI + VS Code extension, learn shortcuts (⌘I, ⌘L)
  3. Mode: Default Smart works for most tasks; switch to Rush for small, clear tasks
  4. First task: Pick a failing test or small UI change
  5. Prompt: Clear goal + verification step
  6. Review: Use git staging area—stage good, discard bad
  7. Iterate: Refine prompt if needed, try again
  8. Success: Green tests or working feature

What’s Next

Ready to put this into practice? Follow the recommended path:

Next: Get Your First Win in 15 Minutes — End-to-end first win with a real task.

Or start with fundamentals: What is an Agent? — Core concepts explained.

Practice Path:

  1. What is an Agent? or this Overview (you are here)
  2. First Win in 15 Minutes
  3. Workflows That Stick
  4. Power Patterns
  5. Planning Workflow

Standalone:

Resources:


Remember: Programming with agents is paint-by-numbers—you provide the structure and direction, the agent fills in the details. Start small, iterate fast, compound your wins.