AI Agent Design Patterns for Developers
What Makes an AI Agent Different?
An AI agent is not just a single LLM call. It's a system that thinks, acts, observes, and iterates. An agent can use tools, make decisions, and recover from mistakes. Understanding agent patterns helps you build systems that are more intelligent and reliable than simple prompt chains.
Agent vs. Prompt Chain
A Simple Prompt Chain
You pass data through steps:
- Get user input
- Call LLM with prompt
- Return LLM output
The LLM makes one decision and outputs the result. If the result is wrong, it stays wrong.
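The steps above can be sketched as a single pass. This is a minimal sketch; `call_llm` is a hypothetical stub standing in for whatever model client you actually use:

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real model client (e.g. an API call).
    return f"summary of: {prompt}"

def prompt_chain(user_input: str) -> str:
    # One pass: build the prompt, call the model once, return the output.
    # There is no retry, no tool use, and no self-correction.
    prompt = f"Summarize the following request:\n{user_input}"
    return call_llm(prompt)
```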
An Agent
You give the agent a goal and tools:
- Agent receives goal: "Calculate the revenue impact of a 10% price increase"
- Agent thinks: "I need to fetch current revenue data"
- Agent uses tool: calls database API
- Agent observes: "Current revenue is $5M"
- Agent thinks: "Now I can calculate the impact"
- Agent uses another tool: calculator or Python
- Agent observes: "10% increase is $500K"
- Agent returns final answer
The agent reasons about what it needs, uses available tools, and adapts based on results.
The ReAct Pattern: Reason, Act, Observe
ReAct is the foundation of most agent systems. The loop is:
- Reason: Agent thinks about what to do next
- Act: Agent takes an action (uses a tool, calls an API, writes code)
- Observe: Agent sees the result and updates its understanding
- Repeat until the goal is reached
How ReAct Works
The agent's internal monologue might look like:
Thought: The user wants to know how many active users signed up this month.
Action: I should query the database for signup data.
Tool: query_database
Input: SELECT COUNT(*) FROM users WHERE created_at > '2025-03-01'
Observation: The database returned 342.
Thought: That covers signups, but the user asked about active users, so I should also check recent activity.
Action: Query for users who have logged in within the last 7 days.
Tool: query_database
Input: SELECT COUNT(DISTINCT user_id) FROM sessions WHERE created_at > NOW() - INTERVAL 7 DAY
Observation: 287 users are active.
Thought: I now have both pieces of information. Let me provide a clear answer.
Final Answer: 342 users signed up this month, and 287 of them are currently active.
Each step gives the agent more information. It uses that information to decide the next step.
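The loop above can be sketched in a few lines. This is a minimal sketch: the "LLM" is a scripted `think` stub so the control flow is visible, and `query_database` is a hypothetical tool returning canned data. In a real agent, each `think` call is a model invocation that returns either a tool call or a final answer:

```python
def query_database(sql: str) -> str:
    # Hypothetical tool; a real implementation would hit a database.
    return "342" if "FROM users" in sql else "287"

TOOLS = {"query_database": query_database}

def think(history: list) -> dict:
    # Scripted stand-in for the model: gather two observations, then answer.
    observations = [h for h in history if h[0] == "observation"]
    if len(observations) == 0:
        return {"tool": "query_database", "input": "SELECT COUNT(*) FROM users"}
    if len(observations) == 1:
        return {"tool": "query_database", "input": "SELECT COUNT(DISTINCT user_id) FROM sessions"}
    return {"final_answer": f"{observations[0][1]} signed up; {observations[1][1]} are active."}

def react_loop(goal: str, max_steps: int = 10) -> str:
    history = [("goal", goal)]
    for _ in range(max_steps):          # hard step limit prevents infinite loops
        decision = think(history)                            # Reason
        if "final_answer" in decision:
            return decision["final_answer"]
        result = TOOLS[decision["tool"]](decision["input"])  # Act
        history.append(("observation", result))              # Observe
    return "Step limit reached without an answer."
```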
Tool-Calling Agents
An agent needs a set of tools to accomplish tasks. Tools are functions it can call.
Common Agent Tools
- Search - Query the internet or internal docs
- Database - Read/write from databases
- API - Call external services
- Code execution - Run Python or other code
- File operations - Read/write files
- Calculator - Perform math
- Shell commands - Execute system commands
Tool Definition
For each tool, you define:
- Name: What the agent calls it
- Description: What it does (helps agent decide when to use it)
- Parameters: What inputs it needs
- Returns: What it outputs
Example Tool
def get_user_data(user_id: int) -> dict:
    """
    Fetch user information from the database.

    Args:
        user_id: The unique identifier of the user

    Returns:
        A dictionary with user details (name, email, signup_date, etc.)
    """
    # Implementation goes here; stub data shown for illustration
    return {"name": "Alice", "email": "alice@example.com", "signup_date": "2025-01-15"}
The description is critical. The agent reads it to decide whether this tool helps.
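To hand this tool to a model, the name, description, and parameters are typically serialized into a JSON schema. Here is a sketch of the OpenAI-style shape for the function above; the exact envelope varies by provider:

```python
# OpenAI-style tool spec for get_user_data: name and description are what
# the model reads to decide whether to call it; parameters constrain inputs.
get_user_data_spec = {
    "type": "function",
    "function": {
        "name": "get_user_data",
        "description": "Fetch user information from the database.",
        "parameters": {
            "type": "object",
            "properties": {
                "user_id": {
                    "type": "integer",
                    "description": "The unique identifier of the user",
                },
            },
            "required": ["user_id"],
        },
    },
}
```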
Multi-Step Planning Agents
Some tasks are too complex for a single reasoning loop. A planning agent breaks tasks into steps.
How Planning Agents Work
- Planning phase: Agent receives a goal and generates a plan (step 1, step 2, step 3)
- Execution phase: Agent executes each step using tools and observing results
- Adaptation: If a step fails, agent adjusts the plan
Example Plan
Goal: "Build a summary of Q1 2025 performance"
Generated plan:
- Fetch revenue data for Jan, Feb, Mar 2025
- Fetch customer count for each month
- Fetch churn rate for each month
- Calculate month-over-month growth
- Identify top customers by revenue
- Compile into a report
The agent now executes this plan step by step, using tools and adapting if needed.
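The plan-then-execute flow can be sketched as follows. In this minimal sketch, `plan` and `execute_step` are hypothetical stubs; in a real planning agent, `plan` is an LLM call and `execute_step` dispatches to tools and can fail:

```python
def plan(goal: str) -> list:
    # Stand-in for an LLM planning call; returns an ordered step list.
    return ["fetch_revenue", "fetch_customers", "compile_report"]

def execute_step(step: str) -> tuple:
    # Hypothetical executor: returns (success, result). A real one would
    # call tools and can fail (API down, empty data, bad query, etc.).
    return (True, f"done: {step}")

def run_plan(goal: str, max_retries: int = 2) -> list:
    results = []
    for step in plan(goal):
        for attempt in range(1 + max_retries):
            ok, result = execute_step(step)
            if ok:
                results.append(result)
                break
            # Adaptation: a failed step is retried; a real agent might
            # instead re-plan the remaining work with the error as context.
    return results
```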
Agent Frameworks
LangGraph
Built by the LangChain team. You define a graph of states and transitions. Agents move between states, calling tools and LLMs as needed. Good for complex workflows.
from langgraph.graph import StateGraph

graph = StateGraph(AgentState)
graph.add_node("reasoning", reason)
graph.add_node("acting", act)
graph.add_node("observing", observe)
graph.add_edge("reasoning", "acting")
graph.add_edge("acting", "observing")
graph.set_entry_point("reasoning")
app = graph.compile()
CrewAI
Multi-agent framework where multiple AI agents collaborate. Each agent has a role, goal, and tools. They work together to solve problems.
from crewai import Agent

agent1 = Agent(role="Researcher", goal="Find information", tools=[search_tool])
agent2 = Agent(role="Analyst", goal="Analyze findings", tools=[calculator])
AutoGen (Microsoft)
Framework for multi-agent conversations. Agents have conversations with each other, with the ability to code, use tools, and reason.
OpenAI Function Calling
Simplest approach. You define functions, pass them to the API, and the LLM decides which to call. Less flexible than frameworks, but simpler.
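A sketch of the flow: you pass tool schemas in the `tools` parameter, and the model either answers directly or returns a tool call for you to execute. The `get_weather` tool and the model name are illustrative; the API call itself is shown commented out because it requires the `openai` package and an API key:

```python
# Tool schema passed to the API; the model reads name and description
# to decide whether to call it.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(
#     model="gpt-4o",
#     messages=[{"role": "user", "content": "What's the weather in Paris?"}],
#     tools=tools,
# )
# If the model returns a tool call, you execute it and send the result
# back as a follow-up message so the model can produce the final answer.
```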
When to Use Agents vs. Simple Chains
Use Agents When:
- The task requires multiple steps or decisions
- The agent needs to use tools (APIs, databases, code execution)
- You want the system to recover from mistakes
- The task is complex enough that a single prompt can't capture it
- You need to cite sources or show reasoning
Use Simple Chains When:
- The task is straightforward (one step)
- No tools are needed
- Speed matters and you want to minimize LLM calls
- The problem is well-defined and doesn't require adaptation
Common Failure Modes
Infinite Loops
Agent keeps calling the same tool and getting the same result. Prevent with step limits and loop detection.
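Both guards fit in a few lines. A sketch, assuming the agent's next move is exposed as a `(tool, input)` pair and `None` means it is done:

```python
def guarded_run(next_action, execute, max_steps: int = 15) -> str:
    """Run an agent loop with a hard step limit and repeated-action detection."""
    seen = set()
    for _ in range(max_steps):
        action = next_action()          # e.g. the agent's next tool call
        if action is None:              # agent signalled it is finished
            return "done"
        key = (action["tool"], action["input"])
        if key in seen:                 # same tool, same input: a loop
            return "aborted: repeated action detected"
        seen.add(key)
        execute(action)
    return "aborted: step limit reached"
```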
Tool Misuse
Agent calls the wrong tool or misunderstands parameters. Improve by writing clear tool descriptions and adding validation.
Hallucinated Tools
Agent tries to call a tool that doesn't exist. Validate tool calls before executing.
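A minimal validation sketch: check the requested name against a tool registry before dispatching, and return the error as text so the agent can correct itself on its next step rather than crashing:

```python
def validate_tool_call(name: str, args: dict, registry: dict):
    """Execute a tool call only if the tool actually exists."""
    if name not in registry:
        return f"Error: unknown tool '{name}'. Available tools: {sorted(registry)}"
    # (Checking args against the tool's parameter schema would go here.)
    return registry[name](**args)
```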
Getting Lost
Agent forgets the original goal and goes off track. Help by reminding the agent of the goal in each iteration.
Token Explosion
Agent's reasoning history grows and consumes tokens. Manage by summarizing old reasoning or using a sliding window.
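A sliding window is the simplest version of this, sketched below assuming the history is a list whose first entry is the goal. A fuller approach would summarize the dropped turns instead of discarding them:

```python
def trim_history(history: list, window: int = 6) -> list:
    """Keep the goal plus only the most recent turns, bounding prompt size."""
    goal, rest = history[0], history[1:]
    return [goal] + rest[-window:]
```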
Scoping Agent Tasks for Reliability
The more specific the task, the more reliable the agent.
Bad Task Scoping
"Help me with my business."
- Too vague. Agent doesn't know where to start.
Good Task Scoping
"Analyze last month's revenue. Compare it to the previous month. Calculate growth rate. Identify top 5 revenue-generating customers."
- Specific steps. Clear tools needed. Agent knows what success looks like.
Task Scoping Checklist
- Does the task have clear success criteria?
- Is the task scoped narrowly enough for an agent to complete it?
- Are the required tools available?
- Can the agent verify it succeeded?
Building Your First Agent
- Start simple: Use an agent framework (LangGraph or CrewAI)
- Define tools: What does your agent need to do its job?
- Write tool descriptions: Help the agent understand when to use each tool
- Set a clear goal: What should the agent accomplish?
- Test with examples: Try common and edge-case scenarios
- Add error handling: What happens if a tool fails?
- Monitor and iterate: Log agent decisions and improve based on failures
Practice
Try building a simple agent:
- Pick a domain (e.g., e-commerce, documentation, data analysis)
- Define 3-5 tools it needs
- Give it a specific task
- Observe how it reasons and decides
- Notice where it fails and why
As you get comfortable, try more complex agents with multiple tools and planning phases.