LangChain, CrewAI, Claude Agent SDK, and more — choose the right framework for your project. Understand the strengths and tradeoffs of each approach.
Building agents from scratch (like we'll do in Topic 15) is educational and gives you total control. But in production, you're reinventing the same patterns: tool parsing, memory management, error handling, logging, monitoring. Frameworks handle all this for you.
- **Don't Repeat Yourself:** proven patterns for planning, tool calling, and memory.
- **Ecosystem:** integrations with LLMs, databases, tools, and APIs.
- **Production ready:** logging, error handling, and monitoring built in.
- **Community:** thousands of examples, plugins, and best practices.
- **Faster development:** focus on your logic, not infrastructure.
Frameworks add dependencies and abstraction layers. If you don't understand how agents work (you now do!), debugging is hard. Frameworks can also be slower and more expensive than a minimal custom agent.
LangChain is the most popular agent framework. It abstracts LLM interactions, tool calling, memory, and chains. It works with any model (Claude, GPT, Llama, etc.).
```python
from langchain.agents import AgentExecutor, Tool, create_react_agent
from langchain_anthropic import ChatAnthropic

# 1. Define your tools
tools = [
    Tool(
        name="web_search",
        func=web_search_impl,  # your own search function
        description="Search the web for current information",
    )
]

# 2. Create an agent
model = ChatAnthropic(model="claude-opus-4-6")
agent = create_react_agent(model, tools, prompt=...)  # a ReAct prompt template
executor = AgentExecutor(agent=agent, tools=tools)

# 3. Run it
result = executor.invoke({"input": "What are the latest AI news?"})
```
Mature, well-documented, huge ecosystem of integrations, excellent tooling.
Heavy abstraction, steeper learning curve, slower than minimal agents, opinionated design.
CrewAI is built specifically for multi-agent systems. Each agent has a role, tools, and expertise. They work together on a shared task. Great for complex workflows.
```python
from crewai import Agent, Task, Crew

# Define agents with specific roles
researcher = Agent(
    role="Research Analyst",
    goal="Find current market trends",
    backstory="An analyst who tracks AI industry news",
    tools=[web_search_tool],  # your own search tool
    llm="gpt-4",
)
writer = Agent(
    role="Report Writer",
    goal="Write a comprehensive report",
    backstory="A technical writer who turns research into reports",
    tools=[],
    llm="gpt-4",
)

# Define tasks
research_task = Task(
    description="Research AI trends in 2025",
    expected_output="A bullet list of current AI trends",
    agent=researcher,
)
writing_task = Task(
    description="Write a report based on research",
    expected_output="A short report in markdown",
    agent=writer,
)

# Create and run the crew
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
)
result = crew.kickoff()
```
This is where CrewAI shines: multi-agent workflows with clear role separation, such as a research + analysis + writing pipeline, or a manager agent overseeing worker agents.
Built specifically for Claude by Anthropic. Lightweight, focused on tool use through Claude's native tool_use feature. Great if you're committed to Claude models.
```python
from anthropic import Anthropic

client = Anthropic()

tools = [
    {
        "name": "calculate",
        "description": "Perform math calculation",
        "input_schema": {
            "type": "object",
            "properties": {
                "expression": {"type": "string"}
            },
            "required": ["expression"],
        },
    }
]

messages = [
    {"role": "user", "content": "What's 15 + 27?"}
]

response = client.messages.create(
    model="claude-opus-4-6",
    max_tokens=1024,
    tools=tools,
    messages=messages,
)

# Claude returns tool_use blocks. Parse and execute.
for block in response.content:
    if block.type == "tool_use":
        tool_result = execute_tool(block.name, block.input)  # your own dispatcher
        # Add the result back and continue the conversation
```
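To continue the loop, Claude's Messages API expects the tool output back as a `tool_result` content block inside a user-role message that references the `tool_use` block's `id`. A minimal helper for building that message (the name `make_tool_result_message` is ours, not part of the SDK) might look like:

```python
def make_tool_result_message(tool_use_id, result):
    # Claude expects tool output as a user-role message containing a
    # tool_result block whose tool_use_id matches the id of the
    # tool_use block it answers.
    return {
        "role": "user",
        "content": [
            {
                "type": "tool_result",
                "tool_use_id": tool_use_id,
                "content": str(result),
            }
        ],
    }
```

Append the assistant's tool-calling turn plus this message to `messages`, call `client.messages.create` again, and repeat until the response contains no more `tool_use` blocks.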
Lightweight, optimized for Claude's native tool_use, excellent documentation, tight Anthropic support.
Claude-only, smaller ecosystem than LangChain, fewer advanced features (no built-in memory management).
AutoGen focuses on agents that communicate with each other via conversation. Great for simulating multi-party discussions and debates.
```python
from autogen import AssistantAgent, UserProxyAgent

# Create agents that talk to each other
assistant = AssistantAgent(
    name="Assistant",
    llm_config={"model": "gpt-4"},
)
user_proxy = UserProxyAgent(
    name="User",
    human_input_mode="TERMINATE",
)

# Start a conversation
user_proxy.initiate_chat(
    assistant,
    message="Write Python code to count prime numbers up to 100",
)
```
AutoGen excels at iterative code generation. An assistant writes code, a code executor runs it, reports bugs, and the assistant fixes them. Great for complex programming tasks.
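The write-run-fix cycle AutoGen automates can be sketched framework-free. The `exec`-based runner below is a toy for illustration only (AutoGen sandboxes real code execution), and both "drafts" stand in for model output:

```python
def run_and_report(code):
    # Toy executor: run the snippet and report success or the error text
    # that an assistant agent would use to produce a fix.
    try:
        scope = {}
        exec(code, scope)
        return "OK", scope
    except Exception as e:
        return f"Error: {e}", {}

# The first draft has a bug (wrong variable name); the second is the "fix".
# In AutoGen, the assistant agent would generate the fix from the error report.
drafts = [
    "primes = [n for n in range(2, 100) if all(n % d for d in range(2, n))]\nprint(len(prime))",
    "primes = [n for n in range(2, 100) if all(n % d for d in range(2, n))]\nprint(len(primes))",
]
for code in drafts:
    status, scope = run_and_report(code)
    if status == "OK":
        break
# status is now "OK" and scope["primes"] holds the 25 primes below 100
```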
Here's a quick reference to help you choose:
| Framework | Ease of Use | Flexibility | Multi-Agent | Production Ready | Community |
|---|---|---|---|---|---|
| LangChain | Medium | Very High | Yes (manual) | Yes | Huge |
| CrewAI | Easy | Medium | Yes (native) | Growing | Growing |
| Claude SDK | Easy | Medium | Possible | Yes | Small (Anthropic) |
| AutoGen | Easy | High | Yes | Yes | Large (Microsoft) |
| Build Custom | Hard | Unlimited | Yes | Up to you | N/A |
Sometimes frameworks are overkill. A minimal custom agent might be better:
Go custom when:

- The task is simple (1-3 tools)
- You want minimal dependencies
- Latency is critical
- You're optimizing for cost
- Framework limitations would bite
- You're learning or prototyping

Use a framework when:

- Workflows are complex
- You need a multi-agent system
- Memory or persistence is needed
- Many integrations are required
- You're deploying to production
- A team is collaborating
```python
def minimal_agent(goal, tools, llm):
    """A bare-bones agent loop: ask the LLM, run tools, feed results back."""
    messages = [{"role": "user", "content": goal}]
    for _ in range(10):  # cap iterations to avoid infinite loops
        response = llm.generate(messages)
        if "DONE" in response:
            return response.replace("DONE: ", "")
        tool_name, args = parse(response)  # parse(): your own call parser
        if tool_name in tools:
            result = tools[tool_name](**args)
        else:
            result = "Unknown tool"
        messages.append({"role": "assistant", "content": response})
        messages.append({"role": "user", "content": str(result)})
    return "Max iterations reached"
```
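To see the loop run end to end without a real model, here's a sketch with a scripted stand-in LLM. The `CALL`/`DONE` text protocol and the fake model are ours for illustration; in practice `llm` would wrap a real API call:

```python
import json

def minimal_agent(goal, tools, llm, max_steps=10):
    # Same loop shape as above, with the LLM injected as a callable
    messages = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        response = llm(messages)
        if response.startswith("DONE:"):
            return response[len("DONE:"):].strip()
        # Toy protocol: "CALL <tool_name> <json-args>"
        _, name, raw = response.split(" ", 2)
        args = json.loads(raw)
        result = tools[name](**args) if name in tools else "Unknown tool"
        messages.append({"role": "assistant", "content": response})
        messages.append({"role": "user", "content": str(result)})
    return "Max iterations reached"

# Scripted stand-in for the model: first turn calls a tool, then finishes.
script = iter([
    'CALL add {"a": 15, "b": 27}',
    "DONE: The sum is 42",
])
tools = {"add": lambda a, b: a + b}

answer = minimal_agent("What is 15 + 27?", tools, lambda messages: next(script))
print(answer)  # → The sum is 42
```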
Start simple. Build a minimal agent, validate the approach, then add frameworks as you scale. Don't add complexity upfront. This is the pragmatic path in production.
1. Which framework is specifically built for multi-agent systems with clear role separation?
2. What's the main advantage of building a custom agent instead of using a framework?
3. Which framework is Anthropic's official agent solution?
4. AutoGen is particularly strong for which type of task?
Frameworks save you time and provide battle-tested patterns, but they add complexity and dependencies. LangChain is the most mature and flexible, with the largest ecosystem. CrewAI excels at multi-agent orchestration with clear role separation. Claude Agent SDK is lightweight and optimized for Claude models. AutoGen is best for iterative code generation and agent conversations. Custom agents are the right choice for simple, latency-critical, or learning-focused projects. Choose based on your task complexity, team size, and deployment needs — not hype.
Next up → Topic 15: Building a Simple Agent from Scratch
Now you'll implement a complete research agent in pure Python, no frameworks.