
Build Multi-Agent Systems with LangGraph and Deep Agents: Full Guide 2026

Learn how to build powerful multi-agent systems using LangGraph and Deep Agents. This complete tutorial covers architecture, step-by-step implementation, best practices, and directly runnable code examples for automating complex tasks.



Introduction

Imagine having a team of virtual assistants working together automatically—one agent researches, another writes reports, and another performs reviews. This is no longer just a futuristic concept, but a reality you can build today with LangGraph and Deep Agents.

A multi-agent system is an architecture in which multiple AI agents work together to complete complex tasks. Unlike a single agent that must handle everything alone, multi-agent systems enable specialization, parallelism, and better results for tasks requiring diverse expertise.

LangGraph is an orchestration framework from LangChain that allows you to build stateful, durable agent workflows that can be deployed to production. Meanwhile, Deep Agents is a ready-to-use "agent harness" with various built-in tools like planning, filesystem access, and subagent spawning.

Why is this important in 2026?

  • GitHub Trending: The langchain-ai/deepagents repository has climbed past 14,516 stars, at one point gaining 1,415 stars in a single day
  • Industry Needs: Companies like Klarna, Replit, and Elastic have already adopted LangGraph for production systems
  • Efficiency: Multi-agent systems can reduce development time from weeks to hours for complex tasks

Prerequisites

Before starting, make sure you have:

1. Basic Knowledge

  • Python 3.10+ (experience with async/await will help)
  • Basic understanding of LLMs (Large Language Models) like GPT-4, Claude, or Gemini
  • Basic concepts of graphs and state machines
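
If graphs and state machines are new territory, the core idea fits in a few lines of plain Python: nodes are functions that act on shared state, and edges are the names of the next node they return. This is a hypothetical toy example with no LangGraph involved:

```python
# A minimal state machine: nodes are functions, edges are the names they return.
def draft(state):
    state["revisions"] += 1
    state["text"] = f"draft v{state['revisions']}"
    return "review"  # edge: go to the review step next

def review(state):
    # Conditional edge: loop back to drafting until the revision limit is hit
    return "end" if state["revisions"] >= 2 else "draft"

steps = {"draft": draft, "review": review}
state = {"text": "", "revisions": 0}
current = "draft"
while current != "end":
    current = steps[current](state)

print(state["text"])  # "draft v2" after two trips through the loop
```

LangGraph formalizes exactly this loop: the `steps` dict becomes a `StateGraph`, the returned names become edges, and the shared dict becomes a typed state.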

2. Environment Setup

```bash
# Create virtual environment
python -m venv venv
source venv/bin/activate     # Linux/Mac
# or
.\venv\Scripts\activate      # Windows

# Install main dependencies
pip install -U langgraph langchain langchain-openai deepagents

# Optional: for development and debugging
pip install langsmith jupyter
```

3. API Keys

You need an API key from one of the LLM providers:

```bash
# OpenAI
export OPENAI_API_KEY="sk-your-key-here"

# Or for Google Gemini
export GOOGLE_API_KEY="your-key-here"

# For tracing and debugging (recommended)
export LANGSMITH_TRACING=true
export LANGSMITH_API_KEY="your-langsmith-key"
```
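
Before running any workflow, it can save debugging time to fail fast when a key is missing. A minimal sketch (the helper name `check_api_keys` is ours, not part of any library):

```python
import os

def check_api_keys(required=("OPENAI_API_KEY",), env=os.environ):
    """Return the names of any required API keys missing from the environment."""
    return [name for name in required if not env.get(name)]

# Example: verify keys before building the workflow
missing = check_api_keys(required=("OPENAI_API_KEY", "LANGSMITH_API_KEY"))
if missing:
    print(f"Missing API keys: {', '.join(missing)}")
```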

Core Concepts: Understanding LangGraph Foundations

Before diving into code, let's understand the fundamental concepts that make LangGraph powerful.

1. State - Shared Memory

State is like a "shared notepad" carried by all agents in the system. Each node (step) can read from and write to this state.

```python
from typing import TypedDict, Annotated
from langgraph.graph import add_messages

class AgentState(TypedDict):
    """State shared between all agents in the system."""
    messages: Annotated[list, add_messages]  # Conversation history
    current_task: str        # Current task being worked on
    research_results: str    # Research results
    draft_content: str       # Generated draft
    review_feedback: str     # Feedback from reviewer
    iterations: int          # Counter for tracking
```

Analogy: Think of state like a whiteboard in a meeting room. Each person (agent) can see what's already written and add new information.
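
To make the analogy concrete, here is roughly what a graph runtime does with each node's return value: merge the partial update into the shared state, applying a reducer (like `add_messages`) to annotated keys and overwriting the rest. This is a plain-Python sketch of the idea, not LangGraph's actual internals:

```python
def merge_state(state, update, reducers=None):
    """Merge a node's partial update into the shared state.
    Keys with a registered reducer are combined; others are overwritten."""
    reducers = reducers or {}
    new_state = dict(state)  # never mutate the old state
    for key, value in update.items():
        if key in reducers:
            new_state[key] = reducers[key](state.get(key, []), value)
        else:
            new_state[key] = value
    return new_state

# 'messages' accumulates (like add_messages); 'current_task' is overwritten
reducers = {"messages": lambda old, new: old + new}
state = {"messages": [{"role": "user", "content": "hi"}], "current_task": "t1"}
update = {"messages": [{"role": "assistant", "content": "ok"}], "current_task": "t2"}
state = merge_state(state, update, reducers)
print(len(state["messages"]))  # 2 — the whiteboard keeps growing
```

This is why nodes return only the keys they changed: the runtime handles the merge.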

2. Nodes - Individual Steps

Nodes are Python functions that perform specific work. Each node receives state, does something, and returns updates to the state.

```python
from langgraph.graph import StateGraph, MessagesState

def research_node(state: MessagesState) -> dict:
    """Node for researching a topic."""
    # Extract task from state
    task = state["messages"][-1].content
    # Simulate research (in practice, use actual tools)
    research_result = f"Research completed for: {task}"
    # Return update to state
    return {"messages": [{"role": "assistant", "content": research_result}]}

def write_node(state: MessagesState) -> dict:
    """Node for writing content."""
    research = state["messages"][-1].content
    draft = f"Draft based on: {research}"
    return {"messages": [{"role": "assistant", "content": draft}]}
```

3. Edges - Connections Between Steps

Edges determine the flow of execution. There are two types:

  • Normal edges: Always go to the same next node
  • Conditional edges: Route based on state conditions
```python
from langgraph.graph import END

def should_continue(state: MessagesState) -> str:
    """Conditional routing function."""
    if state.get("iterations", 0) >= 3:
        return "finish"
    return "continue"

# Build graph with edges
graph = StateGraph(MessagesState)
graph.add_node("research", research_node)
graph.add_node("write", write_node)

# Normal edge
graph.add_edge("research", "write")

# Conditional edge
graph.add_conditional_edges(
    "write",
    should_continue,
    {"continue": "research", "finish": END}
)
```

4. Graph - Complete Workflow

A graph is the complete picture of how all nodes and edges connect together.

```
┌─────────────────────────────────────────────────────┐
│                MULTI-AGENT WORKFLOW                 │
├─────────────────────────────────────────────────────┤
│                                                     │
│  ┌──────────┐     ┌──────────┐     ┌──────────┐     │
│  │ RESEARCH │────►│  WRITE   │────►│  REVIEW  │     │
│  │  Agent   │     │  Agent   │     │  Agent   │     │
│  └──────────┘     └──────────┘     └────┬─────┘     │
│        ▲                                │           │
│        │          ┌──────────┐          │           │
│        └──────────│ REVISION │◄─────────┘           │
│                   │ NEEDED?  │                      │
│                   └────┬─────┘                      │
│                        │                            │
│                   ┌────▼─────┐                      │
│                   │   END    │                      │
│                   └──────────┘                      │
└─────────────────────────────────────────────────────┘
```

Step-by-Step: Building Your First Multi-Agent System

Now let's build a real multi-agent system from scratch.

Step 1: Define State and Configuration

First, create a clear state definition:

```python
# state.py
from typing import TypedDict, Annotated
from langgraph.graph import add_messages

class MultiAgentState(TypedDict):
    """State shared between all agents."""
    messages: Annotated[list, add_messages]  # Conversation history
    current_task: str      # Current task
    research_results: str  # Research findings
    draft_content: str     # Written draft
    review_feedback: str   # Reviewer feedback
    iterations: int        # Iteration counter
    final_output: str      # Final result
```

Step 2: Create Individual Agents

Each agent has a specific role. Here are examples:

Researcher Agent:

```python
# agents/researcher.py
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent
from tools.search_tools import web_search, doc_search

def create_researcher():
    """Create a research agent with search tools."""
    model = ChatOpenAI(model="gpt-4o-mini", temperature=0)
    tools = [web_search, doc_search]
    agent = create_react_agent(
        model,
        tools,
        state_modifier="""You are a research agent.
        Your job is to gather information for the task given.
        Use the search tools to find relevant data.
        Summarize your findings clearly.
        Return concise, factual information."""
    )
    return agent
```

Writer Agent:

```python
# agents/writer.py
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

def create_writer():
    """Create a writing agent."""
    model = ChatOpenAI(model="gpt-4o", temperature=0.7)
    agent = create_react_agent(
        model,
        [],  # No tools, just writing
        state_modifier="""You are a professional writer agent.
        Your job is to create polished, engaging content.
        Use the research results provided in the state.
        Write in a clear, professional style.
        Structure content with headings and paragraphs."""
    )
    return agent
```

Reviewer Agent:

```python
# agents/reviewer.py
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

def create_reviewer():
    """Create a review agent."""
    model = ChatOpenAI(model="gpt-4o", temperature=0)
    agent = create_react_agent(
        model,
        [],
        state_modifier="""You are a quality reviewer agent.
        Your job is to review written content for:
        - Accuracy and completeness
        - Grammar and style
        - Structure and flow
        - Factual errors
        Provide specific, actionable feedback.
        Rate content as 'approved' or 'needs_revision'."""
    )
    return agent
```

Step 3: Build the Workflow Graph

Now connect all agents into a workflow:

```python
# graph/workflow.py
from langgraph.graph import StateGraph, START, END
from agents.researcher import create_researcher
from agents.writer import create_writer
from agents.reviewer import create_reviewer
from state import MultiAgentState

def build_workflow():
    """Build the multi-agent workflow graph."""
    # Create agent instances
    researcher = create_researcher()
    writer = create_writer()
    reviewer = create_reviewer()

    # Define node functions
    async def research_node(state: MultiAgentState) -> dict:
        task = state["current_task"]
        result = await researcher.ainvoke(
            {"messages": [{"role": "user", "content": f"Research: {task}"}]}
        )
        return {"research_results": result["messages"][-1].content}

    async def write_node(state: MultiAgentState) -> dict:
        research = state["research_results"]
        result = await writer.ainvoke(
            {"messages": [{"role": "user", "content": f"Write article based on: {research}"}]}
        )
        return {"draft_content": result["messages"][-1].content}

    async def review_node(state: MultiAgentState) -> dict:
        draft = state["draft_content"]
        result = await reviewer.ainvoke(
            {"messages": [{"role": "user", "content": f"Review: {draft}"}]}
        )
        feedback = result["messages"][-1].content
        return {
            "review_feedback": feedback,
            "iterations": state.get("iterations", 0) + 1,
        }

    # Conditional routing
    def should_revise(state: MultiAgentState) -> str:
        if "approved" in state["review_feedback"].lower():
            return "approved"
        if state["iterations"] >= 3:
            return "max_iterations"
        return "revise"

    # Build graph
    graph = StateGraph(MultiAgentState)

    # Add nodes
    graph.add_node("research", research_node)
    graph.add_node("write", write_node)
    graph.add_node("review", review_node)

    # Add edges
    graph.add_edge(START, "research")
    graph.add_edge("research", "write")
    graph.add_edge("write", "review")

    # Conditional edges
    graph.add_conditional_edges(
        "review",
        should_revise,
        {
            "approved": END,
            "max_iterations": END,
            "revise": "write",
        }
    )

    return graph.compile()
```

Step 4: Run the Workflow

```python
# main.py
import asyncio
from graph.workflow import build_workflow

async def main():
    workflow = build_workflow()
    initial_state = {
        "current_task": "Write an article about multi-agent systems",
        "iterations": 0,
    }
    result = await workflow.ainvoke(initial_state)

    print("=== FINAL OUTPUT ===")
    print(result["draft_content"])
    print("\n=== REVIEW FEEDBACK ===")
    print(result["review_feedback"])
    print(f"\nIterations: {result['iterations']}")

if __name__ == "__main__":
    asyncio.run(main())
```

Using Deep Agents: Production-Ready Agent Harness

Deep Agents provides a complete "agent harness" that's ready to use. This significantly simplifies development.

What is Deep Agents?

Deep Agents is an open-source project from LangChain that provides:

  • Built-in tools: Filesystem, web search, code execution
  • Planning capabilities: Automatic task decomposition
  • Subagent spawning: Delegate tasks to specialized subagents
  • Persistence: Checkpoint and resume workflows
  • Human-in-the-loop: Easy integration of human approval

Installation

```bash
pip install deepagents
```

Quick Start

```python
from deepagents import Agent, Tool
from langchain_openai import ChatOpenAI

# Define custom tool
@Tool
def calculate_metrics(data: str) -> str:
    """Calculate metrics from data."""
    # Your calculation logic
    return f"Metrics calculated from: {data}"

# Create agent with tools
agent = Agent(
    model=ChatOpenAI(model="gpt-4o"),
    tools=[calculate_metrics],
    system_prompt="You are a helpful data analysis agent."
)

# Run agent
result = agent.run("Analyze the sales data and calculate metrics")
print(result)
```

Multi-Agent with Deep Agents

```python
from deepagents import Agent, Team
from langchain_openai import ChatOpenAI

# Create specialized agents
researcher = Agent(
    name="researcher",
    model=ChatOpenAI(model="gpt-4o-mini"),
    system_prompt="You research and gather information."
)

writer = Agent(
    name="writer",
    model=ChatOpenAI(model="gpt-4o"),
    system_prompt="You write engaging content."
)

reviewer = Agent(
    name="reviewer",
    model=ChatOpenAI(model="gpt-4o"),
    system_prompt="You review and provide feedback."
)

# Create team
team = Team(
    agents=[researcher, writer, reviewer],
    workflow="sequential"  # or "parallel", "conditional"
)

# Execute (inside an async function)
result = await team.run("Write a report about AI agents")
```

Best Practices for Multi-Agent Systems

1. Clear Role Definition

Each agent should have a single, clear responsibility:

```python
# Good: Specific role
RESEARCHER_PROMPT = """You are a research specialist.
Your ONLY job is to find and summarize information.
Do NOT write content or make decisions."""

# Bad: Vague role
AGENT_PROMPT = """You help with various tasks."""  # Too vague!
```

2. State Immutability

Always return new state, never modify directly:

```python
# Good: Return new state
def research_node(state: State) -> dict:
    result = do_research(state["task"])
    return {"research_data": result}  # New state only

# Bad: Mutate state directly
def research_node(state: State) -> dict:
    state["research_data"] = do_research(state["task"])  # Don't do this!
    return state
```

3. Error Handling

Always handle errors gracefully:

```python
from langgraph.errors import GraphRecursionError

try:
    result = await graph.ainvoke(initial_state)
except GraphRecursionError:
    print("Graph exceeded maximum iterations")
    # Handle gracefully
except Exception as e:
    print(f"Error: {e}")
    # Fallback logic
```

4. Persistence for Long-Running Workflows

For workflows that might take hours:

```python
from langgraph.checkpoint.memory import MemorySaver
from langgraph.checkpoint.postgres import PostgresSaver

# Memory checkpoint (for development)
checkpointer = MemorySaver()

# PostgreSQL checkpoint (for production)
# checkpointer = PostgresSaver.from_conn_string(connection_string)

# Compile the (uncompiled) StateGraph with the checkpointer
app = graph.compile(checkpointer=checkpointer)

# Now you can resume from a checkpoint
result = await app.ainvoke(
    initial_state,
    config={"configurable": {"thread_id": "my-workflow-123"}}
)
```

Common Mistakes to Avoid

❌ Mistake 1: Too Many Agents

Don't create separate agents for every small task. Use 3-5 specialized agents maximum.

Wrong:

```python
agents = [researcher, writer, editor, formatter, publisher, promoter]
```

Correct:

```python
agents = [researcher, writer, reviewer]  # Consolidated roles
```

❌ Mistake 2: Infinite Loops

Always set maximum iterations:

```python
def should_continue(state: State) -> str:
    if state["iterations"] >= MAX_ITERATIONS:
        return "end"  # Prevent infinite loop
    if is_complete(state):
        return "end"
    return "continue"
```

❌ Mistake 3: No Human-in-the-loop

For critical tasks, add human checkpoints:

```python
from langgraph.checkpoint.memory import MemorySaver

# Compile with a checkpointer and interrupt before the critical node
app = graph.compile(
    checkpointer=MemorySaver(),
    interrupt_before=["human_review"],
)

config = {"configurable": {"thread_id": "xxx"}}

# Run until the interrupt, collect human approval, then resume:
# passing None as input continues from the saved checkpoint
result = await app.ainvoke(None, config=config)
```

❌ Mistake 4: Shared Mutable State

Each agent should return new state, not modify existing:

```python
# Wrong
state["data"].append(new_item)

# Correct
return {"data": state["data"] + [new_item]}
```

Advanced Patterns

Parallel Execution

Run multiple agents simultaneously:

```python
import asyncio

async def parallel_workflow(state: State) -> dict:
    # Run research and outline generation in parallel
    research_task = researcher.ainvoke(state)
    outline_task = outliner.ainvoke(state)

    research_result, outline_result = await asyncio.gather(
        research_task, outline_task
    )

    return {
        "research": research_result,
        "outline": outline_result,
    }
```

Hierarchical Agent Systems

Orchestrator with specialized subagents:

```
                ┌─────────────┐
                │ORCHESTRATOR │
                └──────┬──────┘
                       │
         ┌─────────────┼─────────────┐
         ▼             ▼             ▼
   ┌──────────┐  ┌──────────┐  ┌──────────┐
   │ Research │  │ Content  │  │   Code   │
   │   Team   │  │   Team   │  │   Team   │
   └────┬─────┘  └────┬─────┘  └────┬─────┘
        │             │             │
   ┌────┴────┐   ┌────┴────┐   ┌────┴────┐
   │Web      │   │Writer   │   │Frontend │
   │Research │   │Editor   │   │Backend  │
   │Doc      │   │Reviewer │   │DevOps   │
   │Analysis │   │         │   │         │
   └─────────┘   └─────────┘   └─────────┘
```
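
The hierarchy above can be sketched as an orchestrator that routes each task to a team handler. In a real system each handler would be a LangGraph subgraph or a spawned subagent; the team functions here are illustrative stand-ins:

```python
# Hypothetical team handlers standing in for real subagent teams
def research_team(task: str) -> str:
    return f"research notes for: {task}"

def content_team(task: str) -> str:
    return f"article draft for: {task}"

def code_team(task: str) -> str:
    return f"code for: {task}"

TEAMS = {"research": research_team, "content": content_team, "code": code_team}

def orchestrator(task: str, kind: str) -> str:
    """Route the task to the right team; unknown kinds fall back to research."""
    handler = TEAMS.get(kind, research_team)
    return handler(task)

print(orchestrator("AI agents", "content"))  # article draft for: AI agents
```

The orchestrator never does the work itself; it only decides who does, which keeps each team small and testable.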

Production Deployment

Docker Deployment

```dockerfile
# Dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "main.py"]
```

```yaml
# docker-compose.yml
version: '3.8'
services:
  multi-agent:
    build: .
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - LANGSMITH_API_KEY=${LANGSMITH_API_KEY}
    volumes:
      - ./data:/app/data
```

Monitoring with LangSmith

```python
import os

# Enable tracing
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_PROJECT"] = "multi-agent-production"

# All runs will be traced automatically
result = await graph.ainvoke(initial_state)

# View traces at: https://smith.langchain.com
```

Summary & Next Steps

Key Takeaways

  1. LangGraph provides orchestration for complex multi-agent workflows
  2. Deep Agents offers ready-to-use agent harness with tools
  3. State is the shared memory between all agents
  4. Nodes are individual steps, Edges are connections
  5. Persistence enables long-running, resumable workflows

Comparison: Single vs Multi-Agent

| Aspect | Single Agent | Multi-Agent |
|---|---|---|
| Complexity | Simple tasks | Complex workflows |
| Specialization | General purpose | Domain experts |
| Parallelism | Sequential | Parallel possible |
| Debugging | Easy | More complex |
| Use case | Chat, Q&A | Research, Writing, Coding |

Conclusion

Multi-agent systems with LangGraph and Deep Agents open new possibilities for building sophisticated AI applications. By combining specialized agents into orchestrated workflows, you can tackle complex tasks that would be impossible for a single agent.

The key is starting simple: build one agent, then gradually add more as needed. Focus on clear role definitions, proper state management, and robust error handling.

In 2026, multi-agent systems are becoming the standard architecture for production AI applications. With LangGraph and Deep Agents, you have the tools to build them effectively.

Ready to build your first multi-agent system? Start with the simple workflow example above, then expand as you get comfortable with the concepts. 🚀