LangChain & LangGraph

Chains · LCEL · Agents · LangGraph state machines · Multi-agent orchestration


LangChain Core Concepts

LangChain provides composable primitives for building LLM applications: LLMs/ChatModels, Prompts, Output parsers, Retrievers, and Tools. These are composed into chains using LCEL.

LCEL — LangChain Expression Language

LCEL uses the pipe operator (|) to compose runnables. Every LCEL component implements the Runnable interface: .invoke(), .stream(), and .batch(), plus async counterparts (.ainvoke(), .astream(), .abatch()).

from langchain_core.prompts import ChatPromptTemplate
from langchain_anthropic import ChatAnthropic
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_template("Answer this: {question}")
llm    = ChatAnthropic(model="claude-3-sonnet-20240229")
parser = StrOutputParser()

chain = prompt | llm | parser
result = chain.invoke({"question": "What is RAG?"})

Parallel execution with RunnableParallel

from langchain_core.runnables import RunnableParallel

parallel = RunnableParallel(
    summary   = summarise_chain,
    sentiment = sentiment_chain,
    entities  = entity_chain,
)
result = parallel.invoke({"text": document})
# result = {"summary": ..., "sentiment": ..., "entities": ...}

LangGraph — When to use it

LCEL is for linear/parallel pipelines. Use LangGraph when you need:

- Cycles: an agent looping between reasoning and tool execution
- Conditional branching driven by runtime state
- Persistent state with resume/replay (checkpointing)
- Human-in-the-loop pauses (interrupts)
- Multi-agent coordination over shared state

LangGraph Core Primitives

Primitive | Purpose
StateGraph | The graph itself; typed state flows through its nodes
State (TypedDict) | Shared data structure passed between nodes
Node | A Python function: receives state, returns a state update
Edge | Connection between nodes, unconditional or conditional
Checkpointer | Persists state to a DB for resume/replay (MemorySaver, SqliteSaver)
Interrupt | Pauses execution and waits for human input

Building a ReAct Agent in LangGraph

from typing import TypedDict, Annotated
from langgraph.graph import StateGraph, END
from langgraph.prebuilt import ToolNode
import operator

class AgentState(TypedDict):
    messages: Annotated[list, operator.add]

def agent_node(state: AgentState):
    response = llm_with_tools.invoke(state["messages"])
    return {"messages": [response]}

def should_continue(state: AgentState) -> str:
    last = state["messages"][-1]
    if last.tool_calls:
        return "tools"
    return END

graph = StateGraph(AgentState)
graph.add_node("agent", agent_node)
graph.add_node("tools", ToolNode(tools))
graph.set_entry_point("agent")
graph.add_conditional_edges("agent", should_continue)
graph.add_edge("tools", "agent")

app = graph.compile()
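
The Annotated[list, operator.add] reducer is why agent_node returns only the new message: LangGraph merges each node's return value into existing state with the reducer, which here is plain list concatenation:

```python
import operator

existing = ["human: What's the weather?"]
update = ["ai: Let me check."]

# What LangGraph does with each node's return under the operator.add reducer:
merged = operator.add(existing, update)
# merged == ["human: What's the weather?", "ai: Let me check."]
# Updates append rather than overwrite; returning the full message list
# from a node would duplicate every prior message.
```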

LoanIQ: 7-Agent LangGraph Pattern

LoanIQ uses a supervisor pattern — Agent 01 classifies and routes, then independent analysis agents (02, 03, 04) run in parallel via Send() API, results merge into a single state, then policy check (05), scoring (06), and decision (07) run sequentially.

from langgraph.types import Send

def router_node(state: OverallState):
    # Fan out to parallel agents
    return [
        Send("financial_agent", {"case": state["case"]}),
        Send("credit_agent",    {"case": state["case"]}),
        Send("property_agent",  {"case": state["case"]}),
    ]

graph.add_conditional_edges("router", router_node,
    ["financial_agent", "credit_agent", "property_agent"])
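
For the fan-in to work, the shared state needs a reducer on whatever field the parallel agents write to, so concurrent updates merge instead of conflicting. A sketch, with a hypothetical findings field:

```python
import operator
from typing import Annotated, TypedDict

class OverallState(TypedDict):
    case: dict
    # Each parallel agent returns e.g. {"findings": [<its result>]};
    # the operator.add reducer concatenates all three lists on merge.
    findings: Annotated[list, operator.add]
```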

Checkpointing & State Persistence

LangGraph checkpointers save state after every node. This enables:

- Resuming a run after a crash or process restart
- Human-in-the-loop: pause at an interrupt, resume once input arrives
- Replay and time-travel debugging from any saved checkpoint
- Multi-turn threads: state keyed by thread_id survives across invocations

from langgraph.checkpoint.sqlite import SqliteSaver

checkpointer = SqliteSaver.from_conn_string("./state.db")
# Note: in recent langgraph-checkpoint-sqlite releases, from_conn_string is a
# context manager: with SqliteSaver.from_conn_string(...) as checkpointer: ...
app = graph.compile(checkpointer=checkpointer)

config = {"configurable": {"thread_id": "loan-app-12345"}}
result = app.invoke(initial_state, config=config)

Common Interview Questions

Q: LCEL vs LangGraph — when to use which?

LCEL for stateless pipelines: prompt → LLM → parser. LangGraph for stateful workflows with loops, branching, or multi-agent coordination. In practice, LoanIQ uses LCEL inside each agent node (for the prompt → LLM → parser call) and LangGraph to orchestrate between agents.

Q: How does LangGraph handle agent failures?

Wrap node logic in try/except. On exception, write error info to state and route to an error-handling node via conditional edge. The error handler decides: retry with backoff, fall back to a degraded path, or surface the error to the caller. LangSmith captures the full trace including the exception.
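
A sketch of that pattern with hypothetical node, field, and route names (pure functions, so it runs standalone):

```python
def analyse_financials(case: dict) -> dict:
    # Stand-in for real analysis; raises on bad input
    if "income" not in case:
        raise ValueError("missing income")
    return {"dscr": case["income"] / case["debt"]}

def financial_node(state: dict) -> dict:
    try:
        return {"findings": [analyse_financials(state["case"])]}
    except Exception as exc:
        # Write the error into state instead of crashing the graph
        return {"error": f"financial_agent: {exc}"}

def route_after_financial(state: dict) -> str:
    # Conditional edge: divert to the error handler when state carries an error
    return "error_handler" if state.get("error") else "policy_check"

# Wired up with:
# graph.add_conditional_edges("financial_agent", route_after_financial,
#                             ["error_handler", "policy_check"])
```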

Q: What is the difference between a node and a tool?

A node is a graph execution step: it receives state, does work, and returns a state update. A tool is a function an LLM can call via tool use / function calling. Tools typically run inside a node (via ToolNode); in the ReAct graph above, every tool call the LLM emits is executed by the "tools" node.