search_docs tool, streams intermediate graph state, and shows how to reuse the same thread across turns. Both TypeScript and Python servers expose a /kickoff NDJSON stream that CometChat can consume.
What You’ll Build
- A LangGraph state machine with an assistant node and a tool node.
- An in-memory knowledge base plus a `search_docs` tool that returns cited bullets.
- Streaming runs that keep history via the built-in `MemorySaver` checkpointer.
- A starting point you can wrap in HTTP + SSE for CometChat’s Bring Your Own Agent flow.
Prerequisites
- TypeScript: Node.js 18+ (Node 20 recommended); `OPENAI_API_KEY` in `.env` (optional `KNOWLEDGE_OPENAI_MODEL`, default `gpt-4o-mini`).
- Python: Python 3.10+; `OPENAI_API_KEY` in `.env` (optional `MODEL`, default `gpt-4o-mini`).
- CometChat app + AI Agent entry.
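For reference, a minimal `.env` for the TypeScript project might look like this (the Python project uses `MODEL` instead of `KNOWLEDGE_OPENAI_MODEL`; the key value is a placeholder):

```
OPENAI_API_KEY=sk-your-key-here
# Optional; falls back to gpt-4o-mini when unset
KNOWLEDGE_OPENAI_MODEL=gpt-4o-mini
```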
Quick links
- Repo root: ai-agent-lang-graph-examples
- TypeScript project: typescript/langgraph-knowledge-agent (`src/graph.ts`, `src/server.ts`, `.env.example`)
- Python project: python/langgraph_knowledge_agent (`agent.py`, `server.py`, `.env`)
How it works
- Graph — `StateGraph(MessagesAnnotation)` adds `assistant` (ChatOpenAI bound to tools) and `tools` (executes tool calls) nodes, with conditional edges that loop until no more tool calls are requested.
- Tooling — `search_docs` (in `src/graph.ts`) calls `searchDocs` over a small mock corpus (`src/data/corpus.ts`) and formats matches as markdown bullets for citations.
- State — `MemorySaver` checkpoints are keyed by `configurable.thread_id`, so multiple turns share context (see the sketch after this list).
- Streaming — `app.stream(..., { streamMode: "values" })` yields incremental states; each message printed in `src/index.ts` shows the graph progressing through tool calls and replies.
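A minimal sketch of the state and streaming pieces, assuming `buildKnowledgeGraph` from `src/graph.ts` returns the compiled app (the prompt and logging are illustrative):

```ts
import { HumanMessage } from "@langchain/core/messages";
import { buildKnowledgeGraph } from "./graph";

const app = buildKnowledgeGraph();
// MemorySaver checkpoints are keyed by this thread_id, so both turns share context.
const config = { configurable: { thread_id: "demo-thread" } };

// Turn 1: streamMode "values" yields the full graph state after each step.
const stream = await app.stream(
  { messages: [new HumanMessage("How do I stream intermediate results?")] },
  { ...config, streamMode: "values" }
);
for await (const state of stream) {
  const last = state.messages[state.messages.length - 1];
  console.log(`[${last._getType()}]`, last.content);
}

// Turn 2 reuses the same thread_id, so the assistant sees turn 1's history.
await app.invoke(
  { messages: [new HumanMessage("Summarize what you just told me.")] },
  config
);
```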
Setup (TypeScript)
1. Install: `cd typescript/langgraph-knowledge-agent && npm install`
2. Env: copy `../.env.example` to `.env`; set `OPENAI_API_KEY` (optional `KNOWLEDGE_OPENAI_MODEL`).
3. Run demo: `npm run demo` — “How do I stream intermediate results?” (streams tool calls + replies to stdout).
4. Run server: `npm run server` → POST /kickoff on http://localhost:3000.
Setup (Python)
1. Install: `cd python && python -m venv .venv && source .venv/bin/activate && pip install -r requirements.txt`
2. Env: create `.env` with `OPENAI_API_KEY` (optional `MODEL`).
3. Run server: `python -m langgraph_knowledge_agent.server` → POST /kickoff on http://localhost:8000.
Project structure
- TypeScript: Graph `src/graph.ts`, Demo `src/index.ts`, Server `src/server.ts`, Data `src/data`, Config `.env.example` + `package.json`
- Python: Graph `agent.py`, Server `server.py`, Data `data/`, Config `.env` + `requirements.txt`
Step 1 - Inspect the LangGraph
`buildKnowledgeGraph` (in `src/graph.ts`) binds `search_docs` to ChatOpenAI, routes every assistant reply through `shouldCallTools`, and executes tool calls via `runTools`. The model runs with temperature 0 and defaults to `gpt-4o-mini` unless you override `KNOWLEDGE_OPENAI_MODEL`.
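A condensed sketch of that wiring (node and helper names follow the description above; the tool body is a placeholder, and the prebuilt `ToolNode` stands in for the example’s custom `runTools`):

```ts
import { StateGraph, MessagesAnnotation, MemorySaver, START, END } from "@langchain/langgraph";
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { ChatOpenAI } from "@langchain/openai";
import { AIMessage } from "@langchain/core/messages";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// Placeholder search_docs tool; the real one searches src/data/corpus.ts.
const searchDocs = tool(
  async ({ query }) => `- Result for "${query}" (placeholder citation)`,
  {
    name: "search_docs",
    description: "Search the knowledge base and return cited bullets.",
    schema: z.object({ query: z.string() }),
  }
);

const model = new ChatOpenAI({
  model: process.env.KNOWLEDGE_OPENAI_MODEL ?? "gpt-4o-mini",
  temperature: 0,
}).bindTools([searchDocs]);

// Loop back to the tools node while the last reply requests tool calls.
function shouldCallTools(state: typeof MessagesAnnotation.State) {
  const last = state.messages[state.messages.length - 1] as AIMessage;
  return last.tool_calls?.length ? "tools" : END;
}

export function buildKnowledgeGraph() {
  return new StateGraph(MessagesAnnotation)
    .addNode("assistant", async (state) => ({
      messages: [await model.invoke(state.messages)],
    }))
    .addNode("tools", new ToolNode([searchDocs]))
    .addEdge(START, "assistant")
    .addConditionalEdges("assistant", shouldCallTools)
    .addEdge("tools", "assistant")
    .compile({ checkpointer: new MemorySaver() });
}
```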
Streaming API (HTTP)
Event order (both TypeScript and Python servers): `text_start` → `text_delta` chunks → `tool_call_start` → `tool_call_args` → `tool_call_end` → `tool_result` → `text_end` → `done` (`error` on failure). Each event includes `message_id`; echo `thread_id`/`run_id` from the client if you want threading.
Example requests:
`messages` must be non-empty; invalid payloads return 400.
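A minimal TypeScript client sketch, using the endpoint and event names above; the exact request fields beyond `messages` and the delta payload field are assumptions, so adjust to your server’s schema:

```ts
// Kick off a run and consume the NDJSON stream line by line.
const res = await fetch("http://localhost:3000/kickoff", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    messages: [{ role: "user", content: "How do I stream intermediate results?" }],
    thread_id: "demo-thread", // optional; echoed back for threading
  }),
});
if (!res.ok) throw new Error(`kickoff failed: ${res.status}`); // e.g. 400 on empty messages

const reader = res.body!.getReader();
const decoder = new TextDecoder();
let buffer = "";
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  buffer += decoder.decode(value, { stream: true });
  let newline: number;
  while ((newline = buffer.indexOf("\n")) >= 0) {
    const line = buffer.slice(0, newline).trim();
    buffer = buffer.slice(newline + 1);
    if (!line) continue;
    const event = JSON.parse(line);
    if (event.type === "text_delta") process.stdout.write(event.text ?? ""); // field name assumed
    if (event.type === "done" || event.type === "error") console.log("\n--", event.type);
  }
}
```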
Adapt for CometChat
- Point CometChat BYO Agent at your public `/kickoff` endpoint (TypeScript or Python).
- Parse the NDJSON events; render `text_delta` streaming, show `tool_call_*` steps if desired, and stop on `text_end`/`done`.
- Keep `OPENAI_API_KEY` (and any model overrides) server-side; add auth headers on the route if needed.
- Swap the mock `search_docs` implementation with your own retrieval layer while keeping the same tool signature (see the sketch below).
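For that last point, a hedged sketch of swapping in your own retrieval while keeping the `search_docs` signature (`myRetriever` and its result shape are hypothetical stand-ins for your search backend):

```ts
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// Hypothetical retrieval client; swap in your own implementation.
interface Retriever {
  search(query: string, opts: { topK: number }): Promise<{ title: string; snippet: string; url: string }[]>;
}
declare const myRetriever: Retriever;

// Same name and schema as the mock tool, so the graph wiring is untouched.
const searchDocs = tool(
  async ({ query }) => {
    // Replace with your retrieval layer (vector store, search API, ...).
    const hits = await myRetriever.search(query, { topK: 3 });
    return hits
      .map((hit) => `- ${hit.title}: ${hit.snippet} (source: ${hit.url})`)
      .join("\n");
  },
  {
    name: "search_docs",
    description: "Search the knowledge base and return cited bullets.",
    schema: z.object({ query: z.string() }),
  }
);
```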