Overview
LangChain provides a powerful framework for building AI agents that can access GDELT data through the MCP server. This guide shows how to load the GDELT MCP tools and system prompt into a LangChain agent using the official langchain-mcp-adapters package, which bridges MCP servers and LangChain's tool interface.
Installation
```bash
pip install langchain-mcp-adapters langchain-openai langchain
```
Complete Example
```python
import asyncio
import os

from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_openai import ChatOpenAI
from langchain.agents import create_agent


async def main():
    api_key = os.getenv("GDELT_API_KEY")
    if not api_key:
        raise ValueError("GDELT_API_KEY not set")

    # Initialize MCP client
    mcp_client = MultiServerMCPClient({
        "gdelt-cloud": {
            "transport": "streamable_http",
            "url": "https://gdelt-cloud-mcp.fastmcp.app/mcp",
            "headers": {
                "Authorization": f"Bearer {api_key}"
            }
        }
    })

    # Fetch tools (4 API tools + 3 code resources)
    gdelt_tools = await mcp_client.get_tools(server_name="gdelt-cloud")
    print(f"Loaded {len(gdelt_tools)} GDELT tools")

    # Fetch the system prompt
    gdelt_core_prompt = ""
    try:
        messages = await mcp_client.get_prompt(
            "gdelt-cloud", "gdelt_system_prompt"
        )
        if messages:
            gdelt_core_prompt = "\n\n".join(
                msg.content for msg in messages
            )
    except Exception as e:
        print(f"Warning: Could not fetch prompt: {e}")

    # Build system prompt
    system_prompt = f"""You are a GDELT research assistant.

{gdelt_core_prompt}

Additional instructions:
- Always cite sources using article URLs
- Start with detail=summary for discovery calls
- Explain your filter choices
"""

    # Create agent
    llm = ChatOpenAI(model="gpt-4", temperature=0)
    agent = create_agent(
        model=llm,
        tools=gdelt_tools,
        system_prompt=system_prompt
    )

    # Run a query
    result = await agent.ainvoke({
        "messages": [{
            "role": "user",
            "content": "What are the top protest stories this week?"
        }]
    })
    # Messages in the result are message objects, so read .content
    print(result["messages"][-1].content)


if __name__ == "__main__":
    asyncio.run(main())
```
Key Components
1. MCP Client Configuration
```python
import os

from langchain_mcp_adapters.client import MultiServerMCPClient

mcp_client = MultiServerMCPClient({
    "gdelt-cloud": {
        "transport": "streamable_http",
        "url": "https://gdelt-cloud-mcp.fastmcp.app/mcp",
        "headers": {
            "Authorization": f"Bearer {os.environ['GDELT_API_KEY']}"
        }
    }
})
```
Never hardcode API keys. Always use environment variables or secure secret management.
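As a minimal sketch of that advice, a small helper (hypothetical, not part of the adapters package) fails fast when the key is missing instead of sending an empty Authorization header:

```python
import os


def require_api_key(var: str = "GDELT_API_KEY") -> str:
    """Return the API key from the environment, failing fast if unset."""
    value = os.getenv(var)
    if not value:
        raise RuntimeError(
            f"{var} is not set; export it before starting the agent"
        )
    return value
```

You would then build the header as `{"Authorization": f"Bearer {require_api_key()}"}`.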
2. Fetch Tools and System Prompt
```python
# Get GDELT tools
gdelt_tools = await mcp_client.get_tools(server_name="gdelt-cloud")

# Get system prompt
messages = await mcp_client.get_prompt(
    "gdelt-cloud", "gdelt_system_prompt"
)
gdelt_core_prompt = "\n\n".join(msg.content for msg in messages)
```
The GDELT MCP server provides a system prompt with guidance on the data model, available tools, response size control (detail, limit, offset), and recommended workflows. Always include it in your agent.
3. Create and Run Agent
```python
from langchain_openai import ChatOpenAI
from langchain.agents import create_agent

llm = ChatOpenAI(model="gpt-4", temperature=0)
agent = create_agent(
    model=llm,
    tools=gdelt_tools,
    system_prompt=system_prompt
)

result = await agent.ainvoke({
    "messages": [{
        "role": "user",
        "content": "Find conflict events in the Middle East today"
    }]
})
```
The GDELT MCP server provides four API tools:
| Tool | Description |
|---|---|
| get_media_events | Discover top news stories — filter by category, country, event type, location |
| get_media_event_cluster | Deep-dive into a single story — articles, entities, metrics |
| get_entity | GEG entity profile — linked stories, co-occurrences, timeline |
| get_domain | News domain profile — stats, top entities, recent articles |
And three CAMEO code reference resources:
| Resource | Description |
|---|---|
| cameo-country-codes | ISO-3 country codes for actor filtering |
| cameo-event-codes | Event type taxonomy (01–20 root codes) |
| goldstein-scale | Event intensity scale (-10 to +10) |
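The Goldstein scale scores each event from -10 (most conflictual) to +10 (most cooperative). When post-processing tool output, a coarse bucketing helper can make scores readable; the cutoffs and labels below are illustrative choices, not part of the GDELT taxonomy:

```python
def goldstein_bucket(score: float) -> str:
    """Map a Goldstein score (-10..+10) to a coarse, human-readable label."""
    if not -10.0 <= score <= 10.0:
        raise ValueError(f"Goldstein scores run -10 to +10, got {score}")
    if score <= -5.0:
        return "severe conflict"
    if score < 0.0:
        return "mild conflict"
    if score < 5.0:
        return "mild cooperation"
    return "strong cooperation"
```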
Recommended Workflow
The system prompt guides agents through a scan → zoom → enrich pattern:
```python
# 1. SCAN — Discover what's happening
result = await agent.ainvoke({
    "messages": [{"role": "user", "content":
        "What are the top conflict stories today?"}]
})
# Agent calls: get_media_events(days=1, category="conflict_security",
#              detail="summary", limit=10)

# 2. ZOOM — Deep-dive into a story
result = await agent.ainvoke({
    "messages": [{"role": "user", "content":
        "Tell me more about that Iran story"}]
})
# Agent calls: get_media_event_cluster(cluster_id="abc123",
#              detail="standard")

# 3. ENRICH — Background on key players
result = await agent.ainvoke({
    "messages": [{"role": "user", "content":
        "What else has Iran been in the news for this week?"}]
})
# Agent calls: get_entity(canonical_name="iran", type="organization",
#              days=7, detail="standard")
```
Reference Documentation
Next Steps