This project implements a Model Context Protocol (MCP) server that enables AI assistants to access up-to-date documentation for various Python libraries, including LangChain, LlamaIndex, and OpenAI. By leveraging this server, AI assistants can dynamically fetch relevant information from official documentation sources, ensuring that AI applications always work from the latest official docs.
The Model Context Protocol is an open standard that enables developers to build secure, two-way connections between their data sources and AI-powered tools. The architecture is straightforward: developers can either expose their data through MCP servers or build AI applications (MCP clients) that connect to these servers.
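In code, the server side of that picture is small. Below is a minimal sketch of an MCP server built with the official Python SDK's `FastMCP` helper; the server name and tool body are illustrative assumptions, not this project's exact internals:

```python
# Minimal MCP server sketch using the official Python SDK (`mcp` package).
# The server name and tool body are illustrative, not this project's code.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("docret")

@mcp.tool()
async def get_docs(query: str, library: str) -> str:
    """Search a supported library's documentation for a query."""
    return f"docs for {query!r} in {library}"  # real logic fetches and parses pages

if __name__ == "__main__":
    mcp.run(transport="stdio")  # MCP clients talk to the server over stdio
```

Key features: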
- Dynamic Documentation Retrieval: Fetches the latest documentation content for specified Python libraries.
- Asynchronous Web Searches: Utilizes the SERPER API to perform efficient web searches within targeted documentation sites.
- HTML Parsing: Employs BeautifulSoup to extract readable text from HTML content (the search-and-scrape flow is sketched after this list).
- Extensible Design: Easily add support for additional libraries by updating the configuration.
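As a rough illustration of how the first three features fit together, here is a hedged sketch of a site-restricted Serper search followed by BeautifulSoup text extraction. The endpoint and payload follow Serper's documented JSON API; the helper names, the use of `httpx`, and the `num` result count are illustrative assumptions, not the project's actual internals:

```python
# Hedged sketch: site-restricted Serper search + BeautifulSoup extraction.
# Assumes httpx for async HTTP; helper names are illustrative.
import os

import httpx
from bs4 import BeautifulSoup

SERPER_URL = "https://google.serper.dev/search"

async def search_docs(query: str, site: str) -> list[str]:
    """Return result links for a query restricted to one documentation site."""
    async with httpx.AsyncClient() as client:
        resp = await client.post(
            SERPER_URL,
            headers={"X-API-KEY": os.environ["SERPER_API_KEY"]},
            json={"q": f"site:{site} {query}", "num": 2},
        )
        resp.raise_for_status()
    return [r["link"] for r in resp.json().get("organic", [])]

async def fetch_text(url: str) -> str:
    """Download a page and strip it down to readable text."""
    async with httpx.AsyncClient(follow_redirects=True) as client:
        resp = await client.get(url, timeout=30.0)
    return BeautifulSoup(resp.text, "html.parser").get_text(" ", strip=True)
```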
- Python 3.8 or higher
- uv for Python package management (or pip if you're a pleb)
- A Serper API key (for Google searches, i.e. "SERPs")
- Claude Desktop or Claude Code (for testing)
Clone the repository:

```bash
git clone https://github.com/Sreedeep-SS/docret-mcp-server.git
cd docret-mcp-server
```
Create and activate a virtual environment.

On macOS/Linux:

```bash
python3 -m venv env
source env/bin/activate
```

On Windows:

```bash
python -m venv env
.\env\Scripts\activate
```
With the virtual environment activated, install the required dependencies:

```bash
pip install -r requirements.txt
```

Or, if you are using uv:

```bash
uv sync
```
Before running the application, configure the required environment variables. This project uses the SERPER API for searching documentation and requires an API key.
- Create a `.env` file in the root directory of the project.
- Add the following environment variable:

```
SERPER_API_KEY=your_serper_api_key_here
```

Replace `your_serper_api_key_here` with your actual API key.
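For reference, here is a minimal sketch of how the key can be picked up at startup, assuming python-dotenv is used (the project's actual loading code may differ):

```python
# Hedged sketch: load SERPER_API_KEY from .env, assuming python-dotenv.
import os

from dotenv import load_dotenv

load_dotenv()  # reads key=value pairs from .env into the environment
SERPER_API_KEY = os.getenv("SERPER_API_KEY")
if not SERPER_API_KEY:
    raise RuntimeError("SERPER_API_KEY is not set; add it to .env")
```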
Once the dependencies are installed and environment variables are set up, you can start the MCP server.
```bash
python main.py
```
This will launch the server and make it ready to handle requests.
The MCP server provides an API to fetch documentation content from supported libraries. It works by querying the SERPER API for relevant documentation links and scraping the page content.
To search for documentation on a specific topic within a library, use the `get_docs` function. This function takes two parameters:

- `query`: The topic to search for (e.g., "Chroma DB")
- `library`: The name of the library (e.g., "langchain")
Example usage:
```python
import asyncio

from main import get_docs

# get_docs is async, so run it inside an event loop.
result = asyncio.run(get_docs("memory management", "openai"))
print(result)
```
This will return the extracted text from the relevant OpenAI documentation pages.
You can integrate this MCP server with AI assistants like Claude or custom-built AI models. To configure the assistant to interact with the server, use the following configuration:
```json
{
  "servers": [
    {
      "name": "Documentation Retrieval Server",
      "command": "python /path/to/main.py"
    }
  ]
}
```
Ensure that the correct path to `main.py` is specified.
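If the client is Claude Desktop specifically, its `claude_desktop_config.json` uses an `mcpServers` map rather than the generic format above. A hedged sketch, with the server name and path as placeholders:

```json
{
  "mcpServers": {
    "docret": {
      "command": "python",
      "args": ["/path/to/main.py"]
    }
  }
}
```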
The server currently supports the following libraries:
- LangChain
- LlamaIndex
- OpenAI
To add support for additional libraries, update the `docs_urls` dictionary in `main.py` with the library name and its documentation URL:
```python
docs_urls = {
    "langchain": "python.langchain.com/docs",
    "llama-index": "docs.llamaindex.ai/en/stable",
    "openai": "platform.openai.com/docs",
    "new-library": "new-library-docs-url.com",
}
```
📌 Roadmap
This is really exciting for me, and I'm looking forward to building more on it while staying up to date with the latest news and ideas that can be implemented.
This is what I have on my mind:
- Expand library support
  - Expand the `docs_urls` dictionary with additional libraries.
  - Modify the `get_docs` function to handle different formats of documentation pages.
  - Use regex-based or AI-powered parsing to better extract meaningful content.
  - Provide an API endpoint to dynamically add new libraries.
- Caching for faster responses
  - Use Redis or an in-memory caching mechanism like `functools.lru_cache` (see the sketch after this list).
  - Implement time-based cache invalidation.
  - Cache results per library and per search term.
- AI-powered summarization
  - Use GPT-4, BART, or T5 for summarizing scraped documentation.
  - There are also Claude 3 Haiku, Gemini 1.5 Pro, GPT-4-mini, Open-mistral-nemo, Hugging Face models, and many more that can be used, all of which are subject to debate.
  - Let users choose between raw documentation text and a summarized version.
- REST API and frontend
  - Use FastAPI to expose API endpoints. (Just because)
  - Build a simple frontend dashboard for API interaction. (Why not?)
- Testing and CI/CD
  - Use pytest and unittest for API and scraping reliability tests. (Last thing we want is this thing turning into a nuclear bomb)
  - Implement CI/CD workflows to automatically run tests on every push. (The bread and butter, of course)
- More integrations
  - Database integrations
  - Google Docs/Sheets/Drive integration
  - File system operations
  - Git integration
  - Integrating communication platforms to convert ideas into product
  - Docker and Kubernetes management
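On the caching idea above, here's a hedged sketch of per-library, per-term caching with time-based invalidation. Since `get_docs` is async, `functools.lru_cache` doesn't apply directly, so this keeps a small TTL dict instead; the wrapper name and the one-hour TTL are illustrative:

```python
# Hedged sketch: TTL cache keyed by (library, query). Illustrative only.
import time

from main import get_docs  # the server's existing search function

_CACHE: dict[tuple[str, str], tuple[float, str]] = {}
TTL_SECONDS = 3600  # invalidate entries after one hour

async def cached_get_docs(query: str, library: str) -> str:
    key = (library, query.strip().lower())
    hit = _CACHE.get(key)
    if hit and time.monotonic() - hit[0] < TTL_SECONDS:
        return hit[1]  # fresh cache hit
    text = await get_docs(query, library)
    _CACHE[key] = (time.monotonic(), text)
    return text
```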
For more details on MCP servers and their implementation, refer to these guides:

- Introducing the Model Context Protocol
- https://modelcontextprotocol.io/
- Adding MCP to your Python project
This project is licensed under the MIT License. See the LICENSE file for more details.