Support for Agentic Retrieval in LangChain (Inspired by Azure AI Search) #31947
kevaldekivadiya2415 announced in Ideas
Feature request
Requesting support for an Agentic Retrieval pipeline in LangChain, similar to the new agentic retrieval capability in Azure AI Search. This feature leverages an LLM to decompose complex user queries into multiple parallel subqueries using chat history and context, and combines the results in a structured, ranked response optimized for RAG applications.
Motivation
LangChain already excels in building composable chains and retrieval-augmented generation (RAG) pipelines. However, it currently lacks native support for automatic query decomposition, parallel subquery execution, and structured response formats driven by LLMs — key features now offered by Azure AI Search.
This capability would significantly improve recall and answer quality in multi-turn conversational AI, copilots, and agent-based applications.
Proposal (If applicable)
Proposed Features
- AgenticRetriever component: accepts a complex user query plus optional chat history, and uses an LLM to decompose it into multiple parallel subqueries.
- Subquery Executor: runs the generated subqueries in parallel against the underlying retriever.
- Semantic Merger & Ranker: combines and deduplicates the subquery results into a single ranked list.
- Structured Output: the final response includes the merged, ranked results in a structured format optimized for RAG.
- Integration with LangGraph for orchestrating the agentic workflow with observability.
Benefits
Implementation Ideas
A new AgenticRetriever(...) class that wraps existing retrievers (e.g., FAISS, Elastic, AstraDB). Example API design:
Would love to collaborate or contribute on this if it aligns with LangChain’s roadmap. Let me know what you think!
—
Keval Dekivadiya
Maintainer of FastAPI GenAI Boilerplate