The platform for reliable agents.
LangChain is a framework for building agents and LLM-powered applications. It helps you chain together interoperable components and third-party integrations to simplify AI application development, all while future-proofing decisions as the underlying technology evolves.
```bash
pip install langchain
```

If you're looking for more advanced customization or agent orchestration, check out LangGraph, our framework for building controllable agent workflows.
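A minimal hello-world sketch, assuming you have also installed the `langchain-openai` integration package and set an `OPENAI_API_KEY` environment variable (any other supported chat model provider works the same way):

```python
# Minimal sketch: initialize a chat model through LangChain's standard interface.
# Assumes `pip install langchain langchain-openai` and OPENAI_API_KEY is set;
# swap the model string and provider for any other supported integration.
from langchain.chat_models import init_chat_model

model = init_chat_model("gpt-4o-mini", model_provider="openai")
response = model.invoke("Explain what LangChain is in one sentence.")
print(response.content)
```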
Documentation:
- docs.langchain.com: Comprehensive documentation, including conceptual overviews and guides
- reference.langchain.com/python: API reference docs for LangChain packages
Discussions: Visit the LangChain Forum to connect with the community and share all of your technical questions, ideas, and feedback.
Note
Looking for the JS/TS library? Check out LangChain.js.
LangChain helps developers build applications powered by LLMs through a standard interface for models, embeddings, vector stores, and more.
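The same standard interface extends to embeddings and vector stores. The sketch below is illustrative only; it assumes the `langchain-openai` package for embeddings and uses the in-memory vector store bundled with `langchain-core`:

```python
# Illustrative sketch of the shared embeddings / vector store interfaces.
# Assumes `pip install langchain-openai` and an OPENAI_API_KEY; any embeddings
# integration can be dropped in without changing the vector store code.
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import OpenAIEmbeddings

store = InMemoryVectorStore(embedding=OpenAIEmbeddings())
store.add_texts([
    "LangChain provides a standard interface for models and vector stores.",
    "LangGraph is a low-level framework for agent orchestration.",
])

# Retrieve the most relevant document for a query.
docs = store.similarity_search("What is LangGraph?", k=1)
print(docs[0].page_content)
```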
Use LangChain for:
- Real-time data augmentation. Easily connect LLMs to diverse data sources and external/internal systems, drawing from LangChain's vast library of integrations with model providers, tools, vector stores, retrievers, and more.
- Model interoperability. Swap models in and out as your engineering team experiments to find the best choice for your application's needs. As the industry frontier evolves, adapt quickly: LangChain's abstractions keep you moving without losing momentum (see the sketch after this list).
- Rapid prototyping. Quickly build and iterate on LLM applications with LangChain's modular, component-based architecture. Test different approaches and workflows without rebuilding from scratch, accelerating your development cycle.
- Production-ready features. Deploy reliable applications with built-in support for monitoring, evaluation, and debugging through integrations like LangSmith. Scale with confidence using battle-tested patterns and best practices.
- Vibrant community and ecosystem. Leverage a rich ecosystem of integrations, templates, and community-contributed components. Benefit from continuous improvements and stay up-to-date with the latest AI developments through an active open-source community.
- Flexible abstraction layers. Work at the level of abstraction that suits your needs, from high-level chains for quick starts to low-level components for fine-grained control. LangChain grows with your application's complexity.
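To illustrate the modularity and model interoperability described above, here is a rough sketch: components compose with the `|` operator, and swapping the underlying model is a one-line change. The model names and providers are placeholders; use whichever integrations you have installed.

```python
# Sketch: a small prompt -> model -> parser pipeline built from modular components.
# The model strings below are placeholders; any supported provider can be swapped in.
from langchain.chat_models import init_chat_model
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("Write a one-line summary of {topic}.")
parser = StrOutputParser()

# Prototype against one provider...
chain = prompt | init_chat_model("gpt-4o-mini", model_provider="openai") | parser
print(chain.invoke({"topic": "vector stores"}))

# ...then swap the model without touching the rest of the pipeline.
chain = prompt | init_chat_model("claude-3-5-sonnet-latest", model_provider="anthropic") | parser
print(chain.invoke({"topic": "vector stores"}))
```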
While the LangChain framework can be used standalone, it also integrates seamlessly with any LangChain product, giving developers a full suite of tools when building LLM applications.
To improve your LLM application development, pair LangChain with:
- LangGraph: Build agents that can reliably handle complex tasks with LangGraph, our low-level agent orchestration framework. LangGraph offers customizable architecture, long-term memory, and human-in-the-loop workflows, and is trusted in production by companies like LinkedIn, Uber, Klarna, and GitLab. (A minimal agent sketch follows this list.)
- Integrations: List of LangChain integrations, including chat & embedding models, tools & toolkits, and more
- LangSmith: Helpful for agent evals and observability. Debug poor-performing LLM app runs, evaluate agent trajectories, gain visibility into production, and improve performance over time.
- LangSmith Deployment: Deploy and scale agents effortlessly with a purpose-built deployment platform for long-running, stateful workflows. Discover, reuse, configure, and share agents across teams, and iterate quickly with visual prototyping in LangSmith Studio.
- Deep Agents (new!): Build agents that can plan, use subagents, and leverage file systems for complex tasks
- API Reference: Detailed reference on navigating base packages and integrations for LangChain.
- Contributing Guide: Learn how to contribute to LangChain projects and find good first issues.
- Code of Conduct: Our community guidelines and standards for participation.
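For a taste of the agent side, below is a minimal, non-authoritative sketch using LangGraph's prebuilt ReAct-style agent. The tool is a toy placeholder and the model choice is an assumption; consult the LangGraph docs for current APIs and options.

```python
# Minimal agent sketch using LangGraph's prebuilt ReAct-style agent.
# Assumes `pip install langgraph langchain-openai`; the tool below is a toy placeholder.
from langchain.chat_models import init_chat_model
from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent


@tool
def get_word_length(word: str) -> int:
    """Return the number of characters in a word."""
    return len(word)


model = init_chat_model("gpt-4o-mini", model_provider="openai")
agent = create_react_agent(model, tools=[get_word_length])

result = agent.invoke({"messages": [{"role": "user", "content": "How long is 'LangChain'?"}]})
print(result["messages"][-1].content)
```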