AdalFlow logo

⚑ AdalFlow is a PyTorch-like library for building and auto-optimizing any LM workflow, from chatbots and RAG to agents. ⚑

Try Quickstart in Colab

Why AdalFlow

  1. 100% open-source agents SDK: lightweight, and requires no additional API to set up human-in-the-loop and tracing functionality.
  2. Say goodbye to manual prompting: AdalFlow provides a unified auto-differentiative framework for both zero-shot and few-shot prompt optimization. Our research, LLM-AutoDiff and Learn-to-Reason Few-shot In-Context Learning, achieves the highest accuracy among auto-prompt optimization libraries.
  3. Switch your LLM app to any model via a config: AdalFlow provides model-agnostic building blocks for LLM task pipelines, from RAG and agents to classical NLP tasks (see the sketch below).
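
Point 3 in practice: the model choice lives entirely in the model_client and model_kwargs arguments, so switching providers is a config edit rather than a code change. Here is a minimal sketch using the same Agent API shown in the Quick Start below; the Anthropic client path and model name in the commented-out lines are illustrative assumptions, not a verified import:

from adalflow import Agent
from adalflow.components.model_client.openai_client import OpenAIClient

# The model choice is plain data, so it can be loaded from a YAML/JSON config.
openai_config = {"model_client": OpenAIClient(), "model_kwargs": {"model": "gpt-4o"}}
agent = Agent(name="Assistant", **openai_config)

# Same pipeline on another provider: only the client and kwargs change.
# (Assumed client/model names; uncomment once the Anthropic extra is installed.)
# from adalflow.components.model_client.anthropic_client import AnthropicAPIClient
# claude_config = {"model_client": AnthropicAPIClient(), "model_kwargs": {"model": "claude-3-5-sonnet-20241022"}}
# agent = Agent(name="Assistant", **claude_config)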

AdalFlow Optimized Prompt

AdalFlow MLflow Integration

View Documentation

Quick Start

Install AdalFlow with pip:

pip install adalflow

Hello World Agent Example

from adalflow import Agent, Runner
from adalflow.components.model_client.openai_client import OpenAIClient

# Create a simple agent
agent = Agent(
    name="Assistant",
    model_client=OpenAIClient(),
    model_kwargs={"model": "gpt-4o", "temperature": 0.3}
)

runner = Runner(agent=agent)

result = runner.call(prompt_kwargs={"input_str": "Write a haiku about AI and coding"})
print(result.answer)

# Output:
# Code flows like water,
# AI minds think in patterns,
# Logic blooms in bytes.

Set your OPENAI_API_KEY environment variable to run this example.
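
For example, from Python itself (a stdlib-only sketch; exporting the variable in your shell or loading it from a .env file works equally well):

import os

# Set the key before creating OpenAIClient; the value below is a placeholder.
os.environ["OPENAI_API_KEY"] = "sk-..."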

Agent with Tools

def calculator(expression: str) -> str:
    """Evaluate a mathematical expression."""
    try:
        # Note: eval is fine for a demo, but never call it on untrusted input
        result = eval(expression)
        return f"Result: {result}"
    except Exception as e:
        return f"Error: {e}"

# Create agent with tools
agent = Agent(
    name="CalculatorAgent",
    tools=[calculator],
    model_client=OpenAIClient(),
    model_kwargs={"model": "gpt-4o", "temperature": 0.3}
)

runner = Runner(agent=agent)

result = runner.call(prompt_kwargs={"input_str": "Calculate 15 * 7 + 23"})
print(result.answer)

# Output: The result of 15 * 7 + 23 is 128.
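
Tools are plain Python functions, and an agent can carry several of them, with the model deciding which one to call for each request. A hedged sketch reusing the calculator above, plus a second, hypothetical tool:

def word_count(text: str) -> str:
    """Count the words in a piece of text."""
    return f"Word count: {len(text.split())}"

agent = Agent(
    name="UtilityAgent",
    tools=[calculator, word_count],  # the model picks the appropriate tool
    model_client=OpenAIClient(),
    model_kwargs={"model": "gpt-4o", "temperature": 0.3}
)

runner = Runner(agent=agent)
result = runner.call(prompt_kwargs={"input_str": "How many words are in 'to be or not to be'?"})
print(result.answer)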

View Quickstart: learn how AdalFlow optimizes LM workflows end-to-end in 15 minutes.

Go to Documentation for tracing, human-in-the-loop, and more.

Research

[Jan 2025] Auto-Differentiating Any LLM Workflow: A Farewell to Manual Prompting

  • LLM Applications as auto-differentiation graphs
  • Token-efficient, with better performance than DSPy

Collaborations

We work closely with the VITA Group at the University of Texas at Austin, led by Dr. Atlas Wang, who provides valuable support in driving project initiatives.

For collaboration, contact Li Yin.

Hiring

We are looking for a Dev Rel to help us build the community and support our users. If you are interested, please contact Li Yin.

Documentation

The full AdalFlow documentation is available at adalflow.sylph.ai.

AdalFlow: A Tribute to Ada Lovelace

AdalFlow is named in honor of Ada Lovelace, the pioneering female mathematician who first recognized that machines could go beyond mere calculations. As a team led by a female founder, we aim to inspire more women to pursue careers in AI.

Community & Contributors

AdalFlow is a community-driven project, and we welcome everyone to join us in building the future of LLM applications.

Join our Discord community to ask questions, share your projects, and get updates on AdalFlow.

To contribute, please read our Contributor Guide.

Contributors

contributors

Acknowledgements

Many existing works greatly inspired the AdalFlow library! Here is a non-exhaustive list:

  • πŸ“š PyTorch for its design philosophy and the design patterns behind Component, Parameter, and Sequential.
  • πŸ“š Micrograd: a tiny autograd engine that inspired our auto-differentiative architecture.
  • πŸ“š Text-Grad for the Textual Gradient Descent text optimizer.
  • πŸ“š DSPy for inspiring the __{input/output}_fields__ in our DataClass and the bootstrap few-shot optimizer.
  • πŸ“š OPRO for the idea of adding past text instructions along with their accuracy to the text optimizer.
  • πŸ“š PyTorch Lightning for the AdalComponent and Trainer.