
Promptomatix

A Powerful Framework for LLM Prompt Optimization

Overview | Installation | Examples | Features | API Docs | CLI

πŸ“‹ Overview

Promptomatix is an AI-driven framework that automates and optimizes prompts for large language models (LLMs). It provides a structured approach to prompt optimization, ensuring consistency, cost-effectiveness, and high-quality outputs while reducing the trial-and-error typically associated with manual prompt engineering.

The framework leverages the power of DSPy and advanced optimization techniques to iteratively refine prompts based on task requirements, synthetic data, and user feedback. Whether you're a researcher exploring LLM capabilities or a developer building production applications, Promptomatix provides a comprehensive solution for prompt optimization.

πŸ“š API Documentation: Comprehensive API documentation is available in the docs/ directory, including detailed reference guides for all modules and functions.

πŸ—οΈ Architecture

The Promptomatix architecture consists of several key components:

  • Input Processing: Analyzes raw user input to determine task type and requirements
  • Synthetic Data Generation: Creates training and testing datasets tailored to the specific task
  • Optimization Engine: Uses DSPy or meta-prompt backends to iteratively improve prompts
  • Evaluation System: Assesses prompt performance using task-specific metrics
  • Feedback Integration: Incorporates human feedback for continuous improvement
  • Session Management: Tracks optimization progress and maintains detailed logs

🌟 Key Features

  • Zero-Configuration Intelligence: Automatically analyzes tasks, selects techniques, and configures prompts
  • Automated Dataset Generation: Creates synthetic training and testing data tailored to your specific domain
  • Task-Specific Optimization: Selects the appropriate DSPy module and metrics based on task type (see the illustrative sketch after this list)
  • Real-Time Human Feedback: Incorporates user feedback for iterative prompt refinement
  • Comprehensive Session Management: Tracks optimization progress and maintains detailed logs
  • Framework Agnostic Design: Supports multiple LLM providers (OpenAI, Anthropic, Cohere)
  • CLI and API Interfaces: Flexible usage through command-line or REST API
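
As a concrete illustration of the Task-Specific Optimization feature above, the sketch below shows the kind of task-to-module mapping the framework automates. The modules are standard DSPy classes, but the signatures and the actual selection logic inside Promptomatix are assumptions here, not the package's internals.

import dspy

# Illustrative only: the real mapping, signatures, and heuristics live inside
# Promptomatix and may differ from what is shown here.
TASK_TO_MODULE = {
    "classification": dspy.Predict("text -> label"),
    "qa": dspy.ChainOfThought("question -> answer"),
    "summarization": dspy.ChainOfThought("document -> summary"),
}

module = TASK_TO_MODULE["classification"]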

βš™οΈ Installation

Quick Install (Recommended)

# Clone the repository
git clone https://github.com/SalesforceAIResearch/promptomatix.git
cd promptomatix

# Install with one command
./install.sh

The installer will:

  • βœ… Check Python 3 installation
  • βœ… Create a virtual environment (promptomatix_env)
  • βœ… Initialize git submodules (DSPy)
  • βœ… Install all dependencies

πŸ”§ Activate the Environment

Important: You need to activate the virtual environment each time you use Promptomatix:

# Activate the environment
source promptomatix_env/bin/activate

# You'll see (promptomatix_env) in your prompt when activated

πŸ”‘ Set Up API Keys

# Set your API keys
export OPENAI_API_KEY="your_openai_api_key"
export ANTHROPIC_API_KEY="your_anthropic_api_key"

# Or create a .env file
cp .env.example .env
# Edit .env with your actual API keys
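
If you prefer loading keys programmatically, the snippet below is a minimal sketch using python-dotenv (an assumption; install it with pip install python-dotenv if it is not already a dependency):

import os
from dotenv import load_dotenv

load_dotenv()  # reads the .env file in the current directory
openai_key = os.getenv("OPENAI_API_KEY")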

πŸš€ Test Installation

# Test the installation
python -m src.promptomatix.main --raw_input "Given a questions about human anatomy answer it in two words" --model_name "gpt-3.5-turbo" --backend "simple_meta_prompt" --synthetic_data_size 10 --model_provider "openai"

πŸ’‘ Pro Tips

Auto-activation: Add this to your ~/.bashrc or ~/.zshrc:

alias promptomatix='source promptomatix_env/bin/activate && promptomatix'

Deactivate when done:

deactivate

πŸš€ Example Usage

Interactive Notebooks

The best way to learn Promptomatix is through our comprehensive Jupyter notebooks:

# Navigate to examples
cd examples/notebooks

# Start with basic usage
jupyter notebook 01_basic_usage.ipynb

Notebook Guide:

  • 01_basic_usage.ipynb - Simple prompt optimization workflow (start here!)
  • 02_prompt_optimization.ipynb - Advanced optimization techniques
  • 03_metrics_evaluation.ipynb - Evaluation and metrics analysis
  • 04_advanced_features.ipynb - Advanced features and customization

Command Line Examples

# Basic optimization
python -m src.promptomatix.main --raw_input "Classify text sentiment into positive or negative"

# With custom model and parameters
python -m src.promptomatix.main --raw_input "Summarize this article" \
  --model_name "gpt-4" \
  --temperature 0.3 \
  --task_type "summarization"

# Advanced configuration
python -m src.promptomatix.main --raw_input "Given a question about human anatomy, answer it in two words" \
  --model_name "gpt-3.5-turbo" \
  --backend "simple_meta_prompt" \
  --synthetic_data_size 10 \
  --model_provider "openai"

# Using your own CSV data files
python -m src.promptomatix.main --raw_input "Classify the given IMDb rating" \
  --model_name "gpt-3.5-turbo" \
  --backend "simple_meta_prompt" \
  --model_provider "openai" \
  --load_data_local \
  --local_train_data_path "/path/to/your/train_data.csv" \
  --local_test_data_path "/path/to/your/test_data.csv" \
  --train_data_size 50 \
  --valid_data_size 20 \
  --input_fields rating \
  --output_fields category
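
The expected CSV layout is an assumption inferred from the --input_fields and --output_fields flags above: a header row with one column per input field and one per output field. A minimal sketch that writes such a file with hypothetical example rows:

import csv

# Hypothetical example rows; column names mirror --input_fields / --output_fields.
rows = [
    {"rating": "An absolute masterpiece, 10/10", "category": "positive"},
    {"rating": "Dull plot and wooden acting, 3/10", "category": "negative"},
]

with open("train_data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["rating", "category"])
    writer.writeheader()
    writer.writerows(rows)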

Python API Examples

from promptomatix import process_input, generate_feedback, optimize_with_feedback

# Basic optimization
result = process_input(
    raw_input="Classify text sentiment",
    model_name="gpt-3.5-turbo",
    task_type="classification"
)

# Generate feedback for improvement
feedback = generate_feedback(
    optimized_prompt=result['result'],
    input_fields=result['input_fields'],
    output_fields=result['output_fields'],
    model_name="gpt-3.5-turbo"
)

# Optimize with feedback
improved_result = optimize_with_feedback(result['session_id'])

# Using your own CSV data files
result = process_input(
    raw_input="Classify the given IMDb rating",
    model_name="gpt-3.5-turbo",
    backend="simple_meta_prompt",
    model_provider="openai",
    load_data_local=True,
    local_train_data_path="/path/to/your/train_data.csv",
    local_test_data_path="/path/to/your/test_data.csv",
    train_data_size=50,
    valid_data_size=20,
    input_fields=["rating"],
    output_fields=["category"]
)
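
The three calls above can be chained into an iterative refinement loop. The sketch below is an assumption about how such a loop might look: it presumes optimize_with_feedback returns the same result dictionary shape as process_input (with 'result', 'session_id', 'input_fields', and 'output_fields' keys) and uses an arbitrary fixed number of rounds.

from promptomatix import process_input, generate_feedback, optimize_with_feedback

result = process_input(
    raw_input="Classify text sentiment",
    model_name="gpt-3.5-turbo",
    task_type="classification",
)

for _ in range(3):  # assumed: a fixed number of refinement rounds
    # Feedback is tracked through the session, as in the example above.
    generate_feedback(
        optimized_prompt=result["result"],
        input_fields=result["input_fields"],
        output_fields=result["output_fields"],
        model_name="gpt-3.5-turbo",
    )
    result = optimize_with_feedback(result["session_id"])

print(result["result"])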

πŸ“ Project Structure

promptomatix/
β”œβ”€β”€ images/                # Project images and logos
β”œβ”€β”€ libs/                  # External libraries or submodules (e.g., DSPy)
β”œβ”€β”€ logs/                  # Log files
β”œβ”€β”€ promptomatix_env/      # Python virtual environment
β”œβ”€β”€ sessions/              # Saved optimization sessions
β”œβ”€β”€ dist/                  # Distribution files (if any)
β”œβ”€β”€ build/                 # Build artifacts (if any)
β”œβ”€β”€ examples/              # Example notebooks and scripts
β”œβ”€β”€ src/
β”‚   └── promptomatix/      # Core Python package
β”‚       β”œβ”€β”€ cli/
β”‚       β”œβ”€β”€ core/
β”‚       β”œβ”€β”€ metrics/
β”‚       β”œβ”€β”€ utils/
β”‚       β”œβ”€β”€ __init__.py
β”‚       β”œβ”€β”€ main.py
β”‚       β”œβ”€β”€ lm_manager.py
β”‚       └── logger.py
β”œβ”€β”€ .env.example
β”œβ”€β”€ .gitignore
β”œβ”€β”€ .gitmodules
β”œβ”€β”€ .python-version
β”œβ”€β”€ CODEOWNERS
β”œβ”€β”€ CODE_OF_CONDUCT.md
β”œβ”€β”€ CONTRIBUTING.md
β”œβ”€β”€ LICENSE.txt
β”œβ”€β”€ README.md
β”œβ”€β”€ SECURITY.md
β”œβ”€β”€ how_to_license.md
β”œβ”€β”€ install.sh
β”œβ”€β”€ requirements.txt
└── setup.py

πŸ“– Citation

If you find Promptomatix useful in your research or work, please consider citing us:

@misc{murthy2025promptomatixautomaticpromptoptimization,
      title={Promptomatix: An Automatic Prompt Optimization Framework for Large Language Models}, 
      author={Rithesh Murthy and Ming Zhu and Liangwei Yang and Jielin Qiu and Juntao Tan and Shelby Heinecke and Caiming Xiong and Silvio Savarese and Huan Wang},
      year={2025},
      eprint={2507.14241},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2507.14241}, 
}

πŸ“š Further Reading

For detailed guidelines on effective prompt engineering, please refer to Appendix B (page 17) of our paper (arXiv:2507.14241).

πŸ“¬ Contact

For questions, suggestions, or contributions, please contact:

Rithesh Murthy
Email: [email protected]
