forked from camel-ai/owl
feat: Enhance Local Model Support via Ollama #1
Merged
Conversation
This commit introduces a file upload feature to the Gradio web interface, allowing users to attach files directly to their tasks. Key changes:
- Adds a `gr.File` component to the UI in `owl/webapp.py`.
- Modifies the backend function `run_owl` to accept an `uploaded_file` path and append it to the user's prompt for the agents to process.
- Updates the Gradio event handlers to pass the file path from the new UI component to the backend.
- Improves the UI by updating the default prompt text and example questions to guide users on how to use the new file upload capability.
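The path-appending step above can be sketched as a small helper. This is a minimal illustration, not the actual `run_owl` implementation; the prompt wording and helper name are assumptions, while `uploaded_file` follows the commit's parameter name:

```python
from typing import Optional

def build_prompt(question: str, uploaded_file: Optional[str]) -> str:
    """Append the uploaded file's path to the user's question, if one was given."""
    if uploaded_file:
        return f"{question}\n\nAttached file path: {uploaded_file}"
    return question
```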
This commit introduces two major features to the Gradio web interface to improve usability:
1. **Integrated File Management:**
- Adds a `gr.File` component to the UI, allowing users to upload files directly instead of specifying a local path in the prompt.
- The backend is updated to append the uploaded file's path to the task prompt for the agents to process.
2. **Session and Project History:**
- Adds a "History" tab to the UI to view past runs.
- After each successful run, the final answer and full conversation log are saved to a unique, timestamped directory under `owl/history/`.
- The "History" tab contains a dropdown to select a past run and view its saved artifacts. A refresh button is included, and the list automatically updates after a new run is completed.
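The save step described above can be sketched as follows, assuming a history root of `owl/history/` as stated in the commit; the artifact file names (`answer.txt`, `log.txt`) and timestamp format are illustrative assumptions:

```python
import os
from datetime import datetime

def save_run(answer: str, log: str, root: str = "owl/history") -> str:
    """Save a run's final answer and log to a unique, timestamped directory."""
    run_dir = os.path.join(root, datetime.now().strftime("%Y%m%d_%H%M%S"))
    os.makedirs(run_dir, exist_ok=True)
    with open(os.path.join(run_dir, "answer.txt"), "w") as f:
        f.write(answer)
    with open(os.path.join(run_dir, "log.txt"), "w") as f:
        f.write(log)
    return run_dir
```

The History tab's dropdown can then be populated by listing the subdirectories of the history root, sorted by name (which sorts chronologically with this timestamp format).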
This commit introduces three major features to the Gradio web interface to improve usability and customization:
1. **Integrated File Management:**
- Adds a `gr.File` component to the UI, allowing users to upload files directly.
- The backend appends the uploaded file's path to the task prompt.
2. **Session and Project History:**
- Adds a "History" tab to the UI to view past runs.
- Saves the final answer and log of each successful run to a timestamped directory in `owl/history/`.
- The "History" tab allows browsing and viewing of past run details.
3. **UI for Toolkit Management:**
- Adds a `gr.CheckboxGroup` to the UI for selecting which toolkits to use.
- Refactors `examples/run.py` to dynamically load tools based on the user's selection.
- The web app logic in `owl/webapp.py` uses `inspect` to gracefully handle modules that do not support dynamic toolkit selection, ensuring backward compatibility.
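The `inspect`-based compatibility check mentioned above can be sketched like this: a keyword argument is forwarded only when the target module's function actually accepts it. The helper name is illustrative; `inspect.signature` is the standard-library mechanism the commit refers to:

```python
import inspect

def call_with_supported_kwargs(fn, **kwargs):
    """Call fn, forwarding only the keyword arguments its signature accepts."""
    params = inspect.signature(fn).parameters
    supported = {k: v for k, v in kwargs.items() if k in params}
    return fn(**supported)
```

An older module whose `construct_society(question)` takes no toolkit argument is then still callable through the same code path as a newer one that does.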
This commit introduces UI components that allow users to configure the AI model and its temperature directly from the web interface. Key changes:
- Adds a `gr.Dropdown` for model selection and a `gr.Slider` for temperature to `owl/webapp.py`, nested within an accordion for a clean UI.
- Refactors `examples/run.py` to accept `model_name` and `temperature` as optional parameters in its `construct_society` function.
- Updates the backend logic in `owl/webapp.py` to pass these parameters down to the agent creation logic.
- Uses the `inspect` module to ensure backward compatibility by only passing the new parameters to functions that support them.
This change introduces the initial phase of OpenRouter integration, allowing users to select OpenRouter from the web UI and specify a custom model name. Key changes:
- Adds a new example script `examples/run_openrouter.py` that configures the agent society to use the OpenRouter API.
- Updates `owl/webapp.py` to include "run_openrouter" in the list of selectable modules.
- Adds a conditional text input field in the UI that appears when OpenRouter is selected, allowing users to enter a specific model name (e.g., "mistralai/mistral-7b-instruct").
- Modifies the backend logic to dynamically pass the `openrouter_model_name` to the agent creation process using Python's `inspect` module, ensuring backward compatibility.
- Updates the `.env_template` to include a placeholder for the `OPENROUTER_API_KEY`.
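The conditional visibility logic can be reduced to a predicate on the selected module. In the real app this would sit inside a `gr.Dropdown` change handler returning `gr.update(visible=...)`; the function below is an illustrative sketch of just the decision, with the module name taken from the commit:

```python
def model_field_visible(selected_module: str) -> bool:
    """Whether the custom OpenRouter model-name textbox should be shown."""
    return selected_module == "run_openrouter"
```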
…, and cooldown. This change builds upon the basic OpenRouter integration by introducing a resilient key management system, making the feature more robust and suitable for heavier usage. Key changes:
- Creates a new `owl/key_manager.py` file containing two new classes:
  - `KeyManager`: Manages a pool of API keys provided via a comma-separated environment variable (`OPENROUTER_API_KEY`). It handles key rotation and enforces a cooldown period for keys that fail.
  - `ResilientOpenAICompatibleModel`: A wrapper class that inherits from `OpenAICompatibleModel`. It uses the `KeyManager` to select an API key for each request, and overrides the `step` method to catch API errors (e.g., authentication, rate limits) and automatically retry with the next available key after putting the failed key on cooldown.
- Updates `examples/run_openrouter.py` to use the new `ResilientOpenAICompatibleModel` instead of the standard `ModelFactory`, integrating the resilient key management logic into the agent society.

This completes the implementation of the advanced OpenRouter features, providing a seamless and error-resistant experience when using the OpenRouter provider.
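The rotation-with-cooldown idea can be sketched as below. The class name and the `OPENROUTER_API_KEY` comma-separated format follow the commit; the cooldown duration and internal bookkeeping are assumptions, not the actual `owl/key_manager.py` implementation:

```python
import os
import time

class KeyManager:
    """Rotate through a pool of API keys, skipping keys on cooldown."""

    def __init__(self, env_var: str = "OPENROUTER_API_KEY", cooldown_s: float = 60.0):
        raw = os.environ.get(env_var, "")
        self.keys = [k.strip() for k in raw.split(",") if k.strip()]
        self.cooldown_s = cooldown_s
        self._cooling_until = {}  # key -> monotonic time when it becomes usable again

    def get_key(self):
        """Return the first key not currently on cooldown, or None if all are."""
        now = time.monotonic()
        for key in self.keys:
            if self._cooling_until.get(key, 0.0) <= now:
                return key
        return None

    def mark_failed(self, key: str) -> None:
        """Put a failing key on cooldown so the next request rotates past it."""
        self._cooling_until[key] = time.monotonic() + self.cooldown_s
```

A resilient model wrapper would then call `get_key()` before each request and `mark_failed()` inside its error handler before retrying.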
This feature significantly improves the user experience by displaying the turn-by-turn dialogue as it happens, rather than waiting for the entire process to complete. Key changes:
- Refactors the UI in `owl/webapp.py`, replacing the static `gr.Markdown` log display with an interactive `gr.Chatbot` component. The old log view is preserved in a new "Full Logs" tab.
- Modifies the `run_society` function in `owl/utils/enhanced_role_playing.py` to accept an optional `message_queue`, placing each turn's data into the queue as it is generated.
- Implements a new `stream_conversation` generator function in `owl/webapp.py` that runs the agent society in a background thread, listens to the message queue, and yields live updates to the `gr.Chatbot` component.
- Updates all associated event handlers to support the new UI components and streaming logic.
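The producer/consumer pattern described above can be sketched with the standard library: the society runs in a background thread, pushes each turn onto a queue, and a generator yields turns as they arrive. The `None` sentinel and exact signatures here are illustrative assumptions, not the actual `owl/webapp.py` code:

```python
import queue
import threading

def stream_conversation(run_society, task):
    """Run the society in a background thread and yield turns as they arrive."""
    message_queue = queue.Queue()

    def worker():
        run_society(task, message_queue=message_queue)
        message_queue.put(None)  # sentinel: no more turns will arrive

    threading.Thread(target=worker, daemon=True).start()
    while True:
        turn = message_queue.get()  # blocks until the next turn is produced
        if turn is None:
            break
        yield turn
```

In a Gradio app, each yielded value would be appended to the `gr.Chatbot` history so the UI updates turn by turn.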
This feature makes it significantly easier for users to run OWL with local models, improving flexibility and privacy. Key changes:
- Refactors `examples/run_ollama.py` to accept a dynamic model name as a parameter, replacing the previously hardcoded model names. It also simplifies the model configuration and allows the Ollama server URL to be configured via an environment variable (`OLLAMA_API_BASE_URL`).
- Integrates the Ollama runner into the `owl/webapp.py` UI, adding a `run_ollama` option to the module dropdown and a conditional text input for the user to specify their desired model.
- Updates the backend logic in the web app to be fully compatible with the new parameterized Ollama script.
- Adds a new section to the `README.md` with detailed instructions on how to set up and use Ollama with the OWL system.
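The parameterization described above can be sketched as follows. The `OLLAMA_API_BASE_URL` variable name comes from the commit; the default URL (Ollama's conventional local port) and the config-dict layout are assumptions:

```python
import os

def ollama_config(model_name: str) -> dict:
    """Build an Ollama model config from a dynamic model name and env var."""
    return {
        "model": model_name,
        "url": os.environ.get("OLLAMA_API_BASE_URL", "http://localhost:11434/v1"),
    }
```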
This patch enhances local model support by deeply integrating Ollama into the web UI. It refactors the `run_ollama.py` script for dynamic model selection, adds the necessary UI components and logic to `webapp.py`, and includes comprehensive documentation in the `README.md`.