Consolidate run and chat commands together; also allow specification of prefix in ramalama.conf #1706
Conversation
Reviewer's Guide

This PR refactors the CLI by extracting shared chat/run options into a helper, adjusts runtime flag handling for the serve command, extends the BaseConfig schema with new fields (prefix, ocr, dryrun, selinux, default_image), updates default_prefix logic to honor user config, and enriches documentation with the new prefix setting.

Sequence diagram for CLI option handling in chat and run commands

```mermaid
sequenceDiagram
    actor User
    participant CLI
    participant Config
    User->>CLI: Invoke 'chat' or 'run' command
    CLI->>CLI: chat_run_options(parser)
    CLI->>Config: default_prefix() (uses CONFIG.prefix if set)
    Config-->>CLI: Return prefix
    CLI-->>User: Command runs with options (including prefix, color, rag)
```
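The helper pattern in the diagram might look roughly like the sketch below. `chat_run_options` and the `--color`/`--prefix`/`--rag` flags are taken from the PR description, but the help strings and defaults here are assumptions, not the actual ramalama code.

```python
import argparse


def chat_run_options(parser):
    # Shared flags registered once and reused by both subcommand parsers.
    # Flag names come from the PR; defaults/help text are illustrative.
    parser.add_argument("--color", default="auto",
                        help="colorize output: 'auto', 'always', or 'never'")
    parser.add_argument("--prefix",
                        help="prefix string for the user prompt")
    parser.add_argument("--rag",
                        help="RAG vector database to use with the model")


# Both 'chat' and 'run' get identical options from the one helper.
chat_parser = argparse.ArgumentParser(prog="ramalama chat")
chat_run_options(chat_parser)
run_parser = argparse.ArgumentParser(prog="ramalama run")
chat_run_options(run_parser)

args = chat_parser.parse_args(["--prefix", "?? > "])
```

This keeps the two subcommands from drifting apart as flags are added.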
Updated class diagram for BaseConfig and related config types

```mermaid
classDiagram
    class BaseConfig {
        +str api = "none"
        +str carimage = "registry.access.redhat.com/ubi10-micro:latest"
        +bool container
        +int ctx_size = 2048
        +str default_image = DEFAULT_IMAGE
        +bool dryrun = False
        +SUPPORTED_ENGINES engine
        +list[str] env
        +str host = "0.0.0.0"
        +str image
        +dict[str, str] images
        +bool keep_groups = False
        +int ngl = -1
        +bool ocr = False
        +str port = str(DEFAULT_PORT)
        +str prefix
        +str pull = "newer"
        +Literal rag_format = "qdrant"
        +SUPPORTED_RUNTIMES runtime = "llama.cpp"
        +bool selinux = False
        +RamalamaSettings settings
        +str store
        +str temp = "0.8"
        +int threads = -1
        +str transport = "ollama"
        +UserConfig user
        +__post_init__()
    }
    class RamalamaSettings {
    }
    class UserConfig {
    }
    BaseConfig --> RamalamaSettings : settings
    BaseConfig --> UserConfig : user
```
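A rough Python sketch of a subset of the schema above, for readers who want to see the dataclass shape. The `DEFAULT_IMAGE` and `DEFAULT_PORT` values are placeholders, and the real `BaseConfig` resolves unset fields (engine, store, image, etc.) in `__post_init__`.

```python
from dataclasses import dataclass
from typing import Optional

DEFAULT_IMAGE = "quay.io/ramalama/ramalama"  # placeholder value
DEFAULT_PORT = 8080                          # placeholder value


@dataclass
class BaseConfig:
    # Subset of the fields in the class diagram, including the new
    # prefix/ocr/dryrun/selinux/default_image additions from this PR.
    api: str = "none"
    ctx_size: int = 2048
    default_image: str = DEFAULT_IMAGE
    dryrun: bool = False
    host: str = "0.0.0.0"
    ngl: int = -1
    ocr: bool = False
    port: str = str(DEFAULT_PORT)
    prefix: Optional[str] = None
    pull: str = "newer"
    selinux: bool = False
    temp: str = "0.8"
    threads: int = -1
    transport: str = "ollama"


cfg = BaseConfig()
```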
Summary of Changes

Hello @rhatdan, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request streamlines the command-line interface by consolidating shared options between the `chat` and `run` commands. Additionally, it introduces a new configuration option to allow users to customize the prompt prefix, enhancing both consistency and user experience.

Highlights

- Command Argument Consolidation: Common command-line arguments, specifically `--color`, `--prefix`, and `--rag`, have been consolidated into a new helper function (`chat_run_options`). This function is now used by both the `chat` and `run` commands, reducing code duplication and improving consistency in argument parsing.
- Configurable Prompt Prefix: A new `prefix` option has been introduced in `ramalama.conf` and its man page (`ramalama.conf.5.md`), allowing users to specify a custom prefix for chat and run command prompts. This provides greater customization over the user interface.
- Dynamic Default Prefix Logic: The `default_prefix` function in `ramalama/chat.py` now prioritizes a user-defined `prefix` from the configuration (`CONFIG.prefix`) before falling back to the existing engine-based default prefixes (e.g., Podman, Docker, No Engine, No Emoji support).
Hey @rhatdan - I've reviewed your changes - here's some feedback:
- In default_prefix, move the CONFIG.prefix check above the EMOJI guard so that a user-specified prefix is honored even when emoji support is disabled.
- The restructuring of BaseConfig dataclass fields alters the default field order and could impact existing config loading—please verify backward compatibility or preserve the original order.
- You’ve DRYed up run and chat flags with chat_run_options—consider applying the same pattern to the serve subcommand to keep CLI options consistent.
## Individual Comments
### Comment 1
<location> `docs/ramalama.conf.5.md:131` </location>
<code_context>

```diff
+| Podman | "🦭 > " |
+| Docker | "🐋 > " |
+| No Engine | "🦙 > " |
+| No IMOGI support| "> " |
+
+#prefix = ""
```

</code_context>
<issue_to_address>
Possible typo: 'IMOGI' should likely be 'EMOJI'.
In the table, change 'No IMOGI support' to 'No EMOJI support' for accuracy.
</issue_to_address>
<suggested_fix>
<<<<<<< SEARCH
| No IMOGI support| "> " |
=======
| No EMOJI support| "> " |
>>>>>>> REPLACE
</suggested_fix>
Code Review

This pull request effectively consolidates the command-line options for the `run` and `chat` commands by introducing a shared helper function, which improves code maintainability. It also successfully adds the new `prefix` configuration option, with support in `ramalama.conf` and as a command-line flag. The documentation has been updated accordingly. I've left a few minor suggestions to fix typos and improve consistency in the documentation files.
From ramalama.conf:

```
# Podman: "🦭 > "
# Docker: "🐋 > "
# No Engine: "🦙 > "
# No IMOGI support: "> "
```
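For illustration, a hypothetical `ramalama.conf` snippet setting the new option (assuming the file's TOML format and a `[ramalama]` table; when `prefix` is left unset, the engine-based defaults above apply):

```toml
[ramalama]
# Custom prompt prefix for the chat and run commands.
prefix = "ai> "
```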
Signed-off-by: Daniel J Walsh <[email protected]>
From ramalama/chat.py:

```diff
@@ -52,6 +52,9 @@ def default_prefix():
     if not EMOJI:
         return "> "
 
+    if CONFIG.prefix:
```
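The reordering suggested in the review (checking the user's configured prefix before the emoji guard, so a custom prefix is honored even on terminals without emoji support) would look roughly like this sketch. `CONFIG` and `EMOJI` are stand-ins for the real globals, and the engine-specific branches are collapsed to a single fallback.

```python
EMOJI = False  # e.g. a terminal without emoji support


class CONFIG:
    prefix = "$ "  # user-configured prefix from ramalama.conf


def default_prefix():
    # User-configured prefix is checked first, so it wins even when
    # emoji support is disabled.
    if CONFIG.prefix:
        return CONFIG.prefix
    if not EMOJI:
        return "> "
    return "🦙 > "  # engine-specific defaults elided
```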
Should this override the prompt when emojis are not supported? E.g., if running native Windows Python via cmd.exe/PowerShell (if that even works?)
Let's look at that as a follow-on.
Summary by Sourcery
Allow users to define a custom prompt prefix for chat and run commands, centralize shared CLI options into a helper function, and enhance BaseConfig with additional settings while updating documentation to cover the new prefix configuration.
New Features:
- Add a `prefix` option to `ramalama.conf` (and a `--prefix` flag) so users can customize the chat and run prompt prefix.

Enhancements:
- Consolidate the shared `chat` and `run` flags (`--color`, `--prefix`, `--rag`) into a `chat_run_options` helper; extend `BaseConfig` with `prefix`, `ocr`, `dryrun`, `selinux`, and `default_image` fields.

Documentation:
- Document the new `prefix` setting in `ramalama.conf` and its man page (`ramalama.conf.5.md`).