
Conversation

@rhatdan (Member) commented Jul 16, 2025

Summary by Sourcery

Allow users to define a custom prompt prefix for chat and run commands, centralize shared CLI options into a helper function, and enhance BaseConfig with additional settings while updating documentation to cover the new prefix configuration.

New Features:

  • Add --prefix flag to chat and run commands for custom prompt prefixes
  • Introduce persistent prefix configuration in ramalama.conf via new prefix setting

Enhancements:

  • Extract common flags (--color, --prefix, --rag) into shared chat_run_options for chat and run commands
  • Extend and reorganize BaseConfig with new fields (api, default_image, dryrun, ocr, selinux)

Documentation:

  • Document the prefix configuration in ramalama.conf and the man page with engine-based defaults
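
As a sketch of what the new setting could look like in ramalama.conf (assuming the `[ramalama]` table used by the existing file; the emoji value shown is just the Podman default from the docs table):

```toml
[ramalama]

# Custom prompt prefix for the chat and run commands.
# When unset, an engine-based default is used (e.g. "🦭 > " for Podman,
# "🐋 > " for Docker, "🦙 > " with no engine, "> " without emoji support).
prefix = "🦭 > "
```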

@sourcery-ai bot (Contributor) commented Jul 16, 2025

Reviewer's Guide

This PR refactors the CLI by extracting shared chat/run options into a helper, adjusts runtime flag handling for the serve command, extends the BaseConfig schema with new fields (prefix, ocr, dryrun, selinux, default_image), updates default_prefix logic to honor user config, and enriches documentation with the new prefix setting.

Sequence diagram for CLI option handling in chat and run commands

```mermaid
sequenceDiagram
    actor User
    participant CLI as CLI
    participant Config as Config
    User->>CLI: Invoke 'chat' or 'run' command
    CLI->>CLI: chat_run_options(parser)
    CLI->>Config: default_prefix() (uses CONFIG.prefix if set)
    Config-->>CLI: Return prefix
    CLI-->>User: Command runs with options (including prefix, color, rag)
```

Updated class diagram for BaseConfig and related config types

```mermaid
classDiagram
    class BaseConfig {
        +str api = "none"
        +str carimage = "registry.access.redhat.com/ubi10-micro:latest"
        +bool container
        +int ctx_size = 2048
        +str default_image = DEFAULT_IMAGE
        +bool dryrun = False
        +SUPPORTED_ENGINES engine
        +list[str] env
        +str host = "0.0.0.0"
        +str image
        +dict[str, str] images
        +bool keep_groups = False
        +int ngl = -1
        +bool ocr = False
        +str port = str(DEFAULT_PORT)
        +str prefix
        +str pull = "newer"
        +Literal rag_format = "qdrant"
        +SUPPORTED_RUNTIMES runtime = "llama.cpp"
        +bool selinux = False
        +RamalamaSettings settings
        +str store
        +str temp = "0.8"
        +int threads = -1
        +str transport = "ollama"
        +UserConfig user
        +__post_init__()
    }
    class RamalamaSettings {
    }
    class UserConfig {
    }
    BaseConfig --> RamalamaSettings : settings
    BaseConfig --> UserConfig : user
```
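
Independent of the diagram, a minimal Python sketch of the extended fields might look like the following (types simplified to plain `str`/`bool`, pre-existing fields abbreviated, and `DEFAULT_IMAGE`/`DEFAULT_PORT` given placeholder values for illustration):

```python
from dataclasses import dataclass

DEFAULT_IMAGE = "quay.io/ramalama/ramalama"  # placeholder, not the project's actual constant
DEFAULT_PORT = 8080                          # placeholder value

@dataclass
class BaseConfig:
    # Fields added or reorganized by this PR
    api: str = "none"
    default_image: str = DEFAULT_IMAGE
    dryrun: bool = False
    ocr: bool = False
    prefix: str = ""          # empty means "fall back to the engine-based default"
    selinux: bool = False
    # A few pre-existing fields, kept for context
    ctx_size: int = 2048
    host: str = "0.0.0.0"
    port: str = str(DEFAULT_PORT)
    pull: str = "newer"
    runtime: str = "llama.cpp"

cfg = BaseConfig()
```

The real class also carries an `__post_init__` hook and nested `RamalamaSettings`/`UserConfig` objects, as shown in the diagram above.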

File-Level Changes

| Change | Details | Files |
| ------ | ------- | ----- |
| Extract shared chat and run CLI options into a helper | Introduce chat_run_options() to define --color, --prefix, and --rag<br>Replace duplicate argument definitions in chat_parser and run_parser with chat_run_options()<br>Restrict runtime_options to only add --rag for serve, not run | ramalama/cli.py |
| Extend BaseConfig schema with new settings | Add api default and default_image fields<br>Introduce dryrun, ocr, prefix, and selinux flags<br>Reorder container and image fields for consistency | ramalama/config.py |
| Honor configured prefix in default_prefix fallback | Check CONFIG.prefix before engine-based default<br>Ensure default_prefix returns user-specified prefix when set | ramalama/chat.py |
| Document new prefix option in configuration guides | Add prefix description and default values to ramalama.conf man page<br>Update ramalama.conf example with prefix mapping table | docs/ramalama.conf.5.md<br>docs/ramalama.conf |
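
The shared-option helper described above can be sketched with argparse (the flag names come from the PR; the help strings, defaults, and surrounding parser setup here are illustrative, not ramalama's actual code):

```python
import argparse

def chat_run_options(parser):
    """Add the options shared by the chat and run subcommands."""
    parser.add_argument("--color", default="auto",
                        help="possible values are 'auto', 'always', and 'never'")
    parser.add_argument("--prefix", help="prefix to use for the user prompt")
    parser.add_argument("--rag", help="RAG vector database to use")

# Wire the same options into both subcommands instead of duplicating them.
parser = argparse.ArgumentParser(prog="ramalama")
subparsers = parser.add_subparsers(dest="command")
for name in ("chat", "run"):
    chat_run_options(subparsers.add_parser(name))

args = parser.parse_args(["chat", "--prefix", "🦭 > "])
```

Because both subparsers go through the same helper, a flag added there is guaranteed to behave identically in `chat` and `run`.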

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.


@gemini-code-assist bot (Contributor) left a comment


Summary of Changes

Hello @rhatdan, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request streamlines the command-line interface by consolidating shared options between the chat and run commands. Additionally, it introduces a new configuration option to allow users to customize the prompt prefix, enhancing both consistency and user experience.

Highlights

  • Command Argument Consolidation: Common command-line arguments, specifically --color, --prefix, and --rag, have been consolidated into a new helper function (chat_run_options). This function is now used by both the chat and run commands, reducing code duplication and improving consistency in argument parsing.
  • Configurable Prompt Prefix: A new prefix option has been introduced in ramalama.conf and its man page (ramalama.conf.5.md), allowing users to specify a custom prefix for chat and run command prompts. This provides greater customization over the user interface.
  • Dynamic Default Prefix Logic: The default_prefix function in ramalama/chat.py now prioritizes a user-defined prefix from the configuration (CONFIG.prefix) before falling back to the existing engine-based default prefixes (e.g., Podman, Docker, No Engine, No Emoji support).
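
The prefix resolution described above might be sketched like this (engine detection is stubbed into a parameter here; the actual code reads `CONFIG` and `EMOJI` from ramalama's own modules, and the merged code keeps the `CONFIG.prefix` check below the `EMOJI` guard, which this sketch mirrors):

```python
EMOJI = True  # stand-in for ramalama's terminal emoji-support check

def default_prefix(config_prefix, engine):
    """Resolve the chat/run prompt prefix: user config first, then engine-based default."""
    if not EMOJI:
        return "> "
    if config_prefix:              # user-specified prefix wins
        return config_prefix
    if engine and "podman" in engine:
        return "🦭 > "
    if engine and "docker" in engine:
        return "🐋 > "
    return "🦙 > "                 # no engine configured
```

So `default_prefix("", "podman")` yields the Podman default, while any non-empty configured prefix is returned unchanged.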
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

| Feature | Command | Description |
| ------- | ------- | ----------- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize your Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where it is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@sourcery-ai bot (Contributor) left a comment


Hey @rhatdan - I've reviewed your changes - here's some feedback:

  • In default_prefix, move the CONFIG.prefix check above the EMOJI guard so that a user-specified prefix is honored even when emoji support is disabled.
  • The restructuring of BaseConfig dataclass fields alters the default field order and could impact existing config loading—please verify backward compatibility or preserve the original order.
  • You’ve DRYed up run and chat flags with chat_run_options—consider applying the same pattern to the serve subcommand to keep CLI options consistent.

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

| Podman | "🦭 > " |
| Docker | "🐋 > " |
| No Engine | "🦙 > " |
| No IMOGI support| "> " |
sourcery-ai bot (Contributor):

issue (typo): Possible typo: 'IMOGI' should likely be 'EMOJI'.

In the table, change 'No IMOGI support' to 'No EMOJI support' for accuracy.

Suggested change
| No IMOGI support| "> " |
| No EMOJI support| "> " |

@gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request effectively consolidates the command-line options for run and chat commands by introducing a shared helper function, which improves code maintainability. It also successfully adds the new prefix configuration option, with support in ramalama.conf and as a command-line flag. The documentation has been updated accordingly. I've left a few minor suggestions to fix typos and improve consistency in the documentation files.

# Podman: "🦭 > "
# Docker: "🐋 > "
# No Engine: "🦙 > "
# No IMOGI support: "> "
gemini-code-assist bot (Contributor):

Severity: medium

There's a typo in this comment. "IMOGI" should be "EMOJI".

# No EMOJI support: "> "


#prefix = ""

gemini-code-assist bot (Contributor):

Severity: medium

This commented-out example is redundant and inconsistent with how other options are documented in this man page. For example, the port option doesn't include a commented-out TOML example. To maintain consistency and improve clarity, I recommend removing these lines.

```diff
@@ -52,6 +52,9 @@ def default_prefix():
     if not EMOJI:
         return "> "
 
+    if CONFIG.prefix:
```
Contributor:

should this override the prompt when emojis are not supported? E.g if running native windows python via cmd.exe/powershell (if that even works?)

@rhatdan (Member, Author):

Let's look at that as a follow-on.

@rhatdan rhatdan merged commit fbec474 into containers:main Jul 21, 2025
53 of 57 checks passed