
Conversation

@rhatdan (Member) commented on Jun 23, 2025

Summary by Sourcery

Adapt ramalama stack and chat modules for compatibility with llama-stack by updating host binding, argument formatting, and command invocation patterns, and add robust attribute checks in the chat utility.

Bug Fixes:

  • Add hasattr checks around optional args (pid2kill, name) in chat kills() to prevent attribute errors

Enhancements:

  • Bind model server to 0.0.0.0 instead of localhost for external accessibility
  • Convert port, context size, and thread count arguments to strings for consistent CLI usage
  • Reformat container YAML to use JSON array and multiline args for llama-server and llama-stack commands
  • Update Containerfile CMD to JSON exec form for llama-stack entrypoint

@sourcery-ai bot (Contributor) commented on Jun 23, 2025

Reviewer's Guide

This PR modifies the ramalama stack to be compatible with llama-stack. It switches the network binding to 0.0.0.0, refactors the llama-server command and arguments into an inline YAML list with stringified parameters, standardizes the llama stack run invocation to exec form in both the Kubernetes spec and the Containerfile, and adds attribute checks to the chat kill routine for safer termination.
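As a rough illustration of the stringified-arguments change, the sketch below shows how the llama-server invocation might be assembled in stack.py; the variable names, default values, and YAML indentation are assumptions for illustration, not the actual ramalama code.

# Minimal sketch of the argument handling described above; names and
# defaults are illustrative and not copied from ramalama/stack.py.
host = "0.0.0.0"   # was "127.0.0.1"; bind on all interfaces
port = 8080        # hypothetical default
ctx_size = 2048    # hypothetical default
threads = 8        # hypothetical default

llama_args = [
    "llama-server",
    "--host", host,
    "--port", str(port),          # numeric values passed as strings
    "--ctx-size", str(ctx_size),
    "--threads", str(threads),
]

# Rendered as a multiline YAML list literal for the generated container spec.
yaml_args = "\n".join(f'          - "{a}"' for a in llama_args)
print(yaml_args)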

Sequence diagram for safer process/container termination in Chat.kills()

sequenceDiagram
    participant Chat
    participant OS
    participant ContainerManager
    Chat->>Chat: kills()
    alt args has pid2kill
        Chat->>OS: os.kill(pid2kill, SIGINT)
        Chat->>OS: os.kill(pid2kill, SIGTERM)
        Chat->>OS: os.kill(pid2kill, SIGKILL)
    else args has name
        Chat->>ContainerManager: stop_container(args, name)
    end
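A minimal Python sketch of the hardened kill path shown in the diagram above; the real ramalama/chat.py method may differ, and the stop_container import path is an assumption.

import os
import signal

# stop_container is assumed to live elsewhere in ramalama; this import
# path is illustrative only.
from ramalama.common import stop_container

class Chat:
    def __init__(self, args):
        self.args = args

    def kills(self):
        # Only signal a PID when the optional argument was actually supplied.
        if hasattr(self.args, "pid2kill") and self.args.pid2kill:
            os.kill(self.args.pid2kill, signal.SIGINT)
            os.kill(self.args.pid2kill, signal.SIGTERM)
            os.kill(self.args.pid2kill, signal.SIGKILL)
        # Otherwise fall back to stopping the named container, if any.
        elif hasattr(self.args, "name") and self.args.name:
            stop_container(self.args, self.args.name)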

Class diagram for updated Stack and Chat classes

classDiagram
    class Stack {
        - args
        - name
        - host
        - model
        - stack_image
        + __init__(args)
        + generate()
    }
    class Chat {
        - args
        + _req()
        + kills()
        + loop()
    }
    Stack <|-- Chat
    %% Highlighted changes:
    %% - Stack.host now defaults to "0.0.0.0" instead of "127.0.0.1"
    %% - Stack.generate() refactored llama-server command/args handling
    %% - Chat.kills() now checks for attribute existence before accessing args.pid2kill and args.name

File-Level Changes

Change: Switch host binding and URL exposure (ramalama/stack.py)
  • Changed default host from "127.0.0.1" to "0.0.0.0"
  • Updated RAMALAMA_URL environment variable to use dynamic host
  • Exposed service to all interfaces instead of localhost

Change: Refactor llama-server command and argument generation (ramalama/stack.py)
  • Removed separate llama_cmd variable
  • Converted numeric parameters (port, context, threads) to strings
  • Replaced args list with a joined multiline YAML list literal

Change: Standardize llama stack run invocation to exec form (ramalama/stack.py, container-images/llama-stack/Containerfile) — see the sketch after this list
  • Replaced shell wrapper (/bin/sh -c) args with a direct exec-form list in the Kubernetes spec
  • Switched Containerfile CMD to JSON array form

Change: Harden chat kill routine with attribute checks (ramalama/chat.py)
  • Added hasattr checks before accessing args.pid2kill
  • Added hasattr checks before accessing args.name
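To make the exec-form change listed above concrete, here is a sketch of the two container-spec shapes stack.py could generate, before and after; the field names follow the Kubernetes container schema, and the config file path is illustrative, not confirmed from the source.

# Before: shell wrapper around the llama-stack entrypoint.
shell_form = {
    "name": "llama-stack",
    "command": ["/bin/sh", "-c"],
    "args": ["llama stack run /etc/ramalama/ramalama-run.yaml"],
}

# After: direct exec-form list, no intermediate shell.
exec_form = {
    "name": "llama-stack",
    "args": ["llama", "stack", "run", "/etc/ramalama/ramalama-run.yaml"],
}

The exec form avoids spawning an extra shell and sidesteps shell word-splitting, which is the same motivation behind switching the Containerfile CMD to its JSON array form.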


@gemini-code-assist bot (Contributor) left a comment


Summary of Changes

Hello @rhatdan, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the compatibility and operational robustness of the ramalama application within a llama-stack containerized deployment. The changes adopt the exec form for container commands, harden the handling of optional arguments, and adjust the network binding so the llama-stack and model-server containers can communicate.

Highlights

  • Container Command Execution: Updated CMD instructions in container-images/llama-stack/Containerfile and args definitions in ramalama/stack.py for the llama-stack container to use the exec form (JSON array) instead of the shell form. This is a best practice for container images, improving clarity and avoiding shell-related issues.
  • Robust Argument Handling: Added hasattr checks in ramalama/chat.py before accessing self.args.pid2kill and self.args.name. This prevents AttributeError exceptions if these optional arguments are not provided, making the application more robust.
  • Network Accessibility for Model Server: Changed the default host binding for the llama-server from 127.0.0.1 (localhost) to 0.0.0.0 (all network interfaces) in ramalama/stack.py. This ensures the model server is accessible from other containers or external hosts within the llama-stack environment.
  • Inter-Container Communication: Refactored the generation of llama-server arguments in ramalama/stack.py to be directly embedded as a YAML list string, and updated the RAMALAMA_URL environment variable to use the dynamic self.host (now 0.0.0.0) instead of hardcoding 127.0.0.1. This ensures correct communication between the llama-stack and model-server containers.

@sourcery-ai bot (Contributor) left a comment


Hey @rhatdan - I've reviewed your changes and they look great!



@gemini-code-assist bot (Contributor) left a comment


Code Review

This pull request adapts the codebase for compatibility with llama-stack. The changes include updating the server host binding for external accessibility, converting argument types for CLI consistency, and refactoring container command definitions to the exec form, which are all positive improvements.

I've found one issue in ramalama/stack.py: the client connection URL is constructed with 0.0.0.0, which is a wildcard bind address rather than a routable destination, so it will likely cause connection failures between containers in the pod. I've provided a suggestion to fix this.

The other changes, such as adding hasattr checks in ramalama/chat.py and updating the Containerfile CMD, improve the robustness and follow best practices.
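A sketch of the kind of fix being suggested here, with addresses and the port chosen for illustration: keep the server bound to all interfaces, but hand clients a routable address.

bind_host = "0.0.0.0"      # what llama-server listens on
client_host = "127.0.0.1"  # what the llama-stack container connects to;
                           # containers in one pod share a network namespace
port = 8080                # illustrative default

env = {"RAMALAMA_URL": f"http://{client_host}:{port}"}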

@rhatdan force-pushed the llama-stack branch 8 times, most recently from d94b61f to 2df4f6c on June 26, 2025 at 19:59
Signed-off-by: Daniel J Walsh <[email protected]>
@rhatdan merged commit 895fb0d into containers:main on Jun 27, 2025
28 of 33 checks passed