MLX runtime support #1642

Merged: 3 commits merged into containers:main from feat/mlx on Jul 4, 2025

Conversation

kush-gupt (Contributor) commented Jul 2, 2025

Assisted by: Cursor (Claude 4 sonnet)

Enables a new 'mlx' runtime option that integrates the mlx_lm Python package for optimized Apple Silicon inference. The runtime is wired into the CLI and the model execution paths (run, serve), platform constraints are enforced, documentation is updated, and the feature is covered by unit and system tests.

New Features:

  • Add MLX runtime support for interactive chat, one-shot generation, serving, and benchmarking via the mlx_lm CLI

Enhancements:

  • Enforce MLX runtime only on Apple Silicon macOS with automatic --nocontainer mode and validation
  • Refactor model store type hint to ModelStore | None and add helper methods for building MLX subprocess arguments

CI:

  • Install mlx-lm package in CI workflows

Documentation:

  • Document MLX runtime requirements and usage in README, man pages, and configuration files

Tests:

  • Add unit tests for MLX runtime argument building, validation, and behavior
  • Add system BATS tests covering MLX CLI commands and runtime constraints

Examples:

❯ ramalama --debug --runtime mlx run hf://mlx-community/Unsloth-Phi-4-4bit 'What is the answer to life?'
2025-07-01 20:55:09 - DEBUG - run_cmd: npu-smi info
2025-07-01 20:55:09 - DEBUG - Working directory: None
2025-07-01 20:55:09 - DEBUG - Ignore stderr: False
2025-07-01 20:55:09 - DEBUG - Ignore all: False
2025-07-01 20:55:09 - DEBUG - run_cmd: mthreads-gmi
2025-07-01 20:55:09 - DEBUG - Working directory: None
2025-07-01 20:55:09 - DEBUG - Ignore stderr: False
2025-07-01 20:55:09 - DEBUG - Ignore all: False
2025-07-01 20:55:09 - DEBUG - exec_cmd: python -m mlx_lm generate --model /Users/kugupta/.local/share/ramalama/store/huggingface/mlx-community/Unsloth-Phi-4-4bit/snapshots/sha256-d1f444c20e6479cbf8cf6bc057ee3248fa10cd42 --temp 0.8 --max-tokens 2048 --prompt "What is the answer to life?"
==========
The question "What is the answer to life?" is famously explored in Douglas Adams' "The Hitchhiker's Guide to the Galaxy" series, where the ultimate answer to life, the universe, and everything is humorously given as "42." However, this is a fictional and whimsical answer.

In a more philosophical or existential context, the "answer" to life can vary greatly depending on individual beliefs, values, and perspectives. Many people find meaning in relationships, personal growth, contributing to society, or spiritual beliefs. Ultimately, the answer to what makes life meaningful is deeply personal and subjective.
==========
Prompt: 14 tokens, 55.354 tokens-per-sec
Generation: 122 tokens, 16.332 tokens-per-sec
Peak memory: 8.330 GB
❯ ramalama --debug --runtime mlx run hf://mlx-community/Unsloth-Phi-4-4bit
2025-07-01 20:33:35 - DEBUG - run_cmd: npu-smi info
2025-07-01 20:33:35 - DEBUG - Working directory: None
2025-07-01 20:33:35 - DEBUG - Ignore stderr: False
2025-07-01 20:33:35 - DEBUG - Ignore all: False
2025-07-01 20:33:35 - DEBUG - run_cmd: mthreads-gmi
2025-07-01 20:33:35 - DEBUG - Working directory: None
2025-07-01 20:33:35 - DEBUG - Ignore stderr: False
2025-07-01 20:33:35 - DEBUG - Ignore all: False
2025-07-01 20:33:35 - DEBUG - exec_cmd: python -m mlx_lm chat --model /Users/kugupta/.local/share/ramalama/store/huggingface/mlx-community/Unsloth-Phi-4-4bit/snapshots/sha256-d1f444c20e6479cbf8cf6bc057ee3248fa10cd42 --temp 0.8 --max-tokens 2048
[INFO] Starting chat session with /Users/kugupta/.local/share/ramalama/store/huggingface/mlx-community/Unsloth-Phi-4-4bit/snapshots/sha256-d1f444c20e6479cbf8cf6bc057ee3248fa10cd42.
The command list:
- 'q' to exit
- 'r' to reset the chat
- 'h' to display these commands
>> What is the answer to life?
The question "What is the answer to life?" is famously left open in Douglas Adams's science fiction series "The Hitchhiker's Guide to the Galaxy." In the story, a supercomputer named Deep Thought is asked to find the meaning of life, the universe, and everything. After seven and a half million years of calculation, it reveals that the answer is simply "42." This has since become a popular cultural reference to the idea that the answers to profound questions may be ultimately unattainable or absurdly simple.

Philosophically, different traditions and thinkers propose various answers to what the meaning or purpose of life might be. Some suggest it is to seek happiness, love, or the pursuit of knowledge. Others propose that life has no inherent meaning beyond what each individual ascribes to it. Essentially, the question of life's meaning varies greatly depending on cultural, religious, and personal perspectives.
>> q
❯ ramalama --debug --runtime mlx serve hf://mlx-community/Unsloth-Phi-4-4bit
2025-07-01 20:34:11 - DEBUG - run_cmd: npu-smi info
2025-07-01 20:34:11 - DEBUG - Working directory: None
2025-07-01 20:34:11 - DEBUG - Ignore stderr: False
2025-07-01 20:34:11 - DEBUG - Ignore all: False
2025-07-01 20:34:11 - DEBUG - run_cmd: mthreads-gmi
2025-07-01 20:34:11 - DEBUG - Working directory: None
2025-07-01 20:34:11 - DEBUG - Ignore stderr: False
2025-07-01 20:34:11 - DEBUG - Ignore all: False
2025-07-01 20:34:11 - DEBUG - Checking if 8080 is available
2025-07-01 20:34:11 - DEBUG - run_cmd: npu-smi info
2025-07-01 20:34:11 - DEBUG - Working directory: None
2025-07-01 20:34:11 - DEBUG - Ignore stderr: False
2025-07-01 20:34:11 - DEBUG - Ignore all: False
2025-07-01 20:34:11 - DEBUG - run_cmd: mthreads-gmi
2025-07-01 20:34:11 - DEBUG - Working directory: None
2025-07-01 20:34:11 - DEBUG - Ignore stderr: False
2025-07-01 20:34:11 - DEBUG - Ignore all: False
2025-07-01 20:34:11 - DEBUG - exec_cmd: python -m mlx_lm server --model /Users/kugupta/.local/share/ramalama/store/huggingface/mlx-community/Unsloth-Phi-4-4bit/snapshots/sha256-d1f444c20e6479cbf8cf6bc057ee3248fa10cd42 --temp 0.8 --max-tokens 2048 --port 8080 --host 0.0.0.0
/Users/kugupta/Documents/work/ramalama/.venv/lib/python3.12/site-packages/mlx_lm/server.py:924: UserWarning: mlx_lm.server is not recommended for production as it only implements basic security checks.
  warnings.warn(
2025-07-01 20:34:14,048 - INFO - Starting httpd at 0.0.0.0 on port 8080...

❯ curl localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
     "messages": [{"role": "user", "content": "What is the answer to life?"}],
     "temperature": 0.8
   }'
{"id": "chatcmpl-7aff951b-0c46-49e6-ba36-4299fd859990", "system_fingerprint": "0.25.3-0.26.1-macOS-15.5-arm64-arm-64bit-applegpu_g15s", "object": "chat.completion", "model": "default_model", "created": 1751416485, "choices": [{"index": 0, "logprobs": {"token_logprobs": [0.0, -0.03125, -0.15625, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, -0.125, -0.03125, -1.0, -0.5625, -0.046875, 0.0, -0.03125, -0.78125, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, -0.421875, -0.03125, -0.09375, -0.109375, -0.046875, -0.390625, -0.328125, 0.0, 0.0, -0.03125, 0.0, -0.265625, 0.0, -0.03125, -1.421875, 0.0, -0.875, -0.1875, -0.703125, -0.75, -1.75, -2.125, -1.859375, -2.25, -0.15625, -0.21875, -0.390625, -0.296875, -0.28125, -0.59375, -0.03125, -0.1875, -0.015625, -0.421875, 0.0, -0.109375, -0.609375, -0.1875, -0.15625, 0.0, 0.0, -0.609375, -0.140625, -0.28125, -0.015625, 0.0, -0.484375, -0.03125, -0.140625, -0.828125, -0.25, 0.0, -0.046875, -0.125, -0.609375, 0.0, -0.21875, -1.109375, -0.0625, -0.53125, -0.328125, -0.765625, -0.0625, -0.96875, -0.015625, -0.875, -0.3125, -1.9375, 0.0, -0.46875, -1.046875, -0.03125, -0.890625, -1.96875, 0.0, -0.40625, -0.09375, -0.9375, 0.0, -0.484375, -2.015625, -0.015625, -1.21875, 0.0, 0.0, -0.03125, -1.15625, -0.0625, 0.0, -0.96875, -0.890625, -0.234375, -0.40625, -0.953125, -0.4375, -1.90625, -0.0625, -0.078125, -0.71875, 0.0, -0.015625, -0.03125], "top_logprobs": [], "tokens": [791, 3488, 330, 3923, 374, 279, 4320, 311, 2324, 7673, 374, 51287, 37260, 304, 31164, 27329, 6, 330, 791, 71464, 71, 25840, 596, 13002, 311, 279, 20238, 1, 4101, 11, 1405, 279, 4320, 374, 28485, 7162, 2728, 439, 330, 2983, 1210, 4452, 11, 420, 374, 264, 44682, 2077, 323, 8967, 369, 95471, 2515, 382, 644, 264, 810, 41903, 477, 67739, 2317, 11, 279, 330, 9399, 311, 2324, 1, 649, 13592, 19407, 11911, 389, 3927, 21463, 11, 13042, 36576, 11, 323, 4443, 11704, 13, 4427, 1253, 1505, 7438, 1555, 13901, 11, 3885, 1555, 12135, 11, 28697, 11, 477, 29820, 311, 279, 23460, 315, 3885, 13, 55106, 11, 279, 330, 9399, 311, 2324, 1, 374, 264, 17693, 4443, 3488, 430, 1855, 1732, 1253, 14532, 304, 872, 1866, 5016, 1648, 13, 100265]}, "finish_reason": "stop", "message": {"role": "assistant", "content": "The question \"What is the answer to life?\" is famously posed in Douglas Adams' \"The Hitchhiker's Guide to the Galaxy\" series, where the answer is humorously given as \"42.\" However, this is a fictional response and meant for comedic effect.\n\nIn a more philosophical or existential context, the \"answer to life\" can vary greatly depending on individual beliefs, cultural backgrounds, and personal experiences. Some may find meaning through religion, others through relationships, creativity, or contributing to the welfare of others. Ultimately, the \"answer to life\" is a deeply personal question that each person may interpret in their own unique way.", "tool_calls": []}}], "usage": {"prompt_tokens": 14, "completion_tokens": 129, "total_tokens": 143}}%

Summary by Sourcery

Enable the new MLX runtime option using the mlx_lm package for optimized Apple Silicon inference, integrate it into run/serve/benchmark commands, enforce platform and container constraints, update CI and documentation, and add comprehensive tests.

New Features:

  • Support MLX runtime for interactive chat and one-shot text generation
  • Support MLX runtime for model serving via the CLI
  • Support MLX runtime for model benchmarking workflows

Enhancements:

  • Enforce the MLX runtime only on macOS with Apple Silicon, automatically switching to no-container mode
  • Refactor MLX subprocess argument builder and integrate server-client model in the run command
  • Refactor model store type hint to Optional and add helper methods for MLX argument construction

CI:

  • Install mlx-lm package in CI workflows

Documentation:

  • Document MLX runtime requirements and usage in README, man pages, and configuration files

Tests:

  • Add unit tests for MLX runtime argument building, validation, exec args generation, and unsupported features
  • Add system BATS tests covering MLX CLI commands, platform checks, and runtime behaviors

sourcery-ai bot (Contributor) commented Jul 2, 2025

Reviewer's Guide

Implements a new MLX runtime by wiring the Python mlx_lm package into CLI parsing, Model execution, and chat flows—enforcing Apple Silicon macOS constraints, building subprocess arguments, updating documentation and CI, and covering behavior with unit and system tests.

Sequence diagram for MLX runtime model execution (run/serve/chat)

sequenceDiagram
    actor User
    participant CLI as CLI
    participant Model as Model
    participant MLX as mlx_lm subprocess
    participant Chat as Chat

    User->>CLI: ramalama --runtime=mlx run ...
    CLI->>Model: parse args, detect runtime=mlx
    Model->>Model: validate_args (Apple Silicon/macOS check)
    Model->>Model: build MLX exec args
    Model->>MLX: launch mlx_lm server subprocess
    Model->>Chat: connect to server (with retries)
    Chat->>MLX: send chat/completions request
    MLX-->>Chat: stream response
    Chat-->>User: display response
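The run flow in this diagram reduces to a fork: the child process starts the mlx_lm server while the parent waits for the port to accept connections and then drives the chat client. A minimal sketch of that shape in Python, borrowing the helper names from the class diagram below (the wrapper function and details are illustrative, not the PR's verbatim code):

import os

def run_mlx(model, args):
    pid = os.fork()
    if pid == 0:
        # Child process: start the mlx_lm server.
        model._start_server(args)
        return 0
    # Parent process: poll until the server is ready, then run the chat client against it.
    return model._connect_and_chat(args, pid)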

Entity relationship diagram for runtime configuration options

erDiagram
    CONFIG ||--o{ CLI : provides
    CLI ||--o{ Model : passes args
    Model ||--o{ MLX : launches subprocess
    CLI {
        string runtime
        bool container
    }
    Model {
        string model_store_path
        ModelStore model_store
    }
    MLX {
        string model_path
        list exec_args
    }

Class diagram for Model MLX runtime integration

classDiagram
    class Model {
        - _model_store: ModelStore | None
        + run(args)
        + _start_server(args)
        + _connect_and_chat(args, server_pid)
        + _handle_container_chat(args, server_pid)
        + _handle_mlx_chat(args)
        + _is_server_ready(port)
        + _cleanup_server_process(pid)
        + _build_mlx_exec_args(subcommand, model_path, args, extra)
        + mlx_serve(args, exec_model_path)
        + build_exec_args_serve(args, exec_model_path, ...)
        + validate_args(args)
    }
    class Chat {
        + _make_request_data()
        + _req()
        + kills()
    }
    Model "1" -- "1" Chat : uses
    Model <|-- ModelMLX : MLX runtime logic
    class ModelMLX {
        <<MLX runtime logic>>
    }

File-Level Changes

Change · Details · Files
Expose and enforce MLX runtime option in CLI
  • Add 'mlx' to the runtime choices
  • Automatically disable container mode for mlx
  • Log a warning when --container is overridden for mlx
ramalama/cli.py
Extend Model runtime logic to support MLX
  • Refactor Model.run to fork a child server and parent client flow
  • Add validate_args check for Apple Silicon macOS when runtime is mlx
  • Introduce helper methods (_start_server, _connect_and_chat, _handle_mlx_chat) for server-client orchestration
  • Implement _build_mlx_exec_args and mlx_serve to assemble mlx_lm CLI arguments (see the sketch after this entry)
  • Refactor model_store type hint to Optional
ramalama/model.py
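A rough sketch of the argument builder referenced above; the subcommand names and the --temp/--max-tokens flags come from the debug logs in this PR, while the function signature and defaults are illustrative assumptions:

def build_mlx_exec_args(subcommand, model_path, args, extra=None):
    # Only the mlx_lm subcommands exercised by this PR are accepted.
    if subcommand not in ("generate", "chat", "server"):
        raise ValueError(f"Invalid subcommand: {subcommand}")
    exec_args = [
        "python", "-m", "mlx_lm", subcommand,
        "--model", model_path,
        "--temp", str(getattr(args, "temp", 0.8)),
        "--max-tokens", str(getattr(args, "max_tokens", 2048)),
    ]
    if extra:
        # e.g. ["--port", "8080", "--host", "0.0.0.0"] when serving
        exec_args.extend(extra)
    return exec_args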
Disable perplexity and benchmarking under MLX
  • Raise NotImplementedError in build_exec_args_perplexity and build_exec_args_bench when runtime is mlx
ramalama/model.py
Adapt chat module for MLX-specific behavior
  • Omit the model field in JSON payload for mlx chats (sketched below)
  • Use an extended timeout during initial MLX connection
  • Suppress server kill during initial connection phase
ramalama/chat.py
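For the first bullet, the debug logs later in this thread show the mlx request body carrying only "stream" and "messages", so the adaptation amounts to roughly the following (function and variable names are assumptions):

def make_request_data(args, messages, model_name):
    data = {"stream": True, "messages": messages}
    if getattr(args, "runtime", None) != "mlx":
        # mlx_lm.server reports its own model name ("default_model"), so the field is omitted for mlx.
        data["model"] = model_name
    return data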
Install and configure MLX dependency in CI
  • Add a CI step to pip install mlx-lm
.github/workflows/ci.yml
Document MLX runtime usage and requirements
  • Add MLX prerequisites, examples, and links in README and man pages
  • Update toml and conf docs to list 'mlx' as a valid runtime
  • Include mlx-lm in project credits
README.md
docs/ramalama-serve.1.md
docs/ramalama.conf
docs/ramalama.conf.5.md
Add unit and system tests for MLX workflows
  • Unit tests for mlx argument building, validation, and error cases
  • System BATS tests covering mlx CLI commands and device checks
  • Add helper function to detect Apple Silicon in bash tests
test/unit/test_model.py
test/system/helpers.bash
test/system/080-mlx.bats


gemini-code-assist bot (Contributor) left a comment

Summary of Changes

Hello @kush-gupt, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the application's capabilities by introducing native support for Apple's MLX framework, specifically optimizing LLM inference on Apple Silicon. It seamlessly integrates the mlx_lm package into the core command-line interface and execution paths, ensuring efficient and platform-aware model operation. The changes also include necessary documentation updates and comprehensive testing to guarantee reliability and proper usage.

Highlights

  • New MLX Runtime Support: Introduced a new --runtime mlx option, enabling optimized inference for Large Language Models (LLMs) on Apple Silicon Macs by integrating with the mlx_lm Python package.
  • Automatic Container Handling: The MLX runtime automatically enforces --nocontainer mode, as mlx_lm runs directly on the host system. If --container is explicitly provided, a warning is issued, but --nocontainer is still applied.
  • Direct mlx_lm Integration: The run command now directly invokes mlx_lm for both interactive chat and one-shot generation, while the serve command utilizes mlx_lm server for REST API endpoints. This bypasses the previous daemonized service approach for MLX.
  • Platform and Constraint Validation: Added robust validation to ensure the MLX runtime is exclusively used on macOS systems with Apple Silicon hardware. Attempts to use MLX on unsupported platforms will result in an error (see the sketch after this list).
  • Comprehensive Documentation and Testing: Updated the README, man pages (ramalama-serve.1.md, ramalama.conf.5.md), and configuration file (ramalama.conf) to reflect the new MLX runtime. A new, extensive system test suite (080-mlx.bats) and unit tests (test_model.py) have been added to cover MLX functionality, argument parsing, and constraint enforcement.
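The platform check highlighted above boils down to a Darwin/arm64 test; the PR's unit tests mock exactly those two values. A minimal sketch (the function name is an assumption, not the PR's API):

import platform

def assert_mlx_supported():
    # MLX requires macOS (Darwin) running on Apple Silicon (arm64).
    if platform.system() != "Darwin" or platform.machine() != "arm64":
        raise ValueError("The MLX runtime is only supported on macOS with Apple Silicon")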

sourcery-ai bot (Contributor) previously requested changes on Jul 2, 2025

Hey @kush-gupt - I've reviewed your changes - here's some feedback:

Blocking issues:

  • Detected subprocess function 'Popen' without a static string. If this data can be controlled by a malicious actor, it may be an instance of command injection. Audit the use of this call to ensure it is not controllable by an external resource. You may consider using 'shlex.escape()'. (link)

General comments:

  • Avoid using shlex.quote(model_path) when building the exec_args list—passing the raw path will prevent embedding literal quotes into the subprocess arguments.
  • There’s duplicated logic between mlx_run_chat and _mlx_generate_response; consider extracting shared subprocess setup and I/O handling into a common helper to reduce maintenance overhead.
  • Add an upfront check in CLI initialization or validate_args to verify that the mlx_lm package is installed and error early with a clear message if it’s missing, instead of relying on a failing subprocess call.
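One possible shape for the upfront check suggested in the last bullet (illustrative only; the PR may implement this differently, if at all):

import importlib.util

def ensure_mlx_lm_available():
    # Fail fast with a clear message instead of letting a later subprocess call fail.
    if importlib.util.find_spec("mlx_lm") is None:
        raise ImportError("The mlx runtime requires the 'mlx_lm' package; install it with 'pip install mlx-lm'")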
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- Avoid using `shlex.quote(model_path)` when building the `exec_args` list—passing the raw path will prevent embedding literal quotes into the subprocess arguments.
- There’s duplicated logic between `mlx_run_chat` and `_mlx_generate_response`; consider extracting shared subprocess setup and I/O handling into a common helper to reduce maintenance overhead.
- Add an upfront check in CLI initialization or `validate_args` to verify that the `mlx_lm` package is installed and error early with a clear message if it’s missing, instead of relying on a failing subprocess call.

## Individual Comments

### Comment 1
<location> `ramalama/model.py:437` </location>
<code_context>
+        if subcommand not in allowed_subcommands:
+            raise ValueError(f"Invalid subcommand: {subcommand}")
+
+        exec_args = [
+            "python",
+            "-m",
+            "mlx_lm",
+            subcommand,
+            "--model",
+            shlex.quote(model_path),
+        ]
+
</code_context>

<issue_to_address>
Quoting model_path with shlex.quote in exec_args may cause issues.

Passing model_path through shlex.quote adds literal quotes, which can break argument parsing. Pass model_path as a raw string in exec_args instead.
</issue_to_address>
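To illustrate: with a list argv there is no shell to strip the quoting, so shlex.quote makes the quotes part of the argument itself. A small hypothetical example:

import shlex
import subprocess

path = "/tmp/my model.gguf"  # hypothetical path containing a space
subprocess.run(["ls", path])               # correct: passed verbatim as a single argument
subprocess.run(["ls", shlex.quote(path)])  # wrong: the argument becomes '/tmp/my model.gguf', quotes included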

### Comment 2
<location> `ramalama/model.py:488` </location>
<code_context>
+
+        # For interactive mode, we need to capture the response
+        # Consume stderr concurrently to avoid deadlocks if its buffer fills.
+        def _drain_stderr(proc):
+            if proc.stderr is None:
+                return
+            while True:
+                chunk = proc.stderr.read(1024)
+                if chunk == "" and proc.poll() is not None:
+                    break
+                if chunk and "EOFError" not in chunk:
+                    sys.stderr.write(chunk)
+                    sys.stderr.flush()
</code_context>

<issue_to_address>
Filtering out 'EOFError' from stderr may hide other relevant errors.

Filtering out 'EOFError' may suppress important context in stderr. Please reconsider whether this exclusion is necessary, or ensure all relevant errors are still visible for debugging.
</issue_to_address>

<suggested_fix>
<<<<<<< SEARCH
                if chunk and "EOFError" not in chunk:
                    sys.stderr.write(chunk)
                    sys.stderr.flush()
=======
                if chunk:
                    sys.stderr.write(chunk)
                    sys.stderr.flush()
>>>>>>> REPLACE

</suggested_fix>

### Comment 3
<location> `test/system/080-mlx.bats:161` </location>
<code_context>
+    is "$output" ".*python.*-m.*mlx_lm.*chat.*" "should use MLX chat command"
+}
+
+@test "ramalama --runtime=mlx rejects --name option" {
+    skip_if_not_apple_silicon
+    skip_if_no_mlx
+    
+    # --name requires container mode, which MLX doesn't support
+    run_ramalama 1 --runtime=mlx run --name test ${MODEL}
+    is "$output" ".*--nocontainer.*--name.*conflict.*" "should show conflict error"
+}
+
</code_context>

<issue_to_address>
Edge case: Add a test for MLX runtime with both --container and --nocontainer flags.

Consider adding a test that provides both --container and --nocontainer flags together to verify that the CLI handles this conflict properly and returns a clear error message.
</issue_to_address>

<suggested_fix>
<<<<<<< SEARCH
+@test "ramalama --runtime=mlx rejects --name option" {
+    skip_if_not_apple_silicon
+    skip_if_no_mlx
+    
+    # --name requires container mode, which MLX doesn't support
+    run_ramalama 1 --runtime=mlx run --name test ${MODEL}
+    is "$output" ".*--nocontainer.*--name.*conflict.*" "should show conflict error"
+}
=======
+@test "ramalama --runtime=mlx rejects --name option" {
+    skip_if_not_apple_silicon
+    skip_if_no_mlx
+    
+    # --name requires container mode, which MLX doesn't support
+    run_ramalama 1 --runtime=mlx run --name test ${MODEL}
+    is "$output" ".*--nocontainer.*--name.*conflict.*" "should show conflict error"
+}
+
+@test "ramalama --runtime=mlx rejects both --container and --nocontainer flags" {
+    skip_if_not_apple_silicon
+    skip_if_no_mlx
+
+    # Providing both --container and --nocontainer should result in a conflict error
+    run_ramalama 1 --runtime=mlx run --container --nocontainer ${MODEL}
+    is "$output" ".*(--container.*--nocontainer|--nocontainer.*--container).*conflict.*" "should show conflict error for both flags"
+}
>>>>>>> REPLACE

</suggested_fix>

## Security Issues

### Issue 1
<location> `ramalama/model.py:501` </location>

<issue_to_address>
**security (opengrep-rules.python.lang.security.audit.dangerous-subprocess-use-audit):** Detected subprocess function 'Popen' without a static string. If this data can be controlled by a malicious actor, it may be an instance of command injection. Audit the use of this call to ensure it is not controllable by an external resource. You may consider using 'shlex.escape()'.

*Source: opengrep*
</issue_to_address>


gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request successfully integrates MLX runtime support for Apple Silicon, enhancing performance across various ramalama commands. The changes are well-structured, covering documentation, CLI, core logic, and testing. Feedback includes correcting a misleading error message, adding a missing newline in a test file, and improving code clarity for a specific mlx_lm behavior.

kush-gupt force-pushed the feat/mlx branch 2 times, most recently from bee40f5 to 3f4b9b5 (July 2, 2025 01:35)
ramalama/chat.py Outdated
@@ -69,7 +69,11 @@ def __init__(self, args):
         self.request_in_process = False
         self.prompt = args.prefix
 
-        self.url = f"{args.url}/chat/completions"
+        # MLX server uses /v1/chat/completions endpoint
+        if getattr(args, "runtime", None) == "mlx":
Member:

What I don't understand here is that the URL should include /v1/ for both runtimes; ramalama chat adds v1 by default:

$ ramalama chat -h | grep -A1 "the url"
  --url URL             the url to send requests to (default:
                        http://127.0.0.1:8080/v1)

kush-gupt (Contributor, Author):

Ah, this was an artifact I had from a previous bug, I can take it out

kush-gupt (Contributor, Author):

And that bug was the server implementation lol!

llama.cpp url does not seem to have v1 in it from what I see:

Default runtime: http://127.0.0.1:8080/chat/completions
MLX runtime: http://127.0.0.1:8080/v1/chat/completions
llama.cpp runtime: http://127.0.0.1:8080/chat/completions

ericcurtin (Member) commented Jul 2, 2025:

I think we should use:

mlx_lm.server

for everything, like we use:

llama-server

for everything llama.cpp based these days.

I worry about vibe coding sometimes; it feels like there are at least 200 more lines of code here than necessary (although I expect switching to mlx_lm.server will eliminate many of them, so blaming vibe coding is probably wrong 😄). This code could be simpler.

ericcurtin (Member):

We get the unified "ramalama chat/run" experience if we use the server as well. We don't have to document "if it's mlx it's like this, if it's llama.cpp it's like this", etc.

ericcurtin (Member):

I can't wait to play around with this for a few minutes and do some mlx vs llama.cpp performance comparisons once it's in, though.

kush-gupt (Contributor, Author):

I worry about vibe coding sometimes; it feels like there are at least 200 more lines of code here than necessary (although I expect switching to mlx_lm.server will eliminate many of them, so blaming vibe coding is probably wrong 😄). This code could be simpler.

You should have seen it before, lol!

I think on one hand yes, I could have made a smaller implementation that performed relatively well. But that would've taken me (a still slightly rusty coder) a lot more time than Claude did. It also created unit and bats tests, something I may have (potentially) omitted in the past at times :)

kush-gupt (Contributor, Author):

I think we should use:

mlx_lm.server

for everything, like we use:

llama-server

for everything llama.cpp based these days.

I actually did have this as my first draft, but I couldn't find a way to get streaming to work.

I was torn between consistency with llama.cpp and user experience: would run without streaming be worth the consistency?

ericcurtin (Member):

I think we should use:
mlx_lm.server
for everything, like we use:
llama-server
for everything llama.cpp based these days.

I actually did have this as my first draft, but I couldn't find a way to get streaming to work.

I was torn between consistency with llama.cpp and user experience: would run without streaming be worth the consistency?

Can we go without streaming for now, and open an issue on the mlx GitHub with streaming as a feature request? If mlx adds that feature we can then use it.

I want to consolidate on OpenAI API usage; it just makes maintenance easier.

kush-gupt (Contributor, Author):

New examples from client/server MLX:

❯ ramalama --debug --runtime mlx run hf://mlx-community/Unsloth-Phi-4-4bit "What is the answer to life?"
2025-07-02 11:58:58 - DEBUG - run_cmd: npu-smi info
2025-07-02 11:58:58 - DEBUG - Working directory: None
2025-07-02 11:58:58 - DEBUG - Ignore stderr: False
2025-07-02 11:58:58 - DEBUG - Ignore all: False
2025-07-02 11:58:58 - DEBUG - run_cmd: mthreads-gmi
2025-07-02 11:58:58 - DEBUG - Working directory: None
2025-07-02 11:58:58 - DEBUG - Ignore stderr: False
2025-07-02 11:58:58 - DEBUG - Ignore all: False
2025-07-02 11:58:58 - DEBUG - run_cmd: podman inspect quay.io/ramalama/ramalama:0.10
2025-07-02 11:58:58 - DEBUG - Working directory: None
2025-07-02 11:58:58 - DEBUG - Ignore stderr: False
2025-07-02 11:58:58 - DEBUG - Ignore all: True
2025-07-02 11:58:58 - DEBUG - Command finished with return code: 0
2025-07-02 11:58:58 - DEBUG - Checking if 8080 is available
MLX server not ready, waiting... (attempt 1/10)
2025-07-02 11:58:58 - DEBUG - Checking if 8080 is available
2025-07-02 11:58:58 - DEBUG - run_cmd: npu-smi info
2025-07-02 11:58:58 - DEBUG - Working directory: None
2025-07-02 11:58:58 - DEBUG - Ignore stderr: False
2025-07-02 11:58:58 - DEBUG - Ignore all: False
2025-07-02 11:58:58 - DEBUG - run_cmd: mthreads-gmi
2025-07-02 11:58:58 - DEBUG - Working directory: None
2025-07-02 11:58:58 - DEBUG - Ignore stderr: False
2025-07-02 11:58:58 - DEBUG - Ignore all: False
2025-07-02 11:58:58 - DEBUG - exec_cmd: python -m mlx_lm server --model /Users/kugupta/.local/share/ramalama/store/huggingface/mlx-community/Unsloth-Phi-4-4bit/snapshots/sha256-d1f444c20e6479cbf8cf6bc057ee3248fa10cd42 --temp 0.8 --max-tokens 2048 --port 8080 --host 0.0.0.0
/Users/kugupta/Documents/work/ramalama/.venv/lib/python3.12/site-packages/mlx_lm/server.py:924: UserWarning: mlx_lm.server is not recommended for production as it only implements basic security checks.
  warnings.warn(
2025-07-02 11:58:59,721 - INFO - Starting httpd at 0.0.0.0 on port 8080...
2025-07-02 11:59:02 - DEBUG - Request: URL=http://127.0.0.1:8080/v1/chat/completions, Data=b'{"stream": true, "messages": [{"role": "user", "content": "What is the answer to life?"}]}', Headers={'Content-Type': 'application/json'}
127.0.0.1 - - [02/Jul/2025 11:59:02] "POST /v1/chat/completions HTTP/1.1" 200 -
The question "What is the answer to life?" is famously addressed in Douglas Adams' science fiction series "The Hitchhiker's Guide to the Galaxy," where the answer is humorously given as "42." However, beyond this comedic context, the question of life's meaning is a profound philosophical and existential inquiry. Different cultures, religions, philosophies, and individuals offer various interpretations, often focusing on fulfillment, purpose, happiness, love, knowledge, or spiritual enlightenment. Ultimately, the answer may vary greatly depending on one's personal beliefs and values.
❯ ramalama --debug --runtime mlx run hf://mlx-community/Unsloth-Phi-4-4bit
2025-07-02 11:59:42 - DEBUG - run_cmd: npu-smi info
2025-07-02 11:59:42 - DEBUG - Working directory: None
2025-07-02 11:59:42 - DEBUG - Ignore stderr: False
2025-07-02 11:59:42 - DEBUG - Ignore all: False
2025-07-02 11:59:42 - DEBUG - run_cmd: mthreads-gmi
2025-07-02 11:59:42 - DEBUG - Working directory: None
2025-07-02 11:59:42 - DEBUG - Ignore stderr: False
2025-07-02 11:59:42 - DEBUG - Ignore all: False
2025-07-02 11:59:42 - DEBUG - run_cmd: podman inspect quay.io/ramalama/ramalama:0.10
2025-07-02 11:59:42 - DEBUG - Working directory: None
2025-07-02 11:59:42 - DEBUG - Ignore stderr: False
2025-07-02 11:59:42 - DEBUG - Ignore all: True
2025-07-02 11:59:42 - DEBUG - Command finished with return code: 0
2025-07-02 11:59:42 - DEBUG - Checking if 8080 is available
MLX server not ready, waiting... (attempt 1/10)
2025-07-02 11:59:42 - DEBUG - Checking if 8080 is available
2025-07-02 11:59:42 - DEBUG - run_cmd: npu-smi info
2025-07-02 11:59:42 - DEBUG - Working directory: None
2025-07-02 11:59:42 - DEBUG - Ignore stderr: False
2025-07-02 11:59:42 - DEBUG - Ignore all: False
2025-07-02 11:59:42 - DEBUG - run_cmd: mthreads-gmi
2025-07-02 11:59:42 - DEBUG - Working directory: None
2025-07-02 11:59:42 - DEBUG - Ignore stderr: False
2025-07-02 11:59:42 - DEBUG - Ignore all: False
2025-07-02 11:59:42 - DEBUG - exec_cmd: python -m mlx_lm server --model /Users/kugupta/.local/share/ramalama/store/huggingface/mlx-community/Unsloth-Phi-4-4bit/snapshots/sha256-d1f444c20e6479cbf8cf6bc057ee3248fa10cd42 --temp 0.8 --max-tokens 2048 --port 8080 --host 0.0.0.0
/Users/kugupta/Documents/work/ramalama/.venv/lib/python3.12/site-packages/mlx_lm/server.py:924: UserWarning: mlx_lm.server is not recommended for production as it only implements basic security checks.
  warnings.warn(
2025-07-02 11:59:43,976 - INFO - Starting httpd at 0.0.0.0 on port 8080...
🍏 > What is the answer to life?
2025-07-02 11:59:52 - DEBUG - Request: URL=http://127.0.0.1:8080/v1/chat/completions, Data=b'{"stream": true, "messages": [{"role": "user", "content": "What is the answer to life?"}]}', Headers={'Content-Type': 'application/json'}
127.0.0.1 - - [02/Jul/2025 11:59:52] "POST /v1/chat/completions HTTP/1.1" 200 -
The question "What is the answer to life?" is famously posed in Douglas Adams' "The Hitchhiker's Guide to the Galaxy" series, where the ultimate answer to the "ultimate question of life, the universe, and everything" is humorously given as 42. However, in a more philosophical or existential context, the answer to life is subjective and varies greatly among different cultures, religions, philosophies, and individuals.

Some may find meaning in their relationships, work, personal growth, or contribution to society, while others may seek fulfillment through spirituality, art, or the pursuit of knowledge. Ultimately, the answer to life depends on your personal beliefs, values, and experiences.
🍏 > /bye
❯ ramalama --debug --runtime mlx serve hf://mlx-community/Unsloth-Phi-4-4bit
2025-07-02 12:00:25 - DEBUG - run_cmd: npu-smi info
2025-07-02 12:00:25 - DEBUG - Working directory: None
2025-07-02 12:00:25 - DEBUG - Ignore stderr: False
2025-07-02 12:00:25 - DEBUG - Ignore all: False
2025-07-02 12:00:25 - DEBUG - run_cmd: mthreads-gmi
2025-07-02 12:00:25 - DEBUG - Working directory: None
2025-07-02 12:00:25 - DEBUG - Ignore stderr: False
2025-07-02 12:00:25 - DEBUG - Ignore all: False
2025-07-02 12:00:25 - DEBUG - run_cmd: podman inspect quay.io/ramalama/ramalama:0.10
2025-07-02 12:00:25 - DEBUG - Working directory: None
2025-07-02 12:00:25 - DEBUG - Ignore stderr: False
2025-07-02 12:00:25 - DEBUG - Ignore all: True
2025-07-02 12:00:25 - DEBUG - Command finished with return code: 0
2025-07-02 12:00:25 - DEBUG - Checking if 8080 is available
2025-07-02 12:00:25 - DEBUG - Checking if 8090 is available
2025-07-02 12:00:25 - DEBUG - run_cmd: npu-smi info
2025-07-02 12:00:25 - DEBUG - Working directory: None
2025-07-02 12:00:25 - DEBUG - Ignore stderr: False
2025-07-02 12:00:25 - DEBUG - Ignore all: False
2025-07-02 12:00:25 - DEBUG - run_cmd: mthreads-gmi
2025-07-02 12:00:25 - DEBUG - Working directory: None
2025-07-02 12:00:25 - DEBUG - Ignore stderr: False
2025-07-02 12:00:25 - DEBUG - Ignore all: False
2025-07-02 12:00:25 - DEBUG - exec_cmd: python -m mlx_lm server --model /Users/kugupta/.local/share/ramalama/store/huggingface/mlx-community/Unsloth-Phi-4-4bit/snapshots/sha256-d1f444c20e6479cbf8cf6bc057ee3248fa10cd42 --temp 0.8 --max-tokens 2048 --port 8090 --host 0.0.0.0
/Users/kugupta/Documents/work/ramalama/.venv/lib/python3.12/site-packages/mlx_lm/server.py:924: UserWarning: mlx_lm.server is not recommended for production as it only implements basic security checks.
  warnings.warn(
2025-07-02 12:00:26,977 - INFO - Starting httpd at 0.0.0.0 on port 8090...

❯ curl localhost:8090/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
     "messages": [{"role": "user", "content": "What is the answer to life?"}],
     "temperature": 0.8
   }'
{"id": "chatcmpl-b8804092-4457-4ce8-b444-453cf847c260", "system_fingerprint": "0.25.3-0.26.1-macOS-15.5-arm64-arm-64bit-applegpu_g15s", "object": "chat.completion", "model": "default_model", "created": 1751472045, "choices": [{"index": 0, "logprobs": {"token_logprobs": [0.0, -0.03125, -0.15625, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, -0.125, -0.03125, -1.0, -0.5625, -0.046875, 0.0, -0.03125, -0.640625, 0.0, 0.0, -1.46875, -0.09375, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, -0.03125, 0.0, -0.59375, -0.046875, -0.09375, -0.1875, 0.0, -0.25, -0.546875, -0.015625, 0.0, -0.703125, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, -0.015625, 0.0, 0.0, -0.1875, 0.0, -0.3125, 0.0, 0.0, 0.0, -0.078125, -0.609375, 0.0, -0.65625, -0.09375, -1.34375, 0.0, -0.375, -3.578125, 0.0, -1.453125, -0.453125, 0.0, -0.203125, 0.0, 0.0, 0.0, 0.0, -0.015625, -0.1875, -1.5625, -0.03125, -2.078125, -0.015625, -0.921875, -0.03125, -3.515625, 0.0, 0.0, -0.34375, -0.015625, -2.09375, -0.765625, -0.15625, -0.015625, -0.125, -0.1875, -1.71875, -3.3125, -1.125, -0.28125, -2.21875, 0.0, -0.09375, -0.671875, -3.703125, -3.859375, -0.734375, -0.4375, 0.0, -0.140625, -0.421875, -1.671875, -0.390625, 0.0, 0.0, -2.296875, -1.4375, -1.359375, -0.3125, -1.765625, -0.234375, -1.34375, -1.4375, -0.5, -0.875, -0.015625, 0.0, 0.0, -0.03125, -0.359375, -1.265625, -0.03125, -0.4375, -0.203125, -0.421875, 0.0, -3.296875, 0.0, -2.84375, 0.0, -0.671875, -0.265625, -1.421875, -1.234375, -0.03125, -4.765625, 0.0, -2.234375, -0.40625, -0.203125, -1.6875, -0.0625, -0.59375, -0.1875, 0.0, -0.015625, -3.5, -1.546875, -0.203125, -0.15625, -0.296875, 0.0, -0.25, -0.28125, -1.015625, -2.765625, -0.015625, -1.4375, -0.09375, -0.203125, 0.0, -0.015625, -4.34375, 0.0, -0.328125, -0.015625, -1.453125, -0.4375, -0.375, 0.0, 0.0], "top_logprobs": [], "tokens": [791, 3488, 330, 3923, 374, 279, 4320, 311, 2324, 7673, 374, 51287, 37260, 304, 31164, 27329, 6, 8198, 17422, 4101, 11, 330, 791, 71464, 71, 25840, 596, 13002, 311, 279, 20238, 1210, 763, 279, 4101, 11, 279, 17139, 4320, 311, 2324, 11, 279, 15861, 11, 323, 4395, 374, 28485, 7162, 2728, 439, 279, 1396, 220, 2983, 13, 4452, 11, 279, 5885, 304, 279, 3446, 5296, 430, 814, 656, 539, 3604, 1440, 1148, 279, 3488, 374, 11, 39686, 279, 23965, 323, 8530, 279, 91182, 3225, 315, 9455, 264, 35044, 11, 45813, 4320, 311, 1778, 28254, 41903, 4860, 382, 30690, 11597, 2740, 323, 304, 1690, 18330, 32006, 11, 279, 330, 57865, 315, 2324, 1, 649, 387, 44122, 323, 13592, 19407, 505, 832, 3927, 477, 7829, 311, 2500, 13, 4427, 1253, 1505, 7438, 1555, 12135, 11, 53881, 11, 6677, 11, 477, 18330, 21463, 11, 1418, 369, 3885, 433, 2643, 387, 1766, 304, 279, 33436, 315, 23871, 323, 4443, 57383, 13, 55106, 11, 279, 4320, 311, 420, 3488, 649, 387, 17693, 4443, 323, 18222, 389, 832, 596, 78162, 323, 11704, 13, 100265]}, "finish_reason": "stop", "message": {"role": "assistant", "content": "The question \"What is the answer to life?\" is famously posed in Douglas Adams' science fiction series, \"The Hitchhiker's Guide to the Galaxy.\" In the series, the ultimate answer to life, the universe, and everything is humorously given as the number 42. However, the characters in the story note that they do not actually know what the question is, highlighting the complexity and perhaps the impossibility of finding a singular, definitive answer to such profound philosophical questions.\n\nPhilosophically and in many spiritual traditions, the \"meaning of life\" can be subjective and vary greatly from one individual or culture to another. 
Some may find meaning through relationships, accomplishments, knowledge, or spiritual beliefs, while for others it might be found in the pursuit of happiness and personal fulfillment. Ultimately, the answer to this question can be deeply personal and dependent on one's worldview and experiences.", "tool_calls": []}}], "usage": {"prompt_tokens": 14, "completion_tokens": 178, "total_tokens": 192}}%

kush-gupt (Contributor, Author):

@sourcery-ai dismiss

sourcery-ai bot dismissed its stale review on July 2, 2025 16:03

Automated Sourcery review dismissed.

kush-gupt (Contributor, Author):

@sourcery-ai review

kush-gupt (Contributor, Author):

/gemini review

gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request introduces MLX runtime support for Apple Silicon, which is a significant enhancement. The implementation is well-structured, with clear separation of concerns, especially the refactoring of the run method into smaller, more manageable functions. The platform validation and automatic --nocontainer mode for MLX are great additions for user experience. The new unit and system tests provide excellent coverage for the new functionality.

I have a few minor suggestions to improve maintainability by replacing magic numbers with named constants and adhering to Python's import conventions. Overall, this is a high-quality contribution.

@@ -142,6 +142,10 @@ def _req(self):
         i = 0.01
         total_time_slept = 0
         response = None
 
+        # Adjust timeout based on whether we're in initial connection phase
+        max_timeout = 30 if getattr(self.args, "initial_connection", False) else 16
gemini-code-assist bot (Contributor), severity medium:

The timeout values 30 and 16 are magic numbers. To improve readability and maintainability, they should be defined as constants with descriptive names at the module level.

For example:

# At module level
INITIAL_CONNECTION_TIMEOUT_S = 30
CHAT_RESPONSE_TIMEOUT_S = 16

Then you can use these constants here.

Comment on lines +411 to +440
def _handle_mlx_chat(self, args):
    """Handle chat for MLX runtime with connection retries."""
    args.ignore = getattr(args, "dryrun", False)
    args.initial_connection = True
    max_retries = 10

    for i in range(max_retries):
        try:
            if self._is_server_ready(args.port):
                args.initial_connection = False
                time.sleep(1)  # Give server time to stabilize
                chat.chat(args)
                break
            else:
                if args.debug:
                    print(f"MLX server not ready, waiting... (attempt {i+1}/{max_retries})", file=sys.stderr)
                time.sleep(3)
                continue

        except Exception as e:
            if i >= max_retries - 1:
                print(f"Error: Failed to connect to MLX server after {max_retries} attempts: {e}", file=sys.stderr)
                self._cleanup_server_process(args.pid2kill)
                raise e
            if args.debug:
                print(f"Connection attempt failed, retrying... (attempt {i+1}/{max_retries}): {e}", file=sys.stderr)
            time.sleep(3)

    args.initial_connection = False
    return 0
gemini-code-assist bot (Contributor), severity medium:

This function uses several magic numbers for configuration (e.g., max_retries = 10 on line 415, time.sleep(1) on line 421, time.sleep(3) on lines 427 and 437). It would be better to define these as named constants at the module level. This improves readability and makes it easier to adjust these values in the future.

For example:

# At module level
MAX_MLX_CONNECTION_RETRIES = 10
MLX_SERVER_STABILIZATION_WAIT_S = 1
MLX_CONNECTION_RETRY_WAIT_S = 3

    if not pid:
        return

    import signal
gemini-code-assist bot (Contributor), severity medium:

According to PEP 8, imports should be at the top of the file. Moving import signal to the top of ramalama/model.py improves code style consistency and can avoid re-importing the module if this function is called multiple times. Placing imports inside functions is generally discouraged unless it's for resolving circular dependencies or for optional, platform-specific imports, which is not the case here.

sourcery-ai bot (Contributor) left a comment

Hey @kush-gupt - I've reviewed your changes and they look great!

Blocking issues:

  • time.sleep() call; did you mean to leave this in? (link)
  • time.sleep() call; did you mean to leave this in? (link)
  • time.sleep() call; did you mean to leave this in? (link)
  • time.sleep() call; did you mean to leave this in? (link)
Prompt for AI Agents
Please address the comments from this code review:
## Individual Comments

### Comment 1
<location> `ramalama/chat.py:168` </location>
<code_context>

-        print(f"\rError: could not connect to: {self.url}", file=sys.stderr)
-        self.kills()
+        # Only show error and kill if not in initial connection phase
+        if not getattr(self.args, "initial_connection", False):
+            print(f"\rError: could not connect to: {self.url}", file=sys.stderr)
+            self.kills()
</code_context>

<issue_to_address>
Suppressing error output during initial connection may hide useful diagnostics.

Consider logging connection errors at the debug level during the initial connection phase to aid troubleshooting without adding unnecessary noise.
</issue_to_address>

<suggested_fix>
<<<<<<< SEARCH
        # Only show error and kill if not in initial connection phase
        if not getattr(self.args, "initial_connection", False):
            print(f"\rError: could not connect to: {self.url}", file=sys.stderr)
            self.kills()
=======
        # Only show error and kill if not in initial connection phase
        if not getattr(self.args, "initial_connection", False):
            print(f"\rError: could not connect to: {self.url}", file=sys.stderr)
            self.kills()
        else:
            import logging
            logging.debug(f"Could not connect to: {self.url}")
>>>>>>> REPLACE

</suggested_fix>

### Comment 2
<location> `test/unit/test_model.py:213` </location>
<code_context>
+    def test_mlx_run_uses_server_client_model(
</code_context>

<issue_to_address>
Missing test for server process failure or connection timeout in MLX run.

Add a test case for server startup failure or connection timeout to verify proper error handling and resource cleanup.
</issue_to_address>

## Security Issues

### Issue 1
<location> `ramalama/model.py:421` </location>

<issue_to_address>
**security (opengrep-rules.python.lang.best-practice.arbitrary-sleep):** time.sleep() call; did you mean to leave this in?

*Source: opengrep*
</issue_to_address>

### Issue 2
<location> `ramalama/model.py:427` </location>

<issue_to_address>
**security (opengrep-rules.python.lang.best-practice.arbitrary-sleep):** time.sleep() call; did you mean to leave this in?

*Source: opengrep*
</issue_to_address>

### Issue 3
<location> `ramalama/model.py:437` </location>

<issue_to_address>
**security (opengrep-rules.python.lang.best-practice.arbitrary-sleep):** time.sleep() call; did you mean to leave this in?

*Source: opengrep*
</issue_to_address>

### Issue 4
<location> `ramalama/model.py:461` </location>

<issue_to_address>
**security (opengrep-rules.python.lang.best-practice.arbitrary-sleep):** time.sleep() call; did you mean to leave this in?

*Source: opengrep*
</issue_to_address>


Comment on lines +213 to +222
def test_mlx_run_uses_server_client_model(
    self, mock_socket_class, mock_chat, mock_fork, mock_compute_port, mock_machine, mock_system
):
    """Test that MLX runtime uses server-client model in run method"""
    mock_system.return_value = "Darwin"
    mock_machine.return_value = "arm64"
    mock_compute_port.return_value = "8080"
    mock_fork.return_value = 123  # Parent process

    # Mock socket to simulate successful connection (server ready)
sourcery-ai bot (Contributor):

suggestion (testing): Missing test for server process failure or connection timeout in MLX run.

Add a test case for server startup failure or connection timeout to verify proper error handling and resource cleanup.

Comment on lines 361 to 368
         if pid == 0:
-            args.host = CONFIG.host
-            args.generate = ""
-            args.detach = True
-            self.serve(args, True)
+            # Child process - start the server
+            self._start_server(args)
             return 0
         else:
-            args.url = f"http://127.0.0.1:{args.port}"
-            args.pid2kill = ""
-            if args.container:
-                _, status = os.waitpid(pid, 0)
-                if status != 0:
-                    raise ValueError(f"Failed to serve model {self.model_name}, for ramalama run command")
-            args.ignore = args.dryrun
-            for i in range(0, 6):
-                try:
-                    chat.chat(args)
-                    break
-                except Exception as e:
-                    if i > 5:
-                        raise e
-                    time.sleep(1)
+            # Parent process - connect to server and start chat
+            return self._connect_and_chat(args, pid)
sourcery-ai bot (Contributor):

issue (code-quality): We've found these issues:

kush-gupt force-pushed the feat/mlx branch 2 times, most recently from 611ba70 to 6a15d36 (July 2, 2025 16:40)
kush-gupt (Contributor, Author) commented Jul 2, 2025:

Could someone fill me in on why setting --host 0.0.0.0 in a container is a bad thing for llama.cpp serve?

rhatdan (Member) commented Jul 3, 2025:

Could someone fill me in on why setting --host 0.0.0.0 in a container is a bad thing for llama.cpp serve?

It is not, as far as I know. My understanding was that this was the default, so it just becomes repetitive; if the server needs it, then add it.

ericcurtin (Member):

By the way, I am glad we are getting this in because mlx is compatible with safetensors.

It means macOS users can test safetensors files, like what would be used with vLLM on RHEL AI or OpenShift AI.

Up to now we had no way of doing this because llama.cpp is only compatible with GGUF.

Would be cool to see people running these on RamaLama on macOS for example:

https://huggingface.co/RedHatAI

rhatdan (Member) commented Jul 3, 2025:

Needs a rebase.

kush-gupt (Contributor, Author):

Needs a rebase.

I believe I did this correctly now!

Signed-off-by: Kush Gupta <[email protected]>
rhatdan (Member) commented Jul 4, 2025:

LGTM

rhatdan merged commit c9f9f69 into containers:main on Jul 4, 2025 (9 of 10 checks passed)