
There's a change that we want that avoids using software rasterizers #1495


Merged
rhatdan merged 1 commit into main from dont-use-llvmpipe on Jun 10, 2025

Conversation

ericcurtin (Member) commented Jun 10, 2025

When Vulkan is built in, it avoids using llvmpipe and falls back to ggml-cpu.

Summary by Sourcery

Bump the pinned commit SHAs for whisper.cpp and llama.cpp in the build script to incorporate upstream changes that avoid using llvmpipe by falling back to ggml-cpu when Vulkan is enabled.

Enhancements:

  • Inherit upstream Vulkan support changes to avoid software rasterizers and default to ggml-cpu

Build:

  • Update whisper.cpp pinned SHA to 51c6961c7b64b406833f4b6a4a20e67142f69225
  • Update llama.cpp pinned SHA to 97340b4c9924be86704dbf155e97c8319849ee19

It avoids using llvmpipe when Vulkan is built in and falls back to
ggml-cpu.

Signed-off-by: Eric Curtin <[email protected]>
sourcery-ai bot (Contributor) commented Jun 10, 2025

Reviewer's Guide

This PR updates the pinned commit SHAs for whisper.cpp and llama.cpp in the build script to pull in changes that disable llvmpipe when Vulkan is available and fall back to the ggml CPU implementation.

Sequence Diagram for Backend Selection Logic with llvmpipe Fallback

sequenceDiagram
    participant App as Application
    participant Vulkan as Vulkan Runtime
    participant CPU_Backend as GGML_CPU_Backend

    App->>Vulkan: Query Vulkan Support
    Vulkan-->>App: Vulkan Support Details (isAvailable, isLlvmpipeIfAvailable)

    alt isAvailable
        App->>App: Vulkan is available
        alt isLlvmpipeIfAvailable
            App->>App: Driver is llvmpipe. Avoiding.
            App->>CPU_Backend: Initialize ggml-cpu (fallback)
            CPU_Backend-->>App: ggml-cpu Initialized
        else Not llvmpipe
            App->>App: Driver is hardware. Using Vulkan.
            App->>Vulkan: Initialize Vulkan
            Vulkan-->>App: Vulkan Initialized
        end
    else Not isAvailable
        App->>App: Vulkan not available.
        App->>CPU_Backend: Initialize ggml-cpu (fallback)
        CPU_Backend-->>App: ggml-cpu Initialized
    end
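
The logic in the diagram lives upstream in the ggml Vulkan backend, not in this repository. As a quick way to see which branch a given machine would take, here is a hypothetical shell check (vulkaninfo comes from vulkan-tools; none of this is code from the PR):

#!/bin/bash
# Hypothetical check, not code from this PR: ask the Vulkan loader which
# driver is active and look for the llvmpipe software rasterizer.
if vulkaninfo --summary 2>/dev/null | grep -qi llvmpipe; then
    echo "llvmpipe detected: the updated backends would fall back to ggml-cpu"
else
    echo "hardware Vulkan driver (or no Vulkan at all): Vulkan path or CPU fallback applies"
fi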

File-Level Changes

Change: Bump the pinned commit SHAs for whisper.cpp and llama.cpp
Details:
  • Updated whisper_cpp_sha to 51c6961c7b64b406833f4b6a4a20e67142f69225
  • Updated llama_cpp_sha to 97340b4c9924be86704dbf155e97c8319849ee19
Files:
  • container-images/scripts/build_llama_and_whisper.sh
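
For reference, the change itself amounts to two variable bumps in that script. A minimal sketch of the relevant lines, assuming the script pins each source tree to an exact commit (only the variable names, SHAs, and file path come from this PR; the repository URL and the clone/checkout usage are illustrative):

#!/bin/bash
# Sketch of container-images/scripts/build_llama_and_whisper.sh:
# each source tree is pinned to an exact upstream commit.
whisper_cpp_sha="51c6961c7b64b406833f4b6a4a20e67142f69225"
llama_cpp_sha="97340b4c9924be86704dbf155e97c8319849ee19"

# Illustrative use of a pin: clone, then check out the pinned commit.
git clone https://github.com/ggml-org/llama.cpp
(cd llama.cpp && git checkout "$llama_cpp_sha")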

Possibly linked issues

  • #0: PR avoids software rasterizer (llvmpipe) and uses CPU fallback, addressing issue with Vulkan on CPU-only systems.


rhatdan (Member) commented Jun 10, 2025

LGTM

rhatdan enabled auto-merge June 10, 2025 12:08
rhatdan disabled auto-merge June 10, 2025 12:08
rhatdan merged commit 4be8cbc into main Jun 10, 2025
9 of 16 checks passed
rhatdan (Member) commented Jun 10, 2025

Should we build a new ramalama image?

sourcery-ai bot (Contributor) left a comment

Hey @ericcurtin - I've reviewed your changes and they look great!

Here's what I looked at during the review
  • 🟢 General issues: all looks good
  • 🟢 Security: all looks good
  • 🟢 Testing: all looks good
  • 🟢 Complexity: all looks good
  • 🟢 Documentation: all looks good


ericcurtin deleted the dont-use-llvmpipe branch June 10, 2025 12:10
ericcurtin (Member, Author) commented
@rhatdan up to you; it will fix some performance/memory issues, but Monday would be fine too.
