There's a change that we want that avoids using software rasterizers #1495
Conversation
It avoids using llvmpipe when Vulkan is built in and falls back to ggml-cpu. Signed-off-by: Eric Curtin <[email protected]>
Reviewer's Guide

This PR updates the pinned commit SHAs for whisper.cpp and llama.cpp in the build script to pull in changes that disable llvmpipe when Vulkan is available and fall back to the ggml CPU implementation.

Sequence diagram for backend selection logic with llvmpipe fallback:

```mermaid
sequenceDiagram
    participant App as Application
    participant Vulkan as Vulkan Runtime
    participant CPU_Backend as GGML_CPU_Backend
    App->>Vulkan: Query Vulkan Support
    Vulkan-->>App: Vulkan Support Details (isAvailable, isLlvmpipeIfAvailable)
    alt isAvailable
        App->>App: Vulkan is available
        alt isLlvmpipeIfAvailable
            App->>App: Driver is llvmpipe. Avoiding.
            App->>CPU_Backend: Initialize ggml-cpu (fallback)
            CPU_Backend-->>App: ggml-cpu Initialized
        else Not llvmpipe
            App->>App: Driver is hardware. Using Vulkan.
            App->>Vulkan: Initialize Vulkan
            Vulkan-->>App: Vulkan Initialized
        end
    else Not isAvailable
        App->>App: Vulkan not available.
        App->>CPU_Backend: Initialize ggml-cpu (fallback)
        CPU_Backend-->>App: ggml-cpu Initialized
    end
```
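The decision logic in the diagram can be sketched in C++ as a small pure function. This is an illustrative sketch only: the type and function names (`VulkanSupport`, `select_backend`, `Backend`) are hypothetical and do not correspond to the actual ggml/llama.cpp API, which handles device enumeration quite differently.

```cpp
#include <cassert>
#include <string>

// Hypothetical types; not the real ggml API.
enum class Backend { Vulkan, GgmlCpu };

struct VulkanSupport {
    bool available;          // Vulkan runtime present and usable
    std::string driver_name; // e.g. "llvmpipe" for the software rasterizer
};

// Prefer Vulkan only when it is backed by a hardware driver; otherwise
// fall back to the ggml CPU backend, as the PR's upstream changes do.
Backend select_backend(const VulkanSupport &vk) {
    if (vk.available && vk.driver_name != "llvmpipe") {
        return Backend::Vulkan;   // hardware driver: use Vulkan
    }
    // llvmpipe (software rasterizer) or no Vulkan at all: use ggml-cpu
    return Backend::GgmlCpu;
}
```

The point of the check is that llvmpipe is slower and more memory-hungry than running ggml-cpu directly, so a software-rasterized "Vulkan" device is treated the same as no Vulkan device at all.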
LGTM

Should we build a new ramalama image?
Hey @ericcurtin - I've reviewed your changes and they look great!
Here's what I looked at during the review
- 🟢 General issues: all looks good
- 🟢 Security: all looks good
- 🟢 Testing: all looks good
- 🟢 Complexity: all looks good
- 🟢 Documentation: all looks good
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.
@rhatdan up to you; it will fix some performance/memory issues. Monday would be fine also.
It avoids using llvmpipe when Vulkan is built in and falls back to ggml-cpu.
Summary by Sourcery
Bump the pinned commit SHAs for whisper.cpp and llama.cpp in the build script to incorporate upstream changes that avoid using llvmpipe by falling back to ggml-cpu when Vulkan is enabled.