Want to pick up support for gemma3n #1623

Merged: rhatdan merged 1 commit into main from bump-llamacpp2 on Jun 27, 2025

Conversation

@ericcurtin (Member) commented Jun 27, 2025

And the other latest and greatest llama.cpp features

Summary by Sourcery

Enhancements:

  • Bump llama.cpp commit SHA to f667f1e6244e1f420512fa66692b7096ff17f366 to include gemma3n support and other recent updates

And the other latest and greatest llama.cpp features

Signed-off-by: Eric Curtin <[email protected]>
@sourcery-ai bot (Contributor) commented Jun 27, 2025

Reviewer's Guide

This PR updates the llama.cpp build script to point at the latest upstream commit, thereby pulling in support for gemma3n and other recent llama.cpp enhancements.

File-Level Changes

Change: Bump llama.cpp commit SHA in build script
Details: Replaced old SHA value with new commit hash
Files: container-images/scripts/build_llama_and_whisper.sh

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.


@ericcurtin (Member, Author) commented:

As requested by @giuseppe

@sourcery-ai bot (Contributor) left a comment:

Hey @ericcurtin - I've reviewed your changes - here's some feedback:

  • The PR title mentions adding gemma3n support, but the diff only updates the llama.cpp SHA—please include the actual gemma3n integration changes or adjust the description.
  • Instead of hardcoding a raw SHA, consider pinning to a llama.cpp release tag or ensure the script explicitly checks out the specified commit after cloning for clarity (a sketch of the checkout approach follows below).
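To illustrate the second point above, here is a minimal sketch of an explicit post-clone checkout. The clone URL and the surrounding steps are assumptions made for illustration, not the actual contents of build_llama_and_whisper.sh; only the function name and the llama_cpp_sha variable appear in the diff later in this thread.

  clone_and_build_llama_cpp() {
    local llama_cpp_sha="f667f1e6244e1f420512fa66692b7096ff17f366"
    # Clone upstream, then pin the working tree to this exact commit so the
    # build is reproducible even if the default branch moves (illustrative only).
    git clone https://github.com/ggml-org/llama.cpp
    git -C llama.cpp checkout "$llama_cpp_sha"
    # ...configure and build as the script already does...
  }

Checking out a named release tag instead would read more clearly, but a raw SHA can point at any upstream commit, including ones that have not been tagged yet.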

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

@giuseppe (Member) left a comment:

LGTM

@gemini-code-assist bot (Contributor) left a comment:

Summary of Changes

Hello @ericcurtin, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request updates the llama.cpp submodule to a newer version, presumably to add support for the gemma3n model and incorporate the latest features from the llama.cpp project. The change involves updating the SHA hash in the build script.

Highlights

  • llama.cpp Update: The llama.cpp submodule is updated to commit f667f1e6244e1f420512fa66692b7096ff17f366.
  • gemma3n Support: The PR description indicates that this update brings support for gemma3n.
  • Latest llama.cpp Features: The PR description also mentions incorporating the latest llama.cpp features.

Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

Feature              | Command             | Description
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize your Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist bot (Contributor) left a comment:

Code Review

The pull request updates the llama.cpp commit SHA in the build_llama_and_whisper.sh script to include gemma3n support and other recent updates. It's important to ensure that the new commit is stable and doesn't introduce any regressions.

container-images/scripts/build_llama_and_whisper.sh
@@ -268,7 +268,7 @@ clone_and_build_whisper_cpp() {
 }
 
 clone_and_build_llama_cpp() {
-  local llama_cpp_sha="97340b4c9924be86704dbf155e97c8319849ee19"
+  local llama_cpp_sha="f667f1e6244e1f420512fa66692b7096ff17f366"
@gemini-code-assist bot (Contributor) commented on this change (severity: high):

Updating the llama.cpp SHA is a good practice to keep the project up-to-date with the latest features and bug fixes. However, it's crucial to verify that this specific commit has been thoroughly tested and is stable, especially considering the potential for breaking changes in external dependencies. Ensure that the integration tests pass with this new SHA.
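One concrete way to act on this, shown only as an illustrative check and not as part of this PR or its CI, is a quick local smoke build of the pinned commit:

  # Illustrative local check (not from this PR): build the pinned llama.cpp
  # commit on its own to confirm it configures and compiles.
  git clone https://github.com/ggml-org/llama.cpp
  cd llama.cpp
  git checkout f667f1e6244e1f420512fa66692b7096ff17f366
  cmake -B build
  cmake --build build --config Release -j

This only catches outright build breakage; functional gemma3n coverage still depends on the project's integration tests passing with the new SHA.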

@rhatdan (Member) commented Jun 27, 2025

LGTM

@rhatdan merged commit ca9885a into main Jun 27, 2025
10 of 16 checks passed
@ericcurtin deleted the bump-llamacpp2 branch June 27, 2025 16:51