
Conversation

@tattn (Owner) commented on Oct 2, 2025

#74

  • Support tool calls for FoundationModelsClient
    • LLMSession of FoundationModels does not support tool calls yet.

This pull request refactors the tool call handling logic for LLM clients by moving the default implementations of generateToolCalls and resumeStream into a protocol extension, and updates the FoundationModelsClient to support tool integration and error handling. The changes reduce code duplication, improve extensibility, and add better error reporting for unavailable models.

Key changes:

Refactoring and Code Reuse:

  • Moved the default implementations of generateToolCalls and resumeStream from individual clients (LlamaClient, MLXClient) into a protocol extension on LLMClient, reducing code duplication and centralizing tool call logic; a sketch follows below.
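A minimal sketch of what such a protocol extension might look like. Only the method names generateToolCalls and resumeStream come from this PR; the ToolCall type, textStream(from:), parseToolCall(_:), and all signatures are assumptions made for illustration, not the repository's actual API:

```swift
import Foundation

// Hypothetical tool-call representation; the repo's actual type may differ.
public struct ToolCall {
    public let name: String
    public let argumentsJSON: String
}

public protocol LLMClient {
    /// Each concrete client (LlamaClient, MLXClient, ...) provides raw streaming.
    func textStream(from prompt: String) -> AsyncThrowingStream<String, Error>
    /// Client-specific parsing of tool-call markup in model output.
    func parseToolCall(_ chunk: String) -> ToolCall?
}

public extension LLMClient {
    /// Shared default: run the model once and collect any tool calls it emits.
    func generateToolCalls(from prompt: String) async throws -> [ToolCall] {
        var calls: [ToolCall] = []
        for try await chunk in textStream(from: prompt) {
            if let call = parseToolCall(chunk) {
                calls.append(call)
            }
        }
        return calls
    }

    /// Shared default: append tool results to the transcript and keep streaming.
    func resumeStream(withToolResults results: [String],
                      after prompt: String) -> AsyncThrowingStream<String, Error> {
        textStream(from: prompt + "\n" + results.joined(separator: "\n"))
    }
}
```

With the defaults living in the extension, each client only keeps its model-specific streaming and parsing, which is the deduplication the PR describes.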

Tool Integration:

  • Updated FoundationModelsClient to accept a list of tools via its initializer and static factory, allowing tool support to be configured at client creation (sketched below).
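Concretely, the change might look like the following sketch. FoundationModelsClient and the tools parameter come from the PR; the factory name make and the direct use of Apple's LanguageModelSession(tools:) initializer are assumptions about how the repository wires this up:

```swift
import FoundationModels

@available(iOS 26.0, macOS 26.0, *)
public struct FoundationModelsClient {
    let session: LanguageModelSession

    /// Tools are fixed at client creation and handed to the session,
    /// which invokes them automatically during generation.
    public init(tools: [any Tool] = []) {
        self.session = LanguageModelSession(tools: tools)
    }

    /// Static factory mirroring the initializer (hypothetical name).
    public static func make(tools: [any Tool] = []) -> FoundationModelsClient {
        FoundationModelsClient(tools: tools)
    }
}
```

Threading tools through the initializer keeps the session immutable after creation, which matches the "configured at client creation" wording above.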

Streaming and Error Handling:

  • Added a responseStream(from:) method to FoundationModelsClient for streaming both text and tool call responses, and improved error handling by introducing a FoundationModelsClientError for unavailable models (sketched below).
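Continuing the sketch above, responseStream(from:) plus the unavailable-model error might look roughly like this. The enum's associated value and the element shape of FoundationModels' response stream are assumptions about the current SDK, not confirmed by the PR:

```swift
import FoundationModels

@available(iOS 26.0, macOS 26.0, *)
public enum FoundationModelsClientError: Error {
    /// Thrown when the on-device model can't be used (unsupported device,
    /// Apple Intelligence disabled, model assets not downloaded, ...).
    case modelUnavailable(SystemLanguageModel.Availability.UnavailableReason)
}

@available(iOS 26.0, macOS 26.0, *)
extension FoundationModelsClient {
    public func responseStream(from prompt: String) -> AsyncThrowingStream<String, Error> {
        AsyncThrowingStream<String, Error> { continuation in
            let task = Task {
                // Report a dedicated error up front instead of failing opaquely mid-stream.
                if case .unavailable(let reason) = SystemLanguageModel.default.availability {
                    continuation.finish(throwing: FoundationModelsClientError.modelUnavailable(reason))
                    return
                }
                do {
                    // Each element is a cumulative snapshot of the response so far;
                    // tools passed at init are invoked by the session as needed.
                    for try await snapshot in session.streamResponse(to: prompt) {
                        continuation.yield(snapshot.content)
                    }
                    continuation.finish()
                } catch {
                    continuation.finish(throwing: error)
                }
            }
            continuation.onTermination = { _ in task.cancel() }
        }
    }
}
```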

These changes make the codebase cleaner, easier to extend, and more robust in handling tool-based interactions and model availability.

@gemini-code-assist (Contributor) commented

Summary of Changes

Hello @tattn, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the FoundationModelsClient by integrating robust tool call support and improving its streaming capabilities. It achieves this by refactoring common tool-calling logic into a shared LLMClient protocol extension, thereby reducing code duplication across various LLM clients. Additionally, it introduces better error handling for unavailable models within the FoundationModelsClient.

Highlights

  • Refactoring Tool Call Logic: The default implementations of generateToolCalls and resumeStream have been moved from individual clients (like LlamaClient and MLXClient) into a protocol extension for LLMClient, centralizing tool call logic and reducing code duplication.
  • FoundationModelsClient Tool Integration: The FoundationModelsClient now supports tool integration, allowing a list of Tool objects to be passed during its initialization and via its static factory method, enabling tool support to be configured at client creation (see the usage example after this list).
  • Enhanced Streaming for FoundationModelsClient: A new responseStream(from:) method has been added to FoundationModelsClient to enable streaming of both text and tool call responses, providing a unified streaming interface.
  • Improved Error Handling: A new FoundationModelsClientError enum has been introduced to specifically handle cases where a model is unavailable, providing clearer and more specific error reporting.
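As a usage illustration of the tool-integration highlight: a hypothetical tool conforming to FoundationModels' Tool protocol, passed to the client at creation. WeatherTool and the make factory name are invented for this example; only the idea of handing Tool objects to the client comes from the PR:

```swift
import FoundationModels

@available(iOS 26.0, macOS 26.0, *)
struct WeatherTool: Tool {
    let name = "getWeather"
    let description = "Returns the current weather for a city."

    @Generable
    struct Arguments {
        @Guide(description: "City name, e.g. \"Tokyo\".")
        var city: String
    }

    // Note: earlier FoundationModels betas required returning ToolOutput
    // here rather than a String.
    func call(arguments: Arguments) async throws -> String {
        // A real tool would call a weather API here.
        "It is sunny in \(arguments.city)."
    }
}

// let client = FoundationModelsClient.make(tools: [WeatherTool()])
```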

@gemini-code-assist bot left a comment

Code Review

This pull request is a good step towards unifying tool call logic across different LLM clients. It successfully refactors duplicated code from LlamaClient and MLXClient into a default implementation in an LLMClient extension, and adds the necessary plumbing for tool support and error handling in FoundationModelsClient.

Repository owner deleted a comment from gemini-code-assist bot on Oct 2, 2025
@tattn force-pushed the feature/support-tool-foundationmodels-client branch from 8dea1c5 to 2ce22c6 on October 2, 2025 at 13:43
@tattn merged commit 939f27d into main on Oct 2, 2025
8 checks passed
@tattn deleted the feature/support-tool-foundationmodels-client branch on October 2, 2025 at 14:31