Bugfix for chat #1679
Reviewer's Guide

Reintroduced the conditional `model` parameter in the request payload using `getattr`, replacing the previous unconditional uppercase `MODEL` assignment.

Class diagram for the updated request data construction in the chat module:

```mermaid
classDiagram
    class Chat {
        - args
        - conversation_history
        + _make_request_data()
    }
    Chat : _make_request_data() now conditionally adds 'model' to data if self.args.model exists
```
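The restored behavior can be sketched in plain Python (a minimal, hypothetical reduction of `Chat._make_request_data`; the real method in `ramalama/chat.py` builds more of the payload, and the model name here is illustrative):

```python
from types import SimpleNamespace

class Chat:
    """Minimal sketch: only the state relevant to this fix."""

    def __init__(self, args, conversation_history=None):
        self.args = args
        self.conversation_history = conversation_history or []

    def _make_request_data(self):
        data = {"messages": self.conversation_history}
        # The fix: add "model" only when the user actually supplied one
        # (lowercase attribute), instead of unconditionally reading
        # self.args.MODEL.
        if getattr(self.args, "model", False):
            data["model"] = self.args.model
        return data

# With a model set the payload carries it; without one, the key is omitted.
print(Chat(SimpleNamespace(model="tinyllama"))._make_request_data())
print(Chat(SimpleNamespace())._make_request_data())
```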
File-Level Changes
@kush-gupt PTAL
Summary of Changes
Hello @ericcurtin, I'm Gemini Code Assist[^1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request addresses a bug in the chat functionality by reintroducing a necessary conditional check for the `model` parameter. This ensures that the `model` is correctly included in API requests only when it's available, preventing potential errors or misconfigurations that could arise from its previous unconditional assignment, and also corrects a case sensitivity issue in the attribute name.
Highlights
- **Bugfix for Model Parameter Handling**: Reinstated the conditional assignment of the `model` parameter within the `_make_request_data` function in `ramalama/chat.py`. This restores logic ensuring the `model` is only added to the request data if `self.args.model` exists, addressing a recent regression in which the check was removed.
- **Attribute Name Correction**: Corrected the attribute name used for retrieving the model from `self.args.MODEL` (uppercase) to `self.args.model` (lowercase), aligning with standard Python conventions and fixing a potential typo.
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either `/gemini <command>` or `@gemini-code-assist <command>`. Below is a summary of the supported commands.
| Feature | Command | Description |
|---|---|---|
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a `.gemini/` folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.
You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Footnotes

[^1]: Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.
Hey @ericcurtin - I've reviewed your changes and they look great!
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.
ramalama/chat.py
Outdated
```diff
 if not (hasattr(self.args, 'runtime') and self.args.runtime == "mlx"):
-    data["model"] = self.args.MODEL
+    if getattr(self.args, "model", False):
+        data["model"] = self.args.model
```
suggestion (code-quality): Merge nested if conditions (`merge-nested-ifs`)
```diff
-if not (hasattr(self.args, 'runtime') and self.args.runtime == "mlx"):
-    data["model"] = self.args.MODEL
-    if getattr(self.args, "model", False):
-        data["model"] = self.args.model
+if not (hasattr(self.args, 'runtime') and self.args.runtime == "mlx") and getattr(self.args, "model", False):
+    data["model"] = self.args.model
```
Explanation

Too much nesting can make code difficult to understand, and this is especially true in Python, where there are no brackets to help out with the delineation of different nesting levels.

Reading deeply nested code is confusing, since you have to keep track of which conditions relate to which levels. We therefore strive to reduce nesting where possible, and the situation where two `if` conditions can be combined using `and` is an easy win.
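As a quick illustration of the suggestion (a sketch using a hypothetical `args` namespace, not the real ramalama CLI arguments), the nested and merged forms select exactly the same cases:

```python
from types import SimpleNamespace

def nested(args):
    """The nested form as written in the PR."""
    data = {}
    if not (hasattr(args, 'runtime') and args.runtime == "mlx"):
        if getattr(args, "model", False):
            data["model"] = args.model
    return data

def merged(args):
    """Sourcery's merged single-condition form."""
    data = {}
    if not (hasattr(args, 'runtime') and args.runtime == "mlx") and getattr(args, "model", False):
        data["model"] = args.model
    return data

cases = [
    SimpleNamespace(),                                  # neither attribute set
    SimpleNamespace(model="tinyllama"),                 # model only
    SimpleNamespace(runtime="mlx", model="tinyllama"),  # mlx suppresses "model"
    SimpleNamespace(runtime="llama.cpp", model=None),   # falsy model skipped
]
for args in cases:
    assert nested(args) == merged(args)
```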
Code Review
The pull request reintroduces a previously removed block of code that sets the 'model' attribute in the request data for the chat API. A suggestion has been made to add a check to ensure the model attribute has a valid value before assignment.
ramalama/chat.py
Outdated
```python
if getattr(self.args, "model", False):
    data["model"] = self.args.model
```
Consider checking that `self.args.model` is not None or empty before assigning it to `data["model"]`, to prevent potential errors if the value is missing or invalid.
```diff
-if getattr(self.args, "model", False):
-    data["model"] = self.args.model
+if getattr(self.args, "model", False) and self.args.model:
+    data["model"] = self.args.model
```
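For what it's worth, the truthiness test in `getattr(self.args, "model", False)` already rejects a missing, `None`, or empty value, so the extra `and self.args.model` would be redundant; a quick sketch (hypothetical `args` namespace, illustrative model name):

```python
from types import SimpleNamespace

def include_model(args):
    # Missing attribute -> False; None and "" are falsy too, so the single
    # getattr truth test already covers "missing or invalid" values.
    return bool(getattr(args, "model", False))

assert include_model(SimpleNamespace()) is False            # attribute missing
assert include_model(SimpleNamespace(model=None)) is False  # None is falsy
assert include_model(SimpleNamespace(model="")) is False    # empty string is falsy
assert include_model(SimpleNamespace(model="tinyllama")) is True
```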
Force-pushed from `2dc055b` to `862afa8`
This fixes the model error indeed! For chatting with mlx, a wrong model name won't show any errors on the chat side; it just retries the API request indefinitely, and the only indication is in the mlx server process logs.

An agnostic model name that works is "default_model", as nothing else worked with my continue integration.
On second thought, I think we should just remove this bit and go back to the original code prior to mlx: if the user passes some bogus model, that's user error; we shouldn't mask that.
This was recently removed:

```diff
+if getattr(self.args, "model", False):
+    data["model"] = self.args.model
```

it is required

Signed-off-by: Eric Curtin <[email protected]>
Force-pushed from `862afa8` to `3da38bc`
LGTM
This was recently removed:

```python
if getattr(self.args, "model", False):
    data["model"] = self.args.model
```

it is required

Summary by Sourcery

Restore conditional addition of the `model` field in chat request payloads based on the presence of the `--model` argument.

Bug Fixes:

- Include `model` in the request data only when `self.args.model` is set instead of always using `self.args.MODEL`.

Enhancements:

- Use `getattr(self.args, "model", False)` for safe attribute access.