add llm__achat to all (but one) supported providers in llm__chat #369
Conversation
""" WalkthroughAsynchronous Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant LLMApiProvider
    participant LLMClient
    User->>LLMApiProvider: await llm__achat(messages, ...)
    LLMApiProvider->>LLMClient: await acompletion(messages, ...)
    LLMClient-->>LLMApiProvider: ChatDataClass (async)
    LLMApiProvider-->>User: ChatDataClass (async)
```
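A stub-based sketch of the call flow above; `StubProvider` stands in for any real provider class and is not part of the PR:

```python
import asyncio


class StubProvider:
    """Stand-in for a provider implementing the new async interface."""

    async def llm__achat(self, messages, model=None, **kwargs):
        # Real implementations await self.llm_client.acompletion(...) here
        # and return a ChatDataClass.
        return {"choices": [{"message": {"content": "Hello back!"}}]}


async def main():
    provider = StubProvider()
    response = await provider.llm__achat(
        messages=[{"role": "user", "content": "Hello!"}],
        model="some-model",
    )
    print(response)


asyncio.run(main())
```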
Pull Request Overview
This PR introduces a new asynchronous chat method `llm__achat` alongside the existing `llm__chat`, updating the core interface, all supported provider implementations (except one), and the associated example output fixtures.

- Define `llm__achat` as an abstract method in the LLM interface (a minimal sketch follows this list).
- Implement `llm__achat` in each provider class to call `llm_client.acompletion`.
- Add or update example JSON fixtures (`achat_output.json`) to reflect the new endpoint.
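For orientation, a minimal sketch of the abstract declaration; the real signature in llm_interface.py takes ~34 OpenAI-compatible parameters (per the linter findings below), so the abbreviated parameter list here is illustrative only:

```python
from abc import abstractmethod
from typing import List, Optional


class LlmInterface:
    @abstractmethod
    async def llm__achat(
        self,
        messages: Optional[List] = None,
        model: Optional[str] = None,
        # ... the real interface declares many more optional
        # OpenAI-compatible parameters (temperature, tools, user, ...)
        **kwargs,
    ):
        """Async counterpart of llm__chat; implementations are expected
        to await self.llm_client.acompletion(...) and return a ChatDataClass."""
        raise NotImplementedError
```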
Reviewed Changes
Copilot reviewed 28 out of 28 changed files in this pull request and generated 3 comments.
| File | Description |
| --- | --- |
| edenai_apis/features/llm/llm_interface.py | Add abstract `llm__achat` signature |
| edenai_apis/apis/xai/xai_llm_api.py | Implement `llm__achat` method |
| edenai_apis/apis/xai/outputs/llm/achat_output.json | Add example fixture for achat |
| edenai_apis/apis/together_ai/together_ai_api.py | Implement `llm__achat` method |
| edenai_apis/apis/together_ai/outputs/llm/achat_output.json | Add example fixture for achat |
| edenai_apis/apis/replicate/replicate_api.py | Implement `llm__achat` method |
| edenai_apis/apis/replicate/outputs/llm/achat_output.json | Add example fixture for achat |
| edenai_apis/apis/openai/openai_llm_api.py | Implement `llm__achat` method |
| edenai_apis/apis/openai/outputs/llm/achat_output.json | Add example fixture for achat |
| edenai_apis/apis/mistral/mistral_api.py | Implement `llm__achat` method |
| edenai_apis/apis/mistral/outputs/llm/achat_output.json | Add example fixture for achat |
| edenai_apis/apis/minimax/minimax_api.py | Implement `llm__achat` method |
| edenai_apis/apis/minimax/outputs/llm/achat_output.json | Add example fixture for achat |
| edenai_apis/apis/microsoft/microsoft_llm_api.py | Implement `llm__achat` method |
| edenai_apis/apis/microsoft/outputs/llm/achat_output.json | Add example fixture for achat |
| edenai_apis/apis/meta/meta_api.py | Implement `llm__achat` method |
| edenai_apis/apis/meta/outputs/llm/achat_output.json | Add example fixture for achat |
| edenai_apis/apis/iointelligence/iointelligence_api.py | Implement `llm__achat` method |
| edenai_apis/apis/iointelligence/outputs/llm/achat_output.json | Add example fixture for achat |
| edenai_apis/apis/groq/groq_api.py | Implement `llm__achat` method |
| edenai_apis/apis/groq/outputs/llm/achat_output.json | Add example fixture for achat |
| edenai_apis/apis/google/outputs/llm/achat_output.json | Update example fixture for achat |
| edenai_apis/apis/deepseek/deepseek_api.py | Implement `llm__achat` method |
| edenai_apis/apis/deepseek/outputs/llm/achat_output.json | Add example fixture for achat |
| edenai_apis/apis/anthropic/anthropic_api.py | Implement `llm__achat` method |
| edenai_apis/apis/anthropic/outputs/llm/achat_output.json | Add example fixture for achat |
| edenai_apis/apis/amazon/amazon_llm_api.py | Implement `llm__achat` method |
| edenai_apis/apis/amazon/outputs/llm/achat_output.json | Add example fixture for achat |
Comments suppressed due to low confidence (3)
edenai_apis/features/llm/llm_interface.py:120

- The docstring for `llm__achat` references parameters like `chatbot_global_action` and `top_k` that do not exist in the signature; please update it to accurately describe the new parameters.
edenai_apis/features/llm/llm_interface.py:76

- New abstract method `llm__achat` is added without accompanying tests; please add unit or integration tests to verify implementations call `llm_client.acompletion` correctly.
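One possible shape for such a test, assuming pytest with the pytest-asyncio plugin; the provider class name `OpenaiLLMApi` and the construction bypass via `__new__` are assumptions for illustration, not code from the PR:

```python
from unittest.mock import AsyncMock

import pytest


@pytest.mark.asyncio
async def test_llm__achat_awaits_acompletion():
    from edenai_apis.apis.openai.openai_llm_api import OpenaiLLMApi  # assumed name

    provider = OpenaiLLMApi.__new__(OpenaiLLMApi)  # skip real credentials setup
    provider.llm_client = AsyncMock()
    provider.llm_client.acompletion.return_value = {"choices": []}

    messages = [{"role": "user", "content": "hi"}]
    result = await provider.llm__achat(messages=messages, model="some-model")

    # The async variant must await acompletion and forward the messages.
    provider.llm_client.acompletion.assert_awaited_once()
    assert provider.llm_client.acompletion.await_args.kwargs["messages"] == messages
    assert result == {"choices": []}
```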
edenai_apis/apis/google/outputs/llm/achat_output.json:12
- [nitpick] The example content for Google output omits the apostrophe after "seein" unlike other fixtures; consider restoring consistency or confirming the change was intentional.
"content": "Arrr, matey! What ye be seein in this here image is a grand pathway, made of wooden planks, weavin' its way through a lush and green landscape. The verdant grass sways in the gentle breeze, and the sky above be a brilliant blue, decorated with fluffy white clouds. Ye can spot trees and bushes on either side, makin' it a perfect setting for a stroll amongst nature. A peaceful place for a pirate at heart, aye!",
```python
            extra_headers=extra_headers,
            functions=functions,
            function_call=function_call,
            base_url=self.base_url,
```
In `llm__achat` you pass `base_url=self.base_url` instead of the `base_url` parameter; this ignores the caller's provided URL and should be changed to the local `base_url` argument.
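The one-line fix, in diff form:

```diff
-            base_url=self.base_url,
+            base_url=base_url,
```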
```diff
@@ -1,15 +1,15 @@
 {
   "id": "chatcmpl-B71qqu4Y7m1ZVuF5YGqxz7LXzgf0Y",
   "created": 1741015112,
-  "model": "gpt-4o-mini-2024-07-18",
+  "model": "model",
```
The fixture uses a placeholder "model" instead of an actual model name like other providers; this inconsistency may break snapshot tests.
"model": "model", | |
"model": "gpt-3.5-turbo", |
```diff
@@ -88,3 +88,84 @@ def llm__chat(
         **kwargs,
     )
     return response
+
+    async def llm__achat(
```
[nitpick] The `llm__achat` method signature is duplicated across many provider classes; consider extracting a shared base or mixin to reduce code repetition and ensure consistency.
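One possible shape for such a mixin, sketched under the assumption that every provider exposes an `llm_client` with `completion`/`acompletion`; the class and attribute names are illustrative, not from the PR:

```python
from typing import Any, List, Optional


class LLMChatMixin:
    """Shared forwarding logic for providers that expose an llm_client."""

    llm_client: Any  # set by each provider's __init__

    def llm__chat(self, messages: Optional[List] = None, **kwargs) -> Any:
        return self.llm_client.completion(messages=messages or [], **kwargs)

    async def llm__achat(self, messages: Optional[List] = None, **kwargs) -> Any:
        # One async implementation instead of a copy per provider.
        return await self.llm_client.acompletion(messages=messages or [], **kwargs)
```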
Actionable comments posted: 19
♻️ Duplicate comments (4)
edenai_apis/apis/xai/outputs/llm/achat_output.json (1)

1-41: Same concerns as the Iointelligence fixture (model name, provider_time, duplicated content).

edenai_apis/apis/mistral/outputs/llm/achat_output.json (1)

1-41: Same concerns as the Iointelligence fixture (model name, provider_time, duplicated content).

edenai_apis/apis/together_ai/outputs/llm/achat_output.json (1)

1-41: Same concerns as the Iointelligence fixture (model name, provider_time, duplicated content).

edenai_apis/apis/meta/outputs/llm/achat_output.json (1)

1-41: Same concerns as the Iointelligence fixture (model name, provider_time, duplicated content).
🧹 Nitpick comments (10)
edenai_apis/apis/google/outputs/llm/achat_output.json (1)

12-12: Restore the missing pirate-style apostrophe

The dialect elsewhere keeps the trailing apostrophe (e.g. weavin', makin'). For consistency, consider adding it back to "seein" and use the straight ASCII `'` to avoid encoding hiccups:

```diff
- "content": "Arrr, matey! What ye be seein in this here image is a grand pathway, made of wooden planks, weavin' its way through a lush and green landscape. ...
+ "content": "Arrr, matey! What ye be seein' in this here image is a grand pathway, made of wooden planks, weavin' its way through a lush and green landscape. ...
```

edenai_apis/apis/anthropic/outputs/llm/achat_output.json (1)
19-20: Clarify `provider_time` scale to avoid downstream confusion

The sample value 3692885792 is ~117 years if interpreted as seconds since epoch, but far too small for milliseconds. Explicitly state the unit (e.g., `"provider_time_ms": 1234`) or use a realistic epoch-seconds integer to prevent consumers or tests from mis-parsing timing data.

edenai_apis/apis/microsoft/outputs/llm/achat_output.json (1)
19-20: `provider_time` magnitude looks unrealistic

A value in the billions is unlikely to be either milliseconds or seconds since epoch. Consider switching to a clearly-named, realistic field (`provider_latency_ms`) to avoid misleading benchmarks.

edenai_apis/apis/openai/outputs/llm/achat_output.json (1)
19-20: Use a plausible latency figure or annotate units

`"provider_time": 3692885792` will likely trip simple sanity checks. Either scale it to milliseconds (<10 000 for typical calls) or document the measurement unit.

edenai_apis/apis/deepseek/outputs/llm/achat_output.json (1)
19-20: Questionable `provider_time` sample

The placeholder value appears inconsistent with real-world response times. Replace with a believable latency or rename the field to make its purpose explicit.
edenai_apis/apis/minimax/outputs/llm/achat_output.json (1)
19-20: Consider revising `provider_time` placeholder

The large number does not correspond to common time units and may confuse automated parsers. Recommend using a realistic duration in milliseconds or seconds.
edenai_apis/apis/iointelligence/outputs/llm/achat_output.json (1)
19-20: `provider_time` an order of magnitude too large

3692885792 milliseconds ≈ 42 days. Either document the unit or use a realistic latency (e.g., < 10 000 ms).

edenai_apis/apis/amazon/outputs/llm/achat_output.json (2)
4-4: Consider using a more specific model identifier.

The generic "model" value could be more descriptive. Compare with other providers that use specific model names like "gpt-4o-mini-2024-07-18".

12-12: Consider using provider-specific sample content.

The content is identical to other providers' output files. Consider using Amazon Bedrock-specific sample responses to better represent the actual API behavior.
edenai_apis/apis/groq/outputs/llm/achat_output.json (1)
12-12: Consider provider-specific sample content.

The sample content is identical across all providers. Consider using Groq-specific sample responses to better represent actual API behavior.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (28)

- edenai_apis/apis/amazon/amazon_llm_api.py (1 hunks)
- edenai_apis/apis/amazon/outputs/llm/achat_output.json (1 hunks)
- edenai_apis/apis/anthropic/anthropic_api.py (1 hunks)
- edenai_apis/apis/anthropic/outputs/llm/achat_output.json (1 hunks)
- edenai_apis/apis/deepseek/deepseek_api.py (1 hunks)
- edenai_apis/apis/deepseek/outputs/llm/achat_output.json (1 hunks)
- edenai_apis/apis/google/outputs/llm/achat_output.json (1 hunks)
- edenai_apis/apis/groq/groq_api.py (1 hunks)
- edenai_apis/apis/groq/outputs/llm/achat_output.json (1 hunks)
- edenai_apis/apis/iointelligence/iointelligence_api.py (1 hunks)
- edenai_apis/apis/iointelligence/outputs/llm/achat_output.json (1 hunks)
- edenai_apis/apis/meta/meta_api.py (1 hunks)
- edenai_apis/apis/meta/outputs/llm/achat_output.json (1 hunks)
- edenai_apis/apis/microsoft/microsoft_llm_api.py (1 hunks)
- edenai_apis/apis/microsoft/outputs/llm/achat_output.json (1 hunks)
- edenai_apis/apis/minimax/minimax_api.py (1 hunks)
- edenai_apis/apis/minimax/outputs/llm/achat_output.json (1 hunks)
- edenai_apis/apis/mistral/mistral_api.py (1 hunks)
- edenai_apis/apis/mistral/outputs/llm/achat_output.json (1 hunks)
- edenai_apis/apis/openai/openai_llm_api.py (1 hunks)
- edenai_apis/apis/openai/outputs/llm/achat_output.json (1 hunks)
- edenai_apis/apis/replicate/outputs/llm/achat_output.json (1 hunks)
- edenai_apis/apis/replicate/replicate_api.py (1 hunks)
- edenai_apis/apis/together_ai/outputs/llm/achat_output.json (1 hunks)
- edenai_apis/apis/together_ai/together_ai_api.py (1 hunks)
- edenai_apis/apis/xai/outputs/llm/achat_output.json (1 hunks)
- edenai_apis/apis/xai/xai_llm_api.py (1 hunks)
- edenai_apis/features/llm/llm_interface.py (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (3)
edenai_apis/apis/openai/openai_llm_api.py (2)

- edenai_apis/features/llm/chat/chat_dataclass.py (1): ChatDataClass (199-214)
- edenai_apis/llmengine/llm_engine.py (1): acompletion (871-987)

edenai_apis/apis/microsoft/microsoft_llm_api.py (4)

- edenai_apis/apis/groq/groq_api.py (1): llm__achat (141-219)
- edenai_apis/apis/openai/openai_llm_api.py (1): llm__achat (90-168)
- edenai_apis/features/llm/llm_interface.py (1): llm__achat (77-139)
- edenai_apis/llmengine/llm_engine.py (1): acompletion (871-987)

edenai_apis/apis/mistral/mistral_api.py (4)

- edenai_apis/apis/groq/groq_api.py (1): llm__achat (141-219)
- edenai_apis/features/llm/llm_interface.py (1): llm__achat (77-139)
- edenai_apis/features/text/chat/chat_dataclass.py (1): ChatDataClass (24-30)
- edenai_apis/llmengine/llm_engine.py (1): acompletion (871-987)
🪛 Ruff (0.12.2)
The same finding is reported once per file — "Do not use mutable data structures for argument defaults. Replace with `None`; initialize within function." (B006):

- edenai_apis/apis/openai/openai_llm_api.py: 92-92
- edenai_apis/apis/microsoft/microsoft_llm_api.py: 93-93
- edenai_apis/apis/anthropic/anthropic_api.py: 216-216
- edenai_apis/apis/together_ai/together_ai_api.py: 146-146
- edenai_apis/apis/deepseek/deepseek_api.py: 142-142
- edenai_apis/apis/meta/meta_api.py: 209-209
- edenai_apis/features/llm/llm_interface.py: 79-79
- edenai_apis/apis/mistral/mistral_api.py: 250-250
- edenai_apis/apis/groq/groq_api.py: 143-143
- edenai_apis/apis/amazon/amazon_llm_api.py: 94-94
- edenai_apis/apis/xai/xai_llm_api.py: 94-94
- edenai_apis/apis/iointelligence/iointelligence_api.py: 120-120
- edenai_apis/apis/replicate/replicate_api.py: 327-327
- edenai_apis/apis/minimax/minimax_api.py: 228-228
🪛 Pylint (3.3.7)
Every file reports the same three findings on its `llm__achat` code — [refactor] Too many arguments (34/7) (R0913) and [refactor] Too many positional arguments (34/5) (R0917) at the first listed line, plus [error] unsupported operand type(s) for | (E1131) at the second:

- edenai_apis/apis/openai/openai_llm_api.py: 90-90 (R0913, R0917), 129-129 (E1131)
- edenai_apis/apis/microsoft/microsoft_llm_api.py: 91-91 (R0913, R0917), 130-130 (E1131)
- edenai_apis/apis/anthropic/anthropic_api.py: 214-214 (R0913, R0917), 253-253 (E1131)
- edenai_apis/apis/together_ai/together_ai_api.py: 144-144 (R0913, R0917), 183-183 (E1131)
- edenai_apis/apis/deepseek/deepseek_api.py: 140-140 (R0913, R0917), 179-179 (E1131)
- edenai_apis/apis/meta/meta_api.py: 207-207 (R0913, R0917), 246-246 (E1131)
- edenai_apis/features/llm/llm_interface.py: 77-77 (R0913, R0917), 116-116 (E1131)
- edenai_apis/apis/mistral/mistral_api.py: 248-248 (R0913, R0917), 287-287 (E1131)
- edenai_apis/apis/groq/groq_api.py: 141-141 (R0913, R0917), 179-179 (E1131)
- edenai_apis/apis/amazon/amazon_llm_api.py: 92-92 (R0913, R0917), 131-131 (E1131)
- edenai_apis/apis/xai/xai_llm_api.py: 92-92 (R0913, R0917), 131-131 (E1131)
- edenai_apis/apis/iointelligence/iointelligence_api.py: 118-118 (R0913, R0917), 157-157 (E1131)
- edenai_apis/apis/replicate/replicate_api.py: 325-325 (R0913, R0917), 364-364 (E1131)
- edenai_apis/apis/minimax/minimax_api.py: 226-226 (R0913, R0917), 265-265 (E1131)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: test
🔇 Additional comments (10)
edenai_apis/apis/google/outputs/llm/achat_output.json (1)

4-4: Double-check downstream code expectations for the `model` field

Replacing the concrete model identifier with the generic string "model" is fine for a redacted sample, but some unit/integration tests or client code may parse this field to assert provider-specific behaviour. Please verify that no existing test fixtures or schema validations depend on the former identifier.

edenai_apis/apis/replicate/outputs/llm/achat_output.json (1)

1-42: Sample output JSON structure looks correct.

The JSON follows the expected OpenAI-compatible chat completion response format with appropriate metadata, usage statistics, and timing information. Good example for async chat completion responses.
edenai_apis/apis/openai/openai_llm_api.py (2)

90-168: Async method implementation looks correct.

The async method properly mirrors the synchronous version and correctly uses `await` with `self.llm_client.acompletion()`. Parameter handling is consistent and comprehensive.

129-129: Python 3.10+ compatibility confirmed

The `str | None` union syntax is supported starting in Python 3.10, and both `requires-python = ">=3.11"` in pyproject.toml and `python = "^3.11"` in Poetry confirm the project's minimum Python version is ≥3.11. No changes needed.

edenai_apis/apis/microsoft/microsoft_llm_api.py (2)
91-170: Async method implementation is well-structured.

The async method correctly mirrors the synchronous version and properly uses `await` with `self.llm_client.acompletion()`. The implementation is consistent with other providers in the codebase.

130-130: No action needed: project requires Python >=3.11

The pyproject.toml specifies `requires-python = ">=3.11"`, so using the `str | None` union syntax is fully supported.

edenai_apis/apis/amazon/amazon_llm_api.py (1)
9-9: AmazonLLMApi is correctly initialized via AmazonApi's constructor

The `AmazonLLMApi` mix-in doesn't need its own `__init__`: it relies on `AmazonApi` (which subclasses `AmazonLLMApi`) to set up `self.llm_client` in its constructor. All calls to `self.llm_client` in `AmazonLLMApi` will work as long as you instantiate `AmazonApi`. You can safely ignore the missing-constructor warning for `AmazonLLMApi`.

Likely an incorrect or invalid review comment.
edenai_apis/apis/groq/groq_api.py (1)

141-219: Confirmed: `llm__achat` abstract method is properly defined and inherited

- edenai_apis/features/llm/llm_interface.py defines `@abstractmethod async def llm__achat(…)` with the full async signature.
- edenai_apis/apis/groq/groq_api.py declares `class GroqApi(..., LlmInterface):` and implements `async def llm__achat(…)` matching the interface.

No further changes required.
edenai_apis/apis/minimax/minimax_api.py (1)

226-304: Async method implementation looks correct.

The `llm__achat` method properly mirrors the synchronous `llm__chat` method and correctly uses `await self.llm_client.acompletion()` for async execution. The parameter forwarding and return type are consistent.

Note: The high argument count (34 parameters) is inherited from the OpenAI API design and matches the synchronous version, so this is acceptable for API compatibility.
edenai_apis/apis/xai/xai_llm_api.py (1)

92-171: Async method implementation looks correct.

The `llm__achat` method properly mirrors the synchronous `llm__chat` method and correctly uses `await self.llm_client.acompletion()` for async execution. The parameter forwarding and return type are consistent.

Note: The high argument count (34 parameters) is inherited from the OpenAI API design and matches the synchronous version, so this is acceptable for API compatibility.
"model": "gpt-4o-mini-2024-07-18", | ||
"object": "chat.completion", | ||
"system_fingerprint": "fp_7fcd609668", |
Model name doesn’t match the provider – very misleading sample

The Iointelligence sample references the OpenAI-specific model string "gpt-4o-mini-2024-07-18". A provider-agnostic or Iointelligence-specific model identifier is expected; otherwise, integrators might assume the wrong capabilities or pricing.
🤖 Prompt for AI Agents
In edenai_apis/apis/iointelligence/outputs/llm/achat_output.json around lines 4
to 6, the model name "gpt-4o-mini-2024-07-18" is specific to OpenAI and
misleading for the Iointelligence provider. Replace this with a
provider-agnostic or Iointelligence-specific model identifier that accurately
reflects the source to avoid confusion about capabilities or pricing.
```json
{
  "id": "chatcmpl-B71qqu4Y7m1ZVuF5YGqxz7LXzgf0Y",
  "created": 1741015112,
  "model": "gpt-4o-mini-2024-07-18",
  "object": "chat.completion",
  "system_fingerprint": "fp_7fcd609668",
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "content": "Arrr, matey! What ye be seein’ in this here image is a grand pathway, made of wooden planks, weavin' its way through a lush and green landscape. The verdant grass sways in the gentle breeze, and the sky above be a brilliant blue, decorated with fluffy white clouds. Ye can spot trees and bushes on either side, makin' it a perfect setting for a stroll amongst nature. A peaceful place for a pirate at heart, aye!",
        "role": "assistant",
        "tool_calls": null,
        "function_call": null
      }
    }
  ],
  "provider_time": 3692885792,
  "edenai_time": null,
  "usage": {
    "completion_tokens": 99,
    "prompt_tokens": 1170,
    "total_tokens": 1269,
    "completion_tokens_details": {
      "accepted_prediction_tokens": 0,
      "audio_tokens": 0,
      "reasoning_tokens": 0,
      "rejected_prediction_tokens": 0,
      "text_tokens": 99
    },
    "prompt_tokens_details": {
      "audio_tokens": 0,
      "cached_tokens": 1024,
      "text_tokens": null,
      "image_tokens": null
    }
  },
  "service_tier": "default",
  "cost": 0.0002349
}
```
🛠️ Refactor suggestion
Duplicate IDs & identical payload across all providers – consider unique, provider-tailored fixtures

The same `id`, prompt/usage stats, and pirate-style answer appear in every new achat_output.json. This hurts test fidelity and makes it hard to spot provider-specific parsing issues (e.g., different token accounting schemas). Generate one realistic fixture per provider (unique `id`, model, usage layout).
🤖 Prompt for AI Agents
In edenai_apis/apis/iointelligence/outputs/llm/achat_output.json lines 1 to 41,
the fixture uses a duplicate id and identical payload shared across all
providers, which reduces test fidelity and obscures provider-specific parsing
issues. To fix this, generate a unique fixture for this provider with a distinct
id, model name, and usage statistics that realistically reflect this provider's
response format and token accounting. Ensure the content and metadata are
tailored to this provider to improve test accuracy.
```json
{
  "id": "chatcmpl-B71qqu4Y7m1ZVuF5YGqxz7LXzgf0Y",
  "created": 1741015112,
  "model": "gpt-4o-mini-2024-07-18",
```
🛠️ Refactor suggestion
Verify model name accuracy for Groq API.
The model name "gpt-4o-mini-2024-07-18" appears to be OpenAI-specific. Groq typically uses different model identifiers. Consider using a Groq-appropriate model name.
🤖 Prompt for AI Agents
In edenai_apis/apis/groq/outputs/llm/achat_output.json at line 4, the model name
"gpt-4o-mini-2024-07-18" is OpenAI-specific and not appropriate for the Groq
API. Replace this model name with the correct Groq model identifier as per
Groq's documentation or API specifications to ensure compatibility.
```python
    async def llm__achat(
        self,
        messages: List = [],
```
Fix mutable default argument.

Using a mutable list as a default argument can lead to unexpected behavior. Replace with `None` and initialize within the function.

```diff
- messages: List = [],
+ messages: Optional[List] = None,
```

Then initialize within the function:

```python
if messages is None:
    messages = []
```
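For illustration, the failure mode in two calls (generic Python behavior, not code from this repository):

```python
def buggy(items=[]):      # the default list is created once, at definition time
    items.append("x")
    return items

print(buggy())  # ['x']
print(buggy())  # ['x', 'x'] – the same list object is reused across calls
```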
🤖 Prompt for AI Agents
In edenai_apis/apis/openai/openai_llm_api.py at line 92, the function uses a
mutable default argument `messages: List = []`, which can cause unexpected
behavior. Change the default value to `None` and add a check inside the function
to initialize `messages` to an empty list if it is `None`.
```python
    async def llm__achat(
        self,
        messages: List = [],
```
Fix mutable default argument.

Using a mutable list as a default argument can lead to unexpected behavior. Replace with `None` and initialize within the function.

```diff
- messages: List = [],
+ messages: Optional[List] = None,
```

Then initialize within the function:

```python
if messages is None:
    messages = []
```
🤖 Prompt for AI Agents
In edenai_apis/apis/microsoft/microsoft_llm_api.py at line 93, the function uses
a mutable default argument for the parameter 'messages' by setting it to an
empty list. This can cause unexpected behavior due to shared state across
function calls. To fix this, change the default value of 'messages' to None and
add a check inside the function to initialize 'messages' to an empty list if it
is None.
```python
    async def llm__achat(
        self,
        messages: List = [],
        model: Optional[str] = None,
        timeout: Optional[Union[float, str, httpx.Timeout]] = None,
        temperature: Optional[float] = None,
        top_p: Optional[float] = None,
        n: Optional[int] = None,
        stream: Optional[bool] = None,
        stream_options: Optional[dict] = None,
        stop: Optional[str] = None,
        stop_sequences: Optional[any] = None,
        max_tokens: Optional[int] = None,
        presence_penalty: Optional[float] = None,
        frequency_penalty: Optional[float] = None,
        logit_bias: Optional[dict] = None,
        modalities: Optional[List[Literal["text", "audio", "image"]]] = None,
        audio: Optional[Dict] = None,
        # openai v1.0+ new params
        response_format: Optional[
            Union[dict, Type[BaseModel]]
        ] = None,  # Structured outputs
        seed: Optional[int] = None,
        tools: Optional[List] = None,
        tool_choice: Optional[Union[str, dict]] = None,
        logprobs: Optional[bool] = None,
        top_logprobs: Optional[int] = None,
        parallel_tool_calls: Optional[bool] = None,
        deployment_id=None,
        extra_headers: Optional[dict] = None,
        # soon to be deprecated params by OpenAI -> This should be replaced by tools
        functions: Optional[List] = None,
        function_call: Optional[str] = None,
        base_url: Optional[str] = None,
        api_version: Optional[str] = None,
        api_key: Optional[str] = None,
        model_list: Optional[list] = None,  # pass in a list of api_base,keys, etc.
        drop_invalid_params: bool = True,  # If true, all the invalid parameters will be ignored (dropped) before sending to the model
        user: str | None = None,
        # Optional parameters
        **kwargs,
    ) -> LLMChatDataClass:
        response = await self.llm_client.acompletion(
            messages=messages,
            model=model,
            timeout=timeout,
            temperature=temperature,
            top_p=top_p,
            n=n,
            stream=stream,
            stream_options=stream_options,
            stop=stop,
            stop_sequences=stop_sequences,
            max_tokens=max_tokens,
            presence_penalty=presence_penalty,
            frequency_penalty=frequency_penalty,
            logit_bias=logit_bias,
            response_format=response_format,
            seed=seed,
            tools=tools,
            tool_choice=tool_choice,
            logprobs=logprobs,
            top_logprobs=top_logprobs,
            parallel_tool_calls=parallel_tool_calls,
            deployment_id=deployment_id,
            extra_headers=extra_headers,
            functions=functions,
            function_call=function_call,
            base_url=base_url,
            api_version=api_version,
            api_key=api_key,
            model_list=model_list,
            drop_invalid_params=drop_invalid_params,
            user=user,
            modalities=modalities,
            audio=audio,
            **kwargs,
        )
        return response
```
Fix mutable default argument and union operator syntax.

The async method implementation follows the correct pattern but has the same issues as other providers:

- Mutable default argument: `messages: List = []` can cause shared-state bugs
- Union operator syntax: `user: str | None = None` may not be compatible with older Python versions

Apply this fix:

```diff
 async def llm__achat(
     self,
-    messages: List = [],
+    messages: Optional[List] = None,
     model: Optional[str] = None,
     # ... other parameters ...
-    user: str | None = None,
+    user: Optional[str] = None,
     # Optional parameters
     **kwargs,
 ) -> LLMChatDataClass:
+    if messages is None:
+        messages = []
     response = await self.llm_client.acompletion(
         messages=messages,
         # ... rest of parameters ...
     )
     return response
```
📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```python
    async def llm__achat(
        self,
        messages: Optional[List] = None,
        model: Optional[str] = None,
        timeout: Optional[Union[float, str, httpx.Timeout]] = None,
        temperature: Optional[float] = None,
        top_p: Optional[float] = None,
        n: Optional[int] = None,
        stream: Optional[bool] = None,
        stream_options: Optional[dict] = None,
        stop: Optional[str] = None,
        stop_sequences: Optional[any] = None,
        max_tokens: Optional[int] = None,
        presence_penalty: Optional[float] = None,
        frequency_penalty: Optional[float] = None,
        logit_bias: Optional[dict] = None,
        modalities: Optional[List[Literal["text", "audio", "image"]]] = None,
        audio: Optional[Dict] = None,
        # openai v1.0+ new params
        response_format: Optional[
            Union[dict, Type[BaseModel]]
        ] = None,
        seed: Optional[int] = None,
        tools: Optional[List] = None,
        tool_choice: Optional[Union[str, dict]] = None,
        logprobs: Optional[bool] = None,
        top_logprobs: Optional[int] = None,
        parallel_tool_calls: Optional[bool] = None,
        deployment_id=None,
        extra_headers: Optional[dict] = None,
        # soon to be deprecated params by OpenAI -> This should be replaced by tools
        functions: Optional[List] = None,
        function_call: Optional[str] = None,
        base_url: Optional[str] = None,
        api_version: Optional[str] = None,
        api_key: Optional[str] = None,
        model_list: Optional[list] = None,
        drop_invalid_params: bool = True,
        user: Optional[str] = None,
        # Optional parameters
        **kwargs,
    ) -> LLMChatDataClass:
        if messages is None:
            messages = []
        response = await self.llm_client.acompletion(
            messages=messages,
            model=model,
            timeout=timeout,
            temperature=temperature,
            top_p=top_p,
            n=n,
            stream=stream,
            stream_options=stream_options,
            stop=stop,
            stop_sequences=stop_sequences,
            max_tokens=max_tokens,
            presence_penalty=presence_penalty,
            frequency_penalty=frequency_penalty,
            logit_bias=logit_bias,
            response_format=response_format,
            seed=seed,
            tools=tools,
            tool_choice=tool_choice,
            logprobs=logprobs,
            top_logprobs=top_logprobs,
            parallel_tool_calls=parallel_tool_calls,
            deployment_id=deployment_id,
            extra_headers=extra_headers,
            functions=functions,
            function_call=function_call,
            base_url=base_url,
            api_version=api_version,
            api_key=api_key,
            model_list=model_list,
            drop_invalid_params=drop_invalid_params,
            user=user,
            modalities=modalities,
            audio=audio,
            **kwargs,
        )
        return response
```
🤖 Prompt for AI Agents
In edenai_apis/apis/groq/groq_api.py around lines 141 to 219, fix the mutable
default argument by changing the default value of the messages parameter from an
empty list to None and then initializing it inside the method if needed. Also,
replace the union operator syntax for the user parameter from 'str | None' to
'Optional[str]' to ensure compatibility with older Python versions.
```diff
@@ -223,6 +223,86 @@ def llm__chat(
         )
         return response
+
+    async def llm__achat(
+        self,
+        messages: List = [],
```
Fix mutable default argument.

Using a mutable list as a default argument can lead to unexpected behavior where the same list instance is shared across function calls.

```diff
- messages: List = [],
+ messages: Optional[List] = None,
```

Then initialize within the function:

```python
if messages is None:
    messages = []
```
🤖 Prompt for AI Agents
In edenai_apis/apis/minimax/minimax_api.py at line 228, the function uses a
mutable default argument for the parameter 'messages' by setting it to an empty
list. This can cause the same list instance to be shared across multiple calls.
To fix this, change the default value of 'messages' to None and then inside the
function, check if 'messages' is None and if so, initialize it to an empty list.
```python
        api_key: Optional[str] = None,
        model_list: Optional[list] = None,  # pass in a list of api_base,keys, etc.
        drop_invalid_params: bool = True,  # If true, all the invalid parameters will be ignored (dropped) before sending to the model
        user: str | None = None,
```
Fix union operator syntax for older Python versions.

The `|` union operator syntax requires Python 3.10+. Use `Union` from typing for broader compatibility.

```diff
- user: str | None = None,
+ user: Optional[str] = None,
```
📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```diff
- user: str | None = None,
+ user: Optional[str] = None,
```
🤖 Prompt for AI Agents
In edenai_apis/apis/minimax/minimax_api.py at line 265, replace the union type
hint using the `|` operator with the `Union` type from the typing module to
ensure compatibility with Python versions older than 3.10. Import `Union` if not
already imported, and change `user: str | None = None` to `user: Union[str,
None] = None`.
```python
    async def llm__achat(
        self,
        messages: List = [],
```
Fix mutable default argument.

Using a mutable list as a default argument can lead to unexpected behavior where the same list instance is shared across function calls.

```diff
- messages: List = [],
+ messages: Optional[List] = None,
```

Then initialize within the function:

```python
if messages is None:
    messages = []
```
🤖 Prompt for AI Agents
In edenai_apis/apis/xai/xai_llm_api.py at line 94, the function uses a mutable
list as a default argument which can cause shared state issues across calls.
Change the default value of the messages parameter from an empty list to None,
and then inside the function, add a check to initialize messages to an empty
list if it is None.
```python
        api_key: Optional[str] = None,
        model_list: Optional[list] = None,  # pass in a list of api_base,keys, etc.
        drop_invalid_params: bool = True,  # If true, all the invalid parameters will be ignored (dropped) before sending to the model
        user: str | None = None,
```
Fix union operator syntax for older Python versions.

The `|` union operator syntax requires Python 3.10+. Use `Union` from typing for broader compatibility.

```diff
- user: str | None = None,
+ user: Optional[str] = None,
```
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In edenai_apis/apis/xai/xai_llm_api.py at line 131, replace the union type hint
using the `|` operator with the `Union` type from the typing module to ensure
compatibility with Python versions older than 3.10. Import `Union` if not
already imported, and change the annotation from `str | None` to `Union[str,
None]`.
Actionable comments posted: 0
🧹 Nitpick comments (3)
edenai_apis/apis/xai/info.json (1)

227-234: Ensure the new "achat" capability is fully wired & documented

The "achat" key is now exposed with the same version string as "chat", but:

- Up-stream runtime: verify that `llm__achat` is actually declared in xai_llm_api.py and registered in provider discovery, otherwise this entry will appear in the public catalog but trigger 501/404 at runtime.
- Constraints: unlike most blocks in this file, the "achat" entry (and its sibling "chat") omits a "constraints" object. If the async and sync endpoints share the same model list & default, consider duplicating the "constraints" from "chat" (or declaring a common constant) to avoid client-side ambiguity; see the sketch after this comment.
- Tests / schema validation: add/update JSON-schema or unit tests that iterate over info.json capabilities to include "achat" so regressions are caught automatically.

If everything is already handled in the implementation layer and tests, feel free to ignore; otherwise a quick follow-up PR to address the above will save support tickets.
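A sketch of the duplicated constraints block suggested above; the "llmengine (v2)" version string mirrors the one quoted for the Microsoft entry later in this review, and the model names are placeholders rather than values from the file:

```json
"llm": {
  "chat": {
    "version": "llmengine (v2)",
    "constraints": { "models": ["model-a", "model-b"], "default_model": "model-a" }
  },
  "achat": {
    "version": "llmengine (v2)",
    "constraints": { "models": ["model-a", "model-b"], "default_model": "model-a" }
  }
}
```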
edenai_apis/apis/microsoft/info.json (2)
145-145: Prefer multi-line formatting for long arrays

Flattening the `file_extensions` list onto a single line trims bytes but hurts future diff readability and increases merge-conflict risk. Unless size is truly critical, keep one item per line.
785-785: Same readability issue for `languages` / `documents` lists

The same one-liner compression was applied to multiple huge arrays. The original multi-line layout is far easier to scan and maintain. Consider reverting for consistency and maintainability.
Also applies to: 1162-1162, 1178-1179, 1205-1208
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (13)

- edenai_apis/apis/amazon/info.json (1 hunks)
- edenai_apis/apis/anthropic/info.json (2 hunks)
- edenai_apis/apis/deepseek/info.json (1 hunks)
- edenai_apis/apis/groq/info.json (1 hunks)
- edenai_apis/apis/iointelligence/info.json (1 hunks)
- edenai_apis/apis/meta/info.json (2 hunks)
- edenai_apis/apis/microsoft/info.json (7 hunks)
- edenai_apis/apis/minimax/info.json (1 hunks)
- edenai_apis/apis/mistral/info.json (1 hunks)
- edenai_apis/apis/openai/info.json (1 hunks)
- edenai_apis/apis/replicate/info.json (1 hunks)
- edenai_apis/apis/together_ai/info.json (1 hunks)
- edenai_apis/apis/xai/info.json (1 hunks)
✅ Files skipped from review due to trivial changes (10)
- edenai_apis/apis/deepseek/info.json
- edenai_apis/apis/together_ai/info.json
- edenai_apis/apis/meta/info.json
- edenai_apis/apis/iointelligence/info.json
- edenai_apis/apis/anthropic/info.json
- edenai_apis/apis/minimax/info.json
- edenai_apis/apis/mistral/info.json
- edenai_apis/apis/openai/info.json
- edenai_apis/apis/replicate/info.json
- edenai_apis/apis/amazon/info.json
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: test
🔇 Additional comments (3)
edenai_apis/apis/xai/info.json (1)

1-226: Formatting-only changes look fine

All edits up to the “llm” block are indentation / whitespace re-flows. JSON remains syntactically valid and semantically unchanged.

edenai_apis/apis/microsoft/info.json (1)
1726-1730: Async capability correctly exposed – just verify provider registry

The new "llm" → "achat" entry with version "llmengine (v2)" matches the newly-added `llm__achat` method. Looks good. Please run the capability-resolution tests (or a quick `grep '"achat"'`) to ensure the registry now discovers this async variant everywhere it’s needed.

edenai_apis/apis/groq/info.json (1)
15-22: Downstream code supports the new `llm` section – no changes required

- The JSON loader (`load_info_file` / `load_provider`) treats all top-level keys generically, so adding "llm" has no impact.
- `list_features` dynamically picks up `llm__chat` and `llm__achat` methods, so runtime discovery and any CLI listing will include both.
- The existing tests (`test_version_exists`, `test_implemented_features_documented`, etc.) already validate that info.json contains both `llm.chat` and `llm.achat` entries and will pass as-is.
Summary by CodeRabbit
New Features
Style