add llm__achat to all (but one) supported providers in llm__chat #369

Merged
merged 2 commits into master from add-remaining-provider-to-async-llm-chat on Jul 28, 2025

Conversation

@Mounir-charef (Contributor) commented Jul 17, 2025

Summary by CodeRabbit

  • New Features

    • Introduced asynchronous chat completion support across multiple AI providers, enabling non-blocking chat interactions in compatible integrations.
    • Added sample output files showcasing async chat completion responses for various providers.
    • Updated API capability metadata to include async chat completion support ("achat") with version "llmengine (v2)" across multiple providers.
  • Style

    • Reformatted and improved readability of API capability JSON files without changing functionality.
    • Minor typographical corrections in sample output files.

coderabbitai bot commented Jul 17, 2025

"""

Walkthrough

Asynchronous llm__achat methods were added to multiple LLM API provider classes and the shared interface, mirroring the synchronous llm__chat methods but enabling async chat completions. Sample output JSON files for async chat completions were introduced for each provider. The API metadata JSON files were updated to include the new "achat" entry with version "llmengine (v2)". A minor fix was made to a Google output JSON.

Changes

File(s) and change summary:

  • edenai_apis/features/llm/llm_interface.py - Added async abstract method llm__achat to the LlmInterface class.
  • edenai_apis/apis/*/*_api.py (Amazon, Anthropic, Deepseek, Groq, Iointelligence, Meta, Microsoft, Minimax, Mistral, OpenAI, Replicate, TogetherAI, XAI) - Added async method llm__achat to each provider's API class, mirroring the synchronous llm__chat.
  • edenai_apis/apis/*/outputs/llm/achat_output.json (same thirteen providers) - Added new JSON files with sample async chat completion responses for each provider.
  • edenai_apis/apis/google/outputs/llm/achat_output.json - Updated model field to a generic value and fixed apostrophe in assistant message.
  • edenai_apis/apis/*/info.json (same thirteen providers) - Reformatted JSON metadata files and added new "achat" entry with version "llmengine (v2)" under "llm".
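A condensed sketch of the pattern described above (the real signatures carry roughly 34 OpenAI-compatible parameters, elided here; the provider class name is hypothetical):

from abc import ABC, abstractmethod
from typing import List, Optional

class LlmInterface(ABC):
    # Condensed: the actual abstract method declares the full parameter list.
    @abstractmethod
    async def llm__achat(
        self, messages: Optional[List] = None, model: Optional[str] = None, **kwargs
    ): ...

class SomeProviderApi(LlmInterface):  # hypothetical provider class
    def __init__(self, llm_client):
        self.llm_client = llm_client

    async def llm__achat(
        self, messages: Optional[List] = None, model: Optional[str] = None, **kwargs
    ):
        # Mirrors llm__chat, but awaits the engine's async completion call.
        return await self.llm_client.acompletion(
            messages=messages or [], model=model, **kwargs
        )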

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant LLMApiProvider
    participant LLMClient

    User->>LLMApiProvider: await llm__achat(messages, ...)
    LLMApiProvider->>LLMClient: await acompletion(messages, ...)
    LLMClient-->>LLMApiProvider: ChatDataClass (async)
    LLMApiProvider-->>User: ChatDataClass (async)
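A rough usage sketch of the flow above (provider construction and the message schema are illustrative assumptions, not taken from the PR):

import asyncio

async def main(provider):
    # `provider` is any instantiated API class implementing LlmInterface;
    # llm__achat can be awaited directly or gathered with other coroutines.
    response = await provider.llm__achat(
        messages=[{"role": "user", "content": "Hello!"}],
        model="some-model",  # hypothetical model identifier
        max_tokens=128,
    )
    print(response)

# asyncio.run(main(provider)) drives the coroutine from synchronous code.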

Suggested reviewers

  • juandavidcruzgomez

Poem

In a warren of code, the rabbits hop,
Now async chat flows without a stop!
Each provider’s voice, both swift and bright,
Returns replies by day or night.
With JSON tales and methods new,
The codebase grows—hip hip, yahoo!
🐇✨
"""

Copilot AI left a comment

Pull Request Overview

This PR introduces a new asynchronous chat method llm__achat alongside the existing llm__chat, updating the core interface, all supported provider implementations (except one), and the associated example output fixtures.

  • Define llm__achat as an abstract method in the LLM interface.
  • Implement llm__achat in each provider class to call llm_client.acompletion.
  • Add or update example JSON fixtures (achat_output.json) to reflect the new endpoint.

Reviewed Changes

Copilot reviewed 28 out of 28 changed files in this pull request and generated 3 comments.

Summary per file:

  • edenai_apis/features/llm/llm_interface.py - Add abstract llm__achat signature
  • edenai_apis/apis/xai/xai_llm_api.py - Implement llm__achat method
  • edenai_apis/apis/xai/outputs/llm/achat_output.json - Add example fixture for achat
  • edenai_apis/apis/together_ai/together_ai_api.py - Implement llm__achat method
  • edenai_apis/apis/together_ai/outputs/llm/achat_output.json - Add example fixture for achat
  • edenai_apis/apis/replicate/replicate_api.py - Implement llm__achat method
  • edenai_apis/apis/replicate/outputs/llm/achat_output.json - Add example fixture for achat
  • edenai_apis/apis/openai/openai_llm_api.py - Implement llm__achat method
  • edenai_apis/apis/openai/outputs/llm/achat_output.json - Add example fixture for achat
  • edenai_apis/apis/mistral/mistral_api.py - Implement llm__achat method
  • edenai_apis/apis/mistral/outputs/llm/achat_output.json - Add example fixture for achat
  • edenai_apis/apis/minimax/minimax_api.py - Implement llm__achat method
  • edenai_apis/apis/minimax/outputs/llm/achat_output.json - Add example fixture for achat
  • edenai_apis/apis/microsoft/microsoft_llm_api.py - Implement llm__achat method
  • edenai_apis/apis/microsoft/outputs/llm/achat_output.json - Add example fixture for achat
  • edenai_apis/apis/meta/meta_api.py - Implement llm__achat method
  • edenai_apis/apis/meta/outputs/llm/achat_output.json - Add example fixture for achat
  • edenai_apis/apis/iointelligence/iointelligence_api.py - Implement llm__achat method
  • edenai_apis/apis/iointelligence/outputs/llm/achat_output.json - Add example fixture for achat
  • edenai_apis/apis/groq/groq_api.py - Implement llm__achat method
  • edenai_apis/apis/groq/outputs/llm/achat_output.json - Add example fixture for achat
  • edenai_apis/apis/google/outputs/llm/achat_output.json - Update example fixture for achat
  • edenai_apis/apis/deepseek/deepseek_api.py - Implement llm__achat method
  • edenai_apis/apis/deepseek/outputs/llm/achat_output.json - Add example fixture for achat
  • edenai_apis/apis/anthropic/anthropic_api.py - Implement llm__achat method
  • edenai_apis/apis/anthropic/outputs/llm/achat_output.json - Add example fixture for achat
  • edenai_apis/apis/amazon/amazon_llm_api.py - Implement llm__achat method
  • edenai_apis/apis/amazon/outputs/llm/achat_output.json - Add example fixture for achat
Comments suppressed due to low confidence (3)

edenai_apis/features/llm/llm_interface.py:120

  • The docstring for llm__achat references parameters like chatbot_global_action and top_k that do not exist in the signature; please update it to accurately describe the new parameters.
        """

edenai_apis/features/llm/llm_interface.py:76

  • New abstract method llm__achat is added without accompanying tests; please add unit or integration tests to verify implementations call llm_client.acompletion correctly.
    @abstractmethod

edenai_apis/apis/google/outputs/llm/achat_output.json:12

  • [nitpick] The example content for Google output omits the apostrophe after "seein" unlike other fixtures; consider restoring consistency or confirming the change was intentional.
        "content": "Arrr, matey! What ye be seein in this here image is a grand pathway, made of wooden planks, weavin' its way through a lush and green landscape. The verdant grass sways in the gentle breeze, and the sky above be a brilliant blue, decorated with fluffy white clouds. Ye can spot trees and bushes on either side, makin' it a perfect setting for a stroll amongst nature. A peaceful place for a pirate at heart, aye!",

            extra_headers=extra_headers,
            functions=functions,
            function_call=function_call,
            base_url=self.base_url,
Copilot AI commented Jul 17, 2025

In llm__achat you pass base_url=self.base_url instead of the base_url parameter; this will ignore callers' provided URL and should be changed to the local base_url argument.
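A toy reproduction of the shadowing bug, with illustrative names rather than the actual provider code:

DEFAULT_URL = "https://api.default.example"

class ToyApi:
    def __init__(self):
        self.base_url = DEFAULT_URL

    def chat(self, base_url=None):
        # Bug pattern: the instance attribute wins unconditionally.
        return self.base_url

    def chat_fixed(self, base_url=None):
        # Fix: forward the caller's argument, falling back when omitted.
        return base_url if base_url is not None else self.base_url

api = ToyApi()
assert api.chat(base_url="https://override.example") == DEFAULT_URL  # override ignored
assert api.chat_fixed(base_url="https://override.example") == "https://override.example"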


@@ -1,15 +1,15 @@
 {
   "id": "chatcmpl-B71qqu4Y7m1ZVuF5YGqxz7LXzgf0Y",
   "created": 1741015112,
-  "model": "gpt-4o-mini-2024-07-18",
+  "model": "model",
Copilot AI commented Jul 17, 2025

The fixture uses a placeholder "model" instead of an actual model name like other providers; this inconsistency may break snapshot tests.

Suggested change
-  "model": "model",
+  "model": "gpt-3.5-turbo",


@@ -88,3 +88,84 @@ def llm__chat(
             **kwargs,
         )
         return response
+
+    async def llm__achat(
Copilot AI commented Jul 17, 2025

[nitpick] The llm__achat method signature is duplicated across many provider classes; consider extracting a shared base or mixin to reduce code repetition and ensure consistency.
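One possible shape for that refactor, as a hedged sketch (the mixin name is invented; it assumes subclasses set self.llm_client, as the providers in this PR do):

from typing import Any, List, Optional

class AsyncChatMixin:
    """Hypothetical shared mixin: the full 34-parameter signature would be
    declared once here instead of in every provider class."""

    async def llm__achat(
        self,
        messages: Optional[List] = None,
        model: Optional[str] = None,
        **kwargs: Any,
    ):
        if messages is None:
            messages = []
        return await self.llm_client.acompletion(
            messages=messages, model=model, **kwargs
        )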


coderabbitai bot left a comment

Actionable comments posted: 19

♻️ Duplicate comments (4)
edenai_apis/apis/xai/outputs/llm/achat_output.json (1)

1-41: Same concerns as the Iointelligence fixture (model name, provider_time, duplicated content).

edenai_apis/apis/mistral/outputs/llm/achat_output.json (1)

1-41: Same concerns as the Iointelligence fixture (model name, provider_time, duplicated content).

edenai_apis/apis/together_ai/outputs/llm/achat_output.json (1)

1-41: Same concerns as the Iointelligence fixture (model name, provider_time, duplicated content).

edenai_apis/apis/meta/outputs/llm/achat_output.json (1)

1-41: Same concerns as the Iointelligence fixture (model name, provider_time, duplicated content).

🧹 Nitpick comments (10)
edenai_apis/apis/google/outputs/llm/achat_output.json (1)

12-12: Restore the missing pirate-style apostrophe

The dialect elsewhere keeps the trailing apostrophe (e.g. weavin', makin').
For consistency, consider adding it back to seein and use the straight ASCII ' to avoid encoding hiccups:

-        "content": "Arrr, matey! What ye be seein in this here image is a grand pathway, made of wooden planks, weavin' its way through a lush and green landscape. ...
+        "content": "Arrr, matey! What ye be seein' in this here image is a grand pathway, made of wooden planks, weavin' its way through a lush and green landscape. ...
edenai_apis/apis/anthropic/outputs/llm/achat_output.json (1)

19-20: Clarify provider_time scale to avoid downstream confusion

The sample value 3692885792 reads as roughly 117 years if taken as a duration in seconds, yet is far too small to be a milliseconds-since-epoch timestamp.
Explicitly state the unit (e.g., "provider_time_ms": 1234) or use a realistic epoch-seconds integer to prevent consumers or tests from mis-parsing timing data.
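If the field is meant to capture request latency, a millisecond measurement would look roughly like this (provider_time_ms is the name suggested above, not an existing field):

import time

start = time.perf_counter()
# response = await provider.llm__achat(...)  # the timed call
provider_time_ms = int((time.perf_counter() - start) * 1000)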

edenai_apis/apis/microsoft/outputs/llm/achat_output.json (1)

19-20: provider_time magnitude looks unrealistic

A value in the billions is unlikely to be either milliseconds or seconds since epoch. Consider switching to a clearly-named, realistic field (provider_latency_ms) to avoid misleading benchmarks.

edenai_apis/apis/openai/outputs/llm/achat_output.json (1)

19-20: Use a plausible latency figure or annotate units

"provider_time": 3692885792 will likely trip simple sanity checks.
Either scale it to milliseconds (<10 000 for typical calls) or document the measurement unit.

edenai_apis/apis/deepseek/outputs/llm/achat_output.json (1)

19-20: Questionable provider_time sample

The placeholder value appears inconsistent with real-world response times. Replace with a believable latency or rename the field to make its purpose explicit.

edenai_apis/apis/minimax/outputs/llm/achat_output.json (1)

19-20: Consider revising provider_time placeholder

The large number does not correspond to common time units and may confuse automated parsers. Recommend using a realistic duration in milliseconds or seconds.

edenai_apis/apis/iointelligence/outputs/llm/achat_output.json (1)

provider_time orders of magnitude too large

3692885792 milliseconds ≈ 42 days. Either document the unit or use a realistic latency (e.g., < 10 000 ms).

edenai_apis/apis/amazon/outputs/llm/achat_output.json (2)

4-4: Consider using a more specific model identifier.

The generic "model" value could be more descriptive. Compare with other providers that use specific model names like "gpt-4o-mini-2024-07-18".


12-12: Consider using provider-specific sample content.

The content is identical to other providers' output files. Consider using Amazon Bedrock-specific sample responses to better represent the actual API behavior.

edenai_apis/apis/groq/outputs/llm/achat_output.json (1)

12-12: Consider provider-specific sample content.

The sample content is identical across all providers. Consider using Groq-specific sample responses to better represent actual API behavior.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 842d99f and 60c0b87.

📒 Files selected for processing (28)
  • edenai_apis/apis/amazon/amazon_llm_api.py (1 hunks)
  • edenai_apis/apis/amazon/outputs/llm/achat_output.json (1 hunks)
  • edenai_apis/apis/anthropic/anthropic_api.py (1 hunks)
  • edenai_apis/apis/anthropic/outputs/llm/achat_output.json (1 hunks)
  • edenai_apis/apis/deepseek/deepseek_api.py (1 hunks)
  • edenai_apis/apis/deepseek/outputs/llm/achat_output.json (1 hunks)
  • edenai_apis/apis/google/outputs/llm/achat_output.json (1 hunks)
  • edenai_apis/apis/groq/groq_api.py (1 hunks)
  • edenai_apis/apis/groq/outputs/llm/achat_output.json (1 hunks)
  • edenai_apis/apis/iointelligence/iointelligence_api.py (1 hunks)
  • edenai_apis/apis/iointelligence/outputs/llm/achat_output.json (1 hunks)
  • edenai_apis/apis/meta/meta_api.py (1 hunks)
  • edenai_apis/apis/meta/outputs/llm/achat_output.json (1 hunks)
  • edenai_apis/apis/microsoft/microsoft_llm_api.py (1 hunks)
  • edenai_apis/apis/microsoft/outputs/llm/achat_output.json (1 hunks)
  • edenai_apis/apis/minimax/minimax_api.py (1 hunks)
  • edenai_apis/apis/minimax/outputs/llm/achat_output.json (1 hunks)
  • edenai_apis/apis/mistral/mistral_api.py (1 hunks)
  • edenai_apis/apis/mistral/outputs/llm/achat_output.json (1 hunks)
  • edenai_apis/apis/openai/openai_llm_api.py (1 hunks)
  • edenai_apis/apis/openai/outputs/llm/achat_output.json (1 hunks)
  • edenai_apis/apis/replicate/outputs/llm/achat_output.json (1 hunks)
  • edenai_apis/apis/replicate/replicate_api.py (1 hunks)
  • edenai_apis/apis/together_ai/outputs/llm/achat_output.json (1 hunks)
  • edenai_apis/apis/together_ai/together_ai_api.py (1 hunks)
  • edenai_apis/apis/xai/outputs/llm/achat_output.json (1 hunks)
  • edenai_apis/apis/xai/xai_llm_api.py (1 hunks)
  • edenai_apis/features/llm/llm_interface.py (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (3)
edenai_apis/apis/openai/openai_llm_api.py (2)
edenai_apis/features/llm/chat/chat_dataclass.py (1)
  • ChatDataClass (199-214)
edenai_apis/llmengine/llm_engine.py (1)
  • acompletion (871-987)
edenai_apis/apis/microsoft/microsoft_llm_api.py (4)
edenai_apis/apis/groq/groq_api.py (1)
  • llm__achat (141-219)
edenai_apis/apis/openai/openai_llm_api.py (1)
  • llm__achat (90-168)
edenai_apis/features/llm/llm_interface.py (1)
  • llm__achat (77-139)
edenai_apis/llmengine/llm_engine.py (1)
  • acompletion (871-987)
edenai_apis/apis/mistral/mistral_api.py (4)
edenai_apis/apis/groq/groq_api.py (1)
  • llm__achat (141-219)
edenai_apis/features/llm/llm_interface.py (1)
  • llm__achat (77-139)
edenai_apis/features/text/chat/chat_dataclass.py (1)
  • ChatDataClass (24-30)
edenai_apis/llmengine/llm_engine.py (1)
  • acompletion (871-987)
🪛 Ruff (0.12.2)
edenai_apis/apis/openai/openai_llm_api.py

92-92: Do not use mutable data structures for argument defaults

Replace with None; initialize within function

(B006)

edenai_apis/apis/microsoft/microsoft_llm_api.py

93-93: Do not use mutable data structures for argument defaults

Replace with None; initialize within function

(B006)

edenai_apis/apis/anthropic/anthropic_api.py

216-216: Do not use mutable data structures for argument defaults

Replace with None; initialize within function

(B006)

edenai_apis/apis/together_ai/together_ai_api.py

146-146: Do not use mutable data structures for argument defaults

Replace with None; initialize within function

(B006)

edenai_apis/apis/deepseek/deepseek_api.py

142-142: Do not use mutable data structures for argument defaults

Replace with None; initialize within function

(B006)

edenai_apis/apis/meta/meta_api.py

209-209: Do not use mutable data structures for argument defaults

Replace with None; initialize within function

(B006)

edenai_apis/features/llm/llm_interface.py

79-79: Do not use mutable data structures for argument defaults

Replace with None; initialize within function

(B006)

edenai_apis/apis/mistral/mistral_api.py

250-250: Do not use mutable data structures for argument defaults

Replace with None; initialize within function

(B006)

edenai_apis/apis/groq/groq_api.py

143-143: Do not use mutable data structures for argument defaults

Replace with None; initialize within function

(B006)

edenai_apis/apis/amazon/amazon_llm_api.py

94-94: Do not use mutable data structures for argument defaults

Replace with None; initialize within function

(B006)

edenai_apis/apis/xai/xai_llm_api.py

94-94: Do not use mutable data structures for argument defaults

Replace with None; initialize within function

(B006)

edenai_apis/apis/iointelligence/iointelligence_api.py

120-120: Do not use mutable data structures for argument defaults

Replace with None; initialize within function

(B006)

edenai_apis/apis/replicate/replicate_api.py

327-327: Do not use mutable data structures for argument defaults

Replace with None; initialize within function

(B006)

edenai_apis/apis/minimax/minimax_api.py

228-228: Do not use mutable data structures for argument defaults

Replace with None; initialize within function

(B006)

🪛 Pylint (3.3.7)
edenai_apis/apis/openai/openai_llm_api.py

[refactor] 90-90: Too many arguments (34/7)

(R0913)


[refactor] 90-90: Too many positional arguments (34/5)

(R0917)


[error] 129-129: unsupported operand type(s) for |

(E1131)

edenai_apis/apis/microsoft/microsoft_llm_api.py

[refactor] 91-91: Too many arguments (34/7)

(R0913)


[refactor] 91-91: Too many positional arguments (34/5)

(R0917)


[error] 130-130: unsupported operand type(s) for |

(E1131)

edenai_apis/apis/anthropic/anthropic_api.py

[refactor] 214-214: Too many arguments (34/7)

(R0913)


[refactor] 214-214: Too many positional arguments (34/5)

(R0917)


[error] 253-253: unsupported operand type(s) for |

(E1131)

edenai_apis/apis/together_ai/together_ai_api.py

[refactor] 144-144: Too many arguments (34/7)

(R0913)


[refactor] 144-144: Too many positional arguments (34/5)

(R0917)


[error] 183-183: unsupported operand type(s) for |

(E1131)

edenai_apis/apis/deepseek/deepseek_api.py

[refactor] 140-140: Too many arguments (34/7)

(R0913)


[refactor] 140-140: Too many positional arguments (34/5)

(R0917)


[error] 179-179: unsupported operand type(s) for |

(E1131)

edenai_apis/apis/meta/meta_api.py

[refactor] 207-207: Too many arguments (34/7)

(R0913)


[refactor] 207-207: Too many positional arguments (34/5)

(R0917)


[error] 246-246: unsupported operand type(s) for |

(E1131)

edenai_apis/features/llm/llm_interface.py

[refactor] 77-77: Too many arguments (34/7)

(R0913)


[refactor] 77-77: Too many positional arguments (34/5)

(R0917)


[error] 116-116: unsupported operand type(s) for |

(E1131)

edenai_apis/apis/mistral/mistral_api.py

[refactor] 248-248: Too many arguments (34/7)

(R0913)


[refactor] 248-248: Too many positional arguments (34/5)

(R0917)


[error] 287-287: unsupported operand type(s) for |

(E1131)

edenai_apis/apis/groq/groq_api.py

[refactor] 141-141: Too many arguments (34/7)

(R0913)


[refactor] 141-141: Too many positional arguments (34/5)

(R0917)


[error] 179-179: unsupported operand type(s) for |

(E1131)

edenai_apis/apis/amazon/amazon_llm_api.py

[refactor] 92-92: Too many arguments (34/7)

(R0913)


[refactor] 92-92: Too many positional arguments (34/5)

(R0917)


[error] 131-131: unsupported operand type(s) for |

(E1131)

edenai_apis/apis/xai/xai_llm_api.py

[refactor] 92-92: Too many arguments (34/7)

(R0913)


[refactor] 92-92: Too many positional arguments (34/5)

(R0917)


[error] 131-131: unsupported operand type(s) for |

(E1131)

edenai_apis/apis/iointelligence/iointelligence_api.py

[refactor] 118-118: Too many arguments (34/7)

(R0913)


[refactor] 118-118: Too many positional arguments (34/5)

(R0917)


[error] 157-157: unsupported operand type(s) for |

(E1131)

edenai_apis/apis/replicate/replicate_api.py

[refactor] 325-325: Too many arguments (34/7)

(R0913)


[refactor] 325-325: Too many positional arguments (34/5)

(R0917)


[error] 364-364: unsupported operand type(s) for |

(E1131)

edenai_apis/apis/minimax/minimax_api.py

[refactor] 226-226: Too many arguments (34/7)

(R0913)


[refactor] 226-226: Too many positional arguments (34/5)

(R0917)


[error] 265-265: unsupported operand type(s) for |

(E1131)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: test
🔇 Additional comments (10)
edenai_apis/apis/google/outputs/llm/achat_output.json (1)

4-4: Double-check downstream code expectations for the model field

Replacing the concrete model identifier with the generic string "model" is fine for a redacted sample, but some unit/integration tests or client code may parse this field to assert provider-specific behaviour.
Please verify that no existing test fixtures or schema validations depend on the former identifier.

edenai_apis/apis/replicate/outputs/llm/achat_output.json (1)

1-42: Sample output JSON structure looks correct.

The JSON follows the expected OpenAI-compatible chat completion response format with appropriate metadata, usage statistics, and timing information. Good example for async chat completion responses.

edenai_apis/apis/openai/openai_llm_api.py (2)

90-168: Async method implementation looks correct.

The async method properly mirrors the synchronous version and correctly uses await with self.llm_client.acompletion(). Parameter handling is consistent and comprehensive.


129-129: Python 3.10+ Compatibility Confirmed

The str | None union syntax is supported starting in Python 3.10, and both requires-python = ">=3.11" in pyproject.toml and python = "^3.11" in Poetry confirm the project’s minimum Python version is ≥3.11. No changes needed.

edenai_apis/apis/microsoft/microsoft_llm_api.py (2)

91-170: Async method implementation is well-structured.

The async method correctly mirrors the synchronous version and properly uses await with self.llm_client.acompletion(). The implementation is consistent with other providers in the codebase.


130-130: No action needed: project requires Python >=3.11
The pyproject.toml specifies requires-python = ">=3.11", so using the str | None union syntax is fully supported.

edenai_apis/apis/amazon/amazon_llm_api.py (1)

9-9: AmazonLLMApi is correctly initialized via AmazonApi’s constructor

The AmazonLLMApi mix-in doesn’t need its own __init__: it relies on AmazonApi (which subclasses AmazonLLMApi) to set up self.llm_client in its constructor. All calls to self.llm_client in AmazonLLMApi will work as long as you instantiate AmazonApi.

You can safely ignore the missing-constructor warning for AmazonLLMApi.

Likely an incorrect or invalid review comment.

edenai_apis/apis/groq/groq_api.py (1)

141-219: Confirmed: llm__achat abstract method is properly defined and inherited

  • edenai_apis/features/llm/llm_interface.py defines
    @abstractmethod async def llm__achat(…) with the full async signature.
  • edenai_apis/apis/groq/groq_api.py declares
    class GroqApi(..., LlmInterface): and implements async def llm__achat(…) matching the interface.

No further changes required.

edenai_apis/apis/minimax/minimax_api.py (1)

226-304: Async method implementation looks correct.

The llm__achat method properly mirrors the synchronous llm__chat method and correctly uses await self.llm_client.acompletion() for async execution. The parameter forwarding and return type are consistent.

Note: The high argument count (34 parameters) is inherited from the OpenAI API design and matches the synchronous version, so this is acceptable for API compatibility.

edenai_apis/apis/xai/xai_llm_api.py (1)

92-171: Async method implementation looks correct.

The llm__achat method properly mirrors the synchronous llm__chat method and correctly uses await self.llm_client.acompletion() for async execution. The parameter forwarding and return type are consistent.

Note: The high argument count (34 parameters) is inherited from the OpenAI API design and matches the synchronous version, so this is acceptable for API compatibility.

Comment on lines +4 to +6
"model": "gpt-4o-mini-2024-07-18",
"object": "chat.completion",
"system_fingerprint": "fp_7fcd609668",

⚠️ Potential issue

Model name doesn’t match the provider – very misleading sample

The Iointelligence sample references the OpenAI-specific model string "gpt-4o-mini-2024-07-18". A provider-agnostic or Iointelligence-specific model identifier is expected; otherwise, integrators might assume the wrong capabilities or pricing.

🤖 Prompt for AI Agents
In edenai_apis/apis/iointelligence/outputs/llm/achat_output.json around lines 4
to 6, the model name "gpt-4o-mini-2024-07-18" is specific to OpenAI and
misleading for the Iointelligence provider. Replace this with a
provider-agnostic or Iointelligence-specific model identifier that accurately
reflects the source to avoid confusion about capabilities or pricing.

Comment on lines +1 to +41
{
  "id": "chatcmpl-B71qqu4Y7m1ZVuF5YGqxz7LXzgf0Y",
  "created": 1741015112,
  "model": "gpt-4o-mini-2024-07-18",
  "object": "chat.completion",
  "system_fingerprint": "fp_7fcd609668",
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "content": "Arrr, matey! What ye be seein’ in this here image is a grand pathway, made of wooden planks, weavin' its way through a lush and green landscape. The verdant grass sways in the gentle breeze, and the sky above be a brilliant blue, decorated with fluffy white clouds. Ye can spot trees and bushes on either side, makin' it a perfect setting for a stroll amongst nature. A peaceful place for a pirate at heart, aye!",
        "role": "assistant",
        "tool_calls": null,
        "function_call": null
      }
    }
  ],
  "provider_time": 3692885792,
  "edenai_time": null,
  "usage": {
    "completion_tokens": 99,
    "prompt_tokens": 1170,
    "total_tokens": 1269,
    "completion_tokens_details": {
      "accepted_prediction_tokens": 0,
      "audio_tokens": 0,
      "reasoning_tokens": 0,
      "rejected_prediction_tokens": 0,
      "text_tokens": 99
    },
    "prompt_tokens_details": {
      "audio_tokens": 0,
      "cached_tokens": 1024,
      "text_tokens": null,
      "image_tokens": null
    }
  },
  "service_tier": "default",
  "cost": 0.0002349
}

🛠️ Refactor suggestion

Duplicate IDs & identical payload across all providers – consider unique, provider-tailored fixtures

The same id, prompt/usage stats, and pirate-style answer appear in every new achat_output.json. This hurts test fidelity and makes it hard to spot provider-specific parsing issues (e.g., different token accounting schemas). Generate one realistic fixture per provider (unique id, model, usage layout).

🤖 Prompt for AI Agents
In edenai_apis/apis/iointelligence/outputs/llm/achat_output.json lines 1 to 41,
the fixture uses a duplicate id and identical payload shared across all
providers, which reduces test fidelity and obscures provider-specific parsing
issues. To fix this, generate a unique fixture for this provider with a distinct
id, model name, and usage statistics that realistically reflect this provider's
response format and token accounting. Ensure the content and metadata are
tailored to this provider to improve test accuracy.

{
  "id": "chatcmpl-B71qqu4Y7m1ZVuF5YGqxz7LXzgf0Y",
  "created": 1741015112,
  "model": "gpt-4o-mini-2024-07-18",

🛠️ Refactor suggestion

Verify model name accuracy for Groq API.

The model name "gpt-4o-mini-2024-07-18" appears to be OpenAI-specific. Groq typically uses different model identifiers. Consider using a Groq-appropriate model name.

🤖 Prompt for AI Agents
In edenai_apis/apis/groq/outputs/llm/achat_output.json at line 4, the model name
"gpt-4o-mini-2024-07-18" is OpenAI-specific and not appropriate for the Groq
API. Replace this model name with the correct Groq model identifier as per
Groq's documentation or API specifications to ensure compatibility.


    async def llm__achat(
        self,
        messages: List = [],

⚠️ Potential issue

Fix mutable default argument.

Using a mutable list as a default argument can lead to unexpected behavior. Replace with None and initialize within the function.

-        messages: List = [],
+        messages: Optional[List] = None,

Then initialize within the function:

if messages is None:
    messages = []
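A self-contained illustration of the pitfall, independent of the provider code:

def append_shared(item, items: list = []):
    # The default list is created once, at function definition time.
    items.append(item)
    return items

def append_fresh(item, items=None):
    if items is None:
        items = []  # a new list for every call that omits the argument
    items.append(item)
    return items

print(append_shared(1))  # [1]
print(append_shared(2))  # [1, 2]  <- state leaked from the first call
print(append_fresh(1))   # [1]
print(append_fresh(2))   # [2]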
🧰 Tools
🪛 Ruff (0.12.2)

92-92: Do not use mutable data structures for argument defaults

Replace with None; initialize within function

(B006)

🤖 Prompt for AI Agents
In edenai_apis/apis/openai/openai_llm_api.py at line 92, the function uses a
mutable default argument `messages: List = []`, which can cause unexpected
behavior. Change the default value to `None` and add a check inside the function
to initialize `messages` to an empty list if it is `None`.


    async def llm__achat(
        self,
        messages: List = [],

⚠️ Potential issue

Fix mutable default argument.

Using a mutable list as a default argument can lead to unexpected behavior. Replace with None and initialize within the function.

-        messages: List = [],
+        messages: Optional[List] = None,

Then initialize within the function:

if messages is None:
    messages = []
🧰 Tools
🪛 Ruff (0.12.2)

93-93: Do not use mutable data structures for argument defaults

Replace with None; initialize within function

(B006)

🤖 Prompt for AI Agents
In edenai_apis/apis/microsoft/microsoft_llm_api.py at line 93, the function uses
a mutable default argument for the parameter 'messages' by setting it to an
empty list. This can cause unexpected behavior due to shared state across
function calls. To fix this, change the default value of 'messages' to None and
add a check inside the function to initialize 'messages' to an empty list if it
is None.

Comment on lines +141 to +219
    async def llm__achat(
        self,
        messages: List = [],
        model: Optional[str] = None,
        timeout: Optional[Union[float, str, httpx.Timeout]] = None,
        temperature: Optional[float] = None,
        top_p: Optional[float] = None,
        n: Optional[int] = None,
        stream: Optional[bool] = None,
        stream_options: Optional[dict] = None,
        stop: Optional[str] = None,
        stop_sequences: Optional[any] = None,
        max_tokens: Optional[int] = None,
        presence_penalty: Optional[float] = None,
        frequency_penalty: Optional[float] = None,
        logit_bias: Optional[dict] = None,
        modalities: Optional[List[Literal["text", "audio", "image"]]] = None,
        audio: Optional[Dict] = None,
        # openai v1.0+ new params
        response_format: Optional[
            Union[dict, Type[BaseModel]]
        ] = None,  # Structured outputs
        seed: Optional[int] = None,
        tools: Optional[List] = None,
        tool_choice: Optional[Union[str, dict]] = None,
        logprobs: Optional[bool] = None,
        top_logprobs: Optional[int] = None,
        parallel_tool_calls: Optional[bool] = None,
        deployment_id=None,
        extra_headers: Optional[dict] = None,
        # soon to be deprecated params by OpenAI -> This should be replaced by tools
        functions: Optional[List] = None,
        function_call: Optional[str] = None,
        base_url: Optional[str] = None,
        api_version: Optional[str] = None,
        api_key: Optional[str] = None,
        model_list: Optional[list] = None,  # pass in a list of api_base,keys, etc.
        drop_invalid_params: bool = True,  # If true, all the invalid parameters will be ignored (dropped) before sending to the model
        user: str | None = None,
        # Optional parameters
        **kwargs,
    ) -> LLMChatDataClass:
        response = await self.llm_client.acompletion(
            messages=messages,
            model=model,
            timeout=timeout,
            temperature=temperature,
            top_p=top_p,
            n=n,
            stream=stream,
            stream_options=stream_options,
            stop=stop,
            stop_sequences=stop_sequences,
            max_tokens=max_tokens,
            presence_penalty=presence_penalty,
            frequency_penalty=frequency_penalty,
            logit_bias=logit_bias,
            response_format=response_format,
            seed=seed,
            tools=tools,
            tool_choice=tool_choice,
            logprobs=logprobs,
            top_logprobs=top_logprobs,
            parallel_tool_calls=parallel_tool_calls,
            deployment_id=deployment_id,
            extra_headers=extra_headers,
            functions=functions,
            function_call=function_call,
            base_url=base_url,
            api_version=api_version,
            api_key=api_key,
            model_list=model_list,
            drop_invalid_params=drop_invalid_params,
            user=user,
            modalities=modalities,
            audio=audio,
            **kwargs,
        )
        return response

⚠️ Potential issue

Fix mutable default argument and union operator syntax.

The async method implementation follows the correct pattern but has the same issues as other providers:

  1. Mutable default argument: messages: List = [] can cause shared state bugs
  2. Union operator syntax: user: str | None = None may not be compatible with older Python versions

Apply this fix:

    async def llm__achat(
        self,
-        messages: List = [],
+        messages: Optional[List] = None,
        model: Optional[str] = None,
        # ... other parameters ...
-        user: str | None = None,
+        user: Optional[str] = None,
        # Optional parameters
        **kwargs,
    ) -> LLMChatDataClass:
+        if messages is None:
+            messages = []
        response = await self.llm_client.acompletion(
            messages=messages,
            # ... rest of parameters ...
        )
        return response
📝 Committable suggestion

Suggested change

    async def llm__achat(
        self,
        messages: Optional[List] = None,
        model: Optional[str] = None,
        timeout: Optional[Union[float, str, httpx.Timeout]] = None,
        temperature: Optional[float] = None,
        top_p: Optional[float] = None,
        n: Optional[int] = None,
        stream: Optional[bool] = None,
        stream_options: Optional[dict] = None,
        stop: Optional[str] = None,
        stop_sequences: Optional[any] = None,
        max_tokens: Optional[int] = None,
        presence_penalty: Optional[float] = None,
        frequency_penalty: Optional[float] = None,
        logit_bias: Optional[dict] = None,
        modalities: Optional[List[Literal["text", "audio", "image"]]] = None,
        audio: Optional[Dict] = None,
        # openai v1.0+ new params
        response_format: Optional[
            Union[dict, Type[BaseModel]]
        ] = None,
        seed: Optional[int] = None,
        tools: Optional[List] = None,
        tool_choice: Optional[Union[str, dict]] = None,
        logprobs: Optional[bool] = None,
        top_logprobs: Optional[int] = None,
        parallel_tool_calls: Optional[bool] = None,
        deployment_id=None,
        extra_headers: Optional[dict] = None,
        # soon to be deprecated params by OpenAI -> This should be replaced by tools
        functions: Optional[List] = None,
        function_call: Optional[str] = None,
        base_url: Optional[str] = None,
        api_version: Optional[str] = None,
        api_key: Optional[str] = None,
        model_list: Optional[list] = None,
        drop_invalid_params: bool = True,
        user: Optional[str] = None,
        # Optional parameters
        **kwargs,
    ) -> LLMChatDataClass:
        if messages is None:
            messages = []
        response = await self.llm_client.acompletion(
            messages=messages,
            model=model,
            timeout=timeout,
            temperature=temperature,
            top_p=top_p,
            n=n,
            stream=stream,
            stream_options=stream_options,
            stop=stop,
            stop_sequences=stop_sequences,
            max_tokens=max_tokens,
            presence_penalty=presence_penalty,
            frequency_penalty=frequency_penalty,
            logit_bias=logit_bias,
            response_format=response_format,
            seed=seed,
            tools=tools,
            tool_choice=tool_choice,
            logprobs=logprobs,
            top_logprobs=top_logprobs,
            parallel_tool_calls=parallel_tool_calls,
            deployment_id=deployment_id,
            extra_headers=extra_headers,
            functions=functions,
            function_call=function_call,
            base_url=base_url,
            api_version=api_version,
            api_key=api_key,
            model_list=model_list,
            drop_invalid_params=drop_invalid_params,
            user=user,
            modalities=modalities,
            audio=audio,
            **kwargs,
        )
        return response
🧰 Tools
🪛 Ruff (0.12.2)

143-143: Do not use mutable data structures for argument defaults

Replace with None; initialize within function

(B006)

🪛 Pylint (3.3.7)

[refactor] 141-141: Too many arguments (34/7)

(R0913)


[refactor] 141-141: Too many positional arguments (34/5)

(R0917)


[error] 179-179: unsupported operand type(s) for |

(E1131)

🤖 Prompt for AI Agents
In edenai_apis/apis/groq/groq_api.py around lines 141 to 219, fix the mutable
default argument by changing the default value of the messages parameter from an
empty list to None and then initializing it inside the method if needed. Also,
replace the union operator syntax for the user parameter from 'str | None' to
'Optional[str]' to ensure compatibility with older Python versions.

@@ -223,6 +223,86 @@ def llm__chat(
         )
         return response
+
+    async def llm__achat(
+        self,
+        messages: List = [],

⚠️ Potential issue

Fix mutable default argument.

Using a mutable list as a default argument can lead to unexpected behavior where the same list instance is shared across function calls.

-        messages: List = [],
+        messages: Optional[List] = None,

Then initialize within the function:

if messages is None:
    messages = []
🧰 Tools
🪛 Ruff (0.12.2)

228-228: Do not use mutable data structures for argument defaults

Replace with None; initialize within function

(B006)

🤖 Prompt for AI Agents
In edenai_apis/apis/minimax/minimax_api.py at line 228, the function uses a
mutable default argument for the parameter 'messages' by setting it to an empty
list. This can cause the same list instance to be shared across multiple calls.
To fix this, change the default value of 'messages' to None and then inside the
function, check if 'messages' is None and if so, initialize it to an empty list.

        api_key: Optional[str] = None,
        model_list: Optional[list] = None,  # pass in a list of api_base,keys, etc.
        drop_invalid_params: bool = True,  # If true, all the invalid parameters will be ignored (dropped) before sending to the model
        user: str | None = None,

⚠️ Potential issue

Fix union operator syntax for older Python versions.

The | union operator syntax requires Python 3.10+. Use Union from typing for broader compatibility.

-        user: str | None = None,
+        user: Optional[str] = None,
📝 Committable suggestion

Suggested change
-        user: str | None = None,
+        user: Optional[str] = None,
🧰 Tools
🪛 Pylint (3.3.7)

[error] 265-265: unsupported operand type(s) for |

(E1131)

🤖 Prompt for AI Agents
In edenai_apis/apis/minimax/minimax_api.py at line 265, replace the union type
hint using the `|` operator with the `Union` type from the typing module to
ensure compatibility with Python versions older than 3.10. Import `Union` if not
already imported, and change `user: str | None = None` to `user: Union[str,
None] = None`.
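A small standalone sketch of the portability difference (not the project files themselves):

from typing import Optional

# Portable: valid on every supported Python 3 version.
def greet(user: Optional[str] = None) -> str:
    return f"hello, {user or 'anonymous'}"

# `str | None` only evaluates on Python 3.10+; on 3.9 and earlier it raises
# TypeError at definition time unless `from __future__ import annotations`
# defers evaluation (a string annotation, as below, also sidesteps it).
def greet_modern(user: "str | None" = None) -> str:
    return f"hello, {user or 'anonymous'}"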


    async def llm__achat(
        self,
        messages: List = [],

⚠️ Potential issue

Fix mutable default argument.

Using a mutable list as a default argument can lead to unexpected behavior where the same list instance is shared across function calls.

-        messages: List = [],
+        messages: Optional[List] = None,

Then initialize within the function:

if messages is None:
    messages = []
🧰 Tools
🪛 Ruff (0.12.2)

94-94: Do not use mutable data structures for argument defaults

Replace with None; initialize within function

(B006)

🤖 Prompt for AI Agents
In edenai_apis/apis/xai/xai_llm_api.py at line 94, the function uses a mutable
list as a default argument which can cause shared state issues across calls.
Change the default value of the messages parameter from an empty list to None,
and then inside the function, add a check to initialize messages to an empty
list if it is None.

        api_key: Optional[str] = None,
        model_list: Optional[list] = None,  # pass in a list of api_base,keys, etc.
        drop_invalid_params: bool = True,  # If true, all the invalid parameters will be ignored (dropped) before sending to the model
        user: str | None = None,

⚠️ Potential issue

Fix union operator syntax for older Python versions.

The | union operator syntax requires Python 3.10+. Use Union from typing for broader compatibility.

-        user: str | None = None,
+        user: Optional[str] = None,

Committable suggestion skipped: line range outside the PR's diff.

🧰 Tools
🪛 Pylint (3.3.7)

[error] 131-131: unsupported operand type(s) for |

(E1131)

🤖 Prompt for AI Agents
In edenai_apis/apis/xai/xai_llm_api.py at line 131, replace the union type hint
using the `|` operator with the `Union` type from the typing module to ensure
compatibility with Python versions older than 3.10. Import `Union` if not
already imported, and change the annotation from `str | None` to `Union[str,
None]`.

coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (3)
edenai_apis/apis/xai/info.json (1)

227-234: Ensure the new "achat" capability is fully wired & documented

The "achat" key is now exposed with the same version string as "chat", but:

  1. Upstream runtime: verify that llm__achat is actually declared in xai_llm_api.py and registered in provider discovery; otherwise this entry will appear in the public catalog but trigger 501/404 at runtime.
  2. Constraints: unlike most blocks in this file, the "achat" entry (and its sibling "chat") omit a "constraints" object. If the async and sync endpoints share the same model list & default, consider duplicating the "constraints" from "chat" (or declaring a common constant) to avoid client-side ambiguity.
  3. Tests / schema validation: add/update JSON-schema or unit tests that iterate over info.json capabilities to include "achat" so regressions are caught automatically.

If everything is already handled in the implementation layer and tests, feel free to ignore; otherwise a quick follow-up PR to address the above will save support tickets.
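For reference, a minimal sketch of the new capability entry (the "achat" key and version string come from this PR's summary; the surrounding structure is an assumption):

{
  "llm": {
    "chat": { "version": "llmengine (v2)" },
    "achat": { "version": "llmengine (v2)" }
  }
}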

edenai_apis/apis/microsoft/info.json (2)

145-145: Prefer multi-line formatting for long arrays

Flattening the file_extensions list onto a single line trims bytes but hurts future diff readability and increases merge-conflict risk. Unless size is truly critical, keep one-item-per-line.


785-785: Same readability issue for languages / documents lists

The same one-liner compression was applied to multiple huge arrays. The original multi-line layout is far easier to scan and maintain. Consider reverting for consistency and maintainability.

Also applies to: 1162-1162, 1178-1179, 1205-1208

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 60c0b87 and f2cd100.

📒 Files selected for processing (13)
  • edenai_apis/apis/amazon/info.json (1 hunks)
  • edenai_apis/apis/anthropic/info.json (2 hunks)
  • edenai_apis/apis/deepseek/info.json (1 hunks)
  • edenai_apis/apis/groq/info.json (1 hunks)
  • edenai_apis/apis/iointelligence/info.json (1 hunks)
  • edenai_apis/apis/meta/info.json (2 hunks)
  • edenai_apis/apis/microsoft/info.json (7 hunks)
  • edenai_apis/apis/minimax/info.json (1 hunks)
  • edenai_apis/apis/mistral/info.json (1 hunks)
  • edenai_apis/apis/openai/info.json (1 hunks)
  • edenai_apis/apis/replicate/info.json (1 hunks)
  • edenai_apis/apis/together_ai/info.json (1 hunks)
  • edenai_apis/apis/xai/info.json (1 hunks)
✅ Files skipped from review due to trivial changes (10)
  • edenai_apis/apis/deepseek/info.json
  • edenai_apis/apis/together_ai/info.json
  • edenai_apis/apis/meta/info.json
  • edenai_apis/apis/iointelligence/info.json
  • edenai_apis/apis/anthropic/info.json
  • edenai_apis/apis/minimax/info.json
  • edenai_apis/apis/mistral/info.json
  • edenai_apis/apis/openai/info.json
  • edenai_apis/apis/replicate/info.json
  • edenai_apis/apis/amazon/info.json
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: test
🔇 Additional comments (3)
edenai_apis/apis/xai/info.json (1)

1-226: Formatting-only changes look fine
All edits up to the “llm” block are indentation / whitespace re-flows. JSON remains syntactically valid and semantically unchanged.

edenai_apis/apis/microsoft/info.json (1)

1726-1730: Async capability correctly exposed – just verify provider registry

The new "llm" → "achat" entry with version "llmengine (v2)" matches the newly-added llm__achat method. Looks good. Please run the capability-resolution tests (or a quick grep '"achat"') to ensure the registry now discovers this async variant everywhere it’s needed.

edenai_apis/apis/groq/info.json (1)

15-22: Downstream code supports the new llm section – no changes required

  • The JSON loader (load_info_file/load_provider) treats all top-level keys generically, so adding "llm" has no impact.
  • list_features dynamically picks up llm__chat and llm__achat methods, so runtime discovery and any CLI listing will include both.
  • The existing tests (test_version_exists, test_implemented_features_documented, etc.) already validate that info.json contains both llm.chat and llm.achat entries and will pass as-is.

@juandavidcruzgomez merged commit a27ba0a into master on Jul 28, 2025
3 of 4 checks passed