
Conversation

@Avi-Robusta (Contributor)

Requires a new relay release for it to work

coderabbitai bot commented Aug 19, 2025

Walkthrough

Adds cluster-aware remote Robusta model discovery, moves Robusta API-key loading into the Config class (exposing SupabaseDal at module scope), adds LOAD_ALL_ROBUSTA_MODELS env flag, updates server startup and handlers to call Config.load_robusta_api_key, and updates tests to cover these flows.
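
In practice, the new call surface described above looks roughly like this (a sketch only; the SupabaseDal constructor arguments and the exact startup wiring are assumptions, not code from this PR):

# Hypothetical wiring sketch based on the walkthrough, not verbatim server.py code.
from holmes.config import Config
from holmes.core.supabase_dal import SupabaseDal

config = Config.load_from_env()         # model_post_init may already register Robusta models
dal = SupabaseDal(config.cluster_name)  # constructor arguments are an assumption
config.load_robusta_api_key(dal=dal)    # the new Config method that replaces the old util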

Changes

Cohort / File(s) and summary:

  • Config: Robusta loading (holmes/config.py)
    Added configure_robusta_ai_model, _load_default_robusta_config, and load_robusta_api_key; refactored model_post_init to delegate Robusta loading; added account_id and session_token fields; moved the SupabaseDal import to module scope; populates _model_list from the remote fetch and falls back to the default on errors.
  • Robusta client: model fetch (holmes/clients/robusta_client.py)
    Added a cached fetch_robusta_models(account_id, token) -> Optional[List[str]] that POSTs to {ROBUSTA_API_ENDPOINT}/api/llm/models with a 10s timeout, returns "models" on success, and logs and returns None on exception (see the sketch after this list).
  • Env vars (holmes/common/env_vars.py)
    Added LOAD_ALL_ROBUSTA_MODELS = load_bool("LOAD_ALL_ROBUSTA_MODELS", True) to control loading of remote Robusta models.
  • Server: startup & handlers (server.py)
    Replaced module-level util calls with config.load_robusta_api_key(dal=dal); added the call to sync_before_server_start; updated per-request calls to use the Config method.
  • Utils: removed after relocation (holmes/utils/robusta.py)
    Removed the former module-level load_robusta_api_key(dal, config); its logic migrated into Config.load_robusta_api_key.
  • Core usage sites (holmes/core/investigation.py)
    Switched from the module-level util to config.load_robusta_api_key(dal=dal) in investigate_issues and get_investigation_context.
  • Tests: config Robusta loading (tests/config_class/test_config_load_robusta_ai.py, tests/test_server_endpoints.py)
    Tests updated to mock the cluster context and fetch_robusta_models, expect cluster-named models (e.g., Robusta/test), add credential-mocking helpers, adjust test signatures, and move mocks to patch holmes.config.Config.load_robusta_api_key.
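
For orientation, here is a minimal sketch of what the new client helper could look like, based only on the summary above; the request payload, the endpoint constant's value, and the logging details are assumptions rather than the shipped code.

# Hypothetical sketch of holmes/clients/robusta_client.fetch_robusta_models.
# Only the function name, signature, endpoint path, timeout, and None-on-error
# behavior come from the PR summary; the payload shape is an assumption.
import logging
from functools import cache
from typing import List, Optional

import requests

ROBUSTA_API_ENDPOINT = "https://api.robusta.dev"  # assumed value for illustration


@cache
def fetch_robusta_models(account_id: str, token: str) -> Optional[List[str]]:
    try:
        resp = requests.post(
            f"{ROBUSTA_API_ENDPOINT}/api/llm/models",
            json={"account_id": account_id, "session_token": token},  # assumed payload
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json().get("models")
    except Exception:
        logging.exception("Failed to fetch Robusta models")
        return None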

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant App
  participant Config
  participant SupabaseDal
  participant RobustaAPI as Robusta API
  participant Logger

  App->>Config: model_post_init()
  alt _should_load_robusta_ai() is false
    Config-->>App: return
  else _should_load_robusta_ai() is true
    Config->>Config: configure_robusta_ai_model()
    alt missing cluster or LOAD_ALL_ROBUSTA_MODELS is false
      Config->>Logger: load default Robusta model
      Config->>Config: _load_default_robusta_config()
      Config-->>App: return
    else
      Config->>SupabaseDal: get_ai_credentials(cluster_name)
      SupabaseDal-->>Config: account_id, token
      Config->>RobustaAPI: POST /api/llm/models (10s)
      alt success + models present
        RobustaAPI-->>Config: {"models": [...]}
        Config->>Logger: log each model
        Config-->>App: model_list populated
      else error or empty
        RobustaAPI--xConfig: error/empty
        Config->>Logger: log error
        Config->>Config: _load_default_robusta_config()
        Config-->>App: return
      end
    end
  end
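
To make the diagram above concrete, here is a rough, self-contained sketch of that configuration path; the class scaffolding, the dal attribute, and the constants are illustrative assumptions, not the real holmes/config.py bodies.

# Rough sketch of the flow in the first diagram. fetch_robusta_models,
# get_ai_credentials, configure_robusta_ai_model, and _load_default_robusta_config
# are named in this PR; everything else is illustrative scaffolding.
import logging
from typing import Any, Dict, List, Optional

ROBUSTA_API_ENDPOINT = "https://api.robusta.dev"  # assumed value
LOAD_ALL_ROBUSTA_MODELS = True                    # mirrors the new env flag
ROBUSTA_AI_MODEL_NAME = "Robusta"                 # assumed default model name


def fetch_robusta_models(account_id: str, token: str) -> Optional[List[str]]:
    ...  # see the client sketch earlier; returns model names or None


class ConfigSketch:
    def __init__(self, cluster_name: Optional[str], dal: Any) -> None:
        self.cluster_name = cluster_name
        self.dal = dal
        self._model_list: Dict[str, dict] = {}

    def _load_default_robusta_config(self) -> None:
        logging.info("Loading default Robusta AI model")
        self._model_list[ROBUSTA_AI_MODEL_NAME] = {"base_url": ROBUSTA_API_ENDPOINT}

    def configure_robusta_ai_model(self) -> None:
        try:
            if not self.cluster_name or not LOAD_ALL_ROBUSTA_MODELS:
                self._load_default_robusta_config()
                return
            account_id, token = self.dal.get_ai_credentials()
            models = fetch_robusta_models(account_id, token)
            if not models:
                self._load_default_robusta_config()
                return
            for model in models:
                logging.info("Loading Robusta AI model: %s", model)
                self._model_list[model] = {"base_url": f"{ROBUSTA_API_ENDPOINT}/llm/{model}"}
        except Exception:
            logging.exception("Remote Robusta model discovery failed; using the default model")
            self._load_default_robusta_config()
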
sequenceDiagram
  autonumber
  participant Server
  participant Config
  participant SupabaseDal
  participant Logger

  Server->>Config: sync_before_server_start()
  Config->>Config: load_robusta_api_key(dal)
  alt credentials available and ROBUSTA_AI allowed
    Config->>SupabaseDal: get_ai_credentials()
    SupabaseDal-->>Config: account_id, token
    Config->>Config: set api_key & account/session
    Config-->>Server: return
  else missing creds or flag disables
    Config->>Logger: use default Robusta config or no-op
    Config-->>Server: return
  end

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Suggested reviewers

  • nherment
  • arikalon1


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limits.

🔭 Outside diff range comments (1)
holmes/utils/robusta.py (1)

6-10: Add explicit return type to satisfy mypy and project typing requirements.

The project enforces type hints; annotate the function to return None.

Apply this diff:

-def load_robusta_api_key(dal: SupabaseDal, config: Config):
+def load_robusta_api_key(dal: SupabaseDal, config: Config) -> None:
🧹 Nitpick comments (3)
holmes/utils/robusta.py (1)

1-1: Decouple ROBUSTA_AI import and avoid runtime import of Config (type-only import).

Importing ROBUSTA_AI from holmes.config couples this utility to a heavy module and risks circular imports. Import ROBUSTA_AI directly from holmes.common.env_vars, and import Config only for type-checking.

Apply this diff within the selected range:

-from holmes.config import Config, ROBUSTA_AI
+from holmes.common.env_vars import ROBUSTA_AI

Additionally, add the following near the top of the file (outside the selected range) to keep type hints without a runtime import:

from __future__ import annotations
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from holmes.config import Config
holmes/config.py (2)

139-146: Default _model_list to a dict to avoid None-handling pitfalls.

If parse_models_file returns None when the file is missing, later dict operations will fail silently or cause exceptions. Default to {}.

Apply this diff:

-        self._model_list = parse_models_file(MODEL_LIST_FILE_LOCATION)
+        self._model_list = parse_models_file(MODEL_LIST_FILE_LOCATION) or {}

146-154: Optional: Fallback to default Robusta model when cluster_name is missing.

Right now, a missing cluster_name returns early and skips the default fallback (which only happens on exceptions). If the intent is to use Robusta when enabled but cluster_name is unavailable, consider loading the default model here.

Apply this diff:

-            if not self.cluster_name:
-                return
+            if not self.cluster_name:
+                logging.info("No cluster_name configured; loading default Robusta AI model")
+                self._model_list[ROBUSTA_AI_MODEL_NAME] = {
+                    "base_url": ROBUSTA_API_ENDPOINT,
+                }
+                return
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between bcea41f and 9e4a363.

📒 Files selected for processing (2)
  • holmes/config.py (3 hunks)
  • holmes/utils/robusta.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

📄 CodeRabbit Inference Engine (CLAUDE.md)

**/*.py: ALWAYS place Python imports at the top of the file, not inside functions or methods
Type hints are required (project is type-checked with mypy)
Use Ruff for formatting and linting (configured in pyproject.toml)

Files:

  • holmes/utils/robusta.py
  • holmes/config.py
🧠 Learnings (1)
📚 Learning: 2025-07-08T08:45:41.069Z
Learnt from: nherment
PR: robusta-dev/holmesgpt#610
File: .github/workflows/llm-evaluation.yaml:39-42
Timestamp: 2025-07-08T08:45:41.069Z
Learning: The robusta-dev/holmesgpt codebase has comprehensive existing validation for Azure environment variables (AZURE_API_BASE, AZURE_API_KEY, AZURE_API_VERSION) and MODEL in tests/llm/utils/classifiers.py, tests/llm/conftest.py, and holmes/core/llm.py. Don't suggest adding redundant validation logic.

Applied to files:

  • holmes/utils/robusta.py
🧬 Code Graph Analysis (1)
holmes/config.py (1)
holmes/core/supabase_dal.py (1)
  • SupabaseDal (57-583)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: build (3.12)
  • GitHub Check: build (3.11)
  • GitHub Check: build (3.10)
🔇 Additional comments (4)
holmes/config.py (4)

36-36: LGTM: SupabaseDal import moved to runtime where it’s needed.

This aligns with the new loading flow. Watch for import cycles, but at present there’s no direct usage of utils/robusta from here, so this should be safe.


170-176: Good resilience: sane fallback on failures.

Catching exceptions and loading a default Robusta model guarantees a usable configuration even when the remote discovery fails.


178-196: Gating logic reads clean and preserves backward compatibility.

The tri-state handling of ROBUSTA_AI, the MODEL env short-circuit, and honoring a provided model list make sense.


9-9: requests dependency confirmed in pyproject.toml

  • Found at pyproject.toml (line 54):
    requests = "^2.32.4"

No action needed for declaration.
Optionally, if you’re running mypy in strict mode and want to drop the # type: ignore, add the official stub package:

poetry add -D types-requests

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
holmes/config.py (1)

64-71: Bug: env var substitution result is discarded in parse_models_file.

The loop rebinds params locally and doesn’t write back to models, so replacements don’t take effect.

-def parse_models_file(path: str):
-    models = load_yaml_file(path, raise_error=False, warn_not_found=False)
-
-    for _, params in models.items():
-        params = replace_env_vars_values(params)
-
-    return models
+def parse_models_file(path: str):
+    models = load_yaml_file(path, raise_error=False, warn_not_found=False) or {}
+    for key, params in list(models.items()):
+        models[key] = replace_env_vars_values(params)
+    return models
♻️ Duplicate comments (1)
holmes/config.py (1)

519-525: Inject API key by comparing hostnames, not by 'robusta.dev' substring.

This will break for staging/on‑prem endpoints where the host differs from robusta.dev while still being the configured ROBUSTA_API_ENDPOINT. Compare hosts instead.

-        if (
-            not api_key
-            and "robusta.dev" in model_params.get("base_url", "")
-            and self.api_key
-        ):
-            api_key = self.api_key.get_secret_value()
+        if not api_key and self.api_key:
+            endpoint_host = urlparse(ROBUSTA_API_ENDPOINT).netloc
+            base_host = urlparse(model_params.get("base_url", "")).netloc
+            if endpoint_host and base_host and endpoint_host == base_host:
+                api_key = self.api_key.get_secret_value()

Add this import at the top of the file:

from urllib.parse import urlparse
🧹 Nitpick comments (1)
holmes/config.py (1)

146-175: Robust remote model loader with fallback — nice.

  • Uses a 10s timeout and raise_for_status.
  • Falls back to a default config on any exception.
  • Per-model base_url construction looks correct.

Two small follow-ups to harden behavior:

  • Consider typing the response structure (TypedDict) to guard .get("models", ...) shape via mypy (a possible shape is sketched after this comment).
  • Log the count of models loaded for easier troubleshooting.

If you want, I can add a small unit test that simulates: success, HTTP error, DAL exception, and no cluster_name paths.
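
A possible shape for that TypedDict, together with the suggested count log (illustrative only; the "models" key matches the client described earlier, and the helper name is hypothetical):

# Illustrative only: a typed response shape for /api/llm/models plus a count log.
# RobustaModelsResponse and parse_models_response are hypothetical names.
import logging
from typing import List, Optional, TypedDict


class RobustaModelsResponse(TypedDict, total=False):
    models: List[str]


def parse_models_response(payload: RobustaModelsResponse) -> Optional[List[str]]:
    models = payload.get("models")
    if models:
        logging.info("Loaded %d Robusta models", len(models))
    return models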

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro


📥 Commits

Reviewing files that changed from the base of the PR and between 0caf2d7 and 7851df9.

📒 Files selected for processing (1)
  • holmes/config.py (3 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

📄 CodeRabbit Inference Engine (CLAUDE.md)

**/*.py: ALWAYS place Python imports at the top of the file, not inside functions or methods
Type hints are required (project is type-checked with mypy)
Use Ruff for formatting and linting (configured in pyproject.toml)

Files:

  • holmes/config.py
🧬 Code Graph Analysis (1)
holmes/config.py (1)
holmes/core/supabase_dal.py (2)
  • SupabaseDal (57-583)
  • get_ai_credentials (463-475)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: llm_evals
  • GitHub Check: Pre-commit checks
  • GitHub Check: Pre-commit checks
🔇 Additional comments (2)
holmes/config.py (2)

36-36: SupabaseDal import at runtime scope LGTM.

This aligns with the new runtime use here and avoids TYPE_CHECKING hacks. Assuming no circular import (looks safe from the provided SupabaseDal snippet).


141-145: Good gating: early-return when ROBUSTA AI should not be loaded.

This keeps model init fast and side-effect free unless explicitly enabled.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (3)
holmes/config.py (3)

9-9: Remove unnecessary type ignore on requests import and rely on typing stubs.

requests ships with typing or can be satisfied via types-requests. Drop the suppression to comply with type-checking guidelines.

-import requests  # type: ignore
+import requests

Also add the typing stub in your dev dependencies (e.g., types-requests) if mypy requires it.


164-170: Robusta API key is not passed to DefaultLLM; inject it based on endpoint host match.

Models loaded here only set base_url; _get_llm doesn't fall back to self.api_key. Calls to Robusta endpoints will go out unauthenticated. Inject the key when the model's base_url host matches ROBUSTA_API_ENDPOINT.

Proposed change outside the selected range (in _get_llm and imports):

-from typing import TYPE_CHECKING, Any, List, Optional, Union
+from typing import TYPE_CHECKING, Any, List, Optional, Union
+from urllib.parse import urlparse
 def _get_llm(self, model_key: Optional[str] = None, tracer=None) -> "LLM":
     api_key: Optional[str] = None
     model = self.model
     model_params = {}
     if self._model_list:
         # get requested model or the first credentials if no model requested.
         model_params = (
             self._model_list.get(model_key, {}).copy()
             if model_key
             else next(iter(self._model_list.values())).copy()
         )
         api_key = model_params.pop("api_key", api_key)
         model = model_params.pop("model", model)
 
+    # Fallback: inject config.api_key for Robusta endpoints when not explicitly provided
+    if not api_key and self.api_key:
+        endpoint_host = urlparse(ROBUSTA_API_ENDPOINT).netloc
+        base_host = urlparse(model_params.get("base_url", "")).netloc
+        if endpoint_host and base_host and endpoint_host == base_host:
+            api_key = self.api_key.get_secret_value()
+
     return DefaultLLM(model, api_key, model_params, tracer)  # type: ignore

176-180: Annotate return type for mypy compliance.

This method returns None; add the explicit annotation.

-    def _load_default_robusta_config(self):
+    def _load_default_robusta_config(self) -> None:
         logging.info("Loading default Robusta AI model")
         self._model_list[ROBUSTA_AI_MODEL_NAME] = {
             "base_url": ROBUSTA_API_ENDPOINT,
         }
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro


📥 Commits

Reviewing files that changed from the base of the PR and between 7851df9 and e804ef7.

📒 Files selected for processing (2)
  • holmes/config.py (3 hunks)
  • holmes/utils/robusta.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • holmes/utils/robusta.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

📄 CodeRabbit Inference Engine (CLAUDE.md)

**/*.py: ALWAYS place Python imports at the top of the file, not inside functions or methods
Type hints are required (project is type-checked with mypy)
Use Ruff for formatting and linting (configured in pyproject.toml)

Files:

  • holmes/config.py
🧬 Code Graph Analysis (1)
holmes/config.py (1)
holmes/core/supabase_dal.py (2)
  • SupabaseDal (57-583)
  • get_ai_credentials (463-475)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
  • GitHub Check: build (3.12)
  • GitHub Check: build (3.10)
  • GitHub Check: build (3.11)
  • GitHub Check: build (3.12)
  • GitHub Check: build (3.10)
  • GitHub Check: build (3.11)
🔇 Additional comments (2)
holmes/config.py (2)

36-36: Importing SupabaseDal at module scope is appropriate.

It’s used at runtime in model initialization, so moving it out of TYPE_CHECKING is correct.


140-146: Nice refactor: gate and delegate Robusta AI loading.

Early return via _should_load_robusta_ai() and delegating to _load_robusta_ai_models() improves readability and separation of concerns.

@Avi-Robusta Avi-Robusta force-pushed the multiple_llm_support branch from 278e7b3 to 968bd8e on August 19, 2025 12:43
@Avi-Robusta Avi-Robusta requested a review from moshemorad August 19, 2025 12:57
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
server.py (1)

85-93: Guard startup against failures in load_robusta_api_key

Uncaught exceptions here can fail the entire server startup, unlike the surrounding steps which are protected. Wrap this call similarly to keep startup resilient.

 def sync_before_server_start():
-    load_robusta_api_key(dal=dal, config=config)
+    try:
+        load_robusta_api_key(dal=dal, config=config)
+    except Exception:
+        logging.error("Failed to load Robusta API key", exc_info=True)
+        # Continue startup; endpoints also load the key on-demand.
     try:
         update_holmes_status_in_db(dal, config)

Optional: add an explicit return type (project enforces typing).

def sync_before_server_start() -> None:
    ...
♻️ Duplicate comments (2)
holmes/clients/robusta_client.py (1)

17-19: Add missing type hint and avoid suppressing requests typing

  • The cluster_name parameter is missing a type annotation; project requires type hints.
  • Also, avoid # type: ignore on the requests import; prefer adding stubs so mypy can type-check.

Apply this diff for the function signature:

-@cache
-def fetch_robusta_models(cluster_name) -> Optional[List[str]]:
+@cache
+def fetch_robusta_models(cluster_name: str) -> Optional[List[str]]:

Additionally (outside the selected lines):

  • Remove # type: ignore from the import requests line at the top of this file.
  • Ensure the repo includes typing stubs for requests (e.g., add types-requests to dev dependencies) so mypy passes without ignores, per prior guidance already given in this codebase.

Also applies to: 23-33

holmes/config.py (1)

176-183: Annotate _load_default_robusta_config return type

Matches project typing requirements and a prior review suggestion.

-    def _load_default_robusta_config(self):
+    def _load_default_robusta_config(self) -> None:
🧹 Nitpick comments (3)
holmes/common/env_vars.py (1)

30-30: Make LOAD_ALL_ROBUSTA_MODELS type explicit

Small nit for mypy clarity: this is an Optional[bool]. Being explicit helps with IDEs and static checks.

-LOAD_ALL_ROBUSTA_MODELS = load_bool("LOAD_ALL_ROBUSTA_MODELS", True)
+LOAD_ALL_ROBUSTA_MODELS: Optional[bool] = load_bool("LOAD_ALL_ROBUSTA_MODELS", True)
holmes/config.py (1)

41-41: Remove unused runtime import of SupabaseDal

This import isn’t used at runtime (type hints use forward refs and TYPE_CHECKING), and may trigger unnecessary initialization. Keep it under TYPE_CHECKING only to avoid Ruff F401 and overhead.

-from holmes.core.supabase_dal import SupabaseDal
tests/config_class/test_config_load_robusta_ai.py (1)

87-97: Add tests for feature-flag = False and empty/None remote responses

Two valuable scenarios to add:

  • LOAD_ALL_ROBUSTA_MODELS set to "false" should trigger default Robusta config (not remote list).
  • Remote returns [] or None should also fall back to default.

Example additions:

@patch("holmes.config.ROBUSTA_AI", True)
@patch("holmes.config.Config._Config__get_cluster_name", return_value="test")
@patch("holmes.config.fetch_robusta_models", return_value=[])
def test_server_falls_back_when_remote_empty(mock_fetch, mock_cluster, *, monkeypatch):
    monkeypatch.setenv("API_KEY", "test")
    config = Config.load_from_env()
    assert "Robusta" in config._model_list  # default loaded

@patch("holmes.config.ROBUSTA_AI", True)
@patch("holmes.config.Config._Config__get_cluster_name", return_value="test")
@patch("holmes.config.fetch_robusta_models", return_value=["Robusta/test"])
def test_server_flag_disables_remote_models(mock_fetch, mock_cluster, *, monkeypatch):
    monkeypatch.setenv("API_KEY", "test")
    monkeypatch.setenv("LOAD_ALL_ROBUSTA_MODELS", "false")
    config = Config.load_from_env()
    assert "Robusta" in config._model_list
    assert "Robusta/test" not in config._model_list
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro


📥 Commits

Reviewing files that changed from the base of the PR and between 278e7b3 and bd08b74.

📒 Files selected for processing (5)
  • holmes/clients/robusta_client.py (2 hunks)
  • holmes/common/env_vars.py (1 hunks)
  • holmes/config.py (3 hunks)
  • server.py (1 hunks)
  • tests/config_class/test_config_load_robusta_ai.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit Inference Engine (CLAUDE.md)

**/*.py: ALWAYS place Python imports at the top of the file, not inside functions or methods
Type hints are required (project is type-checked with mypy)
Use Ruff for formatting and linting (configured in pyproject.toml)

Files:

  • server.py
  • holmes/common/env_vars.py
  • holmes/clients/robusta_client.py
  • tests/config_class/test_config_load_robusta_ai.py
  • holmes/config.py
{tests/**/*.py,pyproject.toml}

📄 CodeRabbit Inference Engine (CLAUDE.md)

Only use pytest markers that are defined in pyproject.toml; never introduce undefined markers in tests

Files:

  • tests/config_class/test_config_load_robusta_ai.py
🧬 Code Graph Analysis (4)
server.py (1)
holmes/utils/robusta.py (1)
  • load_robusta_api_key (6-10)
holmes/clients/robusta_client.py (1)
holmes/core/supabase_dal.py (1)
  • get_ai_credentials (463-475)
tests/config_class/test_config_load_robusta_ai.py (1)
holmes/config.py (2)
  • Config (78-528)
  • load_from_env (238-268)
holmes/config.py (2)
holmes/clients/robusta_client.py (1)
  • fetch_robusta_models (18-32)
holmes/core/supabase_dal.py (1)
  • SupabaseDal (57-583)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
  • GitHub Check: build (3.10)
  • GitHub Check: build (3.12)
  • GitHub Check: build (3.12)
  • GitHub Check: build (3.11)
  • GitHub Check: build (3.10)
  • GitHub Check: build (3.11)
  • GitHub Check: llm_evals
🔇 Additional comments (6)
holmes/config.py (3)

13-20: LGTM: wiring for remote models and env flags looks correct

Importing fetch_robusta_models and the relevant env vars cleanly sets up the new flow and keeps concerns separated.


146-150: LGTM: clearer post-init flow

Early-returning when Robusta AI shouldn’t load and delegating to configure_robusta_ai_model() keeps the init path maintainable and testable.


151-175: LGTM: robust remote-model loading with sane fallbacks

  • Skips when no api_key.
  • Falls back when cluster is missing, flag is disabled, fetch fails, or returns empty.
  • Populates model list with per-model base_url and api_key as requested.

No blockers here.

tests/config_class/test_config_load_robusta_ai.py (3)

12-18: LGTM: verifies loading when ROBUSTA_AI=True

Solid coverage of the happy path with cluster + API key present.


20-30: LGTM: default/no-other-models behavior when ROBUSTA_AI is unset

Good assertion that only the fetched Robusta model is present.


41-55: LGTM: coexistence with existing models

Confirms Robusta/test is added alongside preexisting models; this guards against regressions in merging logic.

@Avi-Robusta Avi-Robusta requested a review from moshemorad August 20, 2025 07:06
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
server.py (1)

185-201: Fix typo: request.stored_instrucitons → request.stored_instructions; also move API key loading inside try/except.

  • The flag name typo prevents stored instructions from ever being used.
  • load_robusta_api_key is outside the try/except, causing inconsistent error handling vs other endpoints.

Apply this diff:

-@app.post("/api/workload_health_check")
-def workload_health_check(request: WorkloadHealthRequest):
-    config.load_robusta_api_key(dal=dal)
-    try:
+@app.post("/api/workload_health_check")
+def workload_health_check(request: WorkloadHealthRequest):
+    try:
+        config.load_robusta_api_key(dal=dal)
         resource = request.resource
         workload_alerts: list[str] = []
         if request.alert_history:
             workload_alerts = dal.get_workload_issues(
                 resource, request.alert_history_since_hours
             )

         instructions = request.instructions or []
-        if request.stored_instrucitons:
+        if request.stored_instructions:
             stored_instructions = dal.get_resource_instructions(
                 resource.get("kind", "").lower(), resource.get("name")
             )
             if stored_instructions:
                 instructions.extend(stored_instructions.instructions)
♻️ Duplicate comments (3)
holmes/config.py (3)

187-194: Annotate return type of _load_default_robusta_config.

Mypy compliance: explicitly mark as returning None.

Apply this diff:

-    def _load_default_robusta_config(self):
+    def _load_default_robusta_config(self) -> None:

168-181: Endpoint path moved out of config per feedback.

The HTTP model-fetch request was moved to holmes/clients/robusta_client.fetch_robusta_models, aligning with prior feedback to keep request logic out of Config.


176-181: API key included per-model in model list.

Models now carry api_key alongside base_url, matching the desired structure.

🧹 Nitpick comments (3)
holmes/core/investigation.py (1)

30-31: Load Robusta API key at the call sites — good placement; ensure idempotency.

Both call sites proactively load credentials before constructing AI instances, which aligns with the new Config-based flow. Ensure load_robusta_api_key is idempotent/cheap (it appears to use DAL token caching), as these paths are hot; a sketch of such a guard follows this comment.

Optionally, add a type for trace_span to satisfy strict mypy configs:

-def investigate_issues(..., trace_span=DummySpan(),) -> InvestigationResult:
+def investigate_issues(..., trace_span: DummySpan = DummySpan(),) -> InvestigationResult:

Also applies to: 84-85
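
One way to keep those per-request calls cheap is an early return once credentials are already set, shown here as a free function over a config object for brevity; the shipped method may instead rely on DAL-side token caching or need token refresh, so treat this as a hedged sketch.

# Hedged sketch of an idempotency guard for Config.load_robusta_api_key.
# Field names follow the walkthrough; whether api_key mirrors the session token
# and the early return itself are assumptions, not verified code.
from typing import Any

from pydantic import SecretStr


def load_robusta_api_key(config: Any, dal: Any) -> None:
    if config.api_key and config.session_token:
        return  # credentials already present; skip the DAL round-trip on hot paths
    account_id, token = dal.get_ai_credentials()
    config.account_id = account_id
    config.session_token = SecretStr(token)
    config.api_key = SecretStr(token)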

holmes/config.py (2)

154-186: Solid multi-model loading with fallbacks; add a log when remote returns empty/no models and tighten typing to drop a type: ignore.

  • Fallbacks cover missing cluster, disabled feature flag, missing creds, fetch failures.
  • Add a warning when models is empty to aid observability.
  • Since self.api_key is ensured above when entering the models loop, assert it to help mypy and remove the inline type ignore.

Apply this diff:

-            models = fetch_robusta_models(
+            models = fetch_robusta_models(
                 self.account_id, self.session_token.get_secret_value()
             )
             if not models:
-                self._load_default_robusta_config()
+                logging.warning(
+                    "No Robusta models returned from %s; falling back to default",
+                    ROBUSTA_API_ENDPOINT,
+                )
+                self._load_default_robusta_config()
                 return

-            for model in models:
+            assert self.api_key is not None  # narrow Optional for type-checkers
+            for model in models:
                 logging.info(f"Loading Robusta AI model: {model}")
                 self._model_list[model] = {
                     "base_url": f"{ROBUSTA_API_ENDPOINT}/llm/{model}",
-                    "api_key": self.api_key.get_secret_value(),  # type: ignore
+                    "api_key": self.api_key.get_secret_value(),
                 }

541-547: Annotate return type of load_robusta_api_key.

Public method should declare it returns None; improves readability and mypy’s strictness.

Apply this diff:

-    def load_robusta_api_key(self, dal: SupabaseDal):
+    def load_robusta_api_key(self, dal: SupabaseDal) -> None:
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro


📥 Commits

Reviewing files that changed from the base of the PR and between bd08b74 and 0d53117.

📒 Files selected for processing (6)
  • holmes/clients/robusta_client.py (2 hunks)
  • holmes/config.py (5 hunks)
  • holmes/core/investigation.py (2 hunks)
  • holmes/utils/robusta.py (0 hunks)
  • server.py (5 hunks)
  • tests/test_server_endpoints.py (4 hunks)
💤 Files with no reviewable changes (1)
  • holmes/utils/robusta.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit Inference Engine (CLAUDE.md)

**/*.py: ALWAYS place Python imports at the top of the file, not inside functions or methods
Type hints are required (project is type-checked with mypy)
Use Ruff for formatting and linting (configured in pyproject.toml)

Files:

  • holmes/clients/robusta_client.py
  • holmes/core/investigation.py
  • tests/test_server_endpoints.py
  • server.py
  • holmes/config.py
{tests/**/*.py,pyproject.toml}

📄 CodeRabbit Inference Engine (CLAUDE.md)

Only use pytest markers that are defined in pyproject.toml; never introduce undefined markers in tests

Files:

  • tests/test_server_endpoints.py
🧬 Code Graph Analysis (3)
holmes/core/investigation.py (1)
holmes/config.py (1)
  • load_robusta_api_key (541-546)
server.py (1)
holmes/config.py (1)
  • load_robusta_api_key (541-546)
holmes/config.py (2)
holmes/clients/robusta_client.py (1)
  • fetch_robusta_models (18-31)
holmes/core/supabase_dal.py (1)
  • get_ai_credentials (463-475)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
  • GitHub Check: build (3.12)
  • GitHub Check: build (3.11)
  • GitHub Check: build (3.10)
  • GitHub Check: build (3.12)
  • GitHub Check: build (3.10)
  • GitHub Check: build (3.11)
  • GitHub Check: llm_evals
🔇 Additional comments (4)
holmes/clients/robusta_client.py (1)

1-2: LGTM on new imports and module-level logging.

Imports are correctly placed at the top and used appropriately.

tests/test_server_endpoints.py (1)

12-12: Correctly updated patch targets to Config.load_robusta_api_key.

Mocks now align with the new integration surface. Signatures and decorator order remain correct.

Also applies to: 81-81, 143-143, 208-214

server.py (1)

254-256: Consistent error handling around credential loading in request handlers.

Calls to config.load_robusta_api_key are inside try/except blocks here, which is the right pattern.

Also applies to: 283-285, 322-324

holmes/config.py (1)

149-153: Early return guard is good; defer to configure_robusta_ai_model only when needed.

This keeps non-Robusta paths fast and simplifies post-init logic.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

🧹 Nitpick comments (2)
tests/config_class/test_config_load_robusta_ai.py (2)

6-10: Extract helper function to module level for better reuse.

The fake_load_robusta_api_key function can be moved to module level and shared across multiple tests, avoiding code duplication.

Apply this diff to move the helper function to module level:

-def fake_load_robusta_api_key(config, _):
-    config.account_id = "mock-account"
-    config.session_token = SecretStr("mock-token")
-    config.api_key = SecretStr("mock-token")
+def fake_load_robusta_api_key(self, _):
+    self.account_id = "mock-account"
+    self.session_token = SecretStr("mock-token")
+    self.api_key = SecretStr("mock-token")

107-120: Improve test clarity and documentation.

The test name suggests it should NOT load Robusta AI when MODEL var exists, but the implementation actually loads it. Based on the assertion on Line 119, this appears to test a scenario where Robusta models are loaded even when a MODEL environment variable is set.

Consider renaming the test to better reflect its actual behavior or add a comment explaining why Robusta models are loaded despite the MODEL environment variable being set. Also, use the extracted helper function:

 def test_server_loads_robusta_ai_when_model_var_exists(
     mock_cluster, mock_fetch, *, monkeypatch
 ):
+    # Test that Robusta models are still loaded even when MODEL env var is set
     monkeypatch.setenv("MODEL", "some_model")
 
-    def fake_loader(self, dal):
-        self.account_id = "mock-account"
-        self.session_token = SecretStr("mock-token")
-        self.api_key = SecretStr("mock-token")
-
-    monkeypatch.setattr(Config, "load_robusta_api_key", fake_loader)
+    monkeypatch.setattr(Config, "load_robusta_api_key", fake_load_robusta_api_key)
     config = Config.load_from_env()
     assert "Robusta/test" in config._model_list
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro


📥 Commits

Reviewing files that changed from the base of the PR and between 0d53117 and eee8476.

📒 Files selected for processing (1)
  • tests/config_class/test_config_load_robusta_ai.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit Inference Engine (CLAUDE.md)

**/*.py: ALWAYS place Python imports at the top of the file, not inside functions or methods
Type hints are required (project is type-checked with mypy)
Use Ruff for formatting and linting (configured in pyproject.toml)

Files:

  • tests/config_class/test_config_load_robusta_ai.py
{tests/**/*.py,pyproject.toml}

📄 CodeRabbit Inference Engine (CLAUDE.md)

Only use pytest markers that are defined in pyproject.toml; never introduce undefined markers in tests

Files:

  • tests/config_class/test_config_load_robusta_ai.py
🧬 Code Graph Analysis (1)
tests/config_class/test_config_load_robusta_ai.py (1)
holmes/config.py (2)
  • Config (78-546)
  • load_from_env (249-279)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: llm_evals
  • GitHub Check: Pre-commit checks
  • GitHub Check: Pre-commit checks
🔇 Additional comments (2)
tests/config_class/test_config_load_robusta_ai.py (2)

2-2: LGTM! Import follows project guidelines.

The SecretStr import is properly placed at the top of the file and is used consistently throughout the test helper function.


13-13: LGTM! Consistent keyword-only parameter usage.

All test functions consistently use the *, monkeypatch pattern for keyword-only parameters, which improves code consistency and prevents accidental positional argument passing.

Also applies to: 21-21, 36-37, 50-50, 63-64, 83-84, 97-98, 108-109, 125-126, 135-136
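
For readers unfamiliar with the pattern, it looks like this in a test signature: decorator-injected mocks arrive positionally (bottom decorator first), while the bare * forces monkeypatch to be passed by keyword. The patch targets below are taken from these tests; the body is illustrative.

# Illustrative signature only; mock_cluster corresponds to the bottom @patch.
from unittest.mock import patch


@patch("holmes.config.fetch_robusta_models", return_value=["Robusta/test"])
@patch("holmes.config.Config._Config__get_cluster_name", return_value="test")
def test_example(mock_cluster, mock_fetch, *, monkeypatch):
    monkeypatch.setenv("API_KEY", "test")
    ...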

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
holmes/config.py (1)

538-543: Refactor get_models_list to return a Python list, not a JSON‐string

The method is annotated as returning List[str] but currently returns str. Callers in server.py (/api/model) and holmes_status.py will benefit from a real list (FastAPI and your DB layer will serialize it correctly).

• holmes/config.py: lines 538–543

-    def get_models_list(self) -> List[str]:
-        if self._model_list:
-            return json.dumps(list(self._model_list.keys()))  # type: ignore
-
-        return json.dumps([self.model])  # type: ignore
+    def get_models_list(self) -> List[str]:
+        if self._model_list:
+            return list(self._model_list.keys())
+
+        return [self.model]

• Remove the now-unnecessary # type: ignore
• Verify that downstream code (DB insert, API responses) handles a Python list (JSON serialization is automatic)

♻️ Duplicate comments (4)
holmes/config.py (4)

13-20: Good separation: remote fetch moved to client + env flag import added.

Importing fetch_robusta_models from a dedicated client and introducing LOAD_ALL_ROBUSTA_MODELS keeps config concerns cleaner and aligns with prior feedback to move the loading request elsewhere.


171-174: Nice: empty/None model-list fallback implemented.

Falling back to default when remote returns no models addresses earlier feedback and prevents a broken initialization path.


175-181: Consider storing api_key per model entry to avoid special-casing in _get_llm.

Past feedback suggested embedding the key with each model. Doing so removes the “is_robusta_model” special path and simplifies _get_llm.

             for model in models:
                 logging.info(f"Loading Robusta AI model: {model}")
                 self._model_list[model] = {
                     "base_url": f"{ROBUSTA_API_ENDPOINT}/llm/{model}",
-                    "is_robusta_model": True,
+                    "is_robusta_model": True,
+                    "api_key": (
+                        self.api_key.get_secret_value() if self.api_key else None
+                    ),
                 }

Also, please double-check the endpoint path segment (“/llm/{model}” vs any previously used “/deployments/{model}”) is correct.


187-194: Add explicit return type annotation for _load_default_robusta_config.

This satisfies the project’s type-hints policy and prior comment.

-    def _load_default_robusta_config(self):
+    def _load_default_robusta_config(self) -> None:
🧹 Nitpick comments (4)
holmes/config.py (4)

149-153: Initialization ordering: configure_robusta_ai_model may run before credentials are available.

model_post_init calls configure_robusta_ai_model immediately. If the API key isn’t loaded yet (e.g., server loads it later), default Robusta model may never be added. Consider priming credentials here when possible.

 def model_post_init(self, __context: Any) -> None:
     self._model_list = parse_models_file(MODEL_LIST_FILE_LOCATION)

     if not self._should_load_robusta_ai():
         return

+    # Ensure credentials exist early so default fallback can be loaded if needed
+    if self.cluster_name and not self.api_key:
+        dal = SupabaseDal(self.cluster_name)
+        self.load_robusta_api_key(dal)
     self.configure_robusta_ai_model()

Please verify this doesn’t regress CLI flows where Supabase isn’t initialized.


544-550: Add return type for load_robusta_api_key.

Explicitly annotate the mutator as returning None.

-    def load_robusta_api_key(self, dal: SupabaseDal):
+    def load_robusta_api_key(self, dal: SupabaseDal) -> None:

69-75: Nit: env-var replacement likely doesn’t persist into models.

Rebinding params without writing back is unclear. If replace_env_vars_values returns a new dict, assign it back; if it mutates in-place, consider explicitness for readability.

-    for _, params in models.items():
-        params = replace_env_vars_values(params)
+    for name, params in models.items():
+        models[name] = replace_env_vars_values(params)

9-11: Add PyYAML typing stubs and remove the inline ignore

We currently depend on pyyaml (pyproject.toml line 17) but haven’t pulled in its stub package, so mypy can’t see yaml’s types. To clean up the import:

• In pyproject.toml (or your dev-requirements), under dev-dependencies add:

types-PyYAML = "^6.0.1"

• Run poetry install (or pip install -r requirements-dev.txt).
• In holmes/config.py, remove the # type: ignore on the import yaml line.

This will let mypy pick up the stubs and keep imports clean.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro


📥 Commits

Reviewing files that changed from the base of the PR and between eee8476 and 83615df.

📒 Files selected for processing (1)
  • holmes/config.py (6 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

📄 CodeRabbit Inference Engine (CLAUDE.md)

**/*.py: ALWAYS place Python imports at the top of the file, not inside functions or methods
Type hints are required (project is type-checked with mypy)
Use Ruff for formatting and linting (configured in pyproject.toml)

Files:

  • holmes/config.py
🧬 Code Graph Analysis (1)
holmes/config.py (2)
holmes/clients/robusta_client.py (1)
  • fetch_robusta_models (18-31)
holmes/core/supabase_dal.py (1)
  • get_ai_credentials (463-475)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: llm_evals
  • GitHub Check: Pre-commit checks
  • GitHub Check: Pre-commit checks
🔇 Additional comments (3)
holmes/config.py (3)

41-41: SupabaseDal import at runtime looks appropriate.

Bringing SupabaseDal to module scope is fine and supports runtime use where needed.


82-84: Adding account_id and session_token fields is reasonable.

Using SecretStr for the session token is the right call.


530-533: LLM API key injection for Robusta models is fine.

Using the presence of “is_robusta_model” to pick self.api_key is correct given the current design.
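
For reference, the selection being approved amounts to something like the following, written as a standalone helper (resolve_api_key is a hypothetical name; the real logic lives inline in _get_llm):

# Paraphrase of the api-key selection for Robusta models, not the exact _get_llm body.
from typing import Any, Dict, Optional

from pydantic import SecretStr


def resolve_api_key(model_params: Dict[str, Any], robusta_api_key: Optional[SecretStr]) -> Optional[str]:
    api_key: Optional[str] = model_params.pop("api_key", None)
    if not api_key and model_params.pop("is_robusta_model", False) and robusta_api_key:
        api_key = robusta_api_key.get_secret_value()
    return api_key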

@moshemorad moshemorad requested a review from arikalon1 August 20, 2025 11:18
@github-actions (Contributor)

Results of HolmesGPT evals

  • ask_holmes: 35/39 test cases were successful, 1 regression, 2 skipped, 1 setup failure
Test suite Test case Status
ask 01_how_many_pods
ask 02_what_is_wrong_with_pod
ask 03_what_is_the_command_to_port_forward
ask 04_related_k8s_events ↪️
ask 05_image_version
ask 09_crashpod
ask 10_image_pull_backoff
ask 11_init_containers
ask 14_pending_resources
ask 15_failed_readiness_probe
ask 17_oom_kill
ask 18_crash_looping_v2
ask 19_detect_missing_app_details
ask 20_long_log_file_search
ask 24_misconfigured_pvc
ask 28_permissions_error 🚧
ask 29_events_from_alert_manager ↪️
ask 39_failed_toolset
ask 41_setup_argo
ask 42_dns_issues_steps_new_tools
ask 43_current_datetime_from_prompt
ask 45_fetch_deployment_logs_simple
ask 51_logs_summarize_errors
ask 53_logs_find_term
ask 54_not_truncated_when_getting_pods
ask 59_label_based_counting
ask 60_count_less_than
ask 61_exact_match_counting
ask 63_fetch_error_logs_no_errors
ask 79_configmap_mount_issue
ask 83_secret_not_found
ask 86_configmap_like_but_secret
ask 93_calling_datadog
ask 93_calling_datadog
ask 93_calling_datadog
ask 97_logs_clarification_needed
ask 110_k8s_events_image_pull
ask 24a_misconfigured_pvc_basic
ask 13a_pending_node_selector_basic

Legend

  • ✅ the test was successful
  • ↪️ the test was skipped
  • ⚠️ the test failed but is known to be flaky or known to fail
  • 🚧 the test had a setup failure (not a code regression)
  • 🔧 the test failed due to mock data issues (not a code regression)
  • ❌ the test failed and should be fixed before merging the PR

@moshemorad moshemorad merged commit a9bf9e8 into master Aug 20, 2025
10 of 11 checks passed
@moshemorad moshemorad deleted the multiple_llm_support branch August 20, 2025 11:36
@coderabbitai coderabbitai bot mentioned this pull request Aug 28, 2025