[ROB-1535] Added llm support to investigation #857
Conversation
Walkthrough

Adds cluster-aware remote Robusta model discovery, moves Robusta API-key loading into the Config class (exposing SupabaseDal at module scope), adds a LOAD_ALL_ROBUSTA_MODELS env flag, updates server startup and handlers to call Config.load_robusta_api_key, and updates tests to cover these flows.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
autonumber
participant App
participant Config
participant SupabaseDal
participant RobustaAPI as Robusta API
participant Logger
App->>Config: model_post_init()
alt _should_load_robusta_ai() is false
Config-->>App: return
else _should_load_robusta_ai() is true
Config->>Config: configure_robusta_ai_model()
alt missing cluster or LOAD_ALL_ROBUSTA_MODELS is false
Config->>Logger: load default Robusta model
Config->>Config: _load_default_robusta_config()
Config-->>App: return
else
Config->>SupabaseDal: get_ai_credentials(cluster_name)
SupabaseDal-->>Config: account_id, token
Config->>RobustaAPI: POST /api/llm/models (10s)
alt success + models present
RobustaAPI-->>Config: {"models": [...]}
Config->>Logger: log each model
Config-->>App: model_list populated
else error or empty
RobustaAPI--xConfig: error/empty
Config->>Logger: log error
Config->>Config: _load_default_robusta_config()
Config-->>App: return
end
end
end
```

```mermaid
sequenceDiagram
autonumber
participant Server
participant Config
participant SupabaseDal
participant Logger
Server->>Config: sync_before_server_start()
Config->>Config: load_robusta_api_key(dal)
alt credentials available and ROBUSTA_AI allowed
Config->>SupabaseDal: get_ai_credentials()
SupabaseDal-->>Config: account_id, token
Config->>Config: set api_key & account/session
Config-->>Server: return
else missing creds or flag disables
Config->>Logger: use default Robusta config or no-op
Config-->>Server: return
end
```
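For readers who prefer code to diagrams, the first flow can be sketched in a few lines of Python. This is illustrative only: the helper names, request payload, and auth header are assumptions; the real logic lives in Config.configure_robusta_ai_model and the Robusta client.

```python
# Minimal sketch of the discovery flow diagrammed above (not the actual implementation).
import logging
import os
from typing import Dict, Optional, Tuple

import requests

ROBUSTA_API_ENDPOINT = os.environ.get("ROBUSTA_API_ENDPOINT", "https://api.robusta.dev")
LOAD_ALL_ROBUSTA_MODELS = os.environ.get("LOAD_ALL_ROBUSTA_MODELS", "true").lower() == "true"


def load_default(model_list: Dict[str, dict]) -> None:
    # Fallback: register only the default Robusta model.
    logging.info("Loading default Robusta AI model")
    model_list["Robusta"] = {"base_url": ROBUSTA_API_ENDPOINT}


def configure_robusta_models(
    model_list: Dict[str, dict],
    cluster_name: Optional[str],
    credentials: Optional[Tuple[str, str]],  # (account_id, token), e.g. from get_ai_credentials()
) -> None:
    if not cluster_name or not LOAD_ALL_ROBUSTA_MODELS or not credentials:
        load_default(model_list)
        return
    account_id, token = credentials
    try:
        # Endpoint and payload shape are taken from the diagram; the real client may differ.
        resp = requests.post(
            f"{ROBUSTA_API_ENDPOINT}/api/llm/models",
            json={"account_id": account_id, "cluster_name": cluster_name},
            headers={"Authorization": f"Bearer {token}"},
            timeout=10,
        )
        resp.raise_for_status()
        models = resp.json().get("models") or []
    except Exception:
        logging.exception("Failed to fetch Robusta models; falling back to default")
        models = []
    if not models:
        load_default(model_list)
        return
    for model in models:
        logging.info("Loading Robusta AI model: %s", model)
        model_list[model] = {"base_url": f"{ROBUSTA_API_ENDPOINT}/llm/{model}"}
```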
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs
Suggested reviewers
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limits.
🔭 Outside diff range comments (1)
holmes/utils/robusta.py (1)
6-10: Add an explicit return type to satisfy mypy and the project's typing requirements. The project enforces type hints; annotate the function to return None.
Apply this diff:
```diff
-def load_robusta_api_key(dal: SupabaseDal, config: Config):
+def load_robusta_api_key(dal: SupabaseDal, config: Config) -> None:
```
🧹 Nitpick comments (3)
holmes/utils/robusta.py (1)
1-1: Decouple ROBUSTA_AI import and avoid runtime import of Config (type-only import).

Importing `ROBUSTA_AI` from `holmes.config` couples this utility to a heavy module and risks circular imports. Import `ROBUSTA_AI` directly from `holmes.common.env_vars`, and import `Config` only for type-checking. Apply this diff within the selected range:
```diff
-from holmes.config import Config, ROBUSTA_AI
+from holmes.common.env_vars import ROBUSTA_AI
```

Additionally, add the following near the top of the file (outside the selected range) to keep type hints without a runtime import:
```python
from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from holmes.config import Config
```

holmes/config.py (2)
139-146: Default `_model_list` to a dict to avoid None-handling pitfalls.

If `parse_models_file` returns `None` when the file is missing, later dict operations will fail silently or cause exceptions. Default to `{}`. Apply this diff:
```diff
-        self._model_list = parse_models_file(MODEL_LIST_FILE_LOCATION)
+        self._model_list = parse_models_file(MODEL_LIST_FILE_LOCATION) or {}
```
146-154: Optional: Fallback to default Robusta model when cluster_name is missing.

Right now, a missing `cluster_name` returns early and skips the default fallback (which only happens on exceptions). If the intent is to use Robusta when enabled but `cluster_name` is unavailable, consider loading the default model here. Apply this diff:

```diff
-        if not self.cluster_name:
-            return
+        if not self.cluster_name:
+            logging.info("No cluster_name configured; loading default Robusta AI model")
+            self._model_list[ROBUSTA_AI_MODEL_NAME] = {
+                "base_url": ROBUSTA_API_ENDPOINT,
+            }
+            return
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (2)
- holmes/config.py (3 hunks)
- holmes/utils/robusta.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit Inference Engine (CLAUDE.md)
**/*.py: ALWAYS place Python imports at the top of the file, not inside functions or methods
Type hints are required (project is type-checked with mypy)
Use Ruff for formatting and linting (configured in pyproject.toml)
Files:
- holmes/utils/robusta.py
- holmes/config.py
🧠 Learnings (1)
📚 Learning: 2025-07-08T08:45:41.069Z
Learnt from: nherment
PR: robusta-dev/holmesgpt#610
File: .github/workflows/llm-evaluation.yaml:39-42
Timestamp: 2025-07-08T08:45:41.069Z
Learning: The robusta-dev/holmesgpt codebase has comprehensive existing validation for Azure environment variables (AZURE_API_BASE, AZURE_API_KEY, AZURE_API_VERSION) and MODEL in tests/llm/utils/classifiers.py, tests/llm/conftest.py, and holmes/core/llm.py. Don't suggest adding redundant validation logic.
Applied to files:
holmes/utils/robusta.py
🧬 Code Graph Analysis (1)
holmes/config.py (1)
holmes/core/supabase_dal.py (1)
SupabaseDal(57-583)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: build (3.12)
- GitHub Check: build (3.11)
- GitHub Check: build (3.10)
🔇 Additional comments (4)
holmes/config.py (4)
36-36: LGTM: SupabaseDal import moved to runtime where it’s needed. This aligns with the new loading flow. Watch for import cycles, but at present there’s no direct usage of utils/robusta from here, so this should be safe.
170-176: Good resilience: sane fallback on failures. Catching exceptions and loading a default Robusta model guarantees a usable configuration even when the remote discovery fails.
178-196: Gating logic reads clean and preserves backward compatibility. The tri-state handling of `ROBUSTA_AI`, the `MODEL` env short-circuit, and honoring a provided model list make sense.
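As an illustration of the tri-state pattern mentioned here, a sketch of how an unset/true/false flag can drive the gate (hypothetical helper names; the actual conditions in Config may differ):

```python
import os
from typing import Optional


def tristate_env(name: str) -> Optional[bool]:
    # Unset -> None ("auto"); otherwise parse an explicit true/false value.
    raw = os.environ.get(name)
    if raw is None:
        return None
    return raw.strip().lower() in ("1", "true", "yes", "on")


def should_load_robusta_ai(model_env_set: bool, has_model_list: bool) -> bool:
    flag = tristate_env("ROBUSTA_AI")
    if flag is not None:
        return flag  # explicit opt-in/opt-out always wins
    if model_env_set or has_model_list:
        return False  # MODEL short-circuit / provided model list
    return True
```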
9-9: requests dependency confirmed in pyproject.toml
- Found at pyproject.toml (line 54):
`requests = "^2.32.4"`. No action needed for the declaration.
Optionally, if you’re running mypy in strict mode and want to drop the `# type: ignore`, add the official stub package: `poetry add -D types-requests`.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
holmes/config.py (1)
64-71: Bug: env var substitution result is discarded in `parse_models_file`.

The loop rebinds `params` locally and doesn’t write back to `models`, so replacements don’t take effect.

```diff
-def parse_models_file(path: str):
-    models = load_yaml_file(path, raise_error=False, warn_not_found=False)
-
-    for _, params in models.items():
-        params = replace_env_vars_values(params)
-
-    return models
+def parse_models_file(path: str):
+    models = load_yaml_file(path, raise_error=False, warn_not_found=False) or {}
+    for key, params in list(models.items()):
+        models[key] = replace_env_vars_values(params)
+    return models
```
♻️ Duplicate comments (1)
holmes/config.py (1)
519-525: Inject API key by comparing hostnames, not by a `'robusta.dev'` substring.

This will break for staging/on-prem endpoints where the host differs from `robusta.dev` while still being the configured `ROBUSTA_API_ENDPOINT`. Compare hosts instead.

```diff
-        if (
-            not api_key
-            and "robusta.dev" in model_params.get("base_url", "")
-            and self.api_key
-        ):
-            api_key = self.api_key.get_secret_value()
+        if not api_key and self.api_key:
+            endpoint_host = urlparse(ROBUSTA_API_ENDPOINT).netloc
+            base_host = urlparse(model_params.get("base_url", "")).netloc
+            if endpoint_host and base_host and endpoint_host == base_host:
+                api_key = self.api_key.get_secret_value()
```

Add this import at the top of the file:

```python
from urllib.parse import urlparse
```
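For illustration, the host comparison can also be factored into a small helper and exercised directly (a sketch; the diff above inlines the same check):

```python
from urllib.parse import urlparse


def is_same_host(endpoint: str, base_url: str) -> bool:
    # Compare network locations instead of substring-matching on "robusta.dev".
    endpoint_host = urlparse(endpoint).netloc
    base_host = urlparse(base_url).netloc
    return bool(endpoint_host) and endpoint_host == base_host


# A staging endpoint matches itself even though it does not contain "robusta.dev" ...
assert is_same_host("https://api.staging.example.com", "https://api.staging.example.com/llm/gpt-4o")
# ... while a lookalike URL that merely contains the string in its path is rejected.
assert not is_same_host("https://api.robusta.dev", "https://evil.example.com/robusta.dev")
```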
🧹 Nitpick comments (1)
holmes/config.py (1)
146-175: Robust remote model loader with fallback — nice.
- Uses a 10s timeout and raise_for_status.
- Falls back to a default config on any exception.
- Per-model base_url construction looks correct.
Two small follow-ups to harden behavior:
- Consider typing the response structure (TypedDict) to guard the `.get("models", ...)` shape via mypy; a sketch follows below.
- Log the count of models loaded for easier troubleshooting.
If you want, I can add a small unit test that simulates: success, HTTP error, DAL exception, and no cluster_name paths.
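A minimal sketch of the TypedDict idea, assuming the response shape `{"models": [...]}` shown in the walkthrough diagram (field names are an assumption; adjust to the real payload):

```python
from typing import List, Optional, TypedDict


class RobustaModelsResponse(TypedDict, total=False):
    # Assumed shape: the remote endpoint returns {"models": [...]}.
    models: List[str]


def extract_models(payload: RobustaModelsResponse) -> Optional[List[str]]:
    # mypy now sees .get("models") as Optional[List[str]] instead of Any.
    return payload.get("models")
```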
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
holmes/config.py (3 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit Inference Engine (CLAUDE.md)
**/*.py: ALWAYS place Python imports at the top of the file, not inside functions or methods
Type hints are required (project is type-checked with mypy)
Use Ruff for formatting and linting (configured in pyproject.toml)
Files:
holmes/config.py
🧠 Learnings (1)
📚 Learning: 2025-07-08T08:45:41.069Z
Learnt from: nherment
PR: robusta-dev/holmesgpt#610
File: .github/workflows/llm-evaluation.yaml:39-42
Timestamp: 2025-07-08T08:45:41.069Z
Learning: The robusta-dev/holmesgpt codebase has comprehensive existing validation for Azure environment variables (AZURE_API_BASE, AZURE_API_KEY, AZURE_API_VERSION) and MODEL in tests/llm/utils/classifiers.py, tests/llm/conftest.py, and holmes/core/llm.py. Don't suggest adding redundant validation logic.
Applied to files:
holmes/config.py
🧬 Code Graph Analysis (1)
holmes/config.py (1)
holmes/core/supabase_dal.py (2)
SupabaseDal (57-583), get_ai_credentials (463-475)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: llm_evals
- GitHub Check: Pre-commit checks
- GitHub Check: Pre-commit checks
🔇 Additional comments (2)
holmes/config.py (2)
36-36: SupabaseDal import at runtime scope LGTM. This aligns with the new runtime use here and avoids TYPE_CHECKING hacks. Assuming no circular import (looks safe from the provided SupabaseDal snippet).
141-145: Good gating: early-return when ROBUSTA AI should not be loaded. This keeps model init fast and side-effect free unless explicitly enabled.
Actionable comments posted: 1
♻️ Duplicate comments (3)
holmes/config.py (3)
9-9: Remove unnecessary type ignore on requests import and rely on typing stubs.

`requests` ships with typing or can be satisfied via `types-requests`. Drop the suppression to comply with type-checking guidelines.

```diff
-import requests  # type: ignore
+import requests
```

Also add the typing stub in your dev dependencies (e.g., types-requests) if mypy requires it.
164-170: Robusta API key is not passed to DefaultLLM; inject it based on an endpoint host match.

Models loaded here only set base_url; `_get_llm` doesn’t fall back to `self.api_key`. Calls to Robusta endpoints will go out unauthenticated. Inject the key when the model’s base_url host matches `ROBUSTA_API_ENDPOINT`.

Proposed change outside the selected range (in `_get_llm` and imports):

```diff
-from typing import TYPE_CHECKING, Any, List, Optional, Union
+from typing import TYPE_CHECKING, Any, List, Optional, Union
+from urllib.parse import urlparse
```

```diff
     def _get_llm(self, model_key: Optional[str] = None, tracer=None) -> "LLM":
         api_key: Optional[str] = None
         model = self.model
         model_params = {}
         if self._model_list:
             # get requested model or the first credentials if no model requested.
             model_params = (
                 self._model_list.get(model_key, {}).copy()
                 if model_key
                 else next(iter(self._model_list.values())).copy()
             )
             api_key = model_params.pop("api_key", api_key)
             model = model_params.pop("model", model)

+        # Fallback: inject config.api_key for Robusta endpoints when not explicitly provided
+        if not api_key and self.api_key:
+            endpoint_host = urlparse(ROBUSTA_API_ENDPOINT).netloc
+            base_host = urlparse(model_params.get("base_url", "")).netloc
+            if endpoint_host and base_host and endpoint_host == base_host:
+                api_key = self.api_key.get_secret_value()
+
         return DefaultLLM(model, api_key, model_params, tracer)  # type: ignore
```
176-180: Annotate return type for mypy compliance.

This method returns None; add the explicit annotation.

```diff
-    def _load_default_robusta_config(self):
+    def _load_default_robusta_config(self) -> None:
         logging.info("Loading default Robusta AI model")
         self._model_list[ROBUSTA_AI_MODEL_NAME] = {
             "base_url": ROBUSTA_API_ENDPOINT,
         }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (2)
- holmes/config.py (3 hunks)
- holmes/utils/robusta.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- holmes/utils/robusta.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit Inference Engine (CLAUDE.md)
**/*.py: ALWAYS place Python imports at the top of the file, not inside functions or methods
Type hints are required (project is type-checked with mypy)
Use Ruff for formatting and linting (configured in pyproject.toml)
Files:
holmes/config.py
🧠 Learnings (1)
📚 Learning: 2025-07-08T08:45:41.069Z
Learnt from: nherment
PR: robusta-dev/holmesgpt#610
File: .github/workflows/llm-evaluation.yaml:39-42
Timestamp: 2025-07-08T08:45:41.069Z
Learning: The robusta-dev/holmesgpt codebase has comprehensive existing validation for Azure environment variables (AZURE_API_BASE, AZURE_API_KEY, AZURE_API_VERSION) and MODEL in tests/llm/utils/classifiers.py, tests/llm/conftest.py, and holmes/core/llm.py. Don't suggest adding redundant validation logic.
Applied to files:
holmes/config.py
🧬 Code Graph Analysis (1)
holmes/config.py (1)
holmes/core/supabase_dal.py (2)
SupabaseDal (57-583), get_ai_credentials (463-475)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: build (3.12)
- GitHub Check: build (3.10)
- GitHub Check: build (3.11)
- GitHub Check: build (3.12)
- GitHub Check: build (3.10)
- GitHub Check: build (3.11)
🔇 Additional comments (2)
holmes/config.py (2)
36-36: Importing SupabaseDal at module scope is appropriate. It’s used at runtime in model initialization, so moving it out of TYPE_CHECKING is correct.
140-146: Nice refactor: gate and delegate Robusta AI loading. Early return via `_should_load_robusta_ai()` and delegating to `_load_robusta_ai_models()` improves readability and separation of concerns.
Force-pushed 278e7b3 to 968bd8e
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
server.py (1)
85-93: Guard startup against failures in `load_robusta_api_key`.

Uncaught exceptions here can fail the entire server startup, unlike the surrounding steps which are protected. Wrap this call similarly to keep startup resilient.

```diff
 def sync_before_server_start():
-    load_robusta_api_key(dal=dal, config=config)
+    try:
+        load_robusta_api_key(dal=dal, config=config)
+    except Exception:
+        logging.error("Failed to load Robusta API key", exc_info=True)
+        # Continue startup; endpoints also load the key on-demand.
     try:
         update_holmes_status_in_db(dal, config)
```

Optional: add an explicit return type (project enforces typing).

```python
def sync_before_server_start() -> None: ...
```
♻️ Duplicate comments (2)
holmes/clients/robusta_client.py (1)
17-19: Add missing type hint and avoid suppressing `requests` typing.

- The `cluster_name` parameter is missing a type annotation; the project requires type hints.
- Also, avoid `# type: ignore` on the `requests` import; prefer adding stubs so mypy can type-check.

Apply this diff for the function signature:

```diff
-@cache
-def fetch_robusta_models(cluster_name) -> Optional[List[str]]:
+@cache
+def fetch_robusta_models(cluster_name: str) -> Optional[List[str]]:
```

Additionally (outside the selected lines):

- Remove `# type: ignore` from the `import requests` line at the top of this file.
- Ensure the repo includes typing stubs for requests (e.g., add `types-requests` to dev dependencies) so mypy passes without ignores, per prior guidance already given in this codebase.

Also applies to: 23-33
holmes/config.py (1)
176-183: Annotate `_load_default_robusta_config` return type.

Matches project typing requirements and a prior review suggestion.

```diff
-    def _load_default_robusta_config(self):
+    def _load_default_robusta_config(self) -> None:
```
🧹 Nitpick comments (3)
holmes/common/env_vars.py (1)
30-30: Make the `LOAD_ALL_ROBUSTA_MODELS` type explicit.

Small nit for mypy clarity: this is an Optional[bool]. Being explicit helps with IDEs and static checks.

```diff
-LOAD_ALL_ROBUSTA_MODELS = load_bool("LOAD_ALL_ROBUSTA_MODELS", True)
+LOAD_ALL_ROBUSTA_MODELS: Optional[bool] = load_bool("LOAD_ALL_ROBUSTA_MODELS", True)
```

holmes/config.py (1)
41-41: Remove unused runtime import of `SupabaseDal`.

This import isn’t used at runtime (type hints use forward refs and TYPE_CHECKING), and may trigger unnecessary initialization. Keep it under TYPE_CHECKING only to avoid Ruff F401 and overhead.

```diff
-from holmes.core.supabase_dal import SupabaseDal
```

tests/config_class/test_config_load_robusta_ai.py (1)
87-97: Add tests for feature-flag = False and empty/None remote responses. Two valuable scenarios to add:
- LOAD_ALL_ROBUSTA_MODELS set to "false" should trigger default Robusta config (not remote list).
- Remote returns [] or None should also fall back to default.
Example additions:
```python
@patch("holmes.config.ROBUSTA_AI", True)
@patch("holmes.config.Config._Config__get_cluster_name", return_value="test")
@patch("holmes.config.fetch_robusta_models", return_value=[])
def test_server_falls_back_when_remote_empty(mock_fetch, mock_cluster, *, monkeypatch):
    monkeypatch.setenv("API_KEY", "test")
    config = Config.load_from_env()
    assert "Robusta" in config._model_list  # default loaded


@patch("holmes.config.ROBUSTA_AI", True)
@patch("holmes.config.Config._Config__get_cluster_name", return_value="test")
@patch("holmes.config.fetch_robusta_models", return_value=["Robusta/test"])
def test_server_flag_disables_remote_models(mock_fetch, mock_cluster, *, monkeypatch):
    monkeypatch.setenv("API_KEY", "test")
    monkeypatch.setenv("LOAD_ALL_ROBUSTA_MODELS", "false")
    config = Config.load_from_env()
    assert "Robusta" in config._model_list
    assert "Robusta/test" not in config._model_list
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (5)
- holmes/clients/robusta_client.py (2 hunks)
- holmes/common/env_vars.py (1 hunks)
- holmes/config.py (3 hunks)
- server.py (1 hunks)
- tests/config_class/test_config_load_robusta_ai.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit Inference Engine (CLAUDE.md)
**/*.py: ALWAYS place Python imports at the top of the file, not inside functions or methods
Type hints are required (project is type-checked with mypy)
Use Ruff for formatting and linting (configured in pyproject.toml)
Files:
- server.py
- holmes/common/env_vars.py
- holmes/clients/robusta_client.py
- tests/config_class/test_config_load_robusta_ai.py
- holmes/config.py
{tests/**/*.py,pyproject.toml}
📄 CodeRabbit Inference Engine (CLAUDE.md)
Only use pytest markers that are defined in pyproject.toml; never introduce undefined markers in tests
Files:
tests/config_class/test_config_load_robusta_ai.py
🧠 Learnings (1)
📚 Learning: 2025-07-08T08:45:41.069Z
Learnt from: nherment
PR: robusta-dev/holmesgpt#610
File: .github/workflows/llm-evaluation.yaml:39-42
Timestamp: 2025-07-08T08:45:41.069Z
Learning: The robusta-dev/holmesgpt codebase has comprehensive existing validation for Azure environment variables (AZURE_API_BASE, AZURE_API_KEY, AZURE_API_VERSION) and MODEL in tests/llm/utils/classifiers.py, tests/llm/conftest.py, and holmes/core/llm.py. Don't suggest adding redundant validation logic.
Applied to files:
holmes/common/env_vars.py
🧬 Code Graph Analysis (4)
server.py (1)
holmes/utils/robusta.py (1)
load_robusta_api_key(6-10)
holmes/clients/robusta_client.py (1)
holmes/core/supabase_dal.py (1)
get_ai_credentials(463-475)
tests/config_class/test_config_load_robusta_ai.py (1)
holmes/config.py (2)
Config (78-528), load_from_env (238-268)
holmes/config.py (2)
holmes/clients/robusta_client.py (1)
fetch_robusta_models (18-32)
holmes/core/supabase_dal.py (1)
SupabaseDal(57-583)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
- GitHub Check: build (3.10)
- GitHub Check: build (3.12)
- GitHub Check: build (3.12)
- GitHub Check: build (3.11)
- GitHub Check: build (3.10)
- GitHub Check: build (3.11)
- GitHub Check: llm_evals
🔇 Additional comments (6)
holmes/config.py (3)
13-20: LGTM: wiring for remote models and env flags looks correct. Importing `fetch_robusta_models` and the relevant env vars cleanly sets up the new flow and keeps concerns separated.
146-150: LGTM: clearer post-init flow. Early-returning when Robusta AI shouldn’t load and delegating to `configure_robusta_ai_model()` keeps the init path maintainable and testable.
151-175: LGTM: robust remote-model loading with sane fallbacks
- Skips when no `api_key`.
- Falls back when cluster is missing, flag is disabled, fetch fails, or returns empty.
- Populates model list with per-model base_url and api_key as requested.
- Populates model list with per-model base_url and api_key as requested.
No blockers here.
tests/config_class/test_config_load_robusta_ai.py (3)
12-18: LGTM: verifies loading when ROBUSTA_AI=True. Solid coverage of the happy path with cluster + API key present.
20-30: LGTM: default/no-other-models behavior when ROBUSTA_AI is unset. Good assertion that only the fetched Robusta model is present.
41-55: LGTM: coexistence with existing models. Confirms Robusta/test is added alongside preexisting models; this guards against regressions in merging logic.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
server.py (1)
185-201: Fix typo: request.stored_instrucitons → request.stored_instructions; also move API key loading inside try/except.
- The flag name typo prevents stored instructions from ever being used.
- load_robusta_api_key is outside the try/except, causing inconsistent error handling vs other endpoints.
Apply this diff:
```diff
-@app.post("/api/workload_health_check")
-def workload_health_check(request: WorkloadHealthRequest):
-    config.load_robusta_api_key(dal=dal)
-    try:
+@app.post("/api/workload_health_check")
+def workload_health_check(request: WorkloadHealthRequest):
+    try:
+        config.load_robusta_api_key(dal=dal)
         resource = request.resource
         workload_alerts: list[str] = []
         if request.alert_history:
             workload_alerts = dal.get_workload_issues(
                 resource, request.alert_history_since_hours
             )
         instructions = request.instructions or []
-        if request.stored_instrucitons:
+        if request.stored_instructions:
             stored_instructions = dal.get_resource_instructions(
                 resource.get("kind", "").lower(), resource.get("name")
             )
             if stored_instructions:
                 instructions.extend(stored_instructions.instructions)
```
♻️ Duplicate comments (3)
holmes/config.py (3)
187-194: Annotate return type of _load_default_robusta_config. Mypy compliance: explicitly mark it as returning None.
Apply this diff:
```diff
-    def _load_default_robusta_config(self):
+    def _load_default_robusta_config(self) -> None:
```
168-181: Endpoint path moved out of config per feedback. The HTTP model-fetch request was moved to holmes/clients/robusta_client.fetch_robusta_models, aligning with prior feedback to keep request logic out of Config.
176-181: API key included per-model in model list. Models now carry api_key alongside base_url, matching the desired structure.
🧹 Nitpick comments (3)
holmes/core/investigation.py (1)
30-31: Load Robusta API key at the call sites — good placement; ensure idempotency. Both call sites proactively load credentials before constructing AI instances, which aligns with the new Config-based flow. Ensure load_robusta_api_key is idempotent/cheap (it appears to use DAL token caching), as these paths are hot; see the sketch after the diff below.
Optionally, add a type for trace_span to satisfy strict mypy configs:
```diff
-def investigate_issues(..., trace_span=DummySpan(),) -> InvestigationResult:
+def investigate_issues(..., trace_span: DummySpan = DummySpan(),) -> InvestigationResult:
```

Also applies to: 84-85
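On the idempotency note above, one cheap way to keep repeated calls from hot paths a no-op is to early-return once credentials are populated. A sketch under the assumption that the method only sets api_key, account_id, and session_token from the DAL credentials:

```python
from typing import Any, Optional

from pydantic import SecretStr


class ConfigSketch:
    api_key: Optional[SecretStr] = None
    account_id: Optional[str] = None
    session_token: Optional[SecretStr] = None

    def load_robusta_api_key(self, dal: Any) -> None:
        if self.api_key is not None:
            return  # already loaded; keeps hot request paths cheap
        account_id, token = dal.get_ai_credentials()
        self.account_id = account_id
        self.session_token = SecretStr(token)
        self.api_key = SecretStr(token)
```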
holmes/config.py (2)
154-186: Solid multi-model loading with fallbacks; add a log when remote returns empty/no models and tighten typing to drop a type: ignore.
- Fallbacks cover missing cluster, disabled feature flag, missing creds, fetch failures.
- Add a warning when models is empty to aid observability.
- Since self.api_key is ensured above when entering the models loop, assert it to help mypy and remove the inline type ignore.
Apply this diff:
```diff
-        models = fetch_robusta_models(
+        models = fetch_robusta_models(
             self.account_id, self.session_token.get_secret_value()
         )
         if not models:
-            self._load_default_robusta_config()
+            logging.warning(
+                "No Robusta models returned from %s; falling back to default",
+                ROBUSTA_API_ENDPOINT,
+            )
+            self._load_default_robusta_config()
             return
-        for model in models:
+        assert self.api_key is not None  # narrow Optional for type-checkers
+        for model in models:
             logging.info(f"Loading Robusta AI model: {model}")
             self._model_list[model] = {
                 "base_url": f"{ROBUSTA_API_ENDPOINT}/llm/{model}",
-                "api_key": self.api_key.get_secret_value(),  # type: ignore
+                "api_key": self.api_key.get_secret_value(),
             }
```
541-547: Annotate return type of load_robusta_api_key. The public method should declare it returns None; this improves readability and mypy’s strictness.
Apply this diff:
```diff
-    def load_robusta_api_key(self, dal: SupabaseDal):
+    def load_robusta_api_key(self, dal: SupabaseDal) -> None:
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (6)
- holmes/clients/robusta_client.py (2 hunks)
- holmes/config.py (5 hunks)
- holmes/core/investigation.py (2 hunks)
- holmes/utils/robusta.py (0 hunks)
- server.py (5 hunks)
- tests/test_server_endpoints.py (4 hunks)
💤 Files with no reviewable changes (1)
- holmes/utils/robusta.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit Inference Engine (CLAUDE.md)
**/*.py: ALWAYS place Python imports at the top of the file, not inside functions or methods
Type hints are required (project is type-checked with mypy)
Use Ruff for formatting and linting (configured in pyproject.toml)
Files:
- holmes/clients/robusta_client.py
- holmes/core/investigation.py
- tests/test_server_endpoints.py
- server.py
- holmes/config.py
{tests/**/*.py,pyproject.toml}
📄 CodeRabbit Inference Engine (CLAUDE.md)
Only use pytest markers that are defined in pyproject.toml; never introduce undefined markers in tests
Files:
tests/test_server_endpoints.py
🧬 Code Graph Analysis (3)
holmes/core/investigation.py (1)
holmes/config.py (1)
load_robusta_api_key(541-546)
server.py (1)
holmes/config.py (1)
load_robusta_api_key(541-546)
holmes/config.py (2)
holmes/clients/robusta_client.py (1)
fetch_robusta_models (18-31)
holmes/core/supabase_dal.py (1)
get_ai_credentials(463-475)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
- GitHub Check: build (3.12)
- GitHub Check: build (3.11)
- GitHub Check: build (3.10)
- GitHub Check: build (3.12)
- GitHub Check: build (3.10)
- GitHub Check: build (3.11)
- GitHub Check: llm_evals
🔇 Additional comments (4)
holmes/clients/robusta_client.py (1)
1-2: LGTM on new imports and module-level logging. Imports are correctly placed at the top and used appropriately.
tests/test_server_endpoints.py (1)
12-12: Correctly updated patch targets to Config.load_robusta_api_key. Mocks now align with the new integration surface. Signatures and decorator order remain correct.
Also applies to: 81-81, 143-143, 208-214
server.py (1)
254-256: Consistent error handling around credential loading in request handlers. Calls to config.load_robusta_api_key are inside try/except blocks here, which is the right pattern.
Also applies to: 283-285, 322-324
holmes/config.py (1)
149-153: Early return guard is good; defer to configure_robusta_ai_model only when needed. This keeps non-Robusta paths fast and simplifies post-init logic.
…/holmesgpt into multiple_llm_support
Actionable comments posted: 4
🧹 Nitpick comments (2)
tests/config_class/test_config_load_robusta_ai.py (2)
6-10: Extract helper function to module level for better reuse.

The `fake_load_robusta_api_key` function can be moved to module level and shared across multiple tests, avoiding code duplication. Apply this diff to move the helper function to module level:

```diff
-def fake_load_robusta_api_key(config, _):
-    config.account_id = "mock-account"
-    config.session_token = SecretStr("mock-token")
-    config.api_key = SecretStr("mock-token")
+def fake_load_robusta_api_key(self, _):
+    self.account_id = "mock-account"
+    self.session_token = SecretStr("mock-token")
+    self.api_key = SecretStr("mock-token")
```
107-120: Improve test clarity and documentation. The test name suggests it should NOT load Robusta AI when the MODEL var exists, but the implementation actually loads it. Based on the assertion on Line 119, this appears to test a scenario where Robusta models are loaded even when a MODEL environment variable is set.
Consider renaming the test to better reflect its actual behavior or add a comment explaining why Robusta models are loaded despite the MODEL environment variable being set. Also, use the extracted helper function:
```diff
 def test_server_loads_robusta_ai_when_model_var_exists(
     mock_cluster, mock_fetch, *, monkeypatch
 ):
+    # Test that Robusta models are still loaded even when MODEL env var is set
     monkeypatch.setenv("MODEL", "some_model")

-    def fake_loader(self, dal):
-        self.account_id = "mock-account"
-        self.session_token = SecretStr("mock-token")
-        self.api_key = SecretStr("mock-token")
-
-    monkeypatch.setattr(Config, "load_robusta_api_key", fake_loader)
+    monkeypatch.setattr(Config, "load_robusta_api_key", fake_load_robusta_api_key)

     config = Config.load_from_env()
     assert "Robusta/test" in config._model_list
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
tests/config_class/test_config_load_robusta_ai.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit Inference Engine (CLAUDE.md)
**/*.py: ALWAYS place Python imports at the top of the file, not inside functions or methods
Type hints are required (project is type-checked with mypy)
Use Ruff for formatting and linting (configured in pyproject.toml)
Files:
tests/config_class/test_config_load_robusta_ai.py
{tests/**/*.py,pyproject.toml}
📄 CodeRabbit Inference Engine (CLAUDE.md)
Only use pytest markers that are defined in pyproject.toml; never introduce undefined markers in tests
Files:
tests/config_class/test_config_load_robusta_ai.py
🧬 Code Graph Analysis (1)
tests/config_class/test_config_load_robusta_ai.py (1)
holmes/config.py (2)
Config (78-546), load_from_env (249-279)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: llm_evals
- GitHub Check: Pre-commit checks
- GitHub Check: Pre-commit checks
🔇 Additional comments (2)
tests/config_class/test_config_load_robusta_ai.py (2)
2-2: LGTM! Import follows project guidelines. The `SecretStr` import is properly placed at the top of the file and is used consistently throughout the test helper function.
13-13: LGTM! Consistent keyword-only parameter usage. All test functions consistently use the `*, monkeypatch` pattern for keyword-only parameters, which improves code consistency and prevents accidental positional argument passing.

Also applies to: 21-21, 36-37, 50-50, 63-64, 83-84, 97-98, 108-109, 125-126, 135-136
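For readers unfamiliar with the pattern, a keyword-only monkeypatch parameter looks like this (illustrative test, not one from this suite):

```python
import os


def test_example_uses_keyword_only_monkeypatch(*, monkeypatch) -> None:
    # The bare `*` makes monkeypatch keyword-only, so positional mocks injected
    # by @patch decorators cannot accidentally bind to it.
    monkeypatch.setenv("API_KEY", "test")
    assert os.environ["API_KEY"] == "test"
```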
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
holmes/config.py (1)
538-543: Refactor `get_models_list` to return a Python list, not a JSON string.

The method is annotated as returning `List[str]` but currently returns `str`. Callers in server.py (/api/model) and holmes_status.py will benefit from a real list (FastAPI and your DB layer will serialize it correctly).

• holmes/config.py: lines 538–543

```diff
-    def get_models_list(self) -> List[str]:
-        if self._model_list:
-            return json.dumps(list(self._model_list.keys()))  # type: ignore
-
-        return json.dumps([self.model])  # type: ignore
+    def get_models_list(self) -> List[str]:
+        if self._model_list:
+            return list(self._model_list.keys())
+
+        return [self.model]
```

• Remove the now-unnecessary `# type: ignore`
• Verify that downstream code (DB insert, API responses) handles a Python list (JSON serialization is automatic)
♻️ Duplicate comments (4)
holmes/config.py (4)
13-20: Good separation: remote fetch moved to client + env flag import added. Importing fetch_robusta_models from a dedicated client and introducing LOAD_ALL_ROBUSTA_MODELS keeps config concerns cleaner and aligns with prior feedback to move the loading request elsewhere.
171-174: Nice: empty/None model-list fallback implemented. Falling back to default when remote returns no models addresses earlier feedback and prevents a broken initialization path.
175-181: Consider storing api_key per model entry to avoid special-casing in _get_llm.

Past feedback suggested embedding the key with each model. Doing so removes the “is_robusta_model” special path and simplifies _get_llm.

```diff
         for model in models:
             logging.info(f"Loading Robusta AI model: {model}")
             self._model_list[model] = {
                 "base_url": f"{ROBUSTA_API_ENDPOINT}/llm/{model}",
-                "is_robusta_model": True,
+                "is_robusta_model": True,
+                "api_key": (
+                    self.api_key.get_secret_value() if self.api_key else None
+                ),
             }
```

Also, please double-check the endpoint path segment (“/llm/{model}” vs any previously used “/deployments/{model}”) is correct.
187-194: Add explicit return type annotation for _load_default_robusta_config.

This satisfies the project’s type-hints policy and prior comment.

```diff
-    def _load_default_robusta_config(self):
+    def _load_default_robusta_config(self) -> None:
```
🧹 Nitpick comments (4)
holmes/config.py (4)
149-153: Initialization ordering: configure_robusta_ai_model may run before credentials are available.

model_post_init calls configure_robusta_ai_model immediately. If the API key isn’t loaded yet (e.g., the server loads it later), the default Robusta model may never be added. Consider priming credentials here when possible.

```diff
     def model_post_init(self, __context: Any) -> None:
         self._model_list = parse_models_file(MODEL_LIST_FILE_LOCATION)
         if not self._should_load_robusta_ai():
             return
+        # Ensure credentials exist early so default fallback can be loaded if needed
+        if self.cluster_name and not self.api_key:
+            dal = SupabaseDal(self.cluster_name)
+            self.load_robusta_api_key(dal)
         self.configure_robusta_ai_model()
```

Please verify this doesn’t regress CLI flows where Supabase isn’t initialized.
544-550: Add return type for load_robusta_api_key.

Explicitly annotate the mutator as returning None.

```diff
-    def load_robusta_api_key(self, dal: SupabaseDal):
+    def load_robusta_api_key(self, dal: SupabaseDal) -> None:
```
69-75: Nit: env-var replacement likely doesn’t persist into models.

Rebinding params without writing back is unclear. If replace_env_vars_values returns a new dict, assign it back; if it mutates in-place, consider explicitness for readability.

```diff
-    for _, params in models.items():
-        params = replace_env_vars_values(params)
+    for name, params in models.items():
+        models[name] = replace_env_vars_values(params)
```
9-11: Add PyYAML typing stubs and remove the inline ignore.

We currently depend on `pyyaml` (pyproject.toml line 17) but haven’t pulled in its stub package, so mypy can’t see `yaml`’s types. To clean up the import:

• In pyproject.toml (or your dev-requirements), under dev-dependencies add: `types-PyYAML = "^6.0.1"`
• Run `poetry install` (or `pip install -r requirements-dev.txt`).
• In holmes/config.py, remove the `# type: ignore` on the `import yaml` line.

This will let mypy pick up the stubs and keep imports clean.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
holmes/config.py (6 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit Inference Engine (CLAUDE.md)
**/*.py: ALWAYS place Python imports at the top of the file, not inside functions or methods
Type hints are required (project is type-checked with mypy)
Use Ruff for formatting and linting (configured in pyproject.toml)
Files:
holmes/config.py
🧬 Code Graph Analysis (1)
holmes/config.py (2)
holmes/clients/robusta_client.py (1)
fetch_robusta_models (18-31)
holmes/core/supabase_dal.py (1)
get_ai_credentials(463-475)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: llm_evals
- GitHub Check: Pre-commit checks
- GitHub Check: Pre-commit checks
🔇 Additional comments (3)
holmes/config.py (3)
41-41: SupabaseDal import at runtime looks appropriate. Bringing SupabaseDal to module scope is fine and supports runtime use where needed.
82-84: Adding account_id and session_token fields is reasonable. Using SecretStr for the session token is the right call.
530-533: LLM API key injection for Robusta models is fine. Using the presence of “is_robusta_model” to pick self.api_key is correct given the current design.
Requires new relay release for it to work