
Conversation

@Sheeproid
Contributor

No description provided.

@Sheeproid Sheeproid requested a review from moshemorad August 10, 2025 14:05
@Sheeproid Sheeproid enabled auto-merge (squash) August 10, 2025 14:05
@coderabbitai
Contributor

coderabbitai bot commented Aug 10, 2025

Walkthrough

A new configuration option, mock_policy: always_mock, was added to a YAML test case file. This change directs the test to consistently mock external dependencies or services during execution. No other aspects of the test logic or configuration were altered.

Changes

Cohort / File(s) Change Summary
Test Case Configuration Update
tests/llm/fixtures/test_ask_holmes/93_calling_datadog/test_case.yaml
Added mock_policy: always_mock to enforce mocking of external dependencies.
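For reference, the change amounts to a single added line in the test case file. A minimal sketch (the surrounding keys are illustrative placeholders, not copied from the actual fixture):

```yaml
# tests/llm/fixtures/test_ask_holmes/93_calling_datadog/test_case.yaml
# Only mock_policy was added in this PR; the other keys shown are placeholders.
user_prompt: "..."        # illustrative
test_type: "server"       # discussed later in the review
mock_policy: always_mock  # new: always use recorded mocks, never call Datadog live
```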

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~2 minutes



📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 470b1d6 and ac3af26.

📒 Files selected for processing (1)
  • tests/llm/fixtures/test_ask_holmes/93_calling_datadog/test_case.yaml (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
tests/**

📄 CodeRabbit Inference Engine (CLAUDE.md)

Tests must match source structure under tests/

Files:

  • tests/llm/fixtures/test_ask_holmes/93_calling_datadog/test_case.yaml
**/*.yaml

📄 CodeRabbit Inference Engine (CLAUDE.md)

ALWAYS use Secrets for scripts, not inline manifests or ConfigMaps (prevents code visibility with kubectl describe)

Files:

  • tests/llm/fixtures/test_ask_holmes/93_calling_datadog/test_case.yaml
tests/**/*.yaml

📄 CodeRabbit Inference Engine (CLAUDE.md)

tests/**/*.yaml: Never use names that hint at the problem or expected behavior in resource names (e.g., avoid 'broken-pod', 'test-project-that-does-not-exist', 'crashloop-app'). Use neutral names that don't give away what the LLM should discover
Each test must use a dedicated namespace app-<testid> to prevent conflicts
All pod names must be unique across tests
Resource naming should be neutral, not hint at the problem
Use minimal resource footprints (e.g., reduce memory/CPU for Loki in tests)

Files:

  • tests/llm/fixtures/test_ask_holmes/93_calling_datadog/test_case.yaml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: llm_evals
  • GitHub Check: Pre-commit checks
  • GitHub Check: Pre-commit checks
🔇 Additional comments (2)
tests/llm/fixtures/test_ask_holmes/93_calling_datadog/test_case.yaml (2)

1-26: No Kubernetes manifests in this test directory – guidelines not applicable

  • The only YAML files under tests/llm/fixtures/test_ask_holmes/93_calling_datadog are test_case.yaml and toolsets.yaml, and neither declares apiVersion, kind, metadata, or namespace.
  • There are no resource names or namespaces to validate against the “app-” pattern.
  • No inline scripts or manifests here, so secrets and resource‐footprint policies don’t apply.

If you add any Kubernetes manifests in this test in the future, please ensure:

  • Namespaces follow app-<testid>.
  • Resource names remain neutral (no hints at failure modes).
  • Sensitive data is injected via Secrets.
  • CPU/memory requests and limits are kept minimal.
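A hypothetical manifest following the guidelines above might look like this (all names are illustrative, not taken from the repository):

```yaml
# Sketch of a guideline-compliant test manifest; names are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: web-app-1          # neutral name, no hint at the failure mode
  namespace: app-93        # dedicated app-<testid> namespace
spec:
  containers:
    - name: web
      image: nginx:1.27
      # Scripts would be mounted from a Secret, not an inline ConfigMap,
      # so kubectl describe does not expose their contents.
      resources:
        requests: { cpu: 10m, memory: 32Mi }   # minimal footprint
        limits:   { cpu: 50m, memory: 64Mi }
```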

18-19: Confirm harness support for mock_policy

All mock_policy values (inherit, never_mock, always_mock) are defined and used consistently:

  • MockPolicy enum in tests/llm/utils/mock_toolset.py:69 defines those values.
  • Documentation (docs/development/evals/adding-new-eval.md:75–76) matches the implementation.
  • Calls in MockToolsetManager (lines 502–514) honor always_mock and never_mock; inherit falls back to mock_generation_config.mode.

test_type: "server" is not used by the mock loader

  • We found no references to test_type in MockToolsetManager or related fixtures.
  • The setting is only metadata in YAML; it does not enforce network prohibition.

Mock responses align with expected_output

  • Existing server tests with always_mock have no failures—mocks are applied.
  • You may spot-check output wording against tests/llm/fixtures/.../expected_output.yaml.

Next steps:

  • If you want test_type: "server" to block live network calls or secrets, add support in MockToolsetManager or the test harness.
  • Otherwise, the harness behaves correctly: always_mock skips generation and uses mocks; never_mock enforces live runs; inherit defers to the global mode.
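The resolution order described above can be sketched as follows. This is a hypothetical reconstruction based on the review's description of tests/llm/utils/mock_toolset.py; the names MockPolicy, resolve_mock_mode, and global_mode follow that description rather than the actual source:

```python
from enum import Enum


class MockPolicy(Enum):
    """Hypothetical sketch of the enum the review describes."""
    INHERIT = "inherit"
    NEVER_MOCK = "never_mock"
    ALWAYS_MOCK = "always_mock"


def resolve_mock_mode(policy: MockPolicy, global_mode: str) -> str:
    """always_mock forces mocks, never_mock forces live runs,
    and inherit falls back to the global mock-generation mode."""
    if policy is MockPolicy.ALWAYS_MOCK:
        return "mock"
    if policy is MockPolicy.NEVER_MOCK:
        return "live"
    return global_mode  # INHERIT


print(resolve_mock_mode(MockPolicy.ALWAYS_MOCK, "live"))  # mock
```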
🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Explain this complex logic.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai explain this code block.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and explain its main purpose.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.

Support

Need help? Create a ticket on our support page for assistance with any issues or questions.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate docstrings to generate docstrings for this PR.
  • @coderabbitai generate sequence diagram to generate a sequence diagram of the changes in this PR.
  • @coderabbitai generate unit tests to generate unit tests for this PR.
  • @coderabbitai resolve resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json
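For example, the top of a .coderabbit.yaml could start with the schema comment followed by ordinary configuration keys (the keys below are illustrative; consult the configuration documentation for the actual schema):

```yaml
# yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json
reviews:
  profile: chill   # illustrative key/value
```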

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

@github-actions
Contributor

Results of HolmesGPT evals

  • ask_holmes: 23/43 test cases were successful, 1 regression, 1 skipped, 18 mock failures
Test suite Test case Status
ask 01_how_many_pods
ask 02_what_is_wrong_with_pod 🔧
ask 03_what_is_the_command_to_port_forward 🔧
ask 04_related_k8s_events ↪️
ask 05_image_version 🔧
ask 09_crashpod
ask 10_image_pull_backoff 🔧
ask 11_init_containers
ask 14_pending_resources
ask 15_failed_readiness_probe
ask 17_oom_kill
ask 18_crash_looping_v2
ask 19_detect_missing_app_details 🔧
ask 24_misconfigured_pvc 🔧
ask 28_permissions_error
ask 29_events_from_alert_manager 🔧
ask 39_failed_toolset 🔧
ask 41_setup_argo
ask 42_dns_issues_steps_new_tools 🔧
ask 43_current_datetime_from_prompt
ask 45_fetch_deployment_logs_simple
ask 51_logs_summarize_errors 🔧
ask 53_logs_find_term
ask 54_not_truncated_when_getting_pods 🔧
ask 59_label_based_counting
ask 60_count_less_than 🔧
ask 61_exact_match_counting
ask 63_fetch_error_logs_no_errors
ask 77_liveness_probe_misconfiguration 🔧
ask 79_configmap_mount_issue 🔧
ask 83_secret_not_found 🔧
ask 86_configmap_like_but_secret 🔧
ask 88_affinity_like_but_taints 🔧
ask 89_runbook_missing_cloudwatch 🔧
ask 90_runbook_basic_selection 🔧
ask 93_calling_datadog
ask 93_calling_datadog
ask 93_calling_datadog
ask 97_logs_clarification_needed 🔧
ask 100_historical_logs 🔧
ask 110_k8s_events_image_pull 🔧
ask 24a_misconfigured_pvc_basic 🔧
ask 13a_pending_node_selector_basic 🔧

Legend

  • ✅ the test was successful
  • ↪️ the test was skipped
  • ⚠️ the test failed but is known to be flaky or known to fail
  • 🔧 the test failed due to mock data issues (not a code regression)
  • ❌ the test failed and should be fixed before merging the PR

@Sheeproid Sheeproid merged commit 63d95d9 into master Aug 10, 2025
10 of 11 checks passed
@Sheeproid Sheeproid deleted the mock-datadog-test branch August 10, 2025 14:13
@coderabbitai coderabbitai bot mentioned this pull request Aug 11, 2025
