
Conversation

@moshemorad (Contributor)

No description provided.

@coderabbitai (Contributor)

coderabbitai bot commented Sep 14, 2025

Walkthrough

Adds metadata["max_output_tokens"] (sourced from llm.get_maximum_output_token()) to final outputs in ToolCallingLLM.call and ToolCallingLLM.call_stream when completing responses.

Changes

Cohort / File(s): LLM response metadata augmentation (holmes/core/tool_calling_llm.py)
Summary: Injects metadata["max_output_tokens"] alongside the existing usage/max_tokens keys in the call() and call_stream() finalization paths. No control-flow or signature changes.
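As a rough illustration of the change, here is a minimal, self-contained sketch: get_maximum_output_token() is the real helper named in the walkthrough, while the stub class, function name, and placeholder values are invented for the example.

from typing import Any, Dict


class _StubLLM:
    """Stand-in for the Holmes LLM wrapper; only this one method matters here."""

    def get_maximum_output_token(self) -> int:
        # The real implementation derives this from the configured model;
        # 4096 is just a placeholder value for the sketch.
        return 4096


def finalize_metadata(metadata: Dict[str, Any], llm: _StubLLM) -> Dict[str, Any]:
    # "usage" and "max_tokens" are assumed to be populated earlier, as in the
    # existing finalization code; this PR adds the extra key alongside them.
    metadata["max_output_tokens"] = llm.get_maximum_output_token()
    return metadata


print(finalize_metadata({"usage": {}, "max_tokens": 128_000}, _StubLLM()))
# -> {'usage': {}, 'max_tokens': 128000, 'max_output_tokens': 4096}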

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Possibly related PRs

Suggested reviewers

  • aantn

Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)

  • Docstring Coverage (⚠️ Warning): Docstring coverage is 50.00%, below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
  • Description Check (❓ Inconclusive): No pull request description was provided, so there is insufficient information to determine whether the author conveyed intent, scope, or testing notes for the changeset. Resolution: ask the author to add a brief description summarizing the change (adding metadata["max_output_tokens"]), the rationale for it, and any relevant testing or compatibility notes so reviewers have context.

✅ Passed checks (1 passed)

  • Title Check (✅ Passed): The title clearly and concisely summarizes the primary change: adding a "max_output_tokens" value to Holmes responses. This matches the modifications in holmes/core/tool_calling_llm.py, where metadata["max_output_tokens"] is set in call() and call_stream(). The title is short, specific, and immediately understandable to reviewers scanning PR history.
✨ Finishing touches
  • 📝 Generate Docstrings

🧪 Generate unit tests
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch add_also_max_output_tokens

Tip

👮 Agentic pre-merge checks are now available in preview!

Pro plan users can now enable pre-merge checks in their settings to enforce checklists before merging PRs.

  • Built-in checks – Quickly apply ready-made checks to enforce title conventions, require pull request descriptions that follow templates, validate linked issues for compliance, and more.
  • Custom agentic checks – Define your own rules using CodeRabbit’s advanced agentic capabilities to enforce organization-specific policies and workflows. For example, you can instruct CodeRabbit’s agent to verify that API documentation is updated whenever API schema files are modified in a PR. Note: Up to 5 custom checks are currently allowed during the preview period. Pricing for this feature will be announced in a few weeks.

Please see the documentation for more information.

Example:

reviews:
  pre_merge_checks:
    custom_checks:
      - name: "Undocumented Breaking Changes"
        mode: "warning"
        instructions: |
          Pass/fail criteria: All breaking changes to public APIs, CLI flags, environment variables, configuration keys, database schemas, or HTTP/GraphQL endpoints must be documented in the "Breaking Change" section of the PR description and in CHANGELOG.md. Exclude purely internal or private changes (e.g., code not exported from package entry points or explicitly marked as internal).

Please share your feedback with us on this Discord post.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

Comment @coderabbitai help to get the list of available commands and usage tips.

@moshemorad moshemorad requested a review from aantn September 14, 2025 11:25

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
holmes/core/tool_calling_llm.py (1)

425-431: Set metadata["max_output_tokens"] for non-post-processing return

metadata["max_output_tokens"] is only set in the post-processing branch; add the same assignment before the direct LLMResult return (holmes/core/tool_calling_llm.py — before return at ~line 441).

                 perf_timing.end(f"- completed in {i} iterations -")
+                # Keep metadata consistent with post-processing and streaming responses
+                metadata["max_output_tokens"] = maximum_output_token
                 return LLMResult(
                     result=text_response,
                     tool_calls=tool_calls,
                     prompt=json.dumps(messages, indent=2),
                     messages=messages,
                     **costs.model_dump(),  # Include all cost fields
                     metadata=metadata,
                 )
🧹 Nitpick comments (1)
holmes/core/tool_calling_llm.py (1)

428-429: Nit: Clarify metadata naming for context size vs. output tokens

metadata["max_tokens"] holds the model context window size (not a “max completion tokens” parameter). Consider renaming to metadata["context_window_tokens"] (or adding this as an alias) to avoid confusion now that metadata also contains max_output_tokens.

Also applies to: 872-873
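
If the alias route were taken, one possible shape is sketched below; "context_window_tokens" is only the name floated in this nitpick, and the surrounding plumbing is assumed, not existing code.

from typing import Any, Dict


def add_context_window_keys(metadata: Dict[str, Any], context_window: int) -> Dict[str, Any]:
    # Keep the legacy key so existing consumers keep working, and expose the
    # clearer alias suggested above. Both values hold the model's context
    # window size, not a max-completion-tokens parameter.
    metadata["max_tokens"] = context_window
    metadata["context_window_tokens"] = context_window
    return metadata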

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR, between commits 92516fb and ffa33ba.

📒 Files selected for processing (1)
  • holmes/core/tool_calling_llm.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.py: Use Ruff for formatting and linting
Type hints are required (checked by mypy)
Always place Python imports at the top of the file, not inside functions or methods

Files:

  • holmes/core/tool_calling_llm.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Pre-commit checks
  • GitHub Check: llm_evals
🔇 Additional comments (1)
holmes/core/tool_calling_llm.py (1)

869-876: LGTM: max_output_tokens included in streaming final message

Streaming path now populates metadata["max_output_tokens"] alongside usage and max_tokens. Matches the PR intent.

If clients consume this field, confirm no downstream schema validation breaks when the field appears in streaming but was previously absent.
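
For instance, a client that validates the final streaming message with a strict schema can declare the new field as optional with a default. A sketch follows (model and field names are assumed for illustration, not Holmes's actual client code):

from typing import Any, Dict, Optional

from pydantic import BaseModel


class StreamMetadata(BaseModel):
    # Declaring max_output_tokens as optional lets payloads that predate this
    # PR (without the key) and newer ones (with it) both validate cleanly.
    usage: Dict[str, Any] = {}
    max_tokens: Optional[int] = None
    max_output_tokens: Optional[int] = None


old = StreamMetadata.model_validate({"usage": {}, "max_tokens": 128000})
new = StreamMetadata.model_validate(
    {"usage": {}, "max_tokens": 128000, "max_output_tokens": 4096}
)
print(old.max_output_tokens, new.max_output_tokens)  # prints: None 4096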

@github-actions (Contributor)

Results of HolmesGPT evals

  • ask_holmes: 28/37 test cases were successful, 4 regressions, 2 skipped, 3 setup failures
Test suite  Test case                             Status
ask         01_how_many_pods
ask         02_what_is_wrong_with_pod
ask         04_related_k8s_events                 ↪️
ask         05_image_version
ask         09_crashpod
ask         10_image_pull_backoff
ask         110_k8s_events_image_pull
ask         11_init_containers
ask         13a_pending_node_selector_basic
ask         14_pending_resources
ask         15_failed_readiness_probe
ask         17_oom_kill
ask         18_crash_looping_v2                   🚧
ask         19_detect_missing_app_details
ask         20_long_log_file_search
ask         24_misconfigured_pvc
ask         24a_misconfigured_pvc_basic
ask         28_permissions_error                  🚧
ask         29_events_from_alert_manager          ↪️
ask         39_failed_toolset
ask         41_setup_argo
ask         42_dns_issues_steps_new_tools
ask         43_current_datetime_from_prompt
ask         45_fetch_deployment_logs_simple
ask         51_logs_summarize_errors
ask         53_logs_find_term
ask         54_not_truncated_when_getting_pods
ask         59_label_based_counting
ask         60_count_less_than                    🚧
ask         61_exact_match_counting
ask         63_fetch_error_logs_no_errors
ask         79_configmap_mount_issue
ask         83_secret_not_found
ask         86_configmap_like_but_secret
ask         93_calling_datadog[0]
ask         93_calling_datadog[1]
ask         93_calling_datadog[2]

Legend

  • ✅ the test was successful
  • ↪️ the test was skipped
  • ⚠️ the test failed but is known to be flaky or known to fail
  • 🚧 the test had a setup failure (not a code regression)
  • 🔧 the test failed due to mock data issues (not a code regression)
  • ❌ the test failed and should be fixed before merging the PR

@moshemorad moshemorad merged commit a69880c into master Sep 14, 2025
7 checks passed
@moshemorad moshemorad deleted the add_also_max_output_tokens branch September 14, 2025 11:35