
Documentation for prefix cache-aware router #55218


Draft
wants to merge 1 commit into base: master

Conversation

eicherseiji
Contributor

Why are these changes needed?

Related issue number

Checks

  • I've signed off every commit (by using the -s flag, i.e., git commit -s) in this PR.
  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
    • I've added any new APIs to the API Reference. For example, if I added a
      method in Tune, I've added it in doc/source/tune/api/ under the
      corresponding .rst file.
  • I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • Unit tests
    • Release tests
    • This PR is not tested :(

Contributor

gemini-code-assist bot left a comment


Summary of Changes

Hello @eicherseiji, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces comprehensive documentation for the PrefixAwarePow2RequestRouter, a specialized component designed to optimize Large Language Model (LLM) inference. The new guide aims to equip users with the knowledge to effectively configure, deploy, and monitor this router, thereby improving cache locality and overall performance for LLM serving workloads.

Highlights

  • New Documentation Added: A dedicated guide for the PrefixAwarePow2RequestRouter has been added to the advanced guides section of the documentation.
  • LLM Inference Optimization Explained: The new documentation details the router's three-tier algorithm for balancing cache locality and load distribution, specifically for Large Language Model (LLM) inference (a rough sketch of such a scheme follows this list).
  • Configuration and Deployment Guidance: The guide provides practical examples and parameters for configuring and deploying LLM applications using this specialized router.
  • Performance Tuning and Debugging: Sections are included to help users optimize performance, monitor routing decisions, and troubleshoot common issues such as low cache hit rates, load imbalance, or memory growth.
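The highlights above mention a three-tier algorithm without spelling it out. The sketch below is a rough, simplified illustration of how a prefix-aware power-of-two-choices decision could combine the `imbalanced_threshold` and `match_rate_threshold` parameters discussed in this PR. The `Replica` class, the `longest_prefix_match_rate` helper, and the exact tier ordering are illustrative assumptions, not the actual `PrefixAwarePow2RequestRouter` implementation or a Ray Serve API.

```python
import random
from dataclasses import dataclass, field


@dataclass
class Replica:  # hypothetical stand-in for a Serve replica, for illustration only
    name: str
    queue_len: int  # requests currently queued on this replica
    cached_prefixes: list = field(default_factory=list)  # prompts served recently


def longest_prefix_match_rate(prompt: str, prefixes: list) -> float:
    """Fraction of the prompt covered by the longest shared prefix with any cached entry."""
    best = 0
    for cached in prefixes:
        matched = 0
        for a, b in zip(prompt, cached):
            if a != b:
                break
            matched += 1
        best = max(best, matched)
    return best / len(prompt) if prompt else 0.0


def choose_replica(prompt, replicas, imbalanced_threshold=10, match_rate_threshold=0.1):
    queue_lens = [r.queue_len for r in replicas]
    # Imbalance guard: if queue lengths are too far apart, ignore cache locality
    # entirely and fall back to plain power-of-two-choices on queue length.
    if max(queue_lens) - min(queue_lens) > imbalanced_threshold:
        a, b = random.sample(replicas, 2)
        return min((a, b), key=lambda r: r.queue_len)

    # Cache-locality tier: pick the replica with the best prefix match rate,
    # but only if the match clears `match_rate_threshold`.
    best = max(replicas, key=lambda r: longest_prefix_match_rate(prompt, r.cached_prefixes))
    if longest_prefix_match_rate(prompt, best.cached_prefixes) >= match_rate_threshold:
        return best

    # Fallback tier: no replica has a useful cached prefix, so load-balance
    # with power-of-two-choices.
    a, b = random.sample(replicas, 2)
    return min((a, b), key=lambda r: r.queue_len)


replicas = [
    Replica("replica-1", queue_len=2, cached_prefixes=["You are a helpful assistant."]),
    Replica("replica-2", queue_len=1),
]
# The shared system prompt should route this request to replica-1 despite its longer queue.
print(choose_replica("You are a helpful assistant. Summarize this text:", replicas).name)
```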

Contributor

gemini-code-assist bot left a comment


Code Review

This pull request adds comprehensive documentation for the new PrefixAwarePow2RequestRouter. The guide is well-structured and covers the functionality, configuration, and best practices. I've identified a critical issue in the main code example that would prevent it from running, along with a minor typo. I've provided suggestions to correct these issues. The changes to index.md look good.

Comment on lines +69 to +95
```python
import ray
from ray import serve
from ray.llm._internal.serve.request_router.prefix_aware.prefix_aware_router import (
    PrefixAwarePow2ReplicaRouter
)

llm_config = LLMConfig(
    model_loading_config=dict(
        model_id="qwen-0.5b",
        model_source="Qwen/Qwen2.5-0.5B-Instruct",
    ),
    deployment_config=dict(
        request_router_config=dict(
            request_router_class=PrefixAwarePow2ReplicaRouter,
            request_router_kwargs=
        ),
        # Configure routing behavior
        request_router_kwargs={
            "imbalanced_threshold": 5,  # More aggressive load balancing
            "match_rate_threshold": 0.15,  # Require 15% match rate
            "do_eviction": True,  # Enable memory management
            "eviction_threshold_chars": 500_000,
            "eviction_target_chars": 400_000,
            "eviction_interval_secs": 30,
        }
    ),
```
Contributor


critical

This code example has a few issues that will prevent it from running:

  1. LLMConfig and build_openai_app are used without being imported, which will cause a NameError.
  2. There's a syntax error in deployment_config. The request_router_kwargs is specified at the wrong level, and there's a stray request_router_kwargs= line which is invalid syntax.

Here is a corrected version of the code block with the necessary imports and correct configuration structure to make it runnable for users.

```python
import ray
from ray import serve
from ray.llm import LLMConfig
from ray.llm.openai import build_openai_app
from ray.llm._internal.serve.request_router.prefix_aware.prefix_aware_router import (
    PrefixAwarePow2ReplicaRouter
)

llm_config = LLMConfig(
    model_loading_config=dict(
        model_id="qwen-0.5b",
        model_source="Qwen/Qwen2.5-0.5B-Instruct",
    ),
    deployment_config=dict(
        request_router_config=dict(
            request_router_class=PrefixAwarePow2ReplicaRouter,
            request_router_kwargs={
                "imbalanced_threshold": 5,  # More aggressive load balancing
                "match_rate_threshold": 0.15,  # Require 15% match rate
                "do_eviction": True,  # Enable memory management
                "eviction_threshold_chars": 500_000,
                "eviction_target_chars": 400_000,
                "eviction_interval_secs": 30,
            }
        )
    ),
)

# Deploy the application
app = build_openai_app({"llm_configs": [llm_config]})
serve.run(app)
```

(prefix-aware-algorithm)=
## How Prefix Cache-Aware Routing Works

The `PrefixAwarePow2RequestRouter` implements a three tier routing strategy that balances cache locality with load distribution:
Contributor


medium

For better readability and grammatical correctness, it's best to hyphenate compound adjectives like 'three-tier' when they precede a noun.

Suggested change
The `PrefixAwarePow2RequestRouter` implements a three tier routing strategy that balances cache locality with load distribution:
The `PrefixAwarePow2RequestRouter` implements a three-tier routing strategy that balances cache locality with load distribution:

```python
import ray
from ray import serve
from ray.llm._internal.serve.request_router.prefix_aware.prefix_aware_router import (
    PrefixAwarePow2ReplicaRouter
)
```
Contributor


Have you considered exposing this from a public module that is not `_internal`? If you keep the `_internal` path, you will have no choice but to move it when you stabilize the API. If you put it at a location that might be stable already, you may be able to keep it. One reasonable choice would be `from ray.serve.llm.routers import PrefixAwarePow2ReplicaRouter`. It might also make it more likely for people to adopt this feature.
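For illustration, here is a rough sketch of what that suggestion would look like next to the import path used in the documented example. Note that `ray.serve.llm.routers` is only the reviewer's hypothetical proposal and does not exist in this PR.

```python
# Current import path used in the documented example (internal, may change
# without notice between Ray releases):
from ray.llm._internal.serve.request_router.prefix_aware.prefix_aware_router import (
    PrefixAwarePow2ReplicaRouter,
)

# Reviewer's suggested public location (hypothetical; not implemented in this PR):
# from ray.serve.llm.routers import PrefixAwarePow2ReplicaRouter
```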
