This repository was archived by the owner on Mar 13, 2025. It is now read-only.

Commit 891ccd7

updated README

Signed-off-by: Kourosh Hakhamaneshi <[email protected]>
1 parent 0bf97ce commit 891ccd7


README.md

Lines changed: 2 additions & 351 deletions
@@ -1,352 +1,3 @@
============================
# Archiving Ray LLM

# Ray LLM APIs are now upstreamed and moved to Ray Repo

We started RayLLM to simplify setting up and deploying LLMs on top of Ray Serve. In the past few months, vLLM has made significant improvements in ease of use. We are therefore archiving the RayLLM project and instead adding examples to our [Ray Serve docs](https://docs.ray.io/en/master/serve/tutorials/vllm-example.html) for deploying LLMs with Ray Serve and vLLM. This removes one more library for the community to learn and greatly simplifies the workflow for serving LLMs at scale. We also recently launched [Hosted Anyscale](https://www.anyscale.com/), where you can serve LLMs with Ray Serve and get additional capabilities out of the box, such as multi-LoRA with Serve multiplexing, JSON mode, function calling, and further performance enhancements.

============================
# RayLLM - LLMs on Ray

The hosted Aviary Explorer is not available anymore.
Visit [Anyscale](https://endpoints.anyscale.com) to experience models served with RayLLM.

[![Build status](https://badge.buildkite.com/d6d7af987d1db222827099a953410c4e212b32e8199ca513be.svg?branch=master)](https://buildkite.com/anyscale/aviary-docker)

RayLLM (formerly known as Aviary) is an LLM serving solution that makes it easy to deploy and manage a variety of open source LLMs, built on [Ray Serve](https://docs.ray.io/en/latest/serve/index.html). It does this by:

- Providing an extensive suite of pre-configured open source LLMs, with defaults that work out of the box.
- Supporting Transformer models hosted on [Hugging Face Hub](http://hf.co) or present on local disk.
- Simplifying the deployment of multiple LLMs.
- Simplifying the addition of new LLMs.
- Offering unique autoscaling support, including scale-to-zero.
- Fully supporting multi-GPU & multi-node model deployments.
- Offering high-performance features like continuous batching, quantization and streaming.
- Providing a REST API that is similar to OpenAI's, making it easy to migrate and cross-test.
- Supporting multiple LLM backends out of the box, including [vLLM](https://github.com/vllm-project/vllm) and [TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM).

In addition to LLM serving, it also includes a CLI and a web frontend (Aviary Explorer) that you can use to compare the outputs of different models directly, rank them by quality, get a cost and latency estimate, and more.

RayLLM supports continuous batching and quantization by integrating with [vLLM](https://github.com/vllm-project/vllm). Continuous batching allows you to get much better throughput and latency than static batching. Quantization allows you to deploy compressed models with cheaper hardware requirements and lower inference costs. See the [quantization guide](models/continuous_batching/quantization/README.md) for more details on running quantized models on RayLLM.

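For a concrete sense of what the vLLM integration provides, the sketch below shows quantized, continuously batched generation directly at the vLLM layer. This is plain vLLM usage rather than RayLLM's own configuration format, and the checkpoint name is only an illustrative placeholder.

```python
# Illustrative sketch of the vLLM layer RayLLM builds on; this is plain vLLM
# usage, not a RayLLM model config. The checkpoint name is a placeholder.
from vllm import LLM, SamplingParams

llm = LLM(
    model="TheBloke/Llama-2-7B-Chat-AWQ",  # an AWQ-quantized checkpoint (illustrative)
    quantization="awq",                    # load quantized weights to cut GPU memory
)
sampling = SamplingParams(temperature=0.7, max_tokens=128)

# vLLM schedules these prompts with continuous batching on the GPU.
outputs = llm.generate(["Hello!", "Tell me a short story."], sampling)
for output in outputs:
    print(output.outputs[0].text)
```
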
RayLLM leverages [Ray Serve](https://docs.ray.io/en/latest/serve/index.html), which has native support for autoscaling and multi-node deployments. RayLLM can scale to zero and create new model replicas (each composed of multiple GPU workers) in response to demand.

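As a rough illustration of how that autoscaling behavior is expressed in Ray Serve (a hedged sketch, not RayLLM's actual deployment code; parameter names follow the Ray Serve versions current at the time and may differ in newer releases):

```python
# Minimal Ray Serve sketch showing scale-to-zero autoscaling; not RayLLM's code.
from ray import serve


@serve.deployment(
    autoscaling_config={
        "min_replicas": 0,  # scale to zero when there is no traffic
        "max_replicas": 8,
        "target_num_ongoing_requests_per_replica": 16,
    },
    ray_actor_options={"num_gpus": 1},  # each replica requests one GPU worker
)
class StubLLM:
    async def __call__(self, request) -> str:
        # A real replica would run model inference here.
        return "ok"


app = StubLLM.bind()
# serve.run(app)  # deploys the application on the connected Ray cluster
```
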
# Getting started

## Deploying RayLLM

The guide below walks you through the steps required to deploy RayLLM on Ray Serve.

### Locally

We highly recommend using the official `anyscale/ray-llm` Docker image to run RayLLM. Manually installing RayLLM is currently not a supported use case because of its specific dependency requirements, some of which are not available on pip.

```shell
cache_dir=${XDG_CACHE_HOME:-$HOME/.cache}

docker run -it --gpus all --shm-size 1g -p 8000:8000 -e HF_HOME=~/data -v $cache_dir:~/data anyscale/ray-llm:latest bash
# Inside docker container
serve run ~/serve_configs/amazon--LightGPT.yaml
```

### On a Ray Cluster

RayLLM uses Ray Serve, so it can be deployed on Ray Clusters.

Currently, we only provide a guide and a pre-configured YAML file for AWS deployments.
**Make sure you have exported your AWS credentials locally.**

```bash
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export AWS_SESSION_TOKEN=...
```

Start by cloning this repo to your local machine.

You may need to specify your AWS private key in the `deploy/ray/rayllm-cluster.yaml` file.
See the [Ray on Cloud VMs](https://docs.ray.io/en/latest/cluster/vms/index.html) page in
the Ray documentation for more details.

```shell
git clone https://github.com/ray-project/ray-llm.git
cd ray-llm

# Start a Ray Cluster (this will take a few minutes to start up)
ray up deploy/ray/rayllm-cluster.yaml
```

#### Connect to your Cluster

```shell
# Connect to the Head node of your Ray Cluster (This will take several minutes to autoscale)
ray attach deploy/ray/rayllm-cluster.yaml

# Deploy the LightGPT model.
serve run serve_configs/amazon--LightGPT.yaml
```

You can deploy any model in the `models` directory of this repo,
or define your own model YAML file and run that instead.

### On Kubernetes

For Kubernetes deployments, please see our documentation for [deploying on KubeRay](https://github.com/ray-project/ray-llm/tree/master/docs/kuberay).

## Query your models

Once the models are deployed, you can install a client outside of the Docker container to query the backend.

```shell
pip install "rayllm @ git+https://github.com/ray-project/ray-llm.git"
```

You can query your RayLLM deployment in many ways.

In all cases, start by setting the endpoint URL:

```shell
export ENDPOINT_URL="http://localhost:8000/v1"
```

This assumes your deployment is running locally; to query a remote deployment, set `ENDPOINT_URL` to the remote URL instead.

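As a quick sanity check before sending chat requests, you can list the deployed models over the same OpenAI-compatible API (a minimal sketch, assuming the response follows the OpenAI `/models` list format that the SDK example below relies on):

```python
# Minimal sanity check: list the models served at ENDPOINT_URL.
import os

import requests

api_base = os.environ["ENDPOINT_URL"]
resp = requests.get(f"{api_base}/models")
resp.raise_for_status()
for model in resp.json()["data"]:
    print(model["id"])
```
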
### Using curl

You can use curl at the command line to query your deployed LLM:

```shell
% curl $ENDPOINT_URL/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta-llama/Llama-2-7b-chat-hf",
    "messages": [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Hello!"}],
    "temperature": 0.7
  }'
```

```text
{
  "id": "meta-llama/Llama-2-7b-chat-hf-308fc81f-746e-4682-af70-05d35b2ee17d",
  "object": "text_completion",
  "created": 1694809775,
  "model": "meta-llama/Llama-2-7b-chat-hf",
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "Hello there! *adjusts glasses* It's a pleasure to meet you! Is there anything I can help you with today? Have you got a question or a task you'd like me to assist you with? Just let me know!"
      },
      "index": 0,
      "finish_reason": "stop"
    }
  ],
  "usage": {"prompt_tokens": 30, "completion_tokens": 53, "total_tokens": 83}
}
```

### Connecting directly over python

Use the `requests` library to connect with Python. The script below receives a streamed response, automatically parses the output chunks, and prints just the content.

```python
import os
import json
import requests

s = requests.Session()

api_base = os.getenv("ENDPOINT_URL")
url = f"{api_base}/chat/completions"
body = {
    "model": "meta-llama/Llama-2-7b-chat-hf",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Tell me a long story with many words."}
    ],
    "temperature": 0.7,
    "stream": True,
}

with s.post(url, json=body, stream=True) as response:
    for chunk in response.iter_lines(decode_unicode=True):
        if chunk is not None:
            try:
                # Get data from response chunk
                chunk_data = chunk.split("data: ")[1]

                # Get message choices from data
                choices = json.loads(chunk_data)["choices"]

                # Pick content from first choice
                content = choices[0]["delta"]["content"]

                print(content, end="", flush=True)
            except json.decoder.JSONDecodeError:
                # Chunk was not formatted as expected
                pass
            except KeyError:
                # No message was contained in the chunk
                pass
    print("")
```

### Using the OpenAI SDK

RayLLM uses an OpenAI-compatible API, allowing us to use the OpenAI SDK to access our deployments. To do so, we need to set the `OPENAI_API_BASE` env var.

```shell
export OPENAI_API_BASE=http://localhost:8000/v1
export OPENAI_API_KEY='not_a_real_key'
```

```python
import openai

# List all models.
models = openai.Model.list()
print(models)

# Note: not all arguments are currently supported and will be ignored by the backend.
chat_completion = openai.ChatCompletion.create(
    model="meta-llama/Llama-2-7b-chat-hf",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Say 'test'."}
    ],
    temperature=0.7
)
print(chat_completion)
```

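Note that the snippet above targets the pre-1.0 `openai` package (which still provides `openai.Model` and `openai.ChatCompletion`). If you are on the 1.x SDK, the equivalent calls go through a client object; a minimal sketch, assuming the same endpoint and model name:

```python
# Equivalent calls with the openai>=1.0 SDK; a sketch, not part of the original docs.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # same value as OPENAI_API_BASE above
    api_key="not_a_real_key",             # placeholder, matching the shell export above
)

print(client.models.list())

chat_completion = client.chat.completions.create(
    model="meta-llama/Llama-2-7b-chat-hf",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Say 'test'."},
    ],
    temperature=0.7,
)
print(chat_completion.choices[0].message.content)
```
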
# RayLLM Reference

## Installing RayLLM

To install RayLLM and its dependencies, run the following command:

```shell
pip install "rayllm @ git+https://github.com/ray-project/ray-llm.git"
```

RayLLM consists of a set of configurations and utilities for deploying LLMs on Ray Serve, in addition to a frontend (Aviary Explorer), both of which come with additional dependencies. To install the dependencies for the frontend, run the following command:

```shell
pip install "rayllm[frontend] @ git+https://github.com/ray-project/ray-llm.git"
```

The backend dependencies are heavyweight and quite large. We recommend using the official
`anyscale/ray-llm` image. Installing the backend manually is not a supported use case.

### Usage stats collection

Ray collects basic, non-identifiable usage statistics to help us improve the project.
For more information on what is collected and how to opt out, see the
[Usage Stats Collection](https://docs.ray.io/en/latest/cluster/usage-stats.html) page in
the Ray documentation.

## Using RayLLM through the CLI

RayLLM uses the Ray Serve CLI, which allows you to interact with deployed models.

```shell
# Start a new model in Ray Serve from the provided configuration
serve run serve_configs/<model_config_path>

# Get the status of the running deployments
serve status

# Get the current config of the live Serve applications
serve config

# Shut down all Serve applications
serve shutdown
```

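The same status information is also available programmatically; a hedged sketch, assuming a Ray Serve version that exposes `serve.status()`:

```python
# Hedged sketch: inspect running Serve applications from Python instead of the CLI.
from ray import serve

status = serve.status()
for app_name, app_status in status.applications.items():
    print(app_name, app_status.status)
```
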
## RayLLM Model Registry

You can easily add new models by adding two configuration files.
To learn more about how to customize or add new models,
see the [Model Registry](models/README.md).

# Frequently Asked Questions

## How do I add a new model?

The easiest way is to copy an existing model's YAML configuration file and modify it. See models/README.md for more details.

## How do I deploy multiple models at once?

Run multiple models at once by aggregating the Serve configs for different models into a single, unified config. For example, use this config to run the `LightGPT` and `Llama-2-7b-chat` models in a single Serve application:

```yaml
# File name: serve_configs/config.yaml

applications:
- name: router
  import_path: rayllm.backend:router_application
  route_prefix: /
  args:
    models:
      - ./models/continuous_batching/amazon--LightGPT.yaml
      - ./models/continuous_batching/meta-llama--Llama-2-7b-chat-hf.yaml
```

The config includes both models in the `models` argument for the `router`. Additionally, the Serve configs for both model applications are included. Save this unified config file to the `serve_configs/` folder.

Run the config to deploy the models:

```shell
serve run serve_configs/<config.yaml>
```

## How do I deploy a model to multiple nodes?

All our default model configurations enforce a model to be deployed on one node for high performance. However, you can easily change this if you want to deploy a model across nodes for lower cost or better GPU availability. To do that, go to the model's YAML file in the model registry and change `placement_strategy` to `PACK` instead of `STRICT_PACK`.

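These values map to Ray placement group strategies. The sketch below is illustrative only (RayLLM reads `placement_strategy` from the model YAML rather than calling this API directly) and shows the difference between the two:

```python
# Illustrative: the difference between STRICT_PACK and PACK placement strategies.
import ray
from ray.util.placement_group import placement_group

ray.init(address="auto")  # connect to the running cluster

bundles = [{"GPU": 1} for _ in range(4)]

# STRICT_PACK: all bundles must fit on a single node (the default in the model configs).
strict_pg = placement_group(bundles, strategy="STRICT_PACK")

# PACK: bundles are packed onto as few nodes as possible, but may span multiple nodes.
pack_pg = placement_group(bundles, strategy="PACK")

ray.get(pack_pg.ready())  # blocks until the bundles have been placed
```
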
## My deployment isn't starting/working correctly, how can I debug?

There can be several reasons for the deployment not starting or not working correctly. Here are some things to check:

1. You might have specified an invalid model id.
2. Your model may require resources that are not available on the cluster. A common issue is that the model requires Ray custom resources (e.g. `accelerator_type_a10`) in order to be scheduled on the right node type, while your cluster is missing those custom resources. You can either modify the model configuration to remove those custom resources or, better yet, add them to the node configuration of your Ray cluster. You can debug this issue by looking at the Ray Autoscaler logs ([monitor.log](https://docs.ray.io/en/latest/ray-observability/user-guides/configure-logging.html#system-component-logs)) or with the snippet shown after this list.
3. Your model is a gated Hugging Face model (e.g. meta-llama). In that case, you need to set the `HUGGING_FACE_HUB_TOKEN` environment variable cluster-wide. You can do that either in the Ray cluster configuration or by setting it before running `serve run`.
4. Your model may be running out of memory. You can usually spot this issue by looking for keywords related to "CUDA", "memory" and "NCCL" in the replica logs or `serve run` output. In that case, consider reducing the `max_batch_prefill_tokens` and `max_batch_total_tokens` (if applicable). See models/README.md for more information on those parameters.

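For the resource issue in point 2, a quick way to see which resources (including custom resources such as `accelerator_type_a10`) your cluster actually has is to print Ray's cluster resource totals; a minimal sketch:

```python
# Quick check: list the resources Ray sees on the cluster, including custom
# resources such as accelerator_type_a10.
import ray

ray.init(address="auto")  # connect to the running cluster
for name, total in sorted(ray.cluster_resources().items()):
    print(f"{name}: {total}")
```
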
In general, [Ray Dashboard](https://docs.ray.io/en/latest/serve/monitoring.html#ray-dashboard) is a useful debugging tool, letting you monitor your Ray Serve / LLM application and access Ray logs.

A good sanity check is deploying the test model in tests/models/. If that works, you know you can deploy _a_ model.

## How do I write a program that accesses both OpenAI and your hosted model at the same time?

The OpenAI `create()` commands allow you to specify the `API_KEY` and `API_BASE`. So you can do something like this:

```python
# Call your self-hosted model running on the local host:
openai.ChatCompletion.create(api_base="http://localhost:8000/v1", api_key="", ...)

# Call OpenAI. Set OPENAI_API_KEY to your key and unset OPENAI_API_BASE.
openai.ChatCompletion.create(api_key="OPENAI_API_KEY", ...)
```

## Getting Help and Filing Bugs / Feature Requests

We are eager to help you get started with RayLLM. You can get help on:

- Via Slack -- fill in [this form](https://docs.google.com/forms/d/e/1FAIpQLSfAcoiLCHOguOm8e7Jnn-JJdZaCxPGjgVCvFijHB5PLaQLeig/viewform) to sign up.
- Via [Discuss](https://discuss.ray.io/c/llms-generative-ai/27).

For bugs or feature requests, please submit them [here](https://github.com/ray-project/ray-llm/issues/new).

## Contributions

We are also interested in accepting contributions. Those could be anything from a new evaluator, to integrating a new model with a YAML file, and more.
Feel free to post an issue to get our feedback on a proposal first, or just file a PR and we commit to giving you prompt feedback.

We use `pre-commit` hooks to ensure that all code is formatted correctly.
Make sure to `pip install pre-commit` and then run `pre-commit install`.
You can also run `./format` to run the hooks manually.

This repository has been archived and is no longer maintained. We have created `ray.serve.llm` and `ray.data.llm` APIs to simplify deployment of LLMs on top of [Ray](https://docs.ray.io/en/latest/). These APIs are now directly integrated into Ray and managed by the Ray team. The history of this repository has been moved to the [archived-master](https://github.com/ray-project/ray-llm/tree/archived-master) branch for historical context.
