
Conversation

Contributor

@huydhn huydhn commented Jul 2, 2025

Essential Elements of an Effective PR Description Checklist

  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results

Purpose

Update vLLM to PyTorch 2.8.0 now that it has been released.

Test Plan

CI

Test Result

There are some failures; I'm evaluating each one to confirm that they are existing failures from main.

The benchmark results look normal.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @huydhn, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request initiates the process of upgrading the project's PyTorch dependency to version 2.8.0, targeting its release candidate builds. This involves updating not only the core PyTorch version but also ensuring compatibility across related libraries like Torchaudio and Torchvision, and adjusting build and test environments to source these new versions from PyTorch's test indices. The changes are foundational, impacting various dependency management files and build configurations.

Highlights

  • PyTorch Version Update: The core PyTorch dependency has been updated from version 2.7.0 to 2.8.0 across pyproject.toml, requirements/build.txt, requirements/cpu.txt, requirements/cuda.txt, requirements/rocm-build.txt, and requirements/test.in.
  • Associated Library Updates: Torchaudio and Torchvision have been updated to their 2.8.0 and 0.23.0 compatible versions, respectively, to align with the new PyTorch release.
  • Test Index Configuration: Build and test configurations, including .pre-commit-config.yaml, docker/Dockerfile, and various requirements/*.txt files, have been adjusted to pull PyTorch and related packages from the test release candidate index URLs (e.g., https://download.pytorch.org/whl/test/cu128); a minimal install sketch follows this list.
  • CUDA and ROCm Dependency Alignment: Specific CUDA-related dependencies in requirements/test.txt (e.g., nvidia-cublas, nvidia-cudnn) have been updated to align with the PyTorch 2.8.0 release candidate, and ROCm versions in requirements/rocm-build.txt are updated to 6.3.
  • FlashInfer and Triton Updates: The FlashInfer wheel URL in docker/Dockerfile has been updated to reflect PyTorch 2.8, and the Triton version in requirements/test.txt has been bumped from 3.3.0 to 3.4.0.
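
For reference, a minimal install sketch (not taken from this PR's diff; it only restates the versions and test index named above, and the cu128 index is an assumption about the target CUDA build):

# Sketch: install the PyTorch 2.8.0 release candidates from the test index
pip install --index-url https://download.pytorch.org/whl/test/cu128 \
    torch==2.8.0 torchvision==0.23.0 torchaudio==2.8.0
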
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

Feature               Command              Description
Code Review           /gemini review       Performs a code review for the current pull request in its current state.
Pull Request Summary  /gemini summary      Provides a summary of the current pull request in its current state.
Comment               @gemini-code-assist  Responds in comments when explicitly tagged, both in issue comments and review comments.
Help                  /gemini help         Displays a list of available commands.

Customization

To customize Gemini Code Assist for your GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@mergify mergify bot added labels on Jul 2, 2025: ci/build, rocm (Related to AMD ROCm)
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request updates PyTorch to version 2.8.0 and its related dependencies like torchvision, torchaudio, and Triton. The changes are mostly version bumps in requirement files and configuration files to use the PyTorch test package index. The changes look consistent with the goal of the PR. I've found one minor issue related to redundant configuration that could be improved for better maintainability.


github-actions bot commented Jul 2, 2025

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run the fastcheck CI, which runs a small and essential subset of tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@mergify mergify bot added labels on Jul 8, 2025: documentation (Improvements or additions to documentation), deepseek (Related to DeepSeek models), frontend, llama (Related to Llama models), multi-modality (Related to multi-modality (#4194)), new-model (Requests to new models), performance (Performance-related issues), qwen (Related to Qwen models), structured-output
@mergify mergify bot added labels on Jul 8, 2025: speculative-decoding, v1, tpu (Related to Google TPUs)

mergify bot commented Jul 8, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @huydhn.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork
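
A typical rebase flow for resolving this (a sketch only; the remote name upstream is an assumption, not something specified by the bot):

# Sketch: bring the PR branch up to date with vLLM main, then force-push
git fetch upstream
git rebase upstream/main
# resolve any conflicts, then:
git push --force-with-lease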

@mergify mergify bot removed labels on Jul 8, 2025: tpu (Related to Google TPUs), needs-rebase
Member

hmellor commented Aug 30, 2025

@vadimkantorov yes, the nightlies from now on should include PyTorch 2.8.

@vadimkantorov

Are nightlies deleted over time? I'm still struggling to figure out a URL; the docs should provide concrete URL examples, not just templates using COMMIT and VERSION.

Could you please advise what the URL is or where to find all currently available/published nightly wheel URLs?

Thanks :)

Member

hmellor commented Aug 30, 2025

If you just want the latest nightly, follow the instructions at https://docs.vllm.ai/en/latest/getting_started/installation/gpu.html#install-the-latest-code (the docs link you provided is ancient).


vadimkantorov commented Aug 30, 2025

I'd like to get the direct wheel link (e.g. for placing into a uv pyproject). This readme just shows the pip install command, and another readme shows a template for a link, but no concrete final example (so it's hard to grasp the correct version format: does it include dev+commit or not? Is the commit in short or long form?):


export VLLM_VERSION=0.5.4 # vLLM's main branch version is currently set to latest released tag
pip install https://vllm-wheels.s3.us-west-2.amazonaws.com/nightly/vllm-${VLLM_VERSION}-cp38-abi3-manylinux1_x86_64.whl
# You can also access a specific commit
# export VLLM_COMMIT=...
# pip install https://vllm-wheels.s3.us-west-2.amazonaws.com/${VLLM_COMMIT}/vllm-${VLLM_VERSION}-cp38-abi3-manylinux1_x86_64.whl

And also, how long is such a wheel kept online?

Member

hmellor commented Aug 30, 2025

Sorry, I don't know the answer to that question


nWEIdia commented Aug 30, 2025

> I'm still struggling to figure out a URL

You can use this direct URL:

# Download the latest nightly wheel directly from S3
wget https://vllm-wheels.s3.amazonaws.com/nightly/vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl
# Remove any previously installed torch, then install the nightly vLLM wheel
pip uninstall torch
pip install vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl
# Inspect the installed version and its dependency tree
pip install pipdeptree
pipdeptree -p vllm
pip list | grep vllm

wei:~$ pip list | grep vllm
vllm 0.10.1rc2.dev397+g038e9be4e (this commit refers to: 038e9be)

pipdeptree -p vllm | grep torch
├── torch [required: ==2.8.0, installed: 2.8.0]

You can also unzip vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl (wheel unpack does not seem to work) and you should see vllm-0.10.1rc2.dev397+g038e9be4e.dist-info; its METADATA file has: Requires-Dist: torch==2.8.0
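
The inspection described above can be scripted roughly as follows (a sketch; the extraction directory name is made up):

# Sketch: unpack the nightly wheel and check which torch version it requires
unzip -o vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl -d vllm_whl
grep '^Requires-Dist: torch' vllm_whl/vllm-*.dist-info/METADATA
# expected output, per the comment above: Requires-Dist: torch==2.8.0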

@youkaichao
Member

@vadimkantorov

@nWEIdia Do you know how long these wheels stay up? Basically, I'd like to pin a particular nightly commit. The wheel at https://vllm-wheels.s3.amazonaws.com/nightly/vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl likely gets updated every day, so it's not good to pin it in pyproject.toml.

What would be a sustainable way of pinning a particular commit in pyproject.toml (at least until a new release is published, as I imagine it's costly to keep the heavy per-commit wheels forever)?


nWEIdia commented Sep 2, 2025

> @nWEIdia Do you know how long these wheels stay up? Basically, I'd like to pin a particular nightly commit. The wheel at https://vllm-wheels.s3.amazonaws.com/nightly/vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl likely gets updated every day, so it's not good to pin it in pyproject.toml.
>
> What would be a sustainable way of pinning a particular commit in pyproject.toml (at least until a new release is published, as I imagine it's costly to keep the heavy per-commit wheels forever)?

I don't know how long the nightly wheels stay up.
I agree it would be great if the wheels with the real commit id in their names were also browsable. We know they are in S3, but I could not directly view them either. If these wheels were browsable, your workflow would work.

@vadimkantorov

> the commands are in https://blog.vllm.ai/2025/01/10/dev-experience.html

@youkaichao strangely, these do not include uv add --index for nightlies (or any uv add, actually)...

Collaborator

simon-mo commented Sep 2, 2025

Commit-wise, they are also stored at:

- https://vllm-wheels.s3.amazonaws.com/nightly/vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl
+ https://vllm-wheels.s3.amazonaws.com/COMMIT_HASH/vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl

We currently don't have a TTL for wheel artifacts.


nWEIdia commented Sep 2, 2025

> Commit-wise, they are also stored at:
>
> - https://vllm-wheels.s3.amazonaws.com/nightly/vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl
> + https://vllm-wheels.s3.amazonaws.com/COMMIT_HASH/vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl
>
> We currently don't have a TTL for wheel artifacts.

Thanks @simon-mo! You are right; I can click and download, e.g., this file: https://vllm-wheels.s3.amazonaws.com/038e9be4eb7a63189c8980845d80cb96957b9919/vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl
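
For completeness, a sketch of installing that per-commit wheel directly with pip, mirroring the template quoted earlier (not an officially documented command):

# Sketch: install the wheel built at commit 038e9be straight from S3
pip install https://vllm-wheels.s3.amazonaws.com/038e9be4eb7a63189c8980845d80cb96957b9919/vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl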


vadimkantorov commented Sep 2, 2025

I've tried, and running uv sync --active in the current directory (whose only file is the pyproject.toml printed below) leads to an error that I honestly cannot decipher:

  × No solution found when resolving dependencies:
  ╰─▶ Because outlines==0.1.11 depends on outlines-core==0.1.26 and vllm==0.10.1rc2.dev397+g038e9be4e depends on
      outlines{platform_machine == 's390x'}==0.1.11, we can conclude that vllm==0.10.1rc2.dev397+g038e9be4e and
      outlines-core{platform_machine != 's390x'}==0.2.10 are incompatible.
      And because vllm==0.10.1rc2.dev397+g038e9be4e depends on outlines-core{platform_machine != 's390x'}==0.2.10, we can
      conclude that vllm==0.10.1rc2.dev397+g038e9be4e cannot be used.
      And because only vllm==0.10.1rc2.dev397+g038e9be4e is available and your project depends on vllm, we can conclude
      that your project's requirements are unsatisfiable.
# pyproject.toml

[build-system]
requires = ["setuptools>=65", "wheel"]
build-backend = "setuptools.build_meta"

[project]
name =  "test_vllm"
version = "0.0.0.1"
requires-python = ">=3.12"
dependencies = [
    "vllm",
    "torch>=2.8",
    "flash-attn==2.8.2",
]

[tool.uv.sources]
vllm = { url = "https://vllm-wheels.s3.amazonaws.com/038e9be4eb7a63189c8980845d80cb96957b9919/vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl" }

[tool.uv]
no-build-isolation-package = ["flash-attn"]

@vadimkantorov

It appears that because of:

we cannot directly use this wheel URL with uv :(

@vadimkantorov

This works. But I don't understand what g930a24144 refers to. It's certainly not a commit hash, because of the g prefix; the commit seems to be this: 930a241

# pyproject.toml

[build-system]
requires = ["setuptools>=65", "wheel"]
build-backend = "setuptools.build_meta"

[project]
name =  "test_vllm"
version = "0.0.0.1"
requires-python = ">=3.12"
dependencies = [
    "vllm",
    "torch>=2.8",
    "flash-attn==2.8.2",
]

[tool.uv.sources]
vllm = { url = "https://wheels.vllm.ai/nightly/vllm-0.10.2rc2.dev39%2Bg930a24144-cp38-abi3-manylinux1_x86_64.whl" }

[tool.uv]
override-dependencies = ["outlines-core==0.2.10"]
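
For completeness, this file can be exercised the same way as the failing attempt earlier (a sketch; run it from the directory containing the pyproject.toml above):

# Sketch: resolve and install into the active environment with uv
uv sync --active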

@vadimkantorov

How do I get a similar URL for 038e9be?


nWEIdia commented Sep 2, 2025

> How do I get a similar URL for 038e9be?

Would it be this one? https://wheels.vllm.ai/nightly/0.10.1rc2.dev397%2Bg038e9be4e-cp38-abi3-manylinux1_x86_64.whl


vadimkantorov commented Sep 2, 2025

Nope, "Not found" :(

Direct S3 paths are more predictable, but until #9244 is fixed, they cannot be used for uv pinning

If one goes to https://wheels.vllm.ai/nightly/vllm/, it contains only two links [screenshot of the directory listing omitted], both for the latest commit, and I'm worried that the next day those URLs will go stale and become 404s.


nWEIdia commented Sep 2, 2025


vadimkantorov commented Sep 2, 2025

Wow, it seems that there was a version bump between:

Indeed, it's hard to predict just by commit hash...

I propose adding one of the complete, concrete pyproject.toml variants below to https://docs.vllm.ai/en/stable/getting_started/installation/gpu.html#install-the-latest-code_1 (uv pip install does not modify pyproject.toml, if I understand correctly).


Current working versions:

1) Using an index:

[build-system]
requires = ["setuptools>=65", "wheel"]
build-backend = "setuptools.build_meta"

[project]
name =  "test_vllm"
version = "0.0.0.1"
requires-python = ">=3.12"
dependencies = [
    "vllm==0.10.2rc2.dev39+g930a24144",
    "torch>=2.8",
    "flash-attn==2.8.2",
]

[tool.uv.sources]
vllm = { index = "vllm_nightly" }

[[tool.uv.index]]
name = "vllm_nightly"
url = "https://wheels.vllm.ai/nightly"

[tool.uv]
override-dependencies = ["outlines-core==0.2.10"]

2) Using a direct wheel path and no index:
[build-system]
requires = ["setuptools>=65", "wheel"]
build-backend = "setuptools.build_meta"

[project]
name =  "test_vllm"
version = "0.0.0.1"
requires-python = ">=3.12"
dependencies = [
    "vllm",
    "torch>=2.8",
    "flash-attn==2.8.2",
]

[tool.uv.sources]
vllm = { url = "https://wheels.vllm.ai/nightly/vllm-0.10.2rc2.dev39%2Bg930a24144-cp38-abi3-manylinux1_x86_64.whl" }

[tool.uv]
override-dependencies = ["outlines-core==0.2.10"]

3) Using a per-commit index, following the recommendation at https://docs.vllm.ai/en/stable/getting_started/installation/gpu.html#install-specific-revisions:
[build-system]
requires = ["setuptools>=65", "wheel"]
build-backend = "setuptools.build_meta"

[project]
name =  "test_vllm"
version = "0.0.0.1"
requires-python = ">=3.12"
dependencies = [
    "vllm",
    "torch>=2.8",
    "flash-attn==2.8.2",
]

[tool.uv.sources]
vllm = { index = "vllm_commit" }

[[tool.uv.index]]
name = "vllm_commit"
url = "https://wheels.vllm.ai/930a24144c073a08cfecabd75a242e713bc4f57e"

[tool.uv]
override-dependencies = ["outlines-core==0.2.10"]

The failing variant: using a direct S3 path with a per-commit hash.

Failing because of:

[build-system]
requires = ["setuptools>=65", "wheel"]
build-backend = "setuptools.build_meta"

[project]
name =  "test_vllm"
version = "0.0.0.1"
requires-python = ">=3.12"
dependencies = [
    "vllm",
    "torch>=2.8",
    "flash-attn==2.8.2",
]

[tool.uv.sources]
vllm = { url = "https://vllm-wheels.s3.amazonaws.com/930a24144c073a08cfecabd75a242e713bc4f57e/vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl" }

[tool.uv]
override-dependencies = ["outlines-core==0.2.10"]


vadimkantorov commented Sep 3, 2025

Also curious whether there is a pip-compatible version of pyproject.toml (or requirements.txt) for pinning vllm to a given commit... The variants above use tool.uv.sources...
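
One pip-compatible possibility (a sketch only, not an officially documented workflow; it assumes the nightly wheel URL used above stays available) is a PEP 508 direct reference in requirements.txt:

# Sketch: pin vllm to a specific nightly wheel via a PEP 508 direct reference
cat > requirements.txt <<'EOF'
vllm @ https://wheels.vllm.ai/nightly/vllm-0.10.2rc2.dev39%2Bg930a24144-cp38-abi3-manylinux1_x86_64.whl
torch>=2.8
EOF
pip install -r requirements.txt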

nopperl pushed a commit to pfnet/vllm that referenced this pull request Sep 3, 2025
Signed-off-by: Huy Do <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
MatthewBonanni pushed a commit to MatthewBonanni/vllm that referenced this pull request Sep 3, 2025
Signed-off-by: Huy Do <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Signed-off-by: Matthew Bonanni <[email protected]>
MatthewBonanni pushed a commit to MatthewBonanni/vllm that referenced this pull request Sep 3, 2025
Signed-off-by: Huy Do <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
842974287 pushed a commit to 842974287/vllm that referenced this pull request Sep 3, 2025
Signed-off-by: Huy Do <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Signed-off-by: Shiyan Deng <[email protected]>
lengrongfu pushed a commit to lengrongfu/vllm that referenced this pull request Sep 4, 2025
Signed-off-by: Huy Do <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Labels
ci/build, deepseek (Related to DeepSeek models), documentation (Improvements or additions to documentation), frontend, llama (Related to Llama models), multi-modality (Related to multi-modality (#4194)), new-model (Requests to new models), performance (Performance-related issues), qwen (Related to Qwen models), ready (ONLY add when PR is ready to merge/full CI is needed), rocm (Related to AMD ROCm), speculative-decoding, structured-output, tool-calling, v1
Projects
Status: Done