Update PyTorch to 2.8.0 #20358
Conversation
Summary of Changes
Hello @huydhn, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request initiates the process of upgrading the project's PyTorch dependency to version 2.8.0, targeting its release candidate builds. This involves updating not only the core PyTorch version but also ensuring compatibility across related libraries like Torchaudio and Torchvision, and adjusting build and test environments to source these new versions from PyTorch's test indices. The changes are foundational, impacting various dependency management files and build configurations.
Highlights
- PyTorch Version Update: The core PyTorch dependency has been updated from version 2.7.0 to 2.8.0 across pyproject.toml, requirements/build.txt, requirements/cpu.txt, requirements/cuda.txt, requirements/rocm-build.txt, and requirements/test.in.
- Associated Library Updates: Torchaudio and Torchvision have been updated to their 2.8.0 and 0.23.0 compatible versions, respectively, to align with the new PyTorch release.
- Test Index Configuration: Build and test configurations, including .pre-commit-config.yaml, docker/Dockerfile, and various requirements/*.txt files, have been adjusted to pull PyTorch and related packages from the test release candidate index URLs (e.g., https://download.pytorch.org/whl/test/cu128).
- CUDA and ROCm Dependency Alignment: Specific CUDA-related dependencies in requirements/test.txt (e.g., nvidia-cublas, nvidia-cudnn) have been updated to align with the PyTorch 2.8.0 release candidate, and ROCm versions in requirements/rocm-build.txt are updated to 6.3.
- FlashInfer and Triton Updates: The FlashInfer wheel URL in docker/Dockerfile has been updated to reflect PyTorch 2.8, and the Triton version in requirements/test.txt has been bumped from 3.3.0 to 3.4.0.
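As a rough illustration only (a sketch based on the version numbers and index URL named in the summary above, not the literal diff contents), pulling the corresponding release-candidate stack by hand would look something like this:

# Install the PyTorch 2.8.0 release-candidate stack from the test index.
# The PR pins these versions in the requirements/*.txt files instead of
# installing them ad hoc like this.
pip install torch==2.8.0 torchvision==0.23.0 torchaudio==2.8.0 \
    --index-url https://download.pytorch.org/whl/test/cu128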
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.
Feature | Command | Description
---|---|---
Code Review | /gemini review | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state.
Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments.
Help | /gemini help | Displays a list of available commands.
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.
You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Footnotes
1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.
Code Review
This pull request updates PyTorch to version 2.8.0 and its related dependencies like torchvision, torchaudio, and Triton. The changes are mostly version bumps in requirement files and configuration files to use the PyTorch test package index. The changes look consistent with the goal of the PR. I've found one minor issue related to redundant configuration that could be improved for better maintainability.
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a small and essential subset of CI tests runs automatically to quickly catch errors. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge. 🚀 |
This pull request has merge conflicts that must be resolved before it can be merged. |
Signed-off-by: Huy Do <[email protected]>
Signed-off-by: Huy Do <[email protected]>
Signed-off-by: Huy Do <[email protected]>
Signed-off-by: Huy Do <[email protected]>
@vadimkantorov yes the nightlies from now on should include PyTorch 2.8 |
Are nightlies deleted over time? I'm still struggling to figure out a URL; the docs should provide concrete URL examples, not just templates using COMMIT and VERSION. Could you please advise what the URL is or where to find all currently available/published nightly wheel URLs? Thanks :) |
If you just want the latest nightly follow the instructions at https://docs.vllm.ai/en/latest/getting_started/installation/gpu.html#install-the-latest-code (the docs link you provided is ancient) |
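For anyone reading along, the linked page boils down to roughly the following (a hedged sketch; the --pre flag and the wheels.vllm.ai/nightly index are assumed from the current installation docs, so double-check the page for the authoritative command):

# Install the latest nightly vLLM from the nightly wheel index
# (nightly versions are pre-releases, hence --pre).
pip install -U vllm --pre --extra-index-url https://wheels.vllm.ai/nightly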
I'd like to get the direct wheel link (e.g. for placing into a uv pyproject). This readme just shows a pip install command, and another readme shows a template for a link, but no concrete final example (otherwise it's hard to grasp the correct version format: does it include dev+commit or not? Commit in short or long form?).
And also: how long is such a wheel kept online? |
Sorry, I don't know the answer to that question |
You can use this direct URL:

wget https://vllm-wheels.s3.amazonaws.com/nightly/vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl

wei:~$ pip list | grep vllm
pipdeptree -p vllm | grep torch

You can also:

unzip vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl

(wheel unpack does not seem to work) |
@vadimkantorov the commands are in https://blog.vllm.ai/2025/01/10/dev-experience.html |
@nWEIdia Do you know how long these wheels stay up? Basically, I'd like to pin to a particular nightly commit. The wheel at
What would be a sustainable way of pinning to a given particular commit in |
I don't know how long the nightly wheels would stay up. |
@youkaichao strangely these do not include |
Commit-wise, they are also stored in:

- https://vllm-wheels.s3.amazonaws.com/nightly/vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl
+ https://vllm-wheels.s3.amazonaws.com/COMMIT_HASH/vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl

We currently don't have a TTL for wheel artifacts. |
Thanks @simon-mo ! You are right, I can click and download e.g. this file: https://vllm-wheels.s3.amazonaws.com/038e9be4eb7a63189c8980845d80cb96957b9919/vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl |
I've tried, and doing
# pyproject.toml
[build-system]
requires = ["setuptools>=65", "wheel"]
build-backend = "setuptools.build_meta"
[project]
name = "test_vllm"
version = "0.0.0.1"
requires-python = ">=3.12"
dependencies = [
"vllm",
"torch>=2.8",
"flash-attn==2.8.2",
]
[tool.uv.sources]
vllm = { url = "https://vllm-wheels.s3.amazonaws.com/038e9be4eb7a63189c8980845d80cb96957b9919/vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl" }
[tool.uv]
no-build-isolation-package = ["flash-attn"] |
It appears that, because of this issue, we cannot directly use this wheel URL with uv :( |
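A hedged guess at what is going wrong here (assuming the problem is the placeholder version in the S3 filename): the wheel is named vllm-1.0.0.dev-... while its internal metadata carries the real dev version, and uv rejects the mismatch when given the URL directly. One way to compare the two for a downloaded wheel:

# Print the Version field from the wheel's METADATA; the filename says
# 1.0.0.dev, so compare it with what this reports.
unzip -p vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl '*.dist-info/METADATA' | grep '^Version:'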
This works. But I don't understand what
# pyproject.toml
[build-system]
requires = ["setuptools>=65", "wheel"]
build-backend = "setuptools.build_meta"
[project]
name = "test_vllm"
version = "0.0.0.1"
requires-python = ">=3.12"
dependencies = [
"vllm",
"torch>=2.8",
"flash-attn==2.8.2",
]
[tool.uv.sources]
vllm = { url = "https://wheels.vllm.ai/nightly/vllm-0.10.2rc2.dev39%2Bg930a24144-cp38-abi3-manylinux1_x86_64.whl" }
[tool.uv]
override-dependencies = ["outlines-core==0.2.10"] |
How do I get the similar URL for 038e9be? |
Would it be this one? https://wheels.vllm.ai/nightly/0.10.1rc2.dev397%2Bg038e9be4e-cp38-abi3-manylinux1_x86_64.whl |
Nope, "Not found" :( Direct S3 paths are more predictable, but until #9244 is fixed, they cannot be used for uv pinning If one goes to https://wheels.vllm.ai/nightly/vllm/, it contains only two links: for the latest commit, and I'm worried that the next day the URLs will go stale and become 404 |
Sorry I missed vllm- in there, please try again with https://wheels.vllm.ai/nightly/vllm-0.10.1rc2.dev397%2Bg038e9be4e-cp38-abi3-manylinux1_x86_64.whl |
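In case it helps with the filename guessing above: assuming https://wheels.vllm.ai/nightly/vllm/ is a standard PEP 503 simple-index HTML page (which is what browsing it suggests, per the earlier comment), the wheel filenames currently published there can be listed with something like:

# List the vllm wheel filenames currently exposed by the nightly index page.
curl -s https://wheels.vllm.ai/nightly/vllm/ | grep -o 'vllm-[^"<]*\.whl' | sort -u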
Wow, it seems that there was a version bump between the two wheels.

Indeed, it's hard to predict just by commit hash... I propose to have one of these complete, concrete variants in the docs.

Current working versions:

1) Using the nightly index:

[build-system]
requires = ["setuptools>=65", "wheel"]
build-backend = "setuptools.build_meta"
[project]
name = "test_vllm"
version = "0.0.0.1"
requires-python = ">=3.12"
dependencies = [
"vllm==0.10.2rc2.dev39+g930a24144",
"torch>=2.8",
"flash-attn==2.8.2",
]
[tool.uv.sources]
vllm = { index = "vllm_nightly" }
[[tool.uv.index]]
name = "vllm_nightly"
url = "https://wheels.vllm.ai/nightly"
[tool.uv]
override-dependencies = ["outlines-core==0.2.10"]

2) Using the direct nightly wheel URL:

[build-system]
requires = ["setuptools>=65", "wheel"]
build-backend = "setuptools.build_meta"
[project]
name = "test_vllm"
version = "0.0.0.1"
requires-python = ">=3.12"
dependencies = [
"vllm",
"torch>=2.8",
"flash-attn==2.8.2",
]
[tool.uv.sources]
vllm = { url = "https://wheels.vllm.ai/nightly/vllm-0.10.2rc2.dev39%2Bg930a24144-cp38-abi3-manylinux1_x86_64.whl" }
[tool.uv]
override-dependencies = ["outlines-core==0.2.10"]

3) Using the per-commit index:

[build-system]
requires = ["setuptools>=65", "wheel"]
build-backend = "setuptools.build_meta"
[project]
name = "test_vllm"
version = "0.0.0.1"
requires-python = ">=3.12"
dependencies = [
"vllm",
"torch>=2.8",
"flash-attn==2.8.2",
]
[tool.uv.sources]
vllm = { index = "vllm_commit" }
[[tool.uv.index]]
name = "vllm_commit"
url = "https://wheels.vllm.ai/930a24144c073a08cfecabd75a242e713bc4f57e"
[tool.uv]
override-dependencies = ["outlines-core==0.2.10"]

The failing version: using the direct S3 path with the per-commit hash, failing because of the uv pinning issue mentioned above (#9244):

[build-system]
requires = ["setuptools>=65", "wheel"]
build-backend = "setuptools.build_meta"
[project]
name = "test_vllm"
version = "0.0.0.1"
requires-python = ">=3.12"
dependencies = [
"vllm",
"torch>=2.8",
"flash-attn==2.8.2",
]
[tool.uv.sources]
vllm = { url = "https://vllm-wheels.s3.amazonaws.com/930a24144c073a08cfecabd75a242e713bc4f57e/vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl" }
[tool.uv]
override-dependencies = ["outlines-core==0.2.10"] |
Also curious if there is a version of |
Signed-off-by: Huy Do <[email protected]> Co-authored-by: Michael Goin <[email protected]>
Signed-off-by: Huy Do <[email protected]> Co-authored-by: Michael Goin <[email protected]> Signed-off-by: Matthew Bonanni <[email protected]>
Signed-off-by: Huy Do <[email protected]> Co-authored-by: Michael Goin <[email protected]>
Signed-off-by: Huy Do <[email protected]> Co-authored-by: Michael Goin <[email protected]> Signed-off-by: Shiyan Deng <[email protected]>
Signed-off-by: Huy Do <[email protected]> Co-authored-by: Michael Goin <[email protected]>
Essential Elements of an Effective PR Description Checklist
Purpose
Update vLLM to PyTorch 2.8.0 now that it has been released
Test Plan
CI
Test Result
There are some failures; I'm trying to evaluate each one to confirm that they are existing failures from main.
- TP_SIZE=1 DP_SIZE=2 pytest -v -s v1/test_async_llm_dp.py is passing locally on my local H100
- TP_SIZE=2 DP_SIZE=2 pytest -v -s v1/test_async_llm_dp.py is also passing locally
- mamba_ssm package (probably after the recent 2.7.1 update), for example https://buildkite.com/vllm/ci/builds/26325#019887d6-fd23-447d-8e2c-067a04a33021/200-3499

The benchmark results look normal.