Conversation

@gforsyth (Contributor) commented May 8, 2025

dgl doesn't have Python 3.13 builds, and it's also going to be dropped in 25.08, so we're just not including it in the 3.13 migration.

Part of issue: rapidsai/build-planning#120

@gforsyth requested a review from a team as a code owner May 8, 2025 21:22
@gforsyth added the improvement (Improves an existing functionality) and non-breaking (Introduces a non-breaking change) labels May 8, 2025
@gforsyth requested a review from bdice May 8, 2025 21:22
@alexbarghi-nv requested review from a team as code owners May 9, 2025 06:49
@alexbarghi-nv (Member)

I modified dependencies.yaml to deal with build issues where it still attempted to install DGL on Python 3.13.
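
For context, restricting a dependency by Python version in dependencies.yaml typically looks something like the sketch below (rapids-dependency-file-generator format). The group name `depends_on_dgl` and the bare `dgl` requirement are illustrative placeholders, not the actual entries changed in this PR.

```yaml
dependencies:
  depends_on_dgl:              # illustrative group name
    specific:
      - output_types: [conda]
        matrices:
          # Only pull in dgl where builds exist (Python <= 3.12 here).
          - matrix:
              py: "3.12"
            packages:
              - dgl
          # Fallback matrix (e.g. Python 3.13): install nothing.
          - matrix:
            packages: []
```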

@@ -68,8 +68,9 @@ jobs:
     secrets: inherit
     uses: rapidsai/shared-workflows/.github/workflows/[email protected]
     with:
-      # This selects "ARCH=amd64 + the latest supported Python + CUDA".
-      matrix_filter: map(select(.ARCH == "amd64")) | group_by(.CUDA_VER|split(".")|map(tonumber)|.[0]) | map(max_by([(.PY_VER|split(".")|map(tonumber)), (.CUDA_VER|split(".")|map(tonumber))]))
+      # This selects "ARCH=amd64 + Python 3.12 + CUDA".
Contributor

This would drop Python 3.10 and 3.11, which I don’t think we want? Just say “not 3.13” here.

Contributor

Same in pr.yaml.

Contributor Author

It was already dropping Python 3.10 and 3.11 with the max_by(.PY_VER) -- I'm just telling it to use 3.12 instead of the new max of 3.13
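
For illustration, a filter that pins Python 3.12 rather than taking the maximum PY_VER could look like the sketch below; this is an assumption about the shape of the change, not the exact line that was merged.

```yaml
with:
  # This selects "ARCH=amd64 + Python 3.12 + CUDA": pin the Python version
  # explicitly and take only the max CUDA version per CUDA major.
  matrix_filter: map(select(.ARCH == "amd64" and .PY_VER == "3.12")) | group_by(.CUDA_VER|split(".")|map(tonumber)|.[0]) | map(max_by(.CUDA_VER|split(".")|map(tonumber)))
```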

Contributor

Ahhhh I missed that this is a pure wheel. (See pure-wheel: true below). That's fine, then.

@alexbarghi-nv (Member) left a comment

👍

Thanks @gforsyth and @bdice for getting this done

@@ -3,7 +3,11 @@
 
 set -euo pipefail
 
 package_dir="python/cugraph-dgl"
+if [ "$RAPIDS_PY_VERSION" != "3.13" ]; then
Contributor

Shouldn't this be unreachable, because we skip the wheel build job for Python 3.13?

We also skip ARM but we don't have exceptions in the script for that. I would favor just leaving this script as-is and letting the filtering be done at the GHA workflow level.

The same applies to all bash scripts in this PR.
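
For illustration, "filtering at the GHA workflow level" would mean something like the sketch below in the workflow YAML; the job name is hypothetical and the jq expression is an assumption, not the merged change.

```yaml
wheel-build-cugraph-dgl:   # hypothetical job name for the cugraph-dgl pure wheel
  with:
    # Drop Python 3.13 from the matrix so the job is never scheduled for it
    # and the build script needs no special-casing.
    matrix_filter: map(select(.PY_VER != "3.13"))
```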

@gforsyth (Contributor Author)

Removed the 3.13 branching stuff from all the bash scripts, ready for another look @bdice

@@ -46,7 +46,7 @@ set +e
 # FIXME: TEMPORARILY disable MG PropertyGraph tests (experimental) tests and
 # bulk sampler IO tests (hangs in CI)
 
-if [[ "${RUNNER_ARCH}" != "ARM64" ]]; then
+if [[ "${RUNNER_ARCH}" != "ARM64" && "${RAPIDS_PY_VERSION}" != "3.13" ]]; then
Contributor

Can we skip these with a matrix filter instead? I think currently these spin up a job (for ARM) that immediately ends. Let’s avoid that.
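
As a rough sketch of that suggestion (an assumption, not the merged change), the test-job matrix could exclude both arm64 and Python 3.13 up front, mirroring the amd64 filter shown earlier, so no runner is started only to exit immediately:

```yaml
with:
  matrix_filter: map(select(.ARCH == "amd64" and .PY_VER != "3.13"))
```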

Contributor

No material changes. Please revert this file.

@rapidsai deleted a comment from alexbarghi-nv May 12, 2025
@bdice (Contributor) left a comment

@gforsyth I deleted the merge comment so that I can pre-emptively approve this. Please look at my final round of comments and trigger a merge when you are ready.

@gforsyth (Contributor Author)

/merge

@rapids-bot (bot) merged commit 616f1c7 into rapidsai:branch-25.06 May 12, 2025
75 checks passed
@gforsyth deleted the skip_dgl_313 branch May 12, 2025 20:02
@jakirkham (Member) left a comment

Thanks Gil for working on this! Also thanks Bradley and Alex for reviewing 🙏

Commented on one small change below that caught my eye

Comment on lines 19 to 23
if [[ "${PKG_CUDA_VER_MAJOR}" == "12" ]]; then
PYTORCH_CUDA_VER="121"
PYTORCH_CUDA_VER="121"
else
PYTORCH_CUDA_VER=$PKG_CUDA_VER
PYTORCH_CUDA_VER=$PKG_CUDA_VER
fi
Member

It looks like this dedent change snuck in
