[Performance] Fix queuing in llm wrappers #3125


Merged: 1 commit merged into main on Aug 10, 2025

Conversation

@vmoens (Collaborator) commented on Aug 8, 2025

No description provided.

pytorch-bot bot commented on Aug 8, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/rl/3125

Note: Links to docs will display an error until the docs builds have been completed.

❌ 5 New Failures, 1 Cancelled Job, 13 Unrelated Failures

As of commit 5233ccf with merge base 0592fdc:

NEW FAILURES - The following jobs have failed:

CANCELLED JOB - The following job was cancelled. Please retry:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla bot added the CLA Signed label on Aug 8, 2025
@vmoens merged commit d8dde2e into main on Aug 10, 2025
84 of 103 checks passed
Labels
CLA Signed, llm/api, llm/perf

1 participant