
Add PyTorch-native implementation of custom layers #1898


Merged 10 commits into main on Dec 3, 2023

Conversation

WoosukKwon
Collaborator

@WoosukKwon WoosukKwon commented Dec 3, 2023

This PR adds a PyTorch-native implementation of the custom ops to the _forward() method of the corresponding layers. Although the _forward() method is never called at the moment, this change brings the following benefits:

  1. It helps users understand the custom layers.
  2. It can be used for debugging: users can simply replace forward with _forward and inspect the intermediate values.
  3. It reduces redundant code in the tests.
  4. In the future, we can use the PyTorch compiler to generate fused kernels from the PyTorch code instead of relying on the custom ops.
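To illustrate the pattern this PR describes, here is a minimal sketch of a custom layer that carries both a forward() entry point and a PyTorch-native _forward() fallback. The RMSNorm layer shown is a hypothetical stand-in (the class name, fields, and formula are assumptions for illustration, not the exact vLLM code); in vLLM, forward() would dispatch to a fused CUDA custom op, while here it falls back to the native version so the sketch is runnable:

```python
import torch
import torch.nn as nn


class RMSNorm(nn.Module):
    """Illustrative layer with a PyTorch-native _forward() fallback."""

    def __init__(self, hidden_size: int, eps: float = 1e-6) -> None:
        super().__init__()
        self.weight = nn.Parameter(torch.ones(hidden_size))
        self.variance_epsilon = eps

    def _forward(self, x: torch.Tensor) -> torch.Tensor:
        # Pure-PyTorch reference implementation. Swapping forward for
        # _forward lets users step through and inspect intermediates.
        variance = x.pow(2).mean(dim=-1, keepdim=True)
        x = x * torch.rsqrt(variance + self.variance_epsilon)
        return x * self.weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # In the real layer this would call the fused custom op; here we
        # reuse the native path so the example runs without CUDA kernels.
        return self._forward(x)
```

Because _forward() is plain PyTorch, it could also be handed to torch.compile (benefit 4) to generate a fused kernel without a hand-written custom op.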

@WoosukKwon WoosukKwon merged commit 9b29497 into main Dec 3, 2023
@WoosukKwon WoosukKwon deleted the torch-native branch December 3, 2023 05:18
xjpang pushed a commit to xjpang/vllm that referenced this pull request Dec 4, 2023
hongxiayang pushed a commit to hongxiayang/vllm that referenced this pull request Feb 13, 2024