
Conversation

@Tcc0403 (Contributor) commented Oct 1, 2024

Summary

Resolves #278.

Details

Forward:

$$\begin{aligned}
JSD(X, Y, \beta) &= JSD_{\beta}(P \Vert Q)\\
&= \beta\, KL(P \Vert \beta P + (1-\beta)Q) + (1-\beta)\, KL(Q \Vert \beta P + (1-\beta)Q)\\
&= \sum_i \beta\, P_i Y_i + (1-\beta)\, Q_i X_i - M_i \log M_i
\end{aligned}$$

where $X = \log Q$, $Y = \log P$, and $M = \beta P + (1-\beta) Q$.
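
A minimal, non-fused PyTorch sketch of this forward pass, for reference only (the function name `jsd_forward_reference` and the `beta=0.5` default are illustrative, not the kernel's API):

```python
import torch

def jsd_forward_reference(log_q: torch.Tensor, log_p: torch.Tensor, beta: float = 0.5) -> torch.Tensor:
    """Generalized JSD, summed over the last (vocab) dimension.

    log_q: student log-probabilities (X = log Q)
    log_p: teacher log-probabilities (Y = log P)
    """
    p, q = log_p.exp(), log_q.exp()
    m = beta * p + (1 - beta) * q                 # mixture M = beta*P + (1-beta)*Q
    # sum of beta*P*Y + (1-beta)*Q*X - M*log(M)
    return (beta * p * log_p + (1 - beta) * q * log_q - m * torch.log(m)).sum(dim=-1)
```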

Gradient (with respect to the student log-probabilities $X$):

$$\frac{\partial}{\partial X_i} JSD(X, Y, \beta) = (1-\beta)\, Q_i\, (X_i - \log M_i)$$
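
A quick numerical check of this closed-form gradient against autograd, using the hypothetical `jsd_forward_reference` sketch above (double precision, `beta = 0.5`):

```python
beta = 0.5
log_q = torch.log_softmax(torch.randn(4, 32, dtype=torch.double), dim=-1).requires_grad_()
log_p = torch.log_softmax(torch.randn(4, 32, dtype=torch.double), dim=-1)

jsd_forward_reference(log_q, log_p, beta).sum().backward()

# closed form: (1 - beta) * Q_i * (X_i - log M_i)
p, q = log_p.exp(), log_q.detach().exp()
m = beta * p + (1 - beta) * q
analytic = (1 - beta) * q * (log_q.detach() - torch.log(m))
assert torch.allclose(log_q.grad, analytic)
```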

Testing Done

Benchmark plots (attached): jsd_memory, jsd_speed

  • Hardware Type: H100
  • run make test to ensure correctness
  • run make checkstyle to ensure code style
  • run make test-convergence to ensure convergence

@Tcc0403 Tcc0403 marked this pull request as ready for review October 2, 2024 08:43

```diff
-def forward(self, p, q):
-    return LigerJSDFunction.apply(p, q)
+def forward(self, log_q, log_p):
```
@Tcc0403 (Contributor, Author):
This is the correct order for input and target (student and teacher), respectively. Would it be too confusing?

Collaborator:
Yeah, the name is a bit confusing; we could add some descriptions here to clarify.
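
For what it's worth, a hedged usage sketch of the order being discussed (the `from liger_kernel.transformers import LigerJSD` path and the `beta` keyword are assumptions based on this PR; shapes are illustrative): the first argument is the student's log-probabilities, the second is the teacher's.

```python
import torch
from liger_kernel.transformers import LigerJSD  # import path assumed

jsd = LigerJSD(beta=0.5)  # beta kwarg assumed from this PR's generalized JSD

student_logits = torch.randn(2, 16, 4096)  # illustrative (batch, seq, vocab)
teacher_logits = torch.randn(2, 16, 4096)

# argument order: student (input) first, teacher (target) second
loss = jsd(
    torch.log_softmax(student_logits, dim=-1),  # log_q: student log-probs
    torch.log_softmax(teacher_logits, dim=-1),  # log_p: teacher log-probs
)
```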

@Tcc0403 (Contributor, Author) commented Oct 2, 2024

@qingquansong @yundai424 ready for review!

qingquansong previously approved these changes Oct 2, 2024
@qingquansong (Collaborator) left a comment

LGTM in general! In case you're interested, I think one good piece of future work is to make these KL/JSD losses similar to the fused CE loss: feed the teacher's and student's last projection layers into the kernel and fuse them with the loss. Here the teacher weights do not need gradients, while the student's do.
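
For illustration, a rough, non-fused sketch of what such a fused linear JSD would compute (`linear_jsd_reference` is a hypothetical name, and `jsd_forward_reference` refers to the sketch in the PR description above; the fused kernel would additionally chunk over the sequence so the full logits never need to be materialized, as the fused linear cross-entropy loss does):

```python
import torch
import torch.nn.functional as F

def linear_jsd_reference(student_hidden, student_lm_head, teacher_hidden, teacher_lm_head, beta=0.5):
    """Project hidden states through each model's last linear layer, then apply JSD.

    The teacher projection runs under no_grad (its weights never need gradients);
    only the student projection participates in the backward pass.
    """
    with torch.no_grad():
        log_p = F.log_softmax(teacher_hidden @ teacher_lm_head.t(), dim=-1)  # teacher log-probs
    log_q = F.log_softmax(student_hidden @ student_lm_head.t(), dim=-1)      # student log-probs
    return jsd_forward_reference(log_q, log_p, beta).mean()
```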


@ByronHsu (Contributor) commented Oct 2, 2024

awesome work! waiting for the final nit review

@Tcc0403 Tcc0403 requested a review from qingquansong October 2, 2024 21:40
lancerts previously approved these changes Oct 2, 2024
@lancerts lancerts enabled auto-merge (squash) October 2, 2024 21:42
@Tcc0403 (Contributor, Author) commented Oct 2, 2024

> I think one good future work is to make those KL or JSD losses similar to the fused CE loss: feed teacher and student model last projection layer to the kernel and fuse it with the losses. Here teacher weight does not need grad and student will need grad.

@qingquansong sure, I'm in.

@Tcc0403 (Contributor, Author) commented Oct 2, 2024

Forgot to add JSD to the README and liger_kernel.transformers.

auto-merge was automatically disabled October 2, 2024 22:38

Head branch was pushed to by a user without write access

@lancerts lancerts enabled auto-merge (squash) October 2, 2024 23:15
@lancerts lancerts merged commit 6817c2d into linkedin:main Oct 3, 2024
3 checks passed
@Tcc0403 Tcc0403 deleted the jsd-beta branch December 1, 2024 03:13