Hi,
I would like to use your library to implement some transformer layers. Nothing complicated: just cross-attention with an FFN, plus RoPE applied to the queries and keys. Here is an example of my code:
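(The original snippet did not come through here, so below is a minimal plain-PyTorch sketch of the kind of layer I mean. `CrossAttentionBlock`, `apply_rope`, and `rope_cos_sin` are placeholder names of my own, not your library's API.)

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def rope_cos_sin(seq_len, head_dim, device):
    # Standard RoPE frequency table: (seq_len, head_dim // 2) cos/sin tensors.
    inv_freq = 1.0 / (10000 ** (torch.arange(0, head_dim, 2, device=device) / head_dim))
    t = torch.arange(seq_len, device=device, dtype=torch.float32)
    freqs = torch.einsum("i,j->ij", t, inv_freq)
    return freqs.cos(), freqs.sin()


def apply_rope(x, cos, sin):
    # x: (batch, heads, seq, head_dim); rotate the two channel halves.
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat([x1 * cos - x2 * sin, x2 * cos + x1 * sin], dim=-1)


class CrossAttentionBlock(nn.Module):
    def __init__(self, dim, heads=8):
        super().__init__()
        self.heads = heads
        self.head_dim = dim // heads
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_kv = nn.Linear(dim, dim * 2, bias=False)
        self.to_out = nn.Linear(dim, dim, bias=False)
        self.ffn = nn.Sequential(
            nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim)
        )
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x, context):
        b, n, _ = x.shape
        q = self.to_q(self.norm1(x))
        k, v = self.to_kv(context).chunk(2, dim=-1)
        # Split heads: (b, seq, dim) -> (b, heads, seq, head_dim).
        q, k, v = (t.view(b, -1, self.heads, self.head_dim).transpose(1, 2)
                   for t in (q, k, v))
        # RoPE on the projected queries and keys.
        cos_q, sin_q = rope_cos_sin(q.shape[2], self.head_dim, x.device)
        cos_k, sin_k = rope_cos_sin(k.shape[2], self.head_dim, x.device)
        q = apply_rope(q, cos_q, sin_q)
        k = apply_rope(k, cos_k, sin_k)
        out = F.scaled_dot_product_attention(q, k, v)
        out = out.transpose(1, 2).reshape(b, n, -1)
        x = x + self.to_out(out)
        return x + self.ffn(self.norm2(x))
```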
On top of that, I need to add some additional embeddings (not positional) to the projected Q and K values, but before RoPE is applied. I have carefully checked the forward parameters in the code and so far have not found an option for this. Could you please guide me?
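To make the request concrete, here is roughly the hook I am imagining, sketched as a variant of the block above. `q_extra` and `k_extra` are hypothetical parameters I made up for illustration, not anything I found in your forward signature:

```python
class CrossAttentionBlockWithExtras(CrossAttentionBlock):
    # Same as the sketch above, but with hypothetical q_extra / k_extra
    # inputs: extra (non-positional) embeddings added to the projected
    # Q and K *before* RoPE is applied. Each should be broadcastable to
    # (batch, heads, seq, head_dim).
    def forward(self, x, context, q_extra=None, k_extra=None):
        b, n, _ = x.shape
        q = self.to_q(self.norm1(x))
        k, v = self.to_kv(context).chunk(2, dim=-1)
        q, k, v = (t.view(b, -1, self.heads, self.head_dim).transpose(1, 2)
                   for t in (q, k, v))
        if q_extra is not None:
            q = q + q_extra  # injected after projection, before rotation
        if k_extra is not None:
            k = k + k_extra
        cos_q, sin_q = rope_cos_sin(q.shape[2], self.head_dim, x.device)
        cos_k, sin_k = rope_cos_sin(k.shape[2], self.head_dim, x.device)
        q = apply_rope(q, cos_q, sin_q)
        k = apply_rope(k, cos_k, sin_k)
        out = F.scaled_dot_product_attention(q, k, v)
        out = out.transpose(1, 2).reshape(b, n, -1)
        x = x + self.to_out(out)
        return x + self.ffn(self.norm2(x))
```

If there is already a supported way to inject something at that point (a callback, a hook, or an extra forward argument), that would work just as well for me.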