
Conversation

hako-mikan
Contributor

This addresses the problem of how hard it is to reflect changes to LoRA weights in the model.
When modifying LoRA weights after they have been loaded, I want to use refresh, but it won't update if the number of LoRAs hasn't changed.
To work around this, LoRA Block Weight tricks the refresh function by renaming the LoRA, which isn't ideal.
So this change forces an update even when the type and number of LoRAs are unchanged.
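For illustration only, here is a minimal Python sketch (hypothetical names, not the actual Forge code) of why a guard that compares only the loaded LoRA names skips in-place weight edits, and how an extended check in the spirit of this change forces the update:

```python
from dataclasses import dataclass

@dataclass
class LoraPatch:
    """Hypothetical stand-in for one loaded LoRA entry (name + strength)."""
    name: str
    weight: float

def needs_refresh_count_only(current: list[LoraPatch], requested: list[LoraPatch]) -> bool:
    # Guard as described above: refresh only when the set of LoRAs
    # changes, so an in-place weight edit is silently skipped.
    return [p.name for p in current] != [p.name for p in requested]

def needs_refresh_forced(current: list[LoraPatch], requested: list[LoraPatch]) -> bool:
    # Extended guard: also compare per-LoRA weights, so editing a weight
    # triggers a re-apply even when names and count are identical.
    if len(current) != len(requested):
        return True
    return any(a.name != b.name or a.weight != b.weight
               for a, b in zip(current, requested))

loaded = [LoraPatch("style_lora", 0.8)]
edited = [LoraPatch("style_lora", 0.5)]  # same LoRA, new weight

print(needs_refresh_count_only(loaded, edited))  # False -> stale weights stay applied
print(needs_refresh_forced(loaded, edited))      # True  -> model is updated
```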

catboxanon merged commit daee4c0 into lllyasviel:main on Jan 28, 2025