While fine-tuning animate-anything, I found that the gradients of the unfrozen UNet layers (e.g. conv_in) are None after the backward pass. When I print requires_grad for conv_in, the result is True. This means the fine-tuning is not actually updating those layers, so something may be wrong in the training process. What could cause this?
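For reference, this is roughly the check I ran. It is a minimal, self-contained sketch: a toy nn.Conv2d stands in for unet.conv_in, and it shows that with requires_grad=True a parameter's .grad should be populated after backward(). In my actual run, the equivalent check on conv_in prints True for requires_grad but None for .grad.

```python
import torch
import torch.nn as nn

# Toy stand-in for unet.conv_in (hypothetical layer; the real one lives
# inside the animate-anything UNet).
conv_in = nn.Conv2d(4, 8, kernel_size=3, padding=1)
x = torch.randn(1, 4, 16, 16)

out = conv_in(x)
loss = out.mean()
loss.backward()

print(conv_in.weight.requires_grad)  # True, same as in my training run
print(conv_in.weight.grad is None)   # False here: grad is populated
# In my training run, the same check on unet.conv_in gives
# requires_grad == True but .grad is None, i.e. the layer never
# receives a gradient even though it is unfrozen.
```

A None gradient with requires_grad=True usually means the layer's output never reached the loss in the autograd graph (for example, the tensor was detached somewhere, or the forward path bypassed the layer), which is why I suspect the training loop itself.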
Looking forward to your reply!