Thank you for your great work. I am running backdoor-attack experiments with your code (the attack is applied in the first stage) and found that the attack effect does not diminish during subsequent incremental learning. To investigate, I checked the model weights at each stage and found that they hardly change.
Here is the code I added to train_task_based.py:
```python
# Loop over all contexts.
for context, train_dataset in enumerate(train_datasets, 1):
    # Weights at the start of the stage.
    weight_t = model.state_dict()['classifier.linear.weight']
    weight_fc2_t = model.state_dict()["fcE.fcLayer2.linear.weight"]

    # ... (training on this context happens here) ...

    # Weights at the end of the stage.
    weight_n = model.state_dict()['classifier.linear.weight']
    weight_fc2_n = model.state_dict()["fcE.fcLayer2.linear.weight"]
    # print("Model parameters at the end of the stage:", weight_n)
    # weight_difference = weight_n - weight_t
    weight_difference = weight_fc2_n - weight_fc2_t
    print(weight_difference.sum())
```
Can you help me understand why the model's weights hardly change during incremental learning?
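For reference, here is a self-contained sketch of the per-stage check I am trying to run. The layer keys are the ones from the snippet above, and `train_stage` is just a placeholder for whatever training routine `train_task_based.py` runs for each context. The `.clone()` calls keep the "before" snapshot as an independent copy, since the tensors returned by `state_dict()` share storage with the live parameters:

```python
import torch

def snapshot(model, keys):
    """Return independent copies of the selected parameter tensors.

    state_dict() values share storage with the live parameters, so
    .detach().clone() is needed to keep a fixed "before" copy.
    """
    sd = model.state_dict()
    return {k: sd[k].detach().clone() for k in keys}

# Layer keys taken from the snippet above.
tracked_keys = ["classifier.linear.weight", "fcE.fcLayer2.linear.weight"]

for context, train_dataset in enumerate(train_datasets, 1):
    before = snapshot(model, tracked_keys)

    # Placeholder for the per-context training that train_task_based.py performs.
    train_stage(model, train_dataset, context)

    after = snapshot(model, tracked_keys)
    for k in tracked_keys:
        diff = (after[k] - before[k]).abs()
        print(f"context {context} | {k}: "
              f"sum |delta| = {diff.sum().item():.6e}, max |delta| = {diff.max().item():.6e}")
```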