
Conversation


@pkhk-1 pkhk-1 commented Feb 19, 2025

No description provided.


paddle-bot bot commented Feb 19, 2025

Thanks for your contribution!

)


class Statistical(object):

The statistics logic and the callback can be factored out separately, since they are a common part, and checks should be added for the model's input_shape and efficient_token_count.
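A rough sketch of what that shared piece could look like (names such as TokenStatCallback, count_effective_tokens, and the state.num_effective_tokens field are illustrative assumptions, not existing PaddleNLP attributes):

```python
import time

from paddlenlp.trainer import TrainerCallback


def count_effective_tokens(input_ids, pad_token_id):
    """Count non-padding tokens and sanity-check the input shape."""
    assert input_ids.ndim == 2, f"expected [batch_size, seq_len] input_ids, got shape {input_ids.shape}"
    effective = int((input_ids != pad_token_id).astype("int64").sum().item())
    # The effective token count can never exceed the capacity implied by input_shape.
    assert effective <= input_ids.shape[0] * input_ids.shape[1]
    return effective


class TokenStatCallback(TrainerCallback):
    """Accumulates effective token counts and adds tokens/sec to the trainer logs."""

    def __init__(self, skip_steps=1):
        self.skip_steps = skip_steps  # warm-up steps excluded from the statistics
        self.total_tokens = 0
        self.start_time = None

    def on_step_end(self, args, state, control, **kwargs):
        # Hypothetical hand-off point: the training loop would set
        # state.num_effective_tokens from count_effective_tokens() each step.
        tokens = getattr(state, "num_effective_tokens", 0)
        if state.global_step <= self.skip_steps:
            self.start_time = time.time()
            return
        self.total_tokens += tokens

    def on_log(self, args, state, control, logs=None, **kwargs):
        if logs is not None and self.start_time is not None and state.global_step > self.skip_steps:
            elapsed = max(time.time() - self.start_time, 1e-6)
            logs["effective_tokens_per_second"] = self.total_tokens / elapsed
```

Both the LLaVA benchmark and other scripts could then register it with trainer.add_callback(TokenStatCallback(...)) instead of keeping the logic inline.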

mem_gpu = (
    train_result.metrics["train_mem_gpu_peaked_delta"] + train_result.metrics["train_mem_gpu_alloc_delta"]
)
logger.info(f'Memory_allocated:{memory_allocated}GB, max_memory_allocated: {max_memory_allocated}GB, memory_reserved:{memory_reserved}GB, max_memory_reserved: {max_memory_reserved}GB \n')

These performance figures are already provided by the callback's on_log method; do they really need to be logged again here?
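If the numbers are still wanted alongside the other metrics, one option (a sketch only; the metric key names are illustrative) is to compute them once in the shared callback's on_log using Paddle's CUDA memory APIs rather than re-logging them in the training script:

```python
import paddle
from paddlenlp.trainer import TrainerCallback


class GPUMemoryLogCallback(TrainerCallback):
    """Attach current/peak GPU memory (in GB) to every logged metrics dict."""

    def on_log(self, args, state, control, logs=None, **kwargs):
        if logs is None or not paddle.device.is_compiled_with_cuda():
            return
        gb = 1024 ** 3
        logs["memory_allocated_gb"] = paddle.device.cuda.memory_allocated() / gb
        logs["max_memory_allocated_gb"] = paddle.device.cuda.max_memory_allocated() / gb
        logs["memory_reserved_gb"] = paddle.device.cuda.memory_reserved() / gb
        logs["max_memory_reserved_gb"] = paddle.device.cuda.max_memory_reserved() / gb
```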

from paddlenlp.trainer.plugins.timer import get_timers


PADDLE_EMA_WEIGHTS_NAME = "ema_state.pdparams"

Redundant variable.

record the information of saving the model.
"""
pass
# end_save_time = time.time()

The commented-out code can be removed.

def __init__(self, **kwargs):
    super().__init__(**kwargs)
    # self.benchmark_callback = BenchmarkCallback(self, self.args.save_steps, skip_step=self.args.benchmark_skip_steps)
    self.benchmark_callback = BenchmarkCallback(self, 12, 1)

Could these parameters be made configurable?
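One way to do that, sketched with hypothetical option names (benchmark_save_steps and benchmark_skip_steps are not existing arguments), is to expose the values on a TrainingArguments subclass and pass them through instead of the hard-coded 12 and 1:

```python
from dataclasses import dataclass, field

from paddlenlp.trainer import TrainingArguments


@dataclass
class BenchmarkTrainingArguments(TrainingArguments):
    """TrainingArguments extended with the knobs consumed by BenchmarkCallback."""

    benchmark_save_steps: int = field(
        default=12, metadata={"help": "Steps between benchmark statistics dumps."}
    )
    benchmark_skip_steps: int = field(
        default=1, metadata={"help": "Warm-up steps excluded from benchmark statistics."}
    )
```

The constructor above could then mirror the commented-out line, e.g. BenchmarkCallback(self, self.args.benchmark_save_steps, skip_step=self.args.benchmark_skip_steps).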

@pkhk-1 pkhk-1 changed the title from "[done] add token log for llava bench" to "[wip] add token log for llava bench" Feb 20, 2025