
Saving models/parameters in fault tolerant training #2638

@typhoonzero

Description


Related PR: #2634

In a discussion with @helinwang this morning, the previous idea was to save parameters to a distributed storage service by merging the parameters from all pservers.

In general there are two ways:

  • save a parameter snapshot on each pserver, then merge the snapshots together
    • recommended method: an API call Save triggers snapshot saving; each pserver saves its parameters to the distributed filesystem, which also saves the pserver status for recovery. Users can then run a "model merge tool" to merge all the parts of the model before using it.
  • save the merged parameters (the model) from one trainer
    • trainers fetch the whole model every iteration, so saving the model from a trainer does not need a "merge" step. The model is saved every pass.
    • how to select exactly one trainer to save the model:
      • use an etcd distributed lock or transaction
      • use something like hash(trainer_ip) % trainer_count == 0
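The hash-based selection above could be sketched as follows. This is a minimal illustration, not the actual PaddlePaddle code: the function name is hypothetical, and a stable digest (md5) stands in for hash(), since Python's built-in hash() is salted per process and different trainers would disagree on its value.

```python
import hashlib

def should_save_model(trainer_ip, trainer_count):
    """Decide on each trainer, with no coordination, whether this
    trainer should save the model.

    Every trainer evaluates the same deterministic predicate, so all
    trainers agree on who saves. Note this only picks roughly one
    trainer: depending on the IPs, zero or several may land in bucket 0,
    which is why the etcd lock is the more robust option.
    """
    # Stable digest of the trainer's IP, reduced to a bucket index.
    digest = hashlib.md5(trainer_ip.encode("utf-8")).hexdigest()
    return int(digest, 16) % trainer_count == 0
```

With trainer_count == 1 the predicate is always true, so the single-trainer case degenerates correctly.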

Notice: when users want to stop the training and use the current output model, they can stop the job right away, because the job saves the model of every pass into the distributed storage service.
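For the pserver-side approach, the "model merge tool" could be sketched as below. This is an assumption-laden illustration: it supposes each pserver snapshot deserializes to a dict of {parameter_name: values} and that parameters are partitioned by name across pservers, so merging is a disjoint union.

```python
def merge_model_shards(shards):
    """Merge per-pserver parameter shards into one model dict.

    `shards` is a list of dicts, one per pserver snapshot. Parameters
    are assumed to be partitioned by name, so a name appearing in two
    shards indicates a corrupt or mismatched snapshot set.
    """
    merged = {}
    for shard in shards:
        for name, values in shard.items():
            if name in merged:
                raise ValueError("duplicate parameter %r across shards" % name)
            merged[name] = values
    return merged
```

For example, merge_model_shards([{"w1": [1, 2]}, {"w2": [3]}]) yields a single dict holding both w1 and w2, which can then be serialized as the usable model.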
