Conversation

@moyix commented Jan 22, 2021

The `iteration` argument eventually gets passed to join(), which expects a string. This fix changes the argument to None, which lets the name get generated correctly as something like `global_step1000`.
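
For context, here is a minimal sketch of the failure mode and the fix. It assumes the example script was passing the integer `iteration` as the tag argument to DeepSpeed's `save_checkpoint()` and that checkpoint paths are assembled with `os.path.join()`; the simplified `save_checkpoint` below is illustrative, not DeepSpeed's actual implementation.

```python
import os

# Illustrative stand-in for a save_checkpoint(save_dir, tag=None) API:
# when tag is None, a string tag like "global_step<step>" is generated,
# and the checkpoint path is built with os.path.join().
def save_checkpoint(save_dir, tag=None, global_steps=1000):
    if tag is None:
        tag = f"global_step{global_steps}"  # e.g. "global_step1000"
    # os.path.join expects string components; an int tag raises TypeError
    return os.path.join(save_dir, tag)

iteration = 1000

# Buggy call: passing the integer iteration as the tag
try:
    save_checkpoint("checkpoints", iteration)
except TypeError as err:
    print(f"TypeError: {err}")  # join() rejects the int component

# Fixed call: pass None and let the tag be generated
print(save_checkpoint("checkpoints", None))  # checkpoints/global_step1000
```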
@lekurile
Contributor

Hi @moyix,

Thank you for contributing to our DeepSpeedExamples repo! Can you please merge the latest from master and resolve any conflicts?

Thanks,
Lev

@lekurile
Contributor

Closing this PR for now due to merge conflicts and the fact that it was opened ~2 years ago. @moyix, we appreciate the contribution, and please feel free to open another PR in the future :)

lekurile closed this Sep 12, 2023
tjruwase added a commit that referenced this pull request Apr 12, 2025
* save_checkpoint perf monitoring

* Disable checkpoint save on exit

* local rank arg

* Single writer option
tjruwase added a commit that referenced this pull request Apr 12, 2025
* Fast model checkpointing

* Support both legacy and serialized formats

* Add io_buffer_mb option

* Bug fix

* Force flush

* More model options; Refactor common codes

* --gpu option

* --half and more flexible options

* Add deepspeed.save_checkpoint()

* Free ds memory

* Improve repro

* Double I/O buffer (#56)

* Double I/O buffer (#60)

* Add checkpoint comparison (#62)

* Add checkpoint comparison

* Corrected a typo

Co-authored-by: Yang Li <[email protected]>

* save_checkpoint perf monitoring

* Disable checkpoint save on exit

* Perf statistics for save_checkpoint (#64)

* save_checkpoint perf monitoring

* Disable checkpoint save on exit

* add logs for a100-80

* add torch* error log with half flag but without fused flag

* log for error

* local rank arg

* Handle local_rank arg (#78)

* save_checkpoint perf monitoring

* Disable checkpoint save on exit

* local rank arg

* Single writer option

* Single writer option (#79)

* save_checkpoint perf monitoring

* Disable checkpoint save on exit

* local rank arg

* Single writer option

* Allow missing folder

* DP writer refactor

* Update for DS; Add GDS

Signed-off-by: Olatunji Ruwase <[email protected]>

* Integrate GDS into deepspeed_model_save

---------

Signed-off-by: Olatunji Ruwase <[email protected]>
Co-authored-by: jerryyangli <[email protected]>
Co-authored-by: Yang Li <[email protected]>
Co-authored-by: GuanhuaWang <[email protected]>
tjruwase added a commit that referenced this pull request Jun 9, 2025
* Fast model checkpointing

* Support both legacy and serialized formats

* Add io_buffer_mb option

* Bug fix

* Force flush

* More model options; Refactor common codes

* --gpu option

* --half and more flexible options

* Add deepspeed.save_checkpoint()

* Free ds memory

* Improve repro

* Double I/O buffer (#56)

* Double I/O buffer (#60)

* Add checkpoint comparison (#62)

* Add checkpoint comparison

* Corrected a typo

Co-authored-by: Yang Li <[email protected]>

* save_checkpoint perf monitoring

* Disable checkpoint save on exit

* Perf statistics for save_checkpoint (#64)

* save_checkpoint perf monitoring

* Disable checkpoint save on exit

* add logs for a100-80

* add torch* error log with half flag but without fused flag

* log for error

* local rank arg

* Handle local_rank arg (#78)

* save_checkpoint perf monitoring

* Disable checkpoint save on exit

* local rank arg

* Single writer option

* Single writer option (#79)

* save_checkpoint perf monitoring

* Disable checkpoint save on exit

* local rank arg

* Single writer option

* Allow missing folder

* DP writer refactor

* Update for DS; Add GDS

Signed-off-by: Olatunji Ruwase <[email protected]>

* Integrate GDS into deepspeed_model_save

* Rebase fast persist (#184)

* Fast model checkpointing

* Support both legacy and serialized formats

* Add io_buffer_mb option

* Bug fix

* Force flush

* More model options; Refactor common codes

* --gpu option

* --half and more flexible options

* Add deepspeed.save_checkpoint()

* Free ds memory

* Improve repro

* Double I/O buffer (#56)

* Double I/O buffer (#60)

* Add checkpoint comparison (#62)

* Add checkpoint comparison

* Corrected a typo

Co-authored-by: Yang Li <[email protected]>

* save_checkpoint perf monitoring

* Disable checkpoint save on exit

* Perf statistics for save_checkpoint (#64)

* save_checkpoint perf monitoring

* Disable checkpoint save on exit

* add logs for a100-80

* add torch* error log with half flag but without fused flag

* log for error

* local rank arg

* Handle local_rank arg (#78)

* save_checkpoint perf monitoring

* Disable checkpoint save on exit

* local rank arg

* Single writer option

* Single writer option (#79)

* save_checkpoint perf monitoring

* Disable checkpoint save on exit

* local rank arg

* Single writer option

* Allow missing folder

* DP writer refactor

* Update for DS; Add GDS

Signed-off-by: Olatunji Ruwase <[email protected]>

* Integrate GDS into deepspeed_model_save

---------

Signed-off-by: Olatunji Ruwase <[email protected]>
Co-authored-by: jerryyangli <[email protected]>
Co-authored-by: Yang Li <[email protected]>
Co-authored-by: GuanhuaWang <[email protected]>

* Move folder

Signed-off-by: Olatunji Ruwase <[email protected]>

* Remove folder

Signed-off-by: Olatunji Ruwase <[email protected]>

* More cleanup

Signed-off-by: Olatunji Ruwase <[email protected]>

* torch changes

Signed-off-by: Olatunji Ruwase <[email protected]>

* sglang+zero_inference

* Remove file

* Add offload configs

* Add pin_memory

* Cleanup scripts

* SGLang README

* Remove file

---------

Signed-off-by: Olatunji Ruwase <[email protected]>
Co-authored-by: jerryyangli <[email protected]>
Co-authored-by: Yang Li <[email protected]>
Co-authored-by: GuanhuaWang <[email protected]>
Co-authored-by: Logan Adams <[email protected]>
Co-authored-by: Hongwei Chen <[email protected]>
Co-authored-by: Zhipeng Wang <[email protected]>