Conversation

@tjruwase (Contributor) commented Nov 9, 2020

  • test commits in DSE

  • Support for progressive layer dropping

  • Minor changes on PLD

  • update the finetune script

  • PLD client

  • Remove theta option

Co-authored-by: Minjia Zhang <[email protected]>
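Since the bullets above only name the feature, a pointer for readers arriving from the PLD tutorial: progressive layer dropping is switched on in the DeepSpeed config. A minimal sketch, assuming the `progressive_layer_drop` section documented in the DeepSpeed PLD tutorial; the `theta`/`gamma` values are illustrative, not recommendations:

```python
# Minimal DeepSpeed config sketch with progressive layer dropping enabled.
# The "progressive_layer_drop" keys follow the DeepSpeed PLD tutorial;
# theta and gamma shape the keep-probability schedule (see the PLD paper),
# and the values below are illustrative placeholders.
ds_config = {
    "train_batch_size": 4096,
    "progressive_layer_drop": {
        "enabled": True,
        "theta": 0.5,
        "gamma": 0.001,
    },
}
```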

@ShadenSmith (Contributor) left a comment

My main concern is that all of the BERT example's complexity (and various DeepSpeed features like sparse attention) also gets carried over to this directory, so the diff is large enough that it's hard to get much from it beyond searching for "progressive".

We don't have a great way to branch examples yet, though the DSE changes will fix that.

Maybe we can highlight the individual changes in the PLD tutorial for now? Without that, it will be tough to generalize this work to other training setups.
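
For reference while the tutorial catches up, a minimal sketch of the client-side shape. It assumes the dict-style `config` argument to `deepspeed.initialize()`, a hypothetical `data_loader` yielding `(inputs, labels)` batches, and a stand-in model; how a transformer actually consumes the drop schedule is model-specific (the BERT example wires that up internally):

```python
import torch
import deepspeed

# Stand-in model; the real client is the BERT pretraining script in DSE.
model = torch.nn.Linear(16, 2)

ds_config = {
    "train_batch_size": 8,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    "progressive_layer_drop": {"enabled": True, "theta": 0.5, "gamma": 0.001},
}

engine, _, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)

# data_loader is assumed to yield (inputs, labels); PLD's keep-probability
# schedule is managed by the engine once enabled in the config.
for inputs, labels in data_loader:
    loss = torch.nn.functional.cross_entropy(engine(inputs), labels)
    engine.backward(loss)  # DeepSpeed-managed backward
    engine.step()          # optimizer step; the PLD schedule advances with steps
```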

@minjiaz (Contributor) left a comment

The client-side changes look good.

@tjruwase merged commit fa1d1a7 into master on Nov 10, 2020
tjruwase pushed a commit that referenced this pull request Apr 12, 2025
* Add checkpoint comparison

* Corrected a typo

Co-authored-by: Yang Li <[email protected]>
tjruwase added a commit that referenced this pull request Apr 12, 2025
* Fast model checkpointing

* Support both legacy and serialized formats

* Add io_buffer_mb option

* Bug fix

* Force flush

* More model options; Refactor common codes

* --gpu option

* --half and more flexible options

* Add deepspeed.save_checkpoint()

* Free ds memory

* Improve repro

* Double I/O buffer (#56)

* Double I/O buffer (#60)

* Add checkpoint comparison (#62)

* Add checkpoint comparison

* Corrected a typo

Co-authored-by: Yang Li <[email protected]>

* save_checkpoint perf monitoring

* Disable checkpoint save on exit

* Perf statistics for save_checkpoint (#64)

* save_checkpoint perf monitoring

* Disable checkpoint save on exit

* add logs for a100-80

* add torch* error log with half flag but without fused flag

* log for error

* local rank arg

* Handle local_rank arg (#78)

* save_checkpoint perf monitoring

* Disable checkpoint save on exit

* local rank arg

* Single writer option

* Single writer option (#79)

* save_checkpoint perf monitoring

* Disable checkpoint save on exit

* local rank arg

* Single writer option

* Allow missing folder

* DP writer refactor

* Update for DS; Add GDS

Signed-off-by: Olatunji Ruwase <[email protected]>

* Integrate GDS into deepspeed_model_save

---------

Signed-off-by: Olatunji Ruwase <[email protected]>
Co-authored-by: jerryyangli <[email protected]>
Co-authored-by: Yang Li <[email protected]>
Co-authored-by: GuanhuaWang <[email protected]>
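
For context on the `save_checkpoint` bullets above, a minimal sketch of the engine-level checkpoint API (`engine` is the DeepSpeedEngine returned by `deepspeed.initialize()`; the directory, tag, and `step` are placeholders, and the module-level `deepspeed.save_checkpoint()` added in this work may differ):

```python
# Save model/optimizer state plus arbitrary user metadata via client_state.
engine.save_checkpoint("checkpoints", tag=f"step_{step}",
                       client_state={"step": step})

# load_checkpoint returns the resolved checkpoint path and the saved metadata.
load_path, client_state = engine.load_checkpoint("checkpoints", tag=f"step_{step}")
step = client_state["step"]
```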
hwchen2017 pushed a commit that referenced this pull request Jun 8, 2025
tjruwase added a commit that referenced this pull request Jun 9, 2025
* Rebase fast persist (#184)

* Move folder

Signed-off-by: Olatunji Ruwase <[email protected]>

* Remove folder

Signed-off-by: Olatunji Ruwase <[email protected]>

* More cleanup

Signed-off-by: Olatunji Ruwase <[email protected]>

* torch changes

Signed-off-by: Olatunji Ruwase <[email protected]>

* sglang+zero_inference

* Remove file

* Add offload configs

* Add pin_memory

* Cleanup scripts

* SGLang README

* Remove file

---------

Signed-off-by: Olatunji Ruwase <[email protected]>
Co-authored-by: jerryyangli <[email protected]>
Co-authored-by: Yang Li <[email protected]>
Co-authored-by: GuanhuaWang <[email protected]>
Co-authored-by: Logan Adams <[email protected]>
Co-authored-by: Hongwei Chen <[email protected]>
Co-authored-by: Zhipeng Wang <[email protected]>
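
The offload and pin_memory bullets above map to ZeRO-style offload settings. A minimal sketch of a ZeRO stage-3 parameter-offload config of the kind ZeRO-Inference builds on; the values are illustrative, and the configs shipped with the sglang scripts may differ:

```python
# Illustrative ZeRO stage-3 config with CPU parameter offload, the
# mechanism underlying ZeRO-Inference. Values are placeholders.
ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "zero_optimization": {
        "stage": 3,
        "offload_param": {
            "device": "cpu",     # keep parameters in host memory
            "pin_memory": True,  # pinned buffers speed host-to-GPU copies
        },
    },
}
```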