Commit d073953
pypi-diffusers: Autospec creation for update from version 0.32.2 to version 0.33.0
39th president of the United States, probably (1):
Fix for multi-GPU WAN inference (#10997)
Abhipsha Das (1):
[Model Card] standardize advanced diffusion training sdxl lora (#7615)
Ahmed Belgacem (1):
Fix redundant prev_output_channel assignment in UNet2DModel (#10945)
Alan Ponnachan (1):
Add torch_xla support to pipeline_aura_flow.py (#10365)
Alexey Zolotenkov (1):
Fix incorrect seed initialization when args.seed is 0 (#10964)
Ameer Azam (1):
The RunwayML path for V1.5 changed to stable-diffusion-v1-5/[stable-diffusion-v1-5 / stable-diffusion-inpainting] (#10476)
Andreas Jörg (1):
[examples/controlnet/train_controlnet_sd3.py] Fixes #11050 - Cast prompt_embeds and pooled_prompt_embeds to weight_dtype to prevent dtype mismatch (#11051)
Anton Obukhov (1):
Marigold Update: v1-1 models, Intrinsic Image Decomposition pipeline, documentation (#10884)
Aryan (35):
Fix TorchAO related bugs; revert device_map changes (#10371)
Update Flux docstrings (#10423)
Update variable names correctly in docs (#10435)
Fix hunyuan video attention mask dim (#10454)
Fix style (#10478)
[LoRA] Support original format loras for HunyuanVideo (#10376)
Add `_no_split_modules` to some models (#10308)
Fix Latte output_type (#10558)
Test sequential cpu offload for torchao quantization (#10506)
Fix offload tests for CogVideoX and CogView3 (#10547)
[core] Layerwise Upcasting (#10347), demonstrated in the sketch after this changelog
Improve TorchAO error message (#10627)
[core] Pyramid Attention Broadcast (#9562)
Refactor gradient checkpointing (#10611)
Refactor OmniGen (#10771)
Disable PEFT input autocast when using fp8 layerwise casting (#10685)
Update FlowMatch docstrings to mention correct output classes (#10788)
Refactor CogVideoX transformer forward (#10789)
Module Group Offloading (#10503)
Remove print statements (#10836)
Some consistency-related fixes for HunyuanVideo (#10835)
SkyReels Hunyuan T2V & I2V (#10837)
[docs] Add CogVideoX Schedulers (#10885)
[refactor] SD3 docs & remove additional code (#10882)
[refactor] Remove additional Flux code (#10881)
[LoRA] Support Wan (#10943)
Hunyuan I2V (#10983)
[LoRA] CogView4 (#10981)
LTX 0.9.5 (#10968)
Group offloading improvements (#11094)
Fix Group offloading behaviour when using streams (#11097)
[core] FasterCache (#10163)
New HunyuanVideo-I2V (#11066)
Improve information about group offloading and layerwise casting (#11101)
Raise warning and round down if Wan num_frames is not 4k + 1 (#11167)
AstraliteHeart (2):
Add AuraFlow GGUF support (#10463)
Add missing `isinstance` for arg checks in GGUFParameter (#10834)
Bagheera (1):
fix for #7365, prevent pipelines from overriding provided prompt embeds (#7926)
Basile Lewandowski (1):
Change KolorsPipeline LoRA Loader to StableDiffusion (#11198)
Benjamin Bossan (1):
[LoRA] Implement hot-swapping of LoRA (#9453)
Bhavay Malhotra (1):
[train_controlnet.py] Fix the LR schedulers when num_train_epochs is passed in a distributed training env (#8461)
Bruno Magalhaes (1):
remove unnecessary call to `F.pad` (#10620)
Bubbliiiing (1):
Add EasyAnimateV5.1 text-to-video, image-to-video, control-to-video generation model (#10626)
C (3):
[Docs] Add documentation about using ParaAttention to optimize FLUX and HunyuanVideo (#10544)
Fix Graph Breaks When Compiling CogView4 (#10959)
Fix Wan I2V Quality (#11087)
Cheng Jin (1):
Resolve stride mismatch in UNet's ResNet to support Torch DDP (#11098)
CyberVy (7):
Fix Callback Tensor Inputs of the SDXL Controlnet Inpaint and Img2img Pipelines are missing "controlnet_image". (#10880)
Fix Callback Tensor Inputs of the SD Controlnet Pipelines are missing some elements. (#10907)
Improve load_ip_adapter RAM Usage (#10948)
Fix the missing parentheses when calling is_torchao_available in quantization_config.py. (#10961)
Fix Flux Controlnet Pipeline _callback_tensor_inputs Missing Some Elements (#10974)
Fix missing **kwargs in lora_pipeline.py (#11011)
fix _callback_tensor_inputs of sd controlnet inpaint pipeline missing some elements (#11073)
Célina (1):
use style bot GH Action from `huggingface_hub` (#10970)
Daniel Hipke (1):
Add a `disable_mmap` option to the `from_single_file` loader to improve load performance on network mounts (#10305)
Daniel Regado (9):
IP-Adapter support for `StableDiffusion3ControlNetPipeline` (#10363)
Added IP-Adapter for `StableDiffusion3ControlNetInpaintingPipeline` (#10561)
IP-Adapter for `StableDiffusion3InpaintPipeline` (#10581)
IP-Adapter for `StableDiffusion3Img2ImgPipeline` (#10589)
[Docs] Update SD3 ip_adapter model_id to diffusers checkpoint (#10597)
`MultiControlNetUnionModel` on SDXL (#10747)
SD3 IP-Adapter runtime checkpoint conversion (#10718)
Comprehensive type checking for `from_pretrained` kwargs (#10758)
Multi IP-Adapter for Flux pipelines (#10867)
Dev Rajput (1):
Add correct number of channels when resuming from checkpoint for Flux Control LoRa training (#10422)
Dhruv Nair (28):
[CI] Add minimal testing for legacy Torch versions (#10479)
[CI] Torch Min Version Test Fix (#10491)
[Single File] Fix loading Flux Dev finetunes with Comfy Prefix (#10545)
[CI] Update HF Token in Fast GPU Tests (#10568)
[CI] Update HF Token on Fast GPU Model Tests (#10570)
[CI] Update HF_TOKEN in all workflows (#10613)
[CI] Fix Truffle Hog failure (#10769)
[Single File] Add Single File support for Lumina Image 2.0 Transformer (#10781)
[CI] Fix incorrectly named test module for Hunyuan DiT (#10854)
[CI] Update always test Pipelines list in Pipeline fetcher (#10856)
[Docs] Fix toctree sorting (#10894)
[CI] Improvements to conditional GPU PR tests (#10859)
[CI] Fix Fast GPU tests on PR (#10912)
[CI] Fix for failing IP Adapter test in Fast GPU PR tests (#10915)
[CI] Update Stylebot Permissions (#10931)
[Single File] Add user agent to SF download requests. (#10979)
[Single File] Add single file support for Wan T2V/I2V (#10991)
Fix for fetching variants only (#10646)
[Quantization] Add Quanto backend (#10756)
[Quantization] Allow loading TorchAO serialized Tensor objects with torch>=2.6 (#11018)
[Refactor] Clean up import utils boilerplate (#11026)
Provide option to reduce CPU RAM usage in Group Offload (#11106)
[Quantization] dtype fix for GGUF + fix BnB tests (#11159)
[Docs] Update Wan Docs with memory optimizations (#11089)
[WIP] Add Wan Video2Video (#11053)
Add CacheMixin to Wan and LTX Transformers (#11187)
Fix Single File loading for LTX VAE (#11200)
Update Ruff to latest Version (#10919)
Dimitri Barbot (2):
Fix pipeline dtype unexpected change when using SDXL reference community pipelines in float16 mode (#10670)
Fix deterministic issue when getting pipeline dtype and device (#10696)
Doug J (1):
Update train_text_to_image_sdxl.py (#8830)
Edna (1):
Add Wan with STG as a community pipeline (#11184)
Eliseu Silva (7):
Make passing the IP Adapter mask to the attention mechanism optional (#10346)
feat: new community mixture_tiling_sdxl pipeline for SDXL (#10759)
fix: [Community pipeline] Fix flattened elements on image (#10774)
feat: add Mixture-of-Diffusers ControlNet Tile upscaler Pipeline for SDXL (#10951)
fix: mixture tiling sdxl pipeline - adjust generating time_ids & embeddings (#11012)
fix: for checking mandatory and optional pipeline components (#11189)
feat: [Community Pipeline] - FaithDiff Stable Diffusion XL Pipeline (#11188)
Fanli Lin (8):
[tests] fix `AssertionError: Torch not compiled with CUDA enabled` (#10356)
[tests] make tests device-agnostic (part 3) (#10437)
make tensors contiguous before passing to safetensors (#10761)
[tests] make tests device-agnostic (part 4) (#10508)
[tests] enable bnb tests on xpu (#11001)
[tests] make cuda only tests device-agnostic (#11058)
[tests] no hard-coded cuda (#11186)
[tests] HunyuanDiTControlNetPipeline inference precision issue on XPU (#11197)
G.O.D (1):
fix bug for ascend npu (#10429)
Giuseppe Catalano (1):
Revert RePaint scheduler 'fix' (#10644)
Hanch Han (1):
[fix] refer use_framewise_encoding on AutoencoderKLHunyuanVideo._encode (#10600)
Haoyun Qin (1):
fix: support transformer models' `generation_config` in pipeline (#10779)
Ikpreet S Babra (1):
Fixed grammar in "write_own_pipeline" readme (#10706)
Ilya Drobyshevskiy (1):
fix flux controlnet bug (#11152)
Inigo Goiri (1):
Add support to pass image embeddings to the WAN I2V pipeline. (#11175)
Ishan Modi (1):
[Single File] Add single file loading for SANA Transformer (#10947)
Jacob Helwig (1):
Add sigmoid scheduler in `scheduling_ddpm.py` docs (#10648)
Juan Acevedo (3):
implementing flux on TPUs with ptxla (#10515)
reverts accidental change that removes attn_mask in attn. Improves fl… (#11065)
update readme instructions. (#11096)
Jun Yeop Na (2):
[train_dreambooth_lora.py] Fix the LR Schedulers when `num_train_epochs` is passed in a distributed training env (#10973)
[doc] Fix Korean Controlnet Train doc (#11141)
Junsong Chen (8):
[Sana] 1k PE bug fixed (#10431)
[Sana][bug fix] change clean_caption from True to False. (#10481)
[Sana 4K] (#10493)
[Sana] add Sana to auto-text2image-pipeline; (#10538)
[Sana-4K] (#10537)
[fix bug] PixArt inference_steps=1 (#11079)
[fix SANA-Sprint] (#11142)
add a timestep scale for sana-sprint teacher model (#11150)
Junyu Chen (2):
[DC-AE] support tiling for DC-AE (#10510)
[DC-AE, SANA] fix SanaMultiscaleLinearAttention apply_quadratic_attention bf16 (#10595)
Kenneth Gerald Hamilton (1):
Fixed requests.get function call by adding timeout parameter. (#11156)
Kinam Kim (1):
Add STG to community pipelines (#10960)
Le Zhuo (1):
Add support for lumina2 (#10642)
Leo Jiang (4):
[Sana 4K] Add vae tiling option to avoid OOM (#10583)
NPU adaption for RMSNorm (#10534)
NPU Adaption for Sana (#10409)
[bugfix] NPU Adaption for Sana (#10724)
Linoy Tsaban (4):
small readme changes for advanced training examples (#10473)
[flux lora training] fix t5 training bug (#10845)
[Wan LoRAs] make T2V LoRAs compatible with Wan I2V (#11107)
[Flux LoRA] fix issues in flux lora scripts (#11111)
LittleNyima (1):
Add CogVideoX DDIM Inversion to Community Pipelines (#10956)
Lucain (1):
Remove cache migration script (#10619)
Luchao Qi (1):
[Typo] Update md files (#10404)
Marc Sun (5):
Fix compatibility with pipeline when loading model with device_map on single gpu (#10390)
[FEAT] DDUF format (#10037)
[FEAT] Model loading refactor (#10604)
store activation cls instead of function (#10832)
remove format check for safetensors file (#10864)
Mark (1):
[Docs] Fix environment variables in `installation.md` (#11179)
Marlon May (1):
Add community pipeline for semantic guidance for FLUX (#10610)
Mathias Parger (1):
speedup hunyuan encoder causal mask generation (#10764)
Max Podkorytov (1):
Fix enable memory efficient attention on ROCm (#10564)
Mikko Tukiainen (1):
Add missing MochiEncoder3D.gradient_checkpointing attribute (#11146)
Muyang Li (2):
Use `randn_tensor` to replace `torch.randn` (#10535)
Remove the FP32 Wrapper when evaluating (#10617)
Nicolas (1):
Fix train_text_to_image.py --help (#10711)
Omar Awile (1):
Fix documentation for FluxPipeline (#10563)
Parag Ekbote (7):
Notebooks for Community Scripts-5 (#10499)
Fix Documentation about Image-to-Image Pipeline (#10704)
Notebooks for Community Scripts-6 (#10713)
Extend Support for callback_on_step_end for AuraFlow and LuminaText2Img Pipelines (#10746)
Notebooks for Community Scripts-7 (#10846)
Add Example of IPAdapterScaleCutoffCallback to Docs (#10934)
Notebooks for Community Scripts-8 (#11128)
Pierre Chapuis (1):
fix default values of Flux guidance_scale in docstrings (#10982)
Rahul Raman (1):
Refactor instructpix2pix lora to support peft (#10205)
Raul Ciotescu (1):
width and height are mixed-up (#10629)
SahilCarterr (8):
[Add] torch_xla support to pipeline_sana.py (#10364)
[Fix] Broken links in hunyuan docs (#10402)
[Fix] broken links in docs (#10434)
[FIX] check_inputs function in Auraflow Pipeline (#10678)
[Fix] Type Hint in from_pretrained() to Ensure Correct Type Inference (#10714)
[FIX] check_inputs function in lumina2 (#10784)
[Fix] Docs overview.md (#10858)
[Fix] fp16 unscaling in train_dreambooth_lora_sdxl (#10889)
Sayak Paul (75):
[chore] post release 0.32.0 (#10361)
fix test pypi installation in the release workflow (#10360)
[training] fix: registration of out_channels in the control flux scripts. (#10367)
[LoRA] feat: support `unload_lora_weights()` for Flux Control. (#10206)
[training] add ds support to lora sd3. (#10378)
[LTX-Video] fix attribute adjustment for ltx. (#10426)
[Tests] add slow and nightly markers to sd3 lora integration. (#10458)
[LoRA] fix: lora unloading when using expanded Flux LoRAs. (#10397)
[Training] QoL improvements in the Flux Control training scripts (#10461)
[chore] remove prints from tests. (#10505)
[LoRA] clean up `load_lora_into_text_encoder()` and `fuse_lora()` copied from (#10495)
[LoRA] allow big CUDA tests to run properly for LoRA (and others) (#9845)
[CI] Match remaining assertions from big runner (#10521)
[Tests] skip tests properly with `unittest.skip()` (#10527)
[Docs] Add negative prompt docs to FluxPipeline (#10531)
[Flux] Improve true cfg condition (#10539)
[LoRA] improve failure handling for peft. (#10551)
[Docs] Update hunyuan_video.md to rectify the checkpoint id (#10524)
[LoRA] feat: support loading loras into 4bit quantized Flux models. (#10578)
[Tests] add: test to check 8bit bnb quantized models work with lora loading. (#10576)
[Chore] fix vae annotation in mochi pipeline (#10585)
[training] set rest of the blocks with `requires_grad` False. (#10607)
[chore] change licensing to 2025 from 2024. (#10615)
[Tests] modify the test slices for the failing flax test (#10630)
[docs] fix image path in para attention docs (#10632)
[chore] add a script to extract loras from full fine-tuned models (#10631)
[Tests] conditionally check `fp8_e4m3_bf16_max_memory < fp8_e4m3_fp32_max_memory` (#10669)
[tests] update llamatokenizer in hunyuanvideo tests (#10681)
[bitsandbytes] Simplify bnb int8 dequant (#10401)
[LoRA] fix peft state dict parsing (#10532)
[Tests] Test layerwise casting with training (#10765)
[chore] update notes generation spaces (#10592)
[LoRA] improve lora support for flux. (#10810)
[docs] add missing entries to the lora docs. (#10819)
[LoRA] make `set_adapters()` robust on silent failures. (#9618)
[misc] feat: introduce a style bot. (#10274)
[tests] use proper gemma class and config in lumina2 tests. (#10828)
[LoRA] add LoRA support to Lumina2 and fine-tuning script (#10818)
[Utils] add utilities for checking if certain utilities are properly documented (#7763)
[tests] test `encode_prompt()` in isolation (#10438)
[CI] install accelerate transformers from `main` (#10289)
[CI] run fast gpu tests conditionally on pull requests. (#10310)
fix: run tests from a pr workflow. (#9696)
[chore] template for remote vae. (#10849)
fix remote vae template (#10852)
[LoRA] restrict certain keys to be checked for peft config update. (#10808)
[chore] correct qk norm list. (#10876)
[Tests] fix: lumina2 lora fuse_nan test (#10911)
[style bot] improve security for the stylebot. (#10908)
[chore] fix-copies to flux pipelines (#10941)
[Tests] Remove more encode prompts tests (#10942)
Update evaluation.md (#10938)
[LoRA] feat: support non-diffusers lumina2 LoRAs. (#10909)
[tests] fix tests for save load components (#10977)
[CI] remove synchronized. (#10980)
[LoRA] remove full key prefix from peft. (#11004)
[LoRA] Improve copied from comments in the LoRA loader classes (#10995)
[LoRA] Improve warning messages when LoRA loading becomes a no-op (#10187)
[Tests] improve quantization tests by additionally measuring the inference memory savings (#11021)
[LoRA] support wan i2v loras from the world. (#11025)
[LoRA] change to warning from info when notifying the users about a LoRA no-op (#11044)
[Tests] restrict memory tests for quanto for certain schemes. (#11052)
[LoRA] feat: support non-diffusers wan t2v loras. (#11059)
[Tests] add requires peft decorator. (#11037)
[CI] pin transformers version for benchmarking. (#11067)
make PR GPU tests conditioned on styling. (#11099)
[CI] uninstall deps properly from pr gpu tests. (#11102)
[feat] implement `record_stream` when using CUDA streams during group offloading (#11081)
[bitsandbytes] improve replacement warnings for bnb (#11132)
minor update to sana sprint docs. (#11236)
[docs] minor updates to dtype map docs. (#11237)
[LoRA] support more comfyui loras for Flux 🚨 (#10985)
fix: SD3 ControlNet validation so that it runs on a A100. (#11238)
fix timeout constant (#11252)
fix consisid imports (#11254)
Shenghai Yuan (1):
[core] ConsisID (#10140)
Shitao Xiao (1):
Add OmniGen (#10148)
Steven Liu (9):
[docs] Quantization tip (#10249)
[docs] Video generation update (#10272)
[docs] Fix internal links (#10418)
[docs] Fix missing parameters in docstrings (#10419)
[docs] uv installation (#10622)
[docs] LoRA support (#10844)
[docs] Update prompt weighting docs (#10843)
[docs] Flux group offload (#10847)
[docs] MPS update (#11212)
Suprhimp (1):
[feat] Add strength in flux_fill pipeline (denoising strength for fluxfill) (#10603)
Teriks (5):
RFInversionFluxPipeline, small fix for enable_model_cpu_offload & enable_sequential_cpu_offload compatibility (#10480)
allow passing hf_token to load_textual_inversion (#10546)
SDXL ControlNet Union pipelines, make control_image argument immutable (#10663)
support StableDiffusionAdapterPipeline.from_single_file (#10552)
Fix SD2.X clip single file load projection_dim (#10770)
Thanh Le (2):
Fix inconsistent random transform in instruct pix2pix (#10698)
Faster set_adapters (#10777)
Tolga Cangöz (3):
[`Research Project`] Add AnyText: Multilingual Visual Text Generation And Editing (#8998)
Update README and example code for AnyText usage (#11028)
[LTX0.9.5] Refactor `LTXConditionPipeline` for text-only conditioning (#11174)
Vedat Baday (2):
fix(hunyuan-video): typo in height and width input check (#10684)
feat(training-utils): support device and dtype params in compute_density_for_timestep_sampling (#10699)
Vinh H. Pham (1):
Implement framewise encoding/decoding in LTX Video VAE (#10488)
Vladimir Mandic (1):
flux: make scheduler config params optional (#10384)
Wenhao Sun (1):
Add pipeline_stable_diffusion_xl_attentive_eraser (#10579)
Yaniv Galron (3):
removing redundant requires_grad = False (#10628)
typo fix (#10802)
making `formatted_images` initialization compact (#10801)
Yao Matrix (4):
map BACKEND_RESET_MAX_MEMORY_ALLOCATED to reset_peak_memory_stats on XPU (#11191)
enable 1 case on XPU (#11219)
introduce compute arch specific expectations and fix test_sd3_img2img_inference failure (#11227)
fix FluxReduxSlowTests::test_flux_redux_inference case failure on XPU (#11245)
YiYi Xu (7):
make style for huggingface/diffusers#10368 (#10370)
fix offload gpu tests etc (#10366)
follow-up refactor on lumina2 (#10776)
[Alibaba Wan Team] continue on #10921 Wan2.1 (#10922)
update check_input for cogview4 (#10966)
remove F.rms_norm for now (#11126)
add sana-sprint (#11074)
Yih-Dar (1):
Security fix (#10905)
Yuqian Hong (2):
create a script to train autoencoderkl (#10605)
[BUG] Fix Autoencoderkl train script (#11113)
Yuxuan Zhang (5):
CogView4 (supports different length c and uc) (#10649)
Update pipeline_cogview4.py (#10944)
[Docs] CogView4 comment fix (#10957)
CogView4 Control Block (#10809)
Modify the implementation of retrieve_timesteps in CogView4-Control. (#11125)
Zehuan Huang (1):
Support pass kwargs to cogvideox custom attention processor (#10456)
ZhengKai91 (1):
Fix aclnnRepeatInterleaveIntWithDim error on NPU for get_1d_rotary_pos_embed (#10820)
a120092009 (1):
[Quantization] support pass MappingType for TorchAoConfig (#10927)
alex choi (1):
ensure dtype match between diffused latents and vae weights (#8391)
andreabosisio (1):
Typo fix in the table number of a referenced paper (#10528)
baymax591 (1):
bugfix for npu not support float64 (#10123)
chaowenguo (4):
add pytorch_xla support for render a video (#10443)
Update rerender_a_video.py fix dtype error (#10451)
add callable object to convert frame into control_frame to reduce cpu memory usage. (#10501)
add the xm.mark_step for the first denoising loop (#10530)
co63oc (1):
Fix pipeline_flux_controlnet.py (#11095)
célina (1):
Update Style Bot workflow (#11202)
dependabot[bot] (2):
Bump jinja2 from 3.1.4 to 3.1.5 in /examples/research_projects/realfill (#10377)
Bump jinja2 from 3.1.5 to 3.1.6 in /examples/research_projects/realfill (#10984)
fancydaddy (1):
add from_single_file to animatediff (#10924)
geronimi73 (1):
AutoModel instead of AutoModelForCausalLM (#10507)
hlky (59):
Default values in SD3 pipelines when submodules are not loaded (#10393)
Fix AutoPipeline `from_pipe` where source pipeline is missing target pipeline's optional components (#10400)
Add torch_xla and from_single_file support to TextToVideoZeroPipeline (#10445)
`lora_bias` PEFT version check in `unet.load_attn_procs` (#10474)
LEditsPP - examples, check height/width, add tiling/slicing (#10471)
Add torch_xla and from_single_file to instruct-pix2pix (#10444)
Use Pipelines without scheduler (#10439)
Fix HunyuanVideo produces NaN on PyTorch<2.5 (#10482)
Use pipelines without vae (#10441)
Update tokenizers in `pr_test_peft_backend` (#10132)
Fix tokenizers install from main in LoRA tests (#10494)
UNet2DModel mid_block_type (#10469)
PyTorch/XLA support (#10498)
Use Pipelines without unet (#10440)
Fix StableDiffusionInstructPix2PixPipelineSingleFileSlowTests (#10557)
Fix train_dreambooth_lora_sd3_miniature (#10554)
Fix Nightly AudioLDM2PipelineFastTests (#10556)
Fix batch > 1 in HunyuanVideo (#10548)
Move buffers to device (#10523)
Scheduling fixes on MPS (#10549)
Add IP-Adapter example to Flux docs (#10633)
ControlNet Union controlnet_conditioning_scale for multiple control inputs (#10666)
[training] Convert to ImageFolder script (#10664)
Add provider_options to OnnxRuntimeModel (#10661)
Quantized Flux with IP-Adapter (#10728)
EDMEulerScheduler accept sigmas, add final_sigmas_type (#10734)
Add `Self` type hint to `ModelMixin`'s `from_pretrained` (#10742)
Fix `use_lu_lambdas` and `use_karras_sigmas` with `beta_schedule=squaredcos_cap_v2` in `DPMSolverMultistepScheduler` (#10740)
DiffusionPipeline mixin `to`+FromOriginalModelMixin/FromSingleFileMixin `from_single_file` type hint (#10811)
`device_map` in `load_model_dict_into_meta` (#10851)
Fix `torch_dtype` in Kolors text encoder with `transformers` v4.49 (#10816)
Add SD3 ControlNet to AutoPipeline (#10888)
Experimental per control type scale for ControlNet Union (#10723)
Support IPAdapter for more Flux pipelines (#10708)
Add `remote_decode` to `remote_utils` (#10898)
Update VAE Decode endpoints (#10939)
Add VAE Decode endpoint slow test (#10946)
Fix loading OneTrainer Flux LoRA (#10978)
Wan VAE move scaling to pipeline (#10998)
Fix SD3 IPAdapter feature extractor (#11027)
Use `output_size` in `repeat_interleave` (#11030)
[hybrid inference 🍯🐝] Add VAE encode (#11017)
Wan Pipeline scaling fix, type hint warning, multi generator fix (#11007)
Rename Lumina(2)Text2ImgPipeline -> Lumina(2)Pipeline (#10827)
Quality options in `export_to_video` (#11090)
Flux with Remote Encode (#11091)
Don't override `torch_dtype` and don't use when `quantization_config` is set (#11039)
WanI2V encode_image (#11164)
Fix LatteTransformer3DModel dtype mismatch with enable_temporal_attentions (#11139)
Add `latents_mean` and `latents_std` to `SDXLLongPromptWeightingPipeline` (#11034)
allow models to run with a user-provided dtype map instead of a single dtype (#10301)
Revert `save_model` in ModelMixin save_pretrained and use safe_serialization=False in test (#11196)
[docs] `torch_dtype` map (#11194)
Fix enable_sequential_cpu_offload in CogView4Pipeline (#11195)
SchedulerMixin from_pretrained and ConfigMixin Self type annotation (#11192)
Flux quantized with lora (#10990)
AudioLDM2 Fixes (#11244)
AutoModel (#11115)
[docs] AutoModel (#11250)
jiqing-feng (2):
Enable dreambooth lora finetune example on other devices (#10602)
fix autocast (#11190)
kahmed10 (1):
add onnxruntime-migraphx as part of check for onnxruntime in import_utils.py (#10624)
kakukakujirori (1):
Bug fix in LTXImageToVideoPipeline.prepare_latents() when latents is already set (#10918)
kentdan3msu (1):
Set self._hf_peft_config_loaded to True when LoRA is loaded using `load_lora_adapter` in PeftAdapterMixin class (#11155)
lakshay sharma (1):
Update import_utils.py (#10329)
maxs-kan (1):
Fix Flux multiple Lora loading bug (#10388)
puhuk (2):
Update Custom Diffusion Documentation for Multiple Concept Inference to resolve issue #10791 (#10792)
Fix max_shift value in flux and related functions to 1.15 (issue #10675) (#10807)
sayakpaul (1):
Release: v0.33.0
sunxunle (1):
chore: remove redundant words (#10609)
suzukimain (2):
[Docs] Added `model search` to community_projects.md (#10358)
[Community] Enhanced `Model Search` (#10417)
victolee0 (1):
fix check_inputs func in LuminaText2ImgPipeline (#10651)
wonderfan (1):
chore: fix help messages in advanced diffusion examples (#10923)
xieofxie (1):
add provider_options in from_pretrained (#10719)
yupeng1111 (1):
fix wan i2v pipeline bugs (#10975)
Álvaro Somoza (1):
[Training] Better image interpolation in training scripts (#11206)
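Of everything above, two memory features called out across several entries, layerwise casting (#10347, #10685, #11101) and group offloading (#10503, #11094, #11097, #11106), are among the most user-visible additions in 0.33.0. As a quick orientation, here is a minimal sketch of the layerwise-casting call; the pipeline class, model id, prompt, and dtype choices are illustrative assumptions, not anything this commit prescribes.

```python
# Minimal sketch of fp8 layerwise casting (#10347), assuming diffusers==0.33.0.
# Weights are stored as float8_e4m3fn and upcast per layer to bfloat16 at
# compute time; the model id and step count below are illustrative only.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.transformer.enable_layerwise_casting(
    storage_dtype=torch.float8_e4m3fn, compute_dtype=torch.bfloat16
)
pipe.to("cuda")

image = pipe(
    "a photo of an astronaut riding a horse", num_inference_steps=28
).images[0]
image.save("astronaut.png")
```

Group offloading from the same release appears to be exposed through `ModelMixin.enable_group_offload` and `diffusers.hooks.apply_group_offloading` (names per the 0.33.0 documentation rather than this commit); #11101 expands the docs for both features.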
1 parent 199a10f · commit d073953

File tree: 5 files changed (+14, −14): Makefile (+1/−1), release (+1/−1), upstream (+1/−1), and two further files (+1/−1 and +10/−10).