Merged
640 commits
d77c53b
[docs] fix image path in para attention docs (#10632)
sayakpaul Jan 23, 2025
5483162
[docs] uv installation (#10622)
stevhliu Jan 23, 2025
9684c52
width and height are mixed-up (#10629)
raulc0399 Jan 23, 2025
37c9697
Add IP-Adapter example to Flux docs (#10633)
hlky Jan 23, 2025
a451c0e
removing redundant requires_grad = False (#10628)
YanivDorGalron Jan 23, 2025
5897137
[chore] add a script to extract loras from full fine-tuned models (#1…
sayakpaul Jan 24, 2025
87252d8
Add pipeline_stable_diffusion_xl_attentive_eraser (#10579)
Anonym0u3 Jan 24, 2025
07860f9
NPU Adaption for Sanna (#10409)
leisuzz Jan 24, 2025
4f3ec53
Add sigmoid scheduler in `scheduling_ddpm.py` docs (#10648)
JacobHelwig Jan 26, 2025
4fa2459
create a script to train autoencoderkl (#10605)
lavinal712 Jan 27, 2025
f7f36c7
Add community pipeline for semantic guidance for FLUX (#10610)
Marlon154 Jan 27, 2025
18f7d1d
ControlNet Union controlnet_conditioning_scale for multiple control i…
hlky Jan 27, 2025
4157177
[training] Convert to ImageFolder script (#10664)
hlky Jan 27, 2025
158c5c4
Add provider_options to OnnxRuntimeModel (#10661)
hlky Jan 27, 2025
8ceec90
fix check_inputs func in LuminaText2ImgPipeline (#10651)
victolee0 Jan 27, 2025
e89ab5b
SDXL ControlNet Union pipelines, make control_image argument immutibl…
Teriks Jan 27, 2025
fb42066
Revert RePaint scheduler 'fix' (#10644)
GiusCat Jan 27, 2025
658e24e
[core] Pyramid Attention Broadcast (#9562)
a-r-r-o-w Jan 27, 2025
f295e2e
[fix] refer use_framewise_encoding on AutoencoderKLHunyuanVideo._enco…
hanchchch Jan 28, 2025
c4d4ac2
Refactor gradient checkpointing (#10611)
a-r-r-o-w Jan 28, 2025
7b100ce
[Tests] conditionally check `fp8_e4m3_bf16_max_memory < fp8_e4m3_fp32…
sayakpaul Jan 28, 2025
196aef5
Fix pipeline dtype unexpected change when using SDXL reference commun…
dimitribarbot Jan 28, 2025
e6037e8
[tests] update llamatokenizer in hunyuanvideo tests (#10681)
sayakpaul Jan 29, 2025
33f9361
support StableDiffusionAdapterPipeline.from_single_file (#10552)
Teriks Jan 29, 2025
ea76880
fix(hunyuan-video): typo in height and width input check (#10684)
badayvedat Jan 29, 2025
aad69ac
[FIX] check_inputs function in Auraflow Pipeline (#10678)
SahilCarterr Jan 29, 2025
1ae9b05
Fix enable memory efficient attention on ROCm (#10564)
tenpercent Jan 31, 2025
5d2d239
Fix inconsistent random transform in instruct pix2pix (#10698)
Luvata Jan 31, 2025
9f28f1a
feat(training-utils): support device and dtype params in compute_dens…
badayvedat Feb 1, 2025
537891e
Fixed grammar in "write_own_pipeline" readme (#10706)
N0-Flux-given Feb 3, 2025
3e35f56
Fix Documentation about Image-to-Image Pipeline (#10704)
ParagEkbote Feb 3, 2025
5e8e6cb
[bitsandbytes] Simplify bnb int8 dequant (#10401)
sayakpaul Feb 4, 2025
f63d322
Fix train_text_to_image.py --help (#10711)
nkthiebaut Feb 4, 2025
dbe0094
Notebooks for Community Scripts-6 (#10713)
ParagEkbote Feb 4, 2025
5b1dcd1
[Fix] Type Hint in from_pretrained() to Ensure Correct Type Inference…
SahilCarterr Feb 4, 2025
23bc56a
add provider_options in from_pretrained (#10719)
xieofxie Feb 5, 2025
145522c
[Community] Enhanced `Model Search` (#10417)
suzukimain Feb 6, 2025
cd0a4a8
[bugfix] NPU Adaption for Sana (#10724)
leisuzz Feb 6, 2025
d43ce14
Quantized Flux with IP-Adapter (#10728)
hlky Feb 6, 2025
464374f
EDMEulerScheduler accept sigmas, add final_sigmas_type (#10734)
hlky Feb 7, 2025
9f5ad1d
[LoRA] fix peft state dict parsing (#10532)
sayakpaul Feb 10, 2025
7fb481f
Add `Self` type hint to `ModelMixin`'s `from_pretrained` (#10742)
hlky Feb 10, 2025
c80eda9
[Tests] Test layerwise casting with training (#10765)
sayakpaul Feb 11, 2025
8ae8008
speedup hunyuan encoder causal mask generation (#10764)
dabeschte Feb 11, 2025
ed4b752
[CI] Fix Truffle Hog failure (#10769)
DN6 Feb 11, 2025
798e171
Add OmniGen (#10148)
staoxiao Feb 11, 2025
c470274
feat: new community mixture_tiling_sdxl pipeline for SDXL (#10759)
elismasilva Feb 11, 2025
81440fd
Add support for lumina2 (#10642)
zhuole1025 Feb 11, 2025
57ac673
Refactor OmniGen (#10771)
a-r-r-o-w Feb 12, 2025
067eab1
Faster set_adapters (#10777)
Luvata Feb 12, 2025
28f48f4
[Single File] Add Single File support for Lumina Image 2.0 Transforme…
DN6 Feb 12, 2025
ca6330d
Fix `use_lu_lambdas` and `use_karras_sigmas` with `beta_schedule=squa…
hlky Feb 12, 2025
5105b5a
`MultiControlNetUnionModel` on SDXL (#10747)
guiyrt Feb 12, 2025
051ebc3
fix: [Community pipeline] Fix flattened elements on image (#10774)
elismasilva Feb 12, 2025
97abdd2
make tensors contiguous before passing to safetensors (#10761)
faaany Feb 13, 2025
a0c2299
Disable PEFT input autocast when using fp8 layerwise casting (#10685)
a-r-r-o-w Feb 13, 2025
8d081de
Update FlowMatch docstrings to mention correct output classes (#10788)
a-r-r-o-w Feb 13, 2025
ab42820
Refactor CogVideoX transformer forward (#10789)
a-r-r-o-w Feb 13, 2025
9a147b8
Module Group Offloading (#10503)
a-r-r-o-w Feb 14, 2025
27b9023
Update Custom Diffusion Documentation for Multiple Concept Inference …
puhuk Feb 14, 2025
a6b843a
[FIX] check_inputs function in lumina2 (#10784)
SahilCarterr Feb 14, 2025
69f919d
follow-up refactor on lumina2 (#10776)
yiyixuxu Feb 15, 2025
d90cd36
CogView4 (supports different length c and uc) (#10649)
zRzRzRzRzRzRzR Feb 15, 2025
952b913
typo fix (#10802)
YanivDorGalron Feb 16, 2025
3e99b56
Extend Support for callback_on_step_end for AuraFlow and LuminaText2I…
ParagEkbote Feb 16, 2025
3579cd2
[chore] update notes generation spaces (#10592)
sayakpaul Feb 17, 2025
c14057c
[LoRA] improve lora support for flux. (#10810)
sayakpaul Feb 17, 2025
b75b204
Fix max_shift value in flux and related functions to 1.15 (issue #106…
puhuk Feb 18, 2025
924f880
[docs] add missing entries to the lora docs. (#10819)
sayakpaul Feb 18, 2025
2bc82d6
DiffusionPipeline mixin `to`+FromOriginalModelMixin/FromSingleFileMix…
hlky Feb 19, 2025
6fe05b9
[LoRA] make `set_adapters()` robust on silent failures. (#9618)
sayakpaul Feb 19, 2025
f5929e0
[FEAT] Model loading refactor (#10604)
SunMarc Feb 19, 2025
680a8ed
[misc] feat: introduce a style bot. (#10274)
sayakpaul Feb 19, 2025
f8b54cf
Remove print statements (#10836)
a-r-r-o-w Feb 20, 2025
0fb7068
[tests] use proper gemma class and config in lumina2 tests. (#10828)
sayakpaul Feb 20, 2025
f10d3c6
[LoRA] add LoRA support to Lumina2 and fine-tuning script (#10818)
sayakpaul Feb 20, 2025
f550745
[Utils] add utilities for checking if certain utilities are properly …
sayakpaul Feb 20, 2025
5321712
Add missing `isinstance` for arg checks in GGUFParameter (#10834)
AstraliteHeart Feb 20, 2025
b2ca39c
[tests] test `encode_prompt()` in isolation (#10438)
sayakpaul Feb 20, 2025
a4c1aac
store activation cls instead of function (#10832)
SunMarc Feb 20, 2025
c7a8c43
fix: support transformer models' `generation_config` in pipeline (#10…
JeffersonQin Feb 20, 2025
5194138
Notebooks for Community Scripts-7 (#10846)
ParagEkbote Feb 20, 2025
1f85350
[CI] install accelerate transformers from `main` (#10289)
sayakpaul Feb 20, 2025
454f82e
[CI] run fast gpu tests conditionally on pull requests. (#10310)
sayakpaul Feb 20, 2025
d9ee387
SD3 IP-Adapter runtime checkpoint conversion (#10718)
guiyrt Feb 20, 2025
f070775
Some consistency-related fixes for HunyuanVideo (#10835)
a-r-r-o-w Feb 20, 2025
e3bc4aa
SkyReels Hunyuan T2V & I2V (#10837)
a-r-r-o-w Feb 21, 2025
1871a69
fix: run tests from a pr workflow. (#9696)
sayakpaul Feb 21, 2025
9055ccb
[chore] template for remote vae. (#10849)
sayakpaul Feb 21, 2025
6cef7d2
fix remote vae template (#10852)
sayakpaul Feb 21, 2025
2b2d042
[CI] Fix incorrectly named test module for Hunyuan DiT (#10854)
DN6 Feb 21, 2025
b27d4ed
[CI] Update always test Pipelines list in Pipeline fetcher (#10856)
DN6 Feb 21, 2025
d75ea3c
`device_map` in `load_model_dict_into_meta` (#10851)
hlky Feb 21, 2025
85fcbaf
[Fix] Docs overview.md (#10858)
SahilCarterr Feb 21, 2025
ffb6777
remove format check for safetensors file (#10864)
SunMarc Feb 21, 2025
64dec70
[docs] LoRA support (#10844)
stevhliu Feb 22, 2025
9c7e205
Comprehensive type checking for `from_pretrained` kwargs (#10758)
guiyrt Feb 22, 2025
6f74ef5
Fix `torch_dtype` in Kolors text encoder with `transformers` v4.49 (#…
hlky Feb 24, 2025
b0550a6
[LoRA] restrict certain keys to be checked for peft config update. (#…
sayakpaul Feb 24, 2025
aba4a57
Add SD3 ControlNet to AutoPipeline (#10888)
hlky Feb 24, 2025
3fdf173
[docs] Update prompt weighting docs (#10843)
stevhliu Feb 24, 2025
db21c97
[docs] Flux group offload (#10847)
stevhliu Feb 24, 2025
170833c
[Fix] fp16 unscaling in train_dreambooth_lora_sdxl (#10889)
SahilCarterr Feb 24, 2025
64af74f
[docs] Add CogVideoX Schedulers (#10885)
a-r-r-o-w Feb 24, 2025
36517f6
[chore] correct qk norm list. (#10876)
sayakpaul Feb 24, 2025
8759969
[Docs] Fix toctree sorting (#10894)
DN6 Feb 24, 2025
13f20c7
[refactor] SD3 docs & remove additional code (#10882)
a-r-r-o-w Feb 24, 2025
0404703
[refactor] Remove additional Flux code (#10881)
a-r-r-o-w Feb 25, 2025
cc7b5b8
[CI] Improvements to conditional GPU PR tests (#10859)
DN6 Feb 25, 2025
1450c2a
Multi IP-Adapter for Flux pipelines (#10867)
guiyrt Feb 25, 2025
613e77f
Fix Callback Tensor Inputs of the SDXL Controlnet Inpaint and Img2img…
CyberVy Feb 25, 2025
f0ac7aa
Security fix (#10905)
ydshieh Feb 25, 2025
3fab662
Marigold Update: v1-1 models, Intrinsic Image Decomposition pipeline,…
toshas Feb 26, 2025
764d7ed
[Tests] fix: lumina2 lora fuse_nan test (#10911)
sayakpaul Feb 26, 2025
9a8e8db
Fix Callback Tensor Inputs of the SD Controlnet Pipelines are missing…
CyberVy Feb 26, 2025
e5c43b8
[CI] Fix Fast GPU tests on PR (#10912)
DN6 Feb 27, 2025
501d9de
[CI] Fix for failing IP Adapter test in Fast GPU PR tests (#10915)
DN6 Feb 27, 2025
37a5f1b
Experimental per control type scale for ControlNet Union (#10723)
hlky Feb 27, 2025
d230ecc
[style bot] improve security for the stylebot. (#10908)
sayakpaul Feb 28, 2025
7007feb
[CI] Update Stylebot Permissions (#10931)
DN6 Mar 1, 2025
2d8a41c
[Alibaba Wan Team] continue on #10921 Wan2.1 (#10922)
yiyixuxu Mar 2, 2025
694f965
Support IPAdapter for more Flux pipelines (#10708)
hlky Mar 2, 2025
fc4229a
Add `remote_decode` to `remote_utils` (#10898)
hlky Mar 2, 2025
54043c3
Update VAE Decode endpoints (#10939)
hlky Mar 2, 2025
4aaa0d2
[chore] fix-copies to flux pipelines (#10941)
sayakpaul Mar 3, 2025
7513162
[Tests] Remove more encode prompts tests (#10942)
sayakpaul Mar 3, 2025
5e3b7d2
Add EasyAnimateV5.1 text-to-video, image-to-video, control-to-video g…
bubbliiiing Mar 3, 2025
9e910c4
Fix SD2.X clip single file load projection_dim (#10770)
Teriks Mar 3, 2025
c9a219b
add from_single_file to animatediff (#10924)
asgawegawew Mar 3, 2025
982f9b3
Add Example of IPAdapterScaleCutoffCallback to Docs (#10934)
ParagEkbote Mar 3, 2025
f92e599
Update pipeline_cogview4.py (#10944)
zRzRzRzRzRzRzR Mar 3, 2025
8f15be1
Fix redundant prev_output_channel assignment in UNet2DModel (#10945)
ahmedbelgacem Mar 3, 2025
30cef6b
Improve load_ip_adapter RAM Usage (#10948)
CyberVy Mar 4, 2025
7855ac5
[tests] make tests device-agnostic (part 4) (#10508)
faaany Mar 4, 2025
cc22058
Update evaluation.md (#10938)
sayakpaul Mar 4, 2025
97fda1b
[LoRA] feat: support non-diffusers lumina2 LoRAs. (#10909)
sayakpaul Mar 4, 2025
11d8e3c
[Quantization] support pass MappingType for TorchAoConfig (#10927)
a120092009 Mar 4, 2025
dcd77ce
Fix the missing parentheses when calling is_torchao_available in quan…
CyberVy Mar 4, 2025
3ee899f
[LoRA] Support Wan (#10943)
a-r-r-o-w Mar 4, 2025
b8215b1
Fix incorrect seed initialization when args.seed is 0 (#10964)
azolotenkov Mar 4, 2025
66bf7ea
feat: add Mixture-of-Diffusers ControlNet Tile upscaler Pipeline for …
elismasilva Mar 4, 2025
a74f02f
[Docs] CogView4 comment fix (#10957)
zRzRzRzRzRzRzR Mar 4, 2025
24c062a
update check_input for cogview4 (#10966)
yiyixuxu Mar 4, 2025
08f74a8
Add VAE Decode endpoint slow test (#10946)
hlky Mar 5, 2025
e031caf
[flux lora training] fix t5 training bug (#10845)
linoytsaban Mar 5, 2025
fbf6b85
use style bot GH Action from `huggingface_hub` (#10970)
hanouticelina Mar 5, 2025
37b8edf
[train_dreambooth_lora.py] Fix the LR Schedulers when `num_train_epoc…
flyxiv Mar 6, 2025
6e2a93d
[tests] fix tests for save load components (#10977)
sayakpaul Mar 6, 2025
b150276
Fix loading OneTrainer Flux LoRA (#10978)
hlky Mar 6, 2025
ea81a42
fix default values of Flux guidance_scale in docstrings (#10982)
catwell Mar 6, 2025
1be0202
[CI] remove synchornized. (#10980)
sayakpaul Mar 6, 2025
f103993
Bump jinja2 from 3.1.5 to 3.1.6 in /examples/research_projects/realfi…
dependabot[bot] Mar 6, 2025
54ab475
Fix Flux Controlnet Pipeline _callback_tensor_inputs Missing Some Ele…
CyberVy Mar 6, 2025
790a909
[Single File] Add user agent to SF download requests. (#10979)
DN6 Mar 6, 2025
748cb0f
Add CogVideoX DDIM Inversion to Community Pipelines (#10956)
LittleNyima Mar 6, 2025
d55f411
fix wan i2v pipeline bugs (#10975)
yupeng1111 Mar 7, 2025
2e5203b
Hunyuan I2V (#10983)
a-r-r-o-w Mar 7, 2025
6a0137e
Fix Graph Breaks When Compiling CogView4 (#10959)
chengzeyi Mar 7, 2025
363d1ab
Wan VAE move scaling to pipeline (#10998)
hlky Mar 7, 2025
a2d3d6a
[LoRA] remove full key prefix from peft. (#11004)
sayakpaul Mar 7, 2025
1357931
[Single File] Add single file support for Wan T2V/I2V (#10991)
DN6 Mar 7, 2025
b38450d
Add STG to community pipelines (#10960)
kinam0252 Mar 7, 2025
1fddee2
[LoRA] Improve copied from comments in the LoRA loader classes (#10995)
sayakpaul Mar 8, 2025
9a1810f
Fix for fetching variants only (#10646)
DN6 Mar 10, 2025
f5edaa7
[Quantization] Add Quanto backend (#10756)
DN6 Mar 10, 2025
0703ce8
[Single File] Add single file loading for SANA Transformer (#10947)
ishan-modi Mar 10, 2025
26149c0
[LoRA] Improve warning messages when LoRA loading becomes a no-op (#1…
sayakpaul Mar 10, 2025
8eefed6
[LoRA] CogView4 (#10981)
a-r-r-o-w Mar 10, 2025
e7e6d85
[Tests] improve quantization tests by additionally measuring the infe…
sayakpaul Mar 10, 2025
b88fef4
[`Research Project`] Add AnyText: Multilingual Visual Text Generation…
tolgacangoz Mar 10, 2025
9add071
[Quantization] Allow loading TorchAO serialized Tensor objects with t…
DN6 Mar 11, 2025
4e3ddd5
fix: mixture tiling sdxl pipeline - adjust gerating time_ids & embedd…
elismasilva Mar 11, 2025
e4b056f
[LoRA] support wan i2v loras from the world. (#11025)
sayakpaul Mar 11, 2025
7e0db46
Fix SD3 IPAdapter feature extractor (#11027)
hlky Mar 11, 2025
36d0553
chore: fix help messages in advanced diffusion examples (#10923)
wonderfan Mar 11, 2025
d87ce2c
Fix missing **kwargs in lora_pipeline.py (#11011)
CyberVy Mar 11, 2025
e7ffeae
Fix for multi-GPU WAN inference (#10997)
AmericanPresidentJimmyCarter Mar 11, 2025
5428046
[Refactor] Clean up import utils boilerplate (#11026)
DN6 Mar 12, 2025
8b4f8ba
Use `output_size` in `repeat_interleave` (#11030)
hlky Mar 12, 2025
733b44a
[hybrid inference 🍯🐝] Add VAE encode (#11017)
hlky Mar 12, 2025
4ea9f89
Wan Pipeline scaling fix, type hint warning, multi generator fix (#11…
hlky Mar 12, 2025
20e4b6a
[LoRA] change to warning from info when notifying the users about a L…
sayakpaul Mar 12, 2025
5551506
Rename Lumina(2)Text2ImgPipeline -> Lumina(2)Pipeline (#10827)
hlky Mar 13, 2025
5e48cd2
making ```formatted_images``` initialization compact (#10801)
YanivDorGalron Mar 13, 2025
ccc8321
Fix aclnnRepeatInterleaveIntWithDim error on NPU for get_1d_rotary_po…
ZhengKai91 Mar 13, 2025
2f0f281
[Tests] restrict memory tests for quanto for certain schemes. (#11052)
sayakpaul Mar 14, 2025
124ac3e
[LoRA] feat: support non-diffusers wan t2v loras. (#11059)
sayakpaul Mar 14, 2025
8ead643
[examples/controlnet/train_controlnet_sd3.py] Fixes #11050 - Cast pro…
andjoer Mar 14, 2025
6b9a333
reverts accidental change that removes attn_mask in attn. Improves fl…
entrpn Mar 14, 2025
be54a95
Fix deterministic issue when getting pipeline dtype and device (#10696)
dimitribarbot Mar 15, 2025
cc19726
[Tests] add requires peft decorator. (#11037)
sayakpaul Mar 15, 2025
82188ce
CogView4 Control Block (#10809)
zRzRzRzRzRzRzR Mar 15, 2025
1001425
[CI] pin transformers version for benchmarking. (#11067)
sayakpaul Mar 16, 2025
33d10af
Fix Wan I2V Quality (#11087)
chengzeyi Mar 17, 2025
2e83cbb
LTX 0.9.5 (#10968)
a-r-r-o-w Mar 18, 2025
b4d7e9c
make PR GPU tests conditioned on styling. (#11099)
sayakpaul Mar 18, 2025
813d42c
Group offloading improvements (#11094)
a-r-r-o-w Mar 18, 2025
3fe3bc0
Fix pipeline_flux_controlnet.py (#11095)
co63oc Mar 18, 2025
2791682
update readme instructions. (#11096)
entrpn Mar 18, 2025
cb1b8b2
Resolve stride mismatch in UNet's ResNet to support Torch DDP (#11098)
jinc7461 Mar 18, 2025
3be6706
Fix Group offloading behaviour when using streams (#11097)
a-r-r-o-w Mar 18, 2025
0ab8fe4
Quality options in `export_to_video` (#11090)
hlky Mar 18, 2025
ae14612
[CI] uninstall deps properly from pr gpu tests. (#11102)
sayakpaul Mar 19, 2025
fc28791
[BUG] Fix Autoencoderkl train script (#11113)
lavinal712 Mar 19, 2025
a34d97c
[Wan LoRAs] make T2V LoRAs compatible with Wan I2V (#11107)
linoytsaban Mar 19, 2025
56f7400
[tests] enable bnb tests on xpu (#11001)
faaany Mar 19, 2025
dc62e69
[fix bug] PixArt inference_steps=1 (#11079)
lawrence-cj Mar 20, 2025
9f2d5c9
Flux with Remote Encode (#11091)
hlky Mar 20, 2025
15ad97f
[tests] make cuda only tests device-agnostic (#11058)
faaany Mar 20, 2025
2c1ed50
Provide option to reduce CPU RAM usage in Group Offload (#11106)
DN6 Mar 20, 2025
e9fda39
remove F.rms_norm for now (#11126)
yiyixuxu Mar 20, 2025
f424b1b
Notebooks for Community Scripts-8 (#11128)
ParagEkbote Mar 20, 2025
9b2c0a7
fix _callback_tensor_inputs of sd controlnet inpaint pipeline missing…
CyberVy Mar 21, 2025
844221a
[core] FasterCache (#10163)
a-r-r-o-w Mar 21, 2025
8a63aa5
add sana-sprint (#11074)
yiyixuxu Mar 21, 2025
a7d53a5
Don't override `torch_dtype` and don't use when `quantization_config`…
hlky Mar 21, 2025
0213179
Update README and example code for AnyText usage (#11028)
tolgacangoz Mar 23, 2025
1d37f42
Modify the implementation of retrieve_timesteps in CogView4-Control. …
zRzRzRzRzRzRzR Mar 23, 2025
5dbe4f5
[fix SANA-Sprint] (#11142)
lawrence-cj Mar 24, 2025
8907a70
New HunyuanVideo-I2V (#11066)
a-r-r-o-w Mar 24, 2025
7aac77a
[doc] Fix Korean Controlnet Train doc (#11141)
flyxiv Mar 24, 2025
1ddf3f3
Improve information about group offloading and layerwise casting (#11…
a-r-r-o-w Mar 24, 2025
739d6ec
add a timestep scale for sana-sprint teacher model (#11150)
lawrence-cj Mar 25, 2025
7dc52ea
[Quantization] dtype fix for GGUF + fix BnB tests (#11159)
DN6 Mar 26, 2025
de6a88c
Set self._hf_peft_config_loaded to True when LoRA is loaded using `lo…
kentdan3msu Mar 26, 2025
5d970a4
WanI2V encode_image (#11164)
hlky Mar 28, 2025
617c208
[Docs] Update Wan Docs with memory optimizations (#11089)
DN6 Mar 28, 2025
75d7e5c
Fix LatteTransformer3DModel dtype mismatch with enable_temporal_atten…
hlky Mar 29, 2025
2c59af7
Raise warning and round down if Wan num_frames is not 4k + 1 (#11167)
a-r-r-o-w Mar 31, 2025
eb50def
[Docs] Fix environment variables in `installation.md` (#11179)
remarkablemark Mar 31, 2025
d6f4774
Add `latents_mean` and `latents_std` to `SDXLLongPromptWeightingPipel…
hlky Mar 31, 2025
e8fc8b1
Bug fix in LTXImageToVideoPipeline.prepare_latents() when latents is …
kakukakujirori Mar 31, 2025
5a6edac
[tests] no hard-coded cuda (#11186)
faaany Apr 1, 2025
df1d7b0
[WIP] Add Wan Video2Video (#11053)
DN6 Apr 1, 2025
a7f07c1
map BACKEND_RESET_MAX_MEMORY_ALLOCATED to reset_peak_memory_stats on …
yao-matrix Apr 2, 2025
4d5a96e
fix autocast (#11190)
jiqing-feng Apr 2, 2025
be0b7f5
fix: for checking mandatory and optional pipeline components (#11189)
elismasilva Apr 2, 2025
fe2b397
remove unnecessary call to `F.pad` (#10620)
bm-synth Apr 2, 2025
d8c617c
allow models to run with a user-provided dtype map instead of a singl…
hlky Apr 2, 2025
52b460f
[tests] HunyuanDiTControlNetPipeline inference precision issue on XPU…
faaany Apr 2, 2025
da857be
Revert `save_model` in ModelMixin save_pretrained and use safe_serial…
hlky Apr 2, 2025
e5c6027
[docs] `torch_dtype` map (#11194)
hlky Apr 2, 2025
54dac3a
Fix enable_sequential_cpu_offload in CogView4Pipeline (#11195)
hlky Apr 2, 2025
78c2fdc
SchedulerMixin from_pretrained and ConfigMixin Self type annotation (…
hlky Apr 2, 2025
b0ff822
Update import_utils.py (#10329)
Lakshaysharma048 Apr 2, 2025
c97b709
Add CacheMixin to Wan and LTX Transformers (#11187)
DN6 Apr 2, 2025
c4646a3
feat: [Community Pipeline] - FaithDiff Stable Diffusion XL Pipeline (…
elismasilva Apr 2, 2025
d9023a6
[Model Card] standardize advanced diffusion training sdxl lora (#7615)
chiral-carbon Apr 3, 2025
480510a
Change KolorsPipeline LoRA Loader to StableDiffusion (#11198)
BasileLewan Apr 3, 2025
bbdabab
Merge branch 'huggingface:main' into main
clementchadebec Apr 3, 2025
38 changes: 38 additions & 0 deletions .github/ISSUE_TEMPLATE/remote-vae-pilot-feedback.yml
@@ -0,0 +1,38 @@
name: "\U0001F31F Remote VAE"
description: Feedback for remote VAE pilot
labels: [ "Remote VAE" ]

body:
  - type: textarea
    id: positive
    validations:
      required: true
    attributes:
      label: Did you like the remote VAE solution?
      description: |
        If you liked it, we would appreciate it if you could elaborate on what you liked.

  - type: textarea
    id: feedback
    validations:
      required: true
    attributes:
      label: What can be improved about the current solution?
      description: |
        Let us know the things you would like to see improved. Note that we will work on optimizing the solution once the pilot is over and we have usage data.

  - type: textarea
    id: others
    validations:
      required: true
    attributes:
      label: What other VAEs would you like to see if the pilot goes well?
      description: |
        Provide a list of the VAEs you would like to see in the future if the pilot goes well.

  - type: textarea
    id: additional-info
    attributes:
      label: Notify the members of the team
      description: |
        Tag the following folks when submitting this feedback: @hlky @sayakpaul
1 change: 1 addition & 0 deletions .github/workflows/benchmark.yml
@@ -38,6 +38,7 @@ jobs:
          python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
          python -m uv pip install -e [quality,test]
          python -m uv pip install pandas peft
          python -m uv pip uninstall transformers && python -m uv pip install transformers==4.48.0
      - name: Environment
        run: |
          python utils/print_env.py
3 changes: 2 additions & 1 deletion .github/workflows/build_docker_images.yml
@@ -34,7 +34,7 @@ jobs:
        id: file_changes
        uses: jitterbit/get-changed-files@v1
        with:
          format: 'space-delimited'
          format: "space-delimited"
          token: ${{ secrets.GITHUB_TOKEN }}

      - name: Build Changed Docker Images
@@ -67,6 +67,7 @@ jobs:
          - diffusers-pytorch-cuda
          - diffusers-pytorch-compile-cuda
          - diffusers-pytorch-xformers-cuda
          - diffusers-pytorch-minimum-cuda
          - diffusers-flax-cpu
          - diffusers-flax-tpu
          - diffusers-onnxruntime-cpu
195 changes: 190 additions & 5 deletions .github/workflows/nightly_tests.yml
@@ -180,14 +180,128 @@ jobs:
          pip install slack_sdk tabulate
          python utils/log_reports.py >> $GITHUB_STEP_SUMMARY

  run_big_gpu_torch_tests:
    name: Torch tests on big GPU
    strategy:
      fail-fast: false
      max-parallel: 2
    runs-on:
      group: aws-g6e-xlarge-plus
    container:
      image: diffusers/diffusers-pytorch-cuda
      options: --shm-size "16gb" --ipc host --gpus 0
    steps:
      - name: Checkout diffusers
        uses: actions/checkout@v3
        with:
          fetch-depth: 2
      - name: NVIDIA-SMI
        run: nvidia-smi
      - name: Install dependencies
        run: |
          python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
          python -m uv pip install -e [quality,test]
          python -m uv pip install peft@git+https://github.com/huggingface/peft.git
          pip uninstall accelerate -y && python -m uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git
          python -m uv pip install pytest-reportlog
      - name: Environment
        run: |
          python utils/print_env.py
      - name: Selected Torch CUDA Test on big GPU
        env:
          HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }}
          # https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms
          CUBLAS_WORKSPACE_CONFIG: :16:8
          BIG_GPU_MEMORY: 40
        run: |
          python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
            -m "big_gpu_with_torch_cuda" \
            --make-reports=tests_big_gpu_torch_cuda \
            --report-log=tests_big_gpu_torch_cuda.log \
            tests/
      - name: Failure short reports
        if: ${{ failure() }}
        run: |
          cat reports/tests_big_gpu_torch_cuda_stats.txt
          cat reports/tests_big_gpu_torch_cuda_failures_short.txt
      - name: Test suite reports artifacts
        if: ${{ always() }}
        uses: actions/upload-artifact@v4
        with:
          name: torch_cuda_big_gpu_test_reports
          path: reports
      - name: Generate Report and Notify Channel
        if: always()
        run: |
          pip install slack_sdk tabulate
          python utils/log_reports.py >> $GITHUB_STEP_SUMMARY

  torch_minimum_version_cuda_tests:
    name: Torch Minimum Version CUDA Tests
    runs-on:
      group: aws-g4dn-2xlarge
    container:
      image: diffusers/diffusers-pytorch-minimum-cuda
      options: --shm-size "16gb" --ipc host --gpus 0
    defaults:
      run:
        shell: bash
    steps:
      - name: Checkout diffusers
        uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Install dependencies
        run: |
          python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
          python -m uv pip install -e [quality,test]
          python -m uv pip install peft@git+https://github.com/huggingface/peft.git
          pip uninstall accelerate -y && python -m uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git

      - name: Environment
        run: |
          python utils/print_env.py

      - name: Run PyTorch CUDA tests
        env:
          HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }}
          # https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms
          CUBLAS_WORKSPACE_CONFIG: :16:8
        run: |
          python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
            -s -v -k "not Flax and not Onnx" \
            --make-reports=tests_torch_minimum_version_cuda \
            tests/models/test_modeling_common.py \
            tests/pipelines/test_pipelines_common.py \
            tests/pipelines/test_pipeline_utils.py \
            tests/pipelines/test_pipelines.py \
            tests/pipelines/test_pipelines_auto.py \
            tests/schedulers/test_schedulers.py \
            tests/others

      - name: Failure short reports
        if: ${{ failure() }}
        run: |
          cat reports/tests_torch_minimum_version_cuda_stats.txt
          cat reports/tests_torch_minimum_version_cuda_failures_short.txt

      - name: Test suite reports artifacts
        if: ${{ always() }}
        uses: actions/upload-artifact@v4
        with:
          name: torch_minimum_version_cuda_test_reports
          path: reports

  run_flax_tpu_tests:
    name: Nightly Flax TPU Tests
    runs-on: docker-tpu
    runs-on:
      group: gcp-ct5lp-hightpu-8t
    if: github.event_name == 'schedule'

    container:
      image: diffusers/diffusers-flax-tpu
      options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ --privileged
      options: --shm-size "16gb" --ipc host --privileged ${{ vars.V5_LITEPOD_8_ENV}} -v /mnt/hf_cache:/mnt/hf_cache
    defaults:
      run:
        shell: bash
@@ -291,6 +405,77 @@ jobs:
          pip install slack_sdk tabulate
          python utils/log_reports.py >> $GITHUB_STEP_SUMMARY

  run_nightly_quantization_tests:
    name: Torch quantization nightly tests
    strategy:
      fail-fast: false
      max-parallel: 2
      matrix:
        config:
          - backend: "bitsandbytes"
            test_location: "bnb"
            additional_deps: ["peft"]
          - backend: "gguf"
            test_location: "gguf"
            additional_deps: []
          - backend: "torchao"
            test_location: "torchao"
            additional_deps: []
          - backend: "optimum_quanto"
            test_location: "quanto"
            additional_deps: []
    runs-on:
      group: aws-g6e-xlarge-plus
    container:
      image: diffusers/diffusers-pytorch-cuda
      options: --shm-size "20gb" --ipc host --gpus 0
    steps:
      - name: Checkout diffusers
        uses: actions/checkout@v3
        with:
          fetch-depth: 2
      - name: NVIDIA-SMI
        run: nvidia-smi
      - name: Install dependencies
        run: |
          python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
          python -m uv pip install -e [quality,test]
          python -m uv pip install -U ${{ matrix.config.backend }}
          if [ "${{ join(matrix.config.additional_deps, ' ') }}" != "" ]; then
            python -m uv pip install ${{ join(matrix.config.additional_deps, ' ') }}
          fi
          python -m uv pip install pytest-reportlog
      - name: Environment
        run: |
          python utils/print_env.py
      - name: ${{ matrix.config.backend }} quantization tests on GPU
        env:
          HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }}
          # https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms
          CUBLAS_WORKSPACE_CONFIG: :16:8
          BIG_GPU_MEMORY: 40
        run: |
          python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
            --make-reports=tests_${{ matrix.config.backend }}_torch_cuda \
            --report-log=tests_${{ matrix.config.backend }}_torch_cuda.log \
            tests/quantization/${{ matrix.config.test_location }}
      - name: Failure short reports
        if: ${{ failure() }}
        run: |
          cat reports/tests_${{ matrix.config.backend }}_torch_cuda_stats.txt
          cat reports/tests_${{ matrix.config.backend }}_torch_cuda_failures_short.txt
      - name: Test suite reports artifacts
        if: ${{ always() }}
        uses: actions/upload-artifact@v4
        with:
          name: torch_cuda_${{ matrix.config.backend }}_reports
          path: reports
      - name: Generate Report and Notify Channel
        if: always()
        run: |
          pip install slack_sdk tabulate
          python utils/log_reports.py >> $GITHUB_STEP_SUMMARY

# M1 runner currently not well supported
# TODO: (Dhruv) add these back when we setup better testing for Apple Silicon
# run_nightly_tests_apple_m1:
@@ -329,7 +514,7 @@ jobs:
# shell: arch -arch arm64 bash {0}
# env:
# HF_HOME: /System/Volumes/Data/mnt/cache
# HF_TOKEN: ${{ secrets.HF_TOKEN }}
# HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }}
# run: |
# ${CONDA_RUN} python -m pytest -n 1 -s -v --make-reports=tests_torch_mps \
# --report-log=tests_torch_mps.log \
@@ -385,7 +570,7 @@ jobs:
# shell: arch -arch arm64 bash {0}
# env:
# HF_HOME: /System/Volumes/Data/mnt/cache
# HF_TOKEN: ${{ secrets.HF_TOKEN }}
# HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }}
# run: |
# ${CONDA_RUN} python -m pytest -n 1 -s -v --make-reports=tests_torch_mps \
# --report-log=tests_torch_mps.log \
@@ -405,4 +590,4 @@ jobs:
# if: always()
# run: |
# pip install slack_sdk tabulate
# python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
# python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
51 changes: 51 additions & 0 deletions .github/workflows/pr_style_bot.yml
@@ -0,0 +1,51 @@
name: PR Style Bot

on:
  issue_comment:
    types: [created]

permissions:
  contents: write
  pull-requests: write

jobs:
  style:
    uses: huggingface/huggingface_hub/.github/workflows/style-bot-action.yml@main
    with:
      python_quality_dependencies: "[quality]"
      pre_commit_script_name: "Download and Compare files from the main branch"
      pre_commit_script: |
        echo "Downloading the files from the main branch"

        curl -o main_Makefile https://raw.githubusercontent.com/huggingface/diffusers/main/Makefile
        curl -o main_setup.py https://raw.githubusercontent.com/huggingface/diffusers/refs/heads/main/setup.py
        curl -o main_check_doc_toc.py https://raw.githubusercontent.com/huggingface/diffusers/refs/heads/main/utils/check_doc_toc.py

        echo "Compare the files and raise error if needed"

        diff_failed=0
        if ! diff -q main_Makefile Makefile; then
          echo "Error: The Makefile has changed. Please ensure it matches the main branch."
          diff_failed=1
        fi

        if ! diff -q main_setup.py setup.py; then
          echo "Error: The setup.py has changed. Please ensure it matches the main branch."
          diff_failed=1
        fi

        if ! diff -q main_check_doc_toc.py utils/check_doc_toc.py; then
          echo "Error: The utils/check_doc_toc.py has changed. Please ensure it matches the main branch."
          diff_failed=1
        fi

        if [ $diff_failed -eq 1 ]; then
          echo "❌ Error happened as we detected changes in the files that should not be changed ❌"
          exit 1
        fi

        echo "No changes in the files. Proceeding..."
        rm -rf main_Makefile main_setup.py main_check_doc_toc.py
      style_command: "make style && make quality"
    secrets:
      bot_token: ${{ secrets.GITHUB_TOKEN }}