Wan2.2-Lightning
https://huggingface.co/lightx2v/Wan2.2-Lightning lightx2v/Wan2.2-Lightning
Converted to fp16
https://huggingface.co/Kijai/WanVideo_comfy/tree/main/Wan22-Lightning Kijai/WanVideo_comfy/Wan22-Lightning
https://huggingface.co/woctordho/wan-lora-pruned woctordho/wan-lora-pruned
Further pruned version
Models distilled from Wan2.2 so it can generate in 4 steps, plus the corresponding LoRA models
https://huggingface.co/lightx2v/Wan2.2-Lightning/tree/main/Wan2.2-T2V-A14B-4steps-lora-rank64-Seko-V2.0 Wan2.2-T2V-A14B-4steps-lora-rank64-Seko-V2.0
Something new has been added…
MoE image2video
https://huggingface.co/lightx2v/Wan2.2-I2V-A14B-Moe-Distill-Lightx2v/blob/main/loras/high_noise_model_rank64.safetensors Wan2.2-I2V-A14B-Moe-Distill-Lightx2v/high_noise_model_rank64.safetensors
https://huggingface.co/lightx2v/Wan2.2-I2V-A14B-Moe-Distill-Lightx2v/blob/main/loras/low_noise_model_rank64.safetensors Wan2.2-I2V-A14B-Moe-Distill-Lightx2v/low_noise_model_rank64.safetensors
It doesn't work well at all… nomadoor.icon
v250928 text2video
https://huggingface.co/lightx2v/Wan2.2-Lightning/blob/main/Wan2.2-T2V-A14B-4steps-lora-250928/high_noise_model.safetensors high_noise_model.safetensors
https://huggingface.co/lightx2v/Wan2.2-Lightning/blob/main/Wan2.2-T2V-A14B-4steps-lora-250928/low_noise_model.safetensors low_noise_model.safetensors
https://huggingface.co/QuantStack/Wan2.2_T2V_A14B_4steps_25-09-28_Dyno_High_lightx2v-GGUF/tree/main gguf (not LoRA)
https://gyazo.com/ea104e3c7822eddffb07dcb48e6f9e94
Seko V1 image2video
https://huggingface.co/lightx2v/Wan2.2-Lightning/blob/main/Wan2.2-I2V-A14B-4steps-lora-rank64-Seko-V1/high_noise_model.safetensors Wan2.2-I2V-A14B-4steps-lora-rank64-Seko-V1/high_noise_model.safetensors
https://huggingface.co/lightx2v/Wan2.2-Lightning/blob/main/Wan2.2-I2V-A14B-4steps-lora-rank64-Seko-V1/low_noise_model.safetensors Wan2.2-I2V-A14B-4steps-lora-rank64-Seko-V1/low_noise_model.safetensors
v1.1 text2video
https://huggingface.co/lightx2v/Wan2.2-Lightning/blob/main/Wan2.2-T2V-A14B-4steps-lora-rank64-Seko-V1.1/high_noise_model.safetensors high_noise_model.safetensors
https://huggingface.co/lightx2v/Wan2.2-Lightning/blob/main/Wan2.2-T2V-A14B-4steps-lora-rank64-Seko-V1.1/low_noise_model.safetensors low_noise_model.safetensors
https://gyazo.com/07aa4ea8115b8b52cf7ad3d51378d176
Wan2.2_text2video_14B_lightning_v1.1_gguf.json
high 2 steps / low 2 steps
https://gyazo.com/9ac8cedc31519bf18c31808c09d31bbd
(105s)
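The 2+2 split above can be sketched as a step-window calculation. This is only an illustration of how the start/end step values on two advanced sampler nodes (e.g. ComfyUI's KSamplerAdvanced `start_at_step`/`end_at_step`) would be set; the helper name is hypothetical:

```python
def split_steps(total_steps: int, boundary: int):
    """Return (start, end) step windows for the high- and low-noise experts.

    The high-noise expert denoises steps [0, boundary); the low-noise
    expert finishes the remaining steps [boundary, total_steps).
    """
    return (0, boundary), (boundary, total_steps)

# 4-step Lightning workflow: 2 steps on each expert
high, low = split_steps(4, 2)
print(high, low)  # (0, 2) (2, 4)
```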
↓ Old models, to be archived (planned)
https://gyazo.com/a978d9380915c95df5ae093be772a9c8
Wan2.2_text2video_14B_lightx2v.json
I'm probably doing something wrong nomadoor.icon
Using lightx2v on the High Noise model causes unacceptable degradation, so the popular approach now seems to be applying it only to the Low Noise model nomadoor.icon
For now, the better way to speed up the High Noise side seems to be enabling "Distance_fast" in the DistanceSampler, reducing it to the equivalent of 7 steps (3–4 actual steps) morisoba65536.icon
https://www.reddit.com/r/StableDiffusion/comments/1mgh40w/wan22_best_of_both_worlds_quality_vs_speed/ Wan2.2 Best of both worlds, quality vs speed. Original high noise model CFG 3.5 + low noise model Lightx2V CFG1
https://www.reddit.com/r/StableDiffusion/comments/1mi6bb9/wan22_problem_of_using_lightx2v_lora_to_speed_up/ Wan2.2 Problem of using Lightx2v Lora to speed up!!
https://gyazo.com/5baf2016ef174ebb31460acd43e6aaba
Wan2.2_text2video_14B_lightx2v_low-only.json
High Noise: 6 steps (of 12 total)
Low Noise: with lightx2v added, 5 steps (of 10 total)
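The low-only setup can be summarized as per-expert sampler settings, following the Reddit recipe above (original high-noise model at CFG 3.5, lightx2v low-noise at CFG 1). The dict layout is purely illustrative, not a real ComfyUI API; in the actual workflow these values map onto two sampler nodes plus a LoRA loader on the low-noise branch:

```python
# Per-expert sampler settings for the lightx2v low-only workflow (illustrative).
EXPERTS = {
    # full-quality expert: no speed LoRA, normal CFG, first 6 of 12 total steps
    "high_noise": {"lora": None,       "cfg": 3.5, "steps": 12, "window": (0, 6)},
    # distilled expert: lightx2v LoRA, CFG 1, last 5 of 10 total steps
    "low_noise":  {"lora": "lightx2v", "cfg": 1.0, "steps": 10, "window": (5, 10)},
}

for name, s in EXPERTS.items():
    start, end = s["window"]
    print(f"{name}: {end - start}/{s['steps']} steps, cfg={s['cfg']}, lora={s['lora']}")
```

Note the two experts use different total step counts (12 vs 10), so the handoff point is set per branch rather than shared.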
Comparison (RTX 4070 Ti inference time)
https://gyazo.com/4526c4c9afa645d2f9f577255bc72db1 https://gyazo.com/995583cea43fdc5d09c719d1910ffa61 https://gyazo.com/6721d74f6ec5559a19d96147407591b2
High&Low (174.35s) / Low-only (364.53s) / no LoRA, 20 steps (725s)
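For reference, the timings above imply the following speedups over the 20-step no-LoRA baseline:

```python
# Speedups implied by the RTX 4070 Ti timings above.
baseline = 725.0        # no LoRA, 20 steps
high_and_low = 174.35   # lightx2v on both experts
low_only = 364.53       # lightx2v on the low-noise expert only

print(f"High&Low: {baseline / high_and_low:.2f}x faster")  # ~4.16x
print(f"Low-only: {baseline / low_only:.2f}x faster")      # ~1.99x
```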