IP-Adapter models
for SD v1.5
h94/IP-Adapter: https://huggingface.co/h94/IP-Adapter/tree/main/models
ip-adapter-full-face_sd15
ip-adapter-plus-face_sd15
ip-adapter-plus-sd15
The "plus" variants condition on finer-grained image features (https://github.com/tencent-ailab/IP-Adapter); see the sketch after this list
ip-adapter_sd15
ip-adapter_sd15_light
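A minimal sketch of loading one of these SD 1.5 checkpoints with diffusers; the base model ID, prompt, scale, and reference image path are illustrative, and any of the weight names above can be swapped in.

```python
# Minimal sketch: loading an h94/IP-Adapter checkpoint into an SD 1.5
# pipeline with diffusers. Base model, prompt, and scale are examples.
import torch
from diffusers import StableDiffusionPipeline
from diffusers.utils import load_image

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# subfolder="models" is the SD 1.5 folder of the repo; swap weight_name
# for any checkpoint listed above (e.g. "ip-adapter-plus_sd15.bin").
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models",
                     weight_name="ip-adapter_sd15.bin")
pipe.set_ip_adapter_scale(0.7)  # strength of the reference image

ref = load_image("reference.png")  # hypothetical reference image
image = pipe("a portrait photo, best quality",
             ip_adapter_image=ref, num_inference_steps=30).images[0]
image.save("ip_adapter_out.png")
```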
wd15_ip_adapter_plus by furusu (laksjdjf): https://huggingface.co/furusu/IP-Adapter/tree/main
h94/IP-Adapter-FaceID: https://huggingface.co/h94/IP-Adapter-FaceID/tree/main (see the embedding sketch after this list)
ip-adapter-faceid_sd15
ip-adapter-faceid-plus_sd15
ip-adapter-faceid-plusv2_sd15
ip-adapter-faceid-portrait_sd15
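The FaceID variants condition on an InsightFace identity embedding instead of (for plus/plusv2, in addition to) a CLIP image embedding. A minimal sketch of extracting that embedding, assuming the insightface package; how the embedding is then passed on depends on the runtime (ComfyUI, diffusers, or the reference implementation).

```python
# Sketch: extracting the InsightFace identity embedding that the FaceID
# checkpoints expect. The image path and detector pack name are examples.
import cv2
import torch
from insightface.app import FaceAnalysis

app = FaceAnalysis(name="buffalo_l",
                   providers=["CUDAExecutionProvider", "CPUExecutionProvider"])
app.prepare(ctx_id=0, det_size=(640, 640))

img = cv2.imread("face.png")  # hypothetical reference portrait
faces = app.get(img)          # detect faces and compute embeddings
# 512-dim identity embedding of the first detected face
id_embed = torch.from_numpy(faces[0].normed_embedding).unsqueeze(0)
```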
ip_plus_composition_sd15: https://huggingface.co/ostris/ip-composition-adapter/tree/main
Cat Face IP-Adapter: https://huggingface.co/flankechen/cat_face_ipadapter
ipAdapterAnimeFine_v10: https://civitai.com/models/302691?modelVersionId=339910
for SDXL
h94/IP-Adapter: https://huggingface.co/h94/IP-Adapter/tree/main/sdxl_models
ip-adapter_sdxl
ip-adapter_sdxl_vit-h
Trained with OpenCLIP ViT-H/14 in place of OpenCLIP ViT-bigG/14 (https://github.com/tencent-ailab/IP-Adapter#sdxl_10)
ViT-H/14 is far lighter, yet there is no large difference in quality
Compute cost was reduced by not training at 1024×1024 from the start: training ran at 512×512 and was upscaled during fine-tuning (see the sketch after this list)
ip-adapter-plus_sdxl_vit-h
ip-adapter-plus-face_sdxl_vit-h
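For the *_vit-h checkpoints, the ViT-H image encoder (the one in the repo's SD 1.5 "models" folder) must be used rather than the ViT-bigG one. A minimal diffusers sketch; the base SDXL model and scale are examples.

```python
# Sketch: SDXL with a *_vit-h checkpoint. These were trained against the
# ViT-H/14 encoder, so it is loaded explicitly from the SD 1.5 folder.
import torch
from diffusers import StableDiffusionXLPipeline
from transformers import CLIPVisionModelWithProjection

image_encoder = CLIPVisionModelWithProjection.from_pretrained(
    "h94/IP-Adapter", subfolder="models/image_encoder",
    torch_dtype=torch.float16,
)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    image_encoder=image_encoder, torch_dtype=torch.float16,
).to("cuda")
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models",
                     weight_name="ip-adapter-plus_sdxl_vit-h.safetensors")
pipe.set_ip_adapter_scale(0.6)
```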
h94/IP-Adapter-FaceID: https://huggingface.co/h94/IP-Adapter-FaceID/tree/main
ip-adapter-faceid_sdxl
ip-adapter-faceid-plusv2_sdxl
ip_plus_composition_sdxl: https://huggingface.co/ostris/ip-composition-adapter/tree/main
for Flux
XLabs-AI/flux-ip-adapter-v2: https://huggingface.co/XLabs-AI/flux-ip-adapter-v2
for Stable Diffusion 3.5
InstantX/SD3.5-Large-IP-Adapter: https://huggingface.co/InstantX/SD3.5-Large-IP-Adapter
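Recent diffusers releases added SD3/SD3.5 IP-Adapter support; the following is a sketch assuming that API and the SigLIP image encoder referenced on the InstantX model card, so check the card for the exact loading recipe.

```python
# Sketch, assuming diffusers' SD3 IP-Adapter support and the SigLIP
# encoder named on the InstantX model card.
import torch
from diffusers import StableDiffusion3Pipeline
from transformers import SiglipImageProcessor, SiglipVisionModel

encoder_id = "google/siglip-so400m-patch14-384"
pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large",
    torch_dtype=torch.float16,
    feature_extractor=SiglipImageProcessor.from_pretrained(encoder_id),
    image_encoder=SiglipVisionModel.from_pretrained(
        encoder_id, torch_dtype=torch.float16),
).to("cuda")
pipe.load_ip_adapter("InstantX/SD3.5-Large-IP-Adapter")
pipe.set_ip_adapter_scale(0.6)
```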
(Not IP-Adapter models themselves, but listed here because they are used alongside them)
CLIP vision models
OpenCLIP ViT-H/14: https://huggingface.co/h94/IP-Adapter/tree/main/models/image_encoder
OpenCLIP ViT-bigG/14: https://huggingface.co/h94/IP-Adapter/tree/main/sdxl_models/image_encoder
Use this one only when using ip-adapter_sdxl
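The pairing rule as a sketch: both encoders load as transformers CLIPVisionModelWithProjection from the h94/IP-Adapter repo, and only ip-adapter_sdxl takes the bigG one.

```python
# Sketch: the two image encoders bundled in the h94/IP-Adapter repo.
from transformers import CLIPVisionModelWithProjection

# ViT-H/14: for all SD 1.5 checkpoints and the *_vit-h SDXL checkpoints
vit_h = CLIPVisionModelWithProjection.from_pretrained(
    "h94/IP-Adapter", subfolder="models/image_encoder")

# ViT-bigG/14: only for the original ip-adapter_sdxl checkpoint
vit_bigg = CLIPVisionModelWithProjection.from_pretrained(
    "h94/IP-Adapter", subfolder="sdxl_models/image_encoder")
```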