Adding multiple LoRA safetensors to my HuggingFace model in Python


Suppose I load a fine-tuned model with this script (example taken from https://towardsdatascience.com/hugging-face-diffusers-can-correctly-load-lora-now-a332501342a3):

import torch
from diffusers import StableDiffusionPipeline

text2img_pipe = StableDiffusionPipeline.from_pretrained(
    "stablediffusionapi/deliberate-v2",
    torch_dtype=torch.float16,
    safety_checker=None,
).to("cuda:0")

lora_path = "<path/to/lora.safetensors>"  # a single safetensors file, not a folder
text2img_pipe.load_lora_weights(lora_path)

This adds a single safetensors file. How can I load multiple safetensors? I tried the use_safetensors argument when instantiating StableDiffusionPipeline, but it is unclear where the safetensors folder should go. I got an error like this:

OSError: Could not find the necessary safetensors weights in {'vae/diffusion_pytorch_model.safetensors', 'text_encoder/pytorch_model.bin', 'safety_checker/model.safetensors', 'vae/diffusion_pytorch_model.bin', 'text_encoder/model.safetensors', 'unet/diffusion_pytorch_model.bin', 'safety_checker/pytorch_model.bin', 'unet/diffusion_pytorch_model.safetensors'} (variant=None)

I also tried loading the weights one after another, but it turned out that the previously loaded weights were not kept.

2 Answers

0 votes

At the moment only one LoRA can be used. Diffusers plans to add a feature for loading multiple LoRAs; it is currently marked as WIP: https://github.com/huggingface/diffusers/issues/2613


0 votes

You can in fact load multiple LoRAs, and you always could. Previously we needed Kohya's LoRA scripts, but now we can load them like this. Below is the method I use to load and unload LoRAs; I just pass in the current LoRAs I want to use.

def load_loras(pipe, settings):
    active_adapters = pipe.get_active_adapters()
    # hash_dict is my helper that fingerprints the LoRA settings (sketch below).
    set_adapters_hash = hash_dict(settings["lora"])
    set_loras = []
    set_weights = []
    if len(settings["lora"]) > 0:
        pipe.enable_lora()
        print("Checking if LoRA settings have changed...")
        print(f"Stored: {getattr(pipe, 'set_adapters_hash', None)}")
        print(f"Current: {set_adapters_hash}")

        # I make and compare a hash to check if the loras changed, as I
        # leave my pipe in memory.
        if getattr(pipe, 'set_adapters_hash', None) == set_adapters_hash:
            print("LoRA settings have not changed")
            return 'LoRA settings have not changed'

        # Unfuse any previously fused adapters before changing them.
        pipe.unfuse_lora()

        for lora in settings["lora"]:
            file_name = lora["file_name"] or lora["name"]
            adapter_name = file_name.replace(".", "")
            # Adapters are registered under adapter_name, so compare against that.
            if adapter_name not in active_adapters:
                print(f"Loading LoRA: {file_name}")
                try:
                    pipe.load_lora_weights(
                        f"./assets/lora/{file_name}.safetensors",
                        weight_name=f"{file_name}.safetensors",
                        adapter_name=adapter_name,
                    )
                except Exception:
                    print("Probably loaded already")

                set_loras.append(adapter_name)
                set_weights.append(lora["weight"])
            else:
                print(f"LoRA: {file_name} already loaded")
                set_loras.append(adapter_name)
                set_weights.append(lora["weight"])

        pipe.unfuse_lora()
        pipe.set_adapters(set_loras, set_weights)
        pipe.set_adapters_hash = set_adapters_hash
        pipe.fuse_lora()
    else:
        pipe.disable_lora()
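The snippet above assumes a hash_dict helper, which the answer does not define. Here is a minimal sketch of one, plus a hypothetical call (file names and weights are placeholders):

import hashlib
import json

def hash_dict(d):
    # Stable fingerprint of the LoRA settings, so changes can be detected
    # between calls while the pipe stays in memory.
    return hashlib.sha256(json.dumps(d, sort_keys=True).encode()).hexdigest()

settings = {
    "lora": [
        {"file_name": "lora1", "name": "lora1", "weight": 1.0},
        {"file_name": "lora2", "name": "lora2", "weight": 0.5},
    ]
}
load_loras(text2img_pipe, settings)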

But below I'll give another example without all the looping and hashing I do above. Hopefully it is easier to understand.

def load_two_loras(pipe):
    # Hardcoded Loras information
    lora1 = {"file_name": "lora1", "weight": 1.0}
    lora2 = {"file_name": "lora2", "weight": 0.5}

    # Enable Lora if it was disabled
    pipe.enable_lora()

    # Unfuse previous settings if any
    pipe.unfuse_lora()

    # Load Lora 1
    try:
        pipe.load_lora_weights(
            f"./assets/lora/{lora1['file_name']}.safetensors",
            weight_name=f"{lora1['file_name']}.safetensors",
            adapter_name=lora1['file_name'].replace(".", "")
        )
    except Exception:
        print(f"Lora: {lora1['file_name']} probably loaded already")

    # Load Lora 2
    try:
        pipe.load_lora_weights(
            f"./assets/lora/{lora2['file_name']}.safetensors",
            weight_name=f"{lora2['file_name']}.safetensors",
            adapter_name=lora2['file_name'].replace(".", "")
        )
    except Exception:
        print(f"Lora: {lora2['file_name']} probably loaded already")

    # Set and fuse the loaded Loras
    set_loras = [lora1['file_name'], lora2['file_name']]
    set_weights = [lora1['weight'], lora2['weight']]
    pipe.set_adapters(set_loras, set_weights)
    pipe.fuse_lora()
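A possible way to call it once the pipeline from the question is loaded (the prompt and output path are placeholders):

# Hypothetical usage: after the adapters are set and fused, generate as usual.
load_two_loras(text2img_pipe)
image = text2img_pipe(
    "a photo of an astronaut riding a horse",
    num_inference_steps=30,
).images[0]
image.save("multi_lora_result.png")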