[Bug]: TypeError: expected Tensor as element 0 in argument 0, but got tuple #12523

@ClipSkipper

Description

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

When using any SDXL lora on the current dev branch, I get this error:

TypeError: expected Tensor as element 0 in argument 0, but got tuple

Images generate normally with SDXL checkpoints; however, every lora produces this error.

Steps to reproduce the problem

  1. Go to ....
  2. Press ....
  3. ...

What should have happened?

Image generation should proceed as normal, with the lora loaded and its weights applied.

Version or Commit where the problem happens

version: 1.5.1

What Python version are you running on?

Python 3.10.x

What platforms do you use to access the UI?

Windows

What device are you running WebUI on?

Nvidia GPUs (RTX 20 and above)

Cross attention optimization

xformers

What browsers do you use to access the UI?

Mozilla Firefox

Command Line Arguments

--no-half-vae --xformers

List of extensions

none

Console logs

*** Error completing request
*** Arguments: ('task(zeliaaqznbf3mf6)', ' <lora:niji3D_test_v2:1>,Female,woman Cozy Knit Sweater in Oversized Fit, Fleece-lined Jogger Pants in Heather Gray, Chunky Knit Scarf in Neutral Tone, Slip-on Sneakers in White,Twisted Side Ponytail hairstyle (English Hollyhock,Rainy Season color background:1.3),   <lora:niji3D_test_v2:1>', 'deformed,large breasts,missing limbs,amputated,pants,shorts,cat ears,bad anatomy, naked, no clothes,disfigured, poorly drawn face, mutation, mutated,ugly, disgusting, blurry, watermark, watermarked, over saturated, obese, doubled face,b&w, black and white, sepia, nude, frekles, no masks,duplicate image, blur, paintings, sketches, (worst quality:2), (low quality:2), (normal quality:2), low resolution, normal quality, monochrome, grayscale, bad anatomy,(fat:1.2),facing away, looking away,tilted head,lowres,bad anatomy,bad hands, text, error, missing fingers,extra digit, fewer digits, cropped, worst quality, low quality, normal quality,jpeg artifacts,signature, watermark, username,blurry,bad feet,cropped,worst quality,low quality,normal quality,jpeg artifacts,signature,watermark,', [], 40, 'DPM++ SDE Karras', 1, 1, 7.5, 1024, 1024, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x0000027C86ED2AD0>, 0, False, '', 0.8, -1, -1, 0, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "C:\SDXL V1.1\stable-diffusion-webui\modules\call_queue.py", line 58, in f
        res = list(func(*args, **kwargs))
      File "C:\SDXL V1.1\stable-diffusion-webui\modules\call_queue.py", line 37, in f
        res = func(*args, **kwargs)
      File "C:\SDXL V1.1\stable-diffusion-webui\modules\txt2img.py", line 55, in txt2img
        processed = processing.process_images(p)
      File "C:\SDXL V1.1\stable-diffusion-webui\modules\processing.py", line 681, in process_images
        res = process_images_inner(p)
      File "C:\SDXL V1.1\stable-diffusion-webui\modules\processing.py", line 805, in process_images_inner
        p.setup_conds()
      File "C:\SDXL V1.1\stable-diffusion-webui\modules\processing.py", line 1258, in setup_conds
        super().setup_conds()
      File "C:\SDXL V1.1\stable-diffusion-webui\modules\processing.py", line 415, in setup_conds
        self.uc = self.get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, total_steps, [self.cached_uc], self.extra_network_data)
      File "C:\SDXL V1.1\stable-diffusion-webui\modules\processing.py", line 403, in get_conds_with_caching
        cache[1] = function(shared.sd_model, required_prompts, steps)
      File "C:\SDXL V1.1\stable-diffusion-webui\modules\prompt_parser.py", line 168, in get_learned_conditioning
        conds = model.get_learned_conditioning(texts)
      File "C:\SDXL V1.1\stable-diffusion-webui\modules\sd_models_xl.py", line 31, in get_learned_conditioning
        c = self.conditioner(sdxl_conds, force_zero_embeddings=['txt'] if force_zero_negative_prompt else [])
      File "C:\SDXL V1.1\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\SDXL V1.1\stable-diffusion-webui\repositories\generative-models\sgm\modules\encoders\modules.py", line 141, in forward
        emb_out = embedder(batch[embedder.input_key])
      File "C:\SDXL V1.1\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\SDXL V1.1\stable-diffusion-webui\modules\sd_hijack_clip.py", line 234, in forward
        z = self.process_tokens(tokens, multipliers)
      File "C:\SDXL V1.1\stable-diffusion-webui\modules\sd_hijack_clip.py", line 273, in process_tokens
        z = self.encode_with_transformers(tokens)
      File "C:\SDXL V1.1\stable-diffusion-webui\modules\sd_hijack_open_clip.py", line 57, in encode_with_transformers
        d = self.wrapped.encode_with_transformer(tokens)
      File "C:\SDXL V1.1\stable-diffusion-webui\repositories\generative-models\sgm\modules\encoders\modules.py", line 470, in encode_with_transformer
        x = self.text_transformer_forward(x, attn_mask=self.model.attn_mask)
      File "C:\SDXL V1.1\stable-diffusion-webui\repositories\generative-models\sgm\modules\encoders\modules.py", line 502, in text_transformer_forward
        x = r(x, attn_mask=attn_mask)
      File "C:\SDXL V1.1\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\SDXL V1.1\stable-diffusion-webui\venv\lib\site-packages\open_clip\transformer.py", line 242, in forward
        x = q_x + self.ls_1(self.attention(q_x=self.ln_1(q_x), k_x=k_x, v_x=v_x, attn_mask=attn_mask))
      File "C:\SDXL V1.1\stable-diffusion-webui\venv\lib\site-packages\open_clip\transformer.py", line 228, in attention
        return self.attn(
      File "C:\SDXL V1.1\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\SDXL V1.1\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 459, in network_MultiheadAttention_forward
        network_apply_weights(self)
      File "C:\SDXL V1.1\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 345, in network_apply_weights
        updown_qkv = torch.vstack([updown_q, updown_k, updown_v])
    TypeError: expected Tensor as element 0 in argument 0, but got tuple
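
The last two frames suggest that `network_apply_weights` stacks the per-projection LoRA deltas with `torch.vstack([updown_q, updown_k, updown_v])`, but each delta arrives as a tuple rather than a bare tensor. Below is a hypothetical, torch-free sketch of that failure mode; the names `calc_updown` and `ex_bias`, the returned pair, and the shown repair are all illustrative assumptions, not the repo's actual code or fix.

```python
def calc_updown(weight):
    """Stand-in for a LoRA module's delta computation that returns a pair
    (hypothetical: a newer code path may return (updown, ex_bias))."""
    updown = [w * 0.1 for w in weight]  # pretend delta weights
    ex_bias = None                      # pretend extra bias term
    return updown, ex_bias              # a tuple, not a bare "tensor"

def vstack(tensors):
    """Mimics torch.vstack's type check: every element must be tensor-like."""
    for i, t in enumerate(tensors):
        if isinstance(t, tuple):
            raise TypeError(f"expected Tensor as element {i} in argument 0, but got tuple")
    return [row for t in tensors for row in t]

q_weight = k_weight = v_weight = [1.0, 2.0]

# The failing pattern: the caller treats each (updown, ex_bias) pair as a tensor.
try:
    vstack([calc_updown(q_weight), calc_updown(k_weight), calc_updown(v_weight)])
except TypeError as err:
    msg = str(err)
    print(msg)  # expected Tensor as element 0 in argument 0, but got tuple

# One plausible repair: unpack each pair before stacking.
updown_q, _ = calc_updown(q_weight)
updown_k, _ = calc_updown(k_weight)
updown_v, _ = calc_updown(v_weight)
stacked = vstack([updown_q, updown_k, updown_v])
```

If this reading is right, any caller of the delta computation along the `MultiheadAttention` path would need the same unpacking treatment.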

Additional information

No response

Metadata

Assignees

No one assigned

    Labels

    bug (Report of a confirmed bug)
