Unfortunately, XL Models don't appear to be working. #4

@PizzaSlice-cmd

Description

The non-XL model works great, as does its fp8 version, but with the XL models I get this error:

!!! Exception during processing !!! Error(s) in loading state_dict for HunyuanVideoFoley:
	size mismatch for audio_embedder.proj.weight: copying a param with shape torch.Size([1408, 128, 1]) from checkpoint, the shape in current model is torch.Size([1536, 128, 1]).
	size mismatch for audio_embedder.proj.bias: copying a param with shape torch.Size([1408]) from checkpoint, the shape in current model is torch.Size([1536]).
	size mismatch for visual_proj.w1.weight: copying a param with shape torch.Size([1408, 768]) from checkpoint, the shape in current model is torch.Size([1536, 768]).
	size mismatch for visual_proj.w2.weight: copying a param with shape torch.Size([1408, 1408]) from checkpoint, the shape in current model is torch.Size([1536, 1536]).
	size mismatch for visual_proj.w3.weight: copying a param with shape torch.Size([1408, 768]) from checkpoint, the shape in current model is torch.Size([1536, 768]).
	size mismatch for cond_in.linear_1.weight: copying a param with shape torch.Size([1408, 768]) from checkpoint, the shape in current model is torch.Size([1536, 768]).
	size mismatch for cond_in.linear_1.bias: copying a param with shape torch.Size([1408]) from checkpoint, the shape in current model is torch.Size([1536]).

It's a long block of text, so I can't fit it all here.
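For what it's worth, every mismatch in the trace is the same pattern: the checkpoint's tensors are 1408 wide while the model being built expects 1536, which suggests the loader is instantiating one config's hidden size against the other config's weights. A minimal sketch (pure Python, hypothetical helper name, shapes copied from the error above) of how to diff the two sets of shapes:

```python
def diff_shapes(checkpoint_shapes, model_shapes):
    """Return the keys whose shapes differ between checkpoint and model."""
    mismatches = {}
    for key, ckpt_shape in checkpoint_shapes.items():
        model_shape = model_shapes.get(key)
        if model_shape is not None and model_shape != ckpt_shape:
            mismatches[key] = (ckpt_shape, model_shape)
    return mismatches

# Shapes taken from the first line of the error: the checkpoint param is
# 1408-wide, but the model was constructed with a 1536-wide hidden dim.
checkpoint = {"audio_embedder.proj.weight": (1408, 128, 1)}
model = {"audio_embedder.proj.weight": (1536, 128, 1)}
print(diff_shapes(checkpoint, model))
# {'audio_embedder.proj.weight': ((1408, 128, 1), (1536, 128, 1))}
```

Running something like this over the full state_dict (e.g. the shapes reported by `torch.load` versus the freshly built model's `state_dict()`) would confirm whether every mismatched key follows the same 1408-vs-1536 pattern.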

Total VRAM 12281 MB, total RAM 65448 MB
pytorch version: 2.10.0+cu130
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
PyTorch 2.10.0+cu128 with CUDA 1208 (you have 2.10.0+cu130)
Python 3.10.11 (you have 3.10.11)
Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
Memory-efficient attention, SwiGLU, sparse and more won't be available.
Set XFORMERS_MORE_DETAILS=1 for more details
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4070 SUPER : cudaMallocAsync
Using async weight offloading with 2 streams
Enabled pinned memory 29451.0
working around nvidia conv3d memory bug.
Using pytorch attention
Python version: 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
ComfyUI version: 0.12.0
ComfyUI frontend version: 1.37.11
