ValueError: not enough values to unpack (expected 2, got 1) #3

Description

@syf0215

Hi, I ran into this problem during both inference and training:

```
File "/PromptCC/models_CC.py", line 264, in forward
    clip_emb_A, img_feat_A = self.clip_model.encode_image(img_A)
ValueError: not enough values to unpack (expected 2, got 1)
```

It looks like the cause is that the original CLIP's `encode_image` returns a single whole-image feature, whereas `img_feat_A` here appears to be a patch-level feature of shape (N, h*w, 512).

Could you share how you modified CLIP so that it outputs patch-level image features?

Thanks!
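For reference, in the stock OpenAI CLIP `model.py`, `VisionTransformer.forward` projects only the class token (`x[:, 0, :] @ self.proj`) and returns one tensor, which is why unpacking into two values fails here. A common workaround is to also project the remaining tokens (`x[:, 1:, :]`) and return both. The sketch below illustrates that split; it uses NumPy in place of torch and random data to stay self-contained, and `split_vit_output` is a hypothetical helper, not the actual PromptCC modification:

```python
import numpy as np

def split_vit_output(x, proj):
    """Split a ViT token sequence into a global (class) embedding and
    patch-level features, mirroring the two outputs PromptCC expects.

    x:    (N, 1 + h*w, width) -- transformer output, class token first
    proj: (width, embed_dim)  -- CLIP's output projection matrix
    """
    cls_emb = x[:, 0, :] @ proj       # (N, embed_dim): standard CLIP image embedding
    patch_feats = x[:, 1:, :] @ proj  # (N, h*w, embed_dim): patch-level features
    return cls_emb, patch_feats

# Toy shapes matching ViT-B/32 on a 224x224 input: 7*7 = 49 patches, width 768 -> 512
x = np.random.randn(2, 1 + 49, 768)
proj = np.random.randn(768, 512)
clip_emb, img_feat = split_vit_output(x, proj)
print(clip_emb.shape, img_feat.shape)  # (2, 512) (2, 49, 512)
```

In the real model the same split would be done inside `VisionTransformer.forward` (applying `ln_post` and `self.proj` to all tokens rather than only the first), so `encode_image` returns the `(clip_emb, img_feat)` pair that line 264 unpacks.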
