
[BUG] PT: in customized OPs, ensure input tensors have contiguous memory #3910

@njzjz

Description


Our kernels assume that the memory of input tensors is contiguous. However, torch's autograd may return tensors with non-contiguous memory (for example, gradients flowing through a transpose). We need to call the `contiguous` method on such tensors to ensure their memory layout is contiguous before passing them to the kernels.
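A minimal sketch of the proposed fix, using a toy `torch.autograd.Function` (the op name and kernel are hypothetical, not from this project's code): calling `.contiguous()` on the forward input and on the incoming gradient is cheap when the tensor is already contiguous (it returns the same tensor) and copies otherwise, so a kernel that assumes dense row-major memory stays safe.

```python
import torch


class Square(torch.autograd.Function):
    """Toy custom op whose (hypothetical) kernel assumes contiguous input."""

    @staticmethod
    def forward(ctx, x):
        # Ensure the kernel sees contiguous memory; .contiguous() is a
        # no-op (returns the same tensor) when x is already contiguous.
        x = x.contiguous()
        ctx.save_for_backward(x)
        return x * x

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Autograd may hand back a non-contiguous gradient (e.g. coming
        # through a transpose upstream), so make it contiguous as well
        # before any kernel call.
        return 2 * x * grad_out.contiguous()


# A transposed view is non-contiguous; the op output is contiguous anyway.
a = torch.arange(6.0).reshape(2, 3).t()
print(a.is_contiguous())               # False
print(Square.apply(a).is_contiguous())  # True
```

The same pattern applies in the backward of the real customized OPs: guard every tensor handed to a C++/CUDA kernel with `.contiguous()` rather than assuming autograd preserves the layout.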
