What happened?
Training with batch_size (num_samples) > 1 is currently broken.
What are the steps to reproduce the bug?
Train with num_samples > 1; the loss does not converge. A hedged repro sketch is included below.
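For reference, here is a minimal, self-contained sketch of the kind of convergence check used to observe the issue: run the same toy training loop with num_samples = 1 and num_samples > 1 and compare the final loss. The toy model, the `train` helper, and its parameters are illustrative assumptions, not the project's actual training code or API.

```python
import numpy as np

def train(num_samples: int, steps: int = 2000, lr: float = 0.1, seed: int = 0) -> float:
    """Toy linear-regression loop; returns the final batch loss."""
    rng = np.random.default_rng(seed)
    true_w = np.array([2.0, -3.0])
    w = np.zeros(2)
    for _ in range(steps):
        x = rng.normal(size=(num_samples, 2))           # batch of num_samples inputs
        y = x @ true_w + 0.01 * rng.normal(size=num_samples)
        pred = x @ w
        grad = x.T @ (pred - y) / num_samples           # mean-squared-error gradient
        w -= lr * grad
    return float(np.mean((x @ w - y) ** 2))             # final batch loss

# Compare convergence for a single sample vs. a larger batch.
print("final loss, num_samples=1:", train(num_samples=1))
print("final loss, num_samples=4:", train(num_samples=4))
```

In the actual failing setup, the loss with num_samples > 1 stays high instead of decreasing as in the num_samples = 1 run.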
Hedgedoc link to logs and more information. This ticket is public; do not attach files directly.
No response