Generating images using Variational autoencoders
Updated Apr 6, 2020 · Jupyter Notebook
This PyTorch notebook implements a Variational Autoencoder (VAE) for image generation. It features an encoder that maps each input image to the parameters of a latent Gaussian distribution, the reparameterization trick for sampling latent vectors in a differentiable way, and a decoder that reconstructs images from those vectors. New images are generated by sampling the latent space and passing the samples through the decoder.
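The components described above can be sketched as a minimal fully-connected VAE in PyTorch. This is an illustrative outline, not the notebook's exact code: the layer sizes, class name `VAE`, and helper `vae_loss` are assumptions chosen for a 28x28 grayscale dataset such as MNIST.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Minimal fully-connected VAE (illustrative sketch, sizes assumed)."""
    def __init__(self, input_dim=784, hidden_dim=400, latent_dim=20):
        super().__init__()
        # Encoder: maps a flattened image to the parameters of q(z|x)
        self.fc1 = nn.Linear(input_dim, hidden_dim)
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)
        # Decoder: maps a latent sample z back to pixel space
        self.fc2 = nn.Linear(latent_dim, hidden_dim)
        self.fc3 = nn.Linear(hidden_dim, input_dim)

    def encode(self, x):
        h = F.relu(self.fc1(x))
        return self.fc_mu(h), self.fc_logvar(h)

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps keeps sampling differentiable w.r.t. mu, logvar
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + eps * std

    def decode(self, z):
        h = F.relu(self.fc2(z))
        return torch.sigmoid(self.fc3(h))  # pixel values in (0, 1)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def vae_loss(recon_x, x, mu, logvar):
    # Reconstruction term plus KL divergence between q(z|x) and N(0, I)
    bce = F.binary_cross_entropy(recon_x, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld

# Generation: sample the prior N(0, I) and decode the samples into images.
model = VAE()
with torch.no_grad():
    z = torch.randn(16, 20)       # 16 latent samples
    samples = model.decode(z)     # shape: (16, 784), one row per image
```

Training would minimize `vae_loss` over batches of real images; once trained, the final sampling snippet alone suffices to generate new images.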