Conversation
We already have our own conv2d implementation; it's not in the nn module but in the FixedPrecisionTensor class, I think. It was implemented before the RNN, so it's not in the right place (I know we should have moved it earlier).
Hi @Jasopaum - upon further review of that class, it does indeed look like a convolutional layer exists there. However, I hadn't realized the class contains an enormous amount of functionality unrelated to FixedPrecision representations. I suspect this was done for coding convenience and is possibly justified for performance reasons. If so, it comes at the expense of code reuse and good organization, and is in urgent need of refactoring. For now, I will not reference it and have created a GitHub issue describing the extensive refactoring required (#2897). Once the refactoring is complete, we can adjust the classes created in this PR to reference wherever the new functionality lives.
I do agree that a refactor would do some good! 👍 |
karlhigley
left a comment
Looks good, maybe a few things to clean up before merging though.
For the same reasons we have Python implementations of LSTM and GRU, I have created basic implementations of Conv2d and AvgPool2d.
They do not include all functionality, but they do include assertions that raise errors for settings which are not supported.
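For context, a minimal sketch of what such Python implementations might look like is below. This is an illustrative assumption, not the actual code in this PR: the function names, signatures, and the particular set of rejected options are hypothetical. It follows the pattern described above, implementing only a subset of the functionality and using assertions to fail loudly on unsupported settings.

```python
import numpy as np

def conv2d(x, weight, stride=1, padding=0, dilation=1, groups=1):
    """Naive 2D convolution over a batch of NCHW inputs (illustrative sketch).

    Only stride and padding are supported; unsupported options are rejected
    up front with assertions, mirroring the approach described in the PR.
    """
    assert dilation == 1, "dilation != 1 is not supported"
    assert groups == 1, "groups != 1 is not supported"
    n, c_in, h, w = x.shape
    c_out, c_in_w, kh, kw = weight.shape
    assert c_in == c_in_w, "input channels must match weight channels"
    if padding:
        x = np.pad(x, ((0, 0), (0, 0), (padding, padding), (padding, padding)))
        h, w = h + 2 * padding, w + 2 * padding
    out_h = (h - kh) // stride + 1
    out_w = (w - kw) // stride + 1
    out = np.zeros((n, c_out, out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # Extract the (n, c_in, kh, kw) patch and contract it against
            # the (c_out, c_in, kh, kw) weights over channels and kernel dims.
            patch = x[:, :, i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[:, :, i, j] = np.tensordot(patch, weight, axes=([1, 2, 3], [1, 2, 3]))
    return out

def avg_pool2d(x, kernel_size, stride=None):
    """Naive 2D average pooling over NCHW inputs (illustrative sketch)."""
    stride = stride or kernel_size
    n, c, h, w = x.shape
    out_h = (h - kernel_size) // stride + 1
    out_w = (w - kernel_size) // stride + 1
    out = np.zeros((n, c, out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            window = x[:, :, i * stride:i * stride + kernel_size,
                            j * stride:j * stride + kernel_size]
            out[:, :, i, j] = window.mean(axis=(2, 3))
    return out
```

The assert-early style keeps the supported surface explicit: a caller passing, say, `dilation=2` gets an immediate, descriptive error rather than silently wrong results from an unimplemented code path.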