Description
Is your feature request related to a problem? Please describe.
Some optimizers (such as Adam) do not work with Federated Learning by default because they maintain state derived from recent gradients in a cache. For example, take this demo (https://github.com/OpenMined/PySyft/blob/master/examples/tutorials/Part%2002%20-%20Intro%20to%20Federated%20Learning.ipynb) and replace the optimizer with Adam and you'll hit a bug.
The solution is to maintain a separate optimizer for each worker in the FL process and use each one at the appropriate time.
Describe the solution you'd like
Obviously this would be really annoying to have to write by hand every time... so please create an FLOptimizer which will - under the hood - maintain a list of optimizer objects, one for each worker - and use them in the appropriate context.
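A minimal sketch of what such a wrapper could look like (the name `FLOptimizer`, the `for_worker` method, and the worker ids are illustrative assumptions, not PySyft's actual API). The key idea is to lazily build and cache one optimizer instance per worker, so stateful optimizers like Adam never mix moment estimates across workers:

```python
class FLOptimizer:
    """Hypothetical sketch: one optimizer instance per federated worker."""

    def __init__(self, optimizer_factory):
        # optimizer_factory: zero-argument callable that builds a fresh
        # optimizer (e.g. lambda: torch.optim.Adam(model.parameters()))
        self._factory = optimizer_factory
        self._per_worker = {}

    def for_worker(self, worker_id):
        # Lazily create (and cache) a dedicated optimizer for this worker,
        # so its internal state (e.g. Adam's moment buffers) stays isolated.
        if worker_id not in self._per_worker:
            self._per_worker[worker_id] = self._factory()
        return self._per_worker[worker_id]
```

In a real training loop, the caller would fetch `fl_opt.for_worker(batch.location)` before each `step()`, so each worker's Adam buffers only ever see that worker's gradients.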