Releases · OpenMined/PySyft
v0.2.4
Changes since 0.2.3
New functionality:
- Added the option to compute `tanh` using sigmoid or Chebyshev approximation (#3113 by @gmuraru)
- Added model parameters as a default optimizer parameter in `FederatedClient` (#3117 by @gconst02)
- Added an `Optims` class to track remote optimizers in federated learning (#3179 by @rimijoker)
- Added the ability to use a `Plan` input as an output (#3199 by @tudorcebere)
- Added the ability to send and get `Datasets` (#2960 by @abogaziah)
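The Chebyshev route mentioned in the first item can be sketched independently of PySyft: fit a low-degree Chebyshev series to tanh on a bounded interval, yielding a polynomial that can be evaluated with additions and multiplications alone. A minimal NumPy sketch, where the interval, degree, and variable names are illustrative choices, not PySyft's actual parameters:

```python
import numpy as np
from numpy.polynomial import Chebyshev

# Fit a degree-9 Chebyshev series to tanh on [-4, 4].  The interval and
# degree are illustrative, not the ones PySyft uses.
xs = np.linspace(-4.0, 4.0, 1000)
cheb = Chebyshev.fit(xs, np.tanh(xs), deg=9)

# The fitted polynomial needs only additions and multiplications to
# evaluate, which is what makes this style of approximation attractive
# for secure multi-party computation backends.
max_err = float(np.max(np.abs(cheb(xs) - np.tanh(xs))))
```

Within the fitted interval the polynomial tracks tanh closely; outside it the approximation degrades quickly, which is why such approximations are paired with a known input range.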
Refactoring:
- Extensive improvements to `Plan`, `Placeholder`, and the Torch hook (#3080 by @LaRiffle)
- Split the `Operation` class into separate message and action classes (#3132 and #3168 by @Jasopaum and @karlhigley)
- Added a new message for executing non-tensor operations on remote workers (#3048 by @midokura-silvia)
- Reworked `BaseWorker`'s message router to use message classes in handler functions (#3175 by @Jasopaum and @karlhigley)
- Standardized the names of command messages to `TensorCommandMessage`, `PlanCommandMessage`, and `WorkerCommandMessage` (#3198 by @karlhigley)
- Moved the `Placeholder` class to the `execution` package (#3211 by @karlhigley)
- Converted existing formatted strings to f-strings for consistency and performance (#3186 by @TTitcombe)
- Replaced the zstd compression library with zlib (#3150 and #3164 by @refactormyself)
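The compression swap in the last item trades zstd for a module that ships with the Python standard library, removing a native dependency. A minimal sketch of the compress-before-send pattern using only the stdlib `zlib` module; the helper names are illustrative, not PySyft's API:

```python
import zlib

# Hypothetical helpers illustrating the zstd -> zlib swap: zlib ships with
# the Python standard library, so no extra native dependency is needed.
def compress(data: bytes) -> bytes:
    return zlib.compress(data, 6)  # level 6 is zlib's usual speed/ratio middle ground

def decompress(blob: bytes) -> bytes:
    return zlib.decompress(blob)

# Serialized tensors tend to contain repetitive byte patterns, which
# deflate compresses well.
payload = b"serialized tensor bytes " * 100
blob = compress(payload)
```

The round trip is lossless, and for repetitive payloads like the one above the compressed blob is substantially smaller than the input.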
Bug fixes:
- Fixed `VirtualWorker` message sending test (#3139 by @gmuraru)
- Fixed event loop error when starting a `WebsocketServerWorker` in a notebook (#3196 by @imskr and #3204 by @gmuraru)
- Stopped creating grad tensors for frozen layers (#3200 by @tudorcebere)
Documentation:
- Added homomorphic encryption to the Introduction section (#3135 by @LaRiffle)
- Updated supported PyTorch version to 1.4 (#3140 by @codeboy5)
- Added a note about supported Python versions (#3154 by @J-Yash)
- Added a note about using Python 3.7 with conda (#3162 by @teddykoker)
- Added docstrings to callable pointer tests (#3130 by @steph-en-m)
- Added instructions for installing TF Encrypted (#3197 by @rimijoker)
Tutorials:
- Fixed FL training plan tutorials and added a PyGrid FL hosting example (#3185 by @vvmnnnkv)
- Cleaned up the tutorials directory structure (#3159 by @bicycleman15)
- Removed a stray cell from Part 5 - Welcome to the Sandbox (#3155 by @bicycleman15)
- Removed unnecessary `requires_grad` in Part 2 (#3216 by @bicycleman15)
Translations:
- Bengali:
- Part 3 (#3133 by @adventuroussrv), Parts 4 and 5 (#3121 by @jabertuhin)
- German:
- Part 1 (#3178 by @vineetjai)
Builds:
- Set up code owners to automatically request reviews from the relevant OpenMined teams for various parts of the PySyft codebase (#3192 and #3215 by @karlhigley, @LaRiffle, and @AlanAboudib)
v0.2.3
Changes since 0.2.2
New functionality:
- Migrates PySyft to PyTorch 1.4 (#2930 by @gmuraru)
- Implements tanh for FixedPrecisionTensors using Chebyshev approximation (#3004 by @gmuraru)
- Adds the ability to simulate latency with VirtualWorkers (#3070 by @jefersonf)
- Adds Protobuf serialization for Placeholders, Plans, and States (#2972 by @karlhigley)
Refactoring:
- Reworks Plans for smoother serialization to multiple formats (#2910 by @LaRiffle and @vvmnnnkv)
- Moves Plans, Protocols, and States from the `messaging` package to the `execution` package (#3078 by @karlhigley)
- Renames the Operation class to OperationMessage (#3090 by @karlhigley)
Bug fixes:
- Fixes retrieval of the fit() result in WebsocketClientWorker (#2948 by @brandonhee)
- Fixes numeric issues in handcrafted Conv and Pool implementations (#2945 and #2964 by @arshjot)
- Removes an insecure eval in native tensor interpreter (#2951 by @karlhigley)
- Fixes PyTorch JIT tracing compatibility for Plans (#2988 by @karlhigley)
- Removes workarounds for previous versions of PyTorch (#2999 by @gmuraru)
- Pins requests dependency specification to version 2.22.0 (#2970 by @ADMoreau)
- Fixes interoperability of AutogradTensors with other tensors vis-à-vis `requires_grad` (#2998 by @gmuraru)
- Improves logging, typing, and documentation of the PATE implementation (#3033 by @TTitcombe)
- Fixes a potential security issue with unsafe YAML loading (#3037 by @systemshift)
- Raises an error when attempting to additively share FloatTensors (#3094 by @pierrepocreau)
- Improves testing for Syft's RNN implementation (#3092 by @jimboH)
- Changes dependency specifications to require compatible versions (#3119 by @karlhigley)
- Fixes compatibility with msgpack 1.0 serialization library (#3067 by @IonesioJunior and #3073 by @hdodenhof)
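The "compatible versions" item above refers to PEP 440 compatible-release specifiers such as `torch~=1.4.0`, which allow patch updates while pinning the rest of the version. A toy illustration of the `~=` semantics; this is not a real resolver and assumes plain numeric `X.Y.Z` versions of equal length:

```python
# Toy illustration of PEP 440's "compatible release" operator (~=).
# Not a real resolver: assumes plain numeric versions of equal length.
def satisfies_compatible(version: str, spec: str) -> bool:
    """True if `version` satisfies `~=spec`: at least `spec`, with only the
    last listed component free to increase (e.g. ~=1.4.0 allows 1.4.*)."""
    base = [int(p) for p in spec.split(".")]
    ver = [int(p) for p in version.split(".")]
    # Every component except the last must match exactly.
    if len(ver) != len(base) or ver[:-1] != base[:-1]:
        return False
    return ver[-1] >= base[-1]
```

So a spec of `~=1.4.0` accepts `1.4.1` but rejects `1.5.0`, which is the behavior compatible-release dependency specifications rely on.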
Documentation:
- Adds Sphinx documentation (#3017 by @Benardi)
- Fixes notebook test badge (#3028 by @jefersonf)
- Adds a link to the Udacity Secure And Private AI course (#3016 by @AVJdataminer)
- Improves instructions for developing protocol changes with `syft-proto` (#2818 by @refactormyself)
Tutorials:
- Adds model evaluation to SplitNN tutorial (#2983 by @midokura-silvia)
- Adds a note to Part 10 FL with Secure Aggregation tutorial about hooking Numpy (#3022 by @fdroessler)
Translations:
- French:
- Part 1 (#3107 by @r0cketr1kky)
- Hindi:
- Parts 5, 6, 7, 13b, and 13c (#2909 and #3055 by @raheja)
- Part 13a (#2958 by @Yugandhartripathi)
- Portuguese:
- Part 1 (#3035 by @MarcioPorto)
- Parts 7, 8, and 8bis (#2977 by @joaolcaas)
- Parts 9, 10 and 11 (#2980 by @jefersonf)
- Parts 12 and 13a (#3015 by @marcusvlc)
- Parts 12bis and 13b (#3020 by @hericlesme)
- Part 13c (#3023 by @Izabellaaaq)
Builds:
- Moves automated testing of PRs from Travis to Github Actions (#2936, #3012, and #3013 by @karlhigley and @systemshift)
- Adds a security scan for every PR (#3036 by @systemshift)
- Runs automated translation tests only on the notebooks that changed to speed up the builds (#3064 by @arturomf94)
- Automatically updates the `pysyft-notebook` Docker image when changes are merged to `master` (#3030 by @linamnt)
- Caches dependencies in Github Actions (#3124 by @imskr)
v0.2.3.a3
Third release of v0.2.3 to incorporate Syft protocol schema updates.
v0.2.3.a2
Second release of v0.2.3 to incorporate msgpack 1.0 compatibility and Syft protocol schema updates.
v0.2.3.a1
New functionality:
- Migrates PySyft to PyTorch 1.4 (#2930 by @gmuraru)
- Reworks Plans for smoother serialization to multiple formats (#2910 by @LaRiffle and @vvmnnnkv)
Bug fixes:
- Fixes numeric issues in handcrafted Conv and Pool implementations (#2945 and #2964 by @arshjot)
- Removes an insecure `eval` in native tensor interpreter (#2951 by @karlhigley)
- Fixes parameters to `ObjectRequestMessage` in `websocket_client.py` (#2948 by @brandonhee)
Tutorial updates:
- Bengali: Parts 1 and 2 (#2938 and #2942 by @ucalyptus)
- Hindi: Part 13a (#2958 by @Yugandhartripathi)
- Spanish: Parts 6, 7, and 8 (#2941, #2944, and #2962 by @ricardopretelt and @arturomf94)
v0.2.2.a1
Includes some noteworthy new functionality:
- CUDA processing enabled in PySyft (#2735 and #2772 by @midokura-silvia)
- A PrivateTensor type (#2709 by @IonesioJunior)
- Promise and PromiseTensor types (#2610 by @Jasopaum)
- A complete test suite for msgpack serialization and standardization of the serialization format (#2762 and #2812 by @vvmnnnkv)
- An implementation of Distributed Association Scan Hammer (DASH) algorithm (#2658 by @andrelmfarias)
- `String` and `StringPointer` types added to support NLP applications (#2684 by @AlanAboudib)
- The ability to nest Plans within other Plans (#2791 by @gmuraru)
- Serializability for the `grad_fn` in AutogradTensors (#2871 by @sukhadj)
- PyGrid module moved to PySyft (#2760 by @IonesioJunior)
- A NumpyTensor type (#2913 by @iamtrask)
- Python implementations of `torch.nn.Conv2d` and `torch.nn.AvgPool2d` (#2896 by @iamtrask)
- Approximate exp, log, inverse, and sigmoid for SMPC (#2659 by @LaRiffle)
Also includes updates to the tutorials:
- A tutorial on Promises and PromiseTensors (#2786 by @Jasopaum)
- A split neural network tutorial (#2808 by @H4LL)
- Tutorial notebooks translated into Chinese, Spanish, Hindi, Indonesian, Korean, Portuguese, and Romanian (many PRs by @dljgs1, @Bingyy, @socd06, @ricardopretelt, @arturomf94, @darkmatter18, @Yugandhartripathi, @nggih, @wonderit, @jefersonf, and @gmuraru)
And finally, includes:
- Many bug fixes, which are too numerous to list but nonetheless much appreciated!