Continue the work on Promises and PromiseTensors#2610
iamtrask merged 225 commits into OpenMined:master from
Conversation
update master
* grad is None for CRT tensors
* remarks from review
* typo
* Add possibility to overwrite functions on native tensors
* Fix error in handle_func_command for AST
* Add support for torch.roll(<AST>, <MPT>)
* Rm the (spring) roll prints
* removed duplicated tests for roll
* added kwargs in native roll
* removed .get() in AST roll
* share and get for CRT tensors
* basic test share and get CRT tensors
* choice of CRT representation when using fix_prec()
* typo
* removed wrap test
* added operations with scalars
* minor changes
* Update version number
* added messages for __init__ assertions
* Add explicit support of fix_prec on pointers
* Disable gc-ing shared when simplying an AdditiveSharedTensor
* Add test on ops for remote AST
* Update tutorial 10
* assert fields are equal when sharing FPT
* more assert messages
* Add div by constant integer for autograd
* Fix autograd div with AST
* Add refresh option for AST and tests
* Split a test into 2
* choice in field size for CRT tensors
* Update README.md
* Typo Fixes in Tutorial/Part 1
* Typo Fixes in Tutorials Part 2
* Typo Fixes in Tutorial Part 4
* Add files via upload
* no more overflow for big fields and can represent neg values
* Update README.md
* Added fix_prec for Linear Object
* Python call inside start_websocket_servers.py same a the python worker used to invoke it
* minor modif in AST sub
* change sign of neg when doing FPT mul
* modif fix_prec and float_prec for CRT
* more interesting tests
* black
* Fix typos
* patch tf-encrypted version
* wrap keras constructor hooks to fix decorator signature
* Small changes to remove useless code
* Fix typo bug in torch.roll for AST
* Optimize _compress in serde and rm buggy test in test_serde
* Make create_pointer a static method
* Remove wrappers from AST shares and MPT children
  - Move functionalities from native to pointer objects
  - Make wrapper more like a real wrapper
  - Update functionalities accordingly
* Add a no_wrap option for send() and share() to skip wrapping
* Generalize use of no_wrap in additive_shared
* Generalize use of no_wrap in crypto protocols
* Small fix for AST mul / matmul
* Add the data_size attribute to the BaseWorker Class
* Add a get_packet_size static method to the WebsocketClientWorker class
* Modify get_packet_size static method's interface arg in websocket_client.py
* Modify get_packet_info (renamed) to sniff on packets transmitted
* Modify docstring of get_packet_info static method
* Add pyshark to requirements_dev.txt
* Split original get_packet_info method into get_packets and read_packet functions
* Add arguments to be passed to get_packets method to control sniffing better
* Move network traffic monitoring utility to syft.generic.metrics.py
* Add tutorial example of new metrics utility to monitor network traffic
* To expand on the drafted metrics tutorial to give examples of the NetworkMonitor class
* Edit metrics tutorial as per reviewer's suggestion
* simple support for AST torch.dot
* tests for dot
* Typo
* Syft Doc
* change port in CI test
* Revert "change port in CI test" This reverts commit 77f39d2.
* Minor Typo Fixes
* longer sleep
* longer sleep test_objects_count_remote
* Update README.md: Add instructions on how to run docker image on a Mac
* Remove the wrapper between FPT and AST
* Fix tests accordingly
* Fix circular import error
* Fix how workers are provided to nn.module.send
* Add docstring for MultiPointerTensor
* Small fix and improvements in native.py
* Update README.md
* longer sleep for last tests
* helper function to try websocket connection
* Remove redundant time.sleep() calls.
* Due to pickling error while creating a separate process for websockerserver on Windows modified the code to create the websocketserver within the current process context itself
* ran black on websockets-example-MNIST
* use 'operates' instead of 'operate'
* Modify example to store test dataset on separate worker: Models are sent and evaluated at the (remote) worker.
* Address comments in pull request.
* Modify evaluate function to return dictionary
* Move train_config check into separate function
* Make model execution determinstic
* reduced the timeout inetrval on windows due to C timeval overflowerror
* removed pre-computation of reconstruction coefficients
* black
* added field to LPT
* reduced precision frac in tests
* Few fix post review
* Add default get and mid_get in abstract.py
* reduced timeout for websocket connection to 999999 seconds
* add field to get_class_attributes
* rename to crt_precision.py and CRTPrecisionTensor
* factorize assert residues are FPT
* rename variable to base_residue
* __imul__ for FPT
* black
* fixed multiplication FPT
* notebook with more instruction and some changes in scripts: start_websocket_servers have some unnecessary lines to provide python versions and is causing errors on windows 10 but since in subprocess.Popen we provide the name of exe that will execute certain file with certain arguments. In run_websocket_client logging.debug is replaced by print because logging is not showing losses and errors. Notebook has more instructions and possible solutions for possible errors and removed some extra code
* Update Federated learning with websockets and federated averaging.ipynb
* removed some wording
* Update start_websocket_servers.py
* minor fixes: fixed formatting in the notebook, replaced print with logs and added a condition to start servers for other platforms other than windows 10
* condition over python variable instead of list
* Add v0 of encrypted training on MNIST
* Add files via upload
* Add files via upload
* Closes OpenMined#2412, remove simplification and detailing for plans that already have this done
* Reverting example
* Corrected some typos in the tutorials.
* Black
* Left padded some file names with a zero so tutorials are displayed in order
* black checked
* manual checked
* black rechecked
* black
* docstring where CRT comes from
* Adding a simplifier and a detailer for shape
* Delete Part 8 - Introduction to Plans.ipynb
* added notebook back
* black
* Initial
* replace all lists with tuples in serde
* removed print statements and fixed a couple bugs
* updated notebook
* update
* Delete Part 8 - Introduction to Plans.ipynb
* Added first pen testing challenge
* balck
* Update tutorial + add illustration
* Update illustration link
* Switch to Adam, because of custom learning_rate for Adadelta (OpenMined#2362)
* Add "Installing PySyft after Cloning Repository" to CONTRIBUTING.md (OpenMined#2403)
* Fix broken Openmined.org demo (OpenMined#2387)
* Copy and edit WebsocketClientWorker and WebsocketServerWorker notebooks from Google Colab
* Make the WebsocketServerWorker tutorial work, WebsocketClientWorker WIP
* Make changes to WebsocketClientWorker and WebsocketServerWorker notebooks so they work in Colab
* Update WebsocketServerWorker tutorial notebook for use in Colab
* Add "import logging" statement to "WebsocketServerWorker" notebook
* Change "print" statement into a "logging" one in websocket_server.py file
* Update websocket_server.py as per reviewer request
* One worker bug (OpenMined#2407)
* one iteration doesn't need to change worker
* One worker testcase
* Plaintext speed regression notebook (OpenMined#2350): Adds notebook containing torch implementations of a few linear algebra routines, and initial implementations of linear regression and DASH.
* Make the local worker aware of itself on TorchHook creation. (OpenMined#2431)
* Make the local worker aware of itself on TorchHook creation.
* Create test to ensure local worker is inside the _known_workers dict.
* Move test to test_local_worker
* bumpversion 0.1.21a1 -> 0.1.22a1 (OpenMined#2427)
* Fix private tensor disclosure (OpenMined#2434): private tensors aren't meant to be accessible from a remote client; however, execute_command was getting any object using its id. This fix gets the object using the get_obj method, which doesn't return private tensors.
* Improve Build new tensor tutorial (OpenMined#2435)
* Renamed func in hook_args for clarity (OpenMined#2408)
* 1903 : renamed functions to remove ambiguity
* 1903 : renamed functions to remove ambiguity[reformatted]
* 1903 : renamed functions after suggestions
* Clear objects for ObjectStorage with websocket connection (OpenMined#2410)
* Make clear_objects() callable on remote ObjectStorage. Note that the signature of clear_objects() changed: it no longer returns self.
* Remove comment
* Modify function clear_objects to return self by default and undo changes to test_udacity.py
* Add missing argument to remote clear_objects function
* Created an actual Message type and moved Plan out of federated (OpenMined#2436)
* moved plan out of federated folder into message folder; added a stub for Message type
* black
* changed all
* put imports on separate lines
* removed promise code
* relative -> absolute imports
* removed promise stuff
* unit test for Message serde
* imported Message
* updated import
* removed extraneous comment
* removed extraneous imports
* unified how we refer to Message
* newline
* fixed a few inconsistent imports
* name conflict
* msg -> messaging
* black
In the PR, bobby made a recommendation to make sure the right messages are sent. I actually found a bug while doing this, which got sorted as well.
…to custom_message_types
    args = list(args)
    for ia in range(len(args)):
        if not isinstance(args[ia], (torch.Tensor, AbstractTensor)):
            args[ia] = torch.tensor(args[ia])
I think you can "just store the scalar value" in the plan, it looks like it works from what I've seen
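To make the trade-off concrete, here is a minimal, self-contained sketch of the two approaches under discussion: eagerly wrapping every argument in a tensor versus storing raw scalars as-is. `FakeTensor`, `wrap_args`, and `store_raw` are hypothetical names used only for this illustration; the real code operates on `torch.Tensor` and `AbstractTensor`.

```python
class FakeTensor:
    """Stand-in for torch.Tensor in this illustration (hypothetical)."""

    def __init__(self, value):
        self.value = value


def wrap_args(args):
    """Coerce every non-tensor argument into a tensor-like object,
    mirroring the loop in the diff above."""
    args = list(args)
    for i, a in enumerate(args):
        if not isinstance(a, FakeTensor):
            args[i] = FakeTensor(a)
    return args


def store_raw(args):
    """The reviewer's alternative: keep scalar values unchanged."""
    return list(args)


wrapped = wrap_args([FakeTensor(1.0), 2, 3.5])
raw = store_raw([FakeTensor(1.0), 2, 3.5])
print(all(isinstance(a, FakeTensor) for a in wrapped))  # True
print(isinstance(raw[1], int))  # True
```

Storing the scalar directly avoids the serialization overhead of an extra tensor in the plan, at the cost of handling mixed argument types downstream.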
    Args:
        owner: An optional BaseWorker object to specify the worker on which
            the tensor is located.
        id: An optional string or integer id of the LoggingTensor.
    """
Please add a description of all arguments to the docstring
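A sketch of what a fully documented constructor could look like. The `tags` and `description` parameters here are hypothetical additions for illustration, mirroring common PySyft tensor constructors, and are not necessarily this class's exact signature.

```python
class LoggingTensor:
    def __init__(self, owner=None, id=None, tags=None, description=None):
        """Initialize the LoggingTensor.

        Args:
            owner: An optional BaseWorker object to specify the worker on
                which the tensor is located.
            id: An optional string or integer id of the LoggingTensor.
            tags: An optional set of strings used to tag the tensor for
                search purposes (hypothetical parameter).
            description: An optional string describing the tensor
                (hypothetical parameter).
        """
        self.owner = owner
        self.id = id
        self.tags = tags
        self.description = description
```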
    return overloaded_attr
    def _hook_promise_tensor(hook_self):
Why do we need to hook the promise tensor? Why can't we just define all the methods directly in the PromiseTensor class? The class resides within PySyft, so why don't we define the methods as for example DoubleTensor there? It doesn't seem to rely on the DoubleTensor being hooked before defining the method.
Methods like DoubleTensor were in the file where the class is defined before, but I was asked in some comments to move them here ^^
For the other methods, I think this file was supposed to be where this kind of method generation happens, but maybe not?
Ok, that sounds like a contradiction. I'd like to hear the opinion of @LaRiffle and @robert-wagner 😄
Historically PromiseTensor has always been a little bit of an exception because of the way it works. I'm ok with this for now.
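For readers following the discussion, the pattern being debated can be sketched as follows: a "hook" function generates methods on the class at setup time, as opposed to writing `def __add__(self, other): ...` directly in the class body. All names below are hypothetical stand-ins, not PySyft's actual implementation.

```python
class PromiseTensorSketch:
    """Illustrative stand-in for PySyft's PromiseTensor; not the real class."""

    def __init__(self, value=None):
        self.value = value


def _hook_promise_tensor_sketch(cls, method_names):
    """Generate forwarding methods on the class at 'hook' time,
    mirroring the pattern debated above (all names hypothetical)."""
    for name in method_names:
        def make_method(op):
            def method(self, other):
                # Forward the operation to the wrapped values and re-wrap.
                return cls(getattr(self.value, op)(other.value))
            return method
        setattr(cls, name, make_method(name))


_hook_promise_tensor_sketch(PromiseTensorSketch, ["__add__", "__mul__"])

a, b = PromiseTensorSketch(3), PromiseTensorSketch(4)
print((a + b).value)  # 7
print((a * b).value)  # 12
```

Defining the methods directly in the class body is simpler to read; the hook-based generation only pays off when many methods must be created uniformly, which is why hooked tensor types tend to use it.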
    hook.local_worker.is_client_worker = True
    def test_protocol_waiting_promise(hook, workers):
a protocol should be a list of VirtualWorkers, not a list of plans
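One way to read the reviewer's point is that a protocol should bind each plan to the worker that runs it, rather than holding plans alone. A minimal sketch, with hypothetical stand-ins for `VirtualWorker`, `Plan`, and the protocol class (none of these are PySyft's real API):

```python
from collections import namedtuple

# Hypothetical stand-ins for PySyft's VirtualWorker and Plan.
VirtualWorker = namedtuple("VirtualWorker", ["id"])
Plan = namedtuple("Plan", ["name"])


class ProtocolSketch:
    """A protocol as a list of (worker, plan) pairs, so every plan is
    deployed on an explicit worker rather than floating unassigned."""

    def __init__(self, pairs):
        for worker, plan in pairs:
            # Validate the pairing up front, per the review comment.
            assert isinstance(worker, VirtualWorker), "expected a VirtualWorker"
            assert isinstance(plan, Plan), "expected a Plan"
        self.pairs = list(pairs)

    def workers(self):
        """Return the workers participating in this protocol."""
        return [w for w, _ in self.pairs]


alice, bob = VirtualWorker("alice"), VirtualWorker("bob")
proto = ProtocolSketch([(alice, Plan("step1")), (bob, Plan("step2"))])
print([w.id for w in proto.workers()])  # ['alice', 'bob']
```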
Worth re-iterating that this is a mammoth task and that Jason is an absolute saint for all the work he's done on it. <3
Continue the work done here: #2516
TODO: