Add support for exp and log (with approx) on additive shared tensors#2659
Conversation
    return (1 + self / 2 ** iterations) ** (2 ** iterations)

    def sigmoid(self):
        """
Shouldn't we replace the sigmoid approximation by the mathematical definition using the approximated exp? Or do you think we lose precision by doing that?
I think we should at least compare the differences between both...
Definitely! I'll make a comparison as part of my lib to see what's best, thanks for the suggestion
You're welcome!
Maybe also do a time/performance comparison, to see what the trade-offs between the two are.
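The comparison being discussed can be sketched in plain Python: build sigmoid from its mathematical definition using the approximated exp, then measure the error against the exact sigmoid. Function names are illustrative; in the real library the inputs would be shared tensors.

```python
import math

def approx_exp(x, iterations=8):
    # Same limit approximation as in the diff: exp(x) ~= (1 + x/2**n)**(2**n).
    result = 1 + x / 2 ** iterations
    for _ in range(iterations):
        result = result * result
    return result

def sigmoid_from_exp(x, iterations=8):
    # Sigmoid from its mathematical definition, using the approximated exp.
    return 1 / (1 + approx_exp(-x, iterations))

# Compare against the exact sigmoid on a few points.
for x in (-2.0, 0.0, 2.0):
    exact = 1 / (1 + math.exp(-x))
    print(f"x={x:+.1f}  exact={exact:.5f}  approx={sigmoid_from_exp(x):.5f}")
```

With 8 iterations the error on this range stays well below 1e-2; whether that beats a direct polynomial approximation of sigmoid is exactly what the suggested comparison would answer.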
Don't you think we should also add some efficiency tests in

Yes, good idea!

Super excited about this!
iamtrask
left a comment
Excited to merge this - looks like we're waiting on
#2659 (comment)
and possibly some performance based unit tests
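A performance-based test of the approximation could be as simple as timing the approximation at different iteration counts with `timeit`; this is only a sketch of the shape such a test might take, not code from the PR:

```python
import timeit

def approx_exp(x, iterations=8):
    # Limit approximation exp(x) ~= (1 + x/2**n)**(2**n) by repeated squaring.
    result = 1 + x / 2 ** iterations
    for _ in range(iterations):
        result = result * result
    return result

# More iterations means better precision but more (costly) multiplications,
# which is the trade-off a performance test would pin down.
for n in (4, 8, 12):
    t = timeit.timeit(lambda: approx_exp(1.5, iterations=n), number=10_000)
    print(f"iterations={n:2d}: {t:.4f}s for 10k calls")
```

In the secret-shared setting each multiplication also costs communication between workers, so the iteration count matters far more there than in this local sketch.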
# Method syntax
x = torch.tensor([0.1, 0.2, 0.3]).fix_prec()

y = x + x

assert (y.child.child == torch.LongTensor([200, 400, 600])).all()
y = y.float_prec()

assert (y == torch.tensor([0.2, 0.4, 0.6])).all()
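The test above relies on fixed-precision encoding: floats are stored as integers scaled by a power of ten (here 10³, so 0.2 becomes 200), and addition happens directly on the integers. A minimal standalone sketch of that encoding (helper names are illustrative, not PySyft's):

```python
SCALE = 10 ** 3  # precision_fractional = 3

def fix_prec(values):
    # Encode floats as scaled integers.
    return [round(v * SCALE) for v in values]

def float_prec(values):
    # Decode scaled integers back to floats.
    return [v / SCALE for v in values]

x = fix_prec([0.1, 0.2, 0.3])          # [100, 200, 300]
y = [a + b for a, b in zip(x, x)]      # addition works directly on the integers
assert y == [200, 400, 600]
assert float_prec(y) == [0.2, 0.4, 0.6]
```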
Why is this tested twice? (also tested in test_add_method())
test/torch/tensors/test_precision.py
Outdated
cumsum += diff / (tolerance * norm)

cumsum /= 10
print(cumsum)

for prec_frac, tolerance in fix_prec_tolerance.items():
    cumsum = torch.zeros(5)
    for i in range(10):
Are you doing this because the approximation results vary significantly depending on the sharing?
If so, do you know when "bad" cases occur and how bad they are?
Yes
No, not really... :/
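The structure of that test, averaging a tolerance-normalized error over several runs so that run-to-run variation evens out, can be sketched as follows. In the real test each run re-shares the tensor (the source of the variation); here the runs are identical and purely illustrate the structure, and all names are illustrative:

```python
import math

def approx_exp(x, iterations=8):
    # Limit approximation exp(x) ~= (1 + x/2**n)**(2**n).
    result = 1 + x / 2 ** iterations
    for _ in range(iterations):
        result = result * result
    return result

def averaged_relative_error(x, tolerance, runs=10):
    # Accumulate the error normalized by (tolerance * norm), then average,
    # mirroring the cumsum += diff / (tolerance * norm); cumsum /= 10 pattern.
    total = 0.0
    for _ in range(runs):
        diff = abs(approx_exp(x) - math.exp(x))
        norm = abs(math.exp(x))
        total += diff / (tolerance * norm)
    return total / runs

# An averaged value below 1.0 means the approximation is within tolerance.
print(averaged_relative_error(1.0, tolerance=0.1))
```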
one = self * 0 + 1

if method == "exp":
    result = one / (1 + (self * -1).exp())
I think we might have talked about it at some point but should you use .inverse() here?
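For context on why an explicit inverse comes up: division on shared values is typically itself an approximation. A common approach (not necessarily what PySyft's `.inverse()` does) is Newton-Raphson reciprocal iteration, which only needs additions and multiplications:

```python
def approx_reciprocal(x, iterations=10, initial=0.1):
    # Newton-Raphson for 1/x: y <- y * (2 - x * y).
    # Uses only additions and multiplications, so it is additive-sharing
    # friendly. Converges when the initial guess satisfies 0 < x * initial < 2.
    y = initial
    for _ in range(iterations):
        y = y * (2 - x * y)
    return y

print(approx_reciprocal(4.0))  # close to 0.25
```

Convergence is quadratic once the iterate is near 1/x, but a poor initial guess can diverge, which is one reason an explicit `one / (...)` division and an `.inverse()` method may behave differently.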
Another review is welcome :)
This PR extends the set of operations supported in Additive Sharing using approximations.
Fixes #2537