
Add support for exp and log (with approx) on additive shared tensors#2659

Merged
LaRiffle merged 27 commits into master from encrypted_computation on Dec 27, 2019

Conversation

@LaRiffle (Contributor) commented Oct 11, 2019

This PR extends the set of operations supported in Additive Sharing, using approximations:

  • Exponential
  • Logarithm
  • Inverse
  • Sigmoid

Fix #2537
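For reference, the exponential here is computed with the limit form exp(x) ≈ (1 + x / 2**n) ** (2**n), which requires only additions and multiplications and therefore maps onto additively shared tensors. A minimal plain-PyTorch sketch (the iteration count is illustrative, not necessarily the default syft uses):

```python
import torch

def approx_exp(x: torch.Tensor, iterations: int = 8) -> torch.Tensor:
    """Limit approximation exp(x) ~= (1 + x / 2**n) ** (2**n).

    The power (2**n) can be computed by n repeated squarings, so only
    additions and multiplications are needed -- the operations that
    additive secret sharing supports natively.
    """
    return (1 + x / 2 ** iterations) ** (2 ** iterations)

x = torch.tensor([0.0, 1.0, 2.0])
print(approx_exp(x))   # close to torch.exp(x)
```

Accuracy improves with larger n, but so does the multiplicative depth, which matters under fixed-precision arithmetic.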

@LaRiffle LaRiffle changed the title from "Add exponential (with approx) on fixed precision & additive shared tensors" to "Add support for exp and log (with approx) on additive shared tensors" on Oct 11, 2019
@robert-wagner robert-wagner changed the base branch from dev to master October 26, 2019 18:16
return (1 + self / 2 ** iterations) ** (2 ** iterations)

def sigmoid(self):
"""
Contributor:

Shouldn't we replace the sigmoid approximation with the mathematical definition that uses the approximated exp? Or do you think we lose precision by doing that?

I think we should at least compare the differences between both...

Contributor Author:

Definitely! I'll make a comparison as part of my lib to see what's best, thanks for the suggestion.

Contributor:

You're welcome!

Maybe also do a time-performance comparison, to see what the trade-offs are between the two.
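The accuracy half of that comparison can be prototyped outside of syft by computing the exp-based sigmoid in plain PyTorch and measuring its deviation from torch.sigmoid (a sketch; approx_exp and its iteration count are illustrative stand-ins for the shared-tensor version):

```python
import torch

def approx_exp(x: torch.Tensor, iterations: int = 8) -> torch.Tensor:
    # Limit approximation exp(x) ~= (1 + x / 2**n) ** (2**n)
    return (1 + x / 2 ** iterations) ** (2 ** iterations)

def sigmoid_via_exp(x: torch.Tensor) -> torch.Tensor:
    # Mathematical definition: sigmoid(x) = 1 / (1 + exp(-x))
    return 1 / (1 + approx_exp(-x))

x = torch.linspace(-4, 4, steps=81)
err = (sigmoid_via_exp(x) - torch.sigmoid(x)).abs().max()
print(f"max abs deviation from torch.sigmoid on [-4, 4]: {err:.5f}")
```

The timing half of the trade-off would still need to be measured on actual shared tensors, since communication rounds dominate there.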

@andrelmfarias (Contributor):

Don't you think we should also add some efficiency tests in test/efficiency_tests, as I did for tanh and sigmoid?

@LaRiffle (Contributor Author) commented Nov 6, 2019

Yes, good idea!

@iamtrask (Member):

Super excited about this!

@iamtrask (Member) left a comment:

Excited to merge this! Looks like we're waiting on #2659 (comment) and possibly some performance-based unit tests.

Comment on lines +100 to +109
# Method syntax
x = torch.tensor([0.1, 0.2, 0.3]).fix_prec()

y = x + x

assert (y.child.child == torch.LongTensor([200, 400, 600])).all()
y = y.float_prec()

assert (y == torch.tensor([0.2, 0.4, 0.6])).all()
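For readers unfamiliar with the fixture above: fix_prec stores floats as integers scaled by base**precision (here 10**3, so 0.1 becomes 100), and float_prec divides the scale back out. A standalone sketch with illustrative names (not the syft API itself):

```python
import torch

PRECISION = 3            # fractional digits; matches the 10**3 scale in the test
SCALE = 10 ** PRECISION

def fix_precision(t: torch.Tensor) -> torch.Tensor:
    # Encode floats as scaled integers: 0.1 -> 100
    return (t * SCALE).round().long()

def float_precision(t: torch.Tensor) -> torch.Tensor:
    # Decode scaled integers back to floats: 200 -> 0.2
    return t.float() / SCALE

x = fix_precision(torch.tensor([0.1, 0.2, 0.3]))
y = x + x                # addition needs no rescaling; multiplication would
assert (y == torch.LongTensor([200, 400, 600])).all()
assert (float_precision(y) == torch.tensor([0.2, 0.4, 0.6])).all()
```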

Member:


Why is this tested twice? (also tested in test_add_method())

Contributor Author:


I will remove it

cumsum += diff / (tolerance * norm)

cumsum /= 10
print(cumsum)

Member:


This can be removed :)


for prec_frac, tolerance in fix_prec_tolerance.items():
cumsum = torch.zeros(5)
for i in range(10):

Member:


Are you doing this because the approximation results vary significantly depending on the sharing?
If so, do you know when "bad" cases occur and how bad they are?

Contributor Author:


Yes.
No, not really... :/

one = self * 0 + 1

if method == "exp":
result = one / (1 + (self * -1).exp())

Member:


I think we might have talked about it at some point, but shouldn't you use .inverse() here?

Contributor Author:


Correct!
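For context on .inverse(): division on shared tensors is typically implemented as an iterative reciprocal that uses only additions and multiplications, and Newton-Raphson iteration is a common choice. A plain-PyTorch sketch (the initial guess and iteration count are illustrative heuristics from the SMPC literature, not necessarily what syft implements):

```python
import torch

def approx_reciprocal(x: torch.Tensor, iterations: int = 10) -> torch.Tensor:
    """Newton-Raphson reciprocal: y <- y * (2 - x * y), converging to 1 / x.

    Each step uses only addition and multiplication, so it translates to
    additive secret sharing (where exp itself would also be approximated).
    Assumes x > 0; the starting point is a common SMPC-style heuristic.
    """
    y = 3 * torch.exp(0.5 - x) + 0.003   # rough positive initial guess
    for _ in range(iterations):
        y = y * (2 - x * y)
    return y

x = torch.tensor([1.0, 2.0, 5.0])
print(approx_reciprocal(x))              # close to 1 / x
```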

@LaRiffle LaRiffle requested a review from iamtrask December 27, 2019 17:00
@LaRiffle LaRiffle dismissed robert-wagner’s stale review December 27, 2019 17:00

Another review is welcome :)

@LaRiffle LaRiffle merged commit a35542a into master Dec 27, 2019
@LaRiffle LaRiffle deleted the encrypted_computation branch December 27, 2019 17:57


Development

Successfully merging this pull request may close these issues:

Exponential implementation in SMPC

7 participants