refactor(sdk): Use Cache instead of TtlCache for OAuth 2.0 server metadata #6484

zecakeh wants to merge 2 commits into matrix-org:main
Commits:

1. Use `Cache` instead of `TtlCache` for OAuth 2.0 server metadata

   We also cache the data every time that `OAuth::server_metadata()` is called, in the hope of avoiding some requests with `OAuth::cached_server_metadata()`.

   Signed-off-by: Kévin Commaille <zecakeh@tedomum.fr>

2. It is now unused.

   Signed-off-by: Kévin Commaille <zecakeh@tedomum.fr>
Codecov Report

❌ Patch coverage is
Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main    #6484      +/-   ##
==========================================
+ Coverage   89.91%   89.93%   +0.02%
==========================================
  Files         381      381
  Lines      105624   105581      -43
==========================================
- Hits        94974    94958      -16
+ Misses       7011     6996      -15
+ Partials     3639     3627      -12
```

☔ View full report in Codecov by Sentry.
Hywan left a comment:
Thank you for the patch. All this makes sense to me. I have a suggestion about the way we are dealing with "try to reduce the number of concurrent `server_metadata` calls".
```rust
let _server_metadata_guard = match server_metadata_cache.refresh_lock.try_lock() {
    Ok(guard) => guard,
    Err(_) => {
        // There is already a refresh in progress, wait for it to finish.
        let guard = server_metadata_cache.refresh_lock.lock().await;
```
We do a try_lock:
- if it's not locked, the lock is acquired,
- if it is locked, we wait to acquire the lock.
It's similar to a direct lock:
- if it's not locked, the lock will be acquired,
- if it is locked, we will wait to acquire the lock.
The only difference is: if we had to wait on the lock, it means a refresh was in progress, and so we can look in the `Cache` for the value. Okay, I can see where you're coming from: let's give the system a last chance to look into the cache before calling `server_metadata`. If that's your intent, I think it's very unlikely that `cached_server_metadata` and `server_metadata` are called concurrently in a tight loop.
Instead, I think server_metadata should use the refresh_lock to see if there is an ongoing request (pseudo-rust-code):
```rust
async fn server_metadata(&self) -> … {
    let guard = match refresh_lock.try_lock() {
        Ok(guard) => guard,
        Err(_) => {
            // Ongoing refresh: wait for it to finish…
            let _guard = refresh_lock.lock().await;
            // …then return the cached value, it shouldn't be expired.
            return the cached value;
        }
    };

    do the HTTP request;
    cache.server_metadata.set_value(…);
    drop(guard);

    return metadata;
}
```

So basically, the logic you have here should move to the top of `server_metadata`. This way, even concurrent calls to `server_metadata` are protected. And the current block in this method should be replaced by a direct call to `self.server_metadata()`. Thoughts?
Note that `server_metadata` will use the cached value if and only if a call is waiting on an ongoing call. I think this is alright, as the server won't change the returned value within a couple of milliseconds. Of course it can happen, but nothing can prevent that.
Thoughts?
I agree, this is something I contemplated too. What should we do if we are waiting on a call and the ongoing call results in an error? Should we cache the error and return it again, or retry?
Good question. In case of an error, let's cache nothing. All calls waiting should retry, as no value is cached.
```rust
/// A TTL cache where items get removed deterministically in the `get()` call.
#[derive(Debug)]
pub struct TtlCache<K: Eq + Hash, V: Clone> {
```
Should we rename `ttl_cache.rs` to `ttl_value.rs` or something like that?
Maybe just `ttl`, if we need to re-add the `TtlCache` later, or other similar types?
We don't need a map to store a single value; besides, it allows having code similar to the other fields in the `ClientCaches`. We also cache the data every time that `OAuth::server_metadata()` is called, in the hope of avoiding some requests when calling `OAuth::cached_server_metadata()`. Finally, we drop `TtlCache` because it is now unused.