
refactor(sdk): Use Cache instead of TtlCache for OAuth 2.0 server metadata #6484

Open
zecakeh wants to merge 2 commits into matrix-org:main from zecakeh:authorization-metadata-cache

Conversation

@zecakeh
Collaborator

@zecakeh zecakeh commented Apr 22, 2026

We don't need a map to store a single value. Besides, this makes the code similar to that of the other fields in ClientCaches.

We also cache the data every time that OAuth::server_metadata() is called, hoping to avoid some requests when OAuth::cached_server_metadata() is called later.

Finally, we drop TtlCache because it is now unused.

  • I've documented the public API changes in the appropriate CHANGELOG.md files.
  • This PR was made with the help of AI.

zecakeh added 2 commits April 22, 2026 13:45
…adata

We also cache the data every time that `OAuth::server_metadata()` is
called, in the hope of avoiding some requests with
`OAuth::cached_server_metadata()`.

Signed-off-by: Kévin Commaille <zecakeh@tedomum.fr>
It is now unused.

Signed-off-by: Kévin Commaille <zecakeh@tedomum.fr>
@zecakeh zecakeh requested a review from a team as a code owner April 22, 2026 11:47
@zecakeh zecakeh requested review from Hywan and removed request for a team April 22, 2026 11:47
@codecov

codecov Bot commented Apr 22, 2026

Codecov Report

❌ Patch coverage is 68.00000% with 8 lines in your changes missing coverage. Please review.
✅ Project coverage is 89.93%. Comparing base (1d7d6c9) to head (3646d57).
✅ All tests successful. No failed tests found.

Files with missing lines                            Patch %   Lines
crates/matrix-sdk/src/authentication/oauth/mod.rs   66.66%    7 Missing and 1 partial ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #6484      +/-   ##
==========================================
+ Coverage   89.91%   89.93%   +0.02%     
==========================================
  Files         381      381              
  Lines      105624   105581      -43     
  Branches   105624   105581      -43     
==========================================
- Hits        94974    94958      -16     
+ Misses       7011     6996      -15     
+ Partials     3639     3627      -12     

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.

@codspeed-hq

codspeed-hq Bot commented Apr 22, 2026

Merging this PR will not alter performance

✅ 50 untouched benchmarks


Comparing zecakeh:authorization-metadata-cache (3646d57) with main (1d7d6c9)

Open in CodSpeed

Member

@Hywan Hywan left a comment


Thank you for the patch. All of this makes sense to me. I have a suggestion about the way we are dealing with “try to reduce the number of concurrent server_metadata calls”.

Comment on lines +472 to +476
let _server_metadata_guard = match server_metadata_cache.refresh_lock.try_lock() {
Ok(guard) => guard,
Err(_) => {
// There is already a refresh in progress, wait for it to finish.
let guard = server_metadata_cache.refresh_lock.lock().await;
Member


We do a try_lock:

  • if it's not locked, the lock is acquired,
  • if it is locked, we wait to acquire the lock.

It's similar to a direct lock:

  • if it's not locked, the lock will be acquired,
  • if it is locked, we will wait to acquire the lock.

The only difference is: if we had to wait on the lock, it means a refresh was in progress, so we can look in the Cache for the value. Okay, I can see where you're coming from: give the system one last chance to look into the cache before calling server_metadata. If that is your intent, I think it's very unlikely that cached_server_metadata and server_metadata are called concurrently in a tight loop.

Instead, I think server_metadata should use the refresh_lock to see if there is an ongoing request (pseudo-Rust):

async fn server_metadata(&self) -> … {
    let Ok(guard) = refresh_lock.try_lock() else {
        // An ongoing refresh is in progress: wait for it to finish…
        let _guard = refresh_lock.lock().await;
        // …then return the cached value, it shouldn't be expired.
        return cached value;
    };

    do the HTTP request;
    cache.server_metadata.set_value();
    drop(guard);
    return metadata;
}

So basically, the logic you have here should move to the top of server_metadata. This way, even concurrent calls to server_metadata are protected. And the current block in this method should be replaced by a direct call to self.server_metadata(). Thoughts?

Note that server_metadata will use the cached value if and only if a call is waiting on an ongoing call. I think this is alright, as the server won't change the returned value within a couple of milliseconds. Of course that can happen, but nothing can prevent it.

Thoughts?

Collaborator Author

@zecakeh zecakeh Apr 23, 2026


I agree, this is something I contemplated too. What should we do when we are waiting on a call and the ongoing call results in an error? Should we cache the error and return it again, or retry?

Member


Good question. In case of an error, let's cache nothing. All waiting calls should then retry, as no value is cached.


/// A TTL cache where items get removed deterministically in the `get()` call.
#[derive(Debug)]
pub struct TtlCache<K: Eq + Hash, V: Clone> {
Member


That's great! Thank you.

Member


Should we rename ttl_cache.rs to ttl_value.rs or something like that?

Collaborator Author

@zecakeh zecakeh Apr 23, 2026


Maybe just ttl, in case we need to re-add TtlCache, or other similar types, later?

Member


Yup, I prefer your proposal.
