fix: allow hashed token_id in /key/update endpoint #24969
krrish-berri-2 merged 1 commit into BerriAI:main
Conversation
Greptile Summary: This PR fixes /key/update so it accepts a pre-hashed token_id as well as a raw sk- key, replacing three hash_token() calls with _hash_token_if_needed().
Confidence Score: 5/5
| Filename | Overview |
|---|---|
| litellm/proxy/management_endpoints/key_management_endpoints.py | Three hash_token() calls replaced with _hash_token_if_needed() in _check_team_key_limits, _check_org_key_limits, and update_key_fn to support pre-hashed token_ids passed from the Terraform provider. |
Flowchart

```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A["/key/update request\nkey = raw token OR pre-hashed token_id"] --> B{"_hash_token_if_needed(key)"}
    B -- "raw token\nstarts with sk prefix" --> C["hash_token\n→ SHA-256 hash"]
    B -- "already hashed\nnot raw token" --> D["return key unchanged"]
    C --> E["Normalized hashed_key"]
    D --> E
    E --> F1["_check_team_key_limits\nexclude current key"]
    E --> F2["_check_org_key_limits\nexclude current key"]
    E --> F3["_delete_cache_key_object\ncache invalidation"]
```
Reviews (1): Last reviewed commit: "allow hashed token_id in /key/update end..."
```diff
 # data.key may be a raw key (sk-...) or a pre-hashed token_id.
 if isinstance(data, UpdateKeyRequest):
-    hashed_key = hash_token(data.key)
+    hashed_key = _hash_token_if_needed(data.key)
```
Missing test coverage for pre-hashed token_id path

The PR description explains the motivation clearly, but no tests are included that pass a pre-hashed token_id to /key/update. Without a test, it's easy for a future refactor to silently re-introduce this regression.

A minimal test could call update_key_fn (or hit the endpoint via the test client) with a pre-hashed token value and assert that:

- The limit-check exclusion filter (key.token != hashed_key) correctly excludes the key.
- The cache invalidation call receives the already-hashed value unchanged.

This same coverage gap applies to both the _check_team_key_limits path (line 947) and the _check_org_key_limits path (line 1106).
Codecov Report: ✅ All modified and coverable lines are covered by tests.
…update and key rotation (#25552)

Two code paths in key_management_endpoints.py call hash_token() unconditionally when invalidating the user_api_key_cache after a key update. When the caller passes a pre-hashed token ID (not an sk- prefixed key), hash_token() double-hashes it, producing a cache key that does not match the actual cached entry. Cache invalidation silently fails.

This is compounded by update_cache(), which writes the stale cached key object back with a fresh 60s TTL after every successful request, preventing natural TTL expiry. The stale entry (with outdated fields like max_budget=None) persists indefinitely under load.

PR #24969 fixed this in update_key_fn but missed two other call sites:

- _process_single_key_update (bulk update path)
- _execute_virtual_key_regeneration (key rotation path)

Fix: replace hash_token() with _hash_token_if_needed() in both locations, matching the pattern already used elsewhere in the file.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
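The double-hash failure mode described above can be demonstrated with a plain dict standing in for user_api_key_cache. This is a hedged toy model, not litellm's cache code; only the hash_token semantics (SHA-256 of the raw key) come from the PR:

```python
import hashlib


def hash_token(token: str) -> str:
    return hashlib.sha256(token.encode()).hexdigest()


cache = {}
raw_key = "sk-example"
token_id = hash_token(raw_key)          # identifier auth stores the entry under
cache[token_id] = {"max_budget": None}  # stale fields live here

# Buggy invalidation: hashing unconditionally double-hashes a token_id,
# so the delete targets a key that was never written.
cache.pop(hash_token(token_id), None)
assert token_id in cache                # stale entry silently survives

# Fixed invalidation: leave an already-hashed identifier unchanged
# (the _hash_token_if_needed pattern).
lookup = hash_token(token_id) if token_id.startswith("sk-") else token_id
cache.pop(lookup, None)
assert token_id not in cache            # entry actually invalidated
```

Because a SHA-256 hex digest never starts with "sk-", the prefix check cleanly separates raw keys from pre-hashed token_ids.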
Summary
`/key/update` currently calls `hash_token()` directly on the `key` field, which assumes the value is always a raw key (starting with `sk-`). This breaks when a pre-hashed `token_id` is passed instead.

This change replaces three `hash_token()` calls with `_hash_token_if_needed()`, which already exists and is already used consistently in `/key/info`, `/key/delete`, and the internal `_get_and_validate_existing_key` and `update_data` helpers.

Changes

- `_check_team_key_limits`: use `_hash_token_if_needed(data.key)` when excluding the current key from limit checks
- `_check_org_key_limits`: same
- `update_key_fn`: use `_hash_token_if_needed(key)` for cache invalidation after update

Why

This is required to support write-only key attributes in the Terraform provider (BerriAI/terraform-provider-litellm#27). The provider stores the SHA-256 `token_id` as the Terraform resource ID (safe to store in state) rather than the raw key, and passes it to `/key/update` as the identifier. Without this fix, updates to managed keys fail.