fix: allow hashed token_id in /key/update endpoint #24969

Merged: krrish-berri-2 merged 1 commit into BerriAI:main from joereyna:fix/key-update-accept-token-hash on Apr 3, 2026
Conversation

@joereyna joereyna (Contributor) commented Apr 2, 2026

Summary

/key/update currently calls hash_token() directly on the key field, which assumes the value is always a raw key (starting with sk-). This breaks when a pre-hashed token_id is passed instead.

This change replaces three hash_token() calls with _hash_token_if_needed(), an existing helper already used consistently in /key/info, /key/delete, and the internal _get_and_validate_existing_key and update_data helpers.

Changes

  • _check_team_key_limits: use _hash_token_if_needed(data.key) when excluding the current key from limit checks
  • _check_org_key_limits: same
  • update_key_fn: use _hash_token_if_needed(key) for cache invalidation after update

Why

This is required to support write-only key attributes in the Terraform provider (BerriAI/terraform-provider-litellm#27). The provider stores the SHA-256 token_id as the Terraform resource ID (safe to store in state) rather than the raw key, and passes it to /key/update as the identifier. Without this fix, updates to managed keys fail.
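The provider-side flow can be sketched as follows (an illustration, not the provider's actual code; `token_id_for_state` is a hypothetical name, and it assumes litellm's hash_token() is a plain SHA-256 hex digest of the key string):

```python
import hashlib

def token_id_for_state(raw_key: str) -> str:
    # Store only the SHA-256 digest in Terraform state, never the raw key.
    return hashlib.sha256(raw_key.encode()).hexdigest()

# The provider then sends the digest, not the raw key, to /key/update:
update_payload = {
    "key": token_id_for_state("sk-example-raw-key"),  # pre-hashed token_id
    "max_budget": 50.0,
}
```

Since the payload's "key" no longer starts with sk-, the endpoint must not hash it a second time, which is exactly what this fix ensures.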

@vercel vercel Bot commented Apr 2, 2026

The latest updates on your projects:

Project: litellm | Deployment: Ready | Updated (UTC): Apr 2, 2026 6:11am

@codspeed-hq codspeed-hq Bot (Contributor) commented Apr 2, 2026

Merging this PR will not alter performance

✅ 16 untouched benchmarks


Comparing joereyna:fix/key-update-accept-token-hash (d0df5d3) with main (d1df4e8)

Open in CodSpeed

@greptile-apps greptile-apps Bot (Contributor) commented Apr 2, 2026

Greptile Summary

This PR fixes /key/update to accept pre-hashed token_id values (in addition to raw API keys) by replacing three direct hash_token() calls with the existing _hash_token_if_needed() helper, which is already used consistently throughout the rest of the key management endpoint.

  • _check_team_key_limits: uses _hash_token_if_needed(data.key) to correctly exclude the key being updated from team limit checks
  • _check_org_key_limits: same fix applied symmetrically
  • update_key_fn: uses _hash_token_if_needed(key) for cache invalidation after the update
  • The underlying prisma_client.update_data() already called _hash_token_if_needed internally (line 3272 of utils.py), so the database write path was already correct — only the limit-check exclusion and cache invalidation paths needed this fix
  • The PR lacks automated tests (unit or integration) demonstrating the fix works with a pre-hashed token_id input, as noted in the custom review guidelines

Confidence Score: 5/5

  • Safe to merge — the change is minimal, targeted, and consistent with existing patterns throughout the file.
  • All three replacements use _hash_token_if_needed, which is the established pattern for this exact use case and is already called internally by update_data. The logic is straightforward: hash only if the token starts with the raw key prefix, pass through otherwise. No existing behavior is altered for users passing raw keys. The only concern is missing test coverage, which is P2 and does not affect correctness.
  • No files require special attention — the single changed file applies a well-understood, minimal fix.

Important Files Changed

Filename: litellm/proxy/management_endpoints/key_management_endpoints.py
Overview: Three hash_token() calls replaced with _hash_token_if_needed() in _check_team_key_limits, _check_org_key_limits, and update_key_fn to support pre-hashed token_ids passed from the Terraform provider.

Flowchart

```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A["/key/update request\nkey = raw token OR pre-hashed token_id"] --> B{"_hash_token_if_needed(key)"}
    B -- "raw token\nstarts with sk prefix" --> C["hash_token\n→ SHA-256 hash"]
    B -- "already hashed\nnot raw token" --> D["return key unchanged"]
    C --> E["Normalized hashed_key"]
    D --> E
    E --> F1["_check_team_key_limits\nexclude current key"]
    E --> F2["_check_org_key_limits\nexclude current key"]
    E --> F3["_delete_cache_key_object\ncache invalidation"]
```

Reviews (1): Last reviewed commit: "allow hashed token_id in /key/update end..."

```diff
  # data.key may be a raw key (sk-...) or a pre-hashed token_id.
  if isinstance(data, UpdateKeyRequest):
-     hashed_key = hash_token(data.key)
+     hashed_key = _hash_token_if_needed(data.key)
```

P2 Missing test coverage for pre-hashed token_id path

The PR description explains the motivation clearly, but no tests are included that pass a pre-hashed token_id to /key/update. Without a test, it's easy for a future refactor to re-introduce this regression silently.

A minimal test could call update_key_fn (or hit the endpoint via the test client) with a pre-hashed token value and assert that:

  1. The limit-check exclusion filter (key.token != hashed_key) correctly excludes the key.
  2. The cache invalidation call receives the already-hashed value unchanged.

This same coverage gap applies to both the _check_team_key_limits path (line 947) and the _check_org_key_limits path (line 1106).

Rule Used: What: Ensure that any PR claiming to fix an issue ... (source)
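A minimal version of the suggested test can be sketched against a local stand-in for the helper (hypothetical re-implementation shown for self-containedness; a real test would import _hash_token_if_needed from key_management_endpoints.py instead):

```python
import hashlib

# Local stand-in for the helper under test; the real one lives in
# litellm/proxy/management_endpoints/key_management_endpoints.py.
def _hash_token_if_needed(token: str) -> str:
    if token.startswith("sk-"):
        return hashlib.sha256(token.encode()).hexdigest()
    return token

def test_update_accepts_pre_hashed_token_id():
    raw = "sk-test-key"
    hashed = hashlib.sha256(raw.encode()).hexdigest()
    # Both input forms must normalize to the same value, so the
    # limit-check exclusion (key.token != hashed_key) skips the key
    # being updated regardless of which form the caller passed.
    assert _hash_token_if_needed(raw) == hashed
    # A pre-hashed token_id must pass through unchanged (no double hash).
    assert _hash_token_if_needed(hashed) == hashed
```

Running this under pytest would lock in the pass-through behavior and catch a future refactor that reverts to unconditional hashing.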

@codecov codecov Bot commented Apr 2, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.


@krrish-berri-2 krrish-berri-2 self-requested a review April 3, 2026 04:13
@krrish-berri-2 krrish-berri-2 merged commit ee3e848 into BerriAI:main Apr 3, 2026
58 of 62 checks passed
dkindlund added a commit to dkindlund/litellm that referenced this pull request Apr 11, 2026
…update and key rotation

Two code paths in key_management_endpoints.py call hash_token()
unconditionally when invalidating the user_api_key_cache after a key
update.  When the caller passes a pre-hashed token ID (not an sk-
prefixed key), hash_token() double-hashes it, producing a cache key
that does not match the actual cached entry.  Cache invalidation
silently fails.

This is compounded by update_cache() which writes the stale cached key
object back with a fresh 60s TTL after every successful request,
preventing natural TTL expiry.  The stale entry (with outdated fields
like max_budget=None) persists indefinitely under load.

PR BerriAI#24969 fixed this in update_key_fn but missed two other call sites:
- _process_single_key_update (bulk update path)
- _execute_virtual_key_regeneration (key rotation path)

Fix: replace hash_token() with _hash_token_if_needed() in both
locations, matching the pattern already used elsewhere in the file.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
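The double-hashing failure the commit describes can be reproduced in a few lines (assuming, as elsewhere, that hash_token() is a plain SHA-256 hex digest):

```python
import hashlib

def hash_token(token: str) -> str:
    # Unconditional hashing, as in the two remaining buggy call sites.
    return hashlib.sha256(token.encode()).hexdigest()

raw_key = "sk-demo-key"
cache_key = hash_token(raw_key)      # key the cache entry was written under
token_id = cache_key                 # what a provider-style caller sends
stale_lookup = hash_token(token_id)  # double-hashed by the buggy path
# The invalidation targets the wrong cache key; the stale entry survives.
assert stale_lookup != cache_key
```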
krrish-berri-2 pushed a commit that referenced this pull request Apr 12, 2026
…update and key rotation (#25552)

(Commit message identical to the dkindlund commit above.)
2 participants