feat(proxy): add project-level guardrails support #25087

krrish-berri-2 merged 1 commit into BerriAI:main
Conversation
Greptile Summary

This PR adds guardrails and policies support at the project level, following the existing key/team pattern. The changes wire `project_metadata` from `UserAPIKeyAuth` into the guardrail and policy resolution pipeline (see the per-file overview below).
Confidence Score: 5/5. Safe to merge: the guardrail resolution logic is correct, and the only remaining findings are P2 items (two stale log messages and a missing test for the project-policies code path). No logic errors, no security issues, no backwards-incompatible changes. The persistence mechanism is already in place in project_endpoints.py, and project_metadata has been populated on UserAPIKeyAuth in the auth layer for some time. No files require special attention beyond the two stale log lines in litellm_pre_call_utils.py.
| Filename | Overview |
|---|---|
| litellm/proxy/_types.py | Adds guardrails and policies Optional fields to NewProjectRequest and UpdateProjectRequest. Correctly serialized into metadata by the LiteLLM_ManagementEndpoint_MetadataFields_Premium loop in project_endpoints.py — not the model validator (which only covers the non-premium list). |
| litellm/proxy/litellm_pre_call_utils.py | Extends guardrail and policy resolution to include project-level metadata across move_guardrails_to_metadata, _add_guardrails_from_key_or_team_metadata, and _add_guardrails_from_policies_in_metadata. Logic is correct; two verbose log strings still reference "key/team" instead of "key/team/project". |
| tests/test_litellm/proxy/test_litellm_pre_call_utils.py | Adds two new async tests for project-level guardrail union merging and project-only guardrails. Tests are well-structured and use mocks correctly. Missing coverage for the project-level policies path. |
Flowchart
```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[Incoming Request] --> B[move_guardrails_to_metadata]
    B --> C{Any guardrail config?}
    C -- "key / team / project / request" --> D[_add_guardrails_from_key_or_team_metadata]
    C -- "None + policy engine not initialized" --> E[Early return / clean up]
    D --> F{key_metadata guardrails?}
    F -- yes --> G[Add to combined_guardrails set]
    F -- no --> H{team_metadata guardrails?}
    G --> H
    H -- yes --> I[Add to combined_guardrails set]
    H -- no --> J{project_metadata guardrails? NEW}
    I --> J
    J -- yes --> K[_premium_user_check + add to set]
    J -- no --> L[Write combined list to request metadata]
    K --> L
    L --> M[_add_guardrails_from_policies_in_metadata]
    M --> N{key / team / project policies? NEW}
    N -- yes --> O[Resolve via PolicyRegistry + merge]
    N -- no --> P[Return]
    O --> P
```
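The union-merge semantics in the flowchart can be sketched in isolation. `merge_guardrails` below is an illustrative helper, not litellm's actual function; it mirrors the set-based accumulation across key, team, and project metadata with first-seen ordering and deduplication.

```python
from typing import Dict, List, Optional


def merge_guardrails(
    key_metadata: Optional[Dict] = None,
    team_metadata: Optional[Dict] = None,
    project_metadata: Optional[Dict] = None,
) -> List[str]:
    """Union-merge guardrails from key, team, and project metadata.

    Duplicates are dropped and first-appearance order is preserved,
    mirroring the combined_guardrails accumulation in the flowchart.
    """
    combined: List[str] = []
    seen = set()
    for metadata in (key_metadata, team_metadata, project_metadata):
        for guardrail in (metadata or {}).get("guardrails", []):
            if guardrail not in seen:
                seen.add(guardrail)
                combined.append(guardrail)
    return combined
```

For example, a key with `["key-guardrail-1"]`, a team with `["team-guardrail-1", "key-guardrail-1"]`, and a project with `["project-guardrail-1", "team-guardrail-1"]` merge into exactly three entries, one per unique guardrail.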
Comments Outside Diff (2)
- litellm/proxy/litellm_pre_call_utils.py, line 1588 (link): Stale log message still references "key/team". The verbose log at line 1588 still says "Policy engine: resolving guardrails from key/team policies" even though the function now also incorporates project-level policies. The same applies to the log at line 1646. Both should be updated to include "project" for accurate observability.
- litellm/proxy/litellm_pre_call_utils.py, line 1646 (link): This log line still refers to "key/team policies" but now also includes project-level policies. Update for accurate tracing.
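A minimal sketch of the suggested wording fix. The logger here is stdlib `logging` for self-containment; litellm's proxy uses its own verbose logger, and the exact message strings are taken from the review comment above.

```python
import logging

verbose_proxy_logger = logging.getLogger("litellm.proxy")

# Before: message omits the new project-level source.
STALE_MSG = "Policy engine: resolving guardrails from key/team policies"
# After: all three sources are named, so traces stay accurate.
UPDATED_MSG = "Policy engine: resolving guardrails from key/team/project policies"

verbose_proxy_logger.debug(UPDATED_MSG)
```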
Reviews (1): Last reviewed commit: "feat(proxy): add project-level guardrail..."
```python
    assert len(requested_guardrails) == 1


@pytest.mark.asyncio
async def test_project_guardrails_merge_with_key_and_team():
    """
    Test that project guardrails are merged with key and team guardrails (union semantics).
    All three levels should contribute to the final guardrails list without duplicates.
    """
    request_mock = MagicMock(spec=Request)
    request_mock.url = MagicMock()
    request_mock.url.path = "/chat/completions"
    request_mock.url.__str__.return_value = "http://localhost/chat/completions"
    request_mock.method = "POST"
    request_mock.query_params = {}
    request_mock.headers = {"Content-Type": "application/json"}
    request_mock.client = MagicMock()
    request_mock.client.host = "127.0.0.1"

    data = {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "test"}],
    }

    user_api_key_dict = UserAPIKeyAuth(
        api_key="test-key",
        metadata={"guardrails": ["key-guardrail-1"]},
        team_metadata={"guardrails": ["team-guardrail-1", "key-guardrail-1"]},
        project_metadata={"guardrails": ["project-guardrail-1", "team-guardrail-1"]},
    )

    with patch("litellm.proxy.utils._premium_user_check"):
        updated_data = await add_litellm_data_to_request(
            data=data,
            request=request_mock,
            user_api_key_dict=user_api_key_dict,
            proxy_config=MagicMock(),
            general_settings={},
            version="test-version",
        )

    metadata = updated_data.get("metadata", {})
    guardrails = metadata.get("guardrails", [])

    # All three sources contribute
    assert "key-guardrail-1" in guardrails
    assert "team-guardrail-1" in guardrails
    assert "project-guardrail-1" in guardrails
    # No duplicates
    assert guardrails.count("key-guardrail-1") == 1
    assert guardrails.count("team-guardrail-1") == 1


@pytest.mark.asyncio
async def test_project_guardrails_only():
    """
    Test that project guardrails work when key and team have no guardrails configured.
    """
    request_mock = MagicMock(spec=Request)
    request_mock.url = MagicMock()
    request_mock.url.path = "/chat/completions"
    request_mock.url.__str__.return_value = "http://localhost/chat/completions"
    request_mock.method = "POST"
    request_mock.query_params = {}
    request_mock.headers = {"Content-Type": "application/json"}
    request_mock.client = MagicMock()
    request_mock.client.host = "127.0.0.1"

    data = {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "test"}],
    }

    user_api_key_dict = UserAPIKeyAuth(
        api_key="test-key",
        metadata={},
        team_metadata={},
        project_metadata={"guardrails": ["project-guardrail-1", "project-guardrail-2"]},
    )

    with patch("litellm.proxy.utils._premium_user_check"):
        updated_data = await add_litellm_data_to_request(
            data=data,
            request=request_mock,
            user_api_key_dict=user_api_key_dict,
            proxy_config=MagicMock(),
            general_settings={},
            version="test-version",
        )

    metadata = updated_data.get("metadata", {})
    guardrails = metadata.get("guardrails", [])

    assert "project-guardrail-1" in guardrails
    assert "project-guardrail-2" in guardrails
    assert len(guardrails) == 2


def test_update_model_if_key_alias_exists():
    """
    Test that _update_model_if_key_alias_exists properly updates the model when a key alias exists.
```
Missing test coverage for project-level policies
The PR adds project-level policies support in _add_guardrails_from_policies_in_metadata, but neither of the two new tests exercises the policies path — only guardrails are covered. Since policy resolution has its own code path (requiring the policy registry to be initialized), a test should be added to verify that project_metadata={"policies": [...]} is correctly accumulated into policy_names and ultimately resolved into guardrails on the request metadata.
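A sketch of what the missing test could verify, using a simplified stand-in for litellm's policy machinery. The names `POLICY_REGISTRY` and `resolve_policies_to_guardrails` are illustrative, not litellm's actual API; the point is the accumulation of `policies` from all three metadata levels before resolution.

```python
from typing import Dict, List

# Illustrative stand-in for the policy registry: in litellm, registered
# policies ultimately resolve to lists of guardrail names.
POLICY_REGISTRY: Dict[str, List[str]] = {
    "pii-policy": ["presidio-pii"],
    "safety-policy": ["bedrock-guardrail"],
}


def resolve_policies_to_guardrails(
    key_metadata: Dict,
    team_metadata: Dict,
    project_metadata: Dict,
) -> List[str]:
    """Accumulate policy names from key, team, and project metadata,
    then resolve each policy to its guardrails, deduplicating while
    preserving first-seen order."""
    policy_names: List[str] = []
    for metadata in (key_metadata, team_metadata, project_metadata):
        for name in metadata.get("policies", []):
            if name not in policy_names:
                policy_names.append(name)

    guardrails: List[str] = []
    for name in policy_names:
        for guardrail in POLICY_REGISTRY.get(name, []):
            if guardrail not in guardrails:
                guardrails.append(guardrail)
    return guardrails


def test_project_policies_resolved():
    # Project-only policies should still be accumulated and resolved.
    guardrails = resolve_policies_to_guardrails(
        key_metadata={},
        team_metadata={},
        project_metadata={"policies": ["pii-policy", "safety-policy"]},
    )
    assert guardrails == ["presidio-pii", "bedrock-guardrail"]
```

In the real suite this would instead exercise `_add_guardrails_from_policies_in_metadata` with an initialized policy registry and `project_metadata={"policies": [...]}` on `UserAPIKeyAuth`.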
Codecov Report: ✅ All modified and coverable lines are covered by tests.
The backend already supports guardrails on projects (PR BerriAI#25087), but the dashboard UI had no way to set them. This adds a guardrails multi-select field to the project form's Advanced Settings, following the same pattern used for team guardrails.
Relevant issues
Requested by customer — projects currently have no guardrails support, unlike teams and keys.
Pre-Submission checklist
Please complete all items before asking a LiteLLM maintainer to review your PR
- Added at least 1 test in the tests/test_litellm/ directory (a hard requirement - see details)
- Unit tests pass via make test-unit
- Asked @greptileai for a review and received a Confidence Score of at least 4/5 before requesting a maintainer review

Delays in PR merge?
If you're seeing a delay in your PR being merged, ping the LiteLLM Team on Slack (#pr-review).
CI (LiteLLM team)
Branch creation CI run
Link:
CI run for the last commit
Link:
Merge / cherry-pick CI run
Links:
Type
🆕 New Feature
Changes
Adds guardrails and policies support at the project level, following the existing team-level pattern.
- litellm/proxy/_types.py: adds `guardrails: Optional[List[str]]` and `policies: Optional[List[str]]` fields to `NewProjectRequest` and `UpdateProjectRequest`. The existing `model_validator` + `LiteLLM_ManagementEndpoint_MetadataFields` loop automatically serializes these into `metadata`, so no endpoint code changes are needed.
- litellm/proxy/litellm_pre_call_utils.py:
  - `move_guardrails_to_metadata()`: reads `project_metadata` from `UserAPIKeyAuth` and includes it in the early-exit check and both merge calls.
  - `_add_guardrails_from_key_or_team_metadata()`: new `project_metadata` parameter (default `None`), adds project guardrails to the union set alongside key and team.
  - `_add_guardrails_from_policies_in_metadata()`: same pattern, a new `project_metadata` parameter that adds project policies to the resolution set.
- tests/test_litellm/proxy/test_litellm_pre_call_utils.py:
  - `test_project_guardrails_merge_with_key_and_team`: verifies union merge across all three levels with deduplication.
  - `test_project_guardrails_only`: verifies project guardrails work when key and team have none configured.
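The field-to-metadata serialization described above can be sketched with plain dataclasses. This is a simplified stand-in for litellm's pydantic request models and metadata-fields loop; the field list, the `project_alias` field, and `serialize_metadata_fields` are illustrative names, not litellm's actual implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

# Stand-in for the management-endpoint metadata field list that the
# endpoint loop iterates over when building the stored metadata.
MANAGEMENT_METADATA_FIELDS = ["guardrails", "policies"]


@dataclass
class NewProjectRequest:
    """Simplified stand-in for the pydantic NewProjectRequest model."""
    project_alias: str
    guardrails: Optional[List[str]] = None
    policies: Optional[List[str]] = None
    metadata: Dict = field(default_factory=dict)


def serialize_metadata_fields(request: NewProjectRequest) -> Dict:
    """Copy top-level guardrails/policies fields into metadata, mirroring
    the metadata-fields loop so the project row stores a single dict."""
    metadata = dict(request.metadata)
    for name in MANAGEMENT_METADATA_FIELDS:
        value = getattr(request, name, None)
        if value is not None:
            metadata[name] = value
    return metadata
```

With this shape, `None` fields are simply skipped, so an update request that omits `guardrails` leaves any previously stored value untouched rather than overwriting it with `null`.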