feat(api): Add filter_dashboard_id parameter to apply dashboard filters to chart/data endpoint (#38638)
Conversation
✅ Deploy Preview for superset-docs-preview ready!
```json
"$ref": "#/components/schemas/ChartDataResponseResult"
},
"type": "array"
},
```
So I was not sure of the best way to generate the updated OpenAPI docs. I found a command in the Superset CLI, but when I ran it, it added my new changes while also deleting a bunch of unrelated content, which made me nervous, so I just went ahead and manually updated the openapi.json file here.
Please let me know if there is a better approach!
```python
if dashboard_filter_context.extra_form_data:
    efd = dashboard_filter_context.extra_form_data
    extra_filters = efd.get("filters", [])

    for query in json_body.get("queries", []):
        if extra_filters:
            existing = query.get("filters", [])
            query["filters"] = existing + [
                {**f, "isExtra": True} for f in extra_filters
            ]

        extras = query.get("extras", {})
        for key in EXTRA_FORM_DATA_OVERRIDE_EXTRA_KEYS:
            if key in efd:
                extras[key] = efd[key]
        if extras:
            query["extras"] = extras

        for (
            src_key,
            target_key,
        ) in EXTRA_FORM_DATA_OVERRIDE_REGULAR_MAPPINGS.items():
            if src_key in efd:
                query[target_key] = efd[src_key]

        query["extra_form_data"] = efd
```
I noticed there are other places in the codebase where we appear to be doing similar things. For example, there is a merge_extra_filters() function in superset/utils/core.py (line 1126 at 6e7d6a8).
This is not ideal in that we are re-implementing logic that exists in other forms on the frontend/backend, but from my knowledge of the codebase right now I believe this is the best approach. Please let me know if you have thoughts on another approach!
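For reviewers skimming the thread, the merge step above can be condensed into a standalone sketch. The two constant values below are illustrative stand-ins, not Superset's actual key sets, and the `or []` / `or {}` normalization reflects the null-handling suggestions raised later in this review:

```python
# Illustrative stand-ins; Superset's real constants differ.
EXTRA_FORM_DATA_OVERRIDE_EXTRA_KEYS = {"time_grain_sqla", "relative_start"}
EXTRA_FORM_DATA_OVERRIDE_REGULAR_MAPPINGS = {"granularity_sqla": "granularity"}


def apply_extra_form_data(json_body: dict, efd: dict) -> None:
    """Merge dashboard extra_form_data into each query in-place."""
    extra_filters = efd.get("filters") or []
    for query in json_body.get("queries", []):
        if extra_filters:
            # Tag merged filters so downstream code can tell them apart.
            query["filters"] = (query.get("filters") or []) + [
                {**f, "isExtra": True} for f in extra_filters
            ]
        extras = query.get("extras") or {}
        for key in EXTRA_FORM_DATA_OVERRIDE_EXTRA_KEYS:
            if key in efd:
                extras[key] = efd[key]
        if extras:
            query["extras"] = extras
        for src_key, target_key in EXTRA_FORM_DATA_OVERRIDE_REGULAR_MAPPINGS.items():
            if src_key in efd:
                query[target_key] = efd[src_key]
        query["extra_form_data"] = efd


body = {"queries": [{"filters": None}]}
apply_extra_form_data(body, {"filters": [{"col": "gender", "op": "==", "val": "girl"}]})
```

After the call, the query carries the dashboard filter marked `isExtra`, and `extra_form_data` is attached verbatim.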
This also applies to the content in dashboard_filter_context.py. Let me know if you think there is a better approach!
```python
def _get_filter_target_column(filter_config: dict[str, Any]) -> str | None:
    """Extract the target column name from a native filter configuration."""
    if targets := filter_config.get("targets", []):
        column = targets[0].get("column", {})
        if isinstance(column, dict):
            return column.get("name")
        if isinstance(column, str):
            return column
    return None
```
I included this target column info to help end users understand more precisely how dashboard filters impact their queries, but it might not be essential or super useful depending on the use case for this feature. I can remove it if needed.
```python
# We need to apply the form data to the global context as Jinja
# templating pulls form data from the request globally, so this
# fallback ensures it has the filters and extra_form_data applied
# when used in get_sqla_query, which constructs the final query.
g.form_data = json_body
```
This was painful to understand lol.
```python
filters_dashboard_id = request.args.get("filters_dashboard_id", type=int)
if filters_dashboard_id is not None:
```
Suggestion: Invalid filters_dashboard_id values are silently treated as if the parameter was not provided because request.args.get(..., type=int) returns None on conversion failure. This causes incorrect 200 responses without applying filters instead of a client error; explicitly validate parsing and return 400 when the parameter is present but non-integer. [logic error]
Severity Level: Major ⚠️
- ⚠️ GET chart data silently ignores malformed dashboard filter ID.
- ⚠️ Clients receive unfiltered results without explicit error.

Current:

```python
filters_dashboard_id = request.args.get("filters_dashboard_id", type=int)
if filters_dashboard_id is not None:
```

Suggested:

```python
filters_dashboard_id: int | None = None
if "filters_dashboard_id" in request.args:
    try:
        filters_dashboard_id = int(request.args["filters_dashboard_id"])
    except (TypeError, ValueError):
        return self.response_400(
            message=_("`filters_dashboard_id` must be an integer")
        )
if filters_dashboard_id is not None:
```
Steps of Reproduction ✅
1. Hit GET chart data endpoint `/<int:pk>/data/` (`superset/charts/data/api.py:81`,
`get_data` at `:89`) with malformed query param:
`/api/v1/chart/<id>/data/?filters_dashboard_id=abc`.
2. In `get_data`, parsing uses `request.args.get("filters_dashboard_id", type=int)`
(`superset/charts/data/api.py:185`), which yields `None` for bad integer input; branch at
`:186` is skipped.
3. Because the dashboard filter branch is skipped, `get_dashboard_filter_context(...)`
(`:188-191`) is never called, so no validation/security/default-filter application occurs.
4. Endpoint still returns the normal 200 data path; `dashboard_filters` metadata is only attached when context exists (`superset/charts/data/api.py:534-535`), so the client gets a silent unfiltered response instead of an input error.

Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** superset/charts/data/api.py
**Line:** 185:186
**Comment:**
Logic Error: Invalid `filters_dashboard_id` values are silently treated as if the parameter was not provided because `request.args.get(..., type=int)` returns `None` on conversion failure. This causes incorrect 200 responses without applying filters instead of a client error; explicitly validate parsing and return 400 when the parameter is present but non-integer.
Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and please make it concise.

```python
existing = query.get("filters", [])
query["filters"] = existing + [
    {**f, "isExtra": True} for f in extra_filters
]
```
Suggestion: filters can be None in query objects (schema allows null), so concatenating it with a list raises a TypeError at runtime when dashboard filters are applied. Normalize missing/null filters to an empty list before concatenation. [null pointer]
Severity Level: Critical 🚨
- ❌ GET chart data crashes when saved query filters are null.
- ⚠️ Dashboard default filters cannot be applied reliably.

Current:

```python
existing = query.get("filters", [])
query["filters"] = existing + [
    {**f, "isExtra": True} for f in extra_filters
]
```

Suggested:

```python
existing = query.get("filters") or []
query["filters"] = existing + [
    {**f, "isExtra": True} for f in extra_filters
]
```
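The failure mode is easy to reproduce in isolation; a minimal sketch, assuming a saved query object whose `filters` key was persisted as JSON null:

```python
# A query object whose "filters" key exists but holds null, as the schema allows.
query = {"filters": None}
extra_filters = [{"col": "state", "op": "==", "val": "CA"}]

crashed = False
try:
    # .get's default is unused because the key exists, so this is None + list.
    query.get("filters", []) + extra_filters
except TypeError:
    crashed = True

# The suggested `or []` normalization sidesteps the crash:
merged = (query.get("filters") or []) + [
    {**f, "isExtra": True} for f in extra_filters
]
```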
Steps of Reproduction ✅
1. Use chart save/update flow where `query_context` is accepted as raw JSON string
(`ChartPostSchema.query_context` in `superset/charts/schemas.py:213`; update persists
properties via `ChartDAO.update` in `superset/commands/chart/update.py:73`) and store a
query object containing `"filters": null`.
2. Call GET chart data endpoint with dashboard filter application enabled
(`superset/charts/data/api.py:81`, `:185-186`) and a dashboard whose defaults produce
`extra_form_data.filters` (`:199-203`).
3. Loop over saved query context (`superset/charts/data/api.py:201`) executes `existing =
query.get("filters", [])` (`:203`), returning `None` because key exists with null.
4. Next line performs `None + list` (`:204-206`), raising `TypeError` before request validation/response handling, causing a server error on this endpoint path.

Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** superset/charts/data/api.py
**Line:** 203:206
**Comment:**
Null Pointer: `filters` can be `None` in query objects (schema allows null), so concatenating it with a list raises a `TypeError` at runtime when dashboard filters are applied. Normalize missing/null filters to an empty list before concatenation.
Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and please make it concise.

```python
extras = query.get("extras", {})
for key in EXTRA_FORM_DATA_OVERRIDE_EXTRA_KEYS:
    if key in efd:
        extras[key] = efd[key]
```
Suggestion: extras can be None in query objects (schema allows null), and assigning keys to None raises a runtime exception when extra form-data overrides are present. Coerce null extras to a dict before writing override keys. [null pointer]
Severity Level: Critical 🚨
- ❌ GET chart data fails when query `extras` is null.
- ⚠️ Time/extras overrides from dashboard defaults are lost.

Current:

```python
extras = query.get("extras", {})
for key in EXTRA_FORM_DATA_OVERRIDE_EXTRA_KEYS:
    if key in efd:
        extras[key] = efd[key]
```

Suggested:

```python
extras = query.get("extras") or {}
for key in EXTRA_FORM_DATA_OVERRIDE_EXTRA_KEYS:
    if key in efd:
        extras[key] = efd[key]
```
Steps of Reproduction ✅
1. Persist a chart `query_context` containing `"extras": null` (allowed nullable query
object fields: `extras` in `superset/charts/schemas.py:1337-1340`; chart query_context
accepted as JSON string at `superset/charts/schemas.py:213`).
2. Invoke GET `/api/v1/chart/<id>/data/?filters_dashboard_id=<dashboard_id>` so dashboard
context is loaded (`superset/charts/data/api.py:185-191`) and `extra_form_data` is present
(`:197-199`).
3. Ensure `efd` includes any override key iterated by
`EXTRA_FORM_DATA_OVERRIDE_EXTRA_KEYS` (`superset/charts/data/api.py:209-211`), then
mutation path runs.
4. `extras = query.get("extras", {})` returns `None` when the key exists as null (`:208`), and the assignment `extras[key] = ...` at `:211` raises `TypeError`, breaking request execution.

Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** superset/charts/data/api.py
**Line:** 208:211
**Comment:**
Null Pointer: `extras` can be `None` in query objects (schema allows null), and assigning keys to `None` raises a runtime exception when extra form-data overrides are present. Coerce null `extras` to a dict before writing override keys.
Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and please make it concise.

```python
if charts_in_scope := filter_config.get("chartsInScope"):
    return chart_id in charts_in_scope
```
Suggestion: The scope check treats an empty chartsInScope list as if the field were missing and falls back to rootPath, which can incorrectly apply a filter to charts that should receive no filter. Handle the presence of chartsInScope explicitly so [] means "in scope for no charts". [logic error]
Severity Level: Critical 🚨
- ❌ Scoped-out filters can still affect chart SQL.
- ⚠️ Dashboard filter metadata reports misleading applied status.
- ⚠️ GET chart data differs from frontend scope semantics.

Current:

```python
if charts_in_scope := filter_config.get("chartsInScope"):
    return chart_id in charts_in_scope
```

Suggested:

```python
if "chartsInScope" in filter_config:
    charts_in_scope = filter_config.get("chartsInScope")
    if charts_in_scope is not None:
        return chart_id in charts_in_scope
```
Steps of Reproduction ✅
1. Call GET `api/v1/chart/<chart_id>/data/?filters_dashboard_id=<dashboard_id>`
(entrypoint in `superset/charts/data/api.py:185-190`), which invokes
`get_dashboard_filter_context(...)`.
2. Ensure dashboard metadata contains a native filter with explicit `"chartsInScope": []`
and default `scope.rootPath` (this field is persisted when present; see helper behavior in
`tests/unit_tests/charts/test_dashboard_filter_context.py:101-103`).
3. Execution reaches `_is_filter_in_scope_for_chart`
(`superset/charts/data/dashboard_filter_context.py:70-97`); line 81 treats `[]` as falsy,
so it skips explicit scope and falls back to `rootPath` logic.
4. For charts under `ROOT_ID`, line 96 returns True and the filter is treated as in-scope, so defaults are merged (`dashboard_filter_context.py:274-277`) and applied to the chart query unexpectedly.

Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** superset/charts/data/dashboard_filter_context.py
**Line:** 81:82
**Comment:**
Logic Error: The scope check treats an empty `chartsInScope` list as if the field were missing and falls back to `rootPath`, which can incorrectly apply a filter to charts that should receive no filter. Handle the presence of `chartsInScope` explicitly so `[]` means "in scope for no charts".
Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and please make it concise.

```python
assert rv.status_code == status_code


class TestGetChartDataWithDashboardFilter(BaseTestChartDataApi):
```
Suggestion: This new test class uses the same birth-names fixture/query flow that is explicitly skipped elsewhere in this file due to DuckDB incompatibility, but it is missing the same class-level skip marker. In CI environments using DuckDB, these tests can fail even when the feature code is correct. Add the same skip guard used by adjacent chart data flow test classes. [logic error]
Severity Level: Major ⚠️
- ❌ DuckDB chart-data CI lane fails on new tests.
- ⚠️ Dashboard-filter feature validation blocked by fixture incompatibility.

Current:

```python
class TestGetChartDataWithDashboardFilter(BaseTestChartDataApi):
```

Suggested:

```python
@pytest.mark.chart_data_flow
@pytest.mark.skip(
    reason=(
        "TODO: Fix test class to work with DuckDB example data format. "
        "Birth names fixture conflicts with new example data structure."
    )
)
class TestGetChartDataWithDashboardFilter(BaseTestChartDataApi):
```
Steps of Reproduction ✅
1. Run integration tests in the DuckDB chart-data environment (the same environment
already acknowledged as broken in this file via class-level skips at
`tests/integration_tests/charts/data/api_tests.py:168-173` and `:1165-1169` with reason
`"Birth names fixture conflicts with new example data structure."`).
2. Execute any test from `TestGetChartDataWithDashboardFilter` (class declaration at
`tests/integration_tests/charts/data/api_tests.py:1815`), e.g.
`test_get_data_with_dashboard_filter_context` at `:1851`.
3. The test path uses the same birth-names fixture chain
(`@pytest.mark.usefixtures("load_birth_names_dashboard_with_slices")` at `:1849`, `:1892`,
`:1937`, etc.) and the same `"Genders"` slice setup in `_setup_chart_with_query_context`
(`:1818-1820`).
4. Because this class lacks the existing DuckDB skip guard while adjacent chart-data
classes have it, pytest executes this known-incompatible fixture path and the test fails
in DuckDB runs before validating the new dashboard-filter behavior.Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** tests/integration_tests/charts/data/api_tests.py
**Line:** 1815:1815
**Comment:**
Logic Error: This new test class uses the same birth-names fixture/query flow that is explicitly skipped elsewhere in this file due to DuckDB incompatibility, but it is missing the same class-level skip marker. In CI environments using DuckDB, these tests can fail even when the feature code is correct. Add the same skip guard used by adjacent chart data flow test classes.
Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and please make it concise.
User description
SUMMARY
Addresses: #38133
This PR adds a feature to the /api/v1/chart/{id}/data GET endpoint, to enable you to specify an additional parameter: filters_dashboard_id. When you pass in the ID of a dashboard to this parameter, it will apply all the filters on the dashboard which are applicable to this chart.
It will ensure that:
- Filters are only applied to the chart when it is in the filter's scope.
- The returned result reflects the applied filters and includes an additional dashboard_filters object, which shows which filters exist on the dashboard and which were applied to your request. If a filter uses "defaultToFirstItem", it returns a status of "not_applied_uses_default_to_first_item_prequery" to make it very clear to the end user that the filter was not applied because of this option. Column info is also included to help users understand the impact each filter had on their query.
Example:
TESTING INSTRUCTIONS
Spin up Superset and create a filter on a dashboard. Make sure it is scoped to hit a chart you want to test. Hit the chart endpoint with filters_dashboard_id set to the ID of your dashboard with filters, e.g.
GET http://localhost:8088/api/v1/chart/89/data/?filters_dashboard_id=8
Check that the query text and results have updated, and that dashboard_filters includes all filters. Remove the scoping from the chart in the filter settings and run again; make sure the filter has not been applied. Also test removing the default value and setting "Select first filter value by default" to true, ensuring both show in dashboard_filters but are not applied.
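The request above can also be composed programmatically; a minimal sketch using only the standard library, with hypothetical chart and dashboard IDs:

```python
from urllib.parse import urlencode

chart_id = 89      # hypothetical chart id
dashboard_id = 8   # hypothetical dashboard id

base = f"http://localhost:8088/api/v1/chart/{chart_id}/data/"
url = f"{base}?{urlencode({'filters_dashboard_id': dashboard_id})}"
# url == "http://localhost:8088/api/v1/chart/89/data/?filters_dashboard_id=8"
```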
Run the tests:
ADDITIONAL INFORMATION
CodeAnt-AI Description
Apply dashboard native filter defaults to chart data API via filters_dashboard_id
What Changed
Impact
- ✅ Chart queries reflect dashboard default filters when requested
- ✅ Clearer dashboard filter metadata in chart data responses
- ✅ Fewer surprises when reproducing dashboard-filtered charts via API