
feat(api): Add filter_dashboard_id parameter to apply dashboard filters to chart/data endpoint#38638

Open
MDeadman wants to merge 3 commits into apache:master from MDeadman:feature/chart-data-api-dashboard-filter-support

Conversation


MDeadman commented Mar 13, 2026

User description

SUMMARY

Addresses: #38133

This PR adds a feature to the GET /api/v1/chart/{id}/data endpoint that lets you specify an additional parameter: filters_dashboard_id. When you pass a dashboard ID to this parameter, it applies all of that dashboard's filters which are applicable to the chart.

It will ensure that:

  • You have access to the dashboard
  • The chart exists on the dashboard

A filter will only be applied to the chart:

  • If the chart is in scope for the given filter
  • If the filter has a default value, mirroring the front-end behaviour when loading a dashboard
  • IMPORTANT CAVEAT: If the filter uses the "Select first filter value by default" option, it is NOT applied. That option requires a pre-query to fetch the first item in a column for use as the default value. Supporting it would require a re-implementation of the logic that powers the feature, and would create an inconsistency in results when using the QUERY type in the request, as we would not want to execute that pre-query with the QUERY setting. The downside of this approach is an inconsistency between the front-end behaviour and the API endpoint behaviour here, but after discussing with Beto and Kamil we figured this was the better approach.

The returned result will reflect the applied filters, and include an additional dashboard_filters object showing which filters exist on the dashboard and which were applied to your request. If a filter uses "defaultToFirstItem", it will return a status of "not_applied_uses_default_to_first_item_prequery" to make it very clear to the end user that the filter was not applied because of this option. Column info is also included to help users understand the impact each filter had on their query.

Example:

	"dashboard_filters": {
		"filters": [
			{
				"id": "NATIVE_FILTER-HM7ype0AH9IJEHcJ4vyhZ",
				"name": "Genre test",
				"status": "applied",
				"column": "genre"
			}
		]
	}
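For consumers of the endpoint, the dashboard_filters payload is straightforward to post-process. A minimal client-side sketch in Python (the summarize_dashboard_filters helper is illustrative, not part of the API):

```python
# Hypothetical client-side helper: split a dashboard_filters payload into
# applied vs. skipped filters. Any status other than "applied" (e.g.
# "not_applied_uses_default_to_first_item_prequery") counts as skipped.
def summarize_dashboard_filters(payload: dict) -> dict:
    applied, skipped = [], []
    for f in payload.get("filters", []):
        entry = {"name": f.get("name"), "column": f.get("column")}
        (applied if f.get("status") == "applied" else skipped).append(entry)
    return {"applied": applied, "skipped": skipped}


# The example payload from the PR description above.
payload = {
    "filters": [
        {
            "id": "NATIVE_FILTER-HM7ype0AH9IJEHcJ4vyhZ",
            "name": "Genre test",
            "status": "applied",
            "column": "genre",
        }
    ]
}
print(summarize_dashboard_filters(payload))
# {'applied': [{'name': 'Genre test', 'column': 'genre'}], 'skipped': []}
```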

TESTING INSTRUCTIONS

Spin up Superset and create a filter on a dashboard. Make sure it is scoped to hit a chart you want to test. Hit the chart endpoint with filters_dashboard_id set to the ID of your dashboard with filters, e.g.

GET http://localhost:8088/api/v1/chart/89/data/?filters_dashboard_id=8

Check that the query text and results have updated, and that dashboard_filters includes all filters. Remove the chart from the filter's scope in the filter settings and run again; make sure the filter has not been applied. Also test removing the default value, and setting "Select first filter value by default" to true, ensuring both filters show in dashboard_filters but are not applied.

Run the tests:

pytest tests/integration_tests/charts/data/api_tests.py
pytest tests/unit_tests/charts/test_dashboard_filter_context.py

ADDITIONAL INFORMATION

  • Has associated issue: #38133
  • Required feature flags:
  • Changes UI
  • Includes DB Migration (follow approval process in SIP-59)
    • Migration is atomic, supports rollback & is backwards-compatible
    • Confirm DB migration upgrade and downgrade tested
    • Runtime estimates and downtime expectations provided
  • Introduces new feature or API
  • Removes existing feature or API

CodeAnt-AI Description

Apply dashboard native filter defaults to chart data API via filters_dashboard_id

What Changed

  • GET /api/v1/chart/{id}/data accepts filters_dashboard_id; when provided, server-side applies in-scope dashboard native filters that have static defaults to the chart's query context and merges their extra form data into each query.
  • The response includes dashboard_filters metadata describing each in-scope filter and whether its default was applied, omitted, or omitted because it uses the "defaultToFirstItem" prequery behavior; invalid dashboard IDs return 400 and access-denied returns 403.
  • g.form_data is populated so dashboard defaults are available during SQL/template rendering; OpenAPI schema and response schemas updated and new unit/integration tests added to cover behaviors.
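The merge described in the first bullet can be sketched as a standalone function. This is a simplified illustration assuming dict-shaped query contexts; the two constants below are illustrative stand-ins for Superset's EXTRA_FORM_DATA_OVERRIDE_* mappings, not the real values:

```python
# Illustrative stand-ins for Superset's EXTRA_FORM_DATA_OVERRIDE_* constants;
# the real mappings live in the Superset codebase.
EXTRA_KEYS = ["where", "having"]
REGULAR_MAPPINGS = {"time_range": "time_range"}


def merge_extra_form_data(json_body: dict, efd: dict) -> None:
    """Merge dashboard extra_form_data into every query of a query context."""
    extra_filters = efd.get("filters", [])
    for query in json_body.get("queries", []):
        if extra_filters:
            # Normalize null/missing filters to a list before concatenating,
            # tagging merged entries so downstream code knows their origin.
            existing = query.get("filters") or []
            query["filters"] = existing + [
                {**f, "isExtra": True} for f in extra_filters
            ]
        extras = query.get("extras") or {}
        for key in EXTRA_KEYS:
            if key in efd:
                extras[key] = efd[key]
        if extras:
            query["extras"] = extras
        for src_key, target_key in REGULAR_MAPPINGS.items():
            if src_key in efd:
                query[target_key] = efd[src_key]
        query["extra_form_data"] = efd


body = {"queries": [{"filters": None}]}
merge_extra_form_data(
    body, {"filters": [{"col": "genre", "op": "IN", "val": ["pop"]}]}
)
```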

Impact

✅ Chart queries reflect dashboard default filters when requested
✅ Clearer dashboard filter metadata in chart data responses
✅ Fewer surprises when reproducing dashboard-filtered charts via API

💡 Usage Guide

Checking Your Pull Request

Every time you make a pull request, our system automatically looks through it. We check for security issues, mistakes in how you're setting up your infrastructure, and common code problems. We do this to make sure your changes are solid and won't cause any trouble later.

Talking to CodeAnt AI

Got a question or need a hand with something in your pull request? You can easily get in touch with CodeAnt AI right here. Just type the following in a comment on your pull request, and replace "Your question here" with whatever you want to ask:

@codeant-ai ask: Your question here

This lets you have a chat with CodeAnt AI about your pull request, making it easier to understand and improve your code.

Example

@codeant-ai ask: Can you suggest a safer alternative to storing this secret?

Preserve Org Learnings with CodeAnt

You can record team preferences so CodeAnt AI applies them in future reviews. Reply directly to the specific CodeAnt AI suggestion (in the same thread) and replace "Your feedback here" with your input:

@codeant-ai: Your feedback here

This helps CodeAnt AI learn and adapt to your team's coding style and standards.

Example

@codeant-ai: Do not flag unused imports.

Retrigger review

Ask CodeAnt AI to review the PR again, by typing:

@codeant-ai: review

Check Your Repository Health

To analyze the health of your code repository, visit our dashboard at https://app.codeant.ai. This tool helps you identify potential issues and areas for improvement in your codebase, ensuring your repository maintains high standards of code health.


bito-code-review bot commented Mar 13, 2026

Code Review Agent Run #d74eb2

Actionable Suggestions - 0
Review Details
  • Files reviewed - 5 · Commit Range: ca2a953..fba2d73
    • superset/charts/data/api.py
    • superset/charts/data/dashboard_filter_context.py
    • superset/charts/schemas.py
    • tests/integration_tests/charts/data/api_tests.py
    • tests/unit_tests/charts/test_dashboard_filter_context.py
  • Files skipped - 1
    • docs/static/resources/openapi.json - Reason: Filter setting
  • Tools
    • Whispers (Secret Scanner) - ✔︎ Successful
    • Detect-secrets (Secret Scanner) - ✔︎ Successful
    • MyPy (Static Code Analysis) - ✔︎ Successful
    • Astral Ruff (Static Code Analysis) - ✔︎ Successful

Bito Usage Guide

Commands

Type the following command in the pull request comment and save the comment.

  • /review - Manually triggers a full AI review.

  • /pause - Pauses automatic reviews on this pull request.

  • /resume - Resumes automatic reviews.

  • /resolve - Marks all Bito-posted review comments as resolved.

  • /abort - Cancels all in-progress reviews.

Refer to the documentation for additional commands.

Configuration

This repository uses Superset. You can customize the agent settings here or contact your Bito workspace admin at evan@preset.io.

Documentation & Help

AI Code Review powered by Bito

github-actions bot added the api (Related to the REST API) and doc (Anything related to documentation) labels Mar 13, 2026

netlify bot commented Mar 13, 2026

Deploy Preview for superset-docs-preview ready!

🔨 Latest commit ca2a953
🔍 Latest deploy log https://app.netlify.com/projects/superset-docs-preview/deploys/69b44f855bee080008ada0f1
😎 Deploy Preview https://deploy-preview-38638--superset-docs-preview.netlify.app


netlify bot commented Mar 13, 2026

Deploy Preview for superset-docs-preview ready!

🔨 Latest commit 7be7820
🔍 Latest deploy log https://app.netlify.com/projects/superset-docs-preview/deploys/69b7817d68dba90008186df6
😎 Deploy Preview https://deploy-preview-38638--superset-docs-preview.netlify.app

"$ref": "#/components/schemas/ChartDataResponseResult"
},
"type": "array"
},
Author

I was not sure of the best way to generate the updated OpenAPI docs. I found a command in the Superset CLI, but when I ran it, it added my new changes but also deleted a bunch of unrelated stuff, which made me nervous, so I just went ahead and manually updated the openapi.json file here.

Please let me know if there is a better approach!

Comment on lines +197 to +222
if dashboard_filter_context.extra_form_data:
    efd = dashboard_filter_context.extra_form_data
    extra_filters = efd.get("filters", [])

    for query in json_body.get("queries", []):
        if extra_filters:
            existing = query.get("filters", [])
            query["filters"] = existing + [
                {**f, "isExtra": True} for f in extra_filters
            ]

        extras = query.get("extras", {})
        for key in EXTRA_FORM_DATA_OVERRIDE_EXTRA_KEYS:
            if key in efd:
                extras[key] = efd[key]
        if extras:
            query["extras"] = extras

        for (
            src_key,
            target_key,
        ) in EXTRA_FORM_DATA_OVERRIDE_REGULAR_MAPPINGS.items():
            if src_key in efd:
                query[target_key] = efd[src_key]

        query["extra_form_data"] = efd
Author

I noticed there are other places in the codebase where we appear to be doing similar things. For example, there is a merge_extra_filters() function in utils/core:

def merge_extra_filters(form_data: dict[str, Any]) -> None:  # noqa: C901

but I was not sure how it was being used, and have not had the time to dig into whether it would be a viable or better approach.

This is not ideal in that we are re-implementing some logic here that exists in other forms on the front-end/back-end, but from my knowledge of the codebase right now I believe this is the best approach. Please let me know if you have thoughts on another approach!

Author

This also applies to the content in dashboard_filter_context.py. Let me know if you think there is a better approach!

Comment on lines +183 to +191
def _get_filter_target_column(filter_config: dict[str, Any]) -> str | None:
    """Extract the target column name from a native filter configuration."""
    if targets := filter_config.get("targets", []):
        column = targets[0].get("column", {})
        if isinstance(column, dict):
            return column.get("name")
        if isinstance(column, str):
            return column
    return None
Author

I included this target column info to help end users understand how dashboard filters were impacting their queries more precisely, but this might not be essential or super useful depending on the use case for this feature. Can remove if needed

Comment on lines +224 to +228
# We need to apply the form data to the global context as Jinja
# templating pulls form data from the request globally, so this
# fallback ensures it has the filters and extra_form_data applied
# when used in get_sqla_query which constructs the final query.
g.form_data = json_body
Author

This was painful to understand lol.

betodealmeida self-requested a review March 13, 2026 18:10
codeant-ai-for-open-source bot added the size:XXL (This PR changes 1000+ lines, ignoring generated files) label Mar 16, 2026
Comment on lines +185 to +186
filters_dashboard_id = request.args.get("filters_dashboard_id", type=int)
if filters_dashboard_id is not None:
Contributor

Suggestion: Invalid filters_dashboard_id values are silently treated as if the parameter was not provided because request.args.get(..., type=int) returns None on conversion failure. This causes incorrect 200 responses without applying filters instead of a client error; explicitly validate parsing and return 400 when the parameter is present but non-integer. [logic error]

Severity Level: Major ⚠️
- ⚠️ GET chart data silently ignores malformed dashboard filter ID.
- ⚠️ Clients receive unfiltered results without explicit error.
Suggested change
filters_dashboard_id = request.args.get("filters_dashboard_id", type=int)
if filters_dashboard_id is not None:
filters_dashboard_id: int | None = None
if "filters_dashboard_id" in request.args:
    try:
        filters_dashboard_id = int(request.args["filters_dashboard_id"])
    except (TypeError, ValueError):
        return self.response_400(
            message=_("`filters_dashboard_id` must be an integer")
        )
if filters_dashboard_id is not None:
Steps of Reproduction ✅
1. Hit GET chart data endpoint `/<int:pk>/data/` (`superset/charts/data/api.py:81`,
`get_data` at `:89`) with malformed query param:
`/api/v1/chart/<id>/data/?filters_dashboard_id=abc`.

2. In `get_data`, parsing uses `request.args.get("filters_dashboard_id", type=int)`
(`superset/charts/data/api.py:185`), which yields `None` for bad integer input; branch at
`:186` is skipped.

3. Because the dashboard filter branch is skipped, `get_dashboard_filter_context(...)`
(`:188-191`) is never called, so no validation/security/default-filter application occurs.

4. Endpoint still returns normal 200 data path; `dashboard_filters` metadata is only
attached when context exists (`superset/charts/data/api.py:534-535`), so client gets
silent unfiltered response instead of input error.
Prompt for AI Agent 🤖
This is a comment left during a code review.

**Path:** superset/charts/data/api.py
**Line:** 185:186
**Comment:**
	*Logic Error: Invalid `filters_dashboard_id` values are silently treated as if the parameter was not provided because `request.args.get(..., type=int)` returns `None` on conversion failure. This causes incorrect 200 responses without applying filters instead of a client error; explicitly validate parsing and return 400 when the parameter is present but non-integer.

Validate the correctness of the flagged issue. If correct, How can I resolve this? If you propose a fix, implement it and please make it concise.

Comment on lines +203 to +206
existing = query.get("filters", [])
query["filters"] = existing + [
    {**f, "isExtra": True} for f in extra_filters
]
Contributor

Suggestion: filters can be None in query objects (schema allows null), so concatenating it with a list raises a TypeError at runtime when dashboard filters are applied. Normalize missing/null filters to an empty list before concatenation. [null pointer]

Severity Level: Critical 🚨
- ❌ GET chart data crashes when saved query filters null.
- ⚠️ Dashboard default filters cannot be applied reliably.
Suggested change
existing = query.get("filters", [])
query["filters"] = existing + [
    {**f, "isExtra": True} for f in extra_filters
]
existing = query.get("filters") or []
query["filters"] = existing + [
    {**f, "isExtra": True} for f in extra_filters
]
Steps of Reproduction ✅
1. Use chart save/update flow where `query_context` is accepted as raw JSON string
(`ChartPostSchema.query_context` in `superset/charts/schemas.py:213`; update persists
properties via `ChartDAO.update` in `superset/commands/chart/update.py:73`) and store a
query object containing `"filters": null`.

2. Call GET chart data endpoint with dashboard filter application enabled
(`superset/charts/data/api.py:81`, `:185-186`) and a dashboard whose defaults produce
`extra_form_data.filters` (`:199-203`).

3. Loop over saved query context (`superset/charts/data/api.py:201`) executes `existing =
query.get("filters", [])` (`:203`), returning `None` because key exists with null.

4. Next line performs `None + list` (`:204-206`), raising `TypeError` before request
validation/response handling, causing server error on this endpoint path.
Prompt for AI Agent 🤖
This is a comment left during a code review.

**Path:** superset/charts/data/api.py
**Line:** 203:206
**Comment:**
	*Null Pointer: `filters` can be `None` in query objects (schema allows null), so concatenating it with a list raises a `TypeError` at runtime when dashboard filters are applied. Normalize missing/null filters to an empty list before concatenation.

Validate the correctness of the flagged issue. If correct, How can I resolve this? If you propose a fix, implement it and please make it concise.

Comment on lines +208 to +211
extras = query.get("extras", {})
for key in EXTRA_FORM_DATA_OVERRIDE_EXTRA_KEYS:
    if key in efd:
        extras[key] = efd[key]
Contributor

Suggestion: extras can be None in query objects (schema allows null), and assigning keys to None raises a runtime exception when extra form-data overrides are present. Coerce null extras to a dict before writing override keys. [null pointer]

Severity Level: Critical 🚨
- ❌ GET chart data fails when query extras is null.
- ⚠️ Time/extras overrides from dashboard defaults are lost.
Suggested change
extras = query.get("extras", {})
for key in EXTRA_FORM_DATA_OVERRIDE_EXTRA_KEYS:
    if key in efd:
        extras[key] = efd[key]
extras = query.get("extras") or {}
for key in EXTRA_FORM_DATA_OVERRIDE_EXTRA_KEYS:
    if key in efd:
        extras[key] = efd[key]
Steps of Reproduction ✅
1. Persist a chart `query_context` containing `"extras": null` (allowed nullable query
object fields: `extras` in `superset/charts/schemas.py:1337-1340`; chart query_context
accepted as JSON string at `superset/charts/schemas.py:213`).

2. Invoke GET `/api/v1/chart/<id>/data/?filters_dashboard_id=<dashboard_id>` so dashboard
context is loaded (`superset/charts/data/api.py:185-191`) and `extra_form_data` is present
(`:197-199`).

3. Ensure `efd` includes any override key iterated by
`EXTRA_FORM_DATA_OVERRIDE_EXTRA_KEYS` (`superset/charts/data/api.py:209-211`), then
mutation path runs.

4. `extras = query.get("extras", {})` returns `None` when key exists as null (`:208`), and
assignment `extras[key] = ...` at `:211` raises `TypeError`, breaking request execution.
Prompt for AI Agent 🤖
This is a comment left during a code review.

**Path:** superset/charts/data/api.py
**Line:** 208:211
**Comment:**
	*Null Pointer: `extras` can be `None` in query objects (schema allows null), and assigning keys to `None` raises a runtime exception when extra form-data overrides are present. Coerce null `extras` to a dict before writing override keys.

Validate the correctness of the flagged issue. If correct, How can I resolve this? If you propose a fix, implement it and please make it concise.

Comment on lines +81 to +82
if charts_in_scope := filter_config.get("chartsInScope"):
    return chart_id in charts_in_scope
Contributor

Suggestion: The scope check treats an empty chartsInScope list as if the field were missing and falls back to rootPath, which can incorrectly apply a filter to charts that should receive no filter. Handle the presence of chartsInScope explicitly so [] means "in scope for no charts". [logic error]

Severity Level: Critical 🚨
- ❌ Scoped-out filters can still affect chart SQL.
- ⚠️ Dashboard filter metadata reports misleading applied status.
- ⚠️ GET chart data differs from frontend scope semantics.
Suggested change
if charts_in_scope := filter_config.get("chartsInScope"):
    return chart_id in charts_in_scope
if "chartsInScope" in filter_config:
    charts_in_scope = filter_config.get("chartsInScope")
    if charts_in_scope is not None:
        return chart_id in charts_in_scope
Steps of Reproduction ✅
1. Call GET `api/v1/chart/<chart_id>/data/?filters_dashboard_id=<dashboard_id>`
(entrypoint in `superset/charts/data/api.py:185-190`), which invokes
`get_dashboard_filter_context(...)`.

2. Ensure dashboard metadata contains a native filter with explicit `"chartsInScope": []`
and default `scope.rootPath` (this field is persisted when present; see helper behavior in
`tests/unit_tests/charts/test_dashboard_filter_context.py:101-103`).

3. Execution reaches `_is_filter_in_scope_for_chart`
(`superset/charts/data/dashboard_filter_context.py:70-97`); line 81 treats `[]` as falsy,
so it skips explicit scope and falls back to `rootPath` logic.

4. For charts under `ROOT_ID`, line 96 returns True and filter is treated in-scope, so
defaults are merged (`dashboard_filter_context.py:274-277`) and applied to chart query
unexpectedly.
Prompt for AI Agent 🤖
This is a comment left during a code review.

**Path:** superset/charts/data/dashboard_filter_context.py
**Line:** 81:82
**Comment:**
	*Logic Error: The scope check treats an empty `chartsInScope` list as if the field were missing and falls back to `rootPath`, which can incorrectly apply a filter to charts that should receive no filter. Handle the presence of `chartsInScope` explicitly so `[]` means "in scope for no charts".

Validate the correctness of the flagged issue. If correct, How can I resolve this? If you propose a fix, implement it and please make it concise.
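The corrected semantics — an explicitly empty chartsInScope means "in scope for no charts" — can be checked in isolation. A standalone sketch, with the rootPath fallback simplified to a boolean flag (in_root_scope is an assumption of this sketch, not the real signature):

```python
def is_filter_in_scope_for_chart(
    chart_id: int, filter_config: dict, in_root_scope: bool = True
) -> bool:
    # An explicit chartsInScope wins even when empty: [] means "no charts".
    # Only an absent (or null) field falls back to rootPath scoping, which
    # this sketch reduces to the in_root_scope flag.
    if "chartsInScope" in filter_config:
        charts_in_scope = filter_config["chartsInScope"]
        if charts_in_scope is not None:
            return chart_id in charts_in_scope
    return in_root_scope


assert is_filter_in_scope_for_chart(89, {"chartsInScope": [89, 90]})
assert not is_filter_in_scope_for_chart(89, {"chartsInScope": []})
assert is_filter_in_scope_for_chart(89, {})  # falls back to rootPath
```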

assert rv.status_code == status_code


class TestGetChartDataWithDashboardFilter(BaseTestChartDataApi):
Contributor

Suggestion: This new test class uses the same birth-names fixture/query flow that is explicitly skipped elsewhere in this file due to DuckDB incompatibility, but it is missing the same class-level skip marker. In CI environments using DuckDB, these tests can fail even when the feature code is correct. Add the same skip guard used by adjacent chart data flow test classes. [logic error]

Severity Level: Major ⚠️
- ❌ DuckDB chart-data CI lane fails on new tests.
- ⚠️ Dashboard-filter feature validation blocked by fixture incompatibility.
Suggested change
class TestGetChartDataWithDashboardFilter(BaseTestChartDataApi):
@pytest.mark.chart_data_flow
@pytest.mark.skip(
    reason=(
        "TODO: Fix test class to work with DuckDB example data format. "
        "Birth names fixture conflicts with new example data structure."
    )
)
class TestGetChartDataWithDashboardFilter(BaseTestChartDataApi):
Steps of Reproduction ✅
1. Run integration tests in the DuckDB chart-data environment (the same environment
already acknowledged as broken in this file via class-level skips at
`tests/integration_tests/charts/data/api_tests.py:168-173` and `:1165-1169` with reason
`"Birth names fixture conflicts with new example data structure."`).

2. Execute any test from `TestGetChartDataWithDashboardFilter` (class declaration at
`tests/integration_tests/charts/data/api_tests.py:1815`), e.g.
`test_get_data_with_dashboard_filter_context` at `:1851`.

3. The test path uses the same birth-names fixture chain
(`@pytest.mark.usefixtures("load_birth_names_dashboard_with_slices")` at `:1849`, `:1892`,
`:1937`, etc.) and the same `"Genders"` slice setup in `_setup_chart_with_query_context`
(`:1818-1820`).

4. Because this class lacks the existing DuckDB skip guard while adjacent chart-data
classes have it, pytest executes this known-incompatible fixture path and the test fails
in DuckDB runs before validating the new dashboard-filter behavior.
Prompt for AI Agent 🤖
This is a comment left during a code review.

**Path:** tests/integration_tests/charts/data/api_tests.py
**Line:** 1815:1815
**Comment:**
	*Logic Error: This new test class uses the same birth-names fixture/query flow that is explicitly skipped elsewhere in this file due to DuckDB incompatibility, but it is missing the same class-level skip marker. In CI environments using DuckDB, these tests can fail even when the feature code is correct. Add the same skip guard used by adjacent chart data flow test classes.

Validate the correctness of the flagged issue. If correct, How can I resolve this? If you propose a fix, implement it and please make it concise.
