feat(configuration-issues): Making processing_errors queryable #109884
Conversation
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Autofix Details
Bugbot Autofix prepared a fix for the issue found in the latest run.
- ✅ Fixed: Parameter `additional_queries` silently dropped in table query. `ProcessingErrors.run_table_query` now forwards `additional_queries` into `rpc_dataset_common.TableQuery`, so caller-provided additional filters are no longer dropped.
Or push these changes by commenting:
@cursor push b11c4835be
Preview (b11c4835be)
```diff
diff --git a/src/sentry/snuba/processing_errors_rpc.py b/src/sentry/snuba/processing_errors_rpc.py
--- a/src/sentry/snuba/processing_errors_rpc.py
+++ b/src/sentry/snuba/processing_errors_rpc.py
@@ -46,6 +46,7 @@
                 sampling_mode=sampling_mode,
                 resolver=search_resolver or cls.get_resolver(params, config),
                 page_token=page_token,
+                additional_queries=additional_queries,
             ),
             params.debug,
         )
```
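The bug class the autofix targets is easy to reproduce in isolation: a wrapper accepts a parameter but never forwards it into the inner query object it builds. A minimal standalone sketch (the `TableQuery` and `run_table_query_*` names below are hypothetical stand-ins mirroring the diff, not Sentry's actual classes):

```python
from dataclasses import dataclass, field


@dataclass
class TableQuery:
    # Hypothetical stand-in for rpc_dataset_common.TableQuery
    query_string: str
    additional_queries: list = field(default_factory=list)


def run_table_query_buggy(query_string, additional_queries=None):
    # Bug: additional_queries is accepted but never forwarded
    return TableQuery(query_string=query_string)


def run_table_query_fixed(query_string, additional_queries=None):
    # Fix: forward the caller-provided filters into the inner query
    return TableQuery(
        query_string=query_string,
        additional_queries=additional_queries or [],
    )


# Caller-provided filters survive only in the fixed version
assert run_table_query_buggy("is:unresolved", ["error_type:invalid"]).additional_queries == []
assert run_table_query_fixed("is:unresolved", ["error_type:invalid"]).additional_queries == ["error_type:invalid"]
```

Nothing fails loudly in the buggy version, which is why this pattern tends to surface only when a caller notices their filters have no effect.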
wedamija
left a comment
The `received` field PR has merged, although idk if it even needs to be queryable. Doesn't hurt to add it though.
Lgtm, just a couple of nits
src/sentry/testutils/cases.py
Outdated
```python
from datetime import datetime, timedelta, timezone
from uuid import uuid4

from google.protobuf.timestamp_pb2 import Timestamp
from sentry_protos.snuba.v1.request_common_pb2 import TraceItemType
from sentry_protos.snuba.v1.trace_item_pb2 import TraceItem
```
src/sentry/testutils/cases.py
Outdated
```python
def store_processing_errors(self, processing_errors):
    """Store processing errors in the EAP dataset."""
    import requests
    from django.conf import settings

    files = {
        f"processing_error_{i}": item.SerializeToString()
        for i, item in enumerate(processing_errors)
    }
    response = requests.post(
        settings.SENTRY_SNUBA + EAP_ITEMS_INSERT_ENDPOINT,
        files=files,
    )
    assert response.status_code == 200
```
I added a method for storing these in https://github.com/getsentry/sentry/pull/109851/changes#diff-044678ee783c164a8acca43b4fa4a97e7402d231129865acf6a253e1670e922f, would be good to see if it works for your usecase so that we don't duplicate work
@wedamija `store_processing_errors` is replaceable by `produce_and_store_eap_items`; however, the latter tests producer correctness and requires passing producer-specific params that seem redundant for API tests that only need the data to exist for testing querying behaviour. It also requires more boilerplate for forming the event_data and errors schema like here: to be able to use alongside `produce_and_store_eap_items`. This is consistent with the rest of the test classes like uptime, profile functions, occurrences in the file.
Not too strong of an opinion though, lmk if you want me to just replace it haha
You can just use store_eap_items though. I'm going to factor out all the store_profile_functions, store_trace_metrics, etc etc in some other pr. It's unnecessary to create new functions for each item type, the file name passed isn't used. I think AI is just replicating some pattern
You're right actually, getting rid of store_processing_errors
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Bugbot Autofix is OFF. To automatically fix reported issues with cloud agents, enable autofix in the Cursor dashboard.
```python
assert response.status_code == 200, response.content
assert [attrs for time, attrs in response.data["data"]] == [
    [{"count": count}] for count in event_counts
]
```
test_count expects 6 time buckets, but 7 exist
Medium Severity
The test_count test asserts against 6 time buckets (event_counts = [6, 0, 6, 3, 0, 3]), but test_zerofill with the identical time range and interval expects 7 buckets ([[{"count": 0}]] * 7). The test_top_events_group_by_error_type also implicitly acknowledges 7 buckets by slicing [0:6]. The event_counts list in test_count is missing a trailing zero-count entry for the 7th bucket, so the assertion will fail when the response contains 7 data points.
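The off-by-one Bugbot describes follows from zerofill emitting one data point for every rollup-aligned timestamp from start to end, assuming both endpoints are included: a window spanning six intervals has seven edges. A standalone sketch (not Sentry's actual zerofill implementation):

```python
from datetime import datetime, timedelta, timezone


def bucket_starts(start, end, interval):
    """All rollup bucket timestamps, inclusive of both endpoints (zerofill-style)."""
    out = []
    ts = start
    while ts <= end:
        out.append(ts)
        ts += interval
    return out


start = datetime(2024, 1, 1, tzinfo=timezone.utc)
end = start + timedelta(hours=6)
buckets = bucket_starts(start, end, timedelta(hours=1))
assert len(buckets) == 7  # six one-hour intervals -> seven inclusive edges
```

Under that assumption, `event_counts` in `test_count` needs a trailing `0` to line up with the 7 zerofilled points that `test_zerofill` already expects.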
Summary
Registers `processing_errors` as a queryable EAP dataset so the events and events-stats APIs can query processing error data (e.g. for issue details charts). Used established patterns for other queryable datasets as a guide.
- `search/eap/processing_errors/` module with column, attribute and aggregate function definitions
- `ProcessingErrorsRPC` dataset class in `snuba/processing_errors_rpc.py` supporting table, timeseries, and top-events queries
- Registers the dataset in `types.py`, `constants.py`, `utils.py` and enables it in the `organization_events` and `organization_events_stats` endpoints (including top-events support)
- `ProcessingErrorTestCase` mixin with helpers to create and store processing error items

Test Plan
- `tests/sentry/search/eap/test_processing_errors.py` — validates query parsing (filters, negation, in-filter, AND/OR, empty)
- `tests/snuba/api/endpoints/test_organization_events_stats_processing_errors.py` — validates count, zerofill, top events grouped by `error_type`, filtering, and empty top events