feat(sourcemap-issues): Add ensure_sourcemap_detector() provisioning function#109749
Conversation
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Autofix Details
Bugbot Autofix prepared a fix for the issue found in the latest run.
- ✅ Fixed: Test doesn't actually verify cache usage
- Added a third call to ensure_sourcemap_detector and changed mock from objects.filter to objects.get to properly verify cache usage.
Or push these changes by commenting:
@cursor push aafa4e3840
Preview (aafa4e3840)
```diff
diff --git a/tests/sentry/processing_errors/test_provisioning.py b/tests/sentry/processing_errors/test_provisioning.py
--- a/tests/sentry/processing_errors/test_provisioning.py
+++ b/tests/sentry/processing_errors/test_provisioning.py
@@ -52,10 +52,11 @@
     def test_uses_cache_on_second_call(self) -> None:
         ensure_sourcemap_detector(self.project)
+        ensure_sourcemap_detector(self.project)
-        with patch.object(Detector.objects, "filter", wraps=Detector.objects.filter) as mock_filter:
+        with patch.object(Detector.objects, "get", wraps=Detector.objects.get) as mock_get:
             ensure_sourcemap_detector(self.project)
-            mock_filter.assert_not_called()
+            mock_get.assert_not_called()

     def test_separate_detectors_per_project(self) -> None:
         other_project = self.create_project()
```
Backend Test Failures
Lazily provisions a Detector with DataConditionGroup, DataConditions, and DetectorState for sourcemap configuration issue detection.
Force-pushed 0d4bb29 to e9a3ebe
…x cache test

The cache was never populated after creating a new detector, so the second call always hit the DB; populate the cache after creation. Also fix `test_uses_cache_on_second_call`, which mocked `Detector.objects.filter` even though `get_default_detector_for_project` uses `objects.get`, so the assertion passed vacuously.
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Bugbot Autofix is OFF. To automatically fix reported issues with cloud agents, enable autofix in the Cursor dashboard.
```python
    cache_key = get_detector_project_type_cache_key(project.id, slug)
    cache.set(cache_key, detector, Detector.CACHE_TTL)
```
Cache set before DB transaction commits risks inconsistency
Medium Severity
The `cache.set` call happens inside the `transaction.atomic()` block, meaning the cache write (to Redis) executes immediately while the DB transaction hasn't committed yet. If the DB commit subsequently fails, the cache will reference a Detector that doesn't exist in the database. Additionally, a concurrent caller could read from cache between `cache.set` and the transaction commit, receiving a detector whose related objects (condition group, conditions, state) aren't yet visible in the DB. The comparable `ensure_default_anomaly_detector` function avoids this by not setting the cache inside the transaction. The cache write could be moved after the `with` block or deferred via `transaction.on_commit`, consistent with the patterns in `sentry/workflow_engine/receivers/detector.py`.
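The timing hazard can be modeled without Django at all. A minimal sketch, using fake stand-ins (`FakeCache`, `FakeTransaction`, and the `provision_*` functions are illustrative, not Sentry code) to show why deferring the cache write to commit time matters:

```python
class FakeCache:
    """Stands in for Redis: writes are visible to everyone immediately,
    regardless of any in-flight DB transaction."""
    def __init__(self):
        self._store = {}
    def set(self, key, value):
        self._store[key] = value
    def get(self, key):
        return self._store.get(key)


class FakeTransaction:
    """Stands in for transaction.atomic(): on_commit callbacks run only
    if the block exits without an exception (i.e. the commit succeeds)."""
    def __init__(self):
        self._callbacks = []
    def on_commit(self, fn):
        self._callbacks.append(fn)
    def __enter__(self):
        return self
    def __exit__(self, exc_type, exc, tb):
        if exc_type is None:
            for fn in self._callbacks:
                fn()
        return False  # let exceptions propagate, like a rolled-back txn


cache = FakeCache()
db = {}

def provision_unsafe(project_id):
    # Cache write inside the "transaction": visible even if commit fails.
    with FakeTransaction():
        cache.set(project_id, "detector")
        raise RuntimeError("commit failed")

def provision_safe(project_id):
    # Defer the cache write until the transaction commits.
    with FakeTransaction() as txn:
        db[project_id] = "detector"
        txn.on_commit(lambda: cache.set(project_id, "detector"))

try:
    provision_unsafe(1)
except RuntimeError:
    pass
assert cache.get(1) == "detector" and 1 not in db  # stale cache, no DB row

provision_safe(2)
assert cache.get(2) == "detector" and db[2] == "detector"

try:
    with FakeTransaction() as txn:
        txn.on_commit(lambda: cache.set(3, "detector"))
        raise RuntimeError("commit failed")
except RuntimeError:
    pass
assert cache.get(3) is None  # nothing cached when the commit fails
```

In real Django code, `transaction.on_commit(lambda: cache.set(...))` gives the same guarantee: the cache only ever references rows that actually committed.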


