Conversation
Co-authored-by: MasumRab <8943353+MasumRab@users.noreply.github.com>
Co-authored-by: MasumRab <8943353+MasumRab@users.noreply.github.com>
… runner Co-authored-by: MasumRab <8943353+MasumRab@users.noreply.github.com>
Co-authored-by: MasumRab <8943353+MasumRab@users.noreply.github.com>
|
|
|
🤖 Hi @MasumRab, I've received your request, and I'm working on it now! You can track my progress in the logs for more details. |
|
🤖 I'm sorry @MasumRab, but I was unable to process your request. Please see the logs for more details. |
There was a problem hiding this comment.
Code Review
This pull request refactors the launcher and service setup scripts to resolve merge conflicts, improve modularity through helper functions, and standardize constants like PACKAGE_JSON. Key changes include the consolidation of environment validation logic, the addition of docstrings, and a fix for a regex validation in the backend startup. Review feedback identifies a critical NameError in the refactored _run_services function due to a missing args parameter and suggests restoring a fallback mechanism for notmuch installation to maintain robust dependency management.
| run_command( | ||
| [python_exe, "-m", "pip", "install", f"notmuch=={major_minor}"], | ||
| f"Installing notmuch {major_minor} to match system", | ||
| ) |
There was a problem hiding this comment.
The logic to install notmuch has been simplified, but it lost a useful fallback mechanism. The previous version attempted to install the latest version of notmuch if installing the system-matching version failed. Restoring this fallback would make the setup more robust.
# Try to install matching version, fallback to latest if not available
if not run_command(
[python_exe, "-m", "pip", "install", f"notmuch=={major_minor}"],
f"Installing notmuch {major_minor} to match system",
):
logger.warning(f"Could not install notmuch version {major_minor}, trying latest version as a fallback.")
run_command(
[python_exe, "-m", "pip", "install", "notmuch"],
"Installing latest notmuch version as a fallback",
)Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
|
Important Review skippedToo many files! This PR contains 160 files, which is 10 over the limit of 150. ⚙️ Run configurationConfiguration used: Organization UI Review profile: CHILL Plan: Pro Run ID: 📒 Files selected for processing (160)
You can disable this status message by setting the Use the checkbox below for a quick retry:
WalkthroughThis PR standardizes CI/tool invocation (switches to Changes
Sequence Diagram(s)(Skipped — changes are primarily formatting with a few isolated helper additions; no new multi-component sequential flow requiring visualization.) Estimated code review effort🎯 3 (Moderate) | ⏱️ ~25 minutes Possibly related PRs
Poem
✨ Finishing Touches🧪 Generate unit tests (beta)
|
- Remove duplicate UserLogin class definition in modules/auth/routes.py - Remove duplicate get_data_source import in modules/categories/routes.py - Fix bare except in modules/new_ui/backend_adapter.py - Remove unused asyncio import in modules/system_status/__init__.py - Fix NLPEngine import in src/backend/extensions/example/example.py - Fix bare except in src/resolution/auto_resolver.py - Run ruff format for code consistency Co-authored-by: openhands <openhands@all-hands.dev>
| # Atomic write | ||
| temp_path = file_path.with_suffix(".tmp") | ||
| with open(temp_path, 'w') as f: | ||
| with open(temp_path, "w") as f: |
Check failure
Code scanning / CodeQL
Uncontrolled data used in path expression High
Copilot Autofix
AI 4 days ago
Copilot could not generate an autofix suggestion
Copilot could not generate an autofix suggestion for this alert. Try pushing a new commit or if the problem persists contact support.
| # Remove script tags | ||
| html_content = re.sub(r'<script[^>]*>.*?</script>', '', html_content, flags=re.DOTALL | re.IGNORECASE) | ||
| html_content = re.sub( | ||
| r"<script[^>]*>.*?</script>", "", html_content, flags=re.DOTALL | re.IGNORECASE |
Check failure
Code scanning / CodeQL
Bad HTML filtering regexp High
Show autofix suggestion
Hide autofix suggestion
Copilot Autofix
AI 4 days ago
In general, the robust way to fix regex-based HTML sanitization issues is to avoid using regular expressions to parse HTML and instead use a proper HTML parsing and sanitization library. Such libraries understand browser parsing rules, handle malformed tags, attributes, and different encodings, and are much less prone to bypasses than custom expressions.
For this specific function in modules/new_ui/utils.py, the best fix without changing the intended functionality (“Remove potentially dangerous HTML tags”) is to replace the current regex-based removals with a DOM-based sanitizer. A widely used option in Python is bleach, which wraps html5lib and provides configurable whitelists for tags and attributes. We can rewrite sanitize_html to parse the input HTML and then only allow a safe subset of tags and attributes, effectively stripping script/style tags, event handlers, and other dangerous constructs in a more complete and future-proof way. Concretely, we will: (1) import bleach at the top of the file, and (2) replace the body of sanitize_html with a call to bleach.clean, configuring tags, attributes, and protocols to keep the function’s semantics (return cleaned HTML, not plain text). No other functions in this file need to be changed.
| @@ -7,7 +7,9 @@ | ||
| from datetime import UTC, datetime | ||
| from typing import Any | ||
|
|
||
| import bleach | ||
|
|
||
|
|
||
| def clean_text(text: str | None) -> str: | ||
| """Clean and normalize text for analysis""" | ||
| if not text: | ||
| @@ -164,19 +165,36 @@ | ||
| if not html_content: | ||
| return "" | ||
|
|
||
| # Remove script tags | ||
| html_content = re.sub( | ||
| r"<script[^>]*>.*?</script>", "", html_content, flags=re.DOTALL | re.IGNORECASE | ||
| ) | ||
| # Use a well-tested HTML sanitization library instead of regex | ||
| # Allow only a minimal set of safe tags and attributes. | ||
| allowed_tags = [ | ||
| "b", | ||
| "strong", | ||
| "i", | ||
| "em", | ||
| "u", | ||
| "br", | ||
| "p", | ||
| "ul", | ||
| "ol", | ||
| "li", | ||
| "span", | ||
| "a", | ||
| ] | ||
|
|
||
| # Remove style tags | ||
| html_content = re.sub( | ||
| r"<style[^>]*>.*?</style>", "", html_content, flags=re.DOTALL | re.IGNORECASE | ||
| ) | ||
| allowed_attributes = { | ||
| "a": ["href", "title"], | ||
| "span": ["class"], | ||
| "p": ["class"], | ||
| } | ||
|
|
||
| # Remove event handlers | ||
| html_content = re.sub( | ||
| r'\s+on\w+\s*=\s*["\'][^"\']*["\']', "", html_content, flags=re.IGNORECASE | ||
| # Bleach will strip disallowed tags (including <script>, <style>, etc.) | ||
| # and unsafe attributes (including event handlers like onclick). | ||
| cleaned = bleach.clean( | ||
| html_content, | ||
| tags=allowed_tags, | ||
| attributes=allowed_attributes, | ||
| protocols=["http", "https", "mailto"], | ||
| strip=True, | ||
| ) | ||
|
|
||
| return html_content | ||
| return cleaned |
| @@ -1,4 +1,5 @@ | ||
| gradio | ||
| bleach==6.3.0 | ||
| plotly | ||
| psutil | ||
| pandas |
| Package | Version | Security advisories |
| bleach (pypi) | 6.3.0 | None |
- Remove unused dependencies: unimport, pip-autoremove - Update core dependencies to stable versions compatible with Python 3.11+ - Maintain compatibility with existing codebase Co-authored-by: openhands <openhands@all-hands.dev>
There was a problem hiding this comment.
Actionable comments posted: 17
🧹 Nitpick comments (23)
src/backend/python_nlp/ai_training.py (1)
66-68: Preserve root cause by chaining the caught exception.Please raise with
from eso traceback retains the originalKeyErrorcontext during debugging.Suggested diff
except KeyError as e: raise ValueError( f"Missing required variable {e} for template '{template_name}'" - ) + ) from e🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend/python_nlp/ai_training.py` around lines 66 - 68, The ValueError raised for missing template variables should preserve the original KeyError context by chaining the exception; update the raise in the template handling code to re-raise the ValueError with "from e" (i.e., raise ValueError(f"Missing required variable {e} for template '{template_name}'") from e) so the original KeyError is retained in the traceback—apply this change where the current raise references e and template_name.src/backend/plugins/email_filter_node.py (1)
54-56: Consider fully expanding the multi-line call format.While the current multi-line formatting is valid, placing each argument on its own line with a trailing comma would better align with the PR's objective of "multi-line layouts with trailing commas for consistency" and improve diff clarity for future changes.
♻️ Proposed formatting adjustment
return self.run( - emails=data["emails"], filter_criteria=data["filter_criteria"] + emails=data["emails"], + filter_criteria=data["filter_criteria"], )🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend/plugins/email_filter_node.py` around lines 54 - 56, Change the single-line call to a fully-expanded multi-line call with each argument on its own line and trailing commas for consistency: update the return statement that invokes self.run(...) so that the emails and filter_criteria kwargs are each on their own line (reference the self.run call and the emails and filter_criteria argument names) and include a trailing comma after the last argument.src/backend/python_nlp/smart_retrieval.py (2)
189-190: Optional: Consider reducing trailing blank lines.The multiple blank lines after the
_save_checkpointmethod are more than typical Python convention (PEP 8 suggests 2 blank lines between top-level definitions). However, this is a very minor style point.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend/python_nlp/smart_retrieval.py` around lines 189 - 190, Reduce excessive trailing blank lines after the _save_checkpoint method in smart_retrieval.py to follow PEP8-style conventions (use two blank lines between top-level definitions); locate the _save_checkpoint function and remove the extra blank lines that follow it so there are only the standard two blank lines before the next top-level definition.
85-86: Consider removing the standalone comment marker.Line 86 contains a solitary
#with no comment text. If this is intentional spacing, consider using a blank line instead for clarity. If it's unintended, it can be removed.🧹 Suggested cleanup
if conn: conn.close() - - # + def _store_credentials(self, creds: Credentials):🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend/python_nlp/smart_retrieval.py` around lines 85 - 86, Remove the solitary '#' comment marker in src/backend/python_nlp/smart_retrieval.py (it appears as a standalone line in the snippet) — either delete that lone '#' or replace it with an actual blank line for spacing; no code change beyond removing the unnecessary comment marker is needed and no functional symbols are affected..github/workflows/pr-check.yml (1)
87-90: Inconsistent tool invocation: some steps still useuv runwhile others usepython -m.Lines 88-89 (
py_compile) and line 107 (bandit) still useuv run, while Ruff (lines 81, 84) and pytest (line 93) switched topython -m. This inconsistency may cause confusion. Consider aligning all invocations to usepython -mfor consistency with the rest of the workflow changes.♻️ Proposed fix for consistency
- name: Python compile check run: | - uv run python -m py_compile src/**/*.py setup/*.py - uv run python -m py_compile modules/**/*.py + python -m py_compile src/**/*.py setup/*.py + python -m py_compile modules/**/*.py- name: Bandit security check - run: uv run bandit -r src/ modules/ setup/ -x tests/ + run: python -m bandit -r src/ modules/ setup/ -x tests/Also applies to: 106-107
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/pr-check.yml around lines 87 - 90, The workflow uses mixed invocations: replace the remaining "uv run" usages with the consistent "python -m" form so all tool runs match the rest of the file; specifically update the py_compile steps (the lines invoking "uv run python -m py_compile" for src/**/*.py and modules/**/*.py) and the bandit invocation (the line currently using "uv run bandit") to use "python -m" instead of "uv run" so all lint/test/security steps use the same invocation style.src/backend/python_backend/model_routes.py (1)
66-68: Consider documenting the 500 response in the route decorator.While this is a pre-existing issue (not introduced by the formatting change), SonarCloud correctly flags that the 500 status code should be documented in the
responsesparameter for accurate OpenAPI documentation.♻️ Suggested improvement
-@router.post("/api/models/{model_name}/unload", response_model=dict) +@router.post( + "/api/models/{model_name}/unload", + response_model=dict, + responses={500: {"description": "Failed to unload model"}}, +) async def unload_model(The same applies to
load_modelon line 31.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend/python_backend/model_routes.py` around lines 66 - 68, The route decorators for load_model and unload_model need to document the 500 response in their OpenAPI metadata: update the decorator(s) that wrap the load_model and unload_model functions to include a responses={"500": {"description": "Internal server error: failed to load/unload model"}} entry (or similar) so the HTTPException(status_code=500, ...) is reflected in the OpenAPI docs; modify the decorator arguments for the functions named load_model and unload_model to add the 500 response description..github/workflows/ci.yml (1)
37-37: Switching fromuv synctouv pip install --systemmay affect reproducibility.Using
uv pip installinstead ofuv syncbypasses lockfile-based dependency resolution. Combined with--system, this installs packages directly into the system Python rather than an isolated environment. Consider whether this trade-off (potentially faster installs vs. reproducible builds) is intentional for CI.If lockfile reproducibility is desired,
uv sync --all-extraswould ensure consistent dependency versions across runs.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/ci.yml at line 37, The CI step currently uses the command "uv pip install --system -e .[dev,ml,data,viz,db,google]" which bypasses lockfile resolution and installs into the system Python; change this to use udeps lockfile-aware sync to preserve reproducible builds by replacing the command with a lockfile-based sync invocation (e.g., use "uv sync --all-extras" or the project's equivalent sync command) so dependencies are installed consistently in the CI environment instead of via direct pip system installs..github/workflows/push-check.yml (1)
79-81: Inconsistent invocation pattern in compile check step.Lines 80-81 still use
uv run python -m py_compilewhile the rest of the workflow switched to directpython -minvocation. For consistency with the lint steps (lines 48, 51), consider updating to:♻️ Suggested fix
- name: Python compile check run: | - uv run python -m py_compile src/**/*.py setup/*.py - uv run python -m py_compile modules/**/*.py || true + python -m py_compile src/**/*.py setup/*.py + python -m py_compile modules/**/*.py || true🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/push-check.yml around lines 79 - 81, The workflow uses inconsistent invocation for the py_compile step: replace the two occurrences of "uv run python -m py_compile src/**/*.py setup/*.py" and "uv run python -m py_compile modules/**/*.py || true" with direct invocations "python -m py_compile src/**/*.py setup/*.py" and "python -m py_compile modules/**/*.py || true" so the compile checks match the rest of the file's direct "python -m" pattern (mirror how lint steps call python).src/core/model_routes.py (2)
380-382: Same recommendation: UseAnnotatedtype hints for dependency injection.This endpoint also triggers the SonarCloud warning. Applying the
ModelManagerDeptype alias suggested above would resolve this consistently.♻️ Proposed refactor
`@router.get`("/sentiment/model") -async def get_sentiment_model( - manager: DynamicModelManager = Depends(get_model_manager), -): +async def get_sentiment_model(manager: ModelManagerDep): """Get the best available sentiment analysis model."""🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/model_routes.py` around lines 380 - 382, The dependency injection in get_sentiment_model uses a plain parameter type causing SonarCloud warnings; switch to the Annotated alias (e.g., ModelManagerDep = Annotated[DynamicModelManager, Depends(get_model_manager)]) and update the function signature to use ModelManagerDep for the manager parameter so the dependency is expressed via Annotated (reference the get_sentiment_model function and the ModelManagerDep alias).
367-369: Consider usingAnnotatedtype hints for FastAPI dependency injection.SonarCloud flags this as a best practice issue. Modern FastAPI (2.0+) recommends using
Annotatedtype hints for cleaner dependency injection.♻️ Proposed refactor using Annotated
+from typing import Annotated + +# Define reusable dependency type +ModelManagerDep = Annotated[DynamicModelManager, Depends(get_model_manager)] + `@router.get`("/available", response_model=List[dict]) -async def get_available_models( - manager: DynamicModelManager = Depends(get_model_manager), -): +async def get_available_models(manager: ModelManagerDep): """Get list of available models for AI engine integration."""This pattern can be applied consistently across all endpoints in this file for better maintainability.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/model_routes.py` around lines 367 - 369, The get_available_models endpoint currently types the dependency parameter as manager: DynamicModelManager = Depends(get_model_manager); update it to use Python's typing.Annotated for FastAPI dependency injection (e.g., manager: Annotated[DynamicModelManager, Depends(get_model_manager)]) and apply the same pattern to other endpoint parameters in this file (replace occurrences where a default Depends(...) is used) so type hints and dependency metadata are separated per FastAPI 2.x best practices.src/core/database.py (2)
463-490:asyncis unnecessary in_prepare_new_email_record.Line 463 defines this as
async, but the body is fully synchronous. This adds coroutine overhead and complexity in callers.Suggested refactor
- async def _prepare_new_email_record( + def _prepare_new_email_record( self, email_data: Dict[str, Any] ) -> Dict[str, Any]: @@ - full_email_record = await self._prepare_new_email_record(email_data) + full_email_record = self._prepare_new_email_record(email_data)🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/database.py` around lines 463 - 490, The function _prepare_new_email_record is defined as async but contains only synchronous logic; remove the async keyword from its definition to make it a regular synchronous function and update its signature/return type accordingly, then find all callers of _prepare_new_email_record and remove any unnecessary awaits (or call it directly) so callers treat it as a normal function rather than a coroutine; reference the function name _prepare_new_email_record to locate the change and adjust any tests or call sites that expect a coroutine.
827-912:search_emails_with_limitis too complex for maintainability.This function is carrying multiple responsibilities (cache lookup, light-field matching, heavy-content I/O lookup, index repair, result shaping). Split into focused helpers to reduce cognitive complexity and make test coverage easier.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/database.py` around lines 827 - 912, The search_emails_with_limit function is doing too many things; refactor it by extracting focused helper methods to improve readability and testability: create _is_match_in_light(email_light, search_term_lower) to encapsulate the _search_index and subject/sender checks, _is_match_in_cached_content(email_id, search_term_lower) to wrap caching_manager.get_email_content logic, and _is_match_in_disk_content(email_id, search_term_lower) that uses _content_available_index, _get_email_content_path and asyncio.to_thread(self._read_content_sync, ...) and also repairs the index when a file is missing; keep caching logic using caching_manager.get_query_result / put_query_result in the main method and call _get_sorted_emails and _add_category_details only for orchestration, not business logic.src/core/workflow_engine.py (1)
600-635: Consider decomposing cleanup schedule calculation.
_calculate_cleanup_schedulehas deep nested loops/conditionals and is hard to reason about. Extract dependency checks into helper(s) or precompute adjacency/reachability to simplify and improve readability.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/workflow_engine.py` around lines 600 - 635, The _calculate_cleanup_schedule method is hard to read due to deeply nested loops; refactor by extracting the dependency checks into one or two helpers and/or precomputing adjacency/reachability maps: create a helper like _build_adjacency_map(self) that converts self.workflow.connections into a dict mapping node_id -> set(of downstream node_ids), and a helper like _is_reachable(self, src, target, adjacency) that checks reachability (BFS/DFS) using that map; then rewrite _calculate_cleanup_schedule to iterate execution_order and for each prev_node_id use _is_reachable(prev_node_id, subsequent_node_id, adjacency) (or a precomputed transitive closure) to decide if a prev_node_id is still needed, simplifying the nested loops and replacing direct references to self.workflow.connections with the adjacency map.src/core/data/database_source.py (1)
21-27: Align nullable parameter annotations withNonedefaults.
category_idandis_unreaddefault toNone, but are typed as non-nullableint/bool. The abstract interface indata_source.pycorrectly usesOptional[int]andOptional[bool], and the implementation should match.Suggested fix
- async def get_emails( - self, - limit: int = 100, - offset: int = 0, - category_id: int = None, - is_unread: bool = None, - ) -> List[Dict[str, Any]]: + async def get_emails( + self, + limit: int = 100, + offset: int = 0, + category_id: int | None = None, + is_unread: bool | None = None, + ) -> List[Dict[str, Any]]:🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/data/database_source.py` around lines 21 - 27, The get_emails method signature declares category_id: int and is_unread: bool but defaults them to None; update the annotations to Optional[int] and Optional[bool] and add Optional to the typing imports so the implementation matches the abstract interface (data_source.py) and avoids type mismatches for get_emails.src/backend/python_backend/dependencies.py (1)
54-60: Consider alternative formatting for multi-line global statement.While syntactically valid, using backslash continuations for the
globalstatement is unusual and can be fragile (trailing whitespace after\breaks it). An alternative is to keep it on one line or use multipleglobalstatements, which is more explicit:Alternative formatting options
Option 1 - Multiple global statements (more explicit):
- global \ - _model_manager_instance, \ - _ai_engine_instance, \ - _filter_manager_instance, \ - _workflow_engine_instance, \ - _plugin_manager_instance, \ - _gmail_service_instance + global _model_manager_instance + global _ai_engine_instance + global _filter_manager_instance + global _workflow_engine_instance + global _plugin_manager_instance + global _gmail_service_instanceOption 2 - Single line (if line length permits):
global _model_manager_instance, _ai_engine_instance, _filter_manager_instance, _workflow_engine_instance, _plugin_manager_instance, _gmail_service_instance🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend/python_backend/dependencies.py` around lines 54 - 60, Replace the multi-line global statement that uses backslash continuations with a safer format: either a single-line global declaration for _model_manager_instance, _ai_engine_instance, _filter_manager_instance, _workflow_engine_instance, _plugin_manager_instance, _gmail_service_instance or separate explicit global statements (one per name) to avoid fragile backslash continuation; update the global usage in the module (where those symbols are declared/used) to the chosen format so the semantics remain identical but formatting is more robust.modules/new_ui/backend_adapter.py (1)
124-131: Verify path sanitization is sufficient for security.CodeQL flags potential path traversal at lines 129 and 150. While the sanitization
"".join(x for x in key if x.isalnum() or x in "_-")removes path separators and special characters, consider adding an explicit check to ensure the resolved path stays withinDATA_DIR.Proposed defensive check
def persist_item(self, key: str, data: Dict[str, Any]) -> bool: try: safe_key = "".join(x for x in key if x.isalnum() or x in "_-") + if not safe_key: + logger.error(f"Invalid key after sanitization: {key}") + return False file_path = DATA_DIR / f"{safe_key}.json" + # Ensure resolved path is within DATA_DIR + if not file_path.resolve().is_relative_to(DATA_DIR.resolve()): + logger.error(f"Path traversal attempt detected: {key}") + return False # Atomic write temp_path = file_path.with_suffix(".tmp")Apply similar checks in
retrieve_itemandlist_workflows.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modules/new_ui/backend_adapter.py` around lines 124 - 131, Sanitize and then explicitly verify that the computed file path stays inside DATA_DIR: after computing safe_key and file_path (and before creating temp_path or writing), resolve file_path (and DATA_DIR) and assert file_path.resolve().is_relative_to(DATA_DIR.resolve()) (or use a try/except with Path.relative_to for older Python) and raise/abort if it fails; apply the same defensive resolved-path check in retrieve_item and list_workflows to prevent path traversal vulnerabilities when using safe_key, file_path, temp_path and DATA_DIR.src/backend/node_engine/workflow_engine.py (1)
435-443: Consider flattening nested generic checks for readability.These nested
ifblocks can be merged into single conditions to reduce branching and satisfy the static-analysis warning without changing behavior.Refactor example
- if source_type == DataType.EMAIL_LIST: - if ( - isinstance(target_type, GenericType) - and target_type.base_type == DataType.LIST - ): - if ( - len(target_type.type_parameters) > 0 - and target_type.type_parameters[0] == DataType.EMAIL - ): - return True + if ( + source_type == DataType.EMAIL_LIST + and isinstance(target_type, GenericType) + and target_type.base_type == DataType.LIST + and len(target_type.type_parameters) > 0 + and target_type.type_parameters[0] == DataType.EMAIL + ): + return True - if target_type == DataType.EMAIL_LIST: - if ( - isinstance(source_type, GenericType) - and source_type.base_type == DataType.LIST - ): - if ( - len(source_type.type_parameters) > 0 - and source_type.type_parameters[0] == DataType.EMAIL - ): - return True + if ( + target_type == DataType.EMAIL_LIST + and isinstance(source_type, GenericType) + and source_type.base_type == DataType.LIST + and len(source_type.type_parameters) > 0 + and source_type.type_parameters[0] == DataType.EMAIL + ): + return TrueAlso applies to: 446-454
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend/node_engine/workflow_engine.py` around lines 435 - 443, The nested conditional blocks checking target_type should be flattened into single boolean expressions: replace the multi-level ifs that check isinstance(target_type, GenericType) and target_type.base_type == DataType.LIST and then inspect target_type.type_parameters[0] == DataType.EMAIL with a single combined condition (e.g., isinstance(target_type, GenericType) and target_type.base_type == DataType.LIST and target_type.type_parameters and target_type.type_parameters[0] == DataType.EMAIL) in the same function/method in workflow_engine.py so the behavior remains identical but branching is reduced; apply the same refactor to the analogous block referencing target_type/type_parameters in the second occurrence (the block around lines 446-454).src/backend/python_backend/node_workflow_routes.py (1)
280-307: Consider documenting error responses in the route decorator.SonarCloud flags that HTTPException with status code 500 should be documented in the
responsesparameter for OpenAPI documentation completeness. This is optional but improves API documentation.📝 Example documentation
`@router.post`( - "/api/nodes/workflows/{workflow_id}/execute", response_model=ExecuteWorkflowResponse + "/api/nodes/workflows/{workflow_id}/execute", + response_model=ExecuteWorkflowResponse, + responses={500: {"description": "Internal server error during workflow execution"}} )🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend/python_backend/node_workflow_routes.py` around lines 280 - 307, The route decorator for execute_node_workflow should document the 500 error response so OpenAPI includes it; update the `@router.post`(...) decorator on execute_node_workflow to add a responses entry describing the HTTP 500 case (e.g., referencing HTTPException with status_code 500 and a brief description/schema) while keeping the existing response_model=ExecuteWorkflowResponse, so API docs reflect failures thrown inside the try/except that raise HTTPException(status_code=500, ...).modules/email/routes.py (1)
88-93: Consider usingAnnotatedtype hints for FastAPI dependency injection.Modern FastAPI (0.95+) recommends using
Annotatedfor dependency injection. This is optional but aligns with FastAPI best practices.📝 Example with Annotated
+from typing import Annotated + +CurrentUser = Annotated[str, Depends(get_current_active_user)] +DB = Annotated[DataSource, Depends(get_data_source)] + `@router.put`("/{email_id}", response_model=EmailResponse) `@log_performance` async def update_email( email_id: int, email_update: EmailUpdate, - current_user: str = Depends(get_current_active_user), - db: DataSource = Depends(get_data_source), + current_user: CurrentUser, + db: DB, ):🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modules/email/routes.py` around lines 88 - 93, The update_email endpoint currently uses positional default Depends(...) for dependency injection; update the type hints to use Annotated for FastAPI-style dependencies by importing typing.Annotated and changing the parameter annotations for current_user and db to Annotated[str, Depends(get_current_active_user)] and Annotated[DataSource, Depends(get_data_source)] (keep EmailUpdate and email_id signatures unchanged) so FastAPI can recognize the dependencies per modern best practices in the update_email function.src/backend/python_backend/main.py (1)
255-295: Module-level imports not at top of file causing linter errors.The late imports for routers (lines 255, 258, 278, 283, 288, 295) are flagged by ruff as E402 violations. While these may be intentional to avoid circular imports, consider either:
- Moving these imports to the top of the file if circular imports are not an issue
- Adding
# noqa: E402comments if the late imports are intentional🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend/python_backend/main.py` around lines 255 - 295, The late router imports (symbols: category_router_v1, email_router_v1, enhanced_router, workflow_router, advanced_workflow_router, node_workflow_router) are causing ruff E402 linter violations; either move these import statements to the top of the module with the other imports (so e.g. import .routes.v1.category_routes as category_router_v1, .enhanced_routes as enhanced_router, .workflow_routes as workflow_router, .advanced_workflow_routes as advanced_workflow_router, .node_workflow_routes as node_workflow_router) or, if the late imports are intentional to avoid circular imports, append a per-line noqa comment (`# noqa: E402`) to each late import to silence the linter (apply to the import lines referenced above and any corresponding app.include_router usages remain unchanged).src/backend/python_backend/dashboard_routes.py (1)
107-150: Minor inconsistency in exception formatting.The
HTTPExceptionat line 127 remains single-line while others in this file were reformatted to multi-line. This is a minor style inconsistency but doesn't affect functionality.♻️ Optional: Align exception formatting for consistency
except Exception as e: - raise HTTPException(status_code=500, detail=f"Failed to queue job: {str(e)}") + raise HTTPException( + status_code=500, detail=f"Failed to queue job: {str(e)}" + )🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend/python_backend/dashboard_routes.py` around lines 107 - 150, The two route handlers trigger_weekly_growth_calculation and trigger_performance_metrics_aggregation use a single-line HTTPException in their except blocks which is inconsistent with the multi-line exception formatting used elsewhere; update the raise HTTPException(...) calls in both functions to the same multi-line style (split status_code and detail onto separate lines, matching the file's existing exception formatting) so formatting is consistent across the file.src/core/plugin_routes.py (1)
292-389: Consider usingAnnotatedtype hints for FastAPI dependencies.The static analysis tool flags multiple endpoints for not using
Annotatedtype hints for dependency injection. This is a modern FastAPI best practice (FastAPI 0.95+) that improves type checker compatibility and IDE support. This is a pre-existing pattern in the codebase, so addressing it can be deferred.Example for
get_plugin_system_status:from typing import Annotated async def get_plugin_system_status( manager: Annotated[PluginManager, Depends(get_plugin_manager)], ):🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/plugin_routes.py` around lines 292 - 389, Update the FastAPI dependency annotations to use typing.Annotated for improved type checking: import Annotated and change each manager parameter to Annotated[PluginManager, Depends(get_plugin_manager)] in the functions get_plugin_system_status, execute_plugin_method, get_registered_hooks, trigger_hook, and check_security_sandbox; ensure Depends is imported from fastapi and remove the plain Depends usage while leaving all function logic unchanged.modules/auth/routes.py (1)
90-92: Consider sanitizing user-controlled data in logs.The logger at lines 90-92 includes
user_credentials.usernamedirectly. While logging usernames is common for debugging, it could be considered a minor security/privacy concern. If this is intentional for audit purposes, consider documenting it. Otherwise, you might want to use a sanitized identifier.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modules/auth/routes.py` around lines 90 - 92, The log call using logger.error with user_credentials.username should avoid emitting raw user-controlled data; replace it with a sanitized or obfuscated identifier (e.g., mask or hash the username) or a non-sensitive internal ID and update the logger call in the same block where logger.error is used (refer to logger and user_credentials.username) to log the sanitized value instead; if logging raw usernames is required for audit, add a short comment and ensure this behavior is documented and approved.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: a0ce54c6-d72a-4a24-a8e0-73789fd4a7bf
⛔ Files ignored due to path filters (1)
test_timeout.zipis excluded by!**/*.zip
📒 Files selected for processing (141)
.github/workflows/ci.yml.github/workflows/pr-check.yml.github/workflows/push-check.ymlmodules/auth/__init__.pymodules/auth/routes.pymodules/categories/routes.pymodules/dashboard/__init__.pymodules/dashboard/models.pymodules/dashboard/routes.pymodules/default_ai_engine/analysis_components/intent_model.pymodules/default_ai_engine/analysis_components/sentiment_model.pymodules/default_ai_engine/analysis_components/topic_model.pymodules/default_ai_engine/analysis_components/urgency_model.pymodules/default_ai_engine/engine.pymodules/email/routes.pymodules/email_retrieval/email_retrieval_ui.pymodules/imbox/imbox_ui.pymodules/new_ui/app.pymodules/new_ui/backend_adapter.pymodules/new_ui/tests/test_adapter.pymodules/new_ui/utils.pymodules/notmuch/__init__.pymodules/notmuch/ui.pymodules/system_status/__init__.pymodules/system_status/models.pymodules/workflows/ui.pysetup/launch.pysetup/services.pysrc/backend/node_engine/email_nodes.pysrc/backend/node_engine/migration_utils.pysrc/backend/node_engine/node_base.pysrc/backend/node_engine/node_library.pysrc/backend/node_engine/security_manager.pysrc/backend/node_engine/test_generic_types.pysrc/backend/node_engine/test_integration.pysrc/backend/node_engine/test_migration.pysrc/backend/node_engine/test_node_security.pysrc/backend/node_engine/test_sanitization_types.pysrc/backend/node_engine/workflow_engine.pysrc/backend/node_engine/workflow_manager.pysrc/backend/plugins/email_filter_node.pysrc/backend/plugins/email_visualizer_plugin.pysrc/backend/plugins/plugin_manager.pysrc/backend/python_backend/advanced_workflow_routes.pysrc/backend/python_backend/ai_engine.pysrc/backend/python_backend/ai_routes.pysrc/backend/python_backend/auth.pysrc/backend/python_backend/category_data_manager.pysrc/backend/python_backend/config.pysrc/backend/python_backend/dashboard_routes.pysrc/backend/python_backend/database.pysrc/backend/python_backend/dependencies.pysrc/backend/python_backend/email_data_manager.pysrc/backend/python_backend/email_routes.pysrc/backend/python_backend/enhanced_routes.pysrc/backend/python_backend/exceptions.pysrc/backend/python_backend/filter_routes.pysrc/backend/python_backend/gmail_routes.pysrc/backend/python_backend/gradio_app.pysrc/backend/python_backend/json_database.pysrc/backend/python_backend/main.pysrc/backend/python_backend/model_routes.pysrc/backend/python_backend/models.pysrc/backend/python_backend/node_workflow_routes.pysrc/backend/python_backend/notebooks/email_analysis.ipynbsrc/backend/python_backend/performance_monitor.pysrc/backend/python_backend/performance_routes.pysrc/backend/python_backend/plugin_manager.pysrc/backend/python_backend/routes/v1/email_routes.pysrc/backend/python_backend/services/base_service.pysrc/backend/python_backend/services/category_service.pysrc/backend/python_backend/services/email_service.pysrc/backend/python_backend/settings.pysrc/backend/python_backend/tests/conftest.pysrc/backend/python_backend/tests/test_advanced_ai_engine.pysrc/backend/python_backend/tests/test_category_routes.pysrc/backend/python_backend/tests/test_database_optimizations.pysrc/backend/python_backend/tests/test_email_routes.pysrc/backend/python_backend/tests/test_gmail_routes.pysrc/backend/python_backend/tests/test_model_manager.pysrc/backend/python_backend/tests/test_training_routes.pysrc/backend/python_backend/tests/test_workflow_routes.pysrc/backend/python_backend/training_routes.pysrc/backend/python_backend/workflow_editor_ui.pysrc/backend/python_backend/workflow_engine.pysrc/backend/python_backend/workflow_manager.pysrc/backend/python_backend/workflow_routes.pysrc/backend/python_nlp/ai_training.pysrc/backend/python_nlp/data_strategy.pysrc/backend/python_nlp/gmail_integration.pysrc/backend/python_nlp/gmail_metadata.pysrc/backend/python_nlp/gmail_service.pysrc/backend/python_nlp/gmail_service_clean.pysrc/backend/python_nlp/nlp_engine.pysrc/backend/python_nlp/retrieval_monitor.pysrc/backend/python_nlp/smart_filters.pysrc/backend/python_nlp/smart_retrieval.pysrc/backend/python_nlp/tests/test_nlp_engine.pysrc/context_control/core.pysrc/context_control/logging.pysrc/core/advanced_workflow_engine.pysrc/core/ai_engine.pysrc/core/auth.pysrc/core/caching.pysrc/core/constants.pysrc/core/data/data_source.pysrc/core/data/database_source.pysrc/core/data/notmuch_data_source.pysrc/core/data/repository.pysrc/core/data_source.pysrc/core/database.pysrc/core/dynamic_model_manager.pysrc/core/enhanced_caching.pysrc/core/enhanced_error_reporting.pysrc/core/exceptions.pysrc/core/factory.pysrc/core/job_queue.pysrc/core/middleware.pysrc/core/model_registry.pysrc/core/model_routes.pysrc/core/models.pysrc/core/module_manager.pysrc/core/notmuch_data_source.pysrc/core/performance_monitor.pysrc/core/plugin_base.pysrc/core/plugin_manager.pysrc/core/plugin_routes.pysrc/core/rate_limiter.pysrc/core/security.pysrc/core/security_validator.pysrc/core/settings.pysrc/core/smart_filter_manager.pysrc/core/workflow_engine.pysrc/main.pysrc/resolution/__init__.pysrc/resolution/auto_resolver.pysrc/resolution/semantic_merger.pysrc/strategy/generator.pysrc/strategy/multi_phase_generator.pysrc/strategy/risk_assessor.pysrc/strategy/selector.py
💤 Files with no reviewable changes (1)
- src/backend/python_backend/tests/test_training_routes.py
| html_content = re.sub( | ||
| r"<script[^>]*>.*?</script>", "", html_content, flags=re.DOTALL | re.IGNORECASE | ||
| ) |
There was a problem hiding this comment.
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
python - <<'PY'
import re
old = re.compile(r"<script[^>]*>.*?</script>", re.DOTALL | re.IGNORECASE)
new = re.compile(r"<script\b[^>]*>.*?</script\s*>", re.DOTALL | re.IGNORECASE)
payload = "<script>alert(1)</script >"
print("old_matches:", bool(old.search(payload)))
print("new_matches:", bool(new.search(payload)))
PYRepository: MasumRab/EmailIntelligence
Length of output: 106
🏁 Script executed:
cd /tmp/repository && git ls-files | grep -E "utils\.py$" | head -20Repository: MasumRab/EmailIntelligence
Length of output: 135
🏁 Script executed:
cd /tmp/repository && find . -name "utils.py" -path "*/new_ui/*" -type fRepository: MasumRab/EmailIntelligence
Length of output: 135
🏁 Script executed:
cd /tmp/repository && cat -n modules/new_ui/utils.py | sed -n '160,180p'Repository: MasumRab/EmailIntelligence
Length of output: 135
🏁 Script executed:
#!/bin/bash
python - <<'PY'
import re
# Test multiple regex patterns and payloads
old_pattern = re.compile(r"<script[^>]*>.*?</script>", re.DOTALL | re.IGNORECASE)
new_pattern = re.compile(r"<script\b[^>]*>.*?</script\s*>", re.DOTALL | re.IGNORECASE)
# Test cases covering various bypass attempts
test_cases = [
"<script>alert(1)</script>", # Normal case
"<script>alert(1)</script >", # Space before closing >
"<script>alert(1)</script >", # Multiple spaces
"<script>alert(1)</script\t>", # Tab before closing >
"<script>alert(1)</script\n>", # Newline before closing >
"<SCRIPT>alert(1)</SCRIPT>", # Uppercase
"<SCRIPT>alert(1)</SCRIPT >", # Uppercase with space
"<script attr='val'>alert(1)</script >", # With attributes
"<script\n>alert(1)</script>", # Newline in opening tag
]
print("Pattern comparison for various payloads:")
print("-" * 80)
for payload in test_cases:
old_match = bool(old_pattern.search(payload))
new_match = bool(new_pattern.search(payload))
status = "✓" if new_match and not old_match else ("✓ both" if new_match and old_match else "✗ neither")
print(f"Payload: {payload:<50} | Old: {old_match} | New: {new_match} | {status}")
PYRepository: MasumRab/EmailIntelligence
Length of output: 1004
🏁 Script executed:
git ls-files | grep -E "modules/new_ui/utils\.py"Repository: MasumRab/EmailIntelligence
Length of output: 93
🏁 Script executed:
find . -name "utils.py" -path "*/new_ui/*" -type f 2>/dev/nullRepository: MasumRab/EmailIntelligence
Length of output: 95
🏁 Script executed:
head -n 180 modules/new_ui/utils.py | tail -n 25Repository: MasumRab/EmailIntelligence
Length of output: 744
🏁 Script executed:
cat -n modules/new_ui/utils.py | sed -n '165,185p'Repository: MasumRab/EmailIntelligence
Length of output: 762
🏁 Script executed:
python - <<'PY'
import re
# Check if the style tag has the same vulnerability
style_pattern_old = re.compile(r"<style[^>]*>.*?</style>", re.DOTALL | re.IGNORECASE)
style_pattern_new = re.compile(r"<style\b[^>]*>.*?</style\s*>", re.DOTALL | re.IGNORECASE)
test_payload = "<style>body{color:red}</style >"
print("Style tag vulnerability:")
print(f"Old pattern matches: {bool(style_pattern_old.search(test_payload))}")
print(f"New pattern matches: {bool(style_pattern_new.search(test_payload))}")
PYRepository: MasumRab/EmailIntelligence
Length of output: 147
Script and style-tag sanitizers bypass on whitespace-padded closing tags.
The regex patterns at lines 171 and 176 fail to match closing tags with whitespace, such as </script > and </style >, allowing sanitization bypass. Testing confirms the original pattern misses variants with spaces, tabs, and newlines before the closing >.
Both sanitizers require fixing:
Suggested fix
# Remove script tags
html_content = re.sub(
- r"<script[^>]*>.*?</script>", "", html_content, flags=re.DOTALL | re.IGNORECASE
+ r"<script\b[^>]*>.*?</script\s*>", "", html_content, flags=re.DOTALL | re.IGNORECASE
)
# Remove style tags
html_content = re.sub(
- r"<style[^>]*>.*?</style>", "", html_content, flags=re.DOTALL | re.IGNORECASE
+ r"<style\b[^>]*>.*?</style\s*>", "", html_content, flags=re.DOTALL | re.IGNORECASE
)🧰 Tools
🪛 GitHub Check: CodeQL
[failure] 171-171: Bad HTML filtering regexp
This regular expression does not match script end tags like </script >.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@modules/new_ui/utils.py` around lines 170 - 172, The regex used to strip
script/style blocks (operating on html_content) fails to match closing tags with
whitespace like "</script >" or "</style >"; update both patterns to use
word-boundary and allow optional whitespace before the closing ">" (e.g.
r"<script\b[^>]*>.*?</script\s*>" and r"<style\b[^>]*>.*?</style\s*>") and keep
flags=re.DOTALL|re.IGNORECASE so html_content correctly removes
whitespace-padded closing tags.
modules/system_status/__init__.py
Outdated
| dashboard_response = requests.get( | ||
| "http://127.0.0.1:8000/api/dashboard/stats", timeout=5 | ||
| ) | ||
| dashboard_data = ( | ||
| dashboard_response.json() | ||
| if dashboard_response.status_code == 200 | ||
| else {} | ||
| ) | ||
| except (requests.RequestException, ValueError): | ||
| dashboard_data = {} | ||
|
|
||
| # Get Gmail performance metrics | ||
| try: | ||
| gmail_response = requests.get("http://127.0.0.1:8000/api/gmail/performance", timeout=5) | ||
| gmail_data = gmail_response.json() if gmail_response.status_code == 200 else {} | ||
| gmail_response = requests.get( | ||
| "http://127.0.0.1:8000/api/gmail/performance", timeout=5 | ||
| ) | ||
| gmail_data = ( | ||
| gmail_response.json() if gmail_response.status_code == 200 else {} |
There was a problem hiding this comment.
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify blocking HTTP calls inside async functions in this file.
rg -n -C3 'async def |requests\.(get|post)\(' modules/system_status/__init__.pyRepository: MasumRab/EmailIntelligence
Length of output: 2055
🏁 Script executed:
#!/bin/bash
# Check imports at the top of the file to see if async HTTP clients are available
head -30 modules/system_status/__init__.py
# Check if httpx or aiohttp are imported anywhere in the file
rg -n 'import (httpx|aiohttp|requests)' modules/system_status/__init__.py
# Get line counts to understand file structure
wc -l modules/system_status/__init__.pyRepository: MasumRab/EmailIntelligence
Length of output: 863
🏁 Script executed:
#!/bin/bash
# Verify the exact lines mentioned as affected in the "Also applies to" section
# Lines: 166-171, 205-205, 218-218, 228-232, 242-244
sed -n '164,172p' modules/system_status/__init__.py # 166-171 context
sed -n '204,207p' modules/system_status/__init__.py # 205
sed -n '216,220p' modules/system_status/__init__.py # 218
sed -n '226,234p' modules/system_status/__init__.py # 228-232
sed -n '240,246p' modules/system_status/__init__.py # 242-244Repository: MasumRab/EmailIntelligence
Length of output: 1490
🏁 Script executed:
#!/bin/bash
# Check if there's a requirements.txt or setup.py to see if httpx is available
fd -e requirements.txt -e setup.py -e pyproject.toml | head -5
# If found, check for httpx/aiohttp availability
if [ -f requirements.txt ]; then
grep -i 'httpx\|aiohttp' requirements.txt || echo "No async HTTP client in requirements.txt"
fi
if [ -f pyproject.toml ]; then
grep -i 'httpx\|aiohttp' pyproject.toml || echo "No async HTTP client in pyproject.toml"
fi
if [ -f setup.py ]; then
grep -i 'httpx\|aiohttp' setup.py || echo "No async HTTP client in setup.py"
fiRepository: MasumRab/EmailIntelligence
Length of output: 104
Replace synchronous requests with async HTTP client in async functions.
Both get_system_status() (lines 124–176) and perform_health_checks() (lines 198–252) use requests.get/post(), which blocks the event loop and degrades throughput and concurrency. Replace with httpx.AsyncClient (already in dependencies) for async I/O.
Refactor direction
+import httpx
...
- dashboard_response = requests.get(
- "http://127.0.0.1:8000/api/dashboard/stats", timeout=5
- )
+ async with httpx.AsyncClient(timeout=5) as client:
+ dashboard_response = await client.get(
+ "http://127.0.0.1:8000/api/dashboard/stats"
+ )
...
- response = requests.get("http://127.0.0.1:8000/health", timeout=5)
+ async with httpx.AsyncClient(timeout=5) as client:
+ response = await client.get("http://127.0.0.1:8000/health")Affected lines: 153–155, 166–168, 205, 218, 228–231, 242–244.
🧰 Tools
🪛 GitHub Check: SonarCloud Code Analysis
[failure] 153-153: Use an async HTTP client in this async function instead of a synchronous one.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@modules/system_status/__init__.py` around lines 153 - 170,
get_system_status() and perform_health_checks() are using blocking
requests.get/post calls; replace them with asynchronous calls using
httpx.AsyncClient (use "async with httpx.AsyncClient()" and await
client.get/post) and swap exception handling from
requests.RequestException/ValueError to httpx.RequestError and handle non-200
responses appropriately; ensure you preserve the same timeouts, JSON parsing
logic, and fallbacks (empty dicts) and update all occurrences referenced in the
diff (the dashboard/gmail endpoints and other GET/POST calls) to be awaited
async calls within the async functions.
| from setup.services import ( | ||
| start_services, start_backend, start_node_service, start_gradio_ui, validate_services | ||
| start_services, start_backend, start_node_service, start_gradio_ui, validate_services, | ||
| PACKAGE_JSON | ||
| ) |
There was a problem hiding this comment.
Remove unused import validate_services.
The import validate_services from setup.services is flagged as unused by Ruff (F401). Remove it to fix the pipeline failure.
🧹 Proposed fix
from setup.services import (
- start_services, start_backend, start_node_service, start_gradio_ui, validate_services,
+ start_services, start_backend, start_node_service, start_gradio_ui,
PACKAGE_JSON
)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| from setup.services import ( | |
| start_services, start_backend, start_node_service, start_gradio_ui, validate_services | |
| start_services, start_backend, start_node_service, start_gradio_ui, validate_services, | |
| PACKAGE_JSON | |
| ) | |
| from setup.services import ( | |
| start_services, start_backend, start_node_service, start_gradio_ui, | |
| PACKAGE_JSON | |
| ) |
🧰 Tools
🪛 Flake8 (7.3.0)
[error] 22-22: 'setup.services.validate_services' imported but unused
(F401)
🪛 GitHub Actions: Pull Request Check
[error] 23-23: Ruff (F401): setup.services.validate_services imported but unused. Remove unused import.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@setup/launch.py` around lines 22 - 25, Remove the unused symbol
validate_services from the import list in launch.py (the from setup.services
import ... statement that currently includes validate_services); edit the import
to only include start_services, start_backend, start_node_service,
start_gradio_ui, and PACKAGE_JSON so Ruff no longer reports F401 and the
pipeline passes.
| decorator_path = "backend.python_backend.performance_monitor.log_performance" | ||
| with patch(decorator_path, lambda *args, **kwargs: (lambda func: func)): | ||
| with patch(decorator_path, lambda *args, **kwargs: lambda func: func): | ||
| yield TestClient(app) |
There was a problem hiding this comment.
The patch timing does not prevent import-time decorator execution, causing the pipeline failure.
The app is imported at module level (line 8), which triggers import-time execution of @log_performance decorators before this patch takes effect. The with patch(...) block only patches the decorator during TestClient(app) instantiation, but by then the modules using @log_performance have already been imported and decorated.
To fix this, the patch needs to be applied before importing the app, or use patch.object at a different level.
🐛 Proposed fix: Move patch to wrap the import
`@pytest.fixture`
def client(mock_db_manager, mock_ai_engine, mock_filter_manager, mock_workflow_engine):
"""
Provides a TestClient with all external services mocked.
This fixture resets mocks for each test and handles dependency overrides.
"""
- from backend.python_backend.database import get_db
- from backend.python_backend.dependencies import (
- get_ai_engine,
- get_filter_manager,
- get_workflow_engine,
- )
+ decorator_path = "backend.python_backend.performance_monitor.log_performance"
+ with patch(decorator_path, lambda *args, **kwargs: lambda func: func):
+ from backend.python_backend.database import get_db
+ from backend.python_backend.dependencies import (
+ get_ai_engine,
+ get_filter_manager,
+ get_workflow_engine,
+ )
# Reset all mocks
for manager_mock in [
mock_db_manager,
mock_ai_engine,
mock_filter_manager,
mock_workflow_engine,
]:
manager_mock.reset_mock()
for attr_name in dir(manager_mock):
attr = getattr(manager_mock, attr_name)
if isinstance(attr, (MagicMock, AsyncMock)):
attr.reset_mock()
attr.side_effect = None
# Set up dependency overrides
app.dependency_overrides[get_db] = lambda: mock_db_manager
app.dependency_overrides[get_ai_engine] = lambda: mock_ai_engine
app.dependency_overrides[get_filter_manager] = lambda: mock_filter_manager
app.dependency_overrides[get_workflow_engine] = lambda: mock_workflow_engine
- decorator_path = "backend.python_backend.performance_monitor.log_performance"
- with patch(decorator_path, lambda *args, **kwargs: lambda func: func):
- yield TestClient(app)
+ yield TestClient(app)
# Clean up dependency overrides
app.dependency_overrides.clear()Alternatively, if the decorator is applied at module import time (outside functions), you may need to patch it at the top of the test module or use conftest.py-level patching before any imports.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/backend/python_backend/tests/conftest.py` around lines 87 - 89, The patch
is applied too late—app is imported at module level so `@log_performance` ran
during import before the with patch(decorator_path, ...) block; fix by applying
the patch before importing the app (or at module import time in conftest) so the
decorator is replaced prior to any module imports that use it: move the patching
of "backend.python_backend.performance_monitor.log_performance" to run before
the code that imports or constructs app (ensure TestClient(app) sees the patched
decorator), or perform a module-level patch (e.g., patch the attribute on the
module that defines the decorated functions) so the decorators are no-op at
import time.
| async def get_training_status( | ||
| job_id: str, current_user: str = Depends(get_current_active_user) | ||
| ): |
There was a problem hiding this comment.
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify unresolved Depends usage vs imports in this file
rg -n -C2 'from fastapi import|Depends\(' src/backend/python_backend/training_routes.pyRepository: MasumRab/EmailIntelligence
Length of output: 564
Add missing Depends import from fastapi.
Depends(...) is used at lines 32 and 67 but not imported. This causes F821 undefined name errors and breaks the build.
Proposed fix
-from fastapi import APIRouter, BackgroundTasks, HTTPException
+from fastapi import APIRouter, BackgroundTasks, Depends, HTTPException🧰 Tools
🪛 Flake8 (7.3.0)
[error] 67-67: undefined name 'Depends'
(F821)
🪛 GitHub Actions: CI
[error] 67-68: ruff F821: Undefined name Depends.
🪛 GitHub Actions: Pull Request Check
[error] 67-67: Ruff (F821): Undefined name Depends used in FastAPI route dependency injection.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/backend/python_backend/training_routes.py` around lines 66 - 68, The file
uses Depends in route signatures (e.g., get_training_status and the dependency
on get_current_active_user) but missing its import; add "from fastapi import
Depends" to the module imports so the Depends symbol is defined and the route
dependency declarations (including other functions using Depends) compile
correctly.
| # Trigger re-analysis if AI engine is available | ||
| if self.ai_engine: | ||
| try: | ||
| asyncio.create_task(self._reanalyze_email(message_id)) | ||
| except Exception as e: | ||
| logger.warning(f"Failed to schedule re-analysis for email {message_id}: {e}") | ||
|
|
||
| logger.warning( | ||
| f"Failed to schedule re-analysis for email {message_id}: {e}" | ||
| ) | ||
|
|
There was a problem hiding this comment.
Same asyncio.create_task garbage collection issue.
The task created at line 671 also lacks a stored reference. Apply the same fix pattern here.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/core/notmuch_data_source.py` around lines 668 - 676, The create_task call
inside the ai_engine conditional should keep a strong reference to avoid
premature garbage collection; replace the bare
asyncio.create_task(self._reanalyze_email(message_id)) with creating the Task,
storing it on a container on the instance (e.g.,
self._background_tasks.add(task)) and attaching a done callback to remove it
when complete. Update the block that references self.ai_engine and the call to
asyncio.create_task in the same scope (the invocation of self._reanalyze_email)
so the new Task is retained and cleaned up automatically.
| @@ -23,8 +25,10 @@ class ConstitutionalRequirement: | |||
|
|
|||
| from enum import Enum | |||
There was a problem hiding this comment.
Move Enum import to the top import block.
Current placement triggers Ruff E402 and blocks CI.
Proposed fix
from typing import List, Dict, Any, Optional
from dataclasses import dataclass
+from enum import Enum
import json
import yaml
from pathlib import Path
@@
-from enum import Enum
-
-
class ComplianceLevel(Enum):🧰 Tools
🪛 GitHub Actions: CI
[error] 26-26: ruff E402: Module level import not at top of file (from enum import Enum).
🪛 GitHub Actions: Pull Request Check
[error] 26-44: Ruff (E402): Module level import not at top of file; plus type uses undefined ComplianceResult.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/resolution/__init__.py` at line 26, Move the from enum import Enum
statement out of the middle of the file and into the top import block alongside
the other imports to satisfy Ruff E402; update the imports in
src/resolution/__init__.py so Enum is imported at the top with the other
module-level imports (referencing Enum and __init__.py) and remove the
duplicate/misplaced import location.
src/resolution/__init__.py
Outdated
|
|
||
| overall_score: float # 0.0 to 1.0 | ||
| compliance_level: ComplianceLevel | ||
| detailed_results: List[ComplianceResult] |
There was a problem hiding this comment.
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify unresolved forward reference and class order in src/resolution/__init__.py
rg -n 'class ConstitutionalValidationResult|class ComplianceResult|detailed_results:\s*List\[' src/resolution/__init__.py
rg -n '^from __future__ import annotations' src/resolution/__init__.pyRepository: MasumRab/EmailIntelligence
Length of output: 185
🏁 Script executed:
sed -n '35,60p' src/resolution/__init__.pyRepository: MasumRab/EmailIntelligence
Length of output: 671
🏁 Script executed:
head -20 src/resolution/__init__.pyRepository: MasumRab/EmailIntelligence
Length of output: 443
🏁 Script executed:
fd -e py -x rg -l '^from __future__ import annotations' {} \;Repository: MasumRab/EmailIntelligence
Length of output: 100
🏁 Script executed:
# Check if Ruff config exists and what it says about F821
fd -name 'ruff.toml|pyproject.toml|setup.cfg' -x grep -l 'ruff\|F821' {} \;Repository: MasumRab/EmailIntelligence
Length of output: 300
🏁 Script executed:
# Find config files
fd -e toml -o -e cfg | head -20Repository: MasumRab/EmailIntelligence
Length of output: 186
🏁 Script executed:
ls -la src/resolution/__init__.py
find . -name "pyproject.toml" -o -name "ruff.toml" 2>/dev/null | head -5Repository: MasumRab/EmailIntelligence
Length of output: 186
🏁 Script executed:
cat pyproject.toml | grep -A 20 '\[tool.ruff\]'Repository: MasumRab/EmailIntelligence
Length of output: 52
🏁 Script executed:
cat pyproject.tomlRepository: MasumRab/EmailIntelligence
Length of output: 2955
🏁 Script executed:
# Check for other forward references in the file
sed -n '1,60p' src/resolution/__init__.py | grep -n 'List\[' | head -20Repository: MasumRab/EmailIntelligence
Length of output: 158
Fix forward reference for ComplianceResult in ConstitutionalValidationResult.
detailed_results: List[ComplianceResult] at line 44 is evaluated before ComplianceResult is defined at line 54, causing Ruff F821. Use a string annotation to defer evaluation.
Proposed fix
- detailed_results: List[ComplianceResult]
+ detailed_results: List["ComplianceResult"]📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| detailed_results: List[ComplianceResult] | |
| detailed_results: List["ComplianceResult"] |
🧰 Tools
🪛 Flake8 (7.3.0)
[error] 44-44: undefined name 'ComplianceResult'
(F821)
🪛 GitHub Actions: CI
[error] 44-44: ruff F821: Undefined name ComplianceResult (used in List[ComplianceResult]).
🪛 GitHub Actions: Pull Request Check
[error] 26-44: Ruff (E402): Module level import not at top of file; plus type uses undefined ComplianceResult.
[error] 44-44: Ruff (F821): Undefined name ComplianceResult in List[ComplianceResult] field.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/resolution/__init__.py` at line 44, ConstitutionalValidationResult
currently annotates detailed_results with List[ComplianceResult] before
ComplianceResult is defined, causing a forward-reference error; fix by deferring
evaluation of the type—either change the annotation to a string like
"List['ComplianceResult']" for the detailed_results field in the
ConstitutionalValidationResult class, or enable postponed evaluation by adding
from __future__ import annotations at the top of the module so ComplianceResult
can be referenced directly.
| "line_count_change": len(merged_content) - len(original_content.split("\n")) | ||
| if isinstance(original_content, str) | ||
| else 0, |
There was a problem hiding this comment.
Fix line_count_change to compare line counts, not characters.
len(merged_content) counts characters, while the right side counts lines. This produces incorrect validation metrics.
Proposed fix
- "line_count_change": len(merged_content) - len(original_content.split("\n"))
- if isinstance(original_content, str)
- else 0,
+ "line_count_change": len(merged_content.split("\n"))
+ - len(original_content.split("\n"))
+ if isinstance(original_content, str) and isinstance(merged_content, str)
+ else 0,📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| "line_count_change": len(merged_content) - len(original_content.split("\n")) | |
| if isinstance(original_content, str) | |
| else 0, | |
| "line_count_change": len(merged_content.split("\n")) | |
| - len(original_content.split("\n")) | |
| if isinstance(original_content, str) and isinstance(merged_content, str) | |
| else 0, |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/resolution/semantic_merger.py` around lines 339 - 341, The code computing
"line_count_change" uses len(merged_content) (characters) vs
len(original_content.split("\n")) (lines); change it to compare line counts
consistently by using len(merged_content.split("\n")) -
len(original_content.split("\n")) (and guard with isinstance(original_content,
str) as currently done) so both sides are line counts; update the use of
merged_content and original_content in the expression where "line_count_change"
is defined in semantic_merger.py.
- Fix TimerContext record_metric reference in performance_monitor.py - Fix argon2.exceptions import in auth.py - Fix _build_node_context to use workflow connections properly - Fix parallel execution test operation signature - Fix factory tests with correct patch paths - Fix AI engine factory test with async context manager - Fix test_app.py with redirect follow - Add new test files: audit_logger, performance_monitor, etc. - Skip test_app_startup due to TestClient limitations with Gradio
- Move enum import to top of auth.py - Move duplicate imports to top of performance_monitor.py - Rename duplicate PerformanceMetric class to PerformanceMetricV2 - Remove duplicate log_performance method in OptimizedPerformanceMonitor - Update PerformanceMetric usage to PerformanceMetricV2 - Remove unused imports in test files
There was a problem hiding this comment.
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
tests/test_workflow_engine.py (1)
172-199:⚠️ Potential issue | 🟠 MajorStrengthen this test to validate D’s input wiring, not just execution count.
Line 199 only verifies that 4 nodes ran. It does not verify that
input1/input2are mapped correctly intotwo_arg_operation(x, y). With symmetric upstream values andx + y, a swapped mapping still passes.Suggested test hardening
- def single_arg_operation(x): + def node_a_operation(x): return x + 1 + def node_b_operation(x): + return x + 1 + + def node_c_operation(x): + return x + 2 + def two_arg_operation(x, y): - return x + y + return x - y # Create nodes that can run in parallel after A: A -> B, A -> C, then B,C -> D - node_a = Node("A", "Node A", single_arg_operation, ["input"], ["output"]) - node_b = Node("B", "Node B", single_arg_operation, ["input"], ["output"]) - node_c = Node("C", "Node C", single_arg_operation, ["input"], ["output"]) + node_a = Node("A", "Node A", node_a_operation, ["input"], ["output"]) + node_b = Node("B", "Node B", node_b_operation, ["input"], ["output"]) + node_c = Node("C", "Node C", node_c_operation, ["input"], ["output"]) node_d = Node("D", "Node D", two_arg_operation, ["input1", "input2"], ["output"]) @@ result = runner.run({"input": 1}, parallel_execution=True) assert result["success"] is True assert len(result["results"]) == 4 # All 4 nodes should have executed + assert result["results"]["D"]["output"] == -1As per coding guidelines, "Add or update appropriate unit, integration, and/or end-to-end tests to provide comprehensive coverage for code changes".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_workflow_engine.py` around lines 172 - 199, The test should ensure D's inputs are distinct so wiring is validated: replace the generic single_arg_operation used for node_b/node_c with two different functions (e.g., node_b_op and node_c_op) that return distinct values given the workflow input, keep two_arg_operation and node_d as-is, run runner.run(parallel_execution=True) and then assert that the result for node D equals the sum of those two distinct values (verifying node_d received input1 from B and input2 from C); update assertions on result["results"] to check the specific node D output rather than only the count.tests/core/test_factory.py (2)
337-342:⚠️ Potential issue | 🟠 MajorSame issue:
get_ai_engine()returns async context manager, not engine.Lines 341-342 have the same problem—calling
get_ai_engine()directly and asserting it's notNonedoesn't validate the engine properly. The async generator is neverNone, but the test should verify the engine can be obtained viaasync with.🐛 Proposed fix
- # Test get_ai_engine (sync) - assert callable(get_ai_engine) + # Test get_ai_engine (async context manager) + assert callable(get_ai_engine) - # Should be able to call get_ai_engine without issues - ai_engine = get_ai_engine() - assert ai_engine is not None + # Note: get_ai_engine() returns an async context manager, + # so full validation requires async test - see test_get_ai_engine_returns_object🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/core/test_factory.py` around lines 337 - 342, The test wrongly calls get_ai_engine() directly (it returns an async context manager) and should instead obtain the engine by entering that context; update the test that asserts ai_engine is not None to enter the async context manager returned by get_ai_engine (e.g., using "async with get_ai_engine() as ai_engine" or wrapping that in asyncio.run for a sync test) and then assert the obtained ai_engine is not None; reference get_ai_engine to locate and change the assertion logic.
256-281:⚠️ Potential issue | 🔴 CriticalCritical:
get_ai_engine()requiresasync withcontext manager.Based on the context snippet showing
get_ai_engine()is decorated with@asynccontextmanager, calling it directly returns an async generator, not the engine instance. The method calls on lines 265-267 will fail because the generator object doesn't haveanalyze_sentiment,classify_topic, orrecognize_intentmethods.Additionally, this test is synchronous but should be async to properly use the context manager.
🐛 Proposed fix
- def test_ai_engine_factory_integration(self): + `@pytest.mark.asyncio` + async def test_ai_engine_factory_integration(self): """Test that AI engine factory integrates properly.""" - ai_engine = get_ai_engine() - - # Test basic functionality test_text = "This is a test email about work" - # Should not raise exceptions - try: - sentiment = ai_engine.analyze_sentiment(test_text) - topic = ai_engine.classify_topic(test_text) - intent = ai_engine.recognize_intent(test_text) - - # Results should be dictionaries or None - assert sentiment is None or isinstance(sentiment, dict) - assert topic is None or isinstance(topic, dict) - assert intent is None or isinstance(intent, dict) - - except Exception as e: - # AI engine might not be fully implemented, but shouldn't crash - # Log the error for debugging - import logging - - logging.warning(f"AI engine integration test failed: {e}") - # Don't fail the test - this might be expected if models aren't loaded - pass + async with get_ai_engine() as ai_engine: + # Should not raise exceptions + try: + result = await ai_engine.analyze_email(test_text) + assert result is None or isinstance(result, dict) + except Exception as e: + import logging + logging.warning(f"AI engine integration test failed: {e}")🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/core/test_factory.py` around lines 256 - 281, The test calls get_ai_engine() directly but get_ai_engine is an `@asynccontextmanager` and must be used with async with; convert test_ai_engine_factory_integration into an async test (e.g., add asyncio/pytest.mark.asyncio) and use "async with get_ai_engine() as ai_engine:" to acquire the engine before calling ai_engine.analyze_sentiment, ai_engine.classify_topic, and ai_engine.recognize_intent; preserve the try/except semantics but adapt them for async (await any coroutine returns if those methods are async) so the test uses the context manager correctly and does not operate on an async generator.
🧹 Nitpick comments (3)
tests/core/test_plugin_base.py (1)
91-112: Harden default-value tests for regression resistance.Line 93-Line 111 checks defaults, but it can still miss regressions where mutable defaults become shared across instances, and the
created_atcheck depends on wall-clock timing. Consider adding isolation and deterministic timestamp assertions.♻️ Proposed test additions
class TestPluginMetadata: @@ def test_plugin_metadata_default_values(self): @@ assert metadata.checksum is None assert before <= metadata.created_at <= after + + def test_plugin_metadata_mutable_defaults_are_not_shared(self): + """Default list fields should not be shared across instances.""" + first = PluginMetadata( + plugin_id="p1", + name="P1", + version="1.0.0", + author="A1", + description="D1", + ) + second = PluginMetadata( + plugin_id="p2", + name="P2", + version="1.0.0", + author="A2", + description="D2", + ) + + first.dependencies.append("dep1") + first.permissions.append("read") + first.tags.append("tag1") + + assert second.dependencies == [] + assert second.permissions == [] + assert second.tags == [] + + def test_plugin_metadata_created_at_is_deterministic_with_patched_time(self, monkeypatch): + """created_at should use current time factory value.""" + fixed_ts = 1_700_000_000.0 + monkeypatch.setattr("time.time", lambda: fixed_ts) + + metadata = PluginMetadata( + plugin_id="time_test", + name="Time Test", + version="1.0.0", + author="Clock", + description="Clock test", + ) + + assert metadata.created_at == fixed_tsAs per coding guidelines, "Add or update appropriate unit, integration, and/or end-to-end tests to provide comprehensive coverage for code changes".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/core/test_plugin_base.py` around lines 91 - 112, Create stronger, regression-resistant assertions for PluginMetadata: instantiate two PluginMetadata objects (using PluginMetadata(...) with same args), mutate a list field on one (e.g., append to dependencies or permissions) and assert the other instance's corresponding list stays empty to catch shared-mutable-default bugs; also make created_at deterministic by capturing a controlled timestamp (use monkeypatching/freezing of time or assert that created_at is a float and that the second instance's created_at is >= first instance's created_at) instead of relying on loose before/after wall-clock checks; reference PluginMetadata, PluginSecurityLevel, created_at, dependencies, and permissions in the test changes.tests/core/test_settings.py (1)
21-24: Consider defensive assertion forsecret_key.The assertion
settings.settings.secret_key != ""will pass ifsecret_keyisNone. If the intent is to ensure a valid secret key exists, consider a more explicit check.💡 Suggested improvement
def test_settings_has_secret_key(self): """Test that settings has a secret key set.""" # The global settings instance should have a secret key from environment - assert settings.settings.secret_key != "" + assert settings.settings.secret_key # Ensures not None and not empty🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/core/test_settings.py` around lines 21 - 24, The test test_settings_has_secret_key currently uses assert settings.settings.secret_key != "" which will pass for None; update the assertion to explicitly verify a non-empty, non-None secret (e.g., assert settings.settings.secret_key is not None and str(settings.settings.secret_key).strip() != "") so the test fails for None or blank values and clearly validates settings.settings.secret_key.tests/core/test_audit_logger.py (1)
71-74: Test method is in wrong class.
test_audit_event_types_are_uniquetestsAuditEventTypebut is placed inTestAuditSeverity. Move it toTestAuditEventTypefor proper test organization.♻️ Proposed fix
Move this test to the
TestAuditEventTypeclass (after line 55):class TestAuditEventType: """Tests for AuditEventType enum.""" ... def test_system_events_exist(self): ... + + def test_audit_event_types_are_unique(self): + """Test that all AuditEventType values are unique.""" + values = [e.value for e in AuditEventType] + assert len(values) == len(set(values)), "AuditEventType values are not unique"And remove it from
TestAuditSeverity.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/core/test_audit_logger.py` around lines 71 - 74, Move the test method test_audit_event_types_are_unique out of the TestAuditSeverity class and place it into the TestAuditEventType class (ideally after the existing tests in that class); ensure you remove the duplicate from TestAuditSeverity so the test now lives only under TestAuditEventType and continues to assert uniqueness of AuditEventType values.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@setup/pyproject.toml`:
- Around line 13-23: Update the dependency version floors in pyproject.toml to
match the ones declared in requirements.txt so both install paths resolve the
same minimums: bump "pydantic" to >=2.11.5, "httpx" to >=0.28.1, "pyngrok" to
>=0.7.0, "email-validator" to >=2.2.0, and "python-dotenv" to >=1.1.0; also
reconcile the "fastapi" entries (ensure the same minimum in both files—either
change pyproject.toml to the requirements.txt minimum or update requirements.txt
to the intended newer floor) so installations via pip install -e .[dev] and pip
install -r requirements.txt use consistent version constraints for the packages
listed in the shown dependency block.
In `@tests/core/test_job_queue.py`:
- Line 7: The import line brings in patch and MagicMock which are unused in this
test module; remove the unused symbols by deleting them from the import
statement (or remove the entire "from unittest.mock import patch, MagicMock"
line) so only actually used imports remain and the test file has no unused
imports.
In `@tests/core/test_settings.py`:
- Line 5: Remove the unused import statement from tests/core/test_settings.py:
delete the top-level "import os" since it is never referenced in this file (no
functions, classes, or tests rely on os), leaving only the necessary imports for
the test module.
---
Outside diff comments:
In `@tests/core/test_factory.py`:
- Around line 337-342: The test wrongly calls get_ai_engine() directly (it
returns an async context manager) and should instead obtain the engine by
entering that context; update the test that asserts ai_engine is not None to
enter the async context manager returned by get_ai_engine (e.g., using "async
with get_ai_engine() as ai_engine" or wrapping that in asyncio.run for a sync
test) and then assert the obtained ai_engine is not None; reference
get_ai_engine to locate and change the assertion logic.
- Around line 256-281: The test calls get_ai_engine() directly but get_ai_engine
is an `@asynccontextmanager` and must be used with async with; convert
test_ai_engine_factory_integration into an async test (e.g., add
asyncio/pytest.mark.asyncio) and use "async with get_ai_engine() as ai_engine:"
to acquire the engine before calling ai_engine.analyze_sentiment,
ai_engine.classify_topic, and ai_engine.recognize_intent; preserve the
try/except semantics but adapt them for async (await any coroutine returns if
those methods are async) so the test uses the context manager correctly and does
not operate on an async generator.
In `@tests/test_workflow_engine.py`:
- Around line 172-199: The test should ensure D's inputs are distinct so wiring
is validated: replace the generic single_arg_operation used for node_b/node_c
with two different functions (e.g., node_b_op and node_c_op) that return
distinct values given the workflow input, keep two_arg_operation and node_d
as-is, run runner.run(parallel_execution=True) and then assert that the result
for node D equals the sum of those two distinct values (verifying node_d
received input1 from B and input2 from C); update assertions on
result["results"] to check the specific node D output rather than only the
count.
---
Nitpick comments:
In `@tests/core/test_audit_logger.py`:
- Around line 71-74: Move the test method test_audit_event_types_are_unique out
of the TestAuditSeverity class and place it into the TestAuditEventType class
(ideally after the existing tests in that class); ensure you remove the
duplicate from TestAuditSeverity so the test now lives only under
TestAuditEventType and continues to assert uniqueness of AuditEventType values.
In `@tests/core/test_plugin_base.py`:
- Around line 91-112: Create stronger, regression-resistant assertions for
PluginMetadata: instantiate two PluginMetadata objects (using
PluginMetadata(...) with same args), mutate a list field on one (e.g., append to
dependencies or permissions) and assert the other instance's corresponding list
stays empty to catch shared-mutable-default bugs; also make created_at
deterministic by capturing a controlled timestamp (use monkeypatching/freezing
of time or assert that created_at is a float and that the second instance's
created_at is >= first instance's created_at) instead of relying on loose
before/after wall-clock checks; reference PluginMetadata, PluginSecurityLevel,
created_at, dependencies, and permissions in the test changes.
In `@tests/core/test_settings.py`:
- Around line 21-24: The test test_settings_has_secret_key currently uses assert
settings.settings.secret_key != "" which will pass for None; update the
assertion to explicitly verify a non-empty, non-None secret (e.g., assert
settings.settings.secret_key is not None and
str(settings.settings.secret_key).strip() != "") so the test fails for None or
blank values and clearly validates settings.settings.secret_key.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: c9c21861-cb78-4c9f-a52b-45a3eef49a69
📒 Files selected for processing (15)
setup/pyproject.tomlsrc/core/auth.pysrc/core/performance_monitor.pysrc/core/workflow_engine.pytests/core/test_app.pytests/core/test_audit_logger.pytests/core/test_constants.pytests/core/test_exceptions.pytests/core/test_factory.pytests/core/test_job_queue.pytests/core/test_module_manager.pytests/core/test_performance_monitor.pytests/core/test_plugin_base.pytests/core/test_settings.pytests/test_workflow_engine.py
✅ Files skipped from review due to trivial changes (1)
- src/core/auth.py
🚧 Files skipped from review as they are similar to previous changes (2)
- src/core/performance_monitor.py
- src/core/workflow_engine.py
| "fastapi>=0.115.0", | ||
| "uvicorn[standard]>=0.30.0", | ||
| "pydantic>=2.10.0", | ||
| "python-dotenv>=1.0.0", | ||
| "python-multipart>=0.0.9", | ||
| "email-validator>=2.1.0", | ||
| "httpx>=0.27.0", | ||
|
|
||
| # Basic UI (always needed) | ||
| "gradio>=4.0.0", | ||
| "pyngrok>=0.7.0", | ||
| "pyngrok>=0.6.0", |
There was a problem hiding this comment.
Version constraints conflict with requirements.txt.
Several dependencies here specify lower minimum versions than what requirements.txt requires:
| Package | pyproject.toml |
requirements.txt |
|---|---|---|
| pydantic | >=2.10.0 | >=2.11.5 |
| httpx | >=0.27.0 | >=0.28.1 |
| pyngrok | >=0.6.0 | >=0.7.0 |
| email-validator | >=2.1.0 | >=2.2.0 |
| python-dotenv | >=1.0.0 | >=1.1.0 |
| fastapi | >=0.115.0 | >=0.100.0 (opposite) |
This inconsistency causes different version floors depending on the installation method (pip install -e .[dev] vs pip install -r requirements.txt). CI workflows using uv pip install -e .[dev,...] may resolve older versions than what's expected in production.
Please reconcile these constraints to ensure consistent dependency resolution across all installation paths.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@setup/pyproject.toml` around lines 13 - 23, Update the dependency version
floors in pyproject.toml to match the ones declared in requirements.txt so both
install paths resolve the same minimums: bump "pydantic" to >=2.11.5, "httpx" to
>=0.28.1, "pyngrok" to >=0.7.0, "email-validator" to >=2.2.0, and
"python-dotenv" to >=1.1.0; also reconcile the "fastapi" entries (ensure the
same minimum in both files—either change pyproject.toml to the requirements.txt
minimum or update requirements.txt to the intended newer floor) so installations
via pip install -e .[dev] and pip install -r requirements.txt use consistent
version constraints for the packages listed in the shown dependency block.
|
|
||
| import pytest | ||
| from datetime import datetime | ||
| from unittest.mock import patch, MagicMock |
There was a problem hiding this comment.
Remove unused imports.
The patch and MagicMock imports are not used in this test file.
🧹 Proposed fix
from datetime import datetime
-from unittest.mock import patch, MagicMock
from src.core.job_queue import (📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| from unittest.mock import patch, MagicMock | |
| from datetime import datetime | |
| from src.core.job_queue import ( |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/core/test_job_queue.py` at line 7, The import line brings in patch and
MagicMock which are unused in this test module; remove the unused symbols by
deleting them from the import statement (or remove the entire "from
unittest.mock import patch, MagicMock" line) so only actually used imports
remain and the test file has no unused imports.
| Tests for src/core/settings.py | ||
| """ | ||
|
|
||
| import os |
There was a problem hiding this comment.
Remove unused import.
The os module is imported but never used in this test file.
🧹 Proposed fix
-import os
import pytest📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| import os | |
| import pytest |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/core/test_settings.py` at line 5, Remove the unused import statement
from tests/core/test_settings.py: delete the top-level "import os" since it is
never referenced in this file (no functions, classes, or tests rely on os),
leaving only the necessary imports for the test module.
- Add ruff.toml with appropriate ignores for existing code patterns - Format src/ and modules/ with ruff format - Ignore E402 (module level imports), E501 (line length), F401/F811/F821 (unused/undefined) - Ignore UP042/UP035 (Python 3.11+ compatibility issues) - Ignore W291 (trailing whitespace)
| with open(file_path) as f: | ||
| return json.load(f) | ||
| except Exception as e: | ||
| logger.error(f"Failed to retrieve item {key}: {e}") |
Check failure
Code scanning / CodeQL
Uncontrolled data used in path expression High
Show autofix suggestion
Hide autofix suggestion
Copilot Autofix
AI 4 days ago
General approach: ensure that any path derived from untrusted input is validated before use. Here, that means (a) restricting what constitutes a valid workflow key, and (b) ensuring that the final computed path remains within DATA_DIR using a normalization check. This aligns with the recommendation: normalize with Path.resolve() and verify that the resolved path is under a known safe root.
Concrete best fix with minimal behavior change:
-
Introduce a small helper on
BackendClient,_normalize_key, that:- Accepts a
key: str. - Builds
safe_keyexactly as before (isalnum+_/-). - Enforces a reasonable pattern: for example, non-empty and no leading/trailing
-/_(or at least non-empty). - Returns
safe_key.
- Accepts a
-
Introduce another helper,
_safe_data_path, that:- Takes a
key: str. - Calls
_normalize_keyto obtainsafe_key. - Constructs
file_path = (DATA_DIR / f"{safe_key}.json").resolve(). - Ensures
file_pathis underDATA_DIR.resolve()by checkingfile_path.is_relative_to(DATA_DIR.resolve())(Python 3.9+) or, for broader compatibility, by comparingDATA_DIR.resolve()tofile_pathusingrelative_toin atry/except. - Raises
ValueErrorif validation fails.
- Takes a
-
Update both
persist_itemandretrieve_itemto use_safe_data_path(key)instead of inlining the previous sanitization. This keeps behavior the same for valid keys, but ensures all access is withinDATA_DIRand centralizes validation. -
Wrap calls to
_safe_data_pathin the existingtryblocks so anyValueErrorlogs an error and results in a safe failure (FalseorNone) instead of exploiting the filesystem.
All these changes are in modules/new_ui/backend_adapter.py; modules/new_ui/app.py does not need changes. No new external libraries are required; the standard pathlib functionality already imported (Path) is sufficient.
- Add more error codes to ignore (W292, W293, I001, UP006, UP015, UP032, UP045, F841) - Add setup/* to per-file-ignores - Exclude setup/ from formatting - Fix duplicate PerformanceMetric and log_performance issues
|
There was a problem hiding this comment.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
src/core/auth.py (2)
281-295:⚠️ Potential issue | 🟡 MinorFail fast on
required_roles=[].If this dependency is created with an empty list, the endpoint quietly becomes a permanent 403. Raising at dependency construction time makes that misconfiguration obvious.
🔧 Proposed fix
def require_any_role(required_roles: List[UserRole]): """ Dependency to require any of the specified roles for accessing an endpoint. @@ Returns: A dependency function that checks the user's role """ + if not required_roles: + raise ValueError("required_roles must not be empty") def role_checker(token_data: TokenData = Depends(verify_token)) -> TokenData:🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/auth.py` around lines 281 - 295, require_any_role currently accepts an empty required_roles which silently makes every request 403; modify require_any_role to validate required_roles at construction and raise a clear exception (e.g., ValueError) if the list is empty so misconfiguration fails fast, then keep the existing role_checker (TokenData/verify_token) logic unchanged; reference the require_any_role function and its inner role_checker and TokenData to locate where to add the validation.
173-184:⚠️ Potential issue | 🔴 CriticalUsername uniqueness is not atomically enforced—race condition exists.
Lines 174–184 implement a read-before-write pattern that is vulnerable to concurrent signup races. The database layer uses JSON file storage (not transactional SQL), and no unique constraint exists on the username field. Two concurrent requests can both pass the
get_user_by_usernamecheck (line 172) and then both executedb.create_user(...)successfully, resulting in duplicate user accounts.The generic exception handler masks the absence of duplicate-key error handling. Either add a database-level unique constraint on username (if using a real SQL database) or implement atomic compare-and-swap logic; alternatively, use a lock-based or transactional approach for user creation.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/auth.py` around lines 173 - 184, The signup flow currently does a non-atomic read-then-write (using db.get_user_by_username then db.create_user), causing race conditions; modify the logic so username uniqueness is enforced atomically in the DB layer or via a lock: remove the separate existence check in the signup path and instead call a single atomic operation (e.g., db.create_user should validate uniqueness and raise a specific DuplicateUsernameError if the username exists), or wrap the create sequence with a distributed/file/async lock around db.create_user to prevent concurrent creates; update the caller to catch the new DuplicateUsernameError (instead of relying on the generic exception handler) so duplicate signups return a proper failure.src/core/performance_monitor.py (1)
313-373:⚠️ Potential issue | 🟠 MajorThe
time_functionoverload is broken for both decorator forms.
@monitor.time_function("foo")returnsTimerContext(), which lacks a__call__method and fails at decorator definition time. Bare@monitor.time_functionrecords the function object as the metric name instead of a string, breaking the expected schema forrecord_metric.While no decorator usages currently exist in the codebase, the documented usage examples in the docstring will fail if attempted.
Proposed fix
def time_function( self, - name: str, + name: Union[str, Callable], tags: dict[str, str] | None = None, sample_rate: float = 1.0, ): """ Decorator/context manager to time function execution. Usage: `@monitor.time_function`("my_function") def my_function(): pass with monitor.time_function("my_operation"): do_something() """ - def decorator(func: Callable) -> Callable: + def decorator(func: Callable, metric_name: str) -> Callable: `@wraps`(func) def wrapper(*args, **kwargs): start_time = time.perf_counter() try: return func(*args, **kwargs) finally: duration = (time.perf_counter() - start_time) * 1000 # Convert to milliseconds self.record_metric( - name=name, + name=metric_name, value=duration, unit="ms", tags=tags, sample_rate=sample_rate, ) return wrapper # Support both decorator and context manager usage if callable(name): # Used as `@time_function` func = name - return decorator(func) + return decorator(func, func.__name__) else: # Used as `@time_function`("name") or with time_function("name"): # Capture reference to outer class instance monitor_instance = self class TimerContext: + def __call__(self, func: Callable) -> Callable: + return decorator(func, name) + def __enter__(self): self.start_time = time.perf_counter() return self def __exit__(self, exc_type, exc_val, exc_tb): duration = (time.perf_counter() - self.start_time) * 1000 monitor_instance.record_metric( name=name, value=duration, unit="ms", tags=tags, sample_rate=sample_rate, ) return TimerContext()🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/performance_monitor.py` around lines 313 - 373, time_function currently breaks both decorator usages: when called as `@monitor.time_function`("foo") it returns TimerContext() which is not callable, and when used bare as `@monitor.time_function` it records the function object as the metric name. Fix by making the returned object support both decorator and context-manager roles: implement a TimerContext class (used for the name-provided form) that defines __enter__/__exit__ for with-usage and __call__(self, func) to act as a decorator wrapping func (using the existing wrapper logic); update the callable-name branch to treat a bare call (callable(name) case) by deriving the metric name from func.__name__ (not the function object) before wrapping; ensure both decorator paths and the context-manager path call monitor_instance.record_metric(name=<string>, value=..., unit="ms", tags=tags, sample_rate=sample_rate). Use the existing symbols TimerContext, decorator, wrapper, monitor_instance, and record_metric to locate and replace the current implementation.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/core/performance_monitor.py`:
- Around line 243-245: Update the type annotations so the buffer and accessor
reflect the actual stored type: change the declaration of self._metrics_buffer
in OptimizedPerformanceMonitor to use deque[PerformanceMetricV2] (instead of
deque[PerformanceMetric]) and update get_recent_metrics() return type to
Sequence[PerformanceMetricV2] (or List[PerformanceMetricV2]) to match
record_metric() which appends PerformanceMetricV2 instances; ensure any imports
or type aliases are adjusted to reference PerformanceMetricV2.
---
Outside diff comments:
In `@src/core/auth.py`:
- Around line 281-295: require_any_role currently accepts an empty
required_roles which silently makes every request 403; modify require_any_role
to validate required_roles at construction and raise a clear exception (e.g.,
ValueError) if the list is empty so misconfiguration fails fast, then keep the
existing role_checker (TokenData/verify_token) logic unchanged; reference the
require_any_role function and its inner role_checker and TokenData to locate
where to add the validation.
- Around line 173-184: The signup flow currently does a non-atomic
read-then-write (using db.get_user_by_username then db.create_user), causing
race conditions; modify the logic so username uniqueness is enforced atomically
in the DB layer or via a lock: remove the separate existence check in the signup
path and instead call a single atomic operation (e.g., db.create_user should
validate uniqueness and raise a specific DuplicateUsernameError if the username
exists), or wrap the create sequence with a distributed/file/async lock around
db.create_user to prevent concurrent creates; update the caller to catch the new
DuplicateUsernameError (instead of relying on the generic exception handler) so
duplicate signups return a proper failure.
In `@src/core/performance_monitor.py`:
- Around line 313-373: time_function currently breaks both decorator usages:
when called as `@monitor.time_function`("foo") it returns TimerContext() which is
not callable, and when used bare as `@monitor.time_function` it records the
function object as the metric name. Fix by making the returned object support
both decorator and context-manager roles: implement a TimerContext class (used
for the name-provided form) that defines __enter__/__exit__ for with-usage and
__call__(self, func) to act as a decorator wrapping func (using the existing
wrapper logic); update the callable-name branch to treat a bare call
(callable(name) case) by deriving the metric name from func.__name__ (not the
function object) before wrapping; ensure both decorator paths and the
context-manager path call monitor_instance.record_metric(name=<string>,
value=..., unit="ms", tags=tags, sample_rate=sample_rate). Use the existing
symbols TimerContext, decorator, wrapper, monitor_instance, and record_metric to
locate and replace the current implementation.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: bba4a94d-9162-429e-bdb3-065785b41290
📒 Files selected for processing (4)
src/core/auth.pysrc/core/performance_monitor.pytests/core/test_factory.pytests/test_workflow_engine.py
🚧 Files skipped from review as they are similar to previous changes (2)
- tests/test_workflow_engine.py
- tests/core/test_factory.py
src/core/performance_monitor.py
Outdated
| self._metrics_buffer: deque[PerformanceMetric] = deque( | ||
| maxlen=max_metrics_buffer | ||
| ) |
There was a problem hiding this comment.
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
sed -n '188,206p' src/core/performance_monitor.py
sed -n '242,246p' src/core/performance_monitor.py
sed -n '289,296p' src/core/performance_monitor.py
sed -n '392,398p' src/core/performance_monitor.py
rg -n --type=py -C2 '\bPerformanceMetric\b|\bget_recent_metrics\s*\(|\.source\b'Repository: MasumRab/EmailIntelligence
Length of output: 11146
🏁 Script executed:
sed -n '285,305p' src/core/performance_monitor.py
sed -n '385,405p' src/core/performance_monitor.pyRepository: MasumRab/EmailIntelligence
Length of output: 1587
🏁 Script executed:
# Search for calls to get_recent_metrics from src/core context
rg -n "get_recent_metrics" --type=py -B3 -A3Repository: MasumRab/EmailIntelligence
Length of output: 1929
🏁 Script executed:
# Check if legacy PerformanceMetric in src/core is used elsewhere in src/core/
rg -n "PerformanceMetric" src/core/ --type=py | head -30Repository: MasumRab/EmailIntelligence
Length of output: 663
🏁 Script executed:
# Check for any calls to get_recent_metrics() that might use src/core version specifically
rg -n "\.get_recent_metrics\(" src/core/ --type=py -B5 -A5Repository: MasumRab/EmailIntelligence
Length of output: 52
🏁 Script executed:
# Verify if any code in src/core/ accesses .source attribute
rg -n "\.source\b" src/core/ --type=pyRepository: MasumRab/EmailIntelligence
Length of output: 52
🏁 Script executed:
# Check if there are any imports or usages of the OptimizedPerformanceMonitor class from src/core/performance_monitor.py
rg -n "OptimizedPerformanceMonitor" --type=py -B2 -A2Repository: MasumRab/EmailIntelligence
Length of output: 1306
🏁 Script executed:
# Verify the exact class definition boundaries for OptimizedPerformanceMonitor
sed -n '200,250p' src/core/performance_monitor.py | head -60Repository: MasumRab/EmailIntelligence
Length of output: 1550
🏁 Script executed:
# Search for imports of OptimizedPerformanceMonitor or usage of the global performance_monitor from src/core
rg -n "from src.core.performance_monitor import|from src\.core import.*performance_monitor|performance_monitor\." --type=py | head -20Repository: MasumRab/EmailIntelligence
Length of output: 2007
🏁 Script executed:
# Check if there are any references to src/core performance_monitor at all
rg -n "src\.core\.performance_monitor|from src.core import.*performance" --type=pyRepository: MasumRab/EmailIntelligence
Length of output: 752
🏁 Script executed:
# Check what get_aggregated_metrics() returns
sed -n '377,386p' src/core/performance_monitor.pyRepository: MasumRab/EmailIntelligence
Length of output: 380
🏁 Script executed:
# Check the actual calls to get_recent_metrics in src/core context more carefully
rg -n "get_recent_metrics" src/ --type=py -B3 -A3 | grep -A10 -B10 "src/core\|src/main"Repository: MasumRab/EmailIntelligence
Length of output: 1468
🏁 Script executed:
# Verify test usages to understand what get_aggregated_metrics actually accesses
sed -n '90,110p' tests/test_security_integration.pyRepository: MasumRab/EmailIntelligence
Length of output: 992
🏁 Script executed:
# Check the PerformanceMetric class definition in src/core to see if it has a .source attribute
sed -n '35,50p' src/core/performance_monitor.pyRepository: MasumRab/EmailIntelligence
Length of output: 438
🏁 Script executed:
# Verify there are truly no callers of get_recent_metrics from OptimizedPerformanceMonitor in src/core context
rg -n "\.get_recent_metrics\(" src/ --type=pyRepository: MasumRab/EmailIntelligence
Length of output: 196
Fix type annotations to match actual PerformanceMetricV2 usage in OptimizedPerformanceMonitor.
record_metric() stores PerformanceMetricV2 instances, but the _metrics_buffer type annotation and get_recent_metrics() return type still declare PerformanceMetric. This creates a type contract mismatch even though the method is currently unused in the codebase.
Proposed fix
- self._metrics_buffer: deque[PerformanceMetric] = deque(
+ self._metrics_buffer: deque[PerformanceMetricV2] = deque(
maxlen=max_metrics_buffer
)
- def get_recent_metrics(self, name: str, limit: int = 100) -> list[PerformanceMetric]:
+ def get_recent_metrics(self, name: str, limit: int = 100) -> list[PerformanceMetricV2]:
"""Get recent raw metrics for a specific name."""🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/core/performance_monitor.py` around lines 243 - 245, Update the type
annotations so the buffer and accessor reflect the actual stored type: change
the declaration of self._metrics_buffer in OptimizedPerformanceMonitor to use
deque[PerformanceMetricV2] (instead of deque[PerformanceMetric]) and update
get_recent_metrics() return type to Sequence[PerformanceMetricV2] (or
List[PerformanceMetricV2]) to match record_metric() which appends
PerformanceMetricV2 instances; ensure any imports or type aliases are adjusted
to reference PerformanceMetricV2.




No description provided.