Quality of life improvements for MFC toolchain #1118
base: master
Conversation
- Add validate command for pre-flight case validation without building
- Add clean command implementation (was previously missing)
- Show detailed CMake/compiler errors on build failure with formatted panels
- Add build progress indicators (Configuring/Building/Installing status)
- Display human-readable test names prominently in test output
- Add real-time test failure feedback with helpful hints
- Enhanced case validator error messages with suggestions
- Add --debug-log flag for troubleshooting
- Create troubleshooting documentation

Co-Authored-By: Claude Opus 4.5 <[email protected]>
Test Summary:
- Rich panel with formatted pass/fail/skip counts
- Total test duration display
- Failed test details with UUIDs and error types
- Helpful "Next Steps" suggestions for failures

Case Template Generator (./mfc.sh init):
- Create new cases from built-in templates (1D/2D/3D_minimal)
- Copy from any example with --template example:<name>
- List available templates with --list
- Well-documented case files with clear parameter sections

Co-Authored-By: Claude Opus 4.5 <[email protected]>
…oarding
- Add ./mfc.sh completion command to install shell tab-completion
- Installs to ~/.local/share/mfc/completions/ (works across MFC clones)
- Automatically configures ~/.bashrc or ~/.zshrc
- Supports install/uninstall/status subcommands
- Add enhanced help output with Rich formatting
- Add contextual tips after build/test failures
- Add interactive menu mode (./mfc.sh interactive)
- Add welcome message for first-time users with getting started guide
- Add bash and zsh completion scripts
- Update README with toolchain features documentation

Co-Authored-By: Claude Opus 4.5 <[email protected]>
- Remove unused Text import from rich.text
- Add pylint disable for too-many-locals in test()
- Add pylint disable for too-many-arguments in _print_test_summary()
- Rename failed_tests param to failed_test_list to avoid shadowing
- Prefix unused skipped_cases param with underscore
- Add pylint disable for global-statement in setup_debug_logging()

Lint score: 10.00/10

Co-Authored-By: Claude Opus 4.5 <[email protected]>
1. Enhanced subcommand help (./mfc.sh build --help):
   - Shows Rich-formatted header with command description
   - Displays practical examples with explanations
   - Lists key options in easy-to-read format
   - Still shows full argparse options below
2. Topic-based help (./mfc.sh help <topic>):
   - gpu: GPU configuration, OpenACC/OpenMP, troubleshooting
   - clusters: HPC cluster setup (Phoenix, Frontier, etc.)
   - batch: Batch job submission options and examples
   - debugging: Debug logging, validation, common issues
3. Command aliases for faster typing:
   - b = build
   - r = run
   - t = test
   - v = validate
   - c = clean
4. Consistent --help behavior:
   - ./mfc.sh --help now shows enhanced help (same as ./mfc.sh)
   - Aliases shown in command table

Updated bash/zsh completion scripts to support new commands.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
The 'load' command must be sourced to set environment variables in the current shell. Running it directly (./mfc.sh load) now shows a clear error message with the correct usage:

    source ./mfc.sh load -c p -m g

Co-Authored-By: Claude Opus 4.5 <[email protected]>
- Add Unicode progress bar during first-time pip package installation
  Shows "│████████░░░░░░░░░░░░│ 53% (37 pkgs, 1m23s)" in terminals
  Falls back to milestone updates when output is piped
- Defer list_cases() call in args.py until test command is actually used
  Previously generated all ~200 test cases on every startup
- Make pyrometheus and cantera imports lazy in run/input.py
  These heavy chemistry packages are now only imported when needed

These changes fix the apparent "hang" on first run where pip install showed no progress, and significantly speed up startup time for non-test commands.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
- Progress bar now tracks three phases: resolving (0-60%), building wheels (60-80%), and installing (80-100%)
- Add "Starting..." message in mfc.sh for immediate user feedback
- Non-TTY mode now shows phase transitions (building, installing)
- Better status messages showing current operation

Co-Authored-By: Claude Opus 4.5 <[email protected]>
Display the currently processing package (collecting, downloading, or building) on a second line below the progress bar. The display updates in real-time as pip processes each package. Co-Authored-By: Claude Opus 4.5 <[email protected]>
- Use lazy imports in main.py to defer loading heavy modules
- Remove heavy imports from args.py (run.run, build, test.cases)
- Hardcode target names and template names to avoid import chain
- Import test.cases only when running test command

This reduces startup time from ~1.4s to ~1.0s by avoiding loading mako, pygments, and other heavy dependencies until they're needed.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
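The lazy-import pattern described in this commit can be sketched in a few lines. The helper below is illustrative only (it is not MFC's actual code, and `json` stands in for a heavy module like mako or pygments): the heavy module is imported on first use and cached, so commands that never touch it pay nothing at startup.

```python
import importlib

# Cache for the lazily loaded module; None means "not imported yet".
_cases = None

def get_cases():
    """Import the heavy module only on first use, then reuse it."""
    global _cases
    if _cases is None:
        # json stands in here for a heavy dependency such as mako.
        _cases = importlib.import_module("json")
    return _cases
```

Repeated calls return the same cached module object, so the import cost is paid at most once.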
uv is a blazing fast Python package installer (10-100x faster than pip). When uv is detected, use it for package installation (~7 seconds vs several minutes with pip). Falls back to pip with progress bar when uv is not available. Users can install uv with: pip install uv Co-Authored-By: Claude Opus 4.5 <[email protected]>
Instead of requiring users to manually install uv, automatically bootstrap it in the venv using pip. This gives all users the speed benefit of uv (~23 seconds for fresh install vs several minutes with pip alone).

The bootstrap process:
1. Create venv
2. Install uv via pip (~2-3 seconds)
3. Use uv to install all ~70 packages (~7 seconds)

Falls back to pip with progress bar if uv installation fails.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
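The bootstrap sequence above can be expressed as the commands it would run. `bootstrap_commands` is a hypothetical helper for illustration, not MFC's actual function; the fallback path (re-running step 2 with plain pip if uv fails) is omitted for brevity.

```python
import sys

def bootstrap_commands(requirements):
    """Return the command sequence sketched above: install uv with pip,
    then install everything else with uv inside the same interpreter/venv."""
    python = sys.executable
    return [
        [python, "-m", "pip", "install", "uv"],                 # step 2: ~2-3 s
        [python, "-m", "uv", "pip", "install", *requirements],  # step 3: ~7 s
    ]
```

Each list would be passed to `subprocess.run(cmd, check=True)`, with a `CalledProcessError` handler falling back to pip.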
The exact number of packages varies, so just say "Installing Python packages" without specifying a count. Co-Authored-By: Claude Opus 4.5 <[email protected]>
Instead of hiding uv's output, show its native progress display when running in an interactive terminal. This lets users see what's happening during installation instead of appearing to hang. For non-interactive (CI) environments, output is still captured for logs. Co-Authored-By: Claude Opus 4.5 <[email protected]>
On HPC systems where cache and venv are on different filesystems, uv's hardlink attempts fail and fall back to copying. Setting UV_LINK_MODE=copy skips the failed hardlink attempts, reducing overhead and eliminating the warning message. Co-Authored-By: Claude Opus 4.5 <[email protected]>
- Remove early exit for 'init' command in mfc.sh that prevented help from showing
- Fix _handle_enhanced_help to properly show both enhanced help and argparse help for all subcommands
- Add subparser map to print argparse help for specific commands

Now ./mfc.sh init --help (and other subcommands) properly show help.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
Make build imports lazy in case.py and run/input.py to break circular import chains that were exposed by the lazy import optimizations in main.py and args.py.

- case.py: lazy import build in get_inp() and get_fpp()
- run/input.py: lazy import build in generate_inp(), validate_constraints(), and clean()

Co-Authored-By: Claude Opus 4.5 <[email protected]>
- Pass --help through to Python for bash-handled commands (clean, lint, format, load, spelling) so enhanced help is shown
- Add missing commands to COMMANDS dict in user_guide.py: bench, completion, lint, format, spelling

Co-Authored-By: Claude Opus 4.5 <[email protected]>
Use argparse's built-in aliases= parameter instead of separate parsers for command aliases (b, r, t, v, c). This ensures aliases inherit all arguments from their parent commands. Previously, ./mfc.sh t would fail because the 't' parser didn't have test-specific arguments like --rdma-mpi. Co-Authored-By: Claude Opus 4.5 <[email protected]>
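The `aliases=` mechanism this commit switches to is standard argparse: the alias is registered against the same parser object as the parent command, so it inherits every argument automatically. A minimal demonstration (command and flag names mirror the examples in the commit message):

```python
import argparse

parser = argparse.ArgumentParser(prog="mfc")
sub = parser.add_subparsers(dest="command")

# One parser serves both "test" and its alias "t".
test_parser = sub.add_parser("test", aliases=["t"])
test_parser.add_argument("--rdma-mpi", action="store_true")

# Parsing via the alias accepts test-specific arguments,
# which separate per-alias parsers did not.
args = parser.parse_args(["t", "--rdma-mpi"])
```

Note that `args.command` holds the token actually typed (`"t"`), so code dispatching on the command name still needs an alias-to-canonical mapping.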
- Parse ninja's [X/Y] output to show compile progress
- Display progress bar with: file count, percentage, current file, elapsed time
- Falls back to spinner with elapsed time for non-TTY environments
- Shows "(build took Xs)" for builds longer than 5 seconds

In interactive terminals, users now see:

    Building simulation [████████░░░░] 42/156 • 0:02:34 m_rhs.fpp.f90

In non-interactive mode:

    Building simulation... (build took 234.5s) ✓ Built simulation

Co-Authored-By: Claude Opus 4.5 <[email protected]>
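The `[X/Y]` parsing can be sketched with two regexes, one per build tool. The patterns below are assumptions reconstructed from the line formats quoted in the commit message, not MFC's exact expressions:

```python
import re

# ninja emits "[42/156] Building ..."; make emits "[ 16%] Building ...".
_NINJA_PROGRESS_RE = re.compile(r"\[(\d+)/(\d+)\]\s+(.*)")
_MAKE_PROGRESS_RE = re.compile(r"\[\s*(\d+)%\]\s+(.*)")

def parse_progress(line):
    """Return (completed, total, action) for ninja lines,
    (percent, action) for make lines, or None for anything else."""
    stripped = line.strip()
    m = _NINJA_PROGRESS_RE.match(stripped)
    if m:
        return int(m.group(1)), int(m.group(2)), m.group(3)
    m = _MAKE_PROGRESS_RE.match(stripped)
    if m:
        return int(m.group(1)), m.group(2)
    return None
```

Lines that match drive the progress bar; everything else is accumulated for the error panel shown on failure.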
The 'init' command was originally used to bootstrap the Python environment without running any command. This restores that behavior and renames the case template creation to 'new':

- ./mfc.sh init -> Just bootstrap venv/environment, then exit
- ./mfc.sh new -> Create a new case from a template

Updated files:
- mfc.sh: Restore original init behavior (exit after python.sh)
- args.py: Rename init parser to new
- main.py: Handle 'new' command instead of 'init'
- user_guide.py: Update all references from init to new
- init.py: Update usage messages to use 'new'

Co-Authored-By: Claude Opus 4.5 <[email protected]>
When --verbose is used, bypass the progress bar and show raw cmake/compiler output. This applies to both configure and build steps. Previously, verbose mode would still capture all output in the progress bar, making the --verbose flag ineffective. Co-Authored-By: Claude Opus 4.5 <[email protected]>
- Replace black with autopep8 due to Python 3.12.5 memory safety issues
- Fix bash completion to allow file path navigation after commands
- Fix format.sh to accept custom path arguments
- Fix pylint errors in build.py and main.py
- Reformat all Python files with autopep8 for CI consistency

Co-Authored-By: Claude Opus 4.5 <[email protected]>
Implement a declarative CLI schema that eliminates the need to manually update 8+ files when adding commands or options:

- Add cli/schema.py with dataclass definitions for CLI structure
- Add cli/commands.py as SINGLE SOURCE OF TRUTH for all commands
- Add cli/argparse_gen.py to generate argparse parsers from schema
- Add cli/completion_gen.py to generate bash/zsh completions
- Add generate.py and ./mfc.sh generate command

Modified files to use the schema:
- args.py: Now uses generated parser from CLI schema
- user_guide.py: Imports COMMANDS from cli/commands.py
- Completion scripts are now auto-generated, not hand-edited

Adding a new command now requires editing only cli/commands.py, then running ./mfc.sh generate to update completion scripts.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
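The single-source-of-truth idea can be sketched with two dataclasses and one derived artifact. Field and class names below are illustrative guesses at the shape of cli/schema.py, not its actual contents:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Option:
    flags: List[str]   # e.g. ["-j", "--jobs"]
    help: str

@dataclass
class Command:
    name: str
    help: str
    aliases: List[str] = field(default_factory=list)
    options: List[Option] = field(default_factory=list)

# The one place a command is declared; parsers, completions,
# and docs are all generated from this list.
COMMANDS = [
    Command("build", "Build MFC targets", aliases=["b"],
            options=[Option(["-j", "--jobs"], "Parallel build jobs")]),
]

def completion_words(commands):
    """Derive shell-completion words (names plus aliases) from the schema."""
    words = []
    for c in commands:
        words.append(c.name)
        words.extend(c.aliases)
    return words
```

Equivalent generator functions would walk the same list to emit argparse subparsers and markdown reference tables, which is what keeps the 8+ files in sync.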
Extract helper functions to reduce complexity in completion_gen.py and argparse_gen.py. Add pylint disable for dataclasses that legitimately need many attributes. Code now scores 10.00/10 on pylint. Co-Authored-By: Claude Opus 4.5 <[email protected]>
- Add cli/docs_gen.py to generate CLI reference markdown from schema
- Update generate.py to also produce docs/cli-reference.md
- Add generate --check step to lint-toolchain CI workflow
- Generated documentation includes quick reference table, all commands with options, examples, and common options section

Co-Authored-By: Claude Opus 4.5 <[email protected]>
This adds toolchain/mfc/params_tests/ with tools to capture and verify validation behavior before refactoring case_validator.py:

- inventory.py: Export all 3300 parameters with types and stages
- snapshot.py: Capture validation results from 134 example cases
- coverage.py: Analyze 306 constraints across 56 check methods
- runner.py: CLI for build/verify/summary commands

Run `python -m mfc.params_tests.runner build` to capture baseline, then `python -m mfc.params_tests.runner verify` after changes to detect any unintended changes to validation behavior.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
Extends the safety net with two new testing approaches:

1. negative_tests.py - Hand-crafted test cases that intentionally violate constraints (20 tests, 100% trigger rate)
2. mutation_tests.py - Automatically mutates valid example cases to test validator coverage (830 mutations, 82.8% catch rate)

Mutation testing found real validator gaps:
- t_step_save: No validation for 0 or negative values
- mpp_lim/cyl_coord: Invalid strings like "X" not caught
- x_domain%beg/end: Missing value not validated

Run with:

    python -m mfc.params_tests.runner negative
    python -m mfc.params_tests.runner mutation

Co-Authored-By: Claude Opus 4.5 <[email protected]>
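The mutation-testing idea is simple to state in code: take a known-valid case, perturb one parameter, and expect the validator to object. The `validate` function below is a toy stand-in for MFC's real validator, checking only two of the rules mentioned above:

```python
def validate(case):
    """Toy validator: returns a list of error strings (empty = valid)."""
    errors = []
    if case.get("t_step_save", 1) <= 0:
        errors.append("t_step_save must be > 0")
    if case.get("mpp_lim") not in (None, "T", "F"):
        errors.append("mpp_lim must be 'T' or 'F'")
    return errors

def mutate_and_check(valid_case, param, bad_value):
    """Apply one mutation and report whether the validator caught it."""
    mutated = dict(valid_case, **{param: bad_value})
    return bool(validate(mutated))
```

The catch rate is then just the fraction of mutations for which `mutate_and_check` returns True, aggregated over all example cases and parameters.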
Mutation testing found several gaps where the validator allowed invalid parameter values. This commit fixes them:

1. Add check_parameter_types() - validates Fortran logicals are 'T'/'F'
   - mpp_lim, cyl_coord, bubbles_euler, etc. now reject 'X', 'yes', etc.
2. Add x_domain validation - x_domain%beg/end required when m > 0
3. Fix t_step_save - now validates t_step_save > 0
4. Fix dt validation - now validates dt > 0 in all modes where dt is set
5. Improve time stepping validation - properly handles cfl_dt, cfl_adap_dt, and adap_dt modes

Mutation test results improved from 82.8% to 99.9% catch rate. All 134 example cases still pass validation.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
Implement a central parameter registry that serves as the single source of truth for MFC's ~3,300 case parameters. This eliminates manual synchronization between case_dicts.py, case_validator.py, and documentation.

Key changes:
- Create mfc/params/ package with schema definitions and registry
- Refactor case_dicts.py from ~680 lines to ~115 lines (uses registry)
- Add parameter definitions organized by domain (core, weno, patches, etc.)
- Add code generators for case_dicts, validator, and documentation
- Add physics validation for patches, acoustics, and bubbles

The registry provides:
- ParamDef dataclass for parameter metadata (name, type, stages, description)
- ConstraintRule dataclass for validation constraints (26 defined)
- Generators that produce case_dicts-compatible output
- Documentation generator (~194K chars of markdown)
- Comparison tool to verify registry matches original

All tests pass:
- Safety net: 134/134 cases unchanged
- Mutation tests: 99.9% catch rate
- JSON schema validation: working

Co-Authored-By: Claude Opus 4.5 <[email protected]>
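The registry concept can be sketched as one `ParamDef` record per parameter, with generators deriving the per-stage dictionaries that case_dicts.py used to hard-code. The field names follow the commit message (name, type, stages, description); everything else here, including the sample parameters, is an illustrative assumption:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class ParamDef:
    name: str
    ptype: str                # e.g. "int", "real", "log", "str"
    stages: Tuple[str, ...]   # subset of ("pre_process", "simulation", "post_process")
    description: str = ""

# Two sample entries; the real registry would hold ~3,300 of these.
REGISTRY = [
    ParamDef("m", "int", ("pre_process", "simulation", "post_process"), "Grid cells in x"),
    ParamDef("dt", "real", ("simulation",), "Time step"),
]

def params_for_stage(stage):
    """Generator sketch: derive a stage's parameter list from the registry,
    replacing the hand-maintained lists in case_dicts.py."""
    return [p.name for p in REGISTRY if stage in p.stages]
```

The validator and documentation generators would walk the same `REGISTRY`, which is what makes the three artifacts impossible to drift apart.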
The strings had no placeholders, making the f-prefix unnecessary. Fixes lint warning F541. Co-Authored-By: Claude Opus 4.5 <[email protected]>
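For reference, the F541 fix is a one-character change: an f-string with no `{placeholders}` is identical to a plain string literal, so the prefix is dropped.

```python
# flake8 F541: f-string without any placeholders
with_prefix = f"no placeholders here"

# the fix: drop the unnecessary f-prefix
fixed = "no placeholders here"
```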
/improve
@@ -15,545 +32,56 @@ class ParamType(Enum):

```python
    _ANALYTIC_REAL = {"type": ["number", "string"]}

    def analytic(self):
        """Return the analytic version of this type (allows string expressions)."""
        if self == self.INT:
            return self._ANALYTIC_INT
        if self == self.REAL:
            return self._ANALYTIC_REAL
        return self.STR
```
Suggestion: Modify the analytic() method in ParamType to return enum members instead of raw dictionaries. This ensures type consistency and prevents crashes during JSON schema generation. [possible issue, importance: 8]
```python
class ParamType(Enum):
    """
    Parameter types for JSON schema validation.

    Each type maps to a JSON schema fragment used by fastjsonschema
    to validate case file parameters.
    """
    INT = {"type": "integer"}
    REAL = {"type": "number"}
    LOG = {"enum": ["T", "F"]}
    STR = {"type": "string"}
    ANALYTIC_INT = {"type": ["integer", "string"]}
    ANALYTIC_REAL = {"type": ["number", "string"]}

    def analytic(self):
        """Return the analytic version of this type (allows string expressions)."""
        if self == self.INT:
            return self.ANALYTIC_INT
        if self == self.REAL:
            return self.ANALYTIC_REAL
        return self.STR
```
```python
# PRE_PROCESS.append(f"spec_pp({f_id})")

# Load parameter definitions from registry
COMMON, PRE_PROCESS, SIMULATION, POST_PROCESS = _load_from_registry()
```
Suggestion: Defer loading the parameter registry until it is first needed, rather than at module import time. This avoids potential circular import errors and improves startup performance. [possible issue, importance: 7]
```python
# Load parameter definitions lazily to avoid import-time circular imports
COMMON = PRE_PROCESS = SIMULATION = POST_PROCESS = None

def _ensure_loaded():
    global COMMON, PRE_PROCESS, SIMULATION, POST_PROCESS
    if COMMON is None:
        COMMON, PRE_PROCESS, SIMULATION, POST_PROCESS = _load_from_registry()
```
```python
for pattern, template in PATTERNS:
    match = re.match(pattern, param_name)
    if match:
        return template.format(*match.groups())
```
✅ Suggestion: Use re.fullmatch() instead of re.match() to ensure that regular expression patterns match the entire parameter name. This prevents incorrect descriptions from partial prefix matches. [general, importance: 7]
```python
for pattern, template in PATTERNS:
    match = re.fullmatch(pattern, param_name)
    if match:
        return template.format(*match.groups())
```
```python
    """Validate parameter types before other checks.

    This catches invalid values early with clear error messages,
    rather than letting them cause confusing failures later.
    """
    # Logical (boolean) parameters that must be 'T' or 'F'
    logical_params = [
        'mpp_lim', 'cyl_coord', 'bubbles_euler', 'adv_n', 'adap_dt',
        'polytropic', 'polydisperse', 'acoustic_source', 'hypoelasticity',
        'hyperelasticity', 'cont_damage', 'mapped_weno', 'mp_weno',
        'teno', 'weno_avg', 'weno_Re_flux', 'null_weights', 'alt_soundspeed',
        'mixture_err', 'parallel_io', 'prim_vars_wrt', 'rho_wrt', 'E_wrt',
        'mom_wrt(1)', 'mom_wrt(2)', 'mom_wrt(3)', 'vel_wrt(1)', 'vel_wrt(2)',
        'vel_wrt(3)', 'flux_wrt(1)', 'flux_wrt(2)', 'flux_wrt(3)',
        'c_wrt', 'omega_wrt(1)', 'omega_wrt(2)', 'omega_wrt(3)', 'qm_wrt',
        'schlieren_wrt', 'probe_wrt', 'integral_wrt', 'viscous', 'shear_stress',
        'bulk_stress', 'bubbles_lagrange', 'relativity', 'cu_tensor', 'igr',
    ]
    for param in logical_params:
        self._validate_logical(param)

    # Required domain parameters when m > 0
    m = self.get('m')
    if m is not None and m > 0:
        self.prohibit(not self.is_set('x_domain%beg'),
                      "x_domain%beg must be set when m > 0")
        self.prohibit(not self.is_set('x_domain%end'),
                      "x_domain%end must be set when m > 0")
```
Suggestion: Add a numeric check for the m parameter before its comparison to prevent a potential TypeError during validation. [possible issue, importance: 7]
```python
def check_parameter_types(self):
    """Validate parameter types before other checks.

    This catches invalid values early with clear error messages,
    rather than letting them cause confusing failures later.
    """
    # Logical (boolean) parameters that must be 'T' or 'F'
    logical_params = [
        'mpp_lim', 'cyl_coord', 'bubbles_euler', 'adv_n', 'adap_dt',
        'polytropic', 'polydisperse', 'acoustic_source', 'hypoelasticity',
        'hyperelasticity', 'cont_damage', 'mapped_weno', 'mp_weno',
        'teno', 'weno_avg', 'weno_Re_flux', 'null_weights', 'alt_soundspeed',
        'mixture_err', 'parallel_io', 'prim_vars_wrt', 'rho_wrt', 'E_wrt',
        'mom_wrt(1)', 'mom_wrt(2)', 'mom_wrt(3)', 'vel_wrt(1)', 'vel_wrt(2)',
        'vel_wrt(3)', 'flux_wrt(1)', 'flux_wrt(2)', 'flux_wrt(3)',
        'c_wrt', 'omega_wrt(1)', 'omega_wrt(2)', 'omega_wrt(3)', 'qm_wrt',
        'schlieren_wrt', 'probe_wrt', 'integral_wrt', 'viscous', 'shear_stress',
        'bulk_stress', 'bubbles_lagrange', 'relativity', 'cu_tensor', 'igr',
    ]
    for param in logical_params:
        self._validate_logical(param)

    # Required domain parameters when m > 0
    m = self.get('m')
    if m is not None and self._is_numeric(m) and m > 0:
        self.prohibit(not self.is_set('x_domain%beg'),
                      "x_domain%beg must be set when m > 0")
        self.prohibit(not self.is_set('x_domain%end'),
                      "x_domain%end must be set when m > 0")
```
```python
cmd = f"./mfc.sh new {name} -t {template}"
cons.print()
cons.print(f"[dim]Running: {cmd}[/dim]")
cons.print()
os.system(cmd)
```
Suggestion: Replace os.system with subprocess.run and a list of arguments to prevent a shell injection vulnerability when creating a new case from user input. [security, importance: 9]
```python
cmd = ["./mfc.sh", "new", name, "-t", template]
cons.print()
cons.print(f"[dim]Running: {' '.join(cmd)}[/dim]")
cons.print()
import subprocess
subprocess.run(cmd, check=False)
```
```python
path = Prompt.ask("Path to case.py")

cmd = f"./mfc.sh validate {path}"
cons.print()
cons.print(f"[dim]Running: {cmd}[/dim]")
cons.print()
os.system(cmd)
```
Suggestion: Replace os.system with subprocess.run and a list of arguments to prevent a shell injection vulnerability when validating a case from user input. [security, importance: 9]
```python
path = Prompt.ask("Path to case.py")

cmd = ["./mfc.sh", "validate", path]
cons.print()
cons.print(f"[dim]Running: {' '.join(cmd)}[/dim]")
cons.print()
import subprocess
subprocess.run(cmd, check=False)
```
```python
if streaming:
    # Streaming mode (-v): print build progress lines as they happen
    cons.print(f" [bold blue]Building[/bold blue] [magenta]{target_name}[/magenta] [dim](-v)[/dim]...")
    start_time = time.time()

    # Read stdout and print matching lines
    for line in iter(process.stdout.readline, ''):
        all_stdout.append(line)
        stripped = line.strip()

        # Try ninja format first: [42/156] Action
        ninja_match = _NINJA_PROGRESS_RE.match(stripped)
        if ninja_match:
            completed = ninja_match.group(1)
            total = ninja_match.group(2)
            action = ninja_match.group(3)
            # Extract filename from action
            parts = action.split()
            if len(parts) >= 3:
                filename = os.path.basename(parts[-1]).replace('.o', '').replace('.obj', '')
                if len(filename) > 40:
                    filename = filename[:37] + "..."
            cons.print(f" [dim][{completed}/{total}][/dim] (unknown)")
            continue

        # Try make format: [ 16%] Action
        make_match = _MAKE_PROGRESS_RE.match(stripped)
        if make_match:
            percent = make_match.group(1)
            action = make_match.group(2)
            # Extract filename from action
            parts = action.split()
            if len(parts) >= 3:
                # Get the last part which is usually the file path
                obj_path = parts[-1]
                filename = os.path.basename(obj_path).replace('.o', '').replace('.obj', '')
                if len(filename) > 40:
                    filename = filename[:37] + "..."
            cons.print(f" [dim][{percent:>3}%][/dim] (unknown)")

    # Read any remaining stderr
    stderr = process.stderr.read()
    all_stderr.append(stderr)
    process.wait()

    elapsed = time.time() - start_time
    if elapsed > 5:
        cons.print(f" [dim](build took {elapsed:.1f}s)[/dim]")

    return subprocess.CompletedProcess(cmd, process.returncode, ''.join(all_stdout), ''.join(all_stderr))
```
Suggestion: To prevent a potential deadlock in streaming mode, read stderr concurrently with stdout using a thread and queue, instead of reading stderr only after stdout is exhausted. [possible issue, importance: 9]
```python
if streaming:
    # Streaming mode (-v): print build progress lines as they happen
    cons.print(f" [bold blue]Building[/bold blue] [magenta]{target_name}[/magenta] [dim](-v)[/dim]...")
    start_time = time.time()

    stderr_queue = queue.Queue()

    def read_stderr():
        for line in iter(process.stderr.readline, ''):
            stderr_queue.put(line)
        stderr_queue.put(None)  # Signal EOF

    stderr_thread = threading.Thread(target=read_stderr, daemon=True)
    stderr_thread.start()

    # Read stdout and print matching lines
    for line in iter(process.stdout.readline, ''):
        all_stdout.append(line)
        stripped = line.strip()

        # Drain any pending stderr to avoid pipe buffer deadlocks
        while True:
            try:
                err_line = stderr_queue.get_nowait()
            except queue.Empty:
                break
            if err_line is None:
                break
            all_stderr.append(err_line)

        # Try ninja format first: [42/156] Action
        ninja_match = _NINJA_PROGRESS_RE.match(stripped)
        if ninja_match:
            completed = ninja_match.group(1)
            total = ninja_match.group(2)
            action = ninja_match.group(3)
            # Extract filename from action
            parts = action.split()
            if len(parts) >= 3:
                filename = os.path.basename(parts[-1]).replace('.o', '').replace('.obj', '')
                if len(filename) > 40:
                    filename = filename[:37] + "..."
            cons.print(f" [dim][{completed}/{total}][/dim] (unknown)")
            continue

        # Try make format: [ 16%] Action
        make_match = _MAKE_PROGRESS_RE.match(stripped)
        if make_match:
            percent = make_match.group(1)
            action = make_match.group(2)
            # Extract filename from action
            parts = action.split()
            if len(parts) >= 3:
                # Get the last part which is usually the file path
                obj_path = parts[-1]
                filename = os.path.basename(obj_path).replace('.o', '').replace('.obj', '')
                if len(filename) > 40:
                    filename = filename[:37] + "..."
            cons.print(f" [dim][{percent:>3}%][/dim] (unknown)")

    process.wait()

    # Drain remaining stderr and ensure thread exits
    stderr_thread.join(timeout=1)
    while True:
        try:
            err_line = stderr_queue.get_nowait()
        except queue.Empty:
            break
        if err_line is None:
            break
        all_stderr.append(err_line)

    elapsed = time.time() - start_time
    if elapsed > 5:
        cons.print(f" [dim](build took {elapsed:.1f}s)[/dim]")

    return subprocess.CompletedProcess(cmd, process.returncode, ''.join(all_stdout), ''.join(all_stderr))
```
```python
def _bash_completion_for_type(comp_type: CompletionType, choices: List[str] = None) -> str:
    """Generate bash completion expression for a completion type."""
    if comp_type == CompletionType.CHOICES and choices:
        return f'COMPREPLY=( $(compgen -W "{" ".join(choices)}" -- "${{cur}}") )'
    return _BASH_COMPLETION_MAP.get(comp_type, "")
```
Suggestion: Escape shell metacharacters in completion choices before embedding them into the compgen command to prevent syntax errors and potential shell injection in the generated bash script. [security, importance: 8]
-def _bash_completion_for_type(comp_type: CompletionType, choices: List[str] = None) -> str:
-    """Generate bash completion expression for a completion type."""
-    if comp_type == CompletionType.CHOICES and choices:
-        return f'COMPREPLY=( $(compgen -W "{" ".join(choices)}" -- "${{cur}}") )'
-    return _BASH_COMPLETION_MAP.get(comp_type, "")
+def _bash_completion_for_type(comp_type: CompletionType, choices: List[str] = None) -> str:
+    """Generate bash completion expression for a completion type."""
+    if comp_type == CompletionType.CHOICES and choices:
+        def _escape_bash_word(word: str) -> str:
+            # Minimal safe escaping for inclusion inside a double-quoted compgen -W "..."
+            return (
+                str(word)
+                .replace("\\", "\\\\")
+                .replace('"', '\\"')
+                .replace("$", "\\$")
+                .replace("`", "\\`")
+            )
+        escaped = " ".join(_escape_bash_word(c) for c in choices)
+        return f'COMPREPLY=( $(compgen -W "{escaped}" -- "${{cur}}") )'
+    return _BASH_COMPLETION_MAP.get(comp_type, "")
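The escaping in the suggestion can be exercised on its own. The sketch below is a standalone copy of the suggested helper (the function name here is ours, not part of MFC), showing that a choice containing shell metacharacters survives as inert text inside the generated `compgen -W "..."` list:

```python
def escape_bash_word(word: str) -> str:
    """Escape a word for safe inclusion inside a double-quoted compgen -W "..." list."""
    return (
        str(word)
        .replace("\\", "\\\\")
        .replace('"', '\\"')
        .replace("$", "\\$")
        .replace("`", "\\`")
    )

# A choice containing a command substitution is neutralized rather than evaluated.
escaped = escape_bash_word('pre$(touch /tmp/pwned)process')
print(f'COMPREPLY=( $(compgen -W "{escaped}" -- "${{cur}}") )')
```

The `$` escape is the important one: without it, bash would expand `$(...)` when sourcing the completion script.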
def load_example_case(case_path: Path) -> Dict[str, Any]:
    """Load parameters from an example case file."""
    result = subprocess.run(
        ["python3", str(case_path)],
        capture_output=True,
        text=True,
        cwd=case_path.parent,
        timeout=30,
        check=False
    )
    if result.returncode != 0:
        return None
    return json.loads(result.stdout.strip())
Suggestion: To make test execution more robust, defensively parse the output of case.py scripts by extracting the last line that appears to be a JSON object, ignoring other printed output. [possible issue, importance: 7]
-def load_example_case(case_path: Path) -> Dict[str, Any]:
-    """Load parameters from an example case file."""
-    result = subprocess.run(
-        ["python3", str(case_path)],
-        capture_output=True,
-        text=True,
-        cwd=case_path.parent,
-        timeout=30,
-        check=False
-    )
-    if result.returncode != 0:
-        return None
-    return json.loads(result.stdout.strip())
+def load_example_case(case_path: Path) -> Dict[str, Any]:
+    """Load parameters from an example case file."""
+    result = subprocess.run(
+        ["python3", str(case_path)],
+        capture_output=True,
+        text=True,
+        cwd=case_path.parent,
+        timeout=30,
+        check=False
+    )
+    if result.returncode != 0:
+        return None
+    stdout = (result.stdout or "").strip()
+    if not stdout:
+        return None
+    # Many example cases may print extra lines; prefer parsing the last JSON-looking line.
+    lines = [ln.strip() for ln in stdout.splitlines() if ln.strip()]
+    for candidate in reversed(lines):
+        if candidate.startswith("{") and candidate.endswith("}"):
+            try:
+                return json.loads(candidate)
+            except json.JSONDecodeError:
+                return None
+    return None
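The parsing strategy in that suggestion can be isolated and tested without running any case script. A minimal sketch of the same "last JSON-looking line wins" logic:

```python
import json

def parse_last_json_line(stdout: str):
    """Return the last stdout line that parses as a JSON object, else None."""
    lines = [ln.strip() for ln in stdout.splitlines() if ln.strip()]
    for candidate in reversed(lines):
        if candidate.startswith("{") and candidate.endswith("}"):
            try:
                return json.loads(candidate)
            except json.JSONDecodeError:
                return None
    return None

# A case script that prints a banner before its JSON still parses cleanly.
noisy = 'Initializing case...\nWarning: defaulting dt\n{"m": 99, "dt": 1e-6}'
print(parse_last_json_line(noisy))
```

One design note: returning `None` on a malformed final JSON line (rather than scanning further back) matches the suggestion, which treats a broken last object as a broken case file.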
re.match only checks if pattern matches at start of string, which could cause incorrect matches if patterns aren't perfectly ordered. re.fullmatch ensures the entire parameter name matches the pattern. Co-Authored-By: Claude Opus 4.5 <[email protected]>
|
/improve |
CASE_OPTIMIZATION = _get_case_optimization_params()

# Build JSON schema for validation
_properties = {k: v.value for k, v in ALL.items()}
Suggestion: Refactor the JSON schema generation to be more robust by using getattr(v, "value", v) and adding a type check to handle both ParamType enums and pre-formatted dict schema fragments. [possible issue, importance: 7]
-_properties = {k: v.value for k, v in ALL.items()}
+# Build JSON schema for validation
+_properties = {}
+for k, v in ALL.items():
+    schema_fragment = getattr(v, "value", v)
+    if not isinstance(schema_fragment, dict):
+        raise TypeError(f"Invalid schema fragment for {k}: {type(schema_fragment).__name__}")
+    _properties[k] = schema_fragment
for n in ["dt", "cfl_target", "cfl_max", "t_tol", "weno_eps", "teno_CT", "wenoz_q",
          "adap_dt_tol", "t_stop", "t_save", "tau_star", "cont_damage_s", "alpha_bar",
          "alf_factor", "ic_eps", "ic_beta"]:
    _r(n, REAL, {S})
for n in ["mapped_weno", "wenoz", "teno", "weno_Re_flux", "weno_avg", "mp_weno",
          "run_time_info", "prim_vars_wrt", "alt_soundspeed", "null_weights",
          "mixture_err", "probe_wrt", "acoustic_source", "integral_wrt", "rdma_mpi",
          "adap_dt", "powell", "igr_pres_lim", "int_comp", "nv_uvm_out_of_core",
          "nv_uvm_pref_gpu"]:
    _r(n, LOG, {S})
# cfl_target also POST_PROCESS
_r("cfl_target", REAL, {O})
Suggestion: Consolidate the duplicate registration of the cfl_target parameter into a single call with combined stages {S, O} to avoid potential runtime errors and improve clarity. [possible issue, importance: 6]
-for n in ["dt", "cfl_target", "cfl_max", "t_tol", "weno_eps", "teno_CT", "wenoz_q",
-          "adap_dt_tol", "t_stop", "t_save", "tau_star", "cont_damage_s", "alpha_bar",
-          "alf_factor", "ic_eps", "ic_beta"]:
-    _r(n, REAL, {S})
 for n in ["mapped_weno", "wenoz", "teno", "weno_Re_flux", "weno_avg", "mp_weno",
           "run_time_info", "prim_vars_wrt", "alt_soundspeed", "null_weights",
           "mixture_err", "probe_wrt", "acoustic_source", "integral_wrt", "rdma_mpi",
           "adap_dt", "powell", "igr_pres_lim", "int_comp", "nv_uvm_out_of_core",
           "nv_uvm_pref_gpu"]:
     _r(n, LOG, {S})
-# cfl_target also POST_PROCESS
-_r("cfl_target", REAL, {O})
+for n in ["dt", "cfl_max", "t_tol", "weno_eps", "teno_CT", "wenoz_q",
+          "adap_dt_tol", "t_stop", "t_save", "tau_star", "cont_damage_s", "alpha_bar",
+          "alf_factor", "ic_eps", "ic_beta"]:
+    _r(n, REAL, {S})
+# cfl_target is used in both SIMULATION and POST_PROCESS
+_r("cfl_target", REAL, {S, O})
cmd = f"./mfc.sh new {name} -t {template}"
cons.print()
cons.print(f"[dim]Running: {cmd}[/dim]")
cons.print()
os.system(cmd)
Suggestion: Replace os.system with subprocess.run and pass command arguments as a list to prevent shell injection vulnerabilities from user-provided input in interactive mode. [security, importance: 9]
-cmd = f"./mfc.sh new {name} -t {template}"
-cons.print()
-cons.print(f"[dim]Running: {cmd}[/dim]")
-cons.print()
-os.system(cmd)
+import subprocess
+...
+cmd = ["./mfc.sh", "new", name, "-t", template]
+cons.print()
+cons.print(f"[dim]Running: {' '.join(cmd)}[/dim]")
+cons.print()
+subprocess.run(cmd, check=False)
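One refinement on the suggestion: `' '.join(cmd)` prints a command that is not copy-pasteable if `name` contains spaces or metacharacters. `shlex.join` quotes each word for display while the list form still protects the actual execution. A hedged sketch (the `run_new_case` wrapper is ours, not MFC code; it is defined but deliberately not invoked here):

```python
import shlex
import subprocess

def run_new_case(name: str, template: str) -> int:
    # Arguments are passed as a list, so no shell ever interprets `name`.
    cmd = ["./mfc.sh", "new", name, "-t", template]
    # shlex.join quotes each word, so the echoed command is safe to copy-paste.
    print(f"Running: {shlex.join(cmd)}")
    return subprocess.run(cmd, check=False).returncode

# A hostile name stays a single argv entry instead of being shell-evaluated.
print(shlex.join(["./mfc.sh", "new", "case; rm -rf /", "-t", "1D_minimal"]))
```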
m = self.get('m')
if m is not None and m > 0:
    self.prohibit(not self.is_set('x_domain%beg'),
                  "x_domain%beg must be set when m > 0")
    self.prohibit(not self.is_set('x_domain%end'),
                  "x_domain%end must be set when m > 0")
Suggestion: Add a check to ensure the parameter m is numeric before performing a comparison (m > 0) to prevent a TypeError and an unexpected crash during validation. [possible issue, importance: 7]
-m = self.get('m')
-if m is not None and m > 0:
+m = self.get('m')
+if m is not None and self._is_numeric(m) and m > 0:
     self.prohibit(not self.is_set('x_domain%beg'),
                   "x_domain%beg must be set when m > 0")
     self.prohibit(not self.is_set('x_domain%end'),
                   "x_domain%end must be set when m > 0")
def extract_constraints_from_validator() -> List[ConstraintInfo]:
    """Parse case_validator.py and extract all prohibit() calls."""
    validator_path = Path(__file__).parent.parent / "case_validator.py"

    with open(validator_path, 'r', encoding='utf-8') as f:
        source = f.read()

    constraints = []
    current_method = None
    tree = ast.parse(source)

    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            current_method = node.name
            continue

        if not _is_prohibit_call(node):
            continue

        message = _extract_message(node.args[1])
        try:
            condition_code = ast.unparse(node.args[0])
        except Exception:
            condition_code = "<complex>"

        constraints.append(ConstraintInfo(
            method=current_method or "<unknown>",
            line_number=node.lineno,
            message=message,
            condition_code=condition_code
        ))

    return constraints
✅ Suggestion: Fix incorrect method attribution in extract_constraints_from_validator by changing the ast traversal logic. Instead of a single ast.walk(), iterate through each function definition within the CaseValidator class to correctly associate constraints with their parent method. [possible issue, importance: 9]
-def extract_constraints_from_validator() -> List[ConstraintInfo]:
-    """Parse case_validator.py and extract all prohibit() calls."""
-    validator_path = Path(__file__).parent.parent / "case_validator.py"
-    with open(validator_path, 'r', encoding='utf-8') as f:
-        source = f.read()
-    constraints = []
-    current_method = None
-    tree = ast.parse(source)
-    for node in ast.walk(tree):
-        if isinstance(node, ast.FunctionDef):
-            current_method = node.name
-            continue
-        if not _is_prohibit_call(node):
-            continue
-        message = _extract_message(node.args[1])
-        try:
-            condition_code = ast.unparse(node.args[0])
-        except Exception:
-            condition_code = "<complex>"
-        constraints.append(ConstraintInfo(
-            method=current_method or "<unknown>",
-            line_number=node.lineno,
-            message=message,
-            condition_code=condition_code
-        ))
-    return constraints
+def extract_constraints_from_validator() -> List[ConstraintInfo]:
+    """Parse case_validator.py and extract all prohibit() calls."""
+    validator_path = Path(__file__).parent.parent / "case_validator.py"
+    with open(validator_path, 'r', encoding='utf-8') as f:
+        source = f.read()
+    tree = ast.parse(source)
+    validator_class = _find_case_validator_class(tree)
+    if validator_class is None:
+        return []
+    constraints: List[ConstraintInfo] = []
+    for item in validator_class.body:
+        if not isinstance(item, ast.FunctionDef):
+            continue
+        for node in ast.walk(item):
+            if not _is_prohibit_call(node):
+                continue
+            message = _extract_message(node.args[1])
+            try:
+                condition_code = ast.unparse(node.args[0])
+            except Exception:
+                condition_code = "<complex>"
+            constraints.append(ConstraintInfo(
+                method=item.name,
+                line_number=node.lineno,
+                message=message,
+                condition_code=condition_code
+            ))
+    return constraints
if streaming:
    # Streaming mode (-v): print build progress lines as they happen
    cons.print(f" [bold blue]Building[/bold blue] [magenta]{target_name}[/magenta] [dim](-v)[/dim]...")
    start_time = time.time()

    # Read stdout and print matching lines
    for line in iter(process.stdout.readline, ''):
        all_stdout.append(line)
        stripped = line.strip()

        # Try ninja format first: [42/156] Action
        ninja_match = _NINJA_PROGRESS_RE.match(stripped)
        if ninja_match:
            completed = ninja_match.group(1)
            total = ninja_match.group(2)
            action = ninja_match.group(3)
            # Extract filename from action
            parts = action.split()
            if len(parts) >= 3:
                filename = os.path.basename(parts[-1]).replace('.o', '').replace('.obj', '')
                if len(filename) > 40:
                    filename = filename[:37] + "..."
            cons.print(f" [dim][{completed}/{total}][/dim] (unknown)")
            continue

        # Try make format: [ 16%] Action
        make_match = _MAKE_PROGRESS_RE.match(stripped)
        if make_match:
            percent = make_match.group(1)
            action = make_match.group(2)
            # Extract filename from action
            parts = action.split()
            if len(parts) >= 3:
                # Get the last part, which is usually the file path
                obj_path = parts[-1]
                filename = os.path.basename(obj_path).replace('.o', '').replace('.obj', '')
                if len(filename) > 40:
                    filename = filename[:37] + "..."
            cons.print(f" [dim][{percent:>3}%][/dim] (unknown)")

    # Read any remaining stderr
    stderr = process.stderr.read()
    all_stderr.append(stderr)
    process.wait()

    elapsed = time.time() - start_time
    if elapsed > 5:
        cons.print(f" [dim](build took {elapsed:.1f}s)[/dim]")

    return subprocess.CompletedProcess(cmd, process.returncode, ''.join(all_stdout), ''.join(all_stderr))
Suggestion: Prevent a potential deadlock in the build process's streaming mode by handling stdout and stderr concurrently. A simple fix is to redirect stderr to stdout by setting stderr=subprocess.STDOUT in the subprocess.Popen call. [possible issue, importance: 9]
-if streaming:
-    # Streaming mode (-v): print build progress lines as they happen
-    cons.print(f" [bold blue]Building[/bold blue] [magenta]{target_name}[/magenta] [dim](-v)[/dim]...")
-    start_time = time.time()
-    # Read stdout and print matching lines
-    for line in iter(process.stdout.readline, ''):
-        all_stdout.append(line)
-        stripped = line.strip()
-        # Try ninja format first: [42/156] Action
-        ninja_match = _NINJA_PROGRESS_RE.match(stripped)
-        if ninja_match:
-            completed = ninja_match.group(1)
-            total = ninja_match.group(2)
-            action = ninja_match.group(3)
-            # Extract filename from action
-            parts = action.split()
-            if len(parts) >= 3:
-                filename = os.path.basename(parts[-1]).replace('.o', '').replace('.obj', '')
-                if len(filename) > 40:
-                    filename = filename[:37] + "..."
-            cons.print(f" [dim][{completed}/{total}][/dim] (unknown)")
-            continue
-        # Try make format: [ 16%] Action
-        make_match = _MAKE_PROGRESS_RE.match(stripped)
-        if make_match:
-            percent = make_match.group(1)
-            action = make_match.group(2)
-            # Extract filename from action
-            parts = action.split()
-            if len(parts) >= 3:
-                # Get the last part, which is usually the file path
-                obj_path = parts[-1]
-                filename = os.path.basename(obj_path).replace('.o', '').replace('.obj', '')
-                if len(filename) > 40:
-                    filename = filename[:37] + "..."
-            cons.print(f" [dim][{percent:>3}%][/dim] (unknown)")
-    # Read any remaining stderr
-    stderr = process.stderr.read()
-    all_stderr.append(stderr)
-    process.wait()
-    elapsed = time.time() - start_time
-    if elapsed > 5:
-        cons.print(f" [dim](build took {elapsed:.1f}s)[/dim]")
-    return subprocess.CompletedProcess(cmd, process.returncode, ''.join(all_stdout), ''.join(all_stderr))
+if streaming:
+    # Streaming mode (-v): print build progress lines as they happen
+    cons.print(f" [bold blue]Building[/bold blue] [magenta]{target_name}[/magenta] [dim](-v)[/dim]...")
+    start_time = time.time()
+    process = subprocess.Popen(  # pylint: disable=consider-using-with
+        cmd,
+        stdout=subprocess.PIPE,
+        stderr=subprocess.STDOUT,
+        text=True,
+        bufsize=1
+    )
+    for line in iter(process.stdout.readline, ''):
+        all_stdout.append(line)
+        stripped = line.strip()
+        ninja_match = _NINJA_PROGRESS_RE.match(stripped)
+        if ninja_match:
+            completed = ninja_match.group(1)
+            total = ninja_match.group(2)
+            action = ninja_match.group(3)
+            parts = action.split()
+            if len(parts) >= 3:
+                filename = os.path.basename(parts[-1]).replace('.o', '').replace('.obj', '')
+                if len(filename) > 40:
+                    filename = filename[:37] + "..."
+            cons.print(f" [dim][{completed}/{total}][/dim] (unknown)")
+            continue
+        make_match = _MAKE_PROGRESS_RE.match(stripped)
+        if make_match:
+            percent = make_match.group(1)
+            action = make_match.group(2)
+            parts = action.split()
+            if len(parts) >= 3:
+                obj_path = parts[-1]
+                filename = os.path.basename(obj_path).replace('.o', '').replace('.obj', '')
+                if len(filename) > 40:
+                    filename = filename[:37] + "..."
+            cons.print(f" [dim][{percent:>3}%][/dim] (unknown)")
+    process.wait()
+    elapsed = time.time() - start_time
+    if elapsed > 5:
+        cons.print(f" [dim](build took {elapsed:.1f}s)[/dim]")
+    return subprocess.CompletedProcess(cmd, process.returncode, ''.join(all_stdout), ''.join(all_stderr))
def _bash_completion_for_type(comp_type: CompletionType, choices: List[str] = None) -> str:
    """Generate bash completion expression for a completion type."""
    if comp_type == CompletionType.CHOICES and choices:
        return f'COMPREPLY=( $(compgen -W "{" ".join(choices)}" -- "${{cur}}") )'
    return _BASH_COMPLETION_MAP.get(comp_type, "")
Suggestion: Fix a shell injection vulnerability in the generated bash completion script by properly escaping special characters in completion choices. This prevents malformed choices from breaking the script or executing arbitrary code. [security, importance: 8]
-def _bash_completion_for_type(comp_type: CompletionType, choices: List[str] = None) -> str:
-    """Generate bash completion expression for a completion type."""
-    if comp_type == CompletionType.CHOICES and choices:
-        return f'COMPREPLY=( $(compgen -W "{" ".join(choices)}" -- "${{cur}}") )'
-    return _BASH_COMPLETION_MAP.get(comp_type, "")
+def _bash_completion_for_type(comp_type: CompletionType, choices: List[str] = None) -> str:
+    """Generate bash completion expression for a completion type."""
+    if comp_type == CompletionType.CHOICES and choices:
+        safe_choices = [
+            str(c).replace("\\", "\\\\").replace('"', '\\"').replace("$", "\\$").replace("`", "\\`")
+            for c in choices
+        ]
+        return f'COMPREPLY=( $(compgen -W "{" ".join(safe_choices)}" -- "${{cur}}") )'
+    return _BASH_COMPLETION_MAP.get(comp_type, "")
The previous ast.walk() approach could misattribute constraints to the wrong method because ast.walk doesn't guarantee parent-child order. Fixed by iterating through class methods explicitly and walking within each method scope. Co-Authored-By: Claude Opus 4.5 <[email protected]>
1 issue found across 1 file (changes from recent commits).
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them.
<file name="toolchain/mfc/params_tests/coverage.py">
<violation number="1" location="toolchain/mfc/params_tests/coverage.py:46">
P2: There are now two `_find_case_validator_class` definitions in this module; the later one overrides the new version, making the added function dead code and risking confusion about which behavior is used. Consolidate into a single definition (update one and remove the duplicate) so the helper used by all callers is unambiguous.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
The function was defined twice after the AST traversal fix. Removed the old ast.walk version, keeping the better tree.body version. Co-Authored-By: Claude Opus 4.5 <[email protected]>
Actionable comments posted: 9
🤖 Fix all issues with AI agents
In `@docs/documentation/cli-reference.md`:
- Around line 15-41: The anchor links in the CLI quick-reference table (e.g.,
[<code>build</code>](`#build`)) don't match generated heading IDs because headings
include alias text like "### build (alias: `b`)"; update the docs generator to
either emit explicit HTML anchors for each command heading (e.g., <a
id="build">) or change heading generation so alias text is removed from the
heading (move alias into a separate sibling element such as a span or
parentheses line) so that the quick-reference links (e.g., build, run, test)
resolve correctly; modify the generator logic that emits the "### build (alias:
`b`)" headings and the code that creates the quick-reference table links to
ensure consistent IDs.
- Around line 48-52: Update the docs generator/template to emit
"space-separated" (with a hyphen) instead of "Space separated" wherever the
phrase appears in option tables; specifically fix the option descriptions for
flags like `-t`/`--targets` (currently "Space separated list of targets") and
any other occurrences in generated CLI tables so the wording is consistently
hyphenated and uses lowercase "space-separated".
In `@docs/documentation/parameters.md`:
- Around line 17-50: The table's fragment links like "#patch-icpp" don't match
the actual headings "## patch_icpp", causing broken anchors (MD051); fix by
making link fragments match heading IDs or vice versa — e.g., update all table
entries (patch-icpp, patch-ib, fluid-pp, etc.) to use underscores (patch_icpp,
patch_ib, fluid_pp, ...) so they match headings, or alternatively rename the
headings to use hyphens or add explicit HTML anchors (<a name="patch-icpp"></a>)
before the corresponding headings such as patch_icpp to ensure the fragments
resolve.
- Around line 55-61: The generated markdown is using bold/italic lines as
headings ("Initial condition patch parameters" and "1954 parameters") and
inconsistent table pipe spacing in the "Patterns" table which triggers
MD036/MD060; update the doc generator (or generation template) to emit real
heading syntax (e.g., convert the bold/italic lines into proper ATX/Setext
headings such as "## Initial condition patch parameters" and "### 1954
parameters") and normalize table formatting for consistent pipes/spaces in the
"Patterns" table, or alternatively add a generation-time marker/frontmatter to
exclude this file from markdownlint rules (or insert per-file
markdownlint-disable comments for MD036 and MD060) so the linter stops failing.
In `@toolchain/mfc/cli/commands.py`:
- Around line 543-552: The Argument for name "template" currently uses
CompletionType.CHOICES with a hard-coded choices list which causes argparse to
reject inputs like "example:<name>"; remove the hard choices enforcement: delete
the choices=[...] entry and stop using CompletionType.CHOICES (use
CompletionType.TEXT or omit the completion type) so parsing no longer rejects
"example:*" values, and if needed add a lightweight post-parse validation in the
code that handles the "template" value to accept either the minimal names or
strings prefixed with "example:".
In `@toolchain/mfc/params_tests/coverage.py`:
- Around line 46-52: There are two definitions of the function
_find_case_validator_class causing a duplicate-definition error; locate both
occurrences of the function name _find_case_validator_class (the earlier and the
later definition) and remove the earlier duplicate so only a single
_find_case_validator_class implementation remains, ensuring any callers still
reference the retained definition and no redundant imports or references remain.
In `@toolchain/mfc/params/definitions.py`:
- Around line 147-148: The lines calling _r(... ) currently contain multiple
statements per line (e.g., _r(f"{d}_a", REAL, {P}); _r(f"{d}_b", REAL, {P}) and
_r(f"a_{d}", REAL, {P}); _r(f"loops_{d}", INT, {P})), which triggers Ruff E702;
split each semicolon-separated call into its own statement on separate lines so
each _r(...) invocation stands alone (locate the occurrences of _r with names
like f"{d}_a", f"{d}_b", a_{d}, and loops_{d} and place each call on its own
line).
- Around line 114-116: Rename the ambiguous alias O to a clear name like POST
(e.g., change "C, P, S, O = Stage.COMMON, Stage.PRE_PROCESS, Stage.SIMULATION,
Stage.POST_PROCESS" to "C, P, S, POST = Stage.COMMON, Stage.PRE_PROCESS,
Stage.SIMULATION, Stage.POST_PROCESS") and update any usages of O to POST;
remove the unused alias A_INT (drop "A_INT, A_REAL = ParamType.ANALYTIC_INT,
ParamType.ANALYTIC_REAL" and instead keep only "A_REAL =
ParamType.ANALYTIC_REAL" or define A_REAL separately) so there is no
unused-variable (F841) or ambiguous-alias (E741) linting; also remove the
now-unnecessary pylint disable comment if present.
In `@toolchain/mfc/params/descriptions.py`:
- Around line 472-474: In the if "offset" in suffix block replace the no-op
f-string used in the return (currently returning f"Offset parameter" + (f" for
{context}" if context else "")) with a plain string for the static part so Ruff
F541 is avoided; i.e., use "Offset parameter" as the static component and keep
the conditional f-string only for the " for {context}" suffix so the return
becomes a concatenation of the plain "Offset parameter" and the existing (f" for
{context}" if context else "") expression.
🧹 Nitpick comments (2)
toolchain/mfc/params_tests/coverage.py (2)
78-82: Avoid catching a blanket `Exception` around `ast.unparse`. 🔧 Suggested narrowing of exceptions:

 try:
     condition_code = ast.unparse(node.args[0])
-except Exception:
+except (ValueError, TypeError):
     condition_code = "<complex>"
275-291: Make the `Optional` type explicit per PEP 484. 🧷 Typing fix:

-def save_coverage_report(output_path: Path = None):
+def save_coverage_report(output_path: Path | None = None):
Argument(
    name="template",
    short="t",
    help="Template to use (e.g., 1D_minimal, 2D_minimal, 3D_minimal, or example:<name>).",
    default="1D_minimal",
    completion=Completion(
        type=CompletionType.CHOICES,
        choices=["1D_minimal", "2D_minimal", "3D_minimal"],
    ),
),
new --template choices reject example:* templates.
Argparse choices only permits the three minimal templates, but help/examples advertise example:<name>. This will reject valid inputs at parse time. Consider removing choices (keep completion hints) or expanding validation to accept example: prefixes.
🛠️ Suggested fix (keep completion hints, remove hard choices)
Argument(
name="template",
short="t",
help="Template to use (e.g., 1D_minimal, 2D_minimal, 3D_minimal, or example:<name>).",
default="1D_minimal",
- completion=Completion(
- type=CompletionType.CHOICES,
- choices=["1D_minimal", "2D_minimal", "3D_minimal"],
- ),
+ completion=Completion(
+ type=CompletionType.CHOICES,
+ choices=["1D_minimal", "2D_minimal", "3D_minimal"],
+ ),
),🤖 Prompt for AI Agents
In `@toolchain/mfc/cli/commands.py` around lines 543 - 552, The Argument for name
"template" currently uses CompletionType.CHOICES with a hard-coded choices list
which causes argparse to reject inputs like "example:<name>"; remove the hard
choices enforcement: delete the choices=[...] entry and stop using
CompletionType.CHOICES (use CompletionType.TEXT or omit the completion type) so
parsing no longer rejects "example:*" values, and if needed add a lightweight
post-parse validation in the code that handles the "template" value to accept
either the minimal names or strings prefixed with "example:".
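The "lightweight post-parse validation" the prompt asks for can be sketched in a few lines. This is a hedged illustration, not MFC code — the template names come from the help text above, and the example name in the demo call is hypothetical:

```python
VALID_TEMPLATES = {"1D_minimal", "2D_minimal", "3D_minimal"}

def validate_template(value: str) -> str:
    """Accept a built-in minimal template or an example:<name> reference."""
    if value in VALID_TEMPLATES or value.startswith("example:"):
        return value
    raise ValueError(
        f"unknown template {value!r}; expected one of {sorted(VALID_TEMPLATES)} "
        "or example:<name>"
    )

# Hypothetical example name, for illustration only.
print(validate_template("example:some_case"))
```

Used as an `argparse` `type=` callback, this keeps tab-completion hints for the three minimal templates while still admitting `example:*` values at parse time.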
- Rename ambiguous O alias to POST in definitions.py (Ruff E741) - Remove unused A_INT variable (Ruff F841) - Split multiple statements per line (Ruff E702) - Remove no-op f-string prefix in descriptions.py (Ruff F541) - Fix markdown anchor links in doc generators - Hyphenate "space-separated" in help text Regenerated completion scripts and documentation. Co-Authored-By: Claude Opus 4.5 <[email protected]>
|
/improve |
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@toolchain/completions/_mfc`:
- Around line 58-141: The completion specs treat many options as flags instead
of value-taking; update the _arguments entries (both the general/multi-target
blocks and the run) so each option that requires a value includes a colon and a
completion action (e.g., change '-j[Allows for JOBS concurrent jobs.]' and
'--jobs[...]' to '-j:jobs:' and '--jobs:jobs:' or '-j:jobs:_guard "[1-9]*" ' for
numeric hints, change '-g/--gpus' to '-g:gpus:' and '--gpus:gpus:_files or
_comma_separated_values', change '-p/--partition', '-n/--tasks-per-node',
'-w/--walltime', '-a/--account', '-@/--email', '-#/--name',
'-q/--quality_of_service', '-N/--nodes', and '-o/--output-summary' similarly),
making sure to edit the entries inside the _arguments calls for the b) and run)
cases so Zsh will expect and complete their arguments.
In `@toolchain/mfc/params/generators/docs_gen.py`:
- Around line 72-85: The generator mistakenly treats REGISTRY.all_params as an
attribute; replace usages that call REGISTRY.all_params (e.g., in docs_gen.py
where you build the intro string and where you iterate with for name, param in
sorted(REGISTRY.all_params.items()): and any other places around lines
referenced) with a call to the method REGISTRY.all_params() so you iterate over
the returned mapping (e.g., sorted(REGISTRY.all_params().items())) and compute
len(REGISTRY.all_params()) when building the header; ensure the same change is
applied to the other occurrences (including the block that groups by family
using _get_family).
🧹 Nitpick comments (1)
toolchain/mfc/params/generators/docs_gen.py (1)
15-15: Remove the unused `noqa` directive to satisfy RUF100. `# noqa: F401` isn't suppressing anything and is flagged by Ruff. Keep the import for side effects, but drop the unused noqa. 🧹 Minimal clean-up:

-from .. import definitions  # noqa: F401 pylint: disable=unused-import
+from .. import definitions  # pylint: disable=unused-import
    '-j[Allows for JOBS concurrent jobs.]' \
    '--jobs[Allows for JOBS concurrent jobs.]' \
    '-v[Increase verbosity level. Use -v, -vv, or -vvv for more detail.]' \
    '--verbose[Increase verbosity level. Use -v, -vv, or -vvv for more detail.]' \
    '-d[Enable debug logging for troubleshooting.]' \
    '--debug-log[Enable debug logging for troubleshooting.]' \
    '-i[(GPU Optimization) Build a version of MFC optimized for a case.]:_files -g "*.py"' \
    '--input[(GPU Optimization) Build a version of MFC optimized for a case.]:_files -g "*.py"' \
    '--case-optimization[(GPU Optimization) Compile MFC targets with some case parameters hard-coded (requires --input).]'
  ;;
b)
  _arguments \
    '-t[Space-separated list of targets to act upon.]:(fftw hdf5 silo lapack hipfort pre_process simulation post_process syscheck documentation)' \
    '--targets[Space-separated list of targets to act upon.]:(fftw hdf5 silo lapack hipfort pre_process simulation post_process syscheck documentation)' \
    '--mpi[Enable MPI]' \
    '--no-mpi[Disable MPI]' \
    '--gpu[Enable GPU]:mode:(acc mp)' \
    '--no-gpu[Disable GPU]' \
    '--debug[Enable debug mode]' \
    '--no-debug[Disable debug mode]' \
    '-j[Allows for JOBS concurrent jobs.]' \
    '--jobs[Allows for JOBS concurrent jobs.]' \
    '-v[Increase verbosity level. Use -v, -vv, or -vvv for more detail.]' \
    '--verbose[Increase verbosity level. Use -v, -vv, or -vvv for more detail.]' \
    '-d[Enable debug logging for troubleshooting.]' \
    '--debug-log[Enable debug logging for troubleshooting.]' \
    '-i[(GPU Optimization) Build a version of MFC optimized for a case.]:_files -g "*.py"' \
    '--input[(GPU Optimization) Build a version of MFC optimized for a case.]:_files -g "*.py"' \
    '--case-optimization[(GPU Optimization) Compile MFC targets with some case parameters hard-coded (requires --input).]'
  ;;
run)
  _arguments \
    '1:Input file to run.:_files -g "*.py"' \
    '-t[Space-separated list of targets to act upon.]:(fftw hdf5 silo lapack hipfort pre_process simulation post_process syscheck documentation)' \
    '--targets[Space-separated list of targets to act upon.]:(fftw hdf5 silo lapack hipfort pre_process simulation post_process syscheck documentation)' \
    '--mpi[Enable MPI]' \
    '--no-mpi[Disable MPI]' \
    '--gpu[Enable GPU]:mode:(acc mp)' \
    '--no-gpu[Disable GPU]' \
    '--debug[Enable debug mode]' \
    '--no-debug[Disable debug mode]' \
    '-j[Allows for JOBS concurrent jobs.]' \
    '--jobs[Allows for JOBS concurrent jobs.]' \
    '-v[Increase verbosity level. Use -v, -vv, or -vvv for more detail.]' \
    '--verbose[Increase verbosity level. Use -v, -vv, or -vvv for more detail.]' \
    '-d[Enable debug logging for troubleshooting.]' \
    '--debug-log[Enable debug logging for troubleshooting.]' \
    '-g[(Optional GPU override) List of GPU #s to use (environment default if unspecified).]' \
    '--gpus[(Optional GPU override) List of GPU #s to use (environment default if unspecified).]' \
    '-e[Job execution/submission engine choice.]:(interactive batch)' \
    '--engine[Job execution/submission engine choice.]:(interactive batch)' \
    '-p[(Batch) Partition for job submission.]' \
    '--partition[(Batch) Partition for job submission.]' \
    '-q[(Batch) Quality of Service for job submission.]' \
    '--quality_of_service[(Batch) Quality of Service for job submission.]' \
    '-N[(Batch) Number of nodes.]' \
    '--nodes[(Batch) Number of nodes.]' \
    '-n[Number of tasks per node.]' \
    '--tasks-per-node[Number of tasks per node.]' \
    '-w[(Batch) Walltime.]' \
    '--walltime[(Batch) Walltime.]' \
    '-a[(Batch) Account to charge.]' \
    '--account[(Batch) Account to charge.]' \
    '-@[(Batch) Email for job notification.]' \
    '--email[(Batch) Email for job notification.]' \
    '-#[(Batch) Job name.]' \
    '--name[(Batch) Job name.]' \
    '-s[Build from scratch.]' \
    '--scratch[Build from scratch.]' \
    '-b[(Interactive) Override MPI execution binary]:(mpirun jsrun srun mpiexec)' \
    '--binary[(Interactive) Override MPI execution binary]:(mpirun jsrun srun mpiexec)' \
    '--dry-run[(Batch) Run without submitting batch file.]' \
    '--case-optimization[(GPU Optimization) Compile MFC targets with some case parameters hard-coded.]' \
    '--no-build[(Testing) Do not rebuild MFC.]' \
    '--wait[(Batch) Wait for the job to finish.]' \
    '-c[(Batch) Path to a custom submission file template or one of the built-in templates.]:(bridges2 carpenter carpenter-cray default delta deltaai frontier hipergator nautilus oscar phoenix phoenix-bench santis tuo)' \
    '--computer[(Batch) Path to a custom submission file template or one of the built-in templates.]:(bridges2 carpenter carpenter-cray default delta deltaai frontier hipergator nautilus oscar phoenix phoenix-bench santis tuo)' \
    '-o[Output file (YAML) for summary.]' \
    '--output-summary[Output file (YAML) for summary.]' \
    '--clean[Clean the case before running.]' \
    '--ncu[Profile with NVIDIA Nsight Compute.]' \
    '--nsys[Profile with NVIDIA Nsight Systems.]' \
    '--rcu[Profile with ROCM rocprof-compute.]' \
    '--rsys[Profile with ROCM rocprof-systems.]'
🧩 Analysis chain
🏁 Script executed:
# First, let's find and examine the actual file
git ls-files | grep -E "completions.*_mfc"
Repository: MFlowCode/MFC
Length of output: 83
🌐 Web query:
Zsh _arguments syntax for options that require values argument specification
💡 Result:
In Zsh completion, _arguments option specs (“optspecs”) use colons to declare option arguments:
1) Basic form (option with a required value)
_arguments \
'-o[output file]:file:_files'
General shape:
'optspec:message:action'
- The presence of the first `:` after optspec means "this option takes an argument". (zsh.sourceforge.io)
- `:message:action` = mandatory argument. (zsh.sourceforge.io)
2) Optional value for an option
_arguments \
'--color[use color]::when:(auto always never)'
`::message:action` = optional argument. (zsh.sourceforge.io)
3) Controlling how the value is attached (next word vs same word vs =)
These are suffixes on the option name inside optspec: (zsh.sourceforge.io)
-
Next word only (default):
_arguments '-o[output]:file:_files' # -o FILE
-
Same word only (immediately after option name):
_arguments '-I-[include dir]:dir:_files -/' # e.g. -I/usr/include
-
Either same word or next word:
_arguments '-I+[include dir]:dir:_files -/' # -I/usr/include OR -I /usr/include
-
Either `--opt=VAL` or `--opt VAL`:
_arguments '--port=[port]:port: '   # --port=123 OR --port 123
-
Must be `--opt=VAL` (equals required):
_arguments '--port=-[port]:port: '  # only --port=123
4) Multiple values for one option
Repeat :message:action segments to declare multiple arguments: (zsh.sourceforge.io)
_arguments \
'--range[first]:from:' \
    ':to:'
If you tell me an example CLI (e.g. --port 123, -I/usr/include, --define=K=V), I can write the exact _arguments line(s) for it.
Citations:
- https://zsh.sourceforge.io/Doc/Release/Completion-System.html
🏁 Script executed:
# Read the actual file to see current state
head -141 toolchain/completions/_mfc | tail -84
Repository: MFlowCode/MFC
Length of output: 6955
Declare argument specs for value-taking options (e.g., -j/--jobs, -g/--gpus, -p/--partition, -n/--tasks-per-node, -w/--walltime, -a/--account, -@/--email, -#/--name, -q/--quality_of_service, -N/--nodes, -o/--output-summary).
Without :message:action specs, Zsh treats these as flags and completion won't expect a value. According to Zsh documentation, options requiring arguments must include at least one colon followed by a completion action (e.g., :jobs: or :partition:_files).
♻️ Example fix for one occurrence
- '-j[Allows for JOBS concurrent jobs.]' \
- '--jobs[Allows for JOBS concurrent jobs.]' \
+ '-j[Allows for JOBS concurrent jobs.]:jobs:' \
+ '--jobs[Allows for JOBS concurrent jobs.]:jobs:' \
🤖 Prompt for AI Agents
In `@toolchain/completions/_mfc` around lines 58 - 141, The completion specs treat
many options as flags instead of value-taking; update the _arguments entries
(both the general/multi-target blocks and the run) so each option that requires
a value includes a colon and a completion action (e.g., change '-j[Allows for
JOBS concurrent jobs.]' and '--jobs[...]' to '-j:jobs:' and '--jobs:jobs:' or
'-j:jobs:_guard "[1-9]*" ' for numeric hints, change '-g/--gpus' to '-g:gpus:'
and '--gpus:gpus:_files or _comma_separated_values', change '-p/--partition',
'-n/--tasks-per-node', '-w/--walltime', '-a/--account', '-@/--email',
'-#/--name', '-q/--quality_of_service', '-N/--nodes', and '-o/--output-summary'
similarly), making sure to edit the entries inside the _arguments calls for the
b) and run) cases so Zsh will expect and complete their arguments.
        f"MFC supports **{len(REGISTRY.all_params):,}** case parameters organized into families.",
        "",
        "**Quick search:** Use `./mfc.sh params <query>` to search parameters from the command line.",
        "",
        "## Parameter Families",
        "",
    ]

    # Group parameters by family
    families: Dict[str, List[Tuple[str, any]]] = defaultdict(list)
    for name, param in sorted(REGISTRY.all_params.items()):
        family = _get_family(name)
        families[family].append((name, param))
Call REGISTRY.all_params() — current usage will break doc generation.
Line 72, Line 82, and Line 234 treat all_params as an attribute, but it’s a method in registry.py. This will raise at runtime (e.g., method has no items()), breaking the generator.
🐛 Fix: invoke the method
- f"MFC supports **{len(REGISTRY.all_params):,}** case parameters organized into families.",
+ f"MFC supports **{len(REGISTRY.all_params()):,}** case parameters organized into families.",
@@
- for name, param in sorted(REGISTRY.all_params.items()):
+ for name, param in sorted(REGISTRY.all_params().items()):
@@
- return len(REGISTRY.all_params)
+ return len(REGISTRY.all_params())
Also applies to: 231-234
🤖 Prompt for AI Agents
In `@toolchain/mfc/params/generators/docs_gen.py` around lines 72 - 85, The
generator mistakenly treats REGISTRY.all_params as an attribute; replace usages
that call REGISTRY.all_params (e.g., in docs_gen.py where you build the intro
string and where you iterate with for name, param in
sorted(REGISTRY.all_params.items()): and any other places around lines
referenced) with a call to the method REGISTRY.all_params() so you iterate over
the returned mapping (e.g., sorted(REGISTRY.all_params().items())) and compute
len(REGISTRY.all_params()) when building the header; ensure the same change is
applied to the other occurrences (including the block that groups by family
using _get_family).
CASE_OPTIMIZATION = _get_case_optimization_params()

# Build JSON schema for validation
_properties = {k: v.value for k, v in ALL.items()}
Suggestion: Use getattr(v, "value", v) to handle cases where parameter types are represented as raw dictionaries instead of ParamType enums, preventing a potential AttributeError during schema generation. [possible issue, importance: 6]
-_properties = {k: v.value for k, v in ALL.items()}
+_properties = {k: getattr(v, "value", v) for k, v in ALL.items()}
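As a quick illustration of why the fallback matters, here is a standalone sketch (the `ParamType` enum and `ALL` dict below are invented stand-ins, not the MFC definitions): `getattr` returns the raw dict unchanged when the value has no `.value` attribute.

```python
from enum import Enum

class ParamType(Enum):
    INT = "integer"

# A mix of Enum members and raw dict specs, as described in the review
ALL = {"m": ParamType.INT, "x": {"type": "number"}}

# getattr(v, "value", v) unwraps Enum members and passes dicts through
props = {k: getattr(v, "value", v) for k, v in ALL.items()}
print(props)
```

The plain `v.value` comprehension would raise AttributeError on the dict entry; the `getattr` form degrades gracefully.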
desc = re.sub(r"(species|fluid|component|direction) 2", r"\1 M", desc)
desc = re.sub(r"component 2", "component M", desc)
# Then handle primary index (1 -> N)
desc = re.sub(r"(patch|fluid|IB patch|source|probe|region|species|direction|component) 1", r"\1 N", desc)
# Generic fallback
desc = re.sub(r" 1([,\s]|$)", r" N\1", desc)
desc = re.sub(r" 2([,\s]|$)", r" M\1", desc)
desc = re.sub(r" 3([,\s]|$)", r" K\1", desc)
Suggestion: Use word boundaries (\b) in the regex substitutions within get_pattern_description to ensure only whole numbers are replaced, preventing incorrect replacements in multi-digit numbers. [possible issue, importance: 7]
-desc = re.sub(r"(species|fluid|component|direction) 2", r"\1 M", desc)
-desc = re.sub(r"component 2", "component M", desc)
-# Then handle primary index (1 -> N)
-desc = re.sub(r"(patch|fluid|IB patch|source|probe|region|species|direction|component) 1", r"\1 N", desc)
-# Generic fallback
-desc = re.sub(r" 1([,\s]|$)", r" N\1", desc)
-desc = re.sub(r" 2([,\s]|$)", r" M\1", desc)
-desc = re.sub(r" 3([,\s]|$)", r" K\1", desc)
+desc = re.sub(r"\b(species|fluid|component|direction)\s+2\b", r"\1 M", desc)
+# Then handle primary index (1 -> N)
+desc = re.sub(r"\b(patch|fluid|IB patch|source|probe|region|species|direction|component)\s+1\b", r"\1 N", desc)
+# Generic fallback
+desc = re.sub(r"\b1\b(?=[,\s]|$)", "N", desc)
+desc = re.sub(r"\b2\b(?=[,\s]|$)", "M", desc)
+desc = re.sub(r"\b3\b(?=[,\s]|$)", "K", desc)
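A standalone sketch of the failure mode the word boundaries guard against (the `desc` string is invented for illustration):

```python
import re

desc = "fluid 23 settings"
# Without a trailing boundary, "fluid 2" also matches inside "fluid 23"
naive = re.sub(r"(fluid) 2", r"\1 M", desc)
# With \b, the multi-digit index is left untouched
bounded = re.sub(r"\b(fluid)\s+2\b", r"\1 M", desc)
print(naive, "|", bounded)  # fluid M3 settings | fluid 23 settings
```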
for n in ["dt", "cfl_target", "cfl_max", "t_tol", "weno_eps", "teno_CT", "wenoz_q",
          "adap_dt_tol", "t_stop", "t_save", "tau_star", "cont_damage_s", "alpha_bar",
          "alf_factor", "ic_eps", "ic_beta"]:
    _r(n, REAL, {S})
for n in ["mapped_weno", "wenoz", "teno", "weno_Re_flux", "weno_avg", "mp_weno",
          "run_time_info", "prim_vars_wrt", "alt_soundspeed", "null_weights",
          "mixture_err", "probe_wrt", "acoustic_source", "integral_wrt", "rdma_mpi",
          "adap_dt", "powell", "igr_pres_lim", "int_comp", "nv_uvm_out_of_core",
          "nv_uvm_pref_gpu"]:
    _r(n, LOG, {S})
# cfl_target also POST_PROCESS
_r("cfl_target", REAL, {POST})

# Body force (SIM)
for d in ["x", "y", "z"]:
    for v in ["k", "w", "p", "g"]:
        _r(f"{v}_{d}", REAL, {S})
    _r(f"bf_{d}", LOG, {S})

# === POST_PROCESS ===
for n in ["format", "flux_lim", "t_step_start", "t_step_stop", "t_step_save"]:
    _r(n, INT, {POST})
for d in ["x", "y", "z"]:
    _r(f"bc_{d}%beg", INT, {POST})
    _r(f"bc_{d}%end", INT, {POST})
for n in ["schlieren_alpha"]:
    _r(n, REAL, {POST})
for n in ["schlieren_wrt", "alpha_rho_wrt", "rho_wrt", "mom_wrt", "vel_wrt",
          "flux_wrt", "E_wrt", "pres_wrt", "alpha_wrt", "kappa_wrt", "gamma_wrt",
          "heat_ratio_wrt", "pi_inf_wrt", "pres_inf_wrt", "cons_vars_wrt", "c_wrt",
          "omega_wrt", "qm_wrt", "liutex_wrt", "cf_wrt", "sim_data", "output_partial_domain"]:
    _r(n, LOG, {POST})
for d in ["x", "y", "z"]:
    _r(f"{d}_output%beg", REAL, {POST})
    _r(f"{d}_output%end", REAL, {POST})
for n in ["prim_vars_wrt", "alt_soundspeed", "mixture_err", "t_stop", "t_save", "fd_order"]:
    _r(n, LOG if "wrt" in n or n in ["alt_soundspeed", "mixture_err"] else (INT if n == "fd_order" else REAL), {POST})
Suggestion: Refactor parameter registration to avoid duplicate entries for cfl_target and prim_vars_wrt. Register each parameter only once with the combined set of all applicable stages. [possible issue, importance: 7]
Suggested replacement (each parameter registered once with its full stage set; elided sections unchanged):
for n in ["dt", "cfl_target", "cfl_max", "t_tol", "weno_eps", "teno_CT", "wenoz_q",
          "adap_dt_tol", "t_stop", "t_save", "tau_star", "cont_damage_s", "alpha_bar",
          "alf_factor", "ic_eps", "ic_beta"]:
    _r(n, REAL, {S} | ({POST} if n == "cfl_target" else set()))
...
for n in ["prim_vars_wrt", "alt_soundspeed", "mixture_err", "t_stop", "t_save", "fd_order"]:
    if n != "prim_vars_wrt":
        _r(n, LOG if "wrt" in n or n in ["alt_soundspeed", "mixture_err"] else (INT if n == "fd_order" else REAL), {POST})
cmd = f"./mfc.sh new {name} -t {template}"
cons.print()
cons.print(f"[dim]Running: {cmd}[/dim]")
cons.print()
os.system(cmd)
Suggestion: Replace os.system() calls with subprocess.run() and pass command arguments as a list to prevent shell injection vulnerabilities from user input in the interactive mode. [security, importance: 9]
-cmd = f"./mfc.sh new {name} -t {template}"
-cons.print()
-cons.print(f"[dim]Running: {cmd}[/dim]")
-cons.print()
-os.system(cmd)
+import subprocess
+...
+cmd = ["./mfc.sh", "new", name, "-t", template]
+cons.print()
+cons.print(f"[dim]Running: {' '.join(cmd)}[/dim]")
+cons.print()
+subprocess.run(cmd, check=False)
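A minimal sketch of why list-style argv defeats injection, using `/bin/echo` as a stand-in for `./mfc.sh` (assumes a POSIX system; the hostile `name` is invented for illustration):

```python
import subprocess

name = "case; echo pwned"  # hostile input stays a single argv element
cmd = ["/bin/echo", "new", name]
# No shell is involved, so ';' is just a character in an argument
result = subprocess.run(cmd, capture_output=True, text=True, check=False)
print(result.stdout.strip())  # new case; echo pwned
```

With `os.system(f"./mfc.sh new {name}")`, the same input would run a second shell command.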
m = self.get('m')
if m is not None and m > 0:
    self.prohibit(not self.is_set('x_domain%beg'),
                  "x_domain%beg must be set when m > 0")
    self.prohibit(not self.is_set('x_domain%end'),
                  "x_domain%end must be set when m > 0")
Suggestion: Add a type check for the parameter m before performing a numeric comparison. This prevents the validator from crashing with a TypeError if m is not a number and instead provides a user-friendly error message. [possible issue, importance: 7]
-m = self.get('m')
-if m is not None and m > 0:
-    self.prohibit(not self.is_set('x_domain%beg'),
-                  "x_domain%beg must be set when m > 0")
-    self.prohibit(not self.is_set('x_domain%end'),
-                  "x_domain%end must be set when m > 0")
+# Required domain parameters when m > 0
+m = self.get('m')
+if m is not None and (not isinstance(m, (int, float)) or isinstance(m, bool)):
+    self.errors.append(f"m must be numeric, got '{m}'")
+elif m is not None and m > 0:
+    self.prohibit(not self.is_set('x_domain%beg'),
+                  "x_domain%beg must be set when m > 0")
+    self.prohibit(not self.is_set('x_domain%end'),
+                  "x_domain%end must be set when m > 0")
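The bool exclusion is needed because `isinstance(True, int)` is `True` in Python; a tiny standalone check (`is_numeric` is illustrative, not MFC code):

```python
def is_numeric(x):
    # bool is a subclass of int, so True/False must be rejected explicitly
    return isinstance(x, (int, float)) and not isinstance(x, bool)

print([is_numeric(v) for v in (3, 2.5, True, "3", None)])
# [True, True, False, False, False]
```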
if streaming:
    # Streaming mode (-v): print build progress lines as they happen
    cons.print(f"  [bold blue]Building[/bold blue] [magenta]{target_name}[/magenta] [dim](-v)[/dim]...")
    start_time = time.time()

    # Read stdout and print matching lines
    for line in iter(process.stdout.readline, ''):
        all_stdout.append(line)
        stripped = line.strip()

        # Try ninja format first: [42/156] Action
        ninja_match = _NINJA_PROGRESS_RE.match(stripped)
        if ninja_match:
            completed = ninja_match.group(1)
            total = ninja_match.group(2)
            action = ninja_match.group(3)
            # Extract filename from action
            parts = action.split()
            if len(parts) >= 3:
                filename = os.path.basename(parts[-1]).replace('.o', '').replace('.obj', '')
                if len(filename) > 40:
                    filename = filename[:37] + "..."
            cons.print(f"    [dim][{completed}/{total}][/dim] (unknown)")
            continue

        # Try make format: [ 16%] Action
        make_match = _MAKE_PROGRESS_RE.match(stripped)
        if make_match:
            percent = make_match.group(1)
            action = make_match.group(2)
            # Extract filename from action
            parts = action.split()
            if len(parts) >= 3:
                # Get the last part which is usually the file path
                obj_path = parts[-1]
                filename = os.path.basename(obj_path).replace('.o', '').replace('.obj', '')
                if len(filename) > 40:
                    filename = filename[:37] + "..."
            cons.print(f"    [dim][{percent:>3}%][/dim] (unknown)")

    # Read any remaining stderr
    stderr = process.stderr.read()
    all_stderr.append(stderr)
    process.wait()

    elapsed = time.time() - start_time
    if elapsed > 5:
        cons.print(f"  [dim](build took {elapsed:.1f}s)[/dim]")

return subprocess.CompletedProcess(cmd, process.returncode, ''.join(all_stdout), ''.join(all_stderr))
Suggestion: Avoid a potential deadlock in streaming mode by redirecting stderr to stdout in the subprocess.Popen call. This prevents the process from hanging if the stderr buffer fills up. [possible issue, importance: 9]
Suggested replacement (stderr merged into stdout so a single read loop drains both):
if streaming:
    # Streaming mode (-v): print build progress lines as they happen
    cons.print(f"  [bold blue]Building[/bold blue] [magenta]{target_name}[/magenta] [dim](-v)[/dim]...")
    start_time = time.time()
    # Re-launch process with stderr merged into stdout to avoid pipe deadlocks
    process = subprocess.Popen(  # pylint: disable=consider-using-with
        cmd,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        text=True,
        bufsize=1
    )
    for line in iter(process.stdout.readline, ''):
        all_stdout.append(line)
        stripped = line.strip()
        ninja_match = _NINJA_PROGRESS_RE.match(stripped)
        if ninja_match:
            completed = ninja_match.group(1)
            total = ninja_match.group(2)
            action = ninja_match.group(3)
            parts = action.split()
            if len(parts) >= 3:
                filename = os.path.basename(parts[-1]).replace('.o', '').replace('.obj', '')
                if len(filename) > 40:
                    filename = filename[:37] + "..."
            cons.print(f"    [dim][{completed}/{total}][/dim] (unknown)")
            continue
        make_match = _MAKE_PROGRESS_RE.match(stripped)
        if make_match:
            percent = make_match.group(1)
            action = make_match.group(2)
            parts = action.split()
            if len(parts) >= 3:
                obj_path = parts[-1]
                filename = os.path.basename(obj_path).replace('.o', '').replace('.obj', '')
                if len(filename) > 40:
                    filename = filename[:37] + "..."
            cons.print(f"    [dim][{percent:>3}%][/dim] (unknown)")
    process.wait()
    elapsed = time.time() - start_time
    if elapsed > 5:
        cons.print(f"  [dim](build took {elapsed:.1f}s)[/dim]")
    return subprocess.CompletedProcess(cmd, process.returncode, ''.join(all_stdout), '')
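A minimal standalone sketch of the merged-stream pattern (the child process below is hypothetical): with `stderr=subprocess.STDOUT`, one read loop drains both streams, so neither pipe buffer can fill up unread and block the child.

```python
import subprocess
import sys

# A child that writes to both streams; -u makes writes unbuffered so
# their order on the shared pipe matches program order.
child = [sys.executable, "-u", "-c",
         "import sys; print('out'); print('err', file=sys.stderr)"]
proc = subprocess.Popen(child, stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT, text=True)
lines = [line.strip() for line in proc.stdout]  # single loop sees both
proc.wait()
print(lines)  # ['out', 'err']
```

With two separate pipes and only one of them being read, a chatty child can fill the unread pipe's OS buffer and stall forever.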
failed_tests = []  # Track failed test details for summary
test_start_time = None  # Track overall test duration
Suggestion: Introduce a threading.Lock to protect shared global variables like failed_tests, nFAIL, and errors from concurrent modification by worker threads. This prevents race conditions and ensures consistent test results. [possible issue, importance: 8]
 failed_tests = []  # Track failed test details for summary
 test_start_time = None  # Track overall test duration
+_results_lock = threading.Lock()
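A standalone sketch of the locking pattern (`record_failure` and the worker setup are illustrative, not the MFC test harness):

```python
import threading

failed_tests = []                 # shared across worker threads
_results_lock = threading.Lock()

def record_failure(name: str) -> None:
    # Mutate shared state only while holding the lock so concurrent
    # workers cannot interleave updates.
    with _results_lock:
        failed_tests.append(name)

threads = [threading.Thread(target=record_failure, args=(f"test_{i}",))
           for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(failed_tests))  # 8
```

The same lock would also guard counters like `nFAIL` and the shared `errors` list, so a summary read after `join()` is consistent.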
def _bash_completion_for_type(comp_type: CompletionType, choices: List[str] = None) -> str:
    """Generate bash completion expression for a completion type."""
    if comp_type == CompletionType.CHOICES and choices:
        return f'COMPREPLY=( $(compgen -W "{" ".join(choices)}" -- "${{cur}}") )'
    return _BASH_COMPLETION_MAP.get(comp_type, "")
Suggestion: Escape choice values when generating bash completion scripts to prevent shell injection vulnerabilities. Unescaped values with special characters can break the script or lead to unintended code execution. [security, importance: 8]
 def _bash_completion_for_type(comp_type: CompletionType, choices: List[str] = None) -> str:
     """Generate bash completion expression for a completion type."""
     if comp_type == CompletionType.CHOICES and choices:
-        return f'COMPREPLY=( $(compgen -W "{" ".join(choices)}" -- "${{cur}}") )'
+        safe_choices = [
+            str(c).replace("\\", "\\\\").replace('"', '\\"').replace("\n", " ").strip()
+            for c in choices
+        ]
+        return f'COMPREPLY=( $(compgen -W "{" ".join(safe_choices)}" -- "${{cur}}") )'
     return _BASH_COMPLETION_MAP.get(comp_type, "")
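A standalone sketch of the escaping helper's behavior (`escape_choices` mirrors the suggested list comprehension but is otherwise hypothetical):

```python
def escape_choices(choices):
    # Backslashes first, then quotes, so escapes are not double-processed;
    # newlines would otherwise terminate the compgen word list.
    return [str(c).replace("\\", "\\\\").replace('"', '\\"').replace("\n", " ").strip()
            for c in choices]

words = escape_choices(['plain', 'has"quote', 'back\\slash'])
print(" ".join(words))
```

Each transformed word can now be embedded inside the double-quoted `-W "…"` argument without closing the quote early.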
1 issue found across 1 file (changes from recent commits).
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them.
<file name=".github/workflows/lint-toolchain.yml">
<violation number="1" location=".github/workflows/lint-toolchain.yml:39">
P2: The multiline `python3 -c` block is indented inside the quoted string, which triggers `IndentationError: unexpected indent` in Python. Use a heredoc or remove the indentation so the CI step runs.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
Codecov Report
✅ All modified and coverable lines are covered by tests.
Additional details and impacted files

@@           Coverage Diff            @@
##           master    #1118   +/-   ##
=======================================
  Coverage   44.16%   44.16%
=======================================
  Files          71       71
  Lines       20417    20417
  Branches     1991     1991
=======================================
  Hits         9018     9018
  Misses      10252    10252
  Partials     1147     1147

☔ View full report in Codecov by Sentry.
New tests added:
- CLI smoke tests (verify commands don't crash)
- Parameter registry validation (3400+ params, matches case_dicts)
- Bash completion syntax check
- Example case validation
These catch toolchain regressions early without requiring full builds.
Co-Authored-By: Claude Opus 4.5 <[email protected]>
CodeAnt AI is running Incremental review.
# Parameters that can be hard-coded for GPU case optimization (from registry)
CASE_OPTIMIZATION = _get_case_optimization_params()
Suggestion: When analytic parameter types are present, some entries in the combined dictionary are raw dicts instead of Enum members, so iterating with v.value will raise an AttributeError at import time, breaking schema construction; you should handle both Enum and dict values when building the JSON schema. [type error]
Severity Level: Critical 🚨
- ❌ Module import fails during validation/schema initialization.
- ❌ get_validator() unavailable breaking parameter validation.
- ⚠️ CLI commands using case validation error at startup.
- ⚠️ Any test that imports registry triggers import error.
Suggested change:
_properties = {
    k: (v.value if isinstance(v, ParamType) else v)
    for k, v in ALL.items()
}
Steps of Reproduction ✅
1. Import the case dicts module: run `python -c "import toolchain.mfc.run.case_dicts"`
(module file at toolchain/mfc/run/case_dicts.py). This runs the top-level code that calls
`_load_from_registry()` (see toolchain/mfc/run/case_dicts.py:43-58).
2. `_load_from_registry()` in toolchain/mfc/run/case_dicts.py (lines 43-58) calls
`get_stage_dict()` from toolchain/mfc/params/generators/case_dicts_gen.py (function
defined at toolchain/mfc/params/generators/case_dicts_gen.py:15-46). That generator maps
analytic registry types to class attributes like `CDParamType._ANALYTIC_INT` (see the
`type_map` in toolchain/mfc/params/generators/case_dicts_gen.py:15-46).
3. Because `CDParamType._ANALYTIC_INT` and `_ANALYTIC_REAL` are class-level dicts in
ParamType (not Enum members), the combined ALL dict produced at module import contains a
mix of ParamType Enum members and raw dicts.
4. Module-level code in toolchain/mfc/run/case_dicts.py then executes the comprehension
`_properties = {k: v.value for k, v in ALL.items()}` (at
toolchain/mfc/run/case_dicts.py:81), which attempts to access `.value` on a plain dict and
raises AttributeError: 'dict' object has no attribute 'value'. The import fails
immediately with a traceback pointing to that line.
Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** toolchain/mfc/run/case_dicts.py
**Line:** 81:81
**Comment:**
*Type Error: When analytic parameter types are present, some entries in the combined dictionary are raw dicts instead of Enum members, so iterating with `v.value` will raise an AttributeError at import time, breaking schema construction; you should handle both Enum and dict values when building the JSON schema.
Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and please make it concise.
CodeAnt AI Incremental review completed.
The % character is special in Doxygen. Using %% renders as a literal %. This fixes parameter names like bc_y%alpha_in showing as bc_yalpha_in. Co-Authored-By: Claude Opus 4.5 <[email protected]>
Instead of listing every combination like (1,1), (1,2), (2,1), etc., the docs now show a single pattern like (N, M). Before: 78 rows for simplex_params (showing every index combination) After: 8 rows showing unique patterns Co-Authored-By: Claude Opus 4.5 <[email protected]>
1 issue found across 2 files (changes from recent commits).
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them.
<file name="docs/documentation/parameters.md">
<violation number="1" location="docs/documentation/parameters.md:63">
P2: The documentation now lists parameters with `%%` (e.g., `patch_icpp(N)%%Bx`), which appears to be a typo and misdocuments the actual case file syntax (`%`). This will mislead users copying parameter names.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
Previously, small families (<20 params) showed full tables even when they were indexed. Now we always collapse patterns if it reduces rows. Example: schlieren_alpha with 10 params now shows 1 pattern row instead of 10 individual rows. Co-Authored-By: Claude Opus 4.5 <[email protected]>
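The index-collapsing idea from these commits can be sketched as below. This is an illustrative reimplementation, not the toolchain's actual `_collapse_indices`, and it assumes at most four distinct index positions:

```python
import re

def collapse_indices(name: str) -> str:
    # Replace each parenthesized numeric index with a placeholder letter,
    # so fluid_pp(1)%Re(2) and fluid_pp(3)%Re(1) collapse to one pattern
    # row fluid_pp(N)%Re(M). Supports comma-separated indices like (1, 2).
    placeholders = iter("NMKL")  # breaks past four index groups; fine for a sketch
    return re.sub(
        r"\((\d+(?:\s*,\s*\d+)*)\)",
        lambda m: "(" + ", ".join(next(placeholders)
                                  for _ in m.group(1).split(",")) + ")",
        name,
    )
```

Grouping parameter names by `collapse_indices(name)` then yields one documentation row per pattern instead of one per index combination.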
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@toolchain/mfc/params/generators/docs_gen.py`:
- Line 137: The type annotation for the variable families incorrectly uses the
builtin any instead of typing.Any; change the annotation from Dict[str,
List[Tuple[str, any]]] to Dict[str, List[Tuple[str, Any]]] and ensure Any is
imported from typing (add or update the import statement that provides Any) so
the annotation references typing.Any; locate the declaration of families in
docs_gen.py to apply the fix.
- Around line 49-74: The nested block that handles parenthesized numeric indices
inside the while i < len(name) loop should be extracted into a helper function
to reduce nesting and fix R1702; create a function (e.g.,
handle_parenthesized_indices or parse_parenthesized_indices) that takes (name,
start_index, placeholders, placeholder_idx), moves the inner logic that collects
paren_content, validates numeric parts, builds new_parts and returns the
replacement string (like '(?, ?)' or with placeholders), the updated index j+1,
and the updated placeholder_idx; then replace the inlined code in the main loop
with a single call to that helper and append the returned segment to result or
fall back to the original flow when the helper indicates no valid parenthesized
numeric indices.
🧹 Nitpick comments (2)
toolchain/mfc/params/generators/docs_gen.py (2)
15-15: Remove unused `noqa` directive. Ruff reports the `# noqa: F401` comment is unnecessary since no F401 warning is being suppressed. The import is actually used (for side effects of registering definitions).
🧹 Proposed fix

    -from .. import definitions  # noqa: F401 pylint: disable=unused-import
    +from .. import definitions  # pylint: disable=unused-import

24-26: Simplify unnecessary `any()` over a single-element list. The expression `any(... for _ in [1])` iterates over a single element, making `any()` pointless. This looks like a refactoring leftover.
🧹 Proposed fix

    -    if any(name.startswith(f"{base}(") or name.startswith(f"{base}%") for _ in [1]):
    +    if name.startswith(f"{base}(") or name.startswith(f"{base}%"):
             return base
- Change `any` to `Any` from typing module - Extract _parse_paren_content helper to reduce nesting in _collapse_indices - Fixes R1702 pylint warning (too many nested blocks) Co-Authored-By: Claude Opus 4.5 <[email protected]>
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@docs/documentation/parameters.md`:
- Around line 165-180: Several fluid_pp parameters (`fluid_pp(N)%%D_v`, `%%M_v`,
`%%cp_v`, `%%gamma_v`, `%%gamma_v`, `%%k_v`, `%%mu_v`, `%%mul0`, `%%pv`, `%%ss`)
and general-family params (`Ca`, `Re_inv`, `Web`) are missing human-readable
descriptions in the parameter registry; open the parameter descriptions registry
(toolchain/mfc/params/descriptions.py) and add concise, validated descriptions
and units for each symbol (e.g., D_v = vapor diffusivity [m^2/s], M_v = vapor
molecular weight [kg/mol], cp_v = vapor specific heat at constant pressure
[J/(kg·K)], gamma_v = vapor heat capacity ratio, k_v = vapor thermal
conductivity [W/(m·K)], mu_v = vapor dynamic viscosity [Pa·s], mul0 = reference
viscosity, pv = vapor pressure, ss = surface tension source/flag or clarify
exact meaning, Ca = Capillary number, Re_inv = inverse Reynolds number, Web =
Weber number), verifying physical meanings against source physics before
committing so the generated docs populate correctly.
In `@toolchain/mfc/params/generators/docs_gen.py`:
- Around line 233-242: The description text returned by get_description(example)
isn't escaped for percent signs before being appended to lines, so call the
existing _escape_percent on the (possibly truncated) desc variable before using
it in lines.append to prevent Doxygen format tokens; update the block that
builds the table row (the loop iterating over patterns/examples that sets desc,
truncates it, and appends to lines) to replace desc with _escape_percent(desc)
and apply the identical change to the other similar block later in the file that
builds example rows (the second block that handles examples/patterns around the
same codepath).
- Line 15: Remove the unused "noqa: F401" directive on the import statement in
docs_gen.py: keep the intentional side-effect import "from .. import
definitions" and retain the "pylint: disable=unused-import" comment but delete
the "# noqa: F401" portion so Ruff no longer flags an unused noqa; ensure the
line still imports definitions for side effects.
| `fluid_pp(N)%%D_v` | `fluid_pp(1)%%D_v` | |
| `fluid_pp(N)%%G` | `fluid_pp(1)%%G` | Shear modulus for fluid 1 |
| `fluid_pp(N)%%M_v` | `fluid_pp(1)%%M_v` | |
| `fluid_pp(N)%%Re(M)` | `fluid_pp(1)%%Re(1)` | Reynolds number component 1 for fluid 1 |
| `fluid_pp(N)%%cp_v` | `fluid_pp(1)%%cp_v` | |
| `fluid_pp(N)%%cv` | `fluid_pp(1)%%cv` | Specific heat at constant volume for fluid 1 |
| `fluid_pp(N)%%gamma` | `fluid_pp(1)%%gamma` | Specific heat ratio for fluid 1 |
| `fluid_pp(N)%%gamma_v` | `fluid_pp(1)%%gamma_v` | |
| `fluid_pp(N)%%k_v` | `fluid_pp(1)%%k_v` | |
| `fluid_pp(N)%%mu_v` | `fluid_pp(1)%%mu_v` | |
| `fluid_pp(N)%%mul0` | `fluid_pp(1)%%mul0` | |
| `fluid_pp(N)%%pi_inf` | `fluid_pp(1)%%pi_inf` | Stiffness pressure for fluid 1 |
| `fluid_pp(N)%%pv` | `fluid_pp(1)%%pv` | |
| `fluid_pp(N)%%qv` | `fluid_pp(1)%%qv` | Heat of formation for fluid 1 |
| `fluid_pp(N)%%qvp` | `fluid_pp(1)%%qvp` | Heat of formation prime for fluid 1 |
| `fluid_pp(N)%%ss` | `fluid_pp(1)%%ss` | |
Multiple parameters lack descriptions.
Several fluid_pp parameters have empty descriptions (lines 165, 167, 169, 172-175, 177, 180): D_v, M_v, cp_v, gamma_v, k_v, mu_v, mul0, pv, and ss. Users consulting this reference won't understand what these parameters control.
Similarly, in the general family (lines 690, 693-694), parameters Ca, Re_inv, and Web also lack descriptions.
These descriptions should be added to the parameter registry (likely toolchain/mfc/params/descriptions.py based on the PR context) so they appear in the generated documentation.
📝 Suggested action
Add descriptions for these parameters in the source registry. For example:
- D_v: Likely vapor diffusivity
- M_v: Likely vapor molecular weight
- cp_v: Likely vapor specific heat at constant pressure
- gamma_v: Likely vapor heat capacity ratio
- Ca: Likely Capillary number
- Re_inv: Likely inverse Reynolds number
- Web: Likely Weber number
Verify the correct physical meanings and update the registry accordingly.
🧰 Tools
🪛 markdownlint-cli2 (0.20.0)
[warning] Table column style: table pipe has extra space to the left/right for style "compact" (MD060, table-column-style). Reported for lines 165, 167, 169, 172, 173, 174, 175, 177, and 180.
🤖 Prompt for AI Agents
In `@docs/documentation/parameters.md` around lines 165 - 180, Several fluid_pp
parameters (`fluid_pp(N)%%D_v`, `%%M_v`, `%%cp_v`, `%%gamma_v`, `%%gamma_v`,
`%%k_v`, `%%mu_v`, `%%mul0`, `%%pv`, `%%ss`) and general-family params (`Ca`,
`Re_inv`, `Web`) are missing human-readable descriptions in the parameter
registry; open the parameter descriptions registry
(toolchain/mfc/params/descriptions.py) and add concise, validated descriptions
and units for each symbol (e.g., D_v = vapor diffusivity [m^2/s], M_v = vapor
molecular weight [kg/mol], cp_v = vapor specific heat at constant pressure
[J/(kg·K)], gamma_v = vapor heat capacity ratio, k_v = vapor thermal
conductivity [W/(m·K)], mu_v = vapor dynamic viscosity [Pa·s], mul0 = reference
viscosity, pv = vapor pressure, ss = surface tension source/flag or clarify
exact meaning, Ca = Capillary number, Re_inv = inverse Reynolds number, Web =
Weber number), verifying physical meanings against source physics before
committing so the generated docs populate correctly.
    from ..schema import ParamType
    from ..registry import REGISTRY
    from ..descriptions import get_description
    from .. import definitions  # noqa: F401 pylint: disable=unused-import
Remove unused noqa directive to satisfy Ruff.
Ruff flags the # noqa: F401 as unused; the pylint suppression is sufficient if the import is intentionally for side effects.
🧹 Proposed fix
-from .. import definitions # noqa: F401 pylint: disable=unused-import
+from .. import definitions  # pylint: disable=unused-import

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
    -from .. import definitions  # noqa: F401 pylint: disable=unused-import
    +from .. import definitions  # pylint: disable=unused-import
🧰 Tools
🪛 Ruff (0.14.14)
[warning] 15-15: Unused noqa directive (non-enabled: F401)
Remove unused noqa directive
(RUF100)
🤖 Prompt for AI Agents
In `@toolchain/mfc/params/generators/docs_gen.py` at line 15, Remove the unused
"noqa: F401" directive on the import statement in docs_gen.py: keep the
intentional side-effect import "from .. import definitions" and retain the
"pylint: disable=unused-import" comment but delete the "# noqa: F401" portion so
Ruff no longer flags an unused noqa; ensure the line still imports definitions
for side effects.
    for pattern, examples in sorted(patterns.items()):
        example = examples[0]
        desc = get_description(example) or ""
        # Truncate long descriptions
        if len(desc) > 60:
            desc = desc[:57] + "..."
        # Escape % for Doxygen
        pattern_escaped = _escape_percent(pattern)
        example_escaped = _escape_percent(example)
        lines.append(f"| `{pattern_escaped}` | `{example_escaped}` | {desc} |")
Escape % in descriptions to avoid Doxygen formatting issues.
You already escape % in patterns/names, but descriptions can include % too; those will still be treated as format tokens by Doxygen. Consider escaping after truncation.
🛠️ Proposed fix
- if len(desc) > 60:
- desc = desc[:57] + "..."
+ if len(desc) > 60:
+ desc = desc[:57] + "..."
+ desc = _escape_percent(desc)
@@
- if len(desc) > 80:
- desc = desc[:77] + "..."
+ if len(desc) > 80:
+ desc = desc[:77] + "..."
+            desc = _escape_percent(desc)

Also applies to: 250-261
🤖 Prompt for AI Agents
In `@toolchain/mfc/params/generators/docs_gen.py` around lines 233 - 242, The
description text returned by get_description(example) isn't escaped for percent
signs before being appended to lines, so call the existing _escape_percent on
the (possibly truncated) desc variable before using it in lines.append to
prevent Doxygen format tokens; update the block that builds the table row (the
loop iterating over patterns/examples that sets desc, truncates it, and appends
to lines) to replace desc with _escape_percent(desc) and apply the identical
change to the other similar block later in the file that builds example rows
(the second block that handles examples/patterns around the same codepath).
User description
PR Type
Enhancement, Tests, Documentation
Description
Major Quality of Life Improvements for MFC Toolchain
- Centralized Parameter System: Consolidated ~3,300 MFC case parameters into a unified registry with descriptions, constraints, and dependencies (params/definitions.py, params/descriptions.py)
- Enhanced CLI Architecture: Refactored argument parsing with declarative schema-based command definitions, auto-generated argparse parsers, and shell completion support (cli/schema.py, cli/argparse_gen.py, cli/completion_gen.py)
- Improved User Experience: Added interactive user guide with topic-based help system, contextual tips, and first-time user onboarding (user_guide.py, completion.py)
- Case Template System: New case template generator with built-in minimal templates and example-based case creation (init.py)
- Parameter Discovery: New CLI command for searching and exploring parameters with filtering and statistics (params_cmd.py)
- Enhanced Validation: Comprehensive physics constraint validation for bubbles, patches, time stepping, and acoustic sources with improved error reporting (case_validator.py)
- Build System Improvements: Added progress bars, streaming output, and detailed error reporting with contextual hints (build.py)
- Test Infrastructure: Comprehensive parameter validation test suite including negative tests, mutation testing, snapshot regression testing, and coverage analysis (params_tests/ module)
- Documentation Generation: Auto-generated CLI reference and parameter documentation from schema definitions (cli/docs_gen.py, params/generators/docs_gen.py)
- Code Quality: Fixed formatting issues across example cases and benchmarks (PEP 8 compliance for imports and comments)
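The schema-driven CLI idea can be sketched in miniature. The dataclass field names below are hypothetical (the real definitions live in toolchain/mfc/cli/schema.py and argparse_gen.py and will differ); the point is that one declarative description drives parser generation:

```python
import argparse
from dataclasses import dataclass, field
from typing import List

@dataclass
class Argument:
    flags: List[str]          # e.g. ["-j", "--jobs"]
    help: str = ""
    action: str = "store"

@dataclass
class Command:
    name: str
    help: str = ""
    arguments: List[Argument] = field(default_factory=list)

def generate_parser(commands: List[Command]) -> argparse.ArgumentParser:
    # Build a complete ArgumentParser with subcommands from the schema.
    parser = argparse.ArgumentParser(prog="mfc")
    sub = parser.add_subparsers(dest="command")
    for cmd in commands:
        p = sub.add_parser(cmd.name, help=cmd.help)
        for arg in cmd.arguments:
            p.add_argument(*arg.flags, help=arg.help, action=arg.action)
    return parser

parser = generate_parser([
    Command("build", "Build MFC", [Argument(["-j", "--jobs"], "parallel jobs")]),
])
args = parser.parse_args(["build", "-j", "8"])
```

The same `Command` objects can then feed the completion and documentation generators, which is what keeps the three outputs consistent.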
Diagram Walkthrough
File Walkthrough
15 files
descriptions.py
Centralized parameter descriptions and documentation system (toolchain/mfc/params/descriptions.py)
- Descriptions for case parameters
- Indexed parameters (e.g. `patch_icpp(N)%geometry`)
case_dicts.py
Migrate parameter definitions to central registry (toolchain/mfc/run/case_dicts.py)
- Replaces hardcoded dictionaries with registry loading via `_load_from_registry()`
- `CASE_OPTIMIZATION` now derives from registry metadata instead of a hardcoded list
- Keeps the `ParamType` enum and utility functions
commands.py
Unified CLI command definitions and schema (toolchain/mfc/cli/commands.py)
- Defines commands (build, run, test, etc.) with full argument specifications
- Drives parsers, completions, and documentation
case_validator.py
Enhanced case validation with physics constraints (toolchain/mfc/case_validator.py)
- `check_parameter_types()` validates that logical parameters are 'T' or 'F'
- `check_bubbles_euler()` validates bubble physics parameters (R0ref, p0ref, viscosities, surface tension)
- `check_patch_physics()` validates initial condition constraints (positive pressure, valid volume fractions, positive dimensions)
- `check_time_stepping()` handles multiple time stepping modes (CFL-based, adaptive, fixed)
- `check_acoustic_source()` adds physics validation (positive frequency, wavelength, gaussian parameters)
- `_is_numeric()` helper and `_format_errors()` for better error reporting with hints
definitions.py
Consolidated parameter definitions module with constraint and dependency support (toolchain/mfc/params/definitions.py)
- Compact module using loops instead of a definitions/ directory
- Dependencies (requires/recommends relationships)
- `_load()` function registers all parameters across COMMON, PRE_PROCESS, SIMULATION, and POST_PROCESS stages
- Covers acoustic sources, probes, and other multi-instance configurations
user_guide.py
New user guide with enhanced help and interactive onboarding system (toolchain/mfc/user_guide.py)
- Topic-based help with Rich formatting
- Covers setup, batch jobs, and debugging
- Contextual tips for build failures and run failures
- Interactive menu-driven interface
init.py
New case template generator with built-in and example-based templates (toolchain/mfc/init.py)
- Creates cases from built-in or example templates
- `1D_minimal`, `2D_minimal`, and `3D_minimal` templates with documented parameters
- `create_case()` function generates new cases from templates or copies from the examples directory
- `list_templates()` displays available templates; `get_available_templates()` for programmatic access
args.py
Refactored argument parsing with centralized CLI schema and enhanced help (toolchain/mfc/args.py)
- Uses the centralized CLI schema instead of manual argparse setup
- `print_help()`, `print_command_help()`, and topic-based help system integration
- `_handle_enhanced_help()` provides rich formatting before argparse help
- Welcome message for first-time users
build.py
Enhanced build system with progress bars and improved error reporting (toolchain/mfc/build.py)
- Build output parsing using regex patterns
- `_run_build_with_progress()` supports streaming mode (-v) and interactive progress bars
- `_show_build_error()` displays detailed error information from captured subprocess output
- Status indicators (✓/✗), timing information, and contextual error tips
- Full compiler output at verbosity -vv+
completion_gen.py
New shell completion generator for bash and zsh from CLI schema (toolchain/mfc/cli/completion_gen.py)
- Generates completions from the CLI schema
- `generate_bash_completion()` and `generate_zsh_completion()` functions for auto-generated completions
- Completes file types (Python, YAML, pack files)
- Handles common argument sets in completion generation
argparse_gen.py
New argparse generator from declarative CLI schema (toolchain/mfc/cli/argparse_gen.py)
- Builds parsers from schema definitions
- `generate_parser()` converts the schema into a complete ArgumentParser with subcommands and argument groups
- Builds arguments dynamically from dataclass fields
- Supports mutually exclusive groups and nested subcommands
params_cmd.py
Parameter search and discovery CLI command (toolchain/mfc/params_cmd.py)
- Searches and explores MFC case parameters
- Handles indexed parameters
- Configurable output formats
- Alternative suggestions
test.py
Enhanced test reporting and failure feedback (toolchain/mfc/test/test.py)
- Displays human-readable test names prominently
- Visual panel formatting
- Real-time failure feedback with helpful hints
- `failed_tests` tracking and `_print_test_summary()` function for detailed failure reporting
completion.py
Shell completion installation and management (toolchain/mfc/completion.py)
- Installs to ~/.local/share/mfc/completions/
schema.py
CLI schema dataclass definitions (toolchain/mfc/cli/schema.py)
- Single source of truth for the CLI
- `Argument`, `Positional`, `Command`, `CommonArgumentSet`, and `CLISchema` dataclasses
- `ArgAction` and `CompletionType` with completion configuration
- Drives parser, completion, and documentation generation
7 files
case.py
Fix comment formatting in example case (examples/2D_shearlayer/case.py)
- `#'` changed to `# '` for consistency with Python style guidelines
case.py
Fix import statement formatting (examples/1D_bubblescreen/case.py)
case.py
Code formatting improvement for import statements (examples/nD_perfect_reactor/case.py)
- `import json, argparse` split into separate lines for PEP 8 compliance
case.py
Code formatting and style improvements (benchmarks/5eq_rk3_weno3_hllc/case.py)
- `##` changed to single `#` for consistency
case.py
Comment style normalization (examples/2D_axisym_shockwatercavity/case.py)
- `##` changed to single `#` for consistency
case.py
Comment style normalization (examples/2D_ibm_steady_shock/case.py)
- `##` changed to single `#` for consistency
export.py
Import statement formatting (examples/scaling/export.py)
negative_tests.py
Negative test case generator for validator constraintstoolchain/mfc/params_tests/negative_tests.py
violate validator constraints
BASE_CASEwith valid parameters andConstraintTestdataclassfor test specification
generate_constraint_tests()covering simulation domain,model equations, time stepping, WENO, boundary conditions, and
acoustic source constraints
run_constraint_tests()andprint_test_report()for validationand reporting
mutation_tests.py
Mutation testing for validator coverage analysistoolchain/mfc/params_tests/mutation_tests.py
parameters to verify validator coverage
MUTATIONSdictionary with invalid values for ~40 parametersacross numeric, boolean, physics, and numerics categories
tracking
by parameter
snapshot.py
Validation snapshot tool for regression testingtoolchain/mfc/params_tests/snapshot.py
cases for regression testing
CaseSnapshotandValidationResultdataclasses for storingvalidation state
and save/load snapshots to JSON
across refactoring
coverage.py
Constraint coverage analysis tooltoolchain/mfc/params_tests/coverage.py
case_validator.pyAST
self.prohibit()calls and maps them to validation stagesrunner.py
Test safety net runner and orchestratortoolchain/mfc/params_tests/runner.py
test suite
build_safety_net()to capture parameter inventory,validation snapshots, and constraint coverage
verify_safety_net()to detect unintended validation behaviorchanges
generation commands
inventory.py
Parameter inventory export and analysistoolchain/mfc/params_tests/inventory.py
stages
export_parameter_inventory()categorizing parameters bystage and type
tracking
metadata
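The snapshot-based regression idea above can be sketched as follows. The dataclass shape is a guess at what snapshot.py stores (the real `CaseSnapshot`/`ValidationResult` carry more fields), but the save/load/diff cycle is the core of the safety net:

```python
import json
from dataclasses import dataclass, asdict
from typing import Dict, List, Tuple

@dataclass
class ValidationResult:
    # Hypothetical minimal shape: one record per validated case.
    case_id: str
    errors: List[str]

def save_snapshot(results: List[ValidationResult], path: str) -> None:
    # Persist validator output as stable, sorted JSON.
    with open(path, "w") as f:
        json.dump([asdict(r) for r in results], f, indent=2, sort_keys=True)

def diff_snapshot(results: List[ValidationResult], path: str) -> Dict[str, Tuple]:
    # Compare current validator output against the stored snapshot and
    # report every case whose error list changed (old, new).
    with open(path) as f:
        old = {r["case_id"]: r["errors"] for r in json.load(f)}
    changed = {}
    for r in results:
        if old.get(r.case_id) != r.errors:
            changed[r.case_id] = (old.get(r.case_id), r.errors)
    return changed
```

A refactor of the validator is then "safe" when `diff_snapshot` over the full case corpus comes back empty.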
2 files
docs_gen.py
CLI documentation generator from schema (toolchain/mfc/cli/docs_gen.py)
- Generates the CLI reference from schema definitions
- Covers commands, examples, and subcommands
docs_gen.py
Parameter documentation generator (toolchain/mfc/params/generators/docs_gen.py)
- Documents case parameters
- Documents constraints
- Pattern grouping for large families
- Regeneration instructions
74 files
CodeAnt-AI Description
Centralize case parameters, add a single CLI schema, and show rich build progress/errors
What Changed
Impact
✅ Clearer build errors and diagnostics
✅ Easier discovery of case parameters
✅ Shorter onboarding for new cases and CLI usage
💡 Usage Guide
Checking Your Pull Request
Every time you make a pull request, our system automatically looks through it. We check for security issues, mistakes in how you're setting up your infrastructure, and common code problems. We do this to make sure your changes are solid and won't cause any trouble later.
Talking to CodeAnt AI
Got a question or need a hand with something in your pull request? You can easily get in touch with CodeAnt AI right here. Just type the following in a comment on your pull request, and replace "Your question here" with whatever you want to ask:
This lets you have a chat with CodeAnt AI about your pull request, making it easier to understand and improve your code.
Preserve Org Learnings with CodeAnt
You can record team preferences so CodeAnt AI applies them in future reviews. Reply directly to the specific CodeAnt AI suggestion (in the same thread) and replace "Your feedback here" with your input:
This helps CodeAnt AI learn and adapt to your team's coding style and standards.
Retrigger review
Ask CodeAnt AI to review the PR again, by typing:
Check Your Repository Health
To analyze the health of your code repository, visit our dashboard at https://app.codeant.ai. This tool helps you identify potential issues and areas for improvement in your codebase, ensuring your repository maintains high standards of code health.