
Conversation

@sbryngelson (Member) commented Jan 31, 2026

User description

PR Type

Enhancement, Tests, Documentation

Description

Major Quality of Life Improvements for MFC Toolchain

  • Centralized Parameter System: Consolidated ~3,300 MFC case parameters into a unified registry with descriptions, constraints, and dependencies (params/definitions.py, params/descriptions.py)

  • Enhanced CLI Architecture: Refactored argument parsing with declarative schema-based command definitions, auto-generated argparse parsers, and shell completion support (cli/schema.py, cli/argparse_gen.py, cli/completion_gen.py)

  • Improved User Experience: Added interactive user guide with topic-based help system, contextual tips, and first-time user onboarding (user_guide.py, completion.py)

  • Case Template System: New case template generator with built-in minimal templates and example-based case creation (init.py)

  • Parameter Discovery: New CLI command for searching and exploring parameters with filtering and statistics (params_cmd.py)

  • Enhanced Validation: Comprehensive physics constraint validation for bubbles, patches, time stepping, and acoustic sources with improved error reporting (case_validator.py)

  • Build System Improvements: Added progress bars, streaming output, and detailed error reporting with contextual hints (build.py)

  • Test Infrastructure: Comprehensive parameter validation test suite including negative tests, mutation testing, snapshot regression testing, and coverage analysis (params_tests/ module)

  • Documentation Generation: Auto-generated CLI reference and parameter documentation from schema definitions (cli/docs_gen.py, params/generators/docs_gen.py)

  • Code Quality: Fixed formatting issues across example cases and benchmarks (PEP 8 compliance for imports and comments)

Diagram Walkthrough

flowchart LR
  A["Parameter Registry<br/>definitions.py<br/>descriptions.py"] -->|"loads"| B["Case Validator<br/>Enhanced Constraints"]
  A -->|"powers"| C["Parameter Search<br/>params_cmd.py"]
  D["CLI Schema<br/>schema.py"] -->|"generates"| E["Argparse Parser<br/>argparse_gen.py"]
  D -->|"generates"| F["Shell Completions<br/>completion_gen.py"]
  D -->|"generates"| G["CLI Docs<br/>docs_gen.py"]
  E -->|"used by"| H["Enhanced Args<br/>args.py"]
  H -->|"provides"| I["User Guide<br/>user_guide.py"]
  J["Build System<br/>build.py"] -->|"reports"| I
  B -->|"tested by"| K["Test Suite<br/>negative/mutation/<br/>snapshot/coverage"]
  L["Case Templates<br/>init.py"] -->|"creates"| M["New Cases"]

File Walkthrough

Relevant files
Enhancement (15 files)
descriptions.py
Centralized parameter descriptions and documentation system

toolchain/mfc/params/descriptions.py

  • New file providing centralized parameter descriptions for ~3,300 MFC
    case parameters
  • Includes manual descriptions dictionary extracted from documentation
  • Implements pattern-based auto-generation for indexed parameters (e.g.,
    patch_icpp(N)%geometry)
  • Provides fallback naming convention inference for undocumented
    parameters
+552/-0 
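
As an illustration of the pattern-based auto-generation described in this entry, a lookup along these lines maps an indexed parameter name to a generated description (a minimal sketch; the actual PATTERNS entries and template wording in descriptions.py may differ):

import re

# Hypothetical sketch: each entry pairs a regex for an indexed parameter
# family with a description template filled from the captured index.
PATTERNS = [
    (r"patch_icpp\((\d+)\)%geometry", "Geometry type of patch {}"),
    (r"patch_icpp\((\d+)\)%pres", "Initial pressure of patch {}"),
]

def describe(param_name: str) -> str:
    for pattern, template in PATTERNS:
        match = re.fullmatch(pattern, param_name)
        if match:
            return template.format(*match.groups())
    # Fallback: infer something readable from the naming convention
    return param_name.replace("%", " ").replace("_", " ")

print(describe("patch_icpp(3)%geometry"))  # -> Geometry type of patch 3
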
case_dicts.py
Migrate parameter definitions to central registry               

toolchain/mfc/run/case_dicts.py

  • Refactored to load parameter definitions from central registry instead
    of hardcoded dictionaries
  • Replaced ~500 lines of manual parameter type definitions with dynamic
    loading via _load_from_registry()
  • Updated CASE_OPTIMIZATION to derive from registry metadata instead of
    hardcoded list
  • Added comprehensive docstrings to ParamType enum and utility functions
+61/-523
commands.py
Unified CLI command definitions and schema                             

toolchain/mfc/cli/commands.py

  • New file defining all MFC CLI commands as a single source of truth
  • Includes 20+ commands (build, run, test, clean, validate, new, params,
    etc.) with full argument specifications
  • Defines common argument sets, command aliases, and help topics
  • Provides structured schema for generating argparse parsers,
    completions, and documentation
+1003/-0
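
To give a feel for the declarative style, a command entry might look roughly like the following (a hedged sketch with stand-in dataclasses; the real cli/schema.py definitions have more fields and different defaults):

from dataclasses import dataclass, field
from typing import List

@dataclass
class Argument:                # stand-in for the real cli/schema.py dataclass
    flags: List[str]
    help: str = ""

@dataclass
class Command:                 # stand-in for the real cli/schema.py dataclass
    name: str
    help: str
    aliases: List[str] = field(default_factory=list)
    arguments: List[Argument] = field(default_factory=list)

BUILD = Command(
    name="build",
    help="Build MFC targets",
    aliases=["b"],             # the b/r/t/v/c aliases are documented later in this PR
    arguments=[Argument(flags=["-j", "--jobs"], help="Number of parallel build jobs")],
)

The argparse parser, shell completions, and CLI docs are then generated from entries like this instead of being maintained by hand.
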
case_validator.py
Enhanced case validation with physics constraints               

toolchain/mfc/case_validator.py

  • Added check_parameter_types() to validate logical parameters are 'T'
    or 'F'
  • Enhanced check_bubbles_euler() with validation for bubble physics
    parameters (R0ref, p0ref, viscosities, surface tension)
  • Added check_patch_physics() to validate initial condition constraints
    (positive pressure, valid volume fractions, positive dimensions)
  • Improved check_time_stepping() to handle multiple time stepping modes
    (CFL-based, adaptive, fixed)
  • Enhanced check_acoustic_source() with physics validation (positive
    frequency, wavelength, gaussian parameters)
  • Added _is_numeric() helper and _format_errors() for better error
    reporting with hints
+257/-20
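
For context, the prohibit()-style checks follow roughly this shape (a simplified stand-in class; the real CaseValidator has many more checks and richer error formatting with hints):

class MiniValidator:
    """Simplified stand-in for the real CaseValidator."""
    def __init__(self, case):
        self.case = case
        self.errors = []

    def prohibit(self, condition, message):
        # Record an error when the prohibited condition holds
        if condition:
            self.errors.append(message)

    @staticmethod
    def _is_numeric(value):
        return isinstance(value, (int, float)) and not isinstance(value, bool)

    def check_patch_physics(self):
        pres = self.case.get("patch_icpp(1)%pres")
        if self._is_numeric(pres):
            self.prohibit(pres <= 0, "patch_icpp(1)%pres must be positive")

v = MiniValidator({"patch_icpp(1)%pres": -1.0})
v.check_patch_physics()
print(v.errors)   # ['patch_icpp(1)%pres must be positive']
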
definitions.py
Consolidated parameter definitions module with constraint and
dependency support

toolchain/mfc/params/definitions.py

  • New file consolidating ~3,300 parameter definitions into a single
    compact module using loops instead of a definitions/ directory
  • Defines parameter constraints (choices, min/max values) and
    dependencies (requires/recommends relationships)
  • Implements _load() function that registers all parameters across
    COMMON, PRE_PROCESS, SIMULATION, and POST_PROCESS stages
  • Supports indexed parameters for patches, fluids, boundary conditions,
    acoustic sources, probes, and other multi-instance configurations
+394/-0 
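
The loop-based registration style looks roughly like this (a minimal sketch; the real _r() helper, stage constants, and ParamType values live in the params package):

REGISTRY = {}
S = "SIMULATION"   # stand-in for the real stage constant

def _r(name, ptype, stages):
    """Register one parameter with its type and the stages that accept it."""
    REGISTRY[name] = {"type": ptype, "stages": set(stages)}

for n in ["dt", "t_stop", "t_save", "cfl_target"]:
    _r(n, "REAL", {S})
for n in ["mapped_weno", "teno", "adap_dt"]:
    _r(n, "LOG", {S})

print(len(REGISTRY))   # 7 parameters registered in this toy example
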
user_guide.py
New user guide with enhanced help and interactive onboarding system

toolchain/mfc/user_guide.py

  • New comprehensive user guide module providing enhanced help system
    with Rich formatting
  • Implements topic-based help system covering GPU configuration, cluster
    setup, batch jobs, and debugging
  • Provides contextual tips after build failures, case errors, test
    failures, and run failures
  • Includes onboarding for first-time users with welcome message and
    interactive menu-driven interface
+594/-0 
init.py
New case template generator with built-in and example-based templates

toolchain/mfc/init.py

  • New case template generator module for creating new case files from
    built-in or example templates
  • Provides three built-in minimal templates: 1D_minimal, 2D_minimal, and
    3D_minimal with documented parameters
  • Implements create_case() function to generate new cases from templates
    or copy from examples directory
  • Includes list_templates() to display available templates and
    get_available_templates() for programmatic access
+500/-0 
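
A hedged usage sketch (the keyword argument below is an assumption, and the module is assumed importable as mfc.init; from the shell this corresponds to ./mfc.sh new my_case -t 2D_minimal, with --list to enumerate templates):

from mfc.init import get_available_templates, create_case

# Built-in minimal templates plus any example-based ones
print(get_available_templates())

# Create a new case from the 2D minimal template (argument names assumed)
create_case("my_case", template="2D_minimal")
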
args.py
Refactored argument parsing with centralized CLI schema and enhanced
help

toolchain/mfc/args.py

  • Refactored argument parsing to use auto-generated parsers from
    centralized CLI schema instead of manual argparse setup
  • Added enhanced help output with print_help(), print_command_help(),
    and topic-based help system integration
  • Implemented _handle_enhanced_help() to provide rich formatting before
    argparse help
  • Deferred test case loading to avoid slow startup for non-test commands
  • Added support for command aliases and improved help for first-time
    users
+106/-151
build.py
Enhanced build system with progress bars and improved error reporting

toolchain/mfc/build.py

  • Added progress bar visualization for build process with ninja/make
    output parsing using regex patterns
  • Implemented _run_build_with_progress() function supporting streaming
    mode (-v) and interactive progress bars
  • Added _show_build_error() to display detailed error information from
    captured subprocess output
  • Enhanced configure, build, and install steps with status indicators
    (✓/✗), timing information, and contextual error tips
  • Improved verbosity levels: default (progress bar), -v (streaming),
    -vv+ (full compiler output)
+339/-7 
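
The progress parsing described here boils down to recognizing ninja's "[42/156] ..." lines and make's "[ 27%] ..." lines; a rough sketch of the idea (the exact regexes and formatting in build.py may differ):

import re

NINJA_RE = re.compile(r"^\[(\d+)/(\d+)\]\s+(.*)$")    # e.g. "[42/156] Building ..."
MAKE_RE = re.compile(r"^\[\s*(\d+)%\]\s+(.*)$")       # e.g. "[ 27%] Building ..."

for line in ["[42/156] Building Fortran object src/m_rhs.fpp.f90.o",
             "[ 27%] Building Fortran object src/m_weno.fpp.f90.o"]:
    if (m := NINJA_RE.match(line)):
        done, total, action = m.groups()
        print(f"{done}/{total} {action.split()[-1]}")
    elif (m := MAKE_RE.match(line)):
        print(f"{m.group(1):>3}% {m.group(2).split()[-1]}")
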
completion_gen.py
New shell completion generator for bash and zsh from CLI schema

toolchain/mfc/cli/completion_gen.py

  • New module generating bash and zsh shell completion scripts from CLI
    schema
  • Implements generate_bash_completion() and generate_zsh_completion()
    functions for auto-generated completions
  • Supports completion for commands, options, positional arguments, and
    file types (Python, YAML, pack files)
  • Handles command aliases, mutually exclusive groups, and common
    argument sets in completion generation
+353/-0 
argparse_gen.py
New argparse generator from declarative CLI schema             

toolchain/mfc/cli/argparse_gen.py

  • New module generating argparse parsers from declarative CLI schema
    definitions
  • Implements generate_parser() to convert schema into complete
    ArgumentParser with subcommands and argument groups
  • Handles MFCConfig boolean pair arguments (--flag/--no-flag)
    dynamically from dataclass fields
  • Supports positional arguments, common argument sets, mutually
    exclusive groups, and nested subcommands
+226/-0 
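
Usage would look roughly like this (the schema object name below is an assumption; generate_parser() is the function named above):

from mfc.cli.argparse_gen import generate_parser
from mfc.cli.commands import SCHEMA   # assumed name of the CLISchema instance

parser = generate_parser(SCHEMA)      # complete ArgumentParser with subcommands
args = parser.parse_args(["build"])
print(args)
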
params_cmd.py
Parameter search and discovery CLI command                             

toolchain/mfc/params_cmd.py

  • New module providing CLI command for searching and exploring ~3,300
    MFC case parameters
  • Implements parameter search with filtering by type and collapsing of
    indexed parameters
  • Displays parameter statistics, families, and descriptions with
    configurable output formats
  • Includes helper functions for pattern matching, result formatting, and
    alternative suggestions
+330/-0 
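
The "collapsing of indexed parameters" mentioned above amounts to folding numeric indices into a family pattern; a sketch of the idea (the real implementation may differ):

import re

def collapse(name: str) -> str:
    """Fold a numeric index into 'N' so indexed parameters group into one family."""
    return re.sub(r"\((\d+)\)", "(N)", name)

names = ["patch_icpp(1)%geometry", "patch_icpp(2)%geometry", "fluid_pp(3)%gamma"]
print(sorted({collapse(n) for n in names}))
# ['fluid_pp(N)%gamma', 'patch_icpp(N)%geometry']
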
test.py
Enhanced test reporting and failure feedback                         

toolchain/mfc/test/test.py

  • Enhanced test output formatting with improved progress display showing
    test names prominently
  • Added test duration tracking and comprehensive summary report with
    visual panel formatting
  • Implemented real-time failure feedback with error type detection and
    helpful hints
  • Added failed_tests tracking and _print_test_summary() function for
    detailed failure reporting
+136/-18
completion.py
Shell completion installation and management                         

toolchain/mfc/completion.py

  • New module for shell completion installation and management
  • Implements auto-detection and installation for bash and zsh shells
  • Manages completion script installation to
    ~/.local/share/mfc/completions/
  • Provides status checking, uninstall, and RC file configuration
+220/-0 
schema.py
CLI schema dataclass definitions                                                 

toolchain/mfc/cli/schema.py

  • New module defining dataclasses for CLI schema (single source of
    truth)
  • Implements Argument, Positional, Command, CommonArgumentSet, and
    CLISchema dataclasses
  • Defines enums for ArgAction and CompletionType with completion
    configuration
  • Provides helper methods for command lookup and argument flag
    generation
+204/-0 
Formatting (7 files)
case.py
Fix comment formatting in example case                                     

examples/2D_shearlayer/case.py

  • Fixed comment formatting from #' to # ' for consistency with Python
    style guidelines
+4/-4     
case.py
Fix import statement formatting                                                   

examples/1D_bubblescreen/case.py

  • Split multi-import statement into separate lines for PEP 8 compliance
+2/-1     
case.py
Code formatting improvement for import statements               

examples/nD_perfect_reactor/case.py

  • Minor formatting change: split import json, argparse into separate
    lines for PEP 8 compliance
+2/-1     
case.py
Code formatting and style improvements                                     

benchmarks/5eq_rk3_weno3_hllc/case.py

  • Reformatted import statements to follow PEP 8 (one import per line)
  • Converted multi-line comments from ## to single # for consistency
+9/-7     
case.py
Comment style normalization                                                           

examples/2D_axisym_shockwatercavity/case.py

  • Converted multi-line comments from ## to single # for consistency
+6/-6     
case.py
Comment style normalization                                                           

examples/2D_ibm_steady_shock/case.py

  • Converted multi-line comment from ## to single # for consistency
+1/-1     
export.py
Import statement formatting                                                           

examples/scaling/export.py

  • Reformatted import statements to follow PEP 8 (one import per line)
+5/-1     
Tests (6 files)
negative_tests.py
Negative test case generator for validator constraints     

toolchain/mfc/params_tests/negative_tests.py

  • New test module generating negative test cases that intentionally
    violate validator constraints
  • Defines BASE_CASE with valid parameters and ConstraintTest dataclass
    for test specification
  • Implements generate_constraint_tests() covering simulation domain,
    model equations, time stepping, WENO, boundary conditions, and
    acoustic source constraints
  • Provides run_constraint_tests() and print_test_report() for validation
    and reporting
+358/-0 
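
A negative test is essentially a named set of parameter overrides plus the error it should trigger; a hedged sketch (field names are assumptions based on the description of ConstraintTest and BASE_CASE):

from dataclasses import dataclass
from typing import Dict, Any

@dataclass
class ConstraintTest:           # stand-in for the real dataclass
    name: str
    overrides: Dict[str, Any]   # parameters changed relative to BASE_CASE
    expect_error: str           # substring expected in the validator output

# Example: t_step_save must be positive (a gap this PR's validator now catches)
BAD_T_STEP_SAVE = ConstraintTest(
    name="t_step_save must be positive",
    overrides={"t_step_save": 0},
    expect_error="t_step_save",
)
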
mutation_tests.py
Mutation testing for validator coverage analysis                 

toolchain/mfc/params_tests/mutation_tests.py

  • New module for mutation testing that systematically mutates valid
    parameters to verify validator coverage
  • Defines MUTATIONS dictionary with invalid values for ~40 parameters
    across numeric, boolean, physics, and numerics categories
  • Implements mutation application and validation checking with result
    tracking
  • Provides comprehensive reporting on catch rates and missed mutations
    by parameter
+279/-0 
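
Mutation testing here means taking a valid case, swapping in one invalid value, and checking that the validator rejects it; a minimal sketch (the invalid values are drawn from gaps listed in the commit messages; function names are illustrative):

MUTATIONS = {
    "t_step_save": [0, -5],      # must be positive
    "mpp_lim": ["X", "yes"],     # Fortran logicals must be 'T' or 'F'
    "cyl_coord": ["X"],
}

def mutate(base_case, param, bad_value):
    mutated = dict(base_case)
    mutated[param] = bad_value
    return mutated

base = {"t_step_save": 10, "mpp_lim": "F", "cyl_coord": "F"}
for param, bad_values in MUTATIONS.items():
    for bad in bad_values:
        case = mutate(base, param, bad)
        # The real suite runs the validator on `case` and records whether
        # the mutation was "caught" (i.e. an error was reported).
        print(f"mutated {param} -> {case[param]!r}")
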
snapshot.py
Validation snapshot tool for regression testing                   

toolchain/mfc/params_tests/snapshot.py

  • New module for capturing and comparing validation snapshots of example
    cases for regression testing
  • Implements CaseSnapshot and ValidationResult dataclasses for storing
    validation state
  • Provides functions to load case parameters, validate across stages,
    and save/load snapshots to JSON
  • Includes comparison logic to detect changes in validation behavior
    across refactoring
+301/-0 
coverage.py
Constraint coverage analysis tool                                               

toolchain/mfc/params_tests/coverage.py

  • New module analyzing constraint coverage by parsing case_validator.py
    AST
  • Extracts all self.prohibit() calls and maps them to validation stages
  • Generates coverage reports showing constraints per method and stage
  • Provides JSON export and console reporting of coverage metrics
+278/-0 
runner.py
Test safety net runner and orchestrator                                   

toolchain/mfc/params_tests/runner.py

  • New main entry point for building and verifying parameter validation
    test suite
  • Implements build_safety_net() to capture parameter inventory,
    validation snapshots, and constraint coverage
  • Provides verify_safety_net() to detect unintended validation behavior
    changes
  • Includes CLI interface supporting build, verify, summary, and report
    generation commands
+258/-0 
inventory.py
Parameter inventory export and analysis                                   

toolchain/mfc/params_tests/inventory.py

  • New module exporting parameter inventory with types and validation
    stages
  • Implements export_parameter_inventory() categorizing parameters by
    stage and type
  • Extracts dynamic parameter patterns with normalization and example
    tracking
  • Provides JSON export and console summary reporting of parameter
    metadata
+176/-0 
Documentation (2 files)
docs_gen.py
CLI documentation generator from schema                                   

toolchain/mfc/cli/docs_gen.py

  • New module generating markdown CLI reference documentation from schema
    definitions
  • Formats command sections with usage, arguments, options tables,
    examples, and subcommands
  • Organizes commands by category (core, utility, development, CI, other)
  • Generates quick reference tables and common options documentation
+322/-0 
docs_gen.py
Parameter documentation generator                                               

toolchain/mfc/params/generators/docs_gen.py

  • New module generating markdown documentation for all MFC case
    parameters
  • Organizes parameters by family with descriptions, types, and
    constraints
  • Generates table of contents and detailed family sections with pattern
    grouping for large families
  • Includes command-line reference examples and footer with CLI search
    instructions
+234/-0 
Additional files (74 files)
docs.yml +1/-1     
lint-toolchain.yml +3/-0     
settings.json +21/-9   
CMakeLists.txt +1/-1     
README.md +46/-0   
case.py +3/-1     
case.py +3/-1     
case.py +3/-1     
case.py +3/-1     
authors.md +2/-0     
case.md +5/-1     
cli-reference.md +667/-0 
docker.md +2/-0     
expectedPerformance.md +2/-0     
getting-started.md +40/-0   
gpuParallelization.md +2/-0     
papers.md +2/-0     
parameters.md +1000/-0
readme.md +29/-18 
references.md +2/-0     
running.md +2/-0     
testing.md +2/-0     
troubleshooting.md +151/-0 
visualization.md +2/-0     
examples.sh +1/-1     
index.html +1/-1     
plot.py +3/-3     
case.py +1/-1     
case.py +3/-1     
case.py +2/-1     
case.py +2/-1     
case.py +2/-1     
case.py +2/-1     
case.py +2/-1     
case.py +6/-4     
casefile.py +8/-5     
case.py +2/-2     
case.py +9/-7     
case.py +2/-1     
case.py +2/-1     
case.py +6/-4     
case.py +2/-2     
case.py +6/-6     
case.py +6/-6     
analyze.py +2/-1     
case.py +7/-3     
mfc.sh +22/-5   
format.sh +53/-19 
format_python.sh +6/-1     
lint.sh +1/-1     
python.sh +235/-4 
_mfc +449/-0 
mfc.bash +191/-0 
main.py +60/-11 
mfc-case-schema.json +24427/-0
case.py +37/-7   
clean.py +22/-0   
__init__.py +37/-0   
common.py +26/-1   
gen_case_constraints_docs.py +1/-0     
generate.py +116/-0 
ide.py +136/-0 
__init__.py +10/-0   
__init__.py +5/-0     
case_dicts_gen.py +46/-0   
json_schema_gen.py +128/-0 
registry.py +52/-0   
schema.py +88/-0   
validate.py +148/-0 
__init__.py +8/-0     
input.py +20/-5   
run.py +29/-1   
validate.py +58/-0   
pyproject.toml +1/-1     

CodeAnt-AI Description

Centralize case parameters, add a single CLI schema, and show rich build progress/errors

What Changed

  • All ~3,300 MFC case parameters now have centralized descriptions and pattern-based generation so parameter lookup, search, and descriptions are available from a single registry.
  • The CLI is now defined from one schema: commands like build, run, test, clean, validate, params, completion, new, and generate are available with consistent options, examples, tab-completion metadata, and an interactive command set.
  • Building now shows an interactive progress display: streaming ninja/make lines in -v mode, a terminal progress bar with current file and elapsed time otherwise, and clearer panels containing CMake/compiler output on failure.
  • New user-visible commands and behaviors: validate (pre-flight case checks), clean (remove build artifacts), params (search/list/describe parameters), and init/new (create cases from templates).
  • Verbosity levels affect output: -v streams compile lines, -vv+ shows full command output and more debug info; failures include formatted, truncated stdout/stderr for easier debugging.

Impact

✅ Clearer build errors and diagnostics
✅ Easier discovery of case parameters
✅ Shorter onboarding for new cases and CLI usage

💡 Usage Guide

Checking Your Pull Request

Every time you make a pull request, our system automatically looks through it. We check for security issues, mistakes in how you're setting up your infrastructure, and common code problems. We do this to make sure your changes are solid and won't cause any trouble later.

Talking to CodeAnt AI

Got a question or need a hand with something in your pull request? You can easily get in touch with CodeAnt AI right here. Just type the following in a comment on your pull request, and replace "Your question here" with whatever you want to ask:

@codeant-ai ask: Your question here

This lets you have a chat with CodeAnt AI about your pull request, making it easier to understand and improve your code.

Example

@codeant-ai ask: Can you suggest a safer alternative to storing this secret?

Preserve Org Learnings with CodeAnt

You can record team preferences so CodeAnt AI applies them in future reviews. Reply directly to the specific CodeAnt AI suggestion (in the same thread) and replace "Your feedback here" with your input:

@codeant-ai: Your feedback here

This helps CodeAnt AI learn and adapt to your team's coding style and standards.

Example

@codeant-ai: Do not flag unused imports.

Retrigger review

Ask CodeAnt AI to review the PR again, by typing:

@codeant-ai: review

Check Your Repository Health

To analyze the health of your code repository, visit our dashboard at https://app.codeant.ai. This tool helps you identify potential issues and areas for improvement in your codebase, ensuring your repository maintains high standards of code health.

Summary by CodeRabbit

  • New Features

    • Expanded CLI (init, validate, params, generate, clean, completion, interactive), built‑in case templates, and full Bash/Zsh completions with install/uninstall/status.
  • Documentation

    • Added comprehensive CLI reference, parameters reference (~3,400 entries), troubleshooting, getting‑started, reorganized docs hub, and README quick‑start.
  • Improvements

    • Stronger parameter validation with suggestions, validator snapshots/tests, richer build/test/run progress and summaries, and VS Code schema integration.
  • Chores

    • CI/lint workflow updates and tooling refresh; widespread cosmetic formatting across examples and benchmarks.


sbryngelson and others added 30 commits January 27, 2026 20:34
- Add validate command for pre-flight case validation without building
- Add clean command implementation (was previously missing)
- Show detailed CMake/compiler errors on build failure with formatted panels
- Add build progress indicators (Configuring/Building/Installing status)
- Display human-readable test names prominently in test output
- Add real-time test failure feedback with helpful hints
- Enhanced case validator error messages with suggestions
- Add --debug-log flag for troubleshooting
- Create troubleshooting documentation

Co-Authored-By: Claude Opus 4.5 <[email protected]>
Test Summary:
- Rich panel with formatted pass/fail/skip counts
- Total test duration display
- Failed test details with UUIDs and error types
- Helpful "Next Steps" suggestions for failures

Case Template Generator (./mfc.sh init):
- Create new cases from built-in templates (1D/2D/3D_minimal)
- Copy from any example with --template example:<name>
- List available templates with --list
- Well-documented case files with clear parameter sections

Co-Authored-By: Claude Opus 4.5 <[email protected]>
…oarding

- Add ./mfc.sh completion command to install shell tab-completion
  - Installs to ~/.local/share/mfc/completions/ (works across MFC clones)
  - Automatically configures ~/.bashrc or ~/.zshrc
  - Supports install/uninstall/status subcommands

- Add enhanced help output with Rich formatting
- Add contextual tips after build/test failures
- Add interactive menu mode (./mfc.sh interactive)
- Add welcome message for first-time users with getting started guide
- Add bash and zsh completion scripts
- Update README with toolchain features documentation

Co-Authored-By: Claude Opus 4.5 <[email protected]>
- Remove unused Text import from rich.text
- Add pylint disable for too-many-locals in test()
- Add pylint disable for too-many-arguments in _print_test_summary()
- Rename failed_tests param to failed_test_list to avoid shadowing
- Prefix unused skipped_cases param with underscore
- Add pylint disable for global-statement in setup_debug_logging()

Lint score: 10.00/10

Co-Authored-By: Claude Opus 4.5 <[email protected]>
1. Enhanced subcommand help (./mfc.sh build --help):
   - Shows Rich-formatted header with command description
   - Displays practical examples with explanations
   - Lists key options in easy-to-read format
   - Still shows full argparse options below

2. Topic-based help (./mfc.sh help <topic>):
   - gpu: GPU configuration, OpenACC/OpenMP, troubleshooting
   - clusters: HPC cluster setup (Phoenix, Frontier, etc.)
   - batch: Batch job submission options and examples
   - debugging: Debug logging, validation, common issues

3. Command aliases for faster typing:
   - b = build
   - r = run
   - t = test
   - v = validate
   - c = clean

4. Consistent --help behavior:
   - ./mfc.sh --help now shows enhanced help (same as ./mfc.sh)
   - Aliases shown in command table

Updated bash/zsh completion scripts to support new commands.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
The 'load' command must be sourced to set environment variables in the
current shell. Running it directly (./mfc.sh load) now shows a clear
error message with the correct usage:

  source ./mfc.sh load -c p -m g

Co-Authored-By: Claude Opus 4.5 <[email protected]>
- Add Unicode progress bar during first-time pip package installation
  Shows "│████████░░░░░░░░░░░░│ 53% (37 pkgs, 1m23s)" in terminals
  Falls back to milestone updates when output is piped

- Defer list_cases() call in args.py until test command is actually used
  Previously generated all ~200 test cases on every startup

- Make pyrometheus and cantera imports lazy in run/input.py
  These heavy chemistry packages are now only imported when needed

These changes fix the apparent "hang" on first run where pip install
showed no progress, and significantly speed up startup time for
non-test commands.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
- Progress bar now tracks three phases: resolving (0-60%),
  building wheels (60-80%), and installing (80-100%)
- Add "Starting..." message in mfc.sh for immediate user feedback
- Non-TTY mode now shows phase transitions (building, installing)
- Better status messages showing current operation

Co-Authored-By: Claude Opus 4.5 <[email protected]>
Display the currently processing package (collecting, downloading, or
building) on a second line below the progress bar. The display updates
in real-time as pip processes each package.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
- Use lazy imports in main.py to defer loading heavy modules
- Remove heavy imports from args.py (run.run, build, test.cases)
- Hardcode target names and template names to avoid import chain
- Import test.cases only when running test command

This reduces startup time from ~1.4s to ~1.0s by avoiding loading
mako, pygments, and other heavy dependencies until they're needed.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
uv is a blazing fast Python package installer (10-100x faster than pip).
When uv is detected, use it for package installation (~7 seconds vs
several minutes with pip). Falls back to pip with progress bar when
uv is not available.

Users can install uv with: pip install uv

Co-Authored-By: Claude Opus 4.5 <[email protected]>
Instead of requiring users to manually install uv, automatically
bootstrap it in the venv using pip. This gives all users the speed
benefit of uv (~23 seconds for fresh install vs several minutes
with pip alone).

The bootstrap process:
1. Create venv
2. Install uv via pip (~2-3 seconds)
3. Use uv to install all ~70 packages (~7 seconds)

Falls back to pip with progress bar if uv installation fails.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
The exact number of packages varies, so just say "Installing Python
packages" without specifying a count.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
Instead of hiding uv's output, show its native progress display when
running in an interactive terminal. This lets users see what's happening
during installation instead of appearing to hang.

For non-interactive (CI) environments, output is still captured for logs.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
On HPC systems where cache and venv are on different filesystems,
uv's hardlink attempts fail and fall back to copying. Setting
UV_LINK_MODE=copy skips the failed hardlink attempts, reducing
overhead and eliminating the warning message.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
- Remove early exit for 'init' command in mfc.sh that prevented help
  from showing
- Fix _handle_enhanced_help to properly show both enhanced help and
  argparse help for all subcommands
- Add subparser map to print argparse help for specific commands

Now ./mfc.sh init --help (and other subcommands) properly show help.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
Make build imports lazy in case.py and run/input.py to break circular
import chains that were exposed by the lazy import optimizations in
main.py and args.py.

- case.py: lazy import build in get_inp() and get_fpp()
- run/input.py: lazy import build in generate_inp(), validate_constraints(),
  and clean()

Co-Authored-By: Claude Opus 4.5 <[email protected]>
- Pass --help through to Python for bash-handled commands (clean, lint,
  format, load, spelling) so enhanced help is shown
- Add missing commands to COMMANDS dict in user_guide.py:
  bench, completion, lint, format, spelling

Co-Authored-By: Claude Opus 4.5 <[email protected]>
Use argparse's built-in aliases= parameter instead of separate parsers
for command aliases (b, r, t, v, c). This ensures aliases inherit all
arguments from their parent commands.

Previously, ./mfc.sh t would fail because the 't' parser didn't have
test-specific arguments like --rdma-mpi.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
- Parse ninja's [X/Y] output to show compile progress
- Display progress bar with: file count, percentage, current file, elapsed time
- Falls back to spinner with elapsed time for non-TTY environments
- Shows "(build took Xs)" for builds longer than 5 seconds

In interactive terminals, users now see:
  Building simulation [████████░░░░] 42/156 • 0:02:34 m_rhs.fpp.f90

In non-interactive mode:
  Building simulation...
  (build took 234.5s)
  ✓ Built simulation

Co-Authored-By: Claude Opus 4.5 <[email protected]>
The 'init' command was originally used to bootstrap the Python environment
without running any command. This restores that behavior and renames the
case template creation to 'new':

- ./mfc.sh init     -> Just bootstrap venv/environment, then exit
- ./mfc.sh new      -> Create a new case from a template

Updated files:
- mfc.sh: Restore original init behavior (exit after python.sh)
- args.py: Rename init parser to new
- main.py: Handle 'new' command instead of 'init'
- user_guide.py: Update all references from init to new
- init.py: Update usage messages to use 'new'

Co-Authored-By: Claude Opus 4.5 <[email protected]>
When --verbose is used, bypass the progress bar and show raw cmake/compiler
output. This applies to both configure and build steps.

Previously, verbose mode would still capture all output in the progress bar,
making the --verbose flag ineffective.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
- Replace black with autopep8 due to Python 3.12.5 memory safety issues
- Fix bash completion to allow file path navigation after commands
- Fix format.sh to accept custom path arguments
- Fix pylint errors in build.py and main.py
- Reformat all Python files with autopep8 for CI consistency

Co-Authored-By: Claude Opus 4.5 <[email protected]>
Implement a declarative CLI schema that eliminates the need to manually
update 8+ files when adding commands or options:

- Add cli/schema.py with dataclass definitions for CLI structure
- Add cli/commands.py as SINGLE SOURCE OF TRUTH for all commands
- Add cli/argparse_gen.py to generate argparse parsers from schema
- Add cli/completion_gen.py to generate bash/zsh completions
- Add generate.py and ./mfc.sh generate command

Modified files to use the schema:
- args.py: Now uses generated parser from CLI schema
- user_guide.py: Imports COMMANDS from cli/commands.py
- Completion scripts are now auto-generated, not hand-edited

Adding a new command now requires editing only cli/commands.py,
then running ./mfc.sh generate to update completion scripts.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
Extract helper functions to reduce complexity in completion_gen.py and
argparse_gen.py. Add pylint disable for dataclasses that legitimately
need many attributes. Code now scores 10.00/10 on pylint.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
- Add cli/docs_gen.py to generate CLI reference markdown from schema
- Update generate.py to also produce docs/cli-reference.md
- Add generate --check step to lint-toolchain CI workflow
- Generated documentation includes quick reference table, all commands
  with options, examples, and common options section

Co-Authored-By: Claude Opus 4.5 <[email protected]>
This adds toolchain/mfc/params_tests/ with tools to capture and verify
validation behavior before refactoring case_validator.py:

- inventory.py: Export all 3300 parameters with types and stages
- snapshot.py: Capture validation results from 134 example cases
- coverage.py: Analyze 306 constraints across 56 check methods
- runner.py: CLI for build/verify/summary commands

Run `python -m mfc.params_tests.runner build` to capture baseline,
then `python -m mfc.params_tests.runner verify` after changes to
detect any unintended changes to validation behavior.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
Extends the safety net with two new testing approaches:

1. negative_tests.py - Hand-crafted test cases that intentionally
   violate constraints (20 tests, 100% trigger rate)

2. mutation_tests.py - Automatically mutates valid example cases
   to test validator coverage (830 mutations, 82.8% catch rate)

Mutation testing found real validator gaps:
- t_step_save: No validation for 0 or negative values
- mpp_lim/cyl_coord: Invalid strings like "X" not caught
- x_domain%beg/end: Missing value not validated

Run with:
  python -m mfc.params_tests.runner negative
  python -m mfc.params_tests.runner mutation

Co-Authored-By: Claude Opus 4.5 <[email protected]>
Mutation testing found several gaps where the validator allowed
invalid parameter values. This commit fixes them:

1. Add check_parameter_types() - validates Fortran logicals are 'T'/'F'
   - mpp_lim, cyl_coord, bubbles_euler, etc. now reject 'X', 'yes', etc.

2. Add x_domain validation - x_domain%beg/end required when m > 0

3. Fix t_step_save - now validates t_step_save > 0

4. Fix dt validation - now validates dt > 0 in all modes where dt is set

5. Improve time stepping validation - properly handles cfl_dt, cfl_adap_dt,
   and adap_dt modes

Mutation test results improved from 82.8% to 99.9% catch rate.
All 134 example cases still pass validation.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
Implement a central parameter registry that serves as the single source
of truth for MFC's ~3,300 case parameters. This eliminates manual
synchronization between case_dicts.py, case_validator.py, and documentation.

Key changes:
- Create mfc/params/ package with schema definitions and registry
- Refactor case_dicts.py from ~680 lines to ~115 lines (uses registry)
- Add parameter definitions organized by domain (core, weno, patches, etc.)
- Add code generators for case_dicts, validator, and documentation
- Add physics validation for patches, acoustics, and bubbles

The registry provides:
- ParamDef dataclass for parameter metadata (name, type, stages, description)
- ConstraintRule dataclass for validation constraints (26 defined)
- Generators that produce case_dicts-compatible output
- Documentation generator (~194K chars of markdown)
- Comparison tool to verify registry matches original

All tests pass:
- Safety net: 134/134 cases unchanged
- Mutation tests: 99.9% catch rate
- JSON schema validation: working

Co-Authored-By: Claude Opus 4.5 <[email protected]>
The strings had no placeholders, making the f-prefix unnecessary.
Fixes lint warning F541.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
@sbryngelson (Member Author) commented:
/improve

Comment on lines 19 to 40
@@ -15,545 +32,56 @@ class ParamType(Enum):
    _ANALYTIC_REAL = {"type": ["number", "string"]}

    def analytic(self):
        """Return the analytic version of this type (allows string expressions)."""
        if self == self.INT:
            return self._ANALYTIC_INT
        if self == self.REAL:
            return self._ANALYTIC_REAL
        return self.STR

Suggestion: Modify the analytic() method in ParamType to return enum members instead of raw dictionaries. This ensures type consistency and prevents crashes during JSON schema generation. [possible issue, importance: 8]

Suggested change
class ParamType(Enum):
    """
    Parameter types for JSON schema validation.

    Each type maps to a JSON schema fragment used by fastjsonschema
    to validate case file parameters.
    """
    INT = {"type": "integer"}
    REAL = {"type": "number"}
    LOG = {"enum": ["T", "F"]}
    STR = {"type": "string"}
    ANALYTIC_INT = {"type": ["integer", "string"]}
    ANALYTIC_REAL = {"type": ["number", "string"]}

    def analytic(self):
        """Return the analytic version of this type (allows string expressions)."""
        if self == self.INT:
            return self.ANALYTIC_INT
        if self == self.REAL:
            return self.ANALYTIC_REAL
        return self.STR

# PRE_PROCESS.append(f"spec_pp({f_id})")

# Load parameter definitions from registry
COMMON, PRE_PROCESS, SIMULATION, POST_PROCESS = _load_from_registry()

Suggestion: Defer loading the parameter registry until it is first needed, rather than at module import time. This avoids potential circular import errors and improves startup performance. [possible issue, importance: 7]

Suggested change
COMMON, PRE_PROCESS, SIMULATION, POST_PROCESS = _load_from_registry()
# Load parameter definitions lazily to avoid import-time circular imports
COMMON = PRE_PROCESS = SIMULATION = POST_PROCESS = None

def _ensure_loaded():
    global COMMON, PRE_PROCESS, SIMULATION, POST_PROCESS
    if COMMON is None:
        COMMON, PRE_PROCESS, SIMULATION, POST_PROCESS = _load_from_registry()

Comment on lines 432 to 435
for pattern, template in PATTERNS:
    match = re.match(pattern, param_name)
    if match:
        return template.format(*match.groups())
@qodo-code-review qodo-code-review bot Jan 31, 2026

Suggestion: Use re.fullmatch() instead of re.match() to ensure that regular expression patterns match the entire parameter name. This prevents incorrect descriptions from partial prefix matches. [general, importance: 7]

Suggested change
for pattern, template in PATTERNS:
    match = re.match(pattern, param_name)
    if match:
        return template.format(*match.groups())
for pattern, template in PATTERNS:
    match = re.fullmatch(pattern, param_name)
    if match:
        return template.format(*match.groups())

Comment on lines +54 to +81
"""Validate parameter types before other checks.

This catches invalid values early with clear error messages,
rather than letting them cause confusing failures later.
"""
# Logical (boolean) parameters that must be 'T' or 'F'
logical_params = [
'mpp_lim', 'cyl_coord', 'bubbles_euler', 'adv_n', 'adap_dt',
'polytropic', 'polydisperse', 'acoustic_source', 'hypoelasticity',
'hyperelasticity', 'cont_damage', 'mapped_weno', 'mp_weno',
'teno', 'weno_avg', 'weno_Re_flux', 'null_weights', 'alt_soundspeed',
'mixture_err', 'parallel_io', 'prim_vars_wrt', 'rho_wrt', 'E_wrt',
'mom_wrt(1)', 'mom_wrt(2)', 'mom_wrt(3)', 'vel_wrt(1)', 'vel_wrt(2)',
'vel_wrt(3)', 'flux_wrt(1)', 'flux_wrt(2)', 'flux_wrt(3)',
'c_wrt', 'omega_wrt(1)', 'omega_wrt(2)', 'omega_wrt(3)', 'qm_wrt',
'schlieren_wrt', 'probe_wrt', 'integral_wrt', 'viscous', 'shear_stress',
'bulk_stress', 'bubbles_lagrange', 'relativity', 'cu_tensor', 'igr',
]
for param in logical_params:
self._validate_logical(param)

# Required domain parameters when m > 0
m = self.get('m')
if m is not None and m > 0:
self.prohibit(not self.is_set('x_domain%beg'),
"x_domain%beg must be set when m > 0")
self.prohibit(not self.is_set('x_domain%end'),
"x_domain%end must be set when m > 0")

Suggestion: Add a numeric check for the m parameter before its comparison to prevent a potential TypeError during validation. [possible issue, importance: 7]

Suggested change
"""Validate parameter types before other checks.
This catches invalid values early with clear error messages,
rather than letting them cause confusing failures later.
"""
# Logical (boolean) parameters that must be 'T' or 'F'
logical_params = [
'mpp_lim', 'cyl_coord', 'bubbles_euler', 'adv_n', 'adap_dt',
'polytropic', 'polydisperse', 'acoustic_source', 'hypoelasticity',
'hyperelasticity', 'cont_damage', 'mapped_weno', 'mp_weno',
'teno', 'weno_avg', 'weno_Re_flux', 'null_weights', 'alt_soundspeed',
'mixture_err', 'parallel_io', 'prim_vars_wrt', 'rho_wrt', 'E_wrt',
'mom_wrt(1)', 'mom_wrt(2)', 'mom_wrt(3)', 'vel_wrt(1)', 'vel_wrt(2)',
'vel_wrt(3)', 'flux_wrt(1)', 'flux_wrt(2)', 'flux_wrt(3)',
'c_wrt', 'omega_wrt(1)', 'omega_wrt(2)', 'omega_wrt(3)', 'qm_wrt',
'schlieren_wrt', 'probe_wrt', 'integral_wrt', 'viscous', 'shear_stress',
'bulk_stress', 'bubbles_lagrange', 'relativity', 'cu_tensor', 'igr',
]
for param in logical_params:
self._validate_logical(param)
# Required domain parameters when m > 0
m = self.get('m')
if m is not None and m > 0:
self.prohibit(not self.is_set('x_domain%beg'),
"x_domain%beg must be set when m > 0")
self.prohibit(not self.is_set('x_domain%end'),
"x_domain%end must be set when m > 0")
def check_parameter_types(self):
"""Validate parameter types before other checks.
This catches invalid values early with clear error messages,
rather than letting them cause confusing failures later.
"""
# Logical (boolean) parameters that must be 'T' or 'F'
logical_params = [
'mpp_lim', 'cyl_coord', 'bubbles_euler', 'adv_n', 'adap_dt',
'polytropic', 'polydisperse', 'acoustic_source', 'hypoelasticity',
'hyperelasticity', 'cont_damage', 'mapped_weno', 'mp_weno',
'teno', 'weno_avg', 'weno_Re_flux', 'null_weights', 'alt_soundspeed',
'mixture_err', 'parallel_io', 'prim_vars_wrt', 'rho_wrt', 'E_wrt',
'mom_wrt(1)', 'mom_wrt(2)', 'mom_wrt(3)', 'vel_wrt(1)', 'vel_wrt(2)',
'vel_wrt(3)', 'flux_wrt(1)', 'flux_wrt(2)', 'flux_wrt(3)',
'c_wrt', 'omega_wrt(1)', 'omega_wrt(2)', 'omega_wrt(3)', 'qm_wrt',
'schlieren_wrt', 'probe_wrt', 'integral_wrt', 'viscous', 'shear_stress',
'bulk_stress', 'bubbles_lagrange', 'relativity', 'cu_tensor', 'igr',
]
for param in logical_params:
self._validate_logical(param)
# Required domain parameters when m > 0
m = self.get('m')
if m is not None and self._is_numeric(m) and m > 0:
self.prohibit(not self.is_set('x_domain%beg'),
"x_domain%beg must be set when m > 0")
self.prohibit(not self.is_set('x_domain%end'),
"x_domain%end must be set when m > 0")

Comment on lines +510 to +514
cmd = f"./mfc.sh new {name} -t {template}"
cons.print()
cons.print(f"[dim]Running: {cmd}[/dim]")
cons.print()
os.system(cmd)

Suggestion: Replace os.system with subprocess.run and a list of arguments to prevent a shell injection vulnerability when creating a new case from user input. [security, importance: 9]

Suggested change
cmd = f"./mfc.sh new {name} -t {template}"
cons.print()
cons.print(f"[dim]Running: {cmd}[/dim]")
cons.print()
os.system(cmd)
cmd = ["./mfc.sh", "new", name, "-t", template]
cons.print()
cons.print(f"[dim]Running: {' '.join(cmd)}[/dim]")
cons.print()
import subprocess
subprocess.run(cmd, check=False)

Comment on lines +522 to +528
path = Prompt.ask("Path to case.py")

cmd = f"./mfc.sh validate {path}"
cons.print()
cons.print(f"[dim]Running: {cmd}[/dim]")
cons.print()
os.system(cmd)

Suggestion: Replace os.system with subprocess.run and a list of arguments to prevent a shell injection vulnerability when validating a case from user input. [security, importance: 9]

Suggested change
path = Prompt.ask("Path to case.py")
cmd = f"./mfc.sh validate {path}"
cons.print()
cons.print(f"[dim]Running: {cmd}[/dim]")
cons.print()
os.system(cmd)
path = Prompt.ask("Path to case.py")
cmd = ["./mfc.sh", "validate", path]
cons.print()
cons.print(f"[dim]Running: {' '.join(cmd)}[/dim]")
cons.print()
import subprocess
subprocess.run(cmd, check=False)

Comment on lines +58 to +107
if streaming:
# Streaming mode (-v): print build progress lines as they happen
cons.print(f" [bold blue]Building[/bold blue] [magenta]{target_name}[/magenta] [dim](-v)[/dim]...")
start_time = time.time()

# Read stdout and print matching lines
for line in iter(process.stdout.readline, ''):
all_stdout.append(line)
stripped = line.strip()

# Try ninja format first: [42/156] Action
ninja_match = _NINJA_PROGRESS_RE.match(stripped)
if ninja_match:
completed = ninja_match.group(1)
total = ninja_match.group(2)
action = ninja_match.group(3)
# Extract filename from action
parts = action.split()
if len(parts) >= 3:
filename = os.path.basename(parts[-1]).replace('.o', '').replace('.obj', '')
if len(filename) > 40:
filename = filename[:37] + "..."
cons.print(f" [dim][{completed}/{total}][/dim] {filename}")
continue

# Try make format: [ 16%] Action
make_match = _MAKE_PROGRESS_RE.match(stripped)
if make_match:
percent = make_match.group(1)
action = make_match.group(2)
# Extract filename from action
parts = action.split()
if len(parts) >= 3:
# Get the last part which is usually the file path
obj_path = parts[-1]
filename = os.path.basename(obj_path).replace('.o', '').replace('.obj', '')
if len(filename) > 40:
filename = filename[:37] + "..."
cons.print(f" [dim][{percent:>3}%][/dim] {filename}")

# Read any remaining stderr
stderr = process.stderr.read()
all_stderr.append(stderr)
process.wait()

elapsed = time.time() - start_time
if elapsed > 5:
cons.print(f" [dim](build took {elapsed:.1f}s)[/dim]")

return subprocess.CompletedProcess(cmd, process.returncode, ''.join(all_stdout), ''.join(all_stderr))

Suggestion: To prevent a potential deadlock in streaming mode, read stderr concurrently with stdout using a thread and queue, instead of reading stderr only after stdout is exhausted. [possible issue, importance: 9]

Suggested change
if streaming:
# Streaming mode (-v): print build progress lines as they happen
cons.print(f" [bold blue]Building[/bold blue] [magenta]{target_name}[/magenta] [dim](-v)[/dim]...")
start_time = time.time()
# Read stdout and print matching lines
for line in iter(process.stdout.readline, ''):
all_stdout.append(line)
stripped = line.strip()
# Try ninja format first: [42/156] Action
ninja_match = _NINJA_PROGRESS_RE.match(stripped)
if ninja_match:
completed = ninja_match.group(1)
total = ninja_match.group(2)
action = ninja_match.group(3)
# Extract filename from action
parts = action.split()
if len(parts) >= 3:
filename = os.path.basename(parts[-1]).replace('.o', '').replace('.obj', '')
if len(filename) > 40:
filename = filename[:37] + "..."
cons.print(f" [dim][{completed}/{total}][/dim] {filename}")
continue
# Try make format: [ 16%] Action
make_match = _MAKE_PROGRESS_RE.match(stripped)
if make_match:
percent = make_match.group(1)
action = make_match.group(2)
# Extract filename from action
parts = action.split()
if len(parts) >= 3:
# Get the last part which is usually the file path
obj_path = parts[-1]
filename = os.path.basename(obj_path).replace('.o', '').replace('.obj', '')
if len(filename) > 40:
filename = filename[:37] + "..."
cons.print(f" [dim][{percent:>3}%][/dim] {filename}")
# Read any remaining stderr
stderr = process.stderr.read()
all_stderr.append(stderr)
process.wait()
elapsed = time.time() - start_time
if elapsed > 5:
cons.print(f" [dim](build took {elapsed:.1f}s)[/dim]")
return subprocess.CompletedProcess(cmd, process.returncode, ''.join(all_stdout), ''.join(all_stderr))
if streaming:
# Streaming mode (-v): print build progress lines as they happen
cons.print(f" [bold blue]Building[/bold blue] [magenta]{target_name}[/magenta] [dim](-v)[/dim]...")
start_time = time.time()
stderr_queue = queue.Queue()
def read_stderr():
for line in iter(process.stderr.readline, ''):
stderr_queue.put(line)
stderr_queue.put(None) # Signal EOF
stderr_thread = threading.Thread(target=read_stderr, daemon=True)
stderr_thread.start()
# Read stdout and print matching lines
for line in iter(process.stdout.readline, ''):
all_stdout.append(line)
stripped = line.strip()
# Drain any pending stderr to avoid pipe buffer deadlocks
while True:
try:
err_line = stderr_queue.get_nowait()
except queue.Empty:
break
if err_line is None:
break
all_stderr.append(err_line)
# Try ninja format first: [42/156] Action
ninja_match = _NINJA_PROGRESS_RE.match(stripped)
if ninja_match:
completed = ninja_match.group(1)
total = ninja_match.group(2)
action = ninja_match.group(3)
# Extract filename from action
parts = action.split()
if len(parts) >= 3:
filename = os.path.basename(parts[-1]).replace('.o', '').replace('.obj', '')
if len(filename) > 40:
filename = filename[:37] + "..."
cons.print(f" [dim][{completed}/{total}][/dim] {filename}")
continue
# Try make format: [ 16%] Action
make_match = _MAKE_PROGRESS_RE.match(stripped)
if make_match:
percent = make_match.group(1)
action = make_match.group(2)
# Extract filename from action
parts = action.split()
if len(parts) >= 3:
# Get the last part which is usually the file path
obj_path = parts[-1]
filename = os.path.basename(obj_path).replace('.o', '').replace('.obj', '')
if len(filename) > 40:
filename = filename[:37] + "..."
cons.print(f" [dim][{percent:>3}%][/dim] {filename}")
process.wait()
# Drain remaining stderr and ensure thread exits
stderr_thread.join(timeout=1)
while True:
try:
err_line = stderr_queue.get_nowait()
except queue.Empty:
break
if err_line is None:
break
all_stderr.append(err_line)
elapsed = time.time() - start_time
if elapsed > 5:
cons.print(f" [dim](build took {elapsed:.1f}s)[/dim]")
return subprocess.CompletedProcess(cmd, process.returncode, ''.join(all_stdout), ''.join(all_stderr))

Comment on lines +66 to +70
def _bash_completion_for_type(comp_type: CompletionType, choices: List[str] = None) -> str:
    """Generate bash completion expression for a completion type."""
    if comp_type == CompletionType.CHOICES and choices:
        return f'COMPREPLY=( $(compgen -W "{" ".join(choices)}" -- "${{cur}}") )'
    return _BASH_COMPLETION_MAP.get(comp_type, "")

Suggestion: Escape shell metacharacters in completion choices before embedding them into the compgen command to prevent syntax errors and potential shell injection in the generated bash script. [security, importance: 8]

Suggested change
def _bash_completion_for_type(comp_type: CompletionType, choices: List[str] = None) -> str:
    """Generate bash completion expression for a completion type."""
    if comp_type == CompletionType.CHOICES and choices:
        return f'COMPREPLY=( $(compgen -W "{" ".join(choices)}" -- "${{cur}}") )'
    return _BASH_COMPLETION_MAP.get(comp_type, "")
def _bash_completion_for_type(comp_type: CompletionType, choices: List[str] = None) -> str:
    """Generate bash completion expression for a completion type."""
    if comp_type == CompletionType.CHOICES and choices:
        def _escape_bash_word(word: str) -> str:
            # Minimal safe escaping for inclusion inside a double-quoted compgen -W "..."
            return (
                str(word)
                .replace("\\", "\\\\")
                .replace('"', '\\"')
                .replace("$", "\\$")
                .replace("`", "\\`")
            )
        escaped = " ".join(_escape_bash_word(c) for c in choices)
        return f'COMPREPLY=( $(compgen -W "{escaped}" -- "${{cur}}") )'
    return _BASH_COMPLETION_MAP.get(comp_type, "")

Comment on lines +120 to +132
def load_example_case(case_path: Path) -> Dict[str, Any]:
    """Load parameters from an example case file."""
    result = subprocess.run(
        ["python3", str(case_path)],
        capture_output=True,
        text=True,
        cwd=case_path.parent,
        timeout=30,
        check=False
    )
    if result.returncode != 0:
        return None
    return json.loads(result.stdout.strip())

Suggestion: To make test execution more robust, defensively parse the output of case.py scripts by extracting the last line that appears to be a JSON object, ignoring other printed output. [possible issue, importance: 7]

Suggested change
def load_example_case(case_path: Path) -> Dict[str, Any]:
"""Load parameters from an example case file."""
result = subprocess.run(
["python3", str(case_path)],
capture_output=True,
text=True,
cwd=case_path.parent,
timeout=30,
check=False
)
if result.returncode != 0:
return None
return json.loads(result.stdout.strip())
def load_example_case(case_path: Path) -> Dict[str, Any]:
"""Load parameters from an example case file."""
result = subprocess.run(
["python3", str(case_path)],
capture_output=True,
text=True,
cwd=case_path.parent,
timeout=30,
check=False
)
if result.returncode != 0:
return None
stdout = (result.stdout or "").strip()
if not stdout:
return None
# Many example cases may print extra lines; prefer parsing the last JSON-looking line.
lines = [ln.strip() for ln in stdout.splitlines() if ln.strip()]
for candidate in reversed(lines):
if candidate.startswith("{") and candidate.endswith("}"):
try:
return json.loads(candidate)
except json.JSONDecodeError:
return None
return None

re.match only checks if pattern matches at start of string, which
could cause incorrect matches if patterns aren't perfectly ordered.
re.fullmatch ensures the entire parameter name matches the pattern.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
@sbryngelson (Member Author) commented:
/improve

CASE_OPTIMIZATION = _get_case_optimization_params()

# Build JSON schema for validation
_properties = {k: v.value for k, v in ALL.items()}

Suggestion: Refactor the JSON schema generation to be more robust by using getattr(v, "value", v) and adding a type check to handle both ParamType enums and pre-formatted dict schema fragments. [possible issue, importance: 7]

Suggested change
_properties = {k: v.value for k, v in ALL.items()}
# Build JSON schema for validation
_properties = {}
for k, v in ALL.items():
    schema_fragment = getattr(v, "value", v)
    if not isinstance(schema_fragment, dict):
        raise TypeError(f"Invalid schema fragment for {k}: {type(schema_fragment).__name__}")
    _properties[k] = schema_fragment

Comment on lines 176 to 188
for n in ["dt", "cfl_target", "cfl_max", "t_tol", "weno_eps", "teno_CT", "wenoz_q",
"adap_dt_tol", "t_stop", "t_save", "tau_star", "cont_damage_s", "alpha_bar",
"alf_factor", "ic_eps", "ic_beta"]:
_r(n, REAL, {S})
for n in ["mapped_weno", "wenoz", "teno", "weno_Re_flux", "weno_avg", "mp_weno",
"run_time_info", "prim_vars_wrt", "alt_soundspeed", "null_weights",
"mixture_err", "probe_wrt", "acoustic_source", "integral_wrt", "rdma_mpi",
"adap_dt", "powell", "igr_pres_lim", "int_comp", "nv_uvm_out_of_core",
"nv_uvm_pref_gpu"]:
_r(n, LOG, {S})
# cfl_target also POST_PROCESS
_r("cfl_target", REAL, {O})


Suggestion: Consolidate the duplicate registration of the cfl_target parameter into a single call with combined stages {S, O} to avoid potential runtime errors and improve clarity. [possible issue, importance: 6]

Suggested change
for n in ["dt", "cfl_target", "cfl_max", "t_tol", "weno_eps", "teno_CT", "wenoz_q",
"adap_dt_tol", "t_stop", "t_save", "tau_star", "cont_damage_s", "alpha_bar",
"alf_factor", "ic_eps", "ic_beta"]:
_r(n, REAL, {S})
for n in ["mapped_weno", "wenoz", "teno", "weno_Re_flux", "weno_avg", "mp_weno",
"run_time_info", "prim_vars_wrt", "alt_soundspeed", "null_weights",
"mixture_err", "probe_wrt", "acoustic_source", "integral_wrt", "rdma_mpi",
"adap_dt", "powell", "igr_pres_lim", "int_comp", "nv_uvm_out_of_core",
"nv_uvm_pref_gpu"]:
_r(n, LOG, {S})
# cfl_target also POST_PROCESS
_r("cfl_target", REAL, {O})
for n in ["dt", "cfl_max", "t_tol", "weno_eps", "teno_CT", "wenoz_q",
"adap_dt_tol", "t_stop", "t_save", "tau_star", "cont_damage_s", "alpha_bar",
"alf_factor", "ic_eps", "ic_beta"]:
_r(n, REAL, {S})
# cfl_target is used in both SIMULATION and POST_PROCESS
_r("cfl_target", REAL, {S, O})

Comment on lines +510 to +514
cmd = f"./mfc.sh new {name} -t {template}"
cons.print()
cons.print(f"[dim]Running: {cmd}[/dim]")
cons.print()
os.system(cmd)

Suggestion: Replace os.system with subprocess.run and pass command arguments as a list to prevent shell injection vulnerabilities from user-provided input in interactive mode. [security, importance: 9]

Suggested change
cmd = f"./mfc.sh new {name} -t {template}"
cons.print()
cons.print(f"[dim]Running: {cmd}[/dim]")
cons.print()
os.system(cmd)
import subprocess
...
cmd = ["./mfc.sh", "new", name, "-t", template]
cons.print()
cons.print(f"[dim]Running: {' '.join(cmd)}[/dim]")
cons.print()
subprocess.run(cmd, check=False)

Comment on lines +76 to +81
m = self.get('m')
if m is not None and m > 0:
self.prohibit(not self.is_set('x_domain%beg'),
"x_domain%beg must be set when m > 0")
self.prohibit(not self.is_set('x_domain%end'),
"x_domain%end must be set when m > 0")

Suggestion: Add a check to ensure the parameter m is numeric before performing a comparison (m > 0) to prevent a TypeError and an unexpected crash during validation. [possible issue, importance: 7]

Suggested change
m = self.get('m')
if m is not None and m > 0:
    self.prohibit(not self.is_set('x_domain%beg'),
                  "x_domain%beg must be set when m > 0")
    self.prohibit(not self.is_set('x_domain%end'),
                  "x_domain%end must be set when m > 0")
m = self.get('m')
if m is not None and self._is_numeric(m) and m > 0:
    self.prohibit(not self.is_set('x_domain%beg'),
                  "x_domain%beg must be set when m > 0")
    self.prohibit(not self.is_set('x_domain%end'),
                  "x_domain%end must be set when m > 0")

Comment on lines 46 to 78
def extract_constraints_from_validator() -> List[ConstraintInfo]:
"""Parse case_validator.py and extract all prohibit() calls."""
validator_path = Path(__file__).parent.parent / "case_validator.py"

with open(validator_path, 'r', encoding='utf-8') as f:
source = f.read()

constraints = []
current_method = None
tree = ast.parse(source)

for node in ast.walk(tree):
if isinstance(node, ast.FunctionDef):
current_method = node.name
continue

if not _is_prohibit_call(node):
continue

message = _extract_message(node.args[1])
try:
condition_code = ast.unparse(node.args[0])
except Exception:
condition_code = "<complex>"

constraints.append(ConstraintInfo(
method=current_method or "<unknown>",
line_number=node.lineno,
message=message,
condition_code=condition_code
))

return constraints
@qodo-code-review bot commented Jan 31, 2026

Suggestion: Fix incorrect method attribution in extract_constraints_from_validator by changing the ast traversal logic. Instead of a single ast.walk(), iterate through each function definition within the CaseValidator class to correctly associate constraints with their parent method. [possible issue, importance: 9]

Suggested change
def extract_constraints_from_validator() -> List[ConstraintInfo]:
    """Parse case_validator.py and extract all prohibit() calls."""
    validator_path = Path(__file__).parent.parent / "case_validator.py"
    with open(validator_path, 'r', encoding='utf-8') as f:
        source = f.read()
    constraints = []
    current_method = None
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            current_method = node.name
            continue
        if not _is_prohibit_call(node):
            continue
        message = _extract_message(node.args[1])
        try:
            condition_code = ast.unparse(node.args[0])
        except Exception:
            condition_code = "<complex>"
        constraints.append(ConstraintInfo(
            method=current_method or "<unknown>",
            line_number=node.lineno,
            message=message,
            condition_code=condition_code
        ))
    return constraints
def extract_constraints_from_validator() -> List[ConstraintInfo]:
    """Parse case_validator.py and extract all prohibit() calls."""
    validator_path = Path(__file__).parent.parent / "case_validator.py"
    with open(validator_path, 'r', encoding='utf-8') as f:
        source = f.read()
    tree = ast.parse(source)
    validator_class = _find_case_validator_class(tree)
    if validator_class is None:
        return []
    constraints: List[ConstraintInfo] = []
    for item in validator_class.body:
        if not isinstance(item, ast.FunctionDef):
            continue
        for node in ast.walk(item):
            if not _is_prohibit_call(node):
                continue
            message = _extract_message(node.args[1])
            try:
                condition_code = ast.unparse(node.args[0])
            except Exception:
                condition_code = "<complex>"
            constraints.append(ConstraintInfo(
                method=item.name,
                line_number=node.lineno,
                message=message,
                condition_code=condition_code
            ))
    return constraints

Comment on lines +58 to +107
if streaming:
# Streaming mode (-v): print build progress lines as they happen
cons.print(f" [bold blue]Building[/bold blue] [magenta]{target_name}[/magenta] [dim](-v)[/dim]...")
start_time = time.time()

# Read stdout and print matching lines
for line in iter(process.stdout.readline, ''):
all_stdout.append(line)
stripped = line.strip()

# Try ninja format first: [42/156] Action
ninja_match = _NINJA_PROGRESS_RE.match(stripped)
if ninja_match:
completed = ninja_match.group(1)
total = ninja_match.group(2)
action = ninja_match.group(3)
# Extract filename from action
parts = action.split()
if len(parts) >= 3:
filename = os.path.basename(parts[-1]).replace('.o', '').replace('.obj', '')
if len(filename) > 40:
filename = filename[:37] + "..."
cons.print(f" [dim][{completed}/{total}][/dim] {filename}")
continue

# Try make format: [ 16%] Action
make_match = _MAKE_PROGRESS_RE.match(stripped)
if make_match:
percent = make_match.group(1)
action = make_match.group(2)
# Extract filename from action
parts = action.split()
if len(parts) >= 3:
# Get the last part which is usually the file path
obj_path = parts[-1]
filename = os.path.basename(obj_path).replace('.o', '').replace('.obj', '')
if len(filename) > 40:
filename = filename[:37] + "..."
cons.print(f" [dim][{percent:>3}%][/dim] {filename}")

# Read any remaining stderr
stderr = process.stderr.read()
all_stderr.append(stderr)
process.wait()

elapsed = time.time() - start_time
if elapsed > 5:
cons.print(f" [dim](build took {elapsed:.1f}s)[/dim]")

return subprocess.CompletedProcess(cmd, process.returncode, ''.join(all_stdout), ''.join(all_stderr))

Suggestion: Prevent a potential deadlock in the build process's streaming mode by handling stdout and stderr concurrently. A simple fix is to redirect stderr to stdout by setting stderr=subprocess.STDOUT in the subprocess.Popen call. [possible issue, importance: 9]

Suggested change
if streaming:
    # Streaming mode (-v): print build progress lines as they happen
    cons.print(f" [bold blue]Building[/bold blue] [magenta]{target_name}[/magenta] [dim](-v)[/dim]...")
    start_time = time.time()
    # Read stdout and print matching lines
    for line in iter(process.stdout.readline, ''):
        all_stdout.append(line)
        stripped = line.strip()
        # Try ninja format first: [42/156] Action
        ninja_match = _NINJA_PROGRESS_RE.match(stripped)
        if ninja_match:
            completed = ninja_match.group(1)
            total = ninja_match.group(2)
            action = ninja_match.group(3)
            # Extract filename from action
            parts = action.split()
            if len(parts) >= 3:
                filename = os.path.basename(parts[-1]).replace('.o', '').replace('.obj', '')
                if len(filename) > 40:
                    filename = filename[:37] + "..."
                cons.print(f" [dim][{completed}/{total}][/dim] {filename}")
            continue
        # Try make format: [ 16%] Action
        make_match = _MAKE_PROGRESS_RE.match(stripped)
        if make_match:
            percent = make_match.group(1)
            action = make_match.group(2)
            # Extract filename from action
            parts = action.split()
            if len(parts) >= 3:
                # Get the last part which is usually the file path
                obj_path = parts[-1]
                filename = os.path.basename(obj_path).replace('.o', '').replace('.obj', '')
                if len(filename) > 40:
                    filename = filename[:37] + "..."
                cons.print(f" [dim][{percent:>3}%][/dim] {filename}")
    # Read any remaining stderr
    stderr = process.stderr.read()
    all_stderr.append(stderr)
    process.wait()
    elapsed = time.time() - start_time
    if elapsed > 5:
        cons.print(f" [dim](build took {elapsed:.1f}s)[/dim]")
    return subprocess.CompletedProcess(cmd, process.returncode, ''.join(all_stdout), ''.join(all_stderr))
if streaming:
    # Streaming mode (-v): print build progress lines as they happen
    cons.print(f" [bold blue]Building[/bold blue] [magenta]{target_name}[/magenta] [dim](-v)[/dim]...")
    start_time = time.time()
    process = subprocess.Popen(  # pylint: disable=consider-using-with
        cmd,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        text=True,
        bufsize=1
    )
    for line in iter(process.stdout.readline, ''):
        all_stdout.append(line)
        stripped = line.strip()
        ninja_match = _NINJA_PROGRESS_RE.match(stripped)
        if ninja_match:
            completed = ninja_match.group(1)
            total = ninja_match.group(2)
            action = ninja_match.group(3)
            parts = action.split()
            if len(parts) >= 3:
                filename = os.path.basename(parts[-1]).replace('.o', '').replace('.obj', '')
                if len(filename) > 40:
                    filename = filename[:37] + "..."
                cons.print(f" [dim][{completed}/{total}][/dim] {filename}")
            continue
        make_match = _MAKE_PROGRESS_RE.match(stripped)
        if make_match:
            percent = make_match.group(1)
            action = make_match.group(2)
            parts = action.split()
            if len(parts) >= 3:
                obj_path = parts[-1]
                filename = os.path.basename(obj_path).replace('.o', '').replace('.obj', '')
                if len(filename) > 40:
                    filename = filename[:37] + "..."
                cons.print(f" [dim][{percent:>3}%][/dim] {filename}")
    process.wait()
    elapsed = time.time() - start_time
    if elapsed > 5:
        cons.print(f" [dim](build took {elapsed:.1f}s)[/dim]")
    return subprocess.CompletedProcess(cmd, process.returncode, ''.join(all_stdout), ''.join(all_stderr))

Comment on lines +66 to +70
def _bash_completion_for_type(comp_type: CompletionType, choices: List[str] = None) -> str:
"""Generate bash completion expression for a completion type."""
if comp_type == CompletionType.CHOICES and choices:
return f'COMPREPLY=( $(compgen -W "{" ".join(choices)}" -- "${{cur}}") )'
return _BASH_COMPLETION_MAP.get(comp_type, "")

Suggestion: Fix a shell injection vulnerability in the generated bash completion script by properly escaping special characters in completion choices. This prevents malformed choices from breaking the script or executing arbitrary code. [security, importance: 8]

Suggested change
def _bash_completion_for_type(comp_type: CompletionType, choices: List[str] = None) -> str:
    """Generate bash completion expression for a completion type."""
    if comp_type == CompletionType.CHOICES and choices:
        return f'COMPREPLY=( $(compgen -W "{" ".join(choices)}" -- "${{cur}}") )'
    return _BASH_COMPLETION_MAP.get(comp_type, "")
def _bash_completion_for_type(comp_type: CompletionType, choices: List[str] = None) -> str:
    """Generate bash completion expression for a completion type."""
    if comp_type == CompletionType.CHOICES and choices:
        safe_choices = [
            str(c).replace("\\", "\\\\").replace('"', '\\"').replace("$", "\\$").replace("`", "\\`")
            for c in choices
        ]
        return f'COMPREPLY=( $(compgen -W "{" ".join(safe_choices)}" -- "${{cur}}") )'
    return _BASH_COMPLETION_MAP.get(comp_type, "")

The previous ast.walk() approach could misattribute constraints to
the wrong method because ast.walk doesn't guarantee parent-child
order. Fixed by iterating through class methods explicitly and
walking within each method scope.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
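For context, a minimal illustration of why a running current_method variable goes wrong with ast.walk (generic snippet, not MFC code):

import ast

src = '''
class CaseValidator:
    def check_grid(self):
        self.prohibit(cond, "message from check_grid")
    def check_time(self):
        pass
'''

tree = ast.parse(src)
# ast.walk is breadth-first: it yields both FunctionDef nodes
# (check_grid, check_time) before descending into either body,
# so the Call to prohibit() is visited *after* check_time.
names = [getattr(n, 'name', None) or type(n).__name__
         for n in ast.walk(tree)
         if isinstance(n, (ast.FunctionDef, ast.Call))]
print(names)  # ['check_grid', 'check_time', 'Call'] -> a running
              # "current_method" would blame check_time for the call.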
@cubic-dev-ai bot left a comment

1 issue found across 1 file (changes from recent commits).

Prompt for AI agents (all issues)

Check if these issues are valid — if so, understand the root cause of each and fix them.


<file name="toolchain/mfc/params_tests/coverage.py">

<violation number="1" location="toolchain/mfc/params_tests/coverage.py:46">
P2: There are now two `_find_case_validator_class` definitions in this module; the later one overrides the new version, making the added function dead code and risking confusion about which behavior is used. Consolidate into a single definition (update one and remove the duplicate) so the helper used by all callers is unambiguous.</violation>
</file>


The function was defined twice after the AST traversal fix.
Removed the old ast.walk version, keeping the better tree.body version.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
@coderabbitai bot left a comment

Actionable comments posted: 9

🤖 Fix all issues with AI agents
In `@docs/documentation/cli-reference.md`:
- Around line 15-41: The anchor links in the CLI quick-reference table (e.g.,
[<code>build</code>](`#build`)) don't match generated heading IDs because headings
include alias text like "### build (alias: `b`)"; update the docs generator to
either emit explicit HTML anchors for each command heading (e.g., <a
id="build">) or change heading generation so alias text is removed from the
heading (move alias into a separate sibling element such as a span or
parentheses line) so that the quick-reference links (e.g., build, run, test)
resolve correctly; modify the generator logic that emits the "### build (alias:
`b`)" headings and the code that creates the quick-reference table links to
ensure consistent IDs.
- Around line 48-52: Update the docs generator/template to emit
"space-separated" (with a hyphen) instead of "Space separated" wherever the
phrase appears in option tables; specifically fix the option descriptions for
flags like `-t`/`--targets` (currently "Space separated list of targets") and
any other occurrences in generated CLI tables so the wording is consistently
hyphenated and uses lowercase "space-separated".

In `@docs/documentation/parameters.md`:
- Around line 17-50: The table's fragment links like "#patch-icpp" don't match
the actual headings "## patch_icpp", causing broken anchors (MD051); fix by
making link fragments match heading IDs or vice versa — e.g., update all table
entries (patch-icpp, patch-ib, fluid-pp, etc.) to use underscores (patch_icpp,
patch_ib, fluid_pp, ...) so they match headings, or alternatively rename the
headings to use hyphens or add explicit HTML anchors (<a name="patch-icpp"></a>)
before the corresponding headings such as patch_icpp to ensure the fragments
resolve.
- Around line 55-61: The generated markdown is using bold/italic lines as
headings ("Initial condition patch parameters" and "1954 parameters") and
inconsistent table pipe spacing in the "Patterns" table which triggers
MD036/MD060; update the doc generator (or generation template) to emit real
heading syntax (e.g., convert the bold/italic lines into proper ATX/Setext
headings such as "## Initial condition patch parameters" and "### 1954
parameters") and normalize table formatting for consistent pipes/spaces in the
"Patterns" table, or alternatively add a generation-time marker/frontmatter to
exclude this file from markdownlint rules (or insert per-file
markdownlint-disable comments for MD036 and MD060) so the linter stops failing.

In `@toolchain/mfc/cli/commands.py`:
- Around line 543-552: The Argument for name "template" currently uses
CompletionType.CHOICES with a hard-coded choices list which causes argparse to
reject inputs like "example:<name>"; remove the hard choices enforcement: delete
the choices=[...] entry and stop using CompletionType.CHOICES (use
CompletionType.TEXT or omit the completion type) so parsing no longer rejects
"example:*" values, and if needed add a lightweight post-parse validation in the
code that handles the "template" value to accept either the minimal names or
strings prefixed with "example:".

In `@toolchain/mfc/params_tests/coverage.py`:
- Around line 46-52: There are two definitions of the function
_find_case_validator_class causing a duplicate-definition error; locate both
occurrences of the function name _find_case_validator_class (the earlier and the
later definition) and remove the earlier duplicate so only a single
_find_case_validator_class implementation remains, ensuring any callers still
reference the retained definition and no redundant imports or references remain.

In `@toolchain/mfc/params/definitions.py`:
- Around line 147-148: The lines calling _r(... ) currently contain multiple
statements per line (e.g., _r(f"{d}_a", REAL, {P}); _r(f"{d}_b", REAL, {P}) and
_r(f"a_{d}", REAL, {P}); _r(f"loops_{d}", INT, {P})), which triggers Ruff E702;
split each semicolon-separated call into its own statement on separate lines so
each _r(...) invocation stands alone (locate the occurrences of _r with names
like f"{d}_a", f"{d}_b", a_{d}, and loops_{d} and place each call on its own
line).
- Around line 114-116: Rename the ambiguous alias O to a clear name like POST
(e.g., change "C, P, S, O = Stage.COMMON, Stage.PRE_PROCESS, Stage.SIMULATION,
Stage.POST_PROCESS" to "C, P, S, POST = Stage.COMMON, Stage.PRE_PROCESS,
Stage.SIMULATION, Stage.POST_PROCESS") and update any usages of O to POST;
remove the unused alias A_INT (drop "A_INT, A_REAL = ParamType.ANALYTIC_INT,
ParamType.ANALYTIC_REAL" and instead keep only "A_REAL =
ParamType.ANALYTIC_REAL" or define A_REAL separately) so there is no
unused-variable (F841) or ambiguous-alias (E741) linting; also remove the
now-unnecessary pylint disable comment if present.

In `@toolchain/mfc/params/descriptions.py`:
- Around line 472-474: In the if "offset" in suffix block replace the no-op
f-string used in the return (currently returning f"Offset parameter" + (f" for
{context}" if context else "")) with a plain string for the static part so Ruff
F541 is avoided; i.e., use "Offset parameter" as the static component and keep
the conditional f-string only for the " for {context}" suffix so the return
becomes a concatenation of the plain "Offset parameter" and the existing (f" for
{context}" if context else "") expression.
🧹 Nitpick comments (2)
toolchain/mfc/params_tests/coverage.py (2)

78-82: Avoid catching blanket Exception around ast.unparse.

🔧 Suggested narrowing of exceptions
             try:
                 condition_code = ast.unparse(node.args[0])
-            except Exception:
+            except (ValueError, TypeError):
                 condition_code = "<complex>"

275-291: Make Optional type explicit per PEP 484.

🧷 Typing fix
-def save_coverage_report(output_path: Path = None):
+def save_coverage_report(output_path: Path | None = None):

Comment on lines +543 to +552
Argument(
name="template",
short="t",
help="Template to use (e.g., 1D_minimal, 2D_minimal, 3D_minimal, or example:<name>).",
default="1D_minimal",
completion=Completion(
type=CompletionType.CHOICES,
choices=["1D_minimal", "2D_minimal", "3D_minimal"],
),
),

⚠️ Potential issue | 🟠 Major

new --template choices reject example:* templates.

Argparse choices only permits the three minimal templates, but help/examples advertise example:<name>. This will reject valid inputs at parse time. Consider removing choices (keep completion hints) or expanding validation to accept example: prefixes.

🛠️ Suggested fix (keep completion hints, remove hard choices)
         Argument(
             name="template",
             short="t",
             help="Template to use (e.g., 1D_minimal, 2D_minimal, 3D_minimal, or example:<name>).",
             default="1D_minimal",
-            completion=Completion(
-                type=CompletionType.CHOICES,
-                choices=["1D_minimal", "2D_minimal", "3D_minimal"],
-            ),
+            completion=Completion(
+                type=CompletionType.CHOICES,
+                choices=["1D_minimal", "2D_minimal", "3D_minimal"],
+            ),
         ),
🤖 Prompt for AI Agents
In `@toolchain/mfc/cli/commands.py` around lines 543 - 552, The Argument for name
"template" currently uses CompletionType.CHOICES with a hard-coded choices list
which causes argparse to reject inputs like "example:<name>"; remove the hard
choices enforcement: delete the choices=[...] entry and stop using
CompletionType.CHOICES (use CompletionType.TEXT or omit the completion type) so
parsing no longer rejects "example:*" values, and if needed add a lightweight
post-parse validation in the code that handles the "template" value to accept
either the minimal names or strings prefixed with "example:".

- Rename ambiguous O alias to POST in definitions.py (Ruff E741)
- Remove unused A_INT variable (Ruff F841)
- Split multiple statements per line (Ruff E702)
- Remove no-op f-string prefix in descriptions.py (Ruff F541)
- Fix markdown anchor links in doc generators
- Hyphenate "space-separated" in help text

Regenerated completion scripts and documentation.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
@sbryngelson (Member Author)

/improve

@coderabbitai bot left a comment

Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@toolchain/completions/_mfc`:
- Around line 58-141: The completion specs treat many options as flags instead
of value-taking; update the _arguments entries (both the general/multi-target
blocks and the run) so each option that requires a value includes a colon and a
completion action (e.g., change '-j[Allows for JOBS concurrent jobs.]' and
'--jobs[...]' to '-j:jobs:' and '--jobs:jobs:' or '-j:jobs:_guard "[1-9]*" ' for
numeric hints, change '-g/--gpus' to '-g:gpus:' and '--gpus:gpus:_files or
_comma_separated_values', change '-p/--partition', '-n/--tasks-per-node',
'-w/--walltime', '-a/--account', '-@/--email', '-#/--name',
'-q/--quality_of_service', '-N/--nodes', and '-o/--output-summary' similarly),
making sure to edit the entries inside the _arguments calls for the b) and run)
cases so Zsh will expect and complete their arguments.

In `@toolchain/mfc/params/generators/docs_gen.py`:
- Around line 72-85: The generator mistakenly treats REGISTRY.all_params as an
attribute; replace usages that call REGISTRY.all_params (e.g., in docs_gen.py
where you build the intro string and where you iterate with for name, param in
sorted(REGISTRY.all_params.items()): and any other places around lines
referenced) with a call to the method REGISTRY.all_params() so you iterate over
the returned mapping (e.g., sorted(REGISTRY.all_params().items())) and compute
len(REGISTRY.all_params()) when building the header; ensure the same change is
applied to the other occurrences (including the block that groups by family
using _get_family).
🧹 Nitpick comments (1)
toolchain/mfc/params/generators/docs_gen.py (1)

15-15: Remove the unused noqa directive to satisfy RUF100.

# noqa: F401 isn’t suppressing anything and is flagged by Ruff. Keep the import for side effects, but drop the unused noqa.

🧹 Minimal clean-up
-from .. import definitions  # noqa: F401  pylint: disable=unused-import
+from .. import definitions  # pylint: disable=unused-import

Comment on lines +58 to +141
'-j[Allows for JOBS concurrent jobs.]' \
'--jobs[Allows for JOBS concurrent jobs.]' \
'-v[Increase verbosity level. Use -v, -vv, or -vvv for more detail.]' \
'--verbose[Increase verbosity level. Use -v, -vv, or -vvv for more detail.]' \
'-d[Enable debug logging for troubleshooting.]' \
'--debug-log[Enable debug logging for troubleshooting.]' \
'-i[(GPU Optimization) Build a version of MFC optimized for a case.]:_files -g "*.py"' \
'--input[(GPU Optimization) Build a version of MFC optimized for a case.]:_files -g "*.py"' \
'--case-optimization[(GPU Optimization) Compile MFC targets with some case parameters hard-coded (requires --input).]'
;;
b)
_arguments \
'-t[Space-separated list of targets to act upon.]:(fftw hdf5 silo lapack hipfort pre_process simulation post_process syscheck documentation)' \
'--targets[Space-separated list of targets to act upon.]:(fftw hdf5 silo lapack hipfort pre_process simulation post_process syscheck documentation)' \
'--mpi[Enable MPI]' \
'--no-mpi[Disable MPI]' \
'--gpu[Enable GPU]:mode:(acc mp)' \
'--no-gpu[Disable GPU]' \
'--debug[Enable debug mode]' \
'--no-debug[Disable debug mode]' \
'-j[Allows for JOBS concurrent jobs.]' \
'--jobs[Allows for JOBS concurrent jobs.]' \
'-v[Increase verbosity level. Use -v, -vv, or -vvv for more detail.]' \
'--verbose[Increase verbosity level. Use -v, -vv, or -vvv for more detail.]' \
'-d[Enable debug logging for troubleshooting.]' \
'--debug-log[Enable debug logging for troubleshooting.]' \
'-i[(GPU Optimization) Build a version of MFC optimized for a case.]:_files -g "*.py"' \
'--input[(GPU Optimization) Build a version of MFC optimized for a case.]:_files -g "*.py"' \
'--case-optimization[(GPU Optimization) Compile MFC targets with some case parameters hard-coded (requires --input).]'
;;
run)
_arguments \
'1:Input file to run.:_files -g "*.py"' \
'-t[Space-separated list of targets to act upon.]:(fftw hdf5 silo lapack hipfort pre_process simulation post_process syscheck documentation)' \
'--targets[Space-separated list of targets to act upon.]:(fftw hdf5 silo lapack hipfort pre_process simulation post_process syscheck documentation)' \
'--mpi[Enable MPI]' \
'--no-mpi[Disable MPI]' \
'--gpu[Enable GPU]:mode:(acc mp)' \
'--no-gpu[Disable GPU]' \
'--debug[Enable debug mode]' \
'--no-debug[Disable debug mode]' \
'-j[Allows for JOBS concurrent jobs.]' \
'--jobs[Allows for JOBS concurrent jobs.]' \
'-v[Increase verbosity level. Use -v, -vv, or -vvv for more detail.]' \
'--verbose[Increase verbosity level. Use -v, -vv, or -vvv for more detail.]' \
'-d[Enable debug logging for troubleshooting.]' \
'--debug-log[Enable debug logging for troubleshooting.]' \
'-g[(Optional GPU override) List of GPU #s to use (environment default if unspecified).]' \
'--gpus[(Optional GPU override) List of GPU #s to use (environment default if unspecified).]' \
'-e[Job execution/submission engine choice.]:(interactive batch)' \
'--engine[Job execution/submission engine choice.]:(interactive batch)' \
'-p[(Batch) Partition for job submission.]' \
'--partition[(Batch) Partition for job submission.]' \
'-q[(Batch) Quality of Service for job submission.]' \
'--quality_of_service[(Batch) Quality of Service for job submission.]' \
'-N[(Batch) Number of nodes.]' \
'--nodes[(Batch) Number of nodes.]' \
'-n[Number of tasks per node.]' \
'--tasks-per-node[Number of tasks per node.]' \
'-w[(Batch) Walltime.]' \
'--walltime[(Batch) Walltime.]' \
'-a[(Batch) Account to charge.]' \
'--account[(Batch) Account to charge.]' \
'-@[(Batch) Email for job notification.]' \
'--email[(Batch) Email for job notification.]' \
'-#[(Batch) Job name.]' \
'--name[(Batch) Job name.]' \
'-s[Build from scratch.]' \
'--scratch[Build from scratch.]' \
'-b[(Interactive) Override MPI execution binary]:(mpirun jsrun srun mpiexec)' \
'--binary[(Interactive) Override MPI execution binary]:(mpirun jsrun srun mpiexec)' \
'--dry-run[(Batch) Run without submitting batch file.]' \
'--case-optimization[(GPU Optimization) Compile MFC targets with some case parameters hard-coded.]' \
'--no-build[(Testing) Do not rebuild MFC.]' \
'--wait[(Batch) Wait for the job to finish.]' \
'-c[(Batch) Path to a custom submission file template or one of the built-in templates.]:(bridges2 carpenter carpenter-cray default delta deltaai frontier hipergator nautilus oscar phoenix phoenix-bench santis tuo)' \
'--computer[(Batch) Path to a custom submission file template or one of the built-in templates.]:(bridges2 carpenter carpenter-cray default delta deltaai frontier hipergator nautilus oscar phoenix phoenix-bench santis tuo)' \
'-o[Output file (YAML) for summary.]' \
'--output-summary[Output file (YAML) for summary.]' \
'--clean[Clean the case before running.]' \
'--ncu[Profile with NVIDIA Nsight Compute.]' \
'--nsys[Profile with NVIDIA Nsight Systems.]' \
'--rcu[Profile with ROCM rocprof-compute.]' \
'--rsys[Profile with ROCM rocprof-systems.]'

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# First, let's find and examine the actual file
git ls-files | grep -E "completions.*_mfc"

Repository: MFlowCode/MFC

Length of output: 83


🌐 Web query:

Zsh _arguments syntax for options that require values argument specification

💡 Result:

In Zsh completion, _arguments option specs (“optspecs”) use colons to declare option arguments:

1) Basic form (option with a required value)

_arguments \
  '-o[output file]:file:_files'

General shape:

'optspec:message:action'

2) Optional value for an option

_arguments \
  '--color[use color]::when:(auto always never)'

3) Controlling how the value is attached (next word vs same word vs =)

These are suffixes on the option name inside optspec: (zsh.sourceforge.io)

  • Next word only (default):

    _arguments '-o[output]:file:_files'   # -o FILE
  • Same word only (immediately after option name):

    _arguments '-I-[include dir]:dir:_files -/'
    # e.g. -I/usr/include
  • Either same word or next word:

    _arguments '-I+[include dir]:dir:_files -/'
    # -I/usr/include  OR  -I /usr/include
  • Either --opt=VAL or --opt VAL:

    _arguments '--port=[port]:port: '
    # --port=123  OR  --port 123
  • Must be --opt=VAL (equals required):

    _arguments '--port=-[port]:port: '
    # only --port=123

4) Multiple values for one option

Repeat :message:action segments to declare multiple arguments: (zsh.sourceforge.io)

_arguments \
  '--range[first]:from:' \
  ':to:'

If you tell me an example CLI (e.g. --port 123, -I/usr/include, --define=K=V), I can write the exact _arguments line(s) for it.



🏁 Script executed:

# Read the actual file to see current state
head -141 toolchain/completions/_mfc | tail -84

Repository: MFlowCode/MFC

Length of output: 6955


Declare argument specs for value-taking options (e.g., -j/--jobs, -g/--gpus, -p/--partition, -n/--tasks-per-node, -w/--walltime, -a/--account, -@/--email, -#/--name, -q/--quality_of_service, -N/--nodes, -o/--output-summary).

Without :message:action specs, Zsh treats these as flags and completion won't expect a value. According to Zsh documentation, options requiring arguments must include at least one colon followed by a completion action (e.g., :jobs: or :partition:_files).

♻️ Example fix for one occurrence
-                        '-j[Allows for JOBS concurrent jobs.]' \
-                        '--jobs[Allows for JOBS concurrent jobs.]' \
+                        '-j[Allows for JOBS concurrent jobs.]:jobs:' \
+                        '--jobs[Allows for JOBS concurrent jobs.]:jobs:' \
🤖 Prompt for AI Agents
In `@toolchain/completions/_mfc` around lines 58 - 141, The completion specs treat
many options as flags instead of value-taking; update the _arguments entries
(both the general/multi-target blocks and the run) so each option that requires
a value includes a colon and a completion action (e.g., change '-j[Allows for
JOBS concurrent jobs.]' and '--jobs[...]' to '-j:jobs:' and '--jobs:jobs:' or
'-j:jobs:_guard "[1-9]*" ' for numeric hints, change '-g/--gpus' to '-g:gpus:'
and '--gpus:gpus:_files or _comma_separated_values', change '-p/--partition',
'-n/--tasks-per-node', '-w/--walltime', '-a/--account', '-@/--email',
'-#/--name', '-q/--quality_of_service', '-N/--nodes', and '-o/--output-summary'
similarly), making sure to edit the entries inside the _arguments calls for the
b) and run) cases so Zsh will expect and complete their arguments.

Comment on lines 72 to 85
f"MFC supports **{len(REGISTRY.all_params):,}** case parameters organized into families.",
"",
"**Quick search:** Use `./mfc.sh params <query>` to search parameters from the command line.",
"",
"## Parameter Families",
"",
]

# Group parameters by family
families: Dict[str, List[Tuple[str, any]]] = defaultdict(list)
for name, param in sorted(REGISTRY.all_params.items()):
family = _get_family(name)
families[family].append((name, param))


⚠️ Potential issue | 🟠 Major

Call REGISTRY.all_params() — current usage will break doc generation.

Line 72, Line 82, and Line 234 treat all_params as an attribute, but it’s a method in registry.py. This will raise at runtime (e.g., method has no items()), breaking the generator.

🐛 Fix: invoke the method
-        f"MFC supports **{len(REGISTRY.all_params):,}** case parameters organized into families.",
+        f"MFC supports **{len(REGISTRY.all_params()):,}** case parameters organized into families.",
@@
-    for name, param in sorted(REGISTRY.all_params.items()):
+    for name, param in sorted(REGISTRY.all_params().items()):
@@
-    return len(REGISTRY.all_params)
+    return len(REGISTRY.all_params())

Also applies to: 231-234

🤖 Prompt for AI Agents
In `@toolchain/mfc/params/generators/docs_gen.py` around lines 72 - 85, The
generator mistakenly treats REGISTRY.all_params as an attribute; replace usages
that call REGISTRY.all_params (e.g., in docs_gen.py where you build the intro
string and where you iterate with for name, param in
sorted(REGISTRY.all_params.items()): and any other places around lines
referenced) with a call to the method REGISTRY.all_params() so you iterate over
the returned mapping (e.g., sorted(REGISTRY.all_params().items())) and compute
len(REGISTRY.all_params()) when building the header; ensure the same change is
applied to the other occurrences (including the block that groups by family
using _get_family).

CASE_OPTIMIZATION = _get_case_optimization_params()

# Build JSON schema for validation
_properties = {k: v.value for k, v in ALL.items()}

Suggestion: Use getattr(v, "value", v) to handle cases where parameter types are represented as raw dictionaries instead of ParamType enums, preventing a potential AttributeError during schema generation. [possible issue, importance: 6]

Suggested change
_properties = {k: v.value for k, v in ALL.items()}
_properties = {k: getattr(v, "value", v) for k, v in ALL.items()}

Comment on lines +544 to +551
desc = re.sub(r"(species|fluid|component|direction) 2", r"\1 M", desc)
desc = re.sub(r"component 2", "component M", desc)
# Then handle primary index (1 -> N)
desc = re.sub(r"(patch|fluid|IB patch|source|probe|region|species|direction|component) 1", r"\1 N", desc)
# Generic fallback
desc = re.sub(r" 1([,\s]|$)", r" N\1", desc)
desc = re.sub(r" 2([,\s]|$)", r" M\1", desc)
desc = re.sub(r" 3([,\s]|$)", r" K\1", desc)

Suggestion: Use word boundaries (\b) in the regex substitutions within get_pattern_description to ensure only whole numbers are replaced, preventing incorrect replacements in multi-digit numbers. [possible issue, importance: 7]

Suggested change
desc = re.sub(r"(species|fluid|component|direction) 2", r"\1 M", desc)
desc = re.sub(r"component 2", "component M", desc)
# Then handle primary index (1 -> N)
desc = re.sub(r"(patch|fluid|IB patch|source|probe|region|species|direction|component) 1", r"\1 N", desc)
# Generic fallback
desc = re.sub(r" 1([,\s]|$)", r" N\1", desc)
desc = re.sub(r" 2([,\s]|$)", r" M\1", desc)
desc = re.sub(r" 3([,\s]|$)", r" K\1", desc)
desc = re.sub(r"\b(species|fluid|component|direction)\s+2\b", r"\1 M", desc)
# Then handle primary index (1 -> N)
desc = re.sub(r"\b(patch|fluid|IB patch|source|probe|region|species|direction|component)\s+1\b", r"\1 N", desc)
# Generic fallback
desc = re.sub(r"\b1\b(?=[,\s]|$)", "N", desc)
desc = re.sub(r"\b2\b(?=[,\s]|$)", "M", desc)
desc = re.sub(r"\b3\b(?=[,\s]|$)", "K", desc)

Comment on lines +178 to +214
for n in ["dt", "cfl_target", "cfl_max", "t_tol", "weno_eps", "teno_CT", "wenoz_q",
"adap_dt_tol", "t_stop", "t_save", "tau_star", "cont_damage_s", "alpha_bar",
"alf_factor", "ic_eps", "ic_beta"]:
_r(n, REAL, {S})
for n in ["mapped_weno", "wenoz", "teno", "weno_Re_flux", "weno_avg", "mp_weno",
"run_time_info", "prim_vars_wrt", "alt_soundspeed", "null_weights",
"mixture_err", "probe_wrt", "acoustic_source", "integral_wrt", "rdma_mpi",
"adap_dt", "powell", "igr_pres_lim", "int_comp", "nv_uvm_out_of_core",
"nv_uvm_pref_gpu"]:
_r(n, LOG, {S})
# cfl_target also POST_PROCESS
_r("cfl_target", REAL, {POST})

# Body force (SIM)
for d in ["x", "y", "z"]:
for v in ["k", "w", "p", "g"]:
_r(f"{v}_{d}", REAL, {S})
_r(f"bf_{d}", LOG, {S})

# === POST_PROCESS ===
for n in ["format", "flux_lim", "t_step_start", "t_step_stop", "t_step_save"]:
_r(n, INT, {POST})
for d in ["x", "y", "z"]:
_r(f"bc_{d}%beg", INT, {POST})
_r(f"bc_{d}%end", INT, {POST})
for n in ["schlieren_alpha"]:
_r(n, REAL, {POST})
for n in ["schlieren_wrt", "alpha_rho_wrt", "rho_wrt", "mom_wrt", "vel_wrt",
"flux_wrt", "E_wrt", "pres_wrt", "alpha_wrt", "kappa_wrt", "gamma_wrt",
"heat_ratio_wrt", "pi_inf_wrt", "pres_inf_wrt", "cons_vars_wrt", "c_wrt",
"omega_wrt", "qm_wrt", "liutex_wrt", "cf_wrt", "sim_data", "output_partial_domain"]:
_r(n, LOG, {POST})
for d in ["x", "y", "z"]:
_r(f"{d}_output%beg", REAL, {POST})
_r(f"{d}_output%end", REAL, {POST})
for n in ["prim_vars_wrt", "alt_soundspeed", "mixture_err", "t_stop", "t_save", "fd_order"]:
_r(n, LOG if "wrt" in n or n in ["alt_soundspeed", "mixture_err"] else (INT if n == "fd_order" else REAL), {POST})

Suggestion: Refactor parameter registration to avoid duplicate entries for cfl_target and prim_vars_wrt. Register each parameter only once with the combined set of all applicable stages. [possible issue, importance: 7]

Suggested change
for n in ["dt", "cfl_target", "cfl_max", "t_tol", "weno_eps", "teno_CT", "wenoz_q",
"adap_dt_tol", "t_stop", "t_save", "tau_star", "cont_damage_s", "alpha_bar",
"alf_factor", "ic_eps", "ic_beta"]:
_r(n, REAL, {S})
for n in ["mapped_weno", "wenoz", "teno", "weno_Re_flux", "weno_avg", "mp_weno",
"run_time_info", "prim_vars_wrt", "alt_soundspeed", "null_weights",
"mixture_err", "probe_wrt", "acoustic_source", "integral_wrt", "rdma_mpi",
"adap_dt", "powell", "igr_pres_lim", "int_comp", "nv_uvm_out_of_core",
"nv_uvm_pref_gpu"]:
_r(n, LOG, {S})
# cfl_target also POST_PROCESS
_r("cfl_target", REAL, {POST})
# Body force (SIM)
for d in ["x", "y", "z"]:
for v in ["k", "w", "p", "g"]:
_r(f"{v}_{d}", REAL, {S})
_r(f"bf_{d}", LOG, {S})
# === POST_PROCESS ===
for n in ["format", "flux_lim", "t_step_start", "t_step_stop", "t_step_save"]:
_r(n, INT, {POST})
for d in ["x", "y", "z"]:
_r(f"bc_{d}%beg", INT, {POST})
_r(f"bc_{d}%end", INT, {POST})
for n in ["schlieren_alpha"]:
_r(n, REAL, {POST})
for n in ["schlieren_wrt", "alpha_rho_wrt", "rho_wrt", "mom_wrt", "vel_wrt",
"flux_wrt", "E_wrt", "pres_wrt", "alpha_wrt", "kappa_wrt", "gamma_wrt",
"heat_ratio_wrt", "pi_inf_wrt", "pres_inf_wrt", "cons_vars_wrt", "c_wrt",
"omega_wrt", "qm_wrt", "liutex_wrt", "cf_wrt", "sim_data", "output_partial_domain"]:
_r(n, LOG, {POST})
for d in ["x", "y", "z"]:
_r(f"{d}_output%beg", REAL, {POST})
_r(f"{d}_output%end", REAL, {POST})
for n in ["prim_vars_wrt", "alt_soundspeed", "mixture_err", "t_stop", "t_save", "fd_order"]:
_r(n, LOG if "wrt" in n or n in ["alt_soundspeed", "mixture_err"] else (INT if n == "fd_order" else REAL), {POST})
for n in ["dt", "cfl_target", "cfl_max", "t_tol", "weno_eps", "teno_CT", "wenoz_q",
"adap_dt_tol", "t_stop", "t_save", "tau_star", "cont_damage_s", "alpha_bar",
"alf_factor", "ic_eps", "ic_beta"]:
_r(n, REAL, {S} | ({POST} if n == "cfl_target" else set()))
...
for n in ["prim_vars_wrt", "alt_soundspeed", "mixture_err", "t_stop", "t_save", "fd_order"]:
if n != "prim_vars_wrt":
_r(n, LOG if "wrt" in n or n in ["alt_soundspeed", "mixture_err"] else (INT if n == "fd_order" else REAL), {POST})

Comment on lines +510 to +514
cmd = f"./mfc.sh new {name} -t {template}"
cons.print()
cons.print(f"[dim]Running: {cmd}[/dim]")
cons.print()
os.system(cmd)

Suggestion: Replace os.system() calls with subprocess.run() and pass command arguments as a list to prevent shell injection vulnerabilities from user input in the interactive mode. [security, importance: 9]

Suggested change
cmd = f"./mfc.sh new {name} -t {template}"
cons.print()
cons.print(f"[dim]Running: {cmd}[/dim]")
cons.print()
os.system(cmd)
import subprocess
...
cmd = ["./mfc.sh", "new", name, "-t", template]
cons.print()
cons.print(f"[dim]Running: {' '.join(cmd)}[/dim]")
cons.print()
subprocess.run(cmd, check=False)

Comment on lines +76 to +81
m = self.get('m')
if m is not None and m > 0:
self.prohibit(not self.is_set('x_domain%beg'),
"x_domain%beg must be set when m > 0")
self.prohibit(not self.is_set('x_domain%end'),
"x_domain%end must be set when m > 0")

Suggestion: Add a type check for the parameter m before performing a numeric comparison. This prevents the validator from crashing with a TypeError if m is not a number and instead provides a user-friendly error message. [possible issue, importance: 7]

Suggested change
m = self.get('m')
if m is not None and m > 0:
    self.prohibit(not self.is_set('x_domain%beg'),
                  "x_domain%beg must be set when m > 0")
    self.prohibit(not self.is_set('x_domain%end'),
                  "x_domain%end must be set when m > 0")
# Required domain parameters when m > 0
m = self.get('m')
if m is not None and (not isinstance(m, (int, float)) or isinstance(m, bool)):
    self.errors.append(f"m must be numeric, got '{m}'")
elif m is not None and m > 0:
    self.prohibit(not self.is_set('x_domain%beg'),
                  "x_domain%beg must be set when m > 0")
    self.prohibit(not self.is_set('x_domain%end'),
                  "x_domain%end must be set when m > 0")

Comment on lines +58 to +107
if streaming:
# Streaming mode (-v): print build progress lines as they happen
cons.print(f" [bold blue]Building[/bold blue] [magenta]{target_name}[/magenta] [dim](-v)[/dim]...")
start_time = time.time()

# Read stdout and print matching lines
for line in iter(process.stdout.readline, ''):
all_stdout.append(line)
stripped = line.strip()

# Try ninja format first: [42/156] Action
ninja_match = _NINJA_PROGRESS_RE.match(stripped)
if ninja_match:
completed = ninja_match.group(1)
total = ninja_match.group(2)
action = ninja_match.group(3)
# Extract filename from action
parts = action.split()
if len(parts) >= 3:
filename = os.path.basename(parts[-1]).replace('.o', '').replace('.obj', '')
if len(filename) > 40:
filename = filename[:37] + "..."
cons.print(f" [dim][{completed}/{total}][/dim] {filename}")
continue

# Try make format: [ 16%] Action
make_match = _MAKE_PROGRESS_RE.match(stripped)
if make_match:
percent = make_match.group(1)
action = make_match.group(2)
# Extract filename from action
parts = action.split()
if len(parts) >= 3:
# Get the last part which is usually the file path
obj_path = parts[-1]
filename = os.path.basename(obj_path).replace('.o', '').replace('.obj', '')
if len(filename) > 40:
filename = filename[:37] + "..."
cons.print(f" [dim][{percent:>3}%][/dim] {filename}")

# Read any remaining stderr
stderr = process.stderr.read()
all_stderr.append(stderr)
process.wait()

elapsed = time.time() - start_time
if elapsed > 5:
cons.print(f" [dim](build took {elapsed:.1f}s)[/dim]")

return subprocess.CompletedProcess(cmd, process.returncode, ''.join(all_stdout), ''.join(all_stderr))

Suggestion: Avoid a potential deadlock in streaming mode by redirecting stderr to stdout in the subprocess.Popen call. This prevents the process from hanging if the stderr buffer fills up. [possible issue, importance: 9]

Suggested change
if streaming:
    # Streaming mode (-v): print build progress lines as they happen
    cons.print(f" [bold blue]Building[/bold blue] [magenta]{target_name}[/magenta] [dim](-v)[/dim]...")
    start_time = time.time()
    # Read stdout and print matching lines
    for line in iter(process.stdout.readline, ''):
        all_stdout.append(line)
        stripped = line.strip()
        # Try ninja format first: [42/156] Action
        ninja_match = _NINJA_PROGRESS_RE.match(stripped)
        if ninja_match:
            completed = ninja_match.group(1)
            total = ninja_match.group(2)
            action = ninja_match.group(3)
            # Extract filename from action
            parts = action.split()
            if len(parts) >= 3:
                filename = os.path.basename(parts[-1]).replace('.o', '').replace('.obj', '')
                if len(filename) > 40:
                    filename = filename[:37] + "..."
                cons.print(f" [dim][{completed}/{total}][/dim] {filename}")
            continue
        # Try make format: [ 16%] Action
        make_match = _MAKE_PROGRESS_RE.match(stripped)
        if make_match:
            percent = make_match.group(1)
            action = make_match.group(2)
            # Extract filename from action
            parts = action.split()
            if len(parts) >= 3:
                # Get the last part which is usually the file path
                obj_path = parts[-1]
                filename = os.path.basename(obj_path).replace('.o', '').replace('.obj', '')
                if len(filename) > 40:
                    filename = filename[:37] + "..."
                cons.print(f" [dim][{percent:>3}%][/dim] {filename}")
    # Read any remaining stderr
    stderr = process.stderr.read()
    all_stderr.append(stderr)
    process.wait()
    elapsed = time.time() - start_time
    if elapsed > 5:
        cons.print(f" [dim](build took {elapsed:.1f}s)[/dim]")
    return subprocess.CompletedProcess(cmd, process.returncode, ''.join(all_stdout), ''.join(all_stderr))
if streaming:
    # Streaming mode (-v): print build progress lines as they happen
    cons.print(f" [bold blue]Building[/bold blue] [magenta]{target_name}[/magenta] [dim](-v)[/dim]...")
    start_time = time.time()
    # Re-launch process with stderr merged into stdout to avoid pipe deadlocks
    process = subprocess.Popen(  # pylint: disable=consider-using-with
        cmd,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        text=True,
        bufsize=1
    )
    for line in iter(process.stdout.readline, ''):
        all_stdout.append(line)
        stripped = line.strip()
        ninja_match = _NINJA_PROGRESS_RE.match(stripped)
        if ninja_match:
            completed = ninja_match.group(1)
            total = ninja_match.group(2)
            action = ninja_match.group(3)
            parts = action.split()
            if len(parts) >= 3:
                filename = os.path.basename(parts[-1]).replace('.o', '').replace('.obj', '')
                if len(filename) > 40:
                    filename = filename[:37] + "..."
                cons.print(f" [dim][{completed}/{total}][/dim] {filename}")
            continue
        make_match = _MAKE_PROGRESS_RE.match(stripped)
        if make_match:
            percent = make_match.group(1)
            action = make_match.group(2)
            parts = action.split()
            if len(parts) >= 3:
                obj_path = parts[-1]
                filename = os.path.basename(obj_path).replace('.o', '').replace('.obj', '')
                if len(filename) > 40:
                    filename = filename[:37] + "..."
                cons.print(f" [dim][{percent:>3}%][/dim] {filename}")
    process.wait()
    elapsed = time.time() - start_time
    if elapsed > 5:
        cons.print(f" [dim](build took {elapsed:.1f}s)[/dim]")
    return subprocess.CompletedProcess(cmd, process.returncode, ''.join(all_stdout), '')

Comment on lines +26 to +27
failed_tests = [] # Track failed test details for summary
test_start_time = None # Track overall test duration

Suggestion: Introduce a threading.Lock to protect shared global variables like failed_tests, nFAIL, and errors from concurrent modification by worker threads. This prevents race conditions and ensures consistent test results. [possible issue, importance: 8]

Suggested change
failed_tests = [] # Track failed test details for summary
test_start_time = None # Track overall test duration
failed_tests = [] # Track failed test details for summary
test_start_time = None # Track overall test duration
_results_lock = threading.Lock()
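A minimal sketch of how the lock could guard the shared state (illustrative only; _record_failure is a hypothetical helper, while failed_tests, nFAIL, and _results_lock come from the suggestion above):

def _record_failure(test_uuid: str, reason: str) -> None:
    """Record a failed test from a worker thread."""
    global nFAIL
    # All mutation of the shared globals happens under the lock so that
    # concurrent worker threads cannot interleave their updates.
    with _results_lock:
        nFAIL += 1
        failed_tests.append({"uuid": test_uuid, "reason": reason})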

Comment on lines +66 to +70
def _bash_completion_for_type(comp_type: CompletionType, choices: List[str] = None) -> str:
"""Generate bash completion expression for a completion type."""
if comp_type == CompletionType.CHOICES and choices:
return f'COMPREPLY=( $(compgen -W "{" ".join(choices)}" -- "${{cur}}") )'
return _BASH_COMPLETION_MAP.get(comp_type, "")

Suggestion: Escape choice values when generating bash completion scripts to prevent shell injection vulnerabilities. Unescaped values with special characters can break the script or lead to unintended code execution. [security, importance: 8]

Suggested change
def _bash_completion_for_type(comp_type: CompletionType, choices: List[str] = None) -> str:
    """Generate bash completion expression for a completion type."""
    if comp_type == CompletionType.CHOICES and choices:
        return f'COMPREPLY=( $(compgen -W "{" ".join(choices)}" -- "${{cur}}") )'
    return _BASH_COMPLETION_MAP.get(comp_type, "")
def _bash_completion_for_type(comp_type: CompletionType, choices: List[str] = None) -> str:
    """Generate bash completion expression for a completion type."""
    if comp_type == CompletionType.CHOICES and choices:
        safe_choices = [
            str(c).replace("\\", "\\\\").replace('"', '\\"').replace("\n", " ").strip()
            for c in choices
        ]
        return f'COMPREPLY=( $(compgen -W "{" ".join(safe_choices)}" -- "${{cur}}") )'
    return _BASH_COMPLETION_MAP.get(comp_type, "")

@cubic-dev-ai bot left a comment

1 issue found across 1 file (changes from recent commits).

Prompt for AI agents (all issues)

Check if these issues are valid — if so, understand the root cause of each and fix them.


<file name=".github/workflows/lint-toolchain.yml">

<violation number="1" location=".github/workflows/lint-toolchain.yml:39">
P2: The multiline `python3 -c` block is indented inside the quoted string, which triggers `IndentationError: unexpected indent` in Python. Use a heredoc or remove the indentation so the CI step runs.</violation>
</file>


codecov bot commented Feb 1, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 44.16%. Comparing base (944aa2f) to head (213a0bb).
⚠️ Report is 1 commit behind head on master.

Additional details and impacted files
@@           Coverage Diff           @@
##           master    #1118   +/-   ##
=======================================
  Coverage   44.16%   44.16%           
=======================================
  Files          71       71           
  Lines       20417    20417           
  Branches     1991     1991           
=======================================
  Hits         9018     9018           
  Misses      10252    10252           
  Partials     1147     1147           

☔ View full report in Codecov by Sentry.

New tests added:
- CLI smoke tests (verify commands don't crash)
- Parameter registry validation (3400+ params, matches case_dicts)
- Bash completion syntax check
- Example case validation

These catch toolchain regressions early without requiring full builds.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
codeant-ai bot commented Feb 1, 2026

CodeAnt AI is running Incremental review



@codeant-ai codeant-ai bot added size:XXL This PR changes 1000+ lines, ignoring generated files and removed size:XXL This PR changes 1000+ lines, ignoring generated files labels Feb 1, 2026


# Parameters that can be hard-coded for GPU case optimization (from registry)
CASE_OPTIMIZATION = _get_case_optimization_params()

Suggestion: When analytic parameter types are present, some entries in the combined dictionary are raw dicts instead of Enum members, so iterating with v.value will raise an AttributeError at import time, breaking schema construction; you should handle both Enum and dict values when building the JSON schema. [type error]

Severity Level: Critical 🚨
- ❌ Module import fails during validation/schema initialization.
- ❌ get_validator() unavailable breaking parameter validation.
- ⚠️ CLI commands using case validation error at startup.
- ⚠️ Any test that imports registry triggers import error.
Suggested change
CASE_OPTIMIZATION = _get_case_optimization_params()
_properties = {
    k: (v.value if isinstance(v, ParamType) else v)
    for k, v in ALL.items()
}
Steps of Reproduction ✅
1. Import the case dicts module: run `python -c "import toolchain.mfc.run.case_dicts"`
(module file at toolchain/mfc/run/case_dicts.py). This runs the top-level code that calls
`_load_from_registry()` (see toolchain/mfc/run/case_dicts.py:43-58).

2. `_load_from_registry()` in toolchain/mfc/run/case_dicts.py (lines 43-58) calls
`get_stage_dict()` from toolchain/mfc/params/generators/case_dicts_gen.py (function
defined at toolchain/mfc/params/generators/case_dicts_gen.py:15-46). That generator maps
analytic registry types to class attributes like `CDParamType._ANALYTIC_INT` (see the
`type_map` in toolchain/mfc/params/generators/case_dicts_gen.py:15-46).

3. Because `CDParamType._ANALYTIC_INT` and `_ANALYTIC_REAL` are class-level dicts in
ParamType (not Enum members), the combined ALL dict produced at module import contains a
mix of ParamType Enum members and raw dicts.

4. Module-level code in toolchain/mfc/run/case_dicts.py then executes the comprehension
`_properties = {k: v.value for k, v in ALL.items()}` (at
toolchain/mfc/run/case_dicts.py:81), which attempts to access `.value` on a plain dict and
raises AttributeError: 'dict' object has no attribute 'value'. The import fails
immediately with a traceback pointing to that line.
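A short illustration of the failure mode described above (generic Python, not the MFC registry):

from enum import Enum

class ParamKind(Enum):
    INT = "integer"

mixed = {"a": ParamKind.INT, "b": {"type": "integer"}}  # Enum member and raw dict mixed
{k: v.value for k, v in mixed.items()}  # AttributeError: 'dict' object has no attribute 'value'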
Prompt for AI Agent 🤖
This is a comment left during a code review.

**Path:** toolchain/mfc/run/case_dicts.py
**Line:** 81:81
**Comment:**
	*Type Error: When analytic parameter types are present, some entries in the combined dictionary are raw dicts instead of Enum members, so iterating with `v.value` will raise an AttributeError at import time, breaking schema construction; you should handle both Enum and dict values when building the JSON schema.

Validate the correctness of the flagged issue. If correct, How can I resolve this? If you propose a fix, implement it and please make it concise.

codeant-ai bot commented Feb 1, 2026

CodeAnt AI Incremental review completed.

sbryngelson and others added 2 commits January 31, 2026 23:25
The % character is special in Doxygen. Using %% renders as a literal %.
This fixes parameter names like bc_y%alpha_in showing as bc_yalpha_in.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
Instead of listing every combination like (1,1), (1,2), (2,1), etc.,
the docs now show a single pattern like (N, M).

Before: 78 rows for simplex_params (showing every index combination)
After: 8 rows showing unique patterns

Co-Authored-By: Claude Opus 4.5 <[email protected]>
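As a rough illustration of the collapsing idea (not the actual _collapse_indices implementation), numeric indices can be grouped under placeholder patterns:

import re
from collections import defaultdict

def collapse(names):
    """Group names like simplex_params(1,2) under a single pattern simplex_params(N, M)."""
    placeholders = ["N", "M", "K"]
    patterns = defaultdict(list)
    for name in names:
        idx = 0
        def repl(match):
            nonlocal idx
            parts = [p.strip() for p in match.group(1).split(",")]
            out = []
            for part in parts:
                if part.isdigit():
                    out.append(placeholders[min(idx, len(placeholders) - 1)])
                    idx += 1
                else:
                    out.append(part)
            return "(" + ", ".join(out) + ")"
        patterns[re.sub(r"\(([^)]*)\)", repl, name)].append(name)
    return patterns

# Many concrete index combinations collapse to one pattern row each:
print(collapse(["schlieren_alpha(1)", "schlieren_alpha(2)", "simplex_params(1,2)"]))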
@cubic-dev-ai cubic-dev-ai bot left a comment

1 issue found across 2 files (changes from recent commits).

Prompt for AI agents (all issues)

Check if these issues are valid — if so, understand the root cause of each and fix them.


<file name="docs/documentation/parameters.md">

<violation number="1" location="docs/documentation/parameters.md:63">
P2: The documentation now lists parameters with `%%` (e.g., `patch_icpp(N)%%Bx`), which appears to be a typo and misdocuments the actual case file syntax (`%`). This will mislead users copying parameter names.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

Previously, small families (<20 params) showed full tables even when
they were indexed. Now we always collapse patterns if it reduces rows.

Example: schlieren_alpha with 10 params now shows 1 pattern row
instead of 10 individual rows.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@toolchain/mfc/params/generators/docs_gen.py`:
- Line 137: The type annotation for the variable families incorrectly uses the
builtin any instead of typing.Any; change the annotation from Dict[str,
List[Tuple[str, any]]] to Dict[str, List[Tuple[str, Any]]] and ensure Any is
imported from typing (add or update the import statement that provides Any) so
the annotation references typing.Any; locate the declaration of families in
docs_gen.py to apply the fix.
- Around line 49-74: The nested block that handles parenthesized numeric indices
inside the while i < len(name) loop should be extracted into a helper function
to reduce nesting and fix R1702; create a function (e.g.,
handle_parenthesized_indices or parse_parenthesized_indices) that takes (name,
start_index, placeholders, placeholder_idx), moves the inner logic that collects
paren_content, validates numeric parts, builds new_parts and returns the
replacement string (like '(?, ?)' or with placeholders), the updated index j+1,
and the updated placeholder_idx; then replace the inlined code in the main loop
with a single call to that helper and append the returned segment to result or
fall back to the original flow when the helper indicates no valid parenthesized
numeric indices.
🧹 Nitpick comments (2)
toolchain/mfc/params/generators/docs_gen.py (2)

15-15: Remove unused noqa directive.

Ruff reports the # noqa: F401 comment is unnecessary since no F401 warning is being suppressed. The import is actually used (for side effects of registering definitions).

🧹 Proposed fix
-from .. import definitions  # noqa: F401  pylint: disable=unused-import
+from .. import definitions  # pylint: disable=unused-import

24-26: Simplify unnecessary any() over single-element list.

The expression any(... for _ in [1]) iterates over a single element, making any() pointless. This looks like a refactoring leftover.

🧹 Proposed fix
-        if any(name.startswith(f"{base}(") or name.startswith(f"{base}%") for _ in [1]):
+        if name.startswith(f"{base}(") or name.startswith(f"{base}%"):
             return base

- Change `any` to `Any` from typing module
- Extract _parse_paren_content helper to reduce nesting in _collapse_indices
- Fixes R1702 pylint warning (too many nested blocks)

Co-Authored-By: Claude Opus 4.5 <[email protected]>
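One possible shape for such a helper, assuming (not verified against the actual file) that it rewrites an all-numeric index list and reports how many placeholders it consumed:

from typing import List, Tuple

def _parse_paren_content(content: str, placeholders: List[str], start: int) -> Tuple[str, int, bool]:
    """Rewrite '1, 2' as 'N, M'; return (new content, next placeholder index, applied?)."""
    parts = [p.strip() for p in content.split(",")]
    if not all(p.isdigit() for p in parts):
        return content, start, False
    new_parts = [placeholders[min(start + i, len(placeholders) - 1)] for i in range(len(parts))]
    return ", ".join(new_parts), start + len(parts), True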
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🤖 Fix all issues with AI agents
In `@docs/documentation/parameters.md`:
- Around line 165-180: Several fluid_pp parameters (`fluid_pp(N)%%D_v`, `%%M_v`,
`%%cp_v`, `%%gamma_v`, `%%gamma_v`, `%%k_v`, `%%mu_v`, `%%mul0`, `%%pv`, `%%ss`)
and general-family params (`Ca`, `Re_inv`, `Web`) are missing human-readable
descriptions in the parameter registry; open the parameter descriptions registry
(toolchain/mfc/params/descriptions.py) and add concise, validated descriptions
and units for each symbol (e.g., D_v = vapor diffusivity [m^2/s], M_v = vapor
molecular weight [kg/mol], cp_v = vapor specific heat at constant pressure
[J/(kg·K)], gamma_v = vapor heat capacity ratio, k_v = vapor thermal
conductivity [W/(m·K)], mu_v = vapor dynamic viscosity [Pa·s], mul0 = reference
viscosity, pv = vapor pressure, ss = surface tension source/flag or clarify
exact meaning, Ca = Capillary number, Re_inv = inverse Reynolds number, Web =
Weber number), verifying physical meanings against source physics before
committing so the generated docs populate correctly.

In `@toolchain/mfc/params/generators/docs_gen.py`:
- Around line 233-242: The description text returned by get_description(example)
isn't escaped for percent signs before being appended to lines, so call the
existing _escape_percent on the (possibly truncated) desc variable before using
it in lines.append to prevent Doxygen format tokens; update the block that
builds the table row (the loop iterating over patterns/examples that sets desc,
truncates it, and appends to lines) to replace desc with _escape_percent(desc)
and apply the identical change to the other similar block later in the file that
builds example rows (the second block that handles examples/patterns around the
same codepath).
- Line 15: Remove the unused "noqa: F401" directive on the import statement in
docs_gen.py: keep the intentional side-effect import "from .. import
definitions" and retain the "pylint: disable=unused-import" comment but delete
the "# noqa: F401" portion so Ruff no longer flags an unused noqa; ensure the
line still imports definitions for side effects.

Comment on lines +165 to +180
| `fluid_pp(N)%%D_v` | `fluid_pp(1)%%D_v` | |
| `fluid_pp(N)%%G` | `fluid_pp(1)%%G` | Shear modulus for fluid 1 |
| `fluid_pp(N)%%M_v` | `fluid_pp(1)%%M_v` | |
| `fluid_pp(N)%%Re(M)` | `fluid_pp(1)%%Re(1)` | Reynolds number component 1 for fluid 1 |
| `fluid_pp(N)%%cp_v` | `fluid_pp(1)%%cp_v` | |
| `fluid_pp(N)%%cv` | `fluid_pp(1)%%cv` | Specific heat at constant volume for fluid 1 |
| `fluid_pp(N)%%gamma` | `fluid_pp(1)%%gamma` | Specific heat ratio for fluid 1 |
| `fluid_pp(N)%%gamma_v` | `fluid_pp(1)%%gamma_v` | |
| `fluid_pp(N)%%k_v` | `fluid_pp(1)%%k_v` | |
| `fluid_pp(N)%%mu_v` | `fluid_pp(1)%%mu_v` | |
| `fluid_pp(N)%%mul0` | `fluid_pp(1)%%mul0` | |
| `fluid_pp(N)%%pi_inf` | `fluid_pp(1)%%pi_inf` | Stiffness pressure for fluid 1 |
| `fluid_pp(N)%%pv` | `fluid_pp(1)%%pv` | |
| `fluid_pp(N)%%qv` | `fluid_pp(1)%%qv` | Heat of formation for fluid 1 |
| `fluid_pp(N)%%qvp` | `fluid_pp(1)%%qvp` | Heat of formation prime for fluid 1 |
| `fluid_pp(N)%%ss` | `fluid_pp(1)%%ss` | |

⚠️ Potential issue | 🟠 Major

Multiple parameters lack descriptions.

Several fluid_pp parameters have empty descriptions (lines 165, 167, 169, 172-175, 177, 180): D_v, M_v, cp_v, gamma_v, k_v, mu_v, mul0, pv, and ss. Users consulting this reference won't understand what these parameters control.

Similarly, in the general family (lines 690, 693-694), parameters Ca, Re_inv, and Web also lack descriptions.

These descriptions should be added to the parameter registry (likely toolchain/mfc/params/descriptions.py based on the PR context) so they appear in the generated documentation.

📝 Suggested action

Add descriptions for these parameters in the source registry. For example:

  • D_v: Likely vapor diffusivity
  • M_v: Likely vapor molecular weight
  • cp_v: Likely vapor specific heat at constant pressure
  • gamma_v: Likely vapor heat capacity ratio
  • Ca: Likely Capillary number
  • Re_inv: Likely inverse Reynolds number
  • Web: Likely Weber number

Verify the correct physical meanings and update the registry accordingly.
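A sketch of how such entries might be added; the dictionary name and entry format below are assumptions (the real structure lives in toolchain/mfc/params/descriptions.py), and the physical meanings should be confirmed against the solver source before committing:

# Hypothetical additions to the manual-descriptions dictionary in descriptions.py.
VAPOR_AND_DIMENSIONLESS_DESCRIPTIONS = {
    "fluid_pp(N)%D_v":     "Vapor diffusivity [m^2/s]",
    "fluid_pp(N)%M_v":     "Vapor molecular weight [kg/mol]",
    "fluid_pp(N)%cp_v":    "Vapor specific heat at constant pressure [J/(kg K)]",
    "fluid_pp(N)%gamma_v": "Vapor heat capacity ratio",
    "fluid_pp(N)%k_v":     "Vapor thermal conductivity [W/(m K)]",
    "fluid_pp(N)%mu_v":    "Vapor dynamic viscosity [Pa s]",
    "Ca":                  "Capillary number",
    "Re_inv":              "Inverse Reynolds number",
    "Web":                 "Weber number",
}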

🧰 Tools
🪛 markdownlint-cli2 (0.20.0)

[warning] MD060 (table-column-style): table pipes have extra space to the left and right of cells for style "compact" on lines 165, 167, 169, 172, 173, 174, 175, 177, and 180.

🤖 Prompt for AI Agents
In `@docs/documentation/parameters.md` around lines 165 - 180, Several fluid_pp
parameters (`fluid_pp(N)%%D_v`, `%%M_v`, `%%cp_v`, `%%gamma_v`, `%%gamma_v`,
`%%k_v`, `%%mu_v`, `%%mul0`, `%%pv`, `%%ss`) and general-family params (`Ca`,
`Re_inv`, `Web`) are missing human-readable descriptions in the parameter
registry; open the parameter descriptions registry
(toolchain/mfc/params/descriptions.py) and add concise, validated descriptions
and units for each symbol (e.g., D_v = vapor diffusivity [m^2/s], M_v = vapor
molecular weight [kg/mol], cp_v = vapor specific heat at constant pressure
[J/(kg·K)], gamma_v = vapor heat capacity ratio, k_v = vapor thermal
conductivity [W/(m·K)], mu_v = vapor dynamic viscosity [Pa·s], mul0 = reference
viscosity, pv = vapor pressure, ss = surface tension source/flag or clarify
exact meaning, Ca = Capillary number, Re_inv = inverse Reynolds number, Web =
Weber number), verifying physical meanings against source physics before
committing so the generated docs populate correctly.

from ..schema import ParamType
from ..registry import REGISTRY
from ..descriptions import get_description
from .. import definitions # noqa: F401 pylint: disable=unused-import

⚠️ Potential issue | 🟡 Minor

Remove unused noqa directive to satisfy Ruff.

Ruff flags the # noqa: F401 as unused; the pylint suppression is sufficient if the import is intentionally for side effects.

🧹 Proposed fix
-from .. import definitions  # noqa: F401  pylint: disable=unused-import
+from .. import definitions  # pylint: disable=unused-import
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
from .. import definitions # noqa: F401 pylint: disable=unused-import
from .. import definitions # pylint: disable=unused-import
🧰 Tools
🪛 Ruff (0.14.14)

[warning] 15-15: Unused noqa directive (non-enabled: F401)

Remove unused noqa directive

(RUF100)

🤖 Prompt for AI Agents
In `@toolchain/mfc/params/generators/docs_gen.py` at line 15, Remove the unused
"noqa: F401" directive on the import statement in docs_gen.py: keep the
intentional side-effect import "from .. import definitions" and retain the
"pylint: disable=unused-import" comment but delete the "# noqa: F401" portion so
Ruff no longer flags an unused noqa; ensure the line still imports definitions
for side effects.

Comment on lines +233 to +242
for pattern, examples in sorted(patterns.items()):
    example = examples[0]
    desc = get_description(example) or ""
    # Truncate long descriptions
    if len(desc) > 60:
        desc = desc[:57] + "..."
    # Escape % for Doxygen
    pattern_escaped = _escape_percent(pattern)
    example_escaped = _escape_percent(example)
    lines.append(f"| `{pattern_escaped}` | `{example_escaped}` | {desc} |")

⚠️ Potential issue | 🟡 Minor

Escape % in descriptions to avoid Doxygen formatting issues.

You already escape % in patterns/names, but descriptions can include % too; those will still be treated as format tokens by Doxygen. Consider escaping after truncation.

🛠️ Proposed fix
-                if len(desc) > 60:
-                    desc = desc[:57] + "..."
+                if len(desc) > 60:
+                    desc = desc[:57] + "..."
+                desc = _escape_percent(desc)
@@
-                if len(desc) > 80:
-                    desc = desc[:77] + "..."
+                if len(desc) > 80:
+                    desc = desc[:77] + "..."
+                desc = _escape_percent(desc)

Also applies to: 250-261

🤖 Prompt for AI Agents
In `@toolchain/mfc/params/generators/docs_gen.py` around lines 233 - 242, The
description text returned by get_description(example) isn't escaped for percent
signs before being appended to lines, so call the existing _escape_percent on
the (possibly truncated) desc variable before using it in lines.append to
prevent Doxygen format tokens; update the block that builds the table row (the
loop iterating over patterns/examples that sets desc, truncates it, and appends
to lines) to replace desc with _escape_percent(desc) and apply the identical
change to the other similar block later in the file that builds example rows
(the second block that handles examples/patterns around the same codepath).
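A small usage sketch of the proposed ordering (truncate first, then escape); the description string and helper below are hypothetical stand-ins:

def _escape_percent(text: str) -> str:
    return text.replace("%", "%%")

desc = "Volume fraction in % of the total cell volume for patch 1"  # hypothetical description
if len(desc) > 60:
    desc = desc[:57] + "..."
desc = _escape_percent(desc)  # escaping after truncation avoids splitting a doubled %% at the cut
print(desc)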


Labels

Review effort 4/5 size:XXL This PR changes 1000+ lines, ignoring generated files

Development

Successfully merging this pull request may close these issues.

1 participant