
Conversation

@FBumann FBumann commented Jan 3, 2026

Description

Brief description of the changes in this PR.

Type of Change

  • Bug fix
  • New feature
  • Documentation update
  • Code refactoring

Related Issues

Closes #(issue number)

Testing

  • I have tested my changes
  • Existing tests still pass

Checklist

  • My code follows the project style
  • I have updated documentation if needed
  • I have added tests for new functionality (if applicable)

Summary by CodeRabbit

  • Chores
    • Optimized documentation build workflow with caching and parallel execution for faster deployment cycles.


FBumann added 21 commits January 1, 2026 21:30
…ation, reducing code duplication while maintaining the same functionality (data preparation, color resolution from components, PlotResult wrapping).
… area(), and duration_curve() methods in both DatasetPlotAccessor and DataArrayPlotAccessor

  2. scatter() method - Plots two variables against each other with x and y parameters
  3. pie() method - Creates pie charts from aggregated (scalar) dataset values, e.g. ds.sum('time').fxplot.pie()
  4. duration_curve() method - Sorts values along the time dimension in descending order, with optional normalize parameter for percentage x-axis
  5. CONFIG.Plotting.default_line_shape - New config option (default 'hv') that controls the default line shape for line(), area(), and duration_curve() methods
  1. X-axis is now determined first using CONFIG.Plotting.x_dim_priority
  2. Facets are resolved from remaining dimensions (x-axis excluded)

  x_dim_priority expanded:
  x_dim_priority = ('time', 'duration', 'duration_pct', 'period', 'scenario', 'cluster')
  - Time-like dims first, then common grouping dims as fallback
  - variable stays excluded (it's used for color, not x-axis)

  _get_x_dim() refactored:
  - Now takes dims: list[str] instead of a DataFrame
  - More versatile - works with any list of dimension names
  - Add `x` parameter to bar/stacked_bar/line/area for explicit x-axis control
  - Add CONFIG.Plotting.x_dim_priority for auto x-axis selection
    Default: ('time', 'duration', 'duration_pct', 'period', 'scenario', 'cluster')
  - X-axis determined first, facets resolved from remaining dimensions
  - Refactor _get_x_column -> _get_x_dim (takes dim list, more versatile)
  - Support scalar data (no dims) by using 'variable' as x-axis
  - Skip color='variable' when x='variable' to avoid double encoding
  - Fix _dataset_to_long_df to use dims (not just coords) as id_vars
  - Ensure px_kwargs properly overrides all defaults (color, facets, etc.)
…wargs} so user can override

  2. scatter unused colors - Removed the unused parameter
  3. to_duration_curve sorting - Changed [::-1] to np.flip(..., axis=time_axis) for correct multi-dimensional handling
  4. DataArrayPlotAccessor.heatmap - Same kwarg merge fix
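Item 3 above (replacing `[::-1]` with `np.flip(..., axis=time_axis)`) can be illustrated with a minimal sketch — a hypothetical standalone helper, not the flixopt implementation:

```python
import numpy as np

def to_duration_curve(values: np.ndarray, time_axis: int = 0) -> np.ndarray:
    """Sort values in descending order along the time axis.

    np.flip(..., axis=time_axis) reverses the correct axis on
    multi-dimensional arrays, whereas a bare [::-1] always reverses
    the first axis regardless of where the time dimension sits.
    """
    return np.flip(np.sort(values, axis=time_axis), axis=time_axis)
```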
…ork-v2+plotting

# Conflicts:
#	docs/notebooks/08a-aggregation.ipynb
#	docs/notebooks/08b-rolling-horizon.ipynb
#	docs/notebooks/08c-clustering.ipynb
#	docs/notebooks/08c2-clustering-storage-modes.ipynb
#	docs/notebooks/08d-clustering-multiperiod.ipynb
#	docs/notebooks/08e-clustering-internals.ipynb
  .github/workflows/docs.yaml

  1. Notebook caching - Caches executed notebooks using a hash of notebooks + source code
  2. Parallel execution - Runs jupyter execute with -P 4 (4 notebooks in parallel)
  3. Skip mkdocs-jupyter execution - Sets MKDOCS_JUPYTER_EXECUTE=false since notebooks are pre-executed
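The cache key from step 1 can be sketched in Python as a rough equivalent of the workflow's shell pipeline (the function name is hypothetical; the workflow itself uses `find | sha256sum`). Sorting the file paths keeps the key deterministic regardless of filesystem iteration order:

```python
import hashlib
from pathlib import Path

def notebook_cache_key(notebook_dir: str = 'docs/notebooks',
                       source_dir: str = 'flixopt') -> str:
    """Hash notebook and source file contents into one cache key.

    Sorting the paths makes the key independent of filesystem order,
    so identical content always yields the same cache key.
    """
    digest = hashlib.sha256()
    paths = (sorted(Path(notebook_dir).rglob('*.ipynb'))
             + sorted(Path(source_dir).rglob('*.py')))
    for path in paths:
        digest.update(path.read_bytes())
    return digest.hexdigest()
```

Either notebook edits or source changes then invalidate the cache, matching the intent described above.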
coderabbitai bot commented Jan 3, 2026

Important

Review skipped

Auto reviews are disabled on base/target branches other than the default branch.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

📝 Walkthrough

Walkthrough

The documentation build workflow now implements caching for executed notebooks and parallel execution when cache misses occur. The notebook execution flag in the mkdocs configuration is delegated to an environment variable, allowing the CI pipeline to control execution timing rather than always executing.

Changes

Cohort / File(s) Summary

  • GitHub Actions workflow optimization — .github/workflows/docs.yaml
    Adds notebook execution caching using a hash computed from notebook and source files; conditionally executes notebooks in parallel (4 concurrent) only on cache miss; injects the MKDOCS_JUPYTER_EXECUTE environment variable in the Build and Deploy steps. Changes are applied to both the build and deploy job sequences.
  • Configuration environment variable delegation — mkdocs.yml
    Replaces the hardcoded notebook execution flag with an environment variable reference: !ENV [MKDOCS_JUPYTER_EXECUTE, true]. Preserves the default behavior (true) while allowing CI override.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~12 minutes

Poem

🐰 Swift whiskers twitch with glee,
Cached notebooks race in harmony!
Four at once, no time to waste,
Building docs at blistering pace,
Environment flags guide the way—
Docs redeploy faster each day! ✨

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)

  • Description check — ⚠️ Warning: The description is entirely composed of unfilled template placeholders with no actual content describing the changes; only checklist items are marked without substance. Resolution: Replace the template placeholders with meaningful descriptions of the actual changes, specify the type of change (e.g., 'New feature'), list related issue numbers, and describe the testing performed.

✅ Passed checks (2 passed)

  • Docstring Coverage — ✅ Passed: No functions found in the changed files to evaluate docstring coverage; skipping the docstring coverage check.
  • Title check — ✅ Passed: The title 'ci: Speedup Notebook execution' directly and accurately reflects the main changes in the pull request, which focus on optimizing notebook execution in CI through caching and parallelization.



FBumann commented Jan 3, 2026

@coderabbitai review


coderabbitai bot commented Jan 3, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
.github/workflows/docs.yaml (2)

74-79: Consider explicitly aligning notebook execution with mkdocs-jupyter configuration.

The parallel execution finds all .ipynb files recursively, which may include notebooks that shouldn't be processed. While the current setup appears safe based on the nav structure, explicitly aligning with mkdocs-jupyter's ignore patterns would make the behavior more maintainable and prevent accidental execution of draft or test notebooks.

🔎 Optional improvement to align with ignore patterns
       - name: Execute notebooks in parallel
         if: steps.notebook-cache.outputs.cache-hit != 'true'
         run: |
           # Execute all notebooks in parallel (4 at a time)
-          find docs/notebooks -name '*.ipynb' -print0 | \
+          find docs/notebooks -name '*.ipynb' ! -path '*/data/*' -print0 | \
             xargs -0 -P 4 -I {} jupyter execute --inplace {}

This explicitly excludes the data/ subdirectory, matching the mkdocs-jupyter ignore pattern.


121-148: Notebook preparation is correctly duplicated for job isolation.

The deploy job independently implements the same caching and parallel execution logic as the build job. While this creates some duplication, it ensures the deploy job can run independently and maintains workflow clarity.

If you anticipate further changes to the notebook execution logic, consider extracting it into a composite action to reduce maintenance overhead.

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 93e7152 and fd8ad23.

📒 Files selected for processing (2)
  • .github/workflows/docs.yaml
  • mkdocs.yml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
  • GitHub Check: test (3.12)
  • GitHub Check: test (3.11)
  • GitHub Check: test (3.13)
  • GitHub Check: test (3.14)
  • GitHub Check: Build documentation
🔇 Additional comments (3)
mkdocs.yml (1)

233-233: Good pattern for CI-controlled execution.

The environment variable delegation with a sensible default (true) preserves local development behavior while allowing CI to optimize by pre-executing notebooks in parallel and then skipping re-execution during the build step.

.github/workflows/docs.yaml (2)

67-72: Caching strategy is well-designed.

The cache path correctly targets the notebooks directory, and the key appropriately includes both notebook content and source code changes, ensuring cache invalidation when either changes.


81-84: Correct integration with pre-execution strategy.

Setting MKDOCS_JUPYTER_EXECUTE="false" properly prevents mkdocs from re-executing the notebooks that were already executed in parallel, optimizing the build time.

Comment on lines 60 to 65
      - name: Get notebook cache key
        id: notebook-cache-key
        run: |
          # Hash notebooks + flixopt source code
          HASH=$(find docs/notebooks -name '*.ipynb' -exec cat {} \; | cat - <(find flixopt -name '*.py' -exec cat {} \;) | sha256sum | cut -d' ' -f1)
          echo "hash=$HASH" >> $GITHUB_OUTPUT

⚠️ Potential issue | 🟠 Major

Sort files before hashing to ensure stable cache keys.

The cache key computation concatenates file contents without sorting, which means filesystem iteration order affects the hash. This can lead to different cache keys for identical content across runs, reducing cache effectiveness.

🔎 Proposed fix to stabilize cache keys
       - name: Get notebook cache key
         id: notebook-cache-key
         run: |
           # Hash notebooks + flixopt source code
-          HASH=$(find docs/notebooks -name '*.ipynb' -exec cat {} \; | cat - <(find flixopt -name '*.py' -exec cat {} \;) | sha256sum | cut -d' ' -f1)
+          HASH=$(find docs/notebooks -name '*.ipynb' | sort | xargs cat | cat - <(find flixopt -name '*.py' | sort | xargs cat) | sha256sum | cut -d' ' -f1)
           echo "hash=$HASH" >> $GITHUB_OUTPUT
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change (before):
      - name: Get notebook cache key
        id: notebook-cache-key
        run: |
          # Hash notebooks + flixopt source code
          HASH=$(find docs/notebooks -name '*.ipynb' -exec cat {} \; | cat - <(find flixopt -name '*.py' -exec cat {} \;) | sha256sum | cut -d' ' -f1)
          echo "hash=$HASH" >> $GITHUB_OUTPUT
Suggested change (after):
      - name: Get notebook cache key
        id: notebook-cache-key
        run: |
          # Hash notebooks + flixopt source code
          HASH=$(find docs/notebooks -name '*.ipynb' | sort | xargs cat | cat - <(find flixopt -name '*.py' | sort | xargs cat) | sha256sum | cut -d' ' -f1)
          echo "hash=$HASH" >> $GITHUB_OUTPUT
🤖 Prompt for AI Agents
.github/workflows/docs.yaml around lines 60 to 65: the hash is computed by
concatenating notebook and flixopt file contents in filesystem order, which is
non-deterministic; change the pipeline to list files first, sort the filenames
deterministically (e.g., using find to print paths and sort), then cat the files
in that sorted order (use a null-safe approach if filenames may contain special
characters) so the same set of files always yields the same hash.

@FBumann FBumann changed the title Feature/aggregate rework v2+plotting+speedup ci ci: Speedup Notebook execution Jan 4, 2026
@FBumann FBumann merged commit bf6962c into feature/aggregate-rework-v2 Jan 4, 2026
10 checks passed
FBumann added a commit that referenced this pull request Jan 5, 2026
Improve CI workflow for faster notebook execution in documentation builds.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <[email protected]>
@FBumann FBumann mentioned this pull request Jan 5, 2026
FBumann added a commit that referenced this pull request Jan 6, 2026
Improve CI workflow for faster notebook execution in documentation builds.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <[email protected]>
FBumann added a commit that referenced this pull request Jan 6, 2026
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <[email protected]>
FBumann added a commit that referenced this pull request Jan 6, 2026
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <[email protected]>
