📚🩹 Fix rendering of Result with nbsphinx (workaround) #1244

Merged
s-weigand merged 3 commits into glotaran:main from s-weigand:fix-docs-rendering on Feb 18, 2023
Conversation


s-weigand (Member) commented Feb 16, 2023

This is just a hack to make the docs render properly with the markdown repr of Result.

See the current docs on main (they will be different in the future when looking back at this PR) and the docs build for this PR (might be deleted).

Down the rabbit hole

The problem stems from the interaction of the markdown repr of Result with our notebook docs plugin (nbsphinx), which uses pandoc to transform markdown into reST. That intermediate format is then picked up by sphinx (the doc building framework) to create HTML, LaTeX, etc.
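For context, nbsphinx gets that markdown string through IPython's rich display protocol: any object defining a `_repr_markdown_` method is rendered as markdown. A minimal sketch of the mechanism (the class and its content are illustrative stand-ins, not the real Result implementation):

```python
class FakeResult:
    """Illustrative stand-in for Result: any object that defines
    _repr_markdown_ is rendered as markdown by Jupyter/nbsphinx."""

    def _repr_markdown_(self) -> str:
        # A table followed by a collapsible HTML block, mirroring
        # the structure described in this PR.
        return (
            "| Optimization | Result |\n"
            "|---|---|\n"
            "| ... | ... |\n"
            "\n"
            "<details>\n"
            "\n"
            "# Model\n"
            "## Modelitem\n"
            "...\n"
            "</details>"
        )

print(FakeResult()._repr_markdown_())
```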

The markdown repr of Result looks something like this

TABLE<in md>
<details>
# Model
## Modelitem
...
</details>

This is totally valid markdown, since markdown is a superset of HTML.
When we build the docs, this string gets extracted by nbsphinx, which then uses pandoc to transform it into reST that looks like this:

TABLE<in reST>

.. raw:: html

   <details>

Model
~~~~~

Modelitem
^^^^^^^^^
...

.. raw:: html

   </details>

So far so good, but when sphinx takes over to generate HTML the problem starts, since it assumes that the raw </details> is part of the innermost section:

TABLE<in html>

<details>
<section>
<h2>Model</h2>
<section>
<h3>Modelitem</h3>
...
</details>
</section>
</section>
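The mis-nesting can also be seen mechanically rather than by eye: a small tag-balance check over the broken snippet flags the stray </details>. This is an illustration only, using Python's stdlib HTMLParser, not part of the PR:

```python
from html.parser import HTMLParser

class NestingChecker(HTMLParser):
    """Push open tags on a stack; report closing tags that do not
    match the innermost open tag."""

    def __init__(self) -> None:
        super().__init__()
        self.stack: list[str] = []
        self.errors: list[str] = []

    def handle_starttag(self, tag, attrs):
        self.stack.append(tag)

    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()
        else:
            self.errors.append(f"unexpected </{tag}>, open: {self.stack}")

# The broken nesting that sphinx produces, as shown above.
broken = (
    "<details><section><h2>Model</h2>"
    "<section><h3>Modelitem</h3>"
    "</details></section></section>"
)
checker = NestingChecker()
checker.feed(broken)
print(checker.errors)
# One mismatch: </details> arrives while two <section> tags are still open.
```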

This breaks the opening and closing order in such a way that the browser moves the last cells out of the main content wrapper into the document root, which is why it looks so messed up.
The very dirty workaround is to wrap </details> in section tags: first closing the two parent sections and then opening two new empty ones that get closed by the existing closing tags.
So the new markdown would be:

TABLE<in md>

<details>
# Model
## Modelitem
...

</section>
</section>
</details>
<section>
<section>

Which then gives us the following reST

TABLE<in reST>

.. raw:: html

   <details>

Model
~~~~~

Modelitem
^^^^^^^^^
...

.. raw:: html

   </section>
   </section>
   </details>
   <section>
   <section>

Which then finally gets transformed into non-broken HTML:

TABLE<in html>

<details>
<section>
<h2>Model</h2>
<section>
<h3>Modelitem</h3>
...
</section>
</section>
</details>
<section>
<section>
</section>
</section>
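In code, the workaround boils down to a string replacement on the emitted markdown. A hypothetical sketch (the helper name is made up; the actual change lives in glotaran/project/result.py and may differ):

```python
# Hypothetical helper illustrating the wrapping trick; not the
# actual pyglotaran implementation.
def patch_details_for_sphinx(markdown: str) -> str:
    """Close the two sphinx-generated <section> wrappers before
    </details> and reopen them afterwards, so the final HTML nests
    correctly while plain markdown renderers ignore the extra tags."""
    replacement = "</section>\n</section>\n</details>\n<section>\n<section>"
    return markdown.replace("</details>", replacement)

patched = patch_details_for_sphinx("<details>\n# Model\n...\n</details>")
print(patched)
```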

The markdown still renders fine in VS Code and JupyterLab, but it is a bloody mess, just to compensate for sphinx's wrong assumption 🤷‍♀️
I looked into all the steps of the translation process, and there seems to be no quick and easy way around it.

Change summary

Checklist

  • ✔️ Passing the tests (mandatory for all PR's)

s-weigand requested a review from jsnel as a code owner February 16, 2023 00:25

sourcery-ai bot commented Feb 16, 2023

Sourcery Code Quality Report

❌  Merging this PR will decrease code quality in the affected files by 0.15%.

Quality metrics   Before      After       Change
Complexity        4.94 ⭐     4.94 ⭐     0.00
Method Length     67.09 🙂    67.91 🙂    0.82 👎
Working memory    7.65 🙂     7.65 🙂     0.00
Quality           71.83% 🙂   71.68% 🙂   -0.15% 👎

Other metrics     Before      After       Change
Lines             253         261         8

Changed files                Quality Before   Quality After   Quality Change
glotaran/project/result.py   71.83% 🙂        71.68% 🙂       -0.15% 👎

Here are some functions in these files that still need a tune-up:

File                         Function         Complexity  Length  Working Memory  Quality
glotaran/project/result.py   Result.markdown  10 🙂       251 ⛔   14 😞           38.90% 😞
Recommendation: Try splitting into smaller methods. Extract out complex expressions.

Legend and Explanation

The emojis denote the absolute quality of the code:

  • ⭐ excellent
  • 🙂 good
  • 😞 poor
  • ⛔ very poor

The 👍 and 👎 indicate whether the quality has improved or gotten worse with this pull request.


Please see our documentation for details on how these metrics are calculated.


@github-actions

Binder 👈 Launch a binder notebook on branch s-weigand/pyglotaran/fix-docs-rendering


github-actions bot commented Feb 16, 2023

Benchmark is done. Check out the benchmark result page.
Benchmark differences below 5% might be due to CI noise.

Benchmark diff v0.6.0 vs. main

Parametrized benchmark signatures:

BenchmarkOptimize.time_optimize(index_dependent, grouped, weight)

All benchmarks:

       before           after         ratio
     [6c3c390e]       [e95baf62]
     <v0.6.0>                   
!      41.8±0.2ms           failed      n/a  BenchmarkOptimize.time_optimize(False, False, False)
!      44.9±0.5ms           failed      n/a  BenchmarkOptimize.time_optimize(False, False, True)
!      41.8±0.1ms           failed      n/a  BenchmarkOptimize.time_optimize(False, True, False)
!      44.3±0.3ms           failed      n/a  BenchmarkOptimize.time_optimize(False, True, True)
!        51.9±2ms           failed      n/a  BenchmarkOptimize.time_optimize(True, False, False)
!       61.7±10ms           failed      n/a  BenchmarkOptimize.time_optimize(True, False, True)
!      51.7±0.3ms           failed      n/a  BenchmarkOptimize.time_optimize(True, True, False)
!       61.3±30ms           failed      n/a  BenchmarkOptimize.time_optimize(True, True, True)
             205M             208M     1.01  IntegrationTwoDatasets.peakmem_optimize
-      1.98±0.04s       1.00±0.02s     0.50  IntegrationTwoDatasets.time_optimize

Benchmark diff main vs. PR

Parametrized benchmark signatures:

BenchmarkOptimize.time_optimize(index_dependent, grouped, weight)

All benchmarks:

       before           after         ratio
     [e95baf62]       [a3fd145e]
           failed           failed      n/a  BenchmarkOptimize.time_optimize(False, False, False)
           failed           failed      n/a  BenchmarkOptimize.time_optimize(False, False, True)
           failed           failed      n/a  BenchmarkOptimize.time_optimize(False, True, False)
           failed           failed      n/a  BenchmarkOptimize.time_optimize(False, True, True)
           failed           failed      n/a  BenchmarkOptimize.time_optimize(True, False, False)
           failed           failed      n/a  BenchmarkOptimize.time_optimize(True, False, True)
           failed           failed      n/a  BenchmarkOptimize.time_optimize(True, True, False)
           failed           failed      n/a  BenchmarkOptimize.time_optimize(True, True, True)
             208M             206M     0.99  IntegrationTwoDatasets.peakmem_optimize
       1.00±0.02s         983±20ms     0.98  IntegrationTwoDatasets.time_optimize


codecov bot commented Feb 16, 2023

Codecov Report

Base: 88.1% // Head: 88.1% // No change to project coverage 👍

Coverage data is based on head (a3fd145) compared to base (e95baf6).
Patch coverage: 100.0% of modified lines in pull request are covered.

Additional details and impacted files
@@          Coverage Diff          @@
##            main   #1244   +/-   ##
=====================================
  Coverage   88.1%   88.1%           
=====================================
  Files        104     104           
  Lines       5064    5064           
  Branches     842     842           
=====================================
  Hits        4462    4462           
  Misses       484     484           
  Partials     118     118           
Impacted Files               Coverage Δ
glotaran/project/result.py   90.9% <100.0%> (ø)


☔ View full report at Codecov.

s-weigand marked this pull request as draft February 16, 2023 00:42
s-weigand marked this pull request as ready for review February 16, 2023 14:33
s-weigand requested a review from a team as a code owner February 16, 2023 14:33
@sonarqubecloud

Kudos, SonarCloud Quality Gate passed!

Bugs: 0 (rating A)
Vulnerabilities: 0 (rating A)
Security Hotspots: 0 (rating A)
Code Smells: 0 (rating A)

No coverage information
0.0% duplication

jsnel (Member) left a comment


What can I say? It's a workaround. We'll stay on the lookout for alternatives.

s-weigand merged commit f435b0e into glotaran:main on Feb 18, 2023
s-weigand deleted the fix-docs-rendering branch February 18, 2023 18:44