Conversation
Welcome to Codecov 🎉 Once you merge this PR into your default branch, you're all set! Codecov will compare coverage reports and display results in all future pull requests. ℹ️ You can also turn on project coverage checks and project coverage reporting in the pull request comment. Thanks for integrating Codecov - We've got you covered ☂️
digicosmos86
left a comment
Thank you! I am soooooo happy with the coverage of our code! I just mentioned in the comment that we have moved away from using conda as our test environment. Can you take a look at main?
run_tests:
  runs-on: ubuntu-latest
  if: ${{ ! contains(github.event.head_commit.message, '[skip slow tests]') }}
  if: ${{ ! contains(github.event.head_commit.message, '[skip fast tests]') }}
@digicosmos86 I am not sure this is needed. Can this line be dropped?
So the setup is now to run all tests? Can Codecov collect test results from separate runs? In principle, I'd want the fast tests to run first so the suite fails fast.
> Can Codecov collect test results from separate runs?
It seems like the tooling for that is not reliable -- see https://stackoverflow.com/questions/73688076/pytest-with-cov-append-flag-overwrites-coverage-file. Besides, we are using xdist to run the tests in parallel. We could mark the slow tests, but it's not clear to me that would matter to xdist. We can, however, get a fail-fast strategy with the --exitfirst flag. I'll add that.
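For reference, marking slow tests is just a decorator plus a marker expression at selection time. A minimal sketch, assuming a `slow` marker convention -- the test names and marker name here are illustrative, not taken from this repository:

```python
# test_example.py -- hypothetical module. The "slow" marker should be
# registered (e.g. under [tool.pytest.ini_options] markers) to avoid warnings.
import pytest


def test_fast_path():
    # Unmarked tests run under both `pytest` and `pytest -m "not slow"`.
    assert 1 + 1 == 2


@pytest.mark.slow
def test_full_simulation():
    # Deselected by `pytest -m "not slow"`; stand-in for expensive work.
    assert sum(range(10)) == 45
```

Note that `-m` only filters which tests are collected; whether xdist honors any ordering implied by markers is a separate question.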
That's great, but what about the order of execution? Would fast tests run before slow tests? Is there a way to determine that?
@digicosmos86 Is the approval here after the last change, or is another review needed?
We need to figure out the fast and slow tests situation. For now, let's discuss this a bit more.
I could try to run the tests in succession and see if the tool for combining coverage reports works for us. However, please keep in mind that the tests run in parallel, most of them are not slow, and a fail-fast strategy is already in place.
Yeah, the parallel tests do complicate things a bit, but even though parallelization may introduce some randomness, I think it's still possible to figure out the order in which tests are scheduled to run, which is connected to how pytest searches for and collects tests. There should be some documentation on that. I don't mind restructuring the tests a bit (i.e. putting all fast tests in one separate directory) so that fast tests run before the hundreds of slow ones.
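One way to make the collection order explicit, rather than relying on directory layout, is a `conftest.py` hook that stably reorders collected items. This is a sketch, assuming slow tests carry a hypothetical `@pytest.mark.slow` marker (not something this PR configures):

```python
# conftest.py sketch -- reorder collected tests so unmarked (fast) tests
# run before tests marked "slow". The marker name is an assumption.


def reorder_fast_first(items):
    # sorted() is stable, so fast tests keep their relative order,
    # followed by the slow ones in their relative order.
    return sorted(
        items, key=lambda item: item.get_closest_marker("slow") is not None
    )


def pytest_collection_modifyitems(config, items):
    # pytest calls this hook after collection; mutate the list in place.
    items[:] = reorder_fast_first(items)
```

Note that pytest-xdist distributes the collected list across workers, so this ordering only biases scheduling; it does not guarantee every fast test finishes before any slow one starts.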
@digicosmos86 Please see this and the changes to
Ah, got it! You did exactly that. Cool, then I am OK with merging. @AlexanderFengler would you like to give it another look?
TODO: