Conversation
Codecov Report

❌ Patch coverage is

Additional details and impacted files — Coverage Diff:

|          | main   | #320   | +/-    |
|----------|--------|--------|--------|
| Coverage | 86.54% | 86.43% | -0.11% |
| Files    | 14     | 14     |        |
| Lines    | 1850   | 1851   | +1     |
| Branches | 320    | 320    |        |
| Hits     | 1601   | 1600   | -1     |
| Misses   | 205    | 207    | +2     |
| Partials | 44     | 44     |        |

☔ View full report in Codecov by Sentry.
rkingsbury
left a comment
Looks great @vineetbansal, thank you. I'm happy to be moving away from 3rd-party GitHub actions; I just wasn't aware of the "stock" alternatives.
One question - prior to your work we were using `uv pip install` with a `resolution` kwarg to test the lowest and highest supported versions of the dependencies in `pyproject.toml` (in the post-process workflow, I believe, not in individual PRs). The idea behind this was to catch cases where a change we made in pyEQL made use of some new feature of a package or something, causing an "officially supported" (as per `pyproject.toml`) version to fail.
By now testing against `requirements.txt`, I think we will miss those cases. Do you see any value in adding that back in? I think your approach will functionally test the highest supported versions (which I think is what winds up in `requirements.txt`), but probably not the lowest.
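For context, `uv pip install` accepts a `--resolution` flag (`highest`, `lowest`, or `lowest-direct`) that pins the dependency solve to one end of the ranges declared in `pyproject.toml`. A lowest-bound CI step might look roughly like this (an illustrative sketch; the job name and the `testing` extra are assumptions, not what this repo actually uses):

```yaml
# Illustrative workflow step: install the lowest versions of direct
# dependencies permitted by pyproject.toml, then run the test suite.
- name: Test against lowest-allowed dependency versions
  run: |
    uv pip install --resolution lowest-direct -e ".[testing]"
    pytest
```

A second job with `--resolution highest` (the default) would cover the other end of the range.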
Apart from that and two minor comments, I'm ready to merge. I'm also happy to merge now if you'd prefer to deal with any comments in a separate PR.
    - main
    - pourbaix
  paths-ignore:
Please keep the pourbaix branch here. We have some active development going on there (or we will soon) and I want the workflow to run on PRs against that branch.
Added back in.
Question here - in the old (3rd party) workflow, the tags that triggered a release were

@rkingsbury - good point about the tag. The previous iteration would have initiated the release process on any tag on the
@rkingsbury - to answer your other questions:
I think it's most useful to test scenarios that the end users (or developers) will be running themselves - either a

Of course, this is my opinion based on several recent Python projects I've been involved with. It's very possible that I haven't discovered the true value of these flags while others have. But even then, I'd probably handle that as a separate PR, where the goal is to make the project "uv-first" (with a
rkingsbury
left a comment
Awesome, thanks for the detailed explanations. I trust your judgment on this!
Summary
Thanks for merging the PRs leading up to this.
I've tried to streamline the release process, mostly based on what has worked best in the Python projects I've been involved with (and because I don't know exactly what some of the previously used marketplace actions are doing under the hood).
Some changes for CI optimizations are:
- The `testing` workflow now tests our code only on Linux. I feel this is a good base platform for detecting regressions, and it has the effect of detecting any regressions with newer versions of our dependencies. We test all supported Python versions to ensure that our code is not using language constructs that are only applicable to a particular Python version (and neither are any of our dependencies).
- The `testing-pinned` workflow takes the responsibility of detecting platform-specific nuances. We run all tests using pinned dependencies on our minimum and maximum supported Python versions on all platforms. This tests our own code running on the latest and greatest Python versions on all platforms, while making sure we're not relying on new features of the language on any platform, and takes our dependencies out of the equation. This workflow is thus now a proxy for anything we may have done to our own code to break it.

However, the above optimizations are entirely my opinion - we can put back a common/larger matrix for simplicity and everything will still work, though it will take longer.
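In workflow terms, the split described above might look roughly like this (a sketch only; the Python version list and runner labels are assumptions, not the exact matrices in this PR):

```yaml
# testing: Linux only, all supported Python versions, unpinned dependencies
strategy:
  matrix:
    os: [ubuntu-latest]
    python-version: ["3.9", "3.10", "3.11", "3.12"]

# testing-pinned: all platforms, min/max Python only,
# dependencies pinned via requirements.txt
strategy:
  matrix:
    os: [ubuntu-latest, macos-latest, windows-latest]
    python-version: ["3.9", "3.12"]
```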
Other changes:
- The `release` workflow now runs when PRs are merged to `main` (or on direct pushes to `main`, which we don't allow for now anyway). `pyEQL` is built inside isolated containers using `cibuildwheel`, and the tests related to our C++ code (in the `tests/phreeqc` folder) are run inside the container. Again, this is an optimization and we can run all tests if we want, but I think we don't need them because the `testing.yml` and `testing-pinned.yaml` workflows detect other failure modes anyway. In most hybrid Python/C++ projects, it's usually sufficient to test that `import <project-name>` works inside the container and move on, so we're doing slightly better than that anyway.
- All artifacts (sdist and binary wheels) are collected as GitHub artifacts so we can inspect them if necessary.
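Running a subset of tests inside the build container is typically configured in `pyproject.toml` via `cibuildwheel`'s `test-command` option, something along these lines (a sketch; the actual settings in this PR may differ):

```toml
[tool.cibuildwheel]
# cibuildwheel installs each freshly built wheel into an isolated
# environment and runs this command; {project} expands to the source tree.
test-requires = ["pytest"]
test-command = "pytest {project}/tests/phreeqc"
```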
- If, in addition, a tag has been applied to the commit on the `main` branch, then we use `setuptools-scm` (like before) to make that tag the version number of `pyEQL` (`v1.3.2` was the last one, we can go with `v1.4.0` next), and use the `pypi_password` project secret (which is essentially the API key you get from PyPI) to upload the final wheels to PyPI. You will have to set this secret up with this exact name, though you likely have some version of it in there already.

I've tested out everything (except the final release-to-PyPI part, of course) on a fork of this repo. I guess we'll find out if something fails when we merge - which is why we should probably tag the next release as `v1.4.0-pre0` or `v1.4.0.rc0` instead of using a real version number. If we merge this but don't apply a tag, then everything will run except the final release-to-PyPI step.

Happy to tweak or answer any concerns!
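For reference, tag-driven versioning with `setuptools-scm` is usually wired up along these lines in `pyproject.toml` (a generic sketch, not necessarily identical to what this repo ships):

```toml
[build-system]
requires = ["setuptools>=64", "setuptools-scm>=8"]
build-backend = "setuptools.build_meta"

# An empty table is enough to enable setuptools-scm; the package version
# is then derived from the most recent git tag, e.g. v1.4.0 -> 1.4.0.
[tool.setuptools_scm]
```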
Checklist
- Code style is checked with `ruff`. (For guidance in fixing rule violations, see the rule list.)
- Type annotations are checked with `mypy`.

Tip: Install `pre-commit` hooks to auto-check types and linting before every commit:
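A typical hook configuration looks like this (illustrative only; the repo's actual `.pre-commit-config.yaml` and pinned `rev` values may differ):

```yaml
# .pre-commit-config.yaml (illustrative)
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.6.9  # pin to a real release tag
    hooks:
      - id: ruff
  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.11.2  # pin to a real release tag
    hooks:
      - id: mypy
```

With the file in place, run `pip install pre-commit` followed by `pre-commit install` to activate the hooks for every commit.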