
Create Docker wheel builder #592

Merged: neworderofjamie merged 38 commits into genn-team:master from bdevans:master, Oct 11, 2023
Conversation

bdevans (Contributor) commented Jul 5, 2023:

This adds Dockerfile.builder and a Makefile target (make wheel) to cleanly build python wheels within a Docker container. The versions of CUDA and Python can be specified by passing build arguments.
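For illustration only - the argument names and default values below are assumptions, not taken from the PR - build arguments like these are typically declared with `ARG` in the Dockerfile and overridden on the command line:

```dockerfile
# Hypothetical Dockerfile.builder fragment (names and defaults assumed)
ARG CUDA=11.5.1
FROM nvidia/cuda:${CUDA}-devel-ubuntu20.04
# An ARG declared before FROM is only visible to FROM; re-declare to use it later
ARG PYTHON=3.10
RUN apt-get update \
    && apt-get install -y --no-install-recommends python${PYTHON} python3-pip \
    && rm -rf /var/lib/apt/lists/*
```

Overridden at build time with, e.g., `docker build -f Dockerfile.builder --build-arg CUDA=11.8.0 --build-arg PYTHON=3.11 .`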

bdevans (Contributor, Author) commented Jul 18, 2023:

You may also want to consolidate GENN_DIR and GENN_PATH into one argument throughout the project, as they appear to refer to the same path but have different names in the Dockerfile and Makefile.

neworderofjamie (Contributor) commented:
This is very cool - would it be possible to build them on manylinux2014 (https://github.com/pypa/manylinux) for compatibility across Linux distributions? https://github.com/ameli/manylinux-cuda has some Dockerfiles for installing CUDA on top of manylinux.

bdevans (Contributor, Author) commented Jul 26, 2023:

I can have a go at basing it on manylinux2014. Those Docker images don't include the cuDNN libraries though - does the wheel need them?

neworderofjamie (Contributor) replied:
That'd be awesome - GeNN doesn't need cuDNN at all, just cuRAND

bdevans (Contributor, Author) commented Jul 27, 2023:

Here you go! I've renamed the previous target (use it with make ubuntu_wheel) and added a new one, make wheels, which builds a wheel for each of the installed Python versions for a given CUDA version.

Note that building against Python 3.12 currently fails (probably because it is still in beta), so I included a clause to skip it, which will eventually need to be removed. Also, the build process still uses the double python setup.py bdist_wheel invocation, so the second call will need to be removed when a new release is made with the fix.
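A minimal, simulated sketch of the per-interpreter loop with a skip clause for Python 3.12 - the directory names below are stand-ins for the manylinux `/opt/python/<tag>/bin` layout, not the actual script:

```shell
# Simulate the manylinux per-interpreter directory layout with stand-in dirs
mkdir -p /tmp/pyloop/cp311-cp311/bin /tmp/pyloop/cp312-cp312/bin

built=""
for PYBIN in /tmp/pyloop/*/bin; do
    case "$PYBIN" in
        # Temporary clause: skip 3.12 while it is still in beta
        *cp312*) echo "skipping $PYBIN (Python 3.12 still in beta)"; continue ;;
    esac
    echo "would build a wheel with $PYBIN"
    built="$built $PYBIN"
done
```

In the real script, the body of the loop would run the pip/build steps with each interpreter's `$PYBIN`.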

@neworderofjamie neworderofjamie self-requested a review July 28, 2023 10:46
neworderofjamie (Contributor) left a review comment:

Very cool! manylinux wheels will be an excellent feature for GeNN 5.0.0

LABEL org.opencontainers.image.documentation="https://genn-team.github.io/"
LABEL org.opencontainers.image.source="https://github.com/genn-team/genn"
LABEL org.opencontainers.image.title="GeNN Docker image"
LABEL maintainer="J.C.Knight@sussex.ac.uk" \
neworderofjamie (Contributor):

thank you for the tidying here - I didn't realise just how directly Dockerfile lines map to layers, and thus how much disk space gets wasted, when I wrote this
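For reference, the usual fix is to chain related commands into a single RUN instruction so they produce one layer and the clean-up actually shrinks the image - a generic sketch, not the PR's Dockerfile:

```dockerfile
# One layer: the apt cache deleted here never reaches the final image
RUN apt-get update \
    && apt-get install -y --no-install-recommends build-essential swig \
    && rm -rf /var/lib/apt/lists/*

# Two layers: the rm in a separate RUN cannot reclaim space from the layer above
# RUN apt-get update && apt-get install -y build-essential swig
# RUN rm -rf /var/lib/apt/lists/*
```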

@@ -0,0 +1,46 @@
# syntax=docker/dockerfile:1
neworderofjamie (Contributor):

What's the point of keeping (and maintaining) this one? If you have manylinux wheels, you can use them on all Ubuntu versions.

bdevans (Contributor, Author):

Well, you can ignore it for now, but it might be a more robust alternative for Ubuntu users, and it's a little simpler, so it might also be a useful reference for when the build system changes.

WORKDIR ${GENN_PATH}

# Install GeNN and PyGeNN
RUN make install -j `lscpu -p | egrep -v '^#' | sort -u -t, -k 2,4 | wc -l`
neworderofjamie (Contributor):

there is no need to make install when all you're doing is building PyGeNN - that's only needed if you want a system-wide installation to use GeNN from C++

bdevans (Contributor, Author):

Ok I'll remove it.
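As an aside, the `lscpu` pipeline in the quoted diff counts unique physical cores by de-duplicating lscpu's parseable output on the core/socket columns. A simpler alternative for feeding make's `-j` flag (counting logical CPUs, including hyperthreads) is sketched below - an illustration, not part of the PR:

```shell
# Logical CPU count; fall back to POSIX getconf if nproc is unavailable
JOBS=$(nproc 2>/dev/null || getconf _NPROCESSORS_ONLN)
echo "building with -j $JOBS"
# e.g. make -j "$JOBS"
```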

build-wheels.sh Outdated
# "${PYBIN}/pip" install -r /io/dev-requirements.txt
"${PYBIN}/pip" install numpy swig
# "${PYBIN}/pip" wheel /opt/genn/ --no-deps -w dist/
make install -j `lscpu -p | egrep -v '^#' | sort -u -t, -k 2,4 | wc -l`
neworderofjamie (Contributor):

Again, no need to make install

bdevans (Contributor, Author):

Ditto.

build-wheels.sh Outdated
"${PYBIN}/pip" install numpy swig
# "${PYBIN}/pip" wheel /opt/genn/ --no-deps -w dist/
make install -j `lscpu -p | egrep -v '^#' | sort -u -t, -k 2,4 | wc -l`
make DYNAMIC=1 LIBRARY_DIRECTORY=${GENN_PATH}/pygenn/genn_wrapper/ -j `lscpu -p | egrep -v '^#' | sort -u -t, -k 2,4 | wc -l`
neworderofjamie (Contributor):

this doesn't link against python so could be done once outside of the loop

bdevans (Contributor, Author):

I must have misunderstood something then. The packages need to be installed within the loop still though don't they? Since GeNN uses them for bindings, doesn't that also mean it should be built within the loop?

neworderofjamie (Contributor):

the stuff you're installing with pip definitely needs to be done within the loop, but make DYNAMIC=1 etc. isn't Python-specific - it just builds GeNN as shared libraries and puts them where the Python build system expects them, so it can move outside the loop (it probably won't do anything after the first iteration anyway, as nothing will have changed)
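The suggested restructuring, simulated with stand-in directories and echo in place of the real make/pip commands (all names below are assumptions for illustration):

```shell
# Stand-in directories for the manylinux /opt/python layout (names assumed)
mkdir -p /tmp/hoist/cp310-cp310/bin /tmp/hoist/cp311-cp311/bin

# Python-agnostic shared-library build: run once, before the loop
echo "make DYNAMIC=1 LIBRARY_DIRECTORY=pygenn/genn_wrapper (once)"

# Per-interpreter steps stay inside the loop
count=0
for PYBIN in /tmp/hoist/*/bin; do
    echo "$PYBIN: pip install numpy swig, then build the wheel"
    count=$((count + 1))
done
```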

--build-arg CUDA=$(CUDA) \
--target=output --output type=local,dest=dist/ .

# TODO: Consider build with docker run instead of docker build
neworderofjamie (Contributor):

What would the advantages of that be?

bdevans (Contributor, Author):

I'm not sure yet, beyond saving a bit of time in image building. It was the method used in the example but I'd need to investigate a bit more.

FROM nvidia/cuda:${BASE}

LABEL maintainer="J.C.Knight@sussex.ac.uk"
LABEL version="4.8.0"
neworderofjamie (Contributor):

Do you know if there's any way to read this from the version.txt? I've managed to convince setuptools and doxygen to do this but couldn't figure it out for docker

bdevans (Contributor, Author):

Well, passing it as a build-arg is simple and explicit, but I expect you could also do it with the right cat/sed incantation.
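A sketch of the build-arg approach, reading the version from version.txt on the host; the file path is simulated here so the snippet is self-contained:

```shell
# Stand-in for the repo's version.txt (path and contents assumed)
printf '4.8.0\n' > /tmp/version.txt
GENN_VER=$(tr -d '[:space:]' < /tmp/version.txt)
echo "version: $GENN_VER"
# Then on the host, e.g.:
#   docker build --build-arg GENN_VER="$GENN_VER" .
# with, in the Dockerfile:
#   ARG GENN_VER
#   LABEL version="${GENN_VER}"
```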

@neworderofjamie neworderofjamie merged commit d1e97e1 into genn-team:master Oct 11, 2023
