Releases: genn-team/genn

GeNN 5.4.0

20 Jan 17:55

Release Notes for GeNN 5.4.0

This release adds a number of significant new features to GeNN, as well as a number of bug fixes for issues identified since the 5.3.0 release.

User Side Changes

  • Added full support for AMD GPUs using HIP (#718)
  • Significantly modernised PyGeNN build system (#720)
  • Added support for newer compute capabilities to CUDA backend (#722)
  • Improvements to documentation (#729)
  • Improvements to feature test coverage (#726, #728, #732)

Bug fixes

  • Fixed several compatibility issues with CUDA 13 (#724, #719)

New Contributors

Massive thanks to some new contributors to GeNN:

GeNN 5.3.0

29 Aug 08:41
5897866

Release Notes for GeNN 5.3.0

This release adds a number of significant new features to GeNN, as well as a number of bug fixes for issues identified since the 5.2.0 release. We are also proud to say that GeNN is now part of the Open Neuromorphic community and is an official target for the NESTML modelling language.

User Side Changes

  • Added support for CUDA Array Interface for interoperability with other Python libraries (#675)
  • Expose CurrentSource target_var to PyGeNN (#676)
  • Add NVTX profiling support to CUDA backend (#680)
  • Added support for building generated code with NMake as well as MSBuild on Windows (#700, #705)
  • Extended the timing system to record the time spent running host code from custom connectivity updates in each custom update group (#701)
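The CUDA Array Interface support mentioned above (#675, first released in 5.3.0) lets device memory owned by PyGeNN be consumed zero-copy by libraries such as CuPy and Numba. As an illustrative sketch of the underlying protocol (the `DeviceArray` class here is hypothetical, not a PyGeNN class): any object exposing a `__cuda_array_interface__` dictionary can participate.

```python
# Illustrative sketch of the __cuda_array_interface__ protocol (version 3).
# DeviceArray is a hypothetical stand-in for an object wrapping GPU memory;
# real consumers (e.g. cupy.asarray) read this dictionary to construct a
# zero-copy view of the underlying device allocation.
class DeviceArray:
    def __init__(self, ptr, num_elements):
        self.ptr = ptr                  # raw device pointer (as an integer)
        self.num_elements = num_elements

    @property
    def __cuda_array_interface__(self):
        return {
            "shape": (self.num_elements,),
            "typestr": "<f4",           # little-endian 32-bit float
            "data": (self.ptr, False),  # (device pointer, read-only flag)
            "version": 3,
        }

arr = DeviceArray(ptr=0xDEADBEEF, num_elements=100)
iface = arr.__cuda_array_interface__
```

A consumer library inspects `iface["shape"]`, `iface["typestr"]` and `iface["data"]` to wrap the same device memory without any host-device copy.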

Bug fixes

  • Fixed several issues working with sparse connectivity in PyGeNN (#694)
  • Fixed several bugs with the transpiler (#679, #696, #702)
  • Fixed RNG initialisation issues in networks with very simple neuron models which use an RNG (#685)
  • Fixed several issues with building GeNN on Mac OS X (#707)
  • Fixed crashes when using CUDA backend on latest Visual Studio 2022 builds (#716)

Optimisations

  • The code generator now marks relevant pointers with a backend-specific restrict keyword (#690)

New Contributors

Massive thanks to some new contributors to GeNN:

Special thanks to @JRV7903 and @Agrim-P777 who have just completed very successful Google Summer of Code projects with us, working on Conda packaging and ISPC support respectively.

GeNN 5.2.0

25 Apr 14:29

Release Notes for GeNN 5.2.0

This release adds a number of significant new features to GeNN, as well as a number of bug fixes for issues identified since the 5.1.0 release.

User Side Changes

  • Added assert to the 'standard library' so it can be used for error checking in GeNNCode (#644)
  • Automated detection of parameters which are homogeneous across merged groups - should improve performance in models with small numbers of populations (#646)
  • Added preliminary support for AMD HIP. All tests pass using HIP on NVIDIA hardware but we do not have access to any AMD hardware so have been unable to complete this work (#647)
  • Reduced memory bandwidth requirements of RNG - can halve the time spent in neuron kernels in some models (#649)
  • Added additional error handling to prevent non-existent backend preferences being silently accepted by the GeNNModel constructor (#663)
  • Improved the "Testing" tutorial in the "Insect-inspired MNIST classification" series to demonstrate the use of custom updates for resetting model state between trials (#661)
  • Added shorthand syntax for creating variable references from just a target variable name in places where there is only one possible target group (e.g. references to neuron variables in current sources and weight update models). Using this, weight update models can now also access postsynaptic model variables (#652)
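The homogeneous-parameter detection in #646 exploits the fact that GeNN merges structurally-identical populations into a single generated kernel: if a parameter holds the same value in every population of a merged group, it can be compiled in as a literal constant rather than loaded from the merged-group structure in memory. A minimal sketch of the idea (function and names are illustrative, not GeNN internals):

```python
# Illustrative sketch: split the parameters of a merged group into those
# that can be hard-coded (identical in every population) and those that
# must remain per-population data loaded at runtime.
def split_homogeneous(param_values):
    """param_values maps parameter name -> list of values, one per population."""
    homogeneous, heterogeneous = {}, {}
    for name, values in param_values.items():
        if all(v == values[0] for v in values):
            homogeneous[name] = values[0]   # emit as a literal constant
        else:
            heterogeneous[name] = values    # keep in the merged-group struct
    return homogeneous, heterogeneous

hom, het = split_homogeneous({"tau_m": [20.0, 20.0, 20.0],
                              "v_thresh": [-50.0, -55.0, -50.0]})
```

With few populations per merged group, almost every parameter is homogeneous, which is why #646 particularly helps models with small numbers of populations.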

Bug fixes

  • Fixed subtle bug in SIMT code generation that only occurs when addToPre is used in presynaptic updates with Toeplitz connectivity (#643)
  • Fixed another issue caused by the continuing sequence of compatibility-breaking changes in setuptools (#670)
  • Fixed generation of debug code in PyGeNN on Windows (#672)

New Contributors

Massive thanks to some new contributors to GeNN:

Full Changelog: 5.1.0...5.2.0

GeNN 5.1.0

07 Nov 16:43
d973817

Release Notes for GeNN 5.1.0

This release adds a number of significant new features to GeNN, as well as a number of bug fixes for issues identified since the 5.0.0 release.

User Side Changes

  1. Updated CUDA block size optimiser to support SM9.0 (#627)
  2. Access to postsynaptic variables with heterogeneous delay (#629)
  3. Special variable references for zeroing internals (#634)
  4. Stopped Windows CUDA compilation from relying on the correct order of CUDA and Visual Studio installation (#639)
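Accessing postsynaptic variables with heterogeneous delay (#629) implies that each delayed variable keeps a short history, conceptually a circular buffer that each synapse indexes by its own delay. A minimal sketch of that mechanism (illustrative Python, not GeNN's generated code):

```python
# Illustrative sketch of a per-variable delay ring buffer: each timestep a
# new value is written, and readers fetch the value from `delay` steps ago.
class DelayBuffer:
    def __init__(self, max_delay):
        self.buffer = [0.0] * (max_delay + 1)
        self.t = 0  # index of the slot written this timestep

    def push(self, value):
        self.t = (self.t + 1) % len(self.buffer)
        self.buffer[self.t] = value

    def read(self, delay):
        # Value recorded `delay` timesteps before the most recent push;
        # different synapses can read different delays from the same buffer.
        return self.buffer[(self.t - delay) % len(self.buffer)]

buf = DelayBuffer(max_delay=3)
for v in [1.0, 2.0, 3.0, 4.0]:
    buf.push(v)
```

Because the buffer is shared, supporting heterogeneous delays costs one history per variable rather than one per synapse.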

Bug fixes

  1. Fixed issues building GeNN on Mac OS/Clang (#623)
  2. Fixed bug when using dendritic delays in batched models (#630)
  3. Fixed issues with setuptools versions 74 and newer (#636, #640)
  4. Fixed bug with merging/fusing of neuron groups with multiple spike-like event conditions (#638)

GeNN 5.0.0

22 Apr 13:06

Release Notes for GeNN 5.0.0

This is a very large update to GeNN that fixes a large number of longstanding bugs and, we hope, will make GeNN easier to use and enable various exciting new features in the near future. The licence has also been switched from GPL to LGPL, making it slightly more liberal by allowing PyGeNN to be used as a component in closed-source systems.

This release breaks backward compatibility so all models are likely to require updating, but the documentation has also been completely redone and the pre-release version is at https://genn-team.github.io/genn/documentation/5/. This includes a guide to updating existing models.

New features

  • GeNN has a whole new code generator. This gives the user much better quality error messages about syntax and typing errors in code strings and will enable us to perform smarter optimisations in future, but it does restrict user code to a well-defined subset of C99 (#595)
  • As well as simulation kernels, GeNN 4.x generated large amounts of boilerplate for allocating memory and copying from device to host. This resulted in very long compile times with large models. In GeNN 5 we have replaced this with a new runtime which reduces compilation time by around 10x on very large models (#602)
  • In GeNN 4.X, parameters were always of "scalar" type, which resulted in poor code generation when they were used to store integers. Parameters now have types and can also be made dynamic, allowing them to be changed at runtime (#607)
  • Weight update models now have postsynaptic spike-like events, allowing a wider class of learning rules to be implemented (#609)
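Postsynaptic spike-like events (#609) let a weight update rule react to an arbitrary threshold condition on the postsynaptic neuron rather than only its spikes. The gist, as a hedged Python sketch (GeNN expresses this in C-like code strings inside weight update models; all names here are illustrative):

```python
# Illustrative sketch: a learning rule driven by a postsynaptic *event
# condition* (here, membrane voltage crossing a plasticity threshold)
# instead of by postsynaptic spikes.
def postsynaptic_events(v_post, theta=-55.0):
    """Return indices of postsynaptic neurons whose event condition holds."""
    return [j for j, v in enumerate(v_post) if v > theta]

def apply_event_updates(weights, pre_trace, events, lr=0.1):
    """Potentiate synapses onto every neuron that emitted an event,
    in proportion to each presynaptic neuron's eligibility trace."""
    for j in events:
        for i in range(len(weights)):
            weights[i][j] += lr * pre_trace[i]
    return weights

w = [[0.0, 0.0], [0.0, 0.0]]
events = postsynaptic_events([-70.0, -50.0])   # only neuron 1 crosses theta
w = apply_event_updates(w, pre_trace=[1.0, 0.5], events=events)
```

Decoupling the trigger condition from spiking is what admits the wider class of learning rules, e.g. voltage-dependent plasticity.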

Bug fixes

  • PyGeNN only really works with precision set to float (#289)
  • Refine global - register -global transfers (#55)
  • Avoiding creating unused variables enhancement (#47)
  • PyGeNN doesn't correctly handle neuron variables with delay slots (#393)
  • assign_external_pointer overrides should use explicitly sized integer types (#288)
  • Repeat of spike-like-event conditions in synapse code flawed (#379)
  • Dangerous conflict potential of user and system code (#385)
  • Accessing queued pre and postsynaptic weight update model variables (#402)
  • Linker-imposed model complexity limit on Windows (#408)
  • 'error: duplicate parameter name' when running ./generate_run in userproject/Izh_sparse_project (#416)
  • Issues with merging synapse groups where pre or postsynaptic neuron parameters are referenced (#566)
  • Presynaptic Synapse Variable undefined in Event Threshold Condition (#594)

GeNN 5.0.0 RC1

19 Mar 11:53

Pre-release

Release Notes for GeNN 5.0.0

This is a very large update to GeNN that fixes a large number of longstanding bugs and, we hope, will make GeNN easier to use and enable various exciting new features in the near future. The licence has also been switched from GPL to LGPL, making it slightly more liberal by allowing PyGeNN to be used as a component in closed-source systems.

This release breaks backward compatibility so all models are likely to require updating, but the documentation has also been completely redone and the pre-release version is at https://genn-team.github.io/genn/documentation/5/. This includes a guide to updating existing models.

User Side Changes

  • Named parameters (#493)
  • Transpiler (#595)
  • Variable dimensions (#598)
  • Dynamic loader (#602)
  • Replace implicit neuron variable references with explicit ones (#604)
  • Dynamic and typed parameters (#607)
  • Fused event generation and postsynaptic spike-like events (#609)
  • Single PSM code string (#612)

Bug fixes

  • PyGeNN only really works with precision set to float (#289)
  • Refine global - register -global transfers (#55)
  • Avoiding creating unused variables enhancement (#47)
  • PyGeNN doesn't correctly handle neuron variables with delay slots (#393)
  • assign_external_pointer overrides should use explicitly sized integer types (#288)
  • Repeat of spike-like-event conditions in synapse code flawed (#379)
  • Dangerous conflict potential of user and system code (#385)
  • Accessing queued pre and postsynaptic weight update model variables (#402)
  • Linker-imposed model complexity limit on Windows (#408)
  • 'error: duplicate parameter name' when running ./generate_run in userproject/Izh_sparse_project (#416)
  • Issues with merging synapse groups where pre or postsynaptic neuron parameters are referenced (#566)
  • Presynaptic Synapse Variable undefined in Event Threshold Condition (#594)

GeNN 4.9.0

11 Oct 10:33
a915552

Release Notes for GeNN 4.9.0

This release adds a number of significant new features to GeNN, as well as a number of bug fixes for issues identified since the 4.8.1 release.
It is intended as the last release for GeNN 4.X.X.
Fixes for serious bugs may be backported if requested but, otherwise, development will be switching to GeNN 5.

User Side Changes

  1. Implemented pygenn.GeNNModel.unload to manually unload GeNN models, improving control in scenarios such as parameter sweeping where multiple PyGeNN models need to be instantiated (#581).
  2. Added Extra Global Parameter references to custom updates (see Defining Custom Updates, Defining your own custom update model and Extra Global Parameter references) (#583).
  3. Exposed $(num_pre), $(num_post) and $(num_batches) to all user code strings (#576)

Bug fixes

  1. Fixed handling of indices specified as sequence types other than numpy arrays in pygenn.SynapseGroup.set_sparse_connections (#597).
  2. Fixed bug in CUDA constant cache estimation which could cause nvLink errors in models with learning rules which required previous spike times (#589).
  3. Fixed longstanding issue with setuptools that meant PyGeNN sometimes had to be built twice to obtain a functional version. Massive thanks to @erolm-a for contributing this fix (#591).

Optimisations

  1. Reduced the number of layers and generally optimised Docker image. Massive thanks to @bdevans for his work on this (#601).

GeNN 4.8.1

21 Apr 15:41
ef69e59

Release Notes for GeNN v4.8.1

This release fixes a number of issues found in the 4.8.0 release and also includes some optimisations which could be very beneficial for some classes of model.

Bug fixes

  1. Fixed bug relating to merging populations with variable references pointing to variables with different access duplication modes (#557).
  2. Fixed infinite loop that could occur in code generator if a bracket was missed calling a GeNN function in a code snippet (#559).
  3. Fixed bug that meant batched models which required previous spike times failed to compile (#565).
  4. Fixed bug with DLL-searching logic on Windows which meant CUDA backend failed to load on some systems (#579).
  5. Fixed a number of corner cases in the handling of VarAccessDuplication::SHARED_NEURON variables (#578).

Optimisations

  1. When building models with large numbers of populations using the CUDA backend, compile times could be very large. This was at least in part due to over-verbose error handling code being generated. CodeGenerator::CUDA::Preferences::generateSimpleErrorHandling enables the generation of much more minimal error-handling code and can speed up compilation by up to 10x (#554).
  2. Turned on multi-processor compilation option in Visual Studio solutions which speeds up compilation of GeNN by a significant amount (#555).
  3. Fusing postsynaptic models was previously overly conservative, meaning that large, highly-connected models using a postsynaptic model with additional state variables would perform poorly. These checks have been relaxed and brought into line with those used for fusing pre and postsynaptic updates coming from weight update models (#567).

GeNN 4.8.0

31 Oct 13:09
5aa20a0

Release Notes for GeNN 4.8.0

This release adds a number of significant new features to GeNN, as well as a number of bug fixes for issues identified since the 4.7.1 release.

User Side Changes

  1. Custom updates extended to work on SynapseMatrixWeight::KERNEL weight update model variables (#524).
  2. Custom updates extended to perform reduction operations across neurons as well as batches (#539).
  3. PyGeNN can now automatically find Visual Studio build tools using functionality in setuptools.msvc.msvc14_get_vc_env (#471)
  4. GeNN now comes with a fully-functional Docker image and releases will be distributed via Dockerhub as well as existing channels. Special thanks to @Stevinson, @jamesturner246 and @bdevans for their help on this (see the README for more information) (#548 and #550).
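Reductions across batches as well as neurons (#539) matter for gradient-based training of batched models: per-synapse or per-neuron quantities accumulated independently in each batch must be collapsed into a single value before, say, an optimiser step. A plain-Python sketch of a batch reduction (illustrative only, not GeNN's custom update syntax):

```python
# Illustrative sketch: reduce a batched variable (batch x neuron) across
# the batch dimension, as a custom update with a batch reduction would.
def reduce_over_batches(var):
    """Sum a [batch][neuron] nested list into a single per-neuron list."""
    n_neurons = len(var[0])
    return [sum(batch[i] for batch in var) for i in range(n_neurons)]

grads = [[1.0, 2.0],   # gradients accumulated in batch 0
         [3.0, 4.0]]   # gradients accumulated in batch 1
reduced = reduce_over_batches(grads)
```

The #539 extension adds the analogous reduction over the neuron dimension, so a scalar (e.g. a loss term) can also be produced from a per-neuron variable.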

Bug fixes

  1. Fixed bug relating to merging of synapse groups which perform presynaptic "revInSyn" updates (#520).
  2. Added missing parameter to the PyGeNN pygenn.genn_model.create_custom_postsynaptic_class function so postsynaptic models with extra global parameters can be created (#522).
  3. Correctly substitute 0 for $(batch) when using single-threaded CPU backend (#523).
  4. Fixed issues building PyGeNN with Visual Studio 2017 (#533).
  5. Fixed bug where model might not be rebuilt if sparse connectivity initialisation snippet was changed (#547).
  6. Fixed longstanding bug in the gen_input_structured tool -- used by some userprojects -- where data was written outside of array bounds (#551).
  7. Fixed issue with debug mode of genn-buildmodel.bat when used with single-threaded CPU backend (#551).
  8. Fixed issue where, if custom update models were the only part of a model that required an RNG for initialisation, one might not be instantiated (#540).

GeNN 4.7.1

29 Apr 10:17

Release Notes for GeNN v4.7.1

This release fixes a plethora of issues found in the 4.7.0 release and also includes an optimisation which could be very beneficial for some classes of model.

Bug fixes

  1. Fixed issue meaning that manual changes to the maximum synaptic row length (via SynapseGroup::setMaxConnections) were not detected and the model might not be rebuilt. Additionally, reduced the strictness of checks in SynapseGroup::setMaxConnections and SynapseGroup::setMaxSourceConnections so maximum synaptic row and column lengths can be overridden when sparse connectivity initialisation snippets are in use, as long as the overriding values are larger than those provided by the snippet (#515).
  2. Fixed issue preventing PyGeNN being built on Python 2.7 (#510)
  3. Fixed issue meaning that inSyn, denDelayInSyn and revInSynOutSyn variables were not properly zeroed during initialisation (or reinitialisation) of batched models (#509).
  4. Fixed issue where initialization code for synapse groups could be incorrectly merged (#508).
  5. Fixed issue when using custom updates on batched neuron group variables (#507).
  6. Fixed issue in spike recording system where some permutations of kernel and neuron population size would result in memory corruption (#502).
  7. Fixed (long-standing) issue where LLDB wasn't correctly invoked when running genn-buildmodel.sh -d on Mac (#518).
  8. Fixed issue where sparse initialisation kernels weren't correctly generated if they were only required to initialise custom updates (#517).

Optimisations

  1. Using synapse dynamics with sparse connectivity previously had very high memory requirements and poor performance. Both issues have been solved with a new algorithm (#511).
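Synapse dynamics code runs every timestep for every existing synapse, so with sparse connectivity, iterating over a dense matrix wastes both memory and work; that is the cost the new algorithm in #511 avoids. The improvement can be pictured with a ragged, CSR-style row representation (an illustrative sketch, not the actual GeNN kernel):

```python
# Illustrative sketch: per-timestep synapse dynamics over a ragged
# (CSR-like) sparse representation visits only existing synapses, instead
# of scanning a dense num_pre x num_post matrix.
def decay_sparse(row_length, ind, weight, tau=0.9):
    """Apply an exponential decay to every existing synapse, in place.
    Returns the number of synapses visited."""
    visited = 0
    for i, length in enumerate(row_length):  # one ragged row per presynaptic neuron
        for s in range(length):              # only the synapses that exist
            weight[i][s] *= tau
            visited += 1
    return visited

row_length = [2, 0, 1]       # neuron 0 has 2 synapses, neuron 1 none, neuron 2 one
ind = [[0, 3], [], [1]]      # postsynaptic targets (not needed by this decay)
weight = [[1.0, 2.0], [], [4.0]]
visited = decay_sparse(row_length, ind, weight)
```

Here only 3 synapses are touched rather than the 3 x 4 = 12 entries a dense scan would visit; the gap widens rapidly at realistic connection sparsities.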