Merged
33 commits
d2ec7ff
new variable modes and checks that NEURON_SHARED variables aren't use…
neworderofjamie Sep 28, 2022
3a414fc
support for allocating, pushing and pulling variables with SHARED_NEU…
neworderofjamie Sep 28, 2022
5228315
support for indexing variables with SHARED_NEURON duplication
neworderofjamie Sep 28, 2022
c6bc853
``genNeuronIndexCalculation`` is pre-substitution
neworderofjamie Sep 28, 2022
35e4550
initialisation of NEURON_SHARED variables
neworderofjamie Sep 29, 2022
69fd61d
fixed typo
neworderofjamie Sep 29, 2022
28b546d
more instances where substitutions aren't run so batch has to be used…
neworderofjamie Sep 29, 2022
bbd43a5
extended batch_var_init to test ``READ_ONLY_SHARED_NEURON`` variable …
neworderofjamie Sep 29, 2022
6bc2e94
modified pre and post wu var tests to mix in ``READ_ONLY_SHARED_NEURO…
neworderofjamie Sep 29, 2022
b3a09d2
moved ``isReduction`` test up from ``CustomUpdateModel`` to ``CustomU…
neworderofjamie Sep 29, 2022
2f1f969
first go at CUDA population reduction implementation
neworderofjamie Sep 29, 2022
7ae2d2f
test for batch size one population reductions
neworderofjamie Sep 29, 2022
b8c92b7
test for population reduction with larger batch size
neworderofjamie Sep 29, 2022
a9e8569
fixed typo
neworderofjamie Sep 29, 2022
65cd6e6
skip neuron reduction tests for OpenCL
neworderofjamie Sep 29, 2022
4ca9450
(currently failing) test of more complex three-step softmax operation
neworderofjamie Sep 30, 2022
9bfb8c8
fixed index calculations for custom updates
neworderofjamie Oct 1, 2022
81e9b28
test final softmax output
neworderofjamie Oct 2, 2022
288b5f6
Makefile symlinks for new feature tests
neworderofjamie Oct 3, 2022
1d28d09
removed commented out stuff
neworderofjamie Oct 3, 2022
dca666a
moved genInitReductionTargets from BackendSIMT to BackendBase
neworderofjamie Oct 3, 2022
747dae8
* implemented single-threaded CPU neuron reductions
neworderofjamie Oct 3, 2022
70b7966
CUDA optimiser
neworderofjamie Oct 3, 2022
f10b72a
tweaked type check
neworderofjamie Oct 3, 2022
ce4ef69
error message if Custom WU updates use model with NEURON_SHARED varia…
neworderofjamie Oct 3, 2022
77af7f9
error message if custom update is configured to do both batch and neu…
neworderofjamie Oct 3, 2022
5efa219
unit tests
neworderofjamie Oct 3, 2022
c821e86
Added error if you attempt to make neuron reductions on OpenCL backend
neworderofjamie Oct 3, 2022
d215557
fixed nasty bug with reductions that write back result to some sort o…
neworderofjamie Oct 3, 2022
825a63c
fixed compiler warning in test
neworderofjamie Oct 3, 2022
e7aed41
update pip
neworderofjamie Oct 3, 2022
1114e62
fixed typo
neworderofjamie Oct 3, 2022
23fa037
documentation
neworderofjamie Oct 3, 2022
2 changes: 1 addition & 1 deletion Jenkinsfile
@@ -230,7 +230,7 @@ for(b = 0; b < builderNodes.size(); b++) {
rm -rf virtualenv
${env.PYTHON} -m venv virtualenv
. virtualenv/bin/activate

pip install --upgrade pip
pip install wheel "numpy>=1.17"

python setup.py clean --all
43 changes: 39 additions & 4 deletions doxygen/10_UserManual.dox
@@ -803,7 +803,9 @@ For convenience the methods this class should implement can be implemented using
- \add_cpp_python_text{DECLARE_CUSTOM_UPDATE_MODEL(TYPE\, NUM_PARAMS\, NUM_VARS\, NUM_VAR_REFS) is an extended version of ``DECLARE_MODEL()`` which declares the boilerplate code required for a custom update with variable references as well as variables and parameters,`class_name`: the name of the new model}.
- \add_cpp_python_text{SET_VAR_REFS(),`var_refs`} defines the names, type strings (e.g. "float", "double", etc) and (optionally) access mode
of the variable references. The variables defined here as `NAME` can then be used in the syntax \$(NAME) in the update code string. Variable reference types must match those of the underlying variables.
supported access modes are \add_cpp_python_text{VarAccessMode::READ_WRITE, pygenn.genn_wrapper.Models.VarAccessMode_READ_WRITE} \add_cpp_python_text{VarAccessMode::READ_ONLY,pygenn.genn_wrapper.Models.VarAccessMode_READ_ONLY}, \add_cpp_python_text{VarAccessMode::REDUCE_SUM, pygenn.genn_wrapper.Models.VarAccessMode_REDUCE_SUM} and \add_cpp_python_text{VarAccessMode::REDUCE_MAX, pygenn.genn_wrapper.Models.VarAccessMode_REDUCE_MAX}.
supported access modes are \add_cpp_python_text{VarAccessMode::READ_WRITE, pygenn.genn_wrapper.Models.VarAccessMode_READ_WRITE}, \add_cpp_python_text{VarAccessMode::READ_ONLY, pygenn.genn_wrapper.Models.VarAccessMode_READ_ONLY}, \add_cpp_python_text{VarAccessMode::REDUCE_SUM, pygenn.genn_wrapper.Models.VarAccessMode_REDUCE_SUM}, \add_cpp_python_text{VarAccessMode::REDUCE_MAX, pygenn.genn_wrapper.Models.VarAccessMode_REDUCE_MAX},
\add_cpp_python_text{VarAccessMode::REDUCE_NEURON_SUM, pygenn.genn_wrapper.Models.VarAccessMode_REDUCE_NEURON_SUM}
and \add_cpp_python_text{VarAccessMode::REDUCE_NEURON_MAX, pygenn.genn_wrapper.Models.VarAccessMode_REDUCE_NEURON_MAX}.
- \add_cpp_python_text{SET_UPDATE_CODE(UPDATE_CODE),``update_code=UPDATE_CODE``}: where UPDATE_CODE contains the code to perform the custom update.

For example, using these \add_cpp_python_text{macros,keyword arguments}, we can define a custom update which will set a referenced variable to the value of a custom update model state variable:
@@ -831,18 +833,18 @@ reset_model = genn_model.create_custom_custom_update_class(
When used in a model with batch size > 1, whether custom updates of this sort are batched or not depends on the variables their references point to. If any referenced variables have \add_cpp_python_text{VarAccess::READ_ONLY_DUPLICATE, pygenn.genn_wrapper.Models.VarAccess_READ_ONLY_DUPLICATE} or \add_cpp_python_text{VarAccess::READ_WRITE, pygenn.genn_wrapper.Models.VarAccess_READ_WRITE} access modes, then the update will be batched and any variables associated with the custom update which also have \add_cpp_python_text{VarAccess::READ_ONLY_DUPLICATE, pygenn.genn_wrapper.Models.VarAccess_READ_ONLY_DUPLICATE} or \add_cpp_python_text{VarAccess::READ_WRITE, pygenn.genn_wrapper.Models.VarAccess_READ_WRITE} access modes will be duplicated across the batches.

\subsection custom_update_reduction Batch reduction
As well as the standard variable access modes described in \ref subsect11, custom updates support variables with several 'reduction' access modes:
As well as the standard variable access modes described in \ref subsect11, custom updates support variables with several 'batch reduction' access modes:
- \add_cpp_python_text{VarAccess::REDUCE_BATCH_SUM, ``pygenn.genn_wrapper.Models.VarAccess_REDUCE_BATCH_SUM``}
- \add_cpp_python_text{VarAccess::REDUCE_BATCH_MAX, ``pygenn.genn_wrapper.Models.VarAccess_REDUCE_BATCH_MAX``}

These access modes allow values read from variables duplicated across batches to be reduced into variables that are shared across batches.
For example, in a gradient-based learning scenario, a model like this could be used to sum gradients from across all batches so they can be used as the input to a learning rule operating on shared synaptic weights:

\add_toggle_code_cpp
class Reset : public CustomUpdateModels::Base
class GradientBatchReduce : public CustomUpdateModels::Base
{
public:
DECLARE_CUSTOM_UPDATE_MODEL(Reset, 0, 1, 1);
DECLARE_CUSTOM_UPDATE_MODEL(GradientBatchReduce, 0, 1, 1);

SET_UPDATE_CODE(
"$(reducedGradient) = $(gradient);\n"
@@ -867,6 +869,39 @@ Custom updates can also perform the same sort of reduction operation _into_ vari
- \add_cpp_python_text{VarAccessMode::REDUCE_SUM, ``pygenn.genn_wrapper.Models.VarAccess_REDUCE_SUM``}
- \add_cpp_python_text{VarAccessMode::REDUCE_MAX, ``pygenn.genn_wrapper.Models.VarAccess_REDUCE_MAX``}
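
To make the batch reduction semantics concrete, here is a hypothetical plain-Python sketch (illustrative only, not GeNN-generated code; the function name and list-of-lists layout are assumptions for the example) of what reducing a duplicated variable into a shared one computes:

```python
def reduce_batch(duplicated, op):
    """Reduce a [batch][element] duplicated variable into a single shared
    per-element variable, mirroring REDUCE_BATCH_SUM / REDUCE_BATCH_MAX."""
    num_elements = len(duplicated[0])
    reduce_fn = {"sum": sum, "max": max}[op]
    # Collapse the batch dimension, leaving one value per element
    return [reduce_fn(batch[i] for batch in duplicated)
            for i in range(num_elements)]

# Per-batch gradients for 3 synaptic weights across 2 batches
gradients = [[1.0, -2.0, 3.0],
             [4.0, 5.0, -6.0]]
print(reduce_batch(gradients, "sum"))  # [5.0, 3.0, -3.0]
print(reduce_batch(gradients, "max"))  # [4.0, 5.0, 3.0]
```

The summed result is what a learning rule operating on shared weights would consume as input.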

\subsection custom_update_neuron_reduction Neuron reductions
Similarly to the batch reduction modes discussed previously, custom updates also support variables with several 'neuron reduction' access modes:
- \add_cpp_python_text{VarAccess::REDUCE_NEURON_SUM, ``pygenn.genn_wrapper.Models.VarAccess_REDUCE_NEURON_SUM``}
- \add_cpp_python_text{VarAccess::REDUCE_NEURON_MAX, ``pygenn.genn_wrapper.Models.VarAccess_REDUCE_NEURON_MAX``}

These access modes allow values read from per-neuron variables to be reduced into variables that are shared across neurons.
For example, a model like this could be used to calculate the sum of a state variable across a population of neurons:

\add_toggle_code_cpp
class Reduce : public CustomUpdateModels::Base
{
public:
DECLARE_CUSTOM_UPDATE_MODEL(Reduce, 0, 1, 1);

SET_UPDATE_CODE(
"$(reduction) = $(source);\n");

SET_VARS({{"reduction", "scalar", VarAccess::REDUCE_NEURON_SUM}});
    SET_VAR_REFS({{"source", "scalar", VarAccessMode::READ_ONLY}});
};
\end_toggle_code
\add_toggle_code_python
reduce_model = genn_model.create_custom_custom_update_class(
"reduce",
    var_name_types=[("reduction", "scalar", VarAccess_REDUCE_NEURON_SUM)],
    var_refs=[("source", "scalar", VarAccessMode_READ_ONLY)],
update_code="""
$(reduction) = $(source);
""")
\end_toggle_code

Again, like batch reductions, neuron reductions can also be performed into variable references with the \add_cpp_python_text{VarAccessMode::REDUCE_SUM, ``pygenn.genn_wrapper.Models.VarAccess_REDUCE_SUM``} or \add_cpp_python_text{VarAccessMode::REDUCE_MAX, ``pygenn.genn_wrapper.Models.VarAccess_REDUCE_MAX``} access modes.
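
As an illustration of what these reductions compute (a hypothetical Python sketch, not GeNN-generated code; the helper name is an assumption), a neuron reduction collapses the per-neuron dimension into one value shared across the population. The sum and max variants together are the building blocks of the numerically-stable softmax exercised by this pull request's tests:

```python
import math

def neuron_reduce(values, op):
    """Collapse a per-neuron variable into a single value shared across the
    population, mirroring REDUCE_NEURON_SUM / REDUCE_NEURON_MAX."""
    return sum(values) if op == "sum" else max(values)

v = [1.0, 2.0, 3.0]               # a per-neuron state variable
m = neuron_reduce(v, "max")       # step 1: shared maximum
s = neuron_reduce([math.exp(x - m) for x in v], "sum")  # step 2: shared sum
softmax = [math.exp(x - m) / s for x in v]              # step 3: per-neuron output
```

Each of the three steps maps naturally onto one custom update, with the shared `m` and `s` values held in `REDUCE_NEURON_MAX` and `REDUCE_NEURON_SUM` variables respectively.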

\note
Reading from variables with a reduction access mode is undefined behaviour.

25 changes: 18 additions & 7 deletions include/genn/backends/single_threaded_cpu/backend.h
@@ -195,24 +195,35 @@ class BACKEND_EXPORT Backend : public BackendBase
}
}

//! Helper to generate code to copy reduced variables back to variables
//! Helper to generate code to copy reduced custom update group variables back to memory
/*! Because reduction operations are unnecessary in unbatched single-threaded CPU models, there is no need to actually reduce */
template<typename G>
void genWriteBackReductions(CodeStream &os, const G &cg, const std::string &idx) const
void genWriteBackReductions(CodeStream &os, const CustomUpdateGroupMerged &cg, const std::string &idx) const;

//! Helper to generate code to copy reduced custom weight update group variables back to memory
/*! Because reduction operations are unnecessary in unbatched single-threaded CPU models, there is no need to actually reduce */
void genWriteBackReductions(CodeStream &os, const CustomUpdateWUGroupMerged &cg, const std::string &idx) const;

template<typename G, typename R>
void genWriteBackReductions(CodeStream &os, const G &cg, const std::string &idx, R getVarRefIndexFn) const
{
const auto *cm = cg.getArchetype().getCustomUpdateModel();
for(const auto &v : cm->getVars()) {
// If variable is a reduction target, copy value from register straight back into global memory
if(v.access & VarAccessModeAttribute::REDUCE) {
os << "group->" << v.name << "[" << idx << "] = l" << v.name << ";" << std::endl;
os << "group->" << v.name << "[" << cg.getVarIndex(getVarAccessDuplication(v.access), idx) << "] = l" << v.name << ";" << std::endl;
}
}

// Loop through variable references
for(const auto &v : cm->getVarRefs()) {
const auto modelVarRefs = cm->getVarRefs();
const auto &varRefs = cg.getArchetype().getVarReferences();
for (size_t i = 0; i < varRefs.size(); i++) {
const auto varRef = varRefs.at(i);
const auto modelVarRef = modelVarRefs.at(i);

// If variable reference is a reduction target, copy value from register straight back into global memory
if(v.access & VarAccessModeAttribute::REDUCE) {
os << "group->" << v.name << "[" << idx<< "] = l" << v.name << ";" << std::endl;
if(modelVarRef.access & VarAccessModeAttribute::REDUCE) {
os << "group->" << modelVarRef.name << "[" << getVarRefIndexFn(varRef, idx) << "] = l" << modelVarRef.name << ";" << std::endl;
}
}
}
62 changes: 62 additions & 0 deletions include/genn/genn/code_generator/backendBase.h
@@ -18,6 +18,7 @@
#include "codeStream.h"
#include "gennExport.h"
#include "gennUtils.h"
#include "varAccess.h"
#include "variableMode.h"

// Forward declarations
@@ -451,6 +452,23 @@ class GENN_EXPORT BackendBase
const T &getPreferences() const { return static_cast<const T &>(m_Preferences); }

protected:
//--------------------------------------------------------------------------
// ReductionTarget
//--------------------------------------------------------------------------
//! Simple struct to hold reduction targets
struct ReductionTarget
{
ReductionTarget(const std::string &n, const std::string &t, VarAccessMode a, const std::string &i)
: name(n), type(t), access(a), index(i)
{
}

const std::string name;
const std::string type;
const VarAccessMode access;
const std::string index;
};

//--------------------------------------------------------------------------
// Protected API
//--------------------------------------------------------------------------
@@ -471,7 +489,51 @@

void genCustomUpdateIndexCalculation(CodeStream &os, const CustomUpdateGroupMerged &cu) const;

//! Helper function to generate initialisation code for any reduction operations carried out by a custom update group.
//! Returns vector of ReductionTarget structs, providing all information to write back reduction results to memory
std::vector<ReductionTarget> genInitReductionTargets(CodeStream &os, const CustomUpdateGroupMerged &cg, const std::string &idx = "") const;

//! Helper function to generate initialisation code for any reduction operations carried out by a custom weight update group.
//! Returns vector of ReductionTarget structs, providing all information to write back reduction results to memory
std::vector<ReductionTarget> genInitReductionTargets(CodeStream &os, const CustomUpdateWUGroupMerged &cg, const std::string &idx = "") const;

private:
//--------------------------------------------------------------------------
// Private API
//--------------------------------------------------------------------------
template<typename G, typename R>
std::vector<ReductionTarget> genInitReductionTargets(CodeStream &os, const G &cg, const std::string &idx, R getVarRefIndexFn) const
{
// Loop through variables
std::vector<ReductionTarget> reductionTargets;
const auto *cm = cg.getArchetype().getCustomUpdateModel();
for (const auto &v : cm->getVars()) {
// If variable is a reduction target, define variable initialised to correct initial value for reduction
if (v.access & VarAccessModeAttribute::REDUCE) {
os << v.type << " lr" << v.name << " = " << getReductionInitialValue(*this, getVarAccessMode(v.access), v.type) << ";" << std::endl;
reductionTargets.emplace_back(v.name, v.type, getVarAccessMode(v.access),
cg.getVarIndex(getVarAccessDuplication(v.access), idx));
}
}

// Loop through variable references
const auto modelVarRefs = cm->getVarRefs();
const auto &varRefs = cg.getArchetype().getVarReferences();
for (size_t i = 0; i < varRefs.size(); i++) {
const auto varRef = varRefs.at(i);
const auto modelVarRef = modelVarRefs.at(i);

// If variable reference is a reduction target, define variable initialised to correct initial value for reduction
if (modelVarRef.access & VarAccessModeAttribute::REDUCE) {
os << modelVarRef.type << " lr" << modelVarRef.name << " = " << getReductionInitialValue(*this, modelVarRef.access, modelVarRef.type) << ";" << std::endl;
reductionTargets.emplace_back(modelVarRef.name, modelVarRef.type, modelVarRef.access,
getVarRefIndexFn(varRef, idx));
}
}
return reductionTargets;
}


//--------------------------------------------------------------------------
// Members
//--------------------------------------------------------------------------
42 changes: 0 additions & 42 deletions include/genn/genn/code_generator/backendSIMT.h
@@ -219,23 +219,6 @@ class GENN_EXPORT BackendSIMT : public BackendBase
const KernelBlockSize &getKernelBlockSize() const { return m_KernelBlockSizes; }

private:
//--------------------------------------------------------------------------
// ReductionTarget
//--------------------------------------------------------------------------
//! Simple struct to hold reduction targets
struct ReductionTarget
{
ReductionTarget(const std::string &n, const std::string &t, VarAccessMode a)
: name(n), type(t), access(a)
{
}

const std::string name;
const std::string type;
const VarAccessMode access;
};


//--------------------------------------------------------------------------
// Type definitions
//--------------------------------------------------------------------------
@@ -332,31 +315,6 @@
[](const T &) { return true; }, handler);
}

template<typename G>
std::vector<ReductionTarget> genInitReductionTargets(CodeStream &os, const G &cg) const
{
// Loop through variables
std::vector<ReductionTarget> reductionTargets;
const auto *cm = cg.getArchetype().getCustomUpdateModel();
for(const auto &v : cm->getVars()) {
// If variable is a reduction target, define variable initialised to correct initial value for reduction
if(v.access & VarAccessModeAttribute::REDUCE) {
os << v.type << " lr" << v.name << " = " << getReductionInitialValue(*this, getVarAccessMode(v.access), v.type) << ";" << std::endl;
reductionTargets.emplace_back(v.name, v.type, getVarAccessMode(v.access));
}
}

// Loop through variable references
for(const auto &v : cm->getVarRefs()) {
// If variable reference is a reduction target, define variable initialised to correct initial value for reduction
if(v.access & VarAccessModeAttribute::REDUCE) {
os << v.type << " lr" << v.name << " = " << getReductionInitialValue(*this, v.access, v.type) << ";" << std::endl;
reductionTargets.emplace_back(v.name, v.type, v.access);
}
}
return reductionTargets;
}

// Helper function to generate kernel code to initialise variables associated with synapse group or custom WU update with dense/kernel connectivity
template<typename G>
void genSynapseVarInit(CodeStream &os, const ModelSpecMerged &modelMerged, const G &g, Substitutions &popSubs,
3 changes: 0 additions & 3 deletions include/genn/genn/code_generator/customUpdateGroupMerged.h
@@ -33,9 +33,6 @@ class GENN_EXPORT CustomUpdateGroupMerged : public GroupMerged<CustomUpdateInter

void generateCustomUpdate(const BackendBase &backend, CodeStream &os, const ModelSpecMerged &modelMerged, Substitutions &popSubs) const;

//----------------------------------------------------------------------------
// Static API
//----------------------------------------------------------------------------
std::string getVarIndex(VarAccessDuplication varDuplication, const std::string &index) const;
std::string getVarRefIndex(bool delay, VarAccessDuplication varDuplication, const std::string &index) const;

22 changes: 11 additions & 11 deletions include/genn/genn/code_generator/groupMerged.h
@@ -1091,27 +1091,24 @@ class GENN_EXPORT SynapseGroupMergedBase : public GroupMerged<SynapseGroupIntern

std::string getPostDenDelayIndex(unsigned int batchSize, const std::string &index, const std::string &offset) const;

//------------------------------------------------------------------------
// Static API
//------------------------------------------------------------------------
static std::string getPreVarIndex(bool delay, unsigned int batchSize, VarAccessDuplication varDuplication, const std::string &index);
static std::string getPostVarIndex(bool delay, unsigned int batchSize, VarAccessDuplication varDuplication, const std::string &index);
std::string getPreVarIndex(bool delay, unsigned int batchSize, VarAccessDuplication varDuplication, const std::string &index) const;
std::string getPostVarIndex(bool delay, unsigned int batchSize, VarAccessDuplication varDuplication, const std::string &index) const;

static std::string getPrePrevSpikeTimeIndex(bool delay, unsigned int batchSize, VarAccessDuplication varDuplication, const std::string &index);
static std::string getPostPrevSpikeTimeIndex(bool delay, unsigned int batchSize, VarAccessDuplication varDuplication, const std::string &index);
std::string getPrePrevSpikeTimeIndex(bool delay, unsigned int batchSize, VarAccessDuplication varDuplication, const std::string &index) const;
std::string getPostPrevSpikeTimeIndex(bool delay, unsigned int batchSize, VarAccessDuplication varDuplication, const std::string &index) const;

static std::string getPostISynIndex(unsigned int batchSize, const std::string &index)
std::string getPostISynIndex(unsigned int batchSize, const std::string &index) const
{
return ((batchSize == 1) ? "" : "postBatchOffset + ") + index;
}

static std::string getPreISynIndex(unsigned int batchSize, const std::string &index)
std::string getPreISynIndex(unsigned int batchSize, const std::string &index) const
{
return ((batchSize == 1) ? "" : "preBatchOffset + ") + index;
}

static std::string getSynVarIndex(unsigned int batchSize, VarAccessDuplication varDuplication, const std::string &index);
static std::string getKernelVarIndex(unsigned int batchSize, VarAccessDuplication varDuplication, const std::string &index);
std::string getSynVarIndex(unsigned int batchSize, VarAccessDuplication varDuplication, const std::string &index) const;
std::string getKernelVarIndex(unsigned int batchSize, VarAccessDuplication varDuplication, const std::string &index) const;

protected:
//----------------------------------------------------------------------------
@@ -1147,6 +1144,9 @@
void addTrgPointerField(const std::string &type, const std::string &name, const std::string &prefix);
void addWeightSharingPointerField(const std::string &type, const std::string &name, const std::string &prefix);

std::string getVarIndex(bool delay, unsigned int batchSize, VarAccessDuplication varDuplication,
const std::string &index, const std::string &prefix) const;

//! Is the weight update model parameter referenced?
bool isWUParamReferenced(size_t paramIndex) const;

Expand Down