
Merging of compatible postsynaptic models#201

Merged: neworderofjamie merged 12 commits into master from psm_merge on Aug 24, 2018
Conversation

@neworderofjamie
Contributor

I think I've talked to you about this before. What this change does is, if the corresponding ``GENN_PREFERENCES`` flag is set, merge together the postsynaptic models of a neuron population's incoming synapse populations if:

  1. They use the same model class
  2. They have the same maximum number of dendritic delay slots
  3. They have no state variables (it's ambiguous how these would be initialised after a merge)
  4. Their parameter and derived-parameter values match
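As a rough sketch of what those four criteria amount to (hypothetical, simplified types; not GeNN's actual classes), merging compatibility is essentially a field-by-field comparison:

```cpp
#include <string>
#include <vector>

// Hypothetical, simplified view of the postsynaptic model attached to a
// synapse group (not GeNN's actual data structures)
struct PostsynapticModel {
    std::string modelClass;                 // criterion 1: same model class
    unsigned int maxDendriticDelaySlots;    // criterion 2: same number of delay slots
    std::vector<std::string> stateVars;     // criterion 3: must be empty on both sides
    std::vector<double> paramValues;        // criterion 4: parameters must match...
    std::vector<double> derivedParamValues; // ...and so must derived parameters
};

// Two postsynaptic models can be merged only if all four criteria hold
bool canBeMerged(const PostsynapticModel &a, const PostsynapticModel &b) {
    return a.modelClass == b.modelClass
        && a.maxDendriticDelaySlots == b.maxDendriticDelaySlots
        && a.stateVars.empty() && b.stateVars.empty()
        && a.paramValues == b.paramValues
        && a.derivedParamValues == b.derivedParamValues;
}
```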

When postsynaptic models are merged:

  1. Memory is saved, as only one dendritic delay buffer and one inSyn buffer are allocated per merged set
  2. Hopefully the L2 cache is thrashed slightly less
  3. The postsynaptic model updating/applying phase of the neuron kernel is much reduced

Overall, on a K40, this change more than halves the time spent in the neuron kernel on the Potjans microcircuit model. It's a little messy but I feel it's worth it!

* Added yet another ``GENN_PREFERENCES`` flag, defaulting to off, that controls postsynaptic model merging
* Changed flags in ``SynapseGroup`` to track whether a group has been merged (as an origin or a target) as well as which postsynaptic group is the merge target (otherwise it would be ambiguous whether a group is the source of a merge or simply unmerged)
* Fixed bug in the runner where the methods to push and pull synapse group state included inSyn and denDelay variables that had been merged away
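The shape of that runner fix, sketched with entirely hypothetical names (the real logic lives in GeNN's generated push/pull methods), is to skip groups whose buffers were merged into another group:

```cpp
#include <string>
#include <vector>

// Hypothetical stand-in for a synapse group after merging decisions are made;
// GeNN's real SynapseGroup class is more involved
struct SynapseGroupInfo {
    std::string name;
    bool mergedAway;  // true if its inSyn/denDelay now live in another group's buffers
};

// Only groups that still own their inSyn/denDelay buffers should have
// state push/pull code generated for them
std::vector<std::string> groupsWithStatePushPull(const std::vector<SynapseGroupInfo> &groups) {
    std::vector<std::string> pushPull;
    for (const auto &g : groups) {
        if (!g.mergedAway) {
            pushPull.push_back(g.name);
        }
    }
    return pushPull;
}
```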
@tnowotny (Member) left a comment


I assume you have tested that this works on at least the model you mentioned? In principle it should work, of course ...
You said it was messy, but I think this is the kind of thing we should be doing to improve GeNN performance! Of course, I am not sure how many ideas of this kind can be put in before things get out of hand and there are just too many code paths/options and everything becomes unmanageable. So far I would say it still looks fairly tidy given what it does.
I have no further concerns - if you are confident it doesn't contain a horrible bug, I think we can merge it.

for(unsigned int i = 0; !inSyn.empty(); i++) {
    // Remove last element from vector
    SynapseGroup *a = inSyn.back();
    inSyn.pop_back();
Member

Doesn't pop_back return the last element of the vector, i.e. allowing
``SynapseGroup *a = inSyn.pop_back();``?

Contributor Author

Sadly it doesn't - some programming language must do it that way though, because in my head it should too
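For the record, ``std::vector::pop_back`` does return ``void`` in standard C++, so the two-step ``back()``/``pop_back()`` idiom in the snippet above is the correct way to read and remove the last element:

```cpp
#include <vector>

// std::vector::pop_back returns void, so the last element has to be read
// with back() before it is removed
int takeLast(std::vector<int> &v) {
    int last = v.back();  // read the last element
    v.pop_back();         // then remove it
    return last;
}
```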
