Is there a way to clean spike storage without restarting simulation? #738
Hello, I am running a large simulation where I need to record, for an extended number of timesteps, the spikes of every neuron in the simulation. Naturally, I came to a point where I use all of the GPU's resources. Is it possible to pull spikes from the simulation, store them, clear the buffer, and continue? I was looking for something like this: `start = time.time()` Thank you!
Replies: 1 comment
As long as you have loaded the model with the buffer size set to match `buffer_t`, i.e. `self.model.load(num_recording_timesteps=buffer_t)`, this code will already work the way you want. You just need to make a copy of the data after calling `pull_recording_buffers_from_device()`, e.g.:

```python
spike_times, spike_ids = genn_pop.spike_recording_data[0]
self.spike_times.append(spike_times)
self.spike_ids.append(spike_ids)
```
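To make the chunked-recording pattern concrete, here is a runnable sketch of the full loop: simulate `buffer_t` timesteps, pull the recording buffer, copy the chunk to host-side lists, and repeat until the buffer is reused. GeNN itself is not imported; `FakeModel`/`FakePop` are stand-ins for a loaded PyGeNN model and neuron group so the control flow can run anywhere, and `buffer_t`, `total_t`, and the fake spike pattern are illustrative values, not anything prescribed by GeNN.

```python
import numpy as np

buffer_t = 100   # recording buffer length passed to model.load()
total_t = 300    # total number of timesteps to simulate

class FakeModel:
    """Minimal stand-in for a loaded pygenn model (illustrative only)."""
    def __init__(self):
        self.timestep = 0

    def step_time(self):
        self.timestep += 1

    def pull_recording_buffers_from_device(self):
        pass  # real GeNN copies the GPU recording buffer to the host here

class FakePop:
    """Stand-in for a neuron group: one fake spike per recorded timestep."""
    def __init__(self, model):
        self.model = model

    @property
    def spike_recording_data(self):
        # Real GeNN returns a list of (times, ids) pairs, one per batch;
        # here we fabricate one spike per timestep, neuron ids cycling 0..9.
        t = np.arange(self.model.timestep - buffer_t,
                      self.model.timestep, dtype=float)
        return [(t, t.astype(int) % 10)]

model = FakeModel()
genn_pop = FakePop(model)
all_times, all_ids = [], []

while model.timestep < total_t:
    for _ in range(buffer_t):              # fill one recording buffer
        model.step_time()
    model.pull_recording_buffers_from_device()
    times, ids = genn_pop.spike_recording_data[0]
    all_times.append(times)                # keep a host-side copy of the chunk
    all_ids.append(ids)                    # before the buffer is overwritten

# Stitch the chunks back together at the end of the run
spike_times = np.concatenate(all_times)
spike_ids = np.concatenate(all_ids)
```

The key point is that each pull only ever holds the last `buffer_t` timesteps, so GPU memory stays bounded no matter how long the simulation runs; the host-side lists grow instead.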