Not sure if anyone here will be able to help, but I thought it was worth a try.
When implementing SNNs (spiking neural networks), there are two primary ways of handling time.
1. Frame based - calculate updates every fixed timestep t (commonly 1 ms). This coarse discretisation is workable because of axonal delays.
2. Event based - still uses discrete time frames, but events are scheduled (and ordered) within each frame, so an exact (single/double precision) timestamp can be calculated for each event.
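To make the contrast concrete, here is a minimal sketch of the two schemes. The function names and toy spike lists are illustrative, not from any particular SNN library: the frame-based loop snaps every spike to the end of the frame it falls in, while the event-based version uses a priority queue so each spike is handled at its exact timestamp.

```python
import heapq

def frame_based(spike_times, dt=1.0, t_end=10.0):
    """Frame-based scheme: advance in fixed steps of dt.
    Every spike landing in [t, t+dt) is processed "at" the frame
    boundary t+dt, losing its sub-frame timing."""
    frames = []
    t = 0.0
    while t < t_end:
        delivered = [s for s in spike_times if t <= s < t + dt]
        if delivered:
            frames.append((round(t + dt, 9), delivered))
        t += dt
    return frames

def event_based(spike_times):
    """Event-based scheme: a priority queue orders events so each
    one is handled at its exact floating-point timestamp."""
    queue = list(spike_times)
    heapq.heapify(queue)
    processed = []
    while queue:
        processed.append(heapq.heappop(queue))
    return processed
```

In the frame-based version, two spikes at 0.2 ms and 0.9 ms are indistinguishable with dt = 1 ms (both land on the 1.0 ms boundary), whereas the event-based version preserves their order and exact times.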
In my experience, both of these are comparable in performance when running on a single CPU thread, but it becomes a different kettle of fish when implementing them on a parallel architecture. The frame based method becomes much easier, because the system's synchronicity is dithered to the width of each frame, whereas the event based method requires a high degree of synchronisation, which makes parallelisation more difficult.
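The parallelisation advantage of the frame-based method boils down to this pattern: workers only need to meet at a barrier once per frame, and within a frame they run freely. A hedged sketch (worker counts and the logging are illustrative only; a real simulator would exchange spike buffers at the barrier):

```python
import threading

N_WORKERS, N_FRAMES = 4, 3
barrier = threading.Barrier(N_WORKERS)  # one rendezvous per frame
log = []
lock = threading.Lock()

def worker(wid):
    for frame in range(N_FRAMES):
        # ... update this worker's share of neurons for the frame ...
        with lock:
            log.append((frame, wid))
        barrier.wait()  # the ONLY synchronisation point per frame

threads = [threading.Thread(target=worker, args=(w,)) for w in range(N_WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

An event-based simulator, by contrast, would need workers to agree on a global event ordering, so this once-per-frame barrier is replaced by much finer-grained synchronisation.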
While frame based systems do exhibit forms of self organisation through STDP, every calculation the system performs carries a small error determined by the temporal resolution of the frames. One paper I've read suggests the frame based system's temporal resolution would need to be around 0.01 ms before its results become indistinguishable from those of the event based system.
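The per-spike error is easy to quantify: if every spike is snapped to the end of its frame, the worst-case timing error is one full frame width dt. A small sketch (function name is my own) that measures this directly:

```python
import math

def quantisation_error(spike_times, dt):
    """Worst-case timing error when each spike is snapped to the end
    of its frame: error = ceil(s/dt)*dt - s, bounded above by dt."""
    errors = [math.ceil(s / dt) * dt - s for s in spike_times]
    return max(errors)
```

So shrinking dt from 1 ms to 0.01 ms shrinks the worst-case spike jitter by a factor of 100, which is presumably why that resolution is where the two systems' results start to blur together.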
So what is my question?
Basically, how important is the temporal accuracy of events within the brain when looking at high level functions (i.e. 1 ms, 0.1 ms, 0.01 ms, 0.001 ms)? In other words, how much temporal error can you introduce to spike times before you start to see serious degradation (Parkinson's would probably suffice as a biological analogy)?