In classical computational neuroscience, analytical model descriptions are derived from neuronal recordings to mimic the underlying biological system. These neuronal models are typically slow to compute and cannot be integrated within large-scale neuronal simulation frameworks. We present a hybrid, machine-learning and computational-neuroscience approach that transforms analytical models of sensory neurons and synapses into deep-neural-network (DNN) neuronal units with the same biophysical properties. Our DNN-model architecture comprises parallel and differentiable equations that can be used for backpropagation in neuro-engineering applications, and offers a simulation run-time improvement factor of 70 and 280 on CPU or GPU systems respectively. We focussed our development on auditory neurons and synapses, and show that our DNN-model architecture can be extended to a variety of existing analytical models. We describe how our approach for auditory models can be applied to other neuron and synapse types to help accelerate the development of large-scale brain networks and DNN-based treatments of the pathological system.

Following the fundamental work of Hodgkin and Huxley in modelling action-potential generation and propagation 1, numerous specific neuronal models were developed that proved essential for shaping and driving modern-day neuroscience 2. In classical computational neuroscience, transfer functions between stimulation and recorded neural activity are derived and approximated analytically. This approach resulted in a variety of stimulus-driven models of neuronal firing and was successful in describing the non-linear and adaptation properties of sensory systems 3, 4, 5, 6. For example, the mechano-electrical transduction of cochlear inner-hair-cells (IHCs) was described using conductance models 7, 8, 9, 10 and the IHC-synapse firing rate using multi-compartment diffusion models 11, 12, 13. Such mechanistic models have substantially improved our understanding of how individual neurons function, but even the most basic models use coupled sets of ordinary differential equations (ODEs) in their descriptions. This computational complexity hinders their further development to simulate more complex behaviour, limits their integration within large-scale neuronal simulation platforms 14, 15, and their uptake in neuro-engineering applications that require real-time, closed-loop neuron model units 16, 17.

To meet this demand, neuroscience recently embraced deep learning 18, a technique that quickly revolutionised our ability to construct large-scale neuronal networks and to quantify complex neuronal behaviour 19, 20, 21, 22, 23, 24, 25, 26, 27. Deep neural networks (DNNs) learn to map input to output representations and are composed of multiple layers with simplified units that loosely mimic the integration and activation properties of real neurons 28. These machine-learning methods can yield efficient, end-to-end descriptions of neuronal transfer functions, population responses or neuro-imaging data without having to rely on detailed analytical descriptions of the individual neurons responsible for this behaviour. Examples include DNN-based models that were successfully trained to mimic the representational transformations of sensory input 29, 30, or DNNs that use neural activity to manipulate sensory stimuli 31, 32. Even though deep learning has become a powerful research tool to help interpret the ever-growing pool of neuroscience and neuroimaging recordings 33, 34, these models have an important drawback when it comes to predicting responses to novel inputs. DNNs suffer from their data-driven nature that requires a vast amount of data to accurately describe an unknown system, and can essentially be only as good as the data that were used for training. Insufficient experimental data can easily lead to overfitted models that describe the biophysical systems poorly while following artifacts or noise present in the recordings 35. The boundaries of experimental neuroscience and associated limited experiment duration hence pose a serious constraint on the ultimate success of DNN-based models of neuronal systems.
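To illustrate why ODE-based mechanistic descriptions are slow to compute, here is a minimal sketch of a single-compartment leaky conductance model integrated with forward Euler. This is not one of the cited IHC models, and all parameter values are hypothetical illustration choices; the point is that each time step depends on the previous one, so the simulation is strictly sequential and cannot be parallelised across time.

```python
import numpy as np

def simulate_conductance_neuron(I_stim, dt=1e-5, g_leak=10e-9,
                                E_leak=-70e-3, C_m=100e-12):
    """Forward-Euler integration of dV/dt = (-g_leak*(V - E_leak) + I) / C_m.

    Hypothetical parameters (leak conductance in S, potentials in V,
    capacitance in F); real conductance models of sensory cells couple
    several voltage-dependent channels, each adding its own ODE.
    """
    V = np.empty(len(I_stim))
    V[0] = E_leak
    for i in range(1, len(I_stim)):
        # Each step reads the previous membrane potential: inherently serial.
        dV = (-g_leak * (V[i - 1] - E_leak) + I_stim[i - 1]) / C_m
        V[i] = V[i - 1] + dt * dV
    return V

# 10 ms of a constant 50 pA current step (1000 samples at 10 us resolution).
I = np.full(1000, 50e-12)
V = simulate_conductance_neuron(I)
# The membrane depolarises towards E_leak + I/g_leak = -65 mV.
```

With a membrane time constant of C_m/g_leak = 10 ms, the 10 us step means a thousand serial updates per 10 ms of simulated time, which is the kind of cost that the run-time figures above address.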
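To make the "simplified units" of a DNN layer concrete, the following is a minimal sketch of one feed-forward layer: a weighted summation of inputs followed by a rectifying nonlinearity, loosely analogous to synaptic integration and threshold activation in real neurons. The weights here are arbitrary random values for illustration, not trained parameters of any model in this work.

```python
import numpy as np

def dnn_layer(x, W, b):
    """One feed-forward layer: weighted input integration (W @ x + b)
    followed by a ReLU activation that zeroes sub-threshold responses."""
    return np.maximum(0.0, W @ x + b)

# Illustrative example: a 3-unit layer reading a 2-dimensional input.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 2))  # arbitrary weights, not trained
b = np.zeros(3)
y = dnn_layer(np.array([1.0, -0.5]), W, b)  # every output is >= 0
```

Stacking many such layers, and fitting W and b by backpropagation, is what lets a DNN approximate a neuronal transfer function end-to-end.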