Reconstruction of natural visual scenes from neural spikes with deep neural networks

journal contribution
posted on 11.03.2020, 11:25 by Yichen Zhang, Shanshan Jia, Yajing Zheng, Zhaofei Yu, Yonghong Tian, Siwei Ma, Tiejun Huang, Jian K Liu
Neural coding is one of the central questions in systems neuroscience for understanding how the brain processes stimuli from the environment; it is also a cornerstone for designing brain–machine interface algorithms, where decoding incoming stimuli is in high demand for better performance of physical devices. Traditionally, researchers have focused on functional magnetic resonance imaging (fMRI) data as the neural signals of interest for decoding visual scenes. However, our visual perception operates on a fast time scale of milliseconds, in terms of events termed neural spikes, and there have been few studies of decoding using spikes. Here we fulfill this aim by developing a novel decoding framework based on deep neural networks, named the spike-image decoder (SID), for reconstructing natural visual scenes, including static images and dynamic videos, from experimentally recorded spikes of a population of retinal ganglion cells. The SID is an end-to-end decoder with neural spikes at one end and images at the other, which can be trained directly such that visual scenes are reconstructed from spikes in a highly accurate fashion. Our SID also outperforms existing fMRI decoding models on the reconstruction of visual stimuli. In addition, with the aid of a spike encoder, we show that the SID can be generalized to arbitrary visual scenes using the image datasets MNIST, CIFAR10, and CIFAR100. Furthermore, with a pre-trained SID, one can decode any dynamic video to achieve real-time encoding and decoding of visual scenes by spikes. Altogether, our results shed new light on neuromorphic computing for artificial visual systems, such as event-based visual cameras and visual neuroprostheses.
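To give a rough intuition for the end-to-end idea described above — mapping a population spike response directly to image pixels — the toy sketch below fits a simple linear (ridge-regression) decoder on synthetic spike counts. This is not the paper's SID architecture (which uses deep neural networks trained on recorded retinal ganglion cell spikes); all sizes, the synthetic encoding model, and the regression stand-in are hypothetical, for illustration only.

```python
import numpy as np

# Toy illustration of spike-to-image decoding: map a population
# spike-count vector directly to image pixels.
# NOT the paper's deep-network SID; a minimal linear stand-in on
# synthetic data, with all sizes chosen arbitrarily.

rng = np.random.default_rng(0)
n_cells, n_pixels, n_trials = 50, 64, 500  # hypothetical sizes

# Synthetic "retina": pixels drive rectified firing rates, which
# produce Poisson spike counts.
encoding = rng.normal(size=(n_pixels, n_cells))
images = rng.uniform(size=(n_trials, n_pixels))   # stimuli (flattened)
rates = np.clip(images @ encoding, 0, None)       # rectified linear rates
spikes = rng.poisson(rates).astype(float)         # spike counts per cell

# "Train" the decoder end to end: ridge regression from spike
# counts back to pixel values.
lam = 1.0
W = np.linalg.solve(spikes.T @ spikes + lam * np.eye(n_cells),
                    spikes.T @ images)

recon = spikes @ W                                # reconstructed images
mse = np.mean((recon - images) ** 2)
print(f"reconstruction MSE: {mse:.4f}")
```

The decoder beats the trivial mean-image baseline even in this crude linear setting; the paper's contribution is that a deep end-to-end network makes such reconstructions highly accurate on real recorded spikes and natural scenes.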


National Basic Research Program of China [2015CB351806]; National Natural Science Foundation of China [61806011, 61825101, 61425025, 61961130392 and U1611461]; National Postdoctoral Program for Innovative Talents, China [BX20180005]; China Postdoctoral Science Foundation [2018M630036]; Zhejiang Lab, China [2019KC0AB03 and 2019KC0AD02]; the Royal Society Newton Advanced Fellowship, UK [NAF-R1-191082]



Neural Networks, Volume 125, May 2020, Pages 19-30

Author affiliation

Department of Neuroscience, Psychology and Behavior


AM (Accepted Manuscript)



Elsevier BV


