deep networks
Recently Published Documents





2022, Vol. 93, pp. 101754
Wesley L. Passos, Gabriel M. Araujo, Amaro A. de Lima, Sergio L. Netto, Eduardo A.B. da Silva

2022, Vol. 12 (2), pp. 832
Han Li, Kean Chen, Lei Wang, Jianben Liu, Baoquan Wan

Thanks to the development of deep learning, various sound-source separation networks have been proposed and have made significant progress. However, the study of the underlying separation mechanisms is still in its infancy. In this study, deep networks are explained from the perspective of auditory perception mechanisms. For separating two arbitrary sound sources from monaural recordings, three networks with different structures and parameters are trained and achieve excellent performance: each obtains an average scale-invariant signal-to-distortion ratio improvement (SI-SDRi) higher than 10 dB, comparable with human performance in separating natural sources. More importantly, the most intuitive principle, proximity, is explored through simultaneous and sequential organization experiments. The results show that regardless of network structure and parameters, the proximity principle is learned spontaneously by all networks: components that are proximate in frequency or time are not easily separated. Moreover, frequency resolution is better at low frequencies than at high frequencies. These behavioral characteristics of all three networks are highly consistent with those of the human auditory system, which implies that the learned proximity principle is not accidental, but the optimal strategy selected by both networks and humans when facing the same task. The emergence of auditory-like separation mechanisms suggests the possibility of developing a universal system that can adapt to all sources and scenes.
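The SI-SDRi metric quoted in the abstract is a standard, well-defined quantity. As a rough illustration (not the authors' code; signal names are made up for the example), it can be computed by projecting the estimate onto the reference and comparing estimate and raw mixture:

```python
import numpy as np

def si_sdr(est, ref):
    """Scale-invariant signal-to-distortion ratio (dB)."""
    est = est - est.mean()
    ref = ref - ref.mean()
    # project the estimate onto the reference (optimal rescaling of the target)
    s_target = (est @ ref) / (ref @ ref) * ref
    e_noise = est - s_target
    return 10 * np.log10((s_target @ s_target) / (e_noise @ e_noise))

def si_sdri(est, mix, ref):
    """Improvement of the separated estimate over the unprocessed mixture."""
    return si_sdr(est, ref) - si_sdr(mix, ref)

# toy example: a sinusoid corrupted by noise, then partially "separated"
t = np.linspace(0, 1, 8000)
src = np.sin(2 * np.pi * 440 * t)
noise = 0.5 * np.random.default_rng(0).standard_normal(t.size)
mix = src + noise
est = src + 0.1 * noise
print(si_sdri(est, mix, src))  # positive: the estimate beats the mixture
```

The scale invariance comes from the projection step: rescaling `est` scales `s_target` and `e_noise` identically, leaving the ratio unchanged.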

2022, Vol. 22 (1), pp. 8
Philipp Grüning, Thomas Martinetz, Erhardt Barth

2022, pp. 108485
Nora Horanyi, Kedi Xia, Kwang Moo Yi, Abhishake Kumar Bojja, Aleš Leonardis

2022, Vol. 71, pp. 103217
Jayanthi Venkatraman Shanmugam, Baskar Duraisamy, Blessy Chittattukarakkaran Simon, Preethi Bhaskaran

2021, Vol. 14 (1), pp. 171
Qingyan Wang, Meng Chen, Junping Zhang, Shouqiang Kang, Yujing Wang

Hyperspectral image (HSI) classification often faces a scarcity of labeled samples, which is considered one of the major challenges in remote sensing. Although active deep networks have been successfully applied to semi-supervised classification tasks to address this problem, their performance inevitably hits a bottleneck imposed by labeling cost. To address this issue, this paper proposes a semi-supervised classification method for hyperspectral images that improves on active deep learning. Specifically, the proposed model introduces a random multi-graph algorithm and replaces the expert annotation step of active learning with an anchor-graph algorithm, which can label a considerable amount of unlabeled data precisely and automatically. In this way, a large number of pseudo-labeled samples are added to the training subsets, so the model can be fine-tuned and its generalization performance improved without extra manual labeling effort. Experiments on three standard HSIs demonstrate that the proposed model achieves better performance than conventional methods, and it also outperforms the other studied algorithms when the training set is small.
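The pseudo-labeling idea can be loosely sketched as follows. This is a strong simplification of anchor-graph label propagation, not the paper's implementation, and the function and variable names are illustrative: each unlabeled sample inherits a soft label from nearby anchors, whose own labels are inferred from the few labeled samples.

```python
import numpy as np

def anchor_graph_pseudo_labels(X_l, y_l, X_u, anchors, sigma=1.0):
    """Propagate labels from a few labeled points to unlabeled ones via anchors."""
    def Z(X):
        # Gaussian affinity of each sample to each anchor, row-normalized
        d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)
        W = np.exp(-d2 / (2 * sigma ** 2))
        return W / W.sum(1, keepdims=True)
    Y = np.eye(y_l.max() + 1)[y_l]                       # one-hot labels
    Zl, Zu = Z(X_l), Z(X_u)
    # soft anchor labels: affinity-weighted average of the labeled points
    A = (Zl.T @ Y) / np.clip(Zl.sum(0)[:, None], 1e-12, None)
    scores = Zu @ A                                      # propagate to unlabeled
    return scores.argmax(1), scores.max(1)               # pseudo-label, confidence

# toy demo: two well-separated clusters, one labeled point each
rng = np.random.default_rng(1)
X_l = np.array([[0.0, 0.0], [5.0, 5.0]])
y_l = np.array([0, 1])
anchors = np.array([[0.2, -0.1], [4.8, 5.1]])
X_u = np.vstack([rng.normal(0, 0.3, (10, 2)),
                 rng.normal(5, 0.3, (10, 2))])
pseudo, conf = anchor_graph_pseudo_labels(X_l, y_l, X_u, anchors)
print(pseudo)
```

In a full pipeline, only pseudo-labels above a confidence threshold would be added to the training subset before fine-tuning the network.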

2021
Zedong Bi, Guozhang Chen, Dongping Yang, Yu Zhou

How the brain modifies synapses to improve the performance of complicated networks remains one of the biggest mysteries in neuroscience. Existing proposals lack sufficient experimental support and neglect the inter-cellular signaling pathways ubiquitous in the brain. Here we show that heterosynaptic plasticity between hippocampal or cortical pyramidal cells, mediated by diffusive nitric oxide and astrocyte calcium waves, together with flexible dendritic gating by somatostatin interneurons, implements an evolutionary algorithm (EA). In simulation, this EA can train deep networks with biologically plausible binary weights on MNIST classification and Atari game-playing tasks to performance comparable with continuous-weight networks trained by gradient-based methods. Our work leads to a paradigmatically fresh understanding of how the brain learns.
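The biological machinery above is specific to the paper, but the computational core of evolving binary weights can be illustrated with a minimal, generic loop (a toy sketch under assumed settings, not the paper's model): a population of ±1 weight vectors is improved purely by mutation and selection, with no gradients.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy task: recover a hidden ±1 weight vector from its sign responses
d = 6
w_true = np.array([1, -1, 1, 1, -1, 1])
X = rng.standard_normal((200, d))
y = np.sign(X @ w_true)

def fitness(w):
    """Classification accuracy of a binary-weight linear unit."""
    return np.mean(np.sign(X @ w) == y)

# (mu + lambda) evolution: keep the fittest, mutate by flipping weights
pop = rng.choice([-1, 1], size=(20, d))
for gen in range(50):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-5:]]        # elitism: keep 5 fittest
    flips = rng.random((15, d)) < 0.1             # bit-flip mutation mask
    children = parents[rng.integers(0, 5, 15)] * np.where(flips, -1, 1)
    pop = np.vstack([parents, children])

best = max(pop, key=fitness)
print(fitness(best))
```

Because the weights stay binary throughout, no gradient ever has to flow through the sign nonlinearity, which is the property that makes EA-style learning attractive for discrete, biologically plausible synapses.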
