Biologically motivated learning method for deep neural networks using hierarchical competitive learning

2021 ◽  
Author(s):  
Takashi Shinozaki

2020 ◽  
Vol 8 (6) ◽  
pp. 3992-3995

Object recognition using deep neural networks is now common in real applications. We propose a framework for recognizing objects in very-low-resolution images through collaborative learning of two deep neural networks: an image enhancement network and an object recognition network. The image enhancement network seeks to transform very-low-resolution images into sharper and more informative images, guided by collaborative learning signals from the object recognition network. The object recognition network, initialized with weights trained on high-resolution images, actively participates in the learning of the image enhancement network. It uses the output of the image enhancement network as augmented training data to strengthen its recognition performance on very-low-resolution objects. We establish that the proposed method improves both image reconstruction and classification performance.
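The collaborative scheme described above can be sketched in miniature. This is only an illustration, assuming tiny linear stand-ins for both networks and synthetic data (the shapes, learning rate, and loss weighting are hypothetical, not taken from the paper): the enhancement network receives both a reconstruction gradient and a classification gradient fed back from the recognizer, while the recognizer trains on the enhanced output as augmented data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 8-dim "low-resolution" inputs, 16-dim "high-resolution"
# targets, and 3 class labels. All sizes are illustrative.
X_low = rng.normal(size=(64, 8))
W_true = rng.normal(size=(8, 16))
X_high = X_low @ W_true                       # ground-truth high-res images
y = rng.integers(0, 3, size=64)

W_enh = np.zeros((8, 16))                     # enhancement network (linear)
W_rec = rng.normal(scale=0.1, size=(16, 3))   # recognition network (linear)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

lr = 0.01
for _ in range(200):
    X_hat = X_low @ W_enh                     # enhanced images
    g_logits = softmax(X_hat @ W_rec)
    g_logits[np.arange(64), y] -= 1.0         # cross-entropy gradient
    g_rec = 2.0 * (X_hat - X_high) / 64       # reconstruction gradient
    g_cls = (g_logits / 64) @ W_rec.T         # feedback from the recognizer
    # The enhancement network is updated by both signals at once;
    # the recognizer trains on the enhanced output as augmented data.
    W_enh -= lr * X_low.T @ (g_rec + g_cls)
    W_rec -= lr * X_hat.T @ (g_logits / 64)

mse = float(np.mean((X_low @ W_enh - X_high) ** 2))
```

After joint training, the enhancement network's reconstruction error drops well below that of the zero initialization, showing how the two gradient signals can be combined in a single update.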


IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 41164-41171 ◽  
Author(s):  
Haonan Guo ◽  
Shilin Wang ◽  
Jianxun Fan ◽  
Shenghong Li

2021 ◽  
Author(s):  
Huan Yang ◽  
Zhaoping Xiong ◽  
Francesco Zonta

Classical potentials are widely used to describe protein physics, due to their simplicity and accuracy, but they are continuously challenged as real applications become more demanding with time. Deep neural networks could help generate alternative ways of describing protein physics. Here we propose an unsupervised learning method to derive a neural network energy function for proteins. The energy function is a probability density model learned from a large number of 3D local structures which have been extensively explored by evolution. We tested this model on a few applications (assessment of protein structures, protein dynamics, and protein sequence design), showing that the neural network can correctly recognize patterns in protein structures. In other words, the neural network learned some aspects of protein physics from experimental data.
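The key idea above is that an energy function can be read off a learned density as E(x) = -log p(x): conformations resembling structures seen in training get low energy. A minimal sketch, assuming a kernel density estimate over invented 4-dimensional "local structure descriptors" in place of the paper's neural density model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in: local structure descriptors as 4-dim feature
# vectors; a Gaussian KDE replaces the paper's neural density model
# purely to illustrate energy = -log p(x).
train = rng.normal(size=(500, 4))             # descriptors "explored by evolution"

def energy(x, data, h=0.5):
    # Gaussian KDE: p(x) = mean_i N(x; data_i, h^2 I); energy is -log p(x).
    d2 = ((data - x) ** 2).sum(axis=1)
    p = np.mean(np.exp(-d2 / (2.0 * h * h)))
    p /= (2.0 * np.pi * h * h) ** 2           # normalization for d = 4
    return -np.log(p + 1e-300)                # guard against log(0)

typical = np.zeros(4)        # near the bulk of the training distribution
atypical = np.full(4, 5.0)   # far from anything seen in training
```

Here `energy(typical, train)` is much lower than `energy(atypical, train)`, mirroring how the learned energy function scores native-like local structures as more favorable.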


2019 ◽  
Vol 56 (7) ◽  
pp. 071102
Author(s):  
卓刘 Zhuo Liu ◽  
陈晓琪 Chen Xiaoqi ◽  
谢振平 Xie Zhenping ◽  
蒋晓军 Jiang Xiaojun ◽  
毕道鹍 Bi Daokun

Information ◽  
2021 ◽  
Vol 12 (6) ◽  
pp. 233
Author(s):  
Haoran Xu ◽  
Yanbai He ◽  
Xinya Li ◽  
Xiaoying Hu ◽  
Chuanyan Hao ◽  
...  

Subtitles are crucial for video content understanding. However, a large number of videos have only burned-in, hardcoded subtitles that prevent video re-editing, translation, etc. In this paper, we construct a deep-learning-based system for the inverse conversion of a burned-in subtitle video to a subtitle file and an inpainted video, by coupling three deep neural networks (CTPN, CRNN, and EdgeConnect). We evaluated the performance of the proposed method and found that the deep learning method achieved high-precision separation of the subtitles and video frames and significantly improved the video inpainting results compared to the existing methods. This research fills a gap in the application of deep learning to burned-in subtitle video reconstruction and is expected to be widely applied in the reconstruction and re-editing of videos with subtitles, advertisements, logos, and other occlusions.
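The data flow of the three-network coupling can be sketched as a simple pipeline. This is a hypothetical skeleton, not the paper's implementation: the three networks (CTPN for text detection, CRNN for text recognition, EdgeConnect for inpainting) are represented by stub callables, and all names and return values are placeholders.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    index: int
    pixels: list                      # placeholder for image data

def detect_text_boxes(frame):         # CTPN stand-in: find subtitle regions
    return [(10, 200, 300, 230)]      # one box as (x0, y0, x1, y1)

def recognize_text(frame, box):       # CRNN stand-in: read text in a box
    return "example subtitle"

def inpaint(frame, boxes):            # EdgeConnect stand-in: fill the region
    return Frame(frame.index, frame.pixels)

def convert(video_frames, fps=25.0):
    """Split a burned-in subtitle video into (subtitles, clean frames)."""
    subtitles, clean_frames = [], []
    for frame in video_frames:
        boxes = detect_text_boxes(frame)
        for box in boxes:
            t = frame.index / fps     # timestamp in seconds
            subtitles.append((t, recognize_text(frame, box)))
        clean_frames.append(inpaint(frame, boxes))
    return subtitles, clean_frames
```

The point of the sketch is the ordering: detection output drives both the recognizer (to produce the subtitle file) and the inpainter (to produce the clean video), so the two outputs stay aligned frame by frame.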


Author(s):  
Elena Solovyeva ◽  
Ali Abdullah

Introduction: Due to its advantages, such as high flexibility and the ability to move heavy pieces with high torques and forces, the robotic arm, also called a manipulator robot, is the most widely used industrial robot. Purpose: We improve the control quality of a manipulator robot with seven degrees of freedom in the V-REP simulation environment using a reinforcement learning method based on deep neural networks. Methods: The action policy is estimated by building a numerical algorithm using deep neural networks. The actor network sends the action signal to the robotic manipulator, and the critic network performs numerical function approximation to calculate the value function (Q-value). Results: We created a model of the robot and the environment using the reinforcement learning library in MATLAB and connected the output signals (the action signals) to a simulated robot in V-REP. The robot was trained to reach an object in its workspace after interacting with the environment and calculating the reward of each interaction. The observations were modeled using three vision sensors. Based on the proposed deep learning method, a model of an agent representing the robotic manipulator was built using a four-layer neural network for the actor and a four-layer neural network for the critic. The agent model was trained for several hours until the robot started to reach the object in its workspace in an acceptable way. The main advantage over supervised learning control is that the robot can act and train at the same time, giving it the ability to reach an object in a continuous action space. Practical relevance: The results obtained can be used to control the movement of the manipulator without the need to construct kinematic models, which reduces the mathematical complexity of the calculation and provides a universal solution.
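The actor-critic shapes described above can be illustrated with forward passes only. This is a sketch under stated assumptions, not the authors' MATLAB/V-REP setup: the observation dimension (12, standing in for features from three vision sensors), the hidden widths, and the tanh nonlinearity are all hypothetical; only the 7-dimensional action for the 7-DoF arm and the four-layer actor/critic structure follow the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)

def mlp(sizes):
    """Random weights for a fully connected net with the given layer sizes."""
    return [(rng.normal(scale=0.1, size=(m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)            # hidden-layer nonlinearity
    return x

# Four trainable layers each, per the abstract; widths are illustrative.
actor = mlp([12, 64, 64, 64, 7])       # observation -> joint commands
critic = mlp([12 + 7, 64, 64, 64, 1])  # (observation, action) -> Q-value

obs = rng.normal(size=(1, 12))
action = np.tanh(forward(actor, obs))  # bounded continuous action
q_value = forward(critic, np.concatenate([obs, action], axis=1))
```

In a full agent, the critic's Q-value would drive both its own temporal-difference update and the actor's policy gradient; the sketch shows only how the two networks are wired together.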


Author(s):  
Alex Hernández-García ◽  
Johannes Mehrer ◽  
Nikolaus Kriegeskorte ◽  
Peter König ◽  
Tim C. Kietzmann

2018 ◽  
Author(s):  
Chi Zhang ◽  
Xiaohan Duan ◽  
Ruyuan Zhang ◽  
Li Tong

2013 ◽  
Vol 133 (10) ◽  
pp. 1976-1982 ◽  
Author(s):  
Hidetaka Watanabe ◽  
Seiichi Koakutsu ◽  
Takashi Okamoto ◽  
Hironori Hirata
