Deep learning—a first meta-survey of selected reviews across scientific disciplines, their commonalities, challenges and research impact

2021 · Vol 7 · pp. e773
Author(s): Jan Egger, Antonio Pepe, Christina Gsaxner, Yuan Jin, Jianning Li, et al.

Deep learning belongs to the field of artificial intelligence, where machines perform tasks that typically require some kind of human intelligence. Deep learning tries to achieve this by drawing inspiration from the learning of a human brain. Similar to the basic structure of the brain, which consists of (billions of) neurons and the connections between them, a deep learning algorithm consists of an artificial neural network that resembles this biological structure. Mimicking the human learning process through the senses, deep learning networks are fed with (sensory) data, like texts, images, videos or sounds. These networks outperform state-of-the-art methods on many tasks and, because of this, the whole field has grown exponentially in recent years, producing well over 10,000 publications per year. For example, the search engine PubMed alone, which covers only a subset of all publications in the medical field, already returned over 11,000 results for the search term ‘deep learning’ in Q3 2020, and around 90% of these results are from the last three years. Consequently, a complete overview of the field of deep learning is already impossible to obtain and, in the near future, it will potentially become difficult to obtain an overview of even a subfield. However, there are several review articles about deep learning that focus on specific scientific fields or applications, for example deep learning advances in computer vision or in specific tasks like object detection. With these surveys as a foundation, the aim of this contribution is to provide a first high-level, categorized meta-survey of selected reviews on deep learning across different scientific disciplines and to outline the research impact they have already had in a short period of time.
The categories (computer vision, language processing, medical informatics and additional works) have been chosen according to the underlying data sources (image, language, medical, mixed). In addition, we review the common architectures, methods, pros, cons, evaluations, challenges and future directions for every sub-category.

Antibodies · 2020 · Vol 9 (2) · pp. 12
Author(s): Jordan Graves, Jacob Byerly, Eduardo Priego, Naren Makkapati, S. Vince Parish, et al.

Driven by its successes across domains such as computer vision and natural language processing, deep learning has recently entered the field of biology by aiding in cellular image classification, finding genomic connections, and advancing drug discovery. In drug discovery and protein engineering, a major goal is to design a molecule that will perform a useful function as a therapeutic drug. Typically, the focus has been on small molecules, but new approaches have been developed to apply these same principles of deep learning to biologics, such as antibodies. Here we give a brief background of deep learning as it applies to antibody drug development, and an in-depth explanation of several deep learning algorithms that have been proposed to solve aspects of both protein design in general, and antibody design in particular.


2019 · Vol 10 (1)
Author(s): Daniel Pinto dos Santos, Sebastian Brodehl, Bettina Baeßler, Gordon Arnhold, Thomas Dratsch, et al.

Abstract Background Training deep learning networks usually requires large amounts of accurately labeled data. These labels are usually extracted from reports using natural language processing or by time-consuming manual review. The aim of this study was therefore to develop and evaluate a workflow for using data from structured reports as labels in a deep learning application. Materials and methods We included all plain anteroposterior radiographs of the ankle for which structured reports were available. A workflow was designed and implemented in which a script automatically retrieved, converted, and anonymized the radiographs of cases where fractures were either present or absent from the institution’s picture archiving and communication system (PACS). These images were then used to retrain a pretrained deep convolutional neural network. Finally, performance was evaluated on a set of previously unseen radiographs. Results Once implemented and configured, completion of the whole workflow took under 1 h. A total of 157 structured reports were retrieved from the reporting platform. For all structured reports, corresponding radiographs were successfully retrieved from the PACS and fed into the training process. On an unseen validation subset, the model showed satisfactory performance, with an area under the curve of 0.850 (95% CI 0.634–1.000) for detection of fractures. Conclusion We demonstrate that data obtained from structured reports written in clinical routine can be used to successfully train deep learning algorithms. This highlights the potential role of structured reporting in the future of radiology, especially in the context of deep learning.
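The retraining step the abstract describes is a form of transfer learning: a pretrained network's features are kept fixed and only a final classifier is fitted to the new labels. The study's actual model is not published here, so the sketch below is a hypothetical NumPy illustration of that last step, fitting a logistic-regression head by gradient descent on synthetic "frozen backbone" feature vectors (157 samples, matching the report count; all other sizes are assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend feature vectors from a frozen pretrained backbone (128-dim);
# labels: 1 = fracture present, 0 = absent (synthetic, nearly separable).
X = rng.normal(size=(157, 128))
w_true = rng.normal(size=128)
y = (X @ w_true + rng.normal(scale=0.1, size=157) > 0).astype(float)

w = np.zeros(128)
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid probabilities
    grad_w = X.T @ (p - y) / len(y)          # gradient of the log loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
accuracy = np.mean(pred == y)
```

On this toy separable data the head fits the labels well; in the study, the analogous classifier is evaluated by AUC on held-out radiographs instead of training accuracy.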


During search and rescue operations in flood disasters, applying deep learning to aerial imagery works well for finding humans when environmental conditions are favorable and clear, but it starts to fail when conditions are adverse. We found that rescue teams generally stop their work at night because of poor visibility. When the sun is in front of the camera, the quality of drone aerial images degrades, and the approach does not work in various types of fog. It is also difficult to find people who are partially hidden by vegetation. This study explains how infrared cameras can be very useful in disaster management, especially in floods [6]. It takes deep learning networks that were originally developed for visible imagery [1], [2] and applies them to long-wave infrared (thermal) cameras. Most public-safety missions occur in remote areas where the terrain can be difficult to navigate and in some cases inaccessible. A drone allows rescuers to fly high above the trees, see through gaps in the foliage, and locate a target even in the darkness of night using thermal cameras, after which deep learning techniques identify the target as human. Creating accurate machine learning models capable of localizing and identifying human objects in a single image or video has remained a challenge in computer vision, but with recent advances in drones, radiometric thermal imaging, and deep learning-based computer vision models, it is now possible to support rescue teams to a much greater extent.


2020 · Vol 9 (1) · pp. 2663-2667

In this century, Artificial Intelligence (AI) has gained a lot of popularity because of the performance of AI models with good accuracy scores. Natural Language Processing (NLP), a major subfield of AI, deals with analyzing and processing huge amounts of natural language data. Text summarization is one of the major applications of NLP. The basic idea of text summarization is that when we have large news articles or reviews and need a gist of them within a short period of time, summarization is useful. Text summarization also finds a unique place in many applications, such as patent research, help desks and customer support. There are numerous ways to build a text summarization model, but this paper mainly focuses on building one using the seq2seq architecture and the TensorFlow API.
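The paper builds its summarizer with TensorFlow; the following is only a minimal NumPy sketch of the seq2seq idea it relies on: an encoder RNN compresses the source tokens into a context vector, and a decoder RNN emits summary tokens one at a time. All sizes, weights, and token ids here are illustrative assumptions, and the untrained weights produce arbitrary output.

```python
import numpy as np

rng = np.random.default_rng(1)
vocab, hidden = 50, 16

# Randomly initialized toy parameters (a real model would learn these).
E = rng.normal(scale=0.1, size=(vocab, hidden))       # embedding table
W_enc = rng.normal(scale=0.1, size=(hidden, hidden))
U_enc = rng.normal(scale=0.1, size=(hidden, hidden))
W_dec = rng.normal(scale=0.1, size=(hidden, hidden))
U_dec = rng.normal(scale=0.1, size=(hidden, hidden))
W_out = rng.normal(scale=0.1, size=(hidden, vocab))

def encode(token_ids):
    """Run a simple tanh RNN over the article; return the final state."""
    h = np.zeros(hidden)
    for t in token_ids:
        h = np.tanh(E[t] @ W_enc + h @ U_enc)
    return h

def decode(context, start_token=0, max_len=5):
    """Greedily generate summary tokens from the context vector."""
    h, tok, out = context, start_token, []
    for _ in range(max_len):
        h = np.tanh(E[tok] @ W_dec + h @ U_dec)
        tok = int(np.argmax(h @ W_out))  # greedy choice of next token
        out.append(tok)
    return out

article = [4, 17, 8, 23, 42, 9]   # hypothetical token ids
summary = decode(encode(article))
```

The key design point is the bottleneck: the whole article must pass through the fixed-size context vector, which is why attention mechanisms are often added in practice.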


A great deal of research has gone into natural language processing, and state-of-the-art deep learning algorithms now unambiguously help convert English text into a data structure without loss of meaning. The advent of neural networks for learning word representations as vectors has also revolutionized automatic feature extraction from text corpora. Combining word embeddings with a deep learning algorithm such as a convolutional neural network improves accuracy for text classification. In this era of the Internet of Things, with voluminous amounts of data overwhelming users, determining the veracity of the data is a very challenging task. Many truth discovery algorithms in the literature help resolve the conflicts that arise from multiple sources of data; these algorithms estimate the trustworthiness of the data and the reliability of its sources. In this paper, convolution-based truth discovery with multitasking is proposed to estimate the genuineness of the data for a given text corpus. The proposed algorithm was tested on the Quora questions dataset, and experimental results showed improved accuracy and speed over other existing approaches.
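The core operation this abstract combines with word embeddings, a convolution sliding over the sequence of word vectors followed by max-over-time pooling, can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's implementation; dimensions and weights are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
seq_len, emb_dim, n_filters, width = 10, 8, 4, 3

sentence = rng.normal(size=(seq_len, emb_dim))          # word vectors
filters = rng.normal(size=(n_filters, width, emb_dim))  # conv kernels
bias = np.zeros(n_filters)

# 1D convolution over time: each filter spans `width` consecutive words.
conv = np.array([
    [np.sum(sentence[t:t + width] * f) for t in range(seq_len - width + 1)]
    for f in filters
]) + bias[:, None]

# Max-over-time pooling: a fixed-size feature vector regardless of length.
features = conv.max(axis=1)
```

The pooled vector would then feed a classifier head; the paper's multitasking variant shares such features across related prediction tasks.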


2019 · Vol 3 (2) · pp. 31-40
Author(s): Ahmed Shamsaldin, Polla Fattah, Tarik Rashid, Nawzad Al-Salihi

At present, deep learning is widely used in a broad range of arenas. Convolutional neural networks (CNNs) have become the star of deep learning, as they give the best and most precise results when cracking real-world problems. In this work, a brief description of the applications of CNNs in two areas will be presented: first, in computer vision generally, that is, scene labeling, face recognition, action recognition, and image classification; second, in natural language processing, that is, speech recognition and text classification.
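The building block shared by all the CNN applications listed above is the convolution itself: a small kernel slides over the input and sums elementwise products. A minimal illustrative sketch, using a hand-picked vertical edge-detection kernel on a toy image whose right half is bright:

```python
import numpy as np

image = np.zeros((5, 5))
image[:, 3:] = 1.0                         # right half of the image bright
kernel = np.array([[-1.0, 0.0, 1.0]] * 3)  # 3x3 vertical edge detector

# Valid convolution (no padding): output is (5-3+1) x (5-3+1) = 3 x 3.
out = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        out[i, j] = np.sum(image[i:i+3, j:j+3] * kernel)
# Response is zero over flat regions and large where the edge lies.
```

In a trained CNN the kernel values are learned rather than hand-picked, and many kernels run in parallel to produce feature maps.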


2019 · Vol 8 (2) · pp. 1746-1750

Segmentation is an important stage in any computer vision system. It involves discarding the objects that are not of interest and extracting only the object of interest. Automated segmentation becomes very difficult with a complex background and other challenges such as illumination and occlusion. In this project, we design an automated segmentation system using a deep learning algorithm to segment images with complex backgrounds.
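The abstract does not detail its model, but segmentation systems like this are commonly evaluated with intersection-over-union (IoU) between the predicted mask and the ground truth. A minimal sketch on toy binary masks:

```python
import numpy as np

pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 1:4] = True        # predicted object region (6 pixels)
truth = np.zeros((4, 4), dtype=bool)
truth[1:3, 0:3] = True       # true object region (6 pixels)

intersection = np.logical_and(pred, truth).sum()  # 4 overlapping pixels
union = np.logical_or(pred, truth).sum()          # 8 pixels in either mask
iou = intersection / union                        # 4 / 8 = 0.5
```

An IoU of 1.0 means the predicted mask matches the ground truth exactly; complex backgrounds typically drag the score down through false-positive pixels.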


2020 · Vol 3 (3) · pp. 202-213
Author(s): Lu Chen, Chunchao Xia, Huaiqiang Sun

ABSTRACT Deep learning (DL) is a recently proposed subset of machine learning methods that has gained extensive attention in the academic world, breaking benchmark records in areas such as visual recognition and natural language processing. Different from conventional machine learning algorithms, DL is able to learn useful representations and features directly from raw data through hierarchical nonlinear transformations. Because of its ability to detect abstract and complex patterns, DL has been used in neuroimaging studies of psychiatric disorders, which are characterized by subtle and diffuse alterations. Here, we provide a brief review of recent advances and associated challenges in neuroimaging studies of DL applied to psychiatric disorders. The results of these studies indicate that DL could be a powerful tool in assisting the diagnosis of psychiatric diseases. We conclude our review by clarifying the main promises and challenges of DL application in psychiatric disorders, and possible directions for future research.


Processes · 2021 · Vol 9 (5) · pp. 768
Author(s): Ruey-Kai Sheu, Lun-Chi Chen, Mayuresh Sunil Pardeshi, Kai-Chih Pai, Chia-Yu Chen

Sheet metal-based products make up a major portion of the furniture market and remain competitive by maintaining high quality standards. During industrial processes, as a sheet metal is converted into an end product, new defects appear and need to be identified carefully. Recent studies have shown that scratches, bumps, and pollution/dust can be identified, but orange peel defects present a new challenge. Our model therefore identifies scratches, bumps, and dust using computer vision algorithms, whereas orange peel defect detection performs better with deep learning. The goal of this paper was to address the artificial intelligence (AI) landing challenge faced in identifying various kinds of sheet metal-based product defects through ALDB-DL process automation. Our system model uses multiple cameras at two different angles to capture the defects of a sheet metal-based drawer box. The aim was to solve multiple-defect detection through the design and implementation of an industrial process integrating AI with Automated Optical Inspection (AOI) for sheet metal-based drawer box defect detection, stated as AI Landing for sheet metal-based Drawer Box defect detection using Deep Learning (ALDB-DL). The scope was thus to achieve higher accuracy using multi-camera-based image feature extraction with computer vision and a deep learning algorithm for defect classification in AOI. We used SHapley Additive exPlanations (SHAP) values for pre-processing, LeNet with a (1 × 1) convolution filter, and a Global Average Pooling (GAP) Convolutional Neural Network (CNN) to achieve the best results. The approach has applications in sheet metal-based product industries, improving quality control for edge and surface detection. The results were competitive, with precision, recall, and area under the curve of 1.00, 0.99, and 0.98, respectively.
The discussion section then presents detailed insights into industrial operation based on the ALDB-DL experience.
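Of the components named above, Global Average Pooling is the simplest to illustrate: it averages each feature map over its spatial dimensions, collapsing an H × W × C activation tensor into a length-C vector that can feed the classifier directly. A toy sketch with assumed dimensions (not the paper's exact network):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical CNN activations: height x width x channels.
feature_maps = rng.normal(size=(8, 8, 16))

# GAP: one scalar per channel, the mean over the 8x8 spatial grid.
gap = feature_maps.mean(axis=(0, 1))
```

Using GAP instead of flattening plus dense layers drastically reduces parameters and makes the head independent of input resolution, which suits multi-camera inspection setups.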

