Assessing the Applicability of Deep Learning Methods for Segmenting Geometrical Features in Large-Scale Jet Flames from Infrared Images

Heliyon ◽  
2021 ◽  
pp. e08349
Author(s):  
Carmina Pérez-Guerrero ◽  
Adriana Palacios ◽  
Gilberto Ochoa-Ruiz ◽  
Christian Mata ◽  
Joaquim Casal ◽  
...

2021 ◽ 
Vol 61 (2) ◽  
pp. 653-663
Author(s):  
Sankalp Jain ◽  
Vishal B. Siramshetty ◽  
Vinicius M. Alves ◽  
Eugene N. Muratov ◽  
Nicole Kleinstreuer ◽  
...  

2021 ◽  
Author(s):  
Jimin Pei ◽  
Jing Zhang ◽  
Qian Cong

Abstract Recent development of deep-learning methods has led to a breakthrough in the prediction accuracy of 3-dimensional protein structures. Extending these methods to protein pairs is expected to allow large-scale detection of protein-protein interactions and modeling of protein complexes at the proteome level. We applied RoseTTAFold and AlphaFold2, two of the latest deep-learning methods for structure prediction, to analyze coevolution of human proteins residing in mitochondria, an organelle of vital importance in many cellular processes including energy production, metabolism, cell death, and antiviral response. Variations in mitochondrial proteins have been linked to a plethora of human diseases and genetic conditions. RoseTTAFold, with high computational speed, was used to predict the coevolution of about 95% of mitochondrial protein pairs. Top-ranked pairs were further subjected to modeling of their complex structures by AlphaFold2, which also produced contact probabilities with high precision, in many cases consistent with RoseTTAFold. Most of the top-ranked pairs with high contact probability were supported by known protein-protein interactions and/or similarities to experimental structural complexes. For high-scoring pairs without experimental complex structures, our coevolution analyses and structural models shed light on the details of their interfaces, including CHCHD4-AIFM1, MTERF3-TRUB2, FMC1-ATPAF2, ECSIT-NDUFAF1 and COQ7-COQ9, among others. We also identified novel PPIs (PYURF-NDUFAF5, LYRM1-MTRF1L and COA8-COX10) for several proteins without experimentally characterized interaction partners, leading to predictions of their molecular functions and the biological processes they are involved in.
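
As an illustration of the screening-then-modeling workflow described above, the following is a minimal, hypothetical sketch of how candidate protein pairs could be ranked by predicted inter-chain contact probability before passing the top pairs on to AlphaFold2. The file layout, scoring function, and `contact_maps` directory are assumptions for illustration, not the authors' pipeline.

```python
# Minimal sketch (not the authors' pipeline): rank candidate protein pairs by their
# highest predicted inter-chain contact probabilities. Assumes each pair already has
# a residue-by-residue contact map (e.g. exported from a RoseTTAFold 2-track run)
# saved as a NumPy array of shape (len_A, len_B); the file layout is hypothetical.
import numpy as np
from pathlib import Path

def pair_score(contact_map: np.ndarray, top_k: int = 10) -> float:
    """Average of the top_k inter-chain contact probabilities."""
    flat = np.sort(contact_map.ravel())[::-1]
    return float(flat[:top_k].mean())

def rank_pairs(map_dir: str) -> list:
    scores = []
    for npy in Path(map_dir).glob("*.npy"):      # one file per protein pair
        scores.append((npy.stem, pair_score(np.load(npy))))
    return sorted(scores, key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    for name, score in rank_pairs("contact_maps")[:20]:
        print(f"{name}\t{score:.3f}")   # top-ranked pairs go on to AlphaFold2 modeling
```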


Author(s):  
E. CELLEDONI ◽  
M. J. EHRHARDT ◽  
C. ETMANN ◽  
R. I. MCLACHLAN ◽  
B. OWREN ◽  
...  

Over the past few years, deep learning has risen to the foreground as a topic of massive interest, mainly as a result of successes obtained in solving large-scale image processing tasks. There are multiple challenging mathematical problems involved in applying deep learning: most deep learning methods require the solution of hard optimisation problems, and a good understanding of the trade-off between computational effort, amount of data and model complexity is required to successfully design a deep learning approach for a given problem. A large amount of progress made in deep learning has been based on heuristic explorations, but there is a growing effort to mathematically understand the structure in existing deep learning methods and to systematically design new deep learning methods that preserve certain types of structure. In this article, we review a number of these directions: some deep neural networks can be understood as discretisations of dynamical systems, neural networks can be designed to have desirable properties such as invertibility or group equivariance, and new algorithmic frameworks based on conformal Hamiltonian systems and Riemannian manifolds to solve the optimisation problems have been proposed. We conclude our review of each of these topics by discussing some open problems that we consider to be interesting directions for future research.
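
To make the "networks as discretisations of dynamical systems" viewpoint concrete, here is a minimal sketch, not taken from the article, showing that stacking residual blocks x_{k+1} = x_k + h f(x_k) amounts to taking forward Euler steps of the ODE dx/dt = f(x); the toy vector field, weights, and step sizes are illustrative.

```python
# Minimal sketch of the "residual networks as discretised dynamical systems" view:
# a residual block x_{k+1} = x_k + h * f(x_k) is one forward Euler step of the
# ODE dx/dt = f(x). The weights and step sizes below are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
W, b = rng.standard_normal((4, 4)) * 0.1, np.zeros(4)

def f(x):                      # the learned vector field of one block
    return np.tanh(W @ x + b)

def resnet_forward(x, n_blocks=10, h=1.0):
    for _ in range(n_blocks):  # stacking blocks == taking Euler steps
        x = x + h * f(x)
    return x

x0 = rng.standard_normal(4)
print(resnet_forward(x0))            # depth-10 residual network
print(resnet_forward(x0, 100, 0.1))  # finer discretisation of the same flow
```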


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Bowen Shen ◽  
Hao Zhang ◽  
Cong Li ◽  
Tianheng Zhao ◽  
Yuanning Liu

Traditional machine learning methods are widely used in the field of RNA secondary structure prediction and have achieved good results. However, with the emergence of large-scale data, deep learning methods have advantages over traditional machine learning methods. As the number of network layers increases in deep learning, problems such as growing parameter counts and overfitting often arise. We used two deep learning models, GoogLeNet and TCN, to predict RNA secondary structure, and improved both models in terms of network depth and width, which effectively improves computational efficiency while extracting more feature information. We process existing real RNA data, use the deep learning models to extract useful features from large amounts of RNA sequence and structure data, and then predict from the extracted features each base's pairing probability. The characteristics of RNA secondary structure and dynamic programming are used to post-process the base-pairing predictions, and the structure with the largest sum of base-pairing probabilities is taken as the optimal RNA secondary structure. We evaluated the GoogLeNet and TCN models on 5sRNA, tRNA, and tmRNA data and compared them with other standard prediction algorithms. The sensitivity and specificity of the GoogLeNet model on the 5sRNA and tRNA data sets are about 16% higher than the best prediction results of the other algorithms, and about 9% higher on the tmRNA dataset. Since the performance of deep learning algorithms is related to the size of the data set, the prediction accuracy of deep learning methods for RNA secondary structure should continue to improve as the scale of RNA data expands.
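
The post-processing step described above, choosing the nested structure with the largest sum of predicted base-pairing probabilities, can be sketched as a Nussinov-style dynamic program. The probability threshold, minimum loop length, and toy matrix below are illustrative assumptions rather than the paper's exact settings.

```python
# Minimal sketch of the post-processing step: a Nussinov-style dynamic program that,
# given a matrix P of predicted base-pairing probabilities, scores the nested
# (pseudoknot-free) structure maximising the sum of pairing probabilities.
# A traceback over M would recover the actual base pairs; omitted for brevity.
import numpy as np

def max_prob_structure(P, threshold=0.5, min_loop=3):
    n = len(P)
    M = np.zeros((n, n))               # M[i, j]: best score for subsequence i..j
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = max(M[i + 1, j], M[i, j - 1])
            if P[i, j] >= threshold:   # pair (i, j) only if predicted probability is high
                best = max(best, P[i, j] + M[i + 1, j - 1])
            for k in range(i + 1, j):  # bifurcation into two sub-structures
                best = max(best, M[i, k] + M[k + 1, j])
            M[i, j] = best
    return M[0, n - 1]

rng = np.random.default_rng(1)
P = rng.random((20, 20)); P = (P + P.T) / 2   # symmetric toy probability matrix
print(max_prob_structure(P))
```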


2018 ◽  
Author(s):  
William J. Godinez ◽  
Imtiaz Hossain ◽  
Xian Zhang

Abstract Large-scale cellular imaging and phenotyping is a widely adopted strategy for understanding biological systems and chemical perturbations. Quantitative analysis of cellular images for identifying phenotypic changes is a key challenge within this strategy, and has recently seen promising progress with approaches based on deep neural networks. However, studies so far require either pre-segmented images as input or manual phenotype annotations for training, or both. To address these limitations, we have developed an unsupervised approach that exploits the inherent groupings within cellular imaging datasets to define surrogate classes that are used to train a multi-scale convolutional neural network. The trained network takes as input full-resolution microscopy images and, without the need for segmentation, yields as output feature vectors that support phenotypic profiling. Benchmarked on two diverse datasets, the proposed approach yields accurate phenotypic predictions as well as compound potency estimates comparable to the state-of-the-art. More importantly, we show that the approach identifies novel cellular phenotypes not included in the manual annotation nor detected by previous studies.
Author summary: Cellular microscopy images provide detailed information about how cells respond to genetic or chemical treatments, and have been widely and successfully used in basic research and drug discovery. The recent breakthrough of deep learning methods on natural image recognition tasks has triggered the development and application of deep learning methods to cellular images to understand how cells change upon perturbation. Although successful, deep learning studies so far either can only take images of individual cells as input or require human experts to label a large number of images. In this paper, we present an unsupervised deep learning approach that, without any human annotation, directly analyzes full-resolution microscopy images displaying typically hundreds of cells. We apply the approach to two benchmark datasets and show that it identifies novel visual phenotypes not detected by previous studies.
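
A minimal sketch of the surrogate-class idea follows, assuming tf.keras is available and that a metadata grouping (e.g. treatment identifiers) supplies the surrogate labels; the small CNN stands in for the authors' multi-scale architecture and is not their implementation.

```python
# Minimal sketch of surrogate-class training: use an inherent grouping of the imaging
# dataset (placeholder random ids below) as class labels, train a CNN on full images,
# then read phenotypic profiles from the penultimate layer. Architecture is a stand-in.
import numpy as np
import tensorflow as tf

def build_model(n_surrogate_classes, input_shape=(256, 256, 1)):
    inp = tf.keras.Input(shape=input_shape)
    x = inp
    for filters in (16, 32, 64):                       # small CNN stand-in
        x = tf.keras.layers.Conv2D(filters, 3, activation="relu", padding="same")(x)
        x = tf.keras.layers.MaxPool2D(4)(x)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    features = tf.keras.layers.Dense(128, activation="relu", name="profile")(x)
    out = tf.keras.layers.Dense(n_surrogate_classes, activation="softmax")(features)
    return tf.keras.Model(inp, out)

images = np.random.rand(32, 256, 256, 1).astype("float32")   # placeholder microscopy images
surrogate = np.random.randint(0, 4, size=32)                  # placeholder grouping ids
model = build_model(n_surrogate_classes=4)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(images, surrogate, epochs=1, batch_size=8)

# After training, the penultimate layer provides the phenotypic profile for each image.
profiler = tf.keras.Model(model.input, model.get_layer("profile").output)
print(profiler.predict(images).shape)    # (32, 128) feature vectors
```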


2020 ◽  
Vol 82 (12) ◽  
pp. 2635-2670 ◽  
Author(s):  
Muhammed Sit ◽  
Bekir Z. Demiray ◽  
Zhongrun Xiang ◽  
Gregory J. Ewing ◽  
Yusuf Sermet ◽  
...  

Abstract The global volume of digital data is expected to reach 175 zettabytes by 2025. The volume, variety and velocity of water-related data are increasing due to large-scale sensor networks and increased attention to topics such as disaster response, water resources management, and climate change. Combined with the growing availability of computational resources and popularity of deep learning, these data are transformed into actionable and practical knowledge, revolutionizing the water industry. In this article, a systematic review of the literature is conducted to identify existing research that incorporates deep learning methods in the water sector, with regard to monitoring, management, governance and communication of water resources. The study provides a comprehensive review of state-of-the-art deep learning approaches used in the water industry for generation, prediction, enhancement, and classification tasks, and serves as a guide for how to utilize available deep learning methods for future water resources challenges. Key issues and challenges in the application of these techniques in the water domain are discussed, including the ethics of these technologies for decision-making in water resources management and governance. Finally, we provide recommendations and future directions for the application of deep learning models in hydrology and water resources.


2021 ◽  
Vol 13 (7) ◽  
pp. 1290
Author(s):  
Jiangbo Xi ◽  
Okan K. Ersoy ◽  
Jianwu Fang ◽  
Ming Cong ◽  
Tianjun Wu ◽  
...  

Recently, deep learning methods, for example, convolutional neural networks (CNNs), have achieved high performance in hyperspectral image (HSI) classification. The limited number of HSI training samples makes it hard to use deep learning methods with many layers and a large number of convolutional kernels as in large-scale imagery tasks, and CNN-based methods usually need long training times. In this paper, we present a wide sliding window and subsampling network (WSWS Net) for HSI classification. It is based on layers of transform kernels with sliding windows and subsampling (WSWS). It can be extended in the wide direction to learn both spatial and spectral features more efficiently. The learned features are subsampled to reduce computational loads and to reduce memorization. Thus, layers of WSWS can learn higher-level spatial and spectral features efficiently, and the proposed network can be trained easily by only computing linear weights with least squares. The experimental results show that the WSWS Net achieves excellent performance on different hyperspectral remote sensing datasets compared with other shallow and deep learning methods. The effects of the ratio of training samples and the sizes of image patches, as well as the visualization of features in WSWS layers, are presented.
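
A minimal sketch of the training idea described above follows: fixed sliding-window transform kernels, spatial subsampling, and output weights computed in closed form by least squares rather than backpropagation. The patch size, random kernels, and subsampling stride are illustrative assumptions, not the exact WSWS Net.

```python
# Minimal sketch (not the exact WSWS Net): extract sliding-window patches from an
# HSI cube, project them through fixed random kernels, subsample spatial positions,
# and fit the classifier weights in closed form with least squares.
import numpy as np

rng = np.random.default_rng(0)
H, W, B, n_classes = 32, 32, 30, 5
cube = rng.random((H, W, B))                      # toy hyperspectral image
labels = rng.integers(0, n_classes, size=(H, W))  # toy per-pixel labels

def window_features(cube, win=5, n_kernels=64, stride=2):
    pad = win // 2
    padded = np.pad(cube, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    kernels = rng.standard_normal((win * win * cube.shape[2], n_kernels))
    feats, ys, xs = [], [], []
    for i in range(0, cube.shape[0], stride):        # subsample spatial positions
        for j in range(0, cube.shape[1], stride):
            patch = padded[i:i + win, j:j + win, :].ravel()
            feats.append(np.tanh(patch @ kernels))   # fixed (untrained) transform kernels
            ys.append(i); xs.append(j)
    return np.array(feats), np.array(ys), np.array(xs)

X, ys, xs = window_features(cube)
Y = np.eye(n_classes)[labels[ys, xs]]                # one-hot targets
Wout, *_ = np.linalg.lstsq(X, Y, rcond=None)         # closed-form linear output weights
pred = (X @ Wout).argmax(axis=1)
print("training accuracy:", (pred == labels[ys, xs]).mean())
```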


2020 ◽  
Vol 4 (2) ◽  
pp. 276-285
Author(s):  
Winda Kurnia Sari ◽  
Dian Palupi Rini ◽  
Reza Firsandaya Malik ◽  
Iman Saladin B. Azhar

Multilabel text classification is the task of categorizing text into one or more categories. As with other machine learning tasks, multilabel classification performance is limited by small amounts of labeled data, which makes it difficult to capture semantic relationships. The task here requires a multilabel text classification technique that can assign four labels to news articles. Deep learning is proposed for solving this multilabel text classification problem. Deep learning methods used for text classification include Convolutional Neural Networks, Autoencoders, Deep Belief Networks, and Recurrent Neural Networks (RNN). RNN is one of the most popular architectures in natural language processing (NLP) because its recurrent structure is appropriate for processing variable-length text. The method proposed in this study is an RNN with the Long Short-Term Memory (LSTM) architecture. The models are trained through trial-and-error experiments using LSTM and 300-dimensional word embedding features from Word2Vec. By tuning the parameters and comparing eight proposed LSTM models on a large-scale dataset, we show that LSTM with Word2Vec features can achieve good performance in text classification. The results show that text classification using LSTM with Word2Vec reaches its highest accuracy, 95.38, with the fifth model, with an average precision, recall, and F1-score of 95. In addition, the seventh and eighth LSTM models with the Word2Vec feature yield learning curves that are close to a good fit.
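
A minimal sketch of such an LSTM multilabel classifier follows, assuming tf.keras; the vocabulary size, sequence length, and randomly initialised embedding matrix (a stand-in for the study's Word2Vec vectors) are illustrative assumptions.

```python
# Minimal sketch: an LSTM multilabel classifier over four labels with 300-dimensional
# embeddings and a sigmoid output layer (one independent probability per label).
# Vocabulary size, sequence length, and placeholder data are illustrative only.
import numpy as np
import tensorflow as tf

vocab_size, seq_len, n_labels = 10_000, 200, 4
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 300),              # stand-in for Word2Vec vectors
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(n_labels, activation="sigmoid"),   # multilabel: sigmoid, not softmax
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["binary_accuracy"])

# placeholder data: token ids and multi-hot label vectors
x = np.random.randint(0, vocab_size, size=(64, seq_len))
y = np.random.randint(0, 2, size=(64, n_labels)).astype("float32")
model.fit(x, y, epochs=1, batch_size=16)
```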


2021 ◽  
Vol 13 (21) ◽  
pp. 4443
Author(s):  
Bo Jiang ◽  
Guanting Chen ◽  
Jinshuai Wang ◽  
Hang Ma ◽  
Lin Wang ◽  
...  

The haze in remote sensing images can degrade image quality and creates many obstacles for the applications of remote sensing images. Considering the non-uniform distribution of haze in remote sensing images, we propose a single remote sensing image dehazing method based on the encoder–decoder architecture, which combines wavelet transform and deep learning technology. To address the clarity issue of remote sensing images with non-uniform haze, we first pre-process the input image with a dehazing method based on the atmospheric scattering model, and extract the first-order low-frequency sub-band of its 2D stationary wavelet transform as an additional channel. Meanwhile, we establish a large-scale hazy remote sensing image dataset to train and test the proposed method. Extensive experiments show that, qualitatively, the proposed method has clear advantages over typical traditional methods and deep learning methods. Quantitatively, taking the average of four typical high-performing deep learning methods as the comparison baseline on 500 random test images, the proposed method improves the peak signal-to-noise ratio (PSNR) by 3.5029 dB and the structural similarity (SSIM) by 0.0295. These results comprehensively verify the effectiveness of the proposed method for non-uniform dehazing of remote sensing images.
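
A minimal sketch of the wavelet preprocessing step, assuming PyWavelets (pywt) is available: compute the level-1 2D stationary wavelet transform of the hazy image and append its low-frequency approximation sub-band as an extra input channel. The 'haar' wavelet and the grayscale conversion are assumptions, not necessarily the paper's choices.

```python
# Minimal sketch of the described preprocessing: take the level-1 2D stationary wavelet
# transform of the hazy image and append its low-frequency (approximation) sub-band as an
# additional input channel for the dehazing network. Wavelet choice is illustrative.
import numpy as np
import pywt

def add_swt_channel(rgb: np.ndarray) -> np.ndarray:
    """rgb: float array (H, W, 3) with H and W even (level-1 SWT requirement)."""
    gray = rgb.mean(axis=2)                                  # luminance stand-in
    (cA, (cH, cV, cD)), = pywt.swt2(gray, wavelet="haar", level=1)
    cA = (cA - cA.min()) / (np.ptp(cA) + 1e-8)               # normalise the low-frequency band
    return np.concatenate([rgb, cA[..., None]], axis=2)      # (H, W, 4) network input

hazy = np.random.rand(256, 256, 3)   # placeholder hazy remote sensing image
print(add_swt_channel(hazy).shape)   # (256, 256, 4)
```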


2021 ◽  
Author(s):  
Mohammad Reza Keshtkaran ◽  
Andrew R. Sedler ◽  
Raeed H. Chowdhury ◽  
Raghav Tandon ◽  
Diya Basrai ◽  
...  

Abstract Large-scale recordings of neural activity are providing new opportunities to study network-level dynamics. However, the sheer volume of data and its dynamical complexity are critical barriers to uncovering and interpreting these dynamics. Deep learning methods are a promising approach due to their ability to uncover meaningful relationships from large, complex, and noisy datasets. When applied to high-dimensional spiking data from motor cortex (M1) during stereotyped behaviors, they offer improvements in the ability to uncover dynamics and their relation to subjects’ behaviors on a millisecond timescale. However, applying such methods to less-structured behaviors, or in brain areas that are not well-modeled by autonomous dynamics, is far more challenging, because deep learning methods often require careful hand-tuning of complex model hyperparameters (HPs). Here we demonstrate AutoLFADS, a large-scale, automated model-tuning framework that can characterize dynamics in diverse brain areas without regard to behavior. AutoLFADS uses distributed computing to train dozens of models simultaneously while using evolutionary algorithms to tune HPs in a completely unsupervised way. This enables accurate inference of dynamics out-of-the-box on a variety of datasets, including data from M1 during stereotyped and free-paced reaching, somatosensory cortex during reaching with perturbations, and frontal cortex during cognitive timing tasks. We present a cloud software package and comprehensive tutorials that enable new users to apply the method without needing dedicated computing resources.
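
A generic sketch of the evolutionary hyperparameter-tuning loop behind frameworks of this kind follows: maintain a population of HP settings, score them, keep the best, and perturb them for the next generation. The HP names, ranges, and toy fitness function are placeholders; this is not the AutoLFADS implementation.

```python
# Generic sketch of evolutionary hyperparameter search (exploit the best settings,
# explore by perturbing them). The fitness function below is a stand-in for training
# a model with those HPs and measuring held-out performance.
import random

HP_RANGES = {"dropout": (0.0, 0.6), "kl_weight": (1e-5, 1e-2), "learning_rate": (1e-4, 1e-2)}

def sample_hps():
    return {k: random.uniform(*bounds) for k, bounds in HP_RANGES.items()}

def perturb(hps, scale=0.2):
    out = {}
    for k, v in hps.items():
        lo, hi = HP_RANGES[k]
        out[k] = min(hi, max(lo, v * random.uniform(1 - scale, 1 + scale)))
    return out

def fitness(hps):
    # placeholder objective with a known optimum, instead of real model training
    return -(hps["dropout"] - 0.3) ** 2 - (hps["learning_rate"] - 3e-3) ** 2

population = [sample_hps() for _ in range(16)]
for generation in range(10):
    scored = sorted(population, key=fitness, reverse=True)
    survivors = scored[: len(scored) // 2]                                        # exploit
    population = survivors + [perturb(random.choice(survivors)) for _ in survivors]  # explore
print("best HPs:", max(population, key=fitness))
```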

