DeepOMe: A Web Server for the Prediction of 2′-O-Me Sites Based on the Hybrid CNN and BLSTM Architecture

Author(s):  
Hongyu Li ◽  
Li Chen ◽  
Zaoli Huang ◽  
Xiaotong Luo ◽  
Huiqin Li ◽  
...  

2′-O-methylations (2′-O-Me or Nm) are one of the most important layers of regulatory control over gene expression. With increasing attention focused on the characteristics, mechanisms, and influences of 2′-O-Me, a revolutionary technique termed Nm-seq was established, allowing the identification of precise 2′-O-Me sites in RNA sequences with high sensitivity. However, owing to the costs and complexities involved with this new method, large-scale detection and in-depth study of 2′-O-Me remain largely limited. Therefore, the development of a novel computational method to identify 2′-O-Me sites with adequate reliability is urgently needed at the current stage. To address this issue, we propose a hybrid deep-learning algorithm named DeepOMe that combines Convolutional Neural Networks (CNN) and Bidirectional Long Short-Term Memory (BLSTM) to accurately predict 2′-O-Me sites in the human transcriptome. Validated under 4-, 6-, 8-, and 10-fold cross-validation, our proposed model achieved high performance (AUC close to 0.998 and AUPR close to 0.880). When tested on an independent data set, DeepOMe was substantially superior to NmSEER V2.0. To facilitate the use of DeepOMe, a user-friendly web server was constructed, which can be freely accessed at http://deepome.renlab.org.
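
As a rough illustration of the hybrid architecture described above, the following is a minimal Keras sketch of a CNN + BLSTM classifier over one-hot RNA windows; the window length, filter counts, and hidden sizes are assumptions for illustration, not the published DeepOMe configuration.

```python
# Minimal sketch of a CNN + BLSTM hybrid for site prediction from one-hot RNA windows.
# Window length, filter counts, and hidden sizes are illustrative assumptions,
# not the published DeepOMe configuration.
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW = 41          # nucleotides centered on the candidate site (assumed)
ALPHABET = 4         # A, C, G, U one-hot channels

def build_model():
    inp = layers.Input(shape=(WINDOW, ALPHABET))
    x = layers.Conv1D(64, kernel_size=5, padding="same", activation="relu")(inp)
    x = layers.MaxPooling1D(pool_size=2)(x)
    x = layers.Bidirectional(layers.LSTM(32))(x)
    x = layers.Dense(32, activation="relu")(x)
    out = layers.Dense(1, activation="sigmoid")(x)   # probability of a 2'-O-Me site
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auc"),
                           tf.keras.metrics.AUC(curve="PR", name="aupr")])
    return model

model = build_model()
model.summary()
```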

Electronics ◽  
2020 ◽  
Vol 9 (7) ◽  
pp. 1140
Author(s):  
Jeong-Hee Lee ◽  
Jongseok Kang ◽  
We Shim ◽  
Hyun-Sang Chung ◽  
Tae-Eung Sung

Building a pattern detection model with a deep learning algorithm for data collected from manufacturing sites is an effective way for enterprises to support decision-making and assess business feasibility, by providing the results and implications of pattern analysis of the big data generated at those sites. Identifying the threshold of an abnormal pattern requires collaboration between data analysts and manufacturing process experts, but this is practically difficult and time-consuming. This paper suggests how to derive the threshold setting of the abnormal pattern without manual labelling by process experts, and offers an algorithm that predicts potential future failures in advance by using a hybrid Convolutional Neural Network (CNN)–Long Short-Term Memory (LSTM) algorithm together with the Fast Fourier Transform (FFT). We found that abnormal patterns that cannot be found in the time domain are easier to detect after preprocessing the data set with the FFT. Our study shows that both training loss and test loss converged close to zero, with the lowest loss rate compared to existing models such as LSTM. The proposed model and data preprocessing method greatly help in understanding the abnormal patterns of unlabelled big data produced at manufacturing sites, and can serve as a strong foundation for detecting the thresholds of abnormal patterns in big data occurring at such sites.
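
The following is a minimal sketch of the idea described above: an FFT preprocessing step that moves a sensor window into the frequency domain, followed by a hybrid CNN-LSTM scorer. The window length, channel count, and layer sizes are illustrative assumptions, not the authors' implementation.

```python
# Sketch: FFT preprocessing of a sensor window followed by a CNN-LSTM scorer.
# Window length, channel count, and layer sizes are assumptions for illustration.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def to_spectrum(window):
    """Convert a (time, channels) sensor window to its normalized magnitude spectrum."""
    spec = np.abs(np.fft.rfft(window, axis=0))        # frequency-domain magnitudes
    return spec / (spec.max(axis=0, keepdims=True) + 1e-8)

def build_cnn_lstm(freq_bins, channels):
    inp = layers.Input(shape=(freq_bins, channels))
    x = layers.Conv1D(32, 3, activation="relu", padding="same")(inp)
    x = layers.MaxPooling1D(2)(x)
    x = layers.LSTM(32)(x)
    out = layers.Dense(1, activation="sigmoid")(x)    # abnormal-pattern score
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

window = np.random.randn(256, 3)                      # dummy 3-channel sensor window
spec = to_spectrum(window)
model = build_cnn_lstm(spec.shape[0], spec.shape[1])
model.predict(spec[np.newaxis, ...])
```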


Author(s):  
Dang Viet Hung ◽  
Ha Manh Hung ◽  
Pham Hoang Anh ◽  
Nguyen Truong Thang

Timely monitoring of large-scale civil structures is a tedious task demanding expert experience and significant economic resources. Towards a smart monitoring system, this study proposes a hybrid deep learning algorithm for structural damage detection that not only reduces the required resources, including computational complexity and data storage, but also can deal with different damage levels. The technique combines the ability of a Convolutional Neural Network to capture local connectivity with the well-known ability of a Long Short-Term Memory network to account for long-term dependencies, in a single end-to-end architecture that works directly on raw acceleration time series without requiring any signal preprocessing step. The proposed approach is applied to a series of experimentally measured vibration data from a three-story frame and successfully provides accurate damage identification results. Furthermore, parametric studies are carried out to demonstrate the robustness of this hybrid deep learning method when facing data corrupted by random noise, which is unavoidable in reality. Keywords: structural damage detection; deep learning algorithm; vibration; sensor; signal processing.
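
A minimal sketch of the end-to-end idea described above: a CNN front end that captures local patterns in raw acceleration time series, followed by an LSTM for long-term dependencies and a softmax over damage levels. Sequence length, sensor count, and the number of damage levels are assumptions for illustration.

```python
# Sketch: end-to-end CNN-LSTM damage classifier fed raw acceleration time series,
# with no signal preprocessing. Sequence length, sensor count, and number of
# damage levels are assumptions for illustration.
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN = 1024      # samples per window (assumed)
SENSORS = 4         # accelerometer channels (assumed)
N_CLASSES = 5       # intact state plus four damage levels (assumed)

inp = layers.Input(shape=(SEQ_LEN, SENSORS))
x = layers.Conv1D(16, 64, strides=4, activation="relu")(inp)   # local connectivity
x = layers.Conv1D(32, 16, strides=2, activation="relu")(x)
x = layers.LSTM(64)(x)                                         # long-term dependencies
out = layers.Dense(N_CLASSES, activation="softmax")(x)

model = models.Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```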


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Bangtong Huang ◽  
Hongquan Zhang ◽  
Zihong Chen ◽  
Lingling Li ◽  
Lihua Shi

Deep learning algorithms face limitations in virtual reality applications due to memory cost, computation cost, and real-time constraints. Models with strong performance may suffer from enormous parameter counts and large-scale structures, making them hard to port onto embedded devices. In this paper, inspired by GhostNet, we propose an efficient structure, ShuffleGhost, which exploits the redundancy in feature maps to reduce computation cost while addressing some drawbacks of GhostNet. GhostNet suffers from the high computation cost of the convolutions in the Ghost module and shortcut, and its downsampling restriction makes it difficult to apply the Ghost module and Ghost bottleneck to other backbones. This paper proposes three new kinds of ShuffleGhost structures to tackle these drawbacks. The ShuffleGhost module and ShuffleGhost bottlenecks use the shuffle layer and group convolution from ShuffleNet, and are designed to redistribute the feature maps concatenated from the Ghost feature maps and the primary feature maps, eliminating the gap between them and extracting features. An SENet layer is then adopted to reduce the computation cost of the group convolution, as well as to evaluate the importance of the concatenated Ghost and primary feature maps and assign proper weights to them. Experiments show that ShuffleGhostV3 has fewer trainable parameters and FLOPs while preserving accuracy, and with proper design it can be more efficient on both the GPU and CPU side.
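
A rough sketch of the kind of block described above, combining a cheap "ghost" branch, a channel shuffle with group convolution, and an SE gate that weights the concatenated feature maps. Filter counts, group count, and input size are assumptions; this is not the authors' ShuffleGhost code.

```python
# Sketch of a ShuffleGhost-style block: group convolution for the primary features,
# a cheap depthwise "ghost" branch, channel shuffle to redistribute channels across
# groups, and an SE gate that weights the concatenated maps. All sizes are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

GROUPS = 2

def channel_shuffle(x, groups=GROUPS):
    h, w, c = x.shape[1], x.shape[2], x.shape[3]
    x = tf.reshape(x, [-1, h, w, groups, c // groups])
    x = tf.transpose(x, [0, 1, 2, 4, 3])      # interleave channels across groups
    return tf.reshape(x, [-1, h, w, c])

def se_gate(x, ratio=4):
    c = x.shape[-1]
    s = layers.GlobalAveragePooling2D()(x)
    s = layers.Dense(c // ratio, activation="relu")(s)
    s = layers.Dense(c, activation="sigmoid")(s)              # per-channel weights
    return layers.Multiply()([x, layers.Reshape((1, 1, c))(s)])

def shuffle_ghost_block(x, out_channels):
    primary = layers.Conv2D(out_channels // 2, 1, groups=GROUPS, activation="relu")(x)
    ghost = layers.DepthwiseConv2D(3, padding="same", activation="relu")(primary)
    x = layers.Concatenate()([primary, ghost])                # primary + ghost maps
    x = layers.Lambda(channel_shuffle)(x)                     # redistribute channels
    return se_gate(x)

inp = layers.Input(shape=(32, 32, 16))
out = shuffle_ghost_block(inp, 32)
model = models.Model(inp, out)
model.summary()
```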


2020 ◽  
Vol 34 (4) ◽  
pp. 437-444
Author(s):  
Lingyan Ou ◽  
Ling Chen

Corporate internet reporting (CIR) offers advantages such as strong timeliness, a large volume of information, and wide coverage of financial information. However, CIR, like any other online information, faces various risks. With the aid of increasingly sophisticated artificial intelligence (AI) technology, this paper proposes an improved deep learning algorithm for the prediction of CIR risks, aiming to improve the accuracy of CIR risk prediction. After building a reasonable evaluation index system (EIS) for CIR risks, the data involved in risk rating and in the prediction of the risk transmission effect (RTE) were subjected to structured feature extraction and time series construction. Next, a combinatory CIR risk prediction model was established by combining the autoregressive moving average (ARMA) model with long short-term memory (LSTM); the former is good at depicting linear series, while the latter excels at describing nonlinear series. Experimental results demonstrate the effectiveness of the ARMA-LSTM model. The research findings provide a good reference for applying AI technology to risk prediction in other areas.
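
A minimal sketch of the ARMA-LSTM idea: an ARMA model captures the linear component of a risk-indicator series, an LSTM models the nonlinear residuals, and their forecasts are summed. Model orders, the window size, and the synthetic series are illustrative assumptions, not the paper's configuration.

```python
# Sketch of an ARMA + LSTM hybrid: ARMA fits the linear part of the series,
# an LSTM learns its residuals, and the two forecasts are combined.
# Orders, window size, and the synthetic series are illustrative assumptions.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
import tensorflow as tf
from tensorflow.keras import layers, models

series = np.cumsum(np.random.randn(300)) * 0.1       # stand-in for a CIR risk indicator
arma = ARIMA(series, order=(2, 0, 1)).fit()           # ARMA(2, 1): linear component
resid = arma.resid                                    # nonlinear remainder for the LSTM

WIN = 10
X = np.array([resid[i:i + WIN] for i in range(len(resid) - WIN)])[..., np.newaxis]
y = resid[WIN:]

lstm = models.Sequential([layers.Input(shape=(WIN, 1)),
                          layers.LSTM(16),
                          layers.Dense(1)])
lstm.compile(optimizer="adam", loss="mse")
lstm.fit(X, y, epochs=5, verbose=0)

# One-step-ahead combined forecast: ARMA forecast plus LSTM residual correction.
next_linear = arma.forecast(steps=1)[0]
next_resid = lstm.predict(resid[-WIN:].reshape(1, WIN, 1), verbose=0)[0, 0]
print("combined forecast:", next_linear + next_resid)
```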


Author(s):  
Usman Ahmed ◽  
Jerry Chun-Wei Lin ◽  
Gautam Srivastava

Deep learning methods have led to state-of-the-art medical applications, such as image classification and segmentation, and data-driven deep learning applications can help stakeholders collaborate. However, limited labelled data restricts the ability of a deep learning algorithm to generalize from one domain to another. To handle this problem, meta-learning helps a model learn from a small set of data. We propose a meta-learning-based image segmentation model that builds on the learning of a state-of-the-art model and uses it to achieve domain adaptation and high accuracy. We also propose a preprocessing algorithm to increase the usability of the segmented parts and remove noise from new test images. The proposed model achieves 0.94 precision and 0.92 recall, an improvement of 3.3% over state-of-the-art algorithms.
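
As a loose illustration of learning from a small labelled set, the following sketches a first-order (Reptile-style) meta-learning loop around a small segmentation network; the network, the task data, and the step sizes are placeholders and are not the authors' implementation.

```python
# Sketch of a first-order (Reptile-style) meta-learning step for a small segmentation
# network: adapt on a few labelled images, then move the initial weights toward the
# adapted weights. Network size, step sizes, and task data are placeholders.
import tensorflow as tf
from tensorflow.keras import layers, models

def small_segmenter(size=64):
    inp = layers.Input(shape=(size, size, 1))
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(x)
    out = layers.Conv2D(1, 1, activation="sigmoid")(x)        # per-pixel mask
    return models.Model(inp, out)

model = small_segmenter()
loss_fn = tf.keras.losses.BinaryCrossentropy()
inner_opt = tf.keras.optimizers.SGD(1e-2)
META_LR = 0.1

def meta_step(task_images, task_masks, inner_steps=3):
    """One meta-update from a small labelled task set (arrays of images and masks)."""
    start = [w.numpy() for w in model.trainable_weights]
    for _ in range(inner_steps):                              # adapt on the task set
        with tf.GradientTape() as tape:
            loss = loss_fn(task_masks, model(task_images, training=True))
        grads = tape.gradient(loss, model.trainable_weights)
        inner_opt.apply_gradients(zip(grads, model.trainable_weights))
    for w, s in zip(model.trainable_weights, start):          # Reptile meta-update
        w.assign(s + META_LR * (w.numpy() - s))
```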


2021 ◽  
Vol 263 (1) ◽  
pp. 5552-5554
Author(s):  
Kim Deukha ◽  
Seongwook Jeon ◽  
Won June Lee ◽  
Junhong Park

Intraocular pressure (IOP) measurement is one of the basic tests performed in ophthalmology, and IOP is known to be an important risk factor for the development and progression of glaucoma. Measurement of IOP is important for assessing the response to treatment and monitoring the progression of the disease in glaucoma. In this study, we investigate a method for measuring IOP using the characteristics of vibration propagation generated when a structure is in contact with the eyeball. The response was measured using an accelerometer and a force-sensitive resistor to determine their correlation with the IOP. Experiments were performed using ex-vivo porcine eyes. To control the IOP, a needle on an infusion line connected to a water bottle was inserted into the porcine eyes through the limbus. A cross-correlation analysis between the accelerometer and force-sensitive-resistor signals was performed to derive a vibration factor that indicates the change in IOP. To analyze the influence of biological tissues such as the eyelid, silicone was placed between the structure and the eyeball. The Long Short-Term Memory (LSTM) deep learning algorithm was used to predict IOP based on the vibration factor.
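
A minimal sketch of the pipeline described above: a cross-correlation-based vibration factor computed from the accelerometer and force-sensitive-resistor signals, fed as a sequence into an LSTM regressor for IOP. The feature definition, signal handling, and network size are illustrative assumptions.

```python
# Sketch: a cross-correlation-derived vibration factor from the two sensor signals,
# then an LSTM that regresses IOP from a sequence of such factors.
# Feature definition and network size are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def vibration_factor(accel, fsr):
    """Peak normalized cross-correlation between accelerometer and FSR signals."""
    a = (accel - accel.mean()) / accel.std()
    f = (fsr - fsr.mean()) / fsr.std()
    xcorr = np.correlate(a, f, mode="full") / len(a)
    return xcorr.max()

# A sequence of vibration factors from consecutive contact measurements feeds the LSTM.
SEQ = 20
model = models.Sequential([layers.Input(shape=(SEQ, 1)),
                           layers.LSTM(16),
                           layers.Dense(1)])          # predicted IOP
model.compile(optimizer="adam", loss="mse")
```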


GEOMATICA ◽  
2021 ◽  
pp. 1-23
Author(s):  
Roholah Yazdan ◽  
Masood Varshosaz ◽  
Saied Pirasteh ◽  
Fabio Remondino

Automatic detection and recognition of traffic signs from images is an important topic in many applications. First, we segmented the images using a classification algorithm to delineate the areas where signs are more likely to be found. In this regard, shadows, objects with similar colours, and extreme illumination changes can significantly affect the segmentation results. We propose a new shape-based algorithm to improve the accuracy of the segmentation. The algorithm works by incorporating the sign geometry to filter out wrong pixels from the classification results. We performed several tests to compare the performance of our algorithm against popular techniques such as Support Vector Machine (SVM), K-Means, and K-Nearest Neighbours. In these tests, to overcome unwanted illumination effects, the images were transformed into the Hue, Saturation, and Intensity; YUV; normalized red-green-blue; and Gaussian colour spaces. Among the traditional techniques used in this study, the best results were obtained with SVM applied to the images transformed into the Gaussian colour space. The comparison results also suggest that adding the geometric constraints proposed in this study improves the quality of sign image segmentation by 10%–25%. We also compared the SVM classifier enhanced by incorporating the sign geometry with a U-shaped deep learning algorithm. The results suggest the performance of the two techniques is very close; the deep learning results could perhaps be improved if a more comprehensive data set were available.
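
A rough sketch of per-pixel colour-space classification with an SVM (shown here in HSV, one of the colour spaces mentioned), producing a candidate sign mask that the shape-based geometric filtering would then refine. The training pixels and labels are placeholders.

```python
# Sketch: per-pixel SVM classification in the HSV colour space, yielding a candidate
# sign mask to be refined by shape-based geometric filtering. Training data are dummies.
import numpy as np
from skimage.color import rgb2hsv
from sklearn.svm import SVC

def pixel_features(rgb_image):
    hsv = rgb2hsv(rgb_image)                 # colour transform reduces illumination effects
    return hsv.reshape(-1, 3)

# Placeholder training data: a small labelled image with a sign / background mask.
train_img = np.random.rand(32, 32, 3)
train_mask = np.zeros((32, 32), dtype=int)
train_mask[8:24, 8:24] = 1                   # pretend sign region

clf = SVC(kernel="rbf", gamma="scale")
clf.fit(pixel_features(train_img), train_mask.ravel())

test_img = np.random.rand(32, 32, 3)
candidate_mask = clf.predict(pixel_features(test_img)).reshape(32, 32)
# Geometric (shape-based) filtering of candidate_mask would follow here.
```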


Nanomaterials ◽  
2020 ◽  
Vol 10 (4) ◽  
pp. 664 ◽  
Author(s):  
Junsong Hu ◽  
Junsheng Yu ◽  
Ying Li ◽  
Xiaoqing Liao ◽  
Xingwu Yan ◽  
...  

The rational design of flexible pressure sensors with excellent performance and prominent features, including high sensitivity and a relatively wide workable linear range, has attracted significant attention owing to their potential applications in advanced wearable electronics and artificial intelligence. Herein, nano carbon black from kerosene soot, an atmospheric pollutant generated during the incomplete burning of hydrocarbon fuels, was utilized as the conductive material, with a bottom interdigitated textile electrode screen-printed using silver paste, to construct a piezoresistive pressure sensor with prominent performance. Owing to the distinct loose porous structure, the lumpy surface roughness of the fabric electrodes, and the softness of polydimethylsiloxane, the piezoresistive pressure sensor exhibited superior detection performance, including high sensitivity (31.63 kPa−1 within the range of 0–2 kPa), a relatively large feasible range (0–15 kPa), a low detection limit (2.26 Pa), and a rapid response time (15 ms). These sensors are thus outstanding candidates for detecting human physiological signals and large-scale limb movement, showing broad application prospects in the advanced wearable electronics field.
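
For context, the sensitivity quoted above (in kPa−1) is conventionally defined as the relative resistance change per unit applied pressure; this is the standard textbook relation for a piezoresistive sensor, not a derivation taken from this paper.

```latex
% Standard definition of piezoresistive pressure sensitivity (units: kPa^{-1}):
% \Delta R is the resistance change, R_0 the unloaded resistance, P the applied pressure.
S = \frac{\partial\,(\Delta R / R_{0})}{\partial P}
```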


2019 ◽  
Vol 5 (Supplement_1) ◽  
Author(s):  
David Nieuwenhuijse ◽  
Bas Oude Munnink ◽  
My Phan ◽  
Marion Koopmans

Sewage samples have a high potential benefit for surveillance of circulating pathogens because they are easy to obtain and reflect population-wide circulation of pathogens. These types of samples typically contain a great diversity of viruses. Therefore, one of the main challenges of metagenomic sequencing of sewage for surveillance is sequence annotation and interpretation. Especially for high-threat viruses, false positive signals can trigger unnecessary alerts, but true positives should not be missed; annotation thus requires high sensitivity and specificity. To better interpret annotated reads for high-threat viruses, we attempt to determine how classifiable they are against a background of reads from closely related low-threat viruses. As an example, we attempted to distinguish poliovirus reads, a virus of high public health importance, from other enterovirus reads. A sequence-based deep learning algorithm was used to classify reads as either polio or non-polio enterovirus. Short reads were generated from 500 polio and 2,000 non-polio enterovirus genomes as a training set. By training the algorithm on this data set, we tried to determine, at the single-read level, which short reads can reliably be labelled as poliovirus and which cannot. After training the deep learning algorithm on the generated reads, we were able to calculate the probability with which a read can be assigned to a poliovirus genome or a non-poliovirus genome. We show that the algorithm succeeds in classifying the reads with high accuracy. The probability of assigning a read to the correct class was related to the location in the genome to which the read mapped, which conforms with our expectations since some regions of the genome are more conserved than others. Classifying short reads of high-threat viral pathogens thus seems to be a promising application of sequence-based deep learning algorithms. Moreover, recent developments in software and hardware have facilitated the development and training of deep learning algorithms. Further plans for this work are to characterize the hard-to-classify regions of the poliovirus genome, build larger training databases, and expand the current approach to other viruses.
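
A minimal sketch of the sequence-based classification idea: one-hot encoding of short reads and a small 1-D CNN that outputs the probability that a read is poliovirus rather than non-polio enterovirus. Read length and layer sizes are assumptions for illustration, not the authors' model.

```python
# Sketch: one-hot encoding of short reads and a small 1-D CNN that scores each read
# as poliovirus vs. non-polio enterovirus. Read length and layer sizes are assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

BASES = "ACGT"
READ_LEN = 150                                        # assumed short-read length

def one_hot(read):
    x = np.zeros((READ_LEN, 4), dtype=np.float32)
    for i, b in enumerate(read[:READ_LEN]):
        if b in BASES:
            x[i, BASES.index(b)] = 1.0                # ambiguous bases stay all-zero
    return x

model = models.Sequential([
    layers.Input(shape=(READ_LEN, 4)),
    layers.Conv1D(32, 9, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),            # P(read is poliovirus)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

read = "ACGT" * 38                                    # dummy read, truncated to READ_LEN
prob_polio = model.predict(one_hot(read)[np.newaxis], verbose=0)[0, 0]
```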

