An Improved Deep Learning Algorithm for Risk Prediction of Corporate Internet Reporting

2020 ◽  
Vol 34 (4) ◽  
pp. 437-444
Author(s):  
Lingyan Ou ◽  
Ling Chen

Corporate internet reporting (CIR) offers advantages such as timely, abundant, and widely accessible financial information. However, like any other online information, CIR faces various risks. Drawing on increasingly sophisticated artificial intelligence (AI) technology, this paper proposes an improved deep learning algorithm that aims to improve the accuracy of CIR risk prediction. After building a reasonable evaluation index system (EIS) for CIR risks, the data involved in risk rating and the prediction of the risk transmission effect (RTE) were subjected to structured feature extraction and time series construction. Next, a combinatory CIR risk prediction model was established by coupling the autoregressive moving average (ARMA) model, which is good at depicting linear series, with long short-term memory (LSTM), which excels at describing nonlinear series. Experimental results demonstrate the effectiveness of the ARMA-LSTM model. The findings provide a good reference for applying AI technology to risk prediction in other areas.
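The combinatory idea above can be sketched in a few lines: a linear model captures the linear structure of the series, and a nonlinear corrector handles the residuals it leaves behind. In this illustrative sketch an AR(1) fit stands in for the full ARMA model and a moving average of recent residuals stands in for the LSTM; all function names are hypothetical, not from the paper.

```python
def fit_ar1(series):
    """Least-squares estimate of phi in x_t = phi * x_{t-1}."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(x * x for x in series[:-1])
    return num / den

def hybrid_forecast(series, window=3):
    phi = fit_ar1(series)
    # Residuals the linear part cannot explain (the LSTM's input in the paper).
    residuals = [series[t] - phi * series[t - 1] for t in range(1, len(series))]
    # Stand-in for the LSTM's residual forecast: a recent-residual average.
    residual_pred = sum(residuals[-window:]) / window
    # Combined prediction: linear forecast plus the nonlinear correction.
    return phi * series[-1] + residual_pred
```

The design choice is that the two components see different parts of the signal: the linear model is fit first, and the corrector only ever sees what the linear model could not explain.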

Author(s):  
Zheng Fang ◽  
David L. Dowe ◽  
Shelton Peiris ◽  
Dedi Rosadi

We investigate the power of time series analysis based on a variety of information-theoretic approaches from statistics (AIC, BIC) and machine learning (Minimum Message Length), and compare their efficacy with traditional time series models and with hybrids involving deep learning. More specifically, we develop AIC, BIC and Minimum Message Length (MML) ARMA (autoregressive moving average) time series models; the Bayesian information-theoretic MML ARMA modelling is itself new work. We then study deep learning algorithms for time series forecasting, using Long Short Term Memory (LSTM), and combine this with the ARMA modelling to produce a hybrid ARMA-LSTM prediction. Part of the purpose of using the LSTM is to capture any hidden information in the residuals left by the traditional ARMA model. We show that MML not only outperforms the earlier statistical approaches to ARMA modelling, but also that the hybrid MML ARMA-LSTM models outperform both ARMA models and LSTM models.
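The statistical criteria mentioned above score a candidate ARMA order by trading goodness of fit against model complexity. A minimal sketch under Gaussian errors, where AIC = n·ln(RSS/n) + 2k and BIC = n·ln(RSS/n) + k·ln(n); the function names are illustrative, and MML scoring (the new work in the paper) is more involved and is not shown here.

```python
import math

def aic(rss, n, k):
    """Akaike information criterion under Gaussian errors."""
    return n * math.log(rss / n) + 2 * k

def bic(rss, n, k):
    """Bayesian information criterion under Gaussian errors."""
    return n * math.log(rss / n) + k * math.log(n)

def select_order(candidates, n, criterion=aic):
    """candidates maps model order k -> residual sum of squares.
    Returns the order with the lowest criterion value."""
    return min(candidates, key=lambda k: criterion(candidates[k], n, k))
```

BIC penalises extra parameters more heavily than AIC for n > 7 or so, which is why the two criteria can disagree on the chosen order.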


Author(s):  
Luotong Wang ◽  
Li Qu ◽  
Longshu Yang ◽  
Yiying Wang ◽  
Huaiqiu Zhu

Nanopore sequencing is regarded as one of the most promising third-generation sequencing (TGS) technologies. Since 2014, Oxford Nanopore Technologies (ONT) has developed a series of devices based on nanopore sequencing that produce very long reads, with an expected impact on genomics. However, nanopore sequencing reads are susceptible to a fairly high error rate owing to the difficulty of identifying DNA bases from the complex electrical signals. Although several basecalling tools have been developed for nanopore sequencing over the past years, it is still challenging to correct the sequences after the basecalling procedure. In this study, we developed an open-source DNA basecalling reviser, NanoReviser, based on a deep learning algorithm, to correct the basecalling errors introduced by the default basecallers. In our module, we re-segmented the raw electrical signals based on the basecalled sequences provided by the default basecallers. By employing convolutional neural networks (CNNs) and bidirectional long short-term memory (Bi-LSTM) networks, we took advantage of the information in both the raw electrical signals and the basecalled sequences. Our results showed that NanoReviser, as a post-basecalling reviser, significantly improves basecalling quality. After being trained on standard ONT sequencing reads from the public E. coli and human NA12878 datasets, NanoReviser reduced the sequencing error rate by over 5% on both datasets, and its performance was better than that of all current basecalling tools. Furthermore, we analyzed the modified bases of the E. coli dataset and added methylation information to the training of our module. With the methylation annotation, NanoReviser reduced the error rate by 7% on the E. coli dataset, and by over 10% in sequence regions rich in methylated bases.
To the best of our knowledge, NanoReviser is the first post-processing tool after basecalling to accurately correct the nanopore sequences without the time-consuming procedure of building the consensus sequence. The NanoReviser package is freely available at https://github.com/pkubioinformatics/NanoReviser.
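A sequencing error rate of the kind reported above is commonly measured by comparing a read against its reference and normalising the edit distance by the reference length. The sketch below illustrates that metric; it is not NanoReviser's actual evaluation code.

```python
def edit_distance(a, b):
    """Levenshtein distance: minimum substitutions, insertions and deletions
    needed to turn string a into string b (classic dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution / match
        prev = cur
    return prev[-1]

def error_rate(read, reference):
    """Edit distance normalised by reference length."""
    return edit_distance(read, reference) / len(reference)
```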


2020 ◽  
pp. 158-161
Author(s):  
Chandraprabha S ◽  
Pradeepkumar G ◽  
Dineshkumar Ponnusamy ◽  
Saranya M D ◽  
Satheesh Kumar S ◽  
...  

This paper applies artificial intelligence to real-time light-dependent resistor (LDR) data, with applications in indoor lighting, environments where large amounts of heat are produced, agriculture (to increase crop yield), and solar plants (for solar irradiance tracking). The system measures light intensity with an LDR sensor. The acquired readings are posted to an Adafruit cloud every two seconds using a NodeMCU ESP8266 module, and are also presented on the Adafruit dashboard for observing the sensor variables. A long short-term memory (LSTM) network provides the deep learning component: it uses the historical data recorded in the Adafruit cloud, paired with the NodeMCU, to obtain a real-time, long-term time series of the sensor variable measured as light intensity. The data are extracted from the cloud for analytics, and the deep learning model is then applied to predict future light-intensity values.
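Turning the recorded historical cloud data into LSTM training examples is typically a sliding-window transformation: each window of past readings becomes an input and the next reading becomes the target. A minimal sketch with an illustrative function name (the paper does not specify its windowing):

```python
def make_windows(series, n_lags):
    """Turn a sensor series into (input window, next value) pairs,
    the shape an LSTM expects for one-step-ahead forecasting."""
    pairs = []
    for t in range(n_lags, len(series)):
        pairs.append((series[t - n_lags:t], series[t]))
    return pairs
```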


2021 ◽  
Vol 54 (3-4) ◽  
pp. 439-445
Author(s):  
Chih-Ta Yen ◽  
Sheng-Nan Chang ◽  
Cheng-Hong Liao

This study used photoplethysmography (PPG) signals to classify hypertension into no hypertension, prehypertension, stage I hypertension, and stage II hypertension. Four deep learning models were compared in the study. The main difficulty is finding optimal parameters, such as the kernel, kernel size, and number of layers, when little PPG training data is available. PPG signals were used to train a deep residual convolutional neural network (ResNet CNN) and a bidirectional long short-term memory (BiLSTM) network to determine the optimal operating parameters when each dataset consisted of 2100 data points. During the experiment, the ratio of training to testing data was 8:2. The model demonstrated an optimal classification accuracy of 76% on the testing dataset.
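The 8:2 partition of each 2100-point dataset can be sketched as a shuffled split; the shuffling and seed here are assumptions for illustration, since the paper does not state how the partition was drawn.

```python
import random

def split_8_2(samples, seed=0):
    """Shuffle and split a dataset 80% training / 20% testing."""
    rng = random.Random(seed)          # fixed seed for reproducibility
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(0.8 * len(shuffled))
    return shuffled[:cut], shuffled[cut:]
```

With 2100 data points this yields 1680 training and 420 testing samples.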


Author(s):  
Rafly Indra Kurnia ◽  
◽  
Abba Suganda Girsang

This study classifies text based on the ratings of a provider application on the Google Play Store. User comments are classified using Word2vec and a deep learning algorithm, in this case long short-term memory (LSTM). Two labelling schemes are considered: a 1-5 rating scale, where 1 is the lowest and 5 the highest; and a 1-3 scale, where class 1 (negative) combines ratings 1 and 2, class 2 (neutral) is rating 3, and class 3 (positive) combines ratings 4 and 5. SMOTE oversampling is used to handle the class imbalance. The dataset consists of 16,369 user comments on the MyTelkomsel application, collected from the play.google.com site, each written in Indonesian and accompanied by a rating; the training and testing data are drawn from this set. Such review data are very useful for companies making business decisions. Similar data can be obtained from social media, but social media provides no rating feature for user comments; the goal of this research is that data from platforms such as Twitter or Facebook could likewise be used to quickly gauge overall user satisfaction from ratings inferred from the comments. The best F1 score and precision obtained with 5 classes using LSTM and SMOTE were 0.62 and 0.70, and the best F1 score and precision obtained with 3 classes were 0.86 and 0.87.
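The 3-class scheme above is a simple mapping from the 1-5 star scale, and the imbalance handling can be sketched with a naive random oversampler. Note the duplication-based oversampler below is only a stand-in: SMOTE, which the paper uses, synthesises new minority samples by interpolating between neighbours rather than duplicating existing ones.

```python
import random

def rating_to_sentiment(rating):
    """Collapse the 1-5 star scale into the paper's 3-class scheme:
    1-2 -> negative, 3 -> neutral, 4-5 -> positive."""
    if rating <= 2:
        return "negative"
    if rating == 3:
        return "neutral"
    return "positive"

def oversample(samples, labels, seed=0):
    """Naive random oversampling: grow every class to the majority size
    by duplicating randomly chosen members. Returns (sample, label) pairs."""
    rng = random.Random(seed)
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    target = max(len(group) for group in by_class.values())
    out = []
    for y, group in by_class.items():
        grown = group + [rng.choice(group) for _ in range(target - len(group))]
        out.extend((s, y) for s in grown)
    return out
```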


Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 536
Author(s):  
Pasquale Arpaia ◽  
Federica Crauso ◽  
Egidio De Benedetto ◽  
Luigi Duraccio ◽  
Giovanni Improta ◽  
...  

This work addresses the design, development and implementation of a 4.0-based wearable soft transducer for patient-centered vitals telemonitoring. First, the soft transducer measures hypertension-related vitals (heart rate, oxygen saturation and systolic/diastolic pressure) and sends the data to a remote database, which can be easily consulted by both the patient and the physician. In addition, a dedicated deep learning algorithm, based on a long short-term memory (LSTM) autoencoder, was designed, implemented and tested to provide an alert when the patient's vitals exceed certain thresholds, which are automatically personalized for the specific patient. Furthermore, a mobile application (EcO2u) was developed to manage the entire data flow and facilitate use of the data; this application also implements an innovative face-detection algorithm that verifies the identity of the patient. The robustness of the proposed soft transducer was validated experimentally on five individuals, who used the system for 30 days. The experimental results demonstrated an accuracy in anomaly detection greater than 93%, with a true positive rate of more than 94%.
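A personalized alert threshold of the kind described can be sketched as a calibration statistic over the autoencoder's reconstruction errors for that patient; the mean-plus-k-standard-deviations rule below is an illustrative assumption, not the paper's exact method.

```python
import math

def personalised_threshold(calibration_errors, k=3.0):
    """Per-patient alert threshold from reconstruction errors observed
    during a calibration period: mean + k standard deviations.
    (Illustrative rule; k and the calibration scheme are assumptions.)"""
    n = len(calibration_errors)
    mean = sum(calibration_errors) / n
    var = sum((e - mean) ** 2 for e in calibration_errors) / n
    return mean + k * math.sqrt(var)

def is_anomaly(reconstruction_error, threshold):
    """Flag a reading whose reconstruction error exceeds the threshold."""
    return reconstruction_error > threshold
```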


Electronics ◽  
2020 ◽  
Vol 9 (7) ◽  
pp. 1140
Author(s):  
Jeong-Hee Lee ◽  
Jongseok Kang ◽  
We Shim ◽  
Hyun-Sang Chung ◽  
Tae-Eung Sung

Building a pattern detection model with a deep learning algorithm on data collected from manufacturing sites is an effective way for enterprises to support decision-making and assess business feasibility, by providing the results and implications of pattern analysis of the big data generated at those sites. Identifying the threshold of an abnormal pattern requires collaboration between data analysts and manufacturing process experts, which is practically difficult and time-consuming. This paper shows how to derive the abnormal-pattern threshold without manual labelling by process experts, and offers an algorithm that predicts potential future failures in advance using a hybrid Convolutional Neural Network (CNN)-Long Short-Term Memory (LSTM) model together with the Fast Fourier Transform (FFT) technique. We found that abnormal patterns that cannot be detected in the time domain become easier to find after preprocessing the dataset with the FFT. Our study shows that both training loss and test loss converged near zero, with the lowest loss rate compared to existing models such as a plain LSTM. The proposed model and preprocessing method greatly help in understanding the abnormal patterns of unlabeled big data produced at manufacturing sites, and can be a strong foundation for detecting abnormal-pattern thresholds in such data.
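The FFT preprocessing step moves the signal into the frequency domain, where periodic structure that is hard to see in the time domain shows up as spectral peaks. A naive discrete Fourier transform, which an FFT computes efficiently, illustrates the transform (illustrative sketch, not the paper's pipeline):

```python
import cmath

def dft_magnitudes(signal):
    """Naive DFT magnitude spectrum: |X_k| = |sum_t x_t * exp(-2*pi*i*k*t/n)|.
    A pure tone at frequency k produces peaks at bins k and n-k."""
    n = len(signal)
    mags = []
    for k in range(n):
        s = sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
        mags.append(abs(s))
    return mags
```

For example, the length-4 signal [1, 0, -1, 0] (one full cosine cycle) yields peaks at bins 1 and 3 and near-zero energy elsewhere.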

