An Improved Deep Learning Model for Traffic Crash Prediction

2018 ◽  
Vol 2018 ◽  
pp. 1-13 ◽  
Author(s):  
Chunjiao Dong ◽  
Chunfu Shao ◽  
Juan Li ◽  
Zhihua Xiong

Machine-learning technology powers many aspects of modern society. Whereas conventional machine learning techniques are limited in their ability to process natural data in raw form, deep learning allows computational models to learn representations of data with multiple levels of abstraction. In this study, an improved deep learning model is proposed to explore the complex interactions among roadways, traffic, environmental elements, and traffic crashes. The proposed model includes two modules: an unsupervised feature learning module that identifies the functional network between the explanatory variables and the feature representations, and a supervised fine-tuning module that performs traffic crash prediction. To address unobserved heterogeneity in traffic crash prediction, a multivariate negative binomial (MVNB) model is embedded into the supervised fine-tuning module as a regression layer. The proposed model was applied to a dataset collected from Knox County, Tennessee, to validate its performance. The results indicate that the feature learning module identifies relational information between the explanatory variables and the feature representations, which reduces the dimensionality of the input while preserving the original information. The proposed model, which includes the MVNB regression layer in the supervised fine-tuning module, better accounts for differential distribution patterns in traffic crashes across injury severities and provides superior traffic crash predictions. The findings suggest that the proposed model is a superior alternative for traffic crash prediction: the average prediction accuracy, measured by RMSD, is improved by 84.58% and 158.27% compared with the deep learning model without the regression layer and the SVM model, respectively.
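As a rough illustration of the two-module design, the sketch below pretrains an autoencoder on the explanatory variables and then fine-tunes the encoder with a count-regression head. The layer sizes, the assumed 20 explanatory variables, and the softplus/Poisson stand-in for the MVNB likelihood are all illustrative assumptions, not the paper's actual configuration.

```python
# Hypothetical sketch: unsupervised feature learning (autoencoder) followed by
# supervised fine-tuning with a count-regression head. The MVNB regression
# layer from the paper is approximated here by a Poisson loss.
import tensorflow as tf
from tensorflow.keras import layers, models

n_features = 20          # assumed number of explanatory variables
encoding_dim = 8         # assumed size of the learned feature representation

# --- Module 1: unsupervised feature learning (autoencoder) ---
inputs = layers.Input(shape=(n_features,))
encoded = layers.Dense(encoding_dim, activation="relu")(inputs)
decoded = layers.Dense(n_features, activation="linear")(encoded)
autoencoder = models.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(X_unlabeled, X_unlabeled, epochs=50, batch_size=32)

# --- Module 2: supervised fine-tuning with a count-regression layer ---
# Reuses the pretrained encoder weights because the graph is shared.
counts = layers.Dense(3, activation="softplus")(encoded)  # e.g. crash counts per severity level
crash_model = models.Model(inputs, counts)
crash_model.compile(optimizer="adam", loss="poisson")     # stand-in for the MVNB likelihood
# crash_model.fit(X_labeled, y_counts, epochs=100, batch_size=32)
```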

Electronics ◽  
2021 ◽  
Vol 10 (16) ◽  
pp. 1996
Author(s):  
Junghoon Park ◽  
Il-Youp Kwak ◽  
Changwon Lim

The SARS-CoV-2 virus has spread worldwide, and the World Health Organization has declared COVID-19 a pandemic, proclaiming that the entire world must overcome it together. Chest X-ray and computed tomography datasets of individuals with COVID-19 remain limited, which can lower the performance of deep learning models. In this study, we developed a model for the diagnosis of COVID-19 by solving the classification problem using a self-supervised learning technique with a convolution attention module. Self-supervised learning with a U-shaped convolutional neural network combined with a convolutional block attention module (CBAM), trained on over 100,000 chest X-ray images using the structural similarity (SSIM) index, captures image representations extremely well. The proposed system consists of fine-tuning the encoder weights after the self-supervised pretext task, interpreting the chest X-ray representation in the encoder using convolutional layers, and diagnosing the chest X-ray image with the classification model. Additionally, incorporating the CBAM further improves the average accuracy to 98.6%, outperforming the baseline model (97.8%) by 0.8 percentage points. The proposed model classifies the three classes of normal, pneumonia, and COVID-19 extremely accurately, with other metrics such as specificity and sensitivity similar to the accuracy. The average area under the curve (AUC) is 0.994 for the COVID-19 class, indicating that the proposed model exhibits outstanding classification performance.
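For readers unfamiliar with CBAM, the following is a minimal PyTorch sketch of the attention module itself, assuming common CBAM defaults (reduction ratio 16, 7x7 spatial kernel); these values are assumptions, not the configuration reported by the authors.

```python
# A minimal CBAM (convolutional block attention module) sketch: channel
# attention followed by spatial attention, each applied multiplicatively.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        # Channel attention: shared MLP over global average- and max-pooled features
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # Spatial attention: convolution over channel-wise average and max maps
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        channel_att = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        x = x * channel_att
        spatial_in = torch.cat([x.mean(dim=1, keepdim=True),
                                x.amax(dim=1, keepdim=True)], dim=1)
        spatial_att = torch.sigmoid(self.spatial(spatial_in))
        return x * spatial_att

# Example: attach CBAM to a feature map from a U-shaped encoder
feats = torch.randn(2, 64, 56, 56)
print(CBAM(64)(feats).shape)  # torch.Size([2, 64, 56, 56])
```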


Stroke ◽  
2021 ◽  
Vol 52 (Suppl_1) ◽  
Author(s):  
Yannan Yu ◽  
Soren Christensen ◽  
Yuan Xie ◽  
Enhao Gong ◽  
Maarten G Lansberg ◽  
...  

Objective: Ischemic core prediction from CT perfusion (CTP) remains inaccurate compared with the gold standard, diffusion-weighted imaging (DWI). We evaluated whether a deep learning model trained to predict the DWI lesion from MR perfusion (MRP) could facilitate ischemic core prediction on CTP. Method: Using the multi-center CRISP cohort of acute ischemic stroke patients with CTP before thrombectomy, we included patients with major reperfusion (TICI score ≥2b), adequate image quality, and follow-up MRI at 3-7 days. Perfusion parameters including Tmax, mean transit time, cerebral blood flow (CBF), and cerebral blood volume were reconstructed with RAPID software. Core lab experts outlined the stroke lesion on the follow-up MRI. A previously trained MRI model from a separate group of patients, which used MRP parameters as input and the RAPID ischemic core on DWI as ground truth, served as the starting point. We fine-tuned this model using CTP parameters as input and follow-up MRI as ground truth. Another model was trained from scratch with only CTP data, and 5-fold cross-validation was used. Performance of the models was compared with the ischemic core (rCBF≤30%) from RAPID software for identifying the presence of a large infarct (volume >70 or >100 ml). Results: 94 patients in the CRISP trial met the inclusion criteria (mean age 67±15 years, 52% male, median baseline NIHSS 18, median 90-day mRS 2). Without fine-tuning, the MRI model had an agreement of 73% for infarcts >70 ml and 69% for >100 ml; the MRI model fine-tuned on CT improved the agreement to 77% and 73%; the CT model trained from scratch had agreements of 73% and 71%. All of the deep learning models outperformed the rCBF segmentation from RAPID, which had agreements of 51% and 64%. See Table and Figure. Conclusions: It is feasible to apply an MRP-based deep learning model to CT, and fine-tuning with CTP data further improves the predictions. All deep learning models predict the stroke lesion after major recanalization better than thresholding approaches based on rCBF.
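The transfer step described here (reusing MRP-trained weights and continuing training on CTP inputs) follows a standard fine-tuning recipe. A hedged sketch is shown below; the hypothetical `UNet` constructor, checkpoint path, loss, and hyperparameters are placeholders, since the abstract does not describe the architecture in detail.

```python
# Hypothetical fine-tuning sketch: reuse weights from a perfusion-to-lesion
# model trained on MRP and continue training on CTP parameter maps
# (Tmax, MTT, CBF, CBV) with follow-up MRI lesion masks as ground truth.
import torch
import torch.nn as nn

def fine_tune(model: nn.Module, ctp_loader, epochs: int = 20, lr: float = 1e-4):
    """Continue training a pretrained perfusion model on CTP data."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)  # small LR for fine-tuning
    loss_fn = nn.BCEWithLogitsLoss()                          # voxel-wise lesion vs. background
    model.train()
    for _ in range(epochs):
        for ctp_maps, lesion_mask in ctp_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(ctp_maps), lesion_mask)
            loss.backward()
            optimizer.step()
    return model

# model = UNet(in_channels=4, out_channels=1)             # hypothetical: 4 perfusion maps in, lesion map out
# model.load_state_dict(torch.load("mrp_pretrained.pt"))  # hypothetical checkpoint from the MRP model
# model = fine_tune(model, ctp_loader)
```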


2020 ◽  
Vol 12 (12) ◽  
pp. 5074
Author(s):  
Jiyoung Woo ◽  
Jaeseok Yun

Spam posts in web forum discussions cause user inconvenience and lower the value of the web forum as an open source of user opinion. Because the importance of a web post is evaluated in terms of the number of involved authors, such noise distorts opinion analysis by adding unnecessary data. In this work, an automatic detection model for spam posts in web forums using both conventional machine learning and deep learning is proposed. To automatically differentiate between normal posts and spam, evaluators were asked to label spam posts in advance. To construct the machine learning-based model, linguistic text features were extracted from the posted content using text mining techniques, and supervised learning was performed to distinguish content noise from normal posts. For the deep learning model, raw text both including and excluding special characters was utilized. A comparative analysis of deep neural networks was also performed using two recurrent neural network (RNN) models, the simple RNN and the long short-term memory (LSTM) network. Furthermore, the proposed model was applied to two web forums. The experimental results indicate that the deep learning model affords significant improvements over the accuracy of conventional machine learning based on text features. The accuracy of the proposed model using LSTM reaches 98.56%, and the precision and recall of the noise class reach 99% and 99.53%, respectively.
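A minimal sketch of the LSTM variant is given below, assuming word-index sequences as input. The vocabulary size, sequence handling, and layer widths are illustrative assumptions, not values reported in the paper.

```python
# Hedged sketch: raw post text tokenized to integer sequences, embedded, and
# classified as spam vs. normal by a single-layer LSTM.
import tensorflow as tf
from tensorflow.keras import layers, models

vocab_size = 20000                    # assumed tokenizer vocabulary

model = models.Sequential([
    layers.Embedding(vocab_size, 128),            # raw text tokens -> dense vectors
    layers.LSTM(64),                              # sequence encoder
    layers.Dense(1, activation="sigmoid"),        # spam vs. normal post
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(X_train_sequences, y_train, validation_split=0.1, epochs=5)
```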


2021 ◽  
Vol 7 ◽  
pp. e551
Author(s):  
Nihad Karim Chowdhury ◽  
Muhammad Ashad Kabir ◽  
Md. Muhtadir Rahman ◽  
Noortaz Rezoana

The goal of this research is to develop and implement a highly effective deep learning model for detecting COVID-19. To achieve this goal, we propose an ensemble of Convolutional Neural Networks (CNNs) based on EfficientNet, named ECOVNet, to detect COVID-19 from chest X-rays. To make the proposed model more robust, we use one of the largest open-access chest X-ray datasets, COVIDx, containing three classes: COVID-19, normal, and pneumonia. For feature extraction, we apply an effective CNN structure, EfficientNet, with ImageNet pre-trained weights. The generated features are transferred into custom fine-tuned top layers followed by a set of model snapshots. The predictions of the model snapshots (which are created during a single training run) are consolidated through two ensemble strategies, a hard ensemble and a soft ensemble, to enhance classification performance. In addition, a visualization technique is incorporated to highlight areas that distinguish classes, thereby enhancing the understanding of primal components related to COVID-19. The results of our empirical evaluations show that the proposed ECOVNet model outperforms state-of-the-art approaches and significantly improves detection performance, with 100% recall for COVID-19 and an overall accuracy of 96.07%. We believe that ECOVNet can enhance the detection of COVID-19 disease and thus underpin a fully automated and efficacious COVID-19 detection system.
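The consolidation step over model snapshots can be illustrated with a short NumPy sketch, assuming `snapshot_probs` holds each snapshot's softmax outputs with shape (n_snapshots, n_samples, 3) for the three classes; the array layout and the random example data are assumptions for illustration only.

```python
# Hedged sketch of the hard- and soft-ensemble strategies over model snapshots.
import numpy as np

def soft_ensemble(snapshot_probs: np.ndarray) -> np.ndarray:
    """Average class probabilities across snapshots, then take the arg-max class."""
    return snapshot_probs.mean(axis=0).argmax(axis=1)

def hard_ensemble(snapshot_probs: np.ndarray) -> np.ndarray:
    """Majority vote over each snapshot's predicted class labels."""
    votes = snapshot_probs.argmax(axis=2)                      # (n_snapshots, n_samples)
    n_classes = snapshot_probs.shape[2]
    return np.array([np.bincount(v, minlength=n_classes).argmax() for v in votes.T])

probs = np.random.dirichlet(np.ones(3), size=(5, 10))          # 5 snapshots, 10 images, 3 classes
print(soft_ensemble(probs), hard_ensemble(probs))
```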


Author(s):  
Surenthiran Krishnan ◽  
Pritheega Magalingam ◽  
Roslina Ibrahim

This paper proposes a new hybrid deep learning model for heart disease prediction using a recurrent neural network (RNN) that combines multiple gated recurrent units (GRU), long short-term memory (LSTM) units, and the Adam optimizer. The proposed model achieves an outstanding accuracy of 98.6876%, the highest among existing RNN-based models. The model was developed in Python 3.7 by integrating the RNN with multiple GRUs in Keras, with TensorFlow as the backend for the deep learning process, supported by various Python libraries. Recent existing models using an RNN have reached an accuracy of 98.23%, and a deep neural network (DNN) has reached 98.5%. Common drawbacks of the existing models are low accuracy due to the complex build-up of the neural network, a high number of redundant neurons in the neural network model, and the imbalanced Cleveland dataset. Experiments were conducted with various customized models, and the results showed that the proposed model using an RNN and multiple GRUs with the synthetic minority oversampling technique (SMOTE) reached the best performance level. This is the highest accuracy reported for an RNN on the Cleveland dataset and is promising for making early heart disease predictions for patients.
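The combination of SMOTE oversampling with stacked GRU layers can be sketched as below. The 13 Cleveland attributes, the reshaping of tabular features into a one-channel sequence, and the layer sizes are assumptions; the paper's exact architecture is only approximated.

```python
# Illustrative sketch: balance the Cleveland data with SMOTE, then train a
# stacked-GRU Keras model with the Adam optimizer.
import tensorflow as tf
from imblearn.over_sampling import SMOTE
from tensorflow.keras import layers, models

n_features = 13                                   # Cleveland dataset attributes (assumed)
# X_res, y_res = SMOTE(random_state=42).fit_resample(X, y)   # balance the classes
# X_res = X_res.reshape(-1, n_features, 1)                   # GRU expects (timesteps, features)

model = models.Sequential([
    layers.Input(shape=(n_features, 1)),
    layers.GRU(64, return_sequences=True),        # first gated recurrent unit layer
    layers.GRU(32),                               # second GRU layer
    layers.Dense(1, activation="sigmoid"),        # heart-disease presence
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(X_res, y_res, epochs=100, batch_size=16, validation_split=0.2)
```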


Sensors ◽  
2021 ◽  
Vol 22 (1) ◽  
pp. 283
Author(s):  
Xiaoyuan Yu ◽  
Suigu Tang ◽  
Chak Fong Cheang ◽  
Hon Ho Yu ◽  
I Cheong Choi

The automatic analysis of endoscopic images to assist endoscopists in accurately identifying the types and locations of esophageal lesions remains a challenge. In this paper, we propose a novel multi-task deep learning model for automatic diagnosis, which does not simply replace the role of endoscopists in decision making, because endoscopists are expected to correct false results predicted by the diagnosis system when additional supporting information is provided. To help endoscopists improve diagnostic accuracy in identifying the types of lesions, an image retrieval module is added to the classification task to provide an additional confidence level for the predicted types of esophageal lesions. In addition, a mutual attention module is added to the segmentation task to improve its performance in determining the locations of esophageal lesions. The proposed model is evaluated and compared with other deep learning models using a dataset of 1003 endoscopic images, including 290 esophageal cancer, 473 esophagitis, and 240 normal images. The experimental results show the promising performance of our model, with a high accuracy of 96.76% for classification and a Dice coefficient of 82.47% for segmentation. Consequently, the proposed multi-task deep learning model can be an effective tool to help endoscopists judge esophageal lesions.
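The multi-task idea of a shared encoder feeding separate classification and segmentation heads can be sketched as follows. The ResNet-18 backbone, the coarse upsampling decoder, and the class count are simplifying assumptions; the paper's image-retrieval and mutual-attention modules are not reproduced here.

```python
# Hedged sketch: one shared encoder, a classification branch for lesion type,
# and a segmentation branch for lesion location.
import torch
import torch.nn as nn
import torchvision

class MultiTaskEsophagusNet(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])  # shared feature extractor
        self.classifier = nn.Sequential(                               # cancer / esophagitis / normal
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(512, n_classes))
        self.seg_head = nn.Sequential(                                 # coarse lesion mask
            nn.Conv2d(512, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 1),
            nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False))

    def forward(self, x):
        feats = self.encoder(x)
        return self.classifier(feats), self.seg_head(feats)

model = MultiTaskEsophagusNet()
logits, mask = model(torch.randn(1, 3, 224, 224))
print(logits.shape, mask.shape)   # torch.Size([1, 3]) torch.Size([1, 1, 224, 224])
```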


Author(s):  
Xiangbin Liu ◽  
Jiesheng He ◽  
Liping Song ◽  
Shuai Liu ◽  
Gautam Srivastava

With the rapid development of Artificial Intelligence (AI), deep learning has increasingly become a research hotspot in various fields, such as medical image classification. Traditional deep learning models use bilinear interpolation when handling classification tasks on multi-size medical image datasets, which causes a loss of image information and in turn degrades classification performance. In response to this problem, this work proposes an adaptive-size deep learning model. First, according to the characteristics of the multi-size medical image dataset, an optimal size set module is proposed in combination with the unpooling process. Next, an adaptive deep learning model module is proposed based on an existing deep learning model. The model is then fused with a size fine-tuning module for processing multi-size medical images to obtain the adaptive-size deep learning model. Finally, the proposed model is applied to a pneumonia CT medical image dataset. The experiments show that the model is strongly robust and that its classification performance improves by about 4% compared with traditional algorithms.
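To illustrate the general problem of feeding variably sized images into a fixed classifier without interpolation, the sketch below uses adaptive pooling; this is a generic stand-in under stated assumptions (single-channel CT slices, two classes), not the paper's optimal size set or unpooling modules.

```python
# Minimal sketch: adaptive pooling lets the same classifier head accept
# convolutional features from inputs of any spatial size.
import torch
import torch.nn as nn

class AdaptiveSizeClassifier(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True))
        self.pool = nn.AdaptiveAvgPool2d((4, 4))        # maps any input size to a 4x4 grid
        self.head = nn.Linear(64 * 4 * 4, n_classes)    # pneumonia vs. normal (assumed classes)

    def forward(self, x):
        return self.head(self.pool(self.features(x)).flatten(1))

model = AdaptiveSizeClassifier()
for size in (224, 320, 512):                            # different CT slice sizes
    print(model(torch.randn(1, 1, size, size)).shape)   # torch.Size([1, 2]) each time
```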


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Qichao Luo ◽  
Shenglong Mo ◽  
Yunfei Xue ◽  
Xiangzhou Zhang ◽  
Yuliang Gu ◽  
...  

Background: Drug-drug interaction (DDI) is a serious public health issue. The L1000 database of the LINCS project has collected millions of genome-wide expression profiles induced by 20,000 small-molecule compounds across 72 cell lines. Whether this unified and comprehensive transcriptome data resource can be used to build a better DDI prediction model remains unclear. Therefore, we developed and validated a novel deep learning model for predicting DDI using 89,970 known DDIs extracted from the DrugBank database (version 5.1.4). Results: The proposed model consists of a graph convolutional autoencoder network (GCAN) for embedding drug-induced transcriptome data from the L1000 database of the LINCS project, and a long short-term memory (LSTM) network for DDI prediction. Comparative evaluation against various machine learning methods demonstrated the superior performance of the proposed model for DDI prediction. Many of our predicted DDIs were subsequently confirmed in the latest DrugBank database (version 5.1.7). In a case study, we predicted drugs interacting with sulfonylureas to cause hypoglycemia and drugs interacting with metformin to cause lactic acidosis, and showed that both affect proteins involved in the relevant metabolic mechanisms in vivo. Conclusions: The proposed deep learning model can accelerate the discovery of new DDIs and can support future clinical research toward safer and more effective drug co-prescription.
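The prediction stage can be pictured as feeding two drug embeddings, such as those produced by the GCAN from L1000 transcriptome profiles, as a length-2 sequence to an LSTM that scores the pair. The sketch below is a hedged approximation; the embedding dimension, hidden size, and sigmoid output are assumptions, and the GCAN itself is not reproduced.

```python
# Hedged sketch of the DDI prediction head: pair two drug embeddings and
# score the pair with an LSTM followed by a sigmoid output.
import torch
import torch.nn as nn

class DDILSTM(nn.Module):
    def __init__(self, emb_dim: int = 128, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)                 # probability of a drug-drug interaction

    def forward(self, drug_a: torch.Tensor, drug_b: torch.Tensor) -> torch.Tensor:
        pair = torch.stack([drug_a, drug_b], dim=1)     # (batch, 2, emb_dim) "sequence" of two drugs
        _, (h, _) = self.lstm(pair)
        return torch.sigmoid(self.out(h[-1])).squeeze(-1)

model = DDILSTM()
a, b = torch.randn(8, 128), torch.randn(8, 128)         # assumed GCAN embeddings for 8 drug pairs
print(model(a, b).shape)                                # torch.Size([8])
```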


Electronics ◽  
2021 ◽  
Vol 10 (7) ◽  
pp. 850
Author(s):  
Pablo Zinemanas ◽  
Martín Rocamora ◽  
Marius Miron ◽  
Frederic Font ◽  
Xavier Serra

Deep learning models have improved cutting-edge technologies in many research areas, but their black-box structure makes it difficult to understand their inner workings and the rationale behind their predictions. This may lead to unintended effects, such as susceptibility to adversarial attacks or the reinforcement of biases. Despite the increasing interest in developing deep learning models that explain their decisions, research in the audio domain is still lacking. To reduce this gap, we propose a novel interpretable deep learning model for automatic sound classification that explains its predictions based on the similarity of the input to a set of learned prototypes in a latent space. We leverage domain knowledge by designing a frequency-dependent similarity measure and by considering different time-frequency resolutions in the feature space. The proposed model achieves results comparable to those of state-of-the-art methods in three sound classification tasks involving speech, music, and environmental audio. In addition, we present two automatic methods for pruning the proposed model that exploit its interpretability. Our system is open source and is accompanied by a web application for manual editing of the model, which allows for a human-in-the-loop debugging approach.
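The core mechanism of prototype-based interpretability can be sketched as follows: encode the input, compute distances to learned prototypes, and map the resulting similarities to class logits. The encoder, prototype count, and plain Euclidean similarity are assumptions; the paper's frequency-dependent similarity measure is not reproduced here.

```python
# Hedged sketch of classification by similarity to learned prototypes in a
# latent space; the similarity vector doubles as a per-prototype explanation.
import torch
import torch.nn as nn

class PrototypeClassifier(nn.Module):
    def __init__(self, encoder: nn.Module, latent_dim: int, n_prototypes: int, n_classes: int):
        super().__init__()
        self.encoder = encoder
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, latent_dim))
        self.out = nn.Linear(n_prototypes, n_classes)

    def forward(self, x):
        z = self.encoder(x)                                    # (batch, latent_dim)
        d = torch.cdist(z, self.prototypes)                    # distance to each prototype
        sim = torch.exp(-d)                                    # closer prototype -> higher similarity
        return self.out(sim), sim                              # class logits + explanation weights

encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 100, 32))  # stand-in for a spectrogram encoder
model = PrototypeClassifier(encoder, latent_dim=32, n_prototypes=10, n_classes=5)
logits, sims = model(torch.randn(4, 64, 100))                   # 4 spectrogram patches (assumed shape)
print(logits.shape, sims.shape)                                 # torch.Size([4, 5]) torch.Size([4, 10])
```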

