Artificial intelligence to predict in-hospital mortality using novel anatomical injury score

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Wu Seong Kang ◽  
Heewon Chung ◽  
Hoon Ko ◽  
Nan Yeol Kim ◽  
Do Wan Kim ◽  
...  

Abstract The aim of this study was to develop an artificial intelligence (AI) algorithm, based on a deep learning model, to predict mortality using the abbreviated injury score (AIS). The performance of the conventional anatomic injury severity score (ISS) system in predicting in-hospital mortality remains limited. AIS data of 42,933 patients registered in the Korean trauma data bank from four Korean regional trauma centers were enrolled. After excluding patients who were younger than 19 years old and those who died within six hours of arrival, we included 37,762 patients, of whom 36,493 (96.6%) survived and 1269 (3.4%) died. To enhance the AI model performance, we reduced the AIS codes to 46 input values by organizing them according to organ location (Region-46). The total AIS and the six anatomic-region categories of the ISS system (Region-6) were used to compare the input features. The AI models were compared with the conventional ISS and new ISS (NISS) systems. We evaluated the performance of the 12 combinations of features and models. The highest accuracy (85.05%) corresponded to Region-46 with DNN, followed by Region-6 with DNN (83.62%), AIS with DNN (81.27%), ISS-16 (80.50%), NISS-16 (79.18%), NISS-25 (77.09%), and ISS-25 (70.82%). The highest AUROC (0.9084) corresponded to Region-46 with DNN, followed by Region-6 with DNN (0.9013), AIS with DNN (0.8819), ISS (0.8709), and NISS (0.8681). The proposed deep learning scheme with feature combination exhibited higher accuracy metrics, such as balanced accuracy and AUROC, than the conventional ISS and NISS systems. We expect this trial to serve as a cornerstone for more complex combination models.
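A minimal sketch (not the authors' code) of a feed-forward DNN of the kind described: it takes the 46 region-grouped AIS features ("Region-46") and outputs an in-hospital mortality probability. The hidden-layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MortalityDNN(nn.Module):
    def __init__(self, n_features=46):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 1),  # single logit for in-hospital mortality
        )

    def forward(self, x):
        # sigmoid turns the logit into a mortality probability in [0, 1]
        return torch.sigmoid(self.net(x))

model = MortalityDNN()
batch = torch.rand(8, 46)   # 8 patients, 46 AIS region features each
probs = model(batch)
print(probs.shape)          # torch.Size([8, 1])
```

Such a model would be trained with a binary cross-entropy loss against the survived/died labels.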

2021 ◽  
Vol 11 (4) ◽  
pp. 290
Author(s):  
Luca Pasquini ◽  
Antonio Napolitano ◽  
Emanuela Tagliente ◽  
Francesco Dellepiane ◽  
Martina Lucignani ◽  
...  

Isocitrate dehydrogenase (IDH) mutant and wildtype glioblastoma multiforme (GBM) often show overlapping features on magnetic resonance imaging (MRI), representing a diagnostic challenge. Deep learning has shown promising results for IDH identification in mixed low/high grade glioma populations; however, a GBM-specific model is still lacking in the literature. Our aim was to develop a GBM-tailored deep-learning model for IDH prediction by applying convolutional neural networks (CNN) to multiparametric MRI. We selected 100 adult patients with pathologically demonstrated WHO grade IV gliomas and IDH testing. MRI sequences included: MPRAGE, T1, T2, FLAIR, rCBV and ADC. The model consisted of a 4-block 2D CNN, applied to each MRI sequence. The probability of IDH mutation was obtained from the last dense layer with a softmax activation function. Model performance was evaluated in the test cohort using categorical cross-entropy loss (CCEL) and accuracy. Calculated performance was: rCBV (accuracy 83%, CCEL 0.64), T1 (accuracy 77%, CCEL 1.4), FLAIR (accuracy 77%, CCEL 1.98), T2 (accuracy 67%, CCEL 2.41), MPRAGE (accuracy 66%, CCEL 2.55). Lower performance was achieved on ADC maps. We present a GBM-specific deep-learning model for IDH mutation prediction, with a maximal accuracy of 83% on rCBV maps. The highest predictivity, achieved on perfusion images, possibly reflects the known link between IDH and neoangiogenesis through the hypoxia inducible factor.
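An illustrative sketch of a 4-block 2D CNN of the kind described, applied to one MRI sequence; the two-way softmax gives P(IDH-mutant) vs. P(wildtype). Channel counts and the input size are assumptions, not the authors' values.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    # one conv block: convolution, batch norm, ReLU, 2x2 downsampling
    return nn.Sequential(
        nn.Conv2d(cin, cout, kernel_size=3, padding=1),
        nn.BatchNorm2d(cout), nn.ReLU(), nn.MaxPool2d(2))

class IDHNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(1, 16), conv_block(16, 32),
            conv_block(32, 64), conv_block(64, 128))
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, 2))  # last dense layer: mutant vs. wildtype

    def forward(self, x):
        # softmax over the two classes gives the IDH mutation probability
        return torch.softmax(self.head(self.features(x)), dim=1)

p = IDHNet()(torch.rand(1, 1, 128, 128))  # one single-channel MRI slice
print(p.shape)   # torch.Size([1, 2]); rows sum to 1
```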


2021 ◽  
Author(s):  
Yew Kee Wong

Deep learning is a type of machine learning that trains a computer to perform human-like tasks, such as recognizing speech, identifying images or making predictions. Instead of organizing data to run through predefined equations, deep learning sets up basic parameters about the data and trains the computer to learn on its own by recognizing patterns across many layers of processing. This paper aims to illustrate some of the different deep learning algorithms and methods that can be applied to artificial intelligence analysis, as well as the opportunities offered by their application in various decision-making domains.


2019 ◽  
Vol 39 (5) ◽  
pp. 47-59 ◽  
Author(s):  
Sugeerth Murugesan ◽  
Sana Malik ◽  
Fan Du ◽  
Eunyee Koh ◽  
Tuan Manh Lai

BMJ Open ◽  
2020 ◽  
Vol 10 (9) ◽  
pp. e036423
Author(s):  
Zhigang Song ◽  
Chunkai Yu ◽  
Shuangmei Zou ◽  
Wenmiao Wang ◽  
Yong Huang ◽  
...  

Objectives: The microscopic evaluation of slides has been gradually moving towards fully digital workflows in recent years, opening the possibility of computer-aided diagnosis. It is worthwhile to know the similarities between deep learning models and pathologists before putting them into practical scenarios. The simple criteria of colorectal adenoma diagnosis make it a perfect testbed for this study. Design: The deep learning model was trained on 177 accurately labelled training slides (156 with adenoma). The detailed labelling was performed on a self-developed, iPad-based annotation system. We built the model on DeepLab v2 with ResNet-34. Model performance was tested on 194 test slides and compared with five pathologists. Furthermore, the generalisation ability of the model was tested on an extra 168 slides (111 with adenoma) collected from two other hospitals. Results: The deep learning model achieved an area under the curve of 0.92 and obtained a slide-level accuracy of over 90% on slides from the two other hospitals. The performance was on par with that of experienced pathologists, exceeding the average pathologist. By investigating the feature maps and the cases misdiagnosed by the model, we found concordance between the deep learning model and pathologists in the diagnostic thinking process. Conclusions: The deep learning model for colorectal adenoma diagnosis behaves quite similarly to pathologists. It is on par with pathologists’ performance, makes similar mistakes and learns rational reasoning logics. Meanwhile, it obtains high accuracy on slides collected from different hospitals with significant staining configuration variations.
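A hypothetical aggregation step, not detailed in the abstract: one common way to turn a pixel-level adenoma probability map from a segmentation model (such as the DeepLab v2 used here) into a slide-level call is to threshold the predicted adenoma area. The thresholds below are illustrative assumptions.

```python
import numpy as np

def slide_level_call(prob_map, pixel_thresh=0.5, area_frac=0.01):
    """Label a slide 'adenoma' if enough pixels exceed pixel_thresh."""
    mask = prob_map >= pixel_thresh          # pixel-level positive calls
    return bool(mask.mean() >= area_frac)    # fraction of positive area

rng = np.random.default_rng(0)
benign = rng.uniform(0.0, 0.3, size=(64, 64))  # uniformly low probabilities
print(slide_level_call(benign))                # False: no region flagged
```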


2020 ◽  
Vol 10 (4) ◽  
pp. 211 ◽  
Author(s):  
Yong Joon Suh ◽  
Jaewon Jung ◽  
Bum-Joo Cho

Mammography plays an important role in screening breast cancer among females, and artificial intelligence has enabled the automated detection of diseases on medical images. This study aimed to develop a deep learning model detecting breast cancer in digital mammograms of various densities and to evaluate the model performance in comparison to previous studies. From 1501 subjects who underwent digital mammography between February 2007 and May 2015, craniocaudal and mediolateral view mammograms were included and concatenated for each breast, ultimately producing 3002 merged images. Two convolutional neural networks were trained to detect any malignant lesion on the merged images. The performances were tested using 301 merged images from 284 subjects and compared to a meta-analysis including 12 previous deep learning studies. The mean area under the receiver-operating characteristic curve (AUC) for detecting breast cancer in each merged mammogram was 0.952 ± 0.005 by DenseNet-169 and 0.954 ± 0.020 by EfficientNet-B5. The performance for malignancy detection decreased as breast density increased (density A, mean AUC = 0.984 vs. density D, mean AUC = 0.902 by DenseNet-169). When patients’ age was used as a covariate for malignancy detection, the performance showed little change (mean AUC, 0.953 ± 0.005). The mean sensitivity and specificity of DenseNet-169 (87% and 88%, respectively) surpassed the mean values (81% and 82%, respectively) obtained in the meta-analysis. Deep learning would work efficiently in screening breast cancer in digital mammograms of various densities, and its performance could be maximized in breasts with lower parenchyma density.
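A minimal sketch of the view-merging step: the craniocaudal (CC) and mediolateral (MLO) views of one breast are concatenated into a single merged image before being fed to the CNN. The side-by-side layout and the image size are assumptions; the abstract only says the views were concatenated.

```python
import torch

cc  = torch.rand(1, 512, 512)   # CC view:  (channels, height, width)
mlo = torch.rand(1, 512, 512)   # MLO view of the same breast

# concatenate along the width axis -> one merged image per breast
merged = torch.cat([cc, mlo], dim=2)
print(merged.shape)             # torch.Size([1, 512, 1024])
```

The merged tensor is then a single input sample for a classifier such as DenseNet-169 or EfficientNet-B5.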


Author(s):  
Josh Neudorf ◽  
Shaylyn Kress ◽  
Ron Borowsky

Abstract Although functional connectivity and associated graph theory measures (e.g., centrality: how centrally important a region is to the network) are widely used in brain research, the full extent to which these functional measures are related to the underlying structural connectivity is not yet understood. Graph neural network deep learning methods have not yet been applied for this purpose, and they offer an ideal model architecture for working with connectivity data given their ability to capture and maintain inherent network structure. Here, we applied this model to predict functional connectivity from structural connectivity in a sample of 998 participants from the Human Connectome Project. Our results showed that the graph neural network accounted for 89% of the variance in mean functional connectivity, 56% of the variance in individual-level functional connectivity, 99% of the variance in mean functional centrality, and 81% of the variance in individual-level functional centrality. These results represent an important finding: functional centrality can be robustly predicted from structural connectivity. Regions of particular importance to the model's performance, as determined through lesioning, are discussed; regions with higher centrality have a higher impact on model performance. Future research on models of patient, demographic, or behavioural data can also benefit from this graph neural network method, as it is ideally suited for depicting connectivity and centrality in brain networks. These results set a new benchmark for prediction of functional connectivity from structural connectivity, and models like this may ultimately lead to a way to predict functional connectivity in individuals who are unable to do fMRI tasks (e.g., non-responsive patients).
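A plain-PyTorch sketch of one graph-convolution step of the general kind such a graph neural network uses: each region's features are mixed along the structural connectivity matrix, so every node aggregates information from its structurally connected neighbours. The matrix sizes, the row-normalisation, and the single-layer design are illustrative, not the authors' model.

```python
import torch
import torch.nn as nn

n_regions, n_feat = 10, 8
A = torch.rand(n_regions, n_regions)
A = (A + A.T) / 2                        # symmetric structural connectivity
A_norm = A / A.sum(dim=1, keepdim=True)  # row-normalise edge weights

H = torch.rand(n_regions, n_feat)        # per-region input features
W = nn.Linear(n_feat, n_feat, bias=False)

# one message-passing step: transform features, then mix along edges
H_next = torch.relu(A_norm @ W(H))
print(H_next.shape)                      # torch.Size([10, 8])
```

Stacking several such layers and reading out edge- or node-level targets would give a model that maps structural connectivity to predicted functional connectivity or centrality.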


2021 ◽  
Author(s):  
Nurul Akmar Azman ◽  
Azlinah Mohamed ◽  
Amsyar Mohmad Jamil

Abstract Automation is seen as a potential alternative for improving productivity in the twenty-first century. Invoicing is the essential foundation of accounting record keeping and serves as a critical basis for law-enforcement inspections by auditing agencies and tax authorities. With the rise of artificial intelligence, automated record-keeping systems are becoming more widespread in major organizations, allowing them to perform tasks in real time and with little effort, as well as serving as a decision-making tool. Despite these benefits, many small and medium-sized enterprises (SMEs), particularly in Malaysia, are hesitant to implement such systems. Invoices are mostly processed manually, which is prone to human error and lowers company productivity. Artificial intelligence will further improve automated invoice handling, making it simpler and more efficient for businesses of all sizes, especially small and medium enterprises. This study presents a deep learning approach to record keeping, focusing on invoice recognition through invoice image classification. The deep learning models used in this research include the classic Convolutional Neural Network architecture and its variants VGG-16, VGG-19 and ResNet-50. In addition, the constraints on and expectations of implementing the system in Malaysian SMEs are captured in interview scores. The research highlights a comparison between the deep learning models and the SME perspective, presented in the discussion section. ResNet-50 achieved significantly higher accuracy than the other models on both training and validation data, with 95.90% accuracy on training data and 74.24% on validation data. Future work will look at other suggested deep learning methods and intelligent features for more efficient invoice recognition in small and medium enterprises.
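A hedged sketch (not the authors' code) of the model-comparison step: given each candidate architecture's predictions on held-out invoice images, compute accuracy and pick the best model, as was done for ResNet-50 here. The labels and logits below are toy stand-ins.

```python
import torch

def accuracy(logits, labels):
    # fraction of samples where the argmax class matches the label
    return (logits.argmax(dim=1) == labels).float().mean().item()

labels = torch.tensor([0, 1, 1, 0])  # toy validation labels
results = {
    "VGG-16":    accuracy(torch.tensor([[2., 0.], [0., 1.], [1., 0.], [3., 0.]]), labels),
    "ResNet-50": accuracy(torch.tensor([[2., 0.], [0., 1.], [0., 2.], [3., 0.]]), labels),
}
best = max(results, key=results.get)
print(best)   # ResNet-50
```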


2020 ◽  
pp. 1-10
Author(s):  
Ruijuan Wang ◽  
Wei Zhuo

Intelligent image processing and analysis technology uses a computer to imitate and execute some intellectual functions of the human brain, realizing an image processing system with artificial intelligence; in other words, image processing and analysis amounts to understanding an image. To date, the degree of intelligent, automated analysis and processing has been low: many operations must be done manually, causing human error and inaccurate detection, and costing considerable time and labour. Deep learning methods can extract features from the original image step by step, from the bottom up. Therefore, building on feature-analysis technology, this paper uses a deep learning method to analyse visual images intelligently and automatically. The method only requires sending an image into the system; no manual analysis is needed, and the analysis result for the image is obtained directly. The process is completely intelligent and automatic. First, we improve the deep learning model and use massive image data to choose and optimize its parameters. Results indicate that our method not only automatically derives the semantic information of an image, but also understands the image accurately and improves work efficiency.


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1064
Author(s):  
I Nyoman Kusuma Wardana ◽  
Julian W. Gardner ◽  
Suhaib A. Fahmy

Accurate air quality monitoring requires processing of multi-dimensional, multi-location sensor data, which has previously been handled by centralised machine learning models. These are often unsuitable for resource-constrained edge devices. In this article, we address this challenge by: (1) designing a novel hybrid deep learning model for hourly PM2.5 pollutant prediction; (2) optimising the obtained model for edge devices; and (3) examining model performance on the edge devices in terms of both accuracy and latency. The hybrid deep learning model in this work combines a 1D Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network to predict hourly PM2.5 concentration. The results show that our proposed model outperforms other deep learning models when evaluated by RMSE and MAE. The proposed model was optimised for two edge devices, the Raspberry Pi 3 Model B+ (RPi3B+) and Raspberry Pi 4 Model B (RPi4B). This optimisation reduced the file size to a quarter of the original, with further size reduction achieved by applying different post-training quantisation schemes. In total, 8272 hourly samples were continuously fed to the edge devices, with the RPi4B executing the model twice as fast as the RPi3B+ in all quantisation modes. Full-integer quantisation produced the lowest execution time, with latencies of 2.19 s and 4.73 s for the RPi4B and RPi3B+, respectively.
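An illustrative 1D-CNN + LSTM hybrid of the kind described for hourly PM2.5 forecasting: the convolution extracts local patterns across the input window, and the LSTM models the temporal sequence. Filter counts, hidden sizes, the number of sensor channels, and the 24-hour window are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_sensors=3, window=24):
        super().__init__()
        # 1D convolution over the time axis extracts local features
        self.conv = nn.Sequential(
            nn.Conv1d(n_sensors, 16, kernel_size=3, padding=1), nn.ReLU())
        # LSTM models the resulting feature sequence
        self.lstm = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)
        self.out = nn.Linear(32, 1)   # regress next-hour PM2.5

    def forward(self, x):             # x: (batch, n_sensors, window)
        f = self.conv(x)              # (batch, 16, window)
        seq, _ = self.lstm(f.transpose(1, 2))  # (batch, window, 32)
        return self.out(seq[:, -1])   # predict from the last time step

y = CNNLSTM()(torch.rand(4, 3, 24))   # 4 samples of a 24-hour window
print(y.shape)                        # torch.Size([4, 1])
```

For edge deployment, a trained model of this shape would then be converted and shrunk with post-training quantisation, as the article describes.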


2019 ◽  
Vol 15 ◽  
pp. P280-P281
Author(s):  
Shangran Qiu ◽  
Megan S. Heydari ◽  
Matthew I. Miller ◽  
Prajakta S. Joshi ◽  
Benjamin C. Wong ◽  
...  
