A fast and fully-automated deep-learning approach for accurate hemorrhage segmentation and volume quantification in non-contrast whole-head CT

2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Ali Arab ◽  
Betty Chinda ◽  
George Medvedev ◽  
William Siu ◽  
Hui Guo ◽  
...  

Abstract This project aimed to develop and evaluate a fast and fully-automated deep-learning method applying convolutional neural networks with deep supervision (CNN-DS) for accurate hematoma segmentation and volume quantification in computed tomography (CT) scans. Non-contrast whole-head CT scans of 55 patients with hemorrhagic stroke were used. Individual scans were standardized to 64 axial slices of 128 × 128 voxels. Each voxel was annotated independently by experienced raters, generating a binary label of hematoma versus normal brain tissue based on majority voting. The dataset was split randomly into training (n = 45) and testing (n = 10) subsets. A CNN-DS model was built on the training data and examined on the testing data. Performance of the CNN-DS solution was compared with three previously established methods. The CNN-DS achieved a Dice coefficient of 0.84 ± 0.06 and a recall of 0.83 ± 0.07, higher than patch-wise U-Net (< 0.76). The CNN-DS average running time of 0.74 ± 0.07 s was faster than PItcHPERFeCT (> 1412 s) and slice-based U-Net (> 12 s). Comparable interrater agreement was observed for “method–human” and “human–human” ratings (Cohen’s kappa coefficients > 0.82). The fully automated CNN-DS approach demonstrated expert-level accuracy in fast segmentation and quantification of hematoma, substantially improving over previous methods. Further research is warranted to test the CNN-DS solution as a software tool in clinical settings for effective stroke management.
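
As a rough illustration of the evaluation metric and the deep-supervision idea described above, the sketch below computes a soft Dice score and a loss that sums weighted Dice terms over a main segmentation output and auxiliary decoder outputs. It is not the authors' implementation; the auxiliary weights and the trilinear upsampling are assumptions.

```python
import torch
import torch.nn.functional as F

def dice_coefficient(pred_mask, true_mask, eps=1e-6):
    """Soft Dice overlap between a predicted probability map and the reference mask."""
    pred = pred_mask.float().flatten()
    true = true_mask.float().flatten()
    intersection = (pred * true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)

def deep_supervision_loss(outputs, target, weights=(1.0, 0.5, 0.25)):
    """Dice loss summed over the main output (first) and auxiliary outputs taken
    from intermediate decoder levels; auxiliary maps are upsampled to the target
    resolution before comparison. Tensors are assumed to be 5D (N, C, D, H, W)."""
    total = 0.0
    for out, w in zip(outputs, weights):
        if out.shape[-3:] != target.shape[-3:]:
            out = F.interpolate(out, size=tuple(target.shape[-3:]),
                                mode="trilinear", align_corners=False)
        total = total + w * (1.0 - dice_coefficient(out, target))
    return total
```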

Mekatronika ◽  
2019 ◽  
Vol 1 (1) ◽  
pp. 80-86
Author(s):  
Ooi Peng Toon ◽  
Muhammad Aizzat Zakaria ◽  
Ahmad Fakhri Ab. Nasir ◽  
Anwar P.P. Abdul Majeed ◽  
Chung Young Tan ◽  
...  

Solanum lycopersicum, commonly known as tomato, originated in South America and is now grown in many tropical countries. Its nutritional value has made it an increasingly popular food in Malaysia as local lifestyles have shifted toward healthier eating. Because export value and production have risen over the past few years, a large labour force is required for fruit picking, and farmers are increasingly turning to automation to address labour shortages and high costs. To pick the correct fruit within a cluster, a harvesting robot requires guidance so that it can detect a fruit accurately. This study proposes a classification pipeline using deep learning, specifically convolutional neural networks: an image is first classified as tomato or not tomato and then, if a tomato, as ripe or unripe. Two classification networks are therefore trained, one for tomato versus not tomato and one for ripe versus unripe tomato. Each network uses 600 training images and 33 testing images. The accuracies obtained from network 1 (tomato or not tomato) and network 2 (ripe or unripe tomato) are 76.366% and 98.788%, respectively.
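
A minimal sketch of the two-stage decision described above, assuming two already-trained binary CNNs; `tomato_net` and `ripeness_net` are hypothetical placeholders, and the 0.5 threshold is an assumption rather than a value reported in the paper.

```python
import torch

def classify_crop(image, tomato_net, ripeness_net, threshold=0.5):
    """Two-stage decision: network 1 decides tomato vs. not tomato, and
    network 2 is consulted only for images accepted as tomatoes.
    `image` is a (1, 3, H, W) tensor; both networks are assumed to end in a
    single sigmoid logit."""
    with torch.no_grad():
        p_tomato = torch.sigmoid(tomato_net(image)).item()
        if p_tomato < threshold:
            return "not tomato"
        p_ripe = torch.sigmoid(ripeness_net(image)).item()
        return "ripe tomato" if p_ripe >= threshold else "unripe tomato"
```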


PeerJ ◽  
2019 ◽  
Vol 7 ◽  
pp. e6405
Author(s):  
Sheng Zou ◽  
Paul Gader ◽  
Alina Zare

Tree species classification using hyperspectral imagery is a challenging task due to the high spectral similarity between species and the large intra-species variability. This paper proposes a solution using the Multiple Instance Adaptive Cosine Estimator (MI-ACE) algorithm. MI-ACE estimates a discriminative target signature to differentiate between a pair of tree species while accounting for label uncertainty. Multi-class species classification is achieved by training a set of one-vs-one MI-ACE classifiers, one for each pair of tree species, and applying majority voting to the classification results from all classifiers. Additionally, the performance of MI-ACE does not rely on parameter settings that require tuning, resulting in a method that is easy to use in practice. Results are presented using training and testing data provided by a data analysis competition aimed at encouraging the development of methods for extracting ecological information through remote sensing. The one-vs-one MI-ACE technique is embedded in a hierarchical classification, in which a tree crown is first classified to one of the genus classes and then to one of the species classes. The species-level rank-1 classification accuracy is 86.4% and the cross entropy is 0.9395 on the testing data, for which the ground truth was not released by the competition organizer. The same evaluation metrics computed on the training data give a rank-1 classification accuracy of 95.62% and a cross entropy of 0.2649. The results show that the presented approach can classify not only the majority species classes but also the rare species classes.
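
For illustration, the sketch below computes the standard ACE detection statistic and applies one-vs-one majority voting over a tree crown's pixels. The MI-ACE training step that estimates each pair's target signature under label uncertainty is not shown, and the use of a mean crown response against a per-pair threshold is an assumption, not the paper's exact decision rule.

```python
import numpy as np

def ace_score(x, target_sig, bg_mean, bg_cov_inv):
    """Adaptive Cosine Estimator: squared cosine similarity between the
    background-whitened pixel spectrum and the whitened target signature."""
    xc = x - bg_mean
    s = target_sig
    num = (s @ bg_cov_inv @ xc) ** 2
    den = (s @ bg_cov_inv @ s) * (xc @ bg_cov_inv @ xc)
    return num / den

def one_vs_one_vote(crown_pixels, pairwise_detectors, n_classes):
    """Majority vote over one-vs-one MI-ACE detectors.
    `pairwise_detectors[(i, j)]` holds (target_sig, bg_mean, bg_cov_inv, threshold)
    learned for species pair (i, j); each detector votes for i when the mean ACE
    response over the crown's pixels exceeds its threshold, otherwise for j."""
    votes = np.zeros(n_classes, dtype=int)
    for (i, j), (sig, mu, cov_inv, thr) in pairwise_detectors.items():
        mean_resp = np.mean([ace_score(px, sig, mu, cov_inv) for px in crown_pixels])
        votes[i if mean_resp > thr else j] += 1
    return int(np.argmax(votes))
```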


2019 ◽  
Vol 8 (2) ◽  
pp. 1822-1827 ◽  

This paper presents a computer vision based emotion recognition system for the identification of the six basic emotions among Filipino gamers using deep learning techniques. In particular, the proposed system uses deep learning through the Inception network and Long Short-Term Memory (LSTM). The researchers gathered a database of Filipino facial expressions consisting of 74 gamers for the training data and 4 gamer subjects for the testing data. The system produced a maximum categorical validation accuracy of 0.9983 and a test accuracy of 0.9940 for the six basic emotions using the Filipino database. Cross-database analysis using the well-known Cohn-Kanade+ database showed that the proposed Inception-LSTM system has accuracy on a par with existing systems. The results demonstrate the feasibility of the proposed system and include sample computations of empathy and engagement based on the six basic emotions as a proof of concept.
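
A minimal sketch of a CNN-feature-plus-LSTM classifier of the kind described above: per-frame feature vectors (for example from an Inception-style backbone, which is not shown here) are passed through an LSTM whose final hidden state is mapped to the six emotions. The feature and hidden dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FrameSequenceEmotionNet(nn.Module):
    """Per-frame CNN features -> LSTM -> six-emotion classifier head."""
    def __init__(self, feature_dim=2048, hidden_dim=256, n_emotions=6):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, n_emotions)

    def forward(self, frame_features):
        # frame_features: (batch, time, feature_dim)
        _, (h_n, _) = self.lstm(frame_features)
        return self.head(h_n[-1])  # (batch, n_emotions) logits
```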


The Lancet ◽  
2018 ◽  
Vol 392 (10162) ◽  
pp. 2388-2396 ◽  
Author(s):  
Sasank Chilamkurthy ◽  
Rohit Ghosh ◽  
Swetha Tanamala ◽  
Mustafa Biviji ◽  
Norbert G Campeau ◽  
...  

2020 ◽  
Vol 28 (5) ◽  
pp. 939-951
Author(s):  
Luyao Ma ◽  
Yun Wang ◽  
Lin Guo ◽  
Yu Zhang ◽  
Ping Wang ◽  
...  

OBJECTIVE: Diagnosis of tuberculosis (TB) in multi-slice spiral computed tomography (CT) images is a difficult task in many TB-prevalent locations where experienced radiologists are lacking. To address this difficulty, we develop an automated detection system based on artificial intelligence (AI) in this study to simplify the diagnostic process of active tuberculosis (ATB) and improve diagnostic accuracy using CT images. DATA: A CT image dataset of 846 patients is retrospectively collected from a large teaching hospital. The gold standard for ATB patients is the sputum smear, and the gold standard for normal and pneumonia patients is the CT report. The dataset is divided into independent training and testing subsets. The training data contain 337 ATB, 110 pneumonia, and 120 normal cases, while the testing data contain 139 ATB, 40 pneumonia, and 100 normal cases. METHODS: A U-Net deep learning algorithm is applied for automatic detection and segmentation of ATB lesions. Image processing methods are then applied to the CT layers that U-Net diagnoses as containing ATB lesions; these methods can detect potentially misdiagnosed layers and can turn 2D ATB lesions into 3D lesions based on consecutive U-Net annotations. Finally, the independent test data are used to evaluate the performance of the developed AI tool. RESULTS: On the independent test data, the AI tool yields an AUC value of 0.980. Accuracy, sensitivity, specificity, positive predictive value, and negative predictive value are 0.968, 0.964, 0.971, 0.971, and 0.964, respectively, which shows that the AI tool performs well for detection of ATB and differential diagnosis of non-ATB cases (i.e. pneumonia and normal cases). CONCLUSION: An AI tool for automatic detection of ATB in chest CT was successfully developed in this study. The AI tool can accurately detect ATB patients and distinguish between ATB and non-ATB cases, which simplifies the diagnostic process and lays a solid foundation for the next step of applying AI to CT diagnosis of ATB in clinical practice.
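
As a sketch of the 2D-to-3D step described in METHODS, the code below stacks per-slice U-Net masks into a volume, groups voxels connected across consecutive slices into 3D lesions, and drops components spanning too few layers as likely misdiagnosed slices. The minimum-span rule and its threshold are assumptions, not the authors' exact post-processing.

```python
import numpy as np
from scipy import ndimage

def slices_to_3d_lesions(slice_masks, min_slices=2):
    """Stack per-slice binary masks (list of 2D arrays) into a volume and label
    3D-connected components; components touching fewer than `min_slices`
    consecutive layers are discarded as likely false positives (illustrative rule)."""
    volume = np.stack(slice_masks, axis=0).astype(bool)   # (n_slices, H, W)
    labeled, n_components = ndimage.label(volume)         # default 3D connectivity
    for comp in range(1, n_components + 1):
        slice_span = np.unique(np.nonzero(labeled == comp)[0]).size
        if slice_span < min_slices:
            labeled[labeled == comp] = 0                   # drop thin component
    return labeled
```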


Author(s):  
Felix Erne ◽  
Daniel Dehncke ◽  
Steven C. Herath ◽  
Fabian Springer ◽  
Nico Pfeifer ◽  
...  

Abstract Background Fracture detection by artificial intelligence, and especially by Deep Convolutional Neural Networks (DCNN), is a topic of growing interest in current orthopaedic and radiological research. As training a DCNN usually requires a large amount of data, mostly frequent fracture types and conventional X-rays are used, so less common fractures such as acetabular fractures (AF) are underrepresented in the literature. The aim of this pilot study was to establish a DCNN for the detection of AF using computed tomography (CT) scans. Methods Patients with an acetabular fracture were identified from the monocentric consecutive pelvic injury registry at the BG Trauma Center XXX from 01/2003 – 12/2019. All patients with a unilateral AF and CT scans available in DICOM format were included for further processing. All datasets were automatically anonymised and digitally post-processed. The relevant regions of interest were extracted, and data augmentation (DA) was implemented to artificially increase the number of training samples. A DCNN based on Med3D was used for autonomous fracture detection, using global average pooling (GAP) to reduce overfitting. Results From a total of 2,340 patients with a pelvic fracture, 654 patients suffered from an AF. After screening and post-processing of the datasets, a total of 159 datasets were enrolled for training of the algorithm. A random assignment into training datasets (80%) and test datasets (20%) was performed. The techniques of bone area extraction, DA and GAP increased the accuracy of fracture detection from 58.8% (native DCNN) up to 82.8% despite the low number of datasets. Conclusion The accuracy of fracture detection of our trained DCNN is comparable to published values despite the low number of training datasets. The techniques of bone extraction, DA and GAP are useful for increasing the detection rates of rare fractures by a DCNN. Based on the DCNN used here in combination with the described techniques, the possibility of automatic fracture classification of AF is being investigated in a multicentre study.
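
A minimal sketch of the global average pooling (GAP) head mentioned above, sitting on top of a 3D feature extractor such as a Med3D-style encoder (not shown); the channel count and two-class output are assumptions.

```python
import torch
import torch.nn as nn

class FractureHead(nn.Module):
    """Classification head for a 3D backbone: global average pooling collapses
    each feature map to one value, keeping the number of trainable head
    parameters small, which helps reduce overfitting on a small dataset."""
    def __init__(self, in_channels=512, n_classes=2):
        super().__init__()
        self.gap = nn.AdaptiveAvgPool3d(1)          # (N, C, D, H, W) -> (N, C, 1, 1, 1)
        self.fc = nn.Linear(in_channels, n_classes)

    def forward(self, feature_maps):
        pooled = self.gap(feature_maps).flatten(1)  # (N, C)
        return self.fc(pooled)                      # fracture vs. no-fracture logits
```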


2021 ◽  
Vol 5 (1) ◽  
pp. 11
Author(s):  
Arif Agustyawan

Abstract: The fish-sorting process carried out by fishermen or sellers to select fish by quality still relies on manual inspection and sometimes fails, because visual judgment becomes unreliable when the inspector is tired. Until now the examination has been purely physical, and as a result the fish are often already spoiled by the time they are consumed. This study applies the Convolutional Neural Network (CNN) algorithm to distinguish fresh from non-fresh fish. A Convolutional Neural Network is a deep learning method capable of independently learning object recognition, feature extraction, and classification. In this study, a CNN is trained to distinguish fresh from non-fresh fish. The network learning process achieves 100% accuracy on the training and validation data, and testing on the held-out test data also yields 100% accuracy. These results indicate that the Convolutional Neural Network method can identify and classify fresh and non-fresh fish very well.
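
For illustration, a compact binary CNN of the kind used for this task might look like the sketch below; the layer sizes are assumptions, since the abstract does not report the exact architecture.

```python
import torch
import torch.nn as nn

class FreshnessCNN(nn.Module):
    """Small convolutional classifier for fresh vs. non-fresh fish images."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 1)  # single logit: fresh vs. non-fresh

    def forward(self, x):                    # x: (N, 3, H, W)
        return self.classifier(self.features(x).flatten(1))
```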


Author(s):  
Greg Smith ◽  
Masayoshi Shibatani

In recent years, various machine learning and deep learning algorithms have been developed and widely applied for gearbox fault detection and diagnosis. However, real-time application of these algorithms has been limited, mainly because a model developed using data from one machine or one operating condition suffers serious diagnosis performance degradation when applied to another machine, or to the same machine under a different operating condition. The reason for this poor generalization is the distribution discrepancy between the training and testing data. This paper proposes to address the issue with a deep learning based cross-domain adaptation approach for gearbox fault diagnosis. Labeled data from the training dataset and unlabeled data from the testing dataset are used to achieve the cross-domain adaptation task. A deep convolutional neural network (CNN) is used as the main architecture. Maximum mean discrepancy is used as a measure to minimize the distribution distance between the labeled training data and the unlabeled testing data. The study proposes to reduce the discrepancy between the two domains in multiple layers of the designed CNN, adapting the representations learned from the training data so that they transfer to the testing data. The proposed approach is evaluated using experimental data from a gearbox under significant speed variation and multiple health conditions. Benchmarking against both traditional machine learning methods and other domain adaptation methods demonstrates the superiority of the proposed method.
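
A minimal sketch of the maximum mean discrepancy term described above, here with a single Gaussian kernel between source (labeled training-domain) and target (unlabeled testing-domain) features. The fixed bandwidth and single kernel are simplifying assumptions; in practice the term would be added to the classification loss at several CNN layers.

```python
import torch

def rbf_mmd2(source_feats, target_feats, bandwidth=1.0):
    """Squared maximum mean discrepancy with an RBF kernel between two batches
    of feature vectors, shape (n, d) and (m, d). Minimizing this alongside the
    classification loss pulls the two feature distributions together."""
    def kernel(a, b):
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2.0 * bandwidth ** 2))
    k_ss = kernel(source_feats, source_feats).mean()
    k_tt = kernel(target_feats, target_feats).mean()
    k_st = kernel(source_feats, target_feats).mean()
    return k_ss + k_tt - 2.0 * k_st
```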

