Early Detection of Hemorrhagic Stroke Using a Lightweight Deep Learning Neural Network Model

2021, Vol 38 (6), pp. 1727-1736
Author(s):  
Bandi Vamsi, Debnath Bhattacharyya, Divya Midhunchakkravarthy, Jung-yoon Kim

Cerebrovascular stroke is currently one of the major diseases affecting people across the world. Computed tomography (CT) images play a crucial role in identifying hemorrhagic strokes and help in understanding the extent of damage to brain cells more accurately. Existing research implements stroke segmentation on graphics processing units (GPUs) using heavyweight convolutions that require more processing time for diagnosis and increase the model's cost. In this work, deep learning techniques based on the VGG-16 architecture and the Random Forest algorithm are implemented for detecting hemorrhagic stroke in brain CT images under segmentation. To address this constraint, a two-step lightweight convolution model is proposed using data collected from multiple repositories. In the first step, the input CT images are given to the VGG-16 architecture, and in the second step, the resulting data frames are given to a Random Forest for stroke segmentation with three classes. We show that the training time for stroke detection is reduced compared with existing heavyweight models. Experimental results show that, compared to other existing architectures, our hybrid VGG-16 and Random Forest model achieved improved results: a Dice coefficient of 72.92 and an accuracy of 97.81%.
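
For readers who want to see how such a two-step pipeline fits together, here is a minimal sketch in Python, assuming Keras and scikit-learn; the array shapes, class count, and hyperparameters are illustrative assumptions, not the authors' exact configuration:

```python
# Minimal two-step sketch: frozen VGG-16 as a feature extractor, then a
# Random Forest classifier on the pooled feature vectors.
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy stand-ins for preprocessed CT slices and three-class labels.
X = np.random.rand(32, 224, 224, 3).astype("float32")
y = np.arange(32) % 3

# Step 1: extract deep features with VGG-16 (ImageNet weights, no top).
backbone = VGG16(weights="imagenet", include_top=False, pooling="avg",
                 input_shape=(224, 224, 3))
features = backbone.predict(preprocess_input(X * 255.0), verbose=0)

# Step 2: classify the 512-d feature vectors with a Random Forest.
X_tr, X_te, y_tr, y_te = train_test_split(features, y, test_size=0.25,
                                          stratify=y, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X_tr, y_tr)
print("held-out accuracy:", rf.score(X_te, y_te))
```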

Energies, 2021, Vol 14 (15), pp. 4595
Author(s):  
Parisa Asadi, Lauren E. Beckingham

X-ray CT imaging provides a 3D view of a sample and is a powerful tool for investigating the internal features of porous rock. Reliable phase segmentation in these images is highly necessary but, like any other digital rock imaging technique, is time-consuming, labor-intensive, and subjective. Combining 3D X-ray CT imaging with machine learning methods that can simultaneously consider several extracted features in addition to color attenuation is a promising and powerful approach to reliable phase segmentation. Machine learning-based phase segmentation of X-ray CT images enables faster data collection and interpretation than traditional methods. This study investigates the performance of several filtering techniques with three machine learning methods and a deep learning method to assess the potential for reliable feature extraction and pixel-level phase segmentation of X-ray CT images. Features were first extracted from images using well-known filters and from the second convolutional layer of the pre-trained VGG16 architecture. Then, K-means clustering, Random Forest, and feed-forward artificial neural network methods, as well as a modified U-Net model, were applied to the extracted input features. The models' performances were then compared and contrasted to determine the influence of the machine learning method and input features on reliable phase segmentation. The results showed that considering more feature dimensions is promising, with all classification algorithms achieving high accuracy, ranging from 0.87 to 0.94. Feature-based Random Forest demonstrated the best performance among the machine learning models, with an accuracy of 0.88 for Mancos and 0.94 for Marcellus. The U-Net model with a linear combination of focal and Dice loss also performed well, with accuracies of 0.91 and 0.93 for Mancos and Marcellus, respectively. In general, considering more features provided promising and reliable segmentation results that are valuable for analyzing the composition of dense samples such as shales, which are significant unconventional reservoirs in oil recovery.
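
The linear combination of focal and Dice loss mentioned above can be sketched as follows; this is an assumed TensorFlow/Keras formulation with an illustrative 0.5/0.5 weighting, not necessarily the authors' exact implementation:

```python
# Sketch of the focal + Dice composite loss for binary segmentation.
import tensorflow as tf

def dice_loss(y_true, y_pred, eps=1e-6):
    # Soft Dice loss: 1 - 2*|A ∩ B| / (|A| + |B|).
    inter = tf.reduce_sum(y_true * y_pred)
    denom = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
    return 1.0 - (2.0 * inter + eps) / (denom + eps)

def focal_loss(y_true, y_pred, gamma=2.0, alpha=0.25, eps=1e-7):
    # Binary focal loss: down-weights easy pixels, emphasizes hard ones.
    y_pred = tf.clip_by_value(y_pred, eps, 1.0 - eps)
    pt = y_true * y_pred + (1.0 - y_true) * (1.0 - y_pred)
    at = y_true * alpha + (1.0 - y_true) * (1.0 - alpha)
    return -tf.reduce_mean(at * (1.0 - pt) ** gamma * tf.math.log(pt))

def focal_dice_loss(y_true, y_pred, w_focal=0.5, w_dice=0.5):
    # Linear combination, usable directly as a Keras loss, e.g.
    # model.compile(optimizer="adam", loss=focal_dice_loss)
    return w_focal * focal_loss(y_true, y_pred) + w_dice * dice_loss(y_true, y_pred)
```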


The need for offline handwritten character recognition is intense, yet the task is difficult as writing varies from person to person and also depends on various other factors connected to the attitude and mood of the writer. However, it can be achieved by converting the handwritten document into digital form. The field has advanced with the introduction of convolutional neural networks and is further improved by pre-trained models, which have the capacity to decrease training time and increase the accuracy of character recognition. Research on handwritten character recognition for Indian languages is limited compared to languages such as English, Latin, and Chinese, mainly because India is a multilingual country. Recognition of Telugu and Hindi characters is more difficult, as the scripts of these languages are mostly cursive and carry more diacritics, so research in this line leans towards improving recognition accuracy. Some research has already been carried out, achieving up to eighty percent accuracy in offline handwritten character recognition of Telugu and Hindi. The proposed work focuses on increasing accuracy with less training time in the recognition of these selected languages and is able to reach the expected values.
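
As a rough illustration of the transfer-learning setup such work relies on, the following sketch freezes a pre-trained backbone and attaches a new softmax head; the backbone choice, class count, and image size are assumptions for illustration:

```python
# Sketch of transfer learning for handwritten character recognition:
# frozen pre-trained backbone plus a new classification head.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNetV2

NUM_CLASSES = 60  # hypothetical number of Telugu/Hindi character classes

base = MobileNetV2(weights="imagenet", include_top=False,
                   input_shape=(96, 96, 3))
base.trainable = False  # freeze pre-trained weights to cut training time

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10, validation_split=0.1)
```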


2020, Vol 9 (2), pp. 1049-1054

In this paper, we predict flight delays using different machine learning and deep learning techniques. With such a model it becomes easier to predict whether a flight will be delayed or not. Factors like 'WeatherDelay', 'NASDelay', 'Destination', and 'Origin' play a vital role in this model. Using machine learning algorithms such as Random Forest, Support Vector Machine (SVM), and K-Nearest Neighbors (KNN), the F1-score, precision, recall, support, and accuracy have been computed. In addition, a Long Short-Term Memory (LSTM) RNN architecture has also been employed. The dataset for Pittsburgh from the Bureau of Transportation Statistics (BTS) is used. The results computed from the above-mentioned algorithms have been compared. Further, the results were visualized for various airlines to find the maximum delay, and the AUC-ROC curve was plotted for the Random Forest algorithm. The aim of our research is to predict delays so as to minimize losses and increase customer satisfaction.
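
A minimal sketch of the classical-model comparison might look like the following, assuming scikit-learn; the dataframe, its column names, and the binary 'Delayed' target are toy stand-ins for a real BTS export:

```python
# Sketch: compare Random Forest, SVM, and KNN on a delay-prediction task,
# reporting precision/recall/F1/support and AUC-ROC as in the abstract.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Toy stand-in for a BTS extract; in practice load the Pittsburgh export.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "WeatherDelay": rng.integers(0, 60, 500),
    "NASDelay": rng.integers(0, 60, 500),
    "Origin": rng.choice(["PIT", "JFK", "ORD"], 500),
    "Destination": rng.choice(["ATL", "DEN", "LAX"], 500),
})
df["Delayed"] = ((df["WeatherDelay"] + df["NASDelay"]) > 45).astype(int)

X = pd.get_dummies(df.drop(columns="Delayed"))
y = df["Delayed"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for name, clf in [("Random Forest", RandomForestClassifier(random_state=0)),
                  ("SVM", SVC(probability=True)),
                  ("KNN", KNeighborsClassifier())]:
    clf.fit(X_tr, y_tr)
    print(name)
    print(classification_report(y_te, clf.predict(X_te)))
    print("AUC-ROC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```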


F1000Research, 2021, Vol 10, pp. 1292
Author(s):  
Nandish Siddeshappa, Tejashri Varur, Krithika Subramani, Siddhi Puranik, Niranjana Sampathila

Background: The recent outbreak of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and the corresponding disease (coronavirus disease 2019; COVID-19) has been declared a pandemic by the World Health Organization. COVID-19 has become a global crisis, shattering health care systems and weakening the economies of most countries. The current testing methods include reverse transcription polymerase chain reaction (RT-PCR), rapid antigen testing, and lateral flow testing, with RT-PCR used as the gold standard despite its accuracy being a mere 63%. It is a manual, time-consuming process, taking on average about 48 hours to obtain results. Alternative methods employing deep learning techniques and radiologic images are emerging. Methods: In this paper, we used a dataset consisting of COVID-19 and non-COVID-19 folders for both X-ray and CT images, containing a total of 17,599 images. This dataset was used to compare three (non-pre-trained) CNN models and five pre-trained models and their performance in detecting COVID-19 under various parameters such as validation accuracy, training accuracy, validation loss, training loss, prediction accuracy, sensitivity, and the training time required, with CT and X-ray images considered separately. Results: Xception provided the highest validation accuracy (88%) when trained with the X-ray dataset, while VGG19 provided the highest validation accuracy (81.2%) when CT images were used for training. Conclusions: VGG16 showed the most consistent performance, with a validation accuracy of 76.6% for CT images and 87.76% for X-ray images. When comparing the results between the modalities, models trained with the X-ray dataset performed better than the same models trained with CT images. Hence, it can be concluded that X-ray images provide higher accuracy in detecting COVID-19, making them an effective modality for detecting COVID-19 in practice.
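
A hedged sketch of how such a backbone comparison can be set up in Keras is shown below; the classification head, optimizer, and training details are assumptions rather than the authors' exact protocol:

```python
# Sketch: give each pre-trained backbone the same head and training loop
# so validation accuracies are directly comparable.
import tensorflow as tf
from tensorflow.keras import applications, layers, models

def build(backbone_fn, input_shape=(224, 224, 3)):
    base = backbone_fn(weights="imagenet", include_top=False,
                       input_shape=input_shape, pooling="avg")
    model = models.Sequential([
        base,
        layers.Dense(1, activation="sigmoid"),  # COVID vs. non-COVID
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

candidates = {"VGG16": applications.VGG16,
              "VGG19": applications.VGG19,
              "Xception": applications.Xception}

# Hypothetical directory layout with COVID/ and non-COVID/ subfolders:
# train = tf.keras.utils.image_dataset_from_directory("xray/", image_size=(224, 224))
# for name, fn in candidates.items():
#     history = build(fn).fit(train, validation_data=val, epochs=5)
#     print(name, max(history.history["val_accuracy"]))
```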


2020, Vol 22 (Supplement_2), pp. ii159-ii159
Author(s):  
Christopher Tinkle, Chih-Yang Hsu, Edward Simpson, Jason Chiang, Xiaoyu Li, ...

BACKGROUND: Genomic profiling of DIPG suggests distinct and clinically relevant molecular subgroups based on the presence and isoform of the histone H3 K27M mutation. We evaluated the impact of radiomic features on the classification and prognostication of 81 histologically confirmed and centrally reviewed DIPG. METHODS: We utilized a combination of manual and automatic segmentation (DeepMedic) to define tumor volume and PyRadiomics for computation of radiomic features. Imaging feature stability was assessed by calculating the concordance correlation coefficient (CCC) for each radiomic parameter from two separate pretreatment MRIs. Bootstrapped least absolute shrinkage and selection operator (LASSO) regression was used for feature selection. Classification and prognostication models, incorporating H3 status and clinical variables, were developed using random forest, Cox proportional hazards, and deep learning algorithms and assessed for goodness of fit using the c-index. RESULTS: Eighty of 386 imaging features demonstrated stability (CCC, p < 0.001) between pretreatment scans. LASSO identified 26 prognostic imaging features, along with 38 and 57 imaging features predictive of the presence of the H3 K27M mutation and its isoforms, respectively. Using five-fold cross-validation, the accuracy was 85% for distinguishing H3 K27M-mutant from WT tumors and 77% for distinguishing among H3.3 K27M, H3.1 K27M, and WT tumors. The c-index for prognostication was 0.77 for Cox, 0.55 for random forest, and 0.72 for deep learning. All models were more predictive than the Jansen survival prediction model. CONCLUSIONS: Stable, predictive radiomic features may serve as a surrogate for H3 status and enhance current prognostication of DIPG. Model validation in cohorts of prospectively treated patients with DIPG is ongoing.
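
The two screening steps (CCC stability filtering between paired scans, then LASSO selection) can be sketched as follows; the CCC threshold, the L1-regularized logistic formulation, and the synthetic data are assumptions for illustration:

```python
# Sketch: CCC-based stability screening, then L1 (LASSO-style) selection.
import numpy as np
from sklearn.linear_model import LogisticRegression

def ccc(x, y):
    # Lin's concordance correlation coefficient between paired measurements.
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

# Hypothetical (n_patients, n_features) radiomic matrices from two scans.
rng = np.random.default_rng(0)
scan1 = rng.normal(size=(81, 386))
scan2 = scan1 + rng.normal(scale=0.1, size=(81, 386))

# Keep only features reproducible across the two pretreatment scans.
stable = [j for j in range(386) if ccc(scan1[:, j], scan2[:, j]) > 0.9]

# L1-penalized logistic regression as a LASSO-style selector for H3 status
# (binary labels here are synthetic placeholders).
y = rng.integers(0, 2, size=81)
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
lasso.fit(scan1[:, stable], y)
selected = np.flatnonzero(lasso.coef_[0])
print(len(stable), "stable features;", len(selected), "selected by LASSO")
```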


2021, Vol 3 (3), pp. 206-220
Author(s):  
J Samuel Manoharan

Social distancing is a non-pharmaceutical infection prevention and control approach that is now being utilized in the COVID-19 scenario to avoid or restrict the transmission of illness in a community. As a consequence, disease transmission, as well as the morbidity and mortality associated with it, are reduced. The deadly coronavirus can continue to circulate if adequate distance between persons is not maintained at each site, so exposure to the coronavirus must be avoided at all costs. The recommended distance varies across nations due to differing political rules and the positions of their medical authorities; the WHO established a social distance of 1 to 2 metres as the standard. This research work has developed a computational method for estimating the impact of coronavirus based on various social distancing metrics. Generally, in COVID-19 situations, a social distance ranging from long to extremely long can be a good strategy, while adopting an extremely small social distance is a harmful approach during the pandemic. This estimation can be done using deep learning based on crowd image identification. The proposed work has been utilized to find the optimal social distance for COVID-19, which is identified as 1.89 metres. The purpose of the proposed experiment is to compare different types of deep learning-based image recognition algorithms in a crowded environment. The performance is measured with various metrics such as accuracy, precision, recall, and true detection rate.
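
A minimal sketch of the distance check that follows person detection is given below; the pixel-to-metre calibration factor and the bounding-box format are assumptions, and the 1.89 m threshold is the optimal value reported above:

```python
# Sketch: flag pairs of detected persons closer than the distance threshold.
import math
from itertools import combinations

PIXELS_PER_METRE = 50.0   # hypothetical camera calibration
THRESHOLD_M = 1.89        # optimal social distance reported in the paper

def ground_point(box):
    # box = (x1, y1, x2, y2); use the bottom-centre as the person's position.
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, y2)

def violations(boxes, threshold=THRESHOLD_M):
    pts = [ground_point(b) for b in boxes]
    bad = []
    for (i, p), (j, q) in combinations(enumerate(pts), 2):
        d = math.dist(p, q) / PIXELS_PER_METRE  # pixel distance -> metres
        if d < threshold:
            bad.append((i, j, round(d, 2)))
    return bad

# Three hypothetical detections; the first two stand too close together.
print(violations([(10, 20, 60, 200), (70, 25, 120, 210), (400, 30, 450, 205)]))
```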


2019, Vol 37 (15_suppl), pp. e14592-e14592
Author(s):  
Junshui Ma, Rongjie Liu, Gregory V. Goldmacher, Richard Baumgartner

Background: Radiomic features derived from CT scans have shown promise in predicting treatment response (Sun et al. 2018, and others). We carried out a proof-of-concept study to investigate the use of CT images to predict lesion-level response. Methods: CT images from the Merck studies KEYNOTE-010 (NCT01905657) and KEYNOTE-024 (NCT02142738) were used. Data from each study were evaluated separately and split into training (80%) and validation (20%) sets. A lesion was classified as "shrinking" if a ≥30% size reduction from baseline was seen on any future scan. There were 2004 (613 shrinking vs. 1391 non-shrinking) and 588 (311 vs. 277) lesions in KN10 and KN24, respectively. 130 radiomic features were extracted, followed by a random forest to predict lesion response. In addition, end-to-end deep learning was used, which predicts the response directly from ROIs of the CT images. Models were trained in two ways: (1) using the pre-treatment baseline (BL) image only, or (2) using both BL and the first post-treatment image (V1) as predictors. Finally, to evaluate the predictive power without relying on initial lesion size, size information was omitted from the CT images. Results: Results from KN10 and KN24 are summarized in the Table. Conclusions: The results suggest that BL CT images alone have little power to predict lesion response, while BL plus the first post-baseline image exhibits high predictive power. Although a substantial part of the predictive power can be attributed to the change in ROI size, predictive power does exist in other aspects of the CT images. Overall, the radiomic signature followed by a random forest produced predictions similar to, if not better than, the deep learning approach. [Table: see text]
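
The lesion-level labeling rule is simple enough to state as code; this sketch assumes sizes are given as a baseline measurement plus a list of follow-up measurements:

```python
# Sketch of the "shrinking" label: a lesion qualifies if any post-baseline
# scan shows a >=30% size reduction from baseline.
def is_shrinking(baseline_size, followup_sizes, threshold=0.30):
    """Return True if any follow-up measurement is >=30% below baseline."""
    return any(size <= baseline_size * (1.0 - threshold)
               for size in followup_sizes)

# Example: 13.9 is a 30.5% reduction from 20.0, so the lesion is shrinking.
print(is_shrinking(20.0, [18.5, 13.9, 12.0]))  # True
```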


Deep learning is widespread across different fields such as health care, voice recognition, image and video classification, real-time rendering, face recognition, and many other domains. Fundamentally, deep learning is used for three reasons: its ability to perform better with a huge amount of training data, its high computational speed, and its capacity for deep training at various levels of abstraction and representation. Accelerating deep learning requires a high-performance platform, which means accelerated hardware for training complex deep learning problems. While training large datasets with deep learning can take hours, days, or weeks, accelerated hardware that reduces the computational load can be used. The main aim of most research studies is to optimize prediction results in terms of accuracy, error rate, and execution time. The graphics processing unit (GPU) is one such accelerator that currently prevails in decreasing training time due to its parallel architecture. In this research paper, a multi-level (deep learning) approach is simulated on both a central processing unit (CPU) and a GPU. Various studies claim that GPUs deliver accurate results at maximum speed. MATLAB is the framework used in this work to train the deep learning network for predicting ground water level using a dataset of three parameters: temperature, rainfall, and water requirement. Thirteen years of data for the Faridabad district of Haryana, from 2006 to 2018, are used to train, verify, test, and analyze the network on the CPU and GPU. The training function used was trainlm for training the network on the CPU and trainscg for GPU training, as the GPU does not support Jacobian training. From our results, it is concluded that for a large dataset the training accuracy increases with the GPU and the processing time for training decreases compared to the CPU. Overall performance improves when training the network on the GPU, making it the better method for predicting the water level. The proficiency estimation of the network shows the maximum regression value, least mean square error (MSE), and highest performance value for the GPU during training.
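
The CPU-versus-GPU timing comparison can be reproduced in outline as follows, shown here in TensorFlow rather than the authors' MATLAB setup; the toy model, data, and epoch count are stand-ins:

```python
# Sketch: time the same training run on CPU and (if available) GPU.
import time
import numpy as np
import tensorflow as tf

# Toy stand-ins for the three inputs (temperature, rainfall, water
# requirement) and the ground-water-level target.
X = np.random.rand(10000, 3).astype("float32")
y = np.random.rand(10000, 1).astype("float32")

def train_on(device):
    with tf.device(device):
        model = tf.keras.Sequential([
            tf.keras.Input(shape=(3,)),
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dense(1),
        ])
        model.compile(optimizer="adam", loss="mse")
        start = time.perf_counter()
        model.fit(X, y, epochs=5, batch_size=64, verbose=0)
        return time.perf_counter() - start

print("CPU time:", train_on("/CPU:0"), "s")
if tf.config.list_physical_devices("GPU"):
    print("GPU time:", train_on("/GPU:0"), "s")
```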


2021, Vol 11 (1)
Author(s):  
Johan Phan, Leonardo C. Ruspini, Frank Lindseth

Obtaining an accurate segmentation of images obtained by computed microtomography (micro-CT) techniques is a non-trivial process due to the wide range of noise types and artifacts present in these images. Current methodologies are often time-consuming, sensitive to noise and artifacts, and require skilled people to give accurate results. Motivated by the rapid advancement of deep learning-based segmentation techniques in recent years, we have developed a tool that aims to fully automate the segmentation process in one step, without the need for any extra image processing steps such as noise filtering or artifact removal. To obtain a general model, we train our network using a dataset made of high-quality three-dimensional micro-CT images from different scanners, rock types, and resolutions. In addition, we use a domain-specific augmented training pipeline with various types of noise, synthetic artifacts, and image transformations/distortions. For validation, we use a synthetic dataset to measure accuracy and analyze noise/artifact sensitivity. The results show robust and accurate segmentation performance for the most common types of noise present in real micro-CT images. We also compared the segmentation produced by our method with that of five expert users employing commercial and open software packages on real rock images. We found that most of the current tools fail to reduce the impact of local and global noise and artifacts. We quantified the variation in human-assisted segmentation results in terms of physical properties and observed a large variation. In comparison, the new method is more robust to local noise and artifacts, outperforming the human segmentation and giving consistent results. Finally, we compared the porosity of images segmented by our model with experimental porosity measured in the laboratory for ten different untrained samples, finding very encouraging results.
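
An augmentation step of the kind described might be sketched as follows; the noise scales, intensity ranges, and the crude ring-artifact approximation are all illustrative assumptions, not the authors' pipeline:

```python
# Sketch of domain-specific augmentation for micro-CT training volumes:
# random Gaussian noise, intensity drift, and a rough ring-artifact stand-in.
import numpy as np

def augment(volume, rng=None):
    rng = rng or np.random.default_rng()
    v = volume.astype("float32").copy()
    v += rng.normal(scale=rng.uniform(0.0, 0.05), size=v.shape)  # sensor noise
    v *= rng.uniform(0.9, 1.1)                                   # intensity drift
    if rng.random() < 0.5:
        # Crude ring artifact: a bright circular band around the slice centre.
        z, yy, xx = np.indices(v.shape)
        cy, cx = v.shape[1] / 2, v.shape[2] / 2
        r = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
        ring_radius = rng.uniform(5, min(v.shape[1:]) / 2)
        v += 0.1 * np.exp(-((r - ring_radius) ** 2) / 2.0)
    return np.clip(v, 0.0, 1.0)

sample = np.random.rand(16, 64, 64)  # toy normalized micro-CT volume
print(augment(sample).shape)
```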


10.28945/4897, 2022, Vol 17, pp. 035-065
Author(s):  
Niharika Prasanna Kumar

Aim/Purpose: This paper aims to analyze the availability and pricing of perishable farm produce before and during the lockdown restrictions imposed due to COVID-19. It also proposes machine learning and deep learning models to help farmers decide on an appropriate market to sell their farm produce and get a fair price for their product. Background: Developing countries like India have regulated agricultural markets governed by country-specific protective laws such as the Essential Commodities Act and the Agricultural Produce Market Committee (APMC) Act. These regulations restrict the sale of agricultural produce to a predefined set of local markets. The COVID-19 pandemic led to a lockdown during the first half of 2020, which resulted in supply disruption and a demand-supply mismatch of agricultural commodities at these local markets. These demand-supply dynamics led to disruptions in the pricing of farm produce, leading to lower price realization for farmers. Hence it is essential to analyze the impact of this disruption on the pricing of farm produce at a granular level. Moreover, farmers need a tool that guides them to the most suitable market/city/town to sell their farm produce at a fair price. Methodology: One hundred and fifty thousand samples from the agricultural dataset released by the Government of India were used to perform statistical analysis and identify the supply and price disruptions of perishable agricultural produce. In addition, more than seventeen thousand samples were used to implement and train machine learning and deep learning models that can predict and guide farmers to the appropriate market for selling their farm produce. In essence, the paper uses descriptive analytics to analyze the impact of COVID-19 on agricultural produce pricing and explores the use of prescriptive analytics to recommend an appropriate market for selling agricultural produce. Contribution: Five machine learning models based on Logistic Regression, K-Nearest Neighbors, Support Vector Machine, Random Forest, and Gradient Boosting, and three deep learning models based on artificial neural networks were implemented. The performance of these models was compared using metrics like precision, recall, accuracy, and F1-score. Findings: Among the five classification models, the Gradient Boosting classifier was the optimal classifier, achieving precision, recall, accuracy, and F1-score of 99%. Among the three deep learning models, the Adam optimizer-based deep neural network achieved precision, recall, accuracy, and F1-score of 99%. Recommendations for Practitioners: The Gradient Boosting technique and the Adam-based deep learning model should be the preferred choices for analyzing agricultural pricing-related problems. Recommendation for Researchers: Ensemble learning techniques like Random Forest and Gradient Boosting perform better than non-ensemble classification techniques. Hyperparameter tuning is an essential step in developing these models and improves their performance. Impact on Society: Statistical analysis of the data revealed the true nature of demand, supply, and price disruption. This analysis helps to assess the revenue impact borne by farmers due to COVID-19. The machine learning and deep learning models help farmers get a better price for their crops. Though the dataset used in this paper relates to India, the outcome of this research applies to many developing countries that have similar regulated markets; hence farmers from developing countries across the world can benefit from the outcome of this research. Future Research: The machine learning and deep learning models were implemented and tested for markets in and around Bangalore. The models can be expanded to cover other markets within India.
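
A minimal sketch of the Gradient Boosting market recommender is shown below, assuming scikit-learn; the column names, the synthetic records, and the "best market" target are hypothetical stand-ins for the public price dataset:

```python
# Sketch: Gradient Boosting classifier recommending a market from crop and
# price features, evaluated with precision/recall/F1 as in the paper.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Toy stand-in for the Government of India price dataset.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "commodity": rng.choice(["tomato", "onion", "potato"], 1000),
    "month": rng.integers(1, 13, 1000),
    "arrival_qty": rng.integers(10, 500, 1000),
    "modal_price": rng.integers(500, 5000, 1000),
})
# Hypothetical target: the market expected to give the best price.
df["best_market"] = rng.choice(["Bangalore", "Mysore", "Tumkur"], 1000)

X = pd.get_dummies(df.drop(columns="best_market"))
y = df["best_market"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = GradientBoostingClassifier(n_estimators=200, max_depth=3, random_state=0)
model.fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te)))
```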

