Comprehensive Review on Deep Learning for Neuronal Disorders

2020 ◽  
Vol 9 (1) ◽  
pp. 27-44
Author(s):  
Vinayak Majhi ◽  
Angana Saikia ◽  
Amitava Datta ◽  
Aseem Sinha ◽  
Sudip Paul

In the last few years, deep learning (DL) has gained great attention in modern technology. Using deep learning methods, we can analyse different types of data in different domains with accuracy approaching that of humans. DL is still an emerging technology under development, and it can be seen as the successor of machine learning (ML) techniques. In the present era, ML is used everywhere statistical data need to be analysed. If DL is the technology of the future that will cover every sector of modern industry, one question remains: why are we lagging? The simple answer, in terms of analysing any algorithm, is complexity, in both time and space. DL needs a large artificial neural network (ANN) with hundreds of hidden layers trained on a huge amount of data, so performing these tasks requires high-performance computing devices that are still very expensive. With the growth of the semiconductor industry, we can say that the future of DL will arrive together with the development of artificial intelligence (AI). As an example, in 2009, Google Brain, Google's deep learning artificial intelligence team, introduced Nvidia GPUs that increased the learning speed of DL systems by about 100 times. By 2017, the number of interconnected network units had grown from a few thousand to a few million, and such networks can perform tasks including object recognition, pattern recognition, speech recognition, and image restoration. DL has great scope in bioengineering, since every living organism generates a huge amount of data; it can be used for disease diagnosis, rehabilitation, and treatment. It can also use these data to extract distinguishing features and help us make decisions in real time. In this review, we found that DL can be very helpful for diagnosing neurological disorders from their symptoms, because it can learn the characteristic patterns of each disorder. The benefit of this review is learning how DL can help identify different neuronal disorders based on different neuropsychiatric symptoms.

Author(s):  
Mohammed Y. Kamil

COVID-19 disease spread rapidly all over the world at the beginning of this year. Hospital reports have indicated the low sensitivity of RT-PCR tests in the early stage of infection, at which point a rapid and accurate diagnostic technique is needed to detect COVID-19. CT has been demonstrated to be a successful tool in the diagnosis of the disease, and a deep learning framework can be developed to aid in evaluating CT exams and providing a diagnosis, thus saving time for disease control. In this work, a deep learning model was adapted for COVID-19 detection via feature extraction from chest X-ray and CT images. Initially, several transfer-learning models were applied and compared; then a VGG-19 model was fine-tuned to obtain the best results for adoption in disease diagnosis. Diagnostic performance was assessed for all models on a dataset of 1000 images. The VGG-19 model achieved the highest accuracy of 99%, sensitivity of 97.4%, and specificity of 99.4%. Deep learning and image processing demonstrated high performance in early COVID-19 detection and can serve as an auxiliary detection tool for clinicians, thus contributing to control of the pandemic.
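As a rough illustration of the transfer-learning setup described above, the following sketch fine-tunes a VGG-19 backbone for binary COVID-19 classification in Keras. The input size, head layers, and training settings are assumptions for illustration, not the authors' exact configuration.

```python
# Minimal transfer-learning sketch (assumed configuration, not the paper's exact setup).
import tensorflow as tf
from tensorflow.keras import layers, models

# Load VGG-19 pre-trained on ImageNet, without its classification head.
base = tf.keras.applications.VGG19(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False  # freeze convolutional features for the first training phase

# Attach a small head for binary COVID-19 vs. non-COVID classification.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy"])

# train_ds / val_ds would be tf.data datasets of labelled chest X-ray or CT images:
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```

After this frozen-backbone phase, some or all convolutional layers are typically unfrozen and trained at a lower learning rate, which is one common way a pre-trained model is "tuned" as the abstract describes.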


2019 ◽  
Author(s):  
Lu Liu ◽  
Ahmed Elazab ◽  
Baiying Lei ◽  
Tianfu Wang

BACKGROUND Echocardiography has a pivotal role in the diagnosis and management of cardiovascular diseases since it is real-time, cost-effective, and non-invasive. The development of artificial intelligence (AI) techniques has led to more intelligent and automatic computer-aided diagnosis (CAD) systems in echocardiography over the past few years. Automatic CAD mainly includes classification, detection of anatomical structures, tissue segmentation, and disease diagnosis, which are completed mainly by machine learning techniques and, more recently, deep learning techniques. OBJECTIVE This review aims to provide a guide for researchers and clinicians on relevant aspects of AI, machine learning, and deep learning. In addition, we review recent applications of these methods in echocardiography and identify how echocardiography could incorporate AI in the future. METHODS This paper first gives an overview of machine learning and deep learning. Second, it reviews the current use of AI in echocardiography by searching the literature in the main databases over the past 10 years, and finally it discusses potential limitations and challenges for the future. RESULTS AI has shown promising improvements, advancing the analysis and interpretation of echocardiography to a new stage in standard view detection, automated analysis of chamber size and function, and assessment of cardiovascular diseases. CONCLUSIONS Compared with machine learning, deep learning methods have achieved state-of-the-art performance across different applications in echocardiography. Although there are challenges, such as the large datasets required, AI can provide satisfactory results by devising various strategies. We believe AI has the potential to improve the accuracy of diagnosis, reduce time consumption, and decrease the workload of cardiologists.


With fires spreading increasingly around the world due to global warming, it has become imperative to develop an intelligent system that detects fires early using modern technology. Therefore, we use deep learning, one of the most popular artificial intelligence techniques today. Professionals have carried out extensive research, experiments, and software development to detect fires using deep learning. In this paper, we review the current methods developed by industry professionals, as well as the datasets used and the fire detection accuracy achieved by each method.


2021 ◽  
Vol 11 (2) ◽  
pp. 744
Author(s):  
Sanghyeop Lee ◽  
Junyeob Kim ◽  
Hyeon Kang ◽  
Do-Young Kang ◽  
Jangsik Park

Alzheimer’s disease is one of the major challenges of population ageing, and diagnosis and prediction of the disease through various biomarkers are key. While the application of deep learning to imaging has recently expanded across the medical industry, the empirical design of these models is very difficult. The main reason is that the performance of a Convolutional Neural Network (CNN) differs greatly depending on the statistical distribution of the input dataset, and different hyperparameters also greatly affect the convergence of CNN models. Consequently, selecting appropriate parameters for the network structure has become a large research area. The Genetic Algorithm (GA) is a very popular technique for automatically selecting a high-performance network architecture. In this paper, we show the possibility of optimising the network architecture using a GA whose search space includes both the network structure configuration and the hyperparameters. To verify the performance of our algorithm, we used an amyloid brain image dataset that is used for Alzheimer’s disease diagnosis. As a result, our algorithm outperforms Genetic CNN by 11.73% on the given classification task.
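To make the general idea concrete, here is a minimal genetic-algorithm loop over a small CNN hyperparameter space. It is a sketch of the generic technique only: the search space, encoding, and operators below are assumptions, not the authors' design, and the fitness function is stubbed where a short CNN training run on the amyloid images would go.

```python
# Generic GA sketch over CNN hyperparameters (illustrative only; not the paper's encoding).
import random

SEARCH_SPACE = {
    "filters":       [16, 32, 64, 128],
    "kernel_size":   [3, 5, 7],
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "n_conv_layers": [2, 3, 4, 5],
}

def train_and_evaluate(ind):
    # Hypothetical helper: build a CNN from `ind`, train briefly on the image
    # dataset, and return validation accuracy. Stubbed with a random score
    # so the sketch runs end to end.
    return random.random()

def random_individual():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def crossover(a, b):
    # Uniform crossover: each gene is taken from one of the two parents.
    return {k: random.choice([a[k], b[k]]) for k in SEARCH_SPACE}

def mutate(ind, rate=0.2):
    # With probability `rate`, resample a gene from its allowed values.
    return {k: (random.choice(SEARCH_SPACE[k]) if random.random() < rate else v)
            for k, v in ind.items()}

def evolve(pop_size=10, generations=5):
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=train_and_evaluate, reverse=True)
        parents = scored[: pop_size // 2]  # selection: keep the best half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=train_and_evaluate)

print(evolve())
```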


2021 ◽  
Author(s):  
Soumava Dey ◽  
Gunther Correia Bacellar ◽  
Mallikarjuna Basappa Chandrappa ◽  
Raj Kulkarni

The rise of the coronavirus disease 2019 (COVID-19) pandemic has made it necessary to improve existing medical screening and clinical management of this disease. While COVID-19 patients are known to exhibit a variety of symptoms, the major symptoms include fever, cough, and fatigue. Since these symptoms also appear in pneumonia patients, this complicates COVID-19 detection, especially during the flu season. Early studies identified abnormalities in chest X-ray images of COVID-19 infected patients that could be beneficial for disease diagnosis. Therefore, chest X-ray image-based disease classification has emerged as an alternative to aid medical diagnosis. However, manual detection of COVID-19 from a set of chest X-ray images comprising both COVID-19 and pneumonia cases is cumbersome and prone to human error. Thus, artificial intelligence techniques powered by deep learning algorithms, which learn from radiography images and predict the presence of COVID-19, have the potential to enhance the current diagnosis process. Towards this purpose, we implemented a set of pre-trained deep learning models such as ResNet, VGG, Inception, and EfficientNet, in conjunction with developing a computer vision AI system based on our own convolutional neural network (CNN) model: Deep Learning in Healthcare (DLH)-COVID. All these CNN models address the image classification task. We used publicly available resources comprising 6,432 images and further strengthened our model by tuning hyperparameters to provide better generalization during the model validation phase. Our final DLH-COVID model yielded the highest accuracy of 96% in the detection of COVID-19 from chest X-ray images when compared with images of both pneumonia-affected and healthy individuals. Given the practicality of patients acquiring chest X-ray images, we also developed a web application (link: https://toad.li/xray) based on our model that enables users to upload chest X-ray images and detect the presence of COVID-19 within a few seconds. Taken together, we introduce a state-of-the-art artificial intelligence-based system for efficient COVID-19 detection and a user-friendly application that has the capacity to become a rapid COVID-19 diagnosis method in the near future.
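The DLH-COVID architecture itself is not specified in this abstract. As a hedged sketch of the kind of compact custom CNN that would be compared against the pre-trained backbones, the following builds a small three-class classifier (COVID-19, pneumonia, normal) in Keras; all layer widths and the input size are assumptions for illustration.

```python
# Generic small CNN for 3-class chest X-ray classification (illustrative; the actual
# DLH-COVID architecture is not described in the abstract).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(input_shape=(224, 224, 3), n_classes=3):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(n_classes, activation="softmax"),  # COVID-19 / pneumonia / normal
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn()
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=20)  # labelled chest X-ray datasets
```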


Author(s):  
Yaser AbdulAali Jasim

Nowadays, technology and computer science are rapidly developing many tools and algorithms, especially in the field of artificial intelligence. Machine learning is involved in the development of new methodologies and models that have opened novel application areas for artificial intelligence. Beyond conventional neural network architectures, deep learning refers to the use of artificial neural network architectures that include multiple processing layers. In this paper, convolutional neural network (CNN) models were designed to detect (diagnose) plant disorders from samples of healthy and unhealthy plant images analysed by means of deep learning methods. The models were trained using an open dataset containing 18,000 images of ten different plants, including healthy examples. Several model architectures were trained, achieving a best performance of 97 percent in detecting the respective [plant, disease] pairs. This is a useful early warning technique that, given its substantially high performance rate, can be further improved to support an automated plant disease detection system working in actual farm conditions.
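As a hedged sketch of how such a classifier produces a [plant, disease] pair at inference time, the snippet below maps the highest-probability output class of a trained model back to a labelled pair. The model file and class labels are hypothetical examples, not the paper's dataset or classes.

```python
# Inference sketch: mapping a trained classifier's output to a [plant, disease] pair.
# The model file and class labels below are hypothetical, for illustration only.
import numpy as np
import tensorflow as tf

CLASS_LABELS = [
    ("tomato", "healthy"), ("tomato", "early_blight"),
    ("potato", "healthy"), ("potato", "late_blight"),
    # ... one (plant, disease) pair per output class of the trained model
]

# model = tf.keras.models.load_model("plant_disease_cnn.h5")  # hypothetical trained model

def diagnose(model, image_path):
    img = tf.keras.utils.load_img(image_path, target_size=(224, 224))
    x = tf.keras.utils.img_to_array(img)[np.newaxis] / 255.0  # batch of one, scaled to [0, 1]
    probs = model.predict(x)[0]
    plant, disease = CLASS_LABELS[int(np.argmax(probs))]
    return plant, disease, float(np.max(probs))
```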


2020 ◽  
Vol 9 (3) ◽  
pp. 871 ◽  
Author(s):  
Muhammad Arsalan ◽  
Muhammad Owais ◽  
Tahir Mahmood ◽  
Jiho Choi ◽  
Kang Ryoung Park

Automatic chest anatomy segmentation plays a key role in computer-aided disease diagnosis, such as for cardiomegaly, pleural effusion, emphysema, and pneumothorax. Among these diseases, cardiomegaly is considered a perilous disease, involving a high risk of sudden cardiac death. It can be diagnosed early by an expert medical practitioner using chest X-ray (CXR) analysis. The cardiothoracic ratio (CTR) and transverse cardiac diameter (TCD) are the clinical criteria used to estimate heart size for diagnosing cardiomegaly. Manual estimation of the CTR and of other diseases is a time-consuming process and requires significant work by the medical expert. Cardiomegaly and related diseases can be estimated automatically by accurate anatomical semantic segmentation of CXRs using artificial intelligence. Automatic segmentation of the lungs and heart from CXRs is considered a demanding task owing to inferior-quality images and intensity variations caused by nonideal imaging conditions. Although there are a few deep learning-based techniques for chest anatomy segmentation, most of them only consider single-class lung segmentation with deep, complex architectures that require a large number of trainable parameters. To address these issues, this study presents two multiclass residual mesh-based CXR segmentation networks, X-RayNet-1 and X-RayNet-2, which are specifically designed to provide fine segmentation performance with few trainable parameters compared to conventional deep learning schemes. The proposed methods utilize semantic segmentation to support the diagnostic procedure for related diseases. To evaluate X-RayNet-1 and X-RayNet-2, experiments were performed with the publicly available Japanese Society of Radiological Technology (JSRT) dataset for multiclass segmentation of the lungs, heart, and clavicle bones; two other publicly available datasets, Montgomery County (MC) and Shenzhen X-Ray sets (SC), were evaluated for lung segmentation. The experimental results showed that X-RayNet-1 achieved fine performance for all datasets and X-RayNet-2 achieved competitive performance with a 75% parameter reduction.
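The X-RayNet architectures are not given in this abstract. As a hedged sketch of the general approach it describes, a lightweight encoder-decoder with residual connections that outputs a per-pixel multiclass mask (background, lungs, heart, clavicles), consider the following; all layer widths and the input size are assumptions, and this is not the authors' network.

```python
# Lightweight residual encoder-decoder segmentation sketch
# (illustrative of the general approach; not the X-RayNet architecture).
import tensorflow as tf
from tensorflow.keras import layers, models

def res_block(x, filters):
    # Two 3x3 convolutions with an identity-style shortcut projected to `filters`.
    shortcut = layers.Conv2D(filters, 1, padding="same")(x)
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    return layers.ReLU()(layers.Add()([shortcut, y]))

def build_segmenter(input_shape=(256, 256, 1), n_classes=4):
    # n_classes = background, lungs, heart, clavicles
    inp = tf.keras.Input(shape=input_shape)
    e1 = res_block(inp, 16)
    e2 = res_block(layers.MaxPooling2D()(e1), 32)
    e3 = res_block(layers.MaxPooling2D()(e2), 64)
    d2 = res_block(layers.Concatenate()([layers.UpSampling2D()(e3), e2]), 32)
    d1 = res_block(layers.Concatenate()([layers.UpSampling2D()(d2), e1]), 16)
    out = layers.Conv2D(n_classes, 1, activation="softmax")(d1)  # per-pixel class probabilities
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

build_segmenter().summary()  # small parameter count, in the spirit of the paper's goal
```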


Author(s):  
Sonia Rani

COVID-19 is a major pandemic disease that has affected the whole world this century. It started in Wuhan, China, in November 2019. The main reasons for the spread of this disease were that test kits were not available in sufficient quantities to diagnose COVID-19 and that no vaccine was available to cure it. Many researchers are trying to develop a vaccine for the treatment of this disease, but prevention is better than cure. Prevention of this epidemic disease therefore depends on diagnosis at an early stage, so that treatment can be given to the patient at the right time and death can be avoided. Millions of people were infected by this disease, and many of them lost their lives after suffering from it. As the diagnostic test for this disease is complicated, many smart apps, such as Siri, the Cova App, and the Arogya Setu App, and other digital systems are used to detect and diagnose cases of infected people. These systems are embedded with artificial intelligence techniques. For diagnosis, COVID-19 computed tomography analysis is based on deep learning convolutional neural networks.


Author(s):  
Yang Lu

The importance of data as the fuel of artificial intelligence is self-evident. As the degree of informatization in various industries deepens, the amount of accumulated data continues to increase; however, data-processing capability lags far behind the exponential growth of data volume. To obtain accurate results, more and more data must be collected, yet the more data collected, the slower the processing and analysis of those data. The emergence of deep learning addresses the problem of how to process large amounts of data quickly and precisely. With the advancement of technology, the healthcare industry has accumulated a promising amount of the needed data. Moreover, if deep learning can be used to aid disease diagnosis, patient data can be processed efficiently, useful information can be screened, valuable diagnostic rules can be mined, and diagnoses can be better formulated and treatments better planned. It is foreseeable that deep learning has the potential to improve the effectiveness and efficiency of healthcare and related industries.


2020 ◽  
Author(s):  
Chang Seok Bang ◽  
Hyun Lim ◽  
Hae Min Jeong ◽  
Sung Hyeon Hwang

BACKGROUND The authors previously examined deep-learning models for classifying the invasion depth (mucosa-confined vs. submucosa-invaded) of gastric neoplasms using endoscopic images. The external-test accuracy reached 77.3%. However, model establishment is labor-intensive and demands high performance. Automated deep learning (AutoDL) tools, which enable fast searching of optimal neural architectures and hyperparameters without complex coding, have been developed. OBJECTIVE To establish AutoDL models for classifying the invasion depth of gastric neoplasms. Additionally, endoscopist-artificial intelligence interactions were explored. METHODS The same 2,899 endoscopic images used to establish the previous model were used. A prospective multicenter validation using 206 and 1,597 novel images was conducted. The primary outcome was external-test accuracy. “Neuro-T,” “Create ML-Image Classifier,” and “AutoML-Vision” were used to establish the models. Three doctors with different levels of endoscopy expertise analyzed each image, first without AutoDL support, then with support from a faulty AutoDL model, and finally with support from the best-performing AutoDL model. RESULTS The Neuro-T-based model reached 89.3% (95% confidence interval: 85.1–93.5%) external-test accuracy. In terms of model establishment time, Create ML-Image Classifier was the fastest at 13 minutes while reaching 82% external-test accuracy. Expert endoscopist decisions were not influenced by AutoDL. The faulty AutoDL misled the endoscopy trainee and the general physician; however, this was corrected with the support of the best-performing AutoDL. The trainee gained the greatest benefit from AutoDL support. CONCLUSIONS AutoDL is deemed useful for the on-site establishment of customized deep-learning models. An inexperienced endoscopist with at least a certain level of expertise can benefit from AutoDL support.
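The commercial tools named above (Neuro-T, Create ML-Image Classifier, AutoML-Vision) automate architecture and hyperparameter search behind proprietary interfaces. As a hedged, open-source analogue of the same idea, not the tools used in the study, the sketch below uses the KerasTuner library to search a small CNN hyperparameter space for a two-class (mucosa-confined vs. submucosa-invaded) classifier; the search space and image size are assumptions.

```python
# Open-source AutoML-style sketch with KerasTuner (analogue only; not the study's tools).
import keras_tuner as kt
import tensorflow as tf
from tensorflow.keras import layers

def build_model(hp):
    model = tf.keras.Sequential()
    model.add(layers.Input(shape=(224, 224, 3)))
    for i in range(hp.Int("n_conv_blocks", 2, 4)):
        model.add(layers.Conv2D(hp.Choice(f"filters_{i}", [32, 64, 128]), 3,
                                activation="relu", padding="same"))
        model.add(layers.MaxPooling2D())
    model.add(layers.Flatten())
    model.add(layers.Dense(hp.Int("dense_units", 64, 256, step=64), activation="relu"))
    model.add(layers.Dense(2, activation="softmax"))  # mucosa-confined vs. submucosa-invaded
    model.compile(optimizer=tf.keras.optimizers.Adam(hp.Choice("lr", [1e-3, 1e-4])),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

tuner = kt.RandomSearch(build_model, objective="val_accuracy", max_trials=10,
                        directory="autodl_demo", project_name="invasion_depth")
# tuner.search(train_ds, validation_data=val_ds, epochs=5)   # labelled endoscopic images
# best_model = tuner.get_best_models(num_models=1)[0]
```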

