A Review of Computer-Aided Expert Systems for Breast Cancer Diagnosis

Cancers ◽  
2021 ◽  
Vol 13 (11) ◽  
pp. 2764
Author(s):  
Xin Yu Liew ◽  
Nazia Hameed ◽  
Jeremie Clos

A computer-aided diagnosis (CAD) expert system is a powerful tool for efficiently assisting a pathologist in reaching an early diagnosis of breast cancer. Such a system identifies the presence of cancer in breast tissue samples and distinguishes between cancer stages. A standard CAD pipeline comprises image pre-processing, segmentation, feature extraction, feature selection, classification, and performance evaluation. In this review, we survey the state-of-the-art machine learning approaches applied at each stage, covering both conventional and deep learning methods, compare the methods against one another, and provide technical details along with their advantages and disadvantages. Our aims are to investigate the impact of CAD systems that use histopathology images, to identify where deep learning methods outperform conventional ones, and to provide a summary that future researchers can use to analyse and improve existing techniques. Lastly, we discuss the research gaps in existing machine learning approaches and propose guidelines for future work.
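The pipeline stages listed in the abstract can be sketched end to end. The following is a minimal illustration on synthetic feature vectors standing in for features extracted from segmented histopathology patches; the dataset, feature counts, and model choices are all assumptions, not code from any reviewed system.

```python
# Minimal sketch of the standard CAD stages: (simulated) feature extraction,
# feature selection, classification, and performance evaluation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Stand-in for features extracted from pre-processed, segmented tissue images.
X, y = make_classification(n_samples=400, n_features=30, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

selector = SelectKBest(f_classif, k=10).fit(X_tr, y_tr)       # feature selection
clf = RandomForestClassifier(random_state=0).fit(selector.transform(X_tr), y_tr)

acc = accuracy_score(y_te, clf.predict(selector.transform(X_te)))
print(f"held-out accuracy: {acc:.2f}")
```

In a real CAD system the synthetic features would be replaced by descriptors (or learned embeddings) computed from the segmented image regions, but the selection/classification/evaluation scaffolding is the same.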

Diagnostics ◽  
2021 ◽  
Vol 11 (4) ◽  
pp. 694
Author(s):  
Xuejiao Pang ◽  
Zijian Zhao ◽  
Ying Weng

At present, artificial intelligence (AI) based on deep learning has become more widely applicable to clinical practice in the medical field than traditional machine learning. Applying traditional machine learning to clinical practice is very challenging because medical data often lack well-defined characteristic features. Deep learning methods, by contrast, can use their self-learning ability and ample computing power to learn intricate and abstract features. They are therefore promising for the classification and detection of lesions in gastrointestinal endoscopy through a computer-aided diagnosis (CAD) system based on deep learning. This study reviews the research development of deep-learning-based CAD systems that assist doctors in classifying and detecting lesions in the stomach, intestines, and esophagus. It also summarizes the limitations of current methods and presents prospects for future research.


2019 ◽  
Vol 21 (5) ◽  
pp. 1609-1627 ◽  
Author(s):  
Tianlin Zhang ◽  
Jiaxu Leng ◽  
Ying Liu

Abstract
Drug–drug interactions (DDIs) are crucial for drug research and pharmacovigilance. These interactions may cause adverse drug effects that threaten public health and patient safety. The extraction of DDIs from the biomedical literature has therefore been widely studied and emphasized in modern biomedical research. Earlier rule-based and machine learning approaches rely on tedious feature engineering, which is laborious, time-consuming and unsatisfactory. With the development of deep learning technologies, this problem is alleviated by learning feature representations automatically. Here, we review recent deep learning methods that have been applied to the extraction of DDIs from the biomedical literature. We describe each method briefly and systematically compare its performance on the DDI corpus. Next, we summarize the advantages and disadvantages of these deep learning models for this task. Furthermore, we discuss some challenges and future perspectives of DDI extraction via deep learning methods. This review aims to serve as a useful guide for interested researchers seeking to further advance bioinformatics algorithms for DDI extraction from the literature.


AI Magazine ◽  
2022 ◽  
Vol 42 (3) ◽  
pp. 7-18
Author(s):  
Harald Steck ◽  
Linas Baltrunas ◽  
Ehtsham Elahi ◽  
Dawen Liang ◽  
Yves Raimond ◽  
...  

Deep learning has profoundly impacted many areas of machine learning. However, it took a while for its impact to be felt in the field of recommender systems. In this article, we outline some of the challenges encountered and lessons learned in using deep learning for recommender systems at Netflix. We first provide an overview of the various recommendation tasks on the Netflix service. We found that different model architectures excel at different tasks. Even though many deep-learning models can be understood as extensions of existing (simple) recommendation algorithms, we initially did not observe significant improvements in performance over well-tuned non-deep-learning approaches. Only when we added numerous features of heterogeneous types to the input data did deep-learning models start to shine in our setting. We also observed that deep-learning methods can exacerbate the problem of offline–online metric (mis-)alignment. After addressing these challenges, deep learning has ultimately resulted in large improvements to our recommendations as measured by both offline and online metrics. On the practical side, integrating deep-learning toolboxes in our system has made it faster and easier to implement and experiment with both deep-learning and non-deep-learning approaches for various recommendation tasks. We conclude this article by summarizing our take-aways that may generalize to other applications beyond Netflix.


The increasing occurrence of brain diseases and the need for early diagnosis of ailments such as tumors, Alzheimer's, epilepsy and Parkinson's has riveted the attention of researchers. Machine learning practices, specifically deep learning, are considered a beneficial diagnostic tool. Deep learning approaches to neuroimaging can assist the computer-aided analysis of neurological diseases, and feature extraction from neuroimages using artificial neural networks leads to better diagnoses. In this study, these brain diseases are revisited to consolidate the methodologies reported by various authors in the literature.


Sensors ◽  
2021 ◽  
Vol 21 (22) ◽  
pp. 7527
Author(s):  
Mugdim Bublin

Distributed Acoustic Sensing (DAS) is a promising new technology for pipeline monitoring and protection. However, a big challenge is distinguishing relevant events, such as intrusion by an excavator near the pipeline, from benign interference, such as passing land machinery. This paper investigates whether adequate detection accuracy can be achieved with classic machine learning algorithms, using simulations and a real system implementation. We then compare classical machine learning with a deep learning approach and analyze the advantages and disadvantages of both. Although acceptable performance can be achieved with either, preliminary results show that deep learning is the more promising approach: it eliminates the need for laborious feature extraction and offers a six-times-lower event detection delay and a twelve-times-lower execution time. However, we achieved the best results by combining deep learning with knowledge-based and classical machine learning approaches. We conclude by proposing general guidelines for efficient system design that combines knowledge-based, classical machine learning, and deep learning approaches.
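The classical machine learning path described above (hand-crafted features, then a standard classifier) can be illustrated on synthetic 1D acoustic traces. The signal model, the burst shape, and the two features below are assumptions chosen for demonstration only; they are not the paper's actual DAS pipeline.

```python
# Illustrative sketch of feature-based event detection: compute simple
# features (RMS energy, peak amplitude) from synthetic traces, then classify
# "event" bursts versus background noise with a random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def make_trace(event: bool, n: int = 256) -> np.ndarray:
    trace = rng.normal(0.0, 1.0, n)               # background noise
    if event:
        trace[100:140] += rng.normal(0.0, 4.0, 40)  # excavator-like burst
    return trace

def features(trace: np.ndarray) -> list:
    return [np.sqrt(np.mean(trace ** 2)), np.max(np.abs(trace))]

X = np.array([features(make_trace(i % 2 == 0)) for i in range(200)])
y = np.array([i % 2 == 0 for i in range(200)], dtype=int)

scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

The deep learning alternative the abstract refers to would consume the raw traces directly, trading this manual feature step for a learned representation.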


2017 ◽  
Author(s):  
Fadhl M Alakwaa ◽  
Kumardeep Chaudhary ◽  
Lana X Garmire

Abstract
Metabolomics holds promise as a new technology for diagnosing highly heterogeneous diseases. Conventionally, metabolomics data analysis for diagnosis uses various statistical and machine-learning-based classification methods. However, it remains unknown whether deep neural networks, a class of increasingly popular machine learning methods, are suitable for classifying metabolomics data. Here we use a cohort of 271 breast cancer tissues, 204 estrogen receptor positive (ER+) and 67 estrogen receptor negative (ER-), to test the accuracy of an autoencoder, a deep learning (DL) framework, against six widely used machine learning models: Random Forest (RF), Support Vector Machines (SVM), Recursive Partitioning and Regression Trees (RPART), Linear Discriminant Analysis (LDA), Prediction Analysis for Microarrays (PAM), and Generalized Boosted Models (GBM). The DL framework achieves the highest area under the curve (AUC) of 0.93 in classifying ER+/ER- patients, compared to the other six machine learning algorithms. Furthermore, biological interpretation of the first hidden layer reveals eight commonly enriched significant metabolomics pathways (adjusted P-value < 0.05) that cannot be discovered by the other machine learning methods. Among them, the protein digestion and absorption and ATP-binding cassette (ABC) transporter pathways are also confirmed by integrated analysis of metabolomics and gene expression data in these samples. In summary, the deep learning method shows advantages for metabolomics-based breast cancer ER status classification, with both the highest prediction accuracy (AUC = 0.93) and better revelation of disease biology. We encourage the adoption of autoencoder-based deep learning methods in the metabolomics research community for classification.
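A rough sketch of the idea (autoencoder features feeding a classifier) can be written with scikit-learn on synthetic "metabolite" data. The cohort size and class imbalance below mimic the abstract's numbers, but the data, layer size, and classifier are illustrative assumptions, not the paper's actual DL framework.

```python
# Train a small autoencoder (input reconstructed through a narrow hidden
# layer), read off the hidden-layer representation, and classify ER status
# from it with logistic regression.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=271, n_features=50, n_informative=10,
                           weights=[0.75, 0.25], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Autoencoder: regress the input onto itself through 8 hidden units.
ae = MLPRegressor(hidden_layer_sizes=(8,), activation="relu", max_iter=2000,
                  random_state=0).fit(X_tr, X_tr)

def encode(X: np.ndarray) -> np.ndarray:
    # Hidden-layer activations = ReLU(X W1 + b1), from the fitted weights.
    return np.maximum(0, X @ ae.coefs_[0] + ae.intercepts_[0])

clf = LogisticRegression(max_iter=1000).fit(encode(X_tr), y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(encode(X_te))[:, 1])
print(f"test AUC: {auc:.2f}")
```

Inspecting the hidden layer (here, `ae.coefs_[0]`) is also what makes the pathway-level interpretation in the abstract possible: each hidden unit is a weighted combination of input metabolites.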


2021 ◽  
Author(s):  
Thiago Abdo ◽  
Fabiano Silva

The purpose of this paper is to analyze different machine learning approaches and algorithms to be integrated as automated assistance in a tool that aids the creation of new annotated datasets. We evaluate how they scale in an environment without dedicated machine learning hardware. In particular, we study their impact on a dataset with few examples and on one that is still being constructed. We experiment with a deep learning algorithm (BERT) and with classical learning algorithms of lower computational cost (W2V and GloVe combined with RF and SVM). Our experiments show that deep learning algorithms have a performance advantage over classical techniques. However, their high computational cost makes them inadequate for an environment with reduced hardware resources. We conduct simulations that use active and iterative machine learning techniques to assist the creation of new datasets, relying on the classical learning algorithms because of their lower computational cost. The knowledge gathered in our experimental evaluation aims to support the creation of a tool for building new text datasets.
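The pool-based active learning loop that such simulations rely on is simple to sketch: train a cheap model on the labeled set, query the pool example it is least certain about, "annotate" it, and repeat. Here a linear-kernel-free SVC on synthetic vectors stands in for the W2V/GloVe text features; everything below is an illustrative assumption.

```python
# Uncertainty-sampling active learning with a low-cost classical model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Seed set: five examples of each class, so the first model can be trained.
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
pool = [i for i in range(len(X)) if i not in labeled]

for _ in range(20):                       # 20 annotation rounds
    clf = SVC(probability=True, random_state=0).fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[pool])
    # Query the pool example whose predicted probability is closest to 0.5.
    query = pool[int(np.argmin(np.abs(proba[:, 1] - 0.5)))]
    labeled.append(query)                 # a human would label it here
    pool.remove(query)

print(f"labeled: {len(labeled)}, pool: {len(pool)}")
```

An iterative (rather than active) variant would instead add model-confident examples with their predicted labels; only the query rule changes.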


Author(s):  
Yue You ◽  
Svetlana V. Doubova ◽  
Diana Pinto-Masis ◽  
Ricardo Pérez-Cuevas ◽  
Víctor Hugo Borja-Aburto ◽  
...  

Abstract
Background: The study aimed to assess the performance of a multidisciplinary-team diabetes care program called DIABETIMSS on glycemic control of type 2 diabetes (T2D) patients, using available observational patient data and machine-learning-based targeted learning methods.
Methods: We analyzed electronic health records and laboratory databases from 2012 to 2016 for T2D patients from six family medicine clinics (FMCs) delivering the DIABETIMSS program and five FMCs providing routine care. All FMCs belong to the Mexican Institute of Social Security and are located in Mexico City and the State of Mexico. The primary outcome was glycemic control. The study covariates included patient sex, age, anthropometric data, history of glycemic control, diabetic complications and comorbidity. We measured the effects of the DIABETIMSS program through (1) simple unadjusted mean differences, (2) adjustment via standard logistic regression and (3) adjustment via targeted machine learning. We treated the data as a serial cross-sectional study, conducted a standard principal components analysis to explore the distribution of covariates among clinics, and fit a regression tree on the transformed data, using the prediction model to identify patient sub-groups in whom the program was most successful. To explore the robustness of the machine learning approaches, we conducted a set of simulations and a sensitivity analysis with process-of-care indicators as possible confounders.
Results: The study included 78,894 T2D patients, of whom 37,767 received care through DIABETIMSS. The impact of DIABETIMSS ranged among clinics from 2 to 8% improvement in glycemic control, with an overall (pooled) estimate of 5% improvement. T2D patients with fewer complications benefited more from DIABETIMSS than those with more complications. At the FMCs delivering the conventional model, the predicted impacts were similar to those observed empirically in the DIABETIMSS clinics. The sensitivity analysis did not change the overall average estimate across clinics.
Conclusions: The DIABETIMSS program produced a small but significant increase in glycemic control. The machine learning methods yield both population-level effects and pinpoint the sub-groups of patients who benefit most from the program. These methods exploit the potential of routine observational patient data within complex healthcare systems to inform decision-makers.
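The second of the three analyses in the abstract, a covariate-adjusted program effect via logistic regression, can be sketched with standardization (G-computation): fit an outcome model, predict for every patient under both program assignments, and average the difference. The simulated data, variable names, and effect sizes below are assumptions for illustration; this is not the study's targeted learning estimator, which additionally applies a targeting step.

```python
# Adjusted risk difference for a binary "glycemic control" outcome under
# non-random program assignment, confounded by age.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
age = rng.normal(55, 10, n)                                    # confounder
program = rng.binomial(1, 1 / (1 + np.exp(-(age - 55) / 10)))  # assignment
logit = -1.0 + 0.25 * program - 0.03 * (age - 55)              # true model
control = rng.binomial(1, 1 / (1 + np.exp(-logit)))            # outcome

model = LogisticRegression(max_iter=1000).fit(
    np.column_stack([program, age]), control)

# Standardize: predict for everyone under program=1 and program=0, average.
p1 = model.predict_proba(np.column_stack([np.ones(n), age]))[:, 1].mean()
p0 = model.predict_proba(np.column_stack([np.zeros(n), age]))[:, 1].mean()
print(f"adjusted risk difference: {p1 - p0:.3f}")
```

A naive unadjusted mean difference on these data would mix the program effect with the age confounding, which is the contrast the study's three-tier analysis is designed to expose.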


2020 ◽  
Vol 10 (18) ◽  
pp. 6580 ◽  
Author(s):  
Alket Cecaj ◽  
Marco Lippi ◽  
Marco Mamei ◽  
Franco Zambonelli

Accurately forecasting how crowds of people are distributed in urban areas during daily activities is of key importance for the smart city vision and related applications. In this work we forecast the crowd density and distribution in an urban area by analyzing an aggregated mobile phone dataset. By comparing the forecasting performance of statistical and deep learning methods on the aggregated mobile data we show that each class of methods has its advantages and disadvantages depending on the forecasting scenario. However, for our time-series forecasting problem, deep learning methods are preferable when it comes to simplicity and immediacy of use, since they do not require a time-consuming model selection for each different cell. Deep learning approaches are also appropriate when aiming to reduce the maximum forecasting error. Statistical methods instead show their superiority in providing more precise forecasting results, but they require data domain knowledge and computationally expensive techniques in order to select the best parameters.
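The statistical-versus-learned comparison described above can be miniaturized for a single synthetic "cell" with a daily pattern: a seasonal-naive baseline against a learned autoregression on lagged values (a linear stand-in for a deep model). The series, horizon, and lag count are assumptions for illustration only.

```python
# Compare a seasonal-naive forecast with a learned lag-based model on one
# synthetic hourly series with a 24-hour cycle.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
t = np.arange(24 * 30)                                 # 30 days, hourly
series = 100 + 30 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 5, len(t))
train, test = series[:-24], series[-24:]

# Statistical baseline: seasonal naive (repeat the last observed day).
naive_pred = train[-24:]

# Learned model: one-step prediction from the previous 24 hours, rolled
# forward over the 24-hour horizon.
lags = 24
Xl = np.array([train[i:i + lags] for i in range(len(train) - lags)])
ar = LinearRegression().fit(Xl, train[lags:])

history = list(train[-lags:])
ar_pred = []
for _ in range(24):
    nxt = float(ar.predict(np.array(history[-lags:]).reshape(1, -1))[0])
    ar_pred.append(nxt)
    history.append(nxt)

mae = lambda p: float(np.mean(np.abs(np.array(p) - test)))
print(f"naive MAE: {mae(naive_pred):.2f}, AR MAE: {mae(ar_pred):.2f}")
```

The trade-off the abstract describes shows up here in miniature: the naive baseline needs no fitting at all, while the learned model must be trained, and in a city-scale grid would need per-cell model selection unless, as with the deep methods discussed, one model serves all cells.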

