The application of artificial intelligence and machine learning to automate Gleason grading: Novel tools to develop next generation risk assessment assays.

2018 ◽  
Vol 36 (6_suppl) ◽  
pp. 170-170 ◽  
Author(s):  
Michael Joseph Donovan ◽  
Richard Scott ◽  
Faisal M. Khan ◽  
Jack Zeineh ◽  
Gerardo Fernandez

Background: Postoperative risk assessment remains an important variable in the treatment of prostate cancer (PCA). Advances in genomic risk classifiers have aided clinical decision-making; however, clinical-pathologic variables such as Gleason grade and pathologic stage remain significant comparators for accurate prognostication. We aimed to standardize the descriptive pathology of PCA through automation of Gleason grading with artificial intelligence and image-analysis feature selection. Methods: Retrospective study using radical prostatectomy (RP) tissue microarrays from Henry Ford Hospital and Roswell Park Cancer Center with 8-year median follow-up. Samples were stained with a multiplex immunofluorescent (MIF) assay (Androgen Receptor (AR), Ki67, Cytokeratin 18, Cytokeratin 5/6, and Alpha-methylacyl-CoA racemase), imaged with a CRI Nuance FX camera, and then analyzed with proprietary software to generate a suite of morphometric attributes that quantitatively characterize the Gleason spectrum. Derived features were univariately correlated with disease progression using the concordance index (CI) along with the hazard ratio and p-value. Results: Starting with a training cohort of 306 patients and a 15% event rate, MIF PCA images were subjected to a machine learning analysis program that incorporates a graph theory-based approach for characterization of gland/ring fusion and fragmentation of tumor architecture (TA) together with biomarker quantitation (BQ) (i.e., AR and Ki67). Nineteen unique image features (7 TA and 12 TA+BQ) were identified. By univariate CI, all TA features were strongly associated with Gleason grading, with CIs reflecting degree of tumor differentiation (CI 0.29-0.33, p = 0.005). Four TA+BQ features were selected in a training risk model and effectively replaced the clinical Gleason features. By comparison, dominant RP Gleason had a CI of 0.31. 
Conclusions: Image-based feature selection guided by principles of machine learning has the potential to automate and replace traditional Gleason grading. Such approaches provide the necessary foundation for next generation risk assessment assays.
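The concordance index used above to correlate image features with disease progression can be sketched as follows. This is a minimal implementation of Harrell's C for right-censored follow-up; the toy cohort values are illustrative and not drawn from the study data:

```python
from itertools import combinations

def concordance_index(times, events, scores):
    """Harrell's concordance index: fraction of comparable pairs in which
    the subject with the higher risk score progresses earlier."""
    concordant = 0.0
    comparable = 0
    for i, j in combinations(range(len(times)), 2):
        # order the pair so that subject a has the shorter follow-up
        a, b = (i, j) if times[i] < times[j] else (j, i)
        if not events[a]:
            continue  # not comparable: the earlier subject was censored
        comparable += 1
        if scores[a] > scores[b]:
            concordant += 1.0
        elif scores[a] == scores[b]:
            concordant += 0.5  # ties in score count one half
    return concordant / comparable

# toy cohort: follow-up (years), progression event flag, image-feature risk score
times  = [2.0, 5.0, 3.5, 8.0, 1.0]
events = [1,   0,   1,   0,   1]
scores = [0.9, 0.2, 0.6, 0.1, 0.8]
print(round(concordance_index(times, events, scores), 3))
```

A CI of 0.5 corresponds to chance; values below 0.5, such as the 0.29-0.33 range reported in the abstract, indicate an inverse association between the raw feature value and time to progression.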

2020 ◽  
pp. 97-102
Author(s):  
Benjamin Wiggins

Can risk assessment be made fair? The conclusion of Calculating Race returns to actuarial science’s foundations in probability. The roots of probability rest in a pair of problems posed to Blaise Pascal and Pierre de Fermat in the summer of 1654: “the Dice Problem” and “the Division Problem.” From their very foundation, the mathematics of probability offered the potential not only to be used to gain an advantage (as in the case of the Dice Problem), but also to divide material fairly (as in the case of the Division Problem). As the United States and the world enter an age driven by Big Data, algorithms, artificial intelligence, and machine learning and characterized by an actuarialization of everything, we must remember that risk assessment need not be put to use for individual, corporate, or government advantage but, rather, that it has always been capable of guiding how to distribute risk equitably instead.
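The Division Problem has a concrete computational form: the stakes are split in proportion to each player's probability of winning had the fair game continued. A minimal sketch of the Pascal-Fermat solution:

```python
from math import comb

def fair_split(rounds_a, rounds_b):
    """Pascal-Fermat solution to the Division Problem: player A needs
    rounds_a more wins, player B needs rounds_b more, each round is a
    fair coin toss. Return each player's fair share of the stakes."""
    n = rounds_a + rounds_b - 1      # at most n more tosses decide the game
    total = 2 ** n
    # A wins the interrupted game iff A takes at least rounds_a of the n tosses
    a_wins = sum(comb(n, k) for k in range(rounds_a, n + 1))
    return a_wins / total, (total - a_wins) / total

# the classic case: A needs 2 more points, B needs 3
print(fair_split(2, 3))  # A is owed 11/16 of the pot, B 5/16
```

For the r = 2, s = 3 case this reproduces the 11/16 to 5/16 split of the historical solution, which is the sense in which probability, from its foundation, could divide material fairly.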


Author(s):  
Vineet Talwar ◽  
Kundan Singh Chufal ◽  
Srujana Joga

Abstract Artificial intelligence (AI) has become an essential tool in human life because of its pivotal role in communications, transportation, media, and social networking. Inspired by the complex neuronal network and its functions in human beings, AI, using computer-based algorithms and training, has been explored since the 1950s. Given the enormous amount of patients' clinical, imaging, and histopathological data, the increasing pace of research on new treatments and clinical trials, and ever-changing treatment guidelines driven by novel drugs and evidence, AI is the need of the hour. There are numerous publications and active work on AI's role in the field of oncology. In this review, we discuss the fundamental terminology of AI, its applications in oncology as a whole, and its limitations. AI, machine learning, and deep learning are inter-related: the virtual branch of AI deals with machine learning, while the physical branch deals with the delivery of different forms of treatment, such as surgery, targeted drug delivery, and elderly care. The applications of AI in oncology include cancer screening, diagnosis (clinical, imaging, and histopathological), radiation therapy (image acquisition, tumor and organs-at-risk segmentation, image registration, planning, and delivery), prediction of treatment outcomes and toxicities, prediction of cancer cell sensitivity to therapeutics, and clinical decision-making. A specific area of interest is the development of effective drug combinations tailored to every patient and tumor with the help of AI. Radiomics, the new kid on the block, mines quantitative features from medical images to inform the planning and administration of radiotherapy. As with any new invention, AI has its pitfalls. The limitations include lack of external validation and proof of generalizability, difficulty in data access for rare diseases, ethical and legal issues, no precise logic behind the prediction, and, last but not least, lack of education and expertise among medical professionals. A collaboration between departments of clinical oncology, bioinformatics, and data sciences can help overcome these problems in the near future.


2019 ◽  
Author(s):  
Yizhao Ni ◽  
Drew Barzman ◽  
Alycia Bachtel ◽  
Marcus Griffey ◽  
Alexander Osborn ◽  
...  

BACKGROUND School violence has a far-reaching effect, impacting the entire school population, including staff, students, and their families. Among youth attending the most violent schools, studies have reported higher dropout rates, poor school attendance, and poor scholastic achievement. It has been noted that the largest crime-prevention results occur when youth at elevated risk are given an individualized prevention program. However, much work is needed to establish an effective approach to identifying at-risk subjects. OBJECTIVE In our earlier research, we developed a standardized risk assessment program to interview subjects, identify risk and protective factors, and evaluate risk for school violence. This study focused on developing natural language processing (NLP) and machine learning technologies to automate the risk assessment process. METHODS We prospectively recruited 131 students with behavioral concerns from 89 schools between 05/01/2015 and 04/30/2018. The subjects were interviewed with three innovative risk assessment scales, and their risk of violence was determined by pediatric psychiatrists based on clinical judgment. Leveraging NLP technologies, different types of linguistic features were extracted from the interview content. Machine learning classifiers were then applied to predict the risk of school violence for individual subjects. A two-stage feature selection was implemented to identify violence-related predictors. Performance was validated against the psychiatrist-generated reference standard of risk levels, assessing positive predictive value (PPV), sensitivity (SEN), negative predictive value (NPV), specificity (SPEC), and area under the ROC curve (AUC). RESULTS Compared to subjects' demographics and socioeconomic information, use of linguistic features significantly improved the classifiers' predictive performance (P<0.01). 
The best-performing classifier, with n-gram features, achieved 86.5%/86.5%/85.7%/85.7%/94.0% (PPV/SEN/NPV/SPEC/AUC) on the cross-validation set and 83.3%/93.8%/91.7%/78.6%/94.6% (PPV/SEN/NPV/SPEC/AUC) on the test data. The feature selection process identified a set of predictors covering the discussion of subjects' thoughts, perspectives, behaviors, individual characteristics, peer and family dynamics, and protective factors. CONCLUSIONS By analyzing the content of subject interviews, the NLP and machine learning algorithms showed good capacity for detecting the risk of school violence. The feature selection uncovered multiple warning markers that could deliver useful clinical insights to assist in personalizing interventions. Consequently, the developed approach offers the promise of an end-to-end computerized screening service for preventing school violence.
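The n-gram features behind the best-performing classifier can be illustrated with a minimal bag-of-n-grams extractor; the example sentence below is invented for illustration and is not drawn from the interview data:

```python
from collections import Counter

def ngram_features(text, n_max=2):
    """Bag-of-n-grams feature vector (unigrams and bigrams by default),
    the kind of linguistic feature fed to the classifiers."""
    tokens = text.lower().split()
    feats = Counter()
    for n in range(1, n_max + 1):
        for i in range(len(tokens) - n + 1):
            feats[" ".join(tokens[i:i + n])] += 1
    return feats

feats = ngram_features("I feel angry at school")
print(feats["feel angry"])  # the bigram occurs once in this toy sentence
```

In practice such counts would be assembled into a sparse document-term matrix and passed through the two-stage feature selection before classification.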


2020 ◽  
Vol 9 (1) ◽  
pp. 248 ◽  
Author(s):  
Mariana Chumbita ◽  
Catia Cillóniz ◽  
Pedro Puerta-Alcalde ◽  
Estela Moreno-García ◽  
Gemma Sanjuan ◽  
...  

The use of artificial intelligence (AI) to support clinical medical decisions is a promising concept. Two important factors have driven these advances: the availability of data from electronic health records (EHR) and progress in computational performance. These two concepts are interrelated with respect to complex mathematical methods such as machine learning (ML) and neural networks (NN). Indeed, some published articles have already demonstrated the potential of these approaches in medicine. In the diagnosis and management of pneumonia, the use of AI with chest X-ray (CXR) images has primarily been associated with earlier diagnosis, prompt antimicrobial therapy, and, ultimately, better prognosis. Coupled with this is growing research on empirical therapy and mortality prediction. By maximizing the power of NN, the majority of studies have reported high accuracy rates in their predictions. Because AI can handle large amounts of data and execute methods such as machine learning and neural networks, it can be revolutionary in supporting clinical decision-making processes. In this review, we describe and discuss the most relevant studies of AI in pneumonia.


2021 ◽  
Vol 55 (1) ◽  
pp. 61-67
Author(s):  
Benjamin Bowman ◽  
H. Howie Huang

Cybersecurity professionals are inundated with large amounts of data and require intelligent algorithms capable of distinguishing vulnerable from patched, normal from anomalous, and malicious from benign. Unfortunately, not all machine learning (ML) and artificial intelligence (AI) algorithms are created equal, and in this position paper we posit that a new breed of ML, specifically graph-based machine learning (Graph AI), is poised to make a significant impact in this domain. We will discuss the primary differentiators between traditional ML and graph ML, and justify why the latter is well suited to many aspects of cybersecurity. We will present several example applications and results of graph ML in cybersecurity, followed by a discussion of the challenges that lie ahead.
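A toy example of the relational structure graph ML exploits and flat feature vectors discard: representing authentication events as a user-host graph and reading a crude anomaly signal off its degrees. The events, usernames, and threshold below are all invented for illustration:

```python
from collections import defaultdict

# Toy authentication graph: edges are (user, host) logon events.
events = [
    ("alice", "web01"), ("alice", "web02"),
    ("bob", "web01"), ("bob", "db01"),
    ("mallory", "web01"), ("mallory", "web02"),
    ("mallory", "db01"), ("mallory", "dc01"),
]

graph = defaultdict(set)
for user, host in events:
    graph[user].add(host)

# A crude structural signal: a user touching far more distinct hosts than
# the cohort average is a candidate for lateral-movement review.
degrees = {u: len(hosts) for u, hosts in graph.items()}
mean_deg = sum(degrees.values()) / len(degrees)
flagged = sorted(u for u, d in degrees.items() if d - mean_deg > 1)
print(flagged)
```

Real graph ML goes far beyond degree counts (embeddings, message passing over the whole topology), but the starting point is the same: keep the relationships, not just per-entity feature vectors.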


2021 ◽  
Vol 29 (Supplement_1) ◽  
pp. i18-i18
Author(s):  
N Hassan ◽  
R Slight ◽  
D Weiand ◽  
A Vellinga ◽  
G Morgan ◽  
...  

Abstract Introduction Sepsis is a life-threatening condition associated with increased mortality. Artificial intelligence tools can inform clinical decision-making by flagging patients who may be at risk of developing infection and subsequent sepsis, and can assist clinicians with their care management. Aim To identify the optimal set of predictors used to train machine learning algorithms to predict the likelihood of infection and subsequent sepsis and inform clinical decision-making. Methods This systematic review was registered in the PROSPERO database (CRD42020158685). We searched three large databases (Medline, the Cumulative Index of Nursing and Allied Health Literature, and Embase) using appropriate search terms. We included quantitative primary research studies that focused on sepsis prediction associated with bacterial infection in the adult population (>18 years) in all care settings and that included data on the predictors used to develop machine learning algorithms. The timeframe of the search was 1 January 2000 to 25 November 2019. Data extraction was performed using a data extraction sheet, and a narrative synthesis of eligible studies was undertaken: narrative analysis was used to arrange the data into key areas and to compare and contrast the content of the included studies. Quality assessment was performed using the Newcastle-Ottawa Quality Assessment scale for non-randomised studies. Bias was not assessed due to the non-randomised nature of the included studies. Results Fifteen articles met our inclusion criteria (Figure 1). We identified 194 predictors that were used to train machine learning algorithms to predict infection and subsequent sepsis, with 13 predictors used on average across the included studies. 
The most significant predictors included age, gender, smoking, alcohol intake, heart rate, blood pressure, lactate level, cardiovascular disease, endocrine disease, cancer, chronic kidney disease (eGFR <60 mL/min), white blood cell count, liver dysfunction, surgical approach (open or minimally invasive), and pre-operative haematocrit <30%. These predictors were used in the development of all the algorithms in the fifteen articles. All included studies used artificial intelligence techniques to predict the likelihood of sepsis, with an average sensitivity of 77.5 ± 19.27% and an average specificity of 69.45 ± 21.25%. Conclusion The type of predictors used was found to influence the predictive power and predictive timeframe of the developed machine learning algorithms. Two strengths of our review were that we included studies published since the first definition of sepsis was published in 2001, and that we identified factors that can improve the predictive ability of algorithms. However, we note that the included studies had some limitations: three studies did not validate the models they developed, and many tools were limited by reduced specificity, sensitivity, or both. This work has important implications for practice, as predicting the likelihood of sepsis can help inform the management of patients and concentrate finite resources on those patients who are most at risk. Producing a set of predictors can also guide future studies in developing more sensitive and specific algorithms with an increased predictive time window to allow for preventive clinical measures.
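The sensitivity and specificity the review averages across studies come directly from a confusion matrix; a minimal sketch, using a hypothetical alert matrix chosen only to illustrate the arithmetic (not data from any included study):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical sepsis-alert confusion matrix:
# 31 true alerts, 9 missed cases, 70 correct non-alerts, 30 false alarms
sensitivity, specificity = sens_spec(tp=31, fn=9, tn=70, fp=30)
print(round(100 * sensitivity, 1), round(100 * specificity, 1))  # 77.5 70.0
```

The trade-off the review notes (tools limited by reduced sensitivity, specificity, or both) is visible here: lowering the alert threshold raises TP at the cost of more FP, moving one number up and the other down.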


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Ashish Goyal ◽  
Maheshwar Kuchana ◽  
Kameswari Prasada Rao Ayyagari

Abstract In-vitro fertilization (IVF) is a popular method of resolving complications such as endometriosis, poor egg quality, a genetic disease of the mother or father, problems with ovulation, antibody problems that harm sperm or eggs, the inability of sperm to penetrate or survive in the cervical mucus, and low sperm counts, all of which result in human infertility. Nevertheless, IVF does not guarantee successful fertilization, and choosing it is burdensome because of its high cost and uncertain outcome. As the complications and fertilization factors in the IVF process are numerous, it is a cumbersome task for fertility doctors to give an accurate prediction of a successful birth. Artificial intelligence (AI) has been employed in this study to predict live-birth occurrence. This work focuses on making predictions of live-birth occurrence when an embryo forms from a couple and not a donor. We compare various AI algorithms, including classical machine learning, deep learning architectures, and an ensemble of algorithms, on the publicly available dataset provided by the Human Fertilisation and Embryology Authority (HFEA). Insights on the data and metrics such as confusion matrices, F1-score, precision, recall, and receiver operating characteristic (ROC) curves are presented in the subsequent sections. The training process has two settings, without feature selection and with feature selection, for the classifier models; machine learning, deep learning, and ensemble classification paradigms were trained in both settings. The Random Forest model achieves the highest F1-score, 76.49%, in the without-feature-selection setting; for the same model, the precision, recall, and area under the ROC curve (ROC AUC) scores are 77%, 76%, and 84.60%, respectively. The success of a pregnancy depends on both male and female traits and living conditions. This study predicts a successful pregnancy from the clinically relevant parameters in in-vitro fertilization. 
Thus, artificial intelligence plays a promising role in the decision-making process, supporting diagnosis, prognosis, and treatment.
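The ROC AUC reported for the Random Forest model has a simple rank-based interpretation: the probability that a randomly chosen positive case outscores a randomly chosen negative one. A minimal sketch on invented toy predictions (not HFEA data):

```python
def roc_auc(labels, scores):
    """Rank-based ROC AUC: probability that a random positive outscores
    a random negative, counting ties as one half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# toy live-birth predictions: 1 = live birth, score = model probability
labels = [1, 0, 1, 0, 0, 1]
scores = [0.8, 0.3, 0.6, 0.6, 0.2, 0.9]
print(roc_auc(labels, scores))
```

Unlike F1, precision, and recall, this metric is threshold-free, which is why it is reported alongside them for the classifier comparison.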


Database ◽  
2020 ◽  
Vol 2020 ◽  
Author(s):  
Zeeshan Ahmed ◽  
Khalid Mohamed ◽  
Saman Zeeshan ◽  
XinQi Dong

Abstract Precision medicine is one of the most recent and powerful developments in medical care, with the potential to improve the traditional symptom-driven practice of medicine by allowing earlier interventions using advanced diagnostics and tailoring better, economically personalized treatments. Identifying the best pathway to personalized and population medicine requires the ability to analyze comprehensive patient information, together with broader aspects, to monitor and distinguish between sick and relatively healthy people; this will lead to a better understanding of the biological indicators that can signal shifts in health. While the complexity of disease at the individual level has made it difficult to utilize healthcare information in clinical decision-making, some of the existing constraints have been greatly reduced by technological advancements. To implement effective precision medicine with an enhanced ability to positively impact patient outcomes and provide real-time decision support, it is important to harness the power of electronic health records by integrating disparate data sources and discovering patient-specific patterns of disease progression. Useful analytic tools, technologies, databases, and approaches are required to augment the networking and interoperability of clinical, laboratory, and public health systems, and to address, with an effective balance, the ethical and social issues related to the privacy and protection of healthcare data. Developing multifunctional machine learning platforms for clinical data extraction, aggregation, management, and analysis can support clinicians by efficiently stratifying subjects to understand specific scenarios and optimize decision-making. The implementation of artificial intelligence in healthcare is a compelling vision with the potential to lead to significant improvements in achieving the goals of providing real-time, better personalized and population medicine at lower cost. 
In this study, we focused on analyzing and discussing various published artificial intelligence and machine learning solutions, approaches, and perspectives, aiming to advance academic solutions in paving the way for a new data-centric era of discovery in healthcare.


2021 ◽  
Vol 9 (3) ◽  
pp. 39
Author(s):  
David Mhlanga

In banking and finance, credit risk is among the most important topics because issuing a loan requires careful assessment of the likelihood that the loaned money will be repaid. At the same time, in emerging markets, underbanked individuals cannot access the traditional forms of collateral or identification that financial institutions require before granting loans. Using a literature review approach based on documentary and conceptual analysis to investigate the impact of machine learning and artificial intelligence on credit risk assessment, this study found that artificial intelligence and machine learning have a strong impact on credit risk assessment through alternative data sources, such as public data, that address the problems of information asymmetry, adverse selection, and moral hazard. This allows lenders to perform rigorous credit risk analysis, to assess the behaviour of the customer, and subsequently to verify clients' ability to repay their loans, permitting less privileged people to access credit. This study therefore recommends that financial institutions such as banks and credit-lending institutions invest more in artificial intelligence and machine learning to ensure that financially excluded households can obtain credit.
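A credit scorer built on alternative data can be sketched as a minimal logistic regression; the features, values, and labels below are entirely hypothetical stand-ins for the alternative data sources the study describes, and the training loop is a bare-bones gradient descent, not a production scorecard:

```python
from math import exp

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

def train_logreg(X, y, lr=0.5, epochs=2000):
    """Minimal logistic-regression credit scorer trained by stochastic
    gradient descent on toy alternative-data features."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of log-loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# hypothetical features: [on-time utility payments, mobile-money activity], scaled 0-1
X = [[0.9, 0.8], [0.8, 0.9], [0.2, 0.1], [0.1, 0.3]]
y = [1, 1, 0, 0]  # 1 = repaid, 0 = defaulted
w, b = train_logreg(X, y)
score = sigmoid(sum(wj * xj for wj, xj in zip(w, [0.85, 0.7])) + b)
print(score > 0.5)  # high-engagement applicant scores as likely to repay
```

The point of the sketch is the mechanism, not the model class: behavioural signals that underbanked applicants do generate (payments, mobile-money activity) substitute for the collateral and identification they lack.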

