Artificial Intelligence: A New Tool in Oncologist's Armamentarium

Author(s):  
Vineet Talwar ◽  
Kundan Singh Chufal ◽  
Srujana Joga

Abstract Artificial intelligence (AI) has become an essential tool in human life because of its pivotal role in communications, transportation, media, and social networking. Inspired by the complex neuronal networks of the human brain, AI, using computer-based algorithms and training, has been explored since the 1950s. Given the enormous volume of patients' clinical, imaging, and histopathological data, the increasing pace of research on new treatments and clinical trials, and ever-changing treatment guidelines driven by novel drugs and evidence, AI is the need of the hour. There are numerous publications and much active work on AI's role in the field of oncology. In this review, we discuss the fundamental terminology of AI, its applications in oncology as a whole, and its limitations. AI, machine learning, and deep learning are inter-related: the virtual branch of AI deals with machine learning, while the physical branch deals with the delivery of different forms of treatment, such as surgery, targeted drug delivery, and elderly care. The applications of AI in oncology include cancer screening; diagnosis (clinical, imaging, and histopathological); radiation therapy (image acquisition, tumor and organ-at-risk segmentation, image registration, planning, and delivery); prediction of treatment outcomes and toxicities; prediction of cancer cell sensitivity to therapeutics; and clinical decision-making. A specific area of interest is the development of effective drug combinations tailored to each patient and tumor with the help of AI. Radiomics, the new kid on the block, deals with the extraction of quantitative features from medical imaging, including images used in the planning and administration of radiotherapy. As with any new invention, AI has its fallacies.
The limitations include lack of external validation and proof of generalizability, difficulty in data access for rare diseases, ethical and legal issues, lack of transparent logic behind predictions, and, last but not least, lack of education and expertise among medical professionals. Collaboration between departments of clinical oncology, bioinformatics, and data sciences can help overcome these problems in the near future.

2019 ◽  
Author(s):  
Xia Huiyi ◽  
Nankai Xia ◽  
Liu Liu ◽  
...  

With the advance of urbanization and the continuous construction and renewal of cities, the human living environment has undergone tremendous changes: residential community environments and service facilities, urban roads and street spaces, urban public services, and the layout of facilities. All of these correspond to real needs of people in urban life, and their characteristics, or their shortcomings, inevitably affect users' psychological feelings and, in turn, how people use the city. Studying how urban residents perceive changes in the living environment, and how those changes register psychologically and emotionally, therefore has practical significance: it can help urban managers and builders optimize the living environment of residents. This has long been one of the topics of greatest interest to urban researchers. In the hierarchy of needs proposed by the American psychologist Abraham Maslow, safety is the basic requirement second only to physiological needs. Safety, and especially psychological security, is thus one of people's basic needs in the urban environment, and the perception of psychological security is one of the most important indicators in urban environmental assessment. In the past, limited by available technical means, studies of urban environmental psychological security often relied on surveys of a small number of respondents, and such low-density data can hardly yield generalizable measures of perception. With the leaping development of the mobile Internet, Internet image data has grown geometrically over time, and with recent advances in artificial intelligence, image recognition and perception analysis based on machine learning have become possible.
The maturity of these technical conditions provides a basis for studying an urban renewal index evaluation system based on psychological security. In addition to existing urban street-view data obtained through urban big-data collection combined with artificial intelligence image analysis, this paper proposes strategies for collecting large volumes of psychological assessment data on the urban living environment. These data are crowdsourced, with the collection method constrained by cost and available technology. At present, the psychological-security preferences of a large number of users over urban street images are collected by a forced-choice method and then fitted statistically to produce a training set for urban environmental psychological security. In the future, when conditions mature, brainwave feedback data from virtual reality scenes could be used for machine learning of psychological security, improving the accuracy of the psychological security data.
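The forced-choice collection and statistical fitting described above could, for example, be realized with a pairwise-comparison model. A minimal sketch, assuming a Bradley-Terry model fitted by standard minorization-maximization updates (the paper does not specify its fitting method; `bradley_terry` and the data layout are illustrative, not the authors' code):

```python
def bradley_terry(pairs, n_items, iters=200):
    """Fit per-image safety scores from forced-choice judgements.

    pairs: list of (winner, loser) index tuples, one per judgement
           ("which street image feels safer?").
    Returns scores normalized to sum to n_items; higher = safer-feeling.
    """
    scores = [1.0] * n_items
    for _ in range(iters):
        new = []
        for i in range(n_items):
            # number of comparisons item i won
            wins = sum(1 for w, l in pairs if w == i)
            # MM denominator: sum of 1/(s_i + s_opponent) over all
            # comparisons involving item i
            denom = sum(1.0 / (scores[i] + scores[j])
                        for w, l in pairs
                        for j in ([l] if w == i else [w] if l == i else []))
            new.append(wins / denom if denom > 0 else scores[i])
        total = sum(new)  # renormalize so scores sum to n_items
        scores = [s * n_items / total for s in new]
    return scores
```

The resulting scores could then serve as regression targets for a training set over street images.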


Author(s):  
Денис Валерьевич Сикулер

This article reviews 10 Internet resources that can be used to find data for various tasks related to machine learning and artificial intelligence. It examines both widely known sites (such as Kaggle and the Registry of Open Data on AWS) and less popular or highly specialized resources (such as The Big Bad NLP Database and Common Crawl). All of the resources provide free access to data, and in most cases registration is not even required. For each resource, the characteristics and particulars of searching for and obtaining datasets are described. The following sites are covered: Kaggle, Google Research, Microsoft Research Open Data, Registry of Open Data on AWS, Harvard Dataverse Repository, Zenodo, the Open Data portal of the Russian Federation, World Bank, The Big Bad NLP Database, and Common Crawl.


2020 ◽  
Vol 9 (1) ◽  
pp. 248 ◽  
Author(s):  
Mariana Chumbita ◽  
Catia Cillóniz ◽  
Pedro Puerta-Alcalde ◽  
Estela Moreno-García ◽  
Gemma Sanjuan ◽  
...  

The use of artificial intelligence (AI) to support clinical medical decisions is a promising concept. Two important factors have driven these advances: the availability of data from electronic health records (EHR) and progress in computational performance. These two factors are interrelated with respect to complex mathematical functions such as machine learning (ML) and neural networks (NN). Indeed, some published articles have already demonstrated the potential of these approaches in medicine. In the diagnosis and management of pneumonia, the use of AI with chest X-ray (CXR) images has primarily been associated with earlier diagnosis, prompt antimicrobial therapy, and ultimately better prognosis. Coupled with this is growing research on empirical therapy and mortality prediction. Harnessing the power of NN, the majority of studies have reported high accuracy rates in their predictions. Because AI can handle large amounts of data and execute mathematical functions such as machine learning and neural networks, it can be revolutionary in supporting clinical decision-making. In this review, we describe and discuss the most relevant studies of AI in pneumonia.


2018 ◽  
Vol 36 (6_suppl) ◽  
pp. 170-170 ◽  
Author(s):  
Michael Joseph Donovan ◽  
Richard Scott ◽  
Faisal m Khan ◽  
Jack Zeineh ◽  
Gerardo Fernandez

170 Background: Postoperative risk assessment remains an important variable in the treatment of prostate cancer (PCA). Advances in genomic risk classifiers have aided clinical decision-making; however, clinical-pathologic variables such as Gleason grade and pathologic stage remain significant comparators for accurate prognostication. We aimed to standardize the descriptive pathology of PCA through automation of Gleason grading with artificial intelligence and image-analysis feature selection. Methods: Retrospective study using radical prostatectomy (RP) tissue microarrays from Henry Ford Hospital and Roswell Park Cancer Center with 8-year median follow-up. Samples were stained with a multiplex immunofluorescent assay (Androgen Receptor (AR), Ki67, Cytokeratin 18, Cytokeratin 5/6, and Alpha-methylacyl-CoA racemase), imaged with a CRI Nuance FX camera, and then analyzed with proprietary software to generate a suite of morphometric attributes that quantitatively characterize the Gleason spectrum. Derived features were univariately correlated with disease progression using the concordance index (CI) along with the hazard ratio and p-value. Results: Starting with a training cohort of 306 patients and a 15% event rate, MIF PCA images were subjected to a machine learning analysis program which incorporates a graph theory-based approach for characterization of gland/ring fusion and fragmentation of tumor architecture (TA), together with biomarker quantitation (BQ) (i.e., AR and Ki67). Nineteen unique image features (7 TA and 12 TA+BQ) were identified. By univariate CI, all TA features were strongly associated with Gleason grading, with CIs reflecting degree of tumor differentiation (CI 0.29-0.33, p-value = 0.005). Four TA+BQ features were selected in a training risk model and effectively replaced the clinical Gleason features. By comparison, dominant RP Gleason had a CI of 0.31.
Conclusions: Image-based feature selection guided by principles of machine learning has the potential to automate and replace traditional Gleason grading. Such approaches provide the necessary foundation for next-generation risk assessment assays.
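The concordance index used above measures how often the ordering of a feature agrees with the ordering of progression times; a CI near 0.5 means no association, and values well below 0.5 (like the 0.29-0.33 reported) indicate an inversely ordered feature. A minimal sketch of Harrell's C-index for right-censored data, assuming the standard pairwise definition (the study's proprietary software is not available, and `concordance_index` is an illustrative name):

```python
def concordance_index(times, events, risks):
    """Harrell's C-index for one feature against censored follow-up.

    times: follow-up time per patient.
    events: 1 if progression was observed, 0 if censored.
    risks: feature value per patient; higher = predicted earlier event.
    """
    concordant = permissible = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # a pair is usable only if the patient with the shorter
            # follow-up actually progressed (otherwise order is unknown)
            if times[i] < times[j] and events[i] == 1:
                permissible += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5  # ties count as half
    return concordant / permissible
```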


Author(s):  
Sailesh Suryanarayan Iyer ◽  
Sridaran Rajagopal

The knowledge revolution is transforming the globe from a traditional society to a technology-driven one. Online transactions have multiplied, exposing the world to a new demon called cybercrime. Tasks once performed by human beings are increasingly handled by devices and robots powered by artificial intelligence. Robotics, image processing, machine vision, and machine learning are changing the lifestyle of citizens. Machine learning comprises algorithms that are capable of learning from historical occurrences. This chapter discusses the concepts of machine learning, cyber security, and cybercrime, and the applications of machine learning in the cyber security domain. Malware detection and network intrusion detection are a few areas where machine learning and deep learning can be applied. The authors also elaborate on research advancements and challenges in machine learning related to cyber security. The last section of this chapter lists future trends and directions in machine learning and cyber security.


Author(s):  
S. Matthew Liao

This introduction outlines in section I.1 some of the key issues in the study of the ethics of artificial intelligence (AI) and proposes ways to take these discussions further. Section I.2 discusses key concepts in AI, machine learning, and deep learning. Section I.3 considers ethical issues that arise because current machine learning is data hungry; is vulnerable to bad data and bad algorithms; is a black box that has problems with interpretability, explainability, and trust; and lacks a moral sense. Section I.4 discusses ethical issues that arise because current machine learning systems may be working too well and human beings can be vulnerable in the presence of these intelligent systems. Section I.5 examines ethical issues arising out of the long-term impact of superintelligence such as how the values of a superintelligent AI can be aligned with human values. Section I.6 presents an overview of the essays in this volume.


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Subhanik Purkayastha ◽  
Yijun Zhao ◽  
Jing Wu ◽  
Rong Hu ◽  
Aidan McGirr ◽  
...  

Abstract Pre-treatment determination of renal cell carcinoma aggressiveness may help guide clinical decision-making. We aimed to differentiate low-grade (Fuhrman I–II) from high-grade (Fuhrman III–IV) renal cell carcinoma using radiomics features extracted from routine MRI. 482 pathologically confirmed renal cell carcinoma lesions from 2008 to 2019 in a multicenter cohort were retrospectively identified. 439 lesions with information on Fuhrman grade from 4 institutions were divided into training and test sets with an 8:2 split for model development and internal validation. Another 43 lesions from a separate institution were set aside for independent external validation. The performance of TPOT (Tree-based Pipeline Optimization Tool), an automatic machine learning pipeline optimizer, was compared to a hand-optimized machine learning pipeline. The best-performing hand-optimized pipeline was a Bayesian classifier with Fischer score feature selection, achieving an external validation ROC AUC of 0.59 (95% CI 0.49–0.68), accuracy of 0.77 (95% CI 0.68–0.84), sensitivity of 0.38 (95% CI 0.29–0.48), and specificity of 0.86 (95% CI 0.78–0.92). The best-performing TPOT pipeline achieved an external validation ROC AUC of 0.60 (95% CI 0.50–0.69), accuracy of 0.81 (95% CI 0.72–0.88), sensitivity of 0.12 (95% CI 0.14–0.30), and specificity of 0.97 (95% CI 0.87–0.97). Automated machine learning pipelines can perform equivalently to, or better than, a hand-optimized pipeline on an external validation test, non-invasively predicting the Fuhrman grade of renal cell carcinoma using conventional MRI.
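The Fischer (Fisher) score used for feature selection in the hand-optimized pipeline ranks each radiomics feature by the ratio of between-class scatter to within-class scatter; higher scores mean the feature better separates low- from high-grade lesions. A minimal pure-Python sketch assuming the standard formulation (the paper's exact implementation is not given; `fisher_score` is an illustrative name):

```python
def fisher_score(values, labels):
    """Fisher score of one feature across labeled samples.

    values: the feature's value for each sample.
    labels: class label (e.g. low/high grade) for each sample.
    Returns between-class scatter / within-class scatter.
    """
    classes = set(labels)
    mean_all = sum(values) / len(values)
    between = within = 0.0
    for c in classes:
        vc = [v for v, y in zip(values, labels) if y == c]
        mu_c = sum(vc) / len(vc)
        var_c = sum((v - mu_c) ** 2 for v in vc) / len(vc)
        between += len(vc) * (mu_c - mean_all) ** 2
        within += len(vc) * var_c
    # a feature that separates classes perfectly has zero within-class
    # scatter; report it as infinitely discriminative
    return between / within if within else float("inf")
```

Features would be ranked by this score and the top-ranked subset passed to the Bayesian classifier.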


2021 ◽  
Vol 29 (Supplement_1) ◽  
pp. i18-i18
Author(s):  
N Hassan ◽  
R Slight ◽  
D Weiand ◽  
A Vellinga ◽  
G Morgan ◽  
...  

Abstract Introduction Sepsis is a life-threatening condition that is associated with increased mortality. Artificial intelligence tools can inform clinical decision-making by flagging patients who may be at risk of developing infection and subsequent sepsis, and can assist clinicians with their care management. Aim To identify the optimal set of predictors used to train machine learning algorithms to predict the likelihood of infection and subsequent sepsis and inform clinical decision-making. Methods This systematic review was registered in the PROSPERO database (CRD42020158685). We searched three large databases: Medline, the Cumulative Index of Nursing and Allied Health Literature, and Embase, using appropriate search terms. We included quantitative primary research studies that focused on sepsis prediction associated with bacterial infection in adult populations (>18 years) in all care settings and that included data on the predictors used to develop machine learning algorithms. The search covered 1 January 2000 to 25 November 2019. Data extraction was performed using a data extraction sheet, and a narrative synthesis of eligible studies was undertaken. Narrative analysis was used to arrange the data into key areas and to compare and contrast the content of included studies. Quality assessment of the non-randomised included studies was performed using the Newcastle-Ottawa Quality Assessment scale. Bias was not assessed due to the non-randomised nature of the included studies. Results Fifteen articles met our inclusion criteria (Figure 1). We identified 194 predictors that were used to train machine learning algorithms to predict infection and subsequent sepsis, with 13 predictors used on average across all included studies.
The most significant predictors included age, gender, smoking, alcohol intake, heart rate, blood pressure, lactate level, cardiovascular disease, endocrine disease, cancer, chronic kidney disease (eGFR<60 ml/min), white blood cell count, liver dysfunction, surgical approach (open or minimally invasive), and pre-operative haematocrit <30%. These predictors were used in the development of all the algorithms in the fifteen articles. All included studies used artificial intelligence techniques to predict the likelihood of sepsis, with an average sensitivity of 77.5±19.27 and an average specificity of 69.45±21.25. Conclusion The type of predictors used was found to influence the predictive power and predictive timeframe of the developed machine learning algorithms. Two strengths of our review are that we included studies published since the first consensus definition of sepsis was published in 2001, and that we identified factors that can improve the predictive ability of algorithms. However, we note that the included studies had some limitations: three studies did not validate the models they developed, and many tools were limited by reduced specificity, sensitivity, or both. This work has important implications for practice, as predicting the likelihood of sepsis can help inform the management of patients and concentrate finite resources on those patients who are most at risk. Producing a set of predictors can also guide future studies in developing more sensitive and specific algorithms with an increased predictive time window to allow for preventive clinical measures.
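The averaged sensitivity and specificity above are the true-positive and true-negative rates of each binary sepsis predictor. As a small illustration of the two metrics (not code from any included study):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity and specificity of binary predictions.

    y_true: 1 if the patient developed sepsis, else 0.
    y_pred: 1 if the algorithm flagged the patient, else 0.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    # sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)
    return tp / (tp + fn), tn / (tn + fp)
```

A tool with high sensitivity but low specificity flags many patients unnecessarily; the reverse misses at-risk patients, which is the trade-off noted in the conclusion.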


2021 ◽  
pp. 279-294
Author(s):  
Marcin Kowalczyk

The paper presents findings regarding AI and machine learning and how "thinking machines" differ from human beings. The next part of the paper examines the impact of AI and machine learning on day-to-day activities in the following areas: 1. Microtargeting and psychometrics, with examples from business and politics; 2. Surveillance systems, biometric identification, COVID-19 tracing apps, etc., and the issue of privacy in the digital era; 3. The question of choice optimization (AI-driven web browsers and dating apps, chatbots and virtual assistants, etc.) and whether free will still exists in an AI-supported online environment. The article is summed up with conclusions.


10.29007/s6vh ◽  
2019 ◽  
Author(s):  
Harris Wang

The resurgence of interest in artificial intelligence and advances on several fronts of AI, machine learning with neural networks in particular, have made us think again about the nature of intelligence and the existence of a generic model that may capture what human beings hold in their minds about the world, empowering them to exhibit all kinds of intelligent behaviors. In this paper, we present Constrained Object Hierarchies (COHs) as such a generic model of the world and intelligence. COHs extend the well-known object-oriented paradigm by adding identity constraints, trigger constraints, goal constraints, and some primary methods that capable beings can use to accomplish various forms of intelligence, such as deduction, induction, analogy, recognition, construction, and learning, among many others. In the paper we first argue the need for such a generic model of the world and intelligence, and then present the model in detail, including its important constructs, the primary methods capable beings can use, and how different intelligent behaviors can be implemented and achieved with this generic model.

