Artificial intelligence (AI) to improve patient outcomes in community oncology practices.

2019 ◽  
Vol 37 (15_suppl) ◽  
pp. e18098-e18098
Author(s):  
John Frownfelter ◽  
Sibel Blau ◽  
Ray D. Page ◽  
John Showalter ◽  
Kelly Miller ◽  
...  

e18098 Background: Artificial intelligence (AI) for predictive analytics has been studied extensively in diagnostic imaging and genetic testing. Cognitive analytics adds to this by suggesting interventions that optimize health outcomes using real-time data and machine learning. Herein, we report the results of a pilot study of the Jvion, Inc. Cognitive Clinical Success Machine (CCSM), an eigenvector-based deep learning AI technology. Methods: The CCSM uses electronic medical record (EMR) data and publicly available socioeconomic/behavioral databases to create an n-dimensional space within which patients are mapped along vectors, resulting in thousands of relevant clusters of clinically/behaviorally similar patients. Each cluster has a mathematical propensity to respond to a given clinical intervention, and the clusters are updated dynamically with new data from the site. The CCSM generates recommendations for the provider to consider as they develop a care plan based on the patient's cluster. We trained and tested the CCSM technology at 3 US oncology practices for the risk (low, intermediate, high) of 4 specific outcomes: 30-day severe pain, 30-day mortality, 6-month clinical deterioration (ECOG-PS), and 6-month diagnosis of major depressive disorder (MDD). We report the accuracy of the CCSM based on the training and testing data sets. Area under the curve (AUC) was calculated to show the goodness of fit of the classification models for each outcome. Results: In the training/testing data set there were 371,787 patients from the 3 sites: female = 61.3%; age ≤ 50 = 21.3%, 51-65 = 26.9%, > 65 = 51.9%; white/Caucasian = 43.4%, black/African American = 5.9%, unknown race = 43.4%. Cancer type was unknown/missing for 66.3% of patients and stage for 90.4% of patients. AUC range per vector: 30-day severe/recurrent pain = 0.85-0.90; 30-day mortality = 0.86-0.97; 6-month ECOG-PS decline of 1 point = 0.88-0.92; and 6-month diagnosis of MDD = 0.77-0.90. Conclusions: The high AUC indicates good separation between true positives/negatives (proper model specification for classifying the risk of each outcome) regardless of the degree of missing data for variables including cancer type and stage. Following testing, a 6-month pilot program was implemented (06/2018-11/2018). Final results of the pilot program are pending.
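The abstract reports model quality as an AUC per outcome. Below is a minimal sketch of that kind of per-outcome evaluation, assuming scikit-learn; the labels and risk scores here are synthetic placeholders, not CCSM output.

```python
# Per-outcome AUC evaluation on a held-out set; data are synthetic placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
outcomes = ["30-day severe pain", "30-day mortality",
            "6-month ECOG-PS decline", "6-month MDD diagnosis"]

for name in outcomes:
    y_true = rng.integers(0, 2, size=1000)                          # hypothetical binary outcome labels
    y_score = np.clip(0.6 * y_true + 0.7 * rng.random(1000), 0, 1)  # hypothetical model risk scores
    print(f"{name}: AUC = {roc_auc_score(y_true, y_score):.2f}")
```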

2010 ◽  
Vol 2 (2) ◽  
pp. 38-51 ◽  
Author(s):  
Marc Halbrügge

Keep it simple - A case study of model development in the context of the Dynamic Stocks and Flows (DSF) task

This paper describes the creation of a cognitive model submitted to the ‘Dynamic Stocks and Flows’ (DSF) modeling challenge. This challenge aims at comparing computational cognitive models of human behavior during an open-ended control task. Participants in the modeling competition were provided with a simulation environment and training data for benchmarking their models, while the actual specification of the competition task was withheld. To meet this challenge, the cognitive model described here was designed and optimized for generalizability. Only two simple assumptions about human problem solving were used to explain the empirical findings of the training data. In-depth analysis of the data set prior to the development of the model led to the dismissal of correlations and other parametric statistics as goodness-of-fit indicators. A new statistical measure based on rank orders and sequence matching techniques is proposed instead. This measure, when applied to the human sample, also identifies clusters of subjects that use different strategies for the task. The acceptability of the fits achieved by the model is verified using permutation tests.
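The fit measure outlined in the abstract combines rank orders/sequence matching with a permutation test. Below is an illustrative sketch of such a measure (not the paper's exact formulation), using Python's standard difflib sequence matcher on hypothetical discretized control actions.

```python
# Illustrative goodness-of-fit via sequence matching plus a permutation test.
import difflib
import numpy as np

def fit_score(model_seq, human_seq):
    """Similarity of two symbol sequences (e.g. discretized control actions)."""
    return difflib.SequenceMatcher(None, model_seq, human_seq).ratio()

def permutation_p_value(model_seq, human_seq, n_perm=1000, seed=0):
    """Fraction of shuffled human sequences that match the model at least as well."""
    rng = np.random.default_rng(seed)
    observed = fit_score(model_seq, human_seq)
    human = list(human_seq)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(human)
        if fit_score(model_seq, "".join(human)) >= observed:
            hits += 1
    return hits / n_perm

# Hypothetical action sequences ("u" = increase inflow, "d" = decrease, "h" = hold).
model = "uudhhdduuhhd"
human = "uudhddduuhud"
print(fit_score(model, human), permutation_p_value(model, human))
```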


Author(s):  
Christopher MacDonald ◽  
Michael Yang ◽  
Shawn Learn ◽  
Ron Hugo ◽  
Simon Park

Abstract There are several challenges associated with existing rupture detection systems, such as their inability to detect accurately during transient conditions (such as pump dynamics), delayed responses, and the difficulty of transferring models to different pipeline configurations. To address these challenges, we employ multiple Artificial Intelligence (AI) classifiers that rely on pattern recognition instead of traditional operator-set thresholds. The AI techniques, consisting of two-dimensional (2D) Convolutional Neural Networks (CNN) and Adaptive Neuro-Fuzzy Inference Systems (ANFIS), are used to mimic processes performed by operators during a rupture event. This includes both visualization (using CNN) and rule-based decision making (using ANFIS). The system provides a level of reasoning to an operator through the use of the rule-based AI system. Pump station sensor data is non-dimensionalized prior to AI processing, enabling application to pipeline configurations outside of the training data set. The AI algorithms undergo training and testing using two data sets: laboratory-collected data that mimics transient pump-station operations and real operator data that includes Real Time Transient Model (RTTM) simulated ruptures. The use of non-dimensional sensor data enables the system to detect ruptures from pipeline data not used in the training process.
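As one concrete illustration of the non-dimensionalization step, the sketch below scales a sensor channel by a steady-state reference value; the channel names and reference values are assumptions, not the authors' actual scheme.

```python
# One way sensor channels might be non-dimensionalized before classification.
import numpy as np

def nondimensionalize(signal: np.ndarray, reference: float) -> np.ndarray:
    """Scale a sensor channel by a steady-state reference so that different
    pipelines with different operating points map to a comparable range."""
    return signal / reference

# Hypothetical pressure trace (kPa) and its steady-state operating value.
pressure = np.array([5020.0, 5015.0, 4200.0, 3100.0, 2950.0])
p_star = nondimensionalize(pressure, reference=5000.0)
print(p_star)  # values near 1.0 in normal operation, dropping during a rupture
```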


2021 ◽  
Vol 80 (Suppl 1) ◽  
pp. 223.3-224
Author(s):  
A. C. Genç ◽  
Z. N. Kaya ◽  
F. Turkoglu Genc ◽  
L. Genc Kaya ◽  
Z. Öztürk ◽  
...  

Background: There is an average delay of 8 years in the diagnosis of ankylosing spondylitis (AS). The most important danger of late diagnosis is that the disease can cause physical and functional disability (2). There is no specific diagnostic biomarker for AS. Sacroiliac joint (SIJ) radiography is frequently used in the diagnosis and follow-up of AS due to its easy accessibility and low cost. It can be graded as 0, 1, 2, 3, or 4, and these grades may not be sharply separated from each other (3). Objectives: Interpretation of SIJ radiographs may differ from physician to physician; in fact, the same physician may interpret them differently at different times (3). We wanted to address the intraobserver disagreement problem with an artificial intelligence model. Methods: The SIJ radiographs of 590 patients who presented to our center were divided, separately for the right and left joints, into 3 categories (grade 0, grade 1-2, grade 3-4), and a training data set was prepared for the object recognition method. 488 augmented images were generated by adding noise to the 490 images in the training data. 242 joint objects were trained for grade 0, 278 for grade 1-2, and 1426 for grade 3-4. The model was tested with 100 images containing 36 joint objects for grade 0, 29 for grade 1-2, and 135 for grade 3-4 to create a computer vision artificial intelligence model (image 1). Results: Training performance was 70% for grade 0, 63% for grade 1-2, and 90% for grade 3-4; test performance was 52% for grade 0, 24% for grade 1-2, and 86% intersection over union (IoU, a measure used to indicate the accuracy of an object detector) for grade 3-4. The mean average precision (mAP) score of our object detection model was 65.9% on the test data set (image 1). The estimation quality of the model can be affected by the distribution and number of examples in each class. Conclusion: The experience of the x-ray technician, dose adjustment, and position differences due to patient compliance complicate the standardization of SIJ radiography, and this may cause interobserver disagreement (3). Artificial intelligence models built from a larger and more homogeneous data set, in order to ensure objective standardization in the interpretation of SIJ radiographs, could help physicians. References: [1] Braun J. Axial spondyloarthritis including ankylosing spondylitis. Rheumatology (Oxford). 2018;57(suppl_6):vi1-vi3. [2] Rudwaleit M, van der Heijde D, Khan MA, Braun J, Sieper J. How to diagnose axial spondyloarthritis early. Ann Rheum Dis 2004;63:535-543. [3] van den Berg R, et al. Agreement between clinical practice and trained central reading in reading of sacroiliac joints on plain pelvic radiographs. Results from the DESIR cohort. Arthritis Rheumatol 2014;66:2403-2411. Disclosure of Interests: None declared.
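For reference, the intersection-over-union measure cited in the results can be computed as below for axis-aligned bounding boxes; this is a generic illustration, not the authors' evaluation code.

```python
# Intersection over union (IoU) for two boxes given as (x_min, y_min, x_max, y_max).
def iou(box_a, box_b):
    """IoU of two boxes: overlap area divided by the area of their union."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# Hypothetical predicted vs. annotated sacroiliac-joint boxes on one radiograph.
print(iou((40, 60, 120, 200), (50, 70, 125, 210)))  # e.g. ~0.8 counts as a good detection
```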


2019 ◽  
Vol 949 ◽  
pp. 24-31 ◽  
Author(s):  
Bartłomiej Mulewicz ◽  
Grzegorz Korpala ◽  
Jan Kusiak ◽  
Ulrich Prahl

The main objective of the presented research is to apply techniques from the dynamically developing field of image analysis based on Artificial Intelligence, particularly Deep Learning, to the classification of steel microstructures. Our research focused on the development and implementation of Deep Convolutional Neural Networks (DCNN) for the classification of different types of steel microstructure photographs obtained by light microscopy at the TU Bergakademie, Freiberg. First, a brief presentation of the idea of the DCNN-based system is given. Next, the results of tests of the developed classification system on 8 different types (classes) of microstructure from the following steel grades are presented: C15, C45, C60, C80, V33, X70 and a carbide-free steel. DCNN-based classification systems require large amounts of training data, and system accuracy strongly depends on the size of these data. Therefore, the created data set of micrograph images of different types of microstructure (33,283 photographs) gave the opportunity to develop high-precision classification systems and segmentation routines, reaching an accuracy of 99.8%. The presented results confirm that DCNN can be a useful tool in microstructure classification.
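A generic sketch of a small convolutional classifier for 8 microstructure classes is given below, assuming PyTorch; it is not the authors' actual DCNN architecture.

```python
# Small convolutional classifier for 8 microstructure classes (illustrative only).
import torch
from torch import nn

class MicrostructureCNN(nn.Module):
    def __init__(self, n_classes: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Hypothetical batch of 4 grayscale micrographs, 224x224 pixels.
model = MicrostructureCNN()
logits = model(torch.randn(4, 1, 224, 224))
print(logits.shape)  # torch.Size([4, 8]) -- one score per microstructure class
```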


Author(s):  
Christopher Macdonald ◽  
Jaehyun Yang ◽  
Shawn Learn ◽  
Simon S. Park ◽  
Ronald J. Hugo

Abstract There are several challenges associated with existing pipeline rupture detection systems, including an inability to detect accurately during transient conditions (such as changes in pump operating points), an inability to transfer easily from one pipeline configuration to another, and relatively slow response times. To address these challenges, we employ multiple Artificial Intelligence (AI) classifiers that rely on pattern recognition instead of traditional operator-set thresholds. The AI techniques, consisting of two-dimensional (2D) Convolutional Neural Networks (CNN) and Adaptive Neuro-Fuzzy Inference Systems (ANFIS), are used to mimic processes performed by operators during a rupture event. This includes both visualization (using CNN) and rule-based decision making (using ANFIS). The system provides a level of reasoning to an operator through the use of rule-based AI. Pump station sensor data is non-dimensionalized prior to AI processing, enabling application to pipeline configurations outside of the training data set, independent of geometry, length, and medium. The AI algorithms undergo training and testing using two data sets: laboratory-collected flow loop data that mimics transient pump-station operations and real operator data that include ruptures simulated using the Real Time Transient Model (RTTM). The results of the multiple AI classifiers are fused to provide higher reliability, especially when detecting ruptures in pipeline data not used in the training process.
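The fusion step can be illustrated with a simple weighted combination of the two classifier outputs, as sketched below; the weights and threshold are assumptions, not the authors' fusion scheme.

```python
# Fusing two classifier outputs into one rupture decision (illustrative weights).
def fuse(cnn_prob: float, anfis_score: float, w_cnn: float = 0.6, threshold: float = 0.5) -> bool:
    """Weighted average of the CNN rupture probability and the ANFIS rule-based
    score (both in [0, 1]); returns True when the fused score crosses the threshold."""
    fused = w_cnn * cnn_prob + (1.0 - w_cnn) * anfis_score
    return fused >= threshold

# Hypothetical outputs for one window of non-dimensional sensor data.
print(fuse(cnn_prob=0.82, anfis_score=0.64))  # True -> raise a rupture alarm
```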


2021 ◽  
pp. 97-121
Author(s):  
Yuri Petrunin ◽  
Anna Pugacheva

The article examines the problems and prospects of introducing artificial intelligence technologies into personnel selection in commercial companies in Russia. In recent years, both the number of applications and the number of scientific articles on the use of artificial intelligence technologies in personnel management, in Russia and abroad, have been growing. However, there is currently a gap in evaluating the effectiveness of these technologies, identifying the most promising areas for applying artificial intelligence in personnel selection, and determining the factors that affect the results of such implementations under Russian conditions. A survey of experts and practitioners working with artificial intelligence technologies in personnel management at leading Russian companies allowed us to partially answer these questions. The analysis of the respondents' answers showed that these technologies favorably affect the selection of employees: they improve the quality of selection, increase its speed, reduce the workload of staff, save money, and help eliminate bias towards candidates. The factors that increase the efficiency and effectiveness of implementing artificial intelligence technologies in personnel selection were identified: the category of employees being selected, the scale of selection, and the possibility of integration with existing software. The difficulties of using artificial intelligence technologies in personnel selection include the presence of atypical positions to fill, the dependence of the results on the quality and volume of the training data set, and the possible reluctance of candidates to communicate with a robot. Based on the results of the study, we can reasonably conclude that artificial intelligence in personnel selection, despite certain problems, has many advantages and great prospects for development.


Author(s):  
Afshin Partovian

Successful modeling of hydro-environmental processes relies widely on the quantity and quality of accessible data, and noisy data might affect the performance of the modeling. On the other hand, in the training phase of any Artificial Intelligence (AI) based model, each training data set is usually a limited sample of the possible patterns of the process and hence might not represent the behavior of the whole population. Accordingly, in the present article a wavelet-based denoising method was first used to smooth the hydrological time series; then small normally distributed noises with zero mean and various standard deviations were generated and added to the smoothed time series to form different denoised-jittered training data sets for Artificial Neural Network (ANN) and Adaptive Neuro-Fuzzy Inference System (ANFIS) modeling of the daily rainfall-runoff process of the Oconee River watershed located in the USA. To evaluate the modeling performance, the outcomes were compared with the results of multiple linear regression (MLR) and Auto Regressive Integrated Moving Average (ARIMA) models. Comparison of the results achieved via the ANN and ANFIS models trained on denoised-jittered data showed that the proposed data processing approach, which combines denoising and jittering techniques, could improve the performance of ANN- and ANFIS-based rainfall-runoff modeling of the Oconee River watershed by up to 13% and 11%, respectively, in the verification phase.
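The denoise-then-jitter preparation can be sketched as below, assuming PyWavelets; the wavelet, threshold rule, and noise levels are illustrative choices, not necessarily those used in the study.

```python
# Wavelet denoising followed by jittering to build augmented training series.
import numpy as np
import pywt

def wavelet_denoise(series, wavelet="db4", level=3):
    """Soft-threshold the detail coefficients and reconstruct a smoothed series."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise estimate from finest scale
    thr = sigma * np.sqrt(2 * np.log(len(series)))      # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(series)]

def jitter(series, std, seed=0):
    """Add small zero-mean Gaussian noise to create an augmented training series."""
    rng = np.random.default_rng(seed)
    return series + rng.normal(0.0, std, size=len(series))

# Hypothetical daily runoff record: denoise, then build several jittered copies.
runoff = np.abs(np.random.default_rng(1).normal(50, 10, size=365))
smooth = wavelet_denoise(runoff)
training_sets = [jitter(smooth, std) for std in (0.5, 1.0, 2.0)]
print(len(training_sets), training_sets[0][:5])
```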


2021 ◽  
Vol 19 (1) ◽  
Author(s):  
Bo Huang ◽  
Wei Tan ◽  
Zhou Li ◽  
Lei Jin

Abstract Background The association between time-lapse technology (TLT) and embryo ploidy status has not yet been fully understood. TLT is characterized by large amounts of data and non-invasiveness. Artificial intelligence (AI) is therefore a good choice for accurately predicting embryo ploidy status from TLT data; however, current work on AI in this field needs to be strengthened. Methods A total of 469 preimplantation genetic testing (PGT) cycles and 1803 blastocysts from April 2018 to November 2019 were included in the study. All embryo images were captured by the time-lapse microscope system 5 or 6 days after fertilization, before biopsy. All euploid and aneuploid embryos were used as the data set. The data set was divided into a training set, a validation set, and a test set. The training set was mainly used for model training, the validation set was mainly used to adjust the hyperparameters of the model and for preliminary evaluation, and the test set was used to evaluate the generalization ability of the model. For better verification, we used data other than the training data for external verification: a total of 155 PGT cycles from December 2019 to December 2020 and 523 blastocysts were included in the verification process. Results The euploid prediction algorithm (EPA) was able to predict euploidy on the testing dataset with an area under the curve (AUC) of 0.80. Conclusions The TLT incubator has gradually become the choice of reproductive centers. Our AI model, named EPA, can predict embryo ploidy well based on TLT data. We hope that this system can serve all in vitro fertilization and embryo transfer (IVF-ET) patients in the future, allowing embryologists to have more non-invasive aids when selecting the best embryo to transfer.
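The train/validation/test split described in the methods can be sketched as below, assuming scikit-learn; the image list, labels, and split ratios are hypothetical.

```python
# Two-step split into train/validation/test sets; IDs and labels are placeholders.
from sklearn.model_selection import train_test_split

images = [f"embryo_{i:04d}.png" for i in range(1803)]   # placeholder image IDs
labels = [i % 2 for i in range(1803)]                   # placeholder euploid(1)/aneuploid(0)

# Hold out 20% for testing, then split the remainder 80/20 into train/validation.
x_trval, x_test, y_trval, y_test = train_test_split(
    images, labels, test_size=0.2, stratify=labels, random_state=42)
x_train, x_val, y_train, y_val = train_test_split(
    x_trval, y_trval, test_size=0.2, stratify=y_trval, random_state=42)

print(len(x_train), len(x_val), len(x_test))
```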


2020 ◽  
Vol 30 (Supplement_5) ◽  
Author(s):  
R Haneef ◽  
S Fuentes ◽  
R Hrzic ◽  
S Fosse-Edorh ◽  
S Kab ◽  
...  

Abstract Background The use of artificial intelligence is increasing to estimate and predict health outcomes from large data sets. The main objectives were to develop two algorithms using machine learning techniques to identify new cases of diabetes (case study I) and to classify type 1 and type 2 diabetes (case study II) in France. Methods We selected the training data set from a cohort study linked with the French national health database (SNDS). Two final datasets were used, one for each objective. A supervised machine learning method comprising the following eight steps was developed: selection of the data set, case definition, coding and standardization of variables, splitting the data into training and test sets, variable selection, training, validation, and selection of the model. We planned to apply the trained models to the SNDS to estimate the incidence of diabetes and the prevalence of type 1/2 diabetes. Results For case study I, 23/3468 SNDS variables were selected, and for case study II, 14/3481, based on an optimal balance of explained variance using the ReliefExp algorithm. We trained four models using different classification algorithms on the training data set. The Linear Discriminant Analysis model performed best in both case studies. The models were assessed on the test datasets and achieved a specificity of 67% and a sensitivity of 62% in case study I, and a specificity of 97% and a sensitivity of 100% in case study II. The case study II model was applied to the SNDS and estimated the 2016 prevalence in France of type 1 diabetes at 0.3% and of type 2 at 4.4%. The case study I model was not applied to the SNDS. Conclusions The case study II model to estimate the prevalence of type 1/2 diabetes has good performance and will be used in routine surveillance. The case study I model to identify new cases of diabetes showed poor performance due to missing information on determinants of diabetes and will need to be improved for further research.
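The kind of evaluation described (Linear Discriminant Analysis with sensitivity/specificity on a held-out test set) can be sketched as below, assuming scikit-learn; the data are synthetic placeholders, not SNDS variables.

```python
# LDA classifier plus sensitivity/specificity on a held-out test set (synthetic data).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 23))                    # placeholder for 23 selected variables
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=2000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
model = LinearDiscriminantAnalysis().fit(X_train, y_train)

tn, fp, fn, tp = confusion_matrix(y_test, model.predict(X_test)).ravel()
print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")
```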


Recognition by experts in the world of digital marketing of the tremendous features and facilities provided by Cloud Computing has increased its demand. As customer satisfaction is the hallmark of this ever-growing field, balancing its load becomes a major issue. Various heuristic and meta-heuristic algorithms have been applied to obtain optimal solutions. The current era is strongly drawn to the provisioning of self-manageable, self-learnable, self-healable, and self-configurable smart systems. To achieve a self-manageable Smart Cloud, various Artificial Intelligence and Machine Learning (AI-ML) techniques and algorithms are reviewed. In this review, recent trends in the utilization of AI-ML techniques, their areas of application, purposes, merits, and demerits are highlighted. These techniques are further categorized as instance-based machine learning algorithms and reinforcement learning techniques based on their mode of learning. Reinforcement learning is preferred when there is no training data set; it leads the system to learn from its own experience, even in a dynamic environment.
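As a minimal illustration of reinforcement learning operating without a training data set, the sketch below shows tabular Q-learning on a toy two-server task-assignment problem; the setup is illustrative and not drawn from the review.

```python
# Tabular Q-learning on a toy "assign task to one of two servers" problem.
import random

n_states, n_actions = 4, 2            # states: coarse load levels; actions: which server gets the task
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def step(state, action):
    """Toy environment: reward is higher when the task goes to the less loaded server."""
    reward = 1.0 if action == state % n_actions else -1.0
    return random.randrange(n_states), reward

state = 0
for _ in range(5000):
    action = random.randrange(n_actions) if random.random() < epsilon else max(
        range(n_actions), key=lambda a: Q[state][a])
    next_state, reward = step(state, action)
    Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
    state = next_state

print(Q)  # learned action values per load level, gained from experience alone
```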

