Methodologies to Associate COVID-19 Spreading Data to Space and Scale

2022 ◽  
pp. 103-137
Author(s):  
Lais-Ioanna Margiori ◽  
Stylianos Krommydakis

Since the onset of the COVID-19 pandemic, the correlation between the spread of the SARS-CoV-2 virus and a number of epidemiological parameters has been a key tool for understanding the dynamics of its transmission. This information has assisted local authorities in making policy decisions to contain its expansion. Several methods have been used, including topographical data, artificial intelligence and machine learning, and epidemiological tools, to analyze the factors facilitating the spread of the epidemic at local and global scales. The aim of this study is to use a new tool to assess and categorize incoming epidemiological data on the spread of the disease according to population densities, spatial and topographical morphologies, social and financial activities, and mobility between regions. These data will be appraised as risk factors in the spread of the disease on both a local and a global scale.

Author(s):  
Ladly Patel ◽  
Kumar Abhishek Gaurav

In today's world, a huge amount of data is available. This data is analyzed to extract information and is later used to train machine learning algorithms. Machine learning is a subfield of artificial intelligence in which machines are trained on data and then predict results. It is used in healthcare, image processing, marketing, and other domains. The aim of machine learning is to reduce the programmer's workload, replacing complex hand-written logic and decreasing human interaction with systems. The machine learns from past data and then predicts the desired output. This chapter describes machine learning in brief, covering different machine learning algorithms with examples and machine learning frameworks such as TensorFlow and Keras. The limitations of machine learning and its various applications are discussed. The chapter also describes how to identify features in machine learning data.
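
As a minimal sketch of the train-then-predict workflow the chapter describes, the following uses the Keras Sequential API; the toy dataset, network shape, and hyperparameters here are illustrative assumptions, not details from the chapter:

```python
# A minimal train-then-predict sketch with Keras (toy data, illustrative settings).
import numpy as np
from tensorflow import keras

# Hypothetical "past data": 1000 samples, 10 numeric features, a binary label.
X = np.random.rand(1000, 10)
y = (X.sum(axis=1) > 5).astype(int)

# A small feed-forward network built with the Keras Sequential API.
model = keras.Sequential([
    keras.Input(shape=(10,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# The machine is trained on past data...
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# ...and then predicts the desired output for unseen inputs.
print(model.predict(X[:3], verbose=0))
```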


2022 ◽  
Vol 2161 (1) ◽  
pp. 012015
Author(s):  
V Sai Krishna Reddy ◽  
P Meghana ◽  
N V Subba Reddy ◽  
B Ashwath Rao

Abstract Machine learning is an application of artificial intelligence in which a method begins with observations on data. In the medical field, it is very important to make a correct decision in a short time while treating a patient, and ML techniques play a major role in predicting disease from the vast amount of data produced by the healthcare field. In India, heart disease is the major cause of death, and according to the WHO, timely action can predict and prevent stroke. In this paper, ML techniques such as Decision Tree and Naïve Bayes are applied, together with known risk factors, to predict cardiovascular disease with better accuracy. The dataset considered is the Heart Failure Dataset, which consists of 13 attributes. To analyze the performance of these techniques, the collected data is first pre-processed, followed by feature selection and reduction.
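
A rough sketch of such a pipeline is below; the file name heart.csv, the target column name, and the choice of k for feature selection are assumptions for illustration, not details from the paper:

```python
# Sketch: pre-processing, feature selection, and two classifiers compared by accuracy.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

df = pd.read_csv("heart.csv")  # hypothetical file: 13 attributes plus a "target" column
X, y = df.drop(columns=["target"]), df["target"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Pre-processing: scale features, then keep the k most informative ones.
scaler = StandardScaler().fit(X_train)
selector = SelectKBest(f_classif, k=8).fit(scaler.transform(X_train), y_train)
X_tr = selector.transform(scaler.transform(X_train))
X_te = selector.transform(scaler.transform(X_test))

# Train and evaluate Decision Tree and Naive Bayes on the reduced feature set.
for clf in (DecisionTreeClassifier(random_state=42), GaussianNB()):
    clf.fit(X_tr, y_train)
    print(type(clf).__name__, accuracy_score(y_test, clf.predict(X_te)))
```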


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Zia U. Ahmed ◽  
Kang Sun ◽  
Michael Shelly ◽  
Lina Mu

Abstract Machine learning (ML) has demonstrated promise in predicting mortality; however, understanding spatial variation in risk factor contributions to mortality rates requires explainability. We applied explainable artificial intelligence (XAI) to a stack-ensemble machine learning framework to explore and visualize the spatial distribution of the contributions of known risk factors to lung and bronchus cancer (LBC) mortality rates in the conterminous United States. We used five base learners, a generalized linear model (GLM), random forest (RF), gradient boosting machine (GBM), extreme gradient boosting (XGBoost), and a deep neural network (DNN), for developing the stack-ensemble models. We then applied several model-agnostic approaches to interpret and visualize the stack-ensemble model's output at global and local scales (at the county level). The stack ensemble generally performs better than all the base learners and three spatial regression models. A permutation-based feature importance technique ranked smoking prevalence as the most important predictor, followed by poverty and elevation. However, the impact of these risk factors on LBC mortality rates varies spatially. This is the first study to use ensemble machine learning with explainable algorithms to explore and visualize the spatial heterogeneity of the relationships between LBC mortality and risk factors in the contiguous USA.
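
A simplified scikit-learn analogue of the stack-ensemble plus permutation-importance workflow is sketched below; the synthetic data, the reduced set of base learners, and all hyperparameters are placeholders, not the authors' actual framework or county-level data:

```python
# Sketch: stack three base learners under a linear meta-learner, then rank
# features with permutation importance on held-out data.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor, StackingRegressor
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Base learners stacked under a linear meta-learner.
stack = StackingRegressor(
    estimators=[
        ("glm", LinearRegression()),
        ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
        ("gbm", GradientBoostingRegressor(random_state=0)),
    ],
    final_estimator=LinearRegression(),
)
stack.fit(X_train, y_train)

# Permutation-based feature importance, analogous to the paper's ranking step.
result = permutation_importance(stack, X_test, y_test, n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f}")
```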


2021 ◽  
Vol 8 (32) ◽  
pp. 22-38
Author(s):  
José Manuel Amigo

Concepts like Machine Learning, Data Mining and Artificial Intelligence have become part of our daily life. This is mostly due to the incredible advances made in computation (hardware and software), the increasing capability to generate and store all types of data and, especially, the benefits (societal and economic) generated by the analysis of such data. At the same time, Chemometrics has played an important role since the late 1970s in analyzing data within the natural sciences (and especially in Analytical Chemistry). Yet, despite the strong parallels between all of the abovementioned terms, and even though they are familiar to most of us, it is still difficult to clearly define or differentiate the meanings of Machine Learning, Data Mining, Artificial Intelligence, Deep Learning and Chemometrics. This manuscript sheds some light on the definitions of Machine Learning, Data Mining, Artificial Intelligence and Big Data Analysis, defines their ranges of application and seeks an application space within the field of analytical chemistry (a.k.a. Chemometrics). The manuscript is full of personal, sometimes subjective, opinions and statements. Therefore, all opinions here are open for constructive discussion, with the only purpose of Learning (like the Machines do nowadays).


Author(s):  
Hassan A ◽  
Hassan M ◽  
Hassan M ◽  
Ellahham S ◽  
...  

Artificial Intelligence (AI) refers to the design of computer programs and machines which independently simulate the rudiments of human intelligence [1]. Machine learning encompasses a multitude of deep learning algorithms, including Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN), both of which enable continuous analysis of large-scale data to make decisions consistent with previously detected patterns [1]. AI exhibits high potential for employment in the healthcare industry and research laboratories to accurately predict illness, maximize disease prevention, and refine treatment plans. As technological advancements are made, the application of AI will gradually become more feasible and appropriately lend itself to advancing quality care for frail patients even away from the hospital setting. Frailty is somewhat of an ambiguous diagnosis due to the lack of a universally agreed-upon definition and frailty assessment tool. Efforts have been put forth to delineate frailty and standardize its method of measurement, but many physicians with minimal to no geriatric experience are more likely to eyeball the patient from the foot end of the bed. Although the Comprehensive Geriatric Assessment (CGA) is a gold standard for a multidisciplinary and systematic approach to frailty recognition, it is time-consuming and depends upon the administering clinician's expertise [2]. The integration of AI into a frailty assessment strategy would not only cause a paradigm shift in physicians' approach to this syndrome, but would also revolutionize pre-existing protocols for the management of frail and pre-frail patients. Neglect of the variables that comprise frailty results in inefficacious treatment plans and fuels the cost of patient care. International guidelines have come to appreciate the reversibility of frailty and concur that it should be a mandatory component of patient evaluation [3]. AI may be the solution to pinpointing the unidentified vulnerabilities that characterize frailty and ensuring that this entity of geriatric practice is more readily incorporated into other subspecialties, too. Chang et al. (2013) conducted research using "household goods" in hopes of facilitating "early detection of frailty and, hence, its early treatment" [4]. eChair, for example, was used to detect "slowness of movement, weakness and weight loss" [4]. Other devices were featured to detect long-term variations in frailty-determining elements and overall functional decline [4]. Pressure sensors, for example, have been embedded into walkers to measure "risk of fall" [4]. Similarly, the Canadian Cardiovascular Society Guidelines (2017) encourage the monitoring of orthostatic vital signs to "identify individuals at risk of falls" [3]. Therefore, gradual integration of AI into day-to-day appliances can be exceptionally beneficial when monitoring patients for the development of frailty-like "symptoms". The authors would like to emphasize that the safety and accuracy of the aforementioned AI technologies necessitate careful configuration. The literature unveils the key issues surrounding the safety of AI in healthcare [1]. Addressing these concerns is a top priority because frailty must be handled delicately and demands meticulous planning to eliminate risk factors. The concerns include, but are not limited to, oblivious impact, confidence of prediction, unexpected behaviors, and privacy and anonymity [1].
Steps for mitigation have been described and, if executed, AI may be utilized to monitor and manage frail patients easily. Models for personalized risk estimates "should be well calibrated and efficient, and effective updating protocols should be implemented" [1]. "Automated systems and algorithms should be able to adjust for and respond to uncertainty and unpredictability" [1]. By centering our focus on the safety and accuracy of AI, we can transform older persons' homes into 'smart homes'. Smart homes are equipped with AI-embedded appliances: "networked sensors and devices that extend functionality of the home by adding intelligence" [5]. They collect data for continual analysis and predict potential physiological decline. These advancements would not only improve overall quality of life; the processed data would also supplement single visits to the primary care provider or geriatrician and eliminate the need for frequent journeys to the physician's office. In addition, the implementation of AI may pave a pathway for investigating genetic biomarkers associated with an increased risk of frailty. Machine learning AI could accelerate research that correlates frailty with Single Nucleotide Polymorphisms (SNPs). However, current genetic sequencing technologies remain costly, and sequence processing is time-consuming. Third-generation sequencing technologies, such as Oxford Nanopore's MinION and PromethION, are more cost-effective and agile solutions [6]. These advantages would make them more accessible and appropriate for use among suspected frail patients. Consequently, identification of SNPs already linked to frailty would be possible through deep RNNs that have been used to distinguish DNA modifications from the sequencing data provided by MinKNOW, the cloud-based platform responsible for data analysis [6,7]. Further advancement of "portable sequencing technology" would promote its use in smart nursing homes, enabling caregivers to closely monitor frailty-susceptible patients and to tailor their care based on the presence of specific SNPs. Ultimately, the authors recommend that the search for underlying risk factors pertinent to frailty commence with (1) the administration of a simple yet effective preliminary frailty assessment in the clinical setting, or (2) the installation of AI technology into everyday-use equipment in a controlled environment (such as a smart home). If risk has been determined, (1) a more thorough frailty-diagnosing tool may be administered by an experienced geriatrician, (2) an AI-based confirmatory test to assess biomarkers and genetic sequences may be undertaken, or (3) a combination of both may be performed.


2022 ◽  
Vol 14 (2) ◽  
pp. 1-15
Author(s):  
Lara Mauri ◽  
Ernesto Damiani

Large-scale adoption of Artificial Intelligence and Machine Learning (AI-ML) models fed by heterogeneous, possibly untrustworthy data sources has spurred interest in estimating the degradation of such models due to spurious, adversarial, or low-quality data assets. We propose a quantitative estimate of the severity of classifiers' training set degradation: an index expressing the deformation of the convex hulls of the classes, computed on a held-out dataset generated via an unsupervised technique. We show that our index is computationally light, can be calculated incrementally, and complements existing quality measures for ML data assets well. As an experiment, we present the computation of our index on a benchmark convolutional image classifier.
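
One illustrative reading of the idea (not the authors' exact index) is to compare the convex hull of a class before and after degradation; in the sketch below, the 2-D embeddings, the injected noise, and the relative-volume formula are all assumptions made for demonstration:

```python
# Sketch: relative growth of a class's convex hull after spurious points are injected.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)

def hull_volume(points):
    """Volume (area, in 2-D) of the convex hull of a point set."""
    return ConvexHull(points).volume

# A clean 2-D class embedding, and a degraded version with spurious points added.
clean = rng.normal(0.0, 1.0, size=(200, 2))
degraded = np.vstack([clean, rng.normal(0.0, 4.0, size=(20, 2))])

# An assumed deformation index: relative change of the class hull's volume.
v_clean, v_degraded = hull_volume(clean), hull_volume(degraded)
index = abs(v_degraded - v_clean) / v_clean
print(f"hull deformation index: {index:.2f}")
```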


2021 ◽  
Vol 22 (2) ◽  
pp. 6-7
Author(s):  
Michael Zeller

Michael Zeller, Ph.D. is the recipient of the 2020 ACM SIGKDD Service Award, the highest service award in the field of knowledge discovery and data mining. Conferred annually on one individual or group in recognition of outstanding professional services and contributions to the field, the award honored Dr. Zeller for his years of service and many accomplishments as secretary and treasurer of ACM SIGKDD, the organizing body of the annual KDD conference. Zeller is also head of AI strategy and solutions at Temasek, a global investment company seeking to make a difference, always with tomorrow in mind. He sat down with SIGKDD Explorations to discuss how he first got involved in the KDD conference in 1999, what he learned from the first-ever virtual conference, his work at Temasek, and what excites him about the future of machine learning, data science and artificial intelligence.

