Machine Learning and Data Analytics in Pervasive Health

2018 ◽  
Vol 57 (04) ◽  
pp. 194-196
Author(s):  
Nuria Oliver ◽  
Michael Marschollek ◽  
Oscar Mayora

Summary Introduction: This accompanying editorial provides a brief introduction to this focus theme, dedicated to “Machine Learning and Data Analytics in Pervasive Health”. Objective: The innovative use of machine learning technologies, combining small and big data analytics, will support better provision of healthcare to citizens. This focus theme aims to present contributions at the crossroads of pervasive health technologies and data analytics as key enablers for achieving personalised medicine for diagnosis and treatment purposes. Methods: A call for papers was announced to all participants of the “11th International Conference on Pervasive Computing Technologies for Healthcare”, to different working groups of the International Medical Informatics Association (IMIA) and the European Federation for Medical Informatics (EFMI), and was published in June 2017 on the website of Methods of Information in Medicine. A peer review process was conducted to select the papers for this focus theme. Results: Four papers were selected for this focus theme. They cover a broad range of machine learning and data analytics applications in healthcare, including detection of injurious subtypes of patient-ventilator asynchrony, early detection of cognitive impairment, effective use of small data sets for estimating the performance of radiotherapy in bladder cancer treatment, and the use of negation detection and information extraction in unstructured medical texts. Conclusions: The use of machine learning and data analytics technologies in healthcare is experiencing a renewed impulse due to the availability of large amounts and new sources of human behavioral and physiological data, such as those captured by mobile and pervasive devices traditionally considered nonmainstream for healthcare provision and management.

2019 ◽  
Vol 58 (01) ◽  
pp. 060-060
Author(s):  
Nuria Oliver ◽  
Oscar Mayora ◽  
Michael Marschollek

Entropy ◽  
2018 ◽  
Vol 20 (11) ◽  
pp. 840 ◽  
Author(s):  
Frédéric Barbaresco

We introduce a poly-symplectic extension of Souriau's Lie groups thermodynamics based on the higher-order model of statistical physics introduced by Ingarden. This extended model could be used for small data analytics and machine learning on Lie groups. Souriau's geometric theory of heat is well adapted to describe the probability density (maximum entropy Gibbs density) of data living on groups or on homogeneous manifolds. For small data analytics (rarefied gases, sparse statistical surveys, …), the maximum entropy density should take higher-order moment constraints into account (the Gibbs density is not defined by the first moment alone; fluctuations require second- and higher-order moments), as introduced by Ingarden. We use a poly-symplectic model introduced by Christian Günther, replacing the symplectic form by a vector-valued form. The poly-symplectic approach generalizes the Noether theorem, the existence of moment mappings, the Lie algebra structure of the space of currents, the (non-)equivariant cohomology, and the classification of G-homogeneous systems. The formalism is covariant, i.e., no special coordinates or coordinate systems on the parameter space are used to construct the Hamiltonian equations. We underline the contextures of these models and the process of building these generic structures. We also introduce a more synthetic Koszul definition of the Fisher metric, based on the Souriau model, which we name the Souriau-Fisher metric. This Lie groups thermodynamics is the bedrock for Lie group machine learning, providing a fully covariant maximum entropy Gibbs density based on representation theory (the symplectic structure of coadjoint orbits for the Souriau non-equivariant model associated with a class of cohomology).
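
As a schematic gloss (our illustration, not an equation reproduced from the abstract), the first-order Souriau Gibbs density for a moment map U : M → g* and geometric temperature β ∈ g, together with its Ingarden-type higher-order extension using tensor-valued temperatures β_k, can be written as:

\[
p_{\beta}(\xi) = \frac{\exp\left(-\langle \beta, U(\xi) \rangle\right)}{\int_{M} \exp\left(-\langle \beta, U(\xi) \rangle\right)\, d\lambda(\xi)},
\qquad
p(\xi) \propto \exp\Big(-\sum_{k \ge 1} \big\langle \beta_{k},\, U(\xi)^{\otimes k} \big\rangle\Big).
\]

The first-order density is determined by the mean of U alone; the higher-order terms encode the fluctuation constraints that small-sample (small data) settings require.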


10.2196/24698 ◽  
2021 ◽  
Vol 6 (1) ◽  
pp. e24698
Author(s):  
Sina Ehsani ◽  
Chandan K Reddy ◽  
Brandon Foreman ◽  
Jonathan Ratcliff ◽  
Vignesh Subbian

Background With advances in digital health technologies and the proliferation of biomedical data in recent years, applications of machine learning in health care and medicine have gained considerable attention. While inpatient settings are equipped to generate rich clinical data from patients, there is a dearth of actionable information that can be used for pursuing secondary research on specific clinical conditions. Objective This study focused on applying unsupervised machine learning techniques to traumatic brain injury (TBI), the leading cause of death and disability among children and adults aged less than 44 years. Specifically, we present a case study to demonstrate the feasibility and applicability of subspace clustering techniques for extracting patterns from data collected from TBI patients. Methods Data for this study were obtained from the Progesterone for Traumatic Brain Injury, Experimental Clinical Treatment–Phase III (PROTECT III) trial, which included a cohort of 882 TBI patients. We applied subspace-clustering methods (density-based, cell-based, and clustering-oriented) to this data set and compared the performance of the different methods. Results The analyses showed the following three clusters of laboratory physiological data: (1) international normalized ratio (INR), (2) INR, chloride, and creatinine, and (3) hemoglobin and hematocrit. While all subspace-clustering algorithms achieved reasonable accuracy in classifying patients by mortality status, the density-based algorithm achieved a higher F1 score and coverage. Conclusions Clustering approaches serve as an important step for phenotype definition and validation in clinical domains such as TBI, where patient and injury heterogeneity are among the major reasons for the failure of clinical trials. The results from this study provide a foundation to develop scalable clustering algorithms for further research and validation.
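
To make the clustering step concrete, the sketch below (ours, not the authors' code) implements bottom-up, density-based subspace clustering in the spirit of SUBCLU: DBSCAN is run on every low-dimensional projection of the feature space, and projections containing at least one dense cluster are kept. The function name, thresholds, and feature list are illustrative assumptions, not details taken from the PROTECT III analysis.

from itertools import combinations

import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

def subspace_clusters(X, feature_names, max_dim=3, eps=0.5, min_samples=10):
    # Standardize so eps is comparable across lab variables with different units.
    X = StandardScaler().fit_transform(X)
    found = {}
    for k in range(1, max_dim + 1):
        for idx in combinations(range(X.shape[1]), k):
            # Cluster the projection of the data onto this candidate subspace.
            labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X[:, list(idx)])
            n_clusters = len(set(labels)) - (1 if -1 in labels else 0)  # label -1 marks noise
            if n_clusters > 0:
                found[tuple(feature_names[i] for i in idx)] = labels
    return found

# Hypothetical usage with lab variables like those named in the results:
rng = np.random.default_rng(0)
X = rng.normal(size=(882, 5))
clusters = subspace_clusters(X, ["INR", "chloride", "creatinine", "hemoglobin", "hematocrit"])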


Author(s):  
Farah Magrabi ◽  
Elske Ammenwerth ◽  
Catherine K. Craven ◽  
Kathrin Cresswell ◽  
Nicolet F. De Keizer ◽  
...  

Objectives: To highlight the role of technology assessment in the management of the COVID-19 pandemic. Method: An overview of existing research and evaluation approaches, along with expert perspectives drawn from the International Medical Informatics Association (IMIA) Working Group on Technology Assessment and Quality Development in Health Informatics and the European Federation for Medical Informatics (EFMI) Working Group for Assessment of Health Information Systems. Results: Evaluation of digital health technologies for COVID-19 should be based on their technical maturity as well as the scale of implementation. For mature technologies like telehealth, whose efficacy has been previously demonstrated, pragmatic, rapid evaluation using the complex systems paradigm, which accounts for multiple sociotechnical factors, might be more suitable for examining their effectiveness and emerging safety concerns in new settings. New technologies, particularly those intended for use on a large scale such as digital contact tracing, will require assessment of their usability as well as performance prior to deployment, after which evaluation should shift to a complex systems paradigm to examine the value of the information provided. The success of a digital health technology depends on the value of the information it provides relative to the sociotechnical context of the setting where it is implemented. Conclusion: Commitment to evaluation using the evidence-based medicine and complex systems paradigms will be critical to ensuring safe and effective use of digital health technologies for COVID-19 and future pandemics. There is an inherent tension, which needs to be negotiated, between evaluation and the imperative to deploy solutions urgently.


2021 ◽  
Author(s):  
Rainer Röhrig ◽  
Ursula Hübner ◽  
Martin Sedlmayr

Since 2017, the German Society for Medical Informatics, Biometry and Epidemiology e.V. (GMDS) has offered the submission of full papers to its annual meetings, optionally in Studies in Health Technology and Informatics (Stud HTI) or in GMS Medical Informatics, Biometry and Epidemiology (MIBE). The aim of the GMDS is to increase the attractiveness of the conference and of the paper submission process, in particular for young scientists, and to increase the visibility of the conference. A standardized peer review process was established. Since 2017, 25–35% of the contributions have been submitted as full papers. A total of 177 papers were published in Stud HTI. With an unofficial journal impact factor of 1.088 (2019) and 0.540 (2020), the papers were cited at a frequency similar to that of national medical journals or full paper contributions to international medical informatics conferences.
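
For context, the "unofficial" impact factor presumably follows the standard two-year definition (our assumption; the text does not spell out the computation):

\[
\mathrm{IF}_{Y} = \frac{C_{Y}(Y-1) + C_{Y}(Y-2)}{N_{Y-1} + N_{Y-2}},
\]

where C_Y(y) is the number of citations received in year Y by items published in year y, and N_y is the number of citable items published in year y. "Unofficial" indicates that the ratio is computed outside Journal Citation Reports.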


Author(s):  
Sadaf Qazi ◽  
Muhammad Usman

Background: Immunization is a significant public health intervention to reduce child mortality and morbidity. However, its coverage, despite free accessibility, is still very low in developing countries. One of the primary reasons for this low coverage is the lack of analysis and proper utilization of immunization data at various healthcare facilities. Purpose: In this paper, existing machine learning based data analytics techniques are reviewed critically to highlight the gaps where this high-potential data could be exploited in a meaningful manner. Results: Our review reveals that existing approaches use data analytics techniques without considering the full complexity of the Expanded Program on Immunization, which includes the maintenance of cold chain systems, the proper distribution of vaccines, and the quality of data captured at various healthcare facilities. Moreover, in developing countries there is no centralized data repository where all immunization-related data are gathered to enable analytics at various levels of granularity. Conclusion: We believe that the existing non-centralized immunization data, combined with the right set of machine learning and artificial intelligence based techniques, will not only improve vaccination coverage but will also help in predicting future trends and patterns of coverage at different geographical locations.
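
As an illustration of the multi-granularity analytics that a centralized repository would enable, here is a minimal sketch under assumed column names (province, district, facility, doses_given, and target_population are hypothetical, not a schema described in the paper):

import pandas as pd

def coverage_by_level(records, levels):
    """Coverage = doses given / target population, rolled up at each geographic level."""
    out = {}
    for i in range(1, len(levels) + 1):
        keys = levels[:i]  # e.g. ["province"], then ["province", "district"], ...
        grouped = records.groupby(keys)[["doses_given", "target_population"]].sum()
        grouped["coverage"] = grouped["doses_given"] / grouped["target_population"]
        out["/".join(keys)] = grouped
    return out

# Hypothetical usage: coverage_by_level(df, ["province", "district", "facility"])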


Author(s):  
William B. Rouse

This book discusses the use of models and interactive visualizations to explore designs of systems and policies in determining whether such designs would be effective. Executives and senior managers are very interested in what “data analytics” can do for them and, quite recently, what the prospects are for artificial intelligence and machine learning. They want to understand and then invest wisely. They are reasonably skeptical, having experienced overselling and under-delivery. They ask about reasonable and realistic expectations. Their concern is with the futurity of decisions they are currently entertaining. They cannot fully address this concern empirically. Thus, they need some way to make predictions. The problem is that one rarely can predict exactly what will happen, only what might happen. To overcome this limitation, executives can be provided predictions of possible futures and the conditions under which each scenario is likely to emerge. Models can help them to understand these possible futures. Most executives find such candor refreshing, perhaps even liberating. Their job becomes one of imagining and designing a portfolio of possible futures, assisted by interactive computational models. Understanding and managing uncertainty is central to their job. Indeed, doing this better than competitors is a hallmark of success. This book is intended to help them understand what fundamentally needs to be done, why it needs to be done, and how to do it. The hope is that readers will discuss this book and develop a “shared mental model” of computational modeling in the process, which will greatly enhance their chances of success.

