Digital Technologies and Data Science as Health Enablers: An Outline of Appealing Promises and Compelling Ethical, Legal, and Social Challenges

2021 ◽  
Vol 8 ◽  
Author(s):  
João V. Cordeiro

Digital technologies and data science promise to revolutionize healthcare by transforming the way health and disease are analyzed and managed. Digital health applications include telemedicine, electronic health records, wearable, implantable, injectable, and ingestible digital medical devices, mobile health apps, and the application of artificial intelligence and machine learning algorithms to medical and public health prognosis and decision-making. As is often the case with technological advancement, progress in digital health raises compelling ethical, legal, and social implications (ELSI). This article aims to succinctly map the relevant ELSI of the digital health field. The issues of patient autonomy; assessment, value attribution, and validation of health innovation; equity and trustworthiness in healthcare; professional roles and skills; and data protection and security are highlighted against the backdrop of the risks of dehumanization of care, the limitations of machine learning-based decision-making and, ultimately, the future contours of human interaction in medicine and public health. The running theme of this article is the underlying tension between the promises of digital health and its many challenges, heightened by the contrast between the pace of scientific progress and the slower, measured responses of law and ethics. Digital applications can prove to be valuable allies of human skills in medicine and public health. Similarly, ethics and the law can be perceived not merely as obstacles but as promoters of fairness, inclusiveness, creativity, and innovation in health.

2020 ◽  
Vol 19 (1) ◽  
pp. 43-65
Author(s):  
Jane Mitchell ◽  
Simon Mitchell ◽  
Cliff Mitchell

Abstract Advances in mathematical and computational technologies have brought unique and ground-breaking benefits to diverse fields throughout society (engineering, medicine, economics, etc.). Within legal systems, however, the potential applications of data science and innovative mathematical tools have yet to be embraced with the same ambition. The complex decision-making needed to reach just verdicts is often seen as out of reach for such approaches and, in the case of criminal trials, this inhibits exploration of whether machine learning could have a positive impact. Here, by assigning numerical scores to prosecution and defence evidence and employing an approach based on dimensionality reduction, we showed that evidence strands presented at historical murder trials could be used to train effective machine-learning models. We tested the evidence-quantification approach with the trained model and showed that, through machine learning, criminal cases could be clearly classified (probability >99.9%) as belonging to either a guilty or a not-guilty category. The classification matched expectations for all test cases: guilty test cases that were not wrongful convictions were correctly assigned to the guilty category and, crucially, test cases that were wrongful convictions were correctly assigned to the not-guilty category. This work demonstrates the potential for machine learning to benefit criminal trial decision-making and should motivate further testing and development of the model and datasets for assisting the judicial process.
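
No code accompanies this abstract; the sketch below uses entirely synthetic data and an invented feature layout (one fixed-length vector of evidence-strand scores per case) and only illustrates the general recipe the paper describes: dimensionality reduction followed by a probabilistic classifier.

```python
# Illustrative sketch only, not the authors' model. Each case is assumed
# to be a fixed-length vector of numerical evidence-strand scores, reduced
# with PCA and classified with logistic regression.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Synthetic training set: 200 cases, 10 evidence-strand scores each.
# Guilty cases (label 1) skew toward stronger prosecution evidence.
n_cases, n_strands = 200, 10
labels = rng.integers(0, 2, n_cases)
scores = rng.normal(0.0, 1.0, (n_cases, n_strands)) + labels[:, None] * 1.5

# Dimensionality reduction followed by a probabilistic classifier.
model = make_pipeline(PCA(n_components=3), LogisticRegression())
model.fit(scores, labels)

# Classify a held-out case and report the class probability.
test_case = rng.normal(1.5, 1.0, (1, n_strands))  # strong prosecution evidence
proba = model.predict_proba(test_case)[0, 1]
print(f"P(guilty category) = {proba:.4f}")
```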


2021 ◽  
Author(s):  
John Mitchell ◽  
David Guile

The nature of work is changing rapidly, driven by the digital technologies that underpin Industry 5.0. It has been argued worldwide that engineering education must adapt to these changes, which have the potential to rewrite the core curriculum across engineering as a broader range of skills competes with traditional engineering knowledge. Although it is clear that skills such as data science, machine learning, and AI will be fundamental in the future, it is less clear how they should be integrated into existing engineering education curricula to ensure that graduates remain relevant. This chapter looks at the nature of future fusion skills and the range of strategies that might be adopted to integrate them into the existing engineering education curriculum.


2020 ◽  
Author(s):  
Raj Dandekar ◽  
Chris Rackauckas ◽  
George Barbastathis

We have developed a globally applicable diagnostic COVID-19 model by augmenting the classical SIR epidemiological model with a neural network module. Our model does not rely on data from previous epidemics such as SARS or MERS, and all parameters are optimized via machine learning algorithms applied to publicly available COVID-19 data. The model decomposes the contributions to the infection time series to analyze and compare the role of quarantine control policies employed in highly affected regions of Europe, North America, South America, and Asia in controlling the spread of the virus. For all continents considered, our results show a generally strong correlation between the strengthening of quarantine controls as learnt by the model and the actions taken by the regions' respective governments. Finally, we have hosted our quarantine diagnosis results for the 70 most affected countries worldwide on a public platform, which can be used for informed decision-making by public health officials and researchers alike.
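
The authors' implementation is not reproduced here; the following sketch shows, under simplifying assumptions (random placeholder network weights, Euler integration, normalized compartments), what an SIR model augmented with a neural-network quarantine term can look like. In practice the network weights would be fit by gradient-based optimization against reported case data.

```python
# Sketch of an SIR model augmented with a neural-network quarantine term,
# in the spirit of the approach described above. The weights W1, b1, W2, b2
# are random placeholders; fitting them to data is omitted.
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(0, 0.1, (8, 2)), np.zeros(8)
W2, b2 = rng.normal(0, 0.1, (1, 8)), np.zeros(1)

def quarantine_strength(s, i):
    """Tiny feed-forward network Q(S, I) >= 0 (softplus output)."""
    h = np.tanh(W1 @ np.array([s, i]) + b1)
    return np.log1p(np.exp(W2 @ h + b2))[0]

def step(s, i, r, q, beta=0.3, gamma=0.1, dt=0.1):
    """One Euler step: infected individuals are additionally removed into
    a quarantined compartment at the learned rate Q(S, I)."""
    Q = quarantine_strength(s, i)
    ds = -beta * s * i
    di = beta * s * i - gamma * i - Q * i
    dr = gamma * i
    dq = Q * i
    return s + ds * dt, i + di * dt, r + dr * dt, q + dq * dt

# Simulate 100 time units from a nearly fully susceptible population.
s, i, r, q = 0.99, 0.01, 0.0, 0.0
for _ in range(1000):
    s, i, r, q = step(s, i, r, q)
print(f"S={s:.3f} I={i:.3f} R={r:.3f} Q={q:.3f}")
```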


2021 ◽  
Vol 11 (8) ◽  
pp. 3296
Author(s):  
Musarrat Hussain ◽  
Jamil Hussain ◽  
Taqdir Ali ◽  
Syed Imran Ali ◽  
Hafiz Syed Muhammad Bilal ◽  
...  

Clinical Practice Guidelines (CPGs) aim to optimize patient care by assisting physicians during the decision-making process. However, guideline adherence is strongly affected by CPGs' unstructured format and by the aggregation of background information with disease-specific information. The objective of our study is to extract disease-specific information from CPGs to enhance their adherence ratio. In this research, we propose a semi-automatic mechanism for extracting disease-specific information from CPGs using pattern-matching techniques. We apply supervised and unsupervised machine-learning algorithms to CPGs to extract a list of salient terms that help distinguish recommendation sentences (RS) from non-recommendation sentences (NRS). In parallel, a group of experts analyzes the same CPG and extracts initial patterns ("heuristic patterns") using a group decision-making method, the nominal group technique (NGT). We then provide the list of salient terms to the experts and ask them to refine their extracted patterns in light of those terms. The heuristic patterns depend on specific terms and suffer from a specialization problem due to synonymy and polysemy. We therefore generalize them to part-of-speech (POS) patterns and Unified Medical Language System (UMLS) patterns, which makes the proposed method applicable to all types of CPGs. We evaluated the initial extracted patterns on asthma, rhinosinusitis, and hypertension guidelines with accuracies of 76.92%, 84.63%, and 89.16%, respectively. The accuracies increased to 78.89%, 85.32%, and 92.07%, respectively, with the refined machine-learning-assisted patterns. Our system assists physicians by locating disease-specific information in CPGs, which enhances physicians' performance and reduces CPG processing time. It is also beneficial for CPG content annotation.
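
The paper's expert-derived patterns are not published in the abstract; the regex patterns and example sentences below are invented stand-ins that merely illustrate how heuristic pattern matching can separate recommendation sentences (RS) from non-recommendation sentences (NRS).

```python
# Illustrative sketch only: the study's heuristic patterns came from
# experts via the nominal group technique and were later generalized to
# POS and UMLS patterns. These regexes are invented examples.
import re

HEURISTIC_PATTERNS = [
    r"\b(is|are) (strongly )?recommended\b",
    r"\bshould (not )?be\b",
    r"\bclinicians (should|may|must)\b",
    r"\bwe (recommend|suggest)\b",
]

def classify_sentence(sentence: str) -> str:
    """Label a CPG sentence as RS if any heuristic pattern matches."""
    lowered = sentence.lower()
    if any(re.search(p, lowered) for p in HEURISTIC_PATTERNS):
        return "RS"
    return "NRS"

examples = [
    "Inhaled corticosteroids are recommended as first-line therapy.",
    "Asthma is a chronic inflammatory disease of the airways.",
    "Clinicians should assess blood pressure at every visit.",
]
for s in examples:
    print(classify_sentence(s), "-", s)
```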


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Alan Brnabic ◽  
Lisa M. Hess

Abstract Background Machine learning is a broad term encompassing a number of methods that allow the investigator to learn from the data. These methods may permit large real-world databases to be more rapidly translated into applications that inform patient-provider decision making. Methods This systematic literature review was conducted to identify published observational research that employed machine learning to inform decision making at the patient-provider level. The search strategy was implemented and studies meeting eligibility criteria were evaluated by two independent reviewers. Relevant data related to study design, statistical methods, and strengths and limitations were identified; study quality was assessed using a modified version of the Luo checklist. Results A total of 34 publications from January 2014 to September 2020 were identified and evaluated for this review. Diverse methods, statistical packages, and approaches were used across the identified studies. The most common methods included decision tree and random forest approaches. Most studies applied internal validation, but only two conducted external validation. Most studies utilized one algorithm; only eight applied multiple machine learning algorithms to the data. Seven items on the Luo checklist were not met by more than 50% of the published studies. Conclusions A wide variety of approaches, algorithms, statistical software, and validation strategies were employed in the application of machine learning methods to inform patient-provider decision making. To ensure that decisions for patient care are made with the highest quality evidence, multiple machine learning approaches should be used, the model selection strategy should be clearly defined, and both internal and external validation should be performed. Future work should routinely employ ensemble methods incorporating multiple machine learning algorithms.
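
A minimal sketch of the review's central recommendations, comparing two algorithms with both internal and external validation, might look as follows; the datasets are synthetic placeholders, and in practice an external set would come from a different institution or time period.

```python
# Sketch of the review's recommendations using scikit-learn: compare more
# than one algorithm and report internal and external validation. Both
# datasets are synthetic; because the "external" set is generated
# independently, the score drop on it shows why external validation matters.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score, train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_ext, y_ext = make_classification(n_samples=300, n_features=20, random_state=1)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for name, model in [("random forest", RandomForestClassifier(random_state=0)),
                    ("logistic regression", LogisticRegression(max_iter=1000))]:
    # Internal validation: cross-validation plus a held-out split.
    cv_auc = cross_val_score(model, X_train, y_train, scoring="roc_auc").mean()
    model.fit(X_train, y_train)
    test_auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    # "External" validation on an independent dataset.
    ext_auc = roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1])
    print(f"{name}: cv={cv_auc:.3f} internal={test_auc:.3f} external={ext_auc:.3f}")
```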


2021 ◽  
Vol 9 (5) ◽  
pp. 538
Author(s):  
Jinwan Park ◽  
Jung-Sik Jeong

According to statistics on maritime collision accidents over the last five years (2016–2020), 95% of maritime collisions are caused by human factors. Machine learning algorithms are an emerging approach to judging the risk of collision among vessels and supporting reliable decision-making before any collision-avoidance maneuver, and can thus help reduce errors caused by navigators' carelessness. This article proposes an enhanced machine learning method to estimate ship collision risk and to support more reliable decision-making about it. The conventional support vector machine (SVM) was first applied to estimate the collision risk. Despite the SVM's ability to resolve the uncertainty problem using the collected ship parameters, it has inherent weaknesses. In this study, the relevance vector machine (RVM), which can produce reliable probabilistic results based on Bayesian theory, was therefore applied to estimate the collision risk, and the proposed method was compared with the SVM results. The comparison showed that the estimation model using the RVM is more accurate and efficient than the model using the SVM. We expect more accurate risk estimation to support sound decision-making by navigators, allowing earlier evasive action.
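
scikit-learn ships no RVM implementation, so in the hedged sketch below a Gaussian process classifier stands in as the Bayesian probabilistic comparator to a conventional SVM; the encounter features (DCPA, TCPA) and labels are synthetic placeholders, not the study's data.

```python
# Sketch comparing a conventional SVM with a Bayesian probabilistic
# classifier for collision-risk estimation. A Gaussian process classifier
# substitutes for the paper's RVM, which scikit-learn does not provide.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 400
dcpa = rng.uniform(0, 2, n)   # distance at closest point of approach (nm)
tcpa = rng.uniform(0, 30, n)  # time to closest point of approach (min)
X = np.column_stack([dcpa, tcpa])
y = ((dcpa < 0.5) & (tcpa < 12)).astype(int)  # toy collision-risk label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

svm = SVC(probability=True).fit(X_tr, y_tr)
gpc = GaussianProcessClassifier(random_state=0).fit(X_tr, y_tr)

for name, model in [("SVM", svm), ("Bayesian (GP)", gpc)]:
    acc = accuracy_score(y_te, model.predict(X_te))
    p = model.predict_proba(X_te[:1])[0, 1]
    print(f"{name}: accuracy={acc:.3f}, P(risk) for first test encounter={p:.3f}")
```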


Information ◽  
2022 ◽  
Vol 13 (1) ◽  
pp. 35
Author(s):  
Jibouni Ayoub ◽  
Dounia Lotfi ◽  
Ahmed Hammouch

The analysis of social networks has attracted a lot of attention over the last two decades. These networks are dynamic: new links appear and disappear. Link prediction is the problem of inferring, from the current state of the network, which links will appear in the future. We use information from nodes and edges to calculate the similarity between users: the more similar two users are, the higher the probability that they will connect in the future. Similarity metrics play an important role in the link prediction field. Owing to their simplicity and flexibility, many authors have proposed metrics such as Jaccard, Adamic-Adar (AA), and Katz and evaluated them using the area under the curve (AUC). In this paper, we propose a new parameterized method to enhance the AUC value of link prediction metrics by combining them with the mean received resources (MRR). Experiments show that the proposed method improves the performance of state-of-the-art metrics. Moreover, we used machine learning algorithms to classify links and confirm the efficiency of the proposed combination.
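
The MRR combination itself is not reproduced here, but the baseline workflow it builds on, scoring held-out node pairs with similarity metrics and evaluating by AUC, can be sketched on a toy graph using networkx and scikit-learn:

```python
# Baseline link-prediction sketch: hold out some edges, score candidate
# pairs with Jaccard and Adamic-Adar, and evaluate with AUC. The paper's
# MRR combination is not implemented here.
import random
import networkx as nx
from sklearn.metrics import roc_auc_score

random.seed(0)
G = nx.karate_club_graph()

# Hold out ~10% of edges as positive examples; sample an equal number of
# never-connected pairs as negatives.
edges = list(G.edges())
random.shuffle(edges)
held_out = edges[: len(edges) // 10]
G.remove_edges_from(held_out)
held = {frozenset(e) for e in held_out}
candidates = [e for e in nx.non_edges(G) if frozenset(e) not in held]
negatives = random.sample(candidates, len(held_out))

pairs = held_out + negatives
labels = [1] * len(held_out) + [0] * len(negatives)

for name, metric in [("Jaccard", nx.jaccard_coefficient),
                     ("Adamic-Adar", nx.adamic_adar_index)]:
    scores = [s for _, _, s in metric(G, pairs)]
    print(f"{name}: AUC = {roc_auc_score(labels, scores):.3f}")
```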


2020 ◽  
Vol 30 (Supplement_5) ◽  
Author(s):  
C E Chronaki ◽  
A Miglietta

Abstract Evidence-based decision-making is central to public health. Implementing evidence-informed actions is most challenging during a public health emergency such as an epidemic, when time is limited, scientific uncertainties and political pressures tend to be high, and reliable data are typically lacking. The process of including data in preparedness and training for evidence-based decision making in public health emergencies is not systematic, and it is complicated by many barriers, such as the absence of common digital tools and approaches for resource planning and for updating response plans. Health Technology Assessment (HTA) is used to improve the quality and efficiency of public health interventions and to make healthcare systems more sustainable. Many of today's public health crises are also cross-border, and countries need to collaborate in a systematic and standardized way to enhance interoperability, share data, and plan a coordinated response. Digital health tools have an important role to play in this setting, facilitating the use of knowledge about the population that can potentially be affected by the crisis within and across regional and national borders. To strengthen the impact of scientific evidence on decision-making for public health emergency preparedness and response, it is necessary to better define and align the mechanisms through which interdisciplinary evidence feeds into decision-making processes during public health emergencies, as well as the context in which these mechanisms operate. Activities and policy development in the HTA network could inform this process. The objective of this presentation is to identify barriers to evidence-based decision making during public health emergencies and to discuss how standardization in digital health and HTA processes may help overcome these barriers, leading to a more effective, coordinated, and evidence-based public health emergency response.


Author(s):  
Prof. Gowrishankar B S

The stock market is one of the most complicated and sophisticated venues for doing business. Small owners, brokerage corporations, and banking sectors all depend on it to generate revenue and distribute risk, making it a very complicated system. This paper proposes using machine learning algorithms to predict future stock prices and thereby make this unpredictable form of business a little more predictable. Machine learning makes predictions based on the values of current stock market indices by training on their previous values, and employs different models to make prediction easier and more reliable. The data must be cleansed before it can be used for prediction. This paper focuses on categorizing the various methods used for predictive analytics in different domains to date, along with their shortcomings.
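
As an illustration of the basic setup such papers survey, the sketch below predicts the next value of a synthetic random-walk "index" from its previous values; a linear model stands in for the surveyed methods, and no real market data is used.

```python
# Illustrative sketch: one-step-ahead prediction from lagged values.
# The price series is a synthetic random walk, not real market data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 1, 500))  # synthetic index values

# Build lagged features: use the previous 5 values to predict the next.
lags = 5
X = np.column_stack([prices[i:len(prices) - lags + i] for i in range(lags)])
y = prices[lags:]

# Chronological split, no shuffling, to avoid look-ahead leakage.
split = int(0.8 * len(y))
model = LinearRegression().fit(X[:split], y[:split])
preds = model.predict(X[split:])
rmse = np.sqrt(np.mean((preds - y[split:]) ** 2))
print(f"one-step-ahead RMSE on held-out data: {rmse:.3f}")
```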

