Ethical Use of Electronic Health Record Data and Artificial Intelligence: Recommendations of the Primary Care Informatics Working Group of the International Medical Informatics Association

2020 ◽  
Vol 29 (01) ◽  
pp. 051-057 ◽  
Author(s):  
Siaw-Teng Liaw ◽  
Harshana Liyanage ◽  
Craig Kuziemsky ◽  
Amanda L. Terry ◽  
Richard Schreiber ◽  
...  

Summary Objective: To create practical recommendations for the curation of routinely collected health data and artificial intelligence (AI) in primary care with a focus on ensuring their ethical use. Methods: We defined data curation as the process of management of data throughout its lifecycle to ensure it can be used in the future. We used a literature review and Delphi exercises to capture insights from the Primary Care Informatics Working Group (PCIWG) of the International Medical Informatics Association (IMIA). Results: We created six recommendations: (1) Ensure consent and a formal process to govern access and sharing throughout the data life cycle; (2) Sustainable data creation/collection requires trust and permission; (3) Pay attention to Extract-Transform-Load (ETL) processes as they may have unrecognised risks; (4) Integrate data governance and data quality management to support clinical practice in integrated care systems; (5) Recognise the need for new processes to address the ethical issues arising from AI in primary care; (6) Apply an ethical framework mapped to the data life cycle, including an assessment of data quality to achieve effective data curation. Conclusions: The ethical use of data needs to be integrated within the curation process, hence running throughout the data lifecycle. Current information systems may not fully detect the risks associated with ETL and AI; they need careful scrutiny. With distributed integrated care systems, where data are often used remote from the point of documentation, harmonised data quality assessment, management, and governance are important. These recommendations should help maintain trust and connectedness in contemporary information systems and planned developments.

2019 ◽  
Vol 28 (01) ◽  
pp. 041-046 ◽  
Author(s):  
Harshana Liyanage ◽  
Siaw-Teng Liaw ◽  
Jitendra Jonnagaddala ◽  
Richard Schreiber ◽  
Craig Kuziemsky ◽  
...  

Background: Artificial intelligence (AI) is heralded as an approach that might augment or substitute for the limited processing power of the human brain of primary health care (PHC) professionals. However, there are concerns that AI-mediated decisions may be hard to validate and challenge, or may result in rogue decisions. Objective: To form consensus about perceptions, issues, and challenges of AI in primary care. Method: A three-round Delphi study was conducted. Round 1 explored experts’ viewpoints on AI in PHC (n=20). Round 2 rated the appropriateness of statements arising from Round 1 (n=12). The third round was an online panel discussion of the findings (n=8) with members of both the International Medical Informatics Association and the European Federation of Medical Informatics Primary Health Care Informatics Working Groups. Results: PHC and informatics experts reported that AI has the potential to improve managerial and clinical decisions and processes, and that this would be facilitated by common data standards. The respondents did not agree that AI applications should learn and adapt to clinician preferences or behaviour, and they did not agree on the extent of AI's potential for harm to patients. It was more difficult to assess the impact of AI-based applications on continuity and coordination of care. Conclusion: While the use of AI in medicine should enhance healthcare delivery, we need to ensure meticulous design and evaluation of AI applications. The primary care informatics community needs to be proactive and to guide the ethical and rigorous development of AI applications so that they will be safe and effective.


2019 ◽  
Vol 28 (01) ◽  
pp. 128-134 ◽  
Author(s):  
Farah Magrabi ◽  
Elske Ammenwerth ◽  
Jytte Brender McNair ◽  
Nicolet F. De Keizer ◽  
Hannele Hyppönen ◽  
...  

Objectives: This paper draws attention to: i) key considerations for evaluating artificial intelligence (AI) enabled clinical decision support; and ii) challenges and practical implications of AI design, development, selection, use, and ongoing surveillance. Method: A narrative review of existing research and evaluation approaches, along with expert perspectives drawn from the International Medical Informatics Association (IMIA) Working Group on Technology Assessment and Quality Development in Health Informatics and the European Federation for Medical Informatics (EFMI) Working Group for Assessment of Health Information Systems. Results: There is a rich history and tradition of evaluating AI in healthcare. While evaluators can learn from past efforts, and build on best practice evaluation frameworks and methodologies, questions remain about how to evaluate the safety and effectiveness of AI that dynamically harnesses vast amounts of genomic, biomarker, phenotype, electronic record, and care delivery data from across health systems. This paper first provides a historical perspective on the evaluation of AI in healthcare. It then examines key challenges of evaluating AI-enabled clinical decision support during design, development, selection, use, and ongoing surveillance. Practical aspects of evaluating AI in healthcare, including approaches to evaluation and indicators to monitor AI, are also discussed. Conclusion: Commitment to rigorous initial and ongoing evaluation will be critical to ensuring the safe and effective integration of AI in complex sociotechnical settings. Specific enhancements that are required for the new generation of AI-enabled clinical decision support will emerge through practical application.


2016 ◽  
Vol 10 (2) ◽  
pp. 176-192 ◽  
Author(s):  
Line Pouchard

As science becomes more data-intensive and collaborative, researchers increasingly use larger and more complex data to answer research questions. The capacity of storage infrastructure, the increased sophistication and deployment of sensors, the ubiquitous availability of computer clusters, the development of new analysis techniques, and larger collaborations allow researchers to address grand societal challenges in a way that is unprecedented. In parallel, research data repositories have been built to host research data in response to the requirements of sponsors that research data be publicly available. Libraries are re-inventing themselves to respond to a growing demand to manage, store, curate, and preserve the data produced in the course of publicly funded research. As librarians and data managers develop the tools and knowledge they need to meet these new expectations, they inevitably encounter conversations around Big Data. This paper explores definitions of Big Data that have coalesced in the last decade around four commonly mentioned characteristics: volume, variety, velocity, and veracity. We highlight the issues associated with each characteristic, particularly their impact on data management and curation. We use the methodological framework of the data life cycle model to assess two models developed in the context of Big Data projects, and find them lacking. We propose a Big Data life cycle model that includes activities focused on Big Data and more closely integrates curation with the research life cycle. These activities include planning, acquiring, preparing, analyzing, preserving, and discovering, with describing the data and assuring quality being an integral part of each activity. We discuss the relationship between institutional data curation repositories and new long-term data resources associated with high performance computing centers, and reproducibility in computational science.
We apply this model by mapping the four characteristics of Big Data outlined above to each of the activities in the model. This mapping produces a set of questions that practitioners should be asking in a Big Data project.
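The mapping the abstract describes can be sketched as a simple cross-product: each of the six life cycle activities is paired with each of the four characteristics, yielding one curation question per pair. This is an illustrative sketch only; the activity and characteristic names come from the abstract, but the question template is a hypothetical example, not the authors' actual question set.

```python
# Sketch (not from the paper): crossing the six Big Data life cycle
# activities with the four characteristics to produce a checklist of
# curation questions for practitioners.

LIFECYCLE_ACTIVITIES = [
    "planning", "acquiring", "preparing",
    "analyzing", "preserving", "discovering",
]
CHARACTERISTICS = ["volume", "variety", "velocity", "veracity"]

def curation_questions() -> dict:
    """Pair every activity with every characteristic, yielding
    one question per (activity, characteristic) combination."""
    questions = {}
    for activity in LIFECYCLE_ACTIVITIES:
        for trait in CHARACTERISTICS:
            questions[(activity, trait)] = (
                f"How does {trait} affect the {activity} stage, "
                f"and what quality assurance does it require?"
            )
    return questions

qs = curation_questions()
print(len(qs))  # 6 activities x 4 characteristics = 24 questions
```

Each (activity, characteristic) pair becomes one entry a project team can review, which mirrors how the paper turns the model into a practitioner checklist.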


2021 ◽  
Vol 2 ◽  
pp. 1-7
Author(s):  
Michael Wagner ◽  
Christin Henzen ◽  
Ralph Müller-Pfefferkorn

Abstract. Metadata management is core to supporting discovery and reuse of data products, and to allowing for reproducibility of research data in Earth System Sciences (ESS). Thus, ensuring acquisition and provision of meaningful and quality-assured metadata should become an integral part of data-driven ESS projects. We propose an open-source tool for automated metadata and data quality extraction to foster the provision of FAIR data (Findable, Accessible, Interoperable, Reusable). By enabling researchers to automatically extract and reuse structured and standardized ESS-specific metadata, in particular quality information, in several components of a research data infrastructure, we support researchers along the research data life cycle.
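The kind of automated extraction and quality check the abstract describes can be illustrated with a minimal sketch. Everything here is a hypothetical assumption for illustration: the field names, the required-field profile, and the completeness score are not the tool's actual implementation, only a stand-in for the idea of extracting standardized metadata and deriving a quality signal from it.

```python
# Hypothetical sketch of automated metadata extraction with a simple
# completeness check; field names and the quality metric are
# illustrative assumptions, not the actual tool's design.

REQUIRED_FIELDS = [  # assumed minimal ESS metadata profile
    "title", "creator", "spatial_extent", "temporal_extent",
    "license", "format", "quality_report",
]

def extract_metadata(dataset: dict) -> dict:
    """Pull the required fields out of a raw dataset record,
    recording None for anything missing."""
    return {field: dataset.get(field) for field in REQUIRED_FIELDS}

def completeness(metadata: dict) -> float:
    """Fraction of required fields present -- a crude stand-in for
    a FAIR findability/reusability quality signal."""
    filled = sum(1 for value in metadata.values() if value is not None)
    return filled / len(metadata)

record = {"title": "Sea surface temperature, 2010-2020",
          "creator": "Example ESS project", "format": "NetCDF"}
meta = extract_metadata(record)
print(round(completeness(meta), 2))  # 3 of 7 assumed fields present
```

A real implementation would read the metadata from the data files themselves (e.g., NetCDF attributes) and validate against a community metadata standard rather than a hand-written field list.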


Author(s):  
Victoria Youngohc Yoon ◽  
Peter Aiken ◽  
Tor Guimaraes

The importance of a company-wide framework for managing data resources has been recognized (Gunter, 2001; Lee, 2003, 2004; Madnick, Wang & Xian, 2003, 2004; Sawhney, 2001; Shankaranarayan, Ziad & Wang, 2003). It is considered a major component of information resources management (Guimaraes, 1988). Many organizations are discovering that imperfect data in information systems negatively affect their business operations and can be extremely costly (Brown, 2001; Keizer, 2004). The expanded data life cycle model proposed here enables us to identify links between cycle phases and data quality engineering dimensions. Expanding the data life cycle model and the dimensions of data quality will enable organizations to more effectively implement both inter- and intra-system use of their data resources, and to better coordinate the development and application of their data quality engineering methods.


Diabetes ◽  
2019 ◽  
Vol 68 (Supplement 1) ◽  
pp. 602-P
Author(s):  
Nishit Umesh Parekh ◽  
Malavika Bhaskaranand ◽  
Chaithanya Ramachandra ◽  
Sandeep Bhat ◽  
Kaushal Solanki
