AI/ML Models to Aid in the Diagnosis of COVID-19 Illness from Forced Cough Vocalizations: Good Machine Learning Practice and Good Clinical Practices from Concept to Consumer for AI/ML Software Devices

Author(s):  
Karl Kelley ◽  
Mona Kelley ◽  
S. Caitlin Kelley ◽  
Allison A. Sakara ◽  
Maurice A. Ramirez

From a comprehensive and systematic search of the relevant literature on signal data signature (SDS)-based artificial intelligence/machine learning (AI/ML) systems designed to aid in the diagnosis of COVID-19 illness, we identified the highest-quality articles with statistically significant data sets for a head-to-head comparison with our own model in development. Further comparisons were made to the recently released "Good Machine Learning Practice (GMLP) for Medical Device Development: Guiding Principles" and, in our conclusions, we propose supplemental principles aimed at bringing AI/ML technologies into closer alignment with GMLP and Good Clinical Practices (GCP).
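To make the kind of signal-data-signature pipeline described above more concrete, the sketch below extracts MFCC summary features from forced-cough recordings and trains a generic classifier. It is a minimal illustration only, not the authors' model; the file names, labels, feature choices, and classifier are assumptions.

```python
# Minimal sketch of an SDS-style cough-audio pipeline (NOT the authors' model).
# File names, labels, and the choice of MFCC features + random forest are
# illustrative assumptions.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def cough_features(wav_path, sr=16000, n_mfcc=20):
    """Summarise a forced-cough recording as a fixed-length feature vector."""
    audio, _ = librosa.load(wav_path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)
    # Mean and standard deviation over time form a simple signal "signature".
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical labelled recordings: 1 = COVID-19 positive, 0 = negative.
paths, labels = ["cough_001.wav", "cough_002.wav"], [1, 0]
X = np.stack([cough_features(p) for p in paths])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
print(clf.predict_proba(X)[:, 1])  # screening-style probabilities, not diagnoses
```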

2019 ◽  
Author(s):  
James A Smith ◽  
Roxanna E Abhari ◽  
Zain U Hussain ◽  
Carl Heneghan ◽  
Gary S Collins ◽  
...  

Abstract
Objectives: To determine the extent and disclosure of financial ties to industry and the use of scientific evidence in comments on a US Food and Drug Administration (FDA) regulatory framework for modifications to artificial intelligence/machine learning (AI/ML)-based software as a medical device (SaMD).
Design: Cross-sectional study.
Setting: We searched all publicly available comments on the FDA "Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) - Discussion Paper and Request for Feedback" from April 2nd 2019 to August 8th 2019.
Main outcome measures: The proportion of articles submitted by parties with financial ties to industry, disclosing those ties, citing scientific articles, citing systematic reviews and meta-analyses, and using a systematic process to identify relevant literature.
Results: We analysed 125 comments submitted on the proposed framework. 79 (63%) comments came from parties with financial ties; for 36 (29%) comments it was not clear, and the absence of financial ties could only be confirmed for 10 (8%) comments. No financial ties were disclosed in any of the comments that were not from industry submitters. The vast majority of submitted comments (86%) did not cite any scientific literature, just 4% cited a systematic review or meta-analysis, and no comments indicated that a systematic process was used to identify relevant literature.
Conclusions: Financial ties to industry were common and undisclosed, and scientific evidence, including systematic reviews and meta-analyses, was rarely cited. To ensure regulatory frameworks best serve patient interests, the FDA should mandate disclosure of potential conflicts of interest (including financial ties) in comments, encourage the use of scientific evidence, and encourage engagement from non-conflicted parties.
Strengths and limitations of this study:
- We analysed the extent of financial ties to industry and the use of scientific evidence in comments on the proposed FDA framework.
- We used a comprehensive strategy to attempt to identify financial ties to industry.
- Readers may be able to contribute higher quality comments to subsequent drafts of this framework.
- There is heterogeneity in the degree of conflict with respect to the framework that the recorded financial ties may represent; some ties will be more likely than others to result in biased commenting.
- Because the framework could not be classified as pro-industry or not, we did not classify the direction of opinions expressed in comments with respect to the framework and their association with financial ties.
- We do not know how information submitted to the FDA is used internally in the rule-making process.


BMJ Open ◽  
2020 ◽  
Vol 10 (10) ◽  
pp. e039969 ◽  
Author(s):  
James Andrew Smith ◽  
Roxanna E Abhari ◽  
Zain Hussain ◽  
Carl Heneghan ◽  
Gary S Collins ◽  
...  

Objectives: To determine the extent and disclosure of financial ties to industry and the use of scientific evidence in comments on a US Food and Drug Administration (FDA) regulatory framework for modifications to artificial intelligence/machine learning (AI/ML)-based software as a medical device (SaMD).
Design: Cross-sectional study.
Setting: We searched all publicly available comments on the FDA 'Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) - Discussion Paper and Request for Feedback' from 2 April 2019 to 8 August 2019.
Main outcome measures: The proportion of articles submitted by parties with financial ties to industry, disclosing those ties, citing scientific articles, citing systematic reviews and meta-analyses, and using a systematic process to identify relevant literature.
Results: We analysed 125 comments submitted on the proposed framework. 79 (63%) comments came from parties with financial ties; for 36 (29%) comments, it was not clear, and the absence of financial ties could only be confirmed for 10 (8%) comments. No financial ties were disclosed in any of the comments that were not from industry submitters. The vast majority of submitted comments (86%) did not cite any scientific literature, just 4% cited a systematic review or meta-analysis, and no comments indicated that a systematic process was used to identify relevant literature.
Conclusions: Financial ties to industry were common and undisclosed, and scientific evidence, including systematic reviews and meta-analyses, was rarely cited. To ensure regulatory frameworks best serve patient interests, the FDA should mandate disclosure of potential conflicts of interest (including financial ties) in comments, encourage the use of scientific evidence, and encourage engagement from non-conflicted parties.


2014 ◽  
Vol 48 (1) ◽  
pp. 90-97 ◽  
Author(s):  
Brian L. Wiens ◽  
Theodore C. Lystig ◽  
Scott M. Berry

2017 ◽  
Vol 113 (5/6) ◽  
Author(s):  
Kylie de Jager ◽  
Chipo Chimhundu ◽  
Trust Saidi ◽  
Tania S. Douglas ◽  
...  

A characterisation of the medical device development landscape in South Africa would be beneficial for future policy developments that encourage locally developed devices to address local healthcare needs. The landscape was explored through a bibliometric analysis (2000–2013) of relevant scientific papers, using co-authorship as an indicator of collaboration. The collaborating institutions identified were divided into four sectors: academia (A); healthcare (H); industry (I); and science and support (S). A collaboration network was drawn to show the links between the institutions and was analysed using network analysis metrics. Centrality measures identified seven dominant local institutions from three sectors. Group densities were used to quantify the extent of collaboration: the A sector collaborated the most extensively both within and between sectors, and local collaborations were more prevalent than international collaborations. Translational collaborations (AHI, HIS or AHIS) are considered pivotal in fostering medical device innovation that is both relevant and likely to be commercialised. Few such collaborations were found, suggesting room for more collaboration of these types in South Africa.
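As a rough illustration of the network-analysis approach described above, the sketch below builds a toy co-authorship graph with networkx and computes centrality and within-sector density; the institutions, sector labels, and edges are invented placeholders, not data from the study.

```python
# Toy co-authorship network in the spirit of the study's method; all nodes,
# sector labels, and edges are invented placeholders.
import networkx as nx

G = nx.Graph()
# Nodes = institutions, attribute "sector": A = academia, H = healthcare,
# I = industry, S = science and support. Edges = co-authorship links.
G.add_node("UniA", sector="A"); G.add_node("UniB", sector="A")
G.add_node("HospC", sector="H"); G.add_node("FirmD", sector="I")
G.add_node("LabE", sector="S")
G.add_edges_from([("UniA", "UniB"), ("UniA", "HospC"),
                  ("UniA", "FirmD"), ("UniB", "LabE"), ("HospC", "FirmD")])

# Centrality measures highlight dominant institutions.
print(nx.degree_centrality(G))
print(nx.betweenness_centrality(G))

# Group density quantifies how extensively one sector collaborates internally.
academia = [n for n, d in G.nodes(data=True) if d["sector"] == "A"]
print(nx.density(G.subgraph(academia)))
```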


2021 ◽  
Author(s):  
Hazel Darney

With the rapid uptake of machine learning-based artificial intelligence in our daily lives, we are beginning to realise the risks involved in implementing this technology in high-stakes decision making. This risk arises because machine learning decisions are based on human-curated datasets, meaning these decisions are not bias-free. Machine learning datasets put women at a disadvantage due to factors including (but not limited to) the historical exclusion of women in data collection, research, and design, as well as the low participation of women in artificial intelligence fields. These factors mean that applications of machine learning may fail to treat the needs and experiences of women as equal to those of men.

Research into understanding gender biases in machine learning frequently occurs within the computer science field. This has often resulted in research where bias is inconsistently defined and the proposed techniques do not engage with relevant literature outside of the artificial intelligence field. This research proposes a novel, interdisciplinary approach to the measurement and validation of gender biases in machine learning. This approach translates methods of human-based gender bias measurement from psychology, forming a gender bias questionnaire for use on a machine rather than a human.

The final system produced by this research serves as a proof of concept, demonstrating the potential for a new approach to gender bias investigation. This system takes advantage of the qualitative nature of language to provide a new way of understanding gender data biases by outputting both quantitative and qualitative results. These results can then be meaningfully translated into their real-world implications.
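To illustrate just one way a quantitative-plus-qualitative bias score could be computed, the self-contained sketch below scores toy word vectors for gender association in a questionnaire-item style; the vectors, word lists, and scoring rule are invented and are not the thesis's instrument.

```python
# Toy illustration of scoring a learned representation for gender association,
# in the spirit of a questionnaire item. The vectors and word lists are
# invented; this is not the thesis's measurement instrument.
import numpy as np

# Stand-ins for embeddings learned from a human-curated corpus.
vec = {
    "doctor": np.array([0.9, 0.2]),
    "nurse":  np.array([0.2, 0.9]),
    "he":     np.array([1.0, 0.0]),
    "she":    np.array([0.0, 1.0]),
}

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def item_score(target):
    """Questionnaire-style item: is `target` associated more with 'he' or 'she'?"""
    return cos(vec[target], vec["he"]) - cos(vec[target], vec["she"])

for word in ("doctor", "nurse"):
    s = item_score(word)
    # A quantitative score plus a qualitative reading of its direction.
    print(word, round(s, 2), "male-leaning" if s > 0 else "female-leaning")
```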


Electronics ◽  
2022 ◽  
Vol 11 (2) ◽  
pp. 245
Author(s):  
Konstantinos G. Liakos ◽  
Georgios K. Georgakilas ◽  
Fotis C. Plessas ◽  
Paris Kitsos

Hardware trojans (HTs) are a significant problem in the field of hardware security. HTs can be inserted into a circuit at any phase of the production chain, and they degrade the infected circuit, destroy it, or leak encrypted data. Efforts are being made to address HTs through machine learning (ML) techniques, mainly at the gate-level netlist (GLN) phase, but there are some restrictions. Specifically, the number and variety of normal and infected circuits available in free public libraries such as Trust-HUB are limited to the few benchmark samples that have been created from large circuits. It is therefore difficult, based on these data, to develop robust ML-based models against HTs. In this paper, we propose a new deep learning (DL) tool named Generative Artificial Intelligence Netlists SynthesIS (GAINESIS). GAINESIS is based on the Wasserstein Conditional Generative Adversarial Network (WCGAN) algorithm and area–power analysis features from the GLN phase, and it synthesizes new normal and infected circuit samples for this phase. Using our GAINESIS tool, we synthesized new data sets of different sizes and developed and compared seven ML classifiers. The results demonstrate that our newly generated data sets significantly enhance the performance of ML classifiers compared with the initial Trust-HUB data set.
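As a rough sketch of the conditional-generation idea behind a tool like GAINESIS (not the published architecture), the snippet below defines a label-conditioned generator that emits synthetic area–power feature vectors for "normal" and "HT-infected" classes; all layer sizes and dimensions are assumptions.

```python
# Minimal conditional-generator sketch in PyTorch, illustrating the idea of
# synthesising labelled feature vectors; NOT the published GAINESIS/WCGAN
# architecture. All dimensions and layer sizes are assumptions.
import torch
import torch.nn as nn

N_FEATURES, N_CLASSES, Z_DIM = 8, 2, 16  # assumed sizes

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.label_emb = nn.Embedding(N_CLASSES, N_CLASSES)
        self.net = nn.Sequential(
            nn.Linear(Z_DIM + N_CLASSES, 64), nn.ReLU(),
            nn.Linear(64, N_FEATURES),
        )

    def forward(self, z, labels):
        # Condition the noise vector on the class label (normal / infected).
        return self.net(torch.cat([z, self.label_emb(labels)], dim=1))

gen = Generator()
z = torch.randn(4, Z_DIM)
labels = torch.tensor([0, 0, 1, 1])  # 0 = normal, 1 = HT-infected
synthetic = gen(z, labels)           # four synthetic feature vectors
print(synthetic.shape)               # torch.Size([4, 8])
# In a WCGAN, these samples would be scored by a critic trained with the
# Wasserstein loss, and the augmented data used to train HT classifiers.
```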


Author(s):  
Fernando Enrique Lopez Martinez ◽  
Edward Rolando Núñez-Valdez

IoT, big data, and artificial intelligence are currently three of the most relevant and rapidly developing technologies for innovation and predictive analysis in healthcare. Many healthcare organizations are already working on developing their own home-centric data collection networks and intelligent big data analytics systems based on machine-learning principles. The benefit of using IoT, big data, and artificial intelligence for community and population health is better health outcomes for those populations and communities. The new generation of machine-learning algorithms can use the large standardized data sets generated in healthcare to improve the effectiveness of public health interventions. These data come from sensors, devices, electronic health records (EHRs), data generated by public health nurses, mobile data, social media, and the internet. This chapter presents a high-level implementation of a complete IoT, big data, and machine learning solution deployed in the city of Cartagena, Colombia, for hypertensive patients, using an eHealth sensor and Amazon Web Services components.
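As a minimal, self-contained sketch of the home-centric flow described above (not the Cartagena deployment), the snippet below packages an eHealth blood-pressure reading as a message and applies a toy screening rule standing in for a trained model; field names and thresholds are assumptions.

```python
# Minimal sketch of a home-centric eHealth flow; field names, topic structure,
# and the screening threshold are illustrative assumptions, not the deployed
# Cartagena system.
import json
from datetime import datetime, timezone

def package_reading(patient_id: str, systolic: int, diastolic: int) -> str:
    """Serialise a blood-pressure reading as it might be sent to a cloud broker."""
    return json.dumps({
        "patient_id": patient_id,
        "systolic_mmHg": systolic,
        "diastolic_mmHg": diastolic,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

def hypertension_flag(systolic: int, diastolic: int) -> bool:
    """Toy screening rule standing in for a trained ML model."""
    return systolic >= 140 or diastolic >= 90

msg = package_reading("patient-001", 152, 95)
print(msg)
print("alert:", hypertension_flag(152, 95))
# In the deployed system, messages like `msg` would be published to an AWS IoT
# topic and analysed at scale by trained models rather than a fixed threshold.
```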

