Curated Data In — Trustworthy In Silico Models Out: The Impact of Data Quality on the Reliability of Artificial Intelligence Models as Alternatives to Animal Testing

2021 ◽  
pp. 026119292110296
Author(s):  
Vinicius M. Alves ◽  
Scott S. Auerbach ◽  
Nicole Kleinstreuer ◽  
John P. Rooney ◽  
Eugene N. Muratov ◽  
...  

New Approach Methodologies (NAMs) that employ artificial intelligence (AI) for predicting adverse effects of chemicals have generated optimistic expectations as alternatives to animal testing. However, the major underappreciated challenge in developing robust and predictive AI models is the impact of the quality of the input data on the model accuracy. Indeed, poor data reproducibility and quality have been frequently cited as factors contributing to the crisis in biomedical research, as well as similar shortcomings in the fields of toxicology and chemistry. In this article, we review the most recent efforts to improve confidence in the robustness of toxicological data and investigate the impact that data curation has on the confidence in model predictions. We also present two case studies demonstrating the effect of data curation on the performance of AI models for predicting skin sensitisation and skin irritation. We show that, whereas models generated with uncurated data had a 7–24% higher correct classification rate (CCR), the perceived performance was, in fact, inflated owing to the high number of duplicates in the training set. We assert that data curation is a critical step in building computational models, to help ensure that reliable predictions of chemical toxicity are achieved through use of the models.
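To make the mechanism behind that inflation concrete, the following minimal sketch (synthetic data and random labels, not the authors' toxicity datasets) shows how duplicate records that straddle cross-validation folds let a model be scored partly on compounds it has already memorised, raising the apparent CCR for no genuine reason:

```python
# Sketch of how duplicates inflate cross-validated performance: copies of
# the same record can land in both the training and test folds, so the
# model is partly "tested" on points it has already seen.
# Synthetic features and random labels; true CCR should be ~0.5.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 64))      # stand-in for chemical descriptors
y = rng.integers(0, 2, size=300)    # random labels: no real signal

# Inject duplicates, as uncurated datasets often contain when the same
# compound is reported more than once.
dup = rng.choice(300, size=150, replace=False)
X_dup = np.vstack([X, X[dup]])
y_dup = np.concatenate([y, y[dup]])

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CCR, curated (no duplicates):", cross_val_score(clf, X, y, cv=5).mean())
print("CCR, uncurated (duplicates): ", cross_val_score(clf, X_dup, y_dup, cv=5).mean())
```

On random labels, the curated CCR hovers near chance, while the duplicated set scores markedly higher purely through memorisation.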

2009 ◽  
Vol 54 (2) ◽  
pp. 188-196 ◽  
Author(s):  
Martin Macfarlane ◽  
Penny Jones ◽  
Carsten Goebel ◽  
Eric Dufour ◽  
Joanna Rowland ◽  
...  

The paper describes the main risks that banks face in their activities, above all credit risk, whose level is determined by the size of the financial losses incurred if a borrower fails to repay the loan principal and the interest on it. Models for assessing the probability of borrower default (statistical, artificial intelligence-based and theoretical) are reviewed and compared, and their advantages and disadvantages are identified. It is shown that, as a rule, a theoretical model underlies the application of statistical or artificial intelligence models, and that the choice of model depends on the purpose of the assessment, the information available, the bank's technical and technological support and the skills of its staff. A particular bank borrower, a firm operating in the agricultural sector, is then analysed. The credit risk assessment procedure follows the NBU regulation governing how Ukrainian banks determine the size of credit risk for active banking operations. The stages of the procedure are: calculation of an integral indicator; determination of the debtor's class and the probability of borrower default; calculation of the exposure at risk and of the level of losses in case of default; and assessment of the credit risk itself. To determine the probability of borrower default, the classic Altman model was used to calculate an integral indicator from the enterprise's financial indicators; the enterprise's class and probability of default were then determined according to the bank's internal rating. The size of the credit risk was calculated and its impact on the bank's prudential ratios was estimated. Since the loan falls into the large-exposure group, the bank must continuously monitor its effect on the relevant credit risk standards. Basic recommendations for issuing and servicing the loan are provided.
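For reference, a hedged sketch of the classic (1968) Altman Z-score referred to above: the coefficients and zone cut-offs are the standard published ones for public manufacturing firms, while the input figures and the bank's internal mapping from Z-score to default probability are illustrative assumptions.

```python
# Classic Altman Z-score: Z = 1.2*X1 + 1.4*X2 + 3.3*X3 + 0.6*X4 + 1.0*X5,
# built from five financial ratios of the enterprise.
def altman_z(working_capital, retained_earnings, ebit,
             market_value_equity, sales, total_assets, total_liabilities):
    x1 = working_capital / total_assets        # liquidity
    x2 = retained_earnings / total_assets      # accumulated profitability
    x3 = ebit / total_assets                   # operating efficiency
    x4 = market_value_equity / total_liabilities  # leverage
    x5 = sales / total_assets                  # asset turnover
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

def zone(z):
    # Standard cut-offs: > 2.99 safe, 1.81-2.99 grey, < 1.81 distress.
    if z > 2.99:
        return "safe"
    return "grey" if z >= 1.81 else "distress"

# Illustrative balance-sheet figures only, not the analysed firm's data.
z = altman_z(working_capital=120_000, retained_earnings=80_000, ebit=60_000,
             market_value_equity=400_000, sales=900_000,
             total_assets=1_000_000, total_liabilities=500_000)
print(round(z, 2), zone(z))   # -> 1.83 grey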


Sensors ◽  
2021 ◽  
Vol 21 (18) ◽  
pp. 6195
Author(s):  
Paul-Lou Benedick ◽  
Jérémy Robert ◽  
Yves Le Traon

Artificial Intelligence (AI) is one of the hottest topics in our society, especially when it comes to solving data-analysis problems. Industries are conducting their digital shift, and AI is becoming a cornerstone technology for making decisions out of the huge amount of (sensor-based) data available on the production floor. However, such technology may be disappointing when deployed in real conditions. Despite good theoretical performance and high accuracy when trained and tested in isolation, a Machine-Learning (M-L) model may perform poorly in real conditions. One reason may be the models' fragility in handling unexpected or perturbed data. The objective of this paper is therefore to study the robustness of seven M-L and Deep-Learning (D-L) algorithms when classifying univariate time-series under perturbations. A systematic approach is proposed for artificially injecting perturbations into the data and for evaluating the robustness of the models. This approach focuses on two perturbations that are likely to occur during data collection. Our experimental study, conducted on twenty sensor datasets from the public University of California Riverside (UCR) repository, shows great disparity in the models' robustness under data quality degradation. These results are used to analyse whether the impact of such perturbations can be predicted (using decision trees), which would remove the need to test every perturbation scenario. Our study shows that building such a predictor is not straightforward, and suggests that this kind of systematic approach is needed for evaluating AI models' robustness.
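A minimal sketch of this systematic evaluation approach follows; the two perturbations shown (additive sensor noise and dropped readings) are plausible collection-time faults assumed for illustration, not necessarily the two studied in the paper, and the dataset is a synthetic stand-in for a UCR series.

```python
# Train a classifier on clean univariate time-series, then re-score it on
# copies perturbed at increasing severity to chart accuracy degradation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

def make_series(n, label):
    # Synthetic stand-in for a UCR dataset: two classes = two frequencies.
    t = np.linspace(0, 1, 100)
    freq = 3 if label == 0 else 5
    return np.sin(2 * np.pi * freq * t) + 0.1 * rng.normal(size=(n, 100))

X = np.vstack([make_series(100, 0), make_series(100, 1)])
y = np.array([0] * 100 + [1] * 100)
clf = LogisticRegression(max_iter=1000).fit(X, y)

def add_noise(X, sigma):            # perturbation 1: sensor noise
    return X + sigma * rng.normal(size=X.shape)

def drop_samples(X, frac):          # perturbation 2: lost readings, zero-filled
    X = X.copy()
    X[rng.random(X.shape) < frac] = 0.0
    return X

for sigma in (0.0, 0.5, 1.0):
    acc = accuracy_score(y, clf.predict(add_noise(X, sigma)))
    print(f"noise sigma={sigma}: acc={acc:.2f}")
for frac in (0.0, 0.2, 0.5):
    acc = accuracy_score(y, clf.predict(drop_samples(X, frac)))
    print(f"dropped frac={frac}: acc={acc:.2f}")
```

Sweeping each perturbation's severity, as above, yields the robustness profile per model that the paper's approach then tries to predict.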


2020 ◽  
Author(s):  
Joyce V. V. B. Borba ◽  
Vinicius Alves ◽  
Rodolpho Braga ◽  
Daniel Korn ◽  
Kirsten Overdahl ◽  
...  

Since 2009, animal testing for cosmetic products has been prohibited in Europe, and in 2016, the US EPA announced its intent to modernize the so-called "6-pack" of acute toxicity tests (acute oral toxicity, acute dermal toxicity, acute inhalation toxicity, skin irritation and corrosion, eye irritation and corrosion, and skin sensitization) and expand acceptance of alternative methods to reduce animal testing of pesticides. We have compiled, curated, and integrated the largest publicly available dataset and developed an ensemble of QSAR models for all six endpoints. All models were validated according to the OECD QSAR principles and tested using newly identified data on compounds not included in the training sets. We have established a publicly accessible Systemic and Topical chemical Toxicity (STopTox) web portal (https://stoptox.mml.unc.edu/) integrating all developed models for "6-pack" assays. This portal can be used by scientists and regulators to identify putative toxicants or non-toxicants in chemical libraries of interest.
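As an illustration of the kind of model ensembled in such a portal (a hedged sketch, not the STopTox pipeline itself), one binary QSAR classifier can be built from RDKit Morgan fingerprints and a random forest; the SMILES strings and toxicity labels below are placeholders.

```python
# One binary QSAR classifier: Morgan fingerprints -> random forest.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def featurize(smiles):
    # 2048-bit Morgan fingerprint (radius 2) as a numpy vector.
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
    arr = np.zeros(2048, dtype=np.int8)
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

# Placeholder compounds and toxicity calls; a real model would be trained
# on a curated endpoint dataset such as those described above.
train_smiles = ["CCO", "c1ccccc1", "CC(=O)O", "CCN"]
train_labels = [0, 1, 0, 1]

X = np.array([featurize(s) for s in train_smiles])
clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, train_labels)

query = featurize("CCOC(=O)C")                # ethyl acetate, illustrative query
print("predicted class:", clf.predict([query])[0])
print("P(class 1):", clf.predict_proba([query])[0][1])
```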


Author(s):  
Joshua Bensemann ◽  
Qiming Bao ◽  
Gaël Gendron ◽  
Tim Hartill ◽  
Michael Witbrock

Processes occurring in brains, that is, in biological neural networks, can be, and have been, modeled within artificial neural network architectures. We have therefore conducted a review of research on the phenomenon of blindsight, in an attempt to generate ideas for artificial intelligence models. Blindsight can be considered a diminished form of visual experience. If we assume that artificial networks have no form of visual experience, then the deficits caused by blindsight give us insights into the processes occurring within visual experience that we can incorporate into artificial neural networks. This paper is structured into three parts. Section 2 reviews blindsight research, looking specifically at the errors occurring under this condition compared to normal vision. Section 3 identifies overall patterns from Sec. 2 to generate insights for computational models of vision. Section 4 demonstrates the utility of examining biological research to inform artificial intelligence research, by examining computational models of visual attention relevant to one of the insights generated in Sec. 3. The research covered in Sec. 4 shows that incorporating one of our insights into computational vision does benefit those models. Future research will be required to determine whether our other insights are as valuable.


1998 ◽  
Vol 26 (5) ◽  
pp. 709-720 ◽  
Author(s):  
Andrew P. Worth ◽  
Julia H. Fentem ◽  
Michael Balls ◽  
Philip A. Botham ◽  
Rodger D. Curren ◽  
...  

The use of testing strategies which incorporate a range of alternative methods and which use animals only as a last resort is widely considered to provide a reliable way of predicting chemical toxicity while minimising animal testing. The widespread concern over the severity of the Draize rabbit test for assessing skin irritation and corrosion led to the proposal of a stepwise testing strategy at an OECD workshop in January 1996. Subsequently, the proposed testing strategy was adopted, with minor modifications, by the OECD Advisory Group on Harmonization of Classification and Labelling. This article reports an evaluation of the proposed OECD testing strategy as it relates to the classification of skin corrosives. By using a set of 60 chemicals, an assessment was made of the effect of applying three steps in the strategy, taken both individually and in sequence. The results indicate that chemicals can be classified as corrosive (C) or non-corrosive (NC) with sufficient reliability by the sequential application of three alternative methods, i.e., structure-activity relationships (where available), pH measurements, and a single in vitro method (either the rat skin transcutaneous electrical resistance (TER) assay or the EPISKIN™ assay). It is concluded that the proposed OECD strategy for skin corrosion can be simplified without compromising its predictivity. For example, it does not appear necessary to measure acid/alkali reserve (buffering capacity) in addition to pH for the classification of pure chemicals.
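A schematic sketch of such a sequential strategy is shown below; the step logic and the extreme-pH cut-offs (pH ≤ 2.0 or ≥ 11.5) follow the commonly cited rule, but the exact decision criteria of the OECD strategy are not reproduced here.

```python
# Tiered testing strategy: apply steps in sequence and stop at the first
# firm corrosive ("C") or non-corrosive ("NC") call.
def sar_step(chem):
    # Step 1: structure-activity relationship prediction, where a model exists.
    return chem.get("sar_prediction")          # "C", "NC", or None

def ph_step(chem):
    # Step 2: extreme pH alone supports a corrosive call.
    ph = chem.get("ph")
    if ph is not None and (ph <= 2.0 or ph >= 11.5):
        return "C"
    return None                                # mid-range pH is not decisive

def in_vitro_step(chem):
    # Step 3: a single in vitro assay (rat skin TER or EPISKIN).
    return chem.get("in_vitro_result")         # "C", "NC", or None

def classify(chem, steps=(sar_step, ph_step, in_vitro_step)):
    for step in steps:
        verdict = step(chem)
        if verdict in ("C", "NC"):
            return verdict                     # stop at the first firm call
    return "unclassified"                      # would require further testing

print(classify({"ph": 1.2}))                              # -> C
print(classify({"ph": 7.0, "in_vitro_result": "NC"}))     # -> NC
```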


Electronics ◽  
2021 ◽  
Vol 10 (23) ◽  
pp. 2901
Author(s):  
Lilia Muñoz ◽  
Vladimir Villarreal ◽  
Mel Nielsen ◽  
Yen Caballero ◽  
Inés Sittón-Candanedo ◽  
...  

The rapid spread of SARS-CoV-2 and the consequent global COVID-19 pandemic has prompted the public administrations of different countries to establish health procedures and protocols based on information generated through predictive techniques and models, which, in turn, are based on technology such as artificial intelligence (AI) and machine learning (ML). This article presents some AI tools and computational models used to assist in the control and detection of COVID-19 cases. In addition, the main features of the Epidempredict project regarding COVID-19 in Panama are presented. This initiative consists of the planning and design of a cloud-based digital platform to manage the ingestion, analysis, visualization and export of data on the evolution of COVID-19 in Panama. The methodology for the design of the predictive algorithms is based on a hybrid model that combines the population-level dynamics of an SIR model of differential equations with extrapolation by recurrent neural networks. The technological solution developed allows the rules implemented in the expert processes under consideration to be adjusted. Furthermore, the resulting information is displayed and explored through user-friendly dashboards, contributing to more meaningful decision-making processes.
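As a pointer to the deterministic half of that hybrid, the sketch below integrates the classic SIR ODE system with SciPy; the parameters are illustrative assumptions rather than Panama's fitted values, and the RNN extrapolation stage is not shown.

```python
# Classic SIR compartmental model: dS/dt = -beta*S*I/N,
# dI/dt = beta*S*I/N - gamma*I, dR/dt = gamma*I.
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma, n):
    s, i, r = y
    ds = -beta * s * i / n                 # new infections leave S
    di = beta * s * i / n - gamma * i      # ...enter I, then recover
    dr = gamma * i
    return ds, di, dr

n = 4_300_000                  # population of Panama, rounded
y0 = (n - 100, 100, 0)         # 100 initial cases, illustrative
t = np.linspace(0, 180, 181)   # half a year at daily resolution
beta, gamma = 0.30, 0.10       # contact and recovery rates (assumed)

s, i, r = odeint(sir, y0, t, args=(beta, gamma, n)).T
print(f"peak infections: {i.max():,.0f} on day {t[i.argmax()]:.0f}")
```

In the hybrid design described above, a curve of this kind supplies the mechanistic trend, which the recurrent network then corrects and extrapolates against observed case data.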

