specific input
Recently Published Documents

Total documents: 85 (last five years: 25)
H-index: 15 (last five years: 1)

2022, Vol 12. Author(s): Samuel P. Border, Pinaki Sarder

While it is impossible to deny the performance gains achieved through the incorporation of deep learning (DL) and other artificial intelligence (AI)-based techniques in pathology, minimal work has been done to answer the crucial question of why these algorithms predict what they predict. Tracing classification decisions back to specific input features allows for the quick identification of model bias and provides additional information toward understanding underlying biological mechanisms. In digital pathology, increasing the explainability of AI models would have the largest and most immediate impact on the image classification task. In this review, we detail some considerations that should be made in order to develop models with a focus on explainability.
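Although the review does not prescribe a single method, one simple way to trace a classification decision back to specific input features is occlusion sensitivity: slide a masking patch over the image and record how much the model's score drops at each position. The `predict` function and image below are toy stand-ins for illustration, not the authors' models.

```python
import numpy as np

def occlusion_map(image, predict, patch=8, stride=8):
    """Occlusion sensitivity: mask each region with the image mean and
    record how much the model's score drops at that position."""
    h, w = image.shape[:2]
    base = predict(image)
    heat = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = image.mean()
            heat[i, j] = base - predict(occluded)  # large drop = important region
    return heat

# Toy "model": scores the mean intensity of the top-left quadrant.
predict = lambda im: im[:16, :16].mean()
img = np.zeros((32, 32))
img[:16, :16] = 1.0
heat = occlusion_map(img, predict)  # largest drops cluster in the top-left
```

Regions whose occlusion causes the largest score drop are the features the model relies on, which is exactly the kind of signal that exposes model bias.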


AI and Ethics, 2021. Author(s): Pamela Ugwudike

Organisations, governments, institutions and others across several jurisdictions are using AI systems for a constellation of high-stakes decisions that pose implications for human rights and civil liberties. But a fast-growing multidisciplinary scholarship on AI bias is currently documenting problems such as the discriminatory labelling and surveillance of historically marginalised subgroups. One of the ways in which AI systems generate such downstream outcomes is through their inputs. This paper focuses on a specific input dynamic which is the theoretical foundation that informs the design, operation, and outputs of such systems. The paper uses the set of technologies known as predictive policing algorithms as a case example to illustrate how theoretical assumptions can pose adverse social consequences and should therefore be systematically evaluated during audits if the objective is to detect unknown risks, avoid AI harms, and build ethical systems. In its analysis of these issues, the paper adds a new dimension to the literature on AI ethics and audits by investigating algorithmic impact in the context of underpinning theory. In doing so, the paper provides insights that can usefully inform auditing policy and practice instituted by relevant stakeholders including the developers, vendors, and procurers of AI systems as well as independent auditors.


2021, Vol 20 (4), pp. ar52. Author(s): Dustin B. Thoman, Melo-Jean Yap, Felisha A. Herrera, Jessi L. Smith

The diversity intervention-resistance to action model is presented, along with interviews of biology faculty undertaken to understand how resistance to implementing diversity-enhancing classroom interventions manifests at four specific input points within a rational decision-making process, a process that too often results in inaction.


2021, Vol 30 (4), pp. 1-28. Author(s): Yida Tao, Shan Tang, Yepang Liu, Zhiwu Xu, Shengchao Qin

As data volume and complexity grow at an unprecedented rate, the performance of data manipulation programs is becoming a major concern for developers. In this article, we study how alternative API choices could improve data manipulation performance while preserving task-specific input/output equivalence. We propose a lightweight approach that leverages the comparative structures in Q&A sites to extract alternative implementations. On a large dataset of Stack Overflow posts, our approach extracts 5,080 pairs of alternative implementations that invoke different data manipulation APIs to solve the same tasks, with an accuracy of 86%. Experiments show that for 15% of the extracted pairs, the faster implementation achieved >10x speedup over its slower alternative. We also characterize 68 recurring alternative API pairs from the extraction results to understand the type of APIs that can be used alternatively. To put these findings into practice, we implement a tool, AlterApi7, to automatically optimize real-world data manipulation programs. In the 1,267 optimization attempts on the Kaggle dataset, 76% achieved desirable performance improvements with up to orders-of-magnitude speedup. Finally, we discuss notable challenges of using alternative APIs for optimizing data manipulation programs. We hope that our study offers a new perspective on API recommendation and automatic performance optimization.
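As a hypothetical example of such a pair (invented for illustration, not drawn from the paper's Stack Overflow dataset), the two pandas implementations below invoke different APIs, preserve task-specific input/output equivalence, and differ sharply in speed: the row-wise `apply` pays a Python-level call per row, while the column operation is vectorized.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": np.arange(10_000), "b": np.arange(10_000)})

# Slower implementation: row-wise apply, one Python call per row.
def slow_sum(df):
    return df.apply(lambda row: row["a"] + row["b"], axis=1)

# Faster alternative: a vectorized column operation with identical output.
def fast_sum(df):
    return df["a"] + df["b"]

# The defining property of an alternative pair: same input, same output.
assert slow_sum(df).equals(fast_sum(df))
```

Detecting that two such snippets solve the same task, and recommending the faster one, is the kind of rewrite the paper's extraction and AlterApi7 aim to automate.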


2021. Author(s): Giovanni Capobianco, Carmine Cerrone, Andrea Di Placido, Daniel Durand, Luigi Pavone, ...

Image analysis is a branch of signal analysis that focuses on the extraction of meaningful information from images through digital image processing techniques. Convolution is a technique used to enhance specific characteristics of an image, while deconvolution is its inverse process. In this work, we focus on the deconvolution process, defining a new approach to retrieve filters applied in the convolution phase. Given an image $I$ and a filtered image $I' = f(I)$, we propose three mathematical formulations that, starting from $I$ and $I'$, are able to identify the filter $f'$ that minimizes the mean absolute error between $I'$ and $f'(I)$. Several tests were performed to investigate the applicability of our approaches in different scenarios. The results highlight that the proposed algorithms are able to identify the filter used in the convolution phase in several cases. Alternatively, the developed approaches can be used to verify whether a specific input image $I$ can be transformed into a sample image $I'$ through a convolution filter, while returning the desired filter as output.
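Under the assumption that $f$ is a linear filter with a small kernel, the recovery problem can be sketched as a least-squares system: each interior pixel of $I'$ is a linear combination of the kernel entries applied to a window of $I$. This is an illustrative simplification, not the paper's three formulations (which minimize the mean absolute error).

```python
import numpy as np

def estimate_kernel(I, I_filtered, k=3):
    """Recover a k x k linear filter kernel such that applying it to I
    reproduces I_filtered, by least squares over interior pixels.
    Note: this solves for the correlation kernel; a true convolution
    kernel is its 180-degree flip."""
    h, w = I.shape
    r = k // 2
    rows, rhs = [], []
    for y in range(r, h - r):
        for x in range(r, w - r):
            rows.append(I[y - r:y + r + 1, x - r:x + r + 1].ravel())
            rhs.append(I_filtered[y, x])
    coef, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return coef.reshape(k, k)

# Toy check: filter a random image with a known 3x3 box kernel, recover it.
rng = np.random.default_rng(0)
I = rng.random((20, 20))
box = np.full((3, 3), 1 / 9)
If = np.zeros_like(I)
for y in range(1, 19):
    for x in range(1, 19):
        If[y, x] = (I[y - 1:y + 2, x - 1:x + 2] * box).sum()
est = estimate_kernel(I, If, k=3)  # est is (close to) the box kernel
```

With a random input image the system is well-conditioned (324 equations, 9 unknowns), so the box kernel is recovered essentially exactly; noisy or structured images are where the paper's more careful formulations matter.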


PLoS ONE, 2021, Vol 16 (4), pp. e0250494. Author(s): Emmanuel Ahovi, Kevin Schneider, Alfons Oude Lansink

Differences in technical efficiency across farms are one of the major factors explaining differences in farm survival and growth and changes in farm industry structure. This study employs Data Envelopment Analysis (DEA) to compute technical inefficiency scores for output, energy, materials, pesticides and fertiliser for a sample of Dutch indoor vegetable farms within the period 2006-2016. A bootstrap truncated regression model is used to determine statistical associations between producer-specific characteristics and technical inefficiency scores for the specified inputs. For the sample of indoor growers, the average technical inefficiency was about 14% for energy, 23% for materials, 24% for pesticides and 22% for fertilisers. The bootstrap truncated regression suggested that the degree of specialisation exerts adverse effects on the technical inefficiency of variable inputs. While age, short-term debt, long-term debt and subsidy were statistically significant, the coefficients were not economically significant. Building the capacity of farmers to reduce input inefficiency will enable farmers to be competitive and reduce the adverse effects of input overuse on the environment.
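For concreteness, an input-oriented, constant-returns-to-scale DEA score can be computed with a small linear program. The sketch below uses `scipy.optimize.linprog` on a two-unit toy dataset; it is a generic textbook formulation, not the study's exact (input-specific, bootstrap-corrected) model.

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_efficiency(X, Y, j0):
    """Input-oriented CRS DEA score theta for unit j0.
    X: (m, n) inputs, Y: (s, n) outputs for n units.
    Solves: min theta  s.t.  X @ lam <= theta * x0,  Y @ lam >= y0,  lam >= 0.
    Inefficiency is 1 - theta."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                     # minimise theta
    A_in = np.hstack([-X[:, [j0]], X])              # X @ lam - theta*x0 <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])       # -Y @ lam <= -y0
    A = np.vstack([A_in, A_out])
    b = np.r_[np.zeros(m), -Y[:, j0]]
    res = linprog(c, A_ub=A, b_ub=b,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

# Toy frontier: unit 0 is efficient; unit 1 uses twice the input
# for the same output, so its score is 0.5 (50% inefficiency).
X = np.array([[1.0, 2.0]])
Y = np.array([[1.0, 1.0]])
theta0 = dea_input_efficiency(X, Y, 0)  # 1.0
theta1 = dea_input_efficiency(X, Y, 1)  # 0.5
```

The study's second stage then regresses such scores on producer characteristics; the bootstrap truncation corrects for the scores being bounded and serially correlated across the DEA frontier.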


2021, Vol 5 (2), pp. 299-307. Author(s): Filippo Candela, Paolo Mulassano

The paper presents and discusses the method adopted by Compagnia di San Paolo, one of the largest European philanthropic institutions, to monitor the advancement of dedicated projects despite the COVID-19 situation and to provide specific input to the decision-making process. An innovative approach based on the use of daily open data was adopted to monitor the metropolitan area with a multidimensional perspective. Several open data indicators related to the economy, society, culture, environment, and climate were identified and incorporated into the decision support system dashboard. Indicators are presented and discussed to highlight how open data could be integrated into the foundation's strategic approach and potentially replicated on a large scale by local institutions. Moreover, starting from the lessons learned from this experience, the paper analyzes the opportunities and critical issues surrounding the use of open data, not only to improve the quality of life during the COVID-19 epidemic but also for the effective regulation of society, the participation of citizens, and their well-being.


2021. Author(s): Rolf Hut, Niels Drost, Jerom Aerts, Laurene Bouaziz, Willem van Verseveld, ...

Model comparisons are an important exercise to gain new hydrological insight from the diversity in our community's hydrological models. Current practice in model comparison studies is to have each model run by the creator or representative of that model and to combine the results of all these model runs in a single analysis.

In this work we present the first major model comparison done within the eWaterCycle Open Hydrological Platform. eWaterCycle is a platform for doing hydrological experiments where hydrological models are accessed as objects from an (online) Jupyter notebook experiment environment. Through the use of GRPC4BMI and containers, (pre-existing and newly made) models in any programming language can be used without diving into the code of those models. This makes eWaterCycle ideally suited to compare (and couple) models with widely different setups: conceptual versus distributed, for example. eWaterCycle is FAIR by design: any eWaterCycle experiment should be reproducible by anyone without the support of the original model developer. This will make it easier for hydrologists to work with each other's models and speed up the cycle of hydrological knowledge generation.

In this comparison we look at the impact of the new ERA5 dataset over the older ERA-Interim dataset as a forcing for hydrological models. A key component in making hydrological experiments reproducible and transparent in eWaterCycle is the use of ESMValTool as a pre-processor for hydrological experiments. Using ESMValTool's recipe structure ensures that model-specific input files based on ERA5 or ERA-Interim are all handled identically where possible and that model-specific operations are clearly and transparently defined.

We have run 7 models or model suites (LISFlood, MARRMoT, WFLOW, HYPE, PCRGlobWB 2.0, SUMMA, HBV) for 6 basins forced with both ERA5 and ERA-Interim and compared model outputs against GRDC discharge observations. From this broad comparison we will conclude what the impact of ERA5 over ERA-Interim will be for hydrological modelling in the foreseeable future.
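The "models as objects" idea can be sketched with a toy model exposing a simplified, BMI-like interface (the Basic Model Interface that GRPC4BMI wraps). The linear-reservoir model and its parameter values below are illustrative inventions, not one of the seven models compared, and the interface is a reduced caricature of the real BMI specification.

```python
class LinearReservoirBMI:
    """Toy linear-reservoir model with a simplified, BMI-like interface:
    initialize / update / get_value / finalize. A platform that only
    talks to these methods never needs to read the model's internals."""

    def initialize(self, k=0.1, storage=10.0):
        self.k = k              # outflow recession coefficient [1/day]
        self.storage = storage  # reservoir storage [mm]
        self.discharge = 0.0
        self.t = 0.0

    def update(self, precipitation=0.0):
        # One daily step: add forcing, release a fixed fraction as discharge.
        self.storage += precipitation
        self.discharge = self.k * self.storage
        self.storage -= self.discharge
        self.t += 1.0

    def get_value(self, name):
        return {"discharge": self.discharge, "storage": self.storage}[name]

    def finalize(self):
        pass

model = LinearReservoirBMI()
model.initialize(k=0.1, storage=10.0)
model.update(precipitation=5.0)   # forcing could come from ERA5 or ERA-Interim
q = model.get_value("discharge")  # 0.1 * (10.0 + 5.0) = 1.5
model.finalize()
```

Because every model speaks the same interface, swapping ERA5 for ERA-Interim forcing, or one model for another, changes only the inputs handed to `update`, which is what makes the comparison reproducible.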


2021, Vol 15. Author(s): Gianluca Susi, Luis F. Antón-Toro, Fernando Maestú, Ernesto Pereda, Claudio Mirasso

The recent “multi-neuronal spike sequence detector” (MNSD) architecture integrates the weight- and delay-adjustment methods by combining heterosynaptic plasticity with the neurocomputational feature of spike latency, representing a new opportunity to understand the mechanisms underlying biological learning. Unfortunately, the range of problems to which this topology can be applied is limited by the low cardinality of the parallel spike trains that it can process and by the lack of a visualization mechanism to understand its internal operation. We present here the nMNSD structure, a generalization of the MNSD to any number of inputs. The mathematical framework of the structure is introduced, together with the “trapezoid method,” a reduced method to analyze the recognition mechanism operated by the nMNSD in response to a specific input parallel spike train. We apply the nMNSD to a classification problem previously faced with the classical MNSD by the same authors, showing the new possibilities the nMNSD opens, with an associated improvement in classification performance. Finally, we benchmark the nMNSD on the classification of static inputs (the MNIST database), obtaining state-of-the-art accuracies together with advantageous time- and energy-efficiency compared to similar classification methods.
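The spike-latency feature the architecture builds on can be illustrated with a generic latency-coding scheme (an assumption for illustration, not the authors' exact formulation): stronger inputs fire earlier, and silent inputs never fire, which is how a static image such as an MNIST digit becomes a parallel spike train.

```python
import numpy as np

def latency_encode(intensities, t_max=100.0):
    """Spike-latency coding: map each intensity in [0, 1] to a spike
    time in [0, t_max], with stronger inputs firing earlier and
    zero-intensity inputs never firing (time = inf)."""
    x = np.asarray(intensities, dtype=float)
    return np.where(x > 0, t_max * (1.0 - x), np.inf)

# A bright pixel (1.0) fires at t=0, a dim one (0.25) at t=75,
# and a black pixel (0.0) stays silent.
times = latency_encode([1.0, 0.25, 0.0])
```

A downstream detector like the nMNSD then recognizes a pattern by whether the relative arrival times of these spikes match its learned delays.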

