black boxes
Recently Published Documents

Total documents: 526 (last five years: 197)
H-index: 28 (last five years: 5)

Algorithms, 2022, Vol. 15(1), pp. 22
Authors: Virginia Niculescu, Robert Manuel Ştefănică

General crossword grid generation is an NP-complete problem, which in theory makes it a good candidate for use in cryptographic algorithms. In this article, we propose a new algorithm for generating perfect crossword grids (grids with no black boxes) that relies on trie data structures, which greatly reduce the time needed to find solutions and also lend themselves well to parallelisation. The algorithm uses a special trie representation and is very efficient on its own, but parallelisation improves performance to a level where solutions are obtained extremely fast. The experiments were conducted using a dictionary of almost 700,000 words, and the parallelised version obtained solutions with execution times on the order of minutes. We demonstrate that a perfect crossword grid can be found faster than previously estimated when tries are used as supporting data structures together with parallelisation. However, if the size of the dictionary is increased substantially (e.g., by considering a set of dictionaries for several languages rather than one), or if the problem is generalised to 3D or higher-dimensional grids, it could still be worth investigating for possible use in cryptography.
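The abstract's key data structure is easy to illustrate. Below is a minimal sketch (ours, not the authors' implementation) of how a trie supports crossword filling: given the letters already placed in a slot, the trie yields exactly the letters that can legally extend them, so dead branches are pruned before recursing.

```python
class TrieNode:
    __slots__ = ("children", "is_word")

    def __init__(self):
        self.children = {}   # letter -> TrieNode
        self.is_word = False

class Trie:
    def __init__(self, words):
        self.root = TrieNode()
        for w in words:
            node = self.root
            for ch in w:
                node = node.children.setdefault(ch, TrieNode())
            node.is_word = True

    def extensions(self, prefix):
        """Letters that can legally follow `prefix` in some dictionary word."""
        node = self.root
        for ch in prefix:
            node = node.children.get(ch)
            if node is None:
                return set()
        return set(node.children)

# While filling a row left to right, only letters in extensions(prefix)
# need to be tried, and the crossing column can be vetted the same way
# before the generator recurses.
trie = Trie(["ape", "apt", "art"])
print(trie.extensions("ap"))  # {'e', 't'}
```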


2022, pp. 1-27
Authors: Clifford Bohm, Douglas Kirkpatrick, Arend Hintze

Deep learning (primarily using backpropagation) and neuroevolution are the preeminent methods of optimizing artificial neural networks. However, they often create black boxes that are as hard to understand as the natural brains they seek to mimic. Previous work identified an information-theoretic measure, referred to as R, that quantifies and locates mental representations in artificial cognitive systems; such measures have made previously opaque black boxes more transparent. Here we extend R so that it not only identifies where a complex computational system stores memory about its environment but also differentiates between memories of different time points in the past. We show how this extended measure can locate memory of past experiences in neural networks optimized by deep learning as well as by a genetic algorithm.
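To make the idea concrete, here is a toy sketch of the kind of measurement involved: estimating, from sampled trajectories, how much information a discretized hidden unit carries about the environment state a fixed number of steps in the past. We use plain mutual information; the published R is more refined (it conditions on sensor states, among other things), so this is only a rough illustration.

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """I(X;Y) in bits, estimated from paired samples of discrete values."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum(
        (c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )

# env[t] is the environment state at time t, hidden[t] a binarized neuron.
env    = [0, 1, 1, 0, 1, 0, 0, 1]
hidden = [0, 1, 1, 0, 1, 0, 1, 1]
lag = 2  # compare the neuron now with the environment `lag` steps ago
print(mutual_information(env[:-lag], hidden[lag:]))
```

Scanning over `lag` is what lets the extended measure separate memories of different time points rather than just detecting that memory exists somewhere.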


2022
Authors: Simon Ott, Adriano Barbosa-Silva, Matthias Samwald

Machine learning algorithms for link prediction can be valuable tools for hypothesis generation. However, many current algorithms are black boxes or lack good user interfaces that could facilitate insight into why predictions are made. We present LinkExplorer, a software suite for predicting, explaining, and exploring links in large biomedical knowledge graphs. LinkExplorer integrates our novel rule-based link prediction engine SAFRAN, which was recently shown to outperform other explainable algorithms as well as established black-box algorithms. Here, we demonstrate highly competitive evaluation results of our algorithm on multiple large biomedical knowledge graphs and release a web interface that allows interactive and intuitive exploration of predicted links and their explanations.
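As an illustration of what a rule-based, explainable link predictor does (a sketch of the general technique, not SAFRAN's actual engine), the following toy code scores a candidate link by noisy-or aggregation of the confidences of learned rules that fire for it; the fired rules double as the explanation. The facts, the rule, and its confidence are invented for the example, and SAFRAN's contribution of clustering redundant rules before aggregation is omitted.

```python
facts = {("aspirin", "treats", "pain"),
         ("aspirin", "inhibits", "COX1"),
         ("ibuprofen", "inhibits", "COX1")}

def shares_inhibition_target(drug, disease):
    """Body of a hypothetical rule: the drug inhibits a target that some
    known treatment of the disease also inhibits."""
    for (d2, r2, dis) in facts:
        if r2 == "treats" and dis == disease and d2 != drug:
            t1 = {t for (s, r, t) in facts if s == drug and r == "inhibits"}
            t2 = {t for (s, r, t) in facts if s == d2 and r == "inhibits"}
            if t1 & t2:
                return True
    return False

rules = [(shares_inhibition_target, 0.41)]  # (rule body, learned confidence)

def score(drug, disease):
    """Noisy-or over the confidences of all rules whose body holds."""
    fired = [conf for body, conf in rules if body(drug, disease)]
    p = 1.0
    for conf in fired:
        p *= 1.0 - conf
    return 1.0 - p, fired  # score, plus the rules that explain it

print(score("ibuprofen", "pain"))  # (0.41, [0.41])
```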


2022, pp. 1803-1846
Authors: Yaëlle Chaudy, Thomas M. Connolly

Assessment is a crucial aspect of any teaching and learning process. New tools such as educational games offer promising advantages: they can personalize feedback to students and save educators time by automating the assessment process. However, while many teachers agree that educational games increase motivation, learning, and retention, few are ready to fully trust them as assessment tools. A likely reason behind this lack of trust is that educational games are distributed as black boxes, unmodifiable by educators and offering little insight into gameplay. This chapter presents three systematic literature reviews on the integration of assessment, feedback, and learning analytics in educational games. It then proposes a framework and presents a fully developed engine, used by both developers and educators. Designed to separate the game from its assessment, the engine allows teachers to modify the assessment after distribution and to visualize gameplay data via a learning analytics dashboard.
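The separation the chapter argues for can be sketched in a few lines: assessment rules live outside the game as editable data, and an engine evaluates logged gameplay events against them, so a teacher can change the assessment without touching the game. The rule format and names below are hypothetical, for illustration only.

```python
assessment_rules = {  # editable by the teacher after the game ships
    "fractions_mastery": {"event": "puzzle_solved", "topic": "fractions",
                          "min_count": 3},
}

def evaluate(events, rules):
    """Return which learning objectives the gameplay event log satisfies."""
    results = {}
    for objective, rule in rules.items():
        hits = sum(1 for e in events
                   if e["type"] == rule["event"]
                   and e.get("topic") == rule["topic"])
        results[objective] = hits >= rule["min_count"]
    return results

log = [{"type": "puzzle_solved", "topic": "fractions"}] * 3
print(evaluate(log, assessment_rules))  # {'fractions_mastery': True}
```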


2021
Authors: Brennan Klein, Erik Hoel, Anshuman Swain, Ross Griebenow, Michael Levin

The internal workings of biological systems are notoriously difficult to understand. Because noise and degeneracy are prevalent in evolved systems, the workings of everything from gene regulatory networks to protein–protein interactome networks often remain black boxes. One consequence of this black-box nature is that it is unclear at which scale a biological system should be analyzed to best understand its function. We analyzed the protein interactomes of over 1,800 species, containing in total 8,782,166 protein–protein interactions, at different scales. We show that higher-order 'macroscales' emerge in these interactomes and that these biological macroscales are associated with lower noise and degeneracy and therefore lower uncertainty. Moreover, the nodes of an interactome that make up its macroscale are more resilient than nodes that do not participate in it. These effects are more pronounced in the interactomes of eukaryota than in those of prokaryota, and the results hold even under sensitivity tests in which we recalculate the emergent macroscales after perturbing the interactomes with different simulated edge weights. This points to a plausible evolutionary adaptation: biological networks evolve informative macroscales to gain the benefits of being uncertain at lower scales, which boosts their resilience, while being 'certain' at higher scales, which increases their effectiveness at information transmission. Our work explains some of the difficulty in understanding the workings of biological networks, since they are often most informative at a hidden higher scale, and provides the tools to make these informative higher scales explicit.
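The scale comparison at the heart of this work can be sketched with the effective-information (EI) measure from the causal-emergence literature (Klein and Hoel); that this matches the paper's exact pipeline is our assumption. Noise appears as entropy in each node's out-weights, degeneracy as overlap between them, and a good macroscale (grouping of nodes) raises EI by reducing both.

```python
import numpy as np

def effective_information(W):
    """EI of a row-stochastic transition matrix W:
    H(average out-distribution) - average H(out-distribution)."""
    def H(p):
        p = p[p > 0]
        return -(p * np.log2(p)).sum()
    avg_out = W.mean(axis=0)
    return H(avg_out) - np.mean([H(row) for row in W])

# Classic toy case: three micro-nodes that wander uniformly among
# themselves, plus one isolated node. Grouping the noisy trio into a
# single macro-node yields deterministic macro-dynamics and higher EI.
micro = np.array([[1/3, 1/3, 1/3, 0],
                  [1/3, 1/3, 1/3, 0],
                  [1/3, 1/3, 1/3, 0],
                  [0.0, 0.0, 0.0, 1]])
macro = np.array([[1.0, 0.0],
                  [0.0, 1.0]])
print(effective_information(micro))  # ~0.81 bits
print(effective_information(macro))  # 1.0 bit: the macroscale is more informative
```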


2021, pp. 1-18
Authors: Wesley Yung, Siu-Ming Tam, Bart Buelens, Hugh Chipman, Florian Dumpert, ...

As national statistical offices (NSOs) modernize, interest in integrating machine learning (ML) into the official statistician's toolbox is growing. Two challenges to such an integration are the potential loss of transparency from using "black boxes" and the need to develop a quality framework. In 2019, the High-Level Group for the Modernisation of Official Statistics (HLG-MOS) launched a project on machine learning, one of whose objectives was to address these two challenges. One output of the HLG-MOS project is the Quality Framework for Statistical Algorithms (QF4SA). While many quality frameworks exist, they were conceived with traditional methods in mind and tend to target statistical outputs, whereas machine learning methods are currently being considered for processes producing intermediate outputs that lead to a final statistical output. The QF4SA therefore does not replace existing quality frameworks; because it targets intermediate outputs rather than the final statistical output, it complements them and should be used in conjunction with them to ensure that high-quality outputs are produced. This paper presents the QF4SA, along with some recommendations for NSOs considering the use of machine learning in the production of official statistics.


2021, Vol. 67(6), pp. 537-547
Authors: Danilo Legisa, Hernan Mengoni

Brewing recipe design is based mainly on the brewer's expertise and on information available in catalogs and certificates of analysis (CoAs). Hop schedule design and formulation have become an essential topic since hoppy craft beers took the scene. But how accurate is the flavor profile information provided in catalogs? How useful is the chemical composition information in CoAs? Despite current research and a wealth of reported experience, hop impact remains a mystery, and topics like biotransformation are still black boxes for brewers. In this study, nine single-hopped beers were brewed, and a trained panel conducted sensory analysis of each. To assess hop impact, qualitative and process-related quantitative beer characteristics were then contrasted to find useful correlations and trends between hop catalogs and the finished beers. Discrepancies with the qualitative catalog data were found. Beyond what is already described in the literature, we describe how α-acids, linalool, myrcene, and geraniol (beyond the classical uses of these compounds) can predict the positive and negative impact of nine different hop varieties on bitterness, flavor, and aroma when the hops are added at different steps of the brewing process. This pipeline also lays the basis for a tool, open to further improvement and available to brewers, for better predicting their brews and assessing new hop varieties in real-life pilot brewing setups.


Authors: Jun-Peng Fang, Jun Zhou, Qing Cui, Cai-Zhi Tang, Long-Fei Li

In recent years, machine learning models have achieved remarkable success in many industrial applications, but most of them are black boxes. Understanding why such predictions are made is crucial in sensitive areas such as medicine, financial markets, and autonomous driving. In this paper, we propose Coco, a novel interpretation method that can interpret any binary classifier by assigning each feature an importance value for a particular prediction. We first adopt the MixUp method to generate reasonable perturbations, then apply these perturbations under constraints to obtain counterfactual instances, and finally compute a comprehensive metric on these instances to estimate the importance of each feature. To demonstrate the effectiveness of Coco, we conduct extensive experiments on several datasets. The results show that our method outperforms state-of-the-art interpretation methods, including SHAP and LIME, at identifying the most important features.
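A hedged sketch of the recipe as we read it from the abstract, with the details assumed: blend the instance with other samples (MixUp-style), keep the blends that flip the classifier's prediction (the counterfactuals), and credit each feature by how much it had to move in those flips. The paper's actual constraints and importance metric are more elaborate.

```python
import numpy as np

def coco_style_importance(clf, x, background, n=500, seed=0):
    """Assumed reading of the method, not the paper's implementation."""
    rng = np.random.default_rng(seed)
    pred = clf(x[None])[0]
    moved, flips = np.zeros_like(x), 0
    for _ in range(n):
        lam = rng.uniform(0.5, 1.0)            # stay close to x
        other = background[rng.integers(len(background))]
        x_cf = lam * x + (1 - lam) * other     # MixUp-style perturbation
        if clf(x_cf[None])[0] != pred:         # counterfactual: label flipped
            moved += np.abs(x_cf - x)
            flips += 1
    return moved / max(flips, 1)               # mean feature shift per flip

# Toy binary classifier that thresholds feature 0 only, so the importance
# should concentrate on that feature.
clf = lambda X: (X[:, 0] > 0.5).astype(int)
background = np.random.default_rng(1).uniform(0, 1, size=(200, 3))
print(coco_style_importance(clf, np.array([0.9, 0.2, 0.2]), background))
```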


2021, Vol. 3(4), pp. 966-989
Authors: Vanessa Buhrmester, David Münch, Michael Arens

Deep Learning is a state-of-the-art technique for making inferences on extensive or complex data. Because of their multilayer nonlinear structure, Deep Neural Networks are black-box models, often criticized as non-transparent and as making predictions that humans cannot trace. Furthermore, the models learn from artificially generated datasets, which often do not reflect reality. By basing decision-making algorithms on Deep Neural Networks, prejudice and unfairness may be promoted unknowingly due to this lack of transparency. Hence, several so-called explanators, or explainers, have been developed. Explainers try to give insight into the inner structure of machine learning black boxes by analyzing the connection between input and output. In this survey, we present the mechanisms and properties of explaining systems for Deep Neural Networks applied to Computer Vision tasks. We give a comprehensive overview of the taxonomy of related studies and compare several survey papers that deal with explainability in general. We identify the drawbacks and gaps and summarize further research ideas.
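One of the simplest explainers in the family this survey covers is the gradient-based saliency map: the magnitude of the derivative of the class score with respect to each input pixel marks where the prediction is most sensitive. A minimal PyTorch sketch with an untrained toy model, for illustration only:

```python
import torch
import torch.nn as nn

# Toy convolutional classifier; any differentiable vision model works.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.Flatten(), nn.Linear(8 * 32 * 32, 10))
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)
score = model(image).max()        # score of the predicted class
score.backward()                  # gradients flow back to the input

saliency = image.grad.abs().max(dim=1).values  # max over color channels
print(saliency.shape)  # torch.Size([1, 32, 32]): one heat value per pixel
```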

