Abstraction, Validation, and Generalization for Explainable Artificial Intelligence

Author(s):  
Scott Cheng-Hsin Yang ◽  
Tomas Folke ◽  
Patrick Shafto

Neural network architectures are achieving superhuman performance on an expanding range of tasks. To effectively and safely deploy these systems, their decision-making must be understandable to a wide range of stakeholders. Methods to explain AI have been proposed to answer this challenge, but a lack of theory impedes the development of systematic abstractions, which are necessary for cumulative knowledge gains. We propose Bayesian Teaching as a framework for unifying explainable AI (XAI) by integrating machine learning and human learning. Bayesian Teaching formalizes explanation as a communication act of an explainer to shift the beliefs of an explainee. This formalization decomposes any XAI method into four components: (1) the inference to be explained, (2) the explanatory medium, (3) the explainee model, and (4) the explainer model. The abstraction afforded by this decomposition elucidates the invariances among XAI methods. The decomposition also enables modular validation, as each of the first three components can be tested semi-independently. It further promotes generalization through recombination of components from different XAI systems, which facilitates the generation of novel variants. These new variants need not be evaluated one by one provided that each component has been validated, leading to an exponential decrease in development time. Finally, by making the goal of explanation explicit, Bayesian Teaching helps developers assess how suitable an XAI system is for its intended real-world use case. Thus, Bayesian Teaching provides a theoretical framework that encourages systematic, scientific investigation of XAI.
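
To make the four-component decomposition concrete, the sketch below works through Bayesian Teaching on a toy discrete problem. The hypothesis space, the likelihood values, and all variable names are illustrative assumptions rather than the authors' implementation; the selection rule it instantiates is a standard form of the Bayesian teaching rule, in which the explainer scores each candidate explanation by the belief it induces in the explainee, P_T(x | theta*) proportional to P_L(theta* | x).

```python
import numpy as np

# Toy illustration of the four components; names and numbers are invented.

# (3) Explainee model: a Bayesian learner with a prior over three
#     hypotheses and a likelihood for four candidate explanations.
prior = np.array([0.5, 0.3, 0.2])        # P_L(theta)
likelihood = np.array([                  # rows: explanations x, cols: theta
    [0.6, 0.1, 0.2],
    [0.1, 0.6, 0.1],
    [0.2, 0.1, 0.5],
    [0.1, 0.2, 0.2],
])  # each column sums to 1, so P(x | theta) is a proper distribution

def explainee_posterior(x):
    """P_L(theta | x): the explainee's belief after seeing explanation x."""
    unnorm = likelihood[x] * prior
    return unnorm / unnorm.sum()

# (1) Inference to be explained: the target hypothesis theta*.
target = 0

# (4) Explainer model: prefer explanations in proportion to how strongly
#     they shift the explainee toward theta*: P_T(x | theta*) ~ P_L(theta* | x).
scores = np.array([explainee_posterior(x)[target] for x in range(len(likelihood))])
teacher = scores / scores.sum()

# (2) The explanatory medium here is just this discrete set of candidate
#     explanations; in a deployed system it might be examples or saliency maps.
print("P_T(x | theta*):", teacher.round(3), "best x:", int(np.argmax(teacher)))
```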

2008 ◽ 
Vol 16 ◽  
pp. 1-10 ◽  
Author(s):  
M Fazlul Hoque ◽  
W Islam ◽  
M Khalequzzaman

Life tables of Tetranychus urticae and Phytoseiulus persimilis on bean leaflets were studied under laboratory conditions in three seasons. For T. urticae, development time from egg to adult varied from 7 to 24 days, and immature mortality was highest (78.70%) in winter. Females laid 88.1 eggs in autumn and 70.6 eggs in summer. The gross reproductive rate (GRR) was highest in autumn (65.51) and 52.50 in summer. The net reproductive rate (Ro) was highest in autumn (15.862) and 8.916 in summer. The intrinsic rate of increase (rm) and the finite capacity for increase (λ) reached maximal values (0.1873 and 1.206) in autumn, whereas minimal values (0.056 and 1.058) occurred in winter. The mean generation time (T) was shortest in summer and about twice as long (3.701) in autumn. The development time of P. persimilis from egg to adult varied from 5 to 14 days; immature mortality was highest (60%) in summer. Females laid 39.4 eggs in autumn and 30.2 eggs in summer. The GRR was highest in autumn (31.4) and 24.0 in summer. Ro was highest in autumn (10.573) and 8.460 in winter. rm and λ reached maximal values (0.1823 and 1.200) in summer, whereas minimal values (0.1025 and 1.108) occurred in winter. The mean generation time (T) was shortest in summer. The results suggest that P. persimilis can develop and reproduce within a wide range of temperatures.
Key words: Tetranychus urticae, Phytoseiulus persimilis, immature mortality, intrinsic rate of increase, reproductive rate, survival
DOI: 10.3329/jbs.v16i0.3733
J. bio-sci. 16: 1-10, 2008
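
The life-table statistics reported above follow standard textbook formulas: GRR = Σ mx, Ro = Σ lx·mx, T = Σ x·lx·mx / Ro, rm ≈ ln(Ro)/T, and λ = e^rm. The sketch below applies them to an invented survivorship and fecundity schedule, not data from this study.

```python
import numpy as np

# Invented survivorship (lx) and fecundity (mx) schedules for illustration;
# these are not data from the study.
age = np.arange(1, 8)                                 # x: age class
lx = np.array([1.0, 0.9, 0.8, 0.7, 0.5, 0.3, 0.1])    # proportion surviving to x
mx = np.array([0.0, 2.0, 5.0, 6.0, 4.0, 2.0, 0.5])    # female offspring at age x

grr = mx.sum()                   # gross reproductive rate: sum of mx
r0 = (lx * mx).sum()             # net reproductive rate Ro: sum of lx * mx
t = (age * lx * mx).sum() / r0   # mean generation time T
rm = np.log(r0) / t              # intrinsic rate of increase (approximation)
lam = np.exp(rm)                 # finite capacity for increase (lambda)

print(f"GRR={grr:.2f}  Ro={r0:.3f}  T={t:.2f}  rm={rm:.4f}  lambda={lam:.3f}")
```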


2017 ◽  
Author(s):  
Charlie W. Zhao ◽  
Mark J. Daley ◽  
J. Andrew Pruszynski

First-order tactile neurons have spatially complex receptive fields. Here we use machine learning tools to show that such complexity arises for a wide range of training sets and network architectures, and benefits network performance, especially on more difficult tasks and in the presence of noise. Our work suggests that spatially complex receptive fields are normatively good given the biological constraints of the tactile periphery.


Author(s):  
Matt Woodburn ◽  
Gabriele Droege ◽  
Sharon Grant ◽  
Quentin Groom ◽  
Janeen Jones ◽  
...  

The utopian vision is of a future where a digital representation of each object in our collections is accessible through the internet and sustainably linked to other digital resources. This is a long-term goal, however, and in the meantime there is an urgent need to share data about our collections at a higher level with a range of stakeholders (Woodburn et al. 2020). To achieve this sustainably, and to aggregate this information across all natural science collections, the data need to be standardised (Johnston and Robinson 2002). To this end, the Biodiversity Information Standards (TDWG) Collection Descriptions (CD) Interest Group has developed a data standard for describing collections, which is approaching formal review for ratification as a new TDWG standard. It proposes 20 classes (Suppl. material 1) and over 100 properties that can be used to describe, categorise, quantify, link and track digital representations of natural science collections, from high-level approximations to detailed breakdowns, depending on the purpose of a particular implementation. The wide range of use cases identified for representing collection description data means that a flexible approach to the standard and the underlying modelling concepts is essential. These are centred around the 'ObjectGroup' (Fig. 1), a class that may represent any group (of any size) of physical collection objects that share one or more common characteristics. This generic definition of the 'collection' in 'collection descriptions' is an important factor in making the standard flexible enough to support the breadth of use cases. For any use case or implementation, only a subset of classes and properties within the standard is likely to be relevant. In some cases, this subset may have little overlap with those selected for other use cases. This additional need for flexibility means that very few classes and properties, representing the core concepts, are proposed as mandatory. Metrics, facts and narratives are represented in a normalised structure using an extended MeasurementOrFact class, so that these can be user-defined rather than constrained to a set identified by the standard. Finally, rather than a rigid underlying data model forming part of the normative standard, documentation will be developed to provide guidance on how the classes in the standard may be related and quantified according to relational, dimensional and graph-like models. In summary, the standard has, by design, been made flexible enough to be used in a number of different ways. The corresponding risk is that it could be used in ways that do not deliver what is needed in terms of outputs, manageability and interoperability with other resources of collection-level or object-level data. To mitigate this, it is key for any new implementer of the standard to establish how it should be used in that particular instance, and to define any necessary constraints within the wider scope of the standard and model. This is the concept of the 'collection description scheme': a profile that defines elements such as which classes and properties should be included, which should be mandatory, and which should be repeatable; which controlled vocabularies and hierarchies should be used to make the data interoperable; how the collections should be broken down into individual ObjectGroups and interlinked; and how the various classes should be related to each other.
Various factors might influence these decisions, including the types of information that are relevant to the use case, whether quantitative metrics need to be captured and aggregated across collection descriptions, and how many resources can be dedicated to amassing and maintaining the data. This process has particular relevance to the Distributed System of Scientific Collections (DiSSCo) consortium, the design of which incorporates use cases for storing, interlinking and reporting on the collections of its member institutions. These include helping users of the European Loans and Visits System (ELViS) (Islam 2020) to discover specimens for physical and digital loans by providing descriptions and breakdowns of the collections of holding institutions, and monitoring digitisation progress across European collections through a dynamic Collections Digitisation Dashboard. In addition, DiSSCo will be part of a global collections data ecosystem requiring interoperation with other infrastructures such as the GBIF (Global Biodiversity Information Facility) Registry of Scientific Collections, the CETAF (Consortium of European Taxonomic Facilities) Registry of Collections and Index Herbariorum. In this presentation, we will introduce the draft standard, discuss the process of defining new collection description schemes using the standard and data model, and focus on DiSSCo requirements as examples of real-world collection description use cases.
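
To make the ObjectGroup-centred model easier to picture, here is a minimal sketch of the core concepts as Python dataclasses. The field names are illustrative guesses at the kind of properties a collection description scheme might select, not normative terms from the draft standard.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MeasurementOrFact:
    """A user-defined metric, fact or narrative attached to an ObjectGroup."""
    measurement_type: str            # e.g. "objectCount", "percentDigitised"
    value: str
    unit: Optional[str] = None

@dataclass
class ObjectGroup:
    """Any group, of any size, of physical collection objects that share
    one or more common characteristics."""
    identifier: str
    description: str
    discipline: Optional[str] = None           # controlled vocabulary term
    preservation_method: Optional[str] = None  # controlled vocabulary term
    parent_group: Optional[str] = None         # link to a broader ObjectGroup
    measurements: List[MeasurementOrFact] = field(default_factory=list)

herbarium = ObjectGroup(
    identifier="inst:og-001",
    description="European vascular plant collection",
    discipline="Botany",
    preservation_method="Herbarium sheet",
    measurements=[
        MeasurementOrFact("objectCount", "120000"),
        MeasurementOrFact("percentDigitised", "35", unit="%"),
    ],
)
print(herbarium.identifier, len(herbarium.measurements))
```

A collection description scheme would then pin down which of these fields are mandatory or repeatable, and which controlled vocabularies populate them.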


Author(s):  
Vincent Elliott Lasnik

One of the central problems and corresponding challenges facing the multidisciplinary fields of distance learning and instructional design has been the construction of theory-grounded, research-based taxonomies that prescribe which strategies and approaches should be employed when, how, and in what combination to be most effective and efficient for teaching specific knowledge domains and performance outcomes. While numerous scholars and practitioners across a wide range of associated instructional design fields have created a rich variety of effective, efficient, and very current prescriptions for obtaining specific learning outcomes in specific situations (Anderson & Elloumi, 2004; Marzano, 2000; Merrill, 2002a; Nelson & Stolterman, 2003; Reigeluth, 1999a; Shedroff, 1999; Wiley, 2002), to date no single theory-grounded and research-verified unifying taxonomic scheme has successfully emerged to address all existing and potential educational problems across the phenomena of human learning and performance.


2011 ◽  
pp. 270-287
Author(s):  
Vincent Elliott Lasnik

One of the central problems and corresponding challenges facing the multidisciplinary fields of distance learning and instructional design has been the construction of theory-grounded, research-based taxonomies that prescribe which strategies and approaches should be employed when, how, and in what combination to be most effective and efficient for teaching specific knowledge domains and performance outcomes. While numerous scholars and practitioners across a wide range of associated instructional design fields have created a rich variety of effective, efficient, and very current prescriptions for obtaining specific learning outcomes in specific situations (Anderson & Elloumi, 2004; Marzano, 2000; Merrill, 2002a; Nelson & Stolterman, 2003; Reigeluth, 1999a; Shedroff, 1999; Wiley, 2002), to date no single theory-grounded and research-verified unifying taxonomic scheme has successfully emerged to address all existing and potential educational problems across the phenomena of human learning and performance.


Author(s):  
Donald Needham ◽  
Rodrigo Caballero ◽  
Steven Demurjian ◽  
Felix Eickhoff ◽  
Yi Zhang

This chapter examines a formal framework for reusability assessment of development-time components and classes via metrics, refactoring guidelines, and algorithms. It argues that software engineers seeking to improve design reusability stand to benefit from tools that precisely measure the potential and actual reuse of software artifacts to achieve domain-specific reuse for an organization’s current and future products. The authors consider the reuse definition, assessment, and analysis of a UML design prior to the existence of source code, and include dependency tracking for use case and class diagrams in support of reusability analysis and refactoring for UML. The integration of these extensions into the UML tool Together Control Center to support reusability measurement from design to development is also considered.
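
As a rough illustration of the kind of design-time measurement the chapter describes (a simplification, not the authors' formal framework), the sketch below tracks dependencies from a class diagram and scores classes as reuse candidates by fan-in and fan-out; all class names are hypothetical.

```python
from collections import defaultdict

# Dependency edges extracted from a class diagram: (A, B) means
# "class A depends on class B". All class names are hypothetical.
dependencies = [
    ("OrderUI", "OrderService"),
    ("OrderService", "Order"),
    ("OrderService", "Inventory"),
    ("ReportTool", "Order"),
    ("Inventory", "Order"),
]

fan_in = defaultdict(int)   # how many classes depend on this class
fan_out = defaultdict(int)  # how many classes this class depends on
classes = set()
for src, dst in dependencies:
    fan_out[src] += 1
    fan_in[dst] += 1
    classes.update((src, dst))

# Heuristic: high fan-in with low fan-out marks a good reuse candidate;
# high fan-out suggests refactoring to cut dependencies before reuse.
for cls in sorted(classes):
    score = fan_in[cls] / (1 + fan_out[cls])
    print(f"{cls:12s} fan-in={fan_in[cls]}  fan-out={fan_out[cls]}  score={score:.2f}")
```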


2019 ◽  
Vol 214 ◽  
pp. 07030
Author(s):  
Marco Aldinucci ◽  
Stefano Bagnasco ◽  
Matteo Concas ◽  
Stefano Lusso ◽  
Sergio Rabellino ◽  
...  

Obtaining CPU cycles on an HPC cluster is nowadays relatively simple, and sometimes even cheap, for academic institutions. However, in most cases providers of HPC services do not allow changes to the configuration, implementation of special features, or lower-level control of the computing infrastructure, for example for testing experimental configurations. The variety of use cases proposed by several departments of the University of Torino, including ones from solid-state chemistry, computational biology, genomics and many others, called for different and sometimes conflicting configurations; furthermore, several R&D activities in the field of scientific computing, with topics ranging from GPU acceleration to Cloud Computing technologies, needed a platform on which to be carried out. The Open Computing Cluster for Advanced data Manipulation (OCCAM) is a multi-purpose flexible HPC cluster designed and operated by a collaboration between the University of Torino and the Torino branch of the Istituto Nazionale di Fisica Nucleare. It aims to provide a flexible and reconfigurable infrastructure that caters to a wide range of scientific computing needs, as well as a platform for R&D activities on computational technologies themselves. We describe some of the use cases that prompted the design and construction of the system, its architecture, and a first characterisation of its performance using synthetic benchmark tools and a few realistic use-case tests.


2019 ◽  
Vol 5 (3) ◽  
pp. 201-213 ◽  
Author(s):  
E.M. Nyakeri ◽  
M.A. Ayieko ◽  
F.A. Amimo ◽  
H. Salum ◽  
H.J.O. Ogola

The dual roles of efficient degradation and bioconversion of a wide range of organic wastes into valuable animal protein and organic fertiliser have led to increased interest in black soldier fly (BSF) technology as a highly promising tool for sustainable waste management and alternative protein production. The current study investigated the potential application of BSF technology in the valorisation of faecal sludge (FS), a common organic waste in urban informal settlements in low- and middle-income countries. We evaluated the effect of different feeding rates (100, 150, 200 and 250 mg/larva/day), different feeding regimens, and supplementation with other waste feedstocks (food remains, FR; brewers waste, BW; and banana peelings, BP) on BSF larvae (BSFL) growth rates/yield and FS reduction efficiency. Results showed significantly (P<0.01) higher prepupal yield (179±3.3 and 190±1.2 g) and shorter larval development time (16.7 and 15 days) when larvae were reared on 200 and 250 mg/larva/day FS, respectively. However, different feeding regimens of FS did not significantly affect larval growth rate and prepupal yield (P=0.56). Supplementation of FS with other organic substrates significantly increased BSFL biomass production and substrate reduction, and shortened larval development time; the effect was most pronounced when FS was supplemented with FR at 30% supplementation. Protein:fat ratios for BSFL reared on FS, FS:FR and FS:BW were significantly (P<0.05) higher (2.51, 2.53 and 2.44, respectively) than for the FS:BP mixture (1.99). These results demonstrate that supplementation of FS with locally available organic waste can improve its suitability as a feedstock for BSF production and for bioremediation of organic waste from the environment. In conclusion, a daily feeding strategy using substrate containing FS supplemented with 30% organic waste co-substrate, at a feeding rate of 200 mg/larva/day, can serve as a guideline for BSFL mass production and bioremediation of FS at both small and large scale.


Author(s):  
Vincent E. Lasnik

There are simple answers to all complex problems—and they are uniformly wrong. —H.L. Mencken

One of the central problems and corresponding challenges facing the multidisciplinary fields of distance learning and instructional design has been the construction of theory-grounded, research-based taxonomies that prescribe which strategies and approaches should be employed when, how, and in what combination to be most effective and efficient for teaching specific knowledge domains and performance outcomes. While numerous scholars and practitioners across a wide range of associated instructional design fields have created a rich variety of effective, efficient, and very current prescriptions for obtaining specific learning outcomes in specific situations (Anderson & Elloumi, 2004; Marzano, 2000; Merrill, 2002a; Nelson & Stolterman, 2003; Reigeluth, 1999a; Shedroff, 1999; Wiley, 2002), to date no single theory-grounded and research-verified unifying taxonomic scheme has successfully emerged to address all existing and potential educational problems across the phenomena of human learning and performance.

