Beyond Human: Deep Learning, Explainability and Representation

2020 ◽  
pp. 026327642096638
Author(s):  
M. Beatrice Fazi

This article addresses computational procedures that are no longer constrained by human modes of representation and considers how these procedures could be philosophically understood in terms of ‘algorithmic thought’. Research in deep learning is its case study. This artificial intelligence (AI) technique operates in computational ways that are often opaque. Such a black-box character demands rethinking the abstractive operations of deep learning. The article does so by entering debates about explainability in AI and assessing how technoscience and technoculture tackle the possibility of ‘re-presenting’ the algorithmic procedures of feature extraction and feature learning to the human mind. The article thus mobilises the notion of incommensurability (originally developed in the philosophy of science) to address explainability as a communicational and representational issue, which challenges phenomenological and existential modes of comparison between human and algorithmic ‘thinking’ operations.

2021 ◽  
Author(s):  
J. Eric T. Taylor ◽  
Graham Taylor

Artificial intelligence powered by deep neural networks has reached a level of complexity where it can be difficult or impossible to express how a model makes its decisions. This black-box problem is especially concerning when the model makes decisions with consequences for human well-being. In response, an emerging field called explainable artificial intelligence (XAI) aims to increase the interpretability, fairness, and transparency of machine learning. In this paper, we describe how cognitive psychologists can make contributions to XAI. The human mind is also a black box, and cognitive psychologists have over one hundred and fifty years of experience modeling it through experimentation. We ought to translate the methods and rigour of cognitive psychology to the study of artificial black boxes in the service of explainability. We provide a review of XAI for psychologists, arguing that current methods possess a blind spot that can be complemented by the experimental cognitive tradition. We also provide a framework for research in XAI, highlight exemplary cases of experimentation within XAI inspired by psychological science, and provide a tutorial on experimenting with machines. We end by noting the advantages of an experimental approach and invite other psychologists to conduct research in this exciting new field.
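The experimental approach the authors advocate can be illustrated with a toy probe: treat a model as an opaque stimulus–response system, sweep one stimulus dimension, and estimate a response curve, as in psychophysics. The `black_box` function below is a hypothetical stand-in, not any model from the paper; this is a minimal sketch of the experimental logic only.

```python
import random

random.seed(0)

def black_box(intensity):
    """Opaque decision rule: answers True more often as intensity grows.
    A stand-in for any classifier whose internals we cannot inspect."""
    return intensity + random.gauss(0, 0.1) > 0.5

# Sweep one stimulus dimension and record the response rate at each level,
# the way a psychophysics experiment estimates a response curve.
intensities = [i / 10 for i in range(11)]  # 0.0 .. 1.0
trials = 200
response_rate = {
    s: sum(black_box(s) for _ in range(trials)) / trials
    for s in intensities
}
```

The resulting curve (near 0 at low intensity, near 1 at high intensity) characterizes the black box's behavior without ever opening it, which is the core of the experimental tradition the abstract describes.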


Author(s):  
Audri Phillips

This chapter examines the relationships between technology, the human mind, and creativity. The chapter cannot cover the whole of that spectrum; nonetheless, it covers highlights that especially apply to new immersive technologies. The nature of creativity, creativity studies, and the tools, languages, and technology used to promote creativity are discussed. The part that the mind and the senses—particularly vision—play in immersive media technology, as well as robotics, artificial intelligence (AI), computer vision, and motion capture, are also discussed. The immersive transmedia project Robot Prayers is offered as a case study of the application of creativity and technology working hand in hand.


2020 ◽  
Vol 20 (4) ◽  
pp. 609-624
Author(s):  
Mohamed Marzouk ◽  
Mohamed Zaher

Purpose
This paper aims to apply a methodology capable of classifying and localizing mechanical, electrical and plumbing (MEP) elements to assist facility managers. It also helps reduce the technical complexity and sophistication of different systems for the facility management (FM) team.

Design/methodology/approach
This research exploits artificial intelligence (AI) in FM operations by proposing a new system that uses a deep learning pre-trained model for transfer learning. The model can identify new MEP elements through image classification with a deep convolutional neural network using a support vector machine (SVM) technique under supervised learning. In addition, an expert system, integrated with an Android application, identifies the required maintenance for the identified elements. The FM team can reach the identified assets with Bluetooth tracker devices to perform the required maintenance.

Findings
The proposed system aids facility managers in their tasks and decreases the maintenance costs of facilities by maintaining, upgrading and operating assets cost-effectively.

Research limitations/implications
The paper considers three fire protection systems for proactive maintenance, whereas other structural or architectural systems can also significantly affect the level of service and incur expensive repairs and maintenance. Moreover, the proposed system relies on different platforms that need to be consolidated for facility technicians and managers as end-users. The authors will therefore consider these limitations and expand the study as a case study in future work.

Originality/value
This paper helps, in a proactive manner, to reduce the lack of knowledge of the maintenance required for MEP elements, which leads to a lower life cycle cost. These MEP elements account for a large share of the operation and maintenance costs of building facilities.
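The transfer-learning step the abstract describes — frozen deep features fed to an SVM classifier — can be sketched as follows. The feature vectors and the MEP class labels here are synthetic stand-ins (in the paper's pipeline the vectors would come from a pre-trained convolutional network), so this is an illustration of the classification stage only, not the authors' system.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for deep CNN feature extraction: synthetic 128-d feature vectors,
# one cluster per hypothetical MEP element class.
classes = ["duct", "pipe", "sprinkler"]  # hypothetical labels
n_per_class, n_features = 100, 128
X = np.vstack([
    rng.normal(loc=i, scale=1.0, size=(n_per_class, n_features))
    for i in range(len(classes))
])
y = np.repeat(np.arange(len(classes)), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# The transfer-learning step: an SVM trained on the frozen deep features.
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

Training only the SVM on top of fixed features is what makes this practical with a small, domain-specific image set.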


Author(s):  
Xiaolin Wu ◽  
Xi Zhang ◽  
Xiao Shu

Subitizing, or the sense of small natural numbers, is an innate cognitive function of humans and primates; it responds to visual stimuli prior to the development of any symbolic skills, language or arithmetic. Given the successes of deep learning (DL) in tasks of visual intelligence, and given the primitivity of number sense, a tantalizing question is whether DL can comprehend numbers and perform subitizing. Somewhat disappointingly, extensive experiments in the style of cognitive psychology demonstrate that examples-driven black-box DL cannot see through superficial variations in visual representations and distill the abstract notion of natural number, a task that children perform with high accuracy and confidence. The failure is apparently due to the learning method, not the CNN computational machinery itself. A recurrent neural network capable of subitizing does exist, which we construct by encoding a mechanism of mathematical morphology into the CNN convolutional kernels. Using subitizing as a test bed, we also investigate ways to aid black-box DL with cognitive priors derived from human insight. Our findings are mixed and interesting, pointing both to a cognitive deficit of pure DL and to some measured successes of boosting DL with predetermined cognitive implements. This case study of DL in cognitive computing is meaningful because visual numerosity represents a minimum level of human intelligence.
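The morphological route to numerosity that the authors encode into network kernels amounts to counting connected components in a binary image. The hand-coded flood fill below is an illustrative stand-in for that mechanism, not the authors' recurrent network.

```python
# Count 4-connected blobs of 1s in a binary grid -- a morphological
# notion of "how many objects", independent of object shape or size.

def count_blobs(grid):
    """Return the number of 4-connected components of 1s in grid."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and not seen[r][c]:
                count += 1                      # new, unvisited blob
                stack = [(r, c)]                # flood-fill its pixels
                while stack:
                    i, j = stack.pop()
                    if 0 <= i < rows and 0 <= j < cols \
                            and grid[i][j] == 1 and not seen[i][j]:
                        seen[i][j] = True
                        stack.extend([(i + 1, j), (i - 1, j),
                                      (i, j + 1), (i, j - 1)])
    return count

image = [
    [1, 1, 0, 0, 1],
    [0, 0, 0, 0, 1],
    [0, 1, 0, 0, 0],
]
```

Here `count_blobs(image)` yields 3 regardless of how the three blobs are drawn, which is exactly the invariance to superficial visual variation that purely examples-driven DL fails to acquire.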


Author(s):  
Abraham Rudnick

Artificial intelligence (AI) and its correlates, such as machine and deep learning, are changing health care, where complex matters such as comorbidity call for dynamic decision-making. Yet some people argue for extreme caution, referring to AI and its correlates as a black box. This brief article uses philosophy and science to address the black box argument about knowledge as a myth, concluding that this argument is misleading because it ignores a fundamental tenet of science, i.e., that no empirical knowledge is certain and that scientific facts – as well as methods – often change. Instead, control of the technology of AI and its correlates has to be addressed to mitigate its unexpected negative consequences.


Author(s):  
Chunmian Lin ◽  
Lin Li ◽  
Zhixing Cai ◽  
Kelvin C. P. Wang ◽  
Danny Xiao ◽  
...  

Automated lane marking detection is essential for advanced driver assistance systems (ADAS) and pavement management work. However, prior research has mostly detected lane marking segments from a front-view image, which easily suffers from occlusion or noise disturbance. In this paper, we aim at accurate and robust lane marking detection from a top-view perspective and propose a deep learning-based detector with an adaptive anchor scheme, referred to as A2-LMDet. On the one hand, it is an end-to-end framework that fuses feature extraction and object detection into a single deep convolutional neural network. On the other hand, the adaptive anchor scheme is designed by formulating a bilinear interpolation algorithm and is used to guide specific-anchor box generation and informative feature extraction. To validate the proposed method, a newly built lane marking dataset containing 24,000 high-resolution laser imaging records is further developed for a case study. Quantitative and qualitative results demonstrate that A2-LMDet achieves highly accurate performance with 0.9927 precision, 0.9612 recall, and a 0.9767 F1 score, outperforming other advanced methods by a considerable margin. Moreover, ablation analysis illustrates the effectiveness of the adaptive anchor scheme for enhancing feature representation and performance. We expect our work will help the development of related research.
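The bilinear interpolation that underlies the adaptive anchor scheme samples an image at continuous (sub-pixel) coordinates by blending the four surrounding pixels. A minimal sketch of that operation follows; it illustrates the interpolation itself, not the paper's anchor-generation pipeline.

```python
import numpy as np

def bilinear(img, x, y):
    """Bilinearly interpolate img (H x W) at continuous coordinates,
    where x indexes columns and y indexes rows."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, img.shape[1] - 1)
    y1 = min(y0 + 1, img.shape[0] - 1)
    dx, dy = x - x0, y - y0
    # Blend horizontally along the top and bottom rows, then vertically.
    top = (1 - dx) * img[y0, x0] + dx * img[y0, x1]
    bottom = (1 - dx) * img[y1, x0] + dx * img[y1, x1]
    return (1 - dy) * top + dy * bottom

grid = np.array([[0.0, 1.0],
                 [2.0, 3.0]])
center = bilinear(grid, 0.5, 0.5)  # average of the four corners: 1.5
```

Sampling features at such fractional positions is what lets anchor boxes adapt to lane geometry instead of snapping to the pixel grid.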


First Monday ◽  
2019 ◽  
Author(s):  
Niel Chah

Interest in deep learning, machine learning, and artificial intelligence from industry and the general public has reached a fever pitch recently. However, these terms are frequently misused, confused, and conflated. This paper serves as a non-technical guide for those interested in a high-level understanding of these increasingly influential notions by exploring briefly the historical context of deep learning, its public presence, and growing concerns over the limitations of these techniques. As a first step, artificial intelligence and machine learning are defined. Next, an overview of the historical background of deep learning reveals its wide scope and deep roots. A case study of a major deep learning implementation is presented in order to analyze public perceptions shaped by companies focused on technology. Finally, a review of deep learning limitations illustrates systemic vulnerabilities and a growing sense of concern over these systems.


2020 ◽  
Vol 73 (4) ◽  
pp. 275-284
Author(s):  
Dukyong Yoon ◽  
Jong-Hwan Jang ◽  
Byung Jin Choi ◽  
Tae Young Kim ◽  
Chang Ho Han

Biosignals such as the electrocardiogram or photoplethysmogram are widely used for determining and monitoring the medical condition of patients. It was recently discovered that more information can be gathered from biosignals by applying artificial intelligence (AI). At present, one of the most impactful advancements in AI is deep learning. Deep learning-based models can extract important features from raw data without feature engineering by humans, provided the amount of data is sufficient. This capability presents opportunities to obtain latent information that may be used as a digital biomarker for detecting or predicting a clinical outcome or event without further invasive evaluation. However, the black-box nature of deep learning models is difficult to understand for clinicians familiar with conventional methods of biosignal analysis. A basic knowledge of AI and machine learning is required for clinicians to properly interpret the extracted information and to adopt it in clinical practice. This review covers the basics of AI and machine learning and the feasibility of their application to real-life situations by clinicians in the near future.


2019 ◽  
Vol 87 (2) ◽  
pp. 27-29
Author(s):  
Meagan Wiederman

Artificial intelligence (AI) is the ability of any device to take an input, like that of its environment, and work to achieve a desired output. Some advancements in AI have focused on replicating the human brain in machinery. This is being made possible by the Human Connectome Project: an initiative to map all the connections between neurons within the brain. A full replication of the thinking brain would inherently create something that could be argued to be a thinking machine. However, it is more interesting to question whether a non-biologically faithful AI could be considered a thinking machine. Under Turing's definition of ‘thinking’, a machine that can be mistaken for a human when responding in writing from a “black box,” where it cannot be viewed, can be said to pass for thinking. Backpropagation, an error-minimizing algorithm used to train AI for feature detection, has no biological counterpart yet is prevalent in AI. The recent success of backpropagation demonstrates that biological faithfulness is not required for deep learning or ‘thought’ in a machine. Backpropagation has been used in medical imaging compression algorithms and in pharmacological modelling.
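The error minimization that backpropagation performs can be shown at its smallest scale: a single sigmoid neuron whose weight and bias are nudged down the gradient of a squared error. This toy example (not from the article) shows the chain rule driving the loss toward zero.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One neuron, one training example: learn to output 1.0 for input 1.0.
w, b, lr = 0.0, 0.0, 1.0
x, target = 1.0, 1.0
losses = []
for _ in range(200):
    # Forward pass.
    out = sigmoid(w * x + b)
    losses.append((out - target) ** 2)
    # Backward pass: chain rule through loss -> sigmoid -> linear layer.
    grad_out = 2 * (out - target)          # d(loss)/d(out)
    grad_z = grad_out * out * (1 - out)    # d(loss)/d(z), sigmoid derivative
    w -= lr * grad_z * x                   # d(loss)/d(w) = grad_z * x
    b -= lr * grad_z                       # d(loss)/d(b) = grad_z
```

Nothing in this update rule mirrors biological neurons; the error signal flows backward through the mathematics alone, which is the point the article makes about biological faithfulness.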

