Black Boxes

2021 ◽  
pp. 101-120

2005 ◽  
Vol 39 (4) ◽  
pp. 1-3
Author(s):  
Deeanna Franklin

Waterlines ◽  
2001 ◽  
Vol 20 (2) ◽  
pp. 12-14 ◽  
Author(s):  
Alan MacDonald

Author(s):  
Samuel Chef ◽  
Chung Tah Chua ◽  
Yu Wen Siah ◽  
Philippe Perdu ◽  
Chee Lip Gan ◽  
...  

Abstract Today’s VLSI devices are neither designed nor manufactured for space applications, in which single event effect (SEE) issues are common. In addition, very little information about the internal schematic, and usually nothing about the layout or netlist, is available. They are thus practically black boxes for satellite manufacturers. On the other hand, such devices are crucial in driving the performance of spacecraft, especially smaller satellites. The only way to manage SEE in VLSI devices efficiently is to localize sensitive areas of the die, analyze the regions of interest, study potential mitigation techniques, and evaluate their efficiency. For the first time, all of these activities can be performed with the same tool and a single test setup, enabling a highly efficient iterative process that reduces the evaluation time from months to days. In this paper, we present the integration of a pulsed laser for SEE studies into a laser probing, laser stimulation, and emission microscope system. The use of this system is demonstrated on a commercial 8-bit microcontroller.
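The localize-analyze-mitigate-evaluate loop described in the abstract can be pictured as a raster scan of laser pulses over the die, recording which positions upset the device. The sketch below is purely illustrative, not the authors' setup: the hardware-interface callables (move_stage, fire_pulse, read_device_state, reset_device) are hypothetical stand-ins for whatever vendor API drives the combined microscope system.

```python
# Illustrative sketch of an SEE laser-scan sensitivity map.
# All hardware callables are hypothetical placeholders.
import numpy as np

GRID_X, GRID_Y = 64, 64          # scan grid over the region of interest
EXPECTED_STATE = 0xA5            # known-good device signature

def scan_for_see(move_stage, fire_pulse, read_device_state, reset_device):
    """Raster-scan the die with laser pulses and map single-event upsets."""
    sensitivity = np.zeros((GRID_Y, GRID_X), dtype=int)
    for iy in range(GRID_Y):
        for ix in range(GRID_X):
            reset_device()                # restore a known-good state
            move_stage(ix, iy)            # position the pulsed laser
            fire_pulse()                  # inject a single event
            if read_device_state() != EXPECTED_STATE:
                sensitivity[iy, ix] += 1  # upset observed at this spot
    return sensitivity                    # nonzero cells = sensitive areas
```

The returned map marks candidate sensitive areas, which the same setup can then re-examine when evaluating mitigation techniques.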


Author(s):  
Gary Smith

We live in an incredible period in history. The Computer Revolution may be even more life-changing than the Industrial Revolution. We can do things with computers that could never be done before, and computers can do things for us that could never be done before. But our love of computers should not cloud our thinking about their limitations. We are told that computers are smarter than humans and that data mining can identify previously unknown truths or make discoveries that will revolutionize our lives. Our lives may well be changed, but not necessarily for the better. Computers are very good at discovering patterns but useless at judging whether the unearthed patterns are sensible, because computers do not think the way humans think. We fear that super-intelligent machines will decide to protect themselves by enslaving or eliminating humans. But the real danger is not that computers are smarter than us; it is that we think computers are smarter than us and so trust them to make important decisions for us. The AI Delusion explains why we should not be intimidated into thinking that computers are infallible, that data mining is knowledge discovery, or that black boxes should be trusted.
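The claim that computers find patterns without judging their sense is easy to demonstrate. The sketch below, a minimal illustration using invented noise data, "mines" a thousand columns of pure random numbers and still finds one that correlates noticeably with an equally random outcome:

```python
# Data mining on pure noise: search enough columns and a "striking"
# pattern always appears, even though none of the data means anything.
import numpy as np

rng = np.random.default_rng(0)
n_rows, n_cols = 50, 1000
data = rng.normal(size=(n_rows, n_cols))   # 1000 candidate "predictors", all noise
target = rng.normal(size=n_rows)           # an unrelated noise "outcome"

# Correlate every column with the outcome and keep the strongest.
corrs = np.array([np.corrcoef(data[:, j], target)[0, 1] for j in range(n_cols)])
best = int(np.argmax(np.abs(corrs)))
print(f"best column: {best}, correlation: {corrs[best]:+.2f}")
# Typically reports |r| around 0.4-0.5 -- a pattern with no meaning at all.
```

A human analyst asks whether such a relationship is plausible; the search procedure cannot.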


Author(s):  
Lisa Herzog

The Introduction sets out the problem this book addresses: organizations, in which individuals seem to be nothing but ‘cogs’, have become extremely powerful, while being apparently immune to moral criticism. Organizations—from public bureaucracies to universities, police departments, and private corporations—have specific features that they share qua organizations. They need to be opened up for normative theorizing, rather than treated as ‘black boxes’ or as elements of a ‘system’ in which moral questions have no place. The Introduction describes ‘social philosophy’ as an approach that addresses questions at the meso-level of social life, and situates it in relation to several strands of literature in moral and political philosophy. It concludes by providing a preview of the chapters of the book.


2020 ◽  
Vol 11 (1) ◽  
pp. 18-50 ◽  
Author(s):  
Maja BRKAN ◽  
Grégory BONNET

Understanding the causes and correlations behind algorithmic decisions is currently one of the major challenges of computer science, addressed under the umbrella term "explainable AI" (XAI). Being able to explain an AI-based system may help to make algorithmic decisions more satisfying and acceptable, to better control and update AI-based systems in case of failure, to build more accurate models, and to discover new knowledge directly or indirectly. On the legal side, the question of whether the General Data Protection Regulation (GDPR) provides data subjects with a right to explanation in cases of automated decision-making has likewise been the subject of heated doctrinal debate. While arguing that the right to explanation in the GDPR should result from an interpretative analysis of several GDPR provisions taken jointly, the authors move this debate forward by discussing the technical and legal feasibility of explaining algorithmic decisions. Legal limits, in particular the secrecy of algorithms, as well as technical obstacles, could obstruct the practical implementation of this right. By adopting an interdisciplinary approach, the authors explore not only whether it is possible to translate the EU legal requirements for an explanation into actual machine learning decision-making, but also whether those limitations can shape the way the legal right is used in practice.
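To make "explaining an algorithmic decision" concrete, here is a minimal sketch of one common form such an explanation can take: per-feature contribution scores for a single automated decision. The feature names, weights, and data are invented for illustration; real XAI practice ranges from intrinsically interpretable models like this one to post hoc explanation methods for opaque models.

```python
# Minimal sketch: for a linear scoring model, each feature's contribution
# to a decision is its weight times its (standardized) value.
# All names and numbers below are hypothetical.
import numpy as np

feature_names = ["income", "debt_ratio", "account_age"]
coefficients = np.array([1.2, -2.0, 0.6])   # fitted model weights (assumed)
intercept = -0.3
x = np.array([0.8, 1.5, 0.2])               # one applicant, standardized features

contributions = coefficients * x            # per-feature share of the score
score = intercept + contributions.sum()
decision = "approve" if score > 0 else "reject"

print(f"decision: {decision} (score {score:.2f})")
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"  {name}: {c:+.2f}")            # which features drove the decision
```

An output like this, showing that debt_ratio dominated the rejection, is one candidate for the kind of explanation the legal debate contemplates, though whether it satisfies the GDPR's requirements is precisely the question the article examines.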


2021 ◽  
Vol 21 (7) ◽  
pp. 33-35
Author(s):  
Craig M. Klugman

2021 ◽  
pp. 216770262198972
Author(s):  
Carolyn E. Wilshire ◽  
Tony Ward ◽  
Samuel Clack

In our original article (this issue, p. ♦♦♦), we argued that focusing research on individual symptoms of psychopathology might provide valuable information about their underlying nature and result in better classification systems, explanations, and treatment. To this end, we formulated five core questions that were intended to guide subsequent research and symptom conceptualizations in the psychopathology domain. In this article, we respond to two commentaries on our article. We conclude that it is time to open the black box of symptoms and to take seriously their status as complex constructs.


Author(s):  
Tai-Hung Liu ◽  
K. Sajid ◽  
A. Aziz ◽  
V. Singhal
