Which Axial Age, whose rituals? Habermas and Jaspers on the ‘spiritual’ situation of the present age

2020 ◽  
pp. 019145372093190
Author(s):  
Martin Beck Matuštík

Can we keep relying on sources of values dating back to the Axial Age, or do cognitive changes in the present age require a completely new foundation? Uncertainty arises from a crisis of the values that sustain the human in the age of artificial intelligence. Should we seek contemporary access points to the archaic origins of the species? Or must we also imagine new Anthropocenic-Axial values to reground the human event? In his most recent work, Habermas affirms the continuing importance of contemporary access to the First Axial values, but before him Jaspers anticipated that a second cognitive revolution would open areas receptive to new value foundations. Habermas' justification of the postsecular turn may not be thinkable without Jaspers' discovery of the postaxial imaginary.

2021 ◽  
Vol 1 (1) ◽  
pp. 76-87
Author(s):  
Alexander Buhmann ◽  
Christian Fieseler

Organizations increasingly delegate agency to artificial intelligence. However, such systems can yield unintended negative effects, as they may produce biases against users or reinforce social injustices. What marks them as a unique grand challenge, however, are not their potentially problematic outcomes but their fluid design. Machine learning algorithms are continuously evolving; as a result, their functioning frequently remains opaque to humans. In this article, we apply recent work on tackling grand challenges through robust action to assess the potential of, and obstacles to, managing the challenge of algorithmic opacity. We stress that although this approach is fruitful, it can be gainfully complemented by a discussion of the accountability and legitimacy of solutions. In our discussion, we extend the robust action approach by linking it to a set of principles that can serve to evaluate organizational approaches to tackling grand challenges with respect to their ability to foster accountable outcomes under the intricate conditions of algorithmic opacity.


1966 ◽  
Vol 54 (12) ◽  
pp. 1687-1697 ◽  
Author(s):  
R.J. Solomonoff

Itinerario ◽  
2000 ◽  
Vol 24 (3-4) ◽  
pp. 75-88
Author(s):  
Janet Hunter

Much of the recent work on the economic and social history of Tokugawa Japan (1600–1867) has been driven by a desire to identify what T.C. Smith has called 'native sources of Japanese industrialisation'. From the Marxist-influenced historians in the 1920s who sought to explain the pre-industrial roots of the structure of production in interwar Japan, through to contemporary Japanese historians' studies of the pattern of Japanese development, a major part of the agenda has been to identify how Japan had got to where it was, in other words, what was the secret of its twentieth-century successes and weaknesses. It is not possible to explore the situation of Japan's economy in the century 1750–1850 without benefit of this hindsight, without being aware that while Japan's situation may have been in many ways analogous to that of China and Europe in the mid-eighteenth century, its economic fortunes were by the latter part of the nineteenth century experiencing their own 'great divergence' from those of China, India and the other countries of Asia and the Near East. To search for the antecedents of this divergence is for economic historians of Japan a parallel exercise to any search for the sources of the European 'miracle'. While a focus on the period 1750–1850 as an era of European/Asian divergence means, therefore, that we must highlight the situation in Japan during that century, it must also be accepted that in the case of Japan any comparison with other countries or regions may also suggest the causes of Japan's own divergence some fifty to a hundred years later.


2016 ◽  
Vol 2016 ◽  
pp. 1-16 ◽  
Author(s):  
Mario Muñoz-Organero ◽  
Claudia Brito-Pacheco

Fingerprinting-based algorithms are popular in indoor location systems based on mobile devices. By comparing the RSSI (Received Signal Strength Indicator) from different radio wave transmitters, such as Wi-Fi access points, with prerecorded fingerprints from known points (using different artificial intelligence algorithms), fingerprinting-based systems can locate unknown points with a resolution of a few meters. However, training the system with already located fingerprints tends to be expensive both in time and in resources, especially if large areas are to be considered. Moreover, in such cases the decision algorithms tend to consume substantial memory and CPU, and the time needed to estimate the location of a new fingerprint grows accordingly. In this paper, we study, propose, and validate a way to select the locations of the training fingerprints that reduces the number of required points while improving the accuracy of the algorithms when locating points at room-level resolution. We present a comparison of different artificial intelligence decision algorithms and select those with the best results. We compare our proposal with other systems in the literature and draw conclusions about the improvements obtained. Moreover, some techniques, such as filtering nonstable access points to improve accuracy, are introduced, studied, and validated.
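The matching step the abstract describes can be sketched as a weighted k-nearest-neighbours search over recorded RSSI fingerprints. A minimal sketch follows; the fingerprint grid, access-point names (`ap1`..`ap3`), the missing-signal floor, and `k` are all hypothetical illustrations, not the paper's actual configuration.

```python
import math

# Hypothetical training fingerprints: (x, y) location -> RSSI (dBm) per Wi-Fi access point.
FINGERPRINTS = {
    (0.0, 0.0): {"ap1": -40, "ap2": -70, "ap3": -80},
    (5.0, 0.0): {"ap1": -55, "ap2": -60, "ap3": -75},
    (0.0, 5.0): {"ap1": -65, "ap2": -45, "ap3": -70},
    (5.0, 5.0): {"ap1": -75, "ap2": -50, "ap3": -55},
}
MISSING_RSSI = -100  # assumed signal floor for an access point not heard at a point


def rssi_distance(a, b):
    """Euclidean distance between two RSSI vectors over the union of access points."""
    aps = set(a) | set(b)
    return math.sqrt(sum((a.get(ap, MISSING_RSSI) - b.get(ap, MISSING_RSSI)) ** 2
                         for ap in aps))


def locate(observed, k=3):
    """Estimate a position as the inverse-distance-weighted mean of the
    k training fingerprints closest to the observed RSSI vector."""
    nearest = sorted(FINGERPRINTS,
                     key=lambda loc: rssi_distance(observed, FINGERPRINTS[loc]))[:k]
    weights = [1.0 / (rssi_distance(observed, FINGERPRINTS[loc]) + 1e-9)
               for loc in nearest]
    total = sum(weights)
    x = sum(w * loc[0] for w, loc in zip(weights, nearest)) / total
    y = sum(w * loc[1] for w, loc in zip(weights, nearest)) / total
    return x, y


print(locate({"ap1": -50, "ap2": -62, "ap3": -76}))
```

The cost the abstract highlights is visible even here: each query scans every stored fingerprint, so both survey effort and lookup time grow with the number of training points, which is what motivates selecting them carefully.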


1997 ◽  
Vol 352 (1358) ◽  
pp. 1257-1265 ◽  
Author(s):  
Aaron F. Bobick

This paper presents several approaches to the machine perception of motion and discusses the role and levels of knowledge in each. In particular, techniques of motion understanding are described as focusing on one of three levels: movement, activity, or action. Movements are the most atomic primitives, requiring no contextual or sequence knowledge to be recognized; movement is often addressed using either view-invariant or view-specific geometric techniques. Activity refers to sequences of movements or states, where the only real knowledge required is the statistics of the sequence; much of the recent work in gesture understanding falls within this category of motion perception. Finally, actions are larger-scale events, which typically include interaction with the environment and causal relationships; action understanding straddles the grey division between perception and cognition, between computer vision and artificial intelligence. These levels are illustrated with examples drawn mostly from the group's work on understanding motion in video imagery. It is argued that the utility of such a division is that it makes explicit the representational competencies and manipulations necessary for perception.
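As a toy illustration of the middle level, where an activity is characterised only by the statistics of its movement sequence, one can score a stream of recognised movement labels under per-activity first-order Markov transition models. The labels, activities, and probabilities below are invented for illustration and are not drawn from the paper.

```python
import math

# Hypothetical transition probabilities over movement labels, one model per activity.
MODELS = {
    "walking": {("step", "step"): 0.9, ("step", "pause"): 0.1,
                ("pause", "step"): 0.8, ("pause", "pause"): 0.2},
    "waving":  {("raise", "swing"): 0.7, ("swing", "swing"): 0.6,
                ("swing", "lower"): 0.4},
}
FLOOR = 1e-6  # probability assumed for a transition the model has never seen


def log_likelihood(sequence, model):
    """Sum of log transition probabilities over consecutive movement pairs."""
    return sum(math.log(model.get(pair, FLOOR))
               for pair in zip(sequence, sequence[1:]))


def classify(sequence):
    """Pick the activity whose sequence statistics best explain the movements."""
    return max(MODELS, key=lambda name: log_likelihood(sequence, MODELS[name]))


print(classify(["step", "step", "pause", "step"]))  # -> "walking"
```

No context or causal knowledge appears anywhere in this sketch, which is precisely what separates activity recognition from the action level described next.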


Author(s):  
Lirong Xia

We summarize some of our recent work on using AI to improve group decision-making, taking a unified approach that draws on statistics, economics, and computation. We then discuss a few ongoing and future directions.


Author(s):  
Alexey Ignatiev

Explainable artificial intelligence (XAI) represents arguably one of the most crucial challenges facing the area of AI these days. Although the majority of approaches to XAI are heuristic in nature, recent work proposed the use of abductive reasoning for computing provably correct explanations of machine learning (ML) predictions. The proposed rigorous approach was shown to be useful not only for computing trustable explanations but also for validating explanations computed heuristically. It was also applied to uncover a close relationship between XAI and the verification of ML models. This paper overviews the advances of the rigorous logic-based approach to XAI and argues that it is indispensable if trustable XAI is of concern.
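The notion of a provably correct (abductive) explanation can be illustrated on a toy classifier: find a minimal set of the instance's feature values that entails the prediction for every completion of the remaining features. The brute-force entailment check below stands in for the SAT/SMT oracles used in the actual work; the classifier and features are invented for the sketch.

```python
from itertools import combinations, product

FEATURES = ["a", "b", "c"]


def predict(x):
    """Toy Boolean classifier standing in for a trained ML model."""
    return x["a"] and (x["b"] or x["c"])


def entails(fixed, label):
    """True iff every completion of the unfixed features keeps the prediction."""
    free = [f for f in FEATURES if f not in fixed]
    return all(predict({**fixed, **dict(zip(free, vals))}) == label
               for vals in product([False, True], repeat=len(free)))


def abductive_explanation(instance):
    """Smallest subset of the instance's feature values that provably
    entails its prediction: an abductive explanation."""
    label = predict(instance)
    for size in range(len(FEATURES) + 1):
        for subset in combinations(FEATURES, size):
            fixed = {f: instance[f] for f in subset}
            if entails(fixed, label):
                return fixed
    return dict(instance)


print(abductive_explanation({"a": True, "b": True, "c": False}))
# -> {'a': True, 'b': True}: these two values alone guarantee the prediction.
```

Unlike a heuristic attribution, the returned set carries a guarantee: no assignment to the omitted features can flip the model's output, which is what "provably correct" means here.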


2007 ◽  
Vol 22 (1) ◽  
pp. 87-109 ◽  
Author(s):  
CHRIS REED ◽  
DOUGLAS WALTON ◽  
FABRIZIO MACAGNO

In this paper, we present a survey of the development of the technique of argument diagramming, covering not only the fields in which it originated (informal logic, argumentation theory, evidence law and legal reasoning) but also more recent work applying and developing it in computer science and artificial intelligence (AI). Beginning with a simple example of an everyday argument, we present an analysis of it visualized as an argument diagram constructed using a software tool. In the context of a brief history of the development of diagramming, it is then shown how argument diagrams have been used to analyse and work with argumentation in law, philosophy and AI.
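At its core, the kind of argument diagram such tools manipulate is a directed graph of statements linked by support (or attack) edges. A minimal sketch of that data structure follows; the class design and the everyday example statements are invented here, in the spirit of the paper's opening example, not taken from any particular tool.

```python
from collections import defaultdict


class ArgumentDiagram:
    """Directed graph: premises point at the conclusions they support."""

    def __init__(self):
        self.statements = {}                # statement id -> text
        self.supports = defaultdict(list)   # conclusion id -> supporting premise ids

    def add_statement(self, sid, text):
        self.statements[sid] = text

    def add_support(self, premise, conclusion):
        self.supports[conclusion].append(premise)

    def render(self, sid, depth=0):
        """Print a conclusion with its supporting premises indented beneath it."""
        print("  " * depth + self.statements[sid])
        for premise in self.supports[sid]:
            self.render(premise, depth + 1)


# A toy everyday argument: one conclusion supported by two premises.
d = ArgumentDiagram()
d.add_statement("c", "We should take an umbrella.")
d.add_statement("p1", "The forecast predicts rain.")
d.add_statement("p2", "Forecasts are usually reliable.")
d.add_support("p1", "c")
d.add_support("p2", "c")
d.render("c")
```

Representing arguments this explicitly is what lets the AI systems the survey covers query, compare, and evaluate argument structure rather than treating it as free text.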

