Strong AI
Recently Published Documents


TOTAL DOCUMENTS

44
(FIVE YEARS 17)

H-INDEX

5
(FIVE YEARS 1)

2021 ◽  
Vol 3 ◽  
Author(s):  
Tiao Hu ◽  
Mathew Mendoza ◽  
Joy Viray Cabador ◽  
Michael Cottingham

The purpose of this study was to explore the status of Paralympic hopefuls' athletic identity (AI) and how this identity was affected by the cessation of training and competition resulting from the COVID-19 pandemic. Researchers conducted in-depth semi-structured interviews exploring the experiences of 29 Paralympic hopefuls who compete in 13 different Paralympic sports. A thematic analysis yielded two superordinate themes: (a) prominent athletic identity, multiplicity over exclusivity; and (b) varied impacts on AI, with mental adaptation helping to overcome the lack of sport participation. Participants in this study possessed strong athletic identities built on the benefits of sport participation. Their prioritized athletic role remained despite setbacks due to the pandemic. However, athletes identified with multiple roles rather than an exclusive athletic identity during COVID-19. As for the impacts on identity, the severity of the challenges was determined by the mindset of the athletes. All of the athletes experienced a decrease in the time and physical participation devoted to their sport. Paralympians whose sole focus was on the loss of physical participation were affected the most. Athletes who felt unchallenged did so because of their mental adaptation: through a positive outlook and mentality, they were able to cope effectively and not dwell on the negative aspects brought on by the pandemic. In conclusion, having a strong AI did not necessarily coincide with a negative impact on identity from COVID-19, and those who did not possess a strong AI felt their AI was unchallenged by the pandemic. More importantly, how Paralympians view and interpret their AI is crucial to how it is affected by the sport disruption of COVID-19.


Author(s):  
Nicholas Smith ◽  
Darby Vickers

As artificial intelligence (AI) becomes ubiquitous, it will be increasingly involved in novel, morally significant situations. Thus, understanding what it means for a machine to be morally responsible is important for machine ethics. Any method for ascribing moral responsibility to AI must be intelligible and intuitive to the humans who interact with it. We argue that the appropriate approach is to determine how AIs might fare on a standard account of human moral responsibility: a Strawsonian account. We make no claim that our Strawsonian approach is either the only one worthy of consideration or the obviously correct approach, but we think it is preferable to trying to marry fundamentally different ideas of moral responsibility (i.e. one for AI, one for humans) into a single cohesive account. Under a Strawsonian framework, people are morally responsible when they are appropriately subject to a particular set of attitudes, the reactive attitudes; we then determine under what conditions it might be appropriate to subject machines to this same set of attitudes. Although the Strawsonian account traditionally applies to individual humans, it is plausible that entities that are not individual humans but possess these attitudes are candidates for moral responsibility under a Strawsonian framework. We conclude that weak AI is never morally responsible, while a strong AI with the right emotional capacities may be morally responsible.


2021 ◽  
Vol 35 (1) ◽  
pp. 91-101 ◽  
Author(s):  
Martin V. Butz

Strong AI, artificial intelligence that is in all respects at least as intelligent as humans, is still out of reach. Current AI lacks common sense, that is, it is not able to infer, understand, or explain the hidden processes, forces, and causes behind data. Mainstream machine learning research on deep artificial neural networks (ANNs) may even be characterized as behavioristic. In contrast, various sources of evidence from cognitive science suggest that human brains engage in the active development of compositional generative predictive models (CGPMs) from their self-generated sensorimotor experiences. Guided by evolutionarily shaped inductive learning and information-processing biases, they exhibit the tendency to organize the gathered experiences into event-predictive encodings. Meanwhile, they infer and optimize behavior and attention by means of both epistemic- and homeostasis-oriented drives. I argue that AI research should set a stronger focus on learning CGPMs of the hidden causes that lead to the registered observations. Endowed with suitable information-processing biases, AI may develop the ability to explain the reality it is confronted with, reason about it, and find adaptive solutions, making it Strong AI. Given that such Strong AI can be equipped with a mental capacity and computational resources that exceed those of humans, the resulting system may have the potential to guide our knowledge, technology, and policies in sustainable directions. Clearly, though, Strong AI may also be used to manipulate us even more. Thus, it will be on us to put good, far-reaching and long-term, homeostasis-oriented purpose into these machines.


Author(s):  
Egor V. Falev

The article considers the concept of artificial intelligence (AI) using the categories and basic principles of the theory of consciousness developed in the Living Ethics (LE). The latter is a modern form of the ancient tradition of exploring consciousness in Indian philosophy and spiritual practice. The categorial apparatus of Indian philosophy contains a rich variety of distinctions that may also be applied successfully in modern cognitive research. The article shows that more precise definitions of the basic concepts allow a strict delimitation to be drawn between "strong" and "weak" AI, as well as between what is possible and what is completely impossible for AI. Strong AI, in the sense of possessing "subjective presentations," appears to be impossible. However, a deeper understanding of the nature of consciousness in LE allows the limits of what is considered possible for weak AI to be extended. First, LE asserts that a mechanical mode of operation underlies most intellectual operations; hence, even "weak AI" may fulfill many functions previously attributed only to "strong AI." Second, LE defines consciousness and intelligence as inherent inner potential powers of material systems, manifested also in their ability and tendency toward self-organization. Therefore, some features of "artificial" intelligence may be reconsidered as manifestations of the intrinsic "intelligence" of matter, which also implies wider possibilities for AI systems. Parallels emerge with major Western philosophers such as G. Bruno, Leibniz, H. Bergson, and E. Husserl, as well as with recent approaches in N. Luhmann's systems theory and B. Latour's actor-network theory.


2021 ◽  
Vol 77 (4) ◽  
pp. 460-478
Author(s):  
Chammah J. Kaunda

This article probes the prospects of personhood in strong artificial intelligence (strong AI) from a Bemba theo-cosmological perspective. Employing an interdisciplinary approach, it argues that the Bemba concept of the spirit name (ishina lya mupashi) can make a constructive contribution to a theo-cosmology of the possibility of personhood for strong AI. The article shifts the discussion from a strong AI informed by individualized superhuman intelligence to one shaped by eco-relational, personhood-oriented intelligence.


2020 ◽  
pp. 137-141
Author(s):  
Nicolas Sabouret
