The Hard Problem
Recently Published Documents

Total documents: 263 (five years: 90)
H-index: 14 (five years: 3)

2021 ◽ Vol 2 ◽ Author(s): Michael Pauen

One of the reasons why the Neural Correlates of Consciousness (NCC) program could appear attractive in the 1990s was that it seemed to disentangle theoretical and empirical problems. Theoretical disagreements could thus be sidestepped in order to focus on empirical research regarding the neural substrate of consciousness. A further consequence of this dissociation of empirical and theoretical questions was that fundamental questions regarding the Mind-Body Problem or the “Hard Problem of Consciousness” could remain unresolved even if the search for the neural correlates had been successful. Drawing on historical examples, a widely held consensus in the philosophy of science, and actual NCC research, we argue that there is no such independence. Moreover, as the dependence between the theoretical and the empirical level is mutual, empirical progress will go hand in hand with theoretical development. Thus, contrary to what the original NCC program suggested, we conclude that NCC research may significantly benefit from, and contribute to, theoretical progress in our explanation and understanding of consciousness. Eventually, this might even contribute to a solution of the Hard Problem of Consciousness.


PLoS ONE ◽ 2021 ◽ Vol 16 (12) ◽ pp. e0251952 ◽ Author(s): Santosh Hiremath, Samantha Wittke, Taru Palosuo, Jere Kaivosoja, Fulu Tao, ...

Identifying crop loss at the field parcel scale using satellite images is challenging: first, crop loss is caused by many factors during the growing season; second, reliable reference data about crop loss are lacking; third, there are many ways to define crop loss. This study investigates the feasibility of using satellite images to train machine learning (ML) models to classify agricultural field parcels into those with and without crop loss. The reference data for this study were provided by the Finnish Food Authority (FFA) and contain crop-loss information for approximately 1.4 million field parcels in Finland, covering about 3.5 million ha, from 2000 to 2015. These reference data were combined with the Normalised Difference Vegetation Index (NDVI) derived from Landsat 7 images, in which more than 80% of the possible data are missing. Despite this hard problem with extremely noisy data, among the four ML models we tested, random forest (with mean imputation and missing-value indicators) achieved an average AUC (area under the ROC curve) of 0.688±0.059 over all 16 years, with a range of [0.602, 0.795], in identifying new crop-loss fields based on reference fields of the same year. To our knowledge, this is one of the first large-scale benchmark studies of using machine learning for crop-loss classification at the field parcel scale. The classification setting and trained models have numerous potential applications, for example allowing government agencies or insurance companies to verify crop-loss claims by farmers and to realise efficient agricultural monitoring.
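The preprocessing named in the abstract, mean imputation combined with missing-value indicators, can be sketched as follows. This is a minimal illustration in plain Python, not the study's pipeline: the toy NDVI values are invented, and training of the random forest itself (e.g. via scikit-learn) is omitted.

```python
# Hedged sketch: mean imputation plus a 0/1 missing-value indicator per
# feature column, the preprocessing the study pairs with random forest.
# Rows represent per-parcel NDVI time series with None for missing values.

def impute_with_indicators(rows):
    """Replace None with the column mean and append a missing flag per
    original column. `rows` is a list of equal-length lists."""
    n_cols = len(rows[0])
    means = []
    for j in range(n_cols):
        observed = [r[j] for r in rows if r[j] is not None]
        means.append(sum(observed) / len(observed) if observed else 0.0)
    out = []
    for r in rows:
        filled = [means[j] if r[j] is None else r[j] for j in range(n_cols)]
        flags = [1 if r[j] is None else 0 for j in range(n_cols)]
        out.append(filled + flags)
    return out

# Illustrative data: two parcels, two NDVI observations each.
features = impute_with_indicators([[0.2, None], [0.4, 0.6]])
```

The indicator columns let the downstream classifier distinguish a genuinely observed value from an imputed one, which matters when, as here, most observations are missing.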


Author(s): Mihaela Constantinescu, Cristina Voinea, Radu Uszkai, Constantin Vică

During the last decade there has been burgeoning research concerning the ways in which we should think of and apply the concept of responsibility for Artificial Intelligence. Despite this conceptual richness, there is still a lack of consensus regarding what Responsible AI entails on both conceptual and practical levels. The aim of this paper is to connect the ethical dimension of responsibility in Responsible AI with Aristotelian virtue ethics, where notions of context and dianoetic virtues play a grounding role for the concept of moral responsibility. The paper starts by highlighting the important difficulties in assigning responsibility to either technologies themselves or to their developers. Top-down and bottom-up approaches to moral responsibility are then contrasted, as we explore how they could inform debates about Responsible AI. We highlight the limits of the former ethical approaches and build the case for classical Aristotelian virtue ethics. We show that two building blocks of Aristotle’s ethics, dianoetic virtues and the context of actions, although largely ignored in the literature, can shed light on how we could think of moral responsibility for both AI and humans. We end by exploring the practical implications of this particular understanding of moral responsibility along the triadic dimensions of ethics by design, ethics in design and ethics for designers.


Entropy ◽ 2021 ◽ Vol 23 (9) ◽ pp. 1226 ◽ Author(s): Garrett Mindt

The hard problem of consciousness has been a perennially vexing issue for the study of consciousness, particularly in giving a scientific and naturalized account of phenomenal experience. At the heart of the hard problem lies an often-overlooked argument: the structure and dynamics (S&D) argument. In this essay, I argue that we have good reason to suspect that the S&D argument given by David Chalmers rests on a limited conception of S&D properties, what I call here extrinsic structure and dynamics. I argue that if we take recent insights from the complexity sciences and from recent developments in the Integrated Information Theory (IIT) of consciousness, we get a more nuanced picture of S&D, specifically a class of properties I call intrinsic structure and dynamics. This, I think, opens the door to a broader class of properties with which we might naturally and scientifically explain phenomenal experience, as well as the relationship between syntactic, semantic, and intrinsic notions of information. I argue that Chalmers’ characterization of structure and dynamics in his S&D argument paints them with too broad a brush and fails to account for important nuances, especially concerning a system’s intrinsic properties. Ultimately, my hope is to vindicate a certain species of explanation from the S&D argument, and by extension dissolve the hard problem of consciousness at its core, by showing that not all structure and dynamics are equal.


Author(s): Xijia Wei, Zhiqiang Wei, Valentin Radu

Many engineered approaches have been proposed over the years for solving the hard problem of indoor localization. However, specialising solutions for the edge cases remains challenging. Here we propose to build the solution with zero hand-engineered features, with everything learned directly from data. We use a modality-specific neural architecture for extracting preliminary features, which are then integrated with cross-modality neural network structures. We show that each modality-specific branch is capable of estimating the location with good accuracy independently. For higher accuracy, however, a cross-modality neural network that fuses the features of those early modality-specific representations is the stronger proposition. Our multimodal neural network, MM-Loc, is effective because it allows the uniform flow of gradients across modalities during training. Because it is a data-driven approach, complex feature representations are learned rather than relying heavily on hand-engineered features.
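The fusion idea described above, per-modality feature extractors whose outputs are concatenated and passed through a shared fusion layer, can be sketched in a few lines. This is a toy forward pass in plain Python with illustrative layer sizes and weights; the paper's actual MM-Loc architecture, modalities, and training procedure are not reproduced here.

```python
# Hedged sketch of cross-modality fusion: each modality (e.g. WiFi RSSI,
# IMU readings) gets its own small feature extractor, and a final layer
# consumes the concatenated features to estimate a 2-D location.

def dense_relu(x, w, b):
    """One fully connected layer with ReLU, on plain lists.
    `w` is a list of weight columns, one per output unit."""
    return [max(0.0, sum(xi * wij for xi, wij in zip(x, col)) + bj)
            for col, bj in zip(w, b)]

def linear(x, w, b):
    """Fully connected layer without activation, for the output estimate."""
    return [sum(xi * wij for xi, wij in zip(x, col)) + bj
            for col, bj in zip(w, b)]

def mm_loc_forward(wifi, imu, params):
    """Two modality-specific branches, then a fusion layer over the
    concatenation of their feature vectors."""
    f_wifi = dense_relu(wifi, params["w_wifi"], params["b_wifi"])
    f_imu = dense_relu(imu, params["w_imu"], params["b_imu"])
    fused = f_wifi + f_imu  # feature concatenation across modalities
    return linear(fused, params["w_fuse"], params["b_fuse"])

# Illustrative identity-like weights, purely to exercise the forward pass.
params = {
    "w_wifi": [[1.0, 0.0], [0.0, 1.0]], "b_wifi": [0.0, 0.0],
    "w_imu": [[1.0, 0.0], [0.0, 1.0]], "b_imu": [0.0, 0.0],
    "w_fuse": [[1.0, 1.0, 1.0, 1.0], [0.0, 0.0, 0.0, 1.0]],
    "b_fuse": [0.0, 0.0],
}
estimate = mm_loc_forward([0.5, -0.3], [0.1, 0.2], params)
```

Because the branches and the fusion layer form one differentiable graph, training the fused output end to end propagates gradients uniformly into every modality branch, which is the property the abstract credits for MM-Loc's effectiveness.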


2021 ◽ Author(s): Nicholas Martin Rosseinsky

Whether there can be a science of consciousness is both of the utmost importance and a matter of intense current debate in the field. Recently, two major papers seemed to reach dramatically conflicting conclusions, one denying that scientific method currently exists in the field, the other promoting a way to ‘make the hard problem easier’. Here I apply uncontroversial mathematical physics, together with a new symbolism for conscious experience, to decisively resolve this issue. Under dynamically-orthodox physics (e.g. current physical theory), there cannot be a scientifically reliable approach. But under a strong form of dynamically-unorthodox physics, subjective report is not provably unreliable, thus meeting the minimal necessary conditions for a true science. Implications for the epistemological foundations of science are briefly discussed.


2021 ◽ Author(s): Adam Safron

In this brief commentary on The Hidden Spring: A Journey to the Source of Consciousness, I describe ways in which Mark Solms’ account of the origins of subjective experience relates to Integrated World Modeling Theory (IWMT). IWMT is a synthetic theory that brings together different perspectives, with the ultimate goal of solving the enduring problems of consciousness, including the Hard Problem. I describe points of compatibility and incompatibility between Solms’ proposal and IWMT, with particular emphasis on how a Bayesian interpretation of Integrated Information Theory and Global (Neuronal) Workspace Theory may help identify the physical and computational substrates of consciousness.

