Sensory Processing and Categorization in Cortical and Deep Neural Networks

2019 ◽  
Author(s):  
Dimitris A. Pinotsis ◽  
Markus Siegel ◽  
Earl K. Miller

Abstract
Many recent advances in artificial intelligence (AI) are rooted in visual neuroscience, whereas ideas from more complex paradigms such as decision-making are used less often. Although automated decision-making systems are ubiquitous (driverless cars, pilot support systems, medical diagnosis algorithms, etc.), achieving human-level performance in decision-making tasks remains a challenge. At the same time, many of these tasks that are hard for AI are easy for humans. Thus, understanding human brain dynamics during decision-making tasks, and modeling them using deep neural networks, could improve AI performance. Here we modeled some of the complex neural interactions during a sensorimotor decision-making task. We investigated how brain dynamics flexibly represented and distinguished between sensory processing and categorization in two sensory domains: motion direction and color. We used two different approaches for understanding neural representations. We compared brain responses to 1) the geometry of a sensory or category domain (domain selectivity) and 2) predictions from deep neural networks (computation selectivity). Both approaches gave similar results, confirming the validity of our analyses. Using the first approach, we found that neural representations changed depending on context. We then trained deep recurrent neural networks to perform the same tasks as the animals. Using the second approach, we found that computations in different brain areas also changed flexibly depending on context. Color computations appeared to rely more on sensory processing, and motion computations more on abstract categories. Overall, our results shed light on the biological basis of categorization and on differences in selectivity and computation across brain areas. They also suggest a way to study sensory and categorical representations in the brain: compare brain responses to both a behavioral model and a deep neural network and test whether they give similar results.
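
A minimal sketch of the comparison strategy described above (simulated data, not the authors' analysis code): condition-by-condition response patterns are compared against 1) a hypothesised domain geometry and 2) hidden-layer features of a task-trained network, using representational similarity analysis. The array shapes and the use of Spearman-correlated dissimilarity matrices are illustrative assumptions.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_conditions, n_channels, n_units = 12, 64, 128

# Simulated stand-ins: brain responses (condition x channel), the coordinates of
# each condition in a hypothetical sensory/category space, and hidden-unit
# activity of a task-trained network for the same conditions.
brain = rng.normal(size=(n_conditions, n_channels))
domain_geometry = rng.normal(size=(n_conditions, 2))
dnn_features = rng.normal(size=(n_conditions, n_units))

def rdm(patterns, metric="correlation"):
    """Condensed representational dissimilarity matrix over conditions."""
    return pdist(patterns, metric=metric)

# Domain selectivity: does the brain RDM track the geometry of the stimulus space?
rho_domain, _ = spearmanr(rdm(brain), rdm(domain_geometry, metric="euclidean"))
# Computation selectivity: does the brain RDM track the network's representation?
rho_dnn, _ = spearmanr(rdm(brain), rdm(dnn_features))
print(f"domain selectivity rho={rho_domain:.2f}, DNN selectivity rho={rho_dnn:.2f}")
```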

2020 ◽  
Vol 4 (3) ◽  
pp. 807-851
Author(s):  
Andreas Spiegler ◽  
Javad Karimi Abadchi ◽  
Majid Mohajerani ◽  
Viktor K. Jirsa

Resting-state functional networks such as the default mode network (DMN) dominate spontaneous brain dynamics. To date, the mechanisms linking brain structure to brain dynamics and to function in cognition, perception, and action remain unknown, mainly due to the uncontrolled and erratic nature of the resting state. Here we used a stimulation paradigm to probe the brain’s resting behavior, providing insight into the stability of the state space and the multiplicity of network trajectories after stimulation. We performed systematic explorations in a mouse brain model to map spatiotemporal brain dynamics as a function of the stimulation site. We demonstrated the emergence of known functional networks in the brain responses. Several responses relied heavily on the DMN, suggesting that the DMN plays a mechanistic role between functional networks. We probed the simulated brain responses to stimulation of regions along the information processing chains of sensory systems, from the periphery up to primary sensory cortices. Moreover, we compared the simulated dynamics against in vivo brain responses to optogenetic stimulation. Our results underscore the importance of anatomical connectivity in the functional organization of brain networks and demonstrate how functionally differentiated information processing chains arise from the same system.
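
A minimal sketch, under assumptions of my own rather than the simulation framework used in the study, of the stimulation paradigm: a rate network whose coupling matrix stands in for the anatomical connectome is driven by a brief pulse at one region, and the whole-network response is recorded for every stimulation site.

```python
import numpy as np

rng = np.random.default_rng(1)
n_regions, dt, t_max = 20, 0.001, 2.0
# Stand-in structural connectome (the real study uses anatomical connectivity).
W = rng.gamma(shape=1.0, scale=0.05, size=(n_regions, n_regions))
np.fill_diagonal(W, 0.0)

def simulate(stim_region, stim_on=(0.5, 0.6), stim_amp=2.0, tau=0.1):
    steps = int(t_max / dt)
    x = np.zeros(n_regions)
    trace = np.empty((steps, n_regions))
    for k in range(steps):
        t = k * dt
        stim = np.zeros(n_regions)
        if stim_on[0] <= t < stim_on[1]:
            stim[stim_region] = stim_amp
        # Leaky rate dynamics: decay + structural coupling + stimulation input.
        dx = (-x + W @ np.tanh(x) + stim) / tau
        x = x + dt * dx
        trace[k] = x
    return trace

# Map the spatiotemporal response as a function of stimulation site.
responses = {site: simulate(site) for site in range(n_regions)}
print(responses[0].shape)  # (time steps, regions)
```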


2017 ◽  
Author(s):  
B. B. Bankson ◽  
M.N. Hebart ◽  
I.I.A. Groen ◽  
C.I. Baker

Abstract
Visual object representations are commonly thought to emerge rapidly, yet it has remained unclear to what extent early brain responses reflect purely low-level visual features of these objects and how strongly those features contribute to later categorical or conceptual representations. Here, we aimed to estimate a lower temporal bound for the emergence of conceptual representations by defining two criteria that characterize such representations: 1) conceptual object representations should generalize across different exemplars of the same object, and 2) these representations should reflect high-level behavioral judgments. To test these criteria, we compared magnetoencephalography (MEG) recordings between two groups of participants (n = 16 per group) exposed to different exemplar images of the same object concepts. Further, we disentangled low-level from high-level MEG responses by estimating the unique and shared contributions of models of behavioral judgments, semantics, and different layers of deep neural networks of visual object processing. We find that 1) both generalization across exemplars and generalization of object-related signals across time increase after 150 ms, peaking around 230 ms, and 2) behavioral judgments explain the most unique variance in the response after 150 ms. Collectively, these results suggest a lower bound for the emergence of conceptual object representations around 150 ms following stimulus onset.
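
A minimal sketch with simulated data (not the study's MEG pipeline) of the first criterion: train a classifier on responses from one group of participants and test it, time point by time point, on a second group who saw different exemplars of the same concepts; above-chance accuracy indicates concept information that generalizes across exemplars at that latency.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_trials, n_sensors, n_times, n_concepts = 200, 50, 30, 4
labels = rng.integers(0, n_concepts, size=n_trials)
# Simulated sensor data for two groups viewing different exemplars of the same concepts.
group_a = rng.normal(size=(n_trials, n_sensors, n_times)) + labels[:, None, None] * 0.1
group_b = rng.normal(size=(n_trials, n_sensors, n_times)) + labels[:, None, None] * 0.1

accuracy = np.empty(n_times)
for t in range(n_times):
    clf = LogisticRegression(max_iter=1000).fit(group_a[:, :, t], labels)
    # Above-chance accuracy here reflects concept information that generalizes
    # across exemplars at this time point.
    accuracy[t] = clf.score(group_b[:, :, t], labels)
print(accuracy.round(2))
```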


2021 ◽  
Author(s):  
Rabia Saleem ◽  
Bo Yuan ◽  
Fatih Kurugollu ◽  
Ashiq Anjum

Artificial Intelligence (AI) models can learn from data and make decisions without any human intervention. However, the deployment of such models is challenging and risky because we do not know how the internal decision-making happens in these models. In particular, high-risk decisions such as medical diagnosis or automated navigation demand explainability and verification of the decision-making process in AI algorithms. This research paper aims to explain AI models by discretizing the black-box process model of deep neural networks using partial differential equations (PDEs). The resulting PDE-based deterministic models would reduce the time and computational cost of the decision-making process and reduce uncertainty, making predictions more trustworthy.
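
A hedged illustration of the general idea rather than the authors' formulation: one standard way to connect deep networks to differential equations reads a stack of residual blocks x_{k+1} = x_k + h·f(x_k) as an explicit Euler discretization of dx/dt = f(x), so the trained network can be analysed with tools for deterministic dynamical systems.

```python
import numpy as np

rng = np.random.default_rng(3)
dim, depth, h = 8, 20, 0.1
# Placeholder layer weights standing in for a trained network.
weights = [rng.normal(scale=0.3, size=(dim, dim)) for _ in range(depth)]

def f(x, W):
    """Per-layer vector field of the residual block."""
    return np.tanh(W @ x)

def residual_network(x):
    # Each residual layer applies one Euler step of the underlying continuous dynamics.
    for W in weights:
        x = x + h * f(x, W)
    return x

x0 = rng.normal(size=dim)
print(residual_network(x0))
```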


2021 ◽  
Author(s):  
Matan Fintz ◽  
Margarita Osadchy ◽  
Uri Hertz

Abstract
Deep neural network (DNN) models have the potential to provide new insights into the study of human decision making, owing to their high capacity and data-driven design. While these models may be able to go beyond theory-driven models in predicting human behaviour, their opaque nature limits their ability to explain how an operation is carried out. This explainability problem remains unresolved. Here we demonstrate the use of a DNN model as an exploratory tool to identify predictable and consistent human behaviour in value-based decision making beyond the scope of theory-driven models. We then propose using theory-driven models to characterise the operation of the DNN model. We trained a DNN model to predict human decisions in a four-armed bandit task. We found that this model was more accurate than a reinforcement-learning reward-oriented model geared towards choosing the most rewarding option. This disparity in accuracy was more pronounced during times when the expected reward from all options was similar, i.e., when there was no unambiguously good option. To investigate this disparity, we introduced a reward-oblivious model, which was trained to predict human decisions without information about the rewards obtained from each option. This model captured decision-sequence patterns made by participants (e.g., a-b-c-d). In a series of offline experimental simulations of all models, we found that the general model was in line with the reward-oriented model’s predictions when one option was clearly better than the others. However, when the options’ expected rewards were similar to each other, it was in line with the reward-oblivious model’s pattern-completion predictions. These results indicate the contribution of predictable but task-irrelevant decision patterns to human decisions, especially when task-relevant choices are not immediately apparent. Importantly, we demonstrate how theory-driven cognitive models can be used to characterise the operation of DNNs, making them a useful explanatory tool in scientific investigation.
Author Summary
Deep neural network (DNN) models are an extremely useful tool across multiple domains, and specifically for performing tasks that mimic and predict human behaviour. However, due to their opaque nature and high level of complexity, their ability to explain human behaviour is limited. Here we used DNN models to uncover hitherto overlooked aspects of human decision making, namely their reliance on predictable patterns for exploration. For this purpose, we trained a DNN model to predict human choices in a decision-making task. We then characterised this data-driven model using explicit, theory-driven cognitive models in a set of offline experimental simulations. This relationship between explicit and data-driven approaches, in which high-capacity models are used to explore beyond the scope of established models and theory-driven models are used to explain and characterise this new ground, makes DNN models a powerful scientific tool.
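
A minimal sketch, assuming standard components rather than the authors' exact models, of the theory-driven baseline in this comparison: a reward-oriented Q-learning model producing trial-by-trial choice probabilities for a four-armed bandit. The data-driven counterpart would be a recurrent network (for example an LSTM over the same choice and reward history), omitted here for brevity.

```python
import numpy as np

def q_learning_choice_probs(choices, rewards, alpha=0.3, beta=3.0, n_arms=4):
    """Trial-by-trial softmax choice probabilities of a delta-rule learner."""
    q = np.zeros(n_arms)
    probs = np.empty((len(choices), n_arms))
    for t, (c, r) in enumerate(zip(choices, rewards)):
        # Predict the current choice from values learned so far.
        exp_q = np.exp(beta * (q - q.max()))
        probs[t] = exp_q / exp_q.sum()
        # Then update only the chosen arm with the obtained reward.
        q[c] += alpha * (r - q[c])
    return probs

# Toy choice/reward history on 5 trials of a 4-armed bandit.
choices = np.array([0, 1, 1, 2, 1])
rewards = np.array([0.2, 0.8, 0.7, 0.1, 0.9])
probs = q_learning_choice_probs(choices, rewards)
# Predictive accuracy of the reward-oriented model (to be compared with a DNN).
print((probs.argmax(axis=1) == choices).mean())
```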


Author(s):  
Nicholas D. Kullman ◽  
Martin Cousineau ◽  
Justin C. Goodson ◽  
Jorge E. Mendoza

We consider the problem of an operator controlling a fleet of electric vehicles for use in a ride-hailing service. The operator, seeking to maximize profit, must assign vehicles to requests as they arise, as well as recharge and reposition vehicles in anticipation of future requests. To solve this problem, we employ deep reinforcement learning, developing policies whose decision making uses value approximations learned by deep neural networks. We compare these policies against a reoptimization-based policy and against dual bounds on the value of an optimal policy, including the value of an optimal policy with perfect information, which we establish using a Benders-based decomposition. We assess performance on instances derived from real data for the island of Manhattan in New York City. We find that, across instances of varying size, our best policy trained with deep reinforcement learning outperforms the reoptimization approach. We also provide evidence that this policy may be effectively scaled and deployed on larger instances without retraining.
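
A minimal sketch of value-based dispatch (not the paper's implementation): each feasible action is scored by its immediate reward plus an approximate value of the post-decision state; here a linear approximation and a hypothetical feature map stand in for the deep networks used in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
n_features = 6
theta = rng.normal(size=n_features)          # placeholder "learned" value weights

def post_decision_features(state, action):
    """Hypothetical feature map of the fleet state after applying an action."""
    return np.tanh(state + action)

def value(state, action):
    # Linear value approximation standing in for a deep network.
    return float(theta @ post_decision_features(state, action))

def choose_action(state, feasible_actions, immediate_reward):
    # Greedy selection with respect to reward plus approximate value-to-go.
    scores = [immediate_reward[i] + value(state, a)
              for i, a in enumerate(feasible_actions)]
    return int(np.argmax(scores))

state = rng.normal(size=n_features)
actions = [rng.normal(size=n_features) for _ in range(3)]  # e.g. assign / recharge / reposition
print(choose_action(state, actions, immediate_reward=[1.0, 0.0, 0.2]))
```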


2020 ◽  
Vol 17 (8) ◽  
pp. 3337-3343
Author(s):  
K. Ashok Kumar ◽  
Valiveti Amrutha Varshini ◽  
Unnam Sai Rekha ◽  
V. S. Mynavathi

Soil is an essential substance responsible for supporting life on earth. Despite highly advanced technology in the service sector, agriculture remains the major supplier of food resources and source of income in India. Soil testing is an important tool for assessing the available soil nutrient status, and it helps determine the correct amount of nutrients to apply to the soil based on its fertility and yield requirements. In this work, AI techniques were used to classify soil nutrient levels, such as nitrogen (N), phosphorus (P), and potassium (K), more effectively using Rapid Soil Testing (RST). The system consists of three sections, namely soil testing, the training framework, and method testing.
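
A minimal sketch with synthetic data and a generic classifier, not the system described above, of the classification step: predict a soil nutrient class (for example a low/high nitrogen label) from rapid-test measurements.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n_samples = 300
# Hypothetical soil features: pH, electrical conductivity, organic carbon, moisture.
X = rng.normal(size=(n_samples, 4))
# Synthetic stand-in for a low/high nitrogen label.
y = (X[:, 2] + 0.5 * X[:, 0] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```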


NeuroImage ◽  
2019 ◽  
Vol 202 ◽  
pp. 116118
Author(s):  
Dimitris A. Pinotsis ◽  
Markus Siegel ◽  
Earl K. Miller


2021 ◽  
Author(s):  
Zohreh Shams ◽  
Botty Dimanov ◽  
Sumaiyah Kola ◽  
Nikola Simidjievski ◽  
Helena Andres Terre ◽  
...  

Abstract
Deep learning models are receiving increasing attention in clinical decision-making; however, the lack of interpretability and explainability impedes their deployment in day-to-day clinical practice. We propose REM, an interpretable and explainable methodology for extracting rules from deep neural networks and combining them with other data-driven and knowledge-driven rules. This allows machine learning and reasoning to be integrated for investigating applied and basic biological research questions. We evaluate the utility of REM on the predictive tasks of classifying histological and immunohistochemical breast cancer subtypes from genotype and phenotype data. We demonstrate that REM efficiently extracts accurate, comprehensible, and biologically relevant rulesets from deep neural networks that can be readily integrated with rulesets obtained from tree-based approaches. REM provides explanation facilities for predictions and enables clinicians to validate and calibrate the extracted rulesets against their domain knowledge. With these functionalities, REM caters for a novel and direct human-in-the-loop approach to clinical decision making.
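
A hedged sketch of the general rule-extraction idea, not the REM methodology itself: fit an interpretable surrogate (here a shallow decision tree) to a trained network's predictions and read off its rules, which can then be inspected or combined with rules from other sources.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(6)
# Synthetic genotype/phenotype-style features and a synthetic subtype label.
X = rng.normal(size=(500, 5))
y = (X[:, 0] + X[:, 1] * X[:, 2] > 0).astype(int)

net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0).fit(X, y)
# The surrogate tree mimics the network's decisions, not the raw labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, net.predict(X))
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(5)]))
```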

