categorization task
Recently Published Documents

TOTAL DOCUMENTS: 213 (FIVE YEARS: 51)
H-INDEX: 25 (FIVE YEARS: 1)

Cognition ◽  
2022 ◽  
Vol 218 ◽  
pp. 104920
Author(s):  
Ellen M. O'Donoghue ◽  
Matthew B. Broschard ◽  
John H. Freeman ◽  
Edward A. Wasserman

2021 ◽  
Vol 5 (Supplement_1) ◽  
pp. 480-480
Author(s):  
Shraddha Shende ◽  
Lydia Nguyen ◽  
Grace Rochford ◽  
Raksha Mudar

Abstract Inhibitory control involves the suppression of goal-irrelevant information and responses. Emerging evidence suggests alterations in inhibitory control in individuals with age-related hearing loss (ARHL); however, few studies have specifically examined individuals with mild ARHL. We examined behavioral and event-related potential (ERP) differences between 14 older adults with mild ARHL (mean age: 69.43 ± 7.73 years) and 14 age- and education-matched normal-hearing (NH; mean age: 66.57 ± 5.70 years) controls on two Go/NoGo tasks: a simpler, basic categorization task (Single Car; SC) and a more difficult, superordinate categorization task (Object Animal; OA). The SC task consisted of exemplars of a single car and dog, and the OA task consisted of exemplars of multiple objects and animals. Participants were required to respond to Go trials (e.g., cars in the SC task) with a button press and to withhold responses on NoGo trials (e.g., dogs in the SC task). Behavioral results revealed that the ARHL group had worse accuracy on NoGo trials in the OA task, but not in the SC task. The ARHL group also had longer N2 latency for NoGo compared with Go trials in the simpler SC task, whereas no Go/NoGo differences were observed in the OA task. These findings suggest that prolonged neural effort on SC NoGo trials may have helped the ARHL group suppress false alarms at a level comparable to the NH group. Overall, these findings provide evidence for behavioral and neural changes in inhibitory control in ARHL.
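
For illustration, a minimal sketch of the key behavioral measure (NoGo accuracy, i.e., correctly withheld responses, by group and task) is given below; the data and column names (group, task, trial_type, responded) are hypothetical and not taken from the study.

```python
# Minimal sketch: NoGo accuracy (correct response withholding) by group and task.
# Column names and values are hypothetical placeholders.
import pandas as pd

trials = pd.DataFrame({
    "group":      ["ARHL", "ARHL", "NH", "NH"],
    "task":       ["SC", "OA", "SC", "OA"],
    "trial_type": ["NoGo", "NoGo", "NoGo", "NoGo"],
    "responded":  [0, 1, 0, 0],   # 1 = button press (a false alarm on a NoGo trial)
})

nogo = trials[trials["trial_type"] == "NoGo"].copy()
nogo["correct"] = 1 - nogo["responded"]          # correct NoGo = response withheld
accuracy = nogo.groupby(["group", "task"])["correct"].mean()
print(accuracy)
```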


2021 ◽  
pp. 1-33
Author(s):  
Kevin Berlemont ◽  
Jean-Pierre Nadal

Abstract In experiments on perceptual decision making, individuals learn a categorization task through trial-and-error protocols. We explore the capacity of a decision-making attractor network to learn a categorization task through reward-based, Hebbian-type modifications of the weights incoming from the stimulus-encoding layer. For the latter, we assume a standard layer with a large number of stimulus-specific neurons. Within the general framework of Hebbian learning, we hypothesize that the learning rate is modulated by the reward at each trial. Surprisingly, we find that when the coding layer has been optimized for the categorization task, such reward-modulated Hebbian learning (RMHL) fails to extract the category membership efficiently. In previous work, we showed that the nonlinear dynamics of attractor neural networks account for behavioral confidence in sequences of decision trials. Taking advantage of these findings, we propose that learning is controlled by confidence, as computed from the neural activity of the decision-making attractor network. Here we show that this confidence-controlled, reward-based Hebbian learning efficiently extracts categorical information from the optimized coding layer. The proposed learning rule is local and, in contrast to RMHL, does not require storing the average rewards obtained on previous trials. In addition, we find that the confidence-controlled learning rule achieves near-optimal performance. In accordance with this result, we show that the learning rule approximates a gradient descent method on a reward-maximizing cost function.
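
As a rough illustration of the kind of local learning rule described (a reward-based Hebbian update whose magnitude is controlled by confidence), a minimal sketch follows. It is not the authors' model: the decision readout, the confidence value, and all parameter names are placeholders.

```python
# Minimal sketch of a confidence-controlled, reward-based Hebbian update.
# Not the authors' exact model; all quantities below are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_coding = 100
w = rng.normal(0.0, 0.01, size=n_coding)   # weights from coding layer to a decision unit
eta = 0.05                                  # base learning rate

def confidence_controlled_update(w, x, y, reward, confidence, eta=eta):
    """Local Hebbian update: pre * post activity, gated by reward and scaled by confidence."""
    return w + eta * confidence * reward * y * x

# One illustrative trial
x = rng.random(n_coding)          # stimulus-specific coding-layer activity
y = float(w @ x > 0)              # crude decision readout (placeholder)
reward = 1.0                      # e.g., +1 for a correct categorization
confidence = 0.7                  # placeholder confidence read out from attractor dynamics
w = confidence_controlled_update(w, x, y, reward, confidence)
```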


2021 ◽  
pp. 1-19
Author(s):  
Jairo Perez-Osorio ◽  
Abdulaziz Abubshait ◽  
Agnieszka Wykowska

Abstract Understanding others' nonverbal behavior is essential for social interaction, as it allows us, among other things, to infer mental states. Although gaze communication, a well-established nonverbal social behavior, has shown its importance in inferring others' mental states, not much is known about the effects of irrelevant gaze signals on cognitive-conflict markers in collaborative settings. Here, participants completed a categorization task in which they categorized objects based on their color while observing images of a robot. On each trial, participants observed the robot iCub grasping an object from a table and offering it to them to simulate a handover. Once the robot "moved" the object forward, participants were asked to categorize the object according to its color. Before participants were allowed to respond, the robot made a lateral head/gaze shift. The gaze shifts were either congruent or incongruent with the object's color. We expected that incongruent head cues would induce more errors (Study 1), be associated with more curvature in eye-tracking trajectories (Study 2), and induce larger amplitudes in electrophysiological markers of cognitive conflict (Study 3). The results of the three studies show, for incongruent compared with congruent trials, more errors (Study 1), larger curvature of eye-tracking trajectories (Study 2), and higher amplitudes of the N2 ERP as well as larger event-related spectral perturbations in the EEG signal (Study 3). Our findings reveal that behavioral, ocular, and electrophysiological markers can index the influence of irrelevant signals during goal-oriented tasks.
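
As one concrete way to quantify the trajectory curvature analyzed in Study 2, the sketch below computes the maximum perpendicular deviation of a trajectory from the straight line between its first and last sample. This is a common curvature measure, not necessarily the one used in the study, and the example trajectory is invented.

```python
# Minimal sketch: trajectory curvature as the maximum perpendicular deviation
# from the straight line between the first and last sample of a trial.
import numpy as np

def max_deviation(xy):
    """xy: (n_samples, 2) array of gaze positions for one trial."""
    start, end = xy[0], xy[-1]
    line = end - start
    line_len = np.linalg.norm(line)
    rel = xy - start
    # |z-component of the 2D cross product| / line length = perpendicular distance
    dev = np.abs(line[0] * rel[:, 1] - line[1] * rel[:, 0]) / line_len
    return dev.max()

# Example: a trajectory that bows away from the straight path
trial = np.array([[0.0, 0.0], [0.3, 0.2], [0.7, 0.35], [1.2, 0.1], [1.5, 0.0]])
print(max_deviation(trial))
```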


2021 ◽  
Vol 16 (1) ◽  
pp. 2-22
Author(s):  
Rémy Versace ◽  
Nicolas Bailloud ◽  
Annie Magnan ◽  
Jean Ecalle

Abstract The aim of the present study was to demonstrate the multisensory nature of vocabulary knowledge by using learning designed to encourage the simulation of sensorimotor experiences. Forty participants were instructed to learn pseudowords together with arbitrary definitions, either by mentally experiencing the definitions (sensorimotor simulation) or by mentally repeating them. A test phase consisting of three tasks was then administered: in a recognition task, participants had to recognize learned pseudowords among distractors; in a categorization task, they had to categorize pseudowords as representing either living or non-living items; and in a sentence completion task, participants had to decide whether pseudowords were congruent with context sentences. As expected, the sensorimotor simulation condition induced better performance, but only in the categorization task and the sentence completion task. The results converge with data from the literature in demonstrating that knowledge emergence implies sensorimotor simulation and in showing that vocabulary learning can benefit from encoding that encourages the simulation of sensorimotor experiences.


2021 ◽  
Vol 32 (9) ◽  
pp. 1494-1509
Author(s):  
Yuan Chang Leong ◽  
Roma Dziembaj ◽  
Mark D’Esposito

People’s perceptual reports are biased toward percepts they are motivated to see. The arousal system coordinates the body’s response to motivationally significant events and is well positioned to regulate motivational effects on perceptual judgments. However, it remains unclear whether arousal would enhance or reduce motivational biases. Here, we measured pupil dilation as a measure of arousal while participants (N = 38) performed a visual categorization task. We used monetary bonuses to motivate participants to perceive one category over another. Even though the reward-maximizing strategy was to perform the task accurately, participants were more likely to report seeing the desirable category. Furthermore, higher arousal levels were associated with making motivationally biased responses. Analyses using computational models suggested that arousal enhanced motivational effects by biasing evidence accumulation in favor of desirable percepts. These results suggest that heightened arousal biases people toward what they want to see and away from an objective representation of the environment.
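
The abstract attributes the effect to a bias in evidence accumulation. Below is a minimal sketch of a biased drift-diffusion simulation, not the authors' fitted model: an arousal-scaled drift bias pushes accumulation toward the desirable category, and all parameter values are illustrative placeholders.

```python
# Minimal sketch: drift-diffusion with a motivational drift bias scaled by arousal.
# Not the authors' fitted model; parameters are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(1)

def simulate_trial(drift, bias, arousal, threshold=1.0, dt=0.01, noise=1.0):
    """Return +1 if the 'desirable' bound is hit first, else -1."""
    x = 0.0
    effective_drift = drift + arousal * bias   # arousal scales the motivational bias
    while abs(x) < threshold:
        x += effective_drift * dt + noise * np.sqrt(dt) * rng.normal()
    return 1 if x > 0 else -1

# Probability of reporting the desirable category under low vs. high arousal,
# when the stimulus evidence actually favors the other category (drift < 0).
for arousal in (0.2, 1.0):
    choices = [simulate_trial(drift=-0.5, bias=1.0, arousal=arousal) for _ in range(2000)]
    print(arousal, np.mean(np.array(choices) == 1))
```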


Author(s):  
Peter Flynn

In an earlier paper [Flynn 2020] I described the implementation of an XML/XSLT system (now named ℞, pronounced ‘recipe’: see http://xml.silmaril.ie/recipes/recipe/) for checking and reproducing cookery recipes where the ingredients were stored as disaggregated data in attributes rather than as plain-text phrases in unmarked element CDATA content. Since then, work has proceeded on three key aspects: a) the refinement of the categories for recipe ingredients; b) the implementation of the formatting algorithm in XSLT; and c) the implementation in CSS. This paper describes the third of these, recreating in CSS (for XML) the grammar for expressing the disaggregated data which the XSLT (for HTML) algorithms use to create the lists of ingredients and the references to them. The categorization task is out of scope for markup conferences, and is best discussed over a good dinner. In recipes written in English, the syntax of the List of Ingredients is a commonly accepted format expressing quantity, units, item, and various modifiers. In the earlier paper I showed how XSLT can be used to manipulate the ingredient data to achieve the required format. I indicated that the original (pre-℞) site used XML as the print format with CSS, and that this raised challenges when the disaggregated data was in attributes. This problem has now largely been overcome, and I also give details of how XSLT has been used to overcome some of the things CSS cannot do for the same tasks. Note: The names used for the attributes discussed here are still experimental and subject to change. In particular the item categorization is a work in progress, and should not be taken as a statement of intent.
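
The paper's own implementations are in XSLT and CSS; the following Python sketch only illustrates the general idea of assembling an English ingredient phrase from disaggregated attributes. The attribute names used here (qty, unit, item, prep) are invented for the sketch, and the paper notes that its real attribute names are still experimental.

```python
# Illustrative only: assembling an ingredient phrase from disaggregated XML attributes.
# The paper's real implementation uses XSLT and CSS; the attribute names below
# (qty, unit, item, prep) are invented for this sketch.
import xml.etree.ElementTree as ET

xml = '<ingredient qty="2" unit="tbsp" item="olive oil" prep="warmed"/>'
ing = ET.fromstring(xml)

parts = [ing.get("qty"), ing.get("unit"), ing.get("item")]
phrase = " ".join(p for p in parts if p)
if ing.get("prep"):
    phrase += f", {ing.get('prep')}"
print(phrase)   # "2 tbsp olive oil, warmed"
```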


2021 ◽  
Author(s):  
Sindram Volkmer ◽  
Nicole Wetzel ◽  
Andreas Widmann ◽  
Florian Scharf

The ability to shield against distraction while focusing on a task requires the operation of executive functions and is essential for successful learning. We investigated the short-term dynamics of distraction control in a data set of 269 children aged 4–10 years and 51 adults, pooled from three studies, using multilevel models. Participants performed a visual categorization task while a task-irrelevant sequence of sounds was presented that consisted of frequently repeated standard sounds and rarely interspersed novel sounds. On average, participants responded more slowly in the categorization task after novel sounds. This distraction effect was more pronounced in children. Over the course of the experiment, the initially strong distraction effects in the 6- to 10-year-olds declined to the level of adults. No such decline was observed in the 4- and 5-year-olds, who consistently showed a high level of distraction, or in adults, who showed a constantly low level of distraction throughout the experimental session. The results indicate that distraction control is a highly dynamic process that differs qualitatively and quantitatively between age groups. We conclude that the analysis of short-term dynamics provides valuable insights into the development of attention control and might explain inconsistent findings regarding distraction control in middle childhood. In addition, models of attention control need to be refined to account for age-dependent rapid learning mechanisms. Our findings have implications for the design of learning situations and provide an additional source of information for the diagnosis and treatment of attention deficit disorders.
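
A minimal sketch of the kind of multilevel (mixed-effects) model described is given below, fit with statsmodels on synthetic stand-in data; the column names, the simple random-intercept structure, and the numbers are assumptions, not the authors' specification or data.

```python
# Minimal sketch: mixed-effects model of reaction time with a distraction
# (novel vs. standard sound) by age-group interaction and random intercepts
# per participant. Data are synthetic placeholders, not the study's data set.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
rows = []
for p in range(30):                                   # 30 synthetic participants
    age_group = ["4-5", "6-10", "adult"][p % 3]
    base = 600 + rng.normal(0, 40)                    # participant-specific intercept
    for _ in range(40):                               # 40 trials each
        sound = rng.choice(["standard", "novel"])
        distraction = {"4-5": 80, "6-10": 40, "adult": 15}[age_group]
        rt = base + (distraction if sound == "novel" else 0) + rng.normal(0, 60)
        rows.append({"participant": p, "age_group": age_group, "sound": sound, "rt": rt})
df = pd.DataFrame(rows)

# Random-intercept model: distraction effect (novel vs. standard) by age group
model = smf.mixedlm("rt ~ C(sound) * C(age_group)", data=df, groups=df["participant"])
print(model.fit().summary())
```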


2021 ◽  
Vol 23 (06) ◽  
pp. 1569-1576
Author(s):  
Dr. A. Mekala ◽  
Dr. A. Prakash
Text Classification (TC), also known as Text Categorization, is the task of automatically classifying a set of text documents into different categories from a predefined set. If a document belongs to exactly one of the categories, it is a single-label categorization task; otherwise, it is a multi-label categorization task. TC uses several tools from Information Retrieval (IR) and Machine Learning (ML) and has received much attention in recent years from both researchers in academia and developers in industry. In this paper, we first categorize the documents using a KNN-based machine learning approach and then return the most appropriate documents.
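
As a concrete illustration of the KNN-based approach described, a minimal scikit-learn sketch is shown below; the example documents and labels are placeholders, not the paper's corpus.

```python
# Minimal sketch: single-label text categorization with TF-IDF features and KNN.
# Example documents and labels are placeholders, not the paper's corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

docs = ["stock markets rallied today", "the team won the championship",
        "new vaccine trial results published", "central bank raises interest rates"]
labels = ["finance", "sports", "health", "finance"]

clf = make_pipeline(TfidfVectorizer(), KNeighborsClassifier(n_neighbors=3))
clf.fit(docs, labels)
print(clf.predict(["interest rates and bond yields"]))
```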

