hierarchical representations
Recently Published Documents

Total documents: 119 (last five years: 42)
H-index: 17 (last five years: 3)

2021
Author(s): Riccardo Proietti, Giovanni Pezzulo, Alessia Tessari

We advance a novel computational model of the acquisition of a hierarchical action repertoire and its use for observation, understanding and motor control. The model is grounded in active inference, a principled framework for understanding brain and cognition. We exemplify the functioning of the model by presenting four simulations of a tennis learner who observes a teacher performing tennis shots, forms hierarchical representations of the observed actions (including both actions that are already in her repertoire and novel actions), and finally imitates them. Our simulations show that the agent’s oculomotor activity implements an active information sampling strategy that permits inferring the kinematic aspects of the observed movement, which lie at the lowest level of the action hierarchy. In turn, this low-level kinematic inference supports higher-level inferences about deeper aspects of the observed actions, such as their proximal goals and intentions. The inferred action representations can steer imitative motor responses, but interfere with the execution of different actions. Taken together, our simulations show that the same hierarchical active inference model provides a unified account of action observation, understanding, learning and imitation. Finally, our model provides a computational rationale for the neurobiological underpinnings of visuomotor cognition, including the multiple routes for action understanding in the dorsal and ventral streams and mirror mechanisms.
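The hierarchical scheme described above, with low-level kinematic states nested under higher-level goal states, can be illustrated with a toy two-level predictive-coding sketch in Python. This is not the authors' model: the mappings g_low and g_high, the precisions, and the gradient updates are simplifying assumptions used only to show how prediction errors at the kinematic level can drive inference at the goal level.

```python
# Minimal two-level predictive-coding sketch (illustrative only; the paper's
# actual generative model and update equations are not reproduced here).

def infer(observation, g_low, g_high, n_steps=200, lr=0.05,
          sigma_obs=1.0, sigma_high=1.0):
    """Infer a low-level (kinematic) and a high-level (goal) state by
    gradient descent on precision-weighted prediction errors."""
    mu_low, mu_high = 0.0, 0.0                   # posterior means at the two levels
    for _ in range(n_steps):
        err_obs = observation - g_low(mu_low)    # sensory prediction error
        err_high = mu_low - g_high(mu_high)      # error between the two levels
        # the low level is pulled by the data and by the higher-level prediction
        mu_low += lr * (err_obs / sigma_obs - err_high / sigma_high)
        # the high level moves to explain the low-level state
        mu_high += lr * err_high / sigma_high
        # (for nonlinear g_low / g_high these updates would also include
        #  their derivatives; identity-like mappings are assumed here)
    return mu_low, mu_high

# toy usage: an observed "kinematic" value drives inference of a higher-level goal
kin, goal = infer(observation=2.0, g_low=lambda x: x, g_high=lambda x: x)
print(round(kin, 3), round(goal, 3))
```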


2021, Vol 9 (Suppl 1), pp. e001287
Author(s): Robert P Lennon, Robbie Fraleigh, Lauren J Van Scoy, Aparna Keshaviah, Xindi C Hu, ...

Qualitative research remains underused, in part due to the time and cost of annotating qualitative data (coding). Artificial intelligence (AI) has been suggested as a means to reduce those burdens and has been used in exploratory coding studies. However, methods to date use AI analytical techniques that lack transparency, potentially limiting acceptance of results. We developed an automated qualitative assistant (AQUA) using a semiclassical approach, replacing Latent Semantic Indexing/Latent Dirichlet Allocation with a more transparent graph-theoretic topic extraction and clustering method. Applied to a large dataset of free-text survey responses, AQUA generated unsupervised topic categories and circle hierarchical representations of free-text responses, enabling rapid interpretation of the data. When tasked with coding a subset of free-text data into user-defined qualitative categories, AQUA demonstrated intercoder reliability in several multicategory combinations, with a Cohen’s kappa comparable to human coders (0.62–0.72), enabling researchers to automate coding on those categories for the entire dataset. The aim of this manuscript is to describe pertinent components of best practices for AI/machine learning (ML)-assisted qualitative methods, illustrating how primary care researchers may use AQUA to rapidly and accurately code large text datasets. The contribution of this article is guidance that should increase AI/ML transparency and reproducibility.
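As a hedged illustration of the intercoder-reliability measure reported above, the snippet below computes Cohen's kappa between a hypothetical human coder and an automated coder on made-up labels; it is not AQUA's code, and the category names are invented for the example.

```python
# Generic Cohen's kappa computation on fabricated example labels
# (not AQUA's data or implementation).
from sklearn.metrics import cohen_kappa_score

human_codes = ["barrier", "barrier", "facilitator", "other", "facilitator"]
auto_codes  = ["barrier", "other",   "facilitator", "other", "facilitator"]

kappa = cohen_kappa_score(human_codes, auto_codes)
print(f"Cohen's kappa between human and automated coder: {kappa:.2f}")
```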


2021
Author(s): Siddique Latif, Rajib Rana, Sara Khalifa, Raja Jurdak, Junaid Qadir, ...

Traditionally, speech emotion recognition (SER) research has relied on manually handcrafted acoustic features obtained through feature engineering. However, designing handcrafted features for complex SER tasks requires significant manual effort, which impedes generalisability and slows the pace of innovation. This has motivated the adoption of representation learning techniques, which can automatically learn an intermediate representation of the input signal without any manual feature engineering. Representation learning has led to improved SER performance and enabled rapid innovation. Its effectiveness has further increased with advances in deep learning (DL), which has enabled deep representation learning, in which hierarchical representations are automatically learned in a data-driven manner. This paper presents the first comprehensive survey of deep representation learning for SER. We highlight various techniques and related challenges, and identify important areas for future research. Our survey bridges a gap in the literature, since existing surveys either focus on SER with hand-engineered features or cover representation learning in a general setting without focusing on SER.
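To make the idea of deep representation learning for SER concrete, here is a minimal sketch, assuming a PyTorch setup, of a small convolutional encoder that maps a log-mel spectrogram to emotion logits. The layer sizes, input shape, and number of emotion classes are illustrative assumptions; the survey itself does not prescribe this architecture.

```python
# Illustrative convolutional encoder for speech emotion recognition
# (assumed architecture, not taken from the survey).
import torch
import torch.nn as nn

class SpeechEmotionEncoder(nn.Module):
    def __init__(self, n_emotions: int = 4):
        super().__init__()
        # successive conv blocks learn increasingly abstract (hierarchical) features
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global pooling -> utterance-level embedding
        )
        self.classifier = nn.Linear(64, n_emotions)

    def forward(self, spectrogram: torch.Tensor) -> torch.Tensor:
        # spectrogram: (batch, 1, n_mels, n_frames), e.g. a log-mel spectrogram
        h = self.features(spectrogram).flatten(1)
        return self.classifier(h)

# toy usage with a random "spectrogram"
model = SpeechEmotionEncoder()
logits = model(torch.randn(2, 1, 64, 128))
print(logits.shape)  # torch.Size([2, 4])
```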


2021, Vol 21 (9), pp. 2312
Author(s): Timothy Brady, Michael Allen, Isabella DeStefano

Sensors, 2021, Vol 21 (19), pp. 6382
Author(s): Weizheng Qiao, Xiaojun Bi

Recently, deep convolutional neural networks (CNNs) with inception modules have attracted much attention due to their excellent performance across diverse domains. Nevertheless, a basic CNN captures only a univariate, essentially linear feature, which weakens its expressive power and results in insufficient feature mining. To address this, researchers have kept deepening their networks, which brings parameter redundancy and over-fitting. Hence, whether this efficient architecture can be employed to improve CNNs and enhance their capacity for image recognition has remained unknown. In this paper, we introduce spike-and-slab units into a modified inception module, enabling our model to capture dual latent variables and thus both mean and covariance information. This further enhances the model's robustness to variations in image intensity without increasing the number of parameters. Results on several tasks demonstrate that dual-variable operations can be well integrated into inception modules and achieve excellent results.
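As a hedged sketch of the spike-and-slab idea (not the paper's exact formulation inside the inception module), the unit below gates a real-valued "slab" response with a sigmoidal "spike" probability, so a single layer carries both presence and magnitude information; all names and shapes are assumptions.

```python
# Illustrative spike-and-slab style unit (assumed formulation).
import torch
import torch.nn as nn

class SpikeAndSlabUnit(nn.Module):
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.spike = nn.Linear(in_features, out_features)  # gate logits (binary "spike")
        self.slab = nn.Linear(in_features, out_features)   # real-valued "slab" magnitude

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gate = torch.sigmoid(self.spike(x))   # expected value of the binary spike
        value = self.slab(x)                  # slab mean
        return gate * value                   # expected spike-and-slab activation

x = torch.randn(8, 32)
unit = SpikeAndSlabUnit(32, 64)
print(unit(x).shape)  # torch.Size([8, 64])
```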


2021
Author(s): Michael G Allen, Isabella Destefano, Timothy F. Brady

Chunks allow us to use long-term knowledge to efficiently represent the world in working memory. Most views of chunking assume that using chunks entails the loss of specific perceptual details, since the contents of chunks are presumed to be decoded from long-term memory rather than reflecting the exact details of the item that was presented. However, in two experiments we find that in situations where participants make use of chunks to improve visual working memory, access to instance-specific perceptual detail (which cannot be retrieved from long-term memory) increased rather than decreased. This supports an alternative view: chunks facilitate the encoding and retention of perceptual details in memory as part of structured, hierarchical memories, rather than serving as mere “content-free” pointers. It also provides a strong contrast to accounts in which working memory capacity is assumed to be exhaustively described by the number of chunks remembered.


Author(s): Mohammed Boukabous, Mostafa Azizi

Deep learning (DL) approaches use multiple processing layers to learn hierarchical representations of data. Recently, many methods and designs of natural language processing (NLP) models have shown significant progress, especially in text mining and analysis. Well-known models for learning vector-space representations of text include Word2vec, GloVe, and fastText. NLP then took a big step forward with the release of BERT and, more recently, GPT-3. In this paper, we highlight the most important language representation learning models in NLP and provide insight into their evolution. We also summarize, compare and contrast these models on sentiment analysis, and discuss their main strengths and limitations. Our results show that BERT is the best language representation learning model.
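As a hedged illustration of applying a pretrained BERT-family model to sentiment analysis (not the authors' experimental setup), the snippet below uses the Hugging Face transformers pipeline with its default sentiment model; the example sentences are invented.

```python
# Sentiment analysis with a pretrained transformer via the Hugging Face pipeline.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")   # downloads a default fine-tuned BERT variant

examples = [
    "The plot was predictable, but the acting saved the movie.",
    "Absolutely loved it, would watch again!",
]
for text, result in zip(examples, classifier(examples)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {text}")
```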


Patterns, 2021, Vol 2 (2), pp. 100193
Author(s): Robert Ian Etheredge, Manfred Schartl, Alex Jordan
