multimodal reasoning
Recently Published Documents

TOTAL DOCUMENTS: 12 (five years: 4)
H-INDEX: 5 (five years: 1)

2021 · Vol 11 (12) · pp. 758
Author(s): Lihua Xu, Jan van Driel, Ryan Healy

Classroom communication is increasingly accepted as multimodal, realized through the orchestrated use of different semiotic modes, resources, and systems. There is growing interest in examining the meaning-making potential of modes beyond language (e.g., gestural, visual, kinesthetic) in classroom communication and in student reasoning in science. In this paper, we explore the use of a multi-layered analytical framework in an investigation of student reasoning during an open inquiry into the physical phenomenon of dissolving in a primary classroom. The 24 students, who worked in pairs, were video recorded in a facility purposefully designed to capture their verbal and non-verbal interactions during the science session. By employing a multi-layered analytical framework, we were able to identify the interplay between the different semiotic modes and the level of reasoning undertaken by the students as they worked through the tasks. This analytical process uncovered a variety of ways in which the students negotiated ideas and coordinated semiotic resources in their exploration of dissolving. This paper highlights the affordances and challenges of this multi-layered analytical framework for identifying the dynamic inter-relationships between the different modes that the students drew on to grapple with the complexity of the physical phenomenon of dissolving.


2020 · Vol 34 (05) · pp. 7504-7511
Author(s): Feilong Chen, Fandong Meng, Jiaming Xu, Peng Li, Bo Xu, ...

Visual Dialog is a vision-language task that requires an AI agent to engage in a conversation with humans grounded in an image. It remains challenging because the agent must fully understand a given question before making an appropriate response, drawing not only on the textual dialog history but also on the visually grounded information. Previous models typically leverage single-hop or single-channel reasoning to deal with this complex multimodal reasoning task, which is intuitively insufficient. In this paper, we thus propose a novel and more powerful Dual-channel Multi-hop Reasoning Model for Visual Dialog, named DMRM. DMRM synchronously captures information from the dialog history and the image to enrich the semantic representation of the question by exploiting dual-channel reasoning. Specifically, DMRM maintains a dual channel to obtain question- and history-aware image features and question- and image-aware dialog history features through a multi-hop reasoning process in each channel. Additionally, we design an effective multimodal attention mechanism to further enhance the decoder to generate more accurate responses. Experimental results on the VisDial v0.9 and v1.0 datasets demonstrate that the proposed model is effective and outperforms comparison models by a significant margin.
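The dual-channel multi-hop idea in the abstract can be illustrated with a minimal sketch: a question vector is iteratively refined in two parallel channels, one attending over image features and one over dialog-history features, and the channel outputs are fused. This is an illustrative toy in NumPy, not the authors' DMRM implementation; the function names, the additive update rule, and the concatenation fusion are all assumptions for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(query, keys):
    """Dot-product attention: weight each key by its similarity to the query
    and return the weighted sum (a single attended feature vector)."""
    weights = softmax(keys @ query)   # (n,) attention distribution over keys
    return weights @ keys             # (d,) attended summary

def dual_channel_multihop(question, image_feats, history_feats, hops=2):
    """Refine the question representation over several hops in two channels:
    an image channel and a dialog-history channel. Each hop adds the
    attended context back into that channel's query (a residual update)."""
    q_img, q_hist = question.copy(), question.copy()
    for _ in range(hops):
        q_img = q_img + attend(q_img, image_feats)      # image channel hop
        q_hist = q_hist + attend(q_hist, history_feats)  # history channel hop
    # Fuse the two channels for the downstream answer decoder.
    return np.concatenate([q_img, q_hist])
```

In the paper's full model each channel's query is additionally conditioned on the other modality and the fused representation feeds a decoder with multimodal attention; the sketch keeps only the two-channel, multi-hop refinement loop.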

