Video retrieval model based on multimodal information fusion

2008 ◽  
Vol 28 (1) ◽  
pp. 199-201 ◽  
Author(s):  
Jing ZHANG


2021 ◽  
Vol 12 ◽  
Author(s):  
Haihua Tu

With the development of science and education, English learning has become increasingly important. In the past, English instruction relied mainly on one-way lecturing, and students were not very motivated to learn. The purpose of this article is to use a cooperative English-learning model to improve students' enthusiasm and initiative and to raise the efficiency of their English learning. A game-based team learning model is proposed: this article constructs a cooperative and competitive model of English learning based on multimodal information fusion. Its main feature is that students form small groups, with a competitive relationship between the groups; within each group, success in the competition is the common interest of all members, so every student is motivated to make the group more competitive. Drawing on the subject-association model in the literature, the model addresses English grammar, vocabulary, and language perception: students learn together through team communication to develop these abilities in parallel. Finally, a questionnaire survey was conducted. The results show that, after changing the English team learning mode and optimizing the support system of the students' English learning teams, the cooperative and competitive English-learning model based on multimodal information fusion proposed in this article can improve learning outcomes by 55%-60%. In English teaching as a whole, the two dimensions of professional knowledge and English ability training are not orthogonal and mutually exclusive, but mutually supportive and interdependent. To form an effective "student-centered, teacher-led" teaching model, active and rich communication and feedback in the classroom are key, and they also help establish a progressive cycle of teaching and learning.


2021 ◽  
Author(s):  
Zhibing Xie

Understanding human emotional states is indispensable for our daily interaction, and we can enjoy a more natural and friendly human-computer interaction (HCI) experience by fully utilizing humans' affective states. In emotion recognition applications, multimodal information fusion is widely used to discover the relationships among multiple information sources and to make joint use of a number of channels, such as speech, facial expression, gesture, and physiological processes. This thesis proposes a new emotion recognition framework using information fusion based on the estimation of information entropy. Novel information-theoretic learning techniques are applied to feature-level fusion and score-level fusion. The most critical issues for feature-level fusion are feature transformation and dimensionality reduction. Existing methods depend on second-order statistics, which are optimal only for Gaussian-like distributions. By incorporating information-theoretic tools, a new feature-level fusion method based on kernel entropy component analysis is proposed. For score-level fusion, most previous methods rely on predefined rule-based approaches, which are usually heuristic. In this thesis, a connection between information fusion and the maximum correntropy criterion is established for effective score-level fusion. The feature-level and score-level fusion methods are then combined into a two-stage fusion platform. The proposed methods are applied to audiovisual emotion recognition, and their effectiveness is evaluated by experiments on two publicly available audiovisual emotion databases. The experimental results demonstrate that the proposed algorithms achieve improved performance in comparison with existing methods.
The work of this thesis offers a promising direction for designing more advanced emotion recognition systems based on multimodal information fusion and has great significance for the development of intelligent human-computer interaction systems.
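The two techniques named in this abstract can be illustrated briefly. The following is a minimal Python sketch, not the thesis's actual implementation: the function names, the Gaussian kernel choice, and the bandwidth parameter are assumptions. It shows (a) a sample estimator of correntropy between two score vectors, the quantity maximized under the maximum correntropy criterion, and (b) a kernel-entropy-component-analysis-style transform that ranks kernel eigen-axes by their contribution to a Renyi entropy estimate rather than by eigenvalue alone, as kernel PCA would.

```python
import numpy as np


def gaussian_kernel_matrix(X, sigma):
    """Pairwise Gaussian kernel matrix K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma ** 2))


def correntropy(x, y, sigma=1.0):
    """Sample correntropy V(x, y) = mean of G_sigma(x_i - y_i); equals 1.0 when x == y."""
    e = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.mean(np.exp(-e ** 2 / (2.0 * sigma ** 2))))


def keca_transform(X, n_components=2, sigma=1.0):
    """KECA-style projection: keep the kernel eigen-axes with the largest
    entropy contribution lam_i * (1^T e_i)^2, which are the terms of the
    Renyi entropy estimate -log(1^T K 1 / N^2)."""
    K = gaussian_kernel_matrix(np.asarray(X, dtype=float), sigma)
    lam, E = np.linalg.eigh(K)                 # eigenvalues in ascending order
    ones = np.ones(K.shape[0])
    contrib = lam * (E.T @ ones) ** 2          # entropy contribution per axis
    idx = np.argsort(contrib)[::-1][:n_components]
    # Project onto the selected axes, scaled by sqrt of their eigenvalues.
    return E[:, idx] * np.sqrt(np.clip(lam[idx], 0.0, None))
```

Under this ranking, an axis with a modest eigenvalue can outrank a larger one if it carries more of the entropy estimate, which is the point of KECA as opposed to plain kernel PCA.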


2020 ◽  
Vol 79 (45-46) ◽  
pp. 33943-33956
Author(s):  
Qi Liang ◽  
Ning Xu ◽  
Weijie Wang ◽  
Xingjian Long

2020 ◽  
Vol 53 ◽  
pp. 209-221 ◽  
Author(s):  
Yingying Jiang ◽  
Wei Li ◽  
M. Shamim Hossain ◽  
Min Chen ◽  
Abdulhameed Alelaiwi ◽  
...  
