Robust Late Fusion on Multi-View Clustering

Author(s):  
Lei Du ◽  
Xiaonan Luo ◽  
Yan Pan
2021 ◽  
pp. 1-20
Author(s):  
Tianqi Wang ◽  
Yin Hong ◽  
Quanyi Wang ◽  
Rongfeng Su ◽  
Manwa Lawrence Ng ◽  
...  

Background: Previous studies have explored noninvasive speech and language biomarkers for the detection of mild cognitive impairment (MCI). Yet, most employed a single task, which might not adequately capture all aspects of participants' cognitive function. Objective: The present study aimed to achieve state-of-the-art accuracy in detecting individuals with MCI using multiple spoken tasks, and to uncover task-specific contributions with a tentative interpretation of features. Methods: Fifty patients clinically diagnosed with MCI and 60 healthy controls completed three spoken tasks (picture description, semantic fluency, and sentence repetition), from which multidimensional features were extracted to train machine learning classifiers. With a late-fusion configuration, predictions from the individual tasks were combined and correlated with participants' cognitive ability as assessed by the Montreal Cognitive Assessment (MoCA). Statistical analyses of pre-defined features were carried out to explore their association with the diagnosis. Results: The late-fusion configuration effectively boosted the final classification result (SVM: F1 = 0.95; RF: F1 = 0.96; LR: F1 = 0.93), outperforming each individual task classifier. Moreover, the probability estimates of MCI were strongly correlated with the MoCA scores (SVM: –0.74; RF: –0.71; LR: –0.72). Conclusion: Each task tapped into distinct cognitive processes and made specific contributions to the prediction of MCI. Specifically, the picture description task characterized communication at the discourse level, while the semantic fluency task was more specific to controlled lexical retrieval processes. With its greater demands on working memory, the sentence repetition task uncovered memory deficits through modified speech patterns in the reproduced sentences.
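The late-fusion configuration described in this abstract can be sketched as a soft vote over per-task probability estimates. The sketch below is a minimal illustration of that general technique, not the authors' implementation; the function name, equal task weights, and all probability values are illustrative placeholders:

```python
import numpy as np

def late_fuse(task_probs, weights=None):
    """Soft-vote late fusion: weighted average of per-task MCI probabilities.

    task_probs: list of 1-D arrays, one per spoken task, each giving the
    estimated probability of MCI for every participant.
    """
    probs = np.stack(task_probs)                       # (n_tasks, n_subjects)
    if weights is None:                                # default: equal weights
        weights = np.full(len(task_probs), 1.0 / len(task_probs))
    fused = np.average(probs, axis=0, weights=weights)
    labels = (fused >= 0.5).astype(int)                # 1 = predicted MCI
    return fused, labels

# Hypothetical per-task classifier outputs for two participants
picture    = np.array([0.9, 0.2])   # picture description task
fluency    = np.array([0.8, 0.4])   # semantic fluency task
repetition = np.array([0.7, 0.3])   # sentence repetition task

fused, labels = late_fuse([picture, fluency, repetition])
# fused probabilities: [0.8, 0.3]; predicted labels: [1, 0]
```

The fused probability can then be correlated against a continuous clinical score such as MoCA, as the study reports.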


2021 ◽  
Vol 11 (3) ◽  
pp. 1064
Author(s):  
Jenq-Haur Wang ◽  
Yen-Tsang Wu ◽  
Long Wang

In social networks, users can easily share information and express their opinions. Given the huge amount of data posted by many users, it is difficult to search for relevant information. Beyond individual posts, it would also be useful to recommend groups of people with similar interests. Past studies on user preference learning focused on single-modal features such as review contents or demographic information, which are usually hard to obtain in most social media without explicit user feedback. In this paper, we propose a multimodal feature fusion approach to implicit user preference prediction that combines text and image features from user posts to recommend similar users in social media. First, we use a convolutional neural network (CNN) and a TextCNN model to extract image and text features, respectively. Then, these features are combined using early and late fusion methods as a representation of user preferences. Lastly, a list of users with the most similar preferences is recommended. Experimental results on real-world Instagram data show that the best performance is achieved with late fusion of the individual classification results for images and texts, with a best average top-k accuracy of 0.491. This validates the effectiveness of deep learning methods for fusing multimodal features to represent social user preferences. Further investigation is needed to verify the performance on different types of social media.
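The early/late fusion contrast in this abstract can be shown in a few lines, assuming the CNN and TextCNN extractors have already produced fixed-length feature vectors and per-modality scores. All names, dimensions, and the equal blending weight below are illustrative assumptions, not details from the paper:

```python
import numpy as np

def early_fusion(img_feats, txt_feats):
    """Early fusion: concatenate modality features into one vector per post,
    which a single downstream classifier would then consume."""
    return np.concatenate([img_feats, txt_feats], axis=1)

def late_fusion(img_scores, txt_scores, alpha=0.5):
    """Late fusion: blend the per-modality classification scores instead
    of the raw features (alpha weights the image modality)."""
    return alpha * img_scores + (1 - alpha) * txt_scores

# Toy example: 4 posts, 128-dim image features, 64-dim text features
rng = np.random.default_rng(0)
img_feats = rng.normal(size=(4, 128))
txt_feats = rng.normal(size=(4, 64))

joint = early_fusion(img_feats, txt_feats)   # one (4, 192) matrix
scores = late_fusion(np.array([0.6, 0.2, 0.9, 0.4]),    # image scores
                     np.array([0.8, 0.4, 0.7, 0.2]))    # text scores
```

The design trade-off matches the paper's finding: late fusion keeps each modality's classifier independent, so a weak modality cannot dilute the joint feature space.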


2021 ◽  
Author(s):  
Esaú Villatoro-Tello ◽  
S. Pavankumar Dubagunta ◽  
Julian Fritsch ◽  
Gabriela Ramírez-de-la-Rosa ◽  
Petr Motlicek ◽  
...  

2021 ◽  
Vol 8 (7) ◽  
pp. 97-105
Author(s):  
Ali Ahmed ◽  
Sara Mohamed
Content-Based Image Retrieval (CBIR) systems retrieve images from an image repository or database that are visually similar to a query image. CBIR plays an important role in various fields such as medical diagnosis, crime prevention, web-based searching, and architecture. CBIR consists mainly of two stages: feature extraction and similarity matching. There are several ways to improve the efficiency and performance of CBIR, such as segmentation, relevance feedback, query expansion, and fusion-based methods. The literature has suggested several methods for combining and fusing various image descriptors. In general, fusion strategies are divided into two groups: early and late fusion. Early fusion combines image features from more than one descriptor into a single vector before the similarity computation, while late fusion refers either to combining the outputs produced by various retrieval systems or to combining different similarity rankings. In this study, a group of color and texture features is proposed for use in both fusion strategies. First, eighteen color features and twelve texture features are concatenated into a single vector representation (early fusion); second, three of the most common distance measures are combined in the late fusion stage. Our experimental results on two common image datasets show that the proposed method yields good retrieval performance compared to the traditional use of a single feature descriptor, and acceptable retrieval performance compared to some state-of-the-art methods. The overall accuracy of the proposed method is 60.6% and 39.07% for the Corel-1K and GHIM-10K datasets, respectively.
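Both strategies from this abstract can be sketched together: an early-fused feature vector (in the paper, eighteen color plus twelve texture features) is compared under several distance measures, and the resulting rankings are fused. The three measures and the rank-sum combination below are illustrative stand-ins, since the abstract does not name the exact measures, and the toy vectors are 2-D for brevity:

```python
import numpy as np

def distance_lists(query, db):
    """Three common distance measures between a query vector and every
    database image's (early-fused) feature vector."""
    euclidean = np.linalg.norm(db - query, axis=1)
    manhattan = np.abs(db - query).sum(axis=1)
    cosine = 1.0 - (db @ query) / (
        np.linalg.norm(db, axis=1) * np.linalg.norm(query) + 1e-12)
    return [euclidean, manhattan, cosine]

def late_fuse_rankings(dist_lists):
    """Borda-style late fusion: sum each image's rank under every distance
    measure, then re-rank by the fused score (smaller = more similar)."""
    ranks = [np.argsort(np.argsort(d)) for d in dist_lists]  # rank per measure
    return np.argsort(np.sum(ranks, axis=0))                 # fused ordering

# Toy database of three images; the query equals image 1, so image 1
# should come first in the fused ranking.
db = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
query = np.array([0.0, 1.0])
order = late_fuse_rankings(distance_lists(query, db))
# fused ordering: [1, 2, 0]
```

Rank-level fusion like this is robust to the different numeric scales of the distance measures, which is one common reason to fuse rankings rather than raw distances.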

