Author(s): Takuto Omiya, Kazuhiro Hotta

In this paper, we perform image labeling based on the probabilistic integration of local and global features. Several conventional methods label pixels or regions using features extracted from local regions and local contextual relationships between neighboring regions. However, their labeling results tend to depend on local viewpoints. To overcome this problem, we propose an image labeling method that utilizes both local and global features. We compute the posterior probability distributions of the local and global features independently and integrate them by taking their product. To compute the probability of the global region (the entire image), Bag-of-Words is used, whereas the local co-occurrence between color and texture features is used to compute the local probability. In the experiments, we use the MSRC21 dataset. The results demonstrate that the use of the global viewpoint significantly improves labeling accuracy.
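The product-based integration described above can be sketched as follows. This is a minimal illustration of the product rule only, not the paper's full pipeline: the class names and probability values are made up, and the local (color/texture co-occurrence) and global (Bag-of-Words) models are replaced by fixed example posteriors.

```python
import numpy as np

def fuse_posteriors(p_local, p_global):
    """Combine local and global class posteriors by their product,
    then renormalize so the result is again a distribution."""
    product = p_local * p_global      # element-wise product per class
    return product / product.sum()    # renormalize to sum to 1

# Illustrative posteriors over 3 classes for one pixel (values assumed).
p_local = np.array([0.5, 0.3, 0.2])   # from local color/texture features
p_global = np.array([0.2, 0.6, 0.2])  # from global Bag-of-Words context
fused = fuse_posteriors(p_local, p_global)
print(fused)
```

Note how a class that the local model mildly prefers can be overridden when the global context strongly disagrees, which is the intended effect of adding the global viewpoint.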


2017
Author(s): L. Sánchez, N. Barreira, N. Sánchez, A. Mosquera, H. Pena-Verdeal, ...

2020, Vol 34 (4), pp. 515-520
Author(s): Chen Zhang, Qingxu Li, Xue Cheng

The convolutional neural network (CNN) and the long short-term memory (LSTM) network are adept at extracting local and global features, respectively, and both can achieve excellent classification results. However, the CNN performs poorly at extracting the global contextual information of a text, while the LSTM often overlooks the features hidden between words. For text sentiment classification, this paper combines the CNN with a bidirectional LSTM (BiLSTM) into a parallel hybrid model called CNN_BiLSTM. First, the CNN is adopted to quickly extract the local features of the text. Next, the BiLSTM is employed to obtain the global text features containing contextual semantics. After that, the features extracted by the two neural networks (NNs) are fused and processed by a softmax classifier for text sentiment classification. To verify its performance, CNN_BiLSTM was compared through experiments with single NNs such as the CNN and LSTM, as well as with other deep learning (DL) NNs. The experimental results show that the proposed parallel hybrid model outperformed the contrastive methods in F1-score and accuracy. Therefore, our model can solve text sentiment classification tasks effectively and offers better practical value than the other NNs.
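The parallel fusion step at the heart of such a hybrid model can be sketched in a few lines. This is a hedged numpy illustration, not the authors' implementation: the CNN and BiLSTM branches are stubbed with random feature vectors, and all dimensions, weights, and class counts are assumptions chosen for readability.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

# Stand-ins for the two branches (in the real model these are learned):
cnn_features = rng.standard_normal(64)      # local n-gram-like features (CNN)
bilstm_features = rng.standard_normal(128)  # contextual features (BiLSTM)

# Parallel fusion: concatenate the two feature vectors.
fused = np.concatenate([cnn_features, bilstm_features])

# Softmax classifier over 2 sentiment classes (positive / negative).
W = rng.standard_normal((2, fused.size)) * 0.01  # illustrative weights
b = np.zeros(2)
probs = softmax(W @ fused + b)
print(probs)  # a valid probability distribution over the two classes
```

Concatenation is the simplest fusion choice; the key design point is that the two branches run in parallel on the same input rather than one feeding the other, so the classifier sees local and global evidence side by side.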


2013, Vol 04 (03), pp. 243-252
Author(s): Chin-Teng Lin, Sheng-Chih Hsu, Ja-Fan Lee, Chien-Ting Yang

Author(s): Daniel Riccio, Andrea Casanova, Gianni Fenu

Face recognition in real-world applications is a very difficult task because of image misalignments, pose and illumination variations, and occlusions. Many researchers in this field have investigated both face representation and classification techniques able to deal with these drawbacks; however, none of them is free from limitations. Early algorithms were generally holistic, in the sense that they consider the face as a whole. Recently, challenging benchmarks have demonstrated that, despite their good performance under more controlled conditions, such methods are not adequate for unconstrained environments. The researchers' attention is therefore turning to local features, which have been demonstrated to be more robust to a large set of non-monotonic distortions. Nevertheless, although local operators partially overcome some drawbacks, they open new questions (e.g., which criteria should be used to select the most representative features?). This is why, among all the alternatives, hybrid approaches show high potential in terms of recognition accuracy when applied in uncontrolled settings, as they integrate complementary information from both local and global features. This chapter explores local, global, and hybrid approaches.
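One common way a hybrid recognizer integrates local and global evidence is score-level fusion: normalize each matcher's similarity scores, then combine them with a weighted sum. The sketch below is a generic illustration of that idea, not a specific method from the chapter; the scores, gallery size, and equal weighting are all assumptions.

```python
import numpy as np

def minmax(scores):
    """Min-max normalize scores to [0, 1] so different matchers are comparable."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

# Illustrative similarities of one probe face to 3 gallery identities.
global_scores = [0.91, 0.40, 0.35]  # holistic (whole-face) matcher
local_scores = [0.55, 0.80, 0.30]   # local (patch-based) matcher, other scale

w = 0.5  # equal trust in both matchers (assumed)
fused = w * minmax(global_scores) + (1 - w) * minmax(local_scores)
best = int(np.argmax(fused))        # identity with the highest fused score
print(best, fused)
```

The normalization step matters because the two matchers produce scores on different scales; without it, the matcher with the larger raw range would dominate the fusion regardless of the chosen weight.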


2020, Vol 528, pp. 46-57
Author(s): Xuelin Liu, Yuming Fang, Rengang Du, Yifan Zuo, Wenying Wen

Author(s): Keisuke Doman, Daisuke Deguchi, Tomokazu Takahashi, Yoshito Mekada, Ichiro Ide, ...
