Truncated attention mechanism and cascade loss for cross-modal person re-identification

2021 ◽  
pp. 1-13
Author(s):  
Shuo Shi ◽  
Changwei Huo ◽  
Yingchun Guo ◽  
Stephen Lean ◽  
Gang Yan ◽  
...  

Person re-identification with natural language description retrieves the corresponding person's image from an image dataset according to a text description of the person. The key challenge in this cross-modal task is to extract visual and text features and construct loss functions that achieve cross-modal matching between text and image. First, we designed a two-branch network framework for person re-identification with natural language description: a Bi-directional Long Short-Term Memory (Bi-LSTM) network extracts text features, a proposed truncated attention mechanism selects the principal components of those features, and a MobileNet extracts image features. Second, we proposed a Cascade Loss Function (CLF), comprising a cross-modal matching loss and a single-modal classification loss, both based on the relative entropy function, to fully exploit identity-level information. Experimental results on the CUHK-PEDES dataset demonstrate that our method achieves better Top-5 and Top-10 results than ten current state-of-the-art algorithms.
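The abstract does not give the exact formulation of the truncated attention mechanism; one plausible reading is that attention scores over the Bi-LSTM word features are truncated to the top-k ("principal") words before softmax pooling. A minimal numpy sketch under that assumption (all names and inputs here are hypothetical):

```python
import numpy as np

def truncated_attention(word_feats, query, k=3):
    """Attention pooling that keeps only the top-k scoring words.

    word_feats: (T, d) Bi-LSTM outputs for T words (hypothetical input).
    query: (d,) learned query vector (assumed form, not from the paper).
    k: number of principal (highest-scoring) words to keep.
    """
    scores = word_feats @ query                 # (T,) relevance scores
    top = np.argsort(scores)[-k:]               # indices of the k best words
    masked = np.full_like(scores, -np.inf)
    masked[top] = scores[top]                   # truncate all other words
    w = np.exp(masked - masked[top].max())
    w = w / w.sum()                             # softmax over the kept words
    return w @ word_feats                       # (d,) pooled text feature
```

With k equal to the sentence length this reduces to ordinary softmax attention pooling; smaller k discards low-relevance words entirely.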

2022 ◽  
Vol 2022 ◽  
pp. 1-12
Author(s):  
Xiuye Yin ◽  
Liyong Chen

Because the multimodal environment is complex and existing shallow network structures cannot achieve high-precision image and text retrieval, a cross-modal image-text retrieval method combining efficient feature extraction with an interactive-learning convolutional autoencoder (CAE) is proposed. First, the residual network's convolution kernel is improved by incorporating two-dimensional principal component analysis (2DPCA) to extract image features, while text features are extracted through long short-term memory (LSTM) and word vectors, so that both modalities are represented efficiently. Then, cross-modal retrieval of images and text is realized with an interactive-learning CAE: the image and text features are fed to the two input terminals of the dual-modal CAE, and the image-text relationship model is obtained through interactive learning in the middle layer. Finally, the proposed method is evaluated on the Flickr30K, MSCOCO, and Pascal VOC 2007 datasets. The results show that it completes accurate image and text retrieval: the mean average precision (MAP) exceeds 0.3, and the areas under the precision-recall (PR) curves are larger than those of the comparison methods.
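2DPCA, referenced above, projects each image matrix directly (without flattening it to a vector) onto the top eigenvectors of an image scatter matrix. A minimal numpy sketch of standard 2DPCA, not of the paper's modified convolution kernel:

```python
import numpy as np

def twodpca(images, d=2):
    """Two-dimensional PCA (2DPCA) feature projection.

    images: (N, m, n) stack of images; d: number of projection axes kept.
    Returns the (n, d) projection matrix and the (N, m, d) features.
    """
    mean = images.mean(axis=0)
    centered = images - mean
    # image scatter matrix: average of (A - mean)^T (A - mean), shape (n, n)
    G = np.einsum('imk,iml->kl', centered, centered) / len(images)
    vals, vecs = np.linalg.eigh(G)              # eigenvalues in ascending order
    X = vecs[:, ::-1][:, :d]                    # top-d eigenvectors
    return X, images @ X                        # project each image: (N, m, d)
```

Keeping all n axes makes the projection orthogonal, so the original images are exactly recoverable; smaller d compresses each image row-wise.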


Author(s):  
Saud Altaf ◽  
Sofia Iqbal ◽  
Muhammad Waseem Soomro

This paper focuses on capturing the meaning of Natural Language Understanding (NLU) text features to detect duplicate reports without supervision. The NLU features are compared with lexical approaches to determine the more suitable classification technique. A transfer-learning approach is used to train feature extraction on the Semantic Textual Similarity (STS) task. All features are evaluated on two datasets, Bosch bug reports and Wikipedia articles. This study aims to structure recent research efforts by comparing NLU concepts for representing text semantics and applying them to information retrieval. The main contribution of this paper is a comparative study of semantic similarity measurements. The experimental results demonstrate that Term Frequency-Inverse Document Frequency (TF-IDF) features perform well on both datasets with a reasonable vocabulary size, and that a Bidirectional Long Short-Term Memory (BiLSTM) network can learn sentence structure to improve classification.
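TF-IDF with cosine similarity is the standard lexical baseline for duplicate detection of the kind discussed above. A self-contained sketch (the tokenisation and documents are illustrative, not from the study):

```python
import math
from collections import Counter

def tfidf(docs):
    """Compute sparse TF-IDF vectors for a list of tokenised documents."""
    df = Counter(t for doc in docs for t in set(doc))   # document frequency
    n = len(docs)
    return [{t: (c / len(doc)) * math.log(n / df[t])    # tf * idf per term
             for t, c in Counter(doc).items()}
            for doc in docs]

def cosine(u, v):
    """Cosine similarity between two sparse dict vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

Near-duplicate reports share weighted terms and score high; unrelated reports share none and score zero.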


Author(s):  
Md. Asifuzzaman Jishan ◽  
Khan Raqib Mahmud ◽  
Abul Kalam Al Azad

We present a learning model that generates natural language descriptions of images. The model exploits the connections between natural language and visual data by producing text-line-based content from a given image. Our hybrid Recurrent Neural Network model builds on Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and Bi-directional Recurrent Neural Network (BRNN) models. We conducted experiments on three benchmark datasets: Flickr8K, Flickr30K, and MS COCO. Our hybrid model uses the LSTM to encode text lines or sentences independently of object location and the BRNN for word representation, which reduces computational complexity without compromising the accuracy of the descriptor. The model produces better accuracy in retrieving natural-language-based descriptions on these datasets.


2021 ◽  
Author(s):  
Jiaojiao Wang ◽  
Dongjin Yu ◽  
Chengfei Liu ◽  
Xiaoxiao Sun

Effectively predicting the outcome of an on-going process instance helps make early decisions, which plays an important role in so-called predictive process monitoring. Existing methods in this field rely on empirical operations such as prefix extraction, clustering, and encoding, so their accuracy is highly sensitive to the dataset. Moreover, their lengthy prediction time limits real-time applications. Since the Long Short-Term Memory (LSTM) neural network predicts sequential data with high precision in several areas, this paper investigates LSTM and its enhancements and proposes three approaches to build more effective and efficient outcome-prediction models. The first enhancement combines the original LSTM network in two directions, forward and backward, to capture more features from the completed cases. The second adds an attention mechanism after feature extraction in the LSTM hidden layer, weighting the features by their attention scores. Extensive experiments on twelve real datasets compare our approaches with others. The results show that our approaches outperform the state of the art in both prediction effectiveness and time performance.
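Prefix extraction, mentioned above as a standard preprocessing step, turns each completed case into multiple training pairs: every prefix of the event trace is labelled with the case's final outcome. A minimal sketch (the trace encoding is hypothetical):

```python
def extract_prefixes(events, outcome, min_len=1):
    """All prefixes of one completed case, each paired with its outcome.

    events: ordered list of events for the case (hypothetical encoding).
    outcome: the known final outcome label of the case.
    Returns (prefix, outcome) pairs for training an outcome predictor.
    """
    return [(events[:i], outcome) for i in range(min_len, len(events) + 1)]
```

At prediction time the model sees only a prefix of a running case, so training on all prefixes matches the deployment condition.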


Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5279
Author(s):  
Yang Li ◽  
Huahu Xu ◽  
Junsheng Xiao

Language-based person search retrieves images of a target person from a natural language description and is a challenging fine-grained cross-modal retrieval task. A novel hybrid attention network is proposed for this task, with three components. First, a cubic attention mechanism for the person image combines cross-layer spatial attention with channel attention; it fully excavates both important mid-level details and key high-level semantics to obtain a more discriminative fine-grained feature representation of the person image. Second, a text attention network for the language description, based on a bidirectional LSTM (BiLSTM) and a self-attention mechanism, learns bidirectional semantic dependencies and captures the key words of sentences, so as to extract the context information and key semantic features of the description more effectively and accurately. Third, a cross-modal attention mechanism and a joint loss function for cross-modal learning focus attention on the relevant parts between text and image features, better exploiting both cross-modal and intra-modal correlation and addressing the problem of cross-modal heterogeneity. Extensive experiments on the CUHK-PEDES dataset show that our approach outperforms state-of-the-art approaches, demonstrating its advantage.
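The abstract names channel attention without detail; a common form (squeeze-and-excitation style, assumed here rather than taken from the paper) pools each channel globally, passes the result through a small bottleneck, and gates each channel with a sigmoid. A numpy sketch with hypothetical learned weights:

```python
import numpy as np

def channel_attention(fmap, w1, w2):
    """Squeeze-and-excitation style channel attention (one plausible form).

    fmap: (C, H, W) feature map.
    w1: (C//r, C) and w2: (C, C//r) bottleneck weights (hypothetical).
    Returns the feature map rescaled by per-channel attention gates.
    """
    squeeze = fmap.mean(axis=(1, 2))                # (C,) global average pool
    hidden = np.maximum(w1 @ squeeze, 0.0)          # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))     # sigmoid gates in (0, 1)
    return fmap * gate[:, None, None]               # rescale each channel
```

Because every gate lies in (0, 1), the mechanism can only suppress channels, never amplify them; learning decides which channels to keep.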


2018 ◽  
Vol 2018 ◽  
pp. 1-8 ◽  
Author(s):  
Biao Yang ◽  
Jinmeng Cao ◽  
Rongrong Ni ◽  
Ling Zou

We propose an anomaly detection approach that learns a generative model with a deep neural network. A weighted convolutional autoencoder (AE)-long short-term memory (LSTM) network reconstructs raw data and detects anomalies from reconstruction errors, addressing the challenges posed by ambiguous anomaly definitions and background influence. Convolutional AEs and LSTMs encode the spatial and temporal variations of the input frames, respectively. A weighted Euclidean loss is proposed so that the network concentrates on moving foregrounds, restraining background influence; the moving foregrounds are segmented from the input frames using robust principal component analysis (RPCA) decomposition. Comparisons with state-of-the-art approaches indicate the superiority of our approach in anomaly detection, and its generalization is improved by forcing the network to focus on moving foregrounds.
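A weighted Euclidean reconstruction loss of the kind described above can be sketched in a few lines; the foreground weight value and the mask source are illustrative assumptions (the paper obtains the mask via RPCA):

```python
import numpy as np

def weighted_euclidean_loss(frame, recon, fg_mask, fg_weight=2.0):
    """Reconstruction loss that up-weights moving-foreground pixels.

    frame, recon: (H, W) input and reconstructed frames.
    fg_mask: (H, W) binary mask of moving foreground (e.g. from RPCA).
    fg_weight: extra weight on foreground errors (hypothetical value).
    """
    w = 1.0 + (fg_weight - 1.0) * fg_mask   # 1 on background, fg_weight on fg
    return float(np.sum(w * (frame - recon) ** 2))
```

Background reconstruction errors still count, but foreground errors cost more, so gradient descent spends capacity on the moving objects.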


Author(s):  
Lifang Fu ◽  
Feifei Zhao

To analyze the focus and appeal of Internet public opinion in a timely and accurate way, an LSTM-ATTN model is proposed to extract hot topics and predict their trends from tens of thousands of news and commentary messages. First, an improved LDA model extracts hot words and classifies the hot topics. Then, to describe the detailed characteristics and long-term trend of topic popularity more accurately, a prediction model based on an attention-mechanism Long Short-Term Memory (LSTM) network, named the LSTM-ATTN model, is proposed. Extensive numerical experiments were carried out on public-opinion information about the African swine fever event in China. The evaluation indexes demonstrate the relative superiority of the LSTM-ATTN model: it captures the inherent characteristics and periodic fluctuations of agricultural public-opinion information, and it achieves higher convergence efficiency and prediction accuracy.
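Trend prediction with an LSTM requires framing the topic-popularity series as supervised pairs; a sliding window over past values predicting the next value is the usual setup (the window width here is an illustrative choice, not from the paper):

```python
import numpy as np

def make_windows(series, width):
    """Frame a popularity time series as (window, next-value) training pairs.

    series: sequence of popularity values; width: lookback window length.
    Returns X of shape (n, width) and y of shape (n,).
    """
    X = np.array([series[i:i + width] for i in range(len(series) - width)])
    y = np.array(series[width:])
    return X, y
```

Each row of X is one model input and the matching entry of y is its target, ready to feed an LSTM (or any regressor) for one-step-ahead trend forecasting.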


Energies ◽  
2018 ◽  
Vol 11 (11) ◽  
pp. 3221 ◽  
Author(s):  
Yining Wang ◽  
Da Xie ◽  
Xitian Wang ◽  
Yu Zhang

The interaction between the grid and wind farms has a significant impact on the power grid, so predicting this interaction is of great significance. In this paper, a wind turbine-grid interaction prediction model based on a long short-term memory (LSTM) network under the TensorFlow framework is presented. First, the multivariate time series is screened by principal component analysis (PCA) to reduce the data dimensionality. Second, the LSTM network models the nonlinear relationship between the selected sequence of wind turbine-grid interactions and the actual output sequence of the wind farms. Comparison with a single LSTM model, an Autoregressive Integrated Moving Average (ARIMA) model, and a Back Propagation Neural Network (BPNN) model shows higher accuracy and applicability: the Mean Absolute Percentage Errors (MAPE) are 0.617%, 0.703%, 1.397%, and 3.127%, respectively. Finally, the Prony algorithm is used to analyze the predicted wind turbine-grid interaction data. Based on the actual data, the oscillation frequencies predicted by the PCA-LSTM model are essentially the same as those of the actual data, verifying the feasibility of the proposed model for analyzing the interaction between the grid and wind turbines.
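The two quantitative ingredients above, PCA dimensionality reduction and the MAPE metric, are both standard and easy to sketch in numpy (the data shapes are illustrative):

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X onto the top-k principal components (via SVD)."""
    Xc = X - X.mean(axis=0)                       # center each feature
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                          # (n_samples, k) scores

def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))
```

MAPE is the figure reported in the abstract (0.617% for the proposed PCA-LSTM model); note it is undefined wherever the actual value is zero, which matters for low-output periods.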


2020 ◽  
Vol 10 (17) ◽  
pp. 5841 ◽  
Author(s):  
Beakcheol Jang ◽  
Myeonghwi Kim ◽  
Gaspard Harerimana ◽  
Sang-ug Kang ◽  
Jong Wook Kim

There is a need to extract meaningful information from big data, classify it into different categories, and predict end-user behavior or emotions. Large amounts of data are generated from various sources such as social media and websites. Text classification is a representative research topic in natural-language processing (NLP) that categorizes unstructured text into meaningful classes. The long short-term memory (LSTM) model and the convolutional neural network (CNN) for sentence classification produce accurate results and have recently been used in various NLP tasks. CNN models use convolutional layers and maximum pooling or max-over-time pooling layers to extract higher-level features, while LSTM models can capture long-term dependencies between word sequences and are therefore well suited to text classification. However, even with a hybrid approach that leverages the strengths of these two deep-learning models, the number of features to remember for classification remains huge, hindering the training process. In this study, we propose an attention-based Bi-LSTM+CNN hybrid model that capitalizes on the advantages of LSTM and CNN with an additional attention mechanism. We trained the model on the Internet Movie Database (IMDB) movie-review data, and the test results show that the proposed hybrid attention Bi-LSTM+CNN model produces more accurate classification results, as well as higher recall and F1 scores, than individual multi-layer perceptron (MLP), CNN, or LSTM models and than existing hybrid models.
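Max-over-time pooling, mentioned above as the CNN side of such hybrids, slides each filter over every h-word window of the sentence and keeps only the strongest response per filter. A numpy sketch with hypothetical random filters:

```python
import numpy as np

def conv1d_max_over_time(embeddings, filters):
    """1-D convolution over word embeddings with max-over-time pooling.

    embeddings: (T, d) sentence as word vectors.
    filters: (F, h, d) convolution filters with window size h (hypothetical).
    Returns one (F,) feature vector per sentence, independent of T.
    """
    T, d = embeddings.shape
    F, h, _ = filters.shape
    # each filter slides over all h-word windows; keep the strongest response
    maps = np.array([[np.sum(f * embeddings[t:t + h])
                      for t in range(T - h + 1)] for f in filters])
    return maps.max(axis=1)                     # (F,) max over time steps
```

Because the max is taken over time, sentences of any length map to a fixed-size vector, which is what lets the CNN branch feed a standard classifier head.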


Author(s):  
Keith April Araño ◽  
Peter Gloor ◽  
Carlotta Orsenigo ◽  
Carlo Vercellis

Speech is one of the most natural communication channels for expressing human emotions. Therefore, speech emotion recognition (SER) has been an active area of research with an extensive range of applications that can be found in several domains, such as biomedical diagnostics in healthcare and human–machine interactions. Recent works in SER have been focused on end-to-end deep neural networks (DNNs). However, the scarcity of emotion-labeled speech datasets inhibits the full potential of training a deep network from scratch. In this paper, we propose new approaches for classifying emotions from speech by combining conventional mel-frequency cepstral coefficients (MFCCs) with image features extracted from spectrograms by a pretrained convolutional neural network (CNN). Unlike prior studies that employ end-to-end DNNs, our methods eliminate the resource-intensive network training process. By using the best prediction model obtained, we also build an SER application that predicts emotions in real time. Among the proposed methods, the hybrid feature set fed into a support vector machine (SVM) achieves an accuracy of 0.713 in a 6-class prediction problem evaluated on the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) dataset, which is higher than the previously published results. Interestingly, MFCCs taken as unique input into a long short-term memory (LSTM) network achieve a slightly higher accuracy of 0.735. Our results reveal that the proposed approaches lead to an improvement in prediction accuracy. The empirical findings also demonstrate the effectiveness of using a pretrained CNN as an automatic feature extractor for the task of emotion prediction. Moreover, the success of the MFCC-LSTM model is evidence that, despite being conventional features, MFCCs can still outperform more sophisticated deep-learning feature sets.
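The abstract does not specify how the MFCC and CNN features are combined before the SVM; a common and plausible scheme is to summarise the per-frame MFCC matrix with utterance-level statistics and concatenate the result with the CNN embedding. A sketch under that assumption (inputs are placeholders, not real audio features):

```python
import numpy as np

def fuse_features(mfcc_frames, cnn_embedding):
    """Fuse per-frame MFCCs with a pretrained-CNN spectrogram embedding.

    mfcc_frames: (T, n_mfcc) MFCC matrix for one utterance (placeholder).
    cnn_embedding: (k,) image-feature vector from the pretrained CNN.
    Mean/std summary + concatenation is an assumed fusion scheme, not
    necessarily the paper's exact method.
    """
    stats = np.concatenate([mfcc_frames.mean(axis=0),
                            mfcc_frames.std(axis=0)])   # (2 * n_mfcc,)
    return np.concatenate([stats, cnn_embedding])       # fixed-size SVM input
```

The fixed-size fused vector is what a kernel SVM expects, regardless of utterance length.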

