Large-Scale Face Image Retrieval Based on Hadoop and Deep Learning

Author(s):  
Huang Yuanyuan
Tang Yuan
Xiong Taisong
Author(s):  
Jie Lin
Zechao Li
Jinhui Tang

With the explosive growth of images containing faces, scalable face image retrieval has attracted increasing attention. Owing to its effectiveness, deep hashing has recently become a popular hashing approach. In this work, we propose a new Discriminative Deep Hashing (DDH) network to learn discriminative and compact hash codes for large-scale face image retrieval. The proposed network incorporates end-to-end learning, a divide-and-encode module, and discrete code learning into a unified framework. Specifically, a network with a stack of convolution-pooling layers extracts multi-scale, robust features by merging the outputs of the third max-pooling layer and the fourth convolutional layer. To simultaneously reduce redundancy among the hash codes and the number of network parameters, a divide-and-encode module is employed to generate compact hash codes. Moreover, a loss function is introduced to minimize the prediction errors of the learned hash codes, which leads to discriminative hash codes. Extensive experiments on two datasets demonstrate that the proposed method achieves superior performance compared with several state-of-the-art hashing methods.
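The divide-and-encode idea described above can be sketched as follows. This is a minimal illustration, not the DDH network itself: the random projection weights, the equal-size group split, and the sign binarization are placeholder assumptions standing in for the trained hash layer.

```python
import random

def divide_and_encode(features, code_length, seed=0):
    # Sketch of a divide-and-encode module: the feature vector is split
    # into `code_length` slices and each slice is projected to a single
    # bit, so every bit depends on only a fraction of the features.
    # This is what reduces redundancy among bits and parameter count
    # versus one fully connected hash layer over the whole vector.
    # Assumption: random Gaussian weights replace the learned ones.
    rng = random.Random(seed)
    step = -(-len(features) // code_length)  # ceil division: slice size
    bits = []
    for i in range(0, len(features), step):
        group = features[i:i + step]
        w = [rng.gauss(0, 1) for _ in group]
        score = sum(f * wi for f, wi in zip(group, w))
        bits.append(1 if score >= 0 else -1)  # sign binarization
    return bits

codes = divide_and_encode([0.5] * 48, code_length=12)
```

With a 48-dimensional feature vector and 12-bit codes, each bit is computed from its own 4-dimensional slice; in the trained network the per-slice projections would be learned jointly with the classification loss.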


2021
Vol 2026 (1)
pp. 012026
Author(s):  
Yuting Zeng
Yujin Li
Qiming Hu
Yuan Tang

Sensors
2021
Vol 21 (4)
pp. 1139
Author(s):  
Khadija Kanwal
Khawaja Tehseen Ahmad
Rashid Khan
Naji Alhusaini
Li Jing

Convolutional neural networks (CNNs) operate on grid-structured, two-dimensional images and exploit spatial dependencies such as location adjacency, color values, and hidden patterns. They use sparse, layered connections to build local spatial feature maps, and their behavior varies with the architecture, the inputs, the number and types of layers, and how their outputs are fused with derived signatures. This research addresses that gap by combining GoogLeNet, VGG-19, and ResNet-50 features with maximum-response Eigenvalue texture features and convolutional Laplacian-scaled object features over mapped color channels, achieving high image retrieval rates over millions of images from diverse semantic groups and benchmarks. The time- and computation-efficient formulation of the presented model is a step forward in deep-learning fusion and compact signature encapsulation for descriptor creation. Strong results on challenging benchmarks are presented with thorough contextualization to provide insight into CNN effects with anchor bindings. The presented method is tested on well-known datasets including ALOT (250), Corel-1000, Cifar-10, Corel-10000, Cifar-100, Oxford Buildings, FTVL Tropical Fruits, 17-Flowers, Fashion (15), and Caltech-256, and reports outstanding performance. The presented work is compared with state-of-the-art methods over tiny, large, complex, overlay, texture, color, object, shape, mimicked, plain- and occupied-background, and multiple-object-foreground images, and marks significant accuracies.
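The late-fusion step underlying this kind of multi-backbone descriptor can be sketched as follows. This is an illustrative assumption about the fusion scheme, not the paper's exact formulation: each backbone's descriptor (e.g. from GoogLeNet, VGG-19, or ResNet-50) is L2-normalized so that no single network dominates, then the normalized vectors are concatenated into one signature.

```python
import math

def fuse_features(feature_sets):
    # Late fusion for retrieval (sketch): each entry in feature_sets is
    # a descriptor from one backbone. Normalizing each descriptor to
    # unit L2 norm before concatenation keeps backbones with larger
    # activation scales from dominating the fused signature.
    fused = []
    for vec in feature_sets:
        norm = math.sqrt(sum(v * v for v in vec)) or 1.0
        fused.extend(v / norm for v in vec)
    return fused

# Two toy "backbone" descriptors, fused into one signature.
signature = fuse_features([[3.0, 4.0], [0.0, 5.0]])  # → [0.6, 0.8, 0.0, 1.0]
```

Retrieval then proceeds by comparing fused signatures, e.g. with cosine or Euclidean distance over the concatenated vector.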


Author(s):  
Rui Zhang
Zhigang Jin
Xiaohui Liu

Sina Weibo, the most popular Chinese social platform, hosts hundreds of millions of user-contributed images and texts and is growing rapidly. However, noise between images and texts, as well as their incomplete correspondence, makes accurate image retrieval and ranking difficult. In this paper, we propose a deep learning framework that uses visual features, text content, and Weibo popularity to calculate the similarity between an image and a text, training the model to maximize the likelihood of the target description sentence given the training image. In addition, the retrieval results are reranked using image popularity. Comparison experiments on a large-scale Sina Weibo dataset demonstrate the validity of the proposed method.
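The popularity-aware reranking step can be sketched as a blend of the learned image-text similarity with a popularity score. The linear blend and the weight `alpha` are illustrative assumptions, not the paper's exact formula; both inputs are assumed pre-scaled to [0, 1].

```python
def rerank_by_popularity(results, alpha=0.8):
    # Rerank retrieval results (sketch): `results` maps an image id to
    # (text_similarity, popularity). The blended score favors images
    # that both match the query text and are popular on the platform.
    # Assumption: alpha=0.8 is a placeholder trade-off weight.
    scored = {
        img: alpha * sim + (1 - alpha) * pop
        for img, (sim, pop) in results.items()
    }
    return sorted(scored, key=scored.get, reverse=True)

# A highly popular image can overtake a slightly better text match.
order = rerank_by_popularity({"a": (0.9, 0.1), "b": (0.8, 0.9)})  # → ["b", "a"]
```

In practice the raw popularity counts (reposts, likes) would need normalization, e.g. log-scaling, before entering such a blend.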


2018
Vol 430-431
pp. 331-348
Author(s):  
Changqin Huang
Haijiao Xu
Liang Xie
Jia Zhu
Chunyan Xu
...  
