Face synthesis from low-resolution near-infrared to high-resolution visual light spectrum based on tensor analysis

2014 ◽  
Vol 140 ◽  
pp. 146-154 ◽  
Author(s):  
Zhaoxiang Zhang ◽  
Yunhong Wang ◽  
Zeda Zhang

Author(s):
Gloria Guilluy ◽  
Alessandro Sozzetti ◽  
Paolo Giacobbe ◽  
Aldo S. Bonomo ◽  
Giuseppina Micela

Abstract: Since the first discovery of an extra-solar planet around a main-sequence star, in 1995, the number of detected exoplanets has increased enormously. Over the past two decades, observational instruments (both on board space missions and at ground-based facilities) have revealed an astonishing diversity in planetary physical features (i.e. mass and radius) and orbital parameters (e.g. period, semi-major axis, inclination). Exoplanetary atmospheres provide direct clues to understanding the origin of these differences through their observable spectral imprints. In the near future, upcoming ground- and space-based telescopes will shift the focus of exoplanetary science from an era of “species discovery” to one of “atmospheric characterization”. In this context, the Atmospheric Remote-sensing Infrared Exoplanet Large (Ariel) survey will play a key role. As it is designed to observe and characterize a large and diverse sample of exoplanets, Ariel will provide constraints on a wide gamut of atmospheric properties, allowing us to extract much more information than has been possible so far (e.g. insights into planetary formation and evolution processes). The low-resolution spectra obtained with Ariel will probe atmospheric layers different from those observed by ground-based high-resolution spectroscopy; the synergy between these two techniques therefore offers a unique opportunity to understand the physics of planetary atmospheres. In this paper, we set the basis for building a framework to effectively combine, at near-infrared wavelengths, high-resolution datasets (analyzed via the cross-correlation technique) with spectral retrieval analyses based on Ariel low-resolution spectroscopy. We show preliminary results for a benchmark object, HD 209458 b, addressing the possibility of providing improved constraints on the temperature structure and molecular/atomic abundances.
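The abstract refers to the cross-correlation technique used to analyze ground-based high-resolution spectra. As a purely illustrative aid, and not the authors' pipeline, the Python sketch below cross-correlates a toy observed spectrum with a model template Doppler-shifted over a grid of trial radial velocities; the wavelength range, line positions, and velocity grid are placeholder assumptions.

```python
import numpy as np

C_KMS = 299_792.458  # speed of light [km/s]

def cross_correlate(wave, flux, template_wave, template_flux, rv_grid_kms):
    """Cross-correlation function (CCF) of an observed spectrum against a
    model template Doppler-shifted over a grid of trial radial velocities."""
    f = flux - np.mean(flux)  # remove the continuum level
    ccf = np.zeros_like(rv_grid_kms, dtype=float)
    for i, rv in enumerate(rv_grid_kms):
        # Shift the template wavelengths to the trial velocity and resample
        # the template onto the observed wavelength grid.
        shifted = np.interp(wave, template_wave * (1.0 + rv / C_KMS), template_flux)
        t = shifted - np.mean(shifted)
        ccf[i] = np.sum(f * t) / np.sqrt(np.sum(f ** 2) * np.sum(t ** 2))
    return ccf

if __name__ == "__main__":
    # Toy template: a few Gaussian absorption lines near 2.3 micron.
    template_wave = np.linspace(2.28, 2.33, 5000)   # [micron]
    template_flux = np.ones_like(template_wave)
    for c in (2.290, 2.300, 2.310, 2.320):          # made-up line centres
        template_flux -= 0.3 * np.exp(-0.5 * ((template_wave - c) / 1e-4) ** 2)

    # "Observed" spectrum: the same template shifted by +15 km/s.
    true_rv = 15.0
    obs_flux = np.interp(template_wave,
                         template_wave * (1.0 + true_rv / C_KMS),
                         template_flux)

    rv_grid = np.arange(-100.0, 100.5, 1.0)
    ccf = cross_correlate(template_wave, obs_flux,
                          template_wave, template_flux, rv_grid)
    print(f"CCF peak at {rv_grid[np.argmax(ccf)]:.0f} km/s")  # expect ~ +15 km/s
```

The velocity at which the CCF peaks traces the planetary signal; in practice this is done per exposure so the peak shifts with the planet's orbital motion.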


2021 ◽  
Author(s):  
Yu Yin ◽  
Joseph P. Robinson ◽  
Songyao Jiang ◽  
Yue Bai ◽  
Can Qin ◽  
...  

Electronics ◽  
2021 ◽  
Vol 10 (9) ◽  
pp. 1013
Author(s):  
Sayan Maity ◽  
Mohamed Abdel-Mottaleb ◽  
Shihab S. Asfour

Biometric identification using surveillance video has attracted the attention of many researchers, as it is applicable not only to robust identification but also to personalized activity monitoring. In this paper, we present a novel multimodal recognition system that extracts frontal gait and low-resolution face images from frontal walking surveillance video clips to perform efficient biometric recognition. The proposed study addresses two important issues in surveillance video that did not receive appropriate attention in the past. First, it consolidates model-free and model-based gait feature extraction approaches to perform robust gait recognition using only the frontal view. Second, it uses a low-resolution face recognition approach that can be trained and tested using low-resolution face information. This eliminates the need to obtain high-resolution face images to create the gallery, which the majority of low-resolution face recognition techniques require. Moreover, the classification accuracy on high-resolution face images is considerably higher. Previous studies on frontal gait recognition incorporate assumptions to approximate the average gait cycle; in contrast, we quantify the gait cycle precisely for each subject using only the frontal gait information. Approaches available in the literature train the recognition system on high-resolution images obtained in a controlled environment, whereas our proposed system trains the recognition algorithm on low-resolution face images captured in an unconstrained environment. The proposed system has two components: one performs frontal gait recognition and the other performs low-resolution face recognition. Score-level fusion is then applied to combine the results of the frontal gait recognition and the low-resolution face recognition. Experiments conducted on the Face and Ocular Challenge Series (FOCS) dataset resulted in a Rank-1 recognition rate of 93.5% for frontal gait recognition and 82.92% for low-resolution face recognition. The score-level multimodal fusion achieved a 95.9% Rank-1 recognition rate, which demonstrates the superiority and robustness of the proposed approach.
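The abstract reports score-level fusion of the gait and face matchers but does not specify the fusion rule. The Python sketch below assumes a simple weighted sum of min-max-normalized similarity scores; the weights, function names, and toy score values are illustrative only, meant to show what score-level fusion of two matchers looks like in principle.

```python
import numpy as np

def min_max_normalize(scores):
    """Map raw matcher scores to [0, 1] so the two modalities are comparable."""
    scores = np.asarray(scores, dtype=float)
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo) if hi > lo else np.zeros_like(scores)

def fuse_scores(gait_scores, face_scores, w_gait=0.6, w_face=0.4):
    """Weighted-sum fusion of per-gallery-subject similarity scores
    from the gait matcher and the low-resolution face matcher."""
    g = min_max_normalize(gait_scores)
    f = min_max_normalize(face_scores)
    return w_gait * g + w_face * f

if __name__ == "__main__":
    # Toy similarity scores of one probe against a 5-subject gallery.
    gait_scores = [0.42, 0.85, 0.31, 0.57, 0.49]   # from the gait matcher
    face_scores = [0.55, 0.61, 0.20, 0.91, 0.38]   # from the low-res face matcher
    fused = fuse_scores(gait_scores, face_scores)
    print("Rank-1 match: gallery subject", int(np.argmax(fused)))
```

Under this assumption, Rank-1 recognition simply means the gallery subject with the highest fused score is the correct identity.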

