Coloring high-resolution near-infrared images using chromatic cues from low-resolution references

Author(s): Chester Brian Lim, Maria Monica Salud, Benjamin Uttoh, Carlo Noel Ochotorena, Cecille Adrianne Ochotorena, ...

Author(s): Gloria Guilluy, Alessandro Sozzetti, Paolo Giacobbe, Aldo S. Bonomo, Giuseppina Micela

Abstract: Since the first discovery of an extra-solar planet around a main-sequence star, in 1995, the number of detected exoplanets has increased enormously. Over the past two decades, observational instruments (both onboard spacecraft and on ground-based facilities) have revealed an astonishing diversity in planetary physical features (i.e. mass and radius) and orbital parameters (e.g. period, semi-major axis, inclination). Exoplanetary atmospheres provide direct clues to understanding the origin of these differences through their observable spectral imprints. In the near future, upcoming ground- and space-based telescopes will shift the focus of exoplanetary science from an era of "species discovery" to one of "atmospheric characterization". In this context, the Atmospheric Remote-sensing Infrared Exoplanet Large (Ariel) survey will play a key role. As it is designed to observe and characterize a large and diverse sample of exoplanets, Ariel will provide constraints on a wide gamut of atmospheric properties, allowing us to extract much more information than has been possible so far (e.g. insights into planetary formation and evolution processes). The low-resolution spectra obtained with Ariel will probe atmospheric layers different from those observed by ground-based high-resolution spectroscopy; the synergy between these two techniques therefore offers a unique opportunity to understand the physics of planetary atmospheres. In this paper, we set the basis for building a framework to effectively combine, at near-infrared wavelengths, high-resolution datasets (analyzed via the cross-correlation technique) with spectral retrieval analyses based on Ariel low-resolution spectroscopy. We show preliminary results for a benchmark object, HD 209458 b, addressing the possibility of providing improved constraints on the temperature structure and on molecular/atomic abundances.
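To make the cross-correlation technique mentioned in the abstract concrete, the following is a minimal sketch (not the authors' pipeline): a model template is Doppler-shifted across a grid of trial radial velocities and cross-correlated with an observed high-resolution spectrum, and the peak of the resulting function marks the velocity of the planetary signal. Function and variable names are illustrative assumptions; real analyses add telluric removal, detrending (e.g. PCA/SysRem), and per-order weighting.

```python
import numpy as np

def cross_correlation_ccf(wave_obs, flux_obs, wave_model, flux_model, rv_grid_kms):
    """Cross-correlate an observed spectrum with a Doppler-shifted model template.

    Illustrative sketch only: each trial radial velocity shifts the model
    wavelengths, the shifted model is interpolated onto the observed grid,
    and the mean-subtracted spectra are multiplied and summed.
    """
    c_kms = 299792.458  # speed of light in km/s
    ccf = np.zeros(len(rv_grid_kms))
    f_obs = flux_obs - np.mean(flux_obs)  # remove the continuum level
    for i, rv in enumerate(rv_grid_kms):
        wave_shifted = wave_model * (1.0 + rv / c_kms)          # Doppler shift
        f_mod = np.interp(wave_obs, wave_shifted, flux_model)   # resample model
        f_mod -= np.mean(f_mod)
        ccf[i] = np.sum(f_obs * f_mod)
    return ccf

# Example usage with synthetic data: recover an injected +15 km/s shift.
if __name__ == "__main__":
    wave = np.linspace(1.10, 1.12, 4000)                         # micron, near-infrared
    template = 1.0 - 0.01 * np.exp(-0.5 * ((wave - 1.11) / 1e-4) ** 2)  # one absorption line
    wave_obs = wave * (1.0 + 15.0 / 299792.458)                  # observed grid, shifted by 15 km/s
    rv_grid = np.arange(-100.0, 100.0, 1.0)
    ccf = cross_correlation_ccf(wave_obs, template, wave, template, rv_grid)
    print("CCF peak at", rv_grid[np.argmax(ccf)], "km/s")
```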


2011, Vol 63 (3), pp. 543-554
Author(s): Tomonori Hioki, Yoichi Itoh, Yumiko Oasa, Misato Fukagawa, Masahiko Hayashi

2001, Vol 556 (2), pp. 958-969
Author(s): Angela S. Cotera, Barbara A. Whitney, Erick Young, Michael J. Wolff, Kenneth Wood, ...

2009, Vol 61 (6), pp. 1271-1279
Author(s): Tomonori Hioki, Yoichi Itoh, Yumiko Oasa, Misato Fukagawa, Tomoyuki Kudo, ...

Author(s): Snehal S. Rajole, J. V. Shinde

In this paper, we propose a technique for eye-gaze detection that is adaptive to noisy images, since processing noisy sclera images captured at a distance and on the move has not been extensively investigated. Sclera blood vessels have recently been investigated as an efficient biometric trait, and capturing part of the eye with an ordinary camera using visible-wavelength images, rather than near-infrared images, has attracted research interest. The technique combines sclera template rotation alignment with a distance scaling method to minimize error rates when noisy eye images are captured at a distance and on the move. The proposed system is evaluated, and results are generated, through extensive simulation in Java.
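As a rough illustration of what template rotation alignment and distance scaling can look like, here is a hedged sketch (written in Python rather than the Java used by the authors, and not their exact method): it assumes sclera templates are fixed-length descriptor vectors sampled at angular positions, so a small in-plane eye rotation becomes a circular shift, and it scales the matching distance by hypothetical quality scores to compensate for noisy at-a-distance captures.

```python
import numpy as np

def rotation_aligned_distance(probe, gallery, max_shift=10):
    """Minimum distance between two sclera templates over small rotations.

    Sketch assumption: templates are 1-D descriptor vectors sampled at equally
    spaced angular positions, so rotation is approximated by a circular shift.
    """
    best = np.inf
    for shift in range(-max_shift, max_shift + 1):
        rotated = np.roll(probe, shift)
        best = min(best, np.linalg.norm(rotated - gallery))
    return best

def scaled_distance(distance, probe_quality, gallery_quality):
    """Scale the raw distance by (hypothetical) template quality scores in (0, 1]
    so that noisy, low-quality captures are not over-penalized at matching time."""
    return distance * np.sqrt(probe_quality * gallery_quality)

# Example: compare a noisy, slightly rotated probe against a gallery template.
rng = np.random.default_rng(0)
gallery = rng.random(360)                              # one descriptor per degree (sketch)
probe = np.roll(gallery, 4) + 0.05 * rng.random(360)   # rotated, noisy capture
d = rotation_aligned_distance(probe, gallery)
print("Scaled distance:", scaled_distance(d, probe_quality=0.6, gallery_quality=0.9))
```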


1999, Vol 117 (1), pp. 439-445
Author(s): P. Persi, A. R. Marenzi, A. A. Kaas, G. Olofsson, L. Nordh, ...

Electronics, 2021, Vol 10 (9), pp. 1013
Author(s): Sayan Maity, Mohamed Abdel-Mottaleb, Shihab S. Asfour

Biometric identification using surveillance video has attracted the attention of many researchers, as it is applicable not only to robust identification but also to personalized activity monitoring. In this paper, we present a novel multimodal recognition system that extracts frontal gait and low-resolution face images from frontal walking surveillance video clips to perform efficient biometric recognition. The proposed study addresses two important issues in surveillance video that have not received appropriate attention in the past. First, it consolidates model-free and model-based gait feature extraction approaches to perform robust gait recognition using only the frontal view. Second, it uses a low-resolution face recognition approach that can be trained and tested using only low-resolution face information. This eliminates the need to obtain high-resolution face images to create the gallery, which is required by the majority of low-resolution face recognition techniques; moreover, the classification accuracy on high-resolution face images is considerably higher. Previous studies on frontal gait recognition incorporate assumptions to approximate the average gait cycle; here, we quantify the gait cycle precisely for each subject using only the frontal gait information. The approaches available in the literature use high-resolution images obtained in a controlled environment to train the recognition system, whereas our proposed system trains the recognition algorithm using low-resolution face images captured in an unconstrained environment. The proposed system has two components, one responsible for frontal gait recognition and one for low-resolution face recognition; score-level fusion is then performed to fuse the results of the two. Experiments conducted on the Face and Ocular Challenge Series (FOCS) dataset resulted in 93.5% Rank-1 recognition for frontal gait and 82.92% Rank-1 recognition for low-resolution face, respectively. The score-level multimodal fusion resulted in 95.9% Rank-1 recognition, which demonstrates the superiority and robustness of the proposed approach.
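The score-level fusion step described above can be illustrated with a small sketch; the normalization scheme and weights below are assumptions for illustration, not the configuration reported in the paper. Each modality's similarity scores against the gallery are min-max normalized so they are comparable, combined with a weighted sum, and the Rank-1 decision is the gallery identity with the highest fused score.

```python
import numpy as np

def min_max_normalize(scores):
    """Map raw matcher scores to [0, 1] so the two modalities are comparable."""
    s = np.asarray(scores, dtype=float)
    lo, hi = s.min(), s.max()
    return (s - lo) / (hi - lo) if hi > lo else np.zeros_like(s)

def fuse_scores(gait_scores, face_scores, w_gait=0.6, w_face=0.4):
    """Weighted-sum score-level fusion of gait and face similarity scores,
    assuming both arrays follow the same gallery ordering. Weights are
    illustrative, not tuned values from the paper."""
    g = min_max_normalize(gait_scores)
    f = min_max_normalize(face_scores)
    return w_gait * g + w_face * f

# Example: similarity of one probe to three gallery subjects per modality.
gait = [0.82, 0.40, 0.55]
face = [0.30, 0.75, 0.60]
fused = fuse_scores(gait, face)
print("Rank-1 identity index:", int(np.argmax(fused)))  # highest fused score wins
```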

