Exploring Chromatic Aberration and Defocus Blur for Relative Depth Estimation from Monocular Hyperspectral Image

Author(s): Ali Zia, Jun Zhou, Yongsheng Gao

NANO, 2017, Vol. 12 (11), pp. 1750130
Author(s): Bentolhoda Hadavi Moghadam, Shohreh Kasaei, A. K. Haghi

In this study, a novel technique for measuring the thickness of electrospun nanofibrous mats based on image analysis is proposed. The thicknesses of electrospun polyacrylonitrile (PAN), polyvinyl alcohol (PVA), and polyurethane (PU) nanofibrous mats are calculated using depth estimation across different views. The images are captured by a fixed scanning electron microscope (SEM) while the mat sample is rotated by 15°, 30°, and 45°. By calculating the disparity value (the distance between two corresponding points in two images), the relative depth of the images and consequently the thickness of the nanofibrous mat are obtained. Furthermore, the thicknesses of the three electrospun mats are also measured directly from cross-section views of the nanofibrous mats acquired by scanning electron microscopy. Close agreement is achieved between the results obtained by this method at the low-angle view (15°) and the direct thickness measurement from the cross-section view. Comparison of the average thickness from the direct measurement and the proposed method for different samples exhibits a linear relationship with a high regression coefficient of 0.96. The proposed method makes quantitative evaluation of the thickness feasible over the entire surface of electrospun mats.
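The core disparity-to-depth step described above can be sketched in a few lines. The snippet below is an illustrative NumPy sketch assuming a simple stereo-like geometry, where depth is inversely proportional to disparity (Z = f·B/d); the function name, the `focal_length` and `baseline` parameters, and the pinhole assumption are this sketch's own and not the authors' exact rotation-based formulation.

```python
import numpy as np

def relative_depth_from_disparity(pts_view1, pts_view2, focal_length, baseline):
    """Estimate relative depth from point correspondences between two views.

    pts_view1, pts_view2: (N, 2) arrays of corresponding image points.
    Assumes depth is inversely proportional to horizontal disparity,
    Z = f * B / d (a hypothetical simplification for illustration).
    """
    pts_view1 = np.asarray(pts_view1, dtype=float)
    pts_view2 = np.asarray(pts_view2, dtype=float)
    # Disparity: horizontal distance between corresponding points.
    disparity = np.abs(pts_view1[:, 0] - pts_view2[:, 0])
    # Guard against zero disparity (point at infinity).
    disparity = np.where(disparity < 1e-9, np.nan, disparity)
    return focal_length * baseline / disparity
```

Points with larger disparity map to smaller depth, so after calibrating against one known reference thickness, the same relation yields the mat thickness over the whole imaged surface.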


2013, Vol. 52 (29), pp. 7152
Author(s): Pauline Trouvé, Frédéric Champagnat, Guy Le Besnerais, Jacques Sabater, Thierry Avignon, ...

2017
Author(s): Alex Yang

Depth estimation from single monocular images is a theoretical challenge in computer vision as well as a computational challenge in practice. This thesis addresses the problem using a deep convolutional neural fields framework, which consists of convolutional feature extraction, superpixel dimensionality reduction, and depth inference. Data were collected using a stereo vision camera, which generates depth maps through triangulation paired with the corresponding visual images. The visual image (input) and computed depth map (desired output) are used to train the model, which achieves 83 percent test accuracy at the standard 25 percent tolerance. The problem is formulated as depth regression over superpixels, and the technique compares favorably with existing state-of-the-art approaches in its demonstrated generalization ability, high prediction accuracy, and real-time processing capability. We use the VGG-16 deep convolutional network as the feature extractor and conditional random fields for depth inference. A multi-phase training protocol that includes transfer learning and network fine-tuning leads to high accuracy. The framework has a robust modular structure in which each component can be replaced with a different implementation for maximum extensibility. Additionally, our GPU-accelerated implementation of superpixel pooling further facilitates this extensibility by allowing the incorporation of feature tensors with flexible shapes, and it provides both space and time optimization. Based on these contributions and high-performance computing methodologies, the model achieves a minimal and optimized design. It is capable of operating at 30 fps, a critical step towards real-world applications such as autonomous vehicles with passive relative depth perception from a single camera, enabling vision-based obstacle avoidance, environment mapping, etc.
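The superpixel pooling step mentioned above reduces a dense per-pixel feature map to one descriptor per superpixel before CRF inference. The following is a plain NumPy sketch of average pooling over superpixel labels; it is an illustration of the technique, not the thesis's GPU-accelerated implementation, and the function name and shapes are assumptions.

```python
import numpy as np

def superpixel_pool(features, labels):
    """Average-pool per-pixel features into per-superpixel descriptors.

    features: (H, W, C) feature map from the convolutional extractor.
    labels:   (H, W) integer superpixel ids in 0..K-1.
    Returns:  (K, C) array, one pooled descriptor per superpixel.
    """
    H, W, C = features.shape
    flat_feats = features.reshape(-1, C)
    flat_labels = labels.reshape(-1)
    K = int(flat_labels.max()) + 1
    # Accumulate feature sums per superpixel, then divide by pixel counts.
    sums = np.zeros((K, C))
    np.add.at(sums, flat_labels, flat_feats)
    counts = np.bincount(flat_labels, minlength=K).reshape(-1, 1)
    return sums / counts
```

Because the pooled output depends only on the label map, the upstream feature tensor can change shape freely, which is the flexibility the modular design relies on.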

