Reconstruction of a fluttering flag from a single image

2021 ◽ Vol 15 ◽ pp. 174830262098365
Author(s): Tao Hu, Jun Li, Guihuan Guo

Reconstructing a 3D object from a single image is a challenging task because useful geometric structure information is difficult to determine from a single view. In this paper, we propose a novel method to extract the 3D mesh of a flag from a single image and drive the flag model to flutter under a virtual wind. A deep convolutional neural fields model is first used to generate a depth map from the single image. Based on the Alpha Shape, a coarse 2D mesh of the flag is reconstructed by sampling at different depth regions. We then optimize the mesh with a Restricted Frontal-Delaunay scheme to obtain a mesh with depth. The resulting Delaunay mesh is transformed into a simple spring model, and a velocity-based solver computes the motion of the virtual flag. Experiments demonstrate that the proposed method can construct a realistic fluttering-flag video from a single image.
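The velocity-based spring solver mentioned above can be illustrated with a minimal sketch: a grid of particles connected by structural springs, pinned along the pole edge, driven by gravity plus a constant virtual wind, and integrated with semi-implicit Euler. The grid size, stiffness, damping, and wind vector below are all assumptions for illustration, not the authors' implementation.

```python
# Minimal mass-spring flag sketch with a velocity-based (semi-implicit Euler) solver.
# All constants are illustrative assumptions, not values from the paper.
import numpy as np

H, W = 10, 15                      # hypothetical flag mesh resolution
rest = 0.1                         # rest length of structural springs (m)
k, damping, mass = 40.0, 0.02, 0.05
dt = 1.0 / 60.0
gravity = np.array([0.0, -9.81, 0.0])
wind = np.array([1.5, 0.0, 0.3])   # constant "virtual wind" force

# particle positions and velocities on a regular grid
pos = np.zeros((H, W, 3))
for i in range(H):
    for j in range(W):
        pos[i, j] = (j * rest, -i * rest, 0.0)
vel = np.zeros_like(pos)

# structural springs: each particle linked to its right and lower neighbours
springs = [((i, j), (i, j + 1)) for i in range(H) for j in range(W - 1)] + \
          [((i, j), (i + 1, j)) for i in range(H - 1) for j in range(W)]

def step():
    force = np.broadcast_to(gravity * mass + wind, (H, W, 3)).copy()
    for a, b in springs:
        d = pos[b] - pos[a]
        length = np.linalg.norm(d) + 1e-9
        f = k * (length - rest) * d / length      # Hooke's law along the spring
        force[a] += f
        force[b] -= f
    vel[:] = (vel + dt * force / mass) * (1.0 - damping)
    vel[:, 0] = 0.0                               # pin the column attached to the pole
    pos[:] += dt * vel

for _ in range(300):                              # simulate ~5 seconds of fluttering
    step()
```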

2021
Author(s): Jingchun Zhou, Tongyu Yang, Wenqi Ren, Dan Zhang, Weishi Zhang

Atmosphere ◽ 2021 ◽ Vol 12 (10) ◽ pp. 1266
Author(s): Jing Qin, Liang Chen, Jian Xu, Wenqi Ren

In this paper, we propose a novel method to remove haze from a single hazy input image based on sparse representation. In our method, the sparse representation serves as a contextual regularization tool, which reduces the block artifacts and halos produced by using the dark channel prior alone without soft matting, since the transmission is not always constant within a local patch. A novel way of using the dictionary is proposed to smooth the image and generate a sharp dehazed result. Experimental results demonstrate that the proposed method performs favorably against state-of-the-art dehazing methods and produces high-quality, vividly colored dehazed results.
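For context, the dark channel prior step that the method builds on can be sketched as follows; the sparse-representation contextual regularization itself is not reproduced here, and the patch size, omega value, and airlight estimate are assumptions.

```python
# Sketch of the dark channel prior transmission estimate and haze-model inversion.
# The paper's sparse-representation regularization replaces the crude smoothing step.
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Per-pixel minimum over colour channels, then over a local patch."""
    return minimum_filter(img.min(axis=2), size=patch)

def estimate_transmission(img, A, omega=0.95, patch=15):
    """Coarse transmission map t(x) = 1 - omega * dark_channel(I / A)."""
    return 1.0 - omega * dark_channel(img / A, patch)

def recover_scene(img, t, A, t0=0.1):
    """Invert the haze model I = J * t + A * (1 - t)."""
    t = np.clip(t, t0, 1.0)[..., None]
    return np.clip((img - A) / t + A, 0.0, 1.0)

# usage on a float RGB image `hazy` scaled to [0, 1]:
# A = np.percentile(hazy.reshape(-1, 3), 99.9, axis=0)   # rough airlight estimate
# J = recover_scene(hazy, estimate_transmission(hazy, A), A)
```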


2020 ◽ Vol 2020 ◽ pp. 1-9
Author(s): Xiaoyuan Ren, Libing Jiang, Zhuang Wang

Estimating the 3D pose of a space object from a single image is an important but challenging task. Most existing methods estimate the 3D pose of known space objects and assume that the detailed geometry of the specific object is available; they are therefore not applicable to unknown objects whose geometry is unavailable. In contrast to previous works, this paper is devoted to estimating the 3D pose of an unknown space object from a single image. Our method estimates not only the pose but also the shape of the unknown object. A hierarchical shape model is proposed to represent the prior structural information of typical space objects, and on this basis the pose and shape parameters are estimated simultaneously. Experimental results demonstrate the effectiveness of our method in estimating the 3D pose and inferring the geometry of unknown typical space objects from a single image, and show its advantage over methods that rely on known object geometry.
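A rough sketch of what simultaneous pose-and-shape estimation can look like is given below, using a hypothetical low-dimensional shape basis as a stand-in for the paper's hierarchical shape model and a weak-perspective camera. The function names and parameterization are assumptions, not the authors' formulation.

```python
# Joint pose-and-shape fitting sketch: shape = mean + basis @ coeffs, fitted together
# with a rotation, translation, and scale to 2D observations by least squares.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(points3d, rotvec, trans, scale):
    """Weak-perspective projection of 3D points (assumed camera model)."""
    R = Rotation.from_rotvec(rotvec).as_matrix()
    cam = points3d @ R.T + trans
    return scale * cam[:, :2]

def residuals(params, mean_shape, basis, obs2d):
    rotvec, trans, scale = params[:3], params[3:6], params[6]
    coeffs = params[7:]
    shape = mean_shape + (basis * coeffs[None, None, :]).sum(axis=2)
    return (project(shape, rotvec, trans, scale) - obs2d).ravel()

def fit(mean_shape, basis, obs2d):
    """mean_shape: (N, 3), basis: (N, 3, K) shape modes, obs2d: (N, 2) image points."""
    x0 = np.zeros(7 + basis.shape[2])
    x0[6] = 1.0                                   # start from identity pose, unit scale
    sol = least_squares(residuals, x0, args=(mean_shape, basis, obs2d))
    return sol.x                                  # recovered pose and shape coefficients
```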


2021
Author(s): Ramanath Datta, Sekhar Mandal, Saiyed Umer, Ahmad Ali AlZubi, Abdullah Alharbi, ...
A fast and novel method for single-image reconstruction using a super-resolution (SR) technique is proposed in this paper. The proposed technique consists of three components. In the first component, a low-resolution image is partitioned into homogeneous and non-homogeneous regions based on an analysis of the texture pattern within each region. In the second component, only the non-homogeneous regions undergo sparse-representation-based SR reconstruction. In the third component, each reconstructed region is passed through a statistics-based prediction model to generate a further enhanced version. The remaining homogeneous regions are bicubic-interpolated and merged into the required high-resolution image. The proposed technique is applied to large-scale electrical, machine, and civil architectural design images; these images are chosen because they are very large, and processing such images is time-consuming for any application. The proposed SR technique reconstructs a better SR image from a much lower-resolution version with low time complexity. Its performance on these electrical, machine, and civil architectural design images is compared with state-of-the-art methods, and the proposed system is shown to outperform the competing methods.
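The region-routing idea in the first two components can be sketched as follows: low-texture blocks keep a bicubic upscaling, while textured blocks are handed to a sparse-representation SR routine (left as a hook here). The block size and variance threshold are assumptions, not values from the paper.

```python
# Texture-based routing sketch: bicubic for homogeneous blocks, a sparse-coding SR
# hook for non-homogeneous blocks. `sparse_sr` is a placeholder, not a real routine.
import numpy as np
import cv2

def upscale_by_region(lr, scale=2, block=16, var_thresh=25.0, sparse_sr=None):
    h, w = lr.shape[:2]
    # baseline: bicubic interpolation of the whole image
    hr = cv2.resize(lr, (w * scale, h * scale), interpolation=cv2.INTER_CUBIC)
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = lr[y:y + block, x:x + block]
            if patch.var() > var_thresh and sparse_sr is not None:
                # non-homogeneous region: replace with sparse-representation SR output
                hr[y * scale:(y + patch.shape[0]) * scale,
                   x * scale:(x + patch.shape[1]) * scale] = sparse_sr(patch, scale)
            # homogeneous regions keep the bicubic result
    return hr
```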


2019 ◽ Vol 07 (11) ◽ pp. 76-87
Author(s): Prince Owusu-Agyeman, Xie Wei, James Okae

Sensors ◽ 2019 ◽ Vol 19 (20) ◽ pp. 4434
Author(s): Sangwon Kim, Jaeyeal Nam, Byoungchul Ko

Depth estimation is a crucial and fundamental problem in computer vision. Conventional methods reconstruct scenes using feature points extracted from multiple images; however, these approaches require multiple images and thus are not easily implemented in various real-time applications. Moreover, the special equipment required by hardware-based approaches using 3D sensors is expensive. Therefore, software-based methods that estimate depth from a single image using machine learning or deep learning are emerging as alternatives. In this paper, we propose an algorithm that generates a depth map in real time from a single image using an optimized lightweight efficient neural network (L-ENet) instead of physical equipment such as an infrared sensor or a multi-view camera. Because depth values are continuous and can produce locally ambiguous results, pixel-wise prediction with ordinal depth-range classification is applied in this study. In addition, various convolution techniques are applied to extract a dense feature map, and the number of parameters is greatly reduced by shrinking the network layers. Using the proposed L-ENet, an accurate depth map can be generated quickly from a single image, with depth values close to the ground truth and small errors. Experiments confirmed that the proposed L-ENet achieves significantly improved performance over state-of-the-art single-image depth estimation algorithms.
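The ordinal depth-range classification idea can be sketched independently of the network: depth is discretized into ordered bins, and the per-pixel prediction counts how many bin thresholds are exceeded. The bin count, depth range, and decoding rule below are assumptions; this is not the L-ENet architecture itself.

```python
# Ordinal depth-range classification sketch: spacing-increasing discretization (SID)
# of depth and a simple ordinal decoding of per-bin probabilities into metric depth.
import numpy as np

def sid_bins(d_min=1.0, d_max=80.0, n_bins=80):
    """Spacing-increasing discretization: bin edges grow geometrically with depth."""
    return np.exp(np.linspace(np.log(d_min), np.log(d_max), n_bins + 1))

def decode_ordinal(probs, edges):
    """probs: (H, W, n_bins) probabilities that the depth exceeds each bin threshold.
    Ordinal decoding counts how many thresholds are passed per pixel."""
    k = (probs > 0.5).sum(axis=2)                  # ordinal label per pixel
    k = np.clip(k, 0, len(edges) - 2)
    return 0.5 * (edges[k] + edges[k + 1])         # bin-centre depth in metres

# usage with a hypothetical network output `probs` of shape (H, W, 80):
# depth_map = decode_ordinal(probs, sid_bins())
```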


Author(s): Wei Qin, Yilong Yin

Traditional fingerprint verification uses a single image for matching; however, its accuracy cannot meet the needs of some application domains. In this paper, we propose to use videos for fingerprint verification. To make full use of the information contained in fingerprint videos, we present a novel method that exploits both the dynamic and the static information in the videos. After preprocessing and alignment, the Inclusion Ratio of two matching fingerprint videos is calculated and used to represent the similarity between them. Experimental results show that the video-based method achieves better accuracy than the method based on a single fingerprint image.
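As a purely hypothetical illustration of an inclusion-ratio-style score (the paper's exact definition is not reproduced here), one could measure the fraction of minutiae in one aligned video that find a close counterpart in the other and average that fraction over corresponding frames:

```python
# Hypothetical inclusion-ratio-style similarity between two aligned fingerprint videos.
# Function names, tolerance, and frame pairing are assumptions for illustration only.
import numpy as np

def frame_inclusion(minutiae_a, minutiae_b, tol=8.0):
    """minutiae_*: (N, 2) arrays of aligned minutia coordinates in pixels."""
    if len(minutiae_a) == 0 or len(minutiae_b) == 0:
        return 0.0
    d = np.linalg.norm(minutiae_a[:, None, :] - minutiae_b[None, :, :], axis=2)
    return float((d.min(axis=1) < tol).mean())     # fraction of a's minutiae matched in b

def video_inclusion_ratio(frames_a, frames_b):
    """Average the per-frame inclusion score over corresponding frames."""
    return float(np.mean([frame_inclusion(a, b) for a, b in zip(frames_a, frames_b)]))
```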

