Interactive modeling of lofted shapes from a single image

2019 ◽  
Vol 6 (3) ◽  
pp. 279-289 ◽  
Author(s):  
Congyue Deng ◽  
Jiahui Huang ◽  
Yong-Liang Yang

Modeling the complete geometry of general shapes from a single image is an ill-posed problem. User hints are often incorporated to resolve ambiguities and provide guidance during the modeling process. In this work, we present a novel interactive approach for extracting high-quality freeform shapes from a single image. This is inspired by the popular lofting technique in many CAD systems, and only requires minimal user input. Given an input image, the user only needs to sketch several projected cross sections, provide a “main axis”, and specify some geometric relations. Our algorithm then automatically optimizes the common normal to the sections with respect to these constraints, and interpolates between the sections, resulting in a high-quality 3D model that conforms to both the original image and the user input. The entire modeling session is efficient and intuitive. We demonstrate the effectiveness of our approach based on qualitative tests on a variety of images, and quantitative comparisons with the ground truth using synthetic images.
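The core lofting idea, interpolating between user-sketched cross sections placed along a main axis, can be sketched as a simple linear blend. This is a minimal illustration assuming all sections are resampled to the same number of points; the paper's optimization of the common section normal and its constraint handling are not reproduced here.

```python
import numpy as np

def loft(sections, axis_positions, n_steps=10):
    """Linearly interpolate between consecutive cross sections along a main axis.

    sections: list of (N, 2) arrays, each a sampled cross-section curve
              (all assumed resampled to the same number of points N).
    axis_positions: list of scalars, position of each section along the axis.
    Returns an (M, N, 3) array of interpolated 3D rings forming the loft.
    """
    rings = []
    for (s0, s1), (z0, z1) in zip(zip(sections, sections[1:]),
                                  zip(axis_positions, axis_positions[1:])):
        for t in np.linspace(0.0, 1.0, n_steps, endpoint=False):
            ring2d = (1 - t) * s0 + t * s1            # blend the section shapes
            z = (1 - t) * z0 + t * z1                 # slide along the main axis
            rings.append(np.column_stack([ring2d, np.full(len(ring2d), z)]))
    # close with the last section exactly
    rings.append(np.column_stack([sections[-1],
                                  np.full(len(sections[-1]), axis_positions[-1])]))
    return np.stack(rings)
```

In practice the blend would also need correspondence between section points and a smoother (e.g. spline) interpolant, but the linear version already shows the structure of the surface.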

2020 ◽  
Vol 34 (07) ◽  
pp. 11661-11668 ◽  
Author(s):  
Yunfei Liu ◽  
Feng Lu

Many real-world vision tasks, such as reflection removal from a transparent surface and intrinsic image decomposition, can be modeled as single-image layer separation. However, this problem is highly ill-posed, requiring accurately aligned, hard-to-collect triplet data to train CNN models. To address this problem, this paper proposes an unsupervised method that requires no ground-truth data triplets for training. At the core of the method are two assumptions about data distributions in the latent spaces of the different layers, from which a novel unsupervised layer separation pipeline is derived. The method is then constructed within the GAN framework with self-supervision and cycle-consistency constraints. Experimental results demonstrate that it outperforms existing unsupervised methods on both synthetic and real-world tasks. The method also shows its ability to solve a more challenging multi-layer separation task.
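The self-supervision at the heart of such pipelines can be illustrated with a toy re-composition (cycle-consistency) loss: the predicted layers must add back up to the observed mixture, so no ground-truth triplet is needed. The additive mixing model and the smoothness prior below are simplifying assumptions, standing in for the paper's GAN-based distribution constraints.

```python
import numpy as np

def separation_losses(mixed, layer1, layer2, remix_fn=None):
    """Toy illustration of self-supervised layer separation losses.

    mixed:  observed image, shape (H, W)
    layer1, layer2: layers predicted by the separator networks
    remix_fn: mixing model; an additive model is assumed unless overridden
    """
    remix = remix_fn(layer1, layer2) if remix_fn else layer1 + layer2
    # Cycle consistency: the separated layers must re-compose into the input.
    cycle_loss = np.mean((mixed - remix) ** 2)
    # A simple distribution prior standing in for the adversarial terms:
    # encourage one layer (e.g. the reflection) to be smooth.
    smooth_loss = np.mean(np.abs(np.diff(layer2, axis=0)))
    return cycle_loss, smooth_loss
```

During training these terms would be minimized jointly with the adversarial losses; here they only show why aligned triplets are unnecessary.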


2020 ◽  
Vol 2020 (10) ◽  
pp. 26-1-26-7
Author(s):  
Takuro Matsui ◽  
Takuro Yamaguchi ◽  
Masaaki Iheara

In public spaces such as zoos and sports facilities, fences often annoy tourists and professional photographers. There is a demand for a post-processing tool that produces an unoccluded view from an image or video. This “de-fencing” task is divided into two stages: detecting the fence regions and filling in the missing parts. For a decade or more, various methods have been proposed for video-based de-fencing, but only a few single-image methods exist. In this paper, we focus on single-image fence removal. Conventional approaches suffer from inaccurate and non-robust fence detection and inpainting because a single image carries less content information. To solve these problems, we combine novel methods based on a deep convolutional neural network (CNN) with classical domain knowledge from image processing. The training process requires both fence images and corresponding fence-free ground-truth images, so we synthesize natural fence images from real images. Moreover, spatial filtering (e.g., a Laplacian filter and a Gaussian filter) improves the performance of the CNN for detection and inpainting. Our proposed method automatically detects a fence and generates a clean image without any user input. Experimental results demonstrate that our method is effective for a broad range of fence images.
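The classical spatial filtering mentioned above can be sketched as a pre-processing step that stacks filtered responses as extra input channels for the CNN. The specific channel set and kernel parameters here are assumptions, not the paper's exact configuration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def prefilter_channels(img, sigma=1.0):
    """Stack the input image with Gaussian-smoothed and Laplacian responses.

    The smoothed channel suppresses noise and texture, while the Laplacian
    highlights thin structures such as fence wires, giving the CNN easier
    cues for detection and inpainting.
    """
    img = img.astype(np.float64)
    smooth = gaussian_filter(img, sigma=sigma)   # low-pass: overall context
    edges = laplace(img)                         # high-pass: thin structures
    return np.stack([img, smooth, edges], axis=0)
```

The stacked array would then be fed to the network in place of the raw image.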


2020 ◽  
Vol 33 (2) ◽  
pp. 207
Author(s):  
Rafaela de Oliveira Ferreira ◽  
Ana Cristina Campos Borges ◽  
Juan Augusto Rodrigues dos Campos ◽  
Artur Manoel Leite Medeiros ◽  
Cassia Mônica Sakuragui ◽  
...  

The genus Philodendron Schott comprises the following three currently accepted subgenera: P. subg. Philodendron, P. subg. Pteromischum and P. subg. Meconostigma; however, these lack a well-defined classification. In the present study, we anatomically examined samples of adventitious roots from species of the group to establish aspects relevant for taxonomic purposes. The anatomical analyses emphasised the characteristics of the steles in cross-sections of the root samples, from regions near the apex to the most mature zones. A species of the closely related genus Adelonema, namely A. crinipes, was included in the study to help interpret the results. Our results indicated notable differences in the species of the subgenus Meconostigma, mainly in terms of the presence (and variations) of a lobed stele, whereas the cylindrical stele stood out among the common characteristics in P. subg. Philodendron, P. subg. Pteromischum and the related species A. crinipes. Moreover, the characteristics shared by P. subg. Philodendron and P. subg. Pteromischum corroborated the phylogenetic hypothesis that these two taxa are more closely related to one another than to P. subg. Meconostigma.


Scanning ◽  
2017 ◽  
Vol 2017 ◽  
pp. 1-7
Author(s):  
Xu Chen ◽  
Tengfei Guo ◽  
Yubin Hou ◽  
Jing Zhang ◽  
Wenjie Meng ◽  
...  

A new scan-head structure for the scanning tunneling microscope (STM) is proposed, featuring high scan precision and rigidity. The core structure consists of a piezoelectric tube scanner of quadrant type (for XY scans) coaxially housed in a piezoelectric tube with single inner and outer electrodes (for the Z scan). They are fixed at one end (called the common end). A hollow tantalum shaft is coaxially housed in the XY-scan tube, and they are mutually fixed at both ends. When the XY scanner scans, its free end drives the shaft, and the tip, which is coaxially inserted in the shaft at the common end, scans a smaller area provided the tip protrudes only slightly from the common end. Decoupling the XY and Z scans reduces image distortion, and the mechanically reduced scan range has the advantage of lessening the impact of background electronic noise on the scanner and enhancing tip-positioning precision. High-quality atomic-resolution images are also shown.


2021 ◽  
pp. 002199832110492
Author(s):  
Ruidong Man ◽  
Jianhui Fu ◽  
Songkil Kim ◽  
Yoongho Jung

As a connecting component of tubes, the elbow is indispensable to pipe-fitting in composite products. Previous studies have addressed methods for generating winding paths based on parametric equations on the elbow. However, these methods are unsuitable for elbows whose surfaces are difficult to describe with mathematical expressions. In this study, a geometric method is proposed for generating winding patterns for various elbow types. With this method, the mandrel surface is first converted into uniform, high-quality quadrilateral elements; an algorithm is then provided for calculating the minimum bridging-free winding angle. Next, a non-bridging angle is defined as the design winding angle to generate uniform, slippage-free basic winding paths on the quadrilateral elements in non-geodesic directions. Finally, after a series of uniform points is calculated on the selected vertical edge according to the elbow type, the pattern paths are generated from the uniform points and basic paths. Advantageously, the proposed method is not limited by the elbow's shape.
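Two standard filament-winding feasibility checks underlie such path generation: a slippage criterion (the ratio of geodesic to normal curvature must stay within the fiber/mandrel friction coefficient) and a bridging criterion (the path's normal curvature must press the fiber onto the surface). The sketch below states both conditions in isolation; the paper's element-wise test on the quadrilateral mesh is not reproduced.

```python
def slippage_free(k_g, k_n, mu=0.2):
    """Non-geodesic winding stays put when the slippage tendency
    lambda = k_g / k_n does not exceed the friction coefficient mu
    (a standard filament-winding criterion; mu here is an assumed value)."""
    return abs(k_g) <= mu * abs(k_n)

def bridging_free(k_n_path):
    """Bridging is avoided when the path's normal curvature presses the
    fiber onto the surface (k_n >= 0 under the outward-normal sign
    convention assumed here)."""
    return k_n_path >= 0.0
```

On an elbow, the concave (inner) side is where the bridging check bites, which is why a minimum winding angle must be computed there.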


2022 ◽  
Vol 41 (1) ◽  
pp. 1-17
Author(s):  
Xin Chen ◽  
Anqi Pang ◽  
Wei Yang ◽  
Peihao Wang ◽  
Lan Xu ◽  
...  

In this article, we present TightCap, a data-driven scheme to capture both the human shape and dressed garments accurately from only a single three-dimensional (3D) human scan, enabling numerous applications such as virtual try-on, biometrics, and body evaluation. To handle the severe variations in human poses and garments, we propose to model the clothing tightness field, i.e., the displacements from the garments to the underlying human shape, implicitly in the global UV texturing domain. To this end, we utilize an enhanced statistical human template and an effective multi-stage alignment scheme to map the 3D scan into a hybrid 2D geometry image. Based on this 2D representation, we propose a novel framework to predict the clothing tightness field via a novel tightness formulation, as well as an effective optimization scheme to further reconstruct multi-layer human shape and garments under various clothing categories and human postures. We further propose a new clothing tightness dataset of human scans with a large variety of clothing styles, poses, and corresponding ground-truth human shapes to stimulate further research. Extensive experiments demonstrate the effectiveness of TightCap for high-quality reconstruction of human shape and dressed garments, as well as further applications in clothing segmentation, retargeting, and animation.
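Once both the garment scan and the body estimate live in the same UV geometry image, the tightness field reduces to a per-texel displacement. The toy function below shows only that final subtraction; the multi-stage alignment that produces the UV maps is the substantive part of the method and is not reproduced here.

```python
import numpy as np

def tightness_field(garment_uv_geom, body_uv_geom):
    """Per-texel displacement from the scanned garment surface to the
    underlying body, both given as (H, W, 3) geometry images in the
    same UV parameterization (an assumption of this sketch).
    Returns the vector displacement map and a scalar tightness map."""
    disp = garment_uv_geom - body_uv_geom            # (H, W, 3) offsets
    tightness = np.linalg.norm(disp, axis=-1)        # (H, W) magnitudes
    return disp, tightness
```

Small tightness values indicate tight-fitting regions; large values indicate loose garments, which is what makes the field useful for multi-layer reconstruction.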


Electronics ◽  
2021 ◽  
Vol 10 (23) ◽  
pp. 2892
Author(s):  
Kyungjun Lee ◽  
Seungwoo Wee ◽  
Jechang Jeong

Salient object detection is a method of finding the object within an image that a person deems important and is expected to focus on. Various features are used to compute visual saliency; among spatial features, the color and luminance of the scene are the most widely used. However, humans perceive the same color and luminance differently depending on the influence of the surrounding environment. As the human visual system (HVS) operates through a very complex mechanism, both neurobiological and psychological aspects must be considered for accurate detection of salient objects. To reflect this characteristic in the saliency detection process, we propose two pre-processing methods applied to the input image. First, we apply a bilateral filter to improve the segmentation results by smoothing the image so that only its overall context remains while its important borders are preserved. Second, even when the amount of light is the same, perceived brightness can differ owing to the influence of the surrounding environment. Therefore, we apply oriented difference-of-Gaussians (ODOG) and locally normalized ODOG (LODOG) filters that adjust the input image by predicting the brightness as perceived by humans. Experiments on five public benchmark datasets with ground truth show that our proposed method further improves the performance of previous state-of-the-art methods.
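A difference-of-Gaussians bank of the kind ODOG builds on can be sketched with center-minus-surround Gaussian pairs. Note this is a rough stand-in: the published ODOG/LODOG filters use specific oriented scales and response normalization not reproduced here, and orientation is only emulated by swapping anisotropic sigmas.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_bank(img, sigma_center=1.0, ratio=2.0, elongation=2.0, n_filters=4):
    """Simplified oriented difference-of-Gaussians bank.

    Each response subtracts an enlarged surround Gaussian from a center
    Gaussian; elongated (anisotropic) sigmas crudely emulate orientation.
    All parameter values here are illustrative assumptions.
    """
    img = img.astype(np.float64)
    responses = []
    for k in range(n_filters):
        # scipy's gaussian_filter is axis-aligned, so alternate which axis
        # carries the elongation (a simplification of true rotation).
        if k % 2:
            s = (sigma_center, sigma_center * elongation)
        else:
            s = (sigma_center * elongation, sigma_center)
        center = gaussian_filter(img, sigma=s)
        surround = gaussian_filter(img, sigma=(s[0] * ratio, s[1] * ratio))
        responses.append(center - surround)          # band-pass response
    return np.stack(responses)
```

A uniform field yields zero response everywhere, which is exactly the property that lets such filters model how perceived brightness depends on local contrast rather than absolute luminance.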


Agronomy ◽  
2021 ◽  
Vol 11 (10) ◽  
pp. 1951
Author(s):  
Brianna B. Posadas ◽  
Mamatha Hanumappa ◽  
Kim Niewolny ◽  
Juan E. Gilbert

Precision agriculture is highly dependent on the collection of high-quality ground truth data to validate the algorithms used in prescription maps. However, the process of collecting ground truth data is labor-intensive and costly. One way to increase the collection of ground truth data is to recruit citizen scientists through a crowdsourcing platform. In this study, a crowdsourcing platform application was built using a human-centered design process. The primary goals were to gauge users' perceptions of the platform, evaluate how well the system satisfies their needs, and observe whether the users' classification rate of lambsquarters would match that of an expert. Previous work demonstrated a need for ground truth data on lambsquarters in the D.C., Maryland, Virginia (DMV) area. Previous social interviews revealed that users wanted a citizen science platform that would expand their skills and give them access to educational resources. Using a human-centered design protocol, design iterations of a mobile application were created in Kinvey Studio. The application, Mission LQ, taught people how to classify certain characteristics of lambsquarters in the DMV and allowed them to submit ground truth data. The final design of Mission LQ received a median system usability scale (SUS) score of 80.13, which indicates a good design. The classification rate of lambsquarters was 72%, which is comparable to expert classification. This demonstrates that a crowdsourcing mobile application can be used to collect high-quality ground truth data for use in precision agriculture.
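The SUS figure reported above follows the standard scoring rule, which is easy to state in code: ten items rated 1 to 5, odd-numbered items score (rating - 1), even-numbered items score (5 - rating), and the sum is multiplied by 2.5 to land on a 0-100 scale.

```python
def sus_score(responses):
    """Standard System Usability Scale score from ten 1-5 Likert ratings."""
    assert len(responses) == 10, "SUS requires exactly 10 item ratings"
    total = 0
    for i, r in enumerate(responses, start=1):
        # odd items are positively worded, even items negatively worded
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5
```

Scores around 68 are typically considered average usability, so the reported median of 80.13 sits well above that benchmark.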


Atmosphere ◽  
2021 ◽  
Vol 12 (10) ◽  
pp. 1266
Author(s):  
Jing Qin ◽  
Liang Chen ◽  
Jian Xu ◽  
Wenqi Ren

In this paper, we propose a novel method to remove haze from a single hazy input image based on sparse representation. In our method, the sparse representation is used as a contextual regularization tool, which can reduce the block artifacts and halos produced by using only the dark channel prior without soft matting, since the transmission is not always constant within a local patch. A novel way of using the dictionary is proposed to smooth the image and generate a sharp dehazed result. Experimental results demonstrate that our proposed method performs favorably against state-of-the-art dehazing methods and produces high-quality, vividly colored dehazed results.
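The dark channel prior that the sparse-representation regularizer improves on can be computed directly: per pixel, take the minimum over the color channels and over a local patch. The patch size below is a typical choice, not necessarily the paper's.

```python
import numpy as np

def dark_channel(img, patch=7):
    """Dark channel of an (H, W, 3) image: min over channels, then min
    over a local patch. In haze-free outdoor images this is close to
    zero, so large values indicate haze (He et al.'s prior)."""
    h, w, _ = img.shape
    min_rgb = img.min(axis=2)                 # per-pixel channel minimum
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode='edge')
    dark = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            dark[i, j] = padded[i:i + patch, j:j + patch].min()
    return dark
```

The transmission estimate derived from this map is what produces the block artifacts the paper targets, since the patch-wise minimum assumes constant transmission inside each patch.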

