Low-Cost Three-Dimensional Modeling of Crop Plants

Sensors ◽  
2019 ◽  
Vol 19 (13) ◽  
pp. 2883 ◽  
Author(s):  
Jorge Martinez-Guanter ◽  
Ángela Ribeiro ◽  
Gerassimos G. Peteinatos ◽  
Manuel Pérez-Ruiz ◽  
Roland Gerhards ◽  
...  

Plant modeling can provide a more detailed overview of the basis of plant development throughout the life cycle. Three-dimensional processing algorithms are rapidly expanding in plant phenotyping programmes and in decision-making for agronomic management. Several methods have already been tested, but for practical implementations the trade-off between equipment cost, the computational resources needed, and the fidelity and accuracy in the reconstruction of the end-details needs to be assessed and quantified. This study examined the suitability of two low-cost systems for plant reconstruction. A low-cost Structure from Motion (SfM) technique was used to create 3D models for crop plant reconstruction. In the second method, an acquisition and reconstruction algorithm using an RGB-Depth Kinect v2 sensor was tested following a similar image acquisition procedure. The information was processed to create a dense point cloud, which allowed the creation of a 3D polygon mesh representing every scanned plant. The selected plants corresponded to three different crops (maize, sugar beet and sunflower) that have structural and biological differences. The parameters measured from the models were validated against ground truth data of plant height, leaf area index and plant dry biomass using regression methods. The results showed strong consistency, with good correlations between the values calculated from the models and the ground truth information. Although the obtained values were always accurately estimated, differences between the methods and among the crops were found. The SfM method showed a slightly better result with regard to the reconstruction of the end-details and the accuracy of the height estimation. Although the SfM processing algorithm is relatively fast, the use of RGB-D information is faster during the creation of the 3D models. Thus, both methods demonstrated robust results and great potential for use in both indoor and outdoor scenarios. Consequently, these low-cost systems for 3D modeling are suitable for several situations where there is a need for model generation, and also provide a favourable time-cost relationship.
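The regression-based validation the abstract describes can be sketched as follows: fit model-derived trait values against ground-truth measurements and report the coefficient of determination. The height values below are illustrative, not the study's data.

```python
import numpy as np

# Hypothetical example: plant heights measured on a 3D model vs.
# ground-truth field measurements (metres). Values are illustrative.
ground_truth = np.array([0.52, 0.78, 1.10, 1.35, 1.62])
model_height = np.array([0.50, 0.80, 1.05, 1.38, 1.58])

# Ordinary least-squares fit: model_height ~ a * ground_truth + b
a, b = np.polyfit(ground_truth, model_height, 1)

# Coefficient of determination (R^2) of the fit
pred = a * ground_truth + b
ss_res = np.sum((model_height - pred) ** 2)
ss_tot = np.sum((model_height - model_height.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(round(r2, 3))
```

The same pattern applies to leaf area index and dry biomass; only the trait vectors change.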

Sensors ◽  
2019 ◽  
Vol 19 (3) ◽  
pp. 496 ◽  
Author(s):  
Zheng Sun ◽  
Yingying Zhang

Three-dimensional (3D) reconstruction using video frames extracted from spherical cameras introduces an innovative measurement method in narrow scenes of architectural heritage, but the accuracy of 3D models and its correlation with frame extraction ratios and blur filters are yet to be evaluated. This article addresses these issues for two narrow scenes of architectural heritage that are distinctive in layout, surface material, and lighting conditions. The videos captured with a hand-held spherical camera (30 frames per second) are extracted to frames with various ratios starting from 10 and increasing every 10 frames (10, 20, …, n). Two different blur assessment methods are employed for comparative analyses. Ground truth models obtained from terrestrial laser scanning and photogrammetry are employed for assessing the accuracy of 3D models from different groups. The results show that the relative accuracy (median absolute errors/object dimensions) of spherical-camera videogrammetry ranges from 1/500 to 1/2000, catering to the surveying and mapping of architectural heritage with medium accuracy and resolution. Sparser baselines (the length between neighboring image pairs) do not necessarily generate higher accuracy than denser baselines, and an optimal frame network should consider the essential completeness of complex components and potential degeneracy cases. Substituting blur frames with adjacent sharp frames could reduce global errors by 5–15%.
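The frame-selection and blur-substitution steps described above can be sketched as below. Sharpness scores are illustrative stand-ins for a blur metric such as variance of the Laplacian; the threshold and search window are assumptions.

```python
def extract_frames(total_frames, ratio):
    """Indices kept when sampling every `ratio`-th frame of a video."""
    return list(range(0, total_frames, ratio))

def substitute_blur(indices, sharpness, threshold, search=3):
    """Replace blurred frames with the sharpest neighbour within +/-search frames."""
    result = []
    for i in indices:
        if sharpness.get(i, 0.0) >= threshold:
            result.append(i)
            continue
        # frame i is blurred: scan nearby frames and pick the sharpest
        neighbours = range(max(0, i - search), i + search + 1)
        result.append(max(neighbours, key=lambda j: sharpness.get(j, 0.0)))
    return result

# Illustrative sharpness scores for a 30-frame clip (unlisted frames score 0)
sharpness = {0: 0.9, 9: 0.7, 10: 0.2, 11: 0.85, 20: 0.95}
kept = substitute_blur(extract_frames(30, 10), sharpness, threshold=0.5)
print(kept)  # frame 10 is blurred, so its sharper neighbour 11 is used
```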


Heritage ◽  
2019 ◽  
Vol 2 (3) ◽  
pp. 1835-1851 ◽  
Author(s):  
Hafizur Rahaman ◽  
Erik Champion

The 3D reconstruction of real-world heritage objects using either a laser scanner or 3D modelling software is typically expensive and requires a high level of expertise. Image-based 3D modelling software, on the other hand, offers a cheaper alternative, which can handle this task with relative ease. There also exists free and open source (FOSS) software, with the potential to deliver quality data for heritage documentation purposes. However, contemporary academic discourse seldom presents survey-based feature lists or a critical inspection of potential production pipelines, nor does it typically provide direction and guidance for non-experts who are interested in learning, developing and sharing 3D content on a restricted budget. To address the above issues, a set of FOSS applications was studied based on their offered features, workflow, 3D processing time and accuracy. Two datasets have been used to compare and evaluate the FOSS applications based on the point clouds they produced. The average deviation from ground truth data produced by a commercial software application (Metashape, formerly PhotoScan) was measured with CloudCompare software. 3D reconstructions generated from FOSS produce promising results, with significant accuracy, and are easy to use. We believe this investigation will help non-expert users to understand photogrammetry and select the most suitable software for producing image-based 3D models at low cost for visualisation and presentation purposes.
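The cloud-to-cloud deviation measured with CloudCompare can be sketched in simplified form: for each point of the evaluated cloud, take the distance to its nearest neighbour in the reference cloud and report the mean. The two tiny clouds below are illustrative.

```python
import numpy as np

def cloud_to_cloud(evaluated, reference):
    """Nearest-neighbour distance from each evaluated point to the reference cloud."""
    # pairwise distance matrix (N x M), then minimum per evaluated point
    diff = evaluated[:, None, :] - reference[None, :, :]
    dists = np.linalg.norm(diff, axis=2)
    return dists.min(axis=1)

reference = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
evaluated = reference + 0.01  # evaluated cloud offset by 1 cm on every axis

mean_dev = cloud_to_cloud(evaluated, reference).mean()
print(round(mean_dev, 4))  # ~sqrt(3) cm of offset per point
```

Real tools use spatial indexing (octrees/k-d trees) instead of the brute-force distance matrix shown here, but the reported statistic is the same.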


2021 ◽  
Vol 11 (12) ◽  
pp. 5321
Author(s):  
Marcin Barszcz ◽  
Jerzy Montusiewicz ◽  
Magdalena Paśnikowska-Łukaszuk ◽  
Anna Sałamacha

In the era of the global pandemic caused by the COVID-19 virus, 3D digitisation of selected museum artefacts is becoming more and more frequent practice, but the vast majority is performed by specialised teams. The paper presents the results of comparative studies of 3D digital models of the same museum artefacts from the Silk Road area generated by two completely different technologies: Structure from Motion (SfM)—a method belonging to the so-called low-cost technologies—and Structured-light 3D Scanning (3D SLS). Moreover, procedural differences in data acquisition and their processing to generate three-dimensional models are presented. Models built using a point cloud were created from data collected in the Afrasiyab museum in Samarkand (Uzbekistan) during “The 1st Scientific Expedition of the Lublin University of Technology to Central Asia” in 2017. Photos for creating 3D models in SfM technology were taken during a virtual expedition carried out under the “3D Digital Silk Road” program in 2021. The obtained results show that the quality of the 3D models generated with SfM differs from that of the models from 3D SLS, but they may still be placed in the galleries of a virtual museum. The models obtained from SfM carry no information about their size, which means that they are not fully suitable for archiving purposes of cultural heritage, unlike the models from SLS.
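The scale ambiguity noted above is inherent to SfM: the reconstruction is recovered only up to an unknown global scale. A common remedy, sketched here, is to rescale the cloud by the ratio of a known real-world distance between two reference points to the same distance measured in the model. Points and the 0.30 m reference length are illustrative assumptions.

```python
import numpy as np

# Illustrative SfM point cloud in arbitrary model units
model_points = np.array([[0.0, 0.0, 0.0],
                         [2.0, 0.0, 0.0],
                         [1.0, 1.5, 0.2]])
ref_a, ref_b = model_points[0], model_points[1]  # two markers visible in the model
real_distance = 0.30                             # same span measured on the artefact (m)

# Global scale factor: real distance / model distance
scale = real_distance / np.linalg.norm(ref_b - ref_a)
scaled = model_points * scale
print(round(np.linalg.norm(scaled[1] - scaled[0]), 3))  # now 0.3 m
```

Structured-light scanners avoid this step because their calibration fixes metric scale directly.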


Spatium ◽  
2016 ◽  
pp. 30-36 ◽  
Author(s):  
Petar Pejic ◽  
Sonja Krasic

Digital three-dimensional models of existing architectonic structures are created for the purpose of digitising archive documents, presenting buildings or urban entities, or conducting various analyses and tests. Traditional methods for creating 3D models of existing buildings assume manual measurement of their dimensions, use of the photogrammetry method, or laser scanning. Such approaches require considerable time for data acquisition or the application of specific instruments and equipment. The goal of this paper is to present a procedure for creating 3D models of existing structures using globally available web resources and free software packages on standard PCs. This considerably shortens the time needed to produce a digital three-dimensional model of a structure and does not require physical presence at the location. In addition, the precision of this method was tested and compared with results acquired in previous research.


Author(s):  
J. Sánchez ◽  
F. Camacho ◽  
R. Lacaze ◽  
B. Smets

This study investigates the scientific quality of the GEOV1 Leaf Area Index (LAI), Fraction of Absorbed Photosynthetically Active Radiation (FAPAR) and Fraction of Vegetation Cover (FCover) products based on PROBA-V observations. The procedure follows, as much as possible, the guidelines, protocols and metrics defined by the Land Product Validation (LPV) group of the Committee on Earth Observation Satellites (CEOS) for the validation of satellite-derived land products. This study focuses on the consistency of SPOT/VGT and PROBA-V GEOV1 products developed in the framework of the Copernicus Global Land Service, providing an early validation of PROBA-V GEOV1 products using data from the overlap period (November 2013 to May 2014). The first natural year of PROBA-V GEOV1 products (2014) was considered for the rest of the quality assessment, including comparisons with MODIS C5. Several criteria of performance were evaluated, including product completeness, spatial consistency, temporal consistency, intra-annual precision and accuracy. Firstly, an inter-comparison assessing both spatial and temporal consistency with reference satellite products (SPOT/VGT GEOV1 and MODIS C5) is presented over a network of sites (BELMANIP2.1). Secondly, the accuracy of PROBA-V GEOV1 products is evaluated against ground data from a number of concomitant agricultural sites. The ground data were collected and up-scaled using high resolution imagery in the context of the FP7 ImagineS project in support of the evolution of the Copernicus Land Service. Our results demonstrate that PROBA-V GEOV1 products are spatially and temporally consistent with similar products (SPOT/VGT, MODIS C5), and show good agreement with the limited ground truth data, with an accuracy (RMSE) of 0.52 for LAI, 0.11 for FAPAR and 0.14 for FCover, and a slight bias for FCover at higher values.
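The accuracy metrics quoted above (RMSE, with bias noted for FCover) can be computed as sketched below. The FAPAR-like values are illustrative, not the validation dataset.

```python
import numpy as np

# Illustrative product values vs. up-scaled ground measurements (unitless fractions)
ground  = np.array([0.45, 0.60, 0.72, 0.30, 0.55])
product = np.array([0.50, 0.58, 0.80, 0.33, 0.52])

# Root-mean-square error: overall accuracy of the product
rmse = np.sqrt(np.mean((product - ground) ** 2))
# Mean signed difference: systematic over/under-estimation (bias)
bias = np.mean(product - ground)
print(round(rmse, 3), round(bias, 3))
```

A positive bias of the kind shown here is what the abstract reports for FCover at higher values.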


2018 ◽  
Vol 10 (12) ◽  
pp. 1907 ◽  
Author(s):  
Luís Pádua ◽  
Pedro Marques ◽  
Jonáš Hruška ◽  
Telmo Adão ◽  
Emanuel Peres ◽  
...  

This study aimed to characterize vineyard vegetation through multi-temporal monitoring using a commercial low-cost rotary-wing unmanned aerial vehicle (UAV) equipped with a consumer-grade red/green/blue (RGB) sensor. Ground-truth data and UAV-based imagery were acquired on nine distinct dates, covering the most significant vegetative growing cycle until harvesting season, over two selected vineyard plots. The acquired UAV-based imagery underwent photogrammetric processing, resulting, per flight, in an orthophoto mosaic used for vegetation estimation. Digital elevation models were used to compute crop surface models. By filtering vegetation within a given height range, it was possible to separate grapevine vegetation from other vegetation present in a specific vineyard plot, enabling the estimation of grapevine area and volume. The results showed high accuracy in grapevine detection (94.40%) and low error in grapevine volume estimation (root mean square error of 0.13 m and correlation coefficient of 0.78 for height estimation). The accuracy assessment showed that the proposed method based on UAV-based RGB imagery is effective and has potential to become an operational technique. The proposed method also allows the estimation of grapevine areas that can potentially benefit from canopy management operations.
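The height-range filtering step can be sketched as follows: given per-cell heights from a crop surface model, keep only cells within an assumed grapevine canopy range to separate vine vegetation from inter-row cover. The height grid, the 0.6–2.0 m range and the cell area are illustrative assumptions, not the study's parameters.

```python
import numpy as np

# Illustrative crop surface model: heights above ground per raster cell (m)
csm = np.array([[0.05, 0.10, 1.20],
                [1.50, 0.20, 1.80],
                [0.10, 2.60, 1.10]])

# Keep cells whose height falls inside the assumed grapevine canopy range
vine_mask = (csm >= 0.6) & (csm <= 2.0)

cell_area = 0.25  # m^2 per cell (assumed ground sampling distance)
vine_area = vine_mask.sum() * cell_area
print(vine_area)  # estimated grapevine-covered area in m^2
```

Summing `csm[vine_mask] * cell_area` instead would give the canopy volume estimate described in the abstract.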


Sensors ◽  
2020 ◽  
Vol 20 (11) ◽  
pp. 3150
Author(s):  
Riccardo Rossi ◽  
Claudio Leolini ◽  
Sergi Costafreda-Aumedes ◽  
Luisa Leolini ◽  
Marco Bindi ◽  
...  

This study aims to test the performance of a low-cost and automatic phenotyping platform, consisting of a Red-Green-Blue (RGB) commercial camera scanning objects on rotating plates, and the reconstruction of main plant phenotypic traits via the Structure from Motion (SfM) approach. The precision of this platform was tested on three-dimensional (3D) models generated from images of potted maize, tomato and olive tree, acquired at different frequencies (steps of 4°, 8° and 12°) and qualities (4.88, 6.52 and 9.77 µm/pixel). Plant and organ heights, angles and areas were extracted from the 3D models generated for each combination of these factors. Coefficient of determination (R2), relative Root Mean Square Error (rRMSE) and Akaike Information Criterion (AIC) were used as goodness-of-fit indexes to compare the simulated to the observed data. The results indicated that while the best performance in reproducing plant traits was obtained using 90 images at 4.88 µm/pixel (R2 = 0.81, rRMSE = 9.49% and AIC = 35.78), this corresponded to an unviable processing time (from 2.46 h to 28.25 h for herbaceous plants and olive trees, respectively). Conversely, 30 images at 4.88 µm/pixel resulted in a good compromise between a reliable reconstruction of the considered traits (R2 = 0.72, rRMSE = 11.92% and AIC = 42.59) and processing time (from 0.50 h to 2.05 h for herbaceous plants and olive trees, respectively). In any case, the results pointed out that this input combination may vary based on the trait under analysis, which can be more or less demanding in terms of input images and time according to the complexity of its shape (R2 = 0.83, rRMSE = 10.15% and AIC = 38.78). These findings highlight the reliability of the developed low-cost platform for plant phenotyping, further indicating the best combination of factors to speed up the acquisition and elaboration process, at the same time minimizing the bias between observed and simulated data.
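The three goodness-of-fit indexes named above can be computed as sketched here, with AIC in its least-squares form, n·ln(RSS/n) + 2k. The observed/simulated trait values and the parameter count k are illustrative assumptions.

```python
import numpy as np

# Illustrative observed vs. simulated trait values (e.g. organ heights, cm)
observed  = np.array([10.0, 14.0, 18.0, 22.0, 26.0])
simulated = np.array([11.0, 13.5, 18.5, 21.0, 27.0])

residuals = simulated - observed
rss = np.sum(residuals ** 2)
n, k = len(observed), 2  # k: number of fitted parameters (assumed)

# Coefficient of determination
r2 = 1 - rss / np.sum((observed - observed.mean()) ** 2)
# Relative RMSE, expressed as a percentage of the observed mean
rrmse = np.sqrt(rss / n) / observed.mean() * 100
# Akaike Information Criterion, least-squares form
aic = n * np.log(rss / n) + 2 * k
print(round(r2, 3), round(rrmse, 2))
```

Lower AIC and rRMSE and higher R2 indicate a better reconstruction, which is how the abstract ranks the image-count/resolution combinations.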


Author(s):  
Jinmiao Huang ◽  
Rahul Rai

We introduce an intuitive gesture-based interaction technique for creating and manipulating simple three-dimensional (3D) shapes. Specifically, the developed interface utilizes a low-cost depth camera to capture the user's hand gestures as input, maps different gestures to system commands, and generates 3D models from midair 3D sketches (as opposed to traditional two-dimensional (2D) sketches). Our primary contribution is the development of an intuitive gesture-based interface that enables novice users to rapidly construct conceptual 3D models. Our development extends current work by proposing both design and technical solutions to the challenges of a gestural modeling interface for conceptual 3D shapes. The preliminary user study results suggest that the developed framework is intuitive to use and able to create a variety of 3D conceptual models.
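The gesture-to-command mapping the abstract mentions can be sketched as a simple dispatch table. The gesture labels and command names below are hypothetical, not the paper's vocabulary.

```python
def make_dispatcher():
    """Map recognized gesture labels to modeling commands (hypothetical names)."""
    log = []
    commands = {
        "pinch": lambda: log.append("start_sketch"),
        "open_palm": lambda: log.append("end_sketch"),
        "fist": lambda: log.append("delete_shape"),
    }
    def dispatch(gesture):
        # unknown gestures fall through to a no-op so recognition noise is tolerated
        commands.get(gesture, lambda: log.append("ignored"))()
    return dispatch, log

dispatch, log = make_dispatcher()
for gesture in ["pinch", "wave", "open_palm"]:
    dispatch(gesture)
print(log)
```

In a real system the gesture labels would come from a classifier running on the depth camera's hand-tracking output rather than from a hard-coded list.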


PeerJ ◽  
2019 ◽  
Vol 7 ◽  
pp. e7893 ◽  
Author(s):  
Simone Macrì ◽  
Romain J.G. Clément ◽  
Chiara Spinello ◽  
Maurizio Porfiri

Zebrafish (Danio rerio) have recently emerged as a valuable laboratory species in the field of behavioral pharmacology, where they afford rapid and precise high-throughput drug screening. Although the behavioral repertoire of this species manifests along three dimensions (3D), most of the efforts in behavioral pharmacology rely on two-dimensional (2D) projections acquired from a single overhead or front camera. We recently showed that, compared to a 3D scoring approach, 2D analyses could lead to inaccurate claims regarding individual and social behavior of drug-free experimental subjects. Here, we examined whether this conclusion extends to the field of behavioral pharmacology by phenotyping adult zebrafish, acutely exposed to citalopram (30, 50, and 100 mg/L) or ethanol (0.25%, 0.50%, and 1.00%), in the novel tank diving test over a 6-min experimental session. We observed that both compounds modulated the time course of general locomotion and anxiety-related profiles, the latter being represented by specific behaviors (erratic movements and freezing) and avoidance of anxiety-eliciting areas of the test tank (top half and distance from the side walls). We observed that 2D projections of 3D trajectories (ground truth data) may introduce a source of unwanted variation in zebrafish behavioral phenotyping. Predictably, both 2D views underestimate absolute levels of general locomotion. Additionally, while data obtained from a camera positioned on top of the experimental tank are similar to those obtained from a 3D reconstruction, 2D front-view data yield false negative findings.
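The underestimation of locomotion by 2D views follows directly from geometry: the path length of a projected trajectory can never exceed that of the 3D trajectory, because projection discards one component of every displacement. The short trajectory below is illustrative.

```python
import numpy as np

# Illustrative (x, y, z) positions of a fish over four time steps
traj = np.array([[0.0, 0.0, 0.0],
                 [1.0, 0.0, 1.0],
                 [1.0, 1.0, 0.0],
                 [2.0, 1.0, 1.0]])

def path_length(points):
    """Total distance travelled along a sequence of positions."""
    return np.linalg.norm(np.diff(points, axis=0), axis=1).sum()

d3 = path_length(traj)                  # true 3D distance
d_top = path_length(traj[:, :2])        # overhead view: drops z
d_front = path_length(traj[:, [0, 2]])  # front view: drops y
print(d3 > d_top and d3 > d_front)      # both projections underestimate
```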


Author(s):  
Ismail Elkhrachy

This paper analyses and evaluates the precision, accuracy and capability of low-cost terrestrial photogrammetry using multiple digital cameras to construct a 3D model of an object. To this end, a building façade was imaged with two inexpensive digital cameras, a Canon and a Pentax. Bundle adjustment and image processing were performed with Agisoft PhotoScan software. Several factors were investigated in this study: different cameras and control points. Several photogrammetric point clouds were generated, and their accuracy was compared against natural control points on the same building collected with a laser total station. The cloud-to-cloud distance was computed between the 3D models to investigate the different variables. The practical field experiment showed that the spatial positioning achieved by the investigated technique was between 2 and 4 cm in the 3D coordinates of the façade. This accuracy is promising, since the captured images were processed without any control points.

