CLIFF COLLAPSE HAZARD FROM REPEATED MULTICOPTER UAV ACQUISITIONS: RETURN ON EXPERIENCE

Author(s):  
T.J. B. Dewez ◽  
J. Leroux ◽  
S. Morelli

Cliff collapse poses a serious hazard to infrastructure and passers-by. Obtaining information such as a magnitude-frequency relationship for a specific site is of great help in choosing appropriate mitigation measures. While it is possible to monitor cliff sites hundreds of meters long with ground-based techniques (e.g. lidar or photogrammetry), it is both time consuming and scientifically limiting to focus on short cliff sections. In the SUAVE project, we sought to investigate whether an octocopter UAV photogrammetric survey would perform well enough to repeatedly survey cliff-face geometry and derive rock fall inventories amenable to probabilistic rock fall hazard computation. An experiment was therefore run on a well-studied site of the chalk coast of Normandy, at Mesnil Val, along the English Channel (northern France). Two campaigns, in January and June 2015, each surveyed about 60 ha of coastline, including the 80-m-high cliff face, the chalk platform at its foot, and the hinterland, in a matter of 4 hours from start to finish. To comply with UAV regulations, the flight was flown in 3 legs for a total of about 30 minutes in the air. Totals of 868 and 1106 photos, respectively, were shot with a Sony NEX-7 with a fixed 16 mm focal length. Three lines of sight were combined: horizontal shots for cliff-face imaging, 45°-oblique views to tie plateau/platform photos with cliff-face images, and regular vertical shots. Photogrammetrically derived dense point clouds were produced with Agisoft Photoscan at ultra-high density (median density of 1 point every 1.7 cm). Point cloud density proved a critical parameter for reproducing the chalk face's geometry faithfully. Turning the density parameter down to "high" or "medium", though efficient from a computational point of view, generated artefacts along chalk bed edges (i.e. smoothing the sharp gradient) and ultimately created ghost volumes when computing cloud-to-cloud differences. Yet, from a hazard point of view, this is where small rock falls are most likely to occur. Absolute orientation of both point clouds proved insufficient despite the 30 black-and-white quadrant ground control points surveyed with DGPS. An additional ICP (iterative closest point) registration was necessary to reach centimeter-level accuracy and to segment rock fall scars corresponding to the expected average daily rock fall volume (ca. 0.013 m³).
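A minimal sketch of the kind of ICP refinement and cloud-to-cloud differencing described above, assuming numpy and scipy; this illustrates the generic technique only, not the SUAVE processing chain, and all function names are ours.

```python
# Illustrative point-to-point ICP between two photogrammetric point clouds
# (hypothetical sketch, not the project's actual code).
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=50, tol=1e-6):
    """Rigidly align source (N x 3) onto target (M x 3); returns the aligned points."""
    src = source.copy()
    tree = cKDTree(target)
    prev_err = np.inf
    for _ in range(iterations):
        dist, idx = tree.query(src)            # nearest target point for each source point
        matched = target[idx]
        # Best-fit rigid transform (Kabsch / SVD) for the current correspondences
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:               # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        err = dist.mean()
        if abs(prev_err - err) < tol:          # stop once the mean residual stabilizes
            break
        prev_err = err
    return src

# Cloud-to-cloud differences between epochs (e.g. to segment rock fall scars) can then
# be taken as nearest-neighbour distances from the aligned June cloud to the January one:
# diff = cKDTree(cloud_january).query(icp(cloud_june, cloud_january))[0]
```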


2020 ◽  
Vol 7 (2) ◽  
pp. 34-41
Author(s):  
VLADIMIR NIKONOV ◽  
ANTON ZOBOV

The construction and selection of a suitable bijective function, that is, a substitution, has become an important applied task, particularly for building block encryption systems. Many articles have suggested different approaches to assessing the quality of a substitution, but most of them are computationally expensive. Solving this problem would significantly expand the range of methods for constructing and analyzing schemes in information protection systems. The purpose of this research is to find easily measurable characteristics of substitutions that allow their quality to be evaluated, as well as measures of the proximity of a particular substitution to a random one, or of its distance from it. To this end, two characteristics are proposed in this work, a difference characteristic and a polynomial characteristic; their mathematical expectations are derived, as well as the variance of the difference characteristic. Comparing the value of a characteristic computed for a particular substitution with the derived mathematical expectation then allows a conclusion to be drawn about its quality. From a computational point of view, the results of the article are of particular interest because of the simplicity of the algorithm for quantifying the quality of bijective substitutions. By its nature, calculating the difference characteristic amounts to a simple summation of integer terms over a fixed and small range. Such an operation, on both current and prospective hardware, maps directly onto the logic of a wide range of functional elements, especially when computations are implemented optically or on other carriers related to nanotechnology.
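The abstract does not reproduce the definition of the difference characteristic, so the sketch below only illustrates the general pattern it describes, scoring a substitution by a simple summation of small integers and comparing the score with an expectation estimated over random substitutions; the statistic used here is a hypothetical stand-in, not the one from the paper.

```python
# Hypothetical difference-style statistic for a substitution (stand-in only):
# sum the cyclic differences S(x+1) - S(x) modulo n and compare with the
# empirical expectation over random permutations.
import random

def diff_statistic(sbox):
    n = len(sbox)
    return sum((sbox[(x + 1) % n] - sbox[x]) % n for x in range(n))

n = 256
candidate = list(range(n))
random.shuffle(candidate)                       # the substitution under test
random_scores = [diff_statistic(random.sample(range(n), n)) for _ in range(1000)]
expectation = sum(random_scores) / len(random_scores)
print(diff_statistic(candidate), expectation)   # closeness suggests random-like behaviour
```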


Author(s):  
Guillermo Oliver ◽  
Pablo Gil ◽  
Jose F. Gomez ◽  
Fernando Torres

Abstract In this paper, we present a robotic workcell for automating footwear-manufacturing tasks such as sole digitization, glue dispensing, and sole manipulation between different places within the factory plant. We aim to make progress towards shoe Industry 4.0. To achieve this, we have implemented a novel sole-grasping method, compatible with soles of different shapes, sizes, and materials, by exploiting the particular characteristics of these objects. Our proposal works well both with low-density point clouds from a single RGB-D camera and with dense point clouds obtained from a laser scanner digitizer. The method computes antipodal grasping points from visual data in both cases and does not require prior recognition of the sole. It relies on extracting the sole contour using concave hulls and on measuring curvature along the contour. Our method was tested both in a simulated environment and in real manufacturing conditions at the INESCOP facilities, processing 20 soles with different sizes and characteristics. Grasps were performed in two different configurations, obtaining an average of 97.5% successful real grasps for soles without a heel made of materials of low or medium flexibility. In both cases, the grasping method was tested without tactile control throughout the task.
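As an illustration of the antipodal-grasp idea only (not the workcell's actual implementation, which uses concave hulls on RGB-D or laser point clouds), a simplified numpy sketch that picks a grasp pair on an ordered 2D sole contour from opposing outward normals; names and the scoring rule are ours.

```python
# Simplified selection of an antipodal grasp pair on an ordered 2D contour
# (hypothetical sketch, not the INESCOP workcell code).
import numpy as np

def antipodal_pair(contour):
    """contour: (N, 2) points ordered counter-clockwise; returns indices (i, j)."""
    # Tangents by central differences, outward normals by a 90-degree rotation
    tangents = np.roll(contour, -1, axis=0) - np.roll(contour, 1, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    normals = np.stack([tangents[:, 1], -tangents[:, 0]], axis=1)

    best, best_score = (0, 0), -np.inf
    n = len(contour)
    for i in range(n):
        for j in range(i + 1, n):
            axis = contour[j] - contour[i]
            d = np.linalg.norm(axis)
            if d < 1e-9:
                continue
            axis /= d
            # Antipodal score: normals opposed and roughly aligned with the grasp axis
            score = -np.dot(normals[i], normals[j]) * abs(np.dot(normals[i], axis))
            if score > best_score:
                best_score, best = score, (i, j)
    return best
```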


2021 ◽  
Author(s):  
Emmanuel Wyser ◽  
Lidia Loiotine ◽  
Charlotte Wolff ◽  
Gioacchino Francesco Andriani ◽  
Michel Jaboyedoff ◽  
...  

The identification of discontinuity sets and their properties is among the key factors for the geomechanical characterization of rock masses, which is fundamental for performing stability analyses and for planning prevention and mitigation measures.
In practice, discontinuity data are collected through difficult and time-consuming field surveys, especially when dealing with large or poorly accessible areas, dense vegetation cover, or adverse weather conditions. Consequently, even experienced operators may introduce sampling errors or misinterpretations, leading to biased geomechanical models of the investigated rock mass.
In the last decades, remote techniques such as photogrammetry, Light Detection and Ranging (LiDAR), Unmanned Aerial Vehicles (UAV) and InfraRed Thermography (IRT) have been introduced to overcome the limits of conventional surveys. We propose here a new tool for extracting information on the fracture pattern of rock masses, based on remote sensing methods, with particular reference to the analysis of high-resolution georeferenced photos. The first step consists in applying the Structure from Motion (SfM) technique to photos acquired with digital cameras and UAVs. Once aligned and georeferenced, the orthophotos are exported to GIS software, where the fracture traces are drawn at an appropriate scale. We developed a MATLAB routine that extracts information on the geostructural setting of rock masses by performing a quantitative 2D analysis of the fracture traces, based on formulas reported in the literature. The code was developed on a few simple experimental traces and was subsequently validated on an orthophoto from a real case study.
Currently, the script plots the fracture traces as polylines and calculates their orientation (strike) and length. It then detects the main discontinuity sets by fitting a composite Gaussian curve to histograms of the number of discontinuities by orientation, and splitting the curve into simpler Gaussian curves whose peaks correspond to the main discontinuity sets.
Then, for each set, a linear scanline intersecting the highest number of traces is plotted, and the apparent and real spacings are calculated. In a second step, a grid of circular scanlines covering the whole area containing the traces is plotted, and the mean trace intensity, trace density and trace length estimators are calculated.
We plan to test the presented tools on other case studies in order to optimize them and to calculate additional metrics, such as persistence and block size, useful for the geomechanical characterization of rock masses.
As a future perspective, a similar approach could be investigated for 3D analyses from point clouds.
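The routine described above is a MATLAB script; purely as an illustration of one of its steps, the Python sketch below derives strike and length from digitized trace polylines and groups strikes with a Gaussian mixture, a stand-in for the composite Gaussian fit mentioned in the abstract. Function names and parameters are ours.

```python
# Illustrative sketch: trace strike/length from polylines and grouping of strikes
# into candidate discontinuity sets (stand-in for the described composite Gaussian fit).
import numpy as np
from sklearn.mixture import GaussianMixture

def strike_and_length(polyline):
    """polyline: (N, 2) array of map coordinates (east, north) along one trace."""
    segs = np.diff(polyline, axis=0)
    length = np.sum(np.linalg.norm(segs, axis=1))
    end_to_end = polyline[-1] - polyline[0]
    strike = np.degrees(np.arctan2(end_to_end[0], end_to_end[1])) % 180.0  # 0-180 deg from north
    return strike, length

def detect_sets(strikes, max_sets=4):
    """Pick the number of Gaussian components by BIC and assign each trace to a set."""
    X = np.asarray(strikes).reshape(-1, 1)
    models = [GaussianMixture(n_components=k, random_state=0).fit(X)
              for k in range(1, max_sets + 1)]
    best = min(models, key=lambda m: m.bic(X))
    return best.means_.ravel(), best.predict(X)   # mean strike per set, per-trace labels
```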


2019 ◽  
Vol 27 (3) ◽  
pp. 317-340 ◽  
Author(s):  
Max Kontak ◽  
Volker Michel

Abstract In this work, we present the so-called Regularized Weak Functional Matching Pursuit (RWFMP) algorithm, which is a weak greedy algorithm for linear ill-posed inverse problems. In comparison to the Regularized Functional Matching Pursuit (RFMP), on which it is based, the RWFMP possesses an improved theoretical analysis, including the guaranteed existence of the iterates, the convergence of the algorithm for inverse problems in infinite-dimensional Hilbert spaces, and a convergence rate, which is also valid for the particular case of the RFMP. Another improvement is the removal of the previously required and difficult-to-verify semi-frame condition. Furthermore, we provide an a-priori parameter choice rule for the RWFMP, which yields a convergent regularization. Finally, we give a numerical example, which shows that the "weak" approach is also beneficial from the computational point of view. By applying an improved search strategy in the algorithm, which is motivated by the weak approach, we can save up to 90 % of computation time in comparison to the RFMP, while the accuracy of the solution changes only slightly.
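A minimal sketch of a weak greedy selection for a discretized linear problem y ≈ Ax, intended only to illustrate the "weak" idea; the RWFMP itself adds a regularization term and is formulated in infinite-dimensional Hilbert spaces, neither of which is reproduced here, and the weakness parameter rho is illustrative.

```python
# Generic weak matching pursuit for a discretized linear inverse problem y ~ A x
# (illustrative sketch only, not the RWFMP).
import numpy as np

def weak_matching_pursuit(A, y, rho=0.8, n_iter=200):
    """A: (m, n) dictionary with columns as trial functions; y: data; rho: weakness parameter."""
    m, n = A.shape
    x = np.zeros(n)
    residual = y.copy()
    col_norms = np.linalg.norm(A, axis=0)
    for _ in range(n_iter):
        corr = A.T @ residual / col_norms               # normalized correlations with the residual
        k_best = np.argmax(np.abs(corr))
        # "Weak" selection: accept any index reaching at least rho times the maximum
        candidates = np.where(np.abs(corr) >= rho * np.abs(corr[k_best]))[0]
        k = candidates[0]
        alpha = corr[k] / col_norms[k]                  # optimal step along column k
        x[k] += alpha
        residual -= alpha * A[:, k]
    return x
```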


Author(s):  
Federico Perini ◽  
Anand Krishnasamy ◽  
Youngchul Ra ◽  
Rolf D. Reitz

The need for more efficient and environmentally sustainable internal combustion engines is driving research towards more realistic models of both fuel physics and chemistry. As far as compression ignition engines are concerned, phenomenological or lumped fuel models are unreliable for capturing spray and combustion behaviour outside their validation domains (typically, high-pressure injection and high-temperature combustion). Furthermore, the development of variable-reactivity combustion strategies also creates the need to model different hydrocarbon families comprehensively, even in single-fuel surrogates. From the computational point of view, challenges to achieving practical simulation times arise from the size of the reaction mechanism, which can comprise hundreds of species even when hydrocarbon families are lumped into representative compounds and thus modeled with non-elementary, skeletal reaction pathways. In this case, it is also impossible to pursue further mechanism reductions to lower dimensions. CPU times for integrating chemical kinetics in internal combustion engine simulations ultimately scale with the number of cells in the grid, and with the cube of the number of species in the reaction mechanism. In the present work, two approaches to reducing the demands of engine simulations with detailed chemistry are presented. The first addresses the cost of solving the chemistry ODE system and features the adoption of SpeedCHEM, a newly developed chemistry package that solves chemical kinetics using sparse analytical Jacobians. The second aims to reduce the number of chemistry calculations by binning the CFD cells of the engine grid into a subset of clusters, where chemistry is solved and then mapped back to the original domain. In particular, a high-dimensional representation of the chemical state space is adopted to keep track of the different fuel components, and a newly developed bounding-box-constrained k-means algorithm is used to subdivide the cells into reactively homogeneous clusters. The approaches have been tested on a number of simulations featuring multi-component diesel fuel surrogates and different engine grids. The results show that significant CPU time reductions, of about one order of magnitude, can be achieved without loss of accuracy in either engine performance or emissions predictions, supporting their applicability to more refined or full-sized engine grids.
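A simplified sketch of the cell-binning idea, assuming a plain k-means from scikit-learn on a few state variables; the paper's bounding-box-constrained k-means and high-dimensional chemical state representation are not reproduced, and `integrate_chemistry` stands in for a SpeedCHEM-like ODE solve over one CFD time step.

```python
# Simplified chemistry clustering for CFD cells: bin cells by thermochemical state
# with k-means, integrate chemistry once per cluster, and map the result back to
# every member cell (illustrative sketch only).
import numpy as np
from sklearn.cluster import KMeans

def solve_chemistry_clustered(cell_states, n_clusters, integrate_chemistry):
    """
    cell_states: (n_cells, n_features) array, e.g. [T, phi, Y_fuel1, Y_fuel2, ...].
    integrate_chemistry: callable mapping one representative state to its reacted state.
    """
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(cell_states)
    reacted_centroids = np.array([integrate_chemistry(c) for c in km.cluster_centers_])
    # Map each cell to its cluster's reacted state (a production code would also
    # redistribute the change while conserving per-cell mass and energy).
    return reacted_centroids[km.labels_]
```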


Author(s):  
Virdiansyah Permana ◽  
Rahmat Shoureshi

This study presents a new approach to determining the controllability and observability of a large-scale nonlinear dynamic thermal system using graph theory. The novelty of the method lies in adapting graph theory to a nonlinear class of systems and establishing a graphical condition that gives necessary and sufficient terms for a system of this class to be controllable and observable, equivalent to the analytical Lie algebra rank condition. The directed graph (digraph) is used to model the system, and the rules for adapting it to the nonlinear class are defined. Subsequently, necessary and sufficient conditions for controllability and observability are investigated through a structural property of the digraph called connectability. It is shown that the connectability conditions between the input and the states, as well as between the output and the states, of a nonlinear system are equivalent to the Lie algebra rank condition (LARC). This approach proves easier from a computational point of view and is thus useful when dealing with large systems.
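A minimal sketch of checking connectability between an input vertex and the state vertices of a digraph by breadth-first search; the paper's connectability condition for its nonlinear class carries more structure than plain reachability, so this is an illustration only. Output connectability can be checked analogously on the reversed digraph.

```python
# Illustrative input-to-state connectability check on a digraph via BFS
# (hypothetical sketch; vertex names are ours).
from collections import deque

def reachable_from(graph, source):
    """graph: dict mapping vertex -> list of successor vertices (a digraph)."""
    seen, queue = {source}, deque([source])
    while queue:
        v = queue.popleft()
        for w in graph.get(v, []):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return seen

def input_connectable(graph, input_vertex, state_vertices):
    """True if every state vertex can be reached from the input vertex."""
    return set(state_vertices) <= reachable_from(graph, input_vertex)

# Example: u -> x1 -> x2, u -> x3
graph = {"u": ["x1", "x3"], "x1": ["x2"]}
print(input_connectable(graph, "u", ["x1", "x2", "x3"]))   # True
```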


2019 ◽  
Vol 93 (3) ◽  
pp. 411-429 ◽  
Author(s):  
Maria Immacolata Marzulli ◽  
Pasi Raumonen ◽  
Roberto Greco ◽  
Manuela Persia ◽  
Patrizia Tartarino

Abstract Methods for the three-dimensional (3D) reconstruction of forest trees have been suggested for data from both active and passive sensors. Laser scanning technologies have become popular in the last few years, despite their high costs. Thanks to improvements in photogrammetric algorithms (e.g. structure from motion, SfM), photographs have become a new low-cost source of 3D point clouds. In this study, we use images captured by a smartphone camera to calculate dense point clouds of a forest plot using SfM. Eighteen point clouds were produced by changing the densification parameters (Image scale, Point density, Minimum number of matches) in order to investigate their influence on the quality of the point clouds produced. In order to estimate diameter at breast height (d.b.h.) and stem volumes, we developed an automatic method that extracts the stems from the point cloud and then models them with cylinders. The results show that Image scale is the most influential parameter in terms of identifying and extracting trees from the point clouds. The best performance of cylinder modelling from point clouds, compared with field data, had an RMSE of 1.9 cm for d.b.h. and 0.094 m³ for volume. Thus, for forest management and planning purposes, it is possible to use our photogrammetric and modelling methods to measure d.b.h., stem volume and possibly other forest inventory metrics rapidly and without felling trees. The proposed methodology significantly reduces working time in the field, using 'non-professional' instruments and automating estimates of dendrometric parameters.
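As a simplified stand-in for the cylinder-fitting step (the paper models whole stems with cylinders), the sketch below estimates d.b.h. by an algebraic least-squares circle fit to a thin horizontal slice of stem points at breast height; all names and thresholds are illustrative, not taken from the study.

```python
# Estimate d.b.h. from a stem point cloud by fitting a circle (Kasa algebraic
# least squares) to a thin slice at breast height (illustrative sketch only).
import numpy as np

def fit_circle(xy):
    """xy: (N, 2) points of one stem slice; returns (cx, cy, radius)."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r

def dbh_from_cloud(points, breast_height=1.3, slice_thickness=0.10):
    """points: (N, 3) stem points with z measured from the ground; returns d.b.h."""
    z = points[:, 2]
    mask = np.abs(z - breast_height) < slice_thickness / 2
    _, _, r = fit_circle(points[mask, :2])
    return 2.0 * r
```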

