Proposing a Method to Analyze Slope Displacement Using the Distance Image Data of Depth Camera

2021 ◽  
Vol 16 (7) ◽  
pp. 1086-1095
Author(s):  
Yasuhiro Onoue ◽  
Tomohiro Ishizawa ◽  
Toru Danjo ◽  
Teruki Fukuzono

Sediment disasters have occurred with increasing frequency in recent years because of local heavy rains caused by line-shaped precipitation systems and torrential rains accompanying large typhoons. Since rescue operators at disaster sites are constantly exposed to physical risk, technologies to predict the occurrence of secondary disasters are needed. We study the measurement of slope displacements using a depth camera, which is readily deployable, can be easily set up, enables monitoring of an extensive area, and does not require expert knowledge to carry out measurements. In this process, we confirmed the difficulty of measuring the slope changes that precede failure when the depth camera is set at a distance, because of the large measurement errors caused by its limited depth resolution and the poor measurement conditions under rainfall. In this study, we propose a new method for analyzing the depth image data obtained by a depth camera and verify its validity for displacement measurements. A comparison of the previous and proposed methods confirmed that the latter can detect slope changes from minute deformations. When compared with the readings of extensometers that measured the slope directly, the results displayed similar increasing trends. Therefore, measuring a slope with a depth camera and analyzing the acquired depth image data with the proposed method make it possible to detect the minute changes that precede slope failure.
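The abstract does not spell out the analysis itself, so the following is only a minimal sketch of the general idea it points to: suppressing per-frame depth noise by temporal averaging before differencing frames, assuming depth-frame stacks in millimetres and a hypothetical 5 mm detection threshold.

```python
import numpy as np

def detect_displacement(frames_before, frames_after, threshold_mm=5.0):
    """Compare two stacks of depth frames (N x H x W, depths in mm)."""
    # Treat zero depth as "unmeasured" and exclude it from the averages;
    # averaging over time suppresses the sensor noise that dominates when
    # the camera is set far from the slope.
    before = np.nanmean(np.where(frames_before > 0, frames_before, np.nan), axis=0)
    after = np.nanmean(np.where(frames_after > 0, frames_after, np.nan), axis=0)

    displacement = before - after  # positive: surface moved toward the camera
    return displacement, np.abs(displacement) > threshold_mm
```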

2018 ◽  
Vol 22 (8) ◽  
pp. 4425-4447 ◽  
Author(s):  
Manuel Antonetti ◽  
Massimiliano Zappa

Abstract. Both modellers and experimentalists agree that using expert knowledge can improve the realism of conceptual hydrological models. However, their use of expert knowledge differs at each step of the modelling procedure, which involves hydrologically mapping the dominant runoff processes (DRPs) occurring on a given catchment, parameterising these processes within a model, and allocating the model's parameters. Modellers generally use very simplified mapping approaches and apply their knowledge by constraining the model through parameter and process relational rules. In contrast, experimentalists usually prefer to invest all their detailed, qualitative process knowledge in obtaining as realistic a spatial distribution of DRPs as possible and in defining narrow value ranges for each model parameter.

Runoff simulations are affected by equifinality and numerous other sources of uncertainty, which challenge the assumption that the more expert knowledge is used, the better the results will be. To test the extent to which expert knowledge can improve simulation results under uncertainty, we therefore applied a total of 60 modelling chain combinations, forced by five rainfall datasets of increasing accuracy, to four nested catchments in the Swiss Pre-Alps. These datasets include hourly precipitation data from automatic stations interpolated with Thiessen polygons and with the inverse distance weighting (IDW) method, as well as different spatial aggregations of CombiPrecip, a combination of ground measurements and quantitative radar estimates of precipitation. To map the spatial distribution of the DRPs, three mapping approaches involving different levels of expert knowledge were used to derive so-called process maps. Finally, both a typical modellers' top-down set-up relying on parameter and process constraints and an experimentalists' set-up based on bottom-up thinking and field expertise were implemented using a newly developed process-based runoff generation module (RGM-PRO). To quantify the uncertainty originating from the forcing data, the process maps, the model parameterisation, and the parameter allocation strategy, an analysis of variance (ANOVA) was performed.

The simulation results showed that (i) the modelling chains based on the most complex process maps performed slightly better than those based on less expert knowledge; (ii) the bottom-up set-up performed better than the top-down one when simulating short-duration events, but similarly to the top-down set-up when simulating long-duration events; (iii) the differences in performance arising from the different forcing data were due to compensation effects; and (iv) the bottom-up set-up can help identify uncertainty sources but is prone to overconfidence problems, whereas the top-down set-up seems to accommodate uncertainties in the input data best. Overall, modellers' and experimentalists' concepts of model realism differ. This means that the level of detail a model needs in order to reproduce the expected DRPs accurately must be agreed on in advance.
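As a minimal sketch of the variance decomposition step, assuming a hypothetical table of efficiency scores per modelling-chain combination (factor names and values are illustrative, not taken from the paper):

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical results: one performance score (e.g. Nash-Sutcliffe
# efficiency) per combination of forcing data, process map, and set-up.
runs = pd.DataFrame({
    "forcing":  ["thiessen", "idw", "combiprecip"] * 4,
    "proc_map": ["simple", "simple", "complex", "complex"] * 3,
    "setup":    ["top_down", "bottom_up"] * 6,
    "nse":      [0.62, 0.65, 0.71, 0.70, 0.68, 0.66,
                 0.73, 0.69, 0.64, 0.72, 0.70, 0.67],
})

# Main-effects ANOVA: how much variance each factor explains.
model = ols("nse ~ C(forcing) + C(proc_map) + C(setup)", data=runs).fit()
print(sm.stats.anova_lm(model, typ=2))
```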


Author(s):  
Hyun Jun Park ◽  
Kwang Baek Kim

The Intel RealSense depth camera produces a depth image using an infrared projector and an infrared camera. Using infrared radiation makes it possible to measure depth with high accuracy, but infrared shadows leave regions in which the depth cannot be measured. The Intel RealSense SDK provides a post-processing algorithm to correct this, but the algorithm is not sufficient as it stands and needs to be improved. We therefore propose a method to correct the depth image using image processing techniques. The proposed method corrects the depth using adjacent depth information. Experimental results showed that the proposed method corrects the depth image more accurately than the Intel RealSense SDK.
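The paper's exact correction is not given in the abstract; the following is a minimal sketch of one simple way to fill holes from adjacent depths, assigning each unmeasured pixel the value of its nearest measured neighbour.

```python
import numpy as np
from scipy import ndimage

def fill_depth_holes(depth_mm: np.ndarray) -> np.ndarray:
    """Fill unmeasured (zero-depth) pixels from adjacent valid depths."""
    invalid = depth_mm == 0
    # For every pixel, find the coordinates of the nearest measured pixel.
    _, indices = ndimage.distance_transform_edt(invalid, return_indices=True)
    # Copy that neighbour's depth into the infrared-shadow holes.
    return depth_mm[tuple(indices)]
```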


2019 ◽  
Vol 79 (10) ◽  
pp. 1060-1078 ◽  
Author(s):  
Hans-Georg Schnürch ◽  
Sven Ackermann ◽  
Celine D. Alt-Radtke ◽  
Lukas Angleitner ◽  
Jana Barinoff ◽  
...  

Abstract Purpose This is an official guideline, published and coordinated by the Gynecological Oncology Working Group (AGO) of the German Cancer Society (DKG) and the German Society for Gynecology and Obstetrics (DGGG). Vaginal cancers are rare tumors, which is why there is very little evidence on them and knowledge about their optimal clinical management is limited. This first German S2k guideline on vaginal cancer aims to compile the most current expert knowledge, to offer new recommendations on appropriate treatment, and to provide pointers toward individually adapted therapies with lower morbidity rates than those previously generally available. A further purpose of this guideline is to set up a register to record treatment data and the course of disease as a means of obtaining evidence in the future. Methods The present S2k guideline was developed by members of the Vulvar and Vaginal Tumors Commission of the AGO in an independently moderated, structured, formal consensus process, and its contents were agreed with the mandate holders of the participating scientific societies and organizations. Recommendations To optimize the daily care of patients with vaginal cancer: 1. Monitor the pattern of spread; 2. Follow the step-by-step diagnostic workup based on the initial stage at detection; 3. As part of the individualized clinical management of vaginal cancer, follow the sentinel lymph node protocol described here where possible; 4. Participate in the register study on vaginal cancer.


2013 ◽  
Vol 760-762 ◽  
pp. 1556-1561
Author(s):  
Ting Wei Du ◽  
Bo Liu

Indoor scene understanding based on depth image data is a cutting-edge issue in the field of three-dimensional computer vision. Given the layout characteristics of indoor scenes and the abundance of planar features in them, this paper presents a depth image segmentation method based on Gaussian Mixture Model clustering. First, the Kinect depth image data are transformed into a point cloud in the form of discrete three-dimensional points, and the point cloud is denoised and down-sampled; second, a normal is calculated for every point in the cloud and the normals are clustered using a Gaussian Mixture Model; finally, the point cloud is segmented with the RANSAC algorithm. Experimental results show that the segmented regions have clear boundaries, the segmentation quality is good, and the method lays a good foundation for object recognition.
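A minimal sketch of the described pipeline, using Open3D and scikit-learn as stand-ins for the paper's implementation (the file name, voxel size, component count and RANSAC thresholds are illustrative):

```python
import numpy as np
import open3d as o3d
from sklearn.mixture import GaussianMixture

# Load a Kinect depth capture converted to a point cloud (path illustrative).
pcd = o3d.io.read_point_cloud("indoor_scene.ply")

# Denoise and down-sample, as in the preprocessing step.
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
pcd = pcd.voxel_down_sample(voxel_size=0.02)

# Estimate a normal at every point.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))
normals = np.asarray(pcd.normals)

# Cluster the normals with a Gaussian Mixture Model; indoor scenes are
# dominated by a few plane orientations (floor, walls, ceiling).
labels = GaussianMixture(n_components=4, random_state=0).fit_predict(normals)

# Within each orientation cluster, extract individual planes with RANSAC.
for k in range(4):
    cluster = pcd.select_by_index(np.flatnonzero(labels == k).tolist())
    plane, inliers = cluster.segment_plane(
        distance_threshold=0.02, ransac_n=3, num_iterations=1000)
    print(f"cluster {k}: plane {np.round(plane, 3)}, {len(inliers)} inliers")
```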


2008 ◽  
Vol 39-40 ◽  
pp. 523-528
Author(s):  
Pavel Jirman ◽  
Ivo Matoušek

Improving the technology and materials of 21st-century glass production requires the implementation of high-level innovations. Such innovations must not only be developed, produced and deployed; their qualities and prospects must also be evaluated so that their rate of application increases. At present, only 1-3% of developed innovations are applied. Every stage of glass processing, such as melting, forming or cold working, has its own limits to further development, and these must be identified so that the potential for further innovation can be predicted. Theoretical and expert knowledge of the field and of IT applications is no longer sufficient; methods of creative thinking are needed to achieve and apply the required innovations. Understanding a system of creative thinking makes it possible to adapt better and faster to rapidly changing practice. TRIZ (Theory of Inventive Problem Solving) is a powerful method of creative technical thinking that originated from the study of patents and the generalization of successful problem solving. TRIZ makes it possible to extract a correct formulation of a task from an unclearly described situation, and then to solve the reformulated task using the method's uniquely powerful instruments [1]. Application of TRIZ is supported by dedicated software designed for collecting information, analysis, synthesis of solutions and verification of the solutions found. Practical examples of applying TRIZ to selected glass technologies are presented in this contribution.


2020 ◽  
Vol 994 ◽  
pp. 280-287
Author(s):  
Anh Dao ◽  
Ágota Drégelyi-Kiss

Measuring dimensional parameters (such as diameters and distances) with industrial computed tomography (CT) is becoming more and more popular because of advantages such as its non-destructive nature and short measurement time. However, the quality of the measured values needs to be evaluated as a requirement of quality control. An aluminium test piece is designed and manufactured for mapping the measurement errors and uncertainties of dimensional CT measurements. In this article, the measurement errors are investigated on the basis of a designed experiment evaluated with the response surface method (RSM). Three main factors are varied systematically: the magnification of the CT, the number of views (NoV), and the set-up of the scanning mode. In the course of the measurement evaluation, several GD&T parameters are determined, such as hole diameters, distances between holes, flatness and perpendicularity. The purpose of this research is to calculate the measurement errors and to determine the factors that affect the dimensional CT measurement process.
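As a minimal sketch of such an evaluation, assuming hypothetical results of a two-level factorial design over the three factors (all values are illustrative):

```python
import pandas as pd
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Hypothetical 2^3 factorial results: error (in micrometres) of one hole
# diameter at each combination of magnification, number of views, and mode.
doe = pd.DataFrame({
    "magnification": [1.5, 1.5, 3.0, 3.0, 1.5, 1.5, 3.0, 3.0],
    "num_views":     [720, 1440, 720, 1440, 720, 1440, 720, 1440],
    "fast_mode":     [0, 0, 0, 0, 1, 1, 1, 1],
    "error_um":      [8.2, 6.1, 4.9, 3.4, 9.7, 7.5, 6.0, 4.8],
})

# Main effects plus two-factor interactions (a two-level design cannot
# resolve pure quadratic terms; those need centre points at a third level).
X = PolynomialFeatures(degree=2, interaction_only=True,
                       include_bias=False).fit_transform(
    doe[["magnification", "num_views", "fast_mode"]])
surface = LinearRegression().fit(X, doe["error_um"])
print("R^2 of the fitted response surface:", surface.score(X, doe["error_um"]))
```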


Author(s):  
R. Bettocchi ◽  
M. Pinelli ◽  
P. R. Spina ◽  
M. Venturini ◽  
G. A. Zanetta

The paper deals with the set-up and application of an Artificial Intelligence technique based on Neural Networks (NNs) to gas turbine diagnostics, in order to evaluate its capabilities and robustness. The data used for both training and testing the NNs were generated by means of a Cycle Program calibrated on a Siemens V94.3A gas turbine. Such data are representative of operating points characterized by different boundary, load and health-state conditions. The analyses carried out are aimed at selecting the most appropriate NN structure for gas turbine diagnostics by evaluating NN robustness with respect to:
• interpolation capability and accuracy in the presence of data affected by measurement errors;
• extrapolation capability in the presence of data lying outside the range of variation adopted for NN training;
• accuracy in the presence of input data corrupted by bias errors;
• accuracy when one input is not available, a situation simulated by replacing the value of the unavailable input with its nominal value.
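A minimal sketch of these robustness checks, assuming a generic regression NN on synthetic data rather than the Cycle Program data used in the paper:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical training data: operating-point measurements -> health indices.
X = rng.uniform(0.0, 1.0, size=(2000, 6))
y = X @ rng.uniform(-1.0, 1.0, size=(6, 2)) + 0.01 * rng.normal(size=(2000, 2))

nn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                  random_state=0).fit(X, y)

X_test = rng.uniform(0.0, 1.0, size=(500, 6))

# 1) Measurement noise: perturb all inputs with random errors.
noisy = X_test + 0.02 * rng.normal(size=X_test.shape)

# 2) Bias error: a constant offset on one sensor channel.
biased = X_test.copy()
biased[:, 3] += 0.05

# 3) Unavailable input: replace sensor 3 with its nominal (mean) value.
missing = X_test.copy()
missing[:, 3] = X[:, 3].mean()

for name, Xt in [("noisy", noisy), ("biased", biased), ("missing", missing)]:
    drift = np.abs(nn.predict(Xt) - nn.predict(X_test)).mean()
    print(f"{name}: mean prediction shift = {drift:.4f}")
```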


Author(s):  
Arindam Banerjee ◽  
Malcolm J. Andrews

A novel gas channel experiment is used to study the non-equilibrium development of high Atwood number Rayleigh-Taylor mixing. Two gas streams, one containing an air-helium mixture and the other air, flow parallel to each other separated by a thin splitter plate. The streams meet at the end of the splitter plate, leading to the formation of an unstable interface and the initiation of buoyancy-driven mixing. This set-up is statistically steady and allows for long data collection times. Here, we describe initial measurements to determine the density profile and growth rate along the mix at small density differences (At ~ 0.05). The facility is, however, designed to be capable of large Atwood number studies (At ~ 0.75). Diagnostics include high-resolution digital image analysis, which is used to determine the density profile across the mix. The growth parameter (α) is also estimated by a “moving window” calculation. The results are then verified against measurements of α made with a constant-temperature (CT) hot-wire probe and against the growth parameter obtained from small Atwood number experiments (At ~ 0.001). However, there were some inherent errors in the density profile measurements because of non-uniformity in the concentration of smoke. To verify that these errors were indeed measurement errors and not the result of a lack of statistical convergence, a detailed statistical convergence test was performed. It showed that convergence was a direct consequence of the number of distinct large 3D structures averaged over the duration of the run.
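For reference, the standard relations behind the quoted numbers (not the paper's own code): the Atwood number At = (ρ₁ − ρ₂)/(ρ₁ + ρ₂) and the self-similar bubble-front growth h_b = α·At·g·t². Densities below are illustrative.

```python
# Atwood number and Rayleigh-Taylor bubble-front growth, standard relations.
g = 9.81  # gravitational acceleration, m/s^2

def atwood(rho_heavy, rho_light):
    return (rho_heavy - rho_light) / (rho_heavy + rho_light)

def bubble_height(alpha, At, t):
    return alpha * At * g * t**2

# Air vs. a dilute air-helium mixture (densities in kg/m^3, illustrative).
At = atwood(1.204, 1.090)
print(f"Atwood number: {At:.3f}")                              # ~0.05
print(f"h_b after 1 s: {bubble_height(0.06, At, 1.0):.3f} m")  # alpha ~ 0.06
```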


Author(s):  
Mohd Kufaisal bin Mohd Sidik ◽  
Mohd Shahrizal bin Sunar ◽  
Ismahafezi bin Ismail ◽  
Mohd Khalid bin Mokhtar ◽  
Normal binti Mat Jusoh

2017 ◽  
Author(s):  
Zhikai Liang ◽  
Piyush Pandey ◽  
Vincent Stoerger ◽  
Yuhang Xu ◽  
Yumou Qiu ◽  
...  

Abstract Maize (Zea mays ssp. mays) is one of three crops, along with rice and wheat, responsible for more than half of all calories consumed around the world. Increasing the yield and stress tolerance of these crops is essential to meet the growing need for food. The cost and speed of plant phenotyping are currently the largest constraints on plant breeding efforts. Datasets linking new types of high-throughput phenotyping data collected from plants to the performance of the same genotypes under agronomic conditions across a wide range of environments are essential for developing new statistical approaches and computer-vision-based tools. A set of maize inbreds – primarily recently off-patent lines – was phenotyped using a high-throughput platform at the University of Nebraska-Lincoln. These lines have previously been subjected to high-density genotyping and scored for a core set of 13 phenotypes in field trials across 13 North American states over two years by the Genomes to Fields consortium. A total of 485 GB of image data, including RGB, hyperspectral, fluorescence and thermal infrared photos, has been released. Correlations between image-based measurements and manual measurements demonstrated the feasibility of quantifying variation in plant architecture using image data. However, naive approaches to measuring traits such as biomass can introduce nonrandom measurement errors confounded with genotype variation. Analysis of the hyperspectral image data demonstrated unique signatures from stem tissue. Integrating heritable phenotypes from high-throughput phenotyping data with field data from different environments can reveal previously unknown factors influencing yield plasticity.
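As a minimal sketch of the image-versus-manual comparison, assuming a hypothetical phenotype table (the file and column names are illustrative, not from the released dataset):

```python
import pandas as pd

# Hypothetical table: one row per plot, with a manually measured trait and
# the same trait estimated from the RGB images.
pheno = pd.read_csv("phenotypes.csv")
r = pheno["height_manual_cm"].corr(pheno["height_image_cm"])
print(f"Pearson r (manual vs. image-based height): {r:.2f}")
```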

