GEOWAPP: A Geospatial Web Application for Lab Exercises in Surveying

GEOMATICA ◽  
2016 ◽  
Vol 70 (1) ◽  
pp. 31-42
Author(s):  
Jaime Garbanzo-León ◽  
Robert Kingdon ◽  
Emmanuel Stefanakis

E-learning applications that allow students to review their survey data are not widely used in Surveying Engineering. Designing and developing such an application requires studying user interactions, technology interactions, existing exercises, data representation, and related requirements. This study covers the design, development, and testing of a geospatial web application (GEOWAPP) intended to operate in the adjunct mode of e-learning. GEOWAPP supports four exercises: two levelling exercises, a traversing exercise, and a topographic survey exercise. It contains five tools: the Traversing Comparator, Differential Levelling Comparator, Least Squares Levelling Tool, Vertical Comparator, and Proximity Comparator. After testing with real survey data and textbook exercise data, the GEOWAPP functionality was found to be operational. Finally, user reviews of GEOWAPP were favourable. The application provides a new way to support surveying lab exercises by delivering immediate feedback.
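
As a rough illustration of the kind of check a levelling comparator tool could automate, the following Python sketch computes the misclosure of a differential-levelling run and a common allowable-misclosure rule of thumb; the readings, benchmark elevations, and tolerance constant are invented for the example and are not taken from GEOWAPP.

```python
# Hypothetical sketch of a differential-levelling misclosure check,
# similar in spirit to what a levelling comparator tool might automate.
# All readings, elevations, and the tolerance constant are illustrative only.
import math

def level_misclosure(backsights, foresights, start_elev, end_elev):
    """Return the misclosure of a levelling run (observed minus known)."""
    observed_end = start_elev + sum(backsights) - sum(foresights)
    return observed_end - end_elev

def allowable_misclosure(k, distance_km):
    """Common rule of thumb: allowable = k * sqrt(distance in km)."""
    return k * math.sqrt(distance_km)

# Example: a closed loop (start and end on the same benchmark)
bs = [1.234, 0.876, 1.502]          # backsight readings (m)
fs = [1.001, 1.110, 1.498]          # foresight readings (m)
w = level_misclosure(bs, fs, start_elev=100.000, end_elev=100.000)
print(f"misclosure = {w*1000:.1f} mm, "
      f"allowable = {allowable_misclosure(0.012, 1.5)*1000:.1f} mm")
```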

Author(s):  
Parisa Torkaman

The generalized inverted exponential distribution is introduced as a lifetime model with good statistical properties. In this paper, estimation of its probability density function and cumulative distribution function is considered using five estimation methods: uniformly minimum variance unbiased (UMVU), maximum likelihood (ML), least squares (LS), weighted least squares (WLS), and percentile (PC) estimators. The performance of these estimation procedures is compared through numerical simulations based on the mean squared error (MSE). The simulation studies show that the UMVU estimator performs better than the others, and that when the sample size is large enough the ML and UMVU estimators are almost equivalent and more efficient than LS, WLS, and PC. Finally, the results are illustrated on a real data set.
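
For readers who want a concrete starting point, the sketch below fits the distribution by maximum likelihood in Python, assuming the common parameterization f(x; α, λ) = (αλ/x²) e^(−λ/x) (1 − e^(−λ/x))^(α−1) for x > 0; the data are simulated by inverse-CDF sampling, and the UMVU, LS, WLS, and PC estimators compared in the paper are not reproduced here.

```python
# Hedged sketch: maximum-likelihood fitting of the generalized inverted
# exponential distribution, assuming the common parameterization
#   f(x; a, l) = (a*l/x**2) * exp(-l/x) * (1 - exp(-l/x))**(a - 1),  x > 0.
# The data are synthetic; only the ML estimator of the paper is illustrated.
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, x):
    a, l = params
    if a <= 0 or l <= 0:
        return np.inf
    z = np.exp(-l / x)
    return -np.sum(np.log(a * l) - 2 * np.log(x) - l / x + (a - 1) * np.log1p(-z))

rng = np.random.default_rng(0)
# Inverse-CDF sampling from F(x) = 1 - (1 - exp(-l/x))**a
a_true, l_true = 2.0, 1.5
u = rng.uniform(size=500)
x = -l_true / np.log(1.0 - (1.0 - u) ** (1.0 / a_true))

fit = minimize(neg_log_lik, x0=[1.0, 1.0], args=(x,), method="Nelder-Mead")
print("ML estimates (alpha, lambda):", fit.x)
```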


2021 ◽  
Vol 5 (1) ◽  
pp. 59
Author(s):  
Gaël Kermarrec ◽  
Niklas Schild ◽  
Jan Hartmann

Terrestrial laser scanners (TLS) capture a large number of 3D points rapidly, with high precision and spatial resolution. These scanners are used for applications as diverse as modeling architectural or engineering structures and high-resolution mapping of terrain. The noise of the observations cannot be assumed to be white noise: besides being heteroscedastic, the observations are likely to be correlated due to the high scanning rate. Unfortunately, while the variance can sometimes be modeled from physical or empirical considerations, the correlations are more often neglected. Trustworthy knowledge of them is, however, mandatory to avoid overestimating the precision of the point cloud and, potentially, failing to detect deformation between scans recorded at different epochs using statistical testing strategies. TLS point clouds can be approximated with parametric surfaces, such as planes, using the Gauss–Helmert model, or with the newly introduced T-spline surfaces. In both cases, the goal is to minimize the squared distance between the observations and the approximating surface in order to estimate parameters such as the normal vector or the control points. In this contribution, we show how the residuals of the surface approximation can be used to derive the correlation structure of the observation noise. We estimate the correlation parameters using the Whittle maximum likelihood and use comparable simulations and real data to validate our methodology. Using the least-squares adjustment as a “filter of the geometry” paves the way for the determination of a correlation model for many sensors recording 3D point clouds.
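
The sketch below illustrates the general idea of a Whittle-likelihood fit of a correlation parameter to approximation residuals, using a simple AR(1) noise model and simulated residuals; it is a generic illustration under those assumptions, not the authors' implementation or their correlation model.

```python
# Minimal sketch (not the authors' code) of Whittle maximum-likelihood
# estimation of an AR(1) correlation parameter from surface-fit residuals.
# Residuals here are simulated; in practice they would come from the
# plane / T-spline approximation of the TLS point cloud.
import numpy as np
from scipy.optimize import minimize_scalar

def whittle_neg_log_lik(phi, residuals):
    n = residuals.size
    # Periodogram at positive Fourier frequencies
    freqs = 2 * np.pi * np.arange(1, n // 2) / n
    pgram = np.abs(np.fft.fft(residuals)[1:n // 2]) ** 2 / (2 * np.pi * n)
    # AR(1) spectral density shape; the innovation variance is profiled out
    shape = 1.0 / (1.0 - 2.0 * phi * np.cos(freqs) + phi ** 2)
    sigma2 = np.mean(pgram / shape) * 2 * np.pi
    spec = sigma2 / (2 * np.pi) * shape
    return np.sum(np.log(spec) + pgram / spec)

rng = np.random.default_rng(1)
phi_true, n = 0.6, 4000
e = rng.standard_normal(n)
r = np.empty(n)
r[0] = e[0]
for t in range(1, n):                               # simulate AR(1) residuals
    r[t] = phi_true * r[t - 1] + e[t]

res = minimize_scalar(whittle_neg_log_lik, bounds=(-0.99, 0.99),
                      args=(r,), method="bounded")
print("estimated AR(1) parameter:", round(res.x, 3))
```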


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Camilo Broc ◽  
Therese Truong ◽  
Benoit Liquet

Abstract Background The increasing number of genome-wide association studies (GWAS) has revealed several loci that are associated with multiple distinct phenotypes, suggesting the existence of pleiotropic effects. Highlighting these cross-phenotype genetic associations could help to identify and understand common biological mechanisms underlying some diseases. Common approaches test the association between genetic variants and multiple traits at the SNP level. In this paper, we propose a novel gene- and pathway-level approach for the case where several independent GWAS on independent traits are available. The method is based on a generalization of the sparse group Partial Least Squares (sgPLS) that takes into account groups of variables, and on a Lasso penalization that links all independent data sets. This method, called joint-sgPLS, is able to convincingly detect signal at the variable level and at the group level. Results Our method has the advantage of providing a global, readable model while coping with the architecture of the data. It can outperform traditional methods and provides wider insight by exploiting a priori information. We compared the performance of the proposed method to other benchmark methods on simulated data and gave an example of application to real data with the aim of highlighting common susceptibility variants to breast and thyroid cancers. Conclusion The joint-sgPLS shows interesting properties for detecting signal. As an extension of PLS, the method is suited for data with a large number of variables. The chosen Lasso penalization accommodates the architecture of groups of variables and of observation sets. Furthermore, although the method has been applied to a genetic study, its formulation is suited to any data with a large number of variables and an explicit a priori architecture, in other application fields as well.
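
As a generic illustration of the sparse-group penalty underlying sgPLS-type methods, the sketch below implements the proximal operator that combines variable-level (Lasso) and group-level shrinkage; the group layout and penalty values are invented, and this is not the joint-sgPLS algorithm itself.

```python
# Illustrative sketch of the sparse-group penalty idea behind sgPLS-type
# methods: the proximal operator combines an elementwise (Lasso) shrinkage
# with a groupwise shrinkage. Group boundaries and penalties are made up.
import numpy as np

def prox_sparse_group(beta, groups, lam1, lam2):
    """Proximal operator of lam1*||b||_1 + lam2*sum_g ||b_g||_2 (step size 1)."""
    # Elementwise soft-thresholding (variable-level sparsity)
    b = np.sign(beta) * np.maximum(np.abs(beta) - lam1, 0.0)
    # Groupwise soft-thresholding (group-level sparsity, e.g. genes/pathways)
    out = np.zeros_like(b)
    for idx in groups:
        norm = np.linalg.norm(b[idx])
        if norm > lam2:
            out[idx] = (1.0 - lam2 / norm) * b[idx]
    return out

beta = np.array([0.9, -0.2, 0.05, 1.4, -1.1, 0.01])
groups = [np.arange(0, 3), np.arange(3, 6)]   # e.g. two genes of three SNPs each
print(prox_sparse_group(beta, groups, lam1=0.1, lam2=0.5))
```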


Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. U67-U76 ◽  
Author(s):  
Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because of the Hessian computation, so an efficient approximation is introduced, achieved by computing only a limited number of diagonals in the operators involved. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts compared to a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at approximately two orders of magnitude lower cost, but it is dip limited, though in a controllable way, compared to the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data have highly irregular sampling along the shot coordinate, and they suffer from significant near-surface effects. Approximate regularization/datuming returns common receiver data that are superior in appearance to conventionally datumed data.
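
A bare-bones sketch of weighted, damped least squares of the kind described, solving (AᴴWA + εI)x = AᴴWb for a small random operator, follows; the operator, weights, and damping value are placeholders, not the seismic extrapolation operator or the diagonal Hessian approximation of the paper.

```python
# Generic sketch of weighted, damped least squares as used for
# regularization/datuming: solve (A^H W A + eps*I) x = A^H W b.
# A, W, b and eps are illustrative placeholders.
import numpy as np

def weighted_damped_lstsq(A, b, w, eps):
    """Solve min_x (Ax - b)^H W (Ax - b) + eps * ||x||^2 for diagonal W."""
    W = np.diag(w)
    H = A.conj().T @ W @ A + eps * np.eye(A.shape[1])   # damped Hessian
    rhs = A.conj().T @ W @ b
    return np.linalg.solve(H, rhs)

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 20))        # stand-in for the extrapolation operator
x_true = rng.standard_normal(20)
b = A @ x_true + 0.05 * rng.standard_normal(50)
w = np.ones(50)                          # data weights (e.g. trace-dependent)
print(np.allclose(weighted_damped_lstsq(A, b, w, eps=1e-3), x_true, atol=0.1))
```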


2021 ◽  
Vol 13 (2) ◽  
pp. 1-12
Author(s):  
Sumit Das ◽  
Manas Kumar Sanyal ◽  
Sarbajyoti Mallik

A great deal of fake news circulates through various media and misleads people. This is a serious issue in the current intelligent era, and solutions are needed. This article proposes an approach that analyzes fake and real news, focusing on sentiment, significance, and novelty, a few characteristics of such news. Expressing news reports as numbers and metadata makes it possible to manipulate daily information mathematically and statistically. The objective of this article is to analyze and filter out the fake news that causes trouble. The proposed model is integrated with a web application through which users can obtain real and fake data. The authors use AI (artificial intelligence) algorithms, specifically logistic regression and LSTM (long short-term memory), so that the application performs well. The results of the proposed model are compared with existing models.
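
The following sketch shows only the logistic-regression branch of such a pipeline, using scikit-learn with TF-IDF features; the texts and labels are toy placeholders, and the LSTM model and the sentiment/significance/novelty features of the article are not reproduced.

```python
# Minimal sketch of the logistic-regression part of a fake-news classifier
# using scikit-learn; texts and labels below are toy placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["official report confirms the figures",
         "shocking miracle cure doctors hide from you",
         "city council approves new budget",
         "celebrity secretly controls world markets"]
labels = [0, 1, 0, 1]                     # 0 = real, 1 = fake (toy labels)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["new budget figures released by the council"]))
```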


Geophysics ◽  
2018 ◽  
Vol 83 (6) ◽  
pp. V345-V357 ◽  
Author(s):  
Nasser Kazemi

Given the noise-corrupted seismic recordings, blind deconvolution simultaneously solves for the reflectivity series and the wavelet. Blind deconvolution can be formulated as a fully perturbed linear regression model and solved by the total least-squares (TLS) algorithm. However, this algorithm performs poorly when the data matrix is a structured matrix and ill-conditioned. In blind deconvolution, the data matrix has a Toeplitz structure and is ill-conditioned. Accordingly, we develop a fully automatic single-channel blind-deconvolution algorithm to improve the performance of the TLS method. The proposed algorithm, called Toeplitz-structured sparse TLS, has no assumptions about the phase of the wavelet. However, it assumes that the reflectivity series is sparse. In addition, to reduce the model space and the number of unknowns, the algorithm benefits from the structural constraints on the data matrix. Our algorithm is an alternating minimization method and uses a generalized cross validation function to define the optimum regularization parameter automatically. Because the generalized cross validation function does not require any prior information about the noise level of the data, our approach is suitable for real-world applications. We validate the proposed technique using synthetic examples. In noise-free data, we achieve a near-optimal recovery of the wavelet and the reflectivity series. For noise-corrupted data with a moderate signal-to-noise ratio (S/N), we found that the algorithm successfully accounts for the noise in its model, resulting in a satisfactory performance. However, the results deteriorate as the S/N and the sparsity level of the data are decreased. We also successfully apply the algorithm to real data. The real-data examples come from 2D and 3D data sets of the Teapot Dome seismic survey.
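
To make the Toeplitz structure concrete, the sketch below builds the convolution matrix for a toy wavelet and performs one naive sparsity-promoting (ISTA-style) update of the reflectivity; it is a generic illustration of the structured, sparse setting, not the Toeplitz-structured sparse TLS algorithm of the paper.

```python
# Illustrative sketch of the convolutional (Toeplitz) structure exploited by
# such blind-deconvolution schemes: the trace is modelled as a Toeplitz matrix
# built from the wavelet acting on a sparse reflectivity. Toy data only.
import numpy as np
from scipy.linalg import toeplitz

def conv_matrix(w, n):
    """Toeplitz matrix W with W @ r == np.convolve(w, r) for len(r) == n."""
    col = np.r_[w, np.zeros(n - 1)]
    row = np.r_[w[0], np.zeros(n - 1)]
    return toeplitz(col, row)

rng = np.random.default_rng(3)
n = 200
r_true = np.zeros(n)
r_true[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)
w_true = np.exp(-0.5 * ((np.arange(21) - 10) / 3.0) ** 2)   # toy wavelet
d = np.convolve(w_true, r_true) + 0.01 * rng.standard_normal(n + 20)

W = conv_matrix(w_true, n)
# Sparse reflectivity update: one ISTA (gradient + soft-threshold) step
step = 1.0 / np.linalg.norm(W, 2) ** 2
r = np.zeros(n)
g = r - step * W.T @ (W @ r - d)
r = np.sign(g) * np.maximum(np.abs(g) - 0.02, 0.0)
print("nonzeros retained after one step:", np.count_nonzero(r))
```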


2010 ◽  
Vol 16 (2) ◽  
pp. 177-180 ◽  
Author(s):  
Lilian J. Beijer ◽  
Toni C.M. Rietveld ◽  
Marijn M.A. van Beers ◽  
Robert M.L. Slangen ◽  
Henk van den Heuvel ◽  
...  

Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 4013 ◽  
Author(s):  
Jie Huang ◽  
Tian Zhou ◽  
Weidong Du ◽  
Jiajun Shen ◽  
Wanyuan Zhang

A new fast deconvolved beamforming algorithm is proposed in this paper; it greatly reduces the computational complexity of the original Richardson–Lucy (R–L) deconvolution algorithm by exploiting the convolution theorem and the fast Fourier transform. This makes real-time high-resolution beamforming possible in a multibeam sonar system. The paper applies the new fast deconvolved beamforming algorithm to a high-frequency multibeam sonar system to obtain high bearing resolution and low side lobes. In sounding mode, it suppresses the tunnel effect and makes topographic surveys more accurate. In 2D acoustic image mode, it produces clearer images with more detail and better distinguishes two closely spaced targets. Detailed implementation methods of the fast deconvolved beamforming are given, its computational complexity is analyzed, and its performance is evaluated with simulated and real data.
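
A minimal sketch of Richardson–Lucy deconvolution with FFT-based circular convolution follows, showing how the convolution theorem keeps each iteration cheap; the beam pattern, data, and iteration count are synthetic placeholders rather than the sonar processing chain described in the paper.

```python
# Minimal sketch of Richardson-Lucy deconvolution with FFT-based (circular)
# convolution; the beam pattern and data are synthetic placeholders.
import numpy as np

def fft_conv(a, b):
    """Circular convolution via the convolution theorem."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def richardson_lucy(y, psf, n_iter=50, eps=1e-12):
    x = np.full_like(y, y.mean())
    psf_mirror = np.roll(psf[::-1], 1)               # adjoint of circular conv
    for _ in range(n_iter):
        ratio = y / (fft_conv(x, psf) + eps)
        x *= fft_conv(ratio, psf_mirror)
    return x

theta = np.linspace(-np.pi, np.pi, 512, endpoint=False)
beam = np.sinc(4 * theta) ** 2
beam = np.fft.ifftshift(beam / beam.sum())           # centre the PSF at index 0
truth = np.zeros_like(theta)
truth[200], truth[230] = 1.0, 0.7                    # two closely spaced targets
y = fft_conv(truth, beam) + 1e-4                     # blurred beam power output
est = richardson_lucy(y, beam)
print("largest peaks near indices:", np.argsort(est)[-2:])
```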


T-Comm ◽  
2020 ◽  
Vol 14 (12) ◽  
pp. 18-25
Author(s):  
Alina A. Sherstneva

The article considers a least squares approach to problems in queuing systems theory, showing how the behavior of an infocommunication system can be predicted and how an optimal model of its functioning can be chosen. Statistical data were formed from monitoring-system metrics. The article proposes forecasting data trends and estimating the parameters of random processes over time. To obtain results that are as close as possible to the real operating values of infocommunication systems, polynomial and sine models are considered. Regression analysis is used to determine the model parameter values from a set of observational data. In the theoretical part, linear and nonlinear least squares methods are applied, including the fitting of a circle. The experimental task is to estimate the parameters of the sine and polynomial models and the center of the circle. The experimental analysis was performed in the mathematical modeling program Matlab. A uniformly distributed random sequence and a normally distributed random sequence are generated, and experimental data sequences for the polynomial and sine models are computed. The correspondence of each model to the generated data is shown graphically. The estimated parameters are summarized in tables, the polynomial order is estimated, and the estimated dispersion curve and calculated variance values of the polynomial model are presented. Data-trend forecasting is performed on the measurement data, and the estimated values are extremely close to the real data; the results are shown in graphs. Finally, an approximate circle model of the measurement data is presented graphically: starting from a center estimated as the arithmetic mean of the measurements, a few iterations yield a new circle center, and quite close values for the center and radius of the circle are obtained.
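
For illustration, the sketch below reproduces the two kinds of fit discussed, a polynomial trend fit and an algebraic (Kåsa-style) circle fit, as linear least-squares problems in Python/NumPy rather than Matlab; all simulated data and parameter values are invented for the example.

```python
# Hedged sketch of two least-squares fits of the kind discussed: a polynomial
# trend fit with forecasting, and a circle fit that is linear in its
# algebraic parameters. All simulated values are illustrative.
import numpy as np

rng = np.random.default_rng(4)

# Polynomial model: fit a trend to noisy metric data and forecast forward
t = np.arange(50, dtype=float)
metric = 0.02 * t ** 2 - 0.3 * t + 5 + rng.normal(0, 0.5, t.size)
coeffs = np.polyfit(t, metric, deg=2)                # linear LS in the coefficients
forecast = np.polyval(coeffs, np.arange(50, 60))

# Circle model: x^2 + y^2 + D*x + E*y + F = 0 is linear in (D, E, F)
phi = rng.uniform(0, 2 * np.pi, 100)
x = 3 + 2 * np.cos(phi) + rng.normal(0, 0.05, 100)
y = -1 + 2 * np.sin(phi) + rng.normal(0, 0.05, 100)
A = np.column_stack([x, y, np.ones_like(x)])
D, E, F = np.linalg.lstsq(A, -(x ** 2 + y ** 2), rcond=None)[0]
center = (-D / 2, -E / 2)
radius = np.sqrt(D ** 2 / 4 + E ** 2 / 4 - F)
print("forecast head:", np.round(forecast[:3], 2))
print("circle centre:", np.round(center, 3), "radius:", round(radius, 3))
```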


2019 ◽  
Vol 1 ◽  
pp. 1-2 ◽  
Author(s):  
Mao Li ◽  
Ryo Inoue

Abstract. A table cartogram, a visualization of table-form data, is a rectangle-shaped table in which each cell is transformed so that its area expresses the magnitude of a positive weight while the adjacency relationships of the cells in the original table are maintained. Winter (2011) applies the area cartogram generation method of Gastner and Newman (2004) to their generation, and Evans et al. (2018) propose a new geometric procedure. The rows and columns of a table cartogram should be easily recognizable to readers; however, no methods have focused on enhancing this readability. This study defines table cartogram generation as an optimization problem and attempts to minimize vertical and horizontal deformation. Since the original tables are composed of regular quadrangles, this study uses quadrangles to express cells in a table cartogram and fixes the outer border to retain the shape of a standard table.

This study proposes a two-step approach for table cartogram generation with cells that begin as squares and with fixed outer table borders. The first step adjusts only the vertical and horizontal borders of cells to express the weights to the greatest possible degree. All cells maintain their rectangular shape after this step, although the limited degree of freedom of this operation results in low data representation accuracy. The second step adapts the cells of the low-accuracy table cartogram to fit area to weight accurately by relaxing the constraints on the directions of cell borders. This study utilizes an area cartogram generation method proposed by Inoue and Shimizu (2006), which defines area cartogram generation as an optimization problem. The formulation, with vertex coordinates as parameters, consists of an objective function that minimizes the difference between the given data and the size of each cell, and a regularization term that controls changes in bearing angles. It is formulated as a non-linear least squares problem and solved through iterated linear least squares by linearizing the problem at the vertex coordinates and updating the estimated coordinates until the value of the objective function becomes small enough.
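
The toy sketch below conveys the iterated linear least squares idea on a 2×2 table in the unit square where only the interior cross vertex may move: cell areas from the shoelace formula are matched to target weights by Gauss–Newton with a numerical Jacobian. It is an illustration under these simplifying assumptions, not the authors' formulation, which also includes a regularization term on bearing angles.

```python
# Toy illustration (not the authors' implementation) of iterated linear least
# squares for area fitting: a 2x2 table cartogram in the unit square where only
# the interior cross vertex (px, py) may move and the outer border is fixed.
import numpy as np

def shoelace(pts):
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def cell_areas(p):
    px, py = p
    # Four quadrilateral cells sharing the movable interior vertex (px, py);
    # the outer border and the midpoints of the outer edges stay fixed.
    cells = [
        [(0, 0), (0.5, 0), (px, py), (0, 0.5)],      # bottom-left
        [(0.5, 0), (1, 0), (1, 0.5), (px, py)],      # bottom-right
        [(px, py), (1, 0.5), (1, 1), (0.5, 1)],      # top-right
        [(0, 0.5), (px, py), (0.5, 1), (0, 1)],      # top-left
    ]
    return np.array([shoelace(np.array(c)) for c in cells])

weights = np.array([0.35, 0.25, 0.25, 0.15])         # target cell areas (sum = 1)
p = np.array([0.5, 0.5])                             # start from the regular table
for _ in range(20):                                  # Gauss-Newton iterations
    r = cell_areas(p) - weights                      # residuals: area - weight
    J = np.empty((4, 2))
    for j in range(2):                               # numerical Jacobian
        dp = np.zeros(2)
        dp[j] = 1e-6
        J[:, j] = (cell_areas(p + dp) - cell_areas(p - dp)) / 2e-6
    p -= np.linalg.lstsq(J, r, rcond=None)[0]        # linearized LS update
# This toy geometry is actually linear in (px, py), so the iteration converges
# immediately; general cell geometries make the problem genuinely non-linear.
print("interior vertex:", np.round(p, 3), "areas:", np.round(cell_areas(p), 3))
```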

