Data-Driven Computation of Contact Dynamics During Two-Point Manipulation of Deformable Objects

Author(s):  
Wael Abdelrahman ◽  
Saeid Nahavandi ◽  
Douglas Creighton ◽  
Matthias Harders

This study represents a preliminary step towards data-driven computation of contact dynamics during manipulation of deformable objects at two points of contact. A modeling approach is proposed that characterizes the individual interactions at the two points, as well as the mutual effects of the two interactions on each other, via a set of parameters. Both global and local coordinate systems are tested for encoding the contact mechanics. Artificial neural networks are trained on simulated data to capture the object behavior. A comparison of test data with the output of the trained system reveals a mean squared error percentage between 1% and 3% for simple interactions.
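
The abstract does not give implementation details, so the following is a minimal, hypothetical sketch of the general idea in Python: a small neural network trained on simulated data to map displacements at two contact points to reaction forces, with a coupling term standing in for the mutual effect of the two interactions. All dimensions, names, and the synthetic "simulator" are assumptions, not the authors' setup.

```python
# Hypothetical sketch: learn two-point contact responses from simulated data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Assumed simulated data: 3-D displacement of both contact points (6 inputs);
# targets are the two scalar reaction forces, with an assumed mutual coupling.
X = rng.uniform(-1.0, 1.0, size=(5000, 6))
coupling = 0.3                       # assumed mutual-effect strength
Y = np.column_stack([
    X[:, :3].sum(axis=1) + coupling * X[:, 3:].sum(axis=1),
    X[:, 3:].sum(axis=1) + coupling * X[:, :3].sum(axis=1),
])

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(X[:4000], Y[:4000])

mse = mean_squared_error(Y[4000:], net.predict(X[4000:]))
print(f"test MSE: {mse:.4f}")
```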

Sensors ◽  
2019 ◽  
Vol 19 (23) ◽  
pp. 5317 ◽  
Author(s):  
Moonyoung Kwon ◽  
Sangjun Han ◽  
Kiwoong Kim ◽  
Sung Chan Jun

Electroencephalography (EEG) has relatively poor spatial resolution, which may yield incorrect brain dynamics and distorted topography; thus, high-density EEG systems are necessary for better analysis. Conventional methods have been proposed to solve these problems; however, they depend on parameters or brain models that are not simple to address. Therefore, new approaches are necessary to enhance EEG spatial resolution while maintaining its data properties. In this work, we investigated a super-resolution (SR) technique using deep convolutional neural networks (CNNs) on simulated EEG data with white Gaussian and real brain noise, and on experimental EEG data obtained during an auditory evoked potential task. SR EEG data simulated with white Gaussian or brain noise demonstrated a lower mean squared error and higher correlations with sensor information, and detected sources even more clearly, than low-resolution (LR) EEG. In addition, experimental SR data demonstrated far smaller errors for the N1 and P2 components and yielded reasonably localized sources, while LR data did not. We verified the feasibility and efficacy of our proposed approach, and conclude that it may be possible to explore various brain dynamics even with a small number of sensors.
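
A minimal sketch of channel-wise EEG super-resolution with a CNN, assuming PyTorch; the channel counts, layer sizes, and kernel widths below are illustrative, not the architecture used in the paper.

```python
# Hypothetical sketch: map low-density EEG epochs to high-density epochs.
import torch
import torch.nn as nn

class EEGSuperResolution(nn.Module):
    """Map low-density EEG (e.g. 16 channels) to high-density (e.g. 64)."""
    def __init__(self, lr_channels=16, hr_channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(lr_channels, 128, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(128, hr_channels, kernel_size=7, padding=3),
        )

    def forward(self, x):            # x: (batch, lr_channels, time)
        return self.net(x)

model = EEGSuperResolution()
lr_eeg = torch.randn(8, 16, 512)     # hypothetical batch of LR epochs
sr_eeg = model(lr_eeg)               # (8, 64, 512) super-resolved output
# Training would minimize MSE against true high-density recordings:
loss = nn.functional.mse_loss(sr_eeg, torch.randn_like(sr_eeg))
```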


2019 ◽  
Author(s):  
Mohammadreza Bahadorian ◽  
Christoph Zechner ◽  
Carl Modes

Many systems in biology and beyond employ collaborative, collective communication strategies for improved efficiency and adaptive benefit. One such paradigm of particular interest is the community estimation of a dynamic signal, when, for example, an epithelial tissue of cells must decide whether to react to a given dynamic external concentration of stress-signaling molecules. At the level of dynamic cellular communication, however, it remains unknown what effect, if any, arises from communication beyond the mean-field level. What are the limits and benefits of communication across a network of neighbor interactions? What is the role of Poissonian vs. super-Poissonian dynamics in such a setting? How does the particular topology of connections impact the collective estimation and that of the individual participating cells? In this letter we construct a robust and general framework of signal estimation over continuous-time Markov chains in order to address and answer these questions. Our results show that in the case of Poissonian estimators, communication solely enhances the convergence speed of the mean squared error (MSE) of the estimators to their steady-state values while leaving those values unchanged. However, in the super-Poissonian regime, the MSE of the estimators decreases significantly as the number of neighbors increases. Surprisingly, in this case, the clustering coefficient of an estimator does not improve its own MSE, while it does reduce the total MSE of the population.
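
As a loose, toy illustration of the setting (not the authors' framework), the sketch below simulates a two-state telegraph signal driving Poisson observations, with each "cell" smoothing its own counts and then averaging with its ring neighbors; all rates, the gain, and the update rule are assumptions.

```python
# Toy Monte Carlo: collective estimation of a telegraph signal from Poisson counts.
import numpy as np

rng = np.random.default_rng(1)
n_cells, n_steps, dt = 50, 2000, 0.01
switch_rate, rate_lo, rate_hi = 0.5, 5.0, 20.0

signal = rate_lo
estimates = np.full(n_cells, (rate_lo + rate_hi) / 2)
sq_err = np.zeros(n_steps)

for t in range(n_steps):
    # Telegraph signal: switch between low and high emission rates.
    if rng.random() < switch_rate * dt:
        signal = rate_hi if signal == rate_lo else rate_lo
    counts = rng.poisson(signal * dt, size=n_cells)
    # Each cell updates a running-rate estimate from its own counts ...
    estimates += 0.05 * (counts / dt - estimates)
    # ... then averages with its two ring neighbors (the "communication").
    estimates = (estimates + np.roll(estimates, 1) + np.roll(estimates, -1)) / 3
    sq_err[t] = np.mean((estimates - signal) ** 2)

print(f"late-time MSE: {sq_err[-200:].mean():.2f}")
```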


2018 ◽  
Vol 48 (3) ◽  
pp. 1137-1156 ◽  
Author(s):  
Shengwang Meng ◽  
Guangyuan Gao

We consider compound Poisson claims reserving models applied to the paid claims and to the number of payments run-off triangles. We extend the standard Poisson-gamma assumption to account for over-dispersion in the payment counts and to account for various mean and variance structures in the individual payments. Two generalized linear models are applied consecutively to predict the unpaid claims. A bootstrap is used to estimate the mean squared error of prediction and to simulate the predictive distribution of the unpaid claims. We show that the extended compound Poisson models make reasonable predictions of the unpaid claims.
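
A minimal sketch of the first-stage model only, assuming statsmodels: an over-dispersed (quasi-)Poisson GLM for payment counts on a run-off triangle, with accident year and development year as factors. The triangle values are invented, and the second (individual-payments) stage and the bootstrap are omitted for brevity.

```python
# Hypothetical sketch: quasi-Poisson GLM on an incremental counts triangle.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Invented 4x4 incremental triangle of payment counts (NaN = future cells).
tri = pd.DataFrame(
    [[120, 60, 25, 10],
     [130, 70, 30, np.nan],
     [125, 65, np.nan, np.nan],
     [140, np.nan, np.nan, np.nan]],
)
long = tri.stack().rename("n_pay").reset_index()      # drops the NaN cells
long.columns = ["acc_year", "dev_year", "n_pay"]

# Over-dispersion handled via a Pearson chi-square scale (quasi-Poisson).
fit = smf.glm("n_pay ~ C(acc_year) + C(dev_year)", data=long,
              family=sm.families.Poisson()).fit(scale="X2")
print("estimated dispersion:", fit.scale)             # > 1 => over-dispersed

# Predict the lower (future) triangle from the fitted factors.
grid = pd.MultiIndex.from_product([range(4), range(4)],
                                  names=["acc_year", "dev_year"]).to_frame(index=False)
grid["pred"] = fit.predict(grid)
lower = grid[grid.acc_year + grid.dev_year > 3]
print("predicted future payment count:", lower["pred"].sum())
```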


2020 ◽  
Author(s):  
Jon Saenz ◽  
Sheila Carreno-Madinabeitia ◽  
Ganix Esnaola ◽  
Santos J. González-Rojí ◽  
Gabriel Ibarra-Berastegi ◽  
...  

A new diagram is proposed for the verification of vector quantities generated by individual or multiple models against a set of observations. It has been designed with the idea of extending the Taylor diagram to two-dimensional vectors such as currents, wind velocity, or horizontal fluxes of water vapour, salinity, energy, and other geophysical variables. The diagram is based on a principal component analysis of the two-dimensional structure of the mean squared error matrix between model and observations. This matrix is separated into two parts, corresponding to the bias and to the relative rotation of the empirical orthogonal functions of the data. We test the performance of this new diagram in identifying the differences between a reference dataset and different model outputs, using wind velocity, current, vertically integrated moisture transport, and wave energy flux time series as examples. An alternative setup is also proposed, with an application to the time-averaged spatial field of surface wind velocity in the Northern and Southern Hemispheres according to different reanalyses and realizations of an ensemble of CMIP5 models. These examples show that the Sailor diagram is a tool that helps to identify errors due to the bias or the orientation of the simulated vector time series or fields. An implementation of the algorithm in the form of an R package (sailoR) is publicly available from the CRAN repository; besides plotting the individual components of the error matrix, the functions in the package also allow the individual components of the mean squared error to be retrieved easily.
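
A minimal sketch of the quantities behind the diagram, under the stated decomposition: the 2x2 mean squared error matrix between an observed and a modelled two-dimensional vector series, split into a bias part and an anomaly part, with the EOFs (eigenvectors) carrying the relative rotation. The data and the 15-degree rotation are synthetic; this is not the sailoR package itself.

```python
# Hypothetical sketch: bias/rotation split of the 2x2 MSE matrix.
import numpy as np

rng = np.random.default_rng(2)
obs = rng.normal(size=(1000, 2)) @ np.array([[2.0, 0.5], [0.0, 1.0]])
theta = np.deg2rad(15)                          # assumed model rotation error
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
mod = obs @ rot.T + np.array([0.5, -0.2])       # rotated + biased "model"

diff = mod - obs
mse_matrix = diff.T @ diff / len(diff)          # full 2x2 MSE matrix
bias = diff.mean(axis=0)
bias_part = np.outer(bias, bias)                # contribution of the bias
anom_part = mse_matrix - bias_part              # centred (anomaly) part

# EOFs of observations vs. model reveal the relative rotation.
_, eof_obs = np.linalg.eigh(np.cov(obs.T))
_, eof_mod = np.linalg.eigh(np.cov(mod.T))
cosang = abs(eof_obs[:, -1] @ eof_mod[:, -1])   # angle between leading EOFs
print("relative EOF rotation (deg):",
      round(np.degrees(np.arccos(min(cosang, 1.0))), 2))
```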


Author(s):  
Hasnaa Khalifi ◽  
Marc Compere ◽  
Patrick Currier

Battery models can be developed from first principles or from empirical methods. The Simulink Parameter Estimation toolbox was used to identify the battery parameters and validate the battery model against test data. Experimental data were obtained by discharging the battery of a modified 2013 Chevrolet Malibu hybrid electric vehicle. The resulting battery model provided accurate simulation results over the validation data. For the constant-current discharge, the mean squared error between measured and simulated data was 0.26 volts for the 298 V terminal voltage, and 6.07E−4 (%) for state of charge. For the extended variable-current discharge, the mean squared error between measured and simulated data was 0.21 volts for terminal voltage and 9.25E−4 (%) for state of charge.
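
The paper uses Simulink's Parameter Estimation toolbox; as an analogous sketch only, the Python snippet below fits a simple equivalent-circuit model (linear open-circuit voltage minus an IR drop) to synthetic constant-current discharge data by least squares. The model structure, parameter values, and "measurements" are all assumptions.

```python
# Hypothetical sketch: least-squares battery parameter identification.
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0, 3600, 500)               # one-hour constant-current test
current = np.full_like(t, 30.0)             # 30 A discharge (assumed)
capacity_true, r_true, v0_true, k_true = 40.0 * 3600, 0.05, 340.0, 60.0

def simulate(params, t, current):
    capacity, r, v0, k = params
    # Coulomb counting for state of charge, then linear OCV minus IR drop.
    soc = 1.0 - np.cumsum(current) * (t[1] - t[0]) / capacity
    return v0 + k * (soc - 1.0) - current * r

v_meas = simulate([capacity_true, r_true, v0_true, k_true], t, current)
v_meas = v_meas + np.random.default_rng(3).normal(0, 0.2, t.size)  # noise

res = least_squares(lambda p: simulate(p, t, current) - v_meas,
                    x0=[35.0 * 3600, 0.1, 330.0, 50.0])
mse = np.mean(res.fun ** 2)
print("fitted params:", res.x, "  voltage MSE:", round(mse, 4))
```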


2017 ◽  
Vol 9 (1) ◽  
pp. 67-78
Author(s):  
M. R. Hasan ◽  
A. R. Baizid

The Bayesian estimation approach is a non-classical estimation technique in statistical inference and is very useful in real-world situations. The aim of this paper is to study the Bayes estimators of the parameter of the exponential distribution under different loss functions, and to compare them with one another and with the classical maximum likelihood estimator (MLE). Since the exponential distribution is a lifetime distribution, we study it with a gamma prior, which serves as the prior distribution for finding the Bayes estimator. We consider several symmetric and asymmetric loss functions, namely the squared error loss function, the quadratic loss function, the modified linear exponential (MLINEX) loss function, and the non-linear exponential (NLINEX) loss function. Using data simulated in R, we compute the mean squared error (MSE) under the different loss functions and find that the non-classical estimators outperform the classical estimator. Finally, the MSEs of the estimators under the different loss functions are presented graphically.
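
A minimal sketch of the comparison for two of the losses: with a Gamma(a, b) prior on the exponential rate, the posterior is Gamma(a + n, b + Σx), the Bayes estimator under squared error loss is the posterior mean, and under a LINEX-type loss it is (α/c)·ln(1 + c/β). The hyper-parameters, sample size, and LINEX shape c below are illustrative, and the MLINEX/NLINEX variants are omitted.

```python
# Hypothetical sketch: MSE comparison of Bayes estimators vs. the MLE.
import numpy as np

rng = np.random.default_rng(4)
lam_true, n, a, b, c = 2.0, 20, 2.0, 1.0, 1.0   # c is the LINEX shape
reps = 20000

x = rng.exponential(1 / lam_true, size=(reps, n))
s = x.sum(axis=1)

mle = n / s                                      # classical estimator
alpha, beta = a + n, b + s                       # Gamma posterior parameters
bayes_sel = alpha / beta                         # posterior mean (SEL loss)
bayes_linex = (alpha / c) * np.log1p(c / beta)   # LINEX Bayes estimator

for name, est in [("MLE", mle), ("Bayes SEL", bayes_sel),
                  ("Bayes LINEX", bayes_linex)]:
    print(f"{name:12s} MSE = {np.mean((est - lam_true) ** 2):.5f}")
```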


2012 ◽  
Vol 61 (2) ◽  
pp. 277-290 ◽  
Author(s):  
Ádám Csorba ◽  
Vince Láng ◽  
László Fenyvesi ◽  
Erika Michéli

There is a growing demand today for the development and application of technologies and methods that enable fast, cost-effective, and environmentally friendly soil data collection and evaluation. Reflectance spectroscopy meets these demands: it is based on reflectance measurements in the visible (VIS) and near-infrared (NIR) range of the electromagnetic spectrum (350–2500 nm). Considering that the reflectance spectrum recorded from soils is very rich in information, and that numerous soil constituents have characteristic spectral "fingerprints" in the examined range, a single curve makes it possible to determine a large number of key soil parameters simultaneously. In this paper we present the first steps of a methodological development, built on reflectance spectroscopy, aimed at determining the composition of soils. We created and tested predictive models based on multivariate statistical methods (partial least squares regression, PLSR) for estimating the organic carbon and CaCO3 content of soils. Testing the models showed that the procedure yielded high R2 values for both soil parameters [R2(organic carbon) = 0.815; R2(CaCO3) = 0.907]. The root mean squared error (RMSE) values indicating the accuracy of the estimation were moderate for both parameters [RMSE(organic carbon) = 0.467; RMSE(CaCO3) = 3.508], and can be improved considerably by standardizing the reflectance measurement protocols. Based on our investigations, we conclude that the combined application of reflectance spectroscopy and multivariate chemometric methods provides a fast and cost-effective method for data collection and evaluation.
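
A minimal sketch of the modelling step, assuming scikit-learn: PLSR mapping VIS-NIR reflectance spectra (350-2500 nm) to organic carbon and CaCO3 content. The spectra and targets below are synthetic placeholders, and the number of latent components is an arbitrary choice.

```python
# Hypothetical sketch: PLSR from reflectance spectra to two soil parameters.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(5)
n_samples, n_bands = 200, 2151                 # one band per nm, 350-2500 nm
spectra = rng.normal(size=(n_samples, n_bands)).cumsum(axis=1)  # smooth-ish
targets = np.column_stack([
    spectra[:, 500] * 0.01 + rng.normal(0, 0.3, n_samples),     # "organic C"
    spectra[:, 1500] * 0.05 + rng.normal(0, 2.0, n_samples),    # "CaCO3"
])

X_tr, X_te, y_tr, y_te = train_test_split(spectra, targets, random_state=0)
pls = PLSRegression(n_components=10).fit(X_tr, y_tr)
pred = pls.predict(X_te)

for i, name in enumerate(["organic C", "CaCO3"]):
    rmse = mean_squared_error(y_te[:, i], pred[:, i]) ** 0.5
    print(f"{name}: R2={r2_score(y_te[:, i], pred[:, i]):.3f}  RMSE={rmse:.3f}")
```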


Author(s):  
Nadia Hashim Al-Noor ◽  
Shurooq A.K. Al-Sultany

In real situations, observations and measurements are often not exact numbers but are more or less imprecise, i.e., fuzzy. In this paper, we therefore use approximate non-Bayesian computational methods to estimate the inverse Weibull parameters and reliability function from fuzzy data. The maximum likelihood and moment estimates are obtained as non-Bayesian estimates. The maximum likelihood estimators are derived numerically using two iterative techniques, namely the Newton-Raphson and Expectation-Maximization algorithms. In addition, the estimates of the parameters and of the reliability function are compared numerically through a Monte Carlo simulation study in terms of their mean squared error and integrated mean squared error values, respectively.
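
A minimal sketch of the Newton-Raphson route for the inverse Weibull MLE with crisp (non-fuzzy) data, using the parametrization F(x) = exp(−α x^(−β)); profiling out α leaves a one-dimensional score equation in β. The fuzzy-data likelihood and the EM variant from the paper are omitted, and the sample is simulated.

```python
# Hypothetical sketch: Newton-Raphson on the profile score for beta.
import numpy as np

rng = np.random.default_rng(6)
alpha_true, beta_true, n = 1.5, 2.0, 500
# Inverse-CDF sampling: F(x) = exp(-alpha * x**(-beta)).
x = (alpha_true / -np.log(rng.uniform(size=n))) ** (1 / beta_true)

lx = np.log(x)
beta = 1.0                                   # starting value
for _ in range(50):
    w = x ** (-beta)
    s0, s1, s2 = w.sum(), (w * lx).sum(), (w * lx**2).sum()
    g = n / beta - lx.sum() + n * s1 / s0            # profile score
    dg = -n / beta**2 + n * (s1**2 - s0 * s2) / s0**2  # its derivative
    step = g / dg
    beta -= step                             # Newton-Raphson update
    if abs(step) < 1e-10:
        break

alpha = n / (x ** (-beta)).sum()             # alpha MLE given beta
print(f"alpha_hat={alpha:.3f}  beta_hat={beta:.3f}")
```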


2014 ◽  
Vol 2 (2) ◽  
pp. 47-58
Author(s):  
Ismail Sh. Baqer

A two-level image quality enhancement method is proposed in this paper. In the first level, the Dualistic Sub-Image Histogram Equalization (DSIHE) method decomposes the original image into two sub-images based on the median of the original image. The second level deals with spike-shaped noise that may appear in the image after processing. We present three methods of image enhancement, GHE, LHE, and the proposed DSIHE, which improve the visual quality of images. Comparative calculations are carried out on the above-mentioned techniques to examine objective and subjective image quality parameters, e.g., Peak Signal-to-Noise Ratio (PSNR), entropy (H), and mean squared error (MSE), to measure the quality of grayscale enhanced images. For handling gray-level images, conventional histogram equalization methods such as GHE and LHE tend to shift the mean brightness of an image to the middle of the gray-level range, limiting their appropriateness for contrast enhancement in consumer electronics such as TV monitors. The DSIHE method overcomes this disadvantage, as it tends to preserve brightness while still enhancing contrast. Experimental results show that the proposed technique gives better results in terms of discrete entropy, signal-to-noise ratio, and mean squared error values than the global and local histogram-based equalization methods.
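
A minimal sketch of the DSIHE idea in Python: split the image at its median grey level, histogram-equalize each sub-image within its own grey range, recombine, and report MSE and PSNR against the input. The input image is random noise for illustration, the sketch assumes both sub-images are non-empty, and the second-level spike-noise handling is omitted.

```python
# Hypothetical sketch: dualistic sub-image histogram equalization (DSIHE).
import numpy as np

def equalize_range(values, lo, hi):
    """Histogram-equalize `values` onto the grey range [lo, hi]."""
    hist, _ = np.histogram(values, bins=hi - lo + 1, range=(lo, hi + 1))
    cdf = hist.cumsum() / hist.sum()
    mapped = np.round(cdf[values - lo] * (hi - lo))
    return (lo + mapped).astype(np.uint8)

def dsihe(img):
    m = int(np.median(img))                  # split point: the median
    low, high = img <= m, img > m
    out = np.empty_like(img)
    out[low] = equalize_range(img[low], 0, m)        # dark sub-image
    out[high] = equalize_range(img[high], m + 1, 255)  # bright sub-image
    return out

rng = np.random.default_rng(7)
img = rng.integers(40, 200, size=(256, 256), dtype=np.uint8)  # dull image
enh = dsihe(img)

mse = np.mean((img.astype(float) - enh.astype(float)) ** 2)
psnr = 10 * np.log10(255**2 / mse)
print(f"MSE={mse:.1f}  PSNR={psnr:.2f} dB")
```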

