Gradient Profile Estimation Using Exponential Cubic Spline Smoothing in a Bayesian Framework

Entropy
2021
Vol 23 (6)
pp. 674
Author(s):
Kushani De Silva
Carlo Cafaro
Adom Giffin

Attaining reliable gradient profiles is of utmost relevance for many physical systems, yet in many situations the gradient estimate is inaccurate due to noise. It is common practice to first estimate the underlying system, by fitting or smoothing the data, and then compute the gradient profile as the analytic derivative of the estimated system. Differentiating an estimated function in this way can be ill-posed, and the problem worsens as the noise in the system increases, inflating the uncertainty of the gradient estimate. In this paper, a theoretical framework for a method to estimate the gradient profile of discrete noisy data is presented. The method was developed within a Bayesian framework. Comprehensive numerical experiments were conducted on synthetic data at different levels of noise, and the accuracy of the proposed method was quantified. Our findings suggest that the proposed gradient profile estimation method outperforms the state-of-the-art methods.
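
For context, a minimal sketch of the baseline "fit, then differentiate" practice described above, using a scipy smoothing spline; this is the conventional approach the paper improves upon, not the authors' Bayesian exponential-spline method, and the noise level and smoothing factor are illustrative assumptions:

import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.sin(x) + rng.normal(scale=0.1, size=x.size)   # noisy observations

# Smooth first, then take the analytic derivative of the fit.
spl = UnivariateSpline(x, y, s=x.size * 0.1 ** 2)    # smoothing ~ total noise variance
grad = spl.derivative()(x)

rmse = np.sqrt(np.mean((grad - np.cos(x)) ** 2))     # error vs. true gradient cos(x)
print(f"gradient RMSE: {rmse:.3f}")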

Entropy
2019
Vol 22 (1)
pp. 46
Author(s):
Maximilian Kurthen
Torsten Enßlin

We address the problem of two-variable causal inference without intervention. The task is to infer an existing causal relation between two random variables, i.e., X → Y or Y → X, from purely observational data. As the option to modify a potential cause is not given in many situations, only structural properties of the data can be used to solve this ill-posed problem. We briefly review a number of state-of-the-art methods for this, including very recent ones. A novel inference method is introduced, Bayesian Causal Inference (BCI), which assumes a generative Bayesian hierarchical model and pursues the strategy of Bayesian model selection. In the adopted model, the distribution of the cause variable is given by a Poisson lognormal distribution, which makes it possible to explicitly account for the discrete nature of datasets, correlations in the parameter spaces, and the variance of probability densities on logarithmic scales. We assume field covariance operators that are diagonal in Fourier space. The model itself is restricted to use cases where a direct causal relation X → Y has to be decided against a relation Y → X; therefore, we compare it to other methods for this exact problem setting. The assumed generative model provides synthetic causal data for benchmarking our model against existing state-of-the-art models, namely LiNGAM, ANM-HSIC, ANM-MML, IGCI, and CGNN. We explore how well these methods perform in the case of high noise, strongly discretized data, and very sparse data. BCI performs reliably on synthetic data as well as on the real-world TCEP benchmark set, with an accuracy comparable to state-of-the-art algorithms. We discuss directions for the future development of BCI.
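
As a didactic aside, Bayesian model selection for causal direction can be sketched by scoring each direction with the fitted log-likelihood of a "cause marginal plus additive-noise conditional", minus a BIC-style penalty; this toy is far simpler than the BCI model above (Poisson lognormal cause distribution, Fourier-diagonal covariances), and the polynomial degree and data are illustrative assumptions:

import numpy as np

def direction_score(cause, effect, deg=3):
    # log p(cause) + log p(effect | cause) under Gaussian MLE fits,
    # minus a BIC-style complexity penalty.
    n = cause.size
    coef = np.polyfit(cause, effect, deg)            # effect = f(cause) + noise
    resid = effect - np.polyval(coef, cause)
    ll_cond = -0.5 * n * (np.log(2 * np.pi * resid.var()) + 1)
    ll_marg = -0.5 * n * (np.log(2 * np.pi * cause.var()) + 1)
    return ll_marg + ll_cond - 0.5 * (deg + 2) * np.log(n)

rng = np.random.default_rng(1)
x = rng.normal(size=2000)
y = np.tanh(2 * x) + 0.1 * rng.normal(size=x.size)   # ground truth: X -> Y

print("X -> Y" if direction_score(x, y) > direction_score(y, x) else "Y -> X")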


2018
Vol 5 (2)
pp. 207-211
Author(s):
Nazila Zarghi
Soheil Dastmalchian Khorasani

Abstract Evidence-based social sciences constitute one of the state-of-the-art areas in this field. The approach means making decisions on the basis of the conscientious, explicit, and judicious use of the best available evidence from multiple sources. It can also be conducive to evidence-based social work, i.e., a kind of evidence-based practice, to some extent. In this newly emerging field, research findings help social workers at different levels of the social sciences, such as policy making, management, academia, education, and social settings. When using research in a real setting, it is necessary to perform a critical appraisal, not only to trust the internal validity or methodological rigor of the paper, but also to know to what extent the research findings can be applied in that setting. Undoubtedly, the latter is a kind of subjective judgment. As social science findings are highly context-bound, it is necessary to pay more attention to this area. The present paper first introduces evidence-based social sciences and their importance, and then proposes criteria for the critical appraisal of research findings for application in society.


2021
Vol 22 (1)
Author(s):
João Lobo
Rui Henriques
Sara C. Madeira

Abstract Background Three-way data have gained popularity due to their increasing capacity to describe inherently multivariate and temporal events, such as biological responses, social interactions along time, urban dynamics, or complex geophysical phenomena. Triclustering, the subspace clustering of three-way data, enables the discovery of patterns corresponding to data subspaces (triclusters) with values correlated across the three dimensions (observations × features × contexts). With an increasing number of algorithms being proposed, effectively comparing them with state-of-the-art alternatives is paramount. These comparisons are usually performed on real data without a known ground truth, thus limiting the assessments. In this context, we propose a synthetic data generator, G-Tric, allowing the creation of synthetic datasets with configurable properties and the possibility to plant triclusters. The generator is prepared to create datasets resembling real three-way data from biomedical and social data domains, with the additional advantage of providing the ground truth (the triclustering solution) as output. Results G-Tric can replicate real-world datasets and create new ones that match researchers' needs across several properties, including data type (numeric or symbolic), dimensions, and background distribution. Users can tune the patterns and structure that characterize the planted triclusters (subspaces) and how they interact (overlapping). Data quality can also be controlled by defining the amount of missing values, noise, or errors. Furthermore, a benchmark of datasets resembling real data is made available, together with the corresponding triclustering solutions (planted triclusters) and generating parameters. Conclusions Triclustering evaluation using G-Tric makes it possible to combine intrinsic and extrinsic metrics when comparing solutions, producing more reliable analyses. A set of predefined datasets, mimicking widely used three-way data and exploring crucial properties, was generated and made available, highlighting G-Tric's potential to advance the triclustering state of the art by easing the evaluation of new triclustering approaches.
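
To illustrate the idea of planting ground-truth subspaces, a hypothetical mini-generator in the spirit of G-Tric (not its actual API; all names and parameters are invented) that draws a noisy observations × features × contexts tensor and plants one constant tricluster:

import numpy as np

def plant_tricluster(shape=(50, 30, 10), size=(8, 5, 3), noise=1.0, seed=0):
    rng = np.random.default_rng(seed)
    data = rng.normal(scale=noise, size=shape)            # background distribution
    obs = rng.choice(shape[0], size[0], replace=False)    # ground-truth indices
    feat = rng.choice(shape[1], size[1], replace=False)
    ctx = rng.choice(shape[2], size[2], replace=False)
    data[np.ix_(obs, feat, ctx)] = 5.0 + rng.normal(scale=0.1, size=size)
    return data, (sorted(obs), sorted(feat), sorted(ctx))

tensor, truth = plant_tricluster()
print(tensor.shape, [len(t) for t in truth])              # (50, 30, 10) [8, 5, 3]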


2021
Vol 15 (4)
pp. 1-46
Author(s):
Kui Yu
Lin Liu
Jiuyong Li

In this article, we aim to develop a unified view of causal and non-causal feature selection methods. The unified view fills a gap in the research on the relation between the two types of methods. Based on the Bayesian network framework and information theory, we first show that causal and non-causal feature selection methods share the same objective: to find the Markov blanket of a class attribute, the theoretically optimal feature set for classification. We then examine the assumptions made by causal and non-causal feature selection methods when searching for the optimal feature set, and unify the assumptions by mapping them to restrictions on the structure of the Bayesian network model of the studied problem. We further analyze in detail how the structural assumptions lead to the different levels of approximation employed by the methods in their search, which in turn make the feature sets found by the methods approximations of the optimal feature set. With the unified view, we can interpret the output of non-causal methods from a causal perspective and derive the error bounds of both types of methods. Finally, we present a practical understanding of the relation between causal and non-causal methods using extensive experiments with synthetic data and various types of real-world data.
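
The shared objective can be made concrete with a small sketch: given a known Bayesian network structure, the Markov blanket of the class attribute consists of its parents, its children, and its children's other parents (spouses); the toy DAG below is an invented example:

def markov_blanket(dag, target):
    # dag maps each node to the set of its parents.
    parents = set(dag.get(target, set()))
    children = {n for n, ps in dag.items() if target in ps}
    spouses = {p for c in children for p in dag[c]} - {target}
    return parents | children | spouses

# Class attribute C with parent A; C causes D; D also depends on E.
dag = {"A": set(), "C": {"A"}, "D": {"C", "E"}, "E": set()}
print(markov_blanket(dag, "C"))  # {'A', 'D', 'E'}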


2021
Vol 11 (9)
pp. 4241
Author(s):
Jiahua Wu
Hyo Jong Lee

In bottom-up multi-person pose estimation, grouping joint candidates into the corresponding, appropriately structured person instances is challenging. In this paper, a new bottom-up method, the Partitioned CenterPose (PCP) Network, is proposed to better cluster the detected joints. To achieve this goal, we propose a novel approach called Partition Pose Representation (PPR), which integrates a person instance and its body joints based on joint offsets. PPR leverages information about the center of the human body and the offsets between that center point and the positions of the body's joints to encode human poses accurately. To strengthen the relationships between body joints, we divide the human body into five parts and generate a sub-PPR for each part. Based on this PPR, the PCP Network can detect people and their body joints simultaneously, then group all body joints according to joint offset. Moreover, an improved ℓ1 loss is designed to measure joint offset more accurately. Using the COCO keypoints and CrowdPose datasets for testing, we found that the performance of the proposed method is on par with that of existing state-of-the-art bottom-up methods in terms of accuracy and speed.
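
A rough sketch of offset-based grouping as described: each joint candidate carries a predicted offset pointing to its person's center, and the joint is assigned to the nearest detected center; this is illustrative only, with made-up coordinates, and does not reproduce the PCP Network:

import numpy as np

def group_joints(centers, joints, offsets):
    # centers: (P, 2); joints, offsets: (J, 2). Returns a person id per joint.
    votes = joints + offsets                     # where each joint claims its center is
    d = np.linalg.norm(votes[:, None, :] - centers[None, :, :], axis=-1)
    return d.argmin(axis=1)                      # nearest center wins

centers = np.array([[50.0, 50.0], [150.0, 60.0]])
joints = np.array([[45.0, 30.0], [160.0, 40.0]])
offsets = np.array([[5.0, 18.0], [-8.0, 22.0]])
print(group_joints(centers, joints, offsets))    # [0 1]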


2021
Vol 11 (9)
pp. 4005
Author(s):
Asep Maulana
Martin Atzmueller

Anomaly detection in complex networks is an important and challenging task in many application domains. Examples include analysis and sensemaking in human interactions, e.g., in (social) interaction networks, as well as the analysis of the behavior of complex technical and cyber-physical systems, such as suspicious transactions or behavior in financial or routing networks; here, behavior and/or interactions typically also occur on different levels and layers. In this paper, we focus on detecting anomalies in such complex networks, in particular multi-layer networks, where we consider the problem of finding sets of anomalous nodes for group anomaly detection. The presented method is based on centrality-based many-objective optimization on multi-layer networks. Starting from the Pareto front obtained via many-objective optimization, we rank anomaly candidates using the centrality information on all layers. This ranking is formalized via a scoring function, which estimates relative deviations of the node centralities, considering the density of the network and its respective layers. In a human-centered approach, anomalous sets of nodes can then be identified. A key feature of this approach is its interpretability and explainability, since we can directly assess anomalous nodes in the context of the network topology. We evaluate the proposed method using different datasets, including both synthetic and real-world network data. Our results demonstrate the efficacy of the presented approach.
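
A simplified stand-in for the centrality-deviation idea (the many-objective Pareto optimization itself is not reproduced; networkx, the z-score aggregation, and the toy graphs are assumptions):

import networkx as nx
import numpy as np

def anomaly_scores(layers):
    nodes = sorted(set().union(*(g.nodes for g in layers)))
    z = []
    for g in layers:
        c = np.array([nx.degree_centrality(g).get(n, 0.0) for n in nodes])
        z.append((c - c.mean()) / (c.std() + 1e-12))     # per-layer deviation
    return dict(zip(nodes, np.abs(np.mean(z, axis=0))))  # aggregate across layers

layer1 = nx.erdos_renyi_graph(20, 0.2, seed=1)
layer2 = nx.erdos_renyi_graph(20, 0.2, seed=2)
layer2.add_edges_from((0, k) for k in range(1, 15))      # node 0 anomalous on layer 2
scores = anomaly_scores([layer1, layer2])
print(max(scores, key=scores.get))                       # likely node 0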


Geophysics
2006
Vol 71 (5)
pp. C81-C92
Author(s):
Helene Hafslund Veire
Hilde Grude Borgos
Martin Landrø

Effects of pressure and fluid saturation can have the same degree of impact on seismic amplitudes and differential traveltimes in the reservoir interval; thus, they are often inseparable by analysis of a single stacked seismic data set. In such cases, time-lapse AVO analysis offers an opportunity to discriminate between the two effects. We quantify the uncertainty in the estimates so that information about pressure- and saturation-related changes can be used in reservoir modeling and simulation. One way of analyzing uncertainties is to formulate the problem in a Bayesian framework. Here, the solution of the problem is represented by a probability density function (PDF), providing estimates of the uncertainties as well as direct estimates of the properties. A stochastic model for estimating pressure and saturation changes from time-lapse seismic AVO data is investigated within a Bayesian framework. Well-known rock-physics relationships are used to set up a prior stochastic model. PP reflection coefficient differences are used to establish a likelihood model linking reservoir variables and time-lapse seismic data. The methodology incorporates correlation between the different variables of the model as well as spatial dependencies for each of the variables. In addition, possible bottlenecks causing large uncertainties in the estimates can be identified through sensitivity analysis of the system. The method has been tested on 1D synthetic data and on field time-lapse seismic AVO data from the Gullfaks Field in the North Sea.
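
The Bayesian formulation can be sketched with a generic linear-Gaussian update, d = Gm + e, with a Gaussian prior on m = (Δpressure, Δsaturation); all numbers and the sensitivity matrix G are invented for illustration, whereas the paper's prior and likelihood rest on rock physics and PP reflection-coefficient differences:

import numpy as np

G = np.array([[0.8, 0.3],               # linearized sensitivities of two AVO
              [0.2, 0.9]])              # attributes to (dP, dS) changes
Cm = np.diag([0.5 ** 2, 0.2 ** 2])      # prior covariance
Cd = np.diag([0.05 ** 2, 0.05 ** 2])    # data-noise covariance
m0 = np.zeros(2)                        # prior mean: no change
d = np.array([0.25, 0.18])              # observed time-lapse differences

# Posterior PDF of a linear-Gaussian model: mean and covariance in closed form.
Cpost = np.linalg.inv(G.T @ np.linalg.inv(Cd) @ G + np.linalg.inv(Cm))
mpost = Cpost @ (G.T @ np.linalg.inv(Cd) @ d + np.linalg.inv(Cm) @ m0)
print("posterior mean:", mpost)
print("posterior std :", np.sqrt(np.diag(Cpost)))   # direct uncertainty estimates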


Symmetry
2019
Vol 11 (2)
pp. 227
Author(s):
Eckart Michaelsen
Stéphane Vujasinovic

Representative input data are a necessary requirement for the assessment of machine-vision systems. For symmetry-seeing machines in particular, such imagery should provide symmetries as well as asymmetric clutter. Moreover, reliable ground truth must accompany the data, and it should be possible to estimate recognition performance and computational effort by providing different grades of difficulty and complexity. Recent competitions used real imagery labeled by human subjects with appropriate ground truth. The paper at hand proposes to use synthetic data instead. Such data contain symmetry, clutter, and nothing else. This is preferable because interference with other perceptive capabilities, such as object recognition or prior knowledge, can be avoided. The data are given sparsely, i.e., as sets of primitive objects. However, images can be generated from them, so that the same data can also be fed into machines requiring dense input, such as multilayered perceptrons. Sparse representations are preferred because the author's own system requires such data, and in this way any influence of the primitive-extraction method is excluded. The presented format allows hierarchies of symmetries. This is important because hierarchy constitutes a natural and dominant part of symmetry-seeing. The paper reports some experiments using the author's Gestalt algebra system as the symmetry-seeing machine. Additionally included is a comparative test run with the state-of-the-art symmetry-seeing deep-learning convolutional perceptron of the PSU. The computational efforts and recognition performance are assessed.
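
In the same sparse spirit, a small sketch of generating such data: primitive objects as (x, y, orientation) triples, a mirror-symmetric set about a vertical axis plus uniform clutter; the format details are invented and do not match the paper's actual format:

import numpy as np

def make_scene(n_pairs=10, n_clutter=30, axis_x=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(0, axis_x, n_pairs)
    y = rng.uniform(0, 1, n_pairs)
    th = rng.uniform(0, np.pi, n_pairs)
    left = np.column_stack([x, y, th])
    right = np.column_stack([2 * axis_x - x, y, np.pi - th])  # mirrored partners
    clutter = rng.uniform(0, 1, (n_clutter, 3)) * [1, 1, np.pi]
    return np.vstack([left, right, clutter])   # ground truth: first 2*n_pairs rows

scene = make_scene()
print(scene.shape)   # (50, 3)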


Electronics
2018
Vol 7 (12)
pp. 424
Author(s):
Peng Chen
Zhenxin Cao
Zhimin Chen
Linxi Liu
Man Feng

The performance of a direction-finding system is significantly degraded by imperfections in the array. In this paper, the direction-of-arrival (DOA) estimation problem is investigated for a uniform linear array (ULA) system with an unknown mutual coupling (MC) effect. The system model with the MC effect is formulated. Then, by exploiting the signal sparsity in the spatial domain, a compressed-sensing (CS)-based system model incorporating the MC coefficients is proposed, and the problem of DOA estimation is converted into one of sparse reconstruction. To solve the reconstruction problem efficiently, a novel DOA estimation method, named sparse-based DOA estimation with unknown MC effect (SDMC), is proposed, in which both the sparse signal and the MC coefficients are estimated iteratively. Simulation results show that the proposed method achieves better DOA estimation performance in scenarios with the MC effect than the state-of-the-art methods, improving DOA estimation performance by about 31.64% and reducing the MC effect by about 4 dB.
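
The sparse-domain formulation can be sketched with a ULA steering dictionary over an angle grid and a few orthogonal-matching-pursuit iterations; mutual-coupling estimation is omitted here, so this is only the idealized MC-free special case, with invented array size and source angles:

import numpy as np

M, grid = 10, np.linspace(-60, 60, 121)      # sensors; angle grid in degrees
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(np.deg2rad(grid))))

rng = np.random.default_rng(0)
true = [35, 80]                              # grid indices of two sources
y = A[:, true] @ np.array([1.0, 0.8]) \
    + 0.05 * (rng.normal(size=M) + 1j * rng.normal(size=M))

support, r = [], y.copy()                    # orthogonal matching pursuit
for _ in range(2):
    support.append(int(np.argmax(np.abs(A.conj().T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef
print("estimated DOAs (deg):", grid[sorted(support)])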


2018
Vol 2018
pp. 1-11
Author(s):
Hai Wang
Lei Dai
Yingfeng Cai
Long Chen
Yong Zhang

Traditional salient object detection models are divided into several classes based on low-level features and contrast between pixels. In this paper, we propose a model based on a multilevel deep pyramid (MLDP), which fuses multiple features at different levels. Firstly, the MLDP uses the original image as the input to a VGG16 model to extract high-level features and form an initial saliency map. Next, the MLDP further extracts high-level features to form a saliency map based on a deep pyramid. Then, the MLDP obtains a saliency map fused with superpixels by extracting low-level features. After that, the MLDP applies background-noise filtering to the superpixel-fused saliency map in order to filter out the interference of background noise and form a saliency map based on the foreground. Lastly, the MLDP combines the superpixel-fused saliency map with the foreground-based saliency map, which results in the final saliency map. The MLDP is not limited to low-level features, as it fuses multiple features, and it achieves good results when extracting salient targets. As shown in our experiment section, the MLDP outperforms seven other state-of-the-art models across three public saliency datasets. The MLDP therefore has superiority and wide applicability in the extraction of salient targets.
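
A minimal stand-in for the final fusion step only: normalize two saliency maps to [0, 1] and blend them; the VGG16 feature extraction, superpixel fusion, and background filtering of the MLDP are not reproduced, and the equal weighting is an assumption:

import numpy as np

def normalize(m):
    m = m.astype(float)
    return (m - m.min()) / (m.max() - m.min() + 1e-12)

def fuse(superpixel_map, foreground_map, w=0.5):
    # Weighted blend of the superpixel-fused and foreground-based maps.
    return normalize(w * normalize(superpixel_map)
                     + (1 - w) * normalize(foreground_map))

a = np.random.rand(64, 64)   # placeholder superpixel-fused saliency map
b = np.random.rand(64, 64)   # placeholder foreground-based saliency map
print(fuse(a, b).shape)      # (64, 64)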

