FAST ACCURATE COMMUNITY DETECTION BASED ON DYNAMICS PROCESS AND STABILITY OPTIMIZATION

2012 ◽  
Vol 26 (31) ◽  
pp. 1250189 ◽  
Author(s):  
YUE ZHAO ◽  
JIE-LIANG MA ◽  
HUI-JIA LI

Utilizing dynamical systems to identify community structure has become an important research approach. In this paper, inspired by the relationship between the topological structure of networks and the dynamic Potts model, we present a novel method in which the conditional inequality defining a simple community is transformed into an objective function F analogous to the Hamiltonian of the Potts model. To obtain a well-performing partition, we develop an improved EM algorithm that searches for the optimal value of the objective function F by successively updating the dynamic process of the node membership vector, which is jointly influenced by the weighting function W and the tightness expression T. By properly adjusting the relevant parameters, our method can effectively detect community structures. Furthermore, stability is applied as a new quality measure to refine the partitions detected by the improved EM algorithm and to mitigate the resolution limit of modularity. Simulation experiments on benchmark and real-world networks all give excellent results.
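As a minimal sketch of the Potts-Hamiltonian view of community detection: the abstract's improved-EM optimization of F is replaced here by a plain greedy label update on the Reichardt-Bornholdt Potts Hamiltonian (an assumption for illustration, not the authors' algorithm; the weighting function W and tightness T are omitted).

```python
import numpy as np

def potts_energy(A, labels, gamma=1.0):
    """Potts Hamiltonian: -sum over same-label pairs of (A_ij - gamma*k_i*k_j/2m)."""
    k = A.sum(axis=1)
    two_m = A.sum()
    same = labels[:, None] == labels[None, :]
    np.fill_diagonal(same, False)
    return -((A - gamma * np.outer(k, k) / two_m) * same).sum() / 2.0

def greedy_partition(A, gamma=1.0, sweeps=20):
    """Repeatedly move each node to the existing label that lowers the energy most."""
    labels = np.arange(A.shape[0])
    for _ in range(sweeps):
        moved = False
        for i in range(len(labels)):
            original = labels[i]
            best_c, best_e = original, potts_energy(A, labels, gamma)
            for c in set(labels.tolist()):
                labels[i] = c
                e = potts_energy(A, labels, gamma)
                if e < best_e - 1e-12:
                    best_c, best_e = c, e
            labels[i] = best_c
            if best_c != original:
                moved = True
        if not moved:
            break
    return labels

# Two triangles joined by a single bridge edge: the greedy sweep should
# recover the two triangles as communities.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
labels = greedy_partition(A)
```

Minimizing this Hamiltonian at gamma = 1 is equivalent to maximizing modularity, which is why stability-style refinements are needed to counter the resolution limit the abstract mentions.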

2020 ◽  
Vol 25 (1) ◽  
pp. 129-138
Author(s):  
Lichao Nie ◽  
Zhao Ma ◽  
Bin Liu ◽  
Zhenhao Xu ◽  
Wei Zhou ◽  
...  

There is a high demand for detection accuracy and resolution with respect to anomalous bodies due to the increased development of underground spaces. This study focused on the weighted inversion of observed data from individual-array electrical resistivity tomography (ERT) and developed an improved method of applying a data weighting function to the geoelectrical inversion procedure. In this method, a weighting factor acting on the observed-data term was introduced into the objective function. For individual arrays, the sensitivity decreases with increasing electrode interval. Therefore, the Jacobian matrices were computed for the observed data of individual arrays to determine the value of the weighting factor, which was calculated automatically during inversion. In this work, 2D combined inversion of ERT data from four-electrode Alfa-type arrays is examined. The effectiveness of the weighted inversion method was demonstrated using various synthetic and real data examples. The results indicated that the inversion method based on the observed-data weighting function could improve the contribution of observed data with depth information to the objective function. It has been proven that the combined weighted inversion method could be a feasible tool for improving positioning accuracy and resolution when imaging deep anomalous bodies in the subsurface.
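A toy linear stand-in for the weighting idea: rows of a forward operator G mimic measurements whose sensitivity (Jacobian row norm) decays with electrode interval/depth, and a weighting factor derived from the row norms upweights those rows so deep data still contribute to the objective. The operator, decay law, and damping are illustrative assumptions, not the paper's ERT forward model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear forward operator: later rows stand in for larger
# electrode intervals, whose sensitivity shrinks with depth.
depths = np.arange(1, 21, dtype=float)
G = rng.normal(size=(20, 5)) / depths[:, None]

m_true = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
d_obs = G @ m_true                               # noiseless synthetic data

# Weighting factor computed automatically from Jacobian row norms:
# low-sensitivity (deep) rows receive larger weights.
w = 1.0 / np.linalg.norm(G, axis=1)
W = np.diag(w / w.sum())

# Damped weighted normal equations:
# minimize (Gm - d)^T W (Gm - d) + lam * ||m||^2.
lam = 1e-10
m_est = np.linalg.solve(G.T @ W @ G + lam * np.eye(5), G.T @ W @ d_obs)
```

In a real ERT workflow this single linear solve would be one Gauss-Newton iteration, with G recomputed from the current resistivity model at each step.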


2021 ◽  
Vol 11 (2) ◽  
pp. 582
Author(s):  
Zean Bu ◽  
Changku Sun ◽  
Peng Wang ◽  
Hang Dong

Calibration between multiple sensors is a fundamental procedure for data fusion. To address the problems of large errors and tedious operation, we present a novel method for calibration between light detection and ranging (LiDAR) and a camera. We design a calibration target, an arbitrary triangular pyramid with a chessboard pattern on each of its three planes. The target contains both 3D and 2D information, which can be utilized to obtain the intrinsic parameters of the camera and the extrinsic parameters of the system. In the proposed method, the world coordinate system is established through the triangular pyramid. We extract the equations of the triangular pyramid planes to find the relative transformation between the two sensors. A single capture from the camera and LiDAR is sufficient for calibration, and errors are reduced by minimizing the distance between points and planes. Furthermore, the accuracy can be increased with more captures. We carried out experiments on simulated data with varying degrees of noise and numbers of frames. Finally, the calibration results were verified on real data through incremental validation and analysis of the root mean square error (RMSE), demonstrating that our calibration method is robust and provides state-of-the-art performance.
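A minimal sketch of the point-to-plane step: with the rotation assumed already known (identity here, a deliberate simplification), the translation between the LiDAR and camera frames follows from one linear least-squares solve over the three target planes n·x = d. The plane parameters and points below are simulated, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
normals = np.eye(3)                      # three non-parallel pyramid faces
offsets = np.array([1.0, 2.0, 3.0])      # plane equations n.x = d (camera frame)
t_true = np.array([0.2, -0.1, 0.3])      # ground-truth LiDAR -> camera translation

rows, rhs = [], []
for n, d in zip(normals, offsets):
    for _ in range(5):
        v = rng.normal(size=3)
        v -= (v @ n) * n                 # keep only the tangential component
        p_cam = d * n + v                # a point lying on the plane (camera frame)
        p_lidar = p_cam - t_true         # the same point as seen by the LiDAR
        # point-to-plane constraint: n.(p_lidar + t) = d  =>  n.t = d - n.p_lidar
        rows.append(n)
        rhs.append(d - n @ p_lidar)

t_est, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
```

With noisy points the same least-squares system is overdetermined and the residual norm plays the role of the point-to-plane error being minimized; solving jointly for rotation requires a nonlinear solver instead.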


Atmosphere ◽  
2021 ◽  
Vol 12 (5) ◽  
pp. 564
Author(s):  
Hong Shen ◽  
Longkun Yu ◽  
Xu Jing ◽  
Fengfu Tan

The turbulence moment of order m (μm) is defined as the refractive index structure constant Cn^2 integrated over the whole path z with path-weighting function z^m. The optical effects of atmospheric turbulence are directly related to turbulence moments, so evaluating these effects requires measuring them. It is well known that the zero-order moment (μ0) and the five-thirds-order moment (μ5/3), which correspond to the seeing and the isoplanatic angle, respectively, have been monitored as routine parameters in astronomical site testing. However, direct measurement of the second-order moment (μ2) of the whole atmospheric layer has not been reported. Using a star as the light source, we found that μ2 can be measured through the covariance of the irradiance in two receiver apertures with suitable aperture size and separation. Numerical results show that the theoretical error of this novel method is negligible for all typical turbulence models. This method enables us to monitor μ2 as a routine parameter in astronomical site testing, which, combined with μ0 and μ5/3, is helpful for better understanding the characteristics of atmospheric turbulence.
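The defining integral μm = ∫ Cn^2(z) z^m dz is straightforward to evaluate numerically; the sketch below uses a toy exponential Cn^2 profile (an illustrative assumption, not a measured or standard model profile).

```python
import numpy as np

z = np.linspace(0.0, 20e3, 2001)          # path position [m]
cn2 = 1e-14 * np.exp(-z / 1500.0)         # toy Cn^2 profile [m^-2/3]

def turbulence_moment(m):
    """mu_m = integral of Cn^2(z) * z^m dz, by trapezoidal quadrature."""
    f = cn2 * z**m
    return 0.5 * np.sum((f[1:] + f[:-1]) * np.diff(z))

mu0  = turbulence_moment(0.0)        # zero-order moment (seeing)
mu53 = turbulence_moment(5.0 / 3.0)  # five-thirds moment (isoplanatic angle)
mu2  = turbulence_moment(2.0)        # the second-order moment targeted here
```

Because z^m grows with m, the higher-order moments weight the high-altitude turbulence more heavily, which is why μ2 carries path information that μ0 alone does not.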


Author(s):  
E. Alper Yıldırım

We study convex relaxations of nonconvex quadratic programs. We identify a family of so-called feasibility preserving convex relaxations, which includes the well-known copositive and doubly nonnegative relaxations, with the property that the convex relaxation is feasible if and only if the nonconvex quadratic program is feasible. We observe that each convex relaxation in this family implicitly induces a convex underestimator of the objective function on the feasible region of the quadratic program. This alternative perspective on convex relaxations enables us to establish several useful properties of the corresponding convex underestimators. In particular, if the recession cone of the feasible region of the quadratic program does not contain any directions of negative curvature, we show that the convex underestimator arising from the copositive relaxation is precisely the convex envelope of the objective function of the quadratic program, strengthening Burer’s well-known result on the exactness of the copositive relaxation in the case of nonconvex quadratic programs. We also present an algorithmic recipe for constructing instances of quadratic programs with a finite optimal value but an unbounded relaxation for a rather large family of convex relaxations including the doubly nonnegative relaxation.
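As a sketch of the setting (Burer's standard formulation; the notation is assumed, not taken from this abstract), the nonconvex QP and its lifted cone relaxation read:

```latex
% Nonconvex quadratic program over a polyhedral region with x >= 0:
\min_{x \in \mathbb{R}^n} \; x^\top Q x + 2 c^\top x
\quad \text{s.t.} \quad A x = b, \; x \ge 0.
% Lifting X = x x^\top yields a convex relaxation parameterized by a cone K:
\min_{x,\,X} \; \langle Q, X \rangle + 2 c^\top x
\quad \text{s.t.} \quad A x = b, \;
\operatorname{diag}(A X A^\top) = b \circ b, \;
\begin{pmatrix} 1 & x^\top \\ x & X \end{pmatrix} \in \mathcal{K}.
```

Taking K to be the completely positive cone gives the exact copositive relaxation of Burer's theorem, while replacing K by the doubly nonnegative cone (positive semidefinite and entrywise nonnegative matrices) gives the tractable but generally weaker relaxation discussed in the abstract.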


Geophysics ◽  
2019 ◽  
Vol 84 (2) ◽  
pp. R165-R174 ◽  
Author(s):  
Marcelo Jorge Luz Mesquita ◽  
João Carlos Ribeiro Cruz ◽  
German Garabito Callapino

Estimation of an accurate velocity macromodel is an important step in seismic imaging. We have developed an approach based on coherence measurements and finite-offset (FO) beam stacking. The algorithm is an FO common-reflection-surface tomography, which aims to determine the best layered depth-velocity model by finding the model that maximizes a semblance objective function calculated from the amplitudes in common-midpoint (CMP) gathers stacked over a predetermined aperture. We develop the subsurface velocity model with a stack of layers separated by smooth interfaces. The algorithm is applied layer by layer from the top downward in four steps per layer. First, by automatic or manual picking, we estimate the reflection times of events that describe the interfaces in a time-migrated section. Second, we convert these times to depth using the velocity model via application of Dix’s formula and the image rays to the events. Third, by using ray tracing, we calculate kinematic parameters along the central ray and build a paraxial FO traveltime approximation for the FO common-reflection-surface method. Finally, starting from CMP gathers, we calculate the semblance of the selected events using this paraxial traveltime approximation. After repeating this algorithm for all selected CMP gathers, we use the mean semblance values as an objective function for the target layer. When this coherence measure is maximized, the model is accepted and the process is completed. Otherwise, the process restarts from step two with the updated velocity model. Because the inverse problem we are solving is nonlinear, we use very fast simulated annealing to search the velocity parameters in the target layers. We test the method on synthetic and real data sets to study its use and advantages.
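The coherence measure in the final step can be sketched as the standard semblance coefficient over a CMP gather window (a generic formula; the paper evaluates it along the paraxial FO traveltime curves, which are not reproduced here).

```python
import numpy as np

def semblance(gather):
    """Semblance of an (n_traces, n_samples) window extracted along an event:
    stacked energy over total energy, bounded in [0, 1]."""
    n_traces = gather.shape[0]
    stacked = gather.sum(axis=0)
    return (stacked ** 2).sum() / (n_traces * (gather ** 2).sum())

# Perfectly coherent event: identical traces give semblance 1.
trace = np.sin(np.linspace(0, 4 * np.pi, 100))
coherent = np.tile(trace, (8, 1))

# Incoherent gather: independent noise gives semblance near 1/n_traces.
noise = np.random.default_rng(0).normal(size=(8, 100))
```

In the tomography loop, the mean of such semblance values over all selected CMP gathers is the objective maximized by very fast simulated annealing; a semblance near 1 indicates that the paraxial traveltime predicted by the current velocity model tracks the event accurately.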


2013 ◽  
Vol 2013 ◽  
pp. 1-8
Author(s):  
Teng Li ◽  
Huan Chang ◽  
Jun Wu

This paper presents a novel algorithm to numerically decompose mixed signals in a collaborative way, given supervision of the labels that each signal contains. The decomposition is formulated as an optimization problem incorporating nonnegative constraint. A nonnegative data factorization solution is presented to yield the decomposed results. It is shown that the optimization is efficient and decreases the objective function monotonically. Such a decomposition algorithm can be applied on multilabel training samples for pattern classification. The real-data experimental results show that the proposed algorithm can significantly facilitate the multilabel image classification performance with weak supervision.
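A minimal sketch of a monotone nonnegative factorization is the classical Lee-Seung multiplicative-update rule, used here as a stand-in since the abstract does not give the paper's exact update or how label supervision enters the constraints.

```python
import numpy as np

def nmf(V, rank, iters=500, eps=1e-9, seed=0):
    """Minimize ||V - W H||_F^2 over W, H >= 0 with multiplicative updates;
    each update provably does not increase the objective."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], rank)) + eps
    H = rng.random((rank, V.shape[1])) + eps
    errors = []
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H with W fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed
        errors.append(np.linalg.norm(V - W @ H))
    return W, H, errors

# Recover a random nonnegative rank-2 matrix.
rng = np.random.default_rng(1)
V = rng.random((8, 2)) @ rng.random((2, 5))
W, H, errors = nmf(V, 2)
```

The multiplicative form keeps W and H nonnegative automatically, which is why the objective decreases monotonically without any explicit projection step.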


Author(s):  
A. Stassopoulou ◽  
M. Petrou

We present in this paper a novel method for eliciting the conditional probability matrices needed for a Bayesian network with the help of a neural network. We demonstrate how to obtain a correspondence between the two networks by deriving a closed-form solution so that the outputs of the two networks are similar in the least-square-error sense, not only when determining the discriminant function, but over the full range of their outputs. For this purpose we take into consideration the probability density functions of the independent variables of the problem when we compute the least-square-error approximation. Our methodology is demonstrated with the help of real data concerning the problem of assessing the risk of desertification for some burned forests in Attica, Greece, where the parameters of the Bayesian network constructed for this task are successfully estimated given a neural network trained with a set of data.


2019 ◽  
Vol 20 (23) ◽  
pp. 6019 ◽  
Author(s):  
Dongliang Guo ◽  
Qiaoqiao Wang ◽  
Meng Liang ◽  
Wei Liu ◽  
Junlan Nie

Cavity analysis in molecular dynamics is important for understanding molecular function. However, analyzing the dynamic pattern of molecular cavities remains a difficult task. In this paper, we propose a novel method to topologically represent molecular cavities by vectorization. First, a characterization of cavities is established through the Word2Vec model, based on an analogy between cavities and natural language processing (NLP) terms. Then, we use techniques such as dimension reduction and clustering to conduct an exploratory analysis of the vectorized molecular cavities. On a real data set, we demonstrate that our approach maintains the topological characteristics of the cavities and can find change patterns across a large number of cavities.


Author(s):  
Aaron Berk ◽  
Yaniv Plan ◽  
Özgür Yilmaz

The use of the generalized Lasso is a common technique for recovery of structured high-dimensional signals. There are three common formulations of the generalized Lasso; each program has a governing parameter whose optimal value depends on properties of the data. At this optimal value, compressed sensing theory explains why Lasso programs recover structured high-dimensional signals with minimax order-optimal error. Unfortunately, in practice the optimal choice is generally unknown and must be estimated. Thus, we investigate the stability of each of the three Lasso programs with respect to its governing parameter. Our goal is to aid the practitioner in answering the following question: given real data, which Lasso program should be used? We take a step towards answering this by analysing the case where the measurement matrix is the identity (the so-called proximal denoising setup) and we use $\ell _{1}$ regularization. For each Lasso program, we specify settings in which that program is provably unstable with respect to its governing parameter. We support our analysis with detailed numerical simulations. For example, there are settings where a 0.1% underestimate of a Lasso parameter can increase the error significantly and a 50% underestimate can cause the error to increase by a factor of $10^{9}$.
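In the proximal denoising setup with $\ell_1$ regularization, the Lasso solution has a closed form given by soft thresholding, which makes the parameter sensitivity easy to probe numerically. This is a standard fact; the signal and noise levels in the sweep below are illustrative, not the paper's experiments.

```python
import numpy as np

def soft_threshold(y, lam):
    """Proximal operator of lam*||.||_1: the exact minimizer of
    0.5*||x - y||_2^2 + lam*||x||_1 (identity measurement matrix)."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

# Sparse signal plus Gaussian noise; compare recovery error as the
# regularization parameter lam is varied around a reasonable value.
rng = np.random.default_rng(0)
x_true = np.zeros(200)
x_true[:5] = 10.0
y = x_true + 0.1 * rng.normal(size=200)

def recovery_error(lam):
    return np.linalg.norm(soft_threshold(y, lam) - x_true)
```

Sweeping `recovery_error` over a grid of `lam` values reproduces the qualitative phenomenon in the abstract: the error curve can be very asymmetric around its minimizer, so underestimating the governing parameter can be far more damaging than overestimating it.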


2013 ◽  
Vol 427-429 ◽  
pp. 1606-1609 ◽  
Author(s):  
Tao Chen ◽  
Hui Fang Deng

In this paper, we propose a novel method for image retrieval based on multi-instance learning with relevance feedback. The method mainly includes three steps. First, it segments each image into a number of regions, treating images and regions as bags and instances, respectively. Second, it constructs an objective function of multi-instance learning from the query images, which is used to rank the images from a large digital repository according to the distance between the nearest region vector of each image and the maximum of the objective function. Third, based on the user's relevance feedback, several rounds may be needed to refine the output images and their ranks. Finally, a satisfying set of images is returned to the user. Experimental results on the COREL image data sets have demonstrated the effectiveness of the proposed approach.

