Together alone: a group-based polarization measurement

Author(s):  
Tanzhe Tang ◽  
Amineh Ghorbani ◽  
Flaminio Squazzoni ◽  
Caspar G. Chorus

Abstract. The growing polarization of our societies and economies has been extensively studied in various disciplines and is the subject of public controversy. Yet, measuring polarization is hampered by the discrepancy between how polarization is conceptualized and how it is measured. For instance, the notion of a group, especially groups identified on the basis of similarities between individuals, is key to conceptualizing polarization but is usually neglected when measuring it. To address this issue, this paper presents a new polarization measurement based on a grouping method called "Equal Size Binary Grouping" (ESBG) for both uni- and multi-dimensional discrete data, which satisfies a range of desired properties. Inspired by techniques from clustering, ESBG divides the population into two groups of equal size based on similarities between individuals, while overcoming certain theoretical and practical problems afflicting other grouping methods, such as discontinuity and contradiction of reasoning. Our new polarization measurement and the grouping method are illustrated by applying them to a two-dimensional synthetic data set. By means of a so-called "squeezing-and-moving" framework, we show that our measurement is closely related to bipolarization and could help stimulate further empirical research.
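The abstract does not spell out the ESBG procedure; the following Python sketch is only a rough, hedged illustration of a group-based polarization score on two-dimensional synthetic data. It splits the population into two equal-size groups along the first principal direction (an assumed stand-in for ESBG's similarity-based split) and scores polarization as between-group separation relative to within-group spread; none of these choices should be read as the authors' actual measurement.

```python
import numpy as np

def equal_size_split(X):
    """Split a population into two equal-size groups by similarity.

    Hypothetical stand-in for ESBG: rank individuals along the first
    principal direction and cut the ranking in half, so each group
    contains similar individuals and both groups have equal size.
    """
    X = np.asarray(X, dtype=float)
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)  # first principal direction
    order = np.argsort(Xc @ vt[0])
    labels = np.zeros(len(X), dtype=int)
    labels[order[len(X) // 2:]] = 1
    return labels

def polarization_score(X, labels):
    """Illustrative group-based polarization: distance between group
    centroids relative to the average within-group spread."""
    X = np.asarray(X, dtype=float)
    g0, g1 = X[labels == 0], X[labels == 1]
    between = np.linalg.norm(g0.mean(axis=0) - g1.mean(axis=0))
    within = 0.5 * (g0.std(axis=0).mean() + g1.std(axis=0).mean())
    return between / (between + within + 1e-12)

# Two-dimensional synthetic opinions: two clusters drifting apart.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 0.3, (100, 2)), rng.normal(1.0, 0.3, (100, 2))])
labels = equal_size_split(X)
print(round(polarization_score(X, labels), 3))
```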

2021 ◽  
Vol 86 ◽  
pp. 16-32
Author(s):  
Dang Duc Trong ◽  
Tran Quoc Viet ◽  
Vo Dang Khoa ◽  
Nguyen Thi Hong Nhung

Author(s):  
Raul E. Avelar ◽  
Karen Dixon ◽  
Boniphace Kutela ◽  
Sam Klump ◽  
Beth Wemple ◽  
...  

The calibration of safety performance functions (SPFs) is a mechanism included in the Highway Safety Manual (HSM) to adjust its SPFs for use in a given jurisdiction. Critically, the quality of the calibration procedure must be assessed before the calibrated SPFs are used. Multiple resources to aid practitioners in calibrating SPFs have been developed in the years following the publication of the HSM 1st edition. Similarly, the literature suggests multiple ways to assess the goodness-of-fit (GOF) of a calibrated SPF to a data set from a given jurisdiction. This paper uses the results of calibrating multiple intersection SPFs to a large Mississippi safety database to examine the relationships among multiple GOF metrics. The goal is to develop a sensible single index that leverages the joint information from multiple GOF metrics to assess the overall quality of a calibration. A factor analysis applied to the calibration results revealed three underlying factors explaining 76% of the variability in the data. From these results, the authors developed an index and performed a sensitivity analysis. The key metrics were found to be, in descending order of importance: the deviation of the cumulative residual (CURE) plot from the 95% confidence area, the mean absolute deviation, the modified R-squared, and the value of the calibration factor. This paper also presents comparisons between the index and alternative scoring strategies, as well as an effort to verify the results using synthetic data. The developed index is recommended for comprehensively assessing the quality of calibrated intersection SPFs.
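As a hedged illustration of how several GOF metrics might be condensed into a single calibration-quality index via factor analysis (the general strategy described above, not the authors' exact procedure), the sketch below applies scikit-learn's FactorAnalysis to placeholder metric values; the column names, the synthetic data, and the variance-based weighting of factor scores are assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis

# Hypothetical GOF metrics for a set of calibrated SPFs (one row per SPF).
rng = np.random.default_rng(1)
gof = pd.DataFrame({
    "cure_deviation": rng.uniform(0, 1, 30),      # CURE-plot excursion beyond the 95% bounds
    "mad": rng.uniform(0, 5, 30),                 # mean absolute deviation
    "modified_r2": rng.uniform(0, 1, 30),
    "calibration_factor": rng.uniform(0.5, 2.0, 30),
})

# Standardize, extract three latent factors, and build a composite index
# as a variance-weighted combination of the factor scores.
z = StandardScaler().fit_transform(gof)
fa = FactorAnalysis(n_components=3, random_state=0)
scores = fa.fit_transform(z)
weights = np.var(scores, axis=0)
index = scores @ (weights / weights.sum())
print(index[:5].round(3))
```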


Water ◽  
2021 ◽  
Vol 13 (1) ◽  
pp. 107
Author(s):  
Elahe Jamalinia ◽  
Faraz S. Tehrani ◽  
Susan C. Steele-Dunne ◽  
Philip J. Vardon

Climatic conditions and vegetation cover influence the water flux in a dike and, potentially, its stability. A comprehensive numerical simulation is computationally too expensive to be used for near real-time analysis of a dike network. Therefore, this study investigates a random forest (RF) regressor as a data-driven surrogate for a numerical model to forecast the temporal macro-stability of dikes. To that end, daily inputs and outputs of a ten-year coupled numerical simulation of an idealised dike (2009–2019) are used to create a synthetic data set, comprising features that can be observed from the dike surface, with the calculated factor of safety (FoS) as the target variable. The data set before 2018 is split into training and testing sets to build and train the RF. The predicted FoS is strongly correlated with the numerical FoS for data in the test set (before 2018). However, the trained model shows lower performance on the evaluation set (after 2018) when further surface cracking occurs. This proof of concept shows that a data-driven surrogate can be used to determine dike stability under conditions similar to the training data, which could help identify vulnerable locations in a dike network for further examination.
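A minimal sketch of the surrogate idea, assuming hypothetical surface observables and a synthetic FoS target: it trains a scikit-learn RandomForestRegressor on an early slice of daily data and scores it on a later slice. It does not reproduce the paper's coupled simulation, feature set, or exact train/test/evaluation split.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

# Hypothetical daily surface observables and a factor of safety (FoS) target,
# standing in for the outputs of the coupled numerical dike simulation.
rng = np.random.default_rng(2)
dates = pd.date_range("2009-01-01", "2018-12-31", freq="D")
df = pd.DataFrame({
    "precipitation": rng.gamma(2.0, 2.0, len(dates)),
    "evapotranspiration": rng.uniform(0, 5, len(dates)),
    "leaf_area_index": rng.uniform(0, 6, len(dates)),
    "surface_moisture": rng.uniform(0.1, 0.5, len(dates)),
}, index=dates)
df["fos"] = (1.5 - 0.02 * df["precipitation"] + 0.3 * df["surface_moisture"]
             + rng.normal(0, 0.01, len(dates)))

# Train on earlier years, score on a later, unseen year.
train, test = df.loc[:"2016"], df.loc["2017"]
rf = RandomForestRegressor(n_estimators=300, random_state=0)
rf.fit(train.drop(columns="fos"), train["fos"])
print("test R2:", round(r2_score(test["fos"], rf.predict(test.drop(columns="fos"))), 3))
```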


2021 ◽  
Vol 11 (4) ◽  
pp. 1431
Author(s):  
Sungsik Wang ◽  
Tae Heung Lim ◽  
Kyoungsoo Oh ◽  
Chulhun Seo ◽  
Hosung Choo

This article proposes a method for predicting wide-range two-dimensional refractivity for synthetic aperture radar (SAR) applications, using an inverse distance weighted (IDW) interpolation of high-altitude radio refractivity data from multiple meteorological observatories. The radio refractivity is extracted from an atmospheric data set of twenty meteorological observatories around the Korean Peninsula along a given altitude. Then, from the sparse refractive data, the two-dimensional regional radio refractivity of the entire Korean Peninsula is derived using the IDW interpolation, taking the curvature of the Earth into account. The refractivities for the four seasons of 2019 are derived at the locations of seven meteorological observatories within the Korean Peninsula, using the refractivity data from the other nineteen observatories. The atmospheric refractivities on 15 February 2019 are then evaluated across the entire Korean Peninsula, using the atmospheric data collected from the twenty meteorological observatories. We found that the proposed IDW interpolation yields the lowest average error, the lowest average root-mean-square error (RMSE) of ∇M (the gradient of M), and more continuous results than other methods. To assess the resulting IDW refractivity interpolation for airborne SAR applications, the propagation path losses across Pohang and Heuksando are obtained using the standard atmospheric condition of ∇M = 118 and the observation-based interpolated atmospheric conditions on 15 February 2019. On the terrain surface ranging from 90 km to 190 km, the average path losses under the standard and derived conditions are 179.7 dB and 182.1 dB, respectively. Finally, based on the air-to-ground scenario in the SAR application, two-dimensional illuminated field intensities on the terrain surface are illustrated.
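A basic inverse distance weighted interpolation can be sketched as below. This is a generic IDW in planar coordinates with placeholder station locations and refractivity values; it omits the Earth-curvature handling and the real observatory data used in the article.

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0):
    """Basic inverse distance weighted (IDW) interpolation.

    xy_known : (N, 2) station coordinates, values : (N,) refractivity,
    xy_query : (M, 2) points to interpolate. The Earth-curvature
    correction applied in the article is omitted here.
    """
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    d = np.maximum(d, 1e-9)              # avoid division by zero at station locations
    w = 1.0 / d ** power
    return (w @ values) / w.sum(axis=1)

# Placeholder stations (longitude, latitude) and refractivity values in N-units.
stations = np.array([[126.9, 37.5], [129.3, 36.0], [126.5, 33.5], [128.6, 35.9]])
n_values = np.array([320.0, 315.0, 330.0, 318.0])
grid = np.array([[127.5, 36.5], [128.0, 34.8]])
print(idw(stations, n_values, grid).round(1))
```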


Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. U67-U76 ◽  
Author(s):  
Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because of the computation of the Hessian, so an efficient approximation is introduced. The approximation is achieved by computing only a limited number of diagonals in the operators involved. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts when compared to a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at roughly two orders of magnitude lower cost, but it is dip limited, though in a controllable way, compared to the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data have highly irregular sampling along the shot coordinate, and they suffer from significant near-surface effects. Approximate regularization/datuming returns common-receiver data that are superior in appearance to conventionally datumed data.
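The inversion step described above amounts to a weighted, damped least-squares solve. The sketch below illustrates only that generic step on a random complex operator; the actual wavefield-extrapolation operator, weighting, and the limited-diagonal Hessian approximation from the paper are not reproduced.

```python
import numpy as np

# Generic weighted, damped least squares: m = (G^H W G + eps I)^(-1) G^H W d.
# G stands in for a wavefield-extrapolation operator; here it is random, and the
# full Hessian is formed (the paper instead approximates it with a few diagonals).
rng = np.random.default_rng(3)
n_data, n_model = 120, 80
G = rng.normal(size=(n_data, n_model)) + 1j * rng.normal(size=(n_data, n_model))
m_true = rng.normal(size=n_model)
d = G @ m_true + 0.05 * rng.normal(size=n_data)

W = np.eye(n_data)                                   # data weights (identity here)
eps = 1e-2                                           # damping parameter
H = G.conj().T @ W @ G + eps * np.eye(n_model)       # full (unapproximated) Hessian
m_est = np.linalg.solve(H, G.conj().T @ W @ d)
print("relative model error:",
      round(np.linalg.norm(m_est.real - m_true) / np.linalg.norm(m_true), 3))
```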


2011 ◽  
Vol 90-93 ◽  
pp. 3277-3282 ◽  
Author(s):  
Bai Chao Wu ◽  
Ai Ping Tang ◽  
Lian Fa Wang

Delaunay triangulation and constrained Delaunay triangulation underpin three-dimensional geographical information systems, one of the hot issues of the contemporary era; they are also widely applied in finite element methods, terrain modeling and object reconstruction, Euclidean minimum spanning trees, and other applications. An algorithm for generating constrained Delaunay triangulations in the two-dimensional plane is presented. The algorithm permits constrained edges and polygons (possibly with holes) to be specified in the triangulation, and the data structures related to constrained edges and polygons are described. To largely maintain the Delaunay criterion, new incremental points are added onto the constrained edges. After the data set is preprocessed, the constrained Delaunay triangulation is built as follows: first, the constrained edges and polygons generate an initial triangulation; then the remaining points complete the triangulation. Pseudo-code for the algorithm is provided. Finally, conclusions and directions for further study are given.
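As a loose illustration of the "add incremental points on the constraints, then triangulate" idea, the sketch below densifies a constrained edge and builds a standard (unconstrained) Delaunay triangulation with SciPy; true constrained-edge enforcement, polygon holes, and the paper's data structures are not implemented here, and the points are placeholders.

```python
import numpy as np
from scipy.spatial import Delaunay

def densify_constraint(p0, p1, n_new=3):
    """Insert extra points along a constrained edge, echoing the idea of adding
    incremental points on constraints so the Delaunay criterion is largely kept."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    t = np.linspace(0, 1, n_new + 2)[1:-1]
    return p0 + t[:, None] * (p1 - p0)

# Placeholder points and one constrained edge.
points = np.array([[0, 0], [4, 0], [4, 3], [0, 3], [2, 1.2], [1, 2.4], [3, 2.2]], float)
constraint = (points[0], points[2])                  # edge that should appear in the mesh
augmented = np.vstack([points, densify_constraint(*constraint)])

tri = Delaunay(augmented)                            # standard (unconstrained) Delaunay
print(len(tri.simplices), "triangles")
```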


2014 ◽  
Vol 7 (3) ◽  
pp. 781-797 ◽  
Author(s):  
P. Paatero ◽  
S. Eberly ◽  
S. G. Brown ◽  
G. A. Norris

Abstract. The EPA PMF (Environmental Protection Agency positive matrix factorization) version 5.0 and the underlying multilinear engine-executable ME-2 contain three methods for estimating uncertainty in factor analytic models: classical bootstrap (BS), displacement of factor elements (DISP), and bootstrap enhanced by displacement of factor elements (BS-DISP). The goal of these methods is to capture the uncertainty of PMF analyses due to random errors and rotational ambiguity. It is shown that the three methods complement each other: depending on characteristics of the data set, one method may provide better results than the other two. Results are presented using synthetic data sets, including interpretation of diagnostics, and recommendations are given for parameters to report when documenting uncertainty estimates from EPA PMF or ME-2 applications.
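A hedged sketch of the classical bootstrap (BS) idea only, using scikit-learn's NMF as a generic stand-in for PMF/ME-2: rows are resampled, the model is refit, bootstrap factors are matched to the base factors, and the spread of the matched profiles is taken as an uncertainty estimate. DISP and BS-DISP are not reproduced, and the synthetic data and matching scheme are assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.decomposition import NMF

rng = np.random.default_rng(4)
X = rng.gamma(2.0, 1.0, size=(200, 10))              # placeholder samples-by-species matrix

def fit_profiles(data, k=3):
    model = NMF(n_components=k, init="nndsvda", max_iter=500, random_state=0)
    model.fit(data)
    return model.components_                          # factor profiles (k x species)

base = fit_profiles(X)
boot = []
for _ in range(50):                                   # classical bootstrap: resample rows
    sample = X[rng.integers(0, len(X), len(X))]
    prof = fit_profiles(sample)
    # Match each bootstrap factor to its most correlated base factor.
    corr = np.array([[np.corrcoef(p, b)[0, 1] for b in base] for p in prof])
    rows, cols = linear_sum_assignment(-corr)
    aligned = np.empty_like(prof)
    aligned[cols] = prof[rows]
    boot.append(aligned)

# Bootstrap spread of each profile element as a simple uncertainty estimate.
print(np.std(np.stack(boot), axis=0).round(2))
```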


Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. C81-C92 ◽  
Author(s):  
Helene Hafslund Veire ◽  
Hilde Grude Borgos ◽  
Martin Landrø

Effects of pressure and fluid saturation can have the same degree of impact on seismic amplitudes and differential traveltimes in the reservoir interval; thus, they are often inseparable by analysis of a single stacked seismic data set. In such cases, time-lapse AVO analysis offers an opportunity to discriminate between the two effects. To make use of information about pressure- and saturation-related changes in reservoir modeling and simulation, the uncertainty in the estimations must be quantified. One way of analyzing uncertainties is to formulate the problem in a Bayesian framework. Here, the solution of the problem is represented by a probability density function (PDF), providing estimates of the uncertainties as well as direct estimates of the properties. A stochastic model for estimating pressure and saturation changes from time-lapse seismic AVO data is investigated within a Bayesian framework. Well-known rock-physics relationships are used to set up a prior stochastic model. PP reflection coefficient differences are used to establish a likelihood model linking reservoir variables and time-lapse seismic data. The methodology incorporates correlation between the different variables of the model as well as spatial dependencies for each of the variables. In addition, possible bottlenecks causing large uncertainties in the estimations can be identified through sensitivity analysis of the system. The method has been tested on 1D synthetic data and on field time-lapse seismic AVO data from the Gullfaks Field in the North Sea.
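The Bayesian estimation described above can be caricatured by a linear-Gaussian update: a Gaussian prior on pressure and saturation changes, a linearized operator mapping them to PP reflection-coefficient differences, and a closed-form Gaussian posterior. The sensitivities, covariances, and data values in the sketch below are placeholders, not the rock-physics relations or spatially coupled model of the paper.

```python
import numpy as np

# Linear-Gaussian Bayesian update: m ~ N(m0, Cm), d = G m + e, e ~ N(0, Cd),
# with m = [dP, dS] (pressure and saturation change) and d the time-lapse
# PP reflection-coefficient differences at a few angles. Values are placeholders.
m0 = np.array([0.0, 0.0])
Cm = np.diag([4.0, 0.04])                      # prior variances for dP (MPa^2) and dS
G = np.array([[0.010, -0.30],                  # near-angle sensitivity (assumed)
              [0.015, -0.45],                  # mid-angle
              [0.020, -0.60]])                 # far-angle
Cd = 0.001 * np.eye(3)                         # data noise covariance
d_obs = np.array([-0.02, -0.03, -0.04])        # observed AVO differences

Cm_inv, Cd_inv = np.linalg.inv(Cm), np.linalg.inv(Cd)
C_post = np.linalg.inv(G.T @ Cd_inv @ G + Cm_inv)          # posterior covariance
m_post = C_post @ (G.T @ Cd_inv @ d_obs + Cm_inv @ m0)     # posterior mean
print("posterior mean [dP, dS]:", m_post.round(3))
print("posterior std:", np.sqrt(np.diag(C_post)).round(3))
```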


2016 ◽  
Vol 14 (1) ◽  
pp. 172988141769231 ◽  
Author(s):  
Yingfeng Cai ◽  
Youguo He ◽  
Hai Wang ◽  
Xiaoqiang Sun ◽  
Long Chen ◽  
...  

The emergence and development of deep learning in the machine learning field provides a new approach to vision-based pedestrian recognition. To achieve better performance in this application, an improved weakly supervised hierarchical deep learning pedestrian recognition algorithm based on two-dimensional deep belief networks is proposed. The improvements are made by taking into consideration the weaknesses in the structure and training methods of existing classifiers. First, the traditional one-dimensional deep belief network is expanded to two dimensions, which allows image matrices to be loaded directly and preserves more information of the sample space. Then, a determination regularization term with a small weight is added to the traditional unsupervised training objective function. Through this modification, the original unsupervised training is transformed into weakly supervised training, which gives the extracted features discrimination ability. Multiple sets of comparative experiments show that the proposed algorithm achieves a higher recognition rate than other deep learning algorithms, outperforms most existing state-of-the-art methods on a non-occlusion pedestrian data set, and performs fairly on weakly and heavily occluded data sets.
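The core training modification, an unsupervised objective plus a lightly weighted supervised term, can be sketched as below with a plain autoencoder in PyTorch standing in for the two-dimensional deep belief network (which is not reproduced here); the architecture, the weight lam, and the random data are assumptions.

```python
import torch
import torch.nn as nn

# Minimal sketch of "unsupervised loss + small-weight supervised term":
# an autoencoder reconstruction loss (standing in for DBN pre-training)
# plus a lightly weighted classification term on the learned features.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 128), nn.ReLU())
decoder = nn.Linear(128, 32 * 32)
classifier = nn.Linear(128, 2)                       # pedestrian / non-pedestrian
params = [*encoder.parameters(), *decoder.parameters(), *classifier.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)

x = torch.rand(16, 1, 32, 32)                        # placeholder image batch
y = torch.randint(0, 2, (16,))                       # placeholder labels
lam = 0.05                                           # small weight on the supervised term

opt.zero_grad()
h = encoder(x)
recon_loss = nn.functional.mse_loss(decoder(h), x.flatten(1))
disc_loss = nn.functional.cross_entropy(classifier(h), y)
loss = recon_loss + lam * disc_loss                  # weakly supervised objective
loss.backward()
opt.step()
print(float(loss))
```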


2019 ◽  
Vol 217 (3) ◽  
pp. 1727-1741 ◽  
Author(s):  
D W Vasco ◽  
Seiji Nakagawa ◽  
Petr Petrov ◽  
Greg Newman

SUMMARY We introduce a new approach for locating earthquakes using arrival times derived from waveforms. The most costly computational step of the algorithm scales with the number of stations in the active seismographic network. In this approach, a variation on existing grid-search methods, a series of full waveform simulations is conducted for all receiver locations, with sources positioned successively at each station. The traveltime field over the region of interest is calculated by applying a phase-picking algorithm to the numerical wavefields produced by each simulation. An event is located by subtracting the stored traveltime field from the arrival time at each station. This provides a shifted and time-reversed traveltime field for each station. The shifted and time-reversed fields all approach the origin time of the event at the source location. The mean or median value at the source location thus approximates the event origin time. Measures of dispersion about this mean or median time at each grid point, such as the sample standard error and the average deviation, are minimized at the correct source position. Uncertainty in the event position is provided by the contours of standard error defined over the grid. An application of this technique to a synthetic data set indicates that the approach provides stable locations even when the traveltimes are contaminated by additive random noise containing a significant number of outliers, as well as by velocity model errors. It is found that the waveform-based method outperforms one based upon the eikonal equation for a velocity model with rapid spatial variations in properties due to layering. A comparison with conventional location algorithms in both laboratory and field settings demonstrates that the technique performs at least as well as existing techniques.
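A compact sketch of the grid-search logic described above, with straight-ray traveltimes in a constant-velocity medium standing in for the full waveform simulations and phase picking: each station's traveltime field is subtracted from its arrival time, and the grid point where the resulting origin-time estimates agree best (minimum dispersion across stations) is taken as the location. The velocity, geometry, and noise level are placeholders.

```python
import numpy as np

rng = np.random.default_rng(5)
v = 3.0                                               # km/s, placeholder velocity
stations = rng.uniform(0, 20, size=(8, 2))            # station coordinates (km)
src, t0 = np.array([12.0, 7.0]), 4.0                  # true source location and origin time

# Precomputed traveltime field from each station to every grid point
# (straight-ray stand-in for the station-side waveform simulations).
gx, gy = np.meshgrid(np.linspace(0, 20, 201), np.linspace(0, 20, 201))
grid = np.stack([gx.ravel(), gy.ravel()], axis=1)
T = np.linalg.norm(grid[None] - stations[:, None], axis=2) / v   # (stations, grid points)

# Noisy picked arrival times at the stations.
arrivals = t0 + np.linalg.norm(stations - src, axis=1) / v + rng.normal(0, 0.02, 8)

origin_candidates = arrivals[:, None] - T              # shifted, time-reversed fields
spread = origin_candidates.std(axis=0)                 # dispersion across stations
best = int(np.argmin(spread))
print("estimated location:", grid[best].round(2),
      "origin time:", round(origin_candidates[:, best].mean(), 2))
```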

