A Symmetric Banzhaf Cooperation Value for Games with a Proximity Relation among the Agents

Symmetry
2020
Vol 12 (7)
pp. 1196
Author(s):
Inés Gallego
Julio R. Fernández
Andrés Jiménez-Losada
Manuel Ordóñez

A cooperative game represents a situation in which a set of agents form coalitions in order to achieve a common good. To allocate the benefits of this cooperation, several values exist, such as the Shapley value or the Banzhaf value. Sometimes not all communications between players are considered feasible, and a graph is introduced to represent them; Myerson (1977) introduced a Shapley-type value for these situations. Another model for cooperative games is the Owen model, Owen (1977), in which players with similar interests form a priori unions that bargain as a block in order to get a fair payoff. The model of cooperation introduced in this paper combines these two models following Casajus (2007). The situation consists of a communication graph on which a two-step value is defined: in the first step, a negotiation among the connected components takes place, and in the second, players inside each connected component bargain. This model can be extended to fuzzy contexts such as proximity relations, which consider leveled closeness between agents, as we proposed in 2016. There are two extensions of the Banzhaf value to the Owen model, because the natural way loses the group symmetry property. In this paper we construct an appropriate value to extend the symmetric option to situations with a proximity relation and provide it with an axiomatization. We then apply this value to a political situation.
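As a point of reference, the classical (non-graph) Banzhaf value averages each player's marginal contribution over all coalitions of the other players. A minimal Python sketch, assuming the characteristic function `v` is given on frozensets; the three-player majority game is a hypothetical illustration, not the paper's symmetric two-step construction:

```python
from itertools import combinations

def banzhaf_value(players, v):
    """Raw Banzhaf value: average marginal contribution of each
    player over all coalitions of the remaining players."""
    n = len(players)
    values = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        # iterate over all subsets of the remaining players
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                total += v(frozenset(S) | {i}) - v(frozenset(S))
        values[i] = total / 2 ** (n - 1)
    return values

# Hypothetical 3-player majority game: a coalition wins (value 1)
# iff it has at least 2 members.
v = lambda S: 1.0 if len(S) >= 2 else 0.0
print(banzhaf_value([1, 2, 3], v))  # each player gets 0.5
```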

2020
Vol DMTCS Proceedings, 28th...
Author(s):
Martina Juhnke-Kubitzke
Timo De Wolff

Amoebas are projections of complex algebraic varieties in the algebraic torus under a Log-absolute-value map, which have connections to various mathematical subjects. While amoebas of hypersurfaces have been intensively studied in recent years, the non-hypersurface case is barely understood so far. We investigate intersections of amoebas of n hypersurfaces in (C∗)n, which are genuine supersets of amoebas given by non-hypersurface varieties. Our main results are amoeba analogs of Bernstein's Theorem and Bézout's Theorem, providing an upper bound for the number of connected components of such intersections. Moreover, we show that the order map for hypersurface amoebas can be generalized in a natural way to intersections of amoebas. We show that, analogous to the case of amoebas of hypersurfaces, the restriction of this generalized order map to a single connected component is still 1-to-1.


1996
Vol 05 (04)
pp. 427-439
Author(s):
RICCARDO BENEDETTI
CARLO PETRONIO

In this paper we discuss the beautiful idea of Justin Roberts [7] (see also [8]) to re-obtain the Turaev-Viro invariants [11] via skein theory, and re-prove elementarily the Turaev-Walker theorem [9], [10], [13]. We do this by exploiting the presentation of 3-manifolds introduced in [1], [4]. Our presentation supports in a very natural way a formal implementation of Roberts’ idea. More specifically, what we show is how to explicitly extract from an o-graph (the object by which we represent a manifold, see below), one of the framed links in S3 which Roberts uses in the construction of his invariant, and a planar diagrammatic representation of such a link. This implies that the proofs of invariance and equality with the Turaev-Viro invariant can be carried out in a completely “algebraic” way, in terms of a planar diagrammatic calculus which does not require any interpretation of 3-dimensional figures. In particular, when proving the “term-by-term” equality of the expansion of the Roberts invariant with the state sum which gives the Turaev-Viro invariant, we simultaneously apply several times the “fusion rule” (which is formally defined, strictly speaking, only in diagrammatic terms), showing that the “braiding and twisting” which a priori may exist on tetrahedra is globally dispensable. In our point of view the success of this formal “algebraic” approach witnesses a certain efficiency of our presentation of 3-manifolds via o-graphs. In this work we will widely use recoupling theory which was very clearly exposed in [2], and therefore we will avoid recalling notations. Actually, for the purpose of stating and proving our results we will need to slightly extend the class of trivalent ribbon diagrams on which the bracket can be computed. We also address the reader to the references quoted in [2], in particular for the fundamental contributions of Lickorish to this area. 
In our approach it is more natural to consider invariants of compact 3-manifolds with non-empty boundary. The case of closed 3-manifolds is included by introducing a correction factor corresponding to boundary spheres, as explained in §2. Our main result is actually an extension to manifolds with boundary of the Turaev-Walker theorem: we show that the Turaev-Viro invariant of such a manifold coincides (up to a factor which depends on the Euler characteristic) with the Reshetikhin-Turaev-Witten invariant of the manifold mirrored in its boundary.


1999
Vol 31 (03)
pp. 579-595
Author(s):
J. Cao

The distribution of the size of one connected component and the largest connected component of the excursion set is derived for stationary χ2, t and F fields, in the limit of high or low thresholds. This extends previous results for stationary Gaussian fields (Nosko 1969, Adler 1981) and for χ2 fields in one and two dimensions (Aronowich and Adler 1986, 1988). An application of this is to detect regional changes in positron emission tomography (PET) images of blood flow in human brain, using the size of the largest connected component of the excursion set as a test statistic.


2021
Vol 2021
pp. 1-9
Author(s):
Vincent Majanga
Serestina Viriri

Recent advances in medical imaging analysis, especially the use of deep learning, are helping to identify, detect, classify, and quantify patterns in radiographs. At the center of these advances is the ability to explore hierarchical feature representations learned from data. Deep learning is becoming the most sought-after technique, leading to enhanced performance in the analysis of medical applications and systems. Deep learning techniques have achieved strong results in dental image segmentation, a crucial step that helps the dentist diagnose dental caries. The performance of these deep networks is, however, restrained by various challenging features of dental carious lesions. Segmentation of dental images is difficult due to a vast variety of topologies, the intricacies of medical structures, and poor image quality caused by conditions such as low contrast, noise, and irregular, fuzzy edge borders, which result in unsuccessful segmentation. The dental segmentation method used here is based on thresholding and connected component analysis. Images are preprocessed with a Gaussian blur filter to remove noise and corrupted pixels, then enhanced using erosion and dilation morphology operations. Finally, segmentation is done through thresholding, and connected components are identified to extract the Region of Interest (ROI) of the teeth. The method was evaluated on an augmented dataset of 11,114 dental images: it was trained on 10,090 training images and tested on 1,024 testing images. The proposed method achieved 93% for both precision and recall.
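The thresholding and connected-component step of such a pipeline can be sketched in plain Python. This is a minimal sketch only: the Gaussian blur and morphology stages are omitted, and the 4-connected BFS labeling and toy image below are illustrative assumptions, not the paper's implementation:

```python
from collections import deque

def segment(image, thresh):
    """Threshold a grayscale image, then label 4-connected
    foreground components and return their bounding boxes (ROIs)."""
    h, w = len(image), len(image[0])
    binary = [[1 if image[y][x] >= thresh else 0 for x in range(w)]
              for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    rois = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                # BFS flood fill to collect one connected component
                q = deque([(y, x)])
                seen[y][x] = True
                ys, xs = [], []
                while q:
                    cy, cx = q.popleft()
                    ys.append(cy)
                    xs.append(cx)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w and \
                           binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                rois.append((min(ys), min(xs), max(ys), max(xs)))
    return rois

img = [[0, 9, 0, 0],
       [0, 9, 0, 8],
       [0, 0, 0, 8]]
print(segment(img, 5))  # two components, one bounding box each
```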


2016
Vol 7 (1)
pp. 41-57
Author(s):
Nitigya Sambyal
Pawanesh Abrol

A text detection and segmentation system serves as an important method for document analysis, as it helps in many content-based image analysis tasks. This research paper proposes a connected component technique for text extraction and character segmentation using maximally stable extremal regions (MSERs) for text line formation, followed by connected components to determine separate characters. The system uses a cluster size of five, selected by experimental evaluation, for identifying characters. The Sobel edge detector is used because it reduces execution time while maintaining the quality of the results. The algorithm is tested on a set of JPEG, PNG and BMP images over varying features such as font size, style, colour, background colour and text variation. Further, the CPU time for executing the algorithm with three different edge detectors, namely Prewitt, Sobel and Canny, is observed. Text identification using MSER gave very good results, whereas character segmentation gave on average 94.572% accuracy for the various test cases considered in this study.
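As a simplified stand-in for the character segmentation stage (not the MSER-based method of the paper), a vertical-projection split of a binary text line can be sketched as follows; the toy two-row image is an illustrative assumption:

```python
def segment_characters(binary):
    """Split a binary text-line image into character column spans
    using vertical projection: empty columns separate characters."""
    h, w = len(binary), len(binary[0])
    col_has_ink = [any(binary[y][x] for y in range(h)) for x in range(w)]
    spans, start = [], None
    for x, ink in enumerate(col_has_ink):
        if ink and start is None:
            start = x                       # a character begins
        elif not ink and start is not None:
            spans.append((start, x - 1))    # a character ends
            start = None
    if start is not None:
        spans.append((start, w - 1))
    return spans

line = [[1, 1, 0, 1, 0, 0, 1, 1],
        [1, 0, 0, 1, 0, 0, 0, 1]]
print(segment_characters(line))  # three character spans
```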


2011
Vol 22 (05)
pp. 1161-1185
Author(s):
ABUSAYEED SAIFULLAH
YUNG H. TSIN

A self-stabilizing algorithm is a distributed algorithm that can start from any initial (legitimate or illegitimate) state and eventually converge to a legitimate state in finite time without being assisted by any external agent. In this paper, we propose a self-stabilizing algorithm for finding the 3-edge-connected components of an asynchronous distributed computer network. The algorithm stabilizes in O(dnΔ) rounds and every processor requires O(n log Δ) bits, where Δ(≤ n) is an upper bound on the degree of a node, d(≤ n) is the diameter of the network, and n is the total number of nodes in the network. These time and space complexities are at least a factor of n better than those of the previously best-known self-stabilizing algorithm for 3-edge-connectivity. The result of the computation is kept in a distributed fashion by assigning, upon stabilization of the algorithm, a component identifier to each processor which uniquely identifies the 3-edge-connected component to which the processor belongs. Furthermore, the algorithm is designed in such a way that its time complexity is dominated by that of the self-stabilizing depth-first search spanning tree construction, in the sense that any improvement made in the latter automatically implies an improvement in the time complexity of the algorithm.


Author(s):  
DACHENG WANG
SARGUR N. SRIHARI

Automatic analysis of images of forms is a problem of both practical and theoretical interest, due to its importance in office automation and to the conceptual challenges it poses for document image analysis. We describe an approach to the extraction of text, both typed and handwritten, from scanned and digitized images of filled-out forms. In decomposing a filled-out form into three basic components, namely boxes, line segments, and the remainder (handwritten and typed characters, words, and logos), the method does not use a priori knowledge of form structure. The input binary image is first segmented into small and large connected components. Complex boxes are decomposed into elementary regions using an approach based on key-point analysis. Handwritten and machine-printed text that touches or overlaps guide lines and boxes is separated by removing the lines. Characters broken by line removal are rejoined using a character patching method. Experimental results with filled-out forms from several different domains (insurance, banking, tax, retail and postal) are given.
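The guide-line removal step can be illustrated with a crude sketch that erases long horizontal runs of foreground pixels; the run-length threshold and toy image are illustrative assumptions, not the authors' method:

```python
def remove_horizontal_lines(binary, min_run):
    """Erase horizontal runs of foreground pixels of length
    >= min_run, a crude stand-in for form guide-line removal."""
    h, w = len(binary), len(binary[0])
    out = [row[:] for row in binary]
    for y in range(h):
        x = 0
        while x < w:
            if binary[y][x]:
                start = x
                while x < w and binary[y][x]:
                    x += 1
                if x - start >= min_run:     # long run: treat as a line
                    for k in range(start, x):
                        out[y][k] = 0
            else:
                x += 1
    return out

row_text = [0, 1, 1, 0, 0, 1, 0, 0]   # short runs survive
row_line = [1, 1, 1, 1, 1, 1, 1, 1]   # long run is erased
img = [row_text, row_line]
print(remove_horizontal_lines(img, 5))
```

Text strokes shorter than the threshold are kept; characters that a real guide line cuts through would then need the patching step described above.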


2016
Vol 2 (49)
pp. 46
Author(s):
Amitai Etzioni

Liberal communitarianism holds that a good society is based on a carefully crafted balance between individual rights and the common good; that both normative elements have the same fundamental standing and neither a priori trumps the other. Societies can lose the good balance either by becoming excessively committed to the common good (e.g. national security) or to individual rights (e.g. privacy). Even societies that have established a careful balance often need to recalibrate it following changes in historical conditions (such as the 2001 attacks on the American homeland) and technological developments (such as the invention of smart cell phones).


2021
Vol 12 (3)
pp. 25-43
Author(s):
Maan Ammar
Muhammad Shamdeen
Mazen Kasedeh
Kinan Mansour
Waad Ammar

We introduce in this paper a reliable method for automatic extraction of lung nodules from CT chest images and shed light on the details of using the Weighted Euclidean Distance (WED) to classify lung connected components as nodule or non-nodule. We also explain the use of Connected Component Labeling (CCL) in an effective and flexible method for extracting the lung area from chest CT images with a wide variety of shapes and sizes. In addition to CCL, this lung extraction method makes use of some morphological operations. Our tests have shown that the performance of the introduced method is high. Finally, in order to check whether the method works correctly for both healthy and patient CT images, we tested it on images of healthy persons and demonstrated that the overall performance of the method is satisfactory.
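A WED-based nearest-prototype classification of this kind can be sketched as follows; the feature choices, prototypes, and weights below are hypothetical illustrations, not the paper's values:

```python
import math

def wed(x, c, w):
    """Weighted Euclidean Distance between feature vector x and
    class prototype c, with per-feature weights w."""
    return math.sqrt(sum(wk * (xk - ck) ** 2
                         for xk, ck, wk in zip(x, c, w)))

def classify(x, prototypes, w):
    """Assign x to the class whose prototype is nearest in WED."""
    return min(prototypes, key=lambda label: wed(x, prototypes[label], w))

# Hypothetical 2-feature prototypes (e.g. component area, circularity)
protos = {"nodule": [30.0, 0.9], "not-nodule": [200.0, 0.4]}
weights = [0.01, 10.0]  # hypothetical per-feature weights
print(classify([40.0, 0.8], protos, weights))  # "nodule"
```

The weights let features on very different scales (pixel areas vs. shape ratios) contribute comparably to the distance.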


2020
Vol 54 (1)
pp. 143-161
Author(s):
A. Skoda

Let G = (N, E, w) be a weighted communication graph. For any subset A ⊆ N, we delete all minimum-weight edges in the subgraph induced by A. The connected components of the resultant subgraph constitute the partition 𝒫min(A) of A. Then, for every cooperative game (N, v), the 𝒫min-restricted game (N, v̅) is defined by v̅(A) = ∑_{F ∈ 𝒫min(A)} v(F) for all A ⊆ N. We prove that we can decide in polynomial time if there is inheritance of ℱ-convexity, i.e., if for every ℱ-convex game the 𝒫min-restricted game is ℱ-convex, where ℱ-convexity is obtained by restricting convexity to connected subsets. This implies that we can also decide in polynomial time for any unweighted graph if there is inheritance of convexity for Myerson’s graph-restricted game.
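The construction of 𝒫min(A) and the restricted game follows directly from the definitions above; the weighted graph and the game in this sketch are illustrative assumptions:

```python
def pmin_partition(A, edges):
    """Delete all minimum-weight edges of the subgraph induced by A,
    then return the connected components of what remains."""
    induced = [(u, v, w) for (u, v, w) in edges if u in A and v in A]
    if induced:
        wmin = min(w for (_, _, w) in induced)
        induced = [(u, v, w) for (u, v, w) in induced if w > wmin]
    # union-find over the players of A
    parent = {i: i for i in A}
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for u, v, _ in induced:
        parent[find(u)] = find(v)
    comps = {}
    for i in A:
        comps.setdefault(find(i), set()).add(i)
    return list(comps.values())

def restricted_game(A, edges, v):
    """Pmin-restricted game: v_bar(A) = sum of v(F) over F in Pmin(A)."""
    return sum(v(frozenset(F)) for F in pmin_partition(A, edges))

# Hypothetical weighted graph on N = {1,2,3,4} and a simple game
edges = [(1, 2, 1), (2, 3, 2), (3, 4, 2)]
v = lambda S: len(S) ** 2
# Deleting the unique minimum-weight edge (1,2) splits {1,2,3,4}
# into {1} and {2,3,4}, so v_bar = 1 + 9 = 10.
print(restricted_game({1, 2, 3, 4}, edges, v))
```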

