Tanking, Shirking, and Running Dead: The Role of Economics and Large Data Sets in Identifying Competition Corruption and its Causes

2020 ◽  
pp. 19-34
Author(s):  
Wray Vamplew
Author(s):  
Afrand Agah ◽  
Mehran Asadi

This article introduces a new method for discovering the role of influential people in online social networks and presents an algorithm that identifies the influential users needed to reach a target in the network, giving organizations a strategic advantage in directing the scope of their digital marketing strategies. Social links among friends play an important role in dictating behavior in online social networks: these links determine the flow of information in the form of wall posts via shares, likes, re-tweets, mentions, and so on, which in turn determines the influence of a node. The article first identifies correlated nodes in large data sets using a customized divide-and-conquer algorithm and then measures the influence of each of these nodes with a linear function. Furthermore, the empirical results show that the users with the highest influence are those whose number of friends is closest to the average number of friends per node, i.e., the total number of friend links in the network divided by the total number of nodes.
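A minimal sketch of the reported heuristic follows; the NetworkX graph, the function name, and the synthetic network are illustrative assumptions, not the authors' divide-and-conquer implementation.

```python
# Toy illustration of the reported finding: the most "influential" users are those
# whose friend count lies closest to the network's average degree
# (total friend links divided by the number of nodes). Not the authors' algorithm.
import networkx as nx

def rank_by_average_degree_proximity(graph: nx.Graph):
    n = graph.number_of_nodes()
    avg_degree = sum(dict(graph.degree()).values()) / n  # total friends / total nodes
    # Smaller distance to the average degree -> higher presumed influence
    return sorted(graph.nodes(), key=lambda u: abs(graph.degree(u) - avg_degree))

if __name__ == "__main__":
    g = nx.barabasi_albert_graph(1000, 5, seed=42)  # stand-in for a social graph
    print(rank_by_average_degree_proximity(g)[:10])  # ten candidate influencers
```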


2014 ◽  
pp. 26-35
Author(s):  
Dan Cvrcek ◽  
Vaclav Matyas ◽  
Marek Kumpost

Many papers and articles attempt to define or even quantify privacy, typically with a major focus on anonymity. Related research on evidence-based trust models for ubiquitous computing environments prompted us to take a closer look at the definition(s) of privacy in the Common Criteria, which we then transcribed in a somewhat more formal manner. This led us to a further review of unlinkability and a revision of another semi-formal model allowing for the expression of anonymity and unlinkability, the Freiburg Privacy Diamond. We propose new means of describing the (obviously only observable) characteristics of a system to reflect the role of contexts in profiling, and linking, users with actions in a system. We believe this approach should allow for evaluating privacy in large data sets.
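As a toy illustration of the kind of observable, context-based linking of users with actions discussed above (not the authors' formalization of the Common Criteria notions or the Freiburg Privacy Diamond), one can compute the set of users that remain consistent with an observed action given its context; all names below are hypothetical.

```python
# Toy anonymity-set computation: given which users were observable in each
# context and the context in which each action occurred, the users still
# consistent with an action form its anonymity set. Illustrative only; not
# the semi-formal model proposed in the paper.
users_in_context = {
    "wifi-ap-3": {"alice", "bob"},
    "wifi-ap-7": {"bob", "carol", "dave"},
}
action_context = {"post-42": "wifi-ap-7"}  # context in which each action was observed

def anonymity_set(action: str) -> set:
    return users_in_context[action_context[action]]

print(anonymity_set("post-42"))  # {'bob', 'carol', 'dave'}: unlinkable among three users
```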


Author(s):  
Janusz Bobulski ◽  
Mariusz Kubanek

Big Data in medicine involves processing large data sets, both current and historical, as quickly as possible in order to support the diagnosis and treatment of patients' diseases. Support systems for these activities may combine pre-programmed rules based on data obtained from the medical interview with automatic analysis of diagnostic test results, leading to the classification of observations into a specific disease entity. The current Big Data revolution significantly expands the role of computer science in achieving these goals, which is why we propose a Big Data processing system that uses artificial intelligence to analyze and process medical images.


2017 ◽  
Vol 52 (4) ◽  
pp. 1731-1763 ◽  
Author(s):  
Ilias Filippou ◽  
Mark P. Taylor

We study the role of domestic and global factors in the payoffs of portfolios mimicking carry, dollar-carry, and momentum strategies. Using factors summarizing large data sets of macroeconomic and financial variables, we find that global equity-market factors are predictive for carry-trade returns, whereas U.S. inflation and consumption variables drive dollar-carry-trade payoffs, momentum returns are predominantly driven by U.S. inflation factors, and global factors capture the countercyclical nature of currency premia. We also find predictability in the exchange-rate component of each strategy and demonstrate strong economic value for risk-averse investors with mean-variance preferences, regardless of base currency.
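As a rough sketch of the general approach of summarizing a large macro-financial panel into a few factors and relating them to strategy payoffs (the paper's own factor construction, data, and estimation procedure are not reproduced here), one might write:

```python
# Sketch: extract principal-component factors from a large panel of standardized
# macro/financial series, then regress next-period carry-trade returns on them.
# Synthetic placeholder data; illustrative only, not the paper's factor model.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
panel = rng.standard_normal((300, 120))      # 300 months x 120 macro/financial series
carry_returns = rng.standard_normal(300)     # placeholder strategy payoffs

factors = PCA(n_components=8).fit_transform(
    (panel - panel.mean(0)) / panel.std(0))  # standardize, then extract 8 factors

X, y = factors[:-1], carry_returns[1:]       # use current factors to predict next month
model = LinearRegression().fit(X, y)
print("In-sample R^2:", model.score(X, y))
```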


2020 ◽  
Author(s):  
Christoph von Hagke

For understanding the formation of mountain belts it is necessary to gain quantitative insights into fault and fracture mechanics on multiple scales. In particular, to address the role of fluids in larger processes, it is essential to constrain fault and fracture geometries at depth, as well as to gain insights into how fluids influence fault mechanics. At least in part, the future of such analyses lies in exploiting large data sets, as well as in multi- and interdisciplinary research.

In this talk I will present results from a variety of geological settings, including dilatant faults at mid-ocean ridges, the Oman Mountains, the Khao Kwang fold-thrust belt in Thailand, and the European Alps. I will show how multi-scale studies and the use of large data sets help constrain fluid migration in mountain belts, fault geometries, and possible feedbacks between fluid flow and strain localization. The results are then applied to discuss the role of mechanical stratigraphy in the structural style of foreland fold-thrust belts.


Author(s):  
John A. Hunt

Spectrum-imaging is a useful technique for comparing different processing methods on very large data sets that are identical for each method. This paper is concerned with comparing methods of quantitative electron energy-loss spectroscopy (EELS) analysis in the Al-Li system. The spectrum-image analyzed here was obtained from an Al-10 at% Li foil aged to produce δ' precipitates that can span the foil thickness. Two 1024-channel EELS spectra offset in energy by 1 eV were recorded and stored at each pixel in the 80x80 spectrum-image (25 Mbytes). An energy range of 39-89 eV (20 channels/eV) is represented. During processing, the spectra are either subtracted to create an artifact-corrected difference spectrum, or the energy offset is numerically removed and the spectra are added to create a normal spectrum. The spectrum-images are processed into 2D floating-point images using methods and software described in [1].
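A minimal NumPy sketch of the two processing routes described above, assuming the 80x80x1024 layout and 20 channels/eV sampling stated in the text; the array names and placeholder data are assumptions, and this is not the software of [1].

```python
# Per-pixel processing of an 80x80 spectrum-image holding two 1024-channel EELS
# spectra offset by 1 eV (20 channels/eV -> 20-channel shift). Illustrative only.
import numpy as np

CHANNELS_PER_EV = 20
shift = 1 * CHANNELS_PER_EV                   # 1 eV offset expressed in channels

spec_a = np.random.rand(80, 80, 1024)         # placeholder for the first spectrum
spec_b = np.random.rand(80, 80, 1024)         # placeholder for the 1 eV offset spectrum

# Route 1: subtract the pair to form an artifact-corrected difference spectrum
difference = spec_a - spec_b

# Route 2: numerically remove the offset, then add to form a normal spectrum
spec_b_aligned = np.roll(spec_b, -shift, axis=-1)
normal = spec_a + spec_b_aligned

print(difference.shape, normal.shape)         # (80, 80, 1024) each
```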


Author(s):  
Thomas W. Shattuck ◽  
James R. Anderson ◽  
Neil W. Tindale ◽  
Peter R. Buseck

Individual particle analysis involves the study of tens of thousands of particles using automated scanning electron microscopy and elemental analysis by energy-dispersive x-ray emission spectroscopy (EDS). EDS produces large data sets that must be analyzed using multivariate statistical techniques. A complete study uses cluster analysis, discriminant analysis, and factor or principal components analysis (PCA). The three techniques are used in the study of particles sampled during the FeLine cruise to the mid-Pacific Ocean in the summer of 1990. The mid-Pacific aerosol provides information on long-range particle transport, iron deposition, sea-salt ageing, and halogen chemistry.

Aerosol particle data sets present a number of difficulties for pattern recognition using cluster analysis. There is a great disparity in the number of observations per cluster and in the range of the variables in each cluster. The variables are not normally distributed, they are subject to considerable experimental error, and many values are zero because of finite detection limits. Many of the clusters show considerable overlap because of natural variability, agglomeration, and chemical reactivity.
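A brief sketch of this kind of multivariate pipeline (dimensionality reduction followed by clustering of per-particle elemental compositions) is given below; the preprocessing choices and synthetic data are illustrative assumptions, not the study's actual procedure.

```python
# Sketch: PCA + k-means on a matrix of per-particle EDS element fractions.
# Plain standardization is used here for illustration; real aerosol data need
# care with zeros, skewed distributions, and very unequal cluster sizes,
# as noted in the abstract.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
particles = rng.random((20000, 12))              # 20k particles x 12 element fractions

scaled = StandardScaler().fit_transform(particles)
scores = PCA(n_components=4).fit_transform(scaled)
labels = KMeans(n_clusters=6, n_init=10, random_state=1).fit_predict(scores)
print(np.bincount(labels))                       # number of particles per cluster
```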

