Parallel analysis of Ethereum blockchain transaction data using cluster computing

2022 ◽  
Author(s):  
Baran Kılıç ◽  
Can Özturan ◽  
Alper Sen

Abstract
The ability to perform fast analysis on massive public blockchain transaction data is needed in various applications such as tracing fraudulent financial transactions. Blockchain data is continuously growing and is organized as a sequence of blocks containing transactions. This organization, however, cannot be used by parallel graph algorithms, which need efficient distributed graph data structures. Using the Message Passing Interface (MPI), we develop a scalable cluster-based system that constructs a distributed transaction graph in parallel and implements various transaction analysis algorithms. We report performance results from our system operating on roughly 5 years of Ethereum Mainnet blockchain data comprising 10.2 million blocks. We report timings obtained from tests involving distributed transaction graph construction, partitioning, page ranking of addresses, degree distribution, token transaction counting, connected component finding, and our new parallel blacklisted-address trace forest computation algorithm on an economical 16-node cluster set up on the Amazon cloud. Our system is able to construct a distributed graph of 766 million transactions in 218 s and compute the forest of blacklisted address traces in 32 s.
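The distributed graph construction above hinges on assigning each transaction edge to an owning cluster node. Below is a minimal serial sketch of such a hash-partitioning step, assuming a hypothetical sender-owns-edge rule; the names (`owner_rank`, `partition_edges`, `NUM_RANKS`) are ours and the paper's actual MPI exchange is not reproduced here.

```python
# Illustrative sketch (not the paper's implementation): hash-partition
# transaction edges across worker ranks, as a serial stand-in for the
# MPI-based distributed graph construction described in the abstract.
import zlib
from collections import defaultdict

NUM_RANKS = 4  # number of cluster nodes (illustrative)

def owner_rank(address: str, num_ranks: int = NUM_RANKS) -> int:
    """Deterministically map an address to the rank that owns its edges."""
    return zlib.crc32(address.encode()) % num_ranks

def partition_edges(transactions, num_ranks: int = NUM_RANKS):
    """Group (sender, receiver, value) edges by the sender's owner rank,
    producing the per-rank edge lists an MPI all-to-all would exchange."""
    parts = defaultdict(list)
    for sender, receiver, value in transactions:
        parts[owner_rank(sender, num_ranks)].append((sender, receiver, value))
    return parts

txs = [("0xa1", "0xb2", 10), ("0xa1", "0xc3", 5), ("0xd4", "0xb2", 7)]
parts = partition_edges(txs)
```

Because every edge with the same sender lands on the same rank, per-address adjacency lists stay local, which is what algorithms like page ranking and trace-forest computation rely on.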

1993 ◽  
Vol 2 (4) ◽  
pp. 133-144 ◽  
Author(s):  
Jon B. Weissman ◽  
Andrew S. Grimshaw ◽  
R.D. Ferraro

The conventional wisdom in the scientific computing community is that the best way to solve large-scale, numerically intensive scientific problems on today's parallel MIMD computers is to use Fortran or C programmed in a data-parallel style using low-level message-passing primitives. This approach inevitably leads to nonportable codes and extensive development time, and restricts parallel programming to the domain of the expert programmer. We believe that these problems are not inherent to parallel computing but are the result of the programming tools used. We will show that comparable performance can be achieved with little effort if better tools that present higher-level abstractions are used. The vehicle for our demonstration is a 2D electromagnetic finite element scattering code we have implemented in Mentat, an object-oriented parallel processing system. We briefly describe the application, Mentat, and the implementation, and present performance results for both the Mentat and a hand-coded parallel Fortran version.


Legal Ukraine ◽  
2019 ◽  
pp. 31-35
Author(s):  
Alice Osadko

This article describes conversion centers, examines specific issues in counteracting their operations as one of the most effective mechanisms for moving money out of the real economy into the shadow economy, identifies weaknesses in current efforts to eliminate these centers, and suggests ways to address them. Owing to the political and legislative changes taking place in our country and the authorities' desire to stabilize the economy through anti-corruption measures, unlawful mechanisms in this field are undergoing significant changes. Indeed, criminal organizations have recently been set up in Ukraine in the form of large conversion centers designed to cover up economic crimes by illegally converting non-cash funds into cash or vice versa. A conversion center is a carefully organized, stable criminal group that operates within, or in close collaboration with, a commercial bank. The purpose of the article is to investigate the activities of conversion centers and the counteraction of their functioning in the context of the fight against corruption and economic crime. An analysis of current law and practice shows that the functions of counteracting crime in the financial sector, namely the operation of conversion centers, are unjustifiably divided among departments and often duplicated. In particular, such powers are vested in units of the National Police, the Security Service of Ukraine, and the Tax Police (DFS). According to the National Institute for Strategic Studies under the President of Ukraine, in Europe there are two options for the full integration of law enforcement in the fight against economic crime: within the Ministry of the Interior or within the Ministry of Finance. All this requires forming a concept of strategic construction and determining the place of the tax police or a financial investigation service (DFS or FIU) in the fight against economic (tax) crime.
This concept should define the basic directions and principles for improving managerial, organizational, and personnel work, as well as legal, staffing, resource, and other support for law enforcement activity in this field, on the basis of analysis and assessment of the tax security of the individual, society, and the state. Key words: fictitious enterprise, conversion centers, financial transactions, legalization of income, liability, decriminalization, fraud.


2015 ◽  
Author(s):  
Marcelo Pita ◽  
Gustavo Torres

A graph-based method is proposed for inferring similarities among companies from their affiliations, in the context of expenditure financial transactions in the Brazilian Federal Government. There are trusted and untrusted companies. We performed a basic cluster analysis on the companies network to verify whether clusters (connected components) are discriminative with respect to company trustworthiness. Results show evidence that this is true, reinforcing the following hypotheses: (1) there are supplier associations, which suggests the formation of cartels; and (2) public agencies and agents play an important role in the legality of financial transactions.
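The cluster analysis described above reduces to finding connected components in a company affiliation graph. A minimal sketch with union-find, using a toy graph and made-up trusted/untrusted labels (the data below is illustrative, not the authors'):

```python
# Illustrative sketch: companies sharing an affiliation are linked by an
# edge; connected components are then read off as the candidate clusters.

def connected_components(edges, nodes):
    parent = {n: n for n in nodes}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for a, b in edges:
        union(a, b)

    comps = {}
    for n in nodes:
        comps.setdefault(find(n), set()).add(n)
    return list(comps.values())

# Toy network: trusted suppliers (T*) share affiliations, as do untrusted (U*).
nodes = ["T1", "T2", "U1", "U2", "T3"]
edges = [("T1", "T2"), ("U1", "U2"), ("T2", "T3")]
clusters = connected_components(edges, nodes)
```

If the components separate trusted from untrusted companies, as in this toy example, the clusters are discriminative in the paper's sense.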


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Vincent Majanga ◽  
Serestina Viriri

Recent advances in medical image analysis, especially through the use of deep learning, are helping to identify, detect, classify, and quantify patterns in radiographs. At the center of these advances is the ability to explore hierarchical feature representations learned from data. Deep learning is becoming the most sought-after technique, leading to enhanced performance in the analysis of medical applications and systems. Deep learning techniques have achieved strong results in dental image segmentation. Segmentation of dental radiographs is a crucial step that helps the dentist diagnose dental caries. The performance of these deep networks is, however, restrained by various challenging features of dental carious lesions. Segmentation of dental images becomes difficult due to the vast variety of topologies, the intricacies of medical structures, and poor image quality caused by conditions such as low contrast, noise, and irregular, fuzzy edge borders, which result in unsuccessful segmentation. The dental segmentation method used here is based on thresholding and connected component analysis. Images are preprocessed using a Gaussian blur filter to remove noise and corrupted pixels. Images are then enhanced using erosion and dilation morphological operations. Finally, segmentation is done through thresholding, and connected components are identified to extract the Region of Interest (ROI) of the teeth. The method was evaluated on an augmented dataset of 11,114 dental images. It was trained on 10,090 training images and tested on 1,024 test images. The proposed method achieved 93% for both precision and recall.
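The core of the pipeline described above is thresholding followed by connected component labeling. A minimal sketch on a tiny synthetic grayscale image; the blur and morphology steps are elided, and the image and threshold value are illustrative, not from the paper's dataset:

```python
# Illustrative sketch of the segmentation step: threshold a grayscale
# image, then label 4-connected components to extract candidate ROIs.
from collections import deque

def threshold(image, t):
    """Binarize: pixel >= t becomes foreground (1)."""
    return [[1 if px >= t else 0 for px in row] for row in image]

def label_components(binary):
    """4-connected component labeling via BFS; returns label map and count."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not labels[y][x]:
                current += 1
                labels[y][x] = current
                q = deque([(y, x)])
                while q:
                    cy, cx = q.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = current
                            q.append((ny, nx))
    return labels, current

img = [
    [200, 210,   0,   0],
    [190,   0,   0, 180],
    [  0,   0, 170, 175],
]
binary = threshold(img, 128)
labels, n = label_components(binary)
```

Each labeled component corresponds to one candidate ROI; in a real pipeline, a bounding box around each label would then be cropped out for the dentist or a downstream classifier.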


2000 ◽  
Vol 09 (03) ◽  
pp. 343-367
Author(s):  
STEPHEN W. RYAN ◽  
ARVIND K. BANSAL

This paper describes a system to distribute and retrieve multimedia knowledge on a cluster of heterogeneous high-performance architectures distributed over the Internet. The knowledge is represented using facts and rules in an associative logic-programming model. Associative computation facilitates the distribution of facts and rules, and exploits coarse-grain data-parallel computation. Associative logic programming uses a flat data model that can be easily mapped onto heterogeneous architectures. The paper describes an abstract instruction set for the distributed version of associative logic programming and the corresponding implementation. The implementation uses a message-passing library for architecture independence within a cluster, object-oriented programming for modularity and portability, and Java as a front-end interface to provide a graphical user interface, multimedia capability, and remote access via the Internet. Performance results on a cluster of IBM RS 6000 workstations are presented. The results show that distribution of data improves the performance almost linearly for a small number of processors in a cluster.


Proceedings ◽  
2019 ◽  
Vol 30 (1) ◽  
pp. 38
Author(s):  
Vithlani ◽  
Marcel ◽  
Melville ◽  
Prüm ◽  
Lam ◽  
...  

The acquisition, storage, and processing of huge amounts of data, and their fast analysis to generate information, is not a new undertaking, but it becomes challenging when smart decisions must be made about hardware and software choices. In the specific cases of environmental protection, nature conservation, and precision farming, where fast and accurate reactions are required, drone technologies with imaging sensors are of interest to many research groups. However, post-processing of the images acquired by drone-based sensors, such as generating orthomosaics from aerial images and superimposing the orthomosaics on a global map to identify the exact locations of the areas of interest, is computationally intensive and can take hours or even days to achieve the desired results. Initial tests have shown that photogrammetry software takes less time to generate an orthomosaic when run on a workstation with higher CPU, RAM, and GPU configurations. Tasks like setting up the application environment with its dependencies, making this setup portable, and managing installed services can be challenging, especially for small and medium-sized enterprises that have limited resources for exploring different architectures. To enhance the competitiveness of small and medium-sized enterprises and research institutions, the accessibility of the proposed solution includes the integration of open-source tools and frameworks such as Kubernetes (version v1.13.4, available online: https://kubernetes.io/) and OpenDroneMap (version 0.3, available online: https://github.com/OpenDroneMap/ODM), enabling a reference architecture that is as vendor-neutral as possible. The current work is based on an on-premise cluster computing approach for a fast and efficient photogrammetry process using open-source software such as OpenDroneMap, combined with lightweight containerization techniques such as Docker (version 17.12.1, available online: https://www.docker.io/), orchestrated by Kubernetes.
The services provided by OpenDroneMap enable a microservice-based architecture. These container-based services can be administered easily by a container orchestrator like Kubernetes. After setting up the servers with core OpenDroneMap services on our container-based cluster, with Kubernetes as the orchestration engine, the plan is to use Kubernetes' management capabilities to help maximize resource efficiency as the basis for creating Service Level Agreements for a cloud service.


1999 ◽  
Vol 103 (1027) ◽  
pp. 443-447 ◽  
Author(s):  
W. McMillan ◽  
M. Woodgate ◽  
B. E. Richards ◽  
B. J. Gribben ◽  
K. J. Badcock ◽  
...  

Abstract Motivated by a lack of sufficient local and national computing facilities for computational fluid dynamics simulations, the Affordable Systems Computing Unit (ASCU) was established to investigate low-cost alternatives. The options considered have all involved cluster computing, a term which refers to the grouping of a number of components into a managed system capable of running both serial and parallel applications. The present work aims to demonstrate the utility of commodity processors for dedicated batch processing. The performance of the cluster has proved to be extremely cost effective, enabling large three-dimensional flow simulations on a computer costing less than £25k sterling at current market prices. The experience gained on this system in terms of single-node performance, message passing, and parallel performance will be discussed. In particular, comparisons with the performance of other systems will be made. Several medium- to large-scale CFD simulations performed using the new cluster will be presented to demonstrate the potential of commodity-processor-based parallel computers for aerodynamic simulation.


Author(s):  
Karthik R ◽  
Navinkumar R ◽  
Rammkumar U ◽  
Mothilal K. C.

Cashless transactions such as online transactions, credit card transactions, and mobile wallet payments are becoming increasingly popular in financial transactions nowadays. With the increased number of such cashless transactions, the number of fraudulent transactions is also increasing. Fraud can be detected by analyzing the spending behavior of customers (users) from previous transaction data. Publicly available credit card fraud datasets are highly imbalanced. In this paper, we apply many supervised machine learning algorithms to detect credit card fraudulent transactions using a real-world dataset. Furthermore, we employ these algorithms to implement a super classifier using ensemble learning methods. We identify the most important variables that may lead to higher accuracy in detecting credit card fraudulent transactions. Additionally, we compare and discuss the performance of various supervised machine learning algorithms from the literature against the super classifier we implemented in this paper.
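One common way to build such a "super classifier" from several base learners is majority voting over their predictions. A minimal sketch, with trivial stand-in predictions instead of the paper's trained models (the data and the 1 = fraud convention are illustrative):

```python
# Illustrative sketch: combine the 0/1 predictions of several base
# classifiers into an ensemble decision by majority vote.

def majority_vote(predictions_per_model):
    """predictions_per_model: list of equal-length lists of 0/1 labels.
    Returns one combined label per sample (ties break toward 0)."""
    n_samples = len(predictions_per_model[0])
    combined = []
    for i in range(n_samples):
        votes = [preds[i] for preds in predictions_per_model]
        combined.append(1 if sum(votes) * 2 > len(votes) else 0)
    return combined

# Three stand-in base classifiers voting on five transactions (1 = fraud).
preds_a = [1, 0, 1, 0, 0]
preds_b = [1, 1, 1, 0, 0]
preds_c = [0, 0, 1, 1, 0]
fraud_flags = majority_vote([preds_a, preds_b, preds_c])
```

On imbalanced fraud data, weighted voting or stacking is often preferred over a plain majority vote, since base models biased toward the majority class can outvote a well-calibrated minority detector.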


Author(s):  
Alexander Outkin ◽  
Silvio Flaim ◽  
Andy Seirp ◽  
Julia Gavrilov

In this chapter we present an overview of a financial system model (FinSim) created by the authors at Los Alamos National Laboratory. The purpose of this model is to understand the impacts of external disruptions to the financial system, in particular disruptions to telecommunication networks and electric power systems, and to model how those impacts are affected by the interactions between different components of the financial system, e.g. markets and payment systems, and by individual agents' actions and regulatory interventions. We use agent-based modeling to represent the interactions within the financial system and the decision-making processes of banks and traders. We explicitly model the message passing necessary for the execution of financial transactions, which allows a realistic representation of the financial system's dependency on telecommunications. We describe the implementation of the payment system, securities market, and liquidity market components, and present a sample telecommunications disruption scenario and its preliminary results.


2006 ◽  
Vol 17 (02) ◽  
pp. 303-322
Author(s):  
SHARAREH BABVEY ◽  
ANU G. BOURGEOIS ◽  
JOSÉ ALBERTO FERNÁNDEZ-ZEPEDA ◽  
STEVEN W. MCLAUGHLIN

In this paper we propose constant-time parallel algorithms for implementing the message-passing decoder of LDPC codes on a two-dimensional R-Mesh and an LARPBS. The R-Mesh and LARPBS are dynamically reconfigurable models that provide hardware reuse and flexibility to problem changes. The same hardware can implement the decoder in both the probability and logarithm domains over different channels. Moreover, to decode an alternate code, we may simply set up the required connections between the bit nodes and check nodes by modifying the initialization phase of the proposed algorithms. No extra wiring or hardware changes are required, in contrast to other existing approaches. We illustrate that the R-Mesh and the LARPBS are efficient models for parallel implementation of the decoder in terms of time complexity, flexibility to problem changes, and simplicity of message routing. We also demonstrate that it is possible to optimally scale large block code sizes down to a smaller, available machine size using an LR-Mesh or HVR-Mesh, two variants of the R-Mesh model.
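For readers unfamiliar with message-passing decoding, a serial hard-decision bit-flipping decoder conveys the bit-node/check-node exchange that the paper maps onto reconfigurable hardware. The parity-check matrix and received word below are toy examples, and the constant-time parallel scheduling on the R-Mesh is not reproduced here:

```python
# Illustrative sketch: hard-decision bit-flipping decoding on a tiny
# parity-check matrix. Check nodes flag unsatisfied parities; bit nodes
# flip the bits involved in the most unsatisfied checks.

H = [  # parity-check matrix of a (7,4) Hamming code (toy example)
    [1, 1, 1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0, 1, 0],
    [1, 0, 1, 1, 0, 0, 1],
]

def bit_flip_decode(received, H, max_iters=10):
    word = list(received)
    for _ in range(max_iters):
        # Check-node step: which parity checks are unsatisfied?
        unsat = [sum(h * b for h, b in zip(row, word)) % 2 for row in H]
        if not any(unsat):
            return word  # valid codeword found
        # Bit-node step: count unsatisfied checks touching each bit.
        counts = [sum(u for u, row in zip(unsat, H) if row[j])
                  for j in range(len(word))]
        worst = max(counts)
        word = [b ^ (c == worst) for b, c in zip(word, counts)]
    return word

received = [1, 0, 0, 0, 0, 0, 0]  # all-zero codeword with one bit error
decoded = bit_flip_decode(received, H)
```

In the paper's setting, the check-node and bit-node steps run concurrently on the reconfigurable mesh, and switching to another code amounts to re-initializing the connection pattern encoded here by `H`.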

