The Complex Community Structure of the Bitcoin Address Correspondence Network

2021 ◽  
Vol 9 ◽  
Author(s):  
Jan Alexander Fischer ◽  
Andres Palechor ◽  
Daniele Dell’Aglio ◽  
Abraham Bernstein ◽  
Claudio J. Tessone

Bitcoin is built on a blockchain, an immutable decentralized ledger that allows entities (users) to exchange Bitcoins in a pseudonymous manner. Bitcoins are associated with alpha-numeric addresses and are transferred via transactions. Each transaction is composed of a set of input addresses (associated with unspent outputs received from previous transactions) and a set of output addresses (to which Bitcoins are transferred). Although Bitcoin was designed with anonymity in mind, different heuristic approaches exist to detect which addresses in a specific transaction belong to the same entity. By applying these heuristics, we build an Address Correspondence Network: in this representation, addresses are nodes and are connected by edges if at least one heuristic detects them as belonging to the same entity. In this paper, we analyze the Address Correspondence Network for the first time and show that it is characterized by a complex topology, signaled by a broad, skewed degree distribution and a power-law component size distribution. Using a large-scale dataset of addresses for which the controlling entities are known, we show that a combination of external data coupled with standard community detection algorithms can reliably identify entities. The complex nature of the Address Correspondence Network reveals that the usage patterns of individual entities create statistical regularities, and that these regularities can be leveraged to more accurately identify entities and gain a deeper understanding of the Bitcoin economy as a whole.
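The construction described in this abstract can be sketched with a toy example: addresses become nodes, heuristic-detected links become edges, and connected groups (which community detection can refine further) become candidate entities. The addresses, links, and helper names below are illustrative assumptions, not data or code from the paper.

```python
# Minimal sketch: grouping addresses into candidate entities via union-find
# over heuristic-detected links (a stand-in for the graph construction).

def find(parent, x):
    # Path-compressing find.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def group_addresses(links):
    # Union all address pairs that some heuristic attributes to one entity.
    parent = {}
    for a, b in links:
        parent.setdefault(a, a)
        parent.setdefault(b, b)
        ra, rb = find(parent, a), find(parent, b)
        if ra != rb:
            parent[rb] = ra
    # Collect the resulting components as sorted candidate entities.
    groups = {}
    for addr in parent:
        groups.setdefault(find(parent, addr), []).append(addr)
    return sorted(sorted(g) for g in groups.values())

# Each pair: two addresses linked by at least one heuristic (toy values).
links = [("addr_A", "addr_B"), ("addr_B", "addr_C"), ("addr_D", "addr_E")]
print(group_addresses(links))
# → [['addr_A', 'addr_B', 'addr_C'], ['addr_D', 'addr_E']]
```

On real data one would run a community detection algorithm over the resulting graph rather than stop at connected components, since large components may span several entities.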

2018 ◽  
Vol 12 (5) ◽  
pp. 1-36 ◽  
Author(s):  
Fabrício A. Silva ◽  
Augusto C. S. A. Domingues ◽  
Thais R. M. Braga Silva

2019 ◽  
Vol 6 (1) ◽  
Author(s):  
Christof A. Bertram ◽  
Marc Aubreville ◽  
Christian Marzahl ◽  
Andreas Maier ◽  
Robert Klopfleisch

Abstract: We introduce a novel, large-scale dataset for microscopy cell annotations. The dataset includes 32 whole slide images (WSI) of canine cutaneous mast cell tumors, selected to include both low-grade and high-grade cases. The slides have been completely annotated for mitotic figures, and we provide secondary annotations for neoplastic mast cells, inflammatory granulocytes, and mitotic figure look-alikes. In addition to a blinded two-expert manual annotation with consensus, we provide an algorithm-aided dataset in which potentially missed mitotic figures were detected by a deep neural network and subsequently assessed by two human experts. We included 262,481 annotations in total, of which 44,880 represent mitotic figures. For algorithmic validation, we used a customized RetinaNet approach followed by a cell classification network. We find F1-scores of 0.786 and 0.820 for the manually labelled and the algorithm-aided dataset, respectively. The dataset provides, for the first time, WSIs completely annotated for mitotic figures, and thus enables assessment of mitosis detection algorithms on complete WSIs as well as of region-of-interest detection algorithms.
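For readers unfamiliar with the metric, the reported F1-scores combine detection precision and recall. A minimal sketch, with illustrative counts chosen only to reproduce a score of 0.82 (they are not the paper's confusion-matrix numbers):

```python
# F1-score from true positives, false positives, and false negatives.

def f1_score(tp, fp, fn):
    precision = tp / (tp + fp)  # fraction of detections that are real mitoses
    recall = tp / (tp + fn)     # fraction of real mitoses that are detected
    return 2 * precision * recall / (precision + recall)

# Illustrative counts: 820 TP, 180 FP, 180 FN → precision = recall = 0.82
print(round(f1_score(820, 180, 180), 3))  # → 0.82
```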


Author(s):  
Chongsheng Zhang ◽  
Ruixing Zong ◽  
Shuang Cao ◽  
Yi Men ◽  
Bofeng Mo

Oracle Bone Inscriptions (OBI) research is of great value to both history and literature. In this paper, we introduce our contributions in AI-powered Oracle Bone (OB) fragment rejoining and OBI recognition. (1) We build a real-world dataset, OB-Rejoin, and propose an effective OB rejoining algorithm that yields a top-10 accuracy of 98.39%. (2) We design practical annotation software to facilitate OBI annotation and build OracleBone-8000, a large-scale dataset with character-level annotations. We adopt deep-learning-based scene text detection algorithms for OBI localization, which yield an F-score of 89.7%. We propose a novel deep template matching algorithm for OBI recognition, which achieves an overall accuracy of 80.9%. Because we have been cooperating closely with OBI domain experts, the work above directly supports their research. The resources of this work are available at https://github.com/chongshengzhang/OracleBone.
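The reported top-10 rejoining accuracy counts a query fragment as correct if its true counterpart appears among the 10 highest-ranked candidates. A minimal sketch with toy fragment IDs (not the paper's data or code):

```python
# Top-k accuracy over ranked candidate lists.

def top_k_accuracy(ranked_candidates, true_matches, k=10):
    # Fraction of query fragments whose true counterpart appears among
    # the k highest-ranked candidate fragments.
    hits = sum(1 for ranks, truth in zip(ranked_candidates, true_matches)
               if truth in ranks[:k])
    return hits / len(true_matches)

# Toy example: three queries, each with a ranked candidate list.
rankings = [["f2", "f7", "f9"], ["f1", "f4", "f3"], ["f8", "f5", "f6"]]
truths = ["f7", "f3", "f2"]
print(round(top_k_accuracy(rankings, truths, k=3), 2))  # → 0.67
```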


Author(s):  
Seán Damer

This book seeks to explain how the Corporation of Glasgow, in its large-scale council house-building programme in the inter- and post-war years, came to reproduce a hierarchical Victorian class structure. The three tiers of housing scheme it constructed – Ordinary, Intermediate, and Slum-Clearance – effectively signified First, Second and Third Class. This came about because the Corporation uncritically reproduced the offensive and patriarchal attitudes of the Victorian bourgeoisie towards the working class. The book shows how this worked out on the ground in Glasgow, and describes the attitudes of both authoritarian housing officials and council tenants. This is the first time the voice of Glasgow’s council tenants has been heard. The conclusion is that local council housing policy was driven by unapologetic considerations of social class.


Author(s):  
Jin Zhou ◽  
Qing Zhang ◽  
Jian-Hao Fan ◽  
Wei Sun ◽  
Wei-Shi Zheng

Abstract: Recent image aesthetic assessment methods have achieved remarkable progress due to the emergence of deep convolutional neural networks (CNNs). However, these methods focus primarily on predicting the generally perceived preference for an image, which usually limits their practicality, since each user may have completely different preferences for the same image. To address this problem, this paper presents a novel approach for predicting personalized image aesthetics that fit an individual user’s personal taste. We achieve this in a coarse-to-fine manner, by joint regression and learning from pairwise rankings. Specifically, we first collect a small subset of personal images from a user and invite them to rank their preference over randomly sampled image pairs. We then search for the K-nearest neighbors of the personal images within a large-scale dataset labeled with average human aesthetic scores, and use these images, together with the associated scores, to train a generic aesthetic assessment model by CNN-based regression. Next, we fine-tune the generic model to accommodate the personal preference by training over the rankings with a pairwise hinge loss. Experiments demonstrate that our method can effectively learn personalized image aesthetic preferences, clearly outperforming state-of-the-art methods. Moreover, we show that the learned personalized image aesthetics benefit a wide variety of applications.
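The pairwise hinge loss mentioned above can be sketched in a few lines: the model is penalized whenever the user-preferred image of a pair is not scored sufficiently higher than the other image. The scores and margin below are illustrative assumptions; the paper's exact formulation may differ.

```python
# Pairwise hinge (ranking) loss for one image pair.

def pairwise_hinge_loss(score_preferred, score_other, margin=1.0):
    # Zero loss when the preferred image is scored at least `margin`
    # higher than the other image; linear penalty otherwise.
    return max(0.0, margin - (score_preferred - score_other))

print(pairwise_hinge_loss(3.0, 1.5))   # → 0.0  (ranked correctly with margin)
print(pairwise_hinge_loss(2.0, 1.25))  # → 0.25 (margin violated)
```

In fine-tuning, this per-pair loss would be averaged over all ranked pairs collected from the user and minimized by gradient descent alongside the regression objective.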


2021 ◽  
Vol 7 (3) ◽  
pp. 50
Author(s):  
Anselmo Ferreira ◽  
Ehsan Nowroozi ◽  
Mauro Barni

The possibility of carrying out a meaningful forensic analysis on printed and scanned images plays a major role in many applications. First of all, printed documents are often associated with criminal activities, such as terrorist plans, child pornography, and even fake packages. Additionally, printing and scanning can be used to hide the traces of image manipulation or the synthetic nature of images, since the artifacts commonly found in manipulated and synthetic images are gone after the images are printed and scanned. A problem hindering research in this area is the lack of large-scale reference datasets for algorithm development and benchmarking. Motivated by this issue, we present a new dataset composed of a large number of synthetic and natural printed face images. To highlight the difficulties associated with the analysis of the images in the dataset, we carried out an extensive set of experiments comparing several printer attribution methods. We also verified that state-of-the-art methods for distinguishing natural and synthetic face images fail when applied to printed and scanned images. We envision that the availability of the new dataset and the preliminary experiments we carried out will motivate and facilitate further research in this area.


2020 ◽  
Vol 501 (1) ◽  
pp. L71-L75
Author(s):  
Cornelius Rampf ◽  
Oliver Hahn

ABSTRACT Perturbation theory is an indispensable tool for studying the cosmic large-scale structure, and establishing its limits is therefore of utmost importance. One crucial limitation of perturbation theory is shell-crossing, the instance when cold-dark-matter trajectories intersect for the first time. We investigate Lagrangian perturbation theory (LPT) at very high orders in the vicinity of the first shell-crossing for random initial data in a realistic three-dimensional Universe. For this, we have numerically implemented the all-order recursion relations for the matter trajectories, from which the convergence of the LPT series at shell-crossing is established. Convergence studies performed at large orders reveal the nature of the convergence-limiting singularities. These singularities are not the well-known density singularities at shell-crossing, but occur at later times, when LPT has already ceased to provide physically meaningful results.
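For readers unfamiliar with LPT, the series whose convergence is studied here has the standard schematic form below: the Lagrangian map from initial positions q to current positions x is expanded order by order in the displacement field. This is textbook notation (shown for a single growth function D(t)), not the paper's specific recursion relations.

```latex
% Lagrangian map and its perturbative expansion (schematic):
\boldsymbol{x}(\boldsymbol{q},t) = \boldsymbol{q} + \boldsymbol{\psi}(\boldsymbol{q},t),
\qquad
\boldsymbol{\psi}(\boldsymbol{q},t) = \sum_{n=1}^{\infty} \boldsymbol{\psi}^{(n)}(\boldsymbol{q})\, D^{n}(t)
```

Shell-crossing occurs when the map q → x ceases to be invertible, i.e. when distinct trajectories first intersect; the paper establishes where truncations of this sum converge around that instant.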


Author(s):  
Dingwang Huang ◽  
Kang Wang ◽  
Lintao Li ◽  
Kuang Feng ◽  
Na An ◽  
...  

A 3.17%-efficient Cu2ZnSnS4–BiVO4 integrated tandem cell and a large-scale 5 × 5 cm integrated CZTS–BiVO4 tandem device for standalone overall solar water splitting were assembled for the first time.


Author(s):  
Anil S. Baslamisli ◽  
Partha Das ◽  
Hoang-An Le ◽  
Sezer Karaoglu ◽  
Theo Gevers

Abstract: In general, intrinsic image decomposition algorithms interpret shading as one unified component including all photometric effects. As shading transitions are generally smoother than reflectance (albedo) changes, these methods may fail to distinguish strong photometric effects from reflectance variations. Therefore, in this paper, we propose to decompose the shading component into direct (illumination) and indirect shading (ambient light and shadows) subcomponents. The aim is to distinguish strong photometric effects from reflectance variations. An end-to-end deep convolutional neural network (ShadingNet) is proposed that operates in a fine-to-coarse manner with a specialized fusion and refinement unit exploiting the fine-grained shading model. It is designed to learn specific reflectance cues separated from specific photometric effects in order to analyze the disentanglement capability. A large-scale dataset of scene-level synthetic images of outdoor natural environments is provided with fine-grained intrinsic image ground-truths. Large-scale experiments show that our approach using fine-grained shading decompositions outperforms state-of-the-art algorithms utilizing unified shading on NED, MPI Sintel, GTA V, IIW, MIT Intrinsic Images, 3DRMS and SRD datasets.
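The fine-grained model described above can be written schematically as follows: the classical intrinsic decomposition factors an image into albedo and shading, and the shading term is split further into direct and indirect parts. The notation here is ours for illustration, not necessarily the paper's.

```latex
% Classical intrinsic decomposition with a split shading term (per pixel p):
I(\boldsymbol{p}) = A(\boldsymbol{p}) \times S(\boldsymbol{p}),
\qquad
S(\boldsymbol{p}) = S_{\text{direct}}(\boldsymbol{p}) + S_{\text{indirect}}(\boldsymbol{p})
```

Here A is the reflectance (albedo), S_direct captures direct illumination, and S_indirect collects ambient light and shadows; the network predicts the subcomponents separately rather than one unified S.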

