general purpose
Recently Published Documents





2022 ◽  
Vol 40 (3) ◽  
pp. 1-21
Lili Wang ◽  
Chenghan Huang ◽  
Ying Lu ◽  
Weicheng Ma ◽  
Ruibo Liu ◽  

Complex user behavior, especially in settings such as social media, can be organized as time-evolving networks. Through network embedding, we can extract general-purpose vector representations of these dynamic networks which allow us to analyze them without extensive feature engineering. Prior work has shown how to generate network embeddings while preserving the structural role proximity of nodes. These methods, however, cannot capture the temporal evolution of the structural identity of the nodes in dynamic networks. Other works, on the other hand, have focused on learning microscopic dynamic embeddings. Though these methods can learn node representations over dynamic networks, these representations capture the local context of nodes and do not learn the structural roles of nodes. In this article, we propose a novel method for learning structural node embeddings in discrete-time dynamic networks. Our method, called HR2vec, tracks historical topology information in dynamic networks to learn dynamic structural role embeddings. Through experiments on synthetic and real-world temporal datasets, we show that our method outperforms other well-known methods in tasks where structural equivalence and historical information both play important roles. HR2vec can be used to model dynamic user behavior in any networked setting where users can be represented as nodes. Additionally, we propose a novel method (called network fingerprinting) that uses HR2vec embeddings for modeling whole (or partial) time-evolving networks. We showcase our network fingerprinting method on synthetic and real-world networks. Specifically, we demonstrate how our method can be used for detecting foreign-backed information operations on Twitter.
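The core idea of tracking structural roles across snapshots of a discrete-time dynamic network can be illustrated with a toy sketch (this is NOT the actual HR2vec algorithm; the degree-based signature and the networks are illustrative assumptions):

```python
# Toy analogue of historical structural-role embeddings: for each snapshot
# we compute a crude structural signature per node (degree, mean neighbour
# degree) and concatenate the signatures across snapshots, so nodes with
# similar structural histories end up with similar vectors.

def structural_signature(adj, node):
    """Degree and mean neighbour degree of `node` in one snapshot."""
    neighbours = adj.get(node, set())
    deg = len(neighbours)
    mean_nbr_deg = (sum(len(adj.get(n, set())) for n in neighbours) / deg
                    if deg else 0.0)
    return [float(deg), mean_nbr_deg]

def dynamic_embedding(snapshots, node):
    """Concatenate per-snapshot signatures into one historical vector."""
    vec = []
    for adj in snapshots:
        vec.extend(structural_signature(adj, node))
    return vec

# Two snapshots of a tiny network: node "a" is a hub in both.
t0 = {"a": {"b", "c", "d"}, "b": {"a"}, "c": {"a"}, "d": {"a"}}
t1 = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}
print(dynamic_embedding([t0, t1], "a"))
```

Real structural-role methods use far richer signatures (e.g. degree distributions of k-hop neighbourhoods), but the concatenation over time is what lets the vector reflect historical, not just current, structure.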

2022 ◽  
Vol 15 (3) ◽  
pp. 1-31
Shulin Zeng ◽  
Guohao Dai ◽  
Hanbo Sun ◽  
Jun Liu ◽  
Shiyao Li ◽  

INFerence-as-a-Service (INFaaS) has become a primary workload in the cloud. However, existing FPGA-based Deep Neural Network (DNN) accelerators are mainly optimized for the fastest speed of a single task, while the multi-tenancy of INFaaS has not been explored yet. As the demand for INFaaS keeps growing, simply increasing the number of FPGA-based DNN accelerators is not cost-effective, while merely sharing these single-task optimized DNN accelerators in a time-division multiplexing way could lead to poor isolation and high performance loss for INFaaS. On the other hand, current cloud-based DNN accelerators have excessive compilation overhead, especially when scaling out to multi-FPGA systems for multi-tenant sharing, leading to unacceptable compilation costs for both offline deployment and online reconfiguration. They are therefore far from providing efficient and flexible FPGA virtualization for public and private cloud scenarios. Aiming to solve these problems, we propose a unified virtualization framework for general-purpose deep neural networks in the cloud, enabling multi-tenant sharing for both Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) accelerators on a single FPGA. Isolation is enabled by introducing a two-level instruction dispatch module and a multi-core-based hardware resource pool. These designs provide isolated and runtime-programmable hardware resources, which in turn yield performance isolation for multi-tenant sharing. To overcome the heavy re-compilation overheads, a tiling-based instruction frame package design and a two-stage static-dynamic compilation are proposed. Only the lightweight runtime information is re-compiled, with ∼1 ms overhead, thus guaranteeing the private cloud's performance.
Finally, the extensive experimental results show that the proposed virtualized solutions achieve up to 3.12× and 6.18× higher throughput in the private cloud compared with the static CNN and RNN baseline designs, respectively.
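The idea behind two-level instruction dispatch for multi-tenant isolation can be sketched in a few lines (a software toy model, not the paper's hardware design; the share-based allocation policy and round-robin dispatch are illustrative assumptions):

```python
# Level 1: a dispatcher partitions a core pool among tenants, so each
# tenant owns a fixed subset of cores and cannot starve the others.
# Level 2: a per-tenant dispatcher feeds that tenant's instructions only
# to its own cores.

def allocate_cores(num_cores, tenant_shares):
    """Level 1: partition a core pool according to per-tenant shares."""
    total = sum(tenant_shares.values())
    alloc, next_core = {}, 0
    for tenant, share in tenant_shares.items():
        n = max(1, num_cores * share // total)
        alloc[tenant] = list(range(next_core, next_core + n))
        next_core += n
    return alloc

def dispatch(alloc, tenant, instructions):
    """Level 2: round-robin a tenant's instructions over its own cores."""
    cores = alloc[tenant]
    return {core: instructions[i::len(cores)]
            for i, core in enumerate(cores)}

alloc = allocate_cores(8, {"tenantA": 3, "tenantB": 1})
print(alloc)
print(dispatch(alloc, "tenantB", ["i0", "i1", "i2"]))
```

Because each tenant's instructions only ever reach its own cores, one tenant's burst cannot degrade another's latency, which is the isolation property the abstract describes.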

2022 ◽  
Vol 389 ◽  
pp. 114422
Masoud Behzadinasab ◽  
Mert Alaydin ◽  
Nathaniel Trask ◽  
Yuri Bazilevs

Seçkin Canbaz ◽  
Gökhan Erdemir

In general, modern operating systems can be divided into two essential categories: real-time operating systems (RTOS) and general-purpose operating systems (GPOS). The main difference between a GPOS and an RTOS is whether the system is time-critical. In a GPOS, a high-priority thread cannot preempt a kernel call; in an RTOS, a low-priority task is preempted by a high-priority task when necessary, even while it is executing a kernel call. Most Linux distributions can be used as both a GPOS and an RTOS with kernel modifications. In this study, two Linux distributions, Ubuntu and Pardus, were analyzed and their performances were compared both as GPOS and as RTOS for path planning of multi-robot systems. Robot groups with different numbers of members performed path-tracking tasks using both Ubuntu and Pardus as GPOS and RTOS. In this way, the performance of the two Linux distributions in robotic applications was observed and compared in both forms, GPOS and RTOS.
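The RTOS-style preemption described above can be illustrated with a toy priority scheduler (a simulation sketch, not tied to Ubuntu or Pardus kernel internals; the event format and tick-based loop are illustrative assumptions):

```python
# Toy RTOS-style scheduler: at every tick, an arriving task with a more
# urgent priority (lower number) preempts the currently running task.
# A GPOS-style variant would instead let the running task finish first.

import heapq

def rtos_schedule(events):
    """events: list of (arrival_time, priority, name, duration).
    Returns the order in which tasks occupy the CPU, with preemption."""
    events = sorted(events)               # by arrival time
    ready, order, t, i = [], [], 0, 0
    current = None                        # (priority, name, remaining)
    while i < len(events) or ready or current:
        while i < len(events) and events[i][0] <= t:   # admit arrivals
            _, prio, name, dur = events[i]
            heapq.heappush(ready, (prio, name, dur))
            i += 1
        if current and ready and ready[0][0] < current[0]:
            heapq.heappush(ready, current)             # preempt
            current = None
        if current is None and ready:
            current = heapq.heappop(ready)
        if current:
            prio, name, remaining = current
            if not order or order[-1] != name:
                order.append(name)
            current = (prio, name, remaining - 1) if remaining > 1 else None
        t += 1
    return order

# A low-priority task starts first; a high-priority task arrives mid-run
# and immediately takes over, after which the low-priority task resumes.
print(rtos_schedule([(0, 2, "low", 4), (2, 1, "high", 2)]))
```

Under a GPOS-style policy the same workload would run to the end of the low-priority task's current call before switching, which is exactly the latency difference the study measures.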

2022 ◽  
Vol 21 (1) ◽  
pp. 1-27
Albin Eldstål-Ahrens ◽  
Angelos Arelakis ◽  
Ioannis Sourdis

In this article, we introduce L2C, a hybrid lossy/lossless compression scheme applicable both to the memory subsystem and to the I/O traffic of a processor chip. L2C employs general-purpose lossless compression and combines it with state-of-the-art lossy compression to achieve compression ratios of up to 16:1 and to improve the utilization of the chip's bandwidth resources. Compressing memory traffic yields lower memory access time, improving system performance and energy efficiency. Compressing I/O traffic offers several benefits for resource-constrained systems, including more efficient storage and networking. We evaluate L2C as a memory compressor in simulation with a set of approximation-tolerant applications. L2C improves baseline execution time by an average of 50% and total system energy consumption by 16%. Compared to the current state-of-the-art lossy and lossless memory compression approaches, L2C improves execution time by 9% and 26%, respectively, and reduces system energy costs by 3% and 5%, respectively. I/O compression efficacy is evaluated using a set of real-life datasets. L2C achieves compression ratios of up to 10.4:1 for a single dataset and about 4:1 on average, while introducing no more than 0.4% error.
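The general shape of a hybrid lossy/lossless pipeline can be sketched as follows (a software illustration of the idea, not the L2C hardware design; the 8-bit quantisation and the value range are illustrative assumptions):

```python
# Hybrid lossy/lossless sketch: approximation-tolerant float data is
# first quantised to 8 bits (lossy, with a bounded error of half a
# quantisation step), then the quantised bytes are handed to a
# general-purpose lossless compressor (zlib here).

import zlib

def compress(values, lo, hi):
    """Lossy 8-bit quantisation over [lo, hi], then lossless zlib."""
    scale = (hi - lo) / 255.0
    quantised = bytes(round((v - lo) / scale) for v in values)
    return zlib.compress(quantised)

def decompress(blob, lo, hi):
    scale = (hi - lo) / 255.0
    quantised = zlib.decompress(blob)
    return [lo + q * scale for q in quantised]

data = [0.1 * i for i in range(1000)]           # smooth, compressible signal
blob = compress(data, 0.0, 100.0)
restored = decompress(blob, 0.0, 100.0)
max_err = max(abs(a - b) for a, b in zip(data, restored))
print(len(blob), max_err)
```

The lossy stage bounds the reconstruction error (here at most half a quantisation step, about 0.196 for this range), while the lossless stage removes the remaining redundancy; stacking the two is what pushes the compression ratio well beyond what either achieves alone.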

2022 ◽  
Vol 15 ◽  
Yu Yan ◽  
Yaël Balbastre ◽  
Mikael Brudfors ◽  
John Ashburner

Segmentation of brain magnetic resonance images (MRI) into anatomical regions is a useful task in neuroimaging. Manual annotation is time-consuming and expensive, so having a fully automated and general-purpose brain segmentation algorithm is highly desirable. To this end, we propose a patch-based label propagation approach based on a generative model with latent variables. Once trained, our Factorisation-based Image Labelling (FIL) model is able to label target images with a variety of image contrasts. We compare the effectiveness of our proposed model against the state-of-the-art using data from the MICCAI 2012 Grand Challenge and Workshop on Multi-Atlas Labelling. As our approach is intended to be general purpose, we also assess how well it can handle domain shift by labelling images of the same subjects acquired with different MR contrasts.

2022 ◽  
Vol 4 (1) ◽  
Alex El-Shaikh ◽  
Marius Welzel ◽  
Dominik Heider ◽  
Bernhard Seeger

Due to the rapid cost decline of synthesizing and sequencing deoxyribonucleic acid (DNA), its high information density, and its durability of up to centuries, utilizing DNA as an information storage medium has received the attention of many scientists. State-of-the-art DNA storage systems exploit the high capacity of DNA and enable random access (predominantly random reads) by primers, which serve as unique identifiers for directly accessing data. However, primers come with a significant limitation regarding the maximum available number per DNA library. The number of different primers within a library is typically very small (e.g. ≈10). We propose a method to overcome this deficiency and present a general-purpose technique for addressing and directly accessing thousands to potentially millions of different data objects within the same DNA pool. Our approach utilizes a fountain code, sophisticated probe design, and microarray technologies. A key component is locality-sensitive hashing, making checks for dissimilarity among such a large number of probes and data objects feasible.
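Why locality-sensitive hashing makes large-scale dissimilarity checks feasible can be illustrated with a MinHash sketch over k-mer sets (an illustrative analogue, not the paper's actual probe-design pipeline; the sequences, k-mer size, and hash construction are assumptions):

```python
# MinHash: a fixed-size signature per sequence whose per-position
# collision rate estimates the Jaccard similarity of the sequences'
# k-mer sets, so candidate near-duplicates can be found without
# comparing every pair of full sequences.

import hashlib

def kmers(seq, k=4):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def minhash(seq, num_hashes=64, k=4):
    """One minimum per salted hash function -> fixed-size signature."""
    sig = []
    for salt in range(num_hashes):
        sig.append(min(
            int.from_bytes(hashlib.blake2b(
                f"{salt}:{km}".encode(), digest_size=8).digest(), "big")
            for km in kmers(seq, k)))
    return sig

def estimated_jaccard(sig_a, sig_b):
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

a = "ACGTACGTACGTTTGA"
b = "ACGTACGTACGTTTGC"   # near-duplicate of a
c = "GGGGCCCCAAAATTTT"   # unrelated sequence
sa, sb, sc = minhash(a), minhash(b), minhash(c)
print(estimated_jaccard(sa, sb), estimated_jaccard(sa, sc))
```

Comparing two 64-entry signatures is constant-time regardless of sequence length, which is what turns an infeasible all-pairs dissimilarity check over millions of probes into a tractable one.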

AI Magazine ◽  
2022 ◽  
Vol 42 (3) ◽  
pp. 3-6
Dietmar Jannach ◽  
Pearl Pu ◽  
Francesco Ricci ◽  
Markus Zanker

The origins of modern recommender systems date back to the early 1990s when they were mainly applied experimentally to personal email and information filtering. Today, 30 years later, personalized recommendations are ubiquitous and research in this highly successful application area of AI is flourishing more than ever. Much of the research in the last decades was fueled by advances in machine learning technology. However, building a successful recommender system requires more than a clever general-purpose algorithm. It requires an in-depth understanding of the specifics of the application environment and the expected effects of the system on its users. Ultimately, making recommendations is a human-computer interaction problem, where a computerized system supports users in information search or decision-making contexts. This special issue contains a selection of papers reflecting this multi-faceted nature of the problem and puts open research challenges in recommender systems to the forefront. It features articles on the latest learning technology, reflects on the human-computer interaction aspects, reports on the use of recommender systems in practice, and it finally critically discusses our research methodology.

Wolfgang Adam ◽  
Iacopo Vivarelli

The second period of data-taking at the Large Hadron Collider (LHC) has provided a large dataset of proton–proton collisions that is unprecedented in terms of its centre-of-mass energy of 13 TeV and integrated luminosity of almost 140 fb⁻¹. These data constitute a formidable laboratory for the search for new particles predicted by models of supersymmetry. The analysis activity is still ongoing, but a host of results on supersymmetry has already been released by the general-purpose LHC experiments ATLAS and CMS. In this paper, we provide a map into this remarkable body of research, which spans a multitude of experimental signatures and phenomenological scenarios. In the absence of conclusive evidence for the production of supersymmetric particles, we discuss the constraints obtained in the context of various models. We finish with a short outlook on the new opportunities for the next runs that will be provided by the upgrades of the detectors and the accelerator.

2022 ◽  
pp. 016555152110624
Celso A S Santos ◽  
Alessandro M Baldi ◽  
Fábio R de Assis Neto ◽  
Monalessa P Barcellos

Crowdsourcing arose as a problem-solving strategy that uses a large number of workers to achieve tasks and solve specific problems. Although there are many studies that explore crowdsourcing platforms and systems, little attention has been paid to defining what a crowd-powered project is. To address this issue, this article introduces a general-purpose conceptual model that represents the essential elements involved in this kind of project and how they relate to each other. We consider that the workflow in crowdsourcing projects is context-oriented and should represent the planning and coordination by the crowdsourcer in the project, instead of merely facilitating the decomposition of a complex task into sets of subtasks. Since structural models cannot properly represent the execution flow, we also introduce the use of behavioural conceptual models, specifically Unified Modeling Language (UML) activity diagrams, to represent the users, tasks, assets, control activities, and products involved in a specific project.
