computationally expensive
Recently Published Documents

Total documents: 463 (180 in the last five years)
H-index: 31 (5 in the last five years)

2022 ◽  
Vol 16 (4) ◽  
pp. 1-43
Author(s):  
Aida Sheshbolouki ◽  
M. Tamer Özsu

We study the fundamental problem of butterfly (i.e., (2,2)-biclique) counting in bipartite streaming graphs. Similar to triangles in unipartite graphs, enumerating butterflies is crucial to understanding the structure of bipartite graphs. This benefits many applications where studying the cohesion of graph-shaped data is of particular interest. Examples include investigating the structure of computational graphs or input graphs to algorithms, as well as dynamic phenomena and analytic tasks over complex real graphs. Butterfly counting is computationally expensive, and known techniques do not scale to large graphs; the problem is even harder in streaming graphs. In this article, following a data-driven methodology, we first conduct an empirical analysis to uncover temporal organizing principles of butterflies in real streaming graphs, and we then introduce an approximate adaptive window-based algorithm, sGrapp, for counting butterflies, as well as its optimized version, sGrapp-x. sGrapp is designed to operate efficiently and effectively over any graph stream with any temporal behavior. Experimental studies of sGrapp and sGrapp-x show superior performance in terms of both accuracy and efficiency.
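For intuition about the primitive being approximated: a butterfly is a pair of left vertices that share a pair of right neighbours. The sketch below is a minimal exact counter for a static bipartite graph, useful as a baseline; it is not sGrapp itself (which operates approximately over a stream with adaptive windows), and all names in it are illustrative.

```python
from collections import defaultdict
from itertools import combinations

def count_butterflies(edges):
    """Exact butterfly ((2,2)-biclique) count for a static bipartite graph.

    edges: iterable of (u, v) pairs with u in the left part, v in the right.
    Every pair of left vertices sharing c right neighbours closes C(c, 2)
    butterflies, so we count shared right neighbours per left-vertex pair.
    """
    left_adj = defaultdict(set)               # left vertex  -> right neighbours
    for u, v in edges:
        left_adj[u].add(v)

    right_adj = defaultdict(list)             # right vertex -> left neighbours
    for u, vs in left_adj.items():
        for v in vs:
            right_adj[v].append(u)

    pair_common = defaultdict(int)            # {u1, u2} -> shared right vertices
    for us in right_adj.values():
        for u1, u2 in combinations(us, 2):
            pair_common[frozenset((u1, u2))] += 1

    return sum(c * (c - 1) // 2 for c in pair_common.values())

# Four edges forming exactly one butterfly: {0, 1} x {"a", "b"}.
print(count_butterflies([(0, "a"), (0, "b"), (1, "a"), (1, "b")]))  # 1
```

The quadratic blow-up in `pair_common` for high-degree right vertices is exactly why exact counting does not scale, motivating approximate streaming algorithms like sGrapp.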


Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 473
Author(s):  
Christoforos Nalmpantis ◽  
Nikolaos Virtsionis Gkalinikis ◽  
Dimitris Vrakas

Deploying energy disaggregation models in the real world is a challenging task. These models are usually deep neural networks and can be costly to run on a server, or prohibitive when the target device has limited resources. Deep learning models are usually computationally expensive and have large storage requirements. Reducing the computational cost and the size of a neural network without trading off any performance is not a trivial task. This paper proposes a novel neural architecture that has fewer learnable parameters, a smaller size, and faster inference time, without trading off performance. The proposed architecture performs on par with two popular strong baseline models. Its key characteristic is the Fourier transform, which has no learnable parameters and can be computed efficiently.
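The abstract does not spell out the architecture, but its stated key ingredient, a Fourier transform with no learnable parameters, can be sketched as an FNet-style mixing block. The function name and shapes below are illustrative assumptions, not the paper's code.

```python
import numpy as np

def fourier_mixing(x):
    """Parameter-free mixing: 2-D FFT over (sequence, feature) axes, real part kept.

    Replaces a learned mixing layer; nothing here is trained or stored,
    which is the source of the model-size and inference-cost reduction.
    """
    return np.fft.fft2(x).real

# A dense mixing layer over a window of length 128 would need 128*128 weights;
# the Fourier block needs none and preserves the input shape.
x = np.random.randn(128, 64)   # (window length, feature channels) -- assumed shapes
print(fourier_mixing(x).shape)  # (128, 64)
```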


2021 ◽  
Author(s):  
Anjir Ahmed Chowdhury ◽  
Md Abir Hossen ◽  
Md Ali Azam ◽  
Md. Hafizur Rahman

Abstract Hyperparameter optimization, or tuning, plays a significant role in the performance and reliability of deep learning (DL). Many hyperparameter optimization algorithms have been developed to obtain better validation accuracy in DL training. Most state-of-the-art hyperparameter optimization algorithms are computationally expensive because they focus on validation accuracy, and they are therefore unsuitable for online or on-the-fly training applications, which require computational efficiency. In this paper, we develop a novel greedy approach-based hyperparameter optimization (GHO) algorithm for faster training applications, e.g., on-the-fly training. We perform an empirical study to measure performance metrics such as computation time and energy consumption for GHO, and compare it with two state-of-the-art hyperparameter optimization algorithms. We also deploy the GHO algorithm on an edge device to validate its performance, and apply post-training quantization to the resulting model to reduce inference time and latency.
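The abstract leaves GHO's internals to the paper. A common way to make hyperparameter search greedy is to tune one hyperparameter at a time while holding the others at their current best, turning the multiplicative cost of a grid into an additive one. The sketch below illustrates that general idea only; the function names and toy objective are hypothetical, not the authors' algorithm.

```python
def greedy_hpo(train_eval, search_space, defaults):
    """Greedy, coordinate-wise hyperparameter search (illustrative sketch).

    train_eval:   callable(config) -> validation score (higher is better).
    search_space: dict mapping each hyperparameter to its candidate values.
    defaults:     starting configuration.

    Each hyperparameter is tuned once with the others fixed at their current
    best, so the number of training runs grows as the SUM of the grid sizes
    rather than their product.
    """
    best = dict(defaults)
    best_score = train_eval(best)
    for name, candidates in search_space.items():
        for value in candidates:
            trial = {**best, name: value}
            score = train_eval(trial)
            if score > best_score:
                best, best_score = trial, score
    return best, best_score

# Toy objective standing in for a real training run.
space = {"lr": [1e-3, 1e-2, 1e-1], "batch_size": [16, 32, 64]}
toy = lambda c: -((c["lr"] - 1e-2) ** 2) - ((c["batch_size"] - 32) ** 2) / 1e4
print(greedy_hpo(toy, space, {"lr": 1e-3, "batch_size": 16}))
```

The trade-off is classic greediness: interacting hyperparameters can steer the search into a locally, not globally, best configuration, which is the price paid for on-the-fly efficiency.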


2021 ◽  
Author(s):  
Shaopeng Liu ◽  
David Koslicki

Abstract K-mer-based methods are used ubiquitously in computational biology. However, determining the optimal value of k for a specific application often remains heuristic, and simply rebuilding a k-mer set with another k-mer size is computationally expensive, especially in metagenomic analysis, where data sets are large. Here, we introduce a hashing-based technique that leverages a kind of bottom-m sketch as well as a k-mer ternary search tree (KTST) to obtain k-mer-based similarity estimates for a range of k values. By truncating k-mers stored in a KTST pre-built with a large k = k_max, we can simultaneously obtain k-mer-based estimates for all k values up to k_max. This truncation approach circumvents rebuilding k-mer sets when k changes, making analysis more time- and space-efficient. For example, we show that when using a KTST to estimate the containment index between a RefSeq-based microbial reference database and simulated metagenome data for 10 values of k, the running time is close to 10x faster than a classic MinHash approach while using less than one-fifth of the space to store the data structure. A Python implementation of this method, CMash, is available at https://github.com/dkoslicki/CMash. All experiments presented herein can be reproduced via https://github.com/KoslickiLab/CMASH-reproducibles.
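The truncation trick is easy to illustrate without the KTST: store only k_max-mers, and derive smaller k-mers by taking prefixes. The plain-set sketch below shows why one pre-built structure serves every k ≤ k_max; it is an illustration of the idea, not the CMash implementation.

```python
def kmers(seq, k):
    """All k-mers (substrings of length k) of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def containment_for_all_k(reference, query, k_values, k_max):
    """Estimate the containment index |A_k ∩ B_k| / |A_k| for several k at once.

    Only k_max-mer sets are built; smaller k-mers come from prefix truncation,
    which the KTST makes fast in the actual method. Truncation misses k-mers
    starting in the last (k_max - k) positions, a negligible edge effect for
    long sequences.
    """
    ref_max = kmers(reference, k_max)
    qry_max = kmers(query, k_max)
    results = {}
    for k in sorted(k_values):
        ref_k = {m[:k] for m in ref_max}   # truncate stored k_max-mers
        qry_k = {m[:k] for m in qry_max}
        results[k] = len(ref_k & qry_k) / len(ref_k) if ref_k else 0.0
    return results

print(containment_for_all_k("ACGTACGTGA", "ACGTACGAAC", [3, 5], k_max=7))
```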


2021 ◽  
pp. 1-11
Author(s):  
Swapna Donepudi ◽  
K. Thammi Reddy

Voting is a process for making collective decisions or expressing a mass opinion over a list of options, and it is the most commonly used instrument for electing political representatives. The methodology currently followed for voting in India can clearly be improved at many levels to make it more robust and efficient: it suffers from two major issues, a high cost per voter and low voter turnout. Other democratic setups have attempted to tackle these problems by offering online voting, and the most trustworthy and promising solutions are considered to be voting platforms and infrastructure backed by Blockchain. However, most existing Blockchain-based voting solutions are computationally expensive and slow, do not provide a verifiable secret ballot, and run Byzantine-fault-tolerant Proof-of-Work algorithms on a public Blockchain network. The work presented in this paper addresses these issues by proposing a Blockchain-based framework that leverages Hyperledger Fabric for a scalable voting system. The proposed method uses the Aadhaar number to authenticate voters, and it can efficiently deliver secure and trustworthy voting at Indian scale. It supports both offline and online voting, with features such as cost-effective deployment, instantaneous vote counting, cast-as-intended verifiability, and an observable, auditable architecture. The proposed method has been tested on a real-time setup, and the experimental results are promising.
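Actual Hyperledger Fabric chaincode is beyond the scope of an abstract, but the tamper-evidence property such a framework relies on can be illustrated generically: each vote record carries a hash of its predecessor, so editing any past record breaks the chain. The pure-Python sketch below is a hypothetical illustration, not the paper's system or Fabric code; the Aadhaar-derived credential is represented by an opaque hash.

```python
import hashlib, json, time

def add_vote(chain, voter_credential, choice):
    """Append a vote as a hash-chained block (illustrative sketch, not Fabric).

    voter_credential: an opaque hash standing in for an Aadhaar-derived
    identity; a real system would verify it against an identity service.
    """
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"voter": voter_credential, "choice": choice,
             "ts": time.time(), "prev": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    chain.append(block)

def verify(chain):
    """Tamper check: each block must hash consistently and link to its predecessor."""
    prev = "0" * 64
    for block in chain:
        body = {k: v for k, v in block.items() if k != "hash"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev"] != prev or block["hash"] != digest:
            return False
        prev = block["hash"]
    return True

chain = []
add_vote(chain, hashlib.sha256(b"aadhaar:1234").hexdigest(), "candidate_A")
print(verify(chain))   # True; changing any recorded field breaks verification
```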


2021 ◽  
Author(s):  
Francisco Daniel Filip Duarte

Abstract Artificial intelligence in general, and optimization tasks applied to the design of very efficient structures in particular, rely on response surfaces to forecast the output of functions; these surfaces are a vital part of such methodologies. Yet they have important limitations: greater precision requires larger data sets, so training or updating large response surfaces becomes computationally expensive or unfeasible. This has been an important bottleneck, leaving many optimization and AI tasks with low performance.

To address this challenge, a new methodology for segmenting response surfaces is presented here. Unlike other similar methodologies, this algorithm, named the outer input method, has a very simple and robust operation, generating a mesh of nearly equally populated partitions of inputs that share similarity. Its great advantage is that it can be applied to any data set with any type of distribution (random, Cartesian, or clustered) and to domains with any number of coordinates, significantly simplifying any metamodel through a mesh ensemble.

This study demonstrates how Kriging, one of the best-known and most precise metamodels but also one with high computational costs, can be significantly simplified with a response-surface mesh, increasing training speed by up to 567 times using quad-core parallel processing. Since individual mesh elements can be trained in parallel or updated individually, operational speed increases even further.
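The abstract does not give the outer input method's exact partitioning rule, but the speedup mechanism is clear: Kriging training scales roughly cubically with sample count, so m independent models on n/m points each are far cheaper than one model on n points, and they can be trained or updated in parallel. A minimal sketch of that general idea, using quantile-based bins on a 1-D input and scikit-learn's GaussianProcessRegressor, follows; the binning rule and all names are illustrative assumptions, not the paper's method.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def fit_partitioned_kriging(X, y, n_parts):
    """Split 1-D inputs into near-equally populated bins; fit one GP per bin.

    Quantile edges give each bin roughly n/n_parts points, so each local
    Kriging fit is cheap, and bins can be fitted or refreshed independently.
    """
    edges = np.quantile(X[:, 0], np.linspace(0.0, 1.0, n_parts + 1))
    models = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (X[:, 0] >= lo) & (X[:, 0] <= hi)
        models.append((lo, hi, GaussianProcessRegressor().fit(X[mask], y[mask])))
    return models

def predict(models, x):
    """Route a query to the local Kriging model owning its bin."""
    for lo, hi, gp in models:
        if lo <= x[0] <= hi:
            return gp.predict(np.atleast_2d(x))[0]
    return models[-1][2].predict(np.atleast_2d(x))[0]  # clamp out-of-range queries

X = np.random.rand(400, 1)
y = np.sin(6 * X[:, 0])
models = fit_partitioned_kriging(X, y, n_parts=4)
print(predict(models, np.array([0.5])))
```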


2021 ◽  
Author(s):  
Sukriti Manna ◽  
Alberto Hernandez ◽  
Yunzhe Wang ◽  
Peter Lile ◽  
Shanping Liu ◽  
...  

The chemical and structural properties of atomically precise nanoclusters are of great interest in numerous applications, but the structures of the clusters can be computationally expensive to predict. In this work, we present the largest database to date of cluster structures and properties determined using ab initio methods. We report the methodologies used to discover low-energy clusters, as well as the energies, relaxed structures, and physical properties (such as relative stability and HOMO-LUMO gap, among others) for over 50,000 clusters across 55 elements. We have identified 589 structures with energies lower than any previously reported in the literature by at least 1 meV/atom, and 1340 new structures for clusters that were previously unexplored in the literature. Patterns in the data reveal insights into the chemical and structural relationships among the elements at the nanoscale. We describe how the database can be accessed for future studies and for the development of nanocluster-based technologies.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Zachary P. Neal ◽  
Rachel Domagalski ◽  
Bruce Sagan

Abstract Projections of bipartite or two-mode networks capture co-occurrences and are used in diverse fields (e.g., ecology, economics, bibliometrics, politics) to represent unipartite networks. A key challenge in analyzing such networks is determining whether an observed number of co-occurrences between two nodes is significant, and therefore whether an edge exists between them. One approach, the fixed degree sequence model (FDSM), evaluates the significance of an edge's weight by comparison to a null model in which the degree sequences of the original bipartite network are fixed. Although the FDSM is an intuitive null model, it is computationally expensive because it requires Monte Carlo simulation to estimate each edge's p-value, and it is therefore impractical for large projections. In this paper, we explore four potential alternatives to the FDSM: the fixed fill model, the fixed row model, the fixed column model, and the stochastic degree sequence model (SDSM). We compare these models to the FDSM in terms of accuracy, speed, statistical power, similarity, and ability to recover known communities. We find that the computationally fast SDSM offers a statistically conservative but close approximation of the computationally impractical FDSM under a wide range of conditions, and that it correctly recovers a known community structure even when the signal is weak. Therefore, although each backbone model may have particular applications, we recommend the SDSM for extracting the backbone of bipartite projections when the FDSM is impractical.
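To see why the FDSM is expensive, consider what a single edge's p-value costs: a Monte Carlo sample of bipartite matrices with both degree sequences held fixed, each typically generated by many checkerboard swaps. The sketch below illustrates this under stated assumptions (the swap count between samples is a guess at the mixing time, and this is not the authors' code); multiplying its cost by the number of candidate edges shows why a closed-form alternative such as the SDSM matters.

```python
import numpy as np

def edge_pvalue_fdsm(B, i, j, n_samples=200, rng=None):
    """Monte Carlo FDSM p-value for one projection edge (illustrative sketch).

    B: binary biadjacency matrix (rows = agents, columns = artifacts).
    The observed co-occurrence count B[i]·B[j] is compared against the same
    count in random matrices with both degree sequences fixed, sampled here
    by 2x2 checkerboard swaps, which preserve every row and column sum.
    """
    rng = np.random.default_rng(rng)
    observed = int(B[i] @ B[j])
    M = B.copy()
    exceed = 0
    for _ in range(n_samples):
        for _ in range(5 * M.size):          # swaps between samples: a guess
            r = rng.integers(0, M.shape[0], 2)
            c = rng.integers(0, M.shape[1], 2)
            if M[r[0], c[0]] == M[r[1], c[1]] == 1 and \
               M[r[0], c[1]] == M[r[1], c[0]] == 0:
                M[r[0], c[0]] = M[r[1], c[1]] = 0
                M[r[0], c[1]] = M[r[1], c[0]] = 1
        if int(M[i] @ M[j]) >= observed:
            exceed += 1
    return (exceed + 1) / (n_samples + 1)

B = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1]])
print(edge_pvalue_fdsm(B, 0, 1, rng=0))  # small p-value: rows 0 and 1 co-occur fully
```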

