An Efficient Scheme for Compression of Electro Cardiac Signal using Divide and Conquer Algorithm

Author(s):
Thelapolu Shalini
Srinivas Bachu
Naluguru Udaya Kumar

2021, Vol 11 (1)
Author(s):
Israel F. Araujo
Daniel K. Park
Francesco Petruccione
Adenilton J. da Silva

Abstract. Advantages in several fields of research and industry are expected with the rise of quantum computers. However, the computational cost of loading classical data into quantum computers can impose restrictions on possible quantum speedups. Known algorithms for creating arbitrary quantum states require quantum circuits with depth O(N) to load an N-dimensional vector. Here, we show that it is possible to load an N-dimensional vector with an exponential time advantage, using a quantum circuit with polylogarithmic depth and entangled information in ancillary qubits. The results show that we can efficiently load data into quantum devices using a divide-and-conquer strategy that trades computational time for space. We demonstrate a proof of concept on a real quantum device and present two applications for quantum machine learning. We expect this new loading strategy to enable quantum speedups for tasks that require loading a significant volume of information into quantum devices.
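As a rough illustration of the decomposition behind such loaders, the NumPy sketch below computes the binary tree of Ry rotation angles used in amplitude encoding of a normalized, real-valued vector. It is a minimal sketch under those assumptions, not the authors' implementation: in the divide-and-conquer circuit, the rotations of each tree level act on separate ancillary qubits in parallel, which is what brings the depth down from O(N) to polylogarithmic.

# Minimal sketch (not the authors' code): the binary tree of Ry angles
# behind amplitude encoding of a normalized, real-valued vector of
# length 2^n. A sequential loader walks this tree in O(N) depth; the
# divide-and-conquer loader applies each level on its own ancillary
# qubits in parallel, paying qubits (space) to reach polylog depth.
import numpy as np

def angle_tree(state):
    """Return the Ry angles for each tree level, root level first."""
    amplitudes = np.abs(np.asarray(state, dtype=float))
    levels = []
    while len(amplitudes) > 1:
        left, right = amplitudes[0::2], amplitudes[1::2]
        # Ry(2*arctan2(r, l)) splits a parent amplitude into (l, r);
        # arctan2(0, 0) == 0, so zero-amplitude subtrees are harmless.
        levels.append(2 * np.arctan2(right, left))
        amplitudes = np.sqrt(left**2 + right**2)
    return levels[::-1]

if __name__ == "__main__":
    v = np.array([0.5, 0.5, 0.5, 0.5])  # normalized, N = 4
    for depth, angles in enumerate(angle_tree(v)):
        print(f"level {depth}: {np.round(angles, 4)}")

For the uniform vector above, every angle comes out as pi/2, i.e., a balanced split at each node of the tree.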


Author(s):  
Afrand Agah ◽  
Mehran Asadi

This article introduces a new method to discover the role of influential people in online social networks and presents an algorithm that recognizes influential users to reach a target in the network, providing a strategic advantage for organizations to direct the scope of their digital marketing strategies. Social links among friends play an important role in dictating their behavior in online social networks: these links determine the flow of information in the form of wall posts via shares, likes, re-tweets, mentions, etc., which in turn determines the influence of a node. The article first identifies the correlated nodes in large data sets using a customized divide-and-conquer algorithm and then measures the influence of each of these nodes using a linear function. Furthermore, the empirical results show that the users with the highest influence are those whose number of friends is closest to the network's average number of friends per node, i.e., the total friend count divided by the total number of nodes.
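As a toy rendering of that empirical criterion (not the authors' algorithm; the function and variable names are illustrative), the sketch below ranks users by how close their friend count sits to the network average, using a small divide-and-conquer minimum selection in the spirit of the paper.

# Toy sketch of the reported empirical criterion (not the authors' code):
# the most influential users are those whose friend count is closest to
# the network's average friends-per-node. The candidate search uses a
# small divide-and-conquer minimum selection.
def closest_to_average(friend_counts):
    """friend_counts: dict mapping user id -> number of friends."""
    average = sum(friend_counts.values()) / len(friend_counts)

    def best(users):
        # Divide: split the user list in half, conquer each half,
        # and keep whichever candidate sits closer to the average.
        if len(users) == 1:
            return users[0], abs(friend_counts[users[0]] - average)
        mid = len(users) // 2
        left, right = best(users[:mid]), best(users[mid:])
        return left if left[1] <= right[1] else right

    user, distance = best(list(friend_counts))
    return user, average, distance

if __name__ == "__main__":
    counts = {"alice": 120, "bob": 45, "carol": 80, "dave": 310}
    user, avg, dist = closest_to_average(counts)
    print(f"average friends/node = {avg:.2f}; closest user = {user} ({dist:.2f} away)")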


2019
Author(s):
Oriol Tintó Prims
Mario C. Acosta
Andrew M. Moore
Miguel Castrillo
Kim Serradell
...

Abstract. Mixed-precision approaches can provide substantial speed-ups for both computing- and memory-bound codes with little effort. Most scientific codes over-engineer their numerical precision, so models use more resources than required without it being known where those resources are unnecessary and where they are really needed. Consequently, performance benefits can be obtained from a more appropriate choice of precision, and all that is needed is a method to determine which real variables can be represented with fewer bits without affecting the accuracy of the results. This paper presents a novel method to enable modern and legacy codes to benefit from a reduction of precision without sacrificing accuracy. It rests on a simple idea: if we can measure how reducing the precision of a group of variables affects the outputs, we can evaluate the level of precision this group of variables needs. Modifying and recompiling the code for each case to be evaluated would require a prohibitive amount of effort. Instead, the method presented in this paper relies on a tool called the Reduced Precision Emulator (RPE) that can significantly streamline the process. Using the RPE and a list of parameters containing the precision to be used for each real variable in the code, it is possible within a single binary to emulate the effect of a specific choice of precision on the outputs. Once we can emulate the effects of reduced precision, we can design the tests required to learn about all the variables in the model. The number of possible combinations is far too large to explore exhaustively. The alternative of screening the variables individually can give some insight into the precision each variable needs, but more complex interactions that involve several variables may remain hidden. Instead, we use a divide-and-conquer algorithm that identifies the parts that cannot handle reduced precision and builds a set of variables that can. The method has been put to the test using two state-of-the-art ocean models, NEMO and ROMS, with very promising results. Obtaining this information is crucial for subsequently building an actual mixed-precision version of the code that will deliver the promised performance benefits.
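The search itself can be pictured as a recursive bisection over the model's real variables. The sketch below is a schematic Python rendering of that idea, not the authors' code: run_and_check is an assumed wrapper that runs the RPE-instrumented binary with the given group of variables emulated at reduced precision and reports whether the outputs stay within the accuracy tolerance.

# Schematic divide-and-conquer precision search (an illustration of the
# strategy described above, not the authors' implementation).
# run_and_check(group) is an assumed user-supplied wrapper: it runs the
# RPE-instrumented binary with `group` emulated at reduced precision and
# returns True when the outputs stay within the accuracy tolerance.
def find_reducible(variables, run_and_check):
    """Return the subset of `variables` that tolerates reduced precision."""
    if not variables:
        return []
    if run_and_check(variables):
        return list(variables)   # the whole group passes: keep all of it
    if len(variables) == 1:
        return []                # a single failing variable stays at full precision
    mid = len(variables) // 2
    left = find_reducible(variables[:mid], run_and_check)
    right = find_reducible(variables[mid:], run_and_check)
    combined = left + right
    # Halves can pass separately yet fail together; re-test the union to
    # catch interactions that individual screening would miss.
    if combined and not run_and_check(combined):
        return left if len(left) >= len(right) else right
    return combined

if __name__ == "__main__":
    # Toy stand-in: pretend only "var_c" actually needs full precision.
    ok = lambda group: "var_c" not in group
    print(find_reducible(["var_a", "var_b", "var_c", "var_d"], ok))  # ['var_a', 'var_b', 'var_d']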

