An Algorithm for Mining Frequent Stream Data Items Using Hash Function and Fading Factor

2011 ◽  
Vol 130-134 ◽  
pp. 2661-2665 ◽  
Author(s):  
Qing Ling Mei ◽  
Ling Chen

A new algorithm for mining frequent items in a data stream is presented. The algorithm adopts a time fading factor to emphasize the importance of more recent data, and records the densities of the data items in hash tables. For a given density threshold S and an integer k, the algorithm mines the top-k frequent items, and the computation time for processing each data item is O(1). Experimental results show that the algorithm outperforms other methods in terms of accuracy, memory requirement, and processing speed.
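The combination of a hash table and a lazily applied fading factor can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the parameter names `fade` and `threshold`, and the lazy-decay bookkeeping (storing each item's last update time so every arrival still costs O(1)), are assumptions consistent with the abstract.

```python
import heapq

def top_k_frequent(stream, k, fade=0.99, threshold=0.5):
    """Sketch of fading-factor density counting (illustrative; `fade`
    and `threshold` are assumed parameters, not the paper's values).

    Each item's density decays lazily: we store (density, last_time)
    and apply fade**(now - last_time) on the next arrival, so each
    stream item is processed in O(1) time."""
    table = {}  # hash table: item -> (density, last_update_time)
    for now, item in enumerate(stream):
        density, last = table.get(item, (0.0, now))
        density = density * fade ** (now - last) + 1.0
        table[item] = (density, now)
    end = len(stream) - 1  # bring every density up to the final time
    scores = {it: d * fade ** (end - t) for it, (d, t) in table.items()}
    # keep only items whose faded density exceeds the threshold S
    frequent = {it: s for it, s in scores.items() if s >= threshold}
    return heapq.nlargest(k, frequent, key=frequent.get)
```

Because older occurrences are discounted by `fade**(elapsed)`, a recently repeated item can outrank one that was frequent only early in the stream.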

2013 ◽  
Vol 554-557 ◽  
pp. 1375-1381 ◽  
Author(s):  
Laurence Giraud-Moreau ◽  
Abel Cherouat ◽  
Jie Zhang ◽  
Houman Borouchaki

Recently, a new sheet metal forming technique, incremental forming, has been introduced. It is based on a single spherical tool that is moved along a CNC-controlled tool path. During the incremental forming process, the sheet blank is fixed in a sheet holder; the tool follows a prescribed path and progressively deforms the sheet. Nowadays, numerical simulations of metal forming are widely used in industry to predict the geometry of the part and the stresses and strains arising during the forming process. Because incremental forming is a dieless process, it is perfectly suited to prototyping and small-volume production [1, 2]. On the other hand, the process is very slow, so it can only be used when slow series production is acceptable. As sheet incremental forming is an emerging process of high industrial interest, scientific effort is required to optimize the process and to increase knowledge of it through experimental studies and the development of accurate simulation models. In this paper, a comparison between numerical simulation and experimental results is carried out in order to assess the suitability of the numerical model. The experimental investigation is performed on a three-axis CNC milling machine. The forming tool consists of a cylindrical rotating punch with a hemispherical head. A subroutine has been developed to derive the tool path from the CAM procedure. A numerical model has been developed to simulate the sheet incremental forming process, using the finite element code Abaqus/Explicit. Simulating the incremental forming process remains a complex task, and the computation time is often prohibitive for several reasons. During the simulation, the blank is deformed by a sequence of small increments, which requires many numerical increments to be performed.
Moreover, the tool diameter is generally very small compared to the size of the metal sheet, so the contact zone between the tool and the sheet is limited. Since the tool deforms almost every part of the sheet, small elements are required everywhere in the sheet, resulting in a very high computation time. In this paper, an adaptive remeshing method is used to simulate the incremental forming process. This strategy, based on adaptive refinement and coarsening procedures, avoids starting from an initially fine mesh and the enormous computing time it entails. Experiments have been carried out using aluminum alloy sheets. The final geometrical shape and the thickness profile have been measured and compared with the numerical results, allowing the proposed numerical model to be validated.
References
[1] M. Yamashita, M. Gotoh, S.-Y. Atsumi, Numerical simulation of incremental forming of sheet metal, J. Mater. Process. Technol. 199 (2008), pp. 163-172.
[2] C. Henrard, A.M. Habraken, A. Szekeres, J.R. Duflou, S. He, P. Van Houtte, Comparison of FEM Simulations for the Incremental Forming Process, Advanced Materials Research 6-8 (2005), pp. 533-542.
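A CNC tool path of the kind described above can be sketched for a simple benchmark shape. This is purely illustrative: a helical contour path forming a conical pocket with a 45-degree wall is a common test geometry in incremental-forming studies, but the shape, `pitch`, and `radius` values here are assumptions, not the paper's actual CAM data.

```python
import math

def spiral_tool_path(radius, depth, pitch, pts_per_rev=90):
    """Illustrative helical tool path for forming a conical pocket.

    The tool descends by `pitch` per revolution while the contour
    radius shrinks linearly with depth (45-degree wall angle).
    Returns a list of (x, y, z) tool-tip positions."""
    path = []
    n_revs = depth / pitch
    n_pts = int(n_revs * pts_per_rev)
    for i in range(n_pts):
        theta = 2.0 * math.pi * i / pts_per_rev
        z = -pitch * theta / (2.0 * math.pi)  # current depth (negative down)
        r = radius + z                        # radius shrinks with depth
        path.append((r * math.cos(theta), r * math.sin(theta), z))
    return path
```

The small per-revolution pitch is what makes the process slow in practice: forming a 10 mm deep pocket at 0.5 mm per revolution already takes 20 full contours.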


2021 ◽  
Vol 2021 ◽  
pp. 1-15 ◽
Author(s):  
Bing Tang ◽  
Linyao Kang ◽  
Li Zhang ◽  
Feiyan Guo ◽  
Haiwu He

Nonnegative matrix factorization (NMF) has been introduced as an efficient way to reduce the complexity of data compression, and its ability to extract highly interpretable parts from data sets has led to applications in various fields such as recommendation, image analysis, and text clustering. However, as the size of the matrix increases, nonnegative matrix factorization becomes very slow. To address this problem, this paper proposes a GPU-based parallel algorithm for NMF on the Spark platform, which makes full use of Spark's in-memory computation mode and GPU acceleration. The GPU-accelerated NMF is evaluated on a 4-node heterogeneous Spark cluster on Google Compute Engine, with each node equipped with an NVIDIA K80 CUDA device; experimental results indicate that it is competitive in computation time with existing solutions over a variety of matrix orders. Furthermore, a GPU-accelerated NMF-based parallel collaborative filtering (CF) algorithm is also proposed, exploiting NMF's dimensionality reduction and feature extraction together with the multicore parallel computing mode of CUDA. Using real MovieLens data sets, experimental results show that the parallelized NMF-based collaborative filtering on the Spark platform effectively outperforms traditional user-based and item-based CF in both processing speed and recommendation accuracy.
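The factorization itself can be sketched with the classic Lee-Seung multiplicative updates, which is the standard iterative NMF scheme; this CPU/NumPy version illustrates the computation being parallelized, not the paper's Spark/GPU implementation.

```python
import numpy as np

def nmf(V, rank, n_iter=500, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for V ~ W @ H with V, W, H >= 0.

    Each iteration rescales H and W by ratios of nonnegative matrix
    products, so nonnegativity is preserved automatically; `eps`
    guards against division by zero."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update coefficients
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update basis vectors
    return W, H
```

The heavy operations are the dense matrix products inside the loop, which is why offloading them to a GPU pays off as the matrix order grows.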


Author(s):  
Regant Y. S. Hung ◽  
Kwok Fai Lai ◽  
Hing Fung Ting
2015 ◽  
Vol 781 ◽  
pp. 568-571 ◽  
Author(s):  
Sanun Srisuk ◽  
Wachirapong Kesjindatanawaj ◽  
Surachai Ongkittikul

In this paper, we present a technique for accelerating bilateral filtering using a GPGPU. Bilateral filtering is a tool for image smoothing with edge-preserving properties. It acts as a combination of a domain filter and a range filter: the domain filter suppresses Gaussian noise, while the range filter maintains sharp edges. Bilateral filtering is a nonlinear filter whose kernel must be computed pixel by pixel, so the conventional fast Fourier transform technique cannot be used to accelerate it. Instead, a general-purpose GPU is used as a parallel machine to reduce the time consumed by bilateral filtering. We present experimental results comparing the computation times of the CPU and GPU; they make clear that the GPU outperforms the CPU in terms of computation time.
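The per-pixel kernel that rules out an FFT-based implementation is visible in a brute-force sketch of the filter. This is a generic bilateral filter for a grayscale float image, with illustrative parameter values, not the paper's GPU code: the domain (spatial) weight is fixed, but the range weight depends on the center pixel's intensity, so the combined kernel must be rebuilt at every pixel.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_d=2.0, sigma_r=0.1):
    """Brute-force bilateral filter on a grayscale float image.

    The domain weight depends only on spatial distance (precomputed
    once); the range weight depends on the intensity difference to the
    center pixel, so the full kernel is recomputed per pixel."""
    h, w = img.shape
    pad = np.pad(img, radius, mode='edge')
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    domain = np.exp(-(xs**2 + ys**2) / (2 * sigma_d**2))  # fixed spatial kernel
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng_w = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            weight = domain * rng_w  # kernel rebuilt for every pixel
            out[i, j] = (weight * patch).sum() / weight.sum()
    return out
```

Because every pixel's weighted sum is independent, the double loop maps naturally onto one GPU thread per pixel, which is the parallelism the paper exploits.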


Author(s):  
Michiharu Maeda ◽  
Noritaka Shigei ◽  
Hiromi Miyajima ◽  
Kenichi Suzaki ◽  
...  

Two reduction schemes for competitive learning, founded on distortion criteria, are discussed from the viewpoint of generating the necessary and appropriate reference vectors when their number is fixed in advance. The first approach is termed the segmental-reduction competitive learning algorithm and proceeds as follows: first, numerous reference vectors are prepared and competitive learning is carried out; next, reference vectors are sequentially eliminated down to their prespecified number based on the partition-error criterion. The second approach is termed the general-reduction competitive learning algorithm and proceeds as follows: first, numerous reference vectors are prepared and competitive learning is carried out; next, reference vectors are sequentially erased based on the average-distortion criterion. Experimental results demonstrate that our approaches achieve lower average distortion than conventional techniques. The two approaches are applied to image coding to determine their feasibility in terms of quality and computation time.
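The two-phase structure (train with surplus reference vectors, then prune) can be sketched as follows. This is an illustrative reconstruction: the winner-take-all update is standard competitive learning, but the single greedy pruning rule shown here (erase the vector whose removal increases average distortion the least) merely stands in for the paper's two distinct partition-error and average-distortion criteria.

```python
import numpy as np

def competitive_learning(data, n_refs, epochs=20, lr=0.1, seed=0):
    """Plain winner-take-all competitive learning: for each sample,
    the nearest reference vector moves a step toward it."""
    rng = np.random.default_rng(seed)
    refs = data[rng.choice(len(data), n_refs, replace=False)].astype(float)
    for _ in range(epochs):
        for x in data:
            win = np.argmin(((refs - x) ** 2).sum(axis=1))
            refs[win] += lr * (x - refs[win])
    return refs

def reduce_refs(data, refs, target):
    """Sketch of the reduction phase: repeatedly erase the reference
    vector whose removal raises average distortion the least
    (an illustrative stand-in for the paper's criteria)."""
    refs = list(refs)

    def distortion(rs):
        r = np.asarray(rs)
        # mean squared distance from each sample to its nearest reference
        return np.mean(((data[:, None, :] - r[None]) ** 2).sum(-1).min(axis=1))

    while len(refs) > target:
        costs = [distortion(refs[:i] + refs[i + 1:]) for i in range(len(refs))]
        refs.pop(int(np.argmin(costs)))  # cheapest vector to lose
    return np.asarray(refs)
```

Starting with surplus vectors and pruning avoids the dead-unit problem of initializing with exactly the target number: redundant vectors in a well-covered region are removed at almost no cost in distortion.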


1983 ◽  
Vol 37 (3) ◽  
pp. 273-279 ◽  
Author(s):  
K. L. Sala ◽  
R. W. Yip ◽  
R. LeSage

The use of the fast Fourier transform in processing the photographic data obtained from picosecond continuum spectroscopy is described. The resulting reduction in complexity and computation time has permitted all of the data acquisition and processing to be carried out on an eight-bit microcomputer. Specific examples of some key data-processing problems peculiar to this spectroscopic technique, and methods of overcoming them, are discussed. Experimental results illustrating both the experimental technique itself and the versatility and reliability of the data-processing algorithm are presented for the transient absorption of a Cr(III) complex in solution.
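A simple example of the kind of FFT-based processing that makes such work feasible on a small machine is low-pass smoothing of a digitized trace in the frequency domain. This is an illustrative sketch only; the cutoff `keep` is an assumed parameter, not a value from the paper.

```python
import numpy as np

def fft_smooth(trace, keep=20):
    """Smooth a real-valued digitized trace by zeroing all but the
    lowest `keep` Fourier coefficients, then inverting the transform.
    The FFT makes this O(n log n) rather than O(n^2) for a direct
    convolution of the same length."""
    spec = np.fft.rfft(trace)
    spec[keep:] = 0.0                     # discard high-frequency content
    return np.fft.irfft(spec, n=len(trace))
```

On a periodic sampling grid this removes high-frequency noise exactly while passing low-frequency signal components untouched.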

