FPGA Implementation of Real-Time Compressive Sensing with Partial Fourier Dictionary

2016 ◽  
Vol 2016 ◽  
pp. 1-12 ◽  
Author(s):  
Yinghui Quan ◽  
Yachao Li ◽  
Xiaoxiao Gao ◽  
Mengdao Xing

This paper presents a novel real-time compressive sensing (CS) reconstruction which employs a high-density field-programmable gate array (FPGA) for hardware acceleration. Traditionally, CS is implemented using a high-level computer language on a personal computer (PC) or on multicore platforms such as graphics processing units (GPUs) and digital signal processors (DSPs). However, reconstruction algorithms are computationally demanding, and software implementations of these algorithms are extremely slow and power consuming. In this paper, the orthogonal matching pursuit (OMP) algorithm is refined to solve the sparse decomposition optimization for a partial Fourier dictionary, which is widely adopted in radar imaging and detection applications. OMP reconstruction can be divided into two main stages: a correlation stage that finds the dictionary atoms most correlated with the residual, and a least-squares problem. For a large-scale dictionary, the correlation stage is time consuming since it requires a large number of matrix multiplications; solving the least-squares problem likewise requires a scalable matrix decomposition. To address these problems efficiently, the correlation stage is implemented with the fast Fourier transform (FFT) and the large-scale least-squares problem is solved with the conjugate gradient (CG) method. The proposed method is verified by realization on an FPGA (Xilinx Virtex-7 XC7VX690T), demonstrating its effectiveness in real-time applications.
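
As a rough illustration of the two stages described above, the following NumPy/SciPy sketch implements a toy OMP loop in which the correlation step is carried out with an FFT (exploiting the partial Fourier structure) and the least-squares step is solved by conjugate gradient on the normal equations. The function name, interface, and parameters are illustrative assumptions and do not reproduce the paper's fixed-point FPGA design.

```python
import numpy as np
from scipy.sparse.linalg import cg

def omp_partial_fourier(y, omega, n, k):
    """Toy OMP for a partial Fourier dictionary A = F[omega, :] (rows of the n-point DFT).

    y     : measurement vector, y = A @ x for a k-sparse x
    omega : indices of the sampled DFT rows
    """
    support, r = [], y.astype(complex)
    for _ in range(k):
        # Correlation step via FFT: A^H r equals n * ifft(zero-filled residual)
        z = np.zeros(n, dtype=complex)
        z[omega] = r
        corr = n * np.fft.ifft(z)
        corr[support] = 0                      # do not re-select chosen atoms
        support.append(int(np.argmax(np.abs(corr))))
        # Least-squares step: solve the normal equations A_S^H A_S x_S = A_S^H y with CG
        A_S = np.exp(-2j * np.pi * np.outer(omega, support) / n)
        G, b = A_S.conj().T @ A_S, A_S.conj().T @ y
        x_S, _ = cg(G, b)                      # G is Hermitian positive definite
        r = y - A_S @ x_S
    x = np.zeros(n, dtype=complex)
    x[support] = x_S
    return x
```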

Author(s):  
Charu Bhardwaj ◽  
Urvashi Sharma ◽  
Shruti Jain ◽  
Meenakshi Sood

Compression serves as a significant feature for efficient storage and transmission of medical, satellite, and natural images. Transmission speed is a key challenge when transmitting large amounts of data, especially for magnetic resonance imaging and computed tomography scan images. Compressive sensing is an optimization-based approach that acquires sparse signals at sub-Nyquist rates by exploiting only the signal of interest. This chapter explores compressive sensing for correct sensing, acquisition, and reconstruction of clinical images. Performance metrics such as peak signal-to-noise ratio (PSNR), root mean square error (RMSE), structural similarity index, and compression ratio are assessed for medical image evaluation using three reconstruction algorithms: basis pursuit, least squares, and orthogonal matching pursuit. Basis pursuit emerges as the best-performing reconstruction method among the examined recovery techniques. As the number of measurement samples increases, PSNR increases significantly and RMSE decreases.
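
For reference, the two headline metrics used in this evaluation can be computed with a few lines of NumPy. The functions below are a minimal sketch (the peak value of 255 assumes 8-bit images) and are not tied to any particular implementation used in the chapter.

```python
import numpy as np

def rmse(ref, rec):
    """Root mean square error between a reference and a reconstructed image."""
    return np.sqrt(np.mean((ref.astype(float) - rec.astype(float)) ** 2))

def psnr(ref, rec, peak=255.0):
    """Peak signal-to-noise ratio in dB (peak = 255 for 8-bit images)."""
    e = rmse(ref, rec)
    return np.inf if e == 0 else 20.0 * np.log10(peak / e)
```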


2021 ◽  
Vol 13 (5) ◽  
pp. 2950
Author(s):  
Su-Kyung Sung ◽  
Eun-Seok Lee ◽  
Byeong-Seok Shin

Climate change increases the frequency of localized heavy rains and typhoons. As a result, mountain disasters such as landslides and earthworks continue to occur, causing damage to roads and residential areas downstream. Moreover, large-scale civil engineering works, including dam construction, cause rapid changes in the terrain, which harm the stability of residential areas. Because disasters such as landslides and earthworks occur over wide areas and field investigation is limited, many studies model terrain geometrically and observe how it changes under external factors. However, conventional topographic representations can only be interpreted by people with specialized knowledge, and little consideration has been given to three-dimensional visualization that helps non-experts understand. A method is needed that expresses changes in terrain in real time and is intuitive for non-experts. In conventional height-based terrain modeling and simulation, some of the sampled data are irregularly distorted and do not show the exact terrain shape. The proposed method utilizes a hierarchical vertex cohesion map to correct terrain modeled inaccurately by uniform height sampling, and compensates for geometric errors using Hausdorff distances rather than considering only the elevation difference of the terrain. Mesh reconstruction, which triangulates the three vertices placed at each location into the smallest unit of 3D model data, can be performed at high speed on graphics processing units (GPUs). Our experiments confirm that changes in terrain can be expressed accurately and quickly compared with existing methods. These capabilities can improve the sustainability of residential spaces by predicting the damage caused by mountain disasters or civil engineering works around a city, and they make such changes easy for non-experts to understand.
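
The geometric error measure referred to here, the Hausdorff distance between two vertex sets, can be sketched in a brute-force way as follows. This illustrative NumPy version scales poorly with vertex count and does not reproduce the hierarchical vertex cohesion map or the GPU mesh reconstruction.

```python
import numpy as np

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between vertex sets a (n, 3) and b (m, 3)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)   # pairwise distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```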


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Ziang Lei

3D reconstruction techniques for animated images and facial animation techniques are important research topics in computer graphics-related fields. Traditional 3D reconstruction techniques for animated images rely mainly on expensive 3D scanning equipment and extensive, time-consuming manual postprocessing, and they require the scanned subject to remain in a fixed pose for a considerable period. In recent years, the growth of large-scale computing power in computer hardware, especially distributed computing, has made real-time and efficient solutions possible. In this paper, we propose a 3D reconstruction method for multivisual animated images based on Poisson's equation theory. Calibration theory is used to calibrate the multivisual animated images and obtain the internal and external parameters of the camera calibration module; feature points are extracted from the animated images of each viewpoint with a corner detection operator; the extracted feature points are then matched and corrected with the least median of squares method; and the 3D reconstruction of the multivisual animated images is completed. The experimental results show that the proposed method obtains 3D reconstruction results for multivisual animation images quickly and accurately and offers real-time performance and reliability.
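
A rough OpenCV sketch of the feature extraction and robust match-correction stage described above is given below. The file names, detector choice (ORB corner-like features), and parameters are assumptions, and the calibration and Poisson-equation reconstruction stages are not shown.

```python
import cv2
import numpy as np

# Hypothetical image files for two viewpoints of the same animated subject.
img1 = cv2.imread("view_0.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_1.png", cv2.IMREAD_GRAYSCALE)

# Detect corner-like feature points in each view and describe them.
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match the feature points between the two viewpoints.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Correct the matches robustly with the least-median-of-squares estimator.
F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_LMEDS)
pts1, pts2 = pts1[inlier_mask.ravel() == 1], pts2[inlier_mask.ravel() == 1]
```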


This paper presents a novel area-efficient fast Fourier transform (FFT) architecture for real-time compressive sensing (CS) reconstruction. Among the various methodologies used for CS reconstruction, the greedy orthogonal matching pursuit (OMP) approach provides an accurate solution, but at the cost of considerable computational overhead. Several computationally intensive arithmetic operations, such as complex matrix multiplications, are required to form the correlation vectors, which makes hardware implementation of the algorithm complex and power hungry. Computational complexity is especially important in complex FFT designs that must meet different operational standards and system requirements. In general, real-time applications require FFTs that deliver high-speed computation with the least possible complexity overhead in order to support a wide range of applications. This paper presents a hardware-efficient FFT computation technique with twiddle-factor normalization for correlation optimization in OMP. Experimental results validate the performance of the proposed normalization technique with respect to complexity and energy. The proposed method is verified through FPGA synthesis and validated against currently available comparative analyses.
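
To make the role of the twiddle factors concrete, a minimal radix-2 decimation-in-time FFT is sketched below in plain Python/NumPy. The normalization scheme proposed in the paper and its fixed-point hardware mapping are not reproduced here.

```python
import numpy as np

def fft_radix2(x):
    """Minimal recursive radix-2 DIT FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return x.astype(complex)
    even = fft_radix2(x[0::2])
    odd = fft_radix2(x[1::2])
    twiddle = np.exp(-2j * np.pi * np.arange(n // 2) / n)   # twiddle factors W_n^k
    return np.concatenate([even + twiddle * odd, even - twiddle * odd])
```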


Author(s):  
Ryan S. Richards ◽  
Mikola Lysenko ◽  
Roshan M. D’Souza ◽  
Gary An

Agent-based modeling has recently been recognized as a method for in-silico multi-scale modeling of biological cell systems. Agent-based models (ABMs) allow results from experimental studies of individual cell behaviors to be scaled into the macro-behavior of interacting cells in complex cell systems or tissues. Current-generation ABM simulation toolkits are designed for serial von Neumann architectures, which have poor scalability; the best systems can barely handle tens of thousands of agents in real time. Considering that some models exhibit significantly different emergent behaviors at mega-scale populations than at smaller population sizes, it is important to be able to simulate such large-scale models in real time. In this paper we present a new framework for simulating ABMs on programmable graphics processing units (GPUs). Novel algorithms and data structures have been developed for agent-state representation, agent motion, and replication. As a test case, we have implemented an abstracted version of the Systemic Inflammatory Response Syndrome (SIRS) ABM. Compared to the original implementation on the NetLogo system, our implementation can handle an agent population over three orders of magnitude larger at close to 40 updates/sec. We believe that our system is the only one of its kind capable of efficiently handling realistic problem sizes in biological simulations.
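
As a toy illustration of the kind of data-parallel agent update that maps well onto GPUs, the NumPy sketch below stores agent state as flat per-attribute arrays (a structure-of-arrays layout) and moves every live agent in one vectorized pass. It is not the authors' framework and omits replication and the SIRS rules.

```python
import numpy as np

# Toy structure-of-arrays agent state: each attribute lives in its own flat
# array and can be updated in a single vectorized pass, the access pattern
# that GPU kernels favor.
rng = np.random.default_rng(0)
n_agents = 1_000_000
pos = rng.random((n_agents, 2), dtype=np.float32)   # agent positions on a unit square
alive = np.ones(n_agents, dtype=bool)               # liveness flags

def step(pos, alive, speed=0.01):
    """One motion update: every live agent takes a small random step."""
    move = rng.standard_normal(pos.shape).astype(np.float32) * speed
    pos[alive] = np.clip(pos[alive] + move[alive], 0.0, 1.0)
    return pos
```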


Sensors ◽  
2019 ◽  
Vol 19 (9) ◽  
pp. 2206 ◽  
Author(s):  
Muhammad Aqib ◽  
Rashid Mehmood ◽  
Ahmed Alzahrani ◽  
Iyad Katib ◽  
Aiiad Albeshri ◽  
...  

Road transportation is the backbone of modern economies, although it annually costs 1.25 million deaths and trillions of dollars to the global economy, and damages public health and the environment. Deep learning is among the leading-edge methods used for transportation-related predictions; however, existing works are in their infancy and fall short in multiple respects, including the use of datasets with limited sizes and scopes and insufficient depth of the deep learning studies. This paper provides a novel and comprehensive approach toward large-scale, faster, and real-time traffic prediction by bringing together four complementary cutting-edge technologies: big data, deep learning, in-memory computing, and Graphics Processing Units (GPUs). We trained deep networks using over 11 years of data provided by the California Department of Transportation (Caltrans), the largest dataset that has been used in deep learning studies. Several combinations of the input attributes of the data, along with various network configurations of the deep learning models, were investigated for training and prediction purposes. The use of the pre-trained model for real-time prediction was also explored. The paper contributes novel deep learning models, algorithms, implementations, an analytics methodology, and a software tool for smart cities, big data, high-performance computing, and their convergence.
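
The following Keras snippet is only a generic sketch of a traffic-flow regressor of the kind explored in such studies. The input window of 12 readings, the layer sizes, and the random placeholder data are assumptions and do not correspond to the paper's actual Caltrans pipeline or network configurations.

```python
import numpy as np
import tensorflow as tf

# Placeholder data: 12 past flow readings per sample, next-interval flow as target.
rng = np.random.default_rng(0)
x_train = rng.random((1000, 12)).astype("float32")
y_train = rng.random((1000, 1)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(12,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),   # predicted traffic flow for the next interval
])
model.compile(optimizer="adam", loss="mse")
model.fit(x_train, y_train, epochs=5, batch_size=32, verbose=0)
```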


2017 ◽  
Vol 7 (3) ◽  
pp. 447-508 ◽  
Author(s):  
Nicolas Keriven ◽  
Anthony Bourrier ◽  
Rémi Gribonval ◽  
Patrick Pérez

Learning parameters from voluminous data can be prohibitive in terms of memory and computational requirements. We propose a ‘compressive learning’ framework, where we estimate model parameters from a sketch of the training data. This sketch is a collection of generalized moments of the underlying probability distribution of the data. It can be computed in a single pass on the training set and is easily computable on streams or distributed datasets. The proposed framework shares similarities with compressive sensing, which aims at drastically reducing the dimension of high-dimensional signals while preserving the ability to reconstruct them. To perform the estimation task, we derive an iterative algorithm analogous to sparse reconstruction algorithms in the context of linear inverse problems. We exemplify our framework with the compressive estimation of a Gaussian mixture model (GMM), providing heuristics on the choice of the sketching procedure and theoretical guarantees of reconstruction. We experimentally show on synthetic data that the proposed algorithm yields results comparable to the classical expectation-maximization technique while requiring significantly less memory and fewer computations when the number of database elements is large. We further demonstrate the potential of the approach on real large-scale data (over $10^{8}$ training samples) for the task of model-based speaker verification. Finally, we draw some connections between the proposed framework and approximate Hilbert space embedding of probability distributions using random features. We show that the proposed sketching operator can be seen as an innovative method to design translation-invariant kernels adapted to the analysis of GMMs. We also use this theoretical framework to derive preliminary information preservation guarantees, in the spirit of infinite-dimensional compressive sensing.
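
A minimal NumPy sketch of the kind of sketching operator described here, an empirical average of random Fourier features that can be accumulated in a single pass, is shown below. The frequency distribution, sketch size, and data are placeholders, and the iterative reconstruction algorithm is not included.

```python
import numpy as np

def compute_sketch(X, W):
    """Empirical sketch: the average of random Fourier features exp(i * w^T x).

    X : (n_samples, d) data matrix; W : (m, d) random frequency vectors.
    The mean can be accumulated one sample (or chunk) at a time, so the sketch
    is computable in a single pass over a stream or a distributed dataset.
    """
    return np.exp(1j * X @ W.T).mean(axis=0)            # shape (m,)

# Usage with made-up sizes: 10,000 samples in dimension 5, sketch of size 200.
rng = np.random.default_rng(0)
X = rng.standard_normal((10_000, 5))
W = rng.standard_normal((200, 5))                        # placeholder frequency draw
z = compute_sketch(X, W)
```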


Author(s):  
Mohsen Nourazar ◽  
Bart Goossens

Tensor Cores are specialized hardware units added to recent NVIDIA GPUs to speed up matrix multiplication-related tasks, such as convolutions and densely connected layers in neural networks. Due to their specific hardware implementation and programming model, Tensor Cores cannot be straightforwardly applied to applications outside machine learning. In this paper, we demonstrate the feasibility of using NVIDIA Tensor Cores for the acceleration of a non-machine-learning application: iterative computed tomography (CT) reconstruction. For large CT images and real-time CT scanning, the reconstruction time of many existing iterative methods is relatively high, ranging from seconds to minutes depending on the image size. CT reconstruction is therefore an application area that could potentially benefit from Tensor Core hardware acceleration. We first studied the reconstruction algorithm's performance as a function of the hardware-related parameters and proposed an approach to accelerate reconstruction on Tensor Cores. The results show that the proposed method provides about a 5× increase in speed and energy saving using the NVIDIA RTX 2080 Ti GPU for the parallel projection of 32 images of size 512 × 512. The relative reconstruction error due to the mixed-precision computations was almost equal to that of single-precision (32-bit) floating-point computations. We then presented an approach for real-time and memory-limited applications that exploits the symmetry of the system (i.e., the acquisition geometry). As the proposed approach is based on the conjugate gradient method, it can be generalized to many research and industrial fields.
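
For orientation, a plain NumPy version of the conjugate gradient iteration on the least-squares normal equations is sketched below with a dense placeholder system matrix. The paper's contribution, mapping the projection and backprojection products onto mixed-precision Tensor Core multiplications, is not attempted here.

```python
import numpy as np

def cg_normal_equations(A, b, iters=50):
    """Conjugate gradient on A^T A x = A^T b (toy least-squares reconstruction)."""
    x = np.zeros(A.shape[1])
    r = A.T @ (b - A @ x)              # residual of the normal equations
    p = r.copy()
    for _ in range(iters):
        Ap = A.T @ (A @ p)
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x
```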

