processing unit
Recently Published Documents

TOTAL DOCUMENTS: 3056 (FIVE YEARS: 1106)
H-INDEX: 44 (FIVE YEARS: 11)

Author(s):  
Dana Khwailleh ◽  
Firas Al-balas

The rapid growth of the internet of things (IoT) across multiple areas brings research challenges closely linked to the nature of IoT technology. In particular, the data collected from IoT sensors must be secured in an efficient and dynamic way that takes the varying importance of the data into account. In this paper, a dynamic algorithm is developed that distinguishes the importance of the collected data and applies a suitable security approach to each type. This is done with a hybrid system that combines block cipher and stream cipher schemes: after data classification using machine learning classifiers, less important data are encrypted with a stream cipher (SC) based on the Rivest Cipher 4 (RC4) algorithm, while more important data are encrypted with a block cipher (BC) based on the Advanced Encryption Standard (AES) algorithm. A simulation-based performance evaluation shows that the proposed hybrid system encrypts the data with less central processing unit (CPU) time while improving the security of the data.
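
As a rough illustration of the classify-then-encrypt idea described above, the sketch below routes each record to RC4 or AES depending on an importance label. The classify_importance stub, key sizes, GCM mode, and the use of the pycryptodome library are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of the hybrid scheme, assuming the pycryptodome library.
# classify_importance() stands in for the paper's machine-learning classifier.
from Crypto.Cipher import ARC4, AES
from Crypto.Random import get_random_bytes

STREAM_KEY = get_random_bytes(16)   # RC4 key (assumed size)
BLOCK_KEY = get_random_bytes(32)    # AES-256 key (assumed size)

def classify_importance(record: bytes) -> bool:
    """Hypothetical stand-in for the ML classifier: True = important."""
    return b"critical" in record

def encrypt_record(record: bytes):
    if classify_importance(record):
        # Important data: block cipher (AES, here in GCM mode for illustration)
        cipher = AES.new(BLOCK_KEY, AES.MODE_GCM)
        ct, tag = cipher.encrypt_and_digest(record)
        return ("BC", cipher.nonce, tag, ct)
    # Less important data: stream cipher (RC4)
    cipher = ARC4.new(STREAM_KEY)
    return ("SC", None, None, cipher.encrypt(record))

print(encrypt_record(b"temperature=21.5")[0])          # -> 'SC'
print(encrypt_record(b"critical: door unlocked")[0])   # -> 'BC'
```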


2022 ◽  
Vol 18 (2) ◽  
pp. 1-24
Author(s):  
Sourabh Kulkarni ◽  
Mario Michael Krell ◽  
Seth Nabarro ◽  
Csaba Andras Moritz

Epidemiology models are central to understanding and controlling large-scale pandemics. Several epidemiology models require simulation-based inference such as Approximate Bayesian Computation (ABC) to fit their parameters to observations. ABC inference is highly amenable to efficient hardware acceleration. In this work, we develop parallel ABC inference of a stochastic epidemiology model for COVID-19. The statistical inference framework is implemented and compared on Intel's Xeon CPU, NVIDIA's Tesla V100 GPU, Google's V2 Tensor Processing Unit (TPU), and Graphcore's Mk1 Intelligence Processing Unit (IPU), and the results are discussed in the context of their computational architectures. Results show that TPUs are 3×, GPUs are 4×, and IPUs are 30× faster than Xeon CPUs. Extensive performance analysis indicates that the difference between IPU and GPU can be attributed to higher communication bandwidth, closeness of memory to compute, and higher compute power in the IPU. The proposed framework scales across 16 IPUs, with scaling overhead not exceeding 8% for the experiments performed. We present an example of our framework in practice, performing inference on the epidemiology model across three countries and giving a brief overview of the results.
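
The simulation-based ABC inference referred to above can be illustrated with a generic rejection-sampling loop. The sketch below is a plain NumPy version with a toy stochastic SIR simulator; the prior ranges, distance threshold, and simulator are hypothetical placeholders, not the paper's accelerated IPU/GPU/TPU implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_epidemic(beta, gamma, days=60, i0=10, n=1_000_000):
    """Toy stochastic SIR simulator standing in for the paper's COVID-19 model."""
    s, i, cases = n - i0, i0, []
    for _ in range(days):
        new_inf = rng.binomial(s, 1 - np.exp(-beta * i / n))
        new_rec = rng.binomial(i, gamma)
        s, i = s - new_inf, i + new_inf - new_rec
        cases.append(new_inf)
    return np.array(cases)

def abc_rejection(observed, n_samples=10_000, eps=5_000):
    """Accept parameter draws whose simulated output is close to the data."""
    accepted = []
    for _ in range(n_samples):
        beta, gamma = rng.uniform(0.05, 0.5), rng.uniform(0.05, 0.3)  # prior draw
        sim = simulate_epidemic(beta, gamma)
        if np.linalg.norm(sim - observed) < eps:      # distance threshold
            accepted.append((beta, gamma))
    return np.array(accepted)

observed = simulate_epidemic(0.3, 0.1)                # synthetic "observations"
posterior = abc_rejection(observed)
print(posterior.mean(axis=0) if len(posterior) else "no accepted samples")
```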


Author(s):  
Hala Khankhour ◽  
Otman Abdoun ◽  
Jâafar Abouchabaka

This article presents a new approach that integrates parallelism into the genetic algorithm (GA) to solve the routing problem in a large ad hoc network, where the goal is to find the shortest routing path. We fix the source and destination and use variable-length chromosomes (routes) whose genes are nodes. In this work we answer the following question: which is the better way to find the shortest path, the sequential or the parallel method? All modern systems support concurrent processes and threads. Processes are instances of programs that generally run independently; for example, when a program is started, the operating system spawns a new process that runs in parallel with other programs. Within these processes, threads can be used to execute code concurrently, making the most of the available central processing unit (CPU) cores. The obtained results show that our algorithm gives solutions of much better quality. We first study the difference between the sequential and parallel methods on an example network with 40 nodes, and then increase the number of sensors to 100 nodes to solve the shortest-path problem in a large ad hoc network.
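
To make the sequential-versus-parallel comparison concrete, the sketch below evaluates the fitness of a population of variable-length routes either sequentially or in parallel with Python's standard concurrent.futures pool. The random graph, penalty-based fitness function, and population sizes are illustrative assumptions, not the authors' implementation.

```python
import random
from concurrent.futures import ProcessPoolExecutor

random.seed(42)
# Hypothetical weighted graph: node -> {neighbour: link cost}
GRAPH = {n: {m: random.uniform(1, 10) for m in random.sample(range(40), 5)}
         for n in range(40)}

def route_cost(route):
    """Fitness = total cost of the route; infeasible hops get a large penalty."""
    cost = 0.0
    for a, b in zip(route, route[1:]):
        cost += GRAPH.get(a, {}).get(b, 1e6)
    return cost

def evaluate_population(population, parallel=True):
    """Evaluate all chromosomes either sequentially or across CPU cores."""
    if not parallel:
        return [route_cost(r) for r in population]
    with ProcessPoolExecutor() as pool:          # one worker per CPU core
        return list(pool.map(route_cost, population))

if __name__ == "__main__":
    # Variable-length chromosomes from source 0 toward destination 39
    population = [[0] + random.sample(range(1, 39), random.randint(3, 10)) + [39]
                  for _ in range(200)]
    print(min(evaluate_population(population)))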


2022 ◽  
Author(s):  
Shijie Yan ◽  
Steven L Jacques ◽  
Jessica C. Ramella-Roman ◽  
Qianqian Fang

Significance: Monte Carlo (MC) methods have been applied to study interactions between polarized light and biological tissues, but most existing MC codes that support polarization modeling can only simulate homogeneous or multi-layered domains, resulting in approximations when handling realistic tissue structures. Aim: Over the past decade, the speed of MC simulations has seen dramatic improvement with massively parallel computing techniques. Developing hardware-accelerated MC simulation algorithms that can accurately model polarized light inside 3-D heterogeneous tissues can greatly expand the utility of polarization in biophotonics applications. Approach: Here we report a highly efficient polarized MC algorithm capable of modeling arbitrarily complex media defined over a voxelated domain. Each voxel of the domain can be associated with spherical scatterers of various radii and densities. The Stokes vector of each simulated photon packet is updated through photon propagation, creating spatially resolved polarization measurements over the detectors or the domain surface. Results: We have implemented this algorithm in our widely disseminated MC simulator, Monte Carlo eXtreme (MCX). It is validated by comparison with a reference CPU-based simulator in both homogeneous and layered domains, showing excellent agreement and a 931-fold speedup. Conclusion: The polarization-enabled MCX (pMCX) offers the biophotonics community an efficient tool to explore polarized light in bio-tissues, and is freely available at http://mcx.space/.
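
The Stokes-vector bookkeeping mentioned in the Approach can be illustrated with a minimal Mueller-matrix update. The NumPy sketch below applies a textbook linear-polarizer Mueller matrix to a Stokes vector; it is a generic example of the S' = M S update applied at each optical interaction, not MCX's Mie scattering Mueller matrix.

```python
import numpy as np

def linear_polarizer(theta):
    """Mueller matrix of an ideal linear polarizer at angle theta (radians)."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return 0.5 * np.array([[1,    c,    s, 0],
                           [c,  c*c,  c*s, 0],
                           [s,  c*s,  s*s, 0],
                           [0,    0,    0, 0]])

# Stokes vector [I, Q, U, V]: unpolarized light of unit intensity
stokes = np.array([1.0, 0.0, 0.0, 0.0])

# Propagation through an optical element updates S' = M @ S;
# a polarized MC code applies the same kind of update at every scattering event.
stokes_out = linear_polarizer(np.deg2rad(30)) @ stokes
print(stokes_out)   # intensity halves, light becomes linearly polarized
```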


2022 ◽  
Vol 2022 ◽  
pp. 1-11
Author(s):  
Hai Tan ◽  
Hao Xu ◽  
Jiguang Dai

Automatic extraction of road information from remote sensing images is widely used in many fields, such as urban planning and automatic navigation. However, owing to interference from noise and occlusion, existing road extraction methods are prone to producing discontinuous roads. To solve this problem, a road extraction network with bidirectional spatial information reasoning (BSIRNet) is proposed, in which neighbourhood feature fusion captures spatial context dependencies and expands the receptive field, and an information processing unit with a recurrent neural network structure captures channel dependencies. BSIRNet enhances the connectivity of road information through spatial information reasoning. The superiority of the proposed method is verified on the public Massachusetts road dataset and the Wuhan University road dataset by comparing its results with those of other models.
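
The "information processing unit with a recurrent neural network structure" for channel dependencies could be roughed out as follows. This PyTorch sketch, in which the module name, global pooling, and GRU sizes are all assumptions, treats the channel dimension as a sequence and reweights channels with the GRU output; it is only one interpretation of the idea, not the BSIRNet implementation.

```python
import torch
import torch.nn as nn

class ChannelRNN(nn.Module):
    """Hypothetical channel-dependency unit: scan channels with a GRU,
    then gate each channel of the feature map with the GRU output."""
    def __init__(self, channels, hidden=64):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.proj = nn.Linear(hidden, 1)

    def forward(self, x):                          # x: (B, C, H, W)
        s = x.mean(dim=(2, 3)).unsqueeze(-1)       # (B, C, 1): one scalar per channel
        out, _ = self.gru(s)                       # (B, C, hidden)
        gate = torch.sigmoid(self.proj(out))       # (B, C, 1)
        return x * gate.unsqueeze(-1)              # channel-wise reweighting

x = torch.randn(2, 32, 64, 64)
print(ChannelRNN(32)(x).shape)                     # torch.Size([2, 32, 64, 64])
```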


2022 ◽  
Vol 8 (1) ◽  
pp. 9
Author(s):  
Bruno Sauvalle ◽  
Arnaud de La Fortelle

The goal of background reconstruction is to recover the background image of a scene from a sequence of frames showing this scene cluttered by various moving objects. This task is fundamental in image analysis and is generally the first step before more advanced processing, but it is difficult because there is no formal definition of what should be considered background or foreground, and the results may be severely affected by challenges such as illumination changes, intermittent object motion, and highly cluttered scenes. In this paper we propose a new iterative algorithm for background reconstruction, in which the current estimate of the background is used to guess which image pixels are background pixels, and a new background estimate is then computed using those pixels only. We show that the proposed algorithm, which uses stochastic gradient descent for improved regularization, is more accurate than the state of the art on the challenging SBMnet dataset, especially for short videos with low frame rates. It is also fast, reaching an average of 52 fps on this dataset when parameterized for maximal accuracy, using a Python implementation with graphics processing unit (GPU) acceleration.
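
The iterative idea, use the current background estimate to decide which pixels are background and re-estimate from those pixels only, can be sketched in a few lines of NumPy. The threshold, learning rate, and simple running-average update below are illustrative assumptions, not the paper's SGD-based model.

```python
import numpy as np

def reconstruct_background(frames, threshold=25.0, lr=0.05, n_iters=5):
    """frames: array of shape (T, H, W) with grayscale values in [0, 255]."""
    bg = np.median(frames, axis=0)                # crude initial estimate
    for _ in range(n_iters):
        for frame in frames:
            # pixels close to the current estimate are assumed to be background
            mask = np.abs(frame - bg) < threshold
            # update the estimate using those pixels only (running-average step)
            bg[mask] += lr * (frame[mask] - bg[mask])
    return bg

frames = np.random.randint(0, 256, size=(50, 120, 160)).astype(np.float64)
print(reconstruct_background(frames).shape)       # (120, 160)
```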


Author(s):  
Ram C. Sharma ◽  
Keitarou Hara

This research introduces Genus-Physiognomy-Ecosystem (GPE) mapping at the prefecture level through machine learning of multi-spectral and multi-temporal satellite images at 10 m spatial resolution, with the prefecture-wise maps later integrated to the country scale so that the 88 GPE types could be classified effectively from the large volume of training data involved in the research. The research was made possible by harnessing the entire archive of Level-2A product, Bottom-of-Atmosphere reflectance images collected by the MultiSpectral Instruments onboard the constellation of two polar-orbiting Sentinel-2 mission satellites. The satellite images were pre-processed for cloud masking, and monthly median composite images consisting of 10 multi-spectral bands and 7 spectral indices were generated. The ground truth labels were extracted from extant vegetation survey maps using a systematic stratified sampling approach, and noisy labels were dropped to prepare a reliable ground truth database. A Graphics Processing Unit (GPU) implementation of a Gradient Boosting Decision Trees (GBDT) classifier was employed to classify the 88 GPE types from 204 satellite features. The classification accuracy computed with 25% test data varied from 65% to 81% in terms of F1-score across the 48 prefectural regions. This research produced seamless maps of 88 GPE types for the first time at a country scale, with an average F1-score of 72%.
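
A GPU-backed GBDT classification of many classes from tabular features can be sketched as below. The abstract does not name the GBDT library, so XGBoost with its GPU histogram tree method is used here as one possible choice; the synthetic data, hyperparameters, and the requirement of a CUDA-enabled XGBoost build are assumptions for illustration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic stand-in for the 204 satellite features and 88 GPE classes
X = np.random.rand(5000, 204).astype(np.float32)
y = np.random.randint(0, 88, size=5000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# GPU-accelerated gradient-boosted trees (tree_method="gpu_hist" needs a
# CUDA-enabled XGBoost build; the paper's actual GBDT library is not specified)
clf = XGBClassifier(n_estimators=200, max_depth=8,
                    tree_method="gpu_hist", objective="multi:softprob")
clf.fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```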


2022 ◽  
Vol 13 (1) ◽  
Author(s):  
Senfeng Zeng ◽  
Chunsen Liu ◽  
Xiaohe Huang ◽  
Zhaowu Tang ◽  
Liwei Liu ◽  
...  

With the rapid development of artificial intelligence, parallel image processing is becoming an increasingly important capability of computing hardware. To meet the requirements of various image processing tasks, the basic pixel processing unit contains multiple functional logic gates and a multiplexer, which leads to notable circuit redundancy. The pixel processing unit therefore retains a large optimization space for addressing area redundancy in parallel computing. Here, we demonstrate a pixel processing unit based on a single WSe2 transistor whose multiple logic functions (AND and XNOR) are electrically switchable. We further integrate these pixel processing units into a low-transistor-count image processing array in which both image intersection and image comparison tasks can be performed. For the same image processing capability, our image processing unit consumes less than 16% of the transistors of traditional circuits.
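
To show what the two switchable logic modes compute at the pixel level, the short NumPy sketch below applies AND (image intersection) and XNOR (image comparison) to binary images. It is purely a functional illustration of the logic, not a model of the WSe2 device or its circuit.

```python
import numpy as np

a = np.random.randint(0, 2, size=(4, 4), dtype=np.uint8)   # binary image A
b = np.random.randint(0, 2, size=(4, 4), dtype=np.uint8)   # binary image B

intersection = a & b        # AND mode: pixels set in both images
comparison = 1 - (a ^ b)    # XNOR mode: pixels where A and B agree

print(intersection)
print(comparison)
```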


Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 471
Author(s):  
Piotr Perek ◽  
Aleksander Mielczarek ◽  
Dariusz Makowski

In recent years, cinematography and other digital content creators have been eagerly turning to three-dimensional (3D) imaging technology. The creators of movies, games, and augmented reality applications are aware of this technology's advantages, possibilities, and new means of expression. The development of electronic and IT technologies enables ever-better quality of the recorded 3D image and many possibilities for its correction and modification in post-production. However, preparing a correct 3D image that does not cause perception problems for the viewer is still a complex and demanding task, so planning and then ensuring the correct parameters and quality of the recorded 3D video is essential. Despite better post-production techniques, fixing errors in a captured image can be difficult, time-consuming, and sometimes impossible. Detecting errors typical for stereo vision related to the depth of the image (e.g., depth budget violation, stereoscopic window violation) during recording allows them to be corrected already on the film set, e.g., with a different scene layout and/or a different camera configuration. The paper presents a prototype of an independent, non-invasive diagnostic system that supports the film crew in calibrating stereoscopic cameras and in analysing 3D depth while working on a film set. The system acquires full HD video streams from professional cameras over a Serial Digital Interface (SDI), synchronises them, and estimates and analyses the disparity map. Objective depth analysis with computer tools while recording scenes allows stereographers to immediately spot errors in the 3D image, primarily those related to violation of the viewing comfort zone. The paper also describes an efficient method of analysing a 3D video using a Graphics Processing Unit (GPU). The main steps of the proposed solution are uncalibrated rectification and disparity map estimation. The algorithms selected and implemented for this system do not require knowledge of intrinsic and extrinsic camera parameters, so they can be used in non-cooperative environments, such as a film set, where the camera configuration often changes. Both are implemented on a GPU to improve data processing efficiency. The paper presents the evaluation results of the algorithms' accuracy, as well as a comparison of the performance of two implementations, with and without GPU acceleration. The described GPU-based method makes the system efficient and easy to use: it can process a full HD video stream at a speed of several frames per second.
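
The two main steps, uncalibrated rectification followed by disparity estimation, can be sketched with standard OpenCV calls. The sketch below runs on the CPU and uses ORB matching, RANSAC fundamental-matrix estimation, and semi-global block matching as stand-ins; the feature detector, matcher, and SGBM parameters are assumptions, and the paper's pipeline performs equivalent steps with GPU acceleration.

```python
import cv2
import numpy as np

def rectify_and_disparity(left, right):
    """left/right: same-size grayscale frames from the stereo pair."""
    # 1. Uncalibrated rectification: match features, estimate the fundamental
    #    matrix, and derive rectifying homographies without camera intrinsics.
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(left, None)
    k2, d2 = orb.detectAndCompute(right, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
    inl = mask.ravel() == 1
    h, w = left.shape[:2]
    _, H1, H2 = cv2.stereoRectifyUncalibrated(pts1[inl], pts2[inl], F, (w, h))
    rect_l = cv2.warpPerspective(left, H1, (w, h))
    rect_r = cv2.warpPerspective(right, H2, (w, h))

    # 2. Disparity estimation on the rectified pair (semi-global matching);
    #    SGBM returns fixed-point disparities scaled by 16.
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    return sgbm.compute(rect_l, rect_r).astype(np.float32) / 16.0
```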

