convolution operation
Recently Published Documents


TOTAL DOCUMENTS: 97 (FIVE YEARS: 50)
H-INDEX: 9 (FIVE YEARS: 1)

2021 ◽  
Vol 14 (4) ◽  
pp. 1-23
Author(s):  
José Romero Hung ◽  
Chao Li ◽  
Pengyu Wang ◽  
Chuanming Shao ◽  
Jinyang Guo ◽  
...  

ACE-GCN is a fast and resource/energy-efficient FPGA accelerator for graph convolutional embedding under data-driven and in-place processing conditions. Our accelerator exploits the inherent power-law distribution and high sparsity commonly exhibited by real-world graph datasets. Contrary to other hardware implementations of GCNs, in which traditional optimization techniques are employed to bypass the problem of dataset sparsity, our architecture is designed to take advantage of this very situation. We propose and implement an innovative acceleration approach supported by our “implicit-processing-by-association” concept, in conjunction with a dataset-customized convolutional operator. The computational relief, and the consequent acceleration effect, arise from the possibility of replacing rather complex convolutional operations with a faster estimation of the embedding result. Based on a computationally inexpensive and highly expedited similarity calculation, our accelerator is able to decide between automatic embedding estimation and the unavoidable direct convolution operation. Evaluations demonstrate that our approach offers excellent applicability and competitive acceleration. Depending on the dataset and the target efficiency level, speedups range between 23× and 4,930× over the PyG baseline, coming within 46% to 81% of AWB-GCN on smaller datasets and noticeably surpassing AWB-GCN on larger datasets, with controllable accuracy loss. We further demonstrate the unique hardware optimization characteristics of our approach and discuss its multi-processing potential.
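As a purely software illustration (the paper describes an FPGA design), the decision logic sketched in the abstract can be read as follows: a cheap similarity test either lets a node reuse an already computed embedding ("processing by association") or forces the direct convolution. The Python sketch below follows that reading; the Jaccard similarity, the threshold, and the degree-ordered traversal are assumptions, not the paper's actual mechanism.

```python
# Hypothetical software sketch of the "process-by-association" idea: nodes whose
# neighbourhoods are highly similar to an already-convolved reference node reuse
# (estimate) its embedding instead of running the full sparse aggregation.
import numpy as np

def jaccard(a: set, b: set) -> float:
    """Cheap neighbourhood similarity used as the association test (assumed)."""
    return len(a & b) / max(len(a | b), 1)

def ace_like_layer(adj: dict, x: np.ndarray, w: np.ndarray, tau: float = 0.9):
    """adj: node -> set of neighbours, x: node features, w: layer weights."""
    h = np.zeros((x.shape[0], w.shape[1]))
    reference = None  # (node id, its exactly computed embedding)
    for v in sorted(adj, key=lambda n: -len(adj[n])):  # hubs first (power law)
        if reference is not None and jaccard(adj[v], adj[reference[0]]) >= tau:
            h[v] = reference[1]                        # estimated embedding, no convolution
        else:
            agg = x[list(adj[v]) + [v]].mean(axis=0)   # direct convolution path
            h[v] = agg @ w
            reference = (v, h[v])
    return h

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: {0, 1}}     # toy graph
print(ace_like_layer(adj, np.random.randn(4, 16), np.random.randn(16, 8)).shape)
```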


2021 ◽  
Author(s):  
Min Zhong ◽  
Jiu-sheng Li

Abstract We propose a novel metasurface based on a combined pattern of an outer C-shaped ring and an inner rectangular ring. By applying the Fourier convolution operation to generate different predesigned sequences of metasurfaces, we realize various functionalities for flexibly manipulating terahertz waves, including vortex terahertz beam splitting, anomalous vortex terahertz wave deflection, and simultaneous vortex terahertz wave splitting and deflection. The incident terahertz wave can be flexibly controlled on a single metasurface. The designed metasurface has extensive application prospects in the field of future terahertz communication and sensing.
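The Fourier convolution principle behind these functionalities can be illustrated numerically: superimposing a gradient phase on a vortex phase in the metasurface plane deflects the far-field vortex beam, because a product in the aperture plane corresponds to a shift in k-space. The NumPy sketch below uses an assumed grid size and phase profiles rather than the authors' C-shaped/rectangular ring design.

```python
# Minimal numerical illustration of the Fourier convolution operation on phase
# patterns: adding a linear phase ramp to a vortex phase shifts its far field.
import numpy as np

n = 256
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]

vortex   = np.exp(1j * np.arctan2(y, x))        # vortex phase, topological charge l = 1
gradient = np.exp(1j * 2 * np.pi * 8 * x / n)   # linear phase ramp (8 cycles across aperture)

def far_field_centroid(aperture):
    """Intensity-weighted centre of the far-field (Fourier) pattern."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(aperture))) ** 2
    return (x * spectrum).sum() / spectrum.sum(), (y * spectrum).sum() / spectrum.sum()

print(far_field_centroid(vortex))             # ~(0, 0): on-axis vortex beam
print(far_field_centroid(vortex * gradient))  # ~(8, 0): same vortex, deflected by the ramp
```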


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Yongho Kim ◽  
Gilnam Ryu ◽  
Yongho Choi

Simulation speed depends on code structure; hence, how an algorithm is built is crucial. We solve the Allen–Cahn equation with an explicit finite difference method, which requires grid calculations implemented by many for-loops in the simulation code. In terms of programming, many for-loops make the simulation slow. We propose a model architecture that applies a pad and a convolution operation to the Allen–Cahn equation for fast computation while maintaining accuracy. GPU computation is also used to boost the speed further. In this way, the simulation of other differential equations can be improved as well. In this paper, various numerical simulations are conducted to confirm that the Allen–Cahn equation follows motion by mean curvature and exhibits phase separation in two-dimensional and three-dimensional spaces. Finally, we demonstrate that our algorithm is much faster than unoptimized code and the CPU implementation.
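A minimal sketch, assuming a periodic 128 by 128 grid and illustrative parameter values, of the idea described above: the five-point Laplacian in the explicit finite-difference update is computed with a single pad-and-convolve call instead of nested for-loops. Swapping the SciPy call for a GPU convolution (for example in PyTorch) would give the GPU variant mentioned in the abstract.

```python
# Explicit Euler step for the Allen-Cahn equation u_t = eps^2 * Laplace(u) + u - u^3,
# with the Laplacian expressed as a padded convolution rather than nested loops.
import numpy as np
from scipy.ndimage import convolve

laplacian_kernel = np.array([[0.0,  1.0, 0.0],
                             [1.0, -4.0, 1.0],
                             [0.0,  1.0, 0.0]])

def allen_cahn_step(u, h=1.0 / 128, dt=1e-4, eps=0.01):
    """One explicit time step; mode='wrap' performs the periodic padding."""
    lap = convolve(u, laplacian_kernel, mode="wrap") / h**2
    return u + dt * (eps**2 * lap + u - u**3)

u = np.random.uniform(-1.0, 1.0, (128, 128))   # random initial phase field
for _ in range(500):
    u = allen_cahn_step(u)                      # phase separation emerges over time
```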


2021 ◽  
Author(s):  
Zenglin Li ◽  
Wei Wang ◽  
Shaoxuan Deng ◽  
Jia Qu ◽  
Yuxiang Li ◽  
...  

2021 ◽  
Vol 14 (1) ◽  
pp. 93-107
Author(s):  
Pavlo Radiuk ◽  
Olexander Barmak ◽  
Iurii Krak

Aim: This study investigates the topology of convolutional neural networks and proposes an information technology for the early detection of pneumonia in X-rays. Background: For the past decade, pneumonia has been one of the most widespread respiratory diseases. Every year, a significant part of the world's population suffers from pneumonia, leading to millions of deaths worldwide. Inflammation occurs rapidly and usually proceeds in severe forms; thus, early detection of the disease plays a critical role in its successful treatment. Objective: The most common means of diagnosing pneumonia is the chest X-ray, which produces radiographs. Automated diagnostics using computing devices and computer vision techniques has become beneficial in X-ray image analysis, serving as an ancillary decision-making system. Nonetheless, such systems require continuous improvement and adjustment to individual patients to ensure a successful, timely diagnosis. Methods: Nowadays, artificial neural networks serve as a promising solution for identifying pneumonia in radiographs. Despite their high recognition accuracy, neural networks have been perceived as black boxes because of the unclear interpretation of their results. An insufficient explanation of an early diagnosis can be a serious drawback of automated decision-making systems, as the lack of interpretable results may negatively affect the final clinical decision. To address this issue, we propose an approach to the automated diagnosis of early pneumonia based on the classification of radiographs with weakly expressed disease features. Results: An effective spatial convolution operation with several dilation rates, combining various receptive fields, was used in the convolutional layers to detect and analyze visual deviations in the X-ray image. By applying the dilated convolution operation, the network avoids significant loss of the objects' spatial information while keeping computational costs relatively low. We also used transfer learning to overcome the lack of data for the early diagnosis of pneumonia. An image analysis strategy based on class activation maps was used to interpret the classification results, which is critical for clinical decision making. Conclusion: According to the computational results, the proposed convolutional architecture may be an excellent solution for instant diagnosis at the first suspicion of early pneumonia.
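The multi-rate dilated convolution described in the Results can be sketched as a small PyTorch block that runs parallel 3x3 convolutions with different dilation rates and concatenates their outputs to combine receptive fields of several sizes; the channel widths and rates below are assumptions, not the paper's exact topology.

```python
# Parallel 3x3 convolutions with several dilation rates; padding equal to the
# rate keeps the spatial resolution, so no spatial information is discarded.
import torch
import torch.nn as nn

class MultiDilationBlock(nn.Module):
    def __init__(self, in_ch=64, branch_ch=32, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, branch_ch, kernel_size=3,
                          padding=rate, dilation=rate, bias=False),
                nn.BatchNorm2d(branch_ch),
                nn.ReLU(inplace=True),
            )
            for rate in rates
        ])

    def forward(self, x):
        # Concatenate the branches to fuse small and large receptive fields.
        return torch.cat([branch(x) for branch in self.branches], dim=1)

features = MultiDilationBlock()(torch.randn(1, 64, 56, 56))
print(features.shape)  # torch.Size([1, 96, 56, 56])
```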


Electronics ◽  
2021 ◽  
Vol 10 (22) ◽  
pp. 2800
Author(s):  
Aleksandr Cariow ◽  
Janusz P. Paplinski

A set of efficient algorithmic solutions suitable for fully parallel hardware implementation of short-length circular convolution cores is proposed. The advantage of the presented algorithms is that they require significantly fewer multiplications than the naive method of implementing this operation. During the synthesis of the presented algorithms, the matrix notation of the cyclic convolution operation was used, which made it possible to represent this operation as a matrix–vector product. The fact that the matrix multiplicand is a circulant matrix allows its successful factorization, which reduces the number of multiplications required to compute such a product. The proposed algorithms are oriented towards a completely parallel hardware implementation, but in comparison with a naive fully parallel hardware implementation, they require a significantly smaller number of hardwired multipliers. Since a hardwired multiplier occupies a much larger VLSI area and consumes more power than a hardwired adder, the proposed solutions are resource- and energy-efficient in terms of hardware implementation. We consider circular convolutions for sequences of lengths N = 2, 3, 4, 5, 6, 7, 8, and 9.
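The matrix notation mentioned in the abstract can be checked numerically: an N-point circular convolution equals the product of a circulant matrix built from one sequence with the other sequence. The Python sketch below verifies this for N = 8, one of the lengths the paper covers; the data values are arbitrary, and the hardware-oriented factorizations themselves are not reproduced here.

```python
# Circular convolution as a circulant matrix-vector product, checked against
# the FFT-based cyclic convolution.
import numpy as np

def circulant(h):
    """Circulant matrix whose first column is h: C[i, j] = h[(i - j) mod n]."""
    n = len(h)
    return np.array([[h[(i - j) % n] for j in range(n)] for i in range(n)])

n = 8
h = np.random.randn(n)
x = np.random.randn(n)

via_matrix = circulant(h) @ x
via_fft = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)))  # reference result

print(np.allclose(via_matrix, via_fft))  # True
```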


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Qian Yi ◽  
Guixuan Zhang ◽  
Shuwu Zhang

Distant supervision is an effective method for automatically collecting large-scale datasets for relation extraction (RE). Automatically constructed datasets usually contain two types of noise: intra-sentence noise and wrongly labeled noisy sentences. To address the issues caused by these two types of noise and improve distantly supervised relation extraction, this paper proposes a novel distantly supervised relation extraction model, which consists of an entity-based gated convolution sentence encoder and a multilevel sentence selective attention (Matt) module. Specifically, we first apply an entity-based gated convolution operation to force the sentence encoder to extract entity-pair-related features and filter out useless intra-sentence noise. Furthermore, the multilevel attention scheme fuses bag information to obtain a fine-grained bag-specific query vector, which can better identify valid sentences and reduce the influence of wrongly labeled sentences. Experimental results on a large-scale benchmark dataset show that our model effectively reduces the influence of both types of noise and achieves state-of-the-art performance in relation extraction.
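A hedged PyTorch sketch of a gated convolution sentence encoder in the spirit of the abstract: a 1D convolution over word embeddings whose output is modulated by a sigmoid gate conditioned on the entity-pair representation, so that features unrelated to the entity pair can be suppressed. The dimensions and the way the entity pair enters the gate are assumptions, not the paper's exact model.

```python
# Entity-conditioned gated 1D convolution followed by max-pooling over tokens.
import torch
import torch.nn as nn

class EntityGatedConv(nn.Module):
    def __init__(self, emb_dim=100, hidden=230, kernel=3):
        super().__init__()
        self.conv = nn.Conv1d(emb_dim, hidden, kernel, padding=kernel // 2)
        self.gate = nn.Linear(2 * emb_dim + hidden, hidden)

    def forward(self, tokens, head_emb, tail_emb):
        # tokens: (batch, seq_len, emb_dim); head_emb/tail_emb: (batch, emb_dim)
        feats = self.conv(tokens.transpose(1, 2)).transpose(1, 2)   # (B, L, H)
        pair = torch.cat([head_emb, tail_emb], dim=-1)              # (B, 2E)
        pair = pair.unsqueeze(1).expand(-1, feats.size(1), -1)      # (B, L, 2E)
        g = torch.sigmoid(self.gate(torch.cat([feats, pair], dim=-1)))  # gate per position
        return torch.max(g * torch.tanh(feats), dim=1).values       # sentence vector

enc = EntityGatedConv()
sent = enc(torch.randn(4, 40, 100), torch.randn(4, 100), torch.randn(4, 100))
print(sent.shape)  # torch.Size([4, 230])
```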


2021 ◽  
Vol 13 (18) ◽  
pp. 3724
Author(s):  
Weisheng Li ◽  
Dongwen Cao ◽  
Yidong Peng ◽  
Chao Yang

Remote sensing products with high temporal and spatial resolution can hardly be obtained under the constraints of existing technology and cost. Therefore, the spatiotemporal fusion of remote sensing images has attracted considerable attention. Spatiotemporal fusion algorithms based on deep learning have gradually developed, but they still face some problems. For example, the amount of data affects the model's ability to learn, and the robustness of the model is not high. The features extracted through the convolution operation alone are insufficient, and complex fusion methods also introduce noise. To solve these problems, we propose a multi-stream fusion network for remote sensing spatiotemporal fusion based on Transformer and convolution, called MSNet. We introduce the Transformer structure, which aims to learn the global temporal correlation of the image. At the same time, we also use a convolutional neural network to establish the relationship between input and output and to extract features. Finally, we adopt an average-weighting fusion method to avoid introducing noise through complicated fusion. To test the robustness of MSNet, we conducted experiments on three datasets and compared MSNet with four representative spatiotemporal fusion algorithms to demonstrate its superiority (Spectral Angle Mapper (SAM) < 0.193 on the CIA dataset, erreur relative globale adimensionnelle de synthèse (ERGAS) < 1.687 on the LGC dataset, and root mean square error (RMSE) < 0.001 on the AHB dataset).
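A heavily simplified sketch of the fusion strategy described above, namely a convolutional stream and a Transformer-based stream combined by plain average weighting; the toy streams, channel counts, and image size are assumptions and do not reproduce MSNet itself.

```python
# Two parallel streams (CNN and Transformer over flattened pixel tokens) whose
# outputs are merged by simple averaging instead of a learned fusion module.
import torch
import torch.nn as nn

class ToyTwoStreamFusion(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.cnn_stream = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1),
        )
        self.proj = nn.Conv2d(3, ch, 1)                 # pixels -> token features
        self.attn = nn.TransformerEncoderLayer(d_model=ch, nhead=4, batch_first=True)
        self.back = nn.Conv2d(ch, 3, 1)                 # token features -> image

    def forward(self, x):
        b, c, h, w = x.shape
        cnn_out = self.cnn_stream(x)
        tokens = self.proj(x).flatten(2).transpose(1, 2)            # (B, H*W, ch)
        trans_out = self.back(self.attn(tokens).transpose(1, 2).reshape(b, -1, h, w))
        return 0.5 * cnn_out + 0.5 * trans_out                      # average weighting

print(ToyTwoStreamFusion()(torch.randn(1, 3, 32, 32)).shape)  # torch.Size([1, 3, 32, 32])
```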


Author(s):  
N. Devi

Abstract: This paper focuses on the task of recognizing handwritten Hindi characters using a Convolutional Neural Network (CNN). The recognized characters can then be stored digitally in the computer or used for other purposes. The dataset used is obtained from the UC Irvine Machine Learning Repository and contains 92,000 images divided into a training set (80%) and a test set (20%). It contains different forms of handwritten Devanagari characters written by different individuals, which can be used to train and test handwritten text recognizers. The proposed model contains four CNN layers followed by three fully connected layers for recognition. Grayscale handwritten character images are used as input. Filters are applied to the images to extract different features at each layer; this is done by the convolution operation. The two other main operations involved are pooling and flattening. The output of the CNN layers is fed to the fully connected layers. Finally, the probability score of each character is determined, and the character with the highest probability score is given as the output. A recognition accuracy of 98.94% is obtained. Similar models exist for this purpose, but the proposed model achieved better performance and accuracy than some of the earlier models. Keywords: Devanagari characters, Convolutional Neural Networks, Image Processing
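A hedged PyTorch sketch following the architecture described in the abstract: four convolutional layers followed by three fully connected layers, with grayscale character images as input and a probability score per character as output. The 32 by 32 input size, 46-class output, and channel widths are assumptions about the UCI Devanagari dataset rather than figures taken from the paper.

```python
# Four convolution layers (with pooling) and three fully connected layers;
# flattening bridges the two stages, and softmax yields per-character probabilities.
import torch
import torch.nn as nn

class DevanagariCNN(nn.Module):
    def __init__(self, num_classes=46):
        super().__init__()
        self.features = nn.Sequential(                                # 4 conv layers
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32x32 -> 16x16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # -> 8x8
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # -> 4x4
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Sequential(                              # 3 FC layers
            nn.Flatten(),                                             # flattening step
            nn.Linear(128 * 4 * 4, 512), nn.ReLU(),
            nn.Linear(512, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x)).softmax(dim=1)      # class probabilities

probs = DevanagariCNN()(torch.randn(8, 1, 32, 32))
print(probs.shape)  # torch.Size([8, 46])
```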

