filter kernel
Recently Published Documents

TOTAL DOCUMENTS: 35 (FIVE YEARS: 12)
H-INDEX: 5 (FIVE YEARS: 1)

2021 ◽ Vol 2090 (1) ◽ pp. 012064
Author(s): A Boguslawski, K Wawrzak, A Paluszewska, B J Geurts

Abstract The paper presents a new approximate deconvolution subgrid model for Large Eddy Simulation in which corrections to implicit filtering due to spatial discretization are incorporated explicitly. The top-hat filter implied by second-order central finite differencing is a key example; it is discretized using the discrete Fourier transform over all the mesh points in the computational domain. This discrete filter kernel is inverted by inverse Wiener filtering, and the resulting inverse filter is used to deconvolve the resolved scales of the implicitly filtered velocity field on the computational grid. Subgrid stresses are then calculated directly from the deconvolved velocity field. The model was applied to decaying two-dimensional turbulence, and the results were compared with predictions based on the Smagorinsky model and the dynamic Germano model. A posteriori testing is included, in which Large Eddy Simulation is compared with filtered Direct Numerical Simulation obtained with a Fourier spectral method. Strictly speaking, the new model applies to periodic problems, but the idea of recovering a high-order inversion of the numerically induced filter kernel can be extended to more general non-periodic problems, also in three spatial dimensions.
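A minimal 1D sketch of the kernel-inversion idea, assuming a periodic grid and the standard (1/4, 1/2, 1/4) trapezoidal discretization of the top-hat filter; the paper works in two dimensions, and the regularization constant eps below is an illustrative placeholder, not the authors' choice.

```python
import numpy as np

N, L = 256, 2.0 * np.pi
h = L / N
k = np.fft.fftfreq(N, d=h) * 2.0 * np.pi   # discrete angular wavenumbers

# Transfer function of the discrete top-hat filter (1/4, 1/2, 1/4):
G = 0.5 * (1.0 + np.cos(k * h))

# Wiener-regularised inverse; eps prevents blow-up where G -> 0
# (e.g. at the grid Nyquist wavenumber).
eps = 1e-3
H = G / (G**2 + eps)

def deconvolve(u_filtered):
    """Approximately deconvolve an implicitly filtered field."""
    return np.real(np.fft.ifft(H * np.fft.fft(u_filtered)))

# Example: filter a random field with G, then approximately recover it.
u = np.random.randn(N)
u_bar = np.real(np.fft.ifft(G * np.fft.fft(u)))
u_star = deconvolve(u_bar)
```

Subgrid stresses would then be evaluated from products of the deconvolved field, following the usual approximate-deconvolution construction.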


Author(s): Thomas Janke, Dirk Michaelis

Particle Tracking Velocimetry (PTV) and Lagrangian Particle Tracking (LPT) have attracted considerable interest in recent years due to their ability to acquire global flow fields at high spatial and temporal resolution. Most recent research has focused on algorithmic advances that increase the obtainable data density and on applications to new flow cases; only a small number of studies have tried to quantify the measurement uncertainties inherent in these volumetric measurement approaches. In this contribution we present how to obtain measurement uncertainties for the position, velocity and acceleration at each data point along a trajectory by means of linear regression analysis. Based on these uncertainties, an adaptive filtering approach is introduced that eliminates the user's choice of the filter kernel length and automatically determines its optimal value.
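A minimal sketch of per-point uncertainty estimation along one trajectory coordinate, assuming a second-order polynomial fit over a sliding window; the window half-width and function names are illustrative, not the authors' implementation.

```python
import numpy as np

def fit_point(t, x, i, half=4):
    """Fit x(t) locally around sample i; return (position, velocity,
    acceleration), each paired with its standard error."""
    sl = slice(max(i - half, 0), min(i + half + 1, len(t)))
    tc = t[sl] - t[i]                         # centre the time axis on t[i]
    coef, cov = np.polyfit(tc, x[sl], deg=2, cov=True)
    a2, a1, a0 = coef                         # x ~ a2*tc^2 + a1*tc + a0
    s = np.sqrt(np.diag(cov))                 # coefficient standard errors
    return (a0, s[2]), (a1, s[1]), (2.0 * a2, 2.0 * s[0])

t = np.linspace(0.0, 1.0, 50)
x = np.sin(2 * np.pi * t) + 0.01 * np.random.randn(t.size)
(p, sp), (v, sv), (a, sa) = fit_point(t, x, i=25)
```

An adaptive choice of the kernel length could then, for instance, select the window half-width that minimizes the predicted uncertainty at each point, in the spirit of the approach described above.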


Author(s): Anh-Tuan Hoang, Neil Hanley, Maire O'Neill

Deep learning (DL) has proven to be very effective for image recognition tasks, with a large body of research on model architectures for object classification. The straightforward application of DL to side-channel analysis (SCA) has already shown promising success: experiments on open-source variable-key datasets demonstrate that secret keys can be revealed with hundreds of traces, even in the presence of countermeasures. This paper aims to further improve the application of DL to SCA by enhancing its power when targeting the secret key of cryptographic algorithms protected with SCA countermeasures. We propose a new model, a CNN with plaintext feature extension (CNNP), which combines multiple convolutional filter kernel sizes with deeper and narrower network structures. It empirically proves its effectiveness by outperforming reference profiling attack methods such as template attacks (TAs), convolutional neural networks (CNNs), and multilayer perceptron (MLP) models. Our model produces state-of-the-art results when attacking the ASCAD variable-key database, which has a restricted number of training traces per key, recovering the key within 40 attack traces compared with the hundreds required by a straightforward machine learning (ML) application. During the profiling stage the attacker needs no additional knowledge of the implementation, such as the masking scheme or random mask values, only the ability to record the power consumption or electromagnetic field traces, the plaintext/ciphertext, and the key. Additionally, no heuristic pre-processing is required to break the high-order masking countermeasures of the target implementation.
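An illustrative PyTorch sketch of the two ideas named above: parallel convolutional branches with different filter kernel sizes, and concatenation of a plaintext feature before the classifier head. All layer sizes, the trace length, and the plaintext encoding are placeholders, not the CNNP architecture from the paper.

```python
import torch
import torch.nn as nn

class CNNPSketch(nn.Module):
    def __init__(self, trace_len=700, n_classes=256):
        super().__init__()
        # One branch per filter kernel size; each keeps the trace length
        # and pools it down to a fixed-size feature map.
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv1d(1, 8, kernel_size=k, padding=k // 2),
                          nn.ReLU(), nn.AdaptiveAvgPool1d(32))
            for k in (3, 7, 15)
        ])
        self.plain_fc = nn.Linear(8, 16)   # plaintext-byte feature extension
        self.head = nn.Sequential(
            nn.Linear(3 * 8 * 32 + 16, 128), nn.ReLU(),
            nn.Linear(128, n_classes))     # logits over the key-byte value

    def forward(self, trace, plaintext_bits):
        x = trace.unsqueeze(1)                         # (B, 1, T)
        feats = [b(x).flatten(1) for b in self.branches]
        p = torch.relu(self.plain_fc(plaintext_bits))  # (B, 16)
        return self.head(torch.cat(feats + [p], dim=1))

model = CNNPSketch()
out = model(torch.randn(4, 700), torch.rand(4, 8))
```

The concatenated plaintext feature lets the network learn key-dependent leakage jointly with the trace features, which is the motivation for the plaintext extension described in the abstract.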


2020
Author(s): Johannes Schulz-Stellenfleth, Bughsin Djath, Verena Haid

The large number of existing and planned offshore wind parks in the German Bight leads to challenging requirements for reliable information on various processes in the atmosphere and the ocean. In particular, wind shadowing effects play a major role in the optimal planning and operation of wind park installations. Synthetic Aperture Radar (SAR) satellites have proved their capability of giving a 2D view of the wakes generated behind wind farms at high spatial resolution. However, the estimation of wind speed deficits from SAR data is still a challenge, because undisturbed reference wind fields are usually not available at the exact location of the wake. A common approach is therefore to identify reference areas on SAR scenes outside the wake region, which naturally introduces errors into the deficit computations.
In this study a new filter approach for the deficit estimation is proposed, which allows error bars for the deficits to be derived. The filter is based on a 2D convolution with a filter kernel whose shape depends on the wind park geometry and the wind direction. The errors depend on spectral properties of the background wind fields, which are estimated from SAR data as well. In this context, the stability of the atmospheric boundary layer is shown to play a major role. Examples are shown using data acquired by the SENTINEL-1A/B satellites. The approach is seen as a contribution to making SAR-based deficit computations more objective and automated, which is essential for applying the method to larger data sets and for making wake analyses done in different regions more comparable.
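A hedged sketch of the convolution step: a rectangular kernel, elongated along the assumed wake axis, is rotated to the wind direction and convolved with the SAR wind field. The kernel dimensions here are placeholders; the paper derives the kernel shape from the wind park geometry.

```python
import numpy as np
from scipy.ndimage import rotate
from scipy.signal import fftconvolve

def wake_kernel(length_px=41, width_px=9, wind_dir_deg=30.0):
    """Rectangular kernel elongated along-wind, rotated to wind direction."""
    k = np.ones((width_px, length_px))
    k = rotate(k, angle=wind_dir_deg, reshape=True, order=1)
    k = np.clip(k, 0.0, None)
    return k / k.sum()                 # normalise to unit total weight

def smoothed_field(u_sar, wind_dir_deg):
    """Directionally averaged wind field; a deficit estimate is then the
    difference to an upstream reference computed the same way."""
    return fftconvolve(u_sar, wake_kernel(wind_dir_deg=wind_dir_deg),
                       mode='same')

u = 8.0 + 0.5 * np.random.randn(200, 200)   # synthetic 10 m wind field, m/s
u_smooth = smoothed_field(u, wind_dir_deg=30.0)
```

Error bars on the deficit would follow from propagating the spectral statistics of the background field through this linear filter, as the abstract describes.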


2020 ◽ Vol 237 ◽ pp. 01003
Author(s): Huan Xie, Dan Ye, Gang Hai, Xiaohua Tong

The Ice, Cloud and Land Elevation Satellite-2 (ICESat-2) was launched on September 15th, 2018, and continues the measurement tasks of its predecessor, ICESat. Unlike the full-waveform technology of ICESat, ICESat-2 employs micropulse photon-counting technology, which provides higher accuracy but also produces a large amount of noise. This paper proposes an adaptive filter based on the local slope, which adjusts an elliptical filter kernel accordingly. The general approach is 1) data preprocessing, 2) Gaussian density calculation, and 3) OTSU adaptive threshold calculation. The method proves robust in identifying signal points against high background noise and is suitable for the low-density data caused by slope.
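A minimal sketch of the density-plus-threshold idea, assuming a Gaussian-weighted density over an elliptical neighbourhood aligned with the local along-track slope. The ellipse axes (a, b), the slope estimate, and the synthetic data are all illustrative placeholders.

```python
import numpy as np
from skimage.filters import threshold_otsu

def elliptical_density(x, h, a=20.0, b=2.0, slope=0.0):
    """Gaussian-weighted photon density inside a slope-rotated ellipse."""
    theta = np.arctan(slope)
    dx = x[:, None] - x[None, :]
    dh = h[:, None] - h[None, :]
    # Rotate offsets into the slope-aligned frame, then weight by the
    # elliptical Gaussian kernel.
    u = dx * np.cos(theta) + dh * np.sin(theta)
    v = -dx * np.sin(theta) + dh * np.cos(theta)
    return np.sum(np.exp(-0.5 * ((u / a) ** 2 + (v / b) ** 2)), axis=1)

x = np.sort(np.random.uniform(0, 1000, 2000))   # along-track distance, m
h = np.where(np.random.rand(2000) < 0.3,        # ~30% "signal" photons
             0.05 * x, np.random.uniform(-50, 150, 2000))
dens = elliptical_density(x, h, slope=0.05)
signal = dens >= threshold_otsu(dens)           # adaptive OTSU cut
```

Aligning the ellipse with the local slope keeps sloped ground returns inside a compact neighbourhood, which is why the kernel adaptation helps on steep terrain.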


2020 ◽ Vol 2 (1) ◽ pp. 8-14
Author(s): V. O. Parubochyi, R. Ya. Shuvar
Lighting normalization is an especially important issue in image recognition systems, since different illumination conditions can significantly change the recognition results, and lighting normalization minimizes the negative effects of varying illumination. In this paper, we evaluate the recognition performance of several lighting normalization methods based on the Self-Quotient Image (SQI) method introduced by Haitao Wang, Stan Z. Li, Yangsheng Wang, and Jianjun Zhang. For evaluation, we chose the original implementation and the most promising recent modifications of the original SQI method, including the Gabor Quotient Image (GQI) method introduced by Sanun Srisuk and Amnart Petpon in 2008, and the Fast Self-Quotient Image (FSQI) method and its modifications proposed by the authors in previous works. We propose an evaluation framework built on the Cropped Extended Yale Face Database B, which makes it possible to show how recognition results differ across illumination conditions. We test all results using two classifiers, a Nearest Neighbor classifier and a Linear Support Vector classifier, which allows us not only to calculate the recognition accuracy for each method and select the best one, but also to show the importance of a proper choice of classification method, which can significantly influence recognition results. We were able to show a significant decrease in recognition accuracy for unprocessed (RAW) images as the angle between the lighting source and the normal to the object increases. In contrast, our experiments showed an almost uniform distribution of recognition accuracy for images processed by the SQI-based lighting normalization methods. Another demonstrated, though expected, result is that recognition accuracy increases with the filter kernel size; however, large filter kernels are much more computationally expensive and can produce negative effects in the output images. Our experiments also showed that the second modification of the FSQI method, called FSQI3, is better in almost all cases for all filter kernel sizes, especially when a Linear Support Vector classifier is used for classification.
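A minimal sketch of the self-quotient image, Q = I / smoothed(I), assuming a plain Gaussian smoothing kernel (the original SQI uses a weighted Gaussian, and GQI swaps in a Gabor kernel). The sigma values stand in for the growing filter kernel sizes compared in the study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def self_quotient_image(img, sigma=3.0, eps=1e-6):
    """Q = I / smoothed(I); larger sigma means a larger effective
    filter kernel and higher computational cost."""
    smoothed = gaussian_filter(img.astype(np.float64), sigma=sigma)
    q = img / (smoothed + eps)
    return np.log1p(q)          # log compresses the dynamic range

img = np.random.rand(192, 168)  # stand-in for a Yale B face crop
for sigma in (1.0, 3.0, 9.0):   # growing kernel sizes, as in the study
    q = self_quotient_image(img, sigma=sigma)
```

Because the quotient divides out the slowly varying illumination component while keeping local texture, the normalized images become largely insensitive to the lighting angle, consistent with the near-uniform accuracy reported above.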


2019 ◽ Vol 43 (1) ◽ pp. 69-78
Author(s): Kazuhiro Sato, Yu Tomita, Ryota Kageyama, Yumi Takane, Shingo Kayano, ...
