Noise-Aware and Light-Weight VLSI Design of Bilateral Filter for Robust and Fast Image Denoising in Mobile Systems

Sensors ◽  
2020 ◽  
Vol 20 (17) ◽  
pp. 4722
Author(s):  
Sung-Joon Jang ◽  
Youngbae Hwang

The range kernel of the bilateral filter unintentionally degrades image quality in real environments because pixel intensities vary randomly due to noise generated in image sensors. Furthermore, the range kernel increases complexity due to the comparisons with neighboring pixels and the multiplications with the corresponding weights. In this paper, we propose a noise-aware range kernel, which estimates noise using an intensity-difference-based image noise model and dynamically adjusts its weights according to the estimated noise, in order to alleviate the quality degradation of bilateral filters caused by noise. In addition, to significantly reduce the complexity, an approximation scheme is introduced that converts the proposed noise-aware range kernel into a binary kernel using a statistical hypothesis test. Finally, a fully parallelized and pipelined very-large-scale integration (VLSI) architecture of a noise-aware bilateral filter (NABF) based on the proposed binary range kernel is presented, which was successfully implemented on a field-programmable gate array (FPGA). The experimental results show that the proposed NABF is more robust to noise than the conventional bilateral filter under various noise conditions. Furthermore, the proposed VLSI design of the NABF achieves 10.5 and 95.7 times higher throughput than state-of-the-art bilateral filter designs while using 63.6–97.5% less internal memory.
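
As a rough illustration of the idea of making the range kernel noise-aware, the sketch below simply widens a Gaussian range kernel by an assumed (pre-estimated) noise standard deviation; it does not reproduce the paper's intensity-difference noise model, binary-kernel approximation, or hardware architecture, and all parameter values are illustrative.

```python
import numpy as np

def noise_adaptive_bilateral(img, radius=2, sigma_s=1.5, sigma_r=10.0, noise_sigma=0.0):
    """Bilateral filter whose range kernel is widened by an assumed noise level.

    Generic illustration only: the range width is inflated by the noise
    standard deviation so that intensity differences caused purely by noise
    are not treated as edges.
    """
    img = img.astype(np.float64)
    H, W = img.shape
    sigma_r_eff = np.sqrt(sigma_r ** 2 + noise_sigma ** 2)

    # Precompute the spatial (domain) kernel once.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma_s ** 2))

    padded = np.pad(img, radius, mode="reflect")
    out = np.zeros_like(img)
    weights = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = padded[radius + dy:radius + dy + H, radius + dx:radius + dx + W]
            # Range weight relative to the noise-inflated width.
            w = spatial[dy + radius, dx + radius] * np.exp(
                -((shifted - img) ** 2) / (2.0 * sigma_r_eff ** 2))
            out += w * shifted
            weights += w
    return out / weights

# Example: denoise a noisy ramp image, assuming a noise level of 5 gray levels.
noisy = np.tile(np.arange(64, dtype=float), (64, 1)) + np.random.normal(0, 5, (64, 64))
clean = noise_adaptive_bilateral(noisy, noise_sigma=5.0)
```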

2019 ◽  
Vol 4 (1) ◽  
pp. 46
Author(s):  
Muflih Muflih ◽  
Mohamad Judha

Hypertension can be treated traditionally with complementary therapies such as cupping therapy. The effectiveness of cupping therapy in reducing blood pressure is thought to be influenced by variations in cupping technique. The purpose of this study was to scientifically test how the number of cupping heads, the duration, and the location of cupping points relate to the decrease in blood pressure in patients at the Klaten Migoenani Health Nursing Clinic. This study used a quasi-experimental one-group pre-post-test design. Blood pressure was measured with a digital blood pressure monitor in patients undergoing cupping therapy, and a statistical hypothesis test was performed on the measurements. Sampling used a quota sampling technique. The results showed that cupping therapy effectively reduced systolic and diastolic blood pressure by an average of 20 mmHg when applied at 1-3 cupping-point locations, with 18-24 cupping heads, for 25-30 minutes of therapy, through stimulation of nitric oxide, which causes peripheral vasodilation. The conclusion of this study is that the variation in blood pressure reduction from cupping therapy can be determined by the number of cupping heads, the duration, and the location of the cupping points.
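
The abstract does not name the specific hypothesis test; a one-group pre-post comparison of blood pressure is commonly analyzed with a paired t-test or a Wilcoxon signed-rank test, sketched here on purely hypothetical readings (not study data).

```python
import numpy as np
from scipy import stats

# Hypothetical systolic readings (mmHg) before and after cupping therapy for
# the same subjects; the values are illustrative only.
pre = np.array([158, 150, 162, 147, 155, 160, 149, 153])
post = np.array([138, 132, 141, 130, 134, 139, 131, 135])

diff = pre - post
t_stat, p_value = stats.ttest_rel(pre, post)       # paired t-test
w_stat, p_wilcoxon = stats.wilcoxon(pre, post)     # nonparametric alternative

print(f"mean reduction: {diff.mean():.1f} mmHg")
print(f"paired t-test:  t = {t_stat:.2f}, p = {p_value:.4f}")
print(f"Wilcoxon test:  W = {w_stat:.2f}, p = {p_wilcoxon:.4f}")
```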


2021 ◽  
Vol 8 ◽  
Author(s):  
Hilary Kates Varghese ◽  
Kim Lowell ◽  
Jennifer Miksis-Olds

Technological innovation in underwater acoustics has advanced research in marine mammal behavior by providing the ability to collect data on various marine mammal biological and behavioral attributes across time and space. With this comes the need for an approach to distill the large amounts of data collected. Although various general statistical and modeling approaches exist, here a holistic quantitative approach is introduced that is specifically motivated by the need to analyze different aspects of marine mammal behavior within a Before-After Control-Impact framework using spatial observations: the Global-Local-Comparison (GLC) approach. This approach capitalizes on data sets from large-scale hydrophone arrays and combines the established spatial autocorrelation statistics of (Global) Moran's I and (Local) Getis-Ord Gi∗ (Gi∗) with (Comparison) statistical hypothesis testing to provide a detailed understanding of array-wide, local, and order-of-magnitude changes in spatial observations. The approach was demonstrated using beaked whale foraging behavior (using foraging-specific clicks as a proxy) during acoustic exposure events as an exemplar. The demonstration revealed that the Moran's I analysis was effective at showing whether an array-wide change in behavior had occurred, i.e., from a clustered to a random distribution, or vice versa. The Gi∗ analysis identified where hot or cold spots of foraging activity occurred and how those spots varied spatially from one analysis period to the next. Since neither spatial statistic could be used to directly compare the magnitude of change between analysis periods, a statistical hypothesis test, the Kruskal-Wallis test, was used to directly compare the number of foraging events among analysis periods. When all three components of the GLC approach were used together, a comprehensive assessment of group-level spatial foraging activity was obtained. This spatial approach is demonstrated on marine mammal behavior, but it can be applied to a broad range of spatial observations over a wide variety of species.
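
A minimal sketch of the three GLC ingredients on a hypothetical grid of hydrophone cells: global Moran's I, local Getis-Ord Gi∗ z-scores, and a Kruskal-Wallis comparison across analysis periods. The grid layout, weight scheme, counts, and thresholds are illustrative assumptions, not the study's configuration.

```python
import numpy as np
from scipy import stats

def grid_weights(shape, include_self=False):
    """Binary rook-contiguity weight matrix for a regular grid of cells."""
    rows, cols = shape
    n = rows * cols
    W = np.zeros((n, n))
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    W[i, rr * cols + cc] = 1.0
            if include_self:
                W[i, i] = 1.0
    return W

def morans_i(y, W):
    """Global Moran's I: array-wide clustering of the observations."""
    n = y.size
    z = y - y.mean()
    return (n / W.sum()) * (z @ W @ z) / (z @ z)

def getis_ord_gi_star(y, W_star):
    """Local Getis-Ord Gi* z-scores; W_star must include self-weights."""
    n = y.size
    xbar, s = y.mean(), y.std(ddof=0)
    wi = W_star.sum(axis=1)
    num = W_star @ y - xbar * wi
    den = s * np.sqrt((n * (W_star ** 2).sum(axis=1) - wi ** 2) / (n - 1))
    return num / den

# Hypothetical foraging-click counts per grid cell for three analysis periods
# (before / during / after exposure); values are illustrative only.
rng = np.random.default_rng(0)
shape = (5, 6)
before = rng.poisson(20, shape).astype(float).ravel()
during = rng.poisson(8, shape).astype(float).ravel()
after = rng.poisson(15, shape).astype(float).ravel()

W = grid_weights(shape)
W_star = grid_weights(shape, include_self=True)

for name, y in (("before", before), ("during", during), ("after", after)):
    print(name, "Moran's I =", round(morans_i(y, W), 3),
          "hot-spot cells =", int((getis_ord_gi_star(y, W_star) > 1.96).sum()))

# Comparison step: Kruskal-Wallis test on cell counts across the three periods.
h_stat, p_value = stats.kruskal(before, during, after)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.4f}")
```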


2019 ◽  
Vol 37 (4) ◽  
pp. 393-427 ◽  
Author(s):  
Phuc Nguyen ◽  
Khai Nguyen ◽  
Ryutaro Ichise ◽  
Hideaki Takeda

Abstract In recent years, there has been increasing interest in numerical semantic labeling, in which the meaning of an unknown numerical column is assigned from the labels of the most relevant columns in predefined knowledge bases. Previous methods used the p-value of a statistical hypothesis test to estimate relevance and thus depend strongly on the distribution and data domain. In other words, they are unstable in general cases where such knowledge is undefined. Our goal is to solve semantic labeling without using such information while guaranteeing high accuracy. We propose EmbNum+, a neural numerical embedding for learning both discriminant representations and a similarity metric from numerical columns. EmbNum+ maps the lists of numerical values in columns to feature vectors in an embedding space, and a similarity metric can be calculated directly on these feature vectors. Evaluations on many datasets from various domains confirmed that EmbNum+ consistently outperformed other state-of-the-art approaches in terms of accuracy. The compact embedding representations also made EmbNum+ significantly faster than the others and enable large-scale semantic labeling. Furthermore, attribute augmentation can be used to enhance the robustness and unlock the portability of EmbNum+, making it possible to train on one domain and apply the model to many different domains.
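
The sketch below is a toy illustration of the embedding idea, not the EmbNum+ architecture: a column is reduced to sorted quantile samples, encoded by a small untrained network, and columns are compared by the distance between their embeddings. The network layout, sample count, and example columns are assumptions; the real method additionally learns the representation and metric from data (e.g., with a triplet-style objective).

```python
import torch
import torch.nn as nn

class NumColumnEncoder(nn.Module):
    """Toy encoder mapping a numerical column to a fixed-size embedding."""

    def __init__(self, n_samples=100, emb_dim=32):
        super().__init__()
        self.n_samples = n_samples
        self.net = nn.Sequential(
            nn.Linear(n_samples, 64), nn.ReLU(),
            nn.Linear(64, emb_dim),
        )

    def represent(self, values):
        # Fixed-length representation: sorted quantile samples of the column.
        values, _ = torch.sort(values)
        idx = torch.linspace(0, values.numel() - 1, self.n_samples).long()
        return values[idx]

    def forward(self, values):
        return self.net(self.represent(values))


encoder = NumColumnEncoder()          # untrained; for metric learning one would
                                      # optimize it so similar columns map close together
col_a = torch.randn(500) * 10 + 170   # e.g., a "height_cm"-like column
col_b = torch.randn(800) * 10 + 172   # a similar column
col_c = torch.rand(300) * 1e6         # an unrelated column

with torch.no_grad():
    ea, eb, ec = encoder(col_a), encoder(col_b), encoder(col_c)
    print("d(a,b) =", torch.dist(ea, eb).item())
    print("d(a,c) =", torch.dist(ea, ec).item())
```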


2014 ◽  
Vol 926-930 ◽  
pp. 3434-3437
Author(s):  
Feng Xu Hu ◽  
Yu Hong Chen

To address the image denoising problem, researchers have proposed the bilateral filtering algorithm, which weights both spatial distance and pixel intensity difference. It adds a pixel-weighting kernel function to the Gaussian local smoothing model, which is more favorable for preserving image edge detail. This paper presents a new improvement built on the bilateral filter technique. It uses coherence-value scanning to take the apparent dip corresponding to the energy center as the smoothing direction along discontinuous geologic bodies; this greatly reduces the directional dependence and is easily generalized to events with non-planar geometry. Coherence values are estimated with a waveform-adaptive method, so the estimated time dips and coherence attributes are more reliable and physically meaningful. When computing the pixel-similarity weight, the mean or median of the sample points along the discontinuous geologic body direction replaces the value at the point being computed. This makes it much easier to suppress random noise while preserving geologic features more effectively.
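
A rough sketch of the described modification, assuming a precomputed integer apparent dip per sample (e.g., from a coherence scan): samples gathered along the dip direction are compared against their median rather than against the raw centre value when forming the similarity weight. Window size and kernel widths are illustrative, not the paper's settings.

```python
import numpy as np

def dip_guided_bilateral(section, dips, half_win=4, sigma_s=2.0, sigma_r=0.5):
    """Structure-oriented bilateral smoothing of a 2-D seismic section.

    `section` is (time samples x traces); `dips` holds an assumed integer
    apparent dip (samples per trace) for every point.  The range weight uses
    the median along the dip direction in place of the centre sample.
    """
    nt, nx = section.shape
    out = np.copy(section)
    for it in range(nt):
        for ix in range(nx):
            dip = dips[it, ix]
            ks, vals = [], []
            for k in range(-half_win, half_win + 1):
                jx, jt = ix + k, it + k * dip
                if 0 <= jx < nx and 0 <= jt < nt:
                    ks.append(k)
                    vals.append(section[jt, jx])
            ks, vals = np.array(ks), np.array(vals)
            ref = np.median(vals)                      # replaces the centre value
            w = (np.exp(-ks ** 2 / (2 * sigma_s ** 2)) *
                 np.exp(-(vals - ref) ** 2 / (2 * sigma_r ** 2)))
            out[it, ix] = np.sum(w * vals) / np.sum(w)
    return out
```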


2019 ◽  
Vol 1 (2) ◽  
pp. 653-683 ◽  
Author(s):  
Frank Emmert-Streib ◽  
Matthias Dehmer

A statistical hypothesis test is one of the most eminent methods in statistics. Its pivotal role comes from the wide range of practical problems it can be applied to and its sparse data requirements. Being an unsupervised method makes it very flexible in adapting to real-world situations. The availability of high-dimensional data makes it necessary to apply such statistical hypothesis tests simultaneously to the test statistics of the underlying covariates. However, applying them without correction leads to an inevitable increase in Type 1 errors. To counteract this effect, multiple testing procedures have been introduced to control various types of errors, most notably the Type 1 error. In this paper, we review modern multiple testing procedures for controlling either the family-wise error rate (FWER) or the false-discovery rate (FDR). We emphasize their principal approaches, which allow them to be categorized as (1) single-step vs. stepwise, (2) adaptive vs. non-adaptive, and (3) marginal vs. joint multiple testing procedures. We place a particular focus on procedures that can deal with data having a (strong) correlation structure, because real-world data are rarely uncorrelated. Furthermore, we provide background information that makes the often technically intricate methods accessible to interdisciplinary data scientists.
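
For concreteness, here is a minimal sketch of one FWER-controlling procedure (Bonferroni, single-step) and one FDR-controlling procedure (Benjamini-Hochberg, step-up) applied to a set of illustrative p-values; the p-values themselves are made up for the example.

```python
import numpy as np

def bonferroni(pvals, alpha=0.05):
    """Single-step FWER control: reject H_i if p_i <= alpha / m."""
    pvals = np.asarray(pvals)
    return pvals <= alpha / pvals.size

def benjamini_hochberg(pvals, alpha=0.05):
    """Step-up FDR control: reject the k smallest p-values, where k is the
    largest index with p_(k) <= (k/m) * alpha."""
    pvals = np.asarray(pvals)
    m = pvals.size
    order = np.argsort(pvals)
    thresholds = alpha * (np.arange(1, m + 1) / m)
    below = pvals[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # largest index meeting the bound
        reject[order[:k + 1]] = True
    return reject

# Illustrative p-values: a few strong signals mixed with nulls.
p = np.array([0.0002, 0.009, 0.013, 0.021, 0.04, 0.12, 0.35, 0.60, 0.81, 0.95])
print("Bonferroni rejections:", bonferroni(p).sum())   # 1 (strict FWER control)
print("BH (FDR) rejections:  ", benjamini_hochberg(p).sum())   # 3 (less conservative)
```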


2019 ◽  
Vol 19 (2) ◽  
pp. 134-140
Author(s):  
Baek-Ju Sung ◽  
Sung-kyu Lee ◽  
Mu-Seong Chang ◽  
Do-Sik Kim

Author(s):  
YongAn LI

Background: Symbolic nodal analysis plays a pivotal part in very-large-scale integration (VLSI) design. Methods: In this work, based on the terminal relations for the pathological elements and the voltage differencing inverting buffered amplifier (VDIBA), twelve alternative pathological models for the VDIBA are presented. Moreover, the proposed models are applied to a VDIBA-based second-order filter and oscillator in order to simplify the circuit analysis. Results: The results show that the behavioral models for the VDIBA are systematic, effective, and powerful in symbolic nodal circuit analysis.
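
The paper's pathological VDIBA models are not reproduced here; the sketch below only illustrates the general workflow of symbolic nodal analysis, writing the KCL equation for a simple gm-C first-order section in SymPy and solving for its transfer function. The circuit and symbols are assumptions chosen for brevity.

```python
import sympy as sp

# Symbols: node voltages, element values, complex frequency.
Vin, Vout, s = sp.symbols('V_in V_out s')
gm, C, R = sp.symbols('g_m C R', positive=True)

# KCL at the output node of a transconductor-C (gm-C) lossy integrator:
# the transconductor injects gm*(Vin - Vout) into a node loaded by C and R.
kcl_out = sp.Eq(gm * (Vin - Vout), Vout * (s * C + 1 / R))

Vout_expr = sp.solve(kcl_out, Vout)[0]
H = sp.simplify(Vout_expr / Vin)
print("H(s) =", H)                        # gm/(C*s + gm + 1/R)
print("DC gain =", sp.simplify(H.subs(s, 0)))
```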


2021 ◽  
Vol 9 (1) ◽  
Author(s):  
Moritz Mercker ◽  
Philipp Schwemmer ◽  
Verena Peschko ◽  
Leonie Enners ◽  
Stefan Garthe

Abstract Background: New wildlife telemetry and tracking technologies have become available in the last decade, leading to a large increase in the volume and resolution of animal tracking data. These technical developments have been accompanied by various statistical tools aimed at analysing the data obtained by these methods. Methods: We used simulated habitat and tracking data to compare some of the different statistical methods frequently used to infer local resource selection and large-scale attraction/avoidance from tracking data. Notably, we compared spatial logistic regression models (SLRMs), spatio-temporal point process models (ST-PPMs), step selection models (SSMs), and integrated step selection models (iSSMs) and their interplay with habitat and animal movement properties in terms of statistical hypothesis testing. Results: We demonstrated that only iSSMs and ST-PPMs showed nominal type I error rates in all studied cases, whereas SSMs may slightly exceed these levels and SLRMs may frequently and strongly exceed them. iSSMs appeared to have, on average, more robust and higher statistical power than ST-PPMs. Conclusions: Based on our results, we recommend the use of iSSMs to infer habitat selection or large-scale attraction/avoidance from animal tracking data. Further advantages over other approaches include short computation times, predictive capacity, and the possibility of deriving mechanistic movement models.
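
As a simplified illustration of the evaluation criterion used here (the nominal type I error rate under a null of no habitat effect), the sketch below estimates an empirical rejection rate by simulation; a plain two-sample test on made-up covariate values stands in for the far more involved SLRM/SSM/iSSM model fits.

```python
import numpy as np
from scipy import stats

# Under the null of "no habitat preference", a well-calibrated method should
# reject at roughly the nominal rate (e.g. 5%).
rng = np.random.default_rng(42)
alpha, n_sim = 0.05, 2000
rejections = 0
for _ in range(n_sim):
    used = rng.normal(size=50)     # habitat covariate at used locations
    avail = rng.normal(size=200)   # same distribution at available locations
    _, p = stats.ttest_ind(used, avail, equal_var=False)
    rejections += p < alpha

print(f"empirical type I error: {rejections / n_sim:.3f} (nominal {alpha})")
```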


Author(s):  
Yuan-Ho Chen ◽  
Chieh-Yang Liu

Abstract In this paper, a very-large-scale integration (VLSI) design that supports the high-efficiency video coding (HEVC) inverse discrete cosine transform (IDCT) for multiple transform sizes is proposed. The proposed two-dimensional (2-D) IDCT is implemented with low area by using a single one-dimensional (1-D) IDCT core with a transpose memory. The proposed 1-D IDCT core decomposes the 32-point transform into 16-, 8-, and 4-point matrix products according to the symmetric property of the transform coefficients. Moreover, we use a shift-and-add unit to share hardware resources between the matrix products of the multiple transform dimensions. The 1-D IDCT core can simultaneously calculate the first- and second-dimension data. The results indicate that the proposed 2-D IDCT core has a throughput rate of 250 MP/s with only 110 K gate counts when implemented in the Taiwan Semiconductor Manufacturing Company (TSMC) 90-nm complementary metal-oxide-semiconductor (CMOS) technology. The results show that the proposed circuit has the smallest area among designs supporting multiple transform sizes.
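
A software analogue of the row-column structure, assuming the orthonormal DCT-II/IDCT pair as a stand-in for HEVC's fixed-point integer transform: the same 1-D IDCT routine is applied twice, with a transpose between the passes playing the role of the transpose memory.

```python
import numpy as np
from scipy.fft import idct

def idct_2d_row_column(coeffs):
    """2-D IDCT computed with a single 1-D IDCT routine and a transpose:
    first dimension -> transpose -> second dimension (row-column method)."""
    pass1 = idct(coeffs, axis=1, norm='ortho')    # 1-D IDCT on each row
    return idct(pass1.T, axis=1, norm='ortho').T  # transpose, reuse the same 1-D core

# Quick self-check against the direct separable transform on a 32x32 block.
block = np.random.randn(32, 32)
direct = idct(idct(block, axis=0, norm='ortho'), axis=1, norm='ortho')
print(np.allclose(idct_2d_row_column(block), direct))   # True
```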

