Peak Signal-to-Noise Ratio Evaluation of Server Display Monitors and Client Display Monitors in a Digital Subtraction Angiography Device

2020 ◽  
Vol 3 (1) ◽  
pp. 33-41
Author(s):  
Hwunjae Lee ◽  
Junhaeng Lee

This study evaluated the PSNR of the server display monitor and the client display monitor of a DSA system. The signal is acquired and imaged during surgery and stored in the PACS server; distortion of the original signal when the image is later observed on the client monitor is therefore an important problem. Problems include noise generated during compression and image storage/transmission in PACS, information loss during storage and transmission, and deterioration in image quality when medical images are output to a monitor. The equipment used for the experiment in this study was P's DSA. Two types of monitors were used in the experiment: a 1280×1024 pixel monitor from company P and a 1536×2048 pixel monitor from company W. The PACS program was MARO-view, and a PSNR measurement program implemented in Visual C++ was used for the measurements. As a result of the experiment, the PSNR value of the kidney angiography image was 26.958 dB, the PSNR value of the lung angiography image was 28.9174 dB, the PSNR value of the heart angiography image was 22.8315 dB, the PSNR value of the neck angiography image was 37.0319 dB, and the knee blood vessel image showed a PSNR value of 43.2052 dB. In conclusion, there is almost no signal distortion in the process of acquiring, storing, and transmitting images in PACS; however, the results suggest that the image signal may be distorted depending on the resolution and performance of each monitor. It will therefore be necessary to evaluate monitor performance and to maintain it.
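Since the study reports PSNR values for each angiography image, a minimal sketch of the standard PSNR computation may help as a reference. It assumes 8-bit grayscale images and is not the authors' Visual C++ implementation; the file names in the commented example are purely illustrative.

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, max_value: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    reference = reference.astype(np.float64)
    test = test.astype(np.float64)
    mse = np.mean((reference - test) ** 2)   # mean squared error
    if mse == 0:
        return float("inf")                  # identical images
    return 10.0 * np.log10(max_value ** 2 / mse)

# Illustrative usage: compare an original frame with its displayed/compressed counterpart.
# from PIL import Image
# original = np.asarray(Image.open("kidney_original.png").convert("L"))
# displayed = np.asarray(Image.open("kidney_client_monitor.png").convert("L"))
# print(f"PSNR = {psnr(original, displayed):.4f} dB")
```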

Symmetry ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 717
Author(s):  
Mariia Nazarkevych ◽  
Natalia Kryvinska ◽  
Yaroslav Voznyi

This article presents a new method of image filtering based on a new kind of image processing transformation, namely the wavelet-Ateb–Gabor transformation, which is a wider basis for Gabor functions. Ateb functions are symmetric functions. The developed type of filtering makes it possible to perform image transformation and to obtain better biometric image recognition results than traditional filters allow. These results are possible due to the construction of various forms and sizes of the curves of the developed functions. Further, the wavelet transformation of Gabor filtering is investigated, and the time the system spends on the operation is estimated. The filtering is applied to images taken from NIST Special Database 302, which is publicly available. The reliability of the proposed wavelet-Ateb–Gabor filtering method is demonstrated by calculating and comparing the peak signal-to-noise ratio (PSNR) and mean square error (MSE) between two biometric images, one filtered by the developed method and the other by the Gabor filter. The time characteristics of this filtering process are studied as well.
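The Ateb–Gabor construction itself is the article's contribution and is not reproduced here. As a reference point, a standard real-valued 2D Gabor kernel of the kind the method generalizes can be sketched as follows; the parameter names are conventional, not taken from the article.

```python
import numpy as np

def gabor_kernel(size: int, sigma: float, theta: float, lam: float,
                 gamma: float = 0.5, psi: float = 0.0) -> np.ndarray:
    """Standard 2D Gabor kernel: a Gaussian envelope times a cosine carrier.

    size should be odd so the kernel is centered on a pixel.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_theta = x * np.cos(theta) + y * np.sin(theta)     # rotate coordinates
    y_theta = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_theta ** 2 + (gamma * y_theta) ** 2) / (2.0 * sigma ** 2))
    carrier = np.cos(2.0 * np.pi * x_theta / lam + psi)
    return envelope * carrier

# Convolving a fingerprint image with kernels at several orientations theta
# enhances ridge structure; the wavelet-Ateb-Gabor filter generalizes this
# family using Ateb functions (see the article for the exact construction).
```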


Universe ◽  
2021 ◽  
Vol 7 (6) ◽  
pp. 174
Author(s):  
Karl Wette

The likelihood ratio for a continuous gravitational wave signal is viewed geometrically as a function of the orientation of two vectors: one representing the optimal signal-to-noise ratio, and the other representing the maximised likelihood ratio or F-statistic. Analytic marginalisation over the angle between the vectors yields a marginalised likelihood ratio, which is a function of the F-statistic. Further analytic marginalisation over the optimal signal-to-noise ratio is explored using different choices of prior. Monte-Carlo simulations show that the marginalised likelihood ratios have detection power identical to that of the F-statistic. This approach demonstrates a route to viewing the F-statistic in a Bayesian context, while retaining the advantages of its efficient computation.


1994 ◽  
Vol 04 (02) ◽  
pp. 441-446 ◽  
Author(s):  
V.S. ANISHCHENKO ◽  
M.A. SAFONOVA ◽  
L.O. CHUA

Using numerical simulation, we establish the possibility of realizing the stochastic resonance (SR) phenomenon in Chua’s circuit when it is excited by either an amplitude-modulated or a frequency-modulated signal. It is shown that the application of a frequency-modulated signal to a Chua’s circuit operating in a regime of dynamical intermittency is preferable over an amplitude-modulated signal from the point of view of minimizing the signal distortion and maximizing the signal-to-noise ratio (SNR).
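For readers who want to reproduce the setting numerically, below is a minimal sketch of the dimensionless Chua equations driven by an amplitude-modulated signal plus white noise. The parameter values, the injection point of the drive, and the drive/noise amplitudes are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np

# Typical double-scroll parameters for the dimensionless Chua's circuit.
alpha, beta, m0, m1 = 15.6, 28.0, -8.0 / 7.0, -5.0 / 7.0

def chua_diode(x):
    """Piecewise-linear characteristic of the Chua diode."""
    return m1 * x + 0.5 * (m0 - m1) * (abs(x + 1.0) - abs(x - 1.0))

def simulate(T=200.0, dt=1e-3, seed=0):
    """Crude Euler-Maruyama integration; returns the x(t) trace for SNR estimation."""
    rng = np.random.default_rng(seed)
    x, y, z = 0.1, 0.0, 0.0
    xs = np.empty(int(T / dt))
    for k in range(xs.size):
        t = k * dt
        # Amplitude-modulated drive: slow envelope times a fast carrier (assumed form).
        s = 0.1 * (1.0 + 0.5 * np.sin(0.05 * t)) * np.sin(1.0 * t)
        noise = 0.05 * rng.standard_normal() / np.sqrt(dt)   # white-noise forcing
        dx = alpha * (y - x - chua_diode(x)) + s + noise
        dy = x - y + z
        dz = -beta * y
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xs[k] = x
    return xs  # the SNR at the carrier frequency can be read off the power spectrum of x(t)
```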


2021 ◽  
Vol 647 ◽  
pp. L3 ◽  
Author(s):  
J. Cernicharo ◽  
C. Cabezas ◽  
M. Agúndez ◽  
B. Tercero ◽  
N. Marcelino ◽  
...  

We present the discovery in TMC-1 of allenyl acetylene, H₂CCCHCCH, through the observation of nineteen lines with a signal-to-noise ratio of ∼4–15. For this species, we derived a rotational temperature of 7 ± 1 K and a column density of (1.2 ± 0.2) × 10¹³ cm⁻². The other well-known isomer of this molecule, methyl diacetylene (CH₃C₄H), has also been observed, and we derived a similar rotational temperature, T_r = 7.0 ± 0.3 K, and a column density for its two states (A and E) of (6.5 ± 0.3) × 10¹² cm⁻². Hence, allenyl acetylene and methyl diacetylene have similar abundances. Remarkably, their abundances are close to that of vinyl acetylene (CH₂CHCCH). We also searched for the other isomer of C₅H₄, HCCCH₂CCH (1,4-pentadiyne), but only a 3σ upper limit of 2.5 × 10¹² cm⁻² on the column density could be established. These results have been compared to state-of-the-art chemical models for TMC-1, indicating the important role of these hydrocarbons in its chemistry. The rotational parameters of allenyl acetylene have been improved by fitting the existing laboratory data together with the frequencies of the transitions observed in TMC-1.


Circuit World ◽  
2019 ◽  
Vol 45 (3) ◽  
pp. 156-168 ◽  
Author(s):  
Yavar Safaei Mehrabani ◽  
Mehdi Bagherizadeh ◽  
Mohammad Hossein Shafiabadi ◽  
Abolghasem Ghasempour

Purpose: This paper aims to present an inexact 4:2 compressor cell using carbon nanotube field-effect transistors (CNFETs).
Design/methodology/approach: Capacitive threshold logic (CTL) has been used to design this cell.
Findings: To evaluate the proposed cell, comprehensive simulations are carried out at two levels: circuit and image processing. At the circuit level, the HSPICE software has been used and the power consumption, delay, and power-delay product are calculated. The power-delay-transistor count product (PDAP) is also used to provide a trade-off among all metrics. In addition, Monte Carlo analysis has been used to examine the robustness of the proposed cell against manufacturing process variations. The results of the simulations at this level of abstraction indicate the superiority of the proposed cell over other circuits. At the application level, MATLAB is used to evaluate the peak signal-to-noise ratio (PSNR) figure of merit; two input images are multiplied by a multiplier circuit consisting of 4:2 compressors. The results of this simulation also show the superiority of the proposed cell over others.
Originality/value: The cell significantly reduces the number of transistors and consists only of NOT gates.
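The paper's inexact CNFET/CTL design is not reproduced here. For orientation, a behavioral model of an exact 4:2 compressor (two cascaded full adders) is sketched below; an inexact variant trades some output accuracy against circuit cost relative to this baseline. Function names and structure are illustrative assumptions.

```python
from itertools import product

def full_adder(a: int, b: int, c: int) -> tuple[int, int]:
    """Returns (sum_bit, carry) for three 1-bit inputs."""
    s = a ^ b ^ c
    carry = (a & b) | (b & c) | (a & c)
    return s, carry

def compressor_4_2(x1: int, x2: int, x3: int, x4: int, cin: int) -> tuple[int, int, int]:
    """Exact 4:2 compressor built from two cascaded full adders.

    Satisfies x1 + x2 + x3 + x4 + cin == sum_bit + 2 * (carry + cout).
    """
    s1, cout = full_adder(x1, x2, x3)
    sum_bit, carry = full_adder(s1, x4, cin)
    return sum_bit, carry, cout

# Sanity check of the arithmetic identity over all 32 input combinations.
for bits in product((0, 1), repeat=5):
    sum_bit, carry, cout = compressor_4_2(*bits)
    assert sum(bits) == sum_bit + 2 * (carry + cout)
```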


Geophysics ◽  
1986 ◽  
Vol 51 (10) ◽  
pp. 1879-1892 ◽  
Author(s):  
P. L. McFadden ◽  
B. J. Drummond ◽  
S. Kravis

Multichannel geophysical data are usually stacked by calculating the average of the observations on all channels. In the Nth‐root stack, the average of the Nth root of each observation is raised to the Nth power, with the signs of the observations and average maintained. When N = 1, the process is identical to conventional linear stacking or averaging. Nth‐root stacking has been applied in the processing of seismic refraction and teleseismic array data. In some experiments and certain applications it is inferior to linear stacking, but in others it is superior. Although the variance for an Nth‐root stack is typically less than for a linear stack, the mean square error is larger, because of signal attenuation. The fractional amount by which the signal is attenuated depends in a complicated way on the number of data channels, the order (N) of the stack, the signal‐to‐noise ratio, and the noise distribution. Because the signal‐to‐noise ratio varies across a wavelet, peaking where the signal is greatest and approaching zero at the zero‐crossing points, the attenuation of the signal varies across a wavelet, thereby producing signal distortion. The main visual effect of the distortion is a sharpening of the legs of the wavelet. However, the attenuation of the signal is accompanied by a much greater attenuation of the background noise, leading to a significant contrast enhancement. It is this sharpening of the signal, accompanied by the contrast enhancement, that makes the technique powerful in beam‐steering applications of array data. For large values of N, the attenuation of the signal with low signal‐to‐noise ratios ultimately leads to its destruction. Nth‐root stacking is therefore particularly powerful in applications where signal sharpening and contrast enhancement are important but signal distortion is not.
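As described above, the Nth-root stack takes the signed Nth root of each observation, averages across channels, and raises the signed average back to the Nth power. A minimal sketch follows; the array layout and function name are illustrative assumptions.

```python
import numpy as np

def nth_root_stack(traces: np.ndarray, N: float = 2.0) -> np.ndarray:
    """Nth-root stack of multichannel data.

    traces: array of shape (channels, samples).
    N = 1 reproduces the conventional linear stack (plain mean).
    """
    # Signed Nth root of each observation, preserving its sign.
    rooted = np.sign(traces) * np.abs(traces) ** (1.0 / N)
    mean = rooted.mean(axis=0)                  # average across channels
    # Raise the average back to the Nth power, again preserving the sign.
    return np.sign(mean) * np.abs(mean) ** N

# With N > 1, incoherent noise averages toward zero before the final power,
# so the background is suppressed much more strongly than the signal, at the
# cost of some signal attenuation and sharpening of the wavelet legs.
```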


1998 ◽  
Vol 5 (3) ◽  
pp. 1050-1051
Author(s):  
D. E. Sayers ◽  
P. T. Goeller ◽  
B. I. Boyanov ◽  
R. J. Nemanich

The capabilities and performance of a UHV system for in situ studies of metal–semiconductor interactions are described. The UHV system consists of interconnected deposition and analysis chambers, each of which is capable of maintaining a base pressure of approximately 1 × 10⁻¹⁰ torr. The deposited materials and their reaction products can be studied in situ with RHEED, XAFS, AES, XPS, UPS and ARUPS. Results from a study of the reaction of 0.7- and 1.7-monolayer-thick films of cobalt with strained silicon–germanium alloys are presented. The signal-to-noise ratio obtained in these experiments indicates that the apparatus is capable of supporting in situ EXAFS studies of ∼0.1-monolayer-thick films.


2021 ◽  
Vol 7 (14) ◽  
pp. eabe0340
Author(s):  
Sophie Bavard ◽  
Aldo Rustichini ◽  
Stefano Palminteri

Evidence suggests that economic values are rescaled as a function of the range of the available options. Although locally adaptive, range adaptation has been shown to lead to suboptimal choices, particularly notable in reinforcement learning (RL) situations when options are extrapolated from their original context to a new one. Range adaptation can be seen as the result of an adaptive coding process aiming at increasing the signal-to-noise ratio. However, this hypothesis leads to a counterintuitive prediction: Decreasing task difficulty should increase range adaptation and, consequently, extrapolation errors. Here, we tested the paradoxical relation between range adaptation and performance in a large sample of participants performing variants of an RL task, where we manipulated task difficulty. Results confirmed that range adaptation induces systematic extrapolation errors and is stronger when decreasing task difficulty. Last, we propose a range-adapting model and show that it is able to parsimoniously capture all the behavioral results.
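The authors' full range-adapting model is specified in the paper and is not reproduced here. The core idea, option values rescaled by the range of outcomes available in the local context, can be sketched as a simple range-normalized delta-rule update; all names, the initial value, and the exact normalization below are assumptions for illustration only.

```python
def range_adapted_update(q, context, action, reward, r_min, r_max, alpha=0.3):
    """One learning step on a reward rescaled to the outcome range of its context.

    q:            dict mapping (context, action) -> learned value in [0, 1]
    r_min, r_max: smallest and largest outcomes encountered in this context so far
    alpha:        learning rate
    """
    span = max(r_max - r_min, 1e-8)           # guard against zero-width ranges
    r_norm = (reward - r_min) / span          # range-normalized reward in [0, 1]
    key = (context, action)
    old = q.get(key, 0.5)                     # uninformative initial value
    q[key] = old + alpha * (r_norm - old)     # standard delta-rule update
    return q
```

Because values learned in a narrow-range context are stretched onto the full unit interval, options carried into a context with a different outcome range can be mis-ranked, which corresponds to the extrapolation errors discussed above.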

