OPTIMIZATION OF PROCESS PARAMETERS IN ND:YAG LASER WELDING OF HASTELLOY SHEETS THROUGH THE TAGUCHI METHOD

2019 ◽  
Vol 14 (1) ◽  
Author(s):  
Saravanan S

Optimization of weld width and tensile strength in pulsed Nd:YAG laser-welded Hastelloy C-276 sheets, subjected to varied welding speed (350-450 mm/min), pulse energy (10-14 J) and pulse duration (6-8 ms), is attempted. Experimental conditions are designed based on a Taguchi L9 orthogonal array. The parameter levels attaining a minimum weld width and a maximum tensile strength were determined by computing the signal-to-noise ratio. Further, a mathematical model for predicting the weld width and tensile strength of the weld is developed through regression analysis in the statistical software MINITAB-16, and the goodness of fit is assessed by analysis of variance.
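As a concrete illustration of the two S/N criteria above, the minimal sketch below computes Taguchi signal-to-noise ratios with the standard smaller-the-better formula for weld width and larger-the-better formula for tensile strength; the replicate values are hypothetical, not data from the study.

```python
import numpy as np

def sn_smaller_the_better(y):
    """Taguchi S/N ratio for a response to minimise (e.g. weld width)."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

def sn_larger_the_better(y):
    """Taguchi S/N ratio for a response to maximise (e.g. tensile strength)."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

# Hypothetical replicates for one L9 run (illustrative values, not study data).
weld_width_mm = [1.42, 1.45, 1.40]
tensile_MPa = [705.0, 698.0, 710.0]
print(f"S/N weld width:       {sn_smaller_the_better(weld_width_mm):6.2f} dB")
print(f"S/N tensile strength: {sn_larger_the_better(tensile_MPa):6.2f} dB")
```

The run whose S/N ratio is highest under each criterion indicates the preferred factor levels for that response.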

2019 ◽  
Vol 9 (11) ◽  
pp. 2181
Author(s):  
Anuradha Goswami ◽  
Jia-Qian Jiang

This research aims to depict the comparative performance of micropollutant removal by FeSO4- and zero-valent iron (Fe(0))-catalytic Fenton oxidation and to explore the possibilities of minimising the sludge production from the process. The emerging micropollutants used for the study were gabapentin, sulfamethoxazole, diuron, terbutryn and terbuthylazine. The Taguchi method, which evaluates the signal-to-noise ratio instead of the standard deviation, was used to develop robust experimental conditions. Though both the FeSO4- and Fe(0)-catalytic Fenton oxidation processes were able to completely degrade the stated micropollutants, the Fe(0)-catalytic Fenton process delivered better removal of dissolved organic carbon (DOC; 70%) than FeSO4-catalytic Fenton oxidation (45%). Fe(0)-catalytic Fenton oxidation facilitated heterogeneous treatment functions, which eliminated toxicity from the contaminated solution, and there was no recognisable sludge production.


2019 ◽  
Author(s):  
A. Fragasso ◽  
S. Schmid ◽  
C. Dekker

Abstract Nanopores bear great potential as single-molecule tools for bioanalytical sensing and sequencing, due to their exceptional sensing capabilities, high throughput, and low cost. The detection principle relies on measuring the small differences in the ionic current as biomolecules traverse the nanopore. A major bottleneck for the further progress of this technology is the noise that is present in the ionic current recordings, because it limits the signal-to-noise ratio and thereby the effective time resolution of the experiment. Here, we review the main types of noise at low and high frequencies and discuss the underlying physics. Moreover, we compare biological and solid-state nanopores in terms of the signal-to-noise ratio (SNR), the key figure of merit, by measuring free translocations of a short ssDNA through a selected set of nanopores under typical experimental conditions. We find that SiNx solid-state nanopores provide the highest SNR, due to the large currents at which they can be operated and their relatively low noise at high frequencies. However, the real game-changer for many applications is a controlled slowdown of the translocation speed, which for MspA was shown to increase the SNR more than 160-fold. Finally, we discuss practical approaches for lowering the noise for optimal experimental performance and further development of nanopore technology.
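As a minimal illustration of the figure of merit discussed above, the sketch below estimates the SNR of a single translocation event as the blockade amplitude divided by the RMS baseline noise; the trace is synthetic and all numbers (currents, event depth, noise level) are assumptions, not measurements from the review.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ionic-current trace: a 10 nA open-pore baseline with one
# translocation event that blocks 2 nA of the current (values illustrative).
n_samples = 100_000
blockade_nA = 2.0
trace = np.full(n_samples, 10.0)
trace[40_000:41_000] -= blockade_nA        # the event
trace += rng.normal(0.0, 0.15, n_samples)  # additive baseline noise

# A common nanopore definition: SNR = event amplitude / RMS baseline noise.
i_rms = trace[:30_000].std()
print(f"RMS noise: {i_rms:.3f} nA, SNR = {blockade_nA / i_rms:.1f}")
```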


2020 ◽  
Vol 48 (8) ◽  
pp. e43-e43 ◽  
Author(s):  
Guanjue Xiang ◽  
Cheryl A Keller ◽  
Belinda Giardine ◽  
Lin An ◽  
Qunhua Li ◽  
...  

Abstract Quantitative comparison of epigenomic data across multiple cell types or experimental conditions is a promising way to understand the biological functions of epigenetic modifications. However, differences in sequencing depth and signal-to-noise ratios in the data from different experiments can hinder our ability to identify real biological variation from raw epigenomic data. Proper normalization is required prior to data analysis to gain meaningful insights. Most existing methods for data normalization standardize signals by rescaling either background regions or peak regions, assuming that the same scale factor is applicable to both background and peak regions. While such methods adjust for differences in sequencing depths, they do not address differences in the signal-to-noise ratios across different experiments. We developed a new data normalization method, called S3norm, that normalizes the sequencing depths and signal-to-noise ratios across different data sets simultaneously by a monotonic nonlinear transformation. We show empirically that the epigenomic data normalized by our method, compared to existing methods, can better capture real biological variation, such as the impact on gene expression regulation.
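To make the idea of a monotonic nonlinear normalization concrete, here is a toy sketch, not the published S3norm implementation: it fits a power transform y = a * x**b so that the mean signal in (assumed given) common peak regions and common background regions matches a reference track, which aligns depth and signal-to-noise ratio at the same time. The masks, reference means, and data are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import brentq

def power_normalize(target, peak_mask, bkg_mask, ref_peak_mean, ref_bkg_mean):
    """Fit y = a * x**b so that the mean target signal in common peak regions
    and in common background regions both match the reference, then apply it."""
    x_pk, x_bg = target[peak_mask], target[bkg_mask]

    # b is fixed by the peak-to-background mean ratio; a then follows.
    def ratio_gap(b):
        return np.mean(x_pk ** b) / np.mean(x_bg ** b) - ref_peak_mean / ref_bkg_mean

    b = brentq(ratio_gap, 0.05, 5.0)   # assumes a root inside this bracket
    a = ref_peak_mean / np.mean(x_pk ** b)
    return a * target ** b

# Illustrative target track over 1000 bins with stand-in peak/background calls.
rng = np.random.default_rng(4)
target = rng.gamma(2.0, 2.0, 1000) + 0.1
peaks = target > np.quantile(target, 0.9)
bkg = target < np.quantile(target, 0.5)
normed = power_normalize(target, peaks, bkg, ref_peak_mean=12.0, ref_bkg_mean=1.0)
```

Because the transform is monotonic, it preserves the rank order of bins within a data set while matching both the background level and the peak level to the reference.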


2009 ◽  
Vol 16-19 ◽  
pp. 273-277 ◽  
Author(s):  
Ying Liu ◽  
Yan Li ◽  
Jin Tao Xu

To reduce the noise in the output signal of fiber optic gyroscopes (FOGs) and increase their precision, this paper establishes a mathematical model of the FOG output signal, analyzes its error characteristics, and puts forward a new de-noising algorithm based on the wavelet transform with a compromise between soft and hard threshold filtering, where the threshold values are determined by a multi-dimensional recursive algorithm. Experiments validate that the proposed approach performs competitively in terms of visual quality, signal-to-noise ratio (SNR) and standard deviation, and that it is effective in eliminating the white noise in the FOG output signal.
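A minimal sketch of the soft/hard compromise thresholding described above follows. The paper's multi-dimensional recursive threshold selection is replaced here by the common universal threshold, and the PyWavelets pipeline and synthetic FOG trace are assumptions for illustration only.

```python
import numpy as np
import pywt

def compromise_threshold(w, lam, a=0.5):
    """Soft/hard compromise: a=1 reduces to soft, a=0 to hard thresholding."""
    w = np.asarray(w)
    return np.where(np.abs(w) > lam, np.sign(w) * (np.abs(w) - a * lam), 0.0)

def denoise(signal, wavelet="db4", level=4, a=0.5):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Universal threshold estimated from the finest-scale detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    lam = sigma * np.sqrt(2.0 * np.log(len(signal)))
    coeffs[1:] = [compromise_threshold(c, lam, a) for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)

# Illustrative: a constant rotation-rate signal buried in white noise.
rng = np.random.default_rng(1)
raw = 0.5 + rng.normal(0.0, 0.2, 4096)   # synthetic FOG output, deg/s
clean = denoise(raw)
```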


2011 ◽  
Vol 396-398 ◽  
pp. 110-114
Author(s):  
Xu Hai Chen ◽  
Min Du

Electrochemical real-time PCR (EC-rtPCR) overcomes several shortcomings of fluorescence-based real-time PCR, but traditional electrochemical methods have limitations that reduce the accuracy and efficiency of testing. To overcome the disadvantages of chronoamperometry (CA), we report a novel electrochemical method in which a peak current is quickly generated in the current-vs.-time curve by changing the waveform of the voltage excitation at the working electrode. In particular, we derived a mathematical model that illustrates the principle of this method and demonstrates that the peak current is linear with respect to the concentration of the target substance. Moreover, we developed a device with an improved electrochemical circuit to generate the voltage excitation and detect the peak automatically. Finally, the device was used to study the electrochemical behavior of K3[Fe(CN)6]. The results show that the method has a better signal-to-noise ratio and higher sensitivity than chronoamperometry, and that the measured peak current is indeed linear in the concentration of the target substance.
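The linearity claim above amounts to a calibration line; the sketch below fits peak current against concentration and inverts the fit for an unknown sample. All values are hypothetical, not measurements from the paper.

```python
import numpy as np

# Hypothetical peak currents (uA) read from current-vs.-time curves at several
# K3[Fe(CN)6] concentrations (mM); illustrative values, not the paper's data.
conc_mM = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
i_peak_uA = np.array([1.1, 2.0, 4.1, 8.3, 16.2])

# Linear calibration: i_peak = slope * c + intercept.
slope, intercept = np.polyfit(conc_mM, i_peak_uA, 1)
r = np.corrcoef(conc_mM, i_peak_uA)[0, 1]
print(f"sensitivity {slope:.2f} uA/mM, intercept {intercept:.2f} uA, r = {r:.4f}")

# Invert the calibration to estimate an unknown concentration.
print(f"5.0 uA -> {(5.0 - intercept) / slope:.2f} mM")
```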


2011 ◽  
Vol 230-232 ◽  
pp. 564-568
Author(s):  
Qing Xuan Wang ◽  
Ai Hua Dong

Drawing on the advantages of the soft and hard threshold methods proposed by Donoho, this paper presents an improved threshold function, built from a quadratic polynomial with a deviation variable so that it is continuous and differentiable, tailored to the characteristics of underground water-pipe leakage signals. We construct the mathematical model of the function, apply a wavelet de-noising algorithm to the collected ultrasonic signals, and compare the de-noising performance by means of the signal-to-noise ratio (SNR) and mean squared error (MSE). Results show that the new function achieves a better de-noising effect than the original threshold functions.
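For reference, the two comparison metrics named above can be computed as in this short sketch, which assumes a known clean reference signal; the synthetic leak burst is an illustration only.

```python
import numpy as np

def snr_db(clean, denoised):
    """Output SNR in dB, computed against a known clean reference signal."""
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum((clean - denoised) ** 2))

def mse(clean, denoised):
    """Mean squared error between the reference and the de-noised signal."""
    return np.mean((clean - denoised) ** 2)

# Illustrative comparison: higher SNR and lower MSE mean better de-noising.
t = np.linspace(0.0, 1.0, 2048)
clean = np.sin(2 * np.pi * 40 * t) * np.exp(-3 * t)   # synthetic leak burst
noisy = clean + np.random.default_rng(2).normal(0.0, 0.3, t.size)
print(f"SNR = {snr_db(clean, noisy):.2f} dB, MSE = {mse(clean, noisy):.4f}")
```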


2018 ◽  
Vol 17 (1) ◽  
pp. 79-86 ◽  
Author(s):  
N. A. Starasotnikau ◽  
R. V. Feodortsau

Accuracy in determining the coordinates of simple-shape images is one of the important parameters in metrological optoelectronic systems such as autocollimators, star sensors, Shack-Hartmann sensors, schemes for geometric calibration of digital cameras for aerial and space imagery, and various tracking systems. The paper describes a mathematical model of a measuring stand based on a collimator which projects a test-object onto the photodetector of an optoelectronic device. The mathematical model takes into account the characteristic noises of photodetectors: shot noise of the desired (photon) signal, shot noise of the dark signal, readout noise, and the spatial non-uniformity of the CCD (charge-coupled device) matrix elements. To reduce the noise effect, it is proposed to apply a Wiener filter for smoothing the image and ensuring its unambiguous identification, and also to introduce a brightness threshold. The paper compares two algorithms for determining coordinates: by the energy gravity center and by the contour. Sobel, Prewitt, Roberts, Laplacian-of-Gaussian and Canny detectors were used to determine the test-object contour. The essence of the contour algorithm lies in searching for an image contour in the form of a circle, with subsequent approximation and determination of the image center. Error calculations were performed for the gravity-center coordinates of test-objects of various diameters (5, 10, 20, 30, 40 and 50 photodetector pixels) and signal-to-noise ratio values (200, 100, 70, 20 and 10), where the signal-to-noise ratio is the difference between the maximum image intensity of the test-object and the background, divided by the root-mean-square deviation of the background. The accuracy of coordinate determination improved by 0.5-1 order of magnitude as the signal-to-noise ratio increased. Accuracy improvement with increasing test-object diameter is typical only for large signal-to-noise ratios (70 or more). The conducted investigations establish that the energy-gravity-center algorithm is more accurate than the contour methods and requires less computing power (in the MATLAB software package), which is related to the discreteness of contour determination.
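A minimal sketch of the energy-gravity-center algorithm described above: Wiener smoothing and a brightness threshold, followed by the intensity-weighted centroid. The 5x5 filter window, the relative threshold, and the synthetic spot are assumptions, not the paper's exact parameters.

```python
import numpy as np
from scipy.signal import wiener

def spot_centroid(image, rel_threshold=0.2):
    """Energy gravity center of a bright spot: Wiener-smooth the image,
    apply a brightness threshold, then take the intensity-weighted mean
    of the pixel coordinates."""
    smoothed = wiener(image, mysize=5)
    spot = np.where(smoothed > rel_threshold * smoothed.max(), smoothed, 0.0)
    ys, xs = np.indices(spot.shape)
    return (xs * spot).sum() / spot.sum(), (ys * spot).sum() / spot.sum()

# Illustrative test-object: a Gaussian spot centered at (30.3, 25.7) plus noise.
rng = np.random.default_rng(3)
yy, xx = np.indices((64, 64))
img = np.exp(-((xx - 30.3) ** 2 + (yy - 25.7) ** 2) / (2 * 4.0 ** 2))
img += rng.normal(0.0, 0.05, img.shape)
print(spot_centroid(img))   # should be close to (30.3, 25.7)
```

Because the centroid is a weighted average over many pixels, it attains sub-pixel precision without the discretization error inherent in contour extraction.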


2014 ◽  
Vol 57 (4) ◽  
pp. 1512-1520 ◽  
Author(s):  
Michelle Mason ◽  
Kostas Kokkinakis

Purpose The purpose of this study was to evaluate the contribution of a contralateral hearing aid to the perception of consonants, in terms of voicing, manner, and place-of-articulation cues in reverberation and noise by adult cochlear implantees aided by bimodal fittings. Method Eight postlingually deafened adult cochlear implant (CI) listeners with a fully inserted CI in 1 ear and low-frequency hearing in the other ear were tested on consonant perception. They were presented with consonant stimuli processed in the following experimental conditions: 1 quiet condition, 2 different reverberation times (0.3 s and 1.0 s), and the combination of 2 reverberation times with a single signal-to-noise ratio (5 dB). Results Consonant perception improved significantly when listening in combination with a contralateral hearing aid as opposed to listening with a CI alone in 0.3 s and 1.0 s of reverberation. Significantly higher scores were also noted when noise was added to 0.3 s of reverberation. Conclusions A considerable benefit was noted from the additional acoustic information in conditions of reverberation and reverberation plus noise. The bimodal benefit observed was more pronounced for voicing and manner of articulation than for place of articulation.

