noise injection
Recently Published Documents

Total documents: 170 (five years: 46)
H-index: 16 (five years: 4)
Author(s):  
Sai Kiran Cherupally ◽  
Jian Meng ◽  
Adnan Siraj Rakin ◽  
Shihui Yin ◽  
Injune Yeo ◽  
...  

Abstract: We present a novel deep neural network (DNN) training scheme and an RRAM in-memory computing (IMC) hardware evaluation aimed at achieving high robustness to RRAM device/array variations and adversarial input attacks. We present improved IMC inference accuracy results evaluated on state-of-the-art DNNs, including ResNet-18, AlexNet, and VGG with binary, 2-bit, and 4-bit activation/weight precision, on the CIFAR-10 dataset. These DNNs are evaluated with measured noise data obtained from three different RRAM-based IMC prototype chips. Across these DNNs and IMC chip measurements, we show that the proposed hardware noise-aware DNN training consistently improves DNN inference accuracy on actual IMC hardware, by up to 8% on CIFAR-10. We also analyze the impact of the proposed noise injection scheme on the adversarial robustness of ResNet-18 DNNs with 1-bit, 2-bit, and 4-bit activation/weight precision. Our results show up to 6% improvement in robustness to black-box adversarial input attacks.
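The abstract does not give implementation details of the noise-aware training itself. Purely as an illustration of the general idea, the PyTorch sketch below perturbs a convolution's weights with Gaussian noise during training; the noise level, the Gaussian model, and the layer shown are assumptions rather than the authors' setup (in the paper the noise statistics come from measured RRAM chips).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyConv2d(nn.Conv2d):
    """Convolution whose weights are perturbed with random noise during
    training, as a stand-in for measured RRAM/IMC hardware noise."""

    def __init__(self, *args, noise_sigma=0.05, **kwargs):
        super().__init__(*args, **kwargs)
        self.noise_sigma = noise_sigma  # assumed relative noise level

    def forward(self, x):
        weight = self.weight
        if self.training and self.noise_sigma > 0:
            # Fresh Gaussian perturbation on every forward pass, scaled by
            # the weight range so the relative noise level stays comparable.
            w_range = weight.detach().abs().max()
            weight = weight + torch.randn_like(weight) * self.noise_sigma * w_range
        return F.conv2d(x, weight, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)

# Drop-in replacement for a standard convolution in, e.g., a ResNet block.
layer = NoisyConv2d(3, 64, kernel_size=3, padding=1, noise_sigma=0.05)
out = layer(torch.randn(8, 3, 32, 32))
```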


Electronics ◽  
2021 ◽  
Vol 10 (22) ◽  
pp. 2773
Author(s):  
Moo-Yeol Choi ◽  
Bai-Sun Kong

A linearity enhancement scheme for voltage-controlled oscillator (VCO)-based continuous-time (CT) delta-sigma (ΔΣ) analog-to-digital converters (ADCs) is proposed. Unlike conventional input feedforwarding techniques, the proposed feedforwarding scheme using digital feedback residue quantization (DFRQ) avoids the analog summing amplifier, preserves the intrinsic anti-aliasing filtering (AAF) characteristic, and injects no switching noise into the input. A VCO-based CT ΔΣ ADC adopting the proposed DFRQ processes only the residue in the quantizer, avoiding the degradation of the signal-to-noise-and-distortion ratio (SNDR) caused by VCO nonlinearity. DFRQ also reduces the voltage swing of the integrators without the drawbacks of conventional input feedforwarding techniques. Performance evaluation results indicate that the proposed VCO-based CT ΔΣ ADC with DFRQ provides a 30.3-dB SNDR improvement, reaching 83.5 dB in a 2-MHz signal bandwidth.
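The reported figure of merit is SNDR. The circuit itself cannot be reproduced here, but as a generic illustration of how SNDR is typically estimated from a simulated converter output, the sketch below compares in-band signal power against in-band noise-plus-distortion power in the output spectrum. The sample rate, test tone, noise level, and bandwidth are placeholder values, not those of the reported design.

```python
import numpy as np

def estimate_sndr(output, fs, f_sig, bw):
    """Estimate SNDR (dB): in-band signal power divided by all remaining
    in-band power (noise + distortion)."""
    n = len(output)
    window = np.hanning(n)
    spectrum = np.abs(np.fft.rfft(output * window)) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    in_band = freqs <= bw
    # Treat a few bins around the test tone as "signal"; the rest of the
    # in-band power counts as noise + distortion.
    sig_bins = np.abs(freqs - f_sig) <= 4 * fs / n
    p_signal = spectrum[in_band & sig_bins].sum()
    p_noise = spectrum[in_band & ~sig_bins].sum()
    return 10 * np.log10(p_signal / p_noise)

# Placeholder example: a sine wave plus white noise, 2-MHz band of interest.
fs, f_sig, bw = 256e6, 0.5e6, 2e6
t = np.arange(1 << 16) / fs
x = np.sin(2 * np.pi * f_sig * t) + 1e-4 * np.random.randn(t.size)
print(f"SNDR ~ {estimate_sndr(x, fs, f_sig, bw):.1f} dB")
```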


2021 ◽  
Author(s):  
George Zhou ◽  
Yunchan Chen ◽  
Candace Chien

Abstract
Background: The application of machine learning to cardiac auscultation has the potential to improve the accuracy and efficiency of both routine and point-of-care screenings. The use of convolutional neural networks (CNNs) on heart sound spectrograms in particular has defined state-of-the-art performance. However, the relative paucity of patient data remains a significant barrier to creating models that can adapt to the wide range of between-subject variability. To that end, we examined a CNN model's performance on automated heart sound classification before and after various forms of data augmentation, aiming to identify the best augmentation methods for cardiac spectrogram analysis.
Results: We built a standard CNN model to classify cardiac sound recordings as either normal or abnormal. The baseline control model achieved an ROC AUC of 0.945±0.016. Among the data augmentation techniques explored, horizontal flipping of the spectrogram image improved model performance the most, with an ROC AUC of 0.957±0.009. Principal component analysis (PCA) color augmentation and perturbation of the saturation and value (SV) channels of the hue-saturation-value (HSV) color scale achieved ROC AUCs of 0.949±0.014 and 0.946±0.019, respectively. Time and frequency masking resulted in an ROC AUC of 0.948±0.012. Pitch shifting, time stretching and compressing, noise injection, vertical flipping, and random color filters all degraded model performance.
Conclusion: Data augmentation can improve classification accuracy by expanding and diversifying the dataset, which protects against overfitting to random variance. However, data augmentation is necessarily domain specific. For example, methods such as noise injection have found success in other areas of automated sound classification, but in cardiac sound analysis, injected noise can mimic the presence of murmurs and worsen model performance. Care should therefore be taken to choose clinically appropriate forms of data augmentation.
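As a rough illustration of the two best-performing augmentations described above, horizontal flipping and time/frequency masking, the NumPy sketch below operates on a spectrogram array of shape (frequency bins, time frames). The mask widths and array sizes are arbitrary choices, not the settings used in the study.

```python
import numpy as np

def horizontal_flip(spec):
    """Flip the spectrogram along the time axis (the best-performing
    augmentation reported above)."""
    return spec[:, ::-1]

def time_freq_mask(spec, max_t=20, max_f=10, rng=None):
    """Zero out one random time band and one random frequency band
    (SpecAugment-style masking); the widths here are arbitrary choices."""
    rng = rng or np.random.default_rng()
    spec = spec.copy()
    n_freq, n_time = spec.shape
    t0 = rng.integers(0, n_time - max_t)
    f0 = rng.integers(0, n_freq - max_f)
    spec[:, t0:t0 + rng.integers(1, max_t)] = 0.0
    spec[f0:f0 + rng.integers(1, max_f), :] = 0.0
    return spec

# Example on a dummy spectrogram (frequency bins x time frames).
spec = np.random.rand(128, 256)
augmented = time_freq_mask(horizontal_flip(spec))
```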


Mathematics ◽  
2021 ◽  
Vol 9 (17) ◽  
pp. 2144
Author(s):  
Chaim Baskin ◽  
Evgenii Zheltonozhkii ◽  
Tal Rozen ◽  
Natan Liss ◽  
Yoav Chai ◽  
...  

Convolutional Neural Networks (CNNs) are very popular in many fields, including computer vision, speech recognition, and natural language processing. Though deep learning achieves groundbreaking performance in these domains, the networks are very computationally demanding and far from real-time even on a GPU, which is not power efficient and therefore does not suit low-power systems such as mobile devices. To overcome this challenge, solutions have been proposed for quantizing the weights and activations of these networks, which accelerates the runtime significantly. Yet this acceleration comes at the cost of a larger error unless spatial adjustments are carried out. The method proposed in this work trains quantized neural networks with noise injection and a learned clamping, which improve accuracy. This leads to state-of-the-art results on various regression and classification tasks, e.g., ImageNet classification with architectures such as ResNet-18/34/50 using weights and activations as low as 3 bits. We implement the proposed solution on an FPGA to demonstrate its applicability for low-power real-time applications. The quantization code will become publicly available upon acceptance.
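The abstract names the two ingredients, noise injection and a learned clamping. The PyTorch sketch below shows one common way such a scheme can be realized: uniform quantization within a learnable clamping range, a straight-through estimator for the rounding, and uniform noise used as a surrogate for quantization error during training. The bit-width, initialization, and class name are illustrative assumptions, not the published code.

```python
import torch
import torch.nn as nn

class LearnedClampQuant(nn.Module):
    """Uniform quantizer over [-c, c] with a learnable clamp value c.
    Rounding uses a straight-through estimator; optional uniform noise of
    half a quantization step stands in for quantization error."""

    def __init__(self, bits=3, init_clamp=1.0):
        super().__init__()
        self.bits = bits
        self.clamp_val = nn.Parameter(torch.tensor(init_clamp))

    def forward(self, w, inject_noise=False):
        c = self.clamp_val.abs()
        step = 2 * c / (2 ** self.bits - 1)
        w_c = torch.min(torch.max(w, -c), c)          # clamp to [-c, c]
        if inject_noise and self.training:
            # Uniform noise of +/- half a step, injected only at training time.
            w_c = w_c + (torch.rand_like(w_c) - 0.5) * step
        w_q = torch.round(w_c / step) * step           # quantize
        return w_c + (w_q - w_c).detach()              # straight-through

quant = LearnedClampQuant(bits=3)
w = torch.randn(64, 64, requires_grad=True)
w_q = quant(w, inject_noise=True)
w_q.sum().backward()   # gradients reach both w and the clamp parameter
```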


Author(s):  
Rolf Hoffmann ◽  
Dominique Désérable ◽  
Franciszek Seredyński

Abstract
The objective is to demonstrate that a probabilistic cellular automata rule can reliably place a maximal number of dominoes in active areas of different shapes, evaluated here for the square and the diamond as examples. The basic rule forms domino patterns, but the number of dominoes is not necessarily maximal and the patterns are not always stable. It works with templates derived from domino tiles. The first proposed enhancement (Rule Option 1) always forms stable patterns. The second enhancement (Rule Option 2) can maximize the number of dominoes, but the resulting patterns are not always stable. All rules drive the evolution by a specific form of noise injection.
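The actual rule works with templates derived from domino tiles, which the abstract does not spell out. As a much-simplified, generic illustration of driving a cellular automaton's evolution by noise injection, the toy sketch below updates a binary grid with a deterministic local rule and then randomly flips a small fraction of cells, which keeps the evolution from freezing in a sub-optimal configuration. The grid size, majority rule, and noise rate are illustrative only and bear no relation to the domino templates.

```python
import numpy as np

def step(grid, p_noise=0.02, rng=None):
    """One synchronous update of a toy binary CA: each cell takes the
    majority value of its 4-neighbourhood, then a small fraction of cells
    is flipped at random (the noise injection that drives the evolution)."""
    rng = rng or np.random.default_rng()
    up    = np.roll(grid,  1, axis=0)
    down  = np.roll(grid, -1, axis=0)
    left  = np.roll(grid,  1, axis=1)
    right = np.roll(grid, -1, axis=1)
    new = ((up + down + left + right) >= 2).astype(grid.dtype)
    flip = rng.random(grid.shape) < p_noise
    new[flip] = 1 - new[flip]
    return new

grid = np.random.default_rng(0).integers(0, 2, size=(32, 32))
for _ in range(100):
    grid = step(grid)
```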


Geophysics ◽  
2021 ◽  
pp. 1-43
Author(s):  
Chao Zhang ◽  
Mirko van der Baan

Neural networks hold substantial promise to automate various processing and interpretation tasks. Yet their performance is often sub-optimal compared with standard but more closely guided approaches. The lack of performance is often attributed to poor generalization, in particular if fewer training examples are provided than free parameters exist in the machine learning algorithm. In this case the algorithm typically memorizes the training data instead of learning the underlying general trends. Network generalization is improved if the provided samples are representative, in that they describe all features of interest well. We argue that a more subtle condition for good performance is that the provided examples must also be complete: they must span the full solution space. Ensuring completeness during training is challenging unless the target application is well understood. We illustrate that one possible solution is to make the problem more general if this greatly increases the number of available training data. For instance, if seismic images are treated as a subclass of natural images, then a deep-learning-based denoiser for seismic data can be trained using exclusively natural images, which are widely available. The resulting denoising algorithm has never seen any seismic data during the training stage, yet it displays performance comparable to standard and advanced random-noise reduction methods. We exclude all seismic data during training to demonstrate that natural images are both complete and representative for this specific task. Furthermore, we apply a novel approach known as double noise injection to increase the amount of training data: both the input and the output images are noisy during training. Given the importance of network generalization, we hope that the insights gained in this study help improve the performance of a range of machine learning applications in geophysics.
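The abstract describes double noise injection as providing both noisy input and noisy output images during training. As a minimal sketch of that data-preparation step, the code below turns each clean natural-image patch into several training pairs by adding independent noise to the input and to the target. The Gaussian noise model, the sigma values, and the patch source are placeholder assumptions, not the authors' recipe.

```python
import numpy as np

def make_training_pair(clean, sigma_in=0.1, sigma_out=0.05, rng=None):
    """Build one (input, target) pair under the double-noise-injection idea:
    both the network input and the training target receive independent noise,
    so each clean image yields many distinct training pairs."""
    rng = rng or np.random.default_rng()
    noisy_input = clean + sigma_in * rng.standard_normal(clean.shape)
    noisy_target = clean + sigma_out * rng.standard_normal(clean.shape)
    return noisy_input.astype(np.float32), noisy_target.astype(np.float32)

# Example: expand a small stack of (hypothetical) natural-image patches
# into a larger set of noisy training pairs.
patches = np.random.rand(16, 64, 64)          # stand-in for natural images
pairs = [make_training_pair(p) for p in patches for _ in range(4)]
```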


Author(s):  
Baptiste Roziere ◽  
Nathanael Carraz Rakotonirina ◽  
Vlad Hosu ◽  
Andry Rasoanaivo ◽  
Hanhe Lin ◽  
...  
