Near-channel classifier: symbiotic communication and classification in high-dimensional space

2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Michael Hersche ◽  
Stefan Lippuner ◽  
Matthias Korb ◽  
Luca Benini ◽  
Abbas Rahimi

Abstract: Brain-inspired high-dimensional (HD) computing represents and manipulates data using very long, random vectors with dimensionality in the thousands. This representation provides great robustness for various classification tasks where classifiers operate at low signal-to-noise ratio (SNR) conditions. Similarly, hyperdimensional modulation (HDM) leverages the robustness of complex-valued HD representations to reliably transmit information over a wireless channel, achieving an SNR gain similar to state-of-the-art codes. Here, we first propose methods to improve HDM in two ways: (1) reducing the complexity of encoding and decoding operations by generating, manipulating, and transmitting bipolar or integer vectors instead of complex vectors; (2) increasing the SNR gain by 0.2 dB using a new soft-feedback decoder, which can also increase the additive superposition capacity of HD vectors by up to 1.7× in noise-free cases. Secondly, we propose to combine the encoding/decoding aspects of communication with classification into a single framework by relying on multifaceted HD representations. This leads to a near-channel classification (NCC) approach that avoids transformations between different representations and the overhead of multiple layers of encoding/decoding, hence reducing the latency and complexity of a wireless smart distributed system while providing robustness against noise and interference from other nodes. We provide a use-case for wearable hand gesture recognition with 5 classes from 64 EMG sensors, where the encoded vectors are transmitted to a remote node for either performing NCC or reconstruction of the encoded data. In NCC mode, the original classification accuracy of 94% is maintained, even over a channel at an SNR of 0 dB, by transmitting 10,000-bit vectors. We remove the redundancy by reducing the vector dimensionality to 2048 bits, which still exhibits graceful degradation: less than 6% accuracy loss occurs over a channel at −5 dB and with interference from 6 nodes that simultaneously transmit their encoded vectors. In reconstruction mode, the approach improves the mean-squared error by up to 20 dB compared to standard decoding when transmitting 2048-dimensional vectors.
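To make the HD-computing ideas above concrete, the following is a minimal sketch of bipolar encoding, additive superposition (bundling), and nearest-neighbour classification; the dimensionality, codebooks, and noise model are illustrative assumptions, not the authors' NCC implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 2048  # vector dimensionality (the abstract also uses 10,000-bit vectors)

# Random bipolar (+1/-1) codebooks for symbols and positions (illustrative).
item_memory = {s: rng.choice([-1, 1], size=D) for s in range(16)}
position_keys = [rng.choice([-1, 1], size=D) for _ in range(8)]

def encode(sequence):
    """Bind each symbol to its position key (element-wise product) and
    superpose (add) the bound vectors into one HD vector."""
    acc = np.zeros(D)
    for pos, sym in enumerate(sequence):
        acc += item_memory[sym] * position_keys[pos]
    return np.sign(acc)  # bipolarize the superposition

def classify(query, prototypes):
    """Nearest-neighbour decoding: pick the prototype with the highest
    normalized dot-product similarity to the (possibly noisy) query."""
    sims = {c: float(query @ p) / D for c, p in prototypes.items()}
    return max(sims, key=sims.get)

# Toy usage: two class prototypes; 20% of the query's components are flipped
# to mimic channel noise, yet nearest-neighbour decoding still recovers "A".
prototypes = {"A": encode([1, 2, 3]), "B": encode([4, 5, 6])}
noisy = np.where(rng.random(D) < 0.2, -prototypes["A"], prototypes["A"])
print(classify(noisy, prototypes))
```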

2014 ◽  
Vol 2 (2) ◽  
pp. 47-58
Author(s):  
Ismail Sh. Baqer

A two-level image quality enhancement is proposed in this paper. In the first level, the Dualistic Sub-Image Histogram Equalization (DSIHE) method decomposes the original image into two sub-images based on the median of the original image. The second level deals with spike-shaped noise that may appear in the image after processing. We present three image enhancement methods, GHE, LHE, and the proposed DSIHE, that improve the visual quality of images. A comparative study is carried out on the above-mentioned techniques to examine objective and subjective image quality parameters, e.g., Peak Signal-to-Noise Ratio (PSNR), entropy (H), and Mean Squared Error (MSE), to measure the quality of enhanced grayscale images. For gray-level images, conventional histogram equalization methods such as GHE and LHE tend to shift the mean brightness of an image to the middle of the gray-level range, limiting their appropriateness for contrast enhancement in consumer electronics such as TV monitors. The DSIHE method overcomes this disadvantage, as it tends to preserve brightness while still enhancing contrast. Experimental results show that the proposed technique gives better results in terms of discrete entropy, signal-to-noise ratio, and mean squared error than the global and local histogram-based equalization methods.
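As a rough illustration of the DSIHE idea described above (split at the median gray level, equalize each sub-image separately) together with the discrete-entropy metric, here is a minimal sketch; the exact output ranges and any post-processing for spike-shaped noise in the paper may differ.

```python
import numpy as np

def _sub_equalize(vals, lo, hi):
    """Equalize one sub-image's pixel values onto the output range [lo, hi]."""
    hist = np.bincount(vals, minlength=256).astype(float)
    cdf = np.cumsum(hist) / len(vals)
    mapping = lo + np.round(cdf * (hi - lo)).astype(np.int64)
    return mapping[vals]

def dsihe(img):
    """Minimal sketch of DSIHE: split the image at its median gray level,
    equalize the two sub-images independently, and recombine. Keeping the
    median near the middle of the output range is what lets DSIHE preserve
    mean brightness better than global HE."""
    m = int(np.median(img))
    out = np.empty_like(img, dtype=np.int64)
    low, high = img <= m, img > m
    out[low] = _sub_equalize(img[low], 0, m)
    out[high] = _sub_equalize(img[high], min(m + 1, 255), 255)
    return out.astype(np.uint8)

def entropy(img):
    """Discrete (Shannon) entropy of the gray-level histogram, in bits."""
    p = np.bincount(img.ravel(), minlength=256) / img.size
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Toy usage on a synthetic low-contrast image.
rng = np.random.default_rng(1)
img = rng.integers(0, 100, size=(64, 64), dtype=np.uint8)
enhanced = dsihe(img)
print(entropy(img), entropy(enhanced))
```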


2016 ◽  
Vol 5 (2) ◽  
pp. 73-86
Author(s):  
Suma K. V. ◽  
Bheemsain Rao

A reduction in the capillary density in the nailfold region is frequently observed in patients suffering from hypertension (Feng J, 2010). Loss of capillaries results in avascular regions, which have been well characterized in many diseases (Mariusz, 2009). Nailfold capillary images need to be pre-processed so that noise can be removed, the background can be separated, and the useful parameters can be computed using image processing algorithms. Smoothing filters such as Gaussian, median, and adaptive median filters are compared using Mean Squared Error and Peak Signal-to-Noise Ratio. Otsu's thresholding is employed for segmentation. A Connected Component Labeling algorithm is applied to calculate the number of capillaries per mm. This capillary density is used to identify rarefaction of capillaries and also the severity of the rarefaction. Avascular regions are detected by determining the distance between the peaks of the capillaries using the Euclidean distance. Detection of rarefaction of capillaries and avascular regions can be used as a diagnostic tool for hypertension and various other diseases.
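A rough sketch of the pre-processing and counting pipeline described above, using common SciPy/scikit-image routines; the filter sizes, the `mm_per_pixel` calibration, and the density definition are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, measure

def capillary_density(gray, mm_per_pixel):
    """Rough sketch: smooth, threshold with Otsu, label connected
    components, and report capillaries per millimetre along the image
    width. Parameters are illustrative, not the paper's settings."""
    smoothed = ndi.median_filter(gray, size=3)       # suppress impulsive noise
    thresh = filters.threshold_otsu(smoothed)        # global Otsu threshold
    binary = smoothed > thresh                       # foreground = capillaries
    labels = measure.label(binary, connectivity=2)   # connected component labeling
    n_capillaries = labels.max()
    width_mm = gray.shape[1] * mm_per_pixel
    return n_capillaries / width_mm

# Toy usage with a synthetic image of bright blobs on a dark background.
rng = np.random.default_rng(2)
spikes = (rng.random((128, 128)) < 0.001).astype(float)
synthetic = (ndi.gaussian_filter(spikes, sigma=2) * 255).astype(np.uint8)
print(capillary_density(synthetic, mm_per_pixel=0.01))
```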


2015 ◽  
Vol 8 (4) ◽  
pp. 32
Author(s):  
Sabarish Sridhar

Steganography, watermarking, and encryption are widely used in image processing and communication. A general practice is to use them independently or in combinations of two, e.g., data hiding with encryption, or steganography alone. This paper aims to combine the features of watermarking, image encryption, and image steganography to provide reliable and secure data transmission. The basics of data hiding and encryption are explained. The first step involves inserting the required watermark into the image at the optimum bit plane. The second step is to encrypt the image using RSA. The final step involves obtaining a cover image and hiding the encrypted image within this cover image. A set of metrics is used to evaluate the effectiveness of the digital watermarking; the list includes Mean Squared Error, Peak Signal-to-Noise Ratio, and Feature Similarity.
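As an illustration of the bit-plane data-hiding step and the PSNR metric mentioned above, here is a minimal LSB-style embedding sketch; the watermark placement, RSA encryption, and cover-image hiding stages of the paper are not reproduced here.

```python
import numpy as np

def embed_bitplane(cover, bits, plane=0):
    """Sketch of bit-plane embedding: clear the chosen bit plane of the
    cover image and write the payload bits into it. plane=0 is the least
    significant bit, which perturbs pixel values the least."""
    flat = cover.ravel().copy()
    payload = np.asarray(bits, dtype=np.uint8)[: flat.size]
    flat[: payload.size] &= np.uint8(~(1 << plane) & 0xFF)   # clear target bit
    flat[: payload.size] |= (payload << plane).astype(np.uint8)
    return flat.reshape(cover.shape)

def psnr(original, modified):
    """Peak Signal-to-Noise Ratio (dB) between two 8-bit images."""
    mse = np.mean((original.astype(float) - modified.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

# Toy usage: hide a random bit string in the LSB plane and measure distortion.
rng = np.random.default_rng(3)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
stego = embed_bitplane(cover, rng.integers(0, 2, size=1024))
print(psnr(cover, stego))   # typically above 50 dB for LSB embedding
```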


Author(s):  
SONALI R. MAHAKALE ◽  
NILESHSINGH V. THAKUR

This paper presents a comparative study of research work in the field of image filtering. Different types of noise affect an image in different ways. Although various denoising solutions are available, a detailed study of the existing research is required in order to design a filter that fulfills the desired aspects while handling most image filtering issues. An output image should be judged on the basis of image quality metrics, e.g., Peak Signal-to-Noise Ratio (PSNR), Mean Squared Error (MSE), and Mean Absolute Error (MAE), as well as execution time.
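The three quality metrics named above can be computed as follows; this is a generic sketch (execution-time measurement is omitted) rather than any specific filter evaluation from the surveyed work.

```python
import numpy as np

def quality_metrics(reference, filtered, peak=255.0):
    """Compute the three metrics mentioned above between a reference image
    and a filtered image (both treated as 8-bit grayscale)."""
    ref = reference.astype(float)
    out = filtered.astype(float)
    mse = np.mean((ref - out) ** 2)                    # Mean Squared Error
    mae = np.mean(np.abs(ref - out))                   # Mean Absolute Error
    psnr = float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)
    return {"MSE": mse, "MAE": mae, "PSNR_dB": psnr}

# Toy usage: compare a noisy image against the clean reference.
rng = np.random.default_rng(4)
clean = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noisy = np.clip(clean + rng.normal(0, 10, clean.shape), 0, 255).astype(np.uint8)
print(quality_metrics(clean, noisy))
```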


Author(s):  
Calvin Omind Munna

Currently, there is a growing amount of data produced and stored in clinical domains. Therefore, to deal effectively with massive data sets, a fusion methodology needs to be analyzed with its algorithmic complexity in mind. To effectively reduce the volume of image content, and hence the capacity needed to store and communicate data in optimal form, an image processing methodology has to be involved. In this research, two compression methodologies, lossy compression and lossless compression, were utilized to compress images while maintaining image quality. Also, a number of sophisticated approaches to enhance the quality of the fused images were applied. The methodologies were assessed and various fusion findings are presented. Lastly, performance parameters were obtained and evaluated against sophisticated approaches. The Structural Similarity Index Metric (SSIM), Mean Squared Error (MSE), and Peak Signal-to-Noise Ratio (PSNR) are the metrics that were utilized on the sample clinical images. Critical analysis of the measurement parameters shows higher efficiency compared to numerous image processing methods. This research provides an understanding of these approaches and enables scientists to choose effective methodologies for a particular application.
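For reference, the three metrics listed above can be computed with scikit-image; the crude quantization step below merely stands in for a real lossy codec and is not the fusion or compression methodology of this research.

```python
import numpy as np
from skimage.metrics import (mean_squared_error,
                             peak_signal_noise_ratio,
                             structural_similarity)

# Toy example: evaluate a "compressed" image against its original using the
# three metrics named above. The quantization is purely illustrative.
rng = np.random.default_rng(5)
original = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
compressed = ((original // 16) * 16).astype(np.uint8)   # crude lossy stand-in

print("MSE :", mean_squared_error(original, compressed))
print("PSNR:", peak_signal_noise_ratio(original, compressed, data_range=255), "dB")
print("SSIM:", structural_similarity(original, compressed, data_range=255))
```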


Author(s):  
Elie Fute Tagne ◽  
Hugues Marie Kamdjou ◽  
Alain Bertrand Bomgni ◽  
Armand Nzeukou

The expansion of sensitive data deriving from a variety of applications has created the need to transmit and/or archive it with increased performance in terms of quality, transmission delay, or storage volume. However, lossy compression techniques are almost unacceptable in application fields where the data does not tolerate alterations, because the loss of crucial information can distort the analysis. This paper introduces MediCompress, a lightweight lossless data compression approach for irretrievable data such as that from the medical or astronomy fields. The proposed approach is based on entropic arithmetic coding, run-length encoding, the Burrows-Wheeler transform, and move-to-front encoding. The results obtained on medical images show an interesting compression ratio (CR) in comparison with the lossless compressor SPIHT, and a better Peak Signal-to-Noise Ratio (PSNR) and Mean Squared Error (MSE) than SPIHT and JPEG2000.
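To illustrate three of the four stages named above, here is a naive sketch of the Burrows-Wheeler transform, move-to-front encoding, and run-length encoding on a byte string; the arithmetic-coding stage and MediCompress's actual implementation details are omitted.

```python
def bwt(data: bytes, sentinel: bytes = b"\x00") -> bytes:
    """Naive Burrows-Wheeler transform: sort all rotations and take the
    last column. A sentinel byte (assumed absent from the input) marks the
    original rotation so the transform stays invertible."""
    s = data + sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return bytes(rot[-1] for rot in rotations)

def move_to_front(data: bytes) -> list:
    """Move-to-front encoding: emit each byte's current position in a
    recency list, then move it to the front. Runs of equal bytes after
    BWT become runs of zeros."""
    alphabet = list(range(256))
    out = []
    for b in data:
        i = alphabet.index(b)
        out.append(i)
        alphabet.insert(0, alphabet.pop(i))
    return out

def run_length_encode(symbols: list) -> list:
    """Run-length encoding as (symbol, count) pairs."""
    out = []
    for s in symbols:
        if out and out[-1][0] == s:
            out[-1] = (s, out[-1][1] + 1)
        else:
            out.append((s, 1))
    return out

# Toy usage: a repetitive input yields long zero runs after BWT + MTF, which
# RLE (and, in the full pipeline, arithmetic coding) then compacts.
payload = b"abracadabra" * 4
print(run_length_encode(move_to_front(bwt(payload))))
```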


Biometrika ◽  
2020 ◽  
Author(s):  
D C Ahfock ◽  
W J Astle ◽  
S Richardson

Summary: Sketching is a probabilistic data compression technique that has been largely developed by the computer science community. Numerical operations on big datasets can be intolerably slow; sketching algorithms address this issue by generating a smaller surrogate dataset. Typically, inference proceeds on the compressed dataset. Sketching algorithms generally use random projections to compress the original dataset, and this stochastic generation process makes them amenable to statistical analysis. We argue that the sketched data can be modelled as a random sample, thus placing this family of data compression methods firmly within an inferential framework. In particular, we focus on the Gaussian, Hadamard and Clarkson–Woodruff sketches and their use in single-pass sketching algorithms for linear regression with huge samples. We explore the statistical properties of sketched regression algorithms and derive new distributional results for a large class of sketching estimators. A key result is a conditional central limit theorem for data-oblivious sketches. An important finding is that the best choice of sketching algorithm in terms of mean squared error is related to the signal-to-noise ratio in the source dataset. Finally, we demonstrate the theory and the limits of its applicability on two datasets.
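A minimal sketch of the Gaussian sketch-and-solve estimator for linear regression; the sketch size k, data dimensions, and noise level are illustrative, and the Hadamard and Clarkson–Woodruff variants discussed in the paper are not shown.

```python
import numpy as np

def gaussian_sketch_ols(X, y, k, rng):
    """Gaussian sketch-and-solve: compress (X, y) with a k x n projection S
    whose entries are i.i.d. N(0, 1/k), then fit ordinary least squares on
    the sketched data. The sketch size k trades computation against
    statistical efficiency."""
    n = X.shape[0]
    S = rng.normal(0.0, 1.0 / np.sqrt(k), size=(k, n))
    beta, *_ = np.linalg.lstsq(S @ X, S @ y, rcond=None)
    return beta

# Toy usage: 10,000 rows compressed to a 500-row surrogate before regression.
rng = np.random.default_rng(6)
n, p, k = 10_000, 10, 500
X = rng.normal(size=(n, p))
beta_true = rng.normal(size=p)
y = X @ beta_true + rng.normal(size=n)       # the noise level sets the SNR
beta_full = np.linalg.lstsq(X, y, rcond=None)[0]
beta_sketch = gaussian_sketch_ols(X, y, k, rng)
print(np.linalg.norm(beta_full - beta_true), np.linalg.norm(beta_sketch - beta_true))
```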


Author(s):  
Monirosharieh Vameghestahbanati ◽  
Hasan S. Mir ◽  
Mohamed El-Tarhuni

In this paper, the authors propose a framework that allows an overlay (new) system to operate simultaneously with a legacy (existing) system. By jointly optimizing the transmitter and receiver filters of the overlay system, the sum of the mean-squared error (MSE) of the new system and the excess MSE introduced into the existing system by the overlay system is minimized. The effects of varying key parameters, such as the overlay transmitter power and the amount of overlap between the legacy and overlay systems, are investigated. Furthermore, the sensitivity of the system to the accuracy of the signal-to-noise ratio (SNR) estimate and the channel estimate is also examined.
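The following is not the authors' joint transmitter/receiver optimization; it is only a standard linear MMSE receiver sketch that illustrates why the accuracy of the SNR estimate matters: designing the filter for a mismatched SNR raises the mean-squared error.

```python
import numpy as np

def mmse_receiver(H, snr_linear):
    """Linear MMSE (Wiener) receive filter for y = H x + n with unit-power
    symbols and noise variance 1/snr: W = H^T (H H^T + I/snr)^{-1}."""
    m = H.shape[0]
    return H.T @ np.linalg.inv(H @ H.T + np.eye(m) / snr_linear)

def empirical_mse(H, snr_true_db, snr_assumed_db, n_sym=50_000, seed=7):
    """Monte-Carlo MSE when the receiver is designed for an assumed SNR
    while the channel actually operates at the true SNR."""
    rng = np.random.default_rng(seed)
    snr_true = 10 ** (snr_true_db / 10)
    W = mmse_receiver(H, 10 ** (snr_assumed_db / 10))
    x = rng.choice([-1.0, 1.0], size=(H.shape[1], n_sym))              # BPSK symbols
    noise = rng.normal(scale=np.sqrt(1 / snr_true), size=(H.shape[0], n_sym))
    x_hat = W @ (H @ x + noise)
    return float(np.mean((x_hat - x) ** 2))

# Toy usage: a 4x4 random channel; mismatch in the assumed SNR raises the MSE.
rng = np.random.default_rng(7)
H = rng.normal(size=(4, 4)) / 2
print(empirical_mse(H, snr_true_db=10, snr_assumed_db=10))   # matched design
print(empirical_mse(H, snr_true_db=10, snr_assumed_db=-5))   # mismatched design
```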


Electronics ◽  
2020 ◽  
Vol 9 (5) ◽  
pp. 866
Author(s):  
Farzad Mohaddes ◽  
Rafael da Silva ◽  
Fatma Akbulut ◽  
Yilu Zhou ◽  
Akhilesh Tanneeru ◽  
...  

The performance of a low-power single-lead armband in generating electrocardiogram (ECG) signals from the chest and left arm was validated against a BIOPAC MP160 benchtop system in real time. The filtering performance of three adaptive filtering algorithms, namely least mean squares (LMS), recursive least squares (RLS), and extended kernel RLS (EKRLS), in removing white (W), power line interference (PLI), electrode movement (EM), muscle artifact (MA), and baseline wandering (BLW) noises from the chest and left-arm ECG was evaluated with respect to the mean squared error (MSE). The filter parameters of the algorithms were adjusted to ensure optimal filtering performance. LMS was found to be the most effective adaptive filtering algorithm in removing all noises with minimum MSE. However, for removing PLI with a maximal signal-to-noise ratio (SNR), RLS showed lower MSE values than LMS when the step size was set to 1 × 10⁻⁵. We proposed a transformation framework to convert the denoised left-arm and chest ECG signals into their low-MSE and high-SNR surrogate chest signals. With wide applications in wearable technologies, the proposed pipeline was found to be capable of establishing a baseline for comparing left-arm signals with original chest signals, getting one step closer to making use of the left-arm ECG in clinical cardiac evaluations.
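A minimal LMS noise-cancellation sketch in the spirit of the adaptive filtering described above; the signals, tap count, and step size below are toy assumptions, not the validated armband pipeline or its tuned parameters.

```python
import numpy as np

def lms_filter(reference, primary, n_taps=8, mu=5e-3):
    """Basic LMS adaptive noise canceller: the reference input (correlated
    with the interference) is filtered to predict the noise in the primary
    input, and the error output is the denoised signal."""
    w = np.zeros(n_taps)
    out = np.zeros_like(primary)
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]      # tap-delay line
        y = w @ x                              # current noise estimate
        e = primary[n] - y                     # denoised sample
        w += 2 * mu * e * x                    # LMS weight update
        out[n] = e
    return out

# Toy usage: a slow sine stands in for the ECG and a 50 Hz sinusoid for the
# power line interference; mu plays the role of the step size in the abstract.
fs = 500
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t)
interference = 0.5 * np.sin(2 * np.pi * 50 * t)
primary = ecg + interference
reference = 0.5 * np.sin(2 * np.pi * 50 * t + 0.7)   # phase-shifted noise reference
denoised = lms_filter(reference, primary)
print(np.mean((denoised[2000:] - ecg[2000:]) ** 2))  # MSE after adaptation
```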

