filter noise
Recently Published Documents


TOTAL DOCUMENTS

67
(FIVE YEARS 27)

H-INDEX

7
(FIVE YEARS 2)

2021 ◽  
Vol 11 (1) ◽  
pp. 192
Author(s):  
Cheng-Yu Lin ◽  
Yi-Wen Wang ◽  
Febryan Setiawan ◽  
Nguyen Thi Hoang Trang ◽  
Che-Wei Lin

Background: Heart rate variability (HRV) and electrocardiogram (ECG)-derived respiration (EDR) have been used to detect sleep apnea (SA) for decades. The present study proposes an SA-detection algorithm using a machine-learning framework and bag-of-features (BoF) derived from an ECG spectrogram. Methods: This study was verified using overnight ECG recordings from 83 subjects with an average apnea–hypopnea index (AHI) of 29.63 events/h, drawn from the PhysioNet Apnea-ECG database and the National Cheng Kung University Hospital Sleep Center database. The study used signal preprocessing to filter noise and artifacts, ECG time–frequency transformation using the continuous wavelet transform (CWT), BoF feature generation, machine-learning classification using support vector machine (SVM), ensemble learning (EL), and k-nearest neighbor (KNN) classifiers, and cross-validation. The spectrogram time length was set to 10 and 60 s to examine the minimum window length required to achieve satisfactory accuracy. Specific frequency bands of 0.1–50, 8–50, 0.8–10, and 0–0.8 Hz were also extracted to generate the BoF and determine the frequency band best suited for SA detection. Results: The five-fold cross-validation accuracies using the BoF derived from the ECG spectrogram with 10 and 60 s time windows were 90.5% and 91.4% for the 0.1–50 Hz and 8–50 Hz frequency bands, respectively. Conclusion: An SA-detection algorithm utilizing BoF and a machine-learning framework was successfully developed in this study, with satisfactory classification accuracy and high temporal resolution.
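As a rough illustration of the pipeline above, the sketch below computes a Morlet-wavelet spectrogram of a synthetic ECG-like signal and turns it into a bag-of-features histogram by quantizing spectrogram columns against a small codebook. The sampling rate, scales, and toy codebook are assumptions for illustration, not the authors' settings.

```python
import numpy as np

def morlet_cwt(signal, scales, w0=6.0):
    """Magnitude CWT spectrogram via convolution with Morlet wavelets."""
    out = np.empty((len(scales), len(signal)))
    for i, s in enumerate(scales):
        t = np.arange(-4 * s, 4 * s + 1)
        wavelet = np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2) / np.sqrt(s)
        out[i] = np.abs(np.convolve(signal, wavelet, mode="same"))
    return out

def bag_of_features(spectrogram, codebook):
    """Quantize each time column against the codebook; return the word histogram."""
    dists = np.linalg.norm(codebook[:, :, None] - spectrogram[None, :, :], axis=1)
    words = np.argmin(dists, axis=0)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

fs = 100.0                                  # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)                # one 10 s window, as in the paper
rng = np.random.default_rng(0)
ecg_like = np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.standard_normal(len(t))
scales = fs / np.linspace(1.0, 50.0, 32)    # scales spanning roughly 1-50 Hz
spec = morlet_cwt(ecg_like, scales)

codebook = spec[:, rng.choice(spec.shape[1], 16, replace=False)].T  # toy codebook
features = bag_of_features(spec, codebook)  # feature vector for an SVM/EL/KNN
```

In the paper's framework, histograms of this kind would be the input vectors fed to the SVM, EL, and KNN classifiers.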


Author(s):  
Vladimir F. Telezhkin ◽  
Bekhruz B. Saidov

In this paper, we investigate the problem of improving data quality using the Kalman filter in Matlab Simulink. Recently, this filter has become one of the most common algorithms for filtering and processing data in control systems (including automated control systems) and in software for digital filtering of noise and interference from signals such as speech. It is also widely used in many fields of science and technology; due to its simplicity and efficiency, it can be found in GPS receivers, in devices for processing sensor readings for various purposes, etc. One of the important tasks to be solved in systems that process sensor readings is the ability to detect and filter noise: sensor noise leads to unstable measurement data, which ultimately reduces the accuracy and performance of the control device. One method for solving the optimal filtering problem is the development of cybernetic algorithms based on the Kalman and Wiener filters. Filtering can be implemented in two forms: hardware and software. Hardware filtering can be built electronically, but it is less efficient because it requires additional circuitry in the system. To overcome this obstacle, one can instead implement the filtering as a programming algorithm; besides requiring no electronic hardware circuitry, the filtering performed is even more accurate because it uses a computational process. The paper analyzes the results of applying the Kalman filter to eliminate errors when measuring the coordinates of a tracked target and to obtain a "smoothed" trajectory, and shows the results of the filter development process when processing an electrocardiogram. The Kalman filter algorithm is based on a recursive estimation procedure for the measured state of the object under study.
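The recursive estimation procedure described above can be sketched for the scalar case as follows: a random-walk state model with hypothetical noise parameters q and r, not the paper's Simulink implementation.

```python
import numpy as np

def kalman_1d(measurements, q=1e-3, r=0.1, x0=0.0, p0=1.0):
    """Scalar Kalman filter: recursive estimation of a (nearly) constant state."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                 # predict: uncertainty grows by process noise q
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update: blend prediction with measurement
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)

true_value = 5.0
rng = np.random.default_rng(1)
noisy = true_value + 0.5 * rng.standard_normal(200)   # noisy sensor readings
smoothed = kalman_1d(noisy, q=1e-4, r=0.25)           # converges toward 5.0
```

A small q tells the filter the state changes slowly, so it averages aggressively; raising q makes it track faster-changing targets at the cost of passing through more noise.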


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Zhoubao Sun ◽  
Pengfei Chen ◽  
Xiaodong Zhang

With the popularity of Internet of Things technology and intelligent devices, accurate step counting has gained more and more attention. To address the problems that existing algorithms rely on a fixed threshold to filter noise and that their parameters cannot be updated in time, an intelligent optimization strategy based on deep reinforcement learning is proposed. In this study, the counting problem is transformed into a sequential decision-optimization problem, and noise recognition is integrated with user feedback to update the parameters. The direct end-to-end processing alleviates the step-counting inaccuracy that two-stage processing passes to the downstream counting module through imperfect noise filtering, and it keeps the model parameters continuously updated. Finally, the experimental results show that the proposed model achieves superior performance to existing approaches.
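The fixed-threshold problem the authors target can be illustrated with a deliberately simplified stand-in for their deep-reinforcement-learning model: a single threshold parameter updated from user feedback on the true step count. All signal values and learning settings here are hypothetical.

```python
import numpy as np

def count_peaks(signal, threshold):
    """Count local maxima above the threshold (a classic fixed-threshold counter)."""
    mid = signal[1:-1]
    return int(np.sum((mid > signal[:-2]) & (mid > signal[2:]) & (mid > threshold)))

def adaptive_threshold(signal, true_steps, threshold=0.2, lr=0.05, epochs=30):
    """Update the threshold from feedback: counting too many steps raises it,
    too few lowers it, until the count matches the user's feedback."""
    for _ in range(epochs):
        counted = count_peaks(signal, threshold)
        threshold += lr * np.sign(counted - true_steps)
    return threshold

signal = np.zeros(300)
signal[15::30] = 1.5      # ten true steps
signal[5::30] = 0.42      # ten small noise bumps a low fixed threshold miscounts
learned = adaptive_threshold(signal, true_steps=10)  # settles between 0.42 and 1.5
```

The point of the illustration: the parameter keeps adapting as feedback arrives, instead of being fixed once as in threshold-based counters.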


2021 ◽  
Vol 13 (22) ◽  
pp. 4658
Author(s):  
Fengquan Li ◽  
Zhuling Sun ◽  
Mingyuan Liu ◽  
Shanfeng Yuan ◽  
Lei Wei ◽  
...  

Very-high-frequency (VHF) electromagnetic signals have long been used to image lightning channels with high temporal and spatial resolution, thanks to their capability to penetrate clouds. A lightning broadband VHF interferometer with three VHF antennas arranged in a scalene triangle has been operating in Lhasa since 2019 to detect lightning VHF signals. Using signals from this interferometer, a new hybrid algorithm, called the TDOA-EMTR technique, which combines the time difference of arrival (TDOA) and electromagnetic time reversal (EMTR) techniques, is introduced to image the two-dimensional lightning channels. The TDOA technique is first applied to calculate initial solutions for the whole lightning flash. These TDOA results are used to predetermine the domain for the EMTR technique, which is then run to obtain the final positioning result. Unlike the original EMTR technique, low-power frequency points in each time window are removed based on the FFT spectrum, and the metrics used to filter noise events are adjusted. Detailed imaging results of a negative cloud-to-ground (CG) lightning flash and an intra-cloud (IC) lightning flash, obtained with both the TDOA method and TDOA-EMTR, are presented. Compared with the original EMTR method, the positioning efficiency is improved by a factor of 3 to 4 or more, depending on the scope of the predetermined domain. Results show that the new algorithm can retrieve much weaker radiation sources, as well as simultaneously occurring sources, compared with the TDOA method.
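A minimal sketch of the TDOA step, assuming a distant source and a plane-wave arrival at the three antennas; the geometry and times below are hypothetical, and this is not the authors' implementation.

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def tdoa_direction(positions, arrival_times):
    """Plane-wave TDOA: for a unit propagation direction u,
    t_i - t_0 = (p_i - p_0) . u / c, giving two linear equations for u."""
    p0, p1, p2 = positions
    A = np.array([p1 - p0, p2 - p0])
    b = C * np.array([arrival_times[1] - arrival_times[0],
                      arrival_times[2] - arrival_times[0]])
    u = np.linalg.solve(A, b)
    return u / np.linalg.norm(u)

# scalene-triangle antenna layout, as in the interferometer described above
antennas = [np.array([0.0, 0.0]), np.array([100.0, 0.0]), np.array([40.0, 80.0])]
u_true = np.array([np.cos(np.radians(30)), np.sin(np.radians(30))])
times = [p @ u_true / C for p in antennas]   # synthetic exact arrival times
u_est = tdoa_direction(antennas, times)      # recovers u_true
```

In practice such TDOA solutions are noisy, which is why the paper uses them only to bound the search domain for the more expensive EMTR refinement.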


2021 ◽  
Vol 2089 (1) ◽  
pp. 012029
Author(s):  
Ram Singh ◽  
Lakhwinder Kaur

Abstract Magnetic resonance imaging (MRI) is an important medical image acquisition technique used to acquire high-contrast images of human body anatomical structures and soft-tissue organs. Unlike X-ray and computed tomography (CT) imaging, MRI does not use harmful ionizing radiation. High-resolution MRI is desirable in many clinical applications such as tumor segmentation, image registration, edge and boundary detection, and image classification. During acquisition, many practical constraints limit MRI quality by introducing random Gaussian noise and other artifacts caused by the thermal energy of the patient's body, random scanner voltage fluctuations, body-motion artifacts, impulse noise from electronic circuits, etc. High-resolution MRI can be acquired by increasing scan time, but out of consideration for patient comfort this is not preferred in practice. Hence, post-acquisition image processing techniques are used to filter noise and enhance MRI quality to make it fit for further image analysis tasks. The main motive of MRI enhancement is to reconstruct a high-quality image while improving and retaining its important features. New deep learning methods for image denoising and artifact removal have shown tremendous potential for reconstructing high-quality images from noise-degraded MRI while preserving useful image information. This paper presents a noise-residue learning convolutional neural network (CNN) model to denoise and enhance the quality of noise-corrupted low-resolution MR images. The proposed technique shows better performance than other conventional MRI enhancement methods. The reconstructed image quality is evaluated with the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) metrics, while the information loss in the reconstructed MRI is measured by the mean squared error (MSE).
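The noise-residue formulation and the PSNR metric mentioned above can be sketched as follows, with a scaled copy of the true noise standing in for the trained CNN's prediction.

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio computed from the mean squared error (MSE)."""
    mse = np.mean((reference - test) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(64, 64)).astype(float)   # stand-in MR slice
noise = 10.0 * rng.standard_normal((64, 64))
noisy = clean + noise

# Noise-residue learning: the CNN is trained to predict the noise map n_hat,
# and the denoised image is recovered by subtraction.
n_hat = 0.9 * noise            # hypothetical (imperfect) network prediction
denoised = noisy - n_hat       # higher PSNR against `clean` than `noisy` has
```

Predicting the residue rather than the clean image is attractive because the noise map is closer to zero-mean and easier for a network to learn than full anatomical content.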


2021 ◽  
Author(s):  
Yves Quilfen ◽  
Jean-François Piolle ◽  
Bertrand Chapron

Abstract. Satellite altimeters routinely supply sea surface height (SSH) measurements, which are key observations for monitoring ocean dynamics. However, below a wavelength of about 70 km, along-track altimeter measurements are often characterized by a dramatic drop in signal-to-noise ratio, making it very challenging to fully exploit the available altimeter observations to precisely analyze small mesoscale variations in SSH. Although various approaches have been proposed and applied to identify and filter noise from measurements, no distinct methodology has emerged for systematic application in operational products. Pending resolution of this issue, the Copernicus Marine Environment Monitoring Service (CMEMS) currently provides simple band-pass filtered data to mitigate noise contamination of along-track SSH signals; more innovative and suitable noise filtering methods are thus left to users seeking to unveil small-scale altimeter signals. As demonstrated here, a fully data-driven approach is developed and applied successfully to provide robust estimates of noise-free sea level anomaly (SLA) signals. The method combines empirical mode decomposition (EMD), which helps analyze non-stationary and non-linear processes, with an adaptive noise filtering technique inspired by discrete wavelet transform (DWT) decompositions. It is found to best resolve the distribution of SLA variability in the 30–120 km mesoscale wavelength band. A practical uncertainty variable is attached to the denoised SLA estimates that accounts both for errors related to the local signal-to-noise ratio and for uncertainties in the denoising process, which assumes that the SLA variability results in part from a stochastic process. For the available period, measurements from the Jason-3, Sentinel-3 and Saral/AltiKa missions are processed and analyzed, and their spectral energy and seasonal distributions are characterized in the small mesoscale domain.
In anticipation of the upcoming SWOT (Surface Water and Ocean Topography) mission data, the SASSA data set (Satellite Altimeter Short-scale Signals Analysis, Quilfen and Piolle, 2021) of denoised SLA measurements for three reference altimeter missions already yields valuable opportunities to evaluate global small mesoscale kinetic energy distributions.
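The thresholding idea borrowed from DWT decompositions can be sketched with a plain Haar transform and a universal threshold; this is a simplified stand-in for the EMD-based SASSA processing, run here on synthetic data.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def haar_denoise(signal, n_levels=3):
    """Haar DWT, soft-threshold the detail coefficients with the universal
    threshold, then reconstruct.  Length must be divisible by 2**n_levels."""
    a = signal.astype(float)
    details = []
    for _ in range(n_levels):
        a, d = (a[0::2] + a[1::2]) / np.sqrt(2), (a[0::2] - a[1::2]) / np.sqrt(2)
        details.append(d)
    # noise level estimated from the finest-scale (noise-dominated) details
    sigma = np.median(np.abs(details[0])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(signal)))
    details = [soft_threshold(d, thr) for d in details]
    for d in reversed(details):                 # inverse transform
        out = np.empty(2 * len(a))
        out[0::2] = (a + d) / np.sqrt(2)
        out[1::2] = (a - d) / np.sqrt(2)
        a = out
    return a

rng = np.random.default_rng(3)
clean = np.sin(2 * np.pi * np.arange(256) / 64)   # smooth stand-in for an SLA track
noisy = clean + 0.3 * rng.standard_normal(256)
denoised = haar_denoise(noisy)
```

The EMD variant replaces the fixed dyadic basis with data-adaptive modes, which is what lets it handle the non-stationary SLA signals the abstract emphasizes.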


2021 ◽  
Vol 4 (2) ◽  
pp. 629-641
Author(s):  
Yesika Prebina Br. Bangun

This study aims to describe the concept and the visualization process of Petra Sinuraya's close-up photo retouching technique. The research method is a descriptive-analytic study of the work; the subject is Petra Sinuraya's close-up photo retouching. Data were analyzed descriptively with percentage analysis using simple statistical procedures, and were obtained through interview and documentation methods. The instrument was designed based on interview and documentation guidelines, and was developed according to situations encountered in the field. The research was conducted by taking and selecting documents in the form of 10 art photos. The results show that Petra Sinuraya's close-up photo retouching process is a digital technique that sharpens skills through the touch of tools available in Photoshop. The role of composition in the retouching process is very important for client needs, so that the photo looks more attractive in the final result. In close-up photo retouching, Petra Sinuraya works in various ways, such as the Spot Healing Brush for smoothing the skin, the Dodge and Burn tools for eye retouching and for lightening or darkening contrast, the Noise filter and Gaussian Blur for flawless skin, and the Patch tool to enhance photos.


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Enas A. Hakim Khalil ◽  
Enas M. F. El Houby ◽  
Hoda Korashy Mohamed

Abstract Currently, expressing feelings through social media requires great consideration; as an essential part of our lives, we share not only ideas and thoughts but also moments and good memories. Social media platforms such as Facebook, Twitter, Weibo, and LinkedIn are considered rich sources of opinionated text data, and both organizations and individuals are interested in using them to analyze people's opinions and extract sentiments and emotions. Most studies on social media analysis classify sentiment into positive, negative, or neutral classes. The challenge in emotion analysis arises because humans can express one or several emotions within a single expression. Human beings recognize these different emotions well, but this remains difficult for an emotion analysis system. In most cases, the Arabic used on social media is of a slangy or colloquial form, which makes it more challenging to preprocess and filter noise, since most lemmatization and stemming tools are built for Modern Standard Arabic (MSA). An emotion analysis model has been implemented to categorize emotions; the model addresses a multiclass, multilabel classification problem. However, few studies have tackled this emotion classification problem in Arabic social media; nearly the only related work is SemEval-2018 Task 1, sub-task E-c. Several machine learning approaches have been applied to this task, and a few studies were based on deep learning. Our model implements a novel multilayer bidirectional long short-term memory (BiLSTM) network trained on top of pre-trained word embedding vectors, and it achieves a state-of-the-art performance improvement. This approach has been compared with other models developed for the same task using support vector machines (SVM), random forests (RF), and fully connected neural networks, and the proposed model improves on the best results obtained for this task.
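The multilabel formulation above amounts to an independent sigmoid per emotion on top of the BiLSTM's output layer. A minimal sketch of that decision rule, with a hypothetical label subset and toy logits:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def multilabel_predict(logits, threshold=0.5):
    """One independent sigmoid per emotion, so a tweet can carry several labels."""
    return sigmoid(logits) >= threshold

emotions = ["anger", "joy", "sadness", "fear"]   # hypothetical label subset
logits = np.array([2.1, -0.3, 0.8, -1.7])        # toy BiLSTM outputs for one tweet
active = [e for e, on in zip(emotions, multilabel_predict(logits)) if on]
# → ["anger", "sadness"]
```

This is what distinguishes the task from multiclass sentiment analysis, where a softmax would force exactly one label per input.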


2021 ◽  
Vol 9 (3) ◽  
pp. 214-224
Author(s):  
Oleg Shipit’ko ◽  
Anatoly Kabakov

The paper proposes an algorithm for mapping linear features detected on the roadway: road marking lines, curbs, and road boundaries. The algorithm is based on a mapping method with an inverse observation model, proposed here to take into account the spatial error of the visual detector of linear features. The influence of various model parameters on the resulting mapping quality was studied. The mapping algorithm was tested on data recorded on an autonomous vehicle while driving at the test site, and its quality was assessed according to several quality metrics known from the literature. In addition, the mapping problem was considered as a binary classification problem, in which each map cell may or may not contain the desired feature, and the ROC curve and AUC-ROC metric were used to assess quality. As a naive baseline, a map was built containing all detected linear features without any additional filtering. For the map built from the raw data, the AUC-ROC was 0.75, while applying the proposed algorithm raised it to 0.81. The experimental results confirm that the proposed algorithm can effectively filter noise and false-positive detections, which supports the applicability of the algorithm and the inverse observation model to practical problems. Keywords: linear features, mapping, inverse observation model, road map, autonomous vehicle, digital road map.
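A common way to realize mapping with an inverse observation model is a log-odds occupancy update per map cell; the sketch below uses hypothetical hit/miss probabilities and is not the paper's exact model.

```python
import numpy as np

def log_odds(p):
    return np.log(p / (1.0 - p))

def update_cell(l_prev, detected, p_hit=0.7, p_miss=0.4):
    """Log-odds occupancy update: the inverse observation model turns a single
    detection (or its absence) into evidence for or against the cell holding a
    linear feature.  p_hit and p_miss are hypothetical detector probabilities."""
    return l_prev + log_odds(p_hit if detected else p_miss)

l = 0.0                                      # prior log-odds, i.e. p = 0.5
for observed in [True, True, False, True]:   # noisy detections of one map cell
    l = update_cell(l, observed)
prob = 1.0 / (1.0 + np.exp(-l))              # three hits outweigh the one miss
```

Accumulating evidence this way is what filters isolated false positives: a cell only crosses the decision threshold after consistent detections over multiple observations.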


2021 ◽  
Vol 13 (13) ◽  
pp. 2433
Author(s):  
Shu Yang ◽  
Fengchao Peng ◽  
Sibylle von Löwis ◽  
Guðrún Nína Petersen ◽  
David Christian Finger

Doppler lidars are used worldwide for wind monitoring and, recently, also for the detection of aerosols. Automatic algorithms that classify the signals retrieved from lidar measurements are very useful for users. In this study, we explore the value of machine learning for classifying backscattered signals from Doppler lidars using data from Iceland. We combined supervised and unsupervised machine learning algorithms with conventional lidar data processing methods and trained two models to filter noise signals and classify Doppler lidar observations into different classes, including clouds, aerosols and rain. The results reveal high accuracy for noise identification and for aerosol and cloud classification; however, precipitation detection is underestimated. The method was tested on data sets from two instruments under different weather conditions, including three dust storms during the summer of 2019. Our results reveal that this method can provide efficient, accurate and real-time classification of lidar measurements. Accordingly, we conclude that machine learning can open new opportunities for lidar data end-users, such as aviation safety operators, to monitor dust in the vicinity of airports.
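A toy version of the two-stage processing described above: filter noise gates by SNR first, then assign the remaining range gates to the nearest class centroid. The features and centroids are invented for illustration; the paper's trained models are more elaborate.

```python
import numpy as np

# Hypothetical two-feature class centroids (backscatter intensity, spectrum width);
# stand-ins for the trained models, not the paper's actual decision rules.
CENTROIDS = {"aerosol": np.array([0.2, 0.3]),
             "cloud":   np.array([0.9, 0.2]),
             "rain":    np.array([0.7, 0.8])}

def classify_gates(features, snr_db, snr_floor=0.0):
    """Flag low-SNR gates as noise, then nearest-centroid classify the rest."""
    labels = []
    for f, s in zip(features, snr_db):
        if s < snr_floor:
            labels.append("noise")
        else:
            labels.append(min(CENTROIDS,
                              key=lambda c: np.linalg.norm(f - CENTROIDS[c])))
    return labels

gates = np.array([[0.25, 0.28], [0.85, 0.15], [0.72, 0.81]])  # toy range gates
snr_db = np.array([5.0, 8.0, -2.0])
labels = classify_gates(gates, snr_db)
# → ["aerosol", "cloud", "noise"]
```

Separating noise rejection from class assignment mirrors the paper's design of training one model to filter noise and another to classify the remaining signals.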

