Template Matching and Matrix Profile for Signal Quality Assessment of Carotid and Femoral Laser Doppler Vibrometer Signals

2022 ◽  
Vol 12 ◽  
Author(s):  
Silvia Seoni ◽  
Simeon Beeckman ◽  
Yanlu Li ◽  
Soren Aasmul ◽  
Umberto Morbiducci ◽  
...  

Background: Laser-Doppler Vibrometry (LDV) is a laser-based technique that measures the motion of moving targets with high spatial and temporal resolution. To demonstrate its use for the measurement of carotid-femoral pulse wave velocity, a prototype system was employed in a clinical feasibility study, and data were acquired for analysis without prior quality control. Real-time application, however, will require real-time assessment of signal quality. In this study, we (1) use template matching and matrix profile to assess the quality of these previously acquired signals; (2) analyze the nature and achievable quality of the signals acquired at the carotid and femoral measuring sites; and (3) explore models for automated classification of signal quality.

Methods: Laser-Doppler Vibrometry data were acquired in 100 subjects (50M/50F) and consisted of 4–5 sequences of 20-s recordings of skin displacement, differentiated twice to yield acceleration. Each recording consisted of data from 12 laser beams, yielding 410 carotid-femoral and 407 carotid-carotid recordings. Data quality was visually assessed on a 1–5 scale, and a subset of best-quality data was used to construct an acceleration template for each measuring site. The time-varying cross-correlation of the acceleration signals with the template was computed, and a quality metric was derived from several features of this template matching. Next, the matrix-profile technique was applied to identify recurring features in the measured time series and to derive a similar quality metric. The statistical distributions of the metrics, and their correlations with basic clinical data, were assessed. Finally, logistic-regression-based classifiers were developed and their ability to automatically classify LDV-signal quality was assessed.

Results: The automated quality metrics correlated well with the visual scores. Signal quality was negatively correlated with BMI for femoral recordings but not for carotid recordings. Logistic regression models based on both methods yielded an accuracy of at least 80% for our carotid and femoral recording data, reaching 87% for the femoral data.

Conclusion: Both template matching and matrix profile were found to be suitable methods for automated grading of LDV signal quality, and both generated a quality metric on par with the expert's assessment of signal quality. The classifiers developed with both quality metrics showed their potential for future real-time implementation.
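As a rough illustration of the template-matching step described above, the sketch below computes a sliding, normalized cross-correlation between an acceleration signal and a beat template and condenses it into a single quality score. The windowing, the 0.8 correlation threshold, and the synthetic data are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal sketch: time-varying normalized cross-correlation against a template,
# summarized into a toy quality score. All thresholds are assumptions.
import numpy as np

def sliding_normalized_xcorr(signal: np.ndarray, template: np.ndarray) -> np.ndarray:
    """Normalized cross-correlation of `template` against every window of `signal`."""
    m = len(template)
    t = (template - template.mean()) / (template.std() + 1e-12)
    out = np.empty(len(signal) - m + 1)
    for i in range(len(out)):
        w = signal[i:i + m]
        w = (w - w.mean()) / (w.std() + 1e-12)
        out[i] = float(np.dot(w, t)) / m          # value in [-1, 1]
    return out

def quality_metric(signal: np.ndarray, template: np.ndarray) -> float:
    """Toy quality score: fraction of windows whose correlation exceeds 0.8."""
    xc = sliding_normalized_xcorr(signal, template)
    return float(np.mean(xc > 0.8))

# Synthetic data standing in for a 20-s LDV acceleration recording.
rng = np.random.default_rng(0)
template = np.sin(np.linspace(0, 2 * np.pi, 100))
signal = np.tile(template, 20) + 0.3 * rng.standard_normal(2000)
print(f"quality score: {quality_metric(signal, template):.2f}")
```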

2021 ◽  
Vol 8 ◽  
Author(s):  
Mojtaba Akbari ◽  
Jay Carriere ◽  
Tyler Meyer ◽  
Ron Sloboda ◽  
Siraj Husain ◽  
...  

During an ultrasound (US) scan, the sonographer is in close contact with the patient, which puts them at risk of COVID-19 transmission. In this paper, we propose a robot-assisted system that automatically scans tissue, increasing the sonographer/patient distance and decreasing the contact duration between them. The method was developed as a quick response to the COVID-19 pandemic; it takes into account the sonographers' preferences for how US scanning is done and can be trained quickly for different applications. Our proposed system automatically scans the tissue using a dexterous robot arm that holds the US probe and assesses the quality of the acquired US images in real time. This image-quality feedback is used to automatically adjust the contact force of the US probe based on the quality of the current image frame. The quality assessment algorithm is based on three US image features: correlation, compression, and noise characteristics. These features are input to an SVM classifier, and the robot arm adjusts the US scanning force based on the SVM output. The proposed system enables the sonographer to keep a distance from the patient because the sonographer no longer has to hold the probe and press it against the patient's body for a prolonged time. The SVM was trained using bovine and porcine biological tissue, and the system was then tested experimentally on plastisol phantom tissue. The experimental results show that the proposed quality assessment algorithm successfully maintains US image quality and is fast enough for use in a robotic control loop.
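The following sketch illustrates the kind of quality-feedback loop the abstract describes: an SVM trained on the three named image features (correlation, compression, noise) judges the current frame, and the probe contact force is nudged accordingly. The synthetic training data, force steps, and limits are assumptions, not the system's actual parameters.

```python
# Hedged sketch of an SVM-driven force adjustment loop. Feature extraction is
# omitted; force values and thresholds are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Placeholder training set: rows = [correlation, compression, noise], label 1 = good frame.
X_train = rng.random((200, 3))
y_train = (X_train[:, 0] > 0.5).astype(int)      # synthetic labels for illustration

clf = SVC(kernel="rbf").fit(X_train, y_train)

def adjust_force(current_force_n: float, frame_features: np.ndarray) -> float:
    """Increase force slightly on a poor-quality frame, relax it otherwise."""
    good = clf.predict(frame_features.reshape(1, -1))[0] == 1
    step = -0.1 if good else 0.2                  # Newtons, illustrative values
    return float(np.clip(current_force_n + step, 1.0, 10.0))

print(adjust_force(4.0, rng.random(3)))
```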


2012 ◽  
Vol 241-244 ◽  
pp. 2354-2361
Author(s):  
Ling Song ◽  
Tao Shen Li ◽  
Yan Chen

Real-time video transmission demands high bandwidth and throughput and imposes strict delay constraints. To transmit real-time video in multi-interface multi-channel ad hoc networks, we first applied multi-interface multi-channel extension methods to the AOMDV (Ad-hoc On-demand Multipath Distance Vector) routing protocol and improved the existing channel-switching algorithm, yielding the MIMC-AOMDV (Multi-Interface Multi-Channel AOMDV) routing protocol. Second, we proposed a video-streaming delay QoS (Quality of Service) constraint and link-quality metrics based on the total used length of the multi-interface queues, yielding the QMMIMC-AOMDV (Quality-metric MIMC-AOMDV) routing protocol. Simulations show that the proposed QMMIMC-AOMDV effectively reduces frame delay and raises the frame decodable rate and peak signal-to-noise ratio (PSNR), making it more suitable for real-time video streams.
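As a minimal illustration of a queue-based link-quality metric of the kind described above (not the paper's exact formulation), the sketch below scores a node by the total used length of its per-interface queues, so that a more congested next hop receives a lower quality score for delay-sensitive video.

```python
# Illustrative queue-occupancy link-quality score; the formulation is an
# assumption standing in for the paper's metric.
def link_quality(queue_used: list[int], queue_capacity: int) -> float:
    """Return a score in (0, 1]; 1 means all interface queues are empty."""
    total_used = sum(queue_used)
    total_capacity = queue_capacity * len(queue_used)
    return 1.0 - total_used / total_capacity

# Node with three interfaces, each queue holding up to 50 packets.
print(link_quality([10, 35, 5], queue_capacity=50))   # -> 0.666...
```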


Author(s):  
B. Dukai ◽  
R. Peters ◽  
S. Vitalis ◽  
J. van Liempt ◽  
J. Stoter

Abstract. Fully automated reconstruction of high-detail building models on a national scale is challenging. It raises a set of problems that are seldom encountered when processing smaller areas or single cities. Often there is no reference or ground truth available against which to evaluate the quality of the reconstructed models; therefore, only relative quality metrics are computed, comparing the models to the source data sets. In this paper we present a set of relative quality metrics that we use for assessing the quality of 3D building models that were reconstructed in a fully automated process, in Levels of Detail 1.2, 1.3 and 2.2, for the whole of the Netherlands. The source data sets for the reconstruction are the Dutch Building and Address Register (BAG) and the National Height Model (AHN), and the quality assessment is done by comparing the building models to these two data sources. The work presented in this paper lays the foundation for future research on the quality control and management of automated building reconstruction. Additionally, it serves as an important step in our ongoing effort towards a fully automated reconstruction method for high-detail, high-quality building models.
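A minimal sketch of one kind of relative quality metric suggested above: with no ground truth available, a reconstructed building is compared back to the source point cloud, here as the RMSE between AHN point heights inside the footprint and the model's roof height. The flat-roof simplification and all names are illustrative assumptions, not the paper's metric definitions.

```python
# Toy relative quality metric: deviation of source (AHN) heights from a
# reconstructed flat roof. Real models have non-flat roofs; this is a sketch.
import numpy as np

def roof_height_rmse(ahn_heights_in_footprint: np.ndarray, model_roof_height: float) -> float:
    """Root-mean-square deviation of source heights from the reconstructed roof."""
    residuals = ahn_heights_in_footprint - model_roof_height
    return float(np.sqrt(np.mean(residuals ** 2)))

# Synthetic stand-in for AHN points over one flat-roof building.
points = np.array([10.1, 10.3, 9.9, 10.2, 10.0])
print(f"RMSE: {roof_height_rmse(points, model_roof_height=10.0):.2f} m")
```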


2015 ◽  
Vol 6 (4) ◽  
Author(s):  
Mamluatul Hani’ah ◽  
Yogi Kurniaawan ◽  
Umi Laili Yuhana

Abstract. Software quality assurance is one way to increase the quality of software, and improvements in software quality can be measured with software quality metrics, which are part of software quality measurement models. Software quality models are currently very diverse, and software quality metrics have consequently become increasingly diverse as well. This variety of metrics for measuring software quality raises the problem of selecting metrics that fit the desired quality measurement parameters. Another problem is the validation that must be performed on these metrics in order to obtain objective and valid results. In this paper, a systematic mapping of software quality metrics over the last nine years is conducted. The paper brings up issues in software quality metrics that can be taken up by other researchers, and current trends are introduced and discussed. Keywords: Software Quality, Software Assessment, Metric


2020 ◽  
Author(s):  
Hangsik Shin

BACKGROUND In clinical use of the photoplethysmogram, waveform distortion due to motion noise or low perfusion may lead to inaccurate analysis and diagnostic results. It is therefore necessary to find an appropriate method for evaluating the signal quality of the photoplethysmogram so that its use in mobile healthcare can be further expanded.

OBJECTIVE The purpose of this study was to develop and verify a machine learning model that can accurately evaluate the quality of a photoplethysmogram, based on the shape of the waveform and the phase relevance within a pulsatile waveform, without requiring complicated pre-processing.

METHODS Photoplethysmograms were recorded for 76 participants (5 minutes per participant). All recorded photoplethysmograms were segmented beat by beat, giving a total of 49,561 pulsatile segments. These segments were manually labeled into 'good' and 'bad' classes and converted to two-dimensional phase-space trajectory images of size 124 × 124 using a recurrence plot. The classification model was implemented as a convolutional neural network with a two-layer structure and verified through five-fold cross-validation.

RESULTS The proposed model correctly classified 48,827 of the 49,561 segments and misclassified 734 segments, showing a balanced accuracy of 0.975. The sensitivity, specificity, and positive predictive value of the developed model for 'bad'-class classification on the test dataset were 0.964, 0.987, and 0.848, respectively. The area under the curve was 0.994.

CONCLUSIONS The convolutional neural network model with a recurrence plot as input proposed in this study can be used for signal quality assessment as a generalized model with high accuracy through data expansion, and it has the advantage of not requiring complicated pre-processing or a feature detection step.

CLINICALTRIAL KCT0002080
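The sketch below illustrates the input representation described in the methods: a pulsatile segment is resampled to a fixed length and converted into a 124 × 124 recurrence-plot image, which would then feed the two-layer CNN (the network itself is omitted here). The resampling approach and the recurrence threshold are assumptions for illustration.

```python
# Sketch of turning one pulsatile segment into a recurrence-plot image.
# Threshold and resampling choices are illustrative, not the study's values.
import numpy as np

def recurrence_plot(segment: np.ndarray, size: int = 124, eps: float = 0.1) -> np.ndarray:
    """Binary recurrence plot of a 1D segment resampled to `size` samples."""
    x = np.interp(np.linspace(0, len(segment) - 1, size), np.arange(len(segment)), segment)
    x = (x - x.min()) / (x.max() - x.min() + 1e-12)          # scale to [0, 1]
    dist = np.abs(x[:, None] - x[None, :])                   # pairwise distances
    return (dist < eps).astype(np.float32)                   # size x size image

beat = np.sin(np.linspace(0, np.pi, 90)) ** 2                # toy PPG-like pulse
img = recurrence_plot(beat)
print(img.shape)                                             # (124, 124)
```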


2019 ◽  
Vol 2019 ◽  
pp. 1-9 ◽  
Author(s):  
Abdelmoghit Zaarane ◽  
Ibtissam Slimani ◽  
Abdellatif Hamdoun ◽  
Issam Atouf

Nowadays, real-time vehicle detection is one of the biggest challenges in driver-assistance systems due to the complex environment and the diverse types of vehicles. Vehicle detection can be exploited to accomplish several tasks, such as computing the distances to other vehicles, which can help the driver by warning them to slow down and avoid collisions. In this paper, we propose an efficient real-time vehicle detection method that follows two steps: hypothesis generation and hypothesis verification. In the first step, potential vehicle locations are detected using a template matching technique based on cross-correlation, which is a fast algorithm. In the second step, the two-dimensional discrete wavelet transform (2D-DWT) is used to extract features from the hypotheses generated in the first step and to classify them as vehicles or non-vehicles. The choice of the classifier is very important due to the pivotal role it plays in the quality of the final results. Therefore, SVM and AdaBoost are the two classifiers used in this paper, and their results are compared thereafter. The experimental results are compared with existing systems and show that our proposed system performs well in terms of robustness and accuracy and can meet real-time requirements.
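As a hedged sketch of the hypothesis-verification step, the code below extracts level-1 2D-DWT coefficients from candidate image patches and classifies them with an SVM (AdaBoost could be substituted); the hypothesis-generation step via cross-correlation template matching is assumed to have produced the candidate patch. Patch size, wavelet choice, and the placeholder training data are assumptions.

```python
# Sketch of 2D-DWT feature extraction feeding a classifier for hypothesis
# verification. Training data are random placeholders for illustration only.
import numpy as np
import pywt
from sklearn.svm import SVC

def dwt_features(patch: np.ndarray) -> np.ndarray:
    """Flatten the level-1 Haar 2D-DWT sub-bands of an image patch into a feature vector."""
    cA, (cH, cV, cD) = pywt.dwt2(patch, "haar")
    return np.concatenate([c.ravel() for c in (cA, cH, cV, cD)])

rng = np.random.default_rng(2)
patches = rng.random((100, 32, 32))             # placeholder 32x32 candidate patches
labels = rng.integers(0, 2, size=100)           # synthetic vehicle / non-vehicle labels
X = np.stack([dwt_features(p) for p in patches])
clf = SVC(kernel="linear").fit(X, labels)       # AdaBoost could be swapped in here

candidate = rng.random((32, 32))                # a hypothesis from the template-matching step
print("vehicle" if clf.predict(dwt_features(candidate)[None, :])[0] == 1 else "non-vehicle")
```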


2021 ◽  
Author(s):  
Alireza Javaheri ◽  
Catarina Brites ◽  
Fernando Pereira ◽  
Joao Ascenso

Point cloud coding solutions have recently been standardized to address the needs of multiple application scenarios. The design and assessment of point cloud coding methods require reliable objective quality metrics to evaluate the level of degradation introduced by compression or any other type of processing. Several point cloud objective quality metrics have recently been proposed to reliably estimate human-perceived quality, including the so-called projection-based metrics. In this context, this paper proposes a joint geometry and color projection-based point cloud objective quality metric that addresses the critical weakness of this type of quality metric, i.e., the misalignment between the reference and degraded projected images. Moreover, the proposed point cloud quality metric exploits the best-performing 2D quality metrics in the literature to assess the quality of the projected images. The experimental results show that the proposed projection-based quality metric offers the best subjective-objective correlation performance in comparison with other metrics in the literature. The Pearson correlation gains over the D1-PSNR and D2-PSNR metrics are 17% and 14.2%, respectively, when data with all coding degradations are considered.
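A toy sketch of the projection-based idea: both point clouds are orthographically projected onto a 2D depth map and a standard 2D metric, PSNR in this simplified version, is computed between the projected images. The paper uses stronger 2D quality metrics and explicitly corrects the projection misalignment; those steps are omitted here, and all names and grid sizes are illustrative assumptions.

```python
# Simplified projection-based comparison: depth-map projection + 2D PSNR.
# The actual metric handles color, misalignment, and better 2D metrics.
import numpy as np

def project_depth(points: np.ndarray, grid: int = 64) -> np.ndarray:
    """Orthographic depth map: max z per (x, y) cell, points assumed in [0, 1]^3."""
    img = np.zeros((grid, grid))
    ij = np.clip((points[:, :2] * grid).astype(int), 0, grid - 1)
    for (i, j), z in zip(ij, points[:, 2]):
        img[i, j] = max(img[i, j], z)
    return img

def psnr(ref: np.ndarray, deg: np.ndarray) -> float:
    mse = np.mean((ref - deg) ** 2)
    return float(10 * np.log10(1.0 / (mse + 1e-12)))          # peak value 1.0

rng = np.random.default_rng(3)
reference = rng.random((5000, 3))
degraded = np.clip(reference + 0.01 * rng.standard_normal(reference.shape), 0, 1)
print(f"projected PSNR: {psnr(project_depth(reference), project_depth(degraded)):.1f} dB")
```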


With the passage of time and the growth of e-commerce, a new web world needs to be built in which users can share their ideas and opinions across different domains. There are thousands of websites that sell these various products. With the rapid growth in the number and availability of reviews, and the arrival of rich reviews for the products sold online, making the right choice among many products has become difficult for users. Consumers want to verify the authenticity and quality of the products, and what better way is there than to ask people who have already bought the product? That is where customer reviews come in. For popular products with thousands of reviews, however, users do not have the time or patience to read them all. Therefore, our application simplifies this task by analysing and summarizing all the reviews, helping the user determine what other consumers have experienced in purchasing this product. The system focuses on mining reviews from websites such as Amazon and extracts the reviews from these websites automatically. It also uses algorithms such as the Naïve Bayes classifier, Logistic Regression, and the SentiWordNet algorithm to classify reviews as good or bad. Finally, we used quality metric parameters to measure the performance of each algorithm.
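As an illustration of the classification step, the sketch below trains a bag-of-words Naïve Bayes classifier to label reviews as good or bad, which is one of the three approaches named in the text (Logistic Regression and SentiWordNet scoring are the others). The tiny training set is purely illustrative.

```python
# Minimal Naive Bayes review classifier on a toy bag-of-words corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_reviews = [
    "great product works perfectly",
    "excellent quality fast delivery",
    "terrible broke after one day",
    "waste of money very disappointed",
]
train_labels = ["good", "good", "bad", "bad"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_reviews, train_labels)

print(model.predict(["battery died quickly, very disappointed"]))   # -> ['bad']
```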


2020 ◽  
Author(s):  
Shermeen Nizami ◽  
Carolyn McGregor ◽  
James Robert Green

BACKGROUND Clinical decision support systems (CDSS) have the potential to lower patient mortality and morbidity rates. However, signal artifacts present in physiologic data affect the reliability and accuracy of CDSS. Moreover, patient monitors and other medical devices generate false alarms while processing artifactual data. This leads to alarm fatigue due to increased noise levels, staff disruption, and staff desensitization in busy critical care environments, adversely affecting the quality of care at the patient bedside. Hence, artifact detection (AD) algorithms play a crucial role in assessing the quality of physiologic data and mitigating the impact of these artifacts.

OBJECTIVE Recently, we developed a novel AD framework for integrating AD algorithms with CDSS. The framework was designed with features to support real-time implementation within critical care. In this research, we evaluate the framework and its features in a false alarm reduction study. We develop static framework component models followed by dynamic framework compositions to formulate four CDSS. We evaluate these formulations using neonatal patient data and validate the six framework features of flexibility, reusability, signal quality indicator standardization, scalability, customizability, and real-time implementation support.

METHODS We develop four exemplar static AD components with standardized requirements and provisions interfaces that facilitate interoperability of framework components. These AD components are mixed and matched into four different AD compositions to mitigate artifacts. Each AD composition is integrated with a novel static clinical event detection (CED) component to formulate and evaluate dynamic CDSS for arterial oxygen saturation (SpO2) alarm generation.

RESULTS At a sensitivity of 80%, the lowest achievable SpO2 false alarm rate is 39%. This demonstrates the utility of the framework in identifying the optimal dynamic composition to serve a given clinical need.

CONCLUSIONS The framework features, including reusability, signal quality indicator standardization, scalability, and customizability, allow novel CDSS formulations to be evaluated and compared. The optimal solution for a CDSS can then be hard-coded and integrated within clinical workflows for real-time implementation. Flexibility to serve different clinical needs and standardized component interoperability support the potential for real-time clinical implementation of AD.
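The sketch below illustrates the compositional idea behind the framework, with class names and interfaces that are assumptions rather than the authors' API: artifact-detection (AD) components expose a standardized processing interface and are chained ahead of a clinical event detection (CED) component that raises SpO2 alarms only on artifact-free samples.

```python
# Illustrative composition of AD and CED components with a shared interface.
# Components, thresholds, and the artifact rule are assumptions for the sketch.
from dataclasses import dataclass

@dataclass
class Sample:
    spo2: float
    quality_ok: bool = True          # standardized signal-quality indicator

class FlatlineAD:
    """Flags samples from a sensor that is stuck at a constant value."""
    def __init__(self) -> None:
        self.prev = None
    def process(self, s: Sample) -> Sample:
        if self.prev is not None and s.spo2 == self.prev:
            s.quality_ok = False
        self.prev = s.spo2
        return s

class SpO2AlarmCED:
    """Raises a desaturation alarm only on artifact-free samples."""
    def process(self, s: Sample) -> bool:
        return s.quality_ok and s.spo2 < 85.0

pipeline_ad = [FlatlineAD()]         # one AD composition; more components could be chained
ced = SpO2AlarmCED()
for value in [97.0, 97.0, 82.0]:     # the repeated sample is flagged as an artifact
    sample = Sample(value)
    for ad in pipeline_ad:
        sample = ad.process(sample)
    print(value, "alarm" if ced.process(sample) else "no alarm")
```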


2018 ◽  
Vol 8 (9) ◽  
pp. 1757-1762 ◽  
Author(s):  
Jie Zhang ◽  
Licai Yang ◽  
Zhonghua Su ◽  
Xueqin Mao ◽  
Kan Luo ◽  
...  

Background: Noise is unavoidable in physiological signal measurement systems, and poor-quality signals can affect the results of analysis and invalidate the subsequent clinical diagnosis. It is therefore necessary to perform signal quality assessment before interpreting the signal. Objective: In this work, we describe a method combining a support vector machine (SVM) and multi-feature fusion for assessing the signal quality of pulsatile waveforms, concentrating on the photoplethysmogram (PPG). Methods: PPG signals, each 5 min long, were recorded from 53 healthy volunteers. Signal quality in each heartbeat was manually annotated by a clinical expert, and the signal quality of each 5-s episode was then calculated automatically from the beat-level results, yielding a total of 13,294 5-s PPG segments. An SVM was then trained to classify clean/noisy PPG segments using a set of twelve signal quality features as input. Further experiments were carried out to verify the proposed SVM-based signal quality classification method. Results: An average accuracy of 87.90%, a sensitivity of 88.10%, and a specificity of 87.66% were obtained with 10-fold cross-validation. Conclusions: The signal quality of PPG signals can be accurately classified using the proposed method.
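A minimal sketch of the evaluation described above, assuming a synthetic stand-in for the twelve signal-quality features: an SVM is scored with 10-fold cross-validation on a feature matrix with one row per 5-s segment. The feature definitions and real labels are not reproduced here.

```python
# Sketch of SVM training plus 10-fold cross-validation on a synthetic
# 12-feature matrix standing in for the hand-crafted PPG quality features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = rng.random((500, 12))                                            # 500 segments x 12 features
y = (X[:, 0] + 0.1 * rng.standard_normal(500) > 0.5).astype(int)    # clean vs noisy (synthetic)

scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=10, scoring="accuracy")
print(f"mean 10-fold accuracy: {scores.mean():.3f}")
```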

