quality metrics: Recently Published Documents

Total documents: 1547 (five years: 468)
H-index: 40 (five years: 6)

2022 · Vol 27 (2) · pp. 1-33
Author(s): Liu Liu, Sibren Isaacman, Ulrich Kremer

Many embedded environments require applications to produce outcomes under different, potentially changing, resource constraints. Relaxing application semantics through approximations enables trading off resource usage for outcome quality. Although quality is a highly subjective notion, previous work assumes given, fixed low-level quality metrics that often lack a strong correlation to a user’s higher-level quality experience. Users may also change their minds with respect to their quality expectations depending on the resource budgets they are willing to dedicate to an execution. This motivates the need for an adaptive application framework where users provide execution budgets and a customized quality notion. This article presents a novel adaptive program graph representation that enables user-level, customizable quality based on basic quality aspects defined by application developers. Developers also define application configuration spaces, with possible customization to eliminate undesirable configurations. At runtime, the graph enables the dynamic selection of the configuration with maximal customized quality within the user-provided resource budget. An adaptive application framework based on our novel graph representation has been implemented on Android and Linux platforms and evaluated on eight benchmark programs, four with fully customizable quality. Using custom quality instead of the default quality, users may improve their subjective quality experience value by up to 3.59×, with 1.76× on average under different resource constraints. Developers are able to exploit their application structure knowledge to define configuration spaces that are on average 68.7% smaller as compared to existing, structure-oblivious approaches. The overhead of dynamic reconfiguration averages less than 1.84% of the overall application execution time.
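The runtime selection described above, choosing the configuration with maximal customized quality that fits within the user-provided resource budget, can be sketched as a simple search over the configuration space. This is a hypothetical illustration of the selection step only, not the authors' graph-based implementation; the configuration names, costs, and quality aspects are invented for the example.

```python
# Hypothetical sketch of budget-constrained configuration selection.
# Each configuration has a resource cost and per-aspect quality scores;
# the user supplies a budget and a custom quality function over aspects.

def select_configuration(configs, budget, custom_quality):
    """Return the config with maximal custom quality whose cost fits the budget."""
    feasible = [c for c in configs if c["cost"] <= budget]
    if not feasible:
        return None
    return max(feasible, key=lambda c: custom_quality(c["aspects"]))

configs = [
    {"name": "low",  "cost": 10, "aspects": {"accuracy": 0.6,  "detail": 0.4}},
    {"name": "mid",  "cost": 25, "aspects": {"accuracy": 0.8,  "detail": 0.7}},
    {"name": "high", "cost": 60, "aspects": {"accuracy": 0.95, "detail": 0.9}},
]

# A user who weights accuracy more heavily than detail:
quality = lambda a: 0.8 * a["accuracy"] + 0.2 * a["detail"]
best = select_configuration(configs, budget=30, custom_quality=quality)
```

With a budget of 30, the "high" configuration is infeasible and "mid" wins on custom quality; in the paper this selection is driven by the adaptive program graph rather than a flat list.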


2022 · Vol 12
Author(s): Silvia Seoni, Simeon Beeckman, Yanlu Li, Soren Aasmul, Umberto Morbiducci, ...

Background: Laser-Doppler Vibrometry (LDV) is a laser-based technique for measuring the motion of moving targets with high spatial and temporal resolution. To demonstrate its use for the measurement of carotid-femoral pulse wave velocity, a prototype system was employed in a clinical feasibility study. Data were acquired for analysis without prior quality control; real-time application, however, will require a real-time assessment of signal quality. In this study, we (1) use template matching and matrix profile to assess the quality of these previously acquired signals; (2) analyze the nature and achievable quality of the signals acquired at the carotid and femoral measuring sites; and (3) explore models for automated classification of signal quality.

Methods: Laser-Doppler Vibrometry data were acquired in 100 subjects (50M/50F) and consisted of 4–5 sequences of 20-s recordings of skin displacement, differentiated twice to yield acceleration. Each recording consisted of data from 12 laser beams, yielding 410 carotid-femoral and 407 carotid-carotid recordings. Data quality was visually assessed on a 1–5 scale, and a subset of best-quality data was used to construct an acceleration template for each measuring site. The time-varying cross-correlation of the acceleration signals with the template was computed, and a quality metric was constructed from several features of this template matching. Next, the matrix-profile technique was applied to identify recurring features in the measured time series, and a similar quality metric was derived. The statistical distributions of the metrics and their correlation with basic clinical data were assessed. Finally, logistic-regression-based classifiers were developed and their ability to automatically classify LDV-signal quality was evaluated.

Results: Automated quality metrics correlated well with visual scores. Signal quality was negatively correlated with BMI for femoral recordings but not for carotid recordings. Logistic regression models based on both methods yielded an accuracy of at least 80% for our carotid and femoral recording data, reaching 87% for the femoral data.

Conclusion: Both template matching and matrix profile were found to be suitable methods for automated grading of LDV signal quality, generating quality metrics on par with the expert's assessment of signal quality. The classifiers developed from both quality metrics show potential for future real-time implementation.
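The template-matching idea, sliding a reference acceleration template along a recording and scoring the local correlation, can be illustrated with a minimal sketch. This is hypothetical code with synthetic signals; the study's actual quality metric combines several features of the match rather than a single peak correlation.

```python
import numpy as np

def template_match_quality(signal, template):
    """Maximum normalized cross-correlation of a template over a signal.
    Values near 1 indicate the template shape recurs in the signal."""
    t = (template - template.mean()) / (template.std() + 1e-12)
    n = len(template)
    best = -1.0
    for i in range(len(signal) - n + 1):
        w = signal[i:i + n]
        w = (w - w.mean()) / (w.std() + 1e-12)
        best = max(best, float(np.dot(w, t) / n))
    return best

rng = np.random.default_rng(0)
template = np.sin(np.linspace(0, 2 * np.pi, 50))      # stand-in for an acceleration template
clean = np.concatenate([np.zeros(30), template, np.zeros(30)])  # template embedded in silence
noisy = rng.normal(0.0, 1.0, size=clean.shape)        # pure noise, no template present
```

A recording containing the template scores close to 1, while pure noise scores markedly lower, which is the basis for turning the match into a quality grade.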


Author(s): Gareth D Hastings, Raymond A Applegate, Alexander W Schill, Chuan Hu, Daniel R Coates, ...

2022
Author(s): Matthias S Treder, Ryan Codrai, Kamen A Tsvetanov

Background: Generative Adversarial Networks (GANs) can synthesize brain images from image or noise input. So far, the gold standard for assessing the quality of the generated images has been human expert ratings. However, due to limitations of human assessment in terms of cost, scalability, and the limited sensitivity of the human eye to more subtle statistical relationships, a more automated approach towards evaluating GANs is required.

New method: We investigated to what extent visual quality can be assessed using image quality metrics, and we used group analysis and spatial independent components analysis to verify that the GAN reproduces multivariate statistical relationships found in real data. Reference human data was obtained by recruiting neuroimaging experts to assess real Magnetic Resonance (MR) images and images generated by a Wasserstein GAN. Image quality was manipulated by exporting images at different stages of GAN training.

Results: Experts were sensitive to changes in image quality as evidenced by ratings and reaction times, and the generated images reproduced group effects (age, gender) and spatial correlations moderately well. We also surveyed a number of image quality metrics which consistently failed to fully reproduce human data. While the metrics Structural Similarity Index Measure (SSIM) and Naturalness Image Quality Evaluator (NIQE) showed good overall agreement with human assessment for lower-quality images (i.e. images from early stages of GAN training), only a Deep Quality Assessment (QA) model trained on human ratings was sensitive to the subtle differences between higher-quality images.

Conclusions: We recommend a combination of group analyses, spatial correlation analyses, and both distortion metrics (SSIM, NIQE) and perceptual models (Deep QA) for a comprehensive evaluation and comparison of brain images produced by GANs.
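As an illustration of the distortion-metric side of such an evaluation, a global SSIM score between two images can be sketched as below. This is a deliberately simplified whole-image variant with synthetic data; practical use would rely on a windowed implementation such as scikit-image's `structural_similarity`, and NIQE and the Deep QA model are separate, more involved metrics.

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Simplified whole-image SSIM (no local windows)."""
    c1 = (0.01 * data_range) ** 2   # stabilizing constants from the SSIM definition
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(1)
real = rng.random((64, 64))                                    # stand-in for a real MR slice
identical = real.copy()                                        # perfect "generation"
distorted = np.clip(real + rng.normal(0, 0.2, real.shape), 0, 1)  # degraded "generation"
```

An identical image scores 1.0 and a distorted one scores lower, matching the intuition that SSIM tracks coarse degradation well; the abstract's point is that it loses sensitivity once generated images become close to real ones.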


Author(s): Paola Patricia Ariza-Colpas, Enrico Vicario, Shariq Aziz Butt, Emiro De-la-Hoz-Franco, Marlon Alberto Piñeres-Melo, ...

Background: Older adults in poor health, whether living at home or in geriatric residences, need support to remain active and productive. This calls for a combination of advanced methods for visual monitoring, optimization, pattern recognition, and learning that provide safe and comfortable environments, serve as a tool to facilitate the work of family members and caregivers, and give these adults autonomy in indoor environments.

Objective: To build a prediction model for activities of daily living using classification techniques and feature selection, contributing to this area of knowledge, especially in the field of health, and enabling accurate monitoring of the activities of the elderly or of people with disabilities. Such predictive analysis allows behavioural patterns to be identified in advance, so that actions can be taken to improve quality of life.

Method: The vanKasteren, CASAS Kyoto, and CASAS Aruba datasets, which vary in occupancy and in the number of activities of daily living to be identified, were used to validate a predictive model for recognising these activities in indoor environments.

Results: Twelve classifiers were implemented: Classification Via Regression, OneR, Attribute Selected, J48, Random SubSpace, RandomForest, RandomCommittee, Bagging, Random Tree, JRip, LMT, and REP Tree. They were compared using the precision and recall quality metrics to determine which best identify activities of daily living. In this experiment, the Classification Via Regression and OneR classifiers obtained the best results.

Conclusion: The classification-based predictive model proved effective: the Classification Via Regression and OneR classifiers achieved quality metrics above 90%, even as the datasets varied in occupancy and number of activities.
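Precision and recall, the quality metrics used above to compare the classifiers, can be computed per activity class from prediction counts. A minimal sketch with made-up activity labels (not data from the cited datasets):

```python
def precision_recall(y_true, y_pred, positive):
    """Precision and recall for one activity class, treated as the positive label."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if p != positive and t == positive)
    precision = tp / (tp + fp) if tp + fp else 0.0  # of predicted "cook", how many were right
    recall = tp / (tp + fn) if tp + fn else 0.0     # of actual "cook", how many were found
    return precision, recall

y_true = ["sleep", "cook", "sleep", "eat", "cook", "cook"]
y_pred = ["sleep", "cook", "cook",  "eat", "cook", "eat"]
p, r = precision_recall(y_true, y_pred, positive="cook")
```

Averaging these per-class scores across all activities gives the kind of aggregate figure (above 90% in the study) used to rank the twelve classifiers.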


2022 · Vol 197 · pp. 377-384
Author(s): Bramantyo Adhilaksono, Bambang Setiawan

2021 · pp. 147715352110580
Author(s): A Eissfeldt, TQ Khanh

Multichannel LED luminaires with more than three channels offer the advantage of varying the spectrum while keeping the chromaticity steady. However, the optimisation calculations for various quality metrics are a challenge for real-time implementation, especially given the limited resources of a luminaire's microcontroller. Here, we present a method in which a five-channel system is simulated by a quickly solvable three-channel system: virtual channels are defined, each consisting of two LED channels. An analysis of the influence of the parameterisation of the virtual valences on various quality metrics is presented, showing how these parameters must be set at the time of the mixing calculation in order to optimise the desired quality aspect. The mixing calculation can thus be carried out in real time without high hardware requirements and is suitable for further developments, for example, compensating for colour drift of the LEDs through sensor feedback.
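The idea of collapsing five physical channels into a quickly solvable three-channel system can be sketched as follows. The tristimulus values and blend parameters below are invented for illustration; each virtual channel is a fixed weighted blend of two physical channels, and the resulting 3×3 mixing system is solved for a target XYZ. Constraints such as non-negative drive levels, which a real luminaire must respect, are ignored here.

```python
import numpy as np

# Hypothetical XYZ tristimulus vectors of five LED channels (one column each).
leds = np.array([
    [0.9, 0.4, 0.3, 0.1, 0.2],   # X
    [0.3, 0.8, 0.9, 0.4, 0.1],   # Y
    [0.0, 0.1, 0.2, 0.9, 0.8],   # Z
])

def virtual_primaries(alpha1, alpha2):
    """Blend channel pairs (0,1) and (2,3) into two virtual channels; channel 4
    stays physical. The alphas are the tunable parameters that shift the
    spectrum (and hence the quality metrics) without changing chromaticity math."""
    v1 = alpha1 * leds[:, 0] + (1 - alpha1) * leds[:, 1]
    v2 = alpha2 * leds[:, 2] + (1 - alpha2) * leds[:, 3]
    v3 = leds[:, 4]
    return np.column_stack([v1, v2, v3])

target_xyz = np.array([0.5, 0.6, 0.4])
M = virtual_primaries(0.5, 0.5)
weights = np.linalg.solve(M, target_xyz)   # drive levels of the three virtual channels
```

Solving a 3×3 system is cheap enough for a microcontroller, while sweeping the alphas offline (or slowly online) is what allows a quality metric to be optimised at fixed chromaticity.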


IoT · 2021 · Vol 2 (4) · pp. 761-785
Author(s): Kosuke Ito, Shuji Morisaki, Atsuhiro Goto

This study proposes a security-quality-metrics method tailored to the Internet of Things (IoT) and evaluates the conformity of the proposed approach with pertinent cybersecurity regulations and guidelines for IoT. Cybersecurity incidents involving IoT devices have recently come to light; consequently, addressing IoT security has become a necessity. The ISO 25000 series is used for software; however, the concept of security as a quality factor has not been applied to IoT devices. Because software vulnerabilities were not treated as part of the device vendors' product liability, most vendors did not consider the security capability of IoT devices as part of their quality control. Furthermore, an appropriate IoT security-quality metric for vendors does not exist; instead, vendors have had to set their own security standards, which lack consistency and are difficult to justify. To address this problem, the authors propose a universal method for specifying IoT security-quality metrics on a globally accepted scale, inspired by the goal/question/metric (GQM) method. The method enables vendors to verify that their products conform to the requirements of existing baselines and certification programs, and helps vendors tailor their quality requirements to meet the given security requirements. IoT users would also be able to use these metrics to verify the security quality of IoT devices.
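The goal/question/metric decomposition the authors take inspiration from can be sketched as a small data structure: each goal is refined into questions, each question is answered by measurable metrics, and scores roll up. The goals, questions, and weights below are invented for illustration and are not the paper's actual metric set.

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    value: float          # measured score in [0, 1]
    weight: float = 1.0

@dataclass
class Question:
    text: str
    metrics: list = field(default_factory=list)

    def score(self):
        total = sum(m.weight for m in self.metrics)
        return sum(m.value * m.weight for m in self.metrics) / total

@dataclass
class Goal:
    purpose: str
    questions: list = field(default_factory=list)

    def score(self):
        return sum(q.score() for q in self.questions) / len(self.questions)

# Hypothetical security goal for an IoT device vendor.
goal = Goal("Ensure secure firmware update", questions=[
    Question("Are updates authenticated?", metrics=[
        Metric("signature verification enabled", 1.0),
        Metric("rollback protection", 0.5),
    ]),
    Question("Are update channels protected?", metrics=[
        Metric("TLS on update endpoint", 1.0),
    ]),
])
```

Rolling metric scores up through questions to goals is what lets a vendor report one comparable security-quality figure per goal against a baseline or certification requirement.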

