Artifact Correction in Short-Term HRV during Strenuous Physical Exercise

Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6372
Author(s):  
Aleksandra Królak ◽  
Tomasz Wiktorski ◽  
Magnus Friestad Bjørkavoll-Bergseth ◽  
Stein Ørn

Heart rate variability (HRV) analysis can be a useful tool to detect underlying heart or even general health problems. Currently, such analysis is usually performed in controlled or semi-controlled conditions. Since many of the typical HRV measures are sensitive to data quality, manual artifact correction is common in the literature, either as the sole method or in addition to various filters. With the proliferation of personal monitoring devices (PMDs) with continuous HRV analysis, an opportunity opens for HRV analysis in a new setting. However, current artifact correction approaches have several limitations that hamper the analysis of real-life HRV data. To address this issue, we propose an algorithm for automated artifact correction that has a minimal impact on HRV measures but can handle more artifacts than existing solutions. We verify this algorithm on two datasets: one collected during a recreational bicycle race and one in a laboratory, both using a PMD in the form of a GPS watch. The data include direct measurements of electrical myocardial signals using chest straps and, for the race dataset, direct measurements of power using a crank sensor, both paired with the watch. Early results suggest that the algorithm can correct more artifacts than existing solutions without the need for manual support or parameter tuning. At the same time, the error introduced to HRV measures for peak correction and shorter gaps is similar to the best existing solution (Kubios-inspired threshold-based cubic interpolation) and lower than that of the commonly used median filter. For longer gaps, cubic interpolation can in some cases yield a lower error in HRV measures, but the shape of the curve it generates matches the ground truth worse than our algorithm does. This suggests that further development of the proposed algorithm may also improve these results.
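As an illustration of the baseline the abstract compares against (threshold-based detection followed by cubic interpolation over corrupted RR intervals), the sketch below flags intervals that deviate from a local median and fills them with a cubic spline. The window size (11 beats) and the 25% relative threshold are hypothetical choices, not parameters taken from the paper or from Kubios.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def correct_rr_artifacts(rr_ms, rel_threshold=0.25):
    """Flag RR intervals deviating more than `rel_threshold` from a local
    median and replace them by cubic-spline interpolation over time.

    rr_ms : 1-D array of RR intervals in milliseconds.
    """
    rr = np.asarray(rr_ms, dtype=float)
    t = np.cumsum(rr)                      # beat times (ms), used as the x-axis

    # Local median over an 11-beat window as the expected "normal" RR value.
    half = 5
    local_med = np.array([
        np.median(rr[max(0, i - half):i + half + 1]) for i in range(len(rr))
    ])

    # An interval is treated as an artifact if it deviates too much
    # from the local median.
    artifact = np.abs(rr - local_med) > rel_threshold * local_med

    # Interpolate artifact beats from the surrounding good beats.
    good = ~artifact
    if artifact.any() and good.sum() >= 4:     # spline needs a few support points
        spline = CubicSpline(t[good], rr[good])
        rr_corrected = rr.copy()
        rr_corrected[artifact] = spline(t[artifact])
        return rr_corrected, artifact
    return rr, artifact
```

Downstream HRV measures such as RMSSD or SDNN would then be computed on the corrected RR series; the paper's point is that the choice of correction method changes these measures, especially for long gaps.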

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Narendra Narisetti ◽  
Michael Henke ◽  
Christiane Seiler ◽  
Astrid Junker ◽  
Jörn Ostermann ◽  
...  

High-throughput root phenotyping in soil has become an indispensable quantitative tool for assessing the effects of climatic factors and molecular perturbation on plant root morphology, development and function. To efficiently analyse large numbers of structurally complex soil-root images, advanced methods for automated image segmentation are required. Due to the often unavoidable overlap between the intensities of foreground and background regions, simple thresholding methods are generally not suitable for segmenting root regions. Higher-level cognitive models such as convolutional neural networks (CNNs) can segment roots from heterogeneous and noisy background structures; however, they require a representative set of manually segmented (ground-truth) images. Here, we present a GUI-based tool for fully automated quantitative analysis of root images using a pre-trained CNN model, which relies on an extension of the U-Net architecture. The developed CNN framework was designed to efficiently segment root structures of different size, shape and optical contrast on low-budget hardware systems. The CNN model was trained on a set of 6465 masks derived from 182 manually segmented near-infrared (NIR) maize root images. Our experimental results show that the proposed approach achieves a Dice coefficient of 0.87 and outperforms existing tools (e.g., SegRoot, with a Dice coefficient of 0.67), and that it applies not only to NIR but also to other imaging modalities and plant species, such as barley and Arabidopsis soil-root images from LED-rhizotron and UV imaging systems, respectively. In summary, the developed software framework enables users to efficiently analyse soil-root images in an automated manner (i.e., without manual interaction with the data or parameter tuning), providing quantitative plant scientists with a powerful analytical tool.
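The tool's pre-trained U-Net is not reproduced here; as a minimal sketch of how its reported segmentation quality is scored, the function below computes the Dice coefficient between a predicted binary root mask and a manually segmented ground-truth mask.

```python
import numpy as np

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    """Dice coefficient between two binary masks (values 0/1)."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)

# A Dice coefficient of 0.87, as reported for the proposed CNN, means the
# overlap between predicted and ground-truth root pixels is large relative
# to the total number of root pixels in both masks.
```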


2021 ◽  
Author(s):  
Shikha Suman ◽  
Ashutosh Karna ◽  
Karina Gibert

Hierarchical clustering is one of the most preferred choices for understanding the underlying structure of a dataset and defining typologies, with multiple applications in real life. Among existing clustering algorithms, the hierarchical family is one of the most popular, as it makes it possible to understand the inner structure of the dataset and yields the number of clusters as an output, unlike popular methods such as k-means. The granularity of the final clustering can be adjusted to the goals of the analysis. The number of clusters in a hierarchical method relies on the analysis of the resulting dendrogram. Experts have criteria to visually inspect the dendrogram and determine the number of clusters, but finding automatic criteria that imitate experts in this task is still an open problem, and dependence on an expert to cut the tree is a limitation in real applications in fields such as Industry 4.0 and additive manufacturing. This paper analyses several cluster validity indexes in the context of determining the suitable number of clusters in hierarchical clustering. A new Cluster Validity Index (CVI) is proposed that properly captures the implicit criteria used by experts when analysing dendrograms. The proposal has been applied to a range of datasets and validated against expert ground truth, outperforming state-of-the-art results while significantly reducing the computational cost.
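The new CVI itself is not defined in the abstract; the sketch below only illustrates the general procedure it automates: build a dendrogram, scan candidate cuts, and score each partition with a validity index. The silhouette score is used here purely as a stand-in for the proposed CVI.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.metrics import silhouette_score

def best_cut(X, max_k=10, method="ward"):
    """Build a dendrogram and pick the number of clusters that maximises a
    cluster validity index (silhouette used here as a stand-in CVI)."""
    Z = linkage(X, method=method)
    best_k, best_score, best_labels = None, -np.inf, None
    for k in range(2, max_k + 1):
        labels = fcluster(Z, t=k, criterion="maxclust")
        if len(np.unique(labels)) < 2:
            continue                      # degenerate cut, skip
        score = silhouette_score(X, labels)
        if score > best_score:
            best_k, best_score, best_labels = k, score, labels
    return best_k, best_labels, Z
```

Replacing the scoring line with the paper's CVI is exactly the kind of drop-in change the proposal targets: the tree and the cuts stay the same, only the criterion that imitates the expert changes.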


2019 ◽  
Author(s):  
Lisa Kroll ◽  
Nikolaus Böhning ◽  
Heidi Müßigbrodt ◽  
Maria Stahl ◽  
Pavel Halkin ◽  
...  

BACKGROUND: Agitation is common in geriatric patients with dementia (PWD) admitted to an emergency department (ED) and is associated with a higher risk of an unfavourable clinical course. Hence, monitoring of vital signs and movement is essential in these patients during their stay in the ED. Since PWD rarely tolerate fixed monitoring devices, non-contact monitoring systems might represent appropriate alternatives.
OBJECTIVE: To study the reliability of a non-contact monitoring system (NCMSys) and of a tent-like device ("Charité Dome", ChD) aimed at sheltering PWD from the busy ED environment, and to measure the effects of the ChD on the wellbeing and agitation of PWD.
METHODS: Both devices were attached to the patient's bed. Tests of the technical reliability and other safety issues of the NCMSys and the ChD were performed at the iDoc institute with six healthy volunteers. A feasibility study evaluating the reliability of the NCMSys with and without the ChD was performed in the real-life setting of an ED and on a geriatric-gerontopsychiatric ward. For the feasibility study, 19 patients were included (ten males and nine females; mean age 77.4 years, range 55-93), of whom 14 were PWD. Inclusion criteria for PWD were age ≥55 years, a dementia diagnosis, and written consent (by the patients themselves or by a custodian). Exclusion criteria were acute life-threatening situations and missing consent.
RESULTS: Heart rate, changes in movement and sound emissions were measured reliably by the NCMSys, whereas patient movements affected respiratory rate measurements. The ChD did not impact patients' vital signs or movements in our study setting. However, 53% of the PWD (7/13) and most of the patients without dementia (4/5) benefited from its use regarding their agitation and overall wellbeing.
CONCLUSIONS: The NCMSys and ChD work reliably in the clinical setting and have positive effects on agitation and wellbeing. The results of this feasibility study encourage prospective studies with longer durations to further evaluate this concept for monitoring and preventing agitation in PWD in the ED.
CLINICALTRIAL: ICTRP: "Charité-Dome-Study - DRKS00014737"


Information ◽  
2020 ◽  
Vol 11 (1) ◽  
pp. 53
Author(s):  
Jinfang Sheng ◽  
Ben Lu ◽  
Bin Wang ◽  
Jie Hu ◽  
Kai Wang ◽  
...  

Research on complex networks is a hot topic in many fields, and community detection is a complex and meaningful process within it that plays an important role in studying the characteristics of complex networks. Community structure is a common feature of networks; given a graph, the process of uncovering its community structure is called community detection. Many community detection algorithms based on different perspectives have been proposed, yet achieving stable and accurate community division remains a non-trivial task due to the difficulty of setting specific parameters, high randomness, and the lack of ground-truth information. In this paper, we explore a new decision-making method inspired by real-life communication and propose a preferential decision model based on dynamic relationships, applied to dynamic systems. We apply this model to the label propagation algorithm and present a community detection algorithm based on the preferential decision model, called CDPD. The model aims to reveal the topological structure and the hierarchical structure of networks. By analysing the structural characteristics of complex networks and mining the tightness between nodes, the priority of neighbour nodes is used to perform the preferential decision, and finally the information in the system reaches a stable state. In the experiments, we compared CDPD against eight algorithms and verified its performance on real-world and synthetic networks. The results show that CDPD not only performs better than most recent algorithms on most datasets, but is also well suited to community networks with ambiguous structure, especially sparse networks.
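CDPD's preferential decision rule is not spelled out in the abstract; as context, the sketch below shows the plain asynchronous label propagation baseline that CDPD builds on, where every node repeatedly adopts the most frequent label among its neighbours. CDPD replaces the random tie-breaking with its preference based on node tightness.

```python
import random
from collections import Counter
import networkx as nx

def label_propagation(G, max_iter=100, seed=0):
    """Plain asynchronous label propagation: each node repeatedly adopts the
    most frequent label among its neighbours until labels stabilise."""
    rng = random.Random(seed)
    labels = {v: v for v in G.nodes()}          # every node starts in its own community
    nodes = list(G.nodes())
    for _ in range(max_iter):
        rng.shuffle(nodes)
        changed = False
        for v in nodes:
            neigh = list(G.neighbors(v))
            if not neigh:
                continue
            counts = Counter(labels[u] for u in neigh)
            top = max(counts.values())
            best = [lab for lab, c in counts.items() if c == top]
            new_label = rng.choice(best)        # ties broken at random here
            if new_label != labels[v]:
                labels[v] = new_label
                changed = True
        if not changed:
            break
    # Group nodes by final label to obtain communities.
    communities = {}
    for v, lab in labels.items():
        communities.setdefault(lab, set()).add(v)
    return list(communities.values())

# Example on a small benchmark graph:
# G = nx.karate_club_graph(); print(label_propagation(G))
```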


Author(s):  
Flavio Bonfatti ◽  
Paola Daniela Monari ◽  
Luca Martinelli

This chapter presents a practical approach, and its technological implementation, for enabling small companies to exchange business documents in different formats and languages with minimal impact on their legacy systems and working practices. The proposed solution differs from the general-purpose or theoretical approaches reported in other chapters of this book, as it focuses on the basic interoperability requirements of small companies in their real life. Special attention is devoted to showing how to define a minimal reference ontology, use it to annotate the data fields in legacy systems, and map it onto existing standards in order to remove the cultural and technical obstacles that keep small companies from joining the global electronic market. These techniques have been studied and prototyped, and are presently being validated, in several EU-funded projects.


2020 ◽  
Author(s):  
Rui Fan ◽  
Hengli Wang ◽  
Bohuan Xue ◽  
Huaiyang Huang ◽  
Yuan Wang ◽  
...  

Over the past decade, significant efforts have been made to improve the trade-off between speed and accuracy of surface normal estimators (SNEs). This paper introduces an accurate and ultrafast SNE for structured range data. The proposed approach computes surface normals by performing just three filtering operations, namely two image gradient filters (in the horizontal and vertical directions, respectively) and a mean/median filter, on an inverse depth image or a disparity image. Despite its simplicity, no similar method exists in the literature. In our experiments, we created three large-scale synthetic datasets (easy, medium and hard) using 24 three-dimensional (3D) mesh models. Each mesh model is used to generate 1800-2500 pairs of 480x640-pixel depth images and the corresponding surface normal ground truth from different views. The average angular errors on the easy, medium and hard datasets are 1.6, 5.6 and 15.3 degrees, respectively. Our C++ and CUDA implementations achieve processing speeds of over 260 Hz and 21 kHz, respectively. The proposed SNE achieves better overall performance than all other existing computer vision-based SNEs. Our datasets and source code are publicly available at: sites.google.com/view/3f2n.
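A rough sketch of the described three-filter idea is given below: two gradient filters on the inverse depth image give the x and y normal components (up to a common scale), and the z component is estimated per neighbour and averaged (the "mean" variant of the third filter). The pinhole intrinsics fx, fy, cx, cy are assumed inputs, image borders and normal orientation are handled naively, and the authors' released code at the URL above is the authoritative implementation.

```python
import numpy as np

def three_filter_sne(depth, fx, fy, cx, cy):
    """Approximate surface-normal estimation from a depth image using three
    filters: horizontal/vertical gradients of the inverse depth for nx and ny,
    and a mean over the 4-neighbourhood for nz."""
    Z = depth.astype(float)
    h, w = Z.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))

    # Back-project pixels to 3-D points.
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy

    inv_Z = np.where(Z > 0, 1.0 / Z, 0.0)

    # Filters 1 and 2: image gradients of the inverse depth (central differences).
    gv, gu = np.gradient(inv_Z)              # gv: vertical, gu: horizontal
    nx = fx * gu
    ny = fy * gv

    # Filter 3: nz from the local planarity constraint, averaged over
    # the 4-neighbourhood (np.roll wraps at the borders; a real
    # implementation would mask them).
    nz_acc = np.zeros_like(Z)
    count = np.zeros_like(Z)
    for dv, du in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        Xs = np.roll(X, (dv, du), axis=(0, 1)) - X
        Ys = np.roll(Y, (dv, du), axis=(0, 1)) - Y
        Zs = np.roll(Z, (dv, du), axis=(0, 1)) - Z
        valid = np.abs(Zs) > 1e-9
        nz_i = np.where(valid, -(nx * Xs + ny * Ys) / np.where(valid, Zs, 1.0), 0.0)
        nz_acc += nz_i
        count += valid
    nz = nz_acc / np.maximum(count, 1)

    # Normalise to unit length (sign/orientation handling is simplified).
    n = np.stack([nx, ny, nz], axis=-1)
    norm = np.linalg.norm(n, axis=-1, keepdims=True)
    return n / np.maximum(norm, 1e-12)
```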


Author(s):  
Bakhan Tofiq Ahmed ◽  
Omar Younis Abdulhameed

Fingerprint recognition is a dominant form of biometrics due to its distinctiveness. This study aims to extract and select the best features of fingerprint images and to evaluate the strength of Shark Smell Optimization (SSO) and the Genetic Algorithm (GA) in the search space with a chosen set of metrics. The proposed model consists of seven phases: enrollment, image preprocessing using a weighted median filter, feature extraction using SSO, weight generation using the Chebyshev polynomial of the first kind (CPFK), feature selection using the GA, creation of a user database, and feature matching using Euclidean distance (ED). The effectiveness and performance of the proposed model's algorithms were evaluated on 150 real fingerprint images collected from university students with a ZKTeco scanner in Sulaimani city, Iraq. The system's performance was measured by three well-known error rate metrics: False Acceptance Rate (FAR), False Rejection Rate (FRR), and Correct Verification Rate (CVR). The experimental outcome showed that the proposed fingerprint recognition model is highly accurate, with a low FAR of 0.00, a low FRR of 0.00666, and a high CVR of 99.334%. This finding is useful for improving fingerprint-based biometric authentication and could also be applied to other topics such as fraud detection, e-payment, and other real-life authentication applications.
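The SSO feature extraction and CPFK weighting stages are not detailed in the abstract; as a minimal sketch of the final matching phase only, the function below compares a query feature vector against enrolled templates using Euclidean distance. The acceptance threshold is a hypothetical parameter, not a value from the paper.

```python
import numpy as np

def match_fingerprint(query_features, enrolled_db, threshold=0.5):
    """Match a query feature vector against enrolled templates using
    Euclidean distance; accept the closest identity if its distance is
    below the (hypothetical) threshold.

    enrolled_db : dict mapping user id -> 1-D feature vector.
    """
    query = np.asarray(query_features, dtype=float)
    best_id, best_dist = None, np.inf
    for user_id, template in enrolled_db.items():
        dist = np.linalg.norm(query - np.asarray(template, dtype=float))
        if dist < best_dist:
            best_id, best_dist = user_id, dist
    if best_dist <= threshold:
        return best_id, best_dist          # accepted
    return None, best_dist                 # rejected (counts towards FRR if genuine)
```

Raising the threshold trades FRR for FAR and vice versa, which is why the paper reports both error rates alongside the overall CVR.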


2009 ◽  
Author(s):  
David Doria

In recent years, Light Detection and Ranging (LiDAR) scanners have become more prevalent in the scientific community. They capture a "2.5-D" image of a scene by sending out thousands of laser pulses and using time-of-flight calculations to determine the distance to the first reflecting surface in the scene. Rather than setting up a collection of objects in real life and actually sending lasers into the scene, one can simply create a scene out of 3D models and "scan" it by casting rays at the models. This is a great resource for researchers who work with 3D model, surface, or point data and LiDAR data. The synthetic scanner can be used to produce data sets for which the ground truth is known, in order to ensure algorithms behave properly before moving to "real" LiDAR scans. Noise can also be added to the points to simulate a real LiDAR scan for researchers who do not have access to the very expensive equipment required to obtain real scans.
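A toy illustration of the idea, not the tool itself: cast rays from a scanner at the origin into an analytic scene (a single sphere standing in for a mesh model), keep the nearest intersection per ray as the "return", and add Gaussian range noise to mimic a real scan. A real synthetic scanner would intersect rays with triangle meshes instead.

```python
import numpy as np

def scan_sphere(center, radius, n_rays=1000, noise_sigma=0.01, seed=0):
    """Toy synthetic LiDAR scan: cast random rays from the origin at a sphere,
    keep the nearest intersection per ray, and add Gaussian range noise."""
    rng = np.random.default_rng(seed)
    center = np.asarray(center, dtype=float)

    # Random unit ray directions biased towards the sphere centre.
    dirs = rng.normal(size=(n_rays, 3)) * 0.3 + center / np.linalg.norm(center)
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

    # Ray-sphere intersection (scanner at the origin, assumed outside the sphere):
    # solve |t*d - c|^2 = r^2; the nearest root is t = b - sqrt(b^2 - (c.c - r^2)).
    b = dirs @ center
    disc = b ** 2 - (center @ center - radius ** 2)
    hit = (disc > 0) & (b > 0)
    t_true = b[hit] - np.sqrt(disc[hit])

    # Ground-truth points and their noisy "measured" counterparts.
    ground_truth = dirs[hit] * t_true[:, None]
    t_noisy = t_true + rng.normal(scale=noise_sigma, size=t_true.shape)
    noisy_scan = dirs[hit] * t_noisy[:, None]
    return ground_truth, noisy_scan

# gt, scan = scan_sphere(center=[0.0, 0.0, 5.0], radius=1.0)
```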

