PPGTempStitch: A MATLAB Toolbox for Augmenting Annotated Photoplethysmogram Signals

Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 4007
Author(s):  
Qunfeng Tang ◽  
Zhencheng Chen ◽  
Carlo Menon ◽  
Rabab Ward ◽  
Mohamed Elgendi

An annotated photoplethysmogram (PPG) is required when evaluating PPG algorithms that have been developed to detect the onset and systolic peaks of PPG waveforms. However, few publicly accessible PPG datasets exist in which the onset and systolic peaks of the waveforms are annotated. Therefore, this study developed a MATLAB toolbox that stitches predetermined annotated PPGs in a random manner to generate a long, annotated PPG signal. With this toolbox, any combination of four annotated PPG templates that represent regular, irregular, fast rhythm, and noisy PPG waveforms can be stitched together to generate a long, annotated PPG. Furthermore, this toolbox can simulate real-life PPG signals by introducing different noise levels and PPG waveforms. The toolbox can implement two stitching methods: one based on the systolic peak and the other on the onset. Additionally, cubic spline interpolation is used to smooth the waveform around the stitching point, and a skewness index is used as a signal quality index to select the final signal output based on the stitching method used. The developed toolbox is free and open-source software, and a graphical user interface is provided. The method of synthesizing by stitching introduced in this paper is a data augmentation strategy that can help researchers significantly increase the size and diversity of annotated PPG signals available for training and testing different feature extraction algorithms.
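The core stitching operation is easy to illustrate. Below is a minimal Python sketch (the toolbox itself is MATLAB) that joins two annotated templates at their systolic peaks, re-interpolates a short window around the joint with a cubic spline, and scores the output with a skewness-based signal quality index; the function names, window sizes, and toy templates are illustrative assumptions, not the PPGTempStitch implementation.

```python
# Minimal sketch of peak-based stitching: not the PPGTempStitch code.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.stats import skew

def stitch_at_peak(seg_a, peak_a, seg_b, peak_b, smooth_win=5):
    """Concatenate seg_a (up to its systolic peak) with seg_b (from its peak on),
    then re-interpolate a small window around the joint with a cubic spline."""
    stitched = np.concatenate([seg_a[:peak_a], seg_b[peak_b:]])
    joint = peak_a  # index of the stitching point in the combined signal
    lo = max(joint - smooth_win, 3)
    hi = min(joint + smooth_win, len(stitched) - 4)
    # Anchor points taken just outside the smoothing window on both sides.
    anchors = np.r_[np.arange(lo - 3, lo), np.arange(hi + 1, hi + 4)]
    spline = CubicSpline(anchors, stitched[anchors])
    stitched[lo:hi + 1] = spline(np.arange(lo, hi + 1))
    return stitched

def skewness_sqi(signal):
    """Skewness of the amplitude distribution, used as a simple quality score."""
    return skew(signal)

# Example with two toy 'templates' and hypothetical annotated peak indices.
t = np.linspace(0, 1, 100)
template_a = np.sin(np.pi * t) ** 2
template_b = 0.9 * np.sin(np.pi * t) ** 2
long_ppg = stitch_at_peak(template_a, 50, template_b, 50)
print(len(long_ppg), skewness_sqi(long_ppg))
```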

2021 ◽  
Vol 11 (10) ◽  
pp. 4554
Author(s):  
João F. Teixeira ◽  
Mariana Dias ◽  
Eva Batista ◽  
Joana Costa ◽  
Luís F. Teixeira ◽  
...  

The scarcity of balanced and annotated datasets has been a recurring problem in medical image analysis. Several researchers have tried to fill this gap by synthesizing datasets with generative adversarial networks (GANs). Breast magnetic resonance imaging (MRI) provides complex, texture-rich medical images with the same annotation-shortage issues, for which, to the best of our knowledge, no previous work has tried synthesizing data. Within this context, our work addresses the problem of synthesizing breast MRI images from corresponding annotations and evaluates the impact of this data augmentation strategy on a semantic segmentation task. We explored variations of image-to-image translation using conditional GANs, namely fitting the generator’s architecture with residual blocks and experimenting with cycle-consistency approaches. We studied the impact of these changes on visual verisimilitude and on how a U-Net segmentation model is affected by the use of synthetic data. We achieved sufficiently realistic-looking breast MRI images and maintained a stable segmentation score even when completely replacing the dataset with the synthetic set. Our results were promising, especially for the Pix2PixHD and Residual CycleGAN architectures.
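For readers unfamiliar with the residual-block modification, the PyTorch sketch below shows the kind of block that can be fitted into a pix2pix-style conditional-GAN generator; the channel count, padding, and normalization choices are illustrative assumptions rather than the authors’ exact architecture.

```python
# A generic residual block, as commonly inserted into image-to-image generators.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.ReflectionPad2d(1),
            nn.Conv2d(channels, channels, kernel_size=3),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.ReflectionPad2d(1),
            nn.Conv2d(channels, channels, kernel_size=3),
            nn.InstanceNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Skip connection: the block learns a residual on top of its input.
        return x + self.body(x)

# Example: a 256-channel feature map keeps its shape through the block.
features = torch.randn(1, 256, 64, 64)
print(ResidualBlock(256)(features).shape)  # torch.Size([1, 256, 64, 64])
```

The skip connection is what makes such blocks attractive for translation tasks: the generator only has to learn a correction on top of the features it already carries.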


2021 ◽  
Vol 5 (3) ◽  
pp. 1-10
Author(s):  
Melih Öz ◽  
Taner Danışman ◽  
Melih Günay ◽  
Esra Zekiye Şanal ◽  
Özgür Duman ◽  
...  

The human eye contains valuable information about an individual’s identity and health. Therefore, segmenting the eye into distinct regions is an essential step towards gathering this useful information precisely. The main challenges in segmenting the human eye include low-light conditions, reflections on the eye, variations in the eyelid, and head positions that make an eye image hard to segment. For this reason, deep neural networks are preferred, owing to their success in segmentation problems. However, deep neural networks need a large amount of manually annotated data for training. Manual annotation is a labor-intensive task, and to tackle this problem, we used data augmentation methods to improve the synthetic data. In this paper, we explore whether, with limited data, performance can be enhanced by using similar-context data combined with image augmentation methods. Our training set consists of 3D synthetic eye images generated with the UnityEyes application, and our test set consists of manually annotated real-life eye images. We examined the effect of using synthetic eye images with the Deeplabv3+ network under different conditions, applying image augmentation methods to the synthetic data. According to our experiments, the network trained with processed synthetic images alongside real-life images produced better mIoU results than the network trained only with the real-life images in the Base dataset. We also observed an mIoU increase on the test set we created from MICHE II competition images.
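As a hedged illustration of the augmentation step, the sketch below applies photometric transformations to a synthetic eye image with torchvision; photometric-only transforms are chosen here so the segmentation masks remain valid without modification. The exact augmentation set used in the paper is not reproduced.

```python
# Illustrative photometric augmentation of a synthetic eye image (assumed setup).
import numpy as np
from PIL import Image
from torchvision import transforms

synthetic_augment = transforms.Compose([
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.3, hue=0.05),
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),
    transforms.ToTensor(),
])

# Stand-in for a UnityEyes frame; in practice a rendered synthetic image is loaded here.
frame = Image.fromarray(np.random.randint(0, 255, (120, 160, 3), dtype=np.uint8))
augmented = synthetic_augment(frame)
print(augmented.shape)  # torch.Size([3, 120, 160])
```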


Author(s):  
Chanjal C

Predicting the relevance between two given videos with respect to their visual content is a key component of content-based video recommendation and retrieval. Applications include video recommendation, video annotation, category or near-duplicate video retrieval, video copy detection, and so on. Previous works estimate video relevance from the textual content of videos, which leads to poor performance. The proposed method is feature re-learning for video relevance prediction, which focuses on the visual content to predict the relevance between two videos. A given feature is projected into a new space by an affine transformation. Unlike previous works that use a standard triplet ranking loss, the projection is optimized by a novel negative-enhanced triplet ranking loss. To generate more training data, a data augmentation strategy is proposed that works directly on video features. This multi-level augmentation strategy benefits the feature re-learning and can be flexibly applied to frame-level or video-level features. The loss function also considers the absolute similarity of positive pairs to supervise the feature re-learning process, and a new formula is introduced for video relevance computation.
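A minimal sketch of the core idea is given below, under stated assumptions: precomputed video features are passed through an affine projection and trained with a triplet ranking loss plus a term on the absolute similarity of positive pairs. The paper’s exact negative-enhanced formulation is not reproduced; dimensions, margins, and weights are illustrative.

```python
# Feature re-learning sketch: affine projection + triplet loss with a positive-pair term.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AffineProjection(nn.Module):
    """Projects a precomputed video feature into a new space (Wx + b)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x):
        return F.normalize(self.linear(x), dim=-1)

def relearning_loss(anchor, positive, negative, margin=0.2, alpha=0.5):
    pos_sim = F.cosine_similarity(anchor, positive)
    neg_sim = F.cosine_similarity(anchor, negative)
    triplet = F.relu(margin + neg_sim - pos_sim).mean()
    # Simplified extra term rewarding high absolute similarity of positive pairs.
    positive_term = (1.0 - pos_sim).mean()
    return triplet + alpha * positive_term

# Example with random video-level features (hypothetical 2048-d inputs).
proj = AffineProjection(2048, 512)
anchor, positive, negative = (proj(torch.randn(8, 2048)) for _ in range(3))
loss = relearning_loss(anchor, positive, negative)
loss.backward()
print(float(loss))
```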


2021 ◽  
pp. 484-496
Author(s):  
Xiangyu Wei ◽  
Meifei Chen ◽  
Manxi Wu ◽  
Xiaowei Zhang ◽  
Bin Hu

2019 ◽  
Vol 8 (9) ◽  
pp. 390 ◽  
Author(s):  
Kun Zheng ◽  
Mengfei Wei ◽  
Guangmin Sun ◽  
Bilal Anas ◽  
Yu Li

Vehicle detection based on very high-resolution (VHR) remote sensing images is beneficial in many fields such as military surveillance, traffic control, and social/economic studies. However, the intricate details about the vehicle and the surrounding background provided by VHR images require sophisticated analysis based on massive data samples, while the amount of reliably labeled training data is limited. In practice, data augmentation is often leveraged to resolve this conflict. The traditional data augmentation strategy uses a combination of rotation, scaling, flipping, and similar transformations, and has limited capability to capture the essence of the feature distribution and improve data diversity. In this study, we propose a learning method named Vehicle Synthesis Generative Adversarial Networks (VS-GANs) to generate annotated vehicles from remote sensing images. The proposed framework has one generator and two discriminators, which try to synthesize realistic vehicles and learn the background context simultaneously. The method can quickly generate high-quality annotated vehicle data samples and greatly helps in the training of vehicle detectors. Experimental results show that the proposed framework can synthesize vehicles and their background images with variations and different levels of detail. Compared with traditional data augmentation methods, the proposed method significantly improves the generalization capability of vehicle detectors. Finally, the contribution of VS-GANs to vehicle detection in VHR remote sensing images was demonstrated in experiments on the UCAS-AOD and NWPU VHR-10 datasets using up-to-date target detection frameworks.
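The one-generator, two-discriminator arrangement can be sketched as follows: one discriminator judges the synthesized vehicle patch, the other judges the patch composited into its background. The modules below are small placeholders and the compositing step is a crude stand-in, so this is an architectural skeleton rather than the VS-GANs implementation.

```python
# Skeleton of a one-generator / two-discriminator adversarial update (placeholders).
import torch
import torch.nn as nn

class TinyG(nn.Module):
    """Placeholder generator mapping noise to a 16x16 vehicle patch."""
    def __init__(self, z_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 64, 4, 1, 0), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, 4, 0), nn.Tanh(),
        )
    def forward(self, z):
        return self.net(z)

class TinyD(nn.Module):
    """Placeholder patch discriminator returning a grid of logits."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, 2, 1),
        )
    def forward(self, x):
        return self.net(x)

G, D_vehicle, D_context = TinyG(), TinyD(), TinyD()
bce = nn.BCEWithLogitsLoss()

z = torch.randn(4, 64, 1, 1)
fake_patch = G(z)
background = torch.randn(4, 3, 16, 16)            # hypothetical background crops
composited = 0.5 * fake_patch + 0.5 * background  # crude stand-in for compositing

# Generator update: try to fool both discriminators (discriminator updates omitted).
pred_vehicle = D_vehicle(fake_patch)
pred_context = D_context(composited)
g_loss = bce(pred_vehicle, torch.ones_like(pred_vehicle)) + \
         bce(pred_context, torch.ones_like(pred_context))
g_loss.backward()
print(float(g_loss))
```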


2020 ◽  
Vol 496 (3) ◽  
pp. 3553-3571
Author(s):  
Benjamin E Stahl ◽  
Jorge Martínez-Palomera ◽  
WeiKang Zheng ◽  
Thomas de Jaeger ◽  
Alexei V Filippenko ◽  
...  

We present deepSIP (deep learning of Supernova Ia Parameters), a software package for measuring the phase and – for the first time using deep learning – the light-curve shape of a Type Ia supernova (SN Ia) from an optical spectrum. At its core, deepSIP consists of three convolutional neural networks trained on a substantial fraction of all publicly available low-redshift SN Ia optical spectra, onto which we have carefully coupled photometrically derived quantities. We describe the accumulation of our spectroscopic and photometric data sets, the cuts taken to ensure quality, and our standardized technique for fitting light curves. These considerations yield a compilation of 2754 spectra with photometrically characterized phases and light-curve shapes. Though such a sample is significant in the SN community, it is small by deep-learning standards, where networks routinely have millions or even billions of free parameters. We therefore introduce a data-augmentation strategy that meaningfully increases the size of the subset we allocate for training while prioritizing model robustness and telescope agnosticism. We demonstrate the effectiveness of our models by deploying them on a sample unseen during training and hyperparameter selection, finding that Model I identifies spectra that have a phase between −10 and 18 d and light-curve shape, parametrized by Δm15, between 0.85 and 1.55 mag with an accuracy of 94.6 per cent. For those spectra that do fall within the aforementioned region in phase–Δm15 space, Model II predicts phases with a root-mean-square error (RMSE) of 1.00 d and Model III predicts Δm15 values with an RMSE of 0.068 mag.
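As a generic illustration of spectrum-level augmentation (not deepSIP’s actual recipe), the sketch below perturbs a one-dimensional spectrum with flux-proportional noise and random trimming of the wavelength range, the kind of operation that encourages robustness to noise level and wavelength coverage across telescopes.

```python
# Generic spectrum augmentation sketch: noise injection plus random range trimming.
import numpy as np

rng = np.random.default_rng(0)

def augment_spectrum(wave, flux, noise_frac=0.02, max_trim_frac=0.1):
    """Return a perturbed copy of (wave, flux): add flux-proportional Gaussian noise
    and randomly trim up to max_trim_frac of the samples from each end."""
    flux_aug = flux + rng.normal(0.0, noise_frac * np.abs(flux))
    n = len(wave)
    lo = rng.integers(0, int(max_trim_frac * n) + 1)
    hi = n - rng.integers(0, int(max_trim_frac * n) + 1)
    return wave[lo:hi], flux_aug[lo:hi]

# Example on a toy spectrum (hypothetical values, wavelengths in angstroms).
wave = np.linspace(3500.0, 9000.0, 2000)
flux = 1.0 - 0.5 * np.exp(-((wave - 6150.0) / 300.0) ** 2)  # toy absorption feature
wave_aug, flux_aug = augment_spectrum(wave, flux)
print(len(wave_aug))
```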

