annotation method
Recently Published Documents

TOTAL DOCUMENTS: 104 (FIVE YEARS: 36)
H-INDEX: 7 (FIVE YEARS: 2)

2022 ◽  
Vol 40 (1) ◽  
pp. 71-82
Author(s):  
Shogo Okano ◽  
Tatsuhito Makino ◽  
Kosei Demura

Sensors ◽  
2021 ◽  
Vol 21 (24) ◽  
pp. 8442
Author(s):  
Esben Lykke Skovgaard ◽  
Jesper Pedersen ◽  
Niels Christian Møller ◽  
Anders Grøntved ◽  
Jan Christian Brønd

With the emergence of machine learning for the classification of sleep and other human behaviors from accelerometer data, the need for correctly annotated data is higher than ever. We present and evaluate a novel method for the manual annotation of in-bed periods in accelerometer data using the open-source software Audacity®, and we compare the method to the EEG-based sleep monitoring device Zmachine® Insight+ and self-reported sleep diaries. To evaluate the manual annotation method, we calculated the inter- and intra-rater agreement and the agreement with Zmachine and the sleep diaries using intraclass correlation coefficients and Bland–Altman analysis. Our results showed excellent inter- and intra-rater agreement and excellent agreement with Zmachine and the sleep diaries. The Bland–Altman limits of agreement were generally around ±30 min for the comparison between the manual annotation and the Zmachine timestamps for the in-bed period, and the mean bias was minuscule. We conclude that the presented manual annotation method is a viable option for annotating in-bed periods in accelerometer data, which makes it possible to qualify datasets that lack labels or sleep records.
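The Bland–Altman quantities quoted above follow directly from the paired differences: the bias is their mean, and the 95% limits of agreement are the bias ± 1.96 standard deviations. A minimal sketch in Python, using made-up paired in-bed timestamps rather than the study's data:

```python
import numpy as np

def bland_altman(manual, reference):
    """Bland-Altman agreement between two sets of paired measurements.

    `manual` and `reference` are paired in-bed timestamps (e.g.,
    minutes since midnight) from the annotation method and from a
    reference device; the values below are illustrative only.
    """
    manual = np.asarray(manual, dtype=float)
    reference = np.asarray(reference, dtype=float)
    diff = manual - reference                    # per-pair differences
    bias = diff.mean()                           # mean bias
    sd = diff.std(ddof=1)                        # SD of the differences
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement
    return bias, loa

# Example: differences of a few minutes yield narrow limits of agreement.
bias, (lo, hi) = bland_altman([412, 395, 430], [410, 399, 428])
print(f"bias = {bias:.1f} min, LoA = [{lo:.1f}, {hi:.1f}] min")
```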


Information ◽  
2021 ◽  
Vol 12 (12) ◽  
pp. 519
Author(s):  
Inma García-Pereira ◽  
Pablo Casanova-Salas ◽  
Jesús Gimeno ◽  
Pedro Morillo ◽  
Dirk Reiners

Augmented Reality (AR) annotations are a powerful means of communication when collaborators cannot be present in a given environment at the same time. However, this situation presents several challenges, for example: how to record the AR annotations for later consumption, how to align the virtual and real worlds in unprepared environments, or how to offer the annotations to users with different AR devices. In this paper, we present a cross-device AR annotation method that allows users to create and display annotations asynchronously in environments without the need for prior preparation (AR markers, point cloud capture, etc.). This is achieved through an easy user-assisted calibration process and a data model that allows any type of annotation to be stored on any device. The experimental study carried out with 40 participants verified our two hypotheses: we are able to visualize AR annotations in indoor environments without prior preparation regardless of the device used, and the overall usability of the system is satisfactory.
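The abstract does not reproduce the data model itself; as a purely hypothetical illustration of what a device-agnostic annotation record of this kind could look like (every field name below is an assumption, not the authors' schema):

```python
from dataclasses import dataclass, field
from typing import Any, Dict, Tuple

@dataclass
class ARAnnotation:
    """Hypothetical device-agnostic AR annotation record."""
    annotation_id: str
    author: str
    created_at: str                    # ISO-8601 timestamp, for asynchronous consumption
    anchor_frame: str                  # ID of the user-assisted calibration frame
    pose: Tuple[float, ...]            # position + orientation relative to that frame
    payload_type: str                  # e.g. "text", "audio", "sketch"
    payload: Dict[str, Any] = field(default_factory=dict)  # type-specific content
```

Storing the pose relative to a shared calibration frame, rather than in device-native coordinates, is what would allow a record created on one device to be displayed on another.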


Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 6996
Author(s):  
Boyu Kuang ◽  
Zeeshan A. Rana ◽  
Yifan Zhao

Sky and ground are two essential semantic components in computer vision, robotics, and remote sensing, and sky and ground segmentation has become increasingly popular. This research proposes a sky and ground segmentation framework for rover navigation vision by adopting weak supervision and transfer learning technologies. A new sky and ground segmentation neural network (network in U-shaped network, NI-U-Net) and a conservative annotation method are proposed. The pre-training process achieves the best results on a popular open benchmark (the Skyfinder dataset) across seven metrics compared to the state-of-the-art: 99.232%, 99.211%, 99.221%, 99.104%, 0.0077, 0.0427, and 98.223% on accuracy, precision, recall, dice score (F1), misclassification rate (MCR), root mean squared error (RMSE), and intersection over union (IoU), respectively. The conservative annotation method achieves superior performance with limited manual intervention, and NI-U-Net operates at 40 frames per second (FPS), maintaining the real-time property. The proposed framework fills the gap between laboratory results (with rich, ideal data) and practical application (in the wild), providing essential semantic information (sky and ground) for rover navigation vision.
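All seven reported metrics can be derived from the pixel-wise confusion counts of a binary mask. A generic sketch, not the NI-U-Net evaluation code (note that RMSE is computed here on the hard masks, whereas the paper may compute it on the network's soft outputs):

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Pixel-wise metrics for a binary sky/ground mask.

    `pred` and `gt` are boolean arrays (True = sky); illustrative
    inputs, not the NI-U-Net outputs.
    """
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    tp = np.sum(pred & gt)          # sky predicted as sky
    tn = np.sum(~pred & ~gt)        # ground predicted as ground
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    n = pred.size
    accuracy = (tp + tn) / n
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    dice = 2 * tp / (2 * tp + fp + fn)   # F1 for binary masks
    mcr = (fp + fn) / n                  # misclassification rate
    rmse = np.sqrt(np.mean((pred.astype(float) - gt.astype(float)) ** 2))
    iou = tp / (tp + fp + fn)
    return accuracy, precision, recall, dice, mcr, rmse, iou
```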


2021 ◽  
Author(s):  
Yuqi Fang ◽  
Delong Zhu ◽  
Niyun Zhou ◽  
Li Liu ◽  
Jianhua Yao

Author(s):  
V. V. Sajithvariyar ◽  
S. Aswin ◽  
V. Sowmya ◽  
K. P. Soman ◽  
R. Sivanpillai ◽  
...  

Abstract. Deep learning (DL) models require timely updates to maintain their reliability and robustness in prediction, classification, and segmentation tasks. When a deep learning model is tested only on a limited test set, its drawbacks will not be revealed. Every deep learning baseline model needs timely updates through incorporating more data, changing the architecture, and tuning hyperparameters. This work focuses on updating the Conditional Generative Adversarial Network (C-GAN)-based epiphyte identification deep learning model by incorporating four different GAN generator architectures and two different loss functions. The four generator architectures used in this task are ResNet-6, ResNet-9, ResNet-50, and ResNet-101. A new annotation method called background-removed annotation was tested to analyse the improvement in the epiphyte identification protocol. All the results obtained from the model by changing the above parameters are reported using two common evaluation metrics. Based on the parameter tuning experiment, ResNet-6 and ResNet-9 with binary cross-entropy (BCE) as the loss function attained higher scores; ResNet-6 with mean squared error (MSE) as the loss function also performed well. The new annotation method, which removes the background, had minimal effect on identifying the epiphytes.
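As a schematic illustration of the loss-function swap described above, the generator's adversarial objective can be switched between BCE and an MSE (least-squares GAN style) criterion. A sketch assuming the discriminator returns raw logits, which the abstract does not state:

```python
import torch
import torch.nn as nn

def generator_adversarial_loss(disc_logits_on_fake: torch.Tensor,
                               loss_name: str = "bce") -> torch.Tensor:
    """Generator adversarial loss under two objectives (sketch only)."""
    target_real = torch.ones_like(disc_logits_on_fake)
    if loss_name == "bce":
        # Standard GAN generator loss: push D(fake) toward "real".
        return nn.BCEWithLogitsLoss()(disc_logits_on_fake, target_real)
    if loss_name == "mse":
        # Least-squares (LSGAN-style) variant of the same objective.
        return nn.MSELoss()(torch.sigmoid(disc_logits_on_fake), target_real)
    raise ValueError(f"unknown loss: {loss_name}")
```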


Microscopy ◽  
2021 ◽  
Author(s):  
Kohki Konishi ◽  
Takao Nonaka ◽  
Shunsuke Takei ◽  
Keisuke Ohta ◽  
Hideo Nishioka ◽  
...  

Abstract. Three-dimensional (3D) observation of biological samples using serial-section electron microscopy is widely used. However, organelle segmentation requires a significant amount of manual time, and several studies have therefore been conducted to improve its efficiency. One promising method is 3D deep learning (DL), which is highly accurate; however, creating training data for 3D DL still requires manual time and effort. In this study, we developed a highly efficient integrated image segmentation tool that combines stepwise DL with manual correction. The tool has four functions: efficient tracers for annotation, model training/inference for organelle segmentation using a lightweight convolutional neural network, efficient proofreading, and model refinement. We applied this tool to increase the training data step by step (the stepwise annotation method) to segment the mitochondria in cells of the cerebral cortex. We found that the stepwise annotation method reduced the manual operation time by one-third compared with the fully manual method, in which all the training data were created manually. Moreover, we demonstrated that the F1 score, a metric of segmentation accuracy, was 0.9 when the 3D DL model was trained with these training data. The stepwise annotation method using this tool and the 3D DL model improved the segmentation efficiency for various organelles.
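A high-level sketch of the stepwise loop as described: each round trains on all labels accumulated so far, infers on the next block of data, and keeps only the manually proofread result as new training data. The train/infer/proofread callables are hypothetical stand-ins, not the tool's actual API:

```python
from typing import Callable, List, Sequence

def stepwise_annotation(blocks: Sequence, seed_labels: List,
                        train: Callable, infer: Callable,
                        proofread: Callable):
    """Schematic stepwise annotation loop (hypothetical API)."""
    labels, model = list(seed_labels), None
    for block in blocks[len(labels):]:
        # Train on everything labeled so far, propose labels for the
        # next block, and keep only the manually corrected result.
        model = train(model, blocks[:len(labels)], labels)
        labels.append(proofread(infer(model, block)))
    return model, labels
```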


2021 ◽  
Vol 11 (13) ◽  
pp. 5931
Author(s):  
Ji’an You ◽  
Zhaozheng Hu ◽  
Chao Peng ◽  
Zhiqiang Wang

Large amounts of high-quality image data are the basis and premise of high-accuracy object detection with convolutional neural networks (CNNs), and it is challenging to collect varied, high-quality ship image data in the marine environment. To address this, a novel CNN-based method is proposed to generate large numbers of high-quality ship images. We obtained ship images with different perspectives and different sizes by adjusting the ships' postures and sizes in three-dimensional (3D) simulation software, and then transformed the 3D ship data into 2D ship images according to the principle of pinhole imaging. We selected specific experimental scenes as background images, and the target ships of the 2D ship images were superimposed onto the background images to generate "Simulation–Real" ship images (named SRS images hereafter). Additionally, an image annotation method based on SRS images was designed. Finally, a CNN-based target detection algorithm was used for training and testing on the generated SRS images. The proposed method can quickly generate a large number of high-quality ship image samples and the corresponding annotation data, significantly improving the accuracy of ship detection. For labeling the SRS images, the proposed annotation method is superior to annotating images with the tools LabelMe and LabelImg.
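The pinhole-imaging step mentioned above maps a 3D point in the camera frame to pixel coordinates through the camera intrinsics, and a 2D bounding-box annotation then follows from the extremes of the projected points. A minimal sketch with illustrative intrinsics and vertices (not those of the simulation software):

```python
import numpy as np

def project_pinhole(points_3d, fx, fy, cx, cy):
    """Project 3D points (camera frame, Z forward) to pixel coordinates."""
    pts = np.asarray(points_3d, dtype=float)   # shape (N, 3)
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    u = fx * x / z + cx                        # perspective division
    v = fy * y / z + cy
    return np.stack([u, v], axis=1)

# Illustrative ship vertices roughly 50 m ahead of the camera.
uv = project_pinhole([[2.0, -1.0, 50.0], [12.0, 1.5, 55.0]],
                     fx=1000.0, fy=1000.0, cx=640.0, cy=360.0)
# A 2D bounding-box annotation is just the extremes of the projection.
u_min, v_min = uv.min(axis=0)
u_max, v_max = uv.max(axis=0)
print([u_min, v_min, u_max, v_max])
```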

