manual annotation
Recently Published Documents


TOTAL DOCUMENTS: 230 (last five years: 131)
H-INDEX: 12 (last five years: 5)

2022 · Vol 15
Author(s): Yu Yan, Yaël Balbastre, Mikael Brudfors, John Ashburner

Segmentation of brain magnetic resonance images (MRI) into anatomical regions is a useful task in neuroimaging. Manual annotation is time-consuming and expensive, so a fully automated and general-purpose brain segmentation algorithm is highly desirable. To this end, we propose a patch-based label propagation approach based on a generative model with latent variables. Once trained, our Factorisation-based Image Labelling (FIL) model is able to label target images with a variety of image contrasts. We compare the effectiveness of the proposed model against the state of the art using data from the MICCAI 2012 Grand Challenge and Workshop on Multi-Atlas Labelling. As our approach is intended to be general purpose, we also assess how well it handles domain shift by labelling images of the same subjects acquired with different MR contrasts.
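
The FIL model itself is considerably more sophisticated, but the underlying idea of patch-based label propagation can be illustrated with a toy sketch; the nearest-patch matching rule and all names below are illustrative, not the authors' method:

```python
# Toy patch-based label propagation: labels from an annotated atlas image
# are copied to a target image by matching intensity patches. This is a
# minimal stand-in for the generative FIL model, not a reimplementation.
import numpy as np

def extract_patches(img, size=3):
    """Return an (H*W, size*size) array of flattened patches, one per pixel."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    patches = np.empty((h * w, size * size))
    for i in range(h):
        for j in range(w):
            patches[i * w + j] = padded[i:i + size, j:j + size].ravel()
    return patches

def propagate_labels(atlas_img, atlas_lab, target_img, size=3):
    """Label each target pixel with the label of its best-matching atlas patch."""
    a = extract_patches(atlas_img, size)
    t = extract_patches(target_img, size)
    out = np.empty(target_img.size, dtype=atlas_lab.dtype)
    for k, patch in enumerate(t):
        d = np.sum((a - patch) ** 2, axis=1)        # SSD between patches
        out[k] = atlas_lab.ravel()[np.argmin(d)]
    return out.reshape(target_img.shape)

rng = np.random.default_rng(0)
atlas = rng.normal(size=(16, 16))
labels = (atlas > 0).astype(int)                     # stand-in "anatomical" labels
target = atlas + 0.1 * rng.normal(size=atlas.shape)  # same subject, shifted contrast
print(propagate_labels(atlas, labels, target).shape)
```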


2021 · Vol 12 (1) · pp. 330
Author(s): Ana Alves-Pinto, Christoph Demus, Michael Spranger, Dirk Labudde, Eleanor Hobley

Named entity recognition (NER) constitutes an important step in the processing of unstructured text content, both for the extraction of information and for the computer-supported analysis of large amounts of digital data via machine learning methods. However, NER often relies on domain-specific knowledge and is conducted manually in a time- and human-resource-intensive process. This effort can be reduced with statistical models that perform NER automatically. The present work investigates whether Conditional Random Fields (CRFs) can be trained efficiently for NER in German texts by means of an iterative procedure that combines self-learning with a manual-annotation (active learning) component, with the training dataset growing continuously over the iterations. While self-learning did not markedly improve the performance of the CRF for NER, manually annotating the sentences with the lowest probability of correct prediction clearly improved the model's F1-score and simultaneously reduced the amount of manual annotation required to train the model. A model with an F1-score of 0.885 could be trained in 11.4 h.
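
A hedged sketch of the uncertainty-driven annotation loop the abstract describes, using the sklearn-crfsuite package; the feature set, the selection rule (product of per-token max marginals), and the toy data are assumptions for illustration, not the paper's setup:

```python
# Active-learning round for CRF-based NER: train on the labelled set, then
# queue the pool sentences the model is least certain about for manual
# annotation. Requires: pip install sklearn-crfsuite
import sklearn_crfsuite

def featurize(sentence):
    return [{"token": tok.lower(), "is_upper": tok.isupper()} for tok in sentence]

def sentence_confidence(crf, x):
    """Product of per-token max marginals; low values flag uncertain sentences."""
    conf = 1.0
    for marginals in crf.predict_marginals_single(x):
        conf *= max(marginals.values())
    return conf

def active_learning_round(crf, labelled_X, labelled_y, pool, k=10):
    """Train, then pick the k least certain pool sentences for annotation."""
    crf.fit(labelled_X, labelled_y)
    return sorted(pool, key=lambda x: sentence_confidence(crf, x))[:k]

train_X = [featurize(["Angela", "Merkel", "besuchte", "Dresden"])]
train_y = [["B-PER", "I-PER", "O", "B-LOC"]]
pool = [featurize(["Berlin", "ist", "gross"]), featurize(["er", "schlief"])]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
to_annotate = active_learning_round(crf, train_X, train_y, pool, k=1)
print(len(to_annotate))  # 1 sentence queued for manual annotation
```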


Sensors · 2021 · Vol 21 (24) · pp. 8442
Author(s): Esben Lykke Skovgaard, Jesper Pedersen, Niels Christian Møller, Anders Grøntved, Jan Christian Brønd

With the emergence of machine learning for the classification of sleep and other human behaviors from accelerometer data, the need for correctly annotated data is higher than ever. We present and evaluate a novel method for the manual annotation of in-bed periods in accelerometer data using the open-source software Audacity®, and we compare the method to the EEG-based sleep-monitoring device Zmachine® Insight+ and to self-reported sleep diaries. To evaluate the manual annotation method, we calculated inter- and intra-rater agreement, as well as agreement with Zmachine and the sleep diaries, using intraclass correlation coefficients and Bland–Altman analysis. Our results showed excellent inter- and intra-rater agreement and excellent agreement with Zmachine and the sleep diaries. The Bland–Altman limits of agreement were generally around ±30 min for the comparison between the manual annotation and the Zmachine timestamps for the in-bed period, and the mean bias was minuscule. We conclude that the proposed manual annotation method is a viable option for annotating in-bed periods in accelerometer data, which makes it possible to qualify datasets that lack labels or sleep records.
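
The Bland–Altman quantities reported above (mean bias and limits of agreement) reduce to a few lines of arithmetic; a minimal sketch on made-up in-bed onset times:

```python
# Bland-Altman agreement between two raters/devices: the mean difference is
# the bias, and bias +/- 1.96 * SD gives the 95% limits of agreement.
# The onset times below (minutes past midnight) are invented for illustration.
import numpy as np

manual   = np.array([1350, 1382, 1401, 1295, 1330], dtype=float)  # annotator
zmachine = np.array([1344, 1390, 1396, 1301, 1322], dtype=float)  # device

diff = manual - zmachine
bias = diff.mean()                        # mean bias
half = 1.96 * diff.std(ddof=1)            # half-width of the limits of agreement
print(f"bias = {bias:+.1f} min, LoA = [{bias - half:.1f}, {bias + half:.1f}] min")
```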


2021 · Vol 11 (24) · pp. 12037
Author(s): Xiaoyu Hou, Jihui Xu, Jinming Wu, Huaiyu Xu

Counting people in crowd scenarios is widely needed in drone inspection, video surveillance, and public-safety applications. Supervised crowd-counting algorithms have improved significantly, but they rely on large amounts of manual annotation. In real-world scenarios, varying photo angles, exposures, and camera heights, complex backgrounds, and limited annotated data prevent supervised methods from working satisfactorily, and many of them suffer from overfitting. To address these issues, we train on synthetic crowd data and investigate how to transfer the learned information to real-world datasets while reducing the need for manual annotation. CNN-based crowd-counting algorithms usually consist of feature extraction, density estimation, and count regression. To improve domain adaptation in feature extraction, we propose an adaptive domain-invariant feature-extraction module. Inspired by recent innovations in meta-learning, we also present a dynamic-β MAML algorithm that generates density maps in unseen novel scenes and makes the density-estimation model more universal. Finally, a counting-map refiner transforms the coarse density map into a fine density map, from which the crowd count is regressed. Extensive experiments show that our domain-adaptation and model-generalization methods effectively suppress domain gaps and produce detailed density maps in cross-domain crowd-counting scenarios, outperforming current state-of-the-art techniques.
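
The dynamic-β variant is specific to the paper, but the MAML scaffolding it builds on can be sketched. Below is a minimal first-order MAML loop in PyTorch on a toy regression task standing in for per-scene adaptation; the network, task, and hyperparameters are illustrative, not the authors' architecture:

```python
# First-order MAML: the inner loop adapts a copy of the model to one task
# ("scene"); the outer loop moves the shared initialisation using the query
# gradient computed at the adapted weights.
import copy
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
meta_opt = torch.optim.Adam(net.parameters(), lr=1e-3)
inner_lr, loss_fn = 1e-2, nn.MSELoss()

def make_task():
    """Toy sine-regression task standing in for one crowd 'scene'."""
    a, p = torch.rand(1) * 4 + 1, torch.rand(1) * 3.14
    x = torch.rand(20, 1) * 10 - 5
    y = a * torch.sin(x + p)
    return x[:10], y[:10], x[10:], y[10:]     # support and query splits

for step in range(200):
    xs, ys, xq, yq = make_task()
    learner = copy.deepcopy(net)              # task-specific copy of the init
    inner_opt = torch.optim.SGD(learner.parameters(), lr=inner_lr)
    inner_opt.zero_grad()
    loss_fn(learner(xs), ys).backward()       # inner adaptation on the support set
    inner_opt.step()
    query_loss = loss_fn(learner(xq), yq)     # adapted weights on the query set
    grads = torch.autograd.grad(query_loss, learner.parameters())
    meta_opt.zero_grad()
    for p, g in zip(net.parameters(), grads):
        p.grad = g                            # first-order meta-gradient
    meta_opt.step()
```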


Author(s): Mikolaj Cieslak, Nazifa Khan, Pascal Ferraro, Raju Soolanayakanahally, Stephen J Robinson, ...

Artificial neural networks that recognize and quantify relevant aspects of crop plants show great promise in image-based phenomics, but their training requires many annotated images. Acquiring these images is comparatively simple; annotating them manually is time-consuming. Realistic plant models, which can be annotated automatically, therefore present an attractive alternative to real plant images for training purposes. Here we show how such models can be constructed and calibrated quickly, using maize and canola as case studies.
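
Why synthetic models come annotated "for free" can be shown with a deliberately crude sketch: every pixel drawn into the image is written to a label mask in the same step. Real pipelines render calibrated 3D plant models, not this toy drawing:

```python
# Rendering a synthetic "plant" and its pixel-level annotation together:
# the label mask is filled as a side effect of drawing, so no manual
# annotation is ever needed.
import numpy as np

H = W = 64
image = np.zeros((H, W), dtype=np.uint8)
mask = np.zeros((H, W), dtype=np.uint8)    # 0 = soil, 1 = stem, 2 = leaf

def draw(rows, cols, intensity, label):
    image[rows, cols] = intensity
    mask[rows, cols] = label               # annotation happens as we render

draw(np.arange(10, 60), np.full(50, 32), 200, 1)   # vertical stem
for r, c in [(20, 26), (30, 38), (40, 24)]:        # round leaf blobs
    rr, cc = np.ogrid[:H, :W]
    blob = (rr - r) ** 2 + (cc - c) ** 2 < 16
    draw(*np.nonzero(blob), 150, 2)

print(f"{(mask > 0).sum()} annotated plant pixels, zero manual effort")
```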


2021 · Vol 16 (1) · pp. 124-133
Author(s): Ana Llorens

This report describes the open-source Recorded Brahms Corpus (RBC) dataset, as well as the methods employed to extract and process the data. The dataset contains (micro)timing and dynamic data from 21 recordings of Brahms's Cello Sonatas, Opp. 38 and 99, covering note and beat onsets, durations, tempo fluctuations, and dynamic variations. Consistent manual annotation of the corpus in Sonic Visualiser was required before automatic extraction. Data for each recording and measurement unit are given as TXT files; scores in various digital formats, the original SV files, and diamond-shaped scape-plot visualizations of the data are also provided. Expansion of the corpus with further movements of the sonatas, further recordings of them, and other compositions by Brahms is planned. The data may contribute to performance studies and music theory alike.
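
Deriving tempo fluctuations from beat-onset annotations of this kind is straightforward. A small sketch follows, assuming a two-column time/label TXT layout; the actual RBC export format may differ:

```python
# Local tempo from annotated beat onsets: each inter-beat interval (IBI)
# maps to a tempo of 60 / IBI beats per minute. The onset lines below are
# stand-in data, not values from the RBC files.
import numpy as np

onset_lines = ["0.000\t1", "0.512\t2", "1.050\t3", "1.548\t4"]
onsets = np.array([float(line.split("\t")[0]) for line in onset_lines])

ibis = np.diff(onsets)          # inter-beat intervals in seconds
tempo = 60.0 / ibis             # local tempo in BPM, one value per beat
print(np.round(tempo, 1))       # tempo fluctuations across the excerpt
```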


2021 · Vol Publish Ahead of Print
Author(s): Rachel Lynn Graves, Jeanmarie Perrone, Mohammed Ali Al-Garadi, Yuan-Chi Yang, Jennifer S. Love, ...

2021
Author(s): Felix M. Bauer, Lena Lärm, Shehan Morandage, Guillaume Lobet, Jan Vanderborght, ...

Root systems of crops play a significant role in agro-ecosystems. The root system is essential for water and nutrient uptake, plant stability, symbiosis with microbes, and good soil structure. Minirhizotrons, transparent tubes that create windows into the soil, have been shown to be effective for investigating the root system non-invasively. Root traits, such as the root length observed around the minirhizotron tubes, can thus be obtained throughout the crop growing season. Analyzing minirhizotron datasets with common manual annotation methods and conventional software tools is time-consuming and labor-intensive, so an objective method for high-throughput image analysis that provides data for field root phenotyping is necessary. In this study we developed a pipeline combining state-of-the-art software tools, deep neural networks, and automated feature extraction. The pipeline consists of two major components and was applied to large root-image datasets from minirhizotrons. First, segmentation is performed by a neural network model trained on a small image sample; training and segmentation are done with RootPainter. Then, automated feature extraction from the segments is carried out with RhizoVision Explorer. To validate the automated analysis pipeline, root lengths from manually annotated and automatically processed data were compared across more than 58,000 images. The results show a high correlation (R = 0.81) between manually and automatically determined root lengths, and the new pipeline reduces processing time by 98.1–99.6% relative to manual annotation. Our pipeline thus significantly reduces the processing time for minirhizotron images, so that image analysis is no longer the bottleneck in high-throughput phenotyping approaches.
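
The validation step, correlating automatically extracted root lengths against manual annotations, is easy to sketch. The numbers below are toy stand-ins, not the study's data:

```python
# Pearson correlation between manual and automated root-length measurements,
# plus the relative time saving of the automated pipeline. All values here
# are invented for illustration; the paper reports R = 0.81 over 58,000+ images.
import numpy as np

manual = np.array([12.4, 30.1, 5.2, 44.8, 19.9])   # root length, mm (toy)
auto   = np.array([11.8, 28.5, 6.0, 41.2, 21.3])

r = np.corrcoef(manual, auto)[0, 1]                # Pearson correlation
print(f"R = {r:.2f}")

t_manual, t_auto = 120.0, 1.5                      # toy minutes per image batch
print(f"time saved: {100 * (1 - t_auto / t_manual):.1f} %")
```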

