Automated Annotation of Cell Identities in Dense Cellular Images

Author(s):  
Shivesh Chaudhary ◽  
Sol Ah Lee ◽  
Yueyi Li ◽  
Dhaval S. Patel ◽  
Hang Lu

Assigning cell identities in dense image stacks is critical for many applications, such as comparing data across animals and experimental conditions and investigating the properties of specific cells. Conventional methods are laborious, require experience, and can introduce bias. We present a generalizable framework based on Conditional Random Fields (CRF) models for automatic cell identification. The approach searches for the arrangement of labels that maximally preserves prior knowledge such as geometrical relationships. On both synthetic and experimental ground-truth data, the algorithm achieves higher accuracy and handles perturbations such as missing cells and positional variability more robustly. The framework generalizes across strains and imaging conditions, and it easily builds and uses data-driven atlases, which further improves accuracy. We demonstrate its utility in gene-expression pattern analysis, multi-cellular calcium imaging, and whole-brain imaging experiments. Thus, our framework is valuable to a wide variety of annotation scenarios, including those in zebrafish, Drosophila, hydra, and mouse brains.
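As a toy illustration of the core idea (not the authors' implementation), the sketch below labels a handful of detected cells by exhaustively searching for the assignment that best preserves pairwise anterior-posterior ordering taken from a reference atlas; the actual framework performs CRF inference over richer geometric features, and the cell names, coordinates, and atlas values here are invented.

```python
# Minimal sketch: pick the labelling of detected cells that best preserves
# pairwise geometric relationships (here, ordering along x) from an atlas.
import itertools
import numpy as np

# Hypothetical atlas: canonical x-positions of three named neurons.
atlas = {"AVAL": 0.10, "RIML": 0.35, "AIYL": 0.60}

# Hypothetical detected (unlabelled) cell positions from an image stack.
detected = np.array([0.62, 0.12, 0.33])

def pairwise_score(assignment, positions, atlas):
    """Count cell pairs whose x-ordering matches the atlas ordering."""
    score = 0
    for (i, a), (j, b) in itertools.combinations(enumerate(assignment), 2):
        if (positions[i] < positions[j]) == (atlas[a] < atlas[b]):
            score += 1
    return score

names = list(atlas)
best = max(itertools.permutations(names),
           key=lambda perm: pairwise_score(perm, detected, atlas))
print(dict(zip(best, detected)))  # {'AIYL': 0.62, 'AVAL': 0.12, 'RIML': 0.33}
```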

Author(s):  
K. Moe ◽  
I. Toschi ◽  
D. Poli ◽  
F. Lago ◽  
C. Schreiner ◽  
...  

This paper discusses the potential of current photogrammetric multi-head oblique cameras, such as the UltraCam Osprey, to improve the efficiency of standard photogrammetric methods for surveying applications such as inventory surveys and topographic mapping for public administrations or private customers.

In 2015, Terra Messflug (TM), a subsidiary of Vermessung AVT ZT GmbH (Imst, Austria), flew a number of urban areas in Austria, the Czech Republic and Hungary with an UltraCam Osprey Prime multi-head camera system from Vexcel Imaging. In collaboration with FBK Trento (Italy), the data acquired at Imst (a small town in Tyrol, Austria) were analysed and processed to extract precise 3D topographic information. The Imst block comprises 780 images and covers an area of approx. 4.5 km by 1.5 km. Ground truth data are provided in the form of 6 GCPs and several check points surveyed with RTK GNSS. In addition, 3D building data obtained by photogrammetric stereo plotting from a 5 cm nadir flight and a LiDAR point cloud with 10 to 20 measurements per m² are available as reference data or for comparison. The photogrammetric workflow, from flight planning to Dense Image Matching (DIM) and 3D building extraction, is described together with the achieved accuracy. For each step, the differences and innovations with respect to standard photogrammetric procedures based on nadir images are shown, including higher overlaps, improved vertical accuracy, and visibility of areas masked in the standard vertical views. Finally, the advantages of using oblique images for inventory surveys are demonstrated.
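The accuracy at independent check points is commonly summarized as the root-mean-square error (RMSE) of the coordinate differences against the RTK GNSS survey; a minimal sketch of that computation follows, using invented placeholder coordinates rather than the actual Imst values.

```python
# Minimal sketch (not AVT/FBK's pipeline): per-axis RMSE at check points,
# i.e. differences between photogrammetrically estimated and surveyed coordinates.
import numpy as np

surveyed  = np.array([[1000.00, 2000.00, 750.00],
                      [1050.00, 2020.00, 748.50]])    # RTK GNSS (m), placeholders
estimated = np.array([[1000.03, 1999.98, 750.06],
                      [1049.96, 2020.05, 748.43]])    # bundle adjustment (m), placeholders

diff = estimated - surveyed
rmse_xyz = np.sqrt(np.mean(diff**2, axis=0))          # per-axis RMSE
print(f"RMSE X/Y/Z [m]: {rmse_xyz.round(3)}")
```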


Author(s):  
I. Toschi ◽  
F. Remondino ◽  
R. Rothe ◽  
K. Klimek

Hybrid sensor solutions, which feature active laser and passive image sensors on the same platform, are rapidly entering the airborne market for topographic and urban mapping, offering new opportunities for improved quality of geo-spatial products. In this perspective, the concurrent acquisition of LiDAR data and oblique imagery has the potential to move the airborne (urban) mapping sector a step forward. This contribution focuses on the first commercial example of such an integrated, all-in-one mapping solution, namely the Leica CityMapper hybrid sensor. By analysing two CityMapper datasets acquired over the cities of Heilbronn (Germany) and Bordeaux (France), the paper investigates potential and challenges with respect to (i) the number and distribution of tie points between nadir and oblique images, (ii) the strategy for image aerial triangulation (AT) and the accuracy achievable against ground truth data, and (iii) the local noise level and completeness of dense image matching (DIM) point clouds relative to LiDAR data. Solutions for integrated processing of the concurrently acquired ranging and imaging data are proposed, which open new opportunities for exploiting the real potential of both data sources.
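One common way to quantify the local noise level and completeness of a DIM point cloud against a LiDAR reference is via cloud-to-cloud nearest-neighbour distances; the sketch below illustrates this with synthetic point clouds and is an assumed example, not part of the CityMapper processing chain.

```python
# Minimal sketch: cloud-to-cloud (C2C) distances from a DIM cloud to a LiDAR
# reference using a k-d tree; median distance as a noise proxy, fraction of
# points within a tolerance as a crude completeness proxy.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
lidar = rng.uniform(0, 10, size=(5000, 3))               # placeholder reference cloud (m)
dim   = lidar + rng.normal(0, 0.05, size=lidar.shape)    # placeholder DIM cloud (m)

tree = cKDTree(lidar)
dist, _ = tree.query(dim, k=1)                           # distance to closest LiDAR point
print(f"median C2C distance: {np.median(dist):.3f} m")
print(f"within 10 cm of reference: {np.mean(dist < 0.10):.1%} of DIM points")
```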


eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Shivesh Chaudhary ◽  
Sol Ah Lee ◽  
Yueyi Li ◽  
Dhaval S Patel ◽  
Hang Lu

Although identifying cell names in dense image stacks is critical for analyzing functional whole-brain data and enabling comparison across experiments, unbiased identification is very difficult and relies heavily on researchers' experience. Here we present a probabilistic graphical model framework, CRF_ID, based on Conditional Random Fields, for unbiased and automated cell identification. CRF_ID focuses on maximizing intrinsic similarity between shapes. Compared to existing methods, CRF_ID achieves higher accuracy on simulated and ground-truth experimental datasets, and better robustness against challenging noise conditions common in experimental data. CRF_ID can further boost accuracy by building atlases from annotated data in a highly computationally efficient manner, and by easily adding new features (e.g., from new strains). We demonstrate cell annotation in C. elegans images across strains, animal orientations, and tasks including gene-expression localization and multi-cellular and whole-brain functional imaging experiments. Together, these successes demonstrate that unbiased cell annotation can facilitate biological discovery, and this approach may be valuable for annotation tasks in other systems.
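As a rough sketch of what building a data-driven atlas from annotated datasets could look like (an assumed illustration, not the published CRF_ID code), the example below aggregates pairwise anterior-posterior ordering frequencies across annotated animals; such pairwise statistics could then serve as CRF pairwise potentials. All cell names and coordinates are invented.

```python
# Minimal sketch: for each ordered pair of cell names, record how often one
# lies anterior to the other (smaller x) across annotated datasets.
from collections import defaultdict
from itertools import permutations

annotated_datasets = [
    {"AVAL": 0.11, "RIML": 0.34, "AIYL": 0.58},
    {"AVAL": 0.09, "RIML": 0.38, "AIYL": 0.61},
    {"AVAL": 0.13, "RIML": 0.31, "AIYL": 0.57},
]

counts = defaultdict(int)   # (a, b) -> times a was anterior to b
totals = defaultdict(int)   # (a, b) -> times both cells were annotated

for data in annotated_datasets:
    for a, b in permutations(data, 2):
        totals[(a, b)] += 1
        if data[a] < data[b]:
            counts[(a, b)] += 1

atlas = {pair: counts[pair] / totals[pair] for pair in totals}
print(atlas[("AVAL", "RIML")])   # 1.0: AVAL anterior to RIML in every dataset
```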


Author(s):  
Rizki Perdana Rangkuti ◽  
◽  
Vektor Dewanto ◽  
Aprinaldi ◽  
Wisnu Jatmiko ◽  
...  

One promising approach to pixel-wise semantic segmentation is based on conditional random fields (CRFs). CRF-based semantic segmentation requires ground-truth annotations to train, in a supervised manner, the classifier that generates the unary potentials. However, the amount of publicly available annotation data for training is limited. We observe that the Internet can provide relevant images for any given keyword. Our idea is to convert keyword-related images into pixel-wise annotated images and then use them as training data. In particular, we rely on saliency filters to identify the salient object (foreground) of a retrieved image, which mostly agrees with the given keyword. We use this saliency information in foreground/background CRF-based semantic segmentation to obtain pixel-wise ground-truth annotations. Experimental results show that training data from Google Images improve both the learning performance and the accuracy of semantic segmentation. This suggests that our proposed method is promising for harvesting substantial training data from the Internet for training the classifier in CRF-based semantic segmentation.
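A minimal sketch of the pseudo-annotation step, under the assumption that a saliency map in [0, 1] is already available for each retrieved image: confidently salient pixels take the keyword's class, confidently non-salient pixels become background, and the uncertain band is ignored (and could later be resolved by the foreground/background CRF). The thresholds and class id below are illustrative, not the authors' settings.

```python
# Minimal sketch: turn a saliency map into a pixel-wise pseudo-annotation.
import numpy as np

def pseudo_annotation(saliency, keyword_class_id, fg_thresh=0.6, bg_thresh=0.3):
    """saliency: HxW array in [0, 1]. Returns an HxW label map with the
    keyword class where clearly salient, 0 (background) where clearly not,
    and 255 (ignore) in the uncertain band."""
    labels = np.full(saliency.shape, 255, dtype=np.uint8)   # ignore by default
    labels[saliency >= fg_thresh] = keyword_class_id        # confident foreground
    labels[saliency <= bg_thresh] = 0                       # confident background
    return labels

# Usage with a fake saliency map for a keyword mapped to class id 12 (arbitrary).
saliency = np.random.default_rng(1).random((4, 4))
print(pseudo_annotation(saliency, keyword_class_id=12))
```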


Author(s):  
Rui Wang ◽  
Xin Xin ◽  
Wei Chang ◽  
Kun Ming ◽  
Biao Li ◽  
...  

In this paper, we investigate how to improve Chinese named entity recognition (NER) by jointly modeling NER and constituent parsing in the framework of neural conditional random fields (CRFs). We reformulate the parsing task as height-limited constituent parsing, by which the computational complexity can be significantly reduced while the majority of phrase-level grammars are retained. Specifically, a unified model of a neural semi-CRF and a neural tree-CRF is proposed, which simultaneously conducts word segmentation, part-of-speech (POS) tagging, NER, and parsing. The challenge lies in how to train and infer with the joint model, which has not been solved previously. We design a dynamic programming algorithm for both training and inference, whose complexity is O(n·4^h), where n is the sentence length and h is the height limit. In addition, we derive a pruning algorithm for the joint model, which further prunes 99.9% of the search space with only a 2% loss of ground-truth structures. Experimental results on the OntoNotes 4.0 dataset demonstrate that the proposed model outperforms the state-of-the-art method by 2.79 points in F1-measure.
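To illustrate why the height limit reduces complexity (an assumed sketch, not the paper's joint model): with binary branching, a constituent of height at most h spans at most 2^h tokens, so a CKY-style inside pass only needs to enumerate narrow spans, giving roughly n·4^h combinations instead of the n^3 of unrestricted CKY.

```python
# Minimal sketch: count (span, split-point) combinations in a CKY inside pass,
# with and without a height limit h that caps span width at 2**h.
def chart_operations(n, h=None):
    """h=None means unrestricted CKY, which grows like n^3."""
    max_width = n if h is None else min(n, 2 ** h)
    ops = 0
    for width in range(2, max_width + 1):
        for start in range(0, n - width + 1):
            ops += width - 1          # possible split points inside the span
    return ops

n = 50
print("full CKY:      ", chart_operations(n))        # grows like n^3
print("height limit 3:", chart_operations(n, h=3))   # grows like n * 4^h
```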


2011 ◽  
Vol 22 (8) ◽  
pp. 1897-1910 ◽  
Author(s):  
Yun LIU ◽  
Zhi-Ping CAI ◽  
Ping ZHONG ◽  
Jian-Ping YIN ◽  
Jie-Ren CHENG
