CytoBrowser: a browser-based collaborative annotation platform for whole slide images

F1000Research, 2021, Vol 10, pp. 226
Author(s): Christopher Rydell, Joakim Lindblad

We present CytoBrowser, an open-source (GPLv3) JavaScript and Node.js driven environment for fast and accessible collaborative online visualization, assessment, and annotation of very large microscopy images, including, but not limited to, z-stacks (focus stacks) of cytology or histology whole slide images. CytoBrowser provides a web-based viewer for high-resolution zoomable images and facilitates easy remote collaboration, with options for joint-view visualization and simultaneous collaborative annotation of very large datasets. It delivers a unique combination of functionalities not found in other software solutions, making it a preferred tool for large-scale annotation of whole slide image data. The web browser interface is directly accessible on any modern computer, or even on a mobile phone, without the need for additional software. By sharing a "session", several remote users can interactively explore and jointly annotate whole slide image data, enabling improved data understanding and annotation quality, effortless project scaling and distribution of resources to/from remote locations, efficient creation of "ground truth" annotations for method evaluation and training of machine learning-based approaches, and a user-friendly learning environment for medical students, to name just a few benefits. Rectangle and polygon region annotations complement point-based annotations, each with a selectable annotation class as well as free-form text fields. The default setting of CytoBrowser presents an interface for the Bethesda cancer grading system, while other annotation schemes can easily be incorporated. Automatic server-side storage of annotations is complemented by JSON-based import/export options, facilitating easy interoperability with other tools. CytoBrowser is available here: https://mida-group.github.io/CytoBrowser/.
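The exact schema of CytoBrowser's JSON import/export is not given in the abstract, so the sketch below is only a hypothetical illustration of how point and region annotations with class labels and free-text comments might be round-tripped through JSON; the field names and class labels are assumptions, not CytoBrowser's actual format.

    import json

    # Hypothetical annotation records; field names are illustrative only and
    # do not reflect CytoBrowser's actual export schema.
    annotations = [
        {"type": "point", "x": 10234, "y": 5120, "z": 3,
         "class": "LSIL", "comment": "borderline nucleus"},
        {"type": "polygon", "points": [[100, 200], [180, 210], [150, 320]],
         "class": "HSIL", "comment": ""},
    ]

    # Export the annotations to a JSON file ...
    with open("annotations.json", "w") as f:
        json.dump(annotations, f, indent=2)

    # ... and import them again in another tool.
    with open("annotations.json") as f:
        imported = json.load(f)
    print(len(imported), "annotations loaded")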

2019
Author(s): Sunho Park, Hongming Xu, Tae Hyun Hwang

Tumor mutation burden (TMB) is a quantitative measure of how many mutations are present in the tumor cells of a patient, as assessed by next-generation sequencing (NGS) technology. High TMB is used as a predictive biomarker to select patients likely to respond to immunotherapy in many cancer types, so it is critical to accurately measure TMB for cancer patients who are candidates for immunotherapy. Recent studies have shown that image features from histopathology whole slide images can be used to predict genetic features (e.g., mutation status) or clinical outcomes of cancer patients. In this study, we develop a computational method to predict the TMB level from cancer patients' histopathology whole slide images. The prediction problem is formulated as multiple instance learning (MIL), because a whole slide image (a bag) has to be divided into multiple image blocks (instances) for computational reasons, while a single label is available only for the entire whole slide image, not for each image block. In particular, we propose a novel heteroscedastic noise model for MIL based on the framework of Gaussian processes (GP), where the noise variance is assumed to be a latent function of image-level features. This noise variance encodes the confidence in predicting the TMB level from each training image and makes the method put different levels of effort into classifying images according to how difficult they are to classify correctly: the method tries to fit an easier image well, while it does not put much effort into classifying a harder (ambiguous) image correctly. Expectation propagation (EP) is employed to efficiently infer our model and to find the optimal hyper-parameters. We demonstrate on synthetic and real-world data sets that our method outperforms baseline methods on TMB prediction from whole slide images, including a special case of our method without the heteroscedastic noise modeling, and multiple instance ordinal regression (MIOR), one of the few algorithms that address ordinal regression in the MIL setting.
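As a rough illustration of the MIL formulation described above (not the authors' actual model), the following sketch builds one bag of tile-level feature vectors per whole slide image, attaches a single bag-level TMB label, and derives a per-bag heteroscedastic noise variance from image-level features; the feature extractor, the mean-pooled image-level features, and the exponential link are all placeholders.

    import numpy as np

    rng = np.random.default_rng(0)

    def tile_features(n_tiles, dim=64):
        # Placeholder for features extracted from image blocks (instances).
        return rng.normal(size=(n_tiles, dim))

    # One bag per whole slide image: many instances, a single ordinal label
    # (e.g., 0 = low TMB, 1 = intermediate, 2 = high).
    bags = [tile_features(rng.integers(50, 200)) for _ in range(10)]
    bag_labels = rng.integers(0, 3, size=len(bags))

    # Image-level features (here: the mean of instance features) drive a latent
    # noise function; easy images get a small variance, ambiguous ones a large one.
    w = rng.normal(size=64) * 0.1  # illustrative weights of the latent noise function
    for X, y in zip(bags, bag_labels):
        image_level = X.mean(axis=0)
        noise_var = np.exp(image_level @ w)  # heteroscedastic noise variance
        print(f"label={y}, instances={len(X)}, noise_var={noise_var:.3f}")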


2020, Vol 7, pp. 237428952095192
Author(s): Joann G. Elmore, Hannah Shucard, Annie C. Lee, Pin-Chieh Wang, Kathleen F. Kerr, ...

Digital whole slide images are Food and Drug Administration approved for clinical diagnostic use in pathology; however, integration is nascent. Trainees from 9 pathology training programs completed an online survey to ascertain attitudes toward and experiences with whole slide images for pathological interpretations. Respondents (n = 76) reported attending 63 unique medical schools (45 United States, 18 international). While 63% reported medical school exposure to whole slide images, most reported ≤ 5 hours. Those who began training more recently were more likely to report at least some exposure to digital whole slide image training in medical school compared to those who began training earlier: 75% of respondents beginning training in 2017 or 2018 reported exposure to whole slide images compared to 54% for trainees beginning earlier. Trainees exposed to whole slide images in medical school were more likely to agree they were comfortable using whole slide images for interpretation compared to those not exposed (29% vs 12%; P = .06). Most trainees agreed that accurate diagnoses can be made using whole slide images for primary diagnosis (92%; 95% CI: 86-98) and that whole slide images are useful for obtaining second opinions (93%; 95% CI: 88-99). Trainees reporting whole slide image experience during training, compared to those with no experience, were more likely to agree they would use whole slide images in 5 years for primary diagnosis (64% vs 50%; P = .3) and second opinions (86% vs 76%; P = .4). In conclusion, although exposure to whole slide images in medical school has increased, overall exposure is limited. Positive attitudes toward future whole slide image diagnostic use were associated with exposure to this technology during medical training. Curricular integration may promote adoption.


AI, 2021, Vol 2 (4), pp. 684-704
Author(s): Karen Panetta, Landry Kezebou, Victor Oludare, James Intriligator, Sos Agaian

The concept of searching for and localizing vehicles in live traffic videos based on descriptive textual input has yet to be explored in the scholarly literature. Endowing Intelligent Transportation Systems (ITS) with such a capability could help solve crimes on roadways. One major impediment to the advancement of fine-grain vehicle recognition models is the lack of video testbench datasets with annotated ground truth data. Additionally, to the best of our knowledge, no metrics currently exist for evaluating the robustness and performance efficiency of a vehicle recognition model on live videos, and even fewer exist for vehicle search and localization models. In this paper, we address these challenges by proposing V-Localize, a novel artificial intelligence framework for vehicle search and continuous localization in live traffic videos based on input textual descriptions. An efficient hashgraph algorithm is introduced to compute valid target information from the textual input. This work further introduces two novel datasets to advance AI research in these challenging areas. These datasets include (a) the most diverse and large-scale Vehicle Color Recognition (VCoR) dataset, with 15 color classes (twice as many as in the largest existing such dataset) to facilitate finer-grain recognition with color information; and (b) a Vehicle Recognition in Video (VRiV) dataset, a first-of-its-kind video testbench dataset for evaluating the performance of vehicle recognition models in live videos rather than still image data. The VRiV dataset will open new avenues for AI researchers to investigate innovative approaches that were previously intractable due to the lack of annotated traffic vehicle recognition video testbench datasets. Finally, to address this gap in the field, five novel metrics are introduced in this paper for adequately assessing the performance of vehicle recognition models in live videos. Ultimately, the proposed metrics could also prove intuitively effective at quantitative model evaluation in other video recognition applications. One major advantage of the proposed vehicle search and continuous localization framework is that it could be integrated into ITS software solutions to aid law enforcement, especially in critical cases such as Amber Alerts or hit-and-run incidents.
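The hashgraph algorithm itself is not detailed in the abstract; purely as an illustrative stand-in, the sketch below extracts target vehicle attributes (color and body type) from a free-text description with a simple keyword lookup. The vocabularies and attribute names are assumptions, not the paper's method.

    # Illustrative only: a keyword-lookup stand-in for extracting target
    # attributes from a textual description (not the paper's hashgraph algorithm).
    COLORS = {"red", "blue", "white", "black", "silver", "gray", "green"}
    TYPES = {"sedan", "suv", "truck", "van", "coupe", "hatchback"}

    def parse_description(text):
        tokens = text.lower().replace(",", " ").split()
        return {
            "color": next((t for t in tokens if t in COLORS), None),
            "type": next((t for t in tokens if t in TYPES), None),
        }

    print(parse_description("Suspect fled in a silver SUV heading north"))
    # {'color': 'silver', 'type': 'suv'}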


2019, Vol 6, pp. 237428951985984
Author(s): Bih-Rong Wei, Charles H. Halsey, Shelley B. Hoover, Munish Puri, Howard H. Yang, ...

Validating digital pathology as a substitute for conventional microscopy in diagnosis remains a priority to assure effectiveness. Intermodality concordance studies typically focus on achieving the same diagnosis by digital display of whole slide images and by conventional microscopy. Assessment of discrete histological features in whole slide images, such as mitotic figures, has not been thoroughly evaluated in diagnostic practice. To further gauge the interchangeability of conventional microscopy with digital display for primary diagnosis, 12 pathologists examined 113 canine naturally occurring mucosal melanomas exhibiting a wide range of mitotic activity. The study design reflected diverse diagnostic settings and investigated independent location, interpretation, and enumeration of mitotic figures. Intermodality agreement was assessed employing conventional microscopy (CM40×) and whole slide image specimens scanned at 20× (WSI20×) and at 40× (WSI40×) objective magnifications. An aggregate of 1647 mitotic figure count observations was available from conventional microscopy and whole slide images for comparison. The intraobserver concordance rate of paired observations was 0.785 to 0.801; the interobserver rate was 0.784 to 0.794. Correlation coefficients between the 2 digital modes, and compared to conventional microscopy, were similar and suggest noninferiority among modalities, including whole slide images acquired at the lower 20× resolution. As mitotic figure counts serve for prognostic grading of several tumor types, including melanoma, 6 of 8 pathologists retrospectively predicted survival prognosis using whole slide images, compared to 9 of 10 by conventional microscopy, a first evaluation of whole slide images for mitotic figure prognostic grading. This study demonstrated agreement of replicate reads obtained across conventional microscopy and whole slide images. Hence, quantifying mitotic figures served as a surrogate histological feature with which to further credential the interchangeability of whole slide images for primary diagnosis.
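A minimal sketch of the kind of paired-count comparison reported above, assuming exact-match agreement on hypothetical mitotic figure counts (the study's actual concordance definition may differ):

    import numpy as np

    # Paired mitotic figure counts for the same cases; values are hypothetical.
    cm_counts  = np.array([0, 3, 12, 7, 1, 25, 4, 0, 9, 15])  # conventional microscopy
    wsi_counts = np.array([0, 3, 10, 7, 1, 24, 4, 1, 9, 15])  # whole slide image, 40x scan

    agreement = np.mean(cm_counts == wsi_counts)   # exact-match concordance rate
    r = np.corrcoef(cm_counts, wsi_counts)[0, 1]   # Pearson correlation coefficient
    print(f"concordance={agreement:.2f}, correlation={r:.3f}")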


2021, Vol 11 (1)
Author(s): Laurie Needham, Murray Evans, Darren P. Cosker, Logan Wade, Polly M. McGuigan, ...

Abstract: Human movement researchers are often restricted to laboratory environments and data capture techniques that are time and/or resource intensive. Markerless pose estimation algorithms show great potential to facilitate large-scale movement studies ‘in the wild’, i.e., outside of the constraints imposed by marker-based motion capture. However, the accuracy of such algorithms has not yet been fully evaluated. We computed 3D joint centre locations using several pre-trained deep learning-based pose estimation methods (OpenPose, AlphaPose, DeepLabCut) and compared them to marker-based motion capture. Participants performed walking, running and jumping activities while marker-based motion capture data and multi-camera high-speed images (200 Hz) were captured. The pose estimation algorithms were applied to the 2D image data and 3D joint centre locations were reconstructed. Pose estimation-derived joint centres demonstrated systematic differences at the hip and knee (~30–50 mm), most likely due to mislabeling of ground truth data in the training datasets. Where systematic differences were lower, e.g., at the ankle, differences of 1–15 mm were observed depending on the activity. Markerless motion capture represents a highly promising emerging technology that could free movement scientists from laboratory environments, but 3D joint centre locations are not yet consistently comparable to marker-based motion capture.
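A minimal sketch of the per-joint comparison described above, assuming time-synchronized 3D joint centre trajectories in millimetres expressed in the same coordinate frame; array shapes, joint names and values are placeholders:

    import numpy as np

    # (n_frames, n_joints, 3) joint centre trajectories in mm, already
    # time-synchronized and expressed in the same coordinate frame.
    n_frames, joints = 400, ["hip", "knee", "ankle"]
    rng = np.random.default_rng(1)
    marker_based = rng.normal(size=(n_frames, len(joints), 3)) * 100
    markerless = marker_based + rng.normal(scale=10, size=marker_based.shape) + 30  # offset + noise

    # Mean Euclidean distance per joint: a simple measure of systematic difference.
    per_frame_error = np.linalg.norm(markerless - marker_based, axis=-1)  # (n_frames, n_joints)
    for name, err in zip(joints, per_frame_error.mean(axis=0)):
        print(f"{name}: mean difference {err:.1f} mm")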


2020
Author(s): Yang Deng, Min Feng, Yong Jiang, Yanyan Zhou, Hangyu Qing, ...

Abstract Background: Pathology plays a very important role in cancer diagnosis, as the gold standard for the identification of tumors. The rapid development of digital pathology (DP), which is based on whole slide images (WSI), has led to many improvements in telepathological consultation, digital management, and computer-assisted diagnosis by artificial intelligence (AI). In DP, the common digitization strategy is to scan the pathology slide with a 20× or 40× objective. A 40× WSI is usually about 4 times larger than the corresponding 20× WSI, so the storage space and transmission time of the data increase roughly fourfold. These increased costs are a major obstacle to the popularization of DP, yet some cases require high-magnification WSI for reliable diagnosis. Methods: In this article, we present a novel deep learning-based super-resolution process for WSI. This AI-powered process converts 20× WSI to 40× without loss of global or local features. Furthermore, we collected WSI data from patients with uterine leiomyosarcoma and adult granulosa cell tumor (AGCT) of the ovary (100 cases of each), which were used to test our super-resolution process. Results: We evaluated the 40× WSI synthesized by super-resolution (SR) using the peak signal-to-noise ratio (PSNR), the structural similarity (SSIM), and the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE), obtaining values of 42.03, 0.99 and 49.22, respectively. We then evaluated the SR images subjectively from the pathologists' perspective, and tested whether the pathologists could objectively distinguish SR images from real high-resolution (HR) images, to further confirm the consistency between our SR images and the real HR images. Conclusions: The results indicate that the 40× WSI synthesized by super-resolution matches the diagnostic performance of WSI generated with a 40× objective for both tumors. We believe that this is a reliable method that can be applied to digital slides of a variety of tumors and will be suitable for large-scale use in clinical pathology as an innovative technique.
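Two of the reported metrics, PSNR and SSIM, can be computed for a synthesized tile against its real 40× counterpart roughly as follows (a sketch assuming RGB uint8 tiles and scikit-image >= 0.19; BRISQUE is not part of scikit-image and would need a separate library):

    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    # hr_tile: real 40x tile, sr_tile: tile synthesized from the 20x scan.
    # Both are RGB uint8 arrays of the same shape (random placeholders here).
    rng = np.random.default_rng(0)
    hr_tile = rng.integers(0, 256, size=(512, 512, 3), dtype=np.uint8)
    sr_tile = np.clip(hr_tile.astype(int) + rng.integers(-5, 6, hr_tile.shape), 0, 255).astype(np.uint8)

    psnr = peak_signal_noise_ratio(hr_tile, sr_tile, data_range=255)
    ssim = structural_similarity(hr_tile, sr_tile, channel_axis=-1, data_range=255)
    print(f"PSNR={psnr:.2f} dB, SSIM={ssim:.3f}")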


Doklady BGUIR, 2020, Vol 18 (8), pp. 21-28
Author(s): S. N. Rjabceva, V. A. Kovalev, V. D. Malyshev, I. A. Siamionik, M. A. Derevyanko, ...

Analysis of breast cancer whole slide images is an extremely labor-intensive process. Histological whole slide images have the following characteristics: a high degree of tissue diversity both within a single image and between different images, hierarchical structure, a large amount of graphic information, and various artifacts. In this work, pre-processing of breast cancer whole slide tissue images was carried out, including normalization of the color distribution and selection of the image area to analyze. This reduced the running time of the subsequent algorithms and excluded background areas of the whole slide tissue image from analysis. In addition, an algorithm for semi-automatic selection of similar neoplastic regions using various image descriptors has been developed and implemented.
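As an illustration of the background-exclusion step described above (a common approach, not necessarily the exact one used in this work), the sketch below masks out near-white background on a downsampled WSI thumbnail using Otsu thresholding on the saturation channel; scikit-image is assumed.

    import numpy as np
    from skimage.color import rgb2hsv
    from skimage.filters import threshold_otsu

    def tissue_mask(thumbnail_rgb):
        """Return a boolean mask that is True on tissue and False on background.

        thumbnail_rgb: downsampled RGB image of the whole slide.
        Background in stained slides is near-white, i.e. has low saturation.
        """
        saturation = rgb2hsv(thumbnail_rgb)[..., 1]
        t = threshold_otsu(saturation)
        return saturation > t

    # Example with a synthetic thumbnail (white background + a tissue-like patch).
    thumb = np.full((256, 256, 3), 255, dtype=np.uint8)
    thumb[64:192, 64:192] = [180, 90, 150]  # pinkish tissue-like region
    mask = tissue_mask(thumb)
    print(f"tissue fraction: {mask.mean():.2f}")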


2021, Vol 11 (1)
Author(s): Manuel Tiglio, Aarón Villanueva

Abstract: We introduce a new approach for finding high-accuracy, free and closed-form expressions for the gravitational waves emitted by binary black hole collisions from ab initio models. More precisely, our expressions are built from numerical surrogate models based on supercomputer simulations of the Einstein equations, which have been shown to be essentially indistinguishable from each other. Distinct aspects of our approach are that: (i) representations of the gravitational waves can be explicitly written in a few lines, (ii) these representations are free-form yet still fast to search for and validate, and (iii) there are no physical approximations in the underlying model. The key strategy is combining techniques from Artificial Intelligence and Reduced Order Modeling for parameterized systems: symbolic regression through genetic programming is combined with sparse representations in parameter space and the time domain, using Reduced Basis and the Empirical Interpolation Method, enabling fast free-form symbolic searches and large-scale a posteriori validations. As a proof of concept we present our results for the collision of two black holes, initially without spin, and with an initial separation corresponding to 25–31 gravitational wave cycles before merger. The minimum overlap with ground truth solutions is 99%, that is, a 1% difference between our closed-form expressions and the supercomputer simulations; for gravitational wave (GW) science this is more than the minimum accuracy required, given the experimental and numerical errors which otherwise dominate. This paper aims to contribute to the field of GWs in particular and Artificial Intelligence in general.
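The 99% minimum overlap quoted above is a normalized inner product between waveforms; below is a simplified sketch, assuming uniformly sampled real time-domain waveforms and a flat (white-noise) inner product rather than the detector-noise-weighted, frequency-domain one used in practice, with stand-in waveforms for illustration.

    import numpy as np

    def overlap(h1, h2, dt):
        """Normalized overlap between two uniformly sampled waveforms.

        Uses a simplified white-noise, time-domain inner product <a, b> = sum(a*b)*dt;
        GW data analysis normally uses a noise-weighted frequency-domain version.
        """
        inner = lambda a, b: np.sum(a * b) * dt
        return inner(h1, h2) / np.sqrt(inner(h1, h1) * inner(h2, h2))

    t = np.linspace(0, 1, 4096)
    dt = t[1] - t[0]
    h_truth = np.sin(2 * np.pi * 30 * t) * np.exp(-2 * t)   # stand-in "ground truth"
    h_model = h_truth + 0.01 * np.sin(2 * np.pi * 90 * t)   # small modelling error
    print(f"overlap = {overlap(h_truth, h_model, dt):.4f}")  # close to 1, i.e. ~100%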


Author(s): A. V. Ponomarev

Introduction: Large-scale human-computer systems involving people of various skills and motivation in the information processing process are currently used in a wide spectrum of applications. An acute problem in such systems is assessing the expected quality of each contributor, for example in order to penalize incompetent or inaccurate ones and to promote diligent ones. Purpose: To develop a method of assessing a contributor's expected quality in community tagging systems. The method should only use the generally unreliable and incomplete information provided by contributors (with ground truth tags unknown). Results: A mathematical model is proposed for community image tagging (including a model of a contributor), along with a method of assessing a contributor's expected quality. The method is based on comparing tag sets provided by different contributors for the same images, and is a modification of the pairwise comparison method with the preference relation replaced by a special domination characteristic. Expected contributor quality is evaluated as a positive eigenvector of a pairwise domination characteristic matrix. Community tagging simulation has confirmed that the proposed method adequately estimates the expected quality of community tagging system contributors (provided that the contributors' behavior fits the proposed model). Practical relevance: The obtained results can be used in the development of systems based on the coordinated efforts of a community (primarily, community tagging systems).
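A minimal sketch of the eigenvector step described above: given a pairwise domination characteristic matrix D (filled here with illustrative placeholder values, since the actual domination characteristic is defined in the full text), the expected quality of each contributor is taken as the positive principal eigenvector of D, computed by power iteration.

    import numpy as np

    def principal_eigenvector(D, iters=200, tol=1e-10):
        """Positive principal eigenvector of a nonnegative matrix via power iteration."""
        q = np.ones(D.shape[0]) / D.shape[0]
        for _ in range(iters):
            q_new = D @ q
            q_new /= q_new.sum()
            if np.linalg.norm(q_new - q, 1) < tol:
                break
            q = q_new
        return q

    # Placeholder domination characteristic matrix for 4 contributors
    # (entry D[i, j] ~ how strongly contributor i "dominates" j; values illustrative only).
    D = np.array([
        [1.0, 0.8, 0.9, 0.7],
        [0.4, 1.0, 0.6, 0.5],
        [0.5, 0.7, 1.0, 0.6],
        [0.3, 0.4, 0.5, 1.0],
    ])
    quality = principal_eigenvector(D)
    print("expected contributor quality:", np.round(quality, 3))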

