Object detection for automatic cancer cell counting in zebrafish xenografts

PLoS ONE ◽  
2021 ◽  
Vol 16 (11) ◽  
pp. e0260609
Author(s):  
Carina Albuquerque ◽  
Leonardo Vanneschi ◽  
Roberto Henriques ◽  
Mauro Castelli ◽  
Vanda Póvoa ◽  
...  

Cell counting is a frequent task in medical research studies. However, it is often performed manually; thus, it is time-consuming and prone to human error. Even so, cell counting automation can be challenging to achieve, especially when dealing with crowded scenes and overlapping cells of varying shapes and sizes. In this paper, we introduce a deep learning-based cell detection and quantification methodology to automate the cell counting process in the zebrafish xenograft cancer model, an innovative technique for studying tumor biology and for personalizing medicine. First, we implemented a fine-tuned architecture based on the Faster R-CNN using the Inception ResNet V2 feature extractor. Second, we performed several adjustments to optimize the process, paying attention to constraints such as the presence of overlapping cells, the high number of objects to detect, the heterogeneity of the cells' size and shape, and the small size of the data set. This method resulted in a median error of approximately 1% of the total number of cell units. These results demonstrate the potential of our novel approach for quantifying cells in poorly labeled images. Compared to traditional Faster R-CNN, our method improved the average precision from 71% to 85% on the studied data set.
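A minimal counting sketch under assumed tooling: the paper fine-tunes Faster R-CNN with an Inception ResNet V2 feature extractor, whereas this illustration uses torchvision's Faster R-CNN with a ResNet-50 FPN backbone as a stand-in and simply counts detections above a confidence threshold; the image path and class count are placeholders.

# Sketch of detection-based cell counting, assuming a torchvision Faster R-CNN
# with a ResNet-50 FPN backbone as a stand-in for the paper's Inception ResNet V2.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.transforms import functional as F
from PIL import Image

NUM_CLASSES = 2  # background + cell

def build_model():
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)
    return model

def count_cells(model, image_path, score_threshold=0.5):
    model.eval()
    image = F.to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        prediction = model([image])[0]
    keep = prediction["scores"] >= score_threshold
    return int(keep.sum())  # number of detected cells

if __name__ == "__main__":
    model = build_model()  # fine-tune on annotated xenograft images before use
    print(count_cells(model, "xenograft_example.png"))  # hypothetical image path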


Author(s):  
D. E. Becker

An efficient, robust, and widely applicable technique is presented for computational synthesis of high-resolution, wide-area images of a specimen from a series of overlapping partial views. This technique can also be used to combine the results of various forms of image analysis, such as segmentation, automated cell counting, deblurring, and neuron tracing, to generate representations that are equivalent to processing the large wide-area image rather than the individual partial views. This can be a first step towards quantitation of the higher-level tissue architecture. The computational approach overcomes mechanical limitations of microscope stages, such as hysteresis and backlash. It also automates a procedure that is currently done manually. One application is the high-resolution visualization and/or quantitation of large batches of specimens that are much wider than the field of view of the microscope. The automated montage synthesis begins by computing a concise set of landmark points for each partial view. The type of landmarks used can vary greatly depending on the images of interest. In many cases, image analysis performed on each data set can provide useful landmarks. Even when no such “natural” landmarks are available, image processing can often provide useful landmarks.
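As a rough illustration of landmark-based alignment of two overlapping partial views (not the algorithm described above), the following sketch matches ORB keypoints with OpenCV and estimates a homography that places one view in the other's coordinate frame.

# Illustrative landmark-based alignment of two overlapping partial views
# (ORB keypoints + RANSAC homography); a sketch, not the author's method.
import cv2
import numpy as np

def align_views(view_a_path, view_b_path):
    a = cv2.imread(view_a_path, cv2.IMREAD_GRAYSCALE)
    b = cv2.imread(view_b_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(a, None)
    kp_b, des_b = orb.detectAndCompute(b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:200]
    src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H  # maps view A coordinates into view B's frame for montage placement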



2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Roko Duplancic ◽  
Darko Kero

Abstract We describe a novel approach for quantification and colocalization of immunofluorescence (IF) signals of multiple markers on high-resolution panoramic images of serial histological sections, utilizing standard staining techniques and readily available software for image processing and analysis. Human gingiva samples stained with primary antibodies against the common leukocyte antigen CD45 and factors related to heparan sulfate glycosaminoglycans (HS GAG) were used. Expression domains and spatial gradients of IF signals were quantified by histograms and 2D plot profiles, respectively. The importance of histomorphometric profiling of tissue samples and IF signal thresholding is elaborated. This approach to quantification of IF staining utilizes pixel (px) counts and comparison of px grey value (GV) or luminance. No cell counting is applied, either to determine the cellular content of a given histological section or to count the cells positive for the primary antibody of interest. There is no selection of multiple Regions of Interest (ROIs), since the entire histological section is quantified. Although the standard IF staining protocol is applied, the data output enables colocalization of multiple markers (up to 30) from a given histological sample. This can serve as an alternative to colocalization of IF staining of multiple primary antibodies based on repeated cycles of staining of the same histological section, since those techniques require non-standard staining protocols and sophisticated equipment that can be out of reach for small laboratories in academic settings. Combined with data from ontological databases, this approach to quantification of IF enables the creation of in silico virtual disease models.
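A minimal sketch of the kind of pixel-count and grey-value quantification described, assuming a single-channel IF image and an arbitrary threshold; it is illustrative only and does not reproduce the authors' panoramic-image workflow.

# Threshold-based IF quantification on a single-channel image: pixel counts
# above a grey-value threshold, a GV histogram, and a horizontal intensity
# profile (loosely analogous to a 2D plot profile).
import numpy as np
from skimage import io

def quantify_if(image_path, threshold=30):
    img = io.imread(image_path, as_gray=True)
    gv = (img * 255).astype(np.uint8) if img.max() <= 1.0 else img.astype(np.uint8)
    positive_px = int((gv >= threshold).sum())      # size of the expression domain
    histogram, _ = np.histogram(gv, bins=256, range=(0, 256))
    profile = gv.mean(axis=0)                       # mean GV per image column
    return positive_px, histogram, profile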



2021 ◽  
pp. 1-45
Author(s):  
Benjamin Leard ◽  
Joshua Linn ◽  
Yichen Christy Zhou

Abstract During historical periods in which US fuel economy standards were unchanging, automakers increased performance but not fuel economy, contrasting with recent periods of tightening standards and rising fuel economy. This paper evaluates the welfare consequences of automakers forgoing performance increases to raise fuel economy as standards have tightened since 2012. Using a unique data set and a novel approach to account for fuel economy and performance endogeneity, we find undervaluation of fuel cost savings and high valuation of performance. Welfare costs of forgone performance approximately equal expected fuel savings benefits, suggesting approximately zero net private consumer benefit from tightened standards.



2021 ◽  
Vol 109 (4) ◽  
Author(s):  
Anson Parker ◽  
Abbey Heflin ◽  
Lucy Carr Jones

As part of a larger project to understand the publishing choices of UVA Health authors and support open access publishing, a team from the Claude Moore Health Sciences Library analyzed an open data set from Europe PMC, which includes metadata from PubMed records. We used the Europe PMC REST API to search for articles published in 2017–2020 with “University of Virginia” in the author affiliation field. Subsequently, we parsed the JSON metadata in Python and used Streamlit to create a data visualization from our public GitHub repository. At present, this shows the relative proportions of open access versus subscription-only articles published by UVA Health authors. Although subscription services like Web of Science, Scopus, and Dimensions allow users to do similar analyses, we believe this is a novel approach to doing this type of bibliometric research with open data and open source tools.  
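A sketch of the kind of Europe PMC REST query involved; the endpoint and JSON layout follow the public API, but the exact affiliation/date query string and the open-access tally are illustrative assumptions, and the Streamlit visualization step is omitted.

# Query the Europe PMC REST search endpoint and tally open-access records.
# The query string below is an assumption, not the exact one used in the study.
import requests

BASE = "https://www.ebi.ac.uk/europepmc/webservices/rest/search"
QUERY = 'AFF:"University of Virginia" AND PUB_YEAR:[2017 TO 2020]'

def fetch_records():
    cursor, records = "*", []
    while True:
        resp = requests.get(BASE, params={
            "query": QUERY, "format": "json", "pageSize": 1000, "cursorMark": cursor,
        })
        resp.raise_for_status()
        payload = resp.json()
        batch = payload.get("resultList", {}).get("result", [])
        records.extend(batch)
        next_cursor = payload.get("nextCursorMark")
        if not batch or not next_cursor or next_cursor == cursor:
            break
        cursor = next_cursor
    return records

records = fetch_records()
open_access = sum(1 for r in records if r.get("isOpenAccess") == "Y")
print(f"{open_access} of {len(records)} records flagged open access")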



2021 ◽  
Author(s):  
Sophie Goliber ◽  
Taryn Black ◽  
Ginny Catania ◽  
James M. Lea ◽  
Helene Olsen ◽  
...  

Abstract. Marine-terminating outlet glacier terminus traces, mapped from satellite and aerial imagery, have been used extensively in understanding how outlet glaciers adjust to climate change variability over a range of time scales. Numerous studies have digitized termini manually, but this process is labor-intensive, and no consistent approach exists. A lack of coordination leads to duplication of efforts, particularly for Greenland, which is a major scientific research focus. At the same time, machine learning techniques are rapidly making progress in their ability to automate accurate extraction of glacier termini, with promising developments across a number of optical and SAR satellite sensors. These techniques rely on high-quality, manually digitized terminus traces to serve as training data for robust automated tracing. Here we present a database of manually digitized terminus traces for machine learning and scientific applications. These data have been collected, cleaned, assigned appropriate metadata (including image scenes), and compiled so they can be easily accessed by scientists. The TermPicks data set includes 39,060 individual terminus traces for 278 glaciers, with a mean and median number of traces per glacier of 136 ± 190 and 93, respectively. Across all glaciers, 32,567 dates have been picked, of which 4,467 have traces from more than one author (duplication of 14 %). We find a median error of ∼100 m among manually traced termini. Most traces were obtained after 1999, when Landsat 7 was launched. We also provide an overview of an updated version of the Google Earth Engine Digitization Tool (GEEDiT), which has been developed specifically for future manual picking of the Greenland Ice Sheet.
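A sketch of how such a trace table might be filtered per glacier with geopandas; the file name and column names (glacier_id, date) are assumptions, not the published TermPicks schema.

# Load a vector file of terminus traces and summarize coverage per glacier.
# File name and columns are hypothetical placeholders for the real data set.
import geopandas as gpd
import pandas as pd

traces = gpd.read_file("termpicks_traces.gpkg")
traces["date"] = pd.to_datetime(traces["date"])

per_glacier = traces.groupby("glacier_id").agg(
    n_traces=("date", "size"),
    first_trace=("date", "min"),
    last_trace=("date", "max"),
)
post_landsat7 = traces[traces["date"] >= "1999-04-15"]  # Landsat 7 launch
print(per_glacier.describe())
print(f"{len(post_landsat7)} traces picked after the Landsat 7 launch")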



2018 ◽  
Author(s):  
Wennan Chang ◽  
Changlin Wan ◽  
Xiaoyu Lu ◽  
Szu-wei Tu ◽  
Yifan Sun ◽  
...  

Abstract We developed a novel deconvolution method, Inference of Cell Types and Deconvolution (ICTD), that addresses the fundamental issues of identifiability and robustness in the tissue data deconvolution problem. ICTD provides substantially new capabilities for omics-data-based characterization of a tissue microenvironment, including (1) maximizing the resolution in identifying resident cell types and subtypes that truly exist in a tissue, (2) identifying the most reliable marker genes for each cell type, which are tissue and data set specific, (3) handling the stability problem with collinear cell types, (4) co-deconvoluting with available matched multi-omics data, and (5) inferring functional variations specific to one or several cell types. ICTD is empowered by (i) rigorously derived mathematical conditions for identifiable cell types and cell-type-specific functions in tissue transcriptomics data, (ii) a semi-supervised approach that maximizes the transfer of cell type and functional marker genes identified in single-cell or bulk-cell data to the analysis of tissue data, and (iii) a novel unsupervised approach that minimizes the bias introduced by training data. Application of ICTD to real and single-cell-simulated tissue data validated that the method performs consistently well for tissue data from different species, tissue microenvironments, and experimental platforms. Beyond these new capabilities, ICTD outperformed other state-of-the-art deconvolution methods in prediction accuracy, the resolution of identifiable cell types, detection of unknown cell subtypes, and assessment of cell-type-specific functions. The promise of ICTD also lies in characterizing cell-cell interactions and discovering cell types and prognostic markers that are predictive of clinical outcomes.
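ICTD itself is not reproduced here; as a generic illustration of signature-based deconvolution, the sketch below estimates cell-type proportions per sample with non-negative least squares against a simulated marker-gene signature matrix.

# Generic signature-based tissue deconvolution (not ICTD): estimate cell-type
# proportions per sample by non-negative least squares.
import numpy as np
from scipy.optimize import nnls

def deconvolve(bulk_expr, signature):
    """bulk_expr: genes x samples; signature: genes x cell types."""
    props = []
    for j in range(bulk_expr.shape[1]):
        coef, _ = nnls(signature, bulk_expr[:, j])
        total = coef.sum()
        props.append(coef / total if total > 0 else coef)
    return np.array(props)  # samples x cell types, rows sum to 1

rng = np.random.default_rng(0)
signature = rng.gamma(2.0, 1.0, size=(200, 4))       # 200 marker genes, 4 cell types
true_props = rng.dirichlet(np.ones(4), size=10)      # 10 simulated samples
bulk = signature @ true_props.T + rng.normal(0, 0.05, (200, 10))
print(np.abs(deconvolve(bulk, signature) - true_props).max())  # recovery error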



2011 ◽  
Vol 493-494 ◽  
pp. 325-330 ◽  
Author(s):  
J.A. Cortês ◽  
Elena Mavropoulos ◽  
Moema Hausen ◽  
Alexandre Rossi ◽  
J.M. Granjeiro ◽  
...  

Cell adhesion, proliferation and differentiation are important specific parameters to be evaluated in biocompatibility studies of candidate biomaterials for clinical applications. Several different methodologies have been employed to study, both qualitatively and quantitatively, the direct interactions of ceramic materials with cultured mammalian and human cells. However, when quantitatively evaluating cell density, viability and metabolic responses to test materials, several methodological challenges may arise, either by impairing the use of some widely applied techniques or by generating false or conflicting results. In this work, we tested the inherent interference of different representative calcium phosphate ceramic surfaces (stoichiometric dense and porous hydroxyapatite (HA) and cation-substituted apatite tablets) on different tests for quantitative evaluation of osteoblast adhesion and metabolism, based on direct cell counting after trypsinization, colorimetric assays (XTT, Neutral Red and Crystal Violet), and fluorescence microscopy. Cell adhesion estimation after trypsinization was highly dependent on the time of treatment, and the group with the highest level of estimated adhesion was inverted between 5 and 20 minutes of exposure to trypsin. Both dense and porous HA samples presented high levels of background adsorption of the Crystal Violet dye, impairing cell detection. HA surfaces were also able to adsorb high levels of fluorescent dyes (DAPI and phalloidin-TRITC), generating backgrounds which, in the case of porous HA, impaired cell detection and counting by image processing software (Image Pro Plus 6.0). We conclude that the choice of the most suitable method for cell detection and estimation is highly dependent on very specific characteristics of the studied material, and methodological adaptations of well-established protocols must always be carefully taken into consideration.



2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
A. D'Amato

Purpose: The purpose of this paper is to analyze the relationship between intellectual capital and firm capital structure by exploring whether firm profitability and risk are drivers of this relationship.
Design/methodology/approach: Based on a comprehensive data set of Italian firms over the 2008–2017 period, this paper examines whether intellectual capital affects firm financial leverage. Moreover, it analyzes whether firm profitability and risk mediate the abovementioned relationship. Financial leverage is measured by the debt/equity ratio. Intellectual capital is measured via the value-added intellectual coefficient approach.
Findings: The findings show that firms with a high level of intellectual capital have lower financial leverage and are more profitable and riskier than firms with a low level of intellectual capital. Furthermore, this study finds that firm profitability and risk mediate the relationship between intellectual capital and financial leverage. Thus, the higher profitability and risk of intellectual capital-intensive firms help explain their lower financial leverage.
Research limitations/implications: The findings have several implications. From a theoretical standpoint, the paper presents and tests a mediating model of the relationship between intellectual capital and financial leverage and its underlying processes. In terms of the more general managerial implications, the results provide managers with a clear interpretation of the relationship between intellectual capital and financial leverage and point to the need to strengthen the capital structure of intangible-intensive firms.
Originality/value: Through a mediation framework, this study provides empirical evidence on the relationship between intellectual capital and firm financial leverage by exploring the underlying mechanisms behind that relationship, which is a novel approach in the literature.
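As a generic illustration of the mediation logic only (not the paper's specification, estimator, or data), the sketch below runs a classic three-regression mediation check on simulated firm-level variables with statsmodels.

# Simple Baron & Kenny style mediation check on simulated data: does
# profitability mediate the effect of intellectual capital on leverage?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
ic = rng.normal(size=n)                                   # intellectual capital proxy
profitability = 0.6 * ic + rng.normal(scale=0.8, size=n)  # mediator
leverage = -0.3 * profitability - 0.1 * ic + rng.normal(scale=0.8, size=n)
df = pd.DataFrame({"ic": ic, "prof": profitability, "lev": leverage})

total = smf.ols("lev ~ ic", data=df).fit()         # total effect of IC on leverage
mediator = smf.ols("prof ~ ic", data=df).fit()     # IC -> mediator
direct = smf.ols("lev ~ ic + prof", data=df).fit() # direct effect, controlling mediator
indirect = mediator.params["ic"] * direct.params["prof"]
print(total.params["ic"], direct.params["ic"], indirect)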



2021 ◽  
Vol 18 (1) ◽  
pp. 34-57
Author(s):  
Weifeng Pan ◽  
Xinxin Xu ◽  
Hua Ming ◽  
Carl K. Chang

Mashup technology has become a promising way to develop and deliver applications on the web. Automatically organizing Mashups into functionally similar clusters helps improve the performance of Mashup discovery. Although many approaches aim to cluster Mashups, they focus solely on semantic similarities to guide the clustering process and cannot exploit both the structural and semantic information in Mashup profiles. In this paper, a novel approach to clustering Mashups into groups is proposed, which integrates structural similarity and semantic similarity using fuzzy AHP (fuzzy analytic hierarchy process). The structural similarity is computed from usage histories between Mashups and Web APIs using the SimRank algorithm. The semantic similarity is computed from the descriptions and tags of Mashups using LDA (latent Dirichlet allocation). A clustering algorithm based on a genetic algorithm is employed to cluster Mashups. Comprehensive experiments are performed on a real data set collected from ProgrammableWeb. The results show the effectiveness of the approach compared with two kinds of conventional approaches.
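The sketch below illustrates only the two similarity signals on a toy example: SimRank over a Mashup-API usage graph (networkx) and LDA topic similarity over Mashup descriptions (scikit-learn), mixed with a fixed weight in place of the paper's fuzzy AHP and without the genetic-algorithm clustering step.

# Structural similarity via SimRank on the Mashup-API graph plus semantic
# similarity via LDA topics; the 0.5/0.5 mix stands in for fuzzy-AHP weights.
import networkx as nx
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

mashups = ["map travel route planner", "photo sharing social feed", "travel map hotels"]
api_usage = {0: ["GoogleMaps"], 1: ["Flickr", "Twitter"], 2: ["GoogleMaps", "Expedia"]}

# Structural similarity: SimRank on the bipartite Mashup-API usage graph.
G = nx.Graph()
for m, apis in api_usage.items():
    for api in apis:
        G.add_edge(("mashup", m), ("api", api))
sim = nx.simrank_similarity(G)
structural = np.array([[sim[("mashup", i)][("mashup", j)] for j in api_usage]
                       for i in api_usage])

# Semantic similarity: LDA topic distributions compared by cosine similarity.
counts = CountVectorizer().fit_transform(mashups)
topics = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(counts)
semantic = cosine_similarity(topics)

combined = 0.5 * structural + 0.5 * semantic
print(np.round(combined, 3))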



Author(s):  
Shashidhara Bola

A new method is proposed to classify lung nodules as benign or malignant. The method is based on analysis of lung nodule shape, contour, and texture for better classification. The data set consists of 39 lung nodules from 39 patients, comprising 19 benign and 20 malignant nodules. Lung regions are segmented based on morphological operators, and lung nodules are detected based on shape and area features. The proposed algorithm was tested on the LIDC (Lung Image Database Consortium) data set, and the results were found to be satisfactory. The performance of the method in distinguishing benign from malignant nodules was evaluated using receiver operating characteristic (ROC) analysis. The method achieved an area under the ROC curve of 0.903, which reduces the false-positive rate.
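A generic sketch of the pipeline ingredients (morphological segmentation, per-region shape and area features, ROC-AUC evaluation) with scikit-image and scikit-learn; the thresholding choice, feature set, and classifier are assumptions, not the author's exact method.

# Morphological cleanup of a CT slice mask, shape/area features per candidate
# region, and ROC-AUC scoring of a fitted classifier (illustrative only).
import numpy as np
from skimage import io, filters, morphology, measure
from sklearn.metrics import roc_auc_score

def nodule_features(slice_path):
    img = io.imread(slice_path, as_gray=True)
    mask = img > filters.threshold_otsu(img)
    mask = morphology.remove_small_objects(morphology.binary_opening(mask), min_size=64)
    feats = []
    for region in measure.regionprops(measure.label(mask)):
        feats.append([region.area, region.eccentricity, region.solidity, region.perimeter])
    return np.array(feats)

# X_test: one feature row per nodule; y_test: 0 = benign, 1 = malignant.
def evaluate(fitted_model, X_test, y_test):
    scores = fitted_model.predict_proba(X_test)[:, 1]
    return roc_auc_score(y_test, scores)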


