A Learning-Based Formulation of Parametric Curve Fitting for Bioimage Analysis

Author(s):  
Soham Mandal ◽  
Virginie Uhlmann

Abstract Parametric curve models are convenient for describing and quantitatively characterizing the contour of objects in bioimages. Unfortunately, designing algorithms to smoothly fit such models onto image data classically requires significant domain expertise. Here, we propose a convolutional neural network-based approach to predict a continuous parametric representation of the outline of biological objects. We successfully apply our method to the Kaggle 2018 Data Science Bowl dataset, a varied collection of images of cell nuclei. This work is a first step towards user-friendly bioimage analysis tools that extract continuously defined representations of objects.
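
The abstract does not detail the curve model; as a minimal sketch of what a continuous parametric contour representation looks like (a simple circle fit, not the authors' CNN-based prediction), one might write:

```python
import math

def fit_circle(points):
    """Fit a circle to 2-D contour points with a simple two-step estimate:
    centroid as center, mean distance to the centroid as radius."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    r = sum(math.hypot(x - cx, y - cy) for x, y in points) / n
    return (cx, cy), r

def sample_circle(center, radius, m=64):
    """Evaluate the fitted parametric curve c(t) = center + radius*(cos t, sin t)
    at m evenly spaced parameter values."""
    cx, cy = center
    return [(cx + radius * math.cos(2 * math.pi * k / m),
             cy + radius * math.sin(2 * math.pi * k / m)) for k in range(m)]
```

Once fitted, the contour can be resampled at any resolution, which is the practical advantage of a continuously defined representation over a pixel mask.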

2019 ◽  
Vol 16 (12) ◽  
pp. 1247-1253 ◽  
Author(s):  
Juan C. Caicedo ◽  
Allen Goodman ◽  
Kyle W. Karhohs ◽  
Beth A. Cimini ◽  
Jeanelle Ackerman ◽  
...  

Abstract Segmenting the nuclei of cells in microscopy images is often the first step in the quantitative analysis of imaging data for biological and biomedical applications. Many bioimage analysis tools can segment nuclei in images but need to be selected and configured for every experiment. The 2018 Data Science Bowl attracted 3,891 teams worldwide to make the first attempt to build a segmentation method that could be applied to any two-dimensional light microscopy image of stained nuclei across experiments, with no human interaction. Top participants in the challenge succeeded in this task, developing deep-learning-based models that identified cell nuclei across many image types and experimental conditions without the need to manually adjust segmentation parameters. This represents an important step toward configuration-free bioimage analysis software tools.
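
The winning entries are deep-learning models that the abstract does not specify further; as a baseline illustration of the kind of per-experiment configuration the challenge set out to eliminate, a classical Otsu threshold (a standard histogram-based segmentation step, not a challenge entry) can be sketched as:

```python
def otsu_threshold(pixels, levels=256):
    """Classical Otsu threshold: choose the gray level that maximizes
    the between-class variance of the intensity histogram."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_bg, w_bg, best_t, best_var = 0.0, 0, 0, -1.0
    for t in range(levels):
        w_bg += hist[t]                 # background pixel count up to t
        if w_bg == 0:
            continue
        w_fg = total - w_bg             # foreground pixel count
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        m_bg = sum_bg / w_bg            # background mean intensity
        m_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (m_bg - m_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

Such fixed pipelines work only when stain and illumination match their assumptions, which is exactly why a configuration-free learned model is attractive.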


2021 ◽  
Vol 22 (S6) ◽  
Author(s):  
Yasmine Mansour ◽  
Annie Chateau ◽  
Anna-Sophie Fiston-Lavier

Abstract Background Meiotic recombination is a vital biological process that plays an essential role in the structural and functional dynamics of genomes. Genomes exhibit highly variable recombination profiles along chromosomes, associated with several chromatin states. However, eu-heterochromatin boundaries are neither available nor easily obtained for non-model organisms, especially newly sequenced ones. Hence, we lack the accurate local recombination rates necessary to address evolutionary questions. Results Here, we propose an automated computational tool, based on the Marey map method, that identifies heterochromatin boundaries along chromosomes and estimates local recombination rates. Our method, called BREC (heterochromatin Boundaries and RECombination rate estimates), is non-genome-specific and runs even on non-model genomes as long as genetic and physical maps are available. BREC is purely statistical and data-driven, which implies that good input data quality remains a strong requirement; a data pre-processing module (data quality control and cleaning) is therefore provided. Experiments show that BREC handles various marker density and distribution issues. Conclusions BREC's heterochromatin boundaries have been validated against cytological equivalents experimentally generated on the fruit fly Drosophila melanogaster genome, for which BREC returns congruent values. BREC's recombination rates have also been compared with previously reported estimates. Based on these promising results, we believe our tool has the potential to bring data science into the service of genome biology and evolution. We distribute BREC as an R package with a Shiny web-based, user-friendly application, yielding a fast, easy-to-use, and broadly accessible resource. The BREC R package is available at the GitHub repository https://github.com/GenomeStructureOrganization.
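
BREC's exact estimator is not given in the abstract; the underlying Marey map idea, estimating the local recombination rate (in cM/Mb) as the local slope of genetic versus physical position, can be sketched with a sliding-window least-squares fit (an illustrative simplification, not BREC's implementation):

```python
def local_recombination_rates(physical_mb, genetic_cm, window=5):
    """Local recombination rate (cM/Mb) at each marker, taken as the slope
    of a sliding-window least-squares line fitted to the Marey map
    (genetic position versus physical position)."""
    rates = []
    n = len(physical_mb)
    half = window // 2
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        xs, ys = physical_mb[lo:hi], genetic_cm[lo:hi]
        m = len(xs)
        mx, my = sum(xs) / m, sum(ys) / m
        denom = sum((x - mx) ** 2 for x in xs)
        slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / denom
        rates.append(slope)
    return rates
```

A flat slope near the centromere or telomeres is the signal such tools use to place heterochromatin boundaries.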


Author(s):  
Tien Anh Tran

Ship energy efficiency management is an important topic in onboard energy management and exhaust gas emission control. An advanced model plays a vital role in improving ship energy efficiency management when variable factors are considered. A ship energy efficiency model based on the energy efficiency operational indicator (EEOI) is established through Monte Carlo simulation, using the operational data of a bulk carrier, namely M/V NSU JUSTICE, 250,000 DWT, of VINIC Shipping Transportation Company in Vietnam. This research uses real operational data in a statistical methodology that computes the various factors entering the EEOI. The method is supported by the Matlab curve fitting tool: normal distribution estimation and kernel density estimation are used for parametric and non-parametric curve fitting, respectively. The average weather conditions (wind speed and wave height) and the hull fouling condition have been investigated and compared with the research results. The proposed methods are validated by studying the external factors influencing the results. The results show the optimal operational data for fuel consumption on each voyage. This paper is useful for ship-owners and ship-operators in the field of ship energy efficiency management.
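
As a reference point for the index used here, the EEOI of a single voyage is the CO2 emitted (fuel consumed per fuel type times its carbon factor) divided by the transport work (cargo mass times distance). A minimal sketch, with an assumed carbon factor for marine diesel oil:

```python
def eeoi(fuel_tonnes_by_type, carbon_factors, cargo_tonnes, distance_nm):
    """Energy Efficiency Operational Indicator for a single voyage:
    EEOI = sum_j FC_j * C_Fj / (m_cargo * D),
    in tonnes of CO2 per tonne-nautical-mile."""
    co2 = sum(fuel_tonnes_by_type[f] * carbon_factors[f]
              for f in fuel_tonnes_by_type)
    return co2 / (cargo_tonnes * distance_nm)

# Assumed example figures (not from the paper): 100 t of marine diesel oil
# burned, carbon factor 3.206 t CO2 / t fuel, 250,000 t cargo, 1,000 nm.
example = eeoi({"MDO": 100.0}, {"MDO": 3.206}, 250_000.0, 1_000.0)
```

Averaging this quantity over voyages, or sampling the input factors as in a Monte Carlo study, gives the distribution the paper fits with parametric and non-parametric curves.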


2020 ◽  
Vol 21 (1) ◽  
Author(s):  
Feixiao Long

Abstract Background Cell nuclei segmentation is a fundamental task in microscopy image analysis, on which multiple biology-related analyses can be based. Although deep learning (DL) based techniques have achieved state-of-the-art performance in image segmentation tasks, these methods are usually complex and require the support of powerful computing resources. In addition, it is impractical to allocate advanced computing resources to every dark- or bright-field microscope, which are widely employed in many clinical institutions, considering the cost of medical exams. Thus, it is essential to develop accurate DL-based segmentation algorithms that work with resource-constrained computing. Results An enhanced, lightweight U-Net (called U-Net+) with a modified encoder branch is proposed to work with low-resource computing. Through strictly controlled experiments, the average IoU and precision of U-Net+ predictions are confirmed to outperform other prevalent competing methods, with a 1.0% to 3.0% gain on the first-stage test set of the 2018 Kaggle Data Science Bowl cell nuclei segmentation contest, at shorter inference time. Conclusions Our results preliminarily demonstrate the potential of the proposed U-Net+ for correctly spotting microscopy cell nuclei with resource-constrained computing.
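
The IoU and precision metrics used to rank U-Net+ against competing methods are standard; for binary masks they can be computed as follows (a generic sketch, not the contest's evaluation code):

```python
def iou_and_precision(pred, truth):
    """Intersection-over-union and precision for two binary masks,
    given as flat sequences of 0/1 values of equal length."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    iou = tp / (tp + fp + fn) if tp + fp + fn else 1.0        # |A∩B| / |A∪B|
    precision = tp / (tp + fp) if tp + fp else 1.0            # TP / (TP+FP)
    return iou, precision
```

Because IoU penalizes both false positives and false negatives, a 1-3% IoU gain is a meaningful difference between otherwise similar segmentation models.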


2013 ◽  
Vol 11 (02) ◽  
pp. 1250024 ◽  
Author(s):  
ALEXANDRA HERZOG ◽  
BJÖRN VOSS ◽  
DANIELA KEILBERG ◽  
EDINA HOT ◽  
LOTTE SØGAARD-ANDERSEN ◽  
...  

The extraction of fluorescence intensity profiles of single cells from image data is a common challenge in cell biology. The manual segmentation of cells, the extraction of cell orientation, and finally the extraction of intensity profiles are time-consuming tasks. This article proposes a routine for the segmentation of single rod-shaped cells (i.e., cells without neighbors within a distance of one cell length) from image data, combined with extraction of intensity distributions along the longitudinal cell axis, under the aggravated conditions of (i) low spatial resolution and (ii) lacking information on the imaging system, i.e., the point spread function and signal-to-noise ratio. The algorithm, named cipsa, transfers a new approach from particle streak velocimetry to cell classification, interpreting the rod-shaped cells as streak-like structures. Automatic reduction of systematic errors such as photobleaching and defocusing is included to guarantee robustness under the described conditions and for the convenience of end-users unfamiliar with image processing. The performance of the algorithm has been tested on image sequences with a high noise level produced by an overlay of different error sources. The developed algorithm provides a user-friendly, stand-alone procedure.
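
The abstract does not give cipsa's internals; one common way to obtain an intensity profile along the longitudinal axis of a rod-shaped cell is to estimate the major axis from intensity-weighted second moments and bin the projected intensities (an illustrative sketch, not the cipsa routine):

```python
import math

def longitudinal_profile(pixels, bins=10):
    """Mean-intensity profile along the longitudinal cell axis.
    pixels: list of (x, y, intensity). The major axis is taken from the
    orientation of the intensity-weighted second central moments."""
    w = sum(i for _, _, i in pixels)
    mx = sum(x * i for x, _, i in pixels) / w      # intensity-weighted centroid
    my = sum(y * i for _, y, i in pixels) / w
    sxx = sum(i * (x - mx) ** 2 for x, _, i in pixels) / w
    syy = sum(i * (y - my) ** 2 for _, y, i in pixels) / w
    sxy = sum(i * (x - mx) * (y - my) for x, y, i in pixels) / w
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)   # major-axis angle
    proj = [((x - mx) * math.cos(theta) + (y - my) * math.sin(theta), i)
            for x, y, i in pixels]                 # position along the axis
    lo, hi = min(p for p, _ in proj), max(p for p, _ in proj)
    span = (hi - lo) or 1.0
    sums, counts = [0.0] * bins, [0] * bins
    for p, i in proj:
        b = min(int((p - lo) / span * bins), bins - 1)
        sums[b] += i
        counts[b] += 1
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]
```

The moment-based orientation estimate is what makes the routine automatic: no manual axis annotation is needed per cell.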


1997 ◽  
Vol 27 (1) ◽  
pp. 117-137 ◽  
Author(s):  
Alexander J. McNeil

Abstract Good estimates for the tails of loss severity distributions are essential for pricing or positioning high-excess loss layers in reinsurance. We describe parametric curve-fitting methods for modelling extreme historical losses. These methods revolve around the generalized Pareto distribution and are supported by extreme value theory. We summarize relevant theoretical results and provide an extensive example of their application to Danish data on large fire insurance losses.
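
The peaks-over-threshold quantile estimator behind this approach is standard in extreme value theory: with excesses over a threshold u modelled by a generalized Pareto distribution, a high quantile of the loss distribution can be computed as sketched below (the generic formula, not the paper's fitted Danish-fire values):

```python
import math

def gpd_tail_quantile(u, xi, beta, n, n_exceed, p):
    """Peaks-over-threshold quantile estimate: excesses over threshold u are
    modelled by a GPD with shape xi and scale beta, n is the sample size and
    n_exceed the number of exceedances of u. Then
        x_p = u + (beta / xi) * (((1 - p) * n / n_exceed) ** (-xi) - 1).
    """
    tail = (1.0 - p) * n / n_exceed
    if xi == 0.0:                       # exponential limit of the GPD
        return u - beta * math.log(tail)
    return u + (beta / xi) * (tail ** (-xi) - 1.0)
```

Positive xi corresponds to the heavy (Pareto-type) tails typical of fire insurance losses, where empirical quantiles alone badly underestimate high-excess layers.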


GigaScience ◽  
2020 ◽  
Vol 9 (3) ◽  
Author(s):  
Marcus Wagner ◽  
Sarah Reinke ◽  
René Hänsel ◽  
Wolfram Klapper ◽  
Ulf-Dietrich Braumann

Abstract Background We present an image dataset related to automated segmentation and counting of macrophages in diffuse large B-cell lymphoma (DLBCL) tissue sections. For the classification of DLBCL subtypes, as well as for providing a prognosis of the clinical outcome, the analysis of the tumor microenvironment and, particularly, of the different types and functions of tumor-associated macrophages is indispensable. Until now, however, most information about macrophages has been obtained either in a completely indirect way by gene expression profiling or by manual counts in immunohistochemically (IHC) fluorescence-stained tissue samples, while automated recognition of single IHC-stained macrophages remains a difficult task. In an accompanying publication, a reliable approach to this problem has been established, and a large set of related images has been generated and analyzed. Results Provided image data comprise (i) fluorescence microscopy images of 44 multiple immunohistostained DLBCL tumor subregions, captured at 4 channels corresponding to CD14, CD163, Pax5, and DAPI; (ii) "cartoon-like" total variation-filtered versions of these images, generated by Rudin-Osher-Fatemi denoising; (iii) an automatically generated mask of the evaluation subregion, based on information from the DAPI channel; and (iv) automatically generated segmentation masks for macrophages (using information from CD14 and CD163 channels), B-cells (using information from Pax5 channel), and all cell nuclei (using information from DAPI channel). Conclusions A large set of IHC-stained DLBCL specimens is provided together with segmentation masks for different cell populations generated by a reference method for automated image analysis, thus featuring considerable reuse potential.


Author(s):  
Przemysław Mazurek ◽  
Dorota Oszutowska-Mazurek

Abstract The Slit Island Method (SIM) is a technique for estimating the fractal dimension of an object by determining the area-perimeter relations for successive slits. The SIM can be applied to the image analysis of irregular grayscale objects and their classification using the fractal dimension. It is known that this technique is not functional in some cases; this paper emphasizes that for specific objects a negative or an infinite fractal dimension can be obtained. Transforming the input image data from unipolar to bipolar makes it possible to reformulate the image analysis in the context of the Ising model. Polynomial approximation of the obtained area-perimeter curve allows object classification. The proposed technique is applied to images of cervical cell nuclei (Papanicolaou smears) for the preclassification of correct and atypical cells.
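
The area-perimeter relation underlying the SIM is P ∝ A^(D/2), so the fractal dimension D is twice the slope of log P against log A. A minimal least-squares sketch (not the authors' polynomial-approximation classifier):

```python
import math

def slit_island_dimension(areas, perimeters):
    """Fractal dimension from the area-perimeter relation P ~ A**(D/2):
    D is twice the least-squares slope of log P versus log A."""
    xs = [math.log(a) for a in areas]
    ys = [math.log(p) for p in perimeters]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return 2.0 * slope
```

For smooth boundaries, such as squares with P = 4*sqrt(A), the slope is 1/2 and D = 1; degenerate area-perimeter data can push the fitted slope negative or unbounded, which is the failure mode the paper discusses.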


2014 ◽  
Vol 39 (4) ◽  
pp. 249-270 ◽  
Author(s):  
Lech Madeyski ◽  
Marek Majchrzak

Abstract Context. Software data collection precedes analysis which, in turn, requires data science related skills. Software defect prediction is hardly used in industrial projects as a quality assurance and cost reduction means. Objectives. There are many studies and several tools that help in various data analysis tasks, but there is still neither an open source tool nor a standardized approach. Results. We developed Defect Prediction for software systems (DePress), an extensible software measurement and data integration framework which can be used for prediction purposes (e.g. defect prediction, effort prediction) and software change analysis (e.g. release notes, bug statistics, commit quality). DePress is based on the KNIME project and allows building workflows in a graphical, end-user-friendly manner. Conclusions. We present the main concepts, as well as the development state, of the DePress framework. The results show that DePress can be used in open source as well as in industrial project analysis.

