Investigating the Impact of the Bit Depth of Fluorescence-Stained Images on the Performance of Deep Learning-Based Nuclei Instance Segmentation

Diagnostics ◽  
2021 ◽  
Vol 11 (6) ◽  
pp. 967
Author(s):  
Amirreza Mahbod ◽  
Gerald Schaefer ◽  
Christine Löw ◽  
Georg Dorffner ◽  
Rupert Ecker ◽  
...  

Nuclei instance segmentation is a key step in the computer-mediated analysis of histological fluorescence-stained (FS) images. Many computer-assisted approaches have been proposed for this task, and among them, supervised deep learning (DL) methods deliver the best performance. An important factor that can affect DL-based nuclei instance segmentation performance on FS images is the bit depth of the images used, but to our knowledge, no study has so far investigated this impact. In this work, we released a fully annotated FS histological image dataset of nuclei at different image magnifications and from five different mouse organs. Moreover, using different pre-processing techniques and one of the state-of-the-art DL-based methods, we investigated the impact of image bit depth (i.e., 8-bit vs. 16-bit) on nuclei instance segmentation performance. The results obtained on our dataset and on another publicly available dataset showed very competitive performance for models trained with 8-bit and 16-bit images, suggesting that processing 8-bit images is sufficient for nuclei instance segmentation of FS images in most cases. The dataset, including the raw image patches as well as the corresponding segmentation masks, is publicly available in the published GitHub repository.
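As an illustration of the bit-depth pre-processing compared in the study, below is a minimal sketch of reducing a 16-bit fluorescence image to 8 bits via min-max rescaling; the function name and the choice of min-max normalisation are illustrative assumptions, not the paper's exact pre-processing.

```python
import numpy as np

def to_8bit(img16: np.ndarray) -> np.ndarray:
    """Rescale a 16-bit fluorescence image to 8 bits via min-max normalisation."""
    img = img16.astype(np.float32)
    lo, hi = img.min(), img.max()
    if hi > lo:
        img = (img - lo) / (hi - lo)   # map intensities to [0, 1]
    else:
        img = np.zeros_like(img)       # constant image: nothing to rescale
    return (img * 255).round().astype(np.uint8)

# Example on a synthetic 16-bit patch
patch16 = np.random.randint(0, 65536, size=(256, 256), dtype=np.uint16)
patch8 = to_8bit(patch16)
```

Other rescaling variants (e.g., percentile clipping before normalisation) are equally plausible pre-processing choices.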

2021 ◽  
Vol 11 (15) ◽  
pp. 7046
Author(s):  
Jorge Francisco Ciprián-Sánchez ◽  
Gilberto Ochoa-Ruiz ◽  
Lucile Rossi ◽  
Frédéric Morandini

Wildfires are among the most severe natural disasters worldwide, all the more so given the effects of climate change and their impact at various societal and environmental levels. A significant amount of research has therefore been conducted to address this issue, deploying a wide variety of technologies and following a multi-disciplinary approach. Notably, computer vision has played a fundamental role here: it can be used to extract and combine information from several imaging modalities for fire detection, characterization and wildfire spread forecasting. In recent years, work on Deep Learning (DL)-based fire segmentation has shown very promising results. However, it is currently unclear whether the architecture of a model, its loss function, or the image type employed (visible, infrared, or fused) has the greatest impact on the fire segmentation results. In the present work, we evaluate different combinations of state-of-the-art (SOTA) DL architectures, loss functions, and image types to identify the parameters most relevant to improving the segmentation results. We benchmark them to identify the top-performing combinations and compare them to traditional fire segmentation techniques. Finally, we evaluate whether adding attention modules to the best-performing architecture can further improve the segmentation results. To the best of our knowledge, this is the first work to evaluate the impact of the architecture, loss function, and image type on the performance of DL-based wildfire segmentation models.
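Since the loss function is one of the compared dimensions, here is a minimal sketch of one widely used segmentation loss, the soft Dice loss; this is a generic example and not necessarily one of the exact losses benchmarked in the paper.

```python
import numpy as np

def dice_loss(pred: np.ndarray, target: np.ndarray, eps: float = 1e-6) -> float:
    """Soft Dice loss for binary fire masks.

    pred:   predicted fire probabilities in [0, 1]
    target: binary ground-truth mask (1 = fire pixel)
    """
    intersection = (pred * target).sum()
    denom = pred.sum() + target.sum()
    return 1.0 - (2.0 * intersection + eps) / (denom + eps)

pred = np.array([[0.9, 0.1], [0.8, 0.2]])
mask = np.array([[1, 0], [1, 0]])
print(dice_loss(pred, mask))  # ~0.15: small for a good prediction
```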


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Dominik Jens Elias Waibel ◽  
Sayedali Shetab Boushehri ◽  
Carsten Marr

Abstract
Background: Deep learning contributes to uncovering molecular and cellular processes with highly performant algorithms. Convolutional neural networks have become the state-of-the-art tool to provide accurate and fast image data processing. However, published algorithms mostly solve only one specific problem and they typically require a considerable coding effort and machine learning background for their application.
Results: We have thus developed InstantDL, a deep learning pipeline for four common image processing tasks: semantic segmentation, instance segmentation, pixel-wise regression and classification. InstantDL enables researchers with a basic computational background to apply debugged and benchmarked state-of-the-art deep learning algorithms to their own data with minimal effort. To make the pipeline robust, we have automated and standardized workflows and extensively tested it in different scenarios. Moreover, it allows assessing the uncertainty of predictions. We have benchmarked InstantDL on seven publicly available datasets, achieving competitive performance without any parameter tuning. For customization of the pipeline to specific tasks, all code is easily accessible and well documented.
Conclusions: With InstantDL, we hope to empower biomedical researchers to conduct reproducible image processing with a convenient and easy-to-use pipeline.
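One common way to obtain the kind of prediction uncertainty mentioned above is Monte Carlo dropout; the sketch below is a generic illustration under that assumption and does not reproduce InstantDL's actual API (the `model` callable is hypothetical).

```python
import numpy as np

def mc_dropout_uncertainty(model, image: np.ndarray, n_passes: int = 20):
    """Estimate per-pixel prediction uncertainty via Monte Carlo dropout.

    `model` is a hypothetical callable that keeps dropout active at
    inference time and returns a probability map for `image`.
    """
    stack = np.stack([model(image) for _ in range(n_passes)], axis=0)
    mean = stack.mean(axis=0)         # averaged prediction
    uncertainty = stack.std(axis=0)   # high std = low confidence
    return mean, uncertainty
```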


Sensors ◽  
2020 ◽  
Vol 20 (5) ◽  
pp. 1459 ◽  
Author(s):  
Tamás Czimmermann ◽  
Gastone Ciuti ◽  
Mario Milazzo ◽  
Marcello Chiurazzi ◽  
Stefano Roccella ◽  
...  

This paper reviews automated visual-based defect detection approaches applicable to various materials, such as metals, ceramics and textiles. In the first part of the paper, we present a general taxonomy of the different defects, which fall into two classes: visible (e.g., scratches, shape errors) and palpable (e.g., cracks, bumps) defects. Then, we describe artificial visual processing techniques that aim to understand the captured scenery in a mathematical/logical way. We continue with a survey of textural defect detection based on statistical, structural and other approaches. Finally, we report the state of the art in approaching the detection and classification of defects through supervised and unsupervised classifiers and deep learning.
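As a concrete instance of the statistical approaches surveyed, the sketch below computes a grey-level co-occurrence matrix (GLCM) and the Haralick contrast feature, a classic statistic for textural defect detection; the quantisation and offset choices are illustrative assumptions.

```python
import numpy as np

def glcm(img: np.ndarray, dx: int = 1, dy: int = 0, levels: int = 8) -> np.ndarray:
    """Normalised grey-level co-occurrence matrix for one pixel offset (dx, dy)."""
    q = (img.astype(np.float64) / 256 * levels).astype(int)  # quantise uint8 image
    m = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()

def contrast(p: np.ndarray) -> float:
    """Haralick contrast: large for textures with abrupt grey-level changes."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

patch = (np.random.rand(32, 32) * 255).astype(np.uint8)  # stand-in for a surface patch
print(contrast(glcm(patch)))  # scratches and cracks typically raise this value
```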


GEOMATICA ◽  
2019 ◽  
Vol 73 (2) ◽  
pp. 29-44
Author(s):  
Won Mo Jung ◽  
Faizaan Naveed ◽  
Baoxin Hu ◽  
Jianguo Wang ◽  
Ningyuan Li

With the advance of deep learning networks, their application to the assessment of pavement conditions is gaining more attention. A convolutional neural network (CNN) is the most commonly used network in image classification. In terms of pavement assessment, most existing CNNs are designed only to distinguish between cracks and non-cracks; few networks classify cracks by severity level. Information on the severity of pavement cracks is critical for pavement repair services. In this study, the state-of-the-art CNN used in the detection of pavement cracks was improved to localize the cracks and identify their distress levels in three categories (low, medium, and high). In addition, a fully convolutional network (FCN) was, for the first time, utilized in the detection of pavement cracks. These architectures were validated using data acquired on four highways in Ontario, Canada, and compared with ground truth provided by the Ministry of Transportation of Ontario (MTO). The results showed that with the improved CNN, the prediction precision on a series of test image patches was 72.9%, 73.9%, and 73.1% for cracks with severity levels of low, medium, and high, respectively. The precision of the FCN, tested on whole pavement images, was 62.8%, 63.3%, and 66.4%, respectively, for the same severity levels. It is worth mentioning that the ground truth contained some uncertainties, which partially explains the relatively low precision.
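The per-class precision figures reported above can be computed as in the following sketch; the label encoding (0 = non-crack, then low/medium/high) is an assumption for illustration.

```python
import numpy as np

SEVERITIES = ["non-crack", "low", "medium", "high"]  # assumed label order

def per_class_precision(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Precision per severity class: TP / (TP + FP) over predicted patches."""
    prec = {}
    for c, name in enumerate(SEVERITIES):
        predicted_c = (y_pred == c)
        tp = np.logical_and(predicted_c, y_true == c).sum()
        prec[name] = tp / predicted_c.sum() if predicted_c.sum() else float("nan")
    return prec

y_true = np.array([0, 1, 2, 3, 1, 2])
y_pred = np.array([0, 1, 2, 3, 2, 2])
print(per_class_precision(y_true, y_pred))  # e.g., 'medium' -> 2/3
```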


2020 ◽  
Vol 163 ◽  
pp. 01001
Author(s):  
Georgy Ayzel ◽  
Liubov Kurochkina ◽  
Eduard Kazakov ◽  
Sergei Zhuravlev

Streamflow prediction is a vital public service that helps to establish flash-flood early warning systems and to assess the impact of projected climate change on water management. However, the availability of streamflow observations limits the use of state-of-the-art streamflow prediction techniques to basins where hydrometric gauging stations exist. Since most river basins in the world are ungauged, the development of specialized techniques for reliable streamflow prediction in ungauged basins (PUB) is of crucial importance. In recent years, the emerging field of deep learning has provided a myriad of new models that can breathe new life into stagnating PUB methods. In the presented study, we benchmark the streamflow prediction efficiency of Long Short-Term Memory (LSTM) networks against the standard technique of GR4J hydrological model parameter regionalization (HMREG) at 200 basins in Northwest Russia. Results show that the LSTM-based regional hydrological model significantly outperforms the HMREG scheme in terms of median Nash-Sutcliffe efficiency (NSE): 0.73 vs. 0.61 for LSTM and HMREG, respectively. Moreover, the LSTM's median NSE is comparable with that of basin-scale calibration of GR4J (0.75). This study therefore underlines the high potential of deep learning for PUB by demonstrating new state-of-the-art performance in this field.
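The Nash-Sutcliffe efficiency used as the benchmark metric is straightforward to compute; below is a minimal sketch (the synthetic hydrographs are purely illustrative).

```python
import numpy as np

def nse(obs: np.ndarray, sim: np.ndarray) -> float:
    """Nash-Sutcliffe efficiency: 1 = perfect fit; 0 = no better than the observed mean."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(42)
obs = rng.gamma(2.0, 5.0, size=365)         # one year of daily "observed" streamflow
sim = obs + rng.normal(0.0, 2.0, size=365)  # "simulated" streamflow with noise
print(round(nse(obs, sim), 2))
```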


Sensors ◽  
2019 ◽  
Vol 19 (22) ◽  
pp. 4899 ◽  
Author(s):  
Georg Brunnhofer ◽  
Alexander Bergmann ◽  
Andreas Klug ◽  
Martin Kraft

An in-line holographic particle counter concept is presented and validated, in which multiple micrometre-sized particles are detected in a three-dimensional sampling volume all at once. The proposed PIU is capable of detecting holograms of particles whose sizes are in the lower μm range. The detection and counting principle is based on common image processing techniques using a customized HT, with a result relating directly to the particle number concentration in the recorded sampling volume. The proposed counting unit is mounted on top of a CNM for comparison with a commercial TSI-3775 CPC. The concept not only allows for a precise in-situ determination of low particle number concentrations but also enables easy upscaling to higher particle densities (e.g., > 30,000 #/ccm) through its linear expandability and the option of cascading. The impact of coincidence at higher particle densities is shown, and two coincidence correction approaches are presented, whose analogy to the coincidence correction methods used in state-of-the-art CPCs is identified.
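For context on the coincidence correction analogy, the classic CPC correction relates the actual concentration N_a to the indicated concentration N_i via N_a = N_i · exp(N_a · Q · t), where Q is the sample flow and t the single-particle dead time. The fixed-point solver below is a sketch under that textbook assumption, not the paper's specific method.

```python
import math

def coincidence_correct(n_indicated: float, flow_ccm_s: float, dead_time_s: float,
                        iterations: int = 50) -> float:
    """Classic CPC coincidence correction, N_a = N_i * exp(N_a * Q * t),
    solved by fixed-point iteration (converges for moderate coincidence).

    n_indicated: measured concentration (#/ccm)
    flow_ccm_s:  sample flow through the sensing volume (ccm/s)
    dead_time_s: effective single-particle dead time (s)
    """
    n_actual = n_indicated
    for _ in range(iterations):
        n_actual = n_indicated * math.exp(n_actual * flow_ccm_s * dead_time_s)
    return n_actual

# Illustrative numbers only; prints the corrected concentration (~1.2e5 #/ccm here)
print(coincidence_correct(9.5e4, flow_ccm_s=5.0, dead_time_s=4e-7))
```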


2021 ◽  
Vol 12 (24) ◽  
pp. 25
Author(s):  
Andrea Felicetti ◽  
Marina Paolanti ◽  
Primo Zingaretti ◽  
Roberto Pierdicca ◽  
Eva Savina Malinverni

Mosaic is an ancient art form that creates decorative images or patterns from small components. A digital version of a mosaic can be useful for archaeologists, scholars and restorers interested in studying, comparing and preserving mosaics. Nowadays, archaeologists base their studies mainly on manual operation and visual observation, which, although still fundamental, should be supported by an automated procedure of information extraction. In this context, this research describes improvements that can replace the manual and time-consuming procedure of mosaic tesserae drawing. More specifically, this paper analyses the advantages of using Mo.Se. (Mosaic Segmentation), an algorithm that exploits deep learning and image segmentation techniques; the methodology combines a U-Net 3 network with the watershed algorithm. The final purpose is to define a workflow that establishes the steps to perform a robust segmentation and obtain a digital (vector) representation of a mosaic. The approach is presented in detail, and theoretical justifications are provided, building various connections with other models, thus making the workflow both theoretically valuable and practically scalable for medium or large datasets. The automatic segmentation process was tested on a high-resolution orthoimage of an ancient mosaic produced by a close-range photogrammetry procedure. The approach was tested on the pavement of St. Stephen's Church in Umm ar-Rasas, a Jordanian archaeological site located 30 km southeast of the city of Madaba (Jordan). Experimental results show that this generalized framework yields good performance, obtaining higher accuracy than other state-of-the-art approaches. Mo.Se. has been validated using publicly available datasets as a benchmark, demonstrating that combining learning-based methods with procedural ones enhances segmentation performance in terms of overall accuracy, which is almost 10% higher. This study's ambitious aim is to provide archaeologists with a tool that accelerates their work by automatically extracting ancient geometric mosaics.

Highlights:

- A Mo.Se. (Mosaic Segmentation) algorithm is described, with the purpose of performing robust image segmentation to automatically detect tesserae in ancient mosaics (a minimal sketch of the segmentation step follows this list).
- This research aims to overcome the manual and time-consuming procedure of tesserae segmentation by proposing an approach that uses deep learning and image processing techniques to obtain a digital replica of a mosaic.
- Extensive experiments show that the proposed framework outperforms state-of-the-art methods with higher accuracy, even when compared on publicly available datasets.
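A minimal sketch of the U-Net-plus-watershed combination described above, using scikit-image and SciPy; the probability map is assumed to come from a separately trained U-Net, and parameters such as the threshold and minimum peak distance are illustrative.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def tesserae_instances(prob_map: np.ndarray, thresh: float = 0.5) -> np.ndarray:
    """Split a U-Net tessera probability map into labelled instances.

    prob_map: per-pixel tessera probability from a (hypothetical) trained U-Net.
    Returns an integer label image, one id per tessera.
    """
    mask = prob_map > thresh
    distance = ndi.distance_transform_edt(mask)               # distance to background
    coords = peak_local_max(distance, min_distance=5, labels=mask)
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)  # one seed per tessera
    return watershed(-distance, markers, mask=mask)           # flood from the seeds
```

The labelled instances can then be vectorised (e.g., by tracing each label's contour) to obtain the digital drawing of the mosaic.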


2020 ◽  
Author(s):  
Dominik Waibel ◽  
Sayedali Shetab Boushehri ◽  
Carsten Marr

Abstract
Motivation: Deep learning contributes to uncovering and understanding molecular and cellular processes with highly performant image computing algorithms. Convolutional neural networks have become the state-of-the-art tool to provide accurate, consistent and fast data processing. However, published algorithms mostly solve only one specific problem and they often require expert skills and a considerable computer science and machine learning background for application.
Results: We have thus developed a deep learning pipeline called InstantDL for four common image processing tasks: semantic segmentation, instance segmentation, pixel-wise regression and classification. InstantDL enables experts and non-experts to apply state-of-the-art deep learning algorithms to biomedical image data with minimal effort. To make the pipeline robust, we have automated and standardized workflows and extensively tested it in different scenarios. Moreover, it allows assessing the uncertainty of predictions. We have benchmarked InstantDL on seven publicly available datasets, achieving competitive performance without any parameter tuning. For customization of the pipeline to specific tasks, all code is easily accessible.
Availability and Implementation: InstantDL is available under the terms of the MIT licence and can be found on GitHub: https://github.com/marrlab/


Author(s):  
Yantao Yu ◽  
Zhen Wang ◽  
Bo Yuan

Factorization machines (FMs) are a class of general predictors that work effectively with sparse data by representing features using factorized parameters and weights. However, the accuracy of FMs can be adversely affected by the fixed representation trained for each feature, as the same feature is usually not equally predictive and useful in different instances. In fact, an inaccurate representation of features may even introduce noise and degrade the overall performance. In this work, we improve FMs by explicitly considering the impact of each individual input upon the representation of features. We propose a novel model named Input-aware Factorization Machine (IFM), which learns a unique input-aware factor for the same feature in different instances via a neural network. Comprehensive experiments on three real-world recommendation datasets demonstrate the effectiveness and mechanism of IFM. Empirical results indicate that IFM is significantly better than the standard FM model and consistently outperforms four state-of-the-art deep learning-based methods.
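For reference, the standard second-order FM that IFM builds on predicts y(x) = w0 + Σᵢ wᵢxᵢ + Σ_{i<j} ⟨vᵢ, vⱼ⟩ xᵢxⱼ. The sketch below implements this baseline in O(kn) via Rendle's identity; IFM itself would additionally rescale the factors vᵢ per instance with a learned input-aware weight, which is not shown.

```python
import numpy as np

def fm_predict(x: np.ndarray, w0: float, w: np.ndarray, V: np.ndarray) -> float:
    """Second-order factorization machine prediction.

    x: feature vector (n,), w: linear weights (n,), V: factor matrix (n, k).
    Pairwise term: 0.5 * sum_f [(sum_i v_if x_i)^2 - sum_i v_if^2 x_i^2].
    """
    linear = w0 + w @ x
    s = V.T @ x                                             # (k,)
    pairwise = 0.5 * float(np.sum(s ** 2 - (V ** 2).T @ (x ** 2)))
    return float(linear + pairwise)

rng = np.random.default_rng(0)
n, k = 6, 3
x = rng.random(n)
w0, w, V = 0.1, rng.normal(size=n), rng.normal(size=(n, k))
print(fm_predict(x, w0, w, V))
```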

