Artificial Intelligence-based Segmentation of Nuclei in Multi-organ Histopathology Images: Model Development and Validation (Preprint)

2020 ◽  
Author(s):  
Tahir Mahmood ◽  
Muhammad Owais ◽  
Kyoung Jun Noh ◽  
Hyo Sik Yoon ◽  
Adnan Haider ◽  
...  

BACKGROUND: Accurate nuclei segmentation in histopathology images plays a key role in digital pathology. It is considered a prerequisite for determining cell phenotype, nuclear morphometrics, and cell classification, and for the grading and prognosis of cancer. However, it is a very challenging task because of the different types of nuclei, large intra-class variations, and diverse cell morphologies. Consequently, manual inspection of such images under high-resolution microscopes is tedious and time-consuming. Alternatively, artificial intelligence (AI)-based automated techniques, which are fast, robust, and require less human effort, can be used. Several AI-based nuclei segmentation techniques have recently been proposed and have shown significant performance improvements on this task, but there is room for further improvement. We therefore propose an AI-based nuclei segmentation technique that adopts a new nuclei segmentation network empowered by residual skip connections.

OBJECTIVE: The aim of this study was to develop an AI-based nuclei segmentation method for histopathology images of multiple organs.

METHODS: Our proposed residual-skip-connections-based nuclei segmentation network (R-NSN) comprises two main stages: stain normalization and nuclei segmentation. In the first stage, a histopathology image is stain-normalized to balance color and intensity variation. It is then used as input to the R-NSN in the second stage, which outputs a segmented image.

RESULTS: Experiments were performed on two publicly available datasets: 1) The Cancer Genome Atlas (TCGA) and 2) Triple-Negative Breast Cancer (TNBC). The results show that the proposed technique achieves an aggregated Jaccard index (AJI) of 0.6794, a Dice coefficient of 0.8084, and an F1-measure of 0.8547 on the TCGA dataset, and an AJI of 0.7332, a Dice coefficient of 0.8441, a precision of 0.8352, a recall of 0.8306, and an F1-measure of 0.8329 on the TNBC dataset. These values are higher than those of the state-of-the-art methods.

CONCLUSIONS: The proposed R-NSN preserves crucial features through residual connectivity from the encoder to the decoder and uses only a few layers, which reduces the computational cost of the model. The selection of a good stain normalization technique, the effective use of residual connections to avoid information loss, and the small number of layers together yielded outstanding results. Our nuclei segmentation method is thus robust and superior to the state-of-the-art methods. We expect this study to contribute to the development of computational pathology software for research and clinical use and to enhance the impact of computational pathology.
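As a point of reference for the reported numbers, the sketch below (not taken from the paper) shows how the pixel-level Dice coefficient and Jaccard index can be computed from binary masks with NumPy; the aggregated Jaccard index (AJI) used in the paper is an instance-level extension of the Jaccard score and is not reproduced here.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray) -> float:
    """Pixel-level Dice coefficient between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

def jaccard_index(pred: np.ndarray, target: np.ndarray) -> float:
    """Pixel-level Jaccard index (IoU); AJI aggregates this per nucleus instance."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return intersection / union if union > 0 else 1.0

# Toy 4x4 masks to illustrate the calculation
pred = np.array([[0, 1, 1, 0]] * 4)
target = np.array([[0, 1, 0, 0]] * 4)
print(dice_coefficient(pred, target), jaccard_index(pred, target))
```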

2019 ◽  
Vol 1 (1) ◽  
Author(s):  
Hwejin Jung ◽  
Bilal Lodhi ◽  
Jaewoo Kang

Abstract

Background: Since nuclei segmentation in histopathology images can provide key information for identifying the presence or stage of a disease, the images need to be assessed carefully. However, color variation in histopathology images and the varied structures of nuclei are two major obstacles to accurately segmenting and analyzing histopathology images. Several machine learning methods rely heavily on hand-crafted features, which have limitations due to manual thresholding.

Results: To obtain robust results, deep learning-based methods have been proposed. Deep convolutional neural networks (DCNNs), which automatically extract features from raw image data, have been proven to achieve great performance. Inspired by such achievements, we propose a nuclei segmentation method based on DCNNs. To normalize the color of histopathology images, we use a deep convolutional Gaussian mixture color normalization model that clusters pixels while considering the structures of nuclei. To segment nuclei, we use Mask R-CNN, which achieves state-of-the-art object segmentation performance in the field of computer vision. In addition, we perform multiple inference as a post-processing step to boost segmentation performance. We evaluate our segmentation method on two different datasets: the first consists of histopathology images of various organs, while the other consists of histopathology images of a single organ. The performance of our segmentation method is measured in various experimental setups at the object level and the pixel level. In addition, we compare the performance of our method with that of existing state-of-the-art methods. The experimental results show that our nuclei segmentation method outperforms the existing methods.

Conclusions: We propose a nuclei segmentation method based on DCNNs for histopathology images. The proposed method, which uses Mask R-CNN with color normalization and multiple-inference post-processing, provides robust nuclei segmentation results. Our method can also facilitate downstream nuclei morphological analyses, as it provides high-quality features extracted from histopathology images.
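The abstract includes no code, but a minimal sketch of the "multiple inference" idea might look as follows, using an off-the-shelf torchvision Mask R-CNN (pretrained on COCO, standing in for the authors' nuclei-trained model) and a horizontal flip as the extra inference pass; the Gaussian mixture color normalization model and the authors' actual merging strategy are not reproduced here.

```python
import torch
import torchvision

# Off-the-shelf Mask R-CNN; in the paper's setting this would be trained on nuclei.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def soft_nuclei_map(image: torch.Tensor, score_thresh: float = 0.5) -> torch.Tensor:
    """Collapse per-instance soft masks above a score threshold into one map."""
    with torch.no_grad():
        out = model([image])[0]
    keep = out["scores"] > score_thresh
    if keep.sum() == 0:
        return torch.zeros(image.shape[-2:])
    # out["masks"]: [N, 1, H, W] soft masks in [0, 1]
    return out["masks"][keep, 0].max(dim=0).values

def multi_inference(image: torch.Tensor) -> torch.Tensor:
    """Run inference on the image and its horizontal flip, then average the maps."""
    plain = soft_nuclei_map(image)
    flipped = soft_nuclei_map(torch.flip(image, dims=[-1]))
    return 0.5 * (plain + torch.flip(flipped, dims=[-1]))

# Usage: image is a 3xHxW float tensor with values in [0, 1]
# fused = multi_inference(image)
```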


2021 ◽  
Author(s):  
Kai Guo ◽  
Zhenze Yang ◽  
Chi-Hua Yu ◽  
Markus J. Buehler

This review revisits the state of the art of research on the design of mechanical materials using machine learning.


Author(s):  
Mauro Vallati ◽  
Lukáš Chrpa ◽  
Thomas L. Mccluskey

Abstract

The International Planning Competition (IPC) is a prominent event of the artificial intelligence planning community that has been organized since 1998; it aims at fostering the development and comparison of planning approaches, assessing the state of the art in planning, and identifying new challenging benchmarks. The IPC also has a strong impact outside the planning community by providing a large number of ready-to-use planning engines and testing pioneering applications of planning techniques. This paper focuses on the deterministic part of IPC 2014 and describes its format, participants, and benchmarks, as well as a thorough analysis of the results. Overall, the results of the competition indicate some significant progress, but they also highlight issues and challenges that the planning community will have to face in the future.


2020 ◽  
Vol 6 (2) ◽  
pp. 135-161
Author(s):  
Diego Alejandro Borbón Rodríguez ◽  
◽  
Luisa Fernanda Borbón Rodríguez ◽  
Jeniffer Laverde Pinzón

Advances in neurotechnologies and artificial intelligence have led to an innovative proposal for establishing ethical and legal limits on the development of these technologies: Human NeuroRights. The article first addresses some advances in neurotechnologies and artificial intelligence, as well as their ethical implications. Second, it presents the state of the art on the Human NeuroRights proposal, specifically that of the NeuroRights Initiative at Columbia University. Third, the proposed rights to free will and to equitable access to augmentation technologies are critically analyzed, leading to the conclusion that, although new regulation of neurotechnologies and artificial intelligence is necessary, the debate is still too premature to justify incorporating a new category of human rights, which may prove inconvenient or unnecessary. Finally, some considerations on how to regulate new technologies are discussed and the conclusions of the work are presented.


2017 ◽  
Vol 2 (1) ◽  
pp. 299-316 ◽  
Author(s):  
Cristina Pérez-Benito ◽  
Samuel Morillas ◽  
Cristina Jordán ◽  
J. Alberto Conejero

Abstract

Improving the efficiency and effectiveness of image denoising and enhancement methods remains a challenge. Existing denoising and enhancement methods are able to improve the visual quality of images, usually by removing noise while sharpening details and improving edge contrast. Smoothing refers to the case of denoising in which the noise follows a Gaussian distribution. The two operations, smoothing noise and sharpening, are of an opposing nature, so few approaches address both goals simultaneously. We review these methods and also provide a detailed study of the state-of-the-art methods that attack each of the two problems in colour images separately.
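To illustrate why the two goals pull in opposite directions, here is a minimal, illustrative pipeline (not taken from the review) that first applies Gaussian smoothing and then unsharp masking per colour channel; the smoothing step removes high-frequency noise that the sharpening step would otherwise amplify.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_then_sharpen(img: np.ndarray, sigma: float = 1.5, amount: float = 0.7) -> np.ndarray:
    """Illustrative pipeline: Gaussian smoothing (denoising under Gaussian noise)
    followed by unsharp masking (sharpening), applied per colour channel."""
    img = img.astype(np.float64)
    # Smooth spatial axes only (sigma=0 on the channel axis)
    smoothed = gaussian_filter(img, sigma=(sigma, sigma, 0))
    # Unsharp mask: add back a scaled high-frequency residual
    blurred = gaussian_filter(smoothed, sigma=(sigma, sigma, 0))
    sharpened = smoothed + amount * (smoothed - blurred)
    return np.clip(sharpened, 0, 255).astype(np.uint8)

# Usage: img is an H x W x 3 uint8 RGB array
# out = smooth_then_sharpen(img)
```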


2017 ◽  
Vol 108 (1) ◽  
pp. 307-318 ◽  
Author(s):  
Eleftherios Avramidis

Abstract

A deeper analysis of Comparative Quality Estimation is presented by extending the state-of-the-art methods with adequacy and grammatical features from other Quality Estimation tasks. The previously used linear method, unable to cope with the augmented feature set, is replaced with a boosting classifier assisted by feature selection. The resulting methods show improved performance for 6 language pairs when applied to the output of MT systems developed over 7 years, and the improved models compete better with reference-aware metrics. Notable conclusions are reached by examining the contribution of the features in the models, and it is possible to identify common MT errors that are captured by the features. Many grammatical/fluency features contribute substantially, a few adequacy features contribute somewhat, whereas source complexity features are of no use. The importance of many fluency and adequacy features is language-specific.
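The abstract does not name the specific boosting algorithm or selection criterion, so the following is only a sketch of how a boosting classifier assisted by feature selection could be wired up, using scikit-learn and a hypothetical feature matrix in place of the paper's fluency/adequacy/complexity features.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import GradientBoostingClassifier

# Toy data: rows describe a pair of MT outputs for one source sentence,
# label = which of the two outputs a human preferred (binary comparison).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))        # 40 hypothetical QE features
y = rng.integers(0, 2, size=200)

model = Pipeline([
    ("select", SelectKBest(score_func=f_classif, k=15)),   # keep the 15 most informative features
    ("boost", GradientBoostingClassifier(n_estimators=200, learning_rate=0.05)),
])
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```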


2022 ◽  
Vol 134 ◽  
pp. 103548
Author(s):  
Bianca Caiazzo ◽  
Mario Di Nardo ◽  
Teresa Murino ◽  
Alberto Petrillo ◽  
Gianluca Piccirillo ◽  
...  

Sensors ◽  
2020 ◽  
Vol 20 (12) ◽  
pp. 3603
Author(s):  
Dasol Jeong ◽  
Hasil Park ◽  
Joongchol Shin ◽  
Donggoo Kang ◽  
Joonki Paik

Person re-identification (Re-ID) suffers from problems that make learning difficult, such as misalignment and occlusion. To solve these problems, it is important to focus on features that are robust to intra-class variation. Existing attention-based Re-ID methods focus only on common features without considering distinctive features. In this paper, we present a novel attentive learning-based Siamese network for person Re-ID. Unlike existing methods, we designed an attention module and an attention loss that use the properties of the Siamese network to concentrate attention on both common and distinctive features. The attention module consists of channel attention, to select important channels, and encoder-decoder attention, to observe the whole body shape. We modified the triplet loss into an attention loss, called the uniformity loss, which generates a unique attention map that focuses on both common and discriminative features. Extensive experiments show that the proposed network compares favorably with state-of-the-art methods on three large-scale benchmarks: the Market-1501, CUHK03, and DukeMTMC-ReID datasets.
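As an illustrative sketch only (the paper's exact modules are not specified in the abstract), the snippet below shows an SE-style channel-attention block and the standard triplet loss that the proposed uniformity loss modifies, written in PyTorch.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """SE-style channel attention: reweight channels using global spatial context.
    Illustrative stand-in for the channel-attention branch described in the paper."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [B, C, H, W] -> per-channel weights [B, C, 1, 1]
        w = self.fc(x.mean(dim=(2, 3))).unsqueeze(-1).unsqueeze(-1)
        return x * w

# Usage of the attention block on a toy feature map
attn = ChannelAttention(64)
feat = attn(torch.randn(2, 64, 32, 16))

# Standard triplet loss that the paper's uniformity loss is derived from:
# anchor and positive share an identity, the negative does not.
triplet = nn.TripletMarginLoss(margin=0.3)
anchor, positive, negative = (torch.randn(8, 256) for _ in range(3))
loss = triplet(anchor, positive, negative)
```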

