A State-of-the-Art Review for Gastric Histopathology Image Analysis Approaches and Future Development

2021, Vol 2021, pp. 1-19
Author(s): Shiliang Ai, Chen Li, Xiaoyan Li, Tao Jiang, Marcin Grzegorzek, et al.

Gastric cancer is one of the most common and deadly cancers worldwide. The gold standard for its detection is histological examination by pathologists, in which Gastric Histopathological Image Analysis (GHIA) contributes significant diagnostic information. Histopathological images of gastric cancer contain rich characterization information that plays a crucial role in diagnosis and treatment. To improve the accuracy and objectivity of GHIA, Computer-Aided Diagnosis (CAD) has been widely applied to histological image analysis of gastric cancer. This review summarizes CAD techniques for gastric cancer pathology images: it first covers image preprocessing methods, then feature extraction, and finally the existing segmentation and classification techniques. These techniques are systematically introduced and analyzed for the benefit of future researchers.
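As a concrete illustration of such a pipeline, the following is a minimal sketch combining a simple preprocessing step, handcrafted GLCM texture features, and a classifier, assuming scikit-image and scikit-learn; the specific choices (median filtering, GLCM properties, an SVM) are illustrative assumptions rather than methods surveyed in the review.

```python
# Minimal GHIA-style pipeline sketch: preprocess -> handcrafted features -> classifier.
# All concrete choices here (median filter, GLCM texture props, SVM) are illustrative.
import numpy as np
from skimage import color, filters
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def preprocess(rgb: np.ndarray) -> np.ndarray:
    """Convert an RGB patch to denoised 8-bit grayscale."""
    gray_u8 = (color.rgb2gray(rgb) * 255).astype(np.uint8)
    return filters.median(gray_u8)

def texture_features(gray_u8: np.ndarray) -> np.ndarray:
    """Simple GLCM descriptor: contrast, homogeneity, and energy."""
    glcm = graycomatrix(gray_u8, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    return np.array([graycoprops(glcm, p)[0, 0]
                     for p in ("contrast", "homogeneity", "energy")])

# Hypothetical usage on a dataset of labeled patches (patches and labels not shown):
# X = np.stack([texture_features(preprocess(p)) for p in patches])
# clf = SVC().fit(X, labels)
```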

Electronics, 2021, Vol 10 (5), pp. 562
Author(s): Jonathan de Matos, Steve Ataky, Alceu de Souza Britto, Luiz Soares de Oliveira, Alessandro Lameiras Koerich

Histopathological images (HIs) are the gold standard for evaluating some types of tumors in cancer diagnosis. Analyzing such images is time- and resource-consuming and very challenging even for experienced pathologists, resulting in inter-observer and intra-observer disagreements. One way to accelerate this analysis is to use computer-aided diagnosis (CAD) systems. This paper presents a review of machine learning methods for histopathological image analysis, covering both shallow and deep learning approaches. We also cover the most common tasks in HI analysis, such as segmentation and feature extraction, and present a list of publicly available and private datasets that have been used in HI research.


2011, Vol 26 (5), pp. 1485-1489
Author(s): Keisuke Kubota, Junko Kuroda, Masashi Yoshida, Keiichiro Ohta, Masaki Kitajima

2021, Vol 8
Author(s): Hongliang He, Chi Zhang, Jie Chen, Ruizhe Geng, Luyang Chen, et al.

Nuclear segmentation of histopathological images is a crucial step in computer-aided image analysis. These images contain complex, diverse, dense, and even overlapping nuclei, which makes nuclear segmentation challenging. To overcome this challenge, this paper proposes a hybrid-attention nested UNet (Han-Net) consisting of two modules: a hybrid nested U-shaped network (H-part) and a hybrid attention block (A-part). The H-part combines a nested multi-depth U-shaped network with a full-resolution dense network to capture more effective features. The A-part explores attention information and builds correlations between different pixels. With these two modules, Han-Net extracts discriminative features that effectively segment the boundaries of complex and diverse nuclei as well as small and dense nuclei. A comparison on a publicly available multi-organ dataset shows that the proposed model achieves state-of-the-art performance.
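For illustration of the kind of attention the A-part describes, here is a minimal PyTorch sketch of a hybrid channel-plus-spatial attention block; the CBAM-style layout and every name in it are assumptions for exposition and do not reproduce Han-Net's actual design.

```python
# Illustrative hybrid attention block (channel attention followed by spatial attention).
# This is a generic CBAM-style sketch, not the published Han-Net A-part.
import torch
import torch.nn as nn

class HybridAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention: pool spatial dims, then re-weight feature channels.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: correlate pixel positions via a conv over pooled channel maps.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)
        return x * self.spatial_gate(pooled)

# Hypothetical usage on decoder features of a U-shaped segmentation network:
# feats = torch.randn(2, 64, 128, 128)
# feats = HybridAttention(64)(feats)
```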


2020, Vol 13 (1), pp. 106-118
Author(s): Santisudha Panigrahi, Tripti Swarnkar

Oral cancer is the sixth most commonly reported malignancy; it occurs in the head and neck region and is found mainly in South Asian countries. According to WHO figures on oral cancer incidence in India, it causes roughly fourteen deaths per hour every year. Because of the cost of tests, errors in the recognition procedure, and the heavy workload of cytopathologists, oral tumors are often not diagnosed promptly, leaving the field open for biomedical researchers to pursue early-stage detection. With the advent of whole-slide digital scanners and tissue histopathology, a vast collection of digital histopathological images has accumulated, creating a need for their analysis. Many computer-aided analysis techniques based on machine learning have been developed for cancer prediction and prognosis. In this review paper, the steps of acquiring histopathological images and the visualization and classification performed by doctors are discussed first. The second part reviews work that applies machine learning to histopathological image analysis and other oral datasets for tumor prognosis and prediction, and then compares the pitfalls of classical machine learning with how deep learning has overcome them, mainly in image recognition tasks. The third part describes how deep learning is beneficial and widely used across different cancer domains. Owing to its remarkable growth and broad applicability, deep learning is well suited to the prognosis of oral disease. The aim of this review is to provide insight for researchers who choose to work on oral cancer using deep learning and artificial neural networks.


Author(s): Inzamam Mashood Nasir, Muhammad Rashid, Jamal Hussain Shah, Muhammad Sharif, Muhammad Yahiya Haider Awan, et al.

Background: Breast cancer is considered the most dangerous disease among women worldwide, and the number of new cases grows every year. Many researchers have proposed efficient algorithms to diagnose breast cancer at an early stage, increasing efficiency and performance by exploiting features learned from gold-standard histopathological images. Objective: Most of these systems use either traditional handcrafted features or deep features that contain considerable noise and redundancy, which ultimately degrades system performance. Methods: A hybrid approach is proposed that fuses and optimizes the properties of handcrafted and deep features to classify breast cancer images. HOG and LBP features are serially fused with features from the pretrained VGG19 and InceptionV3 models, and PCR and ICR are used to evaluate the classification performance of the proposed method. Results: The method classifies breast cancer from histopathological images. Compared with state-of-the-art techniques, it records an overall patient-level accuracy of 97.2% and an image-level accuracy of 96.7%. Conclusion: The proposed hybrid method achieves the best performance among the compared methods and can be used in intelligent healthcare systems for early breast cancer detection.
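As a sketch of the serial-fusion idea (handcrafted HOG/LBP descriptors concatenated with deep features from a pretrained network), the snippet below assumes scikit-image and TensorFlow/Keras with VGG19 only; the descriptor settings are assumptions and the paper's feature optimization step is omitted.

```python
# Sketch of serial (concatenation) fusion of handcrafted and deep features.
# Descriptor parameters and the VGG19-only backbone are illustrative assumptions.
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import hog, local_binary_pattern
from tensorflow.keras.applications import VGG19
from tensorflow.keras.applications.vgg19 import preprocess_input

vgg = VGG19(weights="imagenet", include_top=False, pooling="avg")  # 512-d deep descriptor

def handcrafted(rgb: np.ndarray) -> np.ndarray:
    """HOG features concatenated with a uniform-LBP histogram."""
    gray_u8 = (rgb2gray(rgb) * 255).astype(np.uint8)
    hog_vec = hog(gray_u8, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
    lbp = local_binary_pattern(gray_u8, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([hog_vec, lbp_hist])

def fused_features(rgb_batch: np.ndarray) -> np.ndarray:
    """Serially fuse handcrafted and deep features for a batch of same-sized images."""
    deep = vgg.predict(preprocess_input(rgb_batch.astype("float32")), verbose=0)
    hand = np.stack([handcrafted(im) for im in rgb_batch])
    return np.concatenate([hand, deep], axis=1)
```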


2021, Vol 9 (1)
Author(s): Aysen Degerli, Mete Ahishali, Mehmet Yamac, Serkan Kiranyaz, Muhammad E. H. Chowdhury, et al.

Computer-aided diagnosis has become a necessity for accurate and immediate coronavirus disease 2019 (COVID-19) detection to aid treatment and prevent the spread of the virus. Numerous studies have proposed deep learning techniques for COVID-19 diagnosis, but they have relied on very limited chest X-ray (CXR) image repositories for evaluation, with only a few hundred COVID-19 samples. Moreover, these methods can neither localize nor grade the severity of COVID-19 infection. To address this, recent studies have explored the activation maps of deep networks; however, these remain inaccurate at localizing the actual infection, making them unreliable for clinical use. This study proposes a novel method for the joint localization, severity grading, and detection of COVID-19 from CXR images by generating so-called infection maps. To accomplish this, we have compiled the largest dataset to date, with 119,316 CXR images including 2951 COVID-19 samples, where the annotation of the ground-truth segmentation masks is performed on CXRs by a novel collaborative human–machine approach. Furthermore, we publicly release the first CXR dataset with ground-truth segmentation masks of the COVID-19 infected regions. A detailed set of experiments shows that state-of-the-art segmentation networks can learn to localize COVID-19 infection with an F1-score of 83.20%, which is significantly superior to the activation maps created by previous methods. Finally, the proposed approach achieved COVID-19 detection performance of 94.96% sensitivity and 99.88% specificity.
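For reference, below is a minimal sketch of how the reported pixel-level F1 score and case-level sensitivity/specificity can be computed from binary masks and labels, assuming NumPy arrays; the function and variable names are illustrative and not taken from the study's code.

```python
# Illustrative computation of the pixel-level F1 (Dice) and case-level
# sensitivity/specificity metrics from binary masks and labels.
import numpy as np

def f1_score_masks(pred: np.ndarray, gt: np.ndarray) -> float:
    """Pixel-wise F1 between a predicted and a ground-truth infection mask."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    return 2 * tp / (2 * tp + fp + fn + 1e-8)

def sensitivity_specificity(pred_labels: np.ndarray, true_labels: np.ndarray):
    """Case-level detection metrics: sensitivity = TP/(TP+FN), specificity = TN/(TN+FP)."""
    tp = np.sum((pred_labels == 1) & (true_labels == 1))
    fn = np.sum((pred_labels == 0) & (true_labels == 1))
    tn = np.sum((pred_labels == 0) & (true_labels == 0))
    fp = np.sum((pred_labels == 1) & (true_labels == 0))
    return tp / (tp + fn), tn / (tn + fp)
```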


2015, Vol 258 (3), pp. 233-240
Author(s): Cheng Lu, Mengyao Ji, Zhen Ma, Mrinal Mandal

Diagnostics, 2021, Vol 11 (6), pp. 967
Author(s): Amirreza Mahbod, Gerald Schaefer, Christine Löw, Georg Dorffner, Rupert Ecker, et al.

Nuclei instance segmentation is a key step in the computer-mediated analysis of histological fluorescence-stained (FS) images. Many computer-assisted approaches have been proposed for this task, and among them, supervised deep learning (DL) methods deliver the best performance. An important factor that can affect DL-based nuclei instance segmentation performance on FS images is the image bit depth used, but to our knowledge no study has so far investigated this impact. In this work, we released a fully annotated FS histological image dataset of nuclei at different image magnifications and from five different mouse organs. Moreover, using different pre-processing techniques and one of the state-of-the-art DL-based methods, we investigated the impact of image bit depth (i.e., 8-bit vs. 16-bit) on nuclei instance segmentation performance. The results obtained from our dataset and another publicly available dataset showed very competitive performance for the models trained with 8-bit and 16-bit images, suggesting that processing 8-bit images is sufficient for nuclei instance segmentation of FS images in most cases. The dataset, including the raw image patches and the corresponding segmentation masks, is publicly available in the published GitHub repository.
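A minimal sketch of the 16-bit to 8-bit conversion underlying such a comparison, assuming per-image min-max scaling with NumPy; the paper's exact pre-processing choices may differ.

```python
# Illustrative 16-bit -> 8-bit conversion via per-image min-max scaling.
import numpy as np

def to_8bit(img16: np.ndarray) -> np.ndarray:
    """Rescale a 16-bit fluorescence image to the 8-bit range [0, 255]."""
    img = img16.astype(np.float32)
    lo, hi = img.min(), img.max()
    scaled = (img - lo) / max(hi - lo, 1e-8)
    return (scaled * 255).astype(np.uint8)
```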

