A Comparative Evaluation of Texture Features for Semantic Segmentation of Breast Histopathological Images

IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 64331-64346
Author(s):  
R. Rashmi ◽  
Keerthana Prasad ◽  
Chethana Babu K. Udupa ◽  
V. Shwetha

Remote Sensing ◽  
2021 ◽  
Vol 13 (16) ◽  
pp. 3065
Author(s):  
Libo Wang ◽  
Rui Li ◽  
Dongzhi Wang ◽  
Chenxi Duan ◽  
Teng Wang ◽  
...  

Semantic segmentation of very fine resolution (VFR) urban scene images plays a significant role in several application scenarios, including autonomous driving, land cover classification, and urban planning. However, the tremendous detail contained in VFR images, especially the considerable variations in the scale and appearance of objects, severely limits the potential of existing deep learning approaches. Addressing such issues represents a promising research direction in the remote sensing community, paving the way for scene-level landscape pattern analysis and decision making. In this paper, we propose a Bilateral Awareness Network (BANet), which contains a dependency path and a texture path to fully capture the long-range relationships and fine-grained details in VFR images. Specifically, the dependency path is built on ResT, a novel Transformer backbone with memory-efficient multi-head self-attention, while the texture path is built on stacked convolution operations. In addition, a feature aggregation module based on the linear attention mechanism is designed to effectively fuse the dependency features and texture features. Extensive experiments conducted on three large-scale urban scene image segmentation datasets, i.e., the ISPRS Vaihingen dataset, the ISPRS Potsdam dataset, and the UAVid dataset, demonstrate the effectiveness of BANet. In particular, a 64.6% mIoU is achieved on the UAVid dataset.
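The linear attention used for feature fusion can be sketched in plain Python. This is an illustrative toy, not the authors' implementation: the kernel feature map (ELU + 1) and the single-head, unbatched shapes are assumptions, but it shows why the mechanism is memory-efficient — the key–value product is computed once, making the cost linear in sequence length rather than quadratic.

```python
import math

def elu_plus_one(x):
    # ELU(x) + 1 keeps the kernel feature map strictly positive
    return x + 1.0 if x > 0 else math.exp(x)

def linear_attention(Q, K, V):
    """Softmax-free attention: phi(Q) @ (phi(K)^T @ V), row-normalised.

    Q, K are n x d lists of lists; V is n x dv. Cost is O(n * d * dv)
    instead of the O(n^2) of standard dot-product attention.
    """
    n, d = len(Q), len(Q[0])
    dv = len(V[0])
    phiQ = [[elu_plus_one(x) for x in row] for row in Q]
    phiK = [[elu_plus_one(x) for x in row] for row in K]
    # KV = phi(K)^T @ V  (d x dv), computed once for the whole sequence
    KV = [[sum(phiK[i][a] * V[i][b] for i in range(n)) for b in range(dv)]
          for a in range(d)]
    # Ksum = phi(K)^T @ 1  (d,), used for normalisation
    Ksum = [sum(phiK[i][a] for i in range(n)) for a in range(d)]
    out = []
    for i in range(n):
        z = sum(phiQ[i][a] * Ksum[a] for a in range(d)) + 1e-6
        out.append([sum(phiQ[i][a] * KV[a][b] for a in range(d)) / z
                    for b in range(dv)])
    return out
```

Because the positive kernel weights are normalised, each output row is (up to the stabilising epsilon) a convex combination of the value rows, just as in softmax attention.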



2021 ◽  
Author(s):  
Jigyasa Singh Katrolia ◽  
Lars Kramer ◽  
Jason Rambach ◽  
Bruno Mirbach ◽  
Didier Stricker


Author(s):  
L. He ◽  
Z. Wu ◽  
Y. Zhang ◽  
Z. Hu

Abstract. In remote sensing imagery, spectral and texture features are often complex due to varied landscapes, which leads to misclassifications in semantic segmentation results. The object-based Markov random field provides an effective solution to this problem, but the state-of-the-art object-based Markov random field still needs improvement. In this paper, an object-based Markov random field model based on a hierarchical segmentation tree with auxiliary labels is proposed. A remote sensing image is first segmented, and an object-based hierarchical segmentation tree is built from the initial segmentation objects according to a merging criterion. The object-based Markov random field with auxiliary label fields is then established on the hierarchical tree structure, and probabilistic inference is applied to solve the model by iteratively updating the label field and the auxiliary label fields. In the experiments, a WorldView-3 image was used to evaluate the performance, and the results demonstrate the validity and accuracy of the presented semantic segmentation approach.
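The bottom-up construction of a hierarchical segmentation tree can be illustrated with a minimal greedy-merging sketch in plain Python. The mean-value difference used as the merging criterion here is an assumption for illustration (the abstract does not specify the criterion), as are the integer object ids:

```python
def build_segmentation_tree(means, edges):
    """Greedy region merging: repeatedly fuse the most similar adjacent pair.

    means: {object_id: mean spectral value}; edges: iterable of (id, id)
    adjacency pairs. Returns the merges as (child_a, child_b, new_id) tuples,
    i.e. the segmentation tree recorded bottom-up.
    """
    means = dict(means)
    size = {i: 1 for i in means}
    edges = {frozenset(e) for e in edges}
    next_id = max(means) + 1
    merges = []
    while edges:
        # pick the adjacent pair with the smallest mean-value difference
        pair = min(edges, key=lambda e: abs(means[min(e)] - means[max(e)]))
        a, b = sorted(pair)
        new = next_id
        next_id += 1
        # the merged object's mean is the size-weighted average of its children
        size[new] = size[a] + size[b]
        means[new] = (means[a] * size[a] + means[b] * size[b]) / size[new]
        merges.append((a, b, new))
        # rewire adjacency: neighbours of a or b become neighbours of the merge
        new_edges = set()
        for e in edges:
            if a in e or b in e:
                rest = [x for x in e if x not in (a, b)]
                if rest:
                    new_edges.add(frozenset((rest[0], new)))
            else:
                new_edges.add(e)
        edges = new_edges
    return merges
```

Each merge creates an internal tree node, so the label field and auxiliary label fields of the proposed model can be attached to nodes at every level of this hierarchy.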



Cancers ◽  
2020 ◽  
Vol 12 (8) ◽  
pp. 2031 ◽  
Author(s):  
Taimoor Shakeel Sheikh ◽  
Yonghee Lee ◽  
Migyung Cho

Diagnosis of pathologies using histopathological images can be time-consuming when many images with different magnification levels need to be analyzed. State-of-the-art computer vision and machine learning methods can help automate the diagnostic pathology workflow and thus reduce the analysis time. Automated systems can also be more efficient and accurate, and can increase the objectivity of diagnosis by reducing operator variability. We propose a multi-scale input and multi-feature network (MSI-MFNet) model, which can learn the overall structure and texture features of tissues at different scales by fusing multi-resolution hierarchical feature maps from the network's dense connectivity structure. The MSI-MFNet predicts the probability of a disease at the patch and image levels. We evaluated the performance of our proposed model on two public benchmark datasets. Furthermore, through ablation studies of the model, we found that the multi-scale inputs and multi-feature maps play an important role in improving its performance. Our proposed model outperformed the existing state-of-the-art models, demonstrating better accuracy, sensitivity, and specificity.
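The multi-scale input idea — presenting the same tissue image to the network at several resolutions — can be sketched as a simple pyramid builder. The nearest-neighbour downsampling and the scale factors are illustrative assumptions, not the authors' actual preprocessing:

```python
def multi_scale_inputs(image, scales=(1, 2, 4)):
    """Build a pyramid of nearest-neighbour downsampled views of a 2-D image.

    image: H x W list of lists of pixel values; scales: downsampling factors.
    Returns one view per scale, from full resolution to coarsest.
    """
    h, w = len(image), len(image[0])
    pyramid = []
    for s in scales:
        # keep every s-th pixel in each dimension (nearest-neighbour sampling)
        pyramid.append([[image[r * s][c * s] for c in range(w // s)]
                        for r in range(h // s)])
    return pyramid
```

In a model like MSI-MFNet, each pyramid level would feed its own input branch, and the resulting hierarchical feature maps are fused downstream.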



2018 ◽  
Vol 69 ◽  
pp. 125-133 ◽  
Author(s):  
Jiayun Li ◽  
William Speier ◽  
King Chung Ho ◽  
Karthik V. Sarma ◽  
Arkadiusz Gertych ◽  
...  


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1873
Author(s):  
Xiao Xiao ◽  
Fan Yang ◽  
Amir Sadovnik

Blur detection, which aims to separate the blurred and clear regions of an image, is widely used in many important computer vision tasks such as object detection, semantic segmentation, and face recognition, and has attracted increasing attention from researchers and industry in recent years. To improve the quality of the separation, many researchers have spent enormous effort on extracting features from images at various scales. However, how to extract blur features and fuse these features synchronously remains a major challenge. In this paper, we regard blur detection as an image segmentation problem. Inspired by the success of the U-net architecture for image segmentation, we propose a multi-scale dilated convolutional neural network called MSDU-net. In this model, we design a group of multi-scale feature extractors with dilated convolutions to extract texture information at different scales simultaneously. The U-shaped architecture of the MSDU-net fuses the different-scale texture features with the generated semantic features to support the image segmentation task. We conduct extensive experiments on two classic public benchmark datasets and show that the MSDU-net outperforms other state-of-the-art blur detection approaches.
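The multi-scale extractor group built on dilated convolutions can be illustrated with a minimal 1-D sketch: the same kernel applied at several dilation rates produces features with progressively larger receptive fields at no extra parameter cost. The kernel, the rates, and the 1-D setting are illustrative assumptions — MSDU-net operates on 2-D feature maps:

```python
def dilated_conv1d(signal, kernel, dilation):
    """1-D dilated (atrous) convolution, valid padding: the kernel taps are
    spaced `dilation` samples apart, enlarging the receptive field."""
    k = len(kernel)
    span = (k - 1) * dilation
    return [sum(kernel[j] * signal[i + j * dilation] for j in range(k))
            for i in range(len(signal) - span)]

def multi_scale_features(signal, kernel, dilations=(1, 2, 4)):
    # one feature map per dilation rate, as in a multi-scale extractor group
    return [dilated_conv1d(signal, kernel, d) for d in dilations]
```

The per-rate outputs would then be fused by the U-shaped encoder–decoder, analogous to how MSDU-net combines texture features across scales.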


