Automated Classification of Glandular Tissue by Statistical Proximity Sampling

2015 ◽  
Vol 2015 ◽  
pp. 1-11 ◽  
Author(s):  
Jimmy C. Azar ◽  
Martin Simonsson ◽  
Ewert Bengtsson ◽  
Anders Hast

Due to the complexity of biological tissue and variations in staining procedures, features based on the explicit extraction of properties from subglandular structures in tissue images may have difficulty generalizing well over an unrestricted set of images and staining variations. We circumvent this problem with an implicit representation that is both robust and highly descriptive, especially when combined with a multiple instance learning approach to image classification. The new feature method describes tissue architecture based on glandular structure. It statistically represents the relative distribution of tissue components around lumen regions, while preserving spatial and quantitative information, as a basis for diagnosing and analyzing different areas within an image. We demonstrate the efficacy of the method in extracting discriminative features, obtaining high classification rates for tubular formation in both healthy and cancerous tissue, which is an important component in Gleason and tubule-based Elston grading. The proposed method may be used for glandular classification in other tissue types as well, and has general applicability as a region-based feature descriptor in image analysis, where the image represents a bag with a certain label (or grade) and the region-based feature vectors represent instances.
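The core idea of the abstract, representing the relative distribution of tissue components in distance bands around lumen regions, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function name, the choice of concentric distance bands, and the band radii are our own assumptions.

```python
import numpy as np
from scipy import ndimage

def proximity_histogram(component_map, lumen_mask, n_components=3, radii=(5, 10, 15)):
    """Illustrative sketch (not the paper's code): describe the tissue around
    a lumen by histogramming component labels in concentric distance bands,
    preserving spatial (band index) and quantitative (frequency) information."""
    # Distance from every non-lumen pixel to the nearest lumen pixel
    dist = ndimage.distance_transform_edt(~lumen_mask)
    features, inner = [], 0
    for outer in radii:
        band = (dist > inner) & (dist <= outer)
        counts = np.bincount(component_map[band], minlength=n_components)
        total = counts.sum()
        # Relative frequency of each tissue component within this band
        features.append(counts / total if total else counts.astype(float))
        inner = outer
    return np.concatenate(features)  # one instance vector per lumen region
```

In a multiple instance learning setting, each such vector would be one instance, and the set of vectors from all lumen regions in an image would form the bag carrying the image-level label.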

2021 ◽  
Vol 132 ◽  
pp. S287-S288
Author(s):  
Jianling Ji ◽  
Ryan Schmidt ◽  
Westley Sherman ◽  
Ryan Peralta ◽  
Megan Roytman ◽  
...  

2021 ◽  
pp. 104973232199379
Author(s):  
Olaug S. Lian ◽  
Sarah Nettleton ◽  
Åge Wifstad ◽  
Christopher Dowrick

In this article, we qualitatively explore the manner and style in which medical encounters between patients and general practitioners (GPs) are mutually conducted, as exhibited in situ in 10 consultations sourced from the One in a Million: Primary Care Consultations Archive in England. Our main objectives are to identify interactional modes, to develop a classification of these modes, and to uncover how modes emerge and shift both within and between consultations. Deploying an interactional perspective and a thematic and narrative analysis of consultation transcripts, we identified five distinctive interactional modes: question and answer (Q&A) mode, lecture mode, probabilistic mode, competition mode, and narrative mode. Most modes are GP-led. Mode shifts within consultations generally map onto the chronology of the medical encounter. Patient-led narrative modes are initiated by patients themselves, which demonstrates agency. Our classification of modes derives from complete naturally occurring consultations, covering a wide range of symptoms, and may have general applicability.


Author(s):  
Amira S. Ashour ◽  
Merihan M. Eissa ◽  
Maram A. Wahba ◽  
Radwa A. Elsawy ◽  
Hamada Fathy Elgnainy ◽  
...  

Author(s):  
Chaoqing Wang ◽  
Junlong Cheng ◽  
Yuefei Wang ◽  
Yurong Qian

A vehicle make and model recognition (VMMR) system is a common requirement in the field of intelligent transportation systems (ITS). However, it is a challenging task because of the subtle differences between vehicle categories. In this paper, we propose a hierarchical scheme for VMMR. Specifically, the scheme consists of (1) a feature extraction framework called weighted mask hierarchical bilinear pooling (WMHBP), based on hierarchical bilinear pooling (HBP), which weakens the influence of invalid background regions by generating a weighted mask while extracting features from discriminative regions, forming a more robust feature descriptor; (2) a hierarchical loss function that learns the appearance differences between vehicle brands and enhances vehicle recognition accuracy; (3) a data augmentation strategy that collects vehicle images from the Internet and classifies them with hierarchical labels, addressing the problems of insufficient data and low image resolution and improving the model's generalization ability and robustness. We evaluate the proposed framework for accuracy and real-time performance; the experimental results indicate a recognition accuracy of 95.1% and 107 frames per second (FPS) on the public Stanford Cars dataset, which demonstrates the superiority of the method and its suitability for ITS.
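The weighted-mask bilinear pooling step described in (1) can be sketched as below. This is a minimal NumPy illustration under our own assumptions, not the paper's WMHBP implementation: the function name is ours, the mask is taken as given rather than learned, and only a single layer pair of the hierarchy is shown.

```python
import numpy as np

def weighted_bilinear_pool(feat_a, feat_b, mask):
    """Illustrative sketch: feat_a (C_a, H, W) and feat_b (C_b, H, W) are
    feature maps from two network layers; mask (H, W) is an attention map
    that down-weights background locations before bilinear pooling."""
    c_a, h, w = feat_a.shape
    c_b = feat_b.shape[0]
    a = (feat_a * mask).reshape(c_a, h * w)   # suppress background, flatten spatially
    b = feat_b.reshape(c_b, h * w)
    pooled = a @ b.T / (h * w)                # (C_a, C_b) bilinear interaction matrix
    vec = pooled.flatten()
    # Signed square root + L2 normalization, as is conventional for bilinear features
    vec = np.sign(vec) * np.sqrt(np.abs(vec))
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec
```

In a full HBP-style model, such pooled vectors from several layer pairs would be concatenated before the classification head.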

