A COMPUTER-AIDED SYSTEM FOR MASS DETECTION AND CLASSIFICATION IN DIGITIZED MAMMOGRAMS

2005 ◽  
Vol 17 (05) ◽  
pp. 215-228 ◽  
Author(s):  
SHENG-CHIH YANG ◽  
CHUIN-MU WANG ◽  
YI-NUNG CHUNG ◽  
GIU-CHENG HSU ◽  
SAN-KAN LEE ◽  
...  

This paper presents a computer-assisted diagnostic system for mass detection and classification, which performs mass detection on regions of interest followed by benign-malignant classification of the detected masses. To make mass detection effective, a sequence of preprocessing steps is designed to enhance the intensity of a region of interest, remove noise effects and locate suspicious masses using five texture features generated from the spatial gray level difference matrix (SGLDM) and the fractal dimension. A probabilistic neural network (PNN) coupled with entropic thresholding techniques is then developed for mass extraction. Since the shapes of masses are crucial in distinguishing benignancy from malignancy, four shape features are further generated and joined with the five features previously used in mass detection as input to a second PNN for mass classification. To evaluate the designed system, a data set collected at Taichung Veterans General Hospital, Taiwan, R.O.C. was used. The results are encouraging and show the promise of our system.
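The SGLDM features mentioned above are co-occurrence statistics in the Haralick family. As a rough illustration (not the paper's implementation, and the five specific features it uses are not named in the abstract), a minimal sketch of building a normalised co-occurrence matrix for one pixel offset and deriving a few classic statistics from it:

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Normalised grey-level co-occurrence matrix for one pixel offset."""
    m = np.zeros((levels, levels), dtype=float)
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[image[y, x], image[y + dy, x + dx]] += 1
    return m / m.sum()

def texture_features(p):
    """A few classic Haralick-style statistics from a normalised GLCM."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    energy = np.sum(p ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return {"contrast": contrast, "energy": energy,
            "homogeneity": homogeneity, "entropy": entropy}

# Synthetic ROI with 8 grey levels, quantised as the matrix indices expect.
roi = np.random.default_rng(0).integers(0, 8, size=(32, 32))
feats = texture_features(glcm(roi))
```

In practice the image is first quantised to a small number of grey levels, and matrices for several offsets/directions are averaged before computing statistics.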

Author(s):  
Wei Qian ◽  
Lihua Li ◽  
Laurence Clarke ◽  
Fei Mao ◽  
Robert A. Clark ◽  
...  

Author(s):  
Salman Qadri

The purpose of this study is to highlight the significance of machine vision for kidney stone identification. A novel optimized fused texture-feature framework was designed to identify stones in the kidney. A fused set of 234 texture features (GLCM, RLM and histogram) was acquired for each region of interest (ROI); eight ROIs of sizes 16x16, 20x20 and 22x22 were taken from each image. The resulting feature space of 280,800 values (1200x234) was difficult to handle, so a feature-optimization technique, POE+ACC, was applied to obtain the 30 most discriminative features for each ROI. The optimized fused feature set of 36,000 values (1200x30) was fed to four machine vision classifiers: Random Forest, MLP, J48 and Naïve Bayes. Among these, Random Forest provided the best result, 90% accuracy on the 22x22 ROI.
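The pipeline described (fused texture features → feature optimization → classifier comparison) can be sketched as below. The data are synthetic placeholders with the stated dimensions (1200 ROIs × 234 features, reduced to 30), and sklearn's `SelectKBest` stands in for the POE+ACC criterion used in the study:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(42)
X = rng.normal(size=(1200, 234))   # 1200 ROIs x 234 fused texture features (synthetic)
y = rng.integers(0, 2, size=1200)  # stone / no-stone labels (synthetic)

# Reduce the 234-dimensional fused feature set to the 30 most discriminative
# features, then classify; SelectKBest is a stand-in for POE+ACC.
model = make_pipeline(SelectKBest(f_classif, k=30),
                      RandomForestClassifier(n_estimators=100, random_state=0))
scores = cross_val_score(model, X, y, cv=5)
```

Wrapping selection and classification in one pipeline ensures the feature ranking is re-fit inside each cross-validation fold, avoiding selection leakage.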


Author(s):  
Attila Koppány

Successful diagnostic activity plays an important role in controlling repair costs and efficiently eliminating damage. The aim of general building diagnostics is to determine the various visible or instrumentally observable alterations and to qualify constructions from the standpoints of suitability and personal safety (accident risk). Our diagnostic system is primarily based on visual examination on site; its method is suitable for examining almost all important structures and structural changes of buildings. During the operation of the diagnostic system a large quantity of data valuable to professional practice has been collected, and will continue to be collected; analysis of this data set is particularly suited to re-evaluating earlier constructions, to the later practical application of the experience gained in building maintenance and reconstruction work, and to creating knowledge-based new constructions in the future. For using the system a so-called "morphological box" has been created, containing the hierarchic system of constructions; it is connected with a thesaurus of construction components, keyed by structure codes to each construction's place in the hierarchy. The thesaurus was necessary not only for easy oversight of the system but also to exclude the use of structure-name synonyms, in the interest of unified handling.


2019 ◽  
Vol 8 (5) ◽  
pp. 683 ◽  
Author(s):  
Heung Cheol Kim ◽  
Jong Kook Rhim ◽  
Jun Hyong Ahn ◽  
Jeong Jin Park ◽  
Jong Un Moon ◽  
...  

The assessment of rupture probability is crucial to identifying at-risk intracranial aneurysms (IAs) in patients harboring multiple aneurysms. We aimed to develop a computer-assisted detection system for small-sized aneurysm ruptures using a convolutional neural network (CNN) based on images of three-dimensional digital subtraction angiography. A retrospective data set including 368 patients was used as a training cohort for the CNN on the TensorFlow platform. Aneurysm images in six directions were obtained from each patient and the region of interest in each image was extracted. The resulting CNN was prospectively tested in 272 patients, and the sensitivity, specificity, overall accuracy, and receiver operating characteristics (ROC) were compared to those of a human evaluator. Our system showed a sensitivity of 78.76% (95% CI: 72.30%–84.30%), a specificity of 72.15% (95% CI: 60.93%–81.65%), and an overall diagnostic accuracy of 76.84% (95% CI: 71.36%–81.72%) in aneurysm rupture prediction. The area under the ROC curve (AUROC) for the CNN was 0.755 (95% CI: 0.699–0.805), better than that obtained from a human evaluator (AUROC: 0.537; p < 0.001). The CNN-based prediction system proved feasible for assessing rupture risk in small-sized aneurysms, with diagnostic accuracy superior to that of human evaluators. Additional studies based on a larger data set are necessary to enhance diagnostic accuracy and facilitate clinical application.
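The reported metrics can be computed from per-case prediction scores as follows; this is a generic reference sketch, not the study's code. AUROC is obtained here via the rank-sum (Mann–Whitney) formulation:

```python
import numpy as np

def diagnostic_metrics(y_true, y_score, threshold=0.5):
    """Sensitivity, specificity, and overall accuracy at a decision threshold."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "accuracy": (tp + tn) / len(y_true)}

def auroc(y_true, y_score):
    """AUROC as the fraction of (positive, negative) pairs ranked correctly."""
    y_true = np.asarray(y_true)
    pos = np.asarray(y_score)[y_true == 1]
    neg = np.asarray(y_score)[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
        + 0.5 * (pos[:, None] == neg[None, :]).sum()  # ties count half
    return wins / (len(pos) * len(neg))
```

Unlike sensitivity and specificity, AUROC is threshold-free, which is why it is the natural single number for comparing the CNN against a human evaluator.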


Diagnostics ◽  
2021 ◽  
Vol 11 (7) ◽  
pp. 1224
Author(s):  
Francesco Bianconi ◽  
Mario Luca Fravolini ◽  
Isabella Palumbo ◽  
Giulia Pascoletti ◽  
Susanna Nuvoli ◽  
...  

Computer-assisted analysis of three-dimensional imaging data (radiomics) has received a lot of research attention as a possible means to improve the management of patients with lung cancer. Building robust predictive models for clinical decision making requires imaging features that are sufficiently stable under changes in the acquisition and extraction settings. Experimenting on 517 lung lesions from a cohort of 207 patients, we assessed the stability of 88 texture features from the following classes: first-order (13 features), Grey-level Co-Occurrence Matrix (24), Grey-level Difference Matrix (14), Grey-level Run-length Matrix (16), Grey-level Size Zone Matrix (16) and Neighbouring Grey-tone Difference Matrix (five). The analysis was based on a public dataset of lung nodules and open-access routines for feature extraction, which makes the study fully reproducible. Our results identified 30 features with good or excellent stability with respect to lesion delineation, 28 with respect to intensity quantisation and 18 with respect to both. We conclude that selecting the right set of imaging features is critical for building clinical predictive models, particularly when changes in lesion delineation and/or intensity quantisation are involved.
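Stability of a feature across extraction settings is commonly quantified with the intraclass correlation coefficient; whether this particular study used ICC is an assumption here. A minimal sketch of ICC(2,1) (two-way random effects, absolute agreement, Shrout & Fleiss) over an n-lesions × k-settings matrix of one feature's values:

```python
import numpy as np

def icc2_1(Y):
    """ICC(2,1): agreement of one feature measured under k settings on n lesions."""
    Y = np.asarray(Y, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    row_means = Y.mean(axis=1)
    col_means = Y.mean(axis=0)
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between lesions
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between settings
    resid = Y - row_means[:, None] - col_means[None, :] + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))         # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Feature values for four lesions under three extraction settings:
measurements = np.array([[1.00, 1.05, 0.98],
                         [2.10, 2.00, 2.04],
                         [3.20, 3.15, 3.22],
                         [0.50, 0.55, 0.52]])
stability = icc2_1(measurements)  # close to 1: the feature is stable
```

A common convention grades ICC above 0.75 as good and above 0.9 as excellent, which matches the good/excellent wording in the abstract.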


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Mario Fernando Jojoa Acosta ◽  
Liesle Yail Caballero Tovar ◽  
Maria Begonya Garcia-Zapirain ◽  
Winston Spencer Percybrooks

Abstract Background Melanoma has become more widespread over the past 30 years, and early detection is a major factor in reducing the mortality rates associated with this type of skin cancer. Therefore, access to an automatic, reliable system able to detect the presence of melanoma from a dermatoscopic image of lesions and/or skin pigmentation can be a very useful tool in the area of medical diagnosis. Methods Among the state-of-the-art methods used for automated or computer-assisted medical diagnosis, attention should be drawn to Deep Learning based on Convolutional Neural Networks, with which segmentation, classification and detection systems for several diseases have been implemented. The method proposed in this paper involves an initial stage that automatically crops the region of interest within a dermatoscopic image using the Mask R-CNN (Mask Region-based Convolutional Neural Network) technique, and a second stage, based on a ResNet152 architecture, that classifies lesions as either “benign” or “malignant”. Results Training, validation and testing of the proposed model were carried out using the database associated with the challenge set at the 2017 International Symposium on Biomedical Imaging. On the test data set, the proposed model achieves an increase in accuracy and balanced accuracy of 3.66% and 9.96%, respectively, with respect to the best accuracy and the best sensitivity/specificity ratio reported to date for melanoma detection in this challenge. Additionally, unlike previous models, the specificity and sensitivity simultaneously achieve a high score (greater than 0.8), indicating that the model discriminates accurately between benign and malignant lesions and is not biased towards either class. Conclusions The results achieved with the proposed model suggest a significant improvement over the state of the art as far as the performance of skin lesion classifiers (malignant/benign) is concerned.
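Balanced accuracy, the headline metric above, is the mean of sensitivity and specificity, which is why a classifier biased towards the majority class scores poorly on it even when plain accuracy looks good. A minimal illustration:

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of sensitivity (recall on positives) and specificity (recall on negatives)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return (sensitivity + specificity) / 2

# A lazy classifier that always predicts "benign" on a 90%-benign set
# reaches 0.9 plain accuracy but only 0.5 balanced accuracy.
y_true = [0] * 9 + [1]
y_pred = [0] * 10
```

This is why the abstract stresses that sensitivity and specificity exceed 0.8 simultaneously: neither class is sacrificed for headline accuracy.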


2021 ◽  
Vol 11 (4) ◽  
pp. 1965
Author(s):  
Raul-Ronald Galea ◽  
Laura Diosan ◽  
Anca Andreica ◽  
Loredana Popa ◽  
Simona Manole ◽  
...  

Despite the promising results obtained by deep learning methods in the field of medical image segmentation, a lack of sufficient data always hinders performance to a certain degree. In this work, we explore the feasibility of applying deep learning methods to a pilot dataset. We present a simple and practical approach to segmentation in a 2D, slice-by-slice manner, based on region of interest (ROI) localization, applying an optimized training regime to improve segmentation performance from regions of interest. We start from two popular segmentation networks: the preferred model for medical segmentation, U-Net, and a general-purpose model, DeepLabV3+. Furthermore, we show that ensembling these two fundamentally different architectures brings consistent benefits by testing our approach on two different datasets, the publicly available ACDC challenge and the imATFIB dataset from our in-house clinical study. Results on the imATFIB dataset show that the proposed approach performs well with the provided training volumes, achieving an average whole-heart Dice Similarity Coefficient of 89.89% on the validation set. Moreover, our algorithm achieved a mean Dice value of 91.87% on the ACDC validation set, comparable to the second best-performing approach in the challenge. Our approach could serve as a building block of a computer-aided diagnostic system in a clinical setting.
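The Dice Similarity Coefficient reported above, together with a simple probability-averaging ensemble (an assumption about how the two networks might be combined, not a detail stated in the abstract), can be sketched as:

```python
import numpy as np

def dice(mask_a, mask_b, eps=1e-7):
    """Dice Similarity Coefficient between two binary segmentation masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    inter = np.logical_and(a, b).sum()
    return (2.0 * inter + eps) / (a.sum() + b.sum() + eps)

def ensemble(prob_a, prob_b, threshold=0.5):
    """Average per-pixel foreground probabilities of two models, then threshold."""
    return (np.asarray(prob_a) + np.asarray(prob_b)) / 2.0 >= threshold
```

Dice ranges from 0 (no overlap) to 1 (perfect agreement); the epsilon keeps the ratio defined when both masks are empty. Averaging probabilities before thresholding lets each model veto the other's low-confidence pixels.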


IAWA Journal ◽  
2011 ◽  
Vol 32 (2) ◽  
pp. 221-232 ◽  
Author(s):  
Carolina Sarmiento ◽  
Pierre Détienne ◽  
Christine Heinz ◽  
Jean-François Molino ◽  
Pierre Grard ◽  
...  

Sustainable management and conservation of tropical trees and forests require accurate identification of tree species. Reliable, user-friendly identification tools based on macroscopic morphological features have already been developed for various tree floras. Wood anatomical features also provide a considerable amount of information that can be used for timber traceability, certification and trade control. Yet this information is still poorly used, and only a handful of experts are able to apply it to plant species identification. Here, we present an interactive, user-friendly tool based on vector graphics, illustrating 99 states of 27 wood characters from 110 Amazonian tree species belonging to 34 families. Pl@ntWood is a graphical identification tool based on the IDAO system, a multimedia approach to plant identification. Wood anatomical characters were selected from the IAWA list of microscopic features for hardwood identification, which will enable us to easily extend this work to a larger number of species. A stand-alone application has been developed and an on-line version will be delivered in the near future. Besides allowing non-specialists to identify plants through a user-friendly interface, this system can be used for different purposes such as teaching, conservation, management, and self-training in the wood anatomy of tropical species.

