Features for detecting smoke in laparoscopic videos

2017, Vol 3 (2), pp. 521-524
Author(s): Nour Aldeen Jalal, Tamer Abdulbaki Alshirbaji, Lars Mündermann, Knut Möller

Abstract Video-based smoke detection in laparoscopic surgery has several potential applications, such as the automatic annotation of surgical events associated with the electrocauterization task and the development of automatic smoke removal. In the literature, video-based smoke detection has been studied widely for fire-surveillance systems. Nevertheless, the proposed methods are insufficient for smoke detection in laparoscopic videos because they often depend on assumptions that rarely hold in laparoscopic surgery, such as a static camera. In this paper, ten visual features based on the motion, texture and colour of smoke are proposed and evaluated for smoke detection in laparoscopic videos. These features comprise the RGB channels, an energy-based feature, texture features based on the grey-level co-occurrence matrix (GLCM), an HSV colour-space feature, and features based on the detection of moving regions using optical flow and on the smoke colour in HSV colour space. The features were tested on four laparoscopic cholecystectomy videos. Experimental observations show that each feature provides valuable information for the smoke detection task; however, each also fails to detect the presence of smoke in some cases. By combining all proposed features, smoke of both high and low density can be identified robustly, and the classification accuracy increases significantly.
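The grey-level co-occurrence matrix at the heart of several of these features can be sketched in a few lines of NumPy. This is a minimal illustration of the construction; the function names and the two Haralick statistics chosen are ours, not the authors' code:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Grey-level co-occurrence matrix for one displacement (dx, dy).

    `img` is a 2-D integer array already quantised to `levels` grey
    levels; the matrix is normalised to sum to 1.
    """
    P = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1
    return P / P.sum()

def glcm_features(P):
    """Haralick-style energy and contrast from a normalised GLCM."""
    i, j = np.indices(P.shape)
    return {"energy": float((P ** 2).sum()),
            "contrast": float((P * (i - j) ** 2).sum())}
```

On a perfectly uniform patch the energy is 1 and the contrast 0; on a checkerboard the contrast is maximal for the horizontal displacement, which is exactly the kind of discrimination the smoke features exploit.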

2017, Vol 3 (2), pp. 191-194
Author(s): Tamer Abdulbaki Alshirbaji, Nour Aldeen Jalal, Lars Mündermann, Knut Möller

Abstract Smoke in laparoscopic videos usually appears due to the use of electrocautery when cutting or coagulating tissue. Detecting smoke can therefore be used for event-based annotation in laparoscopic surgeries by retrieving the events associated with electrocauterization, and it can also support automatic smoke removal. However, detecting smoke in laparoscopic video is challenging because of the changeability of smoke patterns, the moving camera and the varying lighting conditions. In this paper, we present a video-based smoke detection algorithm that detects smoke of different densities (fog, low and high density) in laparoscopic videos. The proposed method extracts various visual features from the laparoscopic images and provides them to a support vector machine (SVM) classifier. The features are based on the motion, colour and texture patterns of the smoke. We validated our algorithm through experimental evaluation on four laparoscopic cholecystectomy videos, each manually annotated by labelling every frame as smoke or non-smoke. The algorithm was applied to the videos using different feature combinations for classification. Experimental results show that the combination of all proposed features gives the best classification performance: the overall accuracy (i.e. correctly classified frames) is around 84%, with a sensitivity (correctly detected smoke frames) of 89% and a specificity (correctly detected non-smoke frames) of 80%.
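The reported metrics follow directly from the per-frame annotations. A small sketch of how accuracy, sensitivity and specificity are computed from binary smoke labels (the helper name is illustrative):

```python
def frame_metrics(y_true, y_pred):
    """Accuracy, sensitivity and specificity for per-frame smoke labels.

    Labels are 1 for smoke frames and 0 for non-smoke frames, matching
    the definitions in the abstract: sensitivity = fraction of smoke
    frames correctly detected, specificity = fraction of non-smoke
    frames correctly detected.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {"accuracy": (tp + tn) / len(y_true),
            "sensitivity": tp / (tp + fn) if tp + fn else 0.0,
            "specificity": tn / (tn + fp) if tn + fp else 0.0}
```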


Author(s): Brahma Ratih Rahayu F., Panca Mudjirahardjo, Muhammad Aziz Muslim

Peanuts are a food-crop commodity that Indonesians widely consume as a source of vegetable fat and protein. However, the quality and quantity of peanut production may decline, partly because of plant diseases. One effort to maintain peanut productivity is applying technology to detect peanut plant diseases early, so that disease control can begin sooner. This study presents a technology-development application, particularly digital image processing, to identify disease features of infected peanut leaves based on GLCM texture features and colour features in the HSV colour space, classified using the SVM method. The applied development of the SVM method is multiclass SVM with the DAGSVM strategy, which can classify more than two classes. The experimental results confirm that the combination of HSV colour features and GLCM texture features with an angular orientation of 0 degrees, classified by the multiclass SVM method with polynomial kernels, produces the highest accuracy: 99.1667% for the leaf-spot class, 97.5% for leaf rust, 98.8333% for eyespot, 100% for normal leaves and 100% for the other-leaf class.
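The DAGSVM strategy decides among k classes with k − 1 pairwise evaluations by eliminating one candidate class per comparison. A minimal sketch of the elimination logic, with the trained binary SVMs abstracted behind a hypothetical `pairwise` callback:

```python
def dagsvm_predict(classes, pairwise):
    """Decision-DAG prediction over pairwise classifiers (DAGSVM).

    `pairwise(a, b)` stands in for evaluating the trained binary SVM
    for the class pair (a, b) on the current sample and returning the
    winning label. One class is eliminated per comparison, so k
    classes need exactly k - 1 evaluations.
    """
    remaining = list(classes)
    while len(remaining) > 1:
        a, b = remaining[0], remaining[-1]
        if pairwise(a, b) == a:
            remaining.pop()       # b loses and is eliminated
        else:
            remaining.pop(0)      # a loses and is eliminated
    return remaining[0]
```

For the five leaf classes above this makes exactly four binary SVM evaluations per image, regardless of which class wins.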


Author(s): Weiguo Cao, Marc J. Pomeroy, Yongfeng Gao, Matthew A. Barish, Almas F. Abbasi, ...

Abstract Texture features have played an essential role in computer-aided diagnosis in medical imaging. The grey-level co-occurrence matrix (GLCM)-based texture descriptor has emerged as one of the most successful feature sets for these applications. This study aims to increase the potential of these features by introducing multi-scale analysis into the construction of the GLCM texture descriptor. We first introduce a new parameter, stride, to extend the definition of the GLCM. We then propose three multi-scaling GLCM models, one per parameter: (1) a learning model with multiple displacements, (2) a learning model with multiple strides (LMS), and (3) a learning model with multiple angles. These models increase the texture information by introducing more texture patterns, and they mitigate the direction-sparsity and dense-sampling problems of the traditional Haralick model. To analyse the three parameters further, we test the three models by performing classification on a dataset of 63 large polyp masses obtained from computed tomography colonography, consisting of 32 adenocarcinomas and 31 benign adenomas. Finally, the proposed methods are compared with several typical GLCM texture descriptors and one deep-learning model. LMS obtains the highest performance, raising the prediction power, measured by the area under the receiver operating characteristic curve, to 0.9450 with a standard deviation of 0.0285, a significant improvement.
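One plausible reading of the stride parameter is that it thins the grid of reference pixels while the displacement still fixes which pixel pair is compared. A NumPy sketch under that assumption, not the authors' implementation:

```python
import numpy as np

def glcm_stride(img, d=1, stride=1, levels=4):
    """GLCM with an additional stride parameter.

    The displacement `d` fixes the horizontal pixel pair being
    compared, while `stride` subsamples the reference pixels: pairs
    are taken every `stride` rows and columns instead of densely.
    (Illustrative reading of the paper's parameter.)
    """
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(0, h, stride):
        for x in range(0, w - d, stride):
            P[img[y, x], img[y, x + d]] += 1
    return P / P.sum()
```

With `stride=1` this reduces to the dense Haralick sampling; larger strides trade density for coverage, which is the dense-sampling problem the abstract refers to.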


2020, Vol 43 (1), pp. 29-45
Author(s): Alex Noel Joseph Raj, Ruban Nersisson, Vijayalakshmi G. V. Mahesh, Zhemin Zhuang

The nipple is a vital landmark in breast lesion diagnosis. Although there are advanced computer-aided detection (CADe) systems for nipple detection in mediolateral oblique (MLO) views of mammogram images, few academic works address the coronal views of breast ultrasound (BUS) images. This paper presents a novel CADe system to locate the nipple shadow area (NSA) in ultrasound images. Hu moments and the grey-level co-occurrence matrix (GLCM) are calculated over an iterative sliding window to extract shape and texture features. These features are then concatenated and fed into an artificial neural network (ANN) to obtain probable NSAs. Next, contour features such as shape complexity (via fractal dimension), edge distance from the periphery and contour area are computed and passed to a support vector machine (SVM) to identify the correct NSA in each case. The coronal-plane BUS dataset was built in-house and consists of 64 images from 13 patients. Test results show that the proposed CADe system achieves 91.99% accuracy, 97.55% specificity, 82.46% sensitivity and an 88% F-score on this dataset.
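Shape complexity through fractal dimension is commonly estimated by box counting; a small NumPy sketch under that assumption (the paper does not specify its estimator):

```python
import numpy as np

def box_count_dimension(mask, sizes=(1, 2, 4, 8)):
    """Estimate the fractal dimension of a binary mask by box counting.

    Counts the occupied boxes N(s) at several box sizes s and fits
    log N(s) = -D log s + c; the slope magnitude D is the fractal
    dimension, a proxy for contour irregularity.
    """
    counts = []
    n = mask.shape[0]
    for s in sizes:
        c = 0
        for y in range(0, n, s):
            for x in range(0, n, s):
                if mask[y:y + s, x:x + s].any():
                    c += 1
        counts.append(c)
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope
```

A filled square yields D ≈ 2 and a thin line D ≈ 1; irregular contours fall in between, with more jagged boundaries scoring higher.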


2014, Vol 668-669, pp. 1041-1044
Author(s): Lin Lin Song, Qing Hu Wang, Zhi Li Pei

This paper first studies texture features. We construct a grey-difference primitive co-occurrence matrix to extract texture features, combining statistical methods with structural ones. The experimental results show that the features of the grey-difference primitive co-occurrence matrix are finer-grained than those of the traditional grey-level co-occurrence matrix.
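As a rough illustration of the statistical side of such a descriptor, the distribution of grey-level differences between displaced pixel pairs can be computed directly. This sketch is a simplified ingredient only, not the authors' primitive construction:

```python
import numpy as np

def gray_difference_hist(img, dx=1, dy=0):
    """Normalised histogram of absolute grey-level differences between
    pixel pairs displaced by (dx, dy).

    Small differences dominate in smooth textures, large differences
    in busy ones; difference-based co-occurrence descriptors build on
    exactly these statistics.
    """
    a = img[: img.shape[0] - dy, : img.shape[1] - dx]
    b = img[dy:, dx:]
    diff = np.abs(a.astype(int) - b.astype(int))
    hist = np.bincount(diff.ravel(), minlength=int(img.max()) + 1)
    return hist / hist.sum()
```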


BMC Cancer, 2020, Vol 20 (1)
Author(s): Sihua Niu, Jianhua Huang, Jia Li, Xueling Liu, Dan Wang, ...

Abstract Background The classification of Breast Imaging Reporting and Data System 4A (BI-RADS 4A) lesions is mostly based on the personal experience of doctors and lacks specific and clear classification standards. The development of artificial intelligence (AI) provides a new method for BI-RADS categorisation. We analysed the ultrasonic morphological and texture characteristics of BI-RADS 4A benign and malignant lesions using AI, and compared these characteristics to examine the value of AI in the differential diagnosis of BI-RADS 4A benign and malignant lesions. Methods A total of 206 BI-RADS 4A lesions examined using ultrasonography were analysed retrospectively, including 174 benign lesions and 32 malignant lesions. All lesions were contoured manually, and their ultrasonic morphological and texture features, such as circularity, height-to-width ratio, margin spicules, margin coarseness, margin indistinctness, margin lobulation, energy, entropy, grey mean, internal calcification and the angle between the long axis of the lesion and the skin (ALS), were calculated using grey level gradient co-occurrence matrix analysis. Differences between benign and malignant BI-RADS 4A lesions were analysed. Results Significant differences in margin lobulation, entropy, internal calcification and ALS were noted between the benign group and the malignant group (P = 0.013, 0.045, 0.045, and 0.002, respectively). The malignant group had more margin lobulations and lower entropy compared with the benign group, and the benign group had more internal calcifications and a greater ALS compared with the malignant group. No significant differences in circularity, height-to-width ratio, margin spicules, margin coarseness, margin indistinctness, energy, or grey mean were noted between benign and malignant lesions.
Conclusions Compared with the naked eye, AI can reveal more subtle differences between benign and malignant BI-RADS 4A lesions. These results remind us that careful observation of the margin and the internal echo is of great significance. With the morphological and texture information provided by AI, doctors can judge such atypical benign and malignant lesions more accurately.


Elevated blood glucose most often causes a metabolic disorder commonly called diabetes (scientifically, diabetes mellitus). One consequence is major loss of vision, which in the long term may eventually cause complete blindness. It begins with swelling of the blood vessels and the formation of microaneurysms at the ends of narrow capillaries. Haemorrhages due to the rupture of small vessels and fluid leakage cause exudates. A specialist examines these signs to diagnose the condition and give proper treatment. Fundus images are the fundamental tool for proper diagnosis by medical experts. In this research work, fundus images are processed, and a neural network and a support vector machine are trained for the proposed model. Features are extracted from the diabetic retinopathy images using texture-based algorithms such as Gabor filters, local binary patterns and the grey-level co-occurrence matrix for rating the level of diabetic retinopathy. The performance of all methods is evaluated in terms of accuracy, precision, recall and F-measure.
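The local binary pattern, one of the texture descriptors mentioned, assigns each pixel an 8-bit code from its neighbourhood; the histogram of codes then serves as the feature vector. A basic NumPy sketch of the non-rotation-invariant variant:

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour local binary pattern (LBP) codes.

    Each interior pixel gets an 8-bit code: bit k is set when the k-th
    neighbour (clockwise from top-left) is >= the centre pixel.
    """
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for k, (dy, dx) in enumerate(offsets):
        n = img[1 + dy:img.shape[0] - 1 + dy,
                1 + dx:img.shape[1] - 1 + dx]
        code |= (n >= c).astype(np.uint8) << k
    return code
```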


2021, Vol 2083 (4), pp. 042037
Author(s): Xia Yang

Abstract In structured-light geometric reconstruction, the complexity of shooting methods and scene lighting conditions means the resulting images may lack detail because of uneven illumination. For this reason, the article proposes a Retinex algorithm with colour restoration and a colour-saturation correction strategy based on HSV colour-space transformation and artificial intelligence techniques. A threshold is then used to distinguish bright areas, and the underestimated transmittance in those areas is corrected. Finally, the intensity component and saturation value are restored in the HSV colour space, and a histogram stretch is applied to the intensity component.
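The final step can be illustrated on the intensity (V) channel alone; this is a minimal linear histogram stretch, standing in for whatever stretching variant the paper actually uses:

```python
import numpy as np

def stretch_intensity(v):
    """Linear histogram stretch of an intensity (V) channel to [0, 1].

    After enhancement, the value component is re-mapped so that its
    minimum and maximum span the full range, recovering contrast lost
    to uneven illumination.
    """
    v = v.astype(float)
    lo, hi = v.min(), v.max()
    if hi == lo:  # flat channel: nothing to stretch
        return np.zeros_like(v)
    return (v - lo) / (hi - lo)
```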


2020, Vol 3 (4), pp. 240-251
Author(s): Dmitro Yuriiovych Hrishko, Ievgen Arnoldovich Nastenko, Maksym Oleksandrovych Honcharuk, Volodymyr Anatoliyovich Pavlov

This article discusses the use of texture-analysis methods to obtain informative features that describe the texture of liver ultrasound images. In total, 317 liver ultrasound images provided by the Institute of Nuclear Medicine and Radiation Diagnostics of NAMS of Ukraine were analysed. The images were taken by three different sensors (convex, linear, and linear in increased-signal-level mode). The database contained both images of patients with a normal liver and images of patients with specific liver diseases (autoimmune hepatitis, Wilson's disease, hepatitis B and C, steatosis, and cirrhosis). Texture analysis was used for feature construction, which resulted in more than a hundred informative features making up a common stack. Among them are: three features patented by the authors, derived from the grey-level co-occurrence matrix; features obtained with the spatial sweep method (working on the principle of the group method of data handling) applied to the ultrasound images; statistical features calculated on images brought to one scale with the help of the differential horizontal and vertical matrices proposed by the authors; and grey-scale pair ensembles (found using a genetic algorithm) that best identify liver pathology on images transformed by horizontal and vertical differentiation. The resulting feature stack was used to solve the binary classification problem (“norm vs. pathology”) for ultrasound liver images. A machine learning method, Random Forest, was used for this purpose. Before classification, in order to obtain objective results, the total sample was divided into training (70 %), test (20 %), and examination (10 %) sets.
The result was three best Random Forest models, one per sensor, with the following recognition rates: 93.4 % for the convex sensor, 92.9 % for the linear sensor, and 92 % for the reinforced linear sensor.
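The 70/20/10 partition can be sketched as follows; only the proportions come from the text, while the shuffling and rounding policy are our assumptions:

```python
import random

def split_samples(samples, seed=0):
    """Shuffle and split into training (70 %), test (20 %) and
    examination (10 %) subsets, mirroring the evaluation protocol
    described above."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    items = list(samples)
    rng.shuffle(items)
    n = len(items)
    n_train = round(0.7 * n)
    n_test = round(0.2 * n)
    return (items[:n_train],
            items[n_train:n_train + n_test],
            items[n_train + n_test:])
```

For the 317 images in the study this would give 222 training, 63 test and 32 examination samples.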


2021, Vol 23 (Supplement_6), pp. vi142-vi142
Author(s): Kaylie Cullison, Garrett Simpson, Danilo Maziero, Kolton Jones, Radka Stoyanova, ...

Abstract A dilemma in treating glioblastoma is that MRI after chemotherapy and radiation therapy (chemoRT) shows areas of presumed tumor growth in up to 50% of patients. These areas can represent true progression (TP), tumor growth by tumors non-responsive to treatment, or pseudoprogression (PP), edema and tumor necrosis reflecting a favorable treatment response. On imaging, TP and PP are usually not discernable. Patients in this study undergo six weeks of chemoRT on a combination MRI/RT device, receiving daily MRIs. The goal of this study is to explore the correlation of radiomics features with progression. The tumor lesion and surrounding areas of growth/edema were manually outlined as regions of interest (ROIs) on each daily T2-weighted MRI scan. The ROIs were used to calculate texture features: statistical features based on the gray-level co-occurrence matrix (GLCM), the gray-level size zone matrix (GLSZM), the gray-level run length matrix (GLRLM), and the neighborhood gray-tone difference matrix (NGTDM). Each of these matrix classes describes the probability of spatial relationships of gray levels occurring within the ROI. Daily texture features were averaged per week of treatment for each patient. Patient response was retrospectively defined as no progression (NP), TP, or PP. A Kruskal-Wallis test was performed to identify the texture features that correlated most strongly with patient response. Forty texture features were calculated for 12 patients (19 treated; 7 excluded due to no T2 lesion or unknown progression status; 6 NP, 3 TP, 3 PP). More texture features correlated significantly with response in weeks 4-6 of treatment than in weeks 1-3. One texture feature, GLSZM Small Zone Low Gray-Level Emphasis, showed an increasing difference between PP and TP over time, with a significant difference during week 6 of treatment (p=0.0495).
Future directions include correlating early outcomes with greater numbers of patients and daily multiparametric MRI.
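GLSZM Small Zone Low Gray-Level Emphasis can be sketched by labelling 4-connected iso-intensity zones and weighting each zone by the inverse squares of its gray level and size. The gray-level offset and the connectivity are our assumptions, as the abstract does not fix a convention:

```python
import numpy as np
from collections import deque

def glszm(img):
    """Gray-level size zone matrix as a sparse dict-of-dicts:
    zones[g][s] counts the 4-connected zones of gray level g and size s."""
    h, w = img.shape
    seen = np.zeros((h, w), bool)
    zones = {}
    for y in range(h):
        for x in range(w):
            if seen[y, x]:
                continue
            g = img[y, x]
            size, q = 0, deque([(y, x)])
            seen[y, x] = True
            while q:  # breadth-first flood fill of one zone
                cy, cx = q.popleft()
                size += 1
                for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                               (cy, cx - 1), (cy, cx + 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and not seen[ny, nx] and img[ny, nx] == g):
                        seen[ny, nx] = True
                        q.append((ny, nx))
            zones.setdefault(g, {}).setdefault(size, 0)
            zones[g][size] += 1
    return zones

def szlge(zones):
    """Small Zone Low Gray-Level Emphasis: mean of 1 / (g^2 * s^2) over
    all zones; gray level 0 is shifted to 1 so the weight stays finite."""
    total = sum(c for row in zones.values() for c in row.values())
    return sum(c / (((g + 1) ** 2) * s ** 2)
               for g, row in zones.items() for s, c in row.items()) / total
```

The feature is large when many small, dark zones are present, which is why it can separate the edematous, necrotic texture of PP from solid TP growth.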

