A Computer-Aided Method for Digestive System Abnormality Detection in WCE Images

2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Zahra Amiri ◽  
Hamid Hassanpour ◽  
Azeddine Beghdadi

Wireless capsule endoscopy (WCE) is a powerful tool for the diagnosis of gastrointestinal diseases. The output of this tool is a video about eight hours long, containing about 8,000 frames. Reviewing all of the video frames is a difficult task for a physician. In this paper, a new abnormality detection system for WCE images is proposed. The proposed system has four main steps: (1) preprocessing, (2) region of interest (ROI) extraction, (3) feature extraction, and (4) classification. In ROI extraction, distinct areas are first highlighted and nondistinct areas are faded using the joint normal distribution; distinct areas are then extracted as ROI segments by applying a threshold. The main idea is to extract abnormal areas in each frame, so the method can be used to extract various lesions in WCE images. In the feature extraction step, three different types of features (color, texture, and shape) are employed. Finally, the features are classified using a support vector machine. The proposed system was tested on the Kvasir-Capsule dataset and can detect multiple lesions in WCE frames with high accuracy.
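As a hedged illustration of steps (3) and (4) above, the sketch below computes simple color, texture, and shape descriptors from a frame and its ROI mask and feeds them to an SVM. The helper names (`frame_descriptor`, `train_detector`), the specific descriptors, and the use of OpenCV and scikit-learn are assumptions for illustration; the paper's joint-normal ROI extraction and Kvasir-Capsule handling are not reproduced.

```python
# Illustrative sketch only: simple color/texture/shape descriptors + SVM,
# not the authors' exact features or ROI-extraction code.
import numpy as np
import cv2
from sklearn.svm import SVC

def color_features(frame_bgr):
    """Mean and standard deviation of each HSV channel (6 values)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV).reshape(-1, 3).astype(np.float32)
    return np.concatenate([hsv.mean(axis=0), hsv.std(axis=0)])

def texture_features(frame_gray, bins=16):
    """Normalized histogram of gradient magnitudes as a simple texture cue."""
    gx = cv2.Sobel(frame_gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(frame_gray, cv2.CV_32F, 0, 1)
    mag = cv2.magnitude(gx, gy)
    hist, _ = np.histogram(mag, bins=bins, range=(0, float(mag.max()) + 1e-6))
    return hist / (hist.sum() + 1e-6)

def shape_features(roi_mask_u8):
    """Area and perimeter of the largest contour in the binary ROI mask."""
    contours, _ = cv2.findContours(roi_mask_u8, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return np.zeros(2, dtype=np.float32)
    c = max(contours, key=cv2.contourArea)
    return np.array([cv2.contourArea(c), cv2.arcLength(c, True)], dtype=np.float32)

def frame_descriptor(frame_bgr, roi_mask_u8):
    """Concatenate the three feature groups for one WCE frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return np.concatenate([color_features(frame_bgr),
                           texture_features(gray),
                           shape_features(roi_mask_u8)])

def train_detector(frames, masks, labels):
    """Fit an SVM on labelled frames (labels: normal vs. abnormal)."""
    X = np.stack([frame_descriptor(f, m) for f, m in zip(frames, masks)])
    return SVC(kernel="rbf").fit(X, labels)
```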

2018 ◽  
Vol 8 (7) ◽  
pp. 1210 ◽  
Author(s):  
Mahdieh Izadpanahkakhk ◽  
Seyyed Razavi ◽  
Mehran Taghipour-Gorjikolaie ◽  
Seyyed Zahiri ◽  
Aurelio Uncini

Palmprint verification is one of the most significant and popular approaches to personal authentication due to its high accuracy and efficiency. A novel approach is proposed that uses deep region of interest (ROI) and feature extraction models for palmprint verification, exploiting convolutional neural networks (CNNs) along with transfer learning. The extracted palmprint ROIs are fed to the final verification system, which is composed of two modules: (i) a pre-trained CNN architecture as a feature extractor and (ii) a machine learning classifier. In order to evaluate our proposed model, we computed the intersection over union (IoU) metric for ROI extraction along with accuracy, receiver operating characteristic (ROC) curves, and equal error rate (EER) for the verification task. The experiments demonstrated that the ROI extraction module could reliably find the appropriate palmprint ROIs, and the verification results were highly precise. This was verified with the different databases and classification methods employed in our proposed model. In comparison with other existing approaches, our model was competitive with the state-of-the-art approaches that rely on hand-crafted descriptors. We achieved an IoU score of 93% and an EER of 0.0125 using a support vector machine (SVM) classifier on the contact-based Hong Kong Polytechnic University Palmprint (HKPU) database. Notably, all code is open-source and can be accessed online.
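A minimal sketch of the two-module verifier described above: a pre-trained CNN used as a fixed feature extractor, followed by an SVM classifier. The choice of ResNet-18, the ImageNet weights string, the preprocessing values, and the helper names (`embed`, `train_verifier`) are illustrative assumptions; the paper's exact architecture and published code are not reproduced here.

```python
# Sketch under assumptions: ResNet-18 embeddings + linear SVM; requires a
# recent torchvision (>= 0.13) for the string-based weights argument.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.svm import SVC

backbone = models.resnet18(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()   # drop the ImageNet classification head
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(roi_path):
    """Return a 512-D CNN embedding for one extracted palmprint ROI image."""
    img = preprocess(Image.open(roi_path).convert("RGB")).unsqueeze(0)
    return backbone(img).squeeze(0).numpy()

def train_verifier(roi_paths, subject_labels):
    """Fit the classifier module (an SVM) on CNN embeddings of the ROIs."""
    X = np.stack([embed(p) for p in roi_paths])
    return SVC(kernel="linear").fit(X, subject_labels)
```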


2016 ◽  
Vol 78 (8-2) ◽  
Author(s):  
Krishna Mohan Kudiri

Estimating human emotions during a conversation is difficult for a computer. In this study, facial expressions and speech are used to estimate emotions (angry, sad, happy, boredom, disgust, and surprise). A proposed hybrid system based on facial expressions and speech estimates the emotions of a person engaged in a conversational session. Relative Bin Frequency Coefficients and Relative Sub-Image-Based features are used for the acoustic and visual modalities, respectively, and a Support Vector Machine is used for classification. This study shows that the proposed feature extraction from acoustic and visual data, along with the proposed fusion technique, is the most prominent factor affecting the emotion detection system. Some other aspects also affect the system, but their effect is relatively minor. The performance of the bimodal system was observed to be lower than that of the unimodal system on deliberate facial expressions; to deal with this problem, a suitable database is used. The results indicate that the proposed system performed better on the basic emotion classes than the other approaches considered.
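The fusion described above can be sketched, under assumptions, as feature-level concatenation of the two modality descriptors followed by an SVM. The Relative Bin Frequency Coefficients and Relative Sub-Image-Based features themselves are not implemented here; `fuse` and `train_emotion_classifier` are hypothetical helpers that take precomputed per-modality vectors.

```python
# Hedged sketch: feature-level (early) fusion of acoustic and visual vectors
# followed by an SVM; the paper's actual fusion technique may differ.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def fuse(acoustic_vec, visual_vec):
    """Concatenate per-modality descriptors into one fused vector."""
    return np.concatenate([acoustic_vec, visual_vec])

def train_emotion_classifier(acoustic_feats, visual_feats, labels):
    """Fit an SVM on standardized fused features.

    labels: e.g. angry, sad, happy, boredom, disgust, surprise.
    """
    X = np.stack([fuse(a, v) for a, v in zip(acoustic_feats, visual_feats)])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    return clf.fit(X, labels)
```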


2019 ◽  
Vol 16 (10) ◽  
pp. 4170-4178
Author(s):  
Sheifali Gupta ◽  
Gurleen Kaur ◽  
Deepali Gupta ◽  
Udit Jindal

This paper addresses the issue of coin recognition under shading and reflection variations within the same lighting conditions. To approach the problem, a database of Brazilian coin images (both obverse and reverse sides) covering five denominations has been used, provided by Kaggle, one of the largest and most diverse data communities in the world. This work focuses on an automatic image classification process for Brazilian coins. The image-based classification of coins primarily incorporates three stages: Region of Interest (ROI) extraction, feature extraction, and classification. The first step, ROI extraction, is accomplished by segmenting the coin region using the proposed segmentation method. In the second step, feature extraction, Histogram of Oriented Gradients (HOG) features are extracted from the image and converted to a feature vector. In the third step, classification, the extracted features are mapped to their class. Three classification algorithms, namely Support Vector Machine (SVM), Artificial Neural Network (ANN), and K-Nearest Neighbour, are compared for classifying the five coin denominations. With the proposed segmentation methodology, the best classification accuracy of 92% is achieved with the ANN classifier.
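A hedged sketch of the second and third stages (HOG feature extraction and classifier comparison) using scikit-image and scikit-learn follows; the segmentation stage, the image size, and the HOG parameters are assumptions, and `hog_descriptor` and `compare_classifiers` are illustrative names.

```python
# Sketch only: HOG features on segmented coin ROIs, compared across SVM,
# an MLP (standing in for the ANN), and k-NN with 5-fold cross-validation.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

def hog_descriptor(coin_gray):
    """HOG feature vector for one segmented grayscale coin image."""
    roi = resize(coin_gray, (128, 128), anti_aliasing=True)
    return hog(roi, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

def compare_classifiers(coin_images, denominations):
    """Return mean cross-validated accuracy for the three classifiers."""
    X = np.stack([hog_descriptor(img) for img in coin_images])
    models = {
        "SVM": SVC(kernel="rbf"),
        "ANN": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500),
        "k-NN": KNeighborsClassifier(n_neighbors=5),
    }
    return {name: cross_val_score(clf, X, denominations, cv=5).mean()
            for name, clf in models.items()}
```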


Author(s):  
Toni Dwi Novianto ◽  
I Made Susi Erawan

Abstract: Fish eye color is an important attribute of fish quality. The change in eye color during storage correlates with freshness and has a direct effect on consumer perception. The process of fish eye color change can be analyzed using image processing. The purpose of this study was to find the best classification method for predicting fish freshness based on image processing of fish eyes. Three tuna were used in this study. The test was carried out for 20 hours at room temperature, with an eye image captured every 2 hours. Fish eye image processing used MATLAB R2017a, while classification used Weka 3.8. The image processing stages are capturing the fish eye image, segmenting the ROI (region of interest), converting the RGB image to grayscale, and feature extraction. The feature extraction used is the gray-level co-occurrence matrix (GLCM). The classification techniques used are artificial neural networks (ANN), k-nearest neighbors (k-NN), and support vector machines (SVM). The results showed correlation values of ANN = 0.53, k-NN = 0.83, and SVM = 0.69. Based on these results, the best classification technique is k-nearest neighbors (k-NN).
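The GLCM texture step and the k-NN classifier can be sketched in Python as below; this is a rough stand-in for the MATLAB/Weka tooling the study actually used, and the GLCM distances, angles, property set, and choice of k = 3 are assumptions.

```python
# Hedged sketch: GLCM texture features from a grayscale eye ROI, classified
# with k-NN (the best performer reported above). Inputs must be 8-bit images.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

def glcm_features(eye_gray_u8):
    """Contrast, correlation, energy, and homogeneity over four GLCM angles."""
    glcm = graycomatrix(eye_gray_u8, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "correlation", "energy", "homogeneity"]
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

def train_freshness_model(eye_rois, freshness_labels):
    """Fit k-NN on GLCM features of the segmented eye ROIs."""
    X = np.stack([glcm_features(r) for r in eye_rois])
    return KNeighborsClassifier(n_neighbors=3).fit(X, freshness_labels)
```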


2021 ◽  
Vol 17 (1) ◽  
pp. 15-37
Author(s):  
Rashmi Shrivastava ◽  
Manju Pandey

Human fall detection is a subcategory of ambient assisted living. Falls are dangerous for elderly people, especially those who are unaccompanied. Detecting falls as early as possible and with high accuracy is indispensable for saving the person; otherwise a fall may lead to physical disability or even death. The proposed fall detection system is implemented in an edge computing scenario. An adaptive window-based approach is proposed for feature extraction, because window size affects the performance of the classifier. For training and testing, two public datasets and our own collected dataset have been used. Anomaly identification based on a support vector machine with an enhanced chi-square kernel is used for classifying Activities of Daily Living (ADL) and fall activities. Using the proposed approach, 100% sensitivity and 98.08% specificity have been achieved, which is better than three recent studies based on unsupervised learning. An important aspect of this study is that the system was also validated on actual real fall data, achieving 100% accuracy. The complete fall detection model is implemented in a fog computing scenario. The proposed adaptive-window feature extraction outperforms static-window approaches and three recent fall detection methods.
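A minimal sketch of an SVM with a chi-square kernel for ADL-versus-fall classification follows, using scikit-learn's chi2_kernel as a callable kernel. The paper's enhanced chi-square kernel and adaptive-window feature extraction are not reproduced; `train_fall_detector` and the assumption of non-negative, histogram-style per-window features are illustrative.

```python
# Sketch only: standard chi-square kernel SVM; the "enhanced" kernel in the
# paper is not implemented here. Features must be non-negative.
from sklearn.metrics.pairwise import chi2_kernel
from sklearn.svm import SVC

def train_fall_detector(X_windows, y):
    """X_windows: non-negative per-window features; y: 0 = ADL, 1 = fall."""
    clf = SVC(kernel=chi2_kernel)   # callable kernel -> Gram matrix K(X, Y)
    return clf.fit(X_windows, y)

# detector = train_fall_detector(X_train, y_train)
# predictions = detector.predict(X_test)
```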


Detecting and identifying abnormal behavior are prominent problems in video processing. An abnormal behavior detector can be designed by choosing regions of interest with a feature detector and tracking them over a short time period. The detector therefore embodies a trade-off between object tracking and optical flow. Since different regions normally display different types of motion patterns, we introduce Distribution-Based Crowd Abnormality Detection (DCAD), which captures the statistics of object trajectories passing through a spatio-temporal cube. This technique directly provides a distribution to describe each frame, and no clustering is required to build a dictionary. In addition, we exploit the motion trajectories to calculate "power potentials" in pixel space, which quantify the amount of interaction among people. Finally, a standard discriminative learning method, the Support Vector Machine (SVM), is used for classification to recognize the abnormalities.
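As a rough illustration of motion statistics gathered over a spatio-temporal cube and classified with an SVM, the sketch below uses dense Farneback optical flow; the DCAD distribution and the power-potential computation from the paper are not reproduced, and `cube_motion_histogram`, the bin count, and the magnitude cap are assumptions.

```python
# Illustrative sketch only: histogram of optical-flow magnitudes per cube of
# consecutive grayscale frames, fed to an SVM for normal/abnormal labels.
import numpy as np
import cv2
from sklearn.svm import SVC

def cube_motion_histogram(frames_gray, bins=16, max_mag=20.0):
    """Normalized histogram of Farneback flow magnitudes over a frame cube."""
    mags = []
    for prev, nxt in zip(frames_gray[:-1], frames_gray[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mags.append(np.linalg.norm(flow, axis=2).ravel())
    hist, _ = np.histogram(np.concatenate(mags), bins=bins, range=(0, max_mag))
    return hist / (hist.sum() + 1e-6)

def train_crowd_svm(cubes, labels):
    """Fit an SVM on per-cube motion histograms (labels: normal vs. abnormal)."""
    X = np.stack([cube_motion_histogram(c) for c in cubes])
    return SVC(kernel="rbf").fit(X, labels)
```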


Emotion plays a critical role in effectively conveying one's beliefs and intentions. As a result, emotion identification has become the focus of several recent studies. Patient monitoring models are becoming significant in patient care and can provide helpful feedback on health issues to caregivers and clinicians. In this work, a patient satisfaction recognition framework is proposed that uses image frames extracted from a recorded audio-visual dataset. The images are processed with techniques such as the Local Binary Pattern (LBP), a visual texture descriptor. The proposed framework performs feature extraction on the images and then applies a Support Vector Machine (SVM) for classification. Three distinct emotional states are detected from the results: whether the patient is happy, sad, or neutral. The output of such an analysis can be used by a group of analysts, including doctors, healthcare experts, and system experts, to incrementally improve smart healthcare systems. The reliability of the information provided by such a system makes these improvements more meaningful.
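A minimal sketch of the LBP-plus-SVM pipeline described above, using scikit-image's local_binary_pattern on grayscale frames; the neighbourhood parameters (P = 8, R = 1), the uniform method, and the helper names are assumptions rather than the paper's settings.

```python
# Sketch only: uniform LBP histogram per frame + SVM over happy/sad/neutral.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

P, R = 8, 1  # LBP neighbours and radius (assumed values)

def lbp_histogram(frame_gray):
    """Normalized histogram of uniform LBP codes for one image frame."""
    lbp = local_binary_pattern(frame_gray, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2))
    return hist / (hist.sum() + 1e-6)

def train_satisfaction_classifier(frames, emotion_labels):
    """Fit an SVM on LBP histograms (labels: happy, sad, neutral)."""
    X = np.stack([lbp_histogram(f) for f in frames])
    return SVC(kernel="rbf").fit(X, emotion_labels)
```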


2020 ◽  
Vol 3 (1) ◽  
pp. 46-51
Author(s):  
Febri Liantoni ◽  
Agus Santoso

Breast tumors can now be recognized from mammogram images, which speeds up the recognition and classification of breast cancer. This research applied classification techniques to breast cancer using mammogram images. The proposed model targets classification of malignant and benign cases. The research consists of five main stages: preprocessing, histogram equalization, convolution, feature extraction, and classification. In preprocessing, the image is cropped to the region of interest (ROI); for convolution, a median filter and histogram equalization are used to improve image quality. Feature extraction uses the Gray-Level Co-Occurrence Matrix (GLCM) with five features: entropy, correlation, contrast, homogeneity, and variance. The final step is classification using a Radial Basis Function Neural Network (RBFNN) and a Support Vector Machine (SVM). Based on the experiments, the accuracy for RBFNN is 86.27%, while the accuracy for SVM is 84.31%. This shows that the RBFNN method distinguishes types of breast cancer better than SVM. These results confirm that improving the image with histogram equalization and the median filter is useful in the classification process.
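The preprocessing chain (ROI cropping, histogram equalization, and median filtering) can be sketched with OpenCV as below; the crop interface and the kernel size are assumptions, and the GLCM feature extraction and RBFNN/SVM classifiers that follow are not reproduced here.

```python
# Hedged sketch of the image-enhancement steps before feature extraction;
# expects an 8-bit grayscale mammogram and an ROI box chosen beforehand.
import cv2

def preprocess_mammogram(img_gray_u8, roi_box, median_ksize=3):
    """Crop the ROI, equalize its histogram, and apply a median filter."""
    x, y, w, h = roi_box                       # ROI as (x, y, width, height)
    roi = img_gray_u8[y:y + h, x:x + w]
    roi = cv2.equalizeHist(roi)                # spread the intensity histogram
    roi = cv2.medianBlur(roi, median_ksize)    # suppress salt-and-pepper noise
    return roi
```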


2011 ◽  
Vol 474-476 ◽  
pp. 782-785
Author(s):  
Shuang Xu ◽  
Ji Dong Suo ◽  
Ji Yin Zhao

In this paper, a method of palmprint segmentation and location is proposed. The proposed method focuses on region of interest (ROI) extraction from palmprint images that involve translation and rotation. First, the palmprint image is binarized to define the palm edge. Then the fingers and palm are separated, and two valley points are located: one between the index and middle fingers and one between the ring and little fingers. Finally, the image is rotated based on the two valley points to correct its position, and a coordinate system is created from the valley points to determine the ROI. This method provides the necessary preprocessing for further feature extraction and matching. The effectiveness of the proposed method is verified using the PolyU palmprint database.
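As an illustration of the final alignment step only, the sketch below rotates the palm image so the line joining the two detected valley points becomes horizontal and then crops a square ROI relative to that line. Valley-point detection via binarization and contour analysis is omitted, and `align_and_crop`, the ROI size, and the offset are hypothetical choices, not the paper's values.

```python
# Sketch under assumptions: rotate about the valley midpoint, then crop a
# square ROI below the (now horizontal) valley line.
import numpy as np
import cv2

def align_and_crop(palm_gray, valley_a, valley_b, roi_size=128, offset=30):
    """Align the valley line horizontally and crop the palm ROI beneath it."""
    (xa, ya), (xb, yb) = valley_a, valley_b
    angle = np.degrees(np.arctan2(yb - ya, xb - xa))   # line angle in image coords
    center = ((xa + xb) / 2.0, (ya + yb) / 2.0)
    M = cv2.getRotationMatrix2D(center, angle, 1.0)
    h, w = palm_gray.shape
    rotated = cv2.warpAffine(palm_gray, M, (w, h))
    cx, cy = int(center[0]), int(center[1]) + offset
    return rotated[cy:cy + roi_size, cx - roi_size // 2:cx + roi_size // 2]
```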

