Optimized Tree Strategy with Principal Component Analysis Using Feature Selection-Based Classification for Newborn Infant's Jaundice Symptoms

2021 · Vol 2021 · pp. 1-9
Author(s): Debabrata Samanta, M. P. Karthikeyan, Marimuthu Karuppiah, Dalima Parwani, Manish Maheshwari, ...

Newborn jaundice grading is one of the most important and difficult research fields, and the mitotic count is an important component in determining the severity of newborn jaundice. This work presents principal component analysis (PCA) based feature selection combined with an optimized tree strategy classifier for automatic mitotic detection and grading in histopathology images. The study uses both real-time and benchmark datasets, together with dedicated approaches for detecting jaundice in newborn infants. Prior research indicates that poor feature quality can degrade classification performance, and restricting the classifier to a few exclusive principal properties can create a classification performance bottleneck. Identifying appropriate features for training the classifier is therefore essential, and this is achieved by combining a feature selection method with a classification model. The major outcome of this study is that image processing techniques are critical for predicting neonatal hyperbilirubinemia. Image processing converts analogue images to digital form and manipulates them; in medicine, its primary goal is to extract information useful for disease detection, diagnosis, monitoring, and therapy. Image datasets are used to validate the performance of newborn jaundice detection. Compared with conventional approaches, the method delivers accurate, fast, and time-efficient results, evaluated with the common performance indicators of accuracy, sensitivity, and specificity.
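The abstract does not specify the exact "optimized tree strategy", so the following is a minimal sketch of the described pipeline, PCA-based feature reduction feeding a tree classifier, using scikit-learn's DecisionTreeClassifier as a stand-in and randomly generated feature vectors in place of the histopathology data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Hypothetical stand-in data: 200 image-derived feature vectors,
# 3 jaundice severity grades (the real study uses histopathology images).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
y = rng.integers(0, 3, size=200)

# PCA feature selection followed by a tree classifier, as the abstract describes.
model = make_pipeline(PCA(n_components=10),
                      DecisionTreeClassifier(max_depth=5, random_state=0))
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"mean cross-validated accuracy: {scores.mean():.3f}")
```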

Author(s): Norsyela Muhammad Noor Mathivanan, Nor Azura Md. Ghani, Roziah Mohd Janor

The curse of dimensionality and the empty space phenomenon have emerged as critical problems in text classification. One way of dealing with them is to apply a feature selection technique before building a classification model; this reduces time complexity and can sometimes increase classification accuracy. This study introduces a feature selection technique based on K-Means clustering to overcome the weaknesses of traditional techniques such as principal component analysis (PCA), which require substantial time to transform all the input data. The proposed technique decides which features to retain based on the significance value of each feature within a cluster. The study found that K-Means clustering increases the efficiency of a KNN model on large data sets, while a KNN model without feature selection is adequate for small data sets. A comparison between K-Means clustering and PCA as feature selection techniques shows that the proposed technique outperforms PCA, especially in terms of computation time. Hence, K-Means clustering reduces data dimensionality with lower time complexity than PCA, without affecting the accuracy of the KNN model on high-frequency data.
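The abstract does not define the significance value used to retain features, so the sketch below makes an assumption: it clusters the feature columns with K-Means and keeps, per cluster, the column nearest its centroid as the representative, then trains KNN on the retained columns:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

def kmeans_feature_selection(X, n_keep=10, seed=0):
    """Cluster the feature columns of X and keep one representative per
    cluster (assumed here: the column nearest its cluster centroid)."""
    km = KMeans(n_clusters=n_keep, n_init=10, random_state=seed)
    labels = km.fit_predict(X.T)                  # each feature column is a point
    keep = []
    for c in range(n_keep):
        members = np.flatnonzero(labels == c)
        dists = np.linalg.norm(X.T[members] - km.cluster_centers_[c], axis=1)
        keep.append(members[np.argmin(dists)])
    return sorted(keep)

# Hypothetical usage on random stand-in data for a document-term matrix.
rng = np.random.default_rng(1)
X, y = rng.random((300, 50)), rng.integers(0, 2, 300)
cols = kmeans_feature_selection(X, n_keep=10)
knn = KNeighborsClassifier(n_neighbors=5).fit(X[:200, cols], y[:200])
print("held-out accuracy:", knn.score(X[200:, cols], y[200:]))
```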


2020 · Vol 9 (2) · pp. 72-79
Author(s): Sari Ayu Wulandari, Sutikno Madnasri, Ratih Pramitasari, Susilo Susilo

The need for aroma recognition devices, often known as enoses (electronic noses), is increasing. In the health field, an enose can detect type 2 diabetes mellitus (DM) early from the aroma of urine: it applies a pattern recognition algorithm to the input signals of a gas sensor array to recognize the urine aroma of diabetics. Demand for portable enose devices is growing with the need for real-time operation, and the choice of gas sensor array has an enormous impact on the device. This article discusses the effect of the number of sensors in the array on recognition results. The enose uses at most 4 sensors, giving the full feature matrix; the number of sensors determines the number of features. Feature selection is performed by varying the sensor subset: experiment 1 uses all 4 sensors, experiment 2 uses variations of 3 sensors, and experiment 3 uses variations of 2 sensors. For the 3- and 4-sensor variations, PCA (Principal Component Analysis) feature extraction reduces the features to the 2 best, while the 2-sensor variations use the primary feature matrix directly. After feature selection, each of the 11 variations therefore has 2 features. The features are then grouped using the FCM (Fuzzy C-Means) method. The results show that using two sensors achieves a high accuracy of 92.5%.
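Neither the sensor readings nor the FCM settings are given in the abstract, so the following is a minimal sketch under assumptions: synthetic 4-sensor feature vectors stand in for the urine-aroma data, PCA keeps the 2 best features, and a small self-contained Fuzzy C-Means implementation groups the samples into two clusters (e.g. diabetic vs. healthy):

```python
import numpy as np
from sklearn.decomposition import PCA

def fuzzy_c_means(X, c=2, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Minimal Fuzzy C-Means: returns cluster centres and the membership matrix."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)                 # memberships sum to 1 per sample
    for _ in range(max_iter):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        U_new = dist ** (-2.0 / (m - 1))              # standard FCM membership update
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            return centres, U_new
        U = U_new
    return centres, U

# Hypothetical stand-in for 4-sensor enose readings (rows = urine samples).
rng = np.random.default_rng(2)
readings = np.vstack([rng.normal(0, 1, (40, 4)), rng.normal(3, 1, (40, 4))])
features = PCA(n_components=2).fit_transform(readings)  # keep the 2 best features
centres, U = fuzzy_c_means(features, c=2)
labels = U.argmax(axis=1)                                # hard assignment per sample
print("cluster sizes:", np.bincount(labels))
```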


2019 · Vol 9 (22) · pp. 4733
Author(s): Cuiping Shao, Huiyun Li, Zheng Wang, Jiayan Fang

Nanoscale CMOS technology has encountered severe reliability issues, especially in on-chip memory. Conventional word-level error resilience techniques such as Error Correcting Codes (ECC) suffer from high physical overhead and an inability to correct the increasingly reported multiple-bit-flip errors. On the other hand, state-of-the-art applications such as image processing and machine learning loosen the required level of data protection, which enables dedicated techniques of approximate fault tolerance. In this work, we introduce a novel error protection scheme for memory based on feature extraction through Principal Component Analysis (PCA), with a module-wise technique segmenting the data before PCA. The extracted features are protected by replacing a faulty vector with the averaged confinement vector. This approach confines errors with either single or multiple bit flips for generic data blocks, while achieving significant savings in execution time and memory usage compared with traditional ECC techniques. Experimental results on image processing demonstrate that the proposed technique reconstructs images with PSNR over 30 dB while remaining robust against both single- and multiple-bit-flip errors, with memory storage reduced to just 22.4% of that of the conventional ECC-based technique.
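The paper's exact segmentation and fault-detection rules are not given in this abstract, so the following is a rough sketch under assumptions: data blocks are stored as PCA feature vectors, a vector deviating far from the block average is assumed to be corrupted by bit flips, and it is replaced by the averaged confinement vector before reconstruction:

```python
import numpy as np
from sklearn.decomposition import PCA

def protect_and_recover(blocks, n_components=8, z_thresh=4.0):
    """Sketch of PCA-based error confinement for flattened data blocks.

    blocks: (n_blocks, block_size) array, e.g. flattened 8x8 image tiles.
    A feature vector far from the block average (assumed fault indicator
    here) is replaced by the averaged confinement vector.
    """
    pca = PCA(n_components=n_components)
    feats = pca.fit_transform(blocks)            # per-block extracted features
    mean_vec = feats.mean(axis=0)                # averaged confinement vector
    std = feats.std(axis=0) + 1e-9
    outlier = np.abs((feats - mean_vec) / std).max(axis=1) > z_thresh
    feats[outlier] = mean_vec                    # confine suspected faulty vectors
    return pca.inverse_transform(feats)          # reconstructed blocks

# Hypothetical demo: corrupt one stored block with a large bit-flip-like error.
rng = np.random.default_rng(3)
blocks = rng.normal(size=(100, 64))
blocks[7] += 50.0                                # simulated multi-bit flip damage
recovered = protect_and_recover(blocks)
print("max abs value of recovered faulty block:", np.abs(recovered[7]).max())
```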


Author(s): P. Geethanjali

This chapter discusses the design and development of a surface Electromyogram (EMG) signal detection and conditioning system, along with the issues of unwanted spurious signals such as power line interference and artifacts, which corrupt the signals. To recognize hand gestures from EMG signals, Time Domain (TD) as well as Autoregressive (AR) coefficient features are extracted. The extracted features are reduced using Principal Component Analysis (PCA) to alleviate the burden on the classifier. A four-channel continuous EMG signal conditioning system is developed, and EMG signals are acquired from 10 able-bodied subjects to classify 6 unique movements of the hand and wrist. The reduced statistical TD and AR features are used to classify the signal patterns with k-Nearest Neighbour (kNN) and Neural Network (NN) classifiers. Further, EMG signals acquired from a transradial amputee using an 8-channel system for the 6 amenable motions are also classified. Statistical Analysis of Variance (ANOVA) on the classification performance of able-bodied subjects reveals that TD-PCA features perform significantly better than AR-PCA features, while no significant difference is found between the NN and kNN classifiers with reduced TD features. Since the average classification error of the kNN classifier with TD features is lowest, the kNN classifier is implemented off-line on the TMS2407eZdsp digital signal controller to study the actuation of three low-power DC drives in identifying the intended motion of an able-bodied subject.
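The chapter's exact TD feature set is not listed in this abstract, so the sketch below assumes the commonly used Hudgins time-domain features (mean absolute value, waveform length, zero crossings, slope sign changes) per channel, reduced with PCA and classified with kNN on synthetic stand-in windows:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def td_features(window, thresh=0.01):
    """Hudgins time-domain features for one EMG channel window (1-D array)."""
    diff = np.diff(window)
    mav = np.mean(np.abs(window))                          # mean absolute value
    wl = np.sum(np.abs(diff))                              # waveform length
    zc = np.sum((window[:-1] * window[1:] < 0) & (np.abs(diff) > thresh))
    ssc = np.sum((diff[:-1] * diff[1:] < 0) &
                 ((np.abs(diff[:-1]) > thresh) | (np.abs(diff[1:]) > thresh)))
    return np.array([mav, wl, zc, ssc])

# Hypothetical stand-in: 120 four-channel windows of 256 samples, 6 motion classes.
rng = np.random.default_rng(4)
windows = rng.normal(size=(120, 4, 256))
labels = rng.integers(0, 6, size=120)
X = np.array([np.concatenate([td_features(ch) for ch in w]) for w in windows])

# PCA-reduced TD features feeding a kNN classifier, as the chapter describes.
model = make_pipeline(PCA(n_components=8), KNeighborsClassifier(n_neighbors=5))
model.fit(X[:90], labels[:90])
print("held-out accuracy:", model.score(X[90:], labels[90:]))
```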

