A Framework for Pedestrian Attribute Recognition Using Deep Learning

2022 · Vol 12 (2) · pp. 622
Author(s): Saadman Sakib, Kaushik Deb, Pranab Kumar Dhar, Oh-Jin Kwon

The pedestrian attribute recognition task is becoming more popular daily because of its significant role in surveillance scenarios. With recent technological advances, deep learning has come to the forefront of computer vision, and previous works have applied it in different ways to recognize pedestrian attributes. The results are satisfactory, but there is still room for improvement. Transfer learning is becoming increasingly popular because it reduces computation cost and alleviates data scarcity. This paper proposes a framework for recognizing pedestrian attributes in surveillance scenarios. A Mask R-CNN object detector extracts the pedestrians. Additionally, we apply transfer learning to different CNN architectures, i.e., Inception ResNet v2, Xception, ResNet 101 v2, and ResNet 152 v2. The main contribution of this paper is fine-tuning the ResNet 152 v2 architecture, performed by freezing different numbers of layers: the last 4, 8, 12, 14, and 20, none, and all. Moreover, a data balancing technique, i.e., oversampling, is applied to resolve the class imbalance problem of the dataset, and its usefulness is analyzed. Our proposed framework outperforms state-of-the-art methods, providing 93.41% mA and 89.24% mA on the RAP v2 and PARSE100K datasets, respectively.
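As a rough illustration of the layer-freezing idea, the sketch below fine-tunes a Keras ResNet152V2 backbone with all but the last few layers frozen and a multi-label sigmoid head; the number of attributes and the "unfreeze last 12 layers" setting are hypothetical stand-ins for the configurations explored in the paper, not the authors' exact setup.

```python
import tensorflow as tf

NUM_ATTRIBUTES = 54      # hypothetical number of pedestrian attributes
UNFREEZE_LAST = 12       # one of several freeze settings to try (4, 8, 12, 14, 20, ...)

# Pre-trained backbone without its classification head.
base = tf.keras.applications.ResNet152V2(weights="imagenet", include_top=False,
                                          input_shape=(224, 224, 3), pooling="avg")

# Freeze everything except the last UNFREEZE_LAST layers.
for layer in base.layers[:-UNFREEZE_LAST]:
    layer.trainable = False

# Multi-label head: one sigmoid output per attribute.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(NUM_ATTRIBUTES, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["binary_accuracy"])
# Inputs should first be scaled with tf.keras.applications.resnet_v2.preprocess_input.
# model.fit(train_images, train_attribute_vectors, epochs=..., validation_data=...)
```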

2020 · Vol 10 (4) · pp. 1276
Author(s): Eréndira Rendón, Roberto Alejo, Carlos Castorena, Frank J. Isidro-Ortega, Everardo E. Granda-Gutiérrez

The class imbalance problem has been a hot topic in the machine learning community in recent years, and in the era of big data and deep learning it remains in force. Much work has been done to deal with the class imbalance problem, with random sampling methods (over- and under-sampling) being the most widely employed approaches. More sophisticated sampling methods have also been developed, including the Synthetic Minority Over-sampling Technique (SMOTE), and they have been combined with cleaning techniques such as Edited Nearest Neighbor or Tomek's Links (SMOTE+ENN and SMOTE+TL, respectively). In the big data context, the class imbalance problem has mostly been addressed by adapting traditional techniques, while intelligent approaches have been relatively ignored. Thus, this work analyzes the capabilities and possibilities of heuristic sampling methods for deep learning neural networks in the big data domain, with particular attention to cleaning strategies. The study is carried out on big, multi-class imbalanced datasets obtained from hyper-spectral remote sensing images. The effectiveness of a hybrid approach is analyzed in which the dataset is first balanced with SMOTE, an Artificial Neural Network (ANN) is trained with those data, and the noise in the neural network output is processed with ENN; the ANN is then retrained with the resulting dataset. The results suggest that the best classification outcome is achieved when the cleaning strategies are applied to the ANN output rather than only to the input feature space. This makes clear the need to consider the classifier's nature when classical class imbalance approaches are adapted to deep learning and big data scenarios.
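A minimal sketch of one way to realize the described hybrid with imbalanced-learn and scikit-learn follows; the synthetic data and the step of re-labelling samples with the ANN's predictions before running ENN are assumptions, not the authors' exact procedure.

```python
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import EditedNearestNeighbours
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import make_classification

# Synthetic imbalanced data standing in for the hyper-spectral pixels.
X, y = make_classification(n_samples=5000, n_classes=3, n_informative=6,
                           weights=[0.8, 0.15, 0.05], random_state=0)

# 1) Balance the training data with SMOTE.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)

# 2) Train an ANN on the balanced data.
ann = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
ann.fit(X_bal, y_bal)

# 3) Clean noise on the network *output*: run ENN on the samples re-labelled
#    with the ANN's predictions, keeping only label-consistent samples.
y_pred = ann.predict(X_bal)
X_clean, y_clean = EditedNearestNeighbours().fit_resample(X_bal, y_pred)

# 4) Retrain the ANN on the cleaned data.
ann.fit(X_clean, y_clean)
```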


Author(s): David A. Wood, Sajjad Mardanirad, Hassan Zakeri

Multiple machine learning (ML) and deep learning (DL) models are evaluated and their prediction performance compared in classifying five wellbore fluid-loss classes from a 20-well drilling dataset (Azadegan oil field, Iran). That dataset includes 65,376 data records with seventeen drilling variables. The dataset's fluid-loss classes are heavily imbalanced (> 95% of data records belong to the less significant loss classes 1 and 2; only 0.05% belong to the complete-loss class 5). Class imbalance and the lack of high correlations between the drilling variables and fluid-loss classes pose challenges for ML/DL models. Tree-based and data-matching ML algorithms outperform DL and regression-based ML algorithms in predicting the fluid-loss classes. Random forest (RF), after training and testing, makes only 35 prediction errors over all data records. Consideration of precision, recall, and F1-scores, together with expanded confusion matrices, shows that the RF model provides the best predictions for fluid-loss classes 1 to 3, but that for class 4 Adaboost (ADA) and for class 5 decision tree (DT) outperform RF. This suggests that an ensemble of the fast-to-execute RF, ADA, and DT models may be the most practical way to achieve reliable wellbore fluid-loss predictions. DL models underperform several of the ML models evaluated and are particularly poor at predicting the least-represented classes 4 and 5. The DL models also require much longer execution times than the ML models, making them less attractive for field operations that require prompt, real-time decision responses to pending class-4 and class-5 fluid-loss events.
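The sketch below shows, with synthetic stand-in data, how the suggested RF + ADA + DT combination could be wired up as a simple hard-voting ensemble in scikit-learn; the class weights, feature counts, and hyperparameters are illustrative only, not the study's settings.

```python
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, VotingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.datasets import make_classification

# Stand-in for the drilling dataset: 17 variables, 5 imbalanced fluid-loss classes.
X, y = make_classification(n_samples=20000, n_features=17, n_informative=10,
                           n_classes=5, weights=[0.6, 0.35, 0.03, 0.015, 0.005],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Tree-based models reported to perform well, combined by majority vote.
ensemble = VotingClassifier([
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("ada", AdaBoostClassifier(random_state=0)),
    ("dt", DecisionTreeClassifier(random_state=0)),
], voting="hard")
ensemble.fit(X_tr, y_tr)
print(classification_report(y_te, ensemble.predict(X_te)))
```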


2021 · Vol 7 · pp. e671
Author(s): Shilpi Bose, Chandra Das, Abhik Banerjee, Kuntal Ghosh, Matangini Chattopadhyay, ...

Background: Machine learning is a machine intelligence technique that learns from data and detects inherent patterns in large, complex datasets. Due to this capability, machine learning techniques are widely used in medical applications, especially where large-scale genomic and proteomic data are involved. Cancer classification based on bio-molecular profiling data is a very important topic for medical applications, since it improves the diagnostic accuracy of cancer and enables a successful culmination of cancer treatments. Hence, machine learning techniques are widely used in cancer detection and prognosis. Methods: In this article, a new ensemble machine learning classification model named the Multiple Filtering and Supervised Attribute Clustering algorithm based Ensemble Classification model (MFSAC-EC) is proposed, which can handle the class imbalance problem and the high dimensionality of microarray datasets. This model first generates a number of bootstrapped datasets from the original training data, applying oversampling to handle the class imbalance problem. The proposed MFSAC method is then applied to each of these bootstrapped datasets to generate sub-datasets, each containing a subset of the most relevant/informative attributes of the original dataset. The MFSAC method is a feature selection technique combining multiple filters with a new supervised attribute clustering algorithm. For every sub-dataset a base classifier is constructed separately, and finally the predictions of these base classifiers are combined by majority voting to form the MFSAC-based ensemble classifier. In addition, the most informative attributes are selected as important features based on their frequency of occurrence in these sub-datasets. Results: To assess the performance of the proposed MFSAC-EC model, it is applied to different high-dimensional microarray gene expression datasets for cancer sample classification. The proposed model is compared with well-known existing models to establish its effectiveness. The experimental results show that the generalization performance/testing accuracy of the proposed classifier is significantly better than that of other well-known existing models. Apart from that, the proposed model can identify many important attributes/biomarker genes.
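The skeleton below captures the general shape of such an ensemble (bootstrap, oversample, filter-select features, train base learners, majority vote) using scikit-learn and imbalanced-learn; it substitutes a plain ANOVA filter and SVM base learners for the paper's MFSAC feature selection, so it is a simplified sketch rather than the authors' implementation.

```python
import numpy as np
from collections import Counter
from imblearn.over_sampling import RandomOverSampler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.utils import resample

def fit_ensemble(X, y, n_members=10, k_features=50, seed=0):
    """Simplified skeleton of a bootstrapped, oversampled, filter-selected ensemble."""
    rng = np.random.RandomState(seed)
    members = []
    for _ in range(n_members):
        # Bootstrap sample of the training data.
        Xb, yb = resample(X, y, random_state=rng)
        # Oversample minority classes within the bootstrap.
        Xb, yb = RandomOverSampler(random_state=rng).fit_resample(Xb, yb)
        # Filter-based selection of the most relevant genes (ANOVA F-score here).
        selector = SelectKBest(f_classif, k=min(k_features, Xb.shape[1])).fit(Xb, yb)
        clf = SVC().fit(selector.transform(Xb), yb)
        members.append((selector, clf))
    return members

def predict_ensemble(members, X):
    """Majority vote over the member classifiers."""
    votes = np.array([clf.predict(sel.transform(X)) for sel, clf in members])
    return np.array([Counter(col).most_common(1)[0][0] for col in votes.T])
```

The frequency with which each gene is picked by the per-member selectors could then be tallied to flag candidate biomarker genes, mirroring the attribute-frequency idea described above.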


2018
Author(s): Rodrigo Moraes, João Francisco Valiati, Wilson Pires Gavião Neto

Many people make their opinions available on the Internet nowadays, and researchers have been proposing methods to automate the task of classifying textual reviews as positive or negative. Standard supervised learning techniques have been adopted to accomplish this task. In practice, positive reviews are abundant in comparison to negative ones. This context poses challenges to learning-based methods, and data undersampling/oversampling are popular preprocessing techniques to overcome the problem. A combination of sampling techniques and learning methods, such as Artificial Neural Networks (ANN) or Support Vector Machines (SVM), has been successfully adopted as a classification approach in many areas, yet the sentiment classification literature has not explored ANN in studies that involve sampling methods to balance data. Even the performance of SVM, which is widely used as a sentiment learner, has rarely been assessed in the context of a preceding sampling method. This paper addresses document-level sentiment analysis with unbalanced data and focuses on empirically assessing the performance of ANN in the context of undersampling the (majority) set of positive reviews. We adopted the performance of SVM as a baseline, since some studies have indicated that SVM is less subject to the class imbalance problem. Results are produced with a traditional bag-of-words model using popular feature selection and weighting methods. Our experiments indicate that SVM is more stable than ANN in highly unbalanced (80%) data scenarios. However, despite the information discarded by random undersampling, ANN outperforms SVM or produces comparable results.
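A toy sketch of this kind of pipeline (bag-of-words features, random undersampling of the majority positive class, then ANN and SVM learners) might look as follows; the corpus and hyperparameters are placeholders, not the paper's experimental setup.

```python
from imblearn.under_sampling import RandomUnderSampler
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score

# Toy corpus standing in for the review data (positives are the majority class).
texts = ["great product", "loved it", "works fine", "excellent value",
         "awful quality", "really nice", "would buy again", "terrible support"]
labels = [1, 1, 1, 1, 0, 1, 1, 0]

# Traditional bag-of-words / TF-IDF representation.
X = TfidfVectorizer().fit_transform(texts)

# Undersample the majority (positive) class to balance the training set.
X_bal, y_bal = RandomUnderSampler(random_state=0).fit_resample(X, labels)

for name, clf in [("ANN", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)),
                  ("SVM", LinearSVC())]:
    clf.fit(X_bal, y_bal)
    print(name, f1_score(labels, clf.predict(X), average="macro"))
```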


2020 · Vol 3 (2) · pp. 20
Author(s): Aliyu Abubakar, Mohammed Ajuji, Ibrahim Usman Yahya

While visual assessment is the standard technique for burn evaluation, computer-aided diagnosis is increasingly sought due to the high number of incidences globally. Patients face challenges that include, but are not limited to, a shortage of experienced clinicians, a lack of access to healthcare facilities, and high diagnostic cost. A number of studies have been proposed for discriminating burnt and healthy skin using machine learning, leaving a huge and important gap unaddressed: whether burns and related skin injuries can be effectively discriminated using machine learning techniques. Therefore, in this paper we specifically use transfer learning, leveraging pre-trained deep learning models because the available dataset is small, to discriminate two classes of skin injuries: burnt skin and injured skin. Experiments were extensively conducted using three state-of-the-art pre-trained deep learning models, ResNet50, ResNet101, and ResNet152, for image pattern extraction via two transfer learning strategies: a fine-tuning approach, in which the dense and classification layers were modified and trained on features extracted by the base layers, and a second approach in which a support vector machine (SVM) replaced the top layers of the pre-trained models and was trained using off-the-shelf features from the base layers. Our proposed approach records near-perfect classification accuracy of approximately 99.9% in categorizing burnt and injured skin.
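As an illustration of the second strategy, the sketch below extracts off-the-shelf ResNet50 features with Keras and hands them to an SVM; the skin-patch arrays, labels, and the choice of a linear kernel are assumptions for illustration only.

```python
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

# Frozen ResNet50 backbone used as an off-the-shelf feature extractor.
backbone = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                          pooling="avg", input_shape=(224, 224, 3))

def extract_features(images):
    """images: float array of shape (n, 224, 224, 3) in the original RGB range."""
    x = tf.keras.applications.resnet50.preprocess_input(np.array(images, dtype="float32"))
    return backbone.predict(x, verbose=0)

# Hypothetical arrays of burn / injured-skin patches and their labels:
# features = extract_features(skin_patches)
# svm = SVC(kernel="linear").fit(features, labels)
# print(svm.score(extract_features(test_patches), test_labels))
```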


2019
Author(s): William Barcellos, Nicolas Hiroaki Shitara, Carolina Toledo Ferraz, Raissa Tavares Vieira Queiroga, Jose Hiroki Saito, ...

The aim of this paper is to evaluate the performance of Transfer Learning techniques applied in Convolutional Neural Networks for biometric periocular classification. Two aspects of Transfer Learning were evaluated: the technique known as Fine Tuning and the technique known as Feature Extraction. Two CNN architectures were evaluated, AlexNet and VGG-16, and two image databases were used. These two databases have different characteristics regarding the acquisition method, the number of classes, the class balancing, and the number of elements in each class. Three experiments were conducted to evaluate the performance of the CNNs. In the first experiment we measured the Feature Extraction accuracy, and in the second one we evaluated the Fine Tuning performance. In the third experiment, we used AlexNet for Fine Tuning on one database, and then the FC7 layer of this trained CNN was used for Feature Extraction on the other database. We concluded that the data quality (the presence or not of class samples in the training set), the class imbalance (different number of elements in each class), and the method used to select the training and testing sets directly influence CNN accuracy. The Feature Extraction method, being simpler and not requiring network training, has lower accuracy than Fine Tuning. Furthermore, fine-tuning a CNN with periocular images from one database does not increase the accuracy of that CNN in Feature Extraction mode on another periocular database; the accuracy is quite similar to that obtained by the original pre-trained network.
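Keras does not ship an AlexNet, so the sketch below illustrates the Feature Extraction mode with the other architecture studied, VGG-16, tapping its fc2 layer as the analogue of AlexNet's FC7; this is an assumption for illustration, not the authors' code.

```python
import numpy as np
import tensorflow as tf

# VGG-16 with its fully connected layers kept, so a deep FC layer can be tapped
# as a fixed feature extractor (no network training required).
vgg = tf.keras.applications.VGG16(weights="imagenet", include_top=True)
feature_extractor = tf.keras.Model(inputs=vgg.input,
                                   outputs=vgg.get_layer("fc2").output)

def periocular_features(images):
    """images: (n, 224, 224, 3) RGB arrays; returns 4096-dimensional descriptors."""
    x = tf.keras.applications.vgg16.preprocess_input(np.array(images, dtype="float32"))
    return feature_extractor.predict(x, verbose=0)

# The descriptors can then be fed to a simple classifier (e.g., an SVM or a
# nearest-neighbour matcher) for periocular identification.
```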


Author(s): Aliyu Abubakar, Mohammed Ajuji, Ibrahim Usman Yahya

While visual assessment is the standard technique for burn evaluation, computer-aided diagnosis is increasingly sought due to the high number of incidences globally. Patients face challenges that include, but are not limited to, a shortage of experienced clinicians, a lack of access to healthcare facilities, and high diagnostic cost. A number of studies have been proposed for discriminating burnt and healthy skin using machine learning, leaving a huge and important gap unaddressed: whether burns and related skin injuries can be effectively discriminated using machine learning techniques. Therefore, we specifically use pre-trained deep learning models, since the available dataset is insufficient to train a new model from scratch. Experiments were extensively conducted using three state-of-the-art pre-trained deep learning models, ResNet50, ResNet101, and ResNet152, for image pattern extraction via two transfer learning strategies: a fine-tuning approach, in which the dense and classification layers were modified and trained on features extracted by the base layers, and a second approach in which a support vector machine (SVM) replaced the top layers of the pre-trained models and was trained using off-the-shelf features from the base layers. Our proposed approach records near-perfect classification accuracy of approximately 99.9%.
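Complementing the feature-extraction sketch above, a minimal sketch of the first (fine-tuning) strategy might place a new dense head on a frozen ResNet101 base; the layer sizes, dropout, and the choice of ResNet101 are illustrative assumptions.

```python
import tensorflow as tf

# Frozen ResNet101 base; only the new dense and classification layers are trained.
base = tf.keras.applications.ResNet101(weights="imagenet", include_top=False,
                                        pooling="avg", input_shape=(224, 224, 3))
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(2, activation="softmax"),   # burnt skin vs. injured skin
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_patches, train_labels, validation_data=(val_patches, val_labels))
```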


2021
Author(s): Sayantani Basu, Roy H. Campbell

The COrona VIrus Disease (COVID-19) pandemic has led to the emergence of several variants over time, increasing the importance of understanding sequence data related to COVID-19. In this chapter, we propose an alignment-free, k-mer-based LSTM (Long Short-Term Memory) deep learning model that can classify 20 different variants of COVID-19. We handle the class imbalance problem by sampling a fixed number of sequences for each class label, and we handle the vanishing gradient problem that long sequences cause in LSTMs by dividing each sequence into fixed-length chunks and obtaining results on individual runs. Our results show that one-vs-all classifiers achieve test accuracies as high as 92.5% with tuned hyperparameters, compared to the multi-class classifier model. Our experiments show higher overall accuracies for B.1.1.214, B.1.177.21, B.1.1.7, B.1.526, and P.1 with the one-vs-all classifiers, suggesting the presence of distinct mutations in these variants. Our results also show that embedding vector size and batch size yield insignificant improvements in accuracy, whereas changing from 2-mers to 3-mers mostly improves accuracy. We also studied individual runs, which show that most accuracies improved after the 20th run, indicating that these sequence positions may contribute more to distinguishing among different COVID-19 variants.
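A rough sketch of an alignment-free k-mer LSTM of this kind in Keras follows; the chunk length, embedding size, and LSTM width are assumptions rather than the chapter's settings.

```python
import numpy as np
import tensorflow as tf

K = 3                 # k-mer length (3-mers are reported to work better than 2-mers)
MAX_LEN = 500         # fixed chunk length used to avoid very long sequences
N_VARIANTS = 20       # number of COVID-19 variant labels

# Vocabulary of all 3-mers over the nucleotide alphabet; index 0 is reserved for padding.
alphabet = "ACGT"
kmers = [a + b + c for a in alphabet for b in alphabet for c in alphabet]
kmer_index = {km: i + 1 for i, km in enumerate(kmers)}

def encode(sequence):
    """Turn a genome string into a fixed-length array of k-mer indices."""
    ids = [kmer_index.get(sequence[i:i + K], 0) for i in range(len(sequence) - K + 1)]
    ids = ids[:MAX_LEN] + [0] * max(0, MAX_LEN - len(ids))
    return np.array(ids)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=len(kmers) + 1, output_dim=32, mask_zero=True),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(N_VARIANTS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# X = np.stack([encode(s) for s in genome_strings]); model.fit(X, variant_labels, ...)
```

For a one-vs-all variant classifier, the final layer would instead be a single sigmoid unit trained with binary cross-entropy.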


2020 · Vol 16 (3) · pp. 60-86
Author(s): Debashree Devi, Suyel Namasudra, Seifedine Kadry

Class imbalance is a well-investigated topic that addresses the performance degradation of standard learning models due to the uneven distribution of classes in a dataspace. Cluster-based undersampling is a popular solution in this domain, which eliminates majority class instances from a definite number of clusters to balance the training data. However, distance-based elimination of instances is often affected by the underlying data distribution. Recently, ensemble learning techniques have emerged as an effective solution due to their weighted learning of rare instances. In this article, a boosting-aided adaptive cluster-based undersampling technique is proposed to eliminate learning-insignificant majority class instances from the clusters, detected through an AdaBoost ensemble learning model. The proposed work is validated against seven existing cluster-based undersampling techniques, on six binary datasets and with three classification models. The experimental results establish the effectiveness of the proposed technique over the existing methods.
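One plausible reading of this technique is sketched below with scikit-learn: cluster the majority class, score instances with AdaBoost, and drop the easiest (least informative) instances within each cluster. The keep fraction, cluster count, and scoring rule are assumptions, not the authors' exact algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import AdaBoostClassifier

def boosted_cluster_undersample(X, y, majority_label, n_clusters=5, keep_frac=0.5, seed=0):
    """Rough sketch: drop the majority-class instances that an AdaBoost model
    finds easiest (least informative) inside each cluster."""
    maj_idx = np.where(y == majority_label)[0]
    clusters = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(X[maj_idx])

    booster = AdaBoostClassifier(random_state=seed).fit(X, y)
    # Confidence that each majority instance belongs to its own (majority) class.
    maj_conf = booster.predict_proba(X[maj_idx])[:, list(booster.classes_).index(majority_label)]

    keep = []
    for c in range(n_clusters):
        members = maj_idx[clusters == c]
        conf = maj_conf[clusters == c]
        n_keep = max(1, int(keep_frac * len(members)))
        # Keep the hardest (lowest-confidence) instances; drop the easy ones.
        keep.extend(members[np.argsort(conf)[:n_keep]])

    keep = np.concatenate([np.array(keep, dtype=int), np.where(y != majority_label)[0]])
    return X[keep], y[keep]
```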

