feature transformation
Recently Published Documents

TOTAL DOCUMENTS: 240 (five years: 73)
H-INDEX: 17 (five years: 4)

2022 · Vol 13 (1) · pp. 1-11
Author(s): Shih-Chia Huang, Quoc-Viet Hoang, Da-Wei Jaw

Despite recent improvements in object detection techniques, many of them fail to detect objects in low-luminance images. The blurry and dimmed nature of low-luminance images leads to the extraction of vague features and failure to detect objects. In addition, many existing object detection methods are based on models trained on a mixture of sufficient- and low-luminance images, which also degrades the feature extraction process and the detection results. In this article, we propose a framework called the Self-adaptive Feature Transformation Network (SFT-Net) to effectively detect objects under low-luminance conditions. The proposed SFT-Net consists of three modules: (1) a feature transformation module, (2) a self-adaptive module, and (3) an object detection module. The feature transformation module enhances the extracted features by learning a feature-domain projection in an unsupervised manner. The self-adaptive module acts as a probabilistic gate that passes on either the transformed or the original features, further boosting the performance and generalization ability of the framework. Finally, the object detection module accurately detects objects in both low- and sufficient-luminance images using the features produced by the self-adaptive module. Experimental results demonstrate that the proposed SFT-Net framework significantly outperforms state-of-the-art object detection techniques, achieving an average precision (AP) up to 6.35 and 11.89 points higher on the sufficient- and low-luminance domains, respectively.
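The abstract describes the self-adaptive module as a probabilistic gate between transformed and original features. A minimal sketch of that idea, with a hypothetical interface (the paper's actual module is learned; here the gate is simply an externally supplied probability that the input is low-luminance):

```python
def self_adaptive_blend(original, transformed, gate):
    """Soft probabilistic mix of original and transformed feature
    vectors.  `gate` is an estimated probability that the input is
    low-luminance: 0.0 keeps the original features, 1.0 fully adopts
    the transformed ones.  Hypothetical sketch, not the paper's code."""
    if not 0.0 <= gate <= 1.0:
        raise ValueError("gate must be a probability in [0, 1]")
    # Element-wise convex combination of the two feature vectors.
    return [gate * t + (1.0 - gate) * o for o, t in zip(original, transformed)]
```

In the paper the choice is made per input by a learned probabilistic module; the convex combination above just illustrates how a single scalar gate can interpolate between the two feature sources.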


2021 · pp. 1-16
Author(s): Fang He, Wenyu Zhang, Zhijia Yan

Credit scoring has become increasingly important for financial institutions. With the advancement of artificial intelligence, machine learning methods, especially ensemble learning methods, have become increasingly popular for credit scoring. However, the problems of imbalanced data distribution and underutilized feature information have not been sufficiently addressed. To make the credit scoring model more adaptable to imbalanced datasets, the original model-based synthetic sampling method is extended herein to balance the datasets by generating appropriate minority samples that alleviate class overlap. To enable the model to extract inherent correlations from features, a new bagging-based feature transformation method is proposed, which transforms features using a tree-based algorithm and selects them using the chi-square statistic. Furthermore, a two-layer ensemble method that combines the advantages of dynamic ensemble selection and stacking is proposed to improve the classification performance of the resulting multi-stage ensemble model. Finally, four standardized datasets and six evaluation metrics are used to evaluate the proposed ensemble model. The experimental results confirm that it effectively improves classification performance and is superior to other benchmark models.
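The feature-selection step above scores features with the chi-square statistic. A self-contained sketch of that scoring for a single binary feature against binary labels (the paper applies it to tree-derived features; the 2x2 contingency computation is the same):

```python
def chi_square_score(feature, labels):
    """2x2 chi-square statistic between a binary feature column and
    binary class labels.  Higher scores indicate stronger association,
    so features can be ranked and the top ones kept.  Illustrative
    sketch of the selection criterion, not the paper's pipeline."""
    n = len(labels)
    # Build the 2x2 contingency table of (feature value, label) counts.
    cells = {(f, y): 0 for f in (0, 1) for y in (0, 1)}
    for f, y in zip(feature, labels):
        cells[(f, y)] += 1
    score = 0.0
    for f in (0, 1):
        for y in (0, 1):
            row = cells[(f, 0)] + cells[(f, 1)]   # marginal over labels
            col = cells[(0, y)] + cells[(1, y)]   # marginal over feature
            expected = row * col / n              # expected count if independent
            if expected:
                score += (cells[(f, y)] - expected) ** 2 / expected
    return score
```

A perfectly label-aligned feature scores n (the sample count), while a feature independent of the labels scores 0, so ranking by this statistic discards uninformative transformed features.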


2021 · Vol 2078 (1) · pp. 012045
Author(s): Xiaomeng Guo, Li Yi, Hang Zou, Yining Gao

Most existing face super-resolution (SR) methods are developed under the assumption that the degradation is fixed and known (e.g., bicubic downsampling). However, these methods suffer a severe performance drop under the various unknown degradations of real-world applications. Previous methods usually rely on facial priors, such as facial geometry priors or reference priors, to restore realistic face details. Nevertheless, low-quality inputs cannot provide accurate geometric priors, and high-quality references are often unavailable, which limits the use of face super-resolution in real-world scenes. In this work, we propose GPLSR, which uses the rich priors encapsulated in a pre-trained face GAN to perform blind face super-resolution. This generative facial prior is introduced into the super-resolution process through a channel squeeze-and-excitation spatial feature transformation layer (SE-SFT), which allows our method to achieve a good balance between realness and fidelity. Moreover, GPLSR can restore facial details in a single forward pass thanks to the powerful generative facial prior. Extensive experiments show that, at a magnification factor of 16, this method achieves better performance than existing techniques on both synthetic and real datasets.
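The SE-SFT layer combines two well-known operations: spatial feature transformation (an affine modulation of features by condition-derived scale and shift maps) and channel squeeze-and-excitation (rescaling each channel by a gate computed from its pooled statistics). A toy sketch of both, with shapes flattened to plain lists for illustration (assumed interface, not the paper's implementation):

```python
import math

def sft_modulate(feat, gamma, beta):
    """Spatial feature transformation: out = gamma * feat + beta,
    where gamma and beta would be predicted from the generative prior."""
    return [g * x + b for x, g, b in zip(feat, gamma, beta)]

def squeeze_excite(channel_feats, weights):
    """Toy squeeze-and-excitation: squeeze each channel by mean
    pooling, excite through a sigmoid with a per-channel weight,
    then rescale that channel's features by the resulting gate."""
    out = []
    for ch, w in zip(channel_feats, weights):
        s = sum(ch) / len(ch)                    # squeeze: global average pool
        gate = 1.0 / (1.0 + math.exp(-w * s))    # excite: sigmoid gate in (0, 1)
        out.append([gate * x for x in ch])       # rescale the channel
    return out
```

In the real layer both operations act on 4-D feature tensors and their parameters are learned; the point here is only the order of operations: channel reweighting plus prior-conditioned affine modulation.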


2021
Author(s): Ziyue Zhang, Shuai Jiang, Congzhentao Huang, Richard Yi Da Xu

2021
Author(s): Tomoaki Hayakawa, Chee Siang Leow, Akio Kobayashi, Takehito Utsuro, Hiromitsu Nishizaki

2021 · Vol 13 (8) · pp. 216
Author(s): Yu Zhao, Yi Zhu, Qiao Yu, Xiaoying Chen

Traditional methods in software defect prediction use part of the data in a project to train the defect prediction model and predict the defect labels of the remaining data. However, in practical software development, the project to be predicted is generally brand new, with too little labeled data to build a defect prediction model, so traditional methods are no longer applicable. Cross-project defect prediction builds the model from labeled data of projects of the same type that are similar to the target project, thereby addressing the lack of labeled data in traditional methods. However, the difference in data distribution between these source projects and the target project reduces prediction performance. To solve this problem, this paper proposes a cross-project defect prediction method based on manifold feature transformation. This method transforms the original feature space of the projects into a manifold space, reduces the difference in data distribution between the transformed source project and the transformed target project in that space, and finally uses the transformed source project to train a naive Bayes prediction model with better performance. A comparative experiment was carried out on the Relink and AEEEM datasets. The experimental results show that, compared with the benchmark method and several cross-project defect prediction methods, the proposed method effectively reduces the difference in data distribution between the source and target projects and obtains a higher F1 value, a metric commonly used to measure the performance of two-class models.
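The key step above is reducing the distribution difference between source and target features before training. As a much simpler stand-in for the paper's manifold-space alignment, here is a per-feature moment-matching sketch that rescales a source column to the target project's mean and standard deviation (illustrative only; the names and approach are assumptions, not the paper's method):

```python
import statistics

def align_to_target(source_col, target_col):
    """Match one source feature column to the target project's
    distribution by aligning mean and standard deviation.  This is a
    toy illustration of distribution alignment, not the manifold
    transformation used in the paper."""
    mu_s, sd_s = statistics.fmean(source_col), statistics.pstdev(source_col)
    mu_t, sd_t = statistics.fmean(target_col), statistics.pstdev(target_col)
    if sd_s == 0:
        # A constant source column carries no spread to rescale.
        return [mu_t for _ in source_col]
    # Standardize against source stats, then rescale to target stats.
    return [(x - mu_s) / sd_s * sd_t + mu_t for x in source_col]
```

A naive Bayes classifier trained on columns aligned this way sees source features whose first two moments match the target project, which is the same goal (distribution-difference reduction) the manifold transformation pursues in a richer space.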


Sensors · 2021 · Vol 21 (15) · pp. 5207
Author(s): Febryan Setiawan, Che-Wei Lin

Conventional approaches to diagnosing Parkinson’s disease (PD) and rating its severity are based on medical specialists’ clinical assessment of symptoms, which is subjective and can be inaccurate. These techniques are not very reliable, particularly in the early stages of the disease. A novel detection and severity classification algorithm using deep learning approaches was developed in this research to classify the PD severity level based on vertical ground reaction force (vGRF) signals. Variations in the force patterns generated by irregularities in vGRF signals due to the gait abnormalities of PD patients can indicate their severity. The main purpose of this research is to aid physicians in detecting early stages of PD, planning efficient treatment, and monitoring disease progression. The detection algorithm comprises preprocessing, feature transformation, and classification processes. In preprocessing, the vGRF signal is divided into successive time windows of 10, 15, and 30 s. In the feature transformation process, the time-domain vGRF signal in each window is converted into a time–frequency spectrogram using a continuous wavelet transform (CWT), and principal component analysis (PCA) is then used for feature enhancement. Finally, different types of convolutional neural networks (CNNs) are employed as deep learning classifiers. The algorithm performance was evaluated using k-fold cross-validation (kfoldCV). The best average accuracy in classifying the PD severity stage was 96.52%, achieved using ResNet-50 with vGRF data from the PhysioNet database. The proposed detection algorithm can effectively differentiate gait patterns based on time–frequency spectrograms of vGRF signals associated with different PD severity levels.
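The preprocessing step above divides the vGRF signal into successive fixed-length windows before the CWT is applied. A minimal sketch of that windowing, assuming a sampling rate in Hz and non-overlapping windows (function name and the drop-last-partial-window policy are assumptions for illustration):

```python
def segment_signal(signal, fs, window_seconds):
    """Split a 1-D signal into successive non-overlapping windows of
    `window_seconds` seconds at sampling rate `fs` Hz, as in the
    preprocessing step described above.  Trailing samples that do not
    fill a whole window are dropped in this sketch."""
    win = int(fs * window_seconds)
    # Step by a full window length so windows do not overlap.
    return [signal[i:i + win] for i in range(0, len(signal) - win + 1, win)]
```

Each returned window would then be transformed into a time–frequency spectrogram (e.g., via a CWT) and fed to the CNN classifier; running the same pipeline with 10, 15, and 30 s windows lets the authors compare how window length affects accuracy.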

