Assisted Feature Engineering and Feature Learning to Build Knowledge-based Agents for Arcade Games

Author(s):  
Bastian Andelefski ◽  
Stefan Schiffer
2018 ◽  
Vol 11 (1) ◽  
pp. 2 ◽  
Author(s):  
Tao Zhang ◽  
Hong Tang

Detailed information about built-up areas is valuable for mapping complex urban environments. Although a large number of classification algorithms for such areas have been developed, they are rarely tested from the perspective of feature engineering and feature learning. Therefore, we launched a unique investigation to provide a full test of Operational Land Imager (OLI) imagery for 15-m resolution built-up area classification in 2015, in Beijing, China. Training a classifier requires many sample points, so we proposed a method based on the European Space Agency’s (ESA) 38-m global built-up area data of 2014, OpenStreetMap, and MOD13Q1-NDVI to achieve the rapid and automatic generation of a large number of sample points. Our aim was to examine the influence of a single pixel and an image patch under traditional feature engineering and modern feature learning strategies. In feature engineering, we consider spectra, shape, and texture as the input features, and support vector machine (SVM), random forest (RF), and AdaBoost as the classification algorithms. In feature learning, the convolutional neural network (CNN) is used as the classification algorithm. In total, 26 built-up land cover maps were produced. The experimental results show the following: (1) The approaches based on feature learning are generally better than those based on feature engineering in terms of classification accuracy, and the performance of ensemble classifiers (e.g., RF) is comparable to that of the CNN. The two-dimensional CNN and the 7-neighborhood RF achieve the highest classification accuracies, at nearly 91%; (2) Overall, the classification effect and accuracy based on image patches are better than those based on single pixels. Features that highlight the information of the target category (e.g., PanTex (texture-derived built-up presence index) and the enhanced morphological building index (EMBI)) can help improve classification accuracy.
The code and experimental results are available at https://github.com/zhangtao151820/CompareMethod.
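As a rough sketch of the 7-neighborhood, patch-based input described above, the following extracts a fixed-size neighborhood around every pixel of a multi-band image; `extract_patches` is a hypothetical helper for illustration, not the authors' released code:

```python
import numpy as np

def extract_patches(image, size=7):
    """Extract a size x size neighborhood around every pixel of an
    (H, W, C) image, one patch per pixel; edge pixels are handled by
    reflect padding."""
    pad = size // 2
    padded = np.pad(image, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    h, w, c = image.shape
    patches = np.empty((h * w, size, size, c), dtype=image.dtype)
    k = 0
    for i in range(h):
        for j in range(w):
            patches[k] = padded[i:i + size, j:j + size, :]
            k += 1
    return patches
```

Each patch can then be fed either to a patch-level classifier (e.g., RF on flattened patches) or to a 2-D CNN.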


2020 ◽  
Vol 15 (1) ◽  
pp. 68-76 ◽  
Author(s):  
Jihong Wang ◽  
Hao Wang ◽  
Xiaodan Wang ◽  
Huiyou Chang

Background: Identifying Drug-Target Interactions (DTIs) is a major challenge for current drug discovery and drug repositioning. Compared to traditional experimental approaches, in silico methods are fast and inexpensive. With the increase in open-access experimental data, numerous computational methods have been applied to predict DTIs. Methods: In this study, we propose an end-to-end learning model combining a Factorization Machine and a Deep Neural Network (FM-DNN), which emphasizes both low-order (first- or second-order) and high-order (higher than second-order) feature interactions without any feature engineering beyond the raw features. This approach combines the power of FM and DNN learning for feature learning in a new neural network architecture. Results: The basic DTI features comprise 609 drug characteristics and 1819 target characteristics, plus the drug ID and target ID, for a total of 2430 features. We compare eight models, including SVM, GBDT, and Wide & Deep; the FM-DNN model obtains the best results, with an AUC of 0.8866 and an AUPR of 0.8281. Conclusion: Feature engineering requires expert knowledge, and it is often difficult and time-consuming to achieve good results. FM-DNN can automatically learn a low-order representation through the FM and a high-order representation through the DNN, and it has outstanding advantages over other commonly used models.
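The low-order term that the FM part contributes has the standard O(kn) reformulation 0.5 · Σ_f [(Σ_i v_{i,f} x_i)² − Σ_i v_{i,f}² x_i²]. A minimal numpy sketch of that term (the function name is illustrative, not from the paper):

```python
import numpy as np

def fm_second_order(x, V):
    """Second-order factorization-machine term for feature vector x
    (n,) and factor matrix V (n, k):
    0.5 * sum_f [ (sum_i v_{i,f} x_i)^2 - sum_i v_{i,f}^2 x_i^2 ],
    equal to sum over pairs i<j of <v_i, v_j> x_i x_j."""
    s1 = (x @ V) ** 2            # square of sum, per latent factor f
    s2 = (x ** 2) @ (V ** 2)     # sum of squares, per latent factor f
    return 0.5 * np.sum(s1 - s2)
```

With all factor vectors equal to (1,), the term reduces to the plain sum of pairwise products x_i x_j, which makes it easy to check by hand.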


Author(s):  
Hadeer Elziaat ◽  
Nashwa El-Bendary ◽  
Ramadan Moawad

Freezing of gait (FoG) is a common symptom of Parkinson's disease (PD) that causes intermittent absence of forward progression of the patient's feet while walking. Accordingly, momentary FoG episodes are always accompanied by falls. This chapter presents a novel multi-feature fusion model for early detection of FoG episodes in patients with PD. Two feature engineering schemes are investigated, namely time-domain hand-crafted feature engineering and convolutional neural network (CNN)-based spectrogram feature learning. Data from tri-axial accelerometer sensors worn by patients with PD are utilized to characterize the performance of the proposed model through several experiments with various machine learning (ML) algorithms. The obtained experimental results showed that the multi-feature fusion approach outperformed typical single feature sets. Conclusively, the significance of this chapter is to highlight the impact of fusing multiple feature sets by investigating the performance of a model for early detection of FoG episodes.
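A sketch of the two ingredients above: hand-crafted time-domain features from a tri-axial accelerometer window, fused by concatenation with a learned feature vector. The particular features (per-axis mean, standard deviation, RMS, zero crossings) are common choices assumed here, not necessarily the chapter's exact set:

```python
import numpy as np

def time_domain_features(window):
    """window: (n_samples, 3) tri-axial accelerometer segment.
    Returns per-axis mean, std, RMS and zero-crossing count (12 values)."""
    mean = window.mean(axis=0)
    std = window.std(axis=0)
    rms = np.sqrt((window ** 2).mean(axis=0))
    zc = (np.diff(np.sign(window), axis=0) != 0).sum(axis=0)
    return np.concatenate([mean, std, rms, zc])

def fuse(handcrafted, learned):
    """Feature-level fusion: simply concatenate the hand-crafted set
    with the CNN-learned (e.g., spectrogram-based) feature vector."""
    return np.concatenate([handcrafted, learned])
```

The fused vector can then be passed to any downstream ML classifier.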


Sensors ◽  
2019 ◽  
Vol 19 (5) ◽  
pp. 1137 ◽  
Author(s):  
Mingliang Mei ◽  
Ji Chang ◽  
Yuling Li ◽  
Zerui Li ◽  
Xiaochuan Li ◽  
...  

Autonomous robots that operate in the field can enhance their safety and efficiency through accurate terrain classification, which can be realized by means of the vibration signals generated by robot-terrain interaction. In this paper, we explore vibration-based terrain classification (VTC), in particular for a wheeled robot with shock absorbers. Because the vibration sensors are usually mounted on the main body of the robot, the vibration signals are dampened significantly, which makes the signals collected on different terrains more difficult to discriminate; hence, existing VTC methods may degrade when applied to a robot with shock absorbers. The contributions are two-fold: (1) several experiments are conducted to exhibit the performance of existing feature-engineering and feature-learning classification methods; and (2) building on the long short-term memory (LSTM) network, we propose a one-dimensional convolutional LSTM (1DCL)-based VTC method to learn both the spatial and temporal characteristics of the dampened vibration signals. The experimental results demonstrate that: (1) the feature-engineering methods, which are efficient for VTC of robots without shock absorbers, are not as accurate in our setting, and the feature-learning methods are better choices; and (2) the 1DCL-based VTC method outperforms the conventional methods with an accuracy of 80.18%, which exceeds the second-best method (LSTM) by 8.23%.
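The 1-D convolutional front end of such an architecture can be sketched as a plain valid cross-correlation over the vibration signal; this toy `conv1d` only illustrates the operation that precedes the temporal (LSTM) stage, and is not the paper's implementation:

```python
import numpy as np

def conv1d(signal, kernel, stride=1):
    """Valid 1-D convolution (implemented as cross-correlation, as in
    deep learning frameworks) over a vibration signal: slides the
    kernel along the signal and takes a dot product at each position."""
    k = len(kernel)
    out_len = (len(signal) - k) // stride + 1
    return np.array([np.dot(signal[i * stride:i * stride + k], kernel)
                     for i in range(out_len)])
```

Stacking such filters (plus a nonlinearity) yields the local "spatial" features whose sequence the LSTM then models over time.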


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4838
Author(s):  
Philip Gouverneur ◽  
Frédéric Li ◽  
Wacław M. Adamczyk ◽  
Tibor M. Szikszay ◽  
Kerstin Luedtke ◽  
...  

While even the most common definition of pain is under debate, pain assessment has remained the same for decades. But the paramount importance of precise pain management for successful healthcare has encouraged initiatives to improve the way pain is assessed. Recent approaches have proposed automatic pain evaluation systems using machine learning models trained with data from behavioural or physiological sensors. Although they yield promising results, machine learning studies for sensor-based pain recognition remain scattered and are not easy to compare with each other. In particular, the important process of extracting features is usually optimised towards specific datasets. In this paper, we therefore present a comparison of feature extraction methods for pain recognition based on physiological sensors. In addition, the PainMonit Database (PMDB), a new dataset including both objective and subjective annotations for heat-induced pain in 52 subjects, is introduced. In total, five different approaches, including techniques based on feature engineering and on feature learning with deep learning, are evaluated on the BioVid and PMDB datasets. Our studies highlight the following insights: (1) simple feature engineering approaches can still compete with deep learning approaches in terms of performance; (2) more complex deep learning architectures do not yield better performance than simpler ones; (3) subjective self-reports by subjects can be used instead of objective temperature-based annotations to build a robust pain recognition system.
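Comparisons across subjects like the one above are usually run with a leave-one-subject-out protocol, so that a model is never tested on a person it was trained on. A minimal sketch of such a splitter (an assumption about the evaluation setup, not code from the paper):

```python
import numpy as np

def leave_one_subject_out(subject_ids):
    """Yield (train_idx, test_idx) index arrays, holding out all
    samples of one subject at a time -- the standard protocol for
    subject-independent evaluation."""
    subject_ids = np.asarray(subject_ids)
    for s in np.unique(subject_ids):
        test = np.where(subject_ids == s)[0]
        train = np.where(subject_ids != s)[0]
        yield train, test
```

Any of the five feature extraction approaches can then be scored as the mean accuracy over all held-out subjects.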


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Wentao Wei ◽  
Xuhui Hu ◽  
Hua Liu ◽  
Ming Zhou ◽  
Yan Song

As a machine-learning-driven decision-making problem, surface electromyography (sEMG)-based hand movement recognition is one of the key issues in the robust control of noninvasive neural interfaces such as myoelectric prostheses and rehabilitation robots. Despite the recent success of end-to-end deep feature learning based on deep learning models, the performance of today's sEMG-based hand movement recognition systems is still limited by the noisy, random, and nonstationary nature of sEMG signals, and researchers have proposed a number of methods that improve sEMG-based hand movement recognition via feature engineering. Aiming to achieve higher sEMG-based hand movement recognition accuracy while enabling a trade-off between performance and computational complexity, this study proposes a progressive fusion network (PFNet) framework, which improves sEMG-based hand movement recognition by integrating domain knowledge-guided feature engineering and deep feature learning. In particular, it learns high-level feature representations from raw sEMG signals and from engineered time-frequency domain features via a feature learning network and a domain knowledge network, respectively, and then employs a 3-stage progressive fusion strategy to progressively fuse the two networks and obtain the final decisions. Extensive experiments were conducted on five sEMG datasets to evaluate the proposed PFNet, and the results showed that PFNet achieves average hand movement recognition accuracies of 87.8%, 85.4%, 68.3%, 71.7%, and 90.3% on the five datasets, respectively, outperforming the state of the art.
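A highly simplified view of the staged fusion idea: two branch feature vectors are merged repeatedly, one stage at a time. The concrete stage operation below (concatenate, project, ReLU) is an assumption for illustration only; the actual PFNet stages are learned network modules described in the paper:

```python
import numpy as np

def fuse_stage(a, b, W):
    """One fusion stage: concatenate two branch feature vectors and
    apply a linear projection followed by ReLU (a stand-in for a
    learned fusion layer)."""
    z = W @ np.concatenate([a, b])
    return np.maximum(z, 0.0)

def progressive_fusion(raw_feat, eng_feat, stage_weights):
    """Fuse the feature-learning branch (raw_feat) with the
    domain-knowledge branch (eng_feat) over successive stages, one
    weight matrix per stage (three stages in a PFNet-like setup)."""
    fused = raw_feat
    for W in stage_weights:
        fused = fuse_stage(fused, eng_feat, W)
    return fused
```

The key design point is that the engineered-feature branch is re-injected at every stage, rather than being concatenated once at the end.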


2021 ◽  
Author(s):  
Yuan Yuan ◽  
Jie Liu ◽  
Jiankun Wang ◽  
Wenzheng Chi ◽  
Guodong Chen ◽  
...  

Author(s):  
Huifeng Guo ◽  
Ruiming TANG ◽  
Yunming Ye ◽  
Zhenguo Li ◽  
Xiuqiang He

Learning sophisticated feature interactions behind user behaviors is critical for maximizing CTR in recommender systems. Despite great progress, existing methods seem to have a strong bias towards low- or high-order interactions, or require expert feature engineering. In this paper, we show that it is possible to derive an end-to-end learning model that emphasizes both low- and high-order feature interactions. The proposed model, DeepFM, combines the power of factorization machines for recommendation and deep learning for feature learning in a new neural network architecture. Compared to the latest Wide & Deep model from Google, DeepFM has a shared input to its "wide" and "deep" parts, with no need for feature engineering besides raw features. Comprehensive experiments demonstrate the effectiveness and efficiency of DeepFM over the existing models for CTR prediction, on both benchmark data and commercial data.
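The shared-input combination can be sketched as a single score: sigmoid of the FM part (linear term plus pairwise interactions) added to the deep part, both reading the same raw feature vector. This is a dense-vector toy of the idea, not the paper's embedding-based implementation; `dnn` stands in for any deep component:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def deepfm_score(x, w, V, dnn):
    """DeepFM-style output: sigmoid(y_FM + y_DNN), where the FM part
    is a linear term w.x plus second-order interactions from factor
    matrix V, and dnn is any callable deep component on the same x."""
    linear = w @ x
    pairwise = 0.5 * np.sum((x @ V) ** 2 - (x ** 2) @ (V ** 2))
    return sigmoid(linear + pairwise + dnn(x))
```

With all weights zero and a zero deep part, the score is exactly 0.5, which gives a quick sanity check.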


2021 ◽  
Author(s):  
Daria Pylypenko ◽  
Kwabena Amponsah-Kaakyire ◽  
Koel Dutta Chowdhury ◽  
Josef van Genabith ◽  
Cristina España-Bonet

2019 ◽  
Vol 9 (1) ◽  
pp. I
Author(s):  
Gopi Aryal

Artificial intelligence (AI) is machine intelligence that mimics human cognitive function; it denotes intelligence exhibited by artificial entities, including computers and robots. In supervised learning, a machine is trained with data that contain pairs of inputs and outputs. In unsupervised learning, machines are given data inputs that are not explicitly programmed.1 Machine learning refines a model that predicts outputs from sample inputs (features) using a feedback loop. It relies heavily on extracting or selecting salient features, a combination of art and science ("feature engineering"). A subset of feature learning is deep learning, which harnesses neural networks modeled after the biological nervous systems of animals. Deep learning discovers the features from the raw data provided during training; hidden layers in the artificial neural network represent increasingly complex features in the data. A convolutional neural network is a type of deep learning model commonly used for image analysis.
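The "hidden layer" idea above can be made concrete with a minimal two-layer forward pass: the hidden layer re-represents the raw input as intermediate features, and the output layer maps those features to a prediction. A toy sketch (weights here are illustrative, not trained):

```python
import numpy as np

def forward(x, W1, W2):
    """Forward pass of a minimal two-layer neural network: a ReLU
    hidden layer learns an internal feature representation of the raw
    input x, and a linear output layer maps it to a prediction."""
    hidden = np.maximum(W1 @ x, 0.0)   # hidden-layer features
    return W2 @ hidden                 # output prediction
```

In deep learning, many such layers are stacked, so later layers combine earlier features into increasingly complex ones.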

