A Prediction Method of Electromagnetic Environment Effects for UAV LiDAR Detection System

Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Min Huang ◽  
Dandan Liu ◽  
Liyun Ma ◽  
Jingyang Wang ◽  
Yuming Wang ◽  
...  

With the rapid development of science and technology, UAVs (Unmanned Aerial Vehicles) have become a new type of weapon on the informatized battlefield, owing to their advantages of low loss and a zero casualty rate. In recent years, crashes caused by electromagnetic decoys and electromagnetic interference with UAV navigation have attracted widespread international attention. The UAV LiDAR detection system is susceptible to electromagnetic interference in a complex electromagnetic environment, which results in inaccurate detection and causes missions to fail. It is therefore necessary to predict electromagnetic environment effects. Traditional prediction methods mostly rely on a single mathematical or machine learning model, which handles nonlinearity poorly and generalizes weakly. This paper therefore proposes a Stacking fusion model based on machine learning to predict electromagnetic environment effects. The method combines the Extreme Gradient Boosting algorithm (XGB), the Gradient Boosting Decision Tree algorithm (GBDT), the K-Nearest Neighbor algorithm (KNN), and the Decision Tree algorithm (DT). Experimental results show that, compared with the other seven machine learning algorithms, the Stacking fusion model has a better classification prediction accuracy of 0.9762, a lower Hamming distance of 0.0336, and a higher Kappa coefficient of 0.955. The fusion model proposed in this paper has a better predictive effect on electromagnetic environment effects and is of great significance for improving the accuracy and safety of UAV LiDAR detection systems in the complex electromagnetic environment of the battlefield.
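
A minimal sketch of such a Stacking fusion, assuming a logistic-regression meta-learner (the abstract does not specify one) and synthetic data in place of the paper's electromagnetic-effect features:

```python
# Stacking XGB, GBDT, KNN, and DT base learners; meta-learner and data are
# assumptions, not the paper's exact configuration.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, GradientBoostingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, hamming_loss, cohen_kappa_score
from xgboost import XGBClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("xgb", XGBClassifier(n_estimators=200, eval_metric="logloss")),
        ("gbdt", GradientBoostingClassifier(n_estimators=200)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        ("dt", DecisionTreeClassifier(max_depth=6)),
    ],
    final_estimator=LogisticRegression(),
    cv=5,  # out-of-fold base predictions feed the meta-learner
)
stack.fit(X_tr, y_tr)
pred = stack.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred),
      "hamming:", hamming_loss(y_te, pred),
      "kappa:", cohen_kappa_score(y_te, pred))
```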

2021 ◽  
Vol 13 (11) ◽  
pp. 2096
Author(s):  
Zhongqi Yu ◽  
Yuanhao Qu ◽  
Yunxin Wang ◽  
Jinghui Ma ◽  
Yu Cao

A visibility forecast model called the boosting-based fusion model (BFM) was established in this study. The model is a fusion machine learning model built on multisource data, including air pollutants, meteorological observations, moderate resolution imaging spectroradiometer (MODIS) aerosol optical depth (AOD) data, and outputs of an operational regional atmospheric environmental modeling system for eastern China (RAEMS). Extreme gradient boosting (XGBoost), a light gradient boosting machine (LightGBM), and a numerical prediction method, i.e., RAEMS, were fused to establish this prediction model. Three sets of prediction models, that is, BFM, LightGBM based on multisource data (LGBM), and RAEMS, were used to conduct visibility prediction tasks. The training set covered 1 January 2015 to 31 December 2018 and used several data pre-processing methods, including synthetic minority over-sampling technique (SMOTE) data resampling, a loss function adjustment, and 10-fold cross-validation. Moreover, apart from the basic features (variables), additional spatial and temporal gradient features were considered. The testing set covered 1 January to 31 December 2019 and was adopted to validate the feasibility of the BFM, LGBM, and RAEMS. Statistical indicators confirmed that the machine learning methods improved the RAEMS forecast significantly and consistently. The root mean square error and correlation coefficient of BFM for the next 24/48 h were 5.01/5.47 km and 0.80/0.77, respectively, much better than those of RAEMS. The statistics and binary score analysis for different areas in Shanghai also proved the reliability and accuracy of BFM, particularly in low-visibility forecasting. Overall, BFM is a suitable tool for visibility prediction: it provides a more accurate visibility forecast for the next 24 and 48 h in Shanghai than LGBM and RAEMS. The results of this study provide support for real-time operational visibility forecasts.
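
A minimal sketch of a boosting-based fusion along these lines: XGBoost and LightGBM are trained on multisource features (including the numerical-model output as a predictor), then blended. The equal-weight blend and the synthetic data are assumptions, not the paper's exact scheme:

```python
import numpy as np
from lightgbm import LGBMRegressor
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 12))                  # pollutants, meteorology, AOD, gradients...
raems = X[:, 0] * 3 + rng.normal(size=5000)      # stand-in for the RAEMS forecast
X = np.column_stack([X, raems])                  # model output used as a feature
y = raems + X[:, 1] * 2 + rng.normal(size=5000)  # stand-in visibility target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
xgb = XGBRegressor(n_estimators=300).fit(X_tr, y_tr)
lgb = LGBMRegressor(n_estimators=300).fit(X_tr, y_tr)
blend = 0.5 * xgb.predict(X_te) + 0.5 * lgb.predict(X_te)  # simple two-model fusion
print("RMSE:", mean_squared_error(y_te, blend) ** 0.5)
```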


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 46
Author(s):  
Gangqiang Zhang ◽  
Wei Zheng ◽  
Wenjie Yin ◽  
Weiwei Lei

The launch of the GRACE satellites has provided a new avenue for studying terrestrial water storage anomalies (TWSA) with unprecedented accuracy. However, the coarse spatial resolution greatly limits their application in hydrological research at local scales. To overcome this limitation, this study develops a machine learning-based fusion model to obtain high-resolution (0.25°) groundwater level anomalies (GWLA) by integrating GRACE observations in the North China Plain. Specifically, the fusion model consists of three modules: the downscaling module, the data fusion module, and the prediction module. In the downscaling module, the GRACE-Noah model outperforms traditional data-driven models (multiple linear regression and gradient boosting decision tree (GBDT)), improving correlation coefficient (CC) values from 0.24 to 0.78. In the data fusion module, the groundwater level from 12 monitoring wells is combined with climate variables (precipitation, runoff, and evapotranspiration) using the GBDT algorithm, achieving satisfactory performance (mean values: CC: 0.97, RMSE: 1.10 m, and MAE: 0.87 m). By merging the downscaled TWSA and the fused groundwater level with the GBDT algorithm, the prediction module can predict the water level in specified pixels. The predicted groundwater level is validated against 6 in-situ groundwater level data sets in the study area. Compared to the downscaling module, there is a significant improvement in the CC metric, on average from 0.43 to 0.71. This study provides a feasible and accurate fusion model for downscaling GRACE observations and predicting groundwater levels with improved accuracy.
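
A minimal sketch of the data-fusion module described above, assuming a GBDT regressor that maps climate variables (precipitation, runoff, evapotranspiration) to groundwater level; the synthetic arrays stand in for the monitoring-well records:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
precip, runoff, et = rng.gamma(2.0, 1.0, (3, 1200))  # stand-in climate series
X = np.column_stack([precip, runoff, et])
# Stand-in groundwater level: wetter conditions raise it, drier lower it
y = 0.8 * precip - 0.3 * runoff - 0.5 * et + rng.normal(scale=0.2, size=1200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
gbdt = GradientBoostingRegressor(n_estimators=300, max_depth=3).fit(X_tr, y_tr)
print("MAE (m):", mean_absolute_error(y_te, gbdt.predict(X_te)))
```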


2020 ◽  
Author(s):  
Jialin Wang ◽  
Jing Yang ◽  
Hongli Ren ◽  
Jinxiao Li ◽  
Qing Bao ◽  
...  

The seasonal prediction of summer rainfall is crucial for regional disaster reduction but currently has low prediction skill. This study developed a machine learning (ML)-based dynamical (MLD) seasonal prediction method for summer rainfall in China, based on suitable circulation fields from the operational dynamical prediction model CAS FGOALS-f2. By choosing optimum hyperparameters for three ML methods to achieve the best fit with the least overfitting, gradient boosting regression trees eventually exhibit the highest prediction skill, obtaining averaged values of 0.33 in the reference training period (1981-2010) and 0.19 in eight individual years (2011-2018) of independent prediction, which significantly improves the previous dynamical prediction skill by more than 300%. Further study suggests that both reducing overfitting and using the best dynamical prediction are imperative for MLD application prospects, which warrants further investigation.
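
A minimal sketch of the hyperparameter-selection step: cross-validated search over gradient boosting regression trees to balance fit and overfitting. The predictors here are synthetic stand-ins for the CAS FGOALS-f2 circulation fields, not the study's data:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 8))  # e.g. 30 years of circulation predictors (stand-ins)
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=30)  # stand-in rainfall index

search = GridSearchCV(
    GradientBoostingRegressor(random_state=0),
    param_grid={"n_estimators": [50, 100, 200],
                "max_depth": [1, 2, 3],
                "learning_rate": [0.01, 0.05, 0.1]},
    cv=5,            # cross-validation guards against overfitting small samples
    scoring="r2",
).fit(X, y)
print(search.best_params_, search.best_score_)
```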


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Qiang Zhao

Over the last two decades, the identification of ancient artifacts has been regarded as one of the most challenging tasks for archaeologists. Chinese people consider these artifacts symbols of their cultural heritage. The development of technology has greatly helped the identification of ancient artifacts. This study applied machine-learning algorithms to identify ancient artifacts found throughout China. Major Chinese cities were selected for the study and classified based on features such as temple, modern city, harbour, battle, and South China. The study used a decision tree algorithm for recognition and gradient boosting for the perception aspects. According to the findings, the algorithms achieved 98% accuracy in detecting and predicting ancient artifacts in China. The proposed models provide a good indicator for detecting archaeological site locations.
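
A minimal sketch of the decision-tree recognition step, assuming the site features (temple, modern city, harbour, battle, South China) are encoded as binary indicators; the labels and data are synthetic stand-ins, not the study's dataset:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(400, 5))  # binary site features per city
y = (X[:, 0] | X[:, 2]).astype(int)    # stand-in "artifact present" label

clf = DecisionTreeClassifier(max_depth=4, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```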


Author(s):  
Thambirajah Ravichandran ◽  
Keyhan Gavahi ◽  
Kumaraswamy Ponnambalam ◽  
Valentin Burtea ◽  
S. Jamshid Mousavi

This paper presents an acoustic leak detection system for water distribution mains using machine learning methods. The problem is formulated as a binary classification that identifies leak and no-leak cases from acoustic signals. A supervised learning methodology has been employed using several detection features extracted from acoustic signals, such as power spectral density and time-series data. The training and validation data sets have been collected over several months from multiple cities across North America. The proposed solution includes multi-strategy ensemble learning (MEL) using a gradient boosting tree (GBT) classification model, which performed better at maximizing the detection rate and minimizing false positives than other classification models such as KNN, ANN, and rule-based techniques. Further improvements have been achieved by combining a multitude of GBT classifiers in a parallel ensemble method known as the bagging algorithm. The proposed MEL approach demonstrates a significant improvement in performance, reducing false-positive reports by an order of magnitude.
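
A minimal sketch of the parallel ensemble described above: several GBT classifiers combined by bagging. The features are synthetic stand-ins for the acoustic descriptors (e.g. power-spectral-density bins), and the class imbalance is an assumption:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score, precision_score

X, y = make_classification(n_samples=3000, n_features=30, weights=[0.9, 0.1],
                           random_state=0)  # leaks as the rare class
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0,
                                          stratify=y)
mel = BaggingClassifier(
    estimator=GradientBoostingClassifier(n_estimators=100),
    n_estimators=10,  # ten GBT models, each fit on a bootstrap sample
    random_state=0,
).fit(X_tr, y_tr)
pred = mel.predict(X_te)
print("detection rate:", recall_score(y_te, pred),
      "precision:", precision_score(y_te, pred))
```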


Author(s):  
Kartik Madkaikar ◽  
Manthan Nagvekar ◽  
Preity Parab ◽  
Riya Raika ◽  
...  

Credit card fraud is a serious criminal offense that costs individuals and financial institutions billions of dollars annually. According to reports of the Federal Trade Commission (FTC), a consumer protection agency, the number of theft reports doubled in the last two years. This makes the detection and prevention of fraudulent activities critically important to financial institutions. Machine learning algorithms provide a proactive mechanism to prevent credit card fraud with acceptable accuracy. In this paper, machine learning algorithms such as Logistic Regression, Naïve Bayes, Random Forest, K-Nearest Neighbor, Gradient Boosting, Support Vector Machine, and Neural Network are implemented for the detection of fraudulent transactions. A comparative analysis of these algorithms is performed to identify an optimal solution.
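
A minimal sketch of such a comparative analysis, looping the named classifiers over one dataset. The synthetic, imbalanced data stands in for real card transactions, and the hyperparameters are illustrative defaults:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

X, y = make_classification(n_samples=5000, n_features=20, weights=[0.98, 0.02],
                           random_state=0)  # fraud as the rare class
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                          random_state=0)
models = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "NaiveBayes": GaussianNB(),
    "RandomForest": RandomForestClassifier(n_estimators=200),
    "KNN": KNeighborsClassifier(),
    "GradientBoosting": GradientBoostingClassifier(),
    "SVM": SVC(),
    "NeuralNetwork": MLPClassifier(max_iter=500),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    # F1 is more informative than accuracy on imbalanced fraud data
    print(name, "F1:", round(f1_score(y_te, model.predict(X_te)), 3))
```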


2021 ◽  
Vol 9 (4) ◽  
pp. 376 ◽  
Author(s):  
Yunfei Yang ◽  
Haiwen Tu ◽  
Lei Song ◽  
Lin Chen ◽  
De Xie ◽  
...  

Resistance is one of the important performance indicators of ships. In this paper, a prediction method based on the Radial Basis Function neural network (RBFNN) is proposed to predict the resistance of a 13,500 twenty-foot equivalent unit (13500TEU) container ship at different drafts. Prediction at a draft within the known range is called interpolation prediction; otherwise, it is extrapolation prediction. First, ship features are extracted to predict the resistance Rt. The resistance prediction results show that the performance of the RBFNN is significantly better than that of the other four machine learning models: the backpropagation neural network (BPNN), support vector machine (SVM), random forest (RF), and extreme gradient boosting (XGBoost). Then, the ship data are made dimensionless, and the models mentioned above are used to predict the total resistance coefficient Ct of the container ship. The prediction results show that the RBFNN prediction model still performs well. Good results can be obtained by the RBFNN in interpolation prediction, even when using only part of the dimensionless features. Finally, the accuracy of the prediction method based on the RBFNN is greatly improved compared with the modified admiralty coefficient method.
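
A minimal RBF-network sketch, assuming Gaussian basis functions with k-means centres and a linear output layer; the paper's exact architecture, training scheme, and the synthetic speed/draft features are all assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

def rbf_features(X, centers, gamma):
    # Pairwise squared distances -> Gaussian activations
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))  # e.g. speed, draft, trim (stand-ins)
y = 0.5 * X[:, 0] ** 2 + X[:, 1] + rng.normal(scale=0.01, size=500)  # stand-in Rt

# Hidden layer: 20 Gaussian units placed by k-means clustering of the inputs
centers = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X).cluster_centers_
Phi = rbf_features(X, centers, gamma=10.0)
model = Ridge(alpha=1e-3).fit(Phi, y)  # linear output layer
print("train R^2:", model.score(Phi, y))
```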


Improving the performance of link prediction plays a significant role in the evaluation of social networks. Link prediction is one of the primary tasks in recommender systems, bioinformatics, and the web. Most machine learning methods that depend on SNA model metrics use supervised learning to develop link prediction models. Supervised learning requires a huge amount of data to train the link prediction model to an optimal level of performance. In recent years, Deep Reinforcement Learning (DRL) has achieved excellent success in various domains, including SNA. In this paper, we present the use of DRL to improve the performance and accuracy of the link prediction model on the applied datasets. The experiments show that the dataset created by the DRL model through self-play or auto-simulation can be utilized to improve the link prediction model. We used three different datasets: JUNANES, MAMBO, and JAKE. Experimental results show that the proposed DRL method provides an accuracy of 85% for JUNANES, 87% for MAMBO, and 78% for JAKE, outperforming the next-highest GBM accuracies of 75% for JUNANES, 79% for MAMBO, and 71% for JAKE, each trained with 2500 iterations, and also performs better in terms of AUC measures. The DRL model shows better efficiency than traditional machine learning strategies such as Random Forest and the gradient boosting machine (GBM).
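
A minimal sketch of the GBM baseline the study compares against (not the DRL model itself): classic topological scores for node pairs feed a gradient boosting classifier that predicts whether a link exists. The graph is synthetic, and for brevity the features are computed on the full graph, which slightly leaks test edges:

```python
import numpy as np
import networkx as nx
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

G = nx.erdos_renyi_graph(200, 0.05, seed=0)  # stand-in social network
pos = list(G.edges())
non_edges = list(nx.non_edges(G))
rng = np.random.default_rng(0)
neg = [non_edges[i] for i in rng.choice(len(non_edges), len(pos), replace=False)]

def pair_features(g, pairs):
    # Common neighbours, Jaccard, and Adamic-Adar scores per candidate pair
    cn = [len(list(nx.common_neighbors(g, u, v))) for u, v in pairs]
    jc = [s for _, _, s in nx.jaccard_coefficient(g, pairs)]
    aa = [s for _, _, s in nx.adamic_adar_index(g, pairs)]
    return np.column_stack([cn, jc, aa])

X = pair_features(G, pos + neg)
y = np.array([1] * len(pos) + [0] * len(neg))
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
gbm = GradientBoostingClassifier().fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, gbm.predict_proba(X_te)[:, 1]))
```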


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 938
Author(s):  
Nicolas Zurbuchen ◽  
Adriana Wilde ◽  
Pascal Bruegger

Falls are dangerous for the elderly, often causing serious injuries, especially when the fallen person stays on the ground for a long time without assistance. This paper extends our previous work on the development of a Fall Detection System (FDS) using an inertial measurement unit worn at the waist. Data come from SisFall, a publicly available dataset containing records of Activities of Daily Living and falls. We first applied a preprocessing and a feature extraction stage before using five machine learning algorithms, allowing us to compare them. Ensemble learning algorithms such as Random Forest and Gradient Boosting have the best performance, with a Sensitivity and Specificity both close to 99%. Our contribution is a multi-class classification approach for fall detection combined with a study of the effect of the sensors' sampling rate on the performance of the FDS. Our multi-class classification approach splits the fall into three phases: pre-fall, impact, and post-fall. The extension to a multi-class problem is not trivial, and we present a well-performing solution. We experimented with sampling rates between 1 and 200 Hz. The results show that, while high sampling rates tend to improve performance, a sampling rate of 50 Hz is generally sufficient for accurate detection.
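
A minimal sketch of such a pipeline: sliding-window statistics from a waist accelerometer feed a Random Forest over the three fall phases plus an ADL class. The signal here is synthetic, and the windowing and feature choices are assumptions rather than the SisFall preprocessing actually used:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs = 50  # Hz, the sampling rate found to be generally sufficient

def window_features(acc):
    # Magnitude-based statistics over one 1-second window
    mag = np.linalg.norm(acc, axis=1)
    return [mag.mean(), mag.std(), mag.max(), mag.min()]

# Synthetic windows for 4 classes: ADL, pre-fall, impact, post-fall
X, y = [], []
for label, scale in enumerate([0.1, 0.3, 2.0, 0.05]):
    for _ in range(200):
        acc = rng.normal(scale=scale, size=(fs, 3)) + [0, 0, 1]  # gravity on z
        X.append(window_features(acc))
        y.append(label)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, np.array(X), np.array(y), cv=5).mean())
```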

