Experimental Evaluation of Computer Vision and Machine Learning-Based UAV Detection and Ranging

Drones ◽  
2021 ◽  
Vol 5 (2) ◽  
pp. 37
Author(s):  
Bingsheng Wei ◽  
Martin Barczyk

We consider the problem of vision-based detection and ranging of a target UAV using the video feed from a monocular camera onboard a pursuer UAV. Our previously published work in this area employed a cascade classifier algorithm to locate the target UAV, which was found to perform poorly in complex background scenes. We thus study the replacement of the cascade classifier algorithm with newer machine learning-based object detection algorithms. Five candidate algorithms are implemented and quantitatively tested in terms of their efficiency (measured as frames-per-second processing rate), accuracy (measured as the root mean squared error between ground truth and detected location), and consistency (measured as mean average precision) in a variety of flight patterns, backgrounds, and test conditions. Assigning relative weights of 20%, 40%, and 40% to these three criteria, we find that when flying over a white background, the top three performers are YOLO v2 (76.73 out of 100), Faster RCNN v2 (63.65 out of 100), and Tiny YOLO (59.50 out of 100), while over a realistic background, the top three performers are Faster RCNN v2 (54.35 out of 100), SSD MobileNet v1 (51.68 out of 100), and SSD Inception v2 (50.72 out of 100), leading us to recommend Faster RCNN v2 as the overall solution. We then provide a roadmap for further work on integrating the object detector into our vision-based UAV tracking system.
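As a rough illustration of the 20%/40%/40% weighted scoring described above, the following Python sketch combines per-criterion scores into an overall score; the per-criterion values shown are placeholders, not the normalized scores used in the paper.

```python
# Hypothetical illustration of the 20%/40%/40% weighted scoring; the
# per-criterion scores below are placeholders, not the paper's values.
WEIGHTS = {"efficiency": 0.20, "accuracy": 0.40, "consistency": 0.40}

def overall_score(criterion_scores):
    """Combine normalized per-criterion scores (0-100) into one overall score."""
    return sum(WEIGHTS[name] * score for name, score in criterion_scores.items())

detectors = {
    "YOLO v2":        {"efficiency": 90.0, "accuracy": 70.0, "consistency": 75.0},
    "Faster RCNN v2": {"efficiency": 40.0, "accuracy": 75.0, "consistency": 75.0},
}

for name, scores in detectors.items():
    print(f"{name}: {overall_score(scores):.2f} / 100")
```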

2021 ◽  
pp. 1-16
Author(s):  
Kevin Kloos

The use of machine learning algorithms at national statistical institutes has increased significantly over the past few years. Applications range from new imputation schemes to new statistical output based entirely on machine learning. The results are promising, but recent studies have shown that the use of machine learning in official statistics always introduces a bias, known as misclassification bias. Misclassification bias does not occur in traditional applications of machine learning and has therefore received little attention in the academic literature. In earlier work, we collected existing methods that are able to correct misclassification bias and compared their statistical properties, including bias, variance, and mean squared error. In this paper, we present a new generic method to correct misclassification bias for time series and derive its statistical properties. Moreover, we show numerically that it has a lower mean squared error than the existing alternatives in a wide variety of settings. We believe that our new method may improve machine learning applications in official statistics, and we hope that our work will stimulate further methodological research in this area.
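For readers unfamiliar with misclassification bias, the sketch below illustrates the issue in the simplest binary setting: a naive classify-and-count estimate of a class proportion is biased, and a standard confusion-matrix-based correction removes the bias. This is only the textbook correction, not the time-series method introduced in the paper.

```python
import numpy as np

def classify_and_count(pred):
    """Naive estimate of the class-1 proportion: simply count predictions."""
    return np.mean(pred)

def corrected_proportion(pred, tpr, fpr):
    """Standard binary misclassification-bias correction:
    E[pred] = alpha * TPR + (1 - alpha) * FPR  =>  alpha = (p_hat - FPR) / (TPR - FPR).
    In practice TPR and FPR would be estimated from a labelled test set."""
    p_hat = np.mean(pred)
    return (p_hat - fpr) / (tpr - fpr)

# Toy example: true proportion 0.30, classifier with TPR = 0.9 and FPR = 0.2.
rng = np.random.default_rng(0)
truth = rng.random(100_000) < 0.30
pred = np.where(truth, rng.random(truth.size) < 0.9, rng.random(truth.size) < 0.2)

print(classify_and_count(pred))              # biased upward, about 0.41
print(corrected_proportion(pred, 0.9, 0.2))  # close to the true 0.30
```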


Author(s):  
Kalva Sindhu Priya

Abstract: In the present scenario, nearly every field is moving toward machine-based automation, from fundamental tasks to master-level systems. Among these technologies, Machine Learning (ML) is an important tool closely related to Artificial Intelligence (AI): it uses known data or past experience to improve automatically or to estimate the behavior or status of given data through various algorithms. Modeling a system or dataset through Machine Learning is valuable because it supports the development of later and newer versions. Today, most information technology giants, such as Facebook, Uber, and Google Maps, have made machine learning a critical part of their ongoing operations to improve the user experience. In this paper, the various available ML algorithms are reviewed briefly, and of these, the Linear Regression algorithm is used to predict a new set of values by taking older data as a reference. A detailed predictive model is then discussed by building code with Machine Learning and Deep Learning tools in MATLAB/Simulink. Keywords: Machine Learning (ML), Linear Regression algorithm, Curve fitting, Root Mean Squared Error
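A minimal Python analogue of the workflow described above (the paper itself builds the model in MATLAB/Simulink); the synthetic data, coefficients, and noise level are assumptions used only to show the fit, predict, and RMSE steps.

```python
# Minimal Python analogue of the described workflow; the paper builds its model
# in MATLAB/Simulink, and the data, coefficients, and noise here are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)
x = rng.uniform(0, 10, size=(200, 1))            # "older data" used as reference
y = 3.0 * x.ravel() + 5.0 + rng.normal(0, 1.0, size=200)

model = LinearRegression().fit(x, y)             # curve-fitting step
x_new = np.linspace(0, 10, 20).reshape(-1, 1)    # new set of values to predict for
y_new = model.predict(x_new)
print(y_new[:3])

rmse = np.sqrt(mean_squared_error(y, model.predict(x)))
print(f"slope={model.coef_[0]:.2f}, intercept={model.intercept_:.2f}, RMSE={rmse:.2f}")
```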


2021 ◽  
Author(s):  
Hangsik Shin

BACKGROUND Arterial stiffness due to vascular aging is a major indicator for evaluating cardiovascular risk. OBJECTIVE In this study, we propose a method of estimating age by applying machine learning to the photoplethysmogram for non-invasive vascular age assessment. METHODS The machine learning-based age estimation model, which consists of three convolutional layers and two fully connected layers, was developed using photoplethysmogram segments split by pulse from a total of 752 adults aged 19–87 years. The performance of the developed model was quantitatively evaluated using the mean absolute error, root mean squared error, Pearson’s correlation coefficient, and coefficient of determination. Grad-CAM was used to explain the contribution of photoplethysmogram waveform characteristics to vascular age estimation. RESULTS A mean absolute error of 8.03, a root mean squared error of 9.96, a correlation coefficient of 0.62, and a coefficient of determination of 0.38 were obtained through 10-fold cross-validation. Grad-CAM, used to determine the weight that the input signal contributes to the result, confirmed that the contribution of the photoplethysmogram segment to age estimation was highest around the systolic peak. CONCLUSIONS The machine learning-based vascular aging analysis method using the PPG waveform showed comparable or superior performance to previous studies in evaluating vascular aging, without requiring complex feature detection. CLINICALTRIAL 2015-0104
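A hypothetical Keras sketch of the stated architecture (three convolutional layers followed by two fully connected layers); the filter counts, kernel sizes, and per-pulse segment length are assumptions, not values reported in the paper.

```python
# Hypothetical sketch of the described architecture; filter counts, kernel
# sizes, and the per-pulse segment length are assumptions.
import tensorflow as tf

SEGMENT_LEN = 100  # samples per PPG pulse segment (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEGMENT_LEN, 1)),
    tf.keras.layers.Conv1D(16, kernel_size=5, activation="relu"),
    tf.keras.layers.Conv1D(32, kernel_size=5, activation="relu"),
    tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),                # regression output: estimated age
])
model.compile(optimizer="adam", loss="mae", metrics=["mae"])
model.summary()
```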


2010 ◽  
Vol 1 (4) ◽  
pp. 17-45
Author(s):  
Antons Rebguns ◽  
Diana F. Spears ◽  
Richard Anderson-Sprecher ◽  
Aleksey Kletsov

This paper presents a novel theoretical framework for swarms of agents. Before deploying a swarm for a task, it is advantageous to predict whether a desired percentage of the swarm will succeed. The authors present a framework that uses a small group of expendable “scout” agents to predict the success probability of the entire swarm, thereby preventing many agent losses. The scouts apply one of two formulas to predict – the standard Bernoulli trials formula or the new Bayesian formula. For experimental evaluation, the framework is applied to simulated agents navigating around obstacles to reach a goal location. Extensive experimental results compare the mean-squared error of the predictions of both formulas with ground truth, under varying circumstances. Results indicate the accuracy and robustness of the Bayesian approach. The framework also yields an intriguing result, namely, that both formulas usually predict better in the presence of (Lennard-Jones) inter-agent forces than when their independence assumptions hold.
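As a sketch of the Bernoulli-trials style of prediction referenced above (the paper's exact formulas, and its Bayesian alternative, are not reproduced here), one can estimate the per-agent success probability from the scouts and then compute the chance that at least the desired fraction of the swarm succeeds.

```python
# Hedged sketch of a Bernoulli-trials style swarm-success prediction; this is
# an illustration of the general idea, not the paper's exact formulas.
from math import comb

def swarm_success_probability(scout_successes, num_scouts, swarm_size, desired_fraction):
    p = scout_successes / num_scouts                  # point estimate from the scouts
    m = int(desired_fraction * swarm_size)            # minimum number of successes required
    return sum(comb(swarm_size, k) * p**k * (1 - p)**(swarm_size - k)
               for k in range(m, swarm_size + 1))

# e.g. 4 of 5 scouts reached the goal; will at least 70% of a 50-agent swarm succeed?
print(swarm_success_probability(4, 5, 50, 0.70))
```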


2020 ◽  
Vol 2020 ◽  
pp. 1-12 ◽  
Author(s):  
Hye-Jin Kim ◽  
Sung Min Park ◽  
Byung Jin Choi ◽  
Seung-Hyun Moon ◽  
Yong-Hyuk Kim

We propose three quality control (QC) techniques using machine learning that depend on the type of input data used for training. These include QC based on time series of a single weather element, QC based on time series in conjunction with other weather elements, and QC using spatiotemporal characteristics. We performed machine learning-based QC on each weather element of atmospheric data, such as temperature, acquired from seven types of IoT sensors and applied machine learning algorithms, such as support vector regression, on data with errors to make meaningful estimates from them. By using the root mean squared error (RMSE), we evaluated the performance of the proposed techniques. As a result, the QC done in conjunction with other weather elements had 0.14% lower RMSE on average than QC conducted with only a single weather element. In the case of QC with spatiotemporal characteristic considerations, the QC done via training with AWS data showed performance with 17% lower RMSE than QC done with only raw data.
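A minimal sketch of a support-vector-regression estimate of the kind used for single-element QC; the synthetic temperature series, window length, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of an SVR-based estimate for a single weather element;
# the synthetic data, lag window, and hyperparameters are assumptions.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
t = np.arange(500)
temperature = 15 + 5 * np.sin(2 * np.pi * t / 144) + rng.normal(0, 0.3, t.size)

# Predict each value from the previous three observations of the same element.
X = np.column_stack([temperature[0:-3], temperature[1:-2], temperature[2:-1]])
y = temperature[3:]

model = SVR(kernel="rbf", C=10.0).fit(X[:400], y[:400])
pred = model.predict(X[400:])
rmse = np.sqrt(mean_squared_error(y[400:], pred))
print(f"RMSE on held-out data: {rmse:.3f}")
```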


Proceedings ◽  
2020 ◽  
Vol 59 (1) ◽  
pp. 2
Author(s):  
Benoit Figuet ◽  
Raphael Monstein ◽  
Michael Felux

In this paper, we present an aircraft localization solution developed in the context of the Aircraft Localization Competition and applied to the OpenSky Network real-world ADS-B data. The developed solution is based on a combination of machine learning and multilateration using data provided by time synchronized ground receivers. A gradient boosting regression technique is used to obtain an estimate of the geometric altitude of the aircraft, as well as a first guess of the 2D aircraft position. Then, a triplet-wise and an all-in-view multilateration technique are implemented to obtain an accurate estimate of the aircraft latitude and longitude. A sensitivity analysis of the accuracy as a function of the number of receivers is conducted and used to optimize the proposed solution. The obtained predictions have an accuracy below 25 m for the 2D root mean squared error and below 35 m for the geometric altitude.
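The sketch below illustrates only the gradient boosting regression step for geometric altitude on synthetic data; the actual competition features and the triplet-wise and all-in-view multilateration stages are not shown, and the feature names are assumptions.

```python
# Illustration of the gradient-boosting step for geometric altitude only;
# features, hyperparameters, and data are assumptions, not the competition setup.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([
    rng.uniform(0, 300, n),      # e.g. reported flight level (assumed feature)
    rng.uniform(-90, 90, n),     # latitude of the first-guess position
    rng.uniform(-180, 180, n),   # longitude of the first-guess position
])
y = X[:, 0] * 100 * 0.3048 + rng.normal(0, 30, n)   # synthetic geometric altitude [m]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
gbr = GradientBoostingRegressor(n_estimators=200, max_depth=3).fit(X_tr, y_tr)
rmse = np.sqrt(np.mean((gbr.predict(X_te) - y_te) ** 2))
print(f"altitude RMSE: {rmse:.1f} m")
```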


2021 ◽  
pp. 202-208
Author(s):  
Daniel Theodorus ◽  
Sarjon Defit ◽  
Gunadi Widi Nurcahyo

Industry 4.0 is pushing many companies to transform to digital systems. Machine Learning is one solution for data analysis. Data analysis is a key part of providing the best service (user experience) to customers. The site studied in this research is PT. Sentral Tukang Indonesia, which sells building materials and carpentry tools such as paint, plywood, aluminium, ceramics, and HPL. With the large amount of data available, the company has difficulty providing product recommendations to customers. A recommender system emerges as a solution for giving product recommendations based on the interactions between customers recorded in the sales history data. The aims of this research are to help the company provide product recommendations so as to increase sales, to make it easier for customers to find the products they need, and to improve the service given to customers. The data used are the sales history data for one period (Q1 2021), customer data, and product data from PT. Sentral Tukang Indonesia. The sales history data are split into 80% for the training dataset and 20% for the testing dataset. The item-based collaborative filtering method in this research uses the Cosine Similarity algorithm to compute the degree of similarity between products. Score prediction uses the weighted sum formula, and the error rate is computed with the root mean squared error formula. The results of this research show the top 10 recommended products per customer. The products shown are those with the highest scores for that customer. This research can serve as a reference for the company in recommending the products that customers need.
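A minimal sketch of item-based collaborative filtering with cosine similarity and a weighted-sum prediction, as described above; the toy purchase-count matrix stands in for the actual sales history data.

```python
# Minimal item-based collaborative filtering sketch with cosine similarity and
# a weighted-sum prediction; the matrix below is a toy example, not real data.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# rows = customers, columns = products (e.g. purchase counts)
R = np.array([
    [3, 0, 1, 2],
    [0, 2, 0, 1],
    [1, 1, 0, 0],
    [2, 0, 2, 3],
], dtype=float)

item_sim = cosine_similarity(R.T)            # product-to-product similarity matrix

def predict(user, item):
    """Weighted sum of the user's scores on other items, weighted by similarity."""
    sims = item_sim[item]
    mask = R[user] > 0                       # only items the customer interacted with
    denom = np.abs(sims[mask]).sum()
    return (sims[mask] @ R[user, mask]) / denom if denom else 0.0

scores = [predict(0, j) for j in range(R.shape[1])]
top = np.argsort(scores)[::-1]               # top-N ranking for customer 0
# In practice, products the customer already bought would be filtered out.
print(top, np.round(scores, 2))
```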


Author(s):  
Ahmed Hassan Mohammed Hassan ◽  
◽  
Arfan Ali Mohammed Qasem ◽  
Walaa Faisal Mohammed Abdalla ◽  
Omer H. Elhassan

Day by day, the cumulative incidence of COVID-19 is increasing rapidly. After the spread of the coronavirus epidemic and the deaths of more than a million people around the world, scientists and researchers have turned to modern technologies such as machine learning to help the world overcome the coronavirus (COVID-19) epidemic. Machine Learning (ML) can be deployed very effectively to track and predict the disease. ML techniques have been applied in areas that need to identify dangerous negative factors and define their priorities. The purpose of the proposed system is to predict the number of people infected with COVID-19 using ML. Four standard models are used for COVID-19 prediction: Neural Network (NN), Support Vector Machines (SVM), Bayesian Network (BN), and Polynomial Regression (PR). The data used to test these models consist of the numbers of deaths, newly infected cases, and recoveries over the next 20 days. Five parameters were used to evaluate the performance of each model, namely root mean squared error (RMSE), mean absolute error (MAE), mean squared error (MSE), explained variance score, and R2 score (R2). The value of the proposed system lies in providing a promising mechanism for applying these models to the current scenario of the COVID-19 epidemic. The results showed that NN outperformed the other models, while SVM performed poorly on all predictions with the available dataset. Our results indicate that infections will increase slightly in the coming days. We also find that the results give rise to hope due to the low death rate. For future work, case explanation and data amalgamation must be kept up persistently.
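The five evaluation metrics named above can be computed with scikit-learn as in the following sketch; the true and predicted values are placeholders, not the paper's COVID-19 data.

```python
# Sketch of the five evaluation metrics named above, computed with scikit-learn
# on placeholder values (not the paper's COVID-19 data).
import numpy as np
from sklearn.metrics import (mean_squared_error, mean_absolute_error,
                             explained_variance_score, r2_score)

y_true = np.array([100, 120, 150, 180, 220], dtype=float)   # e.g. daily new cases
y_pred = np.array([110, 118, 160, 170, 230], dtype=float)

mse = mean_squared_error(y_true, y_pred)
print("RMSE:", np.sqrt(mse))
print("MAE: ", mean_absolute_error(y_true, y_pred))
print("MSE: ", mse)
print("Explained variance:", explained_variance_score(y_true, y_pred))
print("R2:  ", r2_score(y_true, y_pred))
```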


2021 ◽  
Vol 2070 (1) ◽  
pp. 012145
Author(s):  
R Shiva Shankar ◽  
CH Raminaidu ◽  
VV Sivarama Raju ◽  
J Rajanikanth

Abstract Epilepsy is a chronic neurological illness that affects millions of people throughout the world; around 50 million people globally have epilepsy. It is estimated that if epilepsy is correctly diagnosed and treated, up to 70% of people with the condition could be seizure-free. There is a need to detect epilepsy at the initial stages to reduce symptoms through medication and other strategies. We use the Epileptic Seizure Recognition dataset, provided by the UCI Machine Learning Repository, to train the model. There are 179 attributes and 11,500 instances in this dataset. MLP, PCA with RF, QDA, LDA, and PCA with ANN were applied; among them, PCA with ANN provided the best metrics. We obtained the following results: 97.55% accuracy, 94.24% precision, 91.48% recall, 83.38% hinge loss, and 2.32% mean squared error.
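A hedged sketch of the best-performing pipeline named above (PCA followed by an ANN classifier); the number of principal components, hidden-layer sizes, and the placeholder data are assumptions, and the actual UCI Epileptic Seizure Recognition dataset is only indicated in a comment.

```python
# Hedged sketch of a PCA + ANN pipeline; component count, hidden-layer sizes,
# and the placeholder data are assumptions, not the paper's configuration.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Placeholder data shaped like the UCI set (178 EEG values per row); in practice
# X and y would come from the Epileptic Seizure Recognition dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 178))
y = rng.integers(0, 2, size=1000)

pipeline = make_pipeline(
    StandardScaler(),
    PCA(n_components=30),                    # assumed number of components
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=200, random_state=0),
)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
pipeline.fit(X_tr, y_tr)
print("held-out accuracy:", pipeline.score(X_te, y_te))
```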


2021 ◽  
Author(s):  
Mengbo Guo ◽  
Xuyang Xu ◽  
Han Xie

Density functional theory (DFT) is a ubiquitous first-principles method, but the approximate nature of the exchange-correlation functional poses an inherent limitation on the accuracy of various computed properties. In this context, surrogate models based on machine learning have the potential to provide a more efficient and physically meaningful understanding of electronic properties, such as the band gap. Here, we construct a gradient boosting regression (GBR) model for prediction of the band gap of binary compounds from simple physical descriptors, using a dataset of over 4000 DFT-computed band gaps. Out of 27 features, electronegativity, periodic group, and highest occupied energy level exhibit the highest importance scores, consistent with the underlying physics of the electronic structure. We obtain a model accuracy of 0.81 and a root mean squared error of 0.26 eV using the top five features, achieving accuracy comparable to previously reported values while employing fewer features. Our work presents a rapid and interpretable prediction model for the solid-state band gap with high fidelity to DFT and can be extended beyond the binary materials considered in this study.
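An illustrative sketch of a gradient boosting regressor with feature importances for band-gap prediction; the feature names follow the abstract, but the data is synthetic and the hyperparameters are assumptions rather than the paper's settings.

```python
# Illustrative gradient boosting regression with feature importances; the data
# is synthetic and the feature list/hyperparameters are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

features = ["electronegativity", "periodic_group", "homo_energy",
            "atomic_radius", "ionization_energy"]
rng = np.random.default_rng(3)
X = rng.normal(size=(4000, len(features)))
y = 1.5 * X[:, 0] - 0.8 * X[:, 2] + rng.normal(0, 0.3, 4000)   # synthetic gaps [eV]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
gbr = GradientBoostingRegressor(n_estimators=300, max_depth=3).fit(X_tr, y_tr)

print("R^2:", round(gbr.score(X_te, y_te), 2))
print("RMSE [eV]:", round(np.sqrt(np.mean((gbr.predict(X_te) - y_te) ** 2)), 2))
for name, imp in sorted(zip(features, gbr.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:20s} {imp:.3f}")
```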

