Towards Benthic Habitat 3D Mapping Using Machine Learning Algorithms and Structures from Motion Photogrammetry

2020
Vol 12 (1)
pp. 127
Author(s):
Hassan Mohamed
Kazuo Nadaoka
Takashi Nakamura

The accurate classification and 3D mapping of benthic habitats in coastal ecosystems are vital for developing management strategies for these valuable shallow water environments. However, both automatic and semiautomatic approaches for deriving ecologically significant information from a towed video camera system are quite limited. In the current study, we demonstrate a semiautomated framework for high-resolution benthic habitat classification and 3D mapping using Structure from Motion and Multi-View Stereo (SfM-MVS) algorithms and automated machine learning classifiers. The semiautomatic classification of benthic habitats was performed using several attributes extracted automatically from raw towed video camera images that had been labeled by a human annotator. The Bag of Features (BOF), Hue Saturation Value (HSV), and Gray Level Co-occurrence Matrix (GLCM) methods were used to extract these attributes from 3000 images. Three machine learning classifiers (k-nearest neighbor (k-NN), support vector machine (SVM), and bagging (BAG)) were trained on these attributes, and their outputs were combined by a fuzzy majority voting (FMV) algorithm. The correctly classified benthic habitat images were then geo-referenced using a differential global positioning system (DGPS). Finally, SfM-MVS techniques used the resulting classified geo-referenced images to produce high-spatial-resolution digital terrain models and orthophoto mosaics for each category. The framework was tested for the identification and 3D mapping of seven habitats in a portion of the Shiraho area in Japan: corals (Acropora and Porites), blue corals (H. coerulea), brown algae, blue algae, soft sand, hard sediments (pebble, cobble, and boulders), and seagrass. Using the FMV algorithm, we achieved an overall accuracy of 93.5% in the semiautomatic classification of the seven habitats.
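As a rough illustration of the fusion step described above, the following Python sketch trains the three classifiers and averages their per-class membership degrees, a common simplification of fuzzy majority voting; the scikit-learn usage, function names, and feature-matrix inputs are assumptions, not the authors' code.

```python
# Hypothetical sketch of the classifier-fusion step: three classifiers
# (k-NN, SVM, bagging) are trained on image features and their class
# memberships are fused. True fuzzy majority voting uses fuzzy measures;
# averaging per-class membership degrees is a common simplification.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import BaggingClassifier

def fuse_predictions(feature_matrix, labels, test_features):
    classifiers = [
        KNeighborsClassifier(n_neighbors=5),
        SVC(kernel="rbf", probability=True),  # probability=True enables membership estimates
        BaggingClassifier(n_estimators=50),
    ]
    memberships = []
    for clf in classifiers:
        clf.fit(feature_matrix, labels)
        memberships.append(clf.predict_proba(test_features))
    # Average membership degrees across classifiers, then pick the
    # habitat class with the highest fused membership.
    fused = np.mean(memberships, axis=0)
    return classifiers[0].classes_[np.argmax(fused, axis=1)]
```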

2019
Vol 16 (2)
pp. 5-16
Author(s):
Amit Singh
Ivan Li
Otto Hannuksela
Tjonnie Li
Kyungmin Kim

Gravitational waves are theorized to be gravitationally lensed when they propagate near massive objects. Such lensing effects cause potentially detectable repeated gravitational wave patterns in ground- and space-based gravitational wave detectors. These effects are difficult to discriminate when the lens is small and the repeated patterns superpose. Traditionally, matched filtering techniques are used to identify gravitational-wave signals; here we instead aim to achieve this with machine learning. In this work, we implement supervised machine learning classifiers (support vector machine, random forest, multi-layer perceptron) to discriminate such lensing patterns in gravitational wave data. We train the classifiers with spectrograms of both lensed and unlensed waves, using both point-mass and singular isothermal sphere lens models. As a result, the classifiers return F1 scores ranging from 0.852 to 0.996, with precisions from 0.917 to 0.992 and recalls ranging from 0.796 to 1.000, depending on the type of classifier and the lensing model used. This supports the idea that machine learning classifiers are able to correctly identify lensed gravitational wave signals. It also suggests that machine learning classifiers may in the future serve as an alternative way to identify lensed gravitational wave events and to allow us to study gravitational wave sources and massive astronomical objects through further analysis.

KEYWORDS: Gravitational Waves; Gravitational Lensing; Geometrical Optics; Machine Learning; Classification; Support Vector Machine; Random Forest; Multi-layer Perceptron
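A minimal sketch of this setup, assuming scikit-learn and pre-computed spectrogram arrays, might look as follows; the array shapes, split ratio, and kernel choice are illustrative assumptions rather than the paper's configuration.

```python
# Hypothetical sketch of the classification setup: spectrograms of lensed
# and unlensed waveforms are flattened into feature vectors and fed to a
# supervised classifier (SVM shown; random forest or MLP are analogous).
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_recall_fscore_support

def classify_spectrograms(spectrograms, labels):
    # spectrograms: (n_samples, n_freq_bins, n_time_bins); labels: 1 = lensed
    X = spectrograms.reshape(len(spectrograms), -1)
    X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25)
    clf = SVC(kernel="rbf").fit(X_train, y_train)
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_test, clf.predict(X_test), average="binary")
    return precision, recall, f1
```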


2022
Vol 12 (2)
pp. 828
Author(s):
Tebogo Bokaba
Wesley Doorsamy
Babu Sena Paul

Road traffic accidents (RTAs) are a major cause of injuries and fatalities worldwide. In recent years, there has been growing global interest in analysing RTAs, specifically in analysing and modelling accident data to better understand and assess their causes and effects. This study analysed the performance of widely used machine learning classifiers on a real-life RTA dataset from Gauteng, South Africa, with the aim of assessing prediction model designs for RTAs to assist transport authorities and policymakers. Six classifiers (naïve Bayes, logistic regression, k-nearest neighbour, AdaBoost, support vector machine, and random forest (RF)) were considered in combination with five missing-data methods, and were evaluated using five metrics: accuracy, root-mean-square error, precision, recall, and receiver operating characteristic curves. The assessment also involved parameter tuning and incorporated dimensionality reduction techniques. The empirical results and analyses show that the RF classifier, combined with multiple imputation by chained equations, yielded the best performance of all the combinations.
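For illustration, the best-performing combination reported (random forest with MICE) could be assembled roughly as below, assuming scikit-learn, whose IterativeImputer is a MICE-style imputer; the hyper-parameters are placeholders, not the study's tuned values.

```python
# Hypothetical sketch of the winning combination: multiple imputation by
# chained equations (MICE) followed by a random forest classifier.
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

def build_rta_model():
    # IterativeImputer is scikit-learn's MICE-style imputer: each feature
    # with missing values is regressed on the others, round-robin.
    return make_pipeline(
        IterativeImputer(max_iter=10, random_state=0),
        RandomForestClassifier(n_estimators=200, random_state=0),
    )
```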


2019
Vol 8 (4)
pp. 2187-2191

Music is an essential part of life, and the emotion it carries is key to its perception and usage. Music Emotion Recognition (MER) is the task of identifying the emotion in musical tracks and classifying them accordingly. The objective of this research paper is to assess the effectiveness of popular machine learning classifiers, namely XGBoost, Random Forest, Decision Trees, Support Vector Machine (SVM), K-Nearest Neighbour (KNN), and Gaussian Naive Bayes, on the task of MER. Using the MIREX-like dataset [17] to test these classifiers, the effects of oversampling algorithms such as the Synthetic Minority Oversampling Technique (SMOTE) [22] and Random Oversampling (ROS) were also examined. Overall, the Gaussian Naive Bayes classifier gave the maximum accuracy of 40.33%, while the other classifiers gave accuracies between 20.44% and 38.67%. Thus, a limit on classification accuracy has been reached using these classifiers with traditional musical or statistical metrics derived from the music as input features. In view of this, deep learning-based approaches using Convolutional Neural Networks (CNNs) [13] and spectrograms of the music clips are a promising alternative for MER.
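A hedged sketch of the oversampling experiment, assuming the imbalanced-learn package and a pre-computed matrix of audio features, follows; it is not the paper's code.

```python
# Hypothetical sketch: SMOTE balances the emotion classes before training
# a Gaussian Naive Bayes classifier, the best performer in the study.
from imblearn.over_sampling import SMOTE
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def evaluate_with_smote(features, emotion_labels):
    X_train, X_test, y_train, y_test = train_test_split(
        features, emotion_labels, test_size=0.3, stratify=emotion_labels)
    # Oversample only the training split to avoid leaking synthetic
    # samples into the evaluation data.
    X_res, y_res = SMOTE().fit_resample(X_train, y_train)
    clf = GaussianNB().fit(X_res, y_res)
    return accuracy_score(y_test, clf.predict(X_test))
```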


2020
Vol 10 (19)
pp. 6683
Author(s):
Andrea Murari
Emmanuele Peluso
Michele Lungaroni
Riccardo Rossi
Michela Gelfusa
...  

The inadequacies of basic physics models for disruption prediction have induced the community to increasingly rely on data mining tools. In the last decade, it has been shown that machine learning predictors achieve much better performance than manually identified thresholds or empirical descriptions of the plasma stability limits. The main criticisms of these techniques therefore focus on two different but interrelated issues: poor “physics fidelity” and limited interpretability. Insufficient “physics fidelity” refers to the fact that the mathematical models of most data mining tools do not reflect the physics of the underlying phenomena. Moreover, they implement a black box approach to learning, which results in very poor interpretability of their outputs. To overcome or at least mitigate these limitations, a general methodology has been devised and tested, with the objective of combining the predictive capability of machine learning tools with the expression of the operational boundary in terms of traditional equations more suited to understanding the underlying physics. The proposed approach relies on the application of machine learning classifiers (such as Support Vector Machines or Classification Trees) and Symbolic Regression via Genetic Programming directly to experimental databases. The results are very encouraging. The obtained equations of the boundary between the safe and disruptive regions of the operational space present almost the same performance as the machine learning classifiers, while being based on completely independent learning techniques. Moreover, these models possess significantly better predictive power than traditional representations, such as the Hugill or the beta limit. More importantly, they are realistic and intuitive mathematical formulas, which are well suited to supporting theoretical understanding and to benchmarking empirical models. They can also be deployed easily and efficiently in real-time feedback systems.
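As a rough sketch of how such a pipeline can be assembled, the snippet below uses the gplearn package's SymbolicClassifier to evolve an explicit boundary expression; the package choice, hyper-parameters, and the idea of inspecting the evolved program are assumptions on my part, not the authors' implementation.

```python
# Hypothetical sketch of the symbolic-regression step: genetic programming
# searches for an explicit formula separating safe from disruptive plasma
# states, given a feature matrix built from an experimental database.
from gplearn.genetic import SymbolicClassifier

def fit_boundary_equation(plasma_features, disrupted):
    # Restricting the function set keeps the evolved boundary equation
    # short and closer to a physically interpretable form.
    gp = SymbolicClassifier(
        population_size=2000,
        generations=20,
        function_set=("add", "sub", "mul", "div", "log"),
        parsimony_coefficient=0.01,  # penalize long formulas
        random_state=0,
    )
    gp.fit(plasma_features, disrupted)
    return gp._program  # the evolved symbolic expression
```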


2012
Vol 36 (6)
pp. 3861-3874
Author(s):
Juliana T. Pollettini
Sylvia R. G. Panico
Julio C. Daneluzzi
Renato Tinós
José A. Baranauskas
...  

2014
Vol 2014
pp. 1-10
Author(s):
Hyo-Sik Ham
Hwan-Hee Kim
Myung-Sup Kim
Mi-Jung Choi

Currently, many Internet of Things (IoT) services are monitored and controlled through smartphone applications. By combining IoT with smartphones, many convenient IoT services have been provided to users. However, such services carry adverse underlying effects, including invasion of privacy and information leakage. In most cases, mobile devices have accumulated important personal user information as various services and contents are provided through them. Accordingly, attackers are expanding the scope of their attacks beyond the existing PC and Internet environment to mobile devices. In this paper, we apply a linear support vector machine (SVM) to detect Android malware and compare its malware detection performance with that of other machine learning classifiers. Through experimental validation, we show that the SVM outperforms the other classifiers.
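A minimal sketch of such a detector, assuming scikit-learn and numeric feature vectors already extracted from the apps (e.g., permission or API-call counts, which is an assumption here), might look like this.

```python
# Hypothetical sketch: a linear SVM trained on numeric feature vectors
# extracted from Android apps. Feature extraction is out of scope here.
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

def malware_detector(app_features, is_malware):
    # Scaling matters for margin-based classifiers such as the linear SVM.
    model = make_pipeline(StandardScaler(), LinearSVC(C=1.0))
    scores = cross_val_score(model, app_features, is_malware, cv=5)
    return model.fit(app_features, is_malware), scores.mean()
```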


Electronics
2021
Vol 10 (16)
pp. 1955
Author(s):
Ikram Sumaiya Thaseen
Vanitha Mohanraj
Sakthivel Ramachandran
Kishore Sanapala
Sang-Soo Yeo

In recent years, different botnet variants have been targeting government and private organizations, and there is a crucial need to develop a robust framework for securing the IoT (Internet of Things) network. In this paper, a Hadoop-based framework is proposed to identify malicious IoT traffic using modified Tomek-link under-sampling integrated with automated hyper-parameter tuning of machine learning classifiers. The novelty of this paper is the use of a big data platform for benchmark IoT datasets to minimize computational time. The IoT benchmark datasets are loaded into the Hadoop Distributed File System (HDFS) environment. Three machine learning approaches, namely naive Bayes (NB), K-nearest neighbor (KNN), and support vector machine (SVM), are used for categorizing IoT traffic. Artificial immune network optimization is deployed during cross-validation to obtain the best classifier parameters. Experimental analysis is performed on the Hadoop platform. Average accuracies of 99% and 90% are obtained for the BoT-IoT and ToN-IoT datasets, respectively. The lower accuracy on the ToN-IoT dataset is due to the huge number of data samples captured at the edge and fog layers, whereas for the BoT-IoT dataset only the 5% of training and test samples released by the dataset developers is considered for experimental analysis. The overall accuracy is improved by 19% in comparison with state-of-the-art techniques, and the computational times for the huge datasets are reduced by 3–4 hours through MapReduce in HDFS.
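For illustration, standard Tomek-link under-sampling (without the paper's modification) can be applied with the imbalanced-learn package roughly as follows; the classifier shown is one of the three named above, and the function and variable names are assumptions.

```python
# Hypothetical sketch of the class-balancing step: Tomek links remove
# majority-class samples that sit on the boundary of the minority class,
# cleaning the decision region before classification.
from imblearn.under_sampling import TomekLinks
from sklearn.neighbors import KNeighborsClassifier

def balance_and_train(traffic_features, is_malicious):
    X_res, y_res = TomekLinks().fit_resample(traffic_features, is_malicious)
    # Any of the three classifiers in the paper (NB, KNN, SVM) can be
    # trained on the cleaned data; KNN is shown here.
    return KNeighborsClassifier(n_neighbors=5).fit(X_res, y_res)
```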


Author(s):  
Hamza Turabieh
Ahmad S. Alghamdi

Wi-Fi technology is now ubiquitous, both inside and outside buildings, and it enables indoor localization services (ILS). Determining an indoor user's location is a hard and complex problem, and several applications highlight its importance, such as disaster management, health care zones, Internet of Things (IoT) applications, and public settlement planning. Measurements of Wi-Fi signal strength (i.e., the Received Signal Strength Indicator (RSSI)) can be used to determine indoor user location. In this paper, we propose a hybrid model combining a wrapper feature selection algorithm with machine learning classifiers to determine indoor user location. We employ the Minimum Redundancy Maximum Relevance (mRMR) algorithm for feature selection, choosing the most active access points (APs) based on RSSI values. Six different machine learning classifiers were used in this work: Decision Tree (DT), Support Vector Machine (SVM), k-nearest neighbors (kNN), Linear Discriminant Analysis (LDA), Ensemble-Bagged Tree (EBaT), and Ensemble-Boosted Tree (EBoT). We examined all classifiers on a public dataset obtained from the UCI repository. The obtained results show that EBoT outperforms all other classifiers in terms of accuracy.
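A hedged approximation of this pipeline in scikit-learn is sketched below; mutual_info_classif scores relevance only, whereas true mRMR also penalizes redundancy between the selected APs, so this is a simplification rather than the authors' method, and the number of selected APs is an assumption.

```python
# Hypothetical sketch: select the most informative access points from
# RSSI readings, then classify the user's indoor location.
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def localization_model(n_access_points=100):
    return make_pipeline(
        # Relevance-only stand-in for mRMR: keep the k APs whose RSSI
        # values carry the most mutual information about the location.
        SelectKBest(mutual_info_classif, k=n_access_points),
        KNeighborsClassifier(n_neighbors=5),
    )
```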


2021
Vol 50 (8)
pp. 2479-2497
Author(s):
Buvana M.
Muthumayil K.

COVID-19 is one of the most symptomatic diseases. Early and precise prediction based on physiological measurements, combined with preventive measures such as keeping a reasonable distance from others, wearing a mask, hygiene, medication, a balanced diet, and staying home when unwell, can minimize the risk of COVID-19. To evaluate the collected COVID-19 prediction datasets, five machine learning classifiers were used: Naïve Bayes, Support Vector Machine (SVM), Logistic Regression, K-Nearest Neighbour (KNN), and Decision Tree. COVID-19 datasets from the repository were combined and re-examined to remove incomplete entries, and a total of 2500 cases were utilized in this study. Fever, body pain, runny nose, difficulty in breathing, sore throat, and nasal congestion are considered the most important features distinguishing patients who have COVID-19 from those who do not. We exhibit the prediction functionality of the five machine learning classifiers; a publicly available dataset was used to train and assess the model. With an overall accuracy of 99.88 percent, the ensemble model performed commendably and outperformed existing methods and studies. As a result, the presented model is trustworthy and can be used to screen COVID-19 patients in a timely and efficient manner.
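As an illustration, the five classifiers can be combined into a majority-voting ensemble with scikit-learn as sketched below; the symptom features are assumed to be numerically encoded, and this is a generic voting setup rather than the study's exact ensemble.

```python
# Hypothetical sketch: the five classifiers from the study combined by
# majority (hard) voting over encoded symptom features such as fever,
# body pain, and sore throat.
from sklearn.ensemble import VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

def covid_screening_ensemble():
    return VotingClassifier(
        estimators=[
            ("nb", GaussianNB()),
            ("svm", SVC()),
            ("lr", LogisticRegression(max_iter=1000)),
            ("knn", KNeighborsClassifier()),
            ("dt", DecisionTreeClassifier()),
        ],
        voting="hard",  # each model casts one vote per patient
    )
```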

