Detecting Multi-Decadal Changes in Seagrass Cover in Tauranga Harbour, New Zealand, Using Landsat Imagery and Boosting Ensemble Classification Techniques

2021 ◽  
Vol 10 (6) ◽  
pp. 371
Author(s):  
Nam-Thang Ha ◽  
Merilyn Manley-Harris ◽  
Tien-Dat Pham ◽  
Ian Hawes

Seagrass provides a wide range of essential ecosystem services, supports climate change mitigation, and contributes to blue carbon sequestration. This resource, however, is undergoing significant declines across the globe, and there is an urgent need to develop change detection techniques appropriate to the scale of loss and applicable to the complex coastal marine environment. Our work aimed to develop remote-sensing-based techniques for detection of changes between 1990 and 2019 in the area of seagrass meadows in Tauranga Harbour, New Zealand. Four state-of-the-art machine-learning models, Random Forest (RF), Support Vector Machine (SVM), Extreme Gradient Boost (XGB), and CatBoost (CB), were evaluated for classification of seagrass cover (presence/absence) in a Landsat 8 image from 2019, using near-concurrent Ground-Truth Points (GTPs). We then used the most accurate of these models, CB, with historic Landsat imagery, supported by classified aerial photographs, to estimate change in cover over time. The CB model produced the highest accuracies (precision, recall, and F1 scores of 0.94, 0.96, and 0.95, respectively). We were able to use Landsat imagery to document the trajectory and spatial distribution of an approximately 50% reduction in seagrass area, from 2237 ha to 1184 ha, between 1990 and 2019. Our illustration of change detection of seagrass in Tauranga Harbour suggests that machine-learning techniques, coupled with historic satellite imagery, offer potential for evaluation of historic as well as ongoing seagrass dynamics.
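
As a minimal sketch of the presence/absence classification step this abstract describes, the snippet below trains a boosting classifier on entirely synthetic stand-ins for Landsat 8 band values and ground-truth labels; scikit-learn's `GradientBoostingClassifier` is used as a generic substitute for CatBoost, and the feature ranges and labelling rule are invented for illustration only.

```python
# Sketch of presence/absence seagrass classification with a boosting ensemble.
# All band values, thresholds, and labels are synthetic placeholders, not the
# paper's Ground-Truth Points.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score, f1_score

rng = np.random.default_rng(0)
n = 600
# Hypothetical Landsat 8 surface-reflectance features (blue, green, red, NIR)
X = rng.uniform(0.0, 0.3, size=(n, 4))
# Invented rule: seagrass "present" where green reflectance is high and NIR low
y = ((X[:, 1] > 0.15) & (X[:, 3] < 0.2)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GradientBoostingClassifier(n_estimators=200, learning_rate=0.1,
                                 random_state=0)
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)

precision = precision_score(y_te, pred)
recall = recall_score(y_te, pred)
f1 = f1_score(y_te, pred)
```

With CatBoost itself installed, the classifier line would instead instantiate `catboost.CatBoostClassifier`; the surrounding train/evaluate workflow is the same.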

Energies ◽  
2021 ◽  
Vol 14 (1) ◽  
pp. 197
Author(s):  
Claudia Corradino ◽  
Giuseppe Bilotta ◽  
Annalisa Cappello ◽  
Luigi Fortuna ◽  
Ciro Del Negro

Lava flow mapping has direct relevance to volcanic hazards once an eruption has begun. Satellite remote sensing techniques are increasingly used to map newly erupted lava, thanks to their capability to survey large areas with frequent revisit times and fine spatial resolution. Visible and infrared satellite data are routinely used to detect the distributions of volcanic deposits and monitor thermal features, although clouds are a serious obstacle for optical sensors, since they cannot be penetrated by optical radiation. On the other hand, radar satellite data have been playing an important role in surface change detection and image classification, being able to operate in all weather conditions, although their use is hampered by the special imaging geometry, the complicated scattering process, and the presence of speckle noise. Thus, optical and radar data are complementary sources that can be used to map lava flows effectively, alleviating cloud obstruction and improving change detection performance. Here, we propose a machine learning approach based on the Google Earth Engine (GEE) platform to analyze simultaneously the images acquired by the synthetic aperture radar (SAR) sensor on board the Sentinel-1 mission, by the optical sensors of the Landsat-8 mission, and by the Multi-Spectral Imager (MSI) on board the Sentinel-2 mission. Machine learning classifiers, including the K-means algorithm and the support vector machine (SVM), are used to map lava flows automatically from a combination of optical and SAR images. We describe the operation of this approach using a retrospective analysis of two recent lava flow-forming eruptions at Mount Etna (Italy) and Fogo Island (Cape Verde). We found that combining radar and optical imagery improved the accuracy and reliability of lava flow mapping. The results highlight the need to fully exploit the extraordinary potential of complementary satellite sensors to provide time-critical hazard information during volcanic eruptions.
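
To illustrate the unsupervised half of such a workflow, the sketch below clusters a combined optical + SAR feature stack with K-means; the "pixels" are synthetic draws standing in for co-registered Sentinel-1 backscatter and optical reflectance, so the separability shown here is by construction, not a result from the paper.

```python
# Sketch of unsupervised lava-flow mapping on a stacked optical/SAR feature
# space. Real inputs would be co-registered Sentinel-1 backscatter and
# Sentinel-2/Landsat-8 bands; these pixels are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n = 500
# Columns: [SAR backscatter (dB), red, NIR, SWIR] -- hypothetical values
background = np.column_stack([
    rng.normal(-12, 1, n), rng.normal(0.10, 0.02, n),
    rng.normal(0.25, 0.03, n), rng.normal(0.15, 0.02, n)])
lava = np.column_stack([
    rng.normal(-6, 1, n), rng.normal(0.05, 0.02, n),
    rng.normal(0.08, 0.02, n), rng.normal(0.35, 0.03, n)])
pixels = np.vstack([background, lava])

# Two clusters: lava vs. everything else
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pixels)
labels = km.labels_
# The cluster dominating the second half of the stack is taken as "lava"
lava_cluster = np.bincount(labels[n:]).argmax()
lava_fraction = (labels == lava_cluster).mean()
```

On the GEE platform itself the analogous step would use `ee.Clusterer.wekaKMeans` over an `ee.Image` band stack; the local sketch above only conveys the clustering logic.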


Author(s):  
Amandeep Kaur ◽  
Sushma Jain ◽  
Shivani Goel ◽  
Gaurav Dhiman

Context: Code smells are symptoms that something may be wrong in software systems and can cause complications in maintaining software quality. Many code smells exist in the literature, and their identification is far from trivial; several techniques have therefore been proposed to automate code smell detection in order to improve software quality. Objective: This paper presents an up-to-date review of simple and hybrid machine-learning-based code smell detection techniques and tools. Methods: We collected all the relevant research published in this field up to 2020, extracted the data from those articles, and classified them into two major categories. In addition, we compared the selected studies on several aspects: code smells, machine learning techniques, datasets, programming languages used by the datasets, dataset size, evaluation approach, and statistical testing. Results: The majority of empirical studies have proposed machine-learning-based code smell detection tools. Support vector machine and decision tree algorithms are frequently used by researchers. A major proportion of the research is conducted on Open Source Software (OSS) such as Xerces, GanttProject, and ArgoUML. Furthermore, researchers have paid most attention to the Feature Envy and Long Method code smells. Conclusion: We identified several areas of open research, such as the need for code smell detection techniques using hybrid approaches and for validation employing industrial datasets.
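
The supervised setups this review surveys can be pictured with a toy example: a decision tree trained on code metrics to flag a "Long Method" smell. The metric names, value ranges, and labelling rule below are all invented for illustration; no real smell dataset (Xerces, GanttProject, etc.) is used.

```python
# Illustrative sketch only: a decision tree over hypothetical code metrics
# (lines of code, fan-out, a cohesion score) flagging a "Long Method" smell.
# Thresholds and labels are invented, not drawn from any surveyed dataset.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
n = 400
loc = rng.integers(5, 200, n)        # lines of code per method
fan_out = rng.integers(0, 20, n)     # number of distinct called methods
cohesion = rng.uniform(0.0, 1.0, n)  # hypothetical cohesion score in [0, 1]
X = np.column_stack([loc, fan_out, cohesion])
# Synthetic labelling rule: long, low-cohesion methods are "smelly"
y = ((loc > 80) & (cohesion < 0.5)).astype(int)

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
accuracy = tree.score(X, y)
```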


2020 ◽  
Vol 2020 ◽  
pp. 1-14 ◽  
Author(s):  
Randa Aljably ◽  
Yuan Tian ◽  
Mznah Al-Rodhaan

Nowadays, user privacy is a critical matter in multimedia social networks, and traditional machine learning anomaly detection techniques that rely on users’ log files and behavioral patterns are not sufficient to preserve it. Social network security should therefore combine multiple security measures and take additional information into account to protect user data. More precisely, access control models can complement machine learning algorithms in the process of privacy preservation, using further information derived from users’ profiles to detect anomalous users. In this paper, we implement a privacy preservation algorithm that incorporates supervised and unsupervised machine learning anomaly detection techniques with access control models. Thanks to its rich and fine-grained policies, our control model continuously updates the list of attributes used to classify users. It has been successfully tested on real datasets, achieving over 95% accuracy with a Bayesian classifier and 95.53% on the receiver operating characteristic curve with deep neural network and long short-term memory recurrent neural network classifiers. Experimental results show that this approach outperforms other detection techniques such as support vector machine, isolation forest, principal component analysis, and the Kolmogorov–Smirnov test.
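
As a sketch of the unsupervised side of such a pipeline, the snippet below scores user-behaviour feature vectors with an Isolation Forest, one of the baselines the abstract mentions. The features (login frequency, posts per day, friend-request rate) and the injected anomalies are hypothetical, not drawn from the paper's datasets.

```python
# Sketch: Isolation Forest flagging anomalous users from synthetic
# behavioural features. Feature semantics are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(3)
# 300 "normal" users: [logins/day, posts/day, friend requests/day]
normal = rng.normal(loc=[5, 3, 1], scale=[1, 1, 0.5], size=(300, 3))
# 10 injected extreme users far outside the normal behaviour envelope
anomalous = rng.normal(loc=[50, 40, 20], scale=[5, 5, 2], size=(10, 3))
X = np.vstack([normal, anomalous])

iso = IsolationForest(contamination=0.05, random_state=0).fit(X)
scores = iso.predict(X)  # +1 = inlier, -1 = anomaly
n_flagged = int((scores == -1).sum())
# The extreme users should dominate the flagged set
flagged_extreme = int((scores[300:] == -1).sum())
```

In the paper's full scheme, flags like these would feed the access control model, which in turn refines the attribute list used for classification.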


Author(s):  
Adwait Patil

Abstract: Alzheimer’s disease is a neurodegenerative disorder. It starts with innocuous symptoms but gradually becomes severe. The disease is particularly dangerous because there is no cure and it is typically detected only at a later stage, so it is important to detect Alzheimer’s early in order to counter the disease and give the patient a chance of recovery. Various approaches are currently used to detect symptoms of Alzheimer’s disease (AD) at an early stage. The fuzzy-system approach is not widely used, as it depends heavily on expert knowledge, but it is quite efficient in detecting AD because it provides a mathematical foundation for interpreting human cognitive processes. Another more accurate and widely accepted approach is machine-learning detection of AD stages, which uses algorithms such as Support Vector Machines (SVMs), Decision Trees, and Random Forests to determine the stage from the data provided. The final approach is deep learning on multi-modal data, which combines imaging, genetic, and patient data using deep models and then uses the concatenated data to detect the AD stage more efficiently; this method is less accessible as it requires huge volumes of data. This paper elaborates on all three approaches and provides a comparative study of them and of which method is most efficient for AD detection. Keywords: Alzheimer’s Disease (AD), Fuzzy System, Machine Learning, Deep Learning, Multimodal data
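
The machine-learning comparison the paper describes can be sketched generically: train an SVM, a decision tree, and a random forest on the same features and compare held-out accuracy. The features and two-class labels below are synthetic stand-ins, not clinical data, so the scores say nothing about real AD staging.

```python
# Hedged sketch of a three-way classifier comparison on synthetic stand-ins
# for cognitive-score/imaging features. Labels follow an invented linear rule.
import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 450
X = rng.normal(size=(n, 6))                    # hypothetical features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic two-stage label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
models = {
    "SVM": SVC(kernel="rbf"),
    "DecisionTree": DecisionTreeClassifier(max_depth=5, random_state=0),
    "RandomForest": RandomForestClassifier(n_estimators=100, random_state=0),
}
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te)
          for name, m in models.items()}
```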


Forests ◽  
2019 ◽  
Vol 11 (1) ◽  
pp. 11
Author(s):  
Pablito M. López-Serrano ◽  
José Luis Cárdenas Domínguez ◽  
José Javier Corral-Rivas ◽  
Enrique Jiménez ◽  
Carlos A. López-Sánchez ◽  
...  

An accurate estimation of forests’ aboveground biomass (AGB) is required because of its relevance to the carbon cycle and because of its economic and ecological importance. The selection of appropriate variables from satellite information and physical variables is important for precise AGB prediction mapping. Because of the complex relationships involved in AGB prediction, non-parametric machine-learning techniques are potentially useful for AGB estimation, but their use and comparison in forest remote-sensing applications is still relatively limited. The objective of the present study was to evaluate the performance of two machine-learning techniques, support vector regression (SVR) and random forest (RF), in predicting observed AGB (from 318 permanent sampling plots) from Landsat 8 Operational Land Imager (OLI) data, spectral indices, texture indices, and physical variables in the Sierra Madre Occidental in Mexico. The results showed that the best SVR model explained 80% of the total variance (root mean square error (RMSE) = 8.20 Mg ha−1). The variables that best predicted AGB were, in order of importance, the bands in the red and the near- and mid-infrared regions, and the average temperature. The results show that the SVR technique has good potential for estimating AGB and that the selection of the model hyperparameters has important implications for optimizing the goodness of fit.
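
Since the abstract stresses that hyperparameter selection matters for SVR, the sketch below pairs an RBF-kernel SVR with a small cross-validated grid search. The predictors stand in for OLI bands/indices and the AGB values are synthetic, so the RMSE here is not comparable to the paper's 8.20 Mg ha−1.

```python
# Sketch of SVR for AGB prediction with cross-validated hyperparameter tuning.
# Predictors and AGB values are synthetic, not the 318-plot dataset.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(5)
n = 300
X = rng.uniform(0, 1, size=(n, 5))  # e.g. red, NIR, SWIR, an index, temperature
# Invented linear AGB response (Mg/ha) plus noise
agb = 50 + 40 * X[:, 1] - 30 * X[:, 0] + rng.normal(0, 3, n)

grid = GridSearchCV(
    SVR(kernel="rbf"),
    param_grid={"C": [1, 10, 100], "gamma": ["scale", 0.1, 1.0]},
    cv=5, scoring="neg_root_mean_squared_error")
grid.fit(X, agb)
rmse = np.sqrt(mean_squared_error(agb, grid.best_estimator_.predict(X)))
```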


Sensors ◽  
2020 ◽  
Vol 20 (11) ◽  
pp. 3144 ◽  
Author(s):  
Sherif Said ◽  
Ilyes Boulkaibet ◽  
Murtaza Sheikh ◽  
Abdullah S. Karar ◽  
Samer Alkork ◽  
...  

In this paper, a customizable wearable 3D-printed bionic arm is designed, fabricated, and optimized for a right-arm amputee. An experimental test was conducted with the user, in which control of the artificial bionic hand was accomplished successfully using surface electromyography (sEMG) signals acquired by a multi-channel wearable armband. The 3D-printed bionic arm was designed at a low cost of 295 USD and is lightweight at 428 g. To facilitate generic control of the bionic arm, sEMG data were collected for a set of gestures (fist, spread fingers, wave-in, wave-out) from a wide range of participants. The collected data were processed, and features related to the gestures were extracted for the purpose of training a classifier. In this study, several classifiers based on neural networks, support vector machines, and decision trees were constructed, trained, and statistically compared. The support vector machine classifier was found to exhibit an 89.93% success rate. Real-time testing of the bionic arm with the optimum classifier is demonstrated.
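
The feature-extraction-plus-SVM stage can be sketched as follows. The time-domain features (per-channel RMS and mean absolute value) are common sEMG choices but are an assumption here, and the signal windows are synthetic noise with invented per-gesture amplitude patterns rather than recorded armband data.

```python
# Sketch of sEMG gesture classification: time-domain features per channel,
# then an SVM over four gesture classes. Signals and labels are synthetic.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)

def features(window):
    """Per-channel RMS and mean absolute value of one sEMG window."""
    rms = np.sqrt((window ** 2).mean(axis=0))
    mav = np.abs(window).mean(axis=0)
    return np.concatenate([rms, mav])

# 4 gestures x 60 windows; each window is 200 samples x 8 channels.
# The per-gesture amplitude vector is a synthetic surrogate for the
# distinct muscle-activation pattern of each gesture.
X, y = [], []
for gesture in range(4):
    amp = rng.uniform(0.2, 1.5, size=8) * (gesture + 1)
    for _ in range(60):
        window = rng.normal(0, amp, size=(200, 8))
        X.append(features(window))
        y.append(gesture)
X, y = np.array(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
svm = SVC(kernel="rbf", C=10).fit(X_tr, y_tr)
accuracy = svm.score(X_te, y_te)
```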


2020 ◽  
Vol 12 (22) ◽  
pp. 3776
Author(s):  
Andrea Tassi ◽  
Marco Vizzari

Google Earth Engine (GEE) is a versatile cloud platform in which pixel-based (PB) and object-oriented (OO) Land Use–Land Cover (LULC) classification approaches can be implemented, thanks to the availability of many state-of-the-art functions, including various Machine Learning (ML) algorithms. OO approaches, including both object segmentation and object textural analysis, are still not common in the GEE environment, probably due to the difficulties in concatenating the proper functions and in tuning the various parameters to overcome the GEE computational limits. In this context, this work aimed at developing and testing an OO classification approach combining the Simple Non-Iterative Clustering (SNIC) algorithm to identify spatial clusters, the Gray-Level Co-occurrence Matrix (GLCM) to calculate cluster textural indices, and two ML algorithms (Random Forest (RF) or Support Vector Machine (SVM)) to perform the final classification. A Principal Components Analysis (PCA) is applied to the seven main GLCM indices to synthesize in one band the textural information used for the OO classification. The proposed approach is implemented in a user-friendly, freely available GEE code useful to perform the OO classification, tuning various parameters (e.g., choose the input bands, select the classification algorithm, test various segmentation scales) and compare it with a PB approach. The accuracy of OO and PB classifications can be assessed both visually and through two confusion matrices that can be used to calculate the relevant statistics (producer’s, user’s, and overall accuracy (OA)). The proposed methodology was broadly tested in a 154 km2 study area, located in the Lake Trasimeno area (central Italy), using Landsat 8 (L8), Sentinel 2 (S2), and PlanetScope (PS) data. The area was selected considering its complex LULC mosaic, mainly composed of artificial surfaces, annual and permanent crops, small lakes, and wooded areas. In the study area, the various tests produced interesting results on the different datasets (OA: PB RF (L8 = 72.7%, S2 = 82.0%, PS = 74.2%), PB SVM (L8 = 79.1%, S2 = 80.2%, PS = 74.8%), OO RF (L8 = 64.0%, S2 = 89.3%, PS = 77.9%), OO SVM (L8 = 70.4%, S2 = 86.9%, PS = 73.9%)). The broad code application demonstrated very good reliability of the whole process, even though the OO classification sometimes proved too demanding on higher-resolution data, given the available GEE computational resources.
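
One step of this workflow, collapsing the seven GLCM indices into a single textural band via PCA, can be sketched locally. The texture values below are synthetic (correlated by construction, as texture measures tend to be), standing in for per-cluster GLCM outputs; in GEE itself the indices would come from `ee.Image.glcmTexture`.

```python
# Sketch: PCA condensing seven correlated GLCM texture indices into one
# component per cluster. Values are synthetic stand-ins for GLCM outputs.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
n_clusters = 200
# A shared latent "texture" signal plus small independent noise per index
base = rng.normal(size=(n_clusters, 1))
glcm = base + 0.3 * rng.normal(size=(n_clusters, 7))

pca = PCA(n_components=1)
texture_band = pca.fit_transform(glcm)  # one textural value per cluster
explained = float(pca.explained_variance_ratio_[0])
```

The single `texture_band` column is what would be stacked with the spectral bands before the RF/SVM classification step.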


DYNA ◽  
2019 ◽  
Vol 86 (211) ◽  
pp. 32-41 ◽  
Author(s):  
Juan D. Pineda-Jaramillo

In recent decades, transportation planning researchers have used diverse types of machine learning (ML) algorithms to study a wide range of topics. This review paper starts with a brief explanation of some ML algorithms commonly used in transportation research, specifically Artificial Neural Networks (ANN), Decision Trees (DT), Support Vector Machines (SVM), and Cluster Analysis (CA). The different methodologies researchers have used to model travel mode choice are then collected and compared with the Multinomial Logit Model (MNL), the most commonly used discrete choice model. Finally, the characterization of ML algorithms is discussed, and Random Forest (RF), a variant of the Decision Tree algorithm, is presented as the best methodology for modeling travel mode choice.
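
The RF-versus-MNL comparison at the heart of this review can be sketched on a toy mode-choice problem. The predictors (cost, travel time, income), the three modes, and the utility rule generating the labels are all invented for illustration; scikit-learn's multinomial `LogisticRegression` plays the role of the MNL.

```python
# Sketch: Random Forest vs. a multinomial logit on a synthetic mode-choice
# dataset. Predictors, modes, and the utility rule are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
n = 600
cost = rng.uniform(1, 10, n)      # trip cost
time = rng.uniform(5, 60, n)      # trip time (minutes)
income = rng.uniform(10, 100, n)  # traveller income
X = np.column_stack([cost, time, income])
# Invented linear utilities for three modes: 0 = car, 1 = bus, 2 = walk
utility = np.column_stack([income - 3 * cost, 45 - time, 70 - 2 * time - cost])
y = utility.argmax(axis=1)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
mnl = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
rf_acc, mnl_acc = rf.score(X_te, y_te), mnl.score(X_te, y_te)
```

Because the labels come from an argmax of linear utilities, both models should do well here; real mode-choice data, with its nonlinearities, is where the review finds RF pulling ahead.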


2017 ◽  
Vol 49 (5) ◽  
pp. 1608-1617 ◽  
Author(s):  
Matias Bonansea ◽  
Claudia Rodriguez ◽  
Lucio Pinotti

Abstract Landsat satellites 5 and 7 have significant potential for estimating several water quality parameters, but to our knowledge few investigations have integrated these earlier sensors with the newer and improved Landsat 8 mission. The comparability of water quality assessment across different Landsat sensors therefore needs to be evaluated. The main objective of this study was to assess the feasibility of integrating Landsat sensors to estimate chlorophyll-a concentration (Chl-a) in the Río Tercero reservoir (Argentina). A general model to retrieve Chl-a was developed (R2 = 0.88). The model was validated using observed versus predicted Chl-a values (R2 = 0.89) and applied to Landsat imagery to obtain spatial representations of Chl-a in the reservoir. Results showed that Landsat 8 can be combined with Landsat 5 and 7 to construct an empirical model for estimating water quality characteristics such as Chl-a in a reservoir. As the number of available and upcoming open-access sensors increases with time, we expect this trend to further promote remote sensing applications and serve as a valuable basis for a wide range of water quality assessments.
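
An empirical Chl-a model of this kind is typically a regression of (log-transformed) Chl-a on band ratios. The sketch below fits one on synthetic reflectances; the band-ratio predictors and the underlying relationship are assumptions for illustration, not the paper's fitted model.

```python
# Sketch of an empirical Chl-a retrieval model: OLS regression of log(Chl-a)
# on band ratios, with R^2 on fit and validation splits. Data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(9)
n = 120
blue = rng.uniform(0.02, 0.10, n)
green = rng.uniform(0.03, 0.12, n)
red = rng.uniform(0.02, 0.08, n)
# Invented relationship: Chl-a rises with the green/blue reflectance ratio
log_chla = 0.5 + 2.0 * np.log(green / blue) + rng.normal(0, 0.1, n)

X = np.column_stack([np.log(green / blue), np.log(green / red)])
X_tr, X_te, y_tr, y_te = train_test_split(X, log_chla, test_size=0.3,
                                          random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
r2_fit = model.score(X_tr, y_tr)
r2_val = model.score(X_te, y_te)
```

Cross-sensor integration would amount to fitting one such model on samples pooled from Landsat 5, 7, and 8 after radiometric harmonization of the bands.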

