A national landslide inventory of Denmark

2021 ◽  
Author(s):  
Gregor Luetzenburg ◽  
Kristian Svennevig ◽  
Anders Anker Bjørk ◽  
Marie Keiding ◽  
Aart Kroon

Abstract. Landslides are a frequent natural hazard occurring globally in regions with steep topography. In addition, landslides play an important role in landscape evolution by transporting sediment downslope. Landslide inventory mapping is a common technique for assessing the spatial distribution and extent of landslides in an area of interest. High-resolution digital elevation models (DEMs) have proven to be useful data sources for mapping landslides over large areas across different land covers and topographies. Until now, Denmark has had no national landslide inventory. Here we create the first comprehensive national landslide inventory for Denmark, derived from a 40 cm resolution DEM from 2015 supported by several 12.5 cm resolution orthophotos. The inventory is based on a manual, expert-based mapping approach, and we implemented a quality control mechanism to assess its completeness. Overall, we mapped 3202 landslide polygons in Denmark with a level of completeness of 87 %. The landslide inventory can act as a starting point for a more comprehensive hazard and risk reduction framework for Denmark. Furthermore, machine-learning algorithms can use the dataset as training data to improve future automated mapping approaches. The complete landslide inventory is freely available for download at https://doi.org/10.6084/m9.figshare.16965439.v1 (Svennevig and Luetzenburg, 2021) or as a web map (https://data.geus.dk/landskred/) for further investigation.

2020 ◽  
Author(s):  
Joseph Prinable ◽  
Peter Jones ◽  
David Boland ◽  
Alistair McEwan ◽  
Cindy Thamrin

BACKGROUND The ability to continuously monitor breathing metrics may have implications for general health as well as for respiratory conditions such as asthma. However, few studies have focused on breathing, owing to a lack of available wearable technologies. OBJECTIVE To examine the performance of two machine learning algorithms in extracting breathing metrics from a finger-based pulse oximeter, which is amenable to long-term monitoring. METHODS Pulse oximetry data were collected from 11 healthy and 11 asthma subjects who breathed at a range of controlled respiratory rates. UNET and Long Short-Term Memory (LSTM) algorithms were applied to the data, and the results were compared against breathing metrics derived from respiratory inductance plethysmography measured simultaneously as a reference. RESULTS Both the UNET and LSTM models provided breathing metrics that were strongly correlated with those from the reference signal (all p<0.001, except for the inspiratory:expiratory ratio). The following relative mean biases (95% confidence intervals) were observed for UNET vs. LSTM: inspiration time 1.89 (-52.95, 56.74)% vs. 1.30 (-52.15, 54.74)%, expiration time -3.70 (-55.21, 47.80)% vs. -4.97 (-56.84, 46.89)%, inspiratory:expiratory ratio -4.65 (-87.18, 77.88)% vs. -5.30 (-87.07, 76.47)%, inter-breath interval -2.39 (-32.76, 27.97)% vs. -3.16 (-33.69, 27.36)%, and respiratory rate 2.99 (-27.04, 33.02)% vs. 3.69 (-27.17, 34.56)%. CONCLUSIONS Both machine learning models show strong correlation and good comparability with the reference, with low bias though wide variability, for deriving breathing metrics in asthma and healthy cohorts. Future efforts should focus on improving the performance of these models, e.g. by increasing the size of the training dataset at the lower breathing rates. CLINICALTRIAL Sydney Local Health District Human Research Ethics Committee (#LNR\16\HAWKE99 ethics approval).
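The relative mean bias and 95% intervals reported above can be computed Bland-Altman style, expressing each model-reference difference as a percentage of the reference. The sketch below uses hypothetical inspiration times, not the study's data.

```python
import numpy as np

def relative_bias_limits(estimate, reference):
    """Relative mean bias and 95% limits of agreement (Bland-Altman style).

    Differences are expressed as a percentage of the reference value,
    mirroring the relative bias (95% CI) figures in the abstract.
    """
    estimate = np.asarray(estimate, dtype=float)
    reference = np.asarray(reference, dtype=float)
    rel_diff = 100.0 * (estimate - reference) / reference  # percent difference
    bias = rel_diff.mean()
    spread = 1.96 * rel_diff.std(ddof=1)  # half-width of the 95% limits
    return bias, bias - spread, bias + spread

# Hypothetical inspiration times (s): model estimates vs. reference signal
est = [1.9, 2.1, 1.8, 2.0, 2.2]
ref = [2.0, 2.0, 2.0, 2.0, 2.0]
bias, lo95, hi95 = relative_bias_limits(est, ref)
```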


2021 ◽  
Vol 13 (4) ◽  
pp. 815
Author(s):  
Mary-Anne Fobert ◽  
Vern Singhroy ◽  
John G. Spray

Dominica is a geologically young, volcanic island in the eastern Caribbean. Due to its rugged terrain, substantial rainfall, and distinct soil characteristics, it is highly vulnerable to landslides. The dominant triggers of these landslides are hurricanes, tropical storms, and heavy prolonged rainfall events. These events frequently lead to loss of life and the need for a growing portion of the island’s annual budget to cover the considerable cost of reconstruction and recovery. For disaster risk mitigation and landslide risk assessment, landslide inventory and susceptibility maps are essential. Landslide inventory maps record existing landslides and include details on their type, location, spatial extent, and time of occurrence. These data are integrated (when possible) with the landslide trigger and pre-failure slope conditions to generate or validate a susceptibility map. The susceptibility map is used to identify the level of potential landslide risk (low, moderate, or high). In Dominica, these maps are produced using optical satellite and aerial images, digital elevation models, and historic landslide inventory data. This study illustrates the benefits of using satellite Interferometric Synthetic Aperture Radar (InSAR) to refine these maps. Our study shows that when using continuous high-resolution InSAR data, active slopes can be identified and monitored. This information can be used to highlight areas most at risk (for use in validating and updating the susceptibility map) and can constrain the time of landslide initiation (for use in landslide inventory mapping). Our study further shows that InSAR can assist in the investigation of pre-failure slope conditions. For instance, our initial findings suggest there is more land motion prior to failure on clay soils with gentler slopes than on those with steeper slopes.
A greater understanding of pre-failure slope conditions will support the generation of a more dependable susceptibility map. Our study also discusses the integration of InSAR deformation-rate maps and time-series analysis with rainfall data in support of the development of rainfall thresholds for different terrains. The information provided by InSAR can enhance inventory and susceptibility mapping, which will better assist with the island’s current disaster mitigation and resiliency efforts.


2020 ◽  
Vol 13 (1) ◽  
pp. 10
Author(s):  
Andrea Sulova ◽  
Jamal Jokar Arsanjani

Recent studies have suggested that, due to climate change, the number of wildfires across the globe has been increasing and continues to grow. The massive wildfires that hit Australia during the 2019–2020 summer season raised questions as to what extent the risk of wildfires can be linked to various climate, environmental, topographical, and social factors, and how fire occurrences can be predicted so that preventive measures can be taken. Hence, the main objective of this study was to develop an automated, cloud-based workflow for generating a training dataset of fire events at a continental level using freely available remote sensing data, with reasonable computational expense, for injection into machine learning models. As a result, a data-driven model was set up on the Google Earth Engine platform, which is publicly accessible and open for further adjustments. The training dataset was applied to different machine learning algorithms, i.e., Random Forest, Naïve Bayes, and Classification and Regression Tree. The findings show that Random Forest outperformed the other algorithms, and it was therefore used further to explore the driving factors using variable importance analysis. The study indicates the probability of fire occurrences across Australia and identifies the potential driving factors of the Australian wildfires of the 2019–2020 summer season. The methodological approach, achieved results, and drawn conclusions can be of great importance to policymakers, environmentalists, and climate change researchers, among others.
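The classification and variable-importance step can be sketched outside Earth Engine with scikit-learn. The feature names and toy data below are illustrative stand-ins, not the study's actual predictors or fire dataset.

```python
# Random Forest fire/no-fire classifier with variable-importance analysis.
# scikit-learn stands in for the Google Earth Engine classifiers; the
# features and the labelling rule are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = ["temperature", "rainfall", "slope", "ndvi"]  # hypothetical drivers
X = rng.random((200, len(features)))
y = (X[:, 0] > 0.5).astype(int)  # toy rule: hot pixels burn

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
# Rank drivers by importance, as in the study's variable importance analysis
ranked = sorted(zip(features, model.feature_importances_), key=lambda p: -p[1])
```

Because the toy labels depend only on "temperature", that variable should dominate the importance ranking, mimicking how the study identified the driving factors of the 2019–2020 fires.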


2015 ◽  
Vol 32 (6) ◽  
pp. 821-827 ◽  
Author(s):  
Enrique Audain ◽  
Yassel Ramos ◽  
Henning Hermjakob ◽  
Darren R. Flower ◽  
Yasset Perez-Riverol

Abstract Motivation: In any macromolecular polyprotic system—for example protein, DNA or RNA—the isoelectric point—commonly referred to as the pI—can be defined as the point of singularity in a titration curve, corresponding to the solution pH value at which the net overall surface charge—and thus the electrophoretic mobility—of the ampholyte sums to zero. Many modern analytical biochemistry and proteomics methods depend on the isoelectric point as a principal feature for protein and peptide characterization. Protein separation by isoelectric point is a critical part of 2-D gel electrophoresis, a key precursor of proteomics, where discrete spots can be digested in-gel and proteins subsequently identified by analytical mass spectrometry. Peptide fractionation according to pI is also widely used in current proteomics sample preparation procedures prior to LC-MS/MS analysis. Therefore, accurate theoretical prediction of pI would expedite such analyses. While pI calculation is widely used, it remains largely untested, motivating our efforts to benchmark pI prediction methods. Results: Using data from the database PIP-DB and one publicly available dataset as our reference gold standard, we have undertaken the benchmarking of pI calculation methods. We find that methods vary in their accuracy and are highly sensitive to the choice of basis set. The machine-learning algorithms, especially the SVM-based algorithm, showed superior performance when studying peptide mixtures. In general, learning-based pI prediction methods (such as Cofactor, SVM and Branca) require a large training dataset, and their resulting performance strongly depends on the quality of that data. In contrast with iterative methods, machine-learning algorithms have the advantage of being able to add new features to improve the accuracy of prediction.
Contact: [email protected] Availability and Implementation: The software and data are freely available at https://github.com/ypriverol/pIR. Supplementary information: Supplementary data are available at Bioinformatics online.


2021 ◽  
Vol 2021 ◽  
pp. 1-17
Author(s):  
Olutosin Taiwo ◽  
Absalom E. Ezugwu

The smart home is now an established area of interest and research that contributes to comfort in modern homes. With the Internet being an essential part of communication in modern life, the IoT has allowed homes to go beyond mere buildings to become interactive abodes. The IoT has grown exponentially in many spheres of human life, including monitoring ecological factors, controlling the home and its appliances, and storing data generated by devices in the house in the cloud. A smart home includes multiple components, technologies, and devices that generate valuable data for predicting home and environment activities. This work presents the design and development of a ubiquitous, cloud-based intelligent home automation system. The system controls, monitors, and oversees the security of a home and its environment via an Android mobile application. One module controls and monitors electrical appliances and environmental factors, while another module oversees the home’s security by detecting motion and capturing images. Our work uses a camera to capture images of objects whose motion is detected. To avoid false alarms, we used machine learning to differentiate between images of regular home occupants and those of an intruder. A support vector machine algorithm is proposed in this study to classify the features of the captured image and determine whether it shows a regular home occupant or an intruder before sending an alarm to the user. The design of the mobile application allows a graphical display of the activities in the house. Our work demonstrates that machine learning algorithms can improve home automation system functionality and enhance home security. The prototype was implemented using an ESP8266 board, an ESP32-CAM board, a 5 V four-channel relay module, and sensors.
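The occupant-vs-intruder classification step can be sketched as follows. The feature vectors are random stand-ins for whatever image descriptors the prototype extracts from ESP32-CAM frames; the paper does not specify them, so everything below is illustrative.

```python
# Minimal sketch of the SVM classification step: separate feature vectors of
# known occupants from those of intruders, then flag an unseen sample.
# Classes are synthetic and well separated purely for demonstration.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
occupants = rng.normal(loc=0.0, scale=1.0, size=(50, 16))  # enrolled residents
intruders = rng.normal(loc=3.0, scale=1.0, size=(50, 16))  # unknown persons
X = np.vstack([occupants, intruders])
y = np.array([0] * 50 + [1] * 50)  # 0 = occupant, 1 = intruder

clf = SVC(kernel="rbf").fit(X, y)
# A new motion-triggered capture drawn from the "intruder" distribution:
alarm = int(clf.predict(rng.normal(3.0, 1.0, size=(1, 16)))[0])
```

In the described system, a prediction of 1 would trigger the alarm sent to the user's mobile application, while a prediction of 0 would be suppressed to avoid false alarms.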


2019 ◽  
Vol 70 (6) ◽  
pp. 454-464
Author(s):  
Omar Benmiloud ◽  
Salem Arif

Abstract A dynamic equivalent (DE) is an important tool for multi-area interconnected power systems. It allows stability assessment of a specific area (the area of interest) to be performed at minimum cost. This study investigates the dynamic equivalents of two relatively large power systems. The fourth-order model of synchronous generators with a simplified excitation system is used as an equivalent for the group of generators in the external system. To improve the accuracy of the estimated model, the identification is carried out in two stages: first, the global-search Sine Cosine Algorithm (SCA) is used to find a starting set of values; this set is then used as the starting point for fine-tuning by the Pattern Search (PS) algorithm. To increase the reliability of the model’s parameters, two disturbances are used so that the identification is not based on a single event. The developed program is applied to two standard power systems, namely the New England (NE) system and the Northeast Power Coordinating Council (NPCC) system. Simulation results confirm the ability of the optimized model to accurately preserve the main dynamic properties of the original system.
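The two-stage scheme, a global search providing a starting point for a local pattern-search-style refinement, can be sketched with SciPy. Here differential evolution stands in for the Sine Cosine Algorithm and Nelder-Mead for Pattern Search, and the misfit is a toy quadratic rather than a generator-response mismatch.

```python
# Two-stage parameter identification: global search, then local fine-tuning
# from the global result. The objective is an illustrative stand-in for the
# mismatch between original- and equivalent-system dynamic responses.
import numpy as np
from scipy.optimize import differential_evolution, minimize

TRUE = np.array([1.5, 0.3])  # hypothetical "true" equivalent-model parameters

def misfit(params):
    # Toy misfit: squared distance to the true parameters
    return float(np.sum((np.asarray(params) - TRUE) ** 2))

bounds = [(0.0, 5.0), (0.0, 1.0)]
# Stage 1: global search (differential evolution in place of SCA)
stage1 = differential_evolution(misfit, bounds, seed=0, maxiter=50)
# Stage 2: local refinement from the stage-1 point (Nelder-Mead in place of PS)
stage2 = minimize(misfit, stage1.x, method="Nelder-Mead")
```

The design rationale matches the abstract: the global stage avoids poor local minima, while the local stage sharpens the estimate cheaply from a good starting point.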


2016 ◽  
Vol 47 (1) ◽  
pp. 275
Author(s):  
E. Kokinou ◽  
C. Panagiotakis ◽  
Th. Kinigopoulos

Image processing, image understanding, and pattern recognition together comprise a valuable tool for the automatic extraction of information from digital topography. The aim of this work is the retrieval of areas with similar topography using digital elevation data. It can be applied in geomorphology, forestry, regional and urban planning, and many other applications for analyzing and managing natural resources. Specifically, the user selects the area of interest by navigating over a high-resolution elevation image and sets three parameters (step, number of local minima, and display scale). The regions with relief similar to the initial model are then determined. Experimental results show the high efficiency of the proposed scheme.


2018 ◽  
Vol 2018 ◽  
pp. 1-9 ◽  
Author(s):  
Samir Rustamov

We suggest differently structured hybrid systems for sentence-level subjectivity analysis based on three supervised machine learning algorithms, namely the Hidden Markov Model, the Fuzzy Control System, and the Adaptive Neuro-Fuzzy Inference System. The proposed feature extraction algorithm computes a feature vector using statistical term frequencies in a training dataset, without the use of any lexical knowledge other than tokenization. Given this, the above-mentioned methods may be employed in other languages, as they do not rely on morphological, syntactic, or lexical analysis for the classification problem.
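A minimal version of such a tokenization-only feature extractor is sketched below; the two-sentence training set is invented, and the point is that the vocabulary and counts come from whitespace tokenization alone, with no language-specific resources.

```python
# Term-frequency feature vectors built purely from tokenized training text,
# illustrating the language-independent extraction described in the abstract.
from collections import Counter

def build_vocab(training_sentences):
    """Map each token seen in training to a fixed feature index."""
    vocab = sorted({tok for s in training_sentences for tok in s.lower().split()})
    return {tok: i for i, tok in enumerate(vocab)}

def tf_vector(sentence, vocab):
    """Count occurrences of each vocabulary token in the sentence."""
    counts = Counter(sentence.lower().split())
    return [counts.get(tok, 0) for tok in sorted(vocab, key=vocab.get)]

vocab = build_vocab(["the movie was great", "the plot was boring"])
vec = tf_vector("the movie was boring", vocab)
```

Such vectors could then feed any of the three classifiers; nothing in the pipeline above depends on the language of the text.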


2021 ◽  
Author(s):  
Myeong Gyu Kim ◽  
Jae Hyun Kim ◽  
Kyungim Kim

BACKGROUND Garlic-related misinformation is prevalent whenever a virus outbreak occurs. With the outbreak of coronavirus disease 2019 (COVID-19), garlic-related misinformation is again spreading through social media sites, including Twitter. Machine learning-based approaches can be used to detect misinformation in vast numbers of tweets. OBJECTIVE This study aimed to develop machine learning algorithms for detecting misinformation about garlic and COVID-19 on Twitter. METHODS This study used 5,929 original tweets mentioning garlic and COVID-19. Tweets were manually labeled as misinformation, accurate information, or others. We tested the following algorithms: k-nearest neighbors; random forest; support vector machine (SVM) with linear, radial, and polynomial kernels; and neural network. Features for machine learning included user-based features (verified account, user type, number of followers, and follower rate) and text-based features (uniform resource locator, negation, sentiment score, Latent Dirichlet Allocation topic probability, number of retweets, and number of favorites). The model with the highest accuracy on the training dataset (70% of the overall dataset) was tested using a test dataset (the remaining 30%). Predictive performance was measured using overall accuracy, sensitivity, specificity, and balanced accuracy. RESULTS The SVM with polynomial kernel showed the highest accuracy, 0.670. The model also showed a balanced accuracy of 0.757, sensitivity of 0.819, and specificity of 0.696 for misinformation. Important features in the misinformation and accurate information classes included topic 4 (common myths), topic 13 (garlic-specific myths), number of followers, topic 11 (misinformation on social media), and follower rate. Topic 3 (cooking recipes) was the most important feature in the others class. CONCLUSIONS Our SVM model showed good performance in detecting misinformation.
The results of our study will help detect misinformation related to garlic and COVID-19. It could also be applied to prevent misinformation related to dietary supplements in the event of a future outbreak of a disease other than COVID-19.
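The evaluation protocol, a polynomial-kernel SVM trained on 70% of the labelled data and scored on the held-out 30% with balanced accuracy, can be sketched as follows. The data here are synthetic stand-ins for the real user- and text-based tweet features.

```python
# 70/30 train-test protocol with a polynomial-kernel SVM and balanced
# accuracy, mirroring the abstract's evaluation; features are synthetic.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 10))                 # stand-in tweet features
y = (X[:, 0] + X[:, 1] > 0).astype(int)        # toy "misinformation" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)
clf = SVC(kernel="poly", degree=3).fit(X_tr, y_tr)
bal_acc = balanced_accuracy_score(y_te, clf.predict(X_te))
```

Balanced accuracy averages per-class recall, which is why the study reports it alongside overall accuracy for an imbalanced three-class labelling task.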

