Automatic detection of avalanches combining array classification and localization

2019 ◽  
Vol 7 (2) ◽  
pp. 491-503 ◽  
Author(s):  
Matthias Heck ◽  
Alec van Herwijnen ◽  
Conny Hammer ◽  
Manuel Hobiger ◽  
Jürg Schweizer ◽  
...  

Abstract. We used continuous data from a seismic monitoring system to automatically determine the avalanche activity at a remote field site above Davos, Switzerland. The approach is based on combining a machine learning algorithm with array processing techniques to provide an operational method capable of near real-time classification. First, we used a recently developed method based on hidden Markov models (HMMs) to automatically identify events in continuous seismic data using only a single training event. For the 2016–2017 winter period, this resulted in 117 events. Second, to eliminate falsely classified events such as airplanes and local earthquakes, we implemented an additional HMM-based classifier at a second array 14 km away. By cross-checking the results of both arrays, we reduced the number of classifications by about 50 %. In a third and final step we used multiple signal classification (MUSIC), an array processing technique, to determine the direction of the source. As snow avalanches recorded at our arrays typically generate signals with small changes in source direction, events with large changes were dismissed. From the 117 initially detected events during the 4-month period, our classification workflow removed 96 events. The majority of the remaining 21 events were on 9 and 10 March 2017, in line with visual avalanche observations in the Davos region. Our results suggest that the classification workflow presented could be used to identify major avalanche periods and highlight the importance of array processing techniques for the automatic classification of avalanches in seismic data.
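The two filtering steps described above (cross-checking detections between the two arrays, then dismissing events whose MUSIC source direction changes strongly) can be sketched as follows. This is a minimal illustration, not the authors' code: the event format (trigger time in seconds plus a list of MUSIC back-azimuths in degrees) and the 30 s / 30° thresholds are assumptions for the example, and azimuth wrap-around at 360° is ignored.

```python
# Hypothetical event format: (trigger_time_s, [back_azimuths_deg from MUSIC]).

def cross_check(events_a, events_b, tol_s=30.0):
    """Dismiss events also detected at the second array within tol_s seconds;
    far-field sources (airplanes, regional earthquakes) reach both arrays,
    while avalanches are only seen locally."""
    times_b = [t for t, _ in events_b]
    return [(t, baz) for t, baz in events_a
            if not any(abs(t - tb) <= tol_s for tb in times_b)]

def azimuth_stable(baz_deg, max_change=30.0):
    """Keep events whose MUSIC back-azimuth varies only little over the signal
    (no wrap-around handling in this sketch)."""
    return max(baz_deg) - min(baz_deg) <= max_change

def classify(events_a, events_b):
    """Apply both filters: array cross-check, then source-direction stability."""
    kept = cross_check(events_a, events_b)
    return [(t, baz) for t, baz in kept if azimuth_stable(baz)]
```

With these thresholds, an event seen at both arrays or sweeping a wide azimuth range is dismissed, mirroring the reduction from 117 to 21 events reported above.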

2018 ◽  
Author(s):  
Matthias Heck ◽  
Alec van Herwijnen ◽  
Conny Hammer ◽  
Manuel Hobiger ◽  
Jürg Schweizer ◽  
...  

Abstract. We use a seismic monitoring system to automatically determine the avalanche activity at a remote field site near Davos, Switzerland. Using a recently developed approach based on hidden Markov models (HMMs), a machine learning algorithm, we were able to automatically identify avalanches in continuous seismic data from as little as a single training event. Furthermore, we implemented an operational method to provide near real-time classification results. For the 2016–2017 winter period, 117 events were automatically identified. Falsely classified events such as airplanes and local earthquakes were filtered using a new approach comprising two additional classification steps. In a first step, we implemented a second HMM-based classifier at a second array 14 km away to automatically identify airplanes and earthquakes. By cross-checking the results of both arrays, we reduced the number of false classifications by about 50 %. In a second step, we used multiple signal classification (MUSIC), an array processing technique, to determine the direction of the source. Although avalanches are moving sources, the source direction of avalanches recorded at our arrays typically changes only little, whereas false classifications showed large changes in source direction and were therefore dismissed. Of the 117 events detected during the 4-month period, these two additional steps identified 90 false classifications. The avalanche activity obtained from the remaining 27 avalanche events was in line with visual observations performed in the area of Davos.


2021 ◽  
Author(s):  
Yuan-Yuan (Annie) Chang ◽  
Konrad Bogner ◽  
Massimiliano Zappa ◽  
Daniela I.V. Domeisen ◽  
Christian M. Grams

<p>Across the globe, there has been increasing interest in improving the predictability of weekly to monthly (sub-seasonal) hydro-meteorological forecasts, as they play a valuable role in medium- to long-term planning in many sectors such as agriculture, navigation, hydro-power production, and hazard warnings. The Precipitation-Runoff-Evapotranspiration HRU model (PREVAH) has previously been set up with the raw meteorological forcing of 51 ensemble members and 32 days of lead time taken from the operational European Centre for Medium-Range Weather Forecasts (ECMWF) extended-range forecast. The PREVAH model is used to generate hydrological forecasts for the study area, which consists of 300 catchments covering approximately the entire area of Switzerland. The primary goal of this study is to improve the quality of the categorical forecast of whether the weekly mean total discharge in a catchment lies in the lower, normal, or upper tercile of the climatological distribution at a monthly horizon. Therefore, we explore <span>an approach to post-process the PREVAH outputs using a Gaussian process, a machine learning algorithm</span>. Weather regime (WR) data, based on 500 hPa geopotential height in the Atlantic-European region, are used as an additional feature to further enhance the post-processing performance.</p><p>By comparing the overall accuracy and the ranked probability skill score of the post-processed forecasts with those of the raw forecasts, we show that the proposed post-processing techniques are able to improve the forecast skill. The degree of improvement varies by catchment, lead time and variable. The benefit of the added WR data is not consistent across the study area but is most promising in high-altitude catchments with steep slopes. Among the seven types of WRs, the majority of the corrections are observed when either a European blocking or a Scandinavian blocking is forecast as the dominant weather regime.
By applying a “best practice” to each individual catchment, i.e. the post-processing technique with the highest accuracy among the different proposed techniques, a median accuracy of 0.65 (improved from 0.53 with no post-processing) can be achieved at 4-week lead time. Due to the small data size, these conclusions should be considered preliminary, but this study highlights the potential of improving the skill of sub-seasonal hydro-meteorological forecasts using weather regime data and machine learning in a real-time deployable setup.</p>
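The ranked probability skill score used above to compare post-processed and raw tercile forecasts can be computed as in the following minimal NumPy sketch. It assumes forecasts are given as per-category probabilities and uses the climatological (1/3, 1/3, 1/3) tercile forecast as the reference, which is standard but not stated explicitly in the abstract.

```python
import numpy as np

def rps(forecast_probs, obs_category):
    """Ranked probability score for categorical (e.g. tercile) forecasts.
    forecast_probs: (n, n_cat) probabilities; obs_category: (n,) ints in [0, n_cat).
    RPS sums squared differences of the cumulative forecast and observation."""
    f_cum = np.cumsum(forecast_probs, axis=1)
    obs = np.zeros_like(forecast_probs)
    obs[np.arange(len(obs_category)), obs_category] = 1.0
    o_cum = np.cumsum(obs, axis=1)
    return np.mean(np.sum((f_cum - o_cum) ** 2, axis=1))

def rpss(forecast_probs, obs_category):
    """Skill relative to the climatological forecast (equal tercile odds).
    1 = perfect, 0 = no better than climatology, negative = worse."""
    n, k = forecast_probs.shape
    clim = np.full((n, k), 1.0 / k)
    return 1.0 - rps(forecast_probs, obs_category) / rps(clim, obs_category)
```

A perfect categorical forecast yields an RPS of 0 and an RPSS of 1; a forecast no sharper than climatology yields an RPSS near 0.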


Author(s):  
Eimad Abdu Abusham

Detecting plant diseases with traditional methods such as the naked eye can lead to incorrect identification and classification of the diseases. Consequently, these traditional methods can strongly contribute to crop losses. Image processing techniques have been used as an approach to detect and classify plant diseases. This study focuses on the diseases affecting the leaves of al-berseem and on how image processing techniques can be used to detect them. Early detection of diseases is important for quickly finding an appropriate treatment and avoiding economic losses. Detection of a plant disease is based on the symptoms and signs that appear on the leaves. The detection steps include image preprocessing, segmentation, and identification. In the preprocessing stage, image noise is removed in MATLAB and features such as energy, mean, and homogeneity are extracted. K-means clustering is used to detect the affected area of the leaves. Finally, KNN is used to recognize unhealthy leaves and determine the disease type: fungal diseases, pest diseases (shall), leaf miner (red spider), and nutrient deficiency (yellow leaf); these four types of diseases are detected in this study. Identification is the last step, in which the disease is identified and classified.
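The k-means segmentation step could look like the following NumPy sketch (the study works in MATLAB; this is an illustrative analogue on raw RGB pixel values). The assumption that the darkest colour cluster corresponds to the affected area is a heuristic chosen for this example, not a rule stated in the study.

```python
import numpy as np

def kmeans(pixels, k=3, iters=50):
    """Plain k-means on (n, 3) RGB rows with deterministic
    farthest-point initialisation; returns labels and centroids."""
    pts = pixels.astype(float)
    centroids = [pts[0]]
    for _ in range(1, k):
        # next seed: the point farthest from all current seeds
        d = np.min([np.linalg.norm(pts - c, axis=1) for c in centroids], axis=0)
        centroids.append(pts[d.argmax()])
    centroids = np.array(centroids)
    for _ in range(iters):
        d = np.linalg.norm(pts[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = pts[labels == j].mean(axis=0)
    return labels, centroids

def diseased_mask(pixels, k=2):
    """Heuristic for the example: treat the darkest cluster
    (lowest mean intensity) as the affected (lesion) area."""
    labels, centroids = kmeans(pixels, k)
    lesion = centroids.mean(axis=1).argmin()
    return labels == lesion
```

The resulting boolean mask isolates candidate lesion pixels, from which texture features could then be computed for the KNN classification stage.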


2019 ◽  
Vol 91 (1) ◽  
pp. 370-389
Author(s):  
Michał Chamarczuk ◽  
Yohei Nishitsuji ◽  
Michał Malinowski ◽  
Deyan Draganov

Abstract We present a method for automatic detection and classification of seismic events from continuous ambient‐noise (AN) recordings using an unsupervised machine‐learning (ML) approach. We combine classic and recently developed array‐processing techniques with ML, enabling the use of unsupervised techniques in the routine processing of continuous data. We test our method on a dataset from a large‐number (large‐N) array, which was deployed over the Kylylahti underground mine (Finland), and show the potential to automatically process and cluster the volumes of AN data. Automatic sorting of detected events into different classes allows faster data analysis and facilitates the selection of desired parts of the wavefield for imaging (e.g., using seismic interferometry) and monitoring. First, using array‐processing techniques, we obtain directivity, location, velocity, and frequency representations of the AN data. Next, we transform these representations into vector‐shaped matrices. The transformed data are input into a clustering algorithm (k‐means) to define groups of similar events, and optimization methods (the elbow and silhouette tests) are used to obtain the optimal number of clusters. We use these techniques to obtain the optimal number of classes that characterize the AN recordings and consequently assign the proper class membership (cluster) to each data sample. For the Kylylahti AN data, the unsupervised clustering produced 40 clusters. After visual inspection of events belonging to the different clusters, quality controlled by the silhouette method, we confirm the reliability of 10 clusters with a prediction accuracy higher than 90%. The obtained division into separate seismic‐event classes proves the feasibility of the unsupervised ML approach to advance the automation of processing and the utilization of array AN data. Our workflow is very flexible and can easily be adapted for other input features and classification algorithms.
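The silhouette test mentioned above, used to quality-control the clusters, can be sketched as a minimal NumPy implementation of the standard silhouette score (not the authors' code): for each sample, a is the mean distance to its own cluster and b the smallest mean distance to any other cluster.

```python
import numpy as np

def silhouette_score(X, labels):
    """Mean silhouette over all samples: s = (b - a) / max(a, b).
    Values near 1 indicate compact, well-separated clusters;
    values near 0 or below indicate overlapping clusters."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    idx = np.arange(len(X))
    scores = []
    for i, li in enumerate(labels):
        same = labels == li
        # mean intra-cluster distance, excluding the sample itself
        a = D[i, same & (idx != i)].mean() if same.sum() > 1 else 0.0
        # smallest mean distance to any other cluster
        b = min(D[i, labels == lj].mean() for lj in set(labels) if lj != li)
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))
```

Sweeping k and picking the value that maximizes this score (or the elbow of the within-cluster variance curve) is one way to arrive at an optimal cluster count such as the 40 clusters reported for the Kylylahti data.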


2020 ◽  
Vol 224 (2) ◽  
pp. 1133-1140
Author(s):  
Chloé Gradon ◽  
Philippe Roux ◽  
Ludovic Moreau ◽  
Albanne Lecointre ◽  
Yehuda Ben Zion

SUMMARY We analyse dominant sources identified in a catalogue of more than 156 000 localizations performed using a 26-d data set recorded by a dense array on the San Jacinto fault near Anza, California. Events were localized using an array processing technique called matched-field processing. As for all array processing techniques, the quality of the event position decreases when the events are outside of the array. We therefore separate localizations inside and outside the array using simple geometrical conditions. We compare the time distribution of the localizations to additional data, such as meteorological data and days of human activity, as well as to existing catalogues, to determine the nature of the dominant events located with our method. We find that most of the events located outside of the array can be attributed to a surface structure excited by wind. On the other hand, part of the localizations under the array occur during regional earthquakes and could correspond to diffraction on the fault's heterogeneities. The remaining localizations inside the array could be generated by the fault itself.
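The simple geometrical condition for separating localizations inside and outside the array can be sketched as a convex-polygon containment test. The perimeter-station input and its counter-clockwise ordering are assumptions of this example, not details from the paper.

```python
def inside_convex_array(point, stations):
    """Check whether a localized event lies inside the convex perimeter of the
    array. `stations` lists perimeter station coordinates in counter-clockwise
    order; a point is inside if it lies left of (or on) every edge."""
    px, py = point
    n = len(stations)
    for i in range(n):
        x1, y1 = stations[i]
        x2, y2 = stations[(i + 1) % n]
        # 2-D cross product of edge vector and edge-to-point vector
        cross = (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)
        if cross < 0:
            return False
    return True
```

Events failing this test would fall in the lower-quality "outside the array" population, which the study largely attributes to a wind-excited surface structure.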


Author(s):  
Shubha Kadambe

Even though there are distinct areas for different functionalities in the mammalian neo-cortex, it seems to use the same algorithm to understand a large variety of input modalities. In addition, it appears that the neo-cortex effortlessly identifies the correlation among many sensor modalities and fuses information obtained from them. The question then is, can we discover the brain’s learning algorithm and approximate it for problems such as computer vision and automatic speech recognition that the mammalian brain is so good at? The answer is: it is an orders of magnitude problem, i.e., not a simple task. However, we can attempt to develop mathematical foundations based on the understanding of how a human brain learns. This chapter is focused along that direction. In particular, it is focused on the ventral stream – the “what pathway” – and describes common algorithms that can be used for representation and classification of signals from different sensor modalities such as auditory and visual. These common algorithms are based on dictionary learning with a beta process, hierarchical graphical models, and embedded hidden Markov models.


2019 ◽  
Vol 8 (4) ◽  
pp. 8797-8801

In this paper we explore the effectiveness of language features for identifying the sentiment of Twitter messages. We assess the utility of existing lexical resources as well as features that capture the informal and creative language used in microblogging. We take a supervised approach to the problem, but use existing hashtags in the Twitter data to create training data. We use three separate Twitter message corpora in our experiments. We use the hashtagged data set (HASH), which we compile from the Edinburgh Twitter corpus, for development and training, and the emoticon data set (EMOT) from the I Sieve Corporation (ISIEVE) for evaluation. Twitter contains a huge amount of data, which may be structured or unstructured. By applying preprocessing techniques to this data, we are able to read the comments from the users, and the comments are classified into three categories: positive, negative, and neutral. Today, natural language processing, information retrieval, and text interpretation are used to derive and classify text sentiment into positive, negative, and neutral categories. We also examine the utility of language features for identifying the sentiment of Twitter messages. In addition, state-of-the-art approaches take into consideration only the tweet to be classified when classifying its sentiment; they ignore its context (i.e. related tweets). Since tweets are usually short and often ambiguous, however, it is sometimes not enough to consider only the current tweet for sentiment classification. This paper also compares sentiment analysis approaches for evaluating political views using the Naïve Bayes supervised machine learning algorithm, which performs better than the other techniques analysed.
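A Naïve Bayes classifier of the kind referred to above can be sketched as follows: a minimal multinomial Naive Bayes with add-one (Laplace) smoothing over bag-of-words features. The toy tweets below are an illustration only, not the HASH/EMOT data, and the sketch omits the neutral class and any lexical or microblogging-specific features.

```python
import math
from collections import Counter

class NaiveBayesSentiment:
    """Multinomial Naive Bayes over bag-of-words features
    with add-one smoothing."""

    def fit(self, tweets, labels):
        self.classes = set(labels)
        self.word_counts = {c: Counter() for c in self.classes}
        self.class_counts = Counter(labels)
        self.vocab = set()
        for tweet, label in zip(tweets, labels):
            for word in tweet.lower().split():
                self.word_counts[label][word] += 1
                self.vocab.add(word)
        return self

    def predict(self, tweet):
        def log_score(c):
            # log prior + smoothed log likelihood of each token
            total = sum(self.word_counts[c].values())
            lp = math.log(self.class_counts[c] / sum(self.class_counts.values()))
            for w in tweet.lower().split():
                lp += math.log((self.word_counts[c][w] + 1) /
                               (total + len(self.vocab)))
            return lp
        return max(self.classes, key=log_score)
```

In a hashtag-supervised setup, the training labels would come from sentiment-bearing hashtags rather than manual annotation, with the classifier itself unchanged.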


Author(s):  
Yashpal Jitarwal ◽  
Tabrej Ahamad Khan ◽  
Pawan Mangal

In earlier times, fruits were sorted manually, which was a very time-consuming and laborious task. Humans sorted the fruits on the basis of shape, size and color. Since the time taken by humans to sort the fruits is very large, automatic classification of fruits came into existence to reduce the time and increase the accuracy. To improve on human inspection and reduce the time required for fruit sorting, an advanced technique has been developed that accepts information about fruits from their images; it is called the Image Processing Technique.

