Comparative analysis of hyperspectral anomaly detection strategies on a new high spatial and spectral resolution data set

Author(s):  
Stefania Matteoli ◽  
Francesca Carnesecchi ◽  
Marco Diani ◽  
Giovanni Corsini ◽  
Leandro Chiarantini
2016 ◽  
Vol 9 (3) ◽  
pp. 1051-1062 ◽  
Author(s):  
Andreas Engel ◽  
Harald Bönisch ◽  
Tim Schwarzenberger ◽  
Hans-Peter Haase ◽  
Katja Grunow ◽  
...  

Abstract. MIPAS-Envisat is a satellite-borne sensor that measured vertical profiles of a wide range of trace gases from 2002 to 2012 using IR emission spectroscopy. We present a geophysical validation of the European Space Agency (ESA) operational retrieval (version 6.0) of N2O, CH4, CFC-12, and CFC-11 from MIPAS-Envisat. The geophysical validation data are derived from measurements of samples collected by a cryogenic whole-air sampler flown to altitudes of up to 34 km by means of large scientific balloons. In order to increase the number of coincidences between the satellite and the balloon observations, we applied a trajectory matching technique. The results are presented for different time periods because of a change in the spectroscopic resolution of MIPAS in early 2005. Retrieval results for N2O, CH4, and CFC-12 show good agreement in some altitude regions, although the degree of agreement differs between the periods with different spectroscopic resolution. The more recent low-spectral-resolution data above 20 km altitude agree within the combined uncertainties, whereas the earlier high-spectral-resolution data set tends to underestimate these species above 25 km. The earlier high-spectral-resolution data also show a significant overestimation of the mixing ratios of N2O, CH4, and CFC-12 below 20 km. These differences need to be considered when using these data. The CFC-11 results from operational retrieval version 6.0 cannot be recommended for scientific studies because of a systematic overestimation of the CFC-11 mixing ratios at all altitudes.
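The abstract's agreement criterion (differences within the combined uncertainties) can be sketched as a simple quadrature check; the altitude grid, mixing ratios, and uncertainty values below are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Hypothetical coincident N2O profiles (ppb) on a common altitude grid (km).
altitude_km   = np.array([15.0, 20.0, 25.0, 30.0])
n2o_mipas     = np.array([250.0, 180.0, 95.0, 40.0])   # satellite retrieval
n2o_balloon   = np.array([235.0, 176.0, 99.0, 43.0])   # whole-air-sampler values
sigma_mipas   = np.array([15.0, 12.0, 10.0, 8.0])      # 1-sigma retrieval uncertainty
sigma_balloon = np.array([5.0, 4.0, 4.0, 3.0])         # 1-sigma measurement uncertainty

# Combined 1-sigma uncertainty (added in quadrature) and agreement flag per level.
combined = np.sqrt(sigma_mipas**2 + sigma_balloon**2)
agrees_within_combined = np.abs(n2o_mipas - n2o_balloon) <= combined

for z, d, c, ok in zip(altitude_km, n2o_mipas - n2o_balloon, combined, agrees_within_combined):
    print(f"{z:5.1f} km: diff = {d:+6.1f} ppb, combined sigma = {c:5.1f} ppb, agree = {ok}")
```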


Author(s):  
K. Siangchaew ◽  
J. Bentley ◽  
M. Libera

Energy-filtered electron-spectroscopic TEM imaging provides a new way to study the microstructure of polymers without heavy-element stains. Since spectroscopic imaging exploits the signal generated directly by the electron-specimen interaction, it can produce richer and higher-resolution data than are possible with most staining methods. There are essentially two ways to collect filtered images (fig. 1). Spectrum imaging uses a focused probe that is digitally rastered across the specimen, with an entire energy-loss spectrum collected at each x-y pixel, to produce a 3-D data set. Alternatively, filtering schemes such as the Zeiss Omega filter and the Gatan Imaging Filter (GIF) acquire individual 2-D images formed by electrons within a defined energy-loss window (δE), typically 5-20 eV.
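As a rough illustration of the two acquisition modes, the sketch below treats a spectrum image as a 3-D array and extracts an energy-filtered 2-D image from it; the cube dimensions, energy axis, and 10 eV window are invented for illustration.

```python
import numpy as np

# Hypothetical spectrum image: a 3-D cube of counts with axes (x, y, energy loss).
nx, ny, n_energy = 64, 64, 500
energy_loss_eV = np.linspace(0.0, 500.0, n_energy)       # energy-loss axis in eV
cube = np.random.poisson(lam=5.0, size=(nx, ny, n_energy)).astype(float)

# An energy-filtered 2-D image corresponds to summing the cube over a defined
# energy-loss window (here 10 eV wide, placed near the carbon K edge), much as
# an Omega filter or GIF selects a window with its energy-selecting slit.
window = (energy_loss_eV >= 280.0) & (energy_loss_eV < 290.0)
filtered_image = cube[:, :, window].sum(axis=2)           # shape (nx, ny)

# Conversely, each (x, y) pixel of the spectrum image holds a full spectrum.
single_pixel_spectrum = cube[10, 20, :]                   # shape (n_energy,)
print(filtered_image.shape, single_pixel_spectrum.shape)
```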


Author(s):  
Luigi Leonardo Palese

In 2019, an outbreak occurred that resulted in a global pandemic. The causative agent of this serious global health threat was a coronavirus similar to the agent of SARS, referred to as SARS-CoV-2. In this work, an analysis of the available structures of the SARS-CoV-2 main protease has been performed. From a data set of crystallographic structures, the dynamics of the protease have been obtained. Furthermore, a comparative analysis of the structures of SARS-CoV-2 with those of the main protease of the coronavirus responsible for SARS (SARS-CoV) was carried out. The results of these studies suggest that, although the main proteases of SARS-CoV and SARS-CoV-2 are similar at the backbone level, some plasticity at the substrate binding site can be observed. The consequences of these structural aspects for the search for effective inhibitors of these enzymes are discussed, with a focus on already known compounds. The results obtained show that compounds containing an oxirane ring could be considered as inhibitors of the main protease of SARS-CoV-2.
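One common way to extract ensemble dynamics from a set of crystal structures is principal component analysis of superposed C-alpha coordinates; the sketch below assumes pre-aligned coordinates and uses random placeholders, and is not necessarily the procedure used in the paper.

```python
import numpy as np

# Hypothetical ensemble: n_structures crystal structures, each with n_res C-alpha
# atoms, already superposed onto a common reference frame (n_structures x n_res x 3).
n_structures, n_res = 200, 306            # SARS-CoV-2 Mpro has 306 residues per protomer
rng = np.random.default_rng(0)
coords = rng.normal(size=(n_structures, n_res, 3))   # placeholder for real coordinates

# PCA of the ensemble: flatten each structure to a 3N vector, subtract the mean
# structure, and obtain the principal components via SVD of the centered matrix.
X = coords.reshape(n_structures, -1)
X_centered = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)

explained_variance = S**2 / (n_structures - 1)
fraction = explained_variance / explained_variance.sum()
print("Variance captured by first 3 components:", fraction[:3].round(3))

# Per-residue displacement amplitude along the first principal component, a rough
# indicator of where the ensemble shows plasticity (e.g. binding-site loops).
pc1_per_residue = np.linalg.norm(Vt[0].reshape(n_res, 3), axis=1)
```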


Author(s):  
Ritu Khandelwal ◽  
Hemlata Goyal ◽  
Rajveer Singh Shekhawat

Introduction: Machine learning is an intelligent technology that acts as a bridge between business and data science. With data science, the business goal is to extract valuable insights from the available data. A large part of Indian cinema is Bollywood, a multi-million-dollar industry. This paper attempts to predict whether an upcoming Bollywood movie will be a Blockbuster, Superhit, Hit, Average, or Flop. For this, machine learning techniques (classification and prediction) are applied. To build a classifier or prediction model, the first step is the learning stage, in which a training data set is used to train the model with a chosen technique or algorithm; the rules generated in this stage constitute the model and are used to predict future trends in different types of organizations.
Methods: Classification and prediction techniques such as Support Vector Machine (SVM), Random Forest, Decision Tree, Naïve Bayes, Logistic Regression, AdaBoost, and KNN are applied in order to find efficient and effective results. All of these functionalities can be applied through GUI-based workflows organized into categories such as Data, Visualize, Model, and Evaluate.
Result: After the learning stage on the training data set, the generated rules form a model that can be used to predict future trends for different types of organizations.
Conclusion: A comparative analysis is performed on the basis of parameters such as accuracy and the confusion matrix to identify the best possible model for predicting movie success. Using the predicted success rate, production houses can plan advertising campaigns and choose the best release time to gain higher benefits.
Discussion: Data mining is the process of discovering patterns and relationships in large data sets in order to solve business problems and predict forthcoming trends. Such predictions can help production houses plan advertising campaigns and budgets, making a movie more profitable.
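A minimal sketch of the kind of classifier comparison described above, using scikit-learn on a synthetic stand-in data set; the features, labels, and default model settings are placeholders rather than the paper's actual Bollywood data or tuning.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

# Five outcome classes standing in for Blockbuster, Superhit, Hit, Average, Flop.
X, y = make_classification(n_samples=1000, n_features=12, n_informative=8,
                           n_classes=5, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y)

models = {
    "SVM": SVC(),
    "Random Forest": RandomForestClassifier(random_state=42),
    "Decision Tree": DecisionTreeClassifier(random_state=42),
    "Naive Bayes": GaussianNB(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "AdaBoost": AdaBoostClassifier(random_state=42),
    "KNN": KNeighborsClassifier(),
}

# Compare models on accuracy and the confusion matrix, as in the paper's evaluation.
for name, model in models.items():
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    print(f"{name}: accuracy = {accuracy_score(y_test, y_pred):.3f}")
    print(confusion_matrix(y_test, y_pred))
```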


Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2532
Author(s):  
Encarna Quesada ◽  
Juan J. Cuadrado-Gallego ◽  
Miguel Ángel Patricio ◽  
Luis Usero

Anomaly detection research focuses on the development and application of methods that identify data points that are different enough, compared with the rest of the data set being analyzed, to be considered anomalies (or, as they are more commonly called, outliers). These values mainly originate from two sources: they may be errors introduced during the collection or handling of the data, or they may be correct but very different from the rest of the values. It is essential to identify each type correctly: in the first case the values must be removed from the data set, while in the second case they must be carefully analyzed and taken into account. The correct selection and use of the model applied to a specific problem is fundamental to the success of an anomaly detection study; in many cases a single model cannot provide sufficient results, which can only be reached with a mixture model obtained by integrating existing and/or ad hoc-developed models. This is the kind of model that is developed and applied to solve the problem presented in this paper. This study deals with the definition and application of an anomaly detection model that combines statistical models with a new method defined by the authors, the Local Transilience Outlier Identification Method, in order to improve the identification of outliers in the sensor-obtained values of the variables that affect the operation of wind tunnels. The correct detection of outliers in the variables involved in wind tunnel operation is very important for the industrial ventilation systems industry, especially for vertical wind tunnels, which are used as training facilities for indoor skydiving, because incorrect performance of such devices may put human lives at risk. Consequently, the use of the presented model for outlier detection may have a high impact in this industrial sector. In this research work, a proof of concept is carried out with data from a real installation in order to test the proposed anomaly analysis method and its application to monitoring the correct performance of wind tunnels.
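As an illustration of the statistical side of such a combined model, the sketch below flags outliers in sensor readings with a robust median/MAD z-score; it is a generic baseline only, not the authors' Local Transilience Outlier Identification Method, which is not described in the abstract.

```python
import numpy as np

def robust_zscore_outliers(values, threshold=3.5):
    """Flag outliers with a median/MAD robust z-score.

    A generic statistical baseline of the kind the paper combines with its own
    Local Transilience Outlier Identification Method (not reproduced here).
    """
    values = np.asarray(values, dtype=float)
    median = np.median(values)
    mad = np.median(np.abs(values - median))
    if mad == 0.0:
        return np.zeros(values.shape, dtype=bool)
    # 0.6745 scales the MAD so the score is comparable to a standard z-score.
    robust_z = 0.6745 * (values - median) / mad
    return np.abs(robust_z) > threshold

# Hypothetical sensor readings of a wind-tunnel variable (e.g. a vibration level).
readings = np.array([0.51, 0.49, 0.50, 0.52, 0.48, 1.90, 0.50, 0.47])
print(robust_zscore_outliers(readings))   # only the sixth reading is flagged
```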


2021 ◽  
pp. 1-11
Author(s):  
Velichka Traneva ◽  
Stoyan Tranev

Analysis of variance (ANOVA), developed by Fisher, is an important method in data analysis. There are situations in which the data are imprecise. In order to analyze such data, the aim of this paper is to introduce, for the first time, an intuitionistic fuzzy two-factor ANOVA (2-D IFANOVA) without replication as an extension of the classical ANOVA and the one-way IFANOVA to the case where the data are intuitionistic fuzzy rather than real numbers. The proposed approach employs the apparatus of intuitionistic fuzzy sets (IFSs) and index matrices (IMs). The paper also analyzes a unique data set of daily ticket sales over one year in a multiplex of Cinema City Bulgaria, part of the Cineworld PLC Group, applying both the classical two-factor ANOVA and the proposed 2-D IFANOVA to study the influence of the “season” and “ticket price” factors. A comparative analysis of the results obtained by applying ANOVA and 2-D IFANOVA to the real data set is also presented.
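For reference, a minimal implementation of the classical two-factor ANOVA without replication that the paper takes as its starting point; the 4x3 ticket-sales table below is invented, and the intuitionistic fuzzy extension (2-D IFANOVA) is not reproduced here.

```python
import numpy as np
from scipy import stats

def two_way_anova_no_replication(x):
    """Classical two-factor ANOVA without replication.

    x is an r-by-c matrix: rows are levels of factor A (e.g. season), columns are
    levels of factor B (e.g. ticket price), one observation per cell.
    Returns (F_A, p_A, F_B, p_B).
    """
    x = np.asarray(x, dtype=float)
    r, c = x.shape
    grand = x.mean()

    ss_a = c * np.sum((x.mean(axis=1) - grand) ** 2)   # factor A (rows)
    ss_b = r * np.sum((x.mean(axis=0) - grand) ** 2)   # factor B (columns)
    ss_total = np.sum((x - grand) ** 2)
    ss_err = ss_total - ss_a - ss_b                    # residual sum of squares

    df_a, df_b, df_err = r - 1, c - 1, (r - 1) * (c - 1)
    f_a = (ss_a / df_a) / (ss_err / df_err)
    f_b = (ss_b / df_b) / (ss_err / df_err)
    p_a = stats.f.sf(f_a, df_a, df_err)
    p_b = stats.f.sf(f_b, df_b, df_err)
    return f_a, p_a, f_b, p_b

# Hypothetical ticket-sales table: 4 seasons (rows) x 3 price bands (columns).
sales = [[120, 150, 90],
         [200, 230, 160],
         [180, 210, 140],
         [100, 130, 80]]
print(two_way_anova_no_replication(sales))
```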


2015 ◽  
Vol 53 (2) ◽  
pp. 869-882 ◽  
Author(s):  
Eva M. Ampe ◽  
Dries Raymaekers ◽  
Erin L. Hestir ◽  
Maarten Jansen ◽  
Els Knaeps ◽  
...  

2005 ◽  
Vol 13 ◽  
pp. 811-812
Author(s):  
Guillaume Hébrard

Abstract. The first detection and identification of deuterium Balmer lines were recently reported in H II regions, using high spectral resolution data secured at CFHT and VLT. The D I lines appear as faint, narrow emission features in the blue wings of the H I Balmer lines and can be distinguished from high-velocity H I emission. The identification as deuterium and the excitation mechanism as fluorescence are both established beyond doubt. The deuterium Balmer series might lead to a new, optical method of deuterium abundance measurement in the interstellar medium. This may be the only way to observe atomic deuterium in objects like the Magellanic Clouds or low-metallicity blue compact galaxies.
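A quick back-of-the-envelope check (not from the paper) of why the D I lines fall in the blue wings: the finite-nuclear-mass correction to the Rydberg constant places deuterium Balmer lines about 82 km/s blueward of the corresponding hydrogen lines.

```python
# Isotopic shift of the Balmer lines from the reduced-mass correction.
c_km_s = 299792.458            # speed of light, km/s
m_e = 5.4857990907e-4          # electron mass, atomic mass units
m_p = 1.007276466879           # proton mass, u
m_d = 2.013553212745           # deuteron mass, u

# Wavelengths scale as (1 + m_e/M) through R_M = R_inf / (1 + m_e/M), so the
# fractional shift of a D line relative to the corresponding H line is:
frac_shift = (m_e / m_d - m_e / m_p) / (1.0 + m_e / m_p)
velocity_shift_km_s = c_km_s * frac_shift
print(f"D I Balmer lines offset: {velocity_shift_km_s:.1f} km/s")   # about -81.6 km/s
```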


Author(s):  
Rupam Mukherjee

For prognostics in industrial applications, the degree of anomaly of a test point with respect to a baseline cluster is estimated using a statistical distance metric. Among the different statistical distance metrics, energy distance is an interesting concept based on Newton's law of gravitation, promising simpler computation than classical distance metrics. In this paper, we review the state-of-the-art formulations of energy distance and point out several reasons why they are not directly applicable to the anomaly detection problem. We therefore propose a new energy-based metric, called the P-statistic, which addresses these issues, is applicable to anomaly detection, and retains the computational simplicity of the energy distance. We also demonstrate its effectiveness on a real-life data set.
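For context, a minimal sketch of the standard sample energy distance (Székely's formulation) that the paper reviews; the baseline cluster and test point are synthetic, and the proposed P-statistic is not defined in the abstract and is not reproduced here.

```python
import numpy as np

def energy_distance(x, y):
    """Sample energy distance between two multivariate samples.

    Implements 2*E||X-Y|| - E||X-X'|| - E||Y-Y'|| with sample averages of
    pairwise Euclidean distances.
    """
    x = np.atleast_2d(np.asarray(x, dtype=float))
    y = np.atleast_2d(np.asarray(y, dtype=float))
    d_xy = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
    d_xx = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    d_yy = np.linalg.norm(y[:, None, :] - y[None, :, :], axis=-1)
    return 2.0 * d_xy.mean() - d_xx.mean() - d_yy.mean()

# Hypothetical anomaly scoring: baseline cluster vs. a single test point.
rng = np.random.default_rng(1)
baseline = rng.normal(loc=0.0, scale=1.0, size=(100, 3))
test_point = np.array([[4.0, 4.0, 4.0]])
print(energy_distance(baseline, test_point))
```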

