Comparative Analysis of Ellipsoidal Height and Shuttle Radar Topographic Mission Elevation

2020 ◽  
Vol 24 (8) ◽  
pp. 1397-1402
Author(s):  
V.A. Ijaware

Ellipsoidal elevation represents a precise geospatial data type for the analysis and modelling of the various hydrological and ecological phenomena involved in preserving the human environment. Likewise, the Shuttle Radar Topographic Mission (SRTM) has created an unparalleled data set of global elevations that is freely available for modelling ubiquitous environmental applications. This research carries out a comparative analysis of ellipsoidal heights and SRTM heights with the following objectives: downloading the SRTM DEM data covering the study area, determining the spot heights within the boundary by the conventional method, extracting the DEM heights within the boundary of the study area, and comparing the heights from the conventional method with the DEM heights. A South GPS receiver and a Leica Total Station were used to acquire data for control extension and spot heightening respectively, while the SRTM elevations were obtained by transforming the X and Y data from the GPS observations to longitude and latitude and then using ArcGIS 10.6 to extract the elevations of the boundary pillars and all the spot heights. The two data sets were compared in terms of their products: heights, contours, 3-D wireframe, 3-D surface model, and contours overlaid on shaded relief. The results of the study show that the vertical difference between the conventional method and the SRTM data set ranges from -2.345 m to 11.026 m. The hypothesis, tested using a two-tailed Student's t-test and an F-test, revealed that the two means are not significantly different at the 95% confidence level. The research recommends that the products obtained from the two systems can be used interchangeably. Keywords: Shuttle Radar Topographic Mission, ellipsoidal elevation, contour, 3D wireframe, 3D surface model
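As a rough illustration of the comparison described in this abstract, the following Python sketch samples an SRTM raster at surveyed points and runs a two-tailed t-test on the two sets of heights. The file names, CSV columns, and use of rasterio/scipy are assumptions for illustration, not the author's actual workflow.

```python
# A minimal sketch (not the author's workflow) of comparing surveyed spot
# heights with SRTM elevations sampled at the same points. File names and
# column names are assumptions for illustration.
import pandas as pd
import rasterio
from scipy import stats

points = pd.read_csv("spot_heights.csv")      # assumed columns: lon, lat, height_m
with rasterio.open("srtm_tile.tif") as dem:   # assumed SRTM GeoTIFF in lon/lat
    coords = list(zip(points["lon"], points["lat"]))
    points["srtm_m"] = [val[0] for val in dem.sample(coords)]

diff = points["height_m"] - points["srtm_m"]
print(f"Vertical difference range: {diff.min():.3f} m to {diff.max():.3f} m")

# Two-tailed t-test on the two sets of heights (95% confidence level)
t_stat, p_val = stats.ttest_ind(points["height_m"], points["srtm_m"])
print(f"t = {t_stat:.3f}, p = {p_val:.3f} -> "
      f"{'no significant difference' if p_val > 0.05 else 'significant difference'}")
```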

2019 ◽  
Vol 45 (5) ◽  
pp. 43-48 ◽  
Author(s):  
Besim Ajvazi ◽  
Kornél Czimber

A Geographic Information System (GIS) uses geospatial databases as a model of the real world, and in many cases information about the Earth's surface is highly important, so the generation of a surface model is significant. The quality of a Digital Elevation Model (DEM) basically depends on the source data and the techniques used to obtain them. However, different spatial interpolation methods applied to the same data may produce different results. This paper compares the accuracy of different spatial interpolation methods: IDW, Kriging, Natural Neighbor and Spline. Since interpolation is essential in DEM generation, it is important to carry out a comparative analysis of such methods to find out which one provides the more accurate results. The DEM data set used comes from an aerial photogrammetric survey. Based on this data set, three scenarios are performed for each of the methods, with randomly selected control points derived from the base data set: the first example includes 10% of the points as control points, the second 20%, and the third 30%. The Mean Absolute Error (MAE) and the Root Mean Square Error (RMSE) are calculated. We find that the results do not differ much; however, the most accurate results are obtained from the Spline and Kriging interpolation methods.
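The hold-out evaluation described above can be sketched as follows in Python, using a simple inverse distance weighting (IDW) interpolator as a stand-in; Kriging, Natural Neighbor, and Spline surfaces would typically come from GIS software or dedicated libraries. The synthetic terrain and the 10% control-point split are assumptions for illustration.

```python
# A minimal sketch of the hold-out MAE/RMSE evaluation with an IDW interpolator.
import numpy as np

def idw(xk, yk, zk, xq, yq, power=2.0):
    """Interpolate z at query points (xq, yq) from known points (xk, yk, zk)."""
    d = np.hypot(xq[:, None] - xk[None, :], yq[:, None] - yk[None, :])
    d = np.where(d == 0, 1e-12, d)            # avoid division by zero
    w = 1.0 / d**power
    return (w * zk[None, :]).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(0)
x, y = rng.uniform(0, 1000, 500), rng.uniform(0, 1000, 500)
z = 200 + 0.05 * x + 10 * np.sin(y / 100)     # synthetic terrain for the demo

# Hold out 10% of points as control points and interpolate from the rest
test = rng.choice(len(x), size=len(x) // 10, replace=False)
train = np.setdiff1d(np.arange(len(x)), test)
z_hat = idw(x[train], y[train], z[train], x[test], y[test])

mae = np.mean(np.abs(z_hat - z[test]))
rmse = np.sqrt(np.mean((z_hat - z[test]) ** 2))
print(f"IDW hold-out MAE = {mae:.2f} m, RMSE = {rmse:.2f} m")
```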


Author(s):  
Luigi Leonardo Palese

In 2019, an outbreak occurred which resulted in a global pandemic. The causative agent of this serious global health threat was a coronavirus similar to the agent of SARS, referred to as SARS-CoV-2. In this work, an analysis of the available structures of the SARS-CoV-2 main protease has been performed. From a data set of crystallographic structures, the dynamics of the protease have been obtained. Furthermore, a comparative analysis of the structures of the SARS-CoV-2 main protease with those of the main protease of the coronavirus responsible for SARS (SARS-CoV) was carried out. The results of these studies suggest that, although the main proteases of SARS-CoV and SARS-CoV-2 are similar at the backbone level, some plasticity at the substrate binding site can be observed. The consequences of these structural aspects for the search for effective inhibitors of these enzymes are discussed, with a focus on already known compounds. The results show that compounds containing an oxirane ring could be considered as inhibitors of the SARS-CoV-2 main protease.
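A minimal, hypothetical sketch of a backbone-level comparison between two main-protease structures is given below, using Biopython's Bio.PDB module. The PDB file names are placeholders, and the crude pairing of C-alpha atoms stands in for the proper structural alignment a real analysis would require; this is not the paper's pipeline.

```python
# A minimal sketch of a backbone (C-alpha) RMSD comparison with Biopython.
from Bio.PDB import PDBParser, Superimposer

parser = PDBParser(QUIET=True)
cov2 = parser.get_structure("cov2", "sars_cov2_mpro.pdb")   # placeholder file
cov1 = parser.get_structure("cov1", "sars_cov_mpro.pdb")    # placeholder file

# Collect C-alpha atoms from the first chain of each structure
ca_cov2 = [res["CA"] for res in next(cov2[0].get_chains()) if "CA" in res]
ca_cov1 = [res["CA"] for res in next(cov1[0].get_chains()) if "CA" in res]
n = min(len(ca_cov2), len(ca_cov1))          # crude pairing for this sketch

sup = Superimposer()
sup.set_atoms(ca_cov2[:n], ca_cov1[:n])      # superpose moving onto fixed atoms
print(f"Backbone (CA) RMSD over {n} residues: {sup.rms:.2f} Å")
```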


Author(s):  
Ritu Khandelwal ◽  
Hemlata Goyal ◽  
Rajveer Singh Shekhawat

Introduction: Machine learning is an intelligent technology that works as a bridge between business and data science. With the involvement of data science, the business goal is to extract valuable insights from the available data. A large part of Indian cinema is Bollywood, a multi-million-dollar industry. This paper attempts to predict whether an upcoming Bollywood movie will be a blockbuster, superhit, hit, average, or flop, applying machine learning techniques for classification and prediction. To build a classifier or prediction model, the first step is the learning stage, in which the training data set is used to train the model with a chosen technique or algorithm; the rules generated in this stage form the model and are used to predict future trends in different types of organizations. Methods: Classification and prediction techniques such as Support Vector Machine (SVM), Random Forest, Decision Tree, Naïve Bayes, Logistic Regression, AdaBoost, and KNN are applied in order to find efficient and effective results. All these functionalities are available through GUI-based workflows organized into categories such as Data, Visualize, Model, and Evaluate. Result: The models are built in the learning stage, in which the training data set is used to train each algorithm; the rules generated in this stage form the model used to predict future trends. Conclusion: A comparative analysis is performed based on parameters such as accuracy and the confusion matrix to identify the best possible model for predicting movie success. Using advertisement propaganda, production houses can plan the best time to release a movie according to the predicted success rate in order to gain higher benefits. Discussion: Data mining is the process of discovering patterns in large data sets; the relationships discovered help to solve business problems and to predict forthcoming trends. This prediction can help production houses with advertisement propaganda, and they can also plan their costs and, by accounting for these factors, make the movie more profitable.
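A minimal sketch of the classifier comparison outlined in the Methods and Conclusion is shown below, using scikit-learn. The synthetic feature matrix merely stands in for the engineered movie features and the five success classes; the model list mirrors the algorithms named in the abstract.

```python
# A minimal sketch of comparing the named classifiers by accuracy and
# confusion matrix; the data are a synthetic placeholder for the movie features.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=500, n_features=10, n_informative=6,
                           n_classes=5, random_state=42)   # placeholder data

models = {
    "SVM": SVC(),
    "Random Forest": RandomForestClassifier(),
    "Decision Tree": DecisionTreeClassifier(),
    "Naive Bayes": GaussianNB(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "AdaBoost": AdaBoostClassifier(),
    "KNN": KNeighborsClassifier(),
}

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=42)
results = {}
for name, model in models.items():
    y_pred = model.fit(X_train, y_train).predict(X_test)
    results[name] = accuracy_score(y_test, y_pred)
    print(f"{name}: accuracy = {results[name]:.3f}")

best = max(results, key=results.get)
print(f"Best model: {best}")
print(confusion_matrix(y_test, models[best].predict(X_test)))
```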


2021 ◽  
pp. 1-11
Author(s):  
Velichka Traneva ◽  
Stoyan Tranev

Analysis of variance (ANOVA) is an important method in data analysis, developed by Fisher. There are situations in which the data are imprecise. In order to analyze such data, this paper introduces for the first time an intuitionistic fuzzy two-factor ANOVA (2-D IFANOVA) without replication, as an extension of the classical ANOVA and of the one-way IFANOVA, for the case where the data are intuitionistic fuzzy rather than real numbers. The proposed approach employs the apparatus of intuitionistic fuzzy sets (IFSs) and index matrices (IMs). The paper also analyzes a unique data set of daily ticket sales over one year in a multiplex of Cinema City Bulgaria, part of the Cineworld PLC Group, applying the two-factor ANOVA and the proposed 2-D IFANOVA to study the influence of the "season" and "ticket price" factors. A comparative analysis of the results obtained after applying ANOVA and 2-D IFANOVA to the real data set is also presented.
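For reference, the classical two-factor ANOVA without replication that the paper uses as a baseline can be computed as in the sketch below; the intuitionistic fuzzy 2-D IFANOVA itself is not reproduced here, and the ticket-sales matrix is synthetic, illustrative data.

```python
# A minimal sketch of two-factor ANOVA without replication on a small matrix
# of ticket sales; rows and columns stand in for the "season" and "ticket
# price" factors. The numbers are synthetic.
import numpy as np
from scipy import stats

sales = np.array([[120., 95., 80.],
                  [150., 110., 90.],
                  [135., 100., 85.],
                  [160., 125., 95.]])
a, b = sales.shape
grand = sales.mean()

ss_a = b * ((sales.mean(axis=1) - grand) ** 2).sum()   # between rows (factor A)
ss_b = a * ((sales.mean(axis=0) - grand) ** 2).sum()   # between columns (factor B)
ss_t = ((sales - grand) ** 2).sum()
ss_e = ss_t - ss_a - ss_b                              # residual

df_a, df_b, df_e = a - 1, b - 1, (a - 1) * (b - 1)
f_a = (ss_a / df_a) / (ss_e / df_e)
f_b = (ss_b / df_b) / (ss_e / df_e)
print(f"Factor A: F = {f_a:.2f}, p = {stats.f.sf(f_a, df_a, df_e):.4f}")
print(f"Factor B: F = {f_b:.2f}, p = {stats.f.sf(f_b, df_b, df_e):.4f}")
```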


2017 ◽  
Vol 10 (5) ◽  
pp. 2031-2055 ◽  
Author(s):  
Thomas Schwitalla ◽  
Hans-Stefan Bauer ◽  
Volker Wulfmeyer ◽  
Kirsten Warrach-Sagi

Abstract. Increasing computational resources and the demands of impact modelers, stakeholders, and society point toward seasonal and climate simulations at convection-permitting resolution. So far such a resolution is only achieved with a limited-area model, whose results are affected by the zonal and meridional boundaries. Here, we present the setup of a latitude-belt domain that reduces disturbances originating from the western and eastern boundaries and therefore allows studying the impact of model resolution and physical parameterization. The Weather Research and Forecasting (WRF) model coupled to the NOAH land surface model was operated during July and August 2013 at two different horizontal resolutions, namely 0.03° (HIRES) and 0.12° (LOWRES). Both simulations were forced by the European Centre for Medium-Range Weather Forecasts (ECMWF) operational analysis data at the northern and southern domain boundaries, and by the high-resolution Operational Sea Surface Temperature and Sea Ice Analysis (OSTIA) data at the sea surface. The simulations are compared to the operational ECMWF analysis for the representation of large-scale features. To analyze the simulated precipitation, the operational ECMWF forecast, the CPC MORPHing technique (CMORPH), and the ENSEMBLES gridded observation precipitation data set (E-OBS) were used as references. Analyzing pressure, geopotential height, wind, and temperature fields as well as precipitation revealed (1) a benefit from the higher resolution concerning the reduction of monthly biases and root mean square error and an improved Pearson skill score, and (2) deficiencies in the physical parameterizations leading to notable biases in distinct regions, such as the polar Atlantic for the LOWRES simulation and the North Pacific and Inner Mongolia for both resolutions. In summary, the application of a latitude belt at convection-permitting resolution shows promising results that are beneficial for future seasonal forecasting.
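The kind of grid-point verification described above (monthly bias, RMSE, and Pearson correlation against a gridded reference such as CMORPH or E-OBS) can be sketched as follows; both fields are assumed to be on a common grid, and the data here are synthetic.

```python
# A minimal sketch of verifying a simulated monthly precipitation field
# against a gridded reference on a common grid; the arrays are synthetic.
import numpy as np

rng = np.random.default_rng(1)
reference = rng.gamma(shape=2.0, scale=30.0, size=(180, 360))    # mm / month
simulated = reference + rng.normal(0.0, 15.0, size=reference.shape)

valid = np.isfinite(reference) & np.isfinite(simulated)
bias = (simulated[valid] - reference[valid]).mean()
rmse = np.sqrt(((simulated[valid] - reference[valid]) ** 2).mean())
corr = np.corrcoef(simulated[valid], reference[valid])[0, 1]
print(f"bias = {bias:.2f} mm, RMSE = {rmse:.2f} mm, r = {corr:.3f}")
```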


2013 ◽  
Vol 7 (3) ◽  
pp. 2333-2372
Author(s):  
E. Kantzas ◽  
M. Lomas ◽  
S. Quegan ◽  
E. Zakharova

Abstract. An increasing number of studies have demonstrated the significant climatic and ecological changes occurring in the northern latitudes over the past decades. As coupled Earth system models attempt to describe and simulate the dynamics and complex feedbacks of the Arctic environment, it is important to reduce their uncertainties in short-term predictions by improving the description of both the system's processes and its initial state. This study focuses on snow-related variables and extensively utilizes a historical data set (1966–1996) of field snow measurements acquired across the extent of the Former Soviet Union (FSU) to evaluate a range of simulated snow metrics produced by a variety of land surface models, most of them embedded in IPCC-standard climate models. We reveal model-specific issues in simulating snow dynamics, such as the magnitude and timing of snow water equivalent (SWE) and the evolution of snow density. We further employ the field snow measurements alongside novel, model-independent methodologies to extract for the first time (i) a fresh snow density value (57–117 kg m–3) for the region and (ii) mean monthly snowpack sublimation estimates across a grassland-dominated western sub-sector (November–February: 9.2, 6.1, 9.15, 15.25 mm) and a forested eastern sub-sector (November–March: 1.53, 1.52, 3.05, 3.80, 12.20 mm); we subsequently use the retrieved values to assess relevant model outputs. The discussion section consists of two parts. The first describes a sensitivity study in which field data of snow depth and snow density are forced directly into the surface heat exchange formulation of a land surface model to evaluate how inaccuracies in simulating snow metrics affect important modeled variables and carbon fluxes such as soil temperature, thaw depth and soil carbon decomposition. The second part showcases how the field data can be assimilated with readily available optimization techniques to pinpoint model issues and improve their performance.
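As a simple illustration of evaluating simulated snow metrics against field measurements, the sketch below compares a simulated snow water equivalent (SWE) series with observations; the monthly values are synthetic placeholders and the metrics are generic, not the study's specific methodology.

```python
# A minimal sketch of evaluating a simulated SWE time series against
# field measurements; all values are synthetic placeholders (mm).
import numpy as np

months = ["Oct", "Nov", "Dec", "Jan", "Feb", "Mar", "Apr"]
swe_obs = np.array([5., 40., 90., 130., 160., 150., 60.])    # field data
swe_mod = np.array([2., 30., 75., 120., 170., 165., 90.])    # model output

bias = (swe_mod - swe_obs).mean()
rmse = np.sqrt(((swe_mod - swe_obs) ** 2).mean())
peak_shift = int(swe_mod.argmax()) != int(swe_obs.argmax())
print(f"SWE bias = {bias:.1f} mm, RMSE = {rmse:.1f} mm, "
      f"peak month {'shifted' if peak_shift else 'matches observations'}")
```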


This paper proposes an improved data compression technique compared to the existing Lempel-Ziv-Welch (LZW) algorithm. LZW is a dictionary-update-based compression technique which stores elements of the data in the form of codes and reuses them when those strings recur. When the dictionary gets full, every element in the dictionary is removed in order to make room for new entries; the conventional method therefore does not take frequently used strings into account and discards all entries. This is not effective compression when the data to be compressed are large and contain many frequently occurring strings. This paper presents two new methods which improve on the existing LZW compression algorithm. In these methods, when the dictionary gets full, the elements that have not been used are removed, rather than every element of the dictionary as in the existing LZW algorithm. This is achieved by adding a flag to every element of the dictionary: whenever an element is used, its flag is set high. Thus, when the dictionary gets full, the entries whose flag is set high are kept and the others are discarded. In the first method the unused entries are discarded all at once, whereas in the second method they are removed one at a time, which gives newly added dictionary entries more time to be reused. All three techniques give similar results when the data set is small, because they differ only in the way they handle the dictionary when it is full; the improvements therefore yield better results only when relatively large data are compressed. When all three techniques were applied to a data set that yields the best-case scenario, the compression ratio of conventional LZW was smaller than that of improved LZW method-1, which in turn was smaller than that of improved LZW method-2.
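A minimal sketch of the flag-based dictionary cleanup (the abrupt, method-1 variant) is given below; it is an interpretation of the description above, not the paper's code, and a matching decompressor would have to apply the same pruning rule.

```python
# A minimal sketch of LZW compression with flag-based dictionary cleanup:
# when the dictionary is full, entries whose "used" flag was never set are
# dropped in one pass and their codes become reusable (method-1 style).
def lzw_compress(data: bytes, max_size: int = 4096) -> list[int]:
    dictionary = {bytes([i]): i for i in range(256)}   # string -> code
    used = {}                                          # multi-byte code -> flag
    free_codes = list(range(max_size - 1, 255, -1))    # codes 256..max_size-1
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc
            continue
        code = dictionary[w]
        out.append(code)
        if code in used:
            used[code] = True              # this multi-byte entry was reused
        if not free_codes:                 # dictionary full: drop unused entries
            for s, c in list(dictionary.items()):
                if c in used and not used[c]:
                    del dictionary[s]
                    del used[c]
                    free_codes.append(c)
        if free_codes:                     # add the new string if room remains
            c = free_codes.pop()
            dictionary[wc] = c
            used[c] = False
        w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out

data = b"TOBEORNOTTOBEORTOBEORNOT" * 20
codes = lzw_compress(data, max_size=300)   # small dictionary to force pruning
print(f"{len(data)} input bytes -> {len(codes)} output codes")
```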


Author(s):  
Olga N. Nasonova ◽  
Yeugeniy M. Gusev ◽  
Evgeny E. Kovalev ◽  
Georgy V. Ayzel

Abstract. The climate change impact on river runoff was investigated within the framework of the second phase of the Inter-Sectoral Impact Model Intercomparison Project (ISI-MIP2) using the physically based land surface model Soil Water – Atmosphere – Plants (SWAP), developed at the Institute of Water Problems of the Russian Academy of Sciences, and meteorological projections (for 2006–2099) simulated by five General Circulation Models (GCMs) (GFDL-ESM2M, HadGEM2-ES, IPSL-CM5A-LR, MIROC-ESM-CHEM, and NorESM1-M) for each of four Representative Concentration Pathway (RCP) scenarios (RCP2.6, RCP4.5, RCP6.0, and RCP8.5). Eleven large-scale river basins were used in this study. First, SWAP was calibrated and validated against monthly values of measured river runoff, making use of forcing data from the WATCH data set, and all GCM projections were bias-corrected to WATCH. Then, for each basin, 20 projections of possible changes in river runoff during the 21st century were simulated by SWAP. Analysis of the obtained hydrological projections allowed us to estimate the uncertainties resulting from the application of different GCMs and RCP scenarios. On average, the contribution of the different GCMs to the uncertainty of the projected river runoff is nearly twice as large as the contribution of the RCP scenarios, while the contribution of the GCMs slightly decreases with time.
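One simple way to compare the two uncertainty sources, in the spirit of the analysis above, is to average the spread across GCMs and across RCP scenarios for a matrix of projected runoff changes; the 5 x 4 matrix below is synthetic and only illustrates the structure of the 20-projection ensemble.

```python
# A minimal sketch of separating ensemble spread into a GCM-related and an
# RCP-related component; the projected runoff changes (%) are synthetic.
import numpy as np

# rows: 5 GCMs, columns: 4 RCP scenarios
change = np.array([[ 5.,  8., 10., 14.],
                   [-2.,  1.,  3.,  6.],
                   [ 9., 12., 15., 20.],
                   [ 0.,  3.,  5.,  9.],
                   [ 4.,  6.,  9., 12.]])

gcm_spread = change.std(axis=0, ddof=1).mean()   # spread across GCMs per RCP
rcp_spread = change.std(axis=1, ddof=1).mean()   # spread across RCPs per GCM
print(f"mean spread due to GCMs: {gcm_spread:.2f} %")
print(f"mean spread due to RCPs: {rcp_spread:.2f} %")
print(f"GCM / RCP ratio: {gcm_spread / rcp_spread:.2f}")
```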

