Quantitative data analysis of ESAR data

2013 ◽  
Vol 11 ◽  
pp. 291-295
Author(s):  
N. Phruksahiran ◽  
M. Chandra

Abstract. Synthetic aperture radar (SAR) data processing uses the backscattered electromagnetic wave to map the radar reflectivity of the ground surface. The polarization property in radar remote sensing has been used successfully in many applications, especially in target decomposition. This paper presents a case study of experiments performed on ESAR L-band fully polarized data sets from the German Aerospace Center (DLR) to demonstrate the potential of coherent target decomposition and the possibility of using weather radar measurement parameters, such as the differential reflectivity and the linear depolarization ratio, to obtain quantitative information about the ground surface. The ESAR raw data were processed by a SAR simulator developed in MATLAB using the range-Doppler algorithm.
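The two weather radar parameters named in the abstract have standard definitions in terms of the polarimetric scattering channels. A minimal sketch (function and variable names are illustrative, not from the paper; the actual processing runs in MATLAB):

```python
import math

def mean_power(channel):
    # Mean backscattered power of a complex scattering channel (list of complex samples).
    return sum(abs(s) ** 2 for s in channel) / len(channel)

def differential_reflectivity(s_hh, s_vv):
    # Z_DR in dB: ratio of co-polar HH power to co-polar VV power.
    return 10.0 * math.log10(mean_power(s_hh) / mean_power(s_vv))

def linear_depolarization_ratio(s_hv, s_hh):
    # LDR in dB: cross-polar (HV) to co-polar (HH) power ratio.
    return 10.0 * math.log10(mean_power(s_hv) / mean_power(s_hh))
```

For a surface that scatters twice as strongly in HH as in VV, Z_DR is about +3 dB; a weak cross-polar return gives a strongly negative LDR.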

Sensors ◽  
2021 ◽  
Vol 22 (1) ◽  
pp. 177
Author(s):  
Laszlo Podolszki ◽  
Ivan Kosović ◽  
Tomislav Novosel ◽  
Tomislav Kurečić

In March 2018, a landslide in Hrvatska Kostajnica completely destroyed multiple households. The damage was extensive, and lives were endangered. The question remains: can it happen again? To enhance knowledge and understanding of the soil and rock behaviour before, during, and after this geo-hazard event, multi-level sensing technologies were applied to landslide research. The day after the event, field mapping and unmanned aerial vehicle (UAV) data were collected, together with an inspection of available orthophoto and “geo” data. A new geological column was developed for the landslide, with mineralogical and geochemical analyses. Differential interferometric synthetic aperture radar (DInSAR) was applied to detect ground surface displacement, in order to determine pre-failure behaviour and to give indications of post-failure deformations. In 2020, electrical resistivity tomography (ERT) was undertaken in the landslide body to determine the depth of the sliding surface, and in 2021 ERT measurements in the vicinity of the landslide area were performed to obtain undisturbed material properties. Moreover, in 2021, detailed light detection and ranging (LIDAR) data were acquired for the area. All these data sets, at different levels, are being analyzed in order to develop a reliable landslide model as a first step towards answering the aforementioned question. Based on the applied multi-level sensing technologies and the acquired data, the landslide model is taking shape; however, further detailed research is still recommended.
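The DInSAR displacement step rests on a simple relation: an interferometric phase difference maps to line-of-sight ground motion via the radar wavelength. A minimal sketch under one common sign convention (the C-band wavelength and function name are illustrative assumptions, not values from the paper):

```python
import math

def los_displacement(delta_phi, wavelength):
    """Line-of-sight displacement from a differential interferometric
    phase (radians): d = -(lambda / (4*pi)) * delta_phi.
    Sign conventions vary between processors; this is one common choice."""
    return -wavelength / (4.0 * math.pi) * delta_phi

# Example: a full -4*pi phase cycle at a ~5.6 cm (C-band) wavelength
# corresponds to one wavelength of line-of-sight motion.
d = los_displacement(-4.0 * math.pi, 0.056)
```

Each 2π fringe therefore encodes half a wavelength of line-of-sight motion, which is why small pre-failure displacements are detectable at all.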


Author(s):  
Harrison Togia ◽  
Oceana P. Francis ◽  
Karl Kim ◽  
Guohui Zhang

Hazards to roadways and travelers can differ drastically from region to region because they depend largely on the local environment and climate. This paper describes the development of a qualitative method for assessing infrastructure importance and hazard exposure for rural highway segments in Hawai‘i under different conditions. Multiple indicators of roadway importance are considered, including traffic volume, population served, accessibility, connectivity, reliability, land use, and roadway connection to critical infrastructures, such as hospitals and police stations. The method of evaluating roadway hazards and importance can be tailored to fit different regional hazard scenarios. It assimilates data from diverse sources to estimate risks of disruption. A case study for Highway HI83 in Hawai‘i, which is exposed to multiple hazards, is conducted. Weakening of the road by coastal erosion, inundation from sea level rise, and rockfall hazards require adaptation solutions. By analyzing the risk of disruption to highway segments, adaptation approaches can be prioritized. Using readily available geographic information system data sets for the exposure and impacts of potential hazards, this method could be adapted not only for emergency management but also for planning, design, and engineering of resilient highways.
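A scoring scheme of this kind typically combines the importance indicators into one normalized score per segment and multiplies it by a hazard-exposure score. A minimal sketch (the indicator names, weights, and multiplicative combination are illustrative assumptions, not the paper's exact scheme):

```python
def importance_score(indicators, weights):
    """Weighted average of normalized importance indicators (each 0-1),
    e.g. traffic volume, population served, accessibility, connectivity."""
    total = sum(weights.values())
    return sum(weights[k] * indicators[k] for k in weights) / total

def risk_of_disruption(importance, exposure):
    # Simple multiplicative risk index on a 0-1 scale:
    # a highly important segment with high hazard exposure ranks highest.
    return importance * exposure
```

Ranking segments by this index is what allows adaptation approaches to be prioritized across a corridor.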


2021 ◽  
Vol 13 (13) ◽  
pp. 2559
Author(s):  
Daniele Cerra ◽  
Miguel Pato ◽  
Kevin Alonso ◽  
Claas Köhler ◽  
Mathias Schneider ◽  
...  

Spectral unmixing represents both an application per se and a pre-processing step for several applications involving data acquired by imaging spectrometers. However, there is still a lack of publicly available reference data sets suitable for the validation and comparison of different spectral unmixing methods. In this paper, we introduce the DLR HyperSpectral Unmixing (DLR HySU) benchmark dataset, acquired over the German Aerospace Center (DLR) premises in Oberpfaffenhofen. The dataset includes airborne hyperspectral and RGB imagery of targets of different materials and sizes, complemented by simultaneous ground-based reflectance measurements. The DLR HySU benchmark allows a separate assessment of all the main steps of spectral unmixing: dimensionality estimation, endmember extraction (with and without the pure pixel assumption), and abundance estimation. Results obtained with traditional algorithms for each of these steps are reported. To the best of our knowledge, this is the first time that real imaging spectrometer data with accurately measured targets have been made available for hyperspectral unmixing experiments. The DLR HySU benchmark dataset is openly available online, and the community is welcome to use it for spectral unmixing and other applications.
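The abundance estimation step assumes the linear mixing model: each pixel spectrum is a convex combination of endmember spectra. For two endmembers with the sum-to-one constraint, the least-squares abundance has a closed form. A minimal sketch (not one of the benchmark's evaluated algorithms, just the underlying model):

```python
def estimate_abundance(pixel, e1, e2):
    """Least-squares abundance a of endmember e1 in a two-endmember
    linear mixture pixel = a*e1 + (1-a)*e2 (sum-to-one enforced).
    All arguments are spectra given as equal-length lists of floats."""
    d = [u - v for u, v in zip(e1, e2)]          # e1 - e2
    num = sum((x - v) * di for x, v, di in zip(pixel, e2, d))
    den = sum(di * di for di in d)
    return num / den
```

Real unmixing chains additionally clip abundances to [0, 1] and handle more endmembers via constrained least squares; the benchmark is designed so each such step can be validated in isolation.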


Author(s):  
Violeta Cabello ◽  
David Romero ◽  
Ana Musicki ◽  
Ângela Guimarães Pereira ◽  
Baltasar Peñate

Abstract. The literature on the water–energy–food nexus has repeatedly signaled the need for transdisciplinary approaches capable of weaving together the plurality of knowledge bodies involved in the governance of different resources. To fill this gap, Quantitative Story-Telling (QST) has been proposed as a science-for-adaptive-governance approach that aims at fostering pluralistic and reflexive research processes to overcome narrow framings of water, energy, and food policies as independent domains. Yet there are few practical applications of QST, and most run on a pan-European scale. In this paper, we apply the theory of QST in a practical case study of non-conventional water sources as an innovation for water and agricultural governance in the Canary Islands. We present the mix of methods used to mobilize different types of knowledge and analyze interconnections between water, energy, and food supply. First, we map and interview relevant knowledge holders to elicit narratives about the current and future roles of alternative water resources in the arid Canarian context. Second, we run a quantitative diagnosis of nexus interconnections related to the use of these resources for irrigation. This analysis feeds back into the narratives in terms of the constraints and uncertainties that might hamper the expectations placed on this innovation. Third, the mixed analysis is used as fuel for discussion in participatory narrative assessment workshops. Our experimental QST process succeeded in co-creating new knowledge regarding the water–energy–food nexus while addressing some relational and epistemological uncertainties in the development of alternative water resources. Yet the extent to which mainstream socio-technical imaginaries surrounding this innovation were transformed was rather limited.
We conclude that the potential of QST within sustainability place-based research resides in its capacity to: (a) bridge different sources of knowledge, including local knowledge; (b) combine both qualitative and quantitative information regarding the sustainable use of local resources; and (c) co-create narratives on desirable and viable socio-technical pathways. Open questions remain as to how to effectively mobilize radically diverse knowledge systems in complex analytical exercises where everyone feels safe to participate.


Forecasting ◽  
2021 ◽  
Vol 3 (2) ◽  
pp. 322-338
Author(s):  
Marvin Carl May ◽  
Alexander Albers ◽  
Marc David Fischer ◽  
Florian Mayerhofer ◽  
Louis Schäfer ◽  
...  

Currently, manufacturing is characterized by increasing complexity on both the technical and organizational levels. Thus, more complex and intelligent production control methods are being developed in order to remain competitive and achieve operational excellence. Operations management described early on the influence among target metrics such as queuing times, queue length, and production speed. However, accurate prediction of queue lengths has long been overlooked as a means to better understand manufacturing systems. In order to provide queue length forecasts, this paper introduces a methodology to identify queue lengths in retrospect based on transitional data, as well as a comparison of easy-to-deploy machine-learning-based queue forecasting models. Forecasting based on static data sets, as well as on time series models, is shown to be successfully applicable in an exemplary semiconductor case study. The main finding is that accurate queue length prediction is feasible even with minimal available data by applying a variety of techniques, which can enable further research and predictions.
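The simplest easy-to-deploy time series forecaster of the kind compared in such studies is a first-order autoregression on the reconstructed queue-length series. A minimal sketch (the AR(1) choice and function names are illustrative assumptions, not the paper's specific models):

```python
def fit_ar1(series):
    """Least-squares fit of q[t] ~ a * q[t-1] + b on a queue-length series."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a, b

def forecast(series, steps, a, b):
    # Iterate the fitted recurrence forward from the last observed value.
    q, out = series[-1], []
    for _ in range(steps):
        q = a * q + b
        out.append(q)
    return out
```

On a linearly growing queue such as [1, 2, 3, 4, 5], the fit recovers a = 1, b = 1 and the forecast continues 6, 7, …; real workloads need richer features, which is where the paper's ML models come in.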


2016 ◽  
Vol 41 (4) ◽  
pp. 357-388 ◽  
Author(s):  
Elizabeth A. Stuart ◽  
Anna Rhodes

Background: Given increasing concerns about the relevance of research to policy and practice, there is growing interest in assessing and enhancing the external validity of randomized trials: determining how useful a given randomized trial is for informing a policy question for a specific target population. Objectives: This article highlights recent advances in assessing and enhancing external validity, with a focus on the data needed to make ex post statistical adjustments to enhance the applicability of experimental findings to populations potentially different from their study sample. Research design: We use a case study to illustrate how to generalize treatment effect estimates from a randomized trial sample to a target population, in particular comparing the sample of children in a randomized trial of a supplemental program for Head Start centers (the Research-Based, Developmentally Informed study) to the national population of children eligible for Head Start, as represented in the Head Start Impact Study. Results: For this case study, common data elements between the trial sample and population were limited, making reliable generalization from the trial sample to the population challenging. Conclusions: To answer important questions about external validity, more publicly available data are needed. In addition, future studies should make an effort to collect measures similar to those in other data sets. Measure comparability between population data sets and randomized trials that use samples of convenience will greatly enhance the range of research and policy relevant questions that can be answered.
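The core ex post adjustment the article discusses can be illustrated with post-stratification: reweight stratum-specific treatment effects from the trial sample to match the covariate distribution of the target population. A minimal sketch (the stratum structure and names are illustrative; the article's case study uses richer covariate sets, which is exactly why comparable measures matter):

```python
def generalized_ate(stratum_effects, population_shares):
    """Post-stratified average treatment effect.

    stratum_effects:  {stratum: effect estimated within the trial sample}
    population_shares: {stratum: share of the target population}, summing to 1.

    Reweighting the trial's stratum effects by population shares transports
    the estimate to the target population, assuming effects vary only
    through the stratifying covariates.
    """
    return sum(population_shares[s] * eff for s, eff in stratum_effects.items())
```

If the trial over-represents a stratum where the program works well, the reweighted estimate is pulled toward the effect in the strata the population actually contains. This requires the same stratifying variables in both data sets, the article's central data-availability point.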


2017 ◽  
Vol 78 (5) ◽  
pp. 717-736 ◽  
Author(s):  
Samuel Green ◽  
Yanyun Yang

Bifactor models are commonly used to assess whether psychological and educational constructs underlie a set of measures. We consider empirical underidentification problems that are encountered when fitting particular types of bifactor models to certain types of data sets. The objective of the article was fourfold: (a) to allow readers to gain a better general understanding of issues surrounding empirical identification, (b) to offer insights into empirical underidentification with bifactor models, (c) to inform methodologists who explore bifactor models about empirical underidentification with these models, and (d) to propose strategies for structural equation model users to deal with underidentification problems that can emerge when applying bifactor models.
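A first screen for identification is the counting rule: a model's degrees of freedom (unique covariance elements minus free parameters) must be non-negative. A minimal sketch (illustrative only; as the article stresses, a model can pass this check and still be *empirically* underidentified for particular data sets):

```python
def sem_df(n_indicators, n_free_params):
    """Degrees of freedom of a covariance-structure model:
    p*(p+1)/2 unique variance/covariance elements minus free parameters."""
    moments = n_indicators * (n_indicators + 1) // 2
    return moments - n_free_params

# Example: a bifactor model with 6 indicators, one general factor and
# two group factors of 3 indicators each (factor variances fixed to 1):
# 6 general loadings + 6 group loadings + 6 error variances = 18 parameters.
df = sem_df(6, 18)  # 21 - 18 = 3
```

A non-negative df is necessary but not sufficient: empirical underidentification arises from the data (e.g., near-zero or proportional loadings), which the counting rule cannot detect.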


2001 ◽  
Vol 105 (1051) ◽  
pp. 501-516 ◽  
Author(s):  
A. P. Brown

Abstract. For the purpose of the design and certification of inflight icing protection systems for transport and general aviation aircraft, the eventual re-definition/expansion of the icing environment of FAR 25/JAR 25, Appendix C is under consideration. Such a re-definition will be aided by gathering as much inflight icing event data as reasonably possible, from widely-different geographic locations. The results of a 12-month pilot programme of icing event data gathering are presented. Using non-instrumented turboprop aircraft flying mid-altitude routine air transport operations, the programme has gathered observational data from across the British Isles and central France. By observing a number of metrics, notably windscreen lower-corner ice impingement limits against an opposing corner vortex-flow, supported by wing leading edge impingement limits, the observed icing events have been classified as ‘small’, ‘medium’ or ‘large’ droplet. Using the guidance of droplet trajectory modelling, MVD values for the three droplet size bins have been conjectured to be 15, 40 and 80 µm. Hence, the ‘large’ droplet category would exceed the envelope of FAR/JAR 25, Appendix C. Data sets of 117 winter-season and 55 summer-season icing events have been statistically analysed. As defined above, the data sets include 11 winter and five summer large droplet icing encounters. Icing events included ‘sandpaper’ icing from short-duration ‘large’ droplets, and a singular ridge formation icing event in ‘large’ droplet. The frequency of ‘large’ droplet icing events amounted to 1 in 20 flight hours in winter and 1 in 35 flight hours in summer. These figures reflect ‘large’ droplet icing encounter probabilities perhaps substantially greater than previously considered. The ‘large’ droplet events were quite localised, the mean scale-size being about 6 nm (nautical miles).
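The reported rates (1 event per 20 flight hours in winter) translate into per-flight encounter probabilities if one assumes, for illustration, that encounters arrive as a Poisson process; the paper itself reports only raw frequencies, so this model is an assumption:

```python
import math

def p_encounter(flight_hours, hours_per_event):
    """Probability of at least one icing encounter during a flight,
    assuming a Poisson arrival process with the given mean rate."""
    return 1.0 - math.exp(-flight_hours / hours_per_event)

# Under this assumption, a 20-hour winter exposure at 1 event per
# 20 flight hours gives an encounter probability of 1 - e^(-1), about 63%.
p = p_encounter(20.0, 20.0)
```

Even a 2-hour sector at the winter rate yields roughly a 10% chance per flight, illustrating why the paper calls these probabilities greater than previously considered.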


Author(s):  
Deborah L. Thurston

Abstract. A formal methodology is presented which may be used to evaluate design alternatives in the iterative design/redesign process. Deterministic multiattribute utility analysis is used to compare the overall utility, or value, of alternative designs as a function of the levels of several performance characteristics of a manufactured system. The evaluation function reflects the designer's subjective preferences. Sensitivity analysis provides quantitative information as to how a design should be modified in order to increase its utility to the design decision maker. Improvements in one or more areas of performance, and the tradeoffs between attributes that would most increase the desirability of a design, can be quantified. A case study of materials selection and design in the automotive industry is presented. The methodology was applied at six automotive companies in the United States and Europe, and the results are used to illustrate the steps followed in application.
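The evaluation function in multiattribute utility analysis scores each design from single-attribute utilities and scaling weights. A minimal sketch using the additive form (an illustrative assumption; multiattribute utility theory also offers a multiplicative form when attributes interact, and the paper's elicited weights and utilities are the designer's, not these):

```python
def overall_utility(attribute_levels, weights, single_utils):
    """Additive multiattribute utility: U(x) = sum_i k_i * u_i(x_i).

    attribute_levels: {attribute: achieved level x_i}
    weights:          {attribute: scaling constant k_i}
    single_utils:     {attribute: u_i, a function mapping x_i to [0, 1]}
    """
    return sum(weights[a] * single_utils[a](x)
               for a, x in attribute_levels.items())

# Illustrative comparison of two designs on cost and weight
# (normalized so that lower-is-better attributes already map to [0, 1]):
utils = {"cost": lambda u: u, "weight": lambda u: u}
k = {"cost": 0.5, "weight": 0.5}
u_design_a = overall_utility({"cost": 0.8, "weight": 0.4}, k, utils)
```

Sensitivity analysis then amounts to perturbing one attribute level and observing the change in U, which quantifies which redesign would raise utility most.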

