Calculating earthquake damage building by building: the case of the city of Cologne, Germany

Author(s):  
Cecilia I. Nievas ◽  
Marco Pilz ◽  
Karsten Prehn ◽  
Danijel Schorlemmer ◽  
Graeme Weatherill ◽  
...  

Abstract. The creation of building exposure models for seismic risk assessment is frequently challenging due to the limited availability of detailed information on building structures. Different strategies have been developed in recent years to overcome this, including the use of census data, remote sensing imagery and volunteered geographic information (VGI). This paper presents the development of a building-by-building exposure model based exclusively on openly available datasets, including both VGI and census statistics, which are defined at different levels of spatial resolution and for different moments in time. The initial model, stemming purely from building-level data, is enriched with statistics aggregated at the neighbourhood and city level by means of a Monte Carlo simulation that enables the generation of full realisations of damage estimates when the exposure model is used in an earthquake scenario calculation. Though applicable to any other region of interest where analogous datasets are available, the workflow and approach are explained by focusing on the case of the German city of Cologne, for which a scenario earthquake is defined and the potential damage is calculated. The resulting exposure model and damage estimates are presented, and the latter are shown to be broadly consistent with damage data from the 1978 Albstadt earthquake, notwithstanding the differences between the two scenarios. Through this real-world application we demonstrate the potential of VGI and open data for exposure modelling in natural hazard risk assessment, when combined with suitable knowledge of building fragility and an accounting of the inherent uncertainties.
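To make the enrichment step concrete, the following is a minimal sketch (Python, with invented attribute names and proportions, not the authors' implementation) of how a Monte Carlo loop can complete building-level records from neighbourhood-level statistics, yielding one full exposure realisation per iteration:

    import random

    # Hypothetical neighbourhood-level statistics: the share of each
    # structural class among buildings whose class is not known from
    # the building-level (VGI) data.
    class_probs = {"masonry": 0.55, "reinforced_concrete": 0.35, "timber": 0.10}

    def sample_realisation(buildings, probs, rng):
        # Buildings with a known class keep it; unknowns are drawn
        # from the aggregated statistics.
        classes, weights = zip(*probs.items())
        return [dict(b, structural_class=b["structural_class"]
                     or rng.choices(classes, weights=weights)[0])
                for b in buildings]

    buildings = [
        {"id": 1, "structural_class": "masonry"},  # known from VGI
        {"id": 2, "structural_class": None},       # unknown: to be sampled
    ]

    rng = random.Random(42)
    realisations = [sample_realisation(buildings, class_probs, rng)
                    for _ in range(1000)]
    # Each realisation feeds one scenario damage calculation; the spread
    # across realisations reflects the exposure uncertainty.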


2016 ◽  
Vol 16 (2) ◽  
pp. 417-429 ◽  
Author(s):  
R. Figueiredo ◽  
M. Martina

Abstract. One of the necessary components to perform catastrophe risk modelling is information on the buildings at risk, such as their spatial location, geometry, height, occupancy type and other characteristics. This is commonly referred to as the exposure model or data set. When modelling large areas, developing exposure data sets with the relevant information about every individual building is not practicable. Thus, census data at coarse spatial resolutions are often used as the starting point for the creation of such data sets, after which disaggregation to finer resolutions is carried out using different methods, based on proxies such as the population distribution. While these methods can produce acceptable results, they cannot be considered ideal. Nowadays, the availability of open data is increasing and it is possible to obtain information about buildings for some regions. Although this type of information is usually limited and, therefore, insufficient to generate an exposure data set on its own, it can still be very useful in its elaboration. In this paper, we focus on how open building data can be used to develop a gridded exposure model by disaggregating existing census data at coarser resolutions. Furthermore, we analyse how the selected level of spatial resolution impacts the accuracy and precision of the model, and compare the resulting models in terms of the residential building areas affected by a flood event.
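As a rough illustration of the disaggregation idea (a sketch only, with made-up numbers; the paper's actual procedure differs in detail), a census-unit total can be spread over the grid cells covering that unit in proportion to the open-data building footprint area found in each cell:

    import numpy as np

    def disaggregate(census_total, footprint_area_per_cell):
        # Distribute a census-unit total (e.g. residential built-up area)
        # over grid cells proportionally to the building footprint area
        # mapped in each cell.
        footprints = np.asarray(footprint_area_per_cell, dtype=float)
        if footprints.sum() == 0:
            # No mapped buildings: fall back to a uniform split.
            return np.full_like(footprints, census_total / footprints.size)
        return census_total * footprints / footprints.sum()

    # One census unit covered by four grid cells:
    print(disaggregate(10_000.0, [1200.0, 300.0, 0.0, 500.0]))
    # -> [6000. 1500.    0. 2500.]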


2013 ◽  
Vol 17 (5) ◽  
pp. 1871-1892 ◽  
Author(s):  
H. C. Winsemius ◽  
L. P. H. Van Beek ◽  
B. Jongman ◽  
P. J. Ward ◽  
A. Bouwman

Abstract. There is an increasing need for strategic global assessments of flood risks in current and future conditions. In this paper, we propose a framework for global river flood risk assessment, which can be applied in current conditions as well as in future conditions arising from climate and socio-economic change. The framework's goal is to establish flood hazard and impact estimates at a high enough resolution to allow for their combination into a risk estimate that can be used for strategic global flood risk assessments. The framework estimates hazard at a resolution of ~1 km² using global forcing datasets of the current (or, in scenario mode, future) climate, a global hydrological model, a global flood-routing model and, more importantly, an inundation downscaling routine. The second component of the framework combines hazard with flood impact models at the same resolution (e.g. damage, affected GDP, and affected population) to establish indicators of flood risk (e.g. annual expected damage, affected GDP, and affected population). The framework has been applied using the global hydrological model PCR-GLOBWB, which includes the optional global flood-routing model DynRout, combined with scenarios from the Integrated Model to Assess the Global Environment (IMAGE). We downscaled the hazard probability distributions to 1 km² resolution with a new downscaling algorithm, applied to Bangladesh as a first case study area. We demonstrate the risk assessment approach for Bangladesh based on GDP per capita data, population, and land use maps for 2010 and 2050. The hazard estimates were validated against the Dartmouth Flood Observatory database, by comparing a high-return-period flood with the maximum observed extent and by comparing the time series of a single event with Dartmouth imagery of that event. Modelled damage estimates were validated against observed damage estimates from the EM-DAT database and World Bank sources. We discuss and show the sensitivity of the estimated risks to the use of different climate input sets, decisions made in the downscaling algorithm, and different approaches to establishing the impact models.
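For instance, one of the listed risk indicators, annual expected damage, can be obtained by integrating event damage over annual exceedance probability; a minimal sketch with invented numbers (not the paper's results) is:

    import numpy as np

    # Hypothetical damages for floods of several return periods, as
    # produced by combining downscaled hazard maps with an impact
    # model at the same resolution.
    return_periods = np.array([2.0, 10.0, 50.0, 100.0, 500.0])   # years
    damages = np.array([0.0, 1.5e8, 6.0e8, 9.0e8, 2.0e9])        # per event

    # Annual expected damage: integrate damage over the annual
    # exceedance probability p = 1/T (trapezoidal rule).
    prob = 1.0 / return_periods
    order = np.argsort(prob)                   # ascending probability
    ead = np.trapz(damages[order], prob[order])
    print(f"annual expected damage ~ {ead:.3e}")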


2021 ◽  
Vol 7 ◽  
pp. 20-26
Author(s):  
Anurag Ajay ◽  
Peter Craufurd ◽  
Sachin Sharma

Approximately 7,600 wheat plots were surveyed and geo-tagged in the 2017–18 winter (rabi) season in Bihar and eastern Uttar Pradesh (UP), India, to capture farmers' wheat production practices at the landscape level. A two-stage cluster sampling method, based on Census data and electoral rolls, was used to identify 210 wheat farmers in each of 40 districts. The survey, implemented in Open Data Kit (ODK), recorded 226 variables covering major crop production factors such as previous crop, residue management, crop establishment method, variety and seed sources, nutrient management, irrigation management, weed flora and their management, harvesting method, and farmer-reported yield. Crop cuts were also made in 10% of fields. Data were checked carefully with enumerators. These data should be useful for technology targeting, yield prediction, and other spatial analyses.
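The sampling design can be sketched as follows (Python; the frame and the cluster/farmer split per district are invented for illustration, while the survey's actual frame came from Census data and electoral rolls):

    import random

    def two_stage_sample(frame, n_clusters, n_per_cluster, seed=0):
        # Stage 1: draw villages (clusters) from a census-based frame.
        # Stage 2: draw farmers within each selected village from its
        # electoral roll.
        rng = random.Random(seed)
        villages = rng.sample(list(frame), n_clusters)
        return {v: rng.sample(frame[v], n_per_cluster) for v in villages}

    # Hypothetical frame: village -> listed farmers (electoral roll).
    frame = {f"village_{i}": [f"farmer_{i}_{j}" for j in range(500)]
             for i in range(120)}

    # e.g. 21 villages x 10 farmers = 210 farmers in one district
    sample = two_stage_sample(frame, n_clusters=21, n_per_cluster=10)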


Author(s):  
Andrés Abarca ◽  
Ricardo Monteiro

In recent years, large-scale seismic risk assessment has become an increasingly popular means of evaluating the fragility of a specific region to earthquake events, through the convolution of hazard, exposure and vulnerability. Such studies tend to focus on the building stock of the region and sometimes neglect the infrastructure, which is of great importance in determining the ability of a social group to respond to a disaster and eventually resume normal activities. This study, developed within the scope of the EU-funded project ITERATE (Improved Tools for Disaster Risk Mitigation in Algeria), proposes an exposure model for bridge structures in Northern Algeria. The model was developed using existing national data surveys, as well as satellite information and field observations. As a result, the location and detailed characterisation of a significant share of the Algerian roadway bridge inventory were compiled, and a taxonomy was defined that can classify the most common structural systems used in Algerian bridge construction. The outcome of this study serves as input to the estimation of the fragility of the bridge inventory and, furthermore, to the overall risk assessment of the Northern Algerian region. Such a fragility model will, in turn, enable the evaluation of earthquake scenarios at a regional scale and provide valuable information to decision makers for the implementation of risk mitigation measures.
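A bridge exposure record and taxonomy string of the kind described can be sketched as follows (the attribute names and values are illustrative placeholders, not the ITERATE taxonomy itself):

    from dataclasses import dataclass

    @dataclass
    class Bridge:
        # Minimal bridge exposure record; fields are placeholders.
        bridge_id: str
        lon: float
        lat: float
        deck_material: str       # e.g. "RC", "steel", "masonry"
        structural_system: str   # e.g. "simply_supported", "continuous"
        n_spans: int

    def taxonomy_string(bridge: Bridge) -> str:
        # Bridges sharing a taxonomy string can share a fragility model.
        return f"{bridge.deck_material}/{bridge.structural_system}/{bridge.n_spans}"

    b = Bridge("DZ-0001", 3.05, 36.75, "RC", "simply_supported", 3)
    print(taxonomy_string(b))  # -> "RC/simply_supported/3"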


2017 ◽  
Vol 33 (1) ◽  
pp. 299-322 ◽  
Author(s):  
Catalina Yepes-Estrada ◽  
Vitor Silva ◽  
Jairo Valcárcel ◽  
Ana Beatriz Acevedo ◽  
Nicola Tarque ◽  
...  

This study presents an open and transparent exposure model for the residential building stock of South America. The model captures the geographical distribution, structural characteristics (including construction materials, lateral load-resisting system, and range of number of stories), average built-up area, replacement cost, expected number of occupants, and number of dwellings and buildings. The methodology used to develop the model was based on national population and housing statistics and on expert judgement from dozens of local researchers and practitioners. The model was developed as part of the South America Risk Assessment (SARA) project led by the Global Earthquake Model (GEM), and it can be used to perform earthquake risk analyses. It is available at different geographical scales for seven Andean countries: Argentina, Bolivia, Chile, Colombia, Ecuador, Peru, and Venezuela (DOI: 10.13117/GEM.DATASET.EXP.ANDEAN-v1.0).
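To illustrate how such aggregated records can be used (a sketch with invented numbers, not SARA values), the per-class replacement value of an administrative unit follows directly from the listed attributes:

    # Hypothetical aggregated exposure record for one administrative unit.
    record = {
        "admin_unit": "some_district",
        "n_dwellings": 20_000,
        "avg_area_m2": 75.0,
        "cost_per_m2": 450.0,  # replacement cost
        "class_fractions": {"unreinforced_masonry": 0.4,
                            "reinforced_concrete": 0.5,
                            "timber": 0.1},
    }

    total_value = (record["n_dwellings"] * record["avg_area_m2"]
                   * record["cost_per_m2"])
    value_by_class = {c: f * total_value
                      for c, f in record["class_fractions"].items()}
    print(value_by_class)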


Author(s):  
Albert Meroño-Peñuela ◽  
Ashkan Ashkpour ◽  
Valentijn Gilissen ◽  
Jan Jonker ◽  
Tom Vreugdenhil ◽  
...  

The Dutch Historical Censuses (1795–1971) contain statistics that describe almost two centuries of history in the Netherlands. These censuses were conducted once every 10 years (with some exceptions) from 1795 to 1971. Researchers have used their wealth of demographic, occupational, and housing information to answer fundamental questions in social economic history. However, accessing these data has traditionally been a time-consuming and knowledge-intensive task. In this paper, we describe the outcomes of the CEDAR project, which make access to the digitized assets of the Dutch Historical Censuses easier, faster, and more reliable. This is achieved by using the Linked Data publishing paradigm of the Semantic Web. We use a digitized sample of 2,288 census tables to produce a linked dataset of more than 6.8 million statistical observations. The dataset is modeled using the RDF Data Cube, Open Annotation, and PROV vocabularies. The contributions of representing this dataset as Linked Data are: (1) a uniform database interface for efficient querying of census data; (2) a standardized and reproducible data harmonization workflow; and (3) an augmentation of the dataset through richer connections to related resources on the Web.
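Contribution (1), the uniform query interface, can be exercised with a standard SPARQL client; a minimal sketch follows (the endpoint URL is a placeholder, while the qb: terms are the standard RDF Data Cube vocabulary):

    from SPARQLWrapper import SPARQLWrapper, JSON

    endpoint = SPARQLWrapper("https://example.org/cedar/sparql")  # placeholder URL
    endpoint.setQuery("""
        PREFIX qb: <http://purl.org/linked-data/cube#>
        SELECT ?obs ?dataset WHERE {
          ?obs a qb:Observation ;
               qb:dataSet ?dataset .
        } LIMIT 10
    """)
    endpoint.setReturnFormat(JSON)
    results = endpoint.query().convert()
    for row in results["results"]["bindings"]:
        print(row["obs"]["value"], row["dataset"]["value"])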

