Dam Breach Size Comparison for Flood Simulations. A HEC-RAS Based, GIS Approach for Drăcșani Lake, Sitna River, Romania

Water ◽  
2020 ◽  
Vol 12 (4) ◽  
pp. 1090 ◽  
Author(s):  
Liviu-Marian Albu ◽  
Andrei Enea ◽  
Marina Iosub ◽  
Iuliana-Gabriela Breabăn

Floods are the most destructive natural phenomenon, causing more casualties and property damage than any other type of natural disaster. However, some of the most destructive flash floods are related to dam breaches or complete collapses, which release large amounts of water and affect inhabited areas. Worldwide, numerous dams have almost reached or surpassed their estimated construction life span and pose an increasing risk to structural stability. Considering their continuously degrading state, increasing rainfall aggressiveness due to climate change, technical error, or even human error, there are numerous potential causes for which dams could develop breaches and fail completely. This study aims to portray a comparative perspective of flood impact, with real-life consequences, measured by quantifiable parameters generated from computer simulations of different breach sizes. These parameters include the total flooded surface, water velocity, maximum water depth, number of affected buildings, etc. The analysis was carried out by means of HEC-RAS based 2D hydraulic modeling and GIS, relying on high-accuracy LiDAR terrain data and historical hydrological data. As a case study, Drăcșani Lake with the associated Sulița earthfill embankment dam was chosen, being one of the largest and oldest artificial lakes in Romania.
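As a minimal illustration of the kind of GIS post-processing behind parameters such as flooded surface and affected buildings, the sketch below summarises a simulated maximum water depth raster; the file names, depth threshold, and use of rasterio/geopandas are assumptions for illustration, not the authors' actual HEC-RAS/GIS workflow.

```python
# Hedged sketch: summarising a simulated maximum water depth raster in GIS terms.
# "max_depth.tif" and "buildings.gpkg" are hypothetical inputs, not the study's data.
import numpy as np
import rasterio
import geopandas as gpd

FLOOD_THRESHOLD_M = 0.05  # assumed depth above which a cell counts as flooded

with rasterio.open("max_depth.tif") as src:
    depth = src.read(1, masked=True)
    cell_area = abs(src.transform.a * src.transform.e)  # cell size in map units^2

    # Total flooded surface from the depth raster
    flooded_cells = np.count_nonzero(depth.filled(0.0) > FLOOD_THRESHOLD_M)
    print(f"Total flooded surface: {flooded_cells * cell_area / 1e6:.2f} km2")

    # Buildings counted as affected when the depth at their centroid exceeds the threshold
    buildings = gpd.read_file("buildings.gpkg").to_crs(src.crs)
    coords = [(pt.x, pt.y) for pt in buildings.geometry.centroid]
    depths = [float(v[0]) for v in src.sample(coords)]
    affected = sum(d > FLOOD_THRESHOLD_M for d in depths)
    print(f"Affected buildings: {affected} of {len(buildings)}")
```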

Hydrology ◽  
2020 ◽  
Vol 7 (4) ◽  
pp. 72
Author(s):  
Vasilis Bellos ◽  
Vasileios Kaisar Tsakiris ◽  
George Kopsiaftis ◽  
George Tsakiris

Dam break studies consist of two submodels: (a) the dam breach submodel, which derives the flood hydrograph, and (b) the hydrodynamic submodel which, using the flood hydrograph, derives the flood peaks and maximum water depths in the downstream reaches of the river. In this paper, a thorough investigation of the uncertainty observed in the output of the hydrodynamic model, due to the seven dam breach parameters, is performed in a real-world case study (Papadiana Dam, located on the Tavronitis River in Crete, Greece). Three levels of uncertainty are examined (flow peak of the flood hydrograph at the dam location, and flow peaks and maximum water depths downstream along the river) with two methods: (a) a Morris-based sensitivity analysis for investigating the influence of each parameter on the final results; and (b) a Monte Carlo-based forward uncertainty analysis for defining the distribution of the uncertainty band and its statistical characteristics. Among others, it is found that the uncertainty of the flow peaks is greater than the uncertainty of the maximum water depths, whereas there is a decreasing trend of uncertainty as we move downstream along the river.
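To make the Monte Carlo forward step concrete, a minimal sketch is given below that samples two breach-related parameters and propagates them through a Froehlich-type empirical peak-flow relation; the parameter ranges and the simplified relation are illustrative assumptions and do not reproduce the seven-parameter HEC-RAS setup of the paper.

```python
# Hedged sketch of a Monte Carlo forward uncertainty analysis on the breach peak flow.
# Parameter ranges and the Froehlich-type empirical relation are illustrative
# assumptions; the paper propagates seven breach parameters through HEC-RAS,
# which is not reproduced here.
import numpy as np

rng = np.random.default_rng(42)
N = 10_000

# Sampled breach parameters (uniform ranges chosen purely for illustration)
reservoir_volume = rng.uniform(5e6, 15e6, N)   # Vw, stored water volume [m^3]
breach_height = rng.uniform(10.0, 20.0, N)     # Hw, water height above breach invert [m]

# Froehlich (1995)-style empirical peak outflow: Qp = 0.607 * Vw^0.295 * Hw^1.24
peak_flow = 0.607 * reservoir_volume**0.295 * breach_height**1.24  # [m^3/s]

# Statistical characteristics of the uncertainty band
print(f"median Qp      : {np.median(peak_flow):8.0f} m3/s")
print(f"5th percentile : {np.percentile(peak_flow, 5):8.0f} m3/s")
print(f"95th percentile: {np.percentile(peak_flow, 95):8.0f} m3/s")
print(f"coeff. of var. : {peak_flow.std() / peak_flow.mean():.2f}")
```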


Author(s):  
Eleonora FIORE ◽  
Giuliano SANSONE ◽  
Chiara Lorenza REMONDINO ◽  
Paolo Marco TAMBORRINI

Interest in offering Entrepreneurship Education (EE) to all kinds of university students is increasing. Therefore, universities are increasing the number of entrepreneurship courses intended for students from different fields of study and with different education levels. Through a single case study of the Contamination Lab of Turin (CLabTo), we suggest how EE may be taught to all kinds of university students. We have combined design methods with EE to create a practical-oriented entrepreneurship course which allows students to work in transdisciplinary teams through a learning-by-doing approach on real-life projects. Professors from different departments have been included to create a multidisciplinary environment. We have drawn on programme assessment data, including pre- and post-surveys. Overall, we have found a positive effect of the programme on the students’ entrepreneurial skills. However, when the data was broken down according to the students’ fields of study and education levels, mixed results emerged.


2018 ◽  
Vol 60 (1) ◽  
pp. 55-65
Author(s):  
Krystyna Ilmurzyńska

This article investigates the suitability of traditional and participatory planning approaches in managing the process of spatial development of existing housing estates, based on the case study of Warsaw's Ursynów Północny district. The basic assumption of the article is that, due to the lack of government schemes targeted at the restructuring of large housing estates, it is the business environment that drives spatial transformations and, through that, shapes the development of participation. Consequently, the article focuses on the reciprocal relationships between spatial transformations and participatory practices. Analysis of Ursynów Północny against the background of other estates indicates that it presents more endangered qualities than issues to be tackled. Therefore, the article focuses on the potential of the housing estate and the good practices which can be tracked throughout its lifetime. The paper furthermore addresses real-life processes, including privatisation, development pressure, formal planning procedures and participatory budgeting. In the conclusion, it attempts to interpret the existing spatial structure of the estate as a potential framework for a participatory approach.


2014 ◽  
Vol 30 (2) ◽  
pp. 113-126 ◽  
Author(s):  
Dominic Detzen ◽  
Tobias Stork genannt Wersborg ◽  
Henning Zülch

This case originates from a real-life business situation and illustrates the application of impairment tests in accordance with IFRS and U.S. GAAP. In the first part of the case study, students examine conceptual questions of impairment tests under IFRS and U.S. GAAP with respect to applicable accounting standards, definitions, value concepts, and frequency of application. In addition, the case encourages students to discuss the impairment regime from an economic point of view. The second part of the instructional resource continues to provide instructors with the flexibility of applying U.S. GAAP and/or IFRS when students are asked to test a long-lived asset for impairment and, if necessary, allocate any potential impairment. This latter part demonstrates that impairment tests require professional judgment that students are to exercise in the case.


Author(s):  
Apostolos C. Tsolakis ◽  
Angelina D. Bintoudi ◽  
Lampros Zyglakis ◽  
Stylianos Zikos ◽  
Christos Timplalexis ◽  
...  

2021 ◽  
Vol 7 (4) ◽  
pp. 64
Author(s):  
Tanguy Ophoff ◽  
Cédric Gullentops ◽  
Kristof Van Beeck ◽  
Toon Goedemé

Object detection models are usually trained and evaluated on highly complicated, challenging academic datasets, which results in deep networks requiring lots of computations. However, many operational use-cases consist of more constrained situations: they have a limited number of classes to be detected, less intra-class variance, less lighting and background variance, constrained or even fixed camera viewpoints, etc. In these cases, we hypothesize that smaller networks could be used without deteriorating the accuracy. However, there are multiple reasons why this does not happen in practice. Firstly, overparameterized networks tend to learn better, and secondly, transfer learning is usually used to reduce the necessary amount of training data. In this paper, we investigate how much we can reduce the computational complexity of a standard object detection network in such constrained object detection problems. As a case study, we focus on a well-known single-shot object detector, YoloV2, and combine three different techniques to reduce the computational complexity of the model without reducing its accuracy on our target dataset. To investigate the influence of the problem complexity, we compare two datasets: a prototypical academic one (Pascal VOC) and a real-life operational one (LWIR person detection). The three optimization steps we exploited are: swapping all convolutions for depth-wise separable convolutions, pruning, and weight quantization. The results of our case study indeed substantiate our hypothesis that the more constrained a problem is, the more the network can be optimized. On the constrained operational dataset, combining these optimization techniques allowed us to reduce the computational complexity by a factor of 349, compared to only a factor of 9.8 on the academic dataset. When running a benchmark on an Nvidia Jetson AGX Xavier, our fastest model runs more than 15 times faster than the original YoloV2 model, whilst increasing the accuracy by 5% Average Precision (AP).
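As a rough illustration of the first optimization step, the sketch below swaps a standard convolution for a depth-wise separable pair (depth-wise followed by point-wise) in PyTorch; the channel sizes and layer choices are arbitrary assumptions and the snippet is not taken from the authors' YoloV2 implementation.

```python
# Hedged sketch: replacing a standard convolution with a depth-wise separable one.
# Channel sizes are arbitrary; this does not reproduce the authors' YoloV2 code.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depth-wise conv (per-channel spatial filter) followed by a 1x1 point-wise conv."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, stride, padding,
                                   groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.LeakyReLU(0.1)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# Parameter count comparison for one 3x3 layer, 256 -> 512 channels
standard = nn.Conv2d(256, 512, 3, padding=1, bias=False)
separable = DepthwiseSeparableConv(256, 512)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(standard), "vs", count(separable))  # roughly 1.18M vs 0.13M parameters

# Sanity check: both layers produce the same output shape
y = separable(torch.randn(1, 256, 64, 64))
print(y.shape)  # torch.Size([1, 512, 64, 64])
```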


2021 ◽  
Vol 11 (8) ◽  
pp. 378
Author(s):  
Jaco Griffioen ◽  
Monique van der Drift ◽  
Hans van den Broek

This paper sets out to enhance current Maritime Crew Resource Management (MCRM) training and, with that, to improve the training of technical and non-technical skills given to bachelor maritime officers. The rationale for CRM training is improving safety performance by reducing accidents caused by human error. The central notion of CRM training is that applying good resource management principles during day-to-day operations will lead to a beneficial change in attitudes and behaviour regarding safety. This article therefore indicates that enhanced MCRM should play a more structural role in the training of student officers. However, the key question is: what are the required changes in attitude and behaviour that will create sufficient adaptability to improve safety performance? To provide an answer, we introduce Resilience Engineering (RE) theory. From an RE point of view, we elaborate on the relation between team adaptability and safety performance, operationalized as a competence profile. In addition, a case study of the 'Rotterdam Approach' is presented, in which the MCRM training design has been enhanced with RE, with the objective of training team adaptability skills for improved safety performance.


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 405
Author(s):  
Marcos Lupión ◽  
Javier Medina-Quero ◽  
Juan F. Sanjuan ◽  
Pilar M. Ortigosa

Activity Recognition (AR) is an active research topic focused on detecting human actions and behaviours in smart environments. In this work, we present the on-line activity recognition platform DOLARS (Distributed On-line Activity Recognition System), where data from heterogeneous sensors, including binary, wearable and location sensors, are evaluated in real time. Different descriptors and metrics from the heterogeneous sensor data are integrated in a common feature vector, whose extraction is performed by a sliding-window approach under real-time conditions. DOLARS provides a distributed architecture where: (i) stages for processing data in AR are deployed in distributed nodes; (ii) temporal cache modules compute metrics which aggregate sensor data for computing feature vectors in an efficient way; (iii) publish-subscribe models are integrated both to spread data from sensors and to orchestrate the nodes (communication and replication) for computing AR; and (iv) machine learning algorithms are used to classify and recognize the activities. A successful case study of daily activity recognition developed in the Smart Lab of the University of Almería (UAL) is presented in this paper. Results show encouraging performance in recognizing sequences of activities and demonstrate the need for distributed architectures to achieve real-time recognition.
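As a minimal sketch of the sliding-window feature extraction described above, the snippet below aggregates heterogeneous sensor events into a fixed-length feature vector per window; the window length, sensor names, and descriptors are assumptions for illustration rather than the DOLARS implementation.

```python
# Hedged sketch: building a feature vector from sensor events with a sliding window.
# Window size, sensor names, and descriptors are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SensorEvent:
    timestamp: float   # seconds
    sensor_id: str     # e.g. "door_kitchen" (binary) or "wrist_acc_x" (wearable)
    value: float

def window_features(events, t_end, window_s=30.0,
                    sensors=("door_kitchen", "wrist_acc_x")):
    """Aggregate events falling in [t_end - window_s, t_end] into one feature vector."""
    in_window = [e for e in events if t_end - window_s <= e.timestamp <= t_end]
    features = []
    for sid in sensors:
        values = [e.value for e in in_window if e.sensor_id == sid]
        count = len(values)
        mean = sum(values) / count if count else 0.0
        last = values[-1] if values else 0.0
        features.extend([count, mean, last])  # simple descriptors per sensor
    return features

events = [SensorEvent(1.0, "door_kitchen", 1.0), SensorEvent(12.5, "wrist_acc_x", 0.8)]
print(window_features(events, t_end=30.0))  # -> [1, 1.0, 1.0, 1, 0.8, 0.8]
```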


2020 ◽  
Vol 12 (1) ◽  
Author(s):  
Yisong Lin ◽  
Xuefeng Wang ◽  
Hao Hu ◽  
Hui Zhao

Taking the feeder service for the port of Kotka as an example, this study proposed a multi-objective optimization model for feeder network design. In contrast to single-objective evaluation systems, the objective of feeder network design was proposed to include single allocation cost, intra-Europe cargo revenue, equipment balance, sailing cycle, allocation utilization, service route competitiveness, and stability. A three-stage control system was presented, and a numerical experiment based on a container liner's real-life data was conducted to verify the mathematical model and the control system. The numerical experiment revealed that the three-stage control system is effective and practical, and that the research ideas are applicable with satisfactory effect.
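To illustrate one common way of combining objectives of this kind into a single score, the sketch below applies a weighted-sum scalarization to hypothetical cost, revenue, and utilization terms; the objective terms, weights, and candidate designs are invented for illustration and do not reflect the paper's model or its three-stage control system.

```python
# Hedged sketch: weighted-sum scalarization of several feeder-network objectives.
# Objective terms, weights, and candidate designs are invented for illustration.
from typing import NamedTuple

class Design(NamedTuple):
    name: str
    allocation_cost: float   # lower is better (kUSD per round trip)
    cargo_revenue: float     # higher is better (kUSD per round trip)
    utilization: float       # higher is better (0..1)

WEIGHTS = {"allocation_cost": -1.0, "cargo_revenue": 1.0, "utilization": 200.0}

def score(d: Design) -> float:
    """Single scalar score; the sign of each weight encodes min/max direction."""
    return (WEIGHTS["allocation_cost"] * d.allocation_cost
            + WEIGHTS["cargo_revenue"] * d.cargo_revenue
            + WEIGHTS["utilization"] * d.utilization)

candidates = [
    Design("direct call", allocation_cost=420.0, cargo_revenue=510.0, utilization=0.72),
    Design("butterfly",   allocation_cost=460.0, cargo_revenue=565.0, utilization=0.81),
]
best = max(candidates, key=score)
print(best.name, round(score(best), 1))  # -> butterfly 267.0
```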


2021 ◽  
Vol 13 (6) ◽  
pp. 3553
Author(s):  
Philippe Nimmegeers ◽  
Alexej Parchomenko ◽  
Paul De Meulenaere ◽  
Dagmar R. D’hooge ◽  
Paul H. M. Van Steenberge ◽  
...  

Multilevel statistical entropy analysis (SEA) is a method that has been recently proposed to evaluate circular economy strategies on the material, component and product levels to identify critical stages of resource and functionality losses. However, the comparison of technological alternatives may be difficult, and equal entropies do not necessarily correspond with equal recyclability. A coupling with energy consumption aspects is strongly recommended but largely lacking. The aim of this paper is to improve the multilevel SEA method to reliably assess the recyclability of plastics. Therefore, the multilevel SEA method is first applied to a conceptual case study of a fictitious bag filled with plastics, and the possibilities and limitations of the method are highlighted. Subsequently, it is proposed to extend the method with the computation of the relative decomposition energies of components and products. Finally, two recyclability metrics are proposed. A plastic waste collection bag filled with plastic bottles is used as a case study to illustrate the potential of the developed extended multilevel SEA method. The proposed extension allows us to estimate the recyclability of plastics. In future work, this method will be refined and other potential extensions will be studied together with applications to real-life plastic products and plastic waste streams.
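To give a sense of the statistical entropy idea at the material level, the sketch below computes a Shannon-type entropy of a material composition before and after a hypothetical sorting step; the compositions and the single-level formulation are simplified assumptions and do not reproduce the paper's multilevel method or its decomposition-energy extension.

```python
# Hedged sketch: Shannon-type statistical entropy of a material composition.
# Compositions are invented; the multilevel, decomposition-energy extension
# discussed in the paper is not reproduced here.
import math

def statistical_entropy(mass_fractions):
    """H = -sum(m_i * log2(m_i)) over non-zero mass fractions (in bits)."""
    assert abs(sum(mass_fractions) - 1.0) < 1e-9
    return -sum(m * math.log2(m) for m in mass_fractions if m > 0)

mixed_bag  = [0.55, 0.25, 0.15, 0.05]  # PET, HDPE, PP, other (mass fractions)
sorted_pet = [0.97, 0.02, 0.01]        # PET stream after a hypothetical sorting step

print(f"mixed bag entropy : {statistical_entropy(mixed_bag):.2f} bits")
print(f"sorted PET entropy: {statistical_entropy(sorted_pet):.2f} bits")
# Lower entropy after sorting indicates higher material concentration,
# which is the intuition behind using SEA as a recyclability indicator.
```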

