Integration of US Army Corps of Engineers' Time-Series Data Management System with Continuous SWMM Modeling

Author(s):  
Yiwen Wang ◽  
William James

1964 ◽  
Vol 1 (4) ◽  
pp. 215-226 ◽  
Author(s):  
W G Brown

Calculations using the Neumann solution (as modified by Aldrich) and the thermal properties of soils determined by Kersten show that, for the same freezing index, the frost penetration depth varies by a factor of about 2 to 1 between essentially all soils at any moisture content on the one hand and dry sand and rock on the other. The extremes calculated in this way bracket the experimentally determined design curve of the US Army Corps of Engineers and give it theoretical support. The theoretical calculations and additional experimental data are used as the basis for a small alteration in the slope of the design curve. This modified design curve is recommended for field use because of (1) inherent imperfections in existing theory and (2) practical limitations to the precise specification of field conditions.
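The Stefan-type relationship underlying the Neumann solution, in Aldrich's modified Berggren form, can be sketched numerically. The property values below are illustrative assumptions only, not Kersten's measured data or the Corps' design curve:

```python
import math

def frost_depth_stefan(k, freezing_index_degC_days, L, lam=1.0):
    """Stefan-type frost penetration estimate (modified Berggren form).

    k   -- soil thermal conductivity, W/(m*K)
    freezing_index_degC_days -- surface freezing index, degC*days
    L   -- volumetric latent heat of the soil moisture, J/m^3
    lam -- Aldrich's dimensionless correction coefficient (<= 1)
    Returns estimated frost depth in metres.
    """
    F = freezing_index_degC_days * 86400.0  # degC*days -> degC*seconds
    return lam * math.sqrt(2.0 * k * F / L)

# Illustrative inputs: a moist silt vs. a dry sand, same freezing index
depth_moist = frost_depth_stefan(k=1.5, freezing_index_degC_days=800,
                                 L=1.5e8, lam=0.8)
depth_dry = frost_depth_stefan(k=0.3, freezing_index_degC_days=800,
                               L=6.0e6, lam=0.9)
print(f"moist silt: {depth_moist:.2f} m, dry sand: {depth_dry:.2f} m")
```

With these assumed inputs the dry material penetrates a few times deeper than the moist one for the same freezing index, the kind of spread that brackets the design curve as described above.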


2020 ◽  
Author(s):  
Paolo Oliveri ◽  
Simona Simoncelli ◽  
Pierluigi Di Pietro ◽  
Sara Durante

One of the main challenges for present and future ocean observations is to find best practices for data management: infrastructures such as Copernicus and SeaDataCloud already take responsibility for assembling, archiving, updating, and publishing data. Here we present the strengths and weaknesses of the SeaDataCloud Temperature and Salinity time-series data collection, and in particular a tool able to recognize the different devices and platforms and to merge them with processed Copernicus platforms.

While the main target of Copernicus is to acquire and publish data quickly, SeaDataNet aims to publish data at the best available quality. These two data repositories should be considered together, since an originator can ingest data into both infrastructures, into only one, or partially into both. As a result, data are sometimes only partially available in Copernicus or SeaDataCloud, with great impact on researchers who want to access as much data as possible. The burden of reprocessing should not fall on researchers' shoulders, since only users skilled in the full data management plan know how to merge the data.

The SeaDataCloud time-series data collection is a soon-to-be-published Global Ocean dataset that will serve as a reference for ocean researchers, released in the binary, user-friendly Ocean Data View format. The database management plan was originally designed for profiles but has been adapted for time series, resolving several issues such as the uniqueness of identifiers (IDs).

Here we present an extension of the SOURCE (Sea Observations Utility for Reprocessing, Calibration and Evaluation) Python package, able to enhance data quality with redundant, sophisticated methods and to simplify their usage.

SOURCE improves quality-control (Q/C) performance on observations using statistical quality-check procedures that follow ocean best-practice guidelines, addressing the following tasks:

1. Find and aggregate all broken time series using similarity in their ID parameter strings;
2. Find and organize all the different metadata variables in a dictionary;
3. Convert time-series timestamps to simpler measurement units;
4. Filter out devices that lie outside a selected horizontal rectangle;
5. Report on the original Q/C scheme applied by the SeaDataCloud infrastructure;
6. Produce information tables on platforms and on merged duplicate ID strings, together with an error log file (missing time, depth, or data; wrong Q/C variables; etc.).

In particular, the duplicates table and the log file may help SeaDataCloud partners update the data collection and make it finally available to users.

The reconstructed SeaDataCloud time-series data, divided by parameter and stored in a more flexible dataset, can then be ingested into the main part of the software, which compares them with Copernicus time series, finds the same platform using horizontal and vertical surroundings (without relying on the ID), finds and cleans up duplicated data, and merges the two databases to extend data coverage.

This allows researchers to obtain the widest and best-quality data possible for the final user release, and to use these data to calibrate and validate models in order to characterize the sea conditions of a whole area.
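The aggregation steps described above, grouping fragmented time series by ID similarity and dropping duplicated records when merging repositories, can be sketched as follows. The function and field names here are hypothetical illustrations, not the actual SOURCE API:

```python
from collections import defaultdict

def base_id(platform_id: str) -> str:
    """Hypothetical ID normalisation: strip a trailing '_<n>' fragment
    index so pieces like 'BUOY42_1' and 'BUOY42_2' group together."""
    head, sep, tail = platform_id.rpartition("_")
    return head if sep and tail.isdigit() else platform_id

def merge_series(records):
    """Group (id, time, value) records by normalised ID, drop exact
    duplicate timestamps, and return time-sorted series per platform."""
    grouped = defaultdict(dict)               # base ID -> {time: value}
    for pid, t, v in records:
        grouped[base_id(pid)].setdefault(t, v)  # first occurrence wins
    return {pid: sorted(series.items()) for pid, series in grouped.items()}

records = [
    ("BUOY42_1", 0, 13.1), ("BUOY42_2", 1, 13.2),
    ("BUOY42_1", 0, 13.1),                    # exact duplicate record
    ("MOOR7", 0, 8.4),
]
print(merge_series(records))
```

Real platform matching, as the abstract notes, also uses horizontal and vertical surroundings rather than IDs alone; this sketch covers only the ID-similarity and deduplication steps.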


2015 ◽  
Vol 35 (2) ◽  
pp. 196-208 ◽  
Author(s):  
Julie Dean Rosati ◽  
Katherine Flynn Touzinsky ◽  
W. Jeff Lillycrop

2017 ◽  
Vol 8 (1) ◽  
pp. 125-151 ◽  
Author(s):  
Eric M Gagnet ◽  
John M Hoemann ◽  
James S Davidson

Over recent decades, three distinct methods have evolved that are currently used to generate resistance functions for single-degree-of-freedom analyses of unreinforced masonry walls subjected to blast loading. The degree of difference between these resistance definitions depends on whether the wall is assumed to be simply supported or whether compression arching forces result from rotation restraint at the supports. The first method originated in the late 1960s from experimental and analytical research sponsored by the US Department of Defense. That method, referred to as the Wiehle method, is the basis of Unified Facilities Criteria 3-340-02 and of derived analytical software such as the Wall Analysis Code developed by the US Army Corps of Engineers, Engineer Research and Development Center. The second method is based on elastic mechanics and an assumed linear decay function; it is the basis of the widely used Single-Degree-of-Freedom Blast Effects Design Spreadsheets software distributed by the US Army Corps of Engineers, Protective Design Center. The third method is largely based on concrete and masonry behavioral theories developed by Paulay and Priestley in the early 1990s. This article systematically compares the resistance methodologies for arching and non-arching scenarios, demonstrates the implications by incorporating the disparate resistance functions into blast-load single-degree-of-freedom models, compares the analytical results to full-scale blast test results, and offers conclusions about the accuracy and efficacy of each method.
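The role a resistance function plays in such a single-degree-of-freedom model can be illustrated with a generic explicit time-stepping scheme. The bilinear resistance and triangular pulse below are placeholder assumptions, not the Wiehle, elastic-decay, or Paulay-Priestley definitions compared in the article:

```python
def sdof_response(mass, resistance, load, dt, steps):
    """Explicit central-difference integration of m*x'' = F(t) - R(x)
    for an undamped SDOF wall model, starting from rest."""
    x_prev, x = 0.0, 0.0
    history = [x]
    for n in range(steps):
        a = (load(n * dt) - resistance(x)) / mass   # acceleration
        x_next = 2.0 * x - x_prev + a * dt * dt     # central difference
        x_prev, x = x, x_next
        history.append(x)
    return history

# Placeholder bilinear resistance: elastic up to x_e, then flat (plastic)
def resistance(x, k=2.0e6, x_e=0.01):
    return k * x if abs(x) < x_e else k * x_e * (1 if x > 0 else -1)

# Idealised triangular blast pulse: peak P0 decaying to zero at t = td
def load(t, P0=5.0e4, td=0.02):
    return P0 * (1.0 - t / td) if t < td else 0.0

disp = sdof_response(mass=500.0, resistance=resistance, load=load,
                     dt=1e-4, steps=400)
print(f"peak displacement: {max(disp) * 1000:.1f} mm")
```

Swapping in a different resistance definition changes only the `resistance` callable, which is why the choice among the three methods drives the differences in predicted response.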

