Computer Aided Quick Determination of Earthquake Performance of Buildings by Using Street Survey and Preliminary Assessment Methods

2019 ◽  
Vol 2 (2) ◽  
pp. 200-210
Author(s):  
Murat Muvafık ◽  
Muhammet Özdemir

In this study, seven rapid evaluation methods used to determine the earthquake performance of buildings in a fast and practical way are examined. Each method was applied to the parameters of 50 buildings that collapsed or were severely damaged in the 2011 Van earthquake in order to classify the earthquake performance behavior of each building as risky or safe. The accurate-estimation percentage of each method was then calculated by comparing its classifications with the actual condition of the buildings, and these calculations were used to identify the most suitable method for the 50-building data set. In addition, a computer program called EPA (Earthquake Performance Analysis) was developed to evaluate the parameters of the data set faster, more easily, and without error. Three of the seven methods are first-stage (street screening) methods (6306 RYY, FEMA P-154, and Sucuoğlu and Yazgan level-1), and the remaining four are second-stage (preliminary assessment) methods (Sucuoğlu and Yazgan level-2, Özcebe, Yakut, and MVP). According to the results, the preliminary assessment methods predicted the earthquake performance status of the examined buildings 24% more accurately than the street screening methods. The most successful street screening method was 6306 RYY, with 74% accurate estimation, and the most successful preliminary assessment method was the Yakut method, with 86% accurate prediction.
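A minimal sketch of the accuracy calculation described above, assuming hypothetical per-method classifications; this is an illustration, not the authors' EPA program:

```python
# Each method labels every building "risky" or "safe"; its accuracy is the
# share of buildings whose predicted state matches the observed state.
# All classification lists below are hypothetical, chosen only to reproduce
# the percentages quoted in the abstract.

def accuracy(predictions, observed):
    """Percentage of buildings whose predicted state matches the observed state."""
    hits = sum(p == o for p, o in zip(predictions, observed))
    return 100.0 * hits / len(predictions)

# Observed states of the surveyed buildings (all collapsed or heavily damaged -> "risky")
observed = ["risky"] * 50

# Hypothetical classifications by two of the methods
predictions_6306_ryy = ["risky"] * 37 + ["safe"] * 13   # would give 74 %
predictions_yakut    = ["risky"] * 43 + ["safe"] * 7    # would give 86 %

print(accuracy(predictions_6306_ryy, observed))  # 74.0
print(accuracy(predictions_yakut, observed))     # 86.0
```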

2017 ◽  
Vol 25 (4) ◽  
pp. 413-434 ◽  
Author(s):  
Justin Grimmer ◽  
Solomon Messing ◽  
Sean J. Westwood

Randomized experiments are increasingly used to study political phenomena because they can credibly estimate the average effect of a treatment on a population of interest. But political scientists are often interested in how effects vary across subpopulations—heterogeneous treatment effects—and how differences in the content of the treatment affect responses—the response to heterogeneous treatments. Several new methods have been introduced to estimate heterogeneous effects, but it is difficult to know whether a method will perform well for a particular data set. Rather than using only one method, we show how an ensemble of methods—weighted averages of estimates from individual models, increasingly used in machine learning—accurately measures heterogeneous effects. Building on a large literature on ensemble methods, we show how the weighting of methods contributes to accurate estimation of heterogeneous treatment effects and demonstrate how pooling models leads to performance superior to individual methods across diverse problems. We apply the ensemble method to two experiments, illuminating how the ensemble method for heterogeneous treatment effects facilitates exploratory analysis of treatment effects.
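A minimal sketch of the ensemble idea, assuming simulated data, a simple T-learner for each base model, and fixed illustrative weights rather than the learned weights used in the article:

```python
# Several base learners each produce conditional treatment-effect estimates,
# and the ensemble is a weighted average of those estimates. Data, model
# choices, and weights here are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
treat = rng.integers(0, 2, size=1000)
y = X[:, 0] * treat + X[:, 1] + rng.normal(scale=0.5, size=1000)  # true effect = X[:, 0]

def t_learner_effect(make_model, X, y, treat, X_new):
    """Fit separate outcome models for treated/control; effect = predicted difference."""
    m1 = make_model().fit(X[treat == 1], y[treat == 1])
    m0 = make_model().fit(X[treat == 0], y[treat == 0])
    return m1.predict(X_new) - m0.predict(X_new)

estimates = np.column_stack([
    t_learner_effect(LinearRegression, X, y, treat, X),
    t_learner_effect(lambda: RandomForestRegressor(n_estimators=100, random_state=0),
                     X, y, treat, X),
])
weights = np.array([0.5, 0.5])         # illustrative; the article learns these weights
ensemble_effect = estimates @ weights  # weighted average of per-model effect estimates
```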


2015 ◽  
Vol 8 (12) ◽  
pp. 12663-12707 ◽  
Author(s):  
T. E. Taylor ◽  
C. W. O'Dell ◽  
C. Frankenberg ◽  
P. Partain ◽  
H. Q. Cronk ◽  
...  

Abstract. The objective of the National Aeronautics and Space Administration's (NASA) Orbiting Carbon Observatory-2 (OCO-2) mission is to retrieve the column-averaged carbon dioxide (CO2) dry air mole fraction (XCO2) from satellite measurements of reflected sunlight in the near-infrared. These estimates can be biased by clouds and aerosols within the instrument's field of view (FOV). Screening of the most contaminated soundings minimizes unnecessary calls to the computationally expensive Level 2 (L2) XCO2 retrieval algorithm. Hence, robust cloud screening methods have been an important focus of the OCO-2 algorithm development team. Two distinct, computationally inexpensive cloud screening algorithms have been developed for this application. The A-Band Preprocessor (ABP) retrieves the surface pressure using measurements in the 0.76 μm O2 A-band, neglecting scattering by clouds and aerosols, which introduce photon path-length (PPL) differences that can cause large deviations between the expected and retrieved surface pressure. The Iterative Maximum A-Posteriori (IMAP) Differential Optical Absorption Spectroscopy (DOAS) Preprocessor (IDP) retrieves independent estimates of the CO2 and H2O column abundances using observations taken at 1.61 μm (weak CO2 band) and 2.06 μm (strong CO2 band), while neglecting atmospheric scattering. The CO2 and H2O column abundances retrieved in these two spectral regions differ significantly in the presence of cloud and scattering aerosols. The combination of these two algorithms, which key on different features in the spectra, provides the basis for cloud screening of the OCO-2 data set. To validate the OCO-2 cloud screening approach, collocated measurements from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS), aboard the Aqua platform, were compared to results from the two OCO-2 cloud screening algorithms. With tuning to allow throughputs of ≃ 30 %, agreement between the OCO-2 and MODIS cloud screening methods is found to be ≃ 85 % over four 16-day orbit repeat cycles in both the winter (December) and spring (April–May) for OCO-2 nadir-land, glint-land and glint-water observations. No major systematic spatial or temporal dependencies were found, although slight differences in the seasonal data sets do exist, and validation is more problematic with increasing solar zenith angle and when surfaces are covered in snow and ice and have complex topography. To further analyze the performance of the cloud screening algorithms, an initial comparison of OCO-2 observations was made to collocated measurements from the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) aboard the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO). These comparisons highlight the strength of the OCO-2 cloud screening algorithms in identifying high, thin clouds but suggest some difficulty in identifying some clouds near the surface, even when the optical thicknesses are greater than 1.
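A minimal sketch of how the two preprocessors could be combined into a single screen; the thresholds, field names, and pass logic are illustrative assumptions, not the operational OCO-2 settings:

```python
# A sounding is kept only if the A-band surface-pressure deviation is small
# (ABP-style test) AND the weak-band and strong-band CO2 retrievals agree
# (IDP-style consistency test). All numbers below are hypothetical.

def passes_cloud_screen(dp_surface_hpa, xco2_weak_ppm, xco2_strong_ppm,
                        dp_threshold=25.0, ratio_tolerance=0.01):
    """Return True if the sounding looks clear to both screening tests."""
    abp_clear = abs(dp_surface_hpa) < dp_threshold           # retrieved vs expected surface pressure
    ratio = xco2_weak_ppm / xco2_strong_ppm                   # band-to-band consistency
    idp_clear = abs(ratio - 1.0) < ratio_tolerance
    return abp_clear and idp_clear

print(passes_cloud_screen(5.0, 400.2, 400.0))   # True  (clear-looking sounding)
print(passes_cloud_screen(60.0, 404.0, 396.0))  # False (scattering suspected)
```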


2002 ◽  
Vol 6 (4) ◽  
pp. 709-720 ◽  
Author(s):  
M. G. R. Holmes ◽  
A. R. Young ◽  
A. Gustard ◽  
R. Grew

Abstract. Traditionally, the estimation of Mean Flow (MF) in ungauged catchments has been approached using conceptual water balance models or empirical formulae relating climatic inputs to stream flow. In the UK, these types of models have difficulty in predicting MF in low-rainfall areas because their conceptualisation of soil moisture behaviour and its relationship with evaporation rates is rather simplistic. However, it is in these dry regions that accurate estimation of flows is most critical to effective management of a scarce resource. A novel approach to estimating MF, specifically designed to improve the estimation of runoff in dry catchments, has been developed using a regionalisation of the Penman drying curve theory. The dynamic water-balance-style Daily Soil Moisture Accounting (DSMA) model operates at a daily time step, using inputs of precipitation and potential evaporation, and simulates the development of soil moisture deficits explicitly. The model has been calibrated using measured MFs from a large data set of catchments in the United Kingdom. The performance of the DSMA model is superior to existing established steady-state and dynamic water-balance models over the entire data set considered, and the largest improvement is observed in very low rainfall catchments. It is concluded that the performance of all models in high-rainfall areas is likely to be limited by the spatial representation of rainfall.

Keywords: hydrological models, regionalisation, water resources, mean flow, runoff, water balance, Penman drying curve, soil moisture model
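A minimal sketch of a daily soil-moisture-accounting loop of the kind described, with assumed parameter values standing in for the calibrated DSMA parameters:

```python
# A soil moisture deficit (SMD) is updated from daily precipitation and
# potential evaporation; actual evaporation is throttled once the deficit
# exceeds a "root constant" (a Penman-style drying curve), and runoff is
# generated only when the deficit has been satisfied. Parameters are assumed.

def daily_soil_moisture_accounting(precip, pot_evap, root_constant=75.0, dry_fraction=0.1):
    """Return daily runoff (mm) from daily precipitation and potential evaporation series (mm)."""
    smd, runoff = 0.0, []
    for p, pe in zip(precip, pot_evap):
        # Actual evaporation drops to a fraction of PE beyond the root constant.
        ae = pe if smd <= root_constant else dry_fraction * pe
        smd = smd + ae - p
        if smd < 0.0:            # soil back to field capacity; the excess becomes runoff
            runoff.append(-smd)
            smd = 0.0
        else:
            runoff.append(0.0)
    return runoff

# Illustrative 5-day series (mm/day)
print(daily_soil_moisture_accounting([0, 12, 0, 30, 2], [3, 2, 4, 1, 3]))
```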


Geology ◽  
2020 ◽  
Vol 48 (7) ◽  
pp. 718-722
Author(s):  
Jason S. Alexander ◽  
Brandon J. McElroy ◽  
Snehalata Huzurbazar ◽  
Marissa L. Murr

Abstract Accurate estimation of paleo–streamflow depth from outcrop is important for estimation of channel slopes, water discharges, sediment fluxes, and basin sizes of ancient river systems. Bar-scale inclined strata deposited from slipface avalanching on fluvial bar margins are assumed to be indicators of paleodepth insofar as their thickness approaches but does not exceed formative flow depths. We employed a unique, large data set from a prolonged bank-filling flood in the sandy, braided Missouri River (USA) to examine scaling between slipface height and measures of river depth during the flood. The analyses demonstrated that the most frequent slipface height observations underestimate study-reach mean flow depth at peak stage by a factor of 3, but maximum values are approximately equal to mean flow depth. At least 70% of the error is accounted for by the difference between slipface base elevation and mean bed elevation, while the difference between crest elevation and water surface accounts for ∼30%. Our analysis provides a scaling for bar-scale inclined strata formed by avalanching and suggests risk of systematic bias in paleodepth estimation if mean thickness measurements of these deposits are equated to mean bankfull depth.
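A back-of-the-envelope illustration of the reported scaling, using hypothetical slipface heights rather than the Missouri River measurements:

```python
# The modal slipface height underestimates reach-mean flow depth by about a
# factor of 3, while the maximum slipface height is roughly equal to it.
# About 70 % of the mode-based error reflects slipface bases sitting above the
# mean bed; about 30 % reflects crests sitting below the water surface.

modal_slipface_height_m = 1.5        # hypothetical modal value from a deposit
max_slipface_height_m = 4.4          # hypothetical maximum value

est_depth_from_mode = 3.0 * modal_slipface_height_m   # ~4.5 m
est_depth_from_max = max_slipface_height_m            # ~4.4 m

error = est_depth_from_mode - modal_slipface_height_m
print(round(0.7 * error, 2), round(0.3 * error, 2))   # error partition, base vs crest
```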


2019 ◽  
Vol 50 (3) ◽  
pp. 263-267
Author(s):  
Jill A Hancock ◽  
Glen A Palmer

Abstract Background Single-vial fecal immunochemical testing (FIT) is an accepted method of colorectal cancer (CRC) screening. The available 3-vial FIT data set allows for comparison of colonoscopy results using various screening methods. Objective To determine the optimal number of vials for a strong FIT-screening program by examining whether using only a single vial impacts the use of colonoscopy for CRC screening. Methods Patients were given 3-vial FIT collection kits that were processed with a positive hemoglobin cut-off of 100 ng/mL. If FIT results were positive, colonoscopy was performed using standard practices. Results Detection of CRC and precursor adenoma was examined in 932 patients, with a positive colonoscopy sensitivity of 56.2% and 3.0% CRC detection after 3-vial FIT; after single-vial screening, those values were 60.9% and 4.7%, respectively. Conclusions Prescreening patients with FIT before colonoscopy allows colonoscopy to be targeted to higher-risk patients. Implementing use of only a single vial from the 3-vial FIT screening kit would reduce the colonoscopy reflex rate, colonoscopy complication numbers, facility costs, and patient distress by more than 40%, compared with 3-vial screening.
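A minimal sketch of how the single-vial versus 3-vial positivity rules change the colonoscopy reflex rate, using hypothetical per-vial hemoglobin values rather than the study data:

```python
# With three vials a patient is referred to colonoscopy if ANY vial meets the
# 100 ng/mL hemoglobin cut-off; with a single vial only that vial's result
# matters. All patient values below are hypothetical.

CUTOFF_NG_PER_ML = 100.0

patients = [
    [20.0, 150.0, 40.0],    # positive only on vial 2
    [250.0, 300.0, 180.0],  # positive on all vials
    [90.0, 95.0, 110.0],    # positive only on vial 3
    [10.0, 5.0, 30.0],      # negative on all vials
]

three_vial_reflex = sum(any(v >= CUTOFF_NG_PER_ML for v in p) for p in patients)
single_vial_reflex = sum(p[0] >= CUTOFF_NG_PER_ML for p in patients)  # vial 1 only

print(three_vial_reflex, single_vial_reflex)  # 3 vs 1 referrals in this toy example
```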


2013 ◽  
Vol 3 (4) ◽  
pp. 61-83 ◽  
Author(s):  
Eleftherios Tiakas ◽  
Apostolos N. Papadopoulos ◽  
Yannis Manolopoulos

In recent years there has been increasing interest in query processing techniques that take into consideration the dominance relationship between items in order to select the most promising ones, based on user preferences. Skyline and top-k dominating queries are examples of such techniques. A skyline query computes the items that are not dominated, whereas a top-k dominating query returns the k items with the highest domination score. To enable query optimization, it is important to estimate the expected number of skyline items as well as the maximum domination value of an item. In this article, the authors provide an estimation of the maximum domination value under the distinct-values and attribute-independence assumptions. They present three different methodologies for estimating and calculating the maximum domination value and test their performance and accuracy. Among the proposed estimation methods, the Estimation with Roots method outperforms all others and returns the most accurate results. The authors also introduce the eliminating dimension, i.e., the dimension beyond which all domination values become zero, and provide an efficient estimation of that dimension. Moreover, they provide an accurate estimation of the skyline cardinality of a data set.
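A minimal sketch of the two quantities the article estimates, computed naively on illustrative data (smaller values are assumed better in every dimension):

```python
# Domination score of an item = number of other items it dominates.
# Skyline = items dominated by no other item. The data are illustrative;
# the article estimates these quantities without materializing them.

def dominates(a, b):
    """a dominates b: a is no worse in every dimension and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

items = [(1, 4), (2, 2), (3, 1), (4, 4), (5, 5)]

domination_scores = [sum(dominates(a, b) for b in items) for a in items]
skyline = [a for a in items if not any(dominates(b, a) for b in items)]

print(domination_scores)  # the maximum of these is what the article estimates
print(skyline)            # skyline cardinality is the other estimated quantity
```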


2019 ◽  
pp. 1-7 ◽  
Author(s):  
Paul Riviere ◽  
Christopher Tokeshi ◽  
Jiayi Hou ◽  
Vinit Nalawade ◽  
Reith Sarkar ◽  
...  

PURPOSE Treatment decisions about localized prostate cancer depend on accurate estimation of the patient’s life expectancy. Current cancer and noncancer survival models use a limited number of predefined variables, which could restrict their predictive capability. We explored a technique to create more comprehensive survival prediction models using insurance claims data from a large administrative data set. These data contain substantial information about medical diagnoses and procedures, and thus may provide a broader reflection of each patient’s health. METHODS We identified 57,011 Medicare beneficiaries with localized prostate cancer diagnosed between 2004 and 2009. We constructed separate cancer survival and noncancer survival prediction models using a training data set and assessed performance on a test data set. Potential model inputs included clinical and demographic covariates, and 8,971 distinct insurance claim codes describing comorbid diseases, procedures, surgeries, and diagnostic tests. We used a least absolute shrinkage and selection operator technique to identify predictive variables in the final survival models. Each model’s predictive capacity was compared with existing survival models with a metric of explained randomness (ρ2) ranging from 0 to 1, with 1 indicating an ideal prediction. RESULTS Our noncancer survival model included 143 covariates and had improved survival prediction (ρ2 = 0.60) compared with the Charlson comorbidity index (ρ2 = 0.26) and Elixhauser comorbidity index (ρ2 = 0.26). Our cancer-specific survival model included nine covariates, and had similar survival predictions (ρ2 = 0.71) to the Memorial Sloan Kettering prediction model (ρ2 = 0.68). CONCLUSION Survival prediction models using high-dimensional variable selection techniques applied to claims data show promise, particularly with noncancer survival prediction. After further validation, these analyses could inform clinical decisions for men with prostate cancer.
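A minimal sketch of the L1 ("lasso") variable-selection idea on a simulated sparse claims matrix; a penalized logistic regression stands in here for the penalized survival models fitted in the study:

```python
# An L1 penalty shrinks most coefficients of a very wide, sparse claims-code
# matrix to exactly zero, leaving a compact set of predictors. All data and
# dimensions below are simulated and much smaller than the real 8,971 codes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_patients, n_codes = 2000, 500
X = (rng.random((n_patients, n_codes)) < 0.05).astype(float)    # sparse 0/1 claims indicators
true_risk = X[:, 0] * 1.5 + X[:, 1] * 1.0 - X[:, 2] * 1.2       # only three codes matter
y = (true_risk + rng.normal(scale=1.0, size=n_patients) > 0.5).astype(int)

lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
selected = np.flatnonzero(lasso.coef_[0])   # codes retained by the penalty
print(len(selected), selected[:10])
```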


2014 ◽  
Vol 32 (6) ◽  
pp. 519-526 ◽  
Author(s):  
David M. Hyman ◽  
Anne A. Eaton ◽  
Mrinal M. Gounder ◽  
Gary L. Smith ◽  
Erika G. Pamer ◽  
...  

Purpose Not all patients in phase I trials have equivalent susceptibility to serious drug-related toxicity (SDRT). Our goal was to develop a nomogram to predict the risk of cycle-one SDRT to better select appropriate patients for phase I trials. Patients and Methods The prospectively maintained database of patients with solid tumors enrolled onto Cancer Therapeutics Evaluation Program–sponsored phase I trials activated between 2000 and 2010 was used. SDRT was defined as a grade ≥ 4 hematologic or grade ≥ 3 nonhematologic toxicity attributed, at least possibly, to study drug(s). Logistic regression was used to test the association of candidate factors with cycle-one SDRT. A final model, or nomogram, was chosen based on both clinical and statistical significance and validated internally using a bootstrapping technique and externally in an independent data set. Results Data from 3,104 patients enrolled onto 127 trials were analyzed to build the nomogram. In a model with multiple covariates, Eastern Cooperative Oncology Group performance status, WBC count, creatinine clearance, albumin, AST, number of study drugs, biologic study drug (yes v no), and dose (relative to maximum administered) were significant predictors of cycle-one SDRT. All significant factors except dose were included in the final nomogram. The model was validated both internally (bootstrap-adjusted concordance index, 0.60) and externally (concordance index, 0.64). Conclusion This nomogram can be used to accurately predict a patient's risk for SDRT at the time of enrollment. Excluding patients at high risk for SDRT should improve the safety and efficiency of phase I trials.
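A minimal sketch of the modeling and validation approach on simulated data: a logistic regression relates baseline covariates to cycle-one SDRT, and the concordance index (the ROC AUC for a binary outcome) is corrected for optimism with a bootstrap. Covariates, coefficients, and sample sizes are assumptions, not the CTEP cohort:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 3000
X = np.column_stack([
    rng.integers(0, 3, n),          # performance status (simulated)
    rng.normal(7.0, 2.0, n),        # WBC count (simulated)
    rng.normal(90.0, 25.0, n),      # creatinine clearance (simulated)
    rng.normal(4.0, 0.5, n),        # albumin (simulated)
])
logit = -2.5 + 0.4 * X[:, 0] - 0.3 * (X[:, 3] - 4.0)
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))     # simulated cycle-one SDRT indicator

model = LogisticRegression().fit(X, y)
apparent_c = roc_auc_score(y, model.predict_proba(X)[:, 1])

# Bootstrap optimism correction: refit on resamples and compare resample vs original AUC.
optimism = []
for _ in range(50):
    idx = rng.integers(0, n, n)
    m = LogisticRegression().fit(X[idx], y[idx])
    optimism.append(roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
                    - roc_auc_score(y, m.predict_proba(X)[:, 1]))
corrected_c = apparent_c - np.mean(optimism)
print(round(apparent_c, 3), round(corrected_c, 3))
```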


Geophysics ◽  
2017 ◽  
Vol 82 (1) ◽  
pp. N1-N12 ◽  
Author(s):  
Francisco de S. Oliveira ◽  
Jose J. S. de Figueiredo ◽  
Andrei G. Oliveira ◽  
Jörg Schleicher ◽  
Iury C. S. Araújo

Quality factor estimation and correction are necessary to compensate for the seismic energy dissipated during acoustic- and elastic-wave propagation in the earth. In this process, known as Q-filtering in the realm of seismic processing, the main goal is to improve the resolution of the seismic signal, as well as to recover part of the energy dissipated by anelastic attenuation. We have found a way to improve Q-factor estimation from seismic reflection data. Our methodology is based on the combination of the peak-frequency-shift (PFS) method and the redatuming operator. Our innovation lies in the way we correct traveltimes when the medium consists of many layers: the correction of the traveltime table used in the PFS method is performed using the redatuming operator. This operation, performed iteratively, allows a more accurate estimation of the Q factor layer by layer. Applications to synthetic and real data (Viking Graben) demonstrate the feasibility of our analysis.
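A minimal sketch of the peak-frequency-shift relation the method builds on, assuming a Ricker source wavelet; the redatuming-based traveltime correction that is the paper's contribution is not reproduced here:

```python
# For a Ricker wavelet with dominant frequency fm, attenuation over traveltime
# t shifts the observed peak frequency down to fp, and Q follows from the
# standard PFS relation Q = pi * t * fp * fm**2 / (2 * (fm**2 - fp**2)).
import math

def q_from_peak_frequency_shift(fm_hz, fp_hz, traveltime_s):
    """Estimate Q from source dominant frequency, observed peak frequency, and traveltime."""
    if fp_hz >= fm_hz:
        raise ValueError("attenuation should shift the peak frequency downward")
    return math.pi * traveltime_s * fp_hz * fm_hz**2 / (2.0 * (fm_hz**2 - fp_hz**2))

# Illustrative numbers: a 40 Hz Ricker wavelet whose peak drops to 30 Hz after 1.2 s.
print(round(q_from_peak_frequency_shift(40.0, 30.0, 1.2), 1))
```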

