EFFECT OF CREST ELEVATIONS ON THE OVERTOPPING DISCHARGES OVER COMPOSITE SEAWALL

Author(s):  
Burak Aydogan ◽  
Berna Ayat ◽  
Yalçın Yüksel ◽  
Esin Çevik ◽  
Özgen Goller

Overtopping discharges over a composite seawall model were studied experimentally. The effect of crest elevation on overtopping discharges was investigated based on a new data set of 132 tests. The test data were compared with the existing formulas given in the EurOtop Manual (2007) and the EurOtop II Manual (2016). Results showed that the composite-seawall formulas of both the EurOtop Manual (2007) and EurOtop II (2016) are applicable to our test setup with a roughness coefficient of 0.55, which corresponds to a rubble mound composite structure with two layers and an impermeable core. The vertical wall caused the composite structure to act as an impermeable structure. The height of the storm wall was raised by 2, 4, 6, 8, and 10 cm successively during the tests. Results showed that the reduction rate clearly depends on the wave steepness.
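
As a rough illustration of how measured discharges are compared with such formulas, the sketch below evaluates a EurOtop-type exponential relation for the dimensionless mean overtopping discharge over a range of crest freeboards. The coefficients a, b, c and the wave parameters are placeholders rather than values calibrated in the study; only the roughness factor of 0.55 is taken from the abstract.

```python
import numpy as np

# Illustrative sketch of a EurOtop-type exponential overtopping relation:
#   q / sqrt(g * Hm0**3) = a * exp(-(b * Rc / (Hm0 * gamma_f * gamma_beta))**c)
# The coefficients a, b, c and the wave parameters below are placeholders,
# NOT the values calibrated in the study; gamma_f = 0.55 follows the abstract.

g = 9.81          # gravitational acceleration [m/s^2]
Hm0 = 0.10        # spectral significant wave height at the toe [m] (assumed)
gamma_f = 0.55    # roughness factor (two-layer rubble mound, impermeable core)
gamma_beta = 1.0  # obliqueness factor (normal wave incidence assumed)
a, b, c = 0.09, 1.5, 1.3   # placeholder coefficients of the exponential relation

def mean_overtopping_discharge(Rc):
    """Mean overtopping discharge q [m^3/s per m width] for crest freeboard Rc [m]."""
    dimensionless_q = a * np.exp(-(b * Rc / (Hm0 * gamma_f * gamma_beta)) ** c)
    return dimensionless_q * np.sqrt(g * Hm0 ** 3)

# Raising the crest (storm wall) in 2 cm steps, as in the test programme
for dRc in [0.00, 0.02, 0.04, 0.06, 0.08, 0.10]:
    Rc = 0.10 + dRc   # assumed base freeboard of 0.10 m plus storm-wall increase
    print(f"Rc = {Rc:.2f} m  ->  q = {mean_overtopping_discharge(Rc):.2e} m^3/s/m")
```

Raising the storm wall increases the relative freeboard Rc/Hm0, so the predicted discharge drops roughly exponentially; the tests quantify how strongly this reduction depends on wave steepness.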


2003 ◽  
Vol 42 (05) ◽  
pp. 564-571 ◽  
Author(s):  
M. Schumacher ◽  
E. Graf ◽  
T. Gerds

Summary. Objectives: There is a lack of generally applicable tools for the assessment of predictions for survival data. Prediction error curves based on the Brier score, which have been suggested as a sensible approach, are illustrated by means of a case study. Methods: The concept of predictions made in terms of conditional survival probabilities given the patient's covariates is introduced. Such predictions are derived from various statistical models for survival data, including artificial neural networks. The idea of how the prediction error of a prognostic classification scheme can be followed over time is illustrated with the data of two studies on the prognosis of node-positive breast cancer patients, one of them serving as an independent test data set. Results and Conclusions: The Brier score as a function of time is shown to be a valuable tool for assessing the predictive performance of prognostic classification schemes for survival data incorporating censored observations. Comparison with the prediction based on the pooled Kaplan-Meier estimator yields a benchmark value for any classification scheme incorporating the patient's covariate measurements. The problem of an overoptimistic assessment of prediction error caused by data-driven modelling, as is done, for example, with artificial neural nets, can be circumvented by an assessment in an independent test data set.
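
A minimal sketch of the time-dependent Brier score for censored survival data, with inverse-probability-of-censoring weights estimated from a Kaplan-Meier fit to the censoring distribution; the data, predictions, and helper functions below are illustrative placeholders, not the study's implementation.

```python
import numpy as np

def km_censoring_survival(times, events):
    """Kaplan-Meier estimate of the censoring survival function G(t).
    Treats censorings (event == 0) as the 'events' of interest."""
    order = np.argsort(times)
    t_sorted, cens = times[order], (events[order] == 0)
    surv, steps = 1.0, []
    for u in np.unique(t_sorted[cens]):
        at_risk = np.sum(t_sorted >= u)
        d = np.sum((t_sorted == u) & cens)
        surv *= 1.0 - d / at_risk
        steps.append((u, surv))
    def G(t):
        s = 1.0
        for u, v in steps:
            if u <= t:
                s = v
            else:
                break
        return s
    return G

def brier_score(t, times, events, surv_prob_at_t):
    """Time-dependent Brier score with IPCW weights.
    surv_prob_at_t[i] is the model's predicted P(T_i > t | x_i)."""
    G = km_censoring_survival(times, events)
    score = 0.0
    for Ti, di, Si in zip(times, events, surv_prob_at_t):
        if Ti <= t and di == 1:          # event observed before t
            score += (0.0 - Si) ** 2 / G(Ti)
        elif Ti > t:                     # still event-free at t
            score += (1.0 - Si) ** 2 / G(t)
        # subjects censored before t contribute zero (handled by the weighting)
    return score / len(times)

# Toy usage: predictions from any model (or the pooled Kaplan-Meier benchmark)
times  = np.array([2.0, 5.0, 6.0, 8.0, 12.0])
events = np.array([1,   0,   1,   1,   0])     # 1 = event, 0 = censored
pred   = np.array([0.9, 0.7, 0.6, 0.4, 0.3])   # predicted survival at t = 7
print(brier_score(7.0, times, events, pred))
```

Evaluating this score over a grid of time points yields the prediction error curve discussed above; substituting the pooled Kaplan-Meier survival estimate for `pred` gives the benchmark value.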



Author(s):  
Joshua Auld ◽  
Abolfazl (Kouros) Mohammadian ◽  
Marcelo Simas Oliveira ◽  
Jean Wolf ◽  
William Bachman

Research was undertaken to determine whether demographic characteristics of individual travelers could be derived from travel pattern information when no information about the individual was available. This question is relevant in the context of anonymously collected travel information, such as cell phone traces, when used for travel demand modeling. Determining the demographics of a traveler from such data could partially obviate the need for large-scale collection of travel survey data, depending on the purpose for which the data were to be used. This research complements methodologies used to identify activity stops, purposes, and mode types from raw trace data and presumes that such methods exist and are available. The paper documents the development of procedures for taking raw activity streams estimated from GPS trace data and converting them into activity-travel pattern characteristics, which are then combined with basic land use information and used to estimate various models of demographic characteristics. The work status, education level, age, and license possession of individuals and the presence of children in their households were all estimated successfully, with substantial increases in performance versus null-model expectations for both training and test data sets. Gender, household size, and number of vehicles proved more difficult to estimate, and performance was lower on the test data set, indicating overfitting in these models. Overall, the demographic models appear to have potential for characterizing anonymous data streams, which could extend the usability and applicability of such data sources to the travel demand context.
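
The comparison against null-model expectations described above can be sketched as follows; the features, target, and classifier are hypothetical stand-ins rather than the models estimated in the paper.

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical stand-in data: rows are travelers, columns are activity-travel
# pattern features derived from GPS traces (e.g. trip counts, activity durations,
# land-use mix around stops); y is a demographic attribute such as work status.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.8, size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Null model: always predict the most frequent class in the training data
null = DummyClassifier(strategy="most_frequent").fit(X_tr, y_tr)
# Demographic model: any standard classifier over the pattern features
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("null accuracy :", accuracy_score(y_te, null.predict(X_te)))
print("model accuracy:", accuracy_score(y_te, model.predict(X_te)))
```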



2021 ◽  
Author(s):  
David Cotton ◽  

Introduction: HYDROCOASTAL is a two-year project funded by ESA, with the objective to maximise exploitation of SAR and SARin altimeter measurements in the coastal zone and inland waters, by evaluating and implementing new approaches to process SAR and SARin data from CryoSat-2, and SAR altimeter data from Sentinel-3A and Sentinel-3B. Optical data from Sentinel-2 MSI and Sentinel-3 OLCI instruments will also be used in generating River Discharge products. New SAR and SARin processing algorithms for the coastal zone and inland waters will be developed, implemented, and evaluated through an initial Test Data Set for selected regions. From the results of this evaluation a processing scheme will be implemented to generate global coastal zone and river discharge data sets. A series of case studies will assess these products in terms of their scientific impacts. All the produced data sets will be available on request to external researchers, and full descriptions of the processing algorithms will be provided.

Objectives: The scientific objectives of HYDROCOASTAL are to enhance our understanding of interactions between inland waters and the coastal zone, between the coastal zone and the open ocean, and of the small-scale processes that govern these interactions. The project also aims to improve our capability to characterize the variation at different time scales of inland water storage, exchanges with the ocean, and the impact on regional sea-level changes. The technical objectives are to develop and evaluate new SAR and SARin altimetry processing techniques in support of the scientific objectives, including stack processing, filtering, and retracking. An improved Wet Troposphere Correction will also be developed and evaluated.

Project Outline: There are four tasks in the project:
- Scientific Review and Requirements Consolidation: review the current state of the art in SAR and SARin altimeter data processing as applied to the coastal zone and to inland waters.
- Implementation and Validation: new processing algorithms will be implemented to generate a Test Data Set, which will be validated against models, in-situ data, and other satellite data sets. Selected algorithms will then be used to generate global coastal zone and river discharge data sets.
- Impacts Assessment: the impact of these global products will be assessed in a series of case studies.
- Outreach and Roadmap: outreach material will be prepared and distributed to engage with the wider scientific community, and recommendations will be provided for the development of future missions and future research.

Presentation: The presentation will provide an overview of the project, present the different SAR altimeter processing algorithms that are being evaluated in the first phase of the project, and show early results from the evaluation of the initial test data set.



Author(s):  
Yanxiang Yu ◽  
Chicheng Xu ◽  
Siddharth Misra ◽  
Weichang Li ◽  
...  

Compressional and shear sonic traveltime logs (DTC and DTS, respectively) are crucial for subsurface characterization and seismic-well tie. However, these two logs are often missing or incomplete in many oil and gas wells. Therefore, many petrophysical and geophysical workflows include sonic log synthetization or pseudo-log generation based on multivariate regression or rock physics relations. Started on March 1, 2020, and concluded on May 7, 2020, the SPWLA PDDA SIG hosted a contest aiming to predict the DTC and DTS logs from seven "easy-to-acquire" conventional logs using machine-learning methods (GitHub, 2020). In the contest, a total of 20,525 data points with half-foot resolution from three wells were collected to train regression models using machine-learning techniques. Each data point had seven features, consisting of the conventional "easy-to-acquire" logs: caliper, neutron porosity, gamma ray (GR), deep resistivity, medium resistivity, photoelectric factor, and bulk density, as well as the two sonic logs (DTC and DTS) as the target. A separate data set of 11,089 samples from a fourth well was then used as the blind test data set. The prediction performance of the model was evaluated using root mean square error (RMSE) as the metric: $\mathrm{RMSE} = \sqrt{\frac{1}{2m}\sum_{i=1}^{m}\left[\left(\mathrm{DTC}_{\mathrm{pred}}^{i}-\mathrm{DTC}_{\mathrm{true}}^{i}\right)^{2}+\left(\mathrm{DTS}_{\mathrm{pred}}^{i}-\mathrm{DTS}_{\mathrm{true}}^{i}\right)^{2}\right]}$. In the benchmark model (Yu et al., 2020), we used a Random Forest regressor and conducted minimal preprocessing of the training data set; an RMSE score of 17.93 was achieved on the test data set. The top five models from the contest, on average, beat the performance of our benchmark model by 27% in the RMSE score. In this paper, we review these five solutions, including preprocessing techniques and different machine-learning models, including neural networks, long short-term memory (LSTM) networks, and ensemble trees. We found that data cleaning and clustering were critical for improving the performance of all models.
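
A minimal sketch of the contest metric and a Random-Forest-style benchmark, using small synthetic stand-in arrays in place of the actual well logs; the array sizes, features, and preprocessing here are not those of the benchmark model.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def contest_rmse(dtc_true, dtc_pred, dts_true, dts_pred):
    """Contest RMSE metric: both sonic logs pooled into a single score."""
    m = len(dtc_true)
    se = (dtc_pred - dtc_true) ** 2 + (dts_pred - dts_true) ** 2
    return np.sqrt(se.sum() / (2 * m))

# Hypothetical stand-in arrays; in the contest, X held the seven conventional logs
# (caliper, neutron porosity, GR, deep/medium resistivity, PEF, bulk density) from
# three training wells (20,525 points) and y the DTC/DTS targets, with a fourth
# blind well (11,089 points) as the test set.
rng = np.random.default_rng(42)
X_train, y_train = rng.normal(size=(2000, 7)), rng.normal(size=(2000, 2))
X_test, y_test = rng.normal(size=(800, 7)), rng.normal(size=(800, 2))

# Benchmark-style model: a multi-output Random Forest with minimal preprocessing
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)
pred = model.predict(X_test)
print(contest_rmse(y_test[:, 0], pred[:, 0], y_test[:, 1], pred[:, 1]))
```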



2021 ◽  
Author(s):  
Louise Bloch ◽  
Christoph M. Friedrich

Abstract Background: The prediction of whether subjects with Mild Cognitive Impairment (MCI) will prospectively develop Alzheimer's Disease (AD) is important for the recruitment and monitoring of subjects for therapy studies. Machine Learning (ML) is suitable for improving early AD prediction. The etiology of AD is heterogeneous, which leads to noisy data sets. Additional noise is introduced by multicentric study designs and varying acquisition protocols. This article examines whether an automatic and fair data valuation method based on Shapley values can identify subjects with noisy data. Methods: An ML workflow was developed and trained for a subset of the Alzheimer's Disease Neuroimaging Initiative (ADNI) cohort. The validation was executed for an independent ADNI test data set and for the Australian Imaging, Biomarker and Lifestyle Flagship Study of Ageing (AIBL) cohort. The workflow included volumetric Magnetic Resonance Imaging (MRI) feature extraction, subject sample selection using data Shapley, Random Forest (RF) and eXtreme Gradient Boosting (XGBoost) for model training, and Kernel SHapley Additive exPlanations (SHAP) values for model interpretation. This model interpretation enables clinically relevant explanation of individual predictions. Results: The XGBoost models that excluded 116 of the 467 subjects from the training data set based on their Logistic Regression (LR) data Shapley values outperformed the models trained on the entire training data set, which reached a mean classification accuracy of 58.54%, by 14.13% (8.27 percentage points) on the independent ADNI test data set. The XGBoost models trained on the entire training data set reached a mean accuracy of 60.35% for the AIBL data set. An improvement of 24.86% (15.00 percentage points) could be reached for the XGBoost models if the 72 subjects with the smallest RF data Shapley values were excluded from the training data set. Conclusion: The data Shapley method was able to improve the classification accuracies for the test data sets. Noisy data were associated with the number of ApoE ε4 alleles and volumetric MRI measurements. Kernel SHAP showed that the black-box models learned biologically plausible associations.
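
A minimal sketch of the sample-selection step, assuming per-subject data Shapley values have already been computed by a separate valuation step; scikit-learn's GradientBoostingClassifier stands in for XGBoost, and all data and the cut-off k are placeholders.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical stand-in data: rows are subjects, columns volumetric MRI features,
# y is the MCI-to-AD conversion label. shapley_values is assumed to come from a
# prior data-valuation step (e.g. data Shapley with an LR or RF base model).
rng = np.random.default_rng(1)
X = rng.normal(size=(467, 20))
y = (X[:, 0] + rng.normal(scale=1.5, size=467) > 0).astype(int)
shapley_values = rng.normal(size=467)          # placeholder per-subject values

X_tr, X_te, y_tr, y_te, v_tr, _ = train_test_split(
    X, y, shapley_values, test_size=0.3, random_state=1)

# Baseline: train on the full training set
full = GradientBoostingClassifier(random_state=1).fit(X_tr, y_tr)

# Data-Shapley filtering: drop the k training subjects with the smallest values,
# i.e. those whose presence is estimated to hurt (or barely help) the model.
k = 80
keep = np.argsort(v_tr)[k:]
filtered = GradientBoostingClassifier(random_state=1).fit(X_tr[keep], y_tr[keep])

print("full training set :", accuracy_score(y_te, full.predict(X_te)))
print("filtered training :", accuracy_score(y_te, filtered.predict(X_te)))
```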



2012 ◽  
Vol 51 (01) ◽  
pp. 39-44 ◽  
Author(s):  
K. Matsuoka ◽  
K. Yoshino

Summary. Objectives: The aim of this study is to present a method of assessing psychological tension that is optimized to every individual on the basis of heart rate variability (HRV) data which, to eliminate the influence of inter-individual variability, are measured over a long time period during daily life. Methods: HRV and body accelerations were recorded from nine normal subjects during two months of normal daily life. Fourteen HRV indices were calculated from the HRV data in the 512 seconds prior to the time of every mental tension level report. Data to be analyzed were limited to those with body accelerations of 30 mG (0.294 m/s²) and lower. Further, the differences from the reference values in the same time zone were calculated for both the mental tension score (Δtension) and the HRV index values (ΔHRVI). A multiple linear regression model that estimates Δtension from the scores of the principal components of ΔHRVI was then constructed for each individual. The data were divided into a training data set and a test data set in accordance with the twofold cross-validation method. Multiple linear regression coefficients were determined using the training data set, and the generalization capability of the optimized model was checked using the test data set. Results: The subjects' mean Pearson correlation coefficient was 0.52 for the training data set and 0.40 for the test data set. The subjects' mean coefficient of determination was 0.28 for the training data set and 0.11 for the test data set. Conclusion: We proposed a method of assessing psychological tension that is optimized to every individual based on HRV data measured over a long period of daily life.
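
A minimal sketch of the individualized model: principal-component scores of the ΔHRVI differences regressed onto Δtension and evaluated with twofold cross-validation; the data, number of components, and fold setup are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold
from scipy.stats import pearsonr

# Hypothetical stand-in data for one subject: rows are tension reports, columns are
# the 14 HRV-index differences from the same-time-zone reference (Delta-HRVI);
# y is the corresponding Delta-tension score.
rng = np.random.default_rng(2)
dHRVI = rng.normal(size=(200, 14))
d_tension = dHRVI[:, :3].sum(axis=1) + rng.normal(scale=1.0, size=200)

# Individualized model: PCA scores of Delta-HRVI -> Delta-tension,
# evaluated with twofold cross-validation as in the study design.
for train_idx, test_idx in KFold(n_splits=2, shuffle=True, random_state=2).split(dHRVI):
    pca = PCA(n_components=5).fit(dHRVI[train_idx])
    model = LinearRegression().fit(pca.transform(dHRVI[train_idx]), d_tension[train_idx])
    pred = model.predict(pca.transform(dHRVI[test_idx]))
    r, _ = pearsonr(d_tension[test_idx], pred)
    print(f"test-fold Pearson r = {r:.2f}")
```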



2020 ◽  
Vol 58 (8) ◽  
pp. 1667-1679
Author(s):  
Benedikt Franke ◽  
J. Weese ◽  
I. Waechter-Stehle ◽  
J. Brüning ◽  
T. Kuehne ◽  
...  

Abstract The transvalvular pressure gradient (TPG) is commonly estimated using the Bernoulli equation. However, the method is known to be inaccurate. Therefore, an adjusted Bernoulli model for accurate TPG assessment was developed and evaluated. Numerical simulations were used to calculate TPGCFD in patient-specific geometries of aortic stenosis as ground truth. Geometries, aortic valve areas (AVA), and flow rates were derived from computed tomography scans. Simulations were divided into a training data set (135 cases) and a test data set (36 cases). The training data were used to fit an adjusted Bernoulli model as a function of AVA and flow rate. The model-predicted TPGModel was evaluated using the test data set and also compared against the common Bernoulli equation (TPGB). TPGB and TPGModel both correlated well with TPGCFD (r > 0.94), but significantly overestimated it. The average difference between TPGModel and TPGCFD was much lower: 3.3 mmHg vs. 17.3 mmHg between TPGB and TPGCFD. Also, the standard error of estimate was lower for the adjusted model: SEEModel = 5.3 mmHg vs. SEEB = 22.3 mmHg. The adjusted model's performance was more accurate than that of the conventional Bernoulli equation. The model might help to improve non-invasive assessment of TPG.
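
A minimal sketch of fitting such an adjusted model, assuming a Bernoulli-like functional form TPG = a·(Q/AVA)² + b calibrated on the training simulations; the functional form, units, and data below are assumptions, not the model published in the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical stand-in training data: aortic valve area AVA [cm^2], flow rate Q [ml/s],
# and CFD-computed transvalvular pressure gradient TPG_cfd [mmHg] as ground truth.
rng = np.random.default_rng(3)
AVA = rng.uniform(0.6, 2.0, size=135)
Q = rng.uniform(150.0, 450.0, size=135)
TPG_cfd = 0.002 * (Q / AVA) ** 2 + rng.normal(scale=3.0, size=135)  # synthetic "CFD" values

# Adjusted model assumed here as a Bernoulli-like form TPG = a * (Q / AVA)^2 + b,
# with a and b fitted to the training simulations (the paper's exact form may differ).
feature = ((Q / AVA) ** 2).reshape(-1, 1)
adjusted = LinearRegression().fit(feature, TPG_cfd)

# Prediction for a new (test) case
AVA_new, Q_new = 1.0, 300.0
print(adjusted.predict(np.array([[(Q_new / AVA_new) ** 2]])))
```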



Author(s):  
Munendra Tomar ◽  
Toby Fore ◽  
Marc Baumeister ◽  
Chris Yoxall ◽  
Thomas Beuker

The management of Stress Corrosion Cracking (SCC) represents one of the challenges for pipeline operating companies with pipelines potentially susceptible to SCC. To help better support the management of SCC, a graded performance specification for the high-resolution Electro-Magnetic Acoustical Transducer (EMAT) In-Line Inspection (ILI) technology is derived, which provides higher levels of confidence for the detection of crack-field anomalies with critical dimensions. This paper presents the process used to derive the graded performance values for the EMAT ILI technology with regard to SCC. The process covers the Probability of Detection (POD) and Probability of Identification (POI). A blind test was carried out to derive the graded performance specification. A test data set was compiled comprising EMAT data for several joints containing relevant anomalies and neighboring joints, some containing additional shallow SCC. These joints had been dug based on EMAT ILI data, and all of the joints were evaluated with 360-degree Non-Destructive Examination (NDE) followed by destructive testing. For each target joint with relevant anomalies to be assessed, four additional joints were added in random order to generate a realistic density and distribution of anomalies. Furthermore, pipe joints with non-crack-like anomalies, as well as pipe joints with mixed populations, were included in the blind test data set to ensure a realistic feature population and to assess POI without the side effects of a weighted feature population within the composed set of ILI data. The data set was then evaluated by multiple analysts, and the results from each analyst were evaluated and used to derive the POD and POI values for the graded specification. In addition, the full process of data analysis, including team lead review, was carried out for one of the analysts for comparison with the individual analyst results. Anomaly dimensions were compared against the true population to derive the POD and POI values. Furthermore, length and depth sizing performance was assessed.
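
As a minimal illustration of how the graded performance values are derived once ILI calls have been matched to the NDE and destructive-test truth, the sketch below computes POD and POI from placeholder counts; the numbers are not results from the blind test.

```python
# Minimal sketch of deriving POD and POI from a blind-test comparison, assuming
# the matching of ILI calls to NDE/destructive-test truth has already been done.
# Counts below are placeholders, not results from the paper.

n_true_anomalies = 120        # crack-field anomalies confirmed by NDE / destructive testing
n_detected = 102              # of those, reported by the EMAT ILI analysts
n_correctly_identified = 95   # of the detected ones, classified as crack-like

pod = n_detected / n_true_anomalies            # Probability of Detection
poi = n_correctly_identified / n_detected      # Probability of Identification (given detection)

print(f"POD = {pod:.2f}, POI = {poi:.2f}")
```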



2008 ◽  
Vol 17 (04) ◽  
pp. 659-685 ◽  
Author(s):  
VASILE RUS ◽  
PHILIP M. McCARTHY ◽  
DANIELLE S. McNAMARA ◽  
ARTHUR C. GRAESSER

In this paper we study a graph-based approach to the task of Recognizing Textual Entailment between a Text and a Hypothesis. The approach takes into account the full lexico-syntactic context of both the Text and the Hypothesis and is based on the concept of subsumption. It starts by mapping the Text and Hypothesis onto graph structures that have nodes representing concepts and edges representing lexico-syntactic relations among concepts. An entailment decision is then made on the basis of a subsumption score between the Text graph and the Hypothesis graph. The results obtained on a standard entailment test data set were promising. The impact of synonymy on entailment is quantified and discussed. An important advantage of a solution like ours is its ability to be customized to obtain high-confidence results.
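
A minimal sketch of a subsumption-style score between Text and Hypothesis graphs; the node/edge representation and the weighting are illustrative assumptions, not the exact scoring of the paper.

```python
# Nodes are concepts (here plain lemmas), edges are labeled lexico-syntactic
# relations as (head, relation, dependent) triples. The scoring below is a toy
# proxy for subsumption: how much of the Hypothesis graph is covered by the Text graph.

def subsumption_score(text_nodes, text_edges, hyp_nodes, hyp_edges, alpha=0.5):
    node_match = sum(1 for n in hyp_nodes if n in text_nodes) / max(len(hyp_nodes), 1)
    edge_match = sum(1 for e in hyp_edges if e in text_edges) / max(len(hyp_edges), 1)
    return alpha * node_match + (1 - alpha) * edge_match

text_nodes = {"company", "acquire", "startup", "2003"}
text_edges = {("acquire", "subj", "company"), ("acquire", "obj", "startup")}
hyp_nodes = {"company", "buy", "startup"}
hyp_edges = {("buy", "subj", "company"), ("buy", "obj", "startup")}

score = subsumption_score(text_nodes, text_edges, hyp_nodes, hyp_edges)
print("entailment" if score > 0.6 else "no entailment", score)
```

In this toy pair the verbs "buy" and "acquire" do not match lexically, which is exactly the kind of gap that the synonymy handling quantified in the paper is meant to close.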


