The European Creep Collaborative Committee (ECCC) Approach to Creep Data Assessment

2008 ◽  
Vol 130 (2) ◽  
Author(s):  
Stuart Holdsworth

The European Creep Collaborative Committee (ECCC) approach to creep data assessment has now been established for almost ten years. The methodology covers the analysis of rupture strength and ductility, creep strain, and stress relaxation data, for a range of material conditions. This paper reviews the concepts and procedures involved. The original approach was devised to determine data sheets for use by committees responsible for the preparation of National and International Design and Product Standards, and the methods developed for data quality evaluation and data analysis were therefore intentionally rigorous. The focus was clearly on the determination of long-time property values from the largest possible data sets involving a significant number of observations in the mechanism regime for which predictions were required. More recently, the emphasis has changed. There is now an increasing requirement for full property descriptions from very short times to very long and hence the need for much more flexible model representations than were previously required. There continues to be a requirement for reliable long-time predictions from relatively small data sets comprising relatively short duration tests, in particular, to exploit new alloy developments at the earliest practical opportunity. In such circumstances, it is not feasible to apply the same degree of rigor adopted for large data set assessment. Current developments are reviewed.
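
For illustration only (not the ECCC procedure itself): the kind of rupture-strength model fitting involved can be sketched with a time-temperature parameter such as Larson-Miller. The data values, the constant C, and the polynomial order below are all assumptions.

```python
import numpy as np

# Hypothetical rupture data: temperature (K), stress (MPa), time to rupture (h).
T  = np.array([823, 823, 873, 873, 923, 923, 973, 973], dtype=float)
S  = np.array([220, 180, 160, 120, 100,  70,  60,  40], dtype=float)
tr = np.array([1.2e4, 6.5e4, 8.0e3, 5.2e4, 1.1e4, 7.9e4, 9.0e3, 6.1e4])

C = 20.0                                   # assumed Larson-Miller constant
P = T * (C + np.log10(tr)) / 1000.0        # Larson-Miller parameter (scaled)

# Fit log10(stress) as a low-order polynomial in P (order is an assumption).
coef = np.polyfit(P, np.log10(S), deg=2)

def rupture_stress(T_kelvin, t_hours):
    """Predicted rupture strength (MPa) at a given temperature and life."""
    p = T_kelvin * (C + np.log10(t_hours)) / 1000.0
    return 10.0 ** np.polyval(coef, p)

# Extrapolated 100 000 h rupture strength at 873 K (for illustration only).
print(rupture_stress(873.0, 1.0e5))
```

A real assessment would also include the post-assessment acceptability checks that the ECCC recommendations require, which are beyond this sketch.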

2020 ◽  
pp. 1-11
Author(s):  
Erjia Yan ◽  
Zheng Chen ◽  
Kai Li

Citation sentiment plays an important role in citation analysis and scholarly communication research, but prior citation sentiment studies have used small data sets and relied largely on manual annotation. This paper uses a large data set of PubMed Central (PMC) full-text publications and analyzes citation sentiment in more than 32 million citances within PMC, revealing citation sentiment patterns at the journal and discipline levels. This paper finds a weak relationship between a journal’s citation impact (as measured by CiteScore) and the average sentiment score of citances to its publications. When journals are aggregated into quartiles based on citation impact, we find that journals in higher quartiles are cited more favorably than those in the lower quartiles. Further, social science journals are found to be cited with the highest sentiment, followed by engineering and natural science journals, and then biomedical journals. This result may be attributed to disciplinary discourse patterns in which social science researchers tend to use more subjective terms to describe others’ work than do natural science or biomedical researchers.
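
As a minimal, hedged sketch of the aggregation step described above (not the authors' pipeline): once each citance carries a sentiment score from some classifier, the journal-level and quartile-level comparisons reduce to a grouped average. Column names and values below are invented.

```python
import pandas as pd

# Hypothetical table: one row per citance, with the cited journal, that journal's
# CiteScore, and a sentiment score already assigned by some classifier.
citances = pd.DataFrame({
    "journal":   ["A", "A", "B", "B", "C", "C", "D", "D"],
    "citescore": [9.1, 9.1, 5.4, 5.4, 2.2, 2.2, 0.8, 0.8],
    "sentiment": [0.6, 0.2, 0.3, 0.1, 0.0, 0.2, -0.1, 0.1],
})

# Average citance sentiment per cited journal.
per_journal = (citances.groupby("journal")
                        .agg(citescore=("citescore", "first"),
                             mean_sentiment=("sentiment", "mean"))
                        .reset_index())

# Aggregate journals into CiteScore quartiles (Q1 = highest impact) and compare.
per_journal["quartile"] = pd.qcut(per_journal["citescore"], 4,
                                  labels=["Q4", "Q3", "Q2", "Q1"])
print(per_journal.groupby("quartile", observed=True)["mean_sentiment"].mean())

# Correlation between impact and sentiment at the journal level.
print(per_journal["citescore"].corr(per_journal["mean_sentiment"]))
```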


Author(s):  
Jianping Ju ◽  
Hong Zheng ◽  
Xiaohang Xu ◽  
Zhongyuan Guo ◽  
Zhaohui Zheng ◽  
...  

Abstract Although convolutional neural networks have achieved success in the field of image classification, there are still challenges in agricultural product quality sorting, such as machine vision-based jujube defect detection. The performance of jujube defect detection mainly depends on the feature extraction and the classifier used. Due to the diversity of the jujube materials and the variability of the testing environment, traditional methods of manually extracting features often fail to meet the requirements of practical application. In this paper, a jujube sorting model for small data sets based on a convolutional neural network and transfer learning is proposed to meet the actual demands of jujube defect detection. Firstly, the original images collected from an actual jujube sorting production line were pre-processed, and the data were augmented to establish a data set of five categories of jujube defects. The original CNN model was then improved by embedding the SE module and by replacing the softmax loss function with the triplet loss function and the center loss function. Finally, a model pre-trained on the ImageNet image data set was trained on the jujube defects data set, so that the parameters of the pre-trained model could fit the parameter distribution of the jujube defect images; this distribution was transferred to the jujube defects data set to complete the transfer of the model and realize the detection and classification of jujube defects. Classification accuracy and confusion matrices were analyzed against comparison models, and the classification results were visualized with heatmaps. The experimental results show that the SE-ResNet50-CL model optimizes the fine-grained classification problem of jujube defect recognition, with a test accuracy of 94.15%. The model has good stability and high recognition accuracy in complex environments.
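
A minimal sketch, under stated assumptions, of the two ingredients named in the abstract: a squeeze-and-excitation (SE) block and an ImageNet-pretrained ResNet-50 with a new five-way head. The placement of the SE block, the torchvision weight enum (recent torchvision assumed), and all hyperparameters are assumptions; the triplet/center losses and data augmentation are omitted.

```python
import torch
import torch.nn as nn
from torchvision import models

class SEBlock(nn.Module):
    """Squeeze-and-excitation: channel-wise reweighting of feature maps."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w

# Transfer learning: start from ImageNet weights and keep the convolutional trunk.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
trunk = nn.Sequential(*list(backbone.children())[:-2])   # drop avgpool and fc

# One possible (assumed) placement of an SE block: after the last residual stage,
# followed by a new 5-way head for the five jujube defect categories.
model = nn.Sequential(
    trunk,
    SEBlock(2048),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(2048, 5),
)

x = torch.randn(2, 3, 224, 224)          # dummy batch
print(model(x).shape)                    # torch.Size([2, 5])
```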


Author(s):  
Jungeui Hong ◽  
Elizabeth A. Cudney ◽  
Genichi Taguchi ◽  
Rajesh Jugulum ◽  
Kioumars Paryani ◽  
...  

The Mahalanobis-Taguchi System is a diagnostic and predictive method for analyzing patterns in multivariate cases. The goal of this study is to compare the ability of the Mahalanobis-Taguchi System and a neural network to discriminate using small data sets. We examine the discriminant ability as a function of data set size using an application area where reliable data are publicly available. The study uses the Wisconsin Breast Cancer data set, which has nine attributes and one class label.
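
For orientation, the discriminating quantity at the heart of the Mahalanobis-Taguchi System is the (scaled) Mahalanobis distance from a "normal" reference group; a minimal sketch with synthetic data follows. The threshold and data are assumptions, and the orthogonal-array feature screening step of MTS is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "normal" reference group (e.g. benign cases), 9 attributes.
normal = rng.normal(size=(60, 9))

mu = normal.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(normal, rowvar=False))

def mahalanobis_sq(x):
    """Scaled squared Mahalanobis distance from the normal-group centre."""
    d = x - mu
    return (d @ cov_inv * d).sum(axis=-1) / normal.shape[1]

# New observations: the first resembles the normal group, the second does not.
candidates = np.vstack([rng.normal(size=9), rng.normal(loc=4.0, size=9)])
scores = mahalanobis_sq(candidates)

threshold = 3.0                      # assumed cut-off; MTS tunes this per problem
print(scores, scores > threshold)    # larger score => "abnormal"
```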


2019 ◽  
Vol 51 (4) ◽  
pp. 167-179
Author(s):  
Marcin Pietroń

Abstract Databases are a basic component of every GIS system and many geoinformation applications, and they hold a prominent place in the tool kit of any cartographer. Solutions based on the relational model have long been the standard, but there is an increasingly popular technological trend: solutions based on NoSQL databases, which have many advantages in the context of processing large data sets. This paper compares the performance of selected relational and NoSQL spatial databases executing queries with selected spatial operators. It was hypothesised that the non-relational solution would prove more effective, which was confirmed by the results of the study. The same spatial data set was loaded into PostGIS and MongoDB databases, which ensured standardisation of the data for comparison purposes. Then, SQL queries and JavaScript commands were used to perform specific spatial analyses, and the parameters necessary to compare performance were measured at the same time. The study’s results reveal which approach is faster and utilises fewer computer resources. However, it is difficult to identify clearly which technology is better, because a number of other factors have to be considered when choosing the right tool.
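
A minimal sketch of the kind of head-to-head timing described, driven from Python here as a like-for-like harness rather than with the paper's SQL/JavaScript sessions. It assumes the same point data set has already been loaded into a PostGIS table poi(geom) and a MongoDB collection gis.poi with a GeoJSON location field and a 2dsphere index; connection strings and names are invented.

```python
import json
import time
import psycopg2
from pymongo import MongoClient

polygon = {  # same query window for both systems (GeoJSON, WGS84)
    "type": "Polygon",
    "coordinates": [[[19.8, 50.0], [20.1, 50.0], [20.1, 50.1],
                     [19.8, 50.1], [19.8, 50.0]]],
}

# --- PostGIS: count points falling inside the polygon -----------------------
pg = psycopg2.connect("dbname=gis user=gis password=gis host=localhost")
t0 = time.perf_counter()
with pg.cursor() as cur:
    cur.execute(
        "SELECT count(*) FROM poi "
        "WHERE ST_Within(geom, ST_SetSRID(ST_GeomFromGeoJSON(%s), 4326));",
        (json.dumps(polygon),),
    )
    pg_count = cur.fetchone()[0]
print("PostGIS:", pg_count, time.perf_counter() - t0, "s")

# --- MongoDB: the equivalent $geoWithin query --------------------------------
mongo = MongoClient("mongodb://localhost:27017")
coll = mongo["gis"]["poi"]
t0 = time.perf_counter()
mongo_count = coll.count_documents(
    {"location": {"$geoWithin": {"$geometry": polygon}}}
)
print("MongoDB:", mongo_count, time.perf_counter() - t0, "s")
```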


Author(s):  
Kim Wallin

The standard Master Curve (MC) deals only with materials assumed to be homogeneous, but MC analysis methods for inhomogeneous materials have also been developed. The bi-modal and multi-modal analysis methods, in particular, are becoming more and more standard. Their drawback is that they are generally reliable only with sufficiently large data sets (number of valid tests, r ≥ 15–20). Here, the possibility of using the multi-modal analysis method with smaller data sets is assessed, and a new procedure to conservatively account for possible inhomogeneities is proposed.
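
For context, a minimal sketch of the standard (homogeneous) Master Curve maximum-likelihood estimate of T0, using the usual Kmin = 20 MPa√m and Weibull slope 4; the bi-modal and multi-modal analyses replace this single-population likelihood with a mixture, and censoring and specimen-size adjustment are omitted here. The data values are invented.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical small data set: test temperature (deg C) and KJc (MPa*sqrt(m)).
T   = np.array([-90., -90., -70., -70., -50., -50., -30., -30.])
KJc = np.array([ 45.,  62.,  58.,  95.,  80., 130., 110., 170.])

KMIN = 20.0  # MPa*sqrt(m), fixed in the Master Curve formulation

def neg_log_likelihood(T0):
    """Weibull (shape 4) log-likelihood of the standard Master Curve."""
    K0 = 31.0 + 77.0 * np.exp(0.019 * (T - T0))      # scale parameter vs temperature
    z = (KJc - KMIN) / (K0 - KMIN)
    logpdf = (np.log(4.0) + 3.0 * np.log(KJc - KMIN)
              - 4.0 * np.log(K0 - KMIN) - z**4)
    return -logpdf.sum()

res = minimize_scalar(neg_log_likelihood, bounds=(-200.0, 100.0), method="bounded")
T0_hat = res.x
print("Estimated T0 [deg C]:", round(T0_hat, 1))

# Median toughness curve implied by the fit (for plotting/checking).
KJc_med = 30.0 + 70.0 * np.exp(0.019 * (T - T0_hat))
```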


2019 ◽  
Vol 65 (8) ◽  
pp. 995-1005 ◽  
Author(s):  
Thomas Røraas ◽  
Sverre Sandberg ◽  
Aasne K Aarsand ◽  
Bård Støve

Abstract BACKGROUND Biological variation (BV) data have many applications for diagnosing and monitoring disease. The standard statistical approaches for estimating BV are sensitive to “noisy data” and assume homogeneity of the within-participant CV. Prior knowledge about BV is mostly ignored. The aims of this study were to develop Bayesian models to calculate BV that (a) are robust to “noisy data,” (b) allow heterogeneity in the within-participant CVs, and (c) take advantage of prior knowledge. METHOD We explored Bayesian models with different degrees of robustness, using adaptive Student t distributions instead of normal distributions and allowing for heterogeneity of the within-participant CV. Results were compared to more standard approaches using chloride and triglyceride data from the European Biological Variation Study. RESULTS Using the most robust Bayesian approach on a raw data set gave results comparable to a standard approach with outlier assessment and removal. The posterior distribution of the fitted model gives access to credible intervals for all parameters, which can be used to assess reliability. Reliable and relevant priors proved valuable for prediction. CONCLUSIONS The recommended Bayesian approach gives a clear picture of the degree of heterogeneity, and the ability to crudely estimate personal within-participant CVs can be used to explore relevant subgroups. Because BV experiments are expensive and time-consuming, prior knowledge and estimates should be considered of high value and applied accordingly. By including reliable prior knowledge, precise estimates are possible even with small data sets.
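
A minimal sketch of the modelling idea, assuming PyMC as the software (the abstract does not name one): repeated results per participant, person-specific within-participant SDs (hence heterogeneous CVs), and a Student t likelihood for robustness. Priors and the data layout are assumptions.

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(1)

# Hypothetical data: 10 participants, 8 repeated measurements each.
n_person, n_rep = 10, 8
person = np.repeat(np.arange(n_person), n_rep)
true_means = rng.normal(100.0, 5.0, n_person)
y = rng.normal(true_means[person], 3.0)

with pm.Model() as bv_model:
    # Person-specific homeostatic set points (between-person variation).
    mu_pop = pm.Normal("mu_pop", 100.0, 20.0)
    sd_between = pm.HalfNormal("sd_between", 10.0)
    mu_i = pm.Normal("mu_i", mu_pop, sd_between, shape=n_person)

    # Person-specific within-participant SDs => heterogeneous within-person CV.
    sd_within = pm.HalfNormal("sd_within", 5.0, shape=n_person)

    # Student-t likelihood makes the fit robust to occasional outlying results.
    nu = pm.Gamma("nu", 2.0, 0.1)
    pm.StudentT("y_obs", nu=nu, mu=mu_i[person], sigma=sd_within[person], observed=y)

    idata = pm.sample(1000, tune=1000, chains=2, random_seed=1)

print(idata.posterior["sd_within"].mean(dim=("chain", "draw")).values)
```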


Author(s):  
Gary Smith ◽  
Jay Cordes

Patterns are inevitable and we should not be surprised by them. Streaks, clusters, and correlations are the norm, not the exception. In a large number of coin flips, there are likely to be coincidental clusters of heads and tails. In nationwide data on cancer, crime, or test scores, there are likely to be flukey clusters. When the data are separated into smaller geographic units like cities, the most extreme results are likely to be found in the smallest cities. In athletic competitions between well-matched teams, the outcome of a small number of games is almost meaningless. Our challenge is to overcome our inherited inclination to think that all patterns are meaningful; for example, thinking that clustering in large data sets or differences among small data sets must be something real that needs to be explained. Often, it is just meaningless happenstance.
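
The small-sample point can be demonstrated in a few lines of simulation (parameters invented): give every "city" the same true rate, and the most extreme observed rates still turn up in the smallest ones.

```python
import numpy as np

rng = np.random.default_rng(42)

# 1000 "cities" with very different populations but the SAME true incidence rate.
populations = rng.integers(1_000, 1_000_000, size=1000)
true_rate = 0.001
cases = rng.binomial(populations, true_rate)
observed_rate = cases / populations

# Where do the most extreme observed rates occur?
extreme = np.argsort(observed_rate)
print("populations of the 5 lowest-rate cities :", populations[extreme[:5]])
print("populations of the 5 highest-rate cities:", populations[extreme[-5:]])
# Both lists are dominated by small populations, although nothing real differs.
```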


1981 ◽  
Vol 35 (1) ◽  
pp. 35-42 ◽  
Author(s):  
J. D. Algeo ◽  
M. B. Denton

A numerical method for evaluating the inverted Abel integral employing cubic spline approximations is described along with a modification of the procedure of Cremers and Birkebak, and an extension of the Barr method. The accuracy of the computations is evaluated at several noise levels and with varying resolution of the input data. The cubic spline method is found to be useful only at very low noise levels, but capable of providing good results with small data sets. The Barr method is computationally the simplest, and is adequate when large data sets are available. For noisy data, the method of Cremers and Birkebak gave the best results.
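
A minimal sketch of the spline-based approach described, under assumptions: spline the measured lateral profile I(y), differentiate the spline, and evaluate the inverse Abel integral f(r) = -(1/π)∫_r^R I′(y)/√(y²−r²) dy after a substitution that removes the singularity at y = r. The test profile is chosen because its Abel inverse is known analytically.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.integrate import quad

R = 1.0
y = np.linspace(0.0, R, 41)                       # lateral positions of the "measurements"
I = (4.0 / (3.0 * R**2)) * (R**2 - y**2) ** 1.5   # test profile; its inverse is 1 - r^2/R^2

spline = CubicSpline(y, I)
dI = spline.derivative()

def abel_inverse(r):
    """f(r) = -(1/pi) * integral_r^R I'(y)/sqrt(y^2 - r^2) dy.

    The substitution t = sqrt(y^2 - r^2) removes the integrable singularity at y = r.
    """
    t_max = np.sqrt(R**2 - r**2)
    integrand = lambda t: dI(np.sqrt(t**2 + r**2)) / np.sqrt(t**2 + r**2)
    val, _ = quad(integrand, 0.0, t_max)
    return -val / np.pi

for r in (0.1, 0.3, 0.6, 0.9):
    print(r, abel_inverse(r), 1.0 - r**2 / R**2)   # numerical vs exact
```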


1972 ◽  
Vol 94 (1) ◽  
pp. 1-6 ◽  
Author(s):  
R. M. Goldhoff ◽  
R. F. Gill

In this paper a method is presented for correlating the creep and rupture strengths of a wide variety of commercial alloys. The ultimate aim of this correlation is to predict design creep properties from rupture data alone. This is of considerable interest because rupture parameter or isothermal rupture curves are frequently the only data available since relatively little creep data is taken today. It is demonstrated in this work that reasonable predictions, useful in design, can be made. The alloys studied range from aluminum base through low alloy and stainless steels and include iron-nickel, nickel, and cobalt-base superalloys. Very long time data for single heats of each of the alloy types has been taken from either the literature or sources willing to make such data available. The construction is simple, and common techniques for determining scatter in the correlation are developed. The predictions include scatter bands of strain-time data developed from the 15 data sets encompassing all the alloys. It is suggested that some refinement might be gained by studying numerous heats of a single specification material where such data is available. A complicating problem of structural instability arises and is discussed in the paper.
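
The core of such a correlation can be sketched (data and the log-log linear form are assumptions, not the construction used in the paper) as a regression of the stress producing a given creep strain on the rupture stress at matched conditions, with the residual scatter supplying the scatter bands.

```python
import numpy as np

# Hypothetical paired strengths (MPa) at matched time/temperature conditions:
# stress giving rupture, and stress giving 1% creep strain, across several alloys.
rupture_stress = np.array([400., 310., 250., 180., 120.,  90.,  60.,  40.])
creep_stress_1pct = np.array([330., 250., 195., 150.,  95.,  70.,  45.,  32.])

# Log-log linear correlation (form assumed for illustration).
x, yv = np.log10(rupture_stress), np.log10(creep_stress_1pct)
slope, intercept = np.polyfit(x, yv, 1)
resid = yv - (slope * x + intercept)
scatter = 2.0 * resid.std(ddof=2)     # +/- 2 SEE "scatter band", roughly 95%

def predict_creep_strength(sigma_rupture):
    """Predicted 1% creep strength with lower/upper scatter-band values (MPa)."""
    log_pred = slope * np.log10(sigma_rupture) + intercept
    return 10**log_pred, 10**(log_pred - scatter), 10**(log_pred + scatter)

print(predict_creep_strength(200.0))
```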

