A Method for Predicting Creep Data for Commercial Alloys Based on a Correlation Between Creep Strength and Rupture Strength

1972 ◽  
Vol 94 (1) ◽  
pp. 1-6 ◽  
Author(s):  
R. M. Goldhoff ◽  
R. F. Gill

In this paper a method is presented for correlating the creep and rupture strengths of a wide variety of commercial alloys. The ultimate aim of this correlation is to predict design creep properties from rupture data alone. This is of considerable interest because rupture-parameter or isothermal rupture curves are frequently the only data available, since relatively little creep data is taken today. It is demonstrated in this work that reasonable predictions, useful in design, can be made. The alloys studied range from aluminum base through low-alloy and stainless steels and include iron-nickel-, nickel-, and cobalt-base superalloys. Very long-time data for single heats of each of the alloy types have been taken either from the literature or from sources willing to make such data available. The construction is simple, and common techniques for determining scatter in the correlation are developed. The predictions include scatter bands of strain-time data developed from the 15 data sets encompassing all the alloys. It is suggested that some refinement might be gained by studying numerous heats of a single specification material where such data are available. A complicating problem of structural instability arises and is discussed in the paper.
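The shape of such a construction can be sketched in a few lines. The Python snippet below regresses log creep strength on log rupture strength and derives a scatter band around the fit; the stress values, the fixed strain level, and the two-sigma band are hypothetical illustrations of the kind of correlation described, not the paper's data or exact construction.

```python
# Hedged sketch: a log-log correlation between creep strength and rupture
# strength, of the kind the paper describes. All numbers are hypothetical.
import numpy as np

# Paired observations (MPa): stress for rupture in t hours vs. stress to reach
# a fixed creep strain (say 1%) in the same time, across test conditions.
rupture_strength = np.array([310.0, 240.0, 180.0, 120.0, 75.0])
creep_strength = np.array([250.0, 190.0, 140.0, 90.0, 55.0])

# Fit log10(creep) = a * log10(rupture) + b by least squares.
a, b = np.polyfit(np.log10(rupture_strength), np.log10(creep_strength), 1)

def predict_creep_strength(rupture_mpa):
    """Predict creep strength from rupture strength via the fitted line."""
    return 10.0 ** (a * np.log10(rupture_mpa) + b)

# Residuals in log stress give a rough two-sigma scatter band of the sort the
# paper constructs around its predictions.
residuals = np.log10(creep_strength) - (a * np.log10(rupture_strength) + b)
band = 2.0 * residuals.std()

print(predict_creep_strength(150.0), band)
```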

2008 ◽  
Vol 130 (2) ◽  
Author(s):  
Stuart Holdsworth

The European Creep Collaborative Committee (ECCC) approach to creep data assessment has now been established for almost ten years. The methodology covers the analysis of rupture strength and ductility, creep strain, and stress relaxation data, for a range of material conditions. This paper reviews the concepts and procedures involved. The original approach was devised to determine data sheets for use by committees responsible for the preparation of National and International Design and Product Standards, and the methods developed for data quality evaluation and data analysis were therefore intentionally rigorous. The focus was clearly on the determination of long-time property values from the largest possible data sets involving a significant number of observations in the mechanism regime for which predictions were required. More recently, the emphasis has changed. There is now an increasing requirement for full property descriptions from very short times to very long and hence the need for much more flexible model representations than were previously required. There continues to be a requirement for reliable long-time predictions from relatively small data sets comprising relatively short duration tests, in particular, to exploit new alloy developments at the earliest practical opportunity. In such circumstances, it is not feasible to apply the same degree of rigor adopted for large data set assessment. Current developments are reviewed.


Author(s):  
Tannistha Pal

Images captured in severe atmospheric conditions, especially fog, are critically degraded in quality; the reduced visibility in turn affects several computer vision applications such as visual surveillance, intelligent vehicles, and remote sensing. Acquiring a clear view of the scene is thus a prime requirement. In the last few years, many approaches have been made towards solving this problem. In this article, a comparative analysis of different existing image-defogging algorithms is made, and a technique for image defogging based on the dark channel prior strategy is then proposed. Experimental results show that the proposed method performs efficiently, significantly improving the visual quality of images taken in foggy weather. The high computational time of existing techniques is also overcome by the proposed method. A qualitative assessment is performed on both benchmark and real-time data sets to determine the efficacy of the technique. Finally, the whole work is concluded with its relative advantages and shortcomings.
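For context, the dark channel prior strategy the article adopts can be rendered as a minimal Python sketch of the standard prior-based pipeline (dark channel, atmospheric light, transmission estimate, inversion of the haze imaging model). The patch size, omega, and t0 below are common defaults, not values taken from the article.

```python
# Minimal dark-channel-prior dehazing sketch (common defaults, assumed values).
import numpy as np
import cv2  # OpenCV supplies the min-filter (erosion) over a square patch

def dark_channel(img, patch=15):
    """Per-pixel minimum over the color channels, then a patch-wise minimum."""
    min_rgb = img.min(axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)

def dehaze(img, omega=0.95, t0=0.1, patch=15):
    """img: float32 RGB in [0, 1]. Returns a coarse haze-free estimate."""
    dark = dark_channel(img, patch)
    # Atmospheric light: mean color of the brightest 0.1% dark-channel pixels.
    n = max(1, int(dark.size * 0.001))
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = img[idx].mean(axis=0)
    # Transmission from the dark channel of the normalized image.
    t = 1.0 - omega * dark_channel(img / A, patch)
    t = np.clip(t, t0, 1.0)[..., None]
    # Invert the haze model I = J * t + A * (1 - t) to recover the scene J.
    return np.clip((img - A) / t + A, 0.0, 1.0)
```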


Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5204
Author(s):  
Anastasija Nikiforova

Nowadays, governments launch open government data (OGD) portals that provide data that can be accessed and used by everyone for their own needs. Although the potential economic value of open (government) data is estimated in the millions and billions, not all open data are reused. Moreover, the open (government) data initiative, as well as users’ intent for open (government) data, is changing continuously, and today, in line with IoT and smart city trends, real-time data and sensor-generated data are of higher interest to users. These “smarter” open (government) data are also considered one of the crucial drivers of a sustainable economy, and they might have an impact on information and communication technology (ICT) innovation and become a creativity bridge in developing a new ecosystem in Industry 4.0 and Society 5.0. The paper inspects the OGD portals of 60 countries in order to understand the correspondence of their content to Society 5.0 expectations. The paper reports on the extent to which countries provide these data, focusing on open (government) data success-facilitating factors both for the portal in general and for data sets of particular interest. The presence of “smarter” data, their level of accessibility, availability, currency and timeliness, as well as support for users, are analyzed. A list of the most competitive countries by data category is provided. This makes it possible to understand which OGD portals react to users’ needs and to the Industry 4.0 and Society 5.0 request for the opening and updating of data for further potential reuse, which is essential in the digital, data-driven world.


Complexity ◽  
2018 ◽  
Vol 2018 ◽  
pp. 1-16 ◽  
Author(s):  
Yiwen Zhang ◽  
Yuanyuan Zhou ◽  
Xing Guo ◽  
Jintao Wu ◽  
Qiang He ◽  
...  

The K-means algorithm is one of the ten classic algorithms in the area of data mining and has long been studied by researchers in numerous fields. However, the value of the clustering number k in the K-means algorithm is not always easy to determine, and the selection of the initial centers is vulnerable to outliers. This paper proposes an improved K-means clustering algorithm called the covering K-means algorithm (C-K-means). The C-K-means algorithm not only acquires efficient and accurate clustering results but also self-adaptively provides a reasonable number of clusters based on the data features. It includes two phases: the initialization of the covering algorithm (CA) and the Lloyd iteration of K-means. The first phase executes the CA, which self-organizes and recognizes the number of clusters k based on the similarities in the data; it requires neither the number of clusters to be prespecified nor the initial centers to be manually selected. It therefore has a “blind” feature, that is, k is not preselected. The second phase performs the Lloyd iteration based on the results of the first phase. The C-K-means algorithm thus combines the advantages of CA and K-means. Experiments carried out on the Spark platform verify the good scalability of the C-K-means algorithm, which can effectively solve the problem of large-scale data clustering. Extensive experiments on real data sets show that the C-K-means algorithm outperforms existing algorithms in both accuracy and efficiency, under both sequential and parallel conditions.
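The two-phase structure can be illustrated with a short sketch. The greedy radius-based cover below is a simplified stand-in for the paper's covering algorithm (CA), not its exact construction; the `radius` threshold and the synthetic data are assumptions. Phase two is a standard Lloyd iteration seeded by the centers, and hence the k, that the cover discovers.

```python
# Sketch of a cover-then-Lloyd clustering; the cover phase is a simplified
# stand-in for the paper's CA and determines k from the data itself.
import numpy as np

def covering_init(X, radius):
    """Greedy cover: every point ends up within `radius` of a chosen center."""
    centers, uncovered = [], np.ones(len(X), dtype=bool)
    while uncovered.any():
        c = X[uncovered][0]                     # first still-uncovered point
        centers.append(c)
        uncovered &= np.linalg.norm(X - c, axis=1) > radius
    return np.array(centers)                    # k = len(centers)

def lloyd(X, centers, iters=100):
    """Standard Lloyd iterations from the given initial centers."""
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(len(centers))])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, (100, 2)) for m in ((0, 0), (3, 3), (0, 4))])
labels, centers = lloyd(X, covering_init(X, radius=1.0))
print(len(centers), "clusters found")  # k was never prespecified
```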


2013 ◽  
Vol 278-280 ◽  
pp. 831-834 ◽  
Author(s):  
Xiao Sun ◽  
Hao Zhou ◽  
Xiang Jiang Lu ◽  
Yong Yang

This paper presents a motor winding testing system that can perform inter-turn dielectric withstand voltage tests at up to 30 kV. The system communicates effectively between PC and machine, combining the PC's powerful data-processing capacity, the PLC's stability, and LabVIEW's convenient UI; it therefore provides real-time data collection, preservation, and analysis, among other features. The system facilitates both factory testing and type testing of motor windings. Long-term field testing showed its various performance indicators to be stable and reliable.


2014 ◽  
Vol 7 (10) ◽  
pp. 3337-3354 ◽  
Author(s):  
M. Pastel ◽  
J.-P. Pommereau ◽  
F. Goutail ◽  
A. Richter ◽  
A. Pazmiño ◽  
...  

Abstract. Long time series of ozone and NO2 total column measurements in the southern tropics are available from two ground-based SAOZ (Système d'Analyse par Observation Zénithale) UV-visible spectrometers operated within the Network for the Detection of Atmospheric Composition Change (NDACC) in Bauru (22° S, 49° W) in S-E Brazil since 1995 and on Reunion Island (21° S, 55° E) in the S-W Indian Ocean since 1993. Although the stations are located at the same latitude, significant differences are observed in the columns of both species, attributed to differences in tropospheric content and in equivalent latitude in the lower stratosphere. These data are used to identify which satellites operating during the same period capture the same features and are thus best suited for building reliable merged time series for trend studies. For ozone, the satellite series best matching the SAOZ observations are EP-TOMS (1995–2004) and OMI-TOMS (2005–2011), whereas for NO2, the best results are obtained by combining GOME GDP5 (1996–2003) and SCIAMACHY-IUP (2003–2011), which display lower noise and seasonality relative to SAOZ. Both merged data sets are fully consistent with the larger columns of the two species above South America and with the seasonality of the differences between the two stations reported by SAOZ, providing reliable time series for further trend analyses and for identification of sources of interannual variability in future analysis.
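A toy example may help fix what building a merged series involves: estimate the bias between two instruments over their overlap period, remove it, and concatenate. The sketch below does this in pandas with synthetic monthly records and an assumed overlap window; it is not the paper's merging procedure.

```python
# Hedged sketch: merging two overlapping (synthetic) satellite records by
# removing the mean offset estimated over their common period.
import numpy as np
import pandas as pd

t1 = pd.date_range("1996-01", "2004-12", freq="MS")   # instrument 1 period
t2 = pd.date_range("2003-01", "2011-12", freq="MS")   # instrument 2 period
rng = np.random.default_rng(1)
s1 = pd.Series(300 + rng.normal(0, 5, len(t1)), index=t1)
s2 = pd.Series(310 + rng.normal(0, 5, len(t2)), index=t2)  # biased high

overlap = s1.index.intersection(s2.index)
offset = (s2[overlap] - s1[overlap]).mean()           # bias over the overlap
tail = (s2 - offset)[s2.index.difference(s1.index)]   # corrected later record
merged = pd.concat([s1, tail]).sort_index()

print(f"estimated offset: {offset:.1f}")
```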


Author(s):  
Aakriti Shukla ◽ 
Dr Damodar Prasad Tiwari

Dimension reduction, or feature selection, is considered the backbone of big data applications, since it improves performance. Many scholars have shifted their attention in recent years to data science and analysis for real-time applications using big data integration. It takes a long time for humans to interact with big data. As a result, when handling a high workload in a distributed system, it is necessary to make feature selection elastic and scalable. In this study, a survey of alternative optimization techniques for feature selection is presented, along with an analysis of their limitations. This study contributes to the development of a method for improving the efficiency of feature selection on big, complicated data sets.
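As a minimal illustration of the filter-style techniques such surveys cover, the sketch below scores each feature independently and keeps the top k; independent scoring is what makes this family cheap to parallelize across a distributed system. The scikit-learn calls are standard, while the synthetic data and the choice of k = 10 are illustrative assumptions.

```python
# Filter-style feature selection sketch: score features independently with
# mutual information, keep the top k. Data and k are illustrative.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Synthetic "wide" data: 2000 samples, 100 features, few of them informative.
X, y = make_classification(n_samples=2000, n_features=100, n_informative=8,
                           random_state=0)

selector = SelectKBest(score_func=mutual_info_classif, k=10)
X_reduced = selector.fit_transform(X, y)

print(X_reduced.shape)                      # (2000, 10)
print(selector.get_support(indices=True))   # indices of retained features
```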

