Determination of Principal Reynolds Stresses in Pulsatile Flows After Elliptical Filtering of Discrete Velocity Measurements

1993 ◽  
Vol 115 (4A) ◽  
pp. 396-403 ◽  
Author(s):  
J. T. Baldwin ◽  
S. Deutsch ◽  
H. L. Petrie ◽  
J. M. Tarbell

The purpose of this study was to develop a method to accurately determine mean velocities and Reynolds stresses in pulsatile flows. The pulsatile flow used to develop this method was produced within a transparent model of a left ventricular assist device (LVAD). Velocity measurements were taken at locations within the LVAD using a two-component laser Doppler anemometry (LDA) system. At each measurement location, as many as 4096 realizations of two coincident orthogonal velocity components were collected during preselected time windows over the pump cycle. The number of realizations was varied to determine how the number of data points collected affects the accuracy of the results. The duration of the time windows was varied to determine the maximum window size consistent with an assumption of pseudostationary flow. Erroneous velocity realizations were discarded from individual data sets by implementing successive elliptical filters on the velocity components. The mean velocities and principal Reynolds stresses were determined for each of the filtered data sets. The filtering technique, while eliminating less than 5 percent of the original data points, significantly reduced the computed Reynolds stresses. The results indicate that, with proper filtering, reasonable accuracy can be achieved using a velocity data set of 250 points, provided the time window is small enough to ensure pseudostationary flow (typically 20 to 40 ms). The results also reveal that the time window which is required to assume pseudostationary flow varies with location and cycle time and can range from 100 ms to less than 20 ms. Rotation of the coordinate system to the principal stress axes can lead to large variations in the computed Reynolds stresses, up to 2440 dynes/cm² for the normal stress and 7620 dynes/cm² for the shear stress.
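
As a minimal illustration (not the authors' code), the principal Reynolds stresses within one time window can be obtained from the coincident velocity realizations by forming the 2×2 Reynolds stress tensor from the velocity fluctuations and diagonalizing it: the eigenvalues are the principal normal stresses and half their difference is the maximum shear stress. Function and variable names are illustrative.

```python
import numpy as np

def principal_reynolds_stresses(u, v, rho=1.0):
    """Sketch: principal Reynolds stresses from coincident velocity realizations.

    u, v : arrays of the two measured velocity components (one time window,
           after any outlier filtering has been applied).
    rho  : fluid density (rho=1 returns kinematic stresses).
    """
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)

    # Velocity fluctuations about the window means
    up = u - u.mean()
    vp = v - v.mean()

    # 2x2 Reynolds stress tensor from the fluctuation covariances
    tau = rho * np.array([[np.mean(up * up), np.mean(up * vp)],
                          [np.mean(up * vp), np.mean(vp * vp)]])

    # Rotating to the principal stress axes = eigendecomposition of the tensor
    normal_stresses, axes = np.linalg.eigh(tau)

    # The maximum shear stress acts at 45 degrees to the principal axes
    max_shear = 0.5 * abs(normal_stresses[1] - normal_stresses[0])
    return normal_stresses, max_shear, axes
```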

2018 ◽  
Vol 11 (2) ◽  
pp. 53-67
Author(s):  
Ajay Kumar ◽  
Shishir Kumar

Several initial center selection algorithms have been proposed in the literature for numerical data, but because the values of categorical data are unordered, these methods are not applicable to categorical data sets. This article investigates the initial center selection process for categorical data and then presents a new support-based initial center selection algorithm. The proposed algorithm measures the weight of the unique data points of an attribute with the help of support and then integrates these weights along the rows to obtain the support of every row. A data object having the largest support is chosen as the initial center, followed by finding other centers that are at the greatest distance from the initially selected center. The quality of the proposed algorithm is compared with the random initial center selection method, Cao's method, Wu's method and the method introduced by Khan and Ahmad. Experimental analysis on real data sets shows the effectiveness of the proposed algorithm.
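
A minimal sketch of the idea as described in the abstract, assuming "support" means the relative frequency of an attribute value and using Hamming distance for the farthest-center step; names are illustrative and this is not the authors' implementation.

```python
import numpy as np
from collections import Counter

def support_based_centers(X, k):
    """Sketch: support-based initial centers for categorical data.

    X : list of rows, each a tuple/list of categorical attribute values.
    k : number of initial centers to select.
    """
    n, m = len(X), len(X[0])

    # Support (relative frequency) of every value of every attribute
    freq = [Counter(row[j] for row in X) for j in range(m)]

    # Support of a row = sum of the supports of its attribute values
    row_support = [sum(freq[j][row[j]] / n for j in range(m)) for row in X]

    # First center: the row with the largest support
    centers = [X[int(np.argmax(row_support))]]

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    # Remaining centers: rows farthest (Hamming distance) from the chosen centers
    while len(centers) < k:
        dist = [min(hamming(row, c) for c in centers) for row in X]
        centers.append(X[int(np.argmax(dist))])
    return centers
```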


2021 ◽  
Author(s):  
TIONG GOH ◽  
MengJun Liu

The ability to predict COVID-19 patients' level of severity (death or survival) enables clinicians to prioritise treatment. Recently, using three blood biomarkers, an interpretable machine learning model was developed to predict the mortality of COVID-19 patients. The method was reported to suffer from unstable performance because the identified biomarkers are not consistent predictors over an extended duration. To sustain performance, the proposed method partitions the data into three different time windows. For each window, a front-classifier, a mid-classifier and an end-classifier were designed using the XGBoost single-tree approach. These time window classifiers were integrated into a majority vote classifier and tested on an isolated test data set. The voting classifier extends the 90% cumulative accuracy from a 14-day window to a 21-day prediction window. An additional 7 days of prediction window can have a considerable impact on a patient's chance of survival. This study validated the feasibility of the time window voting classifier and further supports the selection of the biomarker feature set for the early prognosis of patients at higher risk of mortality.
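
A minimal sketch of the time-window voting scheme, assuming scikit-learn-style arrays and using xgboost's XGBClassifier restricted to a single tree for each window classifier; this is an illustration, not the authors' code.

```python
import numpy as np
from xgboost import XGBClassifier

def fit_window_classifiers(window_data):
    """Sketch: one single-tree XGBoost classifier per time window.

    window_data: list of (X, y) pairs for the front, mid and end windows,
    where the feature columns are the three blood biomarkers (illustrative).
    """
    clfs = []
    for X, y in window_data:
        clf = XGBClassifier(n_estimators=1, max_depth=3)  # single-tree model
        clf.fit(X, y)
        clfs.append(clf)
    return clfs

def majority_vote(clfs, X_test):
    """Combine the three window classifiers by a simple majority vote."""
    votes = np.stack([clf.predict(X_test) for clf in clfs])  # shape (3, n_samples)
    return (votes.sum(axis=0) >= 2).astype(int)              # at least 2 of 3 agree
```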


2021 ◽  
Vol 87 (6) ◽  
pp. 445-455
Author(s):  
Yi Ma ◽  
Zezhong Zheng ◽  
Yutang Ma ◽  
Mingcang Zhu ◽  
Ran Huang ◽  
...  

Many manifold learning algorithms conduct an eigenvector analysis on a data-similarity matrix of size N×N, where N is the number of data points. Thus, the memory complexity of the analysis is no less than O(N²). We present in this article an incremental manifold learning approach to handle large hyperspectral data sets for land use identification. In our method, the number of dimensions for the high-dimensional hyperspectral-image data set is obtained with the training data set. A local curvature variation algorithm is utilized to sample a subset of data points as landmarks. Then a manifold skeleton is identified based on the landmarks. Our method is validated on three AVIRIS hyperspectral data sets, outperforming the comparison algorithms with a k-nearest-neighbor classifier and achieving the second-best performance with a support vector machine.
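
The landmark idea can be sketched as follows, with random sampling standing in for the paper's local-curvature-variation criterion and Isomap standing in for the manifold-skeleton step; both are placeholders, not the authors' method, and the point is only that memory scales with the number of landmarks rather than O(N²).

```python
import numpy as np
from sklearn.manifold import Isomap
from sklearn.neighbors import NearestNeighbors

def landmark_embedding(X, n_landmarks, n_components):
    """Sketch: embed a landmark subset, then place the remaining points."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(0)
    idx = rng.choice(len(X), n_landmarks, replace=False)   # placeholder landmark choice
    landmarks = X[idx]

    # "Manifold skeleton" on the landmarks only (placeholder: Isomap)
    emb = Isomap(n_components=n_components).fit(landmarks)
    Y_land = emb.embedding_

    # Non-landmark points: inverse-distance weighting among nearest landmarks
    nn = NearestNeighbors(n_neighbors=3).fit(landmarks)
    dist, nbrs = nn.kneighbors(X)
    w = 1.0 / np.maximum(dist, 1e-12)
    w /= w.sum(axis=1, keepdims=True)
    return np.einsum('ij,ijk->ik', w, Y_land[nbrs])
```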


Fractals ◽  
2001 ◽  
Vol 09 (01) ◽  
pp. 105-128 ◽  
Author(s):  
TAYFUN BABADAGLI ◽  
KAYHAN DEVELI

This paper presents an evaluation of the methods applied to calculate the fractal dimension of fracture surfaces. Variogram (applicable to 1D self-affine sets) and power spectral density analyses (applicable to 2D self-affine sets) are selected to calculate the fractal dimension of synthetic 2D data sets generated using fractional Brownian motion (fBm). Then, the calculated values are compared with the actual fractal dimensions assigned in the generation of the synthetic surfaces. The main factor considered is the size of the 2D data set (number of data points). The critical sample size that yields the best agreement between the calculated and actual values is defined for each method. Limitations and the proper use of each method are clarified after an extensive analysis. The two methods are also applied to synthetically and naturally developed fracture surfaces of different types of rocks. The methods yield inconsistent fractal dimensions for natural fracture surfaces and the reasons for this are discussed. The anisotropic feature of fractal dimension that may lead to a correlation of fracturing mechanism and multifractality of the fracture surfaces is also addressed.
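
For the variogram method on a 1D self-affine profile, a minimal sketch (not the authors' code) is: compute the mean squared height increment at each lag, fit the log-log slope to obtain the Hurst exponent H, and convert via D = 2 - H for the profile (D = 3 - H for the corresponding surface).

```python
import numpy as np

def fractal_dimension_variogram(z, max_lag=None):
    """Sketch: fractal dimension of a 1D self-affine profile via the variogram.

    For a profile with Hurst exponent H, the variogram scales as V(h) ~ h^(2H),
    so the slope of log V against log h gives 2H and D = 2 - H.
    """
    z = np.asarray(z, dtype=float)
    max_lag = max_lag or len(z) // 4
    lags = np.arange(1, max_lag)

    # Variogram: mean squared height increment at each lag
    V = np.array([np.mean((z[h:] - z[:-h]) ** 2) for h in lags])

    # Least-squares fit on log-log axes; the slope estimates 2H
    slope, _ = np.polyfit(np.log(lags), np.log(V), 1)
    H = slope / 2.0
    return 2.0 - H
```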


2020 ◽  
pp. 089686082097693
Author(s):  
Alix Clarke ◽  
Pietro Ravani ◽  
Matthew J Oliver ◽  
Mohamed Mahsin ◽  
Ngan N Lam ◽  
...  

Background: Technique failure is an important outcome measure in research and quality improvement in peritoneal dialysis (PD) programs, but there is a lack of consistency in how it is reported. Methods: We used data collected about incident dialysis patients from 10 Canadian dialysis programs between 1 January 2004 and 31 December 2018. We identified four main steps that are required when calculating the risk of technique failure. We changed one variable at a time, and then all steps simultaneously, to determine the impact on the observed risk of technique failure at 24 months. Results: A total of 1448 patients received PD. Selecting different cohorts of PD patients changed the observed risk of technique failure at 24 months by 2%. More than one-third of patients who switched to hemodialysis returned to PD, and 90% of them returned within 180 days. The use of different time windows of observation for a return to PD resulted in risks of technique failure that differed by 16%. The way in which exit events were handled during the time window impacted the risk of technique failure by 4%, and the choice of statistical method changed results by 4%. Overall, the observed risk of technique failure at 24 months differed by 20%, simply by applying different approaches to the same data set. Conclusions: The approach to reporting technique failure has an important impact on the observed results. We present a robust and transparent methodology to track technique failure over time and to compare performance between programs.
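
As an illustration of why the return-to-PD time window matters, a hedged sketch (hypothetical field names, not the study's code) of how a single switch to hemodialysis is classified under a chosen window:

```python
from datetime import timedelta

def is_technique_failure(switch_date, return_date, window_days):
    """Sketch: does a switch to hemodialysis count as technique failure?

    A switch is not counted as technique failure if the patient returns to PD
    within the chosen time window. Dates may be None if the event never occurred.
    """
    if switch_date is None:
        return False                     # never switched: no technique failure
    if return_date is None:
        return True                      # switched and never returned to PD
    return (return_date - switch_date) > timedelta(days=window_days)

# The same record can be classified differently under different window choices,
# e.g. is_technique_failure(switch, ret, 30) vs is_technique_failure(switch, ret, 180).
```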


Author(s):  
UREERAT WATTANACHON ◽  
CHIDCHANOK LURSINSAP

Existing clustering algorithms, such as single-link clustering, k-means, CURE, and CSM, are designed to find clusters based on predefined parameters specified by users. These algorithms may be unsuccessful if the choice of parameters is inappropriate with respect to the data set being clustered. Most of these algorithms work very well for compact and hyper-spherical clusters. In this paper, a new hybrid clustering algorithm called Self-Partition and Self-Merging (SPSM) is proposed. The SPSM algorithm partitions the input data set into several subclusters in the first phase and then removes the noisy data in the second phase. In the third phase, the normal subclusters are continuously merged to form larger clusters based on inter-cluster and intra-cluster distance criteria. The experimental results show that the SPSM algorithm handles noisy data sets efficiently and clusters data sets of arbitrary shapes and differing densities. Several color-image examples show the versatility of the proposed method, and the results are compared with those reported in the literature for the same images. The computational complexity of the SPSM algorithm is O(N²), where N is the number of data points.
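
A simplified sketch of the third (merging) phase, using centroid separation versus average intra-cluster spread as a stand-in for the paper's inter-/intra-cluster distance criteria; this is an illustration, not the authors' implementation.

```python
import numpy as np

def merge_subclusters(clusters, ratio=1.5):
    """Sketch: merge subclusters whose centroids are closer than `ratio` times
    their average intra-cluster spread. `clusters` is a list of (n_i, d) arrays."""
    def centroid(c):
        return np.mean(c, axis=0)

    def spread(c):
        return np.mean(np.linalg.norm(c - centroid(c), axis=1))

    merged = True
    while merged and len(clusters) > 1:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                inter = np.linalg.norm(centroid(clusters[i]) - centroid(clusters[j]))
                intra = 0.5 * (spread(clusters[i]) + spread(clusters[j]))
                if inter <= ratio * intra:
                    clusters[i] = np.vstack([clusters[i], clusters[j]])
                    del clusters[j]          # restart scan after every merge
                    merged = True
                    break
            if merged:
                break
    return clusters
```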


Author(s):  
Md. Zakir Hossain ◽  
Md.Nasim Akhtar ◽  
R.B. Ahmad ◽  
Mostafijur Rahman

Data mining is the process of finding structure in data from large data sets. With this process, decision makers can make particular decisions for further development of real-world problems. Several data clustering techniques are used in data mining for finding a specific pattern in data. The K-means method is one of the familiar clustering techniques for clustering large data sets. The K-means clustering method partitions the data set based on the assumption that the number of clusters is fixed. The main problem of this method is that if the number of clusters is chosen to be small, then there is a higher probability of adding dissimilar items to the same group. On the other hand, if the number of clusters is chosen to be high, then there is a higher chance of adding similar items to different groups. In this paper, we address this issue by proposing a new K-means clustering algorithm. The proposed method performs data clustering dynamically. The proposed method initially calculates a threshold value as a centroid of K-means, and based on this value the number of clusters is formed. At each iteration of K-means, if the Euclidean distance between two points is less than or equal to the threshold value, then these two data points will be in the same group. Otherwise, the proposed method will create a new cluster with the dissimilar data point. The results show that the proposed method outperforms the original K-means method.
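
A minimal sketch of the dynamic grouping rule described above; how the threshold itself is derived from the data is left to the paper, and names are illustrative.

```python
import numpy as np

def threshold_clustering(X, threshold):
    """Sketch: a point joins the nearest existing cluster if it lies within
    `threshold` of that cluster's centroid, otherwise it starts a new cluster."""
    centroids, members = [], []
    for x in X:
        x = np.asarray(x, dtype=float)
        if centroids:
            d = [np.linalg.norm(x - c) for c in centroids]
            i = int(np.argmin(d))
            if d[i] <= threshold:
                members[i].append(x)
                centroids[i] = np.mean(members[i], axis=0)   # update the centroid
                continue
        centroids.append(x)                                  # start a new cluster
        members.append([x])
    return centroids, members
```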


2007 ◽  
Vol 46 (01) ◽  
pp. 22-28 ◽  
Author(s):  
H.-J. Kaiser ◽  
H. Kuehl ◽  
K.-C. Koch ◽  
B. Nowak ◽  
U. Buell ◽  
...  

Summary Aim: Using 8 frames/cardiac cycle with gated SPECT underestimates end-diastolic volumes (EDV) and ejection fractions (LVEF), and overestimates end-systolic volumes (ESV). However, using 16 frames/cardiac cycle significantly decreases the signal-to-noise ratio. We analyzed 16-frame and rebinned 8-frame gated SPECT data using the common 4D-MSPECT and QGS algorithms. Patients, methods: 120 patients were examined using gated SPECT on a Siemens Multispect 3 (triple-head gamma camera) 60 minutes after intravenous administration at rest of about 450 MBq (two-day protocol) or about 750 MBq (one-day protocol) of 99mTc-tetrofosmin. Reoriented short-axis slices (16 frames) were summed framewise (1+2, 3+4, etc.), yielding 8-frame data sets. EDV, ESV and LVEF were calculated for both data sets using 4D-MSPECT and QGS. Results: QGS succeeded in 119 patients, 4D-MSPECT in 117. For the remaining 116 patients, higher EDV (+0.8 ml/+3.8 ml) and LVEF (+1.5%/+2.6%; absolute) and lower ESV (–1.7 ml/–0.9 ml) (4D-MSPECT/QGS) were found for the 16-frame runs. Bland-Altman limits were smaller for QGS than for 4D-MSPECT [EDV 32/12 ml, ESV 21/10 ml, LVEF 17/7% (4D-MSPECT/QGS)]. Conclusion: Both algorithms showed the expected effects. Contour finding using QGS failed with only one data set, whereas contour finding using 4D-MSPECT failed with three data sets. Since the effects observed between the 8- and 16-frame studies are relatively small and quite predictable, 8-frame studies can be employed in clinical routine with hardly any loss, and contour finding appears less susceptible to error.
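
The frame-wise rebinning step can be sketched in a few lines; the array layout is an assumption, and this is not the authors' code.

```python
import numpy as np

def rebin_16_to_8(frames16):
    """Sketch: sum a 16-frame gated data set pairwise (frames 1+2, 3+4, ...)
    into 8 frames. frames16 is assumed to have the gating frame as the first
    axis, e.g. shape (16, z, y, x)."""
    frames16 = np.asarray(frames16)
    assert frames16.shape[0] == 16
    return frames16[0::2] + frames16[1::2]      # result has shape (8, z, y, x)
```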


2020 ◽  
Vol 498 (3) ◽  
pp. 3440-3451
Author(s):  
Alan F Heavens ◽  
Elena Sellentin ◽  
Andrew H Jaffe

ABSTRACT Bringing a high-dimensional data set into science-ready shape is a formidable challenge that often necessitates data compression. Compression has accordingly become a key consideration for contemporary cosmology, affecting public data releases, and reanalyses searching for new physics. However, data compression optimized for a particular model can suppress signs of new physics, or even remove them altogether. We therefore provide a solution for exploring new physics during data compression. In particular, we store additional agnostic compressed data points, selected to enable precise constraints of non-standard physics at a later date. Our procedure is based on the maximal compression of the MOPED algorithm, which optimally filters the data with respect to a baseline model. We select additional filters, based on a generalized principal component analysis, which are carefully constructed to scout for new physics at high precision and speed. We refer to the augmented set of filters as MOPED-PC. They enable an analytic computation of Bayesian Evidence that may indicate the presence of new physics, and fast analytic estimates of best-fitting parameters when adopting a specific non-standard theory, without further expensive MCMC analysis. As there may be large numbers of non-standard theories, the speed of the method becomes essential. Should no new physics be found, then our approach preserves the precision of the standard parameters. As a result, we achieve very rapid and maximally precise constraints of standard and non-standard physics, with a technique that scales well to large dimensional data sets.
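
A sketch of the baseline MOPED compression vectors that MOPED-PC augments, following the standard construction (one vector per parameter, built from the derivatives of the model mean and Gram-Schmidt orthogonalized so the compressed data are uncorrelated with unit variance). The additional principal-component filters for new physics are specific to the paper and not shown.

```python
import numpy as np

def moped_vectors(Cinv, dmu):
    """Sketch of the baseline MOPED filters.

    Cinv : inverse data covariance, shape (n, n)
    dmu  : derivatives of the model mean w.r.t. each parameter, shape (p, n)
    Returns B of shape (p, n); the compressed data are y = B @ x.
    """
    B = []
    for mu_a in dmu:
        b = Cinv @ mu_a
        for b_prev in B:                       # Gram-Schmidt against earlier filters
            b -= (mu_a @ b_prev) * b_prev
        norm = mu_a @ b                        # normalization to unit variance
        B.append(b / np.sqrt(norm))
    return np.array(B)
```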


Author(s):  
Tushar ◽  
Tushar ◽  
Shibendu Shekhar Roy ◽  
Dilip Kumar Pratihar

Clustering is a potential tool of data mining. A clustering method analyzes the pattern of a data set and groups the data into several clusters based on the similarity among themselves. Clusters may be either crisp or fuzzy in nature. The present chapter deals with clustering of some data sets using the Fuzzy C-Means (FCM) algorithm and the Entropy-based Fuzzy Clustering (EFC) algorithm. In the FCM algorithm, the nature and quality of clusters depend on the pre-defined number of clusters, the level of cluster fuzziness and a threshold value utilized for obtaining the number of outliers (if any). On the other hand, the quality of clusters obtained by the EFC algorithm is dependent on a constant used to establish the relationship between the distance and similarity of two data points, a threshold value of similarity and another threshold value used for determining the number of outliers. The clusters should ideally be distinct and at the same time compact in nature. Moreover, the number of outliers should be as small as possible. Thus, the above problem may be posed as an optimization problem, which will be solved using a Genetic Algorithm (GA). The best set of multi-dimensional clusters will be mapped into 2-D for visualization using a Self-Organizing Map (SOM).
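
A minimal sketch of the standard FCM iteration referred to above; the chapter's GA tuning of the cluster count, fuzziness level and outlier thresholds is not shown, and names are illustrative.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, eps=1e-9):
    """Sketch: standard Fuzzy C-Means with fuzziness exponent m.

    X : data array of shape (n_samples, n_features).
    Returns cluster centers (n_clusters, n_features) and memberships (n_clusters, n_samples).
    """
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(0)
    U = rng.random((n_clusters, len(X)))
    U /= U.sum(axis=0)                                   # random fuzzy memberships

    for _ in range(n_iter):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)          # weighted means
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + eps
        U = 1.0 / (d ** (2.0 / (m - 1)))                             # membership update
        U /= U.sum(axis=0)                                           # normalize per point
    return centers, U
```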

