Route to Higher Fidelity FT-IR Imaging

2000 ◽  
Vol 54 (4) ◽  
pp. 486-495 ◽  
Author(s):  
Rohit Bhargava ◽  
Shi-Qing Wang ◽  
Jack L. Koenig

FT-IR imaging employing a focal plane array (FPA) detector is often plagued by low signal-to-noise ratio (SNR) data. A mathematical transform that re-orders spectral data points into decreasing order of SNR is employed to reduce noise by retransforming the ordered data set using only a few relevant data points. This approach is shown to result in significant gains in terms of image fidelity by examining microscopically phase-separated composites termed polymer dispersed liquid crystals (PDLCs). The actual gains depend on the SNR characteristics of the original data. Noise is reduced by a factor greater than 5 if the noise in the initial data is sufficiently low. For a moderate absorbance level of 0.5 a.u., the achievable SNR by reducing noise is greater than 100 for a collection time of less than 4 min. The criteria for optimal application of a noise-reducing procedure employing the minimum noise fraction (MNF) transform are discussed and various variables in the process quantified. This noise reduction is shown to provide high-quality images for accurate morphological analysis. The coupling of mathematical transformation techniques with spectroscopic Fourier transform infrared (FT-IR) imaging is shown to result in high-fidelity images without increasing collection time or drastically modifying hardware.
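
A minimal sketch of the noise-reduction idea described in this abstract, assuming the image cube is a NumPy array of shape (pixels, bands) and that the noise covariance can be estimated from pixel-to-pixel differences; the function name, the difference-based noise estimate, and the fixed number of retained components are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mnf_denoise(X, n_keep=10):
    """Reduce noise in a spectral image cube X of shape (n_pixels, n_bands)
    with a minimum-noise-fraction style transform (illustrative sketch)."""
    # Crude noise estimate: differences between neighboring pixels
    noise = np.diff(X, axis=0)
    Sigma_n = np.cov(noise, rowvar=False)

    # Whiten the data with respect to the estimated noise covariance
    evals, evecs = np.linalg.eigh(Sigma_n)
    W = evecs / np.sqrt(np.maximum(evals, 1e-12))   # noise-whitening matrix
    Xw = (X - X.mean(axis=0)) @ W

    # PCA in the noise-whitened space: components are ordered by SNR
    evals_w, evecs_w = np.linalg.eigh(np.cov(Xw, rowvar=False))
    order = np.argsort(evals_w)[::-1]               # decreasing SNR
    V = evecs_w[:, order[:n_keep]]

    # Retransform using only the few high-SNR components
    Xw_denoised = (Xw @ V) @ V.T
    return Xw_denoised @ np.linalg.pinv(W) + X.mean(axis=0)
```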

Anomaly detection is one of the most important tasks in data mining, and it helps to improve scalability, accuracy, and efficiency. During the extraction process, an outsourced party may corrupt the original data set, and such damage is treated as an intrusion. Avoiding intrusions while maintaining anomaly detection in a highly densely populated environment is a further challenge. For this purpose, Grid Partitioning for Anomaly Detection (GPAD) has been proposed for high-density environments. The technique detects outliers using a grid partitioning approach combined with a density-based outlier detection scheme. Initially, the data set is split into a grid and an equal number of data points is allocated to each cell. The density of each cell is then compared with that of its neighboring cells in a zigzag manner; a cell with markedly lower density is flagged as an outlier region and eliminated. The proposed GPAD scheme reduces complexity and increases accuracy, as demonstrated in the simulation results. A rough illustration of the grid/density idea is sketched below.
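
A rough sketch of the grid/density idea for 2-D points in a NumPy array; the cell count, the neighbor comparison rule, and the density-ratio threshold are illustrative assumptions rather than the published GPAD algorithm.

```python
import numpy as np

def grid_density_outliers(points, n_cells=10, density_ratio=0.2):
    """Flag points falling in grid cells that are much sparser than their
    neighbors (illustrative grid-partitioning / density-based sketch)."""
    # Partition the bounding box into an n_cells x n_cells grid
    counts, xedges, yedges = np.histogram2d(points[:, 0], points[:, 1],
                                            bins=n_cells)
    # Index of the cell each point falls into
    ix = np.clip(np.digitize(points[:, 0], xedges) - 1, 0, n_cells - 1)
    iy = np.clip(np.digitize(points[:, 1], yedges) - 1, 0, n_cells - 1)

    outlier_mask = np.zeros(len(points), dtype=bool)
    for i in range(n_cells):
        for j in range(n_cells):
            # Mean count of the surrounding cells (zigzag order not essential here)
            nb = counts[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
            nb_mean = (nb.sum() - counts[i, j]) / max(nb.size - 1, 1)
            # A cell far sparser than its neighborhood is treated as outliers
            if nb_mean > 0 and counts[i, j] < density_ratio * nb_mean:
                outlier_mask |= (ix == i) & (iy == j)
    return outlier_mask
```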


1995 ◽  
Vol 268 (4) ◽  
pp. H1682-H1687 ◽  
Author(s):  
A. P. Blaber ◽  
Y. Yamamoto ◽  
R. L. Hughson

We tested the hypothesis that the spontaneous beat-by-beat interactions of systolic blood pressure (SBP) and R-R interval reflected true baroreflex events rather than chance interactions. Original data sets of 1,024 heartbeats obtained in seated rest from six healthy subjects [R-R interval = 953 +/- 94 (+/- SE) ms] were compared with isospectral [generated by a windowed (inverse) Fourier transform with phase randomization] and isodistribution (data points randomly shuffled) surrogate data sets. The isospectral data set was used to test for random phase relationships, and the isodistribution data set was used for effects of white noise between SBP and R-R interval. Spontaneous baroreflex sequences were defined as three or more beats in which SBP and the R-R interval of the same (lag 0), next (lag 1), or next following (lag 2) beat changed in the same direction. The total number of baroreflex sequences in the original data was significantly greater than the surrogates (P < 0.001). In the original data, there were significantly (P < 0.001) more lag 0 than lag 1 or lag 2 baroreflex sequences. Therefore, these results indicated that spontaneous baroreflex sequences represented physiological rather than chance interactions and that baroreflex responses can occur within the same beat.
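
A small sketch of the sequence-counting rule described above, assuming beat-by-beat SBP and R-R interval arrays of equal length; the variable names and the strict same-direction criterion within a run are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

def count_baroreflex_sequences(sbp, rri, lag=1, min_beats=3):
    """Count spontaneous baroreflex sequences: >= min_beats consecutive beats
    in which SBP and the R-R interval `lag` beats later both keep rising or
    both keep falling (illustrative sketch)."""
    dsbp = np.diff(sbp[:len(sbp) - lag])        # beat-to-beat SBP changes
    drri = np.diff(rri[lag:])                   # lagged R-R interval changes
    step_sign = np.sign(dsbp)
    coupled = (step_sign == np.sign(drri)) & (step_sign != 0)

    n_sequences, run, prev_sign = 0, 0, 0
    for ok, s in zip(coupled, step_sign):
        if ok and (run == 0 or s == prev_sign):
            run += 1                            # run continues in same direction
        else:
            run = 1 if ok else 0                # direction flip or broken coupling
        prev_sign = s
        if run == min_beats - 1:                # min_beats beats = min_beats-1 changes
            n_sequences += 1                    # count each qualifying run once
    return n_sequences
```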


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Xiaoliang Zhang ◽  
Yulin He ◽  
Yi Jin ◽  
Honglian Qin ◽  
Muhammad Azhar ◽  
...  

The k-means algorithm is sensitive to outliers. In this paper, we propose a robust two-stage k-means clustering algorithm based on the observation point mechanism, which can accurately discover the cluster centers without being disturbed by outliers. In the first stage, a small subset of the original data set is selected based on a set of nondegenerate observation points. The subset is a good representation of the original data set because it contains only the points that lie in high-density regions of the original data set and excludes the outliers. In the second stage, we use the k-means clustering algorithm to cluster the selected subset and take the resulting cluster centers as the true cluster centers of the original data set. Based on these cluster centers, the remaining data points of the original data set are assigned to the cluster whose center is closest to them. Theoretical analysis and experimental results show that the proposed clustering algorithm has lower computational complexity and better robustness than the k-means clustering algorithm, demonstrating its feasibility and effectiveness.
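
A condensed sketch of the two-stage idea under simple assumptions: a k-nearest-neighbor distance stands in for the paper's observation-point density estimate, and scikit-learn's KMeans is used for the second stage; the function name, the keep fraction, and the neighbor count are illustrative choices.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

def two_stage_kmeans(X, n_clusters, keep_fraction=0.5, n_neighbors=10):
    """Cluster X robustly: (1) keep only points in dense regions,
    (2) run k-means on that subset, (3) assign every point to the
    nearest resulting center (illustrative sketch)."""
    # Stage 1: density proxy = distance to the k-th nearest neighbor
    nn = NearestNeighbors(n_neighbors=n_neighbors + 1).fit(X)
    dist, _ = nn.kneighbors(X)
    density_score = dist[:, -1]                       # small = dense region
    keep = density_score <= np.quantile(density_score, keep_fraction)

    # Stage 2: k-means on the dense subset only (outliers excluded)
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(X[keep])

    # Assign all points, including the ones left out, to the nearest center
    labels = km.predict(X)
    return labels, km.cluster_centers_
```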


Computers ◽  
2019 ◽  
Vol 8 (1) ◽  
pp. 26 ◽  
Author(s):  
Marta Wlodarczyk-Sielicka ◽  
Jacek Lubczonek

At the present time, spatial data are often acquired using varied remote sensing sensors and systems, which produce big data sets. One significant product from these data is a digital model of geographical surfaces, including the surface of the sea floor. To improve data processing, presentation, and management, it is often indispensable to reduce the number of data points. This paper presents research regarding the application of artificial neural networks to bathymetric data reductions. This research considers results from radial networks and self-organizing Kohonen networks. During reconstructions of the seabed model, the results show that neural networks with fewer hidden neurons than the number of data points can replicate the original data set, while the Kohonen network can be used for clustering during big geodata reduction. Practical implementations of neural networks capable of creating surface models and reducing bathymetric data are presented.
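
A compact, self-contained sketch of how a Kohonen self-organizing map can thin a bathymetric point cloud (x, y, depth) down to a small grid of prototype points; the grid size, learning schedule, and training loop are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np

def som_reduce(points, grid=(10, 10), n_iter=5000, lr0=0.5, sigma0=3.0,
               rng=np.random.default_rng(0)):
    """Reduce a bathymetric point set (N x 3: x, y, depth) to grid[0]*grid[1]
    prototype points with a tiny Kohonen self-organizing map."""
    gx, gy = grid
    # Node coordinates on the map lattice and randomly initialized weights
    nodes = np.array([(i, j) for i in range(gx) for j in range(gy)], float)
    weights = points[rng.integers(0, len(points), gx * gy)].astype(float)

    for t in range(n_iter):
        lr = lr0 * np.exp(-t / n_iter)               # decaying learning rate
        sigma = sigma0 * np.exp(-t / n_iter)         # shrinking neighborhood
        p = points[rng.integers(len(points))]        # random training sample
        bmu = np.argmin(((weights - p) ** 2).sum(axis=1))   # best-matching unit
        # Pull the BMU and its lattice neighbors toward the sample
        d2 = ((nodes - nodes[bmu]) ** 2).sum(axis=1)
        h = np.exp(-d2 / (2.0 * sigma ** 2))
        weights += lr * h[:, None] * (p - weights)

    return weights          # the reduced data set: one prototype per map node
```

The prototypes can then be gridded or triangulated to rebuild the seabed surface from far fewer points than the original survey contains.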


Author(s):  
Dingwen Tao ◽  
Sheng Di ◽  
Hanqi Guo ◽  
Zizhong Chen ◽  
Franck Cappello

Because of the vast volume of data being produced by today’s scientific simulations and experiments, lossy data compressors that allow user-controlled loss of accuracy during compression are a relevant solution for significantly reducing the data size. However, lossy compressor developers and users lack a tool to explore the features of scientific data sets and understand the data alteration after compression in a systematic and reliable way. To address this gap, we have designed and implemented a generic framework called Z-checker. On the one hand, Z-checker combines a battery of data analysis components for data compression. On the other hand, Z-checker is implemented as an open-source community tool to which users and developers can contribute and add new analysis components based on their additional analysis demands. In this article, we present a survey of existing lossy compressors. Then, we describe the design framework of Z-checker, in which we integrated evaluation metrics proposed in prior work as well as other analysis tools. Specifically, for lossy compressor developers, Z-checker can be used to characterize critical properties (such as entropy, distribution, power spectrum, principal component analysis, and autocorrelation) of any data set to improve compression strategies. For lossy compression users, Z-checker can assess the compression quality (compression ratio and bit rate) and provide various global distortion analyses comparing the original data with the decompressed data (peak signal-to-noise ratio, normalized mean squared error, rate-distortion, rate-compression error, spectral, distribution, and derivatives) as well as statistical analysis of the compression error (maximum, minimum, and average error; autocorrelation; and distribution of errors). Z-checker can perform the analysis with either coarse granularity (throughout the whole data set) or fine granularity (on user-defined blocks), such that users and developers can select the best-fit, adaptive compressors for different parts of the data set. Z-checker features a visualization interface displaying all analysis results in addition to some basic views of the data sets, such as time series. To the best of our knowledge, Z-checker is the first tool designed to assess lossy compression comprehensively for scientific data sets.
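
This is not Z-checker's actual API, but a small sketch of the kind of distortion statistics such a tool reports, assuming the original and decompressed fields are NumPy arrays of the same shape and the compressed size in bytes is known.

```python
import numpy as np

def distortion_report(original, decompressed, compressed_nbytes):
    """Compute a few common lossy-compression quality metrics
    (illustrative sketch, not the Z-checker interface)."""
    err = decompressed.astype(np.float64) - original.astype(np.float64)
    value_range = original.max() - original.min()
    mse = np.mean(err ** 2)
    return {
        "compression_ratio": original.nbytes / compressed_nbytes,
        "bit_rate": 8.0 * compressed_nbytes / original.size,    # bits per value
        "max_abs_error": np.abs(err).max(),
        "mean_error": err.mean(),
        "nrmse": np.sqrt(mse) / value_range,                    # normalized RMSE
        "psnr_db": 20.0 * np.log10(value_range) - 10.0 * np.log10(mse),
        # lag-1 autocorrelation of the compression error
        "error_autocorr_lag1": np.corrcoef(err.ravel()[:-1], err.ravel()[1:])[0, 1],
    }
```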


2015 ◽  
Vol 2015 ◽  
pp. 1-13 ◽  
Author(s):  
Aysun Sezer ◽  
Hasan Basri Sezer ◽  
Songul Albayrak

Proton density (PD) weighted MR images suffer from intensity inhomogeneity and low signal-to-noise ratio (SNR), and they cannot define bone borders clearly. Segmentation of PD weighted images is hampered by these properties, which even limit visual inspection. The purpose of this study is to determine the effectiveness of segmenting the humeral head from axial PD MR images with the active contour without edges (ACWE) model. We included 219 images from our original data set. We extended the use of speckle reducing anisotropic diffusion (SRAD) to PD MR images by estimating the standard deviation of noise (SDN) from a region of interest (ROI). To overcome the initialization problem of these region-based methods, the location of the initial contour was automatically determined using the circular Hough transform. For comparison, signed pressure force (SPF), fuzzy C-means, and Gaussian mixture models were applied, and the segmentation results of all four methods were also compared with the manual segmentation results of an expert. Experimental results on our own database show promising results. This is the first study in the literature to segment normal and pathological humeral heads from PD weighted MR images.
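
A simplified sketch of the Hough-initialized, region-based contour idea using scikit-image, assuming a 2-D grayscale slice and a plausible radius range; the SRAD preprocessing step is omitted, and the parameters and function name are illustrative assumptions rather than the authors' pipeline.

```python
import numpy as np
from skimage.feature import canny
from skimage.transform import hough_circle, hough_circle_peaks
from skimage.segmentation import morphological_chan_vese

def segment_humeral_head(image, radii=np.arange(20, 60)):
    """Locate a roughly circular structure with the circular Hough transform
    and refine it with a region-based active contour (illustrative sketch)."""
    # 1) Automatic initialization: strongest circle in the edge map
    edges = canny(image, sigma=2.0)
    hspaces = hough_circle(edges, radii)
    _, cx, cy, r = hough_circle_peaks(hspaces, radii, total_num_peaks=1)

    # 2) Initial level set: a disk centered on the detected circle
    yy, xx = np.ogrid[:image.shape[0], :image.shape[1]]
    init = (xx - cx[0]) ** 2 + (yy - cy[0]) ** 2 <= r[0] ** 2

    # 3) Region-based active contour without edges (Chan-Vese style)
    return morphological_chan_vese(image, 200, init_level_set=init)
```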


1994 ◽  
Vol 144 ◽  
pp. 139-141 ◽  
Author(s):  
J. Rybák ◽  
V. Rušin ◽  
M. Rybanský

Abstract: Fe XIV 530.3 nm coronal emission line observations have been used to estimate the rotation of the green solar corona. A homogeneous data set, created from measurements of the world-wide coronagraphic network, has been examined with the help of correlation analysis to reveal the averaged synodic rotation period as a function of latitude and time over the epoch from 1947 to 1991. The values of the synodic rotation period obtained for this epoch for the whole range of latitudes and for a latitude band of ±30° are 27.52 ± 0.12 days and 26.95 ± 0.21 days, respectively. A differential rotation of the green solar corona, with local period maxima around ±60° and a minimum of the rotation period at the equator, was confirmed. No clear cyclic variation of the rotation has been found for the examined epoch, but monotonic trends are present for some time intervals. A detailed investigation of the original data and their correlation functions has shown that the existence of sufficiently reliable tracers is not evident for the whole set of examined data. This should be taken into account in future, more precise estimations of the green corona rotation period.
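
A simplified sketch of the kind of correlation analysis described above, assuming a daily green-line intensity series for a single latitude band; the gap handling and the 22 to 32 day search window are illustrative assumptions, not the study's procedure.

```python
import numpy as np

def synodic_period(intensity, min_lag=22, max_lag=32):
    """Estimate the synodic rotation period (in days) of a daily coronal
    intensity series from the lag of maximum autocorrelation."""
    x = intensity - np.nanmean(intensity)
    x = np.nan_to_num(x)                       # crude handling of data gaps
    lags = np.arange(min_lag, max_lag + 1)
    acf = np.array([np.corrcoef(x[:-lag], x[lag:])[0, 1] for lag in lags])
    return lags[np.argmax(acf)]                # lag with the strongest recurrence
```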


Author(s):  
Wendy J. Schiller ◽  
Charles Stewart III

From 1789 to 1913, U.S. senators were not directly elected by the people—instead the Constitution mandated that they be chosen by state legislators. This radically changed in 1913, when the Seventeenth Amendment to the Constitution was ratified, giving the public a direct vote. This book investigates the electoral connections among constituents, state legislators, political parties, and U.S. senators during the age of indirect elections. The book finds that even though parties controlled the partisan affiliation of the winning candidate for Senate, they had much less control over the universe of candidates who competed for votes in Senate elections and the parties did not always succeed in resolving internal conflict among their rank and file. Party politics, money, and personal ambition dominated the election process, in a system originally designed to insulate the Senate from public pressure. The book uses an original data set of all the roll call votes cast by state legislators for U.S. senators from 1871 to 1913 and all state legislators who served during this time. Newspaper and biographical accounts uncover vivid stories of the political maneuvering, corruption, and partisanship—played out by elite political actors, from elected officials, to party machine bosses, to wealthy business owners—that dominated the indirect Senate elections process. The book raises important questions about the effectiveness of Constitutional reforms, such as the Seventeenth Amendment, that promised to produce a more responsive and accountable government.


2020 ◽  
Author(s):  
Eva Østergaard-Nielsen ◽  
Stefano Camatarri

Abstract: The role orientation of political representatives and candidates is a longstanding concern in studies of democratic representation. The growing trend among countries of allowing citizens abroad to stand as candidates in homeland elections from afar provides an interesting opportunity for understanding how international mobility and context influence ideas of representation among these emigrant candidates. In public debates, emigrant candidates are often portrayed as delegates of the emigrant constituencies. However, drawing on the paradigmatic case of Italy and an original data set comprising emigrant candidates, we show that perceptions of styles of representation abroad are more complex. Systemic differences between electoral districts at home and abroad are relevant for explaining why and how candidates develop a trustee or delegate orientation.


Author(s):  
Simona Babiceanu ◽  
Sanhita Lahiri ◽  
Mena Lockwood

This study uses a suite of performance measures that was developed by taking into consideration various aspects of congestion and reliability, to assess impacts of safety projects on congestion. Safety projects are necessary to help move Virginia’s roadways toward safer operation, but can contribute to congestion and unreliability during execution, and can affect operations after execution. However, safety projects are assessed primarily for safety improvements, not for congestion. This study identifies an appropriate suite of measures, and quantifies and compares the congestion and reliability impacts of safety projects on roadways for the periods before, during, and after project execution. The paper presents the performance measures, examines their sensitivity based on operating conditions, defines thresholds for congestion and reliability, and demonstrates the measures using a set of Virginia safety projects. The data set consists of 10 projects totalling 92 mi and more than 1M data points. The study found that, overall, safety projects tended to have a positive impact on congestion and reliability after completion, and the congestion variability measures were sensitive to the threshold of reliability. The study concludes with practical recommendations for primary measures that may be used to measure overall impacts of safety projects: percent vehicle miles traveled (VMT) reliable with a customized threshold for Virginia; percent VMT delayed; and time to travel 10 mi. However, caution should be used when applying the results directly to other situations, because of the limited number of projects used in the study.
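
A back-of-the-envelope sketch of how measures of this kind can be computed from segment-level data (speed, free-flow speed, segment length, and volume); the reliability threshold of 1.5 on the travel time index and the input names are illustrative assumptions, not the study's exact definitions.

```python
import numpy as np

def congestion_measures(speed_mph, freeflow_mph, length_mi, volume_veh,
                        reliable_tti=1.5):
    """Compute illustrative congestion/reliability measures for a set of
    road-segment observations (all inputs are 1-D arrays of equal length)."""
    vmt = length_mi * volume_veh                       # vehicle miles traveled
    tti = freeflow_mph / np.maximum(speed_mph, 1e-6)   # travel time index

    pct_vmt_reliable = 100.0 * vmt[tti <= reliable_tti].sum() / vmt.sum()
    pct_vmt_delayed = 100.0 * vmt[speed_mph < freeflow_mph].sum() / vmt.sum()
    # Time (minutes) to travel 10 miles at the VMT-weighted average speed
    avg_speed = np.average(speed_mph, weights=vmt)
    time_to_travel_10mi = 60.0 * 10.0 / avg_speed
    return pct_vmt_reliable, pct_vmt_delayed, time_to_travel_10mi
```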

