Principal components of short-term variability in the ultraviolet albedo of Venus

2019, Vol 626, pp. A30
Author(s): Pushkar Kopparla, Yeon Joo Lee, Takeshi Imamura, Atsushi Yamazaki

We explore the dominant modes of variability in the observed albedo at the cloud tops of Venus using Akatsuki UVI 283 nm and 365 nm observations, which are sensitive to SO2 and to the unknown UV absorber distributions, respectively. The observations, taken over the period December 2016 to May 2018, consist of images of the dayside of Venus, most often observed at intervals of two hours but interspersed with longer gaps. The orbit of the spacecraft does not allow for continuous observation of the full dayside, and the unobserved regions cause significant gaps in the datasets. Each dataset is subdivided into three subsets for three observing periods, the unobserved data are interpolated, and each subset is then subjected to a principal component analysis to find six oscillating patterns in the albedo. Principal components in all three periods show similar morphologies at 283 nm, but are much more variable at 365 nm. Some spatial patterns and the timescales of these modes correspond to well-known physical processes in the atmosphere of Venus, such as the ~4-day Kelvin wave, 5-day Rossby waves, and the overturning circulation, while others defy a simple explanation. We also find a hemispheric mode that is not well understood and discuss its implications.
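As a rough illustration of this kind of analysis, the sketch below (in Python, with synthetic data standing in for the Akatsuki UVI image sequences) interpolates observational gaps along the time axis and then extracts a handful of spatial modes and their temporal amplitudes with PCA. The array shapes, interpolation scheme, and the helper name albedo_modes are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA

def albedo_modes(albedo, n_modes=6):
    """albedo: array of shape (n_times, n_lat, n_lon) with NaN marking unobserved pixels."""
    n_t = albedo.shape[0]
    flat = albedo.reshape(n_t, -1).copy()            # one row per image
    # Fill unobserved pixels by linear interpolation along the time axis.
    for j in range(flat.shape[1]):
        col = flat[:, j]
        good = np.isfinite(col)
        if good.sum() >= 2:
            col[~good] = np.interp(np.flatnonzero(~good),
                                   np.flatnonzero(good), col[good])
        else:
            col[:] = np.nanmean(flat)                # fallback for never-observed pixels
    pca = PCA(n_components=n_modes)
    scores = pca.fit_transform(flat)                 # temporal amplitudes of each mode
    patterns = pca.components_.reshape(n_modes, *albedo.shape[1:])
    return patterns, scores, pca.explained_variance_ratio_

# Usage with synthetic data standing in for the 283 nm or 365 nm image sequence:
rng = np.random.default_rng(0)
cube = rng.normal(size=(200, 30, 60))
cube[rng.random(cube.shape) < 0.2] = np.nan          # simulate observation gaps
eofs, amplitudes, variance_fraction = albedo_modes(cube)
```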

2008, Vol 23 (6), pp. 1146-1161
Author(s): Steven J. Greybush, Sue Ellen Haupt, George S. Young

Abstract Previous methods for creating consensus forecasts weight individual ensemble members based upon their relative performance over the previous N days, implicitly making a short-term persistence assumption about the underlying flow regime. A postprocessing scheme in which model performance is linked to underlying weather regimes could improve the skill of deterministic ensemble model consensus forecasts. Here, principal component analysis of several synoptic- and mesoscale fields from the North American Regional Reanalysis dataset provides an objective means for characterizing atmospheric regimes. Clustering techniques, including K-means and a genetic algorithm, are developed that use the resulting principal components to distinguish among the weather regimes. This pilot study creates a weighted consensus from 48-h surface temperature predictions produced by the University of Washington Mesoscale Ensemble, a varied-model (differing physics and parameterization schemes) multianalysis ensemble with eight members. Different optimal weights are generated for each weather regime. A second regime-dependent consensus technique uses linear regression to predict the relative performance of the ensemble members based upon the principal components. Consensus forecasts obtained by the regime-dependent schemes are compared using cross validation with traditional N-day ensemble consensus forecasts for four locations in the Pacific Northwest, and show improvement over methods that rely on the short-term persistence assumption.
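A minimal sketch of the general idea, assuming synthetic stand-ins for the reanalysis fields, ensemble forecasts, and observations: PCA compresses the synoptic fields into regime descriptors, K-means assigns each day to a regime, and per-regime member weights are fit by least squares. This illustrates the approach only, not the authors' implementation (which also includes a genetic-algorithm clusterer and a regression-based variant).

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_days, n_gridpoints, n_members = 400, 500, 8
fields = rng.normal(size=(n_days, n_gridpoints))     # stand-in reanalysis fields
members = rng.normal(size=(n_days, n_members))       # stand-in ensemble temperature forecasts
obs = members.mean(axis=1) + rng.normal(scale=0.5, size=n_days)   # stand-in observations

pcs = PCA(n_components=5).fit_transform(fields)      # objective regime descriptors
regimes = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(pcs)

weights = {}
for k in np.unique(regimes):
    idx = regimes == k
    # Least-squares member weights that minimize consensus error within this regime.
    w, *_ = np.linalg.lstsq(members[idx], obs[idx], rcond=None)
    weights[k] = w

consensus = np.array([members[i] @ weights[regimes[i]] for i in range(n_days)])
```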


2006, Vol 27 (2), pp. 87-92
Author(s): Willem K.B. Hofstee, Dick P.H. Barelds, Jos M.F. Ten Berge

Hofstee and Ten Berge (2004a) have proposed a new look at personality assessment data, based on a bipolar proportional (-1, ..., 0, ..., +1) scale, a corresponding coefficient of raw-scores likeness L = ΣXY/N, and raw-scores principal component analysis. In a normal sample, the approach resulted in a structure dominated by a first principal component, according to which most people are faintly to mildly socially desirable. We hypothesized that a more differentiated structure would arise in a clinical sample. We analyzed the scores of 775 psychiatric clients on the 132 items of the Dutch Personality Questionnaire (NPV). In comparison with a normative sample (N = 3140), the eigenvalue for the first principal component was smaller by a factor of 1.7, indicating that such clients have less personality (social desirability) in common. Still, the match between the structures in the two samples was excellent after oblique rotation of the loadings. We applied the abridged m-dimensional circumplex design, by which persons are typed by their two highest scores on the principal components, to the scores on the first four principal components. We identified five types: Indignant (1-), Resilient (1-2+), Nervous (1-2-), Obsessive-Compulsive (1-3-), and Introverted (1-4-), covering 40% of the psychiatric sample. Some 26% of the individuals had negligible scores on all type vectors. We discuss the potential and the limitations of our approach in a clinical context.
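The likeness coefficient and the uncentered ("raw-scores") component analysis can be written down compactly. The sketch below uses random data on the bipolar proportional scale purely for illustration, so the numbers carry no substantive meaning.

```python
import numpy as np

rng = np.random.default_rng(2)
scores = rng.uniform(-1, 1, size=(775, 132))     # persons x items on the -1...+1 scale

def likeness(x, y):
    """Raw-scores likeness L = sum(X*Y) / N between two item-score profiles."""
    return np.dot(x, y) / x.size

L = likeness(scores[0], scores[1])

# Raw-scores PCA: a singular value decomposition of the uncentered data matrix,
# so the first component can capture what most respondents have in common.
U, s, Vt = np.linalg.svd(scores, full_matrices=False)
eigenvalues = s**2 / scores.shape[0]
first_component_loadings = Vt[0]
```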


Methodology, 2016, Vol 12 (1), pp. 11-20
Author(s): Gregor Sočan

Abstract. When principal component solutions are compared across two groups, the question arises whether the extracted components have the same interpretation in both populations. The problem can be approached by testing null hypotheses stating that the congruence coefficients between pairs of vectors of component loadings are equal to 1. Chan, Leung, Chan, Ho, and Yung (1999) proposed a bootstrap procedure for testing the hypothesis of perfect congruence between vectors of common factor loadings. We demonstrate that the procedure by Chan et al. is both theoretically and empirically inadequate for application to principal components. We propose a modification of their procedure, which constructs the resampling space according to the characteristics of the principal component model. The results of a simulation study show satisfactory empirical properties of the modified procedure.
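For orientation, the sketch below computes Tucker's congruence coefficient between first-component loadings from two groups and bootstraps it naively by resampling rows. This is the kind of baseline procedure under discussion, not the corrected resampling space proposed in the paper; data and sample sizes are placeholders.

```python
import numpy as np

def congruence(a, b):
    """Tucker's coefficient of congruence between two loading vectors."""
    return np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b))

def first_pc_loadings(X):
    """Loadings of the first principal component of the covariance matrix of X."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(X) - 1)
    vals, vecs = np.linalg.eigh(cov)
    top = np.argmax(vals)
    return vecs[:, top] * np.sqrt(vals[top])

rng = np.random.default_rng(3)
X1, X2 = rng.normal(size=(200, 6)), rng.normal(size=(250, 6))   # two stand-in groups

boot = []
for _ in range(500):
    b1 = X1[rng.integers(0, len(X1), len(X1))]       # resample persons with replacement
    b2 = X2[rng.integers(0, len(X2), len(X2))]
    boot.append(abs(congruence(first_pc_loadings(b1), first_pc_loadings(b2))))
interval = np.percentile(boot, [2.5, 97.5])          # naive bootstrap interval for phi
```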


2006, Vol 1 (1)
Author(s): K. Katayama, K. Kimijima, O. Yamanaka, A. Nagaiwa, Y. Ono

This paper proposes a method for stormwater inflow prediction that uses radar rainfall data as the input to a prediction model constructed by system identification. The aim of the proposal is to obtain a compact system by reducing the dimension of the input data. Principal Component Analysis (PCA), which is widely used as a statistical method for data analysis and compression, is applied to pre-process the radar rainfall data. We then evaluate the proposed method using radar rainfall data and inflow data acquired in a combined sewer system. This study reveals that a few principal components of the radar rainfall data can serve as appropriate input variables for a stormwater inflow prediction model. Consequently, we have established a procedure for stormwater inflow prediction using a few principal components of radar rainfall data.
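A minimal sketch of the data-compression step, with randomly generated rainfall and inflow standing in for the radar and sewer measurements: the gridded rainfall is reduced to a few principal components, which then feed a simple one-step-ahead linear model. The paper's actual model is identified from the system, so the regression here is only a placeholder.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
rain = rng.gamma(shape=1.0, scale=1.0, size=(1000, 400))   # time steps x radar grid cells
inflow = 3.0 * rain.mean(axis=1) + rng.normal(scale=0.1, size=1000)  # stand-in inflow

# Reduce the high-dimensional rainfall field to a few principal components ...
pcs = PCA(n_components=3).fit_transform(rain)
# ... and use them as compact inputs to a simple one-step-ahead inflow model.
model = LinearRegression().fit(pcs[:-1], inflow[1:])
predicted_inflow = model.predict(pcs[:-1])
```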


2017, Vol 921 (3), pp. 24-29
Author(s): S.I. Lesnykh, A.K. Cherkashin

The proposed integral mapping procedure is based on the calculation of evaluation functions from integral indicators (II) that take into account the features of the local geographical environment, so that geosystems in the same state but in different environments receive different estimates. The II is calculated by applying a Principal Component Analysis to a forest database, which allows the weight of each indicator (attribute) to be incorporated into the II. The final value of the II equals the difference between the first principal component (the condition of the geosystem) and the second (the condition of the environmental background). Evaluation functions for various integral mapping problems are calculated from this value. Because environmental sources of variability are excluded from the final II, it becomes possible to find an invariant evaluation function and to determine its coefficients. Concepts and functions from reliability theory are used to produce evaluation maps of the hazard of functioning and the stability of geosystems.
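A compact sketch of the stated construction, using random stand-in attributes: the integral indicator is the difference between the first two principal component scores of the forest database. Variable names and sizes are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
forest_attributes = rng.normal(size=(300, 10))   # stand-in forest database (geosystems x attributes)

scores = PCA(n_components=2).fit_transform(forest_attributes)
# II = first PC (geosystem condition) minus second PC (environmental background condition).
integral_indicator = scores[:, 0] - scores[:, 1]
```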


2021, pp. 000370282098784
Author(s): James Renwick Beattie, Francis Esmonde-White

Spectroscopy rapidly captures a large amount of data that is not directly interpretable. Principal Components Analysis (PCA) is widely used to simplify complex spectral datasets into comprehensible information by identifying recurring patterns in the data with minimal loss of information. The linear algebra underpinning PCA is not well understood by many applied analytical scientists and spectroscopists who use PCA, and the meaning of features identified through PCA is often unclear. This manuscript traces the journey of the spectra themselves through the operations behind PCA, with each step illustrated by simulated spectra. PCA relies solely on the information within the spectra; consequently, the mathematical model depends on the nature of the data itself. The direct links between model and spectra allow a concrete spectroscopic explanation of PCA, such as the scores representing 'concentration' or 'weights'. The principal components (loadings) are by definition hidden, repeated and uncorrelated spectral shapes that linearly combine to generate the observed spectra. They can be visualized as subtraction spectra between extreme differences within the dataset. Each PC is shown to be a successive refinement of the estimated spectra, improving the fit between the PC-reconstructed data and the original data. Understanding the data-led development of a PCA model shows how to interpret the application-specific chemical meaning of the PCA loadings and how to analyze the scores. A critical benefit of PCA is its simplicity and the succinctness of its description of a dataset, making it powerful and flexible.
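The reconstruction view described here can be illustrated in a few lines. The sketch below builds simulated spectra from three hypothetical 'pure component' shapes and shows how adding successive principal components tightens the fit between the PC-reconstructed and original spectra.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(6)
x = np.linspace(0, 1, 500)
peaks = np.stack([np.exp(-(x - c) ** 2 / 0.002) for c in (0.3, 0.5, 0.7)])  # 'pure component' shapes
concentrations = rng.uniform(size=(40, 3))                                   # score-like mixing weights
spectra = concentrations @ peaks + rng.normal(scale=0.01, size=(40, 500))    # simulated spectra

pca = PCA(n_components=3).fit(spectra)
scores = pca.transform(spectra)
for k in (1, 2, 3):
    # Reconstruct the spectra from the first k components and measure the misfit.
    reconstructed = scores[:, :k] @ pca.components_[:k] + pca.mean_
    rmse = np.sqrt(((spectra - reconstructed) ** 2).mean())
    print(f"{k} component(s): reconstruction RMSE = {rmse:.4f}")
```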


1988, Vol 18 (1), pp. 211-218
Author(s): J. L. Vazquez-Barquero, P. Williams, J. F. Diez-Manrique, J. Lequerica, A. Arenal

Synopsis. The factor structure of the 60-item version of the General Health Questionnaire was explored, using data collected in a community study in a rural area of northern Spain. Six principal components, similar to those previously reported with this instrument, were found to provide a good description of the data structure. The 30-item and 12-item versions of the GHQ were then disembedded from the parent version, and further principal components analyses carried out. Again, the results were similar to previous studies: in each of the three versions analysed here, the two most important components represented a disturbance of mood (‘general dysphoria’) – including aspects of anxiety, depression and irritability – and a disturbance of social performance (‘social function/optimism’). The principal component structure of the GHQ-60 was then utilized to calculate factor scores, and these were compared with PSE ratings using Relative Operating Characteristic (ROC) analysis. While four of the six factors discriminated well (area under the ROC curve 0.75 or more) between PSE ‘cases’ and ‘non-cases’, only one, depressive thoughts, was a good discriminator between depressed and non-depressed PSE ‘cases’.
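As an illustration of the scoring-plus-ROC step (not the original analysis), the sketch below derives six component scores from stand-in questionnaire responses and evaluates each score's ability to separate 'cases' from 'non-cases' by the area under the ROC curve; all data here are random placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
items = rng.integers(0, 4, size=(1000, 60)).astype(float)   # stand-in GHQ-60 item responses
case = rng.integers(0, 2, size=1000)                         # stand-in PSE 'case' / 'non-case' labels

component_scores = PCA(n_components=6).fit_transform(items)
for j in range(6):
    auc = roc_auc_score(case, component_scores[:, j])
    # The sign of a principal component is arbitrary, so report the larger of AUC and 1 - AUC.
    print(f"component {j + 1}: area under the ROC curve = {max(auc, 1 - auc):.2f}")
```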


1995, Vol 7 (6), pp. 1191-1205
Author(s): Colin Fyfe

A review is given of a new artificial neural network architecture in which the weights converge to the principal component subspace. The weights learn through simple Hebbian learning alone, yet require no clipping, normalization or weight decay. The net self-organizes using negative feedback of activation from a set of "interneurons" to the input neurons. By allowing this negative feedback from the interneurons to act on other interneurons, we can introduce the asymmetry necessary to cause convergence to the actual principal components. Simulations and analysis confirm such convergence.
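A minimal sketch of a negative-feedback Hebbian network of the kind reviewed: activation is fed forward to the interneurons, the reconstruction is fed back and subtracted from the input, and the weights are updated by a plain Hebbian rule on the residual. The learning rate, network sizes, and input covariance are arbitrary choices, and the inter-interneuron feedback that yields the individual components (rather than the subspace) is omitted.

```python
import numpy as np

rng = np.random.default_rng(8)
n_inputs, n_interneurons, eta = 5, 2, 0.01
mixing = rng.normal(size=(n_inputs, n_inputs)) / np.sqrt(n_inputs)  # fixes an input covariance
W = rng.normal(scale=0.1, size=(n_interneurons, n_inputs))

for _ in range(20000):
    x = mixing @ rng.normal(size=n_inputs)   # correlated input sample
    y = W @ x                                # feedforward interneuron activations
    e = x - W.T @ y                          # negative feedback subtracted at the inputs
    W += eta * np.outer(y, e)                # simple Hebbian update on the residual

# After training, the rows of W span the principal component subspace of the inputs.
```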

