Analysis of Fishing Efficiency Factors in North Atlantic Métiers (Analiza dejavnikov ribolovne učinkovitosti v metierjih Severnega Atlantika)

Author(s):  
Janja Jerebic ◽  
Špela Kajzer ◽  
Anja Goričan ◽  
Drago Bokal

The management of fishing fleets is an important factor in the sustainable exploitation of marine organisms for human consumption. Regulatory services therefore monitor catches and limit them on the basis of data. In this paper, we analyze Northwest Atlantic Fisheries Organization (NAFO) data on North Atlantic catches to guide the effectiveness of fishing stakeholders. The data cover fishing time (month and year), equipment, location, type of catch, and, most interestingly for our purposes, fishing effort; their quality is analyzed. In the last part, principal component analysis is performed on a selected data sample over the individual activities among which fishing stakeholders can choose. The complexity of the connections within the set of observed activities is explained by new uncorrelated variables - principal components - that are important for achieving the expected fishing catch. We find that the proportions of variance explained by the individual principal components are low, which indicates the high complexity of the topic discussed.
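
As an illustration of the workflow described above, the sketch below runs a PCA over standardized activity variables and reports the share of variance explained by each component. The column names and the simulated records are placeholders, not the actual NAFO schema.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Placeholder records standing in for the NAFO catch table (column names are invented).
rng = np.random.default_rng(0)
records = pd.DataFrame({
    "fishing_effort_days": rng.gamma(2.0, 5.0, size=500),
    "vessel_tonnage":      rng.normal(300, 80, size=500),
    "fishing_depth_m":     rng.normal(400, 120, size=500),
    "catch_tonnes":        rng.gamma(3.0, 2.0, size=500),
})

# Standardize, then extract principal components of the activity variables.
X = StandardScaler().fit_transform(records)
pca = PCA().fit(X)

# Proportion of variance explained per component; uniformly low leading shares
# point to a complex, high-dimensional structure, as discussed above.
for i, share in enumerate(pca.explained_variance_ratio_, start=1):
    print(f"PC{i}: {share:.1%}")
```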

2021 ◽  
Author(s):  
Amélie Fischer ◽  
Philippe Gasnier ◽  
Philippe Faverdin

Background: Improving feed efficiency has become a common target for dairy farmers seeking to produce more milk with fewer resources. A prerequisite for improving feed efficiency is to ensure that the cows identified as most or least efficient remain so independently of diet composition. The current research therefore analysed the ability of lactating dairy cows to maintain their feed efficiency when the energy density of the diet was changed by altering its starch and fibre concentrations. A total of 60 lactating Holstein cows, including 33 primiparous cows, were first fed a high-starch diet (diet E+P+) and then switched over to a low-starch diet (diet E−P−). Near-infrared (NIR) spectroscopy was performed on each individual feed ingredient, each diet, and each individual refusal to check for sorting behaviour. A principal component analysis (PCA) was performed to analyse whether the variability in the NIR spectra of the refusals was explained by differences in feed efficiency.

Results: The error of reproducibility of feed efficiency across diets was 2.95 MJ/d. This error was significantly larger than the errors of repeatability estimated within diet over two subsequent lactation stages, which were 2.01 MJ/d within diet E−P− and 2.40 MJ/d within diet E+P+. The coefficient of correlation of concordance (CCC) was 0.64 between feed efficiency estimated within diet E+P+ and feed efficiency estimated within diet E−P−. This CCC was smaller than the one observed for feed efficiency estimated within diet between two subsequent lactation stages (CCC = 0.72 within diet E+P+ and 0.85 within diet E−P−). The first two principal components of the PCA explained 90% of the total variability of the NIR spectra of the individual refusals. Feed efficiency was poorly correlated with those principal components, which suggests that feed sorting behaviour did not explain differences in feed efficiency.

Conclusions: Feed efficiency was significantly less reproducible across diets than it was repeatable within the same diet over subsequent lactation stages, but cow ranking for feed efficiency was not significantly affected by the diet change. In this trial, differences in sorting behaviour between cows were not associated with differences in feed efficiency under either the E+P+ diet or the E−P− diet. These results have to be confirmed with cows fed more extreme diets (for example, roughage only) to ensure that the least and most efficient cows do not change.
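
For reference, a minimal sketch of the concordance statistic reported above, assuming the CCC referred to is Lin's concordance correlation coefficient; the paired efficiency values below are invented placeholders, not the trial data.

```python
import numpy as np

def concordance_cc(x, y):
    """Lin's concordance correlation coefficient between two paired measurements."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, bias=True)[0, 1]
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Hypothetical paired efficiency estimates (MJ/d) for the same cows on the two diets.
eff_high_starch = np.array([ 1.2, -0.5, 0.8, -2.1, 0.3])
eff_low_starch  = np.array([ 0.9, -1.1, 1.4, -1.8, 0.6])
print(round(concordance_cc(eff_high_starch, eff_low_starch), 2))
```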


2016 ◽  
Vol 77 (1) ◽  
pp. 165-178 ◽  
Author(s):  
Tenko Raykov ◽  
George A. Marcoulides ◽  
Tenglong Li

The measurement error in principal components extracted from a set of fallible measures is discussed and evaluated. It is shown that as long as one or more measures in a given set of observed variables contain error of measurement, so does any principal component obtained from the set. The error variance in any principal component is shown to be (a) bounded from below by the smallest error variance in a variable from the analyzed set and (b) bounded from above by the largest error variance in a variable from that set. In the case of a unidimensional set of analyzed measures, it is pointed out that the reliability and criterion validity of any principal component are bounded from above by the respective coefficients of the optimal linear combination with maximal reliability and criterion validity (for a criterion unrelated to the error terms in the individual measures). The discussed psychometric features of principal components are illustrated with a numerical data set.
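
The two bounds follow from a short calculation. A sketch, assuming uncorrelated error terms across the observed variables and unit-length component weights as in standard PCA (the notation below is introduced here for illustration):

```latex
% Component y built from fallible measures X_i = T_i + E_i with unit-length weights.
\[
y = \sum_{i=1}^{k} w_i X_i, \qquad \sum_{i=1}^{k} w_i^2 = 1, \qquad X_i = T_i + E_i,
\]
% With uncorrelated error terms, the component's error variance is a convex
% combination of the individual error variances, which yields both bounds at once:
\[
\operatorname{Var}(E_y) = \sum_{i=1}^{k} w_i^2 \operatorname{Var}(E_i)
\;\Longrightarrow\;
\min_i \operatorname{Var}(E_i) \;\le\; \operatorname{Var}(E_y) \;\le\; \max_i \operatorname{Var}(E_i).
\]
```

Because the squared weights are nonnegative and sum to one, the component's error variance can never fall below the smallest, or exceed the largest, error variance in the analyzed set.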


2005 ◽  
Vol 93 (6) ◽  
pp. 3560-3572 ◽  
Author(s):  
Serapio M. Baca ◽  
Eric E. Thomson ◽  
William B. Kristan

In response to touches to their skin, medicinal leeches shorten their body on the side of the touch. We elicited local bends by delivering precisely controlled pressure stimuli at different locations, intensities, and durations to body-wall preparations. We videotaped the individual responses and quantified the body-wall displacements over time using a motion-tracking algorithm based on optic flow estimates between video frames. Using principal components analysis (PCA), we found that one to three principal components fit the behavioral data much better than did previous (cosine) measures. The amplitudes of the principal components (i.e., the principal component scores) discriminated well between the responses to stimuli at different locations and of different intensities. Leeches discriminated (i.e., produced distinguishable responses) between touch locations that were approximately a millimeter apart. Their ability to discriminate stimulus intensity depended on stimulus magnitude: discrimination was very acute for weak stimuli and less sensitive for stronger stimuli. In addition, increasing the stimulus duration improved the leech's ability to discriminate between stimulus intensities. Overall, the use of optic flow fields and PCA provides a powerful framework for characterizing the discrimination abilities of the leech local bend response.
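
A minimal sketch of the scoring step, with synthetic displacement profiles standing in for the optic-flow output (the profile shape, trial count, and the two touch locations are invented for illustration): PCA compresses each response into a few component scores, and those scores separate trials by stimulus location.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Synthetic stand-in for displacement profiles: 40 trials x 64 body-wall positions,
# with the peak of the bend placed at one of two stimulus locations.
positions = np.linspace(0, 1, 64)
profiles, labels = [], []
for trial in range(40):
    loc = 0.35 if trial % 2 == 0 else 0.65          # two touch locations
    profiles.append(np.exp(-((positions - loc) ** 2) / 0.02) + 0.1 * rng.standard_normal(64))
    labels.append(loc)
profiles = np.array(profiles)

# Fit a low-dimensional PCA and inspect the component scores per trial.
pca = PCA(n_components=3)
scores = pca.fit_transform(profiles)
print("variance explained:", np.round(pca.explained_variance_ratio_, 2))

# Trials from the two locations should separate along the leading scores.
for loc in (0.35, 0.65):
    mask = np.array(labels) == loc
    print(f"location {loc}: mean PC1 score = {scores[mask, 0].mean():.2f}")
```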


1991 ◽  
Vol 66 (3) ◽  
pp. 794-808 ◽  
Author(s):  
J. W. McClurkin ◽  
T. J. Gawne ◽  
L. M. Optican ◽  
B. J. Richmond

1. We used the Karhunen-Loève (K-L) transform to quantify the temporal distribution of spikes in the responses of lateral geniculate (LGN) neurons. The basis functions of the K-L transform are a set of waveforms called principal components, which are extracted from the data set. The coefficients of the principal components are uncorrelated with each other and can be used to quantify individual responses. The shapes of each of the first three principal components were very similar across neurons.
2. The coefficient of the first principal component was highly correlated with the spike count, but the other coefficients were not. Thus the coefficient of the first principal component reflects the strength of the response, whereas the coefficients of the other principal components reflect aspects of the temporal distribution of spikes in the response that are uncorrelated with the strength of the response. Statistical analysis revealed that the coefficients of up to 10 principal components were driven by the stimuli. Therefore stimuli govern the temporal distribution as well as the number of spikes in the response.
3. Through the application of information theory, we were able to compare the amount of stimulus-related information carried by LGN neurons when two codes were assumed: first, a univariate code based on response strength alone; and second, a multivariate temporal code based on the coefficients of the first three principal components. We found that LGN neurons were able to transmit an average of 1.5 times as much information using the three-component temporal code as they could using the strength code.
4. The stimulus set we used allowed us to calculate the amount of information each neuron could transmit about stimulus luminance, pattern, and contrast. All neurons transmitted the greatest amount of information about stimulus luminance, but they also transmitted significant amounts of information about stimulus pattern. This pattern information was not a reflection of the luminance or contrast of the pixel centered on the receptive field.
5. In addition to measuring the average amount of information each neuron transmitted about all stimuli, we also measured the amount of information each neuron transmitted about the individual stimuli with both the univariate spike count code and the multivariate temporal code. We then compared the amount of information transmitted per stimulus with the magnitudes of the responses to the individual stimuli. We found that the magnitudes of both the univariate and the multivariate responses to individual stimuli were poorly correlated with the information transmitted about the individual stimuli. (ABSTRACT TRUNCATED AT 400 WORDS)
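
Since the Karhunen-Loève step amounts to a PCA over binned spike trains, the following sketch on synthetic Poisson responses (not the original LGN recordings) illustrates the pattern described in item 2: the first coefficient tracks the spike count, while later coefficients capture temporal structure that is largely independent of it.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

# Synthetic stand-in for LGN responses: 200 trials, spikes binned into 32 time bins.
# Response strength varies trial to trial, and so does the response latency.
n_trials, n_bins = 200, 32
t = np.arange(n_bins)
rates = rng.uniform(2, 20, n_trials)            # overall response strength per trial
latencies = rng.uniform(4, 10, n_trials)        # temporal structure per trial
profiles = np.exp(-0.5 * ((t[None, :] - latencies[:, None]) / 3.0) ** 2)
responses = rng.poisson(rates[:, None] * profiles / profiles.sum(1, keepdims=True) * n_bins / 4)

# The Karhunen-Loève expansion here is simply a PCA over the binned responses.
pca = PCA(n_components=3)
coeffs = pca.fit_transform(responses.astype(float))

spike_counts = responses.sum(axis=1)
for k in range(3):
    r = np.corrcoef(spike_counts, coeffs[:, k])[0, 1]
    print(f"corr(spike count, PC{k + 1} coefficient) = {r:+.2f}")
```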


2021 ◽  
pp. 000370282110562
Author(s):  
Thomas G. Mayerhöfer ◽  
Oleksii Ilchenko ◽  
Andrii Kutsyk ◽  
Jürgen Popp

We have recorded attenuated total reflection infrared spectra of binary mixtures in the (quasi-)ideal systems benzene–toluene, benzene–carbon tetrachloride, and benzene–cyclohexane. We used two-dimensional correlation spectroscopy, principal component analysis, and multivariate curve resolution to analyze the data. The 2D correlation analysis reveals nonlinearities, even in spectral ranges with no obvious deviations from Beer's approximation. The number of principal components is much higher than two, and multivariate curve resolution carried out under the assumption of a third component yields spectra that show only bands of the original components. The results rule out the presence of third components, since any complex should have lower symmetry than the individual molecules and thus more and/or different infrared-active bands in the spectra. Based on Lorentz–Lorenz theory and literature values of the optical constants, we show that the nonlinearities and additional principal components are consequences of local field effects and the polarization of matter by light. Lorentz–Lorenz theory is, however, not able to explain, for example, the different blueshifts of the strong A2u band of benzene in the three mixtures. Evidently, infrared spectroscopy is sensitive to the short-range order around the molecules, which changes with composition and with the molecules' shapes and anisotropy.
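
A toy illustration, using simulated Lorentzian bands rather than the measured ATR spectra, of why such nonlinearities inflate the number of principal components: with strictly linear (Beer-like) mixing two components would suffice, whereas a composition-dependent band shift, standing in here for local field effects, forces additional components.

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic binary-mixture spectra on an arbitrary wavenumber grid.
wavenumber = np.linspace(600, 700, 500)

def band(center, width=4.0):
    return 1.0 / (1.0 + ((wavenumber - center) / width) ** 2)   # Lorentzian profile

fractions = np.linspace(0, 1, 21)
spectra = np.array([
    x * band(650 + 5.0 * (1 - x))        # component A, band shifts to higher wavenumber on dilution
    + (1 - x) * band(680)                # component B, fixed band
    for x in fractions
])

ratios = PCA().fit(spectra).explained_variance_ratio_
print("explained variance ratios:", np.round(ratios[:4], 4))
print("components needed for 99.9% of variance:",
      int(np.searchsorted(np.cumsum(ratios), 0.999) + 1))
```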


Water ◽  
2020 ◽  
Vol 12 (2) ◽  
pp. 525 ◽  
Author(s):  
Abdessamad Tiouiouine ◽  
Suzanne Yameogo ◽  
Vincent Valles ◽  
Laurent Barbiero ◽  
Fabrice Dassonville ◽  
...  

The SISE-Eaux database of water intended for human consumption, archived by the French Regional Health Agency (ARS) since 1990, is a rich source of information. However, monitoring of varying regularity over almost 30 years and the multiplication of parameters lead to a sparse observations × parameters matrix and a data hyperspace of large dimension. These characteristics make it difficult to exploit the database for a synthetic mapping of water quality and to identify the processes responsible for its diversity in a complex geological context and an anthropized environment. A 10-year period (2006–2016) was selected from the Provence-Alpes-Côte d'Azur region database (PACA, southeastern France). We extracted 5,295 water samples, each with 15 parameters. Principal component analysis (PCA) followed by orthomax rotation identifies and ranks six principal components (PCs) totaling 75% of the initial information. The association of the parameters with the principal components and the regional distribution of the PCs make it possible to identify water-rock interactions, bacteriological contamination, redox processes and arsenic occurrence as the main sources of variability. However, the results also highlight a decrease in useful information, a constraint linked to the vast size and diversity of the study area. The development of a relevant tool for protecting and managing water resources will require identifying subsets based on functional landscape units or on groupings of groundwater bodies.
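
A sketch of the PCA-plus-rotation step on placeholder data of the same shape (5,295 samples by 15 parameters); varimax is used here as one member of the orthomax family, and the random matrix is not the SISE-Eaux data, so the printed variance share will not match the 75% reported above.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def varimax(loadings, n_iter=100, tol=1e-6):
    """Orthogonal varimax rotation (one member of the orthomax family, gamma = 1)."""
    p, k = loadings.shape
    rotation = np.eye(k)
    criterion_old = 0.0
    for _ in range(n_iter):
        lam = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (lam ** 3 - lam @ np.diag((lam ** 2).sum(axis=0)) / p)
        )
        rotation = u @ vt
        criterion = s.sum()
        if criterion < criterion_old * (1 + tol):
            break
        criterion_old = criterion
    return loadings @ rotation

# Placeholder water-quality matrix: samples x 15 parameters (random stand-in data).
rng = np.random.default_rng(0)
X = StandardScaler().fit_transform(rng.normal(size=(5295, 15)))

pca = PCA(n_components=6).fit(X)
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)   # parameter loadings
rotated = varimax(loadings)                                       # simpler structure to interpret
print("variance retained by 6 PCs:", round(pca.explained_variance_ratio_.sum(), 2))
```

Rotation leaves the retained variance unchanged; its only purpose is to concentrate each parameter's loading on fewer components so that the components are easier to label (for instance as water-rock interaction or bacteriological contamination).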


1973 ◽  
Vol 30 (12) ◽  
pp. 2051-2058 ◽  
Author(s):  
M. A. Robinson

World population is expected to grow from its present level of 3.7 billion to 4.6 billion in 1980 and 6.6 billion by the end of the century. Merely to maintain per capita fish consumption at present levels, this will necessitate an increased fish supply of some 8 million tons by 1980 and 27 million tons by the end of the century. This excludes allowances for any increase in fish meal consumption.

If, under the influence of rising incomes, per capita consumption levels also grow, then this will further increase the additional supplies required. On the basis of past trends, per capita demand on a world average might be expected to rise from its present level of 11.8 kg to 13.3 kg in 1980, and more speculatively to 16.2 kg by the end of the century. On this assumption, the combined effect of population and income growth would be to add some 18.5 million tons by 1980, and 63 million tons by the end of the century, to the present world demand for fish. This again excludes any allowance for increased demand for fish meal, for which, due mainly to supply limitations, no significant increase in consumption above present levels is expected.

The increases in demand for fish for direct human consumption will nevertheless push the exploitation of conventional fish resources to the limit of their potential yields. By 1980, it seems likely that the potential still remaining to be harvested from conventional fish stocks will have fallen from its present level of about 45% to some 30%, and by the end of the century the unexploited potential is likely to be negligible. This rate of utilization assumes, however, that there will be significant increases in production from cultured sources, which could be stimulated by the rising prices likely to accompany the full exploitation of wild stocks. As more and more wild stocks reach this point, management will become increasingly necessary to prevent the build-up of useless excess capacity in the world's fishing fleets and, in some cases, to prevent fishing effort from reaching the point at which the productive capacity of the resources becomes threatened.
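
A back-of-the-envelope check of the combined population and income effect, using the rounded figures quoted above; the 1980 estimate comes out slightly below the stated 18.5 million tons, presumably because the quoted inputs are rounded.

```python
# Rough check of the combined population + income effect (tons are metric tons).
pop_now, pop_1980, pop_2000 = 3.7e9, 4.6e9, 6.6e9      # people
pc_now, pc_1980, pc_2000 = 11.8, 13.3, 16.2            # kg per person per year

base = pop_now * pc_now / 1e9                          # present demand, ~44 million tons
extra_1980 = pop_1980 * pc_1980 / 1e9 - base           # ~17-18 million tons by 1980
extra_2000 = pop_2000 * pc_2000 / 1e9 - base           # ~63 million tons by 2000
print(round(extra_1980, 1), round(extra_2000, 1))
```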


2005 ◽  
Vol 13 (5) ◽  
pp. 255-264 ◽  
Author(s):  
T. Golebiowski ◽  
A.S. Leong ◽  
J.F. Panozzo

This work reports that measurements of the likeness or uniqueness of the 1100–2500 nm reflectance spectra of intact canola seed, determined with global H or neighbourhood H statistics from principal component analysis (PCA)-approximated spectra, were not associated with oil concentration within the seed. The absence of a stable association between the H measurements and oil content was related to inconsistency in the amount and distribution (between principal components) of the spectral variation correlated with oil content within and between different batches of canola seed. PCA was used to approximate variation in the 1100–2500 nm, second-order derivative, reflectance spectra of intact canola seed acquired from 15 batches of seed samples. The first eight principal components (PCs) captured 97.14% to 99.35% of the total variance in the spectra. The amount of variation captured by individual components was independent of the number of samples in the batch and of the oil content within the seed. The pattern of variance distribution among principal components was inconsistent and highlighted the uniqueness of the origin of the spectral variation in each batch of canola seed. In this study, the strength of correlation between oil content and principal components was used as a measure of component significance for the analysis of oil in intact canola seed. In the examined sets of spectra, oil content was correlated with the low-order components, PC1 to PC4. In the 15 files of spectra, oil content showed the strongest correlation with PC2 in eight sets of data, with PC3 in four sets and with PC1 in three sets. The strength of association between oil and the individual components varied considerably in magnitude among the examined files of spectra: r2 = 0.28–0.81 for the first strongly correlated component, r2 = 0.05–0.29 for the second and r2 = 0.02–0.19 for the third. The position of the PCs in the correlation sequence was inconsistent and underlined differences in oil signal/spectral data interactions in the individual sets of data. Examination of principal component loadings showed that, in the reported files of spectra, the principal components correlated with oil content frequently captured variance at segments denoting absorptions both specific and accidental to canola oil. The outline of the loadings did not conform to a single, regular pattern common to all sets of data. The reported results are in disagreement with the rationale of the methodology that uses spectra-matching techniques to validate the predictive efficiency of near infrared (NIR) calibrations. They highlight that reliable NIR quantification of oil content from reflectance spectra of intact canola seed would require an independent validation for every acquired set of spectra.
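
A minimal sketch of the pretreatment, PCA, and correlation steps on placeholder spectra; the wavelength grid, smoothing window, and oil values below are assumptions for illustration, not the study's settings.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)

# Placeholder reflectance spectra for one batch: 100 seed samples x 700 wavelengths
# (1100-2500 nm at 2 nm steps), plus a hypothetical oil-content vector (% oil).
wavelengths = np.arange(1100, 2500, 2)
spectra = rng.normal(size=(100, wavelengths.size)).cumsum(axis=1) * 1e-3   # smooth-ish curves
oil = rng.normal(44, 2, size=100)

# Second-derivative pretreatment followed by PCA, mirroring the study design.
d2 = savgol_filter(spectra, window_length=21, polyorder=3, deriv=2, axis=1)
scores = PCA(n_components=8).fit_transform(d2)

# Correlation (r^2) between oil content and each of the first eight PC scores.
r2 = [np.corrcoef(oil, scores[:, k])[0, 1] ** 2 for k in range(8)]
print(np.round(r2, 2))
```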


2006 ◽  
Vol 27 (2) ◽  
pp. 87-92 ◽  
Author(s):  
Willem K.B. Hofstee ◽  
Dick P.H. Barelds ◽  
Jos M.F. Ten Berge

Hofstee and Ten Berge (2004a) have proposed a new look at personality assessment data, based on a bipolar proportional (−1, …, 0, …, +1) scale, a corresponding coefficient of raw-scores likeness L = ΣXY/N, and raw-scores principal component analysis. In a normal sample, the approach resulted in a structure dominated by a first principal component, according to which most people are faintly to mildly socially desirable. We hypothesized that a more differentiated structure would arise in a clinical sample. We analyzed the scores of 775 psychiatric clients on the 132 items of the Dutch Personality Questionnaire (NPV). In comparison to a normative sample (N = 3140), the eigenvalue for the first principal component appeared to be 1.7 times as small, indicating that such clients have less personality (social desirability) in common. Still, the match between the structures in the two samples was excellent after oblique rotation of the loadings. We applied the abridged m-dimensional circumplex design, by which persons are typed by their two highest scores on the principal components, to the scores on the first four principal components. We identified five types: Indignant (1-), Resilient (1-2+), Nervous (1-2-), Obsessive-Compulsive (1-3-), and Introverted (1-4-), covering 40% of the psychiatric sample. Some 26% of the individuals had negligible scores on all type vectors. We discuss the potential and the limitations of our approach in a clinical context.
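
A sketch of the raw-scores approach on simulated item scores (not the NPV data): the likeness coefficient is an uncentred cross-product divided by the number of elements, and raw-scores PCA is a singular value decomposition of the uncentred score matrix, so a shared positive tendency, standing in here for mild social desirability, loads on the first component.

```python
import numpy as np

rng = np.random.default_rng(3)

# Placeholder scores on the bipolar proportional scale (-1 ... 0 ... +1):
# rows are persons, columns are questionnaire items (values are illustrative only).
scores = np.clip(rng.normal(0.15, 0.4, size=(775, 132)), -1, 1)

def likeness(x, y):
    """Raw-scores likeness L = sum(X*Y) / N between two score vectors."""
    return np.dot(x, y) / len(x)

# Raw-scores PCA: an SVD of the *uncentred* score matrix, so the first
# component absorbs whatever all persons have in common.
u, s, vt = np.linalg.svd(scores, full_matrices=False)
eigenvalues = s ** 2 / scores.shape[0]
person_scores = u * s                    # component scores for persons
print("ratio of first to second eigenvalue:", round(eigenvalues[0] / eigenvalues[1], 1))
```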


Methodology ◽  
2016 ◽  
Vol 12 (1) ◽  
pp. 11-20 ◽  
Author(s):  
Gregor Sočan

Abstract. When principal component solutions are compared across two groups, the question arises whether the extracted components have the same interpretation in both populations. The problem can be approached by testing null hypotheses stating that the congruence coefficients between pairs of vectors of component loadings are equal to 1. Chan, Leung, Chan, Ho, and Yung (1999) proposed a bootstrap procedure for testing the hypothesis of perfect congruence between vectors of common factor loadings. We demonstrate that the procedure by Chan et al. is both theoretically and empirically inadequate for application to principal components. We propose a modification of their procedure, which constructs the resampling space according to the characteristics of the principal component model. The results of a simulation study show satisfactory empirical properties of the modified procedure.
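
For orientation, a generic percentile-bootstrap sketch on placeholder data; it illustrates the quantity under test, Tucker's congruence between first-component loadings in two groups, but not the specific resampling-space construction proposed by Chan et al. or the modification described above.

```python
import numpy as np
from sklearn.decomposition import PCA

def congruence(x, y):
    """Tucker's congruence coefficient between two loading vectors."""
    return np.dot(x, y) / np.sqrt(np.dot(x, x) * np.dot(y, y))

def first_pc_loadings(data):
    pca = PCA(n_components=1).fit(data)
    v = pca.components_[0] * np.sqrt(pca.explained_variance_[0])
    return v if v.sum() >= 0 else -v          # fix the sign indeterminacy

def bootstrap_congruence(group_a, group_b, n_boot=500, seed=0):
    """Bootstrap distribution of congruence between first-PC loadings of two groups."""
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(n_boot):
        a = group_a[rng.integers(0, len(group_a), len(group_a))]
        b = group_b[rng.integers(0, len(group_b), len(group_b))]
        stats.append(congruence(first_pc_loadings(a), first_pc_loadings(b)))
    return np.array(stats)

# Placeholder data: two groups measured on the same six variables.
rng = np.random.default_rng(1)
group_a = rng.multivariate_normal(np.zeros(6), np.eye(6) + 0.5, size=300)
group_b = rng.multivariate_normal(np.zeros(6), np.eye(6) + 0.5, size=300)
dist = bootstrap_congruence(group_a, group_b)
print("bootstrap 5th percentile of congruence:", round(np.percentile(dist, 5), 3))
```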

