Multivariate Principal Component Analysis for Production and Energy Consumption of Cutter Suction Dredger

2014 ◽  
Vol 644-650 ◽  
pp. 2211-2215
Author(s):  
Kai Kai Li ◽  
Huan Min Xu

Cutter suction dredgers play a major role in dredging engineering for harbors, fairways, and land reclamation. However, cutter suction dredger operation involves many parameters, which makes it difficult to guarantee stable production. To address the large number of parameters in dredging operations, a mathematical dimensionality reduction method based on multivariate principal component analysis is proposed. The method calculates the contribution rate and cumulative contribution rate of each parameter and then selects the principal components that influence production and energy consumption. These components represent the majority of the information in the original data while remaining uncorrelated with one another. The principal components can be used to guide the regulation and control of the parameters, reduce the number of regulated parameters and the operational complexity, and provide a theoretical basis for the intelligent automation of dredging operations.
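
A minimal sketch in Python of the contribution-rate selection step described above, assuming a standardized matrix of operating records; the synthetic data, the eight-parameter layout, and the 85% cumulative cut-off are illustrative assumptions rather than values from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# rows = operating records, columns = hypothetical dredging parameters
X = rng.normal(size=(200, 8))

X_std = StandardScaler().fit_transform(X)      # PCA on standardized data
pca = PCA().fit(X_std)

contribution = pca.explained_variance_ratio_   # contribution rate of each component
cumulative = np.cumsum(contribution)           # cumulative contribution rate

# keep the fewest components whose cumulative contribution exceeds the cut-off
k = int(np.searchsorted(cumulative, 0.85) + 1)
scores = pca.transform(X_std)[:, :k]           # uncorrelated principal components

print(f"retained {k} components, cumulative contribution {cumulative[k-1]:.2%}")
```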

Energies ◽  
2019 ◽  
Vol 12 (1) ◽  
pp. 196 ◽  
Author(s):  
Lihui Zhang ◽  
Riletu Ge ◽  
Jianxue Chai

China’s energy consumption issues are closely associated with global climate issues, and the scale of energy consumption, peak energy consumption, and consumption investment are all the focus of national attention. To forecast China’s energy consumption accurately, this article selected GDP, population, industrial structure and energy consumption structure, energy intensity, total imports and exports, fixed asset investment, energy efficiency, urbanization, the level of consumption, and fixed investment in the energy industry as a preliminary set of factors. We then corrected the traditional principal component analysis (PCA) algorithm from the perspective of eliminating “bad points,” identifying “bad point” samples based on signal-reconstruction ideas. On this basis, we put forward a robust principal component analysis (RPCA) algorithm and chose the first five principal components as the main factors affecting energy consumption, including GDP, population, industrial structure and energy consumption structure, and urbanization. We then applied the Tabu search (TS) algorithm to a least squares support vector machine (LSSVM) optimized by the particle swarm optimization (PSO) algorithm to forecast China’s energy consumption. Data from 1996 to 2010 were used as the training set and data from 2010 to 2016 as the test set. For comparison, the sample data were also input into the LSSVM algorithm and the PSO-LSSVM algorithm. Statistical indicators including the goodness-of-fit determination coefficient (R2), the root mean square error (RMSE), and the mean radial error (MRE) were used to compare the training results of the three forecasting models, which demonstrated that the proposed TS-PSO-LSSVM forecasting model had higher prediction accuracy, better generalization ability, and higher training speed. Finally, the TS-PSO-LSSVM forecasting model was applied to forecast China’s energy consumption from 2017 to 2030. According to the predictions, China’s energy consumption increases gradually from 2017 to 2030 and will exceed 6000 million tons in 2030. However, the growth rate is gradually slowing, and China’s energy consumption economy will enter a state of diminishing returns around 2026, suggesting that China should place more emphasis on energy investment.
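
A rough sketch of the “bad point” screening idea described above: samples whose reconstruction error from the leading components is far above the norm are flagged and removed before the components are re-estimated. Ordinary scikit-learn PCA stands in for the paper's RPCA, the data are synthetic, and the 3-sigma rule is an assumption; the TS-PSO-LSSVM forecasting stage is not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 11))            # 11 candidate driving factors (synthetic)
X[5] += 8.0                              # inject an artificial "bad point"

X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=5).fit(X_std)

# reconstruct each sample from its first five components
X_rec = pca.inverse_transform(pca.transform(X_std))
err = np.linalg.norm(X_std - X_rec, axis=1)

# flag samples whose reconstruction error exceeds mean + 3*std (illustrative rule)
bad = err > err.mean() + 3 * err.std()
print("flagged bad points:", np.where(bad)[0])

# refit on the cleaned data and keep five principal components as forecasting inputs
pca_clean = PCA(n_components=5).fit(X_std[~bad])
factors = pca_clean.transform(X_std[~bad])
```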


2014 ◽  
Vol 926-930 ◽  
pp. 4085-4088
Author(s):  
Chuan Jun Li

This article uses principal component analysis (PCA) to evaluate the level of corporate governance. PCA is used to analyze the correlation among 10 original indicators and to extract a few principal components that retain most of the information in those indicators. A corporate governance index is then formulated by weighting the principal components according to their variance contribution rates, which allows a comprehensive evaluation of corporate governance.
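
A minimal sketch of the index-construction step, assuming standardized indicators: the retained principal component scores are combined with weights proportional to their variance contribution rates. The ten synthetic indicator columns and the 80% retention threshold are illustrative assumptions, not the paper's indicator system.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 10))                 # 50 firms, 10 governance indicators

Z = StandardScaler().fit_transform(X)
pca = PCA(n_components=0.80).fit(Z)           # keep PCs covering ~80% of the variance

scores = pca.transform(Z)
weights = pca.explained_variance_ratio_ / pca.explained_variance_ratio_.sum()

# composite corporate-governance index: weighted sum of the retained PC scores
governance_index = scores @ weights
ranking = np.argsort(-governance_index)       # firms ranked from highest to lowest score
```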


Author(s):  
Y-H. Taguchi ◽  
Mitsuo Iwadate ◽  
Hideaki Umeyama ◽  
Yoshiki Murakami ◽  
Akira Okamoto

Feature extraction (FE) is a difficult task when the number of features is much larger than the number of samples, which is the typical situation when biological (big) data are analyzed. This is especially true because FE that is stable, i.e., independent of the samples considered (stable FE), is often required, yet the stability of FE has rarely been considered seriously. In this chapter, the authors demonstrate that principal component analysis (PCA)-based unsupervised FE functions as stable FE. Three bioinformatics applications of PCA-based unsupervised FE are discussed: detection of aberrant DNA methylation associated with diseases, biomarker identification using circulating microRNA, and proteomic analysis of bacterial culturing processes.
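
A hedged sketch in the spirit of PCA-based unsupervised FE: PCA is applied with the features treated as observations, and features with outlying scores on a chosen component are selected via a chi-squared criterion with Benjamini-Hochberg correction. The component choice, the Gaussian null, and the 0.01 threshold are assumptions for illustration and do not reproduce the chapter's exact procedure.

```python
import numpy as np
from scipy import stats
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
X = rng.normal(size=(20, 5000))            # 20 samples, 5000 features (e.g. genes)

# embed the features: PCA on X.T yields one score vector per feature
feat_scores = PCA(n_components=2).fit_transform(X.T)

pc = 1                                      # component assumed to separate features
z = (feat_scores[:, pc] - feat_scores[:, pc].mean()) / feat_scores[:, pc].std()
p = 1 - stats.chi2.cdf(z**2, df=1)          # outlyingness under a Gaussian null

# Benjamini-Hochberg adjustment, then keep features with adjusted p < 0.01
order = np.argsort(p)
adj = np.minimum.accumulate((p[order] * len(p) / np.arange(1, len(p) + 1))[::-1])[::-1]
selected = order[adj < 0.01]
print(len(selected), "features selected")
```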


Minerals ◽  
2019 ◽  
Vol 9 (9) ◽  
pp. 532
Author(s):  
Georgios Louloudis ◽  
Christos Roumpos ◽  
Konstantinos Theofilogiannakos ◽  
Nikolaos Stathopoulos

Spatial modeling and evaluation are critical steps in planning the exploitation of mineral deposits. In this work, a methodology for investigating the spatial variability of a multi-seam coal deposit is proposed. The study area covers the Klidi (Florina, Greece) multi-seam lignite deposit, which is suitable for surface mining. The analysis is based on the original data of 76 exploratory drill-holes in an area of 10 km2, in conjunction with geological and geomorphological data on the deposit. The analytical methods include drill-hole data analysis and evaluation based on an appropriate algorithm, principal component analysis, and geographic information techniques. The results proved very satisfactory in explaining the maximum variance of the initial data values as well as in identifying the deposit structure and the optimum planning of mine development. The proposed analysis can also help minimize the cost and optimize the efficiency of surface mining operations. Furthermore, the methods could be applied in other areas of the geosciences, indicating the theoretical value as well as the important practical implications of the analysis.
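
A small illustrative sketch of pairing PCA with a GIS-style step: each drill-hole receives a score on the first principal component of its attributes, and the scores are interpolated onto a grid to visualize spatial structure. The coordinates, attributes, and grid resolution are synthetic stand-ins, not the Klidi drill-hole data.

```python
import numpy as np
from scipy.interpolate import griddata
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n_holes = 76
xy = rng.uniform(0, 3000, size=(n_holes, 2))      # drill-hole coordinates (m), synthetic
attrs = rng.normal(size=(n_holes, 6))             # per-hole attributes, synthetic

# score of each drill-hole on the first principal component of its attributes
pc1 = PCA(n_components=1).fit_transform(StandardScaler().fit_transform(attrs))[:, 0]

# interpolate PC1 scores onto a regular grid; cells outside the convex hull stay NaN
gx, gy = np.meshgrid(np.linspace(0, 3000, 60), np.linspace(0, 3000, 60))
pc1_map = griddata(xy, pc1, (gx, gy), method="linear")
print("interpolated grid cells:", int(np.isfinite(pc1_map).sum()))
```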


2020 ◽  
Vol 23 (11) ◽  
pp. 2414-2430
Author(s):  
Khaoula Ghoulem ◽  
Tarek Kormi ◽  
Nizar Bel Hadj Ali

In the general framework of data-driven structural health monitoring, principal component analysis has been applied successfully to the continuous monitoring of complex civil infrastructures. In the case of a linear or polynomial relationship between monitored variables, principal component analysis allows structured residuals to be generated from measurement outputs without an a priori structural model. Principal component analysis has been widely used for system monitoring because of its ability to handle high-dimensional, noisy, and highly correlated data by projecting the data onto a lower-dimensional subspace that contains most of the variance of the original data. For nonlinear systems, however, it can easily be demonstrated that linear principal component analysis is unable to disclose nonlinear relationships between variables. This has naturally motivated various developments of nonlinear principal component analysis to tackle damage diagnosis of complex structural systems, especially those characterized by nonlinear behavior. In this article, a data-driven technique for damage detection in nonlinear structural systems is presented. The proposed method is based on kernel principal component analysis. Two case studies involving nonlinear cable structures are presented to show the effectiveness of the proposed methodology. The validity of the kernel principal component analysis-based monitoring technique is demonstrated in terms of its ability to detect damage. Robustness to environmental effects and disturbances is also studied.
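
A simplified sketch of kernel-PCA-based monitoring along the lines described above: a kernel PCA model is trained on measurements from the healthy structure, and the reconstruction error of new measurements serves as a damage index. The RBF kernel, the number of components, and the 99th-percentile threshold are illustrative assumptions; the cable-structure case studies are not modelled.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
healthy = rng.normal(size=(300, 12))                 # monitored variables, healthy state
test = np.vstack([rng.normal(size=(50, 12)),         # new healthy data
                  rng.normal(size=(50, 12)) + 1.5])  # shifted data mimicking damage

scaler = StandardScaler().fit(healthy)
kpca = KernelPCA(n_components=5, kernel="rbf", gamma=0.05,
                 fit_inverse_transform=True).fit(scaler.transform(healthy))

def damage_index(model, scaler, X):
    """Squared reconstruction error in the input space (structured residual)."""
    Z = scaler.transform(X)
    Z_rec = model.inverse_transform(model.transform(Z))
    return np.sum((Z - Z_rec) ** 2, axis=1)

threshold = np.percentile(damage_index(kpca, scaler, healthy), 99)
alarms = damage_index(kpca, scaler, test) > threshold
print("alarms in healthy half:", alarms[:50].sum(), "| in damaged half:", alarms[50:].sum())
```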


2019 ◽  
Vol 12 (1) ◽  
pp. 23
Author(s):  
Alissar Nasser

In this paper, we study the performance of hospitals in Lebanon. Using the nonparametric data envelopment analysis (DEA) method, we measure the relative efficiency of hospitals in Lebanon. DEA is a linear-programming-based technique that measures the relative efficiency of similar organizations, termed decision-making units (DMUs). In this study, due to the lack of individual data at the hospital level, each DMU refers to a qada in Lebanon, and the data used represent the aggregated inputs and outputs of the hospitals within that qada. In DEA, including more inputs and/or outputs results in more units being identified as efficient, so selecting appropriate inputs and outputs is a major determinant of DEA results. We therefore use principal component analysis (PCA) to reduce the data into a few principal components that are essential for identifying efficient DMUs. The basic input-oriented BCC model was used for the entire analysis. Of the 24 DMUs considered, DEA on the original data identified 17 as efficient. Using 1 PC for inputs and 1 PC for outputs, covering almost 80 percent of the variance, only 3 DMUs were efficient and 21 were inefficient. Using 1 PC for inputs and 2 PCs for outputs, covering 90 percent of the variance for both inputs and outputs, 9 DMUs were efficient and 15 inefficient. Finally, using 2 PCs for inputs and 2 PCs for outputs, covering more than 95 percent of the variance, 10 DMUs were efficient and 14 inefficient. In principal component analysis, retaining between 80 and 90 percent of the variance is generally judged meaningful. It is concluded that principal component analysis plays an important role in reducing the number of input and output variables, helps in identifying the efficient DMUs, and improves the discriminating power of DEA.
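
A compact sketch of the PCA-DEA idea: inputs and outputs are replaced by a few principal components (shifted to be strictly positive, a common adjustment in PCA-DEA studies), and an input-oriented BCC efficiency score is computed for each DMU by linear programming. The data are synthetic; the actual hospital inputs and outputs of the 24 qadas are not reproduced.

```python
import numpy as np
from scipy.optimize import linprog
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def to_positive_pcs(M, n_pc):
    """Project onto n_pc principal components and shift the scores above zero."""
    pcs = PCA(n_components=n_pc).fit_transform(StandardScaler().fit_transform(M))
    return pcs - pcs.min(axis=0) + 1.0

def bcc_input_efficiency(X, Y):
    """Input-oriented BCC scores; X: (n_dmu, n_in), Y: (n_dmu, n_out)."""
    n, theta = X.shape[0], []
    for o in range(n):
        # decision variables: [theta, lambda_1 .. lambda_n]
        c = np.r_[1.0, np.zeros(n)]
        A_ub = np.vstack([np.c_[-X[o], X.T],                     # sum(l*x) <= theta*x_o
                          np.c_[np.zeros(Y.shape[1]), -Y.T]])    # sum(l*y) >= y_o
        b_ub = np.r_[np.zeros(X.shape[1]), -Y[o]]
        A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)             # convexity: sum(lambda) = 1
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=[(None, None)] + [(0, None)] * n, method="highs")
        theta.append(res.fun)
    return np.array(theta)

rng = np.random.default_rng(6)
inputs, outputs = rng.uniform(1, 10, (24, 5)), rng.uniform(1, 10, (24, 4))
eff = bcc_input_efficiency(to_positive_pcs(inputs, 1), to_positive_pcs(outputs, 2))
print("efficient DMUs:", np.where(eff > 0.9999)[0])
```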


2011 ◽  
Vol 15 (1) ◽  
pp. 178
Author(s):  
Altien J Rindengan ◽  
Deiby Tineke Salaki

FACE DATA CLUSTERING USING AGGLOMERATIVE CLUSTERING METHODS WITH PRINCIPAL COMPONENT ANALYSIS. Altien J. Rindengan and Deiby Tineke Salaki, Program Studi Matematika FMIPA, Universitas Sam Ratulangi, Manado 95115. ABSTRACT: In this research, face data are analyzed with principal component analysis, which extracts a few eigenvalues that sufficiently represent the data, and the data are then grouped using the agglomerative clustering method. Using a Matlab program, a face data set consisting of 6 people with 10 images each can be clustered to match the original data. Three eigenvalues, at the 68% interval, are sufficient for the clustering. Keywords: agglomerative clustering, principal component analysis, face data
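
A brief sketch of the same pipeline in Python rather than Matlab: face vectors are projected onto three principal components and grouped with agglomerative clustering. The image array is synthetic (6 "identities" with 10 noisy 32x32 images each), standing in for the actual face data set.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(7)
# 6 synthetic "identities", 10 noisy images each, flattened to vectors
centers = rng.normal(size=(6, 32 * 32))
faces = np.vstack([c + 0.3 * rng.normal(size=(10, 32 * 32)) for c in centers])

proj = PCA(n_components=3).fit_transform(faces)        # 3 retained components
labels = AgglomerativeClustering(n_clusters=6, linkage="ward").fit_predict(proj)
print(labels.reshape(6, 10))          # each row should carry a single cluster label
```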


2020 ◽  
Vol 23 ◽  
pp. 41-44
Author(s):  
Oļegs Užga-Rebrovs ◽  
Gaļina Kuļešova

Any data implicitly contain information of interest to the researcher, and the purpose of data analysis is to extract this information. The original data may contain redundant elements and noise that distort the data to one degree or another. It therefore seems necessary to subject the data to preliminary processing. Reducing the dimensionality of the initial data makes it possible to remove interfering factors and present the data in a form suitable for further analysis. The paper considers an approach to reducing the dimensionality of the original data based on principal component analysis.


2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Pei Heng Li ◽  
Taeho Lee ◽  
Hee Yong Youn

Various dimensionality reduction (DR) schemes have been developed for projecting high-dimensional data into low-dimensional representations. Existing schemes usually preserve either only the global structure or only the local structure of the original data, but not both. To resolve this issue, a scheme called sparse locality for principal component analysis (SLPCA) is proposed. In order to balance complexity and efficiency effectively, a robust L2,p-norm-based principal component analysis (R2P-PCA) is introduced for global DR, while sparse representation-based locality preserving projection (SR-LPP) is used for local DR. Sparse representation is also employed to construct the weight matrix of the samples. Being parameter-free, this allows the construction of an intrinsic graph that is more robust against noise. In addition, the projection matrix and the sparse similarity matrix can be learned simultaneously. Experimental results demonstrate that the proposed scheme consistently outperforms existing schemes in terms of clustering accuracy and data reconstruction error.
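
A simplified sketch of the local branch of such a scheme: a locality preserving projection computed from a kNN graph with heat-kernel weights. The paper's SR-LPP builds its graph by sparse representation and is paired with an L2,p-norm robust PCA; both are replaced here by simpler stand-ins, so this only illustrates the locality-preserving objective.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

def lpp(X, n_components=2, n_neighbors=5, t=1.0, reg=1e-6):
    # symmetric kNN graph with heat-kernel weights exp(-||xi - xj||^2 / t)
    G = kneighbors_graph(X, n_neighbors, mode="distance").toarray()
    W = np.where(G > 0, np.exp(-G**2 / t), 0.0)
    W = np.maximum(W, W.T)

    D = np.diag(W.sum(axis=1))
    L = D - W                                    # graph Laplacian

    A = X.T @ L @ X
    B = X.T @ D @ X + reg * np.eye(X.shape[1])   # regularised for numerical stability
    vals, vecs = eigh(A, B)                      # generalised eigenvalues, ascending
    return vecs[:, :n_components]                # directions minimising local distortion

rng = np.random.default_rng(8)
X = rng.normal(size=(100, 20))
P = lpp(X)                    # projection matrix (20 x 2)
X_low = X @ P                 # low-dimensional embedding preserving locality
```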


2020 ◽  
Vol 143 ◽  
pp. 02010
Author(s):  
Shanshan Ma ◽  
Shicong Geng ◽  
Guofeng Wang ◽  
Shuo Yang ◽  
Yu Gao

Based on an analysis of the indexes of the different tributaries of the Bai Ta Pu river and the contribution rate of each principal component, a comprehensive score for each tributary was calculated through principal component analysis. The tributaries were ranked according to their scores, and the evaluation results were analyzed. The tributary with the highest comprehensive index of the Bai Ta Pu river is at Shi Jia Zhai bridge, and the tributary with the lowest comprehensive index is at De Sheng Tun bridge.

