Multimode Operating Performance Visualization and Nonoptimal Cause Identification

Processes ◽  
2020 ◽  
Vol 8 (1) ◽  
pp. 123 ◽  
Author(s):  
Yuhui Ying ◽  
Zhi Li ◽  
Minglei Yang ◽  
Wenli Du

In the traditional performance assessment method, different modes of data are classified mainly by expert knowledge, so human interference is highly probable. The traditional method is also incapable of distinguishing transition data from steady-state data, which reduces the accuracy of the monitoring model. To solve these problems, this paper proposes a method of multimode operating performance visualization and nonoptimal cause identification. First, multimode data identification is realized by the subtractive clustering algorithm (SCA), which reduces human influence and eliminates transition data. Then, multi-space principal component analysis (MsPCA) is used to characterize the independent characteristics of the different datasets, which enhances the robustness of the model with respect to the performance of independent variables. Furthermore, a self-organizing map (SOM) is used to train these characteristics and map them onto a two-dimensional plane, realizing the visualization of process monitoring. For the online assessment, the operating performance of the current process is evaluated according to the projection position of the data on the visual model, and the cause of nonoptimal performance is then identified. Finally, the Tennessee Eastman (TE) process is used to verify the effectiveness of the proposed method.
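
A minimal sketch of the pipeline described in this abstract, assuming synthetic three-mode data and the third-party `minisom` package for the SOM step: subtractive clustering identifies mode centres, a separate PCA model is fitted per mode (standing in for MsPCA), and the pooled scores are projected onto a two-dimensional SOM grid. All radii, thresholds, and grid sizes are illustrative choices, not values from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from minisom import MiniSom

def subtractive_clustering(X, ra=0.5, rb=0.75, eps=0.25):
    """Return indices of cluster centres chosen by subtractive clustering."""
    alpha, beta = 4.0 / ra**2, 4.0 / rb**2
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)    # pairwise squared distances
    potential = np.exp(-alpha * d2).sum(axis=1)             # density potential of each point
    centres, p_first = [], potential.max()
    while True:
        k = int(np.argmax(potential))
        if potential[k] < eps * p_first:                     # stop when potentials are exhausted
            break
        centres.append(k)
        potential -= potential[k] * np.exp(-beta * d2[k])    # suppress potential near the new centre
    return centres

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, size=(200, 5)) for m in (0.0, 2.0, 4.0)])  # three operating modes
Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))                   # scale to the unit hypercube

centres = subtractive_clustering(Xn)
modes = np.argmin(((Xn[:, None, :] - Xn[centres][None, :, :]) ** 2).sum(-1), axis=1)

# One PCA model per identified mode (a stand-in for the multi-space PCA step).
pca_per_mode = {m: PCA(n_components=2).fit(Xn[modes == m]) for m in np.unique(modes)}

# Train a SOM on the pooled scores and read the 2-D grid position of a sample.
scores = np.vstack([pca_per_mode[m].transform(Xn[modes == m]) for m in np.unique(modes)])
som = MiniSom(10, 10, scores.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(scores, 1000)
print("grid cell of first sample:", som.winner(scores[0]))
```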

2020 ◽  
Vol 15 ◽  
Author(s):  
Shuwen Zhang ◽  
Qiang Su ◽  
Qin Chen

Abstract: Major animal diseases pose a great threat to animal husbandry and to human beings. With the deepening of globalization and the abundance of data resources, the prediction and analysis of animal diseases using big data are becoming more and more important. The focus of machine learning is to make computers learn from data and use the learned experience to analyze and predict. This paper first introduces the animal epidemic situation and machine learning, and then briefly reviews the application of machine learning to animal disease analysis and prediction. Machine learning is mainly divided into supervised learning and unsupervised learning. Supervised learning includes support vector machines, naive Bayes, decision trees, random forests, logistic regression, artificial neural networks, deep learning, and AdaBoost. Unsupervised learning includes the expectation-maximization algorithm, principal component analysis, hierarchical clustering, and MaxEnt. Through this discussion, readers can gain a clearer concept of machine learning and understand its application prospects in animal diseases.
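
A minimal sketch of the supervised-learning side discussed above, using a random forest to predict an outbreak label from herd-level features. The feature names, the synthetic data, and the noisy labelling rule are purely illustrative assumptions, not material from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([
    rng.normal(20, 5, n),     # hypothetical feature: ambient temperature
    rng.uniform(0, 1, n),     # hypothetical feature: vaccination coverage
    rng.poisson(3, n),        # hypothetical feature: animal movements per week
])
y = (X[:, 1] < 0.4).astype(int) ^ (rng.random(n) < 0.1)   # noisy synthetic outbreak label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```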


2021 ◽  
Vol 13 (10) ◽  
pp. 5608
Author(s):  
Manjiang Shi ◽  
Qi Cao ◽  
Baisong Ran ◽  
Lanyan Wei

Global disasters due to earthquakes have become more frequent and intense, and post-disaster recovery and reconstruction have consequently become the new normal in the social process. Through post-disaster reconstruction, risks can be effectively reduced, resilience can be improved, and long-term stability can be achieved. However, there is a gap between the impact of post-earthquake reconstruction and the needs of the people in the disaster area. Based on the international consensus of “building back better” (BBB) and a post-disaster needs assessment method, this paper proposes a new conceptual model (N-BBB) to empirically analyze recovery after the Changning Ms 6.0 earthquake in Sichuan Province, China. The reliability of the model was verified through factor analysis. The main observations were as follows. People’s needs focus on short-term life and production recovery during post-earthquake recovery and reconstruction. Because of disparities among families, occupations, and communities, differences are observed in the reconstruction time sequence across communities. Through principal component analysis, we found that the N-BBB model constructed in this study could provide strong policy guidance for post-disaster recovery and reconstruction after the Changning Ms 6.0 earthquake, effectively coordinate the “top-down” and “bottom-up” models, and meet the diversified needs of such recovery and reconstruction.
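
A minimal sketch of the kind of factor and principal component analysis used to check a questionnaire-based model such as N-BBB, assuming a synthetic Likert-scale response matrix and an arbitrary choice of three factors; none of the numbers relate to the Changning survey.

```python
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

rng = np.random.default_rng(2)
responses = rng.integers(1, 6, size=(300, 12)).astype(float)   # 300 respondents, 12 items on a 1-5 scale

pca = PCA().fit(responses)
print("variance explained by first 3 components:", pca.explained_variance_ratio_[:3].sum())

fa = FactorAnalysis(n_components=3, random_state=0).fit(responses)
print("item loadings on factor 1:", np.round(fa.components_[0], 2))
```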


Water ◽  
2021 ◽  
Vol 13 (15) ◽  
pp. 2011
Author(s):  
Pablo Páliz Larrea ◽  
Xavier Zapata Ríos ◽  
Lenin Campozano Parra

Despite the importance of dams for water distribution across various uses, adequate day-to-day forecasting is still in great need of intensive study worldwide. Machine learning models have been widely applied in water resource studies and have shown satisfactory results, including time series forecasting of water levels and dam flows. In this study, neural network (NN) models and adaptive neuro-fuzzy inference system (ANFIS) models were generated to forecast the water level of the Salve Faccha reservoir, which supplies water to Quito, the capital of Ecuador. For the NN models, a non-linear input–output network with a maximum delay of 13 days was used, varying the number of nodes and hidden layers. For ANFIS, with delays of up to four days, the subtractive clustering algorithm was used with its hyperparameter varied from 0.5 to 0.8. The results indicate that precipitation was not an influential input for predicting the reservoir water level. The best neural network and ANFIS models showed high performance, with r > 0.95, a Nash index > 0.95, and an RMSE < 0.1. The best neural network model was t + 4, and the best ANFIS model was t + 6.
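
A minimal sketch of the lag-based neural-network forecast described above: a 13-day window of past water levels predicts the level four days ahead, evaluated with RMSE and the Nash–Sutcliffe index. The series is synthetic, the network size is an assumption, and the ANFIS counterpart is not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(3)
t = np.arange(2000)
level = 0.5 * np.sin(2 * np.pi * t / 365) + 0.05 * rng.standard_normal(t.size)   # synthetic water level

lags, horizon = 13, 4                                             # 13-day input window, t+4 forecast
X = np.array([level[i:i + lags] for i in range(len(level) - lags - horizon + 1)])
y = level[lags + horizon - 1:]                                    # level 4 days after the last input

split = int(0.8 * len(X))
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])

rmse = mean_squared_error(y[split:], pred) ** 0.5
nse = 1 - np.sum((y[split:] - pred) ** 2) / np.sum((y[split:] - y[split:].mean()) ** 2)
print(f"RMSE = {rmse:.3f}, Nash-Sutcliffe = {nse:.3f}")
```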


2021 ◽  
pp. 1-12
Author(s):  
Li Qian

In order to overcome the low classification accuracy of traditional methods, this paper proposes a new classification method for complex attribute big data based on an iterative fuzzy clustering algorithm. First, principal component analysis and kernel local Fisher discriminant analysis are used to reduce the dimensionality of the complex attribute big data. Then, the Bloom filter data structure is introduced to eliminate redundancy in the dimension-reduced data. Next, the de-redundant complex attribute big data are classified in parallel by the iterative fuzzy clustering algorithm, completing the classification. Finally, simulation results show that the accuracy, the normalized mutual information index, and Richter’s index of the proposed method are close to 1, the classification accuracy is high, and the RDV value is low, which indicates that the proposed method has high classification effectiveness and fast convergence speed.
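
A minimal sketch of two of the steps named above, assuming synthetic high-dimensional data: PCA for dimensionality reduction followed by an iterative fuzzy c-means pass written in plain NumPy. The kernel local Fisher discriminant and Bloom-filter steps are omitted, and the cluster count is an assumption.

```python
import numpy as np
from sklearn.decomposition import PCA

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
    """Iterative fuzzy c-means: returns (centres, membership matrix)."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, len(X)))
    u /= u.sum(axis=0)                                     # random initial fuzzy memberships
    for _ in range(n_iter):
        um = u ** m
        centres = um @ X / um.sum(axis=1, keepdims=True)   # membership-weighted cluster centres
        d = np.linalg.norm(X[None, :, :] - centres[:, None, :], axis=2) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))
        u /= u.sum(axis=0)                                 # renormalise memberships per sample
    return centres, u

rng = np.random.default_rng(4)
data = np.vstack([rng.normal(mu, 1.0, size=(150, 20)) for mu in (0, 5, 10)])   # synthetic "big data"

reduced = PCA(n_components=3).fit_transform(data)          # dimensionality reduction
centres, u = fuzzy_c_means(reduced, c=3)
labels = u.argmax(axis=0)                                  # hard labels from fuzzy memberships
print("cluster sizes:", np.bincount(labels))
```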


2021 ◽  
Author(s):  
Seyedeh Samira Moosavi ◽  
Paul Fortier

Abstract Currently, localization in distributed massive MIMO (DM-MIMO) systems based on the fingerprinting (FP) approach has attracted great interest. However, this method suffers from severe multipath and signal degradation, so its accuracy deteriorates in complex propagation environments, which results in variable received signal strength (RSS). Providing robust and accurate localization is therefore the goal of this work. In this paper, we propose an FP-based approach that improves localization accuracy by reducing the noise and the dimensionality of the RSS data. In the proposed approach, the fingerprints rely solely on the RSS from the single-antenna mobile terminal (MT) collected at each of the receive antenna elements of the massive MIMO base station. After creating a radio map, principal component analysis (PCA) is performed to reduce noise and redundancy. PCA reduces the data dimension, which leads to the selection of appropriate antennas and reduces complexity. A clustering algorithm based on K-means and affinity propagation clustering (APC) is employed to divide the whole area into several regions, which improves positioning precision and reduces complexity and latency. Finally, to obtain highly precise localization estimates, all similar data in each cluster are modeled using a well-designed deep neural network (DNN) regression. Simulation results show that the proposed scheme improves positioning accuracy significantly. The approach has high coverage and improves the average root-mean-squared error (RMSE) to a few meters, as expected in 5G and beyond networks. The results also demonstrate the superiority of the proposed method over previous location estimation schemes.
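
A minimal sketch of the fingerprinting pipeline outlined above: PCA on a synthetic RSS radio map, K-means to split the area into regions, and one regression network per region mapping RSS scores to coordinates. The path-loss model, all sizes (antennas, grid points, clusters), and the shallow network are assumptions standing in for the paper's affinity-propagation step and deep architecture.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
n_points, n_antennas = 600, 64
positions = rng.uniform(0, 100, size=(n_points, 2))                   # reference positions (m)
antennas = rng.uniform(0, 100, size=(n_antennas, 2))                  # distributed antenna locations
dist = np.linalg.norm(positions[:, None] - antennas[None, :], axis=2)
rss = -30 - 30 * np.log10(dist) + rng.normal(0, 2, dist.shape)        # simple path-loss RSS + shadowing

scores = PCA(n_components=10).fit_transform(rss)                      # denoise / reduce antenna dimension
regions = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)

models = {}
for r in np.unique(regions):
    mask = regions == r
    models[r] = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                             random_state=0).fit(scores[mask], positions[mask])

# Online phase: project a new RSS sample, pick its region, and regress its position.
err = np.linalg.norm(models[regions[0]].predict(scores[:1]) - positions[:1])
print(f"example positioning error: {err:.2f} m")
```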


Author(s):  
Ke Li ◽  
Yalei Wu ◽  
Shimin Song ◽  
Yi Sun ◽  
Jun Wang ◽  
...  

The measurement of spacecraft electrical characteristics and the associated multi-label classification generally involve processing large amounts of unlabeled test data, high-dimensional feature redundancy, time-consuming computation, and slow identification rates. In this paper, an offline fuzzy c-means (FCM) clustering algorithm and an approximate weighted proximal support vector machine (WPSVM) online recognition approach are proposed to reduce the feature size and improve the classification speed for electrical characteristics in the spacecraft. In addition, principal component analysis (PCA) of the complex signals, based on principal component feature extraction, is used for the feature selection process. A data capture contribution approach using thresholds is furthermore applied to resolve the component selection problem of PCA, which effectively guarantees the validity and consistency of the data. Experimental results indicate that the proposed approach can obtain better fault diagnosis results for the spacecraft electrical characteristics data, improve identification accuracy, and shorten computing time with high efficiency.
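
A minimal sketch of the component-selection idea described above, assuming synthetic data and a 95% cumulative-contribution threshold: only the principal components needed to pass the threshold are kept, and a standard SVM stands in for the weighted proximal SVM used in the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
X = rng.normal(size=(400, 50))
y = (X[:, :3].sum(axis=1) > 0).astype(int)           # synthetic label driven by a few informative features

pca = PCA().fit(X)
k = int(np.searchsorted(np.cumsum(pca.explained_variance_ratio_), 0.95)) + 1
X_reduced = pca.transform(X)[:, :k]                  # keep components up to the contribution threshold

scores = cross_val_score(SVC(kernel="rbf"), X_reduced, y, cv=5)
print(f"kept {k} of 50 components; CV accuracy = {scores.mean():.3f}")
```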


Author(s):  
Galina Merkuryeva ◽  
Vitaly Bolshakov ◽  
Maksims Kornevs

An Integrated Approach to Product Delivery Planning and Scheduling

Product delivery planning and scheduling is a task of high priority in transport logistics. In distribution centres this task is related to deliveries of various types of goods in predefined time windows. In real-life applications the problem has different stochastic performance criteria and conditions. Optimisation of schedules itself is time consuming and requires expert knowledge. In this paper an integrated approach to product delivery planning and scheduling is proposed. It is based on a cluster analysis of store demand data to identify typical dynamic demand patterns and tactical product delivery plans, and on simulation optimisation to find optimal parameters of transportation or vehicle schedules. Here, a cluster analysis of the demand data is performed using the K-means clustering algorithm and mean silhouette values, and an NBTree-based classification model is built. In order to find an optimal grouping of stores into regions, based on their geographical locations and a total demand uniformly distributed over regions, a multiobjective optimisation problem is formulated and solved with the NSGA-II algorithm.
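
A minimal sketch of the demand-clustering step described above: K-means on synthetic weekly store demand patterns, with the mean silhouette value used to choose the number of clusters. The demand matrix is invented, and the NBTree classification and NSGA-II grouping steps are not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(7)
base_patterns = rng.uniform(10, 50, size=(3, 7))                        # three typical weekly demand patterns
demand = np.vstack([p + rng.normal(0, 3, size=(40, 7)) for p in base_patterns])   # 120 stores x 7 days

best_k, best_score = None, -1.0
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(demand)
    score = silhouette_score(demand, labels)                            # mean silhouette value for this k
    if score > best_score:
        best_k, best_score = k, score
print(f"best number of demand clusters: {best_k} (silhouette = {best_score:.2f})")
```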


2011 ◽  
pp. 24-32 ◽  
Author(s):  
Nicoleta Rogovschi ◽  
Mustapha Lebbah ◽  
Younès Bennani

Most traditional clustering algorithms are limited to handling data sets that contain either continuous or categorical variables. However, data sets with mixed types of variables are commonly used in the data mining field. In this paper we introduce a weighted self-organizing map for clustering, analysis and visualization of mixed data (continuous/binary). The weights and prototypes are learned simultaneously, ensuring an optimized data clustering. The higher the weight of a variable, the more the clustering algorithm takes into account the information conveyed by that variable. The learning of these topological maps is combined with a weighting process over the different variables, computing weights that influence the quality of the clustering. We illustrate the power of this method with data sets taken from a public repository: a handwritten digit data set, the Zoo data set and three other mixed data sets. The results show a good quality of topological ordering and homogeneous clustering.
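
A minimal sketch of a self-organizing map whose distance computation is weighted per variable, in the spirit of the approach above; the fixed variable weights, grid size, and toy mixed data are assumptions, since the paper learns the weights jointly with the prototypes rather than fixing them.

```python
import numpy as np

rng = np.random.default_rng(8)
continuous = rng.normal(size=(200, 2))
binary = rng.integers(0, 2, size=(200, 2)).astype(float)
X = np.hstack([continuous, binary])                       # mixed continuous/binary data

grid, dim = (6, 6), X.shape[1]
w = rng.normal(size=grid + (dim,))                        # prototype vectors on a 6x6 map
var_weights = np.array([1.0, 1.0, 0.5, 0.5])              # per-variable importance (fixed here, learned in the paper)

for epoch in range(20):
    lr = 0.5 * (1 - epoch / 20)                           # decaying learning rate
    for x in X:
        d = np.sum(var_weights * (w - x) ** 2, axis=2)    # weighted distance to every prototype
        bi, bj = np.unravel_index(np.argmin(d), grid)     # best-matching unit
        gi, gj = np.meshgrid(np.arange(grid[0]), np.arange(grid[1]), indexing="ij")
        h = np.exp(-((gi - bi) ** 2 + (gj - bj) ** 2) / 2.0)   # Gaussian neighbourhood around the BMU
        w += lr * h[:, :, None] * (x - w)                 # move prototypes towards the sample
print("prototype grid shape:", w.shape)
```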

