A Driving Cycle for a Fuel Cell Logistics Vehicle on a Fixed Route: Case of the Guangdong Province

2021 ◽  
Vol 12 (1) ◽  
pp. 5
Author(s):  
Su Zhou ◽  
Jie Jin ◽  
Yuehua Wei

The purpose of this research is to develop a representative driving cycle for fuel cell logistics vehicles running on the roads of Guangdong Province, supporting subsequent energy management research and control system optimization. First, we collected and preliminarily screened 42 days of driving data from a logistics vehicle through a remote monitoring platform and determined the vehicle characteristic signal vector for analysis. Second, principal component analysis is used to reduce the dimensionality of these characteristic parameters, avoiding linear correlation between them and increasing the comprehensiveness of the subsequent clustering. Next, the dimensionality-reduced data are fed to a clustering machine: the K-means method groups the segmented road sections into highway, urban road, national highway, and others. Finally, several segments are chosen in accordance with the occurrence probability of the four types of road conditions, minimizing the deviation from the original data. By joining the segments and applying a moving-average filtering window, a typical driving cycle for this fuel cell logistics vehicle on a fixed route is constructed. Statistical methods are used to validate the driving cycle. The effectiveness analysis shows that the constructed driving cycle has a high degree of overlap with the original data. This positive result provides a solid foundation for our follow-up research, and the method can also be applied to develop urban driving cycles for other fuel cell logistics vehicles.
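The pipeline described in the abstract (PCA for decorrelation, K-means into four road classes, moving-average smoothing of the stitched profile) can be sketched as follows. This is not the authors' code; the feature names and all data are hypothetical, and scikit-learn stands in for whatever tooling was actually used.

```python
# Sketch of the segment-clustering pipeline: PCA on per-segment features,
# K-means into four road classes, then a moving-average filter to smooth
# the stitched speed profile. All data here are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# One row per road segment: e.g. mean speed, max speed, idle ratio, accel std ...
features = rng.random((200, 6))

# Decorrelate the features and keep the components explaining most variance.
pca = PCA(n_components=3)
reduced = pca.fit_transform(features)

# Cluster segments into four road types (highway, urban, national highway, other).
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(reduced)

# Smooth a stitched speed trace with a moving-average window.
def moving_average(x, window=5):
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

speed = rng.random(500) * 80.0
smoothed = moving_average(speed, window=5)
print(np.bincount(labels), smoothed.shape)
```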

2014 ◽  
Vol 644-650 ◽  
pp. 2211-2215
Author(s):  
Kai Kai Li ◽  
Huan Min Xu

Cutter suction dredgers play a major role in dredging engineering for harbors, fairways, and land reclamation. However, dredger operation involves many parameters, making it difficult to guarantee stable production. To address the large number of parameters in dredging operation, a mathematical dimensionality reduction method based on multivariate principal component analysis is proposed. The method calculates the contribution rate and cumulative contribution rate of each parameter and then selects the principal components that influence production and energy consumption. These components represent the majority of the original data information while remaining uncorrelated with each other. They can be used to guide the regulation and control of the parameters, reduce the number of regulated parameters and the operational complexity, and provide a theoretical basis for intelligent automation of dredging operations.
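The contribution rate and cumulative contribution rate mentioned above are standard PCA quantities: each component's share of the total variance and the running sum used to decide how many components to retain. A minimal NumPy illustration on synthetic data:

```python
# Contribution rate and cumulative contribution rate in PCA, on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((100, 8))            # 100 dredging records, 8 operating parameters
Xc = X - X.mean(axis=0)             # centre each parameter

# Eigen-decompose the covariance matrix of the parameters.
cov = np.cov(Xc, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]

contribution = eigvals / eigvals.sum()        # contribution rate of each PC
cumulative = np.cumsum(contribution)          # cumulative contribution rate

# Retain the leading components covering, say, 85% of the variance.
k = int(np.searchsorted(cumulative, 0.85) + 1)
print(k, contribution.round(3))
```

The retention threshold (85% here) is a common rule of thumb, not a value taken from the paper.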


Processes ◽  
2019 ◽  
Vol 7 (7) ◽  
pp. 434 ◽  
Author(s):  
Andrés Morán-Durán ◽  
Albino Martínez-Sibaja ◽  
José Pastor Rodríguez-Jarquin ◽  
Rubén Posada-Gómez ◽  
Oscar Sandoval González

Fuel cells are promising devices for transforming chemical energy into electricity; their behavior is described by principles of electrochemistry and thermodynamics, which are often difficult to model mathematically. One alternative is to use modeling methods based on artificial intelligence techniques. In this paper, a hybrid scheme is proposed to model and control fuel cell systems using neural networks. Several feature selection algorithms were tested for dimensionality reduction, aiming to eliminate variables that are not significant with respect to the control objective. Principal component analysis (PCA) obtained better results than the other algorithms. Based on these variables, an inverse neural network model was developed to emulate and control the fuel cell output voltage under transient conditions. The results showed that fuel cell performance does not depend only on the supply of the reactants. A single neuro-proportional-integral-derivative (neuro-PID) controller is not able to stabilize the output voltage without the support of an inverse model control that includes the impact of the other variables on fuel cell performance. This practical data-driven approach reliably reduces the cost of the control system by eliminating non-significant measurements.
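One common way to use PCA for the variable screening described above is to rank measured variables by how strongly they load on the leading components, weighted by explained variance, and drop the weak ones. The variable names and data below are hypothetical, not the study's instrumentation:

```python
# PCA-based variable screening: rank measured fuel-cell variables by their
# weighted loading magnitudes and keep the strongest. Synthetic data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
names = ["stack_current", "h2_flow", "air_flow", "stack_temp", "humidity", "ambient_temp"]
X = rng.random((300, len(names)))

pca = PCA(n_components=2).fit(X - X.mean(axis=0))
# Importance score: magnitude of each variable's loadings, weighted by the
# variance each component explains.
score = np.abs(pca.components_.T) @ pca.explained_variance_ratio_
ranking = sorted(zip(names, score), key=lambda p: -p[1])
selected = [n for n, s in ranking[:4]]   # keep the top 4 variables
print(selected)
```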


2009 ◽  
Vol 3 (4) ◽  
pp. 812-819 ◽  
Author(s):  
Edwin Tazelaar ◽  
Jogchum Bruinsma ◽  
Bram Veenhuizen ◽  
Paul van den Bosch

Author(s):  
R. V. Chekhova ◽  
V. M. Pyshniy ◽  
L. A. Pyankova ◽  
V. A. Elokhin

The article presents the results of experimental studies on statistical processing of diffraction spectra of solid drugs for the purpose of their separation and identification. Diffractograms of original and falsified samples of the drug Arifon were used for the study. They were obtained on a desktop Difray 401 diffractometer produced by Scientific Instruments Inc. (Saint Petersburg, Russia). The research was conducted in the Scilab environment, distributed under a free license. The captured diffraction spectra were processed with a smoothing procedure that eliminated the influence of the random component in the original data. Analysis of the moving-average smoothing results showed that a 41-point window is most preferable. The results of statistical processing of the diffractograms by principal component analysis (PCA) are presented in graphical and numerical form, showing good convergence and the efficiency of this method in separating diffraction spectra. These studies make it possible to create a technique for identifying solid drugs by X-ray diffraction.
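The 41-point moving-average smoothing described above is straightforward to reproduce; the study used Scilab, but the same operation in Python (on a simulated diffraction pattern, not the paper's data) looks like this:

```python
# Moving-average smoothing of a diffractogram with the 41-point window the
# study found preferable. The spectrum here is simulated, not measured data.
import numpy as np

def smooth(intensity, window=41):
    """Centred moving average; edge padding keeps the output length unchanged."""
    kernel = np.ones(window) / window
    padded = np.pad(intensity, window // 2, mode="edge")
    return np.convolve(padded, kernel, mode="valid")

# Simulated diffraction pattern: two Bragg peaks plus random noise.
two_theta = np.linspace(5, 60, 1100)
rng = np.random.default_rng(3)
spectrum = (np.exp(-((two_theta - 20) ** 2) / 0.5)
            + 0.6 * np.exp(-((two_theta - 35) ** 2) / 0.5)
            + rng.normal(0, 0.05, two_theta.size))
smoothed = smooth(spectrum)
print(smoothed.shape)
```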


2020 ◽  
Vol 16 ◽  
Author(s):  
Yasemin Taşcı ◽  
Rahime Bedir Fındık ◽  
Meryem Kuru Pekcan ◽  
Ozan Kaplan ◽  
Mustafa Çelebier

Background: Metabolomics is one of the main approaches to understanding cellular processes at the molecular level by analyzing metabolites. In recent years, metabolomics has emerged as a key tool for understanding the molecular basis of disease, finding diagnostic and prognostic biomarkers, and developing new treatment opportunities and drug molecules. Objective: In this study, untargeted metabolite and lipid analyses were performed to identify potential biomarkers in premature ovarian insufficiency (POI) plasma samples. Plasma samples from 43 POI subjects were compared with those from 32 healthy subjects. Methods: Plasma samples were pooled and extracted using a chloroform:methanol:water (3:3:1 v/v/v) mixture. An Agilent 6530 LC/MS Q-TOF instrument equipped with an ESI source was used for the analysis. A C18 column (Agilent Zorbax, 1.8 μm, 50 x 2.1 mm) was used for separation of the metabolites and lipids. XCMS, an R-based freeware program, was used for peak picking, grouping, and comparing the findings. Isotopologue Parameter Optimization (IPO) software was used to optimize the XCMS parameters. The analytical methodology and data mining process were validated according to the literature. Results: 83 metabolite peaks and 213 lipid peaks were found to be semi-quantitatively and statistically different (fold change > 1.5, p < 0.05) between the POI and control plasma samples. Conclusion: The two groups were successfully separated through principal component analysis. Among the peaks, phenylalanine, decanoyl-L-carnitine, 1-palmitoyllysophosphatidylcholine, and PC(O-16:0/2:0) were identified through auto MS/MS, matched with the Human Metabolome Database, and proposed as plasma biomarkers for POI and for monitoring patients during treatment.
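The filtering criterion reported above (fold change > 1.5, p < 0.05) is a standard untargeted-metabolomics screen. A sketch on simulated peak intensities (Welch's t-test here is an assumption; the study ran its comparison in XCMS):

```python
# Peak filtering by fold change and p-value on simulated peak intensities.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_peaks = 300
poi = rng.lognormal(mean=0.0, sigma=0.3, size=(43, n_peaks))
control = rng.lognormal(mean=0.0, sigma=0.3, size=(32, n_peaks))
poi[:, :20] *= 2.0     # inject 20 genuinely different peaks

fold_change = poi.mean(axis=0) / control.mean(axis=0)
p_values = stats.ttest_ind(poi, control, equal_var=False).pvalue

# Keep peaks changed more than 1.5-fold in either direction with p < 0.05.
significant = (np.maximum(fold_change, 1 / fold_change) > 1.5) & (p_values < 0.05)
print(int(significant.sum()))
```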


2021 ◽  
Vol 11 (4) ◽  
pp. 1829
Author(s):  
Davide Grande ◽  
Catherine A. Harris ◽  
Giles Thomas ◽  
Enrico Anderlini

Recurrent Neural Networks (RNNs) are increasingly being used for model identification, forecasting, and control. When identifying physical systems without prior mathematical knowledge of their dynamics, Nonlinear AutoRegressive models with eXogenous inputs (NARX) or Nonlinear AutoRegressive Moving-Average models with eXogenous inputs (NARMAX) are typically used. In the context of data-driven control, machine learning algorithms have been shown to achieve performance comparable to advanced control techniques, but they lack the guarantees of traditional stability theory. This paper illustrates a method to prove a posteriori the stability of a generic neural network, showing its application to a state-of-the-art RNN architecture. The presented method relies on identifying the poles associated with the network designed from the input/output data. Providing a framework that guarantees the stability of any neural network architecture, combined with generalisability and applicability to different fields, can significantly broaden the use of neural networks in dynamic systems modelling and control.
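The principle of an a posteriori pole-based stability check can be illustrated on a toy case: identify a model from input/output data and verify its poles lie inside the unit circle (discrete-time stability). Here a linear ARX model stands in for the RNN; this shows only the idea of the check, not the paper's actual procedure for neural architectures:

```python
# Toy pole-based stability check: fit an ARX model from I/O data, then test
# whether its poles lie inside the unit circle.
import numpy as np

rng = np.random.default_rng(5)
# Simulate a stable second-order system: y[k] = 1.2*y[k-1] - 0.36*y[k-2] + u[k-1]
u = rng.normal(size=500)
y = np.zeros(500)
for k in range(2, 500):
    y[k] = 1.2 * y[k - 1] - 0.36 * y[k - 2] + u[k - 1]

# Least-squares fit of the ARX coefficients from the data.
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
a1, a2, b1 = theta

# Poles are the roots of z^2 - a1*z - a2 = 0; stability needs |z| < 1.
poles = np.roots([1.0, -a1, -a2])
stable = bool(np.all(np.abs(poles) < 1.0))
print(poles.round(3), stable)
```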


2021 ◽  
Vol 11 (3) ◽  
pp. 359
Author(s):  
Katharina Hogrefe ◽  
Georg Goldenberg ◽  
Ralf Glindemann ◽  
Madleen Klonowski ◽  
Wolfram Ziegler

Assessment of semantic processing capacities often relies on verbal tasks which are, however, sensitive to impairments at several language processing levels. Especially for persons with aphasia there is a strong need for a tool that measures semantic processing skills independent of verbal abilities. Furthermore, in order to assess a patient’s potential for using alternative means of communication in cases of severe aphasia, semantic processing should be assessed in different nonverbal conditions. The Nonverbal Semantics Test (NVST) is a tool that captures semantic processing capacities through three tasks—Semantic Sorting, Drawing, and Pantomime. The main aim of the current study was to investigate the relationship between the NVST and measures of standard neurolinguistic assessment. Fifty-one persons with aphasia caused by left hemisphere brain damage were administered the NVST as well as the Aachen Aphasia Test (AAT). A principal component analysis (PCA) was conducted across all AAT and NVST subtests. The analysis resulted in a two-factor model that captured 69% of the variance of the original data, with all linguistic tasks loading high on one factor and the NVST subtests loading high on the other. These findings suggest that nonverbal tasks assessing semantic processing capacities should be administered alongside standard neurolinguistic aphasia tests.
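The two-factor structure reported above can be illustrated with a PCA on simulated subtest scores, where verbal subtests track one latent factor and nonverbal subtests another. Subtest counts, names, and data below are invented for illustration, not the AAT/NVST data:

```python
# PCA on simulated subtest scores with a two-factor latent structure.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(6)
n = 51                                     # participants, as in the study
verbal_factor = rng.normal(size=n)
nonverbal_factor = rng.normal(size=n)
# Six verbal subtests load on one factor, three nonverbal subtests on the other.
verbal = verbal_factor[:, None] + 0.3 * rng.normal(size=(n, 6))
nonverbal = nonverbal_factor[:, None] + 0.3 * rng.normal(size=(n, 3))
scores = np.hstack([verbal, nonverbal])

pca = PCA(n_components=2).fit(scores)
explained = float(pca.explained_variance_ratio_.sum())
print(round(explained, 2))                 # share of variance kept by two factors
```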


2021 ◽  
pp. 1-15
Author(s):  
V. Indu ◽  
Sabu M. Thampi

Social networks have emerged as fertile ground for the spread of rumors and misinformation in recent times. The increased rate of social networking owes to the popularity of social networks among the general public, and user personality has been considered a principal component in predicting individuals' social media usage patterns. Several studies have examined the psychological factors influencing people's social network usage, but only a few works have explored the relationship between users' personality and their propensity to spread rumors. This research investigates the effect of personality on rumor spread in social networks. We propose a psychologically inspired fuzzy-based approach, grounded in the Five-Factor Model of behavioral theory, to analyze the behavior of people who are highly involved in rumor diffusion and to categorize users into susceptible and resistant groups based on their inclination towards rumor sharing. We conducted our experiments on almost 825 individuals who shared rumor tweets on Twitter related to five different events. Our study confirms that the personality traits of individuals play a significant role in rumor dissemination: the experimental results show that users exhibiting a high degree of the agreeableness trait are more engaged in rumor-sharing activities, while users high in the extraversion and openness traits restrain themselves from rumor propagation.


2021 ◽  
pp. 000370282098784
Author(s):  
James Renwick Beattie ◽  
Francis Esmonde-White

Spectroscopy rapidly captures large amounts of data that are not directly interpretable. Principal Component Analysis (PCA) is widely used to simplify complex spectral datasets into comprehensible information by identifying recurring patterns in the data with minimal loss of information. The linear algebra underpinning PCA is not well understood by many of the applied analytical scientists and spectroscopists who use it, and the meaning of features identified through PCA is often unclear. This manuscript traces the journey of the spectra themselves through the operations behind PCA, with each step illustrated by simulated spectra. PCA relies solely on the information within the spectra; consequently, the mathematical model depends on the nature of the data itself. The direct links between model and spectra allow a concrete spectroscopic explanation of PCA, such as the scores representing 'concentrations' or 'weights'. The principal components (loadings) are by definition hidden, repeated, and uncorrelated spectral shapes that linearly combine to generate the observed spectra. They can be visualized as subtraction spectra between extreme differences within the dataset. Each PC is a successive refinement of the estimated spectra, improving the fit between the PC-reconstructed data and the original data. Understanding the data-led development of a PCA model shows how to interpret the application-specific chemical meaning of PCA loadings and how to analyze scores. A critical benefit of PCA is its simplicity and the succinctness of its description of a dataset, making it powerful and flexible.
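The "successive refinement" described above can be demonstrated directly: reconstruct simulated spectra from an increasing number of principal components and watch the residual against the original data shrink. The spectral shapes and mixing below are invented for illustration:

```python
# Successive refinement in PCA: reconstruction error vs. number of components.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
x = np.linspace(0, 1, 400)
# Three hidden spectral shapes mixed with random "concentrations" plus noise.
shapes = np.stack([np.exp(-((x - c) ** 2) / 0.004) for c in (0.3, 0.5, 0.7)])
concentrations = rng.random((60, 3))
spectra = concentrations @ shapes + rng.normal(0, 0.01, (60, 400))

errors = []
for k in (1, 2, 3):
    pca = PCA(n_components=k).fit(spectra)
    recon = pca.inverse_transform(pca.transform(spectra))
    errors.append(float(np.sqrt(((spectra - recon) ** 2).mean())))
print([round(e, 4) for e in errors])   # residual drops as PCs are added
```

With three hidden shapes, the third component brings the residual down to roughly the noise level, matching the manuscript's point that each PC refines the fit between reconstructed and original data.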

