Driving Style Recognition Model Based on NEV High-Frequency Big Data and Joint Distribution Feature Parameters

2021 ◽  
Vol 12 (3) ◽  
pp. 142
Author(s):  
Lina Xia ◽  
Zejun Kang

With the promotion and financial subsidies of new energy vehicles (NEVs), China's NEV industry has developed rapidly in recent years. However, compared with traditional fuel vehicles, the technological maturity of NEVs is still insufficient, and many problems remain to be solved in the R&D and operation stages. Among them, energy consumption and driving range are of particular concern, and both are closely related to the driver's driving style. Accurate identification of driving style can therefore support research on energy consumption. Based on high-frequency NEV big data collected by a vehicle-mounted terminal, we extract a feature parameter set that reflects precise spatiotemporal changes in driving behavior, optimize it with principal component analysis (PCA), classify driving styles automatically with a K-means algorithm, and build a driving style recognition model with a neural network. The results show that the model can automatically classify driving styles from the actual driving data of NEV users, with a recognition accuracy of 96.8%. This research on driving style recognition offers a useful reference for the development and upgrading of NEV products and for improving safety.
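The pipeline this abstract describes (PCA to compress the feature-parameter set, K-means to produce unsupervised style labels, a neural network to recognise them) can be sketched roughly as follows. This is a minimal illustration on synthetic data; the feature dimensions, the two-cluster assumption, and the network size are placeholders, not values from the paper:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Stand-in for the extracted feature-parameter set: each row is one trip,
# columns are hypothetical indicators (mean speed, acceleration variance, ...).
X = np.vstack([
    rng.normal(0.0, 1.0, (200, 8)),   # calmer driving pattern
    rng.normal(3.0, 1.0, (200, 8)),   # more aggressive pattern
])

# Step 1: PCA to optimise (compress) the feature-parameter set.
X_pca = PCA(n_components=3).fit_transform(X)

# Step 2: K-means provides the unsupervised style labels.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_pca)

# Step 3: a neural network learns to recognise the styles.
X_tr, X_te, y_tr, y_te = train_test_split(X_pca, labels, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X_tr, y_tr)
print(f"recognition accuracy: {clf.score(X_te, y_te):.3f}")
```

With real trip features in place of the synthetic matrix, the same three stages reproduce the overall workflow described in the abstract.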

Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3003
Author(s):  
Ting Pan ◽  
Haibo Wang ◽  
Haiqing Si ◽  
Yao Li ◽  
Lei Shang

Fatigue is an important factor affecting modern flight safety. It can easily lead to a decline in pilots' operational ability, misjudgments, and flight illusions, and can even trigger serious flight accidents. In this paper, a wearable wireless physiological device was used to obtain pilots' electrocardiogram (ECG) data in a simulated flight experiment, yielding 1440 effective samples. The Friedman test was adopted to select characteristic indexes that reflect the pilot's fatigue state from the time-domain, frequency-domain, and non-linear characteristics of the effective samples, and the variation patterns of these indexes were analyzed. Principal component analysis (PCA) was then used to extract features from the selected indexes and establish the feature parameter set representing the pilot's fatigue state. For fatigue state identification, the feature parameter set was used as the input of the learning vector quantization (LVQ) algorithm to train the identification model. Results show that the recognition accuracy of the LVQ model reached 81.94%, which is 12.84% and 9.02% higher than that of the traditional back propagation neural network (BPNN) and support vector machine (SVM) models, respectively. The LVQ-based identification model established in this paper is suitable for identifying pilots' fatigue states. This is of great practical significance for reducing flight accidents caused by pilot fatigue, and it provides a theoretical foundation for pilot fatigue risk management and the development of intelligent aircraft autopilot systems.
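LVQ is not part of the common Python ML toolkits, but its core variant (LVQ1) is short enough to sketch directly: prototypes are nudged toward samples of their own class and away from others. The one-prototype-per-class setup, learning-rate schedule, and toy HRV-like features below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def lvq1_train(X, y, n_epochs=30, lr=0.1, seed=0):
    """Minimal LVQ1: one prototype per class, updated sample by sample."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    # Initialise each prototype at its class mean.
    protos = np.array([X[y == c].mean(axis=0) for c in classes])
    for epoch in range(n_epochs):
        alpha = lr * (1 - epoch / n_epochs)   # decaying learning rate
        for i in rng.permutation(len(X)):
            k = np.argmin(np.linalg.norm(protos - X[i], axis=1))
            sign = 1.0 if classes[k] == y[i] else -1.0
            protos[k] += sign * alpha * (X[i] - protos[k])
    return protos, classes

def lvq1_predict(protos, classes, X):
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]

# Toy stand-in for the ECG-derived feature parameter set (two fatigue states).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(2.5, 1, (100, 4))])
y = np.array([0] * 100 + [1] * 100)
protos, classes = lvq1_train(X, y)
acc = (lvq1_predict(protos, classes, X) == y).mean()
print(f"LVQ training-set accuracy: {acc:.3f}")
```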


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Hossein Ahmadvand ◽  
Fouzhan Foroutan ◽  
Mahmood Fathy

Data variety is one of the most important features of Big Data. It results from aggregating data from multiple sources and from the uneven distribution of data, and it causes high variation in the consumption of processing resources such as CPU time. This issue has been overlooked in previous work. To overcome it, the present work uses Dynamic Voltage and Frequency Scaling (DVFS) to reduce the energy consumption of computation, taking two types of deadlines as constraints. Before applying the DVFS technique to the compute nodes, we estimate the processing time and the frequency needed to meet the deadline. In the evaluation phase, we used a set of datasets and applications. The experimental results show that our proposed approach surpasses the other scenarios in processing real datasets: DV-DVFS achieves up to a 15% improvement in energy consumption.
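The key step the abstract mentions, estimating the frequency needed to meet a deadline before applying DVFS, reduces to a simple feasibility check over the available frequency steps. The cycle estimate, deadline, and frequency ladder below are hypothetical values for illustration:

```python
def required_frequency(workload_cycles, deadline_s, available_freqs_hz):
    """Pick the lowest available frequency that still meets the deadline.

    Dynamic power grows roughly with f^3 (P ~ C * V^2 * f with V ~ f), so
    the lowest feasible frequency is the most energy-efficient choice.
    """
    f_min = workload_cycles / deadline_s          # exact frequency needed
    feasible = [f for f in sorted(available_freqs_hz) if f >= f_min]
    if not feasible:
        raise ValueError("no available frequency can meet the deadline")
    return feasible[0]

# A job estimated at 3e9 cycles with a 2 s deadline needs >= 1.5 GHz,
# so the 1.6 GHz step is selected.
freqs = [0.8e9, 1.2e9, 1.6e9, 2.0e9, 2.4e9]
print(required_frequency(3e9, 2.0, freqs))   # -> 1600000000.0
```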


Energies ◽  
2019 ◽  
Vol 12 (1) ◽  
pp. 196 ◽  
Author(s):  
Lihui Zhang ◽  
Riletu Ge ◽  
Jianxue Chai

China's energy consumption issues are closely associated with global climate issues, and the scale of energy consumption, peak energy consumption, and consumption investment are all the focus of national attention. To forecast China's energy consumption accurately, this article first selected GDP, population, industrial structure and energy consumption structure, energy intensity, total imports and exports, fixed asset investment, energy efficiency, urbanization, the level of consumption, and fixed investment in the energy industry as a preliminary set of factors. Secondly, we corrected the traditional principal component analysis (PCA) algorithm from the perspective of eliminating "bad points", judging a "bad point" sample based on signal reconstruction ideas. On this basis, we put forward a robust principal component analysis (RPCA) algorithm and chose the first five principal components as the main factors affecting energy consumption, including GDP, population, industrial structure and energy consumption structure, and urbanization. Then, we applied the tabu search (TS) algorithm to a least squares support vector machine (LSSVM) optimized by particle swarm optimization (PSO) to forecast China's energy consumption. We used data from 1996 to 2010 as the training set and data from 2010 to 2016 as the test set. For comparison, the sample data were also input into the LSSVM and PSO-LSSVM algorithms. Using the coefficient of determination (R2), the root mean square error (RMSE), and the mean relative error (MRE) to compare the training results of the three forecasting models, we found that the proposed TS-PSO-LSSVM model had higher prediction accuracy, better generalization ability, and faster training speed. Finally, the TS-PSO-LSSVM forecasting model was applied to forecast China's energy consumption from 2017 to 2030.
According to these predictions, China's energy consumption increases gradually from 2017 to 2030 and will break through 6000 million tons in 2030. However, the growth rate gradually slows, and China's energy consumption economy will enter a state of diminishing returns around 2026, which suggests that China should put more emphasis on energy investment.
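The LSSVM at the core of the forecasting chain has a convenient property: training reduces to solving a single linear system in the dual variables. A minimal regression sketch is below; the RBF kernel width sigma and regularisation gamma are exactly the hyperparameters that PSO and tabu search would tune in the full TS-PSO-LSSVM pipeline, here simply fixed by hand, and the data are synthetic:

```python
import numpy as np

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Least-squares SVM regression: training is one linear solve."""
    n = len(X)
    K = np.exp(-np.sum((X[:, None] - X[None, :]) ** 2, axis=2) / (2 * sigma**2))
    A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                  [np.ones((n, 1)), K + np.eye(n) / gamma]])
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]          # bias b, dual coefficients alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    K = np.exp(-np.sum((X_new[:, None] - X_train[None, :]) ** 2, axis=2)
               / (2 * sigma**2))
    return K @ alpha + b

# Toy stand-in for the annual indicator series: fit y = sin(x) from 20 points.
X = np.linspace(0, 3, 20).reshape(-1, 1)
y = np.sin(X).ravel()
b, alpha = lssvm_fit(X, y)
pred = lssvm_predict(X, b, alpha, X)
print(f"training RMSE: {np.sqrt(np.mean((pred - y) ** 2)):.4f}")
```

A metaheuristic wrapper (PSO refined by tabu search, as in the paper) would repeatedly call `lssvm_fit` with candidate (gamma, sigma) pairs and keep the pair with the lowest validation error.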


Electronics ◽  
2021 ◽  
Vol 10 (5) ◽  
pp. 554
Author(s):  
Suresh Kallam ◽  
Rizwan Patan ◽  
Tathapudi V. Ramana ◽  
Amir H. Gandomi

Data are presently being produced at increasing speed and in different formats, which complicates the design, processing, and evaluation of the data. MapReduce is a distributed programming model, typically run over a distributed file system, that is used for big data parallel processing; current implementations of MapReduce provide data locality along with robustness. In this study, a linear weighted regression and energy-aware greedy scheduling (LWR-EGS) method were combined to handle big data. The LWR-EGS method first selects tasks for assignment and then selects the best available machine to identify an optimal solution. With this objective, the problem was first modeled as an integer linear weighted regression program to choose tasks for assignment; the best available machines were then selected to find the optimal solution, thereby optimizing resource usage. An energy-efficiency-aware greedy scheduling algorithm was then presented to select a position for each task so as to minimize the total energy consumption of a MapReduce job for big data applications in heterogeneous environments without significant performance loss. To evaluate its performance, the LWR-EGS method was compared with two related MapReduce approaches. The experimental results showed that the LWR-EGS method effectively reduced total energy consumption without producing large scheduling overheads, and it also reduced execution time compared to state-of-the-art methods: energy consumption, average processing time, and scheduling overhead fell by 16%, 20%, and 22%, respectively.
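The greedy placement idea (for each task, pick the cheapest machine that can still meet the deadline) can be sketched as follows. The speed/power model and the numbers are illustrative assumptions, not the LWR-EGS cost model from the paper:

```python
def greedy_energy_schedule(tasks, machines, deadline):
    """Greedy placement: each task goes to the lowest-energy machine that
    can still finish its queue before the deadline.

    tasks:    work sizes (arbitrary units)
    machines: dicts with 'speed' (units/s), 'power' (J per unit of work),
              and a running 'load'
    """
    plan = []
    # Largest tasks first, so the expensive decisions are made early.
    for t_idx in sorted(range(len(tasks)), key=lambda i: -tasks[i]):
        feasible = [j for j, m in enumerate(machines)
                    if (m["load"] + tasks[t_idx]) / m["speed"] <= deadline]
        if not feasible:
            raise RuntimeError("deadline cannot be met with these machines")
        j = min(feasible, key=lambda j: machines[j]["power"] * tasks[t_idx])
        machines[j]["load"] += tasks[t_idx]
        plan.append((t_idx, j))
    return plan

machines = [{"speed": 10.0, "power": 1.0, "load": 0.0},   # efficient, slower
            {"speed": 20.0, "power": 3.0, "load": 0.0}]   # fast, power-hungry
plan = greedy_energy_schedule([30, 30, 30, 30], machines, deadline=9.0)
print(plan)   # -> [(0, 0), (1, 0), (2, 0), (3, 1)]
```

The efficient machine fills up until its queue would miss the deadline, after which the scheduler spills the remaining task onto the faster but costlier node.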


2018 ◽  
Vol 768 ◽  
pp. 293-305 ◽  
Author(s):  
Chun Zhi Zhao ◽  
Yi Liu ◽  
Shi Wei Ren ◽  
Jiang Quan

Along with the rapid development of the commercial concrete industry and the continuous growth of concrete demand, commercial concrete production has brought large energy and mineral resource consumption; cement calcination and direct/indirect energy consumption within the boundary of the ready-mixed concrete system have become the main sources of concrete greenhouse gas emissions. This paper addresses key problems in concrete carbon emission calculation, such as boundary definition, data collection, the calculation model, data acceptance/rejection, and the data calculation method; establishes a nationally uniform concrete carbon emission calculation method and emission factors within a common calculation boundary; and provides a theoretical and computational basis for determining reference values and grades of concrete carbon emissions. The unit-product carbon emissions of other products may also be calculated by reference to this paper, so that inherent carbon emission data for buildings can be accumulated, providing quantified data support for measures to reduce carbon emission intensity.
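The unit-product calculation the paper standardises ultimately boils down to multiplying activity data by emission factors and summing within the system boundary. The activity quantities and factors below are placeholder numbers for illustration, not values from the paper:

```python
# Hypothetical activity data for 1 m^3 of ready-mixed concrete and
# illustrative emission factors (kg CO2e per unit of activity).
activity = {"cement_kg": 300, "diesel_l": 1.2, "electricity_kwh": 4.0}
factors  = {"cement_kg": 0.85, "diesel_l": 2.7, "electricity_kwh": 0.6}

# Unit-product emission = sum over all in-boundary activities of
# (activity amount) x (emission factor).
unit_emission = sum(activity[k] * factors[k] for k in activity)
print(f"carbon emission per m^3: {unit_emission:.1f} kg CO2e")
```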


Author(s):  
Zhang Xiao-Wen ◽  
Zeng Min

The fluctuation of the stock market has always been a matter of great concern to investors. People hope to judge the trend of the stock market from the trend of the K-line, so as to profit from price differences through trading. Empirical research on big data stock volatility prediction algorithms, with the aim of building a model to predict stock market trends, is therefore a topic of academic interest. After decades of development, China's stock market has gradually matured through continuous exploration. However, compared with the stock markets of developed countries, imperfections remain; for example, the market value of China's stock market does not grow year-on-year in step with economic growth and the development of the real economy. By studying historical data from 2002 to 2017, we use a Multivariate Mixed Criterion Fuzzy Model (MMCFM) to predict price changes in the stock market, and through error statistical analysis we find that China's market (SSE) is more unstable than the US stock market. The Multivariate Mixing Criterion (MMC) can therefore be used as a reference indicator to measure market maturity intuitively. In this paper, we establish the multivariate mixed criterion fuzzy model and use big data to predict stock volatility; the algorithm verifies the reliability and accuracy of the model, which has good reference value for investors.
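The MMCFM itself is not specified in the abstract, but the instability comparison it reports can be illustrated with the standard rolling-volatility statistic. The synthetic price paths below simply encode the claimed difference (a noisier emerging-market series versus a calmer mature one) and are not market data:

```python
import numpy as np

def annualised_volatility(prices, window=30):
    """Rolling standard deviation of log returns, annualised (252 days)."""
    r = np.diff(np.log(prices))
    return np.array([r[i - window:i].std() * np.sqrt(252)
                     for i in range(window, len(r) + 1)])

# Synthetic geometric random walks with different daily return noise.
rng = np.random.default_rng(2)
mature   = 100 * np.exp(np.cumsum(rng.normal(0.0003, 0.010, 500)))
emerging = 100 * np.exp(np.cumsum(rng.normal(0.0003, 0.022, 500)))

# The noisier series shows higher average rolling volatility.
print(annualised_volatility(mature).mean()
      < annualised_volatility(emerging).mean())
```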


2021 ◽  
pp. 1-12
Author(s):  
Li Qian

To overcome the low classification accuracy of traditional methods, this paper proposes a new classification method for complex-attribute big data based on an iterative fuzzy clustering algorithm. Firstly, principal component analysis and kernel local Fisher discriminant analysis are used to reduce the dimensionality of the complex-attribute big data. Then, a Bloom filter data structure is introduced to eliminate redundancy from the dimensionality-reduced data. Next, the de-duplicated complex-attribute big data are classified in parallel by the iterative fuzzy clustering algorithm, completing the classification. Finally, simulation results show that the accuracy, the normalized mutual information, and the Rand index of the proposed method are close to 1 and the RDV value is low, indicating that the proposed method achieves high classification effectiveness and fast convergence.
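The iterative fuzzy clustering step can be illustrated with plain fuzzy c-means, which alternates membership and centre updates until convergence. The cluster count, fuzzifier m, and synthetic data below are assumptions for illustration, not the paper's configuration:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means: alternate membership and centre updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1
    for _ in range(n_iter):
        W = U ** m
        centres = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None] - centres[None, :], axis=2) + 1e-12
        # u_ik = d_ik^(-2/(m-1)) / sum_j d_ij^(-2/(m-1))
        U = 1.0 / (d ** (2 / (m - 1))
                   * np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
    return U, centres

# Two well-separated synthetic groups in 2-D.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.5, (100, 2)), rng.normal(4, 0.5, (100, 2))])
U, centres = fuzzy_c_means(X)
labels = U.argmax(axis=1)          # defuzzify to hard class labels
print(centres.round(1))
```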


Author(s):  
Kevin O’Shea

The use of finite element analysis (FEA) in high frequency (20–40 kHz), high power ultrasonics to date has been limited. Of paramount importance to the performance of ultrasonic tooling (horns) is the accurate identification of pertinent modeshapes and frequencies. Ideally, the ultrasonic horn will vibrate in a purely axial mode with a uniform amplitude of vibration. However, spurious resonances can couple with this fundamental resonance and alter the axial vibration. This effect becomes more pronounced for ultrasonic tools with larger cross-sections. The current study examines a 4.5″ × 6″ cross-section titanium horn which is designed to resonate axially at 20 kHz. Modeshapes and frequencies from 17–23 kHz are examined experimentally and using finite element analysis. The effect of design variables (slot length, slot width, and number of slots) on modeshapes and frequency spacing is shown. An optimum configuration based on the finite element results is prescribed. The computed results are compared with actual prototype data. Excellent correlation between analytical and experimental data is found.
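The starting point for sizing a 20 kHz axial horn is the one-dimensional half-wave estimate f = c/(2L), with bar sound speed c = sqrt(E/rho); the abstract's point is precisely that this 1-D picture breaks down for large cross-sections, where spurious modes couple in and full FEA is needed. A quick calculation with typical (assumed) titanium-alloy properties:

```python
import math

# Assumed Ti-6Al-4V properties (approximate handbook values).
E = 114e9        # Young's modulus, Pa
rho = 4430.0     # density, kg/m^3

# Axial half-wave resonance of a slender bar: f = c / (2 L).
c = math.sqrt(E / rho)     # bar sound speed, ~5 km/s
f = 20e3                   # target resonant frequency, Hz
L = c / (2 * f)            # bar length tuned to 20 kHz
print(f"bar speed ~{c:.0f} m/s, half-wave length ~{L * 1000:.1f} mm")
```

For the 4.5″ × 6″ cross-section studied here, lateral dimensions are no longer small compared with this length, which is why slotting and FEA-based mode separation become necessary.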

