Application of mathematical probabilistic statistical model of base – FFCA financial data processing

2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Zhengqing Li ◽  
Jiliang Mu ◽  
Mohammed Basheri ◽  
Hafnida Hasan

Abstract In order to improve the detection and filtering ability for financial data, a data-filtering method based on a mathematical probability statistical model is proposed. A descriptive statistical analysis model for big data filtering is built; probability density characteristics are used in the statistical design of the filter; and fuzzy mathematical reasoning is combined with regression analysis on the probability density of the financial data distribution. A threshold test and threshold judgment then realize the data filtering. The test results show that the reliability and convergence of the mathematical model for big data filtering are optimal.
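As a minimal sketch of the density-threshold idea (our simplification, assuming a roughly normal series; this is not the authors' FFCA model, and the threshold value is illustrative), points whose estimated probability density falls below a cutoff can be treated as noise and discarded:

```python
import math
import random

def gaussian_pdf(x, mu, sigma):
    """Normal probability density at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def filter_by_density(values, density_threshold):
    """Keep points whose estimated density exceeds the threshold.

    Assumes an approximately normal financial series; low-density
    points are treated as noise and filtered out.
    """
    n = len(values)
    mu = sum(values) / n
    sigma = math.sqrt(sum((v - mu) ** 2 for v in values) / n)
    return [v for v in values if gaussian_pdf(v, mu, sigma) >= density_threshold]

random.seed(0)
# Synthetic price series around 100 with two gross outliers appended.
series = [random.gauss(100.0, 5.0) for _ in range(500)] + [160.0, 40.0]
clean = filter_by_density(series, density_threshold=1e-4)
```

The threshold plays the role of the abstract's "threshold judgment": raising it filters more aggressively, at the cost of discarding legitimate tail observations.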

2020 ◽  
Vol 165 ◽  
pp. 06009
Author(s):  
Jie Gao

In order to meet external regulatory requirements and challenges, and to improve the quality of internal economic activity analysis, this study establishes a linkage analysis system running from corporate strategy to strategic objectives to financial indicators to business indicators by building three independent yet interrelated analysis models: a model of the factors influencing changes in operating efficiency indices, a traceability analysis model for electricity sales and electricity prices, and an investment performance traceability analysis model. Using the actual data of one business unit as an example and with the help of big data analysis, the study fully taps the value of the company's big data and accurately locates weak links and risk points in management. This makes the economic activity analysis system more comprehensive, real-time, dynamic, and intelligent, and thus improves the efficiency of business decision-making. The practicality of economic activity analysis based on “operation, value and performance” is confirmed.


2021 ◽  
Vol 2136 (1) ◽  
pp. 012057
Author(s):  
Han Zhou

Abstract With the widespread adoption of network technical services and database construction systems, more and more data are used by enterprises and individuals, and existing technology struggles to meet the analysis requirements of the big data era. New technologies and methods must therefore be continually explored so that big data can be used effectively. On the basis of a review of current big data technology and the operating status of its systems, this paper designs algorithms around a big data classification model and verifies the effectiveness of the analysis model's algorithms in practice.


Information ◽  
2019 ◽  
Vol 10 (7) ◽  
pp. 222 ◽  
Author(s):  
Sungchul Lee ◽  
Ju-Yeon Jo ◽  
Yoohwan Kim

Background: Hadoop has become the base framework for big data systems via the simple concept that moving computation is cheaper than moving data. Hadoop increases data locality in the Hadoop Distributed File System (HDFS) to improve system performance: network traffic among nodes is reduced by increasing the number of data-local tasks on each machine. Previous research increased data locality in one of the MapReduce stages to improve Hadoop performance, but there has been no mathematical performance model for data locality in Hadoop. Methods: This study built a Hadoop performance analysis model with data locality for analyzing the entire MapReduce process. The paper explains the data locality concept in the map stage and the shuffle stage, and shows how to apply the performance analysis model to improve the Hadoop system by creating deep data locality. Results: This research validated deep data locality for increasing Hadoop performance via three tests: a simulation-based test, a cloud test, and a physical test. According to the tests, the authors improved the Hadoop system by over 34% by using deep data locality. Conclusions: Deep data locality improved Hadoop performance by reducing data movement in HDFS.
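To see why data locality reduces network traffic, consider a toy model (our assumption for illustration, not the paper's actual performance model): if each HDFS block's replicas are placed on distinct random nodes and a map task is scheduled on a random node, the task is data-local exactly when that node holds a replica:

```python
def expected_locality(num_nodes, replication):
    """Expected fraction of data-local map tasks under random replica
    placement and random scheduling: replication / num_nodes, capped at 1.
    A toy model, not the paper's analysis model."""
    return min(1.0, replication / num_nodes)

def map_input_traffic_mb(block_size_mb, num_blocks, locality):
    """Network bytes (MB) moved to feed the map stage: only non-local
    block reads cross the network."""
    return block_size_mb * num_blocks * (1.0 - locality)
```

Under this model a 10-node cluster with the default replication factor of 3 sees only ~30% data-local tasks, which is why locality-aware scheduling (and the paper's deep data locality) matters.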


2019 ◽  
Vol 11 (13) ◽  
pp. 3499 ◽  
Author(s):  
Se-Hoon Jung ◽  
Jun-Ho Huh

This study proposes a big data analysis and prediction model for transmission line tower outliers that assesses, based on deep reinforcement learning, when something is wrong with transmission line tower big data. The model chooses the cluster count K automatically from unlabeled sensor big data, and measures the distance between data points inside a cluster with the Q-value representing network output, using a clustering algorithm modified from the conventional Deep Q-Network to handle transmission line tower big data containing outliers. Specifically, this study performed principal component analysis to categorize transmission line tower data and proposed an automatic initial center point selection based on the standard normal distribution. It also proposed the A-Deep Q-Learning algorithm, modified from deep Q-learning, to explore policies based on the experience of clustered data learning; it can perform transmission line tower outlier data learning based on the distance of data within a cluster. The performance evaluation results show that the proposed model recorded an approximately 2.29% to 4.19% higher prediction rate and an approximately 0.8% to 4.3% higher accuracy rate compared to the previous transmission line tower big data analysis model.
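The intra-cluster distance idea can be sketched in miniature (our simplification: plain Euclidean distance to centroids, not the paper's Q-value-based measure, and the centroids and threshold here are made up for illustration):

```python
import math

def nearest_centroid_distance(point, centroids):
    """Euclidean distance from a point to its nearest cluster center."""
    return min(math.dist(point, c) for c in centroids)

def flag_outliers(points, centroids, threshold):
    """Label a point an outlier when its distance to the nearest
    centroid exceeds the threshold -- a stand-in for the paper's
    Q-value-based intra-cluster distance."""
    return [nearest_centroid_distance(p, centroids) > threshold for p in points]

# Two hypothetical sensor clusters; the middle point belongs to neither.
centroids = [(0.0, 0.0), (10.0, 10.0)]
points = [(0.5, -0.2), (9.7, 10.3), (5.0, 5.0)]
flags = flag_outliers(points, centroids, threshold=2.0)
```

The reinforcement learning component in the paper effectively learns when such a distance signals a genuine fault rather than ordinary sensor variation.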


2018 ◽  
Vol 7 (3.33) ◽  
pp. 134
Author(s):  
Inhwan JUNG ◽  
He SUN ◽  
Jangmook KANG ◽  
Choong Hyong Lee ◽  
Sangwon LEE

The rapidly changing environment of the shipbuilding industry has put Korea's shipbuilding industry in crisis. The purpose of this study was to develop a business model for maintaining and operating Big Data-based MRO (Maintenance, Repair, and Operation) consumables, which are expected to be the new growth engine for the domestic shipbuilding industry. Although Korean shipbuilders have world-class shipbuilding technologies, the market for ship maintenance and repair is still in its infancy. For Korean shipbuilders, the MRO business can be a growth engine that sustains the industry for the next 30 years, but to achieve this, everything that happens in the entire process, from ship design to maintenance and repair, must be captured as data. Therefore, by systematically establishing Big Data related to components and developing MRO business models based on data analysis capabilities using Artificial Intelligence system concepts, new growth engines can be developed for related industries in the ship industry.


Author(s):  
Minglei Song ◽  
Rongrong Li ◽  
Binghua Wu

The occurrence of traffic accidents follows regularities in its probability distribution, and using big data mining to predict traffic accidents makes it possible to take measures that prevent or reduce accidents in advance. Recent prediction methods, however, suffer from problems such as low calculation accuracy. Therefore, a traffic accident prediction model based on joint probability density feature extraction from big data is proposed in this paper. First, a joint probability distribution function for traffic accident big data is established. Second, a distributed database model of traffic accident big data is built with statistical analysis methods in order to mine the association rule features that reflect accident regularities, and the joint probability density features of the accident probability distribution are then extracted. Based on the extracted features, adaptive functional and directional prediction is carried out, and the regularity of traffic accidents is predicted from the results of associative directional clustering, thereby optimizing the design of the big data-based traffic accident prediction model. Simulation results show that, in predicting traffic accidents, the model in this paper has the advantages of relatively high accuracy, relatively good confidence, and stable prediction results.
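As a hedged sketch of joint density estimation (a simple histogram stand-in, not the paper's method; the two accident attributes and bin settings are hypothetical), the joint probability mass of two accident variables can be estimated from records as:

```python
from collections import Counter

def joint_density(pairs, bins, lo, hi):
    """Estimate a joint probability mass over a bins x bins grid for two
    accident attributes (e.g. time-of-day score vs. road segment score),
    both assumed to lie in [lo, hi). Histogram stand-in for the paper's
    joint probability density feature."""
    width = (hi - lo) / bins
    counts = Counter()
    for x, y in pairs:
        i = min(int((x - lo) / width), bins - 1)  # clamp the hi edge
        j = min(int((y - lo) / width), bins - 1)
        counts[(i, j)] += 1
    total = len(pairs)
    return {cell: n / total for cell, n in counts.items()}

# Four hypothetical accident records clustered in two grid cells.
pairs = [(1.0, 1.0), (1.2, 0.8), (8.0, 9.0), (8.5, 9.5)]
density = joint_density(pairs, bins=2, lo=0.0, hi=10.0)
```

High-mass cells of such a grid are the kind of joint-density feature that downstream clustering and prediction could act on.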


1993 ◽  
Vol 03 (03) ◽  
pp. 745-755 ◽  
Author(s):  
TED JADITZ ◽  
CHERA L. SAYERS

This paper examines recent developments in nonlinear science in economics. Several claims of findings of chaos in economic data are reviewed; we discuss how each claim has been revised in light of further analysis and point out several traps awaiting empirical researchers working with economic data. These traps suggest methodological refinements useful for researchers analyzing very small data sets, including diagnostic tests to detect ill-conditioned data, filtering of the data to exclude non-chaotic alternatives, and nonparametric procedures to check the precision of parameter estimates. Most specialists in the field would say there is no conclusive evidence of chaos in economic or financial data.
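Empirical chaos tests of this era typically rest on the correlation integral of delay-embedded series. A minimal illustrative sketch (not the authors' code; embedding dimension, norm, and radius are choices the practitioner must make) computes the correlation sum, whose scaling with the radius underpins correlation-dimension estimates:

```python
def correlation_sum(series, m, radius):
    """Correlation sum C(r): the fraction of pairs of m-dimensional
    delay-embedded vectors closer than `radius` in the Chebyshev norm.
    How C(r) scales with r is the basis of correlation-dimension
    estimates used in tests for low-dimensional chaos."""
    vectors = [series[i:i + m] for i in range(len(series) - m + 1)]
    n = len(vectors)
    close = sum(
        1
        for i in range(n)
        for j in range(i + 1, n)
        if max(abs(a - b) for a, b in zip(vectors[i], vectors[j])) < radius
    )
    return 2.0 * close / (n * (n - 1))
```

With short, noisy economic series, C(r) is estimated from very few pairs at small r, which is precisely the small-sample trap the paper warns about.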

