GENERAL ASSESSMENT OF THE PROSPECTS FOR THE APPLICATION OF THE BIG DATA PARADIGM IN SOCIO-ECONOMIC SYSTEMS

Author(s):  
V. A. Konovalov

The paper assesses the prospects for applying the big data paradigm in socio-economic systems by analyzing the factors that distinguish it from the well-known scientific ideas of data synthesis and decomposition. The idea of extracting knowledge directly from big data is analyzed. The article compares approaches to extracting knowledge from big data: algebraic approaches and the multidimensional data analysis used in OLAP (OnLine Analytical Processing) systems. An intermediate conclusion is drawn that it is advisable to divide systems for working with big data into two main classes: automatic and non-automatic. To assess the result of extracting knowledge from big data, it is proposed to use well-known scientific criteria: reliability and efficiency. Two components of reliability are considered: methodological and instrumental. The main goals of knowledge extraction in socio-economic systems are highlighted: forecasting and support for management decision-making. The factors that distinguish big data (volume, variety, and velocity) are analyzed as they apply to the study of socio-economic systems. The expediency of introducing into big data processing systems a universe that describes the variety of big data and its source protocols is analyzed. The impact of the properties of samples drawn from big data (incompleteness, heterogeneity, and non-representativeness) on the choice of mathematical methods for processing big data is analyzed. The conclusion is drawn that a systemic, comprehensive, and cautious approach is needed when developing fundamental socio-economic decisions based on the big data paradigm in the study of individual socio-economic subsystems.
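
As a minimal illustration of how a non-representative sample can distort conclusions (one of the sample-population properties the paper names), consider the following Python sketch; the population figures and strata are invented for the example, not taken from the paper:

```python
import random
import statistics

random.seed(42)

# Hypothetical population: incomes in a socio-economic system with a
# small wealthy stratum that skews the distribution.
population = [30_000] * 900 + [300_000] * 100

# A sample drawn only from the easily observed stratum (e.g., a single
# data source in the big data universe) is non-representative.
biased_sample = random.sample(population[:900], 100)
# A simple random sample over the whole population is representative.
random_sample = random.sample(population, 100)

print(f"population mean: {statistics.mean(population):,.0f}")     # 57,000
print(f"random sample:   {statistics.mean(random_sample):,.0f}")  # near 57,000
print(f"biased sample:   {statistics.mean(biased_sample):,.0f}")  # 30,000, misleading
```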

2020, Vol 3 (1), pp. 17-35
Author(s):  
Brian J. Galli

In today's fiercely competitive environment, most companies face the pressure of shorter product life cycles. If companies want to maintain a competitive advantage in the market, they must keep innovating and developing new products; otherwise, they will face difficulties in developing and expanding markets and may go out of business. New product development is the key content of enterprise research and development and one of the strategic cores of enterprise survival and growth. The success of new product development plays a decisive role both in the development of the company and in maintaining a competitive advantage in the industry. Since the beginning of the 21st century, with the continuous innovation and development of Internet technology, the era of big data has arrived. In this era, enterprises' decision-making for new product development no longer relies solely on the experience of decision-makers; it is based on the results of big data analysis, yielding more accurate and effective decisions. This thesis presents a case analysis of Company A, describing how the company's new product development decisions in actual operation moved from conventional decision-making to decision-making innovation based on the results of big data analysis; the choice of decisions is described in detail. Through this case, the impact of big data on the decision-making process for new product development is explored, providing a new theoretical approach to new product development decision-making in the era of big data.


Author(s):  
Edgard Benítez-Guerrero ◽  
Ericka-Janet Rechy-Ramírez

A Data Warehouse (DW) is a collection of historical data, built by gathering and integrating data from several sources, which supports decision-making processes (Inmon, 1992). On-Line Analytical Processing (OLAP) applications provide users with a multidimensional view of the DW and the tools to manipulate it (Codd, 1993). In this view, a DW is seen as a set of dimensions and cubes (Torlone, 2003). A dimension represents a business perspective under which data analysis is performed and is organized in a hierarchy of levels that correspond to different ways of grouping its elements (e.g., the Time dimension is organized as a hierarchy involving days at the lower level and months and years at higher levels). A cube represents factual data on which the analysis is focused and associates measures (e.g., in a store chain, a measure is the quantity of products sold) with coordinates defined over a set of dimension levels (e.g., product, store, and day of sale). Interrogation is then aimed at aggregating measures at various levels. DWs are often implemented using multidimensional or relational DBMSs. Multidimensional systems directly support the multidimensional data model, while a relational implementation typically employs star schemas (or variations thereof), where a fact table containing the measures references a set of dimension tables.
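
By way of illustration, the following Python sketch (using pandas; the fact table, column names, and figures are hypothetical, not drawn from the cited works) rolls a cube's measure up the Time hierarchy from the day level to the month level, which is the kind of aggregation the interrogation step performs:

```python
import pandas as pd

# Hypothetical fact table: one row per (product, store, day) coordinate,
# with quantity sold as the measure.
fact = pd.DataFrame({
    "product": ["soap", "soap", "bread", "bread"],
    "store":   ["S1",   "S2",   "S1",    "S2"],
    "day":     pd.to_datetime(["2021-01-03", "2021-01-15",
                               "2021-02-07", "2021-02-07"]),
    "qty":     [10, 4, 25, 30],
})

# Roll-up: aggregate the measure from the day level to the month level
# of the Time dimension, keeping the product dimension.
by_month = (
    fact.assign(month=fact["day"].dt.to_period("M"))
        .groupby(["product", "month"])["qty"]
        .sum()
)
print(by_month)
```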


Nowadays, digital technologies and information systems (e.g., cloud computing and the Internet of Things) generate vast data, in terabytes, from which end users extract knowledge to make better decisions. However, analyzing these massive data for decision-making requires a large effort from researchers at multiple levels. To find a better development path, researchers have concentrated on Big Data Analysis (BDA), but traditional databases, data techniques, and platforms suffer from limited storage, imbalanced data, poor scalability, insufficient accuracy, and slow responsiveness, which leads to very low efficiency in the Big Data (BD) context. Therefore, the main objective of this research is to present a generalized view of a complete BD system, covering its various stages and the major components of every stage needed to process BD. In particular, the data management stage is described in terms of NoSQL databases and different Parallel Distributed File Systems (PDFS); the challenges of BD are then analyzed alongside recent developments, providing a better understanding of how different tools and technologies can be applied to solve real-life applications.
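
As a sketch of the parallel processing stage such a system would contain, the following Python example mimics the map/reduce pattern commonly run over parallel distributed file systems; the data blocks and function names are illustrative assumptions, not components of the surveyed platforms:

```python
from collections import Counter
from functools import reduce
from multiprocessing import Pool

# "Map" step: each worker counts words in one data block, much as a
# distributed file system hands blocks to worker nodes.
def map_block(block: str) -> Counter:
    return Counter(block.split())

# "Reduce" step: merge the partial counts from all workers.
def reduce_counts(a: Counter, b: Counter) -> Counter:
    a.update(b)
    return a

if __name__ == "__main__":
    blocks = ["big data big", "data analysis", "big analysis"]
    with Pool(processes=3) as pool:
        partials = pool.map(map_block, blocks)
    totals = reduce(reduce_counts, partials, Counter())
    print(totals)  # Counter({'big': 3, 'data': 2, 'analysis': 2})
```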


2021, Vol 258, pp. 07035
Author(s):  
Alexander Kuzminov ◽  
Alexandra Voronina ◽  
Margarita Bezuglova ◽  
Tatiana Medvedskaya

The object of research in this article is the category of "human capital" as the foundation for the development of generations and the state in the digital future. This category depends directly on the growing complexity of economic systems, changes in the information space, and the digitalization of society. The political and economic features of "human capital" can be highlighted by assessing it as a key state resource whose impact indicators combine classical and newly identified information parameters. In an effort to expand the understanding of the phenomena of institutional change in economics and public administration, a multilevel causal model is proposed. The forces of the model act in two directions: macro-level causes that explain behaviour at the micro level, and micro-level laws that affect the entire system at the macro level. As part of the development of interdisciplinary research, the article proposes a new conceptual approach to the formalization and management of the human capital structure. The basis of integration is cenological theory, which makes it possible to formalize the system of macro-rules that ensure the stability of complex systems, in particular of generations in the information space. A basic research paradigm is proposed, and promising results are identified using the example of the stratification of human capital parameters.


Author(s):  
Ekaterina Shebunova

We consider the impact of automation processes on the implementation of external financial control. We study the practical application of new sources of data for analysis, namely state information systems; in particular, the legal regulation governing the functioning of such systems and their use for financial control purposes. We present methods for collecting and analyzing big data in order to improve the legal regulation of the budgetary process, as well as the law enforcement practice of using big data arising from the digitalization of the control and supervisory activities of external financial control bodies. We focus on the fact that big data analysis methods (for example, spatial analysis, social network analysis, machine learning, etc.) can be used to implement state financial control over the activities of nonprofit organizations. We find that improved methods of collecting and analyzing data help not only to respond flexibly to sudden changes and make faster and more accurate decisions, but also to use large databases, which, in turn, allows control bodies to move from monitoring the legality of spending to analyzing the effectiveness of the use of state financial resources. Based on the given examples, we conclude that automation contributes to improving the methods of state financial control.
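
As an illustration of the kind of screening such methods enable, the following Python sketch flags outlying spending records with a simple z-score rule; the organizations, amounts, and threshold are hypothetical, and real control systems would rely on the richer spatial, network, and machine learning methods mentioned above:

```python
import statistics

# Hypothetical spending records for nonprofit organizations, in
# thousands of currency units, as a control body might aggregate
# them from a state information system.
spending = {"NPO-1": 120, "NPO-2": 135, "NPO-3": 118,
            "NPO-4": 940, "NPO-5": 127, "NPO-6": 131}

mean = statistics.mean(spending.values())
stdev = statistics.stdev(spending.values())

# Flag organizations whose spending deviates sharply from the mean,
# marking them for closer effectiveness review rather than proving misuse.
flagged = {org: amt for org, amt in spending.items()
           if abs(amt - mean) / stdev > 1.5}
print(flagged)  # {'NPO-4': 940}
```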


2021, Vol 33 (6), pp. 1-18
Author(s):  
Jianfei Li ◽  
Juxing Li ◽  
Jin Ji ◽  
Shengjun Meng

The coronavirus disease 2019 (COVID-19) epidemic that began in early 2020 quickly became a global trend, bringing unprecedented shocks to many national economies and even the global trade economy. Big data is the main feature of the Internet era; it has transformed the industrial development pattern of modern society and now flourishes in the field of the trade economy. It is therefore of great significance to apply big data analysis technology to study the impact of the COVID-19 epidemic on the global trade economy. On the basis of summarizing and analyzing previous research works, this paper expounds the research status and significance of the impact of the COVID-19 epidemic on the global trade economy and elaborates its development background. The study results of this paper provide a reference for further research, based on big data analysis, into the impact of the COVID-19 epidemic on the global trade economy.

