Application of Big Data in the Financial Audit Procedures

Author(s):  
Kotryna Nagytė ◽  
Lina Dagilienė

Annotation. Big Data (BD) is one of the most commonly used terms in the modern world of business and information technology. The main features of BD (volume, velocity, and variety) call for unique ways of processing large amounts of information, regardless of their scale, storage and computational complexity, or analytical and statistical correlation. The emergence and potential use of BD have affected business accounting and financial auditing by replacing long-used manual data collection and completion processes with automatic ones that compare and search for correlations between data of different structure and nature. The analysis shows that the main advantages of applying Big Data Analytics (BDA) in the audit process are faster and more efficient execution of procedures, more detailed results, and the grouping and comparison of data according to selected criteria. The drawbacks of BD application, meanwhile, relate to additional professional supervision requirements and the need for proper data analysis so that results are interpreted correctly. The paper presents a conceptual model that shows the relationships between BDA tools and financial audit procedures. In addition, the model shows the factors and risks in clients' internal and external environments that affect the applicability of specific audit procedures. It was found that applying the model to the procedures involves testing five relationships, i.e. classification, clustering, regression and time-series analysis, association rules and text mining, and visualization tools. The Aim of the Study is to identify the application of BDA tools in financial audit procedures. Research Methods: comparative and systematic analysis of the literature; content analysis; statistical data analysis; graphical analysis. Keywords: Big Data, Big Data Analytics, Financial Audit, Financial Audit Procedures. JEL Code: M15, M40, M42.
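To make one of these five relationships concrete, the sketch below applies clustering, one of the technique groups named in the abstract, to synthetic journal entries and flags the entries farthest from any cluster centre as candidates for substantive testing. It is a minimal illustration under invented data and features, not the authors' model.

```python
# A minimal sketch (not the authors' model) of clustering-based
# screening of journal entries for audit follow-up. The features,
# data and cut-off are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Hypothetical journal-entry features: amount and posting hour.
entries = np.column_stack([
    rng.lognormal(mean=7.0, sigma=1.0, size=500),   # transaction amounts
    rng.normal(loc=14, scale=2, size=500),          # posting hour of day
])

X = StandardScaler().fit_transform(entries)
model = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

# Distance to the assigned cluster centre; the farthest entries are
# candidates for substantive testing by the auditor.
dist = np.linalg.norm(X - model.cluster_centers_[model.labels_], axis=1)
suspects = np.argsort(dist)[-10:]
print("entries to inspect:", suspects)
```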

2020 ◽  
Vol 10 (1) ◽  
pp. 343-356
Author(s):  
Snezana Savoska ◽  
Blagoj Ristevski

Abstract. Nowadays, big data is a widely utilized concept that has been spreading quickly in almost every domain. For pharmaceutical companies, using this concept is a challenging task because of the permanent pressure and business demands created by legal requirements, research demands and standardization. These legal and standardization demands are associated with human healthcare safety and drug control, which require continuous and deep data analysis. Companies continually update their procedures to comply with the relevant laws, standards, market demands and regulations by using contemporary information technology. This paper highlights some important aspects of the experience and change methodology used in one Macedonian pharmaceutical company, which has employed information technology solutions that successfully tackle legal and business pressures when dealing with large amounts of data. We used a holistic view and a deliverables-analysis methodology to gain top-down insight into the possibilities of big data analytics. In addition, structured interviews with the company's managers were used for information collection, and a proactive methodology with workshops was used for data integration toward the implementation of big data concepts. The paper emphasizes the information and knowledge used in this domain to improve awareness of the need for big data analysis to achieve competitive advantage. The main results focus on systematizing the company's data, information and knowledge, and propose a solution that integrates big data to support managers' decision-making processes.


2018 ◽  
Vol 20 (1) ◽  
Author(s):  
Tiko Iyamu

Background: Over the years, big data analytics has been carried out statically, in a programmed way, which does not allow for translation of data sets from a subjective perspective. This approach impedes an understanding of why and how data sets manifest themselves in the various forms that they do. This has a negative impact on the accuracy, redundancy and usefulness of data sets, which in turn affects the value of operations and the competitive effectiveness of an organisation. The current single approach also lacks the detailed examination of data sets that big data deserves in order to improve purposefulness and usefulness. Objective: The purpose of this study was to propose a multilevel approach to big data analysis, including an examination of how a sociotechnical theory, actor network theory (ANT), can be used complementarily with analytic tools for big data analysis. Method: Qualitative methods were employed from an interpretivist perspective. Results: From the findings, a framework was developed that offers big data analytics at two levels, micro- (strategic) and macro- (operational). Based on the framework, a model was developed that can guide the analysis of heterogeneous data sets that exist within networks. Conclusion: The multilevel approach ensures a fully detailed analysis, which is intended to increase accuracy, reduce redundancy and put the manipulation and manifestation of data sets into perspective for improved organisational competitiveness.


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Fatao Wang ◽  
Di Wu ◽  
Hongxin Yu ◽  
Huaxia Shen ◽  
Yuanjun Zhao

Purpose: Based on the typical service supply chain (SSC) structure, the authors construct a model of an e-tailing SSC to explore coordination relationships in the supply chain; big data analysis makes the creation of such coordination mechanisms realistically possible. Design/methodology/approach: At the present stage, e-commerce companies have not yet established mature SSC systems or achieved good synergy with other members of the supply chain, so shortages of goods coexist with heavy pressure on express logistics companies. Given uncertain online shopping demand, the authors employ the newsvendor (newsboy) model from operations research to analyze the coordination mechanism of the SSC model. Findings: By analyzing the e-tailing SSC coordination mechanism and adjusting the relevant parameters, the authors find that the coordination mechanism can be implemented and optimized. A numerical example confirms the feasibility of the analysis. Originality/value: Big data analysis makes the establishment of an online SSC coordination mechanism practicable. Such a mechanism can effectively promote the efficient allocation of supplies and better meet consumers' needs.
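Since the abstract names the newsvendor (newsboy) model explicitly, a minimal sketch of its standard critical-ratio solution is given below under an assumed normally distributed online demand; the prices, costs and demand parameters are illustrative, not taken from the paper.

```python
# A minimal sketch of the newsvendor (newsboy) model under an assumed
# normal demand distribution; all parameter values are illustrative.
from scipy.stats import norm

price, cost, salvage = 25.0, 10.0, 4.0   # per-unit values (assumed)
cu = price - cost      # underage cost: margin lost per unit short
co = cost - salvage    # overage cost: loss per unsold unit

critical_ratio = cu / (cu + co)
mu, sigma = 1000.0, 200.0                # assumed demand distribution

# Optimal stocking quantity: Q* = F^{-1}(cu / (cu + co))
q_star = norm.ppf(critical_ratio, loc=mu, scale=sigma)
print(f"critical ratio = {critical_ratio:.3f}, order Q* = {q_star:.0f} units")
```

With these assumed values the critical ratio is about 0.714, so the optimal order quantity sits a little more than half a standard deviation above mean demand.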


Have you ever wondered how companies that adopt big data and analytics generate value? Which algorithms they use, in which situations, and with what results? This chapter discusses these points in order to highlight the importance of big data analytics. To give a quick introduction to what is being done in data analytics applications and to spark the reader's interest, the author presents several application examples. These will give you more detailed insight into the types and uses of algorithms for data analysis. So, enjoy the examples.


Web Services ◽  
2019 ◽  
pp. 1301-1329
Author(s):  
Suren Behari ◽  
Aileen Cater-Steel ◽  
Jeffrey Soar

The chapter discusses how financial services organizations can take advantage of big data analysis for disruptive innovation, through the examination of a case study in the financial services industry. Popular tools for big data analysis are discussed, and the challenges of big data are explored along with how those challenges can be met. The work of Hayes-Roth on Valued Information at the Right Time (VIRT), and how it applies to the case study, is examined. Boyd's model of Observe, Orient, Decide, and Act (OODA) is explained in relation to disruptive innovation in financial services. Future trends in big data analysis in the financial services domain are explored.


Author(s):  
Ferdi Sönmez ◽  
Ziya Nazım Perdahçı ◽  
Mehmet Nafiz Aydın

When uncertainty is regarded as a surprise, an event in the mind, it can be said that individuals are able to change their view of the future. Market, financial, operational, social, environmental, institutional and humanitarian risks and uncertainties are inherent realities of the modern world. Life is suffused with randomness and volatility; everything momentous that occurs in the illustrious sweep of history, or in our individual lives, is an outcome of uncertainty. An important implication of such uncertainty is the financial instability it inflicts on the victims of different sorts of perils. This chapter is intended to explore big data analytics as a comprehensive technique for processing large amounts of data to uncover insights. Several techniques that predate big data analytics, such as financial econometrics and optimization models, have long been used, so these techniques are described first. The chapter then discusses how big data analytics has altered the methods of analysis, and closes with cases that promote big data analytics.
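The chapter's techniques are only named, not specified; as one hedged example in the spirit of the pre-big-data financial econometrics it mentions, the sketch below estimates a one-day Value-at-Risk by Monte Carlo simulation. The position size and return distribution are assumptions, not figures from the chapter.

```python
# A hedged sketch of Monte Carlo Value-at-Risk for a single position,
# illustrating how uncertainty about returns can be quantified.
# Distributional assumptions and parameters are invented.
import numpy as np

rng = np.random.default_rng(7)
position_value = 1_000_000.0            # assumed portfolio value
mu, sigma = 0.0002, 0.015               # assumed daily return moments

returns = rng.normal(mu, sigma, size=100_000)  # simulated daily returns
pnl = position_value * returns

var_95 = -np.percentile(pnl, 5)         # loss not exceeded with 95% confidence
print(f"one-day 95% VaR = {var_95:,.0f}")
```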


2019 ◽  
Vol 26 (2) ◽  
pp. 981-998 ◽  
Author(s):  
Kenneth David Strang ◽  
Zhaohao Sun

The goal of the study was to identify big data analysis issues that can impact empirical research in the healthcare industry. To accomplish this, the authors analyzed big-data-related keywords from a literature review of peer-reviewed journal articles published since 2011. Topics, methods and techniques were summarized along with strengths and weaknesses. A panel of subject matter experts was interviewed to validate the intermediate results and synthesize the key problems likely to impact researchers conducting quantitative big data analysis in healthcare studies. The systems thinking action research method was applied to identify and describe the hidden issues. The findings were similar to the extant literature, but three hidden fatal issues were detected. Methodical and statistical control solutions were proposed to overcome these three fatal healthcare big data analysis issues.
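The abstract does not spell out the proposed statistical control solutions; as a hedged illustration of what a statistical control check on a healthcare data stream might look like, the sketch below flags out-of-control observations with a simple 3-sigma rule. All data and thresholds are invented for illustration.

```python
# A hedged illustration, not the authors' proposed solution: a simple
# 3-sigma statistical control check over an invented healthcare metric.
import numpy as np

rng = np.random.default_rng(1)
daily_admissions = rng.poisson(lam=120, size=365).astype(float)
daily_admissions[200] = 210.0           # injected anomaly for the demo

mean, std = daily_admissions.mean(), daily_admissions.std(ddof=1)
upper, lower = mean + 3 * std, mean - 3 * std

# Indices of observations outside the control limits.
out_of_control = np.flatnonzero(
    (daily_admissions > upper) | (daily_admissions < lower)
)
print("days flagged for review:", out_of_control)
```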


2019 ◽  
Vol 17 (5) ◽  
pp. 602-617
Author(s):  
Brian Schram

This paper critically interrogates the viability of “Queer” as an ontological category, identity, and radical political orientation in an era of digital surveillance and Big Data analytics. Drawing on recent work by Matzner (2016) on the performative dimensions of Big Data, I argue that Big Data’s potential to perform and create Queerness (or its opposites) in the absence of embodiment and intentionality necessitates a rethinking of phenomenological or affective approaches to Queer ontology. Additionally, while Queerness is often theorized as an ongoing process of negotiations, (re)orientations, and iterative becomings, these perspectives presume elements of categorical mobility that Big Data precludes. This paper asks: what happens when our data performs Queerness without our permission or bodily complacency? And can a Queerness that insists on existing in the interstitial margins of categorization, or in the “open mesh of possibilities, gaps, and overlaps” (Sedgwick 1993: 8), endure amidst a climate of highly granular data analysis?


2014 ◽  
Vol 484-485 ◽  
pp. 922-926
Author(s):  
Xiang Ju Liu

This paper introduces the operational characteristics of the big data era and the challenges it currently poses, and presents detailed research on, and a design for, a big data analytics platform based on cloud computing, covering the platform's overall architecture, software architecture, network architecture, and unified program features. The paper also analyzes the competitive advantages of a unified cloud-computing program for big data analysis and the role it can play in the future business development of telecom operators.
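The paper describes the platform at the architecture level only; as a hedged sketch of the kind of analysis job such a cloud platform might run for a telecom operator, the snippet below aggregates hypothetical call-detail records with PySpark. PySpark is an assumed technology choice, and the file path and column names are hypothetical; the paper does not name specific tools.

```python
# A hedged sketch (not the paper's platform): aggregating hypothetical
# telecom call-detail records (CDRs) with PySpark on a cluster.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("cdr-analytics").getOrCreate()

# Hypothetical CDR files with columns: caller_id, region, duration_sec
cdr = spark.read.csv("hdfs:///data/cdr/*.csv", header=True, inferSchema=True)

# Call volume and average duration per region, busiest regions first.
usage_by_region = (
    cdr.groupBy("region")
       .agg(F.count("*").alias("calls"),
            F.avg("duration_sec").alias("avg_duration_sec"))
       .orderBy(F.desc("calls"))
)
usage_by_region.show()
```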

