Data Science and Big Data Analytics in Financial Services

Web Services ◽  
2019 ◽  
pp. 1301-1329
Author(s):  
Suren Behari ◽  
Aileen Cater-Steel ◽  
Jeffrey Soar

The chapter discusses how financial services organizations can take advantage of big data analysis for disruptive innovation, drawing on a case study from the financial services industry. Popular tools for big data analysis are discussed, and the challenges of big data, along with ways of meeting them, are explored. The work of Hayes-Roth on Valued Information at the Right Time (VIRT) is examined as it applies to the case study, and Boyd's Observe, Orient, Decide, Act (OODA) model is explained in relation to disruptive innovation in financial services. Future trends in big data analysis in the financial services domain are also explored.


2019 ◽  
Author(s):  
Abhishek Singh

Abstract Background: Big data analysis requires the ability to process large data sets that are held in formats fine-tuned for corporate use. Only very recently has the need for big data caught the attention of low-budget corporate groups and academia, who typically lack the money and resources to buy expensive licenses for big data analysis platforms such as SAS. Corporations continue to work with the SAS data format largely because of organizational history and because their existing code is built on it, so data providers continue to supply data in SAS formats. An acute need has therefore arisen from the gap between data being held in SAS format and coders lacking SAS expertise or training, since the economic and inertial forces that shaped these two groups of people have been different. Method: We analyze these differences and the resulting need for SasCsvToolkit, which generates a CSV file from SAS-format data so that data scientists can apply their skills in other tools that process CSVs, such as R, SPSS, or even Microsoft Excel. It also provides conversion of CSV files to SAS format. In addition, SAS database programmers often struggle to find the right way to perform a database search, exact match, substring match, except condition, filters, unique values, table joins and data mining, so the toolkit also provides template scripts to modify and run from the command line. Results: The toolkit has been implemented on the SLURM scheduler platform as a `bag-of-tasks` algorithm for parallel and distributed workflows; a serial version has also been incorporated. Conclusion: In the age of big data, where there are many file formats and each software and analytics environment has its own semantics for specific file types, data engineers will find SasCsvToolkit's functions very handy.
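The toolkit itself is not reproduced in the abstract, but the core SAS-to-CSV step it describes can be sketched in a few lines of Python. This is a minimal sketch, assuming a `.sas7bdat` input and chunked reading via pandas; the file names, chunk size and function name are illustrative assumptions, not SasCsvToolkit's actual interface or its SLURM integration.

```python
# Minimal sketch of a SAS-to-CSV conversion, in the spirit of SasCsvToolkit.
# File paths and chunk size are illustrative; the real toolkit's interface
# and its SLURM bag-of-tasks wrapper are not reproduced here.
import pandas as pd

def sas_to_csv(sas_path: str, csv_path: str, chunksize: int = 100_000) -> None:
    """Stream a .sas7bdat file to CSV in chunks so large files fit in memory."""
    reader = pd.read_sas(sas_path, format="sas7bdat", chunksize=chunksize)
    first = True
    for chunk in reader:
        chunk.to_csv(csv_path, mode="w" if first else "a",
                     header=first, index=False)
        first = False

if __name__ == "__main__":
    sas_to_csv("input.sas7bdat", "output.csv")  # hypothetical file names
```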


2020 ◽  
Vol 17 (6) ◽  
pp. 2806-2811
Author(s):  
Wahidah Hashim ◽  
A/L Jayaretnam Prathees ◽  
Marini Othman ◽  
Andino Maseleno

Data science, also known as analytics, is in high demand in industry right now, and professionals who are well trained in this field are being recruited by many large companies. Before data science emerged, companies relied on software engineers and data analysts to solve IT-related problems. However, as most of the world's population came online, data began pouring in at a volume and velocity that software engineers and data analysts could no longer handle. Analyzing data of this tremendous size is called big data analytics. Corporations have already started to realize that data scientists are the right people to tackle big data problems. The low supply of data scientists has pushed up their salaries, which are many times higher than those of other IT professionals. Data science skills can be applied to a wide range of data-related problems. Data scientists are recruited not only by tech giants such as Google and Amazon; medium-sized organizations have also started to understand the importance of data science and recruit data scientists of their own. In this paper, we explore the data science requirements and knowledge that can be covered in UNITEN's computer science syllabus.


2018 ◽  
Vol 20 (1) ◽  
Author(s):  
Tiko Iyamu

Background: Over the years, big data analytics has been carried out statically, in a programmed way that does not allow data sets to be interpreted from a subjective perspective. This approach limits understanding of why and how data sets manifest themselves in the various forms that they do, which has a negative impact on the accuracy, redundancy and usefulness of data sets and, in turn, on the value of operations and the competitive effectiveness of an organisation. The current single approach also lacks the detailed examination of data sets that big data deserves if purposefulness and usefulness are to be improved. Objective: The purpose of this study was to propose a multilevel approach to big data analysis, including an examination of how a sociotechnical theory, actor network theory (ANT), can be used complementarily with analytic tools for big data analysis. Method: Qualitative methods were employed from an interpretivist perspective. Results: From the findings, a framework that offers big data analytics at two levels, micro (strategic) and macro (operational), was developed. Based on the framework, a model was developed that can guide the analysis of heterogeneous data sets that exist within networks. Conclusion: The multilevel approach ensures a fully detailed analysis, intended to increase accuracy, reduce redundancy and put the manipulation and manifestation of data sets into perspective for improved organisational competitiveness.


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Fatao Wang ◽  
Di Wu ◽  
Hongxin Yu ◽  
Huaxia Shen ◽  
Yuanjun Zhao

Purpose: Based on the typical service supply chain (SSC) structure, the authors construct a model of the e-tailing SSC to explore coordination relationships in the supply chain; big data analysis provides realistic possibilities for creating such coordination mechanisms. Design/methodology/approach: At present, e-commerce companies have not yet established a mature SSC system or achieved good synergy with other members of the supply chain, so shortages of goods and heavy pressure on express logistics companies coexist. Under uncertain online shopping demand, the authors employ the newsboy (newsvendor) model from operations research to analyze the synergistic mechanism of the SSC model. Findings: By analyzing the e-tailing SSC coordination mechanism and adjusting the relevant parameters, the authors find that the synergy mechanism can be implemented and optimized. Numerical examples confirm the feasibility of this analysis. Originality/value: Big data analysis makes the establishment of an online SSC coordination mechanism realistic. Such a mechanism can effectively promote the efficient allocation of supplies and better meet consumers' needs.
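The abstract does not give the paper's parameters, but the classic newsboy (newsvendor) result it builds on can be sketched as follows. This is a minimal sketch under stated assumptions: demand is taken as normally distributed, and the mean, standard deviation and cost figures below are illustrative, not values from the paper.

```python
# Newsvendor (newsboy) sketch: the optimal stocking quantity is the
# critical fractile cu / (cu + co) of the demand distribution.
# The demand distribution and costs below are illustrative assumptions.
from scipy.stats import norm

def newsvendor_quantity(mean: float, std: float,
                        underage_cost: float, overage_cost: float) -> float:
    """Optimal order quantity for normally distributed demand."""
    critical_fractile = underage_cost / (underage_cost + overage_cost)
    return norm.ppf(critical_fractile, loc=mean, scale=std)

if __name__ == "__main__":
    # e.g. mean demand 1000 units, std 200, lost margin 10, leftover cost 4
    q = newsvendor_quantity(mean=1000, std=200, underage_cost=10, overage_cost=4)
    print(f"Optimal order quantity: {q:.0f} units")
```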


2019 ◽  
Vol 26 (2) ◽  
pp. 981-998 ◽  
Author(s):  
Kenneth David Strang ◽  
Zhaohao Sun

The goal of the study was to identify big data analysis issues that can impact empirical research in the healthcare industry. To accomplish that, the authors analyzed big data-related keywords from a literature review of peer-reviewed journal articles published since 2011. Topics, methods and techniques were summarized along with their strengths and weaknesses. A panel of subject matter experts was interviewed to validate the intermediate results and synthesize the key problems likely to impact researchers conducting quantitative big data analysis in healthcare studies. The systems thinking action research method was applied to identify and describe the hidden issues. The findings were similar to the extant literature, but three hidden fatal issues were detected. Methodological and statistical control solutions were proposed to overcome these three fatal healthcare big data analysis issues.


2020 ◽  
Vol 2020 ◽  
pp. 1-13
Author(s):  
Kehua Miao ◽  
Jie Li ◽  
Wenxing Hong ◽  
Mingtao Chen

The booming development of data science and big data technology stacks has inspired continuous, iterative updates to data science research and working methods. At present, the division of labor between data science and big data is increasingly fine-grained. Traditional working methods, from building the infrastructure environment to data modelling and analysis, greatly reduce work and research efficiency. In this paper, we focus on supporting friendly collaboration within a data science team by building a data science and big data analysis application platform based on a microservices architecture, aimed at education and non-professional research. Because the microservices environment makes it easy to update each component, the platform offers a personal code experiment environment that integrates JupyterHub with Spark and HDFS for multiuser use, as well as visual modelling tools that follow the modular design of data science engineering and are based on Greenplum in-database analysis. The entire web service system is developed with Spring Boot.
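The platform itself is not shown in the abstract, but the kind of Spark-on-HDFS analysis a user of its JupyterHub environment might run can be sketched as below. This is a minimal sketch under stated assumptions: the cluster master, HDFS path, application name and column name are all illustrative, not the platform's actual configuration.

```python
# Minimal sketch of the Spark-on-HDFS analysis a user of the described
# JupyterHub environment might run. The master URL, HDFS path and column
# name are illustrative assumptions, not the platform's configuration.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("jupyterhub-experiment")   # hypothetical application name
         .master("yarn")                     # assumes a YARN-managed cluster
         .getOrCreate())

# Read a CSV data set stored on HDFS into a DataFrame (path is hypothetical).
df = spark.read.csv("hdfs:///data/experiments/sample.csv",
                    header=True, inferSchema=True)

# A simple aggregation, standing in for a real modelling step.
df.groupBy("category").count().show()

spark.stop()
```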

