Big Data Overview

Big data is now a reality: data is created constantly. Data from mobile phones, social media, GIS, medical imaging technologies, and other sources must all be stored for some purpose, and increasingly it must be stored and processed in real time. The challenge is to store this vast amount of data and to extract meaningful patterns from it, something traditional data structures were not designed for. Data sources are expected to grow roughly 50-fold over the next 10 years. An International Data Corporation (IDC) forecast sees the big data technology and services market growing at a compound annual growth rate (CAGR) of 23.1% over the 2014-2019 period, with annual spending reaching $48.6 billion in 2019. The digital universe is expected to double in size every two years, reaching 44 zettabytes (10^21 bytes each), or 44 trillion gigabytes, by 2020. The zettabyte is a multiple of the unit byte for digital information. There is a need to design new data architectures, with new analytical sandboxes and methods, and for data scientists to integrate multiple skills to operate on such large data.
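The unit conversion and growth figures above can be sanity-checked with a little arithmetic; a minimal sketch (the 2014 spending figure is back-derived from the quoted CAGR and 2019 value, so treat it as illustrative, not an IDC statement):

```python
# Unit check: 44 zettabytes expressed in gigabytes.
ZETTABYTE = 10**21  # bytes
GIGABYTE = 10**9    # bytes
zb_44_in_gb = 44 * ZETTABYTE / GIGABYTE
assert zb_44_in_gb == 44e12  # 44 trillion gigabytes, as stated

# Back-derive the implied 2014 market size from the 23.1% CAGR
# and the $48.6B forecast for 2019 (5 compounding years).
cagr = 0.231
spend_2019 = 48.6  # $ billion
spend_2014 = spend_2019 / (1 + cagr) ** 5
print(f"Implied 2014 spending: ${spend_2014:.1f}B")
```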

2019 ◽  
Vol 4 (1) ◽  
pp. 14-25
Author(s):  
Saiful Rizal

The development of information technology produces very large volumes of data, with wide variation in the data and complex data structures. Traditional data storage techniques are not sufficient for storing and analyzing data at such volumes. Many researchers have studied big data using a variety of analytics models. The purpose of this survey paper is therefore to provide an understanding of analytics models in big data for various uses, using algorithms from data mining. Preprocessing big data is the key to turning big data into big value.


Author(s):  
Maria Petrescu ◽  
Brianna Lauer

Qualitative methods in marketing have become essential not only for their classical advantage in consumer behavior, but also for their benefits in dealing with big data and data mining. Research from International Data Corporation (IDC) shows that when it comes to online data, unstructured content accounts for 90% of all digital information. Under these circumstances, this study provides a literature review and analysis of the role and relation of qualitative methods with quantitative methods in marketing research. The paper analyzes research articles that include qualitative studies in the top marketing journals during the last decade and focuses on their topic, domain, methods used, and whether they used any triangulation with quantitative methods. Starting from this analysis, the study provides recommendations that can help better integrate qualitative methods in marketing research, academics, and practice.


2018 ◽  
Vol 2 (1) ◽  
pp. 11-17
Author(s):  

Data mining is the set of computational techniques and methodologies aimed at extracting knowledge from large amounts of data, using sophisticated data analysis tools to highlight the information structure underlying large data sets. Data scientists and data engineers face significant challenges today because of the global growth of datasets across industries and sectors. Machine learning methods represent one of these tools, allowing not only data management but also analysis and prediction operations. Supervised learning, one kind of machine learning methodology, takes input data and produces outputs of two types, qualitative and quantitative, respectively describing data classes and predicting data trends. The classification task provides qualitative responses, whereas the prediction or regression task offers quantitative outputs. In this paper, an attempt has been made to demonstrate how big data can be analyzed, classified, and predicted using the Weka tool in industry.
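The qualitative/quantitative distinction above maps directly onto the classification and regression tasks of supervised learning. A minimal sketch using scikit-learn (the paper itself demonstrates the Weka tool; this Python analogue with synthetic data is only to illustrate the two output types):

```python
# Classification (qualitative output) vs. regression (quantitative
# output), the two supervised-learning tasks the abstract contrasts.
from sklearn.datasets import make_classification, make_regression
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor

# Classification: predict a discrete class label.
Xc, yc = make_classification(n_samples=200, n_features=5, random_state=0)
clf = DecisionTreeClassifier(random_state=0).fit(Xc, yc)
print("predicted class:", clf.predict(Xc[:1])[0])  # a class label, e.g. 0 or 1

# Regression: predict a continuous quantity.
Xr, yr = make_regression(n_samples=200, n_features=5, random_state=0)
reg = DecisionTreeRegressor(random_state=0).fit(Xr, yr)
print("predicted value:", reg.predict(Xr[:1])[0])  # a real number
```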


Author(s):  
Vivek Raich ◽  
Pankaj Maurya

In the era of information technology, huge amounts of data are continuously being stored and made available to decision makers, which has driven the progress of information technology and its wide adoption in many areas of business, engineering, medicine, and scientific study. Big data does not only mean data that is large in size; there are several types of data that are not easy to handle, and dedicated technology is required to manage them. Because data continue to grow in this way, it is important to study and manage these datasets, adjusting to their requirements, so that the necessary information can be obtained. The aim of this paper is to analyze some of the analytic methods and tools that can be applied to large data. In addition, applications of big data have been analyzed, in which decision makers work on big data and use the resulting insights for different applications.


Author(s):  
Monika ◽  
Pardeep Kumar ◽  
Sanjay Tyagi

In a cloud computing environment, quality of service (QoS) and cost are the key elements to be taken care of. Today, in the era of big data, data must be handled properly while satisfying requests. When handling requests involving large data, or requests from scientific applications, the flow of information must be sustained. In this paper, a brief introduction to workflow scheduling is given, along with a detailed survey of various scheduling algorithms using various parameters.


2018 ◽  
Vol 1 (1) ◽  
pp. 128-133
Author(s):  
Selvi Selvi

Economic globalization between countries has become commonplace. Differences in financial rules are exploited by many parties to practice Base Erosion and Profit Shifting (BEPS), which leads to losses for the state. To tackle this, countries have agreed to implement the Automatic Exchange of Information (AEoI), which automatically turns taxation data into big data. The research method of this paper is a literature study that combines several related works and their global and national implications, using secondary data. The paper concludes that the challenges of AEoI have, in theory, been overcome by Indonesia as a developing country. In practice, however, it is not yet possible to know whether they can be overcome, because Indonesia has still not implemented AEoI.


2020 ◽  
Author(s):  
Deodatt Madhav Suryawanshi ◽  
Divya Rajaseharan ◽  
Raghuram Venugopal ◽  
Ramchandra Goyal ◽  
Anju Joy

BACKGROUND: Gaming is a billion-dollar industry growing at a compound annual growth rate (CAGR) of 9%-14.3%, with its biggest market in Southeast Asian countries. The availability of low-cost smartphones and ease of internet access have made gaming popular among youth, who enjoy it as a leisure activity. According to the WHO, excessive indulgence in gaming can lead to gaming disorder. Medical students who indulge in excessive gaming can succumb to gaming disorder, which can affect their scholastic performance. Hence this study was done to assess gaming practices and their effect on scholastic performance. OBJECTIVE: 1. To assess the various gaming practices and the prevalence of gaming addiction among medical students. 2. To study the effect of gaming practices on the scholastic performance of medical students. METHODS: The present study used a case-control design in which 448 (N) study participants were recruited using a non-probability sampling technique. 91 (Nc) cases who had been gaming for the past 6 months were identified using a rapid preliminary survey. 91 controls (Nco) who never played games were selected and matched for age and sex. Internal assessment scores (%) of cases and controls were compared. The Snedecor F test and Student's t test were used to find the association between hours of gaming and internal assessment scores (%) and the difference in internal assessment scores between cases and controls, respectively. The odds ratio was calculated to identify the risk of poor scholastic performance. The prevalence of gaming addiction was assessed using Lemmens' Gaming Addiction Scale (GAS). RESULTS: Frequency of gaming (hours) was not associated with mean internal assessment scores (p>0.05). Male students (cases) showed a significant reduction in both of their internal assessment scores (p<0.001, <0.01), whereas no reduction was observed in female cases. A negative correlation was observed between GAS and internal assessment scores (r=-0.02). The prevalence of gaming addiction measured by GAS was found to be 6.2% in the study population (N=448) and 31% among cases (Nc=91). The risk of low scores was 1.80-1.89 times higher (OR) in cases than in controls. CONCLUSIONS: Excessive gaming adversely affects scholastic performance more in males than in females. Awareness of gaming addiction needs to be created among students, parents, and teachers. Institutionalized de-addiction services should be made available to medical students. CLINICALTRIAL: No
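The case-control odds ratio reported above comes from a 2x2 outcome-by-group table. A minimal sketch of the calculation, using hypothetical counts (NOT the study's actual data) chosen only to land near the reported range:

```python
# Odds ratio for a 2x2 case-control table: OR = (a*d) / (b*c).
def odds_ratio(a, b, c, d):
    """a: cases with low scores,    b: cases with normal scores,
    c: controls with low scores,    d: controls with normal scores."""
    return (a * d) / (b * c)

# Hypothetical counts among 91 gamers (cases) and 91 non-gamers
# (controls); the study's true cell counts are not given in the abstract.
a, b = 41, 50
c, d = 28, 63
print(f"OR = {odds_ratio(a, b, c, d)}")
```

An OR above 1 means the outcome (low scores) is more likely among cases than controls, which is how the abstract's 1.80-1.89 figure should be read.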


Author(s):  
Michael Goul ◽  
T. S. Raghu ◽  
Ziru Li

As procurement organizations increasingly move from a cost-and-efficiency emphasis to a profit-and-growth emphasis, flexible data architecture will become an integral part of a procurement analytics strategy. It is therefore imperative for procurement leaders to understand and address digitization trends in supply chains and to develop strategies to create robust data architecture and analytics strategies for the future. This chapter assesses and examines the ways companies can organize their procurement data architectures in the big data space to mitigate current limitations and to lay foundations for the discovery of new insights. It sets out to understand and define the levels of maturity in procurement organizations as they pertain to the capture, curation, exploitation, and management of procurement data. The chapter then develops a framework for articulating the value proposition of moving between maturity levels and examines what the future entails for companies with mature data architectures. In addition to surveying the practitioner and academic research literature on procurement data analytics, the chapter presents detailed and structured interviews with over fifteen procurement experts from companies around the globe. The chapter finds several important and useful strategies that have helped procurement organizations design strategic roadmaps for the development of robust data architectures. It then further identifies four archetype procurement area data architecture contexts. In addition, this chapter details exemplary high-level mature data architecture for each archetype and examines the critical assumptions underlying each one. Data architectures built for the future need a design approach that supports both descriptive and real-time, prescriptive analytics.


2021 ◽  
Vol 26 (4) ◽  
Author(s):  
Peter L. Molloy ◽  
Lester W. Johnson ◽  
Michael Gilding

A recent study assessed the investor performance of the Australian drug development biotech (DDB) sector over a 15-year period from 2003 to 2018. The current study builds on that research and extends the analysis to 2020, using a 10-year period starting in 2010 to exclude the impact of the global financial crisis of 2008/09. Based on a value-weighted portfolio of all 41 DDB firms, the sector delivered a negative annualized return of -4.1%. Individual firm performance was also assessed using the compound annual growth rate (CAGR) in share price over the period as a measure of investor outcomes. On this basis, 68% of firms produced negative CAGRs over the period; of the 32% of firms that produced positive CAGRs, six firms produced CAGRs greater than 20% per annum, and in three cases of recently listed firms the CAGRs were greater than 50%. Overall, however, the sector delivered very poor investor returns, and despite a relatively large number of listed biotech firms, Australian biotechnology continues to be small and weak in terms of its contribution to global biotechnology industrialization. As such, it lacks the critical mass to grow a robust bioeconomy based on drug development, which remains the standard-bearer of biotechnology industrialization.
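The share-price CAGR used above as the measure of investor outcomes is a standard formula; a minimal sketch with hypothetical prices (not actual firm data):

```python
# CAGR = (end_value / start_value) ** (1 / years) - 1
def cagr(start_value: float, end_value: float, years: float) -> float:
    """Compound annual growth rate over the given number of years."""
    return (end_value / start_value) ** (1 / years) - 1

# Hypothetical example: a share price rising from $0.50 to $3.20
# over the 10-year study window.
growth = cagr(0.50, 3.20, 10)
print(f"CAGR = {growth:.1%}")  # about 20.4% per annum
```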


Author(s):  
M. Dolores Ruiz ◽  
Juan Gomez-Romero ◽  
Carlos Fernandez-Basso ◽  
Maria J. Martin-Bautista
