Big Data and the Implications for Government

2014 ◽  
Vol 14 (4) ◽  
pp. 253-257 ◽  
Author(s):  
Andy Williamson

Abstract
Big data is more than the sum of its parts; it is about scale, but also about how we interconnect once-disparate data sets and mine and analyse them. It has started to make a significant impact everywhere from financial markets to supermarkets. And, as Andy Williamson explains, it is becoming an important factor in the development of public policy and in the delivery and analysis of public services. Big data offers us powerful new ways to see the world around us, but this comes with the challenges of ownership, privacy and misuse: it can be used for good or to constrain people. We have to ensure that, as the uptake of big data increases, our legislation and practice keep pace, so that mistakes and misuse are prevented.

Author(s):  
Nitigya Sambyal ◽  
Poonam Saini ◽  
Rupali Syal

The world is increasingly driven by huge amounts of data. Big data refers to data sets that are so large or complex that traditional data processing software is inadequate to deal with them. Healthcare analytics is a prominent area of big data analytics and has led to significant reductions in the morbidity and mortality associated with disease. To harness the full potential of big data, various tools are available for its processing, such as Apache Sentry, BigQuery, NoSQL databases, Hadoop, and JethroData. However, with such enormous amounts of information comes the complexity of data management; further big data challenges arise during data capture, storage, analysis, search, transfer, information privacy, visualization, querying, and update. The chapter focuses on understanding the meaning and concept of big data, big data analytics, its role in healthcare, its various application areas, and the trends and tools used to process big data, along with open challenges.


2013 ◽  
Vol 1 (1) ◽  
pp. 19-25 ◽  
Author(s):  
Abdelkader Baaziz ◽  
Luc Quoniam

“Big Data is the oil of the new economy” has been the most famous slogan of the last three years; it was even adopted by the World Economic Forum in 2011. In fact, Big Data is like crude oil: it is valuable, but unrefined it cannot be used. It must be broken down and analyzed for it to have value. But what about the Big Data generated by the petroleum industry, and particularly by its upstream segment? Upstream is no stranger to Big Data. Understanding and leveraging data in the upstream segment enables firms to remain competitive throughout planning, exploration, delineation, and field development.

Oil and gas companies conduct advanced geophysical modeling and simulation to support operations, where 2D, 3D and 4D seismic surveys generate significant data during exploration phases. They closely monitor the performance of their operational assets, using tens of thousands of data-collecting sensors in subsurface wells and surface facilities to provide continuous, real-time monitoring of assets and environmental conditions. Unfortunately, this information comes in varied and increasingly complex forms, making it a challenge to collect, interpret, and leverage the disparate data. As an example, Chevron's internal IT traffic alone exceeds 1.5 terabytes a day.

Big Data technologies integrate common and disparate data sets to deliver the right information at the appropriate time to the correct decision-maker. These capabilities help firms act on large volumes of data, transforming decision-making from reactive to proactive and optimizing all phases of exploration, development and production. Furthermore, Big Data offers multiple opportunities to ensure safer, more responsible operations. Another invaluable effect would be shared learning.

The aim of this paper is to explain how to use Big Data technologies to optimize operations, and to ask: how can Big Data help experts make the decisions that lead to the desired outcomes?

Keywords: Big Data; Analytics; Upstream Petroleum Industry; Knowledge Management; KM; Business Intelligence; BI; Innovation; Decision-making under Uncertainty


Web Services ◽  
2019 ◽  
pp. 1588-1600
Author(s):  
Manjunath Thimmasandra Narayanapppa ◽  
A. Channabasamma ◽  
Ravindra S. Hegadi

The amount of data around us is increasing second by second, and as a result the size of the databases used in today's enterprises is growing at an exponential rate. At the same time, the need to process and analyze this bulky data for business decision making has also increased. Several business and scientific applications generate terabytes of data that have to be processed efficiently on a daily basis. Data gets collected and stored at unprecedented rates. The challenge is not only to store and manage this huge amount of data, but also to analyze it and extract meaningful value from it. This has given rise to the big data problem faced by industry: the inability of conventional software tools and database systems to manage and process such data sets within reasonable time limits. The main focus of the chapter is on unstructured data analysis.


1999 ◽  
Vol 13 (2) ◽  
pp. 189-210 ◽  
Author(s):  
Franklin R Edwards

The Fed-engineered rescue of Long-Term Capital Management (LTCM) in September 1998 set off alarms throughout financial markets about the activities of hedge funds and the stability of financial markets in general. With only $4.8 billion in equity, LTCM managed to leverage itself to the hilt by borrowing more than $125 billion from banks and securities firms and entering into derivatives contracts totaling more than $1 trillion (notional). When LTCM's speculations went sour in the summer of 1998, the impending liquidation of LTCM's portfolio threatened to destabilize financial markets throughout the world. Public policy response to LTCM should focus on risks of systemic fragility and ways in which bank regulation can be improved.


2016 ◽  
Vol 15 (2) ◽  
pp. 49-55
Author(s):  
Pala SuriyaKala ◽  
Ravi Aditya

Human resources is traditionally an area subject to measured change, but with big data, data analytics, human capital management, talent acquisition and performance metrics as new trends, there is bound to be a sea change in this function. This conceptual paper introspects on and outlines the challenges that HRM faces with big data. Big data, as one knows, is the world of enormous data generation, revolutionizing the world with data sets at exabyte scale. It has been the driving force behind how governments, companies and functions will come to perform in the decades to come. This immense amount of information, if properly utilized, can lead to efficiency in various fields like never before. But to achieve this, the cloud of suspicion, fear and uncertainty surrounding the use of big data has to be lifted from those who can use it to the benefit of their respective areas of application. HR, unlike marketing or finance, has traditionally never been very data-centric in the analysis of its decisions.


2021 ◽  
pp. 255-268
Author(s):  
Wei Zhang ◽  
Gabriel R. Fries ◽  
Joao Quevedo

Mental and behavioral disorders are becoming the leading cause of disability across the world. Along with the ongoing development of biomedical and computational technologies, more and more data are constantly being produced, including genomic, transcriptomic, metabolomic, proteomic, clinical, and imaging resources. As a consequence, scientists in the psychiatric field are actively shifting from studies led by individual investigators to large international consortia, which accelerate data accumulation and increase its size. This chapter discusses the currently publicly available data sets on psychiatric disorders and neuroscience, as well as their integrated analysis. The authors also list some studies using novel types of data, which will further extend the potential of big data in the study of psychiatric disorders.


Author(s):  
Vo Ngoc Phu ◽  
Vo Thi Ngoc Tran

Information technology, computer science, and related fields have been developing rapidly in many countries across the world. Their subfields have already made crucial contributions to everyday life: production, politics, advertisement, and more. In particular, big data semantics, scientific and knowledge discovery, and intelligence are the subareas gaining the most interest. The authors therefore present semantics for massive data sets in full in this chapter. This is highly significant for commercial applications, studies, and researchers across the world.


Author(s):  
Stephen H. Kiasler ◽  
William H. Money ◽  
Stephen J. Cohen

The world of data has been evolving due to the expansion of operations and the complexity of the data processed by systems. Big Data is no longer just numbers and characters; it now includes unstructured data types collected by a variety of devices. Recent work has postulated that the Big Data evolutionary process is making a conceptual leap to incorporate intelligence. This challenges system engineers with new issues as they envision and create service systems to process and incorporate these new data sets and structures. This article proposes that Big Data has not yet made a complete evolutionary leap, but rather that a new class of data (a higher level of abstraction) is needed to integrate this “intelligence” concept. The article examines previous definitions of Smart Data, offers a new conceptualization for smart objects (SO), examines the smart data concept, and identifies issues and challenges of understanding smart objects as a new data-managed software paradigm. It concludes that smart objects incorporate new features and have different properties from passive, inert Big Data.

