Pemanfaatan Big Data pada Instansi Pelayanan Publik (Utilization of Big Data in Public Service Institutions)

2021 ◽  
Vol 4 (7) ◽  
pp. 543-548
Author(s):  
Ariraya Sulistya Sedayu ◽  
Andriyansah Andriyansah
Keyword(s):  
Big Data

The emergence of the internet has revolutionized how the world works at great speed. The world has now entered the era of digitalization, an era shaped by the digital economy and big data. Big Data involves processes of data creation, storage, information extraction, and analysis of data that stands out in terms of volume, velocity, and variety. For industry and practitioners, big data has opened opportunities for setting business strategy. This study examines the extent to which Big Data technology has been used in Indonesian public service institutions. The research method is a literature study with a traditional-conceptual approach. The primary data are journal articles related to the topic under study and several news items sourced from social media. The study concludes that the use of Big Data in Indonesia has begun to develop in the public sector. The result of this literature review is a conceptual framework to be developed in further research.

Author(s):  
Triparna Mukherjee ◽  
Asoke Nath

This chapter focuses on Big Data and its relation to Service-Oriented Architecture (SOA). We start with an introduction to recent Big Data trends and how the data explosion is faced not only by web and retail networks but also by enterprises. The notorious V's (variety, volume, velocity, and value) can cause a lot of trouble. We emphasize that Big Data is about much more than size: the problem we face today is neither the amount of data created nor its consumption, but the analysis of all that data. We then describe what service-oriented architecture is and how SOA can efficiently handle an increasingly massive volume of transactions. Next, we focus on the main purpose of SOA here: to meaningfully interoperate, trade, and reuse data between IT systems and trading partners. Using this Big Data scenario, we investigate the integration of services with new capabilities of enterprise architecture and management. This approach has had varying success, but it remains the dominant mode for data integration because data can be managed with greater flexibility.
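The interoperate-and-reuse idea can be sketched as a toy service registry in which every service speaks one language-neutral contract (JSON in, JSON out). This is an illustrative, in-process stand-in (the names `ServiceRegistry` and `inventory_lookup` are invented for this sketch), not code from the chapter; a real SOA stack would route calls over SOAP or REST through a service bus.

```python
import json
from typing import Callable, Dict

# Minimal service registry: each service exposes a named operation that
# accepts and returns JSON strings, so heterogeneous systems can
# interoperate through a shared, language-neutral contract.
class ServiceRegistry:
    def __init__(self) -> None:
        self._services: Dict[str, Callable[[str], str]] = {}

    def register(self, name: str, handler: Callable[[str], str]) -> None:
        self._services[name] = handler

    def call(self, name: str, payload: str) -> str:
        # In a real deployment this dispatch would cross the network;
        # here it is in-process purely for illustration.
        return self._services[name](payload)

# An "inventory" service that any consumer of the contract can reuse.
def inventory_lookup(payload: str) -> str:
    request = json.loads(payload)
    stock = {"widget": 12, "gadget": 0}          # stand-in data store
    item = request["item"]
    return json.dumps({"item": item, "in_stock": stock.get(item, 0)})

registry = ServiceRegistry()
registry.register("inventory.lookup", inventory_lookup)

response = json.loads(registry.call("inventory.lookup",
                                    json.dumps({"item": "widget"})))
print(response)  # {'item': 'widget', 'in_stock': 12}
```

Because consumers depend only on the operation name and the JSON contract, the inventory implementation can be swapped out without touching its callers, which is the reuse property the chapter emphasizes.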


Author(s):  
Vedhas Pandit ◽  
Shahin Amiriparian ◽  
Maximilian Schmitt ◽  
Amr Mousa ◽  
Björn Schuller

2021 ◽  
Author(s):  
Fredrick Kockum ◽  
Nicholas Dacre

The era of Big Data has provided business organisations with opportunities to improve their management processes. This developmental paper adopts a mixed-method research approach in which qualitative data underpin a quantitative questionnaire. The early insights are based on an initial eleven qualitative interviews and are conceptualised in the following three statements: (i) project practitioners need to increase their data literacy; (ii) project practitioners are not utilising the available Big Data based on the 3 Vs (volume, velocity, and variety); (iii) project practitioners need to utilise the available structured data to augment the decision-making process. To represent the complex environment of Big Data, the study adopts Complexity Theory as its theoretical framework. When completed, the research will demonstrate its results through System Dynamics modelling.


Author(s):  
N. G. Bhuvaneswari Amma

Big data is a term used to describe very large amounts of structured, semi-structured, and unstructured data that are difficult to process using traditional techniques. It is now expanding into all science and engineering domains. The key attributes of big data are volume, velocity, variety, validity, veracity, value, and visibility. In today's world, everyone uses social networking applications such as Facebook, Twitter, and YouTube. These applications allow users to create content free of cost, and this content becomes a huge volume of web data. These data are important in the competitive business world for making decisions. In this context, big data mining, which differs from traditional data mining, plays a major role. Big data mining is the process of extracting useful information from datasets or streams of data so demanding, in terms of volume, velocity, variety, validity, veracity, value, and visibility, that traditional methods cannot handle them.
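One concrete way big data mining differs from traditional mining is that a stream may be too large to store, so algorithms must work in bounded memory. A minimal sketch of this idea is the Misra-Gries heavy-hitters algorithm, which is not from this chapter but is a standard stream-mining technique; the click-stream data below is invented for illustration.

```python
from typing import Dict, Iterable

def misra_gries(stream: Iterable[str], k: int) -> Dict[str, int]:
    """Find candidate items occurring more than n/k times in a stream of
    n items, using only k-1 counters no matter how long the stream is."""
    counters: Dict[str, int] = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k - 1:
            counters[item] = 1
        else:
            # Stream item matched no counter and all slots are taken:
            # decrement every counter and evict those that reach zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

# Illustrative page-view stream: 100 events, "home" dominates.
clicks = ["home"] * 60 + ["cart"] * 25 + ["faq"] * 10 + ["blog"] * 5
candidates = misra_gries(clicks, k=3)
print(candidates)
```

Any item appearing more than n/k times (here, more than about 33 of 100 events) is guaranteed to survive as a candidate, while memory stays fixed at k-1 counters; a second pass over the stream can verify exact counts if needed.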


Author(s):  
Adem Çabuk ◽  
Alp Aytaç

Massive usage of the internet and digital devices makes it easier to access desired information. In the past, auditing was a periodic, reactive process, but this must change. Today the volume, velocity, variety, veracity, and value of information, the main criteria of big data, are crucial. Decision makers demand timely, true, and reliable information. This need has affected every sector, including auditing. For this reason, continuous auditing systems have come up for debate in the big data era. The main aim of this chapter is to shed light on how traditional auditing has been transformed into continuous auditing and where big data stands in this transformation. It is concluded that, even though many obstacles arise, continuous auditing systems and harvesting the benefits of big data are crucial to gaining a competitive advantage. Moreover, by using big data analytics and a continuous auditing system together, management and shareholders gain detailed information about the company's present situation and future direction.
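The shift from periodic to continuous auditing can be sketched as rules evaluated against every transaction as it arrives, rather than sampling records after the fact. This is a minimal illustrative sketch, not the chapter's system; the rule names, thresholds, and transaction feed are all invented for the example.

```python
from datetime import datetime
from typing import Dict, List, Optional

# Each rule inspects one transaction record and returns an exception
# message, or None if the record passes.
def over_limit(txn: Dict) -> Optional[str]:
    return "amount exceeds approval limit" if txn["amount"] > 10_000 else None

def off_hours(txn: Dict) -> Optional[str]:
    hour = datetime.fromisoformat(txn["timestamp"]).hour
    return "posted outside business hours" if hour < 8 or hour >= 18 else None

RULES = [over_limit, off_hours]

def audit_stream(transactions) -> List[Dict]:
    """Evaluate every incoming transaction against all rules as it
    arrives, instead of sampling a subset once a quarter."""
    exceptions: List[Dict] = []
    for txn in transactions:
        findings = [msg for rule in RULES if (msg := rule(txn))]
        if findings:
            exceptions.append({"id": txn["id"], "findings": findings})
    return exceptions

feed = [
    {"id": "T1", "amount": 2_500, "timestamp": "2024-03-01T10:15:00"},
    {"id": "T2", "amount": 18_000, "timestamp": "2024-03-01T23:40:00"},
]
flagged = audit_stream(feed)
print(flagged)
```

Only T2 is flagged, and for both rules at once; in a production system the exception list would feed an auditor's dashboard in near real time, which is the timeliness benefit the chapter argues for.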


Author(s):  
Reema Abdulraziq ◽  
Muneer Bani Yassein ◽  
Shadi Aljawarneh

Big data refers to the huge amount of data being used in commercial, industrial, and economic environments. There are three types of big data: structured, unstructured, and semi-structured. In discussions of big data, three major aspects that can be considered its main dimensions are the volume, velocity, and variety of the data. This data is collected, analysed, and checked for use by end users. Cloud computing and the Internet of Things (IoT) enable this huge amount of collected data to be stored and connected to the Internet. These technologies reduce time and cost, and they can accommodate the collected data regardless of its size. This chapter focuses on how big data, with the emergence of cloud computing and the IoT, can be used via several applications and technologies.
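One cost-reduction pattern behind IoT-to-cloud pipelines is batching: an edge gateway buffers readings locally and uploads them in bulk instead of one network call per sensor event. The sketch below is hypothetical (the `EdgeGateway` class and its in-memory "cloud store" are invented for illustration), not an architecture from this chapter.

```python
import json
from typing import List

class EdgeGateway:
    """Illustrative IoT gateway: buffer sensor readings locally and
    flush them to cloud storage in batches, trading a little latency
    for far fewer (and cheaper) uplink transmissions."""

    def __init__(self, batch_size: int) -> None:
        self.batch_size = batch_size
        self.buffer: List[dict] = []
        self.uploads: List[str] = []   # stand-in for a cloud object store

    def ingest(self, reading: dict) -> None:
        self.buffer.append(reading)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        if self.buffer:
            # One serialized batch per upload instead of one per reading.
            self.uploads.append(json.dumps(self.buffer))
            self.buffer = []

gateway = EdgeGateway(batch_size=50)
for i in range(120):
    gateway.ingest({"sensor": "temp-01", "seq": i, "celsius": 21.5})
gateway.flush()                      # push the final partial batch
print(len(gateway.uploads))          # 3 uploads for 120 readings
```

The same 120 readings reach the cloud either way; batching simply cuts 120 transmissions down to 3, which is where the time and cost savings the abstract mentions come from.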


Author(s):  
George Avirappattu

Big data is characterized in many circles in terms of the three V's: volume, velocity, and variety. Although most of us can sense the palpable opportunities presented by big data, there are overwhelming challenges at many levels in turning such data into actionable information or building entities that work together efficiently on that basis. This chapter discusses ways to potentially reduce the volume and velocity of certain kinds of data (those with sparsity and structure) during acquisition itself. Such reduction can alleviate the challenges to some extent at all levels, especially during the storage, retrieval, communication, and analysis phases. In this chapter we conduct a non-technical survey, bringing together ideas from recent and current developments. We focus primarily on compressive sensing and the sparse Fast Fourier Transform (sparse FFT). Almost all natural signals and data streams are known to have some level of sparsity and structure, which is key for these efficiencies to take place.
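The sparsity idea the chapter relies on can be demonstrated with an ordinary FFT: a signal built from a few sinusoids has almost all of its energy in a handful of Fourier coefficients, so keeping only those coefficients loses essentially nothing. This sketch uses a dense FFT to show the principle (a true sparse FFT computes those few coefficients directly, in sub-linear time); the signal and the choice k = 4 are invented for the example.

```python
import numpy as np

# A length-n signal built from two sinusoids at integer frequency bins
# is exactly 4-sparse in the Fourier domain (a conjugate pair each).
n = 256
t = np.arange(n)
signal = 3.0 * np.sin(2 * np.pi * 5 * t / n) + 1.5 * np.sin(2 * np.pi * 40 * t / n)

spectrum = np.fft.fft(signal)

# Keep only the k largest-magnitude coefficients and zero the rest --
# the compression a sparse FFT exploits to cut volume and velocity.
k = 4
keep = np.argsort(np.abs(spectrum))[-k:]
compressed = np.zeros_like(spectrum)
compressed[keep] = spectrum[keep]

reconstructed = np.fft.ifft(compressed).real
error = np.max(np.abs(signal - reconstructed))
print(f"kept {k}/{n} coefficients, max reconstruction error = {error:.2e}")
```

Storing 4 complex numbers instead of 256 samples reconstructs the signal to within floating-point error; for real-world data the spectrum is only approximately sparse, so the same truncation gives a controllable, rather than negligible, error.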


2022 ◽  
pp. 590-621
Author(s):  
Obinna Chimaobi Okechukwu

In this chapter, a discussion is presented of the latest tools and techniques available for Big Data visualization. These tools and techniques need to be understood properly in order to analyze Big Data. Big Data is a whole new paradigm in which huge sets of data are generated and analyzed based on volume, velocity, and variety. Conventional data analysis methods are incapable of processing data of this dimension; hence, it is fundamentally important to be familiar with new tools and techniques capable of processing these datasets. This chapter illustrates tools available to analysts for processing and presenting Big Data sets in ways that can be used to make appropriate decisions. Some of these tools (e.g., Tableau, RapidMiner, and RStudio) have phenomenal capabilities to visualize processed data in ways traditional tools cannot. The chapter also aims to explain the differences between these tools and their utility in various scenarios.


2016 ◽  
Vol 11 (02) ◽  
Author(s):  
Ruchi Saxena

Bernard Marr describes Big Data by five V's: Volume, Velocity, Value, Veracity, and Variety. As high-volume data of different varieties travels at uncertain speeds through an architecture designed for data flow, it is essential to extract information from that data which can later help in creating effective models. Scaling is the process of turning big data into a meaningful information model. In this paper we study scalability in detail, along with the hardware requirements for achieving it. The study also covers the issues of scalability.
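Scalability in this sense usually means scaling out: partitioning the data across workers so each processes its own shard, then merging the partial results. The map/reduce-style word count below is a minimal illustrative sketch of that pattern (the `partition` and `map_shard` helpers and the log data are invented for the example), not a design from this paper.

```python
from collections import Counter
from functools import reduce
from typing import Iterable, List

def partition(records: Iterable[str], n_workers: int) -> List[List[str]]:
    """Split a record stream into n_workers roughly equal shards --
    the basic move behind horizontal (scale-out) architectures."""
    shards: List[List[str]] = [[] for _ in range(n_workers)]
    for i, record in enumerate(records):
        shards[i % n_workers].append(record)
    return shards

def map_shard(shard: List[str]) -> Counter:
    # Each worker counts words in its own shard independently
    # (the "map" step), so shards can run on separate machines.
    counts: Counter = Counter()
    for line in shard:
        counts.update(line.split())
    return counts

log_lines = ["error disk full", "ok", "error timeout", "ok", "ok"] * 1000
shards = partition(log_lines, n_workers=4)
partials = [map_shard(s) for s in shards]          # parallelizable
total = reduce(lambda a, b: a + b, partials)       # the "reduce" step
print(total["ok"], total["error"])                 # 3000 2000
```

Because the map step touches each shard independently, doubling the worker count roughly halves the wall-clock time of that phase; the hardware requirements the paper studies govern how far this scale-out can be pushed.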

