Discussion on Big Data: TDFS vs. HDFS

2019 ◽  
Vol 8 (4) ◽  
pp. 10051-10056

In recent years, big data has come to mean very large collections of data that are analyzed to uncover hidden attributes. Today's technologies make it possible to analyze such data and obtain results almost immediately. Why is big data important? Because it enables cost reduction and faster, better decision making, for example with Hadoop. A warehouse of terabytes of data is generated daily by social media platforms such as Twitter, LinkedIn, and Facebook, which are typical sources of big data in the person-to-person communication area. Big data poses three major challenges: Volume, Variety, and Velocity. In this paper we study the performance of a Traditional Distributed File System (TDFS) and the Hadoop Distributed File System (HDFS). One benefit of HDFS over TDFS is its support for the Flume tool in the Hadoop ecosystem. Memory block size, data retrieval time, and security are used as metrics in evaluating the performance of TDFS and HDFS. The results show that HDFS performs better than TDFS on these metrics and that HDFS is more suitable for big data analysis than TDFS.

2016 ◽  
pp. 1220-1243
Author(s):  
Ilias K. Savvas ◽  
Georgia N. Sofianidou ◽  
M-Tahar Kechadi

Big data refers to data sets whose size is beyond the capabilities of most current hardware and software technologies. The Apache Hadoop software library is a framework for distributed processing of large data sets; HDFS is a distributed file system that provides high-throughput access for data-driven applications, and MapReduce is a software framework for distributed computing over large data sets. Huge collections of raw data require fast and accurate mining processes in order to extract useful knowledge. One of the most popular data mining techniques is the K-means clustering algorithm. In this study, the authors develop a distributed version of the K-means algorithm using the MapReduce framework on the Hadoop Distributed File System. The theoretical and experimental results of the technique prove its efficiency; thus, HDFS and MapReduce can be applied to big data with very promising results.
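
The MapReduce formulation of K-means is not spelled out in the abstract, so the following is only a minimal sketch of one iteration, assuming points are stored one comma-separated vector per line in HDFS and the current centroids are passed in through the job configuration; it is not the authors' actual implementation.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class KMeansIteration {

  // Mapper: assign each point (one comma-separated vector per line) to its nearest centroid.
  public static class AssignMapper extends Mapper<LongWritable, Text, IntWritable, Text> {
    private final List<double[]> centroids = new ArrayList<>();

    @Override
    protected void setup(Context ctx) {
      // Hypothetical: current centroids are passed as "c1;c2;..." in the job configuration.
      for (String c : ctx.getConfiguration().get("kmeans.centroids").split(";")) {
        centroids.add(parse(c));
      }
    }

    @Override
    protected void map(LongWritable key, Text value, Context ctx)
        throws IOException, InterruptedException {
      double[] p = parse(value.toString());
      int best = 0;
      double bestDist = Double.MAX_VALUE;
      for (int i = 0; i < centroids.size(); i++) {
        double d = squaredDistance(p, centroids.get(i));
        if (d < bestDist) { bestDist = d; best = i; }
      }
      ctx.write(new IntWritable(best), value);
    }
  }

  // Reducer: recompute each centroid as the mean of the points assigned to it.
  public static class RecomputeReducer extends Reducer<IntWritable, Text, IntWritable, Text> {
    @Override
    protected void reduce(IntWritable clusterId, Iterable<Text> points, Context ctx)
        throws IOException, InterruptedException {
      double[] sum = null;
      long n = 0;
      for (Text t : points) {
        double[] p = parse(t.toString());
        if (sum == null) sum = new double[p.length];
        for (int i = 0; i < p.length; i++) sum[i] += p[i];
        n++;
      }
      StringBuilder out = new StringBuilder();
      for (int i = 0; i < sum.length; i++) {
        if (i > 0) out.append(',');
        out.append(sum[i] / n);
      }
      ctx.write(clusterId, new Text(out.toString()));
    }
  }

  static double[] parse(String line) {
    String[] parts = line.split(",");
    double[] v = new double[parts.length];
    for (int i = 0; i < parts.length; i++) v[i] = Double.parseDouble(parts[i].trim());
    return v;
  }

  static double squaredDistance(double[] a, double[] b) {
    double s = 0;
    for (int i = 0; i < a.length; i++) { double d = a[i] - b[i]; s += d * d; }
    return s;
  }
}

A driver program would submit this job repeatedly, feeding each iteration's output centroids back into the configuration until they stop moving or a maximum number of iterations is reached.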


2019 ◽  
Vol 16 (9) ◽  
pp. 3824-3829
Author(s):  
Deepak Ahlawat ◽  
Deepali Gupta

Due to advances in the technological world, there is a great surge in data. The main sources generating such a large amount of data are social websites, internet sites, etc. Large data files are combined to create a big data architecture, and managing data files of such volume is not easy. Therefore, modern techniques have been developed to manage bulk data. To organize and utilize such big data, Hadoop provides the Hadoop Distributed File System (HDFS) architecture, which is used when traditional methods are insufficient to manage the data. In this paper, a novel clustering algorithm is implemented to manage a large amount of data. The concepts and frames of Big Data are studied, and a novel algorithm is developed using K-means together with cosine-similarity-based clustering. The developed clustering algorithm is evaluated using the precision and recall parameters, and the results obtained successfully address the big data management issue.
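
The abstract does not say exactly how K-means and cosine similarity are combined, so the snippet below only illustrates the assignment step such a variant could use, with made-up data: each point is assigned to the centroid with the highest cosine similarity rather than the smallest Euclidean distance.

public class CosineAssignment {

  // Cosine similarity between two vectors: dot(a, b) / (|a| * |b|).
  static double cosine(double[] a, double[] b) {
    double dot = 0, na = 0, nb = 0;
    for (int i = 0; i < a.length; i++) {
      dot += a[i] * b[i];
      na  += a[i] * a[i];
      nb  += b[i] * b[i];
    }
    return dot / (Math.sqrt(na) * Math.sqrt(nb));
  }

  // Assign a point to the centroid with the highest cosine similarity.
  static int assign(double[] point, double[][] centroids) {
    int best = 0;
    double bestSim = -1.0;
    for (int i = 0; i < centroids.length; i++) {
      double sim = cosine(point, centroids[i]);
      if (sim > bestSim) { bestSim = sim; best = i; }
    }
    return best;
  }

  public static void main(String[] args) {
    double[][] centroids = { {1.0, 0.0, 0.0}, {0.0, 1.0, 1.0} };
    double[] doc = {0.2, 0.9, 0.7};
    System.out.println("Nearest centroid: " + assign(doc, centroids)); // prints 1
  }
}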


Big data security is currently one of the most studied research issues because of the increasing size and complexity of the data being handled. Ensuring security in big data processing is made harder by its 4V characteristics. With the aim of ensuring security and flexible encryption on big data with reduced computation overhead, this work presents an encryption framework (MRS) built on the Hadoop Distributed File System (HDFS). The MapReduce paradigm requires network-attached storage in addition to parallel processing, and HDFS is extensively used for storing as well as handling big data. The proposed method creates a framework that obtains data from the client, examines the received data, extracts the privacy policy, and then identifies the sensitive data. Security is guaranteed in this framework using a key rotation algorithm, an efficient encryption and decryption technique for safeguarding big data. Data encryption protects data in storage, with the encryption key saved and accessible so that the data can be reused when required. The results show that the method guarantees greater security for enormous amounts of data and provides useful information to the relevant clients; the proposed method is therefore superior to the previous method. Finally, this research can be applied effectively in domains such as health care, education, and social networking, which require stronger security and handle increasing volumes of data.
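
The MRS framework and its key rotation algorithm are not specified in detail here, so the following is only a generic illustration of the key-rotation idea under an assumed AES-GCM record encryption: a record stored under an old key is decrypted and re-encrypted under a fresh key without the plaintext ever changing.

import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class KeyRotationSketch {

  static final int IV_LEN = 12;      // 96-bit IV, the recommended size for GCM
  static final int TAG_BITS = 128;   // authentication tag length

  static byte[] encrypt(SecretKey key, byte[] plain) throws Exception {
    byte[] iv = new byte[IV_LEN];
    new SecureRandom().nextBytes(iv);
    Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
    c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, iv));
    byte[] ct = c.doFinal(plain);
    // Prepend the IV so the stored ciphertext is self-contained.
    byte[] out = new byte[IV_LEN + ct.length];
    System.arraycopy(iv, 0, out, 0, IV_LEN);
    System.arraycopy(ct, 0, out, IV_LEN, ct.length);
    return out;
  }

  static byte[] decrypt(SecretKey key, byte[] blob) throws Exception {
    Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
    c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, blob, 0, IV_LEN));
    return c.doFinal(blob, IV_LEN, blob.length - IV_LEN);
  }

  // "Rotation": decrypt a stored record under the old key, re-encrypt under a fresh key.
  static byte[] rotate(SecretKey oldKey, SecretKey newKey, byte[] blob) throws Exception {
    return encrypt(newKey, decrypt(oldKey, blob));
  }

  public static void main(String[] args) throws Exception {
    KeyGenerator kg = KeyGenerator.getInstance("AES");
    kg.init(256);
    SecretKey k1 = kg.generateKey(), k2 = kg.generateKey();

    byte[] record = "sensitive field extracted by the privacy policy".getBytes(StandardCharsets.UTF_8);
    byte[] stored = encrypt(k1, record);
    byte[] rotated = rotate(k1, k2, stored);
    System.out.println(new String(decrypt(k2, rotated), StandardCharsets.UTF_8));
  }
}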


2020 ◽  
Vol 6 ◽  
pp. e259
Author(s):  
Gayatri Kapil ◽  
Alka Agrawal ◽  
Abdulaziz Attaallah ◽  
Abdullah Algarni ◽  
Rajeev Kumar ◽  
...  

Hadoop has become a promising platform to reliably process and store big data. It provides flexible and low-cost services for huge volumes of data through Hadoop Distributed File System (HDFS) storage. Unfortunately, the absence of any inherent security mechanism in Hadoop increases the possibility of malicious attacks on the data processed or stored through Hadoop. In this scenario, securing the data stored in HDFS becomes a challenging task. Hence, researchers and practitioners have intensified their efforts in working on mechanisms that would protect the user's information collated in HDFS. This has led to the development of numerous encryption-decryption algorithms, but their performance decreases as the file size increases. In the present study, the authors outline a methodology to solve the issue of data security in Hadoop storage. The authors integrate Attribute Based Encryption with honey encryption on Hadoop, i.e., Attribute Based Honey Encryption (ABHE). This approach works on files that are encoded inside HDFS and decoded inside the Mapper. In addition, the authors evaluate the proposed ABHE algorithm by performing encryption-decryption on files of different sizes and compare it with existing algorithms, including AES and AES with OTP. The ABHE algorithm shows considerable improvement in performance during the encryption-decryption of files.
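
The ABHE construction itself is not reproduced here; the sketch below only illustrates the surrounding pattern the abstract describes, in which records sit encrypted in HDFS and are turned into plaintext only inside the Mapper. The cipher (AES-GCM), the record layout, and the way the key reaches the task are all assumptions made for illustration.

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class DecryptingMapper extends Mapper<LongWritable, Text, NullWritable, Text> {
  private SecretKeySpec key;

  @Override
  protected void setup(Context ctx) {
    // Hypothetical: a per-job data key shipped to the task via the job credentials.
    byte[] raw = ctx.getCredentials().getSecretKey(new Text("data.key"));
    key = new SecretKeySpec(raw, "AES");
  }

  @Override
  protected void map(LongWritable offset, Text line, Context ctx)
      throws IOException, InterruptedException {
    try {
      // Assumed record layout per input line: base64(iv) ':' base64(ciphertext).
      String[] parts = line.toString().split(":", 2);
      byte[] iv = Base64.getDecoder().decode(parts[0]);
      byte[] ct = Base64.getDecoder().decode(parts[1]);
      Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
      c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
      ctx.write(NullWritable.get(), new Text(new String(c.doFinal(ct), StandardCharsets.UTF_8)));
    } catch (Exception e) {
      throw new IOException("record could not be decrypted", e);
    }
  }
}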


The Hadoop Distributed File System, popularly known as HDFS, is a Java-based distributed file system that runs on commodity machines. HDFS is meant for storing Big Data across distributed commodity machines and getting work done faster by processing the data in a distributed manner. Basically, HDFS has one name node (master node) and a cluster of data nodes (slave nodes). HDFS files are divided into blocks; the block is the basic unit in which data is stored and transferred (64 MB by default in early Hadoop versions, 128 MB in Hadoop 2 and later). The functions of the name node are to manage the slave nodes, maintain the file system namespace, control client access, and control replication. To ensure the availability of the name node, a standby name node is deployed through failover control, and fencing is performed so that the previously active name node cannot keep serving requests during failover. The functions of the data nodes are to store the data, serve read and write requests, replicate blocks, report the liveness of the node, enforce the storage policy, and maintain the block cache. Together, these mechanisms ensure the availability of the data.
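
For orientation, here is a minimal client-side sketch of that interaction using the standard HDFS Java FileSystem API; the name node URI and the file path are placeholders.

import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class HdfsReadWrite {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Placeholder address of the name node; normally picked up from core-site.xml.
    conf.set("fs.defaultFS", "hdfs://namenode:8020");
    FileSystem fs = FileSystem.get(conf);

    Path file = new Path("/user/demo/hello.txt");

    // Write: the name node allocates blocks; the bytes stream to data nodes (and replicas).
    try (FSDataOutputStream out = fs.create(file, true)) {
      out.write("hello from HDFS".getBytes(StandardCharsets.UTF_8));
    }

    // Read: block locations come from the name node; data streams from the data nodes.
    try (FSDataInputStream in = fs.open(file)) {
      IOUtils.copyBytes(in, System.out, 4096, false);
    }

    fs.close();
  }
}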


2018 ◽  
Vol 3 (1) ◽  
pp. 49-60
Author(s):  
M. Elshayeb ◽  
Leelavathi Rajamanickam

Big Data refers to large-scale information management and analysis technologies that exceed the capability of traditional data processing technologies. In order to analyse complex data and identify patterns, it is very important to securely store, manage, and share large amounts of complex data. In recent years, as databases have grown in size and variety (text, images, and videos), in huge volumes and with high velocity, data-intensive internet services have come to the leading edge. Apache's Hadoop Distributed File System (HDFS) is emerging as an outstanding software component for cloud computing, combined with integrated pieces such as MapReduce. Hadoop is an open-source implementation of Google's MapReduce that includes a distributed file system and presents programmers with the map and reduce abstractions. This research reviews the security approaches for the Big Data Hadoop Distributed File System and identifies the best security solution; it will also help businesses through big data visualization, which supports better data analysis. In today's data-centric world, big-data processing and analytics have become critical to most enterprise and government applications.


2020 ◽  
Vol 4 (2) ◽  
Author(s):  
Irfan Rizqi Prabaswara ◽  
Ragil Saputra

Big data is a data source characterized by large volume, wide variety, and very fast data flow. Examples of big data include data from social media and Google search queries. Such data can track disease activity and is available at all times. Processing big data is not easy, so tools are needed to support it; one such tool is Hadoop. Although Hadoop outperforms traditional RDBMSs, data processing with Hadoop is still not optimal, so faster processing is needed. One way to increase processing speed is to apply Spark to the data stored in HDFS (Hadoop Distributed File System). In this study, trend plotting and mapping were carried out on Dengue Hemorrhagic Fever (DHF) data obtained from the social media platform Twitter. The study aims to visualize data obtained from Twitter using Hadoop and Spark in order to monitor the development of DHF in the Southeast Asia region. The trend plots show a strong relationship between the Twitter data and the actual DHF incidence data obtained from the WHO. The study also benchmarked the performance of Hadoop and Spark: the larger the executor memory allocation, and the larger and more closely matched the maximum scheduler memory allocation on each node, the shorter the time needed to complete a task. However, at a certain point the Hadoop and Spark configuration reaches its peak, so further increasing the allocation yields the same result.
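
As a rough illustration of the Spark-on-HDFS processing step described above, the sketch below loads tweets from HDFS, filters dengue-related ones, and counts them per month to feed a trend plot; the input path, JSON layout, and executor memory setting are assumptions, not the study's actual pipeline.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.*;

public class DengueTrend {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder()
        .appName("dengue-trend")
        // Illustrative resource setting; the study tuned executor memory per node.
        .config("spark.executor.memory", "2g")
        .getOrCreate();

    // Hypothetical input: JSON tweets with "created_at" and "text" fields, stored in HDFS.
    Dataset<Row> tweets = spark.read().json("hdfs://namenode:8020/data/tweets/*.json");

    Dataset<Row> trend = tweets
        .filter(lower(col("text")).rlike("dengue|dbd|demam berdarah"))
        .withColumn("month", date_format(to_timestamp(col("created_at")), "yyyy-MM"))
        .groupBy("month")
        .count()
        .orderBy("month");

    trend.show(36, false);
    spark.stop();
  }
}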

