Comparison of NoSQL Database and Traditional Database-An emphatic analysis

2018 ◽  
Vol 2 (2) ◽  
pp. 51
Author(s):  
M. Sandeep Kumar ◽  
Prabhu .J

Huge amounts of data are generated by web applications such as Facebook, Twitter, and other social sites, and most of this data is unstructured. Relational databases are not well suited to storing, processing, and analyzing such large volumes of data, which motivates the use of NoSQL databases for handling big data. In this paper, we present the performance of store and query operations in a NoSQL database, measuring both read and write performance using simple and complex queries. The results show that Cassandra outperforms the relational database. Many organizations adopt only HBase and Cassandra because of their cost benefits. We also compare various NoSQL databases and discuss issues encountered when working with them.
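The read/write comparison described above can be reproduced at small scale. The sketch below is a minimal micro-benchmark, assuming a local Cassandra instance with a keyspace `test_ks` and a table `users(id int PRIMARY KEY, name text)` already created, with SQLite standing in for the relational side; it uses the real `cassandra-driver` and standard-library `sqlite3` APIs, and the keyspace, table, and row counts are illustrative, not the authors' setup.

```python
import time
import sqlite3
from cassandra.cluster import Cluster  # pip install cassandra-driver

N = 1000  # number of rows to write and read back

# --- Cassandra side (assumes keyspace test_ks and table users already exist) ---
session = Cluster(["127.0.0.1"]).connect("test_ks")
insert_stmt = session.prepare("INSERT INTO users (id, name) VALUES (?, ?)")

start = time.perf_counter()
for i in range(N):
    session.execute(insert_stmt, (i, f"user-{i}"))
cassandra_write = time.perf_counter() - start

start = time.perf_counter()
for i in range(N):
    session.execute("SELECT name FROM users WHERE id = %s", (i,))
cassandra_read = time.perf_counter() - start

# --- Relational side (SQLite as a stand-in relational database) ---
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

start = time.perf_counter()
for i in range(N):
    conn.execute("INSERT INTO users (id, name) VALUES (?, ?)", (i, f"user-{i}"))
conn.commit()
sql_write = time.perf_counter() - start

start = time.perf_counter()
for i in range(N):
    conn.execute("SELECT name FROM users WHERE id = ?", (i,)).fetchone()
sql_read = time.perf_counter() - start

print(f"Cassandra  write: {cassandra_write:.3f}s  read: {cassandra_read:.3f}s")
print(f"Relational write: {sql_write:.3f}s  read: {sql_read:.3f}s")
```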

Author(s):  
Emrah Inan ◽  
Burak Yonyul ◽  
Fatih Tekbacak

Most of the data on the web is unstructured, and it must be transformed into a machine-operable structure. It is therefore appropriate to convert unstructured data into a structured form according to the requirements and to store it in different data models chosen by use case. As requirements and their types multiply, no single approach can serve them all, so it is not suitable to rely on a single storage technology to meet every storage requirement. Managing stores with various types of schemas in a joint and integrated manner is known as a 'multistore' or 'polystore' in the database literature. In this paper, the Entity Linking task is leveraged to transform texts into well-formed data, and this data is managed by an integrated environment of different data models. Finally, this integrated big data environment is queried and examined using the presented method.
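To make the multistore/polystore idea concrete, the sketch below routes entity-linked records to different backends according to their shape. It is only an illustration of the routing principle, not the authors' system: the `link_entities` helper is a hypothetical stand-in for an entity-linking service, and the document and graph stores are represented by plain in-memory structures.

```python
from collections import defaultdict

def link_entities(text):
    """Hypothetical stand-in for an entity-linking step: returns a document
    record plus (entity, relation, entity) triples extracted from the text."""
    return {
        "document": {"text": text, "entities": ["Cassandra", "MongoDB"]},
        "triples": [("Cassandra", "is_a", "NoSQL database"),
                    ("MongoDB", "is_a", "NoSQL database")],
    }

# Two toy "stores": a document model and a graph (triple) model.
document_store = []                 # stands in for e.g. a document database
graph_store = defaultdict(list)     # stands in for e.g. a graph/triple store

def route(record):
    """Polystore-style routing: each data shape goes to the model that fits it."""
    document_store.append(record["document"])
    for subj, pred, obj in record["triples"]:
        graph_store[subj].append((pred, obj))

route(link_entities("Cassandra and MongoDB are NoSQL databases."))
print(len(document_store), dict(graph_store))
```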


Author(s):  
Vinay Kumar ◽  
Arpana Chaturvedi

With the advent of social networking sites (SNS), volumes of data are generated daily. Most of these data are multimedia and unstructured, and they grow exponentially. This exponential growth in the variety, volume, and complexity of structured and unstructured data leads to the concept of big data. Managing big data and harnessing its benefits is a real challenge. As access to big data repositories increases across applications, security and access control are further aspects that must be considered when managing big data. We discuss the areas of application of big data, the opportunities it provides, and the challenges faced in managing such huge amounts of data for various applications. Issues related to securing big data against different threat perceptions are also discussed.


Author(s):  
Caio Saraiva Coneglian ◽  
Elvis Fusco

The data available on the Web is growing exponentially, providing information of high added value to organizations. Such information can be held in diverse repositories and in varied formats, such as videos and photos on social media. However, unstructured data makes information retrieval difficult and fails to meet users' informational needs efficiently, because the meaning of documents stored on the Web is hard to interpret. In the context of an Information Retrieval architecture, this research proposes the implementation of a semantic extraction agent for the Web that allows the location, treatment, and retrieval of information in Big Data scenarios across the most varied informational sources. This agent serves as the basis for informational environments that aid the Information Retrieval process, using ontology to add semantics to retrieval and to the presentation of results, thus meeting users' needs.
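As a small illustration of how an ontology can add semantics to retrieval, the sketch below builds a tiny RDF graph and answers a SPARQL query over it using the rdflib library; the vocabulary (the `ex:` namespace and the `Video` and `Photo` classes) is invented for the example and is not the ontology used by the authors.

```python
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/media#")

g = Graph()
g.bind("ex", EX)

# Minimal "ontology-annotated" descriptions of two web resources.
g.add((EX.clip1, RDF.type, EX.Video))
g.add((EX.clip1, EX.topic, Literal("big data")))
g.add((EX.img1, RDF.type, EX.Photo))
g.add((EX.img1, EX.topic, Literal("social media")))

# Semantic retrieval: find all resources whose topic is "big data",
# regardless of whether they are videos or photos.
results = g.query("""
    PREFIX ex: <http://example.org/media#>
    SELECT ?resource ?kind WHERE {
        ?resource ex:topic "big data" ;
                  a ?kind .
    }
""")
for row in results:
    print(row.resource, row.kind)
```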


Author(s):  
Sachin Arun Thanekar ◽  
K. Subrahmanyam ◽  
A. B. Bagwan

Nowadays we are all surrounded by big data. The term 'Big Data' itself indicates huge volume, high velocity, variety, and veracity (i.e., uncertainty) of data, which gives rise to new difficulties and challenges. The big data generated may be structured, semi-structured, or unstructured. Existing databases and systems face many difficulties in processing, analyzing, storing, and managing such big data. The big data challenges include protection, curation, capture, analysis, searching, visualization, storage, transfer, and sharing. MapReduce is a framework with which we can write applications that process huge amounts of data in parallel, on large clusters of commodity hardware, in a reliable manner. Different researchers have put considerable effort into making it simple, easy, effective, and efficient. In our survey paper, we emphasize the working of MapReduce, its challenges, opportunities, and recent trends, so that researchers can consider further improvements.
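To illustrate the MapReduce programming model described above, the sketch below implements the classic word-count example as a single-process simulation in plain Python; a real deployment would run the same map and reduce logic distributed across a Hadoop or similar cluster.

```python
from collections import defaultdict

def map_phase(document):
    """Map: emit (word, 1) for every word in the document."""
    for word in document.lower().split():
        yield word, 1

def shuffle(mapped_pairs):
    """Shuffle: group all values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in mapped_pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    """Reduce: sum the counts for each word."""
    return key, sum(values)

documents = ["big data needs new tools", "map reduce processes big data"]

mapped = [pair for doc in documents for pair in map_phase(doc)]
reduced = dict(reduce_phase(k, v) for k, v in shuffle(mapped).items())
print(reduced)  # e.g. {'big': 2, 'data': 2, ...}
```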


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Yixuan Zhao ◽  
Qinghua Tang

Big data refers to large-scale, rapidly growing collections of information whose size and complexity prevent them from being easily stored or processed by conventional data-processing tools. Big data research methods have been widely adopted across disciplines, and methods based on massive data analysis have aroused great interest in scientific methodology. In this paper, we propose a deep computational model to analyze the factors that affect social mental health. The proposed model utilizes a large manually annotated microblog dataset, which is divided according to six main factors that affect social mental health: the economic market, political democracy, management and law, cultural trends, the expansion of the information level, and the fast rhythm of life. The proposed model compares review data across these influencing factors to obtain the degree of correlation between social mental health and each factor.
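The correlation analysis at the end of the abstract can be sketched very simply: given per-factor scores derived from annotated microblog posts and an overall mental-health index, one can compute a correlation coefficient per factor. The snippet below is a minimal illustration with made-up numbers using NumPy's Pearson correlation; it is not the authors' deep model, and the factor names and values are placeholders.

```python
import numpy as np

# Placeholder data: one score per time window for each influencing factor,
# plus an overall mental-health index for the same windows.
factor_scores = {
    "economic_market":   np.array([0.2, 0.4, 0.5, 0.7, 0.9]),
    "pace_of_life":      np.array([0.1, 0.3, 0.6, 0.6, 0.8]),
    "information_level": np.array([0.5, 0.5, 0.4, 0.6, 0.7]),
}
mental_health_index = np.array([0.8, 0.7, 0.5, 0.4, 0.2])

# Pearson correlation between each factor and the mental-health index.
for name, scores in factor_scores.items():
    r = np.corrcoef(scores, mental_health_index)[0, 1]
    print(f"{name}: r = {r:+.2f}")
```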


Author(s):  
Eman A. Khashan ◽  
Ali I. El Desouky ◽  
Sally M. Elghamrawy

The growth of data on the web poses major challenges. The volume of stored data and the variety of query data sources have become essential features of big data systems. A large number of platforms are used to handle NoSQL database models, such as Spark, H2O, and Hadoop HDFS/MapReduce, which are suitable for controlling and managing large amounts of big data. Developers of different applications face difficult tasks when interacting with mixed data models through different APIs and query languages. In this paper, a Complex SQL Query and NoSQL (CQNS) framework is proposed that acts as an interpreter, sending complex queries received for any data store to its corresponding execution engine. The proposed framework supports application queries and database transformation at the same time, which in turn speeds up the process. Moreover, CQNS handles several NoSQL databases, such as MongoDB and Cassandra. The paper provides a Spark-based framework that can handle both SQL and NoSQL databases. This work also examines the importance of MongoDB block sharding and composition, while the Cassandra database is handled with two types of partitioning, vertex and edge partitioning. Four benchmark dataset scenarios are used to evaluate the proposed CQNS in querying the various NoSQL databases, in terms of optimization performance and query execution time. The results show that, among the compared systems, CQNS achieves optimal latency and throughput in less time.
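The idea of a single interface dispatching queries to different NoSQL engines, as CQNS does, can be illustrated with a small routing layer. The sketch below is not the CQNS implementation; it simply shows one logical query sent to either MongoDB (via the real `pymongo` API) or Cassandra (via the real `cassandra-driver` API), with the connection details, keyspace, database, and table/collection names assumed purely for the example.

```python
from pymongo import MongoClient        # pip install pymongo
from cassandra.cluster import Cluster  # pip install cassandra-driver

def find_users_by_city(backend, city):
    """Dispatch one logical query to the chosen NoSQL backend."""
    if backend == "mongodb":
        # Assumes a local MongoDB with database 'appdb' and collection 'users'.
        client = MongoClient("mongodb://localhost:27017")
        return list(client.appdb.users.find({"city": city}, {"_id": 0, "name": 1}))
    elif backend == "cassandra":
        # Assumes a local Cassandra keyspace 'appks' with table users(city, name, ...)
        # where 'city' is part of the primary key so it can be filtered efficiently.
        session = Cluster(["127.0.0.1"]).connect("appks")
        rows = session.execute("SELECT name FROM users WHERE city = %s", (city,))
        return [{"name": row.name} for row in rows]
    raise ValueError(f"unknown backend: {backend}")

# Usage: the caller issues one logical query; the router picks the engine.
print(find_users_by_city("mongodb", "Cairo"))
```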


Author(s):  
Jaimin N. Undavia ◽  
Atul Patel ◽  
Sheenal Patel

The availability of huge amounts of data has opened up a new area and a new challenge: analyzing these data. Such analysis has become essential for every organization and may yield useful information for its future prospects. Traditional database systems are neither adequate nor capable of storing, managing, and analyzing such huge amounts of data, so a new term has been introduced: "Big Data". The term refers to huge amounts of data used for analytical purposes and for prediction or forecasting. Big data may consist of a combination of structured, semi-structured, or unstructured data, and managing such data is a major challenge at present. Such heterogeneous data must be maintained in a very secure and specific way. In this chapter, we identify these challenges and issues and attempt to resolve them with specific tools.


Author(s):  
Ashok Kumar Wahi ◽  
Yajulu Medury ◽  
Rajnish Kumar Misra

Big data has taken the world by storm. Everyone in every industry is not only talking about the impact of big data but is also looking for ways to effectively leverage its power. This challenge has heightened with the huge amount of unstructured data flowing in from every direction, bringing with it increasing pressure to make data-driven rather than gut-driven decisions. This article sheds light on how big data can be an enabler for smart enterprises if the organization is able to address the challenges it poses. Enterprises need to equip themselves with relevant technology, the desired skills, and a supportive managerial attitude to navigate the challenges of big data. The article also highlights the need for enterprises making the journey from the Enterprise 1.0 stage to Enterprise 2.0 to master the art of big data if the transition is to succeed.


Author(s):  
Saifuzzafar Jaweed Ahmed

Big data has become a very important part of all industries and organizations today. Sectors such as energy, banking, retail, hardware, and networking all generate huge amounts of unstructured data, which must be processed and analyzed accurately into a structured form. The structured data can then reveal very useful information for business growth. Big data helps in extracting useful information from unstructured or heterogeneous data by analyzing it. Big data was initially defined by the volume of a data set. Big data sets are generally huge, measuring tens of terabytes and sometimes crossing the threshold of petabytes. Today, big data falls under three categories: structured, unstructured, and semi-structured. The size of big data is growing at a fast pace, from terabytes to exabytes. Big data also requires techniques that help to integrate and process huge amounts of heterogeneous data. Data analysis, a core big data process, has applications in various areas such as business processing, disease prevention, and cybersecurity. Big data faces three major issues: data storage, data management, and information retrieval. Big data processing requires a particular setup of hardware and virtual machines to derive results, and processing is done in parallel to obtain results as quickly as possible. Current big data processing techniques include text mining and sentiment analysis. Text analytics is a very large field comprising several techniques, models, and methods for the automatic and quantitative analysis of textual data. The purpose of this paper is to show how text analysis and sentiment analysis process unstructured data, and how these techniques extract meaningful information and thus make it available to the various data mining (statistical and machine learning) algorithms.
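As a minimal illustration of the lexicon-based flavor of sentiment analysis mentioned above, the sketch below scores raw text against a tiny hand-made polarity lexicon; real systems use much larger lexicons or trained models, and the word lists and reviews here are placeholders.

```python
# Tiny illustrative polarity lexicon; real lexicons contain thousands of entries.
POSITIVE = {"good", "great", "excellent", "useful", "fast"}
NEGATIVE = {"bad", "poor", "slow", "useless", "broken"}

def sentiment_score(text):
    """Return (score, label) where score = (#positive - #negative) words."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return score, label

reviews = [
    "The new dashboard is fast and really useful.",
    "Support was slow and the export feature is broken.",
]
for r in reviews:
    print(sentiment_score(r), "-", r)
```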


Author(s):  
Ashok Kumar J ◽  
Abirami S ◽  
Tina Esther Trueman

Sentiment analysis is one of the most important applications in the field of text mining. It computes people's opinions, comments, posts, reviews, evaluations, and emotions expressed about products, sales, services, individuals, organizations, etc. Nowadays, large amounts of structured and unstructured data are being produced on the web, and categorizing and grouping these data has become a real-world problem. In this chapter, the authors address the current research in this field, the open issues, and the problem of sentiment analysis on big data for classification and clustering. The chapter suggests new methods, applications, algorithmic extensions of classification and clustering, and software tools in the field of sentiment analysis.
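To show what clustering unstructured text for sentiment-related grouping might look like in practice, the sketch below vectorizes a few short reviews with TF-IDF and groups them with k-means using scikit-learn; the sample texts and the choice of two clusters are arbitrary and purely illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

reviews = [
    "Great product, works exactly as described",
    "Excellent service and fast delivery",
    "Terrible quality, broke after one day",
    "Very disappointed, waste of money",
]

# Turn raw text into TF-IDF feature vectors.
vectors = TfidfVectorizer(stop_words="english").fit_transform(reviews)

# Group the reviews into two clusters (chosen arbitrarily for this toy example).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for review, label in zip(reviews, labels):
    print(label, "-", review)
```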

