Data Tamper Detection from NoSQL Database in Forensic Environment

Author(s):  
Rupali Chopade ◽  
Vinod Pachghare

The growth of the service sector is increasing the usage of digital applications worldwide. These digital applications use databases to store sensitive and confidential information. As databases are distributed over the internet, cybercrime attackers may tamper with them to attack such sensitive and confidential information. In this scenario, maintaining the integrity of the database is a big challenge. Database tampering changes the database state through data manipulation operations such as insert, update, or delete. Tamper detection techniques are useful for detecting such data tampering and play an important role in the database forensic investigation process. Big data requirements have also attracted the use of NoSQL databases. Previous research has been limited to tamper detection in relational databases, and very little work has been found on NoSQL databases, so a mechanism to detect tampering of NoSQL database systems is needed. This article proposes an approach to tamper detection in NoSQL databases such as MongoDB and Cassandra, which are widely used document-oriented and column-based NoSQL databases, respectively. The proposed tamper detection technique works in a forensic environment to give more relevant outcomes on data tampering and to distinguish between suspicious and genuine tampering.
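As an illustrative sketch only (not the authors' actual technique), hash-based tamper detection over a document store can be mocked up in a few lines: each document gets a digest stored in a separate register, and re-hashing later reveals any insert/update/delete tamper. The `collection` here is a plain list of dicts standing in for a MongoDB collection; all names and data are invented.

```python
import hashlib
import json

def doc_hash(doc):
    # Canonical JSON serialization so field order does not affect the digest
    return hashlib.sha256(json.dumps(doc, sort_keys=True).encode()).hexdigest()

def register_hashes(collection):
    # Build a tamper-detection register: document id -> digest
    return {doc["_id"]: doc_hash(doc) for doc in collection}

def detect_tampering(collection, register):
    # Re-hash every document and report ids whose digest no longer matches
    return [doc["_id"] for doc in collection if doc_hash(doc) != register.get(doc["_id"])]

collection = [{"_id": 1, "balance": 100}, {"_id": 2, "balance": 250}]
register = register_hashes(collection)

collection[1]["balance"] = 999  # simulated UPDATE tamper on document 2
print(detect_tampering(collection, register))  # [2]
```

Keeping the register outside the database itself (or chaining the digests) is what lets a forensic examiner trust it after the database has been compromised.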

Author(s):  
Omoruyi Osemwegie ◽  
Kennedy Okokpujie ◽  
Nsikan Nkordeh ◽  
Charles Ndujiuba ◽  
Samuel John ◽  
...  

Increasing requirements for scalability and elasticity of data storage for web applications have made Not Only SQL (NoSQL) databases invaluable to web developers. One such NoSQL database solution is Redis. A budding alternative to Redis is the SSDB database, which is also a key-value store but is disk-based. The aim of this research work is to benchmark both databases (Redis and SSDB) using the Yahoo Cloud Serving Benchmark (YCSB). YCSB is a platform that has been used to compare and benchmark similar NoSQL database systems. Both databases were given variable workloads to identify the throughput of all given operations. The results obtained show that SSDB gives better throughput than Redis for the majority of operations.
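A YCSB-style throughput measurement can be sketched as follows; this is a simplified stand-in, not YCSB itself, and `KVStore` is a hypothetical in-memory substitute for Redis or SSDB. The workload mixes reads and updates over a fixed keyspace and reports operations per second.

```python
import random
import time

class KVStore:
    """Minimal in-memory key-value store standing in for Redis/SSDB."""
    def __init__(self):
        self.data = {}
    def read(self, key):
        return self.data.get(key)
    def update(self, key, value):
        self.data[key] = value

def run_workload(store, n_ops, read_fraction=0.5, keyspace=1000):
    # Mixed read/update workload in the style of YCSB's core workloads;
    # returns throughput in operations per second
    start = time.perf_counter()
    for _ in range(n_ops):
        key = f"user{random.randrange(keyspace)}"
        if random.random() < read_fraction:
            store.read(key)
        else:
            store.update(key, "x" * 100)
    elapsed = time.perf_counter() - start
    return n_ops / elapsed

store = KVStore()
throughput = run_workload(store, 100_000)
print(f"{throughput:,.0f} ops/sec")
```

Varying `read_fraction` per run is the analogue of switching between YCSB workload mixes (read-heavy vs. update-heavy) to see how each store's throughput responds.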


Data ◽  
2021 ◽  
Vol 6 (10) ◽  
pp. 102
Author(s):  
Kalyani Dhananjay Kadam ◽  
Swati Ahirrao ◽  
Ketan Kotecha

Image forgery has grown in popularity due to easy access to abundant image editing software. These forged images are so convincing that it is impossible to detect them with the naked eye. Such images are used to spread misleading information in society through various social media platforms such as Facebook, Twitter, etc. Hence, there is an urgent need for effective forgery detection techniques. In order to validate the credibility of these techniques, publicly available and more credible standard datasets are required. A few standard datasets, such as Columbia, Carvalho, and CASIA V1.0, and a few custom datasets, such as Modified CASIA and AbhAS, are available, but all of them are employed for the detection of single-splice image forgeries. A study of these existing datasets reveals that they are limited to single image splicing and do not contain multiple spliced images. This research work presents a Multiple Image Splicing Dataset, which consists of a total of 300 multiple spliced images. To the best of our knowledge, this is the first publicly available Multiple Image Splicing Dataset containing high-quality, annotated, realistic multiple spliced images. In addition, we provide a ground truth mask for these images. This dataset will open up opportunities for researchers working in this significant area.


Author(s):  
Zhen Hua Liu ◽  
Anguel Novoselsky ◽  
Vikas Arora

Since the advent of XML, there has been significant research into integrating XML data management with Relational DBMS and Object Relational DBMS (ORDBMS). This chapter describes the XML data management capabilities in ORDBMS, the various design approaches and implementation techniques used to support these capabilities, and the pros and cons of each design and implementation approach. Key topics such as XML storage, XML indexing, and XQuery and SQL/XML processing are discussed in depth, presenting both academic and industrial research work in these areas.
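One classic design approach in this space, shredding XML into relational rows so that SQL can query it, can be sketched with the standard library. SQLite stands in for a full ORDBMS here, and the schema and document are invented for illustration:

```python
import sqlite3
import xml.etree.ElementTree as ET

xml_doc = """<orders>
  <order id="1"><customer>Ada</customer><total>120.50</total></order>
  <order id="2"><customer>Grace</customer><total>75.00</total></order>
</orders>"""

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)")

# Shred: map each <order> element to one relational row
for order in ET.fromstring(xml_doc).iter("order"):
    conn.execute(
        "INSERT INTO orders VALUES (?, ?, ?)",
        (int(order.get("id")), order.findtext("customer"), float(order.findtext("total"))),
    )

# A plain SQL query now answers what an XQuery would over the original document
print(conn.execute("SELECT customer FROM orders WHERE total > 100").fetchall())
# [('Ada',)]
```

The trade-off the chapter discusses follows directly from this sketch: shredding makes relational querying and indexing cheap, but reconstructing the original document (and handling schema evolution) becomes the hard part, which is why native XML storage is the competing approach.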


Author(s):  
Zhikun Chen ◽  
Shuqiang Yang ◽  
Yunfei Shang ◽  
Yong Liu ◽  
Feng Wang ◽  
...  

NoSQL databases are known for high scalability, high availability, and high fault tolerance, and are used to manage data for many applications. The computing model has shifted to "computing close to data", so the location of fragments directly affects system performance. Every site's load changes dynamically because of growing data and ever-changing operation patterns, so the system has to re-allocate fragments to improve performance. General fragment re-allocation strategies for NoSQL databases scatter related fragments as widely as possible to increase the degree of parallelism of operations. However, those fragments may interact with each other in some applications' operations, so a high degree of parallelism can increase the system's communication cost, as data must be transferred over the network. In this paper, the authors propose a fragment re-allocation strategy based on hypergraphs. The strategy uses a weighted hypergraph to represent the fragment access patterns of operations and applies a hypergraph partitioning algorithm to cluster fragments. It improves system performance by reducing communication cost while guaranteeing the degree of parallelism of operations. Experimental results confirm that the strategy effectively solves the fragment re-allocation problem in the specific application environments of NoSQL database systems and improves system performance.
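The hypergraph idea can be sketched as follows. This greedy merge is a toy stand-in for a real hypergraph partitioner (the paper's actual algorithm is not reproduced here), and the fragments, weights, and capacity limit are invented: each operation is a hyperedge over the fragments it touches, weighted by access frequency, and fragments that are frequently co-accessed get co-located.

```python
from collections import defaultdict

# Each operation (hyperedge) accesses a set of fragments with a given weight.
operations = [({"f1", "f2"}, 5), ({"f2", "f3"}, 3), ({"f4", "f5"}, 4), ({"f1", "f3"}, 2)]

# Clique expansion: turn weighted hyperedges into pairwise co-access affinities.
affinity = defaultdict(int)
for frags, weight in operations:
    ordered = sorted(frags)
    for i in range(len(ordered)):
        for j in range(i + 1, len(ordered)):
            affinity[(ordered[i], ordered[j])] += weight

# Greedy clustering: merge the most strongly co-accessed fragments first,
# subject to a per-site capacity limit.
SITE_CAPACITY = 3
clusters = {f: {f} for frags, _ in operations for f in frags}
for (a, b), _ in sorted(affinity.items(), key=lambda kv: -kv[1]):
    ca, cb = clusters[a], clusters[b]
    if ca is not cb and len(ca) + len(cb) <= SITE_CAPACITY:
        merged = ca | cb
        for f in merged:
            clusters[f] = merged

sites = {frozenset(c) for c in clusters.values()}
print(sites)  # co-accessed fragments end up on the same site
```

The capacity limit is what preserves some parallelism: without it, everything would collapse onto one site, minimizing communication cost but serializing all operations.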


2016 ◽  
Vol 78 (6-11) ◽  
Author(s):  
Arafat Al-Dhaqm ◽  
Shukor Abd Razak ◽  
Siti Hajar Othman ◽  
Asri Nagdi ◽  
Abdulalem Ali

Database forensic investigation is a domain which deals with database contents and their metadata to reveal malicious activities on database systems. Although the domain is still new, its overwhelming challenges and issues have made database forensics a fast-growing and much sought-after research area. Based on our observations, we found that database forensics suffers from the lack of a common standard that could unify knowledge of the domain. Therefore, in this paper we present the use of Design Science Research (DSR) as a research methodology to develop a Generic Database Forensic Investigation Process Model (DBFIPM). From the creation of the DBFIPM, five common forensic investigation processes have been proposed, namely i) identification, ii) collection, iii) preservation, iv) analysis, and v) presentation. The DBFIPM allows the reconciliation of concepts and terminologies across common database forensic investigation processes, which will potentially facilitate the sharing of knowledge on database forensic investigation among domain stakeholders.
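The five proposed processes form an ordered pipeline, which can be sketched as a simple checklist (the function name here is illustrative, not part of the DBFIPM itself):

```python
# The five common forensic investigation processes proposed in the DBFIPM, in order
DBFIPM_PHASES = ["identification", "collection", "preservation", "analysis", "presentation"]

def next_phase(current):
    # Return the phase that follows `current`, or None once presentation is done
    i = DBFIPM_PHASES.index(current)
    return DBFIPM_PHASES[i + 1] if i + 1 < len(DBFIPM_PHASES) else None

print(next_phase("collection"))  # preservation
```

Encoding the order explicitly matters in practice: preservation before analysis is what keeps collected evidence admissible.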


Entropy ◽  
2020 ◽  
Vol 22 (10) ◽  
pp. 1174
Author(s):  
Ashish Kumar Gupta ◽  
Ayan Seal ◽  
Mukesh Prasad ◽  
Pritee Khanna

Detection and localization of the regions of images that attract immediate human visual attention is currently an intensive area of research in computer vision. The capability of automatically identifying and segmenting such salient image regions has immediate consequences for applications in computer vision, computer graphics, and multimedia. A large number of salient object detection (SOD) methods have been devised to effectively mimic the capability of the human visual system to detect salient regions in images. These methods can be broadly categorized based on their feature engineering mechanism as conventional or deep learning-based. In this survey, most of the influential advances in image-based SOD from both the conventional and deep learning-based categories are reviewed in detail. Relevant saliency modeling trends with key issues, core techniques, and the scope for future research work are discussed in the context of difficulties often faced in salient object detection. Results are presented for various challenging cases on some large-scale public datasets. Different metrics considered for assessing the performance of state-of-the-art salient object detection models are also covered. Some future directions for SOD are presented towards the end.
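One widely used assessment metric for saliency maps, the mean absolute error (MAE) between a predicted map and its ground truth, is simple enough to sketch directly. The maps below are tiny invented 2x2 examples with values in [0, 1]:

```python
def mae(pred, gt):
    # Mean absolute error between a predicted saliency map and the ground
    # truth, both given as equal-sized 2-D lists of values in [0, 1]
    n = sum(len(row) for row in pred)
    return sum(abs(p - g) for prow, grow in zip(pred, gt)
                          for p, g in zip(prow, grow)) / n

pred = [[0.9, 0.1],
        [0.2, 0.8]]
gt   = [[1.0, 0.0],
        [0.0, 1.0]]
print(mae(pred, gt))  # 0.15
```

Lower is better; a perfect prediction gives an MAE of 0. MAE is usually reported alongside precision-recall-based measures such as the F-measure, since it rewards accurate background suppression that overlap metrics can miss.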


Author(s):  
G Manoharan ◽  
K Sivakumar

Outlier detection in data mining is an important arena in which detection models are developed to discover objects that do not conform to expected behavior. The generation of huge volumes of data in real-time applications makes the outlier detection process more crucial and challenging. Traditional detection techniques based on mean and covariance are not suitable for handling large amounts of data, and their results are affected by outliers, so it is essential to develop an efficient outlier detection model for large datasets. The objective of this research work is to develop an efficient outlier detection model for multivariate data employing an enhanced Hidden Semi-Markov Model (HSMM), an extension of the conventional Hidden Markov Model (HMM) in which the states are allowed arbitrary duration distributions. Experimental results demonstrate the better performance of the proposed model in terms of detection accuracy and detection rate. Compared to conventional HMM-based outlier detection, the proposed model achieves a detection accuracy of 98.62%, which is significantly better for large multivariate datasets.
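A full HSMM is beyond a short sketch, but the scoring side of model-based outlier detection can be illustrated with a stand-in Gaussian observation model: each window of observations is scored by its log-likelihood under parameters fitted to normal data, and anything scoring below the worst training window is flagged. All numbers here are synthetic, and this is not the authors' enhanced HSMM.

```python
import math
import random

random.seed(0)

def gaussian_logpdf(x, mu, sigma):
    # Log-density of N(mu, sigma^2) at x
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

def score(window, mu, sigma):
    # Log-likelihood of a window under the fitted observation model; a real
    # HSMM would additionally score state transitions and state durations
    return sum(gaussian_logpdf(x, mu, sigma) for x in window)

# Synthetic "normal" training windows drawn from N(0, 1)
normal_windows = [[random.gauss(0, 1) for _ in range(20)] for _ in range(50)]
anomalous_window = [random.gauss(8, 1) for _ in range(20)]  # far from normal

mu, sigma = 0.0, 1.0  # parameters "fitted" to the normal data
threshold = min(score(w, mu, sigma) for w in normal_windows)  # tolerate all training windows
print(score(anomalous_window, mu, sigma) < threshold)  # True: flagged as an outlier
```

What the semi-Markov extension adds on top of this is the duration term: a state that persists implausibly long (or short) lowers the likelihood even when each individual observation looks normal, which is exactly the kind of outlier a plain HMM tends to miss.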

