Design and Analysis of Multisource Logs Forensic With Lock Technique for Cloud Security Enhancement

Multisource cloud log forensics (MCLF) strengthens investigations by detecting the malicious behavior of attackers through deep analysis of cloud logs. However, the accessibility attributes of cloud logs hinder this goal. Accessibility encompasses the generation of cloud log access, the selection of a suitable cloud log file, cloud log data integrity, and the trustworthiness of cloud logs. Forensic investigators are therefore dependent on cloud service providers (CSPs) for access to the various cloud logs, and accessing cloud logs from outside the cloud without relying on the CSPs remains difficult, even as the growth in cloud attacks has increased the need for MCLF to investigate the malicious activities of attackers. Criminals can easily hide incriminating files within the cloud system and alter log contents; hence, a lock mechanism has been added to the MCLF technique. This paper reviews MCLF with the lock technique and highlights the various challenges and issues involved in examining cloud log data. The logging mode, the importance of MCLF, and cloud multisource-log-as-a-service are introduced. The MCLF security requirements, weak points, and experiments are identified in order to tolerate altered cloud log vulnerabilities. The paper presents the design and analysis details of MCLF with the lock technique.
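The abstract does not spell out how the lock technique works. As an illustrative assumption only, one common way to "lock" log entries against after-the-fact alteration is a keyed hash chain, where each entry's authentication tag also covers the previous tag; the key name and log lines below are invented:

```python
import hashlib
import hmac

def seal_entries(entries, key):
    """Chain-seal log entries: each tag covers the entry and the previous tag,
    so altering or deleting any entry invalidates all later tags."""
    tags, prev = [], b""
    for entry in entries:
        tag = hmac.new(key, prev + entry.encode(), hashlib.sha256).hexdigest()
        tags.append(tag)
        prev = tag.encode()
    return tags

def verify(entries, tags, key):
    """Recompute the chain and compare against the stored tags."""
    return tags == seal_entries(entries, key)

key = b"investigator-shared-secret"        # hypothetical key held outside the CSP
logs = ["login alice 10:01", "rm -rf /tmp 10:02"]
tags = seal_entries(logs, key)
assert verify(logs, tags, key)             # untouched logs verify

logs[0] = "login mallory 10:01"            # attacker edits the stored log
assert not verify(logs, tags, key)         # tampering is detected
```

The design point is that the verifier, not the cloud, holds the key, so the CSP cannot silently regenerate valid tags for altered entries.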

2021 ◽  
Vol 11 (13) ◽  
pp. 5944
Author(s):  
Gunwoo Lee ◽  
Jongpil Jeong

Semiconductor equipment consists of a complex system in which numerous components are organically connected and controlled by many controllers. The EventLog records all the information available during system processes. Because it records system runtime information, developers and engineers can use it to understand system behavior and identify possible problems, which makes it essential for troubleshooting and maintenance. However, because the EventLog is text-based, complex to read, and stores a large quantity of information, the file size is very large. For long processes, the log comprises several files, and engineers must look through many of them, which makes it difficult to find the cause of a problem and means the analysis takes a long time. In addition, if the EventLog files become large, they cannot be kept for a prolonged period because they consume a large amount of hard disk space on the CTC computer. In this paper, we propose a method to reduce the size of existing text-based log files. Our proposed method saves text-based EventLogs in a database and visualizes them, making it easier to approach problems than with the existing text-based analysis. We confirm the feasibility of the approach and propose a method that makes it easier for engineers to analyze log files.
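A minimal sketch of the proposed direction, loading text-based log lines into a queryable database; the EventLog line format and table schema here are assumptions for illustration, not the paper's actual schema:

```python
import sqlite3

# Hypothetical EventLog line format: "<timestamp> <level> <component> <message>"
raw_log = """\
2021-06-01T09:00:01 INFO loader wafer 12 loaded
2021-06-01T09:00:05 ERROR aligner alignment timeout
2021-06-01T09:00:09 INFO loader wafer 13 loaded"""

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (ts TEXT, level TEXT, component TEXT, message TEXT)")
for line in raw_log.splitlines():
    ts, level, component, message = line.split(" ", 3)
    conn.execute("INSERT INTO events VALUES (?, ?, ?, ?)", (ts, level, component, message))

# Engineers can now filter instead of scanning many text files:
errors = conn.execute(
    "SELECT ts, component, message FROM events WHERE level = 'ERROR'"
).fetchall()
print(errors)  # [('2021-06-01T09:00:05', 'aligner', 'alignment timeout')]
```

Structured rows like these are also what a visualization layer would query, rather than re-parsing the raw text each time.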


Author(s):  
Jozef Kapusta ◽  
Michal Munk ◽  
Dominik Halvoník ◽  
Martin Drlík

If we are talking about user behavior analytics, we have to understand what the main sources of valuable information are. One of these sources is definitely the web server. There are multiple places from which we can extract the necessary data; the most common are the access log, error log, and custom log files of the web server, proxy server log files, web browser logs, browser cookies, etc. A web server log in its default form is known as a Common Log File (W3C, 1995) and keeps information about the IP address, the date and time of the visit, and the accessed and referenced resources. There are standardized methodologies which contain several steps leading to the extraction of new knowledge from the provided data. Usually, the first step in each of them is to identify users, user sessions, page views, and clickstreams. This process is called pre-processing. The main goal of this stage is to take an unprocessed web server log file as input and, after processing, output meaningful representations which can be used in the next phase. In this paper, we describe in detail user session identification, which can be considered the most important part of data pre-processing. Our paper aims to compare user/session identification using the STT with identification using cookies. This comparison was performed with respect to the quality of the sequential rules generated, i.e., regarding the generation of useful, trivial, and inexplicable rules.
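As a sketch of time-based session identification (assuming STT denotes a standard time threshold, for which 30 minutes is a common default; the hit data is invented), hits from the same user are split into a new session whenever the gap between consecutive requests exceeds the threshold:

```python
from datetime import datetime, timedelta

STT = timedelta(minutes=30)  # assumed threshold; cookie-based identification would not need it

# (ip, timestamp) pairs from a pre-processed access log
hits = [
    ("1.2.3.4", datetime(2020, 1, 1, 10, 0)),
    ("1.2.3.4", datetime(2020, 1, 1, 10, 10)),
    ("1.2.3.4", datetime(2020, 1, 1, 11, 0)),   # gap > STT -> new session
    ("5.6.7.8", datetime(2020, 1, 1, 10, 5)),
]

def stt_sessions(hits):
    """Group hits per user (approximated here by IP) into sessions via the time threshold."""
    last_seen, session_id, sessions = {}, {}, {}
    for ip, ts in sorted(hits, key=lambda h: h[1]):
        if ip not in last_seen or ts - last_seen[ip] > STT:
            session_id[ip] = session_id.get(ip, 0) + 1   # start a new session
        last_seen[ip] = ts
        sessions.setdefault((ip, session_id[ip]), []).append(ts)
    return sessions

print(len(stt_sessions(hits)))  # 3 sessions: two for 1.2.3.4, one for 5.6.7.8
```

Identifying users by IP is the weak point this heuristic shares with real logs (proxies, NAT), which is one reason the paper compares it against cookie-based identification.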


Author(s):  
Ricardo Muñoz Martín ◽  
Celia Martín de Leon

The Monitor Model fosters a view of translating in which two mind modes stand out and alternate when trying to render originals word-by-word by default: shallow, uneventful processing vs. problem solving. Research may have been biased towards problem solving, often operationalized with a pause of 3 seconds or more. This project analyzed 16 translation log files by four informants from four originals. A baseline minimal pause of 200 ms was used to calculate two individual thresholds for each log file: (a) a low one, 1.5 times the median pause within words, and (b) a high one, 3 times the median pause between words. Pauses were then characterized as short (between 200 ms and the lower threshold), mid, or long (above the higher threshold, chunking the recorded activities in the translation task into task segments), and assumed to respond to different causes. Weak correlations between short, mid and long pauses were found, hinting at possibly different cognitive processes. Inferred processes did not fall neatly into categories depending on the length of the associated pauses. Mid pauses occurred more often than long pauses between sentences and paragraphs, and they also more often flanked information searches and even problem-solving instances. Chains of proximal mid pauses marked cases of potential hesitation. Task segments tended to occur in 4–8 minute cycles, nested in a possible initial phase for contextualization, followed by long periods of sustained attention. We found no evidence for problem-solving thresholds, and no trace of behavior supporting the Monitor Model.
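The threshold scheme described above can be sketched as follows; the pause samples are invented, and only the 200 ms baseline and the 1.5×/3× median multipliers come from the text:

```python
from statistics import median

# Keystroke pauses in ms, labelled by where they occur (hypothetical sample data)
within_word  = [180, 220, 250, 300, 210]
between_word = [400, 600, 900, 500, 700]

BASELINE = 200                         # ms, baseline minimal pause
low  = 1.5 * median(within_word)       # low threshold, per log file
high = 3.0 * median(between_word)      # high threshold, per log file

def classify(pause_ms):
    """Map a pause to the short/mid/long categories used in the study."""
    if pause_ms < BASELINE:
        return None                    # below the baseline, not counted as a pause
    if pause_ms <= low:
        return "short"
    if pause_ms <= high:
        return "mid"
    return "long"                      # long pauses chunk the task into segments

print(low, high)                                       # 330.0 1800.0
print(classify(250), classify(1000), classify(2500))   # short mid long
```

Because both thresholds are medians of each informant's own log file, the categories adapt to individual typing rhythm rather than using one fixed cut-off such as 3 seconds.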


2018 ◽  
pp. 102-131
Author(s):  
Heather Hinton

Despite a rocky start in terms of perceived security, cloud adoption continues to grow. Users are more comfortable with the notion that cloud can be secure, but there is still a lack of understanding of what changes when moving to cloud, how to secure a cloud environment, and most importantly, how to demonstrate compliance of these cloud environments for regulatory purposes. This chapter reviews the basics of cloud security and compliance, including the split of security responsibility between cloud provider and client, considerations for integrating cloud-deployed workloads with on-premises systems, and most importantly, how to demonstrate compliance with existing internal policies and the regulatory standards required for a workload.


Author(s):  
Daya Sagar Gupta ◽  
G. P. Biswas

In this chapter, a cloud security mechanism is described in which computation (addition) over messages securely stored on the cloud is possible. A user encrypts a secret message using the receiver's public key and stores it. Later, whenever the stored message is required by an authentic user, he retrieves the encrypted message and decrypts it using his secret key. He can also request the cloud to add encrypted messages. The cloud system only computes the requested addition and sends it to the authentic user; it cannot decrypt the stored encrypted messages on its own. This addition of encrypted messages should equal the encryption of the addition of the original messages. The authors propose a homomorphic encryption technique in which the above scenario is possible: the cloud securely computes the addition of the encrypted messages, which is ultimately the encryption of the addition of the original messages. The security of the proposed encryption technique depends on the hardness of problems on elliptic curves.
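One concrete additively homomorphic scheme on elliptic curves is EC ElGamal, where adding ciphertexts component-wise yields an encryption of the sum of the plaintexts. The sketch below uses it purely for illustration (the chapter's actual construction may differ), with fixed nonces and a brute-force decryption step that only works for small messages:

```python
# secp256k1 domain parameters (publicly known)
P  = 2**256 - 2**32 - 977
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8
G  = (Gx, Gy)

def ec_add(p1, p2):
    """Point addition on y^2 = x^3 + 7 over F_P; None is the point at infinity."""
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        s = (3 * x1 * x1) * pow(2 * y1, -1, P) % P
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (s * s - x1 - x2) % P
    return (x3, (s * (x1 - x3) - y1) % P)

def ec_mul(k, pt):
    """Double-and-add scalar multiplication."""
    result, addend = None, pt
    while k:
        if k & 1:
            result = ec_add(result, addend)
        addend = ec_add(addend, addend)
        k >>= 1
    return result

d = 123456789            # receiver's secret key (hypothetical)
Q = ec_mul(d, G)         # receiver's public key

def enc(m, k):
    """EC ElGamal encryption of a small integer m: (kG, mG + kQ)."""
    return (ec_mul(k, G), ec_add(ec_mul(m, G), ec_mul(k, Q)))

def dec(c1, c2, limit=1000):
    """Recover mG = c2 - d*c1, then brute-force small m (toy decryption)."""
    x, y = ec_mul(d, c1)
    mG = ec_add(c2, (x, P - y))
    for m in range(limit):
        if ec_mul(m, G) == mG:
            return m

ct1 = enc(7, k=1111)     # nonces would be random in practice
ct2 = enc(15, k=2222)
# The cloud adds ciphertexts component-wise without ever decrypting:
ct_sum = (ec_add(ct1[0], ct2[0]), ec_add(ct1[1], ct2[1]))
print(dec(*ct_sum))      # 22 == 7 + 15
```

The homomorphism holds because (k1+k2)G and (m1+m2)G + (k1+k2)Q form a valid encryption of m1+m2; security rests on the elliptic-curve discrete logarithm problem, matching the chapter's stated hardness assumption.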


Author(s):  
Sagar Shankar Rajebhosale ◽  
Mohan Chandrabhan Nikam

A log is a record of events that happen within an organization's systems and networks. These logs are very important for any organization because a log file is able to record all user activities. Log files therefore play a vital role and contain sensitive information, so their security should be a high priority. Securely maintaining log records over an extended period of time is essential to the proper functioning of any organization, yet the management and maintenance of logs is a very difficult task, and deploying a system for high security and privacy of log records may be an overhead for an organization and require additional costs. Many techniques have been designed for the security of log records. An alternative solution for maintaining log records is Blockchain technology: a blockchain can provide security for the log files. However, keeping log files in a Blockchain environment leads to the challenges of decentralized storage. This article proposes secured log management over Blockchain and the use of cryptographic algorithms for dealing with data storage access issues. The proposed technology may be one complete solution to the secure log management problem.
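A minimal sketch of hash-chained log blocks, illustrating why tampering with a stored record is detectable; this is a simplification, not the article's full design (no consensus, signatures, or decentralized storage):

```python
import hashlib
import json

def block_hash(entries, prev_hash):
    """Deterministic hash over a block's log entries and its predecessor's hash."""
    payload = json.dumps({"entries": entries, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_block(chain, entries):
    prev = chain[-1]["hash"] if chain else "0" * 64   # genesis predecessor
    chain.append({"entries": entries, "prev": prev,
                  "hash": block_hash(entries, prev)})

def verify_chain(chain):
    """Recompute every hash and check each block links to its predecessor."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(block["entries"], block["prev"]):
            return False
        prev = block["hash"]
    return True

chain = []
append_block(chain, ["user1 login", "user1 read fileA"])
append_block(chain, ["user2 login failed"])
assert verify_chain(chain)

chain[0]["entries"][0] = "user1 logout"   # tamper with an old log record
assert not verify_chain(chain)            # every later hash is now invalid
```

Because each block's hash covers the previous block's hash, rewriting one old record forces an attacker to recompute the entire tail of the chain, which replication across nodes is meant to make detectable.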


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Daniel Hofer ◽  
Markus Jäger ◽  
Aya Khaled Youssef Sayed Mohamed ◽  
Josef Küng

Purpose For aiding computer security experts in their work, log files are a crucial piece of information. The time domain is especially important because, in most cases, timestamps are the only linking points between events caused by attackers, faulty systems or simple errors and their corresponding entries in log files. With the idea of storing and analyzing this log information in graph databases, we need a suitable model to store and connect timestamps and their events. This paper aims to find and evaluate different approaches to storing timestamps in graph databases, along with their individual benefits and drawbacks. Design/methodology/approach We analyse three different approaches by which timestamp information can be represented and stored in graph databases. To check the models, we set up four typical questions that are important for log file analysis and tested them against each of the models. During the evaluation, we used performance and other properties as metrics for how suitable each model is for representing the log files' timestamp information. In the last part, we try to improve one promising-looking model. Findings We come to the conclusion that the simplest model, with the fewest graph database-specific concepts in use, is also the one yielding the simplest and fastest queries. Research limitations/implications Limitations of this research are that only one graph database was studied and that improvements to the query engine might change future results. Originality/value In this study, we addressed the issue of storing timestamps in graph databases in a meaningful, practical and efficient way. The results can be used as a pattern for similar scenarios and applications.
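As an illustration of why the storage model matters, the sketch below contrasts two typical options in plain Python: a timestamp kept as a property on each event node versus a decomposed "time tree" of calendar nodes (in a real graph database these would be an indexed property versus connected year/month nodes); the paper's three concrete models are not reproduced here:

```python
# Model A: timestamp stored as a plain property on each event node.
# A range query filters on the property (an index would do this in a real DB).
events_a = [
    {"id": 1, "msg": "login failed", "ts": "2021-03-01T10:00:00"},
    {"id": 2, "msg": "disk error",   "ts": "2021-03-02T11:30:00"},
    {"id": 3, "msg": "login ok",     "ts": "2021-04-01T09:00:00"},
]
march = [e["id"] for e in events_a if "2021-03" <= e["ts"] < "2021-04"]

# Model B: time decomposed into (year, month) nodes with edges to events.
# A range query traverses the calendar structure instead of filtering properties.
time_tree = {("2021", "03"): [1, 2], ("2021", "04"): [3]}
march_tree = time_tree[("2021", "03")]

print(march, march_tree)  # [1, 2] [1, 2] -- same answer, different traversal cost
```

Both models answer the same question; the paper's finding that the simplest model yields the fastest queries corresponds here to Model A, which uses the fewest graph-specific concepts.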


2020 ◽  
Vol 9 (1) ◽  
pp. 1045-1050

Nowadays, the WWW has grown into a significant and vast data store. All of a site's user activity is recorded in log files, which reflect the interest in the website. With the abundant use of the web, log file sizes are growing rapidly. Web mining is an application of data mining techniques to huge data repositories: it is the process of uncovering information from web data. Before web mining techniques are applied, the data in the web log must be pre-processed, consolidated and transformed. It is essential for web miners to use intelligent tools in order to discover, extract, filter and evaluate the desired information. The data preprocessing stage is the most important phase in the web mining process and is both critical and complex for the successful extraction of useful information. Web logs are distributed in nature, and they are non-scalable and impractical to use directly; consequently, we require a comprehensive learning algorithm in order to obtain the desired information.
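A typical preprocessing step can be sketched as follows: parse Common Log Format lines and drop the usual noise (embedded resources, failed requests) before mining; the log lines and filter rules here are illustrative:

```python
import re

# Common Log Format: host ident authuser [date] "request" status bytes
LOG_RE = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<date>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<bytes>\S+)'
)

raw = [
    '1.2.3.4 - - [01/Jan/2021:10:00:00 +0000] "GET /index.html HTTP/1.1" 200 1043',
    '1.2.3.4 - - [01/Jan/2021:10:00:01 +0000] "GET /logo.png HTTP/1.1" 200 512',
    '5.6.7.8 - - [01/Jan/2021:10:00:05 +0000] "GET /index.html HTTP/1.1" 404 0',
]

def preprocess(lines):
    """Parse CLF lines, skipping malformed entries, embedded resources and errors."""
    records = []
    for line in lines:
        m = LOG_RE.match(line)
        if not m:
            continue                     # malformed line
        rec = m.groupdict()
        path = rec["request"].split()[1]
        if path.endswith((".png", ".gif", ".jpg", ".css", ".js")):
            continue                     # embedded resources, not real page views
        if rec["status"] != "200":
            continue                     # failed requests
        records.append(rec)
    return records

print(len(preprocess(raw)))  # 1 cleaned page view remains
```

The cleaned records are the input to the later stages mentioned above: user and session identification, and then rule mining.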

