Importance of Security in Big Data Log Files on Cloud

Author(s):
Madan Mohan
Aadarsh Malviya
Anuranjan Mishra
...

Cloud computing is now a very popular technology, used by many people in many ways, and keeping it secure is essential. Because the cloud is widely used to store data safely, this article proposes a security framework for big data log files in the cloud. Many risks threaten the integrity of the information held in these large log files, so the level of security has had to advance considerably over the years, in step with the technology itself. A wide range of tools now mediates online activity, such as interaction with different web sites and services through browser plug-ins, and these activities have created a global platform through which malicious actors can reach connected devices and expose big data logs to harmful attacks. The cloud is an online platform that requires properly integrated security; moreover, the current state of online threats endangers big data in the cloud and has affected both performance and the service model.
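
The abstract does not specify the framework's mechanics, so the Java sketch below shows just one technique a log-security framework of this kind might include: making an append-only log tamper-evident by hash-chaining its entries, so that altering any archived entry invalidates every later digest. The class name and the SHA-256 chaining scheme are illustrative assumptions, not the authors' design.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;

// Illustrative sketch only: a hash-chained log, one way to make
// cloud-archived log files tamper-evident. Not the paper's framework.
public class ChainedLogWriter {
    private byte[] previousDigest = new byte[32]; // genesis value: all zeros

    // Appends an entry and returns "digest|entry"; the digest covers both
    // the entry and the previous digest, linking the entries into a chain.
    public String append(String entry) throws NoSuchAlgorithmException {
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        sha256.update(previousDigest);
        sha256.update(entry.getBytes(StandardCharsets.UTF_8));
        previousDigest = sha256.digest();
        return HexFormat.of().formatHex(previousDigest) + "|" + entry;
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        ChainedLogWriter log = new ChainedLogWriter();
        System.out.println(log.append("2020-01-01T00:00:00Z user=alice action=login"));
        System.out.println(log.append("2020-01-01T00:00:05Z user=alice action=read file=a.txt"));
        // A verifier replaying the entries must reproduce every digest;
        // changing any stored entry breaks all digests after it.
    }
}
```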

2018
Vol 3 (1)
pp. 36
Author(s):  
Weiling Liu

It has been a decade since Tim Berners-Lee coined the term Linked Data in 2006, and more and more Linked Data datasets have been made available for information retrieval on the Web. It is essential for librarians, especially academic librarians, to keep up with the state of Linked Data. There is so much information about Linked Data that one may wonder where to begin when joining the Linked Data community. With this in mind, the author compiled this annotated bibliography as a starter kit. Because of the many resources available, the list is limited to English-language literature on specific projects, case studies, research studies, and tools that may be helpful to academic librarians, along with an overview of the Linked Data concept and the current state of Linked Data evolution and adoption.


A smart helmet is a kind of protective headgear that makes riding a motorcycle safer than before; its main purpose is to protect the rider. Here I propose the design of an advanced vehicle security system that uses GSM to deter theft and to determine the location of vehicles. Nowadays theft often occurs in parking lots and other insecure places, so the safety of vehicles is extremely important. The aim of the vehicle security system is to apply wireless communication technology to automotive settings, and the main focus of this work is to prevent the vehicle from being stolen. This is accomplished with the help of a GSM modem and a circuit consisting of an ARM7 TDMI microcontroller, a relay, and a step-down transformer. The system is activated only after the rider wears the helmet; otherwise the user cannot access the vehicle. To achieve automated vehicle location, the system transmits location data continuously, following the active-tracking approach. The real-time vehicle tracking system combines a hardware device installed in the vehicle with a remote tracking server. The information is delivered to the tracking server over the GSM network by SMS, or through a direct TCP/IP connection to the tracking server via GPRS. The tracking server likewise has a GSM/GPRS modem that receives vehicle location data over the GSM network and stores it in a database, where it is available to authorized users of the system through web sites.
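
Under assumed details not given in the abstract (a line-based "vehicleId,latitude,longitude" wire format and an in-memory map standing in for the tracking server's database), a minimal Java sketch of the server side of this architecture might look like this:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal tracking-server sketch: accepts TCP connections from in-vehicle
// GSM/GPRS units and records the latest fix per vehicle. The one-line
// "vehicleId,lat,lon" protocol and the in-memory store are assumptions.
public class TrackingServer {
    private static final Map<String, double[]> lastFix = new ConcurrentHashMap<>();

    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(5000)) {
            while (true) {
                Socket client = server.accept();
                new Thread(() -> handle(client)).start(); // one thread per unit
            }
        }
    }

    private static void handle(Socket client) {
        try (client;
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(client.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                String[] parts = line.split(",");
                if (parts.length != 3) continue; // ignore malformed reports
                lastFix.put(parts[0], new double[] {
                        Double.parseDouble(parts[1]),
                        Double.parseDouble(parts[2]) });
                // A real server would insert the fix into a database here
                // and expose the history to authorized users via a web site.
            }
        } catch (Exception e) {
            // Drop the connection on error; the unit reconnects over GPRS.
        }
    }
}
```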


2020
Vol 4 (4)
pp. 191
Author(s):
Mohammad Aljanabi
Hind Ra'ad Ebraheem
Zahraa Faiz Hussain
Mohd Farhan Md Fudzee
Shahreen Kasim
...  

Much attention has been paid to big data technologies in the past few years, mainly due to their capability to impact business analytics and data mining practices, as well as their potential to shape a range of highly effective decision-making tools. With the current increase in the number of modern applications (including social media and other web-based and healthcare applications) that generate large volumes of data in different forms, processing such huge data volumes is becoming a challenge for conventional data processing tools. This has led to the emergence of big data analytics, which comes with its own challenges. This paper introduces the use of principal component analysis (PCA) for data size reduction, followed by SVM parallelization. The proposed scheme was executed on the Spark platform, and the experimental findings showed that it reduces the classifier's classification time without much influence on its classification accuracy.
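
As a rough sketch of the scheme's shape, assuming Spark MLlib's Pipeline API, a LinearSVC classifier standing in for the parallel SVM, a libsvm-format input file, and placeholder parameters (k = 10 components, 20 iterations) that are not taken from the paper:

```java
import org.apache.spark.ml.Pipeline;
import org.apache.spark.ml.PipelineModel;
import org.apache.spark.ml.PipelineStage;
import org.apache.spark.ml.classification.LinearSVC;
import org.apache.spark.ml.feature.PCA;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

// Sketch of PCA-based dimensionality reduction followed by a linear SVM,
// run as one Spark pipeline so both stages execute on the cluster.
public class PcaSvmPipeline {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("pca-svm").getOrCreate();
        Dataset<Row> data = spark.read().format("libsvm")
                .load("hdfs:///data/train.libsvm"); // hypothetical path

        PCA pca = new PCA()
                .setInputCol("features")
                .setOutputCol("pcaFeatures")
                .setK(10); // reduced dimensionality (assumed value)
        LinearSVC svm = new LinearSVC()
                .setFeaturesCol("pcaFeatures")
                .setLabelCol("label")
                .setMaxIter(20); // assumed value

        PipelineModel model = new Pipeline()
                .setStages(new PipelineStage[] { pca, svm })
                .fit(data);
        model.transform(data).select("label", "prediction").show(5);
        spark.stop();
    }
}
```

Training the SVM on the k PCA features rather than the raw ones is what buys the reported speedup; how much accuracy it costs depends on the choice of k.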


Author(s):  
Nor Azlinayati Abdul Manaf
Sean Bechhofer
Robert Stevens

Author(s):  
Zongmin Ma
Li Yan

The Resource Description Framework (RDF) is a model for representing information resources on the Web. With the widespread acceptance of RDF as the de facto standard recommended by the W3C (World Wide Web Consortium) for the representation and exchange of information on the Web, a huge amount of RDF data is proliferating and becoming available. RDF data management is therefore of increasing importance and has attracted attention in both the database community and the Semantic Web community. Much work has been devoted to proposing different solutions for storing large-scale RDF data efficiently, and NoSQL ("not only SQL") databases have been used for scalable RDF data storage. This chapter focuses on using various NoSQL databases to store massive RDF data, provides an up-to-date overview of the current state of the art in RDF data storage in NoSQL databases, and offers suggestions for future research.
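
One widely used scheme for storing RDF in key-value style NoSQL stores, offered here only as an illustrative sketch, indexes every triple under three permuted keys (SPO, POS, OSP) so that any single triple pattern becomes a key-prefix scan. A sorted in-memory map stands in below for the sorted key space of a real store; the encoding and names are assumptions.

```java
import java.util.SortedMap;
import java.util.TreeMap;

// Sketch of the classic triple-indexing scheme for RDF in key-value
// NoSQL stores: each triple is written under three permuted keys so any
// triple pattern maps to a prefix scan. A TreeMap stands in for the
// sorted key space of a real wide-column or key-value store.
public class TripleStoreSketch {
    private final SortedMap<String, Boolean> index = new TreeMap<>();

    public void add(String s, String p, String o) {
        index.put("SPO|" + s + "|" + p + "|" + o, true);
        index.put("POS|" + p + "|" + o + "|" + s, true);
        index.put("OSP|" + o + "|" + s + "|" + p, true);
    }

    // Answer a (?s, p, o) pattern by scanning the POS index prefix.
    public void subjectsFor(String p, String o) {
        String prefix = "POS|" + p + "|" + o + "|";
        for (String key : index.subMap(prefix, prefix + "\uffff").keySet())
            System.out.println(key.substring(prefix.length()));
    }

    public static void main(String[] args) {
        TripleStoreSketch store = new TripleStoreSketch();
        store.add("ex:alice", "rdf:type", "ex:Person");
        store.add("ex:bob", "rdf:type", "ex:Person");
        store.subjectsFor("rdf:type", "ex:Person"); // prints both subjects
    }
}
```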



2010
pp. 1-22
Author(s):  
Nalin Sharda

Modern information and communication technology (ICT) systems can help us build travel recommender systems and virtual tourism communities. Tourism ICT systems have come a long way from the early airline ticket booking systems. Travel recommender systems have emerged in recent years, facilitating the task of destination selection as well as activities at the destination. A move from purely text-based recommender systems to visual recommender systems is being proposed, which can be facilitated by the use of Web 2.0 technologies to create virtual travel communities. Delivering a good user experience is important to make these technologies widely accepted and used. This chapter presents an overview of the historical perspective of tourism ICT systems and their current state of development vis-à-vis travel recommender systems and tourism communities. User experience is an important aspect of any ICT system, so how to define user experience and measure it through usability testing is also presented.


2017
Vol 7 (1.1)
pp. 286
Author(s):
B. Sekhar Babu
P. Lakshmi Prasanna
P. Vidyullatha

Today, the World Wide Web has become a familiar medium for investigating new information, business trends, trading strategies, and so on. Many organizations and companies also use the web to present their products or services across the world. E-commerce is a kind of business or commercial transaction that involves the transfer of information across the web or internet. In this setting, a huge amount of data is generated and dumped into web services, and this overload makes it difficult to locate accurate and valuable information; hence web data mining is used as a tool to discover and extract knowledge from the web. E-commerce organizations can apply web data mining technology to offer personalized e-commerce solutions and better meet the needs of customers. Using a data mining algorithm such as ontology-based association rule mining with the Apriori algorithm, various useful patterns can be extracted from large data sets. We implement this data mining technique in Java, with data sets generated dynamically as transactions are processed, and extract various patterns from them.
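
The paper's ontology layer and dynamically generated transaction data are not reproduced here; the Java sketch below shows only the core Apriori frequent-itemset loop the abstract refers to (candidate generation by self-join, support counting, and pruning) over a hard-coded toy basket list.

```java
import java.util.*;

// Core Apriori loop: grow frequent k-itemsets into (k+1)-candidates by
// self-join, count their support over the transactions, and prune those
// below minSupport. Toy data only; ontology-based filtering is not shown.
public class AprioriSketch {
    public static Map<Set<String>, Integer> frequentItemsets(
            List<Set<String>> transactions, int minSupport) {
        Map<Set<String>, Integer> result = new LinkedHashMap<>();
        Map<Set<String>, Integer> counts = new HashMap<>();
        for (Set<String> t : transactions)                 // frequent 1-itemsets
            for (String item : t)
                counts.merge(Set.of(item), 1, Integer::sum);
        Map<Set<String>, Integer> level = prune(counts, minSupport);
        while (!level.isEmpty()) {
            result.putAll(level);
            counts = new HashMap<>();
            for (Set<String> a : level.keySet())           // self-join step
                for (Set<String> b : level.keySet()) {
                    Set<String> candidate = new TreeSet<>(a);
                    candidate.addAll(b);
                    if (candidate.size() != a.size() + 1) continue;
                    if (counts.containsKey(candidate)) continue;
                    for (Set<String> t : transactions)     // support counting
                        if (t.containsAll(candidate))
                            counts.merge(candidate, 1, Integer::sum);
                }
            level = prune(counts, minSupport);
        }
        return result;
    }

    private static Map<Set<String>, Integer> prune(
            Map<Set<String>, Integer> counts, int minSupport) {
        Map<Set<String>, Integer> kept = new HashMap<>();
        counts.forEach((items, n) -> { if (n >= minSupport) kept.put(items, n); });
        return kept;
    }

    public static void main(String[] args) {
        List<Set<String>> tx = List.of(
                Set.of("bread", "milk"),
                Set.of("bread", "milk", "eggs"),
                Set.of("milk", "eggs"),
                Set.of("bread", "milk", "eggs"));
        frequentItemsets(tx, 2).forEach((items, n) ->
                System.out.println(items + " -> " + n));
    }
}
```

With minSupport = 2 the toy run reports {bread, milk, eggs} and all of its subsets as frequent; association rules are then read off the frequent itemsets by comparing supports.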


Author(s):  
Amrapali Zaveri
Andrea Maurino
Laure Berti-Équille

The standardization and adoption of Semantic Web technologies has resulted in an unprecedented volume of data being published as Linked Data (LD). However, the “publish first, refine later” philosophy leads to various quality problems arising in the underlying data such as incompleteness, inconsistency and semantic ambiguities. In this article, we describe the current state of Data Quality in the Web of Data along with details of the three papers accepted for the International Journal on Semantic Web and Information Systems' (IJSWIS) Special Issue on Web Data Quality. Additionally, we identify new challenges that are specific to the Web of Data and provide insights into the current progress and future directions for each of those challenges.


Neurology
2020
Vol 94 (12)
pp. 526-537
Author(s):
Codrin Lungu
Laurie Ozelius
David Standaert
Mark Hallett
Beth-Anne Sieber
...  

Objective: Dystonia is a complex movement disorder. Research progress has been difficult, particularly in developing widely effective therapies. This is a review of the current state of knowledge, research gaps, and proposed research priorities.

Methods: The NIH convened leaders in the field for a 2-day workshop. The participants addressed the natural history of the disease, the underlying etiology, the pathophysiology, relevant research technologies, research resources, and therapeutic approaches, and attempted to prioritize dystonia research recommendations.

Results: The heterogeneity of dystonia poses challenges to research and therapy development. Much can be learned from specific genetic subtypes, and the disorder can be conceptualized along clinical, etiology, and pathophysiology axes. Advances in research technology and pooled resources can accelerate progress. Although etiologically based therapies would be optimal, a focus on circuit abnormalities can provide a convergent common target for symptomatic therapies across dystonia subtypes. The discussions have been integrated into a comprehensive review of all aspects of dystonia.

Conclusion: Overall research priorities include the generation and integration of high-quality phenotypic and genotypic data, reproducing key features in cellular and animal models of both basic cellular mechanisms and phenotypes, leveraging new research technologies, and targeting circuit-level dysfunction with therapeutic interventions. Collaboration is necessary both for collection of large data sets and integration of different research methods.

