Application of Big Data in Security Precaution by Network Technology and Video Image Database

2021 ◽  
Vol 2083 (4) ◽  
pp. 042020
Author(s):  
Yougang Yao ◽  
Linling Zou

Abstract: The security industry has grown out of the needs of modern public security precaution. With the spread of hardware and network technology, the volume of data in the security field is increasing rapidly, and big data technology has emerged in this field in response. This paper reviews the current state of the security industry and discusses the security problems that arise when big data is applied to public security precaution. Taking a video image database application platform as an example, and drawing on the Chinese national standards GB/T 28181, GB/T 25724, and GB 35114, it examines the data security problems involved in data acquisition and data transmission. Based on Hadoop's Kerberos authentication mechanism, the paper introduces big data security and access control technology and presents solutions covering the whole process of data acquisition, data transmission, and data access.
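The Kerberos mechanism referred to above is Hadoop's standard "secure mode"; as a minimal configuration sketch (property values below are the documented settings, everything else about the cluster is assumed):

```xml
<!-- core-site.xml: enable Kerberos ("secure mode") for a Hadoop cluster -->
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>  <!-- default is "simple", i.e. no authentication -->
</property>
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>      <!-- enforce service-level access control -->
</property>
```

With this in place, clients must first obtain a Kerberos ticket (e.g. via `kinit`) before HDFS or YARN will accept their requests.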

2014 ◽  
Vol 687-691 ◽  
pp. 2764-2767
Author(s):  
Qiang Li ◽  
Kui Yang ◽  
Li Ma

With the popularity of cloud computing, network technology has undergone unprecedented development and revolution. Cloud computing is the dominant direction for data storage in the computer and Internet field, and big data, multi-dimensional storage, and computation all build on it. Internet data security is therefore a key concern: in a cloud environment, big data transmission and storage require encryption technology. This article studies the data security of cloud computing, designs a service model for data security encryption, and realizes secure data transmission and storage services for the computer and Internet cloud environment.
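The abstract does not name a specific cipher. As a purely conceptual Python sketch of symmetric encryption applied to data before it is transmitted to or stored in the cloud, the following uses a SHA-256-derived keystream; this is a toy construction for exposition only, not a production cipher (a real deployment would use a vetted scheme such as AES-GCM):

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream by hashing key + block counter."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Encrypt or decrypt by XOR with the keystream (the operation is symmetric)."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key = secrets.token_bytes(32)            # 256-bit random key
plaintext = b"sensor record to be stored in the cloud"
ciphertext = xor_cipher(key, plaintext)  # what would travel to the cloud
recovered = xor_cipher(key, ciphertext)  # same operation decrypts
assert recovered == plaintext
```

The point of the sketch is only the service model: data leave the client encrypted, and only the key holder can recover them after retrieval.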


2020 ◽  
Vol 218 ◽  
pp. 04008
Author(s):  
Yang Shen

In the era of big data, given the great influence of big data itself, Internet information security has become a focus of attention. To avoid disruption to people's lives, this article summarizes the opportunities and challenges of the big data era based on previous work experience. It analyzes five aspects: establishing complete laws and regulations, protecting personal information, applying big data technology to public security systems, managing and classifying data properly, and ensuring the security of data transmission. The author then discusses specific measures for maintaining Internet information security in the era of big data from these five aspects.


2020 ◽  
pp. 101053952098436
Author(s):  
Olivia M. Y. Ngan ◽  
Adam M. Kelmenson

While many freedoms were halted by city lockdowns and restrictive travel bans amid the coronavirus crisis, some countries and regions reopened with public health monitoring and surveillance measures in place. Technologies such as real-time location data, geofencing, video camera footage, and credit card history are now used in novel and poorly understood ways to track movement patterns and stem viral spread. The use of big data analytics, which sometimes involves involuntary and unconsented data access and disclosure, raises public unease about data protection. The result is a trade-off between public health safety and the ethical use of personal data that pushes the limits of privacy rights. Is it ethically permissible to use big data analytics in pursuit of public health goals, infringing on personal privacy in exchange for maximizing public security? Demonstrating the effectiveness of public health measures is difficult given the scientific uncertainties and social complexities involved. This article offers public health ethics considerations for balancing the benefits of public security against infringements of personal privacy, supported by examples drawn from Asian countries and regions.


2020 ◽  
Vol 13 (4) ◽  
pp. 790-797
Author(s):  
Gurjit Singh Bhathal ◽  
Amardeep Singh Dhiman

Background: In the current Internet scenario, large amounts of data are generated and processed. The Hadoop framework is widely used to store and process big data in a highly distributed manner, but it is arguably not mature enough to withstand current cyberattacks on the data. Objective: The main objective of the proposed work is to provide a complete security approach comprising authorisation and authentication for users and Hadoop cluster nodes, and to secure data at rest as well as in transit. Methods: The proposed algorithm uses the Kerberos network authentication protocol to validate users and cluster nodes, and Ciphertext-Policy Attribute-Based Encryption (CP-ABE) for data at rest and in transit. Users encrypt files under their own set of attributes and store them on the Hadoop Distributed File System; only users with matching attributes can decrypt them. Results: The proposed algorithm was implemented with data sets of different sizes, processed with and without encryption. The results show little difference in processing time: performance was affected in the range of 0.8% to 3.1%, a figure that also includes the impact of other factors such as system configuration, the number of parallel jobs running, and the virtual environment. Conclusion: The solutions available for the big data security problems faced in the Hadoop framework are inefficient or incomplete. A complete security framework is proposed for the Hadoop environment and is experimentally shown to have little effect on system performance for datasets of different sizes.
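CP-ABE itself rests on pairing-based cryptography, which the abstract does not detail. As a conceptual Python sketch of its access model only (hypothetical attribute names, not the paper's implementation), decryption is gated on whether a user's attribute set satisfies the policy attached to the ciphertext:

```python
# Toy illustration of the CP-ABE access model: a ciphertext carries an
# attribute policy, and a key holder can decrypt only if their attributes
# satisfy it. Real CP-ABE enforces this cryptographically; this sketch
# models only the policy check, not the encryption itself.

def satisfies(policy: dict, attributes: set) -> bool:
    """Evaluate a policy tree of AND/OR gates over attribute names."""
    op = policy["op"]
    if op == "attr":
        return policy["name"] in attributes
    results = [satisfies(child, attributes) for child in policy["children"]]
    return all(results) if op == "and" else any(results)

# Policy: (department:security AND clearance:high) OR role:auditor
policy = {"op": "or", "children": [
    {"op": "and", "children": [
        {"op": "attr", "name": "department:security"},
        {"op": "attr", "name": "clearance:high"},
    ]},
    {"op": "attr", "name": "role:auditor"},
]}

assert satisfies(policy, {"department:security", "clearance:high"})
assert satisfies(policy, {"role:auditor"})
assert not satisfies(policy, {"department:security"})
```

In the scheme described by the abstract, this matching decides which HDFS users can recover the file's decryption key.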


2019 ◽  
Vol 6 (1) ◽  
Author(s):  
Mahdi Torabzadehkashi ◽  
Siavash Rezaei ◽  
Ali HeydariGorji ◽  
Hosein Bobarshad ◽  
Vladimir Alves ◽  
...  

Abstract: In the era of big data applications, the demand for more sophisticated data centers and high-performance data processing mechanisms is increasing drastically. Data originally reside in storage systems; to process them, application servers must fetch them from storage devices, which imposes the cost of moving data through the system. This cost is directly related to the distance between the processing engines and the data, and it is the key motivation for distributed processing platforms such as Hadoop, which move processing closer to the data. Computational storage devices (CSDs) push the "move processing to data" paradigm to its ultimate boundary by deploying embedded processing engines inside the storage devices themselves. In this paper, we introduce Catalina, an efficient and flexible computational storage platform that provides a seamless environment for processing data in place. Catalina is the first CSD equipped with a dedicated application processor running a full-fledged operating system that provides filesystem-level data access for applications, so a vast spectrum of applications can be ported to run on Catalina CSDs. Owing to these unique features, to the best of our knowledge, Catalina is the only in-storage processing platform that can be seamlessly deployed in clusters to run distributed applications such as Hadoop MapReduce and HPC applications in place, without any modification of the underlying distributed processing framework. As a proof of concept, we build a fully functional Catalina prototype and a CSD-equipped platform with 16 Catalina CSDs, and run the Intel HiBench Hadoop and HPC benchmarks to investigate the benefits of deploying Catalina CSDs in distributed processing environments. The experimental results show up to a 2.2× improvement in performance and a 4.3× reduction in energy consumption for Hadoop MapReduce benchmarks. Additionally, thanks to the Neon SIMD engines, the performance and energy efficiency of DFT algorithms improve by up to 5.4× and 8.9×, respectively.
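The "move processing to data" paradigm behind both Hadoop and CSDs can be sketched in a few lines of Python (the shards and counts here are illustrative, not the paper's benchmark data): each storage node reduces its local shard to a small partial result, so only those compact results, not the raw data, cross the interconnect.

```python
# Conceptual sketch of in-place processing: word counts are computed
# where the data live, and only the small partial counts are merged.
from collections import Counter

shards = [  # raw data held on three separate storage nodes
    "big data needs big storage",
    "storage moves data to compute",
    "compute moves to data instead",
]

# In-place step: each node counts words over its own shard locally.
partial_counts = [Counter(shard.split()) for shard in shards]

# Only the compact partial counts are shipped and merged centrally.
total = Counter()
for partial in partial_counts:
    total += partial

# Sanity check: merging partials matches counting over all raw data.
assert sum(total.values()) == sum(len(s.split()) for s in shards)
```

A CSD takes this further than Hadoop by running the local step inside the storage device itself, so even the server's I/O bus is spared the raw data.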

