Holographic Data Compression with JPEG Standard and Deep Learning

Author(s):  
Yang Gao ◽  
Shuming Jiao ◽  
Zhi Jin
2021 ◽  
pp. 1-12
Author(s):  
Gaurav Sarraf ◽  
Anirudh Ramesh Srivatsa ◽  
MS Swetha

With the ever-rising threat to security, multiple industries are always in search of safer communication techniques for data both at rest and in transit. Most security institutions agree that any system's security can be modeled around three major concepts: Confidentiality, Availability, and Integrity. We try to reduce the holes in these concepts by developing a deep-learning-based steganography technique. In our study, we have seen that data compression has to be at the heart of any sound steganography system. In this paper, we show that it is possible to compress and encode data efficiently enough to solve critical problems of steganography. The deep learning technique, which comprises an auto-encoder with a Convolutional Neural Network as its building block, not only compresses the secret file but also learns how to hide the compressed data in the cover file efficiently. The proposed technique can encode secret files of the same size as the cover file, or, in some sporadic cases, even larger files. We also show that the same model architecture can theoretically be applied to any file type. Finally, we show that our proposed technique surreptitiously evades all popular steganalysis techniques.
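
Below is a minimal sketch of the kind of convolutional hiding/reveal network pair such a system relies on. The layer widths, channel counts, and loss weighting are illustrative assumptions, not the architecture described in the abstract.

```python
# Hypothetical sketch of a CNN-based hiding/reveal network pair (PyTorch).
# Layer sizes and the loss weighting are assumptions for illustration only.
import torch
import torch.nn as nn

class HideNet(nn.Module):
    """Compresses the secret image and embeds it into the cover image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 3, padding=1), nn.ReLU(),    # cover (3 ch) + secret (3 ch)
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(), # stego image
        )

    def forward(self, cover, secret):
        return self.net(torch.cat([cover, secret], dim=1))

class RevealNet(nn.Module):
    """Recovers the secret image from the stego image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, stego):
        return self.net(stego)

# Joint objective: the stego image should look like the cover, and the
# revealed secret should match the original secret (beta balances the two).
def loss_fn(cover, stego, secret, revealed, beta=0.75):
    mse = nn.functional.mse_loss
    return mse(stego, cover) + beta * mse(revealed, secret)
```

Training both networks end to end on this joint loss is what lets the encoder learn an embedding that is simultaneously compact and visually inconspicuous.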


2020 ◽  
Vol 398 ◽  
pp. 222-234 ◽  
Author(s):  
Joseph Azar ◽  
Abdallah Makhoul ◽  
Raphaël Couturier ◽  
Jacques Demerjian

IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Emad B. Helal ◽  
Omar M. Saad ◽  
Ali G. Hafez ◽  
Yangkang Chen ◽  
Gamal M. Dousoky

2020 ◽  
Vol 5 (11) ◽  
Author(s):  
Andrew Glaws ◽  
Ryan King ◽  
Michael Sprague

Author(s):  
Nweso Emmanuel Nwogbaga ◽  
Rohaya Latip ◽  
Lilly Suriani Affendey ◽  
Amir Rizaan Abdul Rahiman

Abstract: With the increasing number of IoT applications, computation offloading is now undoubtedly vital because of the limited processing capability and energy of IoT devices. Computation offloading involves moving data from IoT devices to another processing layer with higher processing capability. However, the amount of data offloaded is directly proportional to the delay incurred by the offloading. Therefore, introducing a data reduction technique to shrink the offloadable data minimizes the delay resulting from the offloading method. In this paper, two main strategies are proposed to address the enormous data volume that results in computation offloading delay. First, an IoT Canonical Polyadic Decomposition for Deep Learning Algorithm is proposed; its main purpose is to downsize the IoT offloadable data. In the study, the Kaggle cat-and-dog dataset was used to evaluate the impact of the proposed data compression. The proposed method downsizes the data significantly and can reduce the delay due to network traffic. Second, a Rank Accuracy Estimation Model is proposed for determining the Rank-1 value. The results show that the proposed methods achieve better data compression than distributed deep learning layers. This method can be applied to smart cities, vehicular networks, telemedicine, and similar settings.
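
As a minimal sketch of the underlying idea, the snippet below applies a canonical polyadic (CP) decomposition to a small data tensor using the TensorLy library and reports the resulting size reduction. The tensor shape and the rank value are placeholder assumptions, not values from the paper (whose Rank Accuracy Estimation Model is precisely about choosing that rank).

```python
# Minimal sketch: CP (canonical polyadic) decomposition as a data-reduction
# step before offloading. Uses TensorLy; the tensor shape and rank value are
# placeholder assumptions, not values taken from the paper.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# Stand-in for data held on the IoT device (e.g. a 64x64 RGB image),
# represented as a 3-way tensor.
data = np.random.rand(64, 64, 3)

rank = 8  # number of rank-1 components; the paper's model estimates this value
weights, factors = parafac(tl.tensor(data), rank=rank)

# Only the weights and factor matrices need to be offloaded.
original_size = data.size
compressed_size = weights.size + sum(f.size for f in factors)
print(f"compression ratio: {original_size / compressed_size:.1f}x")

# The receiving layer reconstructs an approximation of the original tensor.
approx = tl.cp_to_tensor((weights, factors))
rel_error = np.linalg.norm(data - approx) / np.linalg.norm(data)
print(f"relative reconstruction error: {rel_error:.3f}")
```

The trade-off is visible directly: a larger rank lowers the reconstruction error but increases the amount of data that must be transmitted.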


2022 ◽  
Author(s):  
Hassan Noura ◽  
Joseph Azar ◽  
Ola Salman ◽  
Raphaël Couturier ◽  
Kamel Mazouzi

Author(s):  
Weixuan Liang ◽  
Youchan Zhu ◽  
Guoliang Li

Background: Since the "three-type two-net, world-class" strategy was proposed, the number of cloud resources in the power grid has continued to grow, and a large amount of data must be archived every day. The long-term preservation of this data is a key issue to be addressed, and back-up data is essential for operation and maintenance, fault recovery, fault drills, and tracking of the cloud platform. Traditional compression algorithms face severe challenges here. Method: This paper proposes a deep-learning method for data compression. First, more accurate and complete grid cloud resource status data is obtained through data cleaning, correction, and standardization. The preprocessed data is then compressed by SaDE-MSAE. Result: Experiments show that the SaDE-MSAE method compresses data faster. The neural-network-based data compression ratio is generally between 45% and 60%, which is relatively stable and better than traditional compression algorithms. Conclusion: The proposed approach compresses large volumes of power data quickly and efficiently. It improves the speed and accuracy of the algorithm while ensuring that the data remains correct and complete, and the neural network improves compression time and efficiency. Overall, it provides better compression schemes for grid cloud resource data.
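
The sketch below illustrates the general pipeline the abstract describes: standardize the records, push them through an autoencoder bottleneck, and report the compression ratio. It uses a plain stacked autoencoder trained with Adam rather than the SaDE-optimized MSAE from the paper, and the input dimension and layer sizes are assumptions.

```python
# Minimal sketch of the pipeline: standardize grid telemetry, compress it
# through a stacked autoencoder bottleneck, and report the compression ratio.
# This is a plain stacked autoencoder, not the SaDE-MSAE from the paper;
# layer sizes and the input dimension are assumptions.
import torch
import torch.nn as nn

INPUT_DIM = 128   # cloud-resource status fields per record (assumed)
CODE_DIM = 64     # bottleneck size -> 50% of the original record

class StackedAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(INPUT_DIM, 96), nn.ReLU(),
            nn.Linear(96, CODE_DIM),
        )
        self.decoder = nn.Sequential(
            nn.Linear(CODE_DIM, 96), nn.ReLU(),
            nn.Linear(96, INPUT_DIM),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Standardization (zero mean, unit variance) mirrors the preprocessing step.
records = torch.randn(1000, INPUT_DIM)            # stand-in for archived records
records = (records - records.mean(0)) / (records.std(0) + 1e-8)

model = StackedAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(50):                               # short illustrative training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(records), records)
    loss.backward()
    opt.step()

print(f"stored code size: {CODE_DIM / INPUT_DIM:.0%} of the original record")
```

Only the bottleneck codes (plus the decoder weights) need to be archived, which is where the 45-60% compression ratios reported in the abstract would come from.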

