Neural Reversible Steganography with Long Short-Term Memory

2021, Vol 2021, pp. 1-14
Author(s): Ching-Chun Chang

Deep learning has brought about a phenomenal paradigm shift in digital steganography. However, there is as yet no consensus on the use of deep neural networks in reversible steganography, a class of steganographic methods that permits the distortion caused by message embedding to be removed. The underdevelopment of reversible steganography with deep learning can be attributed to the perception that perfect reversal of steganographic distortion seems scarcely achievable, given the lack of transparency and interpretability of neural networks. Rather than employing neural networks in the coding module of a reversible steganographic scheme, we instead apply them to an analytics module that exploits data redundancy to maximise steganographic capacity. State-of-the-art reversible steganographic schemes for digital images are based primarily on histogram shifting, in which the analytics module is often modelled as a pixel intensity predictor. In this paper, we propose to refine the prior estimate from a conventional linear predictor with a neural network model. This refinement can, to some extent, be viewed as a low-level vision task (e.g., noise reduction and super-resolution imaging). Accordingly, we explore a leading-edge neuroscience-inspired low-level vision model based on long short-term memory, with a brief discussion of its biological plausibility. Experimental results demonstrate that the neural network model provides a significant boost in prediction accuracy and steganographic rate-distortion performance.
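To illustrate the kind of analytics module described above, the sketch below refines a coarse linear prediction of pixel intensities with a bidirectional LSTM that scans each image row and outputs a per-pixel residual correction. This is a minimal PyTorch sketch under stated assumptions: the row-wise scanning scheme, layer sizes, and the class name LSTMPredictorRefiner are illustrative choices, not the architecture used in the paper.

# Hedged sketch: refining a linear predictor's pixel estimates with an LSTM.
# The row-wise scanning, layer sizes, and module name are assumptions,
# not the architecture specified in the paper.
import torch
import torch.nn as nn

class LSTMPredictorRefiner(nn.Module):
    """Refines coarse pixel-intensity predictions (e.g. from a linear
    interpolator) by scanning each image row with a bidirectional LSTM
    and emitting a residual correction per pixel."""

    def __init__(self, hidden_size: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_size, 1)  # residual per pixel

    def forward(self, coarse: torch.Tensor) -> torch.Tensor:
        # coarse: (batch, height, width) coarse intensity predictions in [0, 1]
        b, h, w = coarse.shape
        rows = coarse.reshape(b * h, w, 1)        # treat each row as a sequence
        features, _ = self.lstm(rows)             # (b*h, w, 2*hidden)
        residual = self.head(features).reshape(b, h, w)
        return coarse + residual                  # refined prediction

# Usage sketch: refine a coarse prior for a random test batch.
coarse_pred = torch.rand(2, 32, 32)
refined = LSTMPredictorRefiner()(coarse_pred)
print(refined.shape)  # torch.Size([2, 32, 32])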

2018
Author(s): Muktabh Mayank Srivastava

We propose a simple neural network model that learns the relation between sentences by passing their representations, obtained from a Long Short-Term Memory (LSTM) network, through a Relation Network. The Relation Network module extracts similarity between multiple contextual representations produced by the LSTM. Our model is simple to implement, lightweight in terms of parameters, and works across multiple supervised sentence comparison tasks. We show good results for the model on two sentence comparison datasets.
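A minimal PyTorch sketch of this kind of architecture is given below: a shared LSTM encodes both sentences, every pair of contextual hidden states is scored by a small MLP g, the pair scores are summed, and a second MLP f produces the class logits. The layer sizes, pairing scheme, and class name LSTMRelationNet are illustrative assumptions rather than the authors' exact configuration.

# Hedged sketch: comparing two sentences by passing LSTM hidden states
# through a Relation Network. Dimensions, the pairing scheme, and the
# classifier head are illustrative assumptions.
import torch
import torch.nn as nn

class LSTMRelationNet(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=100, hidden_dim=128,
                 rel_dim=256, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # g: scores one pair of contextual representations
        self.g = nn.Sequential(nn.Linear(2 * hidden_dim, rel_dim), nn.ReLU(),
                               nn.Linear(rel_dim, rel_dim), nn.ReLU())
        # f: maps the aggregated pair relations to a class decision
        self.f = nn.Sequential(nn.Linear(rel_dim, rel_dim), nn.ReLU(),
                               nn.Linear(rel_dim, num_classes))

    def forward(self, sent_a: torch.Tensor, sent_b: torch.Tensor):
        # sent_a, sent_b: (batch, seq_len) token-id tensors
        ha, _ = self.lstm(self.embed(sent_a))   # (batch, la, hidden)
        hb, _ = self.lstm(self.embed(sent_b))   # (batch, lb, hidden)
        la, lb = ha.size(1), hb.size(1)
        # Form all (token_a, token_b) pairs of contextual representations.
        pairs = torch.cat([ha.unsqueeze(2).expand(-1, -1, lb, -1),
                           hb.unsqueeze(1).expand(-1, la, -1, -1)], dim=-1)
        relations = self.g(pairs).sum(dim=(1, 2))  # aggregate over all pairs
        return self.f(relations)                   # class logits

# Usage sketch with random token ids.
a = torch.randint(0, 10000, (4, 12))
b = torch.randint(0, 10000, (4, 15))
print(LSTMRelationNet()(a, b).shape)  # torch.Size([4, 2])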


2021, Vol 11 (1)
Author(s): Jun Ogasawara, Satoru Ikenoue, Hiroko Yamamoto, Motoshige Sato, Yoshifumi Kasuga, ...

Cardiotocography records fetal heart rates and their temporal relationship to uterine contractions. To identify high-risk fetuses, obstetricians inspect cardiotocograms (CTGs) by eye, so CTG traces are often interpreted differently among obstetricians, resulting in inappropriate interventions. However, few studies have focused on quantitative and unbiased algorithms for CTG evaluation. In this study, we propose a newly constructed deep neural network model (CTG-net) to detect compromised fetal status. CTG-net consists of three convolutional layers that extract temporal patterns and interrelationships between fetal heart rate and uterine contraction signals. We aimed to distinguish the abnormal group (umbilical artery pH < 7.20 or Apgar score at 1 min < 7) from the normal group using CTG data. We evaluated the performance of CTG-net with the F1 score and compared it with conventional algorithms, namely support vector machines and k-means clustering, and with another deep neural network model, long short-term memory. CTG-net achieved an area under the receiver operating characteristic curve of 0.73 ± 0.04, which was significantly higher than that of long short-term memory. CTG-net, a quantitative and automated diagnostic aid system, enables early intervention for putatively abnormal fetuses, resulting in a reduction in the number of cases of hypoxic injury.
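The sketch below illustrates a CTG-net-style classifier in PyTorch: three 1D convolutional layers over a two-channel input (fetal heart rate and uterine contraction traces) followed by a binary head producing a logit for compromised fetal status. Kernel sizes, channel counts, pooling, sampling rate, and the class name CTGNetSketch are assumptions; the paper's actual layer configuration may differ.

# Hedged sketch of a CTG-net-style classifier: three 1D convolutional layers
# over two-channel input (fetal heart rate + uterine contraction), followed
# by a binary head. Kernel sizes, channel counts, and pooling are assumptions.
import torch
import torch.nn as nn

class CTGNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, 1)  # logit for "compromised fetal status"

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 2, time) with channel 0 = FHR, channel 1 = UC
        return self.classifier(self.features(x).squeeze(-1))

# Usage sketch: a batch of 20-minute traces sampled at an assumed 4 Hz (4800 samples).
traces = torch.randn(8, 2, 4800)
logits = CTGNetSketch()(traces)
print(torch.sigmoid(logits).shape)  # torch.Size([8, 1]) probabilities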

