Deep learning is a good steganalysis tool when embedding key is reused for different images, even if there is a cover source-mismatch
Abstract
Since the BOSS competition in 2010, most steganalysis approaches have used a two-step learning methodology: feature extraction, such as the Rich Models (RM), for the image representation, and the Ensemble Classifier (EC) for the learning step. In 2015, Qian et al. showed that a deep learning approach that jointly learns and computes the features is very promising for steganalysis.
In this paper, we follow up the study of Qian et al. and show that, in the scenario where the steganographer always uses the same embedding key when embedding with the simulator in different images, a well-parameterized Convolutional Neural Network (CNN) or Fully Connected Neural Network (FNN) surpasses the conventional combination of an RM with an EC, owing to intrinsic joint minimization and the preservation of spatial information.
First, numerous experiments were conducted to find the best "shape" of the CNN. Second, experiments were carried out in the clairvoyant scenario to compare the CNN and FNN to an RM with an EC; the results show a reduction of more than 16% in the classification error with our CNN or FNN. Third, experiments were performed in a cover-source mismatch setting; the results show that the CNN and FNN are naturally robust to the mismatch problem.
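For concreteness, the sketch below illustrates the kind of CNN pipeline discussed here: a fixed high-pass filtering layer (the 5x5 KV kernel commonly used in Rich Model steganalysis) followed by a small trainable convolutional feature extractor and a fully connected cover/stego classifier. This is a minimal PyTorch illustration under stated assumptions, not the exact architecture evaluated in the paper; the class name `StegoCNN` and the layer sizes (64 and 16 feature maps, 128 hidden units) are illustrative choices.

```python
import torch
import torch.nn as nn

# 5x5 "KV" high-pass kernel commonly used in steganalysis to expose the stego noise
KV = torch.tensor([[-1.,  2.,  -2.,  2., -1.],
                   [ 2., -6.,   8., -6.,  2.],
                   [-2.,  8., -12.,  8., -2.],
                   [ 2., -6.,   8., -6.,  2.],
                   [-1.,  2.,  -2.,  2., -1.]]) / 12.0

class StegoCNN(nn.Module):
    """Minimal cover-vs-stego classifier: fixed high-pass filter + CNN + FC head."""
    def __init__(self):
        super().__init__()
        # Fixed (non-trained) high-pass preprocessing layer
        self.hpf = nn.Conv2d(1, 1, kernel_size=5, padding=2, bias=False)
        self.hpf.weight.data = KV.view(1, 1, 5, 5)
        self.hpf.weight.requires_grad = False
        # Small trainable convolutional feature extractor (illustrative sizes)
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(64, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),
        )
        # Fully connected classification head: two classes (cover, stego)
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, 2),
        )

    def forward(self, x):  # x: (N, 1, H, W) grayscale images
        return self.classifier(self.features(self.hpf(x)))

# Example: score a batch of 256x256 grayscale images
logits = StegoCNN()(torch.randn(4, 1, 256, 256))
```

Such a network is trained end to end with a cross-entropy loss, so the feature extraction and classification steps that RM + EC keep separate are minimized jointly, which is the property referred to above as intrinsic joint minimization.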
In addition to the experiments, we discuss the internal mechanisms of a CNN and draw links with previously stated ideas in order to explain the results we obtained. We also discuss the "same embedding key" scenario.