Randomness Tests: Theory and Practice
Abstract
The mathematical theory of probabilities does not refer to the notion of an individual random object. For example, when we toss a fair coin n times, all 2^n bit strings of length n are equiprobable outcomes, and none of them is more "random" than the others. However, when testing a statistical model, e.g., the fair coin hypothesis, we necessarily have to distinguish between outcomes that contradict this model, i.e., the outcomes that convince us to reject this model with some level of certainty, and all other outcomes. The same question arises when we apply randomness tests to a hardware random bit generator. A similar distinction between random and non-random objects appears in algorithmic information theory, which defines the notion of an individual random sequence and therefore splits all infinite bit sequences into random and non-random ones. For finite sequences there is no sharp boundary. Instead, the notion of randomness deficiency can be defined, and sequences with greater deficiency are considered "less random." This definition can be given in terms of randomness tests that are similar to the practical tests used for checking (pseudo)random bit generators. However, these two kinds of randomness tests are rarely compared and discussed together. In this survey we discuss current methods of producing and testing random bits, with algorithmic information theory as a reference point. We also suggest an approach to constructing robust practical tests for random bits.
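As a concrete illustration of the kind of practical test the abstract refers to, here is a minimal sketch of the classical frequency (monobit) test: under the fair-coin hypothesis the number of ones in n bits is close to n/2, and a large deviation leads to rejection. The function name, the significance level `alpha`, and the normal approximation via `erfc` are illustrative choices, not part of the survey itself.

```python
import math

def monobit_test(bits, alpha=0.01):
    """Frequency (monobit) test for the fair-coin hypothesis.

    Under the null hypothesis the statistic
        s = (#ones - n/2) / sqrt(n/4)
    is approximately standard normal, so the two-sided p-value is
    erfc(|s| / sqrt(2)).  We reject when the p-value falls below alpha.
    """
    n = len(bits)
    s = abs(sum(bits) - n / 2) / math.sqrt(n / 4)
    p_value = math.erfc(s / math.sqrt(2))
    return p_value, p_value >= alpha  # True means "consistent with a fair coin"

# A constant string is rejected; a perfectly balanced one is not.
print(monobit_test([0] * 1000))      # p-value near 0, rejected
print(monobit_test([0, 1] * 500))    # p-value 1.0, accepted
```

Note that passing this single test is a weak guarantee: the alternating string 0101... passes it despite being highly compressible, which is exactly the gap between practical test batteries and the deficiency-based definitions from algorithmic information theory discussed in the survey.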