Inequalities for space-bounded Kolmogorov complexity
Abstract
Finding all linear inequalities for entropies remains an important open question in information theory. For a long time, the only known inequalities for entropies of tuples of random variables were the Shannon (submodularity) inequalities. Only in 1998 did Zhang and Yeung find the first inequality that cannot be represented as a convex combination of Shannon inequalities, and several other non-Shannon inequalities were found soon afterwards. It turned out that the class of linear inequalities for entropies is rather fundamental, since the same class can be equivalently defined in terms of subgroup sizes or projections of multidimensional sets (Chan 2001; Chan and Yeung 2002; Romashchenko, Shen, and Vereshchagin 2000). The non-Shannon inequalities have interesting applications (e.g., to lower bounds on the information ratio of secret sharing schemes). Still, the class of linear inequalities for entropies is not well understood, although some partial results are known (e.g., Matúš showed in 2007 that this class cannot be generated by any finite family of inequalities). The same class also appears in algorithmic information theory: the same linear inequalities hold for Shannon entropies of tuples of random variables and for Kolmogorov complexities of tuples of strings (Hammer et al. 1997). This parallelism started with the Kolmogorov–Levin formula (1968) for the complexity of pairs of strings, which holds with logarithmic precision. Longpré proved a version of this formula for space-bounded complexities in 1986. In this paper we prove a stronger version of Longpré's result with a tighter space bound, using Sipser's trick (1980). Then, using this result, we show that every linear inequality that is true for complexities or entropies is also true for space-bounded Kolmogorov complexities with a polynomial space overhead, thus extending the parallelism to space-bounded algorithmic information theory.
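For reference, the two statements the abstract builds on can be written as follows (these are the standard formulations, not quoted from the paper): the Shannon submodularity inequality for entropies of random variables $X, Y, Z$, and the Kolmogorov–Levin formula for the plain complexity of a pair of strings $x, y$, valid up to a logarithmic additive term:

\[
H(X,Y) + H(Y,Z) \;\ge\; H(X,Y,Z) + H(Y),
\]
\[
C(x,y) \;=\; C(x) + C(y \mid x) + O(\log C(x,y)).
\]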