Uniform hardness amplification in NP via monotone codes
Joshua Buresh-Oppenheim, Valentine Kabanets, and Rahul Santhanam
Abstract
We consider the problem of amplifying uniform average-case hardness of languages in NP, where hardness is measured with respect to BPP algorithms. We introduce the notion of monotone error-correcting codes and show that hardness amplification for NP is essentially equivalent to constructing efficiently locally encodable and locally list-decodable monotone codes. Previous hardness amplification results for NP, due to Trevisan, took the direct approach of constructing particular locally encodable/decodable monotone codes, and ran into the problem that the decoding algorithm requires a large amount of nonuniformity. In contrast, we propose an indirect approach to constructing locally encodable/decodable monotone codes, combining the uniform Direct Product Lemma of Impagliazzo et al. with arbitrary, not necessarily locally encodable, monotone codes. The latter codes carry fewer restrictions, and so may be easier to construct.
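To make the monotonicity requirement concrete (this is an illustrative toy, not a construction from the paper): a monotone code is an encoding map in which every codeword bit is a monotone Boolean function of the message bits, so raising a message bit from 0 to 1 can never lower any codeword bit. A minimal Python sketch, using ORs of pairs of message bits as the (assumed, purely illustrative) codeword functions:

```python
from itertools import combinations

def monotone_encode(msg):
    """Toy monotone code: each codeword bit is the OR of one pair of
    message bits. OR is monotone, so the whole encoding map is
    monotone: flipping a message bit 0 -> 1 never flips any codeword
    bit 1 -> 0."""
    k = len(msg)
    return [msg[i] | msg[j] for i, j in combinations(range(k), 2)]

# Raising message bit 1 from 0 to 1 only raises codeword bits:
print(monotone_encode([0, 0, 1]))  # [0, 1, 1]
print(monotone_encode([0, 1, 1]))  # [1, 1, 1]
```

Monotonicity is what lets the encoded string of an NP language again define an NP language, which is why this restriction on the code is the natural one for hardness amplification within NP.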
We study what parameters are achievable by monotone codes in general, giving both negative and positive results. We present two constructions of monotone codes. Our first code is uniquely decodable, is based on the Majority function, and has an efficient decoding algorithm. Our second code is combinatorially list-decodable, but we do not have an efficient decoding algorithm for it. In conjunction with an appropriate Direct Product Lemma, our first code yields uniform hardness amplification for NP from inverse-polynomial to constant average-case hardness. Our second code, even with a brute-force decoding algorithm, yields further hardness amplification to 1/2 − (log n)^{−Ω(1)}. Together, these give an alternative proof of Trevisan's result. Any non-brute-force decoding algorithm for our second code would imply improved parameters for the problem of hardness amplification in NP.
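A hedged sketch of the flavor of a Majority-based monotone encoding (the paper's actual parameters and decoder differ; the tuple indexing and threshold here are assumptions for illustration): index codeword positions by small subsets of message coordinates and output the Majority of the selected bits. Majority is monotone, so the encoding map is monotone.

```python
from itertools import combinations

def majority_encode(msg, t=3):
    """Toy Majority-based monotone code: each codeword position is
    indexed by a t-subset of message coordinates and holds the
    Majority of those t message bits. Majority is monotone, hence
    so is the full encoding map."""
    def maj(bits):
        return int(sum(bits) > len(bits) // 2)
    k = len(msg)
    return [maj([msg[i] for i in S]) for S in combinations(range(k), t)]

# One codeword bit per 3-subset of {0, ..., 4}:
print(majority_encode([1, 0, 1, 1, 0]))
```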
Versions
- Full version: ECCC TR06-154, 2006.