Monday, April 28, 2008
Decoding schemes
For applications of hardness of approximation we often need a 'decoding scheme' - for a fixed code $C$ and a given word $f$, the scheme should return some possibly random codeword $f'$ which in some sense is a decoding of $f$. The codewords $f'$ that are returned may sometimes be totally uncorrelated with $f$, and yet in some sense they are still reasonable decodings of $f$: an example may be the case where $C$ is the long-code, $f$ is a character $f=\chi_{\{7,69\}}$, and the decoding scheme returns either $f'=\chi_{\{7\}}$ or $f'=\chi_{\{69\}}$. These are reasonable decodings of $f$, and yet their correlation with $f$ is zero.
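If you want to see this concretely, here is a small Python sketch of the example above. To keep the universe small enough to enumerate, I use $n=8$ and the indices $3,5$ as stand-ins for $7$ and $69$; all the names here are mine, just for illustration:

```python
import itertools, random

n = 8  # a tiny stand-in universe, so we can enumerate {-1,1}^n exactly

def chi(S):
    """The character chi_S(x) = prod_{i in S} x_i on {-1,1}^n."""
    def f(x):
        value = 1
        for i in S:
            value *= x[i]
        return value
    return f

def correlation(f, g):
    """E_x[f(x) g(x)] under the uniform distribution on {-1,1}^n."""
    points = list(itertools.product((-1, 1), repeat=n))
    return sum(f(x) * g(x) for x in points) / len(points)

def decode(S):
    """The decoding scheme from the text: map chi_S to chi_{i}
    for a uniformly random i in S."""
    return chi({random.choice(sorted(S))})

f = chi({3, 5})
f_prime = decode({3, 5})        # either chi_{3} or chi_{5}
print(correlation(f, f_prime))  # 0.0: a reasonable decoding, zero correlation
```

The correlation comes out exactly zero because $\chi_{\{3,5\}}\cdot\chi_{\{3\}}=\chi_{\{5\}}$, whose expectation over a uniform $x$ vanishes.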
The problem is now how to define this 'reasonable decoding scheme'. Since I failed to give a full definition in class of a reasonable decoding scheme in the case of constraint testers, let me try to compensate by giving the definition here, together with some additional explanations. I will still repeat some of the last part of today's class next week.
Let us start with the definition of a decoding scheme.
A decoding scheme for a code $C$ is a mapping $D$ that associates each string $f$ with a distribution $D(f)$ over codewords from $C$ and possibly the bottom symbol $\bot$ (the bottom symbol will basically appear when the decoding scheme gives up).
Note that in the above definition we gave the decoding scheme total freedom - it can always return just one codeword, it can return the uniform distribution over all codewords, or it can just give up and always return bottom. There are definitely many lousy decoding schemes, but in the definition of a test we'll require good ones.
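In programming terms, a decoding scheme is nothing more than a function from strings to distributions. Here is a minimal Python sketch of the two lousy schemes just mentioned, with a distribution represented (an arbitrary choice of mine) as a list of (codeword, probability) pairs:

```python
import random

BOTTOM = None  # stands for the bottom symbol: the scheme gives up

def give_up_decoder(f):
    """A (lousy but legal) decoding scheme that always returns bottom."""
    return [(BOTTOM, 1.0)]

def uniform_decoder(codewords):
    """A decoding scheme returning the uniform distribution over all
    codewords, regardless of the input f -- also allowed, also lousy."""
    def D(f):
        return [(w, 1.0 / len(codewords)) for w in codewords]
    return D

def sample(distribution):
    """Draw one decoding f' ~ D(f) from a list of (word, prob) pairs."""
    r, acc = random.random(), 0.0
    for word, p in distribution:
        acc += p
        if r < acc:
            return word
    return distribution[-1][0]  # guard against floating-point slack
```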
We decide if a decoding scheme is good by whether the decodings it returns satisfy the constraints that we want to verify, but as noted in class, someone could just try to sell us a pair of decoding schemes that always return the same fixed strings $f',g'$, which happen to satisfy the given constraint and have nothing to do with the original $f$ and $g$. Indeed, to make the decoding problem interesting we need to look not only at one constraint, but at a whole family of constraints: the decoding schemes will have to figure out $f'$ and $g'$ (given $f$ and $g$ respectively) without knowing which constraint from the family is being tested.
A constraint family over codes $C_1,C_2$ is a set $\{R_\lambda\}_{\lambda\in \Lambda}$ of relations $R_\lambda$ between $C_1$ and $C_2$ (that is, $R_\lambda\subseteq C_1\times C_2$). Given $f\in C_1$ and $g\in C_2$, we say that they satisfy the $\lambda$-constraint in the family if $fR_\lambda g$.
We are going to look at tests over families of constraints. The testing procedure will be given an index $\lambda$ of a relation in the family, and then query some bits from the given strings $f$ and $g$. Formally,
A $q$-query constraint tester for codes $C_1,C_2$ and a family $\{R_\lambda\}_{\lambda\in \Lambda}$ is a procedure that, given two strings $f,g$ and an index $\lambda$, makes $q$ queries to $f$ and $g$ and then either accepts or rejects.
We say that the tester has completeness $c$ if whenever $f\in C_1$, $g\in C_2$ and $fR_\lambda g$, the test accepts with probability at least $c$. The more difficult part is to define soundness. We say that a tester has soundness $s=s(\delta)$ for $\delta$-satisfaction if there are decoding schemes $D_1$ for $C_1$ and $D_2$ for $C_2$ such that the following holds: for any strings $f,g$ and any index $\lambda$, if
$\Pr[$tester accepts on $f,g,\lambda]>s$,
then
$\Pr_{f'\sim D_1(f), g'\sim D_2(g)}[f'R_\lambda g']\geq \delta$.
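To see how the pieces of this definition fit together, here is a rough Python sketch of how one could estimate the two probabilities by sampling. It reuses sample and BOTTOM from the sketch above, models a tester as a randomized function tester(f, g, lam) returning True on accept, and models the family as a predicate R(lam, f1, g1); all of these conventions are mine, not canonical:

```python
def acceptance_prob(tester, f, g, lam, trials=100_000):
    """Estimate Pr[tester accepts on f, g, lam] over the tester's coins."""
    return sum(tester(f, g, lam) for _ in range(trials)) / trials

def satisfaction_prob(D1, D2, R, f, g, lam, trials=100_000):
    """Estimate Pr_{f'~D1(f), g'~D2(g)}[f' R_lam g'] for independent
    decodings; a bottom decoding never satisfies the constraint."""
    hits = 0
    for _ in range(trials):
        f1, g1 = sample(D1(f)), sample(D2(g))
        if f1 is not BOTTOM and g1 is not BOTTOM and R(lam, f1, g1):
            hits += 1
    return hits / trials

# Soundness s = s(delta) for delta-satisfaction now reads: for every f, g, lam,
#     acceptance_prob(tester, f, g, lam) > s
# must force
#     satisfaction_prob(D1, D2, R, f, g, lam) >= delta.
```

Note that the same $D_1,D_2$ must work for every $\lambda$ in the family - this is exactly what rules out the fixed-strings swindle above.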
As food for thought, consider the problem of testing whether $f$ and $g$ are in the Hadamard code and additionally satisfy $f=\chi_T g$ for a given $T$. Can you devise a three-query constraint tester for this problem? Can you show some non-trivial bounds on its soundness?
Friday, April 18, 2008
Happy vacation
I wish you all a happy vacation - and to keep your brain from clogging on matzos, don't forget to work on the home assignment.
On a slightly different note, a few of you are extremely late in handing in exercises, and this prevents me from talking about the solutions, which in turn hurts everyone else. If you are late on any of the submissions, you can submit without penalty until the class on Monday, April 28th. Any late submission past that date without my approval will not be accepted. There will be no postponements from now on without my explicit consent.
Thursday, April 10, 2008
Another home assignment
I am happy to announce that homework assignment no. 3 now proudly inhabits the course homepage. The due date is April 28, but you'd be doing yourselves a favor if you looked at it much sooner, before you forget all the proof techniques from recent classes.
Wednesday, April 2, 2008
Shock and awe
I'm not really sure about awe, but I sensed a bit of shock in Monday's class. I guess the fact that this was the first class in three weeks, plus the late time, plus my jetlag, plus the fact that the proof I showed was quite complicated all took their toll.
Not being able to follow everything in class is actually not such a bad thing - it is often unavoidable when learning serious math. But to really gain something from a class where you didn't follow everything, or just barely did so, it is essential to go over the material again and make sure that you eventually understand it. There will be no written assignment this week - your assignment is to go over Monday's class and to really understand all the details and ideas. If you have questions, feel free to ask me.
To help you out a bit, let me make another short attempt at explaining the crux of the proof, which is the lemma saying that if $g=c+\sum a_ix_i$ and $\alpha=\sum a_i^2$ is 'small enough', then $||g-\sign(g)||_2^2\geq \alpha(1+o_\alpha(1))$. First, if $c$ is close to zero this is easy - but you should verify that you know why. If $c$ is closer to, say, $1$ than it is to zero, the point is that $g$ will almost always be positive (but it can also be negative sometimes, even if $\sum a_i^2$ is small - can you see why?). That's why the variance of $g$ will be very close to the variance of $|g|$, which is good because
(a) we know the variance of $g$ - it equals $\alpha$, and
(b) the term $||g-\sign(g)||_2^2$ can be bounded from below by the variance of $|g|$.
All that is left is therefore to bound the difference between the variance of $g$ and that of $|g|$, which we managed to show (modulo the claim that $c+\mathbb{E}[|g|]\leq 4$) is bounded by the expectation of the negative part of $g$. Again, this seems easy since we can use Chernoff to show that the probability that $g$ is negative is very small - the $x_i$'s with their smallish coefficients must win the battle against the big $c$ for this to happen. The problem is that we need a bound not on the probability that $g$ is negative but on the expectation of its negative part. We therefore use a formula that writes the expectation of a nonnegative random variable in terms of the probabilities of it being large, namely $\mathbb{E}[X]=\int_{t=0}^\infty\Pr[X>t]\,dt$. That's how we use the Chernoff bound to estimate an expectation rather than a probability - this is actually a standard and important trick. Once we write down the integral, it's all over but the shouting.
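If you'd like to see these quantities with your own eyes, here is a small Python experiment; the choice $c=0.9$ and twenty coefficients of $0.05$ is entirely arbitrary, just a small enough $\alpha$ to make the point:

```python
import math, random

c = 0.9
a = [0.05] * 20                 # alpha = sum a_i^2 = 0.05, deliberately small
alpha = sum(ai * ai for ai in a)

def sample_g():
    """One sample of g = c + sum a_i x_i with x_i uniform in {-1, 1}."""
    return c + sum(ai * random.choice((-1, 1)) for ai in a)

vals = [sample_g() for _ in range(200_000)]
mean = sum(vals) / len(vals)

var_g = sum((v - mean) ** 2 for v in vals) / len(vals)
dist = sum((v - math.copysign(1.0, v)) ** 2 for v in vals) / len(vals)
neg_part = sum(-v for v in vals if v < 0) / len(vals)

print(f"alpha                 = {alpha:.4f}")
print(f"Var(g)     (= alpha)  = {var_g:.4f}")
print(f"||g - sign(g)||_2^2   = {dist:.4f}")      # at least roughly alpha
print(f"E[negative part of g] = {neg_part:.2e}")  # tiny, as Chernoff predicts
```

The sampling of course only illustrates the statement - the actual proof needs the Chernoff computation sketched above.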