Recent Progress on Pseudorandomness for Small Memory
One of the most important complexity-theoretic challenges is showing that randomized algorithms are not much more powerful than deterministic algorithms. Specifically, the two main challenges are to show that randomness “cannot save time” and that randomness “cannot save memory”.
Our focus here is on the latter. A major tool in this study is pseudorandom distributions that fool small-memory computations, meaning that small-memory computations behave essentially the same when fed these distributions as when fed the uniform distribution. Pseudorandomness for small memory also generalizes many of the most useful pseudorandom objects, such as epsilon-biased distributions and bounded-independence distributions.
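To make "behave the same" precise, the standard definition (stated here for completeness; the notation is mine, not taken from the talk) says that a distribution D over n-bit strings epsilon-fools a class of functions C, such as functions computable with small memory, if

```latex
\[
\left| \Pr_{x \sim D}\bigl[f(x) = 1\bigr] \;-\; \Pr_{x \sim U_n}\bigl[f(x) = 1\bigr] \right| \le \varepsilon
\qquad \text{for every } f \in \mathcal{C},
\]
```

where U_n denotes the uniform distribution over n-bit strings. The point is that D can often be sampled with far fewer random bits than n, while no function in the class can tell the difference beyond epsilon.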
In this talk I will focus on a recent kind of pseudorandom distribution that has been at the center of major progress in this area. These distributions are based on mild pseudorandom restrictions, meaning that the bits of the distribution are fixed in stages using much more basic pseudorandom primitives. Since part of the audience may not be familiar with research on pseudorandomness, I will aim for the talk to be as self-contained as possible.
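To give a feel for the stage-by-stage picture, here is a toy sketch (my own illustration, not a construction from the talk): each stage fixes roughly half of the still-free coordinates, with the positions chosen by a pairwise-independent hash h(i) = (a*i + b) mod p, so only a small seed is needed per stage, and after about log n stages every bit is fixed. Real constructions fix the values, not just the positions, with limited randomness as well; here the values are drawn freshly only to keep the sketch short.

```python
import random

P = 10007  # a prime larger than n, so h(i) = (a*i + b) mod P is pairwise independent on positions

def stage(free, rng):
    """Fix roughly half of the free positions, chosen via a pairwise-independent hash."""
    a, b = rng.randrange(1, P), rng.randrange(P)
    chosen = {i for i in free if (a * i + b) % P < P // 2}
    values = {i: rng.getrandbits(1) for i in chosen}  # assign bits to the newly fixed positions
    return values, free - chosen

def restrict(n, seed=0):
    """Apply restriction stages until all n coordinates are fixed."""
    rng = random.Random(seed)
    assignment, free, stages = {}, set(range(n)), 0
    while free:
        values, free = stage(free, rng)
        assignment.update(values)
        stages += 1
    return assignment, stages

bits, num_stages = restrict(64)
```

Since each stage fixes about half of the remaining coordinates, `num_stages` is typically close to log2(n), which is the source of the "mild" savings in randomness.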