**High Dimensional Linear Regression using Lattice Basis Reduction**

In this talk, we focus on the high-dimensional linear regression problem, where the goal is to efficiently recover an unknown vector β∗ from n noisy linear observations Y = Xβ∗ + W ∈ R^n, for known X ∈ R^{n×p} and unknown noise W ∈ R^n. Unlike most of the literature on this model, we make no sparsity assumption on β∗. Instead, we adopt a regularization based on the assumption that the underlying vector β∗ has rational entries with the same denominator Q ∈ Z_{>0}. We call this the Q-rationality assumption.
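The model and assumption above can be restated formally as follows (a minimal sketch; the precise role of Q, e.g. whether it is known to the learner, is not specified in this abstract):

```latex
% Linear observation model with unknown beta^* (notation from the abstract)
Y = X\beta^* + W \in \mathbb{R}^n, \qquad X \in \mathbb{R}^{n \times p},
\qquad W \in \mathbb{R}^n,
% Q-rationality assumption: all entries of beta^* share the denominator Q
\qquad \beta^* \in \left\{ \tfrac{1}{Q}\, z : z \in \mathbb{Z}^p \right\},
\quad Q \in \mathbb{Z}_{>0}.
```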

We propose a new polynomial-time algorithm for this task based on the seminal Lenstra-Lenstra-Lovász (LLL) lattice basis reduction algorithm. We establish that, under the Q-rationality assumption, our algorithm exactly recovers the vector β∗ for a large class of distributions for the i.i.d. entries of X and non-zero noise W. We prove that it succeeds under small noise, even when the learner has access to only one observation (n = 1). Furthermore, we prove that in the case of Gaussian white noise W, with n = o(p/log p) and Q sufficiently large, our algorithm tolerates a nearly optimal information-theoretic level of noise.

Authors: Ilias Zadik