We study the complexity of PAC learning halfspaces in the presence of Massart
noise. In this problem, we are given i.i.d. labeled examples $(\mathbf{x}, y)
\in \mathbb{R}^N \times \{ \pm 1\}$, where the distribution of $\mathbf{x}$ is
arbitrary and the label $y$ is a Massart corruption of $f(\mathbf{x})$, for an
unknown halfspace $f: \mathbb{R}^N \to \{ \pm 1\}$, with flipping probability
$\eta(\mathbf{x}) \leq \eta < 1/2$. The goal of the learner is to compute a
hypothesis with small 0-1 error. Our main result is the first computational
hardness result for this learning problem. Specifically, assuming the (widely
believed) subexponential-time hardness of the Learning with Errors (LWE)
problem, we show that no polynomial-time Massart halfspace learner can achieve
error better than $\Omega(\eta)$, even if the optimal 0-1 error is small,
namely $\mathrm{OPT} = 2^{-\log^c(N)}$ for any universal constant $c \in (0,
1)$. Prior work had provided qualitatively similar evidence of hardness in the
Statistical Query model. Our computational hardness result essentially resolves
the question of polynomial-time PAC learnability of Massart halfspaces, by
showing that known efficient learning algorithms for the problem are nearly
best possible.
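For concreteness, the noise model above can be summarized in a single display.
This is a restatement of the definitions already given, combined with the
standard fact that, since $\eta(\mathbf{x}) < 1/2$ pointwise, the target
halfspace $f$ is Bayes-optimal, so the optimal 0-1 error equals the expected
flip probability:
\[
\Pr[\, y \neq f(\mathbf{x}) \mid \mathbf{x} \,] \;=\; \eta(\mathbf{x}) \;\leq\; \eta \;<\; \frac{1}{2},
\qquad
\mathrm{OPT} \;=\; \Pr[\, f(\mathbf{x}) \neq y \,] \;=\; \mathbf{E}_{\mathbf{x}}[\eta(\mathbf{x})] \;\leq\; \eta.
\]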