We prove the following type of discrete entropy monotonicity for sums of isotropic, log-concave, independent and identically distributed random vectors $X_1,\dots,X_{n+1}$ on $\mathbb{Z}^d$:
$$ H(X_1+\cdots+X_{n+1}) \geq H(X_1+\cdots+X_n) + \frac{d}{2}\log\Bigl(\frac{n+1}{n}\Bigr) + o(1), $$
where the $o(1)$-term vanishes as $H(X_1) \to \infty$. Moreover, for the $o(1)$-term we obtain a rate of convergence $O\bigl(H(X_1)\,e^{-\frac{1}{d}H(X_1)}\bigr)$, where the implied constants depend on $d$ and $n$. This generalizes to $\mathbb{Z}^d$ the one-dimensional result of the second named author (2023). As in dimension one, our strategy is to establish that the discrete entropy $H(X_1+\cdots+X_n)$ is close to the differential (continuous) entropy $h(X_1+U_1+\cdots+X_n+U_n)$, where $U_1,\dots,U_n$ are independent and identically distributed uniform random vectors on $[0,1]^d$, and to apply the theorem of Artstein, Ball, Barthe and Naor (2004) on the monotonicity of differential entropy. In fact, we prove this result under assumptions more general than log-concavity, which are preserved, up to constants, under convolution. To show that log-concave distributions satisfy our assumptions in dimension $d \ge 2$, more involved tools from convex geometry are needed, because a suitable position is required. We show that, for a log-concave function on $\mathbb{R}^d$ in isotropic position, its integral, barycenter and covariance matrix are close to their discrete counterparts. Moreover, in the log-concave case, we weaken the isotropicity assumption to what we call almost isotropicity. One of our technical tools is a discrete analogue of the upper bound on the isotropic constant of a log-concave function, which extends to dimensions $d \ge 1$ a result of Bobkov, Marsiglietti and Melbourne (2022).
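The main inequality is easy to probe numerically. The following sketch (not from the paper; the choice of $X_i \sim \mathrm{Binomial}(m, 1/2)$, a log-concave distribution on $\mathbb{Z}$, and the parameter $m$ are illustrative assumptions) checks in dimension $d = 1$ that the entropy gap $H(S_{n+1}) - H(S_n)$ tracks the lower bound $\tfrac{1}{2}\log\frac{n+1}{n}$, using that a sum of $n$ such variables is $\mathrm{Binomial}(nm, 1/2)$.

```python
# Illustration only: compare H(S_{n+1}) - H(S_n) with (1/2) log((n+1)/n)
# for X_i ~ Binomial(m, 1/2) on Z, so that S_n ~ Binomial(n*m, 1/2).
from math import comb, log

def binomial_entropy(N, p=0.5):
    """Discrete Shannon entropy (in nats) of Binomial(N, p)."""
    H = 0.0
    for k in range(N + 1):
        q = comb(N, k) * p**k * (1 - p)**(N - k)
        if q > 0.0:
            H -= q * log(q)
    return H

m = 40  # hypothetical parameter, chosen so H(X_1) is moderately large
for n in range(1, 5):
    gap = binomial_entropy((n + 1) * m) - binomial_entropy(n * m)
    bound = 0.5 * log((n + 1) / n)
    print(f"n={n}: gap={gap:.4f}, bound={bound:.4f}")
```

Since $H(\mathrm{Binomial}(N,1/2)) \approx \tfrac12\log(\pi e N/2)$ for large $N$, the gap is close to the bound, consistent with the $o(1)$-term being small once $H(X_1)$ is large.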