# A reverse Cantelli inequality

The traditional Cantelli inequality states that, for any real-valued random variable \(X\) and \(\lambda > 0\),

\[\Pr[X - \mu \geq \lambda] \leq \frac{\sigma^2}{\sigma^2 + \lambda^2},\]where \(\mu\) is the expectation of \(X\) and \(\sigma^2\) is its variance. I recently needed a variant of this inequality of the following form:

\[\Pr[X - \alpha\mu \geq 0] \geq \mbox{ ? }\]where \(\alpha > 0\). To prove such a lower bound, I needed an additional assumption, namely that \(X\) is non-negative and bounded:

\[\Pr[0 \leq X \leq b] = 1.\]Then we have

**Lemma.** Let \(X\) be a random variable with \(0 \leq X \leq b\) almost surely. Let \(\mu\) and \(\sigma^2\) denote the mean and variance of \(X\), respectively, and let \(\alpha \geq 0\). Then

\[\Pr[X - \alpha\mu > 0] \geq \frac{\sigma^2 - (\alpha^2 + 1)\mu^2}{b^2}.\]

*Proof.* Let \(E = \{ X - \alpha\mu > 0 \}\). Then, by the law of total expectation,

\[\begin{aligned} \sigma^2 = \mathbb E[(X - \mu)^2] &= \Pr[E]\,\mathbb E[(X - \mu)^2 \mid E] + \Pr[E^c]\,\mathbb E[(X - \mu)^2 \mid E^c]\\\\ &\leq \Pr[E]\,b^2 + \mathbb E[(X - \mu)^2 \mid X - \alpha\mu \leq 0],\end{aligned}\]where the inequality uses \((X - \mu)^2 \leq b^2\) (both \(X\) and \(\mu\) lie in \([0, b]\)) and \(\Pr[E^c] \leq 1\).

Our goal is now to bound \(\mathbb E[(X - \mu)^2 \mid X - \alpha\mu \leq 0]\) from above. Note that

\[(X - \mu)^2 = X^2 - 2X\mu + \mu^2.\]On the event \(X \leq \alpha\mu\), the non-negativity of \(X\) gives \(X^2 \leq \alpha^2\mu^2\). Hence

\[\begin{aligned} \mathbb E[(X - \mu)^2 \mid X \leq \alpha\mu] &= \mathbb E[X^2 - 2X\mu + \mu^2 \mid X \leq \alpha\mu]\\\\ &= \mathbb E[X^2 \mid X \leq \alpha\mu] - 2\mu\mathbb E[X \mid X \leq \alpha\mu] + \mu^2\\\\ &\leq \alpha^2\mu^2 + \mu^2\\\\ &= (\alpha^2 + 1)\mu^2, \end{aligned}\]where the inequality uses \(X^2 \leq \alpha^2\mu^2\) and drops the non-positive term \(-2\mu\mathbb E[X \mid X \leq \alpha\mu]\). This means

\[\begin{aligned} \sigma^2 &\leq \Pr[X - \alpha\mu > 0]b^2 + \mathbb E[(X - \mu)^2 \mid X - \alpha\mu \leq 0]\\\\ &\leq \Pr[X - \alpha\mu > 0]b^2 + (\alpha^2 + 1)\mu^2. \end{aligned}\]Rearranging then yields the stated result. \(\blacksquare\)
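As a quick numerical sanity check (my own experiment, not part of the original argument), we can verify both the decomposition over \(E\) and the final bound on a test distribution. The Beta(0.1, 5) distribution and \(\alpha = 1.2\) below are arbitrary illustrative choices:

```python
import numpy as np

# Monte Carlo sanity check of the proof. The Beta(0.1, 5) test distribution
# and alpha = 1.2 are arbitrary choices, picked so the bound is non-vacuous.
rng = np.random.default_rng(0)
b, alpha = 1.0, 1.2
x = rng.beta(0.1, 5.0, size=200_000)   # samples lie in [0, 1], so b = 1
mu = x.mean()
sq = (x - mu) ** 2
e = x - alpha * mu > 0                 # the event E from the proof

# The (empirical) variance splits exactly over E and its complement:
lhs = sq.mean()
rhs = e.mean() * sq[e].mean() + (1 - e.mean()) * sq[~e].mean()
assert abs(lhs - rhs) < 1e-9

# ...and the lemma's bound holds, with a strictly positive right-hand side:
bound = (lhs - (alpha**2 + 1) * mu**2) / b**2
assert bound > 0 and e.mean() >= bound
```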

For the bound to be non-vacuous, it is necessary that

\[\sigma^2 \geq (\alpha^2 + 1)\mu^2.\]This always holds for \(\mu = 0\). When \(\mu \neq 0\), we can also translate this into a requirement on \(\alpha\):

\[\alpha \leq \sqrt{\frac{\sigma^2}{\mu^2} - 1},\]which requires \(\sigma \geq \mu\). If \(\sigma < \mu\), the bound is never useful, since it is always negative.
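For a concrete feel of this condition, here is an illustrative two-point example (the numbers are my own choice, not from the text): a variable that is \(1\) with probability \(0.1\) and \(0\) otherwise, so \(\sigma^2/\mu^2 = 9\):

```python
import math

# Two-point example: X = 1 with probability 0.1, else 0.
mu = 0.1                      # E[X]
var = 0.1 * 0.9               # Var[X] = p(1 - p) = 0.09
ratio = var / mu**2           # sigma^2 / mu^2 = 9, so sigma >= mu holds
alpha_max = math.sqrt(ratio - 1)
print(alpha_max)              # sqrt(8), roughly 2.83
# Any alpha up to alpha_max keeps the lower bound non-negative:
assert var >= (alpha_max**2 + 1) * mu**2 - 1e-12
```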

Note that we can also turn this new result into a lower-bound counterpart of the old Cantelli inequality:

\[\Pr[X - \mu > \lambda] = \Pr[X - (\mu + \lambda) > 0] = \Pr[X - \alpha\mu > 0]\]with

\[\alpha = 1 + \frac{\lambda}{\mu},\]assuming \(\mu > 0\). As such,

\[\begin{aligned} \Pr[X - \mu > \lambda] &\geq \frac{\sigma^2 - \left(\left(1 + \frac{\lambda}{\mu}\right)^2 + 1\right)\mu^2}{b^2}\\\\ &= \frac{\sigma^2 - \left(2 + 2\frac{\lambda}{\mu} + \frac{\lambda^2}{\mu^2}\right)\mu^2}{b^2}\\\\ &= \frac{\sigma^2 - 2\mu^2 - 2\mu\lambda - \lambda^2}{b^2}. \end{aligned}\]Formally setting \(\mu = 0\), this reduces to

\[\Pr[X \geq \lambda] \geq \frac{\sigma^2 - \lambda^2}{b^2},\]which is non-vacuous for \(\lambda < \sigma\). Hence the two-sided bound:

\[\frac{\sigma^2 - \lambda^2}{b^2} \leq \Pr[X \geq \lambda] \leq \frac{\sigma^2}{\sigma^2 + \lambda^2}.\]In particular, this means

\[\frac{\sigma^2 - \lambda^2}{b^2} \leq \frac{\sigma^2}{\sigma^2 + \lambda^2}.\]Therefore,

\[\sigma^4 - \lambda^4 \leq b^2\sigma^2.\]Setting \(\lambda = 0\), we find

\[\sigma \leq b.\]This is a cute little side-effect we just proved: any upper bound on \(X\) must be at least as large as its standard deviation. This is not really surprising if you know what the standard deviation means (indeed, it would be very weird if this weren't true), but it's still nice. As it happens, this is a weaker form of Popoviciu's inequality on variances. For \(0 \leq X \leq b\), Popoviciu's inequality implies

\[\sigma \leq \frac{b}{2},\]which is better than our corollary by a factor of two.
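The gap between the two bounds is easy to see numerically. A standard example (my own choice, not from the text) is a fair coin on \(\{0, b\}\), which attains Popoviciu's bound exactly:

```python
import math

# A fair coin on {0, b}: X = b or X = 0, each with probability 1/2.
b = 1.0
sigma = math.sqrt(0.5 * 0.5) * b   # sqrt(p(1 - p)) * b = b/2
assert math.isclose(sigma, b / 2)  # Popoviciu's bound sigma <= b/2 is attained
assert sigma <= b                  # our corollary holds, with a factor of two to spare
```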
