On an expected value inequality.
Given a random variable $X$ that takes values on all of $\mathbb{R}$, with associated probability density function $f$, is it true that for all $r > 0$
$$E\left[\int_{X-r}^{X+r} f(x)\,dx\right] \ge E\left[\int_{X-r}^{X+r} g(x)\,dx\right]$$
for any other probability density function $g$?
This seems intuitively true to me, and if it were true I imagine it would already have been proven, but I cannot find a similar result in the standard textbooks; even a reference is welcome.
Tags: probability, integration, probability-theory, inequality
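Both sides of the proposed inequality have the form $E[H(X+r) - H(X-r)]$, where $H$ is the CDF of the density inside the integral, so they are easy to estimate by Monte Carlo. A minimal sketch (the choices $f = N(0,1)$, $g = N(0, 0.5^2)$, and $r = 0.5$ are illustrative, not part of the question):

```python
import numpy as np
from math import erf, sqrt

# Monte Carlo estimate of E[∫_{X-r}^{X+r} h(x) dx] = E[H(X+r) - H(X-r)],
# where H is the CDF of the density h.  Illustrative (assumed) densities:
# f = N(0,1) (the law of X) and g = N(0, 0.5^2).

def Phi(x, sigma=1.0):
    """CDF of N(0, sigma^2), applied elementwise to a numpy array."""
    return np.array([0.5 * (1.0 + erf(v / (sigma * sqrt(2.0)))) for v in x])

rng = np.random.default_rng(0)
r = 0.5
X = rng.standard_normal(100_000)  # samples of X ~ f

lhs = np.mean(Phi(X + r) - Phi(X - r))            # E[∫ f over (X-r, X+r)]
rhs = np.mean(Phi(X + r, 0.5) - Phi(X - r, 0.5))  # E[∫ g over (X-r, X+r)]

print(lhs, rhs)  # here rhs exceeds lhs, so the inequality fails for this pair
```

For these choices the exact values are $2\Phi(r/\sqrt{2}) - 1 \approx 0.276$ and $2\Phi(r/\sqrt{1.25}) - 1 \approx 0.345$, since $X' - X \sim N(0, 2)$ and $Y - X \sim N(0, 1.25)$ for independent $X' \sim f$, $Y \sim g$.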
asked Nov 10 at 17:17 (edited Nov 10 at 20:24) by Monolite
2 Answers
Taking the particular case of small $r$ ($r \to 0$) and continuous $f$, we have $\int_{X-r}^{X+r} h(x)\,dx \approx 2r\,h(X)$, so your inequality becomes equivalent to
$$\int f^2 \ge \int f g$$
with the restrictions $\int f = \int g = 1$ and $f \ge 0$, $g \ge 0$. This is clearly false: for a fixed $f$, we maximize $\int f g$ not by choosing $g = f$, but by choosing $g$ concentrated around the mode (maximum) of $f$.
Incidentally, your assertion has a simple interpretation. Suppose I have to guess the value of a random variable $x$ with pdf $f$, and I win if the absolute error $e = |x - \hat{x}|$ is less than $r$. If the inequality were true, the conclusion would be that my best strategy (in terms of expected win rate) is to make a random guess, drawing $\hat{x}$ as an independent random variable with the same density as $x$. But this is not true: the optimal guess is a deterministic value, the one that maximizes the corresponding integral; for small $r$ this is the mode of $f$ (the maximum a posteriori estimate).
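The limiting claim is easy to check numerically. A sketch (the choice of $f = N(0,1)$ and $g = N(0, 0.5^2)$, a density concentrated around the mode of $f$, is an illustrative assumption, not taken from the answer):

```python
import numpy as np

# Compare ∫ f^2 dx with ∫ f g dx for f = N(0,1) and a sharper density
# g = N(0, 0.5^2) sharing f's mode; the proposed inequality would require
# ∫ f^2 >= ∫ f g, but the opposite holds for this pair.

def normal_pdf(x, sigma=1.0):
    return np.exp(-0.5 * (x / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

x = np.linspace(-10.0, 10.0, 200_001)
dx = x[1] - x[0]
f = normal_pdf(x)             # density of X
g = normal_pdf(x, sigma=0.5)  # concentrated around the mode of f

int_f2 = np.sum(f * f) * dx   # = 1/(2*sqrt(pi))      ≈ 0.2821
int_fg = np.sum(f * g) * dx   # = 1/sqrt(2*pi*1.25)   ≈ 0.3568

print(int_f2, int_fg)  # int_fg > int_f2, as the mode argument predicts
```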
answered Nov 10 at 18:46 (edited Nov 10 at 19:23) by leonbloy (accepted answer)
Unfortunately, your intuitive conjecture is incorrect.
Let $f(x)$ be the PDF of the random variable $X$ and $F(x)$ its CDF, so that $F'(x) = f(x)$, i.e.
$$F(x) = \int_{-\infty}^{x} f(t)\,dt$$
Similarly, let $g(x)$ be another PDF with CDF $G(x)$. Then the expected value of the integral
$$\int_{X-r}^{X+r} g(t)\,dt$$
is equal to
$$\int_{-\infty}^{\infty} \int_{x-r}^{x+r} f(x)\,g(t)\,dt\,dx = \int_{-\infty}^{\infty} \bigl(G(x+r) - G(x-r)\bigr)\,f(x)\,dx$$
By integration by parts (or, more directly, by Fubini's theorem, since both sides integrate $f(x)g(t)$ over the region $|t - x| < r$), we have
$$\int_{-\infty}^{\infty} \bigl(G(x+r) - G(x-r)\bigr)\,f(x)\,dx = \int_{-\infty}^{\infty} \bigl(F(x+r) - F(x-r)\bigr)\,g(x)\,dx$$
Consider this simple counterexample. Let $r = 1$ and suppose that
$$f(x) = \frac{1}{\pi}\,\frac{1}{1+x^2}$$
Then, if your conjecture were true, for no density $g$ would the integral
$$\frac{1}{\pi}\int_{-\infty}^{\infty} \bigl(\arctan(x+1) - \arctan(x-1)\bigr)\,g(x)\,dx$$
ever surpass the value
$$\frac{1}{\pi^2}\int_{-\infty}^{\infty} \frac{\arctan(x+1) - \arctan(x-1)}{1+x^2}\,dx = \frac{2}{\pi}\arctan\frac{1}{2} \approx 0.2952$$
(this is $P(|X - X'| < 1)$ for independent standard Cauchy variables, whose difference is Cauchy with scale $2$). However, suppose that we let
$$g(x) = \frac{2}{\pi}\,\frac{1}{1+4x^2}$$
a Cauchy density with scale $1/2$. Then the value of our integral is equal to
$$\frac{2}{\pi^2}\int_{-\infty}^{\infty} \frac{\arctan(x+1) - \arctan(x-1)}{1+4x^2}\,dx = \frac{2}{\pi}\arctan\frac{2}{3} \approx 0.3743$$
which disproves your conjecture.
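The two comparison values in this counterexample can be checked by direct numerical integration; a sketch (the truncation of the grid to $[-500, 500]$ is my own choice, and the heavy Cauchy tails contribute negligibly beyond it):

```python
import numpy as np

# Numerical check of the two Cauchy integrals in the counterexample (r = 1).
# window(x) = ∫_{x-1}^{x+1} f(t) dt for f the standard Cauchy density.
x = np.linspace(-500.0, 500.0, 1_000_001)
dx = x[1] - x[0]
window = (np.arctan(x + 1) - np.arctan(x - 1)) / np.pi

f = 1.0 / (np.pi * (1.0 + x**2))        # standard Cauchy density
g = 2.0 / (np.pi * (1.0 + 4.0 * x**2))  # Cauchy density with scale 1/2

val_f = np.sum(window * f) * dx  # ≈ (2/pi) * arctan(1/2) ≈ 0.2952
val_g = np.sum(window * g) * dx  # ≈ (2/pi) * arctan(2/3) ≈ 0.3743

print(val_f, val_g)  # val_g > val_f: the concentrated g beats f itself
```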
answered Nov 10 at 18:03 by Frpzzd