4.1.4. Use regular conditional probability to get the conditional Holder inequality from the unconditional one, i.e., show that if $p, q \in (1,\infty)$ with $1/p + 1/q = 1$ then $E(|XY| \mid \mathcal{G}) \le E(|X|^p \mid \mathcal{G})^{1/p} E(|Y|^q \mid \mathcal{G})^{1/q}$.
Proof: Note that $(\mathbb{R}^2, \mathcal{B}(\mathbb{R}^2))$ is a nice space. Therefore, according to Theorem 4.1.17, there exists a $\mu(\omega, A)$ which is the regular conditional distribution for $(X, Y)$ given $\mathcal{G}$. In other words,
For each $A \in \mathcal{B}(\mathbb{R}^2)$, $\omega \mapsto \mu(\omega, A)$ is a version of $P((X,Y) \in A \mid \mathcal{G})$.
For a.e. $\omega$, $A \mapsto \mu(\omega, A)$ is a probability measure on $(\mathbb{R}^2, \mathcal{B}(\mathbb{R}^2))$.
Now for a.e. $\omega$, it follows from the unconditional Holder inequality, applied on the probability space $(\mathbb{R}^2, \mathcal{B}(\mathbb{R}^2), \mu(\omega, \cdot))$, that
$$\int |xy| \, \mu(\omega, dx\,dy) \le \left(\int |x|^p \, \mu(\omega, dx\,dy)\right)^{1/p} \left(\int |y|^q \, \mu(\omega, dx\,dy)\right)^{1/q}.$$
Using Theorem 4.1.16, the above inequality implies that
$$E(|XY| \mid \mathcal{G}) \le E(|X|^p \mid \mathcal{G})^{1/p} E(|Y|^q \mid \mathcal{G})^{1/q} \quad \text{a.s.,}$$
as desired.
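As a sanity check (not part of the solution itself), the inequality can be tested numerically when $\mathcal{G}$ is generated by a discrete random variable, so that conditional expectations reduce to within-group averages. The distributions of $X$, $Y$, $Z$ and the exponent $p$ below are arbitrary illustrative choices.

```python
# Check E(|XY| | G) <= E(|X|^p | G)^{1/p} E(|Y|^q | G)^{1/q} empirically,
# with G = sigma(Z) for a discrete Z, so E(. | G) is a group mean.
import numpy as np

rng = np.random.default_rng(0)
n, p = 10**6, 3.0
q = p / (p - 1.0)                       # conjugate exponent, 1/p + 1/q = 1

Z = rng.integers(0, 4, size=n)          # G = sigma(Z), four atoms
X = rng.normal(size=n) * (1 + Z)        # X, Y deliberately depend on Z
Y = rng.exponential(size=n) - Z

for z in range(4):
    m = Z == z
    lhs = np.mean(np.abs(X[m] * Y[m]))
    rhs = np.mean(np.abs(X[m])**p)**(1/p) * np.mean(np.abs(Y[m])**q)**(1/q)
    print(f"atom Z={z}: E|XY| = {lhs:8.4f} <= {rhs:8.4f}")
```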
4.1.6. Show that if $\mathcal{G} \subset \mathcal{F}$ and $EX^2 < \infty$ then
$$E\{(X - E(X|\mathcal{F}))^2\} + E\{(E(X|\mathcal{F}) - E(X|\mathcal{G}))^2\} = E\{(X - E(X|\mathcal{G}))^2\}.$$
Proof: Note that for any sub-$\sigma$-field $\mathcal{H}$ we have $E[(X - E(X|\mathcal{H}))\,E(X|\mathcal{H})] = 0$, by the defining property of conditional expectation. Therefore we can verify the following Pythagorean law:
$$E\{(X - E(X|\mathcal{H}))^2\} = EX^2 - E\{E(X|\mathcal{H})^2\}.$$
Also note that $E(E(X|\mathcal{F}) \mid \mathcal{G}) = E(X|\mathcal{G})$. So using this Pythagorean law three times (with $\mathcal{H} = \mathcal{F}$; with $\mathcal{H} = \mathcal{G}$; and with $X$ replaced by $E(X|\mathcal{F})$ and $\mathcal{H} = \mathcal{G}$), we get that the desired equality is equivalent to
$$\big(EX^2 - E\{E(X|\mathcal{F})^2\}\big) + \big(E\{E(X|\mathcal{F})^2\} - E\{E(X|\mathcal{G})^2\}\big) = EX^2 - E\{E(X|\mathcal{G})^2\}.$$
This is trivial.
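The three-term identity is also easy to see numerically with nested discrete $\sigma$-fields, where $E(\cdot \mid \mathcal{F})$ is a within-group mean; the toy variables below are arbitrary.

```python
# Check E(X-E(X|F))^2 + E(E(X|F)-E(X|G))^2 = E(X-E(X|G))^2 empirically,
# with G = sigma(Z1) coarser than F = sigma(Z1, Z2).
import numpy as np

rng = np.random.default_rng(1)
n = 10**6
Z1 = rng.integers(0, 2, size=n)
Z2 = rng.integers(0, 3, size=n)
X = Z1 + Z2 + rng.normal(size=n)

def cond_mean(X, labels):
    """E(X | sigma(labels)) as a vector: group means mapped back."""
    out = np.empty_like(X)
    for v in np.unique(labels):
        m = labels == v
        out[m] = X[m].mean()
    return out

EF = cond_mean(X, Z1 * 3 + Z2)   # E(X|F): the pair (Z1, Z2) coded as one label
EG = cond_mean(X, Z1)            # E(X|G)
print(np.mean((X - EF)**2) + np.mean((EF - EG)**2))  # these two numbers agree
print(np.mean((X - EG)**2))
```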
4.1.9. Show that if $X$ and $Y$ are random variables with $E(Y|\mathcal{G}) = X$ and $EX^2 = EY^2 < \infty$, then $X = Y$ a.s.
Proof: Using the Pythagorean law (see the proof of Exercise 4.1.6), we have
$$E\{(Y - X)^2\} = E\{(Y - E(Y|\mathcal{G}))^2\} = EY^2 - E\{E(Y|\mathcal{G})^2\} = EY^2 - EX^2 = 0.$$
So we have $X = Y$ a.s.
4.1.10. If $E|Y| < \infty$ and $Z = E(Y|\mathcal{G})$ has the same distribution as $Y$, then $Y = Z$ a.s.
Proof: First we prove that for each constant $c$ we have $|Z - c| = E(|Y - c| \mid \mathcal{G})$ a.s. In fact, on one hand, Jensen's inequality implies that
$$|Z - c| = |E(Y - c \mid \mathcal{G})| \le E(|Y - c| \mid \mathcal{G}), \quad\text{so}\quad E|Z - c| \le E|Y - c|.$$
On the other hand, the condition that $Z$ has the same distribution as $Y$ says that $E|Z - c| = E|Y - c|$.
So we must have $|Z - c| = E(|Y - c| \mid \mathcal{G})$ almost surely.
This leads us to
$$E((Y-c)^+ \mid \mathcal{G}) + E((Y-c)^- \mid \mathcal{G}) = |Z - c| = \big|E((Y-c)^+ \mid \mathcal{G}) - E((Y-c)^- \mid \mathcal{G})\big| \quad\text{a.s.,}$$
which forces that, almost surely, $E((Y-c)^- \mid \mathcal{G}) = 0$ on $\{Z > c\}$ and $E((Y-c)^+ \mid \mathcal{G}) = 0$ on $\{Z < c\}$.
So we must have, almost surely, $Y \ge c$ on $\{Z > c\}$ and $Y \le c$ on $\{Z < c\}$. Now take $c$ ranging over the rationals; we get
$$P(Y < c < Z \text{ or } Z < c < Y \text{ for some rational } c) = 0,$$
so we must have $Y = Z$ a.s., as required. This completes the proof.
4.2.2. Give an example of a submartingale $X_n$ so that $X_n^2$ is a supermartingale.
Proof: One simple example is deterministic: take $X_n = -1/(n+1)$ (with any filtration). Then $X_n$ is a submartingale, since $E(X_{n+1} \mid \mathcal{F}_n) = -1/(n+2) \ge -1/(n+1) = X_n$, while $X_n^2 = 1/(n+1)^2$ is decreasing and hence a supermartingale.
4.2.3. Show that if $X_n$ and $Y_n$ are submartingales w.r.t. $\mathcal{F}_n$ then $X_n \vee Y_n$ is also.
Proof: Obviously $X_n \vee Y_n$ is adapted to $\mathcal{F}_n$. From $|X_n \vee Y_n| \le |X_n| + |Y_n|$, we know $X_n \vee Y_n$ is integrable. Finally, we have
$$E(X_{n+1} \vee Y_{n+1} \mid \mathcal{F}_n) \ge E(X_{n+1} \mid \mathcal{F}_n) \vee E(Y_{n+1} \mid \mathcal{F}_n) \ge X_n \vee Y_n.$$
4.2.4. Let $X_n$ be a submartingale with $\sup_n X_n < \infty$. Let $\xi_n = X_n - X_{n-1}$ and suppose $E(\sup_n \xi_n^+) < \infty$. Show that $X_n$ converges a.s.
Proof: For each $k$ define the stopping time $N_k = \inf\{n : X_n > k\}$. From $X_{n \wedge N_k} \le k + \sup_m \xi_m^+$, we know $\sup_n E(X_{n \wedge N_k}^+) < \infty$. According to Theorem 4.2.11, this says that $X_{n \wedge N_k}$ converges a.s. (Note that $X_{n \wedge N_k}$ is also a submartingale due to Theorem 4.2.9.) Therefore $X_n$ converges a.s. on the event $\{N_k = \infty\}$. Note that the condition $\sup_n X_n < \infty$ implies that $P(\bigcup_k \{N_k = \infty\}) = 1$. Therefore, $X_n$ converges a.s.
4.2.6. Let $Y_1, Y_2, \ldots$ be nonnegative i.i.d. random variables with $EY_m = 1$ and $P(Y_m = 1) < 1$. By Example 4.2.3, $X_n = \prod_{m \le n} Y_m$ defines a martingale. (i) Show that $X_n \to 0$ a.s. (ii) Use the strong law of large numbers to conclude that $(1/n)\log X_n \to c < 0$ a.s.
Proof: (i) Since $X_n$ is a non-negative martingale, it converges a.s. to a limit, say $X_\infty$. Fix a $\delta > 0$ such that $P(|Y_1 - 1| > \delta) > 0$ (possible since $P(Y_1 = 1) < 1$). Then, since the $Y_n$ are independent, the second Borel-Cantelli lemma gives
$$P(|Y_{n+1} - 1| > \delta \text{ i.o.}) = 1.$$
On the other hand, $|X_{n+1} - X_n| = X_n |Y_{n+1} - 1|$, and the left hand side converges to $0$ a.s. as $n \to \infty$, because $X_n$ converges. So along the (a.s. infinite) sequence of $n$'s with $|Y_{n+1} - 1| > \delta$ we have $\delta X_n \le |X_{n+1} - X_n| \to 0$, so $X_n$ converges to $0$ along this sequence as well. Therefore, $X_\infty = 0$ a.s.
(ii) Note first that $\log^+ Y_1 \le Y_1$, so $E(\log Y_1)^+ < \infty$. We can assume that $E(\log Y_1)^- < \infty$, since if $E(\log Y_1)^- = \infty$, it is easy to see from Theorem 2.4.5 that
$$\frac{1}{n}\log X_n = \frac{1}{n}\sum_{m=1}^n \log Y_m \to -\infty \quad\text{a.s.,}$$
which implies the conclusion with $c = -\infty$.
Now, assuming $\log Y_1$ is integrable, we can write, according to the strong law of large numbers (Theorems 2.4.1 and 2.4.5),
$$\frac{1}{n}\log X_n = \frac{1}{n}\sum_{m=1}^n \log Y_m \to E\log Y_1 =: c \quad\text{a.s.,}$$
so we only have to show that $c < 0$.
Define $Z_M = \log(Y_1 \vee e^{-M})$; then both $Z_M^+$ and $Z_M^-$ are integrable. From Jensen's inequality, we have
$$EZ_M \le \log E(Y_1 \vee e^{-M}).$$
By monotonicity, taking $M \uparrow \infty$, we have $c = E\log Y_1 \le \log EY_1 = 0$.
Now, we only have to show that $c \ne 0$. In fact, if $c = 0$, then $\log Y_1$ is integrable and $E\log Y_1 = 0 = \log EY_1$, i.e., equality holds in Jensen's inequality for the strictly concave function $\log$. So from Exercise 1.6.1 we have $Y_1 = EY_1 = 1$ a.s., which contradicts the condition $P(Y_1 = 1) < 1$.
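Both conclusions are easy to see in a simulation; below, purely for illustration, $Y_m \sim \mathrm{Exp}(1)$, so that $EY_m = 1$ and $E\log Y_1 = -\gamma \approx -0.5772$ (Euler's constant).

```python
# X_n = Y_1 ... Y_n with Y ~ Exp(1): the product martingale tends to 0
# a.s., and (1/n) log X_n approaches E log Y_1 = -gamma < 0.
import numpy as np

rng = np.random.default_rng(2)
n, paths = 2000, 5
logY = np.log(rng.exponential(size=(paths, n)))   # log Y_m for each path
logX = np.cumsum(logY, axis=1)                    # log X_n along each path

print("(1/n) log X_n at n=2000:", logX[:, -1] / n)
print("E log Y_1 = -gamma      ~", -0.5772)
```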
4.2.8. Let $X_n$ and $Y_n$ be positive integrable and adapted to $\mathcal{F}_n$. Suppose
$$E(X_{n+1} \mid \mathcal{F}_n) \le X_n + Y_n,$$
with $\sum_n Y_n < \infty$ a.s. Prove that $X_n$ converges a.s. to a finite limit.
Proof: Let
$$W_n = X_n - \sum_{m=1}^{n-1} Y_m,$$
which is integrable and adapted to $\mathcal{F}_n$. From
$$E(W_{n+1} \mid \mathcal{F}_n) = E(X_{n+1} \mid \mathcal{F}_n) - \sum_{m=1}^{n} Y_m \le X_n + Y_n - \sum_{m=1}^n Y_m = W_n,$$
we know $W_n$ is a supermartingale. For each $k$ let $N_k = \inf\{n : \sum_{m=1}^n Y_m > k\}$; this is a stopping time, and since the $X_n$ are positive, the stopped supermartingale satisfies $W_{n \wedge N_k} \ge -k$. Hence $W_{n \wedge N_k} + k$ is a non-negative supermartingale, and Theorem 4.2.12 says that it converges a.s. to a finite limit. In particular $W_n$ converges a.s. to a finite limit on $\{N_k = \infty\}$. From the condition $\sum_n Y_n < \infty$ a.s., we have $P(\bigcup_k \{N_k = \infty\}) = 1$, so $W_n$ converges a.s. to a finite limit; and since the non-decreasing sums $\sum_{m=1}^{n-1} Y_m$ also converge a.s. to a finite limit, $X_n = W_n + \sum_{m=1}^{n-1} Y_m$ converges a.s. to a finite limit.
4.2.9. Suppose $X_n$ and $Y_n$ are supermartingales w.r.t. $\mathcal{F}_n$, and $N$ is a stopping time so that $X_N \ge Y_N$ on $\{N < \infty\}$. Then
$$Z_n = X_n 1_{\{N > n\}} + Y_n 1_{\{N \le n\}}$$
and
$$W_n = X_n 1_{\{N \ge n\}} + Y_n 1_{\{N < n\}}$$
are supermartingales.
Proof: Clearly, $Z_n$ and $W_n$ are integrable and adapted to $\mathcal{F}_n$. Note that, since $Y_{n+1} \le X_{n+1}$ on $\{N = n+1\}$,
$$Z_{n+1} \le X_{n+1} 1_{\{N > n\}} + Y_{n+1} 1_{\{N \le n\}}.$$
Therefore, since $\{N > n\}, \{N \le n\} \in \mathcal{F}_n$,
$$E(Z_{n+1} \mid \mathcal{F}_n) \le E(X_{n+1} \mid \mathcal{F}_n) 1_{\{N > n\}} + E(Y_{n+1} \mid \mathcal{F}_n) 1_{\{N \le n\}} \le X_n 1_{\{N > n\}} + Y_n 1_{\{N \le n\}} = Z_n.$$
For $W_n$, note that $W_{n+1} = X_{n+1} 1_{\{N > n\}} + Y_{n+1} 1_{\{N \le n\}}$, so the same computation gives $E(W_{n+1} \mid \mathcal{F}_n) \le Z_n \le W_n$, where the last inequality again uses $Y_N \le X_N$ (the two sequences differ only on $\{N = n\}$).
4.3.1. Give an example of a martingale $X_n$ with $X_n \to -\infty$ a.s.
Proof: Suppose that $U_1, U_2, \ldots$ are i.i.d. r.v.'s with the uniform distribution on $(0,1)$, and let $\mathcal{F}_n = \sigma(U_1, \ldots, U_n)$. For each $n$, if $U_n \le 1 - n^{-2}$, let $\xi_n = -1$; if $U_n > 1 - n^{-2}$, let $\xi_n = n^2 - 1$. Then $X_n = \xi_1 + \cdots + \xi_n$ is a martingale, since the $\xi_n$ are independent and
$$E\xi_n = -(1 - n^{-2}) + n^{-2}(n^2 - 1) = 0.$$
Note that
$$\sum_n P(\xi_n \ne -1) = \sum_n n^{-2} < \infty.$$
So the B.C. lemma says that $P(\xi_n \ne -1 \text{ i.o.}) = 0$, which says that a.s. $\xi_n = -1$ for all large $n$, and hence $X_n \to -\infty$ a.s.
It is elementary to see that
Notice that we always have , so the above identity says that
Now, using Theorem 4.3.4 and the above, we have
almost surely. This can only happen if .
We can also verify that
So, from what we have proved, we know that almost surely
Using Theorem 4.3.4, we have that . Similarly, we have .
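A simulation of the construction above (with the specific jump sizes $-1$ and $n^2 - 1$ used in this solution) makes the drift to $-\infty$ visible.

```python
# Martingale with increments xi_n = -1 (prob 1 - n^{-2}) or n^2 - 1
# (prob n^{-2}).  Borel-Cantelli: only finitely many big upward jumps,
# so X_n -> -infinity almost surely.
import numpy as np

rng = np.random.default_rng(3)
n = np.arange(1, 100001)
U = rng.random(len(n))
xi = np.where(U <= 1 - 1/n**2, -1.0, n**2 - 1.0)
X = np.cumsum(xi)
print("X_n at n = 10, 10^2, ..., 10^5:", X[[9, 99, 999, 9999, 99999]])
```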
4.3.3. Let $X_n$ and $Y_n$ be positive integrable and adapted to $\mathcal{F}_n$. Suppose $E(X_{n+1} \mid \mathcal{F}_n) \le (1 + Y_n)X_n$, with $\sum_n Y_n < \infty$ a.s. Prove that $X_n$ converges a.s. to a finite limit.
Proof: Define $W_n = X_n \prod_{m=1}^{n-1}(1 + Y_m)^{-1}$. Then $W_n$ is a non-negative supermartingale, since
$$E(W_{n+1} \mid \mathcal{F}_n) = \prod_{m=1}^{n}(1+Y_m)^{-1}\, E(X_{n+1} \mid \mathcal{F}_n) \le \prod_{m=1}^{n}(1+Y_m)^{-1}\,(1+Y_n)X_n = W_n.$$
Therefore $W_n$ converges a.s. to a finite limit. Finally, notice that the event
$$\Big\{\sum_m Y_m < \infty\Big\}$$
is with probability $1$, and on it the non-decreasing products $\prod_{m=1}^{n-1}(1+Y_m)$ converge to a finite limit (as $\log \prod_{m<n}(1+Y_m) \le \sum_m Y_m < \infty$). Therefore $X_n = W_n \prod_{m<n}(1+Y_m)$ also converges a.s. to a finite limit.
4.3.5. Show that $\sum_n P(A_n \mid A_1^c \cap \cdots \cap A_{n-1}^c) = \infty$ implies $P(\bigcup_n A_n) = 1$.
Proof: Let $\mathcal{F}_n = \sigma(A_1, \ldots, A_n)$; note that $\mathcal{F}_{n-1}$ is generated by the finite partition consisting of the sets $\tilde{A}_1 \cap \cdots \cap \tilde{A}_{n-1}$, where each $\tilde{A}_m$ is $A_m$ or $A_m^c$, and that $B_{n-1} := A_1^c \cap \cdots \cap A_{n-1}^c$ is one cell of this partition.
Notice also that this is a partition of the underlying probability space $\Omega$. Therefore, according to Example 4.1.5 (conditional expectation w.r.t. a partition), on the cell $B_{n-1}$ we have
$$P(A_n \mid \mathcal{F}_{n-1}) = \frac{P(A_n \cap B_{n-1})}{P(B_{n-1})} = P(A_n \mid A_1^c \cap \cdots \cap A_{n-1}^c).$$
From the condition of this exercise, we therefore have, on the event $B_\infty := \bigcap_m A_m^c$,
$$\sum_n P(A_n \mid \mathcal{F}_{n-1}) = \sum_n P(A_n \mid A_1^c \cap \cdots \cap A_{n-1}^c) = \infty.$$
Now, using Theorem 4.3.4 (the second Borel-Cantelli lemma, II) we get that
$$B_\infty \subset \{A_n \text{ i.o.}\}$$
almost surely. This can only happen if $P(B_\infty) = 0$, i.e., $P(\bigcup_n A_n) = 1$.
For the next two exercises, in the context of the Kakutani dichotomy for infinite product measures on page 235, suppose $\mu = \prod_n \mu_n$ and $\nu = \prod_n \nu_n$, where $\mu_n$, $\nu_n$ are concentrated on $\{0,1\}$ and have $\mu_n(\{1\}) = 1 - \mu_n(\{0\}) = p_n$, $\nu_n(\{1\}) = 1 - \nu_n(\{0\}) = q_n$.
4.3.9. Show that if $\sum_n p_n < \infty$ and $\sum_n q_n = \infty$ then $\mu \perp \nu$.
Proof: Let $A = \{\omega : \omega_n = 1 \text{ i.o.}\}$. According to the B.C. lemmas (the coordinates are independent under both product measures), condition $\sum_n p_n < \infty$ says that $\mu(A) = 0$; condition $\sum_n q_n = \infty$ says that $\nu(A) = 1$. So we must have $\mu \perp \nu$.
4.3.10. Suppose $0 < p_n, q_n < 1$. Show that $\sum_n |p_n - q_n| < \infty$ is sufficient for $\mu \equiv \nu$ in general.
Proof: Let $U_1, U_2, \ldots$ be i.i.d. r.v.'s with the uniform distribution on $(0,1)$ w.r.t. a probability space $(\Omega_0, \mathcal{F}_0, P)$. Define random elements $X = (X_n)_{n \ge 1}$ and $Y = (Y_n)_{n\ge1}$ of the sequence space by
$$X_n = 1_{\{U_n \le p_n\}}, \qquad Y_n = 1_{\{U_n \le q_n\}}.$$
Then we have: $X$ has distribution $\mu$ and $Y$ has distribution $\nu$. Note that
$$\sum_n P(X_n \ne Y_n) = \sum_n |p_n - q_n| < \infty;$$
therefore, according to the B.C. lemma, we have
$$P(X_n \ne Y_n \text{ i.o.}) = 0.$$
This says that for each $\varepsilon > 0$ there exists an $m$ such that $P(X_n = Y_n \text{ for all } n > m) \ge 1 - \varepsilon$. On the other hand, it is obvious that the event $\{X_n = Y_n \text{ for all } n > m\}$ and the vectors $(X_1, \ldots, X_m)$, $(Y_1, \ldots, Y_m)$ depend on disjoint blocks of the $U_n$'s, so from the independence, we may argue as follows.
Now, suppose that $\mu \equiv \nu$ is not true; then according to the Kakutani dichotomy, we have $\mu \perp \nu$. This says that there exists a subset $A$ of the sequence space such that $\mu(A) = 1$ and $\nu(A) = 0$. In this case, since each $a \in \{0,1\}^m$ has positive probability under $\mu$ (here $0 < p_n < 1$ is used), $\mu(A) = 1$ and the independence of the coordinates give
$$P\big((a_1, \ldots, a_m, X_{m+1}, X_{m+2}, \ldots) \in A\big) = 1 \quad\text{for every } a \in \{0,1\}^m,$$
which says that
$$\nu(A) = P(Y \in A) \ge \sum_{a \in \{0,1\}^m} P\big((Y_1, \ldots, Y_m) = a\big)\, P\big((a, X_{m+1}, \ldots) \in A,\ X_n = Y_n \text{ for all } n > m\big) \ge 1 - \varepsilon > 0.$$
This is a contradiction.
4.3.13. Galton and Watson, who invented the process that bears their names, were interested in the survival of family names. Suppose each family has exactly 3 children, but coin flips determine their sex. In the 1800s, only male children kept the family name, so following the male offspring leads to a branching process with $p_0 = 1/8$, $p_1 = 3/8$, $p_2 = 3/8$, $p_3 = 1/8$. Compute the probability $\rho$ that the family name will die out when $Z_0 = 1$.
Proof: According to Theorem 4.3.12, we know that $\rho$ is the only solution of $\varphi(\rho) = \rho$ in $[0, 1)$, where
$$\varphi(s) = \sum_{k=0}^{3} p_k s^k = \tfrac{1}{8}\left(1 + 3s + 3s^2 + s^3\right) = \frac{(1+s)^3}{8}.$$
The equation $(1+s)^3 = 8s$ factors as $(s - 1)(s^2 + 4s - 1) = 0$. Solving this gives $\rho = \sqrt{5} - 2 \approx 0.236$.
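A quick numeric check of the fixed point (iterating $\varphi$ from $0$ is the standard way to reach the smallest root):

```python
# Extinction probability for the Galton-Watson process with
# phi(s) = (1 + s)^3 / 8: iterate phi from 0 and compare with sqrt(5)-2.
import numpy as np

def phi(s):
    return (1 + s)**3 / 8

s = 0.0
for _ in range(100):
    s = phi(s)                 # converges to the smallest fixed point
print(s, np.sqrt(5) - 2)       # both ~0.2360679...
```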
4.4.3. Suppose are stopping times. If then is a stopping time.
Proof: According to Theorem 7.3.6. we have . Therefore, for each , we have
From the above, we have .
4.4.5. Prove the following variant of the conditional variance formula. If then
Proof: Note that . So according to the Pythagorean law (see the solution to Exercise 4.1.6), we get the desired result.
4.4.7. Let $X_n$ be a martingale with $X_0 = 0$ and $EX_n^2 < \infty$. Show that
$$P\left(\max_{1 \le m \le n} X_m \ge \lambda\right) \le \frac{EX_n^2}{EX_n^2 + \lambda^2}.$$
Proof: According to Theorem 4.2.6, $(X_n + c)^2$ is a submartingale, where $c$ is an arbitrary positive real number. Therefore, for each $\lambda > 0$, according to Doob's inequality,
$$P\left(\max_{m \le n} X_m \ge \lambda\right) \le P\left(\max_{m \le n}(X_m + c)^2 \ge (\lambda + c)^2\right) \le \frac{E(X_n + c)^2}{(\lambda + c)^2} = \frac{EX_n^2 + c^2}{(\lambda + c)^2},$$
using $EX_n = X_0 = 0$ in the last step. Now, taking $c = EX_n^2/\lambda$, we have
$$P\left(\max_{m \le n} X_m \ge \lambda\right) \le \frac{EX_n^2 + (EX_n^2)^2/\lambda^2}{(\lambda + EX_n^2/\lambda)^2} = \frac{EX_n^2}{EX_n^2 + \lambda^2}.$$
4.4.8. Let $X_m$ be a submartingale and let $\log^+ x = \max(\log x, 0)$. Prove
$$E\bar{X}_n \le \left(1 - e^{-1}\right)^{-1}\left\{1 + E\left(X_n^+ \log^+ X_n^+\right)\right\},$$
where $\bar{X}_n = \max_{m \le n} X_m^+$.
Proof: Fix an $M > 1$. Note that
$$E(\bar{X}_n \wedge M) = \int_0^M P(\bar{X}_n \ge \lambda)\, d\lambda \le 1 + \int_1^M P(\bar{X}_n \ge \lambda)\, d\lambda.$$
Doob's inequality then says that
$$\int_1^M P(\bar{X}_n \ge \lambda)\, d\lambda \le \int_1^M \frac{1}{\lambda} E\left(X_n^+ 1_{\{\bar{X}_n \ge \lambda\}}\right) d\lambda = E\left(X_n^+ \log^+(\bar{X}_n \wedge M)\right).$$
Now, using the calculus fact that $a \log^+ b \le a \log^+ a + b/e$ for $a, b \ge 0$, we have
$$E\left(X_n^+ \log^+(\bar{X}_n \wedge M)\right) \le E\left(X_n^+ \log^+ X_n^+\right) + e^{-1} E(\bar{X}_n \wedge M).$$
This says that
$$E(\bar{X}_n \wedge M) \le \left(1 - e^{-1}\right)^{-1}\left\{1 + E\left(X_n^+ \log^+ X_n^+\right)\right\}.$$
Finally, taking $M \to \infty$ and using the monotone convergence theorem, we get the desired result.
4.4.9. Let $X_n$ and $Y_n$ be martingales with $EX_n^2 < \infty$ and $EY_n^2 < \infty$. Show that
$$EX_nY_n - EX_0Y_0 = \sum_{m=1}^n E(X_m - X_{m-1})(Y_m - Y_{m-1}).$$
Proof: Since $EX_n^2 < \infty$ and $EY_n^2 < \infty$, we have by Theorem 4.4.7 (orthogonality of martingale increments) that $E[(X_m - X_{m-1})Y_{m-1}] = 0$. Similarly we have $E[(Y_m - Y_{m-1})X_{m-1}] = 0$. Now it is easy to calculate that
$$E(X_m - X_{m-1})(Y_m - Y_{m-1}) = EX_mY_m - EX_{m-1}Y_{m-1}.$$
From this, summing over $m = 1, \ldots, n$, the desired result follows.
4.4.10. Let $X_n$, $n \ge 0$, be a martingale with $X_0 = 0$, and let $\xi_m = X_m - X_{m-1}$ for $m \ge 1$. If $\sum_{m=1}^\infty E\xi_m^2 < \infty$ then $X_n$ converges a.s. and in $L^2$.
Proof: Using the result in Exercise 4.4.9 (with $Y_n = X_n$), we have
$$EX_n^2 = \sum_{m=1}^n E\xi_m^2 \le \sum_{m=1}^\infty E\xi_m^2 < \infty.$$
Therefore $\sup_n EX_n^2 < \infty$. According to Theorem 4.4.6 (the $L^2$ convergence theorem), we get the desired result.
4.6.4. Let $X_1, X_2, \ldots$ be r.v.'s taking values in $[0, \infty)$. Let $D = \{X_n = 0 \text{ for some } n \ge 1\}$ and assume
$$P(D \mid X_1, \ldots, X_n) \ge \delta(x) > 0 \text{ a.s. on } \{X_n \le x\}.$$
Use Theorem 4.6.9 to conclude that $P(D \cup \{\lim_n X_n = \infty\}) = 1$.
Proof: Let $\mathcal{F}_n = \sigma(X_1, \ldots, X_n)$ and $\mathcal{F}_\infty = \sigma(\bigcup_n \mathcal{F}_n)$. Since $D \in \mathcal{F}_\infty$, we have by Levy's 0-1 law that $P(D \mid \mathcal{F}_n) \to 1_D$ a.s. For each $x > 0$, and each element $\omega$ of the event $\{X_n \le x \text{ i.o.}\}$, there exists a sequence of integers $n_k \to \infty$ such that for each $k$, we have $X_{n_k}(\omega) \le x$. Therefore, for this $\omega$ (outside a null set),
$$1_D(\omega) = \lim_k P(D \mid \mathcal{F}_{n_k})(\omega) \ge \delta(x) > 0.$$
So we must have $1_D = 1$ a.e. on this event $\{X_n \le x \text{ i.o.}\}$. This says that $\{X_n \le x \text{ i.o.}\} \subset D$ up to a null set, for each $x$. Therefore, $\bigcup_{x \in \mathbb{N}}\{X_n \le x \text{ i.o.}\} \subset D$ up to a null set. Finally, noticing that the complement of $\bigcup_{x \in \mathbb{N}}\{X_n \le x \text{ i.o.}\}$ is $\{X_n \to \infty\}$, we must have the desired result.
4.6.5. Let $Z_n$ be a branching process with offspring distribution $p_k$. Use the last result to show that if $p_0 > 0$ then $P(\lim_n Z_n = 0 \text{ or } \infty) = 1$.
Proof: Let $D = \{Z_n = 0 \text{ for some } n \ge 1\}$ be the event of extinction. Let $\xi_i^{n+1}$ be the i.i.d. r.v.'s used in (4.3.4), so that $Z_{n+1} = \xi_1^{n+1} + \cdots + \xi_{Z_n}^{n+1}$. Let $\delta(x) = p_0^x$. Now for each $x$, on the event $\{0 < Z_n \le x\}$ we have
$$P(D \mid Z_1, \ldots, Z_n) \ge P\left(\xi_i^{n+1} = 0 \text{ for } i = 1, \ldots, Z_n \mid Z_1, \ldots, Z_n\right) = p_0^{Z_n} \ge \delta(x) > 0.$$
On the event $\{Z_n = 0\}$, we have $P(D \mid Z_1, \ldots, Z_n) = 1 \ge \delta(x)$. Now, using Exercise 4.6.4, we have
$$P(D \cup \{\lim_n Z_n = \infty\}) = 1;$$
since $Z_n = 0$ eventually on $D$, this gives $P(\lim_n Z_n = 0 \text{ or } \infty) = 1$,
as desired.
4.6.7. Show that if $\mathcal{F}_n \uparrow \mathcal{F}_\infty$ and $Y_n \to Y$ in $L^1$ then $E(Y_n \mid \mathcal{F}_n) \to E(Y \mid \mathcal{F}_\infty)$ in $L^1$.
Proof: According to Theorem 4.6.8, we have $E(Y \mid \mathcal{F}_n) \to E(Y \mid \mathcal{F}_\infty)$ in $L^1$. So we only have to show that
$$E\left|E(Y_n \mid \mathcal{F}_n) - E(Y \mid \mathcal{F}_n)\right| \to 0.$$
In fact,
$$E\left|E(Y_n \mid \mathcal{F}_n) - E(Y \mid \mathcal{F}_n)\right| \le E\big(E(|Y_n - Y| \mid \mathcal{F}_n)\big) = E|Y_n - Y| \to 0.$$
4.7.3. Prove directly from the definition that if $X_1, X_2, \ldots$ are exchangeable, then
Proof: Define
Note that, for each , there exists a permutation on such that
Now, writing , we have
Similarly we have .
Therefore, we have
4.7.4. If $X_1, X_2, \ldots$ are exchangeable with $EX_i^2 < \infty$ then $E(X_1X_2) \ge 0$.
Proof: Note that, by exchangeability,
$$0 \le E\left(\sum_{i=1}^n X_i\right)^2 = nEX_1^2 + n(n-1)E(X_1X_2),$$
so $E(X_1X_2) \ge -EX_1^2/(n-1)$ for every $n \ge 2$. Letting $n \to \infty$ gives $E(X_1X_2) \ge 0$.
4.7.5. If $X_1, X_2, \ldots$ are i.i.d. with $E|X_i| < \infty$ and $EX_i = \mu$ then $S_n/n \to \mu$ a.s.
Proof: Note that
$$E(X_1 \mid S_n, S_{n+1}, \ldots) = E(X_1 \mid S_n, X_{n+1}, X_{n+2}, \ldots) = E(X_1 \mid S_n) = S_n/n,$$
where the second equality uses the independence of $(X_1, S_n)$ from $(X_{n+1}, X_{n+2}, \ldots)$ and the third follows by symmetry.
Therefore, since $\mathcal{F}_{-n} := \sigma(S_n, S_{n+1}, \ldots)$ is decreasing in $n$, we know $(\mathcal{F}_{-n})$ is a filtration with index $-n$, and $S_n/n = E(X_1 \mid \mathcal{F}_{-n})$ is a backwards martingale. Therefore we have
$$S_n/n \to E(X_1 \mid \mathcal{F}_{-\infty}) \quad\text{a.s. and in } L^1, \qquad \mathcal{F}_{-\infty} = \bigcap_n \mathcal{F}_{-n}.$$
According to the Hewitt-Savage 0-1 law, $\mathcal{F}_{-\infty}$ (which is contained in the exchangeable $\sigma$-field) is trivial. So
$$S_n/n \to EX_1 = \mu \quad\text{a.s.}$$
4.8.3. Let $S_n = X_1 + \cdots + X_n$ where the $X_i$ are independent with $EX_i = 0$ and $\mathrm{var}(X_i) = \sigma^2$; $S_n^2 - n\sigma^2$ is a martingale. Let $T = \inf\{n : |S_n| > a\}$. Then we have $ET \ge a^2/\sigma^2$.
Proof: Without loss of generality, we assume $ET < \infty$. (Otherwise, the desired result is trivial.) According to Wald's second identity (Exercise 4.8.4 below), we have
$$\sigma^2 ET = ES_T^2 \ge a^2,$$
since $|S_T| > a$ by the definition of $T$.
4.8.4. Let $S_n = X_1 + \cdots + X_n$ where the $X_i$ are independent with $EX_i = 0$ and $\mathrm{var}(X_i) = \sigma^2 < \infty$. Show that if $T$ is a stopping time with $ET < \infty$ then $ES_T^2 = \sigma^2 ET$.
Proof: Since $S_n^2 - n\sigma^2$ is a martingale, we have, stopping at the bounded stopping time $T \wedge n$,
$$ES_{T \wedge n}^2 = \sigma^2 E(T \wedge n).$$
Therefore, by the orthogonality of martingale increments, for $m < n$ we have
$$E(S_{T \wedge n} - S_{T \wedge m})^2 = ES_{T\wedge n}^2 - ES_{T \wedge m}^2 = \sigma^2 E(T \wedge n - T \wedge m) \to 0 \quad\text{as } m, n \to \infty.$$
This tells us that $S_{T \wedge n}$ is Cauchy in $L^2$. Therefore $S_{T \wedge n} \to S_T$ in $L^2$ (the a.s. limit is $S_T$ since $T < \infty$ a.s.), and
$$ES_T^2 = \lim_n ES_{T \wedge n}^2 = \lim_n \sigma^2 E(T \wedge n) = \sigma^2 ET.$$
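Wald's second identity is easy to test by Monte Carlo; below, for illustration, with symmetric $\pm 1$ steps (so $\sigma^2 = 1$) and the exit time $T = \inf\{n : |S_n| \ge a\}$, for which $ES_T^2 = a^2$ exactly.

```python
# Check E S_T^2 = sigma^2 E T for a +/-1 random walk exiting (-a, a).
import numpy as np

rng = np.random.default_rng(4)
a, reps = 5, 20000
ST2, T = [], []
for _ in range(reps):
    s = n = 0
    while abs(s) < a:
        s += 1 if rng.random() < 0.5 else -1
        n += 1
    ST2.append(s * s)
    T.append(n)
print("E S_T^2 =", np.mean(ST2), "  sigma^2 E T =", np.mean(T))  # both ~ a^2
```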
4.8.5. Let $X_i$ be independent with $P(X_i = 1) = p$ and $P(X_i = -1) = 1 - p$, where $p < 1/2$. Let $S_n = S_0 + X_1 + \cdots + X_n$ and let $T = \inf\{n : S_n = 0\}$. Theorem 4.8.9 tells us that $E_xT = x/(1-2p)$. Let $Y_n = S_n + (1-2p)n$ and note that $Y_n$ is a martingale with $Y_0 = x$ and
$$\mathrm{var}(Y_n - Y_{n-1}) = E(X_n + (1-2p))^2 = 1 - (1-2p)^2 = 4p(1-p);$$
then it follows that $Y_n$ is a martingale with mean-zero increments of variance $4p(1-p)$. (a) Use this to conclude that when $S_0 = x$ the variance of $T$ is
$$\mathrm{var}(T) = x \cdot \frac{4p(1-p)}{(1-2p)^3}.$$
(b) Why must the answer in (a) be of the form $cx$?
Proof: (a) $T$ is a stopping time with finite expectation. Using Wald's second identity (Exercise 4.8.4), applied to the mean-zero martingale $Y_n - x$, we have
$$E(Y_T - x)^2 = 4p(1-p)\,ET = 4p(1-p)\,\frac{x}{1-2p}.$$
From the fact that $Y_T = S_T + (1-2p)T = (1-2p)T$ and $x = (1-2p)ET$, we can calculate the desired result:
$$\mathrm{var}(T) = E\left(T - ET\right)^2 = \frac{E(Y_T - x)^2}{(1-2p)^2} = \frac{4p(1-p)\,x}{(1-2p)^3}.$$
(b) Define $t_k$ to be the time the process spends from first hitting position $x - k + 1$ to first hitting position $x - k$, for $k = 1, \ldots, x$. Then $T = t_1 + \cdots + t_x$.
Moreover, it can be verified that $t_1, \ldots, t_x$ are i.i.d. random variables. So
$$\mathrm{var}(T) = x\,\mathrm{var}(t_1),$$
which must be of the form $cx$.
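Part (a) can also be checked by simulation; the parameter values $p = 0.3$ and $x = 5$ below are arbitrary.

```python
# Hitting time of 0 from x for a walk with up-probability p < 1/2:
# E T = x/(1-2p) and var(T) = 4p(1-p)x/(1-2p)^3.
import numpy as np

rng = np.random.default_rng(5)
p, x, reps = 0.3, 5, 20000
T = np.empty(reps)
for i in range(reps):
    s, n = x, 0
    while s > 0:
        s += 1 if rng.random() < p else -1
        n += 1
    T[i] = n
print("E T  :", T.mean(), "vs", x / (1 - 2*p))
print("var T:", T.var(),  "vs", 4*p*(1 - p)*x / (1 - 2*p)**3)
```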
4.8.7. Let $S_n$ be a symmetric simple random walk starting at $0$, and let $T = \inf\{n : S_n \notin (-a, a)\}$ where $a$ is an integer. Find constants $b$ and $c$ so that $Y_n = S_n^4 - 6nS_n^2 + bn^2 + cn$ is a martingale, and use this to compute $ET^2$.
Proof: First, since $S_n^2 - n$ is a martingale, we have that $S_{T\wedge n}^2 - T \wedge n$ is a martingale. Therefore
$$ES_{T \wedge n}^2 = E(T \wedge n).$$
Note that $S_{T \wedge n}^2$ is bounded by $a^2$ (and converges to $S_T^2 = a^2$); $T \wedge n$ is monotonic in $n$. Therefore, using the bounded/monotone convergence theorems, we get
$$ET = ES_T^2 = a^2.$$
It is elementary to verify that
$$E(Y_{n+1} \mid \mathcal{F}_n) - Y_n = (2b - 6)n + (b + c - 5).$$
Therefore, $Y_n$ is a martingale iff $b = 3$ and $c = 2$. Now, set $b = 3$, $c = 2$. Since $Y_{T \wedge n}$ is also a martingale, we have
$$E\left(S_{T\wedge n}^4 - 6(T \wedge n)S_{T \wedge n}^2 + 3(T\wedge n)^2 + 2(T \wedge n)\right) = EY_0 = 0.$$
Note that $S_{T \wedge n}^4$ is bounded by $a^4$; $(T \wedge n)^2$ is monotonic in $n$; $(T \wedge n)S_{T\wedge n}^2$ is dominated by $a^2 T$, which is integrable. Therefore, using the bounded/monotonic/dominated convergence theorems, we get
$$a^4 - 6a^2\,ET + 3\,ET^2 + 2\,ET = 0.$$
From $ET = a^2$, we have $ET^2 = \dfrac{5a^4 - 2a^2}{3}$.
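Both moments are easy to verify by Monte Carlo; $a = 4$ below is an arbitrary choice.

```python
# Exit time of (-a, a) for symmetric simple random walk:
# E T = a^2 and E T^2 = (5a^4 - 2a^2)/3.
import numpy as np

rng = np.random.default_rng(6)
a, reps = 4, 20000
T = np.empty(reps)
for i in range(reps):
    s = n = 0
    while abs(s) < a:
        s += 1 if rng.random() < 0.5 else -1
        n += 1
    T[i] = n
print("E T  :", T.mean(),       "vs a^2 =", a * a)
print("E T^2:", (T**2).mean(),  "vs", (5*a**4 - 2*a**2) / 3)
```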
4.8.10. Consider a favorable game in which the payoffs are $-1$, $1$, or $2$ with probability $1/3$ each. Use the results of the previous problem to compute the probability we ever go broke (i.e., our winnings $W_n$ reach $0$) when we start with $\$x$.
Proof: It is elementary to verify that, if $\lambda = \sqrt{2} - 1$, then
$$E\lambda^{X_i} = \tfrac{1}{3}\left(\lambda^{-1} + \lambda + \lambda^2\right) = 1,$$
since $\lambda^3 + \lambda^2 - 3\lambda + 1 = (\lambda - 1)(\lambda^2 + 2\lambda - 1) = 0$ for this $\lambda$.
It is well known that $\lambda^{W_n}$ is then a martingale (the so-called exponential martingale). Note that it is non-negative, so it must have an almost sure limit $L$. In fact, since $W_n \to \infty$ almost surely (by the strong law of large numbers, as $EX_i = 2/3 > 0$), we must have $L = 0$.
Now, consider the martingale $\lambda^{W_{T \wedge n}}$, where $T$ is the time we go broke (the hitting time of $0$). Note that $0 < \lambda < 1$ and $W_{T \wedge n} \ge 0$, so $\lambda^{W_{T\wedge n}} \le 1$ is a bounded martingale. Therefore, we have
$$\lambda^x = E\lambda^{W_{T \wedge n}} \to E\left(\lambda^{W_T} 1_{\{T < \infty\}}\right) + E\left(L\, 1_{\{T = \infty\}}\right) = P(T < \infty).$$
This implies that
$$P(\text{we ever go broke}) = \left(\sqrt{2} - 1\right)^x.$$
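The answer can be checked by simulation. Paths are truncated at a fixed horizon, which is harmless here because the drift is $+2/3$ per play, so ruin, if it happens, happens early; the horizon and $x = 3$ are arbitrary choices.

```python
# Ruin probability for the favorable game with payoffs -1, 1, 2:
# should be (sqrt(2) - 1)^x.
import numpy as np

rng = np.random.default_rng(7)
x, reps, horizon = 3, 10000, 500
steps = rng.choice([-1, 1, 2], size=(reps, horizon))
ruined = (x + np.cumsum(steps, axis=1)).min(axis=1) <= 0
print("ruin frequency:", ruined.mean(), "vs", (np.sqrt(2) - 1)**x)
```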
6.1.1. Show that the class $\mathcal{I}$ of invariant events is a $\sigma$-field, and that $X$ is measurable w.r.t. $\mathcal{I}$ if and only if $X$ is invariant, i.e., $X \circ \varphi = X$ a.s.
Proof: $\mathcal{I}$ is a $\sigma$-field since: (1) if $A \in \mathcal{I}$, then $\varphi^{-1}(A^c) = (\varphi^{-1}A)^c = A^c$ up to a null set, which says that $A^c \in \mathcal{I}$; (2) $\Omega \in \mathcal{I}$ since $\varphi^{-1}\Omega = \Omega$; (3) if $A_n$ is a sequence in $\mathcal{I}$, then
$$\varphi^{-1}\Big(\bigcup_n A_n\Big) = \bigcup_n \varphi^{-1}A_n = \bigcup_n A_n \quad\text{up to a null set,}$$
which says that $\bigcup_n A_n \in \mathcal{I}$.
Also note that $\{X \circ \varphi > a\} = \varphi^{-1}\{X > a\}$ for every $a$. Hence if $X \circ \varphi = X$ a.s., then each $\{X > a\}$ is invariant and $X$ is measurable w.r.t. $\mathcal{I}$. Conversely, if $X$ is measurable w.r.t. $\mathcal{I}$, then for each rational $a$ the sets $\{X \circ \varphi > a\}$ and $\{X > a\}$ differ by a null set, so $\{X \circ \varphi \ne X\} \subset \bigcup_{a \in \mathbb{Q}} \big(\{X > a\} \Delta \{X \circ \varphi > a\}\big)$ is null, i.e., $X \circ \varphi = X$ a.s.
6.1.2. Call $A$ almost invariant if $P(A \,\Delta\, \varphi^{-1}A) = 0$, and call $C$ invariant in the strict sense if $C = \varphi^{-1}C$. (i) Let $A$ be any set and let $B = \bigcup_{n \ge 0}\varphi^{-n}A$. Show $\varphi^{-1}B \subset B$. (ii) Let $B$ be any set with $\varphi^{-1}B \subset B$ and let $C = \bigcap_{n \ge 0}\varphi^{-n}B$. Show that $\varphi^{-1}C = C$. (iii) Show that $A$ is almost invariant if and only if there is a $C$ invariant in the strict sense with $P(A \,\Delta\, C) = 0$.
Proof: (i) $\varphi^{-1}B = \bigcup_{n \ge 1}\varphi^{-n}A \subset B$. (ii) Since $\varphi^{-1}B \subset B$, the sets $\varphi^{-n}B$ are decreasing in $n$, so we have that $\varphi^{-1}C = \bigcap_{n\ge1}\varphi^{-n}B = \bigcap_{n \ge 0}\varphi^{-n}B = C$.
(iii) If such a $C$ exists, then since $\varphi$ is measure preserving, $P(\varphi^{-1}A \,\Delta\, C) = P(\varphi^{-1}(A \,\Delta\, C)) = P(A \,\Delta\, C) = 0$, so $P(A \,\Delta\, \varphi^{-1}A) = 0$ and $A$ is almost invariant. Conversely, define $B$ and $C$ from $A$ as above. Since $A$ is almost invariant, we have $P(A \,\Delta\, \varphi^{-1}A) = 0$. It can be verified that if two measurable subsets $A_1, A_2$ satisfy $P(A_1 \,\Delta\, A_2) = 0$, then $P(\varphi^{-1}A_1 \,\Delta\, \varphi^{-1}A_2) = P(\varphi^{-1}(A_1 \,\Delta\, A_2)) = 0$, because $\varphi$ is measure preserving.
Using this fact multiple times we have $P(A \,\Delta\, \varphi^{-n}A) = 0$ for any $n \ge 0$. Therefore $P(A \,\Delta\, B) = 0$ (note $A \subset B$ and $B \setminus A \subset \bigcup_n(\varphi^{-n}A \setminus A)$). And we also have $P(B \,\Delta\, \varphi^{-n}B) = 0$ for any $n$, which similarly gives $P(B \,\Delta\, C) = 0$ (note $C \subset B$ and $B \setminus C \subset \bigcup_n (B \setminus \varphi^{-n}B)$). This tells us that $P(A \,\Delta\, C) = 0$. Yet (ii) already shows that $C$ is invariant in the strict sense.
6.1.3. (i) Show that if $\theta$ is irrational, $x_n = n\theta \bmod 1$ is dense in $[0,1)$. (ii) Use Theorem A.2.1 to show that if $A$ is a Borel subset of $[0,1)$ with $|A| > 0$, then for any $\delta > 0$ there is an interval $J = [a, b)$ so that $|A \cap J| > (1-\delta)|J|$. (iii) Let $\theta$ be irrational. Combine this with (i) to conclude that if $A$ is a subset of $[0,1)$ which is invariant under the operator
$$\varphi(x) = x + \theta \bmod 1,$$
and $|A| > 0$, then $|A| = 1$. (Here $|\cdot|$ denotes Lebesgue measure.)
Proof: (i) Consider the 1-1 map $x \mapsto e(x) := e^{2\pi i x}$ from $[0,1)$ onto the unit circle. For any two points $u, v$ on the circle, there is a natural distance
$$d(u,v) = \text{the length of the shorter arc connecting } u \text{ and } v.$$
We only need to prove that $\{e(n\theta) : n \ge 1\}$ is dense on the circle. More precisely, fixing an arbitrary $u$ on the circle and a large $K$, we only have to prove that there exists an $n$ such that $d(e(n\theta), u) \le 2\pi/K$.
In fact, it is easy to verify that
1. $d(e((m+k)\theta), e((n+k)\theta)) = d(e(m\theta), e(n\theta))$ for all $m, n, k$;
2. all $e(n\theta)$, $n \ge 1$, are distinct (this is where the irrationality of $\theta$ is used), so by the pigeonhole principle $d(e(m\theta), e(n\theta)) \le 2\pi/K$ for some $m < n$. Fix this pair and let $j = n - m$, so that $0 < d(1, e(j\theta)) \le 2\pi/K$ by 1.;
3. consecutive points of the sequence $e(j\theta), e(2j\theta), e(3j\theta), \ldots$ are at the same distance $d(1, e(j\theta)) \le 2\pi/K$, so the sequence keeps circling around and the arcs it traverses cover the whole circle.
Now, for that fixed $j$, we know from 3. that there exists a $k$ such that $u$ lies on the shorter arc connecting $e(kj\theta)$ and $e((k+1)j\theta)$. Therefore, for this $k$,
$$d(e(kj\theta), u) \le 2\pi/K,$$
as desired.
(ii) Let $\varepsilon > 0$. Using Theorem A.2.1 there exist countably many disjoint intervals $J_i$ such that $A \subset \bigcup_i J_i$ and $\sum_i |J_i| \le |A| + \varepsilon$. Suppose that none of those intervals satisfies the desired property, i.e., $|A \cap J_i| \le (1-\delta)|J_i|$ for all $i$; then
$$|A| \le \sum_i |A \cap J_i| \le (1-\delta)\sum_i |J_i| \le (1-\delta)(|A| + \varepsilon).$$
Therefore $\delta|A| \le (1-\delta)\varepsilon$, and taking $\varepsilon < \delta|A|/(1-\delta)$ yields a contradiction.
(iii) Fix an arbitrary $\delta > 0$. Note that if $J$ is an interval satisfying the condition $|A \cap J| > (1-\delta)|J|$, then either the left half or the right half of $J$ also satisfies the same condition. This and (ii) imply that for any small $\eta > 0$, there exists an interval $J$ satisfying $|J| \le \eta$ and $|A \cap J| > (1-\delta)|J|$. Fix this $\delta$ and interval $J$. Let $m$ be the unique integer such that $1/(m+1) < |J| \le 1/m$. Thanks to (i), for each $k = 0, 1, \ldots, m-1$ and any $\varepsilon' > 0$, there exists an integer $n_k$ such that
$$\varphi^{n_k}(J) \text{ is an interval (mod 1) whose left endpoint lies within } \varepsilon' \text{ of } k|J|.$$
Note that, if $\varepsilon'$ is small enough, the intervals $\varphi^{n_k}(J)$, $0 \le k \le m-1$, pairwise overlap in measure at most $2\varepsilon'$; also $A$ is $\varphi$-invariant, i.e., $\varphi^{-1}(A) = A$, and $\varphi$ is measure preserving, so $|A \cap \varphi^{n_k}(J)| = |A \cap J|$. Therefore
$$|A| \ge \sum_{k=0}^{m-1}|A \cap \varphi^{n_k}(J)| - 2m\varepsilon' \ge m(1-\delta)|J| - 2m\varepsilon' \ge (1-\delta)\frac{m}{m+1} - 2m\varepsilon'.$$
Since $\delta$, $\eta$ (which makes $m$ as large as we wish), and $\varepsilon'$ are arbitrary, $|A| = 1$.
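Numerically, the ergodicity of an irrational rotation shows up as equidistribution of orbits (Weyl's theorem): orbit averages of $1_J$ match $|J|$. The choice $\theta = \sqrt{2} - 1$ and the test intervals below are arbitrary.

```python
# Orbit of an irrational rotation equidistributes on [0, 1).
import numpy as np

theta = np.sqrt(2) - 1                    # any irrational works
orbit = (0.1 + theta * np.arange(10**6)) % 1.0
for a, b in [(0.0, 0.25), (0.3, 0.4), (0.9, 1.0)]:
    frac = np.mean((orbit >= a) & (orbit < b))
    print(f"time in [{a},{b}): {frac:.5f}  (length {b - a:.2f})")
```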
6.1.4. For any stationary sequence $X_0, X_1, \ldots$, there is a two-sided stationary sequence $(Y_n)_{n \in \mathbb{Z}}$ such that $(Y_0, Y_1, \ldots) =_d (X_0, X_1, \ldots)$.
Proof: Given a stationary process $(X_n)_{n \ge 0}$. According to Kolmogorov's extension theorem, there is a stochastic process $(Y_n)_{n \in \mathbb{Z}}$ such that for any $m \in \mathbb{Z}$ and $k \ge 0$, we have
$$(Y_m, Y_{m+1}, \ldots, Y_{m+k}) =_d (X_0, X_1, \ldots, X_k).$$
(It is elementary to verify that these finite-dimensional distributions are consistent, using the stationarity of $X$.) So $(Y_0, Y_1, \ldots) =_d (X_0, X_1, \ldots)$.
We also need to verify that $(Y_n)_{n\in\mathbb{Z}}$ is stationary. This is elementary from its definition.
6.1.5. If $(X_n)_{n \ge 0}$ is a stationary sequence and $g : \mathbb{R}^{\{0,1,\ldots\}} \to \mathbb{R}$ is measurable, then $Y_k = g(X_k, X_{k+1}, \ldots)$ is a stationary sequence. If $(X_n)$ is ergodic then so is $(Y_k)$.
Proof: The shift operator is defined as usual:
$$\theta(x_0, x_1, x_2, \ldots) = (x_1, x_2, \ldots).$$
Define another operator $G : \mathbb{R}^{\{0,1,\ldots\}} \to \mathbb{R}^{\{0,1,\ldots\}}$ with
$$G(x) = \big(g(x), g(\theta x), g(\theta^2 x), \ldots\big),$$
then $Y = (Y_0, Y_1, \ldots) = G(X)$. It can also be verified that $G \circ \theta = \theta \circ G$.
Therefore for each measurable subset $B$ of the sequence space, we have
$$P(Y \in \theta^{-1}B) = P(X \in G^{-1}\theta^{-1}B) = P(X \in \theta^{-1}G^{-1}B).$$
Therefore, if $X$ is stationary, we have
$$P(Y \in \theta^{-1}B) = P(X \in \theta^{-1}G^{-1}B) = P(X \in G^{-1}B) = P(Y \in B),$$
which says that $Y$ is also stationary.
Note that if $B$ is invariant, i.e., $\theta^{-1}B = B$, then so is $G^{-1}B$, since $\theta^{-1}G^{-1}B = G^{-1}\theta^{-1}B = G^{-1}B$. Therefore, if $X$ is ergodic, then for any invariant subset $B$, we have
$$P(Y \in B) = P(X \in G^{-1}B) \in \{0, 1\},$$
which says that $Y$ is also ergodic.
6.1.6. Let be a stationary sequence. Let and let be a sequence so that are i.i.d. and . Finally, let be uniformly distributed on , independent of , and let for . Show that is stationary and ergodic.
Proof: The shift operator is defined as usual
It is easy to see that for each measurable , we have
Therefore
This says that is stationary.
Now assume that is shift invariant, i.e., . Note that
Since is ergodic w.r.t. the operator and is shift invariant w.r.t. the operator , we have . This says that . So is ergodic.
6.1.7. Let $\varphi(x) = 1/x - \lfloor 1/x \rfloor$ for $x \in (0,1)$, where $\lfloor y \rfloor$ denotes the largest integer $\le y$. Then $\varphi$ gives the continued fraction representation of $x$, i.e., with $a_k = \lfloor 1/\varphi^{k-1}(x) \rfloor$,
$$x = \dfrac{1}{a_1 + \dfrac{1}{a_2 + \cdots}}.$$
Show that $\varphi$ preserves $\mu(A) = \dfrac{1}{\log 2}\displaystyle\int_A \dfrac{dx}{1+x}$ for $A \subset (0,1)$.
Proof: It can be verified that for each interval $(a, b) \subset (0,1)$, we have
$$\varphi^{-1}\big((a,b)\big) = \bigcup_{n=1}^\infty \left(\frac{1}{n+b}, \frac{1}{n+a}\right).$$
Therefore, we can calculate that
$$\mu\big(\varphi^{-1}(a,b)\big) = \frac{1}{\log 2}\sum_{n=1}^\infty\left[\log\left(1 + \frac{1}{n+a}\right) - \log\left(1 + \frac{1}{n+b}\right)\right] = \frac{1}{\log 2}\log\frac{1+b}{1+a} = \mu\big((a,b)\big),$$
where the sum telescopes. Using the $\pi$-$\lambda$ theorem, we can then verify that $\varphi$ preserves $\mu$.
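The invariance can be illustrated numerically by pushing a large sample of $\mu$ through the Gauss map and comparing histograms (the inverse CDF of $\mu$ is $u \mapsto 2^u - 1$).

```python
# Sample from mu (density 1/((1+x) log 2) on (0,1)), apply T(x) = {1/x},
# and compare bin frequencies before and after: they should agree.
import numpy as np

rng = np.random.default_rng(8)
u = rng.random(10**6)
x = 2.0**u - 1.0                       # inverse-CDF sample from mu
Tx = (1.0 / x) % 1.0                   # the Gauss map
bins = np.linspace(0, 1, 6)
print(np.histogram(x,  bins)[0] / len(x))   # mu-mass of the five bins
print(np.histogram(Tx, bins)[0] / len(x))   # nearly identical
```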
Exercise 6.2.1. Show that if $f \in L^p$ with $p > 1$ then the convergence in Theorem 6.2.1 occurs in $L^p$.
Proof: Take an arbitrary $\varepsilon > 0$. Let $f_1 = f\,1_{\{|f| \le M\}}$ and $f_2 = f\,1_{\{|f| > M\}}$, with $M$ chosen so large that $\|f_2\|_p < \varepsilon$. We claim that
$$\limsup_n \left\|\frac{1}{n}\sum_{m=0}^{n-1} f(\varphi^m\omega) - E(f \mid \mathcal{I})\right\|_p \le 2\varepsilon.$$
In fact, on one hand we have
$$\frac{1}{n}\sum_{m=0}^{n-1} f_1(\varphi^m\omega) \to E(f_1 \mid \mathcal{I}) \quad\text{a.s. and in } L^p,$$
where the almost sure convergence is due to the ergodic theorem, and the $L^p$ convergence then follows from the bounded convergence theorem (everything here is bounded by $M$). On the other hand, we have
$$\left\|\frac{1}{n}\sum_{m=0}^{n-1} f_2(\varphi^m\omega)\right\|_p \le \frac{1}{n}\sum_{m=0}^{n-1}\|f_2 \circ \varphi^m\|_p = \|f_2\|_p < \varepsilon, \qquad \|E(f_2 \mid \mathcal{I})\|_p \le \|f_2\|_p < \varepsilon.$$
Now, since $\varepsilon$ is arbitrary and $\|f_2\|_p \to 0$ as $M \to \infty$, we get the desired result.
Exercise 6.2.2. (1) Show that if $g_n \to g$ a.s. and $E(\sup_k |g_k|) < \infty$, then
$$\frac{1}{n}\sum_{m=0}^{n-1} g_m(\varphi^m\omega) \to E(g \mid \mathcal{I}) \quad\text{a.s.}$$
Proof: We claim that
$$\limsup_n \left|\frac{1}{n}\sum_{m=0}^{n-1}(g_m - g)(\varphi^m\omega)\right| = 0 \quad\text{a.s.}$$
In fact, taking an arbitrary $N$, we can define the integrable random variable $G_N = \sup_{k \ge N}|g_k - g|$, using the condition $E(\sup_k|g_k|) < \infty$. Then we have by the ergodic theorem that
$$\limsup_n \left|\frac{1}{n}\sum_{m=0}^{n-1}(g_m - g)(\varphi^m\omega)\right| \le \limsup_n \frac{1}{n}\sum_{m=0}^{n-1} G_N(\varphi^m\omega) = E(G_N \mid \mathcal{I}) \quad\text{a.s.}$$
(the finitely many terms with $m < N$ do not affect the Cesaro limit). According to the facts that $G_N \le \sup_k|g_k| + |g|$ and $G_N \downarrow 0$ a.s. as $N \to \infty$, we have $E(G_N \mid \mathcal{I}) \to 0$ a.s. due to the dominated convergence theorem. Therefore, the claim is true. Applying this claim together with the ergodic theorem applied to $g$, we get that
$$\frac{1}{n}\sum_{m=0}^{n-1} g_m(\varphi^m\omega) \to E(g \mid \mathcal{I}) \quad\text{a.s.}$$
Exercise 6.2.3. Let , , and . Show that if then
Proof: Define , , and . Then it is easy to see that , and . Lemma 6.2.2 says that . Therefore, as desired.
Exercise 6.3.1. Let for and . Show that
where and are as in Theorem 6.3.1.
Proof: Note that
Therefore,
Exercise 6.3.2. Under the setting of Theorem 6.3.2, show that if we assume , and the sequence is ergodic, then .
Proof: It is an elementary fact from analysis that if , then we must have
and
The ergodic theorem says that
so we must have
and
Note, from the condition , we have
which now implies that
However, from Theorem 6.3.1 we already know that . Therefore, we must have .
Exercise 6.3.3. Show that if and then
Proof: We can find a two-sided stationary process with the same finite-dimensional distributions as (see Exercise 6.1.4). With some abuse of notation, we denote this two-sided stationary process again by . Now, we can verify that
Exercise 6.3.4. Consider the special case in which , and let . Here and so . Show .
Proof: From Theorem 6.3.3 we know that . Therefore
On the other hand, with some abuse of notation, assuming that is a two-sided stationary sequence, we have