
Introduction

In this post we will be looking at probabilistic coupling, which is putting two random variables (or processes) on the same probability space. This turns out to be a surprisingly powerful tool in probability, and this post will hopefully brainwash you into agreeing with me.

Consider first this baby problem. You and I flip coins; let X_n denote your n-th flip and Y_n mine. Suppose that my coin has probability p_1 of landing heads and yours has probability p_2 \geq p_1. Let

T_Z:=\inf\{n\geq 3: Z_nZ_{n-1}Z_{n-2}=HHH\}

be the first time the sequence Z sees three heads in a row. Now obviously you know that \mathbb{E}[T_X]\leq \mathbb{E}[T_Y], but can you prove it?

Of course, here it is possible to compute both expectations and compare them directly, but that computation is rather messy and long. Instead, this will serve as a baby example of coupling.
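As a teaser, here is a small simulation sketch (my own, not from the post): drive both coins with the same uniform random variables U_n, declaring your flip X_n heads when U_n \leq p_2 and my flip Y_n heads when U_n \leq p_1. Since p_1 \leq p_2, every head of mine is also a head of yours, so T_X \leq T_Y on every single path, and taking expectations gives the inequality.

```python
import random

def coupled_hhh_times(p1, p2, rng):
    """One coupled run: X_n is heads iff U_n <= p2, Y_n is heads iff
    U_n <= p1, both driven by the same uniform U_n. Returns (T_X, T_Y),
    the first times each sequence completes three heads in a row."""
    t_x = t_y = None
    run_x = run_y = 0  # current runs of consecutive heads
    n = 0
    while t_x is None or t_y is None:
        n += 1
        u = rng.random()
        run_x = run_x + 1 if u <= p2 else 0
        run_y = run_y + 1 if u <= p1 else 0
        if t_x is None and run_x >= 3:
            t_x = n
        if t_y is None and run_y >= 3:
            t_y = n
    return t_x, t_y

rng = random.Random(0)
samples = [coupled_hhh_times(0.3, 0.5, rng) for _ in range(10_000)]
assert all(t_x <= t_y for t_x, t_y in samples)  # T_X <= T_Y pathwise
print(sum(t for t, _ in samples) / len(samples),  # approx E[T_X]
      sum(t for _, t in samples) / len(samples))  # approx E[T_Y]
```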

Read the rest of this entry »

This post will be essentially about functions of bounded variation of one variable. The main source is the book “Functions of Bounded Variation and Free Discontinuity Problems” by Ambrosio, Fusco and Pallara. Before we give the definition of a function of bounded variation, let us recall what exactly it means for a function to belong to W^{1,1}(0,1). Recall that any function f\in L^{1}(0,1) can be seen as a distribution T_{f}, i.e. as a continuous linear functional T_{f}:C_{c}^{\infty}(0,1)\to\mathbb{R} with

T_{f}(\phi)=\int_{0}^{1}f(x)\phi(x)dx,\quad \forall \phi\in C_{c}^{\infty}(0,1).

In that case we say that the distribution T_{f} is representable by the function f. Given any distribution T we can define its distributional derivative  DT  to be the distribution defined as

DT(\phi)=-T(\phi'),\quad \forall \phi\in C_{c}^{\infty}(0,1).

In the special case where the distribution T is representable by a function f in the way shown above, the distributional derivative DT_{f} will be

DT_{f}(\phi)=-\int_{0}^{1}f(x)\phi'(x)dx,\quad \forall \phi\in C_{c}^{\infty}(0,1).
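As a quick sanity check (a standard example, not from the book), take f=\mathbb{1}_{[1/2,1)}\in L^{1}(0,1). Then for every \phi\in C_{c}^{\infty}(0,1),

DT_{f}(\phi)=-\int_{1/2}^{1}\phi'(x)dx=\phi(1/2),

so DT_{f} is the Dirac mass \delta_{1/2}: the distributional derivative of f is a measure rather than an L^{1} function, which is exactly the phenomenon separating BV(0,1) from W^{1,1}(0,1).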

Read the rest of this entry »

If you haven’t read my previous post, I would suggest doing so before this one. We denote by \mathcal{P}_\infty the partitions of \mathbb{N}. The thing to keep in mind here is that we want to think of a coalescent process as a history of lineages. Suppose we start with the trivial partition (\{1\},\{2\},\dots) and think of each block \{i\} as a member of some population. A coalescent process \Pi=(\Pi(t) :t \geq 0) on \mathcal{P}_\infty essentially defines ancestries, in the sense that if i and j belong to the same block of \Pi(t) for some t\geq 0, then we think of that block as the common ancestor of i and j.

With this in mind, define the operator Coag:\mathcal{P}_\infty \times \mathcal{P}_\infty \rightarrow \mathcal{P}_\infty by

Coag(\pi,\pi')_i=\bigcup_{j \in \pi'_i}\pi_j.

Under suitable conditions, we can define the same operator on \mathcal{P}_{[n]}, the partitions of [n]:=\{1,\dots,n\}. For example, if \pi=(\{1,3,5\},\{2\},\{4\}) and \pi'=(\{1,3\},\{2\}), then Coag(\pi,\pi')=(\{1,3,4,5\},\{2\}): the partition \pi' tells us in this case to merge the first and third blocks and to leave the second block alone.
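As an aside, here is a small implementation sketch (my own) of Coag on partitions of [n], with blocks stored as Python sets ordered by least element:

```python
def coag(pi, pi_prime):
    """Coagulation: block i of the result is the union of the blocks
    pi[j] for j in the i-th block of pi_prime (indices taken 1-based)."""
    blocks = [set().union(*(pi[j - 1] for j in block)) for block in pi_prime]
    # Re-order blocks by their least element, as in the convention above.
    return sorted(blocks, key=min)

pi = [{1, 3, 5}, {2}, {4}]
pi_prime = [{1, 3}, {2}]
print(coag(pi, pi_prime))  # [{1, 3, 4, 5}, {2}]
```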

Read the rest of this entry »

In this post we will look at random partitions. A partition \pi of \mathbb{N} is a collection of pairwise disjoint subsets \{\pi_i\}_{i=1}^\infty of \mathbb{N} such that \bigcup_i \pi_i=\mathbb{N}. We arrange these sets \pi_i in the order of their least elements, so that \inf \pi_1 < \inf \pi_2< \dots, and if there are only finitely many non-empty subsets, we trail the sequence with empty sets, e.g. (\{1\}, \mathbb{N}\backslash \{1\},\emptyset,\dots). Denote the set of partitions of \mathbb{N} by \mathcal{P}_\infty.

Notice that each \pi \in \mathcal{P}_\infty induces an equivalence relation \buildrel \pi \over \sim on \mathbb{N} by i \buildrel \pi \over \sim j if and only if i and j belong to the same block of \pi. Now let \sigma be a permutation of \mathbb{N}; we define a partition \sigma \pi by saying that i and j are in the same block of \sigma \pi if and only if \sigma(i) \buildrel \pi \over \sim \sigma(j).

Suppose now that \pi is a random partition. We say that \pi is exchangeable if for every permutation \sigma that moves only finitely many elements, \sigma\pi has the same distribution as \pi.
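To make the action concrete, here is a small sketch (my own, for partitions of [n] rather than \mathbb{N}) of the map \pi \mapsto \sigma\pi; exchangeability then asks that this map preserves the law of a random \pi for every such \sigma.

```python
def permute_partition(pi, sigma):
    """Return sigma.pi, where i and j share a block of sigma.pi
    iff sigma(i) and sigma(j) share a block of pi.
    pi: list of blocks (sets) of {1,...,n}; sigma: dict permutation."""
    new_blocks = [{i for i in sigma if sigma[i] in block} for block in pi]
    # Keep the convention of ordering blocks by their least element.
    return sorted((b for b in new_blocks if b), key=min)

pi = [{1, 2}, {3}]
sigma = {1: 3, 2: 2, 3: 1}  # the transposition swapping 1 and 3
print(permute_partition(pi, sigma))  # [{1}, {2, 3}]
```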

There is a wonderful theorem by Kingman which will follow shortly, but for now let us look at some basic properties of random exchangeable partitions.

Read the rest of this entry »

There is a remarkably nice proof of the Lebesgue decomposition theorem (described below) by von Neumann. This leads immediately to the Radon-Nikodym theorem.

Theorem:

If \mu and \nu are two finite measures on (\Omega,\mathcal{F}), then there exist a non-negative measurable function f and a \mu-null set B such that

\nu(A)=\int_A f \, d\mu+ \nu(A \cap B)

for each A \in \mathcal{F}.

Proof:

Let \pi:=\mu+\nu and consider the operator

T(f):=\int f\, d\nu.

Read the rest of this entry »
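Since the excerpt cuts off here, let me flag the punchline of von Neumann’s trick (a standard observation, not the post’s own words): T is a bounded linear functional on L^2(\Omega,\pi), because by the Cauchy–Schwarz inequality

|T(f)| \leq \int |f| \, d\nu \leq \int |f| \, d\pi \leq \pi(\Omega)^{1/2}\|f\|_{L^2(\pi)},

so the Riesz representation theorem produces a g \in L^2(\pi) with T(f)=\int fg \, d\pi, from which the decomposition is extracted.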

The fundamental theorem of algebra states that \mathbb{C} is algebraically closed, that is:

Theorem:

For any non-constant polynomial p over \mathbb{C}, there exists z\in \mathbb{C} such that p(z)=0.

Proof:

Let B=(B_t: t \geq 0) be a Brownian motion on \mathbb{C} and suppose for a contradiction that some non-constant polynomial p has no zeros. Let f:=1/p; then f is analytic and tends to 0 at infinity. Pick \alpha < \beta such that \{Re f \leq \alpha\} and \{Re f \geq \beta\} both contain an open set, which can be done due to the fact that Re f is continuous and non-constant.

Now f(B_t) is a continuous local martingale (by Itô’s formula), and moreover it is bounded, hence a true martingale. By the martingale convergence theorem, f(B_t) converges a.s. and in L^1 as t \to \infty.

This last statement is contradicted by the fact that Brownian motion is neighbourhood recurrent on the complex plane; in particular, it visits \{Re f \leq \alpha\} and \{Re f \geq \beta\} at arbitrarily large times, which gives that

\liminf_{t \to \infty} Re f(B_t) \leq \alpha < \beta \leq \limsup_{t \to \infty} Re f(B_t) a.s.,

directly contradicting the martingale convergence.

I found this little gem in Rogers and Williams.
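The key probabilistic input above is the neighbourhood recurrence of planar Brownian motion. As a crude illustration (my own sketch, a random-walk discretisation with arbitrary parameters), the following counts how often an approximate planar Brownian path re-enters two fixed disjoint balls; both counts keep growing as the time horizon does.

```python
import random

def ball_visits(n_steps=500_000, dt=0.01, seed=1):
    """Random-walk approximation of planar Brownian motion; count how
    many separate times the path enters each of two disjoint unit balls,
    centred at (0, 0) and (5, 0)."""
    rng = random.Random(seed)
    x = y = 0.0
    centres = [(0.0, 0.0), (5.0, 0.0)]
    counts = [0, 0]
    inside = [False, False]
    step = dt ** 0.5
    for _ in range(n_steps):
        x += rng.gauss(0.0, step)
        y += rng.gauss(0.0, step)
        for k, (cx, cy) in enumerate(centres):
            now = (x - cx) ** 2 + (y - cy) ** 2 < 1.0
            if now and not inside[k]:
                counts[k] += 1  # a fresh entry into ball k
            inside[k] = now
    return counts

# Both counts keep growing with n_steps: the walk returns to each
# ball again and again, though ever more slowly.
print(ball_visits())
```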

The optional stopping theorem (OST) gives that if X is a martingale and T is a bounded stopping time, then \mathbb{E}[X_T]=\mathbb{E}[X_0].

Now take a Brownian motion B=(B_t:t \geq 0), which is a martingale, and the stopping time T=\inf\{t>0:B_t=1\}. Obviously B_T=1 and OST says that 1=\mathbb{E}[B_T]=\mathbb{E}[B_0]=0.

So what went wrong? Well quite simply, I did not check that T was actually bounded. Why is it not bounded, you say? No? Ok, well I’m going to tell you anyway: it is not bounded because for every fixed t, some paths stay below 1 up to time t, so that \mathbb{P}(T>t)>0. To show this properly, one can compute the distribution of the running supremum \sup_{s \leq t} B_s (by the reflection principle it has the same law as |B_t|) and check that the probability that it is less than 1 is positive.

A rather nice reminder to always check that the stopping times are bounded before applying the OST. This example was given in a lecture by Nathanael Berestycki.
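To see the failure numerically, here is a small sketch (my own, with an Euler discretisation): for the truncated time T \wedge t, which is bounded, the OST does apply, and the simulation shows \mathbb{E}[B_{T \wedge t}] \approx 0 even though most paths have already hit 1 by time t.

```python
import random

def stopped_value(t_max, rng, dt=0.01):
    """Simulate B until it hits 1 or until time t_max; return B_{T ∧ t_max}
    (ignoring discretisation overshoot at the hitting time)."""
    b, t = 0.0, 0.0
    while t < t_max:
        b += rng.gauss(0.0, dt ** 0.5)
        t += dt
        if b >= 1.0:
            return 1.0  # the path hit 1, so T <= t_max
    return b

rng = random.Random(0)
vals = [stopped_value(50.0, rng) for _ in range(2000)]
frac_hit = sum(v == 1.0 for v in vals) / len(vals)
# E[B_{T ∧ 50}] is ~0 (OST for the bounded time T ∧ 50),
# even though most paths have already hit 1 by then.
print(sum(vals) / len(vals), frac_hit)
```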