
### Introduction

In this post we will be looking at probabilistic coupling, which means putting two random variables (or processes) on the same probability space. This turns out to be a surprisingly powerful tool in probability, and this post will hopefully brainwash you into agreeing with me.

Consider first this baby problem. You and I flip coins; denote by $X_i$ my $i$-th coin flip and by $Y_i$ yours. Suppose that I have probability $p$ of landing heads and you have probability $q \geq p$ of landing heads. Let

$\tau_X = \min\{n \geq 3 : X_{n-2} = X_{n-1} = X_n = \text{heads}\}, \qquad \tau_Y = \min\{n \geq 3 : Y_{n-2} = Y_{n-1} = Y_n = \text{heads}\}$

be the first times each of us sees three heads in a row. Now obviously you know that $\mathbb{E}[\tau_Y] \leq \mathbb{E}[\tau_X]$, but can you prove it?

Of course here it is possible to compute both of the expectations and show this directly, but this is rather messy and long. Instead this will serve as a baby example of coupling.
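The coupling idea can be sketched in a few lines of Python (a hedged illustration: the function name, the probabilities and the run length of three are my own choices). Drive both coins with the *same* uniform random variables: my coin shows heads when the uniform falls below $p$, yours when it falls below $q$, so with $p \leq q$ every heads of mine is also a heads of yours, and your first run of three heads can only come earlier.

```python
import random

def coupled_runs(p, q, target=3, seed=0):
    """Flip two coins driven by the SAME uniforms: U < p is heads for
    the p-coin and U < q is heads for the q-coin.  With p <= q, every
    heads of the p-coin is also a heads of the q-coin, so the q-coin's
    first run of `target` heads occurs no later than the p-coin's."""
    rng = random.Random(seed)
    run_p = run_q = 0
    tau_p = tau_q = None
    t = 0
    while tau_p is None or tau_q is None:
        t += 1
        u = rng.random()
        run_p = run_p + 1 if u < p else 0
        run_q = run_q + 1 if u < q else 0
        if tau_p is None and run_p == target:
            tau_p = t
        if tau_q is None and run_q == target:
            tau_q = t
    return tau_p, tau_q

# The inequality holds on every single path, not just in expectation.
results = [coupled_runs(0.4, 0.6, seed=s) for s in range(200)]
assert all(tau_q <= tau_p for tau_p, tau_q in results)
```

Taking expectations of the pathwise inequality then gives the comparison of the means with no messy computation.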

Ever wonder what happens when you send your card details over the internet? How exactly does public-key encryption work? I will write a brief, non-technical introduction to these concepts and the mathematical background behind them, containing some sample code and plenty of examples.

Before we describe the RSA algorithm, there is one important mathematical concept to cover: prime numbers and their factorization. Recall that a number $p$ is prime if no number divides $p$ other than itself and $1$. For technical reasons we exclude the number $1$ from being prime. So let's see some examples. Is $7$ prime? Well, yes: no number other than $1$ and $7$ divides it. Is $6$ prime? No, because $6 = 2 \times 3$.

There is a fundamental theorem in number theory which says that every number $n > 1$ can be uniquely written as a product of prime numbers, i.e. $n = p_1 p_2 \cdots p_k$ where the $p_i$ are prime. So again, a few examples cannot hurt. Take the number $12$. We know that $12 = 2 \times 6$, but now $6$ is not prime, so $6 = 2 \times 3$. Hence $12 = 2 \times 2 \times 3$. That's what a prime factorisation is, and what the theorem says is pretty basic: if a number has two prime factorisations $n = p_1 \cdots p_k = q_1 \cdots q_m$, then $k = m$ and the $q_i$ are just the $p_i$ rearranged.
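As a quick illustration (a sketch in Python; the function name and the example numbers are my own), trial division recovers the prime factorisation by repeatedly dividing out the smallest remaining factor, which is necessarily prime:

```python
def factorize(n):
    """Prime factorisation by trial division: repeatedly divide out
    the smallest factor of n, which is necessarily prime."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:            # whatever is left over is itself prime
        factors.append(n)
    return factors

assert factorize(12) == [2, 2, 3]
```

Note that this takes time roughly proportional to $\sqrt{n}$, which is astronomically slow once $n$ has hundreds of digits.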

At this point I can state the fundamental idea behind RSA:

**Factoring a number into prime factors is much harder than checking if a number is prime!**

But why is this true? There are technical reasons for this, but I prefer to think along the following lines. Computers are much like humans, so imagine a human given the task of factorising numbers versus the task of checking whether numbers are prime.

The trolley dilemma is summed up in two parts as follows. Suppose that a trolley is running down a hill at high speed, heading towards five people at the bottom of the street. When it reaches them it will surely kill all of them. You notice that there is a switch next to you that could direct the trolley to a side path where one man is standing, and once you flip it, it will be the one man that dies. Would you do it?

Most people would answer this question in the affirmative. Let us call this the switch scenario. In the second scenario, a trolley is again running down a hill at high speed, aimed at five people at the bottom whom it will surely kill. However, this time you are standing on a bridge with a fat man next to you. If you push the fat man off the bridge, the trolley will stop, but kill the fat man. Would you do it?


If you haven’t read my previous post, I would suggest doing so before this one. We denote by $\mathcal{P}_n$ the set of partitions of $\{1, \dots, n\}$. The thing to keep in mind here is that we want to think of a coalescent process as a history of lineages. Suppose we start with the trivial partition $\{\{1\}, \{2\}, \dots, \{n\}\}$ and think of each block as a member of some population. A coalescent process $(\Pi_t)_{t \geq 0}$ on $\mathcal{P}_n$ essentially defines ancestries, in the sense that if $i$ and $j$ belong to the same block of $\Pi_t$ for some $t$, then we think of that block as the common ancestor of $i$ and $j$.

With this in mind, define the coagulation operator $\mathrm{Coag}$ by

$\mathrm{Coag}(\pi, \pi') = \text{the partition whose blocks are } \bigcup_{j \in B} \pi_j \text{ for } B \text{ a block of } \pi',$

where $\pi_j$ denotes the $j$-th block of $\pi$. With some conditions, we can define the same operator on $\mathcal{P}_\infty$, the partitions of $\mathbb{N}$. So for example if $\pi = \{\{1,4\},\{2\},\{3,5\}\}$ and $\pi' = \{\{1,3\},\{2\}\}$, then $\mathrm{Coag}(\pi,\pi') = \{\{1,3,4,5\},\{2\}\}$. The partition $\pi'$ tells us in this case to merge the first and third blocks and leave the second block alone.
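Under this block-merging reading, the operator can be sketched in a few lines of Python (a hedged illustration: the list-of-sets representation and the example partitions are my own choices):

```python
def coag(pi, pi_prime):
    """Coagulation: pi is a partition given as a list of blocks (sets),
    and pi_prime is a partition of the block-indices {1, ..., len(pi)}.
    Each block B of pi_prime merges the corresponding blocks of pi."""
    return [set().union(*(pi[i - 1] for i in block)) for block in pi_prime]

pi = [{1, 4}, {2}, {3, 5}]
pi_prime = [{1, 3}, {2}]          # merge blocks 1 and 3, leave block 2
assert coag(pi, pi_prime) == [{1, 3, 4, 5}, {2}]
```

Coagulating with the trivial partition of the indices leaves the partition unchanged, as one would expect of an identity for the operation.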

In this post we will look at random partitions. A **partition** of $\mathbb{N}$ is a collection $\pi = (B_1, B_2, \dots)$ of disjoint subsets of $\mathbb{N}$ such that $\bigcup_i B_i = \mathbb{N}$. We arrange these sets in the order of their least element, so that $\min B_1 < \min B_2 < \cdots$, and if there are only finitely many non-empty subsets, we trail the sequence with empty sets, e.g. $(\{1,3\}, \{2\}, \emptyset, \emptyset, \dots)$. Denote the set of partitions of $\mathbb{N}$ by $\mathcal{P}_\infty$.

Notice that each $\pi \in \mathcal{P}_\infty$ induces an equivalence relation $\sim_\pi$ on $\mathbb{N}$ by $i \sim_\pi j$ if and only if $i$ and $j$ belong to the same block of $\pi$. Now let $\sigma$ be a permutation of $\mathbb{N}$, and we define a partition $\sigma\pi$ by saying that $i$ and $j$ are in the same block of $\sigma\pi$ if and only if $\sigma(i) \sim_\pi \sigma(j)$.

Suppose now that $\Pi$ is a random partition. We say that $\Pi$ is **exchangeable** if for each permutation $\sigma$ which changes only finitely many elements, $\sigma\Pi$ has the same distribution as $\Pi$.
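The action of a permutation on a partition is easy to sketch in code (an illustration with my own representation: blocks as sets of integers, and a permutation as a dict on a finite ground set):

```python
def apply_permutation(pi, sigma):
    """Return sigma.pi, where i and j share a block of sigma.pi
    iff sigma(i) and sigma(j) share a block of pi.
    pi: list of blocks (sets of ints) covering the ground set;
    sigma: dict i -> sigma(i), a bijection of that ground set."""
    blocks = [{i for i in sigma if sigma[i] in block} for block in pi]
    # keep non-empty blocks, ordered by least element as in the text
    return sorted((b for b in blocks if b), key=min)

pi = [{1, 2}, {3}]
sigma = {1: 3, 2: 2, 3: 1}        # transposition swapping 1 and 3
assert apply_permutation(pi, sigma) == [{1}, {2, 3}]
```

Exchangeability then says that for a random $\Pi$, this relabelled partition has the same law as $\Pi$ itself, whichever finitely-supported $\sigma$ we pick.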

There is a wonderful theorem by Kingman which will follow shortly, but for now let us look at some basic properties of exchangeable random partitions.

There is a remarkably nice proof of the Lebesgue decomposition theorem (described below) by von Neumann. This leads immediately to the Radon-Nikodym theorem.

**Theorem:**

*If $\mu$ and $\nu$ are two finite measures on a measurable space $(X, \mathcal{F})$, then there exists a non-negative (w.r.t. both measures) measurable function $f$ and a $\mu$-null set $A$ such that*

$\nu(E) = \int_E f \, d\mu + \nu(E \cap A)$

*for each $E \in \mathcal{F}$.*

**Proof:**

Let $\rho = \mu + \nu$ and consider the operator

$T(g) = \int_X g \, d\nu, \qquad g \in L^2(\rho).$

The fundamental theorem of algebra states that $\mathbb{C}$ is algebraically closed, that is:

Theorem:

*For any non-constant polynomial $p$ in $\mathbb{C}$, there exists a $z \in \mathbb{C}$ such that $p(z) = 0$.*

Proof:

Let $(B_t)_{t \geq 0}$ be a Brownian motion on $\mathbb{C}$ and suppose for a contradiction that a non-constant polynomial $p$ does not have any zeros. Let $f = 1/p$; then $f$ is analytic and tends to $0$ at infinity. Pick $z_1, z_2$ such that $f(z_1) \neq f(z_2)$, let $D_1, D_2$ be disjoint open discs around $f(z_1)$ and $f(z_2)$, and note that $U_1 = f^{-1}(D_1)$ and $U_2 = f^{-1}(D_2)$ contain open sets, which can be done due to the fact that $f$ is continuous and non-constant.

Now $f(B_t)$ is a continuous local martingale (by using Itô's formula together with the analyticity of $f$) and moreover it is bounded. Hence by the martingale convergence theorem we have that $\lim_{t \to \infty} f(B_t)$ exists a.s. and in $L^2$.

This last statement is contradicted by the fact that Brownian motion is recurrent on the complex plane; in particular, it visits $U_1$ and $U_2$ at arbitrarily large times, which gives that

$f(B_t) \in D_1 \text{ and } f(B_t) \in D_2, \text{ each for arbitrarily large } t, \quad \text{a.s.,}$

directly contradicting the martingale convergence: the limit would have to lie in the closures of both of the disjoint discs $D_1$ and $D_2$.

I found this little gem in Rogers and Williams.

The optional stopping theorem (OST) gives that if $(M_t)_{t \geq 0}$ is a martingale and $T$ is a bounded stopping time, then $\mathbb{E}[M_T] = \mathbb{E}[M_0]$.

Now take a Brownian motion $(B_t)_{t \geq 0}$ started at $0$, which is a martingale, and the stopping time $T = \inf\{t \geq 0 : B_t = 1\}$. Obviously $B_T = 1$, and OST says that $1 = \mathbb{E}[B_T] = \mathbb{E}[B_0] = 0$.

So what went wrong? Well, quite simply, I did not check that $T$ was actually bounded. Why is it not bounded, you say? No? Ok, well I'm going to tell you anyway: it is not bounded because for every fixed time $t$ there are paths that trail below $0$ and have not yet hit $1$ by time $t$ (even though the path hits $1$ eventually, almost surely). To show this properly, one can work out the distribution of the supremum of the Brownian motion up to time $t$ and check that the probability that it is less than $1$ is positive.
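A quick numerical sketch of what goes wrong, using a simple symmetric random walk as a discrete stand-in for Brownian motion (the names and the truncation level are my own choices). Truncating at $T \wedge n$ gives a bounded stopping time, so the OST applies and the stopped walk has mean $0$; yet most paths have already stopped at $1$, so removing the truncation would push the mean towards $1$, not $0$:

```python
import random

def stopped_walk(rng, n=1000):
    """Run a simple symmetric random walk for at most n steps,
    stopping early the first time it hits 1; return S_{T ∧ n}."""
    s = 0
    for _ in range(n):
        s += rng.choice((-1, 1))
        if s == 1:
            break
    return s

rng = random.Random(0)
vals = [stopped_walk(rng) for _ in range(20000)]
mean = sum(vals) / len(vals)
hit_frac = vals.count(1) / len(vals)

assert abs(mean) < 0.3   # OST holds for the bounded time T ∧ n: E[S] = 0
assert hit_frac > 0.9    # ...even though most paths have stopped at 1
```

The few paths that have not yet hit $1$ sit far below zero, and they carry exactly enough negative mass to pull the mean back to $0$; letting $n \to \infty$ destroys that balance, which is precisely the failure of the OST for the unbounded $T$.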

Rather a nice reminder to always check that the stopping time is bounded *before* applying the OST. This example was given in a lecture by Nathanael Berestycki.