This blog is written by three PhD students in mathematics and the mathematical posts are categorized by the level of technical knowledge required to appreciate them fully: Posts in the “For Everyone” category assume no knowledge of mathematics beyond that which it is compulsory to study at school. The posts in the “For Mathematicians” category contain or are about advanced mathematics studied at graduate level and beyond. The “For Students” category is for everything in between.

### Introduction

In this post we will be looking at probabilistic coupling, which means constructing two random variables (or processes) on the same probability space. This turns out to be a surprisingly powerful tool in probability, and this post will hopefully brainwash you into agreeing with me.

Consider first this baby problem. You and I flip coins; $X_n$ denotes your $n$-th flip and $Y_n$ mine. Suppose that I have probability $p_1$ of landing heads and you have probability $p_2 \geq p_1$. Let

$T_Z:=\inf\{n\geq 3: Z_nZ_{n-1}Z_{n-2}=HHH\}$

be the first time the sequence $Z \in \{X, Y\}$ sees three heads in a row. Now obviously you know that $\mathbb{E}[T_X]\leq \mathbb{E}[T_Y]$, but can you prove it?

Of course it is possible here to compute both expectations and compare them directly, but this is rather messy and long. Instead, this will serve as a baby example of coupling.
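The coupling can even be sketched in a few lines of Python (the helper name `coupled_hhh_times` is mine, not from any post): drive both coins with the *same* uniform $U_n$, declaring your flip a head when $U_n < p_2$ and mine a head when $U_n < p_1$. Since $p_1 \leq p_2$, every head of mine is also a head of yours, so on every single path $T_X \leq T_Y$; taking expectations gives the inequality.

```python
import random

def coupled_hhh_times(p1, p2, seed=None):
    """One coupled run: X lands heads with prob p2, Y with prob p1 <= p2,
    both driven by the SAME uniforms, so every head of Y is a head of X."""
    rng = random.Random(seed)
    x_run = y_run = 0          # current streaks of consecutive heads
    t_x = t_y = None
    n = 0
    while t_y is None:         # under this coupling Y can never finish first
        n += 1
        u = rng.random()
        x_run = x_run + 1 if u < p2 else 0
        y_run = y_run + 1 if u < p1 else 0
        if t_x is None and x_run == 3:
            t_x = n
        if y_run == 3:
            t_y = n
    return t_x, t_y
```

Running this for any seed always returns a pair with the first coordinate no larger than the second, which is the pathwise inequality the coupling buys us.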

In the previous post we discussed the number of primes less than a given number and derived some very poor estimates for this quantity. In this post, using no extra technical machinery whatsoever, we derive a slightly better estimate.

Previously, we used Euclid’s proof of the infinitude of the primes as inspiration for a way of arriving at an upper bound for the number of primes less than a given number. Here, we will come up with a slightly cleverer, more quantitative proof of Euclid’s Theorem and, as before, our estimate will grow out of the proof of Euclid’s Theorem, the improvement in the bound being testament to the fact that the proof is more illuminating.

If we are prepared to accept that every number greater than one has a prime divisor, then Euclid’s famous proof of the infinitude of prime numbers is one sentence long: Given any prime number p, if I multiply together all the prime numbers less than or equal to p and then add one, I get a number which leaves a remainder of one when divided by any of the prime numbers less than or equal to p, and hence has a prime divisor greater than p. I try to explain why every number greater than one has a prime divisor in this post.

Today I would like to think a little bit more about prime numbers. Specifically, we will spend some time thinking about the number of prime numbers less than a given number. We will start by seeing if we can get any quantitative information out of Euclid’s proof itself, before moving on to cleverer ways of achieving this.
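None of the estimates in the post need a computer, but it is instructive to know the true counts we are trying to approximate. Here is a sketch (a standard method, not one discussed in the post itself) that lists the primes up to $x$ with the sieve of Eratosthenes, from which the count of primes up to $x$ is just the length of the list:

```python
def primes_up_to(x):
    """Sieve of Eratosthenes: cross out the multiples of each prime up to sqrt(x)."""
    flags = [True] * (x + 1)
    flags[0:2] = [False, False]      # 0 and 1 are not prime
    p = 2
    while p * p <= x:
        if flags[p]:
            for m in range(p * p, x + 1, p):
                flags[m] = False
        p += 1
    return [n for n in range(2, x + 1) if flags[n]]
```

For example, `len(primes_up_to(100))` gives 25, the number any estimate for primes below one hundred should be judged against.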

The study of  whole numbers is one of the oldest lines of inquiry in mathematics which is still thriving today. After all, it is possible in pure mathematics for the relevance of work done thousands of years ago to be undiminished today. I want to talk today about what is possibly the prototypical example of such a piece of mathematics. I am referring to Euclid’s proof of the infinitude of prime numbers.

A divisor of a number n is a number which divides n with no remainder. For example, three is a divisor of six, seven is a divisor of seven and thirty-nine is a divisor of one hundred and seventeen. A prime number is a number with exactly two divisors. Let’s think about what it means to have exactly two divisors. Firstly, we observe that every whole number has at least one divisor, namely the number itself. Secondly, every whole number except for ‘1’ has at least two divisors: The number itself and the number ‘1’. So, with the exception of the number ‘1’, being prime expresses the quality of having the fewest possible divisors. Why must there be infinitely many such numbers?
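To see the definition in action, here is a tiny brute-force sketch (my own illustration, not from the post): list every divisor of n, and observe that a number is prime exactly when the list has length two.

```python
def divisors(n):
    """All numbers from 1 to n that divide n with no remainder."""
    return [d for d in range(1, n + 1) if n % d == 0]
```

So `divisors(6)` gives `[1, 2, 3, 6]`, while a prime such as seven gives just `[1, 7]`: the fewest possible.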

Ever wonder what happens when you send your card details over the internet? How exactly does public-key encryption work? I will write a brief, non-technical introduction to these concepts and the mathematics behind them, containing some sample code and plenty of examples.

Before we describe the RSA algorithm, there is one important mathematical concept we need: prime numbers and their factorisation. Recall that a number $p$ is prime if no number other than $p$ and $1$ divides it. (For technical reasons we exclude the number $1$ itself from being prime.) So let’s see some examples. Is $3$ prime? Well, yes: no number other than $3$ and $1$ divides it. Is $24$ prime? No, because $24 = 12 \times 2$.
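A direct way to make this check on a computer is trial division. The sketch below is illustrative only (real cryptographic code uses much faster probabilistic tests): it is enough to try divisors up to $\sqrt{n}$, since any factorisation $n = ab$ has one factor at most $\sqrt{n}$.

```python
def is_prime(n):
    """Trial division: does any number in 2..sqrt(n) divide n?"""
    if n < 2:                 # 0 and 1 are not prime by convention
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:        # found a divisor other than 1 and n
            return False
        d += 1
    return True
```

As expected, `is_prime(3)` is `True` and `is_prime(24)` is `False`.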

There is a fundamental theorem in number theory which says that every number $n$ greater than one can be written uniquely as a product of prime numbers, i.e. $n = p_1^{\alpha_1} \dots p_k^{\alpha_k}$ where $p_1, \dots, p_k$ are prime. So again, a few examples cannot hurt. Take the number $24$. We know that $24 = 12 \times 2$, but $12$ is not prime, so $12 = 6 \times 2 = 3 \times 2 \times 2$. Hence $24 = 3 \times 2 \times 2 \times 2 = 3 \times 2^3$. That’s what a prime factorisation is, and the real content of the theorem is the uniqueness: there is only one way to write a number as a product of primes, so if $n = 3 \times 2^3$ then $n$ must be $24$.
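The repeated splitting of $24$ above is exactly what the following sketch automates: keep dividing out the smallest divisor, which is automatically prime, and record the exponents.

```python
def factorise(n):
    """Return the prime factorisation of n as a dict {prime: exponent}."""
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:                     # strip out every copy of d
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:                                 # whatever is left is itself prime
        factors[n] = factors.get(n, 0) + 1
    return factors
```

So `factorise(24)` gives `{2: 3, 3: 1}`, matching $24 = 2^3 \times 3$.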

At this point I can state the fundamental idea behind RSA:

Factoring a number into prime factors is much harder than checking if a number is prime!

But why is this true? There are technical reasons for this, but I prefer to think along the following lines. Computers are much like humans in this respect, so imagine a human given the task of factorising numbers and of checking whether numbers are prime.
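One standard illustration of the asymmetry (a sketch of my own; practical implementations usually use the stronger Miller–Rabin test) comes from Fermat’s little theorem: if $n$ is prime then $a^{n-1} \equiv 1 \pmod n$ for every $a$ not divisible by $n$. Modular exponentiation is fast even for enormous $n$, whereas trial-division factorising grinds through roughly $\sqrt{n}$ candidates.

```python
def fermat_probably_prime(n, witnesses=(2, 3, 5, 7)):
    """Fermat check: a prime n satisfies a**(n-1) % n == 1 for all a coprime to n.
    Some rare composites (Carmichael numbers) can pass, so this is only a
    probable-prime test -- but it runs in time polynomial in the number of digits."""
    if n < 2:
        return False
    for a in witnesses:
        if a % n == 0:          # skip witnesses that are multiples of n
            continue
        if pow(a, n - 1, n) != 1:
            return False        # definitely composite
    return True
```

Checking a 19-digit number such as $2^{61}-1$ this way is instantaneous, while factorising a number of that size by trial division would take billions of steps.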

The trolley dilemma is summarised in two parts as follows. Suppose that a trolley is running down a hill at speed, heading towards five people at the bottom of the street, all of whom it will surely kill when it reaches them. You notice a switch next to you that would divert the trolley onto a side path where one man is standing; if you pull it, it will be that one man who dies. Would you do it?

Most people would answer this question in the affirmative. Let us call this the switch scenario. In the second scenario, a trolley is again running down a hill at speed, heading towards five people it will surely kill. This time, however, you are standing on a bridge with a fat man next to you. If you push the fat man off the bridge, his body will stop the trolley, but he will die. Would you do it?

This post will be essentially about functions of bounded variation of one variable. The main source is the book “Functions of Bounded Variation and Free Discontinuity Problems” by Ambrosio, Fusco and Pallara. Before we give the definition of a function of bounded variation, let us recall what exactly it means for a function to belong to $W^{1,1}(0,1)$. Recall that any function $f\in L^{1}(0,1)$ can be seen as a distribution $T_{f}$, i.e. as a bounded linear functional $T_{f}:C_{c}^{\infty}(0,1)\to\mathbb{R}$ with

$T_{f}(\phi)=\int_{0}^{1}f(x)\phi(x)dx,\quad \forall \phi\in C_{c}^{\infty}(0,1).$

In that case we say that the distribution $T_{f}$ is representable by the function $f$. Given any distribution $T$ we can define its distributional derivative  $DT$  to be the distribution defined as

$DT(\phi)=-T(\phi'),\quad \forall \phi\in C_{c}^{\infty}(0,1)$.

In the special case where the distribution $T$ can be represented by a function in the way we showed above, the distributional derivative $DT_{f}$ will be

$DT_{f}(\phi)=-\int_{0}^{1}f(x)\phi'(x)dx,\quad \forall \phi\in C_{c}^{\infty}(0,1).$
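A standard worked example (my own addition, not taken from the book) shows why this is worth setting up: take $f=\chi_{(1/2,1)}$, the indicator function of $(1/2,1)$. Then for every $\phi\in C_{c}^{\infty}(0,1)$,

$DT_{f}(\phi)=-\int_{0}^{1}f(x)\phi'(x)dx=-\int_{1/2}^{1}\phi'(x)dx=\phi(1/2)-\phi(1)=\phi(1/2),$

since $\phi$ vanishes near the endpoints. So $DT_{f}=\delta_{1/2}$, a Dirac mass: a measure, but not representable by any $L^{1}$ function. This is exactly the kind of function that has bounded variation without belonging to $W^{1,1}(0,1)$.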

If you haven’t read my previous post, I would suggest doing so before this one. We denote by $\mathcal{P}_\infty$ the set of partitions of $\mathbb{N}$. The thing to keep in mind here is that we want to think of a coalescent process as a history of lineages. Suppose we start with the trivial partition $(\{1\},\{2\},\dots)$ and think of each block $\{i\}$ as a member of some population. A coalescent process $\Pi=(\Pi(t) : t \geq 0)$ on $\mathcal{P}_\infty$ essentially defines ancestries, in the sense that if $i$ and $j$ belong to the same block of $\Pi(t)$ for some $t\geq 0$, then we think of that block as the common ancestor of $i$ and $j$.

With this in mind, define the operator $Coag:\mathcal{P}_\infty \times \mathcal{P}_\infty \rightarrow \mathcal{P}_\infty$ by

$Coag(\pi,\pi')_i=\bigcup_{j \in \pi'_i}\pi_j$.

With some conditions, we can define the same operator on $\mathcal{P}_{[n]}$, the partitions of $[n]:=\{1,\dots,n\}$. So for example if $\pi=(\{1,3,5\},\{2\},\{4\})$ and $\pi'=(\{1,3\},\{2\})$, then $Coag(\pi,\pi')=(\{1,3,4,5\},\{2\})$. The partition $\pi'$ tells us in this case to merge the first and third block and leave the second block alone.
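The worked example can be checked mechanically. Here is a minimal Python sketch (my own, with partitions represented as lists of sets); unlike the post, block indices here are $0$-based, so the $\pi'$ of the example becomes `[{0, 2}, {1}]`:

```python
def coag(pi, pi_prime):
    """Coagulation: block i of the result is the union of the blocks pi[j]
    for j ranging over the i-th block of pi_prime (0-based indices)."""
    result = []
    for block in pi_prime:
        merged = set()
        for j in block:
            merged |= pi[j]          # merge the j-th block of pi
        result.append(merged)
    return result
```

With `pi = [{1, 3, 5}, {2}, {4}]` and `pi_prime = [{0, 2}, {1}]` this returns `[{1, 3, 4, 5}, {2}]`, as in the example above.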

Let’s jump straight in. Suppose I have a collection of n different objects, where n is some unknown but fixed whole number. Suppose I want to arrange k of them in a row on my shelf, where k is some unknown but fixed whole number less than or equal to n. In how many ways can this be done?

This is a counting problem. You are being asked to count how many of something there are. The answer will be a formula involving the unknown quantities n and k. Counting is often thought of as being easy. While it is true that the things which must be counted are easy to describe (I have just done so in a couple of sentences), this does not mean that the problem of counting them is easy. So, in this post I continue my quest to explain some basic things about counting and how to do it.
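I won’t spoil the formula here, but for small $n$ and $k$ you can count by brute force and look for the pattern yourself. The helper below (a hypothetical name of my choosing) simply enumerates every possible row of $k$ objects chosen from $n$:

```python
from itertools import permutations

def count_arrangements(n, k):
    """Brute-force count of the ways to arrange k of n distinct objects in a row."""
    return sum(1 for _ in permutations(range(n), k))
```

For instance, `count_arrangements(4, 2)` gives 12: four choices for the first object on the shelf, then three for the second.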