
In the previous post we discussed the number of primes less than a given number and derived some very poor estimates for this quantity. In this post, using no extra technical machinery whatsoever, we derive a slightly better estimate.

Previously, we used Euclid’s proof of the infinitude of the primes as inspiration for a way of arriving at an upper bound for the number of primes less than a given number. Here, we will come up with a slightly cleverer, more quantitative proof of Euclid’s Theorem and, as before, our estimate will grow out of the proof of Euclid’s Theorem, the improvement in the bound being testament to the fact that the proof is more illuminating.

If we are prepared to accept that every number greater than one has a prime divisor, then Euclid’s famous proof of the infinitude of prime numbers is one sentence long: Given any prime number *p*, if I multiply together all the prime numbers less than or equal to *p* and then add one, I get a number which leaves a remainder of one when divided by any of the prime numbers less than or equal to *p*, and hence has a prime divisor greater than *p*. I try to explain why every number greater than one has a prime divisor in this post.
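Euclid’s construction is easy to check by machine. Here is a minimal Python sketch of the one-sentence argument for *p* = 7; the helper `smallest_prime_factor` is my own illustrative function, not anything from the post:

```python
def smallest_prime_factor(n):
    """Return the smallest prime divisor of n (for n > 1)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # no divisor up to sqrt(n), so n itself is prime

# Euclid's construction for p = 7: multiply the primes up to p, add one.
primes_up_to_7 = [2, 3, 5, 7]
N = 2 * 3 * 5 * 7 + 1  # = 211

# N leaves remainder 1 on division by every prime up to 7...
assert all(N % p == 1 for p in primes_up_to_7)

# ...so its smallest prime factor must exceed 7.
print(smallest_prime_factor(N))
```

In this instance the number 211 happens to be prime itself, but the argument only needs its smallest prime factor to be bigger than 7.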

Today I would like to think a little bit more about prime numbers. Specifically, we will spend some time thinking about the number of prime numbers less than a given number. We will start by seeing if we can get any quantitative information out of Euclid’s proof itself, before moving on to cleverer ways of achieving this.

The study of whole numbers is one of the oldest lines of inquiry in mathematics which is still thriving today. After all, it is possible in pure mathematics for the relevance of work done thousands of years ago to be undiminished today. I want to talk today about what is possibly the prototypical example of such a piece of mathematics. I am referring to Euclid’s proof of the infinitude of prime numbers.

A divisor of a number *n* is a number which divides *n* exactly, leaving no remainder. For example, three is a divisor of six, seven is a divisor of seven and thirty-nine is a divisor of one hundred and seventeen. A *prime number* is a number with exactly two divisors. Let’s think about what it means to have exactly two divisors. Firstly, we observe that every whole number has at least one divisor, namely the number itself. Secondly, every whole number except for ‘1’ has at least two divisors: the number itself and the number ‘1’. So, with the exception of the number ‘1’, being prime expresses the quality of having the fewest possible divisors. Why must there be infinitely many such numbers?
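The ‘exactly two divisors’ definition can be made concrete with a brute-force Python sketch (the function names here are my own, chosen for illustration):

```python
def divisors(n):
    """All divisors of a positive whole number n, by trial division."""
    return [d for d in range(1, n + 1) if n % d == 0]

def is_prime(n):
    """Prime means: exactly two divisors."""
    return len(divisors(n)) == 2

print(divisors(6))    # the example above: 1, 2, 3 and 6
print(is_prime(1))    # False: '1' has only one divisor
print(is_prime(7))    # True: its divisors are 1 and 7
print(is_prime(117))  # False: thirty-nine divides it
```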

This blog is written by three PhD students in mathematics and the mathematical posts are categorized by the level of technical knowledge required to appreciate them fully: Posts in the “For Everyone” category assume no knowledge of mathematics beyond that which it is compulsory to study at school. The posts in the “For Mathematicians” category contain or are about advanced mathematics studied at graduate level and beyond. The “For Students” category is for everything in between.

Let’s jump straight in. Suppose I have a collection of *n* different objects, where *n* is some unknown but fixed whole number. Suppose I want to arrange *k* of them in a row on my shelf, where *k* is some unknown but fixed whole number less than or equal to *n*. In how many ways can this be done?

This is a counting problem. You are being asked to count how many of something there are. The answer will be a formula involving the unknown quantities *n* and *k*. Counting is often thought of as being easy. While it is true that the things which must be counted are easy to describe (I have just done so in a couple of sentences), this does not mean that the problem of counting them is easy. So, in this post I continue my quest to explain some basic things about counting and how to do it.
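For readers who like to experiment, the count can be brute-forced in Python and compared with the standard answer, the falling factorial *n*!/(*n* − *k*)!. This is an illustrative check, not part of the original post:

```python
from itertools import permutations
from math import factorial

def count_arrangements(n, k):
    """Brute-force count of ways to arrange k of n distinct objects in a row."""
    return sum(1 for _ in permutations(range(n), k))

# The brute-force count agrees with n!/(n-k)! for all small cases:
for n in range(6):
    for k in range(n + 1):
        assert count_arrangements(n, k) == factorial(n) // factorial(n - k)

print(count_arrangements(5, 2))  # 5 * 4 = 20
```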

Now that you’ve decided on your outfit, you take a trip to the cinema with a friend. When you get there, you see that there are only two seats left and they are next to each other. In how many different ways can you and your friend occupy these two seats? Well, once seated, either you will be sitting on the left and your friend will be to the right of you or you are sitting on the right with your friend on your left. So there are *two* ways in total. This is very easy to see, but what if there were three of you and three seats left? In how many ways can you and your two friends occupy the three seats?
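If you are happy to spoil the puzzle by enumeration, a few lines of Python list the seatings explicitly (an illustrative sketch, not part of the original post):

```python
from itertools import permutations

# Two people, two seats: the two ways described above.
two_ways = list(permutations(["you", "friend"]))
print(len(two_ways))  # 2

# Three people, three seats: list every possible seating left-to-right.
three_ways = list(permutations(["you", "friend A", "friend B"]))
for seating in three_ways:
    print(seating)
print(len(three_ways))
```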

*This post is in the ‘Mathematics for Everyone’ category. These posts will be short snippets and will, I hope, increase slowly in complexity.*

Suppose that you are going out to the cinema tonight and that you do not know what to wear. You have narrowed down your choice considerably but you cannot decide which of three different tops and which of two different pairs of jeans you would like to wear. You would wear any top with any pair of jeans, but you need to choose which top and which pair of jeans. How many different complete outfits are you choosing from?
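The count can be checked by simply listing the pairs. A small Python sketch follows; the labels for the tops and jeans are invented for illustration:

```python
from itertools import product

tops = ["top 1", "top 2", "top 3"]
jeans = ["jeans A", "jeans B"]

# Each complete outfit pairs one top with one pair of jeans.
outfits = list(product(tops, jeans))
for outfit in outfits:
    print(outfit)
print(len(outfits))  # 3 * 2 = 6
```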

This post is just to link Tim Gowers’s recent blog post on Plünnecke-type sumset estimates in groups because he discusses the recent work of his soon-to-be-former student Giorgis Petridis.

I had the pleasure of being taught (outstandingly) by Giorgis on more than one occasion during my undergraduate years and it was great to be in the audience for his recent talk on his work at the Isaac Newton Institute. His new papers are on the arXiv here. Congratulations!

### Affinity in Generality

Let *V* be a vector space. Consider the set of all affine transformations of *V:* An *affine transformation* of *V* is a map from *V* to itself which can be expressed as

*T(v) = L(v) + b*,

for some invertible linear map (automorphism) *L* and some vector *b* in *V*. The set of all affine transformations of *V* forms a group under composition and is called the *affine group* of *V*. Note that the set of automorphisms of *V* is a subgroup of the affine group, and that the set of translations of *V* is also a subgroup. Every element of the affine group can be expressed as a composition of an invertible linear map followed by a translation, and the only affine transformation which is both an automorphism and a translation is the identity map. It is also quite easy to see that the translation group is a normal subgroup of the affine group. However, the automorphism group of *V*, which, since *V* is a vector space, is known as the *general linear group* of *V*, is not.
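These observations are easy to test concretely. Below is an illustrative Python sketch, my own toy model rather than anything from the post, using affine maps of the real line (so an affine map is a pair *(L, b)* representing *x* ↦ *Lx + b*). Even in one dimension, conjugation already distinguishes the two subgroups:

```python
# Model an affine map of the real line as a pair (L, b): x -> L*x + b.

def compose(f, g):
    """The composite f o g of affine maps f = (L1, b1), g = (L2, b2)."""
    L1, b1 = f
    L2, b2 = g
    # f(g(x)) = L1*(L2*x + b2) + b1 = (L1*L2)*x + (L1*b2 + b1)
    return (L1 * L2, L1 * b2 + b1)

def inverse(f):
    """The inverse affine map: x -> (x - b)/L."""
    L, b = f
    return (1 / L, -b / L)

t = (1, 1)  # a translation: x -> x + 1
a = (2, 0)  # an automorphism: x -> 2x

# Conjugating a translation by an automorphism gives another translation
# (consistent with the translation group being normal):
print(compose(a, compose(t, inverse(a))))  # L-part is 1: a translation

# Conjugating an automorphism by a translation does NOT give an automorphism
# (the b-part is nonzero), so the general linear group is not normal:
print(compose(t, compose(a, inverse(t))))
```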

Is the affine group the internal direct product of the translation group and the automorphism group? No, but it comes close. In the previous post, we saw that if *G* is the internal direct product of *H* and *K*, then not only can every element of *G* be expressed (uniquely) in the form *g = hk*, but also the elements of *H* commute with the elements of *K*. This is the sense in which the two groups *H* and *K* *do not interact* with each other inside *G*. In the case of the affine group, there is interaction: the translation group of a vector space does not commute with its general linear group. This breakdown is evidenced by the fact that one of the two groups fails to be normal.

Recently, I have been reading some algebra. This has been immensely enjoyable; I had forgotten how much I used to like algebra. The material here is by no means advanced, but relies on some basic definitions from group theory.

Suppose I have two groups *H* and *K*. Can we combine them together? I learnt some years ago now that there is a straightforward way of forming a ‘sum’ of these two objects: one takes the cartesian product *H* × *K* and bestows it with a group operation in the most basic and obvious way possible: *(h,k)(h’,k’) = (hh’,kk’)*. The group that results is known as the *direct product* of *H* and *K*.
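Here is a minimal Python sketch of the componentwise operation, taking the integers modulo 2 and modulo 3 under addition as illustrative choices of *H* and *K* (these choices are mine, not the post’s):

```python
def op_mod(x, y, m):
    """Group operation in the integers modulo m, under addition."""
    return (x + y) % m

def direct_product_op(p, q):
    """The direct product operation: (h, k)(h', k') = (hh', kk')."""
    (h, k), (h2, k2) = p, q
    return (op_mod(h, h2, 2), op_mod(k, k2, 3))

# Combine componentwise: each coordinate minds its own business.
print(direct_product_op((1, 2), (1, 2)))  # (0, 1)
print(direct_product_op((0, 0), (1, 2)))  # (0, 0) acts as the identity
```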

The same can be done with vector spaces: given two vector spaces *V* and *W*, I can form the direct sum of them, which is thought of as the vector space consisting of ordered pairs of the form *(v,w)* with *v* in *V* and *w* in *W*. It is easy to guess what the rules for addition and scalar multiplication must be. However, soon after learning these two definitions, I began to realise that at least a small amount of wool was obscuring my eyes: suppose I have a vector space *X* and two subspaces *Y* and *Z* such that every vector *x* in *X* can be written uniquely in the form *x = y + z*, for *y* in *Y* and *z* in *Z*. I would then be invited to observe how this means that *X* was the direct sum of *Y* and *Z*. This didn’t sit well with me. It clearly wasn’t quite the same thing: there was no cartesian product; there was no way that *x = y + z* was the same as *x = (y,z)*.
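The tension can be made concrete in a toy example of my own (not from the post): take *X* = R², with *Y* the x-axis and *Z* the y-axis.

```python
# Toy internal direct sum: X = R^2, Y the x-axis, Z the y-axis.

def decompose(x):
    """Split x in X uniquely as a pair (y, z) with y in Y and z in Z."""
    y = (x[0], 0.0)
    z = (0.0, x[1])
    return y, z

y, z = decompose((3.0, 5.0))

# y + z recovers x, and the decomposition is unique...
assert (y[0] + z[0], y[1] + z[1]) == (3.0, 5.0)

# ...yet x = y + z is a sum of vectors in X, while (y, z) is an
# ordered pair in the cartesian product: related, but not literally equal.
print(y, z)
```

The unique-decomposition map *x* ↦ *(y, z)* is exactly what identifies the internal picture with the cartesian-product one, which is presumably where the wool starts to lift.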