# Renormalisation and Infinities

Renormalisation is one of those areas of quantum field theory people are generally bemused by. I won’t pretend I understand it in any great detail, but one of the things I think I can help people with is the whole “How does $\infty - \infty = \text{something finite}$ make sense?” issue. For those with the book, this is a stripped-down overview of part of Section 7.5 of Peskin & Schroeder, covering dimensional regularisation. Wilsonian and other methods are sufficiently unfamiliar to me that I’m not going to touch them.

**Feynman Diagrams**

Let’s start with a quick run through of why people get to the above ‘calculation’. Quantum field theories start with a Lagrangian $\mathcal{L}$, an expression which formally summarises the kinetic and potential energies of all the particle fields in the system, and from which the equations of motion of the fields can be computed. The terms in the Lagrangian tell you which fields couple to one another, what the charges of those couplings are, which fields are dynamic, which fields are massless, what the dimensions of the couplings are, what symmetries the theory has, all sorts of things.

A description of how particles interact is obtained by using the Feynman rules, which tell you how to convert a Feynman diagram into an integral. Feynman diagrams are brilliant things: a simple, intuitive way to draw how particles come in, which ones interact with which, and which particles come out at the end. Feynman’s insight was that you can take the Lagrangian and the diagram and pretty much immediately write down the corresponding huge mathematical expression which describes the contribution that diagram makes to your observable properties.

Unfortunately the huge mathematical expression is huge and typically extremely unpleasant. When a diagram includes a loop, ie a path which ends where it started, the expression includes an integral. This is because the relevant quantities in the calculations are the momenta of the various particles in the diagram. By momentum conservation, if there are no loops you know the momenta of all the bits of the diagram. However, a loop means you can put any amount of momentum around the loop and still satisfy momentum conservation. As a result you have to integrate over all possible momenta!

Unfortunately (again!) this can be a problem. Sometimes the integral doesn’t converge and gives an infinite answer. This can happen either when the momentum around the loop goes to zero or when it goes to infinity, much like the following examples,

$$\int_0^1 \frac{dx}{x^2} = \infty, \qquad \int_0^\infty x \, dx = \infty.$$

The first has problems at $x=0$ and the second as $x \to \infty$. Let’s consider a typical case which we can do something with.
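To make the divergences concrete, here’s a throwaway numerical sketch of my own (pure stdlib Python, not from the original post): chop each integral off just short of its problem point, and watch the total refuse to settle down as the cutoff is pushed towards the bad region.

```python
# Toy numerics: truncated versions of two divergent integrals.
def trapezoid(f, a, b, n=100_000):
    """Basic trapezoidal rule for f on [a, b] with n panels."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

# 1/x^2 has its problem at x = 0: shrink the lower cutoff delta towards 0.
ir_values = [trapezoid(lambda x: 1 / x**2, delta, 1.0) for delta in (0.1, 0.01, 0.001)]
# x has its problem at x -> infinity: push the upper cutoff L outwards.
uv_values = [trapezoid(lambda x: x, 0.0, L) for L in (10.0, 100.0, 1000.0)]

print(ir_values)  # roughly 1/delta - 1: explodes as delta -> 0
print(uv_values)  # exactly L^2 / 2: explodes as L -> infinity
```

Each truncated value is perfectly finite; the divergence only shows up in how the values grow as the cutoff approaches the problem point, which is exactly the game played below.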

**Feynman Integrals**

We want to compute the following integral, where we are working in $d$-dimensional space-time,

$$\int \frac{d^d x}{(x^2 + k^2)^n} = \left( \int d\Omega_d \right) \left( \int_0^\infty \frac{x^{d-1}}{(x^2 + k^2)^n} \, dx \right).$$

The right-hand side is split into two parts. The second part is difficult to compute, while the first part is just the surface area of a unit $d$-dimensional sphere. That bit is easy to write down in terms of the Gamma function,

$$\int d\Omega_d = \frac{2\pi^{d/2}}{\Gamma\!\left(\tfrac{d}{2}\right)}.$$
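As a quick sanity check of that formula (my own stdlib Python, not part of the original derivation), it reproduces the familiar low-dimensional answers, and notice that nothing stops you feeding it a non-integer $d$ — which is exactly the freedom exploited below.

```python
from math import gamma, pi

def sphere_area(d):
    """Surface area of the unit sphere in d-dimensional space: 2*pi^(d/2)/Gamma(d/2)."""
    return 2 * pi ** (d / 2) / gamma(d / 2)

print(sphere_area(2))    # 2*pi: circumference of the unit circle
print(sphere_area(3))    # 4*pi: surface area of the ordinary unit sphere
print(sphere_area(3.5))  # perfectly well defined for non-integer d too
```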

We can just look up the value of $\Gamma(\tfrac{d}{2})$ for any $d$ we want. That leaves the second integral. In our universe (setting aside string theory) we have $d=4$. Unfortunately when $d=4$ this integral is infinite. There are a number of problems with it. Suppose $k=0$, so that the integrand is $\frac{x^{d-1}}{x^{4}}$ (taking $n=2$ for concreteness). Counting the one from $dx$, there are $d$ factors of $x$ up top and 4 on the bottom. If $d<4$ this blows up at $x=0$. If $d>4$ it blows up as $x \to \infty$. At $d=4$ the integrand goes like $\tfrac{1}{x}$, which doesn’t change much with $x$, but then you might be adding a slowly decaying value over an infinite region, so it’s a grey area (in fact it diverges logarithmically at both ends). If $k$ isn’t zero the issue with $d>4$ still exists; all the $k$ does is remove the issue at $x=0$.
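To see those convergence claims in action, here’s a little numerical experiment of my own (stdlib Python; I set $n=2$ and $k=1$ so only the large-$x$ behaviour is in play): truncate the radial integral at a cutoff and watch whether it settles down as the cutoff grows.

```python
def radial_integral(d, cutoff, steps=100_000):
    """Trapezoid estimate of the radial integral x^(d-1)/(x^2+1)^2 on [0, cutoff]
    (the n = 2, k = 1 case of the integral in the text)."""
    h = cutoff / steps
    f = lambda x: x ** (d - 1) / (x * x + 1) ** 2
    return h * (0.5 * f(0.0) + sum(f(i * h) for i in range(1, steps)) + 0.5 * f(cutoff))

results = {d: [radial_integral(d, c) for c in (10.0, 100.0, 1000.0)] for d in (3.5, 4.0, 4.5)}
for d, vals in results.items():
    print(d, vals)
# d = 3.5: successive values creep closer together -- convergent (d < 2n = 4)
# d = 4.0: each extra decade of cutoff adds about ln(10) -- logarithmic divergence
# d = 4.5: the growth accelerates -- power-law divergence
```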

**Non-Integer Dimensions**

Since $d=4$ is a problem, leading to the infinities people don’t like in quantum field theory, perhaps we should stay away from it. Fortunately, if we leave $d$ unspecified then we can actually do the integral. I won’t go into the details, but combining it with the sphere area we get

$$\int \frac{d^d x}{(x^2 + k^2)^n} = \frac{\pi^{d/2} \, \Gamma\!\left(n - \tfrac{d}{2}\right)}{\Gamma(n)} \, (k^2)^{\frac{d}{2} - n}.$$

Now we can see explicitly the small $x$ (otherwise known as infra-red) and large $x$ (ultra-violet) divergences. The term $\pi^{d/2}$ is not a problem, and neither is $\frac{1}{\Gamma(n)}$ because the Gamma function is never zero. That leaves $\Gamma(n - \tfrac{d}{2})$ and $(k^2)^{\frac{d}{2}-n}$. For simplicity we can relabel $n-\tfrac{d}{2} = m$ so we are considering $\Gamma(m)$ and $(k^2)^{-m}$. Obviously if $m>0$ then $k \to 0$ is a problem, as previously commented. But what about $\Gamma(m)$? Let’s remember its definition,

$$\Gamma(m) = \int_0^\infty t^{m-1} e^{-t} \, dt.$$

The Gamma function is an extremely interesting function which appears all over the place in physics and pure mathematics. It appears in the expressions for n-sphere volumes (as is the case here), it appears in the original paper on string theory and it plays a role in the Riemann Zeta function’s properties. For $m$ a positive integer, 1, 2, 3, … we have $\Gamma(m) = (m-1)!$. It’s from this you can obtain $0! = 1$. However, if $m$ is zero or a negative integer, 0, −1, −2, … then it is infinite. This can be obtained from its integral form or with a less-than-rigorous use of the factorial interpretation,

$$n! = n \times (n-1)! \qquad \Longrightarrow \qquad (n-1)! = \frac{n!}{n}.$$

If $n=1$ we obtain $0! = \frac{1!}{1} = 1$, but if $n=0$ we get $(-1)! = \frac{0!}{0} = \infty$, and then $(-2)! = \frac{(-1)!}{-1} = \infty$. That’s a little arm-wavey but it is indeed what you get if you compute the Gamma function for non-positive integers, including 0. Oddly enough it is finite for, say, $-\tfrac{1}{2}$, even though it’s infinite for 0 and −1. One of those weird things about mathematics. Anyway… that means if $m = n - \tfrac{d}{2}$ is a non-positive integer the above expression is infinite, ie $\Gamma(n-\tfrac{d}{2}) = \infty$. If $d=4$ then $n$ needs to be larger than 2, else the denominator doesn’t grow fast enough and the integrand doesn’t decay properly. Unfortunately there’s no way around this: the integral shows up and we want to make sense of it. So to do that we separate the infinite from the finite!
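You can poke at this behaviour directly with Python’s standard library (my own illustration): `math.gamma` is perfectly happy at half-integers, blows up as you graze a non-positive integer, and raises an error if you sit exactly on one.

```python
from math import gamma, sqrt

half = gamma(0.5)               # sqrt(pi), finite
minus_half = gamma(-0.5)        # -2*sqrt(pi): finite despite sitting between two poles
near_pole = gamma(-1.0 + 1e-6)  # enormous: we are grazing the pole at -1
print(half, minus_half, near_pole)

# Exactly on a non-positive integer the function has a genuine pole,
# and math.gamma signals it by raising ValueError.
poles = []
for z in (0.0, -1.0, -2.0):
    try:
        gamma(z)
    except ValueError:
        poles.append(z)
print("poles at:", poles)
```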

**Laurent Expansions**

So how precisely can you do that? After all, can’t you just do nonsense like $\infty - \infty = 0$? Well, yes, but what if everything is actually still finite, just perhaps really big? To illustrate this we go back to the Gamma function: let’s set $n=2$ and write $d = 4 - \varepsilon$, so that $\Gamma(n - \tfrac{d}{2}) = \Gamma(\tfrac{\varepsilon}{2})$. Now we have an expression involving $\varepsilon$. As we’ve just seen, as $\varepsilon \to 0$ this explodes. This implies there’s some sort of $\tfrac{1}{\varepsilon}$ thing going on. In fact this can be made rigorous using a generalisation of a Taylor expansion, known as a Laurent expansion. Rather than summing up polynomial terms $\varepsilon^j$ where $j$ is non-negative, we now allow ALL of the integers!

There are truckloads of mathematics devoted to things of this form. In fact, the Gamma function’s properties in the complex plane and the nature of its singularities are perhaps the most examined of all ‘standard’ functions, not least because it relates to the other big-name function, the Riemann Zeta function, via its functional reflection formula. To cut a long and elaborate story short(er), it turns out that the Gamma function has an order 1 pole at all non-positive integers. This means the Laurent expansion doesn’t go ‘worse’ than $\tfrac{1}{\varepsilon}$,

$$\Gamma\!\left(\frac{\varepsilon}{2}\right) = \frac{2}{\varepsilon} - \gamma + O(\varepsilon),$$ where $\gamma \approx 0.5772$ is the Euler–Mascheroni constant.
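This expansion is easy to check numerically (my own stdlib sketch): $\Gamma(\tfrac{\varepsilon}{2})$ itself explodes as $\varepsilon \to 0$, but once you subtract the $\tfrac{2}{\varepsilon}$ pole the remainder calmly settles down to the constant term.

```python
from math import gamma

EULER_GAMMA = 0.5772156649015329  # the Euler-Mascheroni constant

# Gamma(eps/2) blows up as eps -> 0, but Gamma(eps/2) - 2/eps should not:
# it should tend to the constant term of the Laurent expansion, -gamma.
subtracted = {eps: gamma(eps / 2) - 2 / eps for eps in (0.1, 0.01, 0.001)}
for eps, value in subtracted.items():
    print(eps, value)
print("-gamma =", -EULER_GAMMA)
```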

**Counter Terms**

So what does the above expansion tell us? The first term is ‘bad’: it blows up in the limit $\varepsilon \to 0$. The second term doesn’t change as we change $\varepsilon$, while all the other terms vanish in the limit. So we have bad, invariant and irrelevant. Thus if we could remove the bad we’d have something meaningful in the limit. Unlike working directly at $\varepsilon = 0$, where the bad swamps everything and we can’t compute anything, in this slightly perturbed setup we have a clear distinction. So the question is whether we can modify our construction in such a way as to remove the $\tfrac{2}{\varepsilon}$ bit but leave the finite bit unchanged. This is done by adding something known as a counter term to the Lagrangian, $\mathcal{L} \to \mathcal{L} + \delta\mathcal{L}$. Its properties are designed to remove the $\tfrac{1}{\varepsilon}$ terms without altering anything else.

You don’t want to do this for *every* calculation; a theory is renormalisable if you only need to add finitely many counter terms. In any good model you need to get out more than you put in, and if you put in infinitely many counter terms you’re basically not getting any predictions. So hopefully once you’ve computed a few counter terms from the simpler processes you are all set and you don’t have to add any more. There are ways of estimating, and sometimes proving, whether or not a Lagrangian leads to a renormalisable theory. It’s generally easier to prove something isn’t renormalisable than to prove it is. As just outlined with the Gamma function, the dimensionality of space can play a role. 2-dimensional quantum gravity is much easier than general quantum gravity, for example; 2d has lots of quirky properties which make it great to work in. Just ask your friendly neighbourhood string theorist (or take my word for it, as I used to be one).
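Here’s a toy version of that subtraction, assembled by me from the formulas above (with $n=2$, $k=1$; not a real renormalisation calculation): the full $d = 4 - \varepsilon$ answer and the counter term each blow up on their own, but their difference has a perfectly finite $\varepsilon \to 0$ limit.

```python
from math import gamma, pi, log

EULER_GAMMA = 0.5772156649015329

def loop_integral(eps):
    """The example integral at d = 4 - eps with n = 2, k = 1:
    pi^(d/2) * Gamma(n - d/2) / Gamma(n) * (k^2)^(d/2 - n)."""
    return pi ** (2 - eps / 2) * gamma(eps / 2)

def counter_term(eps):
    """Built to cancel exactly the 2*pi^2/eps pole and nothing else."""
    return 2 * pi ** 2 / eps

for eps in (0.1, 0.01, 0.001):
    print(eps, loop_integral(eps), counter_term(eps), loop_integral(eps) - counter_term(eps))

# Each piece diverges on its own, but the difference tends to a finite limit.
renormalised = loop_integral(1e-4) - counter_term(1e-4)
expected = -pi ** 2 * (EULER_GAMMA + log(pi))
print(renormalised, expected)
```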

**Cut offs**

The conclusion is that by altering your formulation so you can work ‘close’ to the thing you’re interested in, ie $d=4$, you control the infinities and can separate off the good from the bad. You introduce a cut-off, stopping your calculations short of the problem region but close enough to give you insight. Other methods work on similar principles. For example, the integrals discussed above involve integrating up to infinite momentum. Instead, why not integrate up to some energy scale $\Lambda$, work out everything and then try to remove anything which explodes as $\Lambda \to \infty$? This is a momentum cut-off rather than a length cut-off, but in relativity energy scales go up as length scales go down, so there’s a similar underlying principle in $\Lambda \to \infty$ as in $\varepsilon \to 0$.
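The momentum cut-off version of the subtraction game can be sketched numerically too (my own stdlib toy, again for $d=4$, $n=2$, $k=1$): the truncated integral grows without bound as $\Lambda$ rises, but subtracting the $\ln\Lambda$ piece leaves a remainder that settles to a finite constant.

```python
from math import log

def truncated(cutoff, steps=100_000):
    """Trapezoid rule for the d = 4, n = 2, k = 1 radial integral
    of x^3 / (x^2 + 1)^2 on [0, cutoff]: a hard momentum cut-off."""
    h = cutoff / steps
    f = lambda x: x ** 3 / (x * x + 1) ** 2
    return h * (0.5 * f(0.0) + sum(f(i * h) for i in range(1, steps)) + 0.5 * f(cutoff))

raw = {c: truncated(c) for c in (10.0, 100.0, 1000.0)}
for cutoff, value in raw.items():
    print(cutoff, value, value - log(cutoff))
# The raw value keeps growing with the cut-off, but raw - log(cutoff) settles
# towards a finite constant (-1/2): the same separation of divergent and
# finite pieces as before, just with a different regulator.
```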

Hopefully that helps someone understand WTF renormalisation tries to do with infinities. I’m sure someone who does renormalisation for a living will see plenty of mistakes here, but that’s a mixture of me cutting corners and my own ignorance of the details. As long as there’s enough to get people to see that it’s not just “Infinity minus infinity is whatever the hell I like, quantum field theory works!”, as some people try to claim, then this has served its purpose.