
The search for a quantum field theory

“[Renormalization is] just a stop-gap procedure. There must be some fundamental change in our ideas, probably a change just as fundamental as the passage from Bohr’s orbit theory to quantum mechanics. When you get a number turning out to be infinite which ought to be finite, you should admit that there is something wrong with your equations, and not hope that you can get a good theory just by doctoring up that number.”

— Paul Dirac, Nobel laureate 1933

“The shell game that we play ... is technically called ‘renormalization’. But no matter how clever the word, it is still what I would call a dippy process! Having to resort to such hocus-pocus has prevented us from proving that the theory of quantum electrodynamics is mathematically self-consistent. It’s surprising that the theory still hasn’t been proved self-consistent one way or the other by now; I suspect that renormalization is not mathematically legitimate.”

— Richard Feynman, Nobel laureate 1965

Quantum Field Theory purports to be the most fundamental of sciences in that it concerns the ultimate constituents of matter. The term “quantum field theory” is used interchangeably with “particle physics” and “high energy physics” on the grounds that the experimental support for this theory comes from expensive experiments involving high-energy beams of particles. Although such multi-billion-Euro experiments are needed to push the boundaries, the theories of course claim to be universal, and should apply equally to the familiar and everyday world.

Current practitioners in the field will no doubt bemoan the fact that taxpayers of the world are increasingly less willing to find the money to pay for this esoteric study. Do they really care whether there are Higgs particles or heavier flavours of quarks? Probably not. Why not? Because it really makes no difference to their own lives. Their message is clear: study theology, philosophy or “useless” branches of science if you will, but if the cost is more than a few academic salaries then don’t expect us to pay.

It was not always thus. In the late seventies and early eighties, many of the general public followed the unfolding drama of the fundamental particle world with keen interest, and did not seem to mind about the cost of the experiments. It was, after all, a prestige project, like the Apollo program.

A lot of it had to do with the names: “quarks” with their different “flavours” and “colours”, with qualities like “strangeness” and “charm”. The “gluons” that bound them together. The enormous and mind-blowing scale of the experiments at CERN or Fermilab or SLAC. The enthusiastic (if largely uncomprehended) explanations by academics in the field, with their diagrams and manic gesticulations. At one time, it seemed that every other popular science program on TV was about particle physics. Indeed, I remember a marathon one on BBC2 around 1976, whose climactic moment was the completion of the pattern of SU(3) hyperons by the Omega Minus: a particle predicted by theory and subsequently discovered by experiment. What could be more satisfying? I remember in particular interviews with Abdus Salam and Richard Feynman. Salam did his plug for Indian culture by talking about the domes of the Taj Mahal and how their symmetry made them beautiful. His point was that this principle of symmetry could be applied to physics as well. The other thing I remember was Feynman saying that he was not entirely comfortable with “gauge” theories, but then he was an old timer, and what did he know? (Looking back on it now, that was rich, coming from him, since he won a Nobel prize for the original gauge theory - quantum electrodynamics).

It was hard not to get swept up in this. Oxford’s contribution at the time (1980) was a series of public lectures at Wolfson College, the most memorable of which was given by Murray Gell-Mann, one of the leading lights in the field.

Both Tim Spiller, my tutorial partner, and I wanted to do research in the field, and both of us succeeded. He did a Ph.D. at Durham University and I a D.Phil. at Oxford, following a one-year course at Cambridge to study the relevant mathematics. Tim and I were the bane of our tutors as undergraduates because of the way we would never accept “hand-waving” (unrigorous) explanations. I like to think that the good side of this fussiness was that the theses we eventually produced (in totally different branches of field theory) were of higher quality than average.

The thing that was special about gauge theories, apparently, was that they were “renormalizable”. Just what gauge theories and renormalization were I did not discover until I went to Cambridge. Renormalization failed the hand-waving test dismally.

This is how it works. In the way that quantum field theory is done - even to this day - you get infinite answers for most physical quantities. Are we really saying that particles can be produced with infinite probabilities in scattering experiments? (Probabilities - by definition - cannot be greater than one.) Apparently not. We just apply some mathematical butchery to the integrals until we get the answers we want. As long as this butchery is systematic and consistent, whatever that means, we can calculate regardless, and what do you know, we get fantastic agreement between theory and experiment for important measurable quantities (the anomalous magnetic moment of leptons and the Lamb shift in the hydrogen atom), as well as all the simpler scattering amplitudes.
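To give a flavour of what “infinite answers” means in practice (a schematic, textbook-style illustration rather than a calculation taken from the text above): a typical second-order correction involves an integral over an unobserved internal momentum k that does not converge,

\[
\int \frac{d^4 k}{(2\pi)^4}\,\frac{1}{(k^2-m^2)\left[(p-k)^2-m^2\right]} \;\sim\; \ln\Lambda \;\to\; \infty \quad \text{as the cut-off } \Lambda \to \infty .
\]

Renormalization then subtracts another expression that diverges in exactly the same way, absorbing the difference into a redefined mass or coupling, so that what is left over is finite.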

“You may have eleven significant figures of agreement, but you cheated to get it, and so it’s not a valid theory,” I say. “It’s just rubbish. I did not spend years learning the rules of mathematics just to abandon them the first time they turn out to be inconvenient.”1

Other than it being the “cool” research area at the time, there was another reason for going into particle physics theory: to better understand quantum mechanics. In case you do not know, quantum mechanics opened a window into understanding the microscopic world like no other theory in history. Although arguments continue about what the wave function in quantum mechanics actually signifies, no-one really disputes its ability to get a mathematical handle on the world at the atomic level. Atomic physics and solid-state physics, as currently practised, are, for example, just two branches of applied quantum mechanics. Amongst other things, the understanding of band gaps in semiconductors, via quantum mechanics, led to the development of microchips and thence the cheap, powerful computers we have today. So it is not an exaggeration to say that quantum mechanics has revolutionized our lives.

As taught in our undergraduate physics course, though, there were holes in it, and this was not just the interpretation of the wave function: in particular, the way it dealt with radiation was extremely hand-waving. This was a big problem, as the study of radiation emitted by atoms, a subject known as spectroscopy, was the single most important thing in getting quantum mechanics started in the first place. The word “quantum” was originally just the term used for the packets of radiation that were needed to explain the results of black-body radiation and the photoelectric effect. The energy of the packets, which acquired the name “photons”, was a new, fundamental constant (Planck’s constant) multiplied by the frequency of the radiation, and this appearance in discrete packets was a new requirement on electromagnetic radiation over and above the need to conform to the Maxwell equations developed in the 19th century. Spectral lines, it was discovered, were radiation emitted when an atom went from a higher to a lower energy state, the photon of radiation (light) having energy equal to this energy difference. One could thus work backwards from the spectral lines to the energy levels of the atoms, and the first triumph of the new quantum mechanics was to be able to calculate these energy levels from a postulated “wave equation” for the atom. This “wave equation” was exactly that: the atom, for the first time in its history, was being treated as a wave, much in the way that radiation was treated as a wave using Maxwell’s equations. So the wave (radiation) had become also a particle (a photon), and the particles (electrons and protons) had also become waves. This is called wave-particle duality, and we are still trying to wrap our heads around it.
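In symbols, the two relations described above - the energy of a photon and the origin of a spectral line - are the standard textbook ones, quoted here only for concreteness: a photon of frequency ν carries energy

\[
E_{\text{photon}} = h\nu , \qquad \text{and for a spectral line} \qquad h\nu = E_m - E_n ,
\]

where E_m and E_n are the energies of the upper and lower atomic states. Each observed line therefore fixes one energy difference, and from enough lines the level scheme itself can be reconstructed - which is exactly what the new wave equation had to reproduce.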

The wave equation may have allowed the calculation of atomic energy levels, but one of its limitations was that it did not allow the possibility of particles being created and destroyed. This was most pressing for the case of the photon created in the act of an atom transitioning from a higher to a lower energy state. For our undergraduate course, all we needed to know were fudges (Einstein “A” and “B” coefficients), but, even as an undergraduate, unhappy with this, I was reading about the highbrow version, which is known as quantum field theory, where particles are allowed to appear and disappear. The framework used to understand these changes is known as perturbation theory. Perturbation theory, in fact, is something that applies to all branches of physics - not just quantum mechanics. For it to work, a system needs to be in a state that is not too far away from one that you can calculate, and that you understand well. Then, if the perturbation is small, you can get a good idea of what the perturbed state looks like by calculating the effect of the original system on the changed part with the approximation that the change does not have significant repercussions on the original system. If it does, the whole thing breaks down. For example, Neptune was discovered by the perturbations it caused to Uranus’s orbit. But if the knock-on effect of these on Neptune itself had been too large, the problem would have been much harder to solve, and probably inaccessible to perturbation theory. Perturbation theory is a valuable tool in quantum mechanics, but in quantum field theory it only works if you do not examine it too closely. We get accurate results in first order, where the atom jumps states and the photon flies away to the detector, but in second order and higher, where the photon can react further with its source, horrible things happen. Instead of getting small corrections to the emission rates, as we would hope, we get infinities. This can only mean that perturbation theory is inapplicable. This should not be a surprise, as the perturbation, involving a whole new particle, is not small.
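To make the first-order/second-order distinction concrete, here is the standard quantum-mechanical formula (nothing specific to field theory is assumed): if the Hamiltonian is H = H₀ + V, with V the “small” perturbation, then to first order the amplitude for a jump from state i to state f is

\[
c^{(1)}_f(t) \;=\; -\frac{i}{\hbar}\int_0^t dt'\, \langle f | V | i \rangle \, e^{\,i(E_f-E_i)t'/\hbar} ,
\]

which is finite and gives the familiar emission rates. The trouble described above appears at the next order, where one must sum terms like ⟨f|V|n⟩⟨n|V|i⟩ over all intermediate states n: in quantum field theory that sum becomes a divergent integral.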

This failure was known about as early as 1928, and the founders of quantum mechanics, which is to say Heisenberg, Schrödinger, Pauli and Dirac, never felt that the problem was solved. This did not stop people trying, but the attempts by Feynman, Schwinger and Tomonaga twenty years later were no more successful, and Dyson’s distillation of this later work, which is what we were taught in class, although it accounted better for special relativity, could not even reproduce most of elementary quantum mechanics.

The paradigm where one’s theory is defined by first-order perturbation theory, pretending that higher orders do not exist, is not so far removed from renormalized perturbation theory as currently practised, but as it never made any sense to me I was not going to work on it. However, I still wanted to get my degree, so I just chose research that avoided confronting the issue. I tried to get to grips with some of the things that one has to deal with before getting to renormalization: issues related to non-interacting fields, such as the spin-statistics theorem and Lagrangians for particles of higher spin. There were a few interesting things to explore, mostly in clarifying the connection between work in the area and the underlying principles, which is what I did for my doctoral thesis. This was done by March 1984, which left me with a few months in hand, so I started looking at renormalization again to see if I could make any more sense of it second time round.

I then discovered something very nice. If the field is written as a power series in the coupling constant, then the field equations enable a simple reduction of an interacting field in terms of the free field, and any amplitude can be calculated just by inspection. I wrote this up in a preprint in 1984. The idea was so simple that I found it hard to believe that I was the first to see it. Well, there is nothing new under the sun, and sure enough - as I discovered in late 2005/early 2006 - what I had was an old idea that had just withered on the vine. Stueckelberg2 thought of it first, in 1934, and Källén3 (who seems not to have been aware of Stueckelberg’s work) also thought of it in 1949.
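Schematically - the notation here is mine and purely illustrative, not a quotation from the 1984 preprint - the idea is to write the interacting field as a power series in the coupling e,

\[
\phi(x) \;=\; \phi_0(x) + e\,\phi_1(x) + e^2\,\phi_2(x) + \cdots ,
\]

substitute this into the field equation and equate powers of e. Each φ_n is then determined, order by order, as a product of the free field φ₀, whose properties are completely known, so that any matrix element can be written down by inspection.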

The idea has two consequences that ought to have given the founders of quantum mechanics a lot of grief. First, the local field equations they would expect to be able to use give nonsensical, infinite answers (a feature, incidentally, of every other treatment of quantum field theory). Secondly, properties such as orthonormality of a basis of particle states at constant time no longer apply. For the former, one way round it is not to use rigidly local field equations. The only reason for choosing one field equation over another is to get agreement with experiment (although normally in such a way as to incorporate experimentally-founded beliefs in invariance principles such as special relativity). If local field equations give infinite answers then obviously they are not agreeing with experiment. However, it is possible to make an adjustment - known as “normal-ordering” - which eliminates the problem, at least in the Källén-Stueckelberg approach. The latter problem is Haag’s theorem: in the presence of interactions, it is always assumed that the Hamiltonian can be split into a “free” part and an “interaction”. The “free” part is used to define an orthonormal basis of states to which the interaction applies. But Haag’s theorem says that this is not possible, or to put it another way, it is not possible to construct a Hamiltonian operator that treats an interacting field like a free one. Haag’s theorem forbids us from applying the perturbation theory we learned in quantum mechanics to quantum field theory, a circumstance that very few are prepared to consider. Even now, the textbooks on quantum field theory gleefully violate Haag’s theorem, presumably because their authors dare not contemplate the consequences of accepting it.
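For readers who have not met the term, normal-ordering simply means rearranging each product of field operators so that annihilation operators stand to the right of creation operators. In standard notation, for a single mode,

\[
{:}\,a\,a^{\dagger}\,{:} \;=\; a^{\dagger}a , \qquad \text{so that, for example,} \qquad \langle 0|\,{:}\phi(x)^2{:}\,|0\rangle = 0 ,
\]

whereas the unordered product φ(x)² has a divergent vacuum expectation value. This is the adjustment referred to above.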

However, in my view, acceptance of Haag’s theorem is a very good place to start. The next paper I wrote, in 1986, follows this up. It takes my 1984 paper and adds two things: first, a direct solving of the equal-time commutators, and second, a physical interpretation wherein the interaction picture is rediscovered as an approximation.

With regard to the first thing, I doubt if this has been done before in the way I have done it4, but the conclusion is something that some may claim is obvious: namely that local field equations are a necessary result of fields commuting for spacelike intervals. Some call this causality, arguing that if fields did not behave in this way, then the order in which things happen would depend on one’s (relativistic) frame of reference. It is certainly not too difficult to see the corollary: namely that if we start with local field equations, then the equal-time commutators are not inconsistent, whereas non-local field equations could well be. This seems fine, and the spin-statistics theorem is a useful consequence of the principle. But in fact this was not the answer I really wanted as local field equations seemed to lead to infinite amplitudes. It could be that local field equations with the terms put into normal order - which avoid these infinities - also solve the commutators, but if they do then there is probably a better argument to be found than the one I give in this paper. Substituting Haag expansions (arbitrary sums of normal-ordered tensor products of free fields) directly into the commutators is an obvious thing to try here. I did make fumbling attempts around October 2001, without making much progress, but I think that with a little more ingenuity, there could be a solution out there. My latest thinking (2012 and later), though, is that local field equations are not the problem. The source of intractable infinities in quantum field theory is likely just the impossibility of expanding around e = 0 (I will post more when I have had more time to investigate).
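For orientation, the relations being solved are of the standard canonical form (quoted here from the textbooks rather than from my paper):

\[
[\,\phi(\mathbf{x},t),\,\dot\phi(\mathbf{y},t)\,] = i\,\delta^{3}(\mathbf{x}-\mathbf{y}) , \qquad [\,\phi(x),\,\phi(y)\,] = 0 \;\; \text{for spacelike separated } x \text{ and } y ,
\]

the second condition being the “causality” requirement referred to above: fields at points that cannot be connected by a light signal must commute.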

With regard to the second thing, the matrix elements consist of transients plus contributions which survive for large time displacements. The latter turn out to be exactly those which would be obtained by Feynman graph analysis. I now know that - to some extent - I was just revisiting ground already explored by Källén and Stueckelberg5.

My third paper applies all of this to the specific case of quantum electrodynamics, replicating all scattering amplitudes up to tree level. As for whether it can reproduce the “successes” of traditional QED, namely the Lamb shift and the anomalous magnetic moment of leptons, I do not know. I would want to be confident that I had an understanding of bound states before I attempted a Lamb shift calculation, and would want to be sure I understood the classical limit of the photon field before I tried to calculate the anomalous magnetic moment of leptons. Finding time to do it is the problem.

Here is the correspondence I had with the journals. It seems that my greatest adversaries were the so-called “axiomatic field theorists”, who, not content just to disagree, appeared to be determined to ensure that nothing I wrote ever got into print. It is interesting, by the way, that such a group should exist at all. After all, one does not have “axiomatic” atomic physicists, “axiomatic” chemists or “axiomatic” motorists. Since “axiomatic” simply means obeying the rules, the usage here ought to be a pleonasm. Either one is a field theorist, in which case - unless one can formulate better ones - one is bound by the rules, or one is not a field theorist at all. If you were stopped for speeding and told the policeman that you were not an axiomatic motorist and therefore not subject to traffic regulations, it might not help your case. My interaction with the axiomatic field theorists, though, only reminded me of why they are an irrelevance, and nowadays it seems that the best they can come up with is a scheme where extra degrees of freedom are introduced midstream and a contrived “limit” of the newly-introduced parameters taken (the Epstein-Glaser method). Outside their group, this would be called renormalization, and if the self-appointed guardians of mathematical propriety are sanctioning renormalization, to whom is one supposed to turn?

My proposal, in a nutshell, is this: write the interacting fields as sums of tensor products of free fields. Use the coefficients in the expansion that follow from the usual local equations of motion (maybe the terms must appear in normal order, but I am no longer sure about that). Then use the known properties of free fields to evaluate the matrix elements directly. Comparison of these expressions with time-dependent perturbation theory (from ordinary quantum mechanics) shows that they consist of transients plus the tree-level Feynman graph amplitudes. As no re-definition of the mass, coupling or field operators is required (infinite or otherwise), there is no renormalization.
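Written out - again in my own schematic notation rather than a formula lifted from the papers - the expansion is of the Haag type,

\[
\phi(x) \;=\; \sum_{n} \int d^4y_1 \cdots d^4y_n \; f_n(x;y_1,\ldots,y_n)\; {:}\,\phi_0(y_1)\cdots\phi_0(y_n)\,{:} \, ,
\]

with the coefficient functions f_n (themselves power series in the coupling e) fixed by imposing the equations of motion order by order. Matrix elements between free-particle states then follow from the known properties of the free field φ₀, with no infinite redefinition of anything along the way.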

Unfortunately for me, though, most practitioners in the field appear not to be bothered about the inconsistencies in quantum field theory, and regard my solitary campaign against infinite subtractions at best as a humdrum tidying-up exercise and at worst as a direct and personal threat to their livelihood. I admit to being taken aback by some of the reactions I have had. In the vast majority of cases, the issue is not even up for discussion.

The explanation for this opposition is perhaps to be found on the physics Nobel prize web site. The six prizes awarded for quantum field theory are all for work that is heavily dependent on renormalization. These are as follows:

By these awards, the Swedish Academy is knowingly endorsing shoddy science, reflecting the fact that shoddy science has become mainstream. Particle physicists now accept renormalization more or less without demur. They think that they have solved the problem, but the reality is just that they have given up trying. Some even seem to be proud of their efforts, lauding the virtues of makeshift “effective” field theories that can be inserted into the infinitely-wide gap defined by infinity minus infinity. Despite this, almost all concede that things could be better; it is just that they consider that trying to improve the situation is ridiculously high-minded and idealistic. None that I have talked to expect to see a solution in their lifetimes. They think it possible that we live in a universe comprised of tiny vibrating strings in 11 dimensions - despite the total lack of experimental support for this notion - but they absolutely do not believe that calculations (in four dimensions) that they studied when they were graduate students can be done without mathematical sleight of hand. Neither do any appear to be interested in investigating the possibility. As with a lot of things, Feynman had a nice way of putting it: “Renormalization is like having a toilet in your house. Everyone knows it’s there, but you don’t talk about it.” But personally, I do not see how fundamental physics can move on until the problem is solved. Before it can be solved, it needs to be addressed, and before it can be addressed, it needs to be acknowledged. I struggle even to get the problem acknowledged. We now have a situation where if you asked a theoretical particle physicist, “Would you like to be able to calculate the Lamb shift without any renormalization?” you would get the answer, “Of course!” But if you then asked, “Are you prepared to sponsor a project that attempts this?” the answer would always be “No”. Ultimately, in taking this attitude, they only harm themselves. Botching may be an unavoidable part of many practical endeavours, where deadlines have to be met and customers have to be satisfied, but on the research frontier there can be no justification for it.

For those of you who are swayed by arguments from authority (I like to think that I am not one of them, by the way), one could almost make the case against renormalization on these grounds. Backing the view of one Nobel prizewinner (Dirac) against the fourteen listed above could be justified by saying that excluding Feynman - who in any case had plenty of doubts about renormalization - Dirac made more impact on physics than the others put together7.

Physicists are first and foremost scientists. They are not primarily mathematicians and they are not religious zealots (at least not in regard to work). The extent to which they are permitted to believe their explanations is the extent to which they are verified in experiments. They therefore are entitled to strong faith in quantum mechanics and special relativity, both of which seem to pass all of the multifarious experimental tests thrown at them. They are also entitled to believe in vector particles mediating the weak interaction. They are entitled to believe in quarks. The following however are less certain: general relativity as the theory of gravity and quantum chromodynamics as the theory of the “strong” nuclear force.

To take the first, the “proofs” of General Relativity are light bending, the precession of the perihelion of Mercury and gravitational red-shift. All of these are tiny effects, and whilst the results do not contradict G.R., they do not mean that it is the only possible explanation either. General Relativity is like quantum mechanics in that it is not so much a theory as a whole way of thinking, and it can be very hard to fit something as grandiose as this with other frameworks, quantum mechanics in particular. If there is a conflict between quantum mechanics and G.R. then the scientist (if not the mathematician) is forced to choose quantum mechanics. With gravity, the experimental data - or at least data that cannot be explained by Newtonian gravity - are incredibly sparse compared to the results that support quantum mechanics. What we would like are experiments that test gravity at the microscopic level, in the same way that quantum optics tests electromagnetism at the quantum level, but will we ever get these? The inability to get an experimental handle on quantum gravity makes me wonder whether it even exists at all in its own right. Might gravity be just some kind of residual of other forces, like the van der Waals attraction in chemistry? Assuming that this notion is wrong, what about strong gravitational fields? The fact is that we know nothing about the world of strong gravitational fields, a fact which has not stopped astrophysicists from giving names to objects that are supposed to have them, such as neutron stars and black holes. Unfortunately, an observatory is not a laboratory. It is very hard to understand or even demonstrate the existence of such objects unless you have a degree of control over them.

The other area of uncertainty is, to my mind, the “strong” nuclear force. The quark model works well as a classification tool. It also explains deep inelastic lepton-hadron scattering. The notion of quark “colour” further provides a possible explanation, inter alia, of the tendency for quarks to bunch together in groups of three, or in quark-antiquark pairs. It is clear that the force has to be strong to overcome electrostatic effects. Beyond that, it is less of an exact science. Quantum chromodynamics, the gauge theory of quark colour, is the candidate theory of the binding force, but we are limited by the fact that bound states cannot be done satisfactorily with quantum field theory. The analogy of calculating atomic energy levels with quantum electrodynamics would be to calculate hadron masses with quantum chromodynamics, but the only technique available for doing this - lattice gauge theory - despite decades of work by many talented people and truly phenomenal amounts of computer power being thrown at the problem, seems not to be there yet, and even if it were, many, including myself, would be asking whether we have gained much insight through cracking this particular nut with such a heavy hammer.

A talk (slides in PDF) on my work given at the University of Dortmund (Germany) on 12 May 2011. I got into e-mail correspondence with the Professor, Heinrich Päs, who was writing a popular science book that mentions my great-uncle Willem. In the course of our correspondence I managed to invite myself to give a talk on my quantum field theory work - the first in over 20 years. The audience was a young crowd of mostly phenomenologists. I found, to my slight surprise, that I was pushing at an open door here as the audience seemed to agree with my criticisms of renormalization. Whether anything will come of it, I know not, but if I give the talk again then there will be more slides, as I had finished the main talk in about half an hour.

The only academic I am aware of to have studied my arguments in the last 20 years was former Harvard assistant professor Lubos Motl. He decided, on (I guess) about a minute's examination, that what I was doing was Feynman-Dyson perturbation theory in disguise. It is actually far simpler than that. Quantum field theory is, admittedly, a difficult subject, but more people ought to know at least the bare essentials. I therefore give them here. Knowledge of QFT to first-year graduate level is assumed.

On 26 February 2012 I was interviewed by Philip Mereton on WebTalk radio. The link is here. Saying that I was “driven out of the physics community for voicing objections to the fudging problem” is maybe dramatising a little, but it is certainly true to say that I did not leave fundamental physics voluntarily.

Luckily, I have not missed much in the last 35 years or so. Without anything much of relevance from the theorists since the 1970s, the high-energy physics experimental frontier at CERN is merely testing theories from that era. Theorists, when interviewed by the media now, like to talk of man-made black holes, higher dimensions, superstrings, parallel universes and other untestabilities. Anything to distract attention from the fact that, mostly, they are no longer relevant, and that much of what they do no longer even classifies as science. When, as will happen sooner or later, the funding agencies realise that these people are no longer a good use of scarce resources, and funding is withdrawn, they will not reasonably be able to object.

I would argue that acceptance of renormalization was the crucial compromising of scientific standards, and the one that eventually opened this door to pseudoscience.

Two dreams I had last night (5 June 2024), which I would like to record before I forget all the details:

(i) I am in a kind of Science Fiction world where we have elevators that one can see out of, and that can travel in three dimensions at will, but without any obvious machinery. The people I am with all seem to be mathematicians and theoretical physicists, and to that extent it reminds me of my young days. We are in this huge space, outside of which there is an enormous constructor fleet of space ships. My elevator stops and in through these angled flaps at the side walks Roger Penrose, who tells me that we are going to have to destroy this world from the inside. It sounds scary. Then I wake up.

(ii) There is a giant sewage pipe that services the whole city that runs through my old college (Trinity, Oxford), except that it has burst and there is raw sewage everywhere. There is also damage to a lot of the furniture. A huge team has appeared to clean up the mess, and I realise that although I personally did not have a big part in causing the disaster, I do have an important part in repairing the damage. As we get to work, a secondary annoyance is immediately apparent: none of the toilets are going to be functioning!

Interpret these as you will, but note that while no. (i) was obviously related to physics, the connection in no. (ii) was not so clear. It was totally disgusting, though.


Footnotes:

1 If one accepts the nonsense that is renormalization, it is not a big step to accepting all sorts of other nonsense. See here for an explanation of why what is currently called ‘Quantum Electrodynamics’, far from being “the most accurate theory in science”, is actually not a theory at all: Something is wrong in the state of QED by Oliver Consa (2021)

2 Relativistisch invariante Störungstheorie des Diracschen Elektrons, by E.C.G. Stueckelberg, Annalen der Physik, vol. 21 (1934). My German not being everything it should be, I have relied on a pre-digested version of this paper given here: The Road to Stueckelberg’s Covariant Perturbation Theory as Illustrated by Successive Treatments of Compton Scattering, by J. Lacki, H. Ruegg and V. L. Telegdi (1999). My thanks to Danny Ross Lunsford for drawing attention to the latter.

3 Mass- and charge-renormalizations in quantum electrodynamics without use of the interaction representation, Arkiv för Fysik, bd. 2, #19, p.187 (1950) and Formal integration of the equations of quantum theory in the Heisenberg representation, Arkiv för Fysik, bd. 2, #37, p.37 (1950). Both by Gunnar Källén. These are actually in the 1951 volume in the library I used (the Bodleian in Oxford). The work finds its way into his textbook Quantum Electrodynamics (published by George Allen and Unwin, 1972), pp.79-85, although instead of following the idea through as I did, he immediately gives up, using Feynman-Dyson perturbation theory in the remainder of the book.

4 Actually, this paper: On quantum field theories, Matematisk-fysiske Meddelelser, 29, #12 (1955), by Rudolf Haag, solves spacelike commutators in §5, but for the restricted case of Φ³ theory, and just to first order in the power series expansion. Unlike my analysis, Haag places no restrictions on the first time derivative of the field, and finds a slightly more general solution, where some derivative couplings are allowed.

5 A few comments, though. Stueckelberg uses the power series expansion in the coupling, and the residue of the energy conservation pole to provide the physical interpretation. Something very similar is used in my papers. However, Stueckelberg is not properly second quantised: his photons are modes in a cavity, and his electrons are wave functions rather than field operators. Fermions cannot be created or destroyed and so the only process he can treat is Compton scattering (the scattering of a single photon off a single electron). Interestingly, his subsequent papers seem to indicate that he soon abandoned the covariant approach, instead switching to, and often anticipating, the non-explicitly-covariant S-matrix methods of Dyson, Feynman and Schwinger. Could he have given up because of problems with the Interaction Picture? We may never know. Given that his 1934 methods are so much simpler, more elegant and more powerful, I am still amazed that he would willingly stop working on them. Källén’s papers, sixteen years later, are properly second quantised, but his physical interpretation is less elegant as his best ambition is just to reproduce Dyson’s results, which, strictly, apply only to scattering processes. Källén’s papers would be easier to read if he made more use of four-dimensional momentum space. He works out a graphical representation whose diagrams he claims are just Feynman diagrams, but they are not. They are more like the diagrams in my papers, which have two kinds of line for each particle type. The two types of line are confusingly drawn the same in his paper, even though he then goes on to calculate the amplitudes correctly by treating them differently.

6 Journalists please, please, please stop calling the Higgs the “God” particle! Firstly, it does not give mass to all particles in the universe. It only “gives mass” (if you want to call it that) to the massive vector bosons that mediate the weak interaction - a small subset. Secondly, if something has no rest mass it does not mean that it does not exist! Take the photon. Not having benefited from the “God” particle, it has zero rest mass. Are you saying therefore that it does not exist? So you do not believe in electricity, magnetism or light?

7 From The Strangest Man, a biography of Dirac by Graham Farmelo, we have an indication of Dirac’s strength of feeling about renormalization (p. 409):

[In 1983, Pierre Ramond] invited Dirac to give a talk on his ideas at Gainesville any time he liked, adding that he would be glad to drive him there and back. Dirac responded instantly: “No! I have nothing to talk about. My life has been a failure!”
Ramond would have been less stunned if Dirac had smashed him over the head with a baseball bat. Dirac explained himself without emotion: quantum mechanics, once so promising to him, had ended up unable even to give a proper account of something as simple as an electron interacting with a photon - the calculations ended up with meaningless results, full of infinities. Apparently on autopilot, he continued with the same polemic against renormalisation he had been delivering for some forty years. Ramond was too shocked to listen with any concentration. He waited until Dirac had finished and gone quiet before pointing out that there already existed crude versions of theories that appeared to be free of infinities. But Dirac was not interested: disillusion had crushed his pride and spirit.