October is the month in which the Nobel Prize committee announces its annual winners, usually staggering them out across several days. So this year, let’s go through them in order.
The physiology or medicine prize goes to Victor Ambros and Gary Ruvkun for their discovery of microRNA. As most will know, our genes encode all of the information needed to construct every part of our bodies. Yet how does each part know to make only the specific proteins it needs? Different types of cells must be able to specialize by executing only the genetic instructions relevant to them.
Last year’s Nobel Prize in the same category went to the development of mRNA vaccines, and indeed by the 1960s scientists already knew that gene activity is regulated by controlling which mRNAs are produced. Ambros and Ruvkun, working on the now famous roundworm Caenorhabditis elegans, discovered a short RNA molecule that does not code for any protein but inhibits the activity of another gene. They found that this microRNA turns off a specific gene by binding to a complementary sequence in its mRNA, thus demonstrating an entirely new principle of gene regulation.
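To see what “binding to a complementary sequence” means in practice, here is a toy sketch in Python (the sequences are made up for illustration; they are not the real lin-4 or lin-14 sequences): a microRNA can silence an mRNA wherever the mRNA contains the reverse complement of the microRNA, since the two strands pair up antiparallel.

```python
# Illustrative sketch only: toy RNA sequences, not real gene data.
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def binding_site(mirna: str, mrna: str) -> int:
    """Return the index where the mRNA carries a stretch complementary
    to the microRNA (read antiparallel), or -1 if there is none."""
    # The strands pair antiparallel, so the site we look for is the
    # reverse complement of the microRNA sequence.
    target = "".join(COMPLEMENT[base] for base in reversed(mirna))
    return mrna.find(target)

mirna = "UGAGGUAG"             # hypothetical 8-base microRNA
mrna = "AAGCACUACCUCAGGAUCC"   # hypothetical target mRNA
print(binding_site(mirna, mrna))  # prints 5: a match, so this gene gets silenced
```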
Their announcement initially didn’t make much of an impact, as it was thought that this mechanism was specific to C. elegans. It has since been shown that this form of gene regulation is universal among multicellular organisms, hence the award of this Nobel Prize.
Next, the physics prize recognizes the technological achievement that is foremost in everyone’s minds right now, yet is very much not physics. Instead it goes to a physicist and a computer scientist who developed the artificial neural networks that are the basis of today’s AI. Recreating the neural networks of biological brains as computer simulations was an obvious objective, but early efforts were discouraging. Then in the 1980s, John Hopfield drew on his background in physics and devised a network with a property equivalent to the energy of the spin systems studied in physics. It can be trained to store patterns and later retrieve them.
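To make that concrete, here is a minimal Hopfield network sketched in Python (my own illustration with a made-up pattern, not code from the original paper): a pattern is stored with a simple Hebbian rule, and flipping one node at a time pulls a corrupted input back to the stored memory.

```python
import numpy as np

def train(patterns):
    """Store +1/-1 patterns in the weights via the Hebbian rule."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)      # strengthen connections between co-active nodes
    np.fill_diagonal(W, 0)       # no self-connections
    return W / len(patterns)

def energy(W, s):
    """The quantity analogous to a spin system's energy; updates never raise it."""
    return -0.5 * s @ W @ s

def recall(W, s, steps=200, seed=0):
    """Update one randomly chosen node at a time until the pattern settles."""
    s = s.copy()
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        i = rng.integers(len(s))
        s[i] = 1 if W[i] @ s >= 0 else -1
    return s

stored = np.array([[1, -1, 1, -1, 1, -1, 1, -1]])     # one memorized pattern
W = train(stored)
noisy = np.array([1, -1, 1, 1, 1, -1, -1, -1])        # same pattern, two bits flipped
print(recall(W, noisy))                               # recovers the stored pattern
print(energy(W, noisy) > energy(W, recall(W, noisy))) # True: recall lowered the energy
```

Each single-node update can only lower the energy or leave it unchanged, which is exactly the spin-system analogy: stored memories sit at the bottoms of energy valleys, and recall is simply rolling downhill into the nearest one.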
Upon learning of the Hopfield network, Geoffrey Hinton set out to improve it by adding a probabilistic element. He called his version the Boltzmann machine as it makes use of the Boltzmann distribution from statistical mechanics, named after Ludwig Boltzmann. It consists of two types of nodes: visible nodes, into which information is fed, and a hidden layer of other nodes. The values of the nodes are updated one at a time, so the pattern keeps changing, but the statistical properties of the network as a whole remain the same. In this way, the machine can learn from examples and recognize similar traits in things it has not seen before.
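Here is a sketch of that sampling step, under the same caveat that this is my illustration rather than Hinton’s code (training the weights, for example by contrastive divergence, is omitted): each node switches on with a probability given by the Boltzmann distribution over its two states.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_sweep(W, b, s, rng):
    """Update every node once, in turn. With symmetric weights and a zero
    diagonal, each node turns on with its Boltzmann probability."""
    for i in range(len(s)):
        s[i] = 1.0 if rng.random() < sigmoid(W[i] @ s + b[i]) else 0.0

rng = np.random.default_rng(0)
n_visible, n_hidden = 4, 2                  # visible nodes take the input
n = n_visible + n_hidden
W = rng.normal(0, 0.5, (n, n))
W = (W + W.T) / 2                           # connections are symmetric
np.fill_diagonal(W, 0)                      # no self-connections
b = np.zeros(n)                             # per-node biases
s = rng.integers(0, 2, n).astype(float)     # random starting pattern
for _ in range(20):
    gibbs_sweep(W, b, s, rng)               # individual values keep changing...
print(s[:n_visible], s[n_visible:])         # ...but samples follow one fixed distribution
```

Training nudges the weights so that the distribution the machine settles into matches the distribution of the examples it was shown; recognition then amounts to checking whether a new input is likely under that learned distribution.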
Both of these developments are foundational to the field of machine learning and led to the huge neural networks, with billions of parameters arranged across many layers, that power the LLMs we know today. It’s still not physics, but it’s probably the closest category the committee could think of for discoveries that undoubtedly do deserve a prize.
The prize for chemistry also goes to AI, or at least close enough. Proteins are the building blocks of biology: strings of amino acids twisted and folded into three-dimensional structures. It is this specific structure that gives each protein its unique properties, and while the shape is theoretically predictable from the sequence, the astronomical number of ways a given string of amino acids could fold makes it an overwhelmingly difficult problem. That’s where the computers come in.
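To get a sense of the scale, here is the classic back-of-the-envelope argument known as Levinthal’s paradox (the three-shapes-per-residue figure is the traditional illustrative assumption, not a measured value):

```python
# Levinthal-style estimate: even a modest protein has an astronomical
# number of candidate folds if each residue can adopt just a few shapes.
residues = 100                  # a modest-sized protein
shapes_per_residue = 3          # the traditional illustrative assumption
conformations = shapes_per_residue ** residues
print(f"{conformations:.2e}")   # ~5.15e47 folds; brute-force search is hopeless
```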
Demis Hassabis co-founded DeepMind and developed AI models to play board games. The company was later bought by Google and its AI was improved until it could beat the world champion at Go. Their true goal, however, was always to predict protein structures, and their AlphaFold model achieved an accuracy that beat the best competing teams but still fell far short of what was needed. Then DeepMind hired John Jumper, who applied the transformer architecture of neural networks to the problem and obtained results almost as good as X-ray crystallography.
David Baker, too, participated in the same competitions to predict protein structures, writing his own piece of software, Rosetta, to do so. Then he realized that Rosetta could also work in reverse, allowing a user to specify a desired protein structure and obtain suggestions for the amino acid sequence needed to produce it. To test its effectiveness, his team designed an entirely new protein structure, obtained an amino acid sequence from the software, and then synthesized the protein. X-ray crystallography confirmed that its structure matched what they had specified.
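The idea of working in reverse can be sketched as an optimization problem. This is a deliberately toy version under made-up assumptions, not Rosetta’s actual method: its real energy functions and search moves are far more sophisticated, but the accept-if-better loop over candidate sequences conveys the shape of the computation.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard amino acids

def fit_to_target(sequence, target):
    """Stand-in scoring function: real design software would estimate how
    well the sequence folds into the target structure (lower is better).
    Toy stand-in: pretend the target demands one ideal residue per site."""
    return sum(a != b for a, b in zip(sequence, target))

def design(target, steps=5000, seed=0):
    """Hill-climb over sequences: propose a point mutation, keep improvements."""
    rng = random.Random(seed)
    seq = [rng.choice(AMINO_ACIDS) for _ in target]
    best = fit_to_target(seq, target)
    for _ in range(steps):
        i = rng.randrange(len(seq))
        old = seq[i]
        seq[i] = rng.choice(AMINO_ACIDS)   # propose a random point mutation
        score = fit_to_target(seq, target)
        if score <= best:
            best = score                   # accept: fits the target at least as well
        else:
            seq[i] = old                   # reject: revert the mutation
    return "".join(seq)

# Hypothetical "target" encoded as the ideal residue per position, purely for demo.
print(design("MKTAYIAKQR"))   # the search converges to a perfectly scoring sequence
```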
The economics prize is awarded for engaging with the question of why some countries or societies are so much richer than others. Daron Acemoglu, Simon Johnson and James Robinson jointly published the seminal paper The Colonial Origins of Comparative Development, which divided the institutions established by European colonizers into two types: inclusive ones and extractive ones. One key factor was the density of the indigenous population at the time. In more populous places, or in colonies with a high rate of settler mortality due to the Europeans being poorly adapted to local diseases, the colonizers exploited the local supply of labor, creating extractive institutions.
In less populous places, the Europeans themselves moved in to settle and in turn built more equitable, inclusive institutions. The authors call the result a reversal of fortune, as the formerly more populous and prosperous societies fell behind the newly established ones that promoted long-term prosperity. Even after the end of colonization, local elites simply took over the extractive institutions and had no interest in transitioning to a more equitable society. While this was an undeniably influential paper, it is also a contentious one; historians, for example, question the neat division between extractive and inclusive institutions.