The scientific method
Computing the future
Mar 23rd 2006
From The Economist print edition
The practice of science may be undergoing yet another revolution
WHAT makes a scientific revolution? Thomas Kuhn famously described it as a “paradigm shift”—the change that takes place when one idea is overtaken by another, usually as the generation of scientists who adhered to the old idea is replaced, over time, by a younger one that cleaves to the new. These revolutions can be triggered by technological breakthroughs, such as the construction of the first telescope (which overthrew the Aristotelian idea that heavenly bodies are perfect and unchanging), or by conceptual breakthroughs, such as the invention of calculus (which allowed the laws of motion to be formulated). This week, a group of computer scientists claimed that developments in their subject will trigger a scientific revolution of similar proportions in the next 15 years.
That claim is not being made lightly. Some 34 of the world's leading biologists, physicists, chemists, Earth scientists and computer scientists, led by Stephen Emmott, of Microsoft Research in Cambridge, Britain, have spent the past eight months trying to understand how future developments in computing science might influence science as a whole. They have concluded, in a report called “Towards 2020 Science”, that computing no longer merely helps scientists with their work. Instead, its concepts, tools and theorems have become integrated into the fabric of science itself. Indeed, computer science produces “an orderly, formal framework and exploratory apparatus for other sciences,” according to George Djorgovski, an astrophysicist at the California Institute of Technology.
There is no doubt that computing has become increasingly important to science over the years. The volume of data produced doubles every year, according to Alexander Szalay, another astrophysicist, who works at Johns Hopkins University in Baltimore. Particle-physics experiments are particularly notorious in this respect. The next big physics experiment will be the Large Hadron Collider, currently being built at CERN, a particle-physics laboratory in Geneva. It is expected to produce 800m collisions a second when it starts operation next year. This will result in a data flow of 1 gigabyte per second, enough to fill a DVD every five seconds. All this information must be transmitted from CERN to laboratories around the world for analysis. The computer science being put in place to deal with this and similar phenomena forms the technological aspect of the predicted scientific revolution.
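The DVD figure is easily checked. A minimal sketch in Python, assuming a 4.7-gigabyte single-layer disc (a capacity the report's figures imply but do not state):

    # Rough check of the data-rate figures quoted above.
    data_rate_gb_per_s = 1.0   # "a data flow of 1 gigabyte per second"
    dvd_capacity_gb = 4.7      # assumed single-layer DVD capacity
    print(dvd_capacity_gb / data_rate_gb_per_s)
    # -> 4.7, ie, one full DVD roughly every five seconds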
Such solutions, however, are merely an extension of the existing paradigm of collecting and ordering data by whatever technological means are available, but leaving the value-added stuff of interpretation to the human brain. What really interested Dr Emmott's team was whether computers could participate meaningfully in this process, too. That truly would be a paradigm shift in scientific method.
And computer science does, indeed, seem to be developing a role not only in handling data, but also in analysing and interpreting them. For example, devices such as “data cubes” organise information as a collection of independent variables (such as the charges and energies of particles involved in collisions) and their dependent measurements (where and when the collisions took place). This saves physicists a lot of work in deciphering the links between, say, the time elapsed since the initial collision and the types of particle existing at that moment. Meanwhile, in meteorology and epidemiology, computer science is being used to develop models of climate change and the spread of diseases including bird flu, SARS (severe acute respiratory syndrome) and malaria.
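To see what such a structure looks like, here is a minimal sketch of a data cube in Python. The field names (charge, energy, time, position) are illustrative only, not drawn from any real experiment's software:

    from collections import defaultdict

    # A toy "data cube": dependent measurements stored against the
    # independent variables that produced them, so a physicist can ask
    # for every measurement matching a slice of those variables.
    cube = defaultdict(list)

    def record(charge, energy_gev, time_ns, position_mm):
        # Independent variables form the key; dependent measurements the value.
        cube[(charge, energy_gev)].append((time_ns, position_mm))

    def slice_by_charge(charge):
        # All measurements taken at one value of one independent variable.
        return {key: rows for key, rows in cube.items() if key[0] == charge}

    record(charge=-1, energy_gev=45.0, time_ns=12.5, position_mm=(0.3, 1.1))
    record(charge=+1, energy_gev=45.0, time_ns=12.7, position_mm=(0.2, 0.9))
    print(slice_by_charge(-1))
    # -> {(-1, 45.0): [(12.5, (0.3, 1.1))]}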
Roboboffin
Stephen Muggleton, the head of computational bio-informatics at Imperial College, London, has taken the involvement of computers with data handling a step further. He argues that they will soon play a role in formulating scientific hypotheses and in designing and running experiments to test them. The data deluge is such that human beings can no longer be expected to spot patterns in it. Nor can they grasp the size and complexity of one database and see how it relates to another. Computers—he dubs them “robot scientists”—can help by learning how to do the job. A couple of years ago, for example, a team led by Ross King of the University of Wales, Aberystwyth, demonstrated that a learning machine performed better than humans at selecting experiments that would discriminate between hypotheses about the genetics of yeast.
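The loop such a robot scientist runs can be sketched in a few lines: pick the experiment that the surviving hypotheses disagree about most, run it, and discard whichever hypotheses the result refutes. The sketch below illustrates that general idea; it is not a reconstruction of Dr King's yeast-genetics system, and the gene names are invented:

    def most_discriminating(experiments, hypotheses):
        # Score each candidate experiment by how many distinct outcomes
        # the surviving hypotheses predict for it; pick the biggest spread.
        def disagreement(exp):
            return len({predict(exp) for predict in hypotheses.values()})
        return max(experiments, key=disagreement)

    def investigate(experiments, hypotheses, run_experiment):
        experiments, hypotheses = list(experiments), dict(hypotheses)
        while len(hypotheses) > 1 and experiments:
            exp = most_discriminating(experiments, hypotheses)
            experiments.remove(exp)
            outcome = run_experiment(exp)   # the expensive laboratory step
            hypotheses = {name: predict for name, predict in hypotheses.items()
                          if predict(exp) == outcome}
        return hypotheses

    # Toy usage: each hypothesis predicts whether knocking out a gene
    # affects growth; "truth" stands in for the real laboratory.
    truth = lambda gene: gene in {"g1", "g3"}
    hypotheses = {
        "only g1 matters":  lambda g: g == "g1",
        "g1 and g3 matter": lambda g: g in {"g1", "g3"},
        "no gene matters":  lambda g: False,
    }
    print(list(investigate(["g1", "g2", "g3"], hypotheses, truth)))
    # -> ['g1 and g3 matter'], reached in two experiments rather than three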
And it is in biology that computing science is likely to have its greatest impact. The report argues that cells and complex cellular systems can be seen as information-processing systems, so there is a natural fit between them and computational logic circuits. That could lead to new developments in biology, biotechnology and medicine, as well as in computer science.
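The analogy can be made concrete with a toy example: a gene silenced by its input behaves as a NOT gate, and a gene needing two activators as an AND gate. The names and behaviour below are invented for illustration; real regulatory networks are continuous and stochastic rather than cleanly boolean:

    # A toy gene-regulation "circuit", treating regulation as boolean logic.
    def repressed(signal):
        # A gene silenced by its input behaves as a NOT gate.
        return not signal

    def needs_both(a, b):
        # A gene requiring two activators behaves as an AND gate.
        return a and b

    def reporter(inducer_present, repressor_active):
        # The reporter gene fires only if the inducer is present and the
        # repressor has been switched off.
        return needs_both(inducer_present, repressed(repressor_active))

    for inducer in (False, True):
        for repressor in (False, True):
            print(inducer, repressor, "->", reporter(inducer, repressor))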
It is, perhaps, hardly surprising that if 34 scientists with an interest in computing are asked to comment on the importance of computer science, they will find that it is, indeed, “The Future”. Even so, the team's case is a respectable one. Indeed, this week's issue of Nature has given it “earthquake coverage”—devoting several pages to news and comment about the report. Microsoft Research Cambridge has also announced that it will provide €2.5m ($3m) to support research that addresses policy areas outlined by the report, including a reform of the education system and the creation of new kinds of research institutes. This is, admittedly, a small sum. If Microsoft wants the world to take its claims—and those of the scientists it commissioned to think about such things—seriously, then it should put more money where its mouth is. Otherwise the old guard might hang around rather longer than expected.