INSTITUTIONAL FORMS & URBAN LOGICS
The Article Archive
2014 - 2015
EXISTENTIAL UNCERTAINTY
Ourania Kondyli
Our Final Century: Will the Human Race Survive the Twenty-first Century? is a 2003 book by one of Britain’s top scientists, the Astronomer Royal Sir Martin Rees. The premise of the book is that the Earth and human survival are in far greater danger from the potential effects of modern technology than is commonly realised, and that the 21st century may be a critical moment in history when humanity’s fate is decided. Is this one cosmologist’s apocalyptic belief, an implausible science-fiction scenario, or an important area of study that requires a great deal more scientific investigation than it presently receives?
Nearly a decade later, in 2012, Lord Martin Rees, together with philosophy professor Huw Price and computer programmer and Skype co-founder Jaan Tallinn, founded the Centre for the Study of Existential Risk (CSER), which looks at four sources of threat: climate change, nuclear technology, biotechnology and artificial intelligence.
All three share the same concern about near-term risks to humanity and aim “to steer a small fraction of Cambridge’s great intellectual resources, and of the reputation built on its past and present scientific pre-eminence, to the task of ensuring that our own species has a long-term future”1.
What is an existential risk?
“One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential”2.
This includes technologies that might permanently deplete humanity’s resources or block further scientific progress, in addition to ones that put the species itself at risk.
This definition comes from the philosopher Nick Bostrom, founding director of The Future of Humanity Institute (FHI). Established in 2005 as part of the Faculty of Philosophy and the Oxford Martin School at the University of Oxford, the FHI is an interdisciplinary research centre focused on predicting and preventing large-scale risks to human civilization.
Bostrom believes that humanity has entered a new kind of technological era with the capacity to threaten its future as never before. These existential risks are “threats we have no track record of surviving”2 and are therefore unprecedented. In contrast to familiar threats to human existence such as nuclear holocaust, asteroids, super-volcanic eruptions and earthquakes, it is argued that today’s radically transformative technologies could prove equally, and potentially even more, destructive to the human race. Existential risks, like other risks to civilization, may come from natural or man-made sources, and have accordingly been divided into anthropogenic and non-anthropogenic risks; it has also been argued that many existential risks remain unknown.
FHI’s stated objective is to “develop and utilize scientific and philosophical methods for reasoning about topics of fundamental importance to humanity, such as the effect of future technology on the human condition and the possibility of global catastrophes”3. Beyond this primary focus on the effect of future technology on the human condition, its other studies include potential obstacles to space colonization, the dangers of advanced artificial intelligence and the probable impacts of technological progress on social and institutional risks.
And while this is happening on our side of the world, across the ocean we also find the GLOBAL CATASTROPHIC RISK INSTITUTE, the MACHINE INTELLIGENCE RESEARCH INSTITUTE and THE FUTURE OF LIFE INSTITUTE. These three institutes also concern themselves with the breadth of major GCRs and focus on big questions: which GCRs are most likely to occur, what are the most effective ways of reducing them, and what ethical and other issues do they raise?
GCR stands for Global Catastrophic Risk, defined as a hypothetical future event with the potential to inflict serious damage to human well-being on a global scale. Some such events could cripple or even destroy modern civilization. Other, still more severe, scenarios could threaten permanent human extinction; these are the existential risks already discussed above. Specific GCRs include emerging technologies, environmental change, financial collapse, governance failure, infectious disease, nuclear war and astrobiological hazards.
In search of a better understanding of this emerging area of research I contacted Dr. Anders Sandberg, a senior researcher at the FHI, currently working on the FHI-Amlin collaboration analysing systemic risk and risk modelling. His research centres on the management of low-probability, high-impact risks, societal and ethical issues surrounding human enhancement and new technology, and estimating the capabilities of future technologies. Topics of particular interest include global catastrophic risk, cognitive biases, cognitive enhancement, collective intelligence, neuroethics and public policy.
1. Where does your funding come from and how difficult is it to convince people/funding bodies to invest in your research?
We are mostly privately funded. I am on an industry collaboration project with a reinsurance company, but much of FHI has been based on private donations (and the original Oxford Martin School grant). Overall, it is surprisingly hard to get funding bodies to fund existential risk. Many acknowledge that the field is important, yet they seem to think that there is no research or no methodology that could help it. Given the interdisciplinary nature of the problem we often find that we do not fit with standard grant programs.
2. If there were a sustained large amount of funding, what would you prioritise spending it on (researchers, equipment, publicity, a new building etc.)?
Right now, researchers. Existential risk is an under-researched area, and one can do quite a lot with small resources. However, it also requires a lot of different kinds of people - philosophers, economists, disaster experts, risk analysts, statisticians etc. - to become really useful. Later, when we know more, it might matter more how much publicity or policy impact one can get.
3. Is the nature of our political/economic system, which is based on short-term policies, simply incompatible with such long-term thinking, regardless of its importance?
Not necessarily. Some institutions are surprisingly long term - consider the case of insurance and especially reinsurance companies, or pension funds - as long as the incentives are right. The problem is that key decision makers often have short-term incentives, and that is unhealthy.
4. What do the researchers at the FHI do on an average day? (Running computer calculations/simulations? Analysing current research and scientific innovations? Collecting data? Drinking tea and discussing the apocalypse? Etc.)
Yes to all of them. Right now some colleagues and I are hanging out with insurance people in the City of London in order to figure out how systemic risk emerges from their use of risk models - when we get back to the office there will be plenty of meetings analysing our notes, writing papers about it, reading up on cognitive bias, expertise and finance, as well as programming an agent-based simulation of the insurance market.
We are methodologically flexible: some problems are best solved by the techniques of analytic philosophy, others by proving mathematical theorems, simulating them, comparing with historical examples, scenario planning, or surveying people. We look at the possibilities and reason about what might work well, and then experiment with it.
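To give a sense of what the agent-based simulation Sandberg mentions can mean in practice, the sketch below is a deliberately toy model written for this article, not FHI’s actual code; every class name, parameter and probability in it is invented for illustration. Each insurer follows simple rules (collect premiums, pay claims), and occasional market-wide catastrophe years show how correlated losses can push many agents into insolvency at once.

import random

class Insurer:
    def __init__(self, capital, premium_rate):
        self.capital = capital              # reserves available to pay claims
        self.premium_rate = premium_rate    # price charged per unit of exposure
        self.solvent = True

    def underwrite(self, exposure):
        """Collect a year's premiums on the exposure taken on."""
        self.capital += self.premium_rate * exposure

    def pay_claims(self, losses):
        """Pay the year's claims; the insurer fails if its capital runs out."""
        self.capital -= losses
        if self.capital < 0:
            self.solvent = False

def simulate(years=50, n_insurers=10, seed=1):
    random.seed(seed)
    insurers = [Insurer(capital=100.0, premium_rate=0.05) for _ in range(n_insurers)]
    for _ in range(years):
        # A market-wide catastrophe hits every insurer at once: the correlated,
        # systemic case that insurer-by-insurer risk models can underestimate.
        catastrophe = random.random() < 0.05
        for ins in (i for i in insurers if i.solvent):
            exposure = 100.0
            ins.underwrite(exposure)
            expected_loss = 0.04 * exposure
            multiplier = 8.0 if catastrophe else random.uniform(0.5, 1.5)
            ins.pay_claims(expected_loss * multiplier)
    return sum(1 for i in insurers if not i.solvent)

if __name__ == "__main__":
    print("insolvent insurers after 50 simulated years:", simulate())

Real systemic-risk models are of course far richer, adding reinsurance contracts, shared risk models and regulatory capital requirements, but the basic logic of failures emerging from many interacting agents is the same.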
5. From my understanding, FHI and CSER have been in collaboration. Would it be beneficial if these institutes merged?
I think one risk is groupthink: everybody thinking about these hard problems in the same way. Having two friendly competing institutes (I like to view it as healthy sibling rivalry) is a good way of seeing which conclusions are robust and which ones may be due to local bias.
6. Is there a tangible way that the public could feel the effects, or see the evidence, of FHI’s research within their lifetime?
I think we may be having some policy impact already (some of our work has appeared in government reports, we are talking to some policymakers etc.), Nick’s book is apparently changing some minds, and so on. But the irony is of course that if we are successful in reducing existential risk, little will seem to happen - the world will be safer, but it is hard to point at an example of it being safer. Successful risk reduction is never as obvious as unsuccessful risk reduction.
Dr. Sandberg’s final comment suggests an interesting paradox: if risk reduction is successful, it is also invisible. The research done by these institutions could one day save the human race without the public ever being aware that it was in danger.
IMAGE LIST
1. Joseph Popper, 2012, Design Interactions, Space Capsule Film Set
2. Daro Montag, 1994, Photography, Fruit of the Sun
3. Hee Jin Kang, 2002, Photography, One
FOOTNOTES
1. Lewsey, Fred (25 November 2012). “Humanity’s last invention and our uncertain future”. University of Cambridge Research News.
2. Coughlan, Sean (24 April 2013). “How are humans going to become extinct?”. BBC News. http://www.bbc.co.uk/news/business-22002530
3. “About FHI”. Future of Humanity Institute. Retrieved 1 December 2014.