I’ve been invited to join this organization. But you can join too:
I hadn’t heard of it before. Do you know anything about it? Here’s their mission statement:
The Lifeboat Foundation is a nonprofit nongovernmental organization dedicated to encouraging scientific advancements while helping humanity survive existential risks and possible misuse of increasingly powerful technologies, including genetic engineering, nanotechnology, and robotics/AI, as we move towards the Singularity.
Lifeboat Foundation is pursuing a variety of options, including helping to accelerate the development of technologies to defend humanity, including new methods to combat viruses (such as RNA interference and new vaccine methods), effective nanotechnological defensive strategies, and even self-sustaining space colonies in case the other defensive strategies fail.
We believe that, in some situations, it might be feasible to relinquish technological capacity in the public interest (for example, we are against the U.S. government posting the recipe for the 1918 flu virus on the Internet).
It seems to have Nick Bostrom and Ray Kurzweil as two of its guiding figures: the overview features quotes from both.
An existential risk is a risk that is both global and terminal. Nick Bostrom defines it as a risk “where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential”. The term is frequently used to describe disaster and doomsday scenarios caused by non-friendly superintelligence, misuse of molecular nanotechnology, or other sources of danger.
The Lifeboat Foundation was formed to prevent existential events from happening, as once they occur, humanity may have no opportunity to correct the error. Unfortunately, governments, and humanity in general, always react AFTER a disaster has happened, and some disasters will leave no survivors, so we must react BEFORE they occur. We must be proactive.
The Lifeboat Foundation is developing programs to prevent existential events (“shields”) as well as programs to preserve civilization (“preservers”) to survive such events.
“Our approach to existential risks cannot be one of trial-and-error. There is no opportunity to learn from errors. The reactive approach — see what happens, limit damages, and learn from experience — is unworkable. Rather, we must take a proactive approach. This requires foresight to anticipate new types of threats and a willingness to take decisive preventive action and to bear the costs (moral and economic) of such actions.” — Nick Bostrom
“We cannot rely on trial-and-error approaches to deal with existential risks… We need to vastly increase our investment in developing specific defensive technologies… We are at the critical stage today for biotechnology, and we will reach the stage where we need to directly implement defensive technologies for nanotechnology during the late teen years of this century… A self-replicating pathogen, whether biological or nanotechnology based, could destroy our civilization in a matter of days or weeks.” — Ray Kurzweil
You’ll note there’s no mention here of global warming, mass extinction of species, oil depletion and other minor nuisances. Some people consider these problems insufficiently severe to count as “existential threats”… and thus, perhaps, best left to others. Some argue that there are already enough people worrying about these problems, while other threats need more attention than they’re getting.
That would be an interesting discussion to have. But I’m afraid there’s a cultural divide between the “green crowd” and the “tech crowd” that hinders such a discussion. The green crowd worries about things like global warming, the mass extinction that may currently be underway, and peak oil. The tech crowd worries about things like nanotechnology, artificial intelligence, and asteroids hitting the Earth. Each crowd tends to think the other is a bit silly… and they don’t talk to each other enough. Am I just imagining this? I don’t think so.
Of course, any generalization this vast admits many exceptions. I like Gregory Benford because he confounds naive expectations: he thinks global warming is a desperately urgent problem that overshadows all others, but he’s willing to contemplate high-tech solutions. According to my theory, that should annoy both the green crowd and the tech crowd.
Personally I think all significant threats to civilization and the biosphere should be evaluated and addressed in a unified way. Setting some aside because they’re “non-existential” or overly studied seems just as dangerous as setting others aside because they seem improbable or science-fiction-esque.
For one thing, I can imagine scenarios where medium-sized problems snowball into big “existential” ones. What’s the chance that in this century, global warming leads to droughts and famines which, combined with oil shortages, lead to political instability, the collapse of democratic governments, wars… and finally a world-wide nuclear or biological war? Maybe low… but I bet it’s higher than the chance of an asteroid hitting the Earth in this century.
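To see why a chain like this can still dominate, here’s a minimal back-of-the-envelope sketch in Python. Every probability in it is an invented placeholder, not an actual risk estimate; the point is only the structure of the calculation:

```python
# Back-of-the-envelope cascade estimate. Every number below is a
# hypothetical placeholder, not a real risk estimate.

# Conditional probability of each step of the snowball scenario
# occurring this century, given the previous steps (invented):
cascade_steps = {
    "global warming brings severe droughts and famines": 0.5,
    "famines plus oil shortages bring political instability": 0.4,
    "instability brings collapsed governments and wars": 0.2,
    "wars escalate to world-wide nuclear or biological war": 0.1,
}

# Probability of the whole chain is the product of the steps.
p_cascade = 1.0
for p in cascade_steps.values():
    p_cascade *= p

# Rough placeholder for a civilization-threatening asteroid strike
# this century (also invented, purely for comparison).
p_asteroid = 1e-4

print(f"cascade:  {p_cascade:.4f}")                # 0.0040
print(f"asteroid: {p_asteroid:.4f}")               # 0.0001
print(f"ratio:    {p_cascade / p_asteroid:.0f}x")  # 40x
```

With these made-up numbers, the four-step cascade comes out forty times more likely than the asteroid strike: a product of merely-unlikely steps can easily exceed the probability of a single very rare event.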
I’m pleased to see that the Lifeboat Foundation plans “future programs” that will appeal to the green crowd:
To protect against global warming and other unwanted climate changes.
To preserve animal life and diversity on the planet.
If our civilization ran out of energy, it would grind to a halt, so Lifeboat Foundation is looking for solutions.
However, their current programs are strongly focused on issues that appeal to the tech crowd. Maybe that’s okay, but maybe it’s a bit unbalanced:
To protect against unfriendly AI (Artificial Intelligence).
To protect against devastating asteroid strikes.
To protect against bioweapons and pandemics.
As the Internet grows in importance, an attack on it could cause physical as well as informational damage. An attack today on hospital systems or electric utilities could lead to deaths. In the future, an attack could be used to alter the output produced by nanofactories worldwide, leading to massive deaths.
To protect against ecophages and nonreplicating nanoweapons.
This shield strives to protect scientists from obstacles that would prevent latter-day Max Plancks from completing their research.
To prevent nuclear, biological, and nanotechnological attacks from occurring by using surveillance and sousveillance to identify terrorists before they are able to launch their attacks.
To build fail-safes against global existential risks by encouraging the spread of sustainable human civilization beyond Earth.