An Extinction-Level Threat, and Other Trivial Matters -- Soc, Tech
Nature comes knocking on our door... and doesn't realize we've already seen this movie. From The Guardian in Britain comes this report:
Scientists are monitoring the progress of a 390-metre wide asteroid discovered last year that is potentially on a collision course with the planet, and are imploring governments to decide on a strategy for dealing with it.

And before you ask, yes, this is the kind of serious problem that we could really use widespread, superhuman intelligence to deal with. Unfortunately, the intelligence and technology that would be so useful for dealing with threats on this scale -- whether natural or artificial -- are perfectly capable of creating even greater, possibly terminal threats. So assuming our existing problems don't get us, something new could.
Nasa has estimated that an impact from Apophis, which has an outside chance of hitting the Earth in 2036, would release more than 100,000 times the energy released in the nuclear blast over Hiroshima. Thousands of square kilometres would be directly affected by the blast but the whole of the Earth would see the effects of the dust released into the atmosphere.
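As a rough sanity check on the Nasa figure quoted above, here is a back-of-the-envelope kinetic-energy estimate. Only the 390-metre diameter comes from the article; the density, impact speed, and Hiroshima yield below are my own assumed round numbers, not figures from the source:

```python
import math

# Assumed values -- only the 390 m diameter comes from the Guardian report.
diameter_m = 390.0             # asteroid diameter (from the article)
density_kg_m3 = 2600.0         # assumed density for a stony asteroid
speed_m_s = 12_600.0           # assumed Earth-impact speed (~12.6 km/s)
hiroshima_j = 15e3 * 4.184e9   # ~15 kilotons of TNT, at 4.184e9 J per ton

# Kinetic energy E = (1/2) m v^2, with mass from a uniform sphere.
radius_m = diameter_m / 2
mass_kg = density_kg_m3 * (4 / 3) * math.pi * radius_m**3
energy_j = 0.5 * mass_kg * speed_m_s**2

print(f"impact energy ~ {energy_j:.1e} J")
print(f"Hiroshima equivalents ~ {energy_j / hiroshima_j:,.0f}")
```

With these assumptions the total comes out on the order of 10^18 joules, i.e. roughly a hundred thousand Hiroshimas -- the same ballpark as the quoted estimate.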
My point? At some level we're going to have to see an improvement in human morality and ethics. Or, if you prefer, an improvement in emerging superhuman morality and ethics. Genius doesn't mandate virtue. Nor does it forbid anti-social behavior. In the end, only our integrity can save us. Or rather, the morals and ethics of an ever-growing majority of the human race... and whoever else might end up sharing this planet with us.
-----
A further note from USA Today, on the research of MIT physicist Max Tegmark and Oxford University philosopher Nick Bostrom into the odds of a random extinction of our entire species:
Luckily, recent years have brought us better fixes on how quickly planets form and the age of the Universe (about 13.7 billion years). Added to the knowledge of how long it took an intelligent species to arise here on Earth, the evidence indicates that it's extremely unlikely that an inhabited planet would be randomly annihilated, say Tegmark and Bostrom...
...But insomniacs still have some fodder, the pair concludes. Their estimate "does not apply in general to disasters that become possible only after certain technologies have been developed, for example, nuclear annihilation or extinction through engineered microorganisms. So we still have plenty to worry about."
Which only reinforces, I suppose, my contention that we will increasingly rank as the greatest threat to our own survival, and hence the need for a degree of moral development in tandem with our collective intellectual and technological development. You might have hoped that nuclear weapons had clinched that argument, but oh well. At least we haven't received a "conclusive argument" in the form of an extinction event.
But on the positive side, if we ever do, you can rest assured I won't say "I told you so."
Future Imperative