
Future Imperative

What if technology were being developed that could enhance your mind or body to extraordinary or even superhuman levels -- and some of these tools were already here? Wouldn't you be curious?

Actually, some are here. But human enhancement is an incredibly broad and compartmentalized field. We’re often unaware of what’s right next door. This site reviews resources and ideas from across the field and makes it easy for readers to find exactly the information they're most interested in.


The future is coming fast, and it's no longer possible to ignore how rapidly the world is changing. As the old order changes -- or more frequently crumbles altogether -- I offer a perspective on how we can transform ourselves in turn... for the better. Nothing on this site is intended as legal, financial or medical advice. Indeed, much of what I discuss amounts to possibilities rather than certainties, in an ever-changing present and an ever-uncertain future.

Sunday, July 17, 2005

A Glimpse of the Future -- Or, Has SF Gone Blind? -- SF

The question "Is Science Fiction About to Go Blind?" has arisen more and more often in recent years as the idea of a "technological Singularity" has caught on in SF -- literally a rate of accelerating technological change so swift as to be beyond modern human comprehension. Much less our ability to meaningfully predict its course. For obvious reasons, if you believe that we will experience technological progress that pronounced in the near future, the range of future scenarios you can meaningfully write about is correspondingly diminished. Many science fiction writers who anticipate such an era are constantly trying to expand that spectrum of possibilities, but it often proves challenging.

The article linked above describes some of the problems faced by the SF field. Situations from Charles Stross' novel Accelerando (which is given away free online here) are used to illustrate some of the radical changes that could take place in the event of runaway AI and nanotech breakthroughs. While that article is interesting and well worth reading, I'd like to look at a slightly different problem -- what do we lose if science fiction stops being a lens that surveys the future for the rest of humanity, if it loses the predictive power that its best examples have had over the last two centuries?

Consider Brave New World, 1984, R.U.R., or even modern films such as Gattaca. Or, for that matter, the venerable novels of Verne or Shelley, or visionaries of human evolution from Olaf Stapledon to William Gibson and Vernor Vinge. Works such as these often introduce a wider audience to critical issues they had no idea existed.

It's been said that the greatest contribution that Brave New World and 1984 made in describing their respective dystopias was in ensuring those futures never came to pass. These two potential worlds -- warped, respectively, by massive, misguided human social engineering and a ruthless, all-controlling totalitarian state -- are now classics in the genre that asks "What would be so bad about doing ___?" R.U.R., of course, coined the word "robot" while simultaneously asking what happens in a world where all human labor has been replaced by the efforts of intelligent machines.

Gattaca looked at how radically American society could change with just a single technology -- exceptionally cheap, fast and accurate genetic scans... which would enable the selection of superior embryos, the screening of the "genetically unfit" and the use of DNA analysis in every forensic crime scene. How quickly the future has come upon us.

And Gibson and Vinge, of course, are known for their respective visions of cyberpunk and technological Singularities -- both of which relate to this site's focus of radical human enhancement and which have, more importantly, influenced many futurists, philosophers and artificial intelligence researchers. There are more obscure works, such as Greg Bear's novel Blood Music, which anticipated nanotechnology well before Engines of Creation (as did the character of Warlock in the comic book The New Mutants, though no one wants to discuss that fact =) ), or Arthur C. Clarke's The Fountains of Paradise, which envisioned a geostationary elevator out of Earth's gravity well... not to mention Clarke's non-fiction explanation of how to put geostationary communications satellites in orbit to revolutionize telecommunications -- which they did.

What's my point? Science fiction propagates otherwise obscure ideas about the future among many different audiences -- whether among potential nanotech innovators reading Blood Music, ordinary American voters watching Gattaca, early 20th Century labor organizers taking in R.U.R., early telecom or aerospace engineers reading Clarke, or future civil rights activists contemplating 1984. In each case, the critical audience may differ dramatically -- in one case, a few scientists or inventors spurred to develop a technology may serve as the idea's "critical mass"; in others, it may be the widespread comprehension of millions regarding a technology's implications that changes the course of history.

When science fiction dramatically restricts its vision to narrowly defined possibilities -- whether space opera stories, post-apocalyptic realities or your choice of post-Singularity/post-humanity futures -- the field as a whole loses much of its ability to surprise as it treads and retreads the same overtaxed plot of ground. That's not to say that there aren't plenty of great stories left involving nanites or AIs (or space fleets or holocaust aftermaths), but if every "serious SF writer" ends up tramping down the same path, we're going to end up missing a lot of insights.

In fairness to writers fascinated by Singularities, it's worth noting that many of them, while their technological timescales may be greatly accelerated, do consider the impact of radically advanced technology on human society. It's just that they anticipate its arrival being just around the corner, and they generally don't expect "society as we know it" to last very long thereafter. Nevertheless, there are some interesting stories packed into those compressed timespans.

Perhaps more intriguing in this vein are writers such as Ken MacLeod who anticipate the survival of some kind of human civilization in their stories -- if one that is much shakier and less populous than the one we have today, and which exists in the shadow of incomprehensibly powerful intelligences.

These are interesting scenarios to contemplate. However, lest the field one day devolve to a "cheesy space opera"/"bug-eyed monsters" level of recycled plots, I thought I'd do my part by pointing out just a few of the questions worthy of SF's serious consideration, particularly at this stage of history. A few of them actually fit pretty well into Singularity SF, if you think about them.

In what ways can human beings be enhanced, whether in terms of intelligence, health, speed, looks, whatever? To what degree can they be enhanced while still remaining fundamentally "human"? (And what does "human" mean, while we're at it?)

To what degree can various methods of radical human enhancement be synergized? Will biological beings be able to compete at all with non-biological intelligences? Or -- heresy though it is to ask -- will AIs be able to compete at all with biological or cyborged intellects?

Will recursively self-improving intelligences result in the development of unspeakably powerful AIs -- or unspeakably powerful human/post-human minds, if the world's computational resources and scientific innovation are turned towards the refinement of human/biological intelligence instead of artificial thought?

How many different versions (or factions) of "superior beings" might a technologically evolving Earth/solar system/galaxy end up playing host to? How might they get along? How might they learn to get along, if the only alternative were wasteful (if not genocidal) conflict?

How does ordinary humanity maintain its rights and independence in the face of a newly evolved "higher intelligence"? Will humanity (or a large proportion of it) be forced to self-evolve in response in a kind of "arms race" or at least a push to blunt the most dramatic advantages a superior intellect might hold over "masses of ordinary men"?

Will human beings -- either normal modern ones, geniuses, or significantly more advanced near-future near-humans -- be able to offer higher intelligences anything? Here's a fictional comparison of where various intelligences fall on one imaginary scale. Consider how far down even the most advanced of modern humans would sit on this measure of sentience, and then consider that this yardstick was specifically designed to make "mere mortals" a measurable quantity next to the celestial minds it contemplates.

What happens if the difference between "transhuman" minds and conventional geniuses becomes as great as between ordinary genius and the severely retarded? Even if there are no issues of wealth, power and recognition, what happens if "the rest of us" become keenly aware of how irrelevant we are to the next step on the evolutionary ladder?

Is there anything ordinary humanity can do to effectively influence human evolution, whether dramatically hastening it, delaying it, redirecting it or "putting the genie back in the bottle"? How can national/international education, government and/or R&D funding shape these unfolding possibilities?

Whew. There's actually a lot more to talk about, most of which has nothing directly to do with human enhancement at all. But I think I've described enough already to illustrate my point. Even without journeying too far from "ye olde Singularity territory," I've found quite a bit of material that is forbidden in a strict, "the AI gods shall rule all flesh," AI/nano, "hard takeoff" Singularity SF. (An exhausting definition just to write. But oddly enough, an accurate one.)

Future Imperative

Augmentation: Is the Train Leaving the Station? -- AL, Bio, CPS, Plan, Soc


Given how fast augmentation techniques have been surfacing in recent years, the question arises: How long before we have augmentation methods too powerful to ignore? And for that matter, how long before we have neo-humans who are in many ways definitively superior to the previous model -- that is to say, the rest of us?

Personally, as a human-enhancement enthusiast, I have been surprised to see progress in this field consistently outrace even my expectations -- especially considering that no large organization or wealthy entity seems to be pushing such research. Indeed, given the position of the President's Council on Bioethics, it would seem that the present attitude of the U.S. government is relatively hostile to the field. Yet progress is continuing regardless, and a number of breakthroughs enabling the creation of "superhumans" using existing technology may already be possible.

So far on this blog, I've either discussed or linked to significant new discoveries with the potential to create people who are "more than human." There have been techniques for doubling muscle mass and amplifying cardiovascular endurance through genetic manipulation -- already in use in animals, and which experts speculated could well appear at the next Olympics, if they weren't already a part of the last one.

When injecting rats with the gene for IGF-1 and then having them exercise results in a doubling of their muscle mass (and when merely being injected and not exercising increases muscle size and strength by 15% to 30%), it becomes clear that only a relative handful of such modifications would be necessary for a genetically human "sub-species" to pull away from the rest of us in terms of performance. Particularly if they started at an elevated baseline of abilities.

And of course, there have been the various bio-energy manipulations described here, here, here and here. And the accelerated learning and creativity boosting techniques described here, here, here, here, here, here, here, here, here, here, here, here, here, and here, among others. Some of my previous articles, such as this one and this one, have surveyed the rate at which science has been making strides in biologically enhancing human beings. Obviously, there's no shortage of enhancement methods, even for those of us not yet (or perhaps never) biologically, pharmacologically or cybernetically augmented.

Which raises the very basic question -- how long before someone realizes that any effective methods for significant human improvement could be synergistically combined, thus leading to ever more radically evolved beings? And that a relatively simple program could take a relatively small number of such people -- a few hundred to a few thousand or tens of thousands -- and with a fairly small budget (within the reach of many companies, not to mention nations) develop a host of "posthumans" to serve that organization's interests...?

That would not be a grandiose leap of the imagination if the people in said program included many of the leaders and thinkers of such a group -- particularly people who would have reason to expect great improvements in their personal compensation in exchange for substantial improvements in the quality of their work and thought, and who might be personally, ideologically and/or philosophically devoted to the group and its goals.

Along these lines, here's my response to one query about potential augmented humans:

"As to the exceptional individuals I mentioned, that was technically one of the relatively fictional parts of the post (unlike, say, the two computers engaged in automated biotech research). But you're right to ask about them, because in fact both were based on human capabilities that have been developed not through genetic engineering, but through regular practice of particular disciplines.

"I'll save a more detailed discussion for another blog post, but to summarize, the guy with the unusually well-developed capillaries is based on simple techniques by Dr. Win Wenger for increasing healthy circulation to your brain. He's the easier of the two to create through simple, daily exercises -- especially given several years of work.

"The girl with the superior limbic access/learning skills is based on a number of accelerated learning techniques, including a number of hypnosis methods developed (and never written down) by Dr. Milton Erickson, Dr. Raikov, and others. She'd be more difficult to develop -- in particular the kind of automatic access to her superlearning gifts that I describe -- but given the research and an intensive project to create one or more such people, hardly impossible.

"One key point of this article was that there are a number of ways to develop a superhuman (or "superentity"). Also implied, I think, is the point that many of the methods available could be used to enhance each other -- a genetically augmented person using accelerated learning and mindtech to leverage their assets and then turning around to improve a computer system that researches biotech options.

"You quickly end up with a snowball effect here. Or a "Singularity." =)"

Future Imperative