
Future Imperative

What if technology were being developed that could enhance your mind or body to extraordinary or even superhuman levels -- and some of these tools were already here? Wouldn't you be curious?

Actually, some are here. But human enhancement is an incredibly broad and compartmentalized field. We’re often unaware of what’s right next door. This site reviews resources and ideas from across the field and makes it easy for readers to find exactly the information they're most interested in.

The future is coming fast, and it's no longer possible to ignore how rapidly the world is changing. As the old order changes -- or more frequently crumbles altogether -- I offer a perspective on how we can transform ourselves in turn... for the better. Nothing on this site is intended as legal, financial or medical advice. Indeed, much of what I discuss amounts to possibilities rather than certainties, in an ever-changing present and an ever-uncertain future.

Saturday, April 30, 2005

A Glimpse of the Future -- Or, Has SF Gone Blind? -- SF

The question "Is Science Fiction About to Go Blind?" has arisen more and more often in recent years as the idea of a "technological Singularity" has caught on in SF -- literally a rate of accelerating technological change so swift as to be beyond modern human comprehension. Much less our ability to meaningfully predict its course. For obvious reasons, if you believe that we will experience technological progress that pronounced in the near future, the range of future scenarios you can meaningfully write about is correspondingly diminished. Many science fiction writers who anticipate such an era are constantly trying to expand that spectrum of possibilities, but it often proves challenging.

The article linked above describes some of the problems faced by the SF field. Situations from Charles Stross' novel Accelerando (which is given away free online here) are used to illustrate some of the radical changes that could take place in the event of runaway AI and nanotech breakthroughs. While that article is interesting and well worth reading, I'd like to look at a slightly different problem -- what do we lose if science fiction stops being a lens that surveys the future for the rest of humanity, if it loses the predictive power that its best examples have had over the last two centuries?

Consider Brave New World, 1984, R.U.R. or even modern films such as Gattaca. Or, for that matter, the venerable novels of Verne or Shelley, or visionaries of human evolution from Olaf Stapledon to William Gibson and Vernor Vinge. Works such as these often introduce a wider audience to critical issues it had no idea existed.

It's been said that the greatest contribution Brave New World and 1984 made in describing their respective dystopias was ensuring those futures never came to pass. These two potential worlds -- warped, respectively, by massive, misguided human social engineering and a ruthless, all-controlling totalitarian state -- are now classics in the genre that asks "What would be so bad about doing ___?" R.U.R., of course, coined the word "robot" while simultaneously asking what happens in a world where all human labor has been replaced by the efforts of intelligent machines.

Gattaca looked at how radically American society could change with just a single technology -- exceptionally cheap, fast and accurate genetic scans... which would enable the selection of superior embryos, the screening out of the "genetically unfit" and the use of DNA analysis at every crime scene. How quickly the future has come upon us.

And Gibson and Vinge, of course, are known for their respective visions of cyberpunk and technological Singularities -- both of which relate to this site's focus of radical human enhancement and which have, more importantly, influenced many futurists, philosophers and artificial intelligence researchers. There are more obscure works, such as Greg Bear's novel Blood Music, which anticipated nanotechnology well before Engines of Creation (as did the character of Warlock in the comic book The New Mutants, though no one wants to discuss that fact =) ), or Arthur C. Clarke's The Fountains of Paradise, which envisioned a space elevator climbing out of Earth's gravity well to geostationary orbit... not to mention Clarke's non-fiction explanation of how geostationary communications satellites could revolutionize telecommunications. Which they did.

What's my point? Science fiction propagates otherwise obscure ideas about the future among many different audiences -- whether potential nanotech innovators reading Blood Music, ordinary American voters watching Gattaca, early 20th Century labor organizers taking in R.U.R., early telecom or aerospace engineers reading Clarke, or future civil rights activists contemplating 1984. In each case, the critical audience may differ dramatically -- in one case, a few scientists or inventors spurred to develop a technology may serve as the idea's "critical mass"; in others, it may be that the widespread comprehension of millions regarding a technology's implications changes the course of history.

When science fiction dramatically restricts its vision to narrowly defined possibilities -- whether space opera stories, post-apocalyptic realities or your choice of post-Singularity/ post-humanity futures -- the field as a whole loses much of its ability to surprise as it treads and retreads the same overtaxed plot of ground. That's not to say there aren't plenty of great stories left involving nanites or AIs (or space fleets or holocaust aftermaths), but if every "serious SF writer" ends up tramping down the same path, we're going to end up missing a lot of insights.

In fairness to writers fascinated by Singularities, it's worth noting that many of them, while their technological timescales may be greatly accelerated, do consider the impact of radically advanced technology on human society. It's just that they anticipate its arrival being just around the corner, and they generally don't expect "society as we know it" to last very long thereafter. Nevertheless, there are some interesting stories packed into those compressed timespans.

Perhaps more intriguing in this vein are writers such as Ken MacLeod who anticipate the survival of some kind of human civilization in their stories -- if one that is much shakier and less populous than the one we have today, and which exists in the shadow of incomprehensibly powerful intelligences.

These are interesting scenarios to contemplate. However, lest the field one day devolve to a "cheesy space opera"/"bug-eyed monsters" level of recycled plots, I thought I'd do my part by pointing out just a few of the questions worthy of SF's serious consideration, particularly at this stage of history. A few of them actually fit pretty well into Singularity SF, if you think about them.

In what ways can human beings be enhanced, whether in terms of intelligence, health, speed, looks, whatever? To what degree can they be enhanced while still remaining fundamentally "human"? (And what does "human" mean, while we're at it?)

To what degree can various methods of radical human enhancement be synergized? Will biological beings be able to compete at all with non-biological intelligences? Or -- heresy though it is to ask -- will AIs be able to compete at all with biological or cyborged intellects?

Will recursively self-improving intelligences result in the development of unspeakably powerful AIs -- or unspeakably powerful human/post-human minds, if the world's computational resources and scientific innovation are turned towards the refinement of human/biological intelligence instead of artificial thought?

How many different versions (or factions) of "superior beings" might a technologically evolving Earth/solar system/galaxy end up playing host to? How might they get along? How might they learn to get along, if the only alternative were wasteful (if not genocidal) conflict?

How does ordinary humanity maintain its rights and independence in the face of a newly evolved "higher intelligence"? Will humanity (or a large proportion of it) be forced to self-evolve in response in a kind of "arms race" or at least a push to blunt the most dramatic advantages a superior intellect might hold over "masses of ordinary men"?

Will human beings -- either normal modern ones, geniuses, or significantly more advanced near-future near-humans -- be able to offer higher intelligences anything? Here's a fictional comparison of where various intelligences fall on one imaginary scale. Consider how far down even the most advanced of modern humans would sit on this measure of sentience, and then consider that this yardstick was specifically designed to make "mere mortals" a measurable quantity next to the celestial minds it contemplates.

What happens if the difference between "transhuman" minds and conventional geniuses becomes as great as that between an ordinary genius and the severely retarded? Even if there are no issues of wealth, power and recognition, what happens if "the rest of us" become keenly aware of how irrelevant we are to the next step on the evolutionary ladder?

Is there anything ordinary humanity can do to effectively influence human evolution, whether dramatically hastening it, delaying it, redirecting it or "putting the genie back in the bottle"? How can national/international education, government and/or R&D funding shape these unfolding possibilities?

Whew. There's actually a lot more to talk about, most of which has nothing directly to do with human enhancement at all. But I think I've described enough already to illustrate my point. Even without journeying too far from "ye olde Singularity territory" I've found quite a bit of material that is forbidden in a strict, "the AI gods shall rule all flesh," AI/nano, "hard takeoff" Singularity SF. (An exhausting definition just to write. But oddly enough, an accurate one.)


Future Imperative

Friday, April 29, 2005

(Non-)Human Evolution: The Public Debate -- Bio, Soc

Yes, even the Utne Reader has written an article on "transhumanism" -- the belief that humanity can and should seek to transcend its physical limits through technology. In light of mainstream articles like this one, Francis Fukuyama's declaration in Foreign Policy that transhumanism is "the world's most dangerous idea" and Bill Joy's call to arms against the notion in Wired magazine, I think the transhumanist vision of human evolution is entering the public discourse.

Meanwhile, the National Academy of Sciences has issued guidelines on the ethics of inserting human genes into animals, among other matters. One bioethics panel has already endorsed a scientist's plan to create mouse brains almost entirely composed of human brain cells.

Oddly enough, the main red line offered up by the NAS actually had to do with avoiding any experiment that might result in a human brain being "trapped" in an animal body. But the mouse experiment has not gone forward, and the scientist, Irving Weissman, has no immediate plans to proceed with it.

I just know I'm going to end up in a Labyrinth with a classic minotaur at this rate. =)

Seriously, the issue of human brains -- even partially human brains -- in animal bodies is going to raise all kinds of ethical questions for many average people. Do such beings have a soul? Consciousness? Complex reasoning, feelings, and so on?

How human is human, regardless of your religion or your private philosophy? Where do you stand?

Sometimes entry into the public discourse doesn't mean a comfortable debut. =)

Ralph


Future Imperative

Wednesday, April 27, 2005

To Augment or Not to Augment... -- AL, Bio, CPS, Plan, Psych, Soc

A chap I occasionally discuss augmentation with brought up the old "make the Internet an AI" plan, and also mentioned the following concerns about human enhancement studies.

"There's a lot of ways intelligence can be augmented, even now, but the danger is always that if we screw around with bits we don't know about, we can shut off important brain areas by accident."

"That's the one thing I'm nervous about as far as government testing goes..."

So, naturally, I offered a relatively short and simple reply...

...

Actually, these are both good points, Daniel, but I look at things from a slightly different perspective.

First, you do want to be cautious about causing harm to the brain/mind/personality/soul/etc. But there's plenty you can do to expand human capabilities without destroying the person -- in fact, much of what you might do will only improve people. Smarter, stronger, faster, healthier, better looking -- who could complain? =)

Think about the commonly accepted ways of improving a human being in the real world: exercise, healthy food, reading, work demanding advanced physical or mental skills, martial arts, sports, and so on.

These kinds of lifestyle choices don't destroy a human being -- though embracing them may change you radically. Remember, there are lots of disciplined people out there who never went through Marine boot camp, but being highly disciplined would be a huge change to a lot of people's lifestyles. You could even argue that some highly gifted people wouldn't be nearly as productive if they were that focused and goal-oriented (some artistic, musical, literary and other creative types, for example).

On the other hand, most people could benefit from some improvement in their personal sense of discipline -- less of a tendency to procrastinate, for example.

The trick is, most of these "conventional" skills and virtues don't suddenly multiply with an injection or a pill in the modern-day world. So people can get nervous about a major improvement, even one with no real side effects.

But think of it this way -- if we start using all the tech and techniques that are already out there for improving human beings now...

Then when something comes along like your superintelligent computer network, or North Korean supersoldiers, or that protein that enhances rat brains (and maybe human brains)...
We won't have to jump into a huge crash program to develop supers of our own, "just to keep up." Instead, we'll have the time to consider each augmentation in turn, looking at the advantages/disadvantages, whether something can be pushed further, whether we've pushed it too far.

Cool stuff, if you're not rushing to get it done.

That's why taking a proactive stance -- not just using existing tech, but researching new enhancements -- is such a good idea. We need smarter people just to solve the world's existing problems, but we don't need a small cabal super-augmenting itself while the rest of us are sleeping, then taking over with absolutely no one having the requisite talents to oppose them.

One of the advantages of distributing exceptional abilities as widely as possible is that you get far more of your population actively involved in looking at issues and solving problems -- something that too many people leave to others. I want to get away from having an "elite" run our lives, be it "super" or otherwise.

Ralph


Future Imperative

Tuesday, April 26, 2005

Forming Your Own Micronation -- Part I -- "Good God, Why?" -- Hum, Plan, Soc, $$$

Ambitious futurists often want to reshape everything about their world, and for some, just changing their own lives isn’t enough. Some dreamers want nothing less than their own sovereign nation. Whether it’s a matter of changing society today, gaining independence from petty national governments or simply having a place where you can wield absolute power, the idea is popular with a surprisingly large number of futurists with very different backgrounds and philosophies.

Whatever else you can say about these people, you can’t fault them for thinking small. So let’s take a look at what someone who wanted to start their own country (or “independent microstate”) would have to deal with.

When you tell your friends and family “I’m planning to start my own micronation,” you’re apt to hear a number of replies. “Why on Earth?” “Are you serious?” “How would you even do that?” And my personal favorite, “Are you insane?”

These comments could be phrased more tactfully, but they're actually a good place to start. When someone asks you, "Good God, why?" -- give it some thought.

What do you want an independent nation for? Until you answer that question, there’s no point to your grandiose (or even humble, realistic) plans.

Ask yourself: Do you want it to take advantage of banking loopholes? To engage in cutting edge research that some advanced countries don't approve of (stem cell research, perhaps future gene therapy enhancement work)?

Do you just want an escape route, a bolt hole to flee to if the world starts to disintegrate?

Or do you think that only a transhumanist nation would be accepting and supportive of the emergence of genuine transhumans/ superhumans/ posthumans?

Will this be the cornerstone of your all-conquering transcontinental/ global/ interplanetary/ interstellar/ intergalactic empire?

Or something else?

Here, for example, are the edited comments of one random enthusiast I asked this question of on the Net:

“Fears:
-Research restrictions that inhibit beneficial research based on ignorance or nearsightedness.
-Restrictions on applications of research.
-Fast-tracked research on products can't be trusted due to the reasons behind the fast tracking... money (e.g. Vioxx and the new testosterone patch for women).
-No trust in media information, because major media companies are interested in viewers and ratings, so they may sensationalize stories or report only part of the story... this overall has negative effects on society.

"Goals:
-A sovereign society built on democratic transhumanist ideals.
-Socially progressive - no money, no debt, no poverty, no greed, each person is self governed. Liberal, free. Free to do anything so long as it does not physically or emotionally harm another person. Fairness and equality.
-A government that is proportionally representative. And has checks and balances for human nature, to inhibit greed and corruption, and promotes ethics and transparency.
-Government documentation available without restriction to every citizen as every citizen is a member of the government.
-Computerized allocation of resources to make sure that everyone gets a fair share of the societal wealth.
-Self sustainability.

"I have a strong suspicion that yes, only a transhumanist nation would be totally accepting of genuine transhumans. I think this because of what has been shown again and again throughout human history. Innocent people and entities will die before change in an existing society will occur. If we create a society that is accepting of those people and entities even before they exist, then no one will have to suffer needlessly. That's just my opinion though, and i can't think of everything, so chances are there will still be plenty of risks, and maybe even a few mishaps.”

One thing the above comments illustrate is that you shouldn't assume someone interested in creating a micronation necessarily shares your goals. Prominent groups interested in establishing microstates are often assumed to be, among other things, radical libertarians, fascist militarists, apocalyptic cult groups, ethnic or nationalistic xenophobes, religious extremists and/or cultural throwbacks. A megalomaniacal leader is often considered a good accessory. Yet our above commentator sounds more like another social democrat with visions of a non-coercive utopian paradise. Time to add another category. Or just to toss our assumptions out the window.

That doesn’t mean nobody shares your precise dissatisfactions with modern life in the West. Just that most people don’t, and, statistically speaking, never will. People are diverse. Deal with it.

The key thing here is to figure out what you want... and what you're scared of. If you can find enough people who share your interests (or concerns), or whose goals work well with your own, you might be able to organize a micronation on whatever scale you feel you need.

So given that you actually have some goals, let’s consider your options.

One thing I'd like to emphasize is that many ambitious goals could be accomplished without founding your own mini-country -- or by "micronations" of varying size and capabilities. This is important, because even if you're dead set on an independent nation as an absolutely necessary interim step toward your goals (say, a new supernation, or a transhuman/posthuman existence)... there are still smaller steps you could be taking on the way to your microstate.

For example, regarding the first three fears mentioned above (restrictions on research, restrictions on applying research, research directed unwisely)…

Option 1 (of Many): Move to a country that generously supports the research you're interested in. Most people realize they have this option -- but if you go this way, you should probably make sure you're doing it on your own terms. First off, pick a country with first class scientific facilities and infrastructure (unless you plan to move everything you need in yourself, and have some other way to attract top-notch talent).

For example, go to Canada (few restrictions, ready access to the U.S. for tech and more people, just a step across the border). Or look at Ireland or Sweden in the EU (widespread English fluency, substantial scientific/tech infrastructure, but variable political moods in these and other countries).

One way to move toward creating your independent state would be to start an independent city -- an incorporated research city or research zone inside the borders of an advanced nation. If you could ensure the level of private support and talent such an enclave would need to be world class, there are plenty of countries that would jump at the chance to host it.

Such an option could also exist in the U.S., and arguably already does, in several places. Consider the San Francisco Bay Area (including Silicon Valley), Boston (with its tech industries and legions of universities, including Harvard and MIT), and others, including even Raleigh-Durham in North Carolina and Austin in Texas -- both "red states", for those following American politics. So you have a number of choices with respect to both human resources and political climate, even in a single country.

Going abroad (outside North America), your obvious alternatives include major research centers in the EU, Japan, Australia, South Korea, Singapore, India and New Zealand (contingent, of course, on just what you want to research and what kind of capital you will need to draw on).

The fourth fear -- blocked or distorted media coverage pushing society down the wrong path -- is a harder one to address. Of course, you could always start your own media company, or help others who support similar goals start or take over one. Again, a nation with a smaller population might be easier to influence, but countries with more powerful tech and biotech lobbies might provide you with more allies.

I'm in business, not politics, so I couldn't say; I'll leave this issue to people who are actually interested in influencing politics.

And now, let’s look at some positive goals:

“-A sovereign society built on democratic transhumanist ideals.”

Cool, though obviously you'll want some agreement on what those ideals are. Take a look at the Transtopians for a very different viewpoint from that of Betterhumans or the Extropians. Or to put it another way, just because you’ve adopted a label doesn’t mean that everyone else under that label has the same plans. Folks with revolutionary ideas about the future often have incredibly divergent concepts about what is realistic, what is desirable and even what a small group can accomplish.

Once again, consider your bedfellows carefully.

“-Socially progressive - no money, no debt, no poverty, no greed, each person is self governed. Liberal, free. Free to do anything so long as it does not physically or emotionally harm another person. Fairness and equality.”

This would be harder. You've got a lot more libertarians (and also just well-paid professional and entrepreneurial types) in Transhumanism's ranks. They wouldn't all jump on board with a (perceived) far-left/ socialist idea. Also, you'd have to be able to trade with the outside world, which means keeping some track of your resources.

This doesn't necessarily have to be a deal-breaker, however. One option would be for the group to agree on a set of core principles, with a plan that at some future date the various groups (hopefully no more than two or three) would split off one by one, as they mustered the resources, to form additional, separate but allied Transhumanist states. Splitting nations shouldn't be too hard, especially if you're looking at something physically small and mutable -- like a collection of ships bound together. Splitting a few vessels off from the main mass should be relatively easy.

Hence, you could have the radical Libertarian micronation not far from the social democratic micronation, not far from the Zen Buddhist micronation, etc. Any political or philosophical ideas considered too far from the stated "core principles" could be rejected up front (white supremacists, etc.).

“-A government that is proportionally representative. And has checks and balances for human nature, to inhibit greed and corruption, and promotes ethics and transparency.”

An effective participatory democracy shouldn't be too hard to sell. Really, there’s not much else to say on this point.

“-Government documentation available without restriction to every citizen as every citizen is a member of the government.
-Computerized allocation of resources to make sure that everyone gets a fair share of the societal wealth.
-Self sustainability.”

Regarding wealth, since this is going to be such a tricky issue, you might want to focus on mutual goals. You all want to be prosperous, and none of you want to be a "servant class" to people vastly richer than yourselves, with no hope of progressing up the ladder. And ultimately, as technologies capable of making people Transhuman emerge, you all want reasonably equal access to them.

You can probably use a few such points of agreement to come up with an effective model.

My favorite option? Get really rich (you and all of your confederates), so that you have the resources to get this thing started and more of the contacts who could help make it happen. Having done that, you could conceivably set up some kind of mutual trust that would serve as a safety net for people who "bought into it," whether through money/resources, labor, or both.

There are doubtless other options, but with a fairly tight-knit group of people who were all financially independent and all working hard at their projects or businesses, you could easily come up with something.

The key thing, I suspect, would be to agree to share new Transhuman enhancement tech as it emerged, to the extent that it was practical to do so. For example, if a safe gene therapy method emerged that could increase intelligence, then everyone on the raft/island/space station would get a chance to use it (and be free to say "No" to it as well). But if it were an incredibly expensive cybernetic implant, then the community wouldn't be obliged to come up with $50 to $100 million for each poorer applicant.

Obviously, your mileage may vary on any of these suggestions. But the point is that reasonable compromises can likely be found for any number of major issues -- especially if your organization is working from a position of strength... say, a highly skilled, motivated populace with a great excess of working capital.

“I have a strong suspicion that yes, only a transhumanist nation would be totally accepting of genuine transhumans. I think this because of what has been shown again and again throughout human history. Innocent people and entities will die before change in an existing society will occur. If we create a society that is accepting of those people and entities even before they exist, then no one will have to suffer needlessly. That's just my opinion though, and I can't think of everything, so chances are there will still be plenty of risks, and maybe even a few mishaps.”

And this is your motivation for forming a microstate. The next guy/gal will probably have a totally different reason and may want nothing to do with your plans. And that’s fine. If your positions were going to create that much friction between you, you probably didn’t want her/him onboard anyway. =)

Next time, Part II -- Some Practical Considerations. Yes, I did say practical. =)


Future Imperative