
Future Imperative

What if technology were being developed that could enhance your mind or body to extraordinary or even superhuman levels -- and some of these tools were already here? Wouldn't you be curious?

Actually, some are here. But human enhancement is an incredibly broad and compartmentalized field. We’re often unaware of what’s right next door. This site reviews resources and ideas from across the field and makes it easy for readers to find exactly the information they're most interested in.


The future is coming fast, and it's no longer possible to ignore how rapidly the world is changing. As the old order changes -- or more frequently crumbles altogether -- I offer a perspective on how we can transform ourselves in turn... for the better. Nothing on this site is intended as legal, financial or medical advice. Indeed, much of what I discuss amounts to possibilities rather than certainties, in an ever-changing present and an ever-uncertain future.

Saturday, January 21, 2006

Facing the Robot Menace? -- AI, Bio, Cyber, Soc, Tech



Now, if there's something that I find disturbing, it's not cybernetic implants in human brains, or the eventual development of more effective AI. No, it's something like this: the creation of an artificial "brain" of 25,000 neurons linked up to a simulated F-22 fighter. Lovely notion, eh? We can grow artificial brains trained to fly jet fighters into combat for us.

Talk about a fuzzy line to walk. I can accept the cybernetic therapies that have been used to treat Parkinson's, depression and missing limbs, obviously, but growing a miniature brain to serve as nothing more than the control system for a weapon of war seems a little disturbing. Why? Because a sentient being has established rights and is apt to look out for its own welfare. An increasingly complex AI/living-brain melding that is undergoing constant upgrades is more apt to cross that line into a dim consciousness (say, one capable of experiencing longing, or sorrow, or pain, or despair) without anyone knowing. And we probably don't want he/she/it controlling powerful weapons systems the day he/she/it crosses that line.

Meanwhile, at least in fiction the idea of an AI takeover has been spoofed, while in the real world more people seem to be talking about it than ever before. Freeman Dyson, for example, has described a "disturbing" conversation he had with researchers at Google who were planning to download all the world's books into a single, searchable database.

"We are not scanning all those books to be read by people," explained one of my hosts after my talk. "We are scanning them to be read by an AI."

Good to know.

Google, perhaps unsurprisingly, has spun the story somewhat differently.

Responding to a direct question from Tom Standage, technology editor of The Economist, Google's Levick did not outright deny that Google was developing AI technology. Instead he postulated that the Google employee's comments were probably referring to the idea of "intelligent networks" of information rather than artificial intelligence.

However, Levick did admit that Google's founders believe that current search technology is still in its infancy and that the future will look very different. "Larry [Page] and Sergey [Brin] would say that search is nothing like it could be right now," he said.

When asked whether the old paranoia about omnipotent, malign computers was enjoying a renaissance, Levick admitted that such concerns had grown more abundant, but insisted that Google's core philosophy of "Don't be evil" guides all its actions.

Comments by Google's Senior Research Scientist helped clarify (or muddy) the company's stance on artificial intelligence research.

"AI applications are using the infrastructure to get people useful information in interesting ways," said Sahami, according to reports. "There is no human intervention. Google News is an example of where AI is making a huge difference. It's used several million times a day," he added.

Sahami also reportedly hinted at AI-based research in progress at Google that has yet to be deployed, such as voice-driven search and query results clustering to help users navigate. "We want to combine information retrieval, large systems, and AI to work together towards the next generation of search engines," he said.
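For the curious, the "query results clustering" Sahami mentions can be illustrated with a toy sketch: group result snippets whose vocabularies overlap, so a user can browse hits by topic rather than scanning a flat list. Everything below -- the sample snippets, the greedy single-pass approach, the Jaccard-similarity threshold -- is my own illustration of the idea, not Google's actual method.

```python
# Toy illustration of clustering search-result snippets by vocabulary
# overlap. The snippets, the greedy algorithm, and the 0.15 threshold
# are all illustrative assumptions, not any search engine's real method.

def tokens(text):
    """Lowercase word set for a snippet."""
    return set(text.lower().split())

def jaccard(a, b):
    """Overlap between two token sets, from 0.0 (disjoint) to 1.0."""
    return len(a & b) / len(a | b)

def cluster(snippets, threshold=0.15):
    """Greedy single-pass clustering: add each snippet to the first
    cluster whose seed it sufficiently resembles, else start a new one."""
    clusters = []  # list of (seed_token_set, member_snippets)
    for s in snippets:
        t = tokens(s)
        for seed, members in clusters:
            if jaccard(t, seed) >= threshold:
                members.append(s)
                break
        else:
            clusters.append((t, [s]))
    return [members for _, members in clusters]

results = [
    "jaguar big cat habitat and diet",
    "jaguar cat species range in south america",
    "jaguar car dealership prices",
    "used jaguar car models for sale",
]
for group in cluster(results):
    print(group)
```

Run on the ambiguous query "jaguar," the sketch separates the animal snippets from the car snippets -- the kind of navigation aid the article describes.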

If all of this seems a little too real (or too bleak) for you, let me point you again towards the spoofery. There's even a book.

Of course, the main controversy surrounding Google at this hour is the question of whether the company will be forced to reveal a week's worth of Internet searches to the U.S. Government. While an interesting civil liberties question, I'll skip that debate (you can't even read this Times column without subscribing, hah!) and focus on the human enhancement/technological singularity issues raised (or mocked) above.

Personally, I'm still less concerned about the dehumanizing potential of radical human enhancement -- not because it couldn't happen, but because most of the means by which it could occur still have too many major drawbacks. Take cybernetics, for example. Most enthusiasts remember how Neo got jacked into the Matrix in The Matrix and rapidly downloaded martial arts skills into his nervous system.

That's an awesome concept. However, doing it with anything like our present technology -- or utilizing any other complex neural implants in the central nervous system -- is extremely problematic. Why? Because physically contacting so many nerves and supplanting the signals they carry -- when you have no idea what each fiber is transmitting or receiving -- requires extremely advanced technology and exhaustive research into the functions of the brain and peripheral nervous system. Basically, you need at least a little nanotech to have even a chance of doing all that... or else an innovation that lets you bypass the brute-force method of connecting to and replacing each individual strand of your own "fiber optic" (and "fiber auditory," "fiber olfactory," etc.) network.

Regarding artificial intelligence, some of the problems raised by Daniel Wilson in his book are actually relevant to that conversation. But let's ignore the very real stumbling blocks faced by today's AI researchers. I'm personally convinced that we'll achieve limited AI in the not-too-distant future -- we've already got machines doing basic scientific research, which means this prediction may, to a degree, have already come true. I'm not certain how the whole race to develop an all-powerful AI to rule over the known universe is going to go. Honestly, we could end up with biotech/genetically augmented humans in the near future, since that technology is showing substantial progress. And superhumanly intelligent humans could conceivably maintain limited AI programs doing important support work -- enabling their relatively slow biological brains to drive blindingly fast progress, and keeping biotech-based human augmentation well ahead of most basic capacities of computers.

(Say that five times fast. Thank you.)

I realize the above may be heresy to many AI programmers and enthusiasts. But I raise it as a possibility for exactly that reason -- it's one possibility. Any True Believer who claims to know What the Future Will Bring is likely fooling themselves. Or, as with my limited AI prediction above, is prophesying something so close that it's technically already happened.

