Feedback III on... Posthuman Dreams, or Being Both More and Less than Human... -- Dark, SF, Soc, Super
Michael Anissimov offered the following comments about my post.
You don't believe in the possibility of a hard takeoff; that's fine. Many do, and based on sound reasoning. An AI that runs on circuits millions of times faster than your brain will not wait years to advance. Advancement could take a while from the AI's perspective, but happen very rapidly from ours. You are discounting the possibility of beings that generate progress on faster-than-human timescales, because it *sounds* radical. But if a faster-than-human thinker gets its hands on faster-than-human mechanical actuators, progress will simply take off at a faster-than-human rate. Nothing religious or Rapturous about it - simply the laws of physics.
Actually, Michael, I don't discount the possibility of a hard takeoff at all. I am, however, realistic about the fact that any method of achieving a hard takeoff Singularity could take a lot longer than its proponents believe, and may well be eclipsed by another technology first.
For example, the accelerated speed at which a computer intelligence could function is the most powerful argument for the eventual obsolescence of biological life. But in practice, there may be serious unforeseen challenges to the emergence of full artificial general intelligence. We don't know, for instance, to what degree the neural networks of our brains depend on the biochemistry of our neurons and other brain cells. If they do, if a particular, complex series of biochemical reactions must be simulated just to model the probability of a neuron firing and other brain functions, then how much processing power will it take to model intelligence as we know it?
Will it necessarily be possible to do so? If we have to start modeling the quantum behavior of molecules in cells, then suddenly we're looking at a much tougher brute-force problem. Can all of this be bypassed by a more elegant program? Quite possibly, but the beauty of the brute-force method is that you can create intelligence without necessarily understanding fully how it works.
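To put rough numbers on that question, here's a minimal back-of-envelope sketch in Python. The neuron and synapse counts are commonly cited ballpark figures, and the fidelity multipliers are purely my own illustrative assumptions, there to show how quickly the required compute balloons as the model descends from abstract spikes toward biochemistry:

    # Back-of-envelope: compute needed to brute-force a brain model at
    # several levels of fidelity. Neuron/synapse counts are rough,
    # commonly cited figures; the fidelity multipliers are assumptions.

    NEURONS = 1e11             # ~100 billion neurons (ballpark)
    SYNAPSES_PER_NEURON = 1e4  # ~10,000 synapses per neuron (ballpark)
    SIGNAL_RATE_HZ = 100       # assume up to ~100 synaptic events/second

    # Assumed extra operations per synaptic event at each fidelity level.
    fidelity = {
        "simple spiking model": 1e0,
        "detailed biochemical model": 1e3,
        "molecular-level simulation": 1e9,
    }

    base_ops_per_sec = NEURONS * SYNAPSES_PER_NEURON * SIGNAL_RATE_HZ

    for level, overhead in fidelity.items():
        print(f"{level}: ~{base_ops_per_sec * overhead:.0e} ops/sec")

Even if every constant above is off by an order of magnitude, the gap between the first line of output (~1e17 operations per second) and the last (~1e26) is the real point: the fidelity question dominates the hardware question.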
And then, as others have pointed out, we have no way of knowing whether an AI would actually be friendly, or wise enough to avoid obliterating us all in a well-meaning gesture. After all, a system that simply self-optimized to achieve some narrow goal could outthink us in terms of raw speed, and with sufficient destructive power could annihilate the whole of humanity while attempting to, say, render us all down into computronium. As a companionable gesture.
Does all of this preclude AI? No, and in fact, I believe that a very limited artificial intelligence is with us already. We already have a computer that sifts and collates the world's medical journals, looking for pharmaceuticals with multiple applications, and there's another that has been systematically testing the genome of a worm one gene at a time: knocking out a particular gene in its test subjects to see what happens, hypothesizing as to what that gene does, and then experimenting to test the theory. That's pretty impressive.
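For the curious, here is a toy sketch, in Python, of the closed loop that worm-genome system appears to run. Every function name below is a hypothetical stand-in for real lab automation, not any actual system's interface:

    # A toy sketch of a "robot scientist" loop: perturb one gene,
    # observe, hypothesize, then design an experiment to test the guess.
    # All function arguments are hypothetical placeholders.

    def investigate(genes, knock_out_gene, observe_phenotype,
                    propose_hypothesis, design_experiment, run_experiment):
        """Cycle through genes one at a time: perturb, hypothesize, test."""
        findings = {}
        for gene in genes:
            knock_out_gene(gene)                 # silence a single gene
            phenotype = observe_phenotype()      # see what changed
            hypothesis = propose_hypothesis(gene, phenotype)
            experiment = design_experiment(hypothesis)
            result = run_experiment(experiment)  # confirm or refute
            findings[gene] = (hypothesis, result)
        return findings

The interesting part isn't any single step; it's that the whole cycle closes without a human in it.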
Of course, given AI enthusiasts' insistence that computers will naturally be applied to developing more powerful computers, it's ironic that both of these tools are focused on biotech. Then again, there's great human interest in how the body functions, and biotech, despite popular assumptions, may be an easier field for formidable new computers to make their mark in, especially where biologists are restricted by inadequate processing power (the Human Genome Project) or need data best gathered by rapid, automated systems (see above). The existence of these tools actually makes the point that limited AI could exist without making human beings irrelevant. In fact, this kind of tech could enable hyperbright scientists and inventors to delegate "intellectual grunt work" to "lowly computers" while they and their students/colleagues/employees focused on other challenges. In other words, near-light-speed computing could leverage mere mortal and transhuman minds as well as posthuman ones.
Meanwhile, it is probably possible to vastly amplify the speed at which the unmodified human brain performs certain intellectually critical tasks. Not up to the speed of an optimized computer, at least not without radical physical changes or undiscovered principles, but substantially greater than a mere 10x shift. (I'd say more, but I may be patenting one technology for accomplishing this, so I'm afraid I'll have to keep quiet about the technique for now.) If the human brain can be radically optimized even before any augmentations, imagine what the synergy of all the human enhancement fields could theoretically achieve. Especially if the biotech industry takes a page from AI boosters and applies ever-improving computers and biological intelligence to the problem of developing yet smarter scientists, and so on.
But the point, again, isn't that true AI isn't possible. Or even that it isn't imminent.
But consider: what if you're absolutely correct about everything you stated above?
Tell me, what makes more sense: to accelerate these achievements by applying the best resources possible to enhancing the minds of researchers (assuming this didn't distract them inexcusably from their work), or to let our present research teams proceed without aid? For that matter, given that a potentially hostile, nigh-omnipotent supercomputer (whether fully conscious or not) poses an existential risk to all human life, and possibly all life, period, wouldn't it make sense to hand this challenge (one that seems almost impossible for ordinary humans) to actual, functional transhumans to solve?
Am I proposing these options? Actually, I'm proposing that the various human augmentation/intelligence enhancement sub-fields need to mature a bit and start talking to each other. Not only could they find great help in each other's methods, but in each other's discoveries as well. Why shouldn't already smart people use accelerated learning, creativity enhancement, nootropics, non-invasive mindtech, and/or biotech augments to amplify their intellectual talents? (Why shouldn't we all?)
And remember, that's not entirely a rhetorical question. I'm asking it because I think it's a question the public should start to answer for itself. Because whatever the world's immediate decision, our best answers will come when as many people as possible not only have a say, but at least know what is being discussed. It will be far harder for simple ignorance to prevail if everyone knows the facts to begin with. And right now, most researchers don't know the facts about related, much less more distant, fields.
My point about Ms. Theron? Among other things, if we have a slow takeoff, people like her, with a great deal of wealth, drive, status, gifts and talent, are apt to remain at the top of the heap. The same situation is likely to prevail if we have a fast biotech takeoff. This shouldn't necessarily be the case -- most people reading this post have the tremendous advantage of knowing these things are possible, and of being able to find links to unique resources... many of which are free. So why aren't aspiring transhumans pushing themselves to the limit? Wouldn't that make a lot more sense, if only in getting their research launched, their projects completed and their funding secured?
-----
Incidentally, as a nine-year-old back in 1979, I had an imaginary storyline in which machines built out of individual atoms and molecules had reconstructed a force of supermen into godlike beings who thought at the speed of light, and whose "enemies," a legion of hideously advanced yet atavistic cybernetic warriors, posed merely the trivial challenge of angry children: how were they going to let these cyberguys enjoy their freedom this week without allowing them to hurt themselves or anyone else? Simplistic, yes. My storylines got more sophisticated after I turned 10 or 11 or so. =)
Or, to cut a long story short, I've been aware of the possibilities of nanotech and posthuman life since years before Engines of Creation. I've got nothing against seeing a nano-utopia come into existence. I've probably been anticipating it longer than anyone now reading these words. But it isn't the only possibility. And the faster people realize that, the faster we'll be able to create the futures we want, and avoid those we despise. =)
Hi there Ralph. I really like your blog! I just wanted to complain about one little thing - the title categories. It's distracting when they appear in the subject lines. Perhaps you could put them at the end of your posts, like technorati tags? Or transfer your blog to a service like Typepad or Wordpress that automatically integrates the ability to categorize posts? I'm just sayin', is all. :)
An excellent point, Michael. The reason I'm doing that now is that I want these posts to be relatively searchable on Blogger. I considered putting the categories at the bottom of each post just now, but it occurs to me that anyone looking through this site's archives will only have article titles to go on, and guessing a post's contents from its title alone doesn't make searching scores if not hundreds of them any easier.
Nevertheless, making the site easier and more enjoyable to read is also a priority for me.
Any suggestions out there? Who would prefer to see the category abbreviations at the bottoms of the posts? Who likes them at the top? If you have a passionate opinion about them (or any opinion at all) please say so in your comment on this post. Thanks a lot.
Future Imperative