Perfect Soldiers
Someone on another list posed the following question:
Given that most western democratic states have a fear of casualty lists for any conflict - will we see a rise of the use of robots in war in the near future, and what role will they take? I've collected a few links to the use of robots by the US:
http://www.cbsnews.com/stories/2003/01/13/tech/main536254.shtml
http://www.usatoday.com/money/industries/technology/2003-05-12-robotwars_x.htm
http://news.bbc.co.uk/2/hi/americas/4199935.stm
I currently help maintain robots in an industrial setting and can testify to their shortcomings. Robots are expensive, hard to maintain, and prone to breakdown - so is the rise of the robot a smokescreen, or are the military contractors looking for the next weapon to sell to the armed forces?
I replied:
I should probably dig up some links, but the U.S. military is already leaning heavily towards making its most powerful conventional forces primarily robotic. Noises keep coming out of the Pentagon complaining that the Air Force's latest next generation fighter-bomber will be obsolete in just a couple of decades or so... because unmanned combat air vehicles (UCAVs) will be replacing them on missions as soon as they can equal or outperform human pilots.
Which raises the intriguing question of how people will feel about artificial brains controlling the most powerful conventional weapons on Earth. Supposedly these vehicles require a human to throw a switch to authorize their attacks, but given issues of hacking and jamming, we may face serious stumbling blocks regardless. Does someone just need a password and an encryption key to take over each of these weapons? Or are they strictly limited to an "Attack or Abort" decision once launched? If such weapons are to compete effectively with living beings, they will need either unjammable, uninterceptable communications (such as quantum entanglement) or a greater degree of autonomy. Otherwise, an enemy would simply have to jam their communications before engaging them in an aerial "dogfight."
Now imagine a 20,000-neuron brain that can fly an F-22. Apparently, one already exists. This fusion of limited living minds with advanced computers is also a serious issue -- aside from the basic problem of enslaving a living intelligence, there is the question of when such a system achieves some degree of sentience. Sentience is key to many issues, not only of human rights, but of the capacity to know hostility and resentment, and to plot rebellion.
Both of these options raise an important point. Even without full-blown, superintelligent AI, relatively limited intelligences could increase in complexity and worldly power until one or more of them were in a position to do considerable harm. Though it may sound like a B-movie plotline, we don't need an outright "robot rebellion" to face some vexing practical and ethical issues. Imagine a partial brain/computer fusion that achieves consciousness accidentally as part of an upgrade, and whose entire "childhood" has been spent blowing up either people or nifty graphics on its overlaid digital map of the world. What sense of morals does this broken cyborg have? What sense of empathy, or even awareness of the existence of other beings outside its limited circle of associations (other fighter-bombers, the people giving orders, etc.)?
A revolution by intelligent robots may seem far-off or implausible. But what about an idiot savant making fatal mistakes, or suffering mental illness, while in control of hundreds of small, guided anti-tank bombs over a battlefield... or simply near a city? And what if that happens while such an aircraft is carrying nuclear weapons?
Future Imperative