Two weeks ago the New York Times ran a piece on a researcher, Dr. Ronald C. Arkin of Georgia Tech, who is being funded by the US military to research robots that could "behave more ethically in the battlefield than humans currently can". The article claims that the technology, particularly software technology, to "allow autonomous systems to operate within the bounds imposed by the warfighter" is only a matter of time.
One of the rationales for this research, according to Arkin, is a "2006 survey by the surgeon general of the Army, which found that fewer than half of soldiers and marines serving in Iraq said that noncombatants should be treated with dignity and respect, and 17 percent said all civilians should be treated as insurgents. More than one-third said torture was acceptable under some conditions, and fewer than half said they would report a colleague for unethical battlefield behavior."
The study quoted here is the "Mental Health Advisory Team (MHAT IV) Operation Iraqi Freedom 05-07 Final Report" and it can be found here. One of the study's most interesting and most disturbing findings is that one third of soldiers and marines reported "encountering ethical situations in Iraq in which they didn't know how to respond" (p.42). The study's recommendations included making ethical training an important component of all behavioural health counselling and anger management classes, especially those conducted in the combat theatre, and making ethics a part of all after action reviews. The study says nothing about intelligent robots, as far as I could see, but it is obviously being used as ammo by Arkin and the battlefield AI researchers.
The NYT article about Arkin's robot research is here.
By chance, I also came across an excellent discussion of combat robotics and ethics by the science journalists John Horgan and George Johnson on the website Bloggingheads TV. You can hear their discussion here.
A question for another day - would ethical robot soldiers require robot padres?