
The Next Warrior 

Last Updated: 08/13/2013 3:50 pm

Its name is SWORDS, or the Special Weapons Observation Reconnaissance Detection System. Moving about on tank-like tracks, this soldier can be armed with a variety of weapons, including machine guns and grenade launchers. Its 360-degree camera can pan and tilt, read people’s name tags at 400 yards, and see the expressions on their faces, what weapons they are carrying, even whether a weapon’s safety is on or off. SWORDS can accomplish this day or night, in the thick of sandstorms or snowstorms, and can even drive underwater at depths of 100 feet, only to pop up in unexpected places and take out a target with 100 percent accuracy. SWORDS is joined by flying counterparts with names such as Global Hawk, Shadow, and Raven. Equipped with similar sci-fi-like cameras that can see far better than the human eye, these daredevils can pinpoint enemy targets from the sky: the tiny Wasp can skim over rooftops and give views of enemy activities, while the armed Predator can fly as high as 26,000 feet, peer through smoke, clouds, and dust to read license plates from two miles away, and lock onto a target via its laser designator.

All are remote-controlled robots. All send images back to soldiers controlling the robots from computer or TV screens either on or near the battlefield. Or, in the case of the armed Predator drone, flown by “reachback” or “remote-split” operations, in converted single-wide trailers located 7,500 miles away in military bases in the US. Pilots connect to the drones via satellite using control panels that look like 1980s-type two-player arcade video games. Working 12-hour shifts, seven days a week, scanning three TV screens for suspicious activities, these modern-day “aces” can kill with the touch of a control, and leave work each day in time to be home for dinner.

With most born in the minds of sci-fi writers over the years, these are just a few of the modern-day soldiers highlighted by P. W. Singer in his latest book, Wired for War: The Robotics Revolution and Conflict in the 21st Century. Having worked for Harvard and the Pentagon, and having recently served as coordinator of the defense policy advisory task force for the Obama campaign, the 34-year-old Singer is the youngest senior fellow ever at the Brookings Institution. Senior Editor Lorna Tychostup spoke with Singer recently about the capabilities of these futuristic warriors, the ramifications of their presence on the battlefield for the laws and ethics of war, and the profound effects these robotic warriors will have on the front lines and on the political atmosphere back home.

The media tells us all about the surge and successes on the ground, but we don’t hear about the use of these robots in Iraq.
Part of what drove me to write this book was the sense that in many ways people were in a little bit of denial, simply because robotic devices sound so much like science fiction. There hasn’t been much reporting on these systems because, one, the growth rate and use of these technologies happened incredibly quickly. In the air, we’ve gone from a handful to 5,300 when I wrote the book, to 7,000 now. In the last few months, we’ve added another thousand in the air. That’s a lot in a short amount of time. Two, writing about robotics often comes across as science fiction—and that often makes it easy to ignore battlefield reality. Third, the way our media approaches things is complex. What doesn’t fit within previous understandings can’t be summed up in a single bullet point in 15 seconds, and stories that don’t fit pre-existing storylines often don’t get reported. The surge was an incredibly complex operation with different facets, yet it has very different meanings to different people. For some, it added more troops to Iraq. For others, it’s represented by the Sunni Awakening, the turning of the tribal leaders to our side. Yet others see that the US figured out how to better use our technology. The book cites the role of Task Force Odin, which broke the IED-makers’ asymmetric advantage by finding and killing more than 2,400 insurgents either making or planting bombs, as well as capturing 141 more, all in just one year. The surge was all of these things, and if any one element had been missing you wouldn’t have had the success. But that’s a very complex story to tell in 15 seconds on CNN or FOX. And even with this amazing technology, war is still driven by human psychology, which shapes how the technology is utilized and the dilemmas that come out of it. The main argument of the book is that you can’t forget the human side of war, even when you’re talking about this incredible technology.

One example you give is the $5,000 MARCBOT that operates like a remote-controlled child’s toy car. One day an innovative soldier attached explosives to it, directed it toward insurgents, and blew them and the robot up. The combined use of technology and the human mind.
That’s exactly what happened with airplanes historically. At the start of WWI, they were just used for observation. Then human ingenuity jury-rigged a little box to drop out of them and suddenly you had armed airplanes. Then someone said, “They’re doing it, we’ll do it too.” Then, “Well, if they’re dropping bombs on us and we’re dropping bombs on them, shouldn’t we try to shoot each other down in the air?” All the various ripple effects outward onto politics and law—“What’s legal with aerial bombing?”—followed. All came out of just a little bit of human ingenuity. It’s the same with robotics.



The emerging technology of robotics is the new hot potato even among such organizations as the International Committee of the Red Cross and Human Rights Watch, who find it easier to raise funds for issues garnering media headlines than for investigations of weapons not yet on donors’ radar screens. You talk about how the military is enjoying this time of no regulation, coupled with obscene spending and absolutely no oversight, while these usual watchdog agencies hide their heads in the sand waiting for the issues to hit the newsstands. Without media coverage, how can legal issues be addressed when the public isn’t being made aware?

I have a somewhat jaundiced view of this from my experience with a previous book I did on private military contractors, which came out right before the Iraq war. I wrote about Halliburton, Blackwater, and DynCorp and how we are increasingly turning military roles over to private military companies, but the issue was basically ignored. In fact, there was a research study that showed—despite the fact that we had more contractors in Iraq than US soldiers—only one percent of all media stories coming out of Iraq even mentioned contractors. It’s not until you have incidents like the Halliburton billing scandals, or Abu Ghraib involving Titan and CACI, or the Blackwater shootings that we get coverage. They don’t really become news stories until the problems break. The same is true now with robotics. I am hopeful my work will generate attention now, before the worst problems break.

Artificial Intelligence (AI) is what corrects my spelling and grammar mistakes as I write. You write that there is software being developed that will give robots the ability to be creative—“to write catchy pop songs, design soft drinks, discover substances harder than diamonds, optimize missile warheads, and search the Internet for terrorist communications.” Fears, real or imagined, are associated with robots and their autonomy, and specifically with AI and the implications of a Terminator-type takeover scenario.
One, we are increasingly surrounded by more and more AI, from voicemail to the little annoying paperclip that pops up in Microsoft Word, to the more sophisticated stuff in counterterrorism surveillance and data mining. We just don’t call it AI. Second, we see this doubling effect happening every two years in computing power, Moore’s law operating—particularly with intelligence. Combined with greater memory storage and connectivity, you really do get into the world of the potential of science fiction coming true. One area that really needs to be talked about is evolutionary software: programs that don’t work exactly the way you planned them, but instead build upon themselves and move in new, unexpected directions. People think you need to work on this because that’s the most efficient way for these programs to get better and better. The problem is, you’re building them to go in unexpected directions, so you shouldn’t be surprised when they do. That’s the part I think is questionable. We have to figure out the contexts where that’s allowable, and what the ethics behind such development are. You can’t simply say, “I’m working on this and hope it works out for the best.” Unfortunately, that’s kind of where we are right now.
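To make the idea of evolutionary software concrete, here is a minimal sketch in Python (a hypothetical illustration, not an example from Singer’s book; the target value, mutation size, and population numbers are invented placeholders). The program is never told the answer: it breeds random guesses toward it through scoring, selection, and mutation, which is why such programs can wander in directions their authors never planned.

    import random

    TARGET = 42.0  # hidden "right answer"; the evolving population is never shown it directly

    def fitness(x):
        # Higher is better: score a candidate by how close it lands to the target.
        return -abs(x - TARGET)

    def evolve(generations=100, pop_size=50):
        # Start from random guesses the programmer never chose by hand.
        population = [random.uniform(-100, 100) for _ in range(pop_size)]
        for _ in range(generations):
            # Selection: keep the fitter half of the population.
            population.sort(key=fitness, reverse=True)
            survivors = population[: pop_size // 2]
            # Mutation: refill the population with slightly perturbed copies of survivors.
            population = survivors + [s + random.gauss(0, 1.0) for s in survivors]
        return max(population, key=fitness)

    print(round(evolve(), 2))  # typically prints a value near 42.0

Even in this toy version, the final answer emerges from the process rather than from any line a human wrote, which is the property Singer flags as both powerful and unpredictable.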

It makes me uneasy that we would create something that would be unstoppable, like HAL in 2001. I wonder if, in addition to the link to science fiction, there is another factor where people simply cannot accept the ramifications of this new technology. If a Martian landed in front of people, some would not be able to see it because their brains would not allow it.
When something seems like science fiction, it’s often hard for people to digest it as reality. A classic example is atomic weaponry. Politicians and citizens at the time said horse pucky to the idea of using radioactive materials to build a bomb that could blow up an entire city. So you had some people living in denial regarding atomic weapons, the very name of which comes from science fiction writer H. G. Wells. The flipside of science fiction is that it often inspires people to go out and make those things real—you had other people who were inspired to make this science fiction real in the Manhattan Project. What I’m intrigued and troubled by is that when it comes to AI and robotics, and also broader areas of research today, things like genetics and nanotechnology, our policymakers remain woefully ignorant not merely of where we are going to be in five years, but of where we already are right now, or even where we were five years ago. A top defense advisor to the Pentagon made a joke about the Internet: “One day the Internet will probably be in 3D and you’ll be able to have little fingers to walk around inside.” My wife actually works for Linden Lab, the company that created Second Life, so I asked him, “You’re talking like this is something futuristic, yet it’s been up and running for five years! Haven’t you heard of Second Life?” It’s not like Second Life is hidden away. It’s been featured on everything from “CSI” to “The Office.” You don’t even have to access it to be aware of it—you simply have to keep your finger on the pulse of pop culture. But this story shows just how far behind policymakers often are in knowing where [the state of] technology is. So when it comes to wrestling with the effects outside of the technology, they’re very ill-equipped. My hope is to show people what exists. They may not agree with everything I say, but at least I’m giving them a realistic framework for how to talk about it—not a science fiction way.

The book reads like an overwhelming candy menu. You list an item, supply a customer review or comment, and explain its ingredients and what it will do—good or bad—to your diet. “Network-enabled telepathy,” where chips are implanted into the brains of paralyzed people, allowing them to move a cursor just by thinking—again, AI, the coupling of robotics with human brains. You talk a lot about the positives, but what are some of the negatives when we talk about implanted chips, or robots operating directly from human brains or vice versa?
You get into very interesting questions, like: Where’s the exact dividing line between man and machine? Who should or shouldn’t be allowed to have these enhancements? Who regulates it? What is it like to have these and then have them taken away from you? If you’re equipping soldiers, is it something they get for life, or just during deployment? From what we’re learning about people who have had some of these implants, there is a psychological ripple effect—how they look at and think about themselves. I quote one scientist who implanted himself with a chip talking about how he felt superior to everyone around him. It sounds like something right out of a “Star Trek” episode, but he’s talking about a real-world experience. All these things are way out there, and yet with the example of the guy moving the computer cursor via thought, you talk to someone at DARPA and they’re like, “Oh, that’s old news! We did that five years ago.”

You also write about advances in prosthetics; the implanting of radio-frequency identification chips that allow entry into a health club or automatically charge groceries; and memory chips implanted into the human brain that will allow it to access gigabytes of information automatically, thus enabling people to make cell phone calls and send e-mails. You wrote: “Technological enhancements are creating a new type of human species, the first time in 25,000 years we have more than one type among us.” This all sounds fantastic and great, but you state that these technological advancements will go to the empowered and the rich. Will this cause more of a mega-divide between haves and have-nots, and possibly new frontiers of war, new Davids fighting against Goliaths?
That’s something I certainly fear. The example of the self-implanted scientist talking about how he felt superior is key. We need to remember that while these things give you enhanced capabilities, we still have age-old human psychology in us. We’ve fought wars over all sorts of perceived differences among us. We used the color of someone’s skin to determine whether we thought they were superior or not, whether they could be owned as a slave or not. And now we’re talking about something that is a real difference—a person who doesn’t just think they’re more intelligent, but truly is more intelligent, or truly has greater strength but didn’t have the discipline to build it themselves; they simply had it because they were richer. This creates a lot of strange stuff when you add in our own normal psychology. A futurist who presented before me at a conference basically made the argument that we’re reaching our next step in evolution. He called it homo roboticus and talked about everything from chip implants to artificial legs in this wonderful, exciting, “Isn’t this really cool!” way. I brought up two things. One, the military is paying for all of this because of war, not general human betterment. Second, the story of evolution is one of winners and losers, and everyone sitting in the room was actually going to be on the losers’ side according to what he said.



You say that much of what is written in human history is simply the history of warfare.
It’s creativity that has truly distinguished us as a species, allowed us to reach for the stars and create art and literature. And now we’re using our creativity to build this incredibly fascinating technology. Some people even argue that one day we’ll produce a new species, or the next stage of our own. But we also must be completely honest with ourselves—the main reason we’re doing it is war. And that’s really sad.

You write about “Singularity”—a qualitative advance where prediction of what comes next becomes difficult and all the rules change, in part because we are no longer making the rules.
The idea is that every so often, something comes along in history that rewrites the rules, and it becomes almost impossible for people living before that time to have a good sense of the possibilities and dilemmas people will face after it. The classic example is the printing press. If you were living before its creation and were shown this rickety contraption, you could not fathom that it was going to create mass literacy, the Reformation, the Thirty Years’ War that would leave half of Europe dead, democracy, or help lead to the liberation of women in society. It is simply not possible. Today, there is a belief that robotics, and most importantly AI, will reach that point. We can try to predict some things, but if we are honest with ourselves, we know that stuff is going to happen that we can’t even figure out yet. That’s a singularity—a break point in history. The book mentions the really interesting and active debate among very serious people who think we’re going to reach it before most of us pay off our mortgages. Take gunpowder: within the realm of war, the rules of the game were a lot different before gunpowder than after. Robotics is akin to that, but maybe in a greater historical context, because it doesn’t just change the “how” of war, it changes the “who” of war at the most fundamental level. We are living through the breakdown of humankind’s 5,000-year-old monopoly on the fighting of war. That’s a rather big deal.

You also write about advancement theory, a school of thought that explains how old paradigms are broken by people who look at the world in a fresh way; how brilliant people can do something that makes no sense to 99 percent of the population at the time, but later on seems like pure genius. You make clear that the robotics field’s exponential growth, specifically as it pertains to war, presently lacks any doctrine. Add in rivalries between the Army, Navy, and Air Force, and what one commentator in the book called the military’s “attention deficit disordered” way of purchasing these systems. The war in Iraq is on. The next front is Afghanistan. It sounds like a mess—robotics stepping out of science fiction into an archaic, bureaucratic world where no one is steering the ship of its development.
There’s a great quote in the book from an Air Force officer that encapsulates the current situation: “There’s gotta be a way to think this better. Right now it’s just ‘give me more.’” We’re not asking, “How do I do this better?” There are two layers to the doctrine issue. First, you have to create a system of thinking around it. That work is just starting in the military. There is the question, “Will you be able to pull it off while we’re still fighting a war?” The lesson of WWI is that it’s often tough to create doctrine during a war; it’s only after the fact that they figure it out. Second, there’s no one set doctrine. There will be a debate over the best way to use these systems, with someone being right and someone being wrong. It’s not as simple as “Let me figure out how to do this.” It’s “Let me figure out the best way to do this.” I’m worried that the US military’s current “bigger is better” mentality regarding technology and how we develop and buy it could turn out to be completely wrong. Add in the current state of manufacturing in the US, and that of our science and mathematics education [system], and you have a pretty scary brew. I certainly don’t want to see America end up the loser of this worldwide robotics revolution.

Plus there’s what you call “open source warfare”—robotics in corporate and private hands, not necessarily regulated by a government or military.

Exactly. Open source warfare is just like what happened in the software world with open source software. War is no longer dominated by one or two major players; it’s not a space that a couple of superpowers, or even governments, control. Non-state actors large and small, from organizations like Hezbollah all the way down to individual rogue terrorists, have entered. The scary thing with robotics is that it’s not aircraft carrier or atomic bomb technology, where you need a huge industrial structure to build it. It uses a lot of commercial technology—you can even do it yourself. For approximately a thousand dollars you can build a drone at home that’s very much comparable to the Raven drone our soldiers use in Iraq. Scary things are created when you have this cross between the current war on terrorism and these new technologies coming in. It means a number of actors are going to be able to access pretty dangerous technologies rather easily. We’ve already seen that: Hezbollah used four drones in its war with Israel. In a war between a state and a non-state actor, the non-state actor is using technology just as sophisticated. How does this empower a future Unabomber, let alone an al Qaeda-type organization?

This book isn’t just about robotics and war. It’s about a mega-shift in world power where power is accessible to individuals, or a failed state where children rule; where the uneducated and unsocialized can get their hands on mass destructibles—“losers” gaining control and power. A world where “any sufficiently advanced technology is indistinguishable from magic.” You quote inventor and futurist Ray Kurzweil: “It feels like all 10 billion of us are standing in a room up to our knees in flammable liquid waiting for someone, anyone, to light a match.”
The forms of government we have often become tied in with technology. It was the gunpowder revolution that allowed kingdoms and the rise of the state to happen, versus city-states or dukedoms; linked with the idea of mass mobilization, it’s how democracies ultimately triumphed in war. Well, what happens when you have another technological revolution? We’ve already seen so many ways the state is under siege today—the inability to control its borders, not just regarding immigration, but broader forces in globalization: global warming, disease flow, and now the terrorism game. And so we may be seeing the end of two long-held monopolies. For the last 400 years the state was the dominant player in war, and for the last 5,000 years war was something involving only human participation. Now we’ve seen these other entities come along and challenge the state, plus we’re seeing more and more machines being utilized in the fighting of war.

Robots are only as responsible as the person operating them, according to what you’ve written. When robots operate on AI, who takes responsibility for the act? Couple that with the human need to blame someone, to have someone take responsibility, to exact punishment, someone who must pay. What is the potential for robotics to change the face not only of law, but also of human behavior?

There’s a great scene in the book where Human Rights Watch staff argue over accountability and start referencing not the Geneva Conventions but the Star Trek Prime Directive. It illustrates how we’re grasping at straws to figure out right from wrong in this new world. When the rules of what’s possible are being rewritten by technology, the rules of what is legally and ethically proper start to be rewritten too. How does this play in the war of ideas against radical groups? If a mistake happens because of the technology, there’s an assumption that we must have meant for it to happen, because the US has lost the benefit of the doubt in a lot of countries around the world. It is hard for people to digest that the mistake was just a technological error, that there was no human behind the accident. People won’t believe that. And yes, people want to place fault on someone. But where do you place fault when there’s not a distinct “this one decision is what made it happen,” but rather “it was a series of decisions” or “it was no decision that made it happen”? If the system kills the wrong person, whom do you hold responsible? The operator? The commander? The software programmer? No one has a really good answer. I try to identify certain pathways that can be shored up through the law—some sense of responsibility within the system. You can’t say, “Well, you know, I turned it on and then it did what it wanted.” No. You still hold responsibility for turning it on. I use the parallel of dog ownership. If a dog bites someone, the owner bears some responsibility if they helped set that chain of events into motion, such as if they trained the dog wrong, or put the dog into a situation where it was likely to bite a kid. They can’t just say, “Well, the dog did it, it’s not my fault.” People need to bear some responsibility for the things that happen with these systems, even as they get more and more technologically advanced. That’s the endpoint for me. The very few times people talk about the ethics of robotics, they really only talk about it in the Isaac Asimov way—the ethics of the machines themselves. But the real ethical discussion needs to be about the people behind the machines.

I don’t think history is going to look kindly on us if we don’t have these discussions right now. It may look at us the way we now look at the inventors of the atomic bomb, thinking, “You got so excited about the technology, you forgot all of the ripple effects it was going to have. How could you not have taken a deep breath and said, ‘Let’s think about this’?” The difference is that this new technological revolution isn’t happening in some secret desert lab that no one knows about. I was able to write a book about it all; it’s right in our face. So we’re not going to have the excuse a previous generation did for why they got it wrong.

Two US Army soldiers prepare to launch a Raven drone. According to one report, one of the unexpected results of the new technologies is a “military culture clash between teenaged video gamers and veteran flight jocks for control of the drones”; from P. W. Singer’s book Wired for War.

Soldiers control unmanned drones from a US Army base thousands of miles away.

The SWORDS robot briefly used by the US military in Iraq.