Chapter 4 Research Series: Not Much Ado About AEGIS

The research series consists of five lectures and five seminars covering one of my research projects that relates to the course. The lectures will demonstrate the utility of approaching a contemporary issue of war and technology (lethal autonomous weapon systems) from an historical perspective. My argument, in a nutshell, is that most of the important building blocks for lethal autonomous weapon systems were in place before The Terminator was even filmed, let alone before international campaigns to ban lethal autonomous weapon systems began. The seminars consist of counterpoints to the lectures, examining similar issues from a different theoretical perspective.

This research series is designed to complement the final evaluation for this module, with discussions to help you design your own 5,000-word research project. The point of this first research series is that you will spend a substantial portion of your time in class discussing and debating your own research projects. Unlike the lectures in the first term, each and every class will pay specific attention to the practicalities of designing and conducting a research project. Roughly 50% of the readings for this section of the course relate to research design and research methods.

The idea behind this research series is that you will bring to each lecture your own thoughts on the topic, related to the research project that you intend to pursue. You do not have to settle on your research project ahead of schedule, and you are free to change it. However, no matter how your idea for your own research project evolves, you should consider the question for the week’s lecture in relation to your own research. In-class group discussions will involve debating each other’s ideas, but please remember that the focus is upon constructive engagement with each other’s work.

4.1 The Big Picture: Autonomous Weapon Systems and the American Way of War (Lecture and Seminar)

Please take time to consider what you would like to do for your final assessment prior to attending this class.

This lecture will introduce five general components of a successful research essay: identifying a research area, identifying an interesting research problem, constructing a theoretical framework, posing an answerable research question, and considering the implications of your research. We will cover one of these in detail each week. In this lecture, we will discuss different processes for identifying a research area.

This lecture also provides an outline of my own research project, namely, the early development and deployment of automated and autonomous weapon systems. I will walk you through the project and my paper, but the emphasis of the lecture will be on the process of identifying a research area. The lecture will cover the current debates about the development and use of lethal autonomous weapon systems, alongside two bodies of existing academic literature: military innovation and the American way of war. We will discuss ways of working from a topic of personal interest, or a contemporary policy problem, to a research area that connects with existing academic research.

  • Discussion question:
    • Do “ways of war” exist?
  • Research discussion question:
    • What makes an academic research project worth doing?
  • Seminar discussion question:
    • To what extent is a “research puzzle” necessary for the research essay that you wish to do?
  • Reading:
    • Echevarria, Antulio J. Reconsidering the American Way of War: US Military Practice from the Revolution to Afghanistan. Georgetown University Press, (2014). Introduction and Chapter 1
    • Roff, Heather M. “Responsibility, Liability, and Lethal Autonomous Robots.” In Routledge Handbook of Ethics and War: Just War Theory in the 21st Century, edited by Fritz Allhoff, Nicholas G Evans, and Adam Henschke, 352–64. Routledge, (2013).
    • Gustafsson, Karl, and Linus Hagström. “What Is the Point? Teaching Graduate Students How to Construct Political Science Research Puzzles.” European Political Science 17, no. 4 (2018): 634–48.
    • Roland, Alex. “Technology, Ground Warfare, and Strategy: The Paradox of American Experience.” The Journal of Military History 55, no. 4 (1991): 447–68.

4.2 Research Problem: Autonomy at Sea from NTDS to AEGIS

How do you go from an interesting area of research to an interesting research problem? In this lecture we’ll discuss the identification of research gaps and research puzzles. This frames the content of the lecture, which covers the early history of automatic and autonomous weapon systems. We will discuss the development of early cruise missiles and the problems that they posed for the US Navy. The lecture will cover the computerisation and networking of USN ships, and the development of AEGIS and coupled weapon systems that enable surface ships to survive attacks that are too fast, or too numerous, for humans to handle.

To tie this back to the previous lecture, we’ll cover a key issue in contemporary debates about lethal autonomous weapon systems - the concept of “meaningful human control” - and the relative lack of similar discussion when many systems shifted humans into (at least) a supervisory role around half a century ago. We’ll go through some of the arguments against LAWS, and look at comparable systems and military practices that have been in place for decades. Why, then, are LAWS framed as a contemporary or future problem, when “killing by algorithm” has been routine in many domains for decades?

  • Discussion question:
    • Did the USS Ticonderoga concretise a “quiet revolution” in weapon autonomy?
  • Research discussion question:
    • Is your research descriptive, causal, or normative? Why? Why not?
  • Reading:
    • Roff, Heather M. “The Strategic Robot Problem: Lethal Autonomous Weapons in War.” Journal of Military Ethics 13, no. 3 (2014): 211–27.
    • Jenks, Chris. “False Rubicons, Moral Panic and Conceptual Cul-de-Sacs: Critiquing and Reframing the Call to Ban Lethal Autonomous Weapons.” Pepperdine Law Review 44 (2016): 1–70.
    • De Landa, Manuel. War in the Age of Intelligent Machines. Zone Books, (1991). Chapter 2 (It is long; read it over Christmas!)

4.3 Theoretical Frame: Human Autonomy in Distributed Systems

We’ll start this lecture by discussing what is meant by a theoretical framework, and how to identify an appropriate framework for tackling a given research problem. In this lecture I’ll discuss a number of different ways in which the development of autonomous weapon systems can be approached from an academic perspective, and how each would influence subsequent research questions and research methods. We’ll cover arguments over utility and reliability from a defence planning/strategic perspective, alongside the rise of practical ethics and military ethics as means of analysing emerging technologies. We will also look at accidents in which automated/integrated systems result in the “wrong” target being destroyed, and at how central, or peripheral, such accidents are in distinct bodies of academic research.

  • Discussion question:
    • What can the controversy surrounding the Vincennes disaster tell us about the impact of technological change on command responsibility?
  • Research discussion question:
    • What are the important theoretical commitments of your research?
  • Reading:
    • Scharre, Paul. Army of None: Autonomous Weapons and the Future of War. WW Norton & Company, (2018). Chapter 10
    • Perrow, Charles. “Normal Accident at Three Mile Island.” Society 18, no. 5 (1981): 17–26.
    • Pidgeon, Nick. “In Retrospect: Normal Accidents.” Nature 477 (2011): 404–5. https://doi.org/10.1038/477404a.

4.4 Research Question: Vietnam’s Digital Battlefields

In this lecture we will discuss the role that framing research questions and hypotheses plays in shaping subsequent work. An important element of this is scoping research questions so that they are answerable within a given word count. As such, we’ll also discuss different kinds of academic research projects and outputs. As part of this, I’ll continue talking you through my own research. This research series is centred around a research paper, but we’ll expand the scope from the US Navy to the wider context of the development and deployment of battlefield electronics during the Cold War. I’ll discuss a number of other overlapping domains (air, land), as well as how the project could be widened to a greater temporal scope (back to the early origins of ballistics, or forward to the present and future). We’ll look at how some kinds of questions are only really answerable in long-form work (ahem, “books”), or are unanswerable altogether. This is also a good point to reflect upon the intent behind general histories of military technology (or of technology in general) and the degree of specificity we should expect from more expansive histories.

  • Discussion question:
    • How do technological capabilities shape cultural perceptions of legitimate military conduct?
  • Research discussion question:
    • What are the strongest counter-arguments to your research conclusions?
  • Reading:
    • Owens, Patricia. “Accidents Don't Just Happen: The Liberal Politics of High-Technology ‘Humanitarian’ War.” Millennium 32, no. 3 (2003): 595–616.
    • Coker, Christopher. Humane Warfare. Routledge, (2003). Chapter 1

4.5 Implications: Autonomous Recognition Systems and Future Warfare

This lecture highlights three directions for future research arising from the same project. The final lecture in this series also provides each student with some time to discuss how they see their own research fitting in with existing research, and how it could be taken forwards in radically different directions. This is an important thing to consider for longer research projects, and may help when it comes to your dissertation. In essence, after all is said and your analysis is done, how do you conclude a research project in a productive manner? At graduate level, it’s not about saying “I’m right, because x, y, and z”; it’s about knowing your material so thoroughly that you are able to make constructive connections to wider research, or discern interesting pathways for future research.

The three things I will be talking about in this lecture are the strategic implications of automated and autonomous recognition systems, ethics and emerging technologies, and data ethics in armed conflict. My hope is that you will see how each of these could naturally flow from the project we have covered in this series.

  • Discussion question:
    • Do ethical objections to a technology stand any chance against perceived military utility?
  • Research discussion question:
    • How have your ideas for your research project evolved so far this term?
  • Reading:
    • Bostrom, Nick, and Eliezer Yudkowsky. “The Ethics of Artificial Intelligence.” In The Cambridge Handbook of Artificial Intelligence, edited by Keith Frankish and William M. Ramsey. Cambridge University Press, (2014).
    • Lodge, Julia. “The Dark Side of the Moon: Accountability, Ethics and New Biometrics.” In Second Generation Biometrics: The Ethical, Legal and Social Context, edited by Emilio Mordini and Dimitrios Tzovaras. Springer Science & Business Media, (2012).

References

Bostrom, Nick, and Eliezer Yudkowsky. 2014. “The Ethics of Artificial Intelligence.” In The Cambridge Handbook of Artificial Intelligence, edited by Keith Frankish and William M. Ramsey. Cambridge University Press.

Coker, Christopher. 2003. Humane Warfare. Routledge.

De Landa, Manuel. 1991. War in the Age of Intelligent Machines. Zone Books.

Echevarria, Antulio J. 2014. Reconsidering the American Way of War: US Military Practice from the Revolution to Afghanistan. Georgetown University Press. http://ebookcentral.proquest.com/lib/kcl/detail.action?docID=1707204.

Gustafsson, Karl, and Linus Hagström. 2018. “What Is the Point? Teaching Graduate Students How to Construct Political Science Research Puzzles.” European Political Science 17 (4):634–48. https://doi.org/10.1057/s41304-017-0130-y.

Jenks, Chris. 2016. “False Rubicons, Moral Panic and Conceptual Cul-de-Sacs: Critiquing and Reframing the Call to Ban Lethal Autonomous Weapons.” Pepperdine Law Review 44:1–70.

Lodge, Julia. 2012. “The Dark Side of the Moon: Accountability, Ethics and New Biometrics.” In Second Generation Biometrics: The Ethical, Legal and Social Context, edited by Emilio Mordini and Dimitrios Tzovaras. Springer Science & Business Media.

Owens, Patricia. 2003. “Accidents Don’t Just Happen: The Liberal Politics of High-Technology ‘Humanitarian’ War.” Millennium 32 (3):595–616. https://doi.org/10.1177/03058298030320031101.

Perrow, Charles. 1981. “Normal Accident at Three Mile Island.” Society 18 (5):17–26. https://doi.org/10.1007/BF02701322.

Pidgeon, Nick. 2011. “In Retrospect: Normal Accidents.” Nature 477 (September):404–5. https://doi.org/10.1038/477404a.

Roff, Heather M. 2013. “Responsibility, Liability, and Lethal Autonomous Robots.” In Routledge Handbook of Ethics and War: Just War Theory in the 21st Century, edited by Fritz Allhoff, Nicholas G Evans, and Adam Henschke, 352–64. Routledge.

Roff, Heather M. 2014. “The Strategic Robot Problem: Lethal Autonomous Weapons in War.” Journal of Military Ethics 13 (3):211–27. https://doi.org/10.1080/15027570.2014.975010.

Roland, Alex. 1991. “Technology, Ground Warfare, and Strategy: The Paradox of American Experience.” The Journal of Military History 55 (4):447–68. http://www.jstor.org/stable/1985764.

Scharre, Paul. 2018. Army of None: Autonomous Weapons and the Future of War. WW Norton & Company.