Jeremy Davis - "Can algorithms justify killing?"

Jeremy Davis
University of Georgia
Middlebush 310

Abstract: Recent years have witnessed a surge in the use of data-driven algorithmic systems by government institutions to aid in making complex and consequential decisions. One particularly important arena in which such algorithmic systems play an increasingly central role is the military—most notably, in making decisions about when, whether, and whom to kill. While much has been written about related technologies—such as so-called ‘killer robots’—the distinctive issues raised by these big data systems have received hardly any philosophical attention. In this talk, I explore what I take to be the central pressing ethical question these systems raise: when (if ever) do the predictions made by these systems furnish their user with an evidence-relative justification to kill? I identify an appealing yet strong view, which holds that the evidence provided by these systems does not, in general, suffice to justify killing. Though it is intuitively appealing, this view struggles to account for the moral value of the harms such killings aim to prevent. The right account of evidence-relative justification must therefore be sensitive to this value. However, a complete account of the evidence-relative justification for killing on the basis of these systems’ guidance must also be sensitive to other central moral requirements, such as the principle of necessity. Ultimately, the use of big data systems in aiding soldiers raises myriad thorny ethical questions, and the philosophical discussion of these issues is long overdue.