I don’t know why Amazon put Oryx and Crake into the little section marked “You might also like” after I added a copy of Woman on the Edge of Time to my digital basket. The last book I had bought by Margaret Atwood was The Handmaid’s Tale, but I bought that in person, at Foyles in Charing Cross, in part because I had liked the thought of re-reading it and the particular paperback edition had red-edged pages. I doubt Foyles and Amazon share my data, but maybe they do. Either way, I don’t know and I doubt that they would tell me. I doubt Amazon could know about the copy of Oryx and Crake that I grew up with, since the edition my mum owned probably predates the systematic storage of data in easily accessible computer form. Nor, for that matter, does Amazon probably know about my copy of The Blind Assassin, since that was bought for me by an ex-girlfriend, and Amazon would probably have had to consult data held on me by either Facebook or Google to figure that connection out, or maybe it could have talked to my mobile phone provider to figure out whom I was calling quite often three years ago. The point about all of this is that for some reason Amazon deduced that I might like to buy Oryx and Crake, at which point I clicked yes. A few days later, to the horror of independent bookshops everywhere, the book was delivered by the postman. I managed to put off reading it again for a while, but eventually succumbed and enjoyed it more than I had previously done. No person actually looked at me while this happened, but I imagine that a human had a hand in designing the algorithm that checked my previous purchase history against all sorts of other data that Amazon probably holds on me, and then served me the recommendation. As recommendations go, it was a good one. I really enjoy Margaret Atwood’s work, even though I have read only a fraction of her published novels.
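For the curious, the basic mechanism behind such a recommendation can be sketched without any of Amazon’s proprietary machinery. What follows is a toy item-to-item co-purchase counter, not Amazon’s actual algorithm; the purchase histories, titles and scores are entirely invented for illustration.

```python
from collections import defaultdict

def recommend(purchases, user, top_n=3):
    """Suggest items the user doesn't own, scored by how often they
    co-occur with the user's items in other people's histories."""
    owned = purchases[user]
    scores = defaultdict(int)
    for other, items in purchases.items():
        if other == user:
            continue
        overlap = owned & items          # shared purchases with this user
        if not overlap:
            continue
        for item in items - owned:       # candidate items the user lacks
            scores[item] += len(overlap)
    ranked = sorted(scores.items(), key=lambda kv: -kv[1])
    return [item for item, _ in ranked[:top_n]]

histories = {
    "me": {"The Handmaid's Tale", "Woman on the Edge of Time"},
    "a": {"The Handmaid's Tale", "Oryx and Crake"},
    "b": {"Woman on the Edge of Time", "Oryx and Crake", "The Dispossessed"},
}
print(recommend(histories, "me"))  # → ['Oryx and Crake', 'The Dispossessed']
```

The real systems weight these counts, fold in browsing data and much else besides, but the core intuition — people who bought what you bought also bought this — survives the simplification.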
I use Margaret Atwood as an example because she is one of eight cultural heavyweights highlighted by The Guardian, out of 500, who have written an open letter about mass surveillance. I don’t have the inclination to read through the list to find an apter candidate, and I haven’t bought anything by Don DeLillo or Arundhati Roy from Amazon, so they would make pretty bad examples. I might have bought something by Ian McEwan from Amazon, but I really can’t remember, though I’m certain that Amazon knows the answer to that question. There’s plenty that I know about Amazon (I buy books from them when I am too busy to go to a bookshop, which is more fun), there are things that I can assume (they’ve got a marvellous way of recommending books that occasionally offers me books I already own, and the odd WTF?! recommendation) and bits that I know I can’t know (“Hey, Mr Bezos, can I take a look at your proprietary book-recommending algorithm? No? Okay then.”). Despite my gaps in knowledge, they’re quite good at selling me books when I am procrastinating, or not pulling works off Project Gutenberg to stuff my Kindle with. The big question, however, is whether I should be worried about Amazon surveilling me. It remembers me. It collects data on me. But is anyone actually looking at me? I think there’s an important distinction between the three, since the letter that inspired me to write this considers that the act of storing data is a violation on a par with being actively observed and investigated. One way to think of surveillance as a category is that there is surveillance as observation (the recording of information), surveillance as memory (the storing of information) and surveillance as investigation (the active search for information). All three are intertwined, but our common picture of surveillance is closely aligned to fear arising from the third strand: people in dark rooms sifting through our private lives.
The NSA files highlight a specific problem: the collision of routine observation via metadata collection, and quasi-perpetual storage and memory of such information. The sword of Damocles thus hangs over the head of anyone unlucky enough to come up on the security services’ radar. In positing a solution for the many, many issues raised by the activities of spy agencies, the letter treats humans as if we can exist as a society with a ‘do not track’ notice stapled to our foreheads, when we can’t.
The letter posits an atomic theory of democracy in a way that I think Ayn Rand would be quite happy with. In particular, the line “all humans have the right to remain unobserved and unmolested” is somewhat preposterous. Would there be enough space in the theoretical forest of unobserved falling trees for all such humans to sit? The idea of such a fundamental right conflicts with what human beings do. For better or worse, we observe each other pretty much constantly when we’re in company. I surveil you, you surveil me: society is founded on social surveillance. Maybe somewhere out there, a libertarian survivalist is existing in perfect non-surveilled freedom, but I doubt it. Such an heroic figure is seductive, but probably deserves a seat alongside other fictional constructs of political thought like More’s Utopia and Hobbes’ state of nature. The purpose of this private hero is that they live in our collective imagination, helping us to persuade ourselves that being observed is an aberration, rather than a normal aspect of human existence. Humans observe and remember each other. It’s only when we willingly forget our friends, families and everyone else that we’re tied to that we can state that ‘surveillance is theft’ without considering what that makes every stranger who looks at us in the street. Actively discarding the normal and everyday surveillance of others makes it possible to speak in such forceful terms about surveillance by governments and corporations, but it also raises serious questions about the solutions offered to such surveillance. A society in which each individual dictates to everyone else the exact terms on which they wish to be observed and remembered isn’t a society: it’s a collection of self-important arseholes shouting at each other.
The digital storage of everything forever highlights the need for mechanisms that enable us to delete bits and pieces of our past that would otherwise have fallen by the wayside (for example, the marks on my GCSE English homework, which I might not want a future employer searching). Yet it also introduces the idea that it’s possible, or plausible, for a person to control the exact record of their interaction with the world, and that society can function without memory.
The reason that I think the focus on data collection and memory is important is that when we think in terms of surveillance as investigation, we tend to forget that our entire society functions on the tacit understanding that it’s okay for the government (and other people, and corporations) to remember us, and that we don’t get to control this all the time as individuals. Would you like government-provided healthcare? Yes? Well I’m afraid that requires the government, or government-funded agencies, knowing quite a lot of private information about you (and everyone else), storing it in perpetuity, and looking it up as and when it is needed. Of course, one or two of us could kick up a fuss about this arrangement and throw rattles out of prams via the European Court of Human Rights, but if we do so in a collective fashion, then I’m afraid the government won’t know what healthcare options to provide, because it won’t bloody know who needs to use them. The same applies, too, to privately-provided healthcare. Even where healthcare is provided by a community rather than by a ‘leviathan’, this requires observation and memory in a manner that isn’t dictated by the individual. In all three cases, our heroic unobserved survivalist bleeds to death alone with a ‘do not observe’ sign taped to their front door. Our society of the heroically unobserved falls apart because the basic support functions of society fail when we stop pooling data as we have done, in some form or other, since humans evolved. I have no problem with restricting investigation, but society, and common social activities such as buying things, working and providing government services like healthcare, cannot function without memory.
Rather than positing that the ‘inviolable integrity of the individual’ requires that they be able to dictate the terms on which their actions are noticed and recorded by the world at large, we should be thinking about ways in which to limit government access to private databases, and provide for clearer notification mechanisms when the private sector transfers information about us to a government agency.
I don’t think anyone has the answer at the moment, but I’m pretty sure that the correct one doesn’t involve a retreat into political fantasy that would render society unworkable. A good answer would be the erection of somewhat solid walls between public and private databases, and the prevention of mass acquisition of data by government. The corporate backlash against the NSA is going pretty well, I think. Try thinking about government and corporate surveillance in terms of “Where did I buy the food that I ate this morning?” or “How did the government know to employ enough A&E nurses so that I didn’t have to wait 24 hours in casualty?” instead of “When are the Orwellian hit men coming to boot stomp my face, forever?” and the society of the heroically unobserved appears untenable.
I will admit that I walked into the Imperial War Museum’s (free) display of Omer Fast’s film with low expectations. Most artistic takes on ‘drones’ have, for the most part, struck me as badly researched, ill-thought-out or plain lazy reactions to a ‘hot’ topic in politics. This reflects my personal preferences (I prefer art that happens to challenge my beliefs, rather than reinforce them, and bad ‘drone art’ does neither) and what I might call professional competence (I happen to know a bit about them and the reasons people use them). The other day, a friend showed me the leftovers from some piece, in which an artist had printed out a lot of pictures culled from a Google image search of drones (yes, the people that write about this stuff know which basic images indicate a cull from Google search, and which image indicates the author can’t tell the difference between the US military and US Customs) and spread them around a space to incite debate. The reason this general laziness annoys me is that drones and other semi-automated ways of killing people don’t exactly make criticism difficult. Hell, I’m not opposed to the things, and I could reel off a list of maybe ten serious issues I have with them without thinking. Drones raise so many related issues that, well, producing work that incites debate is almost a given. Therefore the intermittent stream of work that resembles an intellectual circle-jerk copied verbatim from a Guardian comment piece is something of a disservice to the adults that it is aimed at, especially if the intent is to incite debate.
A good example of this duality is two pieces of work recently displayed at the Photographers’ Gallery for the 2013 Deutsche Börse Photography Prize. There is the ‘war is bad mmkaaay’ War Primer 2 by Broomberg and Chanarin that won the damn thing, and then there’s Mishka Henner’s No Man’s Land which, in my opinion, raises most of the key issues about drones and surveillance while not featuring anything remotely akin to a remotely piloted aircraft. I should add that from a technical perspective, I thought Chris Killip’s work deserved to win, but this post is focused on the drones. Why am I critical of War Primer 2? Because it is lazy. Technically, sure, it’s great. I definitely could not produce something like that. But as an idea? It’s a con. Calling George W. Bush a war criminal in 2013 has about the intellectual impact of, say, calling Barack Obama a bit of a disappointment. Juxtaposing a picture of Donald Rumsfeld with Hitler is as boring as every other juxtaposition of [(in)famous right wing dude] with Hitler. Parodying and pillorying establishment dudes whose actions are openly criticised and pilloried by right wingers is, well, pointless. Art like that might have been daring and thought provoking in 2001-2004, but I doubt you will find a respected analyst who will hand on heart call the Iraq War a good idea in 2013. At heart, the combination of contemporary images plastered onto Brecht’s originals and a couple of lines of poetry reduces most of the topics (there’s a lot in there) to the level of Reddit memes. In short, you’re likely to find more nuance and free thought in r/Atheism. Buried on page 23 of the (free) eBook/PDF version is the work’s take on drones. Two drone pilots are plastered over the picture of a WW2 bomber’s cockpit, accompanied by the lines:
It’s we who fly above your city, woman
Now trembling for your children. From up here
We’ve fixed our sights on you and them as targets.
If you ask why, the answer is: from fear.
Either Broomberg and Chanarin don’t understand the fundamental differences between WW2 bombing campaigns and the use of targeted killings, or they ignore them. Maybe they’re ignorant of the differences between Americans and Brits in WW2, or Americans in Europe and Americans in the Pacific. Either way, to a cautious eye, it looks like they have a set of preconceptions and a couple of nice lines to go with them, and they flip this into a work of staggering stupidity. The problem with drones is precisely that they aren’t supposed to kill women and children, but they still do. It’s that they are meant to be a clean way of killing, which is still as questionable a prospect as it was back in Kosovo. It’s that all the (somewhat effective) target selection and clearing processes don’t change the fact that somewhere along the line, when bombs get dropped, women and children will die in the process. And the answer is anything but fear. Sure, in World War 2, fear might have governed some bombing. But there was a lot of hate as well. Plenty, in fact. More to the point, it’s 2013: fear doesn’t govern American actions any more. It’s somewhat cold political calculus combined with a particular interpretation of the ‘national interest’ and national security and bureaucratic politics and the list goes on. If the point of art is to make people think and reconsider the world around them, then refraining from interrogating the deep complexity of the world in favour of simplistic emotional pastiche probably isn’t a good way to go about it. As it stands, War Primer 2 is a great exercise in reinforcing confirmation bias, and not much else. Which brings me to Mishka Henner.
Had Mishka Henner’s work won, then I’m sure there would have been some Daily Mail types up in arms about screen grabs from Google Street View winning a photography prize. Full view of the work: here. On a simple level, that’s what his output is: large scale images taken from Google. The difference between these and the culled drone images that I criticised earlier is that Henner’s work focuses on surveillance and identity. The images he culled are of women (faces auto-blanked by Google) standing in otherwise abandoned roadsides. The women are supposedly prostitutes – Henner selected the images after researching where they were likely to trade in southern Europe. The question that came to mind after reading his methodology and research was “How does he know?” How could an artist know what a somewhat scantily-clad woman standing by the side of the road was actually doing there, at the point that a Google Street View car passed her by? For those thinking about, say, ‘signature strikes’, this is precisely the question that differentiates a military strike from random murder. In many senses, Henner’s work has far more to do with war in the contemporary world than the entirety of War Primer 2. Henner pretty much forces the viewer to make a judgement (are these women prostitutes, or not? Is that woman a prostitute?). The process of forcing such a judgement throws up a host of other questions. For example, does an artist have the right to present some random female stranger’s picture and frame her as a likely prostitute for art fans half a world away? Do image rights even matter in a world where Google can buzz by and leave your picture on the internet forever? Drawing conclusions from some of the images would be questionable; others appear to give us enough information that most people would conclude that a given woman was probably engaged in the sex trade at the time. It’s a challenging work for these reasons and more.
At heart, it’s a work that challenges the viewer on their standards for judgement. Returning to the whole ‘war’ aspect, it’s worth considering what anti-drone persons would consider a standard for defining someone as a legitimate military target. After all, what makes a soldier? A uniform? A gun? Membership of a professional military, or an armed organisation? All these standards are fluid. Uniforms don’t have to be worn by non-state militaries in internal wars; in some parts of the world, most people own guns; and most non-state military forces aren’t divvied up cleanly between military and non-military. For reference, I’m one of those people that thinks that militaries should be able to kill on such rules of thumb, or inferred knowledge, and I found Henner’s work unsettling precisely because it made me question my own judgement.
So from here to Omer Fast. 5000 Feet is the Best is a looped 30 minute sequence, based on an interview with a drone pilot, which features the interactions between a drone pilot and an interviewer, and the stories the pilot spins. Ten minutes are available on Vimeo, but this doesn’t include the best bits (IMHO). The looped repetition bit is somewhat clichéd. It’s no Rashomon for sure. The fictional narratives, too, aren’t exactly mind-blowing. There’s a bit where an American family is accidentally blown to bits, as many families in Iraq, Afghanistan, Pakistan and Yemen have probably been killed. But still, the stories themselves are smarter than average, and well done. What really hit me about the piece were the interspersed aerial shots of Las Vegas and its suburbs, overlaid by narration drawn from the interviews with the pilots. It’s where the ‘5000 feet is the best’ tagline comes from. On one hand, it’s a somewhat schlocky way to confront viewers with issues (“Here’s how we see Baghdad, but it’s Vegas.”), but there’s something to it that I think is more important: the calmness of the whole affair. The somewhat placid imagery, the tracking of a bike through the suburbs. These are scenes and perspectives that have been on display in innumerable places for as long as casinos and aerial pictures of suburbs have seemed aesthetically pleasing to photographers, but the context and narration change them irrevocably. The functional focus of the narration – less ‘bangs’, more dry technicality – helps communicate the immersed detachment of a drone pilot’s experience better than a million ‘video game warfare’ analogies. The second aspect of the piece is precisely the immersion that it emphasises – the normal, routine and mundane place of killing via drone. This, again, is something that ‘killing by remote control’ doesn’t convey well, in part because it is so focused on the moment of killing.
It’s natural to focus on these moments, but the impact of Fast’s work is the way in which it conveys the monotony and mundane experience of piloting one of these craft. As it stands, I can think of no major cultural work which conveys this, or attempts to do so. We have plenty of paeans to the sniper, for example, Jarhead in its book and film forms, or, dare I mention it, the highly improbable sniper bit in The Hurt Locker. I think Anthony Swofford’s book does a good job of conveying boredom, but the visual versions don’t. Sniping is invariably characterised by a tense wait, and then a kill. I cannot begin to imagine how boring a long film comprised entirely of camera footage focusing on a given house for 12 or 24 hours would be, but the point is, no one bothers to attempt to convey that aspect of war by remotely piloted vehicles. ‘War by video game’ conjures images of Call of Duty or Grand Theft Auto, when perhaps people need to be thinking more of the infamous Desert Bus minigame in Penn & Teller’s Smoke and Mirrors collection. When the operator speaks of playing video games to rest up after a shift, the normal impulse is to scream about the ‘normal’ video game connection/references, but I also thought Fast’s piece was all the better for pointing out how routine an activity playing video games is. After all, NFL players play American football video games between matches; why shouldn’t soldiers? What I left the Imperial War Museum pondering was whether the ‘video game’ criticism that some level at this type of warfare risks attempting to preserve an image of soldiers as barbarians and bloodthirsty types. After all, do we really want to preserve military culture from time immemorial? (I know, Van Creveld says ‘yes’.) If we don’t, then the kind of issues that Fast’s film highlights are perhaps less different from those of the rest of society.
The gap between some soldiers killing people, and coming home to kiss their kids, play a bit of Skyrim and catch some sleep is going to continue to decrease, while for others it won’t. Rather than this being something earth-shattering, it’s no different to the fact that some middle managers probably go home and play Railroad Tycoon, while some professional footballers play FIFA in their spare time.
The Act of Killing is a documentary about some ambling old men who also killed thousands of Communists in their youth. Documentaries about mass killing aren’t new, nor for that matter, is it truly possible to shock with archival footage. Anyone born in the UK likely encounters footage of Auschwitz at some point before their 18th birthday, and as such, becomes somewhat inoculated against imagery of mass killing. At a certain point, many documentaries cross the line between the attempt to convey the horror of atrocity, and the attempt to shock the viewer with genocide-porn. Much to its credit, The Act of Killing doesn’t do this. The most cringe-worthy footage involves scenes where one of the characters visits the dentist, and where he attempts some dentistry himself, with a pair of pliers. That’s not to say that this film isn’t shocking, but since none of the violence is real, it actively discards the ‘authentic shock’ in favour of a rather more chilling explanatory approach. I can think of few moments that caused my blood to freeze like the one in which the rather friendly Anwar Congo explains how he used to take a piece of wire, tie it to a post, and then use it to strangle one of the thousand-odd people that he killed during the 1965-1966 killings in Indonesia. Though Anwar isn’t the sole focus of the documentary, he might as well be. The narrative arc of his realisation/reckoning with his actions in his twenties is so perfect that it could almost be scripted. His tears, though somewhat cajoled by a question from the director, appear heartfelt. As a means of challenging those who commit war crimes, getting them to act out their own torture/killing routines with themselves as a victim is rather novel, but it apparently works, at least in Anwar’s case. His physical reactions when being strangled with wire are also consistent with him actually being strangled, so it appears that safety took a back seat to realism when it came to filming those scenes.
The main selling point of The Act of Killing is that it is completely at odds with the European experience of such atrocities. Though the killings are passed over in official histories, they aren’t actively suppressed. People are quite comfortable with the idea that almost 50 years ago, about 500,000 to 1,000,000 supposed Communists were slaughtered at the onset of a military dictatorship. The film constantly cites 2,000,000 deaths, but I’m minded to pay attention to scholarship here. The Act of Killing doesn’t draw us into a parallel world, since the equivalents of say, The Man in the High Castle and Fatherland are both still rooted in Anglo-American attitudes of suppression. The point made is that this world is entirely alien to us. Except, well, when we think about how ‘normal’ killing in war tends to be a cause of celebration. I imagine that watching The Act of Killing alongside, say, a documentary on D-Day, or Iwo Jima, would illustrate some uncomfortable parallels regarding the glorification of violence in our own culture. Still, even in the wars which British culture (as a whole) appears comfortable with, it’s hard to imagine a TV presenter crowing about the enemy body count on TV, which is one of the key moments in the documentary.
The problem with the documentary is that it is poorly cut. At an hour and 55 minutes, the theatrical cut is approximately 45 minutes shorter than the director’s cut, and even at this length, it feels flabby. This is totally understandable from a director’s perspective – the material presented is, for the most part, dynamite. However, once the third or fourth high level political figure ambles onto the screen demonstrating their blithe indifference to the mass-murder of communists, it begins to wear on the viewer. Yes, we get it, they don’t happen to agree with us on the point of mass killing and atrocity. The problem with amassing such incredible footage is that at some point, bits and pieces must be left on the cutting room floor (or, left on the hard disk). The Act of Killing doesn’t quite succeed here. Without any attempt to describe the political culture, the succession of high ranking politicians supportive of the mass killings gets a little repetitive. The same can be said of some of the film footage. While for the most part, individual scenes in which the once-killers re-enact their torture routines are presented as single sections, other sequences are spread throughout the film, with little to link or connect them. After the second or third time, this feels like psychedelic filler, intended to break up the perfectly good footage of the people talking about what they did, or their everyday life.
Some critics have argued that the film-makers were somewhat unethical when filming The Act of Killing. Certainly, the film presents a clearly critical opinion of the perpetrators (in fairness, a sympathetic portrait would be almost impossible). For the most part, we are left in the dark regarding the grounds on which the film was made, and the relationship between the film makers and their subjects. I think the film poses some wider questions about the role of documentary film making and such subjects. In particular, one of the scenes depicting the recreated burning of a village features the perpetrators preparing by shouting and chanting. In this scene, the senior government figure present clearly understands how this will look, and makes a quite important speech, which on one hand proclaims that this behaviour isn’t how they acted, but should be shown to remind people of how they can be. It is a simple piece of dialogue that both defangs the entire scene for a domestic audience (“This is fiction”) and also turns the film into a warning for a domestic audience (“This is how nasty we can be”). Such a speech presents a dilemma for film makers: the man is clearly conscious of the film’s likely effects, and in making the speech, he turns the film to his own ends. Yet it would be impossible to show this scene without including the speech, because to do so would lead to accusations of twisting the reality of what happened. Personally, I was left with the impression that the film makers had been somewhat outfoxed.
The reason I point to this as an ethical issue is that the acceptance or acknowledgement of the mass killings after 1965 is probably far more important to Indonesians than to us. Anything dealing with the issue should probably keep that in mind. Therefore, a government official managing to turn a key scene into a domestic warning should, perhaps, keep us on guard about the film’s role. The western viewer gets a trippy vision through scarcely imaginable crimes. But let’s not kid ourselves, the 20th century is replete with such actions, on a variety of scales. Since individual instances of mass killing are quite numerous, most get lost in the shuffle of history, in favour of grander narratives like ‘the Cold War’, or ‘when democracies gave fascism a kicking between 1939 and 1945’. Therefore the primary purpose of documentaries on mass killings is usually to inform the general audience that they actually happened and why we should remember them. Given the lack of framing information, what separates The Act of Killing from other such documentaries is the outstanding access to its subjects, and the somewhat psychedelic window into a mash of Indonesian/Hollywood aesthetics. Other than that, there is little provided to situate this film in Indonesia itself, nor is there much attempt at helping the viewer understand the political background to the killings. Suffice to say, the meaning for an Indonesian of Chinese ethnicity is going to be very different. On one level this film works: I’m reminded of the sheer force of Yang Jisheng’s book, Tombstone. Here, the author chose to present the desolation of the Great Chinese Famine without any detailed explanation of the framing ideology or politics, leaving the reader to wonder what the hell these ‘three red banners’ are and why people followed them to the point of killing millions of people.
The problem is that Yang Jisheng follows this opening chapter with hundreds of pages of detailed insight into the politics and ideology surrounding the Great Famine. It is the combination of the two which allows the reader to get valuable insight into the events, and some form of critical understanding. In place of chapters two and onwards, The Act of Killing rolls to credits.
Rain Room, as exhibited at the Barbican
Why discuss automated targeting and art installations? In part, the genesis of this article (and some others that I have planned) is that I’m something of a war-junkie who can’t switch off thinking about war when stepping into art galleries. I do my best to engage, but sometimes an artist’s work makes me think about my ‘day job’ a little differently.
At the moment, I’m doing a fair bit of research about autonomous weapons and targeting. What’s commonly known as drones, and what is commonly depicted as Skynet. Like many, I think automated machines are likely to be further integrated into the conduct of warfare. But I’m slightly unsatisfied about the way in which most people appear to think that it will happen, the, uhhh, ‘Terminator’ model for robot war. Think, for a second, about the development of UAVs and drones. They no longer attempt to replicate human-piloted machine capabilities, and instead provide capabilities that could never be attempted with humans in the cockpit. Furthermore, personal robots are an area of growth – drones that can be launched and operated by infantry, rather than piloted from afar. Giving them lethal capability is a Rubicon, of sorts, and giving them some form of autonomy is another. Ensuring that autonomous robots with guns don’t shoot the wrong person (or anyone) is an important issue, perhaps the most technically challenging one, but the opposition to it appears rooted in a very simplistic idea of how people are routinely targeted in war. Soldiers don’t recognise fellow combatants as individuals with their own identity; they identify whether killing them adheres to standard rules of engagement, and theatre-specific ROE. To do that, they make quick and dirty calculations (“Does that guy have a gun? Is he pointing it at me?”) and kill people on that basis. Computers can’t do that, but they can make other calculations (“Is that drone on our side? Is blowing it out of the sky going to hurt anyone?”) and I think this second, not directly lethal, way of thinking is likely to be an interesting area in future. Call it parallel robot warfare – humans kill each other, robots destroy the robots helping the humans on each side.
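To make the distinction between those two kinds of calculus concrete, here is a deliberately crude sketch. This is a toy illustration of the argument, not real targeting logic; the field names and checks are invented, and actual ROE are vastly more complicated and context-dependent.

```python
from dataclasses import dataclass

@dataclass
class Contact:
    """A sensed entity on the battlefield (all attributes hypothetical)."""
    is_machine: bool
    armed: bool
    pointing_weapon: bool
    friendly_transponder: bool

def human_style_check(c: Contact) -> bool:
    # The quick-and-dirty human calculus: does that guy have a gun,
    # and is he pointing it at me?
    return c.armed and c.pointing_weapon

def counter_robot_check(c: Contact) -> bool:
    # The 'parallel robot warfare' calculus: engage only machines
    # that aren't broadcasting a friendly ID - never people.
    return c.is_machine and not c.friendly_transponder
```

The point of the sketch is that the second check never requires the machine to make a judgement about a human being at all, which is why the not-directly-lethal path seems the more plausible near-term one.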
One side might use robots to kill humans, but most Western militaries have serious reservations about that point, so might restrict themselves to building badass robot-killing robots.
Imagine, if you will, a military force that doesn’t want to use robots to kill humans, not even those sitting in tanks, fighting a force that has the same sort of technology, but doesn’t have the same misgivings about HAL-on-human uses of force. Team Humanity is going to be at a distinct disadvantage versus Team HAL. HAL can put a couple of hundred tactical robots armed with guns into the air, they’re going to be quad-copters, and they’re going to be better than humans at killing humans. They’re going to be fast, and they’re going to be effective. For those who believe in the inevitability of robot war, Team Humanity has the choice to abandon their principles, or die a principled death. Quad-copters packing guns, limited autonomy and human identification algorithms will do to the 21st century squad what the machine gun did to the massed infantry charge. Team Humanity could adapt, maybe, dispersing among the population (which, umm, violates the laws of war that they’re seeking to protect) but if Team HAL doesn’t mind killing civilians, that might not work. The nightmare scenario is that this is what happens, and therefore it is inevitable that Team Humanity will collectively say ‘Sod it’ and swap strict adherence to their principles for their own swarm of gun-toting quad-copters. The principle of humans being ‘in the loop’ for killing other humans dies a death.
I have an article in the latest edition of World Defence Systems on the use of targeted killings to contain al-Qaeda; it is available (for free):
McDonald J (2012) Containing terrorist networks – the role of targeted killings. World Defence Systems 2012(1): 21-23
Targeted killings work, as Byman tells us, but they don’t solve conflicts. Thinking about targeted killings in terms of containment allows us to consider the long-term detrimental effect that such programmes have on a democracy. While proponents of the targeted killings programme would like to frame this debate purely in terms of military activity, I think that targeted killings pose questions that challenge the political philosophies that underpin liberal democracies.
[Edit - I'm afraid the link doesn't seem to be working, I will try and get a copy online asap. Jack]
Dan Drezner has been writing some good articles on PhD study, which made me think about mine (coming to the end right now). There isn’t much out there on part-time PhDs, but when it comes down to it, I’ve always been a part-time student, considering the amount of paid employment I’ve done while studying. This article is therefore written from the perspective of someone without much money trying to complete a PhD.
So, without further ado: some thoughts on part-time PhDs, some (odd) benefits and some survival mechanisms for what is without any shadow of a doubt the most insanely knackering experience of my life.
Five thoughts on a part-time PhD.
1. Don’t do it.
Seriously, don’t. Reading around on the internet, you’ll see a million post-doc articles all complaining of the existential angst that accompanies completing a PhD. Take that, multiply it by five, add in shifting revenue streams and unemployment and you have the recipe for a nervous breakdown. Having almost completed one, I would never advise someone to do a part-time PhD. Move heaven and earth to do one full time. With that said, I love knowledge, and I love learning, so don’t let me put you off doing one if that’s what you want, just try and get someone else to pay for it.
2. Don’t let the funding-differential get you down.
In the UK, funding is scarce. As a part-time student you will be faced with the fact that you are working insane hours, then more insane hours to feed yourself, all the while your fully-funded colleagues seem to swan in and out. They use phrases like “Oh, I’m not feeling it today” and go home. You might not feel like studying today, but you have to, because this is the three-hour slot that you have to work on your PhD. Whether you ‘feel’ like studying or not is entirely irrelevant. This ‘work ethic gap’ will grate on you at some point – trust me. The point is: forget about it. There is nothing you can do to change this, so every second spent complaining is a second spent not researching and a second less that you get to chill out at the end of the day.
3. The next 3-5 years of your life are going to suck.
There are no two ways about it. You are committing to a double work-week for an extended period of time. Worse, financial pressures have a double-suck effect – slowing your PhD progress down, and therefore increasing the amount of time that you have to endure the whole process. It will suck. Period. All the people with funding don’t get this. They get to moan and complain about all the standard PhD stuff, but as a part time student, you get no definitive end-date, low pay and constant stress. Here I could shoe-horn a million trite comparisons such as prison-sentences, tours of duty and so on, but I won’t. Hunker down, keep going, grind it out.
4. It’s a job.
Most research isn’t fun. You don’t get to read the things that you want to. Why? Because the bits of information that you want to research are wedded to an insane lattice of other peoples’ work, and if you don’t read that, then someone, somewhere will call ‘plagiarism’ and your career is over. Worse, most of this stuff isn’t fun to read because academia demands that you take out all the flowery language and replace it with as precise terms as possible, even if to the reader, these sentences are the linguistic equivalent of a repeated baseball-bat to the face. You will read pointless papers. You will read works in which people apply quantitative measures to try and assess the p value of plain common sense. You will read works that will make you bang your head on the desk within three sentences. Worse, you will read someone else publishing the bright idea that you’ve spent a couple of years on, about a year before you’re ready for publication. A PhD is incredibly rewarding as an intellectual endeavour, but don’t have any pretensions about the fact that it is hard, hard work.
5. You are not a precious snowflake (and other references to Chuck Palahniuk).
You don’t matter. There are probably four or five people that could fill your shoes. The ‘big idea’ that you have? It probably won’t change the world. The REF is organised so that the “value” of what you produce is interchangeable with the work of other people. You might do some incredible original research, you might be into some insane theoretical cross-comparisons. The people that would take your spot would do that, too. The world does not ‘need’ more politics or international relations PhDs. I know, it’s natural to consider your topic important, but get some perspective. The market is flooded with people like you, so what makes you so special? As a part-timer, you won’t have attached funding, and likely you won’t have a post-doc lined up from the start, so be ready to take every opportunity from the outset. I learned this the hard way because “networking” did not come naturally to me. You’ll need to learn your research “elevator pitch” for when you meet people that might be able to throw work your way, you will need to get business cards and keep connections. This can get depressing but you need to make yourself important to someone that might give you a job/opportunity because no-one is going to do it for you. To make this sound slightly less American Psycho/Gordon Gekko – be a good person and network for other people, it won’t get you anything but connecting colleagues to people that might help them will prevent the “networking” mentality turning you into a total dick.
The upsides – What do you get from a part-time PhD?
All the above leads to two basic responses: break down or hunker down. I’ve done both; the latter has been far, far better at getting me closer to my goal of PhD completion. Flip the question – what do you get that all those cosseted people don’t? The answer is quite a lot.
1. Understanding time.
You will get to know the real value of time. All those fully funded people don’t have a fucking clue. Okay, maybe late-stage PhD students who’ve come from the military/professions might, but the rest? Nah. When you wake up at 8am on four hours’ sleep after a night shift, and drag yourself in for four hours of research before your next night shift, you tend to understand quite how important four hours can be, and what can be done in that time frame. Mopping puke off a bar floor at 3am is also a good motivating factor – either you complete, or this is your life. After you see the bottom in minimum wage, backbreaking work, anything ‘up’ is a god-send. It makes you hunger for it in a way that, well, funded people can’t understand.
2. Seeing the game for what it is.
Academics are precious types. They have fights in journals about matters so insignificant to the rest of the world that it would be considered a sign of insanity if these spats made it outside the academic bubble. Preciousness tends to be institutional. As a part-timer, you get to pop that bubble pretty quickly, or, better still, never buy into it. Don’t get me wrong, I’m not arguing against ideas here, or the value of knowledge or anything like that, just the insularity from the outside world. Most ‘fun’ career paths weed people out in this manner. That thirty-something media guru in post-production? The difference between him/her and a dozen other people is that they crawled on their hands and knees for a decade as a runner, then worked eighty-to-hundred-hour weeks for under £20k. The ones that dropped out are probably working at Starbucks; the ones that make it take those snazzy jobs entitled “Creative Director” and so on. Academia isn’t much different. I know, universities are meant to be these grand institutions that value knowledge above all else, but the plain hard truth is that if you’re poor, you have a lesser chance of making it through, particularly at PhD level. The upside of this is that you can see it for the game that it is. If you want the job, then you’re going to have to struggle for it and take it, because otherwise a cosseted rich-kid will swan into the position. When you fail, though, you understand that it’s not necessarily your fault – the game is rigged against you. When they fail, they tend to take it personally. In today’s academic world, the struggle that you endure as a part-time student continues after graduation – we all have at least a year or two’s worth of continued scrabbling for post-docs and part-time teaching fellowships. Since you’ve built up 3-5 years of resilience to living on the poverty line and insecurity, another couple of years of this existence will be no big deal. 
Your funded colleagues who have gone on the BA-MA-PhD treadmill will be in for a shock.
3. Contact with the real world.
Understand this: most people don’t like their job. You’re working towards a sweet job. Personally speaking, lecturing is one of the few things that I have done work-wise that has actually made me feel better for doing it. I get a buzz out of preparing lectures. Figuring out what is important, tailoring it for a specific audience, working out how to get your points across, all of it. I even dig the theatricality of giving a good lecture. Lectures also put your knowledge on the spot – students will ask difficult questions; in fact, in many ways they’re better than academics because they’ll ask questions that are too ‘big’ for a single answer. Some people want to do research and policy; I’d rather be a lecturer and researcher. The first time I did an undergraduate lecture, I knew that was a job that I could do for a very long time and not get bored. But I also knew that it was a sweet job, because I went home from that lecture and went to run a bar. I had to deal with ejecting drunk idiots, broken glass, fire alarms going off and the rest of it. Being a part-timer gives you that grounding. It’s not a case of undergrad-postgrad-PhD, it’s a case of constant struggle to get there, but one that provides valuable perspective on the world at large. People locked into the academic bubble don’t get that. An internship at a nice DC policy institute or one of London’s many think tanks does not compare to working a minimum wage job to stay alive. Without the experience of the latter, one can’t really understand how great a decently paid, intellectually stimulating job is.
4. No complaints.
Again, with the preciousness. Complaints. People complain about things that they don’t need to. As a part-timer, you generally see how pointless that is. Complaints don’t make it to the CV. In fact, if they do, the CV may as well have “file in waste paper basket” stamped across the top. You won’t have time to sit down and pen your refereed article. You won’t be able to afford taking time off work to go to conferences. But you will find a way to re-arrange your life to take advantage of whatever gets thrown your way, because that’s all you’ll get. As a part-timer, you will take a million knocks. People with full funding might complain, but they have a steady stream of income for their entire period of study, so in a sentence: they can shut the hell up. Occasionally part-timers bitch and moan, but as a rule of thumb it is about stuff twice as important as what full-timers complain about, and at least half as often.
5. Discipline and focus.
Here’s the important bit – at some point your life will empty out temporarily, and you will attack your thesis like a rabid pitbull. It happened to me last summer. I got six weeks clear and wrote about 135,000 words. There were downsides to this – I realised that there were a few key points of my thesis that required evidence that was unattainable, so I switched and shuffled and produced an argument that was defensible on the evidence available. Maybe I am a freak of nature, but I now laugh when I hear someone complaining about writing a hundred words in a day. But I’m not alone – all the part timers that I know have the ability to ‘burst’ like that. We’re so used to having to deal with the entire world disrupting our day that an eight hour stretch of uninterrupted study becomes insanely productive. This is, however, good preparation for the ‘real world’ as academics don’t get to sit around all day reading books (at least not in the UK). They have to teach classes, file paperwork, file research grant applications and so on and so forth. If you’re part time, you learn to switch on and off in seconds, and it’s a damn sight more productive than sighing at the fact that there is admin to be done.
All the above might read a bit like Nietzsche. It’s not meant to, but a part-time PhD is a hell of a struggle. Here are some tips that I learned (sometimes the hard way) on how to cope.
1. Do something else.
Seriously, do it. Even if it overloads you and kills every second of free time that you have, a long-term project or hobby that is in no way related to your PhD is important. In my spare time I do Brazilian jiu-jitsu and judo, and I also write fiction. As a result, I have no free time. I maybe get to see my friends once a week (outside of training with them). A PhD is a long, long slog. You will see no appreciable progress for extended periods of time. A part-time PhD is an even longer slog. Having something else that is no-pressure gives you something that can provide perspective. In my time spent on my PhD I’ve concurrently gone from blue belt to purple belt in BJJ, which is a pretty big deal. I’ve also written a couple of fiction books (but no sales yet, so I can’t call myself a ‘writer’). There’s been no-one looming over me to do either of these things, so I derive pretty much pure pleasure from them. I’ve come to realise, however, that both of these activities are a pressure-release valve. When I’m completely stuck on my PhD, I maybe sit down and write for an hour and then go back. Similarly, having a solid training regime gives a life that is otherwise shift-patterned a little regularity. Every time I get a little better in either activity, I notice. It may only be me noticing, but little improvements like that are good for the soul, especially if you have your life locked into an extended research project which your future career depends upon.
2. Look back and laugh.
As a part-timer you will take knocks. In fact, there will be dozens. If, like me, you end up hanging onto your PhD by your fingertips, it will get really depressing. The key to getting past this is learning that failure has a sell-by date. I spent a couple of years about a month away from having to quit for financial reasons. Somehow I survived. There were a few times when prospects arose that could have lifted me out of this precarious position, and they all fell through. I got made redundant at 46 hours’ notice at one point, which was a bit of a kick in the teeth, and almost brought that one-month horizon crashing in on me. The point is, I’m still here, and I’m probably going to finish. The bits that almost made me not finish aren’t going to hold me up any more than they already have done. Reading through all of this, I’ve thrown in enough anecdotes that it probably sounds like the whinging of Eeyore at some point. That isn’t why I wrote those bits, I wrote them because nowadays I can laugh about it. Seriously. Laugh. It makes the world go round. Spending time cataloguing knocks, scrapes and bumps is pointless. Treat everything that doesn’t stop you as some sort of Heller punch line. You have no ability to change the past, so don’t let it impede your future.
3. Learn to work on the fly.
In the full-time world, you have a desk, internet access and time to peruse libraries. In the part-time world, you probably can’t count on these things. At bare minimum, most of the time you have two places to be in any given day (that aren’t your bed). Here is where time management kicks in. If you can’t manage time, then you don’t get to complete. By time management, I mean “creating as much stationary time outside of paid employment” as you can. Mostly, this involves working out of cafes. There are two ways of doing this: the first is to rock up somewhere, complain that they don’t have internet access and generally piss people off by trying to recreate a desk in someone else’s business; the other is to adjust your work protocol to their constraints. Have five articles on hand that you need to read at all times. Make sure that no matter how big or small a space you get to work, you can get something done. Pack a pair of decent earplugs to shut out the world. My personal gadget of choice is a Kindle, since you can pack hundreds of articles onto the thing and it saves needing a printer or a plug socket to use it.
4. Learn when to quit.
I learnt this lesson the hard way: I took a single three-day weekend break in my first two years. I didn’t realise that I was burned out until way past the point of burnout. Nowadays I can tell when I need to take a break. The upside of a part-time degree is that in the grand scale of things, a day off won’t kill you, so take it when necessary. As a rule of thumb, if your work-rate drops below 50%, you need a day off.
5. Track yourself.
100,000 words is a daunting prospect. Being 0.1% closer to completion makes it bearable. I made a quick Excel sheet where I would punch in my day’s word count and other such details. It meant that when I wrote 500 words, or footnoted a paragraph for a chapter, the stats would alert me to the fact that I was 0.2% closer to completing. This bore no relation whatsoever to reality – the words might have been crap, the paragraph might not make it to the final draft. But this method is a powerful tool to positively reinforce yourself over the long haul. Condition yourself; it’ll make you feel better. I actively encourage you to lie to yourself, because sometimes we need the little white lies that see us through to tomorrow.
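If spreadsheets aren’t your thing, the same trick takes a few lines of code. This is a minimal sketch of the idea, not the actual spreadsheet – the function name and the fixed 100,000-word target are illustrative assumptions:

```python
# A toy version of the progress-tracking spreadsheet described above.
# Assumes a fixed 100,000-word thesis target; adjust to taste.

TARGET_WORDS = 100_000

def log_session(log, words_written):
    """Record one session's word count and return cumulative % complete."""
    log.append(words_written)
    return round(100 * sum(log) / TARGET_WORDS, 1)

sessions = []
print(log_session(sessions, 500))   # 0.5 – a 500-word day
print(log_session(sessions, 1500))  # 2.0 – running total of 2,000 words
```

The point isn’t the arithmetic; it’s that logging every session turns an unbounded slog into a visible, creeping percentage.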
I wrote a post on the Kony 2012 video here, it went viral and got me mentioned on al-Jazeera, quoted in the National Post (online here) and The Age. I wrote another post on the subject here, got interviewed on Toronto talk radio and Foxnews.com and did a podcast here. It’s now a second past my fifteen minutes of fame so I’m going back to writing bits and bobs that no-one reads.
New post by me over at Kings of War. Check it here.