Ben Abraham interviews Ben Abraham

I was having trouble writing proper blog posts, so I decided to interview myself. After all, who better to know the good questions to ask me than myself? So here we go.

Ben: What are you interested in right now? What are you reading, and why?

Ben: Right now I’m interested in a lot of neuroscientific philosophy, like theory of mind, philosophy of the mind and its relationship to the body, and stuff like that. Why now, and why that particularly? Partly because I think I’m at a stage in my research where I need to know some facts about reality. As much as I love philosophy, philosophy only gets you so far. That’s not to subsume philosophy beneath the all-powerful sight of Science! because there are lots of instances where science doesn’t quite do it all for me, and there are plenty of people out there willing to point out science’s shortcomings (Bruno Latour being just one of them).

I’m also deeply indebted to recently reading a lot of both Quentin Meillassoux and Timothy Morton, who both could best be described as science-influenced philosophers. I’m of the firm opinion that, ultimately, science and philosophy are somewhat fundamentally the same thing; they just take markedly different rhetorical and theoretical approaches to a common goal.

Ben: What are your feelings on videogame blogging / criticism at the moment?

Ben: Ambivalence. Huge slathers of ambivalence, layer upon layer of it. I love and hate it in equal measure, the hate probably intensified because I love it so much. ‘Calls to arms’ and new manifestos happen with disturbing regularity and are almost never taken up widely (and all too often for perfectly good reasons). I made my own appeal for more ‘mere description’ in Latour’s sense, along with appeals to go just an extra half-step further in analysis, as well as pleading for the dismantling of compound terms (replayability, gameplay, etc.) for the sake of clarity, almost none of which have been heeded.

When Dan Cook wrote his polemic piece back in May I was reminded that I am, for better or worse, arguably the best positioned person to get a sense of the top-down view of the “whole” blogosphere… and all I really get from that ‘view’ is an increasing sense of the immensity and scale of this endeavour that is ‘critical videogame blogging’. I get the real sense that there is a high chance there are tens of blogs out there that could be included in Critical Distance but are not ‘connected’ in the community in the same way, for whatever reason. Language was one barrier I pointed out a while ago; another is the Australian-American leaning of my circle of acquaintances.

But even further, I think there’s something about the way we are going about writing about games that’s not-quite-right anymore. Perhaps it’s just an accumulation or ossification over time, but hardly any writing about games seems weird enough anymore. The most interesting and unexpected insights that I remember (and I could be remembering with rose tinted glasses) from the quote-unquote “early days” of the blogosphere were always from weird posts, from strange or forced comparisons, and from jarringly unexpected angles. Have we turned inwards too much, forgoing writing about the intersection of games and other things? Possibly, but there we are again. The unavoidability of the manifesto and ‘top down’ directives about How To Do Videogame [Criticism/Journalism/Writing/Blogging] (delete as applicable). The answer has always been to just do it yourself, but I really don’t have the energy or enthusiasm these days. I wish those that do, however, Godspeed, and I’ll be the first to pat them on the back and link them in TWIVGB.

Ben: What other kinds of writing would you like to be doing at the moment?

Ben: Writing about weird, apparently unrelated things/topics/issues which actually share strange and non-linear connections. Maybe even just saying ‘Fuck it’ about games for a while and writing whatever is interesting. For example, I want to write about JUSTICE’s video for their new song ‘Civilization’, and view it through a Tim Morton Eco-Thought-lens.

I find it interesting how it positions the buffalo herds as subject – it’s plain that we’re meant to be paying attention to animals. That, right there, is odd, but it hints at Morton’s dissolution (following Darwin) of the categorical difference between human and animal. The video, perhaps echoing the lyrics of the song, seems to be showing off the precarious and destructive nature of not just human civilization (with the extinction of the buffalo herds of North America being the ultimate example of this) but of the whole world. It tilts on an axis and everything goes sliding around.

It’s particularly Tim Morton-esque, however, in that it avoids the frankly boring “nature is good / civilization is bad” dichotomy of so much green/environmentalism. When the whole world is turned physically upside-down as well as metaphorically, it doesn’t matter if the things about to crush you from above are man-made statues and buildings and bridges or if they’re just giant slabs of stone – that’s The Ecological Thought: they’ll kill you all the same.

The final shot of the clip, of a buffalo leap-falling into the golden, shining human face is interesting as well: the whole inside of this ‘planetarium’ (Planisphere?) seems to be in miniature, and so this could be the shining, golden face of a real human, peering inside the strange interior globe. Why depict the human as golden? Perhaps to depict, somewhat critically, the privilege and majesty of humanity (which The Eco Thought would deny to it). Gold is wealth, riches and privilege, but it’s also inanimate, and inhuman. The Human is also inhuman, it seems to say. More inhuman than buffalo!

I quite like the idea that a band I love is ahead of the curve. Morton suggests that typically artists catch us theoreticians unawares. Perhaps a good rule of thumb for original theory might be that if someone’s thought of it, and written about it, an artist has already made some artwork expressing elements of it. Morton’s The Ecological Thought is rife with examples of artworks that exhibit The Ecological Thought (don’t mistake The Eco Thought for merely thinking ‘green’ or ‘environmentally’ – it’s about thinking the interconnectedness of everything, not just plants and animals and us). The conditions (predicates?) for thought and changed-thinking are really interesting to me, but damned if I know how to investigate something that huge with anything approaching rigour.

Ben: What are you finding really difficult right now?

Ben: Dealing with things across multiple fronts: a small personal crisis of realisation about my self-image; a professional crisis of needing to get shit done as well as wondering what-comes-after-this; and an acknowledgment that my current lifestyle is ultimately unsustainable in the long term.

On the first count, I’m trying to come to terms with the fact that other people clearly have a very different picture of me than I have of myself in my head. Trying to reconcile my own internal image with the ‘reality’ (at least as far as others indicate contrary to my internal model) is, suffice to say, hard. But it’s something that I need to do because it’s a mental block to a lot of things that I would really like to do. A strange aside: playing around with the twitter account that is ‘FUTURE10RDBEN’ has actually been really great in this respect. Think of it as doing something like performing the role of ‘visualising yourself more how you’d like to be’, except I don’t have to visualise it because he just tweets crazy stuff and I go “man, that’d be kinda awesome – why don’t I be (a little bit) like that?”

On the second, I have to actually produce writing; that’s what a PhD is. I’m coming up with all sorts of amazing shit-hot ideas, but if I can’t get them to the page it’s kinda worthless. And then once that’s all said and done and submitted (at the end of 2012, no less)… what then? My life may look quite different by then, but I wouldn’t mind applying for some work around the nation, or even around the world. The idea of just uprooting myself and going somewhere else for a few years is kinda appealing. I’ve lived in Sydney all my life, essentially, and while I’ve been lucky to have travelled quite a bit, it’s not really the same as having to live somewhere for an extended time. I still live a mere 20 minute car trip away from my parents, which is actually really awesome because my parents are amazing, but it means I haven’t really moved all that far yet.

Thirdly, and finally, I realise that my sitting around reading books all day, scanning the internet for interesting things, and writing about things on the internet is not going to be my routine for life. It’s all going to end, and possibly sooner rather than later. Time is flying and it’s a worry.

Ben: Thank you for your time.

The New Situationist International

I read the introductory few pages of McKenzie Wark’s The Beach Beneath The Street and was immediately inspired to organise or join an artist/philosophical collective like the Situationist International. I don’t have a great understanding of them, their goals and practices, having learnt virtually everything about them through reading Wikipedia, a couple of chapters in books and from hearing them mentioned in reverential awe by a few people. But the idea of being part of a collective of collaborators – like-minded thinkers and artists, etc – has appealed to me for a long time.

So I was thinking – how would I create a New Situationist International? How would I bring together some of the best thinkers to work on some nebulous, aspirational project? I don’t know enough people in Sydney, and I barely know enough in Australia – but then it hit me, I don’t need to do anything like that because the new SI already exists: it’s there if we want to think about it in that way.

What am I talking about? The videogame blogosphere. This week we’ve turned Kirk Hamilton’s wall into a shitty temporary exhibition of macros of our own satirical-critique of videogame conventions; like a hyper-localised version of Warhol’s soup can images. We’ve got this amazing Wiki that effectively parodies the same things, and a fantastic podcast to go with it. Just this week we’ve (this is the collective ‘we’ not the royal we) also written about What It’s Like To Attend E3, ‘How I Get My Hair So Pink’, and made fun of academics using questionable jargon; and that’s just the fun stuff. It’s the new SI because it’s a strange group, and probably not even a real ‘group’ except that we ‘go together’ in the Latour/Callon sense. We make stuff. We also make stuff up. We work, and we get paid, and we do things for fun and for serious.

The temptation might be to think that the VG blogosphere needs ‘inciting’ or organising to do or to start something, particularly something political. But it’s becoming incredibly obvious that everything is political, almost especially play and playfulness. Perhaps ‘playfulness’ isn’t the perfect word for it – it’s certainly not the whole of it. Non-seriousness, perhaps. Fun, with an ambiguous deployment of irony. Fun for adults. In his book The Ecological Thought, Tim Morton writes about a new aesthetic he calls ‘Dark Ecology’ that ‘puts hesitation, uncertainty, irony, and thoughtfulness back into ecological thinking’. Morton feels that ‘Democracy is well served by irony, because irony insists that there are other points of view we must acknowledge.’ (Contrast: the modern, fashionably gentrified ‘irony’ that isn’t really ironic in the slightest)

Rather than trying to get the blogosphere to do stuff, it’s a more productive approach to instead change our thinking about what the blogosphere already does. From a vantage point of 30 years in the future, what is going to appear valuable that is being done in the VG blogosphere right now? That’s a really interesting question, and I’m not sure the obvious answers are the right ones. Is it really the position essays, the philosophical pieces that people will remember?

But perhaps (hopefully) it’s not even specifically the VG blogosphere that’s unique in this. Perhaps it’s the internet community sui generis that is going to become this century’s ascendant social structure. Structure isn’t even precisely the right word because it’s not all that structured, but I use it because it helps contrast it with 20th Century social structures like Church, School, Clubs and other organisations. The internet community is already these things, and more. But not in the same way, in a different (and interesting) way.

So thinking entirely selfishly for a moment – how to document this stuff for my own research? Is it important that I save links to all the image macros on Kirk’s Facebook wall? Do I need to document for future observers what the whole ‘trinketmonger’ half-joke is actually about? No. Why not? Because a) it’s too labour intensive, and b) because it’s largely irrelevant anyway.

The point of an Actor-Network Theory approach in a digital environment, or to use Morton’s more reader-inclusive terminology, the point of an approach that obeys ‘The Ecological Thought’ in all its networked and connected, ultra-massively expansive fullness, is to make backups, duplicates, redundancies. The point is to re-share, re-link, re-post, and re-tweet. The account of the (present) happens later, or else we end up with an already-past-present where everything is seen through the lens of the camera, or in our case, through the lens of the future ANT/Eco documenting-theorist. The ultimate irony then: an ANT/Eco theorist from the future is influencing the present/past already.

Philosophy is encouraging

Philosophy is the invention of strange forms of argumentation, necessarily bordering on sophistry, which remains its dark structural double. To philosophize is always to develop an idea whose elaboration and defence require a novel kind of argumentation, the model for which lies neither in positive science – not even in logic – nor in some supposedly innate faculty for proper reasoning. – Quentin Meillassoux, After Finitude, pp. 76-7

Indeed. Why would it be otherwise? To be novel is to be creative; to create. Whence originality? If we knew precisely where, it would cease being original. That’s still to say that some strategies/approaches/stances/etcetera are better than others for discovering (uncovering?) the original. This may help contextualise my love for rhetoric and persuasion.

Another quotation, from Tim Morton this time:

The modern age compels us to think big… Any thinking that avoids this “totality” is part of the problem. So we have to face it. Something about modern life has prevented us from thinking “totality” as big as we could. Now we can’t help but think it. Totality looms like a giant skyscraper shadow into the flimsiest thought about, say, today’s weather. We may need to think bigger than totality itself, if totality means something closed, something we can be sure of, something that remains the same. It might be harder to imagine evolution than to imagine infinity. It’s a little humiliating. This “concrete” infinity directly confronts us in the actuality of life on Earth. Facing it is one of the profound tasks to which the ecological thought summons us. – Morton, The Ecological Thought, p.4

The ecological thought seems to be what I’ve been unconsciously struggling to realise, explain, and understand in almost all my research practice. Starting with the videogame blogosphere was just convenient, because naturally we are going to end up “everywhere” anyway, since everything is connected.

The ecological thought is intrinsically open, so it doesn’t really matter where you begin.

This is what attracted me to Latour in the first place – here is a thinker with an approach, a way of studying (and, read through Harman, even a whole philosophy!) that captures the irreducibility of the particular while keeping sight of the inexpressible magnitude of the ‘everything’. No Actor-Network account of anything is ever fully closed off. History marches on and adds and subtracts from the original document. New things happen, new actors appear, new phenomena reveal themselves. Old voices disappear, perhaps to reappear again at a later date, like Maggie Greene going quiet from the internet for a number of years, then suddenly she’s back again, making herself heard.

Thinking the ecological thought is difficult: it involves becoming open, radically open – open forever, without the possibility of closing again…

So what original, creative strategy will my PhD contribute? What novel philosophy will it argue, and will I find the conceptual courage to prosecute it tactically (and well)? Where is my original thought, and what does it pertain to? I have a few suspects: PhD writing as in-essence actor-network theory in practice (assembling from traces); assembling from traces as a modern tech-savvy bread-and-butter activity (i.e. ANT becomes doable for any kid with a net connection and an interest in wikipedia’s history pages); the digital subject of knowledge (the blogosphere as an entity itself that ‘knows’ (erm…)); the extended mind as (another!) challenge to correlationism’s desuetude from absolutes (making techno-cognitive prosthesis a moral act?); a (doomed?) attempt at a non-subjective, non-perception-bound notion of affordance (affordance is not the right word, but it’s the best I’ve got at the moment. This project runs the risk of becoming kinematics, engineering, or physics/chemistry).

Abstract: More fun writing than playing: the critical videogame blogosphere as emerging approach to knowledge creation

As I said in my stereotypical update post, I’ve been writing a paper to present at a conference in Oxford, in July. Here’s the abstract and the bibliography for the paper, to whet your interest in the final piece (which will be published in the conference proceedings ebook).

It’s a happy coincidence actually that while I’ve been considering this idea of the “distributed subject” in this (and one other) paper, Michael Abbott of The Brainy Gamer has just written about the confluence of Buddhist practice/insight and videogames, with a particular focus on the lack of the “self” (or its illusory nature). I recommend his “Notes on Becoming” as something in a similar vein.


In the following paper I will look at the section of the online community of videogame bloggers I have been involved with since 2007, beginning with an outline of some of the steps that allowed them to arrive at the status of ‘experts’ in the field of game criticism. Along the way I will need to examine the nature of community in online situations, and apply Benedict Anderson’s insights on the nature of community to examine how this particular community imagines itself. I will look at what it says it does and how it frames what it is for, but examining this alone is not quite enough, as the community is as much a product of technical forces as human or social ones. I will avoid the assertion that community is some kind of ‘stuff’ like a social fabric, thus avoiding putting all my explanatory eggs in the basket of ‘social forces’. Instead I will adopt the agnostic approach of Bruno Latour who examines associations between all things, emerging with a conception of the community as a network of actors, presenting a safe path through the twin minefields of technodeterminism and social constructivism. I will then take a slight detour to recent discussions on the nature of cognition and the mind, and find parallels with the network community that enable me to suggest the epistemic conditions are emerging for a successful challenge to the long-dominant conception of the humanist subject. It is my contention that a new Foucauldian episteme is fast approaching, and that a new ‘subject of knowledge’ is perhaps visible at work in the internet community I call ‘the critical videogame blogosphere’.

Key Words: Videogames, Community, Blogging, The blogosphere, Criticism, Expertise, Theory of Mind, Benedict Anderson, Bruno Latour



Alexander, Leigh. “Analysis: Developer Disdain For Games Writing Illuminates Wider Gulf”. Gamasutra. May 11, 2011. Accessed May 27, 2011.

Anderson, Benedict. Imagined Communities: Reflections on the Origin and Spread of Nationalism. (London; New York: Verso, 2006).

Bogost, Ian. “Persuasive Games: Exploitationware”. Gamasutra, May 3, 2011. Accessed May 27, 2011.

Bryant, Levi. “It Thinks – Some Reflections on Blogging”. Larval Subjects. May 25, 2011. Accessed May 27, 2011.

Burch, Anthony. ‘Far Cry 2 and the Pragmatism of Evil’ (Presentation at the Game Developers Conference 2010, San Francisco, California, March 9–13, 2010).

Clark, Andy. Being There: Putting Brain, Body and World Together Again. (Cambridge: MIT Press, 1997).

Clark, Andy & Chalmers, David. “The Extended Mind”. Analysis, Vol. 58, No. 1, Jan., 1998.

Cook, Dan. “A Blunt Critique of Game Criticism”. Lost Garden. May 7, 2011. Accessed May 27, 2011.

Doull, Andrew. “Dear Dan”. ASCII Dreams. May 9, 2011. Accessed May 27, 2011.

Golding, Daniel. “Doing Things With Critics”. RedKingsDream. May 9, 2011. Accessed May 27, 2011.

Harman, Graham. Prince of Networks. (Melbourne: re.press, 2009).

Hocking, Clint. ‘Dynamics: The State of the Art’ (Presentation at the Game Developers Conference 2011, San Francisco, California, February 28 – March 4, 2011).

Kellogg, Ronald T. “Professional Writing Expertise”, in The Cambridge Handbook of Expertise and Expert Performance. eds. K. Anders Ericsson, Neil Charness, Robert R. Hoffman, Paul J. Feltovich, (Cambridge; New York: Cambridge University Press, 2006), 393.

McCrea, Christian. “Playstayxian”. Twitter. May 8, 2011. Accessed May 27, 2011. http://twitter.com/#!/playstayxian/status/67115233780568064.

O’Neil, Mathieu. “Wikipedia and Authority”, in Critical Point of View: A Wikipedia Reader. eds. Geert Lovink & Nathaniel Tkacz, 312.

Plante, Chris. “A million ways to die: With nukes, tanks and guns, players have killed Osama for years”. The Daily. May 3, 2011. Accessed May 27, 2011.

Ruch, Adam. “A Riposte to Blunt Criticism”. Flickering Colours. May 10, 2011. Accessed May 27, 2011.

Wikipedia contributors. “Expert”. Wikipedia, The Free Encyclopedia. (accessed May 27, 2011).


Why do I write?

Why do I write?

I haven’t stopped to think about it lately, and I probably should. So I sat down for an afternoon and tried to come up with all the reasons why I write. Here, in no particular order, are the main reasons I write:


1. Because I’m reasonably good at it. I started blogging because I’d learnt I had the knack for turning words into sentences and sentences into paragraphs; paragraphs into chapters; chapters into theses. That’s the essence of what writing is. Turning words into something larger.

Words on their own mean something, but the relationship between words when they are placed in order is vastly more important. Much like binary code in which the significance of any individual ‘1’ or ‘0’ is simultaneously and paradoxically nil and ultimate (the significance coming from a relation to all the 1’s and 0’s that precede and follow) so too every word means simultaneously almost-nothing and almost-anything. Their individual significance is minor to the point of being generally interchangeable. Like any binary ‘1’ on a spinning magnetic disk, swap it for any other ‘1’ and the meaning remains the same. Just so, words together can cumulatively enlarge and grow and warp and twist and crackle across the page with such fire and power that it seems as though the very world was enveloped by words!

The world is not enveloped by words, but one can at least better understand the attraction to philosophy’s near-all-consuming ‘linguistic turn’.
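The bit analogy can be made concrete with a toy sketch (my own illustration, not from the original post): any individual ‘1’ is interchangeable with any other ‘1’, but the ordering of the whole string is what carries the meaning.

```python
def decode(bits: str) -> str:
    """Decode a string of 8-bit groups into ASCII characters."""
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

word = "0100100001101001"          # two bytes
print(decode(word))                # -> Hi

# Swap one '1' for another '1': nothing changes, they are identical tokens.
# Reorder the same multiset of bits, though, and the meaning shifts entirely:
rearranged = "0100100101101000"    # same count of 1s and 0s, different order
print(decode(rearranged))          # -> Ih
```

As with words, the significance lives in the relations between the tokens, not in any individual token itself.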

2. Because the act and process of writing helps expose me to my own thinking, and develop my own ideas. When I’m writing a piece and connecting logical dots, when I come to one or more seemingly contradictory conclusions (or, more commonly, am overtaken by a growing realisation of contradiction or confusion) I have to re-examine my premises, or the terms that I am using, or some other aspect of my approach entirely. I have to wonder, what do I really mean here? I have to comprehend my own unarticulated intimations and somehow untangle the mess of connections as though a snarl of many twisted wires.

It’s kind of like The Socratic Method for solo cogito, where you have a lone dialogue with yourself by way of externalising thoughts through words.

3. Because I like the way a particular turn of phrase or a particular use of words can make me think in a completely new direction. Take, for example, the following completely functional sentence:

Leaving food in my bedroom attracts rats and cockroaches.

There’s absolutely nothing wrong with that sentence whatsoever. It contains four nouns – ‘food’, ‘bedroom’, ‘rats’ and ‘cockroaches’ – along with the gerund ‘leaving’ and the transitive verb ‘attracts’. It carries the meaning efficiently and effectively, warning readers not to leave food lying around in my bedroom unless they want to encourage rats and cockroaches. Now take the following modification of that sentence.

Leaving food in my bedroom attracts vermin.

All I have done is substitute ‘vermin’ for ‘rats and cockroaches’. The difference from the first sentence is both subtle and profound. For starters, it has at once simplified the sentence, reducing the word count and leaving just three nouns – ‘food’, ‘bedroom’ and ‘vermin’. It has also changed the scope of the statement, increasing the range of the implied admonishment to encompass the entire category of creatures that are considered pestilent. Even further, the word ‘vermin’ brings with it connotations of disease. Now, instead of our imaginary food attracting merely two species of pest, it attracts a whole lot more. Wikipedia’s page for vermin discusses the word’s scope:

Disease-carrying rodents and insects are the usual case, but the term is also applied to larger animals – especially small predators – on the basis that they exist out of balance with a human-defined (desired) environment… Pigeons, which have been widely introduced in urban environments, may be considered vermin

There is so much more possible meaning to be drawn from the second sentence than the first: now a reader’s mental image of the consequences of leaving food in my bedroom includes a virtual menagerie of all types of vermin; alongside the rat on the side-table and the cockroach on the plate is now the pigeon that flies in my window to nibble on leftover crumbs, the mouse nibbling on some mince, and any other ‘vermin’ the reader’s imagination might conjure up. All this from using one word instead of two (well, three if you count the conjunction ‘and’).

4. Because words are the things that grant me access to ‘things’. Using new words gives me access to new things; everything from thoughts and emotions to new words for composite activities and entire processes. As a process of discovery it’s exciting to be able to attach a word to something that was previously indescribable, held only in the mind as a vague miasma of thoughts, actions or emotions. Try to conceive of something that has no word (or group of words) for it, or something that you don’t know the word for, and what results? A vague sense of wrongness, uneasiness, a sense of indeterminacy and a reliance on broad, childlike strokes at attempting to describe something in an inevitably not-quite-right way.

Take a word like ‘thanatosis’, which roughly means the act of feigning death in an animal, usually as a reflex action. Sure, you could always just describe that as “the act of feigning death in an animal, usually as a reflex action”, but to have a word-tool available gives it a sense of coherence, of unity. This is a phenomenon; it exists, whereas before all we had was a compound series of words/sentences. It’s a relatively powerful aid to thought.
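Programming offers a rough parallel, sketched here as my own hypothetical illustration (the function and its data are invented for the example): giving a name to a compound series of steps turns them into a single coherent tool, much as ‘thanatosis’ bundles a whole description into one word.

```python
# Hypothetical example: the name 'thanatosis' stands in for the compound
# description "freeze movement and suppress responsiveness", the way a word
# stands in for a compound series of sentences.
def thanatosis(animal: dict) -> dict:
    """Make an animal feign death: stop moving, stop responding."""
    return {**animal, "moving": False, "responsive": False}

possum = {"species": "possum", "moving": True, "responsive": True}
playing_dead = thanatosis(possum)
print(playing_dead["moving"], playing_dead["responsive"])  # -> False False
```

Once the steps have a name, you can think with the name rather than re-deriving the steps each time.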

5. Because writing is non-literal (or doesn’t have to be literal). It can be allusive, as well as functional; persuasive as well as descriptive; figurative as well as useful. Computer code is functional in that it does things, and this results in the inseparability of understanding what a piece of code does from an understanding of what it is. The IF/THEN statement is exactly what it does, quite unlike language and writing, which hold a non-linear, indeterministic relationship between what a unit of writing is and what it does.
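To make the contrast concrete, here is a sketch (my own invented example, not from the post): the conditional below is nothing over and above what it does when run, whereas a sentence describing the same rule can be read literally, ironically, or figuratively.

```python
# An IF/THEN statement "is exactly what it does": its whole meaning
# is exhausted by its execution.
def admonish(food_left_out: bool) -> str:
    if food_left_out:           # IF the condition holds...
        return "vermin arrive"  # ...THEN this, and nothing more, happens.
    return "room stays clean"

print(admonish(True))   # -> vermin arrive
print(admonish(False))  # -> room stays clean
```

The sentence "leaving food in my bedroom attracts vermin" invites a menagerie of readings; the function admits exactly two outcomes.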


6. Because writing can be its own reward! Thus, if my words change the world, so be it. If they do not, so be it.

7. Because the end result of writing (having a piece of writing, contrasted with not having a piece of writing) is something that I can point to and say ‘That is something; something that I have made and that reflects something about me, be it my character, my prejudices, my perspective, my limitations and boundaries, my insights, my vocabulary, my speech-thought patterns, my philosophical predisposition, my proclivities and peccadilloes, or my command over my very own thoughts.’

8. Because writing is communication and I am hungry to communicate – to reach out and touch other people.

To understand and to be understood is a deeply powerful, even sacred, relationship. Comprehension is both skill and choice; as a skill it’s one that many people seem to lack but it can be developed.

If writing is practice at comprehending myself, then reading back over my own writing can be practice at comprehending myself as comprehended by someone else.

9. Because writing is technical in that there is a right way and a wrong way to do it. Words have correct spellings (leaving aside differences between regions) and grammar is essentially a semi-rigid system of rules. Oftentimes there are good and better ways of writing (particularly when writing with a purpose or audience in mind), but there are also right and wrong ways. That is a comfort.

10. Because writing can do amazing things, as well as be amazing. It can do art as well as be art. Out of the same ‘stuff’ is fashioned the most withering critique of the vapid artist and the most fantastic exploration into the character of 1920s Parisian expatriates.


Further reading: Lyndon Warren’s ‘Wittgenstein, Games, and Language‘, and Philosophy Bro’s excellent ‘Wittgenstein’s “On Certainty”: A Summary‘.

A response to Dan Cook’s “blunt critique of game criticism”

NB: Since posting this piece the original post in question has been edited to remove many of the phrases I initially took aim at. As such I now feel like the fool flailing away at thin air, so thanks for doing that to me, Dan. It is, as always, his prerogative, but it’s also exactly what I was talking about in the final post-script. I leave my original response as-is.

N.B. 2: My attention has been drawn to this copy of the original version of Dan Cook’s essay, which is important context for reading this response.

This is a response of sorts to Dan Cook’s self-described “A blunt critique of game criticism”. If you haven’t read it yet the rest of this post is likely to make little sense, so please go read it first to get the full context of my remarks.

What I’m going to be attempting here is to exercise what Wayne C. Booth termed ‘Listening Rhetoric’. In his book The Rhetoric of RHETORIC, Booth described listening rhetoric like this:

When LR is pushed to its fullest possibilities, opponents in any controversy listen to each other not only to persuade better but also to find the common ground behind the conflict. They pursue the shared assumptions (beliefs, faiths, warrants, commonplaces) that both sides depend on as they pursue their attacks and disagreements. (The Rhetoric of RHETORIC, p.10.)

So to help me make sure I’m not misunderstanding it, here’s a summary, my take on what his piece is about laid out in the simplest fashion possible:

  • Dan Cook feels passionately about game criticism (as do I)
  • But he has a very particular view of what does and does not constitute good criticism (one that I do not and cannot share)
  • He doesn’t see enough of his idea of what is good criticism and is understandably frustrated

But frustration does not excuse the fact that he’s said a lot of things in an offensive and condescending manner, things that are only true if one shares Cook’s underlying premise. Disagree with Cook on the point that criticism must be useful to “[improving] future games” and his critique collapses like a house of cards.

He’s not helped by the fact that there is the stench of the troll around the piece. I have tried extremely hard not to take offense at Cook’s post, to exercise ‘listening rhetoric’ as it were, particularly given how civil he’s been to me on Twitter. His comments on the post are practically conciliatory, but his position remains narrow-minded and dogmatic. So confident is he in this position that he seems unwilling to extend the courtesy of listening rhetoric to his critics.

I think it is an offensive post. Whether you want to call it a rant or ‘hyperbole’ or a ‘revisable draft’ or whatever else, it retains a number of assumptions and cranks them out to their logically unsound conclusion. So here’s my “response” or “feedback”, for what it’s worth: Cook’s article is condescending, inaccurate and unfair. His response to others’ reactions leaves me equally with the strong sense of a hypocrite trying to wriggle out of it and of a dogmatic priest vainly trying to hold onto a dying faith.


It’s condescending and he should be ashamed.

According to Cook, the vast majority of games criticism is “a waste of [his] time as a game developer”. Similarly, the experience of games held by critics (i.e. anyone not like Cook, who is both writer and developer) is “impoverished”, because those of us who have never correctly loaded a C++ header file “know little to nothing about the philosophy and process of making games”. The charge of wasting his time is patently absurd because, frankly, this is the internet.

The second charge is equally absurd but harder to demonstrate. In the comments section, numerous examples have been given that highlight why knowledge of the process of creation should not be made into a standard to measure critics against (leaving aside the equally absurd notion that the ‘philosophy of making games’, whatever that is, needs to be known to make any kind of critique). Cook has been presented with arguments for this, notably by Fraser Allison, to which his response is extremely telling.

We’ve already established that Cook’s piece assumes incorrect things about the nature and purpose of game criticism (specifically: that its purpose is to help developers produce better games) and yet having had that pointed out to him he still wants to assert that he’s not “putting game criticism in a box” (which is precisely what he’s doing – that’s what a claim about the nature of a thing does), that he’s just “asking [us] to be better at [our] job” (which is another offensive assumption since he clearly doesn’t understand what our jobs are).

Consider an imaginary line of text taken from a manifesto I wrote in a parallel universe, in which I am heatedly entreating game developers: be better at your jobs, which is (naturally) making more games like Far Cry 2. Far Cry 2 is clearly the pinnacle of game design (for me), so anything else you could possibly do is a waste of my time. Please, developers, stop wasting my time. Imagine how cross Dan Cook (and hundreds of other developers!) would rightly be if I wrote that and sent it around the developer circles in search of “feedback”.


It’s inaccurate and he should be ashamed.

While calling for “clarion clarity” in game criticism, he relies upon a straw-man argument. While railing against the fact that much criticism is “useless” to him as a developer, he later goes on to say that “most writing is by gamers for gamers”.

I wish there were a better way of putting this, but: no-fucking-duh Mr Cook, they’re not writing for you as a developer. Cook knows this; he clearly understands it (otherwise he wouldn’t be lamenting it!) and yet for whatever reason he still decides to turn these innocents into an enemy. How does that aid clarity? Is this not just wilfully misconstruing the point of a whole field of work that you find wastes your time? Is this anything less than bordering on trolling? I turn his own words against him: “you can do better.”


It’s unfair and he should be ashamed.

Cook attempts to diagnose the malaise currently afflicting game criticism, and it is indeed afflicted; part of the reason I was so ecstatic upon first sight and skim of the piece was that I thought this was going to be a sister piece to my ‘Rhetorical Questions’. On the way to diagnosing the current problems criticism faces, however, Cook gets lazy and elides some of the logical leaps he makes. Observe the following passage:

…most game criticism suffers from an immense lack of hands-on knowledge about what it takes to make a competent game. In the past week of essays on Critical Distance, I found 1 writer of 12 had any declared experience making games.

Cook is not stupid: he’s added the caveat “declared”, as he knows very well that it’s entirely possible for a person to possess the very “hands-on knowledge about what it takes to make a competent game” and still fail to produce useful criticism. He says as much himself: “I have a friend who makes games, but publicly writes gamer-esque drivel.” But hold on a second there, Cook – where’s the fairness in directing a screed like this only at critics, given that there are numerically more developers well placed to become critics (according to your standards) who aren’t?

Furthermore, given that ‘being a developer’ is certainly no guarantee of the ability to write useful criticism, what then are we to make of the following statement?

If you are writing about games in language that suggest intelligent analysis, state upfront in your bio or perhaps even at the start of the article your perspective and experience.

Forgetting for the moment the problematic notion of possessing a never-changing ‘perspective’, why should we do this? What would be the point of outlining one’s developer credentials if that is still not enough to guarantee good criticism? Why even bother? Surely Cook knows that good criticism is either evident within a piece or it’s not, in which case, what does it even matter what experience a critic has as a developer? Could you, reader, even guarantee 100% that a writer of a “good” piece of criticism has had game development experience? Of course not – statements of “authentic experience” are empty and pointless, contributing to little more than the pointless goal of one-upmanship. “Oh yeah, well I have even more experience developing games than you, so my critiques are even more valid.” Cf. this Monty Python sketch.

So we’ve seen that Cook’s position can be summarised as holding to the following two points:

– Being a developer is necessary to write good criticism,

– Being a developer is no guarantee of being able to write good criticism.


Cook gives some examples of what he’s advocating at the end of his article (vetted for candidates with real game design experience, one presumes!). Amongst them is one AJ Glasser, former journalist/critic. From what I gather, Glasser left journalism for development sometime after 2009, and the article Cook is citing is from 2011. So in the space of a year (I’ve been informed it’s even less), and with who-knows-how-much hands-on game design experience (has she shipped a game? Has Cook checked?), she’s passed from being an utterly useless and time-wasting critic (Cook’s assertion! Remember, no development experience means no good criticism!) to a useful developer-critic. It’s a stretch to believe.

I’m using a definition of critic as distinct from over-enthusiastic commenter, and one that Cook seems to share: a ‘critic’ in this sense is not any old Tom, Dick, or Harry that comments on an IGN post or has a Destructoid community blog. A critic is someone that gets linked on Critical Distance. This type of critic may be interested in thinking and writing about games from any number of perspectives, not just a technical or design perspective, without fear of being labelled ‘a waste of time’.

But Cook protests! “Games have a functional heart that resists being reduced to the softest of sciences in the same way there is little room [for] ‘rock criticism’ in the practice of geology”, and further adds: “Games have more in common with functional works involving mathematics, psychology, governments, economics or other complex systems.” My favourite thinker at the moment is Bruno Latour, and in his book We Have Never Been Modern he describes the movement Cook is attempting to make: it is none other than the attempt made by the (failed) project of modernity.

The problem with Cook’s assertions is that games are no more pure “science” than they are pure “human construct” (in the sociological sense), and no more than they are mere “text” (in the humanities sense). Latour calls things like games ‘quasi-objects’ – not quite objective enough to be entirely the purview of science, not quite relative enough to be the mere products of human perception. Here’s Latour:

Quasi-objects are much more social, much more fabricated, much more collective than the ‘hard’ parts of nature, but they are in no way the arbitrary receptacles of a full-fledged society. On the other hand they are much more real, nonhuman and objective than those shapeless screens on which society needed to be ‘projected’. (We Have Never Been Modern, p.55)

I’m sorry to say, but you cannot merely wish away the ‘soft’ elements of games, whether you want to or not (I suspect Cook, if presented with this question directly, would answer ‘not’, despite actually arguing for it in the piece). There is and will always be an element of games that studying and critiquing “the object in itself” will never reveal, and in one wave of the hand Cook seems both to acknowledge this and to dismiss it as irrelevant to his position as a developer.

As has been pointed out in the comments section (by developer-critic Darius Kazemi no less), outlets for criticism like The Border House are vital. Injunctions against criticism targeting anything but the ‘hard’ parts of a game will always marginalise legitimate and worthy concerns about things like discrimination, racism, sexism, etc, etc. And that’s without getting into the importance of reception and reappropriation.

Yes, Cook rightly asserts that designers have a lot of knowledge of how games are received (probably even more than we often give them credit for), but they can never have complete knowledge either. In all the years of development that went into Far Cry 2, Clint Hocking never dreamed someone would turn his game into an experimental exercise in player-imposed permadeath and a machinima novel documenting it.

Reception and audiences matter, and I respectfully disagree in the strongest terms with Cook’s following comment:

I would also like more people to write about games in a way that moves game development as opposed to game playing forward. That’s me being selfish.

That’s certainly his prerogative, but as a player and critic I think there is still not enough being done to actually change the way players play. Taken together with the tone of the rest of the article (think “waste of time”, etc.), the impression one gets is that Cook holds the unstated assumption that criticism of development should be privileged over criticism of play. Developers are working every day to push game design in new directions (or so I assume; most are not aiming to cynically cash in on rehashed and recycled ideas, surely?), but who or what do players have to challenge their assumptions about play, the ways they play, and the purposes their play can have? They have criticism; that’s what they have. Cook can bemoan the lack of discussion amongst the game developer fraternity till the cows come home, but ultimately game design will keep chugging along so long as there is money to be made. Players and critics do not have to be unwitting, ignorant, and slavishly thankful accepters of whatever piece of game design developers want to dish out. But neither are players and critics the mightily empowered, all-powerful adjutants Cook seems to be afraid they are.

The balance is not right (and I have said as much myself), but my answer to the imbalance is not to advocate doing away with the kind of criticism that has no direct ‘utility’ for a developer; rather it is to redress the imbalance directly. Dan has been told this by others so I won’t labour the point. I can only hope, however, that he has taken it to heart.

Dan asked for feedback and I have given it. My advice to him would be to reconsider whether he’s actually interested in ‘criticism’ or not: criticism as a practice more like art, one that does not depend on a utilitarian purpose. If he is, he should cut his losses and start again. Rethink his premise. What variables are actually involved in the production of ‘criticism’? And don’t write in anger or out of frustration.

P.S. I’m calling bullshit on the positioning of Cook’s piece as a “draft”. Drafts don’t get released to the public. In my view, he’s merely trying to push buttons and, judging by the reactions, it’s working. If he’s going to put his words out there, he should have the conviction to damn well stick by them. Rants don’t get revisions.

On The Right To Be Interrupted; or, Why Nicholas Carr’s ‘The Shallows’ lacks insight

I’ve been thinking recently about attention and concentration. Part of my PhD is contingent on the idea of technology transforming the human brain/psyche/consciousness, and I’m particularly interested in how internet technology can help or hinder concentration. So too, apparently, is Nicholas Carr.

As a brilliant illustration of the point, right as I was trying to write the second sentence in the above paragraph I was interrupted, this time by the television, and I had to take a break from writing, temporarily unable to concentrate on the task. Interruptions like this happen all the time – in fact, we’ve developed a society based on the right to interruption. For this reason, I am more than a little sceptical of many of the claims that Nicholas Carr presents in his occasionally excellent, occasionally over-the-top book The Shallows. Carr positions Internet technology as fundamentally interruptive (though he never puts it in quite those words, and perhaps if he did he’d see the problems with his thesis), and he talks much about the speed with which we multi-task, switching browser windows to check emails, tweets, Facebook and other internet services.

Carr seems to think these interruptions are more common, more distracting, or somehow categorically different from the cacophony of non-digital interruptions that routinely happen as part of modern living. An example I’ve been considering of late, and one so ubiquitous its effects are virtually invisible, is the car. The car is a fundamentally interruptive technology, yet you won’t hear Carr railing against the noise pollution motor vehicles put out and the disruptive, degenerative effects it has on people who live with the constant interruptions typical of high-traffic environments.

More than half the world’s population now lives in cities. Cities are full of cars and other vehicles. The sound of vehicles is fundamentally an interruption: a listener has no say over when a car arrives, its timing and degree of interruption left mainly up to chance, and one can easily come to dominate the soundscape (particularly if it’s fitted with an obnoxious muffler). It is every car driver’s right to interrupt your train of thought with the sound of their approaching vehicle, not just those drivers who sit on their car horns. By driving down my street, a car’s operator interrupts the evenness of whatever I was aurally perceiving (consciously or otherwise) by asserting their presence sonically. Yet when confronted with this ‘evidence’ of the combustion engine’s fundamentally disruptive nature, most sane and rational people would simply suggest I move to a quieter suburb or obtain ear-plugs. So why isn’t Carr satisfied with advocating a move to a quieter digital suburb? Apparently it’s not possible to switch off your phone, ignore your emails, or even deliberately neglect your Facebook account.

Interruption is a right built into the very foundations of our society. Even beyond the objects with which we give explicit permission to others to interrupt us (the telephone, the email client, the IM chat window, the social network, the emergency siren, the car horn) we give permission to things like: other people’s stereos and televisions; to chairs as people scrape them backwards to stand up; to keyboards and appliances as they click and whirr in the office space; to the aircraft flying overhead; even something as common as the speech of anyone within earshot… interruption is an officially sanctioned part of society.

But imagine if we prioritised the ability to concentrate over the ability to ‘actually get things done’ (which inevitably will require, at some stage, being loud and/or interruptive). Imagine if the rights of everyone else to be uninterrupted by your presence and activities superseded your right to be loud, or to be heard, or to drive a car, or to walk with heavy footfalls, etcetera, etcetera. Imagine if we wrote it into our laws. It’s an impractical world from the current perspective, and for various reasons, but it’s also not an impossible one. We could never truly silence everything (animals, plants, the wind, etc.), but nor would we want to. The difference between these sounds and human-made sounds, however, is that the latter often convey semantic information (i.e. ‘this is a car approaching’, ‘there are road-workers in that direction’). And studies have found that restoring the human capacity for attention involves the kind of non-semantic, unfocussed attention that is most often associated with staring at plants (to put it overly simply).

Obviously I’m overstating the case against interruption here, but only for the sake of rhetorical comparison. Does Carr do any less? Flip open any page of The Shallows and you can pull out a quote warning of the dangerously interruptive properties of digital technology. Here’s one plucked at random from the page I’m currently reading:

“…the powerful tools for discovery, filtering, and distributing information developed by companies like Google ensure that we are forever inundated by information of immediate interest to us – and in quantities well beyond what our brains can handle. As the technologies for data processing improve, as our tools for searching and filtering become more precise, the flood of relevant information only intensifies.” (p.170)

Let’s rewrite the above as if it were about the interrupting effects of the motor vehicle:

“…the powerful tools for transport, contact, and distribution developed by companies like Ford ensure that we are forever inundated by motor cars of immediate interest to us – and in quantities well beyond what our brains can handle. As the technologies for motor vehicle travel improve, as our tools for directing and guiding traffic become more precise, the flood of relevant information only intensifies.”

It’s more than a little specious as an example, but it’s not entirely beyond the scope of imagining. When read as a lament aimed at ‘interruption’, many of Carr’s assertions about the fundamentally brain-altering nature of the Internet tend to lose their persuasiveness. Perhaps, however, they should be taken seriously – but rather as a generalised caution about the problems involved with a whole suite of interruptions. It seems quite unlikely, however, that this idea could ever be taken seriously outside the pages of Sci-Fi.


What is a blog?

Or rather, what is inherent to the blog format? This is partially a response to Dilyan, who took umbrage at my comments on the latest CDC podcast. He wrote a response on his own blog which further betrays some of the unspoken assumptions he holds about the blog format, and which comes awfully close to usefully articulating some differences between blogging and publishing.

One of the great things about working with the internet and digital media is that it is so binary – situations and digital functions are often quite black and white. So let’s put ourselves in the ‘digital’ frame of mind – what is the standard of ‘reality’ we can appeal to when talking about blogs and the like? I believe it’s code; it’s protocol; it’s a close reading of the content of this digital technology.

A blog is a specific subset of a website. Websites are built generally on HTML and can be as complex or as simple as the limits of processing power allow. Close to the simplest website possible would be an empty “index.html” file in the root directory. In this situation the server, upon receiving the request to load the page over TCP/IP, finds the index.html and serves it up to the user. This bare-bones structure meets the minimum requirements for ‘being a website’ if the server is accessible from ‘the internet’, i.e. it has a unique address recorded on a DNS server. What the server does is, in fact, almost irrelevant to the status of the website – worldwide accessibility from the internet (a function of its address being recorded in a DNS server) is the only necessary component of a website.
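That bare-bones arrangement can be sketched in a few lines of Python (my choice of language, not anything from the original; the file contents, local address, and port are illustrative assumptions, and serving on localhost stands in for the DNS-addressable accessibility a real website needs):

```python
# A minimal "website": a directory containing an index.html, served over HTTP.
import functools
import http.server
import os
import tempfile
import threading
import urllib.request

# The root directory with its lone index.html file.
root = tempfile.mkdtemp()
with open(os.path.join(root, "index.html"), "w") as f:
    f.write("<html><body>Hello</body></html>")

# The server's only job: receive a request and serve up index.html.
handler = functools.partial(http.server.SimpleHTTPRequestHandler, directory=root)
server = http.server.HTTPServer(("127.0.0.1", 0), handler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# A request for "/" is answered with the contents of index.html.
url = f"http://127.0.0.1:{server.server_port}/"
page = urllib.request.urlopen(url).read().decode()
server.shutdown()
```

Everything beyond this – templates, databases, comments – is elaboration on top of the same request-and-serve transaction.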

It is probably not possible to itemise all the expectations one could hold, or have ever been held, about websites qua websites, but for the sake of comparison to blogs, certainly no one expects websites to inherently possess comments. It would be equally rare to find someone in this day and age who expects a website to be completely static. Websites change, are redesigned, go offline, come back online, and fluctuate through many series of transformations, entirely without generating much protest (with a few exceptions less related to their status as websites and more to their function as other things: portals, community hubs, etcetera). The timeline for these changes, however, is often on the scale of weeks, months, or even years.

Let’s contrast that to blogs: are they inherently expected to change, or ‘update’? If a blog does not update for a long time (i.e. stretches of years or more) does it become any less of a blog than it was when it was in the middle of a regular posting (i.e. ‘update’) schedule? No; therefore it is fair to say that regular updating is not an essential quality that goes into making a blog a blog (even if it is an expected one).

At last we come to it: Are comments an inherent function of blogs? Again, it would be impossible to itemise every expectation held about a blog in the whole wide world so what does the reality of the ‘code’ tell us? Better still, what does the code of one specific blog platform, WordPress, tell us?

It tells us that comments are a function of WordPress, but like many other functions they can be enabled or disabled. Only in the absolute strictest sense can this be construed as making comments an ‘inherent’ part of the blog format. They are only inherent in that they can be “there or not there”, which is a meaningless observation to make. It’s certainly an unfalsifiable statement, thereby failing scientific (and logical?) rigour. In what conceivable circumstance would that sentence ever be wrong? Only if something could be both there and not there at the same time (quantum mechanics suggests possible situations, but let’s not go there).

What then, if removed, would prevent a blog from being a blog? Nothing to do with appearance (cf. a feedburner RSS feed of an actual blog). Not comments, as we’ve already established. No. Instead there are three things I count as being inherent and necessary components of a blog:

– Reverse chronological presentation.

– One or more posts (or updates).

– A (semi)permanent archive.

I think I’ll write about these three in greater detail at a later date.
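To make those three inherent properties concrete, here’s a hypothetical sketch of a blog as a data structure, in Python (the class and field names are mine, not any real platform’s): posts accumulate in a (semi)permanent archive and are presented newest-first, while comments are a toggleable extra, like WordPress’s setting, rather than anything definitional.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Post:
    title: str
    body: str
    published: datetime

@dataclass
class Blog:
    # Inherent: a (semi)permanent archive of one or more posts.
    archive: list = field(default_factory=list)
    # Not inherent: comments can be "there or not there", a mere toggle.
    comments_enabled: bool = False

    def post(self, title, body, published):
        self.archive.append(Post(title, body, published))

    def front_page(self):
        # Inherent: reverse-chronological presentation, newest post first.
        return sorted(self.archive, key=lambda p: p.published, reverse=True)

blog = Blog()
blog.post("First", "…", datetime(2011, 1, 1))
blog.post("Second", "…", datetime(2011, 6, 1))
titles = [p.title for p in blog.front_page()]  # ["Second", "First"]
```

Note that nothing in the sketch requires a posting schedule: a blog that never gains a third post is still, by this definition, a blog.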


Cahiers du Multijoueur

At the end of last year, Mitch Krpata wrote a near throw-away paragraph amongst a series of paragraphs on the games he played in 2010 that didn’t quite make the year-end best-of lists. The paragraph is reproduced below in its entirety:

Battlefield: Bad Company 2: I don’t think I ever wrote about this game. I am generally tired of military-themed shooters, but I dug the big maps and the vehicles, and I want to apologize to the dozens of teammates I inadvertently murdered whenever I was driving.

The key is in the sentence right at the start (and note, in the original context, the lack of a hyperlink in the title): “I don’t think I ever wrote about this game”. Every other game on the list of Krpata’s honourable mentions was written about at least once and had a hyperlink directing readers to the relevant piece of writing. But not Bad Company 2.

This is a reasonably insignificant observation, but let’s ask ourselves: is this occurrence fully explained by Krpata’s declaration of military-themed-shooter fatigue at the time? Perhaps partially, but I think there may also be another, even more compelling factor to consider. For help, we turn to Robert Yang and the footnotes of an article he wrote, ‘On level design, hookers, cybernetic architecture, Tony Hawk and all that converges’. In a footnote to this excellent piece comparing Tony Hawk levels to Thief maps we find this comment:

(…game critics wonder why there isn’t more scholarship on non-MMOG multiplayer games? Well here are my excuses for shying away from the subject: (1) they’re all basically rock paper scissors at their cores, (2) popular strategies emerge over years of play, so scholarship actually becomes “obsolete” and relegated to history as key players change the way the game is played, (3) almost all abandon any pretense of narrative, (4) to write adequately about the highest level of multiplayer play, you have to be a really fantastic player, which you probably aren’t.)

Think for a minute – how often in the critical videogame blogosphere has someone written about the multiplayer (and more specifically, the competitive multiplayer) of a game that wasn’t an MMO? If you don’t believe me, consult the Critical Distance game writing search engine for mentions of “multiplayer” – the results are few and far between.

It’s a truism that to be able to talk about multiplayer gaming and have anything useful to say one needs to be highly skilled at a particular game, and David Sirlin has argued that time invested does not equal skill, most memorably in his post ‘World of Warcraft teaches the wrong things’. But I want to write about Battlefield Bad Company 2, not only to help fill the critical vacuum around the game (which I’ve attempted before, to little personal satisfaction), but also because I believe that this truism about multiplayer gaming is sorely in need of challenging.

I want to try writing about multiplayer gaming, and BFBC2 is my first (and perhaps only) target, but I need to preface this with an acknowledgment that a) I’m not very good at multiplayer gaming, and b) I don’t often enjoy multiplayer gaming (see point a as to why). So how do I write about BFBC2 without being a high-end player (my personal stats in all their insufficient glory, woeful K/D ratio and all) and without the kinds of tactical insights that someone playing in the upper echelons of the game can bring? How do I say anything useful about the game? I think I have my answer, but before we get to that a brief detour is in order.

One of the things I wasn’t completely happy with in the reactions ‘Rhetorical Questions’ provoked was how often it seemed as though people believed I was advocating a retreat from specifics, or details, in my fight against analysis in favour of persuasion. Persuasion and details, however, should not be enemies; on the contrary, they should be fast friends. There is little as persuasively strong as cold, hard facts.

In response to RQ, David Carlton expressed some reservations, deploying the efficiency of the list format to convey a few of them. One point (number 4) that stood out for me was a comment he made in reply to my slighting of the GDC 2010 talk about the change to the Halo 3 sniper rifle reload time (which I didn’t actually attend but heard great things about). Carlton asserts that “talks about the effects of changes in sniper rifle reload time are fabulous. Embrace details!”

And I find myself agreeing with him. But in the comments I added my own reply, saying that, “Details are great! A mere assemblage of details does not make a Dr. Zhivago, however.” So I still think there’s more to it than just having ‘more details’, a point that I think Adrian Forest picked up in his own critique of ‘Rhetorical Questions’. Forest said in his ‘Rhetorical Answers’ that,

What Ben seems to be arguing is that while analysis of games is good and worthy, it’s not enough. We need to be more persuasive in our writing about games, he says. Games writing should be more persuasive than analytical. But to me, that immediately raises the question: what should we be trying to persuade people of?

Which on the surface seems like a good point fairly made, but it’s actually contrary to the position Bruno Latour would take. For Latour (following the post-structuralist tradition) every single ‘thing’ is made up of components, other things that go together to make up whatever the thing is. These components of the ‘thing’ are in turn made up of other components and so on ad infinitum, the result being that everything is a ‘network’ of relations. Therefore, what we want to know is what goes into making a thing, Latour says. For him,

…the opposition between description and explanation is another of these false dichotomies that should be put to rest… Either the networks that make possible a state of affairs are fully deployed [i.e. fully described] – and then adding an explanation will be superfluous – or we ‘add an explanation’ stating that some other actor or factor should be taken into account, so that it is the description that should be extended one step further. If a description remains in need of an explanation, it means that it is a bad description.

Reassembling the Social, p.137

So here’s my assessment of the situation: (competitive) multiplayer gaming has been hard to talk about unless you are something akin to a top-tier player. But that may only be the case if you are trying to add an explanation of things – anyone should be able to attempt a complete description of a multiplayer game regardless of their skill. The compulsion to explain, so exemplarily embodied by the critical essay and its relentless push towards the ‘conclusion’, is a habit from the English department that we perhaps ought to consider jettisoning, alongside the Comp. Science faculty’s reliance on ‘concreteness, definitiveness and finitude’, as I put it in Rhetorical Questions.

Latour’s approach, which I am hoping to make my approach, will be to stick to description. Description that will come, mind you, from a very specific and embodied perspective – that is, from my own. In this way we get to hold onto the best of the Sciences’ reliance on ‘facts’ and ‘objects’ and concrete things while maintaining our commitment to relativism, rhetorical persuasion and… beauty, I think.

The kind of game writing that I am going to be attempting in the coming weeks (and maybe even months – yes, it may take that long) is going to be ‘mere’ description. I’m going to describe Battlefield Bad Company 2 from the bottom up, rather than the top down as a critical essay might. For example, I may describe the subtle dynamic feel of the ‘USAS-12 Auto’ shotgun with 12-gauge slug and upgraded magazine attachments by talking about how the reticule begins in a tight bunch of four white rectangles at the centre of the screen, before exploding outwards following the concussive blast of the weapon, noting also how quickly it springs back to re-centre into that tight bunch in the middle again. I will be doing more of this type of description than I will be saying things like “Battlefield Bad Company 2 is a First Person Shooter, which means that it comes from the tradition of XYZ…”

It’s quite a striking, and crucial, difference. It means that I will be writing much, much longer posts (in fact, a series of posts) and perhaps it will not hold the interest of many people. But to those to whom it does appeal – readers interested in what this kind of approach can do for game criticism, and those who write game criticism – I invite you to join me on this journey into the wilds of description.

Post-script: Since writing this piece and its publication I’ve read the conversation hosted by Paste Magazine between Tom Bissell and Simon Ferrari. Cf. Simon: “Precious few writers know enough about descriptive writing to make an experiential account of a singleplayer experience in any way exciting to me…” Hmmm.


I’m suffering from a hangover.

A hangover induced by Phonogram – it’s actually a comic, and not the latest alcoholic beverage with an overly generous marketing budget. But a hangover is what I’ve got. My head hurts. My body hurts, and worst of all my heart hurts.

Phonogram I love you, but you’re bringing me down. (See – I can do it too, this quoting from songs thing that is at the same time pithy and affecting. This one’s LCD Soundsystem, ‘New York I Love You’)

But Phonogram. It’s the kind of thing that makes you want to simultaneously share it with the whole fucking world and keep it entirely to yourself. It’s the kind of personal thing that touches you in a frankly embarrassing way, embarrassing as it reaches deep down into the bottom of that well of shame and worry and fear and brings it to the surface and swishes it around in a big old mess. A beautiful mess, but a mess, nonetheless.

It’s the kind of thing that provokes responses like this one – why the fuck am I even writing this? No one who reads this is going to learn anything actually about Phonogram except on a tenuously connected emotive level. But respond I must – I almost have no say in the matter.

So what do I say about Phonogram, as I sit here and defer eating to write this? I say that it’s Kieron Gillen’s best work (that I’ve read) and his voice reaches out from behind every page. His tone (if that’s even the right word?) is almost… didactic. Here’s your favourite big brother delivering some sage advice straight into your fucking chest.

What the fuck is this, my eyes got misty as I wrote that last sentence. Shit, I’ve never even met the dude and it’s embarrassing me how I feel about it; that I feel he’s speaking right at me.

Because I see a lot of ‘me’ in these pages. Not ‘me’ in the same way that I’m someone with the name ‘Ben Abraham’ – there’s no connection to that. And not ‘me’ in the sense that any of it was ever written ‘about’ me, or for me. I don’t know who it was written for, if it was written for anyone.

But me. I’m right there. In the page with David Kohl (who is clearly a case of Gillen writing semi-autobiographically or my last name isn’t Abraham) and with Laura Heaven, and Seth (oh fuck me, Seth! God damn – who, like me, doesn’t want to be Seth?) and Kid-with-Knife, and yes, even Penny (though I fucking loathe Penny and everything she stands for, naturally) and Lloyd (Ah, too close to home! Let’s not talk about Lloyd – that would be a real shame right there). These are like real people.

And it’s beautiful that it’s probably the end and there’ll never be more Phonogram (barring some actual magic happening) because it… adds to the poignancy in that kind of bullshit-saccharine-romantic kind of way that I’m so fond of. That kind of faux-exclusivity that appeals to people like me (again, it’s that do-I-want-to-evangelise-this-goddamn-amazing-work-or-keep-it-secret).

I just remembered; when I was composing this thing in my head earlier I was going to open with a line about the utter shock of the completely new. Phonogram is shocking in its newness, but at the same time you recognise that it’s not like the metaphor of ‘music as magic’ hasn’t been made before (I have a Bachelor’s Degree in music; I know right? I keep forgetting) but it’s never, ever been taken to such an extreme. Nor such an attractive extreme, and attractive it really is. Can I impress that point? Everything about it I found attractive. Even, no, especially the rule at the Singles Club about ‘No Boy Singers’ – fucking hell! For an awkwardly-trying-my-best-to-be-feminist smart guy like myself that’s fucking catnip right there.

Carrying on from the point that it’s attractive, I’ll admit that the initial appeal for me was Gillen’s input (and I only got put onto it because Fraser Allison recc’d it, so thanks for that, I guess) but the pictures are also very attractive. I’ve never been a comics appreciator before (and still probably am not) but I appreciate just about everything that’s gone into this series. When I read Watchmen (yes, blasphemously, only after the Zack Snyder adaptation) I was annoyed by what I felt was heavy-handed foreshadowing. But that was probably unfair of me since I already knew how it ended by virtue of the film. With Phonogram I had no such preconceptions and so every visual detail added as much as every written one. I know from the glossary at the end of The Singles Club that a lot of the visual detail was specified by Gillen, too, but it’s undoubtedly a composite effort, so a hat-tip to McKelvie too.

I read them in order, in the space of about three days – Rue Britannia first, which picked me up and sold me on the world of Phonogram probably in the space of about two pages; but The Singles Club got all of me in one morning. I mainlined it like an addict after a week-long hiatus. I love that it was in colour. I love that it was sexy, and full of sex. I love that it was… British, a thing I most definitely am not. But what I am notices and sees resonances in that. I’m something that’s come from that but isn’t that (won’t ever be the same as that) but which calls across the pages and across the distances between our shores and theirs and says something like ‘G’day comrade, good to see you’re having exactly the same shitty problems as us’.

I wrote a week ago that I missed Gillen as a videogames writer. Well, I still do – that hasn’t changed. But I feel better about letting him go as I know that this – this fucking stuff right here? Yeah, it’s not a waste of time. Congratulations Gillen, I’m a convert. Comics aren’t a waste (like I will totally admit I thought they were). Or at least, they don’t have to be. Like games, you know? Games don’t have to be shit (though frequently they are) and that’s just a fact to deal with and move on.

I’m not going to become a ‘comics nerd’ as the partner of one of my friends (who is a comics nerd) suggested the other day. I’m still not that keen on them, but I’ve been converted from a non-believer into a… an agnostic perhaps? I don’t know what I am, except that I’m writing this and it’s time to end.

That’s the thing about Phonogram though – it knows people. It knows being people involves trying; trying to be people. What was the line, “And who in God’s name are you trying to be?” (paraphrased)? It was addressed to Laura Heaven, who was trying really hard to be someone else. David Kohl almost tried to be someone else in Rue Britannia, but he couldn’t – it wasn’t worth the effort.

If my identity (games, like obv) were about to die and sublimate everything that was founded on it – just like Britannia and Kohl – if I had to change something, re-centre myself on comics or face disappearing, becoming nothing? If the choice was between becoming someone else and becoming nothing (is that a metaphor for death? Possibly…)? Well, like Kohl, it simply wouldn’t be worth the effort. And that’s cool. That’s totally fine.

I’ll catch up with you later, okay Phonogram? Make some more issues and I’ll throw money at you faster than you can say ‘Damon Albarn is a slick git’. But until then, you’ll just have to settle for fucking me up a bit and letting me go.

Fucking British.