Vikings S1 – what may laughably be called a review a year late

Apparently US Memorial Day means we can’t have another episode of Game of Thrones S4 this week (though seemingly we can have another episode of Mad Men S7 culminating in a surreal dance number featuring a dead character so what this says about USians’ priorities is anybody’s guess). Owing to this sword/beard-shaped lacuna in my weekend, plus the rain and an unhappily placed door jamb, I ended up spending most of yesterday on the sofa with my foot in the air mainlining series 1 of Vikings.

I so wanted to love it, and this is why I end up petulantly reviewing stuff on the internet about a year after everybody else – by the time I get around to seeing something I am invested. I am not someone who gets precious about historical accuracy, mind, and in the case of Vikings this is just as well. They have run together a retelling of the Ragnar Lodbrok sagas with bits and bobs from earlier and later in attested history. Ragnar himself is a misty, possibly mythical figure. King Aelle really did rule Northumbria, but somewhat later than 793, the date of the first Viking raid on Lindisfarne, which this series chooses as its anchor. I don’t really get why any show or film does this, by the way – suddenly pops a date onto the screen in curly writing in a way which contributes nothing whatever to the story while inviting every Wikipedia-cruising carper on the internet to take potshots at your whole enterprise. Why tie yourself down like that, especially when you’re basically doing a bit of myth-retelling? Most people can situate the Viking Age adequately in their mental model of north-west European history – after the Romans, before the kings and castles – and if they can’t, a date isn’t going to tell them anything. If you need to introduce pacing it would make more sense to use cues like “The Following Summer”, though in the case of a series based around seasonal raiding no writer should really need to do that.

But while I’m happy for people to chop history and myth about, I think it’s a shame when it’s underused. History is stuffed with grand narratives, Viking history perhaps even more than most. There were Vikings on the personal bodyguard of the Byzantine Emperors; they traded and raided all over the Baltic and down the great Eurasian rivers as far as Mesopotamia. You could easily tell a story that starts with a band of brothers in a modest collection of farms and fishing towns – like this one does – and show these familiar caricatures in all sorts of unfamiliar settings, encountering distant lands and undreamed-of riches and politics that might challenge their thuggishly-presented socio-economic arrangements. Yet there is only really one grand narrative in Vikings a full seven episodes in, by which time Game of Thrones had plunged us into about five different worlds and killed or put into serious jeopardy key people in most of them, and the narrative is of course “Go West”. Bullet-headed, gold-chasing Ragnar who wants to sail west and conquer frankly not dissimilar lands to his own bar the odd fjord, versus risk-averse, sly Earl Haraldson (Gabriel Byrne reminding everybody that you can point the camera at a real actor for thirty seconds and get taken through ten times as many emotions and nuances as appear in their scripting) who insists his boats continue to raid eastwards.

And it’s true that the first encounters between the Vikings and the monks and soldiers of Northumbria were easily the best scenes in the whole thing. Culture clash is one of those themes that focuses a story-teller’s mind and the mind of the audience on the same pinpoint of time, and it nearly always comes off because everyone can see what the stakes are – this is the moment, everything screams at the viewer, where it all begins. Cautious welcomes turn to puzzlement as people start babbling mutually incomprehensible languages and then somebody makes a wrong move and the whole thing turns bloody – very believable and great to watch. A lot of these early encounter scenes are staged so that most of the Vikings really are taller than most of the surrounding English, as well as messier, nastier-looking and with more hipsterish hair, and it works both as a metaphor for where these people’s fortunes are going and as homage to the terrified writers of ninth-century England who recorded them very much like this. But after that promising beginning we seem to have got locked onto a rather tedious storyline involving lots of people tramping through heathland, some home politics of the serious-expression-in-firelight variety, the perpetually ratty King Aelle, whose Archbishop, or whatever he is, is being played by Peter Cook, and generally a gentle, predictable slide towards the Danelaw (spoiler for you right there). If there is going to be a game-changer that doesn’t involve Earl Haraldson’s hot widow and/or Ragnar’s sulky brother Rollo, well, this is a better show than I am currently giving it credit for.

One thing I did love was all the reverse-engineered archaeology – the opening titles basically show a stylised hoard/burial/shipwreck hybrid deposit event taking place (viz. they fall in the water) and at odd intervals in the series you can just hear, under the doomy music, the researchers gibbering excitedly about how cool it would be to show a strangled-sacrifice-and-hoard, or skulls-next-to-bottoms burial, and provide an imaginative context for them. And it is pretty cool. One of these pleasing little moments features far and away the best character, Floki, a trickster figure in the spirit of Loki himself, who gigglingly flips a coin from King Aelle’s massive buy-off into the Tyne as the Vikings sail for home, thereby reminding all archaeologists who spend their time reasoning out contexts for deposits that sometimes, a thing is in the ground (or in the river bed in this case) because an unpredictable psychopath put it there.

The fact that Floki is the best character points up what is lacking here, I think. Tricksters are fabulous devices for powering a story onwards and have been used as such by every culture that has ever written stories, but while you can use them as sources of profound change in a story (Floki is the boat builder who makes Ragnar’s raids possible) or for flipping a situation over (it is Floki’s sudden capricious grab for the necklace of a Northumbrian soldier that prompts the first proper Viking-English skirmish), I shouldn’t be watching that character alone for signs that things are moving on. I also want to see relationships evolving between the others – Ragnar, his band of raiders and his family – that prompt them to take conflicting actions, rather than all of them swirling obediently round whatever Ragnar has smirkingly decided he fancies doing next. I also want some evidence of tensions between thirty-odd people fighting from the seat of their leather pants in a foreign land, rather than having them written into the scenes afterwards where they’re all safely home and Rollo the Sulker is away from the band griping with somebody else. Much the liveliest relationship in the whole thing is that between Ragnar and Athelstan the ex-Lindisfarne monk turned God-doubting slave, and the script flares into life when they talk because their interactions play successfully on differential status. Ragnar could, as Athelstan points out, have him beaten to death on a whim and suffer no penalty, and we are invited to guess at Ragnar’s motives for taking a pretty indulgent line towards him – indulgent in the sense that he allows him to live as a slave in his household after slaughtering his defenceless companions anyway.

It’s the only relationship in the whole thing I really care about, that feels like it could go sour, or that generally shows signs of having unpredictable emergent qualities. I want more unpredictability, I want Ragnar to have a proper opponent and real, evolving relationships with other people, and I want some honest-to-Odin bromance, goddammit. I have an almost limitless capacity for any amount of nonsense involving slightly grubby men charging about with swords (see also The Musketeers, which compellingly featured open shirts too), but at the moment I won’t be racing to download S2 unless there are some seriously good switcheroos in the last two episodes of this season, or of course unless I break the toes on my other foot on another rainy bank holiday weekend.

Brain dump #2: depression and the city

Mental disorders in the ancient world:

The examination of mental disorders would seem to be the almost exclusive domain of psychiatrists and psychologists, not humanities scholars. Yet William V. Harris, the William R. Shepherd Professor of History, has spent his time in recent years studying his chosen field—the history of ancient Greece and Rome—through the lens of mental illness.

This article doesn’t go into a lot of depth, but it made me wonder whether, and how, mental illness could be approached through archaeology. Has anybody tried? The Wikipedia entry for the History of Depression heralds a sub-section of coverage from “Prehistory to medieval periods”, but in fact begins with classical Greece.

What would a pre-text history of mental illness look like? In a sense I think the very inaccessibility of prehistory makes mental illness as realistic a lens of study as any other kind of worldview; we are not being confused and bombarded with textual representations of more-or-less neuro-typical thinking, as we are in common-or-garden classical Greece. If individuals and groups produce material culture which perpetuates or modifies their social norms, depression for example (assuming it is a neurological condition inherent to being human, and not a sort of Durkheimian malaise of the modern age) must be among the social norms represented in material culture.

It should be possible to detect an archaeology of depression.

Behavioural epigenetics:

According to the new insights of behavioral epigenetics, traumatic experiences in our past, or in our recent ancestors’ past, leave molecular scars adhering to our DNA. Jews whose great-grandparents were chased from their Russian shtetls; Chinese whose grandparents lived through the ravages of the Cultural Revolution; young immigrants from Africa whose parents survived massacres; adults of every ethnicity who grew up with alcoholic or abusive parents — all carry with them more than just memories.

It strikes me this would be a neat way of accounting for what Childe called “the urban revolution” – that very strange period occurring spontaneously in different parts of the world where humans began to live, permanently or at least seasonally, in large numbers side by side in the same place. If extremes of behaviour can alter gene expression then it only takes a handful of random sets of extreme circumstances worldwide to alter the behaviour of a whole community, such that this community is predisposed to what would become urban life.

Throw in a suitable agricultural environment and climatic conditions for population growth and you have History.

So that’s that settled.

How do archaeology conferences work? Engagement from my sofa

Lately I’ve been doing a lot of going to archaeology conferences and tweeting from them like a particularly chirrupy and attention-deficient budgie, so I’m finding it quite interesting to observe the goings-on at the Theoretical Archaeology Group conference in Bournemouth via #TAG2013. As a reasonably informed outsider (edit: to this particular conference, I mean. C2DE I ain’t), how much of a picture am I getting of discussions and papers as they unfold?

The first thing I observe is that, ooh, about 80% of the tweets I’ve seen so far this afternoon are from the CASPAR Researching Audiences session (see p17 here (pdf)). Not so much the session about cave art or the one about animal-human interaction or the one about actor-network theory, which if I’m reading the programme correctly are running concurrently. So, yeah, a lot of people who are interested in AV/digital media and archaeology are (a) attending the session about AV/digital media and archaeology and (b) on Twitter, and it is their output which currently constitutes the public face of #TAG2013. On which I, another person interested in AV/digital media and archaeology, am now commenting, and perhaps someone else interested in AV/digital media and archaeology will write a little post about Twitter engagement at #TAG2013 using these very tweets and this blog post as material, and so it goes on until we all disappear up our collective meta-arse, or at least until more archaeologists with wider interests start using Twitter. Lots of people who already know other people are talking to them about stuff they know is of mutual interest. No surprises there.

The second thing, which surprises me, is that I am unfeasibly thrilled when someone posts a picture. This is something I scarcely thought about at Monstrous Antiquities, CHAT and Sharing the Field, although I posted pictures from the first two. A spectacularly dumb reptilian bit of my brain likes images of places and people and having a visual hook on which to hang conversations I might have, or read about. This is despite the fact that I do at least vaguely know some of the people there. What seems to matter are pictures of that particular experience.

The reason I comment on all this is that, meta-related cynicism notwithstanding, I suspect academic conferences are currently under-utilised as a potential source of public engagement with the subject. Conferences, as live events, have a natural, attractive tempo that really works on social media pretty much regardless of how dry the content is – and Twitter, even more than most other social networks, is tempo-driven. That’s why BBC Question Time, amongst other things, is so famous there. It’s not that the socio-economic group that watches QT is disproportionately represented on Twitter – although I’m sure they are – it’s that all those people sit down on a Thursday night and start tweeting about it at once.

Plenty of academics thoroughly see the point of Twitter and other social media as a way of sharing their work and engaging both their colleagues and the public. But there is an inevitably static quality to a lot of this busy communication, which is particularly obvious to me as a former political blogger. In politics we had the natural tempo of The News to blog to. Making allowance for hobby horses and specialist subjects, a lot of the time we were all blogging about the same stuff. Reading and writing blogs fitted seamlessly into a large single conversation that was also being conducted on social networking platforms, in newspaper columns, and on TV. And it was important to Have Your Say there and then, because, you know, otherwise The News might have gone by the time you next looked!

Archaeology and history blogging, with rare news-driven exceptions, doesn’t have a natural tempo (unless you count the business of being an academic itself, the calendar, the pressures, etc.; referencing all this is effective social glue for academics but does not engage anybody else or speak to the work they would like to promote). Everyone is talking about their own stuff, in the order that seems best to them, and there is inevitably a certain narrative-driven cast about a lot of history and archaeology blogs. They know Stuff, we don’t, they are going to tell us Stuff, and unless we are specifically heading out into the internet with the aim of learning about Iron Age oppida, or whatever, we probably aren’t going to experience the Stuff as being of relevance to the concerns that are uppermost in our minds, far less be moved to post a reply. “Joining the debate”, which is what every empty comment box tempts you to do, feels less urgent if the debate has no particular timetable.

But if there is any regular feature of a researcher’s life where debate might usefully be showcased and used to pique interest, it’s academic conferences. They’re highly stylised as a form of debate, of course, and it’s difficult to report debate accurately anyway, but they do at least have the potential to suggest that archaeological and historical research might be about something more than everybody agreeing on a narrative – which, as Donald Henson’s opening paper in the Researching Audiences session apparently pointed out, is the picture presented on TV. Differing interpretations, debate and discussion are largely ironed out by producers who believe the viewer wants a single story.

It could be that the producers are right, of course, and nobody wants debate or discussion or to follow the salient points of an archaeology paper as it’s given and respond to it. Social media offers a cheap and low effort way of testing this point, but on the basis of my impressionistic scan of the #TAG2013 feed, there needs to be a lot more content and a lot more participants for the test to be fair.

Put “palaeo-diet” into a title field, and wait

I like watching how archaeology news plays out in the mainstream press, so I’ll be keeping an eye out for incarnations of this (via @archaeologynews). It’s a great example of a buzz concept being bolted onto an entirely sober press release:

Nutrients in food vital to location of early human settlements: The original ‘Palaeo-diet’

Research led by the University of Southampton has found that early humans were driven by a need for nutrient-rich food to select ‘special places’ in northern Europe as their main habitat. Evidence of their activity at these sites comes in the form of hundreds of stone tools, including handaxes.

A study led by physical geographer at Southampton Professor Tony Brown, in collaboration with archaeologist Dr Laura Basell at Queen’s University Belfast, has found that sites popular with our early human ancestors, were abundant in foods containing nutrients vital for a balanced diet. The most important sites, dating between 500,000 to 100,000 years ago were based at the lower end of river valleys, providing ideal bases for early hominins – early humans who lived before Homo sapiens (us).

Professor Brown says: “Our research suggests that floodplain zones closer to the mouth of a river provided the ideal place for hominin activity, rather than forested slopes, plateaus or estuaries. The landscape in these locations tended to be richer in the nutrients critical for maintaining population health and maximising reproductive success.”

So not at all research into “palaeo diets” in the twenty-first century sense of the term, as indicated by that cheeky “original”. Researchers are sometimes known to harrumph about press officers’ happy ways with reporting their work, but this is probably one of the more restrained ways to do spin. Most people will have read half the thing before they’ve figured out that it’s just research into, you know, diet, and in fact it’s clearly using “balanced diet” in the same sense as government guidelines do. Hard to see how anyone is going to twist this into an invitation to scarf down bacon and eggs.

As Sir Humphrey said of the Open Government paper, “Always dispose of the difficult bit in the title. It does less harm there than in the text.”

Question is, is it still newsworthy?

Incidentally, this is how it finishes:

The nutritional diversity of these sites allowed hominins to colonise the Atlantic fringe of north west Europe during warm periods of the Pleistocene. These sites permitted the repeated occupation of this marginal area from warmer climate zones further south

Professor Brown comments: “We can speculate that these types of locations were seen as ‘healthy’ or ‘good’ places to live which hominins revisited on a regular basis. If this is the case, the sites may have provided ‘nodal points’ or base camps along nutrient-rich route-ways through the Palaeolithic landscape, allowing early humans to explore northwards to more challenging environments.”

If you are an archaeologist you’re probably now as interested as I am, but basically that’s why you’re at the bottom of a hole and I’m writing on the internet at the kitchen table and neither of us are allowed into rooms where we might try to influence people.

Making stuff up in prehistory

I learn, or relearn, surprising amounts about archaeology and history from books that have nothing to do with either. Viz. this from Daniel Kahneman’s Thinking, Fast and Slow* (this passage starts out discussing The Black Swan, which I haven’t read):

Narrative fallacies arise inevitably from our continuous attempt to make sense of the world. The explanatory stories that people find compelling are simple; are concrete rather than abstract; assign a larger role to talent, stupidity, and intentions than to luck; and focus on a few striking events that happened rather than on the countless events that failed to happen. Any recent salient event is a candidate to become the kernel of a causal narrative…

The ultimate test of an explanation is whether it would have made the event predictable in advance. No story of Google’s unlikely success will meet that test, because no story can include the myriad of events that would have caused a different outcome. The human mind does not deal well with nonevents. The fact that many of the important events that did occur [in the history of Google's rise] involve choices further tempts you to exaggerate the role of skill and underestimate the part that luck played in the outcome…

At work here is that powerful WYSIATI [what you see is all there is] rule. You cannot help dealing with the limited information you have as if it were all there is to know. You build the best possible story from the information available to you, and if it is a good story, you believe it. Paradoxically, it is easier to construct a coherent story when you know little, when there are fewer pieces to fit into the puzzle. Our comforting conviction that the world makes sense rests on a secure foundation: our almost unlimited ability to ignore our ignorance.

That’s essentially how we approach early Holocene prehistory, isn’t it. The only difference is that prehistorians are usually well aware of their ignorance; in fact they need very little prompting to admit that they are making a lot of stuff up. What they mean is that they operate in a field of unknowns and unknown unknowns exceptional even within the historically contingent world of the social sciences, and have to work with what they’ve got. If we are not satisfied with saying the limited things that can be said by scientific inductive process, then we have to be at home with more humanistic approaches, to wit, handwavy bullshit which is just as rigorous as we can make it, according to standards for rigour that have largely been developed along the same lines.

The only caveat to the above is that all the stuff about individuals doesn’t apply; prehistorians are saved from this error by the fact that they can’t usually identify historically situated individual choices or even, a lot of the time, group choices. Instead, they fret about agency, which might look at first glance like a wistful abstract substitute for identifiable individual choices but seen in the context of the Kahneman passage is actually a far superior approach, because it forces you to reason out to yourself the pitfalls of relying on intentionality to construct a historical explanation.

I say “early Holocene” because my sense is that deep prehistory suffers less from the narrative fallacy problem; the timescales of change are sufficiently large to make contingent historical explanation of the kind that plays tricks on the mind unhelpful. There are by common consent bigger questions to ask about the earlier bits of human story and shinier toolsets from other disciplines to apply to them. It’s only in that moment between the end of the Palaeolithic and the successful outsourcing of human memory into writing, a moment you can easily perceive on human timescales because the material cultures change so fast, that you find yourself pulled towards constructing historic-type explanations that are even less falsifiable than your proper “historical” historic narrative usually is, and that stuff’s bad enough.

It’s all very awkward.

* (I will finish this book soon, honestly, and stop banging on about it, and start reading something else and see the entire world through the prism of throwaway concepts in that for six weeks instead; there is probably a phrase in psychology for the act of doing this and it’s probably derived from the Greek for “bullshitting dilettante”).

Artists and archaeologists – what we learned

I have a self-imposed time limit of six months from dissertation hand-in on sneaking back into the Institute of Archaeology. Or I’ll become, you know, one of those people who sort of hang around institutions and can’t move on. Happily I’m only two months into that grace period, so let dysfunction be unconfined! And so it was that I rolled up to the Sharing the Field conference on Saturday last, beautifully Storified here (small digression: I am the last person in the universe to realise that Storify is actually very neat and allows you to intercut explanatory text with the tweets; it’s just that most people don’t). Partly due to the comprehensive round-up in the Storify and partly in an act of mercy to the reader, there follow only a couple (no, really) of selective observations.

The first thing we learned, from Robyn Mason of RealSim, is that seventeenth-century Galway looked exactly like Skyrim. (Incidentally, this is the second archaeology conference I have been to lately that mentioned Dr Who and computer games more or less in its opening moments, the first written up fabulously here et seq.). Not really, of course. RealSim’s 3D virtual model of seventeenth-century Galway designed for smartphones was visibly built by people who play a lot of Skyrim, but it actually models a very detailed map with a fascinating embedded political history. The Q&A to this paper drew out a number of interesting possibilities for the enhancement and usage of the sim, and most were focussed – as the creators are – on the heritage communication and public archaeology side of things. There have apparently been suggestions from archaeologists about including detailed findspot and stratigraphy information, but this would have inevitable impacts on usability and accessibility, and so one possible approach is perhaps to build different versions – heritage consumer and heritage pro, I guess.

What we didn’t get into very heavily were the possibilities for archaeological research, which left me wanting to find out more. Robyn noted that her own virtual footslogging round the town had led her to a new interpretation of the archaeology of a particular bridge, and there was a good question (can’t remember from whom, sorry) on the potential to represent time depths and change in Galway which particularly feeds into the question of using these things for research. I’ve been turning over in my head for a while the idea of a huge visual simulation of the process of Neolithic and Chalcolithic sedentarization and built environment creation in the Near East and trying to figure out what the variables might be, and when I say “turning over in my head” I mean “wishing someone else who knows a lot more would build it so that I can play with it”. I’ve no idea at all whether anyone is working on this project or anything similar, but thanks to my excitable tweeting during this paper I have at least learned that one current IoA PhD student is doing his research in the general area of augmented reality in archaeological practice, so perhaps there are people out there I can pester about this apart from my long-suffering software-programming partner (what a tactical error that career was on his part).
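
Just to make the “what would the variables be” question slightly less hand-wavy, here is the sort of throwaway toy I have in mind, sketched in Python. Every name, rule and number in it is invented by me at the kitchen table – a doodle rather than a claim about the Near East, let alone anything anyone showed at the conference: a few dozen household “agents” wander a grid of resource patches, harvest, stay put after a good year and drift on after a lean one, so that over a couple of centuries they end up pinned, more or less permanently, to the richer patches – which is about as generous a definition of “settlement” as you could ask for.

import random
from collections import Counter

# Entirely made-up toy model: household "agents" on a grid of resource patches.
# Each year a household harvests its patch and either stays put (accumulating
# "tenure", my stand-in for investment in a built environment) or drifts to a
# neighbouring patch if the harvest fell short. Every number below is an assumption.

GRID = 20          # the landscape is GRID x GRID patches
YEARS = 200
HOUSEHOLDS = 40
SUBSISTENCE = 1.5  # notional harvest a household needs before it will stay put

def patch_productivity():
    """A few rich, floodplain-ish patches scattered through a mostly poor landscape."""
    return random.choice([1.0, 1.0, 1.0, 3.0]) * random.uniform(0.5, 1.5)

patches = [[patch_productivity() for _ in range(GRID)] for _ in range(GRID)]

households = [{"x": random.randrange(GRID),
               "y": random.randrange(GRID),
               "tenure": 0}    # consecutive years spent on the current patch
              for _ in range(HOUSEHOLDS)]

for year in range(YEARS):
    for h in households:
        harvest = patches[h["y"]][h["x"]] * random.uniform(0.7, 1.3)  # weather noise
        if harvest >= SUBSISTENCE:
            h["tenure"] += 1   # good year: stay and keep building
        else:                  # lean year: move to a neighbouring patch and start over
            h["x"] = (h["x"] + random.choice([-1, 0, 1])) % GRID
            h["y"] = (h["y"] + random.choice([-1, 0, 1])) % GRID
            h["tenure"] = 0

# Crude readout: which patches have ended up holding long-settled households.
settlements = Counter((h["x"], h["y"]) for h in households if h["tenure"] > 20)
print("Emergent 'settlements' (patch -> households):", settlements.most_common(5))

Even at this level of silliness, most of the interesting archaeology is hiding in the assumptions – what counts as a good-enough year, how productivity is distributed across the landscape, whether households take any notice of their neighbours or their own memories – and a grown-up, visual version would be worth having precisely because it forces you to commit to values for exactly those variables.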

The second thing we learned – or I did – is that I understand nothing about art as a process and an experience, and as such I am very grateful to have been exposed to a lot of more informed people talking about it. I tend to sit there, being baffled by what someone is saying about experience, and metaphor, and representation, and wonder why they have decided to do what they’re doing, and why it is that I don’t understand this decision, and what pair of synapses it is that have failed to grow in my brain and are sitting there all stubby and underformed while a load of chemicals hurl themselves lemminglike into the neural soup between. I am a cold fish, in other words. And it was with this doleful self-knowledge in mind that I was interested to hear the suggestion – from Caitlin Easterby of Red Earth environmental arts group, who co-organized the conference – that data can come between archaeologists and a landscape, interfering with their intuitive understanding of it, whereas artists have more freedom to respond to the intuitive, and to present an intuitive vision to the public.

This is an intriguing suggestion. On one level I think it is absolutely true. Speaking for myself, my body is, as Ken Robinson puts it, basically a means of getting my head to meetings. I have only recently been reading about intuition (or Kahneman’s “system 1”), its uses and its limitations as against the “system 2” of conscious, deliberative thought, and it is clear that archaeological analysis is essentially a system 2 operation just like any other academic endeavour, so any self-conscious “thinking” you start to do about a landscape is going to be conducted in your non-intuitive mode. But as Kahneman’s account makes clear, not only do we use system 2 a whole lot less than we think we do, but also system 2 is constantly being informed and prompted by system 1. This is presumably why cold, hard facts about an era, or a site, or a landscape can feel as warm and alive and enriching as the process which Kent Flannery (I think) characterises wonderfully somewhere as “looking at a pot and emoting”. At least, they can to me. They are jumping-off points for my imagination. I can’t see inside anyone else’s head, of course, and it may well be that my understanding of landscape is emotionally and psychologically impoverished by my desire to know Things about it, such as can be known, rather than just responding to the environment purely as an embodied being. Perhaps I have a whole untapped resource of intuitive embodied understanding locked up in me which will slowly unfold over the years – I certainly hope so; that would make life more fun. I am mindful also that there is a whole spectrum of practically-minded archaeologists who learn in embodied ways far more than I do.

But I would also question the degree to which anyone moves through a landscape purely as an embodied being. Like it or not, that is not the deal with being human. If it was, everybody would be able to meditate easily, and meditation is hard. One bone of contention that came up after Rachel Henson’s paper – featuring a fantastic little film about the experience of walking Box Hill, and encountering the various wildlife there – was the use of music in art which drew its inspiration from embodied experiences. Surely, the bone-maker suggested, the background music was not authentic to the experience of walking along. As it happens I think the question somewhat missed the point anyway, because Rachel’s core projects are these beautiful, tactile and extremely practical flickbooks of stills representing each step of a journey through a landscape (I was fortunate to have the opportunity to riffle through them in the pub later) which are intended to be used as guides and companions on the user’s journey, whereas the film was obviously for sitting in a darkened room and watching away from the landscape in question, just as we did at the conference.

My wider problem with that bone of contention is its implication that there is a sort of essential purity about immersive, sensuous experience of a landscape, and that music is an intervention that sullies it. Again, I do not think this is what being human is about, and surely both art and archaeology are trying to elucidate what it means to be human. As we were discussing in the pub at lunchtime later, some of us listen to music when we go walking, and we experience landscape and music together – is that not visceral? Is it not human? Some of us run in landscapes, and experience a different pace from the walker or the idler. All of us always walk in social contexts, even though we might be intending, at times, to escape or forget them (“I’m going for a walk.”) And anybody, at any point in human history, might wander through a landscape singing, arguing, keeping an eye on the known troublemakers among their sheep, nursing their broken heart or plotting their revenge on a sheep-rustling neighbour, all activities which combine system 1 and system 2 thinking and probably take a lot of cues from embodied experience, but are never entirely about embodiment. To experience a landscape in that deliberate fully immersive, senses-on-full-throttle way is only one way of doing it, it’s actually extremely difficult to do, and I don’t believe it is a way that needs to be privileged above others for us to “understand” a landscape. And certainly we should resist the temptation to make attributions of this kind of purity to landscape-walkers of previous eras. There’s a pretty fine line, it strikes me, between observing that previous cultures may have been more ecologically and agriculturally aware of the physical and symbolic layers of landscapes, and buying into a sort of bucolic version of the noble savage myth.

Which brings me back to the point about archaeologists and their intuitive understanding, or lack thereof. I guess my suspicion is that this is part of a modern narrative about ivory towers and the disconnect between modern Westerners and their natural physical environments, which are valid concerns and topics of conversation, but which run the risk of setting “intellectual” understanding into a false opposition with intuitive or embodied understanding. Not only is there not really any such thing as a purely intuitive understanding, because stripping away “facts” from a landscape doesn’t mean system 2 can’t work in it, and adding “facts” doesn’t suppress system 1; but an embodied understanding without facts, if you could achieve such a thing, is only one perspective on a landscape, and who wants to limit themselves to one?

I learned plenty of other things which may dribble out in time, but for now I really want to say thank you to the conference organizers for making that experience so fun, and making it free.

Daniel Kahneman and my unexpected stupid

About half-way through the Masters degree I’ve just finished I began to get interested in how and when I had got so stupid, and not just in a trivial sense that might be explicable by diminished memory function or a fight-or-flight response to the alarmingly patterned trousers that suddenly surrounded me as I went among The Young again. My self-image was that of a reasonably clever person who found studying easy. But writing my dissertation was like repeatedly pressing a switch and not understanding why nothing was happening. Somehow, the appropriate facts were not crashing into each other in the right way, in the way I was pretty sure they had last time I had attempted to do something like this. At one point a tutor told me I was good at spotting the flaws in other people’s arguments. This formed a poignant counterpoint in my violently over-inducting brain to what another tutor said to me about ten years ago, which was that I was good at seeing what really mattered. Interesting category difference there, I think.

So naturally as a fan of bullshit pop psych I first wondered about the whole 10,000 hours thing. Most people in postgrad study are building on the subject they chose when they were 18, which gives them an advantage in both data-set and mindset familiarity. Was it just too much of a stretch to acquire basic mastery of Near Eastern prehistory sufficient to enable me to write meaningfully about it? When I did my first postgraduate work I’d been studying for the previous sixteen years, and the particular subject of my postgrad work for the previous three. That has to make a difference.

And maybe there are other consequences to getting older that are more about changing your mental landscape than depleting it. Maybe I am epistemologically harder on myself these days. I know more in general, and I have a higher standard of what it means to have a sound understanding of something than I used to. Probably your late teens and early twenties are the optimal time for learning big difficult stuff because you don’t yet comprehend the extent of your own ignorance and would have the crap quite terrified out of you if you did.

But I don’t think any of that fully explains what was going on, and nor did any of the chirpy “Seventy-two reasons why the internet is turning you into a hopeless moron” type posts I turned up on, uh, the internet in search of the answer (although I did come across a link to a finding that men get stupider just by being in a woman’s presence, which has the worrying implication that roughly 50% of the people I am using as a reference point for my own stupidity are actually even smarter than they appear to me.)

But never fear. I still have a bullshit pop psych explanation, just a slightly more complicated and respectable one. One of the things I am finally getting around to reading is Daniel Kahneman’s Thinking, Fast and Slow. Kahneman’s system 1 and system 2 concepts are shorthand for, respectively, fast, intuitive, impressionistic thinking and slow, effortful, “rational” thinking. “Slow” system 2 thinking is hard and energy-expensive, which is why people have a natural resistance to it and are prone to over-rely on “fast” system 1 thinking (despite believing a lot of the time that they are using system 2, i.e. making rational judgements and decisions).

System 1 serves important purposes – impressionistic judgements enable accurate forecasting in many scenarios – but it is not good at handling certain types of problem, especially those with statistical and logical components. It is subject to various biases which can cause its conclusions and forecasts to be inaccurate, of which I think my favourite is attribute substitution (answering an easier question than the one actually posed as if it were an answer to the question posed) because it explains about 80% of political commentary. Attribute substitution is built into the way people construct their political views – onlookers as well as politicians. Political problems are vast and complex, information is hard to come by and analyse, and yet people in public life and public house alike are culturally expected to take views on things they could not possibly carry out full system 2 analysis on. Attribute substitution is probably the single most useful mental tool a person commenting on a political problem has access to, if we define “useful” as “helps me avoid admitting that I do not have a solution to this problem and thereby losing status among my peers.”

However, Kahneman also refers to situations where system 1 thinking does provide reasonably accurate forecasts even where the material is logically or statistically complex, simply because some people in some situations have the system 2 knowledge database to support intuitive leaps. His illustrations are chess masters having an instant grasp of the promising moves on a board without having to reason them through, and a physician making an instant diagnosis – both specialists recognise familiar cues in the situation and are able to make leaps to judgement which are reasonably accurate. This is where I think the relevance to academia comes in. In these terms, by the time I got to postgrad level in medieval history, I had done all the slow, logical, effortful system 2 thinking required to fix the basic rules in my head. This meant I was able to do a whole lot of informed system 1 thinking – that is, the frequent employment of low-effort intuitive thinking to make leaps and solve problems. This in turn freed up my capacity to do dogged system 2 thinking that was genuinely meaningful.

This is basically like being on drugs – the satisfying animal hit of “stands to reason” system 1 thinking plus the rational knowledge that the system 2 slogging you’re doing is actually important – which is why I did the Masters in the first place and I guess why anyone sticks with academia at all. Finding thinking uniformly hard was something I had forgotten. I associated academic success with system 1 thinking, because that was the last state in which I had experienced it. I kept waiting for system 1 to kick in, and it didn’t; I didn’t have the database for it; I was grasping for the intuitive before I had done the basic crunchy bit that makes the intuition work. My then-tutor was identifying my success in system 1 thinking – my now-tutor was describing a process he could observe in my tentative system 2 thinking (picking holes in other people’s arguments is a great way of kicking off the system 2 crunchy bit).

For me the only implication here is “If you do a PhD, be crunchier about it”, but I also think it has interesting implications for academic careers in general. If you’ve done all your basic system 2 thinking in the first years of your career, you are able to take effective shortcuts in problem-solving, and more of your expensive system 2 capacity is freed up for the boundary-pushing work which will move you forward as a researcher. But it has its downsides too, from the point of view of both research and pedagogy. You’re no longer well-equipped to describe to your students – mired as they are in system 2 – what it is you’re really doing. Your shortcuts, once laid down, are less likely to get truly re-examined, which may mean the perpetuation of specialized versions of heuristic biases in your work.

Perhaps this provides another perspective on why intellectual revolutions are as fraught as they are. It is not mere social and professional defensiveness at work when new paradigms are rejected – we can usually detect these kinds of bias. At a higher level of abstraction, a call to embrace a new paradigm is a call to put down the lovely, easy, satisfying system 1 toys and start again from scratch with the unpromising lego bricks of system 2, which is what I have just had to do.