The Mind Shapes the City – adventure at the Whitechapel Gallery

I am in Adventures of the Black Square. This is a sprawling exhibition of abstract art produced between 1915 and 2015 at the Whitechapel Gallery. Amongst other things, it examines “how geometric abstraction was conceived of in three dimensions as built environment and social space”. Before we came out, AJ was reading a book called Has Modernism Failed?, which according to the back is about the fine art world, but since we’re about to look at a series of carefully ruled squares drawn by people who dream of order and functionality and who also as a sideline built tower blocks, the title triggers associations about modernist architecture. The conventional argument is that this too was a failure, that Le Corbusier’s machines for living failed to foster actual life, failed fatally to integrate occupants socially either with each other or the world beyond the tower block. I’ve never actually read a counter-argument to this, but I presume a good one would run along the lines of questioning the chain of causality. Did these streets in the sky cut off social relationships, or were social relationships going that way anyway as the postwar consensus receded?

Anyway, none of this is important, or so I thought when I arrived at the Gallery. I go to exhibitions about abstract modernism, which I basically believed to be a bit of a crock, for the same reason I go to see anything in which I have barely the least interest – as a place to put my eyes while my mind is clanking away on a quite limited series of the same old puzzles.

***

The great puzzle of human history is why so much of it is blank. Anatomically modern humans began to emerge around 200,000 years ago. Symbolic language is thought to go back even further than that. There is no inherent reason why early Homo sapiens’ abilities and inclinations should have been any different from our own. But for millennia, innovation in linguistic ability, tool use and social organisation was gradual to the point of imperceptibility. There are stylistically indistinguishable stone tools from single geographic areas made hundreds of generations apart. That’s an incredible length of time to keep doing things the same way. From our perspective, such constancy seems like it must have involved strenuous cognitive effort.

Of course, that’s one possibility. Others, of a more “silver bullet” nature, exist. The archaeologists Ian Hodder and Jacques Cauvin wrote separate and seminal works some twenty years apart on the same theme, seeking to explain the dramatic telescoping of intensive materially focussed human activity – the development of agriculture, domesticity, permanent (or at least seasonally recurring) settlement, urbanism, writing, inequality, money, mechanisation, empires, literature, pollution etc etc – into the last twelve thousand years or so. Their different but related theories focussed on the catalyst in the whole process – agriculture – and accounted for the change in terms of a cognitive switch. For Cauvin, religion altered people’s behaviour such that agriculture was incidentally created, while Hodder talks in terms of dual taxonomies, of the development of a way of thinking that set “inside” against “outside”, “domestic” against “wild” and thus gave rise to concepts like home, domestication, this land which I cultivate, that land which I do not, in what had formerly been an undifferentiated landscape. Both of them are talking about cognitive shifts.

Palaeolithic experts have objected to this general strain of theory on the grounds that it is “othering”: it shirks the task of understanding deep human history by simply writing it off under the heading of “cognitively different”. This reminds me somewhat of medievalists complaining that the early modernists have inaccurately co-opted all the great narratives about the beginnings of modern law, society, power structures and so on. Justly in my view, but then I’m a medievalist, so I would think that. Everybody wants to believe that their own period is the pivotal one, and no-one is exactly wrong, so any attempt to categorize an era as a sort of prequel to the real deal will meet with exactly these sorts of objections.

Anyway, it should be stressed these are not crazy-horse theories. Hodder and Cauvin are well within the conventional fold of academic archaeology, and by the standards of another famous otherer who preceded Hodder by some fifteen years, their proposals are mild. The psychologist Julian Jaynes also proposed a cognitive shift as a mechanism in deep history, but his model is both set more recently in time and, if anything, more neurologically profound. He argued that humans only acquired modern consciousness in the Bronze Age, under selective pressure from the economic and environmental forces that ultimately brought about the Bronze Age collapse across the Old World. Prior to this point, Jaynes proposed, human cognitive operations were divided across a “bicameral” structure in which, essentially, one half of the brain would tell the other half what to do, and the telling would be experienced literally as an externalised voice. Meta-consciousness, the ability to think about thinking, which is today the most commonly cited distinction between humans and other primates, was not a feature of the bicameral mind.

The corollaries of Jaynes’ theory are highly appealing. For one thing, it makes fresh sense of many themes of ancient world literature and philosophy. When the earliest Greek writers captured oral stories about gods conversing with mortals, they were not dealing in metaphor; they were fossilizing a phenomenon that for the original story-tellers occurred literally. And wherever we find a current of lamentation that the gods no longer talk to men, this echoes a very real angst about a perceived loss, rather than the generic hand-wringing about the state of the world and morals and corruption in public life that we have generally taken it to be.

Another appealing corollary: the theory places what are now perceived as mental health conditions in a new perspective. In Jaynesian terms, schizophrenic hallucinations are nothing more or less than an accidental survival or revival of bicameral neurological organisation. To put the implications into clearer perspective, this blogpost uses Jaynes’ model to extrapolate forward to a theoretical future in which humans have phased out (through constant immersion in electronic media) the ability to daydream, and hence see it, wherever it does still occur, as aberrant, a form of mental illness. And when these descendants of ours come across our references to this common and unremarkable cognitive phenomenon, they will either discount it in puzzlement or see it as in some way metaphorical, just as we see Plato’s vanished Golden Age in which gods walked the earth as metaphorical. They will try to separate us, retrospectively save us, from what they see as a form of mental illness. This projection rings true to me. Whatever in history cannot be readily explained as the workings of a “normal” human mind – whatever that is taken to be in the contemporary setting – is often quietly passed over, or even altered, as an unacceptable piece of dissonance. Hence historical novelists writing about the medieval or post-medieval periods cannot resist making their sympathetic characters a bunch of secret atheists. It turns out humans looking at history naturally do this sort of “othering” all the time, one way or another.

However, none of this really matters either. The point is not that Jaynes’ theory has to be accepted in its particulars; indeed there are enormous problems with it, not least the fact that he isn’t using it to explain a sequence of events anything like as profound as the beginnings of agriculture, so one is inclined to think: what’s the point, why did you come up with this in the first place? But then I’m a Neolithicist, so I would think that. A Bronze Age archaeologist would cite puzzles pertaining to their era which necessitate just such a profound theory to solve them. Anyway, the point is simply to illustrate the possibility of different forms of consciousness, and the bold attempts, on both the “respectable” and “radical” sides of the academic fence, to propose this as a mechanism in history. All these thinkers entertained the possibility that people in different eras – or even, frankly, the same eras – may experience consciousness differently, in ways that it is very hard for subsequent eras to recapture even with plentiful documentation. And while I get the “othering” objection, I think the alternative possibility of “saming” is on even shakier ground. It would be a bold theoretician who proposed that humans have always thought in precisely the same way. Just because something looks like a silver bullet doesn’t mean it doesn’t work.

***

The Guinness Cake in the Whitechapel Gallery is, we agree, not as good as mine. Most of the works we have seen have felt literally “modern” to me in terms of materials, images and techniques used, as well as modernist. But I am reflecting on the fact that very occasionally an artist has captured a geometric form in something old, imperfect, rugged, even rustic, like Gaspar Gasparian’s gelatin silver print of an undulating patterned pavement in São Paulo or Ivan Serpa’s weirdly disorienting print of stacked wine barrels seen from an oblique angle. The modernism here is all in the eye of the beholder.

It strikes me that both these images are about the organisation of people and things in urban space, and from this comes an idea. Perhaps creators – of anything – in urban space select geometric patterns subconsciously because they are efficient in the widest possible sense. By “creators” I mean both the man (as I presume it was) who decided what pattern should be on the pavement in São Paulo, and all the contributors in the whole millennia-long sequence of cultural history that led us to keep wine in barrels of this shape and stack them in warehouses like this. Geometric patterns are predictable by machines – by which I mean both computers and also the collective workforce which unthinkingly stacks the wine barrels in the warehouse. Nobody needs to reason out from first principles how to stack a wine barrel on some others. Regularity of form makes it obvious. The hard work of creating the pattern has been done, centuries ago. All you have to do is follow it, and get the next barrel. Geometric patterns are self-generating and therefore conserve energy, which in a city is always, one way or another, in short supply. Presumably, this is the only way cities ever get built, because otherwise you wouldn’t have the time. Even humans collectively wouldn’t have the time to constantly reinvent matter and material culture in every new iteration from scratch.
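To make the computer half of that claim concrete – a toy sketch of my own, in Python, nothing to do with the exhibition – here is how little deciding it takes to extend a regular pattern once the rule exists:

```python
# Toy illustration (my own, hypothetical): extending a running-bond
# "brick" pattern. The rule was decided once; every later row is
# produced mechanically, with no decisions made at all.
def next_row(previous_row, offset=1):
    """Each new row is the old one shifted by a fixed offset."""
    return previous_row[offset:] + previous_row[:offset]

row = list("AB" * 6)        # the pattern someone designed, long ago
wall = [row]
for _ in range(4):          # later builders just follow the rule
    wall.append(next_row(wall[-1]))

for r in wall:
    print("".join(r))
```

The point of the sketch is the point above: all the creative energy is in the first row, and everything after it comes free.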

I start thinking about the human mind and its affection for order and symmetry. Experiments in both visual and auditory perception suggest that the brain finds irregularity disturbing and regular patterns soothing. So why, to restate the earlier question about deep human history in another form, did it take us so long to create such order in the form of permanent spatial organisation? I do not mean literal symmetry here – although there is plenty of that in the early buildings of the Neolithic Near East and Mesopotamia – so much as just a reasonably permanent, set-in-mudbrick patterning of the way human social space was used. Is it likely that humans nursed for millennia an innate longing for the soothing qualities of orderly, repetitive patterns in physical space and never tried to play out these qualities in the material world around them (allowing for conditions of preservation and survival)? Is it not more likely that humans simply lacked this particular longing? Perhaps there was a time when the human mind was happier with chaos than we are now.

Or consider another feature of many early urban Old World societies: weaving. In Adventures of the Black Square, Adrian Esparza has deconstructed a traditional Mexican blanket into its constituent threads and strung them across the wall like an impossibly complex rainbow harp, creating a visual that you assume has been generated by some design software and printed on the wall until you look closely. The stripey colourful blanket is soothing to look at in both these forms. Textile production was an outcome of what’s called the Secondary Products Revolution in animal husbandry (where the First and Original Product was simply meat and the Secondary Products are milk, textiles, transport and traction). To look closely at a woven fabric is almost as soothing as to make it. And you don’t even need to be as technical as that to do some spinning. One spins by holding a spindle aloft – spindle whorls, along with loom weights, are common finds on ancient Near Eastern sites – and twiddling it in a constant oblique motion to draw the fibres together. It is pleasantly mindless work, and that is an instructive phrase. It allows for multi-tasking and for building social relationships: one might sing, socialise, or mind children as one spins. Spinning and textile-making in the prehistoric Near East were communal – and, we think, female – activities. That calming sense that comes from a repetitive action, a thread inexorably drawn from chaos and introduced to an ordered series, a warp and weft successfully married time and again – perhaps that began here, or rather made use of a cognitive inclination newly acquired.

Perhaps the development of urbanism is not so much a discovery of new social arrangements as a development – somehow – of new forms of consciousness which fostered the realisation in space of the self-generating, self-replicating form. We decided we needed patterns to feel happy. And once you have placed self-replicating patterns in your physical environment, they start doing an awful lot of work for you, freeing you to do other creative work like building political systems. They remove the thought element from social and spatial processes and speed them up, foster knowledge transmission, and ultimately do away with the need for a human to teach another human how to do something at all. Because urban space is heavily patterned, with the patterns encoded in permanent architecture, we only need a limited set of behaviours to operate successfully in it. Once you have learned how to wait at the bus stop at the end of the road, you can successfully wait at any bus stop in any urban space in the world. If all this were truly based on a cognitive shift, it would entirely explain the lopsided telescoping of history – hundreds of thousands of years of very little material culture and a mind not remotely bothered about ordering any, followed by 12,000 years of intense and increasing activity. We have been, to use a simile invoking another order-loving and structure-building species, as busy as bees.

***

Seen in this light, the modernist architectural movement represents the logical endpoint of a process that began (as you care to take it) in 5000 BC in Sumer, or 9000 BC in the Fertile Crescent. Urbanism changed our way of perceiving space, but it also followed on from a change that had been wrought in the mind. The more we liked order, and the more our brains sought to see it and, where it could not be seen, to create it, the more order came into society and art. But it took a few centuries of post-Enlightenment philosophy, a mind-bending war and a postwar period conducive to a new spirit of architecture before modernism in its pure form emerged. And there is no purer form of order that can be realised materially. No wonder our aesthetic conception of what it means to be modern – think of the cues employed by any director, set designer or costumer whenever “modern/futuristic” is a requirement in a script – hasn’t really moved on since the 1960s.

Vikings S1 – what may laughably be called a review a year late

Apparently US Memorial Day means we can’t have another episode of Game of Thrones S4 this week (though seemingly we can have another episode of Mad Men S7 culminating in a surreal dance number featuring a dead character, so what this says about USians’ priorities is anybody’s guess). Owing to this sword/beard-shaped lacuna in my weekend, plus the rain and an unhappily placed door jamb, I ended up spending most of yesterday on the sofa with my foot in the air, mainlining series 1 of Vikings.

I so wanted to love it, and this is why I end up petulantly reviewing stuff on the internet about a year after everybody else – by the time I get around to seeing something, I am invested. I am not someone who gets precious about historical accuracy, mind, and in the case of Vikings this is just as well. They have run together a retelling of the Ragnar Lodbrok sagas with bits and bobs from earlier and later in attested history. Ragnar himself is a misty, possibly mythical figure. King Aelle really did rule Northumbria, but somewhat later than 793, the date of the first Viking raid on Lindisfarne, which this series chooses as its anchor. I don’t really get why any show or film does this, by the way – suddenly pops a date onto the screen in curly writing in a way which contributes nothing whatever to the story, while inviting every Wikipedia-cruising carper on the internet to take potshots at your whole enterprise. Why tie yourself down like that, especially when you’re basically doing a bit of myth-retelling? Most people can situate the Viking Age adequately in their mental model of north-west European history – after the Romans, before the kings and castles – and if they can’t, a date isn’t going to tell them anything. If you need to introduce pacing it would make more sense to use cues like “The Following Summer”, though in the case of a series based around seasonal raiding no writer should really need to do that.

But while I’m happy for people to chop history and myth about, I think it’s a shame when it’s underused. History is stuffed with grand narratives, Viking history perhaps more than most. There were Vikings in the personal bodyguard of the Byzantine Emperors; they traded and raided all over the Baltic and down the great Eurasian rivers as far as Mesopotamia. You could easily tell a story that starts with a band of brothers in a modest collection of farms and fishing towns – like this one does – and show these familiar caricatures in all sorts of unfamiliar settings, encountering distant lands and undreamed-of riches and politics that might challenge their thuggishly-presented socio-economic arrangements. Yet there is only really one grand narrative in Vikings a full seven episodes in – by which time Game of Thrones had plunged us into about five different worlds and killed or put into serious jeopardy key people in most of them – and the narrative is, of course, “Go West”. Bullet-headed, gold-chasing Ragnar, who wants to sail west and conquer frankly not dissimilar lands to his own bar the odd fjord, versus risk-averse, sly Earl Haraldson (Gabriel Byrne reminding everybody that you can point the camera at a real actor for thirty seconds and get taken through ten times as many emotions and nuances as appear in their scripting), who insists his boats continue to raid eastwards.

And it’s true that the first encounters between the Vikings and the monks and soldiers of Northumbria were easily the best scenes in the whole thing. Culture clash is one of those themes that focuses a story-teller’s mind and the mind of the audience on the same pinpoint of time, and it nearly always comes off because everyone can see what the stakes are – this is the moment, everything screams at the viewer, where it all begins. Cautious welcomes turn to puzzlement as people start babbling mutually incomprehensible languages, and then somebody makes a wrong move and the whole thing turns bloody – very believable and great to watch. A lot of these early encounter scenes are staged so that most of the Vikings really are taller than most of the surrounding English, as well as messier, nastier-looking and with more hipsterish hair, and it works both as a metaphor for where these people’s fortunes are going and as homage to the terrified writers of ninth-century England who recorded them very much like this. But after that promising beginning we seem to have got locked onto a rather tedious storyline involving lots of people tramping through heathland, some home politics of the serious-expression-in-firelight variety, the perpetually ratty King Aelle, whose Archbishop, or whatever he is, seems to be channelling Peter Cook, and generally a gentle, predictable slide towards the Danelaw (spoiler for you right there). If there is going to be a game-changer that doesn’t involve Earl Haraldson’s hot widow and/or Ragnar’s sulky brother Rollo, well, this is a better show than I am currently giving it credit for.

One thing I did love was all the reverse-engineered archaeology – the opening titles basically show a stylised hoard/burial/shipwreck hybrid deposition event taking place (viz. they fall in the water), and at odd intervals in the series you can just hear, under the doomy music, the researchers gibbering excitedly about how cool it would be to show a strangled-sacrifice-and-hoard, or a skulls-next-to-bottoms burial, and provide an imaginative context for them. And it is pretty cool. One of these pleasing little moments features by far and away the best character, Floki, a trickster figure in the spirit of Loki himself, who gigglingly flips a coin from King Aelle’s massive buy-off into the Tyne as the Vikings sail for home, thereby reminding all archaeologists who spend their time reasoning out contexts for deposits that sometimes a thing is in the ground (or in the river bed, in this case) because an unpredictable psychopath put it there.

The fact that Floki is the best character points up what is lacking here, I think. Tricksters are fabulous devices for powering a story onwards, and have been used as such by every culture that has ever told stories; but while you can use them as sources of profound change in a story (Floki is the boat builder who makes Ragnar’s raids possible) or for flipping a situation over (it is Floki’s sudden capricious grab for the necklace of a Northumbrian soldier that prompts the first proper Viking-English skirmish), I shouldn’t be watching that character alone for signs that things are moving on. I also want to see relationships evolving between the others – Ragnar, his band of raiders and his family – that prompt them to take conflicting actions, rather than all of them swirling obediently round whatever Ragnar has smirkingly decided he fancies doing next. I also want some evidence of tensions between thirty-odd people fighting from the seat of their leather pants in a foreign land, rather than having the tensions written into the scenes afterwards, where they’re all safely home and Rollo the Sulker is away from the band griping with somebody else. Much the liveliest relationship in the whole thing is that between Ragnar and Aethelstan, the ex-Lindisfarne monk turned God-doubting slave, and the script flares into life when they talk because their interactions play successfully on differential status. Ragnar could, as Aethelstan points out, have him beaten to death on a whim and suffer no penalty, and we are invited to guess at Ragnar’s motives for taking a pretty indulgent line towards him – indulgent in the sense that he allows him to live as a slave in his household after slaughtering his defenceless companions, anyway.

It’s the only relationship in the whole thing I really care about, that feels like it could go sour, or that generally shows signs of having unpredictable emergent qualities. I want more unpredictability, I want Ragnar to have a proper opponent and real, evolving relationships with other people, and I want some honest-to-Odin bromance, goddammit. I have an almost limitless capacity for any amount of nonsense involving slightly grubby men charging about with swords (see also The Musketeers, which compellingly featured open shirts too), but at the moment I won’t be racing to download S2 unless there are some seriously good switcheroos in the last two episodes of this season, or of course unless I break the toes on my other foot on another rainy bank holiday weekend.

Brain dump #2: depression and the city

Mental disorders in the ancient world:

The examination of mental disorders would seem to be the almost exclusive domain of psychiatrists and psychologists, not humanities scholars. Yet William V. Harris, the William R. Shepherd Professor of History, has spent his time in recent years studying his chosen field—the history of ancient Greece and Rome—through the lens of mental illness.

This article doesn’t go into a lot of depth, but it made me wonder if, and how, mental illness could be approached through archaeology. Has anybody tried? The Wikipedia entry for the history of depression heralds a sub-section covering “Prehistory to medieval periods”, but in fact begins with classical Greece.

What would a pre-text history of mental illness look like? In a sense I think the very inaccessibility of prehistory makes mental illness as realistic a lens of study as any other kind of worldview; we are not being confused and bombarded by textual representations of more-or-less neurotypical thinking, as we are in common-or-garden classical Greece. If individuals and groups produce material culture which perpetuates or modifies their social norms, then depression, for example (assuming it is a neurological condition inherent to being human, and not a sort of Durkheimian malaise of the modern age), must be among the things represented in material culture.

It should be possible to detect an archaeology of depression.

Behavioural epigenetics:

According to the new insights of behavioral epigenetics, traumatic experiences in our past, or in our recent ancestors’ past, leave molecular scars adhering to our DNA. Jews whose great-grandparents were chased from their Russian shtetls; Chinese whose grandparents lived through the ravages of the Cultural Revolution; young immigrants from Africa whose parents survived massacres; adults of every ethnicity who grew up with alcoholic or abusive parents — all carry with them more than just memories.

It strikes me this would be a neat way of accounting for what Childe called “the urban revolution” – that very strange period, occurring spontaneously in different parts of the world, when humans began to live, permanently or at least seasonally, in large numbers side by side in the same place. If extremes of experience can alter gene expression, then it only takes a handful of random sets of extreme circumstances worldwide to alter the behaviour of whole communities, such that those communities are predisposed to what would become urban life.

Throw in a suitable agricultural environment and climatic conditions for population growth and you have History.

So that’s that settled.

How do archaeology conferences work?: engagement from my sofa

Lately I’ve been doing a lot of going to archaeology conferences and tweeting from them like a particularly chirrupy and attention-deficient budgie, so I’m finding it quite interesting to observe the goings-on at the Theoretical Archaeology Group conference in Bournemouth on #TAG2013. As a reasonably informed outsider (edit: to this particular conference, I mean. C2DE I ain’t), how much of a picture am I getting of discussions and papers as they unfold?

The first thing I observe is that, ooh, about 80% of the tweets I’ve seen so far this afternoon are from the CASPAR Researching Audiences session (see p17 here (pdf)). Not so much the session about cave art or the one about animal-human interaction or the one about actor-network theory, which if I’m reading the programme correctly are running concurrently. So, yeah, a lot of people who are interested in AV/digital media and archaeology are (a) attending the session about AV/digital media and archaeology and (b) on Twitter, and it is their output which currently constitutes the public face of #TAG2013. On which I, another person interested in AV/digital media and archaeology, am now commenting, and perhaps someone else interested in AV/digital media and archaeology will write a little post about Twitter engagement at #TAG2013 using these very tweets and this blog post as material, and so it goes on until we all disappear up our collective meta-arse, or at least until more archaeologists with wider interests start using Twitter. Lots of people who already know other people are talking to them about stuff they know is of mutual interest. No surprises there.

The second thing, which surprises me, is that I am unfeasibly thrilled when someone posts a picture. This is something I scarcely thought about at Monstrous Antiquities, CHAT and Sharing the Field, although I posted pictures from the first two. A spectacularly dumb reptilian bit of my brain likes images of places and people and having a visual hook on which to hang conversations I might have, or read about. This is despite the fact that I do at least vaguely know some of the people there. What seems to matter are pictures of that particular experience.

The reason I comment on all this is that, meta-related cynicism notwithstanding, I suspect academic conferences are currently under-utilised as a potential source of public engagement with the subject. Conferences, as live events, have a natural, attractive tempo that really works on social media, pretty much regardless of how dry the content – and Twitter, even more than most other social networks, is tempo-driven. That’s why BBC Question Time, amongst other things, is so famous there. It’s not that the socio-economic group that watches QT is disproportionately represented on Twitter – although I’m sure it is – it’s that all those people sit down on a Thursday night and start tweeting about it at once.

Plenty of academics thoroughly see the point of Twitter and other social media as a way of sharing their work and engaging both their colleagues and the public. But there is an inevitably static quality to a lot of this busy communication, which is particularly obvious to me as a former political blogger. In politics we had the natural tempo of The News to blog to. Making allowance for hobby horses and specialist subjects, a lot of the time we were all blogging about the same stuff. Reading and writing blogs fitted seamlessly into a large single conversation that was also being conducted on social networking platforms, in newspaper columns, and on TV. And it was important to Have Your Say there and then, because, you know, otherwise The News might have gone by the time you next looked!

Archaeology and history blogging, with rare news-driven exceptions, doesn’t have a natural tempo (unless you count the business of being an academic itself – the calendar, the pressures, etc.; referencing all this is effective social glue for academics, but it does not engage anybody else or speak to the work they would like to promote). Everyone is talking about their own stuff, in the order that seems best to them, and there is inevitably a certain narrative-driven cast about a lot of history and archaeology blogs. They know Stuff, we don’t, they are going to tell us Stuff, and unless we are specifically heading out into the internet with the aim of learning about Iron Age oppida, or whatever, we probably aren’t going to experience the Stuff as being of relevance to the concerns that are uppermost in our minds, far less be moved to post a reply. “Joining the debate”, which is what every empty comment box tempts you to do, feels less urgent if the debate has no particular timetable.

But if there is any regular feature of a researcher’s life where debate might usefully be showcased and used to pique interest, it’s the academic conference. Conferences are highly stylised as a form of debate, of course, and it’s difficult to report debate accurately anyway, but they do at least have the potential to suggest that archaeological and historical research might be about something more than everybody agreeing on a narrative – which, as Donald Henson’s opening paper in the Researching Audiences session apparently pointed out, is the picture presented on TV. Differing interpretations, debate and discussion are largely ironed out by producers who believe the viewer wants a single story.

It could be that the producers are right, of course, and nobody wants debate or discussion, or to follow the salient points of an archaeology paper as it’s given and respond to it. Social media offers a cheap and low-effort way of testing this point, but on the basis of my impressionistic scan of the #TAG2013 feed, there needs to be a lot more content and a lot more participants for the test to be fair.

Put “palaeo-diet” into a title field, and wait

I like watching how archaeology news plays out in the mainstream press, so I’ll be keeping an eye out for incarnations of this (via @archaeologynews). It’s a great example of a buzz concept being bolted onto an entirely sober press release:

Nutrients in food vital to location of early human settlements: The original ‘Palaeo-diet’

Research led by the University of Southampton has found that early humans were driven by a need for nutrient-rich food to select ‘special places’ in northern Europe as their main habitat. Evidence of their activity at these sites comes in the form of hundreds of stone tools, including handaxes.

A study led by physical geographer at Southampton Professor Tony Brown, in collaboration with archaeologist Dr Laura Basell at Queen’s University Belfast, has found that sites popular with our early human ancestors, were abundant in foods containing nutrients vital for a balanced diet. The most important sites, dating between 500,000 to 100,000 years ago were based at the lower end of river valleys, providing ideal bases for early hominins – early humans who lived before Homo sapiens (us).

Professor Brown says: “Our research suggests that floodplain zones closer to the mouth of a river provided the ideal place for hominin activity, rather than forested slopes, plateaus or estuaries. The landscape in these locations tended to be richer in the nutrients critical for maintaining population health and maximising reproductive success.”

So not at all research into “palaeo diets” in the twenty-first century sense of the term, as indicated by that cheeky “original”. Researchers are sometimes known to harrumph about press officers’ happy ways with reporting their work, but this is probably one of the more restrained ways to do spin. Most people will have read half the thing before they’ve figured out that it’s just research into, you know, diet, and in fact it’s clearly using “balanced diet” in the same sense as government guidelines do. Hard to see how anyone is going to twist this into an invitation to scarf down bacon and eggs.

As Sir Humphrey said of the Open Government paper, “Always dispose of the difficult bit in the title. It does less harm there than in the text.”

Question is, is it still newsworthy?

Incidentally, this is how it finishes:

The nutritional diversity of these sites allowed hominins to colonise the Atlantic fringe of north west Europe during warm periods of the Pleistocene. These sites permitted the repeated occupation of this marginal area from warmer climate zones further south

Professor Brown comments: “We can speculate that these types of locations were seen as ‘healthy’ or ‘good’ places to live which hominins revisited on a regular basis. If this is the case, the sites may have provided ‘nodal points’ or base camps along nutrient-rich route-ways through the Palaeolithic landscape, allowing early humans to explore northwards to more challenging environments.”

If you are an archaeologist you’re probably now as interested as I am, but basically that’s why you’re at the bottom of a hole and I’m writing on the internet at the kitchen table and neither of us are allowed into rooms where we might try to influence people.

Making stuff up in prehistory

I learn, or relearn, surprising amounts about archaeology and history from books that have nothing to do with either. Viz. this from Daniel Kahneman’s Thinking, Fast and Slow* (this passage starts out discussing The Black Swan, which I haven’t read):

Narrative fallacies arise inevitably from our continuous attempt to make sense of the world. The explanatory stories that people find compelling are simple; are concrete rather than abstract; assign a larger role to talent, stupidity, and intentions than to luck; and focus on a few striking events that happened rather than on the countless events that failed to happen. Any recent salient event is a candidate to become the kernel of a causal narrative…

The ultimate test of an explanation is whether it would have made the event predictable in advance. No story of Google’s unlikely success will meet that test, because no story can include the myriad of events that would have caused a different outcome. The human mind does not deal well with nonevents. The fact that many of the important events that did occur [in the history of Google’s rise] involve choices further tempts you to exaggerate the role of skill and underestimate the part that luck played in the outcome…

At work here is that powerful WYSIATI [what you see is all there is] rule. You cannot help dealing with the limited information you have as if it were all there is to know. You build the best possible story from the information available to you, and if it is a good story, you believe it. Paradoxically, it is easier to construct a coherent story when you know little, when there are fewer pieces to fit into the puzzle. Our comforting conviction that the world makes sense rests on a secure foundation: our almost unlimited ability to ignore our ignorance.

That’s essentially how we approach early Holocene prehistory, isn’t it? The only difference is that prehistorians are usually well aware of their ignorance; in fact they need very little prompting to admit that they are making a lot of stuff up. What they mean is that they operate in a field of unknowns and unknown unknowns exceptional even within the historically contingent world of the social sciences, and have to work with what they’ve got. If we are not satisfied with saying the limited things that can be said by scientific inductive process, then we have to be at home with more humanistic approaches – to wit, handwavy bullshit which is just as rigorous as we can make it, according to standards for rigour that have largely been developed along the same lines.

The only caveat to the above is that all the stuff about individuals doesn’t apply; prehistorians are saved from this error by the fact that they can’t usually identify historically situated individual choices or even, a lot of the time, group choices. Instead, they fret about agency, which might look at first glance like a wistful abstract substitute for identifiable individual choices but seen in the context of the Kahneman passage is actually a far superior approach, because it forces you to reason out to yourself the pitfalls of relying on intentionality to construct a historical explanation.

I say “early Holocene” because my sense is that deep prehistory suffers less from the narrative fallacy problem; the timescales of change are sufficiently large to make contingent historical explanation of the kind that plays tricks on the mind unhelpful. There are, by common consent, bigger questions to ask about the earlier bits of the human story and shinier toolsets from other disciplines to apply to them. It’s only in that moment between the end of the Palaeolithic and the successful outsourcing of human memory into writing – a moment you can easily perceive on human timescales, because the material cultures change so fast – that you find yourself pulled towards constructing historic-type explanations even less falsifiable than a proper “historical” narrative usually is, and that stuff’s bad enough.

It’s all very awkward.

* (I will finish this book soon, honestly, and stop banging on about it, and start reading something else and see the entire world through the prism of throwaway concepts in that for six weeks instead; there is probably a phrase in psychology for the act of doing this and it’s probably derived from the Greek for “bullshitting dilettante”).

Artists and archaeologists – what we learned

I have a self-imposed time limit of six months from dissertation hand-in on sneaking back into the Institute of Archaeology. Or I’ll become, you know, one of those people who sort of hang around institutions and can’t move on. Happily I’m only two months into that grace period, so let dysfunction be unconfined! And so it was that I rolled up to the Sharing the Field conference on Saturday last, beautifully Storified here (small digression: I am the last person in the universe to realise that Storify is actually very neat and allows you to intercut explanatory text with the tweets; it’s just that most people don’t). Partly due to the comprehensive round-up in the Storify and partly in an act of mercy to the reader, there follow only a couple (no, really) of selective observations.

The first thing we learned, from Robyn Mason of RealSim, is that seventeenth-century Galway looked exactly like Skyrim. (Incidentally, this is the second archaeology conference I have been to lately that mentioned Dr Who and computer games more or less in its opening moments, the first written up fabulously here et seq.) Not really, of course. RealSim’s 3D virtual model of seventeenth-century Galway, designed for smartphones, was visibly built by people who play a lot of Skyrim, but it actually models a very detailed map with a fascinating embedded political history. The Q&A to this paper drew out a number of interesting possibilities for the enhancement and usage of the sim, most of them focussed – as the creators are – on the heritage communication and public archaeology side of things. There have apparently been suggestions from archaeologists about including detailed findspot and stratigraphy information, but this would have inevitable impacts on usability and accessibility, so one approach might be to build different versions – heritage consumer and heritage pro, I guess.

What we didn’t get into very heavily were the possibilities for archaeological research, which left me wanting to find out more. Robyn noted that her own virtual footslogging round the town had led her to a new interpretation of the archaeology of a particular bridge, and there was a good question (I can’t remember from whom, sorry) on the potential to represent time depth and change in Galway, which feeds directly into the question of using these things for research. I’ve been turning over in my head for a while the idea of a huge visual simulation of the process of Neolithic and Chalcolithic sedentarization and built-environment creation in the Near East, and trying to figure out what the variables might be – and when I say “turning over in my head” I mean “wishing someone else who knows a lot more would build it so that I can play with it”. I’ve no idea at all whether anyone is working on this project or anything similar, but thanks to my excitable tweeting during this paper I have at least learned that one current IoA PhD student is doing his research in the general area of augmented reality in archaeological practice, so perhaps there are people out there I can pester about this apart from my long-suffering software-programming partner (what a tactical error that career was on his part).
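Purely by way of daydreaming out loud, this is the sort of skeleton I imagine such a simulation starting from. Every variable name, threshold and rate below is my own invention for illustration – nothing here comes from RealSim, the conference, or anybody’s actual research:

```python
# A purely speculative sketch of variables for a sedentarization
# simulation. All thresholds and rates are invented placeholders.
import random

random.seed(1)

class Household:
    """One mobile household deciding, season by season, whether
    accumulated stores make staying put more attractive than moving."""
    def __init__(self):
        self.stored_surplus = 0.0
        self.settled = False

    def season(self, yield_rate, storage_loss):
        # Harvest varies with luck; stores decay between seasons.
        self.stored_surplus += yield_rate * random.uniform(0.5, 1.5)
        self.stored_surplus *= (1 - storage_loss)
        # Arbitrary threshold: enough surplus anchors the household.
        if self.stored_surplus > 2.0:
            self.settled = True

households = [Household() for _ in range(100)]
for year in range(50):
    for h in households:
        h.season(yield_rate=0.4, storage_loss=0.1)

print(sum(h.settled for h in households), "of 100 households settled")
```

The interesting variables, of course, are the ones this toy leaves out – kinship, ritual, climate, the pull of other people’s architecture – which is exactly why I want somebody who knows more to build the real thing.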

The second thing we learned – or I did – is that I understand nothing about art as a process and an experience, and as such I am very grateful to have been exposed to a lot of more informed people talking about it. I tend to sit there, baffled by what someone is saying about experience, and metaphor, and representation, and wonder why they have decided to do what they’re doing, and why it is that I don’t understand this decision, and what pair of synapses it is that has failed to grow in my brain and is sitting there all stubby and underformed while a load of chemicals hurl themselves lemminglike into the neural soup between. I am a cold fish, in other words. And it was with this doleful self-knowledge in mind that I was interested to hear the suggestion – from Caitlin Easterby of Red Earth environmental arts group, who co-organized the conference – that data can come between archaeologists and a landscape, interfering with their intuitive understanding of it, whereas artists have more freedom to respond to the intuitive, and to present an intuitive vision to the public.

This is an intriguing suggestion. On one level I think it is absolutely true. Speaking for myself, my body is, as Ken Robinson puts it, basically a means of getting my head to meetings. I have only recently been reading about intuition (or Kahneman’s “system 1”), its uses and its limitations as against the “system 2” of conscious, deliberative thought, and it is clear that archaeological analysis is essentially a system 2 operation just like any other academic endeavour, so any self-conscious “thinking” you start to do about a landscape is going to be conducted in your non-intuitive mode. But as Kahneman’s account makes clear, not only do we use system 2 a whole lot less than we think we do, but also system 2 is constantly being informed and prompted by system 1. This is presumably why cold, hard facts about an era, or a site, or a landscape can feel as warm and alive and enriching as the process which Kent Flannery (I think) characterises wonderfully somewhere as “looking at a pot and emoting”. At least, they can to me. They are jumping off points for my imagination. I can’t see inside anyone else’s head, of course, and it may well be that my understanding of landscape is emotionally and psychologically impoverished by my desire to know Things about it, such as can be known, rather than just responding to the environment purely as an embodied being. Perhaps I have a whole untapped resource of intuitive embodied understanding locked up in me which will slowly unfold over the years – I certainly hope so, that would make life more fun. I am mindful also that there is a whole spectrum of practically-minded archaeologists who learn in embodied ways far more than I do.

But I would also question the degree to which anyone moves through a landscape purely as an embodied being. Like it or not, that is not the deal with being human. If it was, everybody would be able to meditate easily, and meditation is hard. One bone of contention that came up after Rachel Henson’s paper – featuring a fantastic little film about the experience of walking Box Hill, and encountering the various wildlife there – was the use of music in art which drew its inspiration from embodied experiences. Surely, the bone-maker suggested, the background music was not authentic to the experience of walking along. As it happens I think the question somewhat missed the point anyway, because Rachel’s core projects are these beautiful, tactile and extremely practical flickbooks of stills representing each step of a journey through a landscape (I was fortunate to have the opportunity to riffle through them in the pub later) which are intended to be used as guides and companions on the user’s journey, whereas the film was obviously for sitting in a darkened room and watching away from the landscape in question, just as we did at the conference.

My wider problem with that bone of contention is its implication that there is a sort of essential purity about immersive, sensuous experience of a landscape, and that music is an intervention that sullies it. Again, I do not think this is what being human is about, and surely both art and archaeology are trying to elucidate what it means to be human. As we were discussing in the pub at lunchtime later, some of us listen to music when we go walking, and we experience landscape and music together – is that not visceral? Is it not human? Some of us run in landscapes, and experience a different pace from the walker or the idler. All of us always walk in social contexts, even though we might be intending, at times, to escape or forget them (“I’m going for a walk.”) And anybody, at any point in human history, might wander through a landscape singing, arguing, keeping an eye on the known troublemakers among their sheep, nursing their broken heart or plotting their revenge on a sheep-rustling neighbour, all activities which combine system 1 and system 2 thinking and probably take a lot of cues from embodied experience, but are never entirely about embodiment. To experience a landscape in that deliberate fully immersive, senses-on-full-throttle way is only one way of doing it, it’s actually extremely difficult to do, and I don’t believe it is a way that needs to be privileged above others for us to “understand” a landscape. And certainly we should resist the temptation to make attributions of this kind of purity to landscape-walkers of previous eras. There’s a pretty fine line, it strikes me, between observing that previous cultures may have been more ecologically and agriculturally aware of the physical and symbolic layers of landscapes, and buying into a sort of bucolic version of the noble savage myth.

Which brings me back to the point about archaeologists and their intuitive understanding, or lack thereof. I guess my suspicion is that this is part of a modern narrative about ivory towers and the disconnect between modern Westerners and their natural physical environments, which are valid concerns and topics of conversation, but which run the risk of setting “intellectual” understanding into a false opposition with intuitive or embodied understanding. Not only is there not really any such thing as a purely intuitive understanding, because stripping away “facts” from a landscape doesn’t mean system 2 can’t work in it, and adding “facts” doesn’t suppress system 1; but an embodied understanding without facts, if you could achieve such a thing, is only one perspective on a landscape, and who wants to limit themselves to one?

I learned plenty of other things which may dribble out in time, but for now I really want to say thank you to the conference organizers for making that experience so fun, and making it free.