Brain dump #2: depression and the city

Mental disorders in the ancient world:

The examination of mental disorders would seem to be the almost exclusive domain of psychiatrists and psychologists, not humanities scholars. Yet William V. Harris, the William R. Shepherd Professor of History, has spent his time in recent years studying his chosen field—the history of ancient Greece and Rome—through the lens of mental illness.

This article doesn’t go into a lot of depth, but it made me wonder whether, and how, mental illness could be approached through archaeology. Has anybody tried? The Wikipedia entry for the History of Depression heralds a sub-section of coverage from “Prehistory to medieval periods”, but in fact begins with classical Greece.

What would a pre-text history of mental illness look like? In a sense I think the very inaccessibility of prehistory makes mental illness as realistic a lens of study as any other kind of worldview; we are not being confused and bombarded with textual representations of more-or-less neuro-typical thinking, as we are in common-or-garden classical Greece. If individuals and groups produce material culture which perpetuates or modifies their social norms, depression for example (assuming it is a neurological condition inherent to being human, and not a sort of Durkheimian malaise of the modern age) must be among the social norms represented in material culture.

It should be possible to detect an archaeology of depression.

Behavioural epigenetics:

According to the new insights of behavioral epigenetics, traumatic experiences in our past, or in our recent ancestors’ past, leave molecular scars adhering to our DNA. Jews whose great-grandparents were chased from their Russian shtetls; Chinese whose grandparents lived through the ravages of the Cultural Revolution; young immigrants from Africa whose parents survived massacres; adults of every ethnicity who grew up with alcoholic or abusive parents — all carry with them more than just memories.

It strikes me this would be a neat way of accounting for what Childe called “the urban revolution” – that very strange period occurring spontaneously in different parts of the world where humans began to live, permanently or at least seasonally, in large numbers side by side in the same place. If extremes of behaviour can alter gene expression then it only takes a handful of random sets of extreme circumstances worldwide to alter the behaviour of a whole community, such that this community is predisposed to what would become urban life.

Throw in a suitable agricultural environment and climatic conditions for population growth and you have History.

So that’s that settled.

How do archaeology conferences work?: engagement from my sofa

Lately I’ve been doing a lot of going to archaeology conferences and tweeting from them like a particularly chirrupy and attention-deficient budgie, so I’m finding it quite interesting to observe the goings-on at the Theoretical Archaeology Group conference in Bournemouth on #TAG2013. As a reasonably informed outsider (edit: to this particular conference, I mean. C2DE I ain’t), how much of a picture am I getting of discussions and papers as they unfold?

The first thing I observe is that, ooh, about 80% of the tweets I’ve seen so far this afternoon are from the CASPAR Researching Audiences session (see p17 here (pdf)). Not so much the session about cave art or the one about animal-human interaction or the one about actor-network theory, which if I’m reading the programme correctly are running concurrently. So, yeah, a lot of people who are interested in AV/digital media and archaeology are (a) attending the session about AV/digital media and archaeology and (b) on Twitter, and it is their output which currently constitutes the public face of #TAG2013. On which I, another person interested in AV/digital media and archaeology, am now commenting, and perhaps someone else interested in AV/digital media and archaeology will write a little post about Twitter engagement at #TAG2013 using these very tweets and this blog post as material, and so it goes on until we all disappear up our collective meta-arse, or at least until more archaeologists with wider interests start using Twitter. Lots of people who already know other people are talking to them about stuff they know is of mutual interest. No surprises there.

The second thing, which surprises me, is that I am unfeasibly thrilled when someone posts a picture. This is something I scarcely thought about at Monstrous Antiquities, CHAT and Sharing the Field, although I posted pictures from the first two. A spectacularly dumb reptilian bit of my brain likes images of places and people and having a visual hook on which to hang conversations I might have, or read about. This is despite the fact that I do at least vaguely know some of the people there. What seems to matter are pictures of that particular experience.

The reason I comment on all this is that, meta-related cynicism notwithstanding, I suspect academic conferences are currently under-utilised as a potential source of public engagement with the subject. Conferences, as live events, have a natural, attractive tempo that really works on social media pretty much regardless of how dry the content is – and Twitter, even more than most other social networks, is tempo-driven. That’s why BBC Question Time, amongst other things, is such a phenomenon there. It’s not that the socio-economic group that watches QT is disproportionately represented on Twitter – although I’m sure they are – it’s that all those people sit down on a Thursday night and start tweeting about it at once.

Plenty of academics thoroughly see the point of Twitter and other social media as a way of sharing their work and engaging both their colleagues and the public. But there is an inevitably static quality to a lot of this busy communication, which is particularly obvious to me as a former political blogger. In politics we had the natural tempo of The News to blog to. Making allowance for hobby horses and specialist subjects, a lot of the time we were all blogging about the same stuff. Reading and writing blogs fitted seamlessly into a large single conversation that was also being conducted on social networking platforms, in newspaper columns, and on TV. And it was important to Have Your Say there and then, because, you know, otherwise The News might have gone by the time you next looked!

Archaeology and history blogging, with rare news-driven exceptions, doesn’t have a natural tempo (unless you count the business of being an academic itself, the calendar, the pressures etc; referencing all this is effective social glue for academics but does not engage anybody else or speak to the work they would like to promote). Everyone is talking about their own stuff, in the order that seems best to them, and there is inevitably a certain narrative-driven cast about a lot of history and archaeology blogs. They know Stuff, we don’t, they are going to tell us Stuff, and unless we are specifically heading out into the internet with the aim of learning about iron age oppida, or whatever, we probably aren’t going to experience the Stuff as being of relevance to the concerns that are uppermost in our minds, far less be moved to post a reply. “Joining the debate”, which is what every empty comment box tempts you to do, feels less urgent if the debate has no particular timetable.

But if there is any regular feature of a researcher’s life where debate might usefully be showcased and used to pique interest, it’s academic conferences. They’re highly stylised as a form of debate, of course, and it’s difficult to report debate accurately anyway, but they do at least have the potential to suggest that archaeological and historical research might be about something more than everybody agreeing on a narrative – which, as Donald Henson’s opening paper in the Researching Audiences session apparently pointed out, is the picture presented on TV. Differing interpretations, debate, discussion are largely ironed out by producers who believe the viewer wants a single story.

It could be that the producers are right, of course, and nobody wants debate or discussion or to follow the salient points of an archaeology paper as it’s given and respond to it. Social media offers a cheap and low effort way of testing this point, but on the basis of my impressionistic scan of the #TAG2013 feed, there needs to be a lot more content and a lot more participants for the test to be fair.

Put “palaeo-diet” into a title field, and wait

I like watching how archaeology news plays out in the mainstream press, so I’ll be keeping an eye out for incarnations of this (via @archaeologynews). It’s a great example of a buzz concept being bolted onto an entirely sober press release:

Nutrients in food vital to location of early human settlements: The original ‘Palaeo-diet’

Research led by the University of Southampton has found that early humans were driven by a need for nutrient-rich food to select ‘special places’ in northern Europe as their main habitat. Evidence of their activity at these sites comes in the form of hundreds of stone tools, including handaxes.

A study led by physical geographer at Southampton Professor Tony Brown, in collaboration with archaeologist Dr Laura Basell at Queen’s University Belfast, has found that sites popular with our early human ancestors were abundant in foods containing nutrients vital for a balanced diet. The most important sites, dating between 500,000 and 100,000 years ago, were based at the lower end of river valleys, providing ideal bases for early hominins – early humans who lived before Homo sapiens (us).

Professor Brown says: “Our research suggests that floodplain zones closer to the mouth of a river provided the ideal place for hominin activity, rather than forested slopes, plateaus or estuaries. The landscape in these locations tended to be richer in the nutrients critical for maintaining population health and maximising reproductive success.”

So not at all research into “palaeo diets” in the twenty-first century sense of the term, as indicated by that cheeky “original”. Researchers are sometimes known to harrumph about press officers’ happy ways with reporting their work, but this is probably one of the more restrained ways to do spin. Most people will have read half the thing before they’ve figured out that it’s just research into, you know, diet, and in fact it’s clearly using “balanced diet” in the same sense as government guidelines do. Hard to see how anyone is going to twist this into an invitation to scarf down bacon and eggs.

As Sir Humphrey said of the Open Government paper, “Always dispose of the difficult bit in the title. It does less harm there than in the text.”

Question is, is it still newsworthy?

Incidentally, this is how it finishes:

The nutritional diversity of these sites allowed hominins to colonise the Atlantic fringe of north west Europe during warm periods of the Pleistocene. These sites permitted the repeated occupation of this marginal area from warmer climate zones further south

Professor Brown comments: “We can speculate that these types of locations were seen as ‘healthy’ or ‘good’ places to live which hominins revisited on a regular basis. If this is the case, the sites may have provided ‘nodal points’ or base camps along nutrient-rich route-ways through the Palaeolithic landscape, allowing early humans to explore northwards to more challenging environments.”

If you are an archaeologist you’re probably now as interested as I am, but basically that’s why you’re at the bottom of a hole and I’m writing on the internet at the kitchen table and neither of us are allowed into rooms where we might try to influence people.

Making stuff up in prehistory

I learn, or relearn, surprising amounts about archaeology and history from books that have nothing to do with either. Viz. this from Daniel Kahneman’s Thinking, Fast and Slow* (this passage starts out discussing The Black Swan, which I haven’t read):

Narrative fallacies arise inevitably from our continuous attempt to make sense of the world. The explanatory stories that people find compelling are simple; are concrete rather than abstract; assign a larger role to talent, stupidity, and intentions than to luck; and focus on a few striking events that happened rather than on the countless events that failed to happen. Any recent salient event is a candidate to become the kernel of a causal narrative…

The ultimate test of an explanation is whether it would have made the event predictable in advance. No story of Google’s unlikely success will meet that test, because no story can include the myriad of events that would have caused a different outcome. The human mind does not deal well with nonevents. The fact that many of the important events that did occur [in the history of Google’s rise] involve choices further tempts you to exaggerate the role of skill and underestimate the part that luck played in the outcome…

At work here is that powerful WYSIATI [what you see is all there is] rule. You cannot help dealing with the limited information you have as if it were all there is to know. You build the best possible story from the information available to you, and if it is a good story, you believe it. Paradoxically, it is easier to construct a coherent story when you know little, when there are fewer pieces to fit into the puzzle. Our comforting conviction that the world makes sense rests on a secure foundation: our almost unlimited ability to ignore our ignorance.

That’s essentially how we approach early Holocene prehistory, isn’t it. The only difference is prehistorians are usually well aware of their ignorance, in fact they need very little prompting to admit that they are making a lot of stuff up. What they mean is that they operate in a field of unknowns and unknown unknowns exceptional even within the historically contingent world of social sciences, and have to work with what they’ve got. If we are not satisfied with saying the limited things that can be said by scientific inductive process, then we have to be at home with more humanistic approaches, to wit handwavy bullshit which is just as rigorous as we can make it, according to standards for rigour that have largely been developed along the same lines.

The only caveat to the above is that all the stuff about individuals doesn’t apply; prehistorians are saved from this error by the fact that they can’t usually identify historically situated individual choices or even, a lot of the time, group choices. Instead, they fret about agency, which might look at first glance like a wistful abstract substitute for identifiable individual choices but seen in the context of the Kahneman passage is actually a far superior approach, because it forces you to reason out to yourself the pitfalls of relying on intentionality to construct a historical explanation.

I say “early Holocene” because my sense is that deep prehistory suffers less from the narrative fallacy problem; the timescales of change are sufficiently large to make contingent historical explanation of the kind that plays tricks on the mind unhelpful. There are by common consent bigger questions to ask about the earlier bits of human story and shinier toolsets from other disciplines to apply to them. It’s only in that moment between the end of the Palaeolithic and the successful outsourcing of human memory into writing, a moment you can easily perceive on human timescales because the material cultures change so fast, that you find yourself pulled towards constructing historic-type explanations that are even less falsifiable than your proper “historical” historic narrative usually is, and that stuff’s bad enough.

It’s all very awkward.

* (I will finish this book soon, honestly, and stop banging on about it, and start reading something else and see the entire world through the prism of throwaway concepts in that for six weeks instead; there is probably a phrase in psychology for the act of doing this and it’s probably derived from the Greek for “bullshitting dilettante”).

Artists and archaeologists – what we learned

I have a self-imposed time limit of six months from dissertation hand-in on sneaking back into the Institute of Archaeology. Or I’ll become, you know, one of those people who sort of hang around institutions and can’t move on. Happily I’m only two months into that grace period, so let dysfunction be unconfined! And so it was that I rolled up to the Sharing the Field conference on Saturday last, beautifully Storified here (small digression: I am the last person in the universe to realise that Storify is actually very neat and allows you to intercut explanatory text with the tweets; it’s just that most people don’t). Partly due to the comprehensive round-up in the Storify and partly in an act of mercy to the reader, there follow only a couple (no, really) of selective observations.

The first thing we learned, from Robyn Mason of RealSim, is that seventeenth-century Galway looked exactly like Skyrim. (Incidentally, this is the second archaeology conference I have been to lately that mentioned Doctor Who and computer games more or less in its opening moments, the first written up fabulously here et seq.). Not really, of course. RealSim’s 3D virtual model of seventeenth-century Galway designed for smartphones was visibly built by people who play a lot of Skyrim, but it actually models a very detailed map with a fascinating embedded political history. The Q&A to this paper drew out a number of interesting possibilities for the enhancement and usage of the sim, and most were focussed – as the creators are – on the heritage communication and public archaeology side of things. There have apparently been suggestions from archaeologists about including detailed findspot and stratigraphy information, but this would have inevitable impacts on usability and accessibility, and so one possible approach is perhaps to build different versions – heritage consumer and heritage pro, I guess.

What we didn’t get into very heavily were the possibilities for archaeological research, which left me wanting to find out more. Robyn noted that her own virtual footslogging round the town had led her to a new interpretation of the archaeology of a particular bridge, and there was a good question (can’t remember from whom, sorry) on the potential to represent time depths and change in Galway which particularly feeds into the question of using these things for research. I’ve been turning over in my head for a while the idea of a huge visual simulation of the process of Neolithic and Chalcolithic sedentarization and built environment creation in the Near East and trying to figure out what the variables might be, and when I say “turning over in my head” I mean “wishing someone else who knows a lot more would build it so that I can play with it”. I’ve no idea at all whether anyone is working on this project or anything similar, but thanks to my excitable tweeting during this paper I have at least learned that one current IoA PhD student is doing his research in the general area of augmented reality in archaeological practice, so perhaps there are people out there I can pester about this apart from my long-suffering software-programming partner (what a tactical error that career was on his part).

The second thing we learned – or I did – is that I understand nothing about art as a process and an experience, and as such I am very grateful to have been exposed to a lot of more informed people talking about it. I tend to sit there, being baffled by what someone is saying about experience, and metaphor, and representation, and wonder why they have decided to do what they’re doing, and why it is that I don’t understand this decision, and what pair of synapses it is that have failed to grow in my brain and are sitting there all stubby and underformed while a load of chemicals hurl themselves lemminglike into the neural soup between. I am a cold fish, in other words. And it was with this doleful self-knowledge in mind that I was interested to hear the suggestion – from Caitlin Easterby of Red Earth environmental arts group, who co-organized the conference – that data can come between archaeologists and a landscape, interfering with their intuitive understanding of it, whereas artists have more freedom to respond to the intuitive, and to present an intuitive vision to the public.

This is an intriguing suggestion. On one level I think it is absolutely true. Speaking for myself, my body is, as Ken Robinson puts it, basically a means of getting my head to meetings. I have only recently been reading about intuition (or Kahneman’s “system 1”), its uses and its limitations as against the “system 2” of conscious, deliberative thought, and it is clear that archaeological analysis is essentially a system 2 operation just like any other academic endeavour, so any self-conscious “thinking” you start to do about a landscape is going to be conducted in your non-intuitive mode. But as Kahneman’s account makes clear, not only do we use system 2 a whole lot less than we think we do, but also system 2 is constantly being informed and prompted by system 1. This is presumably why cold, hard facts about an era, or a site, or a landscape can feel as warm and alive and enriching as the process which Kent Flannery (I think) characterises wonderfully somewhere as “looking at a pot and emoting”. At least, they can to me. They are jumping off points for my imagination. I can’t see inside anyone else’s head, of course, and it may well be that my understanding of landscape is emotionally and psychologically impoverished by my desire to know Things about it, such as can be known, rather than just responding to the environment purely as an embodied being. Perhaps I have a whole untapped resource of intuitive embodied understanding locked up in me which will slowly unfold over the years – I certainly hope so, that would make life more fun. I am mindful also that there is a whole spectrum of practically-minded archaeologists who learn in embodied ways far more than I do.

But I would also question the degree to which anyone moves through a landscape purely as an embodied being. Like it or not, that is not the deal with being human. If it was, everybody would be able to meditate easily, and meditation is hard. One bone of contention that came up after Rachel Henson’s paper – featuring a fantastic little film about the experience of walking Box Hill, and encountering the various wildlife there – was the use of music in art which drew its inspiration from embodied experiences. Surely, the bone-maker suggested, the background music was not authentic to the experience of walking along. As it happens I think the question somewhat missed the point anyway, because Rachel’s core projects are these beautiful, tactile and extremely practical flickbooks of stills representing each step of a journey through a landscape (I was fortunate to have the opportunity to riffle through them in the pub later) which are intended to be used as guides and companions on the user’s journey, whereas the film was obviously for sitting in a darkened room and watching away from the landscape in question, just as we did at the conference.

My wider problem with that bone of contention is its implication that there is a sort of essential purity about immersive, sensuous experience of a landscape, and that music is an intervention that sullies it. Again, I do not think this is what being human is about, and surely both art and archaeology are trying to elucidate what it means to be human. As we were discussing in the pub at lunchtime later, some of us listen to music when we go walking, and we experience landscape and music together – is that not visceral? Is it not human? Some of us run in landscapes, and experience a different pace from the walker or the idler. All of us always walk in social contexts, even though we might be intending, at times, to escape or forget them (“I’m going for a walk.”) And anybody, at any point in human history, might wander through a landscape singing, arguing, keeping an eye on the known troublemakers among their sheep, nursing their broken heart or plotting their revenge on a sheep-rustling neighbour, all activities which combine system 1 and system 2 thinking and probably take a lot of cues from embodied experience, but are never entirely about embodiment. To experience a landscape in that deliberate fully immersive, senses-on-full-throttle way is only one way of doing it, it’s actually extremely difficult to do, and I don’t believe it is a way that needs to be privileged above others for us to “understand” a landscape. And certainly we should resist the temptation to make attributions of this kind of purity to landscape-walkers of previous eras. There’s a pretty fine line, it strikes me, between observing that previous cultures may have been more ecologically and agriculturally aware of the physical and symbolic layers of landscapes, and buying into a sort of bucolic version of the noble savage myth.

Which brings me back to the point about archaeologists and their intuitive understanding, or lack thereof. I guess my suspicion is that this is part of a modern narrative about ivory towers and the disconnect between modern Westerners and their natural physical environments, which are valid concerns and topics of conversation, but which run the risk of setting “intellectual” understanding into a false opposition with intuitive or embodied understanding. Not only is there not really any such thing as a purely intuitive understanding, because stripping away “facts” from a landscape doesn’t mean system 2 can’t work in it, and adding “facts” doesn’t suppress system 1; but an embodied understanding without facts, if you could achieve such a thing, is only one perspective on a landscape, and who wants to limit themselves to one?

I learned plenty of other things which may dribble out in time, but for now I really want to say thank you to the conference organizers for making that experience so fun, and making it free.