Session 9 reflections: Listening to and sounding soundscapes

Dr John Drever

For the penultimate workshop of the series we were treated to a session by John Drever, a sonic artist and sound researcher based at Goldsmiths, University of London. If one had to choose a single keyword for the session, it would have to be the neologism ‘soundscapes’. Attending to soundscapes – which might be defined as the ambient sound or background noise as-it-happens in particular environments – demands, John seemed to suggest, a rather different approach from the typically generic treatment of sound.

Approaching soundscapes or sonic environments has a long and diverse tradition, and John cited several interesting examples. Humphrey Jennings’ Listen to Britain, a 1942 propaganda film without narration that depicted life in Britain during the Blitz, was an example of how the use of site-specific sound could introduce a totally new dimension to film. Pierre Schaeffer’s musique concrète often involved recording environmental sound, dividing it into snippets, and using looping and other methods to create innovative musical rhythms. Ludwig Koch was renowned for recording a huge array of animal sounds, leading not only to a better appreciation of wildlife, but also to the founding of the modern sound archive.

The above examples explore soundscapes in that they simultaneously privilege both site- and sound-specificity. For this reason, they contrast in particular with what John called ‘generic sound’ – emblematised above all by the pervasiveness of sound effects, of which John again gave some excellent and often intriguing examples. Thanks to vast sound effect catalogues, such as those produced by the BBC’s Radiophonic Workshop, we usually expect when watching television that, for example, a midnight visit to a cemetery invariably involves a hooting owl, or that a daytime visit to the seaside will involve the sound of seagulls. We also expect that, when a large passenger airplane touches down on the runway, we will hear squealing tires, even though we don’t tend to hear this sound when travelling by air in real life. Indeed, the first use of this squealing tire sound was recorded not from airplane tires but from a braking car. Then there is the ‘Wilhelm Scream’ which, even if we don’t know it by name, we’ve nevertheless likely heard (it’s originally from the film Distant Drums) many times over in television and film. What’s more, we even have professional creators of generic sound: Foley artists.

While it is probably obvious why sound effects or generic sounds are so frequently used in film, television and radio, their downside, John argued, is that they create an assumption that all the sound one would ever need is available (for purchase) amongst the many sound effects catalogues. This creates a certain irony for sound engineers: people who are deeply interested in sound, actual sound, yet who are normally engaged in working with sounds of a rather less interesting pedigree. John was suggesting, I think, that this speaks to how we tend to regard sound more generally. There is a tendency to treat sound instrumentally, typically as an add-on to the primacy of the visual. What this means is that we tend to ignore or downplay the reality, depths, dimensions and nuance of sound and sonic environments.

One thing I loved about John’s session was the way he periodically reached into a large bag he’d brought and unveiled for us a new piece of sound kit. Mainly, this ‘kit’ consisted of different iterations of microphones. Actually, John was slightly apologetic about doing this; as he remarked, he didn’t want to be seen as advertising different tools of the sound recording trade. But what was interesting here was that he wasn’t just highlighting a range of technical devices for recording sound, but also, by proxy, illustrating the many dimensions of researching soundscapes. We began, for example, with the highly directional ‘shotgun’ microphone, often used in film and television, and at sporting events and other field recordings, to focus on a narrow source of sound and cancel out sound coming from other angles. We were also introduced (most of us for the first time) to binaural microphones, which look rather like small headphone buds, making them ideal for covert recording. Aside from being covert, however, they also record sound much as the human ear hears it, making them ideal for recordings intended to be listened to with headphones. And things got more and more unfamiliar, from contact microphones, which sense audio vibrations through objects or masses like water (John played us a remarkable underwater recording of shrimp), to telephone pickups, cheap devices available at places like Maplin or Radio Shack that allow for the recording of phone calls (and of course phone tapping) but also pick up electromagnetic fields. As we discovered, my mobile phone has a most interesting sonic landscape of its own! In each of these examples, we were introduced not only to means of recording sound, but also to the cultural and spatial dimensions of sound in our everyday worlds.

One of the most useful devices was, as John described it, perhaps one of the more ordinary and affordable: his Zoom H2 Handy Recorder. This device records 360 degrees of sound across four channels, and its portability and ease of use have opened up, amongst other things, some very interesting preliminary research into the proliferation of high-speed hand dryers such as the Dyson Airblade. As John noted, most people think of these dryers as a very positive new development; rather than using heated air, they use a much more effective thin layer of cold air at high speed (about 400 mph), which additionally has benefits in terms of energy consumption. Yet they have a major if unacknowledged drawback: a significant increase in noise pollution. These devices are extremely loud, and once placed in a public toilet can reach decibel levels that are normally considered unacceptable – particularly for those with special needs (e.g. people with dementia or visual impairment). In highlighting this emerging research, John pointed out that doing research on soundscapes is about more than generally exploring the nuances and specificity of sound environments. It is not simply the terrain of theorists of sound, or of audiophiles; it is also an area with real potential to open up new areas of pressing ethical and policy concern.

By Scott Rodgers

Session 8 reflections: Using qualitative data software

Gareth Harris

For a change, in Session 8 we were based not in our usual seminar room, but a computer lab in Birkbeck’s main Malet Street building. After all, one guiding theme of Session 8 was to provide an introduction to a qualitative data analysis software package – NVivo. So, as we waited for things to commence, amongst rows of PCs, facing Windows XP login boxes, we might have been forgiven for thinking this workshop session would be little more than an introductory overview of a software application.

And we would have been wrong. Before getting into anything of the kind, Gareth Harris instead took us on a highly interesting and thought-provoking journey through some of the epistemological issues and debates associated with the broader world of computer assisted qualitative data analysis software (CAQDAS – see also the CAQDAS Networking Project). As it turns out, CAQDAS is much more than a set of mundane research tools; in many ways it sits at the fulcrum of contemporary debates about the interface of research and technology. Gareth kindly provided the slides he used on this journey, which augment my reflections below and provide some useful links.

Gareth began by pointing out that CAQDAS has proceeded through three fairly distinct generations:

1. Search and retrieval of text

2. The coding of multiple textual fragments, which can then be retrieved as coded themes or categories

3. Theory-building, in other words, looking at the relations between categories (e.g. through the use of visualisation tools) in order to build higher-order classifications and categories.
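To make the first two generations concrete, here is a minimal sketch of coding and retrieval in Python. The fragments, code names and helper functions below are invented purely for illustration; this is not NVivo’s actual data model, just the general idea of tagging textual fragments and pulling them back out by theme:

```python
# Second-generation CAQDAS in miniature: tag text fragments with codes
# (themes), then retrieve all fragments sharing a code. All data invented.
coded_fragments = [
    ("I mostly listen to music on my phone now", ["mobile", "listening"]),
    ("The library is too noisy to work in", ["noise", "place"]),
    ("I record street sounds on my commute", ["mobile", "recording"]),
]

def retrieve(code):
    """Return every fragment tagged with the given code."""
    return [text for text, codes in coded_fragments if code in codes]

# A gesture towards the third generation: rather than just retrieving
# fragments, look at relations BETWEEN codes, e.g. which codes co-occur.
def co_occurring(code):
    """Return the set of codes appearing alongside the given code."""
    related = set()
    for _, codes in coded_fragments:
        if code in codes:
            related.update(c for c in codes if c != code)
    return related
```

The jump from `retrieve` to `co_occurring` is, in crude form, the jump from the second to the third generation: the object of analysis shifts from the coded text itself to the structure of relations between the codes.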

In the 1990s, there were quite a number of critiques of and debates about the uses of software for qualitative research, and Gareth pointed out that these were pitched almost completely in relation to the second generation – using software for simple data coding. As a result, there has been little debate, at least so far (Gareth estimates it may indeed kick off soon), about the emergence of third generation CAQDAS – using software for more complex theory-building. What is so interesting about this third generation, Gareth noted, is that it aligns software like NVivo ever closer to the inductive approaches of Grounded Theory. This is not necessarily a bad thing, but it does highlight the implicit incorporation of a fairly specific inductive methodology into a software package. This at least potentially raises stickier issues than the second generation, which is little more than a faster and more secure way to code data. In effect, the second generation did little more than computerise what one would otherwise have achieved with such materials (technologies?) as a stack of printed photocopies, multi-coloured highlighters, a pair of scissors and a glue stick.

In this context, Gareth highlighted a very important question, harking back to the overall themes of the workshop, and particularly our introductory session. Does CAQDAS have its own ‘effects’ (positive and negative) on our research? In other words, is a CAQDAS package like NVivo a neutral tool of our autonomous methodological actions, or does it have agency and channel our research in some way? On the one hand, one response is that it is indeed primarily a tool for our research practices and decisions. This response is often made to counter critiques of CAQDAS – that it is mechanistic, decontextualising, even a fetishisation of coding (see Gareth’s slides for more) – with the argument that all this really depends on how CAQDAS is actually used. On the other hand, perhaps it is naïve to take this claim too far. After all, CAQDAS must channel research, in the same way that writing an essay using word processing software entails a fundamentally different process than writing by hand (e.g. one edits continuously, rather than in more fixed stages). So, CAQDAS might be seen as a technology with certain capacities and constraints, but one which also comes into contact with a researcher’s know-how, practical work and ethics in doing research. And in this process of contact, Gareth seemed to suggest, we find a tool which allows research practice to be potentially much more transparent and accountable than that based on paper (rather different, Gareth emphasised, from any erroneous claim that CAQDAS makes qualitative research ‘reliable’, a concept with strong connections to positivism).

It’s worth mentioning an interesting tidbit Gareth pointed out early in the workshop: there is a rather good chance that in 5 or 10 years’ time the more extensive and thorough literature reviews will be conducted using CAQDAS. Assuming one has a library of electronic materials, this is easy to see. It would not only provide a very effective way to code, organise and retrieve text fragments of interest across several sources, but would also provide interesting ways to compare how authors have dealt with similar concepts, to visualise connections between groups of scholarly communities and their ideas, and much more. Like any technological change, there would be drawbacks; but the advantages for large-scale literature reviews, on complex subjects, seem quite clear.

Now, you’ll remember of course, we were in the lab, ready for ‘training’. And we did spend some time putting NVivo 9 through its paces. Certainly, participants had a chance to gain some initial exposure to the software and see its overall architecture. But having had such a good and intellectually interesting overview, I’d expect most participants left thinking about their exposure to NVivo within a much bigger picture of research practice and technology.

By Scott Rodgers

Session 7 Reflections: Open Access Journals

We were lucky to be joined this week by Robert Kiley, Head of Digital Services at the Wellcome Trust, to talk about the current challenges and changes happening with regard to the publication of Open Access journals.

To conduct research, as Robert pointed out, requires the following elements:

  1. Access to research
  2. The right to re-use the content and
  3. The right to create derivative work from the content

At the moment, most researchers gain access to journals via personal subscriptions or via their university library subscriptions. It is a system that works rather well for most of us who conduct our research in Higher Education institutions. But what about those who do not? Robert illustrated the problem with a story. In 2003, the Wellcome Trust’s new director Mark Walport, who had joined the Trust from Imperial College, attempted to view an article that was co-funded by the Wellcome Trust, only for his computer screen to show that, as the Wellcome Trust was not a subscriber to said journal, he was denied access to the article. This incident raised the following issues at the Trust.

Firstly, if this article was only possible thanks to money from the Trust, why was it not possible for the Trust to obtain a copy of it? Secondly, what is the point of funding research if no one can read the results? And lastly, wouldn’t it be easier to track outputs and impacts if articles were made available to everyone?

It was this incident that prompted the Wellcome Trust to include dissemination costs as part of their research funding costs. This is interesting to consider as a researcher, since dissemination costs are often treated as separate from the amount of money requested for doing research. It has resulted in the policy that all papers funded or co-funded by the Wellcome Trust are now made freely available (within six months) at PubMed or the UKPMC repositories.

Here we can see how technology is having an impact not only on the way we do our research but on the way in which our research can now be disseminated. The implications of this are manifold.

On a personal level, as Robert pointed out, if more articles from peer-reviewed medical research journals were made available, patients suffering from various illnesses would be able to access proper research rather than rely on wacky ‘cures’ found via search engines. On a macro level, the growth of online open access journals from respected publishers such as Sage (with SageOpen), Springer (SpringerOpen) and Wiley (Wiley Open Access) shows that there is increasing recognition of the importance of open access journals. There are obviously issues about payment and copyright, but these are issues over which the Wellcome Trust has been in negotiation with various journals, with the result that in 2009, 98% of articles attributed to the Wellcome Trust were published in journals that were ‘Wellcome compliant’ with regard to copyright.

What does this mean for researchers? Kiley pointed out that the Study of Open Access Publishing, funded by the European Commission, found that almost 90% of researchers felt that open access journals would benefit either their research or their research field.

At the crux of the Open Access issue is perhaps the question: why do we do research? Very often, and I’m guilty of this too, I have mostly done research to examine an issue that I find interesting. I do consider who might read this research; what I rarely do, however, is consider how interested readers might actually be able to access the work that I do. Seeing that I work in a field where my research might be of interest not only to academics based in Higher Education institutions but also to government bodies and policy institutes, these are issues of Open Access that I need to consider in future!

By Lorraine Lim

Session 6 Reflections: Researching On Screen

Dr Stamatia Portanova

Media Studies Lecturer Dr Stamatia Portanova opened her talk with a clip from Michelangelo Antonioni’s 1966 film Blow Up, which concerns the discovery of a murder via the continual enlargement of a set of photographs. Dr Portanova highlighted how the protagonist in the film was able to conduct his ‘research’ via a screen, and used that idea as a launching point for the main discussion of her talk: the shift towards, and multiplicity of approaches to, ‘screens’ in the digital age.

Going back to Antonioni, she pointed out that in the film the photographs were a form of representation: a form of truth. Here the photographs examined by Thomas (David Hemmings’ character) in the film told him, and the audience, something about the real world. But as researchers now know, with the advent of feminist studies and post-colonial studies, film is not a neutral medium. Film has a point of view too. What was to follow in Dr Portanova’s talk was a fascinating whistle-stop tour of how ideas about the ‘screen’ have changed since 1966, questioning our relationship with the ‘screen’ and how we use it, not only in our day-to-day lives but even in the way we move through them.

One of the biggest changes is that the ‘screen’ is now ubiquitous. Simply put, ‘screens’ are now everywhere, from ATMs to supermarkets to giant billboards. However, while we used to be content just to look at the screen and interpret what we saw, very much like the protagonist in Blow Up, we now expect to be able to manipulate the very images we see. Dr Portanova highlighted this notion with a clip from Ridley Scott’s film Blade Runner, where we see Harrison Ford as Deckard manipulating an Esper machine in order to uncover further details in a photograph, to help him in his hunt for rogue replicants. Made in 1982, the film accurately predicts how we manipulate photos in the same way today, thus highlighting a key change in our relationship with the ‘screen’. We no longer accept what is shown to us; we expect to be able to change these very images.

What is shown is no longer a faithful representation of the world; we can now construct our own reality.

Drawing upon the work of media theorist Kevin Kelly, she said this ability to ‘play’ with images is akin to gaining a new language. With the advent of software and websites such as Photoshop and YouTube, everyone who has access to these tools has the chance to learn this new language. The implication is that the notion of ‘authorship’ is different now. There is no longer the idea of a single author but rather a network of creators. This brings up issues of copyright, for example, in forms that did not exist before the digital age.

The confluence of all these changes, it seems, is represented in the giant permanent screens we see today in public squares, such as the one a participant pointed out exists in Walthamstow: people watch, or even interact with, these screens and their display of images, and by the very fact that these screens exist, we change our behaviour towards them.

My reflection here covers just one part of Dr Portanova’s expansive talk; however, it was the part that stuck with me after the session: how all this seems ‘normal’. Perhaps I’m a bit wary of the rapidity of these changes, but there seems to be no ‘defining’ moment in my life at which digital technology changed my life or changed the way I do something. It is something that has always been around and is part of the norm. My parents will remember the day they first saw a TV, or saw something on TV, and can perhaps recount how that moment changed their behaviour, but I would be hard-pressed to name a similar moment with regard to using a computer or a camera or a mobile phone. This almost ‘gentle’ creeping of technology into our everyday lives, influencing everything from the way we think to the way we move, deserves more attention, and the work that Dr Portanova and her colleagues do certainly attempts to provide it.

I’m not saying that we should abandon digital technology altogether; the advantages, for the moment, look to me to outweigh the disadvantages. Perhaps what I’m saying is that the next time I marvel at how technology allows me to do something amazing, I should also think about how technology has just changed me in some minute way that I’m not quite aware of yet, or even sure I can describe…

By Lorraine Lim

Session 5 Reflections: Harnessing the Power of Big Data and the Mundanities of Archival Research

Week 5 was a mini-showcase of the exciting research currently being done by lecturers at Birkbeck. Our first speaker, Senior Lecturer Dr Dell Zhang, opened his talk, Harnessing the Power of Data, with the quote ‘In God We Trust; All Others Bring Data’, attributed to Dr W. Edwards Deming. Over the next hour, Dell proceeded to show everyone present how our daily interactions on the web generate collections of data that can affect us in a variety of ways. Closer to home, Dell discussed how research in Computer Science is changing due to the way data can now be collected. Where researchers used to formulate a theory or model and then conduct the necessary fieldwork to collect data to support or disprove their ideas, there is now a trend for Computer Scientists to start their research with large data sets, whereby, by following the data, they are able to extrapolate ideas. In this era of big data, Dell is convinced that more is different: with enough data, the numbers can speak for themselves. He highlighted how social scientists used to analyse small-scale networks to examine social interaction, but with Twitter and Facebook, social connections can easily be observed because these social interactions are recorded. Hence social scientists can derive models that would not have been possible with small data sets.

Dell also highlighted how big data can help solve some real-world problems, such as the creation of effective spam filters. He pointed out that while spam on the web has been increasing, users of Google’s mail programme Gmail report a spam rate of less than 1%. He argued that this is because Gmail has access to a vast amount of data via the e-mails its users receive. The sheer number of e-mails produces a large data set from which an effective spam filter can be built: the programme is able to estimate the probabilities of key words appearing in spam and filter messages out accordingly. As long as there is more data, performance will improve, and since the very act of engaging on the web generates data, the improvement, as Dell pointed out, is limitless.
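A toy version of the keyword-probability idea Dell described is the naive Bayes classifier sketched below. The four training messages are invented purely for illustration, and this is a sketch of the general technique, not Gmail’s actual filter:

```python
import math
from collections import Counter

# Tiny invented labelled corpus -- real filters learn from millions of messages.
spam_msgs = ["win cash prize now", "cheap prize offer now"]
ham_msgs = ["meeting notes attached", "lunch at noon today"]

def counts(msgs):
    """Count how often each word appears across a set of messages."""
    return Counter(w for m in msgs for w in m.split())

spam_counts, ham_counts = counts(spam_msgs), counts(ham_msgs)
vocab = set(spam_counts) | set(ham_counts)

def score(msg, word_counts, n_class):
    # Log-probability of the message under one class, with Laplace
    # smoothing so an unseen word doesn't zero the whole probability.
    total = sum(word_counts.values())
    lp = math.log(n_class / (len(spam_msgs) + len(ham_msgs)))
    for w in msg.split():
        lp += math.log((word_counts[w] + 1) / (total + len(vocab)))
    return lp

def is_spam(msg):
    """Classify by comparing the message's probability under each class."""
    return score(msg, spam_counts, len(spam_msgs)) > score(msg, ham_counts, len(ham_msgs))
```

With only four messages the probability estimates are crude; the point of Dell’s argument is that exactly this arithmetic becomes remarkably accurate when the word counts come from billions of messages.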

After Dell’s talk of technology and data, it felt like a step back in time with our second speaker, Dr Jose Bellido, a Lecturer in Law, and his talk ‘Mundane Research Issues: Notes on Legal Archives and Copyright’, concerning research where one usually has to visit an archive in person to find the information one needs. For Jose, his data is not ‘placeless’ but rather linked to a specific time and place, and often not yet digitised. Instead of search engines, researchers get indexes or cards. Jose’s research might sound almost archaic, but there were clearly some benefits to the lack of ‘technology’, so to speak. He pointed out that while technology gave one immediate results, archival research was akin to fishing: you never know what you might get! Misspellings on index cards might lead you onto a different path, or hints dropped by the person in charge of the archive can open up a new avenue of research. There is an element of chance or luck in this type of research which might not be so easily present if one only worked with large data sets.

As Jose pointed out, he is more interested in how data emerges.

What was perhaps most striking about Jose’s talk was how decisions have to be made for data to be collected, and if one assumes that a certain type of information is not valuable, there is potential for this information to be lost. Hence archival research is still important today, because what is not recorded can be just as important as what is. Discussions on how a law is enacted might be more important than the enactment of the law itself! Personal archives which contain memorabilia of everyday life can often shed new light on persons or events! This is not to say that Jose’s area of research shuns technology altogether. His research on copyright can be found at www.copyrighthistory.org, and he pointed out how the UK Supreme Court now allows for the recording of up to 20 hours of the sitting of certain cases, which can be shown for teaching purposes. On a personal level, being able to take digital images of records so that one could read them at leisure at home was one of the most direct ways in which technology had impacted on Jose’s research. His talk was particularly useful for students who were worried about the reproduction of images or material from archives, and here Jose was a wealth of information. Drawing on his experience of doing research in Cuba and Argentina, he was able to offer valuable advice on the standard problems researchers in this field have to consider. While it might seem that Jose’s talk would have little connection with Dell’s, it became clear quite quickly that these two methods of conducting research, while seemingly eons apart, shared a remarkable similarity! Dell had remarked earlier in his talk that data in itself was meaningless, a lot of noise; what was needed was context to make sense of the data, and what technology could do today was create algorithms based on key words so as to filter out the ‘noise’ and make sense of the data.
Jose said this correlates with archival research, where researchers too have to work out what the key words are in archives to find the relevant information, and what words to avoid because they lead to dead ends. What both these methods have in common is a need for knowledge to make the right choices.

If one of the end goals of conducting research is to add to existing knowledge, it was extremely useful to reflect upon how Dell and Jose are going about using different technologies to produce this knowledge, drawn from collections of data or archives.

By Lorraine Lim

Calling participants for Guardian HE Network ‘Live Chat’ on academia and the Internet

We (Sophie Hope, Lorraine Lim and Scott Rodgers) have been invited to partake in a ‘Live Chat’ on Friday 3 June, from 1-4pm, very much related to the aims of this workshop series. The live chat is hosted by the Guardian Higher Education Network, and addresses the topic: ‘Breaching the digital divide: How could HE better use the internet?’

Sophie and Lorraine have graciously agreed to let Scott serve as a representative on the panel. But the main thing about the chat is its participants (the panel as such is mainly there to ensure designated discussants for the duration of the live chat). So, please do consider joining in at some point on 3 June between 1pm and 4pm. To partake, you simply need to visit the above link and add comments to the article (you will need to register on The Guardian website, but it’s very simple).

Hope to ‘see’ some of you there!

Session 4 reflections: Creative Research Online

Paul Baran's diagram of communication networks (1964)


Art historian Charlotte Frost and artist Ele Carpenter joined us for session 4 to discuss their recent research. It proved to be a session jam-packed with information; here are just a few of the things I managed to jot down!

Charlotte began by talking us through her work on the impact of the internet on art and knowledge practices, specifically virtual environments for art history and how different kinds of conversations emerge from discussing, presenting, editing and distributing your work online. She has recently set up the PhD2Published site, which offers advice on publishing for early-career academics and Art Future Book, a research project looking into the future of academic publishing.

Charlotte mentioned a whole plethora of examples, some of which were:
In Media Res, a collaborative approach to online scholarship through themed weekly debates and presentations.
Voice Thread, a site where you can share conversations around slideshows, videos etc.
Networked Book, a site where you can comment, revise and translate chapters of a communally edited book, developed by the Institute for the Future of the Book.
Open Humanities Press, a site which makes peer-reviewed literature available, free of charge.

She also presented a screenshot of furtherfield.org’s Visitors Studio, which she referred to as her muse. As Charlotte said herself, it’s a shame we couldn’t have a play with Visitors Studio ourselves. I’ve had a go with a group in a workshop before: you get to mix and match audio and visuals in an online environment, collaborating with others online without physically meeting (Charlotte called it ‘real time internet jamming’).


Ele Carpenter followed with a presentation about her recent projects and her interest in distributed networks as a medium. Her work explores the movement between social and online spaces and the possibilities, pitfalls and languages of social relations that form in these spaces (on and off line). She explained her ‘open’ methodology, which works across craft practice and software production, for example in her Open Source Embroidery project, which involved the collective making of an HTML Patchwork (with, amongst others, users of Access Space, Sheffield). Ele referenced Ada Lovelace’s writings on the Analytical Engine, Charles Babbage’s 1830s design for a mechanical computer, as well as a number of current initiatives, such as Sketch Patch, the HumLab in Sweden and the Budapest Open Access Initiative. We discussed the process and maintenance of craft and code – that both need constant updating and reworking, and that there is a symbiotic relationship between the object and the process (Ele referred to how people often never finish their knitting, but undo and redo it over time). Both coding and crafting take time and skill, and to a certain extent require the maker/programmer to slow down – both require ‘close reading’. Ele mentioned she has met a lot of patchworkers on her travels who are also mathematicians, but she has also had women walk out of HTML patchworking sessions because they felt a return to embroidery was an oppressive backwards step.

Ele and Charlotte’s presentations led on to discussions about the ethics and practicalities of ‘open’ research, what this really means and the issues of research as tourism. To what extent does this open up a process for others to engage in, and to what extent does it shut things down?

By Sophie Hope