Session 9 reflections: Listening to and sounding soundscapes

Dr John Drever

For the penultimate workshop of the series we were treated to a session by John Drever, a sonic artist and sound researcher based at Goldsmiths, University of London. If one had to choose a single keyword for the session, it would have to be the neologism ‘soundscape’. Attending to soundscapes – which might be defined as the ambient sound or background noise as-it-happens in particular environments – demands, John seemed to suggest, a rather different approach than the typically generic treatment of sound.

Approaching soundscapes or sonic environments has a long and diverse tradition, and John cited several interesting examples. Humphrey Jennings’ Listen to Britain, a 1942 propaganda film without narration that depicted life in England during the Blitz, was an example of how the use of site-specific sound could introduce a totally new dimension to film. Pierre Schaeffer’s musique concrète often involved the recording of environmental sound, its division into snippets, and the use of looping and other methods to create innovative musical rhythms. Ludwig Koch was renowned for recording a huge array of animal sounds, leading not only to a better appreciation of wildlife, but also to the founding of the modern sound archive.

The above explore soundscapes in that they simultaneously privilege both site- and sound-specificity. And for this reason, they contrast in particular with what John called ‘generic sound’ – emblematised above all by the pervasiveness of sound effects, for which John again gave some excellent and often intriguing examples. Thanks to vast sound effect catalogues, such as those produced by the BBC’s Radiophonic Workshop, we usually expect when watching television that, for example, a midnight visit to a cemetery invariably involves a hooting owl, or that a day visit to the seaside will involve the sound of seagulls. We also expect that, when a large passenger airplane touches down on the runway, we will hear squealing tires, even though we don’t tend to hear this sound when travelling by air in real life. Indeed, the first use of this squealing tire sound was recorded not from airplane tires but from a braking car. Then there is the ‘Wilhelm Scream’ which, even if we don’t know it by name, we’ve nevertheless likely heard (it’s originally from the film Distant Drums) many times over in television and film. What’s more, we even have professional creators of generic sound: Foley artists.

While it is probably obvious why sound effects or generic sounds are so frequently used in film, television and radio, their downside, John argued, is that they create an assumption that all the sound one would ever need is available (for purchase) amongst the many sound effects catalogues. This creates a certain irony for sound engineers: people who are deeply interested in sound, actual sound, yet who are normally engaged in working with sounds of a rather less interesting pedigree. John was suggesting, I think, that this speaks to how we tend to regard sound more generally. There is a tendency to treat sound instrumentally, typically as an add-on to the primacy of the visual. What this means is that we tend to ignore or downplay the reality, depths, dimensions, and nuance of sound and sonic environments.

One thing I loved about John’s session was the way he periodically reached into a large bag he’d brought and unveiled for us a new piece of sound kit. Mainly, this ‘kit’ consisted of different iterations of microphones. Actually, John was slightly apologetic about doing this; as he remarked, he didn’t want to be seen as advertising different tools of the sound recording trade. But what was interesting here was that he wasn’t just highlighting a range of technical devices for recording sound, but also, by proxy, illustrating the many dimensions of researching soundscapes. We began, for example, with the highly directional ‘shotgun’ microphone, often used in film and television, and in sporting events and other field recordings, to focus on a narrow source of sound and cancel out sound coming from other angles. We were also introduced (most of us for the first time) to binaural microphones, which look rather like small headphone buds, making them ideal for covert recording. Aside from being covert, however, they also record sound much like the human ear hears it, making them ideal for making recordings to listen to with headphones. And things got more and more unfamiliar, from contact microphones, which sense audio vibrations through objects or masses such as water (John played us a remarkable underwater recording of shrimp), to telephone pickups, cheap devices available at places like Maplin or Radio Shack that allow for the recording of phone calls (and of course phone tapping) but also pick up electromagnetic fields. As we discovered, my mobile phone has a most interesting sonic landscape of its own! In each of these examples, we were introduced not only to means of recording sound, but also to the cultural and spatial dimensions of sound in our everyday worlds.

One of the most useful devices was, as John described it, perhaps one of the more ordinary and affordable (his Zoom H2 Handy Recorder). This device allows recording of 360 degrees of sound across 4 channels, and its portability and ease of use has opened up, amongst other things, some very interesting preliminary research into the proliferation of high-speed hand dryers such as the Dyson Airblade. As John noted, most think of these dryers as a very positive new development; rather than using heated air, they use a much more effective thin layer of cold air at high speed (about 400 mph), which additionally has benefits in terms of energy consumption. Yet they have a major if unacknowledged drawback: a significant increase in noise pollution. These devices are extremely loud, and once placed in a public toilet, can reach decibel levels that are normally considered unacceptable – particularly for those with special needs (e.g. people with dementia or visual impairments). In highlighting this emerging research, John pointed out that doing research on soundscapes is about more than generally exploring the nuances and specificity of sound environments. It is not simply the terrain for theorists of sound, or for audiophiles; it is also an area with real potential to open up new areas of pressing ethical and policy concern.

By Scott Rodgers


Session 8 reflections: Using qualitative data software

Gareth Harris

For a change, in Session 8 we were based not in our usual seminar room, but a computer lab in Birkbeck’s main Malet Street building. After all, one guiding theme of Session 8 was to provide an introduction to a qualitative data analysis software package – NVivo. So, as we waited for things to commence, amongst rows of PCs, facing Windows XP login boxes, we might have been forgiven for thinking this workshop session would be little more than an introductory overview of a software application.

And we would have been wrong. Before getting into anything of the kind, Gareth Harris instead took us on a highly interesting and thought-provoking journey through some of the epistemological issues and debates associated with the broader world of computer assisted qualitative data analysis software (CAQDAS – also, see the CAQDAS Networking Project). As it turns out, CAQDAS is much more than a set of mundane research tools; in many ways it sits at the fulcrum of contemporary debates about the interface of research and technology. Gareth kindly provided the slides he used in this journey, which augment much of my reflections below, as well as provide some useful links.

Gareth began by pointing out that CAQDAS has proceeded through three fairly distinct generations:

1. Search and retrieval of text

2. The coding of multiple textual fragments, which can then be retrieved as coded themes or categories

3. Theory-building, in other words, looking at the relations between categories (e.g. through the use of visualisation tools) in order to build higher-order classifications and categories.
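To make the second generation concrete, the basic operation of coding and retrieval can be sketched in a few lines of code. This is a toy illustration of the principle only – the class, method names and example fragments below are my own invention, not how NVivo or any other CAQDAS package is actually implemented:

```python
# Toy sketch of second-generation CAQDAS: attach codes (themes) to
# fragments of text, then retrieve all fragments sharing a code.
from collections import defaultdict

class CodedCorpus:
    def __init__(self):
        # maps each code to a list of (source document, fragment) pairs
        self._codes = defaultdict(list)

    def code(self, source, fragment, *codes):
        """Attach one or more codes to a text fragment."""
        for c in codes:
            self._codes[c].append((source, fragment))

    def retrieve(self, code):
        """Return all fragments coded under a given theme."""
        return self._codes[code]

corpus = CodedCorpus()
corpus.code("interview1.txt", "The hand dryer was deafening.",
            "noise", "public space")
corpus.code("interview2.txt", "You can't hear yourself think in there.",
            "noise")

for source, fragment in corpus.retrieve("noise"):
    print(source, "->", fragment)
```

The third generation would then build on structures like this, examining how codes relate to one another (co-occurrence, hierarchy, visualised networks) rather than simply retrieving fragments – which is, in effect, the step that aligns the software with inductive theory-building.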

In the 1990s, there were quite a number of critiques of and debates about the uses of software for qualitative research, and Gareth pointed out that these were pitched almost entirely in relation to the second generation – using software for simple data coding. As a result, there has been little debate, at least so far (Gareth estimates it may indeed kick off soon), about the emergence of third generation CAQDAS – using software for more complex theory-building. What is so interesting about this third generation, Gareth noted, is that it aligns software like NVivo ever closer to the inductive approaches of Grounded Theory. This, of course, is not necessarily a bad thing, but it does highlight the implicit incorporation of a fairly specific inductive methodology into a software package. This at least potentially raises rather stickier issues than the second generation, which is little more than a faster and more secure way to code data. In effect, it did little more than computerise what one would otherwise have achieved through the use of such materials (technologies?) as a stack of printed photocopies, multi-coloured highlighters, a pair of scissors, and a glue stick.

In this context, Gareth highlighted a very important question, harking back to the overall themes of the workshop, and particularly our introductory session. Does CAQDAS have its own ‘effects’ (positive and negative) on our research? In other words, is a CAQDAS package like NVivo a neutral tool of our autonomous methodological actions, or does it have agency and channel our research in some ways? On the one hand, one response is that it is indeed primarily a tool for our research practices and decisions. This is a response often made to counter critiques of CAQDAS – which suggest that it is mechanistic, decontextualising, even a fetishisation of coding (see Gareth’s slides for more) – with an argument that all this really depends on how CAQDAS is actually used. On the other hand, however, perhaps it is naïve to take this claim too far. After all, CAQDAS must channel research, in the same way that writing an essay using word processing software entails a fundamentally different process than writing by hand (e.g. one continuously edits, rather than working in more fixed stages). So, CAQDAS might be seen as a technology with certain capacities and constraints, but one which also comes into contact with a researcher’s know-how, practical work and ethics in doing research. And in this process of contact, Gareth seemed to suggest, we find a tool which allows for research practice to potentially be much more transparent and accountable than that based on paper (rather different, Gareth emphasised, from any erroneous claim that CAQDAS makes qualitative research ‘reliable’, a concept with strong connections to positivism).

It’s worth mentioning an interesting tidbit Gareth pointed out early on in the workshop: there is a rather good chance that in 5 or 10 years’ time the more extensive and thorough literature reviews will be conducted using CAQDAS. Assuming one has a library of electronic materials, this is easy to see. It would not only provide a very effective way to code, organise and retrieve text fragments of interest across several sources, but would also provide interesting ways to compare how authors have dealt with similar concepts, to visualise connections between groups of scholarly communities and their ideas, and much more. Like any technological change, there would be drawbacks; but the advantages for large-scale literature reviews, on complex subjects, seem quite clear.

Now, you’ll remember of course, we were in the lab, ready for ‘training’. And we did spend some time putting NVivo 9 through its paces. Certainly, participants had a chance to gain some initial exposure to the software and see its overall architecture. But having had such a good and intellectually interesting overview, I’d expect most participants likely left thinking about their exposure to NVivo within a much bigger picture of research practice and technology.

By Scott Rodgers