Preview: “Mapping the Long Women’s Movement”

We’re quickly approaching the official launch of our DH Press Pilot Project, “Mapping the Long Women’s Movement,” a collaboration with the Southern Oral History Program.

This project visualizes a collection of about fifty oral histories conducted with Appalachian women whose activism centered around space and place. Given the connection between space and activism, we started out trying to map excerpts of each oral history to help visualize the interconnectivity of these women’s experiences. In the process, we have been experimenting with new ways of accessing oral history collections, ways that bring the audio and text transcript together and that allow for new types of exploration.

Challenges and Possibilities

As with any DH project, the intellectual work of scoping a project and building a dataset is one of the most laborious, and most important, tasks.

Each marker color represents a conceptual grouping across the entire collection.

Working with a team of students at SOHP and the DIL, we manually went through each and every transcript, identifying historical concepts to associate with each audio excerpt. The resulting set of concepts exceeded one hundred, so we then had to organize these concepts into related groupings, using a traditional “parent-child” relationship to implement the groupings. These groupings will drive the map-based visualization; each concept (such as “race and the women’s movement” or “women’s health”) will appear as a unique marker color on the map.
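As a rough illustration of how such parent-child groupings might drive marker colors, here is a minimal sketch. The group names, colors, and most concept labels are hypothetical; only "race and the women's movement" and "women's health" come from the project itself:

```javascript
// Hypothetical parent-child concept groupings. Group names and colors
// are illustrative, not the project's actual taxonomy.
const conceptGroups = {
  "women's movement": {
    color: "#d62728",   // marker color shared by this grouping
    concepts: ["race and the women's movement", "organizing"]
  },
  "health": {
    color: "#1f77b4",
    concepts: ["women's health"]
  }
};

// Look up the marker color for an excerpt tagged with a given concept.
function markerColor(concept) {
  for (const group of Object.values(conceptGroups)) {
    if (group.concepts.includes(concept)) return group.color;
  }
  return "#7f7f7f"; // fallback for untagged excerpts
}
```

Each child concept inherits its parent grouping's color, so every excerpt tagged "women's health" would share a marker color on the map.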

Perhaps one of the most innovative aspects of this project has been in linking audio and text—see this early demo to get a sense of the way you can move around in the text and audio simultaneously. We devised this approach so that we could avoid slicing each interview into many smaller bits. We believe it is important that users experience these interviews in context, rather than simply listen to a short segment here and there. My earlier blog post explains the process we developed to facilitate this user experience.

One of the biggest unanticipated problems we faced in accomplishing an audio-transcript linkage was the size of the audio files. Because we were not able to stream the files, we were experiencing browser performance issues that delayed our launch.

The Streaming Problem

Streaming a media file means that the file “is constantly received by and presented to an end-user while being delivered by a provider.” This allows you to begin watching or listening to something while the file is still loading. The alternative is to make the user wait for the entire file to download before playback can begin. This can be extremely problematic for large files, as load time can be quite significant and the process can prove taxing for some browsers. (Learn more.) 

In our initial implementation of the project, we pointed to the audio files that “live” in the UNC Library’s catalog, which does not currently provide streaming services. So any time users wanted to listen to an audio file, they had to wait for the entire file to download. Unfortunately, this download occurred each and every time a user clicked a marker, even a marker associated with an interview that had already been downloaded. One click too many, and the browser would inevitably crash. And a crashing browser is a surefire way to deter users from a DH project.

Our Workaround

While we are still working on a long-term, sustainable solution to the streaming problem, we have begun playing with a short-term workaround: streaming audio via SoundCloud, a Web 2.0 audio-sharing platform. Because SoundCloud is a streaming service, we are able to deliver our audio content without taxing the browser.

Furthermore, we can “seek to” different points in the audio file based on predetermined excerpts. Simply by telling the player the start and end points of an excerpt, SoundCloud will take the listener right there. This, in turn, lets us link our timestamped transcript to that streaming audio “clip,” giving users simultaneous access to the sound and text.
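The linkage above can be sketched in a few lines of JavaScript. The excerpt timestamps below are hypothetical, and the player call is shown only in a comment, since it depends on an embedded SoundCloud iframe in the browser:

```javascript
// Convert a transcript timestamp like "00:12:05" into milliseconds,
// so an excerpt's start point can be handed to a streaming player.
function timestampToMs(stamp) {
  const [h, m, s] = stamp.split(":").map(Number);
  return ((h * 60 + m) * 60 + s) * 1000;
}

// Hypothetical excerpt record drawn from a timestamped transcript.
const excerpt = { start: "00:12:05", end: "00:14:30" };

// In the browser, the SoundCloud Widget API can then seek to the
// excerpt's start point (sketch; assumes an embedded player iframe):
//   const widget = SC.Widget(document.querySelector("iframe"));
//   widget.seekTo(timestampToMs(excerpt.start));
```

Because the transcript and the audio share the same timestamps, clicking a passage of text can seek the player to the matching moment, and vice versa.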

Here’s a demo of streaming audio embedded in a marker info bubble. We hope the final version of the project will function similarly:

[vimeo]https://vimeo.com/68229276[/vimeo]

Next Steps

Once we launch the project, we will work hard to bring new visualizations online, including a timeline and a concept-based tree map.

Look for the launch, which should be coming very soon!