Brandon Locke

History | Digital Humanities | Libraries | Gender

Yes We Can. But Should We? (Cultural Heritage Edition)

Over the last few days, a post entitled “Yes We Can. But Should We? The unintended consequences of the maker movement” has been circulating around my own filter bubble. I’d recommend reading it – it’s a nice critique of the disruption mythology as it surrounds the maker (and specifically the 3D printer) movement – but it actually has little to do with my interests here.

I’ve recently begun working in the History Department at Michigan State University as Director of LEADR, the Lab for the Education and Advancement in Digital Research. The lab is an incredible space for students to learn more about digital research techniques in the social sciences and humanities, and it’s well stocked with most any kind of equipment students would need to conduct their own research and create digital projects. Of course, every time a student or faculty member comes into LEADR I get the same response with one item in particular:


“So, what are you going to do with THAT thing?”

I don’t blame them; I asked the same thing when I first heard about humanities labs at Western and UVic getting 3D printers. If it’s a student who asks, my first response is usually, “I don’t know… you tell me?” But I generally follow through by describing critical making, and I tell them about UVic’s Early Wearables Kits and Devon Elliott’s work on magic. I always follow it up by telling them about all of the museums that are creating and making available digital scans of their objects. Lately, however, I’ve been thinking of another pedagogical process that could take advantage of the use (or disuse) of 3D printers.

A quick thought experiment:

We all know that it would be wrong to grab an object from a museum display, throw it on the ground, and stomp on it. To start with the obvious, it’s likely one-of-a-kind or extremely rare and worthy of some protection, and it’s not yours to destroy. But are there other reasons? For example, if you were to cause harm to a Killer Whale Hat, a sacred object that once belonged to the leader of the Tlingit Dakl’aweidí clan of Alaska, this would also be an act of disrespect and aggression toward the Tlingit Dakl’aweidí people. So, you could eliminate the first two issues by printing a replica of the Killer Whale Hat (3D plugin may be needed). But then what? I feel that it’s abundantly clear that any desecration of it would still be a violation of this third issue. However, a closer examination of customs surrounding sacred or significant objects may reveal, or at least invite discussion of the possibility, that this meaning is really only attached to the original object.

So, the act of printing (or scanning!) an object should be preceded by a discussion of what replication of a cultural heritage object actually means, in addition to a discussion of how the object should be treated if it is printed. Obviously, there are much more nuanced decisions to be made than whether or not it’s appropriate to stomp on something. By discussing this replication process, students would not only be forced to investigate and analyze an object’s role within a culture and a society, but also to think more critically about what particular traits make the object significant and meaningful. I would love to hear from anyone who has conducted studies or taught about the materiality of cultural heritage objects – I hope to bring this aspect into the classroom in as thoughtful and productive a manner as I can.

On the flipside, if you’d like your dog to have a dolphin skull fossil as a chew toy, you have my blessing.

‘Harvesting’ Community Knowledges: Crowdsourcing and Community Representation

I presented this paper at the James A. Rawley Annual Conference in the Humanities on March 15, 2014 in Lincoln, Nebraska. Disclaimer: I worked as project manager for History Harvest for over a year, and was involved in two of the History Harvest events (I participated in my third after this presentation). I want to clarify that I am no longer officially associated with the project, and I do not speak on its behalf, nor on the behalf of the UNL History Dept. The slides for this talk are also available.

I want to open with one of my favorite ongoing series on the web. Every few months, Slate covers an American story through the same lens that American journalists use to cover news stories that occur outside of the US and Europe. The pieces critically and often demeaningly question some very basic tenets of American culture, highlighting the social, cultural, and political lenses through which we view others without even being aware of it. Librarians, archivists, and curators have a long and tumultuous relationship with these biases and the way they present truth. Curators are faced with the impossible task of relating truths about culturally constructed topics that have innumerable meanings. Although museums in the postmodern era have made tremendous strides to speak from the perspective of the cultures represented and to move away from western viewpoints and colonialist narratives, a central problem remains. In nearly all cases, the descriptions, metadata, and contexts surrounding objects and exhibits, as well as their inclusion in the first place, are created by a staff of elite experts writing from the perspective of the ivory tower (Srinivasan, 2009a, 667). Cultural heritage institutions, operated by dominant cultures, often miss or intentionally ignore the significance and meaning of objects and histories in other cultures and communities. The result of these practices is collections that represent communities only through the paradigm of the dominant culture, and that infuse shared memories with the ideological, political, and cultural influences of the present dominant community (Somerville and Echohawk, 650). UNESCO has highlighted this as an issue throughout the world, and has encouraged projects over the past two decades to reflect diversities and avoid what it calls a “world without memories” (Somerville and Echohawk, 651).

Cultural heritage institutions have largely tried to combat these problems by better educating their experts, understanding and confronting their biases, and by consulting community leaders. Again, these attempts have significantly improved collections and descriptions, but they still rely on authorities and (in most cases) standardized language to describe and define histories, and often do nothing to improve the collection to better reflect the community. Like decisions of significance and inclusion, the descriptions and the language in the metadata are nearly always written from the standpoint and language of the dominant cultural elite. The issue of problematic controlled vocabulary has been discussed for decades, since the late 1960s when “radical cataloger” Sanford Berman began criticizing the Library of Congress for reflecting only white, Christian, middle-class viewpoints, and for continuing to use terms considered offensive to the very people they were meant to represent. Although Berman’s “radical cataloging” gained traction and has made significant progress, controlled vocabulary, by its nature, is going to be insufficient to represent all things to all people and to keep pace with language as it evolves. The practice of subject heading assignment is what Boast et al. describe as an “…imposition of the efficiency driven priorities of the public institution upon its diverse publics” (Boast, Bravo, and Srinivasan, 397). The efficiency and usefulness of subject headings do have a great deal of value. Rather than throwing them out, I believe libraries should continue to improve them, while also supplementing them with other methods of collecting terminology and knowledge.

In recent years, many cultural heritage institutions have turned to technology and the participatory nature of the so-called “Web 2.0” to improve descriptions and add community knowledge. Users can add tags to materials using whatever language they feel is appropriate. These collaborative tags, known as folksonomies, can enable better search functionality by broadening the metadata terms and injecting the vocabulary that users are actually likely to use. There have been a number of successful crowdsourcing projects that have added valuable metadata to objects with sparse records, but thus far, folksonomies have been generally unsuccessful in adequately representing minority communities. While tagging does allow communities to apply their knowledge to the collection, their contributions can be lost among the tags of members of the dominant community (Bates and Rowley, 445). Users can also add tags which lack specificity or constructive knowledge, or which are not useful to other users. Folksonomies can be useful if specific knowledge communities are targeted and solicited for input, but simply opening up the collection to tagging will likely result in tags that represent only dominant groups.

Many museum informaticists recognize that the current model is fundamentally flawed because of its inability to represent multiple knowledges, and are calling for fundamental changes to museums and cultural heritage institutions. Srinivasan and his coauthors, for example, cite a number of progressive Web 2.0 projects that bring multivocality into core documentation by “…fundamentally changing the philosophy with which these institutions approach documentation and description” (Srinivasan, et al., 2009b, 275). Attempts to do so have primarily involved partnering with community leaders to document and represent objects from the perspective of the community. While this is certainly a vast improvement over current methods in representing localized and culturally-specific knowledge, I feel that this model has some shortcomings worth examining. By reaching out to authorities within a community, heritage institutions are only getting knowledge from those in authoritative positions. A true representation of a community would include the multivocality within its population, and would give individuals the opportunity to voice their own histories. There has yet to be a study on the ways in which a truly crowdsourced collection can contribute to the understanding and representation of diverse communities.

History Harvest -

For the past several years, History Harvest has gone out into selected communities and asked individuals to bring in objects of significance to contribute to the historical record. The significance of these objects is determined by the contributors, and an oral history interview, often revolving around the objects the contributor is sharing, is conducted. History Harvest’s collection process is worth investigating through the prism of the shortcomings in cultural heritage institutions. A crowdsourced collection, combined with metadata derived from rich contextual knowledge, provides the ability to represent items with the knowledge of the contributing community, rather than the knowledge of the institution and the dominant culture (Srinivasan, et al. 2010, 766-767). It is essential that community stories be constituted from a plethora of first-hand knowledge and stories that make up the collective community narrative.

Only by knowing their identity as a function of their unfolding biographical history, and their engagement with multiple knowledge groups, can [objects] be set within the dynamic and expanding negotiations that constantly work to constitute the knowledges of which they are an active part. If a dynamic and situated knowledge is discredited, overruled, or abandoned, it does not lose its validity, for it retains its place and time in the local negotiations that are knowledge. If, however, it loses its associations, it becomes uprooted, displaced, and therefore severed from the people and places that validate its social meaning (Boast, Bravo, and Srinivasan, 400-401).

Warren Taylor's Penny

One example of the benefits of a crowdsourced model of collection development can be seen in the 2012 History Harvest. Warren Taylor, a History Harvest contributor from the predominantly African American North Omaha area, brought in a “liberty penny” owned by his great-great-grandmother. It was a family heirloom, given to him by his great aunt along with a few other objects. The penny was obtained by his great-great-grandmother while she was enslaved in the south, and, given the rarity of an enslaved person having money combined with the ‘liberty’ message in the penny, the object had value and meaning within his family. The penny also serves as a symbol of the Great Migration, which brought thousands of African Americans, including Warren Taylor’s family, northward to cities like Omaha. In this instance, the object itself, devoid of the contributor’s story and the context surrounding it, holds little value or instructive power in a cultural heritage collection. However, by documenting Mr. Taylor’s story and studying the intertwined biographies of the penny and his family, the public is able to understand a bit more about the experiences of African Americans directly from the source.

The biographies of objects and the meanings they hold for individuals are not always excluded from the historical record or from cultural heritage institutions, but when they are included, they generally come from someone in the dominant social group who was widely considered important by their peers. When keepsakes or talismans of this kind are represented, they most often come from the wealthy and powerful who, as part of the dominant society, are empowered to donate their possessions to museums that reflect the value and significance of the items. History Harvest reflects the significance that each individual person recognized in their own possessions and histories, and makes those available to the public. These reflections, taken in combination with others, can share some exceptional insights into communities and their cultures, customs, and histories.

Metadata in the History Harvest is also written primarily using the language and dialect of the contributor, and does not use controlled vocabulary for the majority of the fields. This avoids the longstanding issue of subject term usage, which can erase or flatten the languages and vocabularies natively applied to the artifacts. More importantly, oral history interviews are posted along with the objects, to represent the entire context in the most complete possible way. The major benefit of a process like History Harvest’s is in providing items to the historical record that are context-rich and fully informed by community knowledge. Even with the best intentions, the nature of controlled vocabularies and slow-moving institutions means that objects in typical GLAMs are described through a limited vocabulary that is not necessarily representative of all communities and vantage points.

Although the method employed by History Harvest minimizes the institutional imprint, it is not perfect. First, the oral history interviews are edited for time, and the metadata is necessarily edited and reformulated to some extent. These interventions are largely unavoidable, but a consciousness of the concerns will go a long way toward minimizing the impact of editing and reformulation. Second, the information obtained (that is, not just the information that is recorded and displayed, but the initial interview) is shaped by the interviewer. The interviewer’s cultural lens, as well as their knowledge of the topic, will shape the information received and impact the direction of the knowledge. Like the first problem, this is unavoidable, but it can be minimized through careful training, education, interviewer selection, and other considerations. Third, this method does not necessarily address problems of intellectual property and differing cultural contexts. Contributors could be asked for their preferred rights and customary sharing options, but then a great deal of work must go into creating an infrastructure that supports these. The social, technical, and infrastructural issues created by such a collection are many, and they must be addressed in the near future.

History Harvest and its experiences with crowdsourced content show promise for the future in building collections and adding community knowledges to existing collections. The methods put a much-needed focus on community inclusion, diverse knowledges, and the full social, political, and cultural context of historical items. These experiences with very basic, stripped-down methods can positively contribute to diversity, intellectual tension, and community knowledge grounded in the lives of those in the community. With this groundwork, institutions can either publish the work on its own as a grassroots archive, as History Harvest does, or they may retain this information while adding more layers of knowledge derived from multiple different places.

Works Cited

Jo Bates and Jennifer Rowley, “Social Reproduction and Exclusion in Subject Indexing: A Comparison of Public Library OPACs and LibraryThing Folksonomy,” Journal of Documentation 67, no. 3 (April 26, 2011), doi:10.1108/00220411111124532.

Robin Boast, Michael Bravo, and Ramesh Srinivasan, “Return to Babel: Emergent Diversity, Digital Resources, and Local Knowledge,” Information Society 23, no. 5 (October 2007), doi:10.1080/01972240701575635.

Mary M. Somerville and Dana Echohawk, “Recuerdos Hablados/Memories Spoken: Toward the Co-Creation of Digital Knowledge with Community Significance,” Library Trends 59, no. 4 (Spring 2011).

Ramesh Srinivasan et al., “Blobgects: Digital Museum Catalogs and Diverse User Communities,” Journal of the American Society for Information Science & Technology 60, no. 4 (April 2009a).

Ramesh Srinivasan et al., “Digital Museums and Diverse Cultural Knowledges: Moving Past the Traditional Catalog,” Information Society 25, no. 4 (July 2009b), doi:10.1080/01972240903028714.

Ramesh Srinivasan et al., “Diverse Knowledges and Contact Zones Within the Digital Museum,” Science, Technology & Human Values 35, no. 5 (September 2010).

Building “The Military-Masculinity Complex”

In my last post I discussed the conceptual framework behind the design of my History MA thesis. In this post, I’ll describe the workflow I ended up using as I completed the project. This workflow changed quite often, and ended up being more complicated than I would have liked.

My development environment was largely dictated by a requirement that had little to do with my actual project. The Graduate College required a PDF of my thesis for submission to DigitalCommons, and this (meticulously formatted) PDF was essential to the granting of my degree. So, I had to write my thesis in such a way that I could quickly and easily flip my web project into a viable PDF which would (unfortunately) be accessed by interested scholars alongside my project.

Mou has a clear, simple display, syntax highlighting, and an output preview.

The obvious choice for me was to use Markdown, a very simple markup syntax that allows users to easily convert plain text to HTML, XHTML, RTF, or a number of other document formats. Markdown enabled me to make a single draft that could be pressed into HTML and RTF instantaneously, satisfying my needs for both formats. I wrote my entire first draft in WriteMonkey on my PC, and encountered quite a few annoying bugs. I did like the writing environment and plethora of shortcuts, but I was constantly plagued with crashes. I purchased a MacBook Pro in the Spring, installed Mou and never looked back. It offered great functionality, elegant design, an instant preview, and syntax highlighting — and it never crashed or froze on me.
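To make that conversion concrete, here is a toy sketch of the kind of plain-text-to-HTML translation Markdown performs. This is my own illustration covering only a few constructs, not the actual parser used by WriteMonkey, Mou, or any real Markdown implementation:

```javascript
// A miniature Markdown-to-HTML converter handling only headings,
// paragraphs, bold, and italics; real implementations do far more.
function miniMarkdown(src) {
  return src
    .split(/\n{2,}/)                             // blank lines separate blocks
    .map(function (block) {
      var h = block.match(/^(#{1,6})\s+(.*)$/);  // e.g. "## Section"
      if (h) {
        return '<h' + h[1].length + '>' + h[2] + '</h' + h[1].length + '>';
      }
      var inline = block
        .replace(/\*\*(.+?)\*\*/g, '<strong>$1</strong>')
        .replace(/\*(.+?)\*/g, '<em>$1</em>');
      return '<p>' + inline + '</p>';
    })
    .join('\n');
}
```

The point is that one plain-text draft can be mechanically pressed into HTML for the web project or into RTF/.docx for the committee, which is exactly why Markdown suited both requirements at once.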

Obviously, there is much more to the display of a web project than sufficient HTML text markup in the body. I also had to work in the header and footer, navigation, and footnotes using other methods. Ideally, the framework around the text would be done with PHP or some other scripting mechanism that pulls in the corresponding text body, along with any other entities I wish to display. However, my long-term accessibility plan required flat HTML pages, so I had to come up with another solution. For that, I drew upon Jekyll, a Ruby application that creates static websites and blogs from plain text files. With Jekyll, you can develop templates with a call for a text input.

The gist of the template — the top menu is shown in the top portion, the middle highlighted portion is the Jekyll {{content}} call, and the bottom portion shows the hidden footnote divs.

Jekyll then runs on a plain text file and produces an HTML file with the text content inserted into the template. Jekyll is great for blogs, where you want to create lots of static pages that are identical except for the body text. I elected to complicate things and create a different template for each of my pages. While this would seemingly defeat the purpose of Jekyll, the sidebars for each page required different text, and the footnotes would have added a lot of superfluous text and code to the plaintext, which I routinely sent in the form of a PDF to my advisors and others for input. Because of this, my Markdown files only supplied the content of the large ‘main’ div on the page, and everything else was included in the HTML ‘layout’ files.
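The build step can be pictured as a simple substitution. This is a hypothetical sketch of what Jekyll does with the {{ content }} call, written in JavaScript for illustration rather than Jekyll's actual Ruby implementation:

```javascript
// The layout holds everything that stays fixed for a page: the top menu,
// the sidebar, and the per-page footnote divs, plus one placeholder.
var layout = [
  '<nav>...top menu...</nav>',
  '<div id="main">{{ content }}</div>',
  '<div id="sidebar">...per-page footnote divs...</div>'
].join('\n');

// At build time, each Markdown file is converted to HTML and dropped
// into the placeholder, producing a flat, static HTML page.
function render(layout, contentHtml) {
  return layout.replace('{{ content }}', contentHtml);
}
```

With one layout per page, as described above, the layout string differs from page to page while each Markdown file supplies only the ‘main’ div.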

Footnoting Digital Writing

My last hurdle was the one that required the most time thinking, rethinking, overhauling, and staring at code. I’ve never been completely happy with footnotes on the web. Anchor links (even with return links) to notes at the bottom are sufficient, but can be disruptive. In the past, I elected to use parenthetical footnotes, with a full note visible on hover and a link to an anchor in the bibliography. I was a fan of the sidebar Grantland uses in its articles, but I didn’t want to overwhelm readers with a screen full of text, and I wasn’t sure how I could space the notes out so that each sat next to its citation.

The jQuery snippet that makes footnotes appear in the right portion of the page.

I initially planned to use the parenthetical technique in my thesis, but decided I would have to abandon it because of touchscreen compatibility. Around the same time, I created a website that used jQuery .hide(), and it occurred to me that I could hide/show footnotes in a sidebar. I investigated more, and found that I could have the jQuery set each footnote’s vertical position based on the vertical position of its trigger within the div. The code was actually pretty simple, although it had to be adjusted manually based upon the amount of space above the div on each side.
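A hypothetical reconstruction of that pattern (the selectors, data attributes, and helper name here are my own, not the thesis’s actual code):

```javascript
// Pure positioning helper: align a footnote with its trigger, compensating
// for whatever space sits above the sidebar div. This offset is the value
// that had to be tuned manually for each page/side.
function noteTop(triggerOffsetTop, spaceAboveSidebar) {
  return Math.max(0, triggerOffsetTop - spaceAboveSidebar);
}

// In the page itself, the jQuery wiring would look roughly like:
//
//   $('.fn-trigger').on('click', function () {
//     $('.footnote').hide();                        // one note at a time
//     $('#' + $(this).data('note'))                 // matching footnote div
//       .css('top', noteTop($(this).position().top, 120) + 'px')
//       .show();
//   });
```

The click handler itself is a sketch of the show/hide approach described above; the measurable logic is just the subtraction in noteTop.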

The text of the actual footnotes, which were embedded in the template and appeared on the right side.

The footnote code is shown at left, with the HTML display at right

All of my citations were embedded in the layout file for the pages, and were given IDs based on a very simple naming schema. The source as a whole was given a unique name, and references to individual pages or sections were given their own ID with the page number(s) at the end. Every in-text citation (which appeared as “[^]”) then had a JavaScript trigger with a corresponding ID. I kept files open that contained all of the triggers and all of the footnotes, and added to them or inserted them as necessary. I can’t help but think there may be a better way to dynamically produce the page numbers and reduce the total number of divs, but I couldn’t come up with a viable solution. This ended up working well for me, though it did take a considerable amount of time and effort to organize and implement.

Analoging the Project

The final step in my process was probably the most painful. To meet the graduation requirements, I had to smash my entire project down into a PDF, and lose all of the wonderful hypertextual connections I had made. I had to do it though, and Markdown made it fairly simple. I did a Find & Replace to delete all of the HTML in the citation triggers, and leave them with just the ID name and “@@” added to the end. I used this as a quick reference to find all of the footnotes I needed. I then used Pandoc to process all of the files into .docx and used the Zotero plugin for Microsoft Word to change all of the @@ placeholders to actual footnotes.
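The Find & Replace step can be sketched as a single regular expression. The trigger markup shown here is an assumption for illustration; the real files may have been structured differently:

```javascript
// Strip the HTML from each citation trigger, leaving just its ID with
// "@@" appended: an easy-to-find placeholder for the later Zotero pass.
function stripTriggers(html) {
  return html.replace(
    /<a[^>]*class="fn-trigger"[^>]*id="([\w-]+)"[^>]*>\[\^\]<\/a>/g,
    '$1@@'
  );
}
```

After a pass like this, converting with Pandoc (e.g. `pandoc chapter1.md -o chapter1.docx`) leaves the `@@` markers intact in the .docx, where they can be swapped for real footnotes in Word.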

Conceptualizing “The Military-Masculinity Complex”

Now that I’ve had a bit of time to get settled in to my new program and think about things that aren’t my thesis, I feel comfortable reflecting on the project and my process a bit. The structure and scope of my project were the result of a long tug-of-war between conceptual experimentation and pragmatism. First, here’s a bit from my ‘About’ page explaining my goals for the project.

With this project, I’m seeking to establish an archive embedded within a nonlinear scholarly argument. Rather than dictate a linear progression, I hope to create a network of scholarship and sources the reader can explore and examine as he or she wishes. Here I have attempted to be as hands-off as possible with the ordering of the argument, though I have sought to place enough direction that my argument is coherent. I was thoroughly impressed with and influenced by early previews of the Scalar platform, and the way in which it facilitated different intersecting and conversational themes readers could follow. When I began building, Scalar was not yet open to the public, so I began to create my project in a similar vein, but in an even more open and free form.

I initially planned to build intersecting narrative themes (a la Scalar) with a MySQL/PHP structure and add some dynamic content with JavaScript. However, as I put more thought into the long- and short-term viability, I had to reconceptualize. First, I was a Master’s student who was approaching a relatively new topic. This meant that I had a very limited time to tackle all of the historiography, find and digitize primary sources, develop and refine a thesis, and learn and build a fairly complex infrastructure. Second, and most importantly, was my need for long-term preservation of my project in its digital form. I didn’t want to be tied to purchasing server and domain space to maintain it, and I couldn’t expect the UNL History Department or CDRH to host, maintain, and update it in the long term – they already have their hands full with their funded projects. Therefore, I felt that the best plan for the long term would be to keep everything in flat plain text files that could be stored in DigitalCommons, UNL’s institutional repository. So, given these constraints, I had to develop a design concept that could be carried out through flat HTML files, while still offering the free and nonlinear narrative I desired.

The ‘Navigation Bar Backbone’

The navigation bar has six sections, one of which drops down into four subsections.

Although I wanted a nonlinear and mostly unstructured project, some structures were required to add organization and give the reader some logical mapping and a strategy for exploring. I employed a hierarchical structure adapted from Robert Darnton’s 1999 “layered pyramid” concept. A network of small, linked hierarchies enables readers to quickly and easily grasp my argument, while allowing much deeper exploration and more in-depth argument. In addition to this, I did have a central thesis to argue, and it was important that I make it explicit throughout. My structured hierarchy all stems from a strong navigation bar that appears on each page.

The ‘Layered Pyramid’

Each option links to a subsection where each instance of the archetype in the specified work is shown.

The items on the primary navigation set out the major sections of my thesis – introduction, theoretical grounding (Hegemonic Masculinity), historiography (American Manhood), original research and argument (Military-Masculinity Complex), and bibliography (Sources). The Military-Masculinity Complex section is then further broken down into four main sections of argument, each of which includes a narrative that relates secondary sources and several exemplary pieces of primary material to my argument. At the bottom of each of these pages is a section that links to several other examples of the archetype in other texts, and explores them in greater detail. Users can choose to read more about a specific archetype, or further explore specific subsections of my corpus (e.g., postwar Army documents or WWII-era Marines comics) as they desire.

The ‘Network of Scholarship’

This pop-out window shows all relevant sources and related pages.

The navigation bar backbone and ‘layered pyramid’ structure, while useful in organizing my work and helping users understand where they are, did not do much in the way of free exploration across concepts. Users would be able to explore the different layers of the pyramid, as Darnton suggested, but weren’t really able to browse and explore in a non-linear way. In order to facilitate a better understanding of the scholarly context, and to promote movement between pages and topics, I added a pop-out panel of resources and additional reading. This came largely from meditating on Vannevar Bush’s famous piece, “As We May Think,” which challenged me to think more deeply about the connections between scholarship and sources, as well as the usefulness of embedded links between them. I wanted to place a collection of links to all of the resources cited on or otherwise related to the page, so that readers could easily jump to another section that utilizes a similar concept, argument, or source. This offers a much freer way for users to read and explore my argument, or to further explore key concepts or resources. This portion, which expands out over the page, serves as an “enhanced” bibliography — that is, it includes every source that is linked from the page, as well as pages that link to the page, and other pages within the same hierarchical class.

Through these methods – a “layered pyramid” with networked links laid over the top – I developed a structure and design that I hope leads to useful exploration and discovery. In my next post, Building “The Military-Masculinity Complex”, I will tackle the workflow, code, and applications I used to create my project.

Revisiting #AHAGate

Over the summer, when the aptly titled “#AHAGate” controversy broke out, I considered writing a post about Open Access publishing and embargoes. I never got around to putting my thoughts to words, however, as I was busy finishing up several projects and packing up my things. After sitting back and reading for a few days, I felt as though the community had spoken with resounding disdain for the statement, and had said most everything that needed to be said.[1] I began to revisit the topic when a number of students spoke against the use of Creative Commons licensing and Open Access scholarship in the Humanities in a recent class discussion. They echoed many of the same troubling (to me, at least) sentiments that informed the AHA statement, those which resigned all ownership and control of scholarship to publishers. The conversation reminded me that I had developed a filter where I only encountered people who I agreed with. All of the Open Access Week events here have encouraged me to revisit #AHAGate a bit, and put some of my thoughts down.

My frustration with the AHA statement doesn’t stem from a disagreement that it may be necessary for early-career humanities PhDs to embargo their dissertation. My frustration is that the AHA does not see this necessity as a serious problem with academia, scholarship, and scholarly communication. I seriously question the logic and motivations of an academic system which sees limited access as a positive aspect, rather than an economic and technological barrier which must be reduced or overcome. I understand that some barriers are unavoidable. If some publishers are refusing to publish monographs derived from open electronic dissertations (a very small percentage do, it seems), and academic careers depend so heavily upon publishing monographs, I agree that scholars should have the option to embargo dissertations. However, I don’t believe these policies have a positive impact on scholars and scholarship, and I’m not entirely sure that they are necessary. The AHA should have used its substantial voice to address the roots of the problem — namely the over-reliance on publishing companies to sort out issues of tenure and promotion and the problematic state of academic publishing in the humanities — rather than proposing a temporary patch.

Academic departments and publishers must recognize the changing publishing landscape and begin to modernize and decouple from the mutually destructive relationship they find themselves in. Monograph budgets at academic libraries are being reduced in favor of journal budgets, making books less profitable for publishers.[2] Despite this economic reality, tenure and promotion processes rely heavily upon this gatekeeping process to grant prestige. Rather than independently assessing the strength of a work, its impact and citation factors, or its general reception, T&P processes are built around this one form of scholarship and one (at least somewhat) independent entity. This forces scholars to seek approval from book publishers who may severely limit access to their work in order to break even or profit, rather than making the work accessible and usable.[3] With accessibility and usability come more citations and more recognition from other scholars in the field — metrics that advance the field as well as the career of the scholar. Publishers may need to change their model of publishing and delivery in a way that is less economically intensive, while maintaining their valuable roles as editors, peer-review administrators, and assessors of quality.

In the meantime, I’m interested in exploring the issue of monograph sales and open electronic theses and dissertations (ETDs). During #AHAGate, many were quick to point to “Do Open Access Electronic Theses and Dissertations Diminish Publishing Opportunities in the Social Sciences and Humanities? Findings from a 2011 Survey of Academic Publishers” to show that only a few publishers took ETDs into consideration. What I felt was missing was a study, or even a conversation, about whether or not any publishers need to be wary of ETDs. According to Ramirez, et al.:

ProQuest, an electronic and microfilm publisher of theses and dissertations seldom receives requests by students or university personnel to remove access to their ETDs because publishers considered these works “prior publication.” This constitutes a fraction (0.002) of the 70,000 theses and dissertations made electronically accessible via ProQuest in 2011.[4]

Publishers are much less concerned with ProQuest and InterLibrary Loan acquisition because they make the dissertation slightly more difficult to access, though it still circulates and is available to many scholars. With services like ProQuest and ILL, the vast majority of the potential market already has access, yet this generally does not diminish a dissertation’s chance at publication.

I think it’s reasonable to believe that an open dissertation would actually increase the sales of a monograph. For the vast majority of scholars, a dissertation is quite different from an academic monograph, and they rarely work as decent or even acceptable substitutes. I cannot fathom someone deciding to read a dissertation in lieu of purchasing, checking out, or requesting a book. Conversely, I’ve personally found several dissertations that led me to purchase, check out, or request the corresponding book. A well-read and well-circulated dissertation is an excellent advertisement, and can aid a newly minted PhD immediately.

Academic libraries compose nearly the entire market for humanities monographs, and I have yet to see any evidence that librarians would avoid purchasing a book because a similar dissertation is available. I wouldn’t presume to know more about scholarly publishing than the publishers themselves, but I would like to see data on this issue. If it is true that publishing an open ETD is detrimental to a publisher’s sales or reputation, they should share this information with scholars to develop a better plan to alleviate these problems together, whether through alternative publishing, on-demand printing, purchasing contracts, or some other means. If this is not the case, the publishers can assuage young scholars’ concerns and encourage them to make their work available in such a way that it can garner them a strong reputation and maybe even drive book sales. After all, if publishers are reticent to accept ETDs because of prior publication, isn’t it worth studying the buying practices of librarians to determine how prior publication actually affects their purchasing decisions?

Bonus read: Disembargo: An Open Diss, One Letter at a Time. Mark Sample’s dissertation is “opening” itself over the span of six years — one letter at a time. Update: Mark recently wrote a Chronicle piece about this.

[1] The University of Michigan Press and Micah Vandegrift wrote thorough and well-reasoned syntheses of the response.

[2] Robert Darnton, “The Library: Three Jeremiads.”

[3] Stephen Ramsay wrote an excellent piece on this topic, “The American Shrugshouldercal Society.”

[4] Marisa L. Ramirez, et al., “Do Open Access Electronic Theses and Dissertations Diminish Publishing Opportunities in the Social Sciences and Humanities? Findings from a 2011 Survey of Academic Publishers,” College & Research Libraries 74, no. 4 (July 2013): 368–80.

An Update

I’m finally returning to blogging after a bit of an absence, but this time I return from a new location with new credentials.

This summer was unbelievably hectic – full of anxiety, excitement, and more coffee than any person should ever consume. Through May and early June, my plans for the Fall remained up in the air as I searched for employment and funding for further education. Things finally came through for me in mid-June (more on that in a minute), and I was able to fully focus on my MA thesis again.

My defense came soon after — I successfully defended my thesis just in time for DH 2013, which was conveniently located six blocks down the street from my apartment. I was able to enjoy the conference and meet some amazing scholars despite my need to continually shuffle away and anxiously finish my edits and fix some pesky JavaScript errors.

Once the conference ended, I was still reeling from the shock of being done and not having writing guilt hanging over my head. I had three and a half weeks to sit back and relax for the first time in years. Oh, and reorganize myself, pack up everything, and move to a new city and program.

In mid-June I was offered a pre-professional graduate assistantship with Grainger Engineering Library at the University of Illinois at Urbana-Champaign. It was a fantastic opportunity for me to gain experience in academic libraries while attending the top-ranked Graduate School of Library and Information Science at Illinois. I’m currently leaning towards a Data Curation specialization with a focus on Digital Humanities. I’ve also been fortunate enough to be able to continue my DH work on Emblematica Online! I’m now finishing up my third week in Champaign, and I couldn’t imagine a better group of people to work with.

I’m hoping to post here more regularly than in the past, but it has taken me three weeks to put together this blog post, so I suppose I should temper my expectations. In the meantime, here is some valuable information I’ve recently learned:

If this is a monument to the past, why is there a line to use the computers?

I recently came across a statement from a Lincoln (Nebraska) City Council candidate who, in my opinion, completely missed the public library’s role in society. I’m not writing this to attack him or politicize his statement, but rather to address a common misconception of what libraries are, and what services they provide to the community. These same sentiments arose in the past year when the city discussed the future of Bennett Martin, Lincoln’s downtown library. Here is the relevant passage from Nebraska Watchdog:

[Roy Christensen] said the city needs to look at what a library needs to be in the future, as opposed to “a monument to the past.”

He said he reads about two books a week, for example, but hasn’t read a print book for about three years. Sixty percent of Lincoln residents who use the library access it online, he said, so the city needs to look for savings that take that into account — perhaps by buying fewer printed books.

“I think we need to have a discussion about what libraries ought to look like in the future,” he said.

First, I think he’s precisely on point that we need to have a discussion about what libraries ought to look like in the future. However, I think this discussion needs to take place with a more nuanced understanding of what libraries are. I’ll return to this.

Second, ebooks have not been kind to libraries. It is unclear at this point if significant savings can be made in the long- or short-term by transitioning to more ebooks and fewer printed books, and licensing issues complicate acquisitions. First-sale doctrine, the portion of copyright law that has allowed physical lending in the past, does not apply to digital content. Publishers have responded to e-lending in a number of ways, and many are bad news for libraries. From Forbes (Dec 2012):

The challenge to libraries is not insignificant.  Four of the six publishers are not providing eBooks to libraries at any price.  The other two – Random House and HarperCollins lead the industry with two different models.  Random House adjusted eBook pricing in 2012.  While the prices on some books were lowered, the most popular titles increased in price – some dramatically.  Author Justin Cronin’s post-apocalyptic bestseller “The Twelve” whose print edition costs the Douglas County Libraries $15.51 from Baker & Taylor and whose eBook is priced at $9.99 on Amazon was priced at $84 to Douglas County on October 31st.

HarperCollins meanwhile has adopted a different model, selling eBooks to libraries at consumer prices but electronically limiting them to 26 lends and then requiring that the book be repurchased…

User access is also a concern with ebooks — users must overcome an initial cost barrier (or the library must provide them) in order to access the books.

Third, his focus on printed books and his statement that libraries are “a monument to the past” denote, in my mind, the sentiment that was echoed repeatedly in the discussion about Bennett Martin’s future: libraries are book warehouses. The library is not a free version of Barnes & Noble; it is a portal to information and knowledge, staffed with experts at navigating the landscape. Libraries have always been more than simply a building stuffed with books and documents, but the digital age has brought about more obvious departures from the bound volume. In order to fulfill this traditional role, libraries now not only provide books (digital and print), but also access to technology and the web for those without it. This service is essential to follow through on another of the library’s traditional goals: literacy.

Returning to Mr. Christensen’s suggested discussion about the future of libraries, I will say this: it needs to focus on digital literacy. By simply swapping out printed books for ebooks, patrons are missing out on growing quantities of essential media that do not come in EPUB format. The library must aid in the development of good digital citizens, who are capable of finding and evaluating information online, applying for jobs, creating media, accessing essential services, and conducting business on the web. The first, and most essential, part of this focus is to provide access to computers and high-speed internet to help bridge the digital divide. The second part is fostering innovation and development by keeping the technology up to date and offering assistance to those who wish to create new content. The Lincoln City Libraries are already fostering digital literacy, as evidenced by the high rates of on-site computer usage and by a glance at their programming schedule.

I (reluctantly) understand that the public library is not immune to budget cuts or freezes, but when we discuss the future of our libraries, we must keep in mind that digital literacy is a growing component of a library’s role, and we must look to maintain this as the budget develops. Moving from printed volumes to ebooks (if it does end up saving money) does nothing to fulfill this need.

#Alt-Academy: Professionalizing Unconventional Academic Careers

I wrote the following review in the Spring of 2012 as part of my ‘Internship in the Digital Humanities’ graduate course at the University of Nebraska-Lincoln. My classmates and I have other blog posts and reviews located on the class blog, DH Internship @ UNL. — BL

But yield who will to their separation,
My object in living is to unite
My avocation and my vocation
As my two eyes make one in sight.

Robert Frost, “Two Tramps in Mud Time”

Bethany Nowviskie opens her introduction to the #Alt-Academy project with Frost’s words on the beauty of a career that is meaningful and fulfilling (Nowviskie 2011, “Two Tramps in Mud Time”). For many inside the academy, there exists a disconnect between this notion and a career like Nowviskie’s – in academia, but outside of the tenure track. Nowviskie put together the project after a few Tweets about her excitement about her work sparked a conversation about the enjoyment many get from similar careers (Ibid.). #Alt-Academy is an innovative collection of media that seeks to unite, inform, professionalize, and lobby on behalf of scholars in academic roles outside of tenure-track academic appointments. The majority of active contributors work in the digital humanities, but the project does not limit itself to those who consider themselves digital humanists. The project includes six “clusters” of posts with similar topics, including one introductory cluster. Individuals with advanced degrees in the humanities who work in these positions are often seen as failures, or are otherwise excluded from discussions about the academy. Although the project defines alt-academy very broadly, most of the active contributors and topics also fall under the big tent of Digital Humanities, where scholars are creating an entirely new genre of scholarship and careers. This project aims to redefine and professionalize these new roles in the academy, and bring more attention, respect, and job stability to those in the field.

#Alt-Academy is a vital resource for junior scholars and grad students who are “building skills and experience in…those areas of the academy that are most in flux, and most in need of guidance and attention by sensitive, capable, imaginative, and well-informed #alt-ac scholar-practitioners” (Nowviskie 2011, “#alt-ac in Context”). Nowviskie’s introductory essay also speaks to the idea of unifying a disparate segment of the academic community that has long been considered failed tenure-seekers. The innovative publishing model fosters quick, multifaceted communication between #alt-ac members and advances the unification and professionalization of the community by focusing both on changing institutions and on encouraging junior scholars to pursue work in the field.

The two clusters that focus on institutional change and senior scholars are essential to solidifying the field by clarifying labor issues and creating permanent spots in institutions. These clusters, Labor and Labor Issues and Making Room, address topics that are important to establishing a more permanent field, because so many in alt-ac are paid with temporary funds or are otherwise limited by an impermanence atypical in the traditional academy. These positions must also be created using structures very different from traditional practices, because tenure does not fit in such a dynamic field, where values are changing and risks must be taken. These articles sketch out arguments for the necessary changes and structures that new “alternative” positions require. The arguments even extend to the most basic of pay models, as Julia Flanders discusses at length in her article “You Work at Brown. What Do You Teach?” (Flanders 2011). Flanders points out that academic teachers are salaried, with relatively specific hours, production expectations, and calendars, but many in the alt-academy are paid hourly or by project and lack long-term goals or signposts. These labor and structural changes have very profound effects, not only in legitimizing the jobs and attracting more top talent, but also in adding value and stability for employees.

The other three clusters focus on training and career paths for alt-academics – both junior and senior scholars. The clusters Careers and Credentials, Getting There, and Vocations and Identities are more focused on individuals and provide support, advice, and inspiration to alt-ac scholars looking to succeed in the field. They provide a great deal of information for aspiring and junior scholars, and truly showcase the breadth of the field. The last of these, Vocations and Identities, branches out the furthest, covering emerging positions in libraries, public history, and scholarly publishing. Collectively, these articles drive home the point that no single path to an alt-ac career exists, and everyone in the field has followed a different track. They also show the richness and diversity of training, and the way these varied backgrounds contribute to a field with a wide variety of thought and development.

The highly innovative publishing methods of the project promote a higher level of interactivity and review in scholarly communication. #Alt-Academy employs a publish-then-filter system, so pieces are published right away, reviewed, and then selected works are featured on the home page. The site’s “Response” function – a vast improvement on typical web commenting – appears prominently along the side rather than being relegated to the bottom of the page. The responses are also brought to life through photographs and biographies, and the front page shows how many responses each piece has received. To date, however, there has not been a wealth of responses to the articles. This is, of course, not to say that the project has not had a lot of traffic or that the articles have not been well received – the project has garnered a great amount of attention – but much of the conversation has taken place on individuals’ blogs and on Twitter. Unfortunately, such a thoughtful and positive scholarly communication tool has gone underused. If the project could harness more of the conversation that surrounds it (whether through plug-ins and integration or by encouraging responders to double-post), it could be an even more powerful tool for communication and innovation in the community.

By raising the profiles of humanities scholars working along non-traditional routes, and by bringing their contributions and expertise to the table, the project makes the scholarly environment richer and stronger for everyone. The academy needs a large shift from defining itself by those who teach the humanities to including all those who are professionally trained in the humanities. While the “#alt-ac movement” will not directly accomplish that, it can be an essential first step toward raising the profile of alt-acs, moving this work into greater acceptance, and training more graduate students in it. This also enlarges the overall community, brings in new perspectives, and stems the loss of the half of humanities PhDs (not to mention master’s degrees) who will not work in tenure-track positions.

Although the idea of an alternative academy existed prior to this project, the use of the term as a rallying point for scholars can be problematic. In taking a stand as the alternative academy, these scholars position themselves outside the mainstream establishment of the current conception of “the academy.” The nomenclature and separatist attitude have both advantages and disadvantages. On the one hand, the separation draws an unnecessary distinction – is it impossible for scholars to move between the traditional and alternative academy? Should there be two separate paths through graduate school? A different terminology – perhaps something like “Broadening the Academy” or “Nontraditional Scholars” – would better foster the idea of a united academy, or the acknowledgement that alt-ac careers are simply a different form of academic career, neither “alternative” nor less deserving of respect and employment. The term “hybrid humanities” sometimes describes humanist scholars with training in other fields, although it is problematic in that it gives the impression that these scholars are mixing the humanities with other disciplines (which isn’t always true) or that their work is not “pure” humanities. The term Humanities 2.0, as presumptuous as it may seem, may be best, because it announces that the humanities must broaden in the digital age, and frames these roles as necessarily included.

Perhaps a differentiation is necessary to establish this “alternative” career as a viable and stable form of academic work. Tenure, at least in its current and common practice, does not work well for most alt-ac jobs, as it often discourages innovation by perpetuating traditional standards and makes scholars overly risk-averse. Perhaps differentiation will provide stable employment within the university, and non-teaching careers can be institutionalized, permanent, and well-paying without the pressures and rigidity of tenure. In “Toward a Third Way: Rethinking Academic Employment,” Tom Scheinfeldt says he remains unwilling to accept second-class academic status as his digital humanities work wins awards and funding but remains untenurable (Scheinfeldt 2011). He adds, “It can’t be tenure track or nothing. My work requires a ‘third way’” (Ibid.). Bethany Nowviskie’s #alt-ac project is among the best resources for those seeking to build that third way.

Works Cited

Flanders, Julia. “You Work at Brown. What Do You Teach?” #Alt-Academy, May 6, 2011 (accessed March 12, 2012).

Nowviskie, Bethany. “#alt-ac in Context.” #Alt-Academy, undated (accessed March 10, 2012).

Nowviskie, Bethany. “Two Tramps in Mud Time.” #Alt-Academy, January 24, 2011 (accessed March 10, 2012).

Scheinfeldt, Tom. “Toward a Third Way: Rethinking Academic Employment.” #Alt-Academy, May 6, 2011 (accessed March 11, 2012).

Pursuing a Hypertextual Argument with ‘No Reservations’

Over the last few weeks, I’ve been thinking a lot about how, exactly, I want to arrange the navigation of my digital MA thesis. I’m creating a digital project that examines the depictions of masculinity and gender in a corpus of media created by the US military. I feel as though a digital medium is the best fit for this project because it allows me to fully integrate the media (movies, posters, comics, and potentially radio presentations) in their complete form. Because my analysis of the visual aspects of these pieces drives my project, I want these sources to be central to the reader, and fully integrated within my argument. I’ve given a lot of thought to the kind of navigational structure that allows for a clear and cohesive argument, while also taking advantage of the free navigation and hypertextual benefits the web facilitates.

During this same time, I’ve been burning through all of the episodes of ‘Anthony Bourdain: No Reservations’ on Netflix. A large part of his appeal comes from the way he travels — he makes a concerted effort to avoid the typical tourist attractions and tourist foods, gets advice from local people, and generally plays it by ear. Although I doubt the show is as relaxed and impromptu as it appears, he does seem to follow the paths that he comes across and is interested in, not just the prearranged places marked on a visitor’s map or the stops on a bus tour. Rather than adhering to a premade itinerary, he’s (seemingly) free to explore whatever interests him at the moment. The ability to freely interact with his surroundings and have a tactile experience with a culture is irreplaceable. I began to think more about the ways in which I could allow my readers to have the same freedom of choice and facilitate this kind of immersion.

This isn’t the strongest metaphor, I admit, but the show got me thinking about the process of immersion into new environments. I think that many scholarly arguments, especially those dealing with cultural history and/or significant archives, could benefit greatly from this free-form scholarly navigation. If the linear monograph is the carefully guided and structured bus tour, I’m hoping that my thesis will be the Anthony Bourdain experience. My ultimate goal is to lay out an argumentative landscape, provide readers with a map and compass, and allow them to explore the argument in their own way. I think this method can allow readers to immerse themselves in the argument and sources and branch off in a way a guided tour (to continue the metaphor) could not afford.

The ability to experience digital projects through different lenses or views has fascinated me for some time. My own department chair, Will Thomas, has employed them in his “Railroads and the Making of Modern America” project, and my former mentor Doug Seefeldt has long been a proponent of the malleability of digital scholarship. I’m also fairly well versed in the literature on hyperlinks in historical texts, including works from Jerome McGann, Johanna Drucker, and many other leaders in digital scholarship. While the potential for hyperargument is clear, its implementation remains unclear to me.

Last Fall, I was floored by Jentery Sayers’s Nebraska Digital Workshop presentation on his in-progress project using the Scalar platform. Scalar is a platform currently under development that allows users to choose different preset pathways through a project, each shedding light on the topic from a different angle. I was so impressed with Scalar, in fact, that in the Fall 2011 Readings in Digital Humanities class, I gave a presentation on the future of my field (20th c. American cultural history) and I more or less just pointed to Scalar. I was really excited because, as Jentery illustrated, Scalar allows for numerous interweaving views and keeps a wide variety of media as the focal point of the argument. [Nicholas Mirzoeff’s “We Are All Children of Algeria” is a great example of Scalar in use]

My goal for my thesis is to dispense with these ‘views’ or premade paths (at least partially) and allow for a freeform experience in an argumentative landscape. Of course, this landscape offers its own set of challenges and potential failures. Without proper guidance, the reader can easily get lost or wander in circles. There’s always the chance that important “landmarks” and context will be missed, or that one’s scholarly journey will be repetitive or circular (I call this the Rockbottom Brewery Effect, and it can be incredibly frustrating, as my AHA Chicago colleagues can attest). To avoid this, one must create a functional design that allows for a clear flow throughout the landscape — this seems simple and clear-cut…until the HTML editor is open. I’m currently employing a pyramid structure (inspired by, but differing significantly from, Robert Darnton’s 1999 “layered pyramid” premise), where I lay out a few central premises from the start and allow readers to flow deeper into the argument and explore its details. This way, I can use clear and simple visual cues (go to the bottom of the page, click on one of many branches) that give the reader some semblance of location within the project.
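To make the pyramid idea a little more concrete, here is a minimal sketch of what one page in such a structure might look like. Everything in it is invented for illustration — the file names, headings, and class name are hypothetical placeholders, not pieces of my actual thesis. The point is the convention: argument text at the top, branch links grouped at the bottom, so “scroll to the bottom, choose a branch” becomes the one consistent navigation cue on every page.

```html
<!-- premise.html: a hypothetical top-level page in the pyramid.
     All names here are placeholders for illustration only. -->
<article>
  <h1>Premise One</h1>
  <p>The central claim, stated briefly, with the relevant media
     embedded alongside the prose…</p>

  <!-- Branches leading one level deeper into the argument.
       Placing them at the bottom of every page gives readers a
       consistent visual cue for moving through the landscape. -->
  <nav class="branches">
    <a href="branch-films.html">Training films</a>
    <a href="branch-posters.html">Posters</a>
    <a href="branch-comics.html">Comics and manuals</a>
  </nav>
</article>
```

A breadcrumb trail at the top of each deeper page could then serve as the “map and compass,” showing readers where they sit in the pyramid even after they have wandered off the main paths.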

I’m beginning to form a more solid idea of what this project will look like and how things will flow, but I would certainly appreciate feedback or recommendations. Are there any dangers I have not recognized? Are there projects out there that succeed at this? Any that try and fail? Is there a Guy Fieri method that I should follow instead?

About this blog

I’m a graduate student at the University of Nebraska-Lincoln, studying 20th century American History, Digital Humanities, and Women and Gender Studies. My MA thesis (in progress) examines depictions of masculinity in movies, comic books, and manuals produced by the US military from 1940-1963.

After lots of encouragement from Brian Sarnacki, Jason Heppler, and Dan Cohen (via this post), I realized I have waited far too long to put together a blog. I will be posting some reviews and papers from my coursework, as well as some thoughts on my fields of interest and my MA thesis. I’ve really been enthused by the amount of feedback some of my colleagues have received, so some of my posts may read as much like a question in a forum as a blog post.