Events

Language, Politics, Literature:
What digitally assisted text analysis can(not) do for you

Wednesday, September 30, 2009
8:30am - 4:00pm

James Allen Center, Northwestern University, Evanston, Illinois

You are cordially invited to a one-day colloquium that brings together researchers from different disciplines  to reflect on the uses and limits of digitally assisted text analysis in a field bounded by the terms language, politics, and literature.

Text analysis, aka reading, has always been a balance of serial and non-serial moves, attending to the syntagmatic or paradigmatic aspects of a text, to use Roman Jakobson's terminology: texts as sequence and texts as pattern. From the early days of computer-generated concordances, digital technology has provided powerful tools for paradigmatic analysis. In the past two decades the scale, power, and ease of these tools have increased exponentially, enabling divide-and-conquer strategies that let you isolate micro-rhetorical acts such as lexical, phrasal, or syntactic choices and track their patterns across thousands of documents. You can read a very large corpus paradigmatically before you focus on a particular part of it for closer analysis.
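
As a minimal, purely illustrative sketch in Python (the lexical categories and the corpus directory are placeholder assumptions, not any participant's tool), this is the kind of paradigmatic pattern-tracking such software performs: count a few lexical choices per document and compare the resulting profiles across a corpus.

    from collections import Counter
    from pathlib import Path
    import re

    # Hypothetical micro-rhetorical categories: each label maps to a set of
    # lexical choices whose frequency we want to track across documents.
    CATEGORIES = {
        "first_person": {"i", "me", "my", "we", "our"},
        "uncertainty": {"perhaps", "might", "possibly", "may"},
    }

    def tokenize(text):
        """Lowercase word tokens; a crude stand-in for real linguistic tokenization."""
        return re.findall(r"[a-z']+", text.lower())

    def profile(path):
        """Count how often each category's words occur in one document."""
        tokens = tokenize(Path(path).read_text(encoding="utf-8"))
        counts = Counter()
        for token in tokens:
            for label, words in CATEGORIES.items():
                if token in words:
                    counts[label] += 1
        counts["tokens"] = len(tokens)
        return counts

    if __name__ == "__main__":
        # Placeholder corpus: any directory of .txt files would do.
        for doc in sorted(Path("corpus").glob("*.txt")):
            print(doc.name, dict(profile(doc)))

Reading the resulting profiles side by side is the paradigmatic view of the corpus; opening any one document and reading it from beginning to end remains the syntagmatic one.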

Digital archives and tools are changing the syntagmatic/paradigmatic balance and order of text analysis.  These changes play out differently in different disciplines.  Capturing those differences and learning from them is the major goal of this colloquium, whose participants come from Computational Linguistics, Interface Design, Political Science, Management, Economics, Rhetoric, and Literary Studies.

Conference schedule. (Subject to change.)

The participants:

Mark Davies
is a professor of Corpus Linguistics at Brigham Young University. He has created and provided sophisticated forms of access to a number of large English and Spanish corpora, most recently the 385-million-word Corpus of Contemporary American English. He is at work on an NEH-sponsored Corpus of Historical American English.

Daniel Diermeier
is the IBM Distinguished Professor of Regulation and Competitive Practice at Northwestern University. One of his research interests is Language and Politics, where he has worked with Beigman, Kaufmann, Beigman Klebanov, and Yu on new techniques for identifying and analyzing political disagreements and affiliations from word choices.

Suguru Ishizaki
is an associate professor of English at Carnegie Mellon University. His research focuses on developing tools for communication. He is the author of Improvisational Design: Continuous Responsive Digital Communication (2003), and has collaborated with David Kaufer on Docuscope.

David Kaufer
is professor of English at Carnegie Mellon University. He is the author of The Power of Words: Unveiling the Speaker and Writer's Hidden Craft (2004). He compiled the underlying lexicon for Docuscope, in which words and phrases are categorized in a fashion that supports the interpretation of documents as ensembles of microrhetorical acts.

Stefan Kaufmann
is associate professor of linguistics at Northwestern University. His research focuses on the language of uncertainty, rational linguistic behavior, and data-driven approaches to meaning. One of his recent essays, co-authored with Daniel Diermeier and Bei Yu, is "Classifying Party Affiliation from Political Speech."

Beata Beigman Klebanov
is a computational linguist, currently a postdoctoral fellow at the Northwestern Institute on Complex Systems and the Kellogg School of Management. Her work focuses on automatic semantic analysis of political speech, as well as on computational approaches to the detection of metaphor in text. Articles reporting on this work, both co-authored with Eyal Beigman and Daniel Diermeier, were recently published in Political Analysis and in the Journal of Information Technology and Politics.

Martin Mueller
is professor of English and Classics at Northwestern University. He is the general editor of WordHoard, an application for the close reading and scholarly analysis of deeply tagged texts. His most recent essay is "Digital Shakespeare, or towards a literary informatics", Shakespeare 4 (2008): 300-17.

Brad Pasanek
is assistant professor of English at the University of Virginia, where he specializes in the eighteenth century and in digital humanities. He is at work on a book about Eighteenth-Century Metaphors of Mind, materials for which are available from his website The Mind is a Metaphor. His most recent essay is "Meaning and Mining: The Impact of Implicit Assumptions in Data-Mining for the Humanities" (co-authored with D. Sculley, Google Pittsburgh), Literary and Linguistic Computing 23.4 (2008): 409-424.

Robin Valenza
is assistant professor of English at the University of Wisconsin, Madison. With a background in both English and computer science, she works on the relationship between literature and the intellectual disciplines in Great Britain, focusing on the long eighteenth century and the romantic period. Her recent essay "How Literature Becomes Knowledge: A Case Study," ELH 76 (2009): 215-245, reflects on the resemblances between current digital text analysis and eighteenth-century taxonomic enterprises.

Matthew Wilkens
is a postdoctoral Mellon fellow at Rice University. He has an MS in physical chemistry from Berkeley and a PhD from Duke's Literature Program. He works on allegory in contemporary American fiction and wonders whether allegory leaves measurable traces in the molecular structure of a text. His allegory project is described in one of his blogs and more fully in his essay "Towards a Benjaminian Theory of Dialectical Allegory" (NLH 2006).
Topic: "On Not Reading Rhetoric: How and Why to Measure Allegory"

Michael Witmore
is professor of English at the University of Wisconsin, Madison, and author of Culture of Accidents: Unexpected Knowledges in Early Modern England (Stanford, 2001) and of Pretty Creatures: Children and Fiction in the English Renaissance (Cornell, 2007). He is currently carrying out experiments on the suitability of David Kaufer's Docuscope for the analysis of seventeenth-century texts, a promising earnest of which appeared in "Shakespeare by the Numbers: On the Linguistic Texture of the Late Plays," in Early Modern Tragicomedy, ed. S. Mukherji and R. Lyne (2007). You can read more about it at his Wine Dark Sea blog.

Bei Yu
is an assistant professor at the School of Information Studies at Syracuse University. Her research focuses on text mining methods, especially opinion analysis approaches, to support data-driven scholarship in humanities and social science research. A recent paper, co-authored with Diermeier and Kaufmann, is entitled "Exploring the Characteristics of Opinion Expressions for Political Opinion Classification" and was published in the Proceedings of the 9th Annual International Conference on Digital Government Research (2008).

For more information or to register, please contact:
Denita Linnertz
Ford Center for Global Citizenship
D-linnertz@kellogg.northwestern.edu