Bibliography by Liz Shayne


Annotated Bibliography Assignment

 

By Liz Shayne, Playful Visualizations at Work/Working Visualizations at Play  Team

 

 1. Bollan, Johan, Lee Dirks, Joshua Greenberg & Jo Goldie. “Data Visualization and the Future of Research.” South by Southwest Conference. Sheraton Austin, Austin, TX. March 10, 2012. Panel.

http://schedule.sxsw.com/2012/events/event_IAP10546

 

This panel from the South by Southwest conference focuses primarily on what visualizations are currently being asked to do and how they provide access to big data. Greenberg introduces the idea of the visualization as a “macroscope,” a term he adopts from Joël de Rosnay’s book of the same name on systems. The macroscope is a tool that allows one to see on a much larger scale than ordinary vision—the conceptual opposite of a microscope—and invites one to think on a comparatively large scale. All four panelists spoke about how visualization is changing the way researchers interact with big data. Bollan sees promise in using the macroscope to analyze academic influence, while Dirks works as part of Microsoft Research to make available large-scale data about the timeline of the universe and the changing history of the earth’s surface. Goldie, the sole humanities academic in the group, focuses on how big data is becoming crucial to fields across the humanities. She stresses the importance of partnerships between the developers who create the macroscopes and the academics who need them for their research, arguing that the tool and the task must fit one another. The panel agreed on the need to make room for and understand big data, though Greenberg was the most strident. He called for an epistemology of the macroscope, distinct from both the scientific method of the sciences and the hermeneutics of the humanities: an iterative investigation that uses big data as its source and produces visualizations as a new form of knowing.

 


2. Craig-Zeta Analysis.

https://files.nyu.edu/dh3/public/TheZeta&IotaSpreadsheet.html

 

The Craig-Zeta analysis is a set of algorithms that relies on the computational power of the digital to compare texts at the more micro level of word usage. Analyses such as these rely on the computer's ability to detect patterns of usage at the lexical level and use those patterns to pinpoint stylistic behavior in a text or group of texts; they tend to belong to a field known as Computational Stylistics. The Craig-Zeta analysis in particular compares two texts or text sets by looking at their respective word frequencies and creating a list of what the program calls "marker words": words that are relatively present in one text and absent in the other, and vice versa. These marker-word lists allow the Craig-Zeta spreadsheet to graph the texts as 2,500-word sections according to the percentage of marker words from each list that a section contains, with the x-axis corresponding to the percentage of marker words for the first text set and the y-axis corresponding to the percentage of marker words for the second. This allows the viewer to see how different the texts are in their use of these marker words, and it also provides a testing ground for determining whether a new text is more similar to the first text set or the second. The version of the Zeta analysis linked here was originally developed by John Burrows, altered by Hugh Craig so that it could handle two texts at once, and automated as a Visual Basic macro by David Hoover.
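
 

To make the procedure concrete, here is a rough Python sketch of the logic described above. It is not Craig's or Hoover's Excel/VBA implementation: the tokenization, the simple presence-based marker score, the number of marker words kept, and the input file names are all assumptions made for illustration.

    # Rough sketch of the Zeta-style procedure described above (not the
    # original Excel/VBA macro): split each text set into 2,500-word sections,
    # find words common in one set's sections and rare in the other's, then
    # place each section on a scatter plot by its percentage of each marker list.

    import re
    from collections import Counter

    SEGMENT_SIZE = 2500   # word count per section, as in the spreadsheet
    N_MARKERS = 500       # how many marker words to keep per side (arbitrary here)

    def tokenize(text):
        """Lowercase word tokens; the real tool's tokenization may differ."""
        return re.findall(r"[a-z']+", text.lower())

    def segments(tokens, size=SEGMENT_SIZE):
        return [tokens[i:i + size] for i in range(0, len(tokens) - size + 1, size)]

    def presence(segs):
        """For each word, the fraction of sections in which it appears at least once."""
        counts = Counter()
        for seg in segs:
            counts.update(set(seg))
        return {w: c / len(segs) for w, c in counts.items()}

    def marker_words(segs_a, segs_b, n=N_MARKERS):
        pres_a, pres_b = presence(segs_a), presence(segs_b)
        vocab = set(pres_a) | set(pres_b)
        # Simplified score: present in A's sections, absent from B's (and vice versa).
        score = {w: pres_a.get(w, 0.0) - pres_b.get(w, 0.0) for w in vocab}
        ranked = sorted(vocab, key=score.get)
        return set(ranked[-n:]), set(ranked[:n])   # A-markers, B-markers

    def coordinates(seg, markers_a, markers_b):
        """x = % of tokens that are A-markers, y = % that are B-markers."""
        x = 100 * sum(tok in markers_a for tok in seg) / len(seg)
        y = 100 * sum(tok in markers_b for tok in seg) / len(seg)
        return x, y

    if __name__ == "__main__":
        # Hypothetical input files standing in for the two text sets being compared.
        tokens_a = tokenize(open("text_set_a.txt", encoding="utf-8").read())
        tokens_b = tokenize(open("text_set_b.txt", encoding="utf-8").read())
        segs_a, segs_b = segments(tokens_a), segments(tokens_b)
        markers_a, markers_b = marker_words(segs_a, segs_b)

        import matplotlib.pyplot as plt
        for segs, label in ((segs_a, "Text set A"), (segs_b, "Text set B")):
            xs, ys = zip(*(coordinates(s, markers_a, markers_b) for s in segs))
            plt.scatter(xs, ys, label=label)
        plt.xlabel("% marker words for text set A")
        plt.ylabel("% marker words for text set B")
        plt.legend()
        plt.show()

Run on two plain-text files, this produces a scatter plot analogous to the one the spreadsheet generates, with each point standing for a 2,500-word section.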

 

Craig-Zeta was originally developed as an authorship analysis tool, on the assumption that micro-level analysis of word usage would be helpful in discovering the stylistic differences between two authors. The nature of the software, however, means that it can be used to explore the stylistic differences between any two texts, and it is often useful as a first step in comparative textual analysis. The graph Craig-Zeta creates answers the basic question of computational text analysis: is there something to be gained from studying this particular text (or texts) computationally? Based on the degree of difference, which the graph makes clear through a scatter plot, one can easily see how similar the two texts are and whether answers regarding distinctions between the texts should be sought on the lexical level as well as the semantic. Having determined the amount of difference, Craig-Zeta also provides the user with the aforementioned statistically determined marker words, giving her a useful starting point should she wish to examine word usage at the sentence level within the text itself. I have found it particularly interesting to generate word trees and word networks that focus on the marker words from each text as a first foray into how, and perhaps why, the texts differ.
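
 

As a small follow-on, before building full word trees or networks one can simply pull keyword-in-context lines for a few marker words. The snippet below is an illustrative sketch, not part of the Craig-Zeta spreadsheet; the file name and the sample words are placeholders for actual marker words produced by the analysis.

    # Illustrative follow-up: print keyword-in-context (KWIC) lines for a few
    # marker words, as a starting point for the word trees mentioned above.
    # Not part of the Craig-Zeta spreadsheet; file name and words are placeholders.

    import re

    def kwic(tokens, keyword, window=6):
        """Yield each occurrence of `keyword` with `window` tokens of context."""
        for i, tok in enumerate(tokens):
            if tok == keyword:
                left = " ".join(tokens[max(0, i - window):i])
                right = " ".join(tokens[i + 1:i + 1 + window])
                yield f"{left:>60}  [{keyword}]  {right}"

    if __name__ == "__main__":
        text = open("text_set_a.txt", encoding="utf-8").read().lower()
        tokens = re.findall(r"[a-z']+", text)
        for word in ["example", "marker"]:   # substitute real marker words here
            print(f"--- {word} ---")
            for line in kwic(tokens, word):
                print(line)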

 

Craig-Zeta is not the easiest or most intuitive tool to use, though David Hoover provides an in-depth guide to running it and to interpreting its data. Its learning curve is not particularly steep, however, and it does a good job of providing direction for future exploration. Some familiarity with Microsoft Excel is highly recommended.

 

Below is the graph created by running the Craig-Zeta analysis on George Eliot’s Daniel Deronda once it has been split according to main character. Below that is a screencap of what the spreadsheet looks like:

 


3. Jones, Sarah. “When Computers Read: Literary Analysis and Digital Technology.” Bulletin of the American Society for Information Science and Technology, 38:4. April 2012. 27-30.

 

Jones’ article examines two different ways that data visualization is changing critical practice within the domain of literary studies. She emphasizes the objective nature of such analyses and notes that they rest on a “kind of knowledge that was once regarded as the antithesis of the humanities: hard facts” (27). She contrasts the traditional critical essay, which she describes as “the personal commentary of a single reader who is recognized as an expert qualified to interpret texts” (28), with the critical practice that is building around data visualization, which, though it often has explanatory elements, relies far more on the reader’s interpretations. As the factual elements of the charts themselves gain authority—coming, as they do, out of hard data—the interpreter loses his or her authority and becomes one voice among many in contemplating what the data “means.” Jones also addresses what sorts of inquiries are best suited to data visualization and argues that those based around visualizing a network are the most useful. She sees these digital-humanities-based inquiries as breaking with older models of understanding and moving towards a newer kind of hermeneutics that embraces a proliferation of interpretation. She ends her article with something of a contradiction, observing that while data visualization constrains the interpretive act of the critical author by demanding a certain degree of objectivity, it invites the reader to provide her own view of the visualization and understand it as she sees fit. The introduction of computational methods of knowing into literary criticism requires its practitioners to reconceive their own interpretive practice and move from an analysis of hierarchy and themes to one of patterns and networks.

 


4. Lima, Manuel. Visual Complexity. www.visualcomplexity.com 

--. Visual Complexity: Mapping Patterns of Information. New York: Princeton Architectural Press. 2011.

 

Lima created the website and later authored the book Visual Complexity as a tribute to outstanding visualizations of complex networks. His project resembles that of a curator: he gathers together what he feels are exemplary displays of comprehensible visualizations created from complex data networks, as well as other visualizations that are innovative either in their form or in their content. As Lima explains on the website, “all projects have one trait in common: the whole is always more than the sum of its parts.” In Lima’s view, the goal of the visualization is to provide a gestalt understanding of the information. Rather than drilling down into the data, visualizations force the viewer to step back and see the data as a totality. As the name Visual Complexity suggests, the majority of images are of networks so complex that it is impossible to view the individual links or even, in some cases, the nodes. Lima's curated gallery underscores the idea that the purpose of visualization is not to provide a granular understanding of the data itself. The data is meaningless compared to its place in the network, and the individual nodes and edges of the network are equally meaningless in the face of the network’s totality. The information is less important than the global shift in perception that it engenders. Visual Complexity reminds its viewers that the purpose of visualization is to see anew; it asks for a radical re-viewing of data that shifts the viewer's focus away from individual data points and towards an understanding of the data as a visually striking whole. The images on this site are a constant reminder that visualization is a profoundly interpretive act.

 


5. Ramsay, Stephen. “In Praise of Pattern”. TEXT Technology, 14:2, 2005. 177-190.

 

In this article, Ramsay provides something of a manifesto for those looking to explore the possibilities inherent in textual analysis and visualization. He emphasizes that scholars are not out to provide “scientific solutions to interpretive problems” (177) and that the goal of analysis in particular is to open up avenues of inquiry rather than provide factual confirmation. In searching for a program that would perform the sort of analysis he had in mind, Ramsay ended up designing and implementing his own analytic software, which graphs the locations in Shakespeare’s plays scene by scene. He uses his experience of creating this software to demonstrate how understanding a program’s provenance is helpful in putting it to use, then discusses what kinds of “knowing” do or do not come from the process of interpreting visualizations. Ramsay argues that visualization forces us to see the text differently, to look at features that were invisible prior to the analysis. Analysis is an aid to hermeneutic practice; it is a method of discovery rather than a method of validation. Ramsay’s view of visualization underpins an approach to analysis that appreciates the intersection of work and play in research, and his goal, “to say something new, provocative, noteworthy, challenging, inspiring—to put these texts back into play as artifacts reconstituted by re-reading” (189), makes for an excellent rallying cry.

 
