The challenge of 21st-century scholarship is finding efficient and innovative ways to process and synthesize data. Scholars face unprecedented quantities of information, made available online and in growing digital repositories such as this database. Our research project uses computer-assisted textual analysis to navigate and analyse large volumes of text.
Recent work from the European Groupe d’Analyse de Données Textuelles emphasizes scholars' growing desire to reconcile linear, qualitative text research with reticular, quantitative approaches (Mayaffre 2007; Adam 2006). Viprey (2005), for instance, shows how human or natural reading is linear and therefore informed by writing conventions such as continuity and progression, while computer-assisted readings are reticular or network-driven, each word being associated with a number of other words that are not necessarily in the same sequence. While linear approaches are in no danger of disappearing within the humanities, increasingly large quantities of data do not always allow for effective human reading. The discipline of computer-aided text analysis, meanwhile, has often been criticized for its tendency to focus on the statistical treatment of frequencies: the number of times a word appears in a piece of writing, regardless of how vast that writing may be. As Mayaffre (2007) points out, “a text corpus is not only an urn full of linguistic data, but also a space where this data is sequenced and organized to form a text” (translation, Richard). It is therefore necessary to situate text-based data within its immediate co-text (the words that surround it) and within a greater body of text (the many other texts written by a person or group), preferably over a longer period of time. It is also critically important to situate any writing in its broader socio-historical context. The goal of this project is to combine the conventions of linear reading with the broader, reticular view of a large body of text, a view that may reveal unexpected connections.
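The contrast between treating a text as an "urn" of frequencies and attending to co-text can be illustrated with a short sketch in Python. This is a generic illustration of the two approaches, not the project's actual tooling; the function names and the simple tokenizer are our own assumptions for the example.

```python
from collections import Counter
import re


def tokenize(text):
    """Lowercase word tokens; a deliberately simple tokenizer for illustration."""
    return re.findall(r"[a-z']+", text.lower())


def frequencies(text):
    """Raw frequency count: treats the text as an 'urn' of words,
    discarding all information about sequence."""
    return Counter(tokenize(text))


def co_text(text, keyword, window=3):
    """Keyword-in-context: each occurrence of `keyword` with `window`
    words on either side, preserving the sequence a frequency count loses."""
    tokens = tokenize(text)
    return [
        tokens[max(0, i - window): i + window + 1]
        for i, tok in enumerate(tokens)
        if tok == keyword
    ]


sample = "The reader reads the text; the text positions each word in sequence."
print(frequencies(sample).most_common(2))   # most frequent words, sequence-blind
print(co_text(sample, "text", window=2))    # each "text" in its immediate co-text
```

A frequency table answers "how often?", while the keyword-in-context view restores the linear surroundings that give each occurrence its meaning; combining the two is the kind of reconciliation the passage above describes.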
Additional reading on computer-assisted text analysis can be found here:
http://lexicometrica.univ-paris3.fr/
To analyse the documents in our database, we use the following text analysis software:
Freeware:
Commercialized Software: