Posts Tagged ‘Digital humanities’

This year the Digital History seminar will again be streaming live over the internet. The first of these will be this coming Tuesday (15 October), when long-time attendee Adam Crymble (King’s College London) will discuss his doctoral research. Please feel free to join us either in person (in the Bedford Room G37, Senate House) or live online at History SPOT. Full details below:

The Programming Historian 2: Collaborative Pedagogy for Digital History

Adam Crymble (King’s College London)

Digital History seminar

Tuesday, 15 October 2013, 5:15pm (BST/GMT+1)

Bedford Room G37, Senate House, Ground floor

Abstract

The Programming Historian 2 offers open access, peer-reviewed tutorials designed to provide historians with new technical skills that are immediately relevant to their research needs. The project also offers a peer-reviewed platform for those seeking to share their skills with other historians and humanists. In this talk, Adam will discuss the project from behind the scenes, looking at how it has grown, and how it hopes to continue to grow, as an enduring digital humanities project and alternative publishing and learning platform.

Biography

Adam Crymble is one of the founding editors of the Programming Historian 2. He is the author of ‘How to Write a Zotero Translator: A Practical Beginners Guide for Humanists’ and is finishing a PhD in history and digital humanities at King’s College London. Adam is also a Fellow of the Software Sustainability Institute.



Digital History seminar
23 October 2012
Luke Blaxill (King’s College London)
Quantifying the Language of British Politics, 1880-1914

Abstract: This paper explores the power, potential, and challenges of studying historical political speeches by applying quantitative computer software to a specially constructed multi-million-word corpus. The techniques used – inspired particularly by corpus linguistics – are almost entirely novel in the field of political history, an area where research into language is conducted almost exclusively qualitatively. The paper argues that a corpus gives us the crucial ability to investigate matters of historical interest (e.g. the political rhetoric of imperialism, Ireland, and class) in a more empirical and systematic manner, giving us the capacity to measure scope, typicality, and power in a massive text, such as a national general election campaign, which it would be impossible to read in its entirety.

The paper also discusses the main arguments commonly raised against this approach by critics, and reflects on the challenges faced by quantitative language analysis in gaining more widespread acceptance and recognition within the field.

A podcast and video of this paper are available on History SPOT.


Digital History seminar
Ben Schmidt (Princeton University)
Unintended consequences: digital reading and the loci of cultural change

Tuesday 12 March 2013, 5.15pm GMT

Abstract: Large-scale digital reading is, as its critics have noticed, quite poor at telling us about individual intentions. But digital texts do create new fields for the investigation of broad cultural trends which – where reasonably good metadata is available – can help historians to describe changes that appear largely driven by disciplinary or geographical structures rather than by the choices of an individual author.

I will investigate this in two contexts: the emergence of a new vocabulary of attention in the 20th century, directly contrary to the ambitions of the psychological establishment; and the particular places where authors of historical fiction fail to notice changes in language and culture.

Biography: Ben Schmidt is a PhD candidate in American intellectual history at Princeton and the Graduate Fellow at the Cultural Observatory at Harvard. His dissertation studies the emergence of modern conceptions of attention in psychology, advertising, and mass media in the early 20th-century United States. He co-developed Bookworm, a system for visual and statistical exploration of millions of books, newspaper pages, or journal articles, and writes about text analysis and the digital humanities at sappingattention.blogspot.com.


Tim Sherratt (Independent scholar)

Exposing the Archives of White Australia

Digital History Seminar, Institute for Historical Research

http://www.history.ac.uk/events/seminars/321 | #dhist

Bedford Room G37, Senate House, Ground floor, 5:15 pm (GMT)

With the passing of the Immigration Restriction Act in 1901, the new Australian nation put in place a framework to protect its racial purity – what was to become known as the White Australia Policy. While the outlines of this policy are well known, what is less well recognised is that the White Australia Policy was a massive bureaucratic exercise.

The Invisible Australians project (invisibleaustralians.org) is using a variety of digital technologies to explore and analyse the archives generated by the administration of the White Australia Policy. Many thousands of people sought to build lives and families within this discriminatory regime. Invisible Australians aims to recover their personal stories, while also documenting the workings of the bureaucracy itself.

How can we re-use archival data to build new forms of access? How can we track the flow of power through surviving bureaucratic traces? How can we construct an online research project without any funding or institutional support? This presentation will introduce Invisible Australians and reflect on how the digital realm enlarges our scope both for understanding and for action.

Dr Tim Sherratt (@wragge) is a freelance digital historian, web developer and cultural data hacker who has been developing online resources relating to archives, museums and history since 1993. He has written on weather, progress and the atomic age, and developed resources including Bright Sparcs, Mapping our Anzacs and QueryPic. He was a Harold White Fellow at the National Library of Australia in 2012 and is currently an Adjunct Associate Professor in the Digital Design and Media Arts Research Cluster at the University of Canberra. Tim is one of the organisers of THATCamp Canberra and a member of the interim committee of the Australasian Association for the Digital Humanities. He blogs at discontents.com.au.

The live stream will be available from the History SPOT Podcasts page, where a video pop-out is also available.


Example page from the Text Mining course

The Institute of Historical Research now offers a wide selection of digital research training packages designed for historians and made available online on History SPOT.  Most of these have received a mention on this blog from time to time, and hopefully some of you will have had a good look at them.  These courses are freely available; we only ask that you register for History SPOT to access them (a free and easy process).  Full details of our online and face-to-face courses can also be found on the IHR website.

I thought that it might be useful to talk a little more about these courses on the blog and provide a brief sample.  Over the coming months I will post a series of blog posts about each of our training courses, and give you a little sneak peek so that you have a better idea of what to expect.

I have chosen the Text Mining module as the first for several reasons.  First, because it is probably the one that best exemplifies what we are trying to do: to make digital tools accessible to historians through a series of introductory training courses.  The Text Mining for Historians module does just this, beginning with the very simple and slowly moving toward the more complex.

Text mining is not a single tool but a collection of tools that enable us to explore, interrogate, and analyse large bodies of text.  Imagine, if you will, that you have gathered together a corpus – perhaps a diary or series of diaries from a particular period, perhaps a series of publications on a particular subject, or maybe a set of official records spanning many decades or even centuries.  Normally you would wade through these documents one at a time and take notes.  Text mining allows you to automate certain elements of this task and helps you to discover trends and connections that you might never find through traditional reading alone.
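To make the idea concrete, here is a minimal sketch (not part of the course itself) of the simplest kind of text mining: counting how often a term of interest appears across each document in a small corpus. The toy diary texts and the term tracked are invented purely for illustration.

```python
from collections import Counter
import re

# Toy "corpus": each entry stands in for one diary volume or publication.
corpus = {
    "diary_1870": "the harvest failed and the poor law guardians met twice",
    "diary_1871": "the harvest was good and the parish was quiet",
    "diary_1872": "cholera reached the parish and the guardians met daily",
}

def word_counts(text):
    """Lower-case the text and count every word in it."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

# Track how often a term of interest appears in each document:
# a crude trend line across the corpus.
term = "guardians"
trend = {doc: word_counts(text)[term] for doc, text in corpus.items()}
print(trend)  # {'diary_1870': 1, 'diary_1871': 0, 'diary_1872': 1}
```

Real projects scale the same idea up to millions of words, but the principle – automated counting in place of manual note-taking – is the same.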

This training module takes you from the theory (i.e. what text mining is all about) through to its application to historical texts, and eventually on to the more complex areas of topic modelling, natural language processing, and named entity recognition.  In this post I’m going to quote from the opening section of the course, as it gives a description of what historians might consider a good use for text mining.  In this example we are looking at the Old Bailey trial accounts used on the popular Old Bailey Proceedings Online website:

****

Would you like to know how often the word ‘guilty’ appears in the Old Bailey trial accounts? The answer can be found using the standard search engine on the Old Bailey Online website (it is 182,612). How about how many people were found guilty? The answer is 163,261. What about the number of defendants found guilty of murder? The answer is 1,518. These last two figures cannot be found through the standard search engine because they answer an entirely different type of question: we are not looking for how many times the word ‘guilty’ appears in the proceedings, but how many trials resulted in a guilty verdict. We want to discover something meaningful within the body of texts automatically, rather than manually checking each and every trial account.

This is a relatively simple example of text mining where the original documents have been marked up and tagged by surname, given name, alias, offence, verdict, and punishment. To calculate those results manually you would have to work your way through 197,745 criminal trial accounts (some 127 million words in total).
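The step from counting words to counting verdicts depends entirely on that markup. A sketch of the idea, using an invented XML fragment – the element and attribute names here are hypothetical for illustration, not the real Old Bailey schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment mimicking the kind of markup described above:
# each trial record carries tagged offence and verdict fields.
xml = """
<proceedings>
  <trial id="t1"><offence>murder</offence><verdict>guilty</verdict></trial>
  <trial id="t2"><offence>theft</offence><verdict>guilty</verdict></trial>
  <trial id="t3"><offence>murder</offence><verdict>notGuilty</verdict></trial>
</proceedings>
"""

root = ET.fromstring(xml)

# Count trials whose tagged offence and verdict both match: this is a
# question about the structure of the record, not about word frequency.
guilty_murders = sum(
    1 for trial in root.findall("trial")
    if trial.findtext("offence") == "murder"
    and trial.findtext("verdict") == "guilty"
)
print(guilty_murders)  # 1
```

Run over 197,745 tagged trial accounts rather than three, the same query yields the figures quoted above in seconds.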

This form of text mining, however, is little more than an advanced search engine – useful but limited. As the creators of the Old Bailey Online themselves admit (and have attempted to redress in a subsequent project):

‘Analyzing this kind of data by decade, or trial type, or defendant gender etc., can re-enforce the categories, the assumptions, and the prejudices the user brings to each search and those applied by the team that provided the XML markup when the digital archive was first created’.

– Dan Cohen et al, ‘Data Mining with Criminal Intent’, Final White Paper (31 August 2011), p. 12.

In other words, the search options and text tagging emphasise and reinforce a pre-determined expectation of what the resource creators believed to be the important data. Text mining tools can help to explore alternative questions more openly.

The Data Mining with Criminal Intent (DMCI) project has done just this by enabling researchers not only to query the Old Bailey site but to export those results to a Zotero library to be managed, and from there to Voyeur and other text mining tools for text analysis and visualisation.

The team behind the project uses the example of an investigator trying to understand the role poison might have played in murder cases. Using the search engine brings up 448 entries for ‘poison’ but doesn’t tell us much about what this means. Using Zotero and Voyeur it is possible to filter out the stop words and legal terminology common to all entries, to find out what other words commonly appear near the word ‘poison’. Through this method of text mining it was possible to conclude that poison was probably more commonly administered through drinks such as coffee than through food (see pp. 6-7 of the white paper ‘Data Mining with Criminal Intent’).
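The underlying technique here is collocation analysis: look at a window of words around each occurrence of the node word, discard stop words, and count what remains. A minimal sketch, using invented sentences standing in for trial accounts and a deliberately tiny stop-word list:

```python
from collections import Counter

# Invented sentences standing in for trial accounts mentioning poison.
accounts = [
    "the prisoner put poison in the coffee of the deceased",
    "she mixed the poison with coffee and gave it to him",
    "he bought poison and placed it in a drink of beer",
]

# A toy stop-word list; real tools ship much larger ones.
STOP_WORDS = {"the", "in", "of", "and", "with", "it", "to", "a", "he",
              "she", "him", "put", "gave", "placed", "bought", "mixed"}

def collocates(texts, node, window=4):
    """Count non-stop-words within `window` tokens of the node word."""
    counts = Counter()
    for text in texts:
        tokens = text.split()
        for i, tok in enumerate(tokens):
            if tok != node:
                continue
            lo, hi = max(0, i - window), i + window + 1
            for neighbour in tokens[lo:hi]:
                if neighbour != node and neighbour not in STOP_WORDS:
                    counts[neighbour] += 1
    return counts

print(collocates(accounts, "poison").most_common(2))
# [('coffee', 2), ('prisoner', 1)]
```

Even on this toy corpus, ‘coffee’ surfaces as the strongest collocate of ‘poison’ – the same shape of finding the DMCI team reports at full scale.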

****

If you would like to have a look at this module, please register for History SPOT for free and follow the instructions (http://historyspot.org.uk).  If you would like further information about this course, and the others that the IHR offers, please have a look at our Research Training pages on the IHR website.


Lancaster University
Friday 30th November, 2012
Geographical Information Systems (GIS) are increasingly being used by historians, archaeologists, literary scholars, classicists and others with an interest in humanities geographies. Take-up has been hampered by a lack of understanding of what GIS is and what it has to offer these disciplines. This free workshop, sponsored by the European Research Council’s Spatial Humanities: Texts, GIS, Places project and hosted by Lancaster University, will provide a basic introduction to GIS both as an approach to academic study and as a technology. Its key aims are: to establish why the use of GIS is important to the humanities; to stress the key abilities offered by GIS, particularly the capacity to integrate, analyse and visualise a wide range of data from many different types of sources; to show the pitfalls associated with GIS and thus encourage a more informed and subtle understanding of the technology; and to provide a basic overview of GIS software and data.

Timetable:
9:30   Registration
10:00 Welcome and Introductions
10:15 Session 1: Fundamentals of GIS from a humanities perspective.
11:45 Session 2: Case studies of the use of GIS in the humanities.
13:00 Lunch
14:00 Session 3: Getting to grips with GIS software and data.
15:30 Roundtable discussion – going further with GIS.
16:30 Close

Who should come?
The workshop is aimed at a broad audience including postgraduate and master’s students, members of academic staff, curriculum and research managers, and holders of major grants and those intending to apply for them.  Professionals in other relevant sectors interested in finding out about GIS applications are also welcome.  This workshop is only intended as an introduction to GIS, so will suit novices or those who want to brush up on previous experience. It does not include any hands-on use of software – this will be covered in later events to be held on 11-12 April and 15-18 July 2013.

How much will it cost?
The workshop is free of charge.  Lunch and refreshments are included. We do not provide accommodation but can recommend convenient hotels and B&Bs if required.

How do I apply?
Places are limited and priority will be given to those who apply early. As part of registering please include a brief description of your research interests and what you think you will gain from the workshop. This should not exceed 200 words.
For more details of this and subsequent events see: http://www.lancaster.ac.uk/spatialhum/training.html. To register, please email a booking form (attached or available from the website) to I.Gregory@lancaster.ac.uk, who may also be contacted with informal enquiries.
