With a proposal title like that, this session is sure to get some attention.
In digital humanities, we work with electronic data. The major advantage is that we can use computers to enrich it, search it, find structure in it, and so on, without having to do all of that manually. In this session, we would like to talk specifically about text, and the integration of natural language processing (NLP) in DH research.
We will give a brief overview of some of the methods, tools and corpora that we know of, and would then like to kick off a discussion about some of the following questions:
- Which text processing tools do you use, if any? Preprocessing, named entity recognition, clustering, topic modeling, annotation tools, Perl or Python libraries, …
- Do you think your research could benefit from (more) NLP?
- What are the barriers to entry?
- Which tools, courses, … are missing?
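To make the "preprocessing" end of that tool spectrum concrete: even before reaching for dedicated NLP libraries, a few lines of standard Python go a long way. The sketch below is a deliberately naive, stdlib-only example (the regex tokenizer and the sample sentence are illustrative choices, not a recommendation over proper tools like the ones listed above):

```python
import re
from collections import Counter

def tokenize(text):
    """Naive preprocessing: lowercase, then keep runs of letters only."""
    return re.findall(r"[a-z]+", text.lower())

sample = "In digital humanities, we work with electronic data. Data is enriched, searched, structured."
tokens = tokenize(sample)
freq = Counter(tokens)

# Most frequent token in this tiny sample:
print(freq.most_common(1))  # → [('data', 2)]
```

Of course, real projects quickly outgrow this: tokenization alone is language-dependent, and tasks like named entity recognition or topic modeling need trained models, which is exactly where the barrier-to-entry question becomes interesting.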
Looking forward to a lively session!