A multidisciplinary approach to the use of technology in research using interview methods
|Date||Tuesday 9 July 2019|
|Time||9:00 - 13:00|
|Location||The Netherlands, Utrecht, TivoliVredenburg, Room: ???|
|Co-located with Digital Humanities (DH) Conference 2019|
|Contact||This workshop is supported by SSHOC|
Aim of the workshop
When considering research processes that involve interview data, we observe a variety of scholarly approaches that are typically not shared across disciplines. Scholars hold on to ingrained research practices drawn from specific research paradigms and seldom venture outside their comfort zone. This inability to 'reach across' methods and tools arises from tight disciplinary boundaries, where terminology and literature may not overlap, or from the different priorities placed on digital skills in research. We believe that offering accessible, customized information on how to appreciate and use technology can help to bridge these gaps.
This workshop aims to break down some of these barriers by offering scholars who work with interview data the opportunity to apply, experiment with, and exchange tools and methods developed in the realm of Digital Humanities.
As a multidisciplinary group of European scholars and tool and data professionals, spanning the fields of speech technology, social sciences, human-computer interaction, oral history and linguistics, we are interested in strengthening the position of interview data in Digital Humanities. Since 2016 we have organized a series of workshops on this topic, supported by CLARIN (see elsewhere on this website).
Our first concrete output was the T-Chain, a tool that supports transcription and alignment of audio and text in multiple languages. Our second was a format for experimenting with a variety of annotation, text analysis and emotion recognition tools as they apply to interview data.
This half-day workshop is designed as a cross-disciplinary knowledge-exchange session in which we will:
- Show how to convert your AV material into a suitable format and then apply automatic speech recognition (ASR) via the OH portal
- Demonstrate the correction of the ASR results and the annotation of the resulting text
- Demonstrate text analysis with Voyant, including how to generate visualisations
- Demonstrate the possibility of emotion extraction with openSMILE
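The first step above, converting AV material into a format ASR services accept, typically means extracting the audio track as mono 16 kHz WAV. A minimal sketch of that preprocessing, assuming ffmpeg is installed and using illustrative file names (the exact format requirements of the OH portal may differ):

```python
# Sketch: build an ffmpeg command that converts an AV file to mono
# 16 kHz WAV, a common input format for ASR services.
# File names here are illustrative, not from the workshop materials.
import subprocess

def build_ffmpeg_cmd(src, dst):
    """Return the ffmpeg argument list for an ASR-friendly conversion."""
    return [
        "ffmpeg", "-y",        # overwrite output if it exists
        "-i", src,             # input video or audio file
        "-ac", "1",            # downmix to a single (mono) channel
        "-ar", "16000",        # resample to 16 kHz
        dst,
    ]

def convert(src, dst="interview.wav"):
    """Run the conversion; requires ffmpeg on the PATH."""
    subprocess.run(build_ffmpeg_cmd(src, dst), check=True)
    return dst
```

For example, `convert("interview.mp4")` would produce `interview.wav`, ready for upload to an ASR service.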
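Voyant runs in the browser, but the kind of word-frequency view it produces over a transcript can be approximated offline with the Python standard library. A minimal sketch, with an illustrative stop-word list (Voyant's own stop-word handling is more extensive):

```python
# Sketch: top-term frequency counting over a transcript, the basic
# statistic behind Voyant's word cloud and terms list.
# The stop-word list is a small illustrative sample.
from collections import Counter
import re

STOP_WORDS = {"the", "a", "an", "and", "of", "to", "in", "was", "we", "i"}

def top_terms(transcript, n=5):
    """Return the n most frequent non-stop-word terms."""
    words = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    return counts.most_common(n)
```

Applied to a corrected ASR transcript, this yields (term, count) pairs that could feed a plotting library for simple visualisations.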