

Emo-Spectre

A tool for multi-modal feature exploration in time-series data

Kok, P.

Research on storytelling has focused strongly on the analysis of verbal transcripts. Innovative methods of computational analysis increasingly allow for the analysis of nonverbal expressions of emotions, including prosodic features and facial expressions. The workshop familiarizes researchers with a tool for the analysis and visualization of multimodal expressions of emotions in audiovisual materials. It provides an overview of relevant interdisciplinary work in psychology and affective computing, the application of the tool in a research project on life stories of older persons, and a hands-on workshop where participants can experiment with the tool.
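As a rough illustration of the kind of multimodal time-series features such a tool works with, the sketch below computes two simple prosody-related signals, frame-wise RMS energy (a loudness proxy) and zero-crossing rate (a coarse pitch/voicing proxy), from raw audio samples. This is a minimal, hypothetical example in plain Python, not Emo-Spectre's actual implementation; the function name and parameters are invented for the sketch.

```python
import math

def frame_features(samples, sr, frame_ms=25, hop_ms=10):
    """Slide a window over the audio and compute per-frame
    RMS energy and zero-crossing rate (ZCR), two simple
    prosody-related time-series features.

    Hypothetical helper for illustration; returns a list of
    (time_seconds, rms, zcr) tuples.
    """
    frame = int(sr * frame_ms / 1000)  # window length in samples
    hop = int(sr * hop_ms / 1000)      # step between windows
    feats = []
    for start in range(0, len(samples) - frame + 1, hop):
        win = samples[start:start + frame]
        # RMS energy: root of the mean squared amplitude
        rms = math.sqrt(sum(x * x for x in win) / frame)
        # ZCR: fraction of adjacent sample pairs that change sign
        zcr = sum(1 for a, b in zip(win, win[1:]) if a * b < 0) / (frame - 1)
        feats.append((start / sr, rms, zcr))
    return feats

# Synthetic test signal: a 200 Hz tone whose loudness ramps up,
# so RMS should rise over time while ZCR stays roughly constant.
sr = 8000
samples = [(i / sr) * math.sin(2 * math.pi * 200 * i / sr) for i in range(sr)]
feats = frame_features(samples, sr)
```

Plotting such per-frame features against the video timeline (alongside, e.g., facial-expression scores) is the kind of exploration the tool supports.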

The presentation (shown here) was part of the online workshop "Multimodal visualization of emotion expression", organized by the University of Twente in collaboration with the Netherlands eScience Center. The workshop was part of the UTwente project "Emotion Recognition in Dementia".

The source code is available on GitHub.