arXiv Open Access 2018

Multi-modal Approach for Affective Computing

Siddharth Siddharth, Tzyy-Ping Jung, Terrence J. Sejnowski

Abstract

Throughout the past decade, many studies have classified human emotions using only a single sensing modality, such as face video, electroencephalogram (EEG), electrocardiogram (ECG), or galvanic skin response (GSR). The results of these studies are constrained by the limitations of each modality, such as the absence of physiological biomarkers in face-video analysis, the poor spatial resolution of EEG, and the poor temporal resolution of GSR. Scant research has been conducted to compare the merits of these modalities and to understand how best to use them individually and jointly. Using the multi-modal AMIGOS dataset, this study compares the performance of human emotion classification across multiple computational approaches applied to face videos and various bio-sensing modalities. Using a novel method for compensating the physiological baseline, we show an increase in the classification accuracy of the various approaches we use. Finally, we present a multi-modal emotion-classification approach in the domain of affective-computing research.
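The baseline-compensation idea mentioned above can be illustrated with a toy sketch: features extracted during each emotion-eliciting trial are normalized against features extracted from a pre-stimulus rest period, so that a subject's resting physiological offset does not dominate the classifier's input. The function and variable names here are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def compensate_baseline(trial_features, baseline_features):
    """Subtract the mean baseline feature vector from each trial's features.

    trial_features    : (n_trials, n_features) array from stimulus periods
    baseline_features : (n_windows, n_features) array from a pre-stimulus rest period
    """
    baseline_mean = baseline_features.mean(axis=0)
    return trial_features - baseline_mean

# Toy example: two trials, three features, with a constant physiological offset.
trials = np.array([[5.0, 6.0, 7.0],
                   [5.5, 6.5, 7.5]])
baseline = np.array([[5.0, 6.0, 7.0],
                     [5.0, 6.0, 7.0]])
corrected = compensate_baseline(trials, baseline)
print(corrected)  # first trial cancels to zeros; second keeps only the stimulus-driven change
```

Subtracting a per-subject baseline in this way is one common normalization choice; the paper itself should be consulted for the exact compensation method used.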


Citation

Siddharth, S., Jung, T.-P., Sejnowski, T.J. (2018). Multi-modal Approach for Affective Computing. https://arxiv.org/abs/1804.09452

Journal Information
Publication Year: 2018
Language: en
Source Database: arXiv
Access: Open Access ✓