Wednesday, March 23
5:30 PM-7:00 PM
EDT
Harborside Center

Quantifying the Quality of Facial Expressions

Poster/Demo ID: 48769
  1. Tomoki Matsunaga
    National Institute of Technology, Toyama College
  2. Todd Cooper
    National Institute of Technology, Fukui College
  3. Akira Tsukada
    National Institute of Technology, Toyama College

Abstract: English education in Japan does not provide enough class time for students to learn to speak and communicate effectively. To address this problem, we have been developing the Virtual Interview System (VIS), which virtualizes an interview test between students and teachers and increases speaking practice time in language class. The VIS focuses on speaking and listening, but nonverbal communication, such as gestures, facial expressions, and tone of voice, is just as important. Unlike spelling and punctuation, however, nonverbal communication is highly subjective: how can we assess its quality? In this report, we highlight our preliminary research into quantifying nonverbal communication, specifically facial expressions. We used the Kinect 3D motion sensor camera to identify and quantify the quality, or richness, of facial expressions, in an effort to help learners become better speakers and communicators of English.
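The abstract does not detail how richness is scored; the following is a minimal sketch of one way a richness score could be computed from per-frame facial feature values exported by a face-tracking session. The feature names, data layout, and variance-based formula are illustrative assumptions, not the authors' implementation or the Kinect API.

    # Illustrative sketch only: score how much tracked facial features vary
    # over an interview answer. All names and the formula are assumptions.
    from statistics import pvariance
    from typing import Dict, List

    def expression_richness(frames: List[Dict[str, float]]) -> float:
        """Return an expression-richness score for a sequence of frames.

        `frames` holds per-frame feature values normalized to 0..1, e.g.
        {"jaw_open": 0.1, "lip_corner_pull": 0.6, "brow_raise": 0.3}.
        A flat, unchanging face scores near 0; a face that moves through
        many expressions scores higher.
        """
        if len(frames) < 2:
            return 0.0
        features = frames[0].keys()
        # Average the per-feature variance across all tracked features.
        per_feature_variance = [
            pvariance([frame.get(name, 0.0) for frame in frames])
            for name in features
        ]
        return sum(per_feature_variance) / len(per_feature_variance)

    # Example: a nearly static face vs. a more expressive one.
    static = [{"jaw_open": 0.1, "lip_corner_pull": 0.1, "brow_raise": 0.1}] * 30
    lively = [{"jaw_open": 0.1 + 0.02 * i,
               "lip_corner_pull": (0.05 * i) % 1.0,
               "brow_raise": 0.8 if i % 10 < 3 else 0.1} for i in range(30)]
    print(expression_richness(static), expression_richness(lively))

In this sketch the static sequence scores 0.0 and the lively one scores higher; any real system would also need to handle tracking dropouts and to validate the score against human ratings of expressiveness.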

Presider: Mehmet Ali Ozer, New Mexico State University
