Tuesday, June 23
4:45 PM-5:00 PM
Room 4

Virtual Facial Expression Analysis: Analyzing Nonverbal Communication with the Interview and Presentation Assistant (IPA) 4.0

Brief Paper: Live Presentation ID: 56690
  1. Todd Cooper
    University of Toyama
  2. Akira Tsukada
    National Institute of Technology, Toyama College
  3. Miki Takashima
    National Institute of Technology, Toyama College

Abstract: Preparing EFL (English as a Foreign Language) learners for public speeches, presentations, and interviews in their non-native tongue requires personalized, tailored advice from educators. These activities demand an advanced level of teaching and learning, as they incorporate both verbal (VC) and nonverbal communication (NVC) skills. However, in Japan, where class sizes average forty students, it is difficult for one teacher to give adequate feedback on either VC or NVC, resulting in a missed learning opportunity. Our system uses the Microsoft Kinect 3D sensor camera to address one of the most important elements of nonverbal communication: facial expression (FE). In this paper, we focus on using the Kinect to analyze speakers' FE while they respond to actual job interview questions from local and regional companies in Japan. A standard video camera simultaneously records the interviewee, and this video is then viewed and scored by a human judge. The data from the Kinect analysis are compared with the human judge's scores, yielding a system that virtually replicates interview scoring. With our system, we extend the personalized, tailored NVC advice normally found in small classes or one-on-one learning environments to large classrooms.
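The comparison step described in the abstract, validating automated facial-expression metrics against human judge scores, could be sketched as follows. This is an illustrative example only, not the authors' implementation; the feature values, score scale, and the use of Pearson correlation as the agreement measure are all assumptions.

```python
# Illustrative sketch (hypothetical data): checking agreement between a
# Kinect-derived facial-expression (FE) metric and a human judge's scores.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-interviewee values (not from the paper):
kinect_scores = [0.62, 0.48, 0.75, 0.30, 0.55]  # automated FE metric
judge_scores = [4.0, 3.5, 4.5, 2.5, 3.5]        # human rating, 1-5 scale

r = pearson(kinect_scores, judge_scores)
print(f"Agreement between Kinect FE metric and judge score: r = {r:.3f}")
```

A high correlation on held-out interviews would support using the automated metric as a stand-in for human scoring in large classes, which is the scaling goal the abstract states.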

No presider for this session.
