Friday, June 27, 2008

Lost the remote? Use your face


A researcher has developed a way to use facial expressions to speed up and slow down video playback.

Jacob Whitehill, a computer science Ph.D. student at UC San Diego's Jacobs School of Engineering, is leading the project, which combines facial expression recognition software with automated tutoring technology. It is part of a larger effort to use automated facial expression recognition to make robots more effective teachers.

The researchers recently conducted a pilot study with eight people demonstrating that the facial expressions people make while watching recorded video lectures can be used to predict both a person's preferred viewing speed and how difficult the person perceives the lecture to be at each moment in time.

"If I am a student dealing with a robot teacher and I am completely puzzled and yet the robot keeps presenting new material, that's not going to be very useful to me. If, instead, the robot stops and says, 'Oh, maybe you're confused,' and I say, 'Yes, thank you for stopping,' that's really good," said Whitehill in a release.

Recent advances in pattern recognition, computer vision, and machine learning have made real-time automatic facial expression recognition a viable resource for intelligent tutoring systems (ITS), the researchers added. As the technology improves in accuracy, the range of its applications will grow. One application currently under development is a "smart video player" that modulates video speed in real time based on the user's facial expression, so that the rate of lesson presentation is optimal for the current user, the researchers said.
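To make the idea concrete, here is a minimal sketch in Python of what such a feedback loop might look like, assuming a per-frame difficulty score from an expression recognizer and a hook into a video player's speed control. The names expression_difficulty and set_playback_rate are hypothetical placeholders, not part of the project's actual software.

    import time

    def expression_difficulty(frame) -> float:
        """Hypothetical: return a 0.0-1.0 score of perceived difficulty,
        estimated from a webcam frame by an expression recognizer."""
        raise NotImplementedError

    def set_playback_rate(rate: float) -> None:
        """Hypothetical hook into a video player's speed control."""
        raise NotImplementedError

    def control_loop(capture_frame, base_rate=1.0, gain=0.5,
                     min_rate=0.5, max_rate=1.5, interval=0.25):
        """Slow the video when the viewer looks puzzled, speed it up
        when the material appears easy. Smoothing avoids jitter."""
        smoothed = 0.0
        while True:
            difficulty = expression_difficulty(capture_frame())
            smoothed = 0.8 * smoothed + 0.2 * difficulty  # exponential smoothing
            rate = base_rate + gain * (0.5 - smoothed)    # harder -> slower
            set_playback_rate(max(min_rate, min(max_rate, rate)))
            time.sleep(interval)

The clamping and smoothing are design choices of the sketch, not details reported by the researchers; any real player would need them tuned to avoid distracting speed oscillations.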

In the pilot study, the researchers said, the facial movements people made when they perceived the lecture to be difficult varied widely from person to person. Most of the eight test subjects, however, blinked less frequently during difficult parts of the lecture than during easier portions, a pattern supported by findings in psychology.
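As an illustration of how blink rate could serve as a coarse difficulty signal, the sketch below tracks blinks per minute over a sliding window. The per-frame blink input is assumed to come from a separate detector (for example, one based on eye aspect ratio), which is not shown here; nothing in this sketch comes from the study itself.

    from collections import deque

    class BlinkRateMonitor:
        def __init__(self, fps: float, window_seconds: float = 30.0):
            self.window = deque(maxlen=int(fps * window_seconds))
            self.fps = fps

        def update(self, blink_detected: bool) -> float:
            """Record one frame; return blinks per minute over the window."""
            self.window.append(1 if blink_detected else 0)
            seconds = len(self.window) / self.fps
            return sum(self.window) / seconds * 60.0 if seconds else 0.0

A drop in blinks per minute relative to the viewer's own baseline could then be read as a hint that the current passage feels difficult, consistent with the pilot finding above.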

One of the next steps for the project is to determine which facial movements a given person naturally makes when exposed to difficult or easy lecture material. From there, the researchers could train a user-specific model that predicts when a lecture should be sped up or slowed down based on the spontaneous facial expressions that person makes.
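A user-specific model of this kind could be as simple as a regression from per-frame expression features to the playback rate the user actually chose. The sketch below assumes such features (for example, facial action unit intensities) have already been extracted and logged alongside the chosen rates; the file names and feature choices are illustrative, not taken from the study.

    import numpy as np
    from sklearn.linear_model import Ridge

    # X: (n_frames, n_features) expression features for one user
    # y: (n_frames,) the playback rate the user selected at each moment
    # Both files are hypothetical stand-ins for logged session data.
    X = np.load("user_features.npy")
    y = np.load("user_rates.npy")

    model = Ridge(alpha=1.0).fit(X, y)

    def preferred_rate(features: np.ndarray) -> float:
        """Predict this user's preferred playback rate from current
        expression features, clipped to a sensible range."""
        return float(np.clip(model.predict(features.reshape(1, -1))[0], 0.5, 1.5))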

The goal of UC San Diego's Machine Perception Laboratory is to gain insight into how the brain works by developing systems that perceive and interact with humans in real time using natural communication channels. The researchers are also developing algorithms that let robots learn to interact with people on their own. Applications include personal robots, perceptive tutoring systems, and systems for clinical assessment, monitoring, and intervention.
