Speech-based emotion characterization using postures and gestures in CVEs
Dr. Rasika Ranaweera, Senior Lecturer/Dean, Faculty of Computing, ranaweera.r@nsbm.ac.lk
Abstract:
Collaborative Virtual Environments (CVEs) have become increasingly popular over the past two decades. Most CVEs use avatar systems to represent each user logged into a CVE session, and some avatar systems can express emotions through postures, gestures, and facial expressions. Previous studies have explored various approaches to conveying emotional states to the computer, including voice and facial movements. We propose a technique that detects emotions in a speaker's voice and animates avatars to reflect the extracted emotions in real time. The system was developed in "Project Wonderland," a Java-based open-source framework for creating collaborative 3D virtual worlds. Our prototype considers six primitive emotional states: anger, dislike, fear, happiness, sadness, and surprise. An existing emotion classification system, which uses short-time log frequency power coefficients (LFPC) as features and hidden Markov models (HMMs) as the classifier, was modified to build the emotion classification unit. The extracted emotions are used to activate existing avatar postures and gestures in Wonderland.
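To illustrate the feature-extraction step mentioned above, the following is a minimal sketch of computing short-time log frequency power coefficients (LFPC) in Python with NumPy. All parameter choices here (16 kHz sample rate, 400-sample frames, 160-sample hop, 12 log-spaced bands between 100 Hz and the Nyquist frequency) are illustrative assumptions, not the paper's actual settings: each frame's power spectrum is summed within log-spaced frequency subbands and the log of each band's power is taken.

```python
import numpy as np

def lfpc_features(signal, sr=16000, frame_len=400, hop=160, n_bands=12):
    """Sketch of short-time log frequency power coefficients (LFPC).

    Frames the signal, computes the power spectrum of each windowed
    frame, sums the power within log-spaced frequency subbands, and
    returns the log of each band's power as one feature vector per frame.
    Parameter defaults are hypothetical, not taken from the paper.
    """
    # Log-spaced band edges between 100 Hz and Nyquist (assumed range)
    edges = np.logspace(np.log10(100), np.log10(sr / 2), n_bands + 1)
    window = np.hamming(frame_len)
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
    feats = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * window
        spectrum = np.abs(np.fft.rfft(frame)) ** 2  # power spectrum
        # Sum power in each log-spaced subband, then take the log
        bands = [np.log(spectrum[(freqs >= lo) & (freqs < hi)].sum() + 1e-10)
                 for lo, hi in zip(edges[:-1], edges[1:])]
        feats.append(bands)
    return np.array(feats)

# Example: one second of noise yields an (n_frames, n_bands) feature matrix
x = np.random.default_rng(0).standard_normal(16000)
F = lfpc_features(x)
print(F.shape)  # -> (98, 12)
```

In an LFPC/HMM pipeline such as the one the abstract describes, one HMM would typically be trained per emotion class on sequences of these feature vectors, and an utterance would be assigned to the class whose model yields the highest likelihood.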