An engineering blog for engineers working in Audiovisuals, Telecommunications, Electronics, ICT, Computer Engineering, Multimedia and Telematics.

01 October 2010 | Posted by Redacción Ingeniería

CTMedia paper published in IEEE Transactions on Multimedia

A paper by CTMedia researcher Dr. Francesc Alías has been published in the October issue of IEEE Transactions on Multimedia. The paper, entitled "Reliable pitch marking of affective speech at peaks or valleys using restricted dynamic programming", appears in the journal's special issue on Multimodal Affective Interaction.

Complete reference:

Francesc Alías, Natàlia Munné, "Reliable pitch marking of affective speech at peaks or valleys using restricted dynamic programming", IEEE Transactions on Multimedia (special issue on Multimodal Affective Interaction), vol. 12, no. 6, pp. 481-489, October 2010.

Abstract:

The affective communication channel plays a key role in multimodal human-computer interaction. In this context, the generation of realistic talking-heads expressing emotions both in appearance and speech is of great interest. The synthetic speech of talking-heads is generally obtained from a text-to-speech (TTS) synthesizer. One of the dominant techniques for achieving high-quality synthetic speech is unit-selection TTS (US-TTS) synthesis. Affective US-TTS systems are driven by affective annotated speech databases. Since affective speech involves higher acoustic variability than neutral speech, achieving trustworthy speech labeling is a more challenging task. To that effect, this paper introduces a methodology for achieving reliable pitch marking on affective speech. The proposal adjusts the pitch marks at the signal peaks or valleys after applying a three-stage restricted dynamic programming algorithm. The methodology can be applied as a post-processing of any pitch determination and pitch marking algorithm (with any local criterion for locating pitch marks), or their merging. The experiments show that the proposed methodology significantly improves the results of the input state-of-the-art markers on affective speech.
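To give a feel for what "adjusting pitch marks at signal peaks or valleys" means, here is a minimal illustrative sketch in Python. Note that this is not the paper's method: the paper uses a three-stage restricted dynamic programming algorithm, whereas this toy example simply snaps each candidate pitch mark to the nearest local maximum (or minimum) within a small search window. All names (`snap_pitch_marks`, `radius`) are hypothetical.

```python
import numpy as np

def snap_pitch_marks(signal, marks, radius=20, use_peaks=True):
    """Snap each candidate pitch mark to the local peak (or valley)
    within +/- radius samples. Illustrative only: the published
    method uses restricted dynamic programming, not this greedy
    per-mark local search."""
    snapped = []
    n = len(signal)
    for m in marks:
        lo = max(0, m - radius)
        hi = min(n, m + radius + 1)
        window = signal[lo:hi]
        if use_peaks:
            snapped.append(lo + int(np.argmax(window)))
        else:
            snapped.append(lo + int(np.argmin(window)))
    return snapped

# Synthetic "voiced" signal: 80 Hz sine at 8 kHz sampling,
# so the pitch period is 100 samples and peaks fall at 25, 125, 225, ...
fs = 8000
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 80 * t)
rough_marks = [110, 205, 320]  # slightly misplaced candidate marks
print(snap_pitch_marks(signal, rough_marks))  # → [125, 225, 325]
```

In a real system the greedy per-mark choice can produce irregular mark-to-mark spacing; a dynamic programming formulation, as in the paper, can instead select the globally most consistent sequence of peaks or valleys.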
