This paper presents the validation of the expressive content of an acted corpus produced for use in speech synthesis, since this kind of acted emotional speech can lack authenticity. The goal is to obtain an automatic classifier able to prune utterances that are poor from an expressiveness point of view. The results of a previous subjective test are used to train a multistage emotion identification system based on statistical features computed from speech prosody and voice quality. Finally, the system provides a set of utterances to be checked and, if appropriate, definitively eliminated.
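The pruning idea described above can be sketched as follows. This is not the authors' system but a minimal illustration under assumed details: a nearest-centroid classifier over per-utterance prosodic statistics (the feature names, values, and utterance ids are all hypothetical), where utterances whose predicted emotion disagrees with the intended acted emotion are flagged for manual review.

```python
import math

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def train(labelled):
    """labelled: dict mapping emotion -> list of feature vectors
    (e.g. mean F0, F0 range, mean energy) from utterances whose
    expressive content was validated by listeners."""
    return {emo: centroid(vecs) for emo, vecs in labelled.items()}

def classify(model, vec):
    """Predict the emotion whose centroid is closest to vec."""
    return min(model, key=lambda emo: distance(model[emo], vec))

def flag_for_review(model, utterances):
    """utterances: list of (utterance_id, intended_emotion, features).
    Returns ids whose predicted emotion differs from the intended one,
    i.e. candidates for elimination from the corpus."""
    return [uid for uid, intended, vec in utterances
            if classify(model, vec) != intended]

# Illustrative prosodic statistics: [mean F0 (Hz), F0 range (Hz), energy]
training = {
    "neutral": [[120, 30, 0.40], [125, 35, 0.45]],
    "happy":   [[200, 90, 0.80], [210, 95, 0.85]],
}
model = train(training)

# "utt3" was acted as happy but its prosody looks neutral -> flagged
corpus = [("utt1", "neutral", [122, 32, 0.42]),
          ("utt2", "happy",   [205, 92, 0.82]),
          ("utt3", "happy",   [130, 40, 0.50])]
print(flag_for_review(model, corpus))  # → ['utt3']
```

In practice such a system would combine several classification stages and many more prosodic and voice-quality features, but the flagging logic (disagreement between intended and recognized emotion) is the same.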