A modified version of a non-linear analysis technique, developed by the authors for the analysis of speech signals, is introduced here. Using this method, it is shown that the extracted parameter can reliably classify speech signals produced under two elementary emotions, 'anger' and 'sadness'. This parameter is also proposed as a means of assessing mental dysfunctions such as depression and suicidal tendency.

Depression is a well-known cause of various mental health impairments, affecting the quality of our personal and social lives. Severe depression can lead to suicidal tendency, particularly in young people. Detection and assessment of depression and suicidal tendency are difficult tasks because of their complex clinical attributes. Their key symptoms include altered emotions, which are eventually reflected in speech. Emotion detection from speech signals is therefore an important field of research in Human-Computer Interaction (HCI). In this work we extract a novel feature from the nonlinear and non-stationary aspects of speech signals produced under different emotions. We introduce a modified version of the Visibility Graph analysis technique for the analysis of speech signals and extract a quantitative feature named mPSVG (Modified Power of Scale-freeness of Visibility Graph). This parameter efficiently classifies the contrasting emotions of anger and sadness, and we propose to use it as a precursor for assessing suicidal tendency. The modified Visibility Graph analysis is computationally efficient and suitable for real-time applications, while still retaining the nonlinear and non-stationary character of the speech signal. This is a constructive step towards the assessment of suicidal tendency and other cognitive disorders using nonlinear, non-stationary analysis of speech.
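The abstract does not detail the authors' modification, but the underlying technique it builds on is the standard natural Visibility Graph: each sample of the time series becomes a node, two nodes are linked when the straight line joining them passes above every intermediate sample, and the Power of Scale-freeness (PSVG) is the exponent of the power-law fit to the resulting degree distribution. A minimal sketch of that baseline pipeline follows; the function names are illustrative and this is not the paper's mPSVG computation.

```python
import math

def visibility_graph_degrees(series):
    """Natural visibility graph of a time series: samples a < b are
    linked iff every intermediate sample c lies strictly below the
    straight line joining (a, y[a]) and (b, y[b]).  Returns the
    degree of each node (O(n^2) pairwise check, fine for short
    speech frames)."""
    n = len(series)
    degree = [0] * n
    for a in range(n):
        for b in range(a + 1, n):
            ya, yb = series[a], series[b]
            visible = all(
                series[c] < yb + (ya - yb) * (b - c) / (b - a)
                for c in range(a + 1, b)
            )
            if visible:
                degree[a] += 1
                degree[b] += 1
    return degree

def power_of_scale_freeness(degrees):
    """Estimate lambda in P(k) ~ k^(-lambda): ordinary least-squares
    slope of the empirical degree distribution on log-log axes."""
    counts = {}
    for k in degrees:
        counts[k] = counts.get(k, 0) + 1
    total = len(degrees)
    ks = sorted(k for k in counts if k > 0)
    xs = [math.log(k) for k in ks]
    ys = [math.log(counts[k] / total) for k in ks]
    xbar = sum(xs) / len(xs)
    ybar = sum(ys) / len(ys)
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
            sum((x - xbar) ** 2 for x in xs)
    return -slope  # positive for a scale-free degree distribution
```

In practice the signal would be framed, each frame converted to a visibility graph, and the fitted exponent used as the per-frame feature; a three-sample series such as [1, 2, 3] yields degrees [1, 2, 1], since the middle point blocks visibility between the endpoints.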

Authors: Susmita Bhaduri, Agniswar Chakrabarti and Dipak Ghosh
Journal of Neurology and Neuroscience (Probable Impact Factor: 1.45)
DOI: 10.21767/2171-6625.1000100
J Neurol Neurosci 2016: Volume 7, Issue 3
Status: Published
Area: Speech Analysis