A novel, rigorous non-linear methodology based on fractal analysis, capable of quantifying the complexity of waveforms, provided the theoretical foundation of our work and was used to analyze the internal acoustic dynamics of digitized audio signals. The test data included speech (non-musical), drone (periodically musical) and music samples of Indic Raga-s (of differing musicality). It was found that the degree of complexity and multi-fractality (measured by the width of the multi-fractal spectrum) changes from the beginning towards the end of each audio sample; however, the range of this variation differs from case to case. The normalized width of the multi-fractal spectrum is strikingly different across sample types, enabling the categorization of various moods and mental states.
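The text does not name the specific estimator used to obtain the multi-fractal spectrum, so the sketch below assumes a standard multifractal detrended fluctuation analysis (MFDFA), a common choice for this kind of signal. The function name mfdfa_width and all parameter defaults are ours, not the authors'; the reverse-segmentation pass of the canonical MFDFA algorithm is omitted for brevity.

```python
import numpy as np

def mfdfa_width(signal, scales, q_values, poly_order=1):
    """Estimate the multifractal spectrum width of a 1-D signal via MFDFA.

    A minimal sketch: profile the signal, measure detrended fluctuations
    at several scales, extract generalized Hurst exponents h(q), and take
    the width of the singularity spectrum obtained by Legendre transform.
    """
    signal = np.asarray(signal, dtype=float)
    # Step 1: profile Y(i) = cumulative sum of the mean-removed signal.
    profile = np.cumsum(signal - signal.mean())
    N = len(profile)

    log_Fq = np.zeros((len(q_values), len(scales)))
    for si, s in enumerate(scales):
        n_seg = N // s
        t = np.arange(s)
        # Step 2: residual variance of a local polynomial fit per segment.
        F2 = np.empty(n_seg)
        for v in range(n_seg):
            seg = profile[v * s:(v + 1) * s]
            coeffs = np.polyfit(t, seg, poly_order)
            F2[v] = np.mean((seg - np.polyval(coeffs, t)) ** 2)
        # Step 3: q-th order fluctuation function (log form; q = 0 is
        # the usual logarithmic-average limit).
        for qi, q in enumerate(q_values):
            if np.isclose(q, 0.0):
                log_Fq[qi, si] = 0.5 * np.mean(np.log(F2))
            else:
                log_Fq[qi, si] = np.log(np.mean(F2 ** (q / 2.0))) / q

    # Step 4: generalized Hurst exponents h(q) from log-log slopes.
    log_s = np.log(scales)
    h = np.array([np.polyfit(log_s, row, 1)[0] for row in log_Fq])

    # Step 5: Legendre transform tau(q) -> singularity spectrum f(alpha).
    q = np.asarray(q_values, dtype=float)
    tau = q * h - 1.0
    alpha = np.gradient(tau, q)      # singularity strengths
    # f_alpha = q * alpha - tau      # full spectrum, if needed
    return alpha.max() - alpha.min()  # spectrum width W
```

A wider spectrum (larger W) indicates richer multifractality, which is the quantity the study tracks from the beginning to the end of each sample. The normalization scheme behind the "normalized width" mentioned above is not specified here, so the sketch reports the raw width.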

From the experiments conducted so far, we infer that the degree of complexity increases with increasing musicality; accordingly, the width of the multi-fractal spectrum grows as we move from the drone towards the music samples. The degree of complexity also varies widely between samples with mutually exclusive musicality and mood content. These findings suggest that we may hereafter be able to differentiate and isolate the musicality and moods of different Raga-s that are mutually inclusive in some aspects yet different in nature. Moreover, this approach is non-linear, involves fewer approximations, and analyses the raw amplitude waveforms of the signals directly.
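As a purely illustrative check of the drone-versus-music contrast described above, the snippet below applies the mfdfa_width sketch from the previous block to two synthetic stand-ins: a quasi-periodic two-tone "drone" and a more irregular multi-component signal. The signals, scales, and q-range are all invented for illustration and are not the study's data; one would expect the more complex signal to yield the larger width.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(2 ** 14) / 44100.0  # ~0.37 s at 44.1 kHz

# Quasi-periodic "drone": two harmonically related tones plus a little noise.
drone = (np.sin(2 * np.pi * 110 * t)
         + 0.3 * np.sin(2 * np.pi * 220 * t)
         + 1e-3 * rng.standard_normal(len(t)))

# More irregular stand-in: many inharmonic partials plus broadband noise.
complex_sig = sum(a * np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
                  for a, f in zip(rng.uniform(0.2, 1.0, 12),
                                  rng.uniform(80, 2000, 12)))
complex_sig += 0.2 * rng.standard_normal(len(t))

scales = np.unique(np.logspace(4, 10, 16, base=2).astype(int))
q_values = np.linspace(-5, 5, 21)

for name, sig in [("drone", drone), ("complex", complex_sig)]:
    print(name, mfdfa_width(sig, scales, q_values))
```

Since the method operates on the raw amplitude waveform, no spectral preprocessing is needed before the call; the choice of scales and q-range controls which time scales and fluctuation magnitudes dominate the estimated width.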