Music Based Mood Classification
Satyapal Yadav, Akash Saxena "Music Based Mood Classification". International Journal of Computer Trends and Technology (IJCTT) V48(3):139-147, June 2017. ISSN:2231-2803. www.ijcttjournal.org. Published by Seventh Sense Research Group.
Abstract -
Music is the pleasant sound (vocal or instrumental) that leads us to experience harmony and a higher sense of happiness. Music is one of the fine arts. Like other forms of art, it requires creative and technical skill and the power of imagination. As dance is an artistic expression of movement and painting of colours, so music is of sounds. What a pretty sight is to the eyes, aroma is to the nose, a delicious dish is to the palate and a soft touch is to the skin, so music is to the ears. We most often choose to listen to a song or piece of music that best fits our mood at that instant. In spite of this strong correlation, most music software available today still lacks the facility of mood-aware play-list generation. This increases the time music listeners spend manually choosing a list of songs suiting a particular mood or occasion, which could be avoided by annotating songs with the relevant emotion category they convey. The problem, however, lies in the overhead of manual annotation of music with its corresponding mood, and the challenge is to identify this aspect automatically and intelligently. Our focus is specifically on Indian popular Hindi songs. We have analyzed various data classification algorithms in order to learn, train and test the model representing the moods of these audio songs, and we have developed an open-source framework for the same. We have achieved a satisfactory precision of 70% to 75% in identifying the mood underlying Indian popular music by introducing a bagging (ensemble) of random forests, experimented over a list of 4600 audio clips.
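The paper itself does not reproduce code; purely as a hedged illustration of the bagging-of-random-forests approach named in the abstract, the sketch below trains scikit-learn's BaggingClassifier with a RandomForestClassifier base learner. The synthetic feature vectors and four mood classes are assumptions for demonstration only (in practice the features would be extracted from audio with a tool such as jAudio or Marsyas, as cited in the references), and the parameter values are illustrative, not the authors' actual configuration.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical setup: each audio clip is represented by a feature
# vector (e.g. timbral/rhythmic descriptors) labelled with a mood.
rng = np.random.default_rng(0)
n_clips, n_features = 400, 20
X = rng.normal(size=(n_clips, n_features))
# Four illustrative mood classes (e.g. happy, sad, calm, energetic).
y = rng.integers(0, 4, size=n_clips)
# Shift the features per class so the synthetic task is learnable.
X += y[:, None] * 0.5

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Bagging ensemble whose base learner is itself a random forest,
# mirroring the "bagging of random forest" idea from the abstract.
model = BaggingClassifier(
    RandomForestClassifier(n_estimators=50, random_state=0),
    n_estimators=10,
    random_state=0)
model.fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
```

On real data, `accuracy` would be compared against single-classifier baselines to confirm that the extra layer of bagging actually improves mood prediction.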
References
[1] Kate Hevner, (1936), "Experimental studies of the elements of expression in music", American Journal of Psychology, 48:246-268.
[2] Paul R. Farnsworth, (1958), "The Social Psychology of Music", The Dryden Press.
[3] Mirenkov, N., Kanev, K., Takezawa, H., (2008), "Quality of Life Supporters Employing Music Therapy", Advanced Information Networking and Applications - Workshops (AINAW).
[4] Dalibor Mitrovic, Matthias Zeppelzauer, Horst Eidenberger, (2007), "Analysis of the Data Quality of Audio Descriptions of Environmental Sounds", Journal of Digital Information Management, 5(2):48.
[5] Capurso, A., Fisichelli, V. R., Gilman, L., Gutheil, E. A., Wright, J. T., (1952), "Music and Your Emotions", Liveright Publishing Corporation.
[6] Weihs, C., Ligges, U., Morchen, F., Mullensiefen, D., (2007), "Classification in music research", Advances in Data Analysis and Classification, vol. 1, no. 3, pp. 255-291.
[7] Scaringella, N., Zoia, G., Mlynek, D., (2006), "Automatic genre classification of music content: a survey", IEEE Signal Processing Magazine, vol. 23, no. 2, pp. 133-141.
[8] Zhouyu Fu, Guojun Lu, Kai Ming Ting, Dengsheng Zhang, (2011), "A Survey of Audio-Based Music Classification and Annotation", IEEE Transactions on Multimedia, vol. 13, no. 2.
[9] Russell, J. A., (1980), "A circumplex model of affect", Journal of Personality and Social Psychology, 39:1161-1178.
[10] Thayer, R. E., (1989), "The Biopsychology of Mood and Arousal", New York: Oxford University Press.
[11] JungHyun Kim, Seungjae Lee, SungMin Kim, WonYoung Yoo, (2011), "Music Mood Classification Model Based on Arousal-Valence Values", ICACT 2011, ISBN 978-89-5519-155-4.
[12] Tsunoo, E., Akase, T., Ono, N., Sagayama, S., (2010), "Musical mood classification by rhythm and bass-line unit pattern analysis", Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[13] McEnnis, D., McKay, C., Fujinaga, I., Depalle, P., (2005), "jAudio: A feature extraction library", Proceedings of the International Conference on Music Information Retrieval. 6003.
[14] Dalibor Mitrovic, Matthias Zeppelzauer, Horst Eidenberger, (2007), "Analysis of the Data Quality of Audio Descriptions of Environmental Sounds", Journal of Digital Information Management, 5(2):48.
[15] Ekman, P., (1982), "Emotion in the Human Face", Cambridge University Press, second ed.
[16] Duda, R. O., Hart, P. E., (2000), "Pattern Classification", New York: Wiley.
[17] Marsyas, http://opihi.cs.uvic.ca/marsyas.
[18] Attribute-Relation File Format, http://www.cs.waikato.ac.nz/ml/weka/arff.html
[19] Weka, http://www.cs.waikato.ac.nz/ml/weka/