| Student: | Li, Kan-Ru (李侃儒) |
|---|---|
| Title: | Personalized Music Retrieval Based on Emotions / Situations |
| Advisor: | Chen, Arbee (陳良弼) |
| Degree: | Master |
| Department: | College of Science, Department of Computer Science |
| Year of Publication: | 2006 |
| Graduating Academic Year: | 94 |
| Language: | Chinese |
| Pages: | 53 |
| Keywords: | music retrieval, emotion, situation, personalization, music representation, automatic feature re-weighting |
In this thesis, we propose an approach for personalized music retrieval based on emotions / situations. The main idea is to use the user's feedback to identify the feature-level properties of the music that matches that user's emotions / situations, thereby achieving personalization. To represent the properties of music more precisely, we propose a music representation based on the statistics of feature distributions. We also define a two-phase automatic feature re-weighting method that determines the discriminative power of each feature under different emotions / situations. Finally, we discuss how musical properties and timbre affect different emotions / situations, and analyze the relationship between music and emotions / situations.
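The abstract names two ingredients: representing each piece of music by statistics (distributions) of its feature values, and re-weighting features from user feedback so that features discriminative for a given emotion / situation count more. The thesis develops its own two-phase method in later chapters; as a rough illustration of the general idea only, here is a minimal sketch in which all function names and the inverse-variability weighting heuristic are assumptions, not the thesis's actual algorithm.

```python
import numpy as np

def representation(feature_samples, bins=8, value_range=(0.0, 1.0)):
    """Represent a piece of music by the histogram (distribution
    statistics) of each feature's values across its segments.
    feature_samples: array of shape (n_segments, n_features)."""
    return np.stack([
        np.histogram(col, bins=bins, range=value_range)[0] / len(col)
        for col in feature_samples.T
    ])  # shape: (n_features, bins)

def reweight(relevant_reps):
    """Weight features by consistency across the pieces the user marked
    as matching an emotion / situation: a feature whose distribution
    varies little among relevant pieces is assumed more discriminative
    (an illustrative heuristic, not the thesis's two-phase method)."""
    reps = np.stack(relevant_reps)           # (n_pieces, n_features, bins)
    spread = reps.std(axis=0).sum(axis=1)    # per-feature variability
    w = 1.0 / (spread + 1e-9)
    return w / w.sum()                       # normalized feature weights

def similarity(query_rep, candidate_rep, weights):
    """Weighted similarity between two pieces: per-feature histogram
    intersection, combined with the learned feature weights."""
    inter = np.minimum(query_rep, candidate_rep).sum(axis=1)
    return float((weights * inter).sum())
```

With such weights, retrieval for an emotion / situation would rank candidate pieces by `similarity` against the pieces the user already confirmed, and each new round of feedback would update the weights.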
Chapter 1 Introduction
1.1 Background
1.2 Music Retrieval
1.2.1 Annotation-based music information retrieval
1.2.2 Content-based music information retrieval
1.2.2.1 Query by Music Segments: An Efficient Approach for Song Retrieval [10]
1.2.2.2 An Approximate String Matching Algorithm for Content-Based Music Data Retrieval [11]
1.2.3 Emotion/situation-based music information retrieval
1.2.3.1 Classifying music with models built from musical and psychological findings [2][3]
1.2.3.1.1 Automatic Mood Detection from Acoustic Music Data [2]
1.2.3.1.2 Music Information Retrieval by Detecting Mood via Computational Media Aesthetics [3]
1.2.3.2 Building lyric classification rules from pre-classified lyric data with methods such as decision trees [4]
1.2.3.2.1 Disambiguating Music Emotion Using Software Agents [4]
1.2.3.3 Building music classification rules from pre-classified music data with methods such as SVM [5]
1.2.3.3.1 Detecting Emotion in Music [5]
1.4 The Importance of Personalized Music Emotion
1.5 System Architecture
1.6 Organization of the Thesis
Chapter 2 Music Retrieval and Similarity Computation
Chapter 3 Real-Time Personalization
Chapter 4 Experimental Results
4.1 Definitions of Emotions and Situations
4.2 System Implementation and Overview
4.3 Experimental Results and Data
Chapter 5 Conclusion
References
[1] S. Sadie and A. Latham, 2004. The Cambridge Illustrated Guide to Music, Chinese edition translated by 孟憲福, reviewed by 殷于涵. Taipei: 果實出版 / 城邦文化.
[2] D. Liu, L. Lu, and H. J. Zhang, 2003. "Automatic Mood Detection from Acoustic Music Data." International Symposium on Music Information Retrieval.
[3] Y. Feng, Y. Zhuang, and Y. Pan, 2003. "Music Information Retrieval by Detecting Mood via Computational Media Aesthetics." IEEE/WIC International Conference on Web Intelligence, 235-241.
[4] D. Yang and W. S. Lee, 2004. "Disambiguating Music Emotion Using Software Agents." International Symposium on Music Information Retrieval, 218-223.
[5] T. Li and M. Ogihara, 2003. "Detecting Emotion in Music." International Symposium on Music Information Retrieval, 239-240.
[6] P. G. Zimbardo, 1990. Psychology, Chinese edition translated by 游恆山. Taipei: 五南圖書公司.
[7] 王毓雅. "How to Teach Music to Young Children: From the Perspective of Children's Development of Musical Concepts." 國教新知, Vol. 47, No. 3.
[8] S. Khalfa, I. Peretz, J.-P. Blondin, and M. Robert, 2002. "Event-related skin conductance responses to musical emotions in humans." Neuroscience Letters, 328, 145-149.
[9] N. Gosselin, I. Peretz, M. Noulhiane, D. Hasboun, C. Beckett, M. Baulac, and S. Samson, 2005. "Impaired recognition of scary music following unilateral temporal lobe excision." Brain, 128, 628-640.
[10] A. Chen, M. Chang, J. Chen, J. L. Hsu, C. H. Hsu, and Spot Y. S. Hua, 2000. "Query by Music Segments: An Efficient Approach for Song Retrieval." Proceedings of the IEEE International Conference on Multimedia and Expo, 873-876.
[11] C. C. Liu, J. L. Hsu, and A. Chen, 1999. "An Approximate String Matching Algorithm for Content-Based Music Data Retrieval." Proceedings of the IEEE International Conference on Multimedia Computing and Systems, 451-456.
[12] J. Russell, 1980. "A circumplex model of affect." Journal of Personality and Social Psychology, 39, 1161-1178.
[13] 篠田知璋 and 加藤美知子 (eds.), supervised by 日野原重明. Introduction to Standard Music Therapy, Chinese edition translated by 陳美如, edited by 洪小萍 and 石曉蓉. Taipei: 五南圖書公司.
[14] 謝俊逢. Music Therapy: Theory and Methods. Taipei: 大陸書局.