Author: 莊彥哲 (Chuang, Yan-Che)
Title: 深度學習於台灣指數期貨之應用：經驗模態分解下之長短期記憶神經網路建模
(Application of Deep Learning to Taiwan Index Futures: Long Short-Term Memory Neural Network Modeling Based on Empirical Mode Decomposition)
Advisor: 陳樹衡
Committee members: 戴中擎, 池秉聰
Degree: Master
Department: Department of Economics, College of Social Sciences
Year of publication: 2019
Academic year of graduation: 107
Language: Chinese
Pages: 53
Keywords: artificial intelligence; deep learning; neural networks; long short-term memory; recurrent neural networks; machine learning; K-nearest neighbors; empirical mode decomposition; intraday data; financial time series; trend prediction; day trading
DOI URL: http://doi.org/10.6814/NCCU201900955
Views: 186; Downloads: 12
Abstract:
The main goal of this thesis is to apply artificial intelligence to predicting the direction of financial time series, and to put those predictions to work in actual day trading on the futures market. Our approach builds on a deep learning model, the Long Short-Term Memory network (LSTM), combined with Empirical Mode Decomposition (EMD), which decomposes minute-frequency intraday futures data into meaningful frequency components. Predicting the direction of a financial time series has never been a simple task, chiefly because such series are non-stationary and serially correlated. We therefore combine EMD, which splits a time series into several independent components, each carrying a distinct frequency, with the LSTM network, whose long-term memory cell and write (input), forget, and output gates are designed for sequential data. The models' predictions are backtested on historical data; we then compute model performance and strategy performance and compare the results against a traditional machine learning algorithm, the K-Nearest Neighbors classifier (KNN). (In this thesis, "deep learning", meaning networks with hidden layers and many neurons, is distinguished from traditional statistical machine learning methods.) The experiments identify the best prediction-window lengths for EMD combined with LSTM and with KNN, show that EMD does improve medium- and short-term trend prediction for financial time series, and show that, under identical data processing, the LSTM deep learning model clearly outperforms the traditional KNN method.
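The EMD step described in the abstract can be sketched as follows. This is a minimal illustration, not the thesis's implementation: for brevity the envelopes use linear interpolation and a fixed number of sifting passes, whereas standard EMD (Huang et al., 1998) uses cubic-spline envelopes and a convergence-based stopping criterion. The synthetic "price" series is an assumption standing in for the minute-frequency futures data.

```python
import numpy as np

def local_extrema(x):
    """Indices of strict local maxima and minima of a 1-D array."""
    mx, mn = [], []
    for i in range(1, len(x) - 1):
        if x[i] > x[i - 1] and x[i] > x[i + 1]:
            mx.append(i)
        elif x[i] < x[i - 1] and x[i] < x[i + 1]:
            mn.append(i)
    return np.array(mx), np.array(mn)

def sift(x, n_sifts=8):
    """Extract one IMF by repeatedly subtracting the envelope mean."""
    t = np.arange(len(x))
    h = x.astype(float).copy()
    for _ in range(n_sifts):
        mx, mn = local_extrema(h)
        if len(mx) < 2 or len(mn) < 2:
            break
        upper = np.interp(t, mx, h[mx])   # upper envelope (linear sketch)
        lower = np.interp(t, mn, h[mn])   # lower envelope (linear sketch)
        h -= (upper + lower) / 2.0        # remove the local mean
    return h

def emd(x, max_imfs=6):
    """Decompose x into IMFs plus a residue, high to low frequency."""
    imfs, residue = [], x.astype(float).copy()
    for _ in range(max_imfs):
        mx, mn = local_extrema(residue)
        if len(mx) < 2 or len(mn) < 2:    # residue is (near-)monotonic: stop
            break
        imf = sift(residue)
        imfs.append(imf)
        residue = residue - imf
    return imfs, residue

# Synthetic "price": fast oscillation + slow oscillation + trend.
t = np.linspace(0, 1, 500)
price = np.sin(40 * np.pi * t) + 0.5 * np.sin(4 * np.pi * t) + 2.0 * t
imfs, residue = emd(price)

# The decomposition is exact by construction: sum of IMFs + residue == input.
assert np.allclose(sum(imfs) + residue, price)
print(f"extracted {len(imfs)} IMFs")
```

Each IMF is then a candidate input series for the LSTM or KNN classifier; in practice a mature implementation such as the PyEMD package would replace this sketch.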


    Abstract
    1. Introduction
    1.1 Research Motivation
    1.2 Contributions of This Thesis
    1.3 Organization of This Thesis
    2. Background and Literature Review
    2.1 Overview of Machine Learning Models for Classification
    2.2 Overview of Deep Learning Models for Classification
    2.3 Literature Review
    3. Methodology
    3.1 Data Processing
    3.1.1 Overview of Taiwan Index Futures Data
    3.1.2 Dataset Splitting and Labeling
    3.2 Machine Learning Algorithm: K-Nearest Neighbors (KNN)
    3.3 Deep Learning Algorithm: Long Short-Term Memory (LSTM)
    3.3.1 Recurrent Neural Networks (RNN)
    3.3.2 Long Short-Term Memory (LSTM)
    3.3.3 The LSTM Architecture Used in This Thesis
    4. Experiments
    4.1 Experimental Design
    4.2 Evaluation Metrics
    4.2.1 Model Performance Metrics
    4.2.2 Trading Performance Metrics
    4.3 Experimental Results
    4.3.1 Cross-Comparison of Model Performance across Models and Data Processing Methods
    4.3.2 Cross-Comparison of Trading Performance across Models and Data Processing Methods
    5. Conclusion and Outlook
    5.1 Conclusion
    5.2 Outlook
    References

