| Graduate student: | 陳元熙 CHEN, YUAN-HSI |
|---|---|
| Thesis title: | 應用設計思考來改善強化學習作業服務 (Apply the design thinking concept to improving the RLOps service) |
| Advisor: | 蔡瑞煌 Tsaih, Rua-Huan |
| Committee members: | 林怡伶 Lin, Yi-Ling; 周承復 Chou, Cheng-Fu |
| Degree: | Master |
| Department: | College of Commerce, Department of Management Information Systems |
| Year of publication: | 2024 |
| Academic year of graduation: | 112 |
| Language: | English |
| Pages: | 54 |
| Keywords: | Design thinking; Reinforcement learning; RLOps service |
Building on a Reinforcement Learning Operations service (RLOps service), this study seeks to improve the service through design thinking: easing reinforcement learning's steep learning curve, lowering its entry barriers, and improving experimental efficiency while simplifying the development workflow. Through the deployment and management capabilities the RLOps service provides, users are further assisted in analyzing and version-controlling the trained agent policies.
The study then proposes InvestPRL, an RLOps service positioned in the financial investment domain, and invites participants to experiment with it. Based on their usage, it examines the study's main objectives: how design thinking improves the adoption of an RLOps service and thereby the application potential of reinforcement learning, and what issues future RLOps services must attend to when serving users. The experimental results show that applying design thinking to an RLOps service raises its adoption, with the most significant gains in ease of use and compatibility.
Building on a Reinforcement Learning Operations Service (RLOps service), this study employs design thinking to ease the steep learning curve of reinforcement learning, reduce entry barriers, enhance experimental efficiency, and simplify the development process. The deployment and management capabilities provided by the RLOps service further assist users in analyzing and version-controlling the trained agent policies.
The study introduces InvestPRL, an RLOps service positioned in the financial investment field, and invites participants to interact with it in an experiment. By considering their usage, the study pursues its primary objectives: understanding how design thinking improves the adoption of RLOps services and the application potential of reinforcement learning, and identifying key issues future RLOps services must address when serving users.
The experimental results demonstrate that applying design thinking to the RLOps service increases its adoption, with the most significant improvements in ease of use and compatibility.
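The "steep learning curve" the abstract refers to shows up even in the smallest hand-rolled reinforcement-learning experiment: the practitioner must wire up an environment, an exploration strategy, hyperparameters, and a training loop by hand. The sketch below is a minimal tabular Q-learning loop on a hypothetical 5-state chain environment; none of this code is from the thesis or the InvestPRL service, and the environment, agent, and hyperparameters are all illustrative assumptions.

```python
import random

# Minimal tabular Q-learning on a toy 5-state chain MDP -- an illustrative
# sketch of the hand-rolled workflow an RLOps service is meant to manage.
# (Hypothetical example; not the thesis's environment or agent.)

N_STATES = 5          # states 0..4; reaching state 4 ends the episode with reward 1
ACTIONS = [0, 1]      # 0 = step left, 1 = step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def step(state, action):
    """Deterministic chain dynamics: stepping right moves toward the goal."""
    nxt = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def train(episodes=200, seed=0):
    """Epsilon-greedy tabular Q-learning; returns the learned Q-table."""
    random.seed(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Explore with probability EPSILON, otherwise act greedily.
            if random.random() < EPSILON:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[s][x])
            s2, r, done = step(s, a)
            # Standard Q-learning update toward the bootstrapped target.
            q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
# Greedy policy per state (1 = step right); this Q-table is the "agent
# policy" artifact an RLOps service would version and deploy.
policy = [max(ACTIONS, key=lambda a: q[s][a]) for s in range(N_STATES)]
print(policy)
```

In an RLOps workflow the pieces above (environment definition, hyperparameters, training loop, and the resulting policy artifact) would be handled by managed tooling rather than ad-hoc scripts, which is the gap the thesis's service targets.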
Chapter 1. Introduction 1
Chapter 2. Literature review 4
2.1 DESIGN THINKING 4
2.2 REINFORCEMENT LEARNING 6
2.3 RLOPS 8
2.4 THE ADOPTION FACTORS OF TECHNOLOGY ACCEPTANCE AND THE DIFFUSION OF INNOVATIONS 11
Chapter 3. The InvestPRL Service 12
3.1 SERVICE OBJECTIVE 12
3.2 RLOPS PROCESS 12
3.3 THE IMPROVEMENT OF INVESTPRL SERVICE 14
Chapter 4. Experiment 18
4.1 PROCEDURE OVERVIEW AND PARTICIPANT DEMOGRAPHICS 18
4.2 EXPERIMENT SCENARIO 20
4.3 GROUP DISCUSSION OF THE TWO DESIGN THINKING PROCESSES 23
4.3.1 Group discussion of the first design thinking process 23
4.3.2 Group discussion of the second design thinking process 24
4.4 THE OBSERVATION OF TWO DESIGN THINKING PROCESSES 26
4.4.1 The observation of the first design thinking process 26
4.4.2 The observation of the second design thinking process 29
4.4.3 The differences in observations between the two design thinking processes 31
4.5 UI DIFFERENCE BETWEEN THE TWO VERSIONS OF SERVICES 32
4.6 EXPERIMENT RESULT 40
4.7 THE SUGGESTIONS FOR THE RLOPS SERVICES 45
Chapter 5. Conclusion 47
5.1 THE EFFECTIVENESS OF DESIGN THINKING IN RLOPS SERVICE 47
5.2 CONCLUSION 48
5.3 FUTURE WORK 49
Reference 50
Appendix 53
Full-text release date: 2029/07/12