| Author: | 蔡欣妤 Tsai, Hsin-Yu |
|---|---|
| Title: | The Credibility of AI-Generated News in Taiwan’s Cross-Strait Discourse: A Study on Readers with Different Attitudes towards Cross-Strait Relationships [Chinese title: 生成式AI新聞可信度比較研究:基於台灣讀者對兩岸關係感知態度] |
| Advisor: | 侯宗佑 Hou, Tsung-Yu |
| Committee members: | 施琮仁 Shih, Tsung-Jen; 袁千雯 Yuan, Chien-Wen |
| Degree: | Master |
| Department: | 創新國際學院 (International College of Innovation), Master’s Program in Global Communication and Innovation Technology |
| Year of publication: | 2025 |
| Academic year of graduation: | 114 |
| Language: | English |
| Pages: | 94 |
| Keywords (Chinese): | generative artificial intelligence; AI news; political orientation; credibility; cross-strait issues |
| Keywords (English): | Taiwan-China relations |
| Hits / downloads: | 79 / 64 |
In an era in which generative news reporting is increasingly widespread, how AI-generated news content shapes audiences’ judgments of news credibility, particularly in politically sensitive contexts, has become a pressing question. This study adopts a mixed-methods design, combining a 2×2×2 experiment with follow-up interviews, to examine how the actual author of cross-strait political news (AI or a human journalist), the labeled author, political stance congruence, and participants’ attitudes toward China affect perceived news credibility.
The results show that credibility judgments were significantly shaped by the labeled author rather than the actual author. News labeled as AI-written was judged less credible overall, whereas news labeled as written by a human journalist earned greater trust. Although congruence between an article’s political stance and a reader’s own political orientation produced no significant difference in credibility, participants’ overall attitude toward China moderated the relationship between labeled authorship and credibility: those with anti-China attitudes placed greater trust in news labeled as human-written.
The qualitative data further reveal that participants relied heavily on author labels when evaluating news and held cautious yet varied views of AI’s capacity to handle political news. Some participants suggested that AI might offer a comparatively neutral voice within Taiwan’s politicized media environment, while others noted that the data sources and algorithmic logic underlying AI ultimately reflect human ideologies and biases and can never be fully objective.
This study extends the understanding of AI news in political communication, highlighting that audiences’ trust in the human-journalist label remains pivotal to the construction of news credibility, and showing that the credibility of AI news on politically sensitive issues is still constrained by the interaction of social context and audiences’ political identities.
In the age of automated journalism, the increasing use of AI-generated content raises important questions about credibility perception, particularly in politically sensitive contexts. This mixed-method study examines how actual authorship (AI vs. human), labeled authorship, political stance similarity, and participants’ attitudes toward China affect perceived news credibility. Using a 2×2×2 between-subjects experiment and follow-up interviews, we found that labeled authorship had a significant influence on credibility perceptions, whereas actual authorship did not. Articles labeled as written by AI were perceived as less credible than those labeled as written by human journalists. While political stance similarity did not significantly affect credibility ratings, participants’ attitudes toward China moderated the effect of labeled authorship. Specifically, those with anti-China attitudes perceived human-labeled news as more credible, whereas those with pro-China attitudes were more accepting of AI-labeled news. Qualitative findings further revealed that participants relied on author labels when making credibility judgments and expressed mixed views on AI’s capacity to handle ideologically charged topics. This study contributes to the understanding of AI’s role in political news by highlighting the persistent influence of perceived human authorship and the moderating role of readers’ political attitudes in credibility judgments.
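The moderation analysis the abstract describes (attitude toward China moderating the effect of labeled authorship on credibility) amounts to a regression with an interaction term. The sketch below illustrates the idea on simulated data; the variable names, coefficient values, and data are illustrative assumptions, not the study’s materials or estimates.

```python
import numpy as np

# Simulate a labeled-authorship manipulation and a continuous China-attitude score.
# All numbers here are assumptions chosen only to mirror the reported pattern.
rng = np.random.default_rng(0)
n = 200
human_label = rng.integers(0, 2, n)   # 1 = labeled "human journalist", 0 = labeled "AI"
china_att = rng.uniform(-2, 2, n)     # negative = anti-China, positive = pro-China

# Construct credibility so the human-label advantage shrinks as pro-China
# attitude rises (i.e., anti-China readers favor human-labeled news most).
credibility = (4.0 + 0.6 * human_label + 0.1 * china_att
               - 0.3 * human_label * china_att)

# Fit credibility ~ label + attitude + label:attitude via ordinary least squares.
X = np.column_stack([np.ones(n), human_label, china_att,
                     human_label * china_att])
coef, *_ = np.linalg.lstsq(X, credibility, rcond=None)
# Recovers the assumed coefficients [4.0, 0.6, 0.1, -0.3] exactly,
# since no noise was added; the negative interaction term is the moderation.
print(coef)
```

In an actual analysis the interaction coefficient would be estimated from noisy ratings and tested for significance; here the noiseless construction just makes the role of each term visible.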
Introduction 1
Theoretical Background 4
2.1 Evolution and Integration of AI in News Production 4
2.2 Declining Trust in News Media: Global and Taiwanese Context 7
2.3 AI and Human Authorship 7
2.4 The Impact of AI Attribution on News Credibility 9
2.5 The Role of AI in Producing Neutral and Factual News 10
2.6 Political Bias in News Content 11
2.7 Cross-Strait Relations, Fake News, and Reader Perception 14
Methodology 17
3.1 Participants 18
3.2 Design 18
3.3 Experimental Stimuli 19
3.4 Procedure 22
3.5 Measures 23
3.5.1 Credibility Judgements 23
3.5.2 Political Orientation and Attitudes Toward China 24
3.6 Interview recruitment 25
3.7 Data Analysis 26
Results 27
4.1 Participant Demographics and Attribution 27
4.2 Perceived Credibility Based on Real News Authorship 28
4.3 Credibility by Labeled Authorship 30
4.3.1 Composite Credibility 30
4.3.2 Welch’s t-Tests on Single-Item Credibility Test 31
4.4 Political Stance Similarity in Labeled-Author News Credibility 33
4.4.1 Political Stance Classification 36
4.4.2 Two-Way ANOVA: Effects of Stance Similarity and Labeled Author 36
4.5 Interaction Between Political Attitude and Labeled Author on News Credibility 38
4.5.1 Regression Analysis Using Continuous China Attitude Score and Labeled Author 39
4.5.2 Within-Group t-Tests Comparing Labeled Author Credibility 41
4.6 Qualitative Results 41
4.6.1 Effects of News Source and Author Labeling on Credibility 43
4.6.2 Standards for Evaluating News Credibility 44
4.6.3 Perceived Credibility Across Political Attitudes and News Alignment 45
4.6.4 AI’s Role: Supplementary but Not Substitutive in Political News 46
Discussion 48
5.1 Theoretical and Practical Implications 48
5.2 Limitations and Future Directions 55
Conclusion 57
References 59
Chinese Section 59
English Section 59
Appendix 65
Chinese Section
黃淑惠. (2025). 獨家/陸客來台旅遊玩真的!上海、福建的旅行業最快2月12日來台踩線 [Exclusive: Chinese tourists really are coming to Taiwan; travel agencies from Shanghai and Fujian to make inspection visits as early as February 12]. 經濟日報. https://money.udn.com/money/story/5612/8504743
周湘芸. (2025). 福建上海將開放陸客團 業者樂喊話:解除有名無實禁團令、釋更多航點 [Fujian and Shanghai to reopen group tours to Taiwan; operators call for lifting the nominal group-tour ban and opening more routes]. 聯合新聞網. https://udn.com/news/story/7331/8496012
政治中心. (2025). 中國解觀光客「赴台禁令」?兩岸官員雙手一攤「沒具體方案」 [Is China lifting its tourist "ban on travel to Taiwan"? Officials on both sides shrug: no concrete plan]. 三立iNEWS. https://inews.setn.com/news/1600522
莊文仁. (2024). 上海擬恢復中客團赴台引中國追星族暴動 台灣網友憂亂象叢生 [Shanghai's planned resumption of Chinese group tours to Taiwan sparks a frenzy among Chinese fans; Taiwanese netizens worry about chaos]. 自由時報. https://news.ltn.com.tw/news/life/breakingnews/4896302
Taiwan Media Watch Foundation. (2019). 2019台灣新聞媒體可信度研究報告 [Report on media credibility in Taiwan 2019]. https://www.mediawatch.org.tw/news/9911
English Section
Appelman, A., & Sundar, S. S. (2016). Measuring Message Credibility: Construction and Validation of an Exclusive Scale. Journalism & Mass Communication Quarterly, 93(1), 59-79. https://doi.org/10.1177/1077699015606057
AppliedXL. (2023). Francesco Marconi on AI in News: Testimony Before UK Parliament [Video]. YouTube. https://www.youtube.com/watch?v=f9S32ugkgWs
Bien-Aimé, S., Wu, M., Appelman, A., & Jia, H. (2024). Who Wrote It? News Readers’ Sensemaking of AI/Human Bylines. Communication Reports, 38(1), 46–58. https://doi.org/10.1080/08934215.2024.2424553
Boyer, M. M., Aaldering, L., & Lecheler, S. (2022). Motivated Reasoning in Identity Politics: Group Status as a Moderator of Political Motivations. Political Studies, 70(2), 385-401. https://doi.org/10.1177/0032321720964667
Boczek, K., Dogruel, L., & Schallhorn, C. (2023). Gender byline bias in sports reporting: Examining the visibility and audience perception of male and female journalists in sports coverage. Journalism, 24(7), 1462-1481. https://doi.org/10.1177/14648849211063312
Bloomberg. (2023). Introducing BloombergGPT, Bloomberg’s 50-billion parameter large language model, purpose-built from scratch for finance. Bloomberg. https://www.bloomberg.com/company/press/bloomberggpt-50billion-parameter-llm-tuned-finance/
Bauer, F., & Wilson, K. L. (2022). Reactions to China-linked fake news: Experimental evidence from Taiwan. The China Quarterly, 249, 21–46. https://doi.org/10.1017/S030574102100134X
Chiang, C.-F., & Shih, H.-H. (2017). Consumer Preferences regarding news slant and accuracy in news programs. Jing Ji Lun Wen Cong Kan, 45(4), 515-545.
Clerwall, C. (2014). Enter the Robot Journalist: Users’ perceptions of automated content. Journalism Practice, 8(5), 519–531. https://doi.org/10.1080/17512786.2014.883116
Davies, R.T. (2024). An update on the BBC’s plans for generative AI (GenAI) and how we plan to use AI tools responsibly. BBC. https://www.bbc.com/mediacentre/articles/2024/update-generative-aiand-ai-tools-bbc
Dörr, K. N. (2015). Mapping the field of Algorithmic Journalism. Digital Journalism, 4(6), 700–722. https://doi.org/10.1080/21670811.2015.1096748
Davis, M., Attard, M., & Main, L. (2023). Gen AI and journalism. UTS Centre for Media Transition. https://doi.org/10.6084/m9.figshare.24751881.v3
Lee, E.-J., Kim, H. S., & Lee, J. M. (2023). When do people trust fact-check messages? Effects of fact-checking agents and confirmation bias. Journal of Communication Research, 60(3), 47-88.
Faul, F., Erdfelder, E., Buchner, A., & Lang, A.-G. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods, 41(4), 1149-1160. https://doi.org/10.3758/BRM.41.4.1149
Gherheș, V., Fărcașiu, M. A., Cernicova-Buca, M., & Coman, C. (2025). AI vs. Human-Authored Headlines: Evaluating the Effectiveness, Trust, and Linguistic Features of ChatGPT-Generated Clickbait and Informative Headlines in Digital News. Information, 16(2), 150. https://www.mdpi.com/2078-2489/16/2/150
Graefe, A., Haim, M., Haarmann, B., & Brosius, H.-B. (2016). Perception of automated computer-generated news: Credibility, expertise, and readability. Journalism, 19(5).
Jung, J., Song, H., Kim, Y., Im, H., & Oh, S. (2017). Intrusion of software robots into journalism: The public's and journalists' perceptions of news written by algorithms and human journalists. Computers in Human Behavior, 71, 291-298. https://doi.org/10.1016/j.chb.2017.02.022
Kahan, D. M. (2013). Ideology, motivated reasoning, and cognitive reflection. Judgment and Decision Making, 8(4), 407-424. https://doi.org/10.1017/S1930297500005271
Longoni, C., Fradkin, A., Cian, L., & Pennycook, G. (2022). News from Generative Artificial Intelligence Is Believed Less. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, Seoul, Republic of Korea. https://doi.org/10.1145/3531146.3533077
Lermann Henestrosa, A., Greving, H., & Kimmerle, J. (2023). Automated journalism: The effects of AI authorship and evaluative information on the perception of a science journalism article. Computers in Human Behavior, 138, 107445. https://doi.org/10.1016/j.chb.2022.107445
Lermann Henestrosa, A., & Kimmerle, J. (2024). The Effects of Assumed AI vs. Human Authorship on the Perception of a GPT-Generated Text. Journalism and Media, 5(3), 1085-1097. https://www.mdpi.com/2673-5172/5/3/69
Lazaridou, K., & Krestel, R. (2016). Identifying political bias in news articles. Bulletin of the IEEE TCDL, 12(2), 1-12.
Lin, L. (2024). Reuters Institute Digital News Report 2024 (p. 150). Reuters Institute for the Study of Journalism. https://reutersinstitute.politics.ox.ac.uk/digital-news-report/2024/taiwan
Li, X. (2021). More than Meets the Eye: Understanding Perceptions of China Beyond the Favorable–Unfavorable Dichotomy. Studies in Comparative International Development, 56(1), 68-86. https://doi.org/10.1007/s12116-021-09320-1
London School of Economics and Political Science. (2018). Journalism credibility: Strategies to restore trust. https://www.lse.ac.uk/media-and-communications/truth-trust-and-technology-commission/journalism-credibility
Meade, A. (2023, July 31). News Corp using AI to produce 3,000 Australian local news stories a week. The Guardian. https://www.theguardian.com/media/2023/aug/01/news-corp-ai-chat-gpt-stories
Marconi/Reuters Institute. (2023, March 23). Is ChatGPT a threat or an opportunity for journalism? Five AI experts weigh in. Reuters Institute for the Study of Journalism. https://reutersinstitute.politics.ox.ac.uk/news/chatgpt-threat-or-opportunity-journalism-five-ai-experts-weigh
McCroskey, J. C., & Teven, J. J. (1999). Goodwill: A reexamination of the construct and its measurement. Communication Monographs, 66(1), 90-103. https://doi.org/10.1080/03637759909376464
Moon, W.-K., & Kahlor, L. A. (2025). Fact-checking in the age of AI: Reducing biases with non-human information sources. Technology in Society, 80, 102760. https://doi.org/10.1016/j.techsoc.2024.102760
Nielsen, R. K., Cornia, A., & Kalogeropoulos, A. (2016). Challenges and Opportunities for News Media and Journalism in an Increasingly Digital, Mobile, and Social Media Environment. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2879383
Nadeem, M. U., & Raza, S. (2016). Detecting bias in news articles using NLP models (Stanford CS224N Custom Project). Stanford University.
Newman, N. (2025). Overview and key findings of the 2025 Digital News Report. Reuters Institute for the Study of Journalism. https://reutersinstitute.politics.ox.ac.uk/digital-news-report/2025/dnr-executive-summary
OpenAI. (2022, November 30). Introducing ChatGPT. OpenAI. https://openai.com/index/chatgpt/
Peiser, J. (2019, Feb 5). The rise of the robot reporter. The New York Times. https://www.nytimes.com/2019/02/05/business/media/artificial-intelligence-journalism-robots.html?smid=url-share
Pecquerie, B. (2018, January 5). AI is the new horizon for news. Global Editors Network. Retrieved from https://medium.com/global-editors-network/ai-is-the-new-horizon-for-news-22b5abb752e6
Simon, F. M. (2024, February 6). Artificial intelligence in the news: How AI retools, rationalizes, and reshapes journalism and the public arena. Columbia Journalism Review. https://www.cjr.org/tow_center_reports/artificial-intelligence-in-the-news.php
Saez-Trumper, D., Castillo, C., & Lalmas, M. (2013). Social media news communities: Gatekeeping, coverage, and statement bias. In Proceedings of the 22nd ACM International Conference on Information & Knowledge Management, San Francisco, CA, USA. https://doi.org/10.1145/2505515.2505623
Shu, P. L. (2000). What Does the News Tell Us: Examining the Impact of Group Identity on News Coverage of Taiwan-China Relations (Order No. 28442498). Available from ProQuest Dissertations & Theses A&I; ProQuest Dissertations & Theses Global. (2498537956).
Sundar, S. S. (1999). Exploring Receivers' Criteria for Perception of Print and Online News. Journalism & Mass Communication Quarterly, 76(2), 373-386. https://doi.org/10.1177/107769909907600213
Sundar, S. S. (2008). The MAIN model: A heuristic approach to understanding technology effects on credibility. In M. J. Metzger & A. J. Flanagin (Eds.), Digital media, youth, and credibility (pp. 73-100). MIT Press.
Toff, B., & Simon, F. M. (2023). "Or they could just not use it?": The paradox of AI disclosure for audience trust in news. SocArXiv.
Waddell, T. F. (2019). Attribution Practices for the Man-Machine Marriage: How Perceived Human Intervention, Automation Metaphors, and Byline Location Affect the Perceived Bias and Credibility of Purportedly Automated Content. Journalism Practice, 13(10), 1255–1272. https://doi.org/10.1080/17512786.2019.1585197
Waddell, T. F. (2017). A Robot Wrote This? How perceived machine authorship affects news credibility. Digital Journalism, 6(2), 236–255. https://doi.org/10.1080/21670811.2017.1384319