ISSN 1016-1007 GPN2005600032
Advance Publication
Pages: 93–142
從新聞中的擬人化隱喻探討AI的能動性:以ETtoday為例
Investigating AI’s Agency through Anthropomorphic Metaphors in News: An Exploratory Study of ETtoday
Research Article
Author (Chinese)
章璟、謝吉隆
Author (English)
Jing Chong, Ji-Lung Hsieh
Keywords (Chinese)
AnthroScore、人工智慧、能動性、新聞分析、語料研究、擬人化
Keywords (English)
AnthroScore, artificial intelligence, agency, news analysis, corpus analysis, anthropomorphism
Chinese Abstract
News media often use anthropomorphic metaphors to help audiences understand complex technological concepts, a linguistic strategy that is also common in reporting on artificial intelligence (AI). Used improperly, however, it can lead the public to misunderstand AI and related phenomena. Existing studies have mostly applied textual analysis to the narrative framing of AI news, or used experiments to test how anthropomorphic descriptions affect audience understanding; empirical research that performs lexical analysis on large-scale Chinese news corpora remains scarce. This study extends an anthropomorphism metric from prior international research to the Chinese-language context, analyzing 14,487 AI-related articles published on ETtoday over roughly eight years. We find that the degree of anthropomorphism in AI news has risen markedly in recent years, driven mainly by the emergence of generative-AI topics. News categories differ in the agency they attribute to AI: lifestyle news covers a relatively complete agency spectrum; entertainment, society, and politics news mostly portray AI as a "chatter/interactor"; technology news exhibits a tension between anthropomorphic and technical framing; and business news tends toward instrumental and institutional portrayals. Further verb analysis shows that highly anthropomorphic sentences construct AI's subjectivity through perception, language, and interaction verbs, forming a "social interaction frame," whereas low-anthropomorphism sentences emphasize tasks and functional operations, corresponding to a "functional interaction frame." The study concludes by proposing the concept of an "agency spectrum" to explain how media language gradually constructs AI from a technical tool into a social actor.
English Abstract
The rapid development of artificial intelligence (AI) has become one of the major topics of news reporting in Taiwan. To introduce technological artifacts to the general public, news media frequently employ anthropomorphic metaphors to explain the complexity of AI technologies. Anthropomorphism, attributing human features such as emotions, intentions, or agency to non-human entities, helps simplify abstract concepts and facilitate public understanding, thereby shaping trust, fear, or ethical concerns. However, the literature has shown that anthropomorphic representations in news stories can amplify unwanted expectations, obscure issues of accountability, and frame AI as either a cooperative partner or a threatening rival. Yet few studies provide systematic, large-scale, corpus-based evidence of how anthropomorphism constructs AI’s agency in news discourse.

This paper fills the gap in the literature by examining 14,487 AI-related news articles published on ETtoday between 2016 and 2024. We specifically investigate: (1) whether anthropomorphic expressions of AI have increased over time; (2) how the degree of anthropomorphism varies across news categories; and (3) what linguistic strategies (such as subjects, verbs, and collocations) are employed to construct AI’s agency, which is understood as a spectrum ranging from basic action to autonomy, intentionality, affectivity, and even accountability. The main questions are: How do news media anthropomorphize AI? In what contexts do these metaphors appear? What kinds of agency are attributed to AI in news discourse?

Anthropomorphic rhetoric has a long history in the representation of technologies, ranging from naming and metaphor to descriptions of social action. Terms such as artificial intelligence and neural networks explicitly invoke the human brain as a reference point, while chatbots such as ELIZA, ChatGPT, and Gemini reinforce humanlike qualities through naming and dialogic forms. Media depictions such as “AI doctor” not only assign a professional role, but also imply agency and capacity for collaboration. Psychological studies explain this tendency as arising from three needs: (1) detecting intentional agents, (2) reducing complexity, and (3) fulfilling social connection (Epley et al., 2007). Even when users recognize a program like ELIZA as artificial, its conversational form can lead one to imagine it as possessing psychological states. Once technologies display social cues, people intuitively respond to them as they would to other humans.

Agency emerges as the central dimension of anthropomorphism. Studies define it as encompassing capacity for action, autonomy, intentionality, affectivity, and even accountability (Tipler & Ruscher, 2014; Trafton et al., 2024; Cheng et al., 2024). Rather than fixed categories, we argue that these dimensions form a spectrum, from basic descriptions of action to autonomy and intentionality, to emotional motivation and accountability. When news reports describe AI as learning, replacing, or outperforming, they suggest autonomy and intention. When AI is said to enjoy or refuse an action, emotions and moral implications are further attributed to it.

Such constructions produce both risks and benefits. On the one hand, attributing agency to AI can blur responsibility, hide important system limitations, and lead to unwarranted levels of trust (Placani, 2024; Gros et al., 2022; Deshpande et al., 2023). On the other hand, anthropomorphism can enhance trust and acceptance among vulnerable groups, improve interaction in education and therapy, and increase engagement in news and consumer contexts (Darling, 2017; Jang et al., 2023; Konya-Baumbach et al., 2023). These findings demonstrate that agency is not inherent but continuously constructed through language and interaction, with far-reaching consequences for how society understands AI.

Methodologically, this study employs a computational-linguistic approach. It adapts AnthroScore, an automated indicator proposed by the Stanford NLP group (Cheng et al., 2024), to the Chinese context. The metric uses a pretrained masked language model (Chinese-RoBERTa-wwm-ext) to compare the probabilities of human pronouns (“he,” “she”) versus a non-human pronoun (“it”) as substitutes for AI-related terms (e.g., AI, 人工智慧, ChatGPT). By averaging across sentences, the score quantifies the anthropomorphism of each article. Systematic preprocessing, including the filtering of irrelevant, duplicated, or promotional texts, and a thresholding procedure retain only semantically meaningful sentences (the final sample covers 4,402 articles and 9,823 sentences). Human validation confirms high consistency between model outputs and intuitive judgments (Cohen’s κ = .94).
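The scoring step described above can be sketched as follows. This is a minimal illustration of the log-odds computation, not the authors' code: it assumes the masked-LM pronoun probabilities have already been obtained elsewhere (e.g., from Chinese-RoBERTa-wwm-ext), and the English pronoun sets stand in for the Chinese 他/她 versus 它 used in the actual adaptation.

```python
from math import log

# Illustrative pronoun sets, following the AnthroScore idea of
# human vs. non-human pronouns competing for the masked slot.
HUMAN = ("he", "she")
NONHUMAN = ("it",)

def sentence_score(mask_probs):
    """Log-odds of human vs. non-human pronouns filling the masked slot.

    mask_probs: dict mapping candidate pronouns to the masked-LM
    probability of each pronoun at the position of the AI term.
    Positive values indicate anthropomorphic framing.
    """
    p_human = sum(mask_probs.get(w, 0.0) for w in HUMAN)
    p_nonhuman = sum(mask_probs.get(w, 0.0) for w in NONHUMAN)
    return log(p_human / p_nonhuman)

def article_score(sentence_probs):
    """Average sentence-level scores to quantify an article's anthropomorphism."""
    scores = [sentence_score(p) for p in sentence_probs]
    return sum(scores) / len(scores)

# Toy probabilities: in the first sentence "it" dominates (score < 0);
# in the second, human pronouns dominate (score > 0).
low = {"he": 0.01, "she": 0.01, "it": 0.80}
high = {"he": 0.30, "she": 0.25, "it": 0.05}
print(article_score([low, high]))
```

The article-level average mirrors the paper's description of aggregating sentence scores before comparing articles or years.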

The findings reveal several key patterns. First, although our temporal analysis indicates a significant upward trend in anthropomorphism, this pattern is largely tied to recent interest in ChatGPT. The turning point appears in 2022, coinciding with ChatGPT’s entry into public discourse. When examined separately, ChatGPT-related articles show higher AnthroScore values than articles centered on other AI-related keywords, suggesting that the growing popularity of generative AI, rather than a uniform shift across all AI coverage, is the main factor behind the observed increase.
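The yearly aggregation behind such a trend analysis can be sketched as follows. The score values are invented toy numbers (the study's real data span ETtoday articles from 2016 to 2024), and an ordinary least-squares slope is only one simple way to check for an upward trend.

```python
from statistics import mean

# Toy (year, article-level AnthroScore) pairs; values are illustrative only.
records = [
    (2016, -1.2), (2017, -1.1), (2018, -1.0), (2019, -0.9),
    (2020, -0.8), (2021, -0.7), (2022, -0.3), (2023, 0.1), (2024, 0.3),
]

def yearly_means(rows):
    """Group scores by year and average them."""
    by_year = {}
    for year, score in rows:
        by_year.setdefault(year, []).append(score)
    return {y: mean(v) for y, v in sorted(by_year.items())}

def trend_slope(rows):
    """Ordinary least-squares slope of score on year."""
    xs = [y for y, _ in rows]
    ys = [s for _, s in rows]
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

print(trend_slope(records))  # positive slope = rising anthropomorphism
```

Splitting `records` into ChatGPT-related and other articles before computing slopes would reproduce the comparison described above.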

Second, by comparing highly anthropomorphic and less anthropomorphic sentences, it is possible to observe how different news categories employ anthropomorphic rhetoric in different ways. Entertainment and social news display the highest proportion of highly anthropomorphic sentences, often portraying AI as a chatting partner or emotional agent. Lifestyle news shows richer variety, assigning AI roles ranging from medical advisor to romantic companion and thereby covering the full agency spectrum. By contrast, economic and technology news emphasize functional and institutional roles, with anthropomorphism serving mainly as rhetorical embellishment.

Third, verb-collocation analysis based on dependency parsing shows that highly anthropomorphic texts rely on perception, language, and interaction verbs (“say,” “understand,” “accompany”), constructing AI within a social interaction frame. Low-anthropomorphism sentences emphasize technical or task-oriented verbs (“solve,” “provide”), aligning with a functional interaction frame.
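The frame-tagging step can be sketched as follows. The verb lexicons here are illustrative stand-ins, not the study's actual lists, and the sketch assumes that (AI-subject, verb) pairs have already been extracted via dependency parsing.

```python
# Hypothetical lexicons for the two frames identified in the analysis.
SOCIAL_VERBS = {"say", "understand", "accompany", "chat", "feel"}
FUNCTIONAL_VERBS = {"solve", "provide", "compute", "process", "detect"}

def frame_of(verb):
    """Map a verb whose subject is an AI entity to an interaction frame."""
    if verb in SOCIAL_VERBS:
        return "social interaction frame"
    if verb in FUNCTIONAL_VERBS:
        return "functional interaction frame"
    return "unclassified"

def frame_counts(verbs):
    """Tally frames across all extracted AI-subject verbs."""
    counts = {}
    for v in verbs:
        frame = frame_of(v)
        counts[frame] = counts.get(frame, 0) + 1
    return counts

print(frame_counts(["say", "solve", "understand", "provide", "accompany"]))
# → {'social interaction frame': 3, 'functional interaction frame': 2}
```

Comparing these tallies between high- and low-AnthroScore sentence sets is what distinguishes the two frames in the study.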

These results yield an important theoretical insight. The proposed agency-spectrum model clarifies how language progressively constructs AI’s capacity, from mere action to autonomy, intentionality, emotion, and accountability. The study also demonstrates that framing through anthropomorphism is not uniform, but varies by journalistic genre and socio-technical context. Entertainment and lifestyle frames encourage readers to see AI as a humanlike companion, whereas economic and political frames may normalize AI as a structural force or policy object.

In terms of contributions, the study combines AnthroScore with corpus-based linguistic analysis to create an operational framework for examining how agency is constructed in news discourse. This approach moves beyond reliance on subjective interpretation or isolated lexical cues, offering a replicable and extensible method for future research. Theoretically, the findings extend the works of Tipler and Ruscher (2014), Trafton et al. (2024), and Cheng et al. (2024) by advancing the concept of an agency spectrum, showing how news discourse linguistically positions AI along a continuum from functional tool to social actor. This provides empirical evidence from the Chinese-language context and lays the groundwork for further research on how anthropomorphic rhetoric shapes trust, acceptance, and ethical evaluations, while also highlighting the central role of journalistic framing in constructing societal imaginaries of technology.
2026 / Winter, No. 166