Demystify ChatGPT: Anthropomorphism around generative AI

Authors

  • Junyi (Joey) Ji

Keywords:

AI, anthropomorphism, generative AI

Abstract

The recent release of large language models and other generative models such as GPT, DALL·E, and Bard undoubtedly marks the advent of the era of generative artificial intelligence. Unlike traditional AI systems designed for specific tasks, generative AI can handle cross-context, general-purpose tasks. Because these systems mimic humans more convincingly, they heighten the propensity for anthropomorphism, the attribution of human-like qualities and intentions to AI systems. However, whether comparing generative AI to human intelligence is justified has rarely been discussed in the current literature. I 1) identify the differences between generative AI and traditional AI from a technological perspective, 2) conduct a conceptual analysis drawing on Chomsky’s notion of the creative aspect of language use (“CALU”) to illustrate why generative AI is still not comparable to human intelligence, and 3) conduct both quantitative and qualitative analyses of how anthropomorphism around generative AI currently manifests in public discourse. My argument is that although generative AI is perceived as capable of performing cross-context, general-purpose tasks, it is still not comparable to human beings, as its “computational” nature is fundamentally different from the “creative” nature of human language. The anthropomorphism around generative AI is therefore not justified, and it leads to false expectations of AI systems and overblown fears about them. The rhetoric around generative AI is of great importance because it shapes how the public perceives AI. We need to “demystify” these AI systems so that public representations of generative AI are genuine, complete, and authentic.

Published

2024-01-22