Anthropomorphizing AI: Dire consequences of mistaking human-like for human have already emerged
In our rush to understand and relate to AI, we have fallen into a seductive trap: attributing human characteristics to these powerful but fundamentally non-human systems. This anthropomorphizing of AI is not just a harmless quirk of human nature; it is becoming an increasingly dangerous tendency that can cloud our judgment in critical ways. From business leaders who compare AI learning to human education in order to justify training practices, to lawmakers who craft policies based on flawed human-AI analogies, this tendency to humanize AI is inappropriately shaping crucial decisions across industries and regulatory frameworks.

Viewing AI through a human lens has already led companies to overestimate AI capabilities or underestimate the need for human oversight, sometimes with costly consequences. The stakes are particularly high in copyright law, where anthropomorphic thinking has produced problematic comparisons between human learning and AI training.

The language trap

Listen to how we talk about AI: we say it “learns,” “thinks,” “understands” and even “creates.” These human terms feel natural, but they are misleading. When we say an AI model “learns,” it is not gaining understanding like a human student. Instead, it performs statistical analysis on vast amounts of data, adjusting the weights and parameters of its neural networks according to mathematical optimization rules. There is no comprehension, no eureka moment, no spark of creativity or actual understanding, just increasingly sophisticated pattern matching.

This linguistic sleight of hand is more than merely semantic. As noted in the paper Generative AI’s Illusory Case for Fair Use: “The use of anthropomorphic language to describe the development and functioning of AI models is distorting because it suggests that once trained, the model operates independently of the content of the works on which it has trained.” This confusion has real consequences, particularly when it influences legal and policy decisions.

The cognitive disconnect

Perhaps the most dangerous aspect of anthropomorphizing AI is how it masks the fundamental differences between human and machine intelligence. While some AI systems excel at specific types of reasoning and analytical tasks, the large language models (LLMs) that dominate today’s AI discourse, and that we focus on here, operate through sophisticated pattern recognition. These systems process vast amounts of data, identifying and learning statistical relationships between words, phrases, images and other inputs in order to predict what should come next in a sequence. When we say they “learn,” we are describing a process of mathematical optimization that helps them make increasingly accurate predictions based on their training data.

Consider this striking example from research by Berglund and colleagues: a model trained on materials stating “A is equal to B” often cannot infer, as a human would, that “B is equal to A.” If an AI learns that Valentina Tereshkova was the first woman in space, it might correctly answer “Who was Valentina Tereshkova?” but struggle with “Who was the first woman in space?” This limitation reveals the fundamental difference between pattern recognition and true reasoning: between predicting likely sequences of words and understanding their meaning.
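To make this distinction concrete, here is a minimal, hypothetical sketch (plain Python, deliberately not a neural network) of a purely forward next-word predictor. Everything in it is an illustrative assumption rather than how any real LLM is built, but it shows why a system that only learns “what comes next” can answer a question in one direction and dead-end in the other.

```python
# Toy illustration of the directional failure described above: a predictor
# that only counts which word follows which in its training text.
from collections import Counter, defaultdict

training_text = "valentina tereshkova was the first woman in space"
words = training_text.split()

# Learn forward statistics only: counts of next_word given current_word.
follows = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def complete(prompt_word: str) -> str:
    """Greedily extend a prompt using the learned forward statistics."""
    out = [prompt_word]
    while follows[out[-1]]:
        out.append(follows[out[-1]].most_common(1)[0][0])
    return " ".join(out)

# Forward query works: the model has seen what follows "tereshkova".
print(complete("tereshkova"))  # tereshkova was the first woman in space

# Reversed query fails: nothing was ever learned about what PRECEDES
# a word, so there is no path from "space" back to the name.
print(complete("space"))       # space
```

A production LLM is incomparably more sophisticated than this word counter, but the Berglund findings suggest the same directional blind spot survives the sophistication: prediction statistics are not the symmetric, meaning-level relation a human extracts from the same sentence.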
The copyright conundrum

This anthropomorphic bias has particularly troubling implications in the ongoing debate about AI and copyright. Microsoft CEO Satya Nadella recently compared AI training to human learning, suggesting that if humans can learn from books without copyright implications, AI should be able to do the same. This comparison perfectly illustrates the danger of anthropomorphic thinking in discussions about ethical and responsible AI.

Some argue that this analogy fundamentally misunderstands both human learning and AI training. When humans read books, we do not make copies of them; we understand and internalize concepts. AI systems, on the other hand, must make actual copies of works, often obtained without permission or payment, encode them into their architecture and maintain these encoded versions to function. The works do not disappear after “learning,” as AI companies often claim; they remain embedded in the system’s neural networks.

The business blind spot

Anthropomorphizing AI creates dangerous blind spots in business decision-making that go well beyond simple operational inefficiencies. When executives and decision-makers think of AI as “creative” or “intelligent” in human terms, it can lead to a cascade of risky assumptions and potential legal liabilities.

Overestimating AI capabilities

One critical area where anthropomorphizing creates risk is content generation and copyright compliance. When businesses view AI as capable of “learning” like humans, they might incorrectly assume that AI-generated content is automatically free from copyright concerns. This misunderstanding can lead companies to:

- Deploy AI systems that inadvertently reproduce copyrighted material, exposing the business to infringement claims
- Fail to implement proper content filtering and oversight mechanisms (a minimal filtering sketch appears at the end of this section)
- Assume incorrectly that AI can reliably distinguish between public domain and copyrighted material
- Underestimate the need for human review in content generation processes

The cross-border compliance blind spot

The anthropomorphic bias in AI creates further dangers when we consider cross-border compliance. As Daniel Gervais, Haralambos Marmanis, Noam Shemtov and Catherine Zaller Rowland explain in “The Heart of the Matter: Copyright, AI Training, and LLMs,” copyright law operates on strict territorial principles, with each jurisdiction maintaining its own rules about what constitutes infringement and which exceptions apply.

This territorial nature of copyright law creates a complex web of potential liability. Companies might mistakenly assume their AI systems can freely “learn” from copyrighted materials across jurisdictions, failing to recognize that training activities that are legal in one country may constitute infringement in another. The EU has recognized this risk in its AI Act, particularly through Recital 106, which requires any general-purpose AI model offered in the EU to comply with EU copyright law with regard to training data, regardless of where that training occurred.

This matters because anthropomorphizing AI’s capabilities can lead companies to underestimate or misunderstand their legal obligations across borders. The comfortable fiction of AI “learning” like humans obscures the reality that AI training involves copying and storage operations that trigger different legal obligations in different jurisdictions. This fundamental misunderstanding of AI’s actual workings can thus leave a company exposed in jurisdictions it never considered.
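On the content-filtering point flagged in the list above, here is a minimal, hypothetical sketch of one such oversight mechanism: an n-gram overlap check that flags generated text reproducing long verbatim word runs from a reference corpus of protected works. The corpus, the eight-word window and every name here are illustrative assumptions, not a production compliance tool or any vendor’s actual API.

```python
# Hypothetical sketch: flag AI output that shares a long verbatim word run
# with known protected texts, so it can be routed to human review.

def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    """Return the set of n-word windows in the text (case-insensitive)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def has_verbatim_overlap(generated: str, protected_corpus: list[str],
                         n: int = 8) -> bool:
    """True if the generated text shares any n-word run with the corpus."""
    generated_grams = ngrams(generated, n)
    return any(generated_grams & ngrams(doc, n) for doc in protected_corpus)

# Usage: gate publication on the check instead of trusting the model.
corpus = ["..."]  # stand-in for indexed copyrighted works
draft = "..."     # stand-in for model output
if has_verbatim_overlap(draft, corpus):
    print("Verbatim overlap detected: route to human review.")
```

The design point is the one this section argues: because the model does not “know” what it has copied, oversight has to inspect what the system actually emits rather than assume it learned the way a person does.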