The Language of Artificial Intelligence: Analyzing Anthropomorphism and Mental Verbs in Modern Media

The rapid integration of artificial intelligence into the fabric of daily life has brought with it a complex linguistic challenge: how to describe the operations of non-human systems without imbuing them with human consciousness. As large language models like ChatGPT become ubiquitous, the vocabulary used to describe their functions—words such as "think," "know," "understand," and "remember"—has come under intense academic scrutiny. A recent comprehensive study led by researchers at Iowa State University suggests that while the temptation to anthropomorphize these systems is high, professional news writers are maintaining a surprisingly disciplined distance. This research, titled "Anthropomorphizing Artificial Intelligence: A Corpus Study of Mental Verbs Used with AI and ChatGPT," published in Technical Communication Quarterly, explores the delicate balance between relatable language and technical accuracy in the burgeoning field of AI reporting.

The Linguistic Foundations of Anthropomorphism in Technology

Anthropomorphism is the attribution of human characteristics, emotions, or intentions to non-human entities. In the context of technology, this phenomenon is not new; humans have long named their cars, cursed at their computers, and spoken to their voice assistants as if they were sentient companions. However, the stakes have shifted with the advent of generative AI. Unlike a simple calculator or a traditional software program, AI systems produce outputs that mimic human reasoning and creativity with startling fluency.

Jo Mackiewicz, a professor of English at Iowa State University and a lead member of the research team, notes that the use of mental verbs—verbs that describe cognitive processes—is a natural human inclination. "We use mental verbs all the time in our daily lives, so it makes sense that we might also use them when we talk about machines—it helps us relate to them," Mackiewicz explained. Yet, she warns of a significant risk: "At the same time, when we apply mental verbs to machines, there’s also a risk of blurring the line between what humans and AI can do."

The research team, which included Jeanine Aune of Iowa State, Matthew J. Baker of Brigham Young University, and Jordan Smith of the University of Northern Colorado, sought to quantify this blurring of lines. By focusing on "mental verbs," the researchers targeted the specific words that imply a system possesses an internal life, beliefs, or awareness.

Methodology: Analyzing the News on the Web Corpus

To conduct a study of this magnitude, the researchers turned to the News on the Web (NOW) corpus. This massive digital database contains more than 20 billion words culled from English-language news articles published across 20 different countries. It provides a real-time snapshot of how language evolves in professional settings, making it an ideal resource for tracking the emergence of AI-related terminology.

The team focused their analysis on how frequently mental verbs such as "learns," "means," "knows," "thinks," and "wants" were paired with subject terms like "AI" and "ChatGPT." By analyzing thousands of instances within the corpus, the researchers were able to move beyond anecdotal evidence to reach a data-driven conclusion about the state of modern journalism.
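The kind of pairing analysis described above can be pictured as a collocation count: for each occurrence of a subject term, check whether the next token is a mental verb, and tally the pair. The sketch below is purely illustrative and is not the study's actual pipeline (the NOW corpus is queried through its own interface, with part-of-speech tagging far more sophisticated than adjacent-token matching); the verb and subject lists are drawn from the examples in this article.

```python
import re
from collections import Counter

# Mental verbs and subject terms named in the article (illustrative subset).
MENTAL_VERBS = {"learns", "means", "knows", "thinks", "wants", "needs"}
SUBJECTS = {"AI", "ChatGPT"}

def count_pairings(text):
    """Count (subject, verb) pairs where a subject term is immediately
    followed by a mental verb. A crude stand-in for corpus collocation
    search, good enough to show the shape of the analysis."""
    counts = Counter()
    tokens = re.findall(r"[A-Za-z]+", text)
    for subj, verb in zip(tokens, tokens[1:]):
        if subj in SUBJECTS and verb.lower() in MENTAL_VERBS:
            counts[(subj, verb.lower())] += 1
    return counts

sample = "AI needs large amounts of data. ChatGPT knows nothing in the human sense."
print(count_pairings(sample))
```

Scaled up to a 20-billion-word corpus with proper tokenization and grammatical tagging, the same basic tally is what yields figures like the 661 occurrences of "AI needs" reported below.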

The findings challenged the common assumption that media coverage is rife with sensationalist anthropomorphism. Contrary to expectations, the study found that news writers are relatively conservative in their use of humanizing language when discussing artificial intelligence.

Data Breakdown: The Frequency of Mental Verbs

The statistical analysis revealed that while anthropomorphism is a staple of casual conversation, it remains comparatively rare in professional news writing. The researchers identified specific patterns in how certain verbs were paired with AI technologies:

  1. The Dominance of "Needs": The word "needs" was the most frequent mental verb paired with "AI," appearing 661 times in the dataset. However, the researchers noted that in most cases, "needs" did not imply a human desire. Instead, it referred to functional requirements, such as "AI needs large amounts of data" or "AI needs human oversight."
  2. The Rarity of "Knows" for ChatGPT: Despite the perception that ChatGPT is an all-knowing entity, the pairing of "ChatGPT" with "knows" appeared only 32 times in the multi-billion-word corpus. This suggests that journalists are making a conscious effort to describe the system’s outputs as the result of data processing rather than genuine knowledge.
  3. The Spectrum of Usage: The researchers found that anthropomorphism exists on a spectrum. While some phrases were strictly functional, others, such as "AI needs to understand the real world," moved closer to suggesting that the technology possesses a form of ethical or situational awareness.

This data suggests a high level of linguistic restraint among journalists, likely influenced by evolving editorial standards and a growing awareness of the technical realities of machine learning.

The Technical Reality: Why "Thinking" is a Misnomer

To understand why the choice of verbs matters, one must look at the underlying mechanics of artificial intelligence. Large Language Models (LLMs) do not "think" in the biological or philosophical sense. They operate through complex statistical architectures known as transformers. These systems analyze patterns in vast datasets to predict the most likely next word in a sequence.

When a user asks an AI a question and it provides a coherent answer, the AI is not "recalling" a fact it "knows." It is calculating a probability distribution across its trained vocabulary. By using words like "understand" or "decide," writers may inadvertently suggest that the AI has an internal monologue or a moral compass.
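This "probability distribution" is concrete: at each step the model assigns a raw score (a logit) to every candidate next token, and a softmax converts those scores into probabilities. The toy example below uses invented token names and scores to show the mechanism; it is a minimal sketch of the math, not a real language model.

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution summing to 1.
    Subtracting the max before exponentiating is a standard trick for
    numerical stability."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidate tokens and scores for a prompt like
# "The capital of France is ..." -- purely illustrative values.
vocab = ["Paris", "London", "banana"]
logits = [4.0, 2.0, -1.0]
probs = softmax(logits)

for token, p in zip(vocab, probs):
    print(f"{token}: {p:.3f}")
```

The model then samples or selects from this distribution. Nothing in the computation involves "knowing" that Paris is the capital of France; "Paris" simply received the highest score because of patterns in the training data.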

"AI does not possess beliefs or feelings," the researchers emphasized. "It produces responses by analyzing patterns in data, not by forming ideas or making conscious decisions." When a journalist writes that "the AI decided to ignore certain data," they are using a shorthand that can be misleading. In reality, the "decision" was a result of weights and biases set by human developers during the training process.

A Chronology of AI Terminology and Public Perception

The way we talk about AI has shifted dramatically over the decades, following the "AI winters" and subsequent booms in development:

  • The 1950s-1960s (The Logic Era): Early pioneers like Alan Turing and John McCarthy used terms like "thinking machines" and "artificial intelligence." During this era, the focus was on symbolic logic and the "Turing Test," which fundamentally linked machine success to the ability to mimic human conversation.
  • The 1980s-1990s (The Expert Systems Era): As AI moved into industrial applications, the language became more clinical. Systems were "expert databases" or "neural networks." The focus was on "data processing" rather than "consciousness."
  • The 2010s (The Deep Learning Explosion): With the rise of Siri and Alexa, tech companies intentionally used anthropomorphic language to make products feel more accessible. "Siri knows your schedule" became a standard marketing trope.
  • 2022-Present (The Generative AI Era): The release of ChatGPT brought anthropomorphism back to the forefront of public discourse. Users began reporting "conversations" with AI, leading to a renewed debate over the ethical implications of humanizing software.

The Iowa State study marks a pivotal moment in this chronology, suggesting a "correction" phase where professional communicators are beginning to push back against the humanizing trends of the previous decade.

Institutional Influence: The Role of Editorial Guidelines

One of the primary reasons for the identified restraint in news writing is the influence of major editorial organizations. The Associated Press (AP), which sets the standard for thousands of newsrooms globally, has issued specific guidance on reporting on AI.

The AP Stylebook advises journalists to avoid attributing human emotions or traits to AI. It suggests using neutral terms like "generated," "calculated," or "processed" rather than "thought" or "felt." This institutional gatekeeping serves as a critical buffer against the spread of misinformation regarding AI sentience.

Jeanine Aune, a teaching professor of English at Iowa State, noted that these guidelines are essential because "certain anthropomorphic phrases may even stick in readers’ minds and can potentially shape public perception of AI in unhelpful ways." By adhering to strict linguistic standards, journalists help maintain a clear distinction between the tool and the user.

The Hidden Danger: Obscuring Human Accountability

Perhaps the most significant implication of anthropomorphizing AI is the potential for "responsibility drift." When we say "the AI made a mistake" or "the algorithm decided to deny the loan," we shift the focus away from the humans who designed, trained, and deployed the system.

The researchers found that the use of passive voice in news writing—such as "AI needs to be trained"—is actually a positive sign. It implies an external actor (a human) is performing the action. Conversely, active anthropomorphic verbs can act as a shield for corporate or developer accountability. If a system is described as having its own "intentions," it becomes easier for organizations to claim they have no control over its "behavior."

"The language we choose shapes how readers understand AI systems, their capabilities, and the humans responsible for them," Mackiewicz said. Ensuring that the human element remains visible in tech reporting is vital for ethical oversight and legal regulation.

Analysis of Implications: The Future of Technical Communication

The study’s findings offer a roadmap for the future of technical and professional communication. As AI tools become integrated into the writing process itself, professionals must become "meta-aware" of the language they use.

1. Literacy and Education: There is a growing need for "AI literacy" that includes a linguistic component. Educators and trainers should emphasize the difference between functional descriptions (what the machine does) and cognitive descriptions (what the machine is perceived to be doing).

2. The Spectrum of Sentience: Since anthropomorphism exists on a spectrum, writers should aim for the lower end of that spectrum. Using "outputs" instead of "says," or "identifies" instead of "realizes," can significantly change the tone and accuracy of a piece of technical writing.

3. Public Trust: In an era of "deepfakes" and AI-generated misinformation, the credibility of news organizations depends on their ability to describe technology accurately. Over-hyping AI through humanizing language can lead to a "hype cycle" that eventually erodes public trust when the technology fails to live up to its "human-like" promises.

Conclusion: Staying Mindful in a Transforming Landscape

The research conducted by Mackiewicz, Aune, and their colleagues serves as a vital reminder that language is not merely a reflection of reality, but a tool that shapes it. While the news media is currently performing better than expected in avoiding the pitfalls of anthropomorphism, the pressure to use relatable language will only increase as AI becomes more sophisticated.

As the research team concluded in their published study, "Our findings can help technical and professional communication practitioners reflect on how they think about AI technologies as tools in their writing process and how they write about AI."

The challenge for the next generation of writers will be to describe the incredible capabilities of artificial intelligence with the wonder they deserve, while maintaining the clinical distance necessary to remind the world that, at the end of the day, these are machines built by human hands, governed by human data, and requiring human responsibility. Future studies will likely continue to monitor this linguistic frontier, exploring whether even the rare instances of anthropomorphism have a disproportionate impact on how the public perceives the "mind" of the machine.
