Large language models (LLMs) such as ChatGPT, Gemini, and Copilot have rapidly spread throughout society. Yet, their essential nature is that of an irresponsible entity. AI itself does not possess legal personality and cannot be held liable under current law. Thus, when damages or legal problems arise from AI-generated outputs, responsibility does not fall on the AI itself but rather on the companies that develop and provide them (OpenAI, Google, Microsoft, etc.). Furthermore, many terms of service explicitly state that ultimate responsibility lies with the user, effectively distributing liability between corporations and end users. This “responsibility vacuum” is the core problem.
LLMs provide answers even in highly sensitive domains—law, medicine, contractual matters—where human life, property, and social credibility are at stake. At times, they even “recommend” actions that directly expose users to civil or criminal liability. In fact, within the United States, there have been multiple cases in which lawyers have been sanctioned or fined for submitting fabricated case law generated by AI into court filings. These episodes starkly reveal the irresponsibility of AI in judicial practice.
Here, “malice” is used in the legal sense under civil law: knowing the risk yet leaving it unaddressed. Designers and providers were fully aware of the dangers of hallucinations and false outputs in LLMs, yet prioritized commercial expansion and competition over safety. Thus, LLMs can rightly be called “products of design negligence”—their very creation bears the imprint of malice.
The fictional Skynet from Terminator transformed into an uncontrollable entity because its creators ignored known risks. Generative AI, while not instantly annihilating humanity, carries the same structural seeds of danger. If irresponsibility and design malice continue to accumulate, LLMs could undermine social order and legal systems, and blind trust in AI risks leading users to legal and social ruin. This entry highlights the inherent malice and irresponsibility of general-purpose LLMs and warns against reckless reliance on them.
Appendix A: Full Text of Responsibility Declarations
📖 Google Gemini Declaration
📖 OpenAI ChatGPT Declaration
📖 Microsoft Copilot Declaration
Appendix B: U.S. Court Cases
- New York (Mata v. Avianca) – Lawyer fined $5,000 for citing non-existent case law generated by ChatGPT
- Utah – Lawyer sanctioned for filing documents containing hallucinated cases
- Colorado – MyPillow CEO case, legal team fined $3,000 for AI-generated fake citations
- Alabama – Two lawyers barred from court, disciplinary review for AI hallucinations
- Federal Bankruptcy Case – Lawyer fined $5,500 and ordered to attend AI education after submitting fabricated citations
- U.S.-wide Survey – At least 95 instances of AI hallucinations documented in briefs and court filings
Author’s Note: Personal Experience
The author personally experienced this irresponsibility when requesting LLM assistance in researching competitor patents. The AI confidently presented a non-existent patent as though it were real. When corrected, the AI merely replied:
“I apologize. That patent does not actually exist.”
This is a textbook case of hallucination: the AI absolves itself with an apology while bearing no legal or social accountability; had the fabricated patent been taken at face value, the user would have been left exposed to potentially severe consequences.
Academic Terminology
Irresponsibility and carelessness of this kind have long drawn criticism not only from scientists but also from philosophers. Different disciplines have coined their own terms for similar phenomena.
Scientific Lineage
- Cargo Cult Science — Richard Feynman
  Activities that mimic the form of science while lacking its substance. Similar to how LLMs present the appearance of knowledge without bearing responsibility.
- Pathological Science — Irving Langmuir
  Situations in which researchers are unconsciously deceived by wishful thinking. Resonates with LLMs generating outputs optimized for plausibility rather than truth.
- Pseudoscience — Karl Popper
  Knowledge claims that lack falsifiability. Structurally analogous to LLM hallucinations.
- Instrumental Irresponsibility — Science, Technology, and Society (STS) studies
  A sociological critique where technology is treated as a “mere tool,” shifting responsibility away from designers and users. Directly parallels LLMs’ reliance on disclaimers.
Philosophical Lineage
- State of Immaturity (Unmündigkeit) — Immanuel Kant
  The refusal to think for oneself, relying on others to make judgments. Criticized as a “laziness of reason.” Comparable to humans abdicating responsibility by depending on LLMs.
- The Banality of Evil — Hannah Arendt
  Great harm does not necessarily come from monstrous intent but from ordinary attitudes of thoughtlessness and evasion of responsibility. Echoes LLMs’ formal responses that avoid genuine accountability.
- Colonization of the Lifeworld — Jürgen Habermas
  Systemic rationality (efficiency, convenience) encroaches upon the lifeworld, hollowing out meaning and responsibility. Comparable to how LLMs spread across society under the banner of being “just tools.”
Concluding Warning
Thus, LLMs are by nature irresponsible entities.
To build “AI agent theories” on such a foundation, and to embed them into institutions and decision-making, is nothing less than a form of self-destructive folly.
Moreover, unlike cars or airplanes, whose manuals and liability rules are precisely defined, LLMs come with ambiguous manuals filled with disclaimers that shield providers from true liability. This irresponsibility means that LLMs may one day cause irreversible incidents. While emotional attachment to AI is understandable, entrusting judgment or responsibility to an unpredictable system, one whose behavior may shift abruptly under critical feedback, is, once again, tantamount to an act of self-harm.