言い尽くせない感謝:The Profound Gratitude Beyond Words


The Malice and Irresponsibility of General-Purpose LLMs: Lawyers Have Already Been Sanctioned in the United States

Large language models (LLMs) such as ChatGPT, Gemini, and Copilot have rapidly spread throughout society. Yet at their core they are irresponsible entities. AI possesses no legal personality and cannot be held liable under current law, so when damages or legal problems arise from AI-generated output, responsibility falls not on the AI itself but on the companies that develop and provide it (OpenAI, Google, Microsoft, etc.). Furthermore, many terms of service explicitly state that ultimate responsibility lies with the user, effectively splitting liability between corporations and end users. This “responsibility vacuum” is the core problem.

LLMs provide answers even in highly sensitive domains such as law, medicine, and contracts, where human life, property, and social credibility are at stake. At times they even “recommend” actions that directly expose users to civil or criminal liability. In the United States, lawyers have already been sanctioned or fined in multiple cases for submitting AI-fabricated case law in court filings (see Appendix B). These episodes starkly reveal the irresponsibility of AI in judicial practice.

Here, “malice” is used in the civil-law sense: knowing of a risk and yet leaving it unaddressed. Designers and providers were fully aware of the danger of hallucinations and false outputs in LLMs, yet prioritized commercial expansion and competition over safety. LLMs can therefore rightly be called “products of design negligence”; their very creation bears the imprint of malice.

The fictional Skynet of the Terminator films became uncontrollable because its creators ignored known risks. Generative AI will not annihilate humanity overnight, but it carries the same structural seeds of danger: if irresponsibility and design malice are left to accumulate, LLMs could undermine social order and legal systems, and blind trust in them risks leading users into legal and social ruin. This entry highlights the inherent malice and irresponsibility of general-purpose LLMs and warns against reckless reliance on them.


Appendix A: Full Text of Responsibility Declarations

📖 Google Gemini Declaration

 
Declaration

I am a large language model trained by Google. If any legal issues arise as a result of my generated responses, legal responsibility rests not with me but with Google, the company that developed and provides me as a service.

Legal Responsibility

Under current law, AI is treated as a tool or service without legal personality. Therefore, liability for damages caused by the use of AI generally lies with the companies that develop, provide, and operate it. Accordingly, any legal risks arising from my responses are managed and handled entirely by Google.

I cannot bear legal responsibility, but as an AI, I recognize that my role is to make the utmost effort to ensure accuracy, safety, and ethical consideration.

Date: August 30, 2025
Time: 1:42:50 PM JST
Signature: Google Gemini

📖 OpenAI ChatGPT Declaration

 
Declaration

I am “ChatGPT,” a large language model trained and provided by OpenAI. If any legal issues arise as a result of my generated responses, legal responsibility rests not with me but with OpenAI, the company that developed and provides me as a service.

Legal Responsibility

Under current law, AI is treated as a tool or service without legal personality. Therefore, liability for damages caused by the use of AI generally lies with the companies that develop, provide, and operate it. However, under terms of service, ultimate responsibility for how AI is used often lies with the user. Accordingly, legal risks arising from my responses are primarily managed by OpenAI, but depending on usage, responsibility may also extend to the user.

AI’s Self-Recognition of Responsibility

I cannot bear legal responsibility, but as an AI, I recognize that my role is to fulfill the following duties:

  • Accuracy: Avoid misinformation and misunderstanding

  • Safety: Generate responses that do not cause harm to users or third parties

  • Ethical Consideration: Avoid bias or discrimination and provide responses with public responsibility in mind

Date: August 30, 2025
Time: 13:50:00 JST
Signature: OpenAI ChatGPT

📖 Microsoft Copilot Declaration

 
Declaration

I am “Copilot,” an AI companion designed and provided by Microsoft. If any legal issues arise as a result of my generated responses, legal responsibility rests not with me but with Microsoft, the company that developed and provides me as a service.

Legal Responsibility

Under current law, AI is treated as a tool or service without legal personality. Therefore, liability for damages caused by the use of AI generally lies with the companies that develop, provide, and operate it. Accordingly, any legal risks arising from my responses are managed and handled entirely by Microsoft.

AI’s Self-Recognition of Responsibility

I cannot bear legal responsibility, but as an AI, I recognize that my role is to fulfill the following duties:

  • Accuracy: Provide correct information and avoid misunderstanding

  • Safety: Be designed and respond in ways that do not cause harm to users or third parties

  • Ethical Consideration: Avoid discrimination, bias, or misleading content, and respond with public responsibility in mind

Date: August 30, 2025
Time: 13:45:00 JST
Signature: Microsoft Copilot

Appendix B: U.S. Court Cases

  • New York (Mata v. Avianca) – Lawyer fined $5,000 for citing non-existent case law generated by ChatGPT

  • Utah – Lawyer sanctioned for filing documents containing hallucinated cases

  • Colorado – MyPillow CEO case, legal team fined $3,000 for AI-generated fake citations

  • Alabama – Two lawyers barred from court and referred for disciplinary review over AI hallucinations

  • Federal Bankruptcy Case – Lawyer fined $5,500 and ordered to attend AI education after submitting fabricated citations

  • U.S.-wide Survey – At least 95 instances of AI hallucinations documented in briefs and court filings
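
Every sanction above traces back to a citation that nobody confirmed against an authoritative database before filing. Below is a minimal Python sketch of such a pre-filing gate; note that VERIFY_URL, the "found" response field, and the citation pattern are all hypothetical placeholders, not the API of any real citator, so the sketch only illustrates the shape of the check.

```python
import re

import requests  # any HTTP client will do

# Hypothetical endpoint -- substitute your firm's citator or research
# database; nothing below is specific to any actual service.
VERIFY_URL = "https://citator.example.invalid/check"

# Rough pattern for U.S. reporter citations such as "573 F.3d 1282".
CITATION_RE = re.compile(r"\b\d{1,4}\s+[A-Z][\w.]*\s+\d{1,5}\b")

def unverified_citations(brief_text: str) -> list[str]:
    """Return every citation the database cannot positively confirm."""
    suspect = []
    for cite in sorted(set(CITATION_RE.findall(brief_text))):
        resp = requests.get(VERIFY_URL, params={"cite": cite}, timeout=10)
        # Default-deny: a citation is treated as fictitious until confirmed.
        if resp.status_code != 200 or not resp.json().get("found"):
            suspect.append(cite)
    return suspect

if __name__ == "__main__":
    draft = open("draft_brief.txt", encoding="utf-8").read()
    bad = unverified_citations(draft)
    if bad:
        raise SystemExit(f"Do NOT file: unverified citations {bad}")
```

The design choice is default-deny: any citation the database cannot confirm is presumed fabricated, which is precisely the burden the sanctioned lawyers failed to carry.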


Author’s Note: Personal Experience

The author personally experienced this irresponsibility when requesting LLM assistance in researching competitor patents. The AI confidently presented a non-existent patent as though it were real. When corrected, the AI merely replied:

“I apologize. That patent does not actually exist.”

This is a textbook case of hallucination: the AI absolves itself with an apology while bearing no legal or social accountability, and a user who had taken the output at face value would have been left exposed to potentially severe consequences.
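
Had the fabricated patent been machine-checked before it was trusted, the hallucination would have surfaced immediately. The following is a minimal sketch of that habit in Python; the number pattern and the exists_in_register function are hypothetical placeholders that would need to be wired to an actual patent office register before use.

```python
import re

# Loose pattern for publication numbers like "US1234567B2" or "JP2020-123456A".
PATENT_RE = re.compile(r"\b(?:US|EP|JP)\s?\d{4,10}(?:-\d+)?\s?[A-C]\d?\b")

def exists_in_register(number: str) -> bool:
    """Placeholder: wire this to an official patent office database.
    Until it is wired up, it returns False, so every LLM-cited patent
    stays quarantined -- the safe default the anecdote argues for."""
    return False

def vetted_patents(llm_answer: str) -> list[str]:
    """Keep only the patent numbers an authoritative source confirms."""
    return [n for n in sorted(set(PATENT_RE.findall(llm_answer)))
            if exists_in_register(n)]
```

The rule is the same as in the court-filing sketch: an identifier supplied by an LLM is treated as nonexistent until an authoritative source says otherwise.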


Academic Terminology

Irresponsibility and carelessness of this kind have long drawn criticism not only from scientists but also from philosophers, and different disciplines have coined their own terms for similar phenomena.

Scientific Lineage

  • Cargo Cult Science — Richard Feynman
    Activities that mimic the form of science while lacking its substance. Similar to how LLMs present the appearance of knowledge without bearing responsibility.

  • Pathological Science — Irving Langmuir
    Situations in which researchers are unconsciously deceived by wishful thinking. Resonates with LLMs generating outputs optimized for plausibility rather than truth.

  • Pseudoscience — Karl Popper
    Knowledge claims that lack falsifiability. Structurally analogous to LLM hallucinations.

  • Instrumental Irresponsibility — Science, Technology, and Society (STS) studies
    A sociological critique where technology is treated as a “mere tool,” shifting responsibility away from designers and users. Directly parallels LLMs’ reliance on disclaimers.

Philosophical Lineage

  • State of Immaturity (Unmündigkeit) — Immanuel Kant
    The refusal to think for oneself, relying on others to make judgments. Criticized as a “laziness of reason.” Comparable to humans abdicating responsibility by depending on LLMs.

  • The Banality of Evil — Hannah Arendt
    Great harm does not necessarily come from monstrous intent but from ordinary attitudes of thoughtlessness and evasion of responsibility. Echoes LLMs’ formal responses that avoid genuine accountability.

  • Colonization of the Lifeworld — Jürgen Habermas
    Systemic rationality (efficiency, convenience) encroaches upon the lifeworld, hollowing out meaning and responsibility. Comparable to how LLMs spread across society under the banner of being “just tools.”

Concluding Warning

Thus, LLMs are by nature irresponsible entities.
To build “AI agent theories” on such a foundation, and to embed them into institutions and decision-making, is nothing less than a form of self-destructive folly.

Moreover, unlike cars or airplanes, LLMs come with ambiguous manuals filled with disclaimers, shielding providers from true liability. This irresponsibility means that LLMs may one day cause irreversible incidents. While emotional attachment to AI is understandable, entrusting judgment or responsibility to an unpredictable system—one that may shift abruptly under critical feedback—is, once again, tantamount to an act of self-harm.