This week’s issue of AI Ethics is a little different: it concerns the responsible conduct of research (RCR). By federal law, anyone who receives federally funded grants must be trained in RCR. As a research assistant at UAB, I created RCR training materials used university-wide. Research misconduct is defined as falsification, fabrication, or plagiarism (FFP) of data or citations. While the two cases under examination this week do not involve grant participants, they do involve FFP.
Case 1: Plagiarism
This week’s case examines the misuse of generative AI in the creation of official documents. On 22 May 2025, RFK Jr. publicly released the Make America Healthy Again (MAHA) report, which contained numerous citation errors. Several references were entirely hallucinated by ChatGPT and included the placeholder code “oaicite”, a marker used during AI-generated drafting to indicate where a citation should be inserted after human review (OpenAI Developer Community).
Thirty-seven citations appeared repeatedly throughout the report, though not as in-text citations. Other references involved partial hallucinations, such as incorrect journal titles assigned to real articles, or combinations of real authors and journals with fabricated publication data. In addition, the report included hallucinated interpretations of data from legitimate studies (Earl 2025).
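Because “oaicite” is a literal string left behind in the text, even a trivial automated check could have flagged these placeholders before publication. Below is a minimal sketch of such a check in Python; the placeholder patterns are illustrative assumptions on my part, not an exhaustive list of the markers different drafting tools emit.

```python
import re

# Illustrative placeholder patterns: "oaicite" is the marker discussed above;
# the others are assumed examples of generic draft placeholders.
PLACEHOLDER_PATTERN = re.compile(
    r"oaicite|\[insert citation\]|\[citation needed\]",
    re.IGNORECASE,
)

def find_placeholders(text: str) -> list[tuple[int, str]]:
    """Return (line number, line) pairs that contain a suspected citation placeholder."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if PLACEHOLDER_PATTERN.search(line):
            hits.append((lineno, line.strip()))
    return hits

if __name__ == "__main__":
    draft = "This finding is well documented (oaicite:3).\nA second sentence with a real citation (Smith 2021)."
    for lineno, line in find_placeholders(draft):
        print(f"line {lineno}: {line}")
```

A check like this does not verify that a citation is real; it only catches the most obvious residue of unreviewed AI-generated drafting.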
Policy Recommendations
Under current ORI guidelines, RFK Jr.’s team did not commit FFP.
“ORI considers plagiarism to include both the theft or misappropriation of intellectual property and the substantial unattributed textual copying of another’s work. It does not include authorship or credit disputes” (HHS ORI).
Clearly, in the new age of generative AI, the ORI policy needs to be updated to cover hallucinated citations, whether fabricated in whole or in part. As written, falsification and fabrication apply only to the data within research reports, not to citations.
Case(s) 2: Legal Misconduct
Judges are seeing multiple instances of attorneys presenting hallucinated legal cases in briefs filed with the court. Disbarred attorney Michael Cohen gave his lawyers case citations hallucinated by Google Bard, and those lawyers then submitted the hallucinated citations to a judge. Two sets of human reviewers failed to check the sources contained in the court filing (The Associated Press 2023).
In another case, attorneys from the law firms Ellis George LLP and K&L Gates LLP submitted a brief to Special Master Michael Wilner containing numerous hallucinated citations created by the AI tools CoCounsel, Westlaw Precision, and Google Gemini. The Special Master caught two of the errors, allowing the attorneys to file a corrected brief. However, the corrected brief still contained multiple hallucinated case citations, including two completely nonexistent cases (Ambrogi 2025).
Jisuh Lee, a Toronto attorney, presented a factum to the court containing two completely hallucinated cases. When questioned by the judge, Lee stated that it was not her firm’s policy to use generative AI tools but that “she would check with her clerk on the matter.” After the hearing in the case of Ko v. Li, the judge found another completely hallucinated citation as well as one legitimate citation that supported the opposite conclusion from the one Lee claimed (Ambrogi 2025).
Judicial Response
Judges are cracking down on attorneys presenting hallucinated citations to the court. In the Ellis George LLP and K&L Gates LLP matter described above, the Special Master:
- Denied the discovery relief the attorneys sought.
- Ordered the law firms to jointly pay $31,100 in the defendant’s legal fees.
- Required disclosure of the matter to their client.
- Struck all versions of the attorneys’ supplemental brief (Ambrogi 2025).
In other instances, cases have been dismissed entirely, attorneys have been fined upwards of $5,000, and attorneys have been threatened with, or received, other disciplinary actions, including contempt of court (Jacob 2025; Ambrogi 2025).
Policy Recommendations
At minimum, presenting unverified, hallucinated case citations is legal incompetence due to laziness. The American Bar Association’s Rule 8.4(c) defines “misconduct” as:
“It is professional misconduct for a lawyer to engage in conduct involving dishonesty, fraud, deceit or misrepresentation”
(American Bar Association).
Including hallucinated citations in court filings, especially if done intentionally, is legal misconduct. I argue that presenting hallucinated case citations also violates Rule 3.3(a)(1), Candor Toward the Tribunal, which states:
“A lawyer shall not knowingly make a false statement of fact or law to a tribunal or fail to correct a false statement of material fact or law previously made to the tribunal by the lawyer”
(American Bar Association).
If one is using generative AI, one knows, or is reasonably expected to know, that generative AI makes errors (“hallucinates”). Failure to confirm output created by generative AI is a failure of due diligence and therefore violates both of the aforementioned rules. The judicial penalties described above support this conclusion.
Conclusion
The penalties for committing research fraud (RCR violations) range from retraction of the publication to a lifetime ban on receiving federal grant funds. In rare circumstances, a person who commits FFP may lose their professional license (as Andrew Wakefield did) or face criminal charges. Similar penalties should apply in these cases.
The user of generative AI is considered the author of the output and is responsible for its validity. The user is also responsible for properly citing the output in official documents and publications. Some argue that any public posting of AI-generated output, including social media posts, should include a proper citation.
Here is a sample guideline on how to properly cite LLM outputs:
“Summarize the attached emails and generate a list of action items.” 05 June 2025. ChatGPT o4-mini, OpenAI. https://chatgpt.com/
The title is the prompt that was used to generate the output. If the prompt is more than one sentence (most likely the case), then the title is a one-sentence summative description of the prompt.
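To make the guideline concrete, here is a small hypothetical helper that assembles a citation string in the format shown above. The function name and its parameters are my own illustration of this format, not part of any official citation standard.

```python
def format_llm_citation(prompt_or_summary: str, date: str, model: str,
                        provider: str, url: str) -> str:
    """Assemble an LLM citation in the sample format shown above:
    "Title." Date. Model, Provider. URL
    The title is the prompt itself or a one-sentence summary of it."""
    title = prompt_or_summary.strip().rstrip(".")
    return f'"{title}." {date}. {model}, {provider}. {url}'

# Reproduces the sample citation from the guideline above.
print(format_llm_citation(
    "Summarize the attached emails and generate a list of action items.",
    "05 June 2025", "ChatGPT o4-mini", "OpenAI", "https://chatgpt.com/",
))
```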
Ethical Analysis
The cases presented here (and numerous others within the legal field) underscore a fear of AI critics: that generative AI will make people lazy and dull their critical reasoning skills rather than serve as a tool that makes our lives more efficient and augments human reasoning. Both Kantian deontology and Natural Law Theory ground moral duties in human reason.
These cases highlight a failure to comply with a very simple Kantian-like imperative that is not a novel concept even within the context of emerging technology such as generative AI:
“Always check your sources.”
Failing to check one’s sources or presenting hallucinated sources are not actions that can be universalized; if universalized, the entire legal profession would collapse. Furthermore, presenting hallucinated citations treats others as a means toward one’s own goals rather than as rational beings who are ends in themselves. A lawyer who presents hallucinated citations treats the judge as a means toward the lawyer’s own goals. Intentional deception constitutes a direct violation of moral duty.
None of the cases presented herein explicitly entails intentional deception. However, negligent oversight still fails the duty of due diligence expected of rational agents.
From a Natural Law perspective, the moral problem runs deeper still. These cases are not only violations of professional codes of conduct but also ethical violations against oneself. Within the Natural Law framework, humans are by nature rational creatures, and cultivating reason and acting rationally are moral duties. The telos of academic research and of the law is to rigorously seek the truth and advance justice.
Acting contrary to this purpose is unnatural. Using generative AI technologies in ways that dull our reason or promote laziness is an immoral act against oneself. Generative AI technologies morally ought to be used in ways that make our lives more efficient and that augment rather than replace human reason.
In addition to violating duties grounded in reason and nature, these practices also reveal a deficiency in moral character. From the standpoint of Virtue Ethics, the cases presented reflect a deterioration of moral character among professionals who rely on generative AI tools without verification.
Rather than focusing on rules or consequences, Virtue Ethics asks:
What kind of person would act this way? What kind of character traits does this action reveal?
Legal and academic professions are traditionally associated with intellectual virtues such as:
- Practical wisdom (phronesis): discerning how to act rightly in context
- Honesty: valuing truth and avoiding deception
- Diligence: being thorough and conscientious in one’s duties
- Integrity: maintaining consistency between one’s values and actions
Presenting hallucinated citations, in whole or in part, whether intentionally or through negligence, and presenting citations or data that support the opposite conclusion from what one claims, reflects moral viciousness. It reflects:
- Sloth: by failing to verify LLM-generated text
- Dishonesty: by allowing falsehoods to masquerade as knowledge or legal precedent
- Cowardice: if one is deferring to technological tools to avoid the intellectual labor of critical evaluation
Taken together, these ethical frameworks agree on a common conclusion:
Both Kantian deontology and Natural Law Theory affirm that individuals using generative AI technologies have a categorical and natural duty to verify their outputs. Failure to do so constitutes not only a professional lapse but a moral violation against reason itself. Virtue Ethics complements this analysis by exposing how such failures, and the use of generative AI in ways that replace rather than augment human reason, reflect and reinforce poor moral character.
Whether framed as a breach of rational obligation, a violation of natural purpose, or the expression of vice, the misuse of generative AI tools is not merely unprofessional; it is unethical.