AI-generated fake judgments are threatening the integrity of India's courts. That is the core issue here, and it is why the Supreme Court stepped in with strong measures after a junior judge in Andhra Pradesh relied on AI-produced orders to decide a property dispute. And this is the part most people miss: the problem isn't just a single misquote; it's a systemic risk to due process when automated content can be mistaken for genuine legal authority.
Here's what happened, in plain terms. A lower court in Vijayawada issued an order in a property case after a survey and report were filed by an official. The defendants challenged the order, questioning the four precedents cited to support the decision. It later emerged that all four cited judgments had been generated by artificial intelligence and did not correspond to real, verifiable cases. Generative AI can create plausible-sounding but false information, even fabricating sources. This phenomenon, often called "hallucination," is a known flaw in many AI systems and is particularly dangerous in legal settings, where accuracy matters.
The defendants appealed to the state high court. The high court recognized the fake citations but judged the error to have been made in "good faith" and nonetheless upheld the trial court's ruling, stating that incorrect or non-existent rulings cited in a decision do not necessarily undermine the outcome if the substantive legal reasoning is sound. It also asked the junior judge to explain her use of AI. She explained that it was her first time using an AI tool and that she had believed the citations were authentic, attributing the mistake to reliance on an automated source rather than any intent to misquote. The high court urged a shift toward "actual intelligence over artificial intelligence."
The defendants then brought the matter to the Supreme Court, which took a sterner stance. Last week, it stayed the lower court's order in the property dispute and characterized the use of AI in judicial drafting as not merely an error but a form of misconduct. The Supreme Court emphasized that the concern isn't only the decision on the case's merits but the entire adjudicatory process and the risks AI poses to institutional integrity. Notices were issued to the Attorney General, the Solicitor General, and the Bar Council of India as part of a deeper review.
This episode isn’t isolated. Earlier this year, the Supreme Court flagged worries about lawyers using AI to draft petitions, calling such practices “absolutely uncalled for.” The question looms globally: how should courts regulate AI so it supports, rather than erodes, fair adjudication?
India has started addressing these questions with ongoing debates and policy work. The Supreme Court published a white paper on AI in the judiciary, outlining best practices and guidelines for judicial bodies, lawyers, and court staff, while stressing the necessity of human oversight and sturdy safeguards. The overarching message is clear: embrace AI’s benefits carefully, but maintain rigorous checks to protect the integrity of the justice system.
What do you think about AI-assisted judging and legal drafting? Should AI aids be allowed only with strict human verification, or should courts ban AI tools altogether in formal proceedings? Do you believe the safeguards proposed in India’s white paper are sufficient, or would you push for stronger controls and transparency about AI-generated material used in court filings?