AI-generated fake judgments are shaking the foundations of judicial trust in India, and the controversy isn't just about one mistaken order.
India's Supreme Court has warned of legal consequences after a junior judge in Andhra Pradesh relied on AI-produced judgments that turned out to be fake. The higher court is now reviewing the lower court's ruling in a property-dispute case, signaling that the problem isn't isolated to one decision but touches the integrity of the adjudicatory process itself.
What happened, simply explained: in the Vijayawada trial court, a junior civil judge issued an order in a property dispute last August. The judge had previously ordered a survey of the property and a report, which the defendants contested. The judge dismissed the objection, citing four prior judgments—only to learn later that all four were generated by AI and were not real, legitimate precedents.
Why this matters: AI tools can generate outputs that look convincing but are incorrect or entirely fabricated. In legal contexts, citing fake sources can mislead decisions, undermine due process, and erode public confidence in courts. Generative AI is known for “hallucinating” data or inventing sources, even when it seems to be providing helpful information.
What the defendants did next: they challenged the order in the Andhra Pradesh High Court, pointing out that the cited orders were fake. The High Court acknowledged the mistake but found that the judge had erred in good faith, and it upheld the trial court's decision. The High Court reasoned that even if some citations were non-existent, the trial court's order could still stand if the law had been applied properly to the facts.
The High Court's stance and the human element: the court asked the junior judge for an explanation. She admitted she had used AI for the first time and believed the citations were genuine, asserting that the mistake arose from reliance on an automated source, not from intentional misquotation. The court emphasized the need to prioritize actual intelligence over artificial intelligence.
Supreme Court intervention: the defendants appealed again, and the Supreme Court took a stricter view of AI's impact on justice. It stayed the lower court's ruling in the property dispute, stressing that using AI in judging isn't merely a misstep in decision-making but potentially misconduct. The Court framed the case as an institutional concern that goes beyond the merits of the dispute to the integrity of adjudication itself.
What happens next and who’s involved: the Supreme Court will review the case in more depth and has issued notices to India’s Attorney General, Solicitor General, and the Bar Council of India to participate in the proceedings.
Broader context: this isn’t only India’s issue. Global jurisdictions are grappling with AI in courts. For example, U.S. federal judges faced scrutiny last year over AI-assisted rulings containing errors, and the High Court of England and Wales advised lawyers to be cautious about using AI-generated case material after several instances of fictitious or largely invented precedents.
What to watch for: India's judiciary has already published a white paper on AI in the courts, outlining best practices and guidelines for judges, lawyers, and court staff, with a strong emphasis on human oversight and robust safeguards.
Bottom line: as AI tools become more common, courts must balance innovation with accountability, ensuring that automated assistance supports, rather than undermines, fair, transparent adjudication. Do you think judges should be allowed to use AI at all in drafting opinions or citations, or should AI use be restricted to non-legal tasks only? How would you design safeguards to prevent this kind of error while still benefiting from AI’s efficiency? Share your thoughts in the comments.