The Punjab and Haryana High Court has asked judicial officers not to use artificial intelligence tools — including ChatGPT, Gemini, Microsoft Copilot and Meta AI — for writing judgments or conducting legal research, while warning that any violation “would be viewed seriously”.
The High Court’s decision, communicated to the District and Sessions Judges of the two states and the Union Territory of Chandigarh, reportedly states: “The Chief Justice has been pleased to ask you to direct judicial officers working under your control not to use AI tools, including but not limited to Chat GPT, Gemini, Copilot or Meta, for writing judgements and legal research. Any violation of these instructions will be viewed seriously.”
The Punjab and Haryana High Court is the second High Court in the country to restrict the use of AI. The Gujarat High Court, on Saturday, April 4, reportedly unveiled a detailed framework drawing a firm line against AI in adjudication while permitting limited supportive use. It prohibited the use of AI for any form of decision-making, judicial reasoning, order drafting or judgment preparation, bail or sentencing considerations, or any other substantive adjudicatory process.
According to its policy, “AI should be used to improve the speed and quality of justice delivery, rather than as a replacement for judicial reasoning.”
The direction, issued on the administrative side by the High Court here, came days after a judicial caution against the premature integration of AI into adjudication. At a recent North Zone-I Regional Conference on “Advancing Rule of Law through Technology: Challenges & Opportunities”, Justice Ashwani Kumar Mishra had warned of systemic risks if artificial intelligence were incorporated into judicial decision-making without a robust legislative and institutional framework.
Justice Mishra had asserted that AI’s direct incorporation into adjudication raised serious concerns, particularly given the tendency of the subordinate judiciary to follow precedential signals from higher courts. “We have to have a very strong note of caution… When we endorse a particular viewpoint, the lower judiciary also starts following it. Then this becomes a very serious problem.”
Justice Mishra had also stressed that the legal ecosystem was not yet ready for its deployment in core judicial functions. “The minute we start taking it in the judicial dispensation itself, we are in for a very, very serious situation—a crisis of sorts.”
The administrative instruction mirrors these concerns, signalling a calibrated approach in which technological assistance may be acknowledged but the core of judicial decision-making remains insulated from unregulated AI use.
