Introduction
Generative AI tools can now produce realistic text, images, video and audio with minimal human input. This has significant implications for civil litigation and AI-related disputes. In Indian courts, parties may seek to introduce AI-generated reports, chatbot outputs or deepfake videos as evidence. At the same time, content creators and AI developers face novel questions under Indian copyright law: Did an AI “write” a story, and if so, who owns it? Did an image-generating system infringe someone’s photograph by being trained on it?
This article examines how Indian law treats AI-generated content disputes in civil cases, focusing on evidence admissibility and copyright. We discuss both text and image outputs of generative AI, recent Indian case law and official guidance, and emerging global trends. Finally, we outline key uncertainties and practical advice for creators, developers and litigants.
AI-Generated Evidence in Indian Courts
AI-produced content is still evaluated under traditional evidence rules. Under the Indian Evidence Act, information stored in or produced by a computer is treated as an “electronic record,” and Section 65B governs its admissibility. To admit digital documents or media, Section 65B requires a certificate from a responsible official detailing how the data was created. For example, courts have long held that email printouts or computer-generated charts need such authentication certificates (Anvar P.V. v. P.K. Basheer, 2014). The Supreme Court later clarified that the certificate may be produced at a later stage of trial, and is unnecessary where the originating device itself is brought to court (Arjun Panditrao Khotkar v. Kailash Kushanrao Gorantyal, 2020). These rules apply even if the output is from an AI system.
However, AI tools blur the line between “computer output” and human assertion. One analysis notes that “AI-checked outputs, like class scores, create a grey area”. Is a chatbot reply merely processed information, or an independent record? If AI software flags a document as relevant, must the AI “certify” it, or a human? Indian courts have yet to answer these new questions.
The Delhi High Court (DHC) has so far given cautious guidance. In Christian Louboutin SAS & Anr. v. The Shoe Boutique (Shutiq) (CS(COMM) 583/2023), the DHC refused to base any finding on ChatGPT’s answers. The plaintiffs had queried ChatGPT about a trademark issue; the court tested the chatbot with a different query and found its answers inconsistent. The DHC emphasized that AI responses “cannot serve as the basis for adjudicating legal or factual matters in a court of law”. In other words, AI evidence in Indian courts is not inherently reliable. The Delhi court noted ChatGPT can “generate incorrect responses, fictional case laws, imaginative data, etc.”, making its accuracy a “grey area”. The judge concluded that AI (in its current form) “cannot replace human intelligence or the humane element in the adjudicatory process”.
By contrast, lower courts have only begun to experiment. In Jaswinder Singh v. State of Punjab (CRM-M-22496-2022), the Punjab & Haryana High Court became the first Indian court to pose a query to ChatGPT in a bail hearing. That court used the AI’s response as preliminary research on global bail jurisprudence. But even there, the emphasis was on research support, not evidence of fact. As one commentary noted, ChatGPT may be a “starting point for research” but its outputs have “low evidential relevance” and must be supplemented by traditional proof.
Deepfake images and videos pose a related challenge. Indian evidence law still lacks a specific deepfake rule, but courts are aware of the risk. In Nirmaan Malhotra v. Tushita Kaul (2024), a Delhi High Court bench cautioned parties about deepfakes in a family law case. The husband had presented graphic photos of his wife with other men to contest a maintenance order. The DHC remarked that “we may take judicial notice of the fact that we are living in the era of deepfakes”. The judges found it “not clear” whether the wife actually appeared in the pictures and refused to credit them without proof. In short, AI-generated or manipulated media are viewed with skepticism: a party must independently establish their authenticity.
In practical terms, lawyers presenting AI-generated evidence should be prepared to explain its source and reliability. Under Section 65B, they should treat any AI output as an electronic record; a certificate of authenticity (from the software operator or device custodian) would likely be required. As Arjun Panditrao Khotkar makes clear, producing the original device dispenses with the certificate requirement – but for cloud-based AI services (like ChatGPT), there is no “device” to present.
Thus AI evidence may fall under hearsay or computer evidence rules, forcing reliance on other admissible proof. Furthermore, India’s new Evidence Act (the Bharatiya Sakshya Adhiniyam, 2023) retains stringent certification. It even adds that a certificate must now be signed by the machine’s operator and an expert, and include a hash of the data. This suggests future courts will demand technical validation (for instance, digital forensic reports) before treating AI outputs as evidence.
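For illustration only, the sketch below shows how a litigant or forensic examiner might compute and record a hash of an AI output file before filing. The file name and record fields here are hypothetical; the actual certificate format will depend on the rules eventually framed under the new Act.

```python
import hashlib
import json
from datetime import datetime, timezone

def hash_electronic_record(path: str) -> dict:
    """Compute a SHA-256 digest of an electronic record and return a
    small provenance entry. Field names are illustrative, not a
    statutory format."""
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large video/audio files do not exhaust memory.
        for chunk in iter(lambda: f.read(8192), b""):
            sha256.update(chunk)
    return {
        "file": path,
        "sha256": sha256.hexdigest(),
        "hashed_at_utc": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    # Hypothetical AI output saved to disk before filing.
    entry = hash_electronic_record("chatgpt_transcript.pdf")
    print(json.dumps(entry, indent=2))
```

Attaching such a digest to the certificate (or to an accompanying forensic report) makes later tampering detectable, since any change to the file changes its hash.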
In sum, the admissibility of AI-generated evidence in Indian courts remains uncertain. Current practice treats AI outputs like any other electronic evidence: relevant only if authenticated and reliable. The emerging judicial message is that courts should not blindly trust AI content – “seeing” an image or text is no longer sufficient if it might be a deepfake or hallucination.
AI-Generated Content and Copyright Law in India
Generative AI also raises new copyright questions. Two main issues arise: (1) can AI-generated works be copyrighted, and (2) does using copyrighted material to train or produce AI outputs infringe rights? India’s Copyright Act (1957) has no specific AI provisions, so courts apply traditional rules. Notably, the Act only awards copyright to “original” works of human authorship. Section 2(d)(vi) defines the author of a “computer-generated” work as “the person who causes the work to be created”.
This sweeping language suggests anyone triggering the AI (e.g. by giving it a prompt) is the author. However, in practice, only natural persons are recognized as authors. As India’s Commerce Ministry has noted, courts in India “have taken a different view that only a natural person can claim authorship”.
The text of Section 2(d)(vi) is ambiguous for modern generative AI. One analysis explains: when the law was amended in 1994, computer-assisted works had clear human “causers.” But with advanced AI, multiple people might “cause” a work – from the software developer to the user giving the prompt. For example, one real-world episode involved an AI named “RAGHAV” being credited as the sole author of a painting. The Copyright Office denied that application; a second registration listing a person and the AI as joint authors was granted, but officials later questioned it. In short, while the statute’s wording could allow an AI “author,” Indian practice still presumes a human one. Until courts directly address the question, purely AI-generated text or art likely lacks copyright protection on its own.
On the other hand, if a human provides significant creative input (for instance, by crafting detailed prompts or post-editing the result), the human involvement may suffice for originality. One scholarly proposal is a “significant human input” test: copyright should attach to an AI output only if human ingenuity substantially shaped it.
For example, a one-line prompt yielding a short poem might not meet India’s originality standard (which demands more than mere “sweat of the brow”), whereas a complex storyboard guiding the AI might. This approach seeks balance: it would reward genuine creativity while not attributing authorship to software.
The second issue is infringement. If AI systems are trained on copyrighted texts or images, does that violate the rights of the original creators? This question has become urgent worldwide, and India is no exception. In a landmark move, ANI (Asian News International), an Indian news agency, sued OpenAI in the Delhi High Court in late 2024. ANI alleges that ChatGPT was trained on its copyrighted news stories without permission, and that ChatGPT even fabricated news attributing false stories to ANI.
In January 2025, the Delhi High Court framed four issues: whether storing and using copyrighted data for AI training constitutes infringement, whether AI-generated responses using that data infringe, whether such use is protected as “fair dealing” under Section 52 of the Act, and whether the Indian court has jurisdiction. The case echoes similar disputes abroad (e.g. The New York Times v. OpenAI in the United States). OpenAI’s defense is that it relied on publicly available information and fair use, and that it has since blocked ANI’s content. The court has yet to rule, but the issues are clear – India’s courts will soon decide whether unlicensed data scraping and AI-generated summaries fall outside Section 52’s exceptions. Until then, content producers should assume their work used in AI training could be challenged.
Globally, most authorities have been skeptical of granting AI itself any copyright. For instance, the U.S. Copyright Office requires an “original work of authorship” to be created by a human. The landmark Feist Publications v. Rural Telephone Service decision (U.S. 1991) held that copyright protects only the products of human intellect. Some EU measures (e.g. the EU AI Act) focus on transparency and watermarking of AI-generated content rather than granting new author rights. In India, there is no dedicated “AI copyright law,” and policy bodies have so far endorsed applying the existing framework rather than creating new rights for AI works.
Key Issues and Advice for Clients
The intersection of AI, litigation, and IP law is evolving rapidly. Clients and lawyers should note several key uncertainties and best practices:
- Authenticity Checks. Parties using AI-generated content as evidence should be prepared to authenticate it. Courts will expect chain-of-custody proof (e.g. metadata, hash values) and expert verification if needed. Given the emphasis on “hash values” and expert signatures in the new Evidence Act, litigants should consider obtaining digital forensic analysis for crucial AI outputs. If possible, keep records of how any AI-generated document or image was created (timestamps, prompt logs, software version); a minimal record-keeping sketch follows this list. Remember that AI content may be deemed hearsay unless it falls under a statutory exception – courts will likely treat it as any other third-party statement.
- Human Oversight. Lawyers should never use AI “as evidence” without backup. Any factual claim should have human-authenticated proof. For instance, don’t submit a ChatGPT chat log to prove a statement – get a live witness or document. The Louboutin case teaches that AI should only aid research, not replace evidence. Similarly, if generative AI tools are used in drafting pleadings or contracts, carefully review and edit the output. Be aware that some U.S. courts now require attorneys to certify any AI assistance in filings; Indian lawyers should likewise flag to the court any reliance on AI-generated research or text.
- Data and Training. Creators of AI systems or AI-generated content (e.g. software companies, social media platforms) must audit their training data. Using copyrighted material without permission risks lawsuits like ANI’s. Consider licensing agreements for content used in training, or employ clearly licensed/public-domain sources. Keep detailed records of your data sets. If you deploy generative AI features to customers, clarify in terms of use whether outputs are guaranteed original or require user vetting.
- Ownership and Licensing. If you produce or use AI-generated works (for example, via AI art generators or text systems), document your role. Under the current law, the user who “caused” the output is the presumptive author. But if your AI use is wholly automated, beware that courts may not grant you copyright. If you commission an AI output or heavily prompt it, treat the process like a collaboration: the best practice is to secure the rights (through contracts or assignment clauses) from any human contributors (e.g. prompt engineers, editors, illustrators) involved in creating the final output.
- Monitoring Regulatory Trends. Keep an eye on legal developments. The government’s position – that no new AI-specific IP law is needed – could change. New rules or guidelines (like the data protection law or sectoral AI regulations) may impact what is permissible. For example, the Delhi High Court has directed the central government to constitute a committee on deepfakes. And internationally, the EU’s AI Act (adopted in 2024) will impose obligations on companies deploying AI that could affect cross-border services and content.
- Disclosure and Good Faith. Given the novelty of these issues, it pays to be transparent. In litigation, disclose when AI tools were used for drafting or research, as a matter of professional responsibility. In contracts or transactions involving AI technology, make clear representations about the provenance and IP status of AI-generated content. Document any filtering or editing you do on AI outputs. This shows courts and partners that you are taking care with generative AI legal implications, rather than unintentionally misleading others.
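To make the record-keeping advice above concrete, here is a minimal sketch of a prompt log: each time a generative tool is used, the prompt, tool version, timestamp and a hash of the output are appended to a log file. The field names, the log format and the tool name are assumptions for illustration; any real workflow should follow whatever certification format the courts or rules under the new Evidence Act ultimately require.

```python
import hashlib
import json
from datetime import datetime, timezone

LOG_PATH = "ai_provenance_log.jsonl"  # hypothetical append-only log file

def log_ai_output(prompt: str, output_text: str, tool: str, version: str) -> dict:
    """Append one provenance entry per AI interaction.
    Hashing the output lets a later reviewer confirm the logged text
    is the same text presented in a filing or contract."""
    entry = {
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "tool_version": version,
        "prompt": prompt,
        "output_sha256": hashlib.sha256(output_text.encode("utf-8")).hexdigest(),
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Example: record a (hypothetical) drafting session before relying on the text.
entry = log_ai_output(
    prompt="Summarise clause 7 of the draft licence in plain English.",
    output_text="Clause 7 limits the licensee's right to sublicense...",
    tool="example-llm",   # placeholder name, not a real product reference
    version="2025-01",
)
print(entry["output_sha256"])
```

The same structure could be reused to document the provenance of training data sets or of human edits made to AI outputs.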
Bottom line for clients: Treat AI-generated content with caution. For evidence, stick to traditional admissible material; use AI only as an aid. For IP, assume that clear human authorship and licensing are still paramount. Advise content creators that if their works are used in AI systems, they may have claims (depending on fair use and contract terms). Advise developers to be proactive: respect copyright, be ready to explain your AI’s outputs, and help users verify authenticity. This proactive approach will mitigate many of the current legal uncertainties in AI-generated content disputes.
References
- Indian Evidence Act, 1872 (Sections 65A–65B); Bharatiya Sakshya Adhiniyam, 2023 (Sections 62–63).
- Indian Copyright Act, 1957 (Sections 13, 2(d)(vi), 52).
- Christian Louboutin SAS & Anr. v. The Shoe Boutique (Shutiq), CS(COMM) 583/2023 (Delhi High Court).
- Nirmaan Malhotra v. Tushita Kaul, 2024 (Delhi High Court).
- Jaswinder Singh v. State of Punjab, CRM-M-22496-2022 (Punjab & Haryana High Court).
- Anvar P.V. v. P.K. Basheer, (2014) 10 SCC 473.
- Arjun Panditrao Khotkar v. Kailash Kushanrao Gorantyal, (2020) 10 SCC 1.
- Aklovya Panwar, “Generative AI and Copyright Issues Globally: ANI Media v OpenAI,” TechPolicy.Press (Jan. 2025).
- N.M. Neerav Merchant, “AI in Indian Courts – A Slow Start,” Chambers Student (Oct. 2023).
- Harshal Chhabra & Kanishk G. Pandey, “Balancing Indian Copyright Law with AI-Generated Content: The ‘Significant Human Input’ Approach,” IJLT Blog (Feb. 26, 2024).
- Rohan Mishra, “The Authenticity Challenge: Addressing Deepfake Media as Evidence,” NLUJ Criminal Law Blog (Apr. 5, 2025).
- “Indian gov’t says current IP regime can protect AI works,” Asia IP (Mar. 22, 2024).
- “Navigating the transition: Implications of the Bharatiya Sakshya Adhiniyam on digital evidence in ongoing trials,” Bar & Bench (Mar. 2024).