India Proposes Mandatory Revenue-Sharing Model for AI Companies Using Copyrighted Content
New ‘One Nation One License One Payment’ framework would require OpenAI, Meta, Google, and other AI developers to pay statutory royalties for training models on Indian copyrighted works
Bhubaneswar: India’s Department for Promotion of Industry and Internal Trade (DPIIT) has proposed a groundbreaking copyright framework that would force global AI companies to share revenues generated from systems trained on Indian copyrighted content, potentially setting a precedent for how the world’s second-largest AI market regulates generative artificial intelligence.
The proposal, detailed in a 115-page working paper released in December 2025, introduces what the government calls a “hybrid model” that ensures “availability of all lawfully accessed copyrighted content for AI training as a matter of right, without the need for individual negotiations,” while guaranteeing “fair compensation to copyright holders.”
The framework comes as India emerges as a critical battleground for AI companies. OpenAI CEO Sam Altman recently stated that “India is our second-largest market in the world after the US and it may well become our largest market,” underscoring the stakes for companies like OpenAI, Meta, Google, and Anthropic.
How the Revenue-Sharing Model Works
Under the proposed framework, AI developers would be required to pay a "certain percentage of the revenue generated from AI Systems trained on copyrighted content" as royalties. The exact rates would be "fixed by a committee appointed by the government," according to the working paper.
A centralized non-profit entity called the Copyright Royalties Collective for AI Training (CRCAT), “made by the rightsholders and designated by the Central Government,” would collect these payments and distribute them to copyright owners through existing Copyright Societies and Collective Management Organizations.
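The working paper does not specify royalty rates, how revenue is attributed to India, or how the collected pool would be apportioned. Still, the single-window flow it describes, where the developer pays CRCAT a government-fixed percentage of revenue and CRCAT distributes the pool to copyright societies by class of works, can be sketched roughly. The Python illustration below is a minimal, hypothetical sketch: the 2% rate, the revenue figure, and the society shares are assumptions for illustration, not values from the proposal.

```python
# Illustrative sketch only: the working paper fixes none of these numbers.
# The rate, revenue figure, and society shares below are hypothetical.

GOVT_FIXED_RATE = 0.02  # hypothetical rate "fixed by a committee appointed by the government"

def crcat_collection(ai_system_revenue_inr: float, rate: float = GOVT_FIXED_RATE) -> float:
    """Royalty an AI developer pays to CRCAT: a percentage of revenue
    generated from AI systems trained on copyrighted content."""
    return ai_system_revenue_inr * rate

def distribute(pool_inr: float, society_shares: dict[str, float]) -> dict[str, float]:
    """Pro-rata split of the collected pool across Copyright Societies / CMOs
    by class of works (literary, musical, audiovisual, ...). The actual
    apportionment method is not defined in the working paper."""
    total = sum(society_shares.values())
    return {name: pool_inr * share / total for name, share in society_shares.items()}

# Hypothetical example: Rs 500 crore of AI-system revenue attributed to India
pool = crcat_collection(500e7)  # Rs 500 crore, in rupees
payouts = distribute(pool, {"literary": 0.40, "musical": 0.35, "audiovisual": 0.25})

print(f"Pool collected by CRCAT: Rs {pool:,.0f}")
for society, amount in payouts.items():
    print(f"  {society}: Rs {amount:,.0f}")
```

In this sketch, non-member rightsholders who register their works would simply appear as additional entries in the distribution step; the proposal leaves the precise apportionment formula to be worked out.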
The model represents a middle path between allowing unrestricted AI training and requiring individual licensing negotiations. As the DPIIT committee explained: “Long negotiations and high transaction costs can hold back innovation, particularly for startups and MSMEs.”
However, the framework firmly rejects the approach favored by many technology companies. “The Committee had detailed deliberations on the TDM [text and data mining] exception model recommended by the tech industry, however, this approach was not found to be a prudent policy approach,” the document states.
No Opt-Out Option for Copyright Holders
In a significant departure from the European Union model, India’s proposal does not allow copyright holders to opt out of having their works used in AI training.
“Under this framework, the rights holders will not have the option to withhold their works for use in the training of AI Systems,” the document states explicitly.
This means Indian publishers, music labels, film studios, and individual creators cannot prevent AI companies from training on their content—but they gain a statutory right to compensation through the collective licensing system.
The committee found that opt-out mechanisms, while offering “some comfort to large content industry players,” leave “small creators largely unprotected owing to lack of awareness to opt-out, bargaining power to negotiate, and the mechanisms to see if their content has been scraped despite opt-out.”
Why India Rejected the ‘Fair Use’ Approach
The working paper reveals that technology industry stakeholders, including Nasscom, advocated for a blanket text and data mining exception that would permit AI training on copyrighted works without payment. The committee rejected this approach.
“Allowing such an exception under law for commercial purposes would undermine copyright and it would leave human creators powerless to seek compensation for use of their works in AI Training,” the document argues. “It was not found to be a wise policy choice, especially for a country like India which has a rich cultural heritage and a growing content industry with immense potential.”
India’s media and entertainment sector crossed $29.4 billion in 2024, contributing 0.73% to the country’s GDP, with projections to reach $36.1 billion by 2027, a pace that outstrips India’s overall GDP growth.
The committee noted that allowing free AI training on copyrighted content could lead to “a sharp decline in human-created works, which would, over the years, affect the richness of our cultural and creative landscape.”
Global Context: Courts Still Divided
The Indian proposal comes amid continuing legal uncertainty in other major markets. The working paper references recent contradictory U.S. court decisions where judges reached different conclusions on whether AI training constitutes “fair use” under copyright law.
In Kadrey v. Meta Platforms, a Northern California court noted that “no matter how transformative LLM training may be, it’s hard to imagine that it can be fair use to use copyrighted books to develop a tool to make billions or trillions of dollars while enabling the creation of a potentially endless stream of competing works.”
Yet in Bartz v. Anthropic PBC, another judge from the same district ruled that using lawfully acquired books to train AI models constituted fair use since the use was “transformative.”
The DPIIT committee acknowledged that “awaiting finality on such pending litigations may not be optimal,” choosing instead to propose direct legislative intervention.
Implementation Through Central Authority
The proposed framework would establish a single-window system for AI developers. Companies would pay royalties to CRCAT, which would then distribute payments to member organizations representing different classes of works—literary, musical, audiovisual, and others.
“Even the non-members would be eligible to receive royalties” if they register their works for the purpose of receiving AI training-related payments, the document specifies.
The model aims to provide “an easy access to content for AI Developers for AI Training, simplify licensing procedures, reduce transaction costs, ensure fair compensation for rightsholders,” while offering “a level playing field for all, including start-ups and small players.”
Industry Opposition
Nasscom, India’s influential technology industry association, formally dissented from the committee’s recommendations. In submissions dated August 17, 2025, Nasscom recommended “Text and Data Mining (TDM) for both commercial and non-commercial purposes where access is lawful, and a good faith knowledge safeguard is met.”
The trade body argued that rightsholders should be able to reserve their works from TDM through machine-readable opt-outs for publicly accessible content, or through contract terms for non-public content.
However, the DPIIT committee found that “a majority of Committee Members” endorsed the mandatory licensing approach over alternatives including voluntary licensing, extended collective licensing, or TDM exceptions with opt-out rights.
What It Means for Enterprises
For enterprises deploying AI systems in India, the framework could create both opportunities and obligations:
- Reduced compliance burden: Companies would pay a single entity rather than negotiating multiple licenses with individual rightsholders.
- Legal certainty: The statutory framework would eliminate ambiguity about whether AI training infringes copyright, replacing unclear “fair dealing” interpretations.
- Cost implications: Companies would face mandatory royalty payments based on revenue generated from AI systems, with rates determined by a government-appointed committee (see the illustrative estimate after this list).
- Training data audits: Enterprises may need to demonstrate that AI training data was lawfully accessed, as the framework maintains a “lawful access” requirement.
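Because neither the royalty rate nor the reporting format has been set, any enterprise planning today can only work with assumptions. The short sketch below illustrates what a back-of-the-envelope royalty estimate and a minimal lawful-access record might look like; the 2% rate, the revenue figure, and the record fields are hypothetical, not requirements from the working paper.

```python
# Hypothetical planning sketch: the actual rate, reporting format, and audit
# requirements are not defined in the DPIIT working paper.
from dataclasses import dataclass

@dataclass
class TrainingSource:
    title: str
    rightsholder: str
    lawfully_accessed: bool   # the framework retains a "lawful access" requirement
    licence_or_basis: str     # e.g. purchase record, subscription, open licence

def estimated_royalty(india_ai_revenue_inr: float, assumed_rate: float) -> float:
    """Royalty exposure as a percentage of revenue from AI systems trained on
    copyrighted content; the real rate would be fixed by a government committee."""
    return india_ai_revenue_inr * assumed_rate

sources = [
    TrainingSource("Sample novel", "Publisher A", True, "purchased e-book"),
    TrainingSource("News archive", "Publisher B", True, "API subscription"),
]
assert all(s.lawfully_accessed for s in sources), "unlawfully accessed data falls outside the framework"

# e.g. Rs 100 crore of India-attributed AI revenue at an assumed 2% rate
print(f"Estimated annual royalty: Rs {estimated_royalty(100e7, 0.02):,.0f}")
```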
The working paper emphasizes that “since India represents a significant market for AI Systems and directly contributes to the revenues of AI Developers, there is an added rationale that a portion of such revenue be shared with the creators from India whose works are used in the training of such AI Systems.”
Next Steps
The working paper represents Part 1 of a broader examination of AI and copyright issues. Part 2, yet to be released, will address “the copyrightability and authorship of GenAI-generated outputs, including the applicability of moral rights and attribution of liability for infringing outputs.”
The DPIIT has presented this working paper “for stakeholders’ feedback,” indicating the proposal remains open for consultation before potential legislative action.
The framework also builds on India’s broader IndiaAI Mission, approved in March 2024 with over Rs 10,300 crore ($1.2 billion) in funding to build AI infrastructure, datasets, and computing capacity, with stated goals that include "socially impactful AI projects" and "bolstering ethical AI."
As India positions itself as both a major AI consumer market and an emerging AI developer hub, the proposed revenue-sharing model could influence how other countries balance innovation incentives with creator protections in the generative AI era.
