Recent developments in both the UK and US reflect a broader trend of governments grappling with — and stepping back from — definitive stances on AI and copyright, leaving businesses facing continued regulatory uncertainty.
UK: A Significant Policy Reversal on AI and Copyright
On 18 March 2026, the UK government announced a major shift in its approach to regulating the use of copyright‑protected works for AI training. It has withdrawn support for its previously favoured framework, which would have allowed AI developers to train models on lawfully accessed copyrighted works, subject to a rightsholder opt-out.
Instead of pushing ahead with this model, the government has stated that it now has “no preferred option” for reform and will continue to evaluate a range of policy choices. This marks a clear retreat from what had been the central legislative proposal in its December 2024 consultation on Copyright and Artificial Intelligence.
U.S.: A federal policy framework for artificial intelligence was published stating that, in the Administration’s view, training AI models on copyrighted material does not violate copyright law. Nevertheless, it acknowledges that arguments to the contrary exist and therefore supports allowing the courts to resolve the issue.
For businesses, the immediate effect is continued uncertainty: there is still no bespoke statutory regime providing a “safe route” for AI training on copyright works, nor a new comprehensive framework for rightsholders.
In December 2024, the government launched a consultation on how copyright law should apply to AI model training. Its preferred option at that time was a model intended to:
These concerns are reflected explicitly in the March 2026 statement, where the government notes that the opt‑out proposal was “overwhelmingly rejected by the vast majority of the creative industries.”
In its written statement to Parliament on 18 March 2026, issued alongside a Report on Copyright and Artificial Intelligence and an Impact Assessment under the Data (Use and Access) Act 2025, the government confirmed that:
Instead of advancing the earlier opt‑out model, the government has opted to:
While this does not close the door on future legislative reform, it does represent a meaningful recalibration. The direction of travel has changed, but there is no replacement regime in place.
Although the opt‑out framework has been set aside, the government has not abandoned the field. It has outlined a broader work programme focused on incremental, targeted measures.
The government will continue to:
Future reforms may therefore focus less on broad exceptions and more on mechanisms that improve rightsholders’ visibility into how their works are used and strengthen their ability to understand and enforce their rights.
The government has announced a task force on AI-generated content labelling, with an interim report expected in the autumn.
This initiative is intended to explore how AI‑generated content should be identified and labelled, which may have implications for:
The government will also launch a summer consultation on digital replicas, focusing on harms associated with unauthorised replication of a person’s likeness or identity.
Issues likely to be in scope include:
Finally, the government appears inclined, at least in the near term, to observe rather than intervene in emerging licensing models for AI training on copyright works.
The March 2026 report discusses the development of licensing markets, but stops short of imposing a specific legislative solution. This suggests a preference for market‑led licensing arrangements, while retaining the option of more targeted statutory intervention if needed.
For AI developers and businesses deploying generative AI in or from the UK, the key implications are:
These implications flow from the government’s decision not to proceed with its previously preferred model and the absence of any enacted legislative reform in the March 2026 materials.
A useful point of comparison is the U.S. Administration’s March 20, 2026 National Policy Framework for Artificial Intelligence: Legislative Recommendations, which takes a notably different approach to the copyright-and-AI training issue from the recent UK position. The White House framework is a policy blueprint for Congress, not binding law, and any implementation would require congressional action.
Key U.S. IP-Related Takeaways
The UK and U.S. are currently signaling different policy instincts on AI and copyright. The UK has retreated from its earlier preferred opt-out model for AI training on copyrighted works and is shifting toward further consultation and policy development centered on transparency, creator control, labelling, and digital replicas. The March 2026 U.S. federal policy framework, by contrast, states that AI training on copyrighted material does not, in the Administration’s view, violate copyright law, but recommends that courts resolve that question while Congress considers collective licensing mechanisms and digital-replica protections. The result is a notable divergence: the UK is reassessing legislative reform, while the U.S. framework is more focused on judicial resolution of the core copyright issue and targeted legislative action around the edges.