UK government ministers have pushed back plans to legislate on AI by at least a year, abandoning swift regulatory action in favour of a larger bill that addresses both safety concerns and copyright disputes. Technology Secretary Peter Kyle confirmed he intends to introduce a "comprehensive" AI bill in the next parliamentary session, which could delay implementation until May 2026. The move represents a significant shift from Labour's original strategy of introducing narrow, targeted legislation within months of taking office.
The government had initially planned a short bill focused specifically on large language models such as ChatGPT, requiring companies to submit their systems for testing by the UK's AI Security Institute (formerly the AI Safety Institute). This approach was designed to address existential risks from advanced AI models that could potentially threaten humanity. However, ministers chose to delay, in part to align with the approach of Donald Trump's administration, amid concerns that regulation might damage the UK's appeal to AI companies and that greater scrutiny of US firms could strain broader relations with the US government.
The broadened scope also stems largely from fierce opposition to the government's AI copyright proposals (see: What is the future of creative content?). Ministers now plan to incorporate copyright rules directly into AI-specific legislation. The House of Lords has mounted sustained resistance to proposals that would allow AI companies to train models on copyrighted material unless rights holders explicitly opt out. Peers backed an amendment to the data bill that would require AI companies to disclose whether they are using copyrighted material to train their models, in an attempt to enforce existing copyright law. The government, however, insists the data bill is not the right vehicle for the copyright issue and has promised to publish an economic impact assessment and a series of technical reports on copyright and AI.
The UK government is essentially kicking the issue down the road, by which point AI technology may well have moved on so far that today's regulatory approaches will need rewriting or will arrive too late to matter. It is attempting to avoid overly aggressive regulation that could harm innovation, but inaction may prove just as damaging. That said, I think controlling innovation or the use of AI through regulation will be a difficult prospect, just as compelling LLM suppliers to disclose all their training sources will be.
Posted by: Simon Baxter at 09:24