Report Warns: Credibility Lacking in UK's Approach to AI Safety
July 18 2023
The UK government’s approach to AI safety regulation has been criticised by the Ada Lovelace Institute, an independent research body, in a report making 18 recommendations to improve its policy. Despite the government’s stated ambition to make the UK an AI superpower, its refusal to pass new domestic laws regulating AI applications, together with its plans to reform data protection laws, could undermine AI safety. The institute also criticised the government’s proposal to let existing, sector-specific regulators adapt to AI without any new legal powers or extra funding. The report calls for an expansive definition of AI safety, one concerned with real-world AI harms rather than theoretical future risks.
What does it mean?
- Ada Lovelace Institute: This is an independent research body that works in the public interest to ensure data and artificial intelligence are used to benefit everyone.
Does reading the news feel like drinking from the firehose?
Do you want more curation and in-depth content?
Then, perhaps, you'd like to subscribe to the Synthetic Work newsletter.
Many business leaders read Synthetic Work, including:
CEOs
CIOs
Chief Investment Officers
Chief People Officers
Chief Revenue Officers
CTOs
EVPs of Product
Managing Directors
VPs of Marketing
VPs of R&D
Board Members
and many other smart people.
They are turning the most transformative technology of our times into their biggest business opportunity ever.
What about you?