OpenAI has officially called on US lawmakers to exempt it from complying with state-level AI regulations, instead urging a unified approach under federal AI rules. It argues that a consistent, nationwide framework is critical to maintain US leadership in AI development and deployment.
In a newly released policy proposal, the company outlines what it calls a “freedom-focused” strategy, emphasizing that only a national approach will allow American innovation to flourish without being slowed by fragmented, state-specific requirements.
Key Elements of OpenAI’s Policy Proposal:
- Freedom to Innovate: OpenAI wants US developers and entrepreneurs — seen as the country’s core competitive advantage — freed from a patchwork of state AI laws. Instead, it advocates a voluntary partnership model between the federal government and the private sector to fuel innovation and stop adversaries like China from gaining an edge through US overregulation.
- Exporting Democratic AI: The company proposes using export controls to protect America’s AI lead and promote the global adoption of US-made AI. This means updating export rules to ensure that US technology sets the global standard, both commercially and ideologically.
- Copyright and AI Learning: Recognizing the role of AI models in learning from vast data, including copyrighted content, OpenAI suggests updating intellectual property laws to maintain a balance between protecting creators and ensuring US AI models remain competitive and secure — without ceding ground to foreign AI ecosystems.
- Infrastructure and Workforce: To maintain US leadership in AI, OpenAI is also pushing for massive investment in AI infrastructure, from modernizing energy grids to building a highly skilled, AI-ready workforce. This, it says, will reindustrialize parts of America, create hundreds of thousands of jobs, and strengthen economic resilience.
- AI Adoption in Government: OpenAI says the US government must lead by example when it comes to deploying AI technologies efficiently and safely, claiming that with China accelerating AI adoption in government and military, lagging behind is not an option.
Shaping the Future Regulatory Landscape
If adopted, these proposals could shape the future regulatory landscape. A unified federal approach could reduce the complexity of AI development, streamline compliance efforts, and even accelerate the deployment of AI solutions across industries. But it also raises questions about how content creators’ rights will be balanced against AI training needs.
OpenAI frames its strategy as essential to ensuring America “bets on American ingenuity” and stays ahead in the global AI race.
A Question of Copyright
Dr Ilia Kolochenko, CEO at ImmuniWeb and a Fellow at the British Computer Society (BCS), says: “Arguably, the most problematic issue with the proposal – legally, practically and socially speaking – is copyright. Paying a truly fair fee to all authors – whose copyrighted content has already been or will be used to train powerful LLM models that are eventually aimed at competing with those authors – will probably be economically unviable, as AI vendors will never make profits.”
He says millions of authors around the globe, whose creative content was misappropriated and exploited to train for-profit AI models without permission, or even in direct breach of licensing agreements, have still received no compensation. “In the meantime, AI giants awkwardly strive to make everybody forget about the inconvenient past and blindly focus on the allegedly bright future.”
Advocating for a special regime or copyright exception for AI technologies – which would likely deprive human authors of the true value of their intellectual labor – is unlikely even to approach fairness, Kolochenko adds.
Unleashing a Parade of Horrors
Moreover, he says the entire push toward an exception is a slippery slope that may unleash a parade of horrors.
“If AI technology deserves some exemptions from copyright protection, why other modern technologies don’t? Lawmakers should take OpenAI’s proposal with a high degree of caution, being mindful of the long-lasting consequences it may have on the American economy and legal system.”
Information Security Buzz News Editor
Kirsten Doyle has been in the technology journalism and editing space for nearly 24 years, during which time she has developed a great love for all aspects of technology, as well as words themselves. Her experience spans B2B tech, with a lot of focus on cybersecurity, cloud, enterprise, digital transformation, and data centre. Her specialties are in news, thought leadership, features, white papers, and PR writing, and she is an experienced editor for both print and online publications.
The opinions expressed in this post belong to the individual contributors and do not necessarily reflect the views of Information Security Buzz.