According to the UK’s Financial Times in June 2024, San Francisco-based OpenAI, the creator of ChatGPT, is expanding its lobbying operation to influence AI regulation: it plans to grow its global affairs team from 35 to 50 staff by the end of the year, in response to the growing sentiment among governments worldwide that they should be regulating AI. Such regulation could severely constrain OpenAI’s innovation and its stated intention to commercialize its capabilities.
Anna Makanju, OpenAI’s vice-president of government affairs, is quoted as saying: “We are not approaching this from a perspective of just needing to quash regulations… because we don’t have a goal of maximizing profit; we have a goal of ensuring AGI benefits all of humanity”. No, they don’t have a goal of maximizing profit, but they are transitioning to a fully commercial operation and have just lost key members of their safety team.
OpenAI’s lobbying focuses on shaping AI legislation such as the EU’s AI Act; the company argues that some of its models should not be classified as “high risk”, unsurprisingly, since high-risk models face much stricter regulation. OpenAI, despite its name, has also opposed granting regulators access to pre-training data, arguing that post-training data is a better indicator of risk. But this is a diversion, because the real problem is that generative machine learning models such as OpenAI’s have accessible pre-training data and measurable post-training behaviour, yet even OpenAI’s own engineers do not know what goes on in between. The EU included OpenAI’s most advanced models under the AI Act and could, in principle, require regulator access to training data, as OpenAI’s models are likely to be designated high-risk systems.
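To see why that “in between” is opaque, consider a deliberately tiny sketch in Python. This toy network is entirely hypothetical and bears no relation to OpenAI’s systems; it only illustrates that even when the training data, the outputs, and every single weight are fully accessible, the weights themselves do not explain how the model decides. At the scale of billions of parameters, the problem is far worse.

```python
# Hypothetical toy network, NOT OpenAI's architecture: it illustrates
# why inputs and outputs are inspectable while the learned internals
# are not.
import numpy as np

rng = np.random.default_rng(0)

# "Pre-training data": fully accessible and human-readable.
X = rng.uniform(-1, 1, size=(200, 2))       # inputs
y = (X[:, 0] * X[:, 1] > 0).astype(float)   # labels: do signs match?

# A tiny two-layer network.
W1 = rng.normal(0, 1, size=(2, 16))
b1 = np.zeros(16)
W2 = rng.normal(0, 1, size=(16, 1))
b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)                      # hidden activations
    return 1 / (1 + np.exp(-(h @ W2 + b2))), h    # sigmoid output

# Train with plain gradient descent on cross-entropy loss.
for _ in range(2000):
    p, h = forward(X)
    grad = (p - y[:, None]) / len(X)          # dLoss/dLogits
    gh = (grad @ W2.T) * (1 - h**2)           # backprop through tanh
    W2 -= 0.5 * h.T @ grad
    b2 -= 0.5 * grad.sum(0)
    W1 -= 0.5 * X.T @ gh
    b1 -= 0.5 * gh.sum(0)

# "Post-training" behaviour: measurable from the outside.
preds, _ = forward(X)
print("accuracy:", ((preds[:, 0] > 0.5) == (y > 0.5)).mean())

# The "in between": every weight is accessible, yet none of these
# numbers, read individually, explains how the network decides.
print("learned hidden weights:\n", W1.round(2))
```

Regulators can audit the inputs (the training corpus) and test the outputs (model behaviour), but the trained weights in the middle are, as in this sketch, just arrays of numbers with no self-evident meaning, which is why the pre-training versus post-training data debate misses the deeper interpretability problem.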
OpenAI has hired prominent lobbyists, including Chris Lehane (formerly of the Clinton administration and Airbnb), former Republican senator Norm Coleman (of law firm Hogan Lovells) and Chan Park, previously of Microsoft.
OpenAI has recently faced criticism for shifting its focus from AI policy specialists to general tech lobbyists, although it continues to insist that it aims to ensure “safe and broadly beneficial” AI development. That insistence sits awkwardly with significant safety-related departures, including co-founder and chief scientist Ilya Sutskever and researcher Jan Leike, both of whom left the organization expressing concerns that safety practices were being overshadowed by product development. Following that PR disaster, OpenAI appointed former US Army General and NSA head Paul M. Nakasone to its board, claiming that “General Nakasone’s unparalleled experience in areas like cybersecurity will help guide OpenAI in achieving its mission of ensuring artificial general intelligence benefits all of humanity”.
This appointment rather highlights that OpenAI must be very concerned about its own cybersecurity, and in particular the risk of bad actors “re-weighting” GPT-4o and removing its safeguards.

