TheThinkingMachine

OpenAI in full lobbying mode

Posted on June 24, 2024 (updated July 2, 2024) by Webmaster

According to the UK’s Financial Times in June 2024, San Francisco-based OpenAI, the creator of ChatGPT, is expanding its lobbying effort to influence AI regulation. The company plans to grow its global affairs team from its current 35 staff to 50 by the end of the year, in response to the growing sentiment among governments worldwide that they should be regulating AI. Such regulation could severely impact OpenAI’s innovation and its stated intention to commercialize OpenAI’s capabilities.

Anna Makanju, OpenAI’s vice-president of government affairs, is quoted as saying “We are not approaching this from a perspective of just needing to quash regulations… because we don’t have a goal of maximizing profit; we have a goal of ensuring AGI benefits all of humanity”. No, they don’t have a goal of maximizing profit, but they are transitioning to a fully commercial operation and have just removed key safety team members.

OpenAI’s lobbying focuses on shaping AI legislation such as the EU’s AI Act. The company argues that some of its models should not be classified as “high risk” – unsurprisingly, since high-risk models are subject to much stricter regulation. OpenAI, despite its name, has also opposed granting regulators access to pre-training data, arguing that post-training data is a better risk indicator. But this is a diversion: the real problem is that generative machine-learning models such as OpenAI’s have accessible pre-training data and measurable post-training data, yet even OpenAI’s own engineers don’t know what goes on in between. The EU has included OpenAI’s most advanced models under the AI Act and could, in theory, require regulator access to training data, as OpenAI’s models are likely to be designated high-risk systems.

OpenAI has hired prominent lobbyists, including Chris Lehane (formerly of the Clinton administration and Airbnb), former Republican senator Norm Coleman (of law firm Hogan Lovells), and Chan Park, recruited from Microsoft.

OpenAI has recently faced criticism for shifting safety policy work from AI policy specialists to general tech lobbyists, although it continues to insist that it aims to ensure “safe and broadly beneficial” AI development. This is despite significant safety-related departures, including co-founder and chief scientist Ilya Sutskever and researcher Jan Leike, both of whom left the organization voicing concerns that safety practices were being overshadowed by product development. Following that PR disaster, OpenAI appointed former US Army General and NSA head Paul M. Nakasone to its board, claiming “General Nakasone’s unparalleled experience in areas like cybersecurity will help guide OpenAI in achieving its mission of ensuring artificial general intelligence benefits all of humanity“.

This appointment rather highlights that OpenAI must be very concerned about its own cybersecurity, and about the risk of bad actors “re-weighting” GPT-4o and removing its safeguards.

Category: ETHICS, LEGISLATIVE, NEWS, UK

