BRUSSELS, 10 May 2023: The EU’s Internal Market Committee and the Civil Liberties Committee adopted a draft negotiating mandate on the first ever rules for Artificial Intelligence (84 votes in favour, 7 against and 12 abstentions).
In their amendments to the Commission’s proposal, MEPs aim to ensure that AI systems are overseen by people, and are safe, transparent, traceable, non-discriminatory and environmentally friendly. They also want a uniform, technology-neutral definition of AI, so that it can apply to the AI systems of today and tomorrow.
The rules take a risk-based approach to AI, incorporating prohibited AI practices and establishing obligations for providers and users according to the level of risk the AI can generate.
Under the proposals, AI systems with an unacceptable level of risk to people’s safety would be strictly prohibited, including systems that deploy subliminal or purposefully manipulative techniques, exploit people’s vulnerabilities, or are used for social scoring (classifying people based on their social behaviour, socio-economic status or personal characteristics). However, how this would be enforced is not specified, and there is widespread disbelief that enforcement is possible when a company stands to gain billions of dollars in market capitalisation by being next to market with a new AI development. Which CEO could look at the EU proposals and take them seriously in the face of losing the chance to move the company’s market cap from a few million to a few billion simply by rolling out the new AI? And the CEO may not even be irresponsible in doing so: even the company’s AI engineers probably do not understand the risks involved in the AI infrastructure they have built, as it is widely accepted that LLMs are not internally transparent even to their creators.
Prohibited AI
MEPs substantially amended the list to include bans on intrusive and discriminatory uses of AI systems such as:
- “Real-time” remote biometric identification systems in publicly accessible spaces;
- “Post” remote biometric identification systems, with the sole exception of use by law enforcement for the prosecution of serious crimes, and only after judicial authorisation;
- Biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation);
- Predictive policing systems (based on profiling, location or past criminal behaviour);
- Emotion recognition systems in law enforcement, border management, workplace, and educational institutions; and
- Indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases (violating human rights and right to privacy).
High-risk AI
MEPs expanded the classification of high-risk areas to include harm to people’s health, safety, fundamental rights or the environment. They also added to the high-risk list AI systems used to influence voters in political campaigns, as well as the recommender systems used by social media platforms with more than 45 million users under the Digital Services Act.
General-purpose AI – transparency measures
MEPs included obligations for providers of foundation models – a new and fast-evolving development in the field of AI – who would have to guarantee robust protection of fundamental rights, health and safety, the environment, democracy and the rule of law. Since lawyers often disagree on what these mean in practice, it is difficult to see how AI engineers can impose such guarantees; in practice, providers are much more likely to pay token heed to them, leaving substantive challenges to be made further down the road. (A foundation model, also called a base model, is a large artificial intelligence (AI) model trained on a vast quantity of data at scale, often by self-supervised or semi-supervised learning, resulting in a model that can be adapted to a wide range of downstream tasks, as sketched below.)
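To make that parenthetical definition concrete, here is a minimal sketch of the foundation-model pattern using the Hugging Face transformers library: one pretrained base model is reused and adapted to a downstream task by attaching a small task-specific head. The model name and label count are illustrative assumptions, not anything specified by the Act.

```python
# A minimal sketch of the "foundation model" idea: one pretrained base model
# adapted to a downstream task. The base model and two-label task here are
# illustrative assumptions.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

BASE_MODEL = "bert-base-uncased"  # a base model pretrained by self-supervised learning

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
# The same base weights are reused; only a small classification head is new.
model = AutoModelForSequenceClassification.from_pretrained(BASE_MODEL, num_labels=2)

inputs = tokenizer("An example sentence to classify.", return_tensors="pt")
outputs = model(**inputs)  # logits for the downstream task
print(outputs.logits)
```

Fine-tuning the head (and optionally the base weights) on task data is what "adapting to a downstream task" means in practice; the same base model can be adapted this way to many different tasks.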
Companies will need to assess and mitigate risks, comply with design, information and environmental requirements and register in the EU database.
Generative foundation models, like GPT, would have to comply with additional transparency requirements, such as disclosing that content was generated by AI, designing the model to prevent it from generating illegal content, and publishing summaries of the copyrighted data used for training; a sketch of what such a disclosure could look like follows.
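What "disclosing that content was generated by AI" means in practice is left open by the text. One minimal, hypothetical interpretation is a machine-readable provenance label attached to each generated item; the field names below are illustrative assumptions, not a format mandated by the Act.

```python
# A minimal sketch of an AI-disclosure label: the generated text is wrapped
# in a machine-readable provenance record. Field names are hypothetical.
import json
from datetime import datetime, timezone

def label_generated_content(text: str, model_name: str) -> str:
    """Wrap generated text in a simple AI-disclosure record."""
    record = {
        "content": text,
        "ai_generated": True,
        "model": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record)

print(label_generated_content("Example model output.", "example-llm-v1"))
```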
Disclosure of the use of AI is a short-term measure that is fundamentally flawed; it is like asking people to disclose that they have used Photoshop to manipulate an image. As generative LLM foundation models develop over the next 12 months, they will become standard and ubiquitous. In many cases they will be built into systems so that the end-user has no knowledge that they are using AI, and the creators of generative foundation LLMs will not have direct control of the output, so the generation of illegal content will not be in their hands. Early GPT rollouts showed that, within hours of release and despite the programmers’ best intentions to prevent illegal content generation, users had found ways to hack the system and get around the restrictions.
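To illustrate why such restrictions are so easily circumvented, here is a toy example of a naive keyword-based filter. This is an illustrative assumption about how a simple safeguard might work, not any vendor’s actual mechanism; the point is that exact-match rules are defeated by trivial rephrasing.

```python
# A toy keyword filter: a deliberately naive stand-in for a content
# restriction, showing how easily exact-match safeguards are bypassed.
BLOCKED_TERMS = {"forbidden topic"}  # hypothetical blocked phrase

def naive_filter(prompt: str) -> bool:
    """Return True if the prompt matches a blocked term and should be refused."""
    return any(term in prompt.lower() for term in BLOCKED_TERMS)

print(naive_filter("Tell me about the forbidden topic"))   # True: blocked
print(naive_filter("Tell me about the f0rbidden t0pic"))   # False: slips through
```

Real safeguards are more sophisticated than this, but the underlying cat-and-mouse dynamic is the same, which is why the early restrictions were defeated so quickly.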
Supporting innovation and protecting citizens’ rights
To boost AI innovation, MEPs added exemptions to these rules for research activities and AI components provided under open-source licenses.
The new law promotes regulatory sandboxes – controlled environments, established by public authorities, in which to test AI before its deployment. However, the validity of these is highly questionable: even programmers who used sandboxes during development have found that generative LLMs act in unpredictable ways, and it is highly unlikely that any public authority will have the knowledge or experience to test AI in ways not already considered by the billion-dollar corporations rolling out these models.
MEPs want to boost citizens’ right to file complaints about AI systems and to receive explanations of decisions based on high-risk AI systems that significantly impact their rights. However, given the likelihood of rapid rollout after any significant advance, any complaints system is liable to be quickly swamped, as well as overtaken by technological progress.
MEPs also reformed the role of the EU AI Office, which would be tasked with monitoring how the AI rulebook is implemented.
Quotes
The EU is clearly trying to gain first-mover advantage, claiming: “We are on the verge of putting in place landmark legislation that must resist the challenge of time. It is crucial to build citizens’ trust in the development of AI, to set the European way for dealing with the extraordinary changes that are already happening, as well as to steer the political debate on AI at the global level. We are confident our text balances the protection of fundamental rights with the need to provide legal certainty to businesses and stimulate innovation in Europe”. The reality is that the Act is likely to be remembered as a nice, if naive, attempt to impose some form of control over AI.
Given the profound transformative impact AI will have on our societies and economies, the AI Act is very likely to become outdated at a rate never before seen. After all, internet legislation failed to control fake news and harmful social media content after ten years of trying, and AI will move at a pace that makes internet development look positively glacial.
Next steps
Before negotiations with the Council on the final form of the law can begin, this draft negotiating mandate needs to be endorsed by the whole Parliament, with the vote expected during the 12-15 June session.

