Just months after Rishi Sunak, the UK prime minister, told the House of Commons that the UK would promote a safe legislative environment but promised "not to rush" into setting rules for the fast-growing AI industry, there has been an abrupt about-face in government thinking.
The UK government has started developing new legislation to regulate artificial intelligence, expected to focus on controlling the production and roll-out of large language models (LLMs), the foundation of AI products such as OpenAI's ChatGPT.
While it is unclear what the legislation will cover or when it will be released, the proposed rules are likely to mandate that developers of advanced models share their algorithms with the government, provide evidence of safety testing, and identify and disclose any potential harms. Commentators have argued that this is misconceived, because most developers of AI models do not fully understand how their own algorithms work. LLMs are largely self-learning: they learn by adjusting their own internal parameters, so the resulting model quickly becomes effectively unreadable even to the in-house development team, and in many cases has progressed so far that the team cannot fully explain how or why it produces the results it does. Understanding it becomes an ever harder task, because the model is, in effect, developing itself.
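To illustrate that point in miniature, the sketch below (plain Python/NumPy, unrelated to any actual LLM codebase; every name, size and number is purely illustrative) trains a toy neural network and then prints what a developer would actually have available to hand to a regulator: arrays of learned numbers, not human-readable rules.

```python
import numpy as np

# Toy two-layer network trained on a trivial task (XOR), standing in for the
# billions of learned parameters inside a large language model. Illustrative only.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # learned, not hand-written
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradient descent on squared error
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print("Predictions:", out.round(2).ravel())   # typically close to [0, 1, 1, 0]
print("Learned weights W1:\n", W1.round(2))
# The behaviour lives entirely in arrays of numbers like these; there is no
# readable rule the developers could point to in order to "explain" an output.
```

Even in this tiny example the model's behaviour is encoded only in the numbers it learned for itself; at the scale of a real LLM, that opacity is what the commentators say makes the proposed disclosure requirement so difficult to satisfy.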
It is entirely possible for development teams to list the harms and risks they can identify, but that is hardly safety testing. Developers do not have a crystal ball: while they can apply historical data about previously identified risks, they are unlikely to be able to identify unforeseen, genuinely novel risks.
UK regulators have expressed concerns about the potential impact of AI, including the possibility of bias and the development of harmful materials.
In what appears to be an enormous loophole, the proposed legislation would apply to LLMs themselves but not to the applications built on them.
The UK has to date avoided strict regulation of AI, relying instead on voluntary agreements with governments and companies. Regulators are increasingly realising, however, that the scale of profit available in the field means any voluntary controls are likely to be ignored or circumvented in the dash for corporate profit. With the EU having already implemented strict regulations, UK regulators are growing increasingly concerned about the risks associated with AI, including unforeseen and unforeseeable but potentially catastrophic events driven by AI itself or by what AI can do in the hands of bad actors.

