On 1st November 2023, The White House just announced a Presidential Executive Orders conerrning AI risk and setting out 10 key provisions with the aim of reducing AI risk. This is the Artificial Intelligence Safety, Security, and Trust executive order.
1. Developers of powerful AI systems (e.g., OpenAI, Google and Microsoft) must share the results of their safety tests with the federal government
AI companies are already having to create a Red Team. (A red team is a group of people that test the security and safety of a digital entity by posing as malicious actor, although unfortunately, this is not mandated to look for security and safety risks inherent in the system, only the use of malicious actors). As part of the training of the AI and the new Order, companies will have to share the results of red-team safety tests before they are released to the public. The order focuses on future generations of AI models, not current consumer-facing tools like ChatGPT and only seems to apply to AI systems that meet the highest threshold of computing performance…. “the most powerful systems in the world”.
2. Red-team testing will be held to high standards set by the National Institute of Standards and Technology
Homeland Security and the Departments of Energy will also work together to determine whether AI systems pose certain risks in the realm of cybersecurity .
3. Address the safety of AI players using models for science and biology-related projects
New standards for “biosynthesis screening” are in the works to protect against “dangerous biological materials” engineered by AI.
4. AI-generated content must be watermarked
The Department of Commerce will roll out guidance for ensuring all AI-generated content — audio, imagery, video, and text — is labeled as such, in an attempt to identify which content is created by a non-human entity, making it easier to identify deceptive deepfakes.
5. Continue building upon the ‘AI Cyber Challenge’
The AI Cyber Challenge is a Biden administration initiative that seeks to establish a high-level cybersecurity program that strengthens the security of AI tools, ensuring that vulnerabilities are fixed.
6. “bipartisan data privacy legislation”
The executive order requires AI companies to ensure that Americans’ privacy is protected whilst training and deploying their models. Children’s privacy will be a primary focus.
7. A review of companies’ data policies.
The White House says that it will evaluate how agencies and third-party data brokers collect and use “commercially available” information, meaning public datasets. Some “personally identifiable” data is available to the public, but that doesn’t mean AI players have free rein to use this information.
8. Anti-AI discrimination
Guidance will be rolled out to mandate the reduction of bias and to create best practices to address discrimination in AI algorithms and the Biden administration will address the concerning issues in the usage of AI in sentencing regarding the criminal justice system.
9. US seeking to attract top global talent
The ai.gov site has a portal for applicants seeking AI fellowships and job opportunities in the U.S. government. The order also seeks to update visa criteria for immigrants with AI expertise.
10. Support workers vulnerable to AI developments
The Biden administration Order will also set a pathway to support workers’ collective bargaining influence by developing principles and best practices to protect workers against potential harms like surveillance, job replacement, and discrimination arising from AI. The order also announced plans to produce a report on AI’s potential for disrupting labor markets.

