Friday, November 8, 2024
Business

Here's how the EU will regulate AI tools like OpenAI's ChatGPT and GPT-4

The European Union reached a preliminary deal that would limit how advanced AI models like ChatGPT could operate, in what's seen as a key part of the world's first comprehensive artificial intelligence regulation.

All developers of general-purpose AI systems – powerful models that have a wide range of possible uses – must meet basic transparency requirements, unless they're provided free and open-source, according to an EU document seen by Bloomberg.

These include:

  • Having an acceptable-use policy
  • Keeping up-to-date information on how they trained their models
  • Reporting a detailed summary of the data used to train their models
  • Having a policy to respect copyright law

Models deemed to pose a “systemic risk” would be subject to additional rules, according to the document. The EU would determine that risk based on the amount of computing power used to train the model. The threshold is set at models trained with more than 10 septillion (10^25) floating-point operations.
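As a rough illustration of what that threshold means in practice (this heuristic comes from the scaling-law literature, not from the Act itself), total training compute is often approximated as 6 × parameters × training tokens. A minimal sketch, using hypothetical model sizes:

```python
# EU AI Act systemic-risk threshold: 10 septillion (10^25) floating-point operations.
EU_THRESHOLD_FLOPS = 10**25

def training_flops(n_parameters: float, n_tokens: float) -> float:
    """Approximate total training compute via the common 6*N*D rule of thumb.

    This is an estimate from the scaling-law literature, not the Act's
    own methodology.
    """
    return 6 * n_parameters * n_tokens

# Hypothetical example: a 1-trillion-parameter model trained on 10 trillion tokens.
flops = training_flops(1e12, 10e12)
print(f"{flops:.1e} FLOPs")        # 6.0e+25
print(flops > EU_THRESHOLD_FLOPS)  # True -- such a model would cross the threshold
```

Under this heuristic, only the very largest training runs today would cross the line, which is consistent with experts' view that few current models qualify automatically.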

Currently, the only model that would automatically meet this threshold is OpenAI's GPT-4, according to experts. The EU's executive arm can designate others based on the size of the training data set, whether the model has at least 10,000 registered business users in the EU, or the number of registered end-users, among other possible metrics.


These highly capable models should sign on to a code of conduct while the European Commission works out more harmonized and longstanding controls. Those that don’t sign will have to prove to the commission that they’re complying with the AI Act. The exemption for open-source models doesn’t apply to those deemed to pose a systemic risk.

These models would also have to:

  • Report their energy consumption
  • Perform red-teaming, or adversarial tests, either internally or externally
  • Assess and mitigate possible systemic risks, and report any incidents
  • Ensure they’re using adequate cybersecurity controls
  • Report the information used to fine-tune the model, and their system architecture
  • Conform to more energy-efficient standards as they're developed

The tentative deal still needs to be approved by the European Parliament and the EU's 27 member states. France and Germany have previously voiced concerns that too much regulation of general-purpose AI models risks killing off European competitors such as France's Mistral AI or Germany's Aleph Alpha.

For now, Mistral will likely not need to meet the general-purpose AI controls because the company is still in the research and development phase, Spain's secretary of state Carme Artigas said early Saturday.


