Artificial intelligence (AI) and its usage of copyrighted material have sparked debate in a variety of contexts following a significant increase in the technology’s use in content creation.
Legislators in the European Union responded to the increasing use of AI in a vote on April 27 by advancing a draft of a new bill aimed at keeping the technology, and the firms creating it, in check.
The bill’s details will be finalised in the next phase of legislative and member-state deliberations. As things stand, however, AI tools will soon be classified by risk level, ranging from minimal to unacceptable.
Under the measure, high-risk tools would not be banned outright but would be subject to stricter transparency requirements. Generative AI tools, such as ChatGPT and Midjourney, would be required to disclose any use of copyrighted material in their training data.
Svenja Hahn, a member of the European Parliament, described the bill in its current state as a compromise that avoids both excessive surveillance and over-regulation, protecting citizens while still “fostering innovation and boosting the economy.”
In the same week, Eurofi, a European think tank composed of public- and private-sector organisations, released the latest edition of its magazine, which included an entire section on AI and machine learning applications in finance in the EU.
The section comprised five mini-essays on AI innovation and regulation in the EU, with a focus on applications in the financial sector, all of which referred to the forthcoming Artificial Intelligence Act.
In relation to the regulation, one author, Georgina Bulkeley, the director for EMEA financial services solutions at Google Cloud, stated:
“AI is too important not to regulate. And, it’s too important not to regulate well.”
These developments come just days after the EU’s data watchdog raised concerns about the problems that AI companies in the United States could face if they fail to comply with the EU’s General Data Protection Regulation (GDPR).