There is no doubt that AI is rapidly expanding its presence across many areas, prompting the EU to take decisive steps towards AI regulation. The European Parliament approved the EU AI Act on Wednesday, 14 June, and it is expected to become law by the end of this year.
The EU AI Act will serve as a comprehensive guideline for the use of AI in the workplace, positioning the EU as one of the world leaders in AI regulation.
Recently, the EU Parliament voted to adopt draft language on generative AI regulation, bringing the new AI Act closer to becoming law. However, before it becomes law, it still needs approval from the main legislative bodies. Given the EU's history of prompt action, there is optimism that the Act will soon gain legal status.
While the impending enactment of the Act is a positive development, there have been concerns about the draft language of the regulation, particularly in areas like enhanced biometric surveillance, emotion recognition, predictive policing, and generative AI tools like ChatGPT.
Generative AI in particular is too broad and significant to be overlooked, as it can profoundly impact many aspects of society, including elections and decision-making.
The EU AI Act classifies AI applications into four categories based on risk: little or no risk, limited risk, high risk, and unacceptable risk. Examples of little or no risk applications include spam filters and video game components, while limited risk applications encompass chatbots and similar systems subject to light transparency rules. High-risk applications involve areas like transportation, employment, financial services, and other sectors impacting safety. Unacceptable risk refers to applications that threaten people's rights, livelihoods, and safety.
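For illustration only, the four risk tiers above could be sketched as a simple lookup table. The tier names follow the Act, but the mapping below just mirrors the examples in this article; it is not an official or exhaustive classification.

```python
# Illustrative sketch: the EU AI Act's four risk tiers as a lookup table.
# The example mapping mirrors the examples given in this article and is
# not an official or exhaustive classification.

RISK_TIERS = {
    "spam filter": "little or no risk",
    "video game component": "little or no risk",
    "chatbot": "limited risk",
    "transportation system": "high risk",
    "employment screening": "high risk",
    "financial services tool": "high risk",
}

def risk_tier(application: str) -> str:
    """Return the risk tier for a known example application, else 'unclassified'."""
    return RISK_TIERS.get(application, "unclassified")

print(risk_tier("chatbot"))  # limited risk
```

Applications not covered by one of the named tiers would need a case-by-case assessment; the `"unclassified"` fallback here simply stands in for that.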
According to the draft EU AI regulation, any organization or individual publishing AI-generated content must disclose this to the user. Although many companies and businesses are integrating AI into their systems, adhering to the regulation may present challenges.
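As a rough sketch of what such a disclosure might look like in practice, a service could prefix AI-generated output with a notice before showing it to users. The label wording below is my own assumption, not text mandated by the regulation.

```python
# Hypothetical sketch: attaching a user-facing disclosure notice to
# AI-generated text. The label wording is an assumption, not mandated text.

AI_DISCLOSURE = "[This content was generated by AI] "

def label_content(text: str, ai_generated: bool) -> str:
    """Prefix the text with a disclosure notice when it is AI-generated."""
    return (AI_DISCLOSURE + text) if ai_generated else text

print(label_content("Here is a summary of the Act.", ai_generated=True))
```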
The official proposal for the Act was made in April 2021 and has undergone several amendments since then. It is yet to be negotiated between the Parliament, the European Commission, and the Council of the European Union, with a final agreement expected by the end of the year.
The implications of the EU AI Act extend beyond Europe, with major AI companies like OpenAI, the creator of ChatGPT, expressing concerns about complying with the regulation. Companies like Google and Microsoft, which invest heavily in AI, have also shown signs of disapproval. However, the EU AI Act aims to mitigate the risks associated with AI to ensure that its benefits outweigh the adverse effects.
AI Limitations
As per the EU AI regulations, there are limitations on what AI can do, particularly in areas posing risks to people’s safety. These areas include:
● Biometric identification systems
● Biometric categorization systems using sensitive characteristics
● Predictive policing systems
● Emotion recognition systems in law enforcement, border management, the workplace, and educational institutions
● Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases
High-Risk AI
According to the EU AI Act, AI is high-risk when it poses a threat to people's health, safety, fundamental rights, or the environment, such as using AI to influence voters and election outcomes.
To operate in the EU, AI companies must adhere to transparency requirements and take precautions against generating illegal content. However, the use of copyrighted training data may present compliance challenges.
Did you like this post? Do you have any feedback? Do you have some topics you’d like me to write about? Do you have any ideas on how I could make this better? I’d love your feedback!
Feel free to reach out to me on Twitter!