The Need for Regulation of Artificial Intelligence
Artificial intelligence, which burst into our lives over the past year, is of far-reaching importance for the future of humanity and has been described as the “fourth industrial revolution”. Achievements of artificial intelligence that were unimaginable in the past are now a developing and fascinating reality. The field cuts across sectors and continues to develop at an extremely fast pace. Despite the inherent advantages of this emerging technology, however, these developments have raised substantial concerns about the lack of regulation in the field and the dangers inherent in artificial intelligence. It is clear that oversight of its development and use is required in order to prevent the risks that may arise from its use, including in relation to issues such as privacy, copyright, job losses, and even violations of human rights.
 
Artificial intelligence is thus an emerging technology that must be prepared for through a national strategy and a regulatory policy. Around the world, we are currently witnessing a global legislative race to regulate the development and use of artificial intelligence technologies. Europe has positioned itself as a pioneer and a regulatory trendsetter, conscious of the importance of its role as a global standard setter. Alongside the European Union, the leading players in the field are the USA, China and the UK.
 
The EU AI Act

After months of debate about how to regulate AI, and marathon negotiations lasting 36 hours in total, the three EU institutions involved in the legislative process (the Parliament, the Council and the Commission) reached a provisional agreement on the proposal for harmonised rules on artificial intelligence (AI), the so-called Artificial Intelligence Act. The proposal, first introduced in April 2021, is a key element of the EU’s policy to foster the development across the single market of safe and lawful AI that respects fundamental rights. The European Parliament approved the draft law in June 2023, and the law is now reaching its final form.

The EU AI Act is a flagship initiative that lays down a uniform, horizontal legal framework intended to regulate this disruptive technology and to ensure legal certainty. It follows a risk-based approach: artificial intelligence is treated according to its capacity to cause harm to society, so the higher the risk, the stricter the rules. Several tiers of artificial intelligence systems have therefore been defined, ranging from limited risk, through a strict regime for high-risk uses, up to systems whose level of risk is unacceptable and whose use will be prohibited.

The draft regulation aims to ensure that AI systems placed on the European market and used in the EU are safe and respect fundamental rights and EU values. This landmark proposal is also intended to encourage investment and innovation on AI in Europe.
The agreement is a landmark in the legislative efforts to regulate artificial intelligence technology. European Union Commissioner Thierry Breton wrote:
 
“I welcome this historic deal. The EU becomes the first continent to set clear rules for the use of AI. The AI Act is much more than a rulebook — it’s a launchpad for EU startups and researchers to lead the global race for trustworthy AI. This Act is not an end in itself; it’s the beginning of a new era in responsible and innovative AI development – fueling growth and innovation for Europe”.
 
Carme Artigas, Spanish secretary of state for digitalization and artificial intelligence, wrote:

“This is a historical achievement, and a huge milestone towards the future! Today’s agreement effectively addresses a global challenge in a fast-evolving technological environment on a key area for the future of our societies and economies. And in this endeavour, we managed to keep an extremely delicate balance: boosting innovation and uptake of artificial intelligence across Europe whilst fully respecting the fundamental rights of our citizens”.
 
The European approach to trustworthy AI

The requirements of the EU AI Act differ depending on the level of risk posed by the AI system. As mentioned, the basic idea is to regulate artificial intelligence according to its capacity to cause harm to society: the higher the risk, the stricter the rules.

“Unacceptable risk”: For some uses of AI, risk is deemed unacceptable and, therefore, these systems will be banned from the EU.
The provisional agreement bans, for example:
- Biometric categorization systems that use sensitive data (such as political beliefs, sexual orientation, race, etc.);
- Emotion recognition in the workplace and in educational institutions;
- Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
- Social scoring;
- AI systems or applications that manipulate human behavior to circumvent users’ free will;
- AI used to exploit people’s vulnerabilities (due to their age, disability, or social or economic situation);
- Some cases of predictive policing targeting individuals.

High risk: AI systems identified as high-risk will be authorized, but subject to a set of requirements and obligations in order to gain access to the EU market, owing to their significant potential harm to health, safety and fundamental rights. Such requirements include risk-mitigation systems, data quality standards, logging of activity, detailed documentation, clear user information, protection against discrimination and bias, accuracy and cybersecurity. Regulatory sandboxes will facilitate responsible innovation and the development of compliant AI systems.

Examples of such high-risk AI systems include certain critical infrastructures for instance in the fields of water, gas and electricity; medical devices; systems to determine access to educational institutions or for recruiting people; or certain systems used in the fields of law enforcement, border control, administration of justice and democratic processes.

Limited risk: these systems would be subject to light transparency obligations, for example disclosing that content was AI-generated, so that users can make informed decisions about its further use. Such applications include, for example, chatbots or deepfakes.

Minimal risk: the vast majority of AI systems fall into this category. Minimal-risk applications such as AI-enabled recommender systems or spam filters will be exempt from obligations, as they present only minimal or no risk to citizens’ rights or safety. Companies may nevertheless voluntarily commit to additional codes of conduct for these AI systems.

The use of artificial intelligence by law enforcement agencies
The act does not apply to systems used solely for military or defense purposes. It does, however, allow law enforcement authorities to use remote biometric identification in public spaces in urgent and emergency situations, such as searching for victims (of kidnapping, human trafficking, etc.), preventing specific and foreseeable threats such as acts of terrorism, and locating people involved in the relevant crimes, subject to certain protective measures and additional safeguards. This activity was prohibited in previous versions of the act and is now permitted on condition that the law enforcement authorities respect those additional safeguards.

General purpose AI systems and foundation models
New provisions have been added addressing situations where AI systems can be used for many different purposes (general purpose AI), and where general purpose AI technology is subsequently integrated into another high-risk system. For very powerful models that could pose systemic risks, there will be additional binding obligations related to managing risks and monitoring serious incidents, performing model evaluation and further testing. Such new obligations will be operationalized through codes of practices developed by industry, the scientific community, civil society and other stakeholders together with the Commission.

It is important to note that AI technologies are being integrated over time not only into fields such as translation, transportation, medicine and investment; with the development of generative AI, they are also becoming widespread in artistic domains such as painting, music and literature, producing new kinds of works and creativity and leading artists to explore the bounds and potential of the human-machine divide, new tools for creativity, and its meaning.
 
Thus, the emergence of new AI tools has necessitated rewriting various sections of the proposal, time after time. Generative AI systems have exploded into the world’s attention over the past year, exciting users with the ability to produce human-like texts, photos and songs, but raising fears about the risks of such a rapidly developing technology. Specific rules were therefore established for “foundation models” – large systems capable of competently performing a wide range of distinct tasks, such as generating video, text and images, conversing in natural language, computing, or generating computer code. The provisional agreement provides that foundation models must comply with specific transparency obligations before they are placed on the market. A stricter regime was introduced for ‘high impact’ foundation models, which have advanced complexity, capability and performance and may pose systemic risks. The exact duties and requirements for these models will be published with the final draft by the EU authorities. Companies that develop such models will have to compile technical documentation, observe copyright, and list the content used for training the model.

Definition:
In order to ensure that the definition of an AI system provides clear criteria for distinguishing AI from simpler software systems, the act aligns with the definition proposed by the OECD:

“An AI system is a machine-based system that […] infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can affect physical or virtual environments”.
 
What are the consequences of not complying with the provisions of the law?

Companies that do not comply with the rules will face heavy fines. The fines are set as a percentage of the offending company’s global annual turnover in the previous financial year or a predetermined amount, whichever is higher.

The fines range as follows:

– for violations of banned AI applications: €35 million or 7%;
– for violations of other obligations: €15 million or 3%;
– for the supply of incorrect information: €7.5 million or 1.5%.

However, the provisional agreement provides for more proportionate caps on administrative fines for SMEs and start-ups in case of infringements of the provisions of the AI act.
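As a rough illustration (and not legal advice), the “whichever is higher” formula described above can be sketched in Python. The tier names and the simple `max()` logic are our own illustration of the published figures; the sketch deliberately does not model the more proportionate caps foreseen for SMEs and start-ups:

```python
def ai_act_fine(turnover_eur: float, violation: str) -> float:
    """Illustrative sketch of the AI Act fine formula: the higher of a
    fixed amount or a percentage of global annual turnover in the
    previous financial year."""
    # (fixed amount in euros, share of global annual turnover)
    tiers = {
        "banned_practice": (35_000_000, 0.07),
        "other_obligation": (15_000_000, 0.03),
        "incorrect_information": (7_500_000, 0.015),
    }
    fixed, pct = tiers[violation]
    return max(fixed, pct * turnover_eur)

# A company with €1 billion turnover: 7% (€70 million) exceeds the
# €35 million floor, so the turnover-based amount applies.
print(ai_act_fine(1_000_000_000, "banned_practice"))  # 70000000.0
```

For smaller companies the fixed amount dominates: at €100 million turnover, 3% is only €3 million, so the €15 million floor would apply under this reading.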

Measures in support of innovation
With a view to creating a legal framework that is more innovation-friendly and to promoting evidence-based regulatory learning, the provisions concerning measures in support of innovation have been substantially modified compared to the Commission proposal. Furthermore, the law will not apply to artificial intelligence systems developed solely for research purposes.

In this context, the regulatory sandbox should be mentioned. A regulatory sandbox is a controlled environment for the development, testing and validation of innovative AI systems under real-world conditions. It connects innovators with regulators and allows them to collaborate under specific conditions and safeguards, in order to ensure that innovative artificial intelligence systems comply with the requirements of the regulation. To alleviate the administrative burden on smaller companies, the provisional agreement includes a list of actions to be undertaken to support such operators and provides for some limited and clearly specified derogations.

Entry into force
The agreement is now subject to formal approval by the European Parliament and the Council, expected early next year. The AI Act would then become applicable two years after its entry into force, with some exceptions: the prohibitions will apply after 6 months, while the rules on general purpose AI will apply after 12 months.
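The staggered timeline can be made concrete with a small Python sketch that computes the applicability dates from a given entry-into-force date. The date used here is purely hypothetical for illustration, since the final date was not yet known at the time of writing:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add whole calendar months to a date (day clamped to 28 to stay valid)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, min(d.day, 28))

# Hypothetical entry-into-force date, purely for illustration.
entry_into_force = date(2024, 6, 1)

timeline = {
    "prohibitions apply": add_months(entry_into_force, 6),
    "general purpose AI rules apply": add_months(entry_into_force, 12),
    "act fully applicable": add_months(entry_into_force, 24),
}

for milestone, when in timeline.items():
    print(f"{when.isoformat()}: {milestone}")
```

Under this illustrative assumption, the prohibitions would bite at the end of 2024 and the act would be fully applicable in mid-2026, which is consistent with the 2026 estimate discussed below.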

What are the next steps?
During the coming weeks, work will continue at the technical level to finalise the details of the new regulation. The entire text will need to be confirmed by both institutions and undergo legal-linguistic revision before formal adoption by the co-legislators, so the law is likely to become applicable in 2026.

At the same time, the law is unlikely to change substantially, and companies dealing with or developing artificial intelligence technologies, as well as investors considering investments in the field, may want to ensure that their activities comply with the new law and will not be considered prohibited.

We will await the final version of the act to see exactly what was agreed and when the law will enter into force. We will continue to update.

We are delighted to announce that Dr. Eyal Brook has joined the firm as a partner and will lead and develop the field of artificial intelligence (AI) at the firm. This innovative and ground-breaking field requires unique expertise and covers a wide range of aspects. We have therefore established a dedicated team that includes partners from a wide variety of fields: high-tech, cyber, transactions, intellectual property and copyright.

Our office remains at your disposal for questions or further clarifications and provides dedicated services in the field of artificial intelligence (AI). Please note that the above is general information and is not a substitute for individual legal advice. We are at your disposal for any question or service, and wish you and your loved ones safe and peaceful days.

For further information:
https://www.consilium.europa.eu/en/press/press-releases/2023/12/09/artificial-intelligence-act-council-and-parliament-strike-a-deal-on-the-first-worldwide-rules-for-ai/

https://ec.europa.eu/commission/presscorner/detail/en/ip_23_6473