AI in Finance: Navigating the Revolution in Israel’s Financial Sector


In recent years, the financial industry has undergone a significant revolution with the widespread adoption of Artificial Intelligence (AI) tools. These technologies enable financial companies to analyze vast amounts of data quickly and efficiently, identify complex patterns, and make more informed decisions. From risk analysis to automated customer service, AI is transforming the industry and offering new opportunities for growth, efficiency, and improved customer experience.

Recently, an interim report was published in Israel for public comment, focusing on the impact of artificial intelligence on Israel’s financial sector and presenting the recommendations of the inter-ministerial team established to examine the matter[1]. The report adopts an approach that encourages the integration of artificial intelligence in the financial sector, while establishing appropriate and soft regulation at this stage. It addresses the technology’s potential to improve financial services and reduce costs, alongside the associated risks and challenges. The recommendations emphasize the need to balance innovation promotion with public protection, while maintaining regulatory flexibility given the rapid pace of technological development.

Among the technology's advantages is its ability to analyze vast amounts of data with speed and precision, enabling more informed investment decisions. Financial AI systems excel at predicting trends and identifying risks in real time, producing accurate alerts that surpass human diagnostic abilities. Beyond improving decision quality, the technology saves significant economic resources by reducing human error, accelerating processes, and lowering operational costs. The benefits extend to the customer experience as well: precise personalization of services and products, improved operational efficiency, and enhanced profitability make artificial intelligence a central driving force in the digital transformation of the financial sector.

Thus, for example, artificial intelligence can be used in:

  • Investment Management – Artificial intelligence streamlines and accelerates decision-making processes in investment management and advisory.
  • Credit and Loan Underwriting – Risk assessment, eligibility verification, and financing decisions are performed with artificial intelligence tools.
  • Fraud Prevention – Identification of suspicious patterns and triggering of alerts enable the detection and reduction of financial fraud.
  • Personalization – Personalized customer service.

 

However, alongside their many advantages, the use of artificial intelligence tools involves various risks, such as:

  • Misleading information and errors;
  • Illegal uses, such as discriminatory uses, unfair uses, and use of prohibited information;
  • Uses that violate privacy;
  • Uses for committing offenses, including fraud and deception;
  • Information security risks, such as extraction of confidential information outside the organization or exposure of classified confidential information within it;
  • Infringement of third-party rights;
  • Loss of control and monitoring ability;
  • Creation of excessive, unmanaged dependency on third parties; and more.

In addition to these general challenges, there are challenges unique to the financial sector, such as financial stability risks, cyber risks, fraud and disinformation, and the risk of harm to competition in the field.

These challenges confront policymakers with several questions: whether to impose regulation on artificial intelligence technology, what the nature and intensity of that regulation should be, and whether it should be integrated into existing regulation in various areas of life or whether dedicated regulation is required.

The position currently adopted in Israel is that artificial intelligence will be regulated on a sectoral basis (as opposed to horizontal regulation applying to artificial intelligence technology as such), with soft and gradual regulation adopted at this stage, and only where required.

The report details nine guiding principles regarding the nature and manner of desired regulation of artificial intelligence in the financial sector:

  •  Flexible and adaptive regulation
  •  Alignment of regulation with global standards
  •  Encouraging innovation and integration of technology, among other things by removing barriers that hinder market development
  •  Encouraging the use of regulatory tools that enable experimentation and learning
  •  Implementing regulation only where circumstances justify it (as opposed to a default assumption that the use of artificial intelligence necessarily requires regulatory change)
  •  Adopting a risk-based regulatory approach
  •  Consideration of consumer and social factors in regulating the activity
  •  Striving for regulatory uniformity in relation to similar services and risks
  •  Technological neutrality – regulation derived from the nature of the activity regardless of technology unless it justifies special treatment

 

The report deals with various issues related to the integration of Artificial Intelligence systems in financial institutions.

“Black Box” and explainability

The term “explainability” refers to the ability to explain, in a way understandable to humans, how and why an artificial intelligence system arrived at the result it produced. The focus on the need to “explain” the operation and outputs of an artificial intelligence system stems from the fact that these systems, in their advanced forms, are characterized as a “black box”: due, among other things, to their complexity or size, it is not possible to trace how the system’s outputs were produced.

The team recommends that explainability requirements should be risk-based, not universal. Specific explanations should only be required for medium/high-risk systems making significant adverse decisions affecting customers. Explainability may be unnecessary when compensating alternatives exist (like robust monitoring) or when AI is merely a supporting tool. Financial institutions should nonetheless understand their AI systems’ operations and limitations.

Human Involvement

One of the common solutions proposed to address the challenges inherent in artificial intelligence activity is human involvement and supervision of algorithmic decision-making. Human involvement is perceived as a possible means to deal with potential malfunctions and failures of an algorithmic system, especially at the current stage where there is doubt regarding its ability to make accurate, proper, and safe decisions.

Human involvement recommendations for AI systems include: (1) Real-time human oversight only for material decisions where AI plays a significant role; (2) System-wide human supervision rather than involvement in every decision; (3) Financial institutions bear responsibility for implementing systems allowing human control; and (4) Human involvement requirements should be balanced with other safeguards like explainability and appeal mechanisms.

Notification and Disclosure

One of the regulatory obligations discussed regarding the integration of artificial intelligence systems in various services and products is the requirement to notify about the very use of an artificial intelligence system. That is, notification that a certain service or product uses a system that can be characterized as an artificial intelligence system. The notification requirement may provide the consumer with a choice regarding how they wish to consume the service, may inform the regulator about the type and scope of uses, and so on.

Notification recommendations include: (1) Requiring disclosure about AI system use, particularly during early adoption phases; and (2) Integrating AI-specific disclosures with existing product requirements, focusing on systems with material impact and addressing their unique characteristics, potential risks, and methodologies.

The Right to Privacy and Protection of Personal Information

Artificial intelligence technologies rely extensively on the collection and processing of information, including personal information, meaning information that relates to an identified person or a person who can be identified. Information is the “fuel” or “oxygen” that drives artificial intelligence systems; it is the primary and necessary resource for their development and operation.

Privacy recommendations focus on three key areas: (1) Distinguishing personal from anonymous data, with improved re-identification risk assessments and security measures for de-identified information; (2) Strengthening informed consent through formal requirements and risk-appropriate disclosures that become more detailed as uses grow more complex; and (3) Applying data minimization principles to AI systems by reducing data personalization and implementing processes to clean or delete unnecessary information.

Discrimination

A central challenge regarding artificial intelligence-based systems is the risk of discrimination and biases. The extensive scope of operation of artificial intelligence systems increases the risk of these phenomena occurring on a wide scale, compared to individual human decisions.

Preventing algorithmic discrimination and bias is at the core of the issue and requires systematic identification of possible biases at all stages of the systems’ development and use. Artificial intelligence-based systems learn from existing databases and the information fed into them. Biases in those databases or instructions, along with other system errors, can stem from the training data, from the basic assumptions underlying the model, or from the way the model is implemented in practice, and may cause such systems to discriminate against some users.

Responsibility

Legal regulation is mostly based on a human actor’s responsibility to fulfill obligations, and examines that actor’s conduct and judgment in cases of alleged violations. In the era of artificial intelligence, the role of the human actor is diminishing and is sometimes not even defined. The familiar legal structure of imposing responsibility, historically built around a person or corporation at the center of the event who generally bears responsibility for their actions, is therefore being undermined.

Responsibility recommendations maintain that: (1) Supervised entities should retain full responsibility for their AI systems’ operations; and (2) Existing accountability rules for corporate organs and officeholders should continue to apply to AI contexts, including in general board responsibility for system implementation, usage, and associated risk management.

Artificial Intelligence Governance

The term Artificial Intelligence Governance (AI Governance) is used to describe a framework for reducing the challenges and mitigating the risks involved in using artificial intelligence. The role of AI governance rules is to create proper management and control mechanisms in a corporation in the field of artificial intelligence, while regulating decision-making, risk management, control, and supervision of the operation of artificial intelligence systems in organizations.

The AI governance toolkit is presented according to the following categories:

  • Procedures, policy documents, processes, and practices – covering the entire life cycle of artificial intelligence systems;
  • Decision-making processes and responsibility – including the responsibility of the board of directors and senior management for the adoption, supervision, monitoring, and use of artificial intelligence systems;
  • Monitoring, supervision, and validation – before an artificial intelligence system enters use, as well as on an ongoing basis after the start of its operation;
  • Emergency measures and cessation of activity – measures to stop the system’s activity and back it up;
  • Use of outsourcing and third-party suppliers.

The report refers to several issues related to the impact of Artificial Intelligence uses on the financial market as a whole. The entry of artificial intelligence into the financial sector may also have implications from a market-wide perspective. The impacts of artificial intelligence in this context relate not only to entities that will seek to operate artificial intelligence systems, but to the financial system as a whole, and they relate to aspects of financial stability, market structure and competition, and operational risks including disinformation risks.

Financial Stability – Dealing with financial stability risks is a first-order supervisory interest, for the protection of public funds, maintaining the proper functioning of the financial system, and preventing financial crises that could harm the economy as a whole.

Given the limited adoption scope of artificial intelligence applications and the absence of concrete steps in countries around the world, the team believes that at this stage there is room for continued monitoring of this issue and examining the need for future actions.

Competition – Artificial intelligence has the potential to increase productivity in the financial field, drive innovation among existing and new players, and enable the creation of products and services that will benefit the public.

Competition recommendations emphasize: (1) Continued application of competition laws even where anti-competitive practices are carried out through artificial intelligence models or as a result of their operation; and (2) Proactively preventing market power concentration in AI service provision.

The development of artificial intelligence technologies increases operational risks in the financial sector, including cyber threats, fraud, and disinformation, due to their increasing accessibility and availability to hostile entities. The recommendations for addressing this include: requiring supervised entities to assess disinformation risks and develop appropriate controls, creating dedicated protection mechanisms, increasing public awareness through financial education, and improving investigation capabilities for artificial intelligence-related disinformation events.

Additional Actions to Promote Financial Regulation in the Field of Artificial Intelligence:

Encouraging innovation:

  • Establishment of independent “sandboxes” by each financial regulator in its area of responsibility.
  • Establishment of innovation centers by financial regulators, dedicating resources and focusing activity in the near term on artificial intelligence applications.

Promoting regulatory certainty and adjustments in law:

  • Increasing regulatory certainty in the field of artificial intelligence through tools such as policy documents, responses to preliminary inquiries, interpretative positions, questions and answers, and publication of guides and warnings.
  • Regulation in law and regulations should reflect the principle of technological neutrality and allow flexibility to operate through various technological means.

 

Promoting supervisory activity: Directing government resources, outside regular budget frameworks, to promote artificial intelligence-based supervisory activity.

Summary:

In sum, artificial intelligence is a significant driving force in transforming the financial industry and beyond. It enables increased efficiency, greater accuracy in decision-making, and significant improvement in customer experience. However, it is important to remember that the adoption of AI technologies brings with it challenges, such as privacy protection, intellectual property aspects, information security, and dealing with ethical questions.

The interim report has been published for public comment, and therefore the recommendations may change following the comments received. This is the first step in regulating artificial intelligence and adapting legal rules to a new era in which artificial intelligence is integrated into increasingly numerous sectors.

[1] https://www.gov.il/BlobFolder/policy/regulation_cmisa_11/he/04.10.24.pdf

Written by Dr. Eyal Book, Head of Artificial Intelligence practice at S. Horowitz & Co.
