
What is explainable artificial intelligence (XAI)?

Key points

  • Explainable artificial intelligence (Explainable AI, XAI) is a research area within AI that seeks to build systems and models able to explain their actions and decisions in ways people can understand.
  • One of the chief challenges in training modern AI is the “black box”: systems can give accurate answers and perform complex tasks, yet it is often hard to discern how they reached those results.
  • XAI can be useful where a high level of explainability is required, such as medicine, finance and law.

How is AI defined?

Artificial intelligence can be complex and opaque. That naturally fuels demand for explainable artificial intelligence (XAI). 

To understand what XAI is, it helps first to look more closely at AI in general. AI technologies are a broad, fast-growing field, yet there is no universally accepted definition.

In 2021 the European Commission proposed a regulatory framework that would provide a legally binding definition of AI. The document says AI systems generate inferences, predictions, recommendations, or decisions that affect their environment. 

According to the lawyer and AI researcher Jacob Turner, artificial intelligence can also be defined as “the ability of a non-natural entity to make decisions on the basis of an evaluation process”. Combining the Commission’s and Turner’s definitions, one may say that AI systems are capable of “learning” and influencing their environment. AI is not limited to software; it can take various forms, including robotics.

What is the “black box” in AI?

AI produces “decisions” or outputs based on inputs and algorithms. Thanks to learning and the use of different techniques and approaches, it can do so without direct human intervention. As a result, AI systems are often perceived as a “black box”.

Here, a “black box” denotes the difficulty of understanding and controlling the decisions and actions generated by AI systems and algorithms. This creates problems of transparency and accountability, with legal and regulatory implications.

How does XAI address this?

The idea of explainable AI emerged as a response to the “black box” problem. XAI is an approach aimed at building AI systems whose results can be explained in human-understandable terms. Its main goal is to make decision-making in AI systems transparent and accessible.

The following factors make XAI a significant component in building and deploying AI:

  • Accountability. If an AI system makes an important decision about a person (for example, denying a loan or suggesting a medical diagnosis), people should understand how and why that decision was made. XAI can improve the transparency and accountability of such processes and reduce public fears about AI technologies.
  • Trust. People are more likely to trust systems they understand. If an AI system can explain its decisions clearly, people will be more inclined to accept them. 
  • Model improvement. If we can understand how an AI system reaches its decisions, we can use that insight to improve the model—spotting and removing biases and making the system more accurate, reliable and ethical. 
  • Legal compliance. In some jurisdictions, such as the European Union under the General Data Protection Regulation (GDPR), organisations are required to explain decisions made by automated systems.

Transparency and explainability may conflict with other interests, such as profit or competitiveness. That underscores the need to strike a balance between innovation and ethical considerations when developing and deploying AI.

Boosting trust in public- and private-sector AI systems matters. It nudges developers to be more responsible and helps ensure their models do not propagate discriminatory ideas. It also helps prevent unlawful use of databases.

XAI plays a key role in this. Explainability entails transparency about the key factors and parameters that determine AI decisions. Although full explainability may be unattainable given the inherent complexity of many AI systems, it is possible to surface certain parameters and values. That makes AI more intelligible, technically useful and conducive to innovation.

What are examples of XAI?

Examples of explainable AI include various machine-learning techniques that enhance the interpretability of AI models in different ways:

  • Decision trees. These provide a clear visual depiction of an AI system’s decision process.
  • Rule-based systems. They specify algorithmic rules in a human-readable format, though they may be less flexible.
  • Bayesian networks. Probabilistic models that show causal relationships and uncertainties.
  • Linear models and analogous techniques in neural networks. These reveal how each input parameter affects the output.
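
To make these techniques more concrete, here is a minimal Python sketch of the first and last items on the list. It assumes the scikit-learn library and its built-in Iris dataset, neither of which is mentioned in this article: a small decision tree is printed as human-readable rules, and the coefficients of a linear model show how strongly each input feature weighs on the prediction.

```python
# A minimal sketch of interpretable models, assuming scikit-learn and its Iris dataset.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
X, y = iris.data, iris.target

# Decision tree: its branching logic can be printed as readable if/else rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(iris.feature_names)))

# Linear model: each coefficient shows how strongly a feature pushes the prediction
# towards or away from the first class.
linear = LogisticRegression(max_iter=1000).fit(X, y)
for name, coef in zip(iris.feature_names, linear.coef_[0]):
    print(f"{name}: {coef:+.3f}")
```

A deep neural network trained on the same data might be more flexible, but it would offer no comparably direct, human-readable account of its decisions.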

Different approaches are used to achieve XAI, including visualisation, natural-language explanations and interactive interfaces. Interactive interfaces, for example, let users explore how a model’s predictions change when inputs are varied.
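
As an illustration of that idea, the following hypothetical probe (reusing the scikit-learn Iris setup from the sketch above) sweeps a single input feature and prints how the model's prediction changes; interactive XAI interfaces automate exactly this kind of exploration.

```python
# A hypothetical "what-if" probe: vary one input and watch the prediction change.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(iris.data, iris.target)

sample = iris.data[0].copy()  # start from one real flower and vary its petal length
for petal_length in np.linspace(1.0, 6.0, 6):
    sample[2] = petal_length  # index 2 is petal length (cm) in the Iris dataset
    pred = tree.predict([sample])[0]
    print(f"petal length = {petal_length:.1f} cm -> {iris.target_names[pred]}")
```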

Visual tools, such as heat maps and decision trees, can also aid explanation.
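
On the visual side, scikit-learn can render the same decision tree as a diagram. The sketch below assumes matplotlib is available; heat maps (for example, of feature importances) are built in a similar spirit with other plotting tools.

```python
# A minimal sketch of a visual explanation: draw the trained decision tree as a diagram.
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, plot_tree

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(iris.data, iris.target)

plot_tree(tree, feature_names=iris.feature_names, class_names=list(iris.target_names), filled=True)
plt.show()
```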

What are XAI’s shortcomings?

Explainable AI has several limitations, some tied to how it is applied:

  • Development complexity. Large engineering teams may work on algorithms for long periods, making it hard to understand the entire development process and the principles embedded in AI systems.
  • Ambiguity of the term “explainability”. This broadly interpreted notion can yield differing views when XAI is implemented. When key parameters and factors are analysed, questions arise: what counts as “transparent” or “explainable”, and where are the boundaries?
  • Rapid advances in AI. AI is progressing exponentially and, in combination with unsupervised systems and deep learning, could in theory reach a level of general intelligence. That paves the way for new ideas and innovations, but also adds complexity when implementing XAI.

What is the outlook for XAI?

Consider an experiment on “generative agents”, whose authors integrated AI language models with interactive agents. They created a virtual sandbox: a small town with 25 virtual “residents”. Communicating in natural language, the agents displayed realistic individual and social behaviour. One agent “wanted” to organise a party; the agents then began independently sending invitations.

The word “independently” is crucial here. If AI systems exhibit behaviour that is hard to trace back to individual components, the consequences can be difficult to predict.

XAI can prevent—or at least mitigate—some risks of using AI. It is important to remember that responsibility for AI-based decisions and actions ultimately rests with people, even if not all decisions of artificial intelligence can be explained. 

This material was prepared with input from language models developed by OpenAI. Some of the information presented here is based on machine learning rather than real-world experience or empirical research.
