03/05/2023

TIC Salut Social Foundation’s Artificial Intelligence (AI) team has published a 2022 Report on the Explainability of Artificial Intelligence in Health. It was produced in the framework of the Catalan Government’s Health/AI Programme. The 50-page report is available in Catalan and English.


The use of Artificial Intelligence (AI) is continuously growing in the field of health. AI works on vast volumes of data, such as the information held in electronic health records, and shows the great potential this technology has to improve people’s health and well-being.

Some Catalonian health centres use AI to support diagnosis, prognosis, and treatment of diseases. In fact, Catalonia’s Health AI Observatory has already detected nearly 100 Artificial Intelligence algorithms that are in the development stage or are being used in a controlled manner.

So why is Explainable AI needed? This 2022 report suggests why.

 

***

Explainable AI enables people (“human users”) to understand why an algorithm has produced a particular result.

Susanna Aussó, Head of the Artificial Intelligence Area at the TIC Salut Social Foundation and the report’s main author, explains:

“It is essential for health professionals to understand the mechanisms by which the Artificial Intelligence tool has arrived at a prediction. This knowledge is essential to build users’ trust, as it gives them the tools to verify whether the answer was based on robust clinical criteria. Explainability comes in various formats, and it is necessary to reach agreement with the experts on the most appropriate format in each case. They are normally very visual formats that may be combined depending on the needs.”

Explainability tools seek to resolve a particular problem: providing explanations. Without them, AI models remain a kind of “black box” that prevents people from understanding what is happening when the AI is applied.


To describe machine learning models in human terms, Explainable AI needs to cover various aspects of the data: its correctness, robustness, biases, areas for improvement, transferability, and human understanding. These forms of explanation make it possible to build professionals’ trust. As a result, professionals are able to:

  • understand the limitations and difficulties of the data, and simplify and connect the data with easier concepts;
  • involve stakeholders in the building of an ‘intuitive’, understandable model;
  • create better models (by eliminating errors and identifying scenarios that are unfair as a result of being based on possible biases).

 

Three in-depth chapters in the report cover the different methods of explanation according to the source of the data; around ten methods are outlined. First, there are methods for explaining algorithms based on digital medical imaging, such as X-rays and magnetic resonance imaging. Second, there is explainability of algorithms based on tabular data, i.e. variables that come from sources such as laboratory analytics, omics data, vital signs, and hospital management data. Third, there is explainability of algorithms based on natural language processing, which makes it possible to extract structured information from a free-text report containing diagnostic, treatment or monitoring data.
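To give a flavour of what such an explanation can look like in practice for the tabular-data case, the short sketch below uses permutation feature importance, a common model-agnostic technique, on a sample dataset bundled with scikit-learn. This is an illustrative example only: the method, dataset, and code are assumptions chosen for demonstration and are not taken from the report, which surveys its own set of techniques.

```python
# Illustrative sketch only: permutation feature importance is one common way to
# explain a model trained on tabular data. It is not necessarily one of the
# methods described in the TIC Salut Social report.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# A small tabular dataset bundled with scikit-learn, standing in for clinical
# variables such as laboratory results or vital signs.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature on held-out data and measure how much the score drops:
# the larger the drop, the more the model relies on that variable.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Show the five variables the model depends on most.
ranking = sorted(zip(X.columns, result.importances_mean),
                 key=lambda item: item[1], reverse=True)
for name, mean_drop in ranking[:5]:
    print(f"{name}: {mean_drop:.3f}")
```

In practice, a ranking like this is usually shown to clinicians as a simple bar chart; imaging models are more often explained with saliency-style heat maps, and natural language models by highlighting the words that drove a prediction.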

Overall, the report describes the benefits of using explainability tools in AI. It sets out the main techniques used to explain algorithms that are based on e.g. digital medical imaging, tabular data, and natural language processing. Its overall aim is to support people involved in the development of AI algorithms in the field of health.   

Well worth reading, this 2022 report will help healthcare professionals and technology developers understand what’s happening with AI and its algorithms.

Read the English version of the report here.

 

***

The TIC Salut Social Foundation was one of the many hundreds of exhibitors at the four-day 2023 Mobile World Congress (27 February to 2 March), where AI was a major theme. On 28 February 2023, the Foundation announced at its Board of Trustees meeting the arrival of its new Director, Joan Guanyabens. In previous years, Dr Guanyabens featured among EHTEL’s speakers at its Symposia and webinars.

 
