15 July 2021
by Kyle Martin, Robert Gordon University
AI and ML systems are increasingly commonplace in everyday life. From recommender systems for media streaming services to machine vision for clinical decision support, intelligent systems support both the personal and professional spheres of our society. However, explaining the outcomes and decision-making of these systems remains a challenge. As the prevalence of AI grows, so too do the complexity of autonomous models and the expectation that they can explain their actions.
Regulations increasingly support users' rights to fair and transparent processing in automated decision-making systems. This is difficult when the latest trends in data-driven ML, such as deep learning architectures, tend to produce black boxes with opaque decision-making processes. Furthermore, the need for accountability means that pipeline, ensemble and multi-agent systems may require complex combinations of explanations before they are understandable to their target audience. Beyond the models themselves, designing explainer algorithms for users remains a challenge due to the highly subjective nature of explanation itself.
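To make the black-box problem concrete, the sketch below shows one common post-hoc, model-agnostic explanation technique: permutation feature importance. This is an illustrative example only and was not presented at the workshop; the dataset, model choice, and scikit-learn usage are assumptions made purely for demonstration.

```python
# Illustrative sketch only: explaining an opaque model with permutation
# feature importance. Dataset and model are assumptions for demonstration.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train a "black-box" model on a standard dataset.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure the drop in held-out
# accuracy: a simple post-hoc signal of which inputs the model relies on.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# Report the five most influential features.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")
```

Shuffling a feature breaks its relationship with the target, so a large accuracy drop indicates the model depends on that feature; this yields a ranking of influential inputs without inspecting the model's internals.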
The SICSA XAI Workshop 2021 was designed to provide a forum for the dissemination of ideas on the explainability of AI and ML methods. The event was organised into three themed sessions:
- Session 1 – Applying and Evaluating Explanations
- Session 2 – Roles within an XAI System and Accountability
- Session 3 – Searching for Explanations
The SICSA XAI Workshop 2021 was an incredible success. We were proud to welcome 49 attendees from a mix of industrial organisations and academic institutions across Europe. A total of 13 papers were submitted for peer review by the programme committee, of which 12 were accepted and presented during the workshop (10 short papers and 2 position papers). The workshop featured an invited talk from Professor Belén Díaz-Agudo of Universidad Complutense de Madrid, who examined the relationship between Case-Based Reasoning and XAI and discussed how this had led to the formation of the iSee project to share explanation experiences.
For interested readers, the proceedings of the workshop are available online through CEUR. Recordings of the workshop presentations are also available on YouTube.