The widespread use of Artificial Intelligence-based tools in the healthcare
sector raises many ethical and legal problems, one of the main reasons being
their black-box nature and, therefore, the apparent opacity and inscrutability
of their characteristics and decision-making processes. The literature extensively
discusses how this can lead to phenomena of over-reliance and under-reliance,
ultimately limiting the adoption of AI. We addressed these issues by building a
theoretical framework based on three concepts: Feature Importance,
Counterexample Explanations, and Similar-Case Explanations. Grounded in the
literature, the model was deployed within a case study in which, using a
participatory design approach, we designed and developed a high-fidelity
prototype. Through the co-design and development of the prototype and the
underlying model, we advanced knowledge of how to design AI-based systems
that enable complementarity in decision-making in the healthcare
domain. Our work aims to contribute to the current discourse on designing AI
systems to support clinicians' decision-making processes.