Explainable AI and medicine

old_uid: 19052
title: Explainable AI and medicine
start_date: 2021/04/28
schedule: 17h-18h
online: no
details: See the website to connect to Zoom.
summary: In the past few years, several scholars have been critical of the use of machine learning systems (MLSs) in medicine, for three reasons in particular. First, MLSs are theory agnostic. Second, MLSs do not track any causal relationship. Finally, MLSs are black boxes. For all these reasons, it has been claimed that MLSs should be able to provide explanations of how they work, the so-called Explainable AI (XAI). Recently, Alex John London has argued that these reasons do not stand up to scrutiny: as long as MLSs are thoroughly validated by means of rigorous empirical testing, we do not need XAI in medicine. London's view rests on three assumptions: (1) we should treat MLSs as akin to pharmaceuticals, for which we need to understand not how they work, but only that they work; (2) XAI plays one role in medicine, which is to assess reliability and safety; (3) MLSs have unlimited interoperability and low transfer costs. In this talk, I will question London's assumptions and elaborate an account of XAI that I call 'explanation-by-translation'. In a nutshell, XAI's goal is to integrate MLS tools into medical practice; to fulfill this integration task, XAI translates or represents MLS findings in a way that is compatible with the conceptual and representational apparatus of the system of practice into which the MLS has to be integrated. I will illustrate 'explanation-by-translation' in action in medical diagnosis, and I will show how this account helps us understand, in different contexts, whether we need XAI, what XAI has to explain, and how XAI has to explain it.
responsibles: Pradeu