Responsibility for intelligent machines: a cognitive approach

old_uid: 18531
title: Responsibility for intelligent machines: a cognitive approach
start_date: 2020/11/25
schedule: 18h
online: no
details: Link for the connection: https://us02web.zoom.us/j/87354181409
summary: Much is written about "responsible AI", but what does responsibility mean in this context? This talk begins by considering the cognitive basis of human responsibility, in order to inform comparisons between human and artificial agents. Human agents make a mental link between their intended action and the outcome of that action. I will show that this mental link underpins the everyday experiences of sense of agency and responsibility, which algorithmic systems currently lack. Human agency has two important features that make (most) humans safe agents for us to interact with. First, human agents can step back from a current goal once circumstances mean that goal is no longer appropriate; many artificial agents still rely on a human override to perform this stepping-back function. Second, while human actions have low explainability (we often don't know why we do what we do), they can have high fixability (we often change what we do, given appropriate learning signals). Discussions about the explainability of AI should therefore be replaced by discussions of fixability. Finally, I will consider the social dimension of human and machine action. The human sense of agency and responsibility is carefully trained by society, through reinforcement and cultural learning in early childhood experience that we do not generally remember. The public sphere is increasingly inhabited and shaped by artificial agents. I will consider what cognitive attributes AIs will need to have in order for us to cohabit with them, as opposed to merely using them or avoiding them.
responsibles: Tarissan