| old_uid | 14843 |
|---|
| title | Bayesian Argumentation |
|---|
| start_date | 2014/12/17 |
|---|
| schedule | 10:50 |
|---|
| online | no |
|---|
| summary | I will sketch a Bayesian theory of argumentation. According to this theory, an agent has prior beliefs about some propositions A, B, … These beliefs are represented by a probability distribution P. The agent then learns the premises of an argument from some information source. She may, for example, learn that A is the case and that A implies B. This amounts to the following constraints on the agent’s new probability distribution P’: P’(A) = 1 and P’(B|A) = 1. The full new probability distribution is then determined by minimizing the Kullback-Leibler divergence between P’ and P. One then obtains P’(B) = 1, as one would expect from modus ponens (a numeric sketch of this update is given below the table). In a similar way, one can examine the inference patterns modus tollens, affirming the consequent, and denying the antecedent. This approach can be generalized in many respects. The agent may, for example, not fully trust the source asserting that A is true and assign only a very high new probability to A (in the case of modus ponens). Or she may have beliefs about a disabling condition D that inhibits B. In this case she only learns (or so I argue) that P’(B|A and not-D) = 1, where the variable D has to be properly integrated into a causal Bayes net. Finally, one may want to study alternatives to the Kullback-Leibler divergence and explore what follows from these measures. All this will, or so I hope, connect nicely to empirical studies. |
|---|
| responsibles | Baratgin |
|---|
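A minimal numeric sketch of the update described in the summary, assuming a joint prior over two binary propositions A and B. The example prior, the function names, and the use of scipy's SLSQP solver as a generic constrained minimizer are illustrative assumptions, not part of the talk.

```python
# Sketch: minimize the Kullback-Leibler divergence D(P' || P) subject to
# the constraints P'(A) = a and P'(B|A) = b, over a joint distribution on
# two binary propositions. The prior below is an arbitrary illustration.
import numpy as np
from scipy.optimize import minimize

# Worlds are ordered (A,B): (1,1), (1,0), (0,1), (0,0).
prior = np.array([0.3, 0.3, 0.2, 0.2])

def kl(q, p):
    """Kullback-Leibler divergence D(q || p), with 0 * log 0 = 0."""
    mask = q > 1e-12
    return np.sum(q[mask] * np.log(q[mask] / p[mask]))

def update(prior, p_A, p_B_given_A):
    """Find P' minimizing D(P' || P) s.t. P'(A) = p_A, P'(B|A) = p_B_given_A."""
    cons = [
        {"type": "eq", "fun": lambda q: q.sum() - 1.0},      # normalization
        {"type": "eq", "fun": lambda q: q[0] + q[1] - p_A},  # P'(A)
        # P'(B|A) = q[0] / (q[0] + q[1]), written multiplicatively
        # to avoid dividing by a quantity that may approach zero:
        {"type": "eq", "fun": lambda q: q[0] - p_B_given_A * (q[0] + q[1])},
    ]
    res = minimize(kl, prior, args=(prior,), constraints=cons,
                   bounds=[(0.0, 1.0)] * 4, method="SLSQP")
    return res.x

# Modus ponens with fully trusted premises: learn A and A implies B.
post = update(prior, p_A=1.0, p_B_given_A=1.0)
print("P'(B) =", post[0] + post[2])   # -> 1.0, as modus ponens requires

# Partially trusted source: P'(A) = 0.9 instead of 1.
post = update(prior, p_A=0.9, p_B_given_A=1.0)
print("P'(B) =", post[0] + post[2])   # 0.9 + 0.1 * P(B | not-A)
```

With the illustrative prior above, the second call gives P'(B) = 0.95, i.e. 0.9 + 0.1 · P(B|not-A): the residual mass on the not-A worlds is redistributed in prior proportions, which is what KL minimization under a constraint on P'(A) amounts to (Jeffrey conditionalization), combined here with the conditional constraint on P'(B|A).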