Do distributional word vector representations encode logical features?

old_uid: 10941
title: Do distributional word vector representations encode logical features?
start_date: 2016/02/05
schedule: 11h-12h30
online: no
location_info: room 266
summary: Unsupervised vector-space models of semantics represent the meaning of a word as a real-valued vector derived from the contexts in which the word occurs. Evaluation of such models typically focuses on their representation of concrete words and conceptual knowledge ("dog", "animal"); indeed, it is often argued that distributional representations are unlikely to be adequate for words that are involved in logical inference, such as quantifiers or modals. In this talk, I will report on an ongoing investigation of the vector representations of such words, focusing on the test cases of quantifiers and attitude verbs. I will show that those representations do, to a large extent, encode the logical features proposed in formal semantics, such as quantificational force ("everywhere" vs. "somewhere") or factivity ("believe" vs. "know"). Not all vector spaces perform equally well, raising the possibility that success on this task can be used as an evaluation metric for word representations. (A minimal probing sketch follows below.)
responsibles: Candito
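The abstract asks whether a logical feature such as factivity can be read off word vectors. One standard way to test this is a supervised probe: train a linear classifier to predict the feature from the vectors of a labeled word list and measure held-out accuracy. The sketch below is only an illustration of that general setup, not the speaker's actual experiment; the verb list, the labels, and the embedding file path (`glove.6B.300d.txt`) are assumptions for the example.

```python
# Hypothetical probing sketch: can a linear classifier recover "factivity"
# from off-the-shelf word vectors? Word list, labels, and vector file are
# illustrative assumptions, not the speaker's actual experimental setup.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Small illustrative lexicon of attitude verbs: 1 = factive, 0 = non-factive.
VERBS = {
    "know": 1, "realize": 1, "regret": 1, "discover": 1, "notice": 1,
    "believe": 0, "think": 0, "claim": 0, "hope": 0, "suspect": 0,
}

def load_vectors(path):
    """Read vectors from a whitespace-separated text file (word2vec/GloVe
    text format): each line is a word followed by its vector components."""
    vecs = {}
    with open(path, encoding="utf8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vecs[parts[0]] = np.array(parts[1:], dtype=float)
    return vecs

vectors = load_vectors("glove.6B.300d.txt")  # assumed local embedding file

words = [w for w in VERBS if w in vectors]
X = np.stack([vectors[w] for w in words])
y = np.array([VERBS[w] for w in words])

# Leave-one-out accuracy of a linear probe; chance level is ~0.5 here.
# Accuracy well above chance would suggest the vectors encode the feature.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"leave-one-out accuracy: {scores.mean():.2f}")
```

Running the same probe across several vector spaces would give the kind of comparison the abstract alludes to: spaces whose probes score higher could be ranked higher under this evaluation metric.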