| field | value |
|---|---|
| title | Injecting dictionary knowledge into word vector representations |
| start_date | 2023/12/05 |
| schedule | 16:00-17:30 |
| online | no |
| location_info | Salle du conseil (533) |
| summary | Word vector representations (embeddings) have a rich history in natural language processing. In the previous decade, static embeddings were omnipresent in computational linguistics, but they had a significant drawback: there was only one vector representation per lemma, which posed challenges for tasks involving polysemous words. In 2019, new language models were introduced that could generate a new word representation for each context. In theory, this appeared to solve the polysemy issue; in practice, however, the model's representations of words with the same meaning still vary considerably depending on the context. In my presentation, I will demonstrate my approach to addressing this issue: training the language model on examples sharing the same meaning, so that representations of words with the same sense become closer in the vector space (an illustrative sketch of this idea follows the table below). |
| responsibles | NC |
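
The summary above states the core idea only at a high level: fine-tune a contextual language model so that the vectors of a target word, taken from examples sharing the same sense, move closer together. The sketch below shows one minimal way such an objective could look. The encoder name (`bert-base-uncased`), the cosine-embedding loss, the toy "bank" examples, and all hyperparameters are illustrative assumptions, not the method actually presented in the talk.

```python
import torch
from torch import nn
from transformers import AutoTokenizer, AutoModel

MODEL_NAME = "bert-base-uncased"  # assumed encoder; the talk does not name a specific model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)


def target_vector(sentence: str, target: str) -> torch.Tensor:
    """Mean-pool the contextual vectors of the sub-tokens belonging to `target`.

    Assumes `sentence` contains no punctuation, so whitespace word indices
    line up with the fast tokenizer's word_ids().
    """
    enc = tokenizer(sentence, return_tensors="pt")
    tgt_word_idx = sentence.split().index(target)
    hidden = model(**enc).last_hidden_state[0]          # (seq_len, hidden_dim)
    mask = torch.tensor([wid == tgt_word_idx for wid in enc.word_ids(0)])
    return hidden[mask].mean(dim=0)


# Toy sense-annotated examples (illustrative only, not from the talk):
anchor   = "he sat on the bank of the river"
positive = "they fished from the grassy bank of the stream"
negative = "she deposited the money at the bank"

criterion = nn.CosineEmbeddingLoss()                     # margin defaults to 0
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

for step in range(3):                                    # a few illustrative updates
    optimizer.zero_grad()
    a = target_vector(anchor, "bank").unsqueeze(0)
    p = target_vector(positive, "bank").unsqueeze(0)
    n = target_vector(negative, "bank").unsqueeze(0)
    # pull same-sense vectors together (+1) and push different senses apart (-1)
    loss = criterion(a, p, torch.tensor([1.0])) + criterion(a, n, torch.tensor([-1.0]))
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss = {loss.item():.4f}")
```

In the dictionary-based setting named in the title, the positive pairs would presumably come from example sentences attached to the same sense entry of a lemma, while examples drawn from its other sense entries would serve as negatives.
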
Workflow history

| from state | to state | comment | date |
|---|---|---|---|
| submitted | published | | 2023/12/07 15:03 UTC |