Affiliations: [a] Instituto de Ingeniería (II), Universidad Nacional Autónoma de México (UNAM), Mexico City, Mexico. E-mails: email@example.com, firstname.lastname@example.org | [b] Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas (IIMAS), Universidad Nacional Autónoma de México (UNAM), Mexico City, Mexico. E-mail: email@example.com | [c] Facultad de Matemáticas (FM), Universidad Autónoma de Yucatán (UAdY), Mérida, Yucatán, Mexico. E-mail: firstname.lastname@example.org
Abstract: Word embeddings are powerful representations for many natural language processing tasks. In this work, we learn word embeddings by applying the node2vec algorithm to weighted graphs built from word association norms (WAN). Although building WAN is a difficult and time-consuming task, training vectors from these resources is fast and efficient, which allows us to obtain good-quality word embeddings from small corpora. We evaluate our word vectors both intrinsically and extrinsically. The intrinsic evaluation was performed on several word similarity benchmarks (WordSim-353, MC30, MTurk-287, MEN-TR-3k, SimLex-999, MTurk-771, and RG-65) with different similarity measures, achieving better results than word2vec, GloVe, and fastText embeddings trained on much larger corpora. The extrinsic evaluation measured the quality of sentence embeddings on transfer tasks: sentiment analysis, paraphrase detection, natural language inference, and semantic textual similarity. The word vectors learned from the WAN are available on our GitHub page.
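The pipeline the abstract describes — biased node2vec random walks over a weighted word-association graph, whose walks are then fed to a skip-gram model — can be sketched as follows. This is an illustrative sketch only: the toy graph, its weights, and the `node2vec_walk` helper are hypothetical, not the authors' code, and real WAN graphs use association strengths collected from human cue-response norms.

```python
import random

# Toy weighted word-association graph (assumed data; in a real WAN,
# edge weights are association strengths from cue-response norms).
graph = {
    "dog": {"cat": 0.6, "bone": 0.3, "leash": 0.1},
    "cat": {"dog": 0.5, "mouse": 0.4, "milk": 0.1},
    "bone": {"dog": 0.9, "skeleton": 0.1},
    "leash": {"dog": 1.0},
    "mouse": {"cat": 0.7, "cheese": 0.3},
    "milk": {"cat": 1.0},
    "skeleton": {"bone": 1.0},
    "cheese": {"mouse": 1.0},
}

def node2vec_walk(start, length, p=1.0, q=0.5, rng=random):
    """One biased second-order random walk (Grover & Leskovec, 2016)."""
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        nbrs = list(graph[cur])
        if len(walk) == 1:
            # First step: sample neighbors proportionally to edge weight.
            weights = [graph[cur][x] for x in nbrs]
        else:
            prev = walk[-2]
            weights = []
            for x in nbrs:
                w = graph[cur][x]
                if x == prev:            # returning to the previous node
                    weights.append(w / p)
                elif x in graph[prev]:   # staying near the previous node
                    weights.append(w)
                else:                    # moving farther away
                    weights.append(w / q)
        walk.append(rng.choices(nbrs, weights=weights)[0])
    return walk

rng = random.Random(7)
walks = [node2vec_walk(w, length=10, rng=rng) for w in graph for _ in range(5)]
# These walks play the role of "sentences": feeding them to a skip-gram
# model (e.g. gensim's Word2Vec) yields the word embeddings.
print(walks[0])
```

With `q < 1` the walk is biased toward exploration (DFS-like behavior); setting `q > 1` instead keeps the walk near its starting neighborhood, which tends to emphasize more local, similarity-like structure.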
Keywords: Word association norms, word embeddings, word similarity, word2vec, GloVe, fastText
Journal: Semantic Web, vol. Pre-press, no. Pre-press, pp. 1-16, 2019