Publication:
Waste Not: Meta-Embedding of Word and Context Vectors

dc.contributor.author: GANİZ, MURAT CAN
dc.contributor.authors: Degirmenci, Selin; Gerek, Aydin; Ganiz, Murat Can
dc.contributor.editor: Metais, E
dc.contributor.editor: Meziane, F
dc.contributor.editor: Vadera, S
dc.contributor.editor: Sugumaran, V
dc.contributor.editor: Saraee, M
dc.date.accessioned: 2022-03-12T16:24:09Z
dc.date.available: 2022-03-12T16:24:09Z
dc.date.issued: 2019
dc.description.abstract: The word2vec and fastText models train two vectors per word: a word vector and a context vector. Typically the context vectors are discarded after training, even though they may contain information useful for various NLP tasks. We therefore combine word and context vectors within the meta-embedding framework. Our experiments show performance gains on several NLP tasks, including text classification, semantic similarity, and analogy. This approach can thus improve performance on downstream tasks while requiring minimal additional computational resources.
dc.identifier.doi: 10.1007/978-3-030-23281-8_35
dc.identifier.eissn: 1611-3349
dc.identifier.isbn: 978-3-030-23281-8; 978-3-030-23280-1
dc.identifier.issn: 0302-9743
dc.identifier.uri: https://hdl.handle.net/11424/226237
dc.identifier.wos: WOS:000502398500035
dc.language.iso: eng
dc.publisher: SPRINGER INTERNATIONAL PUBLISHING AG
dc.relation.ispartof: NATURAL LANGUAGE PROCESSING AND INFORMATION SYSTEMS (NLDB 2019)
dc.relation.ispartofseries: Lecture Notes in Computer Science
dc.rights: info:eu-repo/semantics/closedAccess
dc.subject: Meta-embedding
dc.subject: Word embeddings
dc.subject: Word2vec
dc.subject: FastText
dc.subject: Text classification
dc.subject: Semantic similarity
dc.subject: Analogy
dc.title: Waste Not: Meta-Embedding of Word and Context Vectors
dc.type: conferenceObject
dspace.entity.type: Publication
oaire.citation.endPage: 401
oaire.citation.startPage: 393
oaire.citation.title: NATURAL LANGUAGE PROCESSING AND INFORMATION SYSTEMS (NLDB 2019)
oaire.citation.volume: 11608
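
The abstract above describes reusing the context (output) vectors that word2vec normally discards, combining them with the word (input) vectors as meta-embeddings. The following is a minimal sketch of that idea, not the authors' implementation: it assumes gensim 4.x, where the syn1neg attribute holds the context vectors when negative sampling is used, and it illustrates averaging and concatenation, two common meta-embedding operations; the paper's exact combination method may differ.

    # Minimal sketch (not the authors' code) of meta-embedding word and
    # context vectors. Assumes gensim 4.x; with negative sampling,
    # model.syn1neg holds the output/context vectors, row-aligned with the
    # input/word vectors in model.wv.vectors.
    import numpy as np
    from gensim.models import Word2Vec

    sentences = [["the", "cat", "sat"], ["the", "dog", "ran"]]  # toy corpus
    model = Word2Vec(sentences, vector_size=50, min_count=1, sg=1, negative=5)

    word_vecs = model.wv.vectors   # input (word) vectors, shape (V, d)
    ctx_vecs = model.syn1neg       # output (context) vectors, shape (V, d)

    # Two simple combination strategies: averaging keeps dimensionality d;
    # concatenation doubles it to 2d.
    avg_meta = (word_vecs + ctx_vecs) / 2.0
    concat_meta = np.hstack([word_vecs, ctx_vecs])

    idx = model.wv.key_to_index["cat"]
    print(avg_meta[idx].shape, concat_meta[idx].shape)  # (50,) (100,)

Either combined matrix can then replace the plain word vectors as the embedding lookup for a downstream task, which is the low-cost reuse the abstract argues for.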
