Publication:
Inspire at SemEval-2016 task 2: Interpretable semantic textual similarity alignment based on answer set programming

dc.contributor.authors: Kazmi M., Schüller P.
dc.date.accessioned: 2022-03-15T02:12:01Z
dc.date.available: 2022-03-15T02:12:01Z
dc.date.issued: 2016
dc.description.abstract: In this paper we present our system developed for the SemEval 2016 Task 2 - Interpretable Semantic Textual Similarity, along with the results obtained for our submitted runs. Our system participated in the subtasks predicting chunk similarity alignments for gold chunks as well as for predicted chunks. The Inspire system extends the basic ideas of last year's participant, NeRoSim; however, we realize the rules in logic programming and obtain the result with an Answer Set Solver. To prepare the input for the logic program, we use the PunktTokenizer, Word2Vec, and WordNet APIs of NLTK, and the POS- and NER-taggers from Stanford CoreNLP. For chunking we use a joint POS-tagger and dependency parser, and based on that we determine chunks with an Answer Set Program. Our system ranked third place overall and first place in the Headlines gold chunk subtask. © 2016 Association for Computational Linguistics.
dc.identifier.doi: 10.18653/v1/s16-1171
dc.identifier.isbn: 9781941643952
dc.identifier.uri: https://hdl.handle.net/11424/247718
dc.language.iso: eng
dc.publisher: Association for Computational Linguistics (ACL)
dc.relation.ispartof: SemEval 2016 - 10th International Workshop on Semantic Evaluation, Proceedings
dc.rights: info:eu-repo/semantics/closedAccess
dc.title: Inspire at SemEval-2016 task 2: Interpretable semantic textual similarity alignment based on answer set programming
dc.type: conferenceObject
dspace.entity.type: Publication
oaire.citation.endPage: 1115
oaire.citation.startPage: 1109
oaire.citation.title: SemEval 2016 - 10th International Workshop on Semantic Evaluation, Proceedings
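
The abstract above names a concrete NLTK preprocessing pipeline (Punkt tokenization, Word2Vec, WordNet lookups) feeding an Answer Set Program. As a rough, hypothetical illustration of the two NLTK steps only, a minimal Python sketch follows; the example text and the synonyms() helper are invented for illustration and are not the authors' implementation:

    # Hypothetical sketch of two NLTK steps the abstract names
    # (Punkt-based tokenization and WordNet lookups); not the
    # authors' code. Requires the NLTK "punkt" and "wordnet" data.
    from nltk.tokenize import sent_tokenize, word_tokenize
    from nltk.corpus import wordnet as wn

    def synonyms(word):
        """All WordNet lemma names reachable from a word's synsets."""
        return {lemma.name().lower()
                for synset in wn.synsets(word)
                for lemma in synset.lemmas()}

    text = "The headline was aligned. A second sentence follows."
    for sentence in sent_tokenize(text):   # Punkt sentence splitting
        print(word_tokenize(sentence))     # word-level tokens

    # A NeRoSim-style "words are synonyms" condition can be grounded
    # as WordNet lemma overlap between two tokens:
    print(bool(synonyms("align") & synonyms("aline")))

Facts such as the token lists and synonym pairs produced here would then be passed as input to the Answer Set Solver, which applies the alignment rules.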
