Improving Extractive Dialogue Summarization by Utilizing Human Feedback

M. Mieskes, C. Müller, and M. Strube (Germany)

Keywords

Multi-Party Dialogues, Automatic Summarization, GUI, Feedback, Learning

Abstract

Automatic summarization systems are usually trained and evaluated in a particular domain with fixed data sets. When such a system is to be applied to slightly different input, labor- and cost-intensive annotations have to be created to retrain the system. We deal with this problem by providing users with a GUI which allows them to correct automatically produced imperfect summaries. The corrected summary in turn is added to the pool of training data. The performance of the system is expected to improve as it adapts to the new domain.
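The abstract outlines a correct-and-retrain cycle: the system proposes an extractive summary, a user corrects it in the GUI, and the corrected sentence selections are folded back into the training pool before the model is retrained. Below is a minimal sketch of that loop, assuming a generic trainable sentence-selection model; the names (`ExtractiveSummarizer`, `feedback_loop`, `get_user_correction`) are illustrative placeholders, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class LabeledDialogue:
    sentences: List[str]      # dialogue sentences/turns
    in_summary: List[bool]    # extraction labels (gold or user-corrected)

class ExtractiveSummarizer:
    """Placeholder for any trainable sentence-selection model."""
    def train(self, data: List[LabeledDialogue]) -> None: ...
    def select(self, sentences: List[str]) -> List[bool]: ...

def feedback_loop(model: ExtractiveSummarizer,
                  pool: List[LabeledDialogue],
                  dialogue: List[str],
                  get_user_correction: Callable[[List[str], List[bool]], List[bool]]) -> None:
    """One iteration of the correct-and-retrain cycle sketched in the abstract."""
    model.train(pool)                                    # train on the current pool
    proposed = model.select(dialogue)                    # imperfect automatic summary
    corrected = get_user_correction(dialogue, proposed)  # user fixes it via the GUI
    pool.append(LabeledDialogue(dialogue, corrected))    # corrected summary joins the pool
    model.train(pool)                                    # retrain, adapting to the new domain
```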
