Active Multilabel Crowd Consensus

Guoxian Yu, Jinzheng Tu, Jun Wang, Carlotta Domeniconi, Xiangliang Zhang

Research output: Contribution to journal › Article › peer-review

Abstract

Crowdsourcing is an economical and efficient strategy for collecting data annotations through an online platform. Crowd workers with different levels of expertise are paid for their service, and the task requester usually has a limited budget. How to collect reliable annotations for multilabel data and how to compute the consensus within budget are interesting and challenging, but rarely studied, problems. In this article, we propose a novel approach to accomplish active multilabel crowd consensus (AMCC). AMCC accounts for the commonality and individuality of workers, and assumes that workers can be organized into groups, each comprising a set of workers who share similar annotation behaviors and label correlations. To achieve an effective multilabel consensus, AMCC models workers' annotations as a linear combination of commonality and individuality, and reduces the impact of unreliable workers by assigning smaller weights to their groups. To collect reliable annotations at reduced cost, AMCC introduces an active crowdsourcing learning strategy that selects sample-label-worker triplets: the selected sample and label are the most informative for the consensus model, and the selected worker can reliably annotate the sample at a low cost. Our experimental results on multilabel data sets demonstrate the advantages of AMCC over state-of-the-art solutions in computing crowd consensus and in reducing the budget by choosing cost-effective triplets.
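The modeling idea in the abstract — each worker's annotation as a linear combination of a shared (commonality) component and a group-specific (individuality) component, with smaller weights down-weighting unreliable groups, and an active step that picks an informative sample-label pair plus a reliable worker — can be sketched roughly as follows. All names, shapes, the group weights, and the uncertainty heuristic here are illustrative assumptions, not the authors' actual AMCC formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_labels, n_workers, n_groups = 20, 4, 6, 2

# Hypothetical group assignment: each worker belongs to one group.
group_of = rng.integers(0, n_groups, size=n_workers)

# Shared (commonality) label scores and per-group (individuality) offsets.
common = rng.normal(size=(n_samples, n_labels))
individual = rng.normal(scale=0.3, size=(n_groups, n_samples, n_labels))

# Assumed group reliability weights: a smaller weight marks a less
# reliable group, shrinking its influence on the consensus.
group_weight = np.array([0.8, 0.2])

def worker_score(w):
    """Model worker w's annotation as a linear combination of the shared
    component and the worker's group-specific component."""
    g = group_of[w]
    a = group_weight[g]
    return a * common + (1 - a) * individual[g]

# Consensus label scores: average the modeled annotations over workers.
consensus = np.mean([worker_score(w) for w in range(n_workers)], axis=0)

# Active triplet selection (toy heuristic): pick the (sample, label) pair
# whose consensus score is closest to zero (most uncertain, hence most
# informative), then a worker from the highest-weighted group as a cheap
# proxy for a reliable annotator.
s, l = np.unravel_index(np.argmax(-np.abs(consensus)), consensus.shape)
w = int(np.argmax(group_weight[group_of]))
print("selected triplet (sample, label, worker):", int(s), int(l), w)
```

In this sketch the consensus is a plain mean; the paper's actual objective jointly learns the group structure, the combination weights, and the consensus, and its informativeness criterion is defined with respect to that learned model rather than a distance-to-zero heuristic.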
Original language: English (US)
Pages (from-to): 1-12
Number of pages: 12
Journal: IEEE Transactions on Neural Networks and Learning Systems
DOIs
State: Published - Apr 16 2020
