Multi-task learning for analyzing and sorting large databases of sequential data

Kai Ni, John Paisley, Lawrence Carin, David Dunson

Research output: Contribution to journal › Article › peer-review


Abstract

A new hierarchical nonparametric Bayesian framework is proposed for the problem of multi-task learning (MTL) with sequential data. The models for multiple tasks, each characterized by sequential data, are learned jointly, and the inter-task relationships are obtained simultaneously. This MTL setting is used to analyze and sort large databases composed of sequential data, such as music clips. Within each data set, we represent the sequential data with an infinite hidden Markov model (iHMM), avoiding the model-selection problem of choosing the number of states. Across the data sets, the multiple iHMMs are learned jointly in an MTL setting, employing a nested Dirichlet process (nDP). The nDP-iHMM MTL method allows simultaneous task-level and data-level clustering, with which the individual iHMMs are enhanced and the between-task similarities are learned. Therefore, in addition to improved learning of each of the models via appropriate data sharing, the learned sharing mechanisms are used to infer inter-data relationships of interest for data search. Specifically, the MTL-learned task-level sharing mechanisms are used to define the affinity matrix in a graph-diffusion sorting framework. To speed up the MCMC inference for large databases, the nDP-iHMM is truncated to yield a nested-Dirichlet-distribution-based HMM representation, which accommodates fast variational Bayesian (VB) analysis for large-scale inference. The effectiveness of the framework is demonstrated using a database composed of 2500 digital music pieces. © 2008 IEEE.
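The graph-diffusion sorting step mentioned in the abstract can be sketched compactly: given a task-level affinity matrix (in the paper, derived from the MTL-learned sharing mechanisms), items are ranked by relevance to a query via repeated diffusion over the affinity graph. The row normalization, damping factor, and toy affinity values below are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def diffusion_rank(affinity, query, steps=10, alpha=0.8):
    """Rank items by damped random-walk diffusion from a query item.

    `affinity` is a nonnegative task-affinity matrix; this generic
    diffusion scheme is a sketch, not the paper's exact method.
    """
    W = np.asarray(affinity, dtype=float)
    # Row-normalize to a random-walk transition matrix P = D^{-1} W.
    P = W / W.sum(axis=1, keepdims=True)
    r = np.zeros(len(W))
    r[query] = 1.0                  # start all probability mass at the query
    scores = np.zeros(len(W))
    for _ in range(steps):
        r = alpha * (r @ P)         # diffuse one step, damped by alpha
        scores += r                 # accumulate multi-step relevance
    return np.argsort(-scores)      # indices ordered most-related first

# Toy example: items 0,1 form one cluster, items 2,3 another.
affinity = [[0.0, 1.0, 0.1, 0.1],
            [1.0, 0.0, 0.1, 0.1],
            [0.1, 0.1, 0.0, 1.0],
            [0.1, 0.1, 1.0, 0.0]]
ranking = diffusion_rank(affinity, query=0)
```

With this toy matrix, item 1 (strongly tied to the query) outranks items 2 and 3, illustrating how a learned affinity matrix sorts a database by similarity to a query clip.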
Original language: English (US)
Pages (from-to): 3918-3931
Number of pages: 14
Journal: IEEE Transactions on Signal Processing
Volume: 56
Issue number: 8 II
DOIs
State: Published - Aug 1 2008
Externally published: Yes

