We consider the problem of multitask learning (MTL), in which we simultaneously learn classifiers for multiple data sets (tasks), sharing data across tasks as appropriate. We introduce a set of relevance parameters that control the degree to which data from other tasks are used in estimating the current task's classifier parameters. The relevance parameters are learned by maximizing their posterior probability, yielding an expectation-maximization (EM) algorithm. We illustrate the effectiveness of our approach through experimental results on a practical data set. © 2008 IEEE.
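The core idea of relevance-weighted data sharing can be sketched as follows. This is a minimal, hypothetical illustration only, not the paper's exact model: it assumes logistic-regression classifiers and fixed (rather than EM-learned) relevance weights, with each auxiliary task's data contributing to the current task's gradient in proportion to its relevance.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_task_weighted(X_list, y_list, t, relevance, lr=0.1, n_iter=500):
    """Fit a logistic-regression classifier for task t.

    X_list, y_list : per-task design matrices and {0,1} labels.
    relevance      : relevance[s] in [0, 1] down-weights task s's data
                     when estimating task t's parameters (the task's own
                     data always receives weight 1).
    """
    d = X_list[0].shape[1]
    w = np.zeros(d)
    n_total = sum(len(y) for y in y_list)
    for _ in range(n_iter):
        grad = np.zeros(d)
        for s, (X, y) in enumerate(zip(X_list, y_list)):
            r = 1.0 if s == t else relevance[s]
            p = sigmoid(X @ w)
            # Relevance-weighted logistic-loss gradient contribution.
            grad += r * (X.T @ (p - y))
        w -= lr * grad / n_total
    return w
```

In the paper's full approach the relevance weights are themselves random variables whose posterior is maximized via EM, rather than being fixed in advance as here.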