Crowdsourcing is a relatively inexpensive and efficient mechanism for collecting data annotations from the open Internet. Crowd workers are paid for the annotations they provide, but the task requester usually has a limited budget. It is therefore desirable to assign each task to the right workers, so that the overall annotation quality is maximized while the cost is reduced. In this article, we propose a novel task assignment strategy (CrowdWT) that captures the complex interactions between tasks and workers and properly assigns tasks to workers. CrowdWT first develops a Worker Bias Model (WBM) to jointly model each worker's bias, the ground truths of tasks, and the task features. WBM constructs a mapping between task features and worker annotations to dynamically assign a task to the group of workers who are most likely to annotate it correctly. CrowdWT further introduces a Task Difficulty Model (TDM), which builds a kernel ridge regressor on task features to quantify the intrinsic difficulty of tasks and thus assign the difficult tasks to more reliable workers. Finally, CrowdWT combines WBM and TDM into a unified model that dynamically assigns tasks to groups of workers and recalls more reliable and even expert workers to annotate the difficult tasks. Our experimental results on two real-world datasets and two semi-synthetic datasets show that CrowdWT achieves high-quality answers within a limited budget and outperforms competitive methods.