This is a research-oriented, graduate-level course on probabilistic graphical models (PGMs). The course covers the two (2) main types of PGMs: directed and undirected. For directed PGMs, we will cover Bayesian networks along with one (1) of their most important variants, hidden Markov models. For undirected PGMs, we will cover Markov networks (also known as Markov random fields) along with one (1) of their most important variants, conditional random fields. The course therefore consists of four (4) parts: Bayesian networks, hidden Markov models, Markov networks, and conditional random fields. In each part, I will introduce the motivations, ideas, definitions, examples, properties, representations, inference algorithms, and applications of the corresponding PGM through lectures.

In the next two (2) lectures of each part, students will present recommended research papers and lead in-class discussions. The last lecture of each part will be an in-class quiz, whose purpose is not to judge students' ability to calculate or memorize, but to push them to think more deeply about the material introduced in the lectures. The course will conclude with a final exam lecture and two (2) project presentation lectures. Projects are expected to be either a real application of PGMs to a real research problem or a serious piece of theoretical work on PGMs.