Stochastic stability is an important solution concept for stochastic learning dynamics in games. However, a limitation of this solution concept is its inability to distinguish between different learning rules that lead to the same steady-state behavior. We identify this limitation and develop a framework for the comparative analysis of the transient behavior of stochastic learning dynamics. We present the framework in the context of two learning dynamics: Log-Linear Learning (LLL) and Metropolis Learning (ML). Although both dynamics lead to the same steady-state behavior, they correspond to different behavioral models for decision making. We propose multiple criteria to analyze and quantify the differences in the short- and medium-run behavior of stochastic learning dynamics. We derive upper bounds on the expected hitting time of the set of Nash equilibria for both LLL and ML. For the medium- to long-run behavior, we identify a set of tools from the theory of perturbed Markov chains that yield a hierarchical decomposition of the state space into collections of states called cycles.
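To make the distinction concrete, the sketch below contrasts the one-step update rules of the two dynamics on a simple two-player coordination game. This is an illustrative assumption, not the paper's implementation: the payoff matrix, the temperature parameter `beta`, and all function names are hypothetical. Under LLL, a revising player samples an action with Gibbs probabilities proportional to `exp(beta * utility)`; under ML, the player proposes a uniformly random alternative and accepts it with probability `min(1, exp(beta * payoff_gain))`. Both chains share the same stationary distribution, yet their transient behavior differs, which is the gap the proposed framework addresses.

```python
import math
import random

# Illustrative 2x2 coordination game: both players receive PAYOFF[a][b]
# when they play actions a and b; mismatches pay zero.
PAYOFF = [[2.0, 0.0],
          [0.0, 1.0]]

def utility(player, action, profile):
    """Payoff to `player` for playing `action` against the other's action."""
    other_action = profile[1 - player]
    return PAYOFF[action][other_action]

def lll_update(player, profile, beta):
    """Log-Linear Learning: sample an action with probability
    proportional to exp(beta * utility)."""
    weights = [math.exp(beta * utility(player, a, profile)) for a in (0, 1)]
    r = random.random() * sum(weights)
    profile[player] = 0 if r < weights[0] else 1

def ml_update(player, profile, beta):
    """Metropolis Learning: propose the alternative action and accept
    with probability min(1, exp(beta * payoff gain))."""
    current = profile[player]
    proposal = 1 - current  # only one alternative with two actions
    gain = (utility(player, proposal, profile)
            - utility(player, current, profile))
    if gain >= 0 or random.random() < math.exp(beta * gain):
        profile[player] = proposal

def simulate(update, beta=2.0, steps=20000, seed=0):
    """Run asynchronous revisions and count time spent in each profile."""
    random.seed(seed)
    profile = [1, 1]
    counts = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 0}
    for _ in range(steps):
        player = random.randrange(2)  # one randomly chosen player revises
        update(player, profile, beta)
        counts[tuple(profile)] += 1
    return counts
```

Running `simulate(lll_update)` and `simulate(ml_update)` shows both chains concentrating on the potential-maximizing equilibrium `(0, 0)`, illustrating the shared steady state despite the different one-step rules.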