Sara Shrader and Jason Mock
Since partnering with Coursera in 2012, the University of Illinois has gathered massive amounts of data from our MOOCs, including course surveys (administered at the beginning and end of each session), clickstream data, and activity data such as quiz scores and forum posts. In trying to organize and understand these data, the Illinois Learning Analytics team has considered countless questions regarding the efficacy of MOOCs for student learning. For example, we have explored questions of completion and retention, as well as questions of opportunity and engagement for learners from developing countries. Furthermore, we have encouraged our faculty who teach MOOCs to think about the MOOC platform in novel ways in order to leverage the rich research opportunities available to them.
However, despite numerous robust discussions surrounding the value of our MOOC data, we have made less headway in answering some of the more fundamental questions about the nature and purpose of MOOCs. In particular, our research group has spent considerable time unpacking traditional metrics for student learning, grappling with questions such as "who counts as a participant?" and "what does learning mean in the context of MOOCs?" One of the most exciting – as well as frustrating – aspects of researching student learning in the context of MOOCs is having the ability to create standards by which to measure success. Unlike students in traditional online courses, MOOC students hail from a variety of backgrounds and bring diverse motivations. As such, we need a new "language" for talking about MOOCs, one which takes into account the unique and varied backgrounds and intentions of MOOC students.
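The stakes of these definitional questions can be made concrete with a small sketch. The data and participant definitions below are entirely hypothetical and are not drawn from Illinois course data; they simply illustrate how the same activity records yield very different "completion rates" depending on who is counted as a participant.

```python
# Hypothetical illustration: a MOOC's completion rate depends heavily on the
# definition of "participant." All records below are invented for this sketch.

# Minimal activity records: (user_id, watched_video, attempted_quiz, earned_certificate)
records = [
    ("u1", True,  True,  True),
    ("u2", True,  False, False),
    ("u3", True,  True,  False),
    ("u4", False, False, False),  # registered but never active
    ("u5", True,  True,  True),
]

def completion_rate(records, is_participant):
    """Fraction of participants (per the supplied definition) who earned a certificate."""
    participants = [r for r in records if is_participant(r)]
    if not participants:
        return 0.0
    return sum(1 for r in participants if r[3]) / len(participants)

# Definition A: every registrant counts as a participant.
rate_all = completion_rate(records, lambda r: True)   # 2 of 5 -> 0.40

# Definition B: only learners who attempted at least one quiz count.
rate_quiz = completion_rate(records, lambda r: r[2])  # 2 of 3 -> ~0.67

print(f"all registrants: {rate_all:.2f}, quiz-takers: {rate_quiz:.2f}")
```

The gap between the two figures is the point: without shared definitions, two studies of the same course could report completion rates of 40% and 67% and both be internally consistent.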
This pressing need – creating common metrics for understanding MOOC data – presents some interesting conceptual challenges. On the one hand, our group believes that, first and foremost, the metrics used to describe MOOC data should serve the greater utilitarian purpose of bringing cohesion to the emerging field of MOOC research. On the other hand, we recognize the situated and contextualized nature of MOOCs, and understand that creating universal labels may unintentionally mask some of the more nuanced and interesting things to be learned from MOOC data.
In this workshop we would like to share some of the conceptual roadblocks we have encountered in trying to understand MOOC data, and to discuss ideas for creating inclusive MOOC metrics that help researchers better understand student learning. By fostering discussion with other researchers, we aim to generate useful MOOC metrics that enable cross-comparisons of MOOC data.