MOOCs and the Older Learner

Susan Hoffman, Chelsea Crown

The Osher Lifelong Learning Institute @ Berkeley (OLLI @Berkeley) is a year-round program of courses, lectures, special events, and interest circles for adults aged 55+ through the University of California, Berkeley. Since September of 2013, our research team has been investigating Massive Open Online Courses (MOOCs) as a teaching and learning tool in lifelong learning. We have launched two small, informal case studies of MOOCs and the older learner. Our first investigation was specific to the experience of learners over the age of 80, the “high olds,” who enrolled in the Coursera MOOC “What a Plant Knows” and participated in three consecutive monthly meetings discussing their experiences. In January 2014, we expanded our investigation by offering a hybrid MOOC-classroom course built around another Coursera MOOC, “The History and Future of (Mostly) Higher Education,” open to all OLLI @Berkeley members 55+. Participants engaged with the MOOC on their own time and at their own pace during the week and attended 1.5-hour weekly classroom sessions to discuss both the experience and the content.

The MOOC learning experience of older learners is a crucial missing piece of the MOOC puzzle, an oversight in this era of profound demographic change. Concentrating on accessibility for older learners is important for many reasons, as this group may experience multiple barriers to MOOCs, including challenges using technology, auditory and visual decline, and cognitive decline. However, this group may also reap the greatest benefits from MOOCs, and thus deserves to be included in these discussions. Given that adults over 80 are the fastest-growing demographic in the country, as well as the most at risk for social isolation, cognitive decline, and hearing and vision loss, the MOOC-as-intervention is an especially interesting concept to explore in further research. Potential benefits for older learners include allowing homebound older adults to remain intellectually stimulated and socially engaged, as well as the neurocognitive advantages of continuing to learn stimulating new skills and knowledge, from navigating technology to the multitude of subject areas MOOCs offer.

For the above reasons, we at OLLI @Berkeley have highlighted MOOCs and the older learner as an important focus of our research team and believe that other researchers nationwide should also be aware of the importance of this age-specific investigation. The main takeaway from our research is the importance of universal access. The changes that would make MOOCs more universally user-friendly and inclusive go beyond age to benefit diverse students worldwide, including other groups that experience similar barriers, such as learners with disabilities, English-language learners, and those with limited knowledge or experience of technology due to access issues, to name a few. Testing the platforms and presentations on a wide range of ages and abilities should be an important component of every MOOC development process. Making MOOCs more inclusive and accessible benefits everyone. The revolutionary effects promised by MOOCs ring hollow if the target population remains limited to mostly traditional-aged, able-bodied university students.

Our investigations have yielded several important and concrete findings that could be incorporated into various MOOC platforms to increase accessibility and user-friendliness.

The first need is for better demonstrations of, and as-you-go assistance with, navigating a MOOC. The need for an easily searchable and simply navigated demonstration of how to use a MOOC on any given platform has proven most important for our group of learners over the age of 80, as they generally experience the least comfort with technology. The demos we explored together often came up short, were difficult to find, and did not serve the special needs of our population. It may make sense to modify platforms to include large, bold buttons indicating where students can access help and receive assistance in navigating various components of MOOCs as they go along, perhaps even offering a virtual MOOC assistant in the spirit of the old Microsoft Word office assistant paperclip icon that offered suggestions and assistance within documents. Simply put, adaptations for differing levels of technology mastery as well as disabilities need to be more overt in MOOCs. In a recent Coursera MOOC, one of our over-80 members with hearing loss expressed extreme frustration that she needed to press the closed caption (CC) button for each and every lecture in order to follow along via captions. Among our ideas is the introduction of a short quiz during the sign-up stage that results in a tailored experience for the learner. By simply asking a few short questions about level of comfort and experience with technology, hearing or vision difficulties, English-proficiency level, and other barriers to engaging in a MOOC, an algorithm could automatically adjust the user’s personal experience and interface in response to these identified barriers.

Another finding from our research is that the presentations of many MOOCs are not age-friendly: the instructor may use their own scribbled writing on a board to highlight key points, not keep slides on screen for an adequate period of time, or have otherwise hard-to-read slides or distractions on screen. The frustrations caused by these challenges dominated our research discussions with our older learners. Instituting elements of universal and inclusive design in the way information is presented, such as making sure that writing on any accompanying slides is clear and large, with colors and contrast suited to aging eyes, is an easy adjustment that can become part of the “formula” for MOOCs. Many studies have shown the various ways to make websites and print materials more age-friendly; it is just a matter of incorporating those same ideas into MOOCs. We make constant adjustments in our classrooms at OLLI @Berkeley based on cutting-edge aging research and on student feedback and evaluations. It only makes sense for MOOCs to have the same level of responsiveness to the different needs of the older learner.
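To illustrate the sign-up quiz idea above, the following sketch (our own illustration, not a feature of any existing platform; all question keys and settings are hypothetical) shows how a few self-reported answers could be mapped to interface adjustments such as always-on captions, larger text, and a persistent help button.

```python
# Illustrative sketch only: map a short accessibility quiz to interface settings.
# The question keys and settings below are hypothetical, not an actual platform API.

def tailor_interface(quiz_answers):
    """Return interface settings adapted to self-reported barriers."""
    settings = {
        "captions_on_by_default": False,   # avoids pressing CC for every lecture
        "font_scale": 1.0,                 # relative text size
        "high_contrast": False,
        "show_persistent_help_button": False,
        "offer_guided_tour": False,
    }
    if quiz_answers.get("hearing_difficulty"):
        settings["captions_on_by_default"] = True
    if quiz_answers.get("vision_difficulty"):
        settings["font_scale"] = 1.5
        settings["high_contrast"] = True
    if quiz_answers.get("technology_comfort", "high") == "low":
        settings["show_persistent_help_button"] = True
        settings["offer_guided_tour"] = True
    if quiz_answers.get("english_proficiency", "native") != "native":
        settings["captions_on_by_default"] = True
    return settings

# Example: a learner with hearing loss and low comfort with technology.
print(tailor_interface({"hearing_difficulty": True, "technology_comfort": "low"}))
```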

Even though our research subjects were highly motivated, self-selected groups with a professed interest in taking a MOOC, the barriers outlined in this abstract often led to experiences of frustration and resulted in individuals dropping out of the course before completion, despite encouragement and support to keep going from peers and our faculty. By focusing on removing barriers, MOOCs can help reduce drop-out rates; as it stands, some estimates suggest that only 6% of enrollees actually finish a course. Adjusting MOOCs using ideas of universal and inclusive design will not only better engage older learners, but should have ripple effects across the age and diversity spectrum of MOOC participants.

The Impact of MOOC Blended Instruction to Teach Programming

Velma Latson

Web 2.0 software applications are driving a shift from formal, traditional, instructor-directed learning environments to more student-centered, open learning environments where learners have more control. In student-centered, open learning environments, the focus is on learners and their ability to use thinking skills to solve problems. Teachers are no longer the authority in the classroom but co-learners and guides, while learners make their own discoveries (Brown, 2000) about what is important in the learning experience. Learners are encouraged to draw on prior knowledge of the content to collaborate with experts and peers, and new Web 2.0 software applications are making this collaboration more open.

MOOCs are Web 2.0 teaching applications that connect people throughout the world in one classroom environment. MOOCs help learners collaborate, explore, and create artifacts that help them acquire the critical thinking skills to expand their learning. These new theories of instruction using computer technology and social software applications are changing the way people interact in society and gain knowledge in educational environments. New software applications are socially constructed, built around users’ ability to collaborate and exchange ideas with others.

The University System of Maryland (USM) and Ithaka S+R (2013) initiated a research study on how different online learning platforms affect student outcomes and whether these platforms could reduce costs for students enrolled in traditional institutions. The purpose of this presentation is to describe the impact of using a MOOC in a blended instructional environment to teach programming to undergraduate, non-STEM-major students. The presentation will describe the lessons learned from implementing a MOOC-blended instructional environment in a side-by-side comparison of approximately 100 students in the experimental group, who learned to program from the MOOC, and approximately 100 students in the control group, who learned to program in a traditional classroom setting.

References:
Brown, J. S. (2000). Growing up digital: How the Web changes work, education, and the ways people learn [Electronic version]. Change, March/April.
Ithaka S+R. (2013). Interim Report: A Collaborative Effort to Test MOOCs and Other Online Learning Platforms on Campuses of the University System of Maryland.

Universal MOOC Metrics? How should researchers talk to one another about MOOC data?

Sara Shrader and Jason Mock

Since partnering with Coursera in 2012, the University of Illinois has gathered massive amounts of data from our MOOCs, including course survey data (beginning and end of semester), clickstream data, and activity data consisting of quiz scores and forum posts. In trying to organize and understand this data, the Illinois Learning Analytics team has considered countless questions regarding the efficacy of MOOCs for student learning. For example, we have explored questions of completion and retention, as well as questions of opportunity and engagement for learners from developing countries. Furthermore, we have encouraged our faculty who are teaching MOOCs to think about the MOOC platform in novel ways, in order to leverage the rich research opportunities available to them.

However, despite numerous robust discussions surrounding the value of our MOOC data, we have made less headway in answering some of the more fundamental questions about the nature and purpose of MOOCs. In particular, our research group has spent considerable time unpacking the various definitions of traditional metrics for student learning, such as “who counts as a participant?” and “what does learning mean in the context of MOOCs?” One of the most exciting, as well as frustrating, aspects of researching student learning in the context of MOOCs is having the ability to create the standards by which to measure success. Unlike students in traditional online courses, MOOC students hail from a variety of backgrounds and bring diverse motivations. As such, we need a new “language” for talking about MOOCs, one which takes into account the unique and varied backgrounds and intentions of MOOC students.
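To make the definitional question concrete, the sketch below (our own illustration, using hypothetical per-learner activity records rather than an actual Coursera export) shows how three plausible definitions of “participant” yield three different counts from the same data.

```python
# Illustration only: three candidate definitions of "participant" applied to
# hypothetical per-learner activity records (not an actual Coursera data schema).

learners = [
    {"id": 1, "lectures_viewed": 0,  "quiz_scores": [],       "forum_posts": 0},  # registered only
    {"id": 2, "lectures_viewed": 12, "quiz_scores": [],       "forum_posts": 0},  # watcher
    {"id": 3, "lectures_viewed": 30, "quiz_scores": [80, 95], "forum_posts": 4},  # active learner
]

definitions = {
    "registered":         lambda r: True,
    "viewed_any_lecture": lambda r: r["lectures_viewed"] > 0,
    "assessed_or_posted": lambda r: len(r["quiz_scores"]) > 0 or r["forum_posts"] > 0,
}

for name, counts_as_participant in definitions.items():
    n = sum(1 for r in learners if counts_as_participant(r))
    print(f"{name}: {n} participant(s)")
```

Which of these counts serves as the denominator changes completion rates and most other downstream metrics, which is why a shared vocabulary matters for cross-comparison.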

This pressing need – creating common metrics for understanding MOOC data – creates some interesting conceptual challenges. On the one hand, our group believes that first and foremost the metrics used to describe MOOC data should serve a greater utilitarian purpose of bringing cohesion to the emerging field of MOOC research. Yet, on the other hand, we recognize the situated and contextualized nature of MOOCs, and understand that creating universal labels may unintentionally mask some of the more nuanced and interesting things to be learned from MOOC data.

In this workshop we would like to share some of the conceptual roadblocks we have encountered in trying to understand MOOC data, as well as to discuss ideas for creating inclusive MOOC metrics that aid researchers in better understanding student learning. In fostering discussion with other researchers, our goal is to generate useful MOOC metrics that enable cross-comparisons of MOOC data.

Understanding Student Engagement in MOOCs

Arti Ramesh, Dan Goldwasser, Bert Huang, Hal Daumé III, and Lise Getoor

The large number of students participating in massive open online courses (MOOCs) and the availability of interaction data provide an opportunity to study and uncover student engagement patterns, to develop technology that can help improve student engagement, and to facilitate instructor interventions. Our work looks at several problems in MOOCs: 1) modeling student engagement to predict course completion, 2) analyzing changes in engagement patterns, and 3) understanding discussion forum content and its relationship to course completion.

When users interact with a MOOC, they leave behind cues suggestive of their engagement with the course and their intention to complete it. We have developed a data-driven model of student engagement in MOOCs using features from users’ interaction with the MOOC, and we use it to predict course completion (course survival) [3]. Our model uses behavioral cues (course-related activities such as viewing lectures, submitting assignments, and participating in the discussion forum), forum content (polarity and subjectivity of forum posts), and forum interaction structure to distinguish between forms of student engagement (active and passive). The engagement types are represented as latent variables in our model and are learned from observed data. We then use the latent variables to predict student survival. We use probabilistic soft logic (PSL) [2] to represent observed features and (latent and target) variables as logical predicates, and we construct rules over these to capture domain knowledge. We evaluated our models on predicting learner survival across three MOOCs: Surviving Disruptive Technologies, Women and the Civil Rights Movement, and Gene and Human Condition. We demonstrated that incorporating latent engagement variables helps in predicting student survival.
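As a rough illustration of the style of model described above, the sketch below combines soft truth values under Łukasiewicz logic with weighted rules linking behavioral cues, a latent engagement variable, and survival. The specific rules, weights, and cue values are our own illustrative assumptions; the actual PSL programs are described in [2] and [3].

```python
# Simplified sketch in the spirit of probabilistic soft logic (PSL): soft truth
# values in [0, 1], Lukasiewicz conjunction, and weighted "distance to
# satisfaction" for each rule. Rules, weights, and cue values are illustrative
# assumptions, not the authors' actual PSL program (see [2], [3]).

def soft_and(*values):
    """Lukasiewicz conjunction of soft truth values."""
    return max(0.0, sum(values) - (len(values) - 1))

def rule_penalty(weight, body, head):
    """Weighted distance to satisfaction of the rule body -> head."""
    return weight * max(0.0, body - head)

# Hypothetical observed cues for one learner (fractions of activity completed).
views_lectures, submits_assignments, posts_in_forum = 0.9, 0.7, 0.2

def total_penalty(engaged_active, survives):
    """Total penalty for one assignment to the latent and target variables."""
    return (
        # Behavioral cues suggest active engagement.
        rule_penalty(1.0, soft_and(views_lectures, posts_in_forum), engaged_active)
        # Active engagement suggests course survival.
        + rule_penalty(2.0, engaged_active, survives)
        # Missing assignments argues against survival.
        + rule_penalty(1.0, 1.0 - submits_assignments, 1.0 - survives)
    )

# Crude grid search over the latent engagement and survival values
# (real PSL solves a convex optimization instead).
grid = [i / 10 for i in range(11)]
best = min(((e, s) for e in grid for s in grid), key=lambda es: total_penalty(*es))
print("inferred (engaged_active, survives):", best)
```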

In order to design effective interventions, we need to identify students at risk of dropping out early in the course. We conducted experiments to predict student survival early in the course by training our models on data from its initial part. Our experiments show that our models, especially the latent model, are able to predict student survival reliably at an early stage when compared to the model without latent variables. In addition to improved prediction accuracy, our latent engagement model also unveils interesting patterns in student engagement. Analyzing the latent engagement estimates predicted by our model, we find that passive engagement dominates in the beginning and that there is an increase in active forms of engagement toward the end. Examining the transitions between engagement types, we observed that most passive users show an increase in active engagement levels prior to dropping out. This is suggestive of help-seeking, complaining, or expressing dissatisfaction or difficulty in following course materials in the discussion forums before dropping out. Probing these forum posts, we can uncover reasons leading to student disengagement and dropout and identify students who can be helped via intervention. This leads us to a closer analysis of forum content.

Our second contribution looks more closely at discussion forum content for student survival indicators. MOOC discussion forums are the principal means of interaction among MOOC participants. Negative sentiment can indicate dissatisfaction with the class; however, it can also express an opinion reflecting high levels of engagement. Negative sentiment in course-related discussions does not necessarily imply a negative attitude toward the course or disengagement, but in logistics or feedback posts it does signal disengagement. Distinguishing between these two uses of sentiment is vital when trying to identify students at risk of dropping out. We make use of recent improvements in topic modeling, Seeded Topic Modeling (SeededLDA) [1], to extract posts corresponding to course logistics, feedback, and course-related material [under review]. We leverage knowledge of the course syllabus and the general nature of logistics and feedback posts to seed our model. We enhance the survival models described above with the topic distributions from SeededLDA. Our rules capture the sentiment and topic of posts to assess signs of engagement and disengagement. We demonstrate that including these topic-distribution features in our survival models helps in predicting student survival.
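A toy version of the topic-and-sentiment intuition is sketched below. The seed words, the keyword-based sentiment check, and the rule that only negative logistics or feedback posts count as a disengagement signal are simplified assumptions for illustration; the actual approach uses SeededLDA [1] and PSL rules rather than keyword matching.

```python
# Toy illustration: combine a crude topic guess with a keyword sentiment check
# to flag possible disengagement. Seed words and scoring are simplified
# assumptions; the authors' approach uses SeededLDA [1] and PSL rules.

SEED_WORDS = {
    "logistics": {"deadline", "certificate", "grading", "submit", "extension"},
    "feedback":  {"lecture", "audio", "confusing", "pace", "slides"},
    "content":   {"gene", "protein", "experiment", "theorem", "dataset"},
}
NEGATIVE_WORDS = {"frustrated", "confusing", "bad", "unclear", "wrong", "hate"}

def guess_topic(tokens):
    """Pick the topic whose seed words overlap most with the post."""
    overlaps = {topic: len(words & tokens) for topic, words in SEED_WORDS.items()}
    return max(overlaps, key=overlaps.get)

def disengagement_signal(post):
    tokens = set(post.lower().split())
    negative = bool(NEGATIVE_WORDS & tokens)
    # Negative sentiment only counts as disengagement outside content discussions.
    return negative and guess_topic(tokens) in {"logistics", "feedback"}

print(disengagement_signal("the grading deadline is unclear and I am frustrated"))   # True
print(disengagement_signal("I think this experiment result is wrong here is why"))   # False
```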

Our current research focuses on applying our models to courses as they progress and on identifying possible instructor interventions.

References
[1] Jagadeesh Jagarlamudi, Hal Daumé III, and Raghavendra Udupa. Incorporating lexical priors into topic models. In Proceedings of EACL, 2012.
[2] Angelika Kimmig, Stephen H. Bach, Matthias Broecheler, Bert Huang, and Lise Getoor. A short introduction to probabilistic soft logic. In NIPS Workshop on Probabilistic Programming: Foundations and Applications, 2012.
[3] Arti Ramesh, Dan Goldwasser, Bert Huang, Hal Daumé III, and Lise Getoor. Learning latent engagement patterns of students in online courses. To appear in Proceedings of AAAI, 2014.

MOOC Learner Motivation and Course Completion

Yuan Elle Wang and Ryan Baker

In this talk, we will discuss our work investigating the relationships between MOOC learners’ motivations and completion rates, studying the motivational differences between students who complete the course and those who do not.

The research context is the Coursera MOOC “Big Data in Education”, with a total enrollment of over 48,000. As of the end of the course, the pre-course survey (consisting of subscales of the Patterns of Adaptive Learning Scales (PALS) and MOOC-specific items) had received 2,792 responses. Our results show that this combination of survey measures can be useful for studying students’ motivational directions early in a MOOC. Specifically, we find four relationships with the potential to inform future interventions by both instructors and learning designers:
• Students who were more interested in the MOOC learning environment as a new platform for learning than in aspects related to the course content were less likely to complete the course. Specifically, non-completers rated items such as “Course is offered by a prestigious university” and “Geographically isolated from educational institutions” higher when asked their reasons for enrolling in the course.
• Mastery-goal orientation and academic efficacy were not useful predictors of whether a learner would successfully complete the course. Learners were generally high in mastery-goal orientation, a finding potentially characteristic of students who choose to take a MOOC for no formal credit.
• Early self-reported confidence in completing the course was a successful indicator of whether a student would complete the course, with more confident students being more likely to complete.
• Students who self-identified as non-native English speakers exhibited higher confidence in completing the course than native speakers from the beginning of the course. However, this difference was not linked to a difference in course completion.

In considering these findings, it is worth noting that course completion is only one of many metrics that can be used to study persistence and learning in the context of MOOCs; many students enter MOOCs with the goal of learning a specific subset of course material rather than completing the course. Nonetheless, our results suggest that initial student motivation can play an important role in whether students persist in and complete a MOOC.
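As a minimal illustration (using invented numbers, not the authors’ survey data), the sketch below shows the kind of comparison underlying the confidence finding: completion rates tabulated by early self-reported confidence.

```python
# Hypothetical illustration (not the authors' data or analysis code): completion
# rate by self-reported confidence from a pre-course survey, on a 1-5 scale.

survey = [
    # (confidence, completed course?)
    (5, True), (5, True), (5, False), (4, True), (4, False),
    (3, True), (3, False), (2, False), (2, False), (1, False),
]

by_level = {}
for confidence, completed in survey:
    by_level.setdefault(confidence, []).append(completed)

for level in sorted(by_level, reverse=True):
    outcomes = by_level[level]
    rate = sum(outcomes) / len(outcomes)
    print(f"confidence {level}: completion rate {rate:.0%} (n={len(outcomes)})")
```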

Achieving Learning Objectives Online: Not All Platforms are Equal

Nick Feamster

Over the past twelve months, I have taught two Massive Open Online Courses (MOOCs):
● A Coursera course on Software Defined Networking, to over 50,000 enrolled students, 4,000 of whom completed all programming assignments; and
● A Udacity course on graduate computer networking, as part of Georgia Tech’s initial offering of the Online Masters in Computer Science program.

In both cases, I also used the videos as content for “flipped” versions of the on-campus courses at Georgia Tech (a graduate seminar for the SDN course and a large graduate course for the latter).

Although the syllabus of each course differed, the courses have several overlapping assignments and elements, particularly on topics related to software defined networking. Despite this overlap in content and assignments, the modes of delivery and the amount of support that each platform offers for course development differ significantly. Among the salient differences I have noted:
● Video delivery. Coursera uses a “slides and talking head” model for preparing videos that is largely “do it yourself”; Udacity uses a full production team to produce lectures with a “voice overlay over whiteboard/writing hand”.
● Forums. Coursera uses a “home brew” forum setup, whereas Udacity relies on more polished third-party forum software, such as Piazza.
● Pacing. The courses are paced differently: Coursera uses a “synchronous” model, whereas Udacity uses a “go at your own pace through all of the course material” model.
● Demographics. The Coursera course was made up largely of students who enrolled for their own enrichment, as no credit was offered other than a “completion certificate”. On the other hand, because the Udacity course is part of a (paid) online degree program, I found its students to be more “conventional” in their demands (and sense of entitlement).

I am still forming my conclusions about each of these platforms, but I think I can already offer some fairly strong (and probably controversial) opinions about them and about the pedagogical challenges of offering a degree online. In summary, I have found the Coursera course infinitely more enjoyable (and, I think, more effective as a course), for many reasons:
● One of the features that allows a MOOC to scale is that students help each other out, rather than relying excessively on TA or professor support. In the Coursera course, students enrolled for their own enrichment and thus “self-selected” because they were there to learn the topic. This type of learning attitude allows a MOOC to scale much better than when students have more conventional “entitled” attitudes of paying for a course (and, hence, expecting instructor response). Independent of how MOOCs are financed, I have concluded one thing: students who pay “tuition” expect a level of service and responsiveness that may not scale. There need to be other, better ways of financing MOOCs.
● Along the lines of the previous point, the asynchronous “go at your own pace” model does not scale: students can’t help each other out, and (busy) professors cannot context-switch quickly enough to provide in-depth technical answers to student questions. In the Coursera course, students were all in the same place and could help each other out. But, perhaps even more importantly, my head was at one point in the course, and I was much more capable of providing detailed answers to questions (often even reading code). Such a level of support is simply not possible in the asynchronous, go-at-your-own-pace model.
● Content is king, and substance trumps style. I made all of the Coursera videos from a laptop camera, from the comfort of my own home. I edited them myself with Camtasia. I “scripted” each lecture the morning that I recorded it. I had no production staff. I am firmly convinced that the output from those videos was just as good as the over-produced content that was required from the in-studio Udacity lecture recordings. In particular, I found the Udacity recording setup overly cumbersome: the tendency to encourage the “whiteboard style” made it particularly hard to do “on screen demos”, something that is critical for the type of material we teach in networking courses.

In keeping with the theme of the workshop, I can provide examples of the same content delivered in both settings. We can look at a Udacity video and a Coursera video describing the same content. I can show examples of the forums from both courses and how the organization of, and response to, questions differed in each case. I can also show some examples of recording and production. One takeaway I would like to emphasize is that we should not be generalizing so much about what MOOCs can (or cannot) do; we should also be paying particular attention to some of the finer points, such as the design of the platform and the modes of delivery. My experiences may not generalize: other instructors might have exactly the opposite experience when comparing different platforms. However, I offer my two different experiences as a concrete starting point for discussion.

Diana Oblinger

Dr. Diana G. Oblinger is President and CEO of EDUCAUSE, a nonprofit association whose mission is to advance higher education through the use of information technology. The current membership comprises over 2,400 colleges, universities, and education organizations. Previously, Oblinger held positions in academia and business.

Oblinger is known for her leadership in teaching, learning, and information technology. For example, she created the EDUCAUSE Learning Initiative (ELI), known for its innovation in learning and learning technologies, and launched the Next Generation Learning Challenges with the Bill & Melinda Gates Foundation. She is the author or co-author of many books, articles, and monographs, including Game Changers: Education and Information Technologies, Educating the Net Generation, and What Business Wants from Higher Education.

Oblinger serves on a variety of boards and has received several awards and honorary degrees.

Mark Lester

Mark Lester is Global Head of Partnerships at FutureLearn, the UK-based massive social learning platform, and a member of its executive team. Prior to joining FutureLearn, Mark headed the strategy development unit at the British Open University, held senior management positions in the financial services sector and central government, and was for several years a managing consultant at Monitor Group, advising organisations on innovation, industry competitiveness, business strategy, and healthcare policy. Mark holds a Master of Science degree and a Bachelor of Science degree from the LSE and trained as a teacher at the Institute of Education, London. He is married with two children.

Melissa Loble

Melissa Loble is senior director of Canvas Network, a platform where academic institutions can offer open, online courses, including MOOCs. Melissa oversees strategy, design, and implementation, with a focus on ensuring highly engaging and effective learning experiences.

Previously, Melissa was associate dean for Distance Learning at the University of California, Irvine, where she provided leadership in curriculum development, instructional design, and the selection and use of educational platforms and technologies. She led multiple projects resulting in the delivery of 13 MOOCs, including a course focused on themes from a popular television show, “Society, Science, Survival: Lessons from AMC’s The Walking Dead,” and her own course, “Emerging Trends and Technologies in Virtual K12 Education.”

Melissa has held senior leadership roles for a number of educational technology companies and has taught in Pepperdine University’s Masters in Learning Technologies program (MALT) for the past ten years. Her classes have included Technology and Curriculum, Managing Technology for Change, and Mentoring Team Leadership.

She holds master’s degrees in business administration and educational policy from Columbia University and a bachelor’s degree in political science from the University of California, Los Angeles.