
Evaluating E-Learning

by William K Horton

Abstract

Presents an excerpt from the book "Evaluating Training Programs," by Donald L. Kirkpatrick and James D. Kirkpatrick. You can evaluate e-learning with Kirkpatrick's tried-and-true levels of evaluation. It's simply a matter of asking the right questions in the right ways. This article by William Horton is an excerpt from Don and James Kirkpatrick's Evaluating Training Programs, the third edition of which is being published by Berrett-Koehler this December.

How well can an evaluation framework conceived in the 1950s apply to 21st-century e-learning and its blended-, mobile-, and ubiquitous-learning variants? When Don Kirkpatrick was devising a way to evaluate learning efforts, computers weighed tons and the term "network" referred to television stations. But his four-level framework still applies quite well. Like all effective engineering models of evaluation, Kirkpatrick's model concerns itself with results rather than the mechanisms used to accomplish those results. What we evaluate with those levels is not the artifacts or apparatus of learning but the outcome, and the outcome of learning resides with the learners, not with the pens, pencils, chalkboards, whiteboards, hardware, software, or other paraphernalia of learning. Since we are measuring results rather than mechanisms, we can use this framework to evaluate e-learning just as we use it to evaluate other forms of learning. There are, however, reasons why we might want to use different techniques in the evaluation process, and that is the subject of this article. I will cover primarily electronic means of evaluating electronically delivered learning, but keep in mind that conventional means can be used to evaluate e-learning, and electronic means can be used to evaluate conventional learning.

Evaluating Level 1: REACTION

Reaction evaluations have gotten a bad reputation of late. Critics dismiss them as "bingo cards" or "smiley sheets."
They rightly point to research showing no correlation between Level 1 evaluations and actual learning. Just because someone liked training, they remind us, doesn't mean they learned anything. So why bother evaluating e-learning at Level 1? In many situations, e-learning is a new experience for learners. For it to succeed, it must overcome natural skepticism and inertia. Level 1 evaluations help us monitor learners' emotional acceptance of e-learning, and they can be essential in gathering the testimonials and statistics that generate a positive buzz around e-learning. So how do you evaluate the Level 1 response electronically? Here are some suggestions.

Let learners vote on course design. Online polls and ballots give learners the opportunity to comment on aspects of e-learning design and delivery, such as whether a particular lesson should be included in future versions of the course. In live virtual-classroom sessions, you can use the built-in polling feature to ask for immediate feedback on the quality of presentation and delivery. Online testing and survey tools can also be used to post ballots and record scores over a period of time.

Set up a course discussion thread. Let learners talk about their experiences with the e-learning. One way to do this is to set up a course discussion forum. Discussion forums are a common feature of online-meeting tools, and they are also available as standalone online discussion tools. A forum like this can serve as a bulletin board where designers post questions or issues for learners to respond to. In such discussions, learners can see other learners' comments and respond to them, creating an ongoing conversation that reveals more than a simple vote or numeric rating. Instead of a discussion forum, you may prefer a blog, or Web log, which displays entries as an ongoing journal of comments. Learners can post their reactions to the e-learning as they see fit, and read the reactions of others.
Try both and see which harvests the kinds of comments you need. Whichever method you use, be sure to seed the discussion with questions that provoke meaningful discussion. Avoid vague questions like "Did you like it?"

Use chat or instant messaging for a focus group. Focus groups traditionally require a lot of travel and set-up time. With chat and instant messaging, travel is no longer required; participants just join a chat session, and each person sees the comments typed by the others. Brainstorming ideas for improvement is particularly well suited to chat, because chat encourages a free flow of ideas without criticism. You can also conduct focus groups by telephone conferencing, but chat has the advantage of leaving behind a written record, so there are no notes to transcribe. If you have access to an online-meeting tool, such as WebEx, Centra, or LiveMeeting, you can conduct a conventional focus group with voice and shared display areas. If you do use such a tool, record the session so you can play it back for further analysis and note-taking.

Gather feedback continually. With e-learning, you can embed evaluation events among the learning experiences. For example, you can ask learners to select among possible responses to a lesson--pleased, disappointed, surprised, bored, confused--and ask for their reasoning. This approach can reveal unanticipated reactions, such as a learner who neither liked nor disliked the lesson but was surprised at what it contained. More frequent evaluations also solve the problem of e-learners who drop out before reaching the end of the course--and the end-of-course evaluation. But if you use frequent mini-evaluations, keep them short and simple, with only a question or two. Never subject learners to a lengthy interrogation as their reward for completing a tough module.

Gather feedback continuously. My personal choice is to enable feedback at any time throughout the learning experience.
You can include a button on every screen that lets learners immediately comment on the e-learning or ask a question about it. Providing the ability to send feedback at any time lets learners report problems, confusion, insights, and triumphs immediately. It prevents frustration from building to the point that the end-of-course or end-of-lesson evaluation becomes an emotional rant. It also provides an early warning so you can fix problems faster: by the time the sixth learner reaches the problem area, you have fixed it.

Record meaningful statistics automatically. Web servers, virtual-classroom systems, learning management systems (LMSs), and learning content management systems (LCMSs) all record detailed information about what the learner did while taking e-learning. By examining logs and reports from such systems, you can gather useful data: who is accessing the course and how often, the number of pages or modules accessed, and the number of assignments submitted. You can monitor participation in online chats and discussions, and trainees' rates of progress through the course. When reviewing such data, look for trends and anomalies. You might notice that learners gradually pick up speed as they proceed through a course. Or you might notice that 50 percent of your dropouts occur immediately after Lesson 6, which tells you that Lesson 6 needs improvement--or that six lessons are enough for most learners.

Evaluating Level 2: LEARNING

E-learning greatly simplifies Level 2 evaluations. In e-learning, tests can be automatically administered, scored, recorded, and reported. Automatic testing reduces the difficulty, effort, and cost of creating and administering tests, which means you can use them more widely. With pre-tests, you can determine whether learners are ready to begin a course or module. Diagnostic tests help identify the specific modules or learning objects learners should take.
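Diagnostic routing of the kind just described can be sketched in a few lines: map each test question to the module that teaches it, then assign a learner only the modules whose questions were missed. The question IDs and module names below are invented for illustration, not taken from any particular tool.

```python
# A minimal sketch of diagnostic-test routing. Each question maps to the
# module that teaches it; learners are routed only to the modules whose
# questions they answered incorrectly. All names here are hypothetical.
QUESTION_TO_MODULE = {
    "q1": "Module 1: Terminology",
    "q2": "Module 1: Terminology",
    "q3": "Module 2: Procedures",
    "q4": "Module 3: Troubleshooting",
}

def modules_to_take(answers: dict) -> list:
    """Return the modules covering the questions a learner missed."""
    needed = {QUESTION_TO_MODULE[q] for q, correct in answers.items() if not correct}
    return sorted(needed)

# A learner who missed q2 and q4 is routed to Modules 1 and 3 only,
# skipping Module 2, which the diagnostic test shows is already mastered.
print(modules_to_take({"q1": True, "q2": False, "q3": True, "q4": False}))
```

The same mapping can drive a pre-test gate: if `modules_to_take` returns an empty list, the learner may skip the course entirely.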
Post-tests confirm learning or shunt learners to remedial learning experiences, and within-course tests help learners monitor their accomplishment of the learning objectives.

E-learning provides inexpensive, easy-to-use testing tools to create tests, and standards-based reporting mechanisms to record and report scores. Advanced e-learning applications use testing results to design custom learning programs for learners. Let's explore these differences.

Testing tools. Many tools for authoring content include components to create test questions. In addition, separate tools can be used to create and administer online tests. Many learning management systems and learning content management systems also contain tools for creating and delivering tests.

Standards-based score reporting. E-learning standards for communications between learning content and management systems promise that content developed in different authoring tools can deliver tests and report scores back to any management system--provided all the tools and content follow the same standard. The advantage for evaluation is that the tedious and expensive process of distributing, conducting, gathering, grading, and recording tests is automated from start to finish. The effort and costs of testing are reduced, and the results are available for immediate analysis. A few years ago, getting results back from the learner's computer to a centralized database required either laboriously printing out results and reentering them, or doing some fairly sophisticated custom programming. Today, it can require as little as a few clicks in dialog boxes in the authoring tool and management system. The exact procedure varies considerably from tool to tool, but once the content is set up, each time the learner answers a test question, that score is recorded on the management system. Many large organizations are going beyond simply recording test scores.
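That reporting loop--each answered question recorded on the management system the moment the learner responds--can be sketched with a hypothetical in-memory score store. The `ScoreStore` class below is an invented stand-in, not a real API; an actual implementation would transmit each result from the content to the LMS using a communication standard such as SCORM or xAPI.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical stand-in for the management system's score database.
# In practice, a standards-based runtime carries these calls from the
# content to the LMS; the class and method names here are assumptions.
@dataclass
class ScoreStore:
    records: list = field(default_factory=list)

    def report(self, learner_id: str, question_id: str, correct: bool) -> None:
        # Record the result the moment the learner answers, so scores
        # are available for immediate analysis rather than batch entry.
        self.records.append({
            "learner": learner_id,
            "question": question_id,
            "correct": correct,
            "at": datetime.now(timezone.utc).isoformat(),
        })

    def score(self, learner_id: str) -> float:
        """Running score: fraction of this learner's answers that were correct."""
        answered = [r for r in self.records if r["learner"] == learner_id]
        if not answered:
            return 0.0
        return sum(r["correct"] for r in answered) / len(answered)

store = ScoreStore()
store.report("ann", "q1", True)
store.report("ann", "q2", False)
store.report("ann", "q3", True)
# ann has answered 2 of 3 correctly, and her score is queryable immediately.
print(f"ann's running score: {store.score('ann'):.0%}")
```

Because every answer lands in one store as it happens, the analysis step needs no transcription or re-keying--the point the paragraph above makes about automating testing from start to finish.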
The immediate availability of test results gives these organizations a way to continuously guide learning and ensure that targeted competencies are being developed. Some LMSs and knowledge management
