Case Study 12 showed how a perfectly good idea could get into trouble without a proper evaluation strategy.
For Sam Gonzales, the idea of using video clips and multiple-choice questions, all delivered by computer, was a perfectly good way to assess training. It would be easier and faster than the existing paper-based short-answer and multiple-choice tests. Data from one pilot test seemed to show that the new assessment was more accurate than the existing method at predicting which students would have on-the-job performance problems.
However, the more that test scores are tied to evaluation and pay in a business, the more political the assessment process can become. When two students and a trainer objected to low-scoring assessments, and the students asked to take the regular paper-based assessment instead, Sam was not in a good spot. He had no evaluation results that could demonstrate that the protesting students and trainer had no basis for their complaints.
By evaluating and testing his assessment idea in its define and design stages, he might have discovered:
1) cultural and political objections to the new method that he might have been able to address, and
2) computer-skill issues that could have made the assessment system unfair or unreliable for a significant number of individuals.
With a proper evaluation strategy, Sam might have found ways to improve his idea and to increase acceptance of it before releasing it in a real-world application.
I am interested in teaching non-credit classes on various computer skills in the community college/technical college system in Minnesota, and perhaps elsewhere. This is part of a long-term strategy to find part-time work that supplements my full-time income now and my retirement income later.
So, I intend to interview at least one person who is teaching non-credit computer skill classes. I regard these individuals as instructional designers because they usually have to design and propose their own classes. The college may have a topic it wants to offer a non-credit course on, but it is up to the instructor to actually design and deliver the learning experience.
I would like to ask this person:
1) How do you gather information about the learners you expect to be in the class?
2) How detailed are the learner outcomes for the class? Does the college have to review and sign off on them?
3) Do you have to come up with all learner activities? Do you design them from scratch or find existing materials, like a book or tutorial?
4) Do you have to test the learners at the end of the course? Even if the college does not require it, do you go through some kind of assessment so the learners can prove to their employers (current or prospective) that their skills have improved?
5) How do you evaluate your course? A survey handed out to the learners at the end of the course? How do you use the evaluation to change the course?
I intend to conduct a face-to-face interview with the instructor, taking written notes and probably audio-recording the conversation. I will transcribe the interview and prepare it for publication, probably doing some editing to clean up the grammar and improve the flow of ideas. I will make the interview available on my blog, probably as a PDF attachment to a blog entry.
I may supplement the interview with an instructor by talking to one or more college staff members who organize and promote these non-credit courses.
Key points of Chapter 5 in Real World Instructional Design
Most of chapter 5 covers how learning activities are delivered.
Chapter 5 also presented the components of a learning package.
Many learning activities will include a variety of media assets, and chapter 5 lays out the concerns involved in selecting them.
Finally, chapter 5 gets a little into structuring interactive programs and lists the main concerns, although this is a major topic that could fill a book in itself.
In terms of my prior knowledge, Chapter 5 covers things I have had to learn and do in the past. I have designed and taught classroom courses in scriptwriting. I have designed learning materials delivered by video, web and CD-ROM. I have had to be concerned with how interactivity supports learning outcomes. However, I continue to be impressed with how well Cennamo and Kalk cover the details of being a successful instructional designer and I will be keeping my copy of the book to refer to when I prepare to do future projects.
This case study illustrates some of the challenges of trying to produce learning materials that work across different cultures. This is important for two reasons. A growing number of companies and organizations are multinational. Also, any company or organization within the U.S. will probably be working with a diverse group of employees and/or customers.
The case study itself seems to imply that Iris and Jim (the American developers working with Hill Industries, an American member of a multinational consortium of companies using Lapin software) had adjusted well to the challenge of adapting American ID models and practices to European expectations.
However, in our class, Janine pointed out that the prototype Iris and Jim were proposing was still riddled with American slang and idioms and might not be received well by the European developers and members of the consortium.
That brought home the point to me that if you want a learning product to work well across cultures and nations, you have to go over the design again and again, and be willing to listen and to challenge your own assumptions.
In this case study, Ross Caslon has a challenging task - come up with a plan to implement a web-based course-management tool at a large university. Unfortunately, there's no strong sponsorship, no clear leadership, no strong desire by faculty to use the tool, no willingness from the technical staff to support it, and, to top it all off, the tool has many irritating technical deficiencies and problems.
The case study strikes me as an example of setting yourself up for failure by expecting too much to happen too soon. Ross ran a test by having seven faculty members use the tool after a one-day training session. That test showed some promising results, just not as promising as Ross had hoped.
Ross needs to be realistic about what he can achieve with the resources he has in hand and he needs to build on the successes in his test case. He can't realistically use direct instruction to change the behavior of faculty and support staff. He can identify a few specific successful uses of the course-management tool and keep working to refine those uses. He can identify uses that were not successful and evaluate them to determine if specific support and instruction would have made them successful or if other factors - like software problems - were the cause.
This case study reminds me of the role I play (as webmaster) in my organization as a change agent. I don't have the resources - technical, financial, political - to impose large-scale changes. But I can propose and deliver projects that improve the organization's performance. I can promote ideas and respond as quickly as possible when those ideas take root in the minds of others and they decide to act.
In the big picture, Ross is on the "winning" side by being an advocate of web-based learning management. He just needs to pick his goals carefully.