Tips for training your staff: Basic instructional design


By Barbara Bucklin, PhD

bSci21 Contributing Writer

If you’ve followed my articles, you know a bit about my background. I’m educated in OBM and have been working with organizations for around 20 years, with the lion’s share of that work focused on implementing Instructional Design solutions. In my conference presentations and articles, I tend to focus on instructional technologies or advanced topics. However, I’ve recently been asked a lot about basic behavioral Instructional Design. I’ve been approached with comments like, “I really loved your presentations on working with Subject Matter Experts and using mobile technology for learning. But where do I even start if I want to build a learning program for my staff?”  From these interactions, it appears that many people in our field want to improve their organization’s training and have no idea where to start.

Start with performance analysis

Before you consider designing a training solution, you’ll need to determine whether instruction is the answer. Is the problem a knowledge or skill gap, or some other gap? Time and time again, clients come to me asking for training when training alone isn’t the best solution, and often isn’t a solution for the problem at all. In fact, much of the time training is not the answer.

Data show that designing good instructional materials can take as much as 100 hours per hour of finished instruction, and even more depending on the content’s complexity and how it’s delivered (Defelice, 2018). At that ratio, even a modest two-hour course could require roughly 200 hours of design and development time. That’s a costly endeavor if the participants don’t need to be trained. It’s also costly because you’re taking people away from their work, and potentially losing revenue indirectly, to train them on skills they already know and can perform.

Back in 1970, Mager and Pipe wrote a book called Analyzing Performance Problems, Or, You Really Oughta Wanna. Their basic questions were (1) Is there a gap between ideal and actual performance? and (2) If so, is it because performers ‘can’t do it’ or because they ‘won’t do it’?

When it’s a ‘can’t do’ problem, employees couldn’t perform the task if their lives depended on it. In most of these cases, training is the solution.

When it’s a ‘won’t do’ problem, employees have the skills but aren’t applying them. I use performance analysis to find the environmental variables that are causing the gap and to avoid training skills and knowledge the performer already has. To do this, I use a comprehensive OBM Performance Analysis Checklist that combines some of the best OBM research and thought leadership: Dr. Tom Gilbert’s famous Behavior Engineering Model dating back to the 1970s (Gilbert, 1978), Dr. Carl Binder’s more practical and updated Six Boxes® (Binder, 2012), and Dr. John Austin’s Performance Diagnostic Checklist (Austin, 2000). I use this checklist as a guide to ask questions and observe work behavior and results. If you’d like to learn more about it, check out a previous article I wrote called Pinpoint and Analyze the Problem (Bucklin, 2016), available at http://www.baquarterly.com/wp-content/uploads/2016/10/BAQ-October-2016-Final.pdf

As a quick summary, performance analysis questions ask: Are clear expectations set? Is feedback provided? Do consequences (positive and negative) align with expectations and feedback? Are there positive consequences for competing, undesired behaviors? Are tools, equipment, and resources available?

Many times, when I do a performance analysis, I find gaps in both ‘can’t do’ and ‘won’t do’ areas. For example, in automotive training, sales consultants may not have the skills to conduct a proper walk-around inspection. Once they learn the skills, it’s still quicker and easier to do things the old way by skipping the inspection, and there are no positive reinforcers for conducting a proper walk-around or punishers for skipping it. In these cases, along with training, it’s important to design and implement solutions to support the newly trained skills back on the job.

Tips for effective instruction

When we discover through our analysis that it’s a ‘can’t do’ problem, it’s time to design instruction. The list below isn’t meant to be exhaustive, but it’s a great start and identifies what I’ve found to be important for effective instruction, and what’s most often lacking in instructional materials I review.

  1. Focus on the learner. Create instruction tailored to the learner’s skill or knowledge gaps – what they need to be effective on the job. Avoid ‘nice-to-know’ information. Train to their level and current behavioral repertoires, not yours as the expert.
  2. Write measurable learning objectives. For decades, Robert Mager has been the primary source for writing effective behavioral objectives, and he still is. According to Mager (1997), “the most important and indispensable characteristic of a useful objective is that it describes the kind of performance that will be accepted as evidence that the learner has mastered the objective.” Because we want the learner to do something that someone else can observe as evidence, it’s important to avoid verbs like “know” and “understand” in learning objectives. How can you tell if someone “knows” or “understands” something? Instead, restate the objective so that the learner has to do something observable (like “state,” “list,” or “explain”) to demonstrate that he or she “knows” or “understands.”
  3. Design active, meaningful practice. The student learns what the student does! Susan Markle (1990) defined active as something the learner does overtly rather than covertly; we can only measure overt, observable behavior. Meaningful interactivity is more than simply requiring learners to engage in activity. For example, learners are technically active when they click a “next” button to access more information in a web-based training course. While this is “interactivity,” it doesn’t demonstrate that they learned or can apply a new skill (or even attended to the content presented). I recommend designing at least one meaningful practice activity for every measurable learning objective.
  4. Teach, practice, and test in context. Learning in a context that closely approximates the learners’ job produces better retention than learning outside that context. A fun research study showed that studying for a test in the same gray, windowless room where the test was administered produced better outcomes than studying in other locations. With colleagues, I wrote a recent article in Performance Improvement called “Increasing Performance in the Automotive Industry Through Context-Based Learning” (Bucklin, Brown, & Conard, 2018). Through our research and application, we identified three factors of context-based learning:
    1. Physical environment. This is the first, and perhaps the most obvious, component of context-based learning. By context, we mean delivering learning in the appropriate physical environment, using tools, processes, and resources that are the same as, or very similar to, the ones used on the job.
    2. Behavioral consequences. The skills practiced during training are the ones to be performed on the job, and the behavioral consequences replicate what learners will encounter on the job. For example, when practicing how to overcome a customer’s objection in a call center, the role-play ‘customer’ responds much the way a real customer would. Behavioral consequences, such as positive or negative customer reactions, are built into the materials, delivered by the facilitator, a role-play partner, or a simulated customer, and they reinforce desired skills or correct undesired ones.
    3. Emotional context. This is the most difficult to replicate. For example, the emotional context while learning to interact with an angry customer, employee, or manager is often very different from the context when applying that skill on the job. You can create emotional context through pressure, such as adding a time limit. Adding realistic behavioral consequences, as mentioned above, also helps.
  5. Create ‘step sizes’ that are not too large or too small. Break concepts into smaller units or chunks, but not so small that they become meaningless. In my experience, getting this right is both an art and a science, and it needs to be tested with learners to determine whether you’re teaching the concepts in the right ‘chunks.’ I include this tip because all too often I see instruction that runs incredibly long (e.g., 30 minutes to an hour) before the learner is given a meaningful response opportunity. In the old days of programmed instruction (before my time), instructional designers often erred on the side of step sizes that were too short, which made it hard for learners to piece the material back together. For example, they broke a process into each of its component steps and required practice between each one, never asking the learner to put them all together. As an example, if you’re teaching new data entry administrators to complete a 10-page form, teaching and testing one page at a time may be an ideal step size that’s not too small and not too large. If you taught and tested one line at a time, learners might lose the context or meaning of the section because the chunk is too small. If you taught and then tested on all 10 pages at once, a learner could make an error on an early page, receive no corrective feedback, and carry that error into later pages.
  6. Provide immediate, descriptive feedback based on correct and incorrect responses. That feedback improves learning has been demonstrated in hundreds (maybe thousands) of research studies, and the more immediate the feedback, the better. It’s one of the most powerful mechanisms for learning. Feedback can come from an instructor, a programmed (online) format, and/or the learners’ environment, helping learners adjust or continue. As behavior analysts, this should be obvious to us, but it doesn’t hurt to point it out. For each response requirement, provide feedback that meets these four characteristics: immediate, frequent, descriptive, and objective.
  7. Test the objectives. I’m covering this as the last tip; however, many Instructional Designers, myself included, write the test first. Good tests:
    1. Include at least one well-constructed test item for each objective. A test must determine whether the learner has mastered the objectives. Don’t skip an objective; if it’s not important enough to test, it shouldn’t be an objective of the course.
    2. Measure the behavior(s) you’re looking for from a learner. For example, in a technical course, if you want the learner to match tools with their functions, you could give learners a list of tools, pictures of them, or the actual tools if it’s live training, along with a list of functions, and ask them to match the items. If you want the learner to describe the benefits of a new product, you’d ask them to write or say those benefits. You get the idea. Unfortunately, many tests are poorly constructed and forced into a format that isn’t the best one for testing the skill. If you have the opportunity to select the format that best fits the behavior, take it!
    3. Use different examples for testing than for teaching. This ensures that your learner can apply the skill to a novel situation. For example, if you’re teaching sales consultants to overcome customers’ objections, you would present one series of objections for teaching and a different series for testing (both sets may include objections to price, quality, and so on, but the customer’s wording differs in each set). When the teaching example and the testing example are the same, it’s often referred to as a ‘copy frame,’ which measures rote memorization more than anything else.

The next time you’re designing a training course to fill a ‘can’t do’ performance gap, consider each of these seven tips. Please reach out if you have questions: [email protected]

References:

  • Austin, J. (2000). Performance analysis and performance diagnostics. In J. Austin & J. E. Carr (Eds.), Handbook of applied behavior analysis (pp. 321–349). Reno, NV: Context Press.
  • Binder, C. (2012). The Performance Thinking Network. Retrieved from http://www.sixboxes.com.
  • Bucklin, B.R. (2016). Tools of the OBM trade part 1: Pinpoint and analyze the problem. Behavior Analysis Quarterly, 2(4), 15–19.
  • Bucklin, B.R., Brown, J., & Conard, A.L. (2018). Increasing performance in the automotive industry through context-based learning. Performance Improvement, 57(2), 27–32.
  • Defelice, R. (2018). How long to develop one hour of training: Updated for 2017. Retrieved from https://www.td.org/insights/how-long-does-it-take-to-develop-one-hour-of-training-updated-for-2017
  • Gilbert, T.F. (1978). Human competence: Engineering worthy performance. New York: McGraw-Hill.
  • Mager, R.F. (1997). Preparing instructional objectives: A critical tool in the development of effective instruction. Atlanta: Center for Effective Performance.
  • Mager, R.F., & Pipe, P. (1970). Analyzing performance problems: Or, you really oughta wanna—How to figure out why people aren’t doing what they should be, and what to do about it. Atlanta: Center for Effective Performance.
  • Markle, S.M. (1990). Designs for instructional designers.

 

Barbara Bucklin, PhD is a global learning and performance improvement leader with 20 years of experience who collaborates with her clients to identify performance gaps and recommend solutions that are directly aligned with their core business strategies. She oversees design and development processes for learning (live and virtual), performance-support tools, performance metrics, and a host of innovative blended solutions.

Dr. Bucklin serves as President and is on the Board of Directors for the Organizational Behavior Management Network. She has taught university courses in human performance technology, the psychology of learning, organizational behavior management, and statistical methods. Her research articles have appeared in Performance Improvement Quarterly and the Journal of Organizational Behavior Management. She presents her research and consulting results at international conventions such as the Association for Talent Development (ATD), International Society for Performance Improvement (ISPI), Training Magazine’s Conference and Expo, and the Organizational Behavior Management Network.  You can contact Dr. Bucklin at [email protected]
