12.07.2011

Learning Styles are for the individual, not the group

NOTE: I left this comment on Guy Wallace's eLearn Magazine article, Why Is the Research on Learning Styles Still Being Dismissed by Some Learning Leaders and Practitioners. Since the comment system wiped out most of my formatting, such as comments and quotation marks, I am posting it here for better readability.

Perhaps one of the best papers on learning styles is Coffield, Moseley, Hall, and Ecclestone's Learning styles and pedagogy in post-16 learning: A systematic and critical review (PDF). While the paper does dismiss some types of learning styles and questions how much the recognized learning styles actually matter when it comes to learning, it leaves a lot of questions open.

One of the most profound statements in the paper, at least to me, is (p68):

“just varying delivery style may not be enough and... the unit of analysis must be the individual rather than the group.”

That is, when you analyze a group, the findings often suggest that learning styles are relatively unimportant; however, when you look at an individual, the learning style often distinguishes itself as a key component of being able to learn or not. Thus those who actually deliver the learning process, such as teachers, instructors, or trainers who are responsible for helping others to learn, see these styles and must adjust for them, while those who design for groups or study them see learning styles as relatively unimportant.

In the next paragraph, the paper continues with this statement:

“‘For each research study supporting the principle of matching instructional style and learning style, there is a study rejecting the matching hypothesis’ (2002, 411). Indeed, they found eight studies supporting and eight studies rejecting the 'matching' hypothesis, which is based on the assumption that learning styles, if not a fixed characteristic of the person, are at least relatively stable over time. Kolb's views at least are clear: rather than confining learners to their preferred style, he advocates stretching their learning capabilities in other learning modes.”

While many find this a reason to dismiss learning styles, I find it quite intriguing: why do learning styles play a key role in some situations or environments, but not others? I think part of the answer is within this finding—a study conducted in the U.S. and Israel found that when students' learning styles matched the teaching method, they performed both more effectively and more efficiently. But the authors of the paper seem to dismiss it too readily, as they end the paragraph with this statement—“But even this conclusion needed to be qualified as it applied only to higher-order cognitive outcomes and not to basic knowledge.” (p67)

It seems logical that higher-order cognitive outcomes need more individual support (in this case, matching the learning style to the correct learning strategy) than basic knowledge. Thus in some situations learning styles are important, while in others they are not.

Finally, in the paper's conclusion the authors note (p132-133) that:

“Despite reservations about their model and questionnaire (see Section 6.2), we recognise that Honey and Mumford have been prolific in showing how individuals can be helped to play to their strengths or to develop as all-round learners (or both) by means, for example, of keeping a learning log or of devising personal development plans; they also show how managers can help their staff to learn more effectively.”

Thus the main take-away that I get from the paper is that if you are an instructor, manager, or anyone else who has to help individual learners, then learning styles make sense. On the other hand, if you are an instructional designer or someone who directs her or his efforts at the group, then learning styles are probably not that important. Note that I am both a trainer and a designer, so perhaps this is why my take-away makes sense to me.

11.29.2011

Lingering Doubts About the 70:20:10 Model

Formal, Informal, and Nonformal Learning

In a couple of recent posts, both Ben Betts and Clive Shepherd cast their doubts about the usefulness of the 70-20-10 model and wonder if it's confusing the issue. You can read their posts at The Ubiquity of Informal Learning and Beware who's selling informal learning.

I tend to agree with them, but before I begin I want to add that if you think I'm anti-informal learning, then please note that I wrote a post defending informal learning, and it was Tweeted quite heavily. In addition, I've seen in the comments of these posts and others that if you challenge the usefulness of the 70-20-10 model, then either you don't want to understand it, you clearly don't get it, or you see it as a threat to your job. If this is what you really think, then you may talk-the-talk of informal and social learning, but you walk-the-walk of a lecturer—“it's my way or the highway.” I have no patience with these attitudes because they simply attack people rather than ideas.

While some proponents of the model insist it is non-prescriptive, both Ben Betts and Clive Shepherd saw the model as being “prescriptive,” and so did I. Jay Cross saw it the same way, as he wrote in one of his posts, A model of workplace learning: “The 70-20-10 model is more prescriptive. It builds upon how people internalize and apply what they learn based on how they acquire the knowledge.”

Even the Center for Creative Leadership, where the model was developed, writes that the 70-20-10 model is indeed prescriptive:

“A research-based, time-tested guideline for developing managers says that you need to have three types of experience, using a 70-20-10 ratio: challenging assignments (70 percent), developmental relationships (20 percent) and coursework and training (10 percent).”

The 70-20-10 model is a prescriptive remedy for developing managers into senior and executive positions. Parts, or perhaps all, of the model may be useful for developing other professionals. However, by no means is it a useful model for the daily learning and work flows that take place within organizations, because there it is being applied in an entirely different context than the one it was designed for. When people see numbers applied to a model, they normally assume a couple of things: 1) that it is fact based, and/or 2) that this is the way it is supposed to be.

As Will Thalheimer noted in one of his posts, adding numbers to make a model look more authentic makes it both bogus and dangerous (see People remember 10%, 20%...Oh Really?). I can attest to that because in some of my past posts I wrote that the formal to informal ratio was 30/70. People immediately commented and insisted it was 20/80 or 10/90. They seemed determined to lock the numbers into an exact ratio—NO EXCEPTIONS! The model is even beginning to look like a real ratio that must be adhered to, as it is now written as 70:20:10. Where will it end?

For more on the ratios, see 70-20-10: Is it a Viable Learning Model?

11.01.2011

Yes, you can manage informal learning

Jane Hart recently posted a thought-provoking article on her blog in which she argues that you “can't manage informal learning, you can only manage the social media tools.” In her post she goes to great depth to define some of the various types of learning, such as formal, non-formal, and informal; however, I think the same needs to be done for “manage” in order to get a more accurate picture. Otherwise, we get mental images of Dilbert's pointy-haired boss when someone speaks or writes about management.

People often equate the term “management” with “control,” that is, when you manage something, you are trying to take direct control of it. However, management and control are actually two of four distinct processes for guiding an organization. The other two are leadership and command. While these are separate processes, they need to be blended together to deal with our rapidly changing world. Note that while I define the terms based upon my military experience and training, civilian organizations often use them because the military has the resources to study and research these concepts (and their studies are often done on civilian organizations which makes them valuable to the outside world).

Command and Control

Command is the imparting of a vision to the organization. It does this by formulating a well-thought-out vision and then clearly communicating it. It emphasizes success and reward: the organization has to be successful to survive and in turn reward its members (both intrinsically and extrinsically).

An example in this case would be envisioning a process that helps to increase informal learning and make it more effective. A bad vision would be implementing a social media tool, such as a wiki or Twitter, because tools and technologies are means rather than end-goals.

Visions do not have to come from the top; they can come from anywhere in the organization. Informal leaders are often good sources of visions; however, if the vision requires resources, then they normally need the support of a formal leader.

In contrast, control is the process used to establish and provide structure in order to deal with uncertainties. Visions normally produce change, which in turn produces tension.

For example, “is the tool we provided to increase the effectiveness of informal learning really working?” Thus control tries to measure and evaluate. Inherent in evaluation is efficiency—it tries to make reaching the goal more efficient. This can be good because it can save money and often improves a tool or process. The danger is that if the command process is weak and the control process is strong, efficiency can become the end-goal. That is, it replaces effectiveness with efficiency.

A good example of this is our present economy, which caused many organizations to perform massive layoffs. Now the same organizations are complaining that they can't find qualified workers. Efficiency overrode effectiveness—they failed to realize that they would need a trained workforce in the future.

Leadership and Management

Management's primary focus is on the conceptual side of the business, such as planning, organizing, and budgeting. It does the legwork to make visions reality. Thus it helps to acquire, integrate, and allocate resources to accomplish goals and tasks. This is why you need to manage informal learning and not just the tool itself. The goal is to increase informal learning and make it more effective, not to put a media tool into place. If the tool becomes the goal, then the wrong policies could be put into place, which would decrease its value as an informal learning tool.

Secondly, if the focus is only on the tool, then other options are omitted, such as tearing down cubicles and creating spaces where people can meet.

In contrast, leadership deals with the interpersonal relations such as being a teacher and coach, instilling organizational spirit to win, and serving the organization and workers.

Thus all four processes have their place. When you manage informal learning, you are not trying to control it, but rather planning how you will put the vision in place, budgeting for the required resources, and then organizing the teams so they can make it a reality.

In the August 2010 edition (p. 10) of Chief Learning Officer magazine, Michael Echols notes a survey in which 96 percent of the CEOs surveyed said their number one priority is proof that learning programs are driving their top five business measures, yet only 8 percent are getting it. Thus learning and development leaders are going to start feeling the heat to get some type of evaluation process into place. If informal learning is going to be one of the primary objectives, we are going to have to get real about actually trying to measure it. The excuse that the learners control it so it can't be done is not going to cut it for long.

10.26.2011

Mapping the Performance

Part of the analysis phase is capturing the skills required for performance. While there are several methods for capturing this performance, ISD normally lists only Behavioral Task Analysis. However, many tasks are largely covert and nonprocedural in nature, thus they require a Cognitive Task Analysis. The solution, of course, is to simply plug the desired method or tool into the ISD or ADDIE model, as it is quite dynamic rather than the stale linear model that some believe it to be.

This post discusses four analysis tools that may be plugged into ISD:

  • Behavioral Task Analysis
  • Information Processing Analysis
  • GOMS Analysis
  • Critical Decision Method

Considerations for Analysis Tool Selection

Selecting the correct analysis tool is dependent on the type of actions the worker must perform. This performance is normally composed of two types of actions:

  • Overt - behavioral and observable
  • Covert - mental and not observable

While some tasks are composed of only one or the other, more complex tasks may be composed of both types of actions.

In addition, the selection of the analysis tool is also dependent upon the task steps:

  • Procedural - the steps are performed in order and are normally largely overt actions
  • Rule Based - the steps do not have to be performed in a temporal order and are normally largely covert (cognitive) actions

Procedural Analysis Methods

These methods are used when there is a temporal order of involved steps, thus there is a set procedure for performing the task. Two analysis tools that fall under Procedural Analysis are Behavioral Task Analysis and Information Processing Analysis.

Behavioral Task Analysis

Behavioral Task Analysis is used to capture overt actions by observing and recording an Exemplary Practitioner performing the task. Questions may also be asked to ensure the analyst has fully captured the performance. This is perhaps the easiest method. The output is a list of steps that may also include diagrams or pictures of the desired performance or behavior. A short example is:

  1. Turn on computer and start spreadsheet application.
  2. Load projected sales report spreadsheet template (prosale.exl).
  3. Enter projected sales figures into designated spreadsheet cells.
  4. Run spreadsheet macros.
  5. Save file under new name, “pro***.exl”, with *** being the next sequential number. For example, pro135.exl. Note: Do NOT overwrite the template.
  6. Forward to Planning Manager by email.
  7. Exit application.

Depending upon the learners' prior knowledge and the complexity of the task, the list might also contain substeps as shown in this example:

  1. Turn on computer
  2. Start spreadsheet application
    • 2.1 Click the Start icon
    • 2.2 Scroll through the menu list and select the Excel application
  3. Load projected sales report spreadsheet template (prosale.exl).
    • 3.1 Click on the file menu
    • 3.2 Click on the Open option
    • Etc., Etc.

Information Processing Analysis

This tool is used when there are both overt steps that require a set order and covert steps that require decision making of a yes or no nature. The overt actions are captured by observing and recording an expert performer, while the covert actions are captured by having the expert performer talk through their actions (thinking aloud). Capturing the decision-making process makes it more difficult than the Behavioral Task Analysis, but it is perhaps the most common method, as the majority of tasks require decisions. The output is commonly a flowchart composed of three elements that outline the task steps:

  • Boxes - overt or covert actions
  • Diamonds - decisions
  • Arrows - order of steps

A short example of a flowchart showing a decision a forklift operator must make when moving goods from the receiving dock to a storage area may look like this:

Flow Chart
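
To make the three elements concrete, below is a minimal sketch of such a flow as code. The specific decision shown (checking whether the load is stable) is my own hypothetical stand-in, since the actual chart is an image: boxes become function calls, the diamond becomes an if/else, and the arrows become the order of the statements.

  # Hypothetical sketch: boxes are actions, the diamond is the if/else,
  # and the arrows are simply the order of the statements.
  def pick_up_pallet():        print("Pick up pallet at receiving dock")
  def restack_pallet():        print("Restack unstable load")
  def drive_to_storage_area(): print("Drive to storage area")
  def set_down_pallet():       print("Set down pallet")

  def move_goods_to_storage(load_is_stable):
      pick_up_pallet()            # box: overt action
      if not load_is_stable:      # diamond: yes/no decision
          restack_pallet()        # box: corrective action
      drive_to_storage_area()     # box: overt action
      set_down_pallet()           # box: overt action

  move_goods_to_storage(load_is_stable=False)

The point is not the code itself, but that a procedural task with decisions maps cleanly onto sequence plus branching, which is exactly what the flowchart captures.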

Rule Based Analysis Methods

These methods are used when there is NO temporal order of involved steps, thus there is not a set procedure for performing the task. In addition, most of the task steps are normally of a covert nature. Two tools that fall under Rule Based Analysis are GOMS Analysis and the Critical Decision Method.

GOMS Analysis

The GOMS tool analyzes a task by examining four elements of it (a small sketch of how such a breakdown might be represented follows the list):

  • Goals represent the intention to perform a task and its underlying structures, such as subtasks, cognitive operations, and physical operations. For example, an Instructional Designer may have the task of selecting activities that will give a new sales person the skills to sell a product. That task includes subtasks of analyzing the skills needed, analyzing activities, etc. The ID will use cognitive operations, such as selecting an activity that actually teaches the skill and determining if the chosen media can effectively deliver the activity.
  • Operations represent the physical actions, such as binding a learner's manual or pressing a button. It also includes mental operations, such as retrieving from memory or setting a goal.
  • Methods represent sequences of operations that accomplish goals or objectives. They include high-level methods that break a task into subtasks and low-level methods that are the actions that actually perform the subtasks.
  • Selection Rules represent the context for selecting a particular method. That is, there may be several ways to accomplish a goal or task, but one may be chosen because it should perform this particular task the best (heuristics).
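
As a minimal sketch, the four elements can be captured in a nested data structure. The task and its details below are hypothetical, loosely following the Instructional Designer example above, not an analysis from an actual project.

  # Hypothetical GOMS breakdown: the Goal holds Methods, each Method carries
  # a Selection Rule (when to choose it) and the Operations that perform it.
  goms = {
      "goal": "Select activities that teach a new sales person to sell a product",
      "methods": [
          {"name": "Role-play scenarios",
           "selection_rule": "Choose when the skill is interpersonal",
           "operations": ["draft scenario scripts", "pair learners", "debrief"]},
          {"name": "eLearning module",
           "selection_rule": "Choose when learners are dispersed",
           "operations": ["storyboard content", "build interactions", "pilot test"]},
      ],
  }

  # Walk the structure the way an analyst might read the mind map.
  print(goms["goal"])
  for method in goms["methods"]:
      print(" ", method["name"], "--", method["selection_rule"])
      for operation in method["operations"]:
          print("   -", operation)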

Capturing the performance of a covert task can be done in many ways, and normally several methods are used together. Some of the more common ones are interviews, job shadowing, having an expert performer think aloud, and storytelling.

The GOMS analysis can normally best be represented by a concept or mind map. The Goals are placed on the map first, followed by the Methods and their Selection Rules. The details (Operations) are then listed. Shown below is a partial mind map of selecting an analysis tool:

Mind Map

Depending upon the size and the scope of the task, you might have to link or reference other documents that go into more specifics about the Operations, Methods, and Selection Rules. However, be careful, as one of the common mistakes in GOMS analysis is getting too specific, which results in long and detailed procedural descriptions that are difficult to follow.

Critical Decision Method (CDM)

This can be thought of as a Case Study; however, it also includes a visual reference or map. Just as a case study uses an actual incident to tell a story, CDM is performed by having an expert tell a story about a particular task they performed in the past. For example, an Instructional Designer might tell how he developed a Just-in-Time program for training sales persons to sell a new product, or a fire fighter might tell about the actions she took in fighting a gas station fire. Note that when the person tells the story, the interviewer might have to probe to gather some details. While the output might include a case study, it should normally include a map similar to this:

CDM

The CDM process goes like this (a rough sketch of how the findings might be recorded follows the list):

  • Sweep 1 - Identify an incident. It should come from a decision maker who was involved rather than a witness.
  • Sweep 2 - The expert tells his or her story. Identify the key decision points and when they were made.
  • Sweep 3 - This is where you “deepen” the interview by asking for analogs, mental models, other options, experience or training that was helpful, etc.
  • Sweep 4 - Finally ask “What If” questions, such as “If the situation had been different what would have happened?”, or “What if a novice had been in charge?”
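
As a rough sketch, the output of the four sweeps might be recorded in a structure like the one below. The field names and the fire-fighting details are my own hypothetical illustration, not taken from an actual CDM session.

  # Hypothetical CDM record: one incident (Sweep 1) with decision points
  # placed on a timeline (Sweep 2), deepened with cues (Sweep 3) and
  # challenged with "what if" questions (Sweep 4).
  from dataclasses import dataclass, field

  @dataclass
  class DecisionPoint:
      time: str                                     # when the decision was made (Sweep 2)
      decision: str
      cues: list = field(default_factory=list)      # deepening probes (Sweep 3)
      what_ifs: list = field(default_factory=list)  # "what if" questions (Sweep 4)

  @dataclass
  class CDMRecord:
      incident: str                                 # identified with the decision maker (Sweep 1)
      decision_points: list = field(default_factory=list)

  record = CDMRecord(
      incident="Gas station fire",
      decision_points=[DecisionPoint(
          time="T+2 minutes",
          decision="Evacuate the area before attacking the fire",
          cues=["smoke color", "proximity of the pumps"],
          what_ifs=["What if a novice had been in charge?"])])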

Key Points

The final outputs, such as a list of task steps, flowchart, mind map, or CDM chart, prove not only invaluable to the design team, but may also make excellent performance or learning aids. Just as it is important to use the correct analysis tool, it is just as important to represent the findings in a manner that others can understand.

The final output might not be just one chart, but rather a combination of them, as shown in the picture at the top of this post. Also, the chart does not have to be combined onto one page if it gets too cluttered, but may instead be composed of a collection of documents that are linked together electronically or referenced if they are printed. In addition, other types of charts and visual representations may also be used.

More detailed information can be found in two books: van Merriënboer's Training Complex Cognitive Skills: A Four-Component Instructional Design Model for Technical Training (1997) and Crandall, Klein, and Hoffman's Working Minds: A Practitioner's Guide to Cognitive Task Analysis (2006). Van Merriënboer discusses Behavioral Task Analysis, Information Processing Analysis, and GOMS Analysis (plus other topics related to complex tasks), while Crandall, Klein, and Hoffman discuss Concept Mapping and the Critical Decision Method (plus other topics related to Cognitive Task Analysis).

Both books are excellent references and provide different information about Cognitive Task Analysis; however, van Merriënboer's is now out of print, so even used copies may be a bit on the pricey side.

10.17.2011

ADDIE Does More Than Classrooms

Site Selection Tool

One of the misconceptions of ISD is that it was created only to build classroom training environments. Yet one of the old Army manuals (1984) used for training ISD shows the above options for training. It also notes that options 1, 2, or 3 should be used in lieu of classroom training if they can adequately do the job. Thus, classroom training should normally be the last option if there is more than one viable option, because classroom training is normally one of the more expensive options.

The five options are shown below with a few notes about them:

  1. Job Performance Aid (JPA) - this would include today's EPSS (Electronic Performance Support System)
  2. Self-Teaching Exportable Package - the elearning we know today would fall under this category
  3. Formal On-the-Job-Training (OJT)
  4. Installation Support School (on or near the employees' workplace) - this would be formal classroom training even though the training may be conducted outside in the field
  5. Resident Instruction (away from the employees' workplace, where travel and living expenses would have to be considered) - this would also be formal classroom training.

Just like training designed for the classroom, the other options also need to follow the five phases of ISD to ensure they do what they are supposed to do. For example, even a simple JPA requires:

  • Analyzing the various settings and media to determine if it is the most appropriate method.
  • Designing it so it performs as intended.
  • Developing it into a real product.
  • Delivering (implementing) it to the workers who need it.
  • Evaluating it to ensure it does the job it was intended to do. This also shows the business units that you care about the solutions you deliver (if it ain't worth following up on then it probably ain't worth doing), and you might learn something. Note that the evaluation may be as simple as checking with a couple of managers and some of the employees to ensure it is doing what it is supposed to do.

The manual also gives some guidelines for selecting the correct training setting (a simplified sketch of these rules as code follows the list):

  • Job Performance Aid
    • close supervision not required
    • task follows a set procedure
    • JPA can be followed while performing the task
    • do not use if:
      • consequence of inadequate performance is high
      • employee lacks prerequisite skill
      • task requires high psychomotor skills
  • Self-Teaching Exportable Package
    • close supervision not required
    • task can be self-learned by individual or groups
    • material required for training can be adequately packaged
    • do not use if:
      • task failure would result in injury or damage
      • special facilities or equipment required
  • Formal On-the-Job-Training
    • close supervision is required
    • task can be learned by individuals or groups in the workplace
    • task decay rate is very high
    • do not use if:
      • sufficient equipment is not available for learners to practice on
      • workplace cannot absorb the learners adequately
      • training would be disruptive to normal operations
  • Classroom
    • large group must be taught the same thing
    • task difficulty requires a high state of training (task is difficult and requires time to acquire skills)
    • learner interaction is required (such as team training)
    • material required for training cannot economically be placed in the field
    • essential the employee be able to perform upon job entry (high consequence if employees are inadequate performers)
    • do not use if:
      • task may be adequately trained elsewhere
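
Below is a heavily simplified sketch of how a few of these guidelines might be encoded as a decision helper. The attribute names and the rule subset are my own simplification for illustration, not a faithful encoding of the manual's fuller criteria.

  # Hypothetical decision helper based on a subset of the guidelines above;
  # a real selection would weigh all of the manual's criteria.
  def suggest_setting(close_supervision, set_procedure,
                      high_consequence_of_error, needs_learner_interaction):
      if not close_supervision and set_procedure and not high_consequence_of_error:
          return "Job Performance Aid"
      if not close_supervision and not high_consequence_of_error:
          return "Self-Teaching Exportable Package"
      if close_supervision and not needs_learner_interaction:
          return "Formal On-the-Job Training"
      return "Classroom"

  print(suggest_setting(close_supervision=False, set_procedure=True,
                        high_consequence_of_error=False,
                        needs_learner_interaction=False))
  # -> Job Performance Aid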

In addition, think blended learning. When I was first trained in ISD we called it BoB (Best of Breed). Blended learning solutions are normally more efficient and effective when designed correctly, as they inherit the best of each setting. And do not think of blended as just Brick and Click, but rather any combination of the above, plus more informal options, such as mentoring and social learning media.

Related Posts:

ADDIE Backwards Planning Model

ADDIE and the 5 Rules of Zen

10.12.2011

ID is not ISD

One of the trends in the learning industry is proclaiming that a new Instructional Design (ID) model, such as rapid development prototyping, needs to replace Instructional System Design (ISD) because the new model provides more benefits: it's newer, dynamic, and faster. Yet ID models differ from ISD models, thus it's sort of like saying that a new boat model is going to replace the automobile—yes, they are both transportation devices, but they differ in their uses!

ID models differ from ISD models in that ISD models have a broad scope and typically divide the instructional design process into five phases (van Merriënboer, 1997):

  • Analysis
  • Design (sometimes combined with Development)
  • Development
  • Implementation or Delivery
  • Evaluation

Since ISD models cover a broad spectrum they normally do not go into much detail in the design phase. This is where ID models excel. Since they are less broad in nature and mostly focus on design, they normally go into much more detail for the design phase.

Two popular ISD models are ADDIE and the Dick and Carey Model. ISD can also be extended by using Frog Design's model to solve wicked or complex problems, as it aligns with ADDIE.

Some popular ID models include Rapid Instructional Design (RID), Gagne's Nine Steps of Instruction, John Keller's ARCS model, Merrill's Component Display Theory, and van Merriënboer's 4C/ID Model.

ISD can be thought of more as a project management tool, while ID models are specialized tools used to enhance the learning process.

Omitting ISD and relying strictly on an ID model often omits critical parts of the design process, such as analysis and evaluation. Unless you design for certain groups in an organization or industry in which you already know your learners, analysis is important to determine the skill level at which the learning program is aimed. In addition, managers will often identify any performance problem as a training problem, thus the designer needs to ensure it is indeed a training problem rather than a bad process or a motivation problem.

Evaluation is not only important to determine if the program is meeting the needs of the organization, but also as a learning tool for the designers themselves.

The best way to use ID models is to plug them into the ISD model as they are needed. For example:

Plug-and-play capabilities of ADDIE (ISD)

This method allows you to gain the benefit of the ID model that will best suit your needs for enhancing your learning program, while ensuring that your learning program will do what it is supposed to do.

Reference

van Merriënboer, J. J. G. (1997). Training Complex Cognitive Skills: A Four-Component Instructional Design Model for Technical Training. Englewood Cliffs, New Jersey: Educational Technology Publications.

8.10.2011

A Look Behind Robert Gagné's Nine Steps of Instruction

In her post, Questioning Gagné and Bloom's Relevance, Christy Tucker describes how we often get caught up in theories without really looking at whether the research supports those theories. In this post, I would like to point out some of the research and newer findings.

While some think the Nine Steps are ironclad rules, it has been noted at least since 1977 (Good & Brophy, p. 200) that the nine steps are “general considerations to be taken into account when designing instruction. Although some steps might need to be rearranged (or might be unnecessary) for certain types of lessons, the general set of considerations provide a good checklist of key design steps.”

1. Gain attention

In the military we called this an interest device—a story or some other vehicle for capturing the learners' attention and helping them see the importance of learning the tasks at hand. For example, when I was training loading and unloading trailers with a forklift, I would search the OSHA reports for the latest incident report on a forklift operator who was decapitated after sticking his or her head outside the protective structure of the forklift cage to get a better view when entering a trailer, catching it between the bars supporting the forklift's protective top and the side of the trailer (it happens more often than we care to think about). This would become the basis for a story on why they needed to pay attention: the forklift may be small, but it weighs several tons and can easily slice off a limb or another body part if not treated with proper respect.

Wick, Pollock, Jefferson, and Flanagan (2006) describe how research supports extending the interest device into the workplace in order to increase performance when the learners apply their new learning to the job. This is accomplished by having the learners and their managers discuss what they need to learn and be able to perform when they finish the training. This preclass activity ends in a mutual contract between the learners and managers on what is expected to be achieved from the learning activities (this is also closely related to the next step).

2. Tell learners the learning objective

Marzano (1998, p.94) reported an effect size of 0.97 (which indicates that achievement can be raised by 34 percentile points) when goal specification is used. When students have some control over the learning outcomes, there is an effect size of 1.21 (39 percentile points). This is the beauty of using Wick, Pollock, Jefferson, and Flanagan's mutual contract.
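
As a quick check of the arithmetic, the usual interpretation of these figures is that an effect size d moves the average learner from the 50th percentile to the standard normal curve's percentile at d. The small sketch below (my own illustration, not Marzano's calculation) reproduces the conversion and lands within a rounding step of the figures cited here and in the note-taking results later in this post.

  from math import erf, sqrt

  def percentile_gain(d):
      """Percentile points gained over the 50th percentile for effect size d."""
      phi = 0.5 * (1 + erf(d / sqrt(2)))  # standard normal CDF at d
      return phi * 100 - 50

  for d in (0.97, 0.99, 1.21, 1.48):
      print(f"effect size {d}: about {percentile_gain(d):.1f} percentile points")
  # 0.97 -> 33.4, 0.99 -> 33.9, 1.21 -> 38.7, 1.48 -> 43.1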

Of course, the problem that some trainers and instructional designers run into is telling the learners the Learning Objectives word for word, rather than breaking them down into less formal statements.

3. Stimulate recall

This builds on prior learning and forms the basis of scaffolding: 1) building on what the learners know, 2) adding more details, hints, information, concepts, feedback, etc., and 3) then allowing the learners to perform on their own. Allan Collins, John Seely Brown, and Ann Holum (1991) note that scaffolding is the support the master gives apprentices in carrying out a task. This can range from doing almost the entire task for them to giving occasional hints as to what to do next. Fading is the notion of slowly removing the support, giving the apprentice more and more responsibility.

Part of stimulating recall is having the learners take notes and draw mind maps. Learning is enhanced by encouraging the use of graphic representations when taking notes (mind or concept maps). While normal note taking has an overall effect size of .99, indicating a percentile gain of 34 points, graphic representations produced a percentile gain in achievement of 39 points (Marzano, 1998). One of the most effective of these techniques is semantic mapping (Toms-Bronosky, 1980), with an effect size of 1.48 (n=1), indicating a percentile gain of 43 points. With this technique, the learner represents the key ideas in a lesson as nodes (circles) with spokes depicting key details emanating from each node.

4. Present the stimulus, content

Implement (nuff said)

5. Provide guidance, relevance, and organization

Kind of redundant as it relates to the other steps.

6. Elicit the learning by demonstrating it (modeling and observational learning)

Albert Bandura noted that observational learning may or may not involve imitation. For example, if you see someone driving in front of you hit a pothole and you then swerve to miss it, you learned from observational learning, not imitation (if you had learned from imitation, you would also have hit the pothole). What you learned was the information you processed cognitively and then acted upon. Observational learning is much more complex than simple imitation. Bandura's theory is often referred to as social learning theory, as it emphasizes the role of vicarious experience (observation) of people impacting people (models). Modeling has several effects on learners:

  • Acquisition - New responses are learned by observing the model.
  • Inhibition - A response that otherwise may be made is changed when the observer sees a model being punished.
  • Disinhibition - A reduction in fear by observing a model's behavior go unpunished in a feared activity.
  • Facilitation - A model elicits from an observer a response that has already been learned.
  • Creativity - Observing several models performing and then adapting a combination of characteristics or styles.

7. Provide feedback on performance

As Christy's post noted, performance and feedback are good.

8. Assess performance, give feedback and reinforcement

Related to above.

9. Enhance retention and transfer to other contexts

We often think of transfer of learning as just being able to apply the new skills and knowledge to the job, but it actually goes beyond that. Transfer of learning is a phenomenon of learning more quickly and developing a deeper understanding of the task if we bring some knowledge or skills from previous learning. Therefore, to produce positive transfer of learning, we need to practice under a variety of conditions. For more information, see Transfer of Learning.

References

Collins, A., Brown, J. S., & Holum, A. (1991). Cognitive apprenticeship: Making thinking visible. American Educator, 6-46.

Good, T. & Brophy, J. (1990). Educational Psychology: A realistic approach. New York: Holt, Rinehart, & Winston.

Marzano, Robert J. (1998). A Theory-Based Meta-Analysis of Research on Instruction. Aurora, Colorado: Mid-continent Regional Educational Laboratory. Retrieved May 2, 2000 from http://www.mcrel.org/products/learning/meta.pdf

Wick, C., Pollock, R., Jefferson, A., & Flanagan, R. (2006). Six Disciplines of Breakthrough Learning: How to Turn Training and Development Into Business Results. San Francisco: Pfeiffer.

7.07.2011

Andragogy vs. Pedagogy

In his post, Learning is learning, Steve Wheeler asks, “So does the concept of Andragogy add any value to our understanding of learning? For me, the answer is no.”

I would have to disagree because the concept of andragogy has actually added great value to our understanding of learning.

Pedagogy is derived from the Greek words paid, meaning “child,” and agogus, meaning “leader of.” In the pedagogical classroom, the teachers are responsible for all decisions about learning: they decide what is to be learned, how it is to be learned, when it should be learned, and if it has been learned. This meant the learners were pretty much in the role of passive, dependent recipients of the teachers' transmissions. When our public schools were first established, they were based on this pedagogical model.

When adult education was later established, this was the only model at the time, so our profession was also based on it. This of course led to high dropout rates, low motivation, and poor performance. In 1926, Eduard C. Lindeman's book, The Meaning of Adult Education, captured the essence of adult learning:

In this process the teacher finds a new function. He is no longer the oracle who speaks from the platform of authority, but rather the guide, the pointer-out who also participates in learning in proportion to the vitality and relevance of his facts and experiences. In short, my conception of adult education is this: a cooperative venture in nonauthoritarian, informal learning, the chief purpose of which is to discover the meaning of experience; a quest of the mind which digs down to the roots of the preconceptions which formulate our conduct; a technique of learning for adults that makes education coterminous with life and hence elevates living itself to the level of adventurous experiment. - quoted in Nadler, 1984, p.6.4

In the 1950s, European educators started using the term “andragogy,” from the Greek words aner, meaning “adult,” and agogus, “leader of”—the art and science of helping adults learn. They wanted to be able to discuss the growing body of knowledge about adult learners in parallel with pedagogy.

Andragogy is often criticized because, as we now know, it also applies to younger learners; however, the people behind the theories at the time were trainers of adults rather than educators in the school system, thus they applied their theories to the section of the population they knew best. Because of their work, they pioneered the way for the world of pedagogy to advance itself from being almost entirely passive-based to a more experience-based process of learning.

So yes, Knowles intended for his concept of andragogy to be different from pedagogy, because pedagogy at the time was extremely passive-based. But just because pedagogy is finally catching up to andragogy is not a strong enough reason to drop the concept from our terminology. I believe we should be embracing the term because of its rich history and the way it pioneered our present concept of learning.

Reference

Nadler, Leonard (1984). The Handbook of Human Resource Development. New York: John Wiley & Sons

6.27.2011

Marching Backwards into the Future

A recent post by Bersin & Associates notes, “Approximately three-quarters of employers globally cite a lack of experience, skills or knowledge as the primary reason for the difficulty filling positions. However, only one in five employers is concentrating on training and development to fill the gap. A mere 6% of employers are working more closely with educational institutions to create curriculums that close knowledge gaps.”

Doh! These same employers slashed their workforces for a total loss of 8,700,000 jobs since the recession started in December 2007. Only 4,444,000 jobs have been added back since then, which leaves a net loss of 4,256,000 jobs.

What were they thinking? That they could slash their “most valuable asset” and when the economy picks back up, find the knowledge and skills they require? Yep—short term thinking at its best—and of course it backfired in this complicated/complex work environment.

In a prior post I wrote about the three most important words that managers in an organization must know when it comes to learning—training, development, and education.

Training is learning that is provided in order to improve performance on the present job, which means it's oriented towards the present. What these employers should have been thinking about is the future—what skills and knowledge are we going to need when the economy picks back up? That means they should have implemented development and education processes.

Development is training people to acquire new horizons, technologies, or viewpoints. It enables leaders to guide their organizations toward new expectations by being proactive rather than reactive. It enables workers to create better products, faster services, and more competitive organizations. It is learning for the growth of the individual, but not related to a specific present or future job.

Education in organizations differs from education in schools, so don't let the following definition confuse you. Education is training people to do a different job. It is often given to people who have been identified as being promotable, who are being considered for a new job either lateral or upward, or to increase their potential.

The past went that-a-way. When faced with a totally new situation, we tend always to attach ourselves to the objects, to the flavor of the most recent past. We look at the present through a rear-view mirror. We march backwards into the future. - Marshall McLuhan

As we craft our learning processes we must remember the three most important words and ensure that our clients/customers also understand them. Failure to do so will again result in marching backwards into the future.

6.07.2011

Five Years later: A Review of Kirschner, Sweller and Clark's Why Minimal Guidance during Instruction Does Not Work

After having a short discussion with Guy Wallace on his blog, I decided to do a review of Kirschner, Sweller, and Clark's Why Minimal Guidance during Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching, in which they postulate that students who learn in classrooms with pure-discovery methods and minimal feedback often become lost and frustrated, and that their confusion can lead to misconceptions.

The paper caused a bit of a stir in the learning and training community when it was published five years ago, especially among those who lean towards a more constructivist approach. However, while the authors' critics did raise some good points, the paper is a good reminder that learning and training professionals often carry new ideas and technologies to the extreme. For example:

  • We had the visual movement from about 1900 to 1950, which brought us Dale's Cone of Experience. And of course someone had to add some bogus percentages to it to make it more “official.”
  • When VCRs arrived we made training tapes of everything… even if it did not make sense.
  • eLearning was supposed to kill the classroom.
  • Formal and informal learning were supposed to be at odds with each other, even though each hour of formal learning spills over into four hours of informal learning.
  • All learning is social! Uhh… no. While the majority of learning may be social we often still learn things on our own.

Thus Kirschner, Sweller, and Clark's paper is an important reminder for us not to carry Problem Based Learning (PBL) to its extreme. That is, while it has its strengths, learners often need a more direct approach in order to build a solid foundation before being presented with PBL.

With that being said, we do need to take a closer look at the paper. For those who are interested, there is a list of papers that discuss the Direct Instruction versus Constructivism controversy (located at the bottom of the page).

The Title and Paper Give Little Respect to the Constructivist Approach

With the title blaring “Why Minimal Guidance during Instruction Does Not Work” rather than “Why Minimal Guidance during Instruction Does Not Work for Novice Learners,” the authors almost seem to ignore that PBL is a necessity for promoting deeper levels of understanding. They do pay some respect to constructivism, such as:

Higher aptitude students who chose highly structured approaches tended to like them but achieve at a lower level than with less structured versions

Certain aspects of the PBL model should be tailored to the developmental level of the learners… there may be a place for direct instruction on a just-in-time basis. In other words, as students are grappling with a problem and confronted with the need for particular kinds of knowledge, a lecture at the right time may be beneficial.

However, they end up admonishing constructivists:

According to Kyle (1980), scientific inquiry is a systematic and investigative performance ability incorporating unrestrained thinking capabilities after a person has acquired a broad, critical knowledge of the particular subject matter through formal teaching processes. It may not be equated with investigative methods of science teaching, self-instructional teaching techniques and/or open-ended teaching techniques. Educators who confuse the two are guilty of the improper use of inquiry as a paradigm on which to base an instructional strategy.

But it seems, at least to me, that they may be doing the same, only at the opposite end of the continuum. For example, they seem to treat their theories as laws, yet…

Cognitive Load Theory Coming Under Withering Attacks

The paper relies heavily on Cognitive Load Theory, yet we have to realize that it is still a theory rather than a law. Will Thalheimer lists several papers on his site that raise concerns about Cognitive Load Theory. For example, even though we know that working memory can only hold about seven chunks (which may actually be four, give or take one), using the old KISS (Keep it Simple, Stupid) principle can be just as effective, because trying to count the number of chunks can be quite difficult, if not almost impossible. For example, how many chunks are in René Descartes' statement, “I think, therefore I am”?

Thus, both the authors and the constructivism movement are guilty of jumping on theories before they are fully understood. But why do we do this? Joel Michael writes in Advances in Physiology Education:

“…it is important to recognize that educational research is difficult to do; this has been cogently highlighted by Berliner (8) in "Educational research: the hardest science of them all." Berliner points out that unlike a physics experiment, in which it is possible to readily distinguish between the independent and dependent variables, and also possible to isolate and control all of the independent variables, in educational experiments all of this is problematic. Researchers may not agree on which variable is the dependent variable of greatest interest or importance. There may be disagreements about which independent variable(s) are to be manipulated. There may be disagreements about how to measure any of the relevant variables. And, finally, it may be extremely difficult, or even impossible, to isolate and manipulate all the variables suspected of being involved in the phenomena being studied.”

Rather than waiting for eons to pass before all the research is available, we (the learning, training, and educational community) often jump into a new theory because we simply do not want to wait until we are dead and buried before we can fix and/or improve our methodology. With that in mind…

Evidence for Constructivism

Joel Michael continues his case for active learning with these two studies:

1. Support for discovery learning is provided by a study in which students engaged in a course that incorporated some discovery learning exercises were tested, and their performance on questions related to topics learned through discovery learning was compared with their performance on questions related to topics learned in lecture (Wilke, Straits, 2001). The authors concluded that performance was better on those topics learned through discovery learning.

2. Burrowes compared learning outcomes in two sections of the same course taught by the same teacher. One section was taught in the traditional teacher-centered manner (control group of 100 students), whereas the other section was taught in a manner that was based on constructivist ideas (experimental group of 104 students). The results of this experiment were striking: the mean exam scores of the experimental group were significantly higher than those of the control group, and students in the experimental group did better on questions that specifically tested their ability to “think like a scientist.” Reference: Burrowes PA. Lord's constructivist model put to a test. Am Biol Teacher 65: 491–502, 2003.

While you can find plenty of other research findings on constructivist methods, the idea that you can teach learners to “think like scientists” is fascinating, because problem solving skills are extremely hard to train. That is, conduct a problem solving course in an organizational setting and you will more than likely get little or no results. It's almost as if the process must be embodied within the discipline.

Embodied Cognition

On the Brain Science Podcast, Ginger Campbell discusses Embodied Cognition with Lawrence Shapiro (both the podcast and transcript can be found in the link). They note that in cognitive science, the brain is normally studied in isolation from the world and from the body. Embodied cognition, in contrast, does not imagine that the brain can be isolated from the body and the environment, but thinks of the body as in some sense shaping, constraining, or involved in the very processing of the kinds of information that an organism needs to interact successfully with the world.

In the podcast, Dr. Shapiro talks about fascinating work on the use of gesture. He notes that boys perform better than girls on certain spatial reasoning tasks. When psychologists studied this, they noticed something interesting—boys rely on gestures a lot more than girls do when solving spatial reasoning tasks. Boys use gestures to work out the problem at the same time they're talking through the problem. And often, the gestures don't synchronize with the verbalizing. It's as if they have two different systems working at the same time—a gesture system and a verbalization system. Gestures seem to be a part of the process of figuring out these spatial reasoning tasks.

They also discuss the study of kittens moving around their environments by pulling carriages with other kittens in them. The kittens in the carriages presumably see everything that the kittens pulling them see, but because they don't actually employ their motor systems to move themselves around the environment, their perceptual systems don't develop properly. The idea seems to be that part of what's necessary for perception is actual exploration of the environment, not just being a passive recipient like the kittens conveyed in the carriages.

Thus rather than focusing on conceptually unrelated pieces of information, such as practicing first and learning problem solving second, perhaps we should be focusing our learning processes on entire ideas and concepts whenever possible.

I would be interested in your thoughts on the subject—leave a comment, create a blog post, or reach me through Twitter (I'm @iOPT).

6.02.2011

Training at its Most Basic is a Positive Impact Caused by Learning

The WSJ recently ran an informative article, Lessons Learned, that discusses the need to create a workplace environment that actively encourages people to change. They start their article with the statement, “With some studies suggesting that just 10% to 40% of training is ever used on the job, it is clear that a big chunk of the tens of billions of dollars organizations spend annually on staff development is going down the drain.” This is actually a myth, as some studies suggest the transfer rate is around 60% and the study the WSJ used is based on extremely faulty research (see Myth - 10% of Training Transfers to the Job). However, 60% is still too low a transfer rate, thus we know we must use a better method for designing our learning processes.

To ROI or not to ROI?

Looking Backwards in Time

The article notes that an effective post-training follow-up activity is the performance assessment—“When employees know that they are going to be observed and given feedback on their performance, the motivation to use newly learned skills and knowledge increases.” This means you must know what you are going to assess before you design the training, which in turn means the learning process must be based on a goal, arrived at through backwards planning, that will have a positive impact upon the organization.

Does this mean you need to perform an ROI? No. Learning and Training departments normally only need to provide an ROI if they are the initiators of a learning or training process. For example, if you presently outsource your Microsoft Office training program and you want to bring it in-house because you believe you can do it cheaper and better, then you should provide an ROI to show this to be true. If, on the other hand, you will provide the training for the users of a new computer system, then it is up to the original initiators, normally the IT department, to provide the ROI.

In other cases... it depends. For example, if a manager comes to you with a request for training that will eliminate a problem in her department and you determine that training is indeed the answer, then you will have to decide if an ROI is needed or not. In many cases the manager simply wants the problem to go away. And yes, the training may be more expensive than the problem itself, but it frees the manager to help grow the organization rather than spend her time putting out fires. You are really investing in her—by eliminating the problem you allow her to spend more time growing the organization, thus the training will pay off in the long run. One rule of thumb is to provide an ROI whenever possible to show that your efforts have a positive financial impact on the organization, but always keep in mind that even if the cost savings are not there in the short term, they could pay off in the future.

Getting the Impact out of Training with Agile Design

So while you may or may not have an ROI, you still need a positive result or impact. Nadler defines “training” as learning that is provided in order to improve performance on the present job (1984). Most definitions closely follow Nadler on these two points: 1) training always involves learning, and 2) performance is improved. Thus training is basically a positive impact caused by learning. If it does not follow these two points, then you are doing something besides training. That doesn't mean it's wrong or right, but it is simply not training.

While the author of the WSJ article brings the learners more into the learning and training process, there is still another step to go—include them in the design. Rittel (1972) noted that the best experts with the best knowledge for solving wicked problems are often those affected by the solution; in this case, the learners themselves. Yet the only time we normally bring them in is to be guinea pigs for testing our learning process. While you might not be able to bring their entire population in at the planning stage, we do need to bring in enough learners to actually represent the population.

This is the heart of Agile Design. The learners are the real stakeholders; even if you or the managers don't agree with what they are saying, you need to listen, guide, and act on their needs and perspectives so that they take ownership of the learning and performance solution. In addition, they gain metalearning and metacognitive skills.

This is true “Learner Design.” Simply designing a learning process for the learners is andragogy or pedagogy design. A truly learner-designed process involves the learners. Change works best when the people it affects are involved in the process. Learning is no different if you want to change performance on the job. Involve the learners so that you not only make them part of the change process, but also, as Rittel noted, gain access to the experts who can provide good advice.

References

Nadler, Leonard (1984). The Handbook of Human Resource Development. New York: John Wiley & Sons.

Rittel, H. (1972). On the planning crisis: systems analysis of the “first and second generation.” Bedriftsokonomen. No. 8, pp.390-396.

5.23.2011

Creating and Evaluating Informal & Social Learning Processes in a Call Center

I recently received this comment on my post, Using Kirkpatrick's Four Levels to Create and Evaluate Informal & Social Learning Processes:

“What do you do when the "learners" are new hires? And the "environment" is a real-time call center?”

Using the same process as in the last post (shown below), we start off with the main goal or objective:

Kirkpatrick's Backwards Planning and Evaluation Model

1. Results or Impact - What is our goal?
2. Performance - What must the performers do to achieve the goal?
3. Learning - What must they learn to be able to perform?
4. Reaction - What needs to be done to engage the learners/performers?

1. What is our Goal?

Training new hires is normally performed because proficient employees cannot be recruited. However, using training as the only performance solution is rarely a good choice, as it is normally one of the more costly and time-consuming solutions when done correctly. Thus, when formulating your goal, don't think of training as the solution or goal, but rather ask what benefits you are looking for. For example:

Our goal is to convert interested callers into extremely satisfied and delighted customers. We will achieve this by providing timely, accurate, and professional service at every customer contact and by answering questions and inquiries promptly.

The benefit of our goal is to maintain/increase customer satisfaction, which will lead to higher sales.

2. What must the performers do to achieve the goal?

While there are several tasks the employees should be able to perform, a few of them that would lead to higher sales are:

  • Greet customers in a timely, cheerful and professional manner
    • The benefit is to jump start the customers' experience from the moment they call
  • Quickly and accurately find product information
    • The benefit is to show our customers that we are professionals who will take care of their needs
  • Understand the culture, mission and policies of the company in order to make wise and timely decisions
    • The benefit is to not only provide customers with our goods and services, but to also show them we can aid them with difficult problems

3. What must they learn to be able to perform?

In this example we need a Learning Environment (not just training) that will enable new hires to perform correctly in our call center so that it can perform its mission:

[Image: Star Diagram of the Continua of Learning]

Note: for more information on the diagram, see the post Star Diagram of the Continua of Learning (1.02.2011) further down this page.

A number of experiences and activities are then designed for the learning environment that will enable the Customer Service Representatives to perform the three tasks:

Task One: Greet customers in a cheerful and professional manner

Social Learning: The learners will discuss with each other what makes a great Customer Service Representative.

Role Play: This activity will be performed in the classroom, where the learners will take turns playing customers and Customer Service Representatives. When a learner is role playing the customer, he or she will be given a number of scenarios that range from a happy to a dissatisfied customer (some sample role-playing activities may be found here).

Task Two: Quickly find product information

eLearning: Explains the company's database and how to find product and service information.

eLearning Branching Scenarios: This course will take the learners through a number of scenarios for finding information that a customer requests.

Informal Learning: The learners are coupled with experienced employees in order to gain real experience.

Social Learning with Social Media: Employees are connected to a micro-blogging service (e.g. Yammer or Twitter) so that they may ask for and pass on information through a social network.

Task Three: Understand the culture, mission, and policies of the company in order to make wise and timely decisions

eLearning Branching Scenarios: This course is an extension of the previous eLearning Branching Scenario: once a learner finds the information that a customer requests, he or she then has to work through various scenarios to help the customer make a decision.

Informal Learning: The learners are coupled with experienced employees in order to gain real experience.

Social Learning with Social Media: Employees are connected to a micro-blogging service so that they may ask for and pass on decision making techniques through a social network.

Social Learning with Social Media: Employees are provided a blogging platform that will enable them to find and pass on decision making techniques that may require more detailed information than the micro-blogging service allows — the micro-blogging service is for quick and short bursts of information while the blog is for more complex and detailed information.

Wiki: For storing and retrieving lessons learned.

Note that the learning platform may start with traditional classroom training, but it is blended with eLearning and informal learning. In addition, it is transformed from an event into a true learning process in that it is implemented into the learners' daily workflow so that they can continue not only to learn, but to help others learn.

4. What needs to be done to engage the learners/performers?

Before the learners enter the learning environment, each learner's manager will ensure that the learner understands the importance of the training they are about to receive. In addition, the learners and managers will set goals and discuss potential problems. After the initial eLearning and classroom programs are completed, the manager will follow up with the learners, assign them coaches/mentors, and follow their progress on a weekly basis.

Evaluating the Learning Platform

Since we know precisely what each part of the Learning Platform was designed to perform, our task of evaluating the program becomes much easier:

Kirkpatrick's Backwards Planning and Evaluation Model

1. Results or Impact (What is our goal?)

Did we achieve higher customer satisfaction? This can also be tied to a hard ROI, such as an increase in sales.

2. Performance (What must the performers do to achieve the goal?)

Can the employees now perform as expected?

3. Learning (What must they learn to be able to perform?)

This is assessed in the eLearning programs and in discussions with the experienced employees involved in the informal learning sessions.

4. Reaction (What needs to be done to engage the learners/performers?)

The learners' managers can provide input on the level of the learners' reaction to, and engagement with, the learning platform.

This backwards planning process can help you pinpoint problems. For example, let's say that you do not get an increase in sales. That means you go back one step and see if the learners are performing as desired. If they are, then either your initial premise was wrong (greater customer satisfaction does not lead to higher sales) or something else is preventing it, such as your product being priced too high.

On the other hand, if they are not performing as desired, then you have to evaluate the working environment to determine if something is preventing the learners from using their skills, such as processes that are counter-productive to great customer service. If you determine that they should be able to perform, then evaluate the learners to see if they can perform or if something in the learning process is preventing them from learning, such as not enough practice time.

If the learning process is sound, then go back one more step and determine if the learners are engaged. That is, do they have the basic skills that will allow them to master the learning program, and do they have the motivation and desire to complete it (maybe they see it as a waste of time)?
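The walk-back described above can be pictured as a simple decision chain. The sketch below only illustrates that logic; the boolean flags are hypothetical stand-ins for the real measurements:

    # Hypothetical sketch of the walk-back diagnosis described above.
    def diagnose(results_met, performing, environment_ok,
                 learning_sound, engaged):
        if results_met:
            return "Goal achieved."
        if performing:
            # Performers are fine, so the premise itself is suspect.
            return "Premise wrong or something else interferes (e.g. pricing)."
        if not environment_ok:
            return "Work environment is blocking the skills."
        if not learning_sound:
            return "Learning process is at fault (e.g. too little practice)."
        if not engaged:
            return "Engagement problem: basic skills or motivation missing."
        return "Re-examine the measurements themselves."

    # Example: sales flat, performance lagging, environment and learning fine.
    print(diagnose(False, False, True, True, False))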

2.22.2011

Using Kirkpatrick's Four Levels to Create and Evaluate Informal & Social Learning Processes

In my last post, The Tools of Our Craft, I wrote how the Four Level Evaluation model is best used by flipping it upside down and that it can be used to evaluate informal and social learning. In this post, I want to expand on the second point—evaluating informal and social learning.

Backwards Planning and Evaluation Model

1. Results or Impact - What is our goal?
2. Performance - What must the performers do to achieve the goal?
3. Learning - What must they learn to be able to perform?
4. Reaction - What needs to be done to engage the learners/performers?

Inherent in the idea of evaluation is “value.” That is, when we evaluate something we are trying to make a judgment about its worth, and the measurements we obtain give us information on which to base that judgment. This is the real value of Kirkpatrick's Four Level Evaluation model: it allows us to take a number of measurements throughout the life span of the learning process in order to place a value on it, making it a process-based solution rather than an event-based solution.

Each stakeholder will normally use only a couple of the levels when making an evaluation, except for the Learning Department. For example, top executives are normally interested only in the first one, Results, as it directly affects the bottom line. Some are also interested in the last one, Reaction, because of its engagement aspect: are the employees engaged in their jobs? Managers and supervisors are normally most interested in the top two levels, Results and Performance, and somewhat in the last one, Reaction. The Learning Department, however, needs all four to properly deliver and evaluate the learning process.

Note that this post uses an actual problem whose solution is based on informal and social learning. I wrote about part of it in Strategies for Creating Informal Learning Environments, so you might want to read the first half of that post (you can stop when it comes to the section on OODA).

Results

Implementing a learning process is based on what results or goals you are trying to achieve, and identifying what measurements you need to evaluate those results will help you zero in on the result or goal itself. For example, saying that you want your employees to quickly find information is normally a poor goal to shoot for, as it is hard to measure. Starting with a focused project and then letting demand drive additional initiatives is normally the best way to begin implementing social and informal learning processes.

If you find that you are unable to come up with a good measurement, that normally means you have not zeroed in on a viable goal. In that case, use the Japanese method of asking “Why?” five times, or until you are able to pinpoint the exact goal you are trying to achieve. Establishing new or improved learning/training processes normally begins with a problem; for example, a manager complains: “When I read the monthly project reports, I find that employees are often faced with the same problems as others and, in turn, repeat the same learning process, so the same mistakes are repeated throughout the organization.”

“Why?”

“No one realizes that others within the organization have had the same problem before and have normally documented their solution (Lesson Learned).”

“Why?”

“There is no central database for them to look in, and the people who work next to them are normally unable to help.”

Zeroing in on the actual cause of the problem helps you build a focused program. In this example, it's a central database for “Lessons Learned” and a means of connecting the people within the organization to see if anyone has faced the same problem, and vice versa: allowing people to tweet (broadcast) their problems and solutions that may be of help to others.

In addition, you now have a viable measurement—counting the number of problems/mistakes in the project reports each month to see if they improve.
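A measurement this concrete is easy to automate. As a hypothetical sketch, if each monthly report were reduced to a list of problem labels, counting repeats against previously documented problems might look like this:

    # Hypothetical monthly project reports, reduced to problem labels.
    monthly_reports = {
        "January": ["db-timeout", "bad-handoff", "scope-creep"],
        "February": ["bad-handoff", "db-timeout"],
    }

    seen = set()  # problems already documented as Lessons Learned
    for month, problems in monthly_reports.items():
        repeats = sum(1 for p in problems if p in seen)
        seen.update(problems)
        print(f"{month}: {repeats} repeated problem(s) out of {len(problems)}")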

Performance

In a normal training situation, performance on the job is usually easy to evaluate. For example, when I instructed forklift operations in a manufacturing plant, after the training/practice period we would assess the learners on the forklifts in the actual work environment to ensure they could operate safely and correctly under real working conditions.

In addition, I used to train users on Query/400 (a programming language for extracting information from a company's computer system). One of the methods we used to assess the performers: when they returned to their workplace, they were required to build three queries that were then assessed by someone from the training department to ensure they could perform on the job. The training is thus transformed from an event into a process by ensuring the skills carry over to the workplace.

However, in our working example, it would be hard to observe the entire “Lessons Learned” process as it is a three-pronged solution that uses informal and social learning:

  • Capture the Lessons Learned by using an After Action Review (AAR)
  • Store it in a social site (such as a wiki or SharePoint) for easy retrieval
  • Provide a microblogging tool, such as Yammer, to help others to ask about lessons learned that might pertain to their problems and to tweet lessons learned

The first part of the solution could be partly evaluated by observing some of the AARs and watching the informal learning take place as people discuss their problems and solutions. The second and third parts would be more difficult, as it would be impractical to sit at someone's desk all day to see if they are using the wiki and microblogging tools. While there are probably a number of solutions, one method is to identify approximately how often the social media tools should be used on a daily, weekly, or monthly basis and then determine if expectations are being met by counting the number of:

  • contributions per month to the wiki (based on their Lessons Learned in the AAR sessions)
  • contributions to the microblogging tool (short briefs on their Lessons Learned)
  • questions asked on the microblogging tool by employees who could not find a Lesson Learned in the wiki that matched their problem

The approximations are based on the number of problems/mistakes found in the project reports and the total number submitted. You might have to adjust your expectations as the process continues, but this does give you a method for measuring performance. Note that Tim Wiering has a recent post on this method in the Green Chameleon blog.
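As a rough illustration of that expectation check, the sketch below compares hypothetical monthly counts against equally hypothetical targets; in practice the targets would come from the volume of problems in the project reports, as noted above:

    # Hypothetical monthly targets and actual counts for the three measures.
    expected = {"wiki contributions": 20,
                "microblog contributions": 40,
                "questions asked": 15}
    actual = {"wiki contributions": 12,
              "microblog contributions": 45,
              "questions asked": 9}

    for measure, target in expected.items():
        status = "on track" if actual[measure] >= target else "below target"
        print(f"{measure}: {actual[measure]}/{target} ({status})")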

In addition, once the performers have started using the tools, you can interview them by asking how the new tools are helping them and then capture some of the real success stories, such as by videotaping them or capturing a question-and-answer interview and then blogging about it. These stories have a two-fold purpose:

  • The stories themselves are evidence of the success of the performance solution.
  • The stories can then be used to help other learners/performers use the new tools more effectively; stories carry a lot of power in a learning process because learners are able to identify and relate to them.

Learning

First, the purpose of this level is not to evaluate what the performers are learning through the AARs, microblogging, and wiki tools when they return to their jobs (that measurement is captured in the Performance evaluation), but rather what they need to learn so that they can use the tools on the job. Look around almost any organization and you will see processes, programs, and tools that were built on the idea that if we build it, they will come, but that are now wastelands because the performers saw no use for them and/or had no real idea how to use them. Just because a tool such as Yammer or Twitter may be obvious to you does not mean the intended performers will see a use for it or, for that matter, know how to use it.

In addition, while one organization may not care if someone sends an occasional tweet about the latest Lady Gaga video, another may frown on it, so ensure the intended performers also know what not to use the new tools for.

Since these learning programs can be elearning, classroom, social, informal, etc. and the majority of Learning Designers know how to build and evaluate them, I'm not going to delve into that in this post.

Reaction/Engagement

While this may be the last level when flipping Kirkpatrick's Evaluation Model, it is actually the foundation of the other three levels.

One of the mistakes Kirkpatrick made is putting too much emphasis on smiley sheets. As noted in the excellent article, Are You Too Nice to Train?, measuring reaction is mostly a waste of time. What we really want to know is how engaged the learners will be at the learning level and whether that engagement will carry through to the performance level. People don't care so much about how happy they are with a learning process; they care about how the new skills and knowledge will be of use to them.

For example, when I was stationed in Germany while in the Army, we trained on how to protect ourselves and perform during CBR (Chemical/Biological/Radiological) attacks. One of the learning processes was to don our CBR gear (heavy clothing lined with charcoal to absorb chemical and biological agents, rubber gloves, rubber boots, the full-face rubber protective mask, and of course our helmets) in the midday heat of summer and then, using a compass and map, move as fast as we could on foot to a location about two miles away. I can tell you from experience that this is absolutely no fun at all, yet we learned to do it because no one wants to die from a chemical or biological agent: a ghastly way to go. The training had us totally engaged even though it was absolutely horrible.

Thus the purpose of this phase is to ensure the learners are on board with the learning and performance process, which is often best accomplished by including a portion of them and their managers in the planning process. You need the managers to help ensure buy-in, as employees most often do what their managers emphasize (unless you have some strong informal leaders among them).

Reversing the Process

By using the four levels to build the learning/performance process (going through levels 1 to 4 in the chart below), it becomes relatively easy to evaluate the program by reversing the process (going through the levels in reverse order: 4, 3, 2, 1):

 

For each evaluation level: what to create, and how to measure/evaluate it.

1. Results or Impact - What is our goal?

  • Create: Implement a process that allows the employees to capture Lessons Learned so that others may also learn from them when similar problems arise.
  • Measure/Evaluate: Reduce the number of repeated problems/mistakes in the project reports by 90%.

2. Performance - What must the performers do to achieve the goal?

  • Create: Identify and capture “Lessons Learned” in an AAR, post them on a wiki, and tweet them using Yammer. When problems arise in their projects, the performers should be able to search the wiki and/or use Yammer to see if there is a previous solution.
  • Measure/Evaluate: Count the contributions per month to the wiki, the contributions to the microblogging tool, and the questions asked on Yammer. Interview performers to capture success stories.

3. Learning - What must they learn to be able to perform?

  • Create: Perform an AAR; upload the captured “Lessons Learned” to a wiki; search and find documents in the wiki that are similar to their problem; microblog in Yammer.
  • Measure/Evaluate: Proficient use of an AAR is measured by using Branching Scenarios in an eLearning program and by performing an actual AAR in a classroom environment. Proficient use of the wiki and Yammer is measured in their respective eLearning programs (multiple choice) and by interacting (social learning) with the instructor and other learners on Yammer.

4. Reaction - What needs to be done to engage the learners/performers?

  • Create: Bring learners in on the planning/building process to ensure it meets their needs.
  • Measure/Evaluate: Managers will meet with the learners one-on-one before they begin the learning process to ensure the program is relevant to their needs. The instructional staff will meet with the learners during the learning process to ensure it is meeting their needs. The managers, with help from the learning department, will meet with the performers to ensure the new process does not conflict with their daily working environment.

 

Learner/performer engagement problems/roadblocks that are encountered will be the first item discussed and solved during the weekly project meetings.

Since we started with a focused project, we can now let demand drive additional initiatives that expand upon the present social and informal learning platform.

How do you build and measure learning processes?

2.13.2011

The Tools of Our Craft

The latest edition of Chief Learning Officer magazine contains an interesting article, Time's Up (you can also read the article on the author's blog). It is about Donald Kirkpatrick's Four Level Evaluation Model that was first published in a series of articles in 1959 in the Journal of American Society of Training Directors (now known as T+D magazine).

The author, Dan Pontefract, sums up the article in his last statement: “Diverging from the cockroach, it's time for the learning profession to evolve.” While Dan's article is thought-provoking, I believe it misses on two points: 1) the assumption that because something is old it is no good, and 2) the assumption that the model has not evolved.

Old Does Not Mean Outdated

Interaction Design is closely related to our craft of training and learning. While the concept has been around for ages, it was not formally defined until 1990, by Bill Moggridge, co-founder of the Silicon Valley-based design firm IDEO. While it is one of the newer design professions, it still relies on older tools. For example, one of the tools used is the affinity diagram, developed by Jiro Kawakita in the early 1960s. Being old does not mean a tool should be extinct. If that were true, the cockroach would have disappeared millions of years ago; yet, because it has evolved, it has managed to survive... much to the disgust of anyone who has had their home invaded by them.

[Image: enso circle by Vibhav]
“Nature itself is full of beauty and harmonious relationships that are asymmetrical yet balanced. This is a dynamic beauty that attracts and engages.” - Garr Reynolds

While people who have had their homes invaded by cockroaches look at them in disgust, entomologists see them as one of the marvels of nature. Learning/Instructional Designers should not look upon our tools, such as ADDIE and Kirkpatrick's model, as disgusting objects that have invaded our craft, but rather as entomologists look upon the lowly cockroach: marvels of our craft that have survived the test of time.

The Evolution of Our Tools

Just as the ADDIE model has evolved over time, Kirkpatrick's model has also evolved. One of its main evolutionary steps was flipping it into a backwards planning model:

  1. Results or Impact - What is our goal?
  2. Performance - What must the performers do to achieve the goal?
  3. Learning - What must they learn to perform?
  4. Reaction - What needs to be done to engage the learners/performers?

While I blogged about this in 2008 in Flipping Kirkpatrick, Kirkpatrick himself wrote of it several years earlier. This method aligns with how Dan's article says we should start: “start with an end goal to achieve overall return on performance and engagement.” In addition, it in no way treats learning as an event, but rather as a process. What is interesting is how closely Kirkpatrick's evolved model fits with other models, such as Cathy Moore's Action Mapping.

Like the ADDIE model, Kirkpatrick's model is often called a process model. However, this is only true if you blindly follow it. If you remove your blinders and study and play with it, it becomes a way to implement not only formal learning, but informal, social, and nonformal learning as well. For example, step three, What must they learn to perform?, does not imply strictly formal learning methods, but rather any combination of the four learning processes (social, informal, nonformal, and formal).

How do you see our tools evolving?

 

1.02.2011

Star Diagram of the Continua of Learning

In my last post, Should the Door be Closed or Open, Nick Kearney commented that the star diagram was a better representation than the various continua I laid out. I agree; however, since the star diagram is composed of continua, I think that when discussing a particular one, as I did in the last post, it helps to show just the one being discussed.

As shown in the diagram below, I made some adjustments to it:

[Image: Star Diagram of the Continua of Learning]

As I noted in my last post, I put social learning and reflection on the same continuum (The Door), as the real purpose is that sometimes we need to be alone with our thoughts, while at other times we need to interact with others. And of course there are a lot of alternatives between the two (social reflection being one of them).

I also dropped the Purpose of Learning Continuum (intentional and incidental/serendipitous). While it is an interesting concept, I don't believe that it fits in with the diagram in that it does not help us to design better learning/performance platforms.

David Winter (@davidawinter) tweeted me the suggestion of adding another continuum: Impact, with 'reinforcement/augmentation' of existing understanding/behaviours/identity at one end and 'transformation' at the other. After thinking about it, I believe it belongs on the Workflow Continuum (EPSS/performance support and training). However, I'm not sure which of the terms is better. I'm thinking that it should be called the 'Workflow Continuum,' with augmentation on one end and transformation on the other; EPSS/performance support and training would be some of the options that lie between the two:

[Image: Workflow Continuum]

What are your thoughts?