Systems Thinking
So how do we take into account all the other variables that exist in the environment and consider them as we discuss a behavior? We are going to zoom in to focus on very specific behaviors in later chapters of this book because that sharp focus helps us analyze and diagnose, but a too-narrow focus can also cause us to miss other causes and solutions.
Donella Meadows, author of the classic book Thinking in Systems: A Primer, describes a system as “A set of things—people, cells, molecules, or whatever—interconnected in such a way that they produce their own pattern of behavior over time.” Trying to understand the complexity of a whole system can be overwhelming, but being able to both focus on individual behaviors and keep in mind the overall system is a necessary balancing act for any serious behavior change effort.
For example, we know that plastics cause many ecological issues, so you could have a narrow focus on the behavior “People need to recycle consistently and correctly.” But it’s worth asking if that narrow behavior will make a big enough change. Maybe people recycling more frequently and accurately will change things significantly, but we probably need to look at the bigger system and consider variables like the cost and availability of recycling facilities, the market for recycled plastics, the incentives for manufacturers to use less plastic, and so on.
One tool we can use to consider how a system works is system mapping. There’s no single way to do this kind of mapping, and I’m only going to use the simplest examples here. Peter Senge’s The Fifth Discipline and Donella Meadows’s Thinking in Systems are both excellent books if you’d like to explore this further.
Peter Senge explains that the building blocks are reinforcing processes, balancing processes, and delays. Let’s look at a reinforcing process. If you’ve been in the world of learning and development or higher education for any time, you are probably very familiar with the “end of class survey.”
The illustration shows how this should probably work. Evaluation data should be used to improve classes, which will then improve the evaluation scores.
Eventually the system will balance out when the evaluation data and the class quality can’t get any higher. Everybody wins!
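If you like to tinker, the ideal loop can be sketched in a few lines of Python. This is just an illustration of the dynamic, not a model of any real training function: each cycle, evaluation data drives improvement proportional to the remaining gap between current quality and a ceiling, so the reinforcing loop eventually levels off. All the numbers (starting quality, ceiling, gain) are invented.

```python
# A minimal sketch of a reinforcing loop that levels off at a ceiling.
# All parameter values here are invented for illustration.

def run_loop(quality=0.5, ceiling=1.0, gain=0.3, steps=20):
    """Each cycle, evaluation scores track class quality, and the team
    improves the class proportionally to the remaining gap."""
    history = [quality]
    for _ in range(steps):
        evaluation_score = quality                          # scores track quality
        improvement = gain * (ceiling - evaluation_score)   # act on the data
        quality = min(ceiling, quality + improvement)
        history.append(quality)
    return history

history = run_loop()
print(f"start: {history[0]:.2f}, after 20 cycles: {history[-1]:.2f}")
```

The `gain` parameter is a stand-in for how aggressively the team acts on the data; in a real organization, that depends on resources and attention, which is exactly where the loop tends to break down.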
That’s how it theoretically should work. It often doesn’t go quite like that. I’ve seen organizations where it goes something like this:
Yep, the evaluation data is collected, but then doesn’t go anywhere, except into the metaphorical ether.
If a system is supposed to work a particular way, and it’s not, then it’s worth asking “why not?”
You can tug on each of these threads and, for example, ask why the data isn’t actionable or who should be responsible for paying attention to the data.
Let’s say that you are managing the training function, and you decide the team will evaluate all the evaluation data and allocate resources to improve the classes, which hopefully improves business results. That sounds pretty good! But, of course, other things are involved, like resource allocation to new projects, stakeholder support, and so forth. If you think about how these all interact, it can get complicated quickly:
If we try to figure out how these things interact, we might find that the assignment of resources to improve existing classes doesn’t come with an overall increase in team resources, so they have to be pulled from somewhere. That means fewer resources for new training projects, which makes some stakeholders unhappy and leads to a decrease in business results from new projects. And the unhappy stakeholders decrease their funding support, so now you can’t fill the open staff position you were counting on to support the improvements.
Thinking through these relationships can help you identify key places in the system where you can intervene and adjust to make beneficial changes.
A system view can help show where there isn’t enough reinforcement, where there are unintended effects, or where feedback that’s difficult to see is causing problems.
Unintended Consequences
Any behavior change intervention can have unintended consequences. For example, the intended consequences for most compliance training efforts are outcomes like employees not doing things that are illegal or problematic, or legal defensibility if the company is sued.
But if we create compliance training that isn’t relevant to the audience, and the message is that you just need to tick the completion box, then we may not like the unintended consequences of forcing compliance training where it’s not relevant or useful.
I talked to Christian Hunt, author of Humanizing Rules: Bringing Behavioural Science to Ethics and Compliance, and he described it this way:
It used to frustrate me in banking when my assistant had to do training on obscure regulations that made no sense to her whatsoever. It was not relevant to her job. And so she would sit there and go, “Oh, it’s another one of those things from the people who brought you the tedious trade course.” So even when it was relevant, she would sit there and go, “Here’s more useless stuff from those idiots that don’t understand me, I’m going to ignore it.” We are teaching people to ignore our training.
I think the key bit with all of this is that we’re dealing with human beings that are sentient. And so they will react to what they see us doing. Attempts to assess whether our training has been effective needs to bear in mind that the test itself sends a signal to employees. If you teach them something you say is important, but then if the assessment is dumb—you tell them to just regurgitate what they’ve just been told or give them an “everybody knows this” kind of test—that’s not a genuine test of whether they know it, and they’ll recognize that. And so in trying to test the effectiveness, we often actually make the situation worse and we undermine the subject matter in the tests.
Where Does Feedback Become Visible?
Often, in a behavior-change project, we decide that a set of behaviors will produce the desired outcomes. At that point, it’s worth asking, “Where do the consequences of a behavior become visible?”
Here are some example behaviors:
Salespeople should increase their sales of the turbowidget (desired result: increased turbowidget sales).
Hospital healthcare providers should wash their hands according to governmental guidelines (desired result: decreased patient infections).
Jan needs to buy extra milk while her brother and nieces are visiting (desired result: there will be enough milk for breakfast and other meals).
Managers need to ensure that salary offers to new hires are fair and equitable (desired result: staff will be paid appropriately for their qualifications and responsibilities).
Results of behaviors can become visible at very different levels. I usually use the distinction of individual, group, and system levels.
Individual-Level Consequences
Jan’s behavior will be visible at the individual level: she’ll be able to see whether there’s enough milk or she needs to buy more. The behavior and consequences will be pretty easy to see at the individual level:
The same thing is probably true with the sales example. Most organizations track sales results at the individual level, so we can see how a particular salesperson did, as shown in the illustration.
So buying milk and selling turbowidgets both have a visible outcome at the individual level.
Group-Level Consequences
Sometimes, evaluating the outcome requires comparison across a group. For example, selling 180% of goal sounds great, but if everyone on the team does exactly the same, it’s less impressive. If most other people on the sales team sell around 100% of goal, though, 180% is exceptional.
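The comparison logic is simple enough to sketch in code: a result is only “exceptional” relative to the rest of the group. The names, percentages, and the 1.5×-the-mean threshold below are all invented for illustration.

```python
# Hypothetical sales results as percent of goal for a small team.
team = {"Aisha": 102, "Ben": 98, "Chio": 105, "Dana": 180, "Eli": 95}

# A result only looks exceptional against the group, not on its own.
mean = sum(team.values()) / len(team)
for name, pct in team.items():
    flag = "  <- exceptional" if pct > mean * 1.5 else ""
    print(f"{name}: {pct}% of goal{flag}")
```

If everyone sold 180% of goal, the mean would rise to 180 and nobody would be flagged, which is the point: the same individual number means something different depending on the group.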
In most healthcare facilities, the consequence of handwashing is very difficult to measure at the individual level; it’s extremely unlikely that a patient has contact with only a single healthcare provider in a hospital setting. The impact really only becomes visible at the group level, and even then a single facility’s numbers might not provide enough data for comparison.
If a manager is hiring a new employee, that manager might have all sorts of reasons for the salary offer being 15% less than the person currently doing the job. The new person might have less experience or different qualifications, or the current employee might have been in the job for several years and received merit increases over the years.
A manager might be able to judge the fairness of a salary offer against several other people in the department doing the same work with similar qualifications, or they might not have anyone else in that same role.
So, sales goals, handwashing, and salary offers need comparison or aggregate data to be meaningful. We need some basis for comparison to know whether the rates of sales or patient infection are good numbers, and a single salary offer can’t be judged equitable without comparing it to similar offers.
System-Level Consequences
Sometimes consequences can be judged only at the level of whole systems. A hospital might not know whether its infection rates are excessive without being able to benchmark against similar hospitals or national averages. Behavior changes that focus on changing the individual will be easiest when the results or consequences are visible at the individual level.
When you’re asked to design learning for a behavior change where individual learners can’t see any feedback, because nothing is in place to measure outcomes at the group or system level, it’s important to recognize that you’re facing an uphill battle, and you should make stakeholders aware that training alone will probably not be enough to support change.
For example, it can be very difficult to judge the fairness of a job offer without more data than most hiring managers have access to.
The Example of Pay Equity
The company Salesforce.com set out to look at salary disparities. In an article in Wired magazine, Salesforce.com CEO Marc Benioff and two members of the senior executive team, Cindy Robbins and Leyla Seka, raised the issue of gender pay equity and proposed an audit of compensation for all employees. Benioff described how they had been working on equity initiatives for a few years at that point, so he didn’t expect the audit to show much disparity.
It wasn’t simple to look at the data. They “assembled a cross-functional team and developed a methodology with outside experts that analyzed the entire employee population to determine whether there were unexplained differences in pay.” Benioff was chagrined to discover that there were significant disparities and that 6% of Salesforce.com employees needed their salaries adjusted, at a cost of approximately $3 million. The next year, they found they had to perform a similar adjustment (mostly because acquired companies brought their own salary disparities into the organization). The company discovered this would be an ongoing effort and publishes an annual update on its website regarding goals and progress.
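To make the idea of “unexplained differences” concrete, here’s a toy version of that kind of check: group employees by job code and experience band, then compare median pay across gender within each group. All the job codes, bands, and salaries below are invented, and Salesforce’s actual methodology, developed with outside experts, controlled for far more variables than this.

```python
# A toy pay-equity check: compare median pay by gender within groups of
# comparable employees. All data here is invented for illustration.
from collections import defaultdict
from statistics import median

employees = [
    # (job_code, experience_band, gender, salary)
    ("ENG2", "3-5yr", "F", 98_000),
    ("ENG2", "3-5yr", "M", 112_000),
    ("ENG2", "3-5yr", "F", 101_000),
    ("ENG2", "3-5yr", "M", 108_000),
    ("MKT1", "0-2yr", "F", 64_000),
    ("MKT1", "0-2yr", "M", 65_000),
]

# Group salaries by (job code, experience band), then by gender.
groups = defaultdict(lambda: defaultdict(list))
for job, band, gender, salary in employees:
    groups[(job, band)][gender].append(salary)

gaps = {}  # (job, band) -> median pay gap as a percentage
for key, by_gender in groups.items():
    if len(by_gender) < 2:
        continue  # only one gender present in this group; nothing to compare
    medians = {g: median(s) for g, s in by_gender.items()}
    gaps[key] = (max(medians.values()) - min(medians.values())) / max(medians.values()) * 100

for (job, band), pct in gaps.items():
    print(f"{job} {band}: median pay gap {pct:.1f}%")
```

Even this toy version shows why the problem is invisible to individual managers: no single offer in the data looks obviously unfair, but the grouped comparison surfaces a gap.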
Benioff describes how this is not the product of deliberate bad actors in the system. No bad person is scheming to pay people less based on race or gender. He describes pay inequity as “a stubborn, slippery problem in business.” He also explains that the reasons to fix it aren’t about reputation or even doing the right thing, but that diversity and equity are good for business, according to research from McKinsey & Company and others.
The point of this example isn’t to promote pay equity (though I’m a fan), but to show how a focus on individual behavior would be inadequate here. I’ve worked on many diversity training projects over the years, and the training has had learning objectives like
Managers will be able to describe the importance of fair and equitable treatment.
Or even
Managers will be able to identify the characteristics of a fair and unbiased salary offer.
But in the Salesforce.com example, they were unable to see the problem clearly without a system in place to measure and correct for the issue. After the initial audit, they “devised a new set of job codes and standards and applied them to each newly integrated company.” With those measures in place, it might be possible to address the problem at an individual level, as disparities against those standards would be visible on an individual basis.