
How to Digitally Enable Performance

The promise of better performance via digitization and real-time access to information is best achieved if we begin with a process-centric view. With that, ask the question: What minimal information does each process require to make the best possible decisions? By so doing, we sort the wheat from the chaff and relay only the information required to speed decisions, enable agility and lift results.


Parse your data thoughtfully to make better decisions faster and get the most out of any process.

We’re impatient. In delivering products and services, everybody wants to improve results. Yet progress rarely matches expectations. To create an advantage, few things hold greater promise than digitization. We’re well equipped with computers, smartphones, tablets and touch screens… and lots of data. The cloud lets us distribute huge quantities of information with ease. In spite of all this technology, we bog down when it comes to parsing the critical bits of information needed to make the people-process interface work best.

What Does Digitally Enabled Change Mean to Performance?

Merging the concepts of “digitally enabled change” and “performance” can mean different things to different people. We’re seldom on the same page. The connection between digitization and performance is two-fold: First, it should enable better and faster decisions; and second, it should help compare results in order to propel best demonstrated practices. Such entitlements are pure gold for a large enterprise. Let’s take a closer look.

  1. Faster Decisions: If we collect, parse and distribute the right information to the appropriate stakeholders in an environment that’s equipped to use the information, better decisions up and down the value chain are possible. Then, lead times shrink. The ability to reduce lead time is an enterprise’s best indicator of competitiveness.
  2. Comparative Evaluation: Data allows us to compare results between procedures, processes and equipment across time and space. This allows us to identify performance trends and to evaluate and spread best practices. My friend and colleague Hans-Georg Scheibe with ROI* describes this quite well:

“In our projects, ‘digitally enabled change and performance’ often means to provide the right information to different stakeholders. For example, to gather data from comparable machines all over the world, and to compare a dedicated machine with the others. There is always a best and a worst, and when you ask why, you get a chance to improve. Or to gather best practices from different teams and to provide this knowledge in a manner that somebody else can use to solve problems directly, or faster or better. This requires connectivity to gather the data – to have a kind of intelligence to make information out of the data, and then to turn information into knowledge.”

A Process-Centric Approach

Instead of starting with all the data available – both signal and noise – and pushing it into the process, we start by looking at the process itself and ask what minimum information (data) is required for people closest to the work to make the best decisions.

Why is worker inclusion so important? In practice, we’ve found that only the people doing the work can reliably tell you what information is essential to getting the job done right. Thus, engaging in front-end information conditioning is vital to actually reducing lead times, revealing improvement opportunities and paving the way for a better future-state work process.

The Decision Map

As its name implies, a Decision Map emphasizes the idea that faster and better decisions are the precursor of optimal processes. We start with a fairly traditional view so we can see the process flat on the wall – it could be a swim-lane map, box-and-wire diagram, value stream map, etc. Then we identify the places where information is required to make decisions and take action. We pose seemingly simple questions that often prove to be thorny lines of inquiry, such as:

  • Who makes the decision, and is this right? There can be no room for ambiguity.
  • What information is really necessary? Less is more. Extraneous information is inventory, and inventory has a carrying cost.
  • Is the decision point in the right place in the value stream, or could it be moved or consolidated? When more than one point makes a decision, it’s an opportunity for no decision.
  • What’s the best way to collect and convey the information? This is a critical point. Some information is best collected and conveyed electronically; some is better suited for human involvement. This must be sorted out deliberately.
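These questions can be captured as a simple checklist applied at each decision point. The sketch below is purely illustrative – the class and field names are our own invention, not part of any Kaufman Global tool – but it shows how the audit criteria above (a single unambiguous owner, minimal information, a deliberate conveyance channel) might be made explicit and checkable:

```python
from dataclasses import dataclass, field

@dataclass
class DecisionPoint:
    """One node on a Decision Map: a place where information meets a decision."""
    name: str
    owner: str                     # who makes the decision (must be unambiguous)
    required_info: list[str] = field(default_factory=list)  # minimum information only
    channel: str = "human"         # "electronic" or "human" collection/conveyance

    def audit(self) -> list[str]:
        """Flag common Decision Map problems for review."""
        issues = []
        if not self.owner:
            issues.append(f"{self.name}: no single decision owner")
        if len(self.required_info) > 5:
            issues.append(f"{self.name}: too much information – inventory has a carrying cost")
        if self.channel not in ("electronic", "human"):
            issues.append(f"{self.name}: conveyance method not deliberately chosen")
        return issues
```

Walking the map node by node with a checklist like this keeps the review focused on the minimum information needed at each point, rather than on everything that could be pushed there.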

The maps show us different functions and accountabilities. You will find that some of the same data and information will be allocated to more than one location for different purposes. The point is that these distributions of information are not a push of all information to be sorted out by the users; rather, they are made with a logical intent to provide only the information necessary to make the best decision at any given node.

Conclusions

For faster and better point-of-use decisions, information must be relevant to the user. Because of this, we highlight data that is layered according to function and accountability. While daunting in an environment that identifies with “everybody can and perhaps should know everything,” a disciplined “need to know” approach makes whatever information is conveyed more valuable to the user because it is concise and actionable – it keeps the wheat and leaves the chaff behind. In the end, digitization can enable change and performance in remarkable ways. But the concept works best when we don’t just throw all kinds of data at people and processes and hope for the best. Up-front evaluation of the process with the people doing the work is the right way to start. It makes change stickier and reduces rework and frustration. Most of all, it gets to optimal performance much, much faster.

==========

Want to know more about how Kaufman Global helps clients manage change and improve performance? Contact us.

*ROI Management Consulting AG is a German-headquartered network partner of ours that specializes in operational performance.

Three OpEx Questions You Need to Know the Answers To

A small group recently posed a few questions to us about Operational Excellence, leadership and change. They were looking to get started on their journey and wanted to understand our perspective on some of the critical elements. Here’s a recap of the ensuing discussion…

Q. Leaders mostly understand the benefit of engagement and often see OpEx as a way to obtain this, yet it seems that leaders are not well equipped to make this happen. In fact, the training for leaders is often ineffective. How do we overcome this gap?

A. The graph below is sourced from our YE16 OpEx survey report. It shows how the surveyed organizations ranked the effectiveness of training at various levels of the organization and across a variety of business types:

Img 1 – The relative effectiveness of OpEx and Engagement Training at different levels of the organization. It’s notable that training is deemed most effective at the lowest levels of the organization while executive level training is deemed effective only about 50% of the time.

We can all see the connection between better engagement scores and improved performance; however, there is a lot of confusion about what good engagement looks like. Often, engagement is thought to be more frequent face time between bosses and subordinates, 360 feedback, suggestion programs, and so on. In reality, good engagement is about giving people the ability to directly influence their work. It recognizes basic human needs that include the power to make decisions, the ability to control outcomes and being part of something bigger. These attributes are not naturally occurring in many work environments, so equipping leaders to enable OpEx is about training and coaching them on the essential actions and behaviors they must take to engage and align the organization from top to bottom.

Our approach to ensuring effective leadership training starts with the Managers / Executive Lean Overview workshops. These sessions quickly inform the team with a common vocabulary, awareness and understanding of:

  • Lean concepts and the Lean enterprise
  • Developing the right culture, structure and behaviors to support Lean
  • Managing resistance to uncertainty, and
  • Driving measurable results linked to the business strategy and objectives

Tools and methods are covered, but gaining expert capability on them is not specifically intended. Rather, this portion of the workshop is meant to provide context for how front line practitioners apply problem-solving tools to achieve desired business outcomes.

Our primary objective with leaders and managers is to provide insights that help them define and develop their own leader standard work. This means doing the hard work of changing some of their own behaviors and habits so they can actively coach and demonstrate support for Lean to the organization as implementation begins.

Beyond training and workshops, coaching is an important element that we always employ during project work with clients. Coaching is about observing behaviors and suggesting alternatives that can be more effective at delivering certain results. A simple example: If you want people to be more engaged, ask leading questions as opposed to prescribing a potentially ill-conceived solution. In this way, everyone learns something and engagement is supported instead of stifled.

Q. If the leaders are not equipped to lead engagement, can implementation still be successful if delegated to a lower level?

A. No. We’re talking about a shift here that must be valued up and down the organization and especially at the top. These values drive subtle and not so subtle behaviors that become part of the culture and transcend market shifts and personnel changes. Here we assume “implementation” to be a sustainable OpEx system. A leader who is equipped to lead engagement not only understands the benefits, but values the operating norms that better engagement brings.

Since a lot of the heavy lifting and day-to-day activities of implementation are in fact delegated, it’s important to understand how to help leaders do this. We talked about the training and coaching aspect for leaders in Q1 above. In addition to understanding the value of better engagement, the organization must know how to do it.

Everyone in the organization must be expected to spend a small percentage of time on improving the business ― as opposed to running the business. In the simplest terms, this means allowing workers some freedom to fix problems that affect their day-to-day work at the micro-process level. Supervisors and middle managers aren’t exempt: They too should spend about an hour a week addressing slightly more “macro” problems that affect their areas and people. At all levels, the most effective improvement efforts are team-based to drive process ownership and accountability.

Since exactly how to do engagement can be described, the activities can be tracked. This is important because it moves leaders beyond the idea of just “valuing engagement” (because who doesn’t, right?) to “knowing how to DO engagement.” Only when this happens can implementation be effectively delegated.

Q. How important are engagement scores to measuring the success of OpEx? What measures would be more important to determining success?

A. Engagement scores are important. OpEx and engagement scores (from surveys and audits) are directly related. Successful Operational Excellence is in large part the result of good engagement. So engagement scores are a good lagging indicator of OpEx and a great leading indicator of operational performance.

A focus on leading indicators is a good place to start. Here’s a way to think about indicators:

  • Leading indicators ― Instead of a “result” metric, leading indicators are often the measurable actions that are taken to achieve a result. For engagement these are the structures and mechanisms we use to cause engagement – for example, the Executive Steering Committee (ESC), Functional Steering Committee (FSC), and Lean Daily Management System®. These structures describe specific, measurable activities that are part of a high-functioning OpEx system.
  • Middle indicators are the process performance measures ― and the associated plans to improve ― at the macro and micro-process levels. These are a tangible reflection of the living adoption of OpEx. The organization likes these a lot because they show something is being done to improve results.
  • Engagement scores are an important lagging indicator that provides proof and external validation that the OpEx system is working (or not). Those who score the highest go beyond better communication and asking people for more feedback. They incorporate ways for employees to have direct input into the work that they do – that is, the work that is relevant for them.

==========

Want more detail on these topics? You can download the full survey report – An Examination of Operational Excellence – from the Resources section of our website. (It’s great, really).

To learn more about enabling leadership to connect the dots between engagement and value, check out our White Paper: Engage the Organization – And a Performance Culture Will Follow.

 

Rapid Performance Evaluation – Speed Matters

Rapid Performance Evaluation: Standard Work for Identifying Operational Performance Gaps

Kaufman Global helps clients solve complex problems and drive fundamental improvement. We engage when people and process collide – places where expertise and leverage can speed results. Even when an organization knows there is a problem, understanding operational performance, getting to solutions and knowing which levers to adjust can often benefit from outside perspective.

Over the past few years, we’ve observed that clients want answers faster than ever before. And while it could be that “time is money,” it seems to us that it’s more related to the frenetic pace of, well, everything these days. Headlines and apps often don’t dig deep enough, and the “Ready, Fire, Aim” approach has great potential for missteps.

To meet the demand for fast but thorough answers, we devised an innovative method for quickly getting to the heart of the matter – operationally and organizationally. Our Rapid Performance Evaluation (RPE) uses a standard work approach to cut the time required for credible solutions to about a day. How, you ask?

Instead of only identifying and prioritizing process problems, the RPE delivers tangible feedback and scores that can be used to immediately take action to improve. The RPE:

  • Provides a comparison against well-defined standards and benchmarks
  • Ensures the leadership team is aligned on the issues
  • Establishes specific and prioritized things to work on now
  • Engages the organization out of the gate, reducing rework and improving data fidelity 10x
  • Is fast and agile – minimizing disruptions. Getting accurate info doesn’t have to take weeks

The process begins by on-boarding the team and communicating with site leadership, and it ends with a report back to that same leadership. Core attributes of the Rapid Performance Evaluation are noted below.

Visit the Gemba

The gemba is the place work is done. It’s the shop floor, the office, the warehouse, the lab or the medical unit. It embodies the concept of “Go look, go see” and is a vital step in collecting information for analysis. We’ve found it helpful to review, immediately before the visit, what we’re trying to “see,” so we use a standard set of definitions to focus our attention. For example, in an office environment one thing we look for is communication between functions. In a factory, we want to understand how material is moved (pushed or pulled) and stored (inventory) from one location to the next. Little reference paperwork is needed: our optics have been adjusted ahead of time, so the visit to the gemba can be devoted to observation and understanding. We need to keep our eyes and ears open.

Template Driven for Simplicity

Templates are used to compare existing practices to best known practices. With the RPE, simple but proven definitions and an intuitive measurement system make it easy to get everyone on the same page when it comes to scores and ratings. We look at factors that correlate to overall performance, such as quality systems, teamwork, continuous improvement capability and material and information flow. The correlation factors provide a big-picture view and point to overarching or systemic causes affecting performance.

For more discrete aspects of the operation, we use the Kaufman Global 20 Keys® to evaluate 20 critical elements that affect efficiency and effectiveness. For each key, the tool ranks the current level of performance on a 5-point scale, where 1 is “Traditional” and 5 is “Currently Invincible”. Levels are described simply so the requirements for achieving the next level of performance are easily understood. The 20 Keys dig a little deeper than the correlation factors by identifying and prioritizing specific things to work on.

Alignment Speeds Change

No matter how good the templates and rating systems are, they don’t account for the human factor. Opinions matter. During the course of the visit we interview key leaders and stakeholders. This usually means functional heads who have valuable insights and who will play a critical role in any changes moving forward. We start to see how much (or little) agreement there is about the underlying issues. This is an area where being external to the organization is a key advantage. Functions are typically protective of their turf. Outsiders can ask more probing questions. If there is a significant difference between what we hear from the leaders and what we see on the ground, we sometimes opt to survey the organization. This can help identify broader organizational issues.

Balance Speed and Accuracy

The RPE is done with a small joint team comprised of Kaufman Global and the client. Since the method is standard, well defined, intuitive, and template driven, training for the client participants can be completed at the start of the day. The real benefit of this simplicity becomes clear at the end of the day when scoring begins. After we’ve completed the tour and interviews, we individually rate and rank based on our personal observations. Then, we come together to discuss and negotiate consensus results. The evaluation is better because it consolidates multiple views, experiences and vantage points and compares actual performance against intuitive and easy to understand benchmarks.
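One way to see how individual ratings feed a consensus discussion is to flag the items where evaluators disagree most, so the end-of-day negotiation focuses there. The sketch below is our own illustration of that idea, not Kaufman Global’s actual tooling; the threshold value is an assumption:

```python
from statistics import mean, pstdev

def consensus_prep(ratings: dict[str, list[int]], spread_threshold: float = 1.0) -> dict:
    """Given each evaluator's 1-5 rating per item, return the average score
    and flag items whose ratings disagree enough to warrant discussion."""
    summary = {}
    for item, scores in ratings.items():
        summary[item] = {
            "average": round(mean(scores), 2),
            "discuss": pstdev(scores) >= spread_threshold,  # wide spread -> negotiate
        }
    return summary

# Example: three evaluators rate two items after the gemba walk and interviews
ratings = {"material_flow": [2, 2, 3], "quality_system": [1, 4, 2]}
summary = consensus_prep(ratings)
```

Here “material_flow” would pass straight to the consensus result, while the wide spread on “quality_system” signals that the evaluators saw different things and need to talk it through.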

Results

Any operation can be assessed for performance quickly if the method considers all of the output requirements and integrates change management approaches. The RPE gives leaders rational ratings of performance, a clear understanding of organizational challenges and confidence that they’re spending energy in the right places.

Find an example of the approach and results here: Rapid Performance Evaluation (RPE) Case Study: Automotive Electronics.

 

Continuous Improvement And The 20 Keys®

On the Continuous Improvement Journey, It Pays to Look in the Rearview Mirror

Business leaders are often criticized for their tendency to keep looking “in the rearview mirror,” but is this always a bad thing? In life there are frequent occasions when one can learn from events of the past to improve future outcomes. Where Continuous Improvement (CI) activities are concerned, monitoring and tracking quantifiable results over time maintains focus on achieving ever-higher levels of performance.

Knowledge and expertise stem from both good and bad past experiences. As much as we admire the entrepreneurial spirit of those who seem to be charging ahead at the speed of light, the truth is that successful innovators occupy a significant amount of their time studying the “lessons learned” of their predecessors. Unfortunately, it’s far too common for organizations to follow a well-trodden path and then stop abruptly when other priorities arise. Experiences are often lost, and errors are later repeated. In the end, past efforts are often wasted as the creative ideas from former teams are constantly reinvented.

When it comes to continuous improvement, it’s imperative that time be dedicated to holding standard, prescriptive, and quantitative reviews that evaluate the effectiveness of CI efforts. A concentrated effort should be made to discuss and formally document successes and failures of any significant flow of tasks and activities. If one thinks about it, every workflow should be a continuous loop of tasks from which the concept of CI can grow and flourish. This is equally applicable to such innovative functions as new product development or to the more mundane activities associated with financial month-end closing. CI effectiveness reviews are a critical success factor for any initiative in order to continually build on prior efforts.

To quantify, monitor and review continuous improvement progress, there is no better tool than Kaufman Global’s 20 Keys®, a proprietary method for focusing an intact workgroup on the 20 most important elements of how it is operating versus world-class (or better) standards. The 20 Keys systematizes the review process by regularly assessing the current state, targeting future performance levels and implementing a month-to-month plan for improvement.

As identified in the 20 Keys Cycle illustration above, the method drives a continuous cycle of improvement and builds on prior efforts. Typically repeated four times per year, the assessment has location leadership or a designated representative work with the workgroup to assess its score for each of the 20 keys. This is an honest, direct exchange in which they score each key against known criteria. Once the assessment is done, the same workgroup decides which key(s) to focus on improving. The objective is to increase the overall score by 10 points per year.
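The quarterly cadence and 10-point annual target lend themselves to a simple pace check: compare the total score across assessments with the pro-rated target. This is a hedged sketch of that arithmetic under the assumptions stated in the comments, not any official 20 Keys calculation:

```python
def on_pace(quarterly_totals: list[int], target_gain_per_year: int = 10) -> bool:
    """Check whether a workgroup's total 20 Keys score is improving at the
    target pace. Each entry is one quarterly assessment total; with 20 keys
    scored 1-5, totals range from 20 to 100."""
    if len(quarterly_totals) < 2:
        return True  # not enough history to judge
    quarters_elapsed = len(quarterly_totals) - 1
    gain = quarterly_totals[-1] - quarterly_totals[0]
    # Pro-rate the annual target over the quarters actually elapsed
    return gain >= target_gain_per_year * quarters_elapsed / 4

# Example: four assessments in one year, total rising from 34 to 45
print(on_pace([34, 37, 41, 45]))  # True: gain of 11 over 3 quarters vs. a 7.5-point target
```

A check like this keeps the rearview mirror honest: the workgroup sees each quarter whether its improvement rate, not just its latest score, is where it needs to be.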

A universal process, the 20 Keys has been implemented successfully by organizations throughout the world. To date, Kaufman Global has developed more than 25 different sets of keys applicable to both industry and functional applications, including Customer Service, Logistics and Supply Chain Management, Engineering, and Project Management, among others. Regardless of which key sets are used, the 20 Keys process provides a standardized approach for measuring CI effectiveness across functions and locations.

It’s time to make a fully documented review procedure (that includes targeted metrics) part of your organization’s way of life — no matter what industry or business function you happen to be working in. Think about it. What did you learn from your recent improvement efforts, what was measured to evaluate results, and how can those results be improved?

To learn more about Kaufman Global’s 20 Keys, download our White Paper: Continuous Improvement and The 20 Keys®

Benchmarks As A Performance Improvement Method

The Case for Benchmarks

Benchmarks can be a valid comparison tool for your performance transformation for a couple of reasons. First, they offer initiative leadership the moral authority to urge comparable results. That is to say, given an equivalent process, if some other organization can do better with the same resources, or do as well with fewer resources, then its performance should become an entitlement in the eyes of the astute observer. Second, benchmarks can indicate that you are too far from the best to even bother trying. Have you ever thought that maybe you just shouldn’t try to manage the best motor pool? That’s right: maybe you should simply outsource your motor pool to a best-in-class resource.

One Little Hitch

So, while it’s true that benchmarks can provide authority and clarity, consider the flip side. It’s been observed more than once that few things are more wasteful than optimizing a process that doesn’t need to exist. So, if you learn that your process doesn’t really need to exist, will you really fold up shop? Often not. This frequently occurs in public sector settings. Standing back, we may surely agree that government needs to embrace Lean practices and improve processes. Yet, if Agency A learns of Agency B’s prowess doing exactly what they do, will “A” fire everyone, bolt the doors and outsource to “B”? You already know the answer. There’s this little matter of constitutional authority and jurisdiction. So, “A” needs to think instead of client (taxpayer) needs, baseline the as-is, identify the waste, make a fact-based plan to improve, and move out. Extensive benchmarking merely squanders already slim resources.

Benchmark Envy

Obtaining relevant, timely and directly comparable benchmarks is as much art as science. While fascinating revelations unfold, pursuing benchmark perfection consumes a lot of resources that could be better invested in simply beginning your improvement journey. That’s particularly true (and wasteful) if you sense you’re quite far from ideal. For example, let’s say that in a 20 Keys® category you are at a 1, the best known is a 3, and you’re trying to understand how to incrementally become a 5. Instead of all that angst, we’d say why not begin your transformation journey by just going for level 2? You’ll be far more competitive right away, and it’s probably quite easy to stretch one level up, where things will seem (and really are) a lot better. Moreover, taking this path means your organization learns a lot about what it takes to become world-class. That alone is priceless and can prove to be a lasting gain.

Trust Your Team

There’s another risk in relying exclusively upon external benchmarks to upgrade your performance envelope. It could just be that the world-class benchmark identified for your performance challenge is itself challenged. In fact, it could be so distant or implausible that it doesn’t even deserve your off-hand consideration. What you need to do instead may require a real innovation, a step change, a revolutionary new way — period. And, the insights for that breakthrough may only exist in the soul of your team when passionately led in a new direction.

No More Fire in the Belly

Lastly, arriving at a “benchmark destination” can instill a false sense of security that you’ve become the segment leader. It sends the message that “if we’re the best, why should we continue to improve?” So, while relevant benchmarks are important to understand and apply, I’ll leave you with this simple notion — let cool heads (yours) prevail in your pursuit of unimpeachable benchmarks. A little benchmark knowledge can go a long way. And sometimes, just sometimes, there can be great value in understanding your own execution before gauging performance with someone else’s tape measure.

For additional perspective, download Kaufman Global’s White Paper: Defining World-Class Practices.