Tuesday, January 31, 2012

Focus and Leverage Part 82

This is my final posting on performance metrics and I want to thank Bob for asking me to contribute to his great blog.  By the time this is posted, some of you will have purchased, received and perhaps read our new book Epiphanized:  Integrating Theory of Constraints, Lean and Six Sigma.  Bob and I both hope you enjoy the book and have as much fun reading it as we did writing it.  And now, on to utilization and a summary of the past three postings.
Utilization is most often the metric organizations should be using instead of the efficiency metric.  Unlike efficiency and productivity, which are ratio calculations based on input and output, the utilization metric is a proportional measure of resource time used divided by resource time available.  Simply stated:
Utilization = Actual time used / Time available

As an example, suppose we wanted to measure machine utilization for a specific machine in our system.  Suppose this machine was available to work one full shift, which for our example would equal one shift of 8 hours in duration.  During this 8-hour shift there are 480 minutes available to work (8 hours X 60 minutes = 480 minutes).  Of those 480 minutes available, let's say our machine was busy making parts for 395 minutes.  Then:
395 (Time used) / 480 (time available) = 82.29% Utilization.
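The arithmetic above can be sketched in a few lines of Python (the function and variable names are my own, purely illustrative):

```python
def utilization_pct(time_used, time_available):
    """Utilization as a percentage of available resource time actually used."""
    return 100.0 * time_used / time_available

shift_minutes = 8 * 60                      # one 8-hour shift = 480 minutes
print(f"{utilization_pct(395, shift_minutes):.2f}% utilization")  # 82.29%
```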
In many organizations lower utilization percentages (if they measure them at all) are considered inefficient, and the resource is pushed to operate as close to 100% as possible.  It is possible that lower utilization numbers can be attributed to lengthy set-up time, or periods when an operator is not available (reasons can be, and are, varied), or simply periods of time when the machine has nothing to do.  For many organizations this is an unacceptable situation and they take actions to correct it.  So, in order to improve the utilization (efficiency) of the machine (resource), more work is released to keep the machine active.  Most organizations take these actions not because they must, but solely because they can.  The only reason this action is taken is to preserve the internal utilization/efficiency measure.  However, there is an important lesson to be learned here, and that is: activation of a resource does not necessarily equal utilization of the same resource.  In other words:
Activation ≠ Utilization
Activation is using a machine because you can.  Utilization is using a machine because you must!  There is a vast difference between these two approaches.  If you understand your system, then you understand that high utilization should only be considered at a constraint location and not implemented at non-constraint locations.  It's not important that utilization numbers be at or near the 100% mark for all the resources in the system.  Those actions would be counterproductive to system output.  What is important is that utilization is implemented and monitored at the constraint location of the system.  Subordination should be the implemented rule for all non-constraint locations.  In other words, let the non-constraints work at a level necessary to support the needs of the constraint.  No more - no less.
Are these metrics useful or evil?  The answer is: it depends.  It depends on whether you calculate and use them correctly.  If these metrics are used to define a baseline measure, they can help you answer the question: "Did we improve?"  Used incorrectly, these metrics will result in poor decision making.
In a production system the productivity metric is probably the most important, if used correctly.  It is possible to combine the sub-elements of productivity and define the common denominator of dollars for both the production process and the monetary process.  Using productivity in this fashion will give you the clearest and most accurate assessment of your system.

The utilization measure is best used to monitor the constraint in a system, but only the constraint and not the non-constraints.  It’s a good metric, but not when it is applied everywhere.  Implementing utilization at the constraint will help you focus on the most important location in your system.  By monitoring this location you can determine how much leverage you have in the system to meet the growing loads demand.

Efficiency is by far the worst metric in a production system.  It’s just not a good fit.  It’s like trying to put a size 9 shoe on a size 10 foot.  It requires a lot of manipulation and it doesn’t always feel right.  It usually creates the opposite behavior from what you are really trying to achieve in the system.

Bruce Nelson

Sunday, January 29, 2012

Focus and Leverage Part 81

So now that we’ve talked about Efficiency, let’s turn our attention to the second performance metric, Productivity, and see how it's different.

Productivity is another one of those metrics that has multiple uses and multiple ways of being calculated depending on how it is being used.  You can use it for Economic productivity, Labor productivity, Total factor productivity, Service sector productivity, and several other forms of measurement.  But for our discussion we want to look at the productivity metric as it applies to a production system.  We’re looking for a metric to measure the system and answer the question: “Did we improve?”  For a production system application we can define productivity as the ratio between output and input, or simply

                Productivity = Output quantity/Input Quantity (p = O/I)

Mathematically, the answer is a ratio calculated by dividing the number of units completed (output) by the amount of input used (usually hours).  As an example, suppose you had completed 750 units of work and used 1,000 hours completing the work.  The equation would look like this:

                0.75 = 750 units/1000 hours

In essence, for every hour worked you completed 75% (0.75) of one unit of work.  In this instance, the productivity measure is also the efficiency measure of the system.  Now, when the system is measured, we can answer the question "Did we improve?"  The best means of improving productivity is to increase the number of units produced while the measured time stays the same or decreases.  Or you could decrease the amount of time required to make the same number of units, or more.
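The ratio above is trivially expressed in Python; a minimal sketch (names are illustrative):

```python
def productivity(output_units, input_hours):
    # p = O / I: units completed per hour of input
    return output_units / input_hours

p = productivity(750, 1000)
print(p)   # 0.75 of one unit completed per hour worked
```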

A closer look reveals that productivity can actually be subdivided into two separate processes.  The first is the production process and the second is the monetary process.  The production process is a measurable component of the system, but the monetary process is also measurable, if you do it correctly.  In the Theory of Constraints (TOC) measurements library there is a formula for measuring productivity.  The formula is simple and is expressed as follows:

                Productivity = Throughput/Operating Expense or (P = T/OE)

In order for this to work, both the output and input need a common denominator, which in this case is dollars.  The output can be measured in dollars by using the throughput calculation.  Throughput is defined as "the rate at which inventory is converted to sales."  The monetary calculation for throughput is the selling price of the product minus the total variable costs (TVC).  Total variable costs are defined as the cost of raw material, plus any sales commission (for each product sold), plus the shipping costs.  In other words, you are looking for all the costs associated with a single product.  However, labor costs are not included in the TVC; they are part of the OE dollars.  The difference between the Selling Price (SP) and the Total Variable Costs (TVC) is the Throughput (T).  As an example, suppose we had a product that sold for $1.00 and the TVC was $0.40; then the Throughput (T) would be $0.60.

The Operating Expense (OE) can be expressed in dollars and is calculated as labor, overhead, gas, lights, benefits and all other expenses.  The attractiveness of using this type of measure is that it includes ALL of your expenses and not just the labor expense in the form of hours used.  This measure provides a much cleaner picture of the productivity and not just a partial look.  Also, by knowing the Throughput and Operating Expense numbers it becomes very easy to calculate the Net profit during any given period.  Net Profit (NP) is simply determined by subtracting the Operating Expense from the total Throughput number.  In other words:

                NP = (T – OE).
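The TOC formulas above chain together naturally.  Here is a short Python sketch: the $1.00 selling price and $0.40 TVC come from the example above, but the period totals (units sold, Operating Expense) are hypothetical numbers chosen only to show the formulas at work:

```python
# Throughput (T) per unit = Selling Price (SP) minus Total Variable Cost (TVC)
def throughput_per_unit(selling_price, total_variable_cost):
    return selling_price - total_variable_cost

t_unit = throughput_per_unit(1.00, 0.40)    # $0.60 per unit, as in the example

units_sold = 10_000                         # hypothetical period volume
T = t_unit * units_sold                     # total Throughput dollars
OE = 4_500.0                                # Operating Expense: labor, overhead, etc.

P = T / OE                                  # Productivity = T / OE
NP = T - OE                                 # Net Profit = T - OE

print(f"T = ${T:,.2f}, P = {P:.2f}, NP = ${NP:,.2f}")
```

Because Throughput and Operating Expense share the dollar denominator, both Productivity and Net Profit fall out of the same two numbers.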

The important aspect of accurately determining the productivity metric is converting units to throughput dollars and operating expense to dollars.  The dollars component is the common denominator between units and hours.  Once you have that information, the accuracy of the productivity measurement improves greatly.

In my last posting in this series, I’ll discuss the performance metric Utilization and then summarize what all three of these metrics really mean.
Bruce Nelson

Friday, January 27, 2012

Focus and Leverage Part 80

As promised, in the next several postings, Bruce Nelson will be discussing metrics.  Here is his first of a 3 part series.

Efficiency, Productivity, and Utilization (EPU) ©

Usable Metrics or the Evil Trio
Bruce Nelson
Many businesses, seemingly across all industries, are prone to develop and use some type of metrics in their decision-making process.  Many of these organizations are focused on Efficiency, Productivity and Utilization, or EPU.  Using these metrics as guidelines, many organizations try to forecast their business activity and make a judgment call concerning their current status.  And, in many respects, this is probably not all bad.  However, what is bad is the seemingly common, nonsensical way these metrics are used.
Many business leaders understand that in order to make good business decisions, they must have good data on which to base their decisions.  If the data is incorrect or interpreted incorrectly, it is highly probable that bad decisions will be the outcome.  And, if bad decisions are implemented it can spell the death of an otherwise good company.
If good data is required to make good decisions then collecting, interpreting, and using the data would seem to be of paramount importance.  Useful data collection is a way of using past performance to help predict future performance.  Accurate data can provide the user with the ability to make the necessary course corrections and get the organization back on track and headed in the right direction.  This brings up another important point.  It’s imperative to know where you are going in order to set a course of action on how to get there.  If you don’t know where you are going, then it doesn’t matter what actions you take to get there, which seems to be the way that many organizations make crucial decisions!  Many organizations make decisions not based on what they need or understand, but instead based on what everyone else is doing.  This decision-making process is sometimes referred to as “benchmarking.”  This type of decision thinking only works if you can validate the assumption that what the competitor is doing is correct!  It is possible that what the competitor is doing might make sense for them, but not necessarily for you.  So, wishfully following what the competitor is doing in hopes of having the same effect for your organization is fantasy leadership.
Many organizations use some kind of metric on a daily basis, albeit in a roundabout way.  Consider the instrumentation in your car.  There are measuring devices (gauges and read-outs) designed to keep the driver informed as to the operational stability of the vehicle they are driving.  The gas gauge tracks the consumption of raw material inventory required to keep the system operational.  The tachometer tracks the speed at which the system is working (RPMs) and the speedometer monitors the speed of the system through time.  The temperature gauge, oil pressure gauge, and battery charging system all provide vital data about the status of the system, but only if you understand how to interpret the data and react to it.  If you analyze and interpret the data incorrectly, the system could operate sub-optimally or, worst case, the system could fail.  That is not what we want to have happen.  These same measurement principles hold true for analyzing production systems and business data within an organization.  So, what is important to measure and why is it important?

Somewhere along the way efficiency became “king” of the metrics mountain.  Many organizations are measuring efficiency not because it is really important to them, but rather because somebody else is measuring efficiency (benchmarking).  The assumption being that if the competitor is doing it, then we should be doing it also.  Question: “Is that a valid assumption?”
Efficiency is a metric used by many industries and it is a metric that is used incorrectly most of the time.  When used incorrectly, efficiency will give the false impression of “If we look good, then we are doing good.”  There is hardly a day that goes by that you don’t read about efficiency in the paper, or hear it used on the news.  The new battle cry is; “We must become more efficient at what we do.” Or, “We must improve our efficiency to stay competitive.”  There is a downside to these mottos if, in fact, you measure efficiency incorrectly.
The concept of “efficiency” is often confused with the term “effectiveness.”  Many believe that high efficiency is synonymous with being highly effective.  Not true!  Efficiency is measurable and therefore quantitative.  Effectiveness is non-quantitative and therefore a rather vague concept associated mostly with achieving a goal or objective.
Efficiency also has many models for application including Physics, Economics and other sciences.  It is expressed in terms of a ratio, i.e., the ratio in terms of something produced (units) divided by the resources consumed to produce it (hours), or r = P/C.  However, there are some concerns with trying to use the efficiency model in a production setting.  The most obvious is that the efficiency model doesn’t fit well within a typical production system.  The best measure of the production system is the productivity measure.  In essence, the productivity measure tells you the efficiency of the system, but more on that later.
The mathematical limitation of efficiency is that it can never exceed 100%, and yet there are companies who proclaim efficiency metrics much higher than 100%!  How do they do that?  One simple trick they employ is to measure efficiency as a ratio of standard hours given divided by actual hours used.  As an example, suppose for a period of work time there were 1000 standard hours issued to do the work, but the actual time to do the work was tracked at 500 hours; then 1000/500 = 200% efficiency.  The first hint that this is incorrect comes from the definition of efficiency - there is no variable for what is actually produced!  Measuring efficiency this way only gives you a ratio of the hours given (standard hours) compared to hours used (actual hours).  If efficiency goes up (which it probably will), it translates to the standard hours being incorrect (which they probably are).  In fact, when using this method (and many do) of calculating efficiency, it is very probable that ALL the standard hours could be consumed and not a single unit of product produced!  The metric would tell you that the system is operating at 100% efficiency, and yet not a single unit was produced.  What the measure really says is this: “The more you improve the further away from the goal you get!”  Is this an accurate measure of the system performance?
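The flaw is easy to demonstrate in a few lines of Python (function name and figures are illustrative; the 1000/500 example is from the paragraph above):

```python
# The flawed "efficiency" many plants report: standard hours issued divided
# by actual hours used.  Notice that units actually produced never appears.
def hours_based_efficiency(standard_hours, actual_hours):
    return 100.0 * standard_hours / actual_hours

print(hours_based_efficiency(1000, 500))    # 200.0 -- "200% efficient"
print(hours_based_efficiency(1000, 1000))   # 100.0 -- even with zero units made
```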
In this case, the system looks really good (the efficiency), but the system isn’t performing well at all (the output.)  So, how useful is a metric that paints this picture?  The efficiency metric says you are doing fine and, yet reality says you are missing the goal/objective!  Suppose this was the data output from your system – what would you do?
For obvious reasons the efficiency metric is not reliable, and certainly not an accurate measure to get a clear picture of what is going on.  Even when efficiency is calculated correctly it can still have a devastating effect on a system.  As an example, suppose in your organization the metric was to maintain high efficiency levels all the time.  In this scenario efficiency can be literally interpreted to mean “keep everyone busy all the time.”  The assumption with this thinking is that “busy people” equate to high efficiency – which is true!  But keeping people busy all the time also prompts an organization to buy and release more and more raw materials to achieve its goal.  Buying and releasing more raw materials only serves to increase the work-in-process (WIP) in the system.  Higher WIP levels will also have a negative effect on on-time delivery (OTD) and will cause it to drop as the WIP levels go higher.  A system can actually become so polluted with WIP that it might produce nothing at all.  So, you achieved a very high efficiency, but at what expense?  Understanding these different scenarios begs the question: “Does high efficiency also equal higher levels of productivity?”  Efficiency, when used this way, could prevent good decision making for production systems.  The efficiency metric would better serve its user for calculating the gas mileage of a car, or the efficiency of a gas furnace, but not so much in a production system.  So, if efficiency might not be the best metric for a system, then what is?
In my next posting, we’ll take a look at the merits of Productivity as a metric.
Bruce Nelson

Focus and Leverage Part 79

As I told you in my last posting, I've asked Bruce Nelson to continue on the theme of performance metrics and Bruce is nearly finished writing his piece.  So today, while we're waiting for Bruce to complete his posting, I'd like to talk about metrics in general and although I've posted a similar piece, the key components of what makes a good metric are worth repeating.  I say this because your choice of performance metrics is critical to your long term and short term success.

Performance metrics are intended to serve some very important functions:

  1. Performance measures should be designed and selected based upon the behaviors you want exhibited in your organization.
  2. Performance measures should reinforce and support the goals and objectives of the organization or company.
  3. The measures should be able to assess, evaluate, and provide feedback as to the status of people, departments, products, and the total company.
  4. Performance measures should be objective, precisely defined and quantifiable.
  5. The measures should be well within the control of the people and/or departments being measured and not some abstract number.
  6. Performance metrics must be understood and utilized by the organization as a whole and they must positively impact the system and not individual parts of it.
This last function, especially that they be understood and impactful to the system and not parts of it, is very important.  If people don't understand the metric, they simply won't understand the behaviors that are required to move it in the right direction.  I also believe that companies should develop a hierarchy of submetrics so that even people at the lowest rung in the organizational ladder will understand how their behavior drives the metric in the correct direction.  Let's look at an example.

Suppose your company has selected the performance metric efficiency and you are a production manager.  You're told that your performance appraisal is based upon how high your production unit's efficiency is.  If this was your mandate, how would you make this happen, or what behavior would you exhibit to reach your highest performance?  And remember, your personal appraisal is based upon how high your unit's efficiency is.

If it were me in this position, I'd probably tell my boss that this isn't a good metric to measure me.  But most people would look at the metric and say to themselves, "If I want higher unit efficiencies, then I must run all of my process steps as fast as I can."  So what would that do to the unit?  Well, for one, it would drive efficiencies higher and put me in a position to get a good appraisal.  But what would it do to the organization?  Think about it.....if I run all of the process steps as fast as I could, what would be the organizational impact?  Would we produce and ship more product?  Would we spend less money?  What?

Quite simply, if you run all process steps as fast as you can, the net effect would be as follows:
  1. Your unit's efficiency metric would move to its highest level.....good for your personal appraisal.
  2. Work-in-process (WIP) inventory would grow larger and larger.
  3. Because WIP grows larger, cycle times become extended.
  4. Because cycle times become extended, on-time deliveries decrease proportionally to the level of WIP.
  5. Because on-time deliveries have decreased, customer satisfaction levels fall.
  6. Because customer satisfaction levels fall, sales will decrease.
Need I go any further?  Yes my friends, selecting the "right" performance metrics is critical to your long term survival so reason them out before you select them and above all else, make sure the metric has the total organization in mind.

Bob Sproull


Wednesday, January 25, 2012

Focus and Leverage Part 78

One of the major differences between Theory of Constraints (TOC) thinking and traditional approaches to manufacturing, or any industry for that matter, is the use of performance metrics.  Many companies still hold onto metrics like efficiency and utilization in non-constraints and probably will continue to do so.  Deborah Smith has written an excellent chapter in the TOC Handbook (i.e. Chapter 14) and I encourage everyone to read it.  Whether you know it or not, each of the chapters in the handbook can be purchased individually and many, if not all of them, are downloadable electronically.  Deborah’s chapter is entitled Resolving Measurement/Performance Dilemmas.
Deborah tells us that metrics need to encourage the right behavior, but when you’re dealing with organizations of significant size and complexity, it’s always a challenge to construct a system of local metrics that:
1.    Encourage the local parts to do what is in the interest of the global objective.  I just discussed this in a couple of my most recent postings.
2.    Provide relatively clear conflict resolution between and within the local parts.
3.    Provide clear and visible signals to management about local progress and status relative to the organizational objectives.
Deborah presents a “simple set of six general measurements” that all assume that a valid TOC model has been implemented.  These six measurements are:
1.    Reliability
2.    Stability
3.    Speed/Velocity
4.    Strategic Contribution
5.    Local OE (i.e. Operating Expense)
6.    Local Improvements/Waste
What I want to do in this posting is focus on the metric Stability.  Not that the others aren’t important, but to me getting control of the stability metric presents a huge opportunity for improvement.  I may touch on a couple others to help make a point, but the focus will be on stability.
The objective of the stability metric is to measure or at least get an idea of the amount of variation that is being passed throughout the system in question.  We all agree that having variation and volatility in the system is not conducive to stability.  This is especially true when we’re talking about the system constraint, otherwise known as the drum, simply because the drum is the anchor point of our scheduling system or at least it should be.  Any disruption of the drum schedule creates a lack of synchronization in the rest of the system as well as reducing the capacity of the constraint and the revenue stream.
One measure that is important is drum utilization which is simply a measure of how well the constraint is being used to produce throughput compared to how well it should be doing.  Utilization, which is usually expressed as a percentage, compares the actual time the constraint is used to produce throughput to the total time available.  In other words, utilization is 100% minus the time lost due to starvation, blockage and downtime due to breakdowns.  Keep in mind that every time the utilization of the constraint falls below 100%, we are losing potential revenue so it’s very important to track this metric and to record the causes of the reduction.  Let’s look at some of the causes that we might experience.
·         Starvation of the constraint occurs when it runs out of material being fed to it by an upstream process.  The cause and the length of time the starvation lasted are very important, so record them.
·         Unnecessary/Over-Production is simply a waste of the constraint’s capacity on things that aren’t required.
·         Unplanned and Planned Downtime in the constraint takes away the opportunity to produce throughput.  The cause and length of the downtime should be recorded.
·         Blockages of the constraint occur when the constraint is prevented from running because the operation feeding it experiences downtime.  This is somewhat different than starvation in that any upstream location could be the cause of starvation.  Once again, record the reason and the length of time the constraint was blocked.
There are other reasons or factors that affect the stability of the constraint such as late releases, absenteeism, etc. but the four I listed are the most important.  So now that you’ve collected the causes and times associated with this stability metric, it should be easy for you to develop an action plan to improve the stability of the constraint.  Simply create a Pareto chart of the causes and times and attack the top 20% that account for 80% of the stability problem.  Pretty simple as long as you put the tracking mechanism in place.
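The tracking mechanism can be as simple as a tally of drum minutes lost by cause.  A minimal Python sketch, with hypothetical loss numbers for one 480-minute shift (the causes mirror the list above):

```python
from collections import Counter

# Hypothetical minutes of drum (constraint) time lost by cause during one
# 480-minute shift -- the numbers are made up purely for illustration.
losses = Counter({"starvation": 45, "unplanned downtime": 30,
                  "blockage": 15, "over-production": 6})

available_minutes = 480
drum_utilization = 100.0 * (available_minutes - sum(losses.values())) / available_minutes
print(f"drum utilization: {drum_utilization:.1f}%")   # 100% minus all losses

# A simple Pareto ranking: attack the biggest cause of lost time first.
for cause, minutes in losses.most_common():
    print(f"{cause:>20}: {minutes} min lost")
```

Sorting the causes by lost minutes is exactly the Pareto exercise described above: the top one or two causes will usually account for most of the lost drum time.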
In my next few postings, I’ve asked the co-author of our new book Epiphanized: Integrating Theory of Constraints, Lean and Six Sigma, Bruce Nelson, to continue on the theme of performance metrics.
Bob Sproull

Monday, January 23, 2012

Focus and Leverage Part 77

Over the years I have been asked many times if the teachings of the Theory of Constraints apply to industries outside of manufacturing.  Many of these questions have come from industries that were not manufacturing based, and since TOC had its birth in manufacturing, it seems like a completely understandable question to put forth.  The obvious answer for me to give is a definitive yes.....the teachings of TOC absolutely apply to any industry and to any system within any company.  Let’s talk about why I believe this is true.

In its most basic form the Theory of Constraints teaches us that within any system there is one part of it, above everything else, that limits our ability to move closer to our goal (of making more money now and in the future).  If our goal is more output, for example, then the secret to more output is to first identify the system constraint and then free up more capacity there.  In recent years I’ve been teaching this basic concept by relating it to a piping system used to deliver water from its input, through a series of pipes, until the water reaches its output section as demonstrated in the drawing below.
The water enters the piping system through Section A, then passes through B and so on until it exits Section F.  Clearly, because the diameter of Section C is the smallest, it totally controls the amount of water that flows through the entire system.  It should be obvious that the only way to increase the output of water through this system is by increasing the diameter of the pipe in Section C.  Likewise, increasing the diameter of pipes in any other section will not change the throughput of water through this system.  In TOC terms, Section C is the system constraint.  So how does this concept apply to other processes?  Let’s look first at a manufacturing system and then relate it to other systems outside manufacturing.

Suppose in the drawing below we were producing high quality cannons.  Maybe in Step 1 we were setting up a grinding operation and performed a rough grind to get it into the approximate final shape and this took 1 day to complete.  In Step 2, suppose we had some kind of precision boring operation that required intricate details and it took 17 days on average to complete.  Maybe Step 3 was a semi-finishing step and it took 5 days to complete and then Step 4 was a finishing step that required 2 days.  Theoretically this process for producing cannons took 25 total days, from start to finish, to complete.
Using the same thought processes as the piping diagram, Step 2 of this process is the system constraint, only in this example, the variable is time to complete and not the physical diameter.  And like the piping diagram, if you wanted more throughput of cannons, the only way you could do so would be to reduce the time required in Step 2.  What if, for example, your market requirement was 1 cannon every 10 days?  Without any changes to the constraint, you simply would not be able to meet the needs of this market demand simply because your capacity to supply cannons is 1 cannon every 17 days.  So does it make sense that in order for the cannon process to meet these new market needs, you would need to focus on Step 2 and somehow eliminate the extra 7 days?  Let’s look at a completely different type process and see if these same TOC basics apply.
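Identifying the constraint in this example amounts to finding the step with the longest processing time.  A small Python sketch of the cannon example (step names and structure are my own framing):

```python
# Step times in days for the cannon example: rough grind, precision bore,
# semi-finish, and finish.
step_days = {"Step 1": 1, "Step 2": 17, "Step 3": 5, "Step 4": 2}

constraint = max(step_days, key=step_days.get)
total = sum(step_days.values())
print(f"constraint: {constraint} ({step_days[constraint]} days)")  # Step 2, 17 days
print(f"start-to-finish time: {total} days")                       # 25 days
```

Note that the system's output rate is set by the 17-day constraint, not the 25-day total: you get one cannon every 17 days, so meeting a 10-day market demand requires shrinking Step 2, not the other steps.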

Suppose we were looking at a basic purchasing process where we receive a request to purchase a part, order it, wait for the purchased product to arrive, receive it and finally deliver it to the requestor.  Once again, using the same logic as the piping diagram, we identify that Step 2, because it takes the longest amount of time (i.e. 8 days), is the system constraint.
If we wanted to “speed up” this process, doesn’t it make sense that we would have to focus any improvement efforts on Step 2?  I’m certain that Step 2 is full of wasted steps and that by eliminating or significantly reducing much of this wasted effort, we could improve the throughput of purchase orders through this process.  In fact, if we wanted to significantly improve this process, focusing on Steps 1, 3 and 4 would yield little improvement.

The point is, no matter what type of system you are working in, the basics of TOC apply.  That is, Goldratt’s five focusing steps always yield the most improvements.  It doesn’t matter if you are working in a hospital emergency room environment, a scheduled maintenance environment, a food supply business, a retail operation, or even a military installation, TOC will always help you improve your business as long as you follow these five steps:

1.    Identify the system constraint.

2.    Decide how to exploit the constraint.

3.    Subordinate everything else to the system constraint.

4.    If necessary, elevate the constraint.

5.    Return to Step 1.

So the answer to the original question of whether TOC applies to non-manufacturing environments is a resounding yes!

Bob Sproull

Saturday, January 21, 2012

Focus and Leverage Part 76

Although the implementation and sustainment problems I discussed in my last posting certainly impede progress or limit profitability, in my view there is another, more compelling reason why improvement initiatives have failed in many companies.  Whether you are using Lean or Six Sigma or Lean Six Sigma as your primary improvement tactic, there still seems to be something missing that is limiting your company’s ability to make money now and in the future.  I want to state up front that this missing link is not the methodologies (i.e. Lean, Six Sigma or Lean Six Sigma) themselves because we all know that processes are full of waste and variation that must be reduced or removed.  So what is missing?
My motivation for starting this blog in 2010 was to share the benefit of my experience and my own lessons learned in the 40-plus years I’ve been engaged in improving companies.  I wanted to help businesses flourish and become more profitable.  Because I had been successful in helping businesses improve their bottom line, I felt almost duty-bound to share what I had discovered so many years ago.  What I’ve learned along the way is that if Lean, Six Sigma, Lean Six Sigma, or some other improvement methodology is to be successfully deployed, knowing where to focus your efforts is the key to making money.  I named my blog site Focus and Leverage for a reason.  I wanted to help companies answer the three basic questions that will lead them to new and sustained levels of profitability: what to change, what to change to, and how to make the change happen?
One day in the 1990’s I received a phone call from a man I had always considered my mentor.  It was a simple phone call telling me he needed my help with a company in Kentucky.  I never hesitated and accepted his offer, but I never asked him what the job was.  Silly huh?  I mean, wouldn’t any normal human being offered a job ask that basic question?  You see, I had worked for this man at several other companies, and I trusted him completely.  When I arrived at this company I found out that I was hired to be the General Manager of a plant that was on the verge of shutting down.  You must understand that my background prior to this was all Quality and Engineering, with zero experience in Operations Management.  Talk about being lost…. But I reasoned to myself that I would just listen and learn from the two Operations Managers (OMs) who were at the plant already.  I soon found out that was not the answer, because one of them had just been hired and his background was mostly in job-shop environments.  The other OM had been with the company for 25 years, so he may have been part of the problem.
To make a long story short, in less than 4 months we not only stopped the financial hemorrhaging, we became profitable!  And the lessons I learned at this plant changed my approach to managing improvement forever.  What changed me, you may be wondering?  I found a copy of Eli Goldratt’s masterpiece, The Goal.  I stayed up all night reading it, soaking in the many lessons contained in it.  I had never heard of the Theory of Constraints before, and I became mesmerized by it.  I went out and bought multiple copies of this book for my team, and we even had a daily discussion session about the book.  My team at this plant became very passionate about applying what they had learned, and the improvements came rapidly and they were huge.  What amazed me the most was our on-time delivery rate, which was abysmal when I took over but rose to nearly 100% in very short order!  I knew then that I had found the secret, and it has stayed with me all these years.
So what was it about the Theory of Constraints (TOC) that changed my approach?  Eli Goldratt, the genius author of The Goal and the inventor of TOC, believed that organizations could not maximize and sustain profitability until they maximized the throughput of their total system.  Goldratt taught my team that the sum of all local optimizations does not translate into system optimization.  This simple message was our driving force, and ignoring it is, in my opinion, the quintessential reason why so many Lean Six Sigma implementations have failed to deliver sustained bottom-line results.  Our lesson was simple: improving isolated parts of the system does nothing to improve the total system output.  It was a simple but compelling message for us, and the single most important lesson we learned in those days.
So for me the real message behind the Theory of Constraints’ process of ongoing improvement is to always focus your efforts on achieving system improvement rather than localized improvement.  To achieve this focus, Goldratt developed the following five-step process of ongoing improvement:
1.    Identify the system’s constraint(s).  What part of your system is limiting your ability to deliver more product or service?
2.    Decide how to exploit the system’s constraint(s).  What do you need to do to improve your system throughput?
3.    Subordinate everything else to the above decision.  Don’t ever out-pace or out-run your system constraint.
4.    Elevate the system’s constraint(s).  If after you’ve fully exploited your system constraint and your constraint capacity is still too low, you may need to spend some money to increase it.
5.    If in the previous steps a constraint has been broken, go back to Step 1, but do not allow inertia to cause a system constraint.  Once the current constraint has been eliminated, a new one will take its place immediately, so be prepared to move your improvement resources to your new leverage point.
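The five steps above form a loop, and that loop can be sketched in a few lines of code.  This is a toy illustration only (the resource names, capacities, and the 25% gain per improvement cycle are all invented, not from Goldratt), but it shows the essential logic: the system’s output equals the constraint’s output, improvement effort goes only to the constraint, and after each gain you must look again, because the constraint moves.

```python
def find_constraint(capacities):
    """Step 1: the resource with the least capacity limits system throughput."""
    return min(capacities, key=capacities.get)

def improve_system(capacities, demand):
    """Apply the five focusing steps until throughput meets market demand."""
    while True:
        constraint = find_constraint(capacities)       # Step 1: identify
        throughput = capacities[constraint]            # system output = constraint output
        if throughput >= demand:                       # the market is now the constraint
            return capacities
        # Steps 2 and 4: exploit (and, if needed, elevate) the constraint --
        # modeled here as a hypothetical 25% capacity gain at the constraint only.
        capacities[constraint] = round(capacities[constraint] * 1.25)
        # Step 3: subordinate -- non-constraints are deliberately left alone,
        # since improving them would not raise system throughput.
        # Step 5: loop back and re-identify; the constraint may have moved.

# A hypothetical three-resource plant facing demand for 110 units:
capacities = {"cutting": 120, "welding": 80, "painting": 100}
improve_system(capacities, demand=110)
```

Notice that "cutting" is never touched: spending improvement effort there would have been exactly the kind of local optimization the posting warns against.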
Theoretically, the implications of TOC for improvement initiatives can be profound.  From a Throughput Accounting perspective (which I’ve covered in past postings), reduction in inventory (one of the benefits of Lean) has a functional lower limit of zero, and once you’ve reached it, there is little, if anything, left to harvest.  Lowering inventory can free up substantial dollars, but it is a one-time occurrence.  Operating expense reduction also has a functional lower limit, and when you reach it, further attempts to reduce it can actually debilitate your organization, especially if you reduce OE through layoffs, which, I might add, I never approve of.
Throughput in the TOC world is revenue minus totally variable costs (e.g. cost of raw materials, sales commissions, shipping costs, etc.).  Just creating products to fill up storage racks is not throughput at all, because no money has been received from the customer.  Even though Throughput improvement probably has a practical upper limit, theoretically it does not, and the secret to generating more of it is recognizing the existence of a system constraint.  The simple fact is, you cannot produce more throughput until you free up more productive capacity in your system constraint!  And when the productive capacity of your system exceeds the number of customer orders, the market becomes the new constraint and your new area of focus.  When this happens, things like faster lead times, superior on-time deliveries, world-class quality and even price reductions can be used to generate more sales to utilize this freed-up constraint capacity.  It is important to remember that if you have excess capacity, then as long as your new product pricing covers your totally variable costs (including raw materials) and you have not added labor to achieve this excess capacity, the net flows directly to the bottom line.  Of course, all three of these actions (throughput increases, inventory reductions, and operating expense reductions) have a positive impact on net profit and return on investment.  Think about this: if there were no constraints in your company, wouldn’t your profits be infinite?
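The arithmetic behind that paragraph is worth making explicit.  The numbers below are invented for illustration, but the relationships are the standard Throughput Accounting ones: throughput is revenue minus totally variable costs, and net profit is throughput minus operating expense.

```python
def throughput(revenue, totally_variable_cost):
    """T = revenue minus totally variable costs (raw materials, commissions, shipping)."""
    return revenue - totally_variable_cost

def net_profit(t, operating_expense):
    """NP = T - OE: throughput left over after (largely fixed) operating expense."""
    return t - operating_expense

# Hypothetical month: 1,000 units sold at $50 each, $20 of totally
# variable cost per unit, and $25,000 of operating expense.
t = throughput(1000 * 50, 1000 * 20)    # $30,000 of throughput
np = net_profit(t, 25_000)              # $5,000 of net profit

# Now an extra order of 200 units filled from freed-up constraint
# capacity: no new labor means OE is unchanged, so the entire margin
# over totally variable cost flows straight to the bottom line.
extra = throughput(200 * 50, 200 * 20)  # $6,000 of additional net profit
```

This is why freeing up constraint capacity is so lucrative: the incremental sale more than doubles net profit here even though it adds only 20% more volume, because no new operating expense comes with it.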
Just as a chain has a weakest link, there will always be a resource of some kind that limits the system from maximizing its output.  It is my belief that in order to improve the system’s performance and sustain it, you must locate the weakest organizational link and leverage it by focusing your improvements there.  It may not be obvious to you, but when you are looking for a starting point in any improvement initiative, it should always be the system constraint, simply because it offers the greatest opportunity to increase profits in a relatively short period of time.  Whether your constraint is a flow problem, a quality problem, a capacity problem, a policy problem or even the market, it should be identified as the area on which to focus your efforts.  And to me, this is the other part of the answer to the question of why so many improvement initiatives have failed: they simply lack the right focus to leverage maximum profitability in the shortest amount of time.
I started out this posting by saying that there’s absolutely nothing wrong with Lean, Six Sigma or Lean Six Sigma, and in fact at the plant in Kentucky we had no Black Belts or Lean experts.  What we did have was our new-found knowledge of system improvement rather than localized improvement.  At the end of the day, accepting the concept of system improvement is what really matters.  TOC, Lean and Six Sigma are totally complementary and need each other to deliver maximum and sustained profitability.  So if you want sustained profitability for your company, try integrating the Theory of Constraints, Lean and Six Sigma.  That integration is the focus of Bruce Nelson’s and my new book, Epiphanized:  Integrating Theory of Constraints, Lean and Six Sigma.

Bob Sproull