Saturday, July 30, 2011

Focus and Leverage Part 47

What we discussed in my last posting is what happens in a balanced line, but what about an unbalanced line? What happens with C/T, TH, and WIP? Consider the four-step process below. In this process we see that Step A has a processing time of 1 minute, Step B’s P/T equals 2 minutes, Step C’s P/T equals 3 minutes and Step D’s P/T equals 1 minute. Clearly this is an unbalanced line because the processing times are not all equal.

The assumptions we make here are that no variation exists and only one machine exists at each process step. Here we see that the capacity of each process step is obtained by dividing the processing time in minutes/part into 60 minutes/hour. The capacity of the line is dictated by the bottleneck step which in this process is Step C at 20 parts per hour. Applying Little's law to this process, results in a critical WIP level needed to maximize throughput while minimizing cycle time as follows:

Critical WIP = TH x C/T


W0 = rb x T0

Where: W0 is the critical WIP, rb is maximum throughput, and T0 is minimum cycle time

W0 = 20 parts/hour x 7/60 hour = 20 parts/hr x 0.116667 hr = 2.3333 parts

This extension of Little’s Law tells us that if we want to achieve maximum throughput at minimum cycle time, then our critical WIP is 2.3333 parts, which we round up to 3 parts. Any number of parts above 3 will lengthen the cycle time and any number of parts below 3 will negatively impact throughput. Since we are interested in maximizing revenue and on-time delivery, Little’s Law will help us achieve this. The table below is a summary of what we just discussed.
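For anyone who wants to check this arithmetic, here is a minimal Python sketch of the critical WIP calculation, assuming the best case (no variability, one machine per step); the function name and structure are my own, not from Factory Physics:

```python
# Best-case critical WIP for an unbalanced line (a hypothetical sketch;
# assumes no variability and one machine at each process step).

def critical_wip(process_times_min):
    """Return (rb in parts/hr, T0 in hours, W0 in parts)."""
    t0 = sum(process_times_min) / 60.0         # T0: total raw process time in hours
    rb = 60.0 / max(process_times_min)         # rb: rate of the slowest (bottleneck) step
    w0 = rb * t0                               # Little's Law: W0 = rb x T0
    return rb, t0, w0

rb, t0, w0 = critical_wip([1, 2, 3, 1])        # Steps A through D from the example
print(rb, round(w0, 4))                        # 20.0 parts/hr, 2.3333 parts
```

Rounding W0 = 2.3333 up gives the 3 parts discussed above.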
As we have just seen, the most efficient form of manufacturing from a flow perspective is single piece flow. But having said this, there are times when one piece flow is not appropriate or not even possible, so common sense must play a role in the decision to use it. For example, suppose the next step in a process is bead blasting or heat treatment of parts. Would it make sense to run the bead blaster or a heat treat oven with a single part or with a small batch? From a manufacturing efficiency perspective, a small batch probably makes more sense.

We always want to minimize the non-value-added activities in a manufacturing process which includes travel time. If one piece flow is not possible and transfer batches are needed, then one way to keep the transfer batch size small is through the use of cellular manufacturing. Cellular manufacturing positions all work stations needed to produce a family of parts in close physical proximity with each other. Because material handling is minimized, it is much easier to move parts between stations in small batches.

Since some processes don’t lend themselves to one piece flow and are better served by producing in batches or lots, how do we know what that batch size should be? Once again we turn to Hopp and Spearman1. In 1913 Harris developed a mathematical model to compute the optimal manufacturing batch size. His model, the Economic Order Quantity (EOQ), is considered the foundation of research on inventory management. In order to develop this model, Harris made six assumptions as follows:

1. Instantaneous production. There is no capacity constraint, so the entire lot is produced simultaneously.

2. Immediate delivery. There is no lag time between production and availability to satisfy demand.

3. Demand is deterministic. There is no uncertainty about the quantity or timing of demand.

4. Demand is constant over time. Demand is linear meaning that if annual demand was 365 units, it translates into a daily demand of one unit/day.

5. A production run incurs a fixed setup cost. Regardless of the size of the lot or status of the factory, the setup cost is the same.

6. Products can be analyzed individually. Either there is only a single product or there are no interactions (e.g. shared equipment) between products.

As Harris developed his model, he assumed constant, deterministic demand, with Q units ordered whenever the inventory reached zero, giving an average inventory of Q/2 (i.e. maximum plus minimum divided by 2). Harris expressed the holding cost of this inventory as hQ/2 per year, with h being the holding cost in dollars/unit/year. He next added the setup cost, A, per order, contributing AD/Q per year, with D equal to annual demand, since D/Q orders must be placed per year to satisfy that demand. Finally, Harris included the production cost per unit, c, or cD per year, to complete the equation for total annual cost:

Y(Q) = hQ/2 + AD/Q + cD

Without going through “higher mathematics” as Harris put it, we can find the value of Q that minimizes Y(Q), or the economic order quantity (EOQ):

EOQ = √(2AD/h)

The inference we can take away from this formula is that the optimal order quantity increases with the square root of the setup cost or the demand rate and decreases with the square root of the holding cost. What this all boils down to is that there is a tradeoff between lot size and inventory. Increasing the lot size increases the average amount of inventory in the factory, but also reduces the frequency of ordering. By using a setup cost to penalize frequent replenishments, Harris was able to articulate this tradeoff in concise financial terms.
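As a quick illustration, here is a small Python sketch of Harris’s formula; the setup cost, demand and holding cost below are made-up numbers for illustration only:

```python
import math

def eoq(A, D, h):
    """Economic order quantity: Q* = sqrt(2AD/h)."""
    return math.sqrt(2 * A * D / h)

def annual_cost(Q, A, D, h, c):
    """Harris's total annual cost: Y(Q) = hQ/2 + AD/Q + cD."""
    return h * Q / 2 + A * D / Q + c * D

# Illustrative (assumed) values: A = $100/setup, D = 1000 units/yr, h = $5/unit/yr
q_star = eoq(100, 1000, 5)
print(q_star)                                  # 200.0 units per order
```

Note that doubling the demand D increases Q* by only a factor of the square root of 2, which is exactly the square-root behavior described above.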

There is another law, the law of move batching, presented by Hopp and Spearman1 that suggests one of the easiest ways to reduce cycle times in some manufacturing systems is to reduce transfer batch sizes. This law states that cycle times over a segment of a routing are roughly proportional to the transfer batch sizes used over that segment, providing there is no waiting for the conveyance device. The bottom line here is that by holding the transfer batch size to its optimal level, cycle time will also be optimal. So, if one piece flow isn’t ideal for your process, then at least calculate the optimal transfer batch size.
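To see the proportionality this law describes, here is a rough sketch under the same no-variability, no-waiting assumptions used throughout. The formula is a simple pipeline model of my own construction, assuming identical stations and a lot size evenly divisible by the transfer batch:

```python
def routing_cycle_time(Q, k, n, pt=1.0):
    """Cycle time for Q parts moved in transfer batches of k through n
    identical stations with processing time pt per part (no waiting)."""
    return Q * pt + (n - 1) * k * pt           # last transfer batch clears the remaining stations

# 10 parts through our familiar 4-station, 1-minute-per-part line:
for k in (1, 2, 5, 10):
    print(k, routing_cycle_time(10, k, 4))     # 13, 16, 25, 40 minutes
```

One piece flow (k = 1) gives 13 minutes, while moving the whole lot at once (k = 10) gives the familiar 40 minutes of batch and queue: cycle time grows roughly in proportion to the transfer batch size, just as the law states.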

1 Factory Physics by Hopp & Spearman

Bob Sproull

Saturday, July 23, 2011

Focus and Leverage Part 46

I mentioned in my last blog that the batch and queue production system is the worst possible scenario for a company, but that isn’t exactly true. If a company practices batch and store production, whereby instead of moving the material on to the next process, they move it to a storage location, then the cycle time becomes even more protracted!

Continuing on, suppose instead of producing material in batches, when a part is completed in one station, it moves immediately to the next station, and then on to the next until it is completed (remember our four-station process in the following figure?). This type of production is referred to as single piece or one piece flow: the concept of moving one work piece at a time between individual work stations. One piece flow has several distinct benefits, like keeping WIP at low levels, encouraging work balance and improving quality, but in a system like this, what happens to cycle time? Let’s take a look.
Let’s assume, as before, that the processing time for each work station is 1 minute, and let’s start with a WIP level of just 1, meaning only one part in the system at a time. The first part is processed through work station A in 1 minute, continues immediately to station B, then after another minute to station C, and after another minute to station D. After a fourth minute the first part has been completed, and only then is the next part introduced at station A. The total time each part remains in the process is exactly 4 minutes. The throughput is 1 part every four minutes or 0.25 parts per minute (15 parts per hour).

The question is, “What would happen if we increased WIP to 2 instead of 1?” Once again we begin measuring as soon as the first part enters work station A. We know that this first part will take one minute before it is passed to station B. At the same time, the second part is introduced to station A. These two parts follow each other through the 4-station line, so both remain in the process for 4 minutes total. Therefore, this system produces 2 parts every 4 minutes or 0.50 parts per minute (30 parts per hour). But what happens when we increase the WIP even more?

The table below contains all WIP values from 1 to 10 and, as you can see, there is an interesting phenomenon that takes place in this process when the level of WIP reaches 5 parts. If we assume that the process is full (one part at each station) and each part is ready to be processed, then when the fifth part is introduced to work station A, it must wait until the station has finished processing the fourth part and that part moves to station B. Therefore, the fifth part remains in the system for 5 minutes. Each time a part is completed, the next part is introduced at station A and waits one minute before it can proceed. Look at the column for cycle time. As long as the system has no more than four parts in it, the cycle time remains constant at 4 minutes. But as soon as the fifth part becomes part of the system, the cycle time begins to increase by one minute for each additional part of WIP.
This demonstrates that increasing WIP levels doesn’t result in a corresponding increase in cycle time until the process is full or until its critical WIP level has been reached. For this example, the critical WIP level is four parts. If WIP is increased beyond this critical WIP, parts simply stack up and wait to be processed, causing the cycle time to increase.

Equally interesting, however, is that reducing WIP levels below the critical WIP (i.e. in this example 1, 2 or 3 parts) results in a corresponding decrease in throughput! As you can see in the table, when WIP is at its critical WIP level of 4 (i.e. the system is full) the throughput is at its maximum value of 1 part per minute or 60 parts per hour. But when the WIP level is reduced to 3, throughput drops from 1 part per minute (60 parts per hour) to 0.75 parts per minute (45 parts per hour).

The following figures demonstrate the relationship between WIP and Throughput and WIP and Cycle Time. In this example, there is one WIP value that results in minimum cycle time and maximum throughput which is what we always want. Any less WIP and we lose throughput with no decrease in cycle time. Any more WIP and we increase cycle time with no increase in throughput. It is interesting to note that for a balanced line (i.e. all individual processing times are equal), the critical WIP level will always equal the number of process steps. For unbalanced lines, this is not the case.
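These best-case relationships can be sketched in a few lines of Python (my own formulation of the zero-variability case; rb is the bottleneck rate and T0 the raw process time):

```python
def best_case(wip, rb, t0):
    """Return (cycle time, throughput) at a given WIP level, assuming no variability."""
    w0 = rb * t0                               # critical WIP: W0 = rb x T0
    ct = t0 if wip <= w0 else wip / rb         # C/T stays flat until the line is full
    th = wip / t0 if wip <= w0 else rb         # TH is capped at the bottleneck rate
    return ct, th

# Four balanced 1-minute stations: rb = 1 part/min, T0 = 4 min, so W0 = 4
for w in range(1, 11):
    ct, th = best_case(w, 1.0, 4.0)
    print(w, ct, th)
```

Running this reproduces the table: cycle time holds at 4 minutes through WIP = 4 and climbs one minute per part thereafter, while throughput climbs to 1 part per minute at WIP = 4 and then flattens.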

What we’ve learned here is that too much WIP results in longer cycle time and increased holding costs while too little WIP results in decreased throughput and lost revenue. This means that there is an optimum amount of WIP (i.e. critical WIP) that should be in a system. Even though there are many “experts” who believe in the concept of “zero inventory,” we now know that WIP should never realistically be zero! In my next posting, we’ll take a look at what happens to WIP, Cycle Time and Throughput in an unbalanced line (i.e. when processing times are not equal).  This whole series of blogs is centered around the concept of factory physics.

Bob Sproull

Thursday, July 21, 2011

Focus and Leverage Part 45

In my last blog I discussed the relationship between Cycle Time (C/T), Processing Time (P/T), Work-In-Process (WIP) and Throughput (TP). In this blog I want to discuss something called Little’s Law and why it is imperative to keep cycle times as low as possible as well as the importance of reducing variability.

In 1961 John Little published a mathematical proof known as Little’s Law which states that throughput is always equal to WIP divided by cycle time, or stated mathematically,

Throughput = WIP / Cycle Time

Graphically, the relationship between batch size (WIP) and cycle time can be seen in the figure below. For any batch size (WIP level) the curve clearly behaves in a linear fashion. As a matter of fact, for this example the equation for this curve is:
y = 4x

Little’s Law (TP = WIP/C/T) implies that reducing cycle time and reducing WIP are essentially equivalent activities as long as throughput remains constant. But we know that reducing WIP without reducing variability will cause throughput to decrease (Variability Buffering Law). The real message here is, variability reduction is an extremely important component of WIP and cycle time reduction initiatives.
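As a quick sanity check, Little’s Law reproduces the numbers from the earlier batch example (a trivial sketch, nothing more):

```python
def throughput(wip, cycle_time):
    """Little's Law rearranged: TP = WIP / C/T."""
    return wip / cycle_time

tp = throughput(10, 40)                        # 10-piece batch, 40-minute cycle time
print(tp, tp * 60)                             # 0.25 parts/min, 15.0 parts/hr
```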

Before leaving this subject, I want to discuss the importance of keeping cycle times as short as possible, especially in the constraint operation. Hopp and Spearman (1) provide five reasons why this should be your objective:

1. Better responsiveness to the customer. If it takes less time to produce product, then the lead time to the customer can be shortened. Shorter lead times can result in increased sales.

2. Maintaining flexibility. Changing the list (backlog) of parts that are planned to start next is less disruptive than trying to change the set of jobs already in the process. Since shorter cycle times allow for later releases, they enhance this type of flexibility.

3. Improving quality. Long cycle times typically imply long queues in the system, which in turn imply long delays between defect creation and defect detection. For this reason, short cycle times support good quality.

4. Relying less on forecasts. If cycle times are longer than customers are willing to wait, production must be done in anticipation of demand rather than in response to it. Given the lack of accuracy of most demand forecasts, it is extremely important to keep cycle times shorter than quoted lead times, whenever possible.

5. Making better forecasts. The more cycle times exceed customer lead times, the farther out the forecast must extend. Hence, even if cycle times cannot be reduced to the point where dependence on forecasting is eliminated, cycle time reduction can shorten the forecasting time horizon. This can greatly reduce forecasting errors.

In my next blog I want to expand on the concept of batch and queue production, provide a bit more insight into why it might be the worst possible way to manufacture products, and to introduce the concept of critical WIP level.

(1)Factory Physics – Hopp and Spearman

Bob Sproull

Monday, July 18, 2011

Focus and Leverage Part 44

In this blog I want to perform a simple mental exercise to demonstrate how processing time, cycle time, throughput and WIP are interrelated. Let’s consider a simple four-step production line where the processing time (P/T) is exactly one minute for each work station (i.e. a balanced line). The figure below is an example of such a production process where raw material enters the process at Step A and then progresses to Steps B, C and finally Step D. This process is set up to produce parts in batches of 10 pieces at a time. The question is, “How long will it take to complete all 10 pieces?”

Before proceeding, let me provide some simple definitions for these four entities: cycle time (C/T), processing time (P/T), throughput (TP), and work-in-process inventory (WIP):

Cycle time (C/T) in this context will be defined as the total amount of time material spends in a production system being converted from raw material to finished product. Cycle time is measured in units of seconds, minutes, hours, days and even weeks, depending upon the product being produced.

Processing time (P/T) is the time required to process product through a single work station and, like cycle time, it too is measured in seconds, minutes, hours, days and even weeks, depending upon the product being produced.

Throughput (TP) is the rate at which material is processed through a production line and is measured in units of product per unit time (e.g. pieces per hour, units per week, etc.).

Work-in-process inventory (WIP) is the amount of product in the process that is not yet complete and is waiting for additional work to be done on it.

Since each piece takes 1 minute of processing time at the first station, a total of 10 minutes will be needed to process the entire batch through work station A. The batch of 10 is then transferred to work station B and 10 minutes are also required at work station B, and so on until all 10 parts are completed in work station D. Therefore, ignoring transport time between steps, it would take exactly 40 minutes to process the entire batch of 10 parts through the process. Each part spends 40 minutes in the system, so the cycle time (C/T) is 40 minutes. The throughput (TP) is 10 parts every 40 minutes or 0.25 parts per minute or 15 parts per hour (i.e. 60 minutes/hour x 0.25 parts per minute = 15 parts per hour).

Suppose the factory decides to change its batch size from 10 to 4. What is the impact on cycle time and throughput? Each part still requires 1 minute of processing time at work station A, so all four parts would take four minutes total to pass through station A. Likewise, 4 minutes would be required to process the batch of four parts through stations B, C and D. Again, ignoring transport time, it would require a total of 16 minutes of cycle time to process the batch of 4 parts through this process. The throughput is 4 parts every 16 minutes or 0.25 parts per minute or 15 parts per hour, so the throughput has not changed, but the cycle time is significantly less.

This same exercise can be repeated for any batch size, as seen in the table below, and the results remain the same. No matter what the batch size (WIP), the throughput always remains the same, but look what happens to cycle time. Because the parts are transferred from station to station in batches, rather than one piece at a time, the cycle time grows as a function of batch size. This style of production and production control is characteristic of the mass production mindset and is referred to as batch and queue or batch and push production, and it represents the worst possible way to process material through a factory!
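The batch-and-queue arithmetic above can be sketched in a few lines of Python (my own formulation; it assumes no variability and ignores transport time, just like the exercise):

```python
def batch_and_queue(batch_size, stations=4, pt=1.0):
    """Return (cycle time in minutes, throughput in parts/hr)."""
    ct = stations * batch_size * pt            # the whole batch waits at each station
    th = batch_size / ct * 60                  # one batch is completed every ct minutes
    return ct, th

for b in (4, 10, 20):
    print(b, batch_and_queue(b))               # TH stays at 15 parts/hr; C/T grows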
In this type of production, the level of WIP (batch size) has a pronounced effect on cycle time, but absolutely no effect on throughput! For all of you production managers who believe in or insist that it is faster and more efficient to process material in large batches, I hope this exercise has at least made you think about batch size and how it affects C/T, P/T and TP.

In my next few blogs I plan to expand on this exercise and introduce you to something called Little’s Law and how it can have a profound impact on your throughput.  In addition, I want to lay out why it's so important to minimize cycle times and why having a balanced line is not the best option.

Bob Sproull

Tuesday, July 12, 2011

Focus and Leverage Part 43

Dedication to Dr. Eliyahu Goldratt

It’s been a little over a month since Dr. Eliyahu Goldratt passed away and I thought it would be a fitting tribute to dedicate this blog posting to the man who has shaped my career more than any other man (or woman) ever has. For those of you who may not be familiar with Dr. Goldratt, he is the man who created the Theory of Constraints.  Yes, W. Edwards Deming and Taiichi Ohno impacted me as well, but it wasn’t until I picked up and read a copy of The Goal that everything fell into place. When I was a much younger man, back in the early 1980’s, I was deep into TQM and I followed Dr. Deming’s teachings. I learned all I could about his philosophies, tools and continuous improvement techniques.  I even took the time to memorize Dr. Deming's 14 points and tried to apply them wherever I could. I remember thinking back then that if I could be half as good as Deming, I could be so successful.

And then along came Taiichi Ohno. Wow, was I ever impressed! Actually, it was Womack and Jones that really impressed me, but the more I read, the more I understood that it was Ohno who developed what we now call Lean. Ohno created what he called the Toyota Production System (TPS) and I absolutely loved it! I kept thinking that if I could just “Lean out” the company I was working for, we could dominate the markets. Lean became almost an obsession for me. I couldn’t read enough or learn at a fast enough rate to suit me! So here I was, with Deming and Ohno shaping how I attacked processes. I just figured that if I could reduce variation and waste everywhere, I’d become a celebrity of sorts. I read everything, but the two books I enjoyed the most were Lean Thinking by Womack and Jones and The Toyota Way by Liker.

Just when I thought I had learned everything about improvement, I picked up a couple of books on Six Sigma, The Six Sigma Way by Pande, Neuman and Cavanagh and Six Sigma by Harry and Schroeder. Now I had a process map for improvement…DMAIC…and once again I had an obsession. So much so that I had to rush out and get my Black Belt certification and then later on my Master Black Belt. I was flying high back then, but there was only one problem….I really wasn’t seeing much bottom line improvement. What was I missing? I had even combined Lean and Six Sigma before it was popular to do so. Yes, there was improvement to the processes, but…….

Enter Dr. Eliyahu Goldratt. I remember well the impact The Goal had on me. I had been asked to take over a manufacturing facility and either turn it around or shut it down. My background had been almost exclusively Quality and Engineering, so what did I know about operations? Very little! I went to the library, found a copy of The Goal and stayed up all night reading it. The next day, I bought all of my direct reports a copy and then proceeded to have a book reading where we discussed it on a regular basis. Keep in mind that I read The Goal well before I learned Lean and Six Sigma, but one day it all clicked for me. I read everything I could find about the Theory of Constraints and it changed my entire approach to continuous improvement or at least the way I looked at it. I owe Dr. Goldratt a debt of gratitude for impacting me the way he did through his writings and teachings. My one regret is that I never got to meet him. But just for the record, we not only turned that failing plant around, it became a model for the rest of the company.

In 1998 I began combining TOC with Lean and Six Sigma and went to work for a TPS consulting company. I used TOC to identify the system constraint and then used Lean and Six Sigma to reduce waste and variation in the constraint. I worked hard to develop a methodology that I could share with the world. In 2009 I started writing a book on this integration and in 2010 my latest book, The Ultimate Improvement Cycle – Maximizing Profits Through the Integration of Lean, Six Sigma and the Theory of Constraints, was published. Since then there have been a couple of books that have also been published under the name of TLS. Most notably, Profitability With No Boundaries by my friend Bob Fox and Russ Pirasteh and Velocity by Jacobs, Bergland and Cox (co-author of The Goal).

The world owes a great deal to Dr. Goldratt. He gave people time and had such incredible patience. He personally mentored and guided many people in his brief stay with us. He gave people confidence and encouraged them to apply what they had learned from him to their work and as a result, the world is a better place. What started out as a manufacturing improvement methodology has grown to include healthcare, government, agriculture, and other industries. Even on his death bed we are hearing that he was sharing new ideas, insights and breakthroughs with people who committed to transfer this new knowledge to the TOC Community. Several years ago I was fortunate enough to become a TOC Jonah by learning how to apply and use Goldratt’s Thinking Processes and that changed my approach to improvement even more.

Goldratt is leaving a legacy for all of us to carry on and he will be greatly missed. I want to personally thank Dr. Goldratt for giving me the opportunity to grow and flourish and pass on his teachings to as many people as I can. This is the reason I write this blog. Thank you Dr. Goldratt….you will be missed.

Bob Sproull

Saturday, July 9, 2011

Focus and Leverage Part 42

I was reading the TOC-Lean Institute’s latest newsletter yesterday written by Dr. Ted Hutchin from England. If you don’t receive this newsletter, you really should because it’s always full of valuable tips and insights (www.toc-lean). Anyway, in this issue, Ted had a discussion about Deming, Ohno and Goldratt and he pointed out something that they all had in common, their dislike of traditional cost accounting (CA) for making operational decisions. They all agreed that we should be more concerned with things that result in system improvements rather than localized improvements. As I read his newsletter, I thought he probably should have included Henry Ford into this mix of “giants”, as Ted refers to them. All four of these men unsuccessfully fought against the dominant paradigm of Cost Accounting (CA). I say unsuccessfully because, even though all of them disliked using CA for decision making, and openly spoke out against it, many companies still use it to their detriment.

In the early twentieth century, the pioneers of the Industrial Revolution challenged "financial" accounting. In fact, Ford said that “We should not let Cost Accounting run the business!” Ford believed that Cost Accounting was simply a data dump whereas Cost Management was "information." Ford said that financial information signals to us that something is wrong, but it fails to tell us what is wrong. He further explained that Cost Accounting has only two real purposes, tax accounting and financial statements.

One of Ford’s famous quotes was, "We have never considered any costs as fixed. Therefore we first reduce the price to a point where we believe more sales will result, then we go ahead and try to make the price. We do not bother about the costs. The new price forces the cost down." (Ford, 1922)

Like Ford, Ohno focused in on the flawed judgments resulting from cost accounting thinking. To quote Ohno, "If you insist on blindly calculating individual costs and waste time insisting that this is profitable or that is not profitable, you will just increase the cost of your low volume products. For this reason there are many cases in this world where companies will discontinue car models that are actually profitable, but are money losers according to their calculations. Likewise, there are cases where companies sell a lot of models that they think are profitable, but in fact are only increasing their losses." He discussed the importance of not letting your understanding be clouded by thinking with the accounting mindset. Another of Ohno’s famous quotes was, “It was not enough to chase out the cost accountants from the plants. The problem was to chase cost accounting from my people’s minds.”

Ohno believed cost accounting thinking was the biggest obstacle he had to overcome in developing his TPS system. Workers believed they should make each operation very efficient, which often meant running big batches of parts. But Ohno knew this was detrimental to the total system.

Deming told us that a common mistake was to assume that an organization can be managed on the basis of economic performance. In fact, Deming said that using cost accounting metrics to make decisions is like “driving by looking in the rear view mirror.” As Deming said: “…we need good results, but management by results is not the way to get them”

Deming also said, “What we need to do is learn to work in the system, by which I mean that everybody, every team, every platform, every division, every component is there not for individual competitive profit or recognition, but for contribution to the system as a whole on a win-win basis.” “Eliminate numerical quotas, including Management by Objectives.” The following is an excerpt from Chapter 4 of The New Economics, second edition by W. Edwards Deming:

Accounting-based measures of performance drive employees to achieve targets of sales, revenue, and costs, by manipulation of processes, and by flattery or delusive promises to cajole a customer into purchase of what he does not need (adapted from the book by H. Thomas Johnson, Relevance Regained, The Free Press, 1992).

Like Ford, Ohno and Deming, Goldratt was very much opposed to using traditional cost accounting metrics to make decisions. In The Goal, Alex learns from Jonah several concepts which are directly opposite from what he has been taught before about business operations. Jonah explains that:

1. Money is more important to management than efficiency.

2. Cost accounting is the number one enemy of productivity.

3. A plant in which everyone is working all the time is inefficient.

Jonah points out that the best way to increase profits is by simultaneously increasing throughput while reducing inventory and operating expenses. Jonah also explains that a balanced plant is where the capacity of every resource is balanced exactly with demand from the market. However, the closer you come to being a balanced plant the closer you are to bankruptcy.

So if these four great men were all in harmony in their opposition to cost accounting, why is it that most companies still use CA in their daily decision making? There are serious shortcomings associated with both GAAP and Absorption Accounting, not the least of which are that they encourage inventory accumulation and “paper profits” through overproduction; defer production expenses into WIP and Finished Goods; and rely on allocations that further distort the true costs of production, all of which lead to inappropriate decisions on pricing, staffing, capacity and scheduling.

I don’t know about you, but since these four truly great men believed that traditional cost accounting was not in the best interests of companies, maybe we should all take heed of their advice and opinions.

Bob Sproull

Monday, July 4, 2011

Focus and Leverage Part 41

In this blog I'd like to try something a bit different.  I'm presenting a link to an interview I did with a man named Joe Dager from Business901.  I think it ties in nicely with this concept of focus and leverage.  I hope you like it.  Just click the link and it should take you there.


Sunday, July 3, 2011

I need your help....

If anyone tried to leave a comment today and was denied access or received a message that said something like "Unable to complete your request," I would appreciate it if you could send me an email to let me know.  Also, there should be something called a bX code listed, and Google needs this code to be able to pinpoint the problem.  Here's what Google sent me to explain the bX code:

If you've been blogging for any amount of time, at one time or another, you've seen something like "We're sorry, but we were unable to complete your request.  When reporting this error to Blogger Support or on the Blogger Help Group, please:
* Describe what you were doing when you got this error.
* Provide the following error code and additional information:  bX-sp4hmm"

Thanks everyone and I'm so sorry for this inconvenience.

Still not fixed

The fix that Google recommended did not fix the problem, so once again, I apologize to everyone.  I notified Google again.

Message to Followers of this Blog

It appears as though the change I made is now allowing comments to be received.  I would appreciate it if someone who follows this blog would leave a comment to confirm what I believe to be true.  Thanks everyone for your patience and I will be posting a new Focus and Leverage entry soon.


Message to Followers of this Blog

Apparently I had inadvertently checked the box that didn't allow comments and in order to fix this, I must go back and repost the old posting with the correct box.  Please bear with me while I do this.  Comments stopped coming in in November of 2010, so I will go back and fix all of them.  Sorry for any inconvenience I have caused to anyone.