Thursday, September 26, 2013

Focus and Leverage Part 252


This posting is the last in our series comparing Cost Accounting and Throughput Accounting, written by Bruce Nelson in our book Epiphanized, and I hope you have enjoyed it.  So many of the companies I consult with to improve their profitability are still suffering from the stranglehold of Cost Accounting, and I hope this series has helped you better understand the stark differences that exist between these two distinctly different accounting methods.

The decision-making process becomes much easier when these factors (i.e. T, I and OE) are considered.  The movement, either up or down, of these three measures should provide sufficient information for good strategy and good decisions.  Any good decision should be based on global impacts to the company and not just a single unit or process in isolation.  If your thinking is limited to the lowest level of the organization, and you are focused on the wrong area, then the positive impact will never be seen or felt by the entire organization.




 
Figure 1. Common Practice vs. Common Sense

If we compare these two concepts at the highest level, then CA is all about the actions you take to try and save money, while TA is about the actions you take to make money.  Once you’ve made the cost reductions and you still need more, what do you do next?  Where else can you reduce costs?   On the other hand, making money, at least in theory, is infinite.  What is the limit on how much money your company can make now?  Figure 1 compares the top level priorities of these two accounting approaches. With these differences in priorities it is easy to see why CA is focused on saving money and TA is focused on making money.  So consider the real goal of your company before you decide which path to take.


You can pick up the newspaper almost any day of the week and see the effects of these priorities. You can read about company XYZ that is going to lay off 500 employees in order to reduce costs and become more efficient and align themselves to be more vertical with the customer and . . . blah, blah, blah!  What these companies are really saying is they have forgotten how to make money.  They are so focused on saving money that they have forgotten what the real goal of the company is.


So, how did all of this come about?  Why are things happening the way they are?  If all of this CA and saving money is so good, then how come so many companies seem to be in trouble or, worse yet, bankrupt?  There are many reasons, and some could be debated for weeks, if not months or years.  But however many reasons there may be, they are not all equal.  Some reasons are bigger players than others, and as such have had a far greater impact.  Let’s look at the cost model associated with both the CA and TA concepts.  It provides an interesting history about why things are the way they are.  Figure 2 defines the cost model concept for both CA and TA.


The product depicted in Figure 2 is exactly the same for both models. It indicates the same selling price, same manufacturing process, same everything.  In the CA model you notice the layers of allocated cost that are applied to each product as some percentage of the cost, or allocated rate. The sum total of all of these costs, whatever it may be, equals what CA considers to be the cost to manufacture.  Let’s look at each layer.
 
Figure 2. Cost Model Comparisons

Raw Materials—This is the total cost of all the raw materials used in the product you make.  An average raw material cost for most companies might be around 40%, but some can, and do, go much higher.

Labor Costs—This is the allocated labor cost per part.  It is usually calculated based on some type of total parts per hour, or day, or production batch, or order, or some other value. The total labor cost is then divided by the number of parts produced to arrive at the percentage of labor to be allocated to each part.

Overhead Costs—This is the allocated percentage per part to pay for all of the overhead costs.  These are items like the management staff, administrative jobs, training and so on.  Usually these types of overhead assignments cover many types of parts, but no part in particular.  Human Resources or even Finance are examples of organizations that fit in the overhead category.  You need to have some place to charge and collect your overhead costs.

Corporate General and Administrative—This is the allocated cost that pays for all of the corporate staff and everything they provide.

Profit—This is the location where you add the percentage of profit you want to receive for your product.

Selling Price (SP)—This is the selling price for your product once you’ve gone through and added together all of the manufacturing cost categories and the percentage of profit.


There very well could be more layers in your company, but in the end the hope is that when you add up all of the costs and sell to the market, or consumer, or the next guy in the supply chain, your selling price is always greater than your manufacturing costs.  If it is, then you have made a profit.

But in reality the selling price is not determined by the manufacturer, but rather by the consumer.  If the price is too high they won’t buy your product and will look elsewhere. So if that happens, what are your choices?  Somehow you have to lower your cost and selling price in order to make your product more attractive to the consumer.  So how do you do that?  You could cut your profit margins, but most organizations do not like to do that.  If you can’t do that, then what else do you look for?  How about overhead costs? You can slow down or stop doing some of the things associated with overhead, for example, training.  You could cut your raw materials expense.  Perhaps find a different vendor, or maybe buy cheaper parts.  If you do that, then what about the quality risk?  How about cutting labor costs?  If you could just get more efficient, then your labor costs would go down.  If labor costs go down, then we can make more profit – correct?  I think by now you understand the cycle of chaos that takes place when you focus on efficiency—disaster usually follows in short order.  Such is life in the cost model cycle.


If your company does not pay its employees using the piece-rate pay system, then the assumption behind allocating labor costs, or any costs, is invalid!  So why is the grip of allocated costs so strong in CA?  The assumption that higher efficiency reduces the cost per part is equally invalid.  In today’s reality of the hourly rate, the labor cost remains the same no matter how many parts are produced.

The TA cost model contains only Total Variable Cost (TVC) and Throughput (T).  The calculation is simple: T = SP-TVC.  Throughput, in essence, equals the dollars remaining from selling the product after you have subtracted the TVC cost.  Nothing is allocated, nothing is assumed, it’s just a simple cash calculation from the sale.
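Using hypothetical numbers (a $50.00 selling price with $31.00 of truly variable costs; the figures are my own, purely for illustration), the throughput calculation is just:

```python
# Hypothetical numbers for illustration only: a product that sells for
# $50.00 with truly variable costs of raw materials, commission, shipping.
selling_price = 50.00
raw_materials = 25.00
sales_commission = 4.00
shipping = 2.00

# Total Variable Cost (TVC): only the costs that vary with each unit sold.
tvc = raw_materials + sales_commission + shipping

# Throughput: T = SP - TVC. Nothing is allocated; no labor, no overhead.
throughput = selling_price - tvc
print(throughput)  # 19.0
```

No allocation step appears anywhere; that is the entire point of the TA cost model.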

Bob Sproull

Wednesday, September 25, 2013

Focus and Leverage Part 251


In this posting we will continue our series on Cost Accounting and Throughput Accounting and look into them in a bit more detail.

Throughput Accounting is not necessarily a frontal attack on Cost Accounting.  Rather, it is a different way to view accounting measures, solve issues and manage the company at a much higher level of success and profitability—an update of the accounting rules, if you will, one that is much more in line with current business reality.
 
Throughput Accounting primarily uses three performance metrics—Throughput (T), Investment/Inventory (I) and Operating Expense (OE).   Together these metrics form a simplified methodology that removes much of the mystery of accounting and rolls it into three simple measures.

1.  Throughput is the rate at which inventory is converted into sales.  If you make lots of products and put them in a warehouse, that is not throughput—it’s inventory.  The products or services only count as throughput if they are sold to the customer and fresh money comes back into the business system.

2.  Investment/Inventory is the money an organization invests in items that it intends to sell.  This category includes inventory, both raw materials and finished goods, but also buildings, machines and other equipment used to make products for sale, knowing that any or all of these investments could, at some point in time, be sold for cash.

3.  Operating Expense is all of the money spent generating the Throughput.  This includes rent, electricity, phone, benefits and all wages.  It is any money spent that does not fit within one of the first two TA categories.  When you read and understand these definitions it seems likely that all the money within your company can be categorized to fit within one of these three measures.

In thinking about TA it is important to consider the following thoughts:  TA is neither costing nor Cost Accounting.  Instead, TA is focused on cash without the need for allocation to a specific product.  This concept includes the variable and fixed expenses for a product.  The only slight variation is the calculation of Total Variable Cost (TVC).  TVC is a cost that is truly variable with a product or service, such as raw materials, a sales commission or shipping charges. The sum total of these costs becomes the product’s TVC, and TVC includes only the costs associated with each unit of product.  Some would argue that labor should also be counted as a variable cost per product.  Not true!  Labor is no longer a variable cost; it’s a fixed cost.  With hourly labor measures, you pay employees for vacation, holidays and sick leave.  You pay them while they are making nothing!  The employees cost you exactly the same amount of money whether they are at work or not.  For this reason, labor is an operating expense and not a variable cost associated with products.

The following definitions apply to TA:

1.  Throughput (T) = Product Selling Price (SP) – the Total Variable Cost (TVC).  Or T = SP – TVC.

2.  Net Profit (NP) = Throughput (T) minus Operational Expense (OE).  Or NP = T – OE

3.  Return on Investment (ROI) = Net Profit (NP) divided by Inventory (I).  Or ROI = NP/I

4.  Productivity (P) = Throughput (T) divided by Operating Expense (OE).  Or P = T/OE

5.  Inventory Turns (IT) = Throughput (T) divided by Inventory (I). Or IT = T/I
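The five definitions above chain together directly. A minimal sketch, using dollar figures that are assumed for illustration only:

```python
# Illustrative monthly figures (assumed, not from the text).
T = 120_000.0   # Throughput: sales revenue minus total variable costs
OE = 80_000.0   # Operating Expense
I = 200_000.0   # Investment/Inventory

net_profit = T - OE       # NP = T - OE
roi = net_profit / I      # ROI = NP / I
productivity = T / OE     # P = T / OE
inventory_turns = T / I   # IT = T / I

print(net_profit, roi, productivity, inventory_turns)
# 40000.0 0.2 1.5 0.6
```

Notice that every figure is a plain cash number; nothing has to be allocated to a product before the company-level measures can be computed.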

Some would argue that TA falls short because it is not able to pigeonhole all of the categories of CA into TA categories: things like interest payments on loans, payment of stockholder dividends, or depreciation of machines and facilities. However, this argument appears to be invalid.  Which one of those specific categories can’t be placed into one of the TA categories?  The baseline TA concept is really very simple.  If you have to write a check to somebody else, it’s either an Investment (I) or an Operating Expense (OE).  It’s an investment if it is something you can sell for money at some point in time.  It’s an operating expense if you can’t.  Put the expense in the category that makes the most sense. On the other hand, if somebody is writing a check to you, and you get to make a deposit, then it’s probably Throughput (T).  Cost accounting rules have made it much more complicated and difficult than it needs to be.  When you make it that complex and difficult and intently argue about the semantics, the stranglehold that CA has on your thinking becomes even more obvious.
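The check-writing rule of thumb above can be expressed as a tiny classifier. The function name and boolean flags are hypothetical, purely for illustration:

```python
def classify(money_coming_in: bool, resalable: bool) -> str:
    """A sketch of the TA rule of thumb: money coming in is Throughput;
    money going out is Investment if the item could later be sold for
    cash, otherwise Operating Expense."""
    if money_coming_in:
        return "T"
    return "I" if resalable else "OE"

print(classify(True, False))   # T  (a customer payment)
print(classify(False, True))   # I  (a machine you could resell)
print(classify(False, False))  # OE (rent, wages, interest, dividends)
```

Interest payments, dividends and the like all fall out of the same two questions, which is why the "TA can't categorize them" objection doesn't hold up.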

TA is really focused on providing the necessary information that allows decision makers to make better decisions.  If the goal of the company is truly to make money, then any decisions being considered should get the company closer to the goal and not further away.  Effective decision-making is well suited to an effective T, I & OE analysis.  This analysis can show the impact of any local decisions on the bottom line of the company.  Ideally good business decisions will cause:

1.  Throughput (T) to increase.

2.  Investment/Inventory (I) to decrease or stay the same.  It is also possible for investment to go up, as long as the effect on T is disproportionately large.  In other words, sometimes a very well-placed investment can cause T to skyrocket.

3.  Operating Expense (OE) to decrease or stay the same.  It is not always necessary to decrease OE to have a dramatic effect.  Consider a situation where T actually doubles and you didn’t have to hire anyone new to do it, nor did you have to lay anyone off.
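The three-measure check above can be sketched as a small decision helper. The acceptance rules here (Net Profit must rise; any added Investment should pay back within a year) are my own illustrative assumptions, not from the text:

```python
def evaluate_decision(d_throughput: float, d_investment: float,
                      d_operating_expense: float) -> bool:
    """Judge a proposed local decision by its global effect on T, I and OE.
    Thresholds are assumed for illustration: Net Profit change must be
    positive, and added Investment should be recovered within a year."""
    d_net_profit = d_throughput - d_operating_expense
    if d_net_profit <= 0:
        return False
    # Added investment is acceptable if the extra Net Profit recovers it
    # quickly, e.g. a well-placed investment that makes T skyrocket.
    if d_investment > 0 and d_investment / d_net_profit > 1.0:
        return False
    return True

# Doubling T with no new hires and a modest equipment purchase: accept.
print(evaluate_decision(100_000, 30_000, 0))     # True
# Small T gain requiring a large investment and more OE: reject.
print(evaluate_decision(10_000, 50_000, 5_000))  # False
```

The point is that the decision is scored globally, against the company's bottom line, rather than against any one department's local cost figures.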

In my next posting we’ll complete our series on Cost Accounting and Throughput Accounting from Bruce Nelson’s appendix in our book Epiphanized.

Bob Sproull

 

Monday, September 23, 2013

Focus and Leverage Part 250


In the last two postings we’ve talked about our sock maker and how if he uses the wrong accounting system model to make his financial and staffing decisions, he could be heading in the wrong direction in terms of profitability.  Let’s now continue on with Bruce Nelson’s musings and consider some other accounting factors.


The Efficiency Model

The efficiency model, when measured and implemented at the wrong system location, will have devastating effects on your perceived results.  The end results will actually be the opposite of what you expected to happen. I wonder why, with all of the technology improvements accomplished through the years, it is still considered acceptable to use cost accounting rules from the early 1900s.


Cost Accounting
The primary focus of Cost Accounting is per-part or per-unit cost reductions.  Because perceived cost reductions are viewed so favorably, is it any wonder there is so much emphasis on efficiency?  And yet cost reductions don’t seem to be the answer.  Many highly efficient companies have come close to going out of business, or have gone out of business.  Have you ever heard of a company that saved itself into prosperity?  Think about it: any perceived savings the sock maker thought he was getting were quickly eroded by buying more raw materials.  In fact, it ended up costing the sock maker much more money than he realized and not saving him anything!  He was doing all of the recommended practices and yet he was failing—how come?


Many companies will emphatically state that the primary goal of their company is to make money, and yet they spend the largest portion of their time trying to save money.  It would appear they’ve forgotten what their goal really is. The strategy you employ to make money is vastly different than the strategy you would employ to save money.  For most companies, the assumption is that saving money is equal to making money—that is, if you somehow save some money it’s the same as making money.  This is simply not true.  These two concepts are divergent in their thinking—each takes you in a different direction with different results.  If the real goal of your company is to save money, then the very best way to accomplish your goal is to go out of business.  This action will save you the maximum amount of money—goal accomplished!  However, if the goal of your company is to make money, then a different strategy must be employed—maximizing throughput through the system.


Throughput Accounting

Suppose we consider again the same example using the sock maker.  Suppose the sock maker wants to make three times as many socks as he is making now.  What does he have to do?  Using the piece-rate pay system he would have to hire three times as many employees to be sock makers and pay them a piece rate of $1.00 per pair.  So in order to make three times as many pairs of socks, the labor cost must go up—he has to hire three times as many people. In the piece-rate world, getting three times as much through the system will cost him three times as much in labor.  But let us suppose our sock maker is paying an hourly wage rather than a piece rate, and he figures out a way to make three times as many pairs of socks per worker, per day.  By being able to make three times as much, how much do his labor costs go up?  They do not go up at all!  His labor cost stays exactly the same.  He still pays the workers an hourly rate whether they make one pair of socks or ten pairs of socks.  He only has to pay the employees once, not a rate based on the number of socks made.  His only increase in cost comes from buying more raw materials to make the socks.  So why does modern-day cost accounting still try to allocate a labor cost per unit of work and then claim that increased efficiency drives down the cost per part?  It does no such thing!  In today’s reality, labor costs are fixed, not variable!
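A quick sketch of the sock maker's arithmetic, with assumed figures (10 workers, a $1.00-per-pair piece rate, $80.00 per worker per day under hourly/daily pay), shows why tripling output triples labor cost under piece rate but leaves it flat under hourly pay:

```python
# Assumed figures for the sock maker, purely for illustration.
WORKERS = 10
PIECE_RATE = 1.00   # dollars per pair knitted
DAY_RATE = 80.00    # dollars per worker per day

def piece_rate_labor(pairs_per_day: int) -> float:
    # Piece rate: labor cost scales directly with output.
    return pairs_per_day * PIECE_RATE

def hourly_labor(pairs_per_day: int) -> float:
    # Hourly/daily pay: labor cost is fixed regardless of output.
    return WORKERS * DAY_RATE

base, tripled = 800, 2400  # pairs per day, before and after
print(piece_rate_labor(base), piece_rate_labor(tripled))  # 800.0 2400.0
print(hourly_labor(base), hourly_labor(tripled))          # 800.0 800.0
```

Under hourly pay the only cost that moves with output is raw materials, which is exactly why allocating a labor cost per unit no longer describes reality.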


Perhaps it is possible that some of these cost accounting rules and methods are wrong and mislead the user into thinking some results are better than they really are.  Is it possible that there might be another way to look logically at the practice of accounting that will truly get us closer to the goal?  What if there were another way, one that provides an alternative accounting method and allows us to remove, abandon or ignore the CA rules that are causing so much trouble?  In my next posting, we’ll have a look at Throughput Accounting.

Bob Sproull

Sunday, September 22, 2013

Focus and Leverage Part 249


This is the second of a 2-part posting on the dangers of using traditional cost accounting to run your business, which Bruce Nelson wrote in the appendix of our book Epiphanized. You will recall that my last posting ended with the following:


"If the owner could make more pairs of socks in the same amount of time, then his labor cost per pair of socks would go down.  This was the solution the business owner was looking for—reducing his costs.  If everyone was busy making more and more socks, and they could make a lot of socks in a day, then his new labor cost per pair of socks could be reduced! This had to be the answer— look how cheap he could make socks now!  Or so he thought."


With these newfound levels of high efficiency came another problem.  The owner quickly noticed that he had to buy more and more raw materials just to keep his employees working at such high efficiency levels.  The raw materials were expensive, but he had to have them.  The owner knew that his past success was directly linked to his ability to maintain such high efficiency and keep his costs low. More and more raw materials were brought in.  More and more socks were made.  The socks were now being made much faster than he could sell them.  What he needed now was more warehouse space to store all of those wonderfully cheap socks!  So, at great expense, the owner built another warehouse to store more and more cheap socks.  The owner had lots and lots of inventory of very cheap socks.  According to his numbers, the socks now cost next to nothing to make.  He was saving lots of money! Wasn’t he?
 

Soon the creditors started to show up and want their money.   The owner was getting behind on his bills to his raw material suppliers.  He had warehouses full of very cheap socks, but he wasn’t selling his socks at the same rate he was making them. He was just making more socks.  He rationalized that he had to keep the costs down, and in order to do that he had to have the efficiency numbers high.  The business owner soon realized that he had to save even more money.  He had to cut his costs even more, so he had to lay people off and reduce his workforce to save even more money.  How did he ever get into a situation like this?  His business was highly efficient.  His cost per pair of socks was very low.  He saved the maximum amount of money he could, and yet he was going out of business—How come?


Reality had changed and labor costing had changed (labor shifted from a variable cost to a fixed cost), but the cost accounting rules did not change.  The owner was still trying to treat his labor cost as a variable cost.  Even today many businesses still try to treat their labor cost as a variable cost and allocate the labor cost to individual products.  When the labor costs are allocated to a product, then companies try and take the next step—they work hard to improve efficiency and drive down the labor costs per part, or unit.  This erroneous thought process is ingrained in their mind, and they believe that this action will somehow reduce labor costs.  And if you can reduce labor costs, they think, then you are making more profit.  But take just a moment and reflect back on the consequences of the sock maker’s experience with cost savings and the high efficiency model.  Are these end results anywhere close to what the business owner really wanted to have happen?  Was this the real outcome business owners really wanted from high efficiency?
 
In my next series of postings we'll look deeper into the problems faced by the sock maker.

Bob Sproull

 

Saturday, September 21, 2013

Focus and Leverage Part 248


For those of you who haven’t read Bruce Nelson’s and my book, Epiphanized, I thought you might enjoy reading one of the pieces that Bruce wrote for the book’s appendix.  It’s all about the dangers of using traditional cost accounting to make routine decisions.  This subject will be delivered in two postings, but I think you will enjoy it.  Bruce and I have received quite a few positive comments on our appendix, and this subject has been very well received.


The Sock Maker

In the early 1900s Cost Accounting (CA) was in its early stages and beginning to be widely accepted and used.  For a business owner there were many things to consider in the day-to-day operation of the business.  One of the most important functions of the business owner was tending to the daily needs of the business’s financial situation.  Keeping the books, calculating the cost of raw materials, calculating labor costs and making sales were all important issues to be dealt with on a daily basis.


It was understood by business owners that in order to stay in business and make money, the cost they paid for the products or services rendered had to be less than the selling price of those products or services.  If it wasn’t, then they would quickly go out of business. Then and now, the needs of business haven’t changed much, but other things have changed.


The ideas and concepts about what was important to measure and how to measure it were starting to form and were being passed from one generation to the next.  This was considered important information that you needed to know in order to be successful.  Without this understanding, it was assumed that you would fail.  Back then, the business structure and methods were different than they are today.  The labor force was not nearly as reliable, and most workers did not work 40 hours a week.  When they did work, they were not paid an hourly wage, but instead were paid using the piece-rate pay system. 


As an example, suppose you owned a knitting business, and the product you made and sold was socks.  The employees in your business would knit socks as their job.  With the piece-rate pay system, you paid the employees based on the number of socks they knitted in a day, or a week, or whatever unit of measure you used.  If an employee knitted ten pairs of socks in a day, and you paid a piece rate of $1.00 for each pair knitted, then you owed that employee $10.00.  However, if the employee didn’t show up for work and did not knit any socks, then you owed nothing.  In this type of work environment, labor was truly a variable cost and deserved to be allocated as a cost to the product.  It just made sense in a piece-rate pay system.  The more socks the employees knitted, the more money they could make.  Also, as the business owner, your labor costs were very precisely controlled.  If employees didn’t make any socks, then you didn’t have to pay.


In time, metrics for calculating labor costs changed and the labor rates changed as well.  Many employees were now paid a daily rate instead of a piece rate.  Labor costs had now shifted from a truly variable cost per unit to a fixed cost per day.  In other words, the employees got the same amount of money per day no matter how many pairs of socks they knitted or didn’t knit.  As time went by, the employee labor rates shifted again, this time from a daily rate to an hourly rate.  With the new hourly rate came the more standardized work week of forty hours: eight hours a day, five days a week.  With the hourly rate, labor costs became fixed.


With these changes, it became apparent to the sock-knitting business owner that in order to get the biggest bang for the labor buck, the owner needed to produce as many pairs of socks as he could in a day in order to offset the rising labor costs.  The most obvious way to do that was to keep all of your sock knitters busy all of the time making socks.  In other words, efficiency was a key ingredient and needed to be increased.  If the owner could make more pairs of socks in the same amount of time, then his labor cost per pair of socks would go down.  This was the solution the business owner was looking for—reducing his costs.  If everyone was busy making more and more socks, and they could make a lot of socks in a day, then his new labor cost per pair of socks could be reduced! This had to be the answer— look how cheap he could make socks now!  Or so he thought.

To be continued……..

Bob Sproull

Thursday, September 19, 2013

Focus and Leverage Part 247


In manufacturing plants like the one I have been consulting for recently, where human resources are limited, having a well-planned and synchronized production schedule is critical for the smooth flow of parts through the processes within the plant.   But as important as the production schedule is, there are other key factors that must be considered before a viable production schedule can be developed.  Perhaps the most significant is the degree of operational stability of the equipment being used to produce the products.  That is, if the equipment is unreliable and unpredictable, with excessive amounts of downtime, then it is virtually impossible to develop a practical and dependable schedule for producing parts.  Without stability, the level of predictability will be very low.

Another important factor that enters into the production flow equation is the degree of flexibility of the work force, especially when a limited number of human resources are employed.  That is, if the work force in place is limited in the number of different machines and parts it is able to run, then scheduling becomes much more difficult. One could even say that in circumstances like this, the human resource level might be considered the system constraint.

Finally, two other factors that enter into the creation of the scheduling process are the quality level of parts (i.e. yield) and the demand requirements placed on each machine/operator combination.  If the yield losses are excessive, then these must be added into the scheduling assumptions and planned for accordingly.  For example, if the scrap rate is 5% on average, then at least 5% more parts must be started to meet demand requirements.
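The yield adjustment can be computed exactly: to net a given demand at scrap rate s, you must start demand/(1 − s) parts, which is slightly more than the 5%-extra rule of thumb suggests. A minimal sketch with illustrative figures:

```python
import math

def start_quantity(demand: int, scrap_rate: float) -> int:
    """Parts to start so that, after expected scrap, 'demand' good parts
    remain: demand / (1 - scrap_rate). This is the exact form of the
    '5% more' rule of thumb, which slightly undershoots."""
    return math.ceil(demand / (1.0 - scrap_rate))

print(start_quantity(1000, 0.05))  # 1053, vs. 1050 from the rule of thumb
```

For small scrap rates the difference is minor, but it grows quickly as yield losses climb, which is exactly when the scheduling assumption matters most.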

While all of these factors are important to the development of a viable production schedule, there must also be an active improvement plan in place to eliminate the barriers to scheduling.  An analysis and review of existing machine downtime and quality information must be at the forefront of this improvement plan.  A system for capturing this key data must exist, but unless it is used to its fullest extent to determine the focal points for improvement, it is just a database.

So the question becomes: what should the data analysis look like, and what information is important?  Clearly the reasons for equipment downtime should be available for review, but in what form should they be displayed?  And what about the quality losses?  Shouldn’t those be available for review as well?  Should the review cover only daily information, or should trend analysis also be available?  Obviously daily results are important so that a real-time investigation of the issues can be completed.  Memories fade, so as soon as the data is available, it must be investigated.  How an operation is performing as a function of time is equally important, because single-event problems are not nearly as damaging as chronic ones.  Because of this, historical plots (e.g. run charts) are very important as well.

My recommendations for how the data should be analyzed are very straightforward and simple to understand.  Daily results for both production and quality should be discussed at daily production meetings, and then once per week a simple Pareto analysis of both downtime and quality can be very helpful.  In addition, time-based production run charts should be developed to monitor production results, as well as control charts on critical quality measurements.  Those responsible for each machine group should be required to develop and discuss action plans on how they plan to improve both production and quality.
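A weekly downtime Pareto like the one recommended above takes only a few lines. The downtime log entries below are made-up illustrative data:

```python
from collections import Counter

# Made-up downtime log for one week: (reason, minutes) entries.
downtime_log = [("changeover", 45), ("breakdown", 120), ("no material", 30),
                ("breakdown", 90), ("changeover", 60), ("no operator", 25)]

# Sum the minutes lost to each cause.
totals = Counter()
for reason, minutes in downtime_log:
    totals[reason] += minutes

# Pareto order: biggest contributors first, with a running cumulative
# percentage, so the weekly review can focus on the vital few causes.
grand_total = sum(totals.values())
cumulative = 0
for reason, minutes in totals.most_common():
    cumulative += minutes
    print(f"{reason:12s} {minutes:4d} min  {100 * cumulative / grand_total:5.1f}%")
```

With this sample data, breakdowns alone account for more than half the lost minutes, which is precisely the focal point the improvement plan should attack first.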

Once we have created a more stable production environment, then we can focus on developing a discrete scheduling system.  Drum-Buffer-Rope (DBR) is a methodology that can provide relief in a resource-starved environment.  There are several key principles behind Drum-Buffer-Rope scheduling that must be understood, including:
 
1.  In any set of resources, some will be more heavily loaded than others; these are referred to as Capacity Constrained Resources (CCRs).  In some plants there is really only one CCR, while in others there might be several.  The easiest way to locate the constraint is to walk the process and find the largest pile of WIP.

2.  The most capacity constrained resource will always dictate the rate of work flow from raw materials through to finished goods.  This is an important concept because knowing where it is will permit us to focus on individual processes and identify each of their specific constraints. 

3.  There is no value in having any resource that feeds the constraint produce at a faster rate than the constraint.  Common measurements like manpower efficiency or equipment utilization might encourage this, but the only outcomes we observe from using these metrics are increased WIP, extended lead times, more expediting, and deteriorating on-time delivery performance.  Plus, the excessive WIP ties up floor space and cash, as well as increasing the chances of part damage and undetected quality issues.  Both of these problems can be hidden by the seemingly endless waves of WIP.

4.  The production rate of resources that are fed by the constraint is dictated by the output rate of the constraint.  In other words, the output of the entire process is dictated by the constraint.  Because of this, it is absolutely necessary to keep the constraints producing at all times.

These four points lay the foundation for a scheduling system named Drum-Buffer-Rope, so let’s now look at each individual component of DBR.

The DRUM

Once the constraint has been identified, a finite schedule based upon the capacity of the constraint (i.e. the Drum) must be developed for the work that has to pass through it. The schedule can be something as simple as deciding the number of parts to produce and must also include the timing of when each of the parts are due to be completed and shipped to the customer. In addition, the schedule must also consider normal yield losses.  Essentially the constraint (the drum) sets the production pace for the entire process.

The Buffer

DBR’s buffers exist in two distinctly different dimensions, time and stock.  And because the constraint controls the throughput of the entire process, every minute wasted is a waste of the whole plant’s production capability.  So in order to get the most out of the plant, employees must get the most out of the constraint(s).  For this reason, we can never let the constraint be starved of parts.  One logical answer to this is to make sure that there’s always a buffer of work in front of the constraint so that it is NEVER starved of work. And although it is a stock buffer, it is created as a result of managing time and not stock.  In other words, parts must be scheduled to reach the constraint prior to it running out of parts.  Let’s look at a simple example.

Suppose that when raw materials are released into the gating operations of the process, they take an average of 27 hours to flow through the different process steps until they reach the constraint.  But because of normal disruptions and statistical fluctuations, many times semi-finished products would not get there in time.  Things like unplanned downtime, quality issues, or even operator absences happen routinely.  In order to counter these delays, we might want to release the parts and materials some hours ahead of when they are due at the constraint to protect it from starvation.  In other words, we want to establish a buffer in front of the constraint.  The question becomes: how far in advance should they be released?  One way to calculate this buffer is to take the “normal” average length of time from release of materials until they reach the constraint and use one third of this time.  In our example, the size of the buffer would be 27 hours divided by 3, or 9 hours.  This means that if everything flows smoothly, the work will arrive at the constraint 9 hours earlier than needed, thereby creating a 9-hour stock buffer.  The advantage of this buffer shows up when there are disruptions and statistical fluctuations in the processes in front of the constraint: as long as the delays don’t exceed 9 hours, the work will still arrive early or on time at the constraint.

One of the most effective ways to monitor the flow of parts so that they reach the constraint on time is to create a visual buffer management system.  In our example we said that it takes 27 hours on average from release of raw materials into the process until the semi-finished parts arrive at the constraint.   We calculated a buffer size of 9 hours to assure that the parts would arrive on time at the constraint.  The visual buffer management system divides these 9 hours into 3 segments of 3 hours each and color codes them as green (1-3 hours), yellow (4-6 hours), and red (7-9 hours).  If the parts are in the green zone (i.e. the first 3 hours), then they will most likely arrive at the constraint on time.   If the parts are late arriving at the constraint and fall into the yellow zone (the 4-6 hour point), then plans should be put in place to expedite the parts in the event that they exceed the 6-hour point.  If they pass into the red zone (7-9 hours), then the plans to expedite should be implemented.  This system works quite well as long as the parts’ status is closely monitored.
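The buffer sizing and zone logic above is simple enough to sketch directly. The function names here are invented; the rules (buffer = one third of average flow time, zones in equal thirds) come straight from the example:

```python
# Sketch of the visual buffer management system described above.
# The buffer is one third of the average flow time; penetration into
# the buffer is color-coded in three equal zones.

def buffer_size(avg_flow_hours):
    """Rule of thumb from the text: buffer = one third of average flow time."""
    return avg_flow_hours / 3

def buffer_zone(hours_late, buffer_hours):
    """Classify how far a late job has eaten into the buffer."""
    if hours_late <= buffer_hours / 3:
        return "green"    # on track, no action required
    elif hours_late <= 2 * buffer_hours / 3:
        return "yellow"   # prepare a plan to expedite
    else:
        return "red"      # execute the expediting plan

buf = buffer_size(27)   # 27 h average flow -> 9 h buffer
print(buf, buffer_zone(2, buf), buffer_zone(5, buf), buffer_zone(8, buf))
# -> 9.0 green yellow red
```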

The Rope

The concept of the Rope is probably the simplest DBR concept to understand: the only parts and materials released into the gating process should be those needed to support the Drum schedule.   And while this may seem obvious, in reality for many people it isn’t.  In many manufacturing plants work is released into the gating operations simply to keep operators busy, so that workers and machines aren’t idle.  Performance metrics like manpower efficiency and equipment utilization are often behind this behavior, and batch sizes are often inflated to avoid lengthy changeovers, so materials are “pushed” into the process.
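Under the same invented numbers as the earlier example, the Rope reduces to a release rule: material enters the gating operation only when the Drum schedule, offset backward by the average flow time plus the protective buffer, calls for it. The function name is hypothetical:

```python
# Sketch of the "Rope": material is released into the gating operation
# only in support of the Drum schedule, offset backward by the average
# flow time to the constraint plus the protective time buffer.

def release_time(drum_start_hour, avg_flow_hours, buffer_hours):
    """When to release raw material for a job the constraint
    starts working on at drum_start_hour."""
    return drum_start_hour - (avg_flow_hours + buffer_hours)

# Constraint starts the job at hour 60; 27 h average flow, 9 h buffer.
print(release_time(60, 27, 9))   # -> 24
```

Anything released earlier than this simply piles up as WIP in front of the constraint; anything released later risks starving it. That is the whole discipline of the Rope.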

So to summarize:

1.  Identify the constraints in the system (this corresponds to the first of Goldratt’s 5 Focusing Steps, “Identify the constraint.”)

2.  Examine the orders within the system, consider the finite capacity of the constraint, and schedule the work in detail through the constraint. That is, schedule backwards from the customer due date through the entire process.  This corresponds to the second of the 5 Theory of Constraints Focusing Steps, “Decide how to exploit the constraint.”

3.  Calculate a time buffer and add it to the average set-up and run times of the processes supplying the constraint. This is also part of TOC’s Step 2 (Decide how to exploit the constraint).

4.  Using the rope, only release materials and parts into the gating processes of the plant based upon the needs of the Drum (constraint) and the timing of the Buffer.

5.  Monitor the buffer penetration by dividing the total buffer into 3 equal segments and visually coding each as green (the first third of the buffer time), yellow (the second third), and red (the final third).  Green indicates no action is required, yellow requires the development of a plan to expedite the parts, and red requires execution of the expediting plan.
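The five steps above can be tied together in one compact sketch for a single order. All numbers here (due date, run times, post-constraint time) are invented for illustration:

```python
# Self-contained sketch tying the five summary steps together for a
# single order. All numbers are invented for illustration.

AVG_FLOW = 27           # hours from material release to the constraint
BUFFER = AVG_FLOW / 3   # step 3: time buffer = one third of flow time

due = 120               # customer due date (hour)
run_on_constraint = 10  # hours the order needs on the Drum
post_constraint = 8     # hours from constraint to shipping

# Step 2: schedule backward from the due date through the constraint.
drum_finish = due - post_constraint
drum_start = drum_finish - run_on_constraint

# Step 4: the Rope releases material in support of the Drum schedule.
release = drum_start - (AVG_FLOW + BUFFER)

# Step 5: monitor buffer penetration in thirds.
def zone(hours_late):
    if hours_late <= BUFFER / 3:
        return "green"
    return "yellow" if hours_late <= 2 * BUFFER / 3 else "red"

print(drum_start, release, zone(4))   # -> 102 66.0 yellow
```

Step 1 (identifying the constraint) has no arithmetic, but everything after it hangs off that single resource: the Drum start time, the release time, and the zone being monitored.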

There are many variations and refinements on this basic technique, but as described above, it is one of the keys to dramatically improving the flow of parts through processes and improving on-time delivery performance while at the same time shrinking lead times and work-in-process inventories.

Bob Sproull