One of the keys to optimizing the performance of your processes is understanding the nature and sources of variability. Like anything else, if you don’t understand it, you simply won’t be able to improve it. Over the next several postings, I’m going to discuss a series of key points related to process and system variability.
Hopp and Spearman, in their breakthrough book, Factory Physics, distill the subject of variability into seven fundamental points to remember. If you don’t have a copy of this book, I suggest you purchase one. These seven points are as follows:
- Variability is a fact of life. From a management perspective, the ability to recognize and deal effectively with variability is perhaps the most critical skill for all managers, engineers, and Black Belts to develop. Without this skill, your decisions will be clouded by uncertainty and will often simply be wrong.
- There are many sources of variability in manufacturing systems. Process variability comes at us in many different forms. It can be as simple as work method variations or as complex as machine setups and changeovers, planned and unplanned downtime, or scrap and rework. Flow variability is created by the way we release work into the system and by how work moves between stations. Left unchecked, the variability present in a system can be catastrophic if its underlying causes aren’t identified and controlled.
- The coefficient of variation is a key measure of item variability. The coefficient of variation, given by the formula CV = σ/μ, is a reasonable way to compare the variability of different elements of a production or flow system. Because it is a unitless ratio, we can make rational comparisons of the level of variability in both process times and flows. In workstations, the CV of effective process time is inflated by equipment downtime and setups, rework and scrap, and a host of other factors. Interruptions that cause long but infrequent periods of downtime will increase the CV more than ones that cause short, frequent periods of downtime, even when the overall availability is the same.
- Variability propagates. If the output of a workstation is highly variable, it’s inevitable that the flow arriving at downstream workstations will also be highly variable.
- Waiting time is frequently the largest component of cycle time. Two factors contribute to long waiting times: high utilization levels and high levels of variability. It follows then that increasing the effective capacity and decreasing variability will both work to reduce cycle times.
- Limiting buffers reduces cycle time, sometimes at the cost of decreased throughput. Because limiting inventory between workstations is equivalent to implementing a pull system, this effect is the primary reason why reducing variability is so critical in JIT systems.
- Variability pooling reduces the effects of variability. Pooling multiple sources of variability dampens their combined effect because it is less likely that a single occurrence will dominate performance.
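To see the pooling effect numerically, here is a small sketch of my own (not from the book): it pools several independent, identically distributed demand streams. For independent sources, means add linearly while standard deviations add in quadrature, so the CV of the pooled total shrinks by roughly the square root of the number of sources.

```python
import random
import statistics

# Illustrative sketch (my own construction): pooling n independent demand
# streams reduces the coefficient of variation of the total by roughly a
# factor of sqrt(n), since means add linearly but standard deviations add
# in quadrature for independent sources.
random.seed(42)

def cv(samples):
    """Coefficient of variation: sigma / mu."""
    return statistics.pstdev(samples) / statistics.mean(samples)

# Exponential-ish demand with mean 100 per period (CV near 1.0 per stream)
n_streams, n_periods = 16, 20000
streams = [[random.expovariate(1 / 100) for _ in range(n_periods)]
           for _ in range(n_streams)]

single_cv = cv(streams[0])
pooled = [sum(period) for period in zip(*streams)]  # total demand per period
pooled_cv = cv(pooled)

print(f"CV of one stream:    {single_cv:.2f}")   # close to 1.0
print(f"CV of pooled demand: {pooled_cv:.2f}")   # close to 1.0/sqrt(16) = 0.25
```

This is the same logic behind holding safety stock against combined rather than individual demands: one stream’s spike is usually offset by another stream’s lull.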
The inevitable conclusion is that variability degrades the performance of a manufacturing organization. Once again, I encourage the reader to seek out a copy of Hopp and Spearman’s book for detailed explanations and proofs of these manufacturing fundamentals as they apply to variability.
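For the waiting-time point in particular, Hopp and Spearman work with Kingman’s VUT approximation for queue time at a single workstation. The sketch below uses my own variable names and illustrative numbers; it shows queue time scaling with variability at fixed utilization, and exploding as utilization approaches one.

```python
# Kingman's VUT approximation for expected queue time at one workstation,
# CTq ~= V * U * t, where (notation is my own):
#   V = (ca**2 + ce**2) / 2  -- squared CVs of arrivals and effective process time
#   U = u / (1 - u)          -- utilization term, blows up as u -> 1
#   t = te                   -- mean effective process time

def queue_time(ca, ce, u, te):
    """Approximate time a job waits in queue before processing starts."""
    return ((ca ** 2 + ce ** 2) / 2) * (u / (1 - u)) * te

te = 20.0  # minutes of effective process time (illustrative)

# Same utilization, different variability: queue time scales with V
print(f"u=0.90, low variability:  {queue_time(0.5, 0.5, 0.90, te):.1f} min")  # ~45
print(f"u=0.90, high variability: {queue_time(1.5, 1.5, 0.90, te):.1f} min")  # ~405
# Same variability, higher utilization: queue time explodes near u = 1
print(f"u=0.95, mod. variability: {queue_time(1.0, 1.0, 0.95, te):.1f} min")  # ~380
```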
So where does this variability come from? Before we attempt to identify and locate sources of variation, it is important to understand the causes of variability. It is equally important to be able to quantify it, which we can do using standard statistical measures to define variability classes. Again, Hopp and Spearman report that there are three classes of processing time variability, as shown in Table 1.
Table 1. Classes of processing time variability

| Variability Class | Coefficient of Variation | Typical Situation |
|---|---|---|
| Low variability (LV) | CVt < 0.75 | Process times without outages (e.g. downtime) |
| Moderate variability (MV) | 0.75 ≤ CVt < 1.33 | Process times with short adjustments (e.g. setups) |
| High variability (HV) | CVt ≥ 1.33 | Process times with long outages (e.g. failures) |
When we think about processing times, we tend to consider only the time that the machine or operator actually spends working on the job (i.e. excluding failures and setups). If, for example, the average process time is 20 minutes and the standard deviation is 6.3 minutes, then CVt = 6.3/20 = 0.315, and the process would be classed as low variability (LV). Most LV processes follow a normal probability distribution. Now suppose the mean processing time is still 20 minutes, but the standard deviation is 30 minutes. The value of CVt = 30/20 = 1.5, and this process would be considered highly variable (HV).
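The Table 1 boundaries are easy to encode. Here is a minimal sketch (function and variable names are my own), applied to the two worked examples above:

```python
# A small helper encoding the Table 1 class boundaries from Hopp and
# Spearman; the function and variable names are my own.

def cv_class(mean_time, std_dev):
    """Classify processing-time variability by its coefficient of variation."""
    cv_t = std_dev / mean_time
    if cv_t < 0.75:
        label = "LV"  # low variability
    elif cv_t < 1.33:
        label = "MV"  # moderate variability
    else:
        label = "HV"  # high variability
    return cv_t, label

# The two worked examples from the text:
cv_t, label = cv_class(20, 6.3)
print(f"CVt = {cv_t:.3f} -> {label}")  # CVt = 0.315 -> LV
cv_t, label = cv_class(20, 30)
print(f"CVt = {cv_t:.3f} -> {label}")  # CVt = 1.500 -> HV
```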
You may be wondering why we care whether a process is LV, MV or HV. Suppose, for example, that you have identified a constraint with an average process time of 30 minutes and a standard deviation of 10 minutes. The calculated coefficient of variation, CVt = 10/30 = 0.33, means the constraint would be classed as LV. Suppose the non-constraint operation feeding the constraint has an average processing time of one-half that of the constraint, 15 minutes, but a standard deviation of 30 minutes. Its calculated CVt = 30/15 = 2.0, so it is an HV process. From Table 1, a value of 2.0 suggests that this process probably has long failure outages, which could starve the constraint! So when developing your plan of attack for reducing variation, the coefficient of variation tells you to include non-constraint processes that feed the constraint whenever they are classified as HV.
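To see how an HV feeder can starve a downstream constraint, here is a toy two-station simulation, entirely my own construction with illustrative numbers. Both feeders have roughly the same mean time per part (about 15 minutes), but the HV feeder concentrates its variability in rare, long outages that drain a small buffer ahead of the constraint.

```python
import random

# Toy two-station simulation (my own construction, not from the book):
# a feeder supplies a constraint (mean 30 min, sd 10 min) through a
# buffer that holds 3 parts. Both feeders average roughly 15 min/part,
# but the HV feeder's variability comes from rare long outages.
random.seed(7)

def lv_feeder():
    return max(0.1, random.gauss(15, 5))            # CV ~ 0.33

def hv_feeder():
    if random.random() < 0.03:
        return max(0.1, random.gauss(250, 50))      # rare long failure outage
    return max(0.1, random.gauss(7, 2))             # otherwise a fast cycle

def starved_fraction(feeder, n_parts=20000, buffer_cap=3):
    """Fraction of time the constraint sits idle waiting for parts."""
    d1 = [0.0] * (n_parts + 1)  # time part i leaves the feeder
    d2 = [0.0] * (n_parts + 1)  # time part i leaves the constraint
    idle = 0.0
    for i in range(1, n_parts + 1):
        # Feeder blocks until there is room (buffer_cap + 1 parts downstream)
        room = d2[i - buffer_cap - 1] if i - buffer_cap - 1 >= 1 else 0.0
        d1[i] = max(d1[i - 1] + feeder(), room)
        idle += max(0.0, d1[i] - d2[i - 1])          # constraint waits for a part
        d2[i] = max(d2[i - 1], d1[i]) + max(0.1, random.gauss(30, 10))
    return idle / d2[n_parts]

print(f"LV feeder: constraint starved {starved_fraction(lv_feeder):.1%} of the time")
print(f"HV feeder: constraint starved {starved_fraction(hv_feeder):.1%} of the time")
```

Even though the HV feeder is, on average, twice as fast as the constraint, its long outages routinely exhaust the buffer and idle the bottleneck, while the LV feeder almost never does.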
In my next posting we will look at the most prevalent sources of variation in manufacturing environments as they apply to processing times.