In my last posting, The Eight Deadly Wastes of Lean, we covered the basics of Lean Manufacturing, specifically the eight categories of waste and how they can be used to improve the capacity of the constraint, or bottleneck, resource. In today's posting I want to focus on how the tools and techniques of Six Sigma can be used to reduce and control the variation that impacts the output of the constraint resource. Just as Lean targets waste, Six Sigma targets variation.
The basics of Six Sigma
The theory behind Six Sigma is that variation is the enemy and must not only be reduced, but also brought into a state of control. Put simply, this means variation should be predictable. The focus of Six Sigma is typically problem-solving, with the end result being variation reduction. It is impossible to remove all variation, so the best we can hope to do is reduce its impact on the process and the constraint. In a typical Six Sigma initiative, variation must be defined, measured, and analyzed, and then solutions are developed to reduce it (the familiar DMAIC cycle: Define, Measure, Analyze, Improve, Control). The final step in the Six Sigma process is to implement a control mechanism that provides advance warning of any increase in variation that may impact your process.
Different types of variation
It’s important to understand that within a process there are principally two types of variation. The first is referred to as natural, or common cause, variation and the second is called special cause variation. The impact of each type is drastically different. Whereas natural variation is very predictable, special cause variation enters the process without warning, a sort of Murphy’s Law. In other words, natural variation is characterized by a stable and consistent pattern of variation over time, whereas special cause variation is characterized by a pattern of variation that changes over time. An example of natural variation is the width of a product not being exactly the same each time it is measured. An example of special cause variation would be drastic, sudden differences in the width of a product that create havoc in terms of producing good parts. Dr. W. Edwards Deming taught us that we must attack and eliminate special cause variation first, or our processes will never be predictable. So how does variation negatively impact the operation and capacity of our constraint? Or, better yet, how can reducing and controlling variation positively impact the operation and capacity of our constraint? Before we address these questions, let’s first explore the concept of variation in a bit more detail.
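The distinction between the two types of variation can be sketched in a few lines of code. This is a minimal illustration with made-up numbers (a hypothetical thickness target of 10.0 mm): one simulated process has only common cause variation, while the other suffers a special cause, a sudden shift in its mean partway through. A simple 3-sigma rule, the same idea that underlies a control chart, flags the shift.

```python
import random
import statistics

random.seed(42)

# Hypothetical thickness measurements (mm). Common cause only:
# a stable mean and spread for the whole run.
stable = [random.gauss(10.0, 0.05) for _ in range(200)]

# Special cause: the same process, but something (say, tool wear)
# shifts the mean after sample 100 -- the pattern changes over time.
shifted = ([random.gauss(10.0, 0.05) for _ in range(100)] +
           [random.gauss(10.3, 0.05) for _ in range(100)])

def out_of_control(data, baseline_mean, baseline_sd):
    """Return the points lying beyond 3 sigma of the baseline process."""
    return [x for x in data if abs(x - baseline_mean) > 3 * baseline_sd]

# Establish the baseline from an early, stable portion of the run.
mean = statistics.mean(stable[:100])
sd = statistics.stdev(stable[:100])

print("stable run, points flagged: ", len(out_of_control(stable, mean, sd)))
print("shifted run, points flagged:", len(out_of_control(shifted, mean, sd)))
```

The stable run flags few or no points, while the shifted run flags most of the post-shift samples: an advance warning that a special cause has entered the process, which is exactly what the control step of Six Sigma is meant to provide.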
The negative impact of high variation
Everyone has heard of the bell-shaped curve, so named because of its familiar shape, but what exactly is it telling us? We know that even though we try hard to make every part exactly the same, sometimes things just don’t come out the way we want them to, because of the variation that exists in every process. But even though variation exists, what we hope is that our parts are “reasonably” the same, that as many parts as possible are within the spec limits for key variables, and that our process is “in control.” So what’s the difference between being “in spec” and being “in control”? Let’s talk about being in spec first.
The drawing above represents data collected on one of the key variables for a part produced by a manufacturer like you. Let’s say, for example, that the key variable for this part is its thickness and, as such, it has an upper and a lower spec limit. The manufacturer is producing this part on two different machines. We just said that not all parts produced will be exactly the same thickness, so as we collect more and more thickness measurements, the data will naturally arrange itself into a pattern. In this case, we see the familiar bell-shaped curve, also known as the normal distribution curve.
The positive impact of low variation
In this drawing there are actually two sets of data plotted, one for each of the two machines making the same part (i.e., one blue curve and one brown curve). The two distinctly different normal distribution curves indicate that the variation between the two machines is significantly different. Because the darker shaded curve is much wider than the lighter shaded curve, we can say with certainty that there is much more variation associated with it. In fact, since quite a few of its data points fall outside the acceptable spec limits, we can state with confidence that defective product is being made on the machine exhibiting more variation. If a part is too thick, hopefully it can be reworked and made thinner, but if it’s too thin, it is probably scrap. So if this problem is occurring at the constraint, then right away we see that finding and eliminating the root cause of the scrap and rework is a clear way to increase the effective capacity of the constraint. If we could reduce the variation to the same level as the machine represented by the lighter shaded bell curve, we would have an automatic gain in throughput. The overarching conclusion: the lighter shaded curve exhibits much less variation than the darker shaded curve and is said to be “in spec.”
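We can put rough numbers on this picture. The sketch below uses hypothetical values (a thickness spec of 10.0 ± 0.3 mm and two machines centered on target but with different spreads) and the normal distribution to estimate what fraction of parts from each machine falls between the spec limits.

```python
from statistics import NormalDist

# Hypothetical spec: thickness of 10.0 mm, +/- 0.3 mm.
LSL, USL = 9.7, 10.3

# Two machines, both centered on target, with different spreads:
low_var = NormalDist(mu=10.0, sigma=0.05)   # the lighter shaded curve
high_var = NormalDist(mu=10.0, sigma=0.20)  # the darker shaded curve

def fraction_in_spec(dist):
    """Probability that a part's thickness falls between the spec limits."""
    return dist.cdf(USL) - dist.cdf(LSL)

for name, dist in [("low variation", low_var), ("high variation", high_var)]:
    print(f"{name} machine: {fraction_in_spec(dist):.2%} in spec")
```

With these illustrative numbers, the low-variation machine produces essentially 100% good parts (the spec limits sit six standard deviations from its mean), while the high-variation machine produces roughly 13% scrap and rework. If that machine is the constraint, every one of those defective parts is lost throughput, which is why narrowing the darker curve translates directly into effective capacity.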
In my next posting we will discuss what it means to be “in control” and why we want all processes to be in this state.
Thanks for reading and let me know if you have any questions.