All About Statistical Process Control

Statistical methods like hypothesis testing, regression, SPC, and DOE play a crucial role in quality improvement. By analyzing data, identifying variation, and optimizing processes, these tools help ensure consistent performance and informed decision-making in complex systems.

Statistical Methods In Quality Improvement

The use of statistical methods in quality improvement takes many forms, including:

Hypothesis Testing

Two hypotheses are evaluated: a null hypothesis (H0) and an alternative hypothesis (H1). The null hypothesis is a “straw man” used in a statistical test. The conclusion is to either reject or fail to reject the null hypothesis.
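
To make the reject/fail-to-reject decision concrete, below is a minimal sketch of a one-sample t-test in Python. The fill-weight measurements, the target mean of 50.0, and the 0.05 significance level are illustrative assumptions, not values from the article.

```python
# Minimal one-sample t-test sketch (illustrative data and target).
from scipy import stats

weights = [49.8, 50.1, 49.7, 50.3, 49.9, 50.0, 49.6, 50.2]  # hypothetical sample
target = 50.0                                               # H0: population mean = 50.0

t_stat, p_value = stats.ttest_1samp(weights, popmean=target)

alpha = 0.05
if p_value < alpha:
    print(f"Reject H0 (t = {t_stat:.2f}, p = {p_value:.3f})")
else:
    print(f"Fail to reject H0 (t = {t_stat:.2f}, p = {p_value:.3f})")
```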

Regression Analysis 

Determines a mathematical expression describing the functional relationship between one response and one or more independent variables.
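
As a small illustration, the sketch below fits a simple linear regression of a response on a single independent variable; the temperature and yield values are invented for the example.

```python
# Simple linear regression sketch (invented temperature/yield data).
from scipy import stats

temperature = [160, 165, 170, 175, 180, 185]            # independent variable
process_yield = [72.1, 74.0, 75.8, 77.5, 79.9, 81.2]    # response

fit = stats.linregress(temperature, process_yield)
print(f"yield ~ {fit.intercept:.2f} + {fit.slope:.3f} * temperature")
print(f"R^2 = {fit.rvalue**2:.3f}")
```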

Statistical Process Control (SPC)

Monitors, controls and improves processes through statistical techniques. SPC identifies when processes are out of control due to special cause variation (variation caused by special circumstances, not inherent to the process). Practitioners may then seek ways to remove that variation from the process.

Design and Analysis of Experiments

Planning, conducting, analyzing and interpreting controlled tests to evaluate the factors that may influence a response variable.

The practice of employing a small, representative sample to make an inference about a wider population originated in the early part of the 20th century. William S. Gosset, more commonly known by his pseudonym “Student”, was required to take small samples from a brewing process to understand particular quality characteristics. The statistical approach he derived (now called a one-sample t-test) was subsequently built upon by R. A. Fisher and others.

Jerzy Neyman and E. S. Pearson developed a more complete mathematical framework for hypothesis testing in the 1920s. This included concepts now familiar to statisticians, such as:

  • Type I error – incorrectly rejecting the null hypothesis.
  • Type II error – incorrectly failing to reject the null hypothesis.
  • Statistical power – the probability of correctly rejecting the null hypothesis.
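
To illustrate how these three quantities relate, here is a rough power calculation for a one-sided z-test on a mean shift; the effect size, standard deviation, sample size, and alpha below are assumed purely for demonstration.

```python
# Approximate power of a one-sided z-test for a shift in the mean
# (assumes a known standard deviation; all numbers are illustrative).
from scipy.stats import norm

alpha = 0.05     # Type I error rate
sigma = 2.0      # assumed process standard deviation
n = 25           # sample size
delta = 1.0      # true mean shift we want to detect

z_crit = norm.ppf(1 - alpha)                      # critical value under H0
power = 1 - norm.cdf(z_crit - delta / (sigma / n ** 0.5))
beta = 1 - power                                  # Type II error rate

print(f"power = {power:.3f}, Type II error (beta) = {beta:.3f}")
```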

 

Fisher’s Analysis of Variance (or ANOVA) procedure provides the statistical engine through which many statistical analyses are conducted, as in Gage Repeatability and Reproducibility studies and other designed experiments. ANOVA has proven to be a very helpful tool to address how variation may be attributed to certain factors under consideration.

W. Edwards Deming and others have criticized the indiscriminate use of statistical inference procedures, noting that erroneous conclusions may be drawn unless one is sampling from a stable system. Consideration of the type of statistical study being performed should be a key concern when looking at data.


Statistical Process Control

Statistical process control (SPC) procedures can help you monitor process behavior.

Arguably the most successful SPC tool is the control chart, originally developed by Walter Shewhart in the early 1920s. A control chart helps you record data and lets you see when an unusual event, e.g., a very high or low observation compared with “typical” process performance, occurs.

Control charts attempt to distinguish between two types of process variation:

  • Common cause variation, which is intrinsic to the process and will always be present.
  • Special cause variation, which stems from external sources and indicates that the process is out of statistical control.

 

Various tests can help determine when an out-of-control event has occurred. However, as more tests are employed, the probability of a false alarm also increases.
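
As a minimal sketch of the idea, the snippet below estimates 3-sigma limits for an individuals chart from a baseline period assumed to be in control (with sigma estimated from the average moving range) and then applies the most basic out-of-control test, a point beyond the limits. All measurements are made up for illustration.

```python
# Individuals-chart sketch: estimate 3-sigma limits from an in-control
# baseline (sigma ~ MR-bar / 1.128) and flag new points beyond the limits.
baseline = [10.0, 10.1, 9.9, 10.0, 10.2, 9.9, 10.1, 10.0, 9.8, 10.1]  # in-control history
new_points = [10.0, 10.2, 11.4, 9.9]                                  # fresh measurements

center = sum(baseline) / len(baseline)
mr_bar = sum(abs(b - a) for a, b in zip(baseline, baseline[1:])) / (len(baseline) - 1)
sigma_hat = mr_bar / 1.128            # d2 constant for moving ranges of size 2
ucl = center + 3 * sigma_hat          # upper control limit
lcl = center - 3 * sigma_hat          # lower control limit

for i, x in enumerate(new_points, start=1):
    status = "possible special cause" if (x > ucl or x < lcl) else "common cause only"
    print(f"point {i}: {x:4.1f}  limits [{lcl:.2f}, {ucl:.2f}]  -> {status}")
```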

Background

A marked increase in the use of control charts occurred during World War II in the United States to ensure the quality of munitions and other strategically important products. The use of SPC diminished somewhat after the war, though it was subsequently taken up with great effect in Japan and continues to the present day.

Many SPC techniques have been “rediscovered” by American firms in recent years, especially as a component of quality improvement initiatives like Six Sigma. The widespread use of control charting procedures has been greatly assisted by statistical software packages and ever-more sophisticated data collection systems.

Over time, other process monitoring tools have been developed, including:

  • Cumulative Sum (CUSUM) charts: The ordinate of each plotted point represents the algebraic sum of the previous ordinate and the most recent deviations from the target.
  • Exponentially Weighted Moving Average (EWMA) charts: Each chart point represents the weighted average of current and all previous subgroup values, giving more weight to recent process history and decreasing weights for older data.
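
A brief EWMA sketch follows, using an assumed smoothing constant of 0.2, a 3-sigma width, and invented data; it shows how the weighted average drifts outside its limits as the process mean shifts upward.

```python
# EWMA chart sketch (lambda = 0.2, L = 3; target, sigma and data are invented).
lam, L = 0.2, 3.0
target, sigma = 10.0, 0.2
data = [10.1, 9.9, 10.0, 10.2, 10.3, 10.4, 10.5]   # hypothetical measurements

ewma = target                                      # start the EWMA at the target
for t, x in enumerate(data, start=1):
    ewma = lam * x + (1 - lam) * ewma              # weight recent data more heavily
    half_width = L * sigma * (lam / (2 - lam) * (1 - (1 - lam) ** (2 * t))) ** 0.5
    flag = "out of control" if abs(ewma - target) > half_width else "in control"
    print(f"t={t}: EWMA={ewma:.3f}, limits = target +/- {half_width:.3f} -> {flag}")
```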

 

More recently, others have advocated integrating SPC with Engineering Process Control (EPC) tools, which regularly change process inputs to improve performance.

Statistical Quality Control Versus Statistical Process Control (SQC vs. SPC)

In 1974 Dr. Kaoru Ishikawa brought together a collection of process improvement tools in his text Guide to Quality Control. Known around the world as the seven quality control (7-QC) tools, they are:

  • Cause-and-effect analysis
  • Check sheets/tally sheets
  • Control charts
  • Graphs
  • Histograms
  • Pareto analysis
  • Scatter analysis

 

In addition to the basic 7-QC tools, there are seven supplemental tools, known as the 7-SUPP tools:

  • Data stratification
  • Defect maps
  • Events logs
  • Process flowcharts/maps
  • Progress centers
  • Randomization
  • Sample size determination

 

Statistical quality control (SQC) is the application of the 14 statistical and analytical tools (7-QC and 7-SUPP) to monitor process outputs (dependent variables). Statistical process control (SPC) is the application of the same 14 tools to control process inputs (independent variables).


What Is Design of Experiments (DOE)?

This branch of applied statistics deals with planning, conducting, analyzing and interpreting controlled tests to evaluate the factors that control the value of a parameter or group of parameters.

A strategically planned and executed experiment may provide a great deal of information about the effect of one or more factors on a response variable. Many experiments involve holding certain factors constant and altering the levels of another variable. This One-Factor-at-a-Time (OFAT) approach to process knowledge is, however, inefficient compared with changing factor levels simultaneously.
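
To make the contrast concrete, the sketch below enumerates a full factorial layout for three two-level factors; the factor names and levels are hypothetical. Eight runs cover every combination, whereas an OFAT study of the same factors cannot reveal interaction effects.

```python
# Full factorial enumeration for three two-level factors (hypothetical names/levels).
from itertools import product

factors = {
    "temperature": [160, 180],   # low / high
    "pressure":    [30, 40],
    "time":        [10, 20],
}

runs = list(product(*factors.values()))
print(f"full factorial: {len(runs)} runs, every combination covered")
for run in runs:
    print(dict(zip(factors.keys(), run)))
```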

Many of the current statistical approaches to designed experiments originate from the work of R. A. Fisher in the early part of the 20th century. Fisher demonstrated how taking the time to seriously consider the design and execution of an experiment before carrying it out helped avoid frequently encountered problems in analysis. Key concepts in creating a designed experiment include blocking, randomization and replication.

A well-performed experiment may provide answers to questions such as:

  • What are the key factors in a process?
  • At what settings would the process deliver acceptable performance?
  • What are the key, main and interaction effects in the process?
  • What settings would bring about less variation in the output?

 

An iterative approach to gaining knowledge is encouraged, typically involving these consecutive steps:

  • A screening design which narrows the field of variables under assessment.
  • A “full factorial” design which studies the response of every combination of factors and factor levels, and which attempts to home in on a region of factor settings where the process is close to optimal.
  • A response surface design to model the response.

 

Blocking

When randomizing a factor is impossible or too costly, blocking lets you restrict randomization by carrying out all of the trials with one setting of the factor and then all the trials with the other setting.

Randomization

Refers to the order in which the trials of an experiment are performed. A randomized sequence helps eliminate the effects of unknown or uncontrolled variables.

Replication

Repetition of a complete experimental treatment, including the setup.
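
The toy sketch below shows one way blocking, randomization, and replication might be combined when laying out runs; the block names, factors, and levels are invented for illustration.

```python
# Blocked, randomized, replicated run order (all names and levels invented).
import random
from itertools import product

random.seed(1)                                   # reproducible run order
treatments = list(product([160, 180], [30, 40])) # temperature x pressure levels
replicates = 2

for block in ["machine A", "machine B"]:         # block on a hard-to-change factor
    runs = treatments * replicates               # replicate the full treatment set
    random.shuffle(runs)                         # randomize run order within the block
    print(block, runs)
```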

Analysis of Variance (ANOVA)

ANOVA is a basic statistical technique for determining the proportion of influence a factor or set of factors has on total variation. It subdivides the total variation of a data set into meaningful component parts associated with specific sources of variation to test a hypothesis on the parameters of the model or to estimate variance components. There are three models: fixed, random and mixed.
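
As a minimal fixed-effects example, the sketch below runs a one-way ANOVA on three invented samples (one per hypothetical supplier); a small p-value would suggest the supplier factor accounts for a meaningful share of the total variation.

```python
# One-way (fixed-effects) ANOVA sketch on invented supplier data.
from scipy import stats

supplier_a = [4.8, 5.1, 4.9, 5.0, 5.2]
supplier_b = [5.4, 5.6, 5.3, 5.5, 5.7]
supplier_c = [4.9, 5.0, 5.1, 4.8, 5.0]

f_stat, p_value = stats.f_oneway(supplier_a, supplier_b, supplier_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```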

Conclusion

In conclusion, statistical methods form the foundation of effective quality improvement. Tools such as hypothesis testing, regression analysis, SPC, and design of experiments enable organizations to uncover insights, monitor performance, and control variation. By distinguishing between common and special causes, optimizing process inputs, and systematically testing changes, these techniques empower decision-makers to improve outcomes, reduce waste, and enhance overall efficiency. Embracing these methods supports a culture of continuous improvement grounded in data and scientific thinking.
