An Evolution in Quality Control
Process monitoring evaluates part-to-part variation, focusing on the difference between one result and the next rather than the determination of a ‘true’ value. The PET packaging industry, driven by the initial requirement for stable production, originally implemented inspection systems that favoured process monitoring over metrology. This aided the creation of reliable manufacturing techniques with minimal variation, allowing performance optimisation, material resource reduction and design improvements.
However, quality requirements are changing with the emphasis now on ‘true’ metrology. Recorded, traceable and accurate measurement results are essential to ensure parts are correct and consistent, regardless of the country of manufacture. There has also been an increase in the number and variety of measured features with a greater amount of the bottle and preform being inspected, calling for more sophisticated methods.
An Undesirable Scenario
If a batch of bottles were uncharacteristically splitting in the base area, it would be important to determine if they had been blown and stretched correctly. This could be assessed by checking the material thickness in that area. For example, captured data may indicate parts checked during that production period tested within tolerance and were verified against a reference sample. However, how do we know the average thickness result reflects the physical size and has not been affected by linearity errors? Additionally, how do we know the reference standard used for setup is correct? Furthermore, how do we know, if re-checked, the results would repeat within an acceptable range? The answers to these questions define the fundamental differences between process monitoring and traceable measurement.
A measurement value can never be exact or perfect. There is always a degree of error associated with any type of inspection. Measurement is its own discipline with internationally recognised definitions, methodology and standards. Only through strict adherence to these rules and guidelines can quality of measurement be ascertained.
Clearly defining terms such as accuracy, precision, resolution, repeatability and reproducibility, and their associated methodologies, is important for a fair comparison between systems. Additionally, tracing measurements back to reference instruments and international standards can ensure, for instance, that test results obtained in Brazil correlate to those obtained in Poland.
Accuracy is a qualitative term defined as the deviation between the measurement result and the ‘true’ value. In order to state an instrument's accuracy it must first be quantified using one of two methods. The first, categorised as a ‘Type A’ evaluation, statistically analyses repeated measurements, often referencing values from equipment of a higher traceable accuracy. The second, categorised as a ‘Type B’ evaluation, derives an uncertainty value by following the ISO Guide to the Expression of Uncertainty in Measurement (GUM).
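To make the Type A idea concrete, the sketch below estimates a mean and its standard uncertainty from repeated readings, following the standard GUM approach (sample standard deviation divided by the square root of the number of readings). The readings and function name are illustrative assumptions, not values from any Torus gauge.

```python
import statistics

def type_a_uncertainty(readings):
    """Type A evaluation: characterise a result statistically from
    repeated readings. The standard uncertainty of the mean is the
    sample standard deviation divided by the square root of the
    number of readings (per the GUM)."""
    mean = statistics.mean(readings)
    u = statistics.stdev(readings) / len(readings) ** 0.5
    return mean, u

# Ten repeated wall-thickness readings in mm (illustrative values)
readings = [0.251, 0.249, 0.250, 0.252, 0.248,
            0.250, 0.251, 0.249, 0.250, 0.250]
mean, u = type_a_uncertainty(readings)
```

More readings shrink the uncertainty of the mean, which is why repeat counts are specified up front in a formal study.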
Gauges supplied by Torus Measurement Systems, such as the B300 Wall Thickness, B303 Bottle Burst or B304 Top Load & Volume, are supplied with a proven accuracy, giving the end user confidence in their measurement results. This is determined through a Correlation study (Type A), where six parts are each measured five times and compared to the same number of measurements from a higher accuracy instrument.
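As an illustration of how such a correlation study might be reduced to numbers, the sketch below compares per-part means from a gauge under test against those from a higher-accuracy reference instrument. All values are invented for illustration; the real study compares five readings per part on each instrument.

```python
import statistics

# Illustrative per-part means (mm) over five readings each, for six
# sample parts: gauge under test vs higher-accuracy reference.
gauge_means     = [0.2510, 0.3020, 0.3490, 0.4010, 0.4520, 0.4980]
reference_means = [0.2500, 0.3010, 0.3500, 0.4000, 0.4510, 0.4990]

# Per-part bias: gauge mean minus reference mean
biases = [g - r for g, r in zip(gauge_means, reference_means)]
mean_bias = statistics.mean(biases)         # systematic offset
worst_bias = max(abs(b) for b in biases)    # largest single-part deviation
```

A small mean bias with no single part deviating beyond the stated accuracy is what gives the end user confidence in the gauge.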
FIG 1: B301 Semi-Auto Preform Inspection Gauge with telecentric lens and collimated light source
Precision is not Accuracy
Precision is the expected spread of results when a specific measurement is repeated under fixed conditions. Use of the word precision is discouraged by measurement institutes and standards authorities worldwide, who instead advocate the use of defined terms such as capability, repeatability and reproducibility.
The differences between accuracy, precision and resolution are illustrated in Figure 2.
Repeatability & Reproducibility, often referred to as R&R, is a quantitative value representing the expected scatter of measurement results when the same test is performed multiple times. It is generically expressed by the equation

GRR = √(EV² + AV²)
where GRR is the calculated gauge R&R, EV is equipment variance (repeatability) and AV is appraiser variance (reproducibility).
Repeatability, or equipment variance, is the contribution to overall R&R that comes from the instrument itself: specifically, the range of measurement results obtained when a particular measurement is repeated with no external influence.
Reproducibility, or appraiser variance, is the expected measurement change that can be attributed to operator influence. Contributing factors include operator training, loading factors, positional variation and misuse.
Torus Measurement Systems’ R&R study is historically derived from the AIAG (Automotive Industry Action Group) MSA (Measurement System Analysis) and performed on all gauges prior to shipping. The study evaluates measurement results of 10 parts each measured three times by three individual operators, using a 5.15 sigma value to represent 99% of the area beneath the normal distribution curve. Subsequently, equipment and appraiser variation are evaluated by identifying trends present in control charts, see Figure 3, Figure 4 and Figure 5 for examples.
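A study of this kind can be sketched numerically using the AIAG average-and-range method. The version below uses illustrative data (two parts shown for brevity, where the full study uses ten) with the published AIAG constants on the 5.15-sigma basis for three trials (K1) and three operators (K2); all readings are invented, not Torus results.

```python
import statistics

K1 = 3.05   # AIAG constant, 3 trials (5.15-sigma basis)
K2 = 2.70   # AIAG constant, 3 operators (5.15-sigma basis)

# data[operator][part] -> list of 3 repeated readings (mm), illustrative
data = [
    [[10.01, 10.02, 10.00], [10.11, 10.10, 10.12]],   # operator A
    [[10.02, 10.03, 10.01], [10.12, 10.13, 10.11]],   # operator B
    [[10.00, 10.01,  9.99], [10.10, 10.09, 10.11]],   # operator C
]
n_parts, n_trials = 2, 3

# Repeatability (EV): average within-part range of repeats, scaled by K1
ranges = [max(t) - min(t) for operator in data for t in operator]
ev = statistics.mean(ranges) * K1

# Reproducibility (AV): spread of operator averages scaled by K2,
# corrected for the repeatability carried into those averages
op_means = [statistics.mean([r for part in op for r in part]) for op in data]
x_diff = max(op_means) - min(op_means)
av = max((x_diff * K2) ** 2 - ev ** 2 / (n_parts * n_trials), 0.0) ** 0.5

# Combine into overall gauge R&R
grr = (ev ** 2 + av ** 2) ** 0.5
```

The final line is the GRR = √(EV² + AV²) combination described above; the control-chart trend checks (Figures 3 to 5) would then be applied to the same data.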
Measurement quality is ascertained by comparison to a reference instrument of higher accuracy or to a master calibration. Either method must be certified, directly or indirectly, to a primary standard. The series of documented links between the highest level calibration and the final gauge accuracy is known as measurement Traceability.
The primary, or national, standard is defined by a country’s National Measurement Institute (NMI), see Table 1. They are responsible for providing guidelines, working practices and calibration services. By collaborating with other NMIs they ensure standards are internationally recognised and adhered to. This means a traceable gauge accuracy is valid in many countries.
Table 1: A small selection of National Measurement Institutes from around the world
Measurement competency is assessed and monitored by accreditation bodies, which ensure laboratories are capable of calibrating in accordance with the national standard. Torus Measurement Systems have an in-house measurement laboratory accredited by the United Kingdom Accreditation Service (UKAS) for internal and external calibration services.
Specifying Measurement Performance
The questions set out at the beginning of this article can now be answered with reference to Torus’ B300 Wall Thickness Gauge.
How do we know the measured value reflects the physical size and has not been affected by linearity errors?
The gauge has a proven accuracy determined by performing a ‘Type A’ correlation study on a master artefact, see Table 2. In addition, four reference pucks of thickness 0.125 mm, 0.250 mm, 1.000 mm and 1.500 mm are supplied with each gauge, verifying that the system remains linear throughout its measurement range.
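A linearity check of this kind can be sketched as a least-squares fit of gauge readings against the nominal puck thicknesses: a slope near 1 and an intercept near 0 indicate the gauge stays linear across its range. The readings below are invented for illustration.

```python
# Nominal reference puck thicknesses (mm) and hypothetical gauge readings
nominals = [0.125, 0.250, 1.000, 1.500]
readings = [0.126, 0.251, 1.001, 1.501]

# Ordinary least-squares fit of reading vs nominal
n = len(nominals)
mean_x = sum(nominals) / n
mean_y = sum(readings) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(nominals, readings))
         / sum((x - mean_x) ** 2 for x in nominals))
intercept = mean_y - slope * mean_x
```

Here the readings sit a constant 0.001 mm above nominal, so the fit reports unit slope with a small offset: a linearity error would instead show up as a slope away from 1.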
How do we know the reference standard used for setup is correct?
The artefact and reference pucks are UKAS calibrated and traceable to a national standard for thickness measurement.
FIG 3: Points beyond either of the control limits
FIG 4: Points consistently one side of the centreline
FIG 5: Points consecutively rising and falling
How do we know, if re-checked, the results would repeat within an acceptable range?
The gauge repeatability is confirmed by conducting a comprehensive R&R study, as previously detailed, see Table 2.
Table 2: Proven accuracy and repeatability values for the B300 Wall Thickness Gauge
Beyond Base and Wall Thickness Measurement
Torus Measurement Systems have produced a revolutionary wall thickness gauge by combining new technology, sophisticated software and stringent metrology techniques. Enhanced bottle profiling and positioning have enabled Shoulder, Wall, Base Peak and Base Valley thicknesses to be inspected, as well as measuring Overall Height and Base Clearance.
The new B304 Top Load & Volume Gauge has been developed by continuing to apply the above principles and innovation to further meet industry requirements, see Figure 6. Improvements include: top-load and volume testing integrated into a single station, removing the need to move the bottle between tests and reducing labour time and costs; a unique fill nozzle design that improves reliability and fill-level repeatability; and a calibrated load cell and weigh scale traceable to national standards.
The measurement principles, as outlined in this article, are currently being implemented on a new high-speed automatic preform measurement and inspection gauge.