What is Measurement System Analysis (MSA)?


#qualitycontrol #qualityassurance #msa



Introduction, Purpose and Terminology

Measurement data are used more often and in more ways than ever before.
For instance, the decision to adjust a manufacturing process is now
commonly based on measurement data. The data, or some statistic calculated
from them, are compared with statistical control limits for the process, and if
the comparison indicates that the process is out of statistical control, then an
adjustment of some kind is made. Otherwise, the process is allowed to run
without adjustment.
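As a small illustration of this kind of data-based decision, the sketch below compares a subgroup average against control limits and flags whether an adjustment should be considered. It is written in Python, and the readings, centre line, and limits are invented numbers used only for illustration.

# Minimal sketch: decide whether to adjust a process by comparing a subgroup
# average against statistical control limits. All numbers are invented examples.

measurements = [10.02, 9.98, 10.05, 10.01, 9.97]   # one subgroup of readings

ucl, lcl = 10.06, 9.94                              # assumed upper and lower control limits

subgroup_average = sum(measurements) / len(measurements)

if lcl <= subgroup_average <= ucl:
    print(f"Average {subgroup_average:.3f} is within the control limits: run without adjustment.")
else:
    print(f"Average {subgroup_average:.3f} is outside the control limits: investigate and adjust.")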

The benefit of using a data-based procedure is largely determined by the quality of the measurement data used. If the data quality is low, the benefit of the procedure is likely to be low. Similarly, if the quality of the data is high, the benefit is likely to be high also.

To ensure that the benefit derived from using measurement data is great enough to warrant the cost of obtaining it, attention needs to be focused on the quality of the data.

Quality of Measurement Data

The quality of measurement data is defined by the statistical properties of
multiple measurements obtained from a measurement system operating under
stable conditions. For instance, suppose that a measurement system,
operating under stable conditions, is used to obtain several measurements of
a certain characteristic. If the measurements are all “close” to the master
value for the characteristic, then the quality of the data is said to be “high”.
Similarly, if some, or all, of the measurements are “far away” from the
master value, then the quality of the data is said to be “low”.
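To make “closeness to the master value” concrete, the following Python sketch summarises a set of repeated readings by their bias (how far the average sits from the master value) and their standard deviation (how much they scatter). The readings and the master value are invented numbers used purely for illustration.

# Minimal sketch: summarise measurement data quality relative to a master value.
# The master value and readings below are invented example numbers.
import statistics

master_value = 25.00
readings = [25.03, 24.98, 25.05, 25.01, 24.99, 25.04]

average = statistics.mean(readings)
bias = average - master_value            # location error: how far off-centre the readings sit
spread = statistics.stdev(readings)      # width error: how much the readings scatter

print(f"average = {average:.3f}, bias = {bias:+.3f}, standard deviation = {spread:.3f}")
# Small bias and small spread together indicate "high" data quality;
# a large bias and/or a large spread indicates "low" data quality.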



Purpose

The purpose of this document is to present guidelines for assessing the
quality of a measurement system. Although the guidelines are general
enough to be used for any measurement system, they are intended primarily
for the measurement systems used in the industrial world. This document is
not intended to be a compendium of analyses for all measurement systems.
Its primary focus is measurement systems where the readings can be
replicated on each part. Many of the analyses are also useful for other types of
measurement systems, and the document contains references and
suggestions for them. It is recommended that competent statistical resources be
consulted for more complex or unusual situations not discussed here.
Customer approval is required for measurement system analysis methods
not covered in this document.

Summary of Terms

Standard
1. Accepted basis for comparison
2. Criteria for acceptance
3. Known value, within stated limits of uncertainty, accepted as a true
    value
4. Reference value

A standard should be an operational definition: a definition which will
yield the same results when applied by the supplier or customer, with
the same meaning yesterday, today, and tomorrow.

Basic equipment

Discrimination, readability, resolution
1. Alias: smallest readable unit, measurement resolution, scale
    limit, or detection limit
2. An inherent property fixed by design
3. Smallest scale unit of measure or output for an instrument
4. Always reported as a unit of measure
5. 10 to 1 rule of thumb: the resolution should be no larger than
    one-tenth of the process variation or specification tolerance
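The 10 to 1 rule of thumb in the last item can be checked with a tiny calculation. The Python sketch below uses assumed specification limits and an assumed gage resolution; substitute your own values.

# Minimal sketch of the 10 to 1 rule of thumb: the smallest readable unit of the
# instrument should be no larger than one-tenth of the tolerance (or process spread).
# Specification limits and resolution below are assumed example values (mm).
upper_spec, lower_spec = 10.50, 10.00
instrument_resolution = 0.01            # smallest readable unit of the gage

tolerance = upper_spec - lower_spec
required_resolution = tolerance / 10    # the "10 to 1" requirement

if instrument_resolution <= required_resolution:
    print(f"Resolution {instrument_resolution} mm meets the 10 to 1 rule (needs <= {required_resolution} mm).")
else:
    print(f"Resolution {instrument_resolution} mm is too coarse (needs <= {required_resolution} mm).")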


Effective resolution
1. The sensitivity of a measurement system to process variation for
    a particular application
2. Smallest input that results in a usable output signal of
    measurement
3. Always reported as a unit of measure

Reference value
1. Accepted value of an artifact
2. Requires an operational definition
3. Used as the surrogate for the true value

True value
1. Actual value of an artifact
2. Unknown and unknowable

Location variation

Accuracy
1. “Closeness” to the true value, or to an accepted reference value
2.  ASTM includes the effect of location and width errors

Bias
1. Difference between the observed average of measurements and
the reference value
2. A systematic error component of the measurement system



Stability
1. The change in bias over time
2. A stable measurement process is in statistical control with
    respect to location
3. Alias: Drift



Linearity
1. The change in bias over the normal operating range
2. The correlation of multiple and independent bias errors over the
operating range
3. A systematic error component of the measurement system
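One common way to look at linearity, sketched below in Python, is to measure reference parts at several points across the operating range, compute the bias at each point, and fit a straight line of bias against reference value; a slope near zero suggests the bias stays roughly constant across the range. The reference values and observed averages are assumed numbers, and the statistics.linear_regression helper needs Python 3.10 or later.

# Minimal sketch: fit bias against reference value to examine linearity.
# Reference values and observed averages are assumed example numbers.
# statistics.linear_regression requires Python 3.10+.
from statistics import linear_regression

reference_values = [2.0, 4.0, 6.0, 8.0, 10.0]          # master values across the operating range
observed_averages = [2.05, 4.03, 6.02, 7.98, 9.94]     # average of repeated readings at each point

biases = [obs - ref for obs, ref in zip(observed_averages, reference_values)]

fit = linear_regression(reference_values, biases)
print(f"fitted slope = {fit.slope:+.4f}, intercept = {fit.intercept:+.4f}")
# A slope close to zero means the bias does not change much over the range (good linearity);
# a clearly non-zero slope means the bias depends on the size being measured.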



Width variation

Precision
1. “Closeness” of repeated readings to each other
2. A random error component of the measurement system

Repeatability
1. Variation in measurements obtained with one measuring
    instrument when used several times by an appraiser while
    measuring the identical characteristic on the same part
2. The variation in successive (short-term) trials under fixed and
    defined conditions of measurement
3. Commonly referred to as E.V. – Equipment Variation
4. Instrument (gage) capability or potential
5. Within-system variation

Reproducibility
1. Variation in the average of the measurements made by different
appraisers using the same gage when measuring a characteristic
on one part
2. For product and process qualification, error may be appraiser,
environment (time), or method
3. Commonly referred to as A.V. – Appraiser Variation
4. Between-system (conditions) variation
5. ASTM E456-96 includes repeatability, laboratory, and
environmental effects as well as appraiser effects


GRR or Gage R&R
1. Gage repeatability and reproducibility: the combined estimate of
measurement system repeatability and reproducibility
2. Measurement system capability; depending on the method used,
may or may not include the effects of time
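To show how repeatability (EV), reproducibility (AV), and the combined GRR fit together numerically, here is a minimal Python sketch of the widely used average-and-range calculation. The readings are invented, and the K1/K2 constants are the commonly quoted values for 3 trials and 3 appraisers; treat this as an illustrative sketch of the arithmetic, not a substitute for the full study procedure.

# Minimal sketch of the average-and-range Gage R&R arithmetic.
# Readings are invented; K1/K2 are the commonly quoted constants for
# 3 trials and 3 appraisers - verify against your own reference tables.
from statistics import mean

# readings[appraiser][part] = list of repeated trials (3 appraisers x 5 parts x 3 trials)
readings = {
    "A": [[2.48, 2.51, 2.49], [2.70, 2.73, 2.71], [2.31, 2.29, 2.30],
          [2.55, 2.56, 2.54], [2.62, 2.60, 2.63]],
    "B": [[2.50, 2.52, 2.51], [2.72, 2.74, 2.73], [2.32, 2.31, 2.33],
          [2.57, 2.58, 2.56], [2.64, 2.63, 2.65]],
    "C": [[2.47, 2.49, 2.48], [2.69, 2.71, 2.70], [2.28, 2.30, 2.29],
          [2.53, 2.55, 2.54], [2.60, 2.61, 2.59]],
}

n_parts, n_trials = 5, 3
K1 = 0.5908   # constant for 3 trials
K2 = 0.5231   # constant for 3 appraisers

# Repeatability (EV): based on the average range of trials within each appraiser-part cell
cell_ranges = [max(cell) - min(cell) for cells in readings.values() for cell in cells]
EV = mean(cell_ranges) * K1

# Reproducibility (AV): based on the spread of the appraiser averages, corrected for EV
appraiser_averages = [mean(v for cell in cells for v in cell) for cells in readings.values()]
x_diff = max(appraiser_averages) - min(appraiser_averages)
AV = max((x_diff * K2) ** 2 - EV ** 2 / (n_parts * n_trials), 0.0) ** 0.5

GRR = (EV ** 2 + AV ** 2) ** 0.5   # combined gage repeatability and reproducibility
print(f"EV = {EV:.4f}, AV = {AV:.4f}, GRR = {GRR:.4f}")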

Measurement System Capability
1. Short-term estimate of measurement system variation (e.g.,
    “GRR” including graphics)

Measurement System Performance
1. Long-term estimate of measurement system variation (e.g.,
    long-term Control Chart Method)

Sensitivity
1. Smallest input that results in a detectable output signal
2. Responsiveness of the measurement system to changes in
    measured feature
3. Determined by gage design (discrimination), inherent quality
    (Original Equipment Manufacturer), in-service maintenance, and
    operating condition of the instrument and standard
4. Always reported as a unit of measure

Consistency
1. The degree of change of repeatability over time
2. A consistent measurement process is in statistical control with
    respect to width (variability)

Uniformity
1. The change in repeatability over the normal operating range
2. Homogeneity of repeatability

System variation

Measurement system variation can be characterized as:
1. Capability: variability in readings taken over a short period of time
2. Performance: variability in readings taken over a long period of time,
    based on total variation
3. Uncertainty: an estimated range of values about the measured value in
    which the true value is believed to be contained
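As a small illustration of the uncertainty idea in the last item, the Python sketch below forms an interval about a measured value from an assumed combined standard uncertainty and a coverage factor of k = 2 (a conventional choice for roughly 95 % coverage); both numbers are assumptions made for the example.

# Minimal sketch: express a result as "measured value +/- expanded uncertainty".
# The measured value and combined standard uncertainty are assumed example numbers.
measured_value = 12.345
combined_std_uncertainty = 0.012    # assumed combined standard uncertainty (same unit)
k = 2                               # coverage factor (roughly 95 % coverage)

expanded_uncertainty = k * combined_std_uncertainty
low = measured_value - expanded_uncertainty
high = measured_value + expanded_uncertainty
print(f"Result: {measured_value:.3f} +/- {expanded_uncertainty:.3f} "
      f"(true value believed to lie between {low:.3f} and {high:.3f})")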

