The Analytic Hierarchy Process (AHP) employs a metric, the consistency ratio (CR), to evaluate the reliability of the pairwise comparisons made during decision-making. This metric quantifies the degree of inconsistency in the judgments a decision-maker provides. Consider an individual comparing three alternatives (A, B, and C) on a single criterion. If the individual states that A is strongly preferred to B (a score of 5) and B is moderately preferred to C (a score of 3), transitivity implies that A should be at least as strongly preferred to C; if the individual instead states that C is strongly preferred to A (a score of 5), the judgments are inconsistent. The CR measures this incoherence: a consistency index (CI) is computed from the principal eigenvalue of the comparison matrix, CI = (λmax − n)/(n − 1) for an n×n matrix, and is then normalized by a random consistency index (RI) appropriate for the matrix size, giving CR = CI/RI. A result below a certain threshold, typically 0.10, indicates acceptable consistency, suggesting that the decision-maker's judgments are reasonably reliable. The full procedure involves constructing the pairwise comparison matrix, normalizing it, deriving the priority vector, computing CI from the maximum eigenvalue, and finally dividing CI by the RI corresponding to the matrix's dimension.
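As a minimal sketch of the procedure just described, the snippet below builds the 3×3 pairwise comparison matrix for the inconsistent A/B/C judgments from the example (A over B = 5, B over C = 3, C over A = 5, with reciprocals filling the lower triangle), extracts λmax and the priority vector via NumPy's eigendecomposition, and computes CI and CR. The RI value of 0.58 for a 3×3 matrix is Saaty's standard random index; variable names are illustrative.

```python
import numpy as np

# Pairwise comparison matrix for alternatives A, B, C from the example:
# A over B = 5, B over C = 3, C over A = 5; entries below the diagonal
# are the reciprocals of their mirror entries.
M = np.array([
    [1.0,   5.0, 1/5],
    [1/5,   1.0, 3.0],
    [5.0,   1/3, 1.0],
])

n = M.shape[0]

# Principal eigenvalue gives lambda_max; its eigenvector, normalized to
# sum to 1, is the priority vector.
eigvals, eigvecs = np.linalg.eig(M)
idx = np.argmax(eigvals.real)
lam_max = eigvals.real[idx]
priorities = np.abs(eigvecs[:, idx].real)
priorities /= priorities.sum()

# Consistency index and consistency ratio (Saaty's RI for n = 3 is 0.58).
CI = (lam_max - n) / (n - 1)
RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]
CR = CI / RI

print(f"lambda_max = {lam_max:.3f}, CI = {CI:.3f}, CR = {CR:.3f}")
```

For a perfectly consistent matrix λmax equals n, so CI and CR are zero; here the circular judgments push λmax well above 3, and CR far exceeds the 0.10 threshold, flagging the comparisons as unreliable.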
The value of assessing judgment consistency lies in ensuring the validity of decisions based on AHP. High levels of inconsistency undermine the credibility of the results and may lead to suboptimal choices. By identifying and addressing inconsistencies, the decision-making process becomes more robust and defensible. Historically, the development of this ratio was crucial in establishing AHP as a respected methodology for multi-criteria decision analysis, distinguishing it from simpler weighting techniques and providing a mechanism for quantifying the reliability of subjective judgments. Such measurements give stakeholders greater confidence in the resulting ranking and prioritization of the decision factors involved.