The Global Integrity Index and the Integrity Indicators provide in-depth material for users to identify strengths and weaknesses in a country's anti-corruption framework. Our stress on "integrity" is meant to highlight the indicators' usefulness as a positive tool - rather than as a "shaming" mechanism - for advocates of good governance seeking to prioritize governance challenges and promote evidence-based reforms. We view the Indicators' greatest strength as their ability to unpack governance challenges within a country into discrete, actionable issues rather than just single numbers or rankings. The richness of the data set - more than 300 indicators for each country - enables a discussion of how best to allocate limited political and financial capital when the challenges are many and the resources few.
There are, however, a few caveats users should bear in mind when interpreting the data.
The Integrity Indicators do not measure corruption.
First, it is worth emphasizing that the Integrity Indicators do not measure corruption but rather assess its opposite: anti-corruption and good governance institutions, mechanisms, and practices. While corruption and bribery are difficult, if not impossible, phenomena to capture empirically, assessing the performance of key integrity-promoting mechanisms such as civil society, the media, and law enforcement provides a much more concrete access point through which to analyze and monitor government accountability.
"More integrity" is not the same as "less corruption."
Corruption and good governance are certainly related. However, it is not always the case that countries which implement seemingly "best practice" governance inputs (the laws, mechanisms, and enforcement of anti-corruption safeguards) end up with ideal governance outputs: reduced corruption and increased government accountability. That is to say, users should not necessarily interpret high scores on the Global Integrity Index as reflective of countries where there is no corruption. Instead, those results should simply be understood to reflect circumstances where key anti-corruption safeguards exist and have been enforced, which, while one would hope it reduces corruption, may not eliminate it entirely. In simple terms, corruption can and will occur even where societies have implemented what are understood to be ideal reforms.
We have a slight bias against informal integrity systems.
Users should avoid conflating the wide diversity of formal and informal institutional practices that promote good governance when reviewing the Global Integrity Index, since our Integrity Indicators focus heavily on formal institutions. While we realize this may disadvantage countries where informal relations remain strong and formal institutions are weak, whenever possible we have tried to recognize functional equivalences even in the absence of a specific, sought-after institution or mechanism. If a function is performed by a unique or informal system, it can score the same as a formal institution.
Our methodology changes slightly from year to year.
Users should exercise caution when examining our data to diagnose trends and changes over time because our methodology has changed slightly over the years. While our national-level Integrity Indicators have remained virtually identical since 2006, we regularly improve a small number of indicators each year through the introduction of more consistent scoring criteria or definitional precision. We recommend that users read indicator questions and scoring criteria closely when examining year-to-year changes in the data.
The 2004 pilot data is not directly comparable to later assessments.
We do not recommend that users compare changes in country-specific data between 2004 and later assessments. In the period between the 2004 and 2006 rounds of national assessments, many important methodological changes were implemented, which make it difficult to determine whether changes in the data resulted from real change or simply from methodological revisions. Those revisions included the introduction of consistent scoring criteria and changes to certain sub-categories based on feedback from our 2006 methodology advisory committee.
Older data should be used with appropriate caution.
For countries that have been previously assessed but are skipped in a given year, we do not recommend that users assume no change has taken place in the interim period. While we would agree that macro-level governance changes take long periods of time to develop, we have seen important changes in countries at the sub-category and category levels within 12-month time spans, leading us to believe that reform efforts (or backsliding) do happen in shorter periods and can be picked up with appropriately designed assessment tools.
For a more detailed examination of our methodology, see the Methodology White Paper.