Impact has become a buzzword across fields, from social science research to physics and engineering. Impact implies causality, which, for organizations doing research, means their organization caused something (usually positive) to happen. This might not always be the case, but it still sounds good for organizations, or for junior researchers, to talk about impact as though they are driving some great change.
Impact relies on impact evaluations, the actual methodology for measuring whether impact was created or not. The interesting part is that organizations, too, talk about the impact they have created, sometimes with false or misleading numbers that heighten "impact's" worth on the page. For example, serving 1,000 clients does not necessarily mean a net impact was created for those 1,000 clients. Different people are also interested in the success of impact and have different ways of looking at it. If only 100 of those 1,000 clients were greatly satisfied by the service provided, most onlookers would say that is not a very favorable rate. However, there are some optimists, or perhaps we might call them economists, who see the success rate in terms of other economic or macro data.
Evaluations and the Counterfactual
A good evaluation of the impact of a program on a participant has to focus on the counterfactual. The counterfactual essentially means comparing two units (or participants): Unit X, who participated in a program and is now showing the results of their buying habits, say, and Unit Y, who did not participate in the program and is showing the results of their buying habits. The impact is the difference between Unit X's and Unit Y's buying habits, because both were buying the same product and only Unit X was a direct program participant.
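The counterfactual comparison above can be sketched in a few lines. This is a minimal illustration, not real data: the spending figures and the idea of measuring "monthly spend" are assumptions made for the example.

```python
# Counterfactual sketch: impact = outcome of the participant (Unit X)
# minus the outcome of the comparable non-participant (Unit Y).
# All numbers here are hypothetical.
spend_x = 120.0  # monthly spend of Unit X, who went through the program
spend_y = 95.0   # monthly spend of Unit Y, the counterfactual

impact = spend_x - spend_y
print(impact)  # 25.0
```

The whole exercise rests on Unit Y standing in for what Unit X would have done without the program, which is exactly where the limitations below come in.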
This is a good indication of the real impact of a program. Some of the limitations and problems that can result from such an evaluation are the following:
- The counterfactual is not comparable to the individual who participated in the program
- An adverse event happened during the research study
- The impact was not calculated correctly
- The impact of the study is exaggerated by an organization to make its results more appealing
As to this last point, the most important aspect of working with counterfactuals is moving from the individual setting to the group setting. When organizations and researchers do large studies on populations of people, their spending habits, and their savings habits, the treatment group should be matched against a valid comparison group that has the same, or about the same, characteristics. This is statistically important: if the comparison group is not similar, it will skew the results, and with them the "impact" press release the organization or researcher puts out on the Internet or to a wide audience. That might spur more academics or researchers to treat these experiments as valid even though they are inaccurate with respect to impact. This produces a domino effect that should be avoided.
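One common way to check whether a comparison group "has about the same characteristics" is the standardized mean difference (SMD) on baseline variables, where values above roughly 0.1 are often read as a sign of imbalance. The incomes below are hypothetical numbers invented for the sketch, and baseline income is just one example characteristic.

```python
from statistics import mean, stdev

def standardized_mean_difference(treatment, comparison):
    """SMD for one baseline characteristic: difference in group means
    divided by the pooled standard deviation. |SMD| > ~0.1 commonly
    signals that the groups are not comparable on this variable."""
    pooled_sd = ((stdev(treatment) ** 2 + stdev(comparison) ** 2) / 2) ** 0.5
    return (mean(treatment) - mean(comparison)) / pooled_sd

# Hypothetical baseline incomes for the two groups
treat_income = [31000, 29500, 30200, 28800, 31500]
comp_income  = [30800, 29900, 30100, 29200, 31000]

smd = standardized_mean_difference(treat_income, comp_income)
print(abs(smd) < 0.1)  # True: the groups look balanced on income
```

In practice this check would be run for every baseline characteristic the study can observe, not just one.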
Related to that last thought, impact might also be overstated because of selection bias. To keep things in perspective, it is not easy to find a good comparison group, because the participating group and the non-participating group often have different motivations. This is a subtle yet important thing to consider for impact assessments.
For example, if an impact evaluation is done on behalf of an underemployed cohort who were given extra resources to find employment, their motivation to find a new job may already be higher than that of the non-participant group, who were not given the extra resources. In other words, the participant group might have come from a background or history of underemployment, whereas control members might not have.
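The selection-bias problem can be made concrete with a small simulation. Everything here is assumed for illustration: the outcome (weekly job-search hours), the true program effect, and the extra "motivation" the self-selected participants bring on their own.

```python
import random

random.seed(0)

true_effect = 5.0       # assumed real effect of the program (hypothetical)
motivation_boost = 3.0  # extra effort participants would exert anyway

# Participants self-select, so their outcomes reflect both the program
# and their higher baseline motivation.
participants = [random.gauss(10 + motivation_boost + true_effect, 1)
                for _ in range(500)]
# Non-participants lack the program AND the extra motivation.
non_participants = [random.gauss(10, 1) for _ in range(500)]

naive_impact = sum(participants) / 500 - sum(non_participants) / 500
print(round(naive_impact, 1))  # close to 8.0, overstating the true effect of 5.0
```

The naive comparison attributes the motivation gap to the program, which is exactly why a comparison group has to be matched on motivation-like characteristics, or the participants randomized in the first place.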