CULTURAL BIAS IN INTERNATIONAL SURVEY RESEARCH
Gallup conducted a 148-country opinion survey in 2011 measuring positive emotions, and the findings, released in 2012, quickly became known as the “happiness poll.” As countries were evaluated on how “happy” or “unhappy” they were, especially in contrast to their economic well-being, debate arose over the findings and what some critics see as cultural bias in the results. AP reporter Michael Wissenstein wrote an article on December 19, 2012, which appeared on the HuffPost Healthy Living site, outlining the findings and noting that many of the supposedly “happiest” populations live in countries with poor “traditional measures of well-being.”
Wissenstein cited Eduardo Lora, a Colombian formerly of the Inter-American Development Bank, as seeing cultural bias in the findings, noting that the empirical literature says “some cultures tend to respond to any type of question in a more positive way.” Others said the poll, which placed many Latin American countries at the positive-emotions end of the scale, “may have been skewed by a Latin American cultural proclivity to avoid negative statements regardless of how one actually feels.” At the other end of the scale is Armenia, with the second-lowest happiness rating. Wissenstein quoted Agaron Adibekian, an Armenian sociologist, as saying that “feeling unhappy is part of the national mentality here” and that “Armenians feel ashamed about being successful.”
According to Wissenstein, all this has “serious implications for a relatively new and controversial field called happiness economics,” which adds the public’s perceptions of its positive emotions to harder data on quality of life and material well-being, such as life expectancy, per capita income, and education levels. Aside from the sobering policy repercussions this may have, the debate surrounding the release of the Gallup poll should remind anyone involved in international public opinion research that cultural differences do exist, and that efforts to compare and contrast populations on perceptual measures need to be treated with care and caution.
Reading about possible cultural bias in the happiness poll brought to mind several experiences I have had in this area. Trying to find an acceptable color for a package marketing program in Western Europe, I found that favorite colors varied by region: the reds and yellows popular in Spain turned to burgundy in Bordeaux and grey in Brittany. Not surprising, once one thought about it a little.
While I was director of what was then the United States Information Agency’s Office of Research in the early 1990s, we were conducting some of the first surveys in Eastern Europe, trying to gather as much information as possible on social, economic, and religious as well as political attitudes. One of the religious questions posed was about one’s personal relationship with God. Results varied widely, and cross-national comparison was impossible. In seeking feedback from local field service vendors, we found that in countries with Orthodox religious traditions the question simply did not translate into the cultural ethos. The tenet was that the local priest was the intermediary between the people and God: one’s relationship was not with God but with one’s priest. Our translations may have been true to our English question, but the concept behind the question did not translate into some of the cultures we were surveying.
Our USIA/R Japan specialist used to bemoan the lack of diverse findings in his surveys. He was envious of his counterparts responsible for other parts of the world, who could find all kinds of statistical differences within their populations. As he noted, the Japanese tendency not to stand out from the crowd ran deep in his polling results. On scales, respondents avoided the extremes at either end, and if there was a midpoint on the scale (as there naturally is on three-, five-, or seven-point scales), they gravitated to it.
One client asked me to survey its 16 major product markets across the globe. Among other things, the client was interested in how each market would rate the image of its particular industry and the major competitors within that industry. Leery of single-item measures, I suggested that we rate several industries, theirs being one of them. We settled on seven industries for comparison.
The results showed that the client’s industry was rated “better” in some countries than in others. However, once we put the seven industries together across all markets, we saw clearly that the client’s industry was rated lowest in each. All industries varied from country to country in their absolute ratings, but their positions in the comparison stayed constant. The variance appeared to be cultural, not statistical. Those markets with reputations for more “positive” ratings (Asian) gave all industries more positive scores, but there was little difference in the overall spread between the top and bottom industry ratings (about 3+ points, whether we were looking at a 2–5 mean rating per industry or a 4–7 mean rating on a ten-point scale). The message to the client was to ignore the absolute ratings and learn from the comparative ratings; last was last, whatever the absolute score.
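The pattern described above can be sketched in a few lines of code. This is an illustrative toy example with made-up numbers (not the study’s data, and the market and industry names are hypothetical): two markets with different response norms rate the same industries, and while the absolute means shift, the rank order and the top-to-bottom spread stay the same.

```python
# Hypothetical ratings on a ten-point scale: one market with a more
# positive response norm, one with a more critical norm.
ratings = {
    "Market A (positive norm)": {"Industry X": 7.1, "Industry Y": 5.8, "Industry Z": 4.2},
    "Market B (critical norm)": {"Industry X": 5.0, "Industry Y": 3.9, "Industry Z": 2.1},
}

for market, scores in ratings.items():
    # Rank industries within the market, best first.
    ranked = sorted(scores, key=scores.get, reverse=True)
    # Spread between the top- and bottom-rated industries.
    spread = max(scores.values()) - min(scores.values())
    print(f"{market}: order={ranked}, spread={spread:.1f}")
```

Both markets print the same ordering and an identical spread of 2.9 points, mirroring the lesson: read the comparison, not the absolute score.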
To be prepared for cultural differences in cross-national surveys, I came to rely on observed focus groups and observed questionnaire pre-tests, both with simultaneous translators. The topics and the English-based questionnaire were the same in all situations, but I was interested in how the person in the street would react to these topics and the “expert” translations as they participated in the groups or interviews. I listened for nuances in the interactions that would indicate differences from, say, Taiwan to Hong Kong, or Korea to Malaysia. I often found them, and adjusted the questionnaire translations to fit the context that emerged from these groups and pre-tests.
One other thing I did regularly was ask my local field service managers to comment on my questionnaire and on the focus group and pre-test results. I found them initially hesitant: their cultural predisposition toward deference, their experience with American researchers who wanted everything done their way, or both kept them from opening up. Once they observed me listening carefully to the group participants and the pre-test respondents, and debriefing the interviewers, they essentially said, “Oh, you really want to know what we think,” and they offered their insights, which usually proved very beneficial.
With some forethought and preparation, one can anticipate cultural predispositions and how they might affect one’s efforts to compare data across nations.