The Evidence Base

Informing Policy in Health, Economics & Well-Being
A collaboration with
USC Dornsife Center for Economic and Social Research

Are We Asking Too Much of Older Participants in National Surveys?

The United States is an aging nation. In 2014, about 1 in 7 Americans was 65 years or older, and it is expected that 1 in 5 Americans will be aged 65 and over by 2040. As the proportion of older people in the population grows, researchers have become increasingly interested in understanding how we can age successfully. How can we maintain high quality of life in the face of eventual physical and cognitive health decline in older age? What makes us happy or unhappy in older age? How does emotional well-being change with age? Can happiness help protect us from sickness and chronic disease? These are some of the commonly addressed topics in research on aging.

Large-scale surveys are often used to tackle these issues, and the number of national and international surveys that are conducted among seniors is steadily increasing. Surveys are a relatively convenient and inexpensive way for researchers to collect large amounts of data about people’s well-being, feelings, and daily life experiences. However, in order to provide optimal data and inform policy decisions, it is essential that the information obtained from surveys is as accurate as possible.

Even well-constructed surveys tend to ask a lot of questions, such as: How happy were you in the last month? Were you depressed in the past year, and if so, how much? How satisfied are you with your life as a whole these days? Ideally, respondents will thoroughly consider all aspects of the question and carefully select their responses, but working through question after question requires considerable mental effort and can be tedious and tiresome after a while. When this happens, respondents may pay limited attention to what the questions are asking, and the quality of their responses is likely to suffer.

It is well known that at least some cognitive and mental abilities decline in older age, which can make it more effortful for older participants to complete a survey. As we get older, we tend to have a harder time processing information quickly, working swiftly and flexibly through a series of similar tasks, and retrieving information from short- and long-term memory. All of these cognitive functions are required when completing a survey. Could this mean that we are asking too much of older participants in national surveys, such that the quality of survey data from participants at older ages is compromised without our knowledge?

I investigated this in a recent study by examining data collected as part of the Health and Retirement Study (HRS). The HRS is conducted every two years with a large representative sample of Americans 50 years and older, which presented an opportunity to take a closer look at the quality of responses in older age groups. The study focused on 25 questions about positive emotions (e.g., feelings of happiness, enthusiasm, pride) and negative emotions (e.g., feeling sad, afraid, upset), two commonly used indicators of well-being in aging research. The questions were administered in paper-and-pencil format, located about halfway through (on pages 14-15) a 33-page questionnaire package. So, there was a good chance that at least some respondents might be cognitively drained by the time they arrived at the emotion questions.

Identifying potentially inaccurate responses to questions about people’s emotions is not a trivial task. After all, emotions are private and subjective experiences, so how can we distinguish a careful response from an incorrect or erroneous one? The study used a novel statistical procedure (a “multidimensional nominal item response model”) to determine whether a given response to a survey question is a likely choice or a rather odd choice given the other responses. When people get tired or cognitively worn out, they may fall back on what are commonly known as “response styles”: instead of carefully selecting the best possible answer, they may simply agree (“yea-saying”) or disagree (“nay-saying”) with most questions regardless of what is being asked, or they may give less nuanced and more extreme answers. The statistical procedure used in the study picks up on whether a person showed these kinds of response styles when answering the emotion questions.
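To make the idea of response styles concrete, here is a minimal, purely illustrative sketch. It is not the multidimensional nominal item response model used in the study, and the respondents and ratings below are made up; it simply computes two simple descriptive indices for a 1-5 rating scale:

```python
# Illustrative sketch only (not the study's item response model):
# two crude descriptive indices of response styles on a 1-5 scale.
# All responses below are hypothetical.

def extreme_response_index(responses):
    """Proportion of answers at the scale endpoints (1 or 5)."""
    return sum(r in (1, 5) for r in responses) / len(responses)

def acquiescence_index(responses, agree_threshold=4):
    """Proportion of answers in the 'agree' range, regardless of content."""
    return sum(r >= agree_threshold for r in responses) / len(responses)

# A careful respondent uses the middle of the scale where appropriate;
# a fatigued one drifts toward endpoints and blanket agreement.
careful = [2, 3, 4, 3, 2, 4, 3, 3, 2, 4]
fatigued = [5, 5, 1, 5, 5, 1, 5, 5, 5, 1]

print(extreme_response_index(careful))   # 0.0
print(extreme_response_index(fatigued))  # 1.0
print(acquiescence_index(fatigued))      # 0.7
```

The actual model estimates style tendencies jointly with the underlying emotion traits rather than from raw counts like these, but the counts convey the intuition: a respondent who marks mostly endpoints, or mostly "agree" answers, scores high on these indices no matter what the questions ask.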

The study found that the use of response styles increased with participant age: older HRS respondents (80+ years of age) were more likely than younger respondents (50-60 years of age) to pick answers consistent with stylistic patterns. Moreover, since the HRS routinely tests participants’ cognitive functioning by assessing their short-term memory and general mental status, it was possible to examine whether response styles were more common among participants with lower cognitive test scores. The results suggested that the increase in response styles at older ages may in fact be partially explained by age-related reductions in cognitive abilities.

The study also examined whether response styles compromise the quality of the data. Notably, the multidimensional nominal item response model can be used to filter out response styles from people’s ratings to obtain emotion scores that are “cleaned” from stylistic response patterns. Compared to conventional scoring methods, this procedure improved the ability to predict whether a person would be newly diagnosed with one or more serious diseases (hypertension, diabetes, cancer, lung disease, heart disease, stroke, or arthritis) in the subsequent 4 years based on their emotion levels. Removing response styles from the data also improved the usefulness of positive and negative emotions for predicting whether a participant would have overnight hospital visits in the subsequent 4 years.
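The "cleaning" step can be loosely pictured as removing the part of a raw emotion score that tracks a person's style tendency. The study did this inside the item response model itself; the sketch below is only a crude linear analogue with hypothetical numbers, in which raw scores are residualized on a style index via simple least squares:

```python
# Crude, hypothetical analogue of "cleaning" scores: subtract the linear
# component of a response-style index from raw emotion scores.
# The study did this within an item response model, not via OLS.

def mean(xs):
    return sum(xs) / len(xs)

def residualize(scores, style):
    """One-predictor OLS residuals: remove the least-squares fit on
    `style` from `scores`, keeping the original mean."""
    mx, my = mean(style), mean(scores)
    beta = (sum((x - mx) * (y - my) for x, y in zip(style, scores))
            / sum((x - mx) ** 2 for x in style))
    return [y - beta * (x - mx) for x, y in zip(style, scores)]

# Hypothetical raw positive-emotion scores and extreme-response indices
# for six respondents: here, stronger styles inflate the raw scores.
raw = [3.0, 3.2, 3.9, 4.1, 4.6, 4.8]
style = [0.1, 0.1, 0.5, 0.5, 0.9, 0.9]

cleaned = residualize(raw, style)
```

After residualizing, the cleaned scores no longer co-vary with the style index, which is the property that lets the "cleaned" emotion levels predict health outcomes more sharply than the raw ratings.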

Overall, the results suggest that elderly people may have a harder time meeting the cognitive demands of completing self-report surveys than people at younger ages, and that this leads to errors that reduce the quality of survey data in older ages. The statistical procedures used in the present study can remove some of these errors and may improve data quality. However, statistical cleaning procedures are unlikely to find and eliminate all inaccuracies from survey responses. In order to get optimal data to inform research and policy about health and well-being in older ages, we may need to think harder about how to design the best possible surveys for elderly people. Instead of asking more questions to get more results, how can we ask the right questions to get the most informative results?