The Evidence Base

Informing Policy in Health, Economics & Well-Being
A collaboration with
USC Dornsife Center for Economic and Social Research

Surveys and Character Skills: The Information We Reveal Without Even Trying

Character skills such as grit, self-control, or a growth mindset are receiving a lot of buzz in the education world these days. There is now a considerable body of research by economists and psychologists showing the relevance of these character skills for later life outcomes, including not just education but also labor and health outcomes, the propensity to engage in criminal behavior, and retirement planning and savings, even after taking into account differences in respondents’ cognitive ability.

Teachers and school leaders around the country are starting to embrace the importance of these character skills and are looking for ways to promote them in their schools and classrooms. The truth is, however, that this is one of those cases where practice is running ahead of knowledge. Little is known about the origin and development of these character skills, and measures of these skills are rarely included in evaluations of public policies and social interventions.

Measuring Character Skills Is Challenging

The most common way to try to measure such skills is to ask respondents a series of questions about their own perceived level of the skill on a given scale. For instance, the popular Grit scale developed by Duckworth and Quinn in 2009 asks respondents to answer eight questions about themselves, indicating their level of agreement with statements like “I am a hard worker”, “I am diligent” or “Setbacks don’t discourage me”, among others.
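
To make this concrete, here is a minimal sketch of how such a self-report scale is typically scored: Likert responses are collected, negatively worded items are reverse-coded, and the items are averaged. The item positions and codings below are illustrative, not the official Grit scale scoring key.

```python
import numpy as np

# Hypothetical responses from one person to 8 items, coded 1-5
# (1 = "not at all like me", 5 = "very much like me").
responses = np.array([4, 5, 2, 4, 3, 5, 2, 4], dtype=float)

# Items worded in the "ungritty" direction must be flipped before
# averaging; these index positions are illustrative.
reverse_items = [2, 4, 6, 7]
responses[reverse_items] = 6 - responses[reverse_items]  # flip a 1-5 scale

grit_score = responses.mean()  # higher = more grit, by self-report
print(f"Grit score: {grit_score:.2f}")
```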

The problem with these measures, which are relatively easy to collect, is that they are known to be affected by important biases. In particular, these measures have been shown to suffer from social desirability bias, as respondents tend to give socially desirable answers. In other words, it is hard to admit to the world that you might not be such a hard worker. In addition, reference group bias could also be a problem, as different groups of respondents might have different standards for what it means to have a certain skill.

An alternative option would be to employ performance-task measures, in which participants are asked to perform a specific, carefully designed task so that differences in their behavior are indicative of their level of a given skill. The famous “Marshmallow Test” developed by Mischel and Ebbesen, where 4-year-olds were given the option of eating one marshmallow now or waiting 20 minutes for the reward of two marshmallows instead, is an example of a behavioral task that aims to measure children’s self-control. Although performance-task measures do not suffer from the same sources of bias as self-reports, they can also be problematic, as they are generally costly and difficult to collect in large samples. Also, artificial tasks, often completed in a lab, might not generalize well to other contexts, and it is not clear that they always capture the character skill of interest. Finally, these measures might be difficult to administer multiple times, as participants learn the task after performing it once.

What If Respondents Were Already Revealing Information on Their Character Skills by the Amount of Effort They Put Forward on Surveys?

Finding robust measures of character skills and studying their origin and development are the focus of the Character Assessment Initiative (Charassein), a collaborative research effort that I direct at the Department of Education Reform at the University of Arkansas. In our recent work, we argue that questionnaires themselves can be seen as performance tasks, such that measures of survey and test effort can yield meaningful measures of character skills. Surveys and tests take effort to complete, and respondents reveal something about their character skills through the effort they exhibit on these tasks.

Survey effort can be measured by analyzing response patterns within surveys. Recent evidence from members of our team has highlighted the potential of studying students’ response patterns on questionnaires and tests as a way of quantifying character skills. For example, the simple rate at which students skip questions on surveys is predictive of later educational attainment and labor-market outcomes in data from six nationally representative longitudinal datasets (Hitt, Trivitt & Cheng, 2015). Similarly, measures of “careless answering” on surveys, based on the extent to which adolescent students give unpredictable answers across related questions, are found to be predictive of educational and labor-market outcomes in adulthood (see Hitt, 2015).
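
The sketch below illustrates, in simplified form, the two survey-effort measures just described: an item non-response (skip) rate, and a careless-answering index based on inconsistency across related items. The exact constructions in Hitt, Trivitt & Cheng (2015) and Hitt (2015) differ in their details; the data and column names here are hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical survey data: five related items on a 1-5 Likert scale,
# with NaN marking a skipped question.
df = pd.DataFrame({
    "item1": [4, np.nan, 5, 2],
    "item2": [4, 3, np.nan, 5],
    "item3": [5, np.nan, 5, 1],
    "item4": [4, 2, np.nan, 5],
    "item5": [3, np.nan, 4, 2],
})

# Skip rate: the share of questions each respondent left unanswered.
skip_rate = df.isna().mean(axis=1)

# Careless-answering index: how far each answer strays from that
# respondent's own average on the related items (higher = less
# internally consistent, suggesting lower effort). Reverse-coded
# items would need to be flipped first.
careless = df.sub(df.mean(axis=1), axis=0).abs().mean(axis=1)

print(pd.DataFrame({"skip_rate": skip_rate, "careless": careless}))
```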

In recent work using the USC Understanding America Study, we evaluated how well survey effort measures proxy for relevant character skills by examining their relationship with self-reported measures of personality and grit, and by comparing their predictive power for education and labor outcomes. We found that measures of careless answering show promise as valid proxies for relevant character skills among adults. Careless answering correlates mostly with self-reported measures of conscientiousness and neuroticism, just like self-reported grit measures. In addition, careless answering presents stronger correlations with education and labor outcomes than self-reported grit. In contrast, measures based on item non-response did not seem to work as well in this internet panel. We believe this is because of the design of the internet surveys in this panel, where non-response is discouraged by a screen message reminding respondents of the importance of their answers and encouraging them to go back and complete the missing information. More work is needed to fully understand how sensitive survey effort measures are to the survey method employed.
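
As a rough sketch of this validation exercise, one could correlate a careless-answering index with self-reported traits and compare its predictive correlations with those of self-reported grit. The data file and variable names below are hypothetical placeholders, not the actual Understanding America Study variables.

```python
import pandas as pd

df = pd.read_csv("uas_extract.csv")  # hypothetical panel extract

# Convergent validity: careless answering should relate most strongly
# to conscientiousness (negatively) and neuroticism, mirroring grit.
traits = ["conscientiousness", "neuroticism", "grit_self_report"]
print(df[["careless_answering"] + traits].corr()["careless_answering"])

# Predictive comparison: which measure tracks education and labor
# outcomes more strongly?
outcomes = ["years_education", "employed", "log_earnings"]
print(df[["careless_answering", "grit_self_report"] + outcomes].corr()
        .loc[["careless_answering", "grit_self_report"], outcomes])
```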

In other work, we study whether refusing to complete future surveys at all, after volunteering to participate, could also be a proxy for relevant character skills. Our results show that certain personality traits are significantly related to panel attrition. Complete non-response to future surveys is more prevalent among those who are less conscientious and more open to experience, even after controlling for demographic information and cognitive ability.
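
A simple way to frame this analysis is a logistic regression of attrition on personality traits with demographic and cognitive controls, along the lines of the sketch below. The variable names are hypothetical, and the published analysis may differ in specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("uas_extract.csv")  # hypothetical panel extract

# attrited = 1 if the respondent stopped answering all later waves.
model = smf.logit(
    "attrited ~ conscientiousness + openness + extraversion"
    " + agreeableness + neuroticism"
    " + age + female + education + cognitive_ability",
    data=df,
).fit()
print(model.summary())
# The pattern described in the text would appear as a negative
# coefficient on conscientiousness and a positive one on openness.
```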

Overall, our work shows the potential for measures of survey effort to proxy for relevant character skills related to conscientiousness. Usually, when working with data, researchers drop incomplete or suspicious responses from the analysis. Our work suggests ways to capture measures of conscientiousness for precisely those students or respondents who expend low effort on surveys, whether by providing thoughtless answers or by skipping questions entirely. This is an important development, because effort on surveys is likely related to the very skills that researchers are attempting to measure. For example, respondents who lack grit or self-control are unlikely to report that they lack those skills. Measurement error on surveys is thus related to the underlying skills we seek to measure, which can invalidate research findings. By studying actual behavior through the effort individuals devote to surveys, we aim to include these respondents in the analysis and limit such bias.

An added benefit of measuring conscientiousness through survey effort is cost-effectiveness: these measures do not require new data to be collected. One could therefore obtain measures of character skills to complement already-collected datasets, expanding the opportunity for researchers to answer new questions with existing data, with the hope of finding better ways to develop these important skills in our schools and classrooms.