Technical Appendix

This thematic report emphasizes trends, patterns of variation in the U.S. population, and comparisons between public opinion in the United States and in other countries or regions. It reviews data from surveys that used national samples and were published after the release of Science and Engineering Indicators 2018.

Methodology Notes for General Social Survey Data

The General Social Survey (GSS) is based largely on face-to-face interviews with respondents selected to be representative of the entire United States. The 2018 survey achieved a response rate of 61%. NORC at the University of Chicago conducts the survey and ensures that interviewees closely match key demographics from the U.S. Census Bureau. It is difficult to know whether survey nonrespondents are systematically different from respondents.

In 2018, the sample size for the science and technology (S&T) questions in the GSS was 1,173, yielding a sampling margin of error for the U.S. adult population of approximately plus or minus 3 percentage points, 19 times out of 20. This subsample of GSS respondents had demographics very similar to those of the full GSS sample (Table S7-39). Although results of statistical tests are not reported in the text, differences and linear trends noted in the text are statistically significant at p < 0.05.
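The quoted margin of error can be checked against the standard formula for a proportion at 95% confidence. The short Python sketch below uses the worst-case proportion of 0.5 and assumes simple random sampling; the GSS uses a more complex sample design, so this is an approximation rather than the exact calculation used by NORC.

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        # Half-width of a 95% confidence interval for a proportion
        # under simple random sampling; p = 0.5 is the worst case.
        return z * math.sqrt(p * (1 - p) / n)

    # 2018 GSS S&T subsample: n = 1,173
    print(round(margin_of_error(1173), 3))  # 0.029, i.e., roughly plus or minus 3%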

Another potential source of error is question wording, because responses can be influenced by how a question is asked. Many of the questions in the S&T module of the GSS are retained to allow comparisons over time, even though their wording and structure (e.g., limited response options) might not be chosen if the questions were written today.

For the GSS, questions about S&T information, knowledge, and attitudes were added by the National Science Foundation (NSF) beginning in 2006. Comparable survey data were collected by telephone for NSF between 1982 and 2004. As with the GSS, data collected for NSF before 1982 come from face-to-face interviews. The changes in data collection methods over these years may affect comparisons over time. Such situations are highlighted in the text.

Other Sources of Data

A range of other data sources is also used in the report, although only surveys involving probability-based samples are included. As described in the General Methodology for Indicators 2020, the accuracy of information gathered from such surveys can be defined as the extent to which results deviate from the true values of the characteristics in the target population. Statisticians use the term error to refer to this deviation. The results in this report are subject to survey error, including sampling error, response error, and measurement error due to question wording and random variation, and these sources of error should be kept in mind when interpreting the findings. This report exercises caution in interpreting results from surveys that omit portions of the target population, have low response rates, or cover topics that are particularly sensitive to subtle differences in question wording. Only differences that are statistically unlikely to have occurred by chance and that seem substantively important are emphasized in this report.

Although the report draws mostly on GSS data, the primary sources of additional U.S. data are Gallup and Pew Research Center. The GSS typically uses face-to-face interviews, but most of the data from organizations such as Gallup come from telephone samples (including landlines and mobile phones) that can exclude those without telephones. The only Internet-based surveys used in the report are those that recruit their panel members using probability-based techniques similar to the telephone sampling used by other organizations. For these panels, telephone and mail contact is typically used to invite a probability-based sample of people to join the panel, and respondents are then selected probabilistically from the panel for individual surveys. This additional step means that response rates are often lower than those of high-quality telephone surveys. Pew Research Center has increasingly used these types of panels, and the twice-yearly surveys on climate change by George Mason University and Yale University have long used this type of online panel. The survey company Ipsos managed the recent Pew Research Center and George Mason University/Yale University climate change surveys. Nevertheless, face-to-face surveys are believed to be the best way to obtain high response rates and to maximize participation by respondents with low income or education levels, who may be less likely to respond to other types of surveys.
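To make the two-stage panel design concrete, the sketch below simulates it under simplified assumptions: a flat frame of households, a uniform recruitment rate, and equal selection probabilities. All names and rates here are hypothetical illustrations, not figures from any actual panel.

    import random

    random.seed(42)  # reproducible illustration

    # Hypothetical sampling frame, e.g., addresses or telephone numbers.
    frame = [f"household_{i}" for i in range(100_000)]

    # Stage 1: invite a probability-based sample to join the panel;
    # every unit in the frame has a known, equal chance of invitation.
    invitees = random.sample(frame, 5_000)
    panel = [hh for hh in invitees if random.random() < 0.30]  # assume ~30% join

    # Stage 2: probabilistically select panel members for one survey.
    survey_sample = random.sample(panel, min(1_000, len(panel)))

    # Response rates compound across the two stages, which is why such
    # panels often report lower cumulative response rates than one-shot
    # telephone surveys.
    print(len(panel) / len(invitees))  # stage 1 recruitment rate, ~0.30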

Another important limitation is that up-to-date, high-quality data are not always available. In some cases, there is only a single survey covering a particular period, there are large gaps between data collection years, or only a small number of questions address a given topic. This challenge is particularly acute for international data. There have been many surveys on S&T in Europe, but these are not conducted as regularly as the GSS, and recent data from Africa, Asia (outside China), and South America are especially rare. The 2018 Wellcome Trust survey, conducted by Gallup in 140 countries, is an exception to this tendency.

As noted, this report focuses on data that became available after the preparation of the 2018 edition of Indicators. Earlier data from many countries and regions can be found in past editions of Indicators (e.g., National Science Board [NSB] 2018). Moreover, even when international comparisons use identical questions, the responses may not be wholly comparable because of cultural differences in the meaning of the questions.

A Note about Terminology

Throughout this report, the terminology used in the text reflects the wording in the corresponding survey questions. In general, survey questions asking respondents about their primary sources of information, interest in issues in the news, and general attitudes use the phrase science and technology. Thus, S&T is used when discussing these data. Survey questions asking respondents about their confidence in institutional leaders, the prestige of occupations, and their views on different disciplines use terms such as scientific community, scientists, researchers, and engineers, so science and engineering (S&E) is used when appropriate for examining issues related to occupations, careers, and fields of research. Although science and engineering are distinct fields, national survey data that make this distinction are scarce (see NSB Indicators 2014: Science and Technology: Public Attitudes and Understanding). The term Americans is used throughout to refer to U.S. residents included in a national survey; equivalent terms (e.g., Canadians) are used for residents of other countries. However, not all respondents were necessarily citizens of the countries in which they were surveyed. When discussing data collected on behalf of NSF, the term recent is used to refer to surveys conducted since 2006, when data collection shifted to the GSS.

GSS Questions on Specific Science Issues

There are two important limitations to note in how the GSS asks respondents about specific issues. First, the available questions focus only on the "danger," and not the benefits, of the issues addressed. Second, interviewees were asked to respond using an unbalanced set of response options. In most survey questions, the middle response category is neutral, but that is not the case here: the options are "extremely dangerous," "very dangerous," "somewhat dangerous," "not very dangerous," and "not at all dangerous," so the midpoint ("somewhat dangerous") still indicates danger. Fortunately, other surveys address specific issues; therefore, it is possible to compare the patterns found in the GSS data with other results from the same period. It is also noteworthy that, although the questions focus on very different issues or topics, the response patterns among the various surveys are similar.
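When comparing an unbalanced scale like this with results from balanced scales in other surveys, one common workaround is to collapse responses into two categories. The sketch below illustrates this using only the scale labels (no actual response data); the recode rule is an assumption for illustration, not the procedure used in the report.

    # The unbalanced GSS danger scale: the midpoint is not neutral.
    DANGER_SCALE = [
        "extremely dangerous",
        "very dangerous",
        "somewhat dangerous",   # midpoint, but still a "danger" response
        "not very dangerous",
        "not at all dangerous",
    ]

    def recode(response: str) -> str:
        # Collapse to two categories for cross-survey comparison.
        return "not dangerous" if response.startswith("not") else "dangerous"

    print({label: recode(label) for label in DANGER_SCALE})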

Survey Experiments on Evolution and the Big Bang

The current version of the GSS includes two embedded experiments in which half of the respondents receive the original versions of the questions about evolution and the Big Bang, and the other half receive alternate versions. This is done to see whether small changes in question wording can affect how people respond. In this case, because the questions are meant to capture scientific knowledge, the substantive meaning of the questions was not changed. Instead, the framing was changed (i.e., substituting elephants for humans in the evolution question and adding the preface "according to astronomers" to the question about the Big Bang) so that people who belong to religious groups that reject evolution, but who are familiar with scientific thinking, would not use the knowledge questions to express their religious identity. The finding that people respond differently depending on these small changes in question wording therefore provides evidence consistent with the idea that some people who provide the wrong answer nevertheless know what the scientific community would consider a correct response. These types of experiments are described in some detail in the 2016 and 2018 editions of Indicators. Related experiments were also reported in the 2006 and 2014 editions of Indicators.
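A split-ballot experiment of this kind is typically analyzed by comparing answer distributions across the two randomly assigned wordings. The sketch below shows one such comparison using a chi-square test of independence; the counts are made-up numbers for illustration only, not GSS results.

    from scipy.stats import chi2_contingency

    # Hypothetical counts, for illustration only (not actual GSS data):
    # rows = question version, columns = ("true" answers, "false" answers).
    table = [
        [520, 480],  # original wording (humans in the evolution item)
        [740, 260],  # alternate wording (elephants substituted for humans)
    ]

    chi2, p_value, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.1f}, p = {p_value:.2g}")
    # A small p-value indicates that the answer distribution differs
    # between the two wordings, i.e., a question-wording effect.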

Key to Acronyms and Abbreviations

GSS: General Social Survey

NSB: National Science Board

NSF: National Science Foundation

S&E: science and engineering

S&T: science and technology

References

National Science Board (NSB), National Science Foundation. 2014. Science and Technology: Public Attitudes and Understanding. In Science and Engineering Indicators 2014 (Indicators 2014). NSB 14-01. Arlington, VA. Available at https://www.nsf.gov/statistics/seind14.

National Science Board (NSB), National Science Foundation. 2018. Science and Engineering Indicators 2018 (Indicators 2018). NSB-2018-01. Alexandria, VA. Available at https://www.nsf.gov/statistics/2018/nsb20181/.