Notes

  1. The report relies largely on data on the U.S. public’s attitudes about S&T and awareness of basic science facts that have been collected through the GSS since 2006 and by a standalone S&T survey managed by NSF in prior years. Data from other high-quality American surveys are also noted for context. Where possible, U.S. attitudes are placed in an international context using data from high-quality surveys in other countries. For 2018, 1,175 GSS respondents answered questions about science. This yields a sampling margin of error of approximately plus or minus 3 percentage points, 19 times out of 20, for the full sample (sampling error is larger when looking at subgroups). Sample sizes are similar for recent years, although some previous surveys were larger. The term “Americans” is sometimes used in the report to refer to GSS respondents, although some respondents may not be American citizens.
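
     As a rough check, assuming a simple random sample, a 95% confidence level (the “19 times out of 20” criterion), and the most conservative proportion, p = 0.5, the standard formula gives

     $$\text{MOE} = z\sqrt{\frac{p(1-p)}{n}} = 1.96\sqrt{\frac{0.5 \times 0.5}{1{,}175}} \approx 0.029,$$

     or roughly plus or minus 3 percentage points. Because the GSS uses a complex sample design rather than a simple random sample, the actual margin of error may be somewhat larger than this approximation.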

  2. Because of rounding, an aggregate will not always equal the sum of its components, which may result in slight differences between the text, figures, and tables (for example, two components of 33.4% each round to 33%, while their 66.8% total rounds to 67%).

  3. It is not clear why earlier Pew Research Center estimates were lower than the GSS estimates, although the questions differ slightly: Pew asks specifically about confidence in scientists “to act in the best interests of the public,” whereas the GSS asks about the “scientific community” rather than “scientists.” It is also noteworthy that GSS respondents chose among “a great deal of confidence,” “only some confidence,” and “hardly any confidence,” whereas Pew Research Center respondents chose among “a great deal,” “a fair amount,” and “not too much.” The Pew middle category, in particular, may have been seen as more positive than the equivalent middle category in the GSS.

  4. Previous research has shown that these perceptions are associated with a range of additional negative views about scientists (Besley 2015) and that these types of negative views might affect the degree to which people are willing to support a group (Fiske and Dupree 2014).

  5. Pew Research Center also published a report on Americans’ views about space and found that many had positive views about the role of government in space exploration (Funk and Strauss 2018).

  6. A main problem is that these questions focus only on “danger” and not on associated benefits (where appropriate), which means that only one aspect of respondents’ views is assessed.

  7. The focus on the “greenhouse effect” in this question (which was first asked in 1993) is somewhat unusual, and the 2010 GSS (as part of an international survey process) replaced the term with “climate change,” the common term used in academic and public debates. The 2016 and 2018 GSS returned to the original question wording to maintain the time series. The response pattern for “climate change” is also similar to that of the other questions reported in this section, which suggests that the term may not make a substantive difference in an overall trend that shows increasing concern about climate change. However, other research suggests that the term used can affect how some people respond to questions about the topic (Schuldt, Konrath, and Schwarz 2011). The term “greenhouse effect” has therefore been retained in the GSS to preserve the comparability of the time series.

  8. The wording of questions varies across these surveys, limiting their direct comparability.

  9. Data source is SRI International (2020).

  10. The nature and origins of this difference were discussed at length in the 2018 edition of Indicators using 2016 data that showed similar patterns. The analyses suggested that people who responded correctly to the other factual questions were also more likely to respond correctly to the modified Big Bang and evolution questions. However, correct responses to the original Big Bang and evolution questions were not as closely connected to correct responses to the other questions (NSB Indicators 2018: Science and Technology: Public Attitudes and Understanding).

  11. Data source is SRI International (2020).

  12. Earlier NSF surveys used for Indicators employed additional questions to measure understanding of probability. Bann and Schwerin (2004) identified a smaller number of questions that could be administered to develop a comparable indicator. Starting in 2004, the NSF surveys used these questions for a trend factual knowledge scale. This scale does not include the questions aimed at studying scientific reasoning and understanding (e.g., questions about probability or the design of an experiment), and the current report attempts to avoid describing the combined questions as an overall knowledge scale. Instead, the report recognizes that the nine questions “are not understood to certify comprehension of any canonical set of facts or principles. Rather, they are conceptualized as observable (or manifest) indicators of an unobservable (latent) cognitive capacity that enables individuals to acquire and use scientific knowledge” (Kahan 2017:997).

  13. Declines, such as those seen in 2012, need to be regarded with caution. In that case, the percentage of Americans who correctly answered the initial multiple-choice question about how to conduct a pharmaceutical trial remained stable between 2010 and 2012. Only the follow-up question, which asked respondents to use their own words to justify the use of a control group, saw a decline. For this question, interviewers recorded the response, and trained coders then used a standard set of rules to judge whether the response was correct. Although the instructions and training have remained the same across survey years, small changes in survey administration practices can sometimes substantially affect such estimates.

  14. Some respondents might understandably argue that because astrology is based on systematic observation of planets and stars, it is “sort of scientific.” The fact that those with more formal education and higher factual science knowledge scores are consistently more likely to fully reject astrology suggests that this nuance has only a limited effect on results. Another problem is that some respondents may confuse astrology with astronomy, and such confusion seems most likely to occur in some of the same groups (i.e., relatively lower education and factual knowledge) that might be predicted to get the question wrong. This could artificially inflate the number of incorrect responses. However, the question comes immediately after a question that asks respondents if they have ever “read a horoscope or personal astrology report,” which offers respondents a hint that astrology is not astronomy. Also noteworthy is the fact that a Pew Research Center study (2009) using a different question found that 25% of Americans believe in “astrology, or that the position of the stars and planets can affect people’s lives.” Gallup found the same result with the same question in 2005 (Lyons 2005). In contrast, the 2010 GSS found that 6% saw astrology as “very scientific,” and 28% saw astrology as “sort of scientific” (34% total). Pew Research Center found that 73% could distinguish between astrology and astronomy and that there were few demographic differences beyond education (Funk and Goo 2015).