Public Familiarity with S&T Facts

Although this report tracks a set of questions aimed at assessing knowledge of several basic scientific facts, substantial research has shown that general measures of science knowledge typically have only small, although meaningful, relationships with how people make decisions in their public and private lives (Allum et al. 2008). NASEM also recently highlighted that science literacy is largely a function of general (or foundational) literacy and that more focus should be directed toward the ability of groups to use science to make evidence-based decisions (NASEM 2016b). In this regard, it should be recognized that the science literacy of individuals is unequally distributed across social groups: some groups or communities can use science when needed, whereas others cannot because they lack access to resources such as local expertise (e.g., community members who are also scientists, engineers, or doctors).

The current GSS uses nine questions and therefore does not address the full range of scientific subjects that could be included. Further, these questions were selected several decades ago based on the likelihood that they would remain stable over time rather than as an effort to capture any specific body of scientific knowledge. Consequently, the survey data do not represent a deep or comprehensive measurement of scientific knowledge. These questions might instead be understood as a way to capture the degree to which people have paid attention to science over their lives or might be expected to do so in the future (Kahan 2017). To address these types of concerns, the 2010 edition of Indicators included responses to an expanded list of questions about scientific ideas based on regular exams given to American students as part of the American Association for the Advancement of Science’s Project 2061. This research found that respondents who “answered the additional factual questions accurately also tended to provide correct answers to the trend factual knowledge questions included in the GSS” (NSB Indicators 2010: Science and Technology: Public Attitudes and Understanding). These nine items are presented as an indicator of people’s familiarity with scientific ideas or facts taught in school and as a means for evaluating trends or conducting group comparisons. Generalizations about Americans’ overall knowledge of science should therefore be made cautiously, given that this indicator comprises a small number of questions on school-level science knowledge.

Understanding Scientific Terms and Concepts

In 2018, Americans correctly answered an average of 62% of the nine true-or-false and multiple-choice items from the long-running factual knowledge questions (Table 7-1; Table S7-24 and Table S7-25). The 2018 average is statistically similar to the averages in recent years and to the historical average since 1992 (Table S7-25). In terms of specific questions (Table S7-26 and Table S7-27), the stability of the overall average since 1992 masks some variation across individual questions. For example, the shares of correct answers to the questions on radioactivity and on whether the Earth goes around the Sun have been relatively stable, apart from spikes of additional correct or incorrect responses in specific years. After increasing in the early period of the survey, the share of correct answers for several other questions has been relatively stable; examples include whether antibiotics kill viruses, whether electrons are smaller than atoms, and whether lasers work by focusing sound waves. The one question that has shown a small decline in correct answers over time is whether it is the “father’s gene that decides whether the baby is a boy or a girl.” The Pew Research Center has also collected data on this topic and found similar patterns of results (Kennedy and Hefferon 2019).

Table 7-1
Correct answers to questions about basic facts in physical science and biological science, by country or economy: Most recent year

(Percent)

na = not applicable; data were not collected for this question in that country.

EU = European Union.

a See Table S7-25 for U.S. trends.

b Numbers for Japan are the average from two studies conducted in 2011.

c Questions are among the nine used to calculate the average factual knowledge measure (eight appear in this table; see Table S7-26 for data on all nine questions over time).

d The question "How long does it take for the Earth to go around the Sun?" (One year) was asked only if the respondent answered correctly that the Earth goes around the Sun.

e An experiment in the 2012 General Social Survey showed that adding the preface "according to astronomers" increased the percentage correct from 39% to 60%.

f In 2008, the statement was "It is the mother's gene that decides whether the baby is a boy or a girl." (False) (Split ballot in 2008; 1,506 survey respondents were asked about "father's gene"; 515 survey respondents were asked about "mother's gene.") The China, EU, and Switzerland surveys asked about "mother's gene" instead of "father's gene." The Israel survey asked about "hereditary material from the father."

g The Japan survey asked about "antibodies" instead of "antibiotics."

h An experiment in the 2012 General Social Survey showed that adding the preface "according to the theory of evolution" increased the percentage correct from 48% to 72%.

Note(s):

Responses of "don't know" and refusals to respond count as incorrect and are not shown. EU data include Austria, Belgium, Croatia, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, the Netherlands, Poland, Portugal, Slovakia, Slovenia, Spain, Sweden, and the United Kingdom but do not include Bulgaria and Romania.

Source(s):

United States—NORC at the University of Chicago, General Social Survey (2018); Canada—Council of Canadian Academies, Expert Panel on the State of Canada's Science Culture, Science Culture: Where Canada Stands (2014); China—Chinese Association for Science and Technology/China Research Institute for Science Popularization, Chinese National Survey of Public Scientific Literacy (2015); EU—European Commission, Eurobarometer 224/Wave 63.1: Europeans, Science and Technology (2005); India—National Council of Applied Economic Research, National Science Survey (2004); Israel—Israeli Ministry of Science, Technology and Space, Geocartography Knowledge Group, Perceptions and Attitudes of the Israeli Public about Science, Technology and Space (2016); Japan—National Institute of Science and Technology Policy/Ministry of Education, Culture, Sports, Science and Technology, Survey of Public Attitudes Toward and Understanding of Science and Technology in Japan (2011); Malaysia—Malaysian Science and Technology Information Centre/Ministry of Science, Technology and Innovation, Survey of Public Awareness of Science, Technology and Innovation: Malaysia (2014); Russia—Gokhberg L, Shuvalova O, Russian Public Opinion of the Knowledge Economy: Science, Innovation, Information Technology and Education as Drivers of Economic Growth and Quality of Life, British Council, Russia (2004), Fig. 7; South Korea—Korea Science Foundation (now Korea Foundation for the Advancement of Science and Creativity), Survey of Public Attitudes Toward and Understanding of Science and Technology (2004); Switzerland—University of Zurich, Institute of Mass Communication and Media Research, Department of Science, Crisis, and Risk Communication, Science Barometer Switzerland (2016).

Science and Engineering Indicators

Evolution and the Big Bang

The GSS includes two additional true-or-false science questions that are not included in the trend data reported earlier because Americans’ responses to these questions appear to reflect factors beyond familiarity with scientific facts (e.g., beliefs related to specific religious teachings). One of these questions is about evolution, and the other is about the origins of the universe. The data presented in this section show that, in some specific cases, changes to question wording can produce substantially different responses.

In 2018, nearly half of Americans (49%) correctly indicated that “human beings, as we know them today, developed from earlier species of animals,” and 38% correctly indicated that “the universe began with a huge explosion” (Table S7-26). Both percentages are relatively low compared with scores on most of the other factual knowledge questions in the survey. Some people may respond to the evolution question based on their religious beliefs rather than their familiarity with scientific concepts (Maitland, Tourangeau, and Sun 2018). To explore this, half of the 2018 GSS respondents received an evolution question that omitted reference to human evolution and instead read, “Elephants, as we know them today, developed from earlier species of animals.” With the question posed in this form, 66% gave the scientifically expected response (compared with 49% when the question focused on humans, as noted earlier).

Similarly, two alternative “origin of the universe” questions were presented to random subsets of respondents in the 2018 GSS. First, simply adding the preface “according to astronomers” to the statement “the universe began with a huge explosion” resulted in about two-thirds (65%) of respondents providing the scientifically correct answer, considerably higher than the 38% who provided the correct answer without the preface. Another subset of respondents was given the statement “the universe has been expanding ever since it began” and was asked whether this was true or false. Again, about two-thirds (68%) of respondents gave the scientifically correct response.

Reasoning and Understanding the Scientific Process

Another indicator of the public’s understanding of science focuses on how well people understand the way the scientific process generates and assesses evidence. Data on three elements of the scientific process (probability, experimental design, and the scientific method) show some earlier increases in Americans’ understanding of scientific inquiry but substantial stability in recent years.

Two probability questions are included in the GSS. Most Americans (84%) in 2018 correctly indicated that, for an inherited disease that affects 1 in 4 children, the fact that a couple’s first child has the illness does not affect whether their next three children will have it. In addition, about three-quarters (74%) correctly responded that the odds of a genetic illness are equal for all of a couple’s children. Overall, 65% correctly answered both probability questions. The public’s understanding of probability as measured by these two questions has been stable for most of the last 20 years (Figure 7-11; Table S7-28).
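
The reasoning these two items test can be summarized with a short worked example; the 1-in-4 risk comes from the question wording, and the independence of each birth is the assumption behind the scientifically correct answers.

```latex
% Illustrative only: each birth carries the same risk, independent of earlier outcomes.
\[
P(\text{a given child is affected}) = \tfrac{1}{4},
\qquad
P(\text{a later child is affected} \mid \text{first child is affected}) = \tfrac{1}{4}.
\]
% An affected first child therefore does not change the 1-in-4 risk faced by each
% subsequent child, and the risk is the same for every one of the couple's children.
```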

Figure 7-11
Correct answers to scientific process questions: Selected years, 1999–2018

Note(s):

Data represent respondents giving a correct response for each concept. Responses of "don't know" and refusals to respond are counted as incorrect and are not shown. See Table S7-28 for more detail on the probability questions.

Source(s):

National Center for Science and Engineering Statistics, National Science Foundation, Survey of Public Attitudes Toward and Understanding of Science and Technology (1999, 2001); University of Michigan, Survey of Consumer Attitudes (2004); NORC at the University of Chicago, General Social Survey (2006–18).

Science and Engineering Indicators

With regard to understanding experiments, nearly half of Americans (49%) in 2018 correctly answered a question about how to test a drug and then gave a correct response to an open-ended question that asked them to explain the rationale for the experimental design (i.e., giving 500 people a drug while not giving the drug to 500 additional people, who serve as a control group) (Table S7-29). On average, the percentage of correct responses rose over the previous 20 years (Table S7-28), despite substantial year-to-year variation that may be partially explained by the reliance on human coders to categorize responses.
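
The control-group logic that the open-ended question asks respondents to articulate can be illustrated with a small simulation; the recovery rates and effect size below are hypothetical values chosen for illustration, not survey data or results from any real trial.

```python
import random

random.seed(0)

def simulate_trial(n_per_arm=500, base_recovery=0.30, drug_effect=0.15):
    """Simulate a two-arm drug test: n_per_arm people receive the drug and
    n_per_arm do not (the control group). All rates here are hypothetical."""
    treated = sum(random.random() < base_recovery + drug_effect for _ in range(n_per_arm))
    control = sum(random.random() < base_recovery for _ in range(n_per_arm))
    return treated / n_per_arm, control / n_per_arm

treated_rate, control_rate = simulate_trial()

# Without the 500 untreated people, the baseline recovery rate would be unknown,
# so recovery caused by the drug could not be separated from recovery that
# would have happened anyway.
print(f"treated: {treated_rate:.1%}  control: {control_rate:.1%}  "
      f"difference: {treated_rate - control_rate:.1%}")
```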

Similarly, respondents were asked whether they have “a clear understanding,” “a general sense,” or “little understanding” of the term scientific study. About 27% in 2018 said they have “a clear understanding,” whereas 51% said they have “a general sense.” These respondents were then asked to describe in their own words what it means to study something scientifically, and their responses were coded. Overall, about a quarter of respondents (24%) adequately described a scientific study as involving testing theories or hypotheses, conducting experiments, or making systematic comparisons, similar to results dating back to 1999 (Table S7-28, Table S7-30, and Table S7-31).

In general, those with the most education and those who answered the most factual questions correctly were more likely to respond that they had “a clear understanding,” but many respondents with relatively limited background in science also reported high levels of understanding (Table S7-30). For example, 15% of those with the lowest level of science education reported having “a clear understanding” of what constitutes a scientific study, compared with 51% of those with the most scientific education.

The scientific reasoning questions can be combined into an overall measure of understanding of scientific inquiry. Using this combined measure, about 43% of Americans in 2018 correctly answered the two probability questions and provided a correct response to at least one of the open-ended questions about experimental design or what it means to study something scientifically (Table S7-28). In general, respondents with more education and respondents with higher incomes performed better on the scientific inquiry questions (Table S7-29 and Table S7-30).
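
A minimal sketch of this combining rule follows; the function and argument names are illustrative and are not taken from the survey codebook.

```python
def meets_inquiry_threshold(prob_q1_correct: bool, prob_q2_correct: bool,
                            experiment_correct: bool, scientific_study_correct: bool) -> bool:
    """Return True when a respondent answers both probability questions correctly
    and at least one of the two open-ended items correctly (illustrative rule only)."""
    return (prob_q1_correct and prob_q2_correct
            and (experiment_correct or scientific_study_correct))

# Example: correct on both probability items and on the experimental design item.
print(meets_inquiry_threshold(True, True, True, False))  # True
```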

International Comparisons

Previous editions of Indicators have reported that people outside the United States generally perform similarly to Americans (e.g., Canada) or less well on comparable questions, although few countries currently devote substantial attention to surveys of public science literacy (Table 7-1). The 140-country survey conducted for the Wellcome Trust (2019) included several questions about self-perceived knowledge and found that people in developed regions, such as North America and Europe, are more likely to say they know “a lot” or “some” about science, whereas people in poorer regions and countries in Asia, Africa, South America, and the Middle East are much less likely to report such knowledge. Younger people and those with more education in all regions are also more likely to report higher knowledge levels.

Pseudoscience

Another indicator of public understanding of S&T comes from a measure of the public’s capacity to distinguish science from pseudoscience. One such measure, included in Indicators because data are available going back to the late 1970s, is Americans’ views on whether astrology is scientific. Other examples of pseudoscience include belief in lucky numbers, extrasensory perception, or magnetic therapy.

More Americans today than in the past see astrology as unscientific, although there has been some variation in recent years. In 2018, about 58% of Americans said astrology was “not at all scientific,” a value near the middle of the historical range and down somewhat from 65% in 2014 (Table S7-32). About a third of Americans thought astrology was “sort of scientific,” and the remainder said it was “very scientific” or did not know. Men, older respondents, those with more education, and those with more correct answers on the factual science questions were all more likely to see astrology as nonscientific.