
Nonprofit Research Activities: Fiscal Year 2016

NSF 22-338   |   September 22, 2022   |   Ronda Britt

General Notes

These tables present the results of the FY 2016 Nonprofit Research Activities Survey. This survey is the primary source of information about 501(c) nonprofit organizations’ R&D performance and funding in the United States.

 

Data Tables

 

Technical Notes

Survey Overview (FY 2016 survey cycle)

Purpose. The Nonprofit Research Activities (NPRA) Survey collects information on research and experimental development (R&D) performed or funded by 501(c) nonprofit organizations in the United States. The nonprofit sector is one of four sectors (business, government, higher education, and other private nonprofit) that fund or perform R&D, and the National Center for Science and Engineering Statistics (NCSES) within the National Science Foundation (NSF) combines nonprofit sector data with data from the other sectors to estimate total national R&D expenditures.

Data collection authority. The information is solicited under the authority of the National Science Foundation Act of 1950, as amended, and the America COMPETES Reauthorization Act of 2010. The Office of Management and Budget control number is 3145-0240, and it expired on 28 February 2021.

Survey contractor. ICF.

Survey sponsor. The NPRA Survey is sponsored by NCSES within NSF.

Key Survey Information

Frequency. Occasional.

Initial survey year. A pilot survey that collected FY 2015 data was conducted from September 2016 through February 2017, and then a full implementation of the survey to collect FY 2016 data was conducted in 2018.

Reference period. The nonprofit fiscal year ending in 2016.

Response unit. Organization.

Sample or census. Sample.

Population size. A total of 117,539 nonprofit organizations.

Sample size. A total of 6,071 nonprofit organizations.

Survey Design

Target population. The target population for the FY 2016 NPRA Survey consisted of nonprofit organizations in the United States that performed or funded R&D activities in FY 2016.

Sampling frame. The National Center for Charitable Statistics (NCCS) Core Files, which capture financial information from Internal Revenue Service (IRS) Form 990, Form 990-EZ, and Form 990-PF, served as the primary input to the sampling frame from which the NPRA Survey sample was selected. Organizations were excluded from the frame if they were considered outside the scope of the survey. Specifically, organizations were excluded if they had an IRS subsection code that did not equal 3, 4, 5, or 6; had a foundation code indicating that they were a church or government; were located outside the United States; had a North American Industry Classification System (NAICS) code indicating that they were in the public administration sector; or had an NCCS code indicating that they were a government entity or otherwise out of scope. Some education-related organizations were excluded based on their National Taxonomy of Exempt Entities (NTEE) and foundation codes. These included schools (e.g., preschools, primary and high schools, special education schools, and charter schools), vocational and technical schools, institutions of higher education, graduate and professional schools, adult education entities, and student services organizations. In addition, organizations were excluded if they were found to be inactive during the pilot survey.

A financial threshold was imposed to increase the efficiency of reaching organizations that perform or fund research. Cutoffs based on revenue, assets, and expenses were examined to retain organizations that were likely performers or funders and eliminate organizations that were unlikely to either perform or fund research. For organizations filing Form 990 or Form 990-EZ, only those with $500,000 or more in expenses were included; for organizations filing Form 990-PF, only those with $2,750,000 or more in total assets were included.

Sample design. Organizations were stratified based on frame variables associated with R&D, such as NTEE code (e.g., hospitals or research institutes), as well as a propensity score measuring the likelihood that an organization performed or funded research. The propensity score was developed from a model relating likely performers and likely funders to financial variables in the frame. The likely performers and likely funders were a subset of organizations identified from auxiliary sources that strongly indicated that those organizations perform or fund research. The model produced a propensity score, where high values indicate a higher likelihood of performing or funding research. The propensity scores were grouped into high, moderate, and low likelihood.
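The propensity scoring step can be sketched as follows. The logistic coefficients and the cut points used to group scores into high, moderate, and low likelihood are illustrative assumptions, not the survey's fitted values.

```python
import math

def propensity_score(log_expenses, log_assets, coefs=(-12.0, 0.6, 0.2)):
    """Hypothetical logistic model relating frame financial variables to
    the likelihood that an organization performs or funds research.
    The coefficients are illustrative, not the survey's fitted values."""
    b0, b1, b2 = coefs
    z = b0 + b1 * log_expenses + b2 * log_assets
    return 1.0 / (1.0 + math.exp(-z))

def likelihood_group(score, cuts=(0.10, 0.50)):
    """Bin a propensity score into low / moderate / high likelihood;
    the cut points are illustrative."""
    if score < cuts[0]:
        return "low"
    if score < cuts[1]:
        return "moderate"
    return "high"
```

In practice the model would be fit to the flagged likely performers and funders against the frame's financial variables; the grouping into three likelihood strata then feeds the stratification described next.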

The propensity score strata were combined with other stratifiers to form the final stratification. On the basis of the results of the pilot survey, size strata defined by the total amount of expenses were also added. The final stratification was developed using the following steps: (1) develop the density stratification, (2) create the hospital and research institutes strata, (3) create the certainty strata based on the likely performer and funder flags, and (4) stratify based on the size of the organization.

The sample was stratified as follows:

  • Stratum 1—Likely performers and funders
  • Stratum 2—Likely performers
  • Stratum 3—Likely funders
  • Stratum 4—Hospitals
  • Stratum 5—Research institutes
  • Stratum 6a—Form 990, high likelihood
  • Stratum 6b—Form 990, moderate likelihood
  • Stratum 6c—Form 990, low likelihood
  • Stratum 7a—Form 990-PF, high likelihood
  • Stratum 7b—Form 990-PF, moderate likelihood
  • Stratum 7c—Form 990-PF, low likelihood

Strata 1–3 included organizations with a high likelihood of performing or funding research based on various auxiliary data, including past surveys, membership lists, and other government data collections. Strata 4–7c included organizations whose performer and funder status was unknown. Stratum 4 included organizations identified as hospitals by their NTEE code, if those organizations had not already been assigned to strata 1–3. Stratum 5 included organizations identified as research institutes that had not already been assigned to strata 1–3. Strata 6a–7c included organizations whose likelihood of performing or funding research was predicted using information on their Form 990 (strata 6a–6c) or Form 990-PF (strata 7a–7c). Within each stratum, organizations were grouped into six size classes based on total expenses, with assignment to the six classes based on the cumulative square root of frequency (cum √f) rule. The sample was allocated to the strata to minimize the variability of total expenses, resulting in oversampling in substrata that included large organizations.
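The size-class assignment can be illustrated with a minimal implementation of the cumulative square root of frequency rule; the histogram bin count and the synthetic expense data are hypothetical.

```python
import math

def cum_sqrt_f_boundaries(values, n_bins, n_classes):
    """Cumulative square root of frequency (cum √f) rule: histogram the
    values into equal-width bins, accumulate sqrt(frequency), and cut
    the cumulative scale into n_classes equal intervals. Returns the
    upper boundary of each size class, in the original units."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins
    freqs = [0] * n_bins
    for v in values:
        freqs[min(int((v - lo) / width), n_bins - 1)] += 1
    cum, total = [], 0.0
    for f in freqs:
        total += math.sqrt(f)
        cum.append(total)
    step = total / n_classes
    boundaries, k = [], 1
    for i, c in enumerate(cum):
        while k < n_classes and c >= k * step:
            boundaries.append(lo + (i + 1) * width)
            k += 1
    boundaries.append(hi)
    return boundaries
```

Because the sqrt transform compresses the long right tail of total expenses, the resulting classes isolate the few very large organizations in their own strata, which supports the oversampling of substrata with large organizations described above.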

The sample was a systematic (1-in-k) random sample of organizations within each stratum. The organizations were selected with equal probability. Before the systematic sample was selected, the organizations were stratified implicitly (sorted) by total expenditures to ensure that the sample was proportionately distributed by size.
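A minimal sketch of the 1-in-k systematic selection with implicit size sorting follows; the `expenses` field name and the seed are illustrative.

```python
import random

def systematic_sample(frame, k, seed=None):
    """1-in-k systematic sample taken after sorting the stratum by total
    expenses (implicit size stratification). Every organization has an
    equal probability of selection; the sort spreads the sample
    proportionately across the size distribution."""
    rng = random.Random(seed)
    ordered = sorted(frame, key=lambda org: org["expenses"])
    start = rng.randrange(k)  # random start in [0, k)
    return ordered[start::k]
```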

Data Collection and Processing Methods

Data collection. Data collection for the NPRA Survey occurred in two general phases. The first focused on determining whether the organizations in strata 4–7c (i.e., “unknown” organizations) performed or funded research during FY 2016. The second involved notifying organizations that they had been selected to participate in the survey and sending the survey materials to each organization’s point of contact via mail and e-mail.

In phase 1, organizations in strata 4–7c were sent a letter providing details about the survey, a screener response card, and a business reply envelope on 26 February 2018. They were asked to complete the screener card by indicating whether their organization had performed or funded research during FY 2016 and by providing their contact information (i.e., contact name, title, e-mail address, and phone number). If an organization said that it had not performed or funded research in FY 2016, it was not contacted again. The last screener card was returned on 25 May, and after a week during which no additional screener cards were received, phase 1 was considered closed on 1 June 2018.

Organizations in strata 1–3 (i.e., “known” organizations) received their first communication about the survey in phase 2. Organizations from phase 1 that either responded that they performed or funded research or that had not yet responded were also included in phase 2. All organizations included in phase 2 received the survey via e-mail on 30 April 2018. There were two versions of the FY 2016 NPRA Survey questionnaire: the health version of the survey form was sent to organizations classified as a health organization by NCCS, and the standard survey form was sent to all other organizations. Respondents could choose to submit a paper survey or use a Web-based data collection system to respond. Every effort was made to maintain close contact with respondents throughout the process to ensure the accuracy of the resulting data. Questionnaires were carefully examined for completeness upon receipt, and respondents were sent personalized e-mails asking them to make any necessary revisions before the final processing and tabulation of data.

Mode. All phase 1 screener cards were sent via mail, and organizations could choose to respond via mail, e-mail, or phone. At the end of phase 1, 67.0% of respondents had provided their information by mail, and 33.0% had used another mode (i.e., e-mail or phone). For phase 2, respondents could choose to use the Web-based data collection system or submit the paper questionnaire form that had been mailed to them. ICF staff also entered responses received by e-mail or phone. Of the organizations responding to phase 2, 38.9% submitted their data using the Web-based data collection system, 14.6% returned a paper version of the survey, and 46.5% used some other mode (i.e., e-mail or phone).

Response rates. Overall response was defined as completing the questionnaire or indicating that the organization does not perform or fund research, and response rates were calculated out of the number of eligible organizations in the sample (i.e., excluding organizations found to be ineligible for the survey because they were not a nonprofit, were covered on another NCSES data collection, or were defunct—i.e., out of business). The survey obtained a 48.1% unweighted response rate (table A-1) across all strata (2,919 organizations responding out of 6,071 eligible organizations that were sampled).

Performer and funder response rates both counted an organization as complete if it answered all of the questions asked of it. The performer response rate was calculated as the number of completed performer questionnaires, plus organizations that indicated they only fund research or that they neither perform nor fund research, out of all eligible organizations. Similarly, the funder response rate was calculated as the number of completed funder questionnaires, plus organizations that only perform research or that neither perform nor fund research, out of all eligible organizations.

The unweighted performer response rate was 49.2% (2,985 organizations out of 6,071). The unweighted funder response rate was 50.6% (3,074 organizations out of 6,071).

Response rates varied across strata. The overall unweighted response rates, inclusive of performers and funders, ranged from 34.6% among hospitals to 61.7% among organizations that were likely performers and funders of research. Response rates were slightly higher among funders than performers across most strata.

In addition to the sample design strata, organizations were grouped into health versus nonhealth organizations. Health organizations received questionnaires referring to the survey as a “health survey,” while nonhealth organizations received the standard survey form. The unweighted overall response rate among health organizations was lower than that found among nonhealth organizations (41.5% versus 53.0%), and this difference was also seen in the performer and funder response rates.

Response for health and medical organizations (40.5%) was lower than that for all other organization types (53.1%) (table A-2).

For the 801 responding organizations, item response varied for each respondent category (table A-3). Item response for organizations performing but not funding R&D was lowest for Question 13, which requested the number of paid employees that worked on research activities (92.0%). Item response for organizations funding but not performing R&D was generally higher than the other two response categories, with the lowest response rate of 98.5% for the questions requesting funding expenditures by type of organization funded (Question 17) and by R&D field (Question 18). Finally, organizations both performing and funding R&D showed the lowest item response rates in the two questions requesting R&D expenditures by type of R&D conducted (94.1% response for Question 12, performance expenditures by type of R&D, and 92.4% response for Question 20, funding expenditures by type of R&D).

Data editing. The NPRA Survey was subject to very little editing. Respondents were contacted and asked to resolve possible self-reporting issues themselves. Questionnaires were carefully examined by survey staff upon receipt. Reviews focused on unexplained missing data, expenditures that exceeded total expenses for the organization, expenditures for performing research and funding research that matched, and other data anomalies. If additional explanations or data revisions were needed, respondents were sent personalized e-mail messages asking them to provide any necessary revisions before the final processing and tabulation of data. For any follow-up questions that went unanswered, NCSES was consulted before the data were either accepted without changes or adjusted based on information from other questions or previous contacts with the respondent.

Imputation. Instances of missing data occurred when a sampled organization did not respond to the survey (i.e., unit nonresponse) or when an organization responded but did not answer certain survey questions (i.e., item nonresponse). Missing data were imputed for organizations that (1) did not respond to the survey, but for which auxiliary data about the amounts spent performing or funding research were available, (2) did not respond to the survey, but for which information from the pilot survey were available about the amounts spent performing or funding research, and (3) reported that they performed or funded research (Questions 7 and 8, respectively) but did not provide information on the amounts spent performing or funding research (Questions 9 and 16, respectively). The last group included organizations that completed the screener card in phase 1 but did not complete the questionnaire in phase 2. Organizations that did not fall into one of these groups were accounted for in the nonresponse weighting adjustment described below. For those organizations performing R&D activities, imputation accounted for $3,065,665,524 of the $22,572,535,286 weighted total reported in table 1. For those funding R&D activities, imputation accounted for $791,795,909 of the $10,528,134,154 weighted total reported in table 5.

For large organizations that did not respond to the survey, publicly available documents, such as annual reports and financial statements, were used to impute the amounts spent on research activities. In total, 83 organizations were included in the lookups, including the 40 largest nonresponding organizations from strata 1–3 (likely performers and funders) and the 10 largest organizations that reported either performing or funding but did not provide an amount. This auxiliary look-up method resulted in imputing research-performing status for 46 organizations, research-funding status for 33 organizations, total performance dollar amounts for 24 organizations, and total funding dollar amounts for 2 organizations.

Sixty-three organizations reported the amount spent on performing research on the pilot survey but did not provide a response on the FY 2016 survey. Similarly, 44 organizations reported the amount spent on funding research on the pilot survey but not on the FY 2016 survey. The pilot survey was used to impute values for these organizations. The pilot survey amounts were adjusted by an imputation factor to account for inflation or deflation in the reported amounts from the previous year. Imputation factors are the ratio of data from the current survey to the previous survey for organizations that responded to both, and these factors reflect the average annual growth or decline in research expenditures. The imputation factor was applied to responses from the previous survey to estimate the amount for the current survey. The total imputed amount spent in each subcategory for performing research (Questions 10, 11, and 12) and funding research (Questions 17, 18, 19, and 20) was imputed by distributing the total amounts to the subcategories in the same proportions as reported on the pilot survey.
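The imputation-factor arithmetic described above can be sketched as follows, with hypothetical amounts.

```python
def imputation_factor(paired_amounts):
    """Ratio of current-cycle to pilot-cycle totals among organizations
    that reported in both cycles; captures average growth or decline.
    Each pair is (current_amount, pilot_amount); amounts are hypothetical."""
    current = sum(cur for cur, _ in paired_amounts)
    pilot = sum(prev for _, prev in paired_amounts)
    return current / pilot

def impute_from_pilot(pilot_amount, factor):
    """Carry a pilot-survey amount forward to the current cycle."""
    return pilot_amount * factor
```

For example, if organizations responding to both cycles collectively grew from $300 to $330 in reported expenditures, the factor is 1.1, and a nonrespondent's pilot amount of $500 would be imputed as $550 for the current cycle.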

For organizations that reported performing or funding research (Questions 7 and 8, respectively) but did not provide the amount spent (Questions 9 and 16, respectively), amounts were imputed from a regression model using expenses and assets reported on Form 990 or Form 990-PF and significant classification variables such as NTEE code. The source of the expenses and assets was the 2016 Statistics of Income (SOI) financial data extract downloaded from the IRS. If no 2016 data were available, data for 2015 or 2014 were substituted.

The total imputed amount spent in each subcategory for performing research (Questions 10, 11, and 12) and funding research (Questions 17, 18, 19, and 20) was imputed by distributing the total amounts to the subcategories, based on the average proportions for the responding organizations in the imputation classes based on type of organization (NTEE groups).
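The proportional distribution of an imputed total to subcategories can be sketched as follows; the category names and shares are illustrative.

```python
def distribute_total(total, respondent_proportions):
    """Spread an imputed total across subcategories in proportion to the
    average shares reported by respondents in the same imputation class.
    Category names and shares are illustrative; shares are normalized so
    the parts always sum to the total."""
    s = sum(respondent_proportions.values())
    return {cat: total * share / s for cat, share in respondent_proportions.items()}
```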

Weighting. The nonresponse weighting adjustment accounted for organizations that did not respond to the FY 2016 survey and for which information from other sources was not available to use for imputation. For these adjustments, the definition of respondents included organizations that performed or funded research (confirmed eligible) and those that reported that they did not perform or fund research (confirmed ineligible). Nonrespondents were the organizations for which eligibility had not been determined (unresolved eligibility status).

The nonresponse adjustment was a ratio adjustment in which the respondents (r) were weighted up to account for the nonrespondents (nr); the adjustment factor was computed from the base weights and total expenses. The base weight was adjusted by the nonresponse factor:

W2 = W1 × f1, where the adjusted weight (W2) equals the base weight (W1) times the nonresponse factor (f1).
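A sketch of one common form of this ratio adjustment within a single nonresponse class follows; the exact weighting of the factor is an assumption consistent with the description above, and the field names (w1, expenses) are illustrative.

```python
def nonresponse_factor(respondents, nonrespondents):
    """Expense-weighted ratio adjustment within one nonresponse class:
    f1 = base-weighted expenses of (r + nr) / base-weighted expenses of r,
    so respondents carry the weight of nonrespondents in their class.
    This specific form is an assumption; field names are illustrative."""
    wx_r = sum(o["w1"] * o["expenses"] for o in respondents)
    wx_nr = sum(o["w1"] * o["expenses"] for o in nonrespondents)
    return (wx_r + wx_nr) / wx_r

def adjusted_weight(w1, f1):
    """W2 = W1 * f1."""
    return w1 * f1
```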

Nonresponse classes were determined through a nonresponse analysis that identified variables with differential nonresponse. This was done using a logistic regression model with survey response as the outcome. The outcome was modeled based on the frame data and auxiliary information available for both respondents and nonrespondents. The frame data included information from Form 990 and Form 990-PF at the time of frame creation (2013). Other information explored in the nonresponse analysis included expenses, assets, and revenue from the 2016 SOI data and organizations from the FY 2016 Survey of Federal Science and Engineering Support to Universities, Colleges, and Nonprofit Institutions. Because the available data for nonresponse adjustment differed between Form 990 and Form 990-PF, the response models were calculated separately for organizations filing Form 990 and organizations filing Form 990-PF. Nonresponse classes were based on significant variables within each group.

The last step in the weighting process was to calibrate the weighted expenses for the responding organizations to match the total expenses of nonprofit organizations in the population based on the 2016 SOI financial data extract. This adjustment corrected for changes in the population that may have occurred between frame development in 2013 and the reference year of the survey (FY 2016). The calibration adjustment was based on the respondents and the out-of-sample organizations identified during the data collection. Although the out-of-sample organizations did not qualify for the NPRA Survey, they were still represented in the population.
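A simple ratio calibration to a population expense total can be sketched as follows; the single common scaling factor is an assumption about the calibration's form, and the amounts are hypothetical.

```python
def calibrate_weights(weights, expenses, population_total):
    """Simple ratio calibration: scale all weights by a common factor so
    the weighted sum of expenses matches the population control total
    (here standing in for the 2016 SOI total). Amounts are hypothetical."""
    g = population_total / sum(w * x for w, x in zip(weights, expenses))
    return [w * g for w in weights]
```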

Variance estimation. Successive difference replication (SDR) was used to estimate the variance of the estimates. SDR was developed for systematic samples in which the frame is sorted in a way that reduces the sampling variance; here, the sample selection was sorted by expenses within each stratum. In comparison with the direct variance estimator, SDR has the advantage of capturing the variability resulting from the imputation and weighting process. The variance estimates were based on 80 replicates. For each replicate, every selected organization was weighted by a replicate factor of 1.0, 1.7, or 0.3, applied to the sampling weight. For each sample replicate, the nonresponse and calibration adjustments were recalculated as described above.
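Given the full-sample estimate and the replicate estimates, the SDR variance computation reduces to the standard formula, sketched here with hypothetical values.

```python
def sdr_variance(full_estimate, replicate_estimates):
    """Successive difference replication variance estimator:
    var = (4 / R) * sum of (theta_r - theta_0)^2 over the R replicates
    (R = 80 for this survey), where theta_0 is the full-sample estimate
    and theta_r is the estimate recomputed under replicate r's weights."""
    R = len(replicate_estimates)
    return (4.0 / R) * sum((t - full_estimate) ** 2 for t in replicate_estimates)
```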

Survey Quality Measures

Sampling error. The estimates produced using the data collected by the NPRA Survey are subject to sampling error, or the difference between the estimates obtained from the sample and the results theoretically obtainable from a comparable complete enumeration of the sampling frame. This error results because only a subset of the sampling frame is measured in a sample survey.

The sampling error for a survey estimate is most often measured by calculating its standard error (SE). SEs may be used to define confidence intervals for the corresponding estimates with a desired level of confidence. For example, the interval defined by a margin of error of +/−2 SEs yields a confidence interval of approximately 95%. The confidence level represents the percentage of confidence intervals, if calculated for every possible random sample using the same sample design, that would contain the result of a complete enumeration of the sample frame. The observed NPRA Survey sample is only one of many possible samples that could have been selected; the 95% confidence interval (+/−2 × SE) computed from it is therefore one realization of a procedure that captures the full enumeration result for 95% of possible samples.

Because relatively few organizations perform R&D in the United States and because the amount of R&D they perform is quite variable, the sampling errors of the estimates produced using the NPRA Survey can vary widely. Controlling sampling error depends on the correlation between the measure of size in the sample frame, which was used to stratify the organizations, and the actual data collected, which could not be predicted accurately for all organizations when the sample was designed. Further, to help guarantee coverage, the largest organizations known to perform R&D were included in the sample with certainty so that these organizations would not contribute to the sampling error for the resulting estimates.

Relative standard errors (RSEs) are produced for all published estimates. The RSE is calculated as RSE(Y) = SE(Y) / Y, where Y is an estimate from the survey (e.g., mean R&D performance, total R&D performance). The RSEs represent the variability of the survey estimates resulting from random sampling error, from nonresponse error (assuming organizations were missing at random), and from item imputation (described in the next section). Tables presenting the estimated measures of variability corresponding to each data table are available from the NCSES survey manager.
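The RSE and the approximate 95% confidence interval described above can be computed as follows, with hypothetical values.

```python
def relative_standard_error(estimate, standard_error):
    """RSE(Y) = SE(Y) / Y."""
    return standard_error / estimate

def confidence_interval_95(estimate, standard_error):
    """Approximate 95% interval: estimate +/- 2 * SE."""
    return (estimate - 2.0 * standard_error, estimate + 2.0 * standard_error)
```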

The RSE for the estimated total R&D performed by nonprofit organizations in FY 2016 is 13.2%. The RSE for the estimated total R&D funded by nonprofit organizations in FY 2016 is 18.0%.

Nonresponse error. Nonresponse error refers to the differences in key estimates between units (i.e., organizations) in the sampling frame that were sampled for data collection and those that responded. For unit nonresponse, multiple follow-ups were conducted with nonresponding organizations, and multiple contact and data collection modes were used (i.e., phone, mail, and e-mail) to mitigate nonresponse error. The final survey weights incorporated nonresponse adjustments to reduce the risk of nonresponse bias in the final estimates.

Nonresponse bias for survey estimates cannot be directly measured. However, the impact of nonresponse can be incorporated into the variability of survey estimates, assuming the data are missing at random. Missing at random for this survey means that, conditional on the nonresponse adjustments, the propensity for an organization to respond is not related to R&D performance and funding.

For item nonresponse, organizations were encouraged to report estimates of expenditures when actual dollar amounts could not be provided. This approach reduces item nonresponse error risk but may introduce measurement error. It should be noted that the Web survey allowed respondents to select “Unavailable” as a response; however, this option was not provided on the paper survey form. Imputation was conducted to help mitigate item nonresponse error. Imputation was incorporated into the RSEs through multiple imputation.

Coverage error. Under the total survey error framework, coverage error describes the difference between units (i.e., organizations) in the sampling frame and units in the target population that the frame was developed to reach. Using the NCCS Core Files as a sampling frame excludes most public charities and other exempt organizations with gross receipts of less than $50,000. Although organizations with gross receipts of less than $50,000 represent 70% of public charities and other exempt organizations, they only represent approximately 0.1% of the total gross receipts generated by those organizations. Exclusion of these small organizations is an undercoverage risk and a limitation of the design because they will not be included in the sampling frame. However, due to their size, the impact on the estimate of total research expenditures is negligible.

Another source of undercoverage is new organizations that had not filed a Form 990, Form 990-EZ, or Form 990-PF because they did not meet the filing criteria or because they did not exist in 2013 when the frame was created. However, these coverage errors are anticipated to be small, based on an increase of only 1.1% in the number of organizations between the 2014 and 2015 IRS Exempt Organizations Business Master File Extracts. Further, to the extent that R&D for new organizations is similar to that of organizations on the frame, the calibration adjustment reduces the impact of new organizations on the final estimates.

Overcoverage is caused by organizations that are not part of the target population (e.g., defunct organizations) being present in the sampling frame and potentially sampled. The sample cleaning and screening process (to remove organizations that do not perform or fund research) used by the NPRA Survey to verify performer and funder status was designed to remove these cases and therefore minimize overcoverage error in the final estimates. Organizations that did not respond to the screener questions could not be screened out.

Measurement error. The largest risk of measurement error is likely respondents’ interpretation of the definition of R&D activities and variations in record-keeping procedures used by respondents to answer the survey questions. For example, the survey asks for an organization’s expenditures for basic research, applied research, and experimental development. Some organizations said that these amounts were difficult to report because either they could not determine how best to allocate expenditures among the three categories or they did not track information in that way.

Definitions

  • 501(c)(3) organization. Section 501(c)(3) of the Internal Revenue Code allows for federal tax exemption of nonprofit organizations, specifically those that are considered public charities, private foundations, or private operating foundations. It is regulated and administered by the Department of the Treasury through the IRS. There are other 501(c) organizations, designated by the categories 501(c)(1) through 501(c)(28).
  • National Taxonomy of Exempt Entities (NTEE). Developed by NCCS, the NTEE system is used by the IRS to classify nonprofit organizations. The classification system divides nonprofit organizations into 26 major groups under 10 broad categories. The major groups represent broad subsectors that include arts, culture, and humanities; education; environment and animals; health; human services; international, foreign affairs; public, societal benefit; religion related; mutual/membership benefit; and unknown, unclassified. Decile, centile, and common codes further subdivide these categories.
  • Nonprofit organization. A business granted tax-exempt status by the IRS. Nonprofits pay no income tax on the donations they receive or on any money that they earn through fundraising activities. Nonprofit organizations are sometimes called NPOs or 501(c)(3) organizations, based on the section of the tax code that permits them to operate.
  • Research activities. Organizations were asked to report whether they performed or funded research activities on Questions 7 and 8 of the NPRA Survey. Research was defined as creative and systematic work undertaken in order to increase the stock of knowledge—including knowledge of humankind, culture, and society—and to devise new applications of available knowledge. Research covers three activities: basic research, applied research, and experimental development (see the definition of expenditures by type of research for additional information). Research does not include internal program monitoring or evaluation; public service or outreach programs; education or training programs; quality control testing; market research; management studies or efficiency surveys; literary, artistic, or historical projects; or feasibility studies (unless included as part of an overall research project). Organizations were asked to report whether they performed or funded research during the fiscal year. Research performed included research activities performed by an organization’s employees or contract employees. Research funded included all grants, contracts, subcontracts, and subawards awarded by an organization to external recipients to perform research activities.
  • Research expenditures. Organizations were asked to report their research-related expenditures for the fiscal year on Questions 9 and 16. Amounts spent on research performed within the reporting organization were reported on Question 9. These included direct costs (e.g., salaries and wages, travel, equipment, supplies, and consulting) and indirect costs associated with research expenditures (e.g., general and administrative salaries and wages, fringe benefits, facility costs, and depreciation) that were calculated using the organization’s applicable fringe, overhead, and general and administrative rate or facilities and administrative rate. Organizations were told to exclude capital expenditures as well as payments or funds in excess of the actual cost of the research (e.g., fees). The expenditures reported on Questions 10–12 were a subset of those reported on Question 9. Amounts provided to others to perform research outside of the reporting organization were reported on Question 16. These included all grants, contracts, subcontracts, and subawards; for multiyear awards, only the costs for FY 2016 were reported. The expenditures reported on Questions 17–20 were a subset of those reported on Question 16.
  • Fiscal year. Organizations were asked to report data for their fiscal year (or financial year). The fiscal year is a 12-month period that an organization uses for budgeting, forecasting, and reporting. It can start at any point in the year and ends 12 months later.
  • Employer Identification Number (EIN). Organizations were asked to confirm and report data based on their EIN, also known as the Federal Employer Identification Number or Federal Tax Identification Number. This unique nine-digit number is assigned by the IRS to business entities operating in the United States for identification purposes.
  • Full-time equivalent (FTE). On Question 6, organizations were asked to provide the number of FTEs for FY 2016. FTEs are calculated as the total working (paid) hours during a specific reference period (usually a year) divided by the number of hours representing a full-time schedule within the same period. Only full-time, part-time, and seasonal or temporary employees were included in this count. On Question 13, organizations were asked to report the number of FTEs that worked on research activities in FY 2016. The response was to be split by the number of researchers (i.e., professionals engaged in the conception or creation of new knowledge) and technicians and other support personnel (i.e., staff who work under the supervision of researchers to conduct research activities or who provide direct support services for the research project). Contract employees and volunteers were not included on Question 13; they were reported on Questions 14 and 15, respectively.
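The FTE calculation above can be written as a simple ratio; the worked figure below is an illustrative assumption (a 2,080-hour full-time year), not a schedule specified by the survey:

```latex
\mathrm{FTE} = \frac{\text{total paid hours in the reference period}}
                    {\text{hours in a full-time schedule for that period}}
\qquad\text{e.g.,}\quad \frac{1{,}040}{2{,}080} = 0.5\ \mathrm{FTE}
```

Under this assumption, an employee paid for 1,040 hours during the year would be counted as 0.5 FTE.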
  • Expenditures by source. Questions 10 and 19 asked organizations for their total research expenditures by funding source, as defined below. Due to high standard errors for some of the smaller sources of funding, some categories were combined in the data tables. Foundations and all other nonprofits were combined into the nonprofits category, and state and local government and universities were combined with other to form the category all other sources.
    • Federal government. Any agency of the U.S. government. Subcontract or subaward funds received by the organization for research activities on federal projects were included.
    • State and local government. Any state, county, municipality, or other local government entity in the United States.
    • Businesses. Domestic or foreign for-profit organizations. Funds from a company’s nonprofit foundation were not reported under businesses; they were reported under foundations. Funds from another nonprofit were reported under all other nonprofits.
    • Foundations. Domestic or foreign nonprofit grant-making organizations.
    • Universities. Domestic or foreign degree-granting institutions.
    • All other nonprofits. Domestic or foreign public charities and other nonprofit organizations not reported under foundations or universities. Funds from the reporting organization itself were reported under the category internal funds.
    • Internal funds. The organization’s own funds from its endowment, general donations, or other unrestricted sources.
    • Individual donors. Gifts designated by the donors for research.
    • Other sources. Sources not reported in any of the other categories.
  • Expenditures by type of research. Questions 12 and 20 asked organizations for the amount of federal and nonfederal research expenditures by type of research, as defined below:
    • Basic research. Experimental or theoretical work undertaken primarily to acquire new knowledge of the underlying foundations of phenomena and observable facts, without any particular application or use in view.
    • Applied research. Original investigation undertaken in order to acquire new knowledge. It is directed primarily toward a specific, practical aim or objective.
    • Experimental development. Systematic work, drawing on knowledge gained from research and practical experience and producing additional knowledge, which is directed to producing new products or processes or to improving existing products or processes.
  • Expenditures by type of organization funded. Question 17 asked organizations for the amounts of funding provided to four types of organizations, as defined below:
    • Universities or other educational entities. Domestic or foreign degree-granting institutions.
    • Other nonprofit organizations. Domestic or foreign nonprofit foundations and organizations.
    • Businesses. Domestic or foreign for-profit organizations.
    • Other. Recipients not reported in any of the other categories.
 

Notes

1A health organization was defined as an organization with (1) a major NTEE code of E (Health), F (Mental Health, Crisis Intervention), G (Disease, Disorder, Medical Disciplines), or H (Medical Research); (2) an IRS Foundation code of 12; or (3) an IRS Reason code of 03 or 05, indicating a hospital or medical research organization.

2Adapted from Foundation Group definition. See https://www.501c3.org/what-is-a-501c3/.

3The complete list of codes is available at https://nccs.urban.org/project/irs-activity-codes.

4Adapted from Investopedia definition. See https://www.investopedia.com/terms/n/non-profitorganization.asp.

 

Acknowledgments and Suggested Citation

Acknowledgments

Ronda Britt of the National Center for Science and Engineering Statistics (NCSES) developed and coordinated this report under the guidance of Gary Anderson, NCSES Acting Program Director, and the leadership of Emilda B. Rivers, NCSES Director; Vipin Arora, NCSES Deputy Director; and John Finamore, NCSES Chief Statistician. Darius Singpurwalla (NCSES) reviewed the report.

Under contract to NCSES, ICF International conducted the survey and prepared the statistics for this report. ICF staff members who made significant contributions include Randall ZuWallack and Sherri Mamon. Publication processing support was provided by Christine Hamel (NCSES).

NCSES thanks the nonprofit organizations that provided input for this report.

Suggested Citation

National Center for Science and Engineering Statistics (NCSES). 2022. Nonprofit Research Activities: Fiscal Year 2016. NSF 22-338. Alexandria, VA: National Science Foundation. Available at https://ncses.nsf.gov/pubs/nsf22338/.

 

Contact Us

Report Author

Ronda Britt
Survey Manager
Tel: (703) 292-7765
E-mail: rbritt@nsf.gov

NCSES

National Center for Science and Engineering Statistics
Directorate for Social, Behavioral and Economic Sciences
National Science Foundation
2415 Eisenhower Avenue, Suite W14200
Alexandria, VA 22314
Tel: (703) 292-8780
FIRS: (800) 877-8339
TDD: (800) 281-8749
E-mail: ncsesweb@nsf.gov