Organizing Your Social Sciences Research Paper: Evaluating Sources

Importance of Evaluating Sources

Evaluating the authority, usefulness, and reliability of resources is a crucial step in developing a literature review that effectively covers pertinent research as well as demonstrates to the reader that you know what you're talking about. The process of evaluating scholarly materials also enhances your general skills and ability to:

  1. Seek out alternative points of view and differing perspectives,
  2. Identify possible bias in the work of others,
  3. Distinguish between fact, fiction, and opinion,
  4. Develop and strengthen your ability to distinguish between relevant and irrelevant content,
  5. Draw cogent, well thought out conclusions, and
  6. Synthesize information, extracting meaning through a deliberate process of interpretation and analysis.

Black, Thomas R. Evaluating Social Science Research: An Introduction. London: Sage, 1993.

Strategies for Critically Evaluating Web Content

Web Content Requires Additional Methods of Evaluation

A report from the Stanford University Graduate School of Education found that students evaluating information that flows across social media channels, or that is retrieved from online search engines like Google or Bing, have difficulty distinguishing advertisements from news articles and identifying where the content came from. In general, the principles that guide your evaluation of print materials are the same as those that apply to evaluating online resources. However, unlike print materials, which have certain features that help determine their scholarly integrity, the interactive and multimedia dynamics of online sources require additional attention to the content in order to gain confidence that what you are viewing is valid and credible.

Additional things to look for when considering using an online resource:

  • Source of the content is stated -- determine whether the content is original or borrowed, quoted, or imported from elsewhere. Note that content imported from another source via RSS feed can be difficult to identify, as this material can be incorporated into other content on the page without being appropriately labeled.
  • Don't be fooled by an attractive, professional-looking presentation -- just because a site looks professional doesn't mean its content is credible. However, a poorly organized web page design or poorly written content is easy to recognize and can be a signal that you should carefully scrutinize the site's content.
  • Site is currently being maintained -- check for last posting dates or last revised dates. Most scholarly websites show a date when the content was last posted or revised. Note that, if no date is indicated, this does not mean its content is invalid. However, it may indicate that the content is out-of-date and does not reflect current information about the topic.
  • Links are relevant and appropriate, and are in working order -- a site with a lot of broken links is an indication of neglect and out-of-date content.
  • Clearly states authorship -- if a site is produced anonymously, you cannot verify the legitimacy of its creator. Note that the author of a site can be either a person or an organization.
  • The site includes contact information -- if you have questions about the site, contact information is an important indicator that the site is well-maintained.
  • Domain location in the site address (URL) is relevant to the focus of the material [e.g., .edu for educational or research materials; .org for non-profit organizations; .gov for government sites; .com for business sites]. Note that the domain is not necessarily a primary indicator of site content. For example, some authors post their content on blog or wiki platforms hosted by companies with .com addresses. Also note that the tilde (~) in the URL usually indicates a personal page.

Evaluating Internet Information. Online Library Learning Center. University of Georgia; Evaluating Internet Sources: A Library Resource Guide. Olsen Library. Northern Michigan University; Evaluating Sources. Writing@CSU. Colorado State University; Evaluating Web Sites. Teaching and Learning Services, University of Maryland Libraries; Ostenson, Jonathan. “Skeptics on the Internet: Teaching Students to Read Critically.” The English Journal 98 (May, 2009): 54-59; Stanford History Education Group. "Evaluating Information: The Cornerstone of Civic Online Reasoning." Stanford, CA: Graduate School of Education, 2016; Writing from Sources: Evaluating Web Sources. The Reading/Writing Center. Hunter College.

Detecting Author Bias

Bias, whether intentional or not, occurs when a statement reflects a partiality, preference, or prejudice for or against an object, person, place, or idea. Listed below are problems to look for when determining whether a source is biased.

  1. Availability Bias -- this is the tendency for people to overestimate the probability of events associated with memorable or dramatic occurrences [e.g., after 9/11, many people took vacations by car rather than by airplane even though, statistically, car travel is much more dangerous]. In a research study, this form of bias can appear as an example chosen to support the author's argument or as a case study designed around a particular event. Unless the purpose of the study is to illuminate new understanding of a memorable or dramatic occurrence, be critical of studies that rely on this type of evidence to examine a research problem. A seemingly mundane or uneventful occurrence can be just as valid in developing solutions to a problem or advancing new knowledge.
  2. Distortion or Stretching of the Facts -- this refers to the act of making issues, problems, or arguments appear more extreme by using misinformation or exaggerated and/or imprecise language to describe research outcomes [e.g., “Everyone agreed the policy was a complete disaster.” Who is everyone? How was data gathered to reach this conclusion? How does one specifically define a "disaster"? Is there sufficient evidence to support such a broad statement?]. Look for declarative statements that lack appropriate reference to supporting evidence or that are not followed up with detailed analysis.
  3. Flawed Research Design -- bias can enter the narrative as a result of a poorly designed study; this may include a claim or generalization about the findings based upon too small a sample, manipulated statistics, the omission of contrary conclusions from other studies, or a failure to recognize negative results [results that do not support the hypothesis].
  4. Lack of Citations -- it is acceptable to make a broad declarative statement if it is clearly supported and linked to evidence from the study [e.g., "Testimony during Congressional hearings shows that the Department of Education is reluctant to act, so teachers must do so"]. This problem refers to statements or information presented as fact without proper citation to a source or sources that support the researcher's position, and without being explicitly framed as the author's opinion.
  5. Misquoting a Source -- this occurs when an author rewords, paraphrases, or manipulates a statement, when information about a source is incomplete, or when a quote is presented in a way that misleads or conveys a false impression. This is especially important to keep in mind when paraphrasing another author. If you cannot adequately summarize a specific statement, finding, or recommendation, use a direct quote to avoid any ambiguity.
  6. Persuasive or Inflammatory Language -- using words and phrases intended to elicit a positive or negative response from the reader or that lead the reader to a specific conclusion [e.g., referring to one group in an armed conflict as “terrorists” and the other group as “peace-loving”].
  7. Selective Facts -- taking information out of context or selectively choosing information that only supports the argument while omitting the overall context or vital supporting evidence.
  8. Statistical Survey Bias -- this can take several forms, so if a study presents data gathered by the author(s), examine it critically for the following possible biases:
    • Measurement Error: this results from problems with the process by which data was gathered, such as the use of leading questions that influence responses, or questions biased toward what respondents believe is socially desirable, since most people want to present themselves favorably. The only way to assess bias in these cases is to have access to the survey instrument used to gather the data.
    • Sample Size: increasing the size of a sample, for example the number of people interviewed, does not necessarily decrease bias. Look to see whether the sample is representative of the population under study to ensure that any generalizations or conclusions drawn from the data are valid.
    • Undercoverage: this occurs when the method of data gathering leaves some members of the population with no opportunity to participate, or when non-response to a survey is not accounted for. In looking at the data, be sure to understand the percentage of non-responses and which groups of people were not included.
    • Voluntary Response: this bias occurs when respondents to a survey are self-selected, resulting in an overrepresentation of individuals who have strong opinions [e.g., data from a radio call-in show]. Be an especially critical reader of web-based surveys about controversial topics if the author(s) have not indicated how they interpreted the data from voluntary responses [see the sketch following this list].
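
The Sample Size and Voluntary Response points above can be illustrated with a short simulation. The sketch below is a hypothetical Python example, not drawn from any of the studies cited on this page: it assumes a population in which 30% of people favor a policy, but supporters are five times more likely than opponents to answer a voluntary survey. The names and rates used (TRUE_SUPPORT, RESPONSE_IF_FAVOR, RESPONSE_IF_OPPOSE) are invented for illustration only.

    # Hypothetical illustration of voluntary response bias: the estimate does not
    # move toward the true value as more people are contacted; it only becomes a
    # more precise version of the wrong answer. All rates below are assumptions.
    import random

    random.seed(42)

    TRUE_SUPPORT = 0.30        # actual share of the population favoring the policy
    RESPONSE_IF_FAVOR = 0.50   # assumed chance a supporter chooses to respond
    RESPONSE_IF_OPPOSE = 0.10  # assumed chance a non-supporter chooses to respond

    def voluntary_survey(n_contacted):
        """Contact n people, keep only those who volunteer a response, and
        return the share of respondents who favor the policy."""
        favor, total = 0, 0
        for _ in range(n_contacted):
            favors = random.random() < TRUE_SUPPORT
            responds = random.random() < (RESPONSE_IF_FAVOR if favors else RESPONSE_IF_OPPOSE)
            if responds:
                total += 1
                favor += favors
        return favor / total

    for n in (500, 5_000, 50_000):
        print(f"contacted {n:>6,}: estimated support = {voluntary_survey(n):.2f} "
              f"(true value = {TRUE_SUPPORT:.2f})")

With these assumed response rates, the estimate settles near 0.68 regardless of how many people are contacted, which is exactly the pattern the Sample Size and Voluntary Response points warn about: a larger biased sample is still a biased sample.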

NOTE:  The act of determining bias in scholarly research is also an act of constant self-reflection. Everyone has biases. Therefore, it is important that you minimize the influence of your own biases by approaching the assessment of another person's research introspectively and with a degree of self-awareness.


"Availability Bias, Source Bias, and Publication Bias in Meta-Analysis." In Methods of Meta-Analysis: Correcting Error and Bias in Research Findings. 3rd Edition. (London: SAGE Publications, 2015), pp. 513-551; "Bias." In Key Concepts in Social Research. Geoff Payne and Judy Payne. (London: SAGE Publications, 2004), pp. 28-31; Evaluating Sources. Lakeland Library Research Guides. Lakeland Community College; Podsakoff, Philip M. et al. “Common Method Biases in Behavioral Research: A Critical Review of the Literature and Recommended Remedies.” Journal of Applied Psychology 88 (October 2003): 879-903; Stereotypes and Biased Language. The Writing Lab and The OWL. Purdue University; Bias in Survey Sampling. Stat Trek Online Tutorials; What is Availability Bias? InnovateUs.net.