How to Read a Paper: Critical Review
Reading a scientific article is a complex task. The worst way to approach this task is to treat it like the reading of a textbook—reading from title to literature cited, digesting every word along the way without any reflection or criticism.
A critical review (sometimes called a critique, critical commentary, critical appraisal, critical analysis) is a detailed commentary on and critical evaluation of a text. You might carry out a critical review as a stand-alone exercise, or as part of your research and preparation for writing a literature review. The following guidelines are designed to help you critically evaluate a research article.

How to Read a Scientific Article
You should begin by skimming the article to identify its structure and features. As you read, look for the author’s main points.
- Generate questions before, during, and after reading.
- Draw inferences based on your own experiences and knowledge.
- To really improve understanding and recall, take notes as you read.
What is meant by critical and evaluation?
- To be critical does not mean to criticise in an exclusively negative manner. To be critical of a text means you question the information and opinions in the text, in an attempt to evaluate or judge its worth overall.
- An evaluation is an assessment of the strengths and weaknesses of a text. In the case of a research article, this assessment should relate to specific criteria. Before you can judge a section's value to the research article as a whole, you have to understand the purpose of each section, and be aware of the type of information and evidence that are needed to make it convincing.

IOE Writing Centre
- Writing a Critical Review

Writing a Critique

A critique (or critical review) is not to be mistaken for a literature review. A 'critical review', or 'critique', is a complete type of text (or genre), discussing one particular article or book in detail. In some instances, you may be asked to write a critique of two or three articles (e.g. a comparative critical review). In contrast, a 'literature review', which also needs to be 'critical', is a part of a larger type of text, such as a chapter of your dissertation.
Most importantly: Read your article / book as many times as possible, as this will make the critical review much easier.
1. Read and take notes
2. Organising your writing
3. Summary
4. Evaluation
5. Linguistic features of a critical review
6. Summary language
7. Evaluation language
8. Conclusion language
9. Example extracts from a critical review
10. Further resources
Read and Take Notes
To improve your reading confidence and efficiency, visit our pages on reading.
Further reading: Read Confidently
After you are familiar with the text, make notes on some of the following questions. Choose the questions which seem suitable:
- What kind of article is it (for example does it present data or does it present purely theoretical arguments)?
- What is the main area under discussion?
- What are the main findings?
- What are the stated limitations?
- Where do the author's data and evidence come from? Are they appropriate / sufficient?
- What are the main issues raised by the author?
- What questions are raised?
- How well are these questions addressed?
- What are the major points/interpretations made by the author in terms of the issues raised?
- Is the text balanced? Is it fair / biased?
- Does the author contradict himself / herself?
- How does all this relate to other literature on this topic?
- How does all this relate to your own experience, ideas and views?
- What else has this author written? Do these works build on / complement this text?
- (Optional) Has anyone else reviewed this article? What did they say? Do I agree with them?
Organising your writing
You first need to summarise the text that you have read, partly because your reader may not have read the original. In your summary, you will
- focus on points within the article that you think are interesting
- summarise the author(s) main ideas or argument
- explain how these ideas / this argument have been constructed. (For example, is the author basing his / her arguments on data that he / she has collected? Are the main ideas / argument purely theoretical?)
In your summary you might answer the following questions: Why is this topic important? Where can this text be located? For example, does it address policy studies? What other prominent authors also write about this?
Evaluation is the most important part of a critical review.
Use the literature to support your views. You may also use your knowledge of conducting research, and your own experience. Evaluation can be explicit or implicit.
Explicit evaluation
Explicit evaluation involves stating directly (explicitly) how you intend to evaluate the text. e.g. "I will review this article by focusing on the following questions. First, I will examine the extent to which the authors contribute to current thought on Second Language Acquisition (SLA) pedagogy. After that, I will analyse whether the authors' propositions are feasible within overseas SLA classrooms."
Implicit evaluation
Implicit evaluation is less direct. The following section on Linguistic Features of Writing a Critical Review contains language that evaluates the text. A difficult part of evaluating a published text (written by a professional author) is doing so from your position as a student. There is nothing wrong with making your position as a student explicit and incorporating it into your evaluation; examples of how you might do this can be found in the section on Linguistic Features of Writing a Critical Review. Remember to locate and analyse the author's argument when you are writing your critical review. For example, you need to address the authors' view of classroom pedagogy as presented in the book / article, and not present a critique of views of classroom pedagogy in general.
Linguistic features of a critical review
The following examples come from published critical reviews. Some of them have been adapted for student use.
Summary language
- This article / book is divided into two / three parts. First...
- While the title might suggest...
- The tone appears to be...
- Title is the first / second volume in the series Title, edited by...The books / articles in this series address...
- The second / third claim is based on...
- The author challenges the notion that...
- The author tries to find a more middle ground / make more modest claims...
- The article / book begins with a short historical overview of...
- Numerous authors have recently suggested that...(see Author, Year; Author, Year). Author would be one such author, with his / her argument that...
- To refer to title as a...is not to say that it is...
- This book / article is aimed at... This intended readership...
- The author's book / article examines the...To do this, the author first...
- The author develops / suggests a theoretical / pedagogical model to…
- This book / article positions itself firmly within the field of...
- The author in a series of subtle arguments, indicates that he / she...
- The argument is therefore...
- The author asks "..."
- With a purely critical / postmodern take on...
- Topic, as the author points out, can be viewed as...
- In this recent contribution to the field of...this British author...
- As a leading author in the field of...
- This book / article nicely contributes to the field of...and complements other work by this author...
- The second / third part of...provides / questions / asks the reader...
- Title is intended to encourage students / researchers to...
- The approach taken by the author provides the opportunity to examine...in a qualitative / quantitative research framework that nicely complements...
- The author notes / claims that state support / a focus on pedagogy / the adoption of...remains vital if...
- According to Author (Year), teaching towards examinations is not as effective as it is in other areas of the curriculum. This is because, as Author (Year) claims, examinations have undue status within the curriculum.
- According to Author (Year),…is not as effective in some areas of the curriculum / syllabus as in others. Therefore the author believes that this is a reason for some schools'…
Evaluation language
- This argument is not entirely convincing, as...furthermore it commodifies / rationalises the...
- Over the last five / ten years the view of...has increasingly been viewed as 'complicated' (see Author, Year; Author, Year).
- However, through trying to integrate...with...the author...
- There are difficulties with such a position.
- Inevitably, several crucial questions are left unanswered / glossed over by this insightful / timely / interesting / stimulating book / article. Why should...
- It might have been more relevant for the author to have written this book / article as...
- This article / book will not be without disappointment for those who would view...as...
- This chosen framework enlightens / clouds...
- This analysis intends to be...but falls a little short as...
- The authors rightly conclude that if...
- A detailed, well-written and rigorous account of...
- As a Korean student I feel that this article / book very clearly illustrates...
- The beginning of...provides an informative overview into...
- The tables / figures do little to help / greatly help the reader...
- The reaction by scholars who take a...approach might not be so favourable (e.g. Author, Year).
- This explanation has a few weaknesses that other researchers have pointed out (see Author, Year; Author, Year). The first is...
- On the other hand, the author wisely suggests / proposes that...By combining these two dimensions...
- The author's brief introduction to...may leave the intended reader confused as it fails to properly...
- Despite my inability to...I was greatly interested in...
- Even where this reader / I disagree(s), the author's effort to...
- The author thus combines...with...to argue...which seems quite improbable for a number of reasons. First...
- Perhaps this aversion to...would explain the author's reluctance to...
- As a second language student from ...I find it slightly ironic that such an Anglo-centric view is...
- The reader is rewarded with...
- Less convincing is the broad-sweeping generalisation that...
- There is no denying the author's subject knowledge nor his / her...
- The author's prose is dense and littered with unnecessary jargon...
- The author's critique of...might seem harsh but is well supported within the literature (see Author, Year; Author, Year; Author, Year). Aligning herself with the author, Author (Year) states that...
- As it stands, the central focus of Title is well / poorly supported by its empirical findings...
- Given the hesitation to generalise to...the limitation of...does not seem problematic...
- For instance, the term...is never properly defined, and the reader is left to guess as to whether...
- Furthermore, to label...as...inadvertently misguides...
- In addition, this research proves to be timely / especially significant to... as recent government policy / proposals has / have been enacted to...
- On this well researched / documented basis the author emphasises / proposes that...
- Nonetheless, other research / scholarship / data tend to counter / contradict this possible trend / assumption...(see Author, Year; Author, Year).
- Without entering into detail of the..., it should be stated that Title should be read by...others will see little value in...
- As experimental conditions were not used in the study the word 'significant' misleads the reader.
- The article / book becomes repetitious in its assertion that...
- The thread of the author's argument becomes lost in an overuse of empirical data...
- Almost every argument presented in the final section is largely derivative, providing little to say about...
- She / he does not seem to take into consideration, however, that there are fundamental differences in the conditions of…
- As Author (Year) points out, however, it seems to be necessary to look at…
- This suggests that having low…does not necessarily indicate that…is ineffective.
- Therefore, the suggestion made by Author (Year)…is difficult to support.
- When considering all the data presented…it is not clear that the low scores of some students do, indeed, reflect…
Conclusion language
- Overall this article / book is an analytical look at...which within the field of...is often overlooked.
- Despite its problems, Title offers valuable theoretical insights / interesting examples / a contribution to pedagogy and a starting point for students / researchers of...with an interest in...
- This detailed and rigorously argued...
- This first / second volume / book / article by...with an interest in...is highly informative...
Example extracts from a critical review
Writing critically.
If you have been told your writing is not critical enough, it probably means that your writing treats the knowledge claims as if they are true, well supported, and applicable in the context you are writing about. This may not always be the case.
In these two examples, the extracts refer to the same section of text. The note below each example explains how the writer has used the source material.
Example a: There is a strong positive effect on students, both educationally and emotionally, when the instructors try to learn to say students' names without making pronunciation errors (Kiang, 2004).
Use of source material in example a:
This is a simple paraphrase with no critical comment. It looks like the writer agrees with Kiang. (This is not a good example for critical writing, as the writer has not made any critical comment).
Example b: Kiang (2004) gives various examples to support his claim that "the positive emotional and educational impact on students is clear" (p.210) when instructors try to pronounce students' names in the correct way. He quotes one student, Nguyet, as saying that he "felt surprised and happy" (p.211) when the tutor said his name clearly. The emotional effect claimed by Kiang is illustrated in quotes such as these, although the educational impact is supported more indirectly through the chapter. Overall, he provides more examples of students being negatively affected by incorrect pronunciation, and it is difficult to find examples within the text of a positive educational impact as such.
Use of source material in example b:
The writer describes Kiang's (2004) claim and the examples which he uses to try to support it. The writer then comments that the examples do not seem balanced and may not be enough to support the claims fully. This is a better example of writing which expresses criticality.
Further resources
You may also be interested in our page on criticality, which covers criticality in general, and includes more critical reading questions.
Further reading: Read and Write Critically
We recommend that you do not search for other university guidelines on critical reviews. This is because the expectations may be different at other institutions. Ask your tutor for more guidance or examples if you have further questions.
A guide to critical appraisal of evidence : Nursing2020 Critical Care

A guide to critical appraisal of evidence
Fineout-Overholt, Ellen PhD, RN, FNAP, FAAN
Ellen Fineout-Overholt is the Mary Coulter Dowdy Distinguished Professor of Nursing at the University of Texas at Tyler School of Nursing, Tyler, Tex.
The author has disclosed no financial relationships related to this article.
Critical appraisal is the assessment of research studies' worth to clinical practice. Critical appraisal—the heart of evidence-based practice—involves four phases: rapid critical appraisal, evaluation, synthesis, and recommendation. This article reviews each phase and provides examples, tips, and caveats to help evidence appraisers successfully determine what is known about a clinical issue. Patient outcomes are improved when clinicians apply a body of evidence to daily practice.
How do nurses assess the quality of clinical research? This article outlines a stepwise approach to critical appraisal of research studies' worth to clinical practice: rapid critical appraisal, evaluation, synthesis, and recommendation. When critical care nurses apply a body of valid, reliable, and applicable evidence to daily practice, patient outcomes are improved.

Critical care nurses can best explain the reasoning for their clinical actions when they understand the worth of the research supporting their practices. In critical appraisal, clinicians assess the worth of research studies to clinical practice. Given that achieving improved patient outcomes is the reason patients enter the healthcare system, nurses must be confident their care techniques will reliably achieve best outcomes.
Nurses must verify that the information supporting their clinical care is valid, reliable, and applicable. Validity of research refers to the quality of the research methods used, or how well the researchers conducted the study. Reliability of research means similar outcomes can be achieved when the care techniques of a study are replicated by clinicians. Applicability of research means it was conducted in a sample similar to the patients to whom the findings will be applied. These three criteria determine a study's worth in clinical practice.
Appraising the worth of research requires a standardized approach. This approach applies to both quantitative research (research that deals with counting things and comparing those counts) and qualitative research (research that describes experiences and perceptions). The word critique has a negative connotation. In the past, some clinicians were taught that studies with flaws should be discarded. Today, it is important to consider all valid and reliable research as informative to what we understand as best practice. Therefore, the author developed the critical appraisal methodology that enables clinicians to determine quickly which evidence is worth keeping and which must be discarded because of poor validity, reliability, or applicability.
Evidence-based practice process
The evidence-based practice (EBP) process is a seven-step problem-solving approach that begins with data gathering (see Seven steps to EBP). During daily practice, clinicians gather data supporting inquiry into a particular clinical issue (Step 0). The description is then framed as an answerable question (Step 1) using the PICOT question format (Population of interest; Issue of interest or intervention; Comparison to the intervention; desired Outcome; and Time for the outcome to be achieved). 1 Consistently using the PICOT format helps ensure that all elements of the clinical issue are covered. Next, clinicians conduct a systematic search to gather data answering the PICOT question (Step 2). Using the PICOT framework, clinicians can systematically search multiple databases to find available studies to help determine the best practice to achieve the desired outcome for their patients. When the systematic search is completed, the work of critical appraisal begins (Step 3). The known group of valid and reliable studies that answers the PICOT question is called the body of evidence and is the foundation for the best practice implementation (Step 4). Next, clinicians evaluate integration of best evidence with clinical expertise and patient preferences and values to determine if the outcomes in the studies are realized in practice (Step 5). Because healthcare is a community of practice, it is important that experiences with evidence implementation be shared, whether the outcome is what was expected or not. This enables critical care nurses concerned with similar care issues to better understand what has been successful and what has not (Step 6).
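As an illustration only, the PICOT components can be sketched as a small data structure. The clinical details below (music therapy in a NICU) are hypothetical, loosely echoing the article's later synthesis example, and the question template is just one common way to word an intervention question.

```python
from dataclasses import dataclass


@dataclass
class PicotQuestion:
    """Hypothetical sketch of the PICOT components described in the text."""
    population: str    # P: population of interest
    intervention: str  # I: issue of interest or intervention
    comparison: str    # C: comparison to the intervention
    outcome: str       # O: desired outcome
    time: str          # T: time for the outcome to be achieved

    def as_question(self) -> str:
        # One common phrasing of an intervention-style PICOT question.
        return (f"In {self.population}, how does {self.intervention} "
                f"compared with {self.comparison} affect {self.outcome} "
                f"within {self.time}?")


# All details below are invented for illustration.
q = PicotQuestion(
    population="preterm infants in the NICU",
    intervention="music therapy",
    comparison="standard care",
    outcome="oxygen saturation (SaO2)",
    time="the hospital stay",
)
print(q.as_question())
```

Framing the question this way makes the later database search systematic: each component supplies search terms, and the Outcome and Time fields define what the evaluation phase should look for in each study.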
Critical appraisal of evidence
The first phase of critical appraisal, rapid critical appraisal, begins with determining which studies will be kept in the body of evidence. All valid, reliable, and applicable studies on the topic should be included. This is accomplished using design-specific checklists with key markers of good research. When clinicians determine a study is one they want to keep (a “keeper” study) and that it belongs in the body of evidence, they move on to phase 2, evaluation. 2
In the evaluation phase, the keeper studies are put together in a table so that they can be compared as a body of evidence, rather than individual studies. This phase of critical appraisal helps clinicians identify what is already known about a clinical issue. In the third phase, synthesis, certain data that provide a snapshot of a particular aspect of the clinical issue are pulled out of the evaluation table to showcase what is known. These snapshots of information underpin clinicians' decision-making and lead to phase 4, recommendation. A recommendation is a specific statement based on the body of evidence indicating what should be done—best practice. Critical appraisal is not complete without a specific recommendation. Each of the phases is explained in more detail below.
Phase 1: Rapid critical appraisal. Rapid critical appraisal involves using two tools that help clinicians determine if a research study is worthy of keeping in the body of evidence. The first tool, General Appraisal Overview for All Studies (GAO), covers the basics of all research studies (see Elements of the General Appraisal Overview for All Studies). Sometimes, clinicians find gaps in knowledge about certain elements of research studies (for example, sampling or statistics) and need to review some content. Conducting an internet search for resources that explain how to read a research paper, such as an instructional video or step-by-step guide, can be helpful. Finding basic definitions of research methods often helps resolve identified gaps.
To accomplish the GAO, it is best to begin by finding out why the study was conducted and how it answers the PICOT question (for example, does it provide information critical care nurses want to know from the literature). If the study purpose helps answer the PICOT question, then the type of study design is evaluated. The study design is compared with the hierarchy of evidence for the type of PICOT question. The higher the design falls within the hierarchy or levels of evidence, the more confidence nurses can have in its findings, if the study was conducted well. 3,4 Next, find out what the researchers wanted to learn from their study. These are called the research questions or hypotheses. Research questions are just what they imply: insufficient information from theories or the literature is available to guide an educated guess, so a question is asked. Hypotheses are reasonable expectations, guided by understanding from theory and other research, that predict what will be found when the research is conducted. The research questions or hypotheses provide the purpose of the study.
Next, the sample size is evaluated. Expectations of sample size exist for every study design. As a rule, quantitative study designs operate best with a sample large enough to establish that observed relationships are unlikely to have occurred by chance. In general, the more participants in a study, the more confidence in the findings. Qualitative designs operate best with fewer people in the sample because these designs represent a deeper dive into the understanding or experience of each person in the study. 5 It is always important to describe the sample, as clinicians need to know if the study sample resembles their patients. It is equally important to identify the major variables in the study and how they are defined, because this helps clinicians best understand what the study is about.
The final step in the GAO is to consider the analyses that answer the study research questions or confirm the study hypothesis. This is another opportunity for clinicians to learn, as learning about statistics in healthcare education has traditionally focused on conducting statistical tests as opposed to interpreting statistical tests. Understanding what the statistics indicate about the study findings is an imperative of critical appraisal of quantitative evidence.
The second tool is one of a variety of rapid critical appraisal checklists that speak to the validity, reliability, and applicability of specific study designs, available from various sources (see Critical appraisal resources). When choosing a checklist to implement with a group of critical care nurses, it is important to verify that the checklist is complete and simple to use. Be sure to check that the checklist has answers to three key questions. The first question is: Are the results of the study valid? Related subquestions should help nurses discern if certain markers of good research design are present within the study. For example, identifying that study participants were randomly assigned to study groups is an essential marker of good research for a randomized controlled trial. Checking these essential markers helps clinicians quickly review a study to check off these important requirements. Clinical judgment is required when the study lacks any of the identified quality markers. Clinicians must discern whether the absence of any of the essential markers negates the usefulness of the study findings. 6-9

The second question is: What are the study results? This is answered by reviewing whether the study found what it was expecting to and if those findings were meaningful to clinical practice. Basic knowledge of how to interpret statistics is important for understanding quantitative studies, and basic knowledge of qualitative analysis greatly facilitates understanding those results. 6-9
The third question is: Are the results applicable to my patients? Answering this question involves consideration of the feasibility of implementing the study findings into the clinicians' environment as well as any contraindication within the clinicians' patient populations. Consider issues such as organizational politics, financial feasibility, and patient preferences. 6-9
When these questions have been answered, clinicians must decide whether to keep the particular study in the body of evidence. Once the final group of keeper studies is identified, clinicians are ready to move into the next phase of critical appraisal. 6-9
Phase 2: Evaluation. The goal of evaluation is to determine how studies within the body of evidence agree or disagree by identifying common patterns of information across studies. For example, an evaluator may compare whether the same intervention is used or if the outcomes are measured in the same way across all studies. A useful tool to help clinicians accomplish this is an evaluation table. This table serves two purposes: first, it enables clinicians to extract data from the studies and place the information in one table for easy comparison with other studies; and second, it eliminates the need for further searching through piles of periodicals for the information. (See Bonus Content: Evaluation table headings.) Although the information for each of the columns may not be what clinicians consider as part of their daily work, the information is important for them to understand about the body of evidence so that they can explain the patterns of agreement or disagreement they identify across studies. Further, the in-depth understanding of the body of evidence from the evaluation table helps with discussing the relevant clinical issue to facilitate best practice. Their discussion comes from a place of knowledge and experience, which affords the most confidence. The patterns and in-depth understanding are what lead to the synthesis phase of critical appraisal.
The key to a successful evaluation table is simplicity. Entering data into the table in a simple, consistent manner offers more opportunity for comparing studies. 6-9 For example, using abbreviations rather than complete sentences in all columns except the final one allows for ease of comparison. An example might be the dependent variable of depression defined as "feelings of severe despondency and dejection" in one study and as "feeling sad and lonely" in another study. 10 Because these are two different definitions, they need to be treated as different dependent variables. Clinicians must use their clinical judgment to discern that these different dependent variables require different names and abbreviations, and to consider how this affects comparison across studies.

Sample and theoretical or conceptual underpinnings are important to understanding how studies compare. Similar samples and settings across studies increase agreement. Several studies with the same conceptual framework increase the likelihood of common independent variables and dependent variables. The findings of a study are dependent on the analyses conducted. That is why an analysis column is dedicated to recording the kind of analysis used (for example, the name of the statistical analyses for quantitative studies). Only statistics that help answer the clinical question belong in this column. The findings column must have a result for each of the analyses listed, reported as the actual results rather than in words. For example, if a clinician lists a t-test as a statistic in the analysis column, a t-value should be reported that reflects whether the groups are different, as well as a probability (P-value or confidence interval) that reflects statistical significance. The explanation for these results goes in the last column, which describes the worth of the research to practice. This column is much more flexible and contains other information such as the level of evidence, the study's strengths and limitations, any caveats about the methodology, or other aspects of the study that would be helpful to its use in practice. The final piece of information in this column is a recommendation for how this study would be used in practice. Each of the studies in the body of evidence that addresses the clinical question is placed in one evaluation table to facilitate the ease of comparing across the studies. This comparison sets the stage for synthesis.
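To make the column structure concrete, a single evaluation-table row might be represented as a plain record like the sketch below. Every study detail shown (citation, sample, statistics) is invented for illustration, and the column names only paraphrase the headings described above.

```python
# Hypothetical sketch of one evaluation-table row. All study details are
# invented; keys paraphrase the column headings described in the text.
row = {
    "citation": "Author (Year)",
    "design": "RCT",
    "sample": "N = 60 preterm infants, single NICU",
    "variables": {"IV": "music therapy", "DV": "SaO2"},
    # Only statistics that help answer the clinical question belong here.
    "analysis": "t-test",
    # Actual results, not words: the t-value plus a P-value.
    "findings": "t(58) = 2.4, P = .02",
    # The flexible final column: level of evidence, strengths/limitations,
    # caveats, and a recommendation for use in practice.
    "worth_to_practice": ("Level II evidence; small single-site sample; "
                          "feasible in unit; recommend use with monitoring"),
}
print(row["analysis"], "->", row["findings"])
```

Keeping every study's row in one such table, with the same keys and consistent abbreviations, is what makes the pattern-spotting of the synthesis phase possible.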
Phase 3: Synthesis. In the synthesis phase, clinicians pull out key information from the evaluation table to produce a snapshot of the body of evidence. A table also is used here to feature what is known and help all those viewing the synthesis table to come to the same conclusion. A hypothetical example table included here demonstrates that a music therapy intervention is effective in improving the outcome of oxygen saturation (SaO2) in six of the eight studies in the body of evidence that evaluated that outcome (see Sample synthesis table: Impact on outcomes). Simply using arrows to indicate effect offers readers a collective view of the agreement across studies that prompts action. Action may be to change practice, affirm current practice, or conduct research to strengthen the body of evidence by collaborating with nurse scientists.
When synthesizing evidence, at least two synthesis tables are recommended: the level-of-evidence table and, for quantitative questions such as therapy questions, the impact-on-outcomes table, or a relevant-themes table for "meaning" questions about human experience. (See Bonus Content: Level of evidence for intervention studies: Synthesis of type.) The sample synthesis table also demonstrates that a final column labeled synthesis indicates agreement across the studies. Of the three outcomes, the most reliable for clinicians to see with music therapy is SaO2, with positive results in six out of eight studies. The second most reliable outcome would be reducing increased respiratory rate (RR). Parental engagement has the least support as a reliable outcome, with only two of five studies showing positive results. Synthesis tables make the recommendation clear to all those who are involved in caring for that patient population. Although the two synthesis tables mentioned are a great start, the evidence may require more synthesis tables to adequately explain what is known. These tables are the foundation that supports clinically meaningful recommendations.
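The arrow-tally logic of an impact-on-outcomes table can be sketched in a few lines. The per-study effect marks below ("+" for a positive result) are hypothetical, arranged only so the totals match the counts the text reports (SaO2 positive in six of eight studies, parental engagement in two of five); the respiratory rate pattern is invented.

```python
# Hedged sketch of a synthesis-table tally. Each list holds one mark per
# study ("+" = positive effect), standing in for the arrows the text
# describes. Study-level marks are hypothetical; totals mirror the text.
synthesis = {
    "SaO2":                ["+", "+", "+", "-", "+", "+", "-", "+"],
    "respiratory rate":    ["+", "+", "-", "+", "+"],
    "parental engagement": ["+", "-", "-", "+", "-"],
}

for outcome, effects in synthesis.items():
    positive = effects.count("+")
    print(f"{outcome}: positive in {positive} of {len(effects)} studies")
```

Reducing each outcome to a count of agreeing studies is exactly what lets everyone viewing the table reach the same conclusion, which is the point of the synthesis phase.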
Phase 4: Recommendation. Recommendations are definitive statements based on what is known from the body of evidence. For example, with an intervention question, clinicians should be able to discern from the evidence whether they will reliably get the desired outcome when they deliver the intervention as it was delivered in the studies. In the sample synthesis table, the recommendation would be to implement the music therapy intervention across all settings with the population, and to measure SaO2 and RR, with the expectation that both would be optimally improved by the intervention. When the synthesis demonstrates that studies consistently verify that an outcome occurs as a result of an intervention, but that intervention is not currently practiced, care is not best practice. Therefore, a firm recommendation to deliver the intervention and measure the appropriate outcomes must be made, which concludes critical appraisal of the evidence.
A recommendation that is off limits is conducting more research, as this is not the focus of clinicians' critical appraisal. In the case of insufficient evidence to make a recommendation for practice change, the recommendation would be to continue current practice and monitor outcomes and processes until more reliable studies can be added to the body of evidence. Researchers who use the critical appraisal process may indeed identify gaps in knowledge, research methods, or analyses, and then recommend studies that would fill those gaps. In this way, clinicians and nurse scientists work together to build relevant, efficient bodies of evidence that guide clinical practice.
Evidence into action
Critical appraisal helps clinicians understand the literature so they can implement it. Critical care nurses have a professional and ethical responsibility to make sure their care is based on a solid foundation of available evidence that is carefully appraised using the phases outlined here. Critical appraisal allows for decision-making based on evidence that demonstrates reliable outcomes. Any other approach to the literature is likely haphazard and may lead to misguided care and unreliable outcomes.11 Evidence translated into practice should have the desired outcomes and their measurement defined from the body of evidence. It is also imperative that all critical care nurses carefully monitor care delivery outcomes to establish that best outcomes are sustained. With the EBP paradigm as the basis for decision-making and the EBP process as the basis for addressing clinical issues, critical care nurses can improve patient, provider, and system outcomes by providing best care.
Seven steps to EBP
Step 0 – A spirit of inquiry to notice internal data that indicate an opportunity for positive change.
Step 1 – Ask a clinical question using the PICOT question format.
Step 2 – Conduct a systematic search to find out what is already known about a clinical issue.
Step 3 – Conduct a critical appraisal (rapid critical appraisal, evaluation, synthesis, and recommendation).
Step 4 – Implement best practices by blending external evidence with clinician expertise and patient preferences and values.
Step 5 – Evaluate evidence implementation to see if study outcomes happened in practice and if the implementation went well.
Step 6 – Share project results, good or bad, with others in healthcare.
Adapted from: Steps of the evidence-based practice (EBP) process leading to high-quality healthcare and best patient outcomes. © Melnyk & Fineout-Overholt, 2017. Used with permission.
Critical appraisal resources
- The Joanna Briggs Institute http://joannabriggs.org/research/critical-appraisal-tools.html
- Critical Appraisal Skills Programme (CASP) www.casp-uk.net/casp-tools-checklists
- Center for Evidence-Based Medicine www.cebm.net/critical-appraisal
- Melnyk BM, Fineout-Overholt E. Evidence-Based Practice in Nursing and Healthcare: A Guide to Best Practice. 3rd ed. Philadelphia, PA: Wolters Kluwer; 2015.
A full set of critical appraisal checklists is available in the appendices.
Bonus content!
This article includes supplementary online-exclusive material. Visit the online version of this article at www.nursingcriticalcare.com to access this content.
Keywords: critical appraisal; decision-making; evaluation of research; evidence-based practice; synthesis

Authority: Critical Evaluation
Critical Evaluation of Information Sources
After initial evaluation of a source, the next step is to go deeper. This involves a wide variety of techniques and may depend on the type of source. In the case of research, it will include evaluating the methodology used in the study, which requires you to have knowledge of those discipline-specific methods. If you are just beginning your academic career or have just entered a new field, you will likely need to learn more about the methodologies used in order to fully understand and evaluate this part of a study.
Lateral reading is a technique that can, and should, be applied to any source type. In the case of a research study, looking for the older articles that influenced the one you selected can give you a better understanding of the issues and context. Reading articles that were published afterward can give you an idea of how scholars are pushing that research to the next step. This can also help you understand how scholars engage with each other in conversation through research, and even how the academic system privileges certain voices and established authorities in the conversation. You might find articles that respond directly to a study; these responses provide insight into how evaluation and critique work within that discipline.
Evaluation at this level is central to developing a better understanding of your own research question by learning from these scholarly conversations and how authority is tested.
Check out the resources below to help you with this stage of evaluation.
Scientific Method/Methodologies
Here is a general overview of how the scientific method works and how scholars evaluate their work using critical thinking. This same process is used when scholars write up their scholarly work.
The Steps of the Scientific Method
- Question something that was observed.
- Do background research to better understand it.
- Formulate a hypothesis (research question).
- Create an experiment or method for studying the question.
- Run the experiment and record the results.
- Think critically about what the results mean.
- Suggest conclusions and report back.
- Read laterally.
Critical Thinking
Thinking critically about the information you encounter is central to how you develop your own conclusions, judgement, and position. This analysis is what will allow you to make a valuable contribution of your own to the scholarly conversation.
- TEDEd: Dig Deeper on the 5 Tips to Improve Your Critical Thinking
- The Foundation for Critical Thinking: College and University Students
- Stanford Encyclopedia of Philosophy: Critical Thinking
Scholarship as Conversation
It sounds pretty bad if you say an article was retracted, but is it always? As with most things, it depends on the context. Someone retracting a statement made based on false information or misinformation is one thing. It happens fairly often in the case of social media: removed tweets or Instagram posts, for example.
In scholarship, there are a number of reasons an article might be retracted. These range from errors in the methods used, experiment structure, data, etc. to issues of fraud or misrepresentation. Central to scholarship is the community of scholars actively participating in the scholarly conversation even after the peer review process. Careful analysis of published research by other scholars is vital to course correction.
In science research, it's a central part of the process! An inherent part of discovery is basing conclusions on the information at hand and repeating the process to gather more information. If further research provides new information and insight, that might mean an older conclusion gets corrected. Uncertainty is unsettling, but trust in the process means understanding the important role of retraction.
Critically Evaluating Research
Some research reports or assessments will require you to critically evaluate a journal article or piece of research. Below is a guide, with examples, of how to critically evaluate research and how to communicate your ideas in writing.
To develop the skill of critical evaluation, read research articles in psychology with an open mind and be active as you read. Ask questions as you go and see if the answers are provided. Initially, skim through the article to gain an overview of the problem, the design, methods, and conclusions. Then read for details, and consider the questions provided below for each section of a journal article.
Title
- Did the title describe the study?
- Did the key words of the title serve as key elements of the article?
- Was the title concise, i.e., free of distracting or extraneous phrases?
Abstract
- Was the abstract concise and to the point?
- Did the abstract summarise the study’s purpose/research problem, the independent and dependent variables under study, methods, main findings, and conclusions?
- Did the abstract provide you with sufficient information to determine what the study is about and whether you would be interested in reading the entire article?
Introduction
- Was the research problem clearly identified?
- Is the problem significant enough to warrant the study that was conducted?
- Did the authors present an appropriate theoretical rationale for the study?
- Is the literature review informative and comprehensive or are there gaps?
- Are the variables adequately explained and operationalised?
- Are hypotheses and research questions clearly stated? Are they directional? Do the author’s hypotheses and/or research questions seem logical in light of the conceptual framework and research problem?
- Overall, does the literature review lead logically into the Method section?
Method
- Is the sample clearly described in terms of size, relevant characteristics (gender, age, SES, etc.), selection and assignment procedures, and whether any inducements were used to solicit subjects (payment, subject credit, free therapy, etc.)?
- What population do the subjects represent (external validity)?
- Are there sufficient subjects to produce adequate power (statistical validity)?
- Have the variables and measurement techniques been clearly operationalised?
- Do the measures/instruments seem appropriate as measures of the variables under study (construct validity)?
- Have the authors included sufficient information about the psychometric properties (e.g., reliability and validity) of the instruments?
- Are the materials used in conducting the study or in collecting data clearly described?
- Are the study’s scientific procedures thoroughly described in chronological order?
- Is the design of the study identified (or made evident)?
- Do the design and procedures seem appropriate in light of the research problem, conceptual framework, and research questions/hypotheses?
- Are there other factors that might explain the differences between groups (internal validity)?
- Were subjects randomly assigned to groups so there was no systematic bias in favour of one group? Was there a differential drop-out rate from groups so that bias was introduced (internal validity and attrition)?
- Were all the necessary control groups used? Were participants in each group treated identically except for the administration of the independent variable?
- Were steps taken to prevent subject bias and/or experimenter bias, e.g., blind or double-blind procedures?
- Were steps taken to control for other possible confounds such as regression to the mean, history effects, order effects, etc. (internal validity)?
- Were ethical considerations adhered to, e.g., debriefing, anonymity, informed consent, voluntary participation?
- Overall, does the method section provide sufficient information to replicate the study?
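The random-assignment criterion in the checklist above is easy to picture concretely. The sketch below is a minimal, hypothetical illustration (participant IDs and group names are invented) of simple randomization: shuffle the participant list, then deal participants into groups so that assignment cannot systematically favour one group.

```python
import random

def randomly_assign(participants, groups=("treatment", "control"), seed=42):
    """Shuffle participants, then deal them round-robin into the groups."""
    rng = random.Random(seed)  # fixed seed only so the sketch is reproducible
    shuffled = participants[:]
    rng.shuffle(shuffled)
    assignment = {g: [] for g in groups}
    for i, person in enumerate(shuffled):
        assignment[groups[i % len(groups)]].append(person)
    return assignment

ids = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical participants
assignment = randomly_assign(ids)
print({g: len(members) for g, members in assignment.items()})  # equal group sizes
```

A design that instead assigned, say, the first volunteers to the treatment group would be exactly the kind of systematic bias these checklist questions are probing for.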
Results
- Are the findings complete, clearly presented, comprehensible, and well organised?
- Are data coding and analysis appropriate in light of the study’s design and hypotheses? Are the statistics reported correctly and fully, e.g., are degrees of freedom and p values given?
- Have the assumptions of the statistical analyses been met, e.g., does one group have very different variance to the others?
- Are salient results connected directly to hypotheses? Are there superfluous results presented that are not relevant to the hypotheses or research question?
- Are tables and figures clearly labelled? Well-organised? Necessary (non-duplicative of text)?
- If a significant result is obtained, consider effect size. Is the finding meaningful? If a non-significant result is found, could low power be an issue? Were there sufficient levels of the IV?
- If necessary, have appropriate post-hoc analyses been performed? Were any transformations performed; if so, were there valid reasons? Were data collapsed over any IVs; if so, were there valid reasons? If any data were eliminated, were valid reasons given?
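The effect-size and power questions above can be made concrete with a quick back-of-the-envelope check. The sketch below (all numbers are hypothetical) computes Cohen's d from group summary statistics and then approximates the power of a two-sample t-test using the normal approximation, a rough shortcut rather than an exact power analysis.

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Cohen's d: standardized mean difference using a pooled SD."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)

def approx_power(d, n_per_group, alpha=0.05):
    """Rough power of a two-sided, two-sample t-test via the normal approximation."""
    phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))  # standard normal CDF
    z_crit = 1.959964  # two-sided critical value for alpha = .05
    return phi(abs(d) * math.sqrt(n_per_group / 2) - z_crit)

# Hypothetical numbers: a "medium" effect observed with 30 subjects per group.
d = cohens_d(mean1=105, mean2=100, sd1=10, sd2=10, n1=30, n2=30)  # d = 0.50
print(f"d = {d:.2f}, approximate power = {approx_power(d, 30):.2f}")
```

With these made-up numbers, d is 0.50 but the approximate power is only about 0.49, illustrating why a non-significant result with small groups may reflect low power rather than the absence of an effect.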
Discussion and Conclusion
- Are findings adequately interpreted and discussed in terms of the stated research problem, conceptual framework, and hypotheses?
- Is the interpretation adequate? i.e., does it go too far given what was actually done or not far enough? Are non-significant findings interpreted inappropriately?
- Is the discussion biased? Are the limitations of the study delineated?
- Are implications for future research and/or practical application identified?
- Are the overall conclusions warranted by the data and any limitations in the study? Are the conclusions restricted to the population under study or are they generalised too widely?
References
- Is the reference list sufficiently specific to the topic under investigation and current?
- Are citations used appropriately in the text?
General Evaluation
- Is the article objective, well written and organised?
- Does the information provided allow you to replicate the study in all its details?
- Was the study worth doing? Does the study provide an answer to a practical or important problem? Does it have theoretical importance? Does it represent a methodological or technical advance? Does it demonstrate a previously undocumented phenomenon? Does it explore the conditions under which a phenomenon occurs?
How to turn your critical evaluation into writing
Example from a journal article.
