Many master’s and PhD students will find that, to pass their courses, they need to write a literature review. In some cases, these literature reviews must be standalone documents; in others, the literature review simply serves to contextualise a primary research project.
If you’re interested in finding examples of each one, first take a look at Mishra and Nair (2015). This is a clear case of a standalone literature review, and it’s a great example of what scholars call a systematic literature review (or SLR). By contrast, Carnwell and Daly’s (2003) qualitative exploratory study briefly summarises and critiques other findings – in the form of a critical literature review (or CLR) – before presenting its own primary research findings.
Clearly, then, there are different types of literature review.
However, a less obvious – yet equally important – point is that each type of literature review comes with its own advantages and disadvantages, its own purposes, and its own degree of suitability to your situation as a student or academic.
With this in mind, this document explains the core features of one of the most common – and most powerful – types of literature review: the systematic literature review (SLR).
Specifically, this document does the following:
Please note that a full list of references (in Harvard style) is given at the end of this document.
According to Aveyard (2014), one of the authorities on literature review writing, systematic literature reviews (SLRs) are the most valid, reliable, and robust types of literature review a researcher can undertake.
If you’re studying a course on public health, radiology, or any other healthcare-related topic, then you’re likely to have run into SLRs before. This is because SLRs, especially given the growing emphasis on evidence-based practice (EBP) in many of the world’s healthcare systems (Marshall, 2014), serve as the principal reference point for evidence-based decision-making (Clarke and Chalmers, 2018).
So why – exactly – are SLRs valued so much by healthcare professionals and policymakers? Another way to phrase this question is as follows: Why do SLRs consistently rank at the top of the so-called “hierarchy of evidence”?
To answer these questions, let’s take a look at the SLR’s main characteristics.
Figure 1: Hierarchy of Evidence (adapted from Murad et al., 2016)
The review question that guides an SLR is not plucked out of thin air. Instead, systematic processes are followed to determine what the review question will be.
As a case in point, for quantitative SLRs, Richardson et al.’s (1995) PICO framework is used to devise a review question; for qualitative and mixed methods SLRs, the SPIDER framework is typically used (Cooke, Smith, and Booth, 2012); and for qualitative SLRs, Cleyle and Booth’s (2006) SPICE framework is popular.
Another popular framework is Aslam and Emmanuel’s (2010) FINER formula.
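To make the idea concrete, the PICO approach can be thought of as assembling a review question from four fixed components. The sketch below is purely illustrative – the topic, wording, and field names are invented, not drawn from any published SLR:

```python
# Illustrative sketch: composing a PICO review question from its four
# components (Population, Intervention, Comparison, Outcome).
# The example topic and wording are hypothetical.
pico = {
    "Population":   "adults with type 2 diabetes",
    "Intervention": "structured exercise programmes",
    "Comparison":   "standard care",
    "Outcome":      "glycaemic control (HbA1c)",
}

# Assemble the components into a single, answerable review question.
question = (
    f"In {pico['Population']}, do {pico['Intervention']} "
    f"improve {pico['Outcome']} compared with {pico['Comparison']}?"
)
print(question)
```

The value of the exercise is not the code itself but the discipline it represents: every element of the question is specified explicitly before the search begins.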
This characteristic is what gives the SLR its name, and it is one of the main reasons SLRs sit atop the hierarchy of evidence (see Figure 1).
A systematic literature search strategy is marked by the following features:
SLRs produce valid and reliable results because they exclude studies that could undermine the veracity of their findings. However, studies are not excluded arbitrarily; instead, inclusion and exclusion criteria are established in advance.
These criteria are based on aspects of the research question (e.g., the examined population), the types of literature targeted, and other practical considerations (e.g., publication date ranges and publication language) (Meline, 2006).
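The screening stage described above amounts to applying a pre-registered filter to every candidate study. The sketch below shows the logic in miniature; the studies, field names, and criteria are all hypothetical:

```python
# Illustrative sketch: applying pre-defined inclusion/exclusion criteria
# to candidate studies. All records and criteria are hypothetical.
studies = [
    {"title": "Study A", "year": 2012, "language": "English", "peer_reviewed": True},
    {"title": "Study B", "year": 2001, "language": "English", "peer_reviewed": True},
    {"title": "Study C", "year": 2015, "language": "German",  "peer_reviewed": True},
    {"title": "Study D", "year": 2016, "language": "English", "peer_reviewed": False},
]

# The criteria are fixed in advance, before screening begins.
def meets_criteria(study):
    return (
        study["year"] >= 2005               # publication date range
        and study["language"] == "English"  # publication language
        and study["peer_reviewed"]          # type of literature targeted
    )

included = [s["title"] for s in studies if meets_criteria(s)]
print(included)  # only Study A satisfies every criterion
```

Because the criteria are written down before any study is examined, two independent reviewers applying them to the same records should reach the same inclusion decisions.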
Any studies that are included in the SLR must be subjected to a rigorous, formal, and systematic process of critical appraisal. This ensures that studies with low methodological quality are excluded from the results, thereby maximising the validity and relevance of the SLR’s findings.
Importantly, the critical appraisal process must be guided by the use of a clear framework, since this safeguards against bias (Harrison et al., 2017). Prominent types of critical appraisal frameworks include:
Irrespective of the type of studies an SLR turns up, researchers must engage with the data set in a trustworthy, reliable, formal, and reproducible manner. There are several ways this may be done. For example, if you need to extract data from a qualitative study, then you might consider using thematic analysis, as described by Nowell et al. (2017).
Following data extraction, a systematic evidence synthesis technique is essential. This ensures that no relevant aspects of the data set have been overlooked. Prominent evidence synthesis techniques include segregated, integrated, and contingent methodologies (Sandelowski et al., 2006).
Since SLRs form the basis of evidence-based practice (EBP), nothing should be hidden from other researchers. Therefore, when publishing an SLR, everything from the databases searched to the completed critical appraisal checklists must be included.
Another important issue is the transparency of the search process itself. To ensure transparency in this area, the authors of SLRs usually publish PRISMA flow diagrams alongside their findings (PRISMA, 2009).
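A PRISMA flow diagram is, at heart, a simple accounting of how many records survive each stage of the review. The sketch below mirrors that four-stage structure (identification, screening, eligibility, inclusion); all of the numbers are invented for illustration:

```python
# Illustrative sketch of the counts reported in a PRISMA flow diagram.
# Stages: identification -> screening -> eligibility -> included.
# All figures are invented.
identified        = 480   # records found across all databases
duplicates        = 130   # duplicate records removed before screening
screened          = identified - duplicates        # titles/abstracts screened
excluded_screen   = 290   # excluded at title/abstract stage
full_text         = screened - excluded_screen     # full texts assessed
excluded_fulltext = 45    # excluded, with reasons, after full-text appraisal
included          = full_text - excluded_fulltext  # studies in final synthesis

print(f"Screened: {screened}, full texts assessed: {full_text}, included: {included}")
```

Publishing these counts (and the reasons for each exclusion) is what lets other researchers audit – and, in principle, reproduce – the entire search process.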
SLRs always rank at the top of the hierarchy of evidence (see Figure 1). Clearly, this is a direct consequence of the characteristics listed above (e.g., reliability, validity, transparency, reproducibility, and robustness).
With this in mind, high-quality SLRs are urgently required in many areas of scholarship. The implication of this is that you yourself – as an aspiring researcher or academic – can increase your chances of publication in a prominent journal if you write a high-quality SLR.
Given that SLRs are formal, structured, rigorous, and comprehensive, they eliminate many of the opportunities for bias to creep into the process. For example, since the researcher is not subjectively deciding which articles should be included in their review, selection bias is almost entirely eliminated (Andrews, 2017).
A wealth of literature exists around any given topic. As a result, high-quality SLRs allow busy doctors, practitioners, and policymakers to absorb the findings and conclusions of a thousand articles simply by reading one publication.
Since it is difficult for most researchers to access unpublished studies, these studies tend not to be included in SLRs. This undermines the comprehensiveness of the SLR, and it may mean that the most up-to-date evidence is not included in the review (Müller et al., 2013).
Especially for novice researchers, the expertise needed to conduct a high-quality SLR is a significant barrier. Given the essential requirement for SLRs to produce valid and reliable evidence, the whole process is extremely time-consuming, and it often means that multiple researchers must collaborate. Clearly, collaboration of this kind is associated with its own range of difficulties.
Since negative or inconclusive results often remain unpublished, the articles that are eventually included in an SLR may include an unreasonable – and unrepresentative – number of positive conclusions. With this in mind, the widespread suppression of negative results gives rise to the danger of publication bias.
Andrews, L. (2017) How can we demonstrate the public value of evidence-based policy making when government ministers declare that people ‘have had enough of experts’? Palgrave Communications. 3 (11).
Aslam, S. and Emmanuel, P. (2010) Formulating a researchable question: A critical step for facilitating good clinical research. Indian Journal of Sexually Transmitted Diseases and AIDS. 31 (1), 47-50.
Aveyard, H. (2014) Doing a Literature Review in Health and Social Care: A Practical Guide. New York: Open University Press.
Carnwell, R. and Daly, W. (2003) Advanced nursing practitioners in primary care settings: an exploration of the developing roles. Journal of Clinical Nursing. 12 (5).
Clarke, M. and Chalmers, I. (2018) Reflections on the history of systematic reviews. BMJ Evidence-Based Medicine. 23, 121-122.
Cleyle, S. and Booth, A. (2006) Clear and present questions: Formulating questions for evidence based practice. Library Hi Tech. 24 (3), 355-368.
Cooke, A., Smith, D., and Booth, A. (2012) Beyond PICO: the SPIDER tool for qualitative evidence synthesis. Qualitative Health Research. 22 (10), 1435-1443.
Cooper, C., Booth, A., Varley-Campbell, J., Britten, N., and Garside, R. (2018) Defining the process to literature searching in systematic reviews: a literature review of guidance and supporting studies. BMC Medical Research Methodology. 18, 85.
Greenhalgh, T. and Peacock, R. (2005) Effectiveness and efficiency of search methods in systematic reviews of complex evidence: audit of primary sources. British Medical Journal. 331, 1064.
Grewal, A., Kataria, H., and Dhawan, I. (2016) Literature search for research planning and identification of research problem. Indian Journal of Anaesthesia. 60 (9), 635-639.
Hagen-Zanker, J., McCord, A., and Holmes, R. (2011) Systematic Review of the Impact of Employment Guarantee Schemes and Cash Transfers on the Poor. Overseas Development Institute.
Hong, Q. and Pluye, P. (2018) A Conceptual Framework for Critical Appraisal in Systematic Mixed Studies Reviews. Journal of Mixed Methods Research.
Khan, K., Kunz, R., Kleijnen, J., and Antes, G. (2003) Five steps to conducting a systematic review. Journal of the Royal Society of Medicine. 96 (3), 118-121.
La Torre, G., Backhaus, I., and Mannocci, A. (2015) Rating for narrative reviews: Concept and development of the International Narrative Systematic Assessment (INSA) tool. Senses and Sciences. 2 (1), 31-35.
Marshall, J. G. (2014) Linking research to practice. Journal of the Medical Library Association. 102 (1), 14-21.
Meline, T. (2006) Selecting studies for systematic review: Inclusion and exclusion criteria. Contemporary Issues in Communication Science and Disorders. 33, 21-27.
Mishra, D. and Nair, S. R. (2015) Systematic literature review to evaluate and characterise the health economics and outcomes research studies in India. Perspectives in Clinical Research. 6 (1), 20-33.
Müller, K. F. et al. (2013) Defining publication bias: protocol for a systematic review of highly cited articles and proposal for a new framework. Systematic Reviews. 2, 34.
Murad, M. H., Asi, N., Alsawas, M., and Alahdab, F. (2016) New evidence pyramid. BMJ Evidence-Based Medicine. 21, 125-127.
Nowell, L. S. et al. (2017) Thematic Analysis: Striving to Meet the Trustworthiness Criteria. International Journal of Qualitative Methods. 16, 1-13.
Patino, C. M. and Ferreira, J. C. (2018) Inclusion and exclusion criteria in research studies: definitions and why they matter. Brazilian Journal of Pulmonology. 44 (2), 84.
PRISMA (2009) PRISMA Flow Diagram. Available from http://prisma-statement.org [Accessed 7 January 2019].
Richardson, W. S., Wilson, M. C., Nishikawa, J., and Hayward, R. S. (1995) The well-built clinical question: A key to evidence-based decisions. ACP Journal Club. 123 (3), 12.
Sandelowski, M. et al. (2006) Defining and Designing Mixed Research Synthesis Studies. Research in the Schools. 13 (1), 29.
Wallace, M. and Wray, A. (2006) Critical Reading and Writing for Postgraduates. Thousand Oaks, California: SAGE Publications Inc.
Wildridge, V. and Bell, L. (2002) How clip became eclipse: A mnemonic to assist in searching for health policy/management information. Health Information & Libraries Journal. 19 (2), 113-115.