RESEARCH Open Access

Comparing the coverage, recall, and precision of searches for 120 systematic reviews in Embase, MEDLINE, and Google Scholar: a prospective study

Wichor M. Bramer1*, Dean Giustini2 and Bianca M. R. Kramer3

Abstract

Background: Previously, we reported on the low recall of Google Scholar (GS) for systematic review (SR) searching. Here, we test our conclusions further in a prospective study by comparing the coverage, recall, and precision of SR search strategies previously performed in Embase, MEDLINE, and GS.

Methods: The original search results from Embase and MEDLINE and the first 1000 results of GS for librarian-mediated SR searches were recorded. Once the inclusion-exclusion process for the resulting SR was complete, search results from all three databases were screened for the SR's included references. All three databases were then searched post hoc for included references not found in the original search results.

Results: We checked 4795 included references from 120 SRs against the original search results. Coverage of GS was high (97.2 %) but marginally lower than Embase and MEDLINE combined (97.5 %). MEDLINE on its own achieved 92.3 % coverage. Total recall of Embase/MEDLINE combined was 81.6 % for all included references, compared to GS at 72.8 % and MEDLINE alone at 72.6 %. However, only 46.4 % of the included references were among the downloadable first 1000 references in GS. When examining data for each SR, the traditional databases' recall was better than GS, even when taking into account included references listed beyond the first 1000 search results. Finally, precision of the first 1000 references of GS is comparable to searches in Embase and MEDLINE combined.

Conclusions: Although overall coverage and recall of GS are high for many searches, the database does not achieve full coverage as some researchers found in previous research. Further, being able to view only the first 1000 records in GS severely reduces its recall percentages.
If GS were to enable browsing of records beyond the first 1000, its recall would increase, but not sufficiently for it to be used alone in SR searching. The time needed to screen results would also increase considerably. These results support our assertion that neither GS nor any of the other databases investigated is, on its own, an acceptable database to support systematic review searching.

Keywords: Information storage and retrieval, Review literature as topic, Bibliographic databases, Search engine, Sensitivity and specificity

* Correspondence: w.bramer@erasmusmc.nl
1 Erasmus MC, University Medical Center Rotterdam, Medical Library, PO Box 2040, 3000 CA Rotterdam, The Netherlands
Full list of author information is available at the end of the article

© 2016 Bramer et al. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Bramer et al. Systematic Reviews (2016) 5:39. DOI 10.1186/s13643-016-0215-7

Background

In 2013, an article by Gehanno et al. [1] prompted a discussion around the utility of Google Scholar (GS) to support systematic review (SR) searching. In response, we examined the recall of GS and PubMed search strategies for included references of published biomedical SRs [2].
There, we determined that the recall of all included references found among the first 1000 search results in GS was insufficient for it to be used on its own to support SR searching.

In our 2013 study, we intentionally selected search strategies that were identical in PubMed and GS, in an effort to study the effect of the database rather than the quality of the query translation. Therefore, the search strategies used for PubMed in our previous study did not fully use the possibilities of a traditional database search strategy. Librarian-mediated searches (combining MeSH terms and free-text terms in traditional databases) achieve better results than non-librarian searches [3, 4]. References that would have been included, had they been retrieved, could have been missed in PubMed due to a lack of MeSH terms in the search strategies. In our previous paper, we showed that the optimization of search strategies in PubMed (adding MeSH terms and more synonyms) led to more improvement than a similar process in GS (adding extra terms found in the included references). Here, we investigate whether an experienced information specialist, an expert at performing systematic review searches, can find all included references using only one database.

Our prior research replicated search strategies used in previously published systematic reviews. One of GS's shortcomings is that searches are never wholly replicable later, as the search algorithm is constantly changing day to day. GS can only limit search results to publication date ranges. In traditional databases such as PubMed, search results can be limited to specific dates, such as the MeSH date (the date when MeSH terms were altered) or the entry date (the date a record was added to the database). Search results can thus be reproduced in PubMed as they were performed on a specific day, month, and year.

GS does not only index papers it found as full text but also retrieves references merely because they were cited by papers (such results are then marked as [citation]).
In this article, we refer to references marked by GS as [citation] as "citation only." As we used only published SRs in our previous paper, for which most of the full text had already been indexed by GS, we hypothesized that GS probably covered all included references at least as citation only. Due to GS's ever-changing database, search engine, and relevance-ranking algorithm, searchers can never be confident that these citation-only results were present at the time of the original search.

As a follow-up, we aim to evaluate the search results of systematic review search strategies created by an experienced information specialist at the time they were conducted in MEDLINE via the Ovid interface, in combined searches of Embase and MEDLINE via Embase.com, and in GS. Our goal is to compare the coverage of these databases and their performance in terms of precision and recall for included references in SRs.

Methods

The first author regularly performs librarian-mediated searches to support SRs in the academic hospital setting in which he works. The reviews generally cover a wide range of medical topics, from therapeutic effectiveness and diagnostic accuracy to ethics and public health. The methods used at Erasmus MC to create systematic review search strategies will be described in detail in a separate paper. In short, the first author performs single-line search strategies in Embase.com, which are developed using a unique optimization method. The Embase.com search strategies are translated into other databases and interfaces using macros.
These macros are developed in MS Word to search for syntax from one interface and replace it with the appropriate syntax for another interface. After automatic translation of syntax from Embase.com to MEDLINE in Ovid, Emtree terms for Embase are manually replaced with appropriate MeSH terms.

Search strategies for GS are derived from an array of words searched in titles and abstracts in Embase.com. All relevant search terms are copied, and truncated terms are expanded to the most common term(s). To adhere to GS's limitation of 256 characters, the length of each search strategy is reduced by replacing each Boolean operator OR, including its surrounding spaces, with |, effectively reducing the number of characters per synonym by three. Proximity operators in Embase.com are replaced in GS by combining optional search terms in quoted phrases. Thus, if an Embase.com search strategy for liver cancer contains ((liver OR hepatic) NEAR/3 (cancer* OR tumor* OR neoplasm*)), this is translated to "liver|hepatic cancer|tumor|neoplasms" in GS. If the total number of characters in the GS search exceeds 256, the information specialist (often together with the reviewer) decides which search terms are likely to be least relevant and deletes them one at a time until the threshold is reached. Additional file 1 provides some examples of search strategy translations between the three databases.

SR searches were documented at the institution of the first author before researchers began to screen articles for inclusion. The total search results from two major biomedical databases (MEDLINE in the Ovid interface and Embase at Embase.com, the latter searching both Embase and MEDLINE records) and from Google Scholar (where Publish or Perish software [5] allowed downloading of the first 1000 search results) were imported into EndNote after searching was concluded (at the start of the systematic review project).

Reviewers obtained full search results from all databases, which in addition to the aforementioned databases involved at least the Cochrane Registry of Trials, Web of Science, and a subset of PubMed to find recent articles. Occasionally, additional databases were used, such as Scopus, CINAHL (via EBSCOhost), or PsycINFO (via Ovid). Reviewers were advised to seek other sources of included references by using cited- and citing-reference tracking, contacting key authors in the field, and hand-searching journals, but the decision to do so was up to the researchers. In the first author's institution, as in many other institutes, these tasks are generally performed by the researchers, not as a library service.

After the process of collecting included references was completed, reviewers provided us with a list of included references. Alternatively, these were retrieved using the reference lists of resulting publications. We searched for all included references one by one in the original files in EndNote, using author names, year, and if necessary parts of the title. Record numbers of positive matches in EndNote were used to determine the database(s) from which each included reference was retrieved.

For included references not found in the first 1000 results from GS, post hoc GS searches were conducted. Original search strategies used for the SR were combined with author names preceded by "author:" and distinct words or phrases from titles preceded by "intitle:". If included references were retrieved by this search, they were counted as part of the overall recall of the total number of hits reported. Where combinations of these data elements for the included reference, together with the original search strategies, exceeded 256 characters, the original search strategies were divided into separate searches.
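The Embase.com-to-GS translation steps described above (expanding truncated terms, joining OR-groups with "|", collapsing a proximity expression into a quoted phrase, and trimming to GS's 256-character query limit) can be sketched roughly as follows. This is only an illustration of the rules as stated in this section, not the authors' actual MS Word macros; the helper names and the priority-ordered trimming are our own assumptions.

```python
def embase_or_group_to_gs(terms):
    """Join synonyms with '|' instead of ' OR ', saving three characters per synonym."""
    return "|".join(terms)

def proximity_to_phrase(left_terms, right_terms):
    """Approximate an Embase.com NEAR/n expression as one quoted GS phrase of OR-groups."""
    return f'"{embase_or_group_to_gs(left_terms)} {embase_or_group_to_gs(right_terms)}"'

def trim_to_limit(query_parts, limit=256):
    """Drop the lowest-priority parts (assumed last in the list) until the query fits.
    In practice the information specialist chooses which terms to delete."""
    parts = list(query_parts)
    while parts and len(" ".join(parts)) > limit:
        parts.pop()
    return " ".join(parts)

# ((liver OR hepatic) NEAR/3 (cancer* OR tumor* OR neoplasm*)),
# with truncated terms already expanded to their most common forms:
phrase = proximity_to_phrase(["liver", "hepatic"], ["cancer", "tumor", "neoplasms"])
print(phrase)  # "liver|hepatic cancer|tumor|neoplasms"
```

The printed phrase matches the paper's own liver-cancer example; the quoted phrase is a lossy approximation, since GS has no true proximity operator.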
Positive hits were confirmed when both separate searches, combined with the article's metadata, retrieved the item.

When included references were present in GS as citations only, this was documented regardless of whether they had been found in the first 1000 search results, in the total search results, or as positive coverage. When included references were found as citations only, all citing articles were checked. When the single article citing an included reference was the published review for which the search strategy was first designed, we concluded that the result must have been indexed after the search strategy was originally performed. Such an included reference was thus not taken into account in the overall coverage of GS. For all three databases, coverage of non-retrieved included references from the inclusion sets was checked thoroughly by searching the databases for author names, distinct words from titles, and publication year, using multiple combinations if necessary to ensure no included references were missed.

From these results, we calculated for each of the three databases the overall coverage (number of included references available in the database divided by the total number of included references), recall (number of included references found in the search results of the original search strategies for a database divided by the total number of included references retrieved by all databases together), and precision (number of included references retrieved by a certain database divided by the total number of search results retrieved by that database). We additionally calculated recall and precision for the first 1000 hits of GS. All data were calculated for the total set of included references (overall values), as well as per review.
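Under the definitions above, the three measures reduce to simple set arithmetic. A minimal sketch (the set names and toy data are illustrative, not from the paper):

```python
def coverage(available_in_db, included):
    """Included references present in the database / all included references."""
    return len(available_in_db & included) / len(included)

def recall(retrieved_by_db, included_found_by_all_dbs):
    """Included references this database's search retrieved / included
    references retrieved by all database searches together."""
    return len(retrieved_by_db & included_found_by_all_dbs) / len(included_found_by_all_dbs)

def precision(retrieved_by_db, included):
    """Included references among this database's hits / all hits it returned."""
    return len(retrieved_by_db & included) / len(retrieved_by_db)

# Hypothetical example: four included references, one database search
# returning two of them plus two irrelevant hits.
included = {"ref1", "ref2", "ref3", "ref4"}
db_hits = {"ref1", "ref2", "noise1", "noise2"}
print(recall(db_hits, included))     # 0.5
print(precision(db_hits, included))  # 0.5
```

Note that, as in the paper, recall is computed against what any database search found, so a database can exceed 100 % recall if it retrieves references the other databases missed.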
After we determined which search strategies scored exceptionally well or poorly on recall in the first 1000 search results in GS, we examined the characteristics of the search strategies (topics and number of search terms) and of the search results (number of hits and number of included references). We visualized most data in boxplot figures; a general legend can be found in Fig. 1.

Fig. 1 Legend of boxplot figures

Results

Between May 2013 and August 2015, 520 exhaustive searches designed for SRs by the first author were saved and documented. By August 2015, the reviewers of 120 SRs had screened all search results against their review's unique inclusion and exclusion criteria. In aggregate, these reviews had included a total of 4795 references. The results for overall recall and coverage of the original 120 search strategies for all three databases for these 4795 included references are summarized in Fig. 2.

Fig. 2 Total coverage and recall of MEDLINE (ML), Embase/MEDLINE (EM), and Google Scholar (GS)

Overall coverage

Overall, GS contained 4708 of the total number of included references (N = 4795). However, 179 of these were present as citations only. For 49 of these citation-only results, the only citing paper in GS was the review based on our search strategy. These 49 search results could not have been covered in GS at the time of the original search. Therefore, overall coverage of the included references was 4659 (97.2 %). In Embase, the percentage of included references found in the database was slightly above GS at 97.5 %, while MEDLINE produced 92.3 % of all included references (see Fig. 2).

Coverage per SR

For individual SRs, the percentage of included references present in the three databases varied. For 68 % of all SRs, the coverage of GS was 100 %; Embase contained 100 % of all included references for 63 % of all reviews, compared to 34 % for MEDLINE. For individual SRs, the coverage of GS can be as low as 72 %; 77 % was the lowest observation in Embase and 61 % in MEDLINE. See Fig. 3 for a visualization of the coverage per SR.

Fig. 3 Coverage per SR for GS, MEDLINE, and Embase/MEDLINE

Overall recall

In terms of overall recall, Embase/MEDLINE was the most complete, retrieving 3914 of all included references (81.6 %), while MEDLINE alone retrieved 3481 included references (72.6 %). Counting all search results found by the search strategies, GS retrieved 3493 included references (72.8 %). However, only 2224 of those were downloaded with the combined first 1000 search results for the 120 SRs, so the practical recall of GS is much lower, at 46.4 % (see Fig. 2).

Recall per SR

For individual SRs, the percentage of included references present in the first 1000 search results in GS varied by a wide margin. In fifteen SRs, fewer than 25 % of all included references that had been identified through database searches were found in the first 1000 search results of GS, but nine SRs achieved the maximum of 100 %. Recall fared much better in GS when all search results were taken into account (see Fig. 4). A recall of at least 100 % was reached in 24 SRs (20.0 %). For four SRs, the recall of all search results in GS was even higher than 100 %, because GS found included references that had not been found in the traditional databases but had been identified via other sources (e.g., reference checking or hand searching). The recall of traditional databases such as Embase and MEDLINE was more consistent, with Embase/MEDLINE performing best, although its minimum recall was only 43 %.

Overall precision

The total number of search results downloaded from GS was 118,509 (in 4 of the 120 reviews, the number of hits in GS was lower than 1000). These search results together contained 2224 of the included references in the SRs; thus, the overall precision of the first 1000 search results of GS is 1.9 %.
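As a quick arithmetic check, the precision figure follows directly from the two counts just reported, included references over downloaded results:

```python
# Counts reported above for the first 1000 GS results across all 120 SRs.
included_in_first_1000 = 2224
downloaded_from_gs = 118509  # fewer than 1000 hits were available in 4 reviews

precision_pct = 100 * included_in_first_1000 / downloaded_from_gs
print(f"{precision_pct:.1f} %")  # 1.9 %
```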
The total reported number of search results in GS was 10,092,939, of which 3493 were included references; thus, the overall precision of the complete search results of GS was 0.03 %. The precision of Embase was 3940/192,935 = 2.0 %, and that of MEDLINE 3506/126,657 = 2.8 %. These data are visualized in Fig. 5.

Fig. 4 Recall per SR for the first 1000 references in GS, total GS, MEDLINE, and Embase/MEDLINE

Fig. 5 Precision per SR for the first 1000 references in GS, GS, MEDLINE, Embase/MEDLINE, and overall

Precision per SR

The precision of GS's first 1000 search results (1.9 %) did not differ much from the precision observed across all databases searched in the review process (1.6 %) (see Fig. 5). However, the precision of the total set of search results in GS was much lower than that of the other databases (0.03 %).

Why GS scored low is a valid question. Some reasons are discussed below. In some cases (Ahmadi et al. [6], Ambagtsheer (not yet published), Leermakers, Moreira [7]), the first author, together with the reviewer, had not been able to translate a complicated Embase.com search strategy into a GS search, due to the lack of proximity operators. Another reason was that the Embase search strategy was too long for all important search terms to be used in the GS search strategy. In some cases, recall in the traditional databases was possibly higher because of the use of thesaurus terms for a broad topic (such as sexual risk behavior, Legemate, not yet published), or because the topic was very broad, which could have resulted in many non-medical references in GS (music in premature infants, Oliai Araghi, not yet published). In other cases, it is unclear why there is such a vast difference between recall in Embase and GS (Bramer [7]). SRs where GS scored exceptionally well often address well-defined topics, such as cashew nut allergy (van der Valk et al. [8]) or platelet-rich plasma injections for tennis elbow (de Vos et al. [9]). Search strategies for Embase.com, MEDLINE via Ovid, and GS for already published reviews where GS scored exceptionally high or low (as cited in the paragraph above) are provided in Additional file 1.

Discussion

GS covers a vast amount of literature but, when excluding citation-only results first indexed after publication of the reviews used in this research, the overall coverage of Embase is slightly higher. Overall recall of GS is not higher than when searching MEDLINE only, and much lower than when searching both MEDLINE and Embase. Since only the first 1000 search results of GS can be used, the practical recall of GS is exceptionally low, which makes GS unacceptable as a single database to support an SR. If all search results in GS were made available to users, recall would still be too low for SRs, but reviewer burden would increase due to the loss of precision. In fact, none of the observed databases can be used as a single database for SR searching, as even the best performing database (Embase/MEDLINE) can result in a recall of less than 50 % for individual SRs. Our observations are similar to recent observations made by Haddaway et al. [10], who compared the recall of GS to that of Web of Science for SRs in the field of environmental science.

Low precision has always been considered a problem in GS [11], but when accounting for actually usable search results (i.e., the first 1000 listed), we observed precision to be only slightly lower than the 2.9 % reported by Sampson et al. in 2011, and comparable to that observed in the other databases [12].
Further, the precision observed in all databases in our study was nearly equal to the practical precision of the first 1000 hits of GS.

The results of this prospective research are for the most part comparable to our previous, retrospective, study. The coverage of GS and the recall in MEDLINE are similar to those observed in 2013. However, recall in Google Scholar is much lower for our original searches than for the reconstructed searches in our previous study (45.1 vs. 72 %). That is probably because our search strategies, as they were designed by an experienced information specialist, were optimized to find as many included references as possible in the traditional databases. We translated these search strategies to the best of our knowledge into GS search strategies, but we were unable to reach as high a recall in GS as we had achieved in the traditional databases.

In SRs, ideally, extended search methods that go beyond traditional databases are used to find included references. The total number of included references is therefore sometimes higher than the number of included references retrieved in the downloaded search results from traditional databases. In evaluating the results of GS search strategies for known items from the included references, some articles did meet the search strategy's criteria but were not considered relevant enough by GS to be among the first 1000 viewable search results. For some SRs, therefore, the recall of the complete GS search results was higher than 100 %.

The current research could be improved by using search strategies created by multiple independent information specialists at baseline, but we question whether such a change would alter our conclusions. Research on GS for SRs should not focus on whether to use the search tool as a single source but on whether it adds value to the search results from other databases.
The authors are currently collecting data for a follow-up study that can answer the question of whether GS is able to locate included references unidentified by the traditional databases.

Conclusions

Despite its vast coverage of the scholarly literature, Google Scholar is not sufficient to be used on its own as a single database to support SR searching. The reason for this is not low precision in GS searching, which is comparable to that of traditional databases. More problematic is GS's low recall, which is related to the search engine's policy of making only the first 1000 search results viewable. Even if Google Scholar were to allow users to browse beyond the first 1000 search results, its overall recall would still be too low to locate all included references to support a systematic review. We conclude similarly that neither Embase nor MEDLINE on its own is sufficient for retrieving all included references for SRs.

List of definitions

Boolean operator: Set of words (AND, OR, NOT, or proximity operators) used as conjunctions to combine or exclude keywords in a search strategy.

Citation only: Search result retrieved by Google Scholar solely because another article cited it in its list of references.

Coverage: Number of included references available in a certain database divided by the number of relevant articles included in a systematic review.

Included reference: A specific article that, after consideration of inclusion and exclusion criteria, is included by review authors in their systematic review.

Librarian-mediated searches: Searches that are designed by (medical) librarians or information specialists in close accordance with researchers' information needs and research goals.

Optimization of search strategies: Improving recall or precision of search strategies by adding or dropping search terms or key concepts.

Practical precision in Google Scholar: Percentage of included hits in the first 1000 search results of Google Scholar.

Precision: Number of included references retrieved divided by the total number of articles retrieved.

Proximity operator: A special kind of Boolean operator used to search for occurrences of words adjacent to, or within a certain number of words from, another word (or group of words).

Recall: Number of included references retrieved by one database divided by the number of included references retrieved by all databases together.

Reviewer: A person requesting a librarian-mediated search for a systematic review, who is responsible for reviewing the search results and determining which references meet predetermined inclusion criteria.

Search result: The references (or the number thereof) provided by a certain database that fulfill the criteria of a search strategy.

Search strategy: A sequence of search terms (thesaurus terms and free text) combined with Boolean operators, designed to find relevant search results in a certain database.

Single-line search strategy: A search strategy consisting of one line of search terms combined with Boolean operators and parentheses, as opposed to multi-line search strategies, which combine search results from multiple record sets.

Additional file

Additional file 1: Search strategies for published systematic reviews where the recall of Google Scholar was exceptionally good or poor. (DOCX 23 kb)

Abbreviations

EM: Embase/MEDLINE combination; GS: Google Scholar; ML: MEDLINE; SR: systematic review.

Competing interests

WB has received reimbursement from Elsevier for attending the Medical Library Association conference in Austin, Texas, in May 2015 as an invited speaker for Embase.com, which is one of the databases investigated in this research. The other authors declare that they have no competing interests.

Authors' contributions

WB designed the study, created the search strategies, gathered the data, did the data analysis, and drafted the manuscript. DG checked the data analysis and helped draft the manuscript.
BK checked samples of the raw data, advised on data analysis, and revised the manuscript critically. All authors read and approved the final manuscript.

Authors' information

WB is an information specialist at Erasmus MC Rotterdam, the Netherlands. He designs exhaustive searches for hundreds of SRs per year. He holds a BSc in biology and LIS and is currently pursuing a PhD degree, specializing in the literature search process for SRs.

DG is the UBC Biomedical Branch Librarian at Vancouver General Hospital in Canada. He holds MLS and MEd degrees.

BK is a subject specialist in Life Sciences and Medicine at Utrecht University Library, the Netherlands. She holds a PhD in Neurobiology.

Acknowledgements

The authors thank Patricia F. Anderson, who was involved in this research and the earlier publication and helped us draft the article. The authors thank Professors Jos Kleijnen and Oscar Franco for critically reviewing the final draft of this paper. No funding was received for this research.

Author details

1 Erasmus MC, University Medical Center Rotterdam, Medical Library, PO Box 2040, 3000 CA Rotterdam, The Netherlands.
2 The University of British Columbia, UBC Biomedical Branch Library, Gordon and Leslie Diamond Health Care Centre, 2775 Laurel Street, Floor 2, Vancouver, BC V5Z 1M9, Canada.
3 Utrecht University Library, PO Box 80125, 3508 TC Utrecht, The Netherlands.

Received: 19 October 2015. Accepted: 20 February 2016.

References

1. Gehanno J-F, Rollin L, Darmoni S. Is the coverage of Google Scholar enough to be used alone for systematic reviews. BMC Med Inform Decis Mak. 2013;13(1):7.
2. Bramer WM, Giustini D, Kramer BM, Anderson PF. The comparative recall of Google Scholar versus PubMed in identical searches for biomedical systematic reviews: a review of searches used in systematic reviews. Syst Rev. 2013;2:115. doi:10.1186/2046-4053-2-115.
3. Gardois P, Calabrese R, Colombi N, Deplano A, Lingua C, Longo F, et al. Effectiveness of bibliographic searches performed by paediatric residents and interns assisted by librarians. A randomised controlled trial. Health Info Libr J. 2011;28(4):273-84.
4. Brettle A, Hulme C, Ormandy P. Effectiveness of information skills training and mediated searching: qualitative results from the EMPIRIC project. Health Info Libr J. 2007;24(1):24-33. doi:10.1111/j.1471-1842.2007.00702.x.
5. Harzing AW. Publish or Perish. 2007. http://www.harzing.com/resources/publish-or-perish. Accessed 24 February 2016.
6. Ahmadi AR, Lafranca JA, Claessens LA, Imamdi RM, IJzermans JN, Betjes MG, et al. Shifting paradigms in eligibility criteria for live kidney donation: a systematic review. Kidney Int. 2015;87(1):31-45. doi:10.1038/ki.2014.118.
7. Bramer WM. Evaluation of instructive texts on searching medical databases. J Med Libr Assoc. 2015;103(4):208-9. doi:10.3163/1536-5050.
8. van der Valk JP, Dubois AE, Gerth van Wijk R, Wichers HJ, de Jong NW. Systematic review on cashew nut allergy. Allergy. 2014;69(6):692-8. doi:10.1111/all.12401.
9. de Vos RJ, Windt J, Weir A. Strong evidence against platelet-rich plasma injections for chronic lateral epicondylar tendinopathy: a systematic review. Br J Sports Med. 2014;48(12):952-6. doi:10.1136/bjsports-2013-093281.
10. Haddaway NR, Collins AM, Coughlin D, Kirk S. The role of Google Scholar in evidence reviews and its applicability to grey literature searching. PLoS One. 2015;10(9):e0138237. doi:10.1371/journal.pone.0138237.
11. Boeker M, Vach W, Motschall E. Google Scholar as replacement for systematic literature searches: good relative recall and precision are not enough. BMC Med Res Methodol. 2013;13(1):131.
12. Sampson M, Tetzlaff J, Urquhart C. Precision of healthcare systematic review searches in a cross-sectional sample. Res Synth Methods. 2011;2(2):119-25. doi:10.1002/jrsm.42.

