UBC Faculty Research and Publications

Systematic review of the effectiveness of training programs in writing for scholarly publication, journal editing, and manuscript peer review (protocol). Galipeau, James; Moher, David; Skidmore, Becky; Campbell, Craig; Hendry, Paul; Cameron, D W; Hébert, Paul C; Palepu, Anita. Jun 17, 2013

PROTOCOL - Open Access

Systematic review of the effectiveness of training programs in writing for scholarly publication, journal editing, and manuscript peer review (protocol)

James Galipeau1*, David Moher1,2, Becky Skidmore3, Craig Campbell4, Paul Hendry2, D William Cameron1,2, Paul C Hébert1,2 and Anita Palepu5

Abstract

Background: An estimated $100 billion is lost to 'waste' in biomedical research globally, annually, much of which comes from the poor quality of published research. One area of waste involves bias in reporting research, which compromises the usability of published reports. In response, there has been an upsurge in interest and research in the scientific process of writing, editing, peer reviewing, and publishing (that is, journalology) of biomedical research. One reason for bias in reporting and the problem of unusable reports could be that authors lack knowledge or engage in questionable practices while designing, conducting, or reporting their research. Another might be that the peer review process for journal publication has serious flaws, including possibly being ineffective and having poorly trained and poorly motivated reviewers. Similarly, many journal editors have limited knowledge related to publication ethics. This can ultimately have a negative impact on the healthcare system. There have been repeated calls for better, more numerous training opportunities in writing for publication, peer review, and publishing. However, little research has taken stock of journalology training opportunities or evaluations of their effectiveness.

Methods: We will conduct a systematic review to synthesize studies that evaluate the effectiveness of training programs in journalology.
A comprehensive three-phase search approach will be employed to identify evaluations of training opportunities, involving: 1) forward-searching using the Scopus citation database, 2) a search of the MEDLINE In-Process and Non-Indexed Citations, MEDLINE, Embase, ERIC, and PsycINFO databases, as well as the databases of the Cochrane Library, and 3) a grey literature search.

Discussion: This project aims to provide evidence to help guide the journalological training of authors, peer reviewers, and editors. While there is ample evidence that many members of these groups are not getting the training needed to excel at their respective journalology-related tasks, little is known about the characteristics of existing training opportunities, including their effectiveness. The proposed systematic review will provide evidence regarding the effectiveness of training, giving potential trainees, course designers, and decision-makers evidence to help inform their choices and policies regarding the merits of specific training opportunities or types of training.

Keywords: Training, Writing for publication, Journalology, Author, Journal editor, Manuscript peer review, Publishing

* Correspondence: jgalipeau@ohri.ca
1Ottawa Hospital Research Institute, 501 Smyth Rd, Ottawa K1H 8L6, Canada
Full list of author information is available at the end of the article

© 2013 Galipeau et al.; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Galipeau et al.
Systematic Reviews 2013, 2:41. http://www.systematicreviewsjournal.com/content/2/1/41

Background

An estimated $100 billion is lost to 'waste' in biomedical research globally each year, a sizeable portion of which comes from the poor quality of published research. Chalmers and Glasziou identified four areas of waste related to: 1) the relevancy of research questions to clinicians and patients, 2) the appropriateness of research design and methods, 3) making publications fully accessible, and 4) issues of bias and the usability of reports [1]. In the last of these categories, the authors explain that over 30% of trial interventions are not sufficiently described, over 50% of planned study outcomes are not reported, and most new research is not interpreted in the context of a systematic assessment of other relevant evidence [2]. In response, there has been an upsurge in interest and research on topics such as publication ethics, research integrity, and rigor in the scientific process of writing, editing, peer reviewing, and publishing (that is, journalology) of biomedical research. This type of waste has also become a primary focus of organizations such as the Committee on Publication Ethics (COPE), the World Association of Medical Editors (WAME), and the Enhancing the Quality and Transparency of Health Research (EQUATOR) Network.

Bias in reporting and the problem of unusable reports can be attributed to shortcomings at both the production and publication phases of the research process. On one hand, some authors lack knowledge or engage in questionable practices while designing, conducting, or reporting their research. On the other hand, the peer review process for both grant giving and journal publication has serious flaws, including claims of being ineffective [3], as well as having poorly trained and poorly motivated reviewers. Similarly, many journal editors lack formal training [4,5], as well as having poor knowledge related to publication ethics [5].
While the causes of this type of research waste may be varied, the consequences for decision-makers, knowledge users, and tax-paying healthcare patients are ultimately negative, as indicated by Dickersin and Chalmers in their report on this topic: 'Incomplete and biased reporting has resulted in patients suffering and dying unnecessarily [6]. Reliance on an incomplete evidence base for decision-making can lead to imprecise or incorrect conclusions about an intervention's effects. Biased reporting of clinical research can result in overestimates of beneficial effects [7] and suppression of harmful effects of treatments [8]. Furthermore, planners of new research are unable to benefit from all relevant past research [9]'.

The lack of formal training appears to be widespread not only among authors of health research, but also among the gatekeepers of the health literature - journal peer reviewers and editors, and at earlier stages, grant peer reviewers. This may be one potential reason for the large amount of waste in biomedical research. Murray [10] suggests that most academics have no formal training in writing for publication and that they developed their skills mainly through a process of trial and error. In addition, the rates of author misconduct [11-14], most incidences of which stem from negligence, poorly performed science, investigator bias, or lack of knowledge, rather than acts of fraud [15], suggest a need for better training among authors on journalological issues. Meanwhile, Keen [16] argued that, while there is a wealth of literature describing how to go about writing for publication, the provision of information alone may be insufficient to support potential authors.
In addition, Eastwood [17] suggested that professional training opportunities may be lacking due to a faulty assumption that trainees could not have achieved their postdoctoral status without having acquired an education in critical reading and writing.

In the case of peer reviewers, very little is known about their training and experiences [18]; however, research shows that many reviewers' needs for training and support are not being met, despite the desire for such support among most of them [19]. Peer reviewers have difficulty identifying major errors in articles submitted for publication [20-22], and in some cases agreement between reviewers of the same manuscript is not much different than would be expected by chance [3]. There is also evidence to suggest that the quality of one's peer reviewing deteriorates over time [3,18] and that peer reviewers are susceptible to positive-outcome bias [23]. Similarly, the peer review process used by granting agencies also appears to be problematic. A survey of 29 international granting agencies indicated that several aspects of their peer review process were poor and had not improved in the preceding 5 years, including difficulty retaining good reviewers, reviewers carrying out poor quality reviews, and reviewers not following guidelines appropriately [24]. The same survey polled external reviewers of granting agencies (n = 258) and found that only 9% had received some form of training in how to conduct biomedical grant reviews, despite 64% of reviewers expressing an interest in peer review training [24]. The authors concluded that funding organizations should help reviewers do their job effectively by providing clear guidance and training.

Many journal editors report having informal [5], or little to no [4], training in editing skills, as well as being unfamiliar with available guidelines [25]. However, they also say they would welcome more guidance or training [4,5,25].
When tested, editors performed very poorly on knowledge of editorial issues related to authorship, conflict of interest, peer review, and plagiarism [5]. Many editors also believed that ethical issues occur rarely or never at their journal [25]. These findings echo the assertion of Paul Hébert, former Editor-in-Chief of the Canadian Medical Association Journal (CMAJ), that 'we need to train the editors of tomorrow. In Canada, we have a very small scholarly publishing industry. As a consequence, there are few medical editing positions, no obvious career paths and even fewer training opportunities [26]'.

While the reasons for the poor training of authors, peer reviewers, and editors have not been studied directly, one cause may be a lack of legitimate opportunities to obtain formal training [26], or a lack of access to these training opportunities. For example, there are currently no certification programs or degrees that allow a physician to train specifically to become a medical journal editor. While courses are offered by a few groups (for example, the Council of Science Editors) and fellowship training programs exist in the USA, Canada, and the UK, the majority of these are 1-year programs, largely require a full-time commitment, and are accessible to only a select few annually. Some journals, such as the Journal of the American Medical Association (JAMA), the British Medical Journal (BMJ), and American Family Physician (AFP), have created 1- to 2-month electives for medical students to undergo similar training; however, these are only open to medical students. The situation for authors and peer reviewers is not much better. Eastwood [17] explains that, for medical residents, journal clubs are the primary forum in which to learn about critical appraisal of the biomedical literature; however, most clinicians will receive little [27] to no [10] formal training in writing for publication.
Eastwood also points out that 'few of the programs developed to meet the National Institutes of Health requirement for training in responsible research practices devote time to the practice and ethics of biomedical reporting [17]'. Similarly, for peer reviewers, there is little to no formal training available, with most reviewers being guided by journals' instructions-to-reviewers sections and being forced to learn by trial and error [28].

There have been repeated calls for better, more numerous training opportunities for research reporting, peer review, and publishing [1,26,29]. Although training opportunities appear to be somewhat limited, a small number of them do exist, some being offered by reputable organizations. However, little research has taken stock of journalology training opportunities or related evaluations of their effectiveness. A systematic review of training opportunities in a related area - overcoming barriers to publication - has been identified [30]. The review, which included 17 studies published between 1984 and 2004, evaluated the effect of writing courses, writing support groups, and writing coaches on author output. While publication rates were found to increase overall, whether opportunities exist to enhance the quality of such research output for all relevant players (that is, authors, peer reviewers, and editors) is a vastly more relevant question in an age of increasing evidence of author misconduct and misreporting. We are not aware of an existing synthesis of knowledge on this topic.

The objective of this project is to systematically review, evaluate, and synthesize information on whether training in journalology effectively improves educational outcomes, such as measures of knowledge, intention to change behaviour, and measures of excellence in training domains.
Collecting this information will allow knowledge users to know which training options are most effective. This will be useful for making training recommendations to authors, peer reviewers, editors, and others. In addition, it will provide a solid foundation from which to develop and build new training courses and programs for these groups, ultimately improving knowledge and the quality of research practices both within Canada and abroad.

Methods

Criteria for considering studies for this review

Population
Those centrally involved in writing for scholarly publication, journal editing, and manuscript peer review (that is, authors, peer reviewers, journal editors), or any other group that may be peripherally involved in the scientific writing and publishing process, such as medical journalists.

Intervention
Evaluations of training in any specialty or subspecialty of writing for scholarly publication, journal editing, or manuscript peer review targeted at the designated population(s) will be included.

Comparator
The following comparisons will be included: 1) before and after administration of a training class/course/program of interest, 2) between two or more training classes/courses/programs of interest, or 3) between a training class/course/program and any other intervention(s) (including no intervention).

Outcome(s)
The primary outcomes will be any measure of effectiveness of training as reported, including, but not limited to, measures of knowledge, intention to change behaviour, and measures of excellence in training domains (writing, peer review, editing), however reported. Since this review is largely exploratory, where other meaningful outcomes are reported, this information will be collected as well.

Study design(s)
Comparative studies evaluating at least one training program/course/class of interest will be included in this review and will henceforth be termed 'evaluations'.
Search methods for identification of studies
A comprehensive three-phase search approach will be employed to identify evaluations of training opportunities, as follows: 1) For training which has been described in published reports, citations of these reports will be forward-searched using the Scopus citation database. 2) We will also perform a search of MEDLINE In-Process and Non-Indexed Citations, MEDLINE, Embase, ERIC, PsycINFO, and the databases of the Cochrane Library. A specific search strategy will be developed by an information specialist and will be peer reviewed prior to execution [31]. There will be no language restrictions on the search strategy; however, due to the large expected yield of the planned review and the limited resources available, evaluations encountered in languages other than English and French will be set aside and listed in an appendix in the report. Letters, commentaries, and editorials will not be excluded, due to the possibility that they may contain reference to evaluations of particular training programs. Studies will not be excluded based on publication status. 3) A grey literature search will also be conducted, consisting of screening the results of a concurrent project to map all existing and previous training in journalology through an 'environmental scanning' technique [32,33] using the Google search engine. The administrators of any relevant training opportunities identified in the environmental scan will be contacted to inquire whether they are aware of any published or unpublished evaluations of the training opportunities.

Data collection
Following the execution of the search strategy, the identified records (titles and/or available abstracts) will be collated in a Reference Manager [34] database for de-duplication.
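As a rough sketch of what the de-duplication step amounts to (the actual work will be done in Reference Manager; the record fields and the matching rule used here - normalized title plus publication year - are illustrative assumptions, not the protocol's specified method):

```python
# Illustrative de-duplication of bibliographic records by normalized
# title + publication year; Reference Manager applies its own matching
# rules, so this is only a sketch of the idea.
import re

def normalize(title: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace."""
    return re.sub(r"\s+", " ", re.sub(r"[^a-z0-9 ]", "", title.lower())).strip()

def deduplicate(records):
    """Keep the first record seen for each (normalized title, year) key."""
    seen, unique = set(), []
    for rec in records:
        key = (normalize(rec["title"]), rec["year"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

# Hypothetical records retrieved from two databases
records = [
    {"title": "Peer review training: a trial", "year": 2004},
    {"title": "Peer Review Training - A Trial", "year": 2004},  # duplicate
    {"title": "Writing for publication", "year": 2007},
]
print(len(deduplicate(records)))  # 2 unique records
```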
The final unique record set and the full text of potentially eligible studies will be exported to an Internet-based software, DistillerSR (Evidence Partners, Ottawa, Canada), through which screening of records and extraction of data from included evaluations will be carried out.

Study selection
Given the broad/general nature of many of the search terms (for example, author, editor, education), we expect a large volume of initial search results. Therefore, two reviewers will conduct an initial screening of titles only, using the liberal accelerated method (that is, one reviewer screens all identified studies and a second reviewer independently screens only the excluded studies). Following the title screen, titles and abstracts of identified records will be screened by two reviewers, again using the liberal accelerated method. Subsequently, the full text of all potentially eligible evaluations will be retrieved and reviewed for eligibility, independently, by two members of the team using a priori eligibility criteria. To be included, evaluations must include one of the a priori comparison groups and examine the influence of training, as reported, using any measure. Disagreements between reviewers at this stage will be resolved by consensus or by a third member of the research team. If necessary, authors of potentially included evaluations will be contacted to clarify data needed for eligibility.

Data extraction
Separate data extraction forms will be developed to capture the information needed for synthesis for each of the three comparisons; they will be piloted using a subset of included evaluations and modified as needed. One reviewer will extract general study characteristics of included evaluations, with verification carried out by a second reviewer. Data on measures of effectiveness of training for each program/course/class will be extracted by one reviewer; a second reviewer will verify the accuracy of the data from a random 20% sample of included evaluations.
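The random 20% verification sample could be drawn reproducibly along the following lines (the study identifiers and the fixed seed are illustrative assumptions; the protocol does not specify a sampling implementation):

```python
# Illustrative draw of a random 20% verification sample of included
# evaluations; IDs and seed are placeholders, not from the protocol.
import math
import random

def verification_sample(study_ids, fraction=0.2, seed=1):
    """Return a sorted random sample of ceil(fraction * n) study IDs.

    A fixed seed makes the draw reproducible for audit purposes.
    """
    k = math.ceil(fraction * len(study_ids))
    return sorted(random.Random(seed).sample(study_ids, k))

included = [f"eval-{i:03d}" for i in range(1, 26)]  # 25 included evaluations
print(verification_sample(included))  # 5 IDs for second-reviewer verification
```

Fixing the seed is a design choice: it lets the second reviewer's sample be re-derived later if the verification step is ever questioned.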
Any discrepancies between reviewers will be resolved by consensus or by a third member of the research team. If there is greater than a 50% discrepancy between reviewers' answers in the random 20% sample, or only a small number of included studies, 100% data verification will be considered. Authors of included evaluations will be contacted to invite contribution of any unpublished data needed for this review that is not available in published reports.

General publication characteristics to be extracted include: first author name and contact information (of first or corresponding author), year of publication, institutional affiliation of first author, country, language of publication, type of document (full text versus abstract), and funding source. Other details to be collected include: name of training class(es)/course(s)/program(s) being evaluated (if applicable), population evaluated, sample size, whether prospective or retrospective, and mechanism of sampling (or participant assignment to groups). Extracted outcome data will include: tool(s) used to evaluate effectiveness of training, timing of measurement, effectiveness measurement (however reported), intention-to-change-behaviour scores, and measures of excellence data (however reported).

Assessment of validity of evaluations
No tool currently exists to assess the validity (internal and external) of evaluations in methodological reviews such as this. Study designs are expected to be largely heterogeneous; however, if evaluations using randomized controlled trial (RCT) designs are encountered, the Cochrane risk of bias tool will be used to judge validity [35]. To assess all other evaluations, we propose rating each of the following criteria as 'yes', 'no', or 'unclear', to help readers make their own judgments about the overall validity of the included evidence. We have used this approach elsewhere [36,37].

1.
Whether an objective measure of training effectiveness was employed (that is, a priori questionnaires).
2. Whether the measurement tool used to evaluate training effectiveness was reported to be validated.
3. Whether intended methods align with reported findings.
4. Whether data from all included participants were reported.
5. Whether sampling for comparisons 2 and 3 occurred within the same time frame.
6. Whether comparison groups represent similar populations (that is, same area of health-related discipline, similar levels of training).

Data analysis

Measures of effect
Due to the paucity of literature describing formal training opportunities in journalology, we are unable to anticipate the types of measurement tools that might be used for their evaluation. Where data are provided narratively, they will be collected as such. Where summary scores of outcomes (that is, participant knowledge using a particular tool) are presented within each evaluation, we will collect means and standard deviations (SDs). When medians and ranges are reported instead of means and SDs, suitable approximations will be used, as discussed in the Cochrane Handbook [38]. A standardized mean difference (SMD) and 99% confidence interval will be calculated for each study; an SMD >0 will indicate better overall training effectiveness. Where proportions of participants in each comparison group are reported for a particular outcome in an evaluation, this information will be collected. A relative risk (RR) and 99% confidence interval will be calculated for each study; an RR >1.0 will indicate a higher proportion of participants with positive outcomes. Confidence levels of 99% will ensure conservative estimates of precision are obtained. If reporting and sample size allow, standard methods - depending on the approximate distribution of the data - will be used to transform medians and interquartile ranges (IQRs) to mean difference (MD) and SD, to allow visual inspection of estimates of effect.
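A minimal sketch of the SMD calculation described above (Cohen's d with a pooled SD and a normal-approximation 99% confidence interval; the group means, SDs, and sample sizes are made-up example data, and the protocol does not commit to this particular SMD variant):

```python
# Illustrative standardized mean difference with an approximate 99% CI.
# Example numbers are hypothetical, not from any included evaluation.
import math

def smd_with_ci(m1, sd1, n1, m2, sd2, n2, z=2.576):  # z = 2.576 for a 99% CI
    """Cohen's d using the pooled SD, with a normal-approximation CI."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd
    # Standard large-sample approximation of the SMD's standard error
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, (d - z * se, d + z * se)

# Hypothetical knowledge scores: trained group vs. comparison group
d, (lo, hi) = smd_with_ci(m1=75, sd1=10, n1=30, m2=70, sd2=12, n2=30)
print(round(d, 2))  # an SMD > 0 favours training
```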
Where possible, these estimates will be included in SMD calculations for overall training effect.

Dealing with missing data
Corresponding authors of potentially included evaluations will be contacted, up to two times, where data are needed. If the data are not obtained, the evaluation will be excluded from quantitative synthesis.

Data synthesis
General study characteristics will be presented in tabular format. Due to the anticipated methodological heterogeneity of potentially included evaluations (based on previous experience carrying out methodological systematic reviews), it is unlikely that data will be combined across evaluations. If this is the case, data will be described qualitatively in the text of the review. However, we will first assess the suitability of meta-analysis, which will depend on the quantity of data and the homogeneity of studies according to methodology and content. If meta-analysis is possible, analyses will be conducted with the random effects model using the Review Manager software [39]. Any follow-up time for outcomes will be considered relevant, but only similar time points will be meta-analyzed; 'similarity' will need to be determined post hoc once study data are collected during the data extraction phase. Initially, all training
Initially, all trainingprograms will be considered together.Subgroup analysisIf relevant data are reported and permit quantitativesynthesis, the following subgroup analyses are planned:1) modes of training delivery (that is, online, face-to-face),2) primary role (that is, author, peer review, editor), 3) pri-mary occupation of participant (that is, student (includingmedical residents), health practitioner, other health-relatedprofessional), 3) duration of training (that is, class, course,program), 4) credibility via institutional affiliation (that is,sponsored by academic institution, publishing house, in-dustry, other), 5) setting (that is, individual versus group),or 6) associated cost (that is, cost versus no cost).Assessment of heterogeneityIf there is any quantitatively aggregated data acrossincluded studies, we plan to measure the inconsistencyof study results using the I2 heterogeneity statistic todetermine the extent of variation in effect estimates thatis due to heterogeneity rather than chance [38]. Hetero-geneity will be determined by visual inspection of theforest plot and I2 statistics. For the interpretation of I2, arough guide of low (0% to 25%), moderate (25% to 50%),substantial (50% to 75%), and considerable (75% to100%) heterogeneity will be used [38]. Where consider-able statistical heterogeneity exists (≥75%) data will notbe pooled. Possible reasons for heterogeneity will be ex-plored in sensitivity analyses; the pre-specified subgroupanalyses, if feasible, will be examined to determinewhether they provide possible reasons for any observedstatistical heterogeneity. The variables outlined above forGalipeau et al. 
subgroup analyses will be considered statistically significant at P <0.01.

Reporting biases
Asymmetry of funnel plots is an established method for assessing the potential presence of publication bias, and other biases, in traditional systematic reviews of intervention effectiveness, subject to a sufficient number of included studies [38,40]. We will generate funnel plots and graphically explore the presence of asymmetry. If warranted, we will evaluate asymmetry using statistical tools [40].

Discussion
This project aims to provide evidence to help guide the journalological training of authors, peer reviewers, and editors, and the development of future training opportunities in this domain. While there is ample evidence that many members of these groups are not getting the training needed to excel at their respective journalology-related tasks, little is known about the characteristics of existing training opportunities, including their effectiveness. The proposed systematic review will provide evidence regarding the effectiveness of training, giving potential trainees, course designers, and decision-makers evidence to help inform their choices and policies regarding the merits of a specific training opportunity or type of training.

We believe that the results of the proposed review will be of relevance to a wide variety of knowledge users, namely: authors, peer reviewers, and editors, as well as designers and teachers of training courses related to journalology, and decision-makers for continuing medical education (CME) and continuing professional development (CPD). Consumers of training (that is, potential trainees) will benefit by learning which types of training provide the most effective learning outcomes. This will
This willempower them to make more informed choices regardingspecific training, rather than making decisions based onword-of-mouth recommendations, academic affiliation,and other such unreliable methods. The designers of train-ing will benefit by gaining access to a knowledge synthesisthat outlines the characteristics of effective learningstructures and environments. This will enable them todesign better learning strategies and a better curriculumthat takes into consideration the particularities ofeducation in journalology. Finally, decision-makers willbenefit by gaining an understanding of what workswhen choosing the type of training that will best benefittheir organizations.This review will provide the knowledge that is necessaryfor better educating authors, peer reviewers, and editorson how to reduce biomedical research waste by improvingthe quality and rigor in research reporting. Ultimately, thegoal is to move closer to optimal reporting of healthresearch, so that we can have full access to, and use of, thenew knowledge coming from our investments in research.AbbreviationsAFP: American Family Physician; BMJ: British Medical Journal; CMAJ: CanadianMedical Association Journal; CME: Continuing medical education;COPE: Committee on Publication Ethics; CPD: Continuing professionaldevelopment; EQUATOR: Enhancing the Quality and Transparency of HealthResearch; IQR: Interquartile range; JAMA: Journal of the American MedicalAssociation; MD: Mean difference; RCT: Randomized controlled trial;RR: Relative risk; SD: Standard deviation; SMD: Standardized mean difference;WAME: World Association of Medical Editors.Competing interestsThe authors declare that they have no competing interests.Authors’ contributionsJG drafted the manuscript; DM conceived of the study, participated in itsdesign and coordination, and helped draft the manuscript; BS designed thesearch strategy; and CC, PHendry, DWC, PHébert, and AP all providedcontent expertise and participated in the design of the 
study. All authors read and approved the final manuscript.

Acknowledgements
This research project is funded by the Canadian Institutes of Health Research (number: 278874). The funder has no role in the design, collection, analysis, and interpretation of the data; in the writing of the manuscript; or in the decision to submit the manuscript for publication.

Author details
1Ottawa Hospital Research Institute, 501 Smyth Rd, Ottawa K1H 8L6, Canada. 2Faculty of Medicine, University of Ottawa, 451 Smyth Road, Ottawa, ON K1H 8M5, Canada. 3Independent Consultant, Ottawa, Canada. 4Royal College of Physicians and Surgeons of Canada, 774 Echo Drive, Ottawa, ON K1S 5N8, Canada. 5Department of Medicine, Centre for Health Evaluation and Outcome Sciences, University of British Columbia, St Paul's Hospital, Vancouver, BC V6Z 1Y6, Canada.

Received: 21 March 2013. Accepted: 28 May 2013. Published: 17 June 2013.

References
1. Chalmers I, Glasziou P: Avoidable waste in the production and reporting of research evidence. Obstet Gynecol 2009, 114(6):1341.
2. Glasziou P, Meats E, Heneghan C, Shepperd S: What is missing from descriptions of treatment in trials and reviews? Br Med J 2008, 336(7659):1472.
3. Callaham ML: The natural history of peer reviewers: the decay of quality. In Proceedings of the Sixth International Congress on Peer Review and Biomedical Publication. Vancouver: International Congress on Peer Review and Biomedical Publication; 2009.
4. Garrow J, Butterfield M, Marshall J, Williamson A: The reported training and experience of editors in chief of specialist clinical medical journals. JAMA 1998, 280(3):286–287.
5. Wong VS, Callaham ML: Medical journal editors lacked familiarity with scientific publication issues despite training and regular exposure. J Clin Epidemiol 2012, 65(3):247–252.
6. Cowley AJ, Skene A, Stainer K, Hampton JR: The effect of lorcainide on arrhythmias and survival in patients with acute myocardial infarction: an example of publication bias. Int J Cardiol 1993, 40(2):161–166.
7.
Sterne JAC, Egger M, Moher D: Addressing reporting biases. In Cochrane Handbook for Systematic Reviews of Interventions: Cochrane Book Series. Edited by Higgins JP, Green S. Chichester: John Wiley & Sons; 2008:297–333.
8. Dealing with biased reporting of the available evidence. The James Lind Library: [http://www.jameslindlibrary.org/essays/interpretation/relevant_evidence/dealing-with-biased-reporting-of-the-available-evidence.html]
9. Dickersin K, Chalmers I: Recognizing, investigating and dealing with incomplete and biased reporting of clinical research: from Francis Bacon to the WHO. JRSM 2011, 12:532–538.
10. Murray R, Newton M: Facilitating writing for publication. Physiotherapy 2008, 94:29–34.
11. Rennie D: Editorial peer review: its development and rationale. In Peer Review in Health Sciences. Edited by Godlee F, Jefferson T. London: BMJ; 2003:1–13.
12. Tavare A: Managing research misconduct: is anyone getting it right? BMJ 2011, 343:d8212.
13. Wager E: Coping with scientific misconduct. BMJ 2011, 343:d6586.
14. Brice J, Bligh J: Author misconduct: not just the editors' responsibility. Med Educ 2005, 39(1):83–89.
15. Marusic A: Author misconduct: editors as educators of research integrity. Med Educ 2005, 39(1):7–8.
16. Keen A: Writing for publication: pressures, barriers and support strategies. Nurse Educ Today 2007, 27(5):382–388.
17. Eastwood S, Derish PA, Berger MS: Biomedical publication for neurosurgery residents: a program and guide. Neurosurgery 2000, 47(3):739–748. discussion 748–9.
18. Callaham ML, Tercier J: The relationship of previous training and experience of journal peer reviewers to subsequent review quality. PLoS Med 2007, 4(1):e40.
19. Freda MC, Kearney MH, Baggs JG, Broome ME, Dougherty M: Peer reviewer training and editor support: results from an international survey of nursing peer reviewers. J Prof Nurs 2009, 25(2):101–108.
20.
Baxt WG, Waeckerle JF, Berlin JA, Callaham ML: Who reviews thereviewers? Feasibility of using a fictitious manuscript to evaluate peerreviewer performance. Ann Emerg Med 1998, 32(3):310–317.21. Schroter S, Black N, Evans S, Carpenter J, Godlee F, Smith R: Effects oftraining on quality of peer review: randomised controlled trial. BMJ 2004,328(7441):673.22. van Rooyen S, Godlee F, Evans S, Smith R, Black N: Effect of blinding andunmasking on the quality of peer review: a randomized trial. JAMA 1998,280(3):234–237.23. Emerson GB, Warme WJ, Wolf FM, Heckman JD, Brand RA, Leopold SS:Testing for the presence of positive-outcome bias in peer review: arandomized controlled trial. Arch Intern Med 2010, 170(21):1934–1939.24. Schroter S, Groves T, Hojgaard L: Surveys of current status in biomedicalscience grant review: funding organisations’ and grant reviewers’perspectives. BMC Med 2010, 8:62.25. Wager E, Fiack S, Graf C, Robinson A, Rowlands I: Science journal editors’views on publication ethics: results of an international survey.J Med Ethics 2009, 35(6):348–353.26. Hebert PC: Even an editor needs an editor: reflections after five years atCMAJ. CMAJ 2011, 183(17):1951.27. Pololi L, Knight S, Dunn K: Facilitating scholarly writing in academicmedicine. J Gen Intern Med 2004, 19(1):64–68.28. Lu Y: Learning to be confident and capable journal reviewers: anAustralian perspective. Learned Publishing 2012, 25(1):56–61.29. Benos DJ, Bashari E, Chaves JM, Gaggar A, Kapoor N, LaFrance M, Mans R,Mayhew D, McGowan S, Polter A, Qadri Y, Sarfare S, Schultz K, SplittgerberR, Stephenson J, Tower C, Walton RG, Zotov A: The ups and downs of peerreview. Adv Physiol Educ 2007, 31(2):145–152.30. McGrail MR, Rickard CM, Jones R: Publish or perish: a systematic review ofinterventions to increase academic publication rates. High Educ Res Dev2006, 25(1):19–35.31. 
Sampson M, McGowan J, Cogo E, Grimshaw J, Moher D, Lefebvre C: Anevidence-based practice guideline for the peer review of electronicsearch strategies. J Clin Epidemiol 2009, 62(9):944–952.32. Brown A, Weiner E: Supermanaging: How to Harness Change for Personal andOrganizational Success. New York: McGraw-Hill; 1985.33. Porterfield D, Hinnant L, Kane H, Horne J, McAleer K, Roussel A: Linkagesbetween clinical practices and community organizations for prevention:a literature review and environmental scan. Am J Preventive Medicine2012, 42(6):S163–S171.34. Reuters T: Reference Manager. New York: Thomson Reuters; 2008.35. Higgins JPT, Altman DG: Assessing risk of bias in included studies. InCochrane Handbook for Systematic Reviews of Interventions. Version 5.0.2.Edited by Higgins JPT, Green S. Chichester: John Wiley & Sons;2008:187–242.36. Turner L, Moher D, Shamseer L, Weeks L, Peters J, Plint A, Altman DG,Schulz KF: The influence of CONSORT on the quality of reporting ofrandomised controlled trials: an updated review. Trials 2011,12(Suppl 1):A47.37. Shamseer L, Stevens A, Skidmore B, Turner L, Altman DG, Hirst A, Hoey J,Palepu A, Simera I, Schulz K, Moher D: Does journal endorsement ofreporting guidelines influence the completeness of reporting of healthresearch? A systematic review protocol. Systematic Reviews 2012, 1:24.38. Deeks JJ, Higgins JPT, Altman DG: Analysing data and undertaking meta-analyses. In Cochrane Handbook for Systematic Reviews of Interventions.Version 5.1.0. Edited by Higgins JPT, Green S. Chichester: John Wiley & Sons;2011.39. The Cochrane Collaboration: Review Manager (RevMan) [Computer program].Version 5.1. Copenhagen: The Nordic Cochrane Centre; 2011.40. 
doi:10.1186/2046-4053-2-41

Cite this article as: Galipeau et al.: Systematic review of the effectiveness of training programs in writing for scholarly publication, journal editing, and manuscript peer review (protocol). Systematic Reviews 2013, 2:41.

