UBC Faculty Research and Publications

An expanded evaluation of protein function prediction methods shows an improvement in accuracy Jiang, Yuxiang; Oron, Tal Ronnen; Clark, Wyatt T.; Bankapur, Asma R.; D’Andrea, Daniel; Lepore, Rosalba; Funk, Christopher S.; Kahanda, Indika; Verspoor, Karin M.; Ben-Hur, Asa; Koo, Da Chen Emily; Penfold-Brown, Duncan; Shasha, Dennis; Youngs, Noah; Bonneau, Richard; Lin, Alexandra; Sahraeian, Sayed M. E.; Martelli, Pier Luigi; Profiti, Giuseppe; Casadio, Rita; Cao, Renzhi; Zhong, Zhaolong; Cheng, Jianlin; Altenhoff, Adrian; Skunca, Nives; Dessimoz, Christophe; Dogan, Tunca; Hakala, Kai; Kaewphan, Suwisa; Mehryary, Farrokh; Salakoski, Tapio; Ginter, Filip; Fang, Hai; Smithers, Ben; Oates, Matt; Gough, Julian; Törönen, Petri; Koskinen, Patrik; Holm, Liisa; Chen, Ching-Tai; Hsu, Wen-Lian; Bryson, Kevin; Cozzetto, Domenico; Minneci, Federico; Jones, David T.; Chapman, Samuel; BKC, Dukka; Khan, Ishita K.; Kihara, Daisuke; Ofer, Dan; Rappoport, Nadav; Stern, Amos; Cibrian-Uhalte, Elena; Denny, Paul; Foulger, Rebecca E.; Hieta, Reija; Legge, Duncan; Lovering, Ruth C.; Magrane, Michele; Melidoni, Anna N.; Mutowo-Meullenet, Prudence; Pichler, Klemens; Shypitsyna, Aleksandra; Li, Biao; Zakeri, Pooya; ElShal, Sarah; Tranchevent, Léon-Charles; Das, Sayoni; Dawson, Natalie L.; Lee, David; Lees, Jonathan G.; Sillitoe, Ian; Bhat, Prajwal; Nepusz, Tamás; Romero, Alfonso E.; Sasidharan, Rajkumar; Yang, Haixuan; Paccanaro, Alberto; Gillis, Jesse; Sedeño-Cortés, Adriana E.; Pavlidis, Paul; Feng, Shou; Cejuela, Juan M.; Goldberg, Tatyana; Hamp, Tobias; Richter, Lothar; Salamov, Asaf; Gabaldon, Toni; Marcet-Houben, Marina; Supek, Fran; Gong, Qingtian; Ning, Wei; Zhou, Yuanpeng; Tian, Weidong; Falda, Marco; Fontana, Paolo; Lavezzo, Enrico; Toppo, Stefano; Ferrari, Carlo; Giollo, Manuel; Piovesan, Damiano; Tosatto, Silvio C.E.; del Pozo, Angela; Fernández, José M.; Maietta, Paolo; Valencia, Alfonso; Tress, Michael L.; Benso, Alfredo; Di Carlo, Stefano; Politano, Gianfranco; Savino, 
Alessandro; Rehman, Hafeez Ur; Re, Matteo; Mesiti, Marco; Valentini, Giorgio; Bargsten, Joachim W.; van Dijk, Aalt D. J.; Gemovic, Branislava; Glisic, Sanja; Perovic, Vladmir; Veljkovic, Veljko; Veljkovic, Nevena; Almeida-e-Silva, Danillo C.; Vencio, Ricardo Z. N.; Sharan, Malvika; Vogel, Jörg; Kansakar, Lakesh; Zhang, Shanshan; Vucetic, Slobodan; Wang, Zheng; Sternberg, Michael J. E.; Wass, Mark N.; Huntley, Rachael P.; Martin, Maria J.; O’Donovan, Claire; Robinson, Peter N.; Moreau, Yves; Tramontano, Anna; Babbitt, Patricia C.; Brenner, Steven E.; Linial, Michal; Orengo, Christine A.; Rost, Burkhard; Greene, Casey S.; Mooney, Sean D.; Friedberg, Iddo; Radivojac, Predrag Sep 7, 2016


Full Text

Jiang et al. Genome Biology (2016) 17:184. DOI: 10.1186/s13059-016-1037-6. RESEARCH, Open Access.

An expanded evaluation of protein function prediction methods shows an improvement in accuracy

Yuxiang Jiang, Tal Ronnen Oron, Wyatt T. Clark, et al. (full author list as in the repository record above; the full list of author affiliations is available at the end of the article)

*Correspondence: idoerg@iastate.edu; predrag@indiana.edu. Iddo Friedberg: Department of Veterinary Microbiology and Preventive Medicine, Iowa State University, Ames, IA, USA. Predrag Radivojac: Department of Computer Science and Informatics, Indiana University, Bloomington, IN, USA.

© 2016 The Author(s). Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Abstract

Background: A major bottleneck in our understanding of the molecular underpinnings of life is the assignment of function to proteins. While molecular experiments provide the most reliable annotation of proteins, their relatively low throughput and restricted purview have led to an increasing role for computational function prediction. However, assessing methods for protein function prediction and tracking progress in the field remain challenging.

Results: We conducted the second critical assessment of functional annotation (CAFA), a timed challenge to assess computational methods that automatically assign protein function. We evaluated 126 methods from 56 research groups for their ability to predict biological functions using Gene Ontology and gene-disease associations using Human Phenotype Ontology on a set of 3681 proteins from 18 species. CAFA2 featured expanded analysis compared with CAFA1, with regard to data set size, variety, and assessment metrics. To review progress in the field, the analysis compared the best methods from CAFA1 to those of CAFA2.

Conclusions: The top-performing methods in CAFA2 outperformed those from CAFA1. This increased accuracy can be attributed to a combination of the growing number of experimental annotations and improved methods for function prediction. The assessment also revealed that the definition of top-performing algorithms is ontology-specific, that different performance metrics can be used to probe the nature of accurate predictions, and that predictions differ in diversity across the biological process and human phenotype ontologies.
While there was methodological improvement between CAFA1 and CAFA2, the interpretation of results and the usefulness of individual methods remain context-dependent.

Keywords: Protein function prediction, Disease gene prioritization

Background

Accurate computer-generated functional annotations of biological macromolecules allow biologists to rapidly generate testable hypotheses about the roles that newly identified proteins play in processes or pathways. They also allow them to reason about new species based on the observed functional repertoire associated with their genes. However, protein function prediction is an open research problem and it is not yet clear which tools are best for predicting function. At the same time, critically evaluating these tools and understanding the landscape of the function prediction field is a challenging task that extends beyond the capabilities of a single lab.

Assessments and challenges have a successful history of driving the development of new methods in the life sciences by independently assessing performance and providing discussion forums for researchers [1]. In 2010–2011, we organized the first critical assessment of functional annotation (CAFA) challenge to evaluate methods for the automated annotation of protein function and to assess the progress in method development in the first decade of the 2000s [2]. The challenge used a time-delayed evaluation of predictions for a large set of target proteins without any experimental functional annotation. A subset of these target proteins accumulated experimental annotations after the predictions were submitted and was used to estimate the performance accuracy. The estimated performance was subsequently used to draw conclusions about the status of the field.

The first CAFA (CAFA1) showed that advanced methods for the prediction of Gene Ontology (GO) terms [3] significantly outperformed a straightforward application of function transfer by local sequence similarity.
In addition to validating investment in the development of new methods, CAFA1 also showed that using machine learning to integrate multiple sequence hits and multiple data types tends to perform well. However, CAFA1 also identified challenges for experimentalists, biocurators, and computational biologists. These challenges include the choice of experimental techniques and proteins in functional studies and curation, the structure and status of biomedical ontologies, the lack of comprehensive systems data that are necessary for accurate prediction of complex biological concepts, as well as limitations of evaluation metrics [2, 4–7]. Overall, by establishing the state of the art in the field and identifying challenges, CAFA1 set the stage for quantifying progress in the field of protein function prediction over time.

In this study, we report on the major outcomes of the second CAFA experiment, CAFA2, which was organized and conducted in 2013–2014, exactly 3 years after the original experiment. We were motivated to evaluate the progress in method development for function prediction as well as to expand the experiment to new ontologies. The CAFA2 experiment also greatly expanded the performance analysis to new types of evaluation and included new performance metrics. By surveying the state of the field, we aim to help all direct and indirect users of computational function prediction software develop intuition for the quality, robustness, and reliability of these predictions.

Methods

Experiment overview

The time line for the second CAFA experiment followed that of the first experiment and is illustrated in Fig. 1. Briefly, CAFA2 was announced in July 2013 and officially started in September 2013, when 100,816 target sequences from 27 species were made available to the community. Teams were required to submit prediction scores within the (0, 1] range for each protein–term pair they chose to predict on.
The submission deadline for depositing these predictions was set for January 2014 (time point t0). We then waited until September 2014 (time point t1) for new experimental annotations to accumulate on the target proteins and assessed the performance of the prediction methods. We will refer to the set of all experimentally annotated proteins available at t0 as the training set, and to the subset of target proteins that accumulated experimental annotations during (t0, t1] and were used for evaluation as the benchmark set. It is important to note that the benchmark proteins and the resulting analysis vary based on the selection of time point t1. For example, a preliminary analysis of the CAFA2 experiment was provided during the Automated Function Prediction Special Interest Group (AFP-SIG) meeting at the Intelligent Systems for Molecular Biology (ISMB) conference in July 2014.

The participating methods were evaluated according to their ability to predict terms in GO [3] and the Human Phenotype Ontology (HPO) [8]. In contrast with CAFA1, where the evaluation was carried out only for the Molecular Function Ontology (MFO) and Biological Process Ontology (BPO), in CAFA2 we also assessed the performance for the prediction of Cellular Component Ontology (CCO) terms in GO. The set of human proteins was further used to evaluate methods according to their ability to associate these proteins with disease terms from HPO, which included all subclasses of the term HP:0000118, "Phenotypic abnormality".

In total, 56 groups submitting 126 methods participated in CAFA2. From those, 125 methods made valid predictions on a sufficient number of sequences.
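The time-delayed split between the training and benchmark sets can be sketched in a few lines. This is a minimal illustration, not the CAFA pipeline itself; the dictionary layout (protein ID mapped to its set of experimentally annotated terms at each time point) is a hypothetical stand-in for the annotation snapshots.

```python
# Hypothetical sketch: deriving training and benchmark sets from two
# annotation snapshots. `annotations_t0` / `annotations_t1` map protein IDs
# to sets of experimentally supported terms at times t0 and t1.
def split_training_and_benchmarks(annotations_t0, annotations_t1, targets):
    # Training set: everything experimentally annotated at the deadline t0.
    training = {p for p, terms in annotations_t0.items() if terms}
    # Benchmarks: targets with no experimental terms at t0 that
    # accumulated at least one experimental term during (t0, t1].
    benchmarks = {
        p for p in targets
        if not annotations_t0.get(p) and annotations_t1.get(p)
    }
    return training, benchmarks

annotations_t0 = {"P1": {"GO:0003674"}, "P2": set(), "P3": set()}
annotations_t1 = {"P1": {"GO:0003674"}, "P2": {"GO:0016301"}, "P3": set()}
training, benchmarks = split_training_and_benchmarks(
    annotations_t0, annotations_t1, targets={"P1", "P2", "P3"})
# P1 was annotated at t0 (training); P2 gained its first annotation by t1
# (benchmark); P3 never acquired one (neither set).
```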
Further, 121 methods submitted predictions for at least one of the GO benchmarks, while 30 methods participated in the disease gene prediction tasks using HPO.

Evaluation

The CAFA2 experiment expanded the assessment of computational function prediction compared with CAFA1. This includes the increased number of targets, benchmarks, ontologies, and method comparison metrics.

We distinguish between two major types of method evaluation. The first, protein-centric evaluation, assesses the performance accuracy of methods that predict all ontological terms associated with a given protein sequence. The second type, term-centric evaluation, assesses the performance accuracy of methods that predict whether a single ontology term of interest is associated with a given protein sequence [2]. The protein-centric evaluation can be viewed as a multi-label or structured-output learning problem of predicting a set of terms or a directed acyclic graph (a subgraph of the ontology) for a given protein. Because the ontologies contain many terms, the output space in this setting is extremely large and the evaluation metrics must incorporate similarity functions between groups of mutually interdependent terms (directed acyclic graphs). In contrast, the term-centric evaluation is an example of binary classification, where a given ontology term is assigned (or not) to an input protein sequence. These methods are particularly common in disease gene prioritization [9]. Put otherwise, a protein-centric evaluation considers a ranking of ontology terms for a given protein, whereas a term-centric evaluation considers a ranking of protein sequences for a given ontology term.

Fig. 1 Time line for the CAFA2 experiment

Both types of evaluation have merits in assessing performance. This is partly due to the statistical dependency between ontology terms, the statistical dependency among protein sequences, and also the incomplete and
biased nature of the experimental annotation of protein function [6]. In CAFA2, we provide both types of evaluation, but we emphasize the protein-centric scenario for easier comparison with CAFA1. We also draw important conclusions regarding method assessment in these two scenarios.

No-knowledge and limited-knowledge benchmark sets

In CAFA1, a protein was eligible to be in the benchmark set if it had not had any experimentally verified annotations in any of the GO ontologies at time t0 but accumulated at least one functional term with an experimental evidence code between t0 and t1; we refer to such benchmark proteins as no-knowledge benchmarks. In CAFA2, we introduced proteins with limited knowledge, which are those that had been experimentally annotated in one or two GO ontologies (but not in all three) at time t0. For example, for the performance evaluation in MFO, a protein without any annotation in MFO prior to the submission deadline was allowed to have experimental annotations in BPO and CCO.

During the growth phase, the no-knowledge targets that acquired experimental annotations in one or more ontologies became benchmarks in those ontologies. The limited-knowledge targets that acquired additional annotations became benchmarks only for those ontologies in which they had no prior experimental annotations. The reason for using limited-knowledge targets was to identify whether the correlations between experimental annotations across ontologies can be exploited to improve function prediction.

The selection of benchmark proteins for evaluating HPO-term predictors was separated from the GO analyses. We created only a no-knowledge benchmark set in the HPO category.

Partial and full evaluation modes

Many function prediction methods apply only to certain types of proteins, such as proteins for which 3D structural data are available, proteins from certain taxa, or proteins with specific subcellular localizations.
To accommodate these methods, CAFA2 provided predictors with the option of choosing a subset of the targets to predict on, as long as they computationally annotated at least 5,000 targets, of which at least ten accumulated experimental terms. We refer to the assessment mode in which the predictions were evaluated only on those benchmarks for which a model made at least one prediction at any threshold as the partial evaluation mode. In contrast, the full evaluation mode corresponds to the same type of assessment performed in CAFA1, where all benchmark proteins were used for the evaluation and methods were penalized for not making predictions.

In most cases, for each benchmark category, we have two types of benchmarks, no-knowledge and limited-knowledge, and two modes of evaluation, full mode and partial mode. The exceptions are the HPO categories, which only have no-knowledge benchmarks. The full mode is appropriate for comparisons of general-purpose methods designed to make predictions on any protein, while the partial mode gives an idea of how well each method performs on a self-selected subset of targets.

Evaluation metrics

Precision–recall curves and remaining uncertainty–misinformation curves were used as the two chief metrics in the protein-centric mode [10]. We also provide a single real-valued scalar summary for each type of curve to compare methods; however, we note that any choice of a single point on those curves may not match the intended application objectives for a given algorithm.
Thus, a careful understanding of the evaluation metrics used in CAFA is necessary to properly interpret the results.

Precision (pr), recall (rc), and the resulting Fmax are defined as

\[ \mathrm{pr}(\tau) = \frac{1}{m(\tau)} \sum_{i=1}^{m(\tau)} \frac{\sum_f \mathbf{1}\big(f \in P_i(\tau) \wedge f \in T_i\big)}{\sum_f \mathbf{1}\big(f \in P_i(\tau)\big)}, \]

\[ \mathrm{rc}(\tau) = \frac{1}{n_e} \sum_{i=1}^{n_e} \frac{\sum_f \mathbf{1}\big(f \in P_i(\tau) \wedge f \in T_i\big)}{\sum_f \mathbf{1}\big(f \in T_i\big)}, \]

\[ F_{\max} = \max_\tau \left\{ \frac{2 \cdot \mathrm{pr}(\tau) \cdot \mathrm{rc}(\tau)}{\mathrm{pr}(\tau) + \mathrm{rc}(\tau)} \right\}, \]

where P_i(τ) denotes the set of terms that have predicted scores greater than or equal to τ for a protein sequence i, T_i denotes the corresponding ground-truth set of terms for that sequence, m(τ) is the number of sequences with at least one predicted score greater than or equal to τ, 1(·) is an indicator function, and n_e is the number of targets used in a particular mode of evaluation. In the full evaluation mode n_e = n, the number of benchmark proteins, whereas in the partial evaluation mode n_e = m(0), i.e., the number of proteins that were chosen to be predicted on by the particular method. For each method, we refer to m(0)/n as the coverage because it provides the fraction of benchmark proteins on which the method made any predictions.

The remaining uncertainty (ru), misinformation (mi), and the resulting minimum semantic distance (Smin) are defined as

\[ \mathrm{ru}(\tau) = \frac{1}{n_e} \sum_{i=1}^{n_e} \sum_f ic(f) \cdot \mathbf{1}\big(f \notin P_i(\tau) \wedge f \in T_i\big), \]

\[ \mathrm{mi}(\tau) = \frac{1}{n_e} \sum_{i=1}^{n_e} \sum_f ic(f) \cdot \mathbf{1}\big(f \in P_i(\tau) \wedge f \notin T_i\big), \]

\[ S_{\min} = \min_\tau \left\{ \sqrt{\mathrm{ru}(\tau)^2 + \mathrm{mi}(\tau)^2} \right\}, \]

where ic(f) is the information content of the ontology term f [10]. It is estimated in a maximum likelihood manner as the negative binary logarithm of the conditional probability that the term f is present in a protein's annotation given that all its parent terms are also present. Note that here, n_e = n in the full evaluation mode and n_e = m(0) in the partial evaluation mode, as applies to both ru and mi.

In addition to the main metrics, we used two secondary metrics.
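The full-mode Fmax computation above can be sketched compactly. This is a minimal illustration under an assumed in-memory layout (`pred` maps each protein to a dict of term scores, `truth` maps each protein to its ground-truth term set), not the official CAFA assessment code.

```python
# Minimal sketch of the full-mode Fmax defined above. Hypothetical layout:
# `pred`: protein -> {term: score}; `truth`: protein -> set of true terms.
# Thresholds step from 0.01 to 1.00, as in CAFA2.
def f_max(pred, truth):
    best = 0.0
    n = len(truth)  # full evaluation mode: ne = n (all benchmark proteins)
    for step in range(1, 101):
        tau = step / 100.0
        prec_sum, rec_sum, m_tau = 0.0, 0.0, 0
        for protein, true_terms in truth.items():
            p_tau = {t for t, s in pred.get(protein, {}).items() if s >= tau}
            tp = len(p_tau & true_terms)
            if p_tau:                    # counts toward m(tau)
                m_tau += 1
                prec_sum += tp / len(p_tau)
            rec_sum += tp / len(true_terms)
        if m_tau == 0:
            continue
        pr, rc = prec_sum / m_tau, rec_sum / n
        if pr + rc > 0:
            best = max(best, 2 * pr * rc / (pr + rc))
    return best

truth = {"P1": {"A", "B"}, "P2": {"C"}}
pred = {"P1": {"A": 0.9, "B": 0.4}, "P2": {"C": 0.8}}
```

For the partial evaluation mode, the only change would be dividing the recall sum by m(0), the number of proteins the method chose to predict on, instead of n.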
Those secondary metrics were a weighted version of the precision–recall curves and a version of the remaining uncertainty–misinformation curves normalized to the [0, 1] interval. These metrics and the corresponding evaluation results are shown in Additional file 1.

For the term-centric evaluation we used the area under the receiver operating characteristic (ROC) curve (AUC). The AUCs were calculated for all terms that had acquired at least ten positively annotated sequences, whereas the remaining benchmarks were used as negatives. The term-centric evaluation was used both for ranking models and to differentiate well and poorly predictable terms. The performance of each model on each term is provided in Additional file 1.

As we required all methods to keep two significant figures for prediction scores, the threshold τ in all metrics used in this study was varied from 0.01 to 1.00 with a step size of 0.01.

Data sets

Protein function annotations for the GO assessment were extracted, as a union, from three major protein databases that are available in the public domain: Swiss-Prot [11], UniProt-GOA [12], and the data from the GO consortium web site [3]. We used evidence codes EXP, IDA, IPI, IMP, IGI, IEP, TAS, and IC to build the benchmark and ground-truth sets. Annotations for the HPO assessment were downloaded from the HPO database [8].

Figure 2 summarizes the benchmarks we used in this study. Figure 2a shows the benchmark sizes for each of the ontologies and compares these numbers to CAFA1. All species that have at least 15 proteins in any of the benchmark categories are listed in Fig. 2b.

Comparison between CAFA1 and CAFA2 methods

We compared the results from CAFA1 and CAFA2 using a benchmark set that we created from CAFA1 targets and CAFA2 targets. More precisely, we used the stored predictions of the target proteins from CAFA1 and compared them with the new predictions from CAFA2 on the overlapping set of CAFA2 benchmarks and CAFA1 targets

Fig. 2 CAFA2 benchmark breakdown.
(a) The benchmark size for each of the four ontologies. (b) Breakdown of benchmarks of both types over 11 species (those with no fewer than 15 proteins), sorted according to the total number of benchmark proteins. For both panels, dark colors (blue, red, and yellow) correspond to no-knowledge (NK) types, while their light-color counterparts correspond to limited-knowledge (LK) types. The distributions of information content corresponding to the benchmark sets are shown in Additional file 1. The sizes of the CAFA1 benchmarks are shown in gray. BPO Biological Process Ontology, CCO Cellular Component Ontology, HPO Human Phenotype Ontology, LK limited-knowledge, MFO Molecular Function Ontology, NK no-knowledge

(A sequence had to be a no-knowledge target in both experiments to be eligible for this evaluation.) For this analysis only, we used an artificial GO version obtained by taking the intersection of the two GO snapshots (versions from January 2011 and June 2013) so as to mitigate the influence of ontology changes. We thus collected 357 benchmark proteins for MFO comparisons and 699 for BPO comparisons. The two baseline methods were trained on the respective Swiss-Prot annotations for both ontologies so that they serve as controls for database change.
In particular, SwissProt2011 (for CAFA1) contained 29,330 and 31,282 proteins for MFO and BPO, while SwissProt2014 (for CAFA2) contained 26,907 and 41,959 proteins for the two ontologies.

To conduct a head-to-head analysis between any two methods, we generated B = 10,000 bootstrap samples and let methods compete on each such benchmark set. The performance improvement δ from CAFA1 to CAFA2 was calculated as

\[ \delta(m_2, m_1) = \frac{1}{B} \sum_{b=1}^{B} F^{(b)}_{\max}(m_2) - \frac{1}{B} \sum_{b=1}^{B} F^{(b)}_{\max}(m_1), \]

where m_1 and m_2 stand for methods from CAFA1 and CAFA2, respectively, and F^{(b)}_{\max}(·) represents the Fmax of a method evaluated on the b-th bootstrapped benchmark set.

Baseline models

We built two baseline methods, Naïve and BLAST, and compared them with all participating methods. The Naïve method simply predicts each term with the frequency at which the term is annotated in a database [13]. BLAST was based on search results using the Basic Local Alignment Search Tool (BLAST) software against the training database [14]; a term's predicted score is the highest local-alignment sequence identity among all BLAST hits annotated with that term. Both of these methods were trained on the experimentally annotated proteins available in Swiss-Prot at time t0, except for HPO, where the two baseline models were trained using the annotations from the t0 release of the HPO.

Results and discussion

Top methods have improved since CAFA1

We conducted the second CAFA experiment 3 years after the first one. As our knowledge of protein function has increased since then, it was worthwhile to assess whether computational methods have also improved and, if so, to what extent. Therefore, to monitor progress over time, we revisit some of the top methods in CAFA1 and compare them with their successors.

For each benchmark set we carried out a bootstrap-based comparison between a pair of top-ranked methods (one from CAFA1 and another from CAFA2), as described in "Methods".
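The bootstrap comparison defined in "Methods" can be sketched as follows. This is a minimal illustration; `fmax_on` is a hypothetical stand-in for evaluating a method's Fmax on one resampled benchmark set, and accumulating the per-sample difference is algebraically equivalent to the two separate averages in the δ formula.

```python
import random

# Sketch of the bootstrap-based head-to-head comparison: resample the
# benchmark set B times (with replacement, same size) and average the
# difference in Fmax between the two methods.
def bootstrap_delta(fmax_on, m2, m1, benchmarks, B=10000, seed=0):
    rng = random.Random(seed)  # seeded for reproducibility
    total = 0.0
    for _ in range(B):
        sample = [rng.choice(benchmarks) for _ in benchmarks]
        total += fmax_on(m2, sample) - fmax_on(m1, sample)
    return total / B  # delta > 0 means the CAFA2 method m2 outperforms m1
```

With stand-in evaluators the sign and magnitude of δ behave as expected; in the real analysis each call would recompute Fmax on the resampled proteins.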
The average performance metric as well as the number of wins were recorded (in the case of identical performance, neither method was awarded a win). Figure 3 summarizes the results of this analysis. We use a color code from orange to blue to indicate the performance improvement δ from CAFA1 to CAFA2.

The selection of top methods for this study was based on their performance in each ontology on the entire benchmark sets. Panels B and C in Fig. 3 compare baseline methods trained on different data sets. We see no improvement for these baselines, except for BLAST on BPO, where it is slightly better to use the newer version of Swiss-Prot as the reference database for the search. On the other hand, all top methods in CAFA2 outperformed their counterparts in CAFA1. For predicting molecular functions, even though transferring functions from BLAST hits does not give better results, the top models still managed to perform better. It is possible that the annotations newly acquired since CAFA1 enhanced BLAST, which involves direct function transfer, and perhaps led to better performance of those downstream methods that rely on sequence alignments. However, this effect does not completely explain the extent of the performance improvement achieved by those methods. This is promising evidence that top methods from the community have improved since CAFA1 and that the improvements were not simply due to updates of curated databases.

Protein-centric evaluation

Protein-centric evaluation measures how accurately methods can assign functional terms to a protein. The protein-centric performance evaluation of the top-ten methods is shown in Figs. 4, 5, and 6. The 95 % confidence intervals were estimated using bootstrapping on the benchmark set with B = 10,000 iterations [15]. The results provide a broad insight into the state of the art. Predictors performed very differently across the four ontologies.
Various reasons contribute to this effect, including: (1) the topological properties of the ontology, such as its size, depth, and branching factor; (2) term predictability; for example, the BPO terms are considered to be more abstract in nature than the MFO and CCO terms; (3) the annotation status, such as the size of the training set at t0, the annotation depth of benchmark proteins, as well as various annotation biases [6].

In general, CAFA2 methods perform better at predicting MFO terms than terms from any other ontology. Top methods achieved Fmax scores around 0.6 and considerably surpassed the two baseline models. Maintaining the pattern from CAFA1, the performance accuracies in the BPO category were not as good as in the MFO category; the best-performing method scored slightly below 0.4.

Fig. 3 CAFA1 versus CAFA2 (top methods). A comparison in Fmax between the top-five CAFA1 models and the top-five CAFA2 models. Colored boxes encode the results such that (1) the colors indicate the margins of a CAFA2 method over a CAFA1 method in Fmax and (2) the numbers in the boxes indicate the percentage of wins. For both the Molecular Function Ontology (a) and Biological Process Ontology (b) results: (A) CAFA1 top-five models (rows, from top to bottom) against CAFA2 top-five models (columns, from left to right); (B) comparison of Naïve baselines trained respectively on SwissProt2011 and SwissProt2014; (C) comparison of BLAST baselines trained on SwissProt2011 and SwissProt2014

For the two newly added ontologies in CAFA2, we observed that the top predictors performed no better than the Naïve method under Fmax, whereas they slightly outperformed the Naïve method under Smin in CCO. One reason for the competitive performance of the Naïve method in the CCO category is that a small number of

Fig. 4 Overall evaluation using the maximum F measure, Fmax.
Evaluation was carried out on no-knowledge benchmark sequences in the full mode. The coverage of each method is shown within its performance bar. A perfect predictor would be characterized by Fmax = 1. Confidence intervals (95 %) were determined using bootstrapping with 10,000 iterations on the set of benchmark sequences. For cases in which a principal investigator participated in multiple teams, the results of only the best-scoring method are presented. Details for all methods are provided in Additional file 1.

relatively general terms are frequently used, and those relative frequencies do not diffuse quickly enough with the depth of the graph. For instance, the annotation frequencies of "organelle" (GO:0043226, level 2), "intracellular part" (GO:0044424, level 3), and "cytoplasm" (GO:0005737, level 4) are all above the best threshold for the Naïve method (τ_optimal = 0.32). Correctly predicting these terms increases the number of true positives and thus boosts the performance of the Naïve method under the Fmax evaluation. However, once the less informative terms are down-weighted (using the Smin measure), the Naïve method becomes significantly penalized and its performance degrades. Another reason for the comparatively good performance of Naïve is that the benchmark proteins were annotated with more general terms than the (training) proteins previously deposited in the UniProt database. This effect was most prominent in the CCO (Additional file 1: Figure S2) and has thus artificially boosted the performance of the Naïve method. The weighted Fmax and normalized Smin evaluations can be found in Additional file 1.

Interestingly, generally shallower annotations of benchmark proteins do not seem to be the major reason for the observed performance in the HPO category. One possibility for the observed performance is that, unlike GO annotations, HPO annotations are difficult to transfer from other species. Another possibility is the sparsity of experimental annotations.
The current number of experimentally annotated proteins in HPO is 4794, i.e., 0.5 proteins per HPO term, which is at least an order of magnitude less than for other ontologies. Finally, the relatively high frequency of general terms may have also contributed to the good performance of Naïve. We originally hypothesized that a possible additional explanation for this effect might be that the average number of HPO terms associated with a human protein is considerably larger than in GO; i.e., the mean number of annotations per protein in HPO is 84, while for MFO, BPO, and CCO, the mean number of annotations per protein is 10, 39, and 14, respectively. However, we do not observe this effect in other ontologies when the benchmark proteins are split into those with a low or high number of terms.

Fig. 5 Precision–recall curves for top-performing methods. Evaluation was carried out on no-knowledge benchmark sequences in the full mode. A perfect predictor would be characterized with Fmax = 1, which corresponds to the point (1, 1) in the precision–recall plane. For cases in which a principal investigator participated in multiple teams, the results of only the best-scoring method are presented.

Fig. 6 Overall evaluation using the minimum semantic distance, Smin. Evaluation was carried out on no-knowledge benchmark sequences in the full mode. The coverage of each method is shown within its performance bar. A perfect predictor would be characterized with Smin = 0. Confidence intervals (95 %) were determined using bootstrapping with 10,000 iterations on the set of benchmark sequences. For cases in which a principal investigator participated in multiple teams, the results of only the best-scoring method are presented. Details for all methods are provided in Additional file 1.
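The protein-centric Fmax evaluation and the frequency-based Naïve baseline discussed above can be illustrated with a minimal sketch. This is hypothetical toy code, not the CAFA assessment software; the function names are illustrative, and the Fmax convention assumed here (precision averaged only over proteins with at least one prediction at the threshold, recall averaged over all benchmark proteins) follows common CAFA-style practice.

```python
from collections import Counter

def naive_scores(training_annotations):
    """Naive baseline sketch: score every term by its relative
    annotation frequency in the training set."""
    counts = Counter(t for terms in training_annotations for t in terms)
    n = len(training_annotations)
    return {term: c / n for term, c in counts.items()}

def fmax(predictions, truths, thresholds=None):
    """Fmax sketch: best harmonic mean of precision and recall over
    all decision thresholds. Precision is averaged only over proteins
    with at least one prediction at the threshold (assumed convention);
    recall is averaged over all benchmark proteins."""
    thresholds = thresholds or [i / 100 for i in range(101)]
    best = 0.0
    for tau in thresholds:
        precisions, recalls = [], []
        for scores, truth in zip(predictions, truths):
            pred = {t for t, s in scores.items() if s >= tau}
            tp = len(pred & truth)
            if pred:
                precisions.append(tp / len(pred))
            recalls.append(tp / len(truth))
        if precisions:
            pr = sum(precisions) / len(precisions)
            rc = sum(recalls) / len(recalls)
            if pr + rc > 0:
                best = max(best, 2 * pr * rc / (pr + rc))
    return best
```

On a toy training set, a general term annotated in every protein receives a Naïve score of 1.0 and is therefore predicted at any threshold, which is exactly how a few frequent, general CCO terms can inflate the baseline's Fmax while being penalized once terms are weighted by information content under Smin.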
Overall, successfully predicting the HPO terms in the protein-centric mode is a difficult problem and further effort will be required to fully characterize the performance.

Term-centric evaluation

The protein-centric view, despite its power in showing the strengths of a predictor, does not gauge a predictor's performance for a specific function. In a term-centric evaluation, we assess the ability of each method to identify new proteins that have a particular function, participate in a process, are localized to a component, or affect a human phenotype. To assess this term-wise accuracy, we calculated AUCs in the prediction of individual terms. Averaging the AUC values over terms provides a metric for ranking predictors, whereas averaging the performance of predictors for each term provides insight into how well that term can be predicted computationally by the community. Figure 7 shows the performance evaluation where the AUCs for each method were averaged over all terms for which at least ten positive sequences were available. Proteins without predictions were counted as predictions with a score of 0. As shown in Figs. 4, 5, and 6, correctly predicting CCO and HPO terms for a protein might not be an easy task according to the protein-centric results. However, the overall poor performance could also result from the dominance of poorly predictable terms. Therefore, a term-centric view can help differentiate prediction quality across terms.

Fig. 7 Overall evaluation using the averaged AUC over terms with no less than ten positive annotations. The evaluation was carried out on no-knowledge benchmark sequences in the full mode. Error bars indicate the standard error in averaging AUC over terms for each method. For cases in which a principal investigator participated in multiple teams, the results of only the best-scoring method are presented. Details for all methods are provided in Additional file 1. AUC receiver operating characteristic curve.
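The term-centric averaging just described can be sketched as follows. This is a hypothetical minimal illustration, not the assessment code: the AUC is computed via the rank-based Mann–Whitney statistic, and names such as `term_centric_auc` are made up for this example.

```python
def auc(pos_scores, neg_scores):
    """Rank-based AUC: the probability that a randomly chosen positive
    outscores a randomly chosen negative (ties count as half a win)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

def term_centric_auc(scores_by_protein, annotations, terms, min_pos=10):
    """Average per-term AUC over terms with at least `min_pos` positive
    proteins; a protein without a prediction contributes a score of 0,
    mirroring the convention described in the text."""
    aucs = []
    for term in terms:
        pos, neg = [], []
        for protein, truth in annotations.items():
            s = scores_by_protein.get(protein, {}).get(term, 0.0)
            (pos if term in truth else neg).append(s)
        if len(pos) >= min_pos and neg:
            aucs.append(auc(pos, neg))
    return sum(aucs) / len(aucs) if aucs else float("nan")
```

Averaging this quantity over methods for a fixed term, rather than over terms for a fixed method, yields the complementary per-term view shown in Fig. 8.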
As shown in Fig. 8, most of the terms in HPO obtain an AUC greater than the Naïve model, with some terms on average achieving reasonably good AUCs around 0.7. Depending on the training data available for participating methods, well-predicted phenotype terms range from mildly specific ones such as “Lymphadenopathy” and “Thrombophlebitis” to general ones such as “Abnormality of the Skin Physiology”.

Performance on various categories of benchmarks

Easy versus difficult benchmarks

As in CAFA1, the no-knowledge GO benchmarks were divided into easy versus difficult categories based on their maximal global sequence identity with proteins in the training set. Since the distribution of sequence identities roughly forms a bimodal shape (Additional file 1), a cutoff of 60 % was manually chosen to define the two categories. The same cutoff was used in CAFA1. Unsurprisingly, across all three ontologies, the performance of the BLAST model was substantially impacted for the difficult category because of the lack of high sequence identity homologs and, as a result, transferring annotations was relatively unreliable. However, we also observed that most top methods were insensitive to the types of benchmarks, which provides us with encouraging evidence that state-of-the-art protein function predictors can successfully combine multiple potentially unreliable hits, as well as multiple types of data, into a reliable prediction.

Fig. 8 Averaged AUC per term for Human Phenotype Ontology. a Terms are sorted based on AUC. The dashed red line indicates the performance of the Naïve method. b The top-ten accurately predicted terms without overlapping ancestors (except for the root). AUC receiver operating characteristic curve.

Species-specific categories

The benchmark proteins were split into even smaller categories for each species as long as the resulting category contained at least 15 sequences. However, because of space limitations, in Fig.
9 we show the breakdown results on only eukarya and prokarya benchmarks; the species-specific results are provided in Additional file 1. It is worth noting that the performance accuracies on the entire benchmark sets were dominated by the targets from eukarya due to their larger proportion in the benchmark set and annotation preferences. The eukarya benchmark rankings therefore coincide with the overall rankings, but the smaller categories typically showed different rankings and may be informative to more specialized research groups.

For all three GO ontologies, no-knowledge prokarya benchmark sequences collected over the annotation growth phase mostly (over 80 %) came from two species: Escherichia coli and Pseudomonas aeruginosa (for CCO, 21 out of 22 proteins were from E. coli). Thus, one should keep in mind that the prokarya benchmarks essentially reflect the performance on proteins from these two species. Methods predicting the MFO terms for prokaryotes performed slightly worse than those for eukaryotes. In addition, direct function transfer by homology for prokaryotes did not work well using this ontology. However, the performance was better using the other two ontologies, especially CCO. It is not very surprising that the top methods achieved good performance for E. coli, as it is a well-studied model organism.

Diversity of predictions

Evaluation of the top methods revealed that performance was often statistically indistinguishable between the best methods. This could result from all top methods making the same predictions, or from different prediction sets resulting in the same summarized performance. To assess this, we analyzed the extent to which methods generated similar predictions within each ontology. Specifically, we calculated the pairwise Pearson correlation between methods on a common set of gene-concept pairs and then visualized these similarities as networks (for BPO, see Fig.
10; for MFO, CCO, and HPO, see Additional file 1).

In MFO, where we observed the highest overall performance of prediction methods, eight of the ten top methods were in the largest connected component. In addition, we observed a high connectivity between methods, suggesting that the participating methods are leveraging similar sources of data in similar ways. Predictions for BPO showed a contrasting pattern. In this ontology, the largest connected component contained only two of the top-ten methods. The other top methods were contained in components made up of other methods produced by the same lab. This suggests that the approaches that participating groups have taken generate more diverse predictions for this ontology and that there are many different paths to a top-performing biological process prediction method. Results for HPO were more similar to those for BPO, while results for cellular component were more similar in structure to those for molecular function.

Taken together, these results suggest that ensemble approaches aiming to include independent sources of high-quality predictions may benefit from leveraging the data and techniques used by different research groups. Approaches that effectively weigh and integrate disparate methods may demonstrate more substantial improvements over existing methods in the process and phenotype ontologies, where current prediction approaches share less similarity.

Fig. 9 Performance evaluation using the maximum F measure, Fmax, on eukaryotic (left) versus prokaryotic (right) benchmark sequences. The evaluation was carried out on no-knowledge benchmark sequences in the full mode. The coverage of each method is shown within its performance bar. Confidence intervals (95 %) were determined using bootstrapping with 10,000 iterations on the set of benchmark sequences.
For cases in which a principal investigator participated in multiple teams, the results of only the best-scoring method are presented. Details for all methods are provided in Additional file 1.

Fig. 10 Similarity network of participating methods for BPO. Similarities are computed as Pearson's correlation coefficient between methods, with a 0.75 cutoff for illustration purposes. A unique color is assigned to all methods submitted under the same principal investigator. Not evaluated (organizers') methods are shown in triangles, while benchmark methods (Naïve and BLAST) are shown in squares. The top-ten methods are highlighted with enlarged nodes and circled in red. The edge width indicates the strength of similarity. Nodes are labeled with the name of the methods followed by “-team(model)” if multiple teams/models were submitted.

At the time that authors submitted predictions, we also asked them to select from a list of 30 keywords that best describe their methodology. We examined these author-assigned keywords for methods that ranked in the top ten to determine what approaches were used in currently high-performing methods (Additional file 1). Sequence alignment and machine-learning methods were in the top-three terms for all ontologies. For biological process, the other member of the top three is protein–protein interactions, while for cellular component and molecular function the third member is sequence properties. The broad sets of keywords among top-performing methods further suggest that these methods are diverse in their inputs and approach.

Case study: ADAM-TS12

To illustrate some of the challenges and accomplishments of CAFA, we provide an in-depth examination of the prediction of the functional terms of one protein, human ADAM-TS12 [16]. ADAMs (a disintegrin and metalloproteinase) are a family of secreted metallopeptidases featuring a pro-domain, a metalloproteinase, a disintegrin, a cysteine-rich epidermal growth-factor-like domain, and a transmembrane domain [17]. The ADAM-TS subfamily includes eight thrombospondin type-1 (TS-1) motifs; ADAM-TS12 is believed to play a role in fetal pulmonary development and may have a role as a tumor suppressor, specifically in the negative regulation of the hepatocyte growth factor receptor signaling pathway [18].

We did not observe any experimental annotation by the time submission was closed. Annotations were later deposited to all three GO ontologies during the growth phase of CAFA2. Therefore, ADAM-TS12 was considered a no-knowledge benchmark protein for our assessment in all GO ontologies. The total number of leaf terms to predict for biological process was 12; these nodes induced a directed acyclic annotation graph consisting of 89 nodes. In Fig. 11 we show the performance of the top-five methods in predicting the BPO terms that are experimentally verified to be associated with ADAM-TS12.

As can be seen, most methods correctly discovered non-leaf nodes with a moderate amount of information content. “Glycoprotein Catabolic Process”, “Cellular Response to Stimulus”, and “Proteolysis” were the best discovered GO terms by the top-five performers. The Paccanaro Lab (P) discovered several additional correct leaf terms. It is interesting to note that only BLAST successfully predicted “Negative regulation of signal transduction” whereas the other methods did not. The reason for this is that we reported a discovery only when the confidence score for a term was equal to or exceeded the method's Fmax threshold.
In this particular case, the Paccanaro Lab method did predict the term, but the confidence score was 0.01 below their Fmax threshold.

This example illustrates both the success and the difficulty of correctly predicting highly specific terms in BPO, especially with a protein that is involved in four distinct cellular processes: in this case, regulation of cellular growth, proteolysis, cellular response to various cytokines, and cell–matrix adhesion. Additionally, this example shows that the choices that need to be made when assessing method performance may cause some loss of information with respect to a method's actual performance. That is, the way we capture a method's performance in CAFA may not be exactly the same as the way a user may employ it. In this case, a user may choose to include lower confidence scores when running the Paccanaro Lab method, and include the term “Negative regulation of signal transduction” in the list of accepted predictions.

Conclusions

Accurately annotating the function of biological macromolecules is difficult and requires the concerted effort of experimental scientists, biocurators, and computational biologists. Though challenging, advances are valuable: accurate predictions allow biologists to rapidly generate testable hypotheses about how proteins fit into processes and pathways. We conducted the second CAFA challenge to assess the status of the computational function prediction of proteins and to quantify the progress in the field.

The field has moved forward

Three years ago, in CAFA1, we concluded that the top methods for function prediction outperform straightforward function transfer by homology. In CAFA2, we observe that the methods for function prediction have improved compared to those from CAFA1. As part of the CAFA1 experiment, we stored all predictions from all methods on 48,298 target proteins from 18 species. We compared those stored predictions to the newly deposited predictions from CAFA2 on the overlapping set of benchmark proteins and CAFA1 targets.
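Comparisons of this kind, including the win percentages of Fig. 3 and the 95 % bootstrap confidence intervals used throughout, rest on resampling the benchmark set with replacement. The following is a hypothetical sketch, not the assessment code; `fmax_on` stands in for any function that scores a method on a resampled benchmark set.

```python
import random

def bootstrap_win_rate(fmax_on, method_a, method_b, proteins,
                       n_iter=10000, seed=0):
    """Fraction of bootstrap resamples of the benchmark set on which
    method A attains a strictly higher score than method B
    (a Fig. 3-style win percentage, sketched)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_iter):
        # Resample the benchmark proteins with replacement.
        sample = [rng.choice(proteins) for _ in proteins]
        if fmax_on(method_a, sample) > fmax_on(method_b, sample):
            wins += 1
    return wins / n_iter
```

The same resampling loop yields confidence intervals by collecting each method's score per resample and taking the 2.5th and 97.5th percentiles.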
The head-to-head comparisons of the top-five CAFA1 methods against the top-five CAFA2 methods reveal that the top CAFA2 methods outperformed all top CAFA1 methods. Our parallel evaluation using an unchanged BLAST algorithm with data from 2011 and data from 2014 showed little difference, strongly suggesting that the improvements observed are due to methodological advances. The lessons from CAFA1 and the annual AFP-SIG meeting during the ISMB conference, where new developments are rapidly disseminated, may have contributed to this outcome [19].

Evaluation metrics

A universal performance assessment in protein function prediction is far from straightforward. Although various evaluation metrics have been proposed under the framework of multi-label and structured-output learning, the evaluation in this subfield also needs to be interpretable to a broad community of researchers as well as the public. To address this, we used several metrics in this study, as each provides useful insights and complements the others. Understanding the strengths and weaknesses of current metrics and developing better metrics remain important. One important observation with respect to metrics is that the protein-centric and term-centric views may give different perspectives on the same problem. For example, while in MFO and BPO we generally observe a positive correlation between the two, in CCO and HPO these different metrics may lead to entirely different interpretations of an experiment. Regardless of the underlying cause, as discussed in “Results and discussion”, it is clear that some ontological terms are predictable with high accuracy and can be reliably used in practice even in these ontologies. In the meantime, more effort will be needed to understand the problems associated with the statistical and computational aspects of method development.

Fig. 11 Case study on the human ADAM-TS12 gene. Biological process terms associated with the ADAM-TS12 gene in the union of the three databases by September 2014. The entire functional annotation of ADAM-TS12 consists of 89 terms, 28 of which are shown. Twelve terms, marked in green, are leaf terms. This directed acyclic graph was treated as ground truth in the CAFA2 assessment. Solid black lines provide direct “is a” or “part of” relationships between terms, while gray lines mark indirect relationships (that is, some terms were not drawn in this picture). Predicted terms of the top-five methods and two baseline methods were picked at their optimal Fmax threshold. Over-predicted terms are not shown.

Well-performing methods

We observe that participating methods usually specialize in one or a few categories of protein function prediction and have been developed with their own application objectives in mind. Therefore, the performance rankings of methods often change from one benchmark set to another. Complex factors influence the final ranking, including the selection of the ontology, the types of benchmark sets and evaluation, as well as the evaluation metrics, as discussed earlier. Most of our assessment results show that the performances of the top-performing methods are generally comparable to each other. It is worth noting that performance is usually better in predicting molecular function than the other ontologies.

Beyond simply showing diversity in inputs, our evaluation of prediction similarity revealed that many top-performing methods are reaching this status by generating distinct predictions, suggesting that there is additional room for continued performance improvement. Although a small group of methods could be considered as generally high performing, there is no single method that dominates over all benchmarks.
Taken together, these results highlight the potential for ensemble learning approaches in this domain.

We also observed that when provided with a chance to select a reliable set of predictions, the methods generally perform better (partial evaluation mode versus full evaluation mode). This outcome is encouraging; it suggests that method developers can predict where their methods are particularly accurate and target them to that space.

Our keyword analysis showed that machine-learning methods are widely used by successful approaches. Protein interactions were overrepresented in the best-performing methods for biological process prediction. This suggests that predicting membership in pathways and processes requires information on interacting partners in addition to a protein's sequence features.

Final notes

Automated functional annotation remains an exciting and challenging task, central to understanding genomic data, which in turn are central to biomedical research. Three years after CAFA1, the top methods from the community have shown encouraging progress. However, in terms of raw scores, there is still significant room for improvement in all ontologies, and particularly in BPO, CCO, and HPO. There is also a need to develop an experiment-driven, as opposed to curation-driven, component of the evaluation to address limitations of the term-centric evaluation. In future CAFA experiments, we will continue to monitor the performance over time and invite a broad range of computational biologists, computer scientists, statisticians, and others to address these engaging problems of concept annotation for biological macromolecules through CAFA.

CAFA2 significantly expanded the number of protein targets, the number of biomedical ontologies used for annotation, the number of analysis scenarios, as well as the metrics used for evaluation.
The results of the CAFA2 experiment detail the state of the art in protein function prediction, can guide the development of new concept annotation methods, and help molecular biologists assess the relative reliability of predictions. Understanding the function of biological macromolecules brings us closer to understanding life at the molecular level and improving human health.

Additional file

Additional file 1: A document containing a subset of CAFA2 analyses that are equivalent to those provided about the CAFA1 experiment in the CAFA1 supplement. (PDF 11100 kb)

Funding

We acknowledge the contributions of Maximilian Hecht, Alexander Grün, Julia Krumhoff, My Nguyen Ly, Jonathan Boidol, Rene Schoeffel, Yann Spöri, Jessika Binder, Christoph Hamm and Karolina Worf. This work was partially supported by the following grants: National Science Foundation grants DBI-1458477 (PR), DBI-1458443 (SDM), DBI-1458390 (CSG), DBI-1458359 (IF), IIS-1319551 (DK), DBI-1262189 (DK), and DBI-1149224 (JC); National Institutes of Health grants R01GM093123 (JC), R01GM097528 (DK), R01GM076990 (PP), R01GM071749 (SEB), R01LM009722 (SDM), and UL1TR000423 (SDM); the National Natural Science Foundation of China grants 3147124 (WT) and 91231116 (WT); the National Basic Research Program of China grant 2012CB316505 (WT); NSERC grant RGPIN 371348-11 (PP); FP7 infrastructure project TransPLANT Award 283496 (ADJvD); Microsoft Research/FAPESP grant 2009/53161-6 and FAPESP fellowship 2010/50491-1 (DCAeS); Biotechnology and Biological Sciences Research Council grants BB/L020505/1 (DTJ), BB/F020481/1 (MJES), BB/K004131/1 (AP), BB/F00964X/1 (AP), and BB/L018241/1 (CD); the Spanish Ministry of Economics and Competitiveness grant BIO2012-40205 (MT); KU Leuven CoE PFV/10/016 SymBioSys (YM); the Newton International Fellowship Scheme of the Royal Society grant NF080750 (TN). CSG was supported in part by the Gordon and Betty Moore Foundation's Data-Driven Discovery Initiative grant GBMF4552.
Computational resources were provided by CSC – IT Center for Science Ltd., Espoo, Finland (TS). This work was supported by the Academy of Finland (TS). RCL and ANM were supported by British Heart Foundation grant RG/13/5/30112. PD, RCL, and REF were supported by Parkinson's UK grant G-1307, the Alexander von Humboldt Foundation through the German Federal Ministry for Education and Research, Ernst Ludwig Ehrlich Studienwerk, and the Ministry of Education, Science and Technological Development of the Republic of Serbia grant 173001. This work was a Technology Development effort for ENIGMA – Ecosystems and Networks Integrated with Genes and Molecular Assemblies (http://enigma.lbl.gov), a Scientific Focus Area Program at Lawrence Berkeley National Laboratory, which is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Biological & Environmental Research grant DE-AC02-05CH11231. ENIGMA only covers the application of this work to microbial proteins. NSF DBI-0965616 and Australian Research Council grant DP150101550 (KMV). NSF DBI-0965768 (ABH). NIH T15 LM00945102 (training grant for CSF). FP7 FET grant MAESTRA ICT-2013-612944 and FP7 REGPOT grant InnoMol (FS). NIH R01 GM60595 (PCB). University of Padova grants CPDA138081/13 (ST) and GRIC13AAI9 (EL). Swiss National Science Foundation grant 150654 and UK BBSRC grant BB/M015009/1 (COD). PRB2 IPT13/0001 - ISCIII-SGEFI / FEDER (JMF).

Availability of data and materials

Data: The benchmark data and the predictions are available on FigShare at https://dx.doi.org/10.6084/m9.figshare.2059944.v1. Note that according to CAFA rules, all but the top-ten methods are anonymized. However, methods are uniquely identified by a code number, so use of the data for further analysis is possible.

Software: The code used in this study is available at https://github.com/yuxjiang/CAFA2.

Authors' contributions

PR and IF conceived of the CAFA experiment and supervised the project.
YJ performed most analyses and significantly contributed to the writing. PR, IF, and CSG significantly contributed to writing the manuscript. IF, PR, CSG, WTC, ARB, DD, and RL contributed to the analyses. SDM managed the data acquisition. TRO developed the web interface, including the portal for submission and the storage of predictions. RPH, MJM, and CO'D directed the biocuration efforts. EC-U, PD, REF, RH, DL, RCL, MM, ANM, PM-M, KP, and AS performed the biocuration. YM and PNR co-organized the human phenotype challenge. ML, AT, PCB, SEB, CO, and BR steered the CAFA experiment and provided critical guidance. The remaining authors participated in the experiment, provided writing and data for their methods, and contributed comments on the manuscript. All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Ethics approval and consent to participate

Not applicable to this work.

Author details

1Department of Computer Science and Informatics, Indiana University, Bloomington, IN, USA. 2Buck Institute for Research on Aging, Novato, CA, USA. 3Department of Molecular Biophysics and Biochemistry, Yale University, New Haven, CT, USA. 4Department of Microbiology, Miami University, Oxford, OH, USA. 5University of Rome, La Sapienza, Rome, Italy. 6Computational Bioscience Program, University of Colorado School of Medicine, Aurora, CO, USA. 7Department of Computer Science, Colorado State University, Fort Collins, CO, USA. 8Department of Computing and Information Systems, University of Melbourne, Parkville, Victoria, Australia. 9Health and Biomedical Informatics Centre, University of Melbourne, Parkville, Victoria, Australia. 10Department of Biology, New York University, New York, NY, USA. 11Social Media and Political Participation Lab, New York University, New York, NY, USA. 12CY Data Science, New York, NY, USA.
13Department of Computer Science, New York University, New York, NY, USA. 14Simons Center for Data Analysis, New York, NY, USA. 15Center for Genomics and Systems Biology, Department of Biology, New York University, New York, NY, USA. 16Department of Electrical Engineering and Computer Sciences, University of California Berkeley, Berkeley, CA, USA. 17Department of Plant and Microbial Biology, University of California Berkeley, Berkeley, CA, USA. 18Biocomputing Group, BiGeA, University of Bologna, Bologna, Italy. 19Computer Science Department, University of Missouri, Columbia, MO, USA. 20ETH Zurich, Zurich, Switzerland. 21Swiss Institute of Bioinformatics, Zurich, Switzerland. 22Bioinformatics Group, Department of Computer Science, University College London, London, UK. 23European Molecular Biology Laboratory, European Bioinformatics Institute, Cambridge, UK. 24Department of Information Technology, University of Turku, Turku, Finland. 25University of Turku Graduate School, University of Turku, Turku, Finland. 26Turku Centre for Computer Science, Turku, Finland. 27University of Bristol, Bristol, UK. 28Institute of Biotechnology, University of Helsinki, Helsinki, Finland. 29Institute of Information Science, Academia Sinica, Taipei, Taiwan. 30Department of Computational Science and Engineering, North Carolina A&T State University, Greensboro, NC, USA. 31Department of Computer Science, Purdue University, West Lafayette, IN, USA. 32Department of Biological Chemistry, Institute of Life Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel. 33School of Computer Science and Engineering, The Hebrew University of Jerusalem, Jerusalem, Israel. 34Centre for Integrative Systems Biology and Bioinformatics, Department of Life Sciences, Imperial College London, London, UK. 35Centre for Cardiovascular Genetics, Institute of Cardiovascular Science, University College London, London, UK.
36Department of Electrical Engineering, STADIUS Center for Dynamical Systems, Signal Processing and Data Analytics, KU Leuven, Leuven, Belgium. 37iMinds Department Medical Information Technologies, Leuven, Belgium. 38Inserm UMR-S1052, CNRS UMR5286, Cancer Research Centre of Lyon, Lyon, France. 39Université de Lyon 1, Villeurbanne, France. 40Centre Léon Bérard, Lyon, France. 41Institute of Structural and Molecular Biology, University College London, London, UK. 42Cerenode Inc., Boston, MA, USA. 43Molde University College, Molde, Norway. 44Department of Computer Science, Centre for Systems and Synthetic Biology, Royal Holloway University of London, Egham, UK. 45Department of Molecular, Cell and Developmental Biology, University of California at Los Angeles, Los Angeles, CA, USA. 46School of Mathematics, Statistics and Applied Mathematics, National University of Ireland, Galway, Ireland. 47Stanley Institute for Cognitive Genomics, Cold Spring Harbor Laboratory, NY, New York, USA. 48Graduate Program in Bioinformatics, University of British Columbia, Vancouver, Canada. 49Department of Psychiatry and Michael Smith Laboratories, University of British Columbia, Vancouver, Canada. 50Department for Bioinformatics and Computational Biology-I12, Technische Universität München, Garching, Germany. 51DOE Joint Genome Institute, Walnut Creek, CA, USA. 52Bioinformatics and Genomics, Centre for Genomic Regulation, Barcelona, Spain. 53Universitat Pompeu Fabra, Barcelona, Spain. 54Institució Catalana de Recerca i Estudis Avançats, Barcelona, Spain. 55Division of Electronics, Rudjer Boskovic Institute, Zagreb, Croatia. 56EMBL/CRG Systems Biology Research Unit, Centre for Genomic Regulation, Barcelona, Spain. 57State Key Laboratory of Genetic Engineering, Collaborative Innovation Center of Genetics and Development, Department of Biostatistics and Computational Biology, School of Life Science, Fudan University, Shanghai, China.
58Children's Hospital of Fudan University, Shanghai, China. 59Department of Molecular Medicine, University of Padua, Padua, Italy. 60Research and Innovation Center, Edmund Mach Foundation, San Michele all'Adige, Italy. 61Department of Information Engineering, University of Padua, Padova, Italy. 62Instituto De Genetica Medica y Molecular, Hospital Universitario de La Paz, Madrid, Spain. 63Spanish National Bioinformatics Institute, Spanish National Cancer Research Institute, Madrid, Spain. 64Structural and Computational Biology Programme, Spanish National Cancer Research Institute, Madrid, Spain. 65Control and Computer Engineering Department, Politecnico di Torino, Torino, Italy. 66National University of Computer & Emerging Sciences, Islamabad, Pakistan. 67Anacleto Lab, Dipartimento di informatica, Università degli Studi di Milano, Milan, Italy. 68Applied Bioinformatics, Bioscience, Wageningen University and Research Centre, Wageningen, Netherlands. 69Biometris, Wageningen University, Wageningen, Netherlands. 70Center for Multidisciplinary Research, Institute of Nuclear Sciences Vinca, University of Belgrade, Belgrade, Serbia. 71Department of Computing and Mathematics FFCLRP-USP, University of Sao Paulo, Ribeirao Preto, Brazil. 72Institute for Molecular Infection Biology, University of Würzburg, Würzburg, Germany. 73Department of Computer and Information Sciences, Temple University, Philadelphia, PA, USA. 74University of Southern Mississippi, Hattiesburg, MS, USA. 75School of Biosciences, University of Kent, Canterbury, Kent, UK. 76Institut für Medizinische Genetik und Humangenetik, Charité - Universitätsmedizin Berlin, Berlin, Germany. 77Department of Electrical Engineering ESAT-SCD and IBBT-KU Leuven Future Health Department, Katholieke Universiteit Leuven, Leuven, Belgium. 78California Institute for Quantitative Biosciences, University of California San Francisco, San Francisco, CA, USA. 79Department of Chemical Biology, The Hebrew University of Jerusalem, Jerusalem, Israel.
80Department of Systems Pharmacology and Translational Therapeutics, University of Pennsylvania, Philadelphia, PA, USA. 81Department of Biomedical Informatics and Medical Education, University of Washington, Seattle, WA, USA. 82Department of Computer Science, Miami University, Oxford, OH, USA. 83Department of Veterinary Microbiology and Preventive Medicine, Iowa State University, Ames, IA, USA. 84Department of Biomedical Sciences, University of Padua, Padova, Italy. 85Department of Biological Sciences, Purdue University, West Lafayette, IN, USA. 86Department of Biological and Environmental Sciences, University of Helsinki, Helsinki, Finland. 87University of Lausanne, Lausanne, Switzerland. 88Swiss Institute of Bioinformatics, Lausanne, Switzerland.

Received: 26 October 2015. Accepted: 4 August 2016.

Jiang et al. Genome Biology (2016) 17:184
