UBC Theses and Dissertations

Adventures in the nature of trade : the quest for ’relevance’ and ’excellence’ in Canadian science Atkinson-Grosjean, Janet 2002

ADVENTURES IN THE NATURE OF TRADE: THE QUEST FOR 'RELEVANCE' AND 'EXCELLENCE' IN CANADIAN SCIENCE

by Janet Atkinson-Grosjean
M.A., Simon Fraser University, 1996

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES (INDIVIDUAL INTERDISCIPLINARY STUDIES GRADUATE PROGRAM)

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
NOVEMBER 2001
© JANET ATKINSON-GROSJEAN, 2001

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

ABSTRACT

The study addresses: (1) changes in Canada's science-policy climate over the past two decades; (2) impacts of such changes on the conduct and organization of academic science; and (3) public-interest implications of promoting, in public institutions, research 'relevant' to private-sector needs. Working within the interdisciplinary traditions of science studies, the conceptual framework draws on the cross-cutting tensions at the intersection of public and private space, and basic and applied science. These tensions are articulated in two opposing models: 'open science' and 'overflowing networks'. Canada's Networks of Centres of Excellence (NCE) program provides the study's empirical focus. Founded in 1988, the NCE program rests on dual goals of research excellence and commercial relevance. It promotes a national research capacity that 'floats across' existing provincial institutions.
The first part of the study investigates the evolution of the NCE program against the background of Canadian science policy. The second part problematizes the notion of 'network' while investigating one of the NCEs in depth, examining the scientific, commercial, cultural, and spatial-structural practices that are the outcomes of policy. Examination of these practices reveals not only the cultural and commercial shifts sought by policy, but also unintended consequences such as regional clustering; elitism and exclusion; problems with social and fiscal accountability; tensions with host institutions; and goal displacement between science and commerce. In relation to the overall problematic, the study constructs a new typology depicting network scientists as 'settlers', 'translators', or 'merchant scientists' according to their public/private, basic/applied orientation. The study then develops a set of broad conclusions about NCEs, especially those in the life sciences. (1) Translational research—at the nexus of public/private, basic/applied—is foundational for these networks. (2) As policy/practice hybrids, their spatial dynamics are highly enigmatic. (3) NCEs develop contradictory cultural norms. (4) Network effects resist standard assessments. (5) 'Public' and 'profit' seem to be problematic partners. (6) The recent historical focus of science policy has been myopic. The study expresses concerns for the public interest when commercial 'relevance' becomes an overarching goal of both science and policy. It concludes with a recommendation for open networks that would retain the flexibility of the network form, but would produce open rather than proprietary knowledge.
TABLE OF CONTENTS

Abstract
Table of Contents
List of Figures
List of Tables
Acknowledgements
Dedication
Acronyms and Abbreviations Used

CHAPTER 1: INTRODUCTION
    Where this study fits, in theory
    Details of the Study
    Data Collection & Analysis
    Chapter Outline

CHAPTER 2: CONCEPTUAL AND ANALYTICAL TOOLS
    I. Mapping the Divides
        Public and Private
        Basic and Applied
        The Spaces in Between
    II. 'Open Science' or 'Science that Overflows'?
        The Open Science Model
        The Overflow Model
        Mode 2 and Triple Helix
        Policy Regimes
    III. Translating Networks
        Structural Issues
        'Studying up'
    Summary

CHAPTER 3: SCIENCE POLICY IN CANADA AND THE NCE EXPERIMENT
    I. Historical Influences on Policy
        Public Science in Canada
    II. Evolution of the NCE Program
        Models for the NCE Program
        Territorial Struggles and Program Design
        Mobilizing Networks; Changing Attitudes
    Summary and Discussion

CHAPTER 4: CONFIGURING THE CANADIAN GENETIC DISEASES NETWORK
    I. Power of One
        Enrolling the Core-set
    II. Managing the Network
        Institutional Friction
    III. Spatial-Structural Dynamics
        Regional Distribution
        Elitism: Norms of Equity and Exclusion
        Accountability as Social Reflexivity
        Accountability as 'Value for Money'
    Summary Discussion

CHAPTER 5: CULTURE AND SCIENCE
    I. 'A Nation of Colleagues'
        Inducing Solidarity
        Face to Face Community
        A Climate for Collaboration
        Phase Transitions
    II. Network Science?
        Medical Genetics: An Overview
        Space and Scale
        Scaling Up
        A Network Research Program?
        Core Facilities: 'Where all the spokes converge'
    Conclusion

CHAPTER 6: FROM SCIENCE TO COMMERCE
    I. Understanding the Pipe
        Industry Partnerships
    II. Traversing the Pipe
        The First Commercial Turn
        The Second Commercial Turn
        Resistance
    III. 'Funding Galileo'
    Conclusion

CHAPTER 7: ADVENTURES IN THE NATURE OF TRADE
    I. Localizing Cosmopolitans
    II. Towards a New Typology
        Settler Science: 'Excursions into the land of ignorance'
        Translational Research: 'I wouldn't call it science'
        Merchant Science: Worlds in Transition
        Conflicts of Interest and Commitment
        Incorporating Merchant Science
    Discussion

CHAPTER 8: CONCLUSIONS & IMPLICATIONS
    I. Argument and General Findings
    II. Case Study: Conclusions and Implications
        Translational research is foundational
        Spatial dynamics are enigmatic
        Cultural norms are contradictory
        Network effects resist assessment
        'Public' and 'profit' are problematic
        Policy's focus is myopic
    III. Suggestions for Future Research
    IV. Summary

Appendix A: Networks Funded
References

LIST OF FIGURES

Figure 1: Conceptual Matrix
Figure 2: Formal Interviews Conducted
Figure 3: The Linear Model of Research: WWII to mid-1970s
Figure 4: Stokes's Quadrant Model of Scientific Research
Figure 5: Model of Canada's Strategic Science Policy Regime in relation to the NCE program
Figure 6: CGDN Investigators, listed in 1988 Proposal for Phase I of NCE Program
Figure 7: Tri-Nodal Distribution of Funding
Figure 8: Growth in Partner Institutions and Principal Investigators, Phase I to Phase III
Figure 9: CGDN's Core Facilities, End of Year One, Phase I (1990-1)
Figure 10: CGDN's Core Facilities, End of Phase II, beginning of Phase III (1998)
Figure 11: Industry Relationships, Phase II
Figure 12: NCE Program: Funded Networks 1989-2005, sorted by date of first funding

LIST OF TABLES

Table 1: Share of university research funded by industry (%) in 1996, 1990, and 1985
Table 2: Percentage of R&D Expenditures for the G7 Nations in 1996
Table 3: Total cash contributions to NCEs, 1990-2000
Table 4: Funding Allocations by Institution, 1991 to 2000

ACKNOWLEDGEMENTS

Without the generosity of my sources, there would be no study.
I want to thank the scientists, policy advisors, administrators, bureaucrats, and many others who contributed their time and reflections to help me understand their complex worlds. I was fortunate in the calibre of my interdisciplinary research committee and the breadth of their research interests: Don Fisher, Educational Studies, and Stephen Straker, History of Science (co-supervisors); Richard Ericson, Sociology and Law; Derek Gregory, Human Geography; and Judy Segal, English. The Science and Society group and the Individual Interdisciplinary Studies Graduate Program, both housed at Green College, constituted my intellectual community, and I am grateful for the opportunities for discussion and support provided by both. Finally, embarking on the adventures of graduate studies at a relatively advanced age is less foolhardy when it is a folie à deux. Thanks to my partner in this madness: my husband and friend, Garnet Grosjean. We made it, kid!

DEDICATION

In gratitude for providing a climate of encouragement, and the intellectual and material resources that allow students to work outside disciplinary lines, I dedicate this dissertation to the Individual Interdisciplinary Studies Graduate Program (Rhodri Windsor-Liscombe) and Green College (Richard Ericson) at the University of British Columbia, and the Graduate Liberal Studies Program at Simon Fraser University (Hannah Gay, Len Berggren, and Steve Duguid).
ACRONYMS AND ABBREVIATIONS USED

ACST  Prime Minister's Advisory Council on Science & Technology
ANT  Actor-Network Theory (aka Translation Sociology)
'big pharma'  Multinational pharmaceutical industry
CBDN  Canadian Bacterial Diseases Network
CGDN  Canadian Genetic Diseases Network
CHEO  Children's Hospital of Eastern Ontario
CIAR  Canadian Institute of Advanced Research
CIHR  Canadian Institutes of Health Research
CSA  Canadian Standards Association
HUGO  International Human Genome Organization
ILO  University-Industry Liaison Office (aka Technology Transfer Office; Commercialization Office)
IP  Intellectual Property
IPR  Intellectual Property Rights
IRAP  Industrial Research Assistance Program (see NRC)
ISTC  Industry, Science, and Technology Canada (now Industry Canada)
MOSST  Ministry of State for Science & Technology
MRC  Medical Research Council
NABST  National Advisory Board on Science & Technology
NCE  Networks of Centres of Excellence
NRC  National Research Council
NSERC  Natural Sciences & Engineering Research Council
NSF  National Science Foundation (US)
ORF  Ontario Research Foundation
OST  Observatoire des sciences et des technologies
PENCE  Protein Engineering Network of Centres of Excellence
PI  Principal Investigator
PRECARN  Pre-Competitive Advanced Research Network
PRI  Policy Research Institute
PUS  Public Understanding of Science
R&D  Research and Development
'Sick Kids'  University of Toronto Hospital for Sick Children
SME  Small and Mid-Sized Enterprises
SPRU  Science Policy Research Unit (UK)
SSHRC  Social Sciences and Humanities Research Council
UBC  University of British Columbia
UQAM  Université du Québec à Montréal
UT  University of Toronto

NB: In quotes from interviews or documents, capitals in parentheses are my locator codes, not acronyms.

CHAPTER 1: INTRODUCTION

The creation of the Networks of Centres of Excellence (NCE) program, in 1988, was arguably the most dramatic change in Canadian science policy since the National Research Council was established in 1916.
The NCE program can be understood as an attempt to create an interpenetrating system of public and private research within academic settings. The federal government sought to establish a university-based system of national research networks—'research institutes without walls'—that would target and develop commercial opportunities. The program is now a central element in the government's 'innovation agenda', where scientific excellence, commercial relevance, and public/private collaborations are recurrent themes. By the end of the 2001 fiscal year, a total of 29 networks had been funded in areas deemed strategically important to Canada's prosperity and international competitiveness (see Appendix A). What makes the NCE effort an exemplar in Canadian policy history is the explicit attempt to turn the culture of academic science towards commercial application, and to manage research on private-sector rather than academic principles. Purposive tensions are 'designed in' to these networks in the form of commitments to both fundamental enquiry and exploitation of intellectual property; private-sector investment and public funding; academic ideals and commercial values. NCEs are institutionally ambiguous in that they occupy indeterminate public/private spaces, inside/outside the academy. As such, they 'float above' universities—which fund a significant portion of their costs—with little accountability. This abundance of novelties would seem attractive to anyone interested in the shifting terrain of science, economics, and policy. Yet surprisingly, the program has largely escaped scholarly notice.1 My interest in NCEs began with an outsider's curiosity about the workings of academic science and the way it appears to be changing. In earlier graduate work,2 I'd examined science as a master narrative of modernity, presenting 'the scientific worldview' as a metaphor for Enlightenment values of rationality, predictability, and order.
In a world characterized by quite opposite values, I'd searched for a different metaphor: a 'postmodern science' that would more accurately reflect today's fragmentation and loss of certainty. Revisiting that work, I find little in the way of critical reflection or recognition that science itself might require some unpacking. Despite much talk of the embedding of science in society, and the socially constructed nature of knowledge, my approach was deeply conservative. Science was treated as an institutional black box, governed by Mertonian ideals. The 'booming, buzzing, confusion' of actual scientific practices was nowhere to be found, and the structural and organizational contingencies that constrain and shape these practices were completely absent. Moving on to interdisciplinary doctoral work in science studies and the political economy of science, I studied the way market forces and neoliberal public-sector reforms were affecting research funding and science policies. The conversion of public science into private (intellectual) property, and academic and state institutions into market players, was progressing rapidly and with relatively little resistance. I found this curious because Canada's structure and values, and the heterogeneity of its federal and provincial political institutions, generally preclude radical change. I could find few, if any, evidence-based studies of the phenomenon and no disinterested calculations of the social and financial costs and benefits involved.

1 Clark (1998) is one exception, providing a comparative but atheoretical overview of various 'formal knowledge networks.' As well, an ongoing research program includes interest in certain NCEs; for example, Dalpé & Ippersiel (2000), Dalpé et al. (2001).
2 Atkinson-Grosjean (1996) 'Science in Postmodern Times', unpublished MA terminal project, Simon Fraser University.
It seemed that the policy of 'privatizing' public science and its institutions was proceeding ideologically, rather than by rational calculation. Such policies were assumed to fuel innovation and maximize wealth creation, but that was a highly contestable assumption. Many economists were pointing to the relative inefficiency of proprietary approaches to public science.3 Meanwhile, other critics4 questioned a calculus that collapsed the social into the economic, and turned universities into 'knowledge factories'. It was clear that these policies could fundamentally realign the public/private divide with potentially far-reaching consequences. The shift from 'public' to 'private' in Canadian university science was accelerating rapidly; intellectual property rights were becoming the hegemonic currency of the research economy. Gross revenues from royalties and license fees grew more than threefold between 1991 and 1997, while industrial research funding saw more than a fourfold increase (AUTM 1998).5 Of the almost 400 spin-off companies created in Canadian universities since 1980, more than 62% had been formed since 1990, at an average rate of 23 per year (Statistics Canada 1999). The 'free flow of ideas' into the public knowledge base tends to falter when public research becomes privatised. A review of various literatures indicated that researchers become reluctant to share information with colleagues; sponsored research contracts sprout clauses that restrict dissemination; public and private 'partners' squabble about the ownership of intellectual property; and universities develop policies governing disclosure of research with commercial potential. Such practices are rationalized on economic grounds: if science is to be harnessed in pursuit of competitive advantage, subscribing to free-flowing knowledge is deemed hazardous. On the basis of the literature reviews and statistical evidence, I developed a hypothesis that some kind of radical break from past practices was underway. Academic science was turning away from disinterested enquiry and open sharing towards commercial interests and 'secret knowledge'. Academic forms of organization were being replaced by new and dynamic cross-sectoral networks. The hypothesis drew on the tension between research pursued for understanding and research pursued for use, and on associated attitudes towards ownership and access, secrecy and openness. The argument was positioned within the shifting and historically contingent distinctions dividing 'public' from 'private' and 'basic' from 'applied'. My larger purpose was to question the impact of shifts in the organization and ethos of science on 'the public interest'. To whom is a privatized science accountable? I asked.

3 For example, Nelson and Romer 1998:59; Nelson 1996; Mazzoleni & Nelson 1998; Rosenberg 1998; and see Chapter 2, following.
4 See the reference list for works by David Noble, Sheila Slaughter and colleagues, Janice Newson, and Claire Polster.
5 AUTM is the US-based Association of University Technology Managers. Since the majority of Canada's major research universities participate in the AUTM survey, these are fairly reliable indicators of growth. Conversely, the majority of Canadian universities do not participate in the AUTM survey, suggesting that commercialization concentrates in the major institutions, as in the US. This is confirmed by the Statistics Canada survey (1999:17), which shows that the 12 most active universities account for 75% of invention reports and licenses, and two thirds of new patent applications. Of the remaining universities, medium-sized institutions account for the majority of activity. The number of universities that can effectively pursue commercialization activities and academy-industry partnerships thus appears limited.
What is gained and what is lost when longstanding institutional distinctions dissolve? These questions constituted the 'moral purpose' of the project. A pilot study revealed flaws in the way the hypothesis had been formulated. To avoid the errors of my earlier work, I had adopted an empirical approach that would open up the black boxes marked science and public interest. Interviews quickly demonstrated that I was, nevertheless, focusing almost exclusively on structural forces. In the first place, the way my thesis was framed left no room for agency, yet the autonomy that individual scientists exercised over their work came through clearly at an early stage, as did the choices some had made to engage in 'academic capitalism'.6 In the second place, it seemed to matter which type of science I was addressing. While network forms of organization were becoming a default requirement for funding, commercial interests were largely absent in whole areas of the natural sciences. It soon became apparent, therefore, that I should focus on changes in the biomedical sciences rather than, say, physics or chemistry. Next, an examination of the historical record quickly dispelled notions of 'radical' or 'revolutionary' breaks. Before 'networks' there were 'invisible colleges', and the relations between science and commerce seemed anchored in a long evolutionary process. Comparing the end of the 19th century and the start of the 21st, for example, I perceived differences of degree rather than kind in academy-industry ties. Finally, as expected, I found many examples of federal steering of the research agenda, but few indications of direct interference by 'big business'.

6 The term was coined by Ed Hackett in a 1990 article and developed by Sheila Slaughter and Larry Leslie (1997) in their book Academic Capitalism: Politics, Policies, and the Entrepreneurial University.
Thus the empirical realities of the data disciplined my opening assumptions, allowing a more 'grounded' approach to emerge. Adjusting for these new insights, the core assumptions seemed sound and the study could proceed.

Where this study fits, in theory

The study participates in the interdisciplinary tradition of enquiry known as science studies. Science studies is a broad church embracing many sects, including the three that inform this project: micro-studies of laboratory and organizational practices; the economics of science; and science policy studies. Because the study incorporates a case study of the work that individual and institutional actors do to construct, extend, stabilize, and maintain complex networks and power relations, it is most at home in the 'Paris School' of science studies, where such networks have long been a topic of enquiry. Michel Callon, Bruno Latour, John Law, and others have worked to develop Actor-Network Theory (ANT), or 'the sociology of translation', for the past 30 years. But despite the powerful descriptive vocabulary it has accumulated, ANT carries little explanatory weight, as many, including the principals, have argued. A workshop at Keele University in 1997, and a subsequent book (Law & Hassard 1999), focused actor-network theorists on what 'comes after' ANT. Although this study's primary purpose is empirical, rather than theoretical, I hope it will in some way contribute to that debate. One of ANT's weaknesses is 'explanation by incorporation'. Nick Lee & Steve Brown (1994) complained of ANT's 'colonial' expansion. Because everything is enrolled into the network, nothing remains outside. One of the results is that surrounding institutional structures are (under)explained, or explained away, as network outcomes. I find this unsatisfactory. Like Daniel Lee Kleinman (1991 & 1998), I believe that actor-networks are constrained and shaped in important ways by the institutional structures that provide their context.
I see these structures as important already-existing features external to the network, rather than as the contingent outcomes that ANT depicts. Accounting for the transition from 'micro-structure' to 'macro-structure' is an ongoing challenge in ANT and this study participates in that challenge. A related problem is that ANT adopts a deliberately agnostic stance towards the broader political, economic, and social implications of what it describes. In a theory that erases all boundaries between science and society, such agnosticism seems to me a contradiction, since it separates science from its consequences. I think an agnostic stance is a luxury ANT can no longer afford. Thus, ANT needs to be 'stiffened' with several critical starches and this study may indicate just where those stiffeners can be most effectively applied. First, I point empirically to the myriad ways biomedical networks are bounded and closed by members, who thereby invent 'insides' and 'outsides'. Second, through the empirical evidence, I can challenge ANT with normative questions about the nature of public and private science and how the public interest can be served. Third, I develop a typology that classifies network scientists by their response to political-economic pressures. This new typology operationalizes the intersection of public/private, basic/applied divides in networks, and reinterprets ANT's notion of 'translation'. Finally, in pursuit of this effort to give ANT an afterlife, I follow Michel Callon into the current controversy in economics of science and science policy, where 'open science' takes on the network model. By weighing the arguments against my empirical results, I hope not only to contribute something to that debate, but also to contribute to policy studies and inform future policy.
In that regard, the study's primary contribution is empirical: collecting and systematizing data on the Networks of Centres of Excellence (NCE) program and the Canadian Genetic Diseases Network in the context of Canadian science policy.

Details of the Study

A review of the current science policy environment suggested that, for the reasons indicated earlier, the NCE program would reward attention. In fact, the research reported here constitutes the first full-length academic analysis of this program. To extend the study beyond the structural level, I would conduct a detailed review of one of the networks, using ethnographic techniques. (Time and cost constraints precluded more than one in-depth case study.) A number of criteria were developed to guide selection, including research sector; position on the public/private continuum; longevity; density of linkages; amount of funding; and location. The best match with my selection criteria was the Canadian Genetic Diseases Network (CGDN). CGDN was one of the first networks funded under the NCE program. Inaugurated in August 1990, it brought together medical genetics researchers across the country, under the leadership of Scientific Director Michael Hayden. By 2001, support received from the program totalled $50M. Currently, some 50+ researchers belong to the network, together with 11 universities and hospitals and eight companies. The research program covers four integrated themes: gene identification; pathogenesis and functional genomics; genetic therapies; and genetics and health care. 'Core facilities' in major centres undertake work such as DNA sequencing, genotyping, and bioinformatics training. The network opened a 'child' organization—the Centre for Molecular Medicine and Therapeutics—in Vancouver in 1998. Merck Frosst, a founding industry partner, contributed $15M towards the Centre.
The network's commercial prowess can be seen in the major intellectual property (IP) agreement it brokered between Schering Canada Inc. and the University of Toronto; at the time, the largest university IP agreement in Canadian history. The agreement was based on the 1995 discovery, by a network researcher, of two genes for early-onset Alzheimer's disease. Schering's initial $9M funded a three-year research program in the development of drugs and technologies to treat and prevent Alzheimer's. Over the long term, the agreement has a potential value of $34.5M, not including royalties. In 1997, when the NCE program announced a 14-year 'cap' for all networks, CGDN learned its funding would 'sunset' in 2005. This policy change set in motion a major strategic shift. In 1998, the network incorporated itself as CGDN Inc. It adopted a corporate organizational form and an aggressively commercial focus. The goal was to maximize revenues from license agreements and equity holdings in order to replace the $4.5M a year in federal funds that would cease in 2005. Many of the scientists interviewed expressed ambivalence at the direction in which the network was moving. On one hand, they knew action was needed if the network was to survive. On the other hand, they regretted the attendant loss of collegiality and openness that had marked earlier days. If one imagines public/private and basic/applied as cross-cutting dimensions (see Figure 1 below), CGDN had, until this point, concerned itself predominantly with 'public science' and 'basic research'. Approximately 70% of NCE core funding supported fundamental, discovery-based research, with 20% going to early-stage development of technologies with commercial potential (the remaining 10% supports networking and administration). But the goal of sustaining the network beyond 2005 accelerated a downward shift to the 'private science' half of the matrix.
A key question is how this policy shift affects the public's social and economic return on investment in CGDN and NCEs more generally.

Figure 1: Conceptual Matrix (vertical axis: Public to Private; horizontal axis: Basic Science to Applied Science)

The tension between the public and private faces of NCEs became increasingly apparent over the course of the study. Countervailing currents of confidentiality and openness ebbed and flowed around the project. Scientists spoke to me freely, for example, while gatekeepers erected formal blocks to access. The contradictions give an indication of the normative and ethical boundaries that are constantly negotiated in these networks. At the federal level, the vast majority of people who designed and implemented the program in the mid-1980s had disciplinary roots in the sciences. Most held PhDs and were associated with the Research Councils. In interviews, their commitment to a scientific culture of openness prevailed. In contrast, 'career bureaucrats' remained guarded, refusing to provide key materials on the grounds of commercial and/or cabinet and/or third-party confidentiality. Access was formally denied. The fact that the research was sponsored by one of the three federal funding councils (SSHRC) carried no weight. The wording of the formal denial sidestepped an outright claim that NCE files were exempt from disclosure. But access would require implementation of the provisions of the Access to Information and Privacy (AIP) Acts and every individual document would have to be requested by name. Not only was this quite impossible without prior access to the files, the delays and costs would have been unmanageable. The nature of the problem is demonstrated in the following extracts from correspondence (e-mail, January 21, 2000).

Because of the sensitivity of the files in this case, we really have no option but to 'do it by the book'.
This means that in order to gain access to documents in NCE files, we will have to ask you to submit formal Access to Information Act requests... Many documents within [NCE files] would have to be reviewed on a line-by-line basis to identify information subject to exemption. And in many instances a decision about the operation of a particular exemption could only be made in consultation with other federal institutions, with networks, and with any other parties affected by the disclosure.

Canada's information commissioner has criticized precisely this type of strategic use of the Access to Information Act by public servants. He speaks of 'the stubborn persistence of a culture of secrecy in Ottawa' (Reid 2000b) and complains of too-frequent recourse to claims of third-party 'commercial sensitivity' to avoid the release of documents (Reid 2001). When public information disappears behind a screen of privacy erected by public servants, questions are bound to be raised about accountability and the abuse of power.7

At the network level, the contrast between the exaggerated discretion of professional staff and the openness of network scientists was marked. And the balance of power between scientific officers and professional staff appeared to be highly delicate, given the goal of commercial sustainability by 2005. CGDN's scientific director, Michael Hayden, belongs to both cultures. His instincts were to be open but his position required him to attend to the concerns of staff. Hayden was the first person I interviewed in the pilot stage. He was an enthusiastic participant and his support for the study never wavered. As my design of the study developed, Hayden assured me all involved would cooperate fully. But despite his endorsement, professional staff at the network's administrative centre, where I'd hoped to be based, initially refused access. Commercial sensitivity was the formal reason given: 'many of our interactions with industry involve the element of confidentiality and an "outsider" may impact those discussions negatively' (e-mail, March 31 1999). Hayden suggested timing might be the problem; professional staff were simply 'too busy right now' but they would co-operate when the workload moderated. To accommodate the delay, I reordered the study and undertook the federal phase next, returning to the network several months later. At that time, Michael Hayden arranged for my participation in the annual scientific meeting and the International Human Genome conference. He also asked professional staff to arrange an 'internship' for me in the various facets of network governance and to facilitate my access to network documents. Again, staff were slow to comply. A further ten weeks of refusals, negotiations, delays, reversals, and interventions were required before a compromise was reached and limited access granted. There would be no internship and access to materials was curtailed. Board and committee minutes and commercial files were denied to me. I was not allowed to photocopy or remove any of the materials provided, nor could I attend board, committee, or staff meetings. Only information that staff considered 'in the public domain, i.e. financial and annual reports... funding proposals and interim reports' (e-mail, May 17 2000) would be provided. By the time I began my fieldwork at the network's offices, 15 months had elapsed since my initial request for access.

7 This is not an isolated case. My experience confirms that of another doctoral researcher who attempted to explore a similar topic in Ottawa during the early 1990s. Claire Polster was seeking financial and statistical information on the proliferation of federal support programs for industry-relevant university science; some of the data required for her study were denied to her. Other data were not tracked, and what was tracked often proved inconsistent and unreliable (Polster 1993).
Despite repeated requests, I was never allowed to consult board and committee minutes. Eventually I asked for a written rationale. In the response, elaborate framing sequesters aspects of this public entity as private. 'Management has received a legal opinion recommending against public disclosure of Board Minutes. The CGDN Board is a legal entity and as such, holds the right to maintain confidentiality of its in-camera meetings. Management does not hold the right to disclose those proceedings' (e-mail, June 2000). I was able to compensate for the lack of access to records by probing quite deeply in my interviews with private-sector and researcher board members. I also found evidence of board and committee discussions and decisions by triangulating against the materials prepared by the network for NCE site visits. In this way, I was able to form an adequate understanding of the key decisions over the years.

Data Collection & Analysis

The majority of data for the study was derived from in-depth interviews and participant-observation, supplemented by analysis of documents and financial and statistical reports. The preliminary phase of the study lasted from Fall 1998 through Spring 1999. I collected and analyzed documents, then interviewed CGDN officers in Vancouver and network researchers in Vancouver, Edmonton, and Calgary. At the same time, involvement in a separate study8 of industry liaison offices (ILOs) in four universities (two in BC and two in Alberta) allowed me to solicit information on network commercialization practices and network-university interface issues from the 15 ILO officials I interviewed. The next phase, extending from Fall 1999 through Winter 1999, focused on the federal level and the officials responsible for the NCE program. During a week-long fieldwork visit to Ottawa, a total of 19 individuals9 involved in the program's initiation, development, and ongoing maintenance were identified and interviewed.
Historical details of policy formation and program-building were sought, as well as the rationale behind certain 'design features', such as the twin criteria of scientific excellence and commercial relevance. At the same time, documents and reports spanning the NCE's history and pre-history were collected from the program directorate. These materials included annual reports; program evaluations; public relations materials; newsletters; and various committee reports. Particular attention was paid to the acquisition of program-wide information on partnership and intellectual property arrangements, company creation, and funding patterns. The final phase of data collection encompassed the CGDN case study, which extended from Winter 1999 through Fall 2000, with follow-up visits to June 2001. In March 2000, I attended the annual scientific meeting in Vancouver, one of the network's key cultural events. The purpose was to present a paper introducing my study; conduct and solicit interviews; observe interactions; ask questions; and generally familiarize myself with the science and business of the network. Directly after, I represented CGDN as a volunteer media relations officer at the International Human Genome Project's annual conference, which the network was co-hosting. These meetings were invaluable introductions to network culture and science, and the vast 'industry' that molecular biology has become. In addition, over the course of the study, I made site visits to three research labs in Toronto and several to the Centre for Molecular Medicine and Therapeutics, in Vancouver, for interviews and observation. But the core of my fieldwork centred on the network's cramped administrative headquarters in the 'NCE Building' at the University of British Columbia.

8 'Academy-Industry Relations in North America,' Dr Donald Fisher, principal investigator, 1998-2001. Funded by SSHRC.
Here, for a period of eight weeks, I observed the workings of the network from a makeshift desk in the hallway. Over the course of the study, I interviewed a selection10 of board members and private sector partners and all current and former professional staff. In selecting which of the 50+ network researchers to interview, I focused on the 'founder population', i.e. the 16 scientists who remained active in the network, of the 21 who had signed the original 1988 proposal. Eleven of the 16 were interviewed. For balance, I also contacted two of the five founders who had left the network and three more-recent recruits: two from the start of Phase II (1994-5); another from the start of Phase III (1998-9). In total, the CGDN phase of data collection incorporated 40 formal interviews with 31 people. Interviews were semi-structured, allowing scope for reflection and opinion. Informants were first asked to describe their recollections of the network-building process, then answered a series of questions about the science produced in the network; culture and relationships; commercialization practices; governance; and whether they had noted any problems or 'sticking points' over the years. The relative weight of these questions was adjusted to reflect the informant's role in the network. The majority of interviews were conducted in Toronto, Ottawa, and Vancouver—three of the network's four main nodes. I was unable to visit Montreal, the fourth major centre, but interviewed two researchers from McGill, one by telephone and another during his visit to Vancouver. Throughout the study, I attempted to compensate for the 'single-case' focus by identifying and interviewing other knowledgeable individuals with interests in the NCE program.

9 In the interim, the scope of the aforementioned SSHRC study had been extended to include NCEs, so this phase of my data collection process overlapped with that of the larger study. Data from 15 of the 19 interviews were shared.
These included 'insiders' involved in networks other than CGDN, and 'outsiders' such as university technology managers and policy consultants. The purpose was to generate a cross-section of fact, opinion, and experience about NCEs from which shared patterns could emerge—patterns that would not be discernible in a single case.11 In all, a total of 74 formal interviews12 were conducted with 65 people in nine Canadian and two US cities (Figure 2 below). CGDN professional staff were interviewed twice, at the beginning and mid-point of the study, to check changing conditions and perceptions. Michael Hayden was interviewed three times: a wide-ranging discussion at the beginning of the study helped define my general focus; another at the mid-point dealt with the human genome program and the network's involvement in genomics; a third during fieldwork covered specific questions that had arisen and shifts I had noted. CGDN's current NCE program officer was interviewed twice: once, in Ottawa, in October 1999, and again in Vancouver during the annual scientific meeting in March 2000.

10 Access was controlled by staff; I was not permitted to contact board members and industry partners independently.

11 The technique, originally developed by Glaser and Strauss, is called 'maximum variation'. See Merriam (1998: 62) for a brief and useful description.

12 Of these, 30 were shared with the previously mentioned SSHRC study.

Figure 2: Formal Interviews Conducted

                                        People   Interviews
Senior policy makers                         4            4
NCE Directorate                              7            7
NCE Program Officers                         6            7
CGDN 'founder' researchers—Current          11           14
CGDN 'founder' researchers—Former            2            2
CGDN 'new' researchers                       3            3
CGDN Professional staff                      4            9
CGDN Private Sector                          5            5
Non-NCE scientific networks                  3            3
University administrators                    2            2
University technology managers              15           15
Policy Consultants                           3            3
Total People/Interviews                     65           74

Initial analysis of the data began during fieldwork.
Daily write-up of field notes helped me to reflect on what I was discovering and identify questions for subsequent follow-up. After fieldwork, during the intensive analysis of the data, I continued with the practice of daily written reflection. These notes reminded me of where my thinking had been in relation to the study and suggested directions I might explore. They proved invaluable in helping me structure the eventual write-up. The materials I had collected included financial and statistical reports. My background as a professional accountant allowed me to analyze financial and performance data using generally accepted accounting principles and conventions. Key ratios were calculated in an attempt to determine the program's economic costs and benefits, and comparative rates of public/private participation and reward. Such calculations are unable to account for social dimensions of the research questions, since social costs and benefits resist quantification. Nevertheless, these indicators can suggest the underlying social calculus and it is in this spirit they were sought. The policy and program material was analyzed and written up first. Several conference papers and articles were produced from these historical and interview data.13 This process had the effect of 'stabilizing' a large part of the evidence. The 'macro' level of the program's composition and policy context could then be set aside in favour of a much finer-grained analysis of the network's micro-level practices. The material lent itself naturally to this bifurcation, leading me to question theoretical claims that actor-networks could not be bound within structural frames. Next, network interviews were sorted into four broad categories: 'network scientists', 'professional staff', 'board and private sector', and 'other'.
Provisional code-books were developed from iterative readings of the transcripts in each category, which were then coded and recoded using software tools of my own devising (rather than a commercial qualitative analysis program). Once I was satisfied the codings were consistent, each category was sorted by main code and sub-codes. Then all categories were combined in a single database and sorted. A numerical weight was assigned to the codes according to frequency across categories. The dominant codes became headings to which less-frequent codes were assigned on the basis of 'family resemblances'. These then provided a framework to guide the structure of the dissertation. In turn, these dominant codes were collapsed into broad interpretive themes, to aid theory-building.

Chapter Outline

In this chapter (1) I have provided a broad overview of motivation and methods. Chapter 2 extends the discussion of public/private, basic/applied, and the public interest. I focus on the fundamental tension between 'open science' and proprietary knowledge and set up the two conceptual models which guide the study. In the second part of Chapter 2, I discuss some of the analytical tools that will be brought to bear, from actor-network theory and science studies more generally. Chapter 3 addresses the historical and structural factors contributing to the development of the NCE program. The CGDN case study begins in Chapter 4, where I describe the network-building activities of this group of medical geneticists and the institutional identity they constructed. Chapter 5 demonstrates the way the network evolved a culture and sense of community, critically examines the rhetorical construction of the network's research program, and points to the authentic locations of 'network science'.

13 For example, Atkinson-Grosjean, J. 1999c, 1999d, 1999e, 2000a, 2000b; Atkinson-Grosjean, et al. 2001; Fisher, et al. 2001; Atkinson-Grosjean 2002.
These two chapters represent the 'public' face of the network; the following two chapters move to the 'private' side of network identity. Chapter 6 describes the trajectory from 'public' to 'private' and 'basic' to 'applied' in terms of the network's development of intellectual property and construction of a commercial portfolio. Chapter 7 develops a typology of network researchers based on their alignment along the public/private and basic/applied dimensions. The last chapter summarizes the study and its findings, derives a number of conclusions and policy implications, and makes recommendations for future research. In summary, what follows is an enquiry into the material and epistemic spaces of the NCE program in general, and the Canadian Genetic Diseases Network (CGDN) and its scientists in particular. Detailed descriptions of the social, cultural, and material mechanisms at work draw authenticity from the voices of federal public servants, network officers, private-sector partners, university administrators, and scientists themselves. I trace the trajectory of the NCE program and CGDN over time, attending to the ways federal policies are translated into specific research projects, practices, and institutional arrangements, and recording how scientists embrace, resist, or ignore these initiatives. The purpose is to achieve a greater understanding of changes in the organization and motivation of academic science, as well as the way these changes affect the public's manifold interests in the science it funds. Close examination of how Science is planned and produced and the Public Interest is served in this one particular case will contribute to the development of science studies and policy research more generally.
CHAPTER 2: CONCEPTUAL AND ANALYTICAL TOOLS

The conceptual framework of this study relies on the relationship between two sociological and epistemological distinctions: the public/private and basic/applied divides, and this chapter commences with a review of their relationship. The space where these dimensions intersect is particularly relevant to this study and I examine various attempts to describe it. Donald Stokes (1997), for example, calls the space 'Pasteur's Quadrant'. Others speak of 'strategic research', 'emergent science', or 'Jeffersonian science'.14 I will introduce two models that present opposing interpretations of the relation between the divides: the open science model and the overflow or network model. The tension between these contrasting approaches to public and proprietary knowledge runs throughout this dissertation. In the last part of the chapter, I introduce the analytical tools I will use to understand the conduct and culture of science in networks.

14 See, respectively, Godin (2000-3), Callon (in press), and Holton & Sonnert (1999).

I. Mapping the Divides

Public and Private

The public/private demarcation is one of the core sociological distinctions. Norberto Bobbio (1989) calls it one of the 'grand dichotomies' of western thought. Yet like other such dichotomies this one begins to collapse on closer examination, becoming not one but a number of related oppositions that nest one within the other like Russian dolls (Starr 1988). Is the stock market, for example, public or private? From one perspective it is a mass of individuals pursuing private interests; but from another, it is a public social and cultural aggregation. What do we mean by 'the private sector'? Usually, we mean private businesses, large and small. Yet many of the largest corporations are 'public companies', owned by millions of shareholders, some individual, some huge and institutional.
But huge institutional investors are themselves often 'public', in that they represent the pensions and investments of millions of people. What do we mean by 'the public sector'? Many publicly owned institutions and agencies are 'private' in the sense that they are exempt from direct, or even delegated, public control; for example, crown corporations; universities; even departments and bureaucracies of the state. What does it mean to speak of 'public' and 'private' life? For individuals in 'public life' we designate whole areas exempt from public scrutiny (private matters of conscience, conviction, family, and morality). But when these aspects of private life impinge on or attract the public interest, they enter the public domain and become 'public knowledge'. Does my body belong to me? If so, I should be able to control what happens to my genetic material. But legal cases have been fought and won by researchers who have taken cell lines from unsuspecting patients and patented them for profit, rendering bodies 'public' by acts of privatization. On discovering their colonization, patients fought not for the right to privacy but for the right to profit for and from themselves.15 What about ownership of the human genome? In the vast undertaking to map it, public researchers raced against a private company (Celera Genomics) which sought to patent and profit from 'the stuff of life'. Because results of the public effort were held in common in the public domain, Celera was able to use them to advance their own project. The controversy raised awareness of the role of patent law in privatizing public research. Patents make knowledge private by circumscribing ideas with property rights, so if a public university takes out a patent on a publicly funded discovery, is it 'privatizing' that knowledge? Or is it securing the ownership of that discovery for the public domain?
These questions without answers help to illustrate that public/private is a negotiated, discursive space rather than a fact of the world. But two core ideas help connect the many different meanings. These are, as Paul Starr states, 'that public is to private as open is to closed, and that public is to private as the whole is to the part' (1988:2). In the first sense, public and private oppose each other along the dimension of accessibility. That is to say, the openness and transparency of public space, public life, and public disclosure contrast with the opaqueness and concealment of private space, private life, and personal communications. In the second sense, 'public' is synonymous with 'common', as in public opinion, public health, or the public interest; this sense has merged with the sense of 'official' or 'state'. Thus, to Starr, 'public' can carry three contrasting meanings from which 'privatization' represents corresponding withdrawals. In the first sense, 'public' means open and visible, as in public life and social relations, while 'private' means a withdrawal from sociability and the decline of public culture. In the second sense, we invoke the 'general public' or the public-at-large, to speak of public action and civic concerns in contrast to private concerns and the pursuit of self-interest. The third sense of 'public' is the domain of common (state or community) ownership, as opposed to appropriation by an individual or group. These senses of open, closed, and common will reappear throughout this study. The locus classicus of the public/private distinction can be found in Greek and Roman thought.

15 The classic case is Moore vs Regents of the University of California; see Boyle (1996). John Moore sued researchers and their university for stealing his cell line (uniquely resistant to hairy-cell leukemia) for profit and without his consent. He lost the case.
It represented the separation of the private household and its economy (oikos) from the sphere of collective public institutions—the polis or res publica. Collectively, heads of households constituted the 'body politic' or public realm (Arendt 1959:56). As Arendt explains, a physical space, a boundary or no-man's land, separated private households. The boundary demarcated one property from another, and marked off the household from the city. Arendt identifies the spatial significance of this boundary with that of the law. In the same way that the law harbored and protected the public domain that was political life, fences sheltered and protected the private property of households (Arendt 1959:57). Between the political (public) and intimate (private) domains, Arendt interposed a third space: that of the social. By feudal times, public/private distinctions in property and affairs had developed a certain taxonomic and ideological slipperiness. The emerging concept of the corporation under Roman and Canon Law is a case in point. A corporation interposes itself between the individual and the collective, the political and the economic; in a sense, it is both public and private, and neither. After the church invoked the corporate form to sever itself from state control, the principle of incorporation spread into secular law, where it established the rudiments of a public sphere free of ecclesiastical control (Huff 1997). Thus, 'we find in the 12th and 13th centuries the widespread emergence of a vast array of legally autonomous [corporate] entities that were bestowed with a composite bundle of legal rights and which presumed the legal authority of jurisdiction, that is, legitimate legal authority over a limited territory or domain' (Huff 1997:28). These newly incorporated (literally, embodied) entities included cities and towns, merchant guilds, charitable organizations, professional associations, and universities (ibid.).
Subsequently, according to Huff, corporations contributed to the rise of the public sphere by facilitating the extension of trade in the high middle ages. The original trading companies were extensions of the private economy of the family, in that assets and investments entrusted to the company were commingled with family assets. The developing legal theory of the corporation made it possible to disentangle familial and business affairs, installing a distinction that converted what was previously private (oikos) into public (the market). Huff argues that corporate law made it possible to differentiate between individuals and the corporate body. The corporate collectivity was construed as a single, legal person. A distinction now existed between ownership and jurisdiction, especially concerning assets, liabilities, and debts. By providing for allegiance to the corporation rather than to individuals, the continuity of the enterprise was ensured. The historical development of these concepts, according to Huff, provided for the emergence of distinctive public and private spheres of action and interest. This separation laid the foundation for the emergence of modern science as a 'public' institution within 'public' universities by establishing a 'neutral space' of thought and action. As Huff explains,

The medieval intellectual elite of Europe established an impersonal intellectual agenda whose ultimate purpose was to describe and explain the world in its entirety in terms of causal processes and mechanisms. This disinterested agenda was no longer a private, personal, or idiosyncratic preoccupation, but a publicly shared set of texts, questions, commentaries, and in some cases, centuries-old expositions of unsolved physical and metaphysical questions that set the highest standards of intellectual enquiry... A disinterested agenda of naturalistic enquiry had been institutionalized... It thereby laid the foundation for the breakthrough to modern science (Huff 1997:33).
The science that emerged from the Renaissance took place in relatively small, interdependent communities of practice where scientific advance rested on the veracity of individuals. It depended on a culture of honour, epitomized by the position within the social order of the 17th-century English gentleman-scientist (Shapin 1994). The production of scientific knowledge was, and remains, according to Shapin, a moral enterprise built on mutual trust. Personal trust is the 'great civility' and the currency of an 'economy of credibility' in the conduct of science.

Within such small interdependent groups as the 'core-sets' of specialized scientific practices, the economy of credibility is likely to flow along channels of familiarity. The practitioners involved are likely to know each other very well and to need each others' findings in order to produce their own. Here... the pragmatic as well as the moral consequences of distrust and skepticism are likely to be high (Shapin 1995a:269).

Thus trust in the public institution of science rests on trust in the private morality of its individual practitioners. The 'public' nature of scientific knowledge rests on the collective construction of a collective good, under conditions requiring reliance on the work of others. Within this 'moral economy of truth' public and private, scientific and social, become inseparable. Similarly, Habermas (1989) conceived the public sphere as a social space, first emerging in 17th-century English coffee-houses and salons, where 'private' individuals came together to engage in rational-critical debate and thereby further the 'public' interest. Habermas distinguishes this 'authentic' public sphere from the 'public' realm of state interests. The authentic public sphere is a dimension of private life: 'a public of private people' who came together to further the 'common good'. In Habermasian terms, however, the common good and the public sphere itself are undifferentiated.
As in classical Greece, where women and slaves were confined to the home, rational-critical discourse in the public domain was a white, male, bourgeois prerogative, as was the 'scientific revolution' itself. The legacies remain, as will be seen in the empirical section of this study.16

16 For an interesting discussion of historiographical approaches to the relation of public sphere and private life, see Dena Goodman (1992). A definitive critique of the inadequacy of the liberal model of the public sphere described by Habermas is available in Fraser 1997.

Basic and Applied

With the onset of modern science in the 17th century, questions of public and private begin to map onto distinctions between basic science, applied science, and what lies between. These distinctions are part of an ancient argument that has its roots in the classical differentiation between theoria and praxis in early Greek thought (Godin 2000-3:3; Arendt 1959). The path of theoria travels from Plato, through Descartes and Newton; the path of praxis from Aristotle, through Montaigne and Bacon. Toulmin (1990) shows how the rationalism of early modern science came to dominate the experiential and empiricist values of Renaissance humanism. For 16th-century humanists, the central demand was that thought and conduct should be reasonable (rather than rational), tolerating social, cultural, and intellectual diversity. But after the Enlightenment, says Toulmin, ideas became decontextualized. Scientists began to conduct 'pure research'—a careful and systematic search for the abstract universal laws through which God governed nature (see also Latour, 1993 for a parallel discussion). A fundamental part of Francis Bacon's critique of institutionalized scholarship in the 16th and early 17th centuries was its ignorance of the concerns of industry and commerce, the crafts and trades. Consequently, an important part of his call for reformation involved bringing the two together so that in the reformed academy 'the sounds of industry' would be heard 'at every hand'.17

17 Thanks to Stephen Straker for this point.

According to Benoit Godin,18 the word 'research', meaning thorough examination, emerged from French origins in the 16th century. The concept of 'pure research' was first used in the mid-17th century, to distinguish abstract theorizing from 'mixed research' dealing with concrete subjects (Kline 1995:196). It came into general use towards the end of the 19th century, as part of a contrast pair, the opposing element being industrial or 'applied' research. Thomas Henry Huxley (1880) had an aversion to the pure/applied distinction, stating:

I often wish this phrase 'applied science' had never been invented. For it suggests that there is a sort of scientific knowledge of direct practical use, which can be studied apart from another sort of scientific knowledge, which is of no practical utility, and which is termed 'pure science'. But there is no more complete fallacy than this. What people call applied science is nothing but the application of pure science to a particular class of problems (quoted in Kline 1995: 194).

Huxley was making a nice distinction, ignoring the fact that 'technology', particularly in industry, had its own distinct history and trajectory. Others recognized the linkages between 'pure' and 'applied' research, or disputed the proper place of each. As early as 1840 the Prussian chemist Justus Liebig sought to establish a university program that would combine the search for pure knowledge with production training for students; he was strenuously opposed by faculty (Turner 1982). Lenoir (1998) describes a number of late 19th-century German initiatives to link the demands of the pharmaceutical industry with the interests of academic science, first through consulting and contracting arrangements, then the establishment of independent institutes.
Noble (1977) traces the connections between US academic engineers and industrial research problems from the early decades of the twentieth century. Veblen was complaining about too-close relations between universities and local industries as long ago as 1918. Well-documented debates19 from the interwar years address the propriety of aligning academic and industrial research and patenting publicly-funded research. Conflicts of interest and commitment were not uncommon; there were disputes about intellectual property ownership and concerns about the proper role of the university. As we grapple with similar concerns today, the continuities argue against claims that a radical break in moral and organizational culture is in progress. The terms 'pure' and 'applied' dominated the discourse until the 1930s, when 'fundamental' research came into occasional use to avoid the moral connotations of 'pure' (Kline 1995:196). Subject-matter, e.g. theoretical or applied physics, defined what was pure or applied, rather than the motivation of the researcher, as is the case today. The phrase 'basic science' was first coined by Julian Huxley (1934) (grandson and intellectual heir of T.H. Huxley) as part of a typology in which 'pure' and 'applied' each contained two categories: 'background' and 'basic' for the first; 'ad hoc' and 'development' for the second. British socialists like Huxley, and his colleague John Desmond Bernal, were inspired by the apparent success of 'planned' Soviet science. In The Social Function of Science (1939), Bernal advocated state steering of science through socioeconomic controls and goals.

18 In this section, I draw quite extensively on Godin's series of working papers (2000, 2001, and ongoing) for the Observatoire des sciences et des technologies, UQAM. His project constitutes a history of attempts to measure the impacts of scientific research.
In contrast to this image of social engagement, Michael Polanyi and others who opposed 'Bernalism' founded the Society for Freedom in Science to defend the ideal of a 'pure science', unfettered by social constraints (Polanyi 1940; Sheehan 1993). According to Polanyi (1962:62), 'you can kill or mutilate the advance of science [but] you cannot shape it'; any practical benefits are incidental and unpredictable. The dialogue between Bernal and Polanyi on social direction and autonomy in science is the origin of our continuing debates about the relative allocation of resources to basic and applied research (David 1995). The same debates were being engaged in the interwar period in the US, and the 'Polanyi' position dominated. At the time, academic science was controlled by 'a tacit oligarchy of eminent scientists who shared a number of ideological convictions' (Geiger 1990:19).

19. See, for example, Weiner 1986 and 1989; Geiger 1988 & 1990; Noble 1977.

Among these convictions, according to Geiger, were the beliefs that: (1) society should support basic science, because society benefited from its discoveries; (2) funding should be reserved for the 'best' scientists, because their productivity was established; (3) who the best scientists might be was a matter for the best scientists themselves to determine; and (4) government funding carried the taint of politics, so private support was preferable. Robert K. Merton captured the Polanyi Zeitgeist in The Normative Structure of Science (1942). Merton defined pure science by its characteristic methods and institutional structure, and also by the distinctive cultural values and mores that bound the behaviour of scientists. In combination, these clearly demarcated 'science' from 'technology'. The Bernal position on socioeconomic relevance was adopted by Harley Kilgore, a New Deal senator from West Virginia. Kilgore wanted publicly supported science to be politically and socially accountable.
He suggested that the sole criterion for public funding should be 'manifest social utility' in the production of knowledge (David 1995). Vannevar Bush, an engineer and former president of MIT, who headed the wartime Office of Scientific Research and Development (OSRD), took the Polanyi and Merton side of the debate. Bush (1945) promoted Merton and Polanyi's vision of a freestanding science governed by a system of binding universal norms that underpinned the moral authority on which it rested. He adopted Julian Huxley's term 'basic research' to describe what this autonomous university-based collective produced, and articulated a 'linear model of innovation' to link basic research to eventual socioeconomic returns. The 'pipeline' is the dominant metaphor of the linear model. Fundamental discoveries are fed into one end of the pipe and move through various stages of development until they emerge onto the market at the far end of the pipe. The resultant growth fuels the economy and returns taxes to maintain the cycle (see Figure 3 below). The linear model was a powerful argument for 'market failure', in that basic science was viewed as a public good, requiring public funding and the open dissemination of research results. It was argued that government investment in basic research must be preserved, and science left to regulate itself, if the pipeline was to fuel the innovation process and produce wealth. These arguments were the foundation of the postwar 'social contract for science',20 a contract secured by a promissory note on the eventual but completely unpredictable technological and social spin-offs of basic science.

Figure 3: The Linear Model of Research: WWII to mid-1970s

Basic research was 'performed without thought of practical ends' and with the sole purpose of contributing to 'the understanding of nature and its laws'. According to Bush, if basic research is contaminated by premature considerations of use it loses its creative edge.
But if left alone, it provides the raw materials for innovation and becomes, at a distance, 'the pacemaker of technological progress' (1945:19). Thus, in the form of technology transfer, basic science generates social and economic returns on the state's investment—but only if scientists are allowed to pursue it, wherever it leads, without government controls. Government's role was simply to support university researchers with the resources they needed to produce knowledge.

20. See David Guston's extensive work on science policy and the social contract, for example Guston (2000a); Guston and Keniston (1994).

Scientists viewed Bush's 'Endless Frontier' as 'a charter for pure science' (Holton and Sonnert 1999:53). It enshrined the basic/applied dichotomy in US science policy, and entrenched the 'ideology of the autonomous researcher' (Godin 2000-1:9). Bush argued that 'the responsibility for the creation of scientific knowledge - and for most of its application - rests on that small body of men and women who understand the fundamental laws of nature and are skilled in the techniques of scientific research' (Bush 1945:7). Only peers could decide the value and merit of research. Consequently, 'there was no need for governments to worry about the evaluation and measurement of science and scientists, and to track the output of research' (Godin 2000-1:9). Politicians and policymakers initially refused Bush's gambit. The National Science Foundation, for example, was not established until 1950, and then with far more restricted levels of authority and autonomy than Bush had anticipated. But in the late 1950s, in the aftermath of Sputnik, the linear explanation of the relation between basic science and application became compelling. Bush had argued that without significant investment at the source of the knowledge pipeline, no innovations would issue from its mouth, and the nation would fall behind its competitors. Sputnik seemed to demonstrate the truth of this claim.
Fears of Soviet dominance of 'the space race' generated immediate revisions to the US federal research budget. The 'golden age' of state-sponsored research had arrived.

The Spaces in Between

Setting up a dichotomy between basic and applied dissolves deep connections between the search for solutions to practical and technical problems and the search for fundamental understanding. As Donald Stokes (1997; 1995) argues, and as the historical record suggests, basic research has never been divorced from application, and distinctions between research directed to useful ends and research directed to the advancement of knowledge are deeply misguided. Stokes suggests that a large proportion of university research is—and always has been—both useful and fundamental. He suggests that the basic/applied dichotomy renders this significant segment of the research spectrum invisible, and that the linear model's one-way flow obscures the number of basic research questions arising from purely technological phenomena. In furthering his claims, Stokes (1997) employs an illuminating typology. He classifies fundamental 'understanding-based' research as 'Bohr's Quadrant', and applied 'use-inspired' research as 'Edison's Quadrant'. Research that is both useful and fundamental resides in between, in 'Pasteur's Quadrant'.21 Pasteur's research commitment, according to Stokes, was twofold: not only to understand the microbiological processes he discovered, but also to exert practical control over their effects in products, people, and animals (1997:71-2). 'The mature Pasteur never did a study that was not applied, while he laid out a whole fresh branch of science [microbiology]' (1995:5). In Stokes's view, it is this dual commitment to understanding and use that characterizes much of university research. 'Every one of the basic scientific disciplines has its modern form, in part, as the result of use-inspired basic research.
We should no longer allow the post-war vision [of Bush] to conceal the importance of this fact' (Stokes 1995:6). In further contrast to Bush's one-dimensional linear model, Stokes (1995:7-8) sees the rise in fundamental scientific understanding and the rise in technological know-how as two loosely coupled systems. Instead of the latter being dependent on the former, each progresses along a largely independent trajectory, with no intervention from the other. But at times, Stokes argues, the mutual influences are profound and can go in either direction, with use-inspired basic research often cast in the linking role. At that point they conjoin in a 'seamless web'. While it is a commonplace that new technologies will be increasingly science-based, the under-appreciated concomitant, argues Stokes, is that 'more and more science will be technology-based' (1995:8).

21. A similar formulation, found in Holton and Sonnert 1999, adopts 'Newtonian Science', 'Baconian Science' and 'Jeffersonian Science' as the ideal types. The latter emphasizes the role of state patronage in promoting scientific advance.

What goes unsaid but is nevertheless clear from the discussion is the relation of 'understanding' and 'use' to 'public' and 'private'. If Bohr is the former and Edison the latter, Pasteur occupies the shifting space between public and private. Clearly, as will be seen throughout this study, today's biomedical sciences epitomize these 'spaces in between'. In my empirical findings, physician-scientists describe much of what they do as translational research, a concept that fits the intermediate space between bench and bedside, laboratory and market. A second concept, transitional research, feeds the findings of translation back into basic questions, as Stokes predicts. Policy instruments, such as the NCE program, that are geared to both scientific excellence and commercial relevance address research in Pasteur's Quadrant.
The implications of Stokes's insight are being explored by others.22 The model is reproduced below.

Figure 4: Stokes's Quadrant Model of Scientific Research

                                    Research inspired by considerations of use?
                                    No                              Yes
  Quest for           Yes           Pure basic research             Use-inspired basic research
  fundamental                       (Bohr)                          (Pasteur)
  understanding?
                      No            Research directed to            Pure applied research
                                    particular phenomena            (Edison)
                                    (Wissenschaft)

Source: Stokes (1997)

22. Stokes had a long and distinguished career in US science policy. He died of leukemia shortly after Pasteur's Quadrant went to press. Work has continued in Branscomb, et al. 1999; Nelson 1996; Nelson and Romer 1998; Holton and Sonnert 1999; Branscomb, Holton and Sonnert 2000; Sonnert and Brooks 2000.

In the following section, I summarize two opposing theoretical perspectives on policies for university research, both of which can lay claim to the space between basic and applied, public and private. The 'open science' model, grounded in evolutionary economics, argues that commercial exploitation of proprietary knowledge by public universities undermines the pursuit of use-inspired basic research. The 'overflow' or 'network' model, grounded in science studies, argues that the genie is already out of the bottle, that institutional distinctions are largely irrelevant anyway, and that the resulting state of affairs (inter-sectoral fluxes, flows, and circulations) is largely beneficial.

II. 'Open Science' or 'Science that Overflows'?

In 1954 Jonas Salk, of the University of Pittsburgh, announced he had developed a vaccine for polio. In a television interview, he was asked why he had not taken out a patent on an invention clearly worth millions. Salk replied, 'How can you patent the sun?' (Zalewski 1997:51). Salk's point—that no one should own or profit from discoveries about the natural world—has been overtaken by events.
Patents are now used routinely to translate university research into proprietary knowledge, as part of a systematic effort to turn universities towards the market by 'capitalizing' their own research (Etzkowitz, et al. 1998). This is where the basic/applied and public/private dimensions overlap. University intervention in the commercialization process is highly contested on both social and economic grounds. The first line of contestation questions the social costs of commodifying universities and their knowledge, holding that these (public) institutions should remain outside the (private) system of market exchange.23 One argument is that while the costs of advancing basic knowledge are socialized—taxpayer supported—the benefits from its application are privatized, in the form of intellectual property rights (Noble 1997). Some make an ethical argument that when research is publicly funded, neither researchers nor their universities have moral rights to proprietary control over resulting products (for example, see Goldman 1989). These are powerful debates and I only touch on them here. Note, however, that the position of social critics is aligned, rather curiously, with the second line of contestation, which advances the economic interests of industry. This 'open science' model problematizes the new commercial role of universities and university researchers as an impediment to industry, and therefore to innovation and wealth-creation. The focus on intellectual property rights creates tensions by redefining the role of universities. Once relatively open suppliers of ideas to industry, they become more closed and costly sources of information (Rappert and Webster 1997).
The Open Science Model

Articulated by Dasgupta & David (1994) as 'the new economics of science', the open science perspective advocates a return to 'no-strings-attached' public funding of basic science, a recommitment to the open publication of results, and removal of expectations that universities should be involved in commercialization. Essentially, this model seeks to 'turn back the clock' to the linear understandings of the post-war Golden Age discussed above, when universities produced 'public' knowledge, industry exploited it, and an arm's-length relationship kept the two sectors at a healthy distance (see Figure 3). Using Starr's formulation, discussed earlier, 'public' knowledge produced in universities is common property. In the classic formulations of Richard Nelson (1959) and Kenneth Arrow (1962), 'public goods' are the result of market failure, in that knowledge is considered to be 'non-appropriable' and 'non-rival'. As summarized by Keith Pavitt (2000), 'the simple economics of basic scientific research' are such that basic research generates information that is costly to produce, but virtually costless to reproduce and re-use. It therefore has the properties of a public good and deserves public support. If business firms try to capture all the benefits of basic research for themselves, either through trade secrecy or property rights, knowledge remains under-explored or under-exploited. Thus state support for basic research can be justified on the grounds of economic efficiency (Nelson 1959).

23. For Canadian thinking on this issue, see for example Buchbinder 1993; Polster 1998; Newson 1998. For the US, see Sheila Slaughter and colleagues at the University of Arizona, for example Slaughter and Leslie 1997; Slaughter 1998; Slaughter and Rhoades 1990. Simon Marginson (1997) is a good source for Australia.
Yet as Nelson (1998) and Nelson & Sampat (2001) have recently shown, universities are now patenting and licensing a 'non-trivial fraction' of what would previously have been placed in the public domain. When a university owns patents and licenses, transaction costs for industrial development are increased because companies must now pay for techniques and materials that were previously freely available. Industry's costs also increase when university researchers spin off patented discoveries into their own companies, then license subsequent products to larger firms.24 Thus, again referring back to Starr, transaction costs reduce accessibility. Industry prefers, therefore, to maintain university research in the public domain. Nelson (1998:2) remarks that 'the large pharmaceutical companies, in particular, have begun to complain vociferously that since they and the public pay for this research through taxes given to the university, it is not fair for them to pay again for access'. As well, patents are said to restrict the diffusion of knowledge that promotes innovation. Traditional methods of knowledge diffusion from universities to industry—journal articles, meetings, conferences, and so on—are held to be more efficient (Cohen, et al. 1996). Since barriers to access decrease overall wealth, arguably it is more efficient for government to subsidize the production of fundamental knowledge and give it away 'for free' (Nelson and Romer 1998:59). Florida & Cohen (1999:590) argue that although the role of the university in the knowledge economy is 'not yet clearly articulated, identified, or understood', inherent tensions beset its dual pursuit of both commercial alliances and the traditional 'quest for eminence'. A more balanced view of the university's new role in the economy is required, they say.
Instead of positioning universities as engines of economic growth, a more nuanced perspective would reframe the university as 'an enabling infrastructure for technological and economic development'.25 In this vein, a recent empirical study of university patenting in the UK (Rappert and Webster 1997)26 concludes that the construction of a 'regime of appropriation' in the academy, while effective in the short term, may in the medium to long term constrain the overall rate of return. The authors argue that university patenting and intellectual property rights can unintentionally compromise the commercial potential of research, and that in securing patents the university positions itself as a potential competitor to private-sector firms. Further, university patents may present an obstacle to future development if the patent coverage has been poorly framed or filed prematurely. Starr's dimension of open/closed appears in the disclosure restrictions associated with the securing of intellectual property rights; these may prevent research results from entering the public domain in a timely fashion. University commercialization activities can be perceived as impeding the cumulative advance of the research enterprise by increasing wasteful duplication of effort, and reducing the likelihood that current findings will contribute to future work (Nelson and Romer 1998). Disclosure restrictions are by far the most significant economic cost associated with university patenting and licensing (Cohen, et al. 1998; Blumenthal, et al. 1996). Restrictions in licenses are pervasive. A recent US study (Blumenthal 1997) found that 82% of companies surveyed require academic researchers to keep information confidential to allow for the filing of a patent application, while some 47% have agreements with universities that allow for even longer delays.

24. For an extended discussion on the economic costs and benefits of patents, see Mazzoleni & Nelson (1998).
Additionally, 30% reported that conflicts of interest had arisen with universities, and 34% had experienced intellectual property disputes with academic researchers. The study confirmed that participation by researchers in commercialization is associated with both delays in publication and refusal to share research results on request. Industry-supported and market-oriented academic researchers were more than three times as likely to delay publication as those who had no industry support. Similarly, in a survey of technology managers and faculty at the 'top 100' R&D-performing universities in the US (Rahm 1994), 39% of managers had experienced situations where firms placed restrictions on the sharing of information between faculty. Also, 79% of managers and 53% of faculty reported that firms had asked for R&D results to be delayed or kept from publication. In addition to restricting the flow of knowledge, disclosure limitations also generate real and potential conflicts of interest that can damage public perception of the research enterprise.

25. As will be seen later, CGDN has recently redefined its mission in precisely these terms.
26. See also Packer & Webster 1996, 1995; Webster & Packer 1996a, 1996b, 1995.

Another issue receiving attention in the literature is the so-called 'patent-scope' problem. This refers to the practice of taking 'broad patents' on basic biomedical systems technologies, such as recombinant DNA or monoclonal antibodies. Especially problematic are rights claimed to 'whatever useful may come' from the patenting of DNA fragments. Critics (Nelson 1996; Nelson and Romer 1998) argue that the use of broad patents to commercialize 'public' scientific research, and the policies promoting that commercialization, are unsupportable. Especially in the biomedical sciences, when discoveries are converted into proprietary products the amount of prior public investment required to bring them to fruition is not taken into account.
Biotechnologies build on years of publicly-funded research in 'pure' molecular biology; they continue to draw on advances in 'public' science. As Nelson says, modern biotechnology is a canonical example of a field where science and technology, public and private, are inextricably mixed (1996:141). Allowing those who placed the last brick on the wall—in patent terminology, the first to 'bring to practice'—to privatize the whole system seems not only unfair, but unjustifiable. In more general terms, broad 'pioneer' patents appear to act as a disincentive to further development because of the likelihood of patent infringement, and the legal costs of defending such infringement. The effect is analogous to an 'act of enclosure' over a wide area of the intellectual landscape. Nelson argues strongly that patent scope should be kept as tight as possible. To the response that broad patents are necessary to encourage inventors to innovate, Nelson points to technologies that have been developed without such protection; for example, semiconductors, transistors, and integrated circuits. He states unequivocally:

We believe that the granting and enforcing of broad pioneer patents is a dangerous social policy. It can, and has, hurt in a number of ways... And there are many cases in which technical advance has been very rapid under a regime where intellectual property rights were weak or not stringently enforced. We think the latter regime is the better social bet (Nelson 1996:137).

In that it underutilizes scarce resources, the situation has been described as an 'anticommons' (Heller and Eisenberg 1998). Proliferating patents and licenses 'upstream' block each other, and impede researchers 'downstream'. Rather than stimulating innovation and diffusion, therefore, a tangle of fragmented and overlapping patent claims impedes the advance of knowledge. Researchers must obtain licenses and pay royalties to all who hold interests in the 'upstream' basic technologies (Nelson 1996).
As a result, and paradoxically, an increase in intellectual property rights can lead to a decrease in useful products. In a 1998 report, the House Committee on Science in the United States Congress acknowledged the 'chilling' effect of university patenting, stating that 'a review of intellectual property issues may be necessary to ensure that an acceptable balance is struck between stimulating the development of scientific research into marketable technologies and maintaining effective dissemination of research results'. Rosenberg (1998) emphasizes the continuing economic importance of sustaining basic research, rather than directing it into specific and narrow commercial applications. He shows that the majority of R&D funding (80%) is spent on already-existing products; i.e. on improvement, not innovation. He cites telephones, transistors, lasers, and computers as examples of the essentially unpredictable nature of the technological outcomes of basic research investments. Similarly, Nelson & Romer (1998) point out that basic academic research produces a multitude of new, publicly available ideas that everyone can share, thereby stimulating innovation. The enforcement of university intellectual property policies, they argue, chokes off this important source of innovation. They fear that 'instead of offering new and different opportunities for the Pasteurs of the university, policy makers may try to convert both the Bohrs and Pasteurs into Edisons' (1998:45). Modern-day Pasteurs must continue to find a place in the university, they say, if progress is to continue. 'If badly designed policies interfere with this interaction, they can do great harm'. In summary, the conditions of knowledge production are such that the details of institutional and organizational differences between the public and private sectors 'really do matter' in the open science model.
Paul David argues that the integrity of science and the scientific method depends on 'maintaining an ethos of openness and cooperation among researchers, supported by the presupposition that the reliability of scientific statements is a collective product requiring independent verification, and consequently conformity with some behavioural norms regarding the disclosure of their findings' (1995:13). As noted earlier, these institutionalist economic arguments mirror those of social critics of university commercialization, indicating a developing consensus which may be significant for future policy. But for another influential model, demarcations such as public/private and basic/applied are basically meaningless, and intellectual property is just one of the many 'intermediaries' in a knowledge production system constituted by flows, circulations, and network linkages.

The Overflow Model

The opposing view to the open science model (lacking an umbrella term, I will call it the overflow or network model) argues that changes to the knowledge production system over the last two decades are radical and irreversible, and constitute a productive force for the good. Callon (in press:3) states that the open science model defends 'Cold War institutions' that have now 'had their day'; they constitute obstacles to science's ability to contribute to economic development. Especially in the biosciences and information and communication technologies (ICTs), tight coupling and multiple linkages between state policy, university research, and industry receptors are the new norm. Public/private and basic/applied distinctions are beside the point here; what matters is the extent of the connections. The model is process-based; its intellectual antecedents can be traced from Heraclitus through Alfred North Whitehead.
What this model attempts to describe may be closer to the historical reality than the open science model, the 'purity' of which can be seen as an artefact of post-war affluence. As suggested earlier, there was a long tradition of cross-sectoral linkages in the interwar years and before. However, the degree of cross-sectoral interaction today is a marked departure from earlier times. Michel Callon27 supports the argument that the state should invest in basic research, and he is concerned about the increasingly problematic confrontation between the logic of disclosure and free circulation of ideas and the logic of proprietary knowledge and secrecy. However, he rejects the economic foundations of the open science model.28 Stories that invoke market failure to define science as a public good are wrong, in his view. 'The thesis of underinvestment in research [by the market] is becoming more and more difficult to support,' he says; 'public laboratories are one after another falling into private hands, either directly through takeovers and cooperative arrangements or indirectly through incentives and research programs' (1994:401). Rather than defining the private domain in terms of withdrawals from the public domain, as Starr does, Callon inverts the question by pointing out that a lot of effort is required to make scientific knowledge public, whereas almost no work is required to keep it private.29 To Callon, science has always been 'potentially privatizable'; to maintain it in the public domain requires intensive investments of energy by scientists and the state, and institutions like universities. In Callon's formulation, which is anchored in actor-network theory, no a priori distinction separates public and private. Instead we have heterogeneous networks—hybrid collectives—some local, some extended, in which science is constructed and circulates.

27. See for example Callon 1994, 1997, 1998a, 1998b, in press.
The more networks there are, the more scientific innovation flourishes. 'Science is a public good when it can make a new set of entities proliferate and reconfigure the existing states of the world. Private science is the science that firms up these worlds, makes them habitable. This is why public and private science are complementary: despite being distinct, each draws on the other' (1994:416). Local networks are private in the sense of 'intimate', in that the space of circulation is limited. When network science overflows the local frame, the space of circulation opens up. At the same time, however, the magnitude of investments required is enormous and tends to generate long and complex chains of associations. As the network settles into place, the links and relations become standardized and 'heavy with norms'. This tends to produce what Callon calls 'irreversibility' and economists of an evolutionary persuasion call path-dependence and technological lock-in. In other words, the network becomes self-perpetuating and the space for the circulation of new ideas shrinks. It is at this point that intervention is needed and the hard work of keeping science public must take place. Strong, stabilized networks should receive no additional public support, says Callon. Instead, support should go to encouraging the emergence and proliferation of new networks. It is the variety of academic research that thwarts the tendency to lock-in.

28. The economic details of the arguments are beyond the scope of this paper, but are fully articulated in Dasgupta and David 1994 (also David 1998a, 1998b, 2000) on the one hand, and Callon 1994, 1998a, 1998b, and in press on the other.
Established networks should be constrained by requirements to disclose the knowledge they produce and by limiting the duration of patent protection.30 But Callon (in press) admits that accounting for the 'dual movement' of scientific exploration and commercial exploitation is a difficult question, in that investments in established and profitable developments have to be encouraged at the same time as new, currently unprofitable, avenues of enquiry. In other words, without the incentives of 'open science', how do we ensure a continuing supply of basic research? To address this question, Callon directs our attention to fields such as biotechnology and ICTs that in his estimation successfully balance exploration and exploitation. These fields 'constitute veritable social laboratories in which new arrangements, devices, and rules of the game are tried and argued' (3). These areas rely as much on tacit (applied) as codified (basic) knowledge.31 Callon argues that rules on prior disclosure make tacit knowledge easily appropriable as intellectual property while codified knowledge is not, because in the latter case disclosure is difficult to contain. The main problem with the open science model, according to Callon, is that 'it is not allowed to cross boundaries' (7). While good at describing existing institutions, it has 'nothing to say about the work that transforms scientific knowledge into commercial innovations' (7). In other words, it addresses only codified knowledge and assumes the same conditions also apply to tacit knowledge. Further, it assumes we can draw clean lines between these two forms of knowledge. In contrast, Callon argues that biosciences and ICTs are 'emerging sciences' that are both autonomous and strongly connected to the market economy. 'Emerging sciences' seem to occupy a mid-point on the continuum between tacit/embodied and codified/consolidated.
In other words, they belong in Pasteur's Quadrant. Subsequent 'translations' align emergent networks and move them towards consolidation. The reverse is also the case: consolidated networks can unravel and cede to emergent ones. While more extensively theorized, Callon's model bears a close 'family resemblance' to two descriptive formulations that have been circulating in the science policy/science studies literatures since the early 1990s, when government cutbacks in research funding and enhanced expectations of commercial exploitation began to fundamentally rewrite the conduct of academic science.

MODE 2 AND TRIPLE HELIX

'Mode-2'32 and 'Triple-Helix'33 formulations emerged in the early 1990s to describe the changing conditions of knowledge production. The first argues that traditional ('Mode-1') ways of producing knowledge are being replaced by new ('Mode-2') configurations. Mode-2 knowledge is produced in contexts of application by new, transdisciplinary networks that operate along the periphery of the academy and extend beyond it. They combine heterogeneous skills and different types of expertise in flat rather than hierarchical forms that shift and recombine as the problem-focus changes. Rather than being accountable to the community of science, they are accountable to the community at large. Quality control extends beyond traditional peer review structures to include the broader set of practitioners that populates these networks. Mode-1 may be considered analogous to 'Bohr's Quadrant'. The focus on 'useful' knowledge and the context of application in Mode 2 clearly suggests 'Edison's Quadrant'. In this typology, there seems to be no room for a 'Mode 3' or 'Pasteur's Quadrant'.

29 For additional discussion on this point, see Cambrosio and Keating (1998)
30 See Cambrosio and Keating (1998) for discussion of the way monoclonal antibodies moved from local to extended networks
31 Collins (1982) provides a classic SSK analysis of tacit knowledge and scientific networks
32 See Gibbons, et al. (1994) and Nowotny, et al. (2001) for the full exposition, and Jacob (2000) for an excellent summary
33 For the model's attributes see, for example, Etzkowitz, et al. (1998)

In a complementary fashion, the 'triple-helix' model posits the recursive interaction of academy, industry, and state institutions in pursuit of knowledge-based economic development and innovation. Triple-helix proponents argue that these institutional alliances signal a new 'democratic corporatist' form that creates a new 'quasi-public sphere...in between representative government and private interests' (Etzkowitz 1997a:149-150). This new arena legitimates the state's involvement in an area that might otherwise be left to the 'invisible hand' of the market. But Saul (1995) sees little difference between the new corporatism and the goals of old, fascist-era corporatism. These were 'to shift power directly to economic and social interest groups; push entrepreneurial initiative in areas normally reserved for public bodies; and obliterate the boundaries between public and private interest—that is, challenge the idea of the public interest. This sounds like the official program of most contemporary western governments' (87-8). Integral to the triple-helix vision is an image of a new type of university—the entrepreneurial university (Etzkowitz, et al. 1998). In contrast to the 'passive' linear model, where knowledge was handed over to industry for exploitation, the entrepreneurial university capitalizes its own knowledge, thereby changing the dialectic between the university and society. The primary vehicles of change are public/private linkages and collaborations, and dedicated structures to capture, capitalize, and exploit intellectual property (Etzkowitz & Leydesdorff 1997; Etzkowitz, et al. 1998).
Triple-helix proponents firmly locate these collaborations within the productive sector of the economy (Etzkowitz and Leydesdorff 1997). Again, then, the emphasis falls on 'Edison's Quadrant'. Fuller (2000) warns against uncritical acceptance of the perceived dichotomy between new and traditional forms of knowledge organization, calling it 'the myth of the modes' (xiii). Far from being new, says Fuller, the 'institutional dawn' of Mode-2 and triple-helix models can be found in 19th-century Germany's large-scale academy/industry/state collaborations in physics and chemistry. Positing radical breaks and new eras obscures the basic continuity in knowledge production and betrays a presentist understanding of history. Rip (2000) presents the new models as rhetorical ploys ('fashionable ideas') that name features always-already present. They favour descriptions of 'revolution' rather than 'evolution', because they are normatively loaded towards entrepreneurial activities and public/private partnerships. Nevertheless, a programmatic orientation towards the new formulations has been incorporated into the science policies of most OECD countries.34

Policy Regimes

In order to understand how these models are being operationalized, we need a broad framework that will encompass the way our two cross-cutting dimensions (public/private; basic/applied) play out at the policy and program level. Arie Rip's concept of 'policy regimes' fills that role.35 Rip suggests that science policy regimes manage the mutually-dependent 'national research system': a landscape made up of interactions between research performers, funders, users, markets, and state 'incentive structures'. Policy regimes 'lock in' to particular trajectories of institutionalization. In the 1950s and 1960s the linear model of innovation and the social contract for science dominated. In the 1970s and 1980s, a flurry of activity marked 'big science'.
Today, we have the 'strategic science' regime that was initiated during the high tide of neoliberalism in the late 1980s. Neoliberal ideology advocated a comprehensive withdrawal of the state from the economy. Regardless of political complexion, governments 'all abandoned Keynesian policies and...pursued fiscal restraint, tax minimisation, deregulation, and marketization' (Marginson 1997:73). States began to divest themselves of public utilities, nationalized industries, national airlines, and controlling interests in strategic industries. Truly 'public goods'—that is, those with costs but no profit potential—could safely be left with the state (Teeple 1995); everything else belonged in private hands. At the same time, states adjusted their redistributive functions. Here too, the logic of the market prevailed. Citizens were to become self-regulating 'enterprises' and market themselves accordingly (Gordon 1991; Rose 1996). Translated into the public service, this reformist spirit became known as the 'enterprise' or 'entrepreneurial' model, more formally 'New Public Management' (NPM).36 This new culture took as axiomatic market-like principles of cost-recovery, competitiveness, and entrepreneurship in the provision of public services (Power 1996). At the same time, accounting, auditing, and accountability measures normalized the new principles and entrenched them in the public service ethos. From 1980 on, then, public funding of academic science began to be contingent on these same principles, which continue to dominate policy mechanisms. 'Neoliberal science' is Strategic Science. Strategic Science qualifies as the signature policy regime of neoliberalism across two dimensions: 'steering' (attempts by the state to impose an agenda) and 'aggregation' (institutionalized processes of agenda-building).

34 See Jacob & Hellstrom, 2000, for examples
Thus Strategic Science has developed 'more or less stabilized rules of how to proceed' towards the state's goals of wealth creation and sustainability. At the same time, an emergent new scientific establishment is 'promising to contribute to [those goals] and forging new alliances with policy makers and societal actors on this basis' (Rip 2001:4; see also van der Meulen and Rip 1996:346-7). The Strategic Science regime typically combines concerns for relevance (applied research for the private sector) with demands for excellence (basic research to enrich the public knowledge base). These ideas were coming into the policy discourse at the time the Networks of Centres of Excellence program was conceptualized, in the mid-1980s. Rip (2000) speaks of 'fashions' in ideas and the 'abstract sponsorship' ideas exercise. Ideas matter. Their power lies in their performativity. They help to 'order the world', shaping agendas and outcomes (Goldstein and Keohane 1994). Modest ideas like relevance and excellence, and big ideas like New Public Management, Systems of Innovation, and the Knowledge-based Economy disseminate widely and become dominant. In describing this effect, I have used the term 'international ideas' (Atkinson-Grosjean 2002). In science policy, 'international ideas' are a combination of principled and causal beliefs37 held by dominant international 'knowledge elites' about the economic importance of scientific knowledge, and the best way to harness science to the economy. New ideas about science and the economy tend to circulate first in epistemic communities38 of policy professionals in international organizations like the OECD, the G7, and the World Bank.

35 For the development of Rip's thinking over time, see Rip 1990, 1997, 2000, 2001; van der Meulen & Rip 1996
36 See Hood 1991 and 1995 for a full accounting of NPM more generally; Savoie 1995 for its influence in Canada
These expert communities then 'teach' the new ideas to member states,39 creating convergence around particular regimes or models.40 These organizations also supply the formal and informal structures through which policy frameworks are negotiated and ideas disseminated. I suggest that the broad outlines of Strategic Science in Canada emerged as part of this general internationalizing movement. The material effects of international ideas can be seen in the reformulation of funding priorities; new infrastructures for the exploitation of intellectual property; and initiatives such as the Networks of Centres of Excellence program. 'Excellence' is one of the defining tropes of Strategic Science. It is not an innocent term. In its fixed sense, excellence simply means 'high quality'; this is unobjectionable. But in its relative sense excellence means 'superior' or 'better than the norm'. Used in this manner, performers of 'excellent' research stand in contrast to a much broader population of average or marginal41 performers. In their critical review of the career of 'excellence' in UK science policy, Gallart & Salter (2001) point out that 'by its very nature excellence can only be achieved by a very limited number of researchers or research groups' (5).

37 See Goldstein and Keohane (1994) for a full explanation of worldviews and principled and causal beliefs
38 In a later chapter I will be describing epistemic communities of scientists, but the term was first used in relation to the international policy community. See, for example, Ruggie 1975; Haas 1992
39 Martha Finnemore's work is important here; see 1992 and 1993
40 A dialectic is at work in that many of the policy professionals are seconded from member states. According to an informed observer, one finds a mutual shaping of policy, between and among the member countries, the Permanent Secretariat, and the expert communities
These authors fear a 'Matthew Effect' (Merton 1968) that will direct funding exclusively to researchers and research organisations with established records of excellence. Not only would this restrict diversity and capacity in the research system, it would cut off the important contribution of 'average' science in areas such as training the next generation of researchers, opening up new fields of inquiry, and offering a wider field of social choices about which new technologies get developed (Gallart & Salter 2001:8). Michel Callon has argued42 that concentrating research funding on established scientists and institutions leads to less innovation than spreading funds across multiple sites. Nelson & Winter (1982) and Rip (1997) reinforce the point that variety in the system ensures possibilities for new entrants, who often sit on the margins of traditional disciplines. Similar concerns about exclusion and loss of diversity were expressed when Vannevar Bush was developing 'the doctrine of basic science'. As Bernard Cohen recalls,43 there was a fear that setting up a National Science Foundation would institutionalize 'the monolithic pressures of scientific orthodoxy' and support 'only research of a recognized kind in established fields'. In my later analysis of CGDN discourse, excellence will emerge as a dominant trope in the guise of a performative elitism, with themes of inclusion and exclusion. Actor-network theory (ANT), 'the sociology of translation', provides a way of understanding these results.

41 Gallart & Salter (2001) use the term 'mediocre' but do so polemically, to enhance the contrast
42 See the upcoming chapter in Science Bought and Sold, Mirowski & Sent (eds), as well as Callon (1994)
43 See Stokes (1995); Cohen was responding to Stokes's presentation and responses are appended to the document. Cohen was discussing the Bowman Report, the foundation document for Bush's (1945) landmark Science: The Endless Frontier
ANT helps me describe the way scientists and others in the Canadian Genetic Diseases Network continuously negotiate competing demands for excellence and relevance from the NCE program, while continuously inventing (translating) their network.

III. Translating Networks

The sociology of translation describes the politics of scientific organization and practices using a vocabulary of power, force, strategy, and negotiation (Pels 1997). As will be seen, these idioms are especially useful for analyzing the practical arrangements and power relations at work in the Canadian Genetic Diseases Network. In ANT's precursor study, Laboratory Life, Latour and Woolgar (1979) crafted 'a political economy of truth' by weaving together the economics and politics of science. Their 'integrated economic model of the production of facts' explained scientific credibility in terms of the accumulation and maintenance of symbolic capital. At the same time, however, they portrayed political competence as central to scientific work, seeing little practical difference between 'politics' and 'truth' (Latour and Woolgar 1979:213, 237; Pels 1997:10). The emphasis on power was made more explicit in the work of Michel Callon, who coined the term actor-network and defined it as a theory of translation. Using the analogy of a 'seamless web', ANT attempts to understand the materiality of the social and technical relations permeating heterogeneous materials. In ANT, distinguishing between 'facts' and 'artifacts' is neither useful nor relevant: all are actors in the network and all are treated symmetrically. Since materiality is a relational effect, it is provisional and susceptible to change. Boundaries are fluid not fixed; the emphasis is on connection, interdependence, mutuality, and flux (Bingham 1996). It is important, therefore, to stabilize actors and actants in order to maintain the tenuous stability of the network, which can quickly dissociate without constant attention.
Stabilized facts, practices, and artifacts—those under temporary control—are 'black-boxed.' For the moment at least, they are no longer questioned or considered controversial. Power and agency are relational effects of networks 'acting at a distance' by 'remote control.' The achievement of action at a distance is exemplified by the concept of centres of calculation (Latour 1987) or centres of translation (Callon 1986), where the ability to control actors at the periphery translates into power at the centre. These key ANT ideas have found their way into post-Foucauldian theories of governmentality, illustrating not only ANT's conceptual fertility but also its location between dual 'repertoires of disenchantment': Nietzsche and Foucault on the one hand, and Marx and Bourdieu on the other (Pels 1997). Power flowing through networks accumulates in the hands of actors who are able to enrol the most allies, translate their interests, and act as their spokesman (Callon 1986; Callon & Latour 1981). Power and agency lie in this ability to intervene between forces and stabilize power relations. The most powerful actors—those who assume a network's leadership and become its spokesperson—are those who enrol the largest number of irreversibly linked allies (Pels 1997:11). Rather than being delegated by pre-existing groups to speak on their behalf, spokespersons actually create the groups they speak for, by the very act of assuming the role of spokesperson (Cambrosio, et al. 1990:214). For example, Latour (1988) shows that Louis Pasteur 'the scientist', who made fundamental discoveries in microbiology and public health, is inseparable from Louis Pasteur 'the politician', who skillfully translated and mobilized legions of microbes, farmers, laboratories, and other allies to create new sources of social power and legitimacy. Pasteur became 'Pasteur', the authorized spokesman and exclusive interpreter for the heterogeneous multitudes he enrolled. 
For Latour, a sociology that concerned itself only with 'social facts' and 'social relations' would miss the most interesting features of science as a political practice. The sociology of science needed to be redefined as the science of strong or weak associations (Latour 1988:40; see also 1986). As ANT developed, the linkages between science and politics became so firmly embedded, and the concept of networks so extensive, that everything was explained in network terms. All social relations, including power and organization, were treated as network effects (Law 1992:379). In a seminal paper, Lee and Brown (1994) argued that ANT 'plays god' when it claims the ability to know the whole world through networks. They ascribed a 'Nietzschean world-view' to ANT: one which 'simultaneously secures the universal applicability of its political metaphorics, and stretches the notion of relational power...to cover everything' (778). In claiming the right to speak for all, said Lee & Brown, ANT risks becoming 'yet another ahistorical grand narrative'. It was this paper that began the reflexive self-questioning that spawned a 1997 workshop on what 'comes after' ANT,44 as well as a subsequent book of the same name (Law and Hassard 1999), and a whole new literature to which Callon's analysis of 'overflowing' networks belongs. Despite the 'imperialist' tendencies Lee & Brown warn against, it is precisely ANT's 'totalizing tendency'—its ability to fully account for the workings of power in network relations—that makes it such an appropriate analytical tool for my case study of the Canadian Genetic Diseases Network. However, like micro-studies of science in general, ANT is less than helpful when it comes to accounting for the structural relations between CGDN and the NCE program.
44 Actor-Network Theory and After, Keele University, 1997
45 Critics of the changing milieu of academic knowledge production view phenomena such as patenting and public/private research partnerships as evidence of the intrusion of global capital and market ideologies into academic institutions. But in practice-based approaches, as Knorr-Cetina (1995) admits, these wider concerns disappear.

Structural Issues

Relational approaches to the study of science ask 'how' questions about the micro-level of knowledge production (Knorr-Cetina & Mulkay 1983:6). The focus is on detailed, ethnographic description of local practices, or close historical study of specific episodes. The key is simply to 'follow the actors' (Latour 1987) at the actual site of their scientific work. Explanation emerges once description has been saturated, or pursued 'to the bitter end' (Murdoch 1995:731). With such a strong focus on the local, surrounding institutions tend to become epiphenomenal 'scale effects' of relational networks. The entire research system can be viewed as a contingent outcome of the 'powers of association' attached to networks. Causal accounts are abandoned. Social and normative 'why' questions disappear in the minutiae of mundane 'how' questions (Shapin 1995b). Political-economic issues vanish into the local politics of research.45 Like Winner (1993), Kleinman (1991; 1998), Fuller (1992), and others, I find this not only unsatisfactory, but also methodologically unsound. To me, the micro-focus neglects important already-existing structural and institutional features that constrain individual and collective actors. One of the goals of this study is to encourage ANT towards something it has long sought to avoid: full engagement in the agency/structure debate and a more satisfactory accounting of formal institutions. ANT tends to fall into infinite regress when attempting to account for structural features. Keating & Cambrosio frame the problem as follows:
the fact that traditional sociological dichotomies (macro and micro, social and technical, nature and culture) are inappropriate tools for describing and analyzing scientific and medical practices...has been a leitmotif of many recent contributions to the science studies field. Yet, once the ritual rhetorical ceremony of excommunicating the usual dichotomies has been performed, the question remains of [what] analytical frame...will allow us to move [toward an] appropriate account of, say, the development of biomedicine in the last half-century (2000:385)

I think we can meaningfully speak of 'structure'—and study its effects on the way science is organized and done—without reifying it. One way is to use the gerund form: 'structuring'. Another is Giddens' notion of structuration.46 Law (1992) has proposed 'punctualization' to denote networks that 'run wide and deep—the seemingly macrosocial—[that] can be more-or-less, most of the time, taken for granted' (Law 1992:385). Accepting that 'divide talk' is ultimately meaningless, and that continuity overrides all these distinctions, perhaps this is a good enough place to begin the structural (policy/program) level of my study. But in order to undertake the micro-level study of the Canadian Genetic Diseases Network, I first had to overcome another methodological problem associated with science studies.

'Studying up'

As Shapin points out, science studies is 'one of the few sociological specialities...that aims to interpret a culture far more powerful and prestigious than itself...[and]...few students come equipped with relevant competencies in the natural sciences' (1995b:293). He calls this the problem of 'studying up'.47 By proposing to enter the social world of medical geneticists without being an initiate, I had to deal with the issue of whether or not I could, or should, acquire linguistic competence in the field.
46 There are so many other differences between ANT and Giddens' theorizing that this does not seem practical, but the term itself is still suggestive
47 For another perspective on 'studying up' see Bronwyn Parry's (1998) interesting account of her attempt to study 'elite networks' of senior executives in big pharma and biotechnology

Latour & Woolgar (1979:27-8) called Laboratory Life, their pioneering study of scientists-in-action, an 'anthropology' of science. The study was an ethnographic investigation, grounded in participant observation, of one specific group of scientists in one specific setting. Using anthropological means, they hoped to penetrate the 'closed-shop' status of science, and open up scientific claims, by breaking down the mystique of scientific objectivity. In order to understand the tribes they study, anthropologists usually attempt to acquire linguistic and cultural competence by immersing themselves in the field. In contrast, Latour & Woolgar made it a methodological principle to maintain their 'anthropological strangeness' in regard to their subject matter. Although conducting a field-based study, they made a point of maintaining critical distance. They decided an understanding of science was not a necessary prerequisite for understanding scientists' work. On the contrary, 'the dangers of going native outweigh[ed] the possible advantages of ease of access and rapid establishment of rapport with participants' (29). Thus Latour & Woolgar's stories of 'laboratory life' were accounts based on 'the experiences of an observer with some anthropological training, but largely ignorant of science' (30). In the land of science, they chose to be 'the stranger' (Simmel 1950), a mixture of presence and absence, proximity and distance (Shields 1992). But to what extent can strangers, ignorant of the 'native language', expect to penetrate the meaning of activities they observe and document?
Certainly strangers may be able to observe without bias but, on the other hand, they may utterly misinterpret what they observe. Alleged misinterpretations by science studies researchers have provided ammunition in the 'Science Wars'.48 Physicist Alan Sokal argues that our case studies are often contaminated by 'extremes of subjectivism, relativism and social constructivism'.49 Even science-studies scholar Steve Fuller admits that science studies practitioners often appear to be 'carping from the sidelines' (ibid.) and argues that researchers should acquire at least a basic level of scientific literacy. The solution, according to Harry Collins (2000), who has 'studied up' for decades, is to differentiate the types of competence required. Science studies researchers do not need 'procedural expertise', the ability to do the science, but they must develop 'interactional expertise', the ability to talk knowledgeably to experts in the field. I set out to gain interactional expertise by immersing myself in readings about medical genetics and molecular biology, both prior to and during the study. I relied mainly on journals like 'Science', 'Nature', and 'Nature Genetics'. While I found the 'empiricist repertoire' of the scientific sections of these journals almost impossible to penetrate, the 'news' and 'features' sections were couched in an informal 'contingent repertoire' and proved much more accessible.50 I also tracked developments in the field by subscribing to electronic lists like Medscape's Molecular Medicine MedPulse and Science Week, as well as activist monitors like Genetic Crossroads and Loka Alert. As with any language, however, I found the best way to learn was to hear it spoken.

48 This debate is well beyond the parameters of my study. For more information see, for example, Koertge 1998; Segerstrale 2000 and others
49 The comment derives from my interview with Sokal for Atkinson-Grosjean, 1997:11-12
I gained most of the interactional expertise I needed to complete the study by interviewing informants, participating in informal conversations, and paying close attention to the papers and posters at CGDN and HUGO scientific meetings.

IV. Summary

This study rests on the tension between two cross-cutting dimensions: public/private and basic/applied; it pays particular attention to the separating '/'. This '/' represents the overlapping interstitial spaces in which the 'open science' model and the 'overflow' model offer their competing explanations. The 'open science' model was the dominant policy regime of the postwar years. It enacted a social contract for science and a linear system of innovation that justified unfettered government funding for basic science. The 'overflow' model captures the Zeitgeist of 'neoliberal science'. Examples include 'Mode-2' and 'triple-helix' formulations. In the Strategic Science policy regime, governments are more interested in funding research with direct application than in funding basic science. They deploy a dual rhetoric of research excellence and commercial relevance. Funding is contingent on cross-sectoral partnerships, market applications, and the formation of research networks. The sociology of translation—actor-network theory—offers ways to understand the complex interactions that take place in the network forms of scientific organization that emerge under this regime.

50 See Gilbert and Mulkay, 1984, for a discourse analytic approach to science studies

CHAPTER 3. SCIENCE POLICY IN CANADA AND THE NCE EXPERIMENT

A review of Canadian science over the past century confirms the hypothesized fundamental continuity and absence of radical breaks. Continuity can be seen in longstanding R&D relationships between the public and private sectors and in the federal government's historical commitment to the commercial relevance of publicly funded science.
The broad periodizations and policy regimes discussed in the previous chapter, and the onset of Strategic Science, can be clearly discerned in the Canadian case. In the field of policy studies, analysts identify three mutually interacting influences that shape and constrain the business of policy formation. Powerful ideas, powerful institutions, and powerful interests act as gatekeepers to the process of agenda-setting. These three 'structuring' influences can be seen at work in the historical development of Canadian science policy and public science institutions described in the first half of this chapter. The second half of the chapter focuses on the formulation and implementation of the Networks of Centres of Excellence program as an instrument of Strategic Science policy.

I. Historical Influences on Policy

Historian Donald Phillipson51 suggests the Canadian state has had an abiding interest in the economic relevance of science and in promoting public- and private-sector interactions. He suggests three principal reasons why this might be the case. First, consistent with interest-based explanations, until quite recently 'everybody knew everyone else and everybody that mattered' at the senior levels of industrial, academic, and government science. For a century up to the 1960s, science in Canada was very much the enterprise of a small elite group of men from similar socioeconomic backgrounds52 who held interlocking positions of power.53 Their networks of influence went 'up' to the politicians, 'down' to the top Canadian talent in their own fields, and 'sideways' to senior scientists in other fields. This is illustrated in C.J. Mackenzie's response when a journalist asked whether it was difficult to get government approval when the National Research Council established a nuclear research unit during the Second World War. Mackenzie, then President of NRC, replied: It was surprisingly easy. In those days the NRC reported to C.D.
Howe [then Minister of the Department of Trade and Commerce].... C.D. was a particular friend of mine.... We all went to C.D.'s office and discussed the idea with him. I remember he sat there and listened to the whole thing, then he turned to me and said: 'What do you think?' I told him I thought it was a sound idea, then he nodded a couple of times and said: 'Okay, let's go.' (B. Lee, 'The Atom Secrets,' Globe Magazine, October 28, 1961; cited in Porter, 1965:432)

For most of the country's history, policy making was personalist (Phillipson 2000). It operated on social capital rather than academic or scientific capital. Decisions were made on the basis of whom one knew. So the story of Canadian science policy is in large part the story of the people who made it. The evolution of policy attitudes towards the respective roles of basic and applied science reflects the evolution in elite ways of thinking on the topic. Although the influence of elite interests has become more subtle in recent years, it remains a major factor: 'This is Canada.

51 The historical background presented in this chapter relies heavily on Phillipson (1983) and Phillipson (1991), but more especially on our personal correspondence. By virtue of his oral history projects in the 1970s and 1980s, Phillipson is an authority on the National Research Council and the evolution of Canadian science policy. He has communicated an enormous amount of background material to me in a series of letters over the period 1998-2001. His collegial willingness to share his scholarship has enriched my understanding and I acknowledge his contribution to this policy history, which in many cases draws directly on our correspondence. Parts of this chapter appeared in Atkinson-Grosjean, et al. 2001 and Atkinson-Grosjean 2002 (forthcoming)
52 Most were Canadian-born of British extraction, middle-class in origin and Protestant
53 See Porter 1965:507-11. For the operation of the US 'power elite' see Mills, 1956
When these people speak others listen.'54

A second element identified by Phillipson relates to institutions. Boundaries between public and private in Canadian science are quite unstable and tend to evolve fairly quickly in institutional terms. Phillipson (1991, 2000) provides the example of the Ontario Research Foundation (ORF). Founded by the province in the Depression era as a rival to the federal National Research Council, ORF was transformed into a successful autonomous public industrial laboratory, a Crown agency, in the 1950s. Later, it was 'privatized' as a state-owned corporation. Subsequently, the shares were bought by a commercial company. Another example is the Canadian Standards Association (CSA). Founded in the early 1920s as a government-funded advisory committee of researchers and industrialists, it was incorporated as a company in 1940, with the approval of a government preoccupied with war research. CSA then moved its laboratories from Ottawa to Toronto. Here, it became a self-financing independent institution, and is still authorised to promulgate and enforce standards.

A third element is 'ideas-based'. Awareness of other national models—predominantly American and British—has always shaped what was implemented in Canada, whether in the early 20th century or the early 21st. In comparison to other advanced nations, we tend to feel we lag scientifically and this has always influenced the projects undertaken. 'The country is dogged by a national inferiority complex' (Phillipson 2000). As described more extensively earlier, the influence of policy 'fashions' from international forums like the OECD and G7 can be clearly discerned in the formation of Canadian policy. Canada's National Research Council, for example, founded in 1916, was an example of convergence with similar bodies in Britain and the USA.

54 University administrator cited in ReSearch Money editorial; Henderson (2001).
Taken together, the interests of powerful elites and the trade in international ideas tend to promote convergence around generalized policy regimes. However, the historical particularities of a nation's institutional and cultural legacies represent a countervailing force for divergence (Banting, et al. 1997). In other words, we put our own stamp on what we adopt. The Networks of Centres of Excellence program is an example. While the phrase 'centres of excellence' was appearing with increasing regularity in the international policy discourse at the time, networking centres of excellence together was a specific solution to the peculiarities of Canadian geography (sheer size and diversity) and 'soft federalism' (powerful provinces and the requirement to serve all regions equally). Canada's constitutional arrangements represent a longstanding constraint on federal science policy. Universities fall under provincial jurisdiction, putting them beyond direct federal reach.55 Historically, federal control of research funding emerged as one of the few avenues for shaping the 'national' role of universities within the 'knowledge-production system'. But until at least the 1960s, universities were not major players in the research economy. The majority of public science—historically defined in terms of utility and industrial relevance—was conducted by the National Research Council (NRC).

Public Science in Canada

From its inception in 1916, NRC's 'public' mission was to serve 'private' needs by directing its research towards 'the most practical and pressing problems indicated by industrial necessities.' The obligation to serve industry was literally graven in stone above the doors of the laboratories on Sussex Drive in Ottawa.

55 The federal government funds university operations through transfer payments to the provinces but it has no direct influence on these institutions and receives little credit for its funding role.
Public science was defined not as the search for knowledge, but as the search for solutions. As one of its first tasks, the NRC set out to gauge the state of industrial research in Canada. Survey results showed that only 37 of the 2,800 firms responding performed research on an ongoing basis, and most of these employed only one researcher (Thistle, 1966: 29). There was little for NRC to coordinate, therefore, and a clear national need to develop a critical mass of researchers. This conclusion motivated the 1917 introduction of NRC-funded post-graduate scholarships in the sciences at selected universities (Thistle, 1966: 26, 127). Shortly after, the idea of constructing institutes for industrial research on university campuses began to circulate. But this heresy was briskly disposed of when proponents discovered that university faculty were adamantly opposed to 'bargaining with manufacturers'.56

56 The inquiry was conducted by Hume Cronyn's parliamentary sub-committee struck in April 1919. The 'bargaining' quote is attributed to Professor Lash Miller, University of Toronto, Cronyn Committee Proceedings, June 4, 1919, p. 99; cited in Lamontagne report, 1970: 31.

Canadian universities modelled themselves on the humanistic traditions of Oxbridge, where the focus was scholarship and teaching. To undertake research was unusual; to undertake research for industry unthinkable. NRC's views were much the same, arguing that universities would subvert their role by conducting industrial research. NRC itself became increasingly drawn to fundamental enquiry, if only to retain its researchers. Between 1916 and 1940, NRC's workforce expanded from one employee to 2,000; its annual budget from $91,600 to almost $7 million.57 NRC's wartime expansion allowed Canada's academic scientists to work closely with British and American colleagues on the front lines of basic advances in knowledge of microwave techniques, jet engines, digital computers and nuclear power. They were intent on continuing this momentum into the postwar era, but conducting research within Canadian universities was still a 'fringe' activity. For example, C.D. Howe's Office of Supply and Reconstruction began an annual inventory of university research in 1946 but abandoned the project in 1949. Scientists were 'faking the results, to conceal from university authorities how much they were diverting from teaching to spend on research' (Phillipson correspondence). Universities were preoccupied with educating returning war veterans and other undergraduates. Research was not a priority.

But by then, the linear model of innovation was beginning to circulate as an 'international idea'. In 1951 the Massey Commission58 articulated the model's pipeline metaphor in noting the importance of fundamental research in priming the pump that eventually produces industrial products and applications. 'Without fundamental research,' said the commissioners, 'there can be no proper teaching of science, no scientific workers and no applied science' (175). In the commissioners' view, basic research was most properly housed in universities, which should be adequately funded to conduct it. The Commissioners strenuously opposed the idea that publicly funded laboratories should undertake research for industry, fearing that it would deaden the scientific imagination and stall the advancement of knowledge.

applied research...cannot be expected to add in any way to the knowledge of scientific principles. Occasionally private donors offering research grants require that research projects be approved by them. University authorities generally agree with scientists that these gifts should be steadily refused. (Massey report, 1951: 177)

From 1952 on, when Dr. E.W.R.
Steacie took the helm of NRC, support of basic research in universities became a key Canadian policy goal.59 In line with the logic of the linear model, funding university research was seen as the best way for NRC to achieve its long-term mandate to serve industry. As Steacie said, 'it is absolutely impossible to have first-rate industrial research without first-rate university research' (1965: 159-160). As in the US, the 1957 'Sputnik shock' had a salutary effect on research funding, helping to cement the state's commitment to basic science. Federal expenditures devoted to R&D grew from an estimated $5 million in 1939 to over $200 million in 1959.60

57 Lamontagne report, 1968-77, vol. 1: 61.

58 The Royal Commission on National Development in the Arts, Letters and Sciences, 1949-51.

59 Steacie left McGill University to become head of NRC's chemistry division in 1939. He was appointed NRC's vice-president in 1950 and president in 1952, holding the latter post until his death in 1962, at which time he was widely acknowledged 'the leader of Canadian science' (Babbit, 1965: 3).

But the policy climate began to change in the decade following the Massey Commission's report. A speculative paper submitted in 1957 by the [Gordon] Royal Commission on Canada's Economic Prospects envisioned the roles that science might assume in the distant future, setting the stage for more intense debate on the status of science in national progress and economic development. In 1962, having examined the federally funded research system, the Glassco Commission concluded that the system had failed. Glassco singled out the NRC for blame, arguing that its (vested) interests in basic 'public' research had been promoted at the expense of applied 'private' research.

One of the original purposes of government in devoting money to research was to encourage and stimulate Canadian industry.
From being a primary goal this has, over the years, been relegated to being little more than a minor distraction.... At present there is a wide-spread feeling that fundamental research is the only activity adequately recognized within the National Research Council. (Glassco report, 1963, vol. 4: 230, 271)

In short, Glassco famously concluded that NRC had 'turned away' from industry. According to a funding distribution in the late 1960s, 91% of the NRC budget was allocated to university research and its own laboratories (50% and 41% respectively), while only 9% was allocated to industrial support and information services (5% and 4% respectively) (Hayes, 1973: 38-39). Commenting on reactions from NRC's scientists and bureaucrats, the OECD noted that 'many, no doubt, recognised that there were grounds for the criticism expressed by the Commission, but the majority protested against its recommendations' (OECD, 1969: 63).

Following Glassco's recommendations, a Science Secretariat was established in 1964 and the Science Council of Canada began operations in 1966. Overall, however, the Glassco framework was fundamentally undermined by a report to Prime Minister Lester Pearson by C.J. Mackenzie, former NRC president, who advised against the substance of the findings. The personalist system protected its own. Nevertheless, the Glassco report established a policy climate more hospitable to the applied/private side of the matrix.

60 Lamontagne report, 1968-77, vol. 1: 64.
A number of government initiatives intended to bring academic research closer to the needs of industry were designed in the 1960s.61 By the end of the decade, the Glassco Committee's main criticisms were echoed in several other policy documents, including the Science Council of Canada's 1968 report Towards a National Science Policy for Canada and an extensive survey of Canada's science and technology infrastructure by OECD examiners (1969). The OECD and Science Council reports substantially contributed to the decade-long deliberations of the Senate's Special Committee on Science Policy chaired by economist Maurice Lamontagne, 1968-77. Lamontagne provided an exhaustive analysis of Canada's overall R&D system; the role and performance of federally funded science wherever it occurred; and the culture of science in Canada. At the core of the findings was an attack on the scientific elitism that had driven Canadian science policy since 1916 (Vol. 1: 268). Steacie's proud comment that Canada stands out among the nations by recognizing 'the fundamental fact that the control of a scientific organization must be in the hands of scientists' became an indictment (1965: 119, cited in Vol. 1: 269). Such freedom, the committee argued, 'cannot be justified as a general principle for the organization of scientific progress when the tremendous cost of research has to be met mainly by public funds and when the good and bad effects of science and technology on society are becoming so far-reaching' (Vol. 1: 270-271). Steps needed to be taken to bridge the gap between science and industry, and federal funding should affirm and reflect the priority of applied research (vol. 2: 521).

61 Among these, the Industrial Research Institute Program, established by the Department of Industry in 1966, provided grants to universities to establish institutes where they could work with industry and undertake contract research on their behalf. Legislative tools were also introduced; in 1967 government passed the Industrial Research and Development Incentives Act which was intended to foster academy-industry collaboration in research aimed at solving industrial problems. As well, in 1969 the NRC announced a grants program for universities that emphasized the promotion of industrial development through 'centres of excellence' aimed at fostering a regional balance of scientific and technological expertise. However, plans for this program were vague.

Lamontagne was enthusiastic about the whole business of planification—economic forecasting and planning—and its potential for fostering innovation. The latter word entered the Canadian policy discourse about halfway through the 'Lamontagne decade'. Seduced by this emerging 'international idea,' the committee also embraced 'the new quasi-economic discipline of science policy that went along with it' (Phillipson correspondence). Committee members and staff were thus 'naively enthusiastic about both (a) the notional completability of the Science Policy model...and (b) its political appeal to actual politicians' (Phillipson correspondence). In politics, extensive data is superfluous to the decision-making process. Politicians do not wish to be confused by too many facts. As Cohen, et al. (1972) classically demonstrated, they operate from a 'garbage can model of rationality'. Consequently, despite the years of effort that went into it, the Lamontagne report, too, 'fell dead from the press', failing to find a place on the agenda of the Trudeau administration (Dufour & de la Mothe, 1993: 21, ft. 13). The power of entrenched elites to resist unwanted change is formidable, but so is the power of new elites to advance change, once the correct tools are in hand. Many remained convinced that the role of public science was to foster industrial innovation and economic expansion and that NRC, with its focus on the advancement of knowledge, represented an impediment to that enterprise.
As a Crown corporation, however, NRC was beyond direct political and bureaucratic interference. The only way to control it was to systematically strip away its budgets and responsibilities and transfer them to another, more subordinate, agency.62 In 1971, a Ministry of State for Science and Technology (MOSST) was created (as both Glassco and Lamontagne had recommended), replacing the existing Science Secretariat. In 1977, NRC's responsibility for supporting university research was devolved to a new agency, the Natural Sciences and Engineering Research Council of Canada (NSERC), which then fell under the administrative authority of MOSST. In 1978, MOSST also assumed authority over the Social Sciences and Humanities Research Council of Canada (SSHRC) after the Canada Council was reorganized. This restructuring gradually eroded the autonomy of all granting councils.

Science and technology policy edged gradually towards the top of the political agenda. The first G7 summit meeting, in 1982, revealed that Canada had the lowest R&D investment in the G7.63 A Scientific Research Tax Credit was introduced to stimulate investment. It was a flawed instrument, open to abuse, and required a number of revisions to correct the deficiencies, but it marked a major policy innovation. As a result of the changes introduced then, Canada established—and still boasts—the most generous R&D investment and tax climate in the G7 nations. The following year, as the Liberal Party came to the end of its long postwar mandate, several reports established the need to tie government support of public research to commercial relevance. In 1984, with the election of a Progressive Conservative government, the momentum towards a national science policy accelerated, and the neoliberal agenda came into play.

62 This is what eventually happened to the Science Council of Canada, disbanded in 1992 along with other autonomous agencies.
After a period of intensive federal/provincial consultation, a national science and technology policy was formally signed in March 1987. Details of InnovAction: The Canadian Strategy for Science and Technology—a $1.5 billion 'package'—were announced the following month. MOSST would be subsumed into a new 'superministry'—Industry, Science, and Technology Canada (ISTC)—a combination that clearly signalled the alignment of science and commerce. Legislation would provide $240 million for a new 'flagship' strategy: the Networks of Centres of Excellence (NCE) program.

II. Evolution of the NCE Program

The NCE program is an example of the way international ideas, existing institutions, and socioeconomic interests interact under a policy regime of Strategic Science.64 The policy innovation was to bring ideological concerns for commercial relevance and research excellence together with the concept of distributed research networks to form networks of centres of excellence. Now that 'networks' are so associated with computer imagery, it is hard to remember that this was not always the case. By way of policy studies and science studies, the network concept was just then becoming a 'fashionable idea' in its own right, as a way of thinking about the organization of science. This section presents an analysis of the evolution of the NCE program within the policy context outlined above. The data derive from examination of policy documents and interviews with key players involved in the program's formation. Although many of the sources interviewed for this part of the study belong to the scientific culture (most have at least one degree in the sciences and a background in government or university science), here they represent the science policy culture and the 'official' perspective. Most were associated with the federal government, either as past or present employees or policy advisors.
The decision to embark on the Networks of Centres of Excellence program was made in an ideological climate that promoted the outright privatization of public-sector functions. Where this was not possible or desirable, public-private partnerships were preferable to maintaining public-sector monopolies. Most new65 initiatives in science and technology partnerships saw their beginnings at this time. According to Niosi (1995: 34-35), Canada's provincial and federal governments launched over one hundred new intersectoral research partnerships during this period.

63 This remains a chronic problem. Only Italy has a lower R&D:GDP ratio. Finance Minister Martin has made increasing the ratio a key commitment for the 2001 to 2003 fiscal period.

At the provincial level, Quebec's Programme d'actions structurantes started in 1984-85 with forty networks of university and government laboratories. Ontario's eight Centres of Excellence were established in 1986. In 1987, Quebec pioneered the Centre d'initiative technologique de Montreal (CITEC) at McGill University. At the federal level, Industry, Science, and Technology Canada (ISTC, later Industry Canada) emphasized public-private partnerships and collaborations. Both the natural science and engineering and medical research councils (NSERC; MRC) actively supported collaborative targeted research. NSERC started to fund 'big science' networks in the early 1980s — in the earth sciences (Lithoprobe) and integrated circuit design (Canadian Microelectronics Corporation). During 1987/88, the budget year prior to the establishment of the NCE, 15 percent of NSERC's total budget went to targeted research. (For further discussion see Friedman and Friedman, 1990, and Niosi, 2000.) In late 1987, delegates to the National Forum on Post-Secondary Education raised the idea of centres of excellence that would emphasize interdisciplinarity and involve networks of researchers representing several institutions across Canada (National Forum 1987).
In 1988, the Science Council of Canada advised that prosperity depended on integrating the university with the marketplace (Science Council 1988). Reinforcing this theme, the National Advisory Board on Science and Technology (NABST) recommended that 'greater emphasis be given to funding generic pre-competitive research collaboration by university-industry in research consortia' (NABST 1988: 76). This complex of initiatives and recommendations helped provide a foundational platform for the January 1988 launch of the NCE program.

64 Some material in this section appeared in Atkinson-Grosjean (2002) and Fisher, Atkinson-Grosjean and House (2001).

65 There were older initiatives. The Pulp and Paper Research Institute of Canada (Paprican), founded in 1925 at McGill University, represents perhaps Canada's most enduring example of a state-academy-industry alliance (C-HEF 1987: 45-6). Another enduring initiative is the NRC's Industrial Research Assistance Program (IRAP), launched in the 1960s, of which more will be said shortly.

Models for the NCE Program

The NCE program was designed as a hybrid of two influential models, one governmental and associated with industry, one non-governmental with no industrial affiliations. The first was NRC's Industrial Research Assistance Program (IRAP), established in 1962; the second, the Canadian Institute for Advanced Research (CIAR), founded in 1981. IRAP dates from when the NRC still ran along personalist ('old boys' network') lines. IRAP's prehistory was as the Technical Information Service (TIS), founded by Mackenzie in C.D. Howe's Department of Reconstruction and Supply in 1945 and reenergized in 1962 by a retired air marshal named Ralph McBurney. TIS gave 'knowledge subsidies' to industry in the form of technical advice. The 1962 innovation added cash subsidies as well. IRAP would give grant funding to industry for private research, in the same way that universities received grants for public research.
According to Phillipson, the idea of giving public money to private industry 'was such an extraordinary precedent that it took a year's preparation by the Advisory Panel on Scientific Policy and required Treasury Board and Cabinet approval' (Phillipson correspondence; see also Phillipson, 1983). As well as having an innovative approach to industrial research, the IRAP program was organized as a solution to Canada's geographical challenges. Rather than hire technically trained civil servants to give hands-on advice to all sorts of different industries, in every region of the country, IRAP created a mechanism for borrowing them. Approximately two-thirds of IRAP's field agents were locals, co-opted from industries, universities, and professional associations in the region. They were paid by their own institutions, which received salary support from IRAP to release them. According to a former IRAP director, these agents constituted a 'field army' (NRC 0101) who knew their regions, closely identified with their industrial clients, and enjoyed an enormous amount of autonomy from the Ottawa bureaucracy.

These Industrial Technology Advisors, as they were called, were gateways in extended networks of resources and facilities. Through them, small and mid-sized enterprises (SMEs) had access to some 130 public and private research- and technology-based organizations that were partners in the field network. In the manner that John Law (1992) calls 'heterogeneous engineering', industry clients, their technical problems, technology advisors, provincial labs, federal labs, industry labs, engineering prototypes, and federal money were all linked together in long-chained networks dedicated to helping Canadian SMEs innovate.66

The networking model that began with IRAP was clearly focused on the technical needs of industry. In contrast, the Canadian Institute for Advanced Research (CIAR), launched some twenty years later (1981) by Dr.
Fraser Mustard, a distinguished medical scientist, was a networking model concentrated exclusively on fundamental enquiry. Mustard and his associates promoted the idea of focusing the basic research effort in a limited number of fields where Canada had a strategic advantage and could make an original contribution. Certainly, elevating the overall pool of knowledge would benefit industry in the long run, but no immediate applications would be forthcoming. CIAR was conceived as an 'institute without walls,' a network that would link together outstanding researchers in institutions across Canada. According to those involved at the start, the idea came out of a dissatisfaction with existing arrangements and a realistic sense of the way knowledge works. To deal with complicated problems, some sort of institutional structure was needed that would override disciplinary and geographical barriers to the full exchange of knowledge. As well, the geographical constraints suggested that 'the simplest way to try to move fields was to opt for an institutional structure that invested in people rather than research' (OTHFM-2).

66 See Callon 1997 and 1998 for analysis of the market significance of these networks; these should be read in relation to Granovetter's (1985) notion of 'embeddedness' in relation to economic action.

CIAR raised funding from federal and provincial governments and from private donations, but the funding was 'unencumbered and in no way strategic' (OTHPB). CIAR's mandate was the pursuit of fundamental knowledge for its own sake, without need for 'deliverables' or industry partnerships. Industry was viewed as 'a user of the knowledge generated, rather than a collaborative partner' (OTHFM: 1). Funding was used to underwrite networking interactions and to buy out researchers' time at their home universities so CIAR members could pursue research on fundamental questions.
The only criterion was that 'five years from now you're going to be reviewed by an international panel who will see if you have shifted the world community on how it views that question, in terms of its understanding' (OTHPB-12).

In 1986 Mustard became co-director of the committee that was designing the main features of Ontario's Centres of Excellence program, which was launched in June 1987. According to a senior civil servant, Mustard predicted that these new research centres would draw 'key researchers from across the country to Ontario's universities and Ontario's centres,' making it extremely difficult for universities in other provinces to retain the best researchers (NCE-DH: 4). As a former NCE program officer put it, 'like a vortex all the best science would migrate to Ontario' (NCE-EI: 4). Earlier in the year, Mustard and one of his associates in CIAR, Dr. Patricia Baird, had been drafted onto NABST. Not surprisingly, therefore, it was NABST that brought forward the idea of creating CIAR-like national networks in the fundamental sciences, to counter the Ontario initiative. The target would be fast-moving, high-profile, competitive fields that had technological implications in the relatively short term. At that stage, direct links to industry were not part of the plan. The rationale was that effective strategic or applied research programs required a good fundamental research base.

The Minister and Deputy Minister of Industry, Science, and Technology Canada paid attention to the NABST recommendations. Clearly, the federal government needed something to balance the Ontario initiative. The idea of creating 'virtual' CIAR-type networks, rather than 'fixed' Ontario-type centres, was especially attractive 'because there just wasn't enough money to create dozens of new centres around the country' (civil servant, NCE-DH: 4).
The question regarding the relative merits of 'fixed' and 'distributed' centres originated in the postwar Kilgore/Bush debate regarding the creation of the National Science Foundation (see Chapter 2) to promote basic research; the issue was whether the NSF should follow a 'centre of excellence' model or one that favoured a more geographical distribution of funding.67

While interested in the network model, the Ministry was not convinced that a focus on excellence in basic research was the correct route. Government wanted to see far more in the way of relevance—technology transfer to industry. The outcome was a blend of IRAP and CIAR. Like the latter, NCEs would invest in people (researchers), rather than bricks and mortar (universities and hospitals), and would be free to undertake fundamental enquiry. But, like the former, they would partner with industry and concern themselves with industry needs. As with IRAP and CIAR, network researchers would be paid by their own institutions but would build a strong sense of belonging to a larger national entity. But in contrast to both, NCEs would be parasitic on their hosts (Newson 1994). Universities and hospitals would receive no compensation for paying the salaries and benefits of network researchers, providing space and equipment, and covering laboratory overhead. NCE funds would flow to the researchers through separate 'network offices' which would have no duty of accountability to the university.68 Because their reporting allegiance was to the NCE directorate in Ottawa, these new networks would 'float' above existing institutions (Clark 1998). They would provide the federal government with direct access to provincial university systems, overriding traditional autonomy (OTH-DR).

67 Thanks for this point go to my correspondent, Andrew Russell, of the University of Colorado-Boulder, who is studying the development of computer research in the US during the Cold War.
The networks would create a national research capacity open to the needs of industry and the economy. The compromise balancing 'relevance' and 'excellence' was the outcome of sustained bureaucratic struggles to capture control of the NCE initiative. The battle between the Ministry and the research councils was so fierce that it quickly became a case study (Pullen 1990) for the federal civil service training institute.

Territorial Struggles and Program Design

Although the federal bureaucracy had been awash in rumours that a major reform of research funding was being planned, the prime minister's announcement in January 1988 came 'out of the blue and without any consultation' with the three granting councils responsible for university research (program officer, NCE-EI: 2). The research council presidents quickly forged an alliance to prevent the NCE initiative from being implemented without their input. The president of NSERC assigned two staff members to observe how the Prime Minister's Office was handling the new program and instructed his staff to develop alternative plans (Pullen 1990). A senior NSERC administrator interviewed the consultant hired to develop the program and concluded that the objectives would be impossible to implement (too many criteria, often conflicting) (NCE-MB: 3-4). The councils discovered, as well, that public servants were to review the research applications, with final decisions made by the Ministry; no peer review would be built into the process. This contravention of scientific norms became the councils' point of attack. They argued that peer-reviewed competitions were essential to the program's academic credibility. They insisted that the councils were the only bodies with the expertise to run such competitions and to administer the resulting research funding.

68 While NCE funds flowed to the networks through university financial systems, the university was just an intermediary.
Without their endorsement and involvement, they suggested, the NCE program would receive a chilly reception in the academic community. If the government wanted the program to succeed, the Ministry could not be allowed to control the initiative. In May 1988, a compromise was struck. The peer review process would be deployed strategically. By cloaking the program in the 'objectivity' of peer review, it could be protected from political pressures. This separation could then be used to rhetorical advantage by the government. The Prime Minister's Office announced that the three research councils would run the NCE competition and distribute the funds, while ISTC would act as the program's secretariat. The research council presidents and the deputy minister of industry formed a steering committee, while ISTC retained overall control, albeit 'from a distance'. As a senior civil servant noted, the Ministry 'holds the pen' when writing memoranda to cabinet or making submissions to the Treasury Board and is also 'closer to the centre' than the arm's-length granting councils (NCE-MAL: 18). Further, two of the three research councils (NSERC and SSHRC) fall within the Industry portfolio and the Minister of Industry's sphere of responsibility. Nevertheless, the three council presidents exercised considerable political leverage on the steering committee, because the Ministry had no experience with research management in universities. They were also able to influence the direction of intellectual inquiry, identifying as targets areas where they perceived a research gap. As a result of the compromise, the policy objective was to reshape the culture of academic science around the dual goals noted earlier: excellence (fundamental research) and relevance (utility to industry).

An Advisory Committee (to which Fraser Mustard was appointed) was established in June 1988 to design and implement the program. The committee developed four selection criteria.
The weighting assigned to each reflected the success of the research councils in capturing the initiative. Research excellence was weighted at 50 percent; a 'coherent, focused program of research' was deemed the most decisive feature (NCE 1988: 1). Relevance to industry was weighted at 20 percent, as was 'linkages and networking'. The remaining 10 percent covered administrative and management capability. In language reminiscent of Pasteur's Quadrant, an informant explains that

[t]he strategy was to be pregnant — we needed pure, long-term applied science that was somewhat guided by the needs of industry.... Everyone was grappling with the term 'pure, long-term applied science.' [It] was used to walk the fine line separating science and application (policy advisor; NCE-SS: 2-3)

The program attracted diverse support. On the one hand, it was sold to Cabinet as a regional economic development package. On the other hand, it was promoted to scientists as an elitist program for producing the best science. In fact, according to one interviewee, it was neither, but merely a means to pull together teams of the very best researchers who, by example, would pull the rest forward (policy advisor; NCE-SS: 1). The nomenclature of 'excellence' facilitated the process 'of capturing some of the best researchers in the country [and] recruiting them as champions for change within the system' (senior civil servant; NCE-DH: 8). Yet the program was intended to reach beyond demarcations of excellence and relevance 'to bring in the whole concept of research management and cross-disciplinarity' (program officer; NCE-SM: 22). As suggested earlier, program design was much influenced by the Mode 1/Mode 2 theory of knowledge production developed by Michael Gibbons and colleagues in the late 1980s and 1990s. Gibbons served as a science policy advisor to Industry Canada during this period, and sat on the NCE selection committees.
According to one informant, he was their acknowledged 'guru' (program officer; NCE-SM: 13). Michel Callon was also involved in the early design and implementation of the program, as a member of the International Peer Review Committee. Thus the conceptual framework for the NCE program seems to have been a hybrid of Mode 2 and actor-network concepts. Following the receipt of some 240 letters of intent, 158 formal applications were forwarded for assessment to an International Peer Review Committee in November 1988. Composed of first-ranked scientists, engineers and social scientists, mostly from the USA and Europe, this committee reported to the Advisory Committee in June 1989. As previously stipulated by the research councils, the report was made public. Public disclosure gave some assurance that the decisions were made in accordance with established scientific criteria and were not politically influenced. Sixteen applications were deemed worthy of funding, nine in the 'must be funded' category and seven in the 'recommended for funding' second tier. The Advisory Committee endorsed all nine first-tier networks but, for reasons that remain unclear, would not support two of the second-tier networks. One of these, on ageing, was the only social science proposal on the short list. After extensive lobbying by the councils, 'a decision came from above' to include the ageing network (policy advisor; NCE-SS). However, it would be funded by the research councils rather than the NCE. The poor showing of the social sciences was later attributed to selection criteria oriented toward engineering and the hard sciences rather than 'the broad perspective needed to make the participation of human scientists possible' (program officer; NCE-EI). Because they reflected a compromise, the initial selection criteria failed to fully articulate the preferences of either the research councils or the Ministry. In practice, networking and industrial relevance hardly figured into the equation.
And because companies made few cash commitments at the proposal stage, it was difficult to assess the extent of partnerships and linkages (program officer; NCE-MB: 6). Academics inexperienced in such matters found it difficult to demonstrate such competencies. For similar reasons the applications were weak in defining proposed management structures. Furthermore, the reviewers themselves were not skilled in assessing this area (program officer; NCE-SM: 1-2). As a result, the reviewers could not bring themselves to say 'no' to the best science regardless of the other criteria.

They could not displace top quality science with inferior science just because they had a better management structure or because they scored so high on practical application. The other three criteria were ephemeral, intangible, hard to measure or understand. [Reviewers] could not bring themselves to knock out top science on the basis of criteria they did not understand and could not operationalize (policy advisor; NCE-SS: 3)

In the end, the reviewers decided to 'gamble on the best [science] and...hope that [the rest] happens' (program officer; NCE-MB: 6).

Mobilizing Networks; Changing Attitudes

The NCE program introduced 'two radical and important' hypotheses according to Stuart Smith, chair of the International Peer Review and Implementation Committees. At a November 1989 briefing session for the winning networks, he told participants that the first hypothesis would test whether collaborative research could be done at a distance using telecommunications technologies. The second would test 'whether it was possible in the field of long-term and fundamental research to force researchers to think about the economic and social impact of their work, and more particularly about the channels by which the research results will be commercialized' (address reported in NCE program internal newsletter, Liaison 1 (1) January 1990).
The federal bureaucracy had no operational framework for the implementation of NCE policy. Ottawa and the networks made up and modified rules and expectations as the concepts evolved. One of the tasks of the program directorate, in the early years, was to convince scientists that their responsibilities extended beyond the standards of traditional funding programs, and beyond the norms of academic science. Program staff realized that researchers initially viewed the program as just one more funding source for basic science. (See Chapter 4 for the way this attitude manifested at the network level.) A policy advisor says 'the scientists didn't know what they were getting into. They just went into it for the money. Very clearly at the start, it was just another pot of money with some arbitrary rules that they would pretend to follow' (OTH-DR: 24). It was necessary to convey the 'expectation that [they] were going to interact with industry and that there was going to be some kind of measurable outcome from that interaction' (senior civil servant: NCE-JW: 6). For the networks, that first phase was all about inventing themselves, consolidating themselves, establishing relationships among researchers, host institutions, and industry partners. Industrial partnerships were slow in coming. 'There was a lot of courting in Phase I and not a lot of commitment' (senior civil servant: NCE-JW: 23). The first year, fiscal 1991, was only a partial year. Networks spent most of their time establishing the mechanics of administration—systems, committee structures, and so on. After that, only three full years remained before funding ended. At that point, no guarantees had been given that the program would be renewed. The program was experimental. As far as anyone knew, four years total was all they had.
That situation changed in December 1992 when the Mulroney (Conservative) government brought down its final budget.69 In the same speech that abolished the Science Council of Canada and the Economic Council of Canada, Finance Minister Don Mazankowski70 announced that the NCE program would be extended. A new competition would be held in targeted areas and existing networks would be able to compete for a second four-year phase of funding (fiscal years 1995-8). The decision was supported by a positive interim program evaluation carried out between July and December 1992. The evaluation reviewed the effectiveness of program and network management, the level of networking, and the nature and extent of industrial involvement. From the tenor of the announcement, it was clear that the latter was deemed less than satisfactory. In order to be renewed, networks would have to deliver much more in terms of commercial relevance and industry partnerships.

From the beginning, the need for industry involvement and cooperation in the networks has been stressed. Given the need to strengthen this kind of industry collaboration with the research community, funding is being extended. This will ensure that the most successful of the existing networks continue to contribute to competitiveness. (1992 Budget Announcement)

A reduced budget of $197 million was allocated for the four-year period, 1995-98, with 25 percent set aside for developing the planned new networks. Modified selection criteria reflected the shift in emphasis from excellence to relevance, and precipitated the dilution of meaning mentioned earlier.

69 Mulroney announced his resignation in February 1993. He stayed on as caretaker until Kim Campbell won the leadership contest in June 1993. The party was routed by the Liberals at the polls in October 1993, losing all but two seats.
Now five criteria, all equally weighted, had to exceed an established 'threshold of excellence':

• excellence of the research program: 20 percent (was 50 percent)
• training of highly qualified personnel: 20 percent (new)
• networking and industry partnerships: 20 percent (same as before)
• knowledge exchange and technology exploitation: 20 percent (new)
• network management: 20 percent (was 10 percent).

As a senior civil servant noted, ISTC had successfully 'reorient[ed] the program to something that they were more comfortable with' (NCE-JW: 12). The new criteria reflected what they had wanted from the start: a program that fostered more industrially relevant research (senior civil servant; NCE-DH: 6). A rotation of research council presidents helped consolidate this position. The new leaders of the MRC and of NSERC were 'very much focused on developing university-industry linkages [and] on having academics work outside of their traditional environments for interaction' (senior civil servant; NCE-JW: 17). The attitude of faculty was more ambivalent. The top-down decision to shift priorities represented 'a very serious concern for [some of] the researchers involved' (program officer; NCE-SM: 9) and considerable turnover among scientists occurred. Some found the program more appealing and enlisted; others 'knew this wasn't the place for them [and] got out' (senior civil servant; NCE-MAL: 9). Since Phase II, all networks have conducted more applied and less fundamental research. Reduced budgets for the renewed networks forced the scientists to 'focus much more on... lines of research that were likely to be of interest to industry'. The research still had basic components but was aligned 'to be of greater interest to the existing industrial environment' (senior civil servant; NCE-MAL: 10).

70 Mazankowski's connection with NCEs lasted beyond his political career. In 2000, he became chair of CGDN's board of governors.
With the election of a Liberal government in October 1993, the emphasis on relevance became even more entrenched. By now 'neoliberal' principles had become a political orthodoxy as even centrist parties shifted to the right. Shortly after assuming office, the Liberals undertook a massive reorganization of ISTC. As if to confirm the subordination of science to the economy, the department now became simply Industry Canada. It assumed a much enlarged portfolio and a mandate to foster Canada's international competitiveness. The following year, 1994, a major science and technology program review was announced, together with the intention of moving towards a new, national science and technology strategy. Months of exhaustive consultation and review followed. After some considerable delay, the new national policy—Science and Technology for the New Century: A Federal Strategy—was finally announced in March 1996 (Industry Canada 1996). The strategy adopted science and technology as a federal priority. Taking a 'National System of Innovation' (Nelson 1996) approach, it integrated academy, industry, and government research under the rubric of job creation and economic growth. The focus was on the 'strategic investment' of resources for 'the maximum economic, social, and scientific returns' (Industry Canada 1996: 9). The principal means of achieving this was through the strategic use of public-private research arrangements between universities, industry, and other levels of government. Both the Conservative and Liberal administrations had crafted a climate hospitable to commercial relevance by applying a multitude of mutually reinforcing policy instruments. Available data indicate their efforts were successful. Industrial support of university research appears to be advancing more rapidly in Canada than elsewhere.
Table 1 shows that while the proportion of industry funding for university research has increased in all G7 countries from 1985 to 1996, Canada's share in 1996 is significantly higher than other G7 nations.

Table 1: Share of university research funded by industry (%) in 1996, 1990, and 1985

                    1996    1990    1985
Canada              10.4     6.3     4.3
United States        5.8     4.7     3.8
Japan                2.4     2.3     1.5
France               3.3     4.9     1.9
Germany              7.9     7.8     5.9
Italy                4.7     2.4     1.5
United Kingdom       6.2     7.6     5.2

Source: OECD (1998: 165)

By separating funding and performance sectors, Table 2 indicates that in 1996 Canadian universities performed a higher percentage of national R&D than other G7 countries, with the exception of Italy.

Table 2: Percentage of R&D Expenditures by Financing and Performing Sectors for the G7 Nations in 1996

                   Financing Sector                        Performing Sector
              Domestic   Foreign
              Business   Business   Gov't    Other    Business   Gov't    Univs
Canada          48.2       12.7      33.7     5.4       62.2      14.9     21.7
US              61.4        0.0      34.6     4.0       72.7       9.8     14.6
Japan           72.3        0.1      20.9     6.7       70.3      10.4     14.5
France          48.3        8.0      42.3     1.3       61.5      20.4     16.8
Germany         60.8        1.9      37.9     0.3       66.3      18.1     15.6
Italy           49.5        4.4      46.2     0.0       57.7      19.9     22.4
UK              48.0       14.3      33.3     4.3       65.5      14.5     18.8

Source: OECD (1998: 166)

However, the Canadian business sector remains a low performer, suggesting Canada's industries continue to rely on publicly supported research rather than develop their own infrastructure. Overall, the new strategy introduced in 1996 produced a reduction in federal funding support for science and technology, especially for the research councils. The NCE program was among the initiatives that would be cut. However, the networks came together, launched a public relations and lobbying campaign, and were successful in reversing the decision (for details see Chapter 5). As a result, the NCE program was made permanent in the February 1997 budget, albeit with a 'sunset clause'.
The purpose was to allow the program 'to continuously reinvent itself through a constant influx of new people and ideas' (senior civil servant; NCE-MAL: 11). The networks least likely to survive without government support would be culled, funding to those deemed to have 'graduated' from the program would be discontinued, and funding for all networks would be capped at a maximum of 14 years. For the surviving original networks, therefore, Phase III would be the end of the line. Policymakers did not intend NCEs to become entrenched and institutionalized. They wanted researchers to be instilled, 'from the very beginning with a vision of life after NCE funding' (program officer; NCE-SM: 11-12). But as I will demonstrate later, this sunset clause may have been a policy error. Especially for networks in the life sciences sector, where a 10 to 12-year gap can separate discovery and final-stage clinical trials, the timing seemed incomprehensible. The change created detrimental amounts of goal displacement among networks in this sector. Instead of focusing on advancing fundamental and translational research, networks facing sunset focused their attention on speculative financial projects in order to replace federal funding. Part of the intention of the NCE initiative was precisely to generate this kind of cultural change in academic science. The program's biggest achievement, according to one interviewee, has been to establish 'a market orientation in academic researchers and a predisposition for collaborating with the private sector' (program officer; NCE-CA: 6). This included finding and developing receptor capacity in Canadian industry, securing venture capital, negotiating multiparty intellectual property agreements, and establishing an effective process whereby network technologies could be licensed to industrial partners.
The numbers of patents filed and inventions disclosed increased significantly.71 Sophisticated alliances with the financial sector allowed some of the networks to attain experiential knowledge of business and finance that often surpassed that of Directorate staff. They knew what was needed to run their own programs, and felt constrained by the pedestrian advice of NCE officials. Not surprisingly, the networks began to take on a 'life of their own' as they claimed increasing autonomy (program officer; NCE-SM: 11). As one senior civil servant put it:

We started to see change where the people who were working in the program had a very strong concept of what it was that they were doing. It wasn't always exactly the same as our concept, but they began to drive the program in certain ways...We [government] still set the agenda, but the level of contribution is much higher from the networks now and I would say that many times now we are learning from them as opposed to them learning from us....we started to see a change from us really driving the program to them taking much more ownership for it and starting to push into new directions (senior civil servant; NCE-JW: 29-30)

The NCE Directorate became somewhat uneasy with the aggressive commercial ethos that developed in some of the networks. They sensed things had gone too far. In its review of one of the life science networks, for example, the Phase III Selection Committee suggested that the network's research program should be 'directed to goals that are appropriate in an academic setting' (NCE-SC 1997).

71 In some fields, however, patenting and licensing are not the normal routes for technology transfer; dissemination occurs instead through traditional routes, such as training and conference presentations.
In other words, the network 'should not try to compete in areas of research where major pharmaceutical companies are already investing enormous amounts of money and have a clear research lead and advantage' (NCE-SC 1997). But these Phase III funding proposals were prepared by networks facing the sunset of NCE support. They were required to show how they would handle the transition. It was almost inevitable that they would respond in commercially aggressive ways. In recent years, many of the networks have formally incorporated to facilitate the management of their extensive research programs, intellectual property portfolios and partnerships. Incorporation was always Industry Canada's preference. They saw formal, legal structure as a means of eliminating the model of collegial governance that had guided academic decision making in the past. But the research councils resisted, preferring to leave the decision up to the individual networks. After initially adopting a 'wait and see' position, most have now incorporated. They have also created arms-length, for-profit corporations that use standard business tools such as mission statements and strategic plans. A decision to incorporate raises some interesting conceptual issues. A network is a loose association of researchers, nodes, projects, and partners. It is the people and entities that make it up. But a corporate body has legal powers of association and personhood. It exists apart from the people and entities that make it up. An incorporated (literally: embodied) network seems almost contradictory. Incorporation institutionalizes these 'virtual' entities, cloaking them in substantive legality. The increasing adoption of the corporate form signals the approaching funding sunset for 'mature' networks, and their desire to sustain themselves beyond this horizon.
Summary and Discussion

In Canada, as elsewhere, national policies promote the integration of public-sector research organizations into the economic mainstream: public science must move out of academic and government labs and into the marketplace. Policy goals include the commercialization of research results as proprietary products, and the adoption of new market-friendly institutional arrangements for the conduct of research. Policy tools like intellectual property rights and public/private research networks promote the development of closer academy-industry relations and facilitate what can loosely be called the privatization of the public knowledge base. Yet at the same time as promoting commercial relevance these policies also promote scientific excellence—a combination that may at first appear counterintuitive. But Canada has a long tradition, stretching back into the 19th century, of state involvement in the promotion of programs that seek both.72 The National Research Council was founded in 1916, largely to address the needs of industry for research that would advance innovation. At a time when universities were in the business of humanistic scholarship and teaching, rather than the advancement of scientific knowledge, NRC's establishment represented the institutionalization of federal attempts to advance 'useful' research. Over time, however, this intent was subverted as NRC became increasingly focused on conducting fundamental research and promoting the same in universities.

72 The first federally supported science initiative was the Geological Survey of Canada, founded in 1841, which laid the basis for the mining industry. In the 1880s federal support of astronomy produced longitudinal maps used in building the railways. The creation of experimental farms patterned after the USA's land grant movement produced innovations suited to a cold climate and large gains in agricultural productivity. Before the end of the 19th century, several federal government departments had established national laboratories for the exploitation of natural resources.

Beginning in the 1960s, attempts at policy reform proposed ways to 'correct' the orientation of federally funded research and scientific cultures and turn public research towards economic development goals. The scientific establishment successfully resisted these attempts until the 1980s, when the neoliberal turn in Canada's political culture established a Strategic Science regime that would harness public science to the needs of the economy. One of these initiatives established the Networks of Centres of Excellence program. As a hybrid of the National Research Council's Industrial Research Assistance Program and Dr. Fraser Mustard's Canadian Institute for Advanced Research, the NCE program was dedicated to both scientific excellence and commercial relevance. Because of its novelty and dual commitments, the program was the subject of fierce jurisdictional struggles within the federal bureaucracy as the research funding councils and the ministry responsible for industrial and economic expansion fought for control. In the first phase of the program (early 1990s) the culture of the research councils dominated and scientific excellence was the primary selection criterion. In terms of my conceptual framework, this phase was concerned more with basic research performed under 'open science' conditions in public institutions, or 'Bohr's Quadrant'. In the second phase (mid 1990s), Industry Canada's concerns for commercial relevance came to the fore. As a result, some of the networks entered into market relations more aggressively than had been anticipated. In other words, these networks 'overflowed' in pursuit of applied research for private profit and moved into 'Edison's Quadrant'. After 1997, when the program became permanent, new networks were selected on more balanced criteria and relevance was redefined in social as well as economic terms. 'Pasteur's Quadrant' was the goal.

But this goal has been pursued throughout the program's history. Mechanisms have been sought that will couple creation of knowledge and traditional means of diffusion, such as journal articles, with 'translation' of knowledge and new means of diffusion such as technology transfer to industry partners. The two are rife with tension and ways have been sought to reconcile, for example, publication norms with the protection of intellectual property rights. Or, when considering who to recruit into a network, to reconcile traditional criteria of scientific merit with strategic judgements of a research program's commercial relevance. Control of these and other tensions is accomplished within the formal organizational and management structures the program requires networks to adopt. Overall, the program sought to promote a broad shift in the research culture. Inter-institutional, inter-sectoral, cross-disciplinary, and multi-regional collaborations were favoured in the network selection process. Constructive relations with industry and cost-efficient, even revenue-generating, operations were to be pursued. The extent to which these goals were achieved is an empirical question addressed in my case study of the Canadian Genetic Diseases Network. From the material presented to this point, it is possible to develop a model of Canada's Strategic Science policy regime, and the way the NCE program relates to it (Figure 5). I suggest that this model, with modifications for local conditions, may be generalizable to other countries operating under a similar regime.
Figure 5: Model of Canada's Strategic Science Policy Regime in relation to the NCE program

[Figure 5 is a flow diagram. Interests, ideas, and institutions feed into policy formation, where relevance (IC) and excellence (RC) exert countervailing pressures on the research culture. The resulting policy instruments (the NCE program) drive network building and the formation of social, economic, and human capital, leading to national research capacity, better receptor capability, and a new research culture, which in turn legitimate the regime. Source: JAG 2000]

The model shows the influence of powerful interests, ideas, and institutions at the agenda-setting stage of policy formation. Once an agenda for commercial relevance and scientific excellence is mobilized, competing state agencies (in this case: Industry Canada—IC, and the Research Councils—RC) place countervailing pressures on the research culture and attempt to influence the development of policy instruments that will further their interests. The NCE program is such an instrument. The construction of 'networks of centres of excellence' is intended to promote the formation of human, social, and economic capital, leading to a new national capacity in research, improved receptor capability in industry, and a new research culture. These results would then legitimate such programs and encourage the development of other similar initiatives. In the next section of the dissertation, I move from the abstractions of policy development to the materiality of the actual practices and relations policy instantiates.

CHAPTER 4: CONFIGURING THE CANADIAN GENETIC DISEASES NETWORK

What is 'a network'? Often, we think of something flimsy or ephemeral, like a cobweb, that can easily tear and drift apart, just webs of relationships with nothing visible anchoring them in place.73 But as translation sociology (ANT) has shown, that is not the case.
Networks are anchored in the materiality of the actors that make them up: in the infrastructures actors inhabit; in the resources actors command; in the allies they enrol; and in the artifacts and instruments they employ (or, as is often the case, are employed by). As Callon puts it, networks are 'the very simple counterparts of the spatial and time persistence of actors: to translate is to exist' (in press, fn. 7). Thus actors 'come before' networks and actors 'make' networks; powerful actors make powerful networks. This chapter is about precisely that process. What follows is the first part of my case study of the Canadian Genetic Diseases Network (CGDN). The chapter is divided into two sections. In the first I examine the way CGDN 'knitted the first few stitches of a web that still did not exist' (Callon in press, fn7) and how it secured itself to the material foundations of universities. My entry point is the individual leadership of the network's Scientific Director. The discussion is then expanded to take in his enrolment of a 'core-set' when setting up the network.74 Flowing from that, chronologically, is a description of the network's genesis in 1988 and the recruitment of the founding researchers and professional staff. The section ends with a description of the management structure and the formation of an institutional identity, separate from the university. The second section comprises a critical analysis of problems that have emerged from the way the network has been configured. These have to do with issues of regional distribution, elitism and equity, social reflexivity, and public accountability.

73 Used figuratively, the noun 'network' means 'an interconnected chain or system of immaterial things' (OED). Another usage is an 'interconnected group of people; an organization' (OED).

I.
Power of One

To succeed within the parameters stipulated by the NCE program, member networks seem to require a strong, even visionary scientific leader; someone who perceives the program as a means 'to animate their vision and execute it' (manager, PS-DS-23). According to ANT, the most powerful actors—those who assume a network's leadership and become its spokesperson—are those who enrol the largest number of allies. Spokespersons actually create the groups they speak for, by the very act of speaking (Cambrosio, et al. 1990: 214). Generative leadership of this type was common to all the networks created in Phase I, but particularly those in the life sciences. Strong leadership is consistent with the culture of molecular biology, where the laboratory leader focuses all the resources and recognition of the lab, and represents the entity as a whole to the lab's various communities. The leader functions 'as a symbol of the lab, as the lab's information interface, its 'provider', and as the one who plays the games of the field' (Knorr-Cetina 1999: 254). In CGDN, that spokesperson was Scientific Director Michael Hayden. His vision, communicated in a January 1991 essay entitled 'Science and Dreams', was

to create a functionally integrated but spatially dispersed intellectual consortium...to open new pathways for collaboration and networking while breaking down the old style, conventional, departmental and institutional barriers. This is not business as usual (CGDN SCAN-1: 2)

All interviewees agreed75 that Hayden was the person most responsible for the network's initial success and that he remains its biggest influence. He conceived the network, envisioned its framework, and personally enrolled most of the researchers and staff.

74 To use 'enrolment' and 'core-set' in the same sentence is to mix metaphors from two branches of science studies: ANT and SSK respectively. I will avoid engaging in the underlying theoretical disputes.

75 In many cases the opinion was volunteered, rather than prompted.

He is often characterized as 'a network in himself' (board member, B-MP-23) in that it is his contacts and force of personality that stamp the network's style as entrepreneurial and fast-moving. For a former NCE program officer 'Hayden is unrivaled as a scientific leader. He was the right person in the right place. He was certainly the most effective of the scientific leaders I observed' (NCE-PO-MAL). Hayden appears to command the loyalty and respect, even affection, of colleagues.

He has actually made this one of the most, if not the most, successful networks out of all those centres of excellence that were set up. (Researcher, BR-9-10)

It's very strongly led by Michael Hayden. He has maintained the leadership through the whole time. He's certainly done an excellent job. I think it's very much his baby. (Researcher, DC-7)

Simply put, Michael Hayden is a wonderful, wonderful, network leader. He always has been, right from the beginning. He's a rare combination—a person that's guided by principle but tremendously goal oriented. He knows what he wants to accomplish and he is tenacious. He won't let go of an objective he believes in, and he believes in the network. (Senior executive, PS-DS-6)

Hayden's leadership style is characteristic of the traditional command-and-control ('Mode 1') model of academic science, in which senior scientists exercise almost total control of their eponymously named laboratories. This is the milieu in which the current generation of researchers was socialized. So it is not surprising that Hayden runs the network, in the words of a recent recruit, as 'a benevolent dictatorship', nor that everybody seems to accept autocracy as the natural order. As the recruit puts it, this is not a democracy; one cannot run a network like this like a democracy. Michael Hayden makes most of the decisions. He has the best background.
He's the best choice. So it runs quite smoothly (MW-34-5). An external observer notes that Hayden provides strong scientific leadership, but that his style is less collaborative and consultative than some. 'Hayden sets scientific directions by force of personality although he seems to do so without ruffling too many feathers. Not necessarily bad, but different from the other two networks I think' (HC, personal correspondence). One of the NCE program officers—all of whom are scientists themselves—explains it this way.

It's not really a dictatorship. You have to understand the scientific community that you're dealing with...It's a highly educated population. A highly critical, opinionated population. We are trained to be very critical of each other's work. So when you're dealing with that sort of culture it requires very strong leadership. Others might equate it to dictatorship but it is not. You have to be able to stand strong against all of the criticism. And so the leaders have to be very strong. And very firm. Because it's not going to work otherwise. (NCE-PO-LD-7)

Hayden's way, explains a senior researcher, is to put his imprint on something and set the strategic direction, then hand it over to professional staff and move on to something else. 'He has the final word, but those people are now so indoctrinated that they run on their own. They don't need to go to him for everything. And it works' (BG-43). A veteran staff member agrees. Hayden makes the decisions and sets direction, she says, but, over the years, 'he backed off and let us do our own thing' (PS-CS-24). A senior science bureaucrat, who was the network's program officer for a number of years, notes that Hayden indeed did less hands-on management than most of the other leaders. 'But when he did intervene,' she says, 'he had vision and a pretty good schtick. He really got things done' (NCE-PO-MAL).
Hayden's willingness to allow the network's professional staff to manage network affairs was, in part, an artifact of the program's design. As described later in the chapter, a major novelty of the NCE program was that network research was conceived as managed research. Given the large amount of funding allocated to each network, and the complexity of linking so many institutions and researchers together, formal management structures were deemed essential. In effect, each network had two leaders. One was the scientific director. The other was a network manager who 'made bloody sure they knew what everybody was doing. And kept tabs on everything. Which is very unusual in a science program' (Policy analyst; NCE15-13). As scientific director, Hayden coordinated and integrated all the research projects and programs. But the network's senior executive officer controlled the spending and monitored the researchers to ensure that all the network's non-scientific mandate points were being met. In accepting the position, says one of these senior staff members, he knew working alongside Hayden would be demanding, but felt confident enough to accept the challenge. 'I knew that I could work with him long enough to work it out. You just have to be strong. He backed me and I backed him, it worked both ways' (PS-DS-22). Part of Hayden's success as a leader came from his strategic abilities. He knew how to mobilize resources, at the last minute, for the highest impact. For example, the face-to-face aspects of funding applications—expert panel visits, presentations to the NCE selection committee, and so on—were orchestrated to maximum effect. According to informants, every ally, every board member, every industry partner, every network scientist was invited to sit at the table. Everyone gave five-minute presentations on their research and/or role in the network, literally overwhelming panelists with information and enthusiasm for the science.
These funding reviews and site visits were highly polished performances. Everyone was well prepared. The whole effort was timed and scripted, without appearing slick. As the Managing Director describes it, 'everybody was there to back up that this organization was doing its stuff...you can't leave anything to chance, you have to cover all of the bases' (PS-DS-61). Hayden himself, however, relied on staff to set things up, rarely focusing until the very last minute. He caused more than a few anxious moments but people learned to have faith in his ability to deliver the goods. The following anecdote, by the NCE program officer responsible for the network in Phases I and II, provides an example of his eleventh-hour style.

I never saw anybody like Michael for pulling things off at the last minute. I'd talk to him one day and he'd have to do something the next day and he would be totally disorganized. And I'd expect an utter disaster. And, then the next day I'd see him perform and he always seemed to pull the rabbit out of the hat. Yeah, the lights went on and Mike was there. He'd just put in a terrific performance and really inspire people in the network. The night before the selection committee meeting [for Phase II] there was a dinner for Michael Smith in recognition of the Nobel Prize. And Michael Hayden was at the dinner and I talked to him and he was really nervous about appearing before the selection committee the next day and all that went along with it. And I thought 'Oh God! He is unprepared. He is going to bomb,' you know? But when he came in the next day he did a really smart thing. He brought in JG, a private-sector partner, to say what was great about this network from industry's point of view. Hayden was the only person who did that. Everybody else brought in their scientific director and their management person. So his network was unique in that way. And that was exactly the dimension that the committee wanted to hear.
A distinguished member of the selection committee...quite an influential guy...said 'you know we can't not fund this guy. This guy shakes trees.' And I always remember that and it certainly is true. Michael really did have that impact. (NCE-PO-MAL)

Involving so many network members—scientists, board members, and industry partners—in the renewal effort was extremely innovative at the time. Not all networks took such an inclusive approach. For example, a researcher from another life science network76 reported having few companions when he attended a renewal panel. The leaders had invited only three or four scientists to present a synopsis of what was happening in that network. 'None of the other scientists was invited; it was only a handful of people' (FT-8). That network subsequently lost funding because, to this researcher, they had failed to engage their scientists in the process. Unlike Michael Hayden, that network's leadership 'essentially excluded all the scientists and then tried to move forward. But of course, they had nothing left. The scientists had abandoned ship' (FT-8). At CGDN, in contrast, 'every one of our scientists was at the review committee meetings. No one was missing unless their mother was dying. There were no excuses. You had to be there' (Manager, PS-DS-14). Thus the essence of Hayden's scientific leadership was to involve others. Hayden extended that concept of involvement to the wider community. He calls this 'civic science'. When scientists accept public money, he says, they accept a responsibility to the communities that provide those funds. Science and scientists must not be cloistered; they must participate actively in society and be fully accountable. The obligation is not so much to the government, Hayden argues, but to the public at large. In return for the privilege of being funded to practice science, scientists must accept the responsibility of ensuring that the community understands what they do.
He says facilitating this understanding is as important as his work on human health. 'We have a responsibility to reach out to the people who support us...We are guests of the public. And so we have a responsibility to acknowledge that they are the source of what we're doing, and why we're doing it.' (MH1-6-8) Although 'civic science' sounds high-minded, it seems to have more to do with furthering public funding of science, than public understanding of science. To use the vocabulary of ANT, when scientists are astute about enrolling and mobilising the public as allies; when they convey a convincing message, the public will pressure politicians to maintain or increase funding levels. The cuts to the basic research budget, in the mid-1990s, he says, occurred because scientists 'were not civic enough. And so people didn't place enough priority on it' (MH1-50A). Seeing what was happening to other programs, NCEs 'had to get out there and make sure [network] research was high up on the political agenda. Governments do respond to the people, particularly around election time' (MH1-50A). The reference here is to 1996. As part of the deficit reduction program, the federal government had decided to discontinue NCEs. The winding-up process had begun; no more funding would be forthcoming. In response, the networks, led by CGDN, waged a national public relations campaign to save the program. As a senior network manager explains, 'it took about four months but we won. We won big...We convinced the government that this was a program that they couldn't afford to let die' (PS-DS-29).77 In other words, through a process of interessement government had been persuaded to define their problem in such a way that the NCE program was the solution: the obligatory passage point for Strategic Science.

76 The researcher also belonged to CGDN, so was able to compare both networks.
Since then, according to Hayden, network scientists have been 'tremendously civic'; in every part of the country, 'they are out there talking to the wider community' (MH2-26). Perhaps partly as a result of the mobilization of public sentiment in this way, scientific research recovered its place on the policy agenda. As the deficits turned into surpluses, former funding levels began to be restored, then equalled, then exceeded.78 Research funding was back on the federal 'radar screen': a major priority item in the budget for four consecutive years (1998-2001). Powerful advocacy coalitions (Sabatier 1988) mobilized to lobby for NCEs. Program funding almost doubled between 1997 and 1999, from approximately $40 million a year at the end of Phase II, to $78 million a year in the 1999 budget. Civic science can thus be seen as a rhetorical strategy that aligns scientists' self-interest with the public interest by enrolling the public as allies in the network. 'By doing it,' says Hayden, 'we ensure our future' (MH1-6). While mobilizing public support for science funding is a legitimate activity, some observers find something slightly 'slick' about the way Hayden packages it. One critic, a senior scientist and policy consultant, says, 'Mike Hayden is what I would call an operator. I do not mean this in a terribly critical way. It is just the sort of person that he is' (HC-1). Another senior scientist criticizes Hayden's ability to present genetics as the solution to a host of medical problems, thereby diverting attention from the complex 'web of causation' in disease of which genetics is but a minor part (OTH-B37). Hayden has extended his personal network and entrenched his leadership role over the decade of the network's existence.
Like 'Pasteur' (Latour 1988), 'Hayden' has become the authorized spokesperson for legions of molecules, machines, and tests; patients, doctors, and researchers; founder populations; government funders; disease foundations; and pharmaceutical interests. By interesting and enrolling powerful allies and mobilizing the rhetoric of medical genetics in the public arena, Hayden's science has become a political practice, a science of associations, what ANT calls 'politics by other means' (Latour 1988:40). To understand 'Hayden' and 'CGDN' as consolidated complexes of linkages, it is helpful to map the beginnings of the network, before any taken-for-granted relationships were stabilized. In the early days, Hayden reached out to senior colleagues to help build the network. He was enrolling an elite nucleus of allies, a core set.

77 As discussed earlier, however, there was a sting in the tail of success. While the program itself was made permanent, individual networks would not be.
78 For example, within 3 years of its founding in 1998, the Canadian Institutes for Health Research budget was twice that of the MRC it had replaced.

Enrolling the Core-set

Harry Collins proposed the idea of a 'core-set' in relation to scientific controversies and their outcomes (1981, 1985). He used the term to describe the group of scientists involved in the resolution of any given technical controversy. Membership in the set does not depend on common institutional affiliations or seniority but only on a mutual interest in the outcome. A core-set thus can be understood as a web of interests and associations formed by people of disparate linkages and alliances. Because of its descriptive generality, the term has relevance outside controversy studies. Following Michael & Birke (1994), I combine it with ANT's concept of enrolment.
In January 1988, when Prime Minister Mulroney announced funding for something called the NCE program, Hayden, then a young Associate Professor at the University of British Columbia, immediately saw the potential for a genetics network. As a relatively junior researcher, however, he would need to enrol established members of the genetics community if a proposal was to succeed. He telephoned the two top medical geneticists in Canada: Charles Scriver (an expert on Tay-Sachs and PKU) at McGill University and Ron Worton (discoverer of the Duchenne Muscular Dystrophy gene) then at the University of Toronto's Hospital for Sick Children ('Sick Kids'). Hayden knew neither man personally—they had not worked together at all previously—but he knew their work and he knew their stature. He told them 'you know, we've got an opportunity here for a network in the genetic basis of human disease'. Worton had been thinking along similar lines himself and was willing to work on it with Hayden. Scriver was more circumspect. Hayden says 'I was really young back then and Charles was like the Father of Genetics. Why would he care? And why would he trust me enough to work with me on this?' Scriver was an essential ally for several reasons beyond his scientific seniority. First, he had helped found a well-known program called the Quebec Network of Genetic Medicine, twenty years earlier, in 1969. That network ran a screening program for newborns and a distributed system of centres providing diagnostic follow-up, genetic counseling, and treatment. The group had recently published an article in Science's first theme issue on how science could contribute to societal initiatives and concerns. Scriver suggests that within this context the network's name and structure attracted Hayden's interest. Second, Scriver had research projects funded under Quebec's Programme d'action structurantes.
That provincial program, formed in 1982, appears to have been one of the prototypes of the federal NCE program, formed in 1988. Like NCEs, Action Structurantes projects had to be performed by a team of investigators. While industry partnerships were not required, they had to be multi-university and multi-disciplinary. Scriver was coming to Vancouver the following week on a personal matter. Hayden arranged a meeting. The two researchers, separated in age by a generation, sat on the steps of Vancouver Art Gallery, in the chilly middle of February, going over the issues. Hayden summarized the federal announcement and pointed out the similarities with what Scriver had built in Quebec. He remembers talking about pulling together the 'best of the best' across the country, in the same way that Scriver had pulled together the 'best of the best' in Quebec. He talked about the millions of dollars being made available for research. Finally, he asked Scriver whether he would join in and Scriver agreed. Hayden calls it 'a pivotal conversation'. Scriver says of his recruitment,

I think Michael recognized an interesting opportunity when he saw it, which has been his trademark all along. He was aware of what we had been doing in Quebec with bringing academic genetics to a societal interface, and he thought that would make an NCE proposal look good.

All three had their own personal networks of colleagues and contacts and technical capacities, and these quickly combined and multiplied the way networks do, sparking from node to node. Hayden, Scriver and Worton were thus the embodied 'centres of excellence' from which the network originally sprang, and they continue to lead the network today. Senior members of the network called them 'the triumvirate'. Beyond these three founders was the elite group of scientists they enrolled to craft the initial letter of intent and subsequent proposal.
Worton recruited two people from 'Sick Kids'—Lap-Chee Tsui and Rod McInnes, while Scriver brought in Roy Gravel and Emil Skamene from McGill. Together with Hayden, that made a core set of seven. This 'group of seven' met in a Toronto hotel room for a day and a half to brainstorm ideas. But that first session was followed by a long hiatus as they waited for the government to specify what was expected in the letters of intent. Ron Worton takes up the story.

The next thing I remember is that I'd planned a three-week holiday for that summer and I'd just bought a cottage the fall before. So this was my first summer in my new cottage. I had never had a three-week holiday before. This was going to be my first lengthy vacation. And I'd been there about a week and a half and I got a call from Michael and he said he'd just heard that NSERC—the leaders of this program at the time—were doing a cross-Canada tour talking about the network model and how to apply and so on. The tour would be in Toronto the following week. That ended my three-week holiday. I went back to Toronto, listened to the presentation and took notes and called Michael and two weeks later I was with him in Vancouver. I guess we spent the best part of that summer putting together the letter of intent...and then...in the fall, it had to go very fast...We only had six weeks between notification of the success of the letter of intent and the requirement for the proposal.

The core-set identified and enrolled people in other universities and hospitals, expanding in multiples from the original group of seven, to fourteen, and then to twenty-one for the formal proposal. Roy Gravel remembers recruiting people into the program during the summer of 1988. 'I recall there was a meeting in Toronto, the Genetics Society or something of this sort, that was North America wide. It brought a lot of these people into the city. But that was very close to the deadline.
We already had most of the people identified by that point'. One of the most novel aspects of the NCE initiative, one that caught the attention of scientists, was that research was to be extended across Canada in lateral, east-west interactions. This was not the traditional way Canadian science had been organized. Few national forums brought Canadian scientists together. Most connections and collaborations were north/south. Canadian scientists tended to meet each other, if at all, at conferences in the United States. As a result, apart from those recruited from the same institution, people came into the network as strangers, but with a new basis for interaction, which was the network itself. As one scientist explains,

I didn't know who Michael Hayden was and I didn't know many of the scientists who subsequently became involved. It wasn't so much that people stayed on one side of the continent or another. It was just harder to find people throughout Canada. So this network idea became interesting very quickly, because we met new people doing collaterally related things. (RG-5)

The recruitment process was quite divisive, however, as will be discussed shortly. The rights and wrongs of who was, and was not, invited to join are still being debated. Four levels of investigator were specified in the proposal. Six of the original 'group of seven' were designated principal investigators (PIs)—individuals with 'established international reputations' in the field of molecular and/or human genetics. All men, three of the six PIs were based at Sick Kids; two were from McGill, while Hayden was the sole representative from the West. The seven scientists at the next level were designated research associates. These four women and three men (one from the original core-set) were individuals with 'established reputations' in human genetics, many of whom were shifting their research program to the molecular level.
Of the seven, four were based at Sick Kids, one at McGill, while two represented the prairies. Hayden was still the sole representative of UBC, the headquarters institution. A third level was called young investigators. All men, these three young Canadian scientists—one each from the universities of Ottawa, Montreal, and British Columbia—were said to have demonstrated 'outstanding creativity' in the early stages of their career. The significance of the final level—core facilities directors—was immediately understood by Hayden, but perhaps not by the others. Directed by four men and one woman, the core facilities quickly became the key to the network's success. In fact, the core facilities came to define what it meant to do 'network science'—they were true 'collaboratories' (Finholt and Olson 1997; Wulf 1993). As will be explained later, core facilities had both cognitive (human) and material (non-human) elements. They were a combination of the directors' technical expertise and interventions, and the material equipment and instrumentation. Because Hayden realized the importance of these advanced technologies, directors of three of the five core facilities specified in the proposal were based at UBC. In all, the 21 scientists listed as network members in the 1988 funding proposal represented eight universities and five associated hospitals and/or research institutes:79 University of British Columbia (including the University Hospital and the Biotechnology Research Centre); University of Calgary; University of Toronto (including the Hospital for Sick Children); McGill University; University of Montreal (including Hopital de Ste. Justine); University of Ottawa (including Children's Hospital of Eastern Ontario); Queen's University; and the University of Manitoba. Figure 6 below summarizes the investigators by level, their institutions and locations, as well as their research interests. (See also Figure 8, later, for comparison with Phase III).
79 All the hospitals/institutes are associated with universities but some are more autonomous than others.

Figure 6: CGDN Investigators, listed in 1988 Proposal for Phase I of NCE Program

Name | Institution | City | Research interests

PRINCIPAL INVESTIGATORS
Gravel0 | HSC/UT | Toronto | inherited biochemical disorders including Tay Sachs
Hayden | UBC | Vancouver | late onset genetic disorders including Huntington
Scriver* | McGill | Montreal | physiological genetics and human genetic variation
Skamene | McGill | Montreal | genetic susceptibility to disease
Tsui* | HSC/UT | Toronto | cystic fibrosis and gene regulation
Worton*3 | HSC/UT | Toronto | Duchenne Muscular Dystrophy and genome structure/function

RESEARCH ASSOCIATES
Cox" | HSC/UT | Toronto | antitrypsin deficiency and human genetic variations
Fielde | UC | Calgary | genetics of multifactorial disease including diabetes
Gallie | HSC/UT | Toronto | retinoblastoma and other genetic malignancies
Greenberg1 | UManitoba | Winnipeg | hypophosphatasia
McInnes | HSC/UT | Toronto | genetic diseases of the retina and inherited biochemical disorders
Morgan* | McGill | Montreal | complex phenotypes and population genetics
Robinson | HSC/UT | Toronto | lactic acidemias

YOUNG INVESTIGATORS
Goodfellow3 | UBC | Vancouver | multiple endocrine neoplasia
Korneluk | CHEO/UO | Ottawa | myotonic dystrophy
Mitchell | HSJ/UM | Montreal | inherited biochemical disorders

CORE FACILITIES DIRECTORS
Aebersold2 | UBC | Vancouver | protein analysis and sequencing
Duncan1 | Queen's | Kingston | in situ gene mapping
Jirik« | UBC | Vancouver | transgenic mice and gene targeting
Lea3 | UT | Toronto | hybridoma technology
Lee4 | UBC | Vancouver | electron microscopy

1: Not renewed 1996
2: Resigned 1994
3: Resigned 1992
*: Also core facilities directors
(a) relocated to University of Ottawa, Children's Hospital of Eastern Ontario, 1996
(b) relocated to University of Alberta, Edmonton, 1996
(c) relocated to University of Calgary, 1999
(d) relocated to University of Calgary, 2000
(e) relocated to University of British Columbia, 2001

The last few days before the
submission deadline for the proposal were especially intense. In the words of Ron Worton, it was 'an enormous effort'.

I flew with my secretary to Vancouver for the last six days or so before the proposal was due, because it was too awkward to try to manage it from two cities. And this was the early days of computers, they were fairly crude at that time. Their memories were small. But Excel had just become available...So, we went out and bought that program a couple of days before I flew to Vancouver. My secretary was reading the Excel manual on the airplane so that when we got to Vancouver, she could do all the spreadsheet work to put the budgets together.

With everyone working around the clock the proposal was submitted on time, November 30 1988, under the title Genetic Basis for Human Disease: Innovations for Health Care (CGDN-FP 1988). It was one of some 158 formal proposals submitted in response to the original call. The leadership issue had been decided by then. Hayden would be Director and, by virtue of that fact, UBC would host the network's administrative offices. Worton and Scriver were listed as Co-Directors. After the excitement subsided, everyone went back to their labs while the process worked its way through the bureaucracy. Given the intense activity of 1988, the hiatus was something of an anticlimax. It took almost a year before the successful networks were announced (see Chapter 4 for a description of activities at the federal level in the intervening months). Then, on October 26 1989, the 15 networks were notified of their awards. The genetics network would receive $17.5 million over four years.80 Asked why he thought the CGDN proposal succeeded, one of the founders responded,

The NCE review committees looked at our science, first. That's your ticket to get in. Once you've accomplished that, you also have to demonstrate that you have a different outlook within the network than in the basic science system. So, the balance I thought was good.
We had the breadth of everything. (RG-22)

80 Because of delays, the first phase was actually only a little over three calendar years, although it spanned four fiscal years. The fiscal year ends on March 31st.

For the new networks, the nine months following the announcement of Phase I awards—the gestation period from November 1989 through July 1990—were chaotic, as federal bureaucrats struggled to put administrative structures in place. The first tranche of funding was not advanced until August 1990, more than three years after the program was first announced as part of the April 1987 InnovAction strategy, and 30 months after the funding commitment was made in January 1988. The delays indicate the novelty of the program. Federal systems to implement and manage it had to be developed de novo. In the selection process, most of the attention had been paid to scientific excellence. In the implementation process, consideration had to be given to the other criteria: linkages and networking; relevance to future industrial competitiveness; and administrative and management capability. These non-scientific elements constituted a large part of the program's novelty. Taken together, they meant NCEs would function as 'research economies' with proper management and governance. These elements would be covered by a 'memorandum of understanding' as it was then called, an internal agreement governing each network's formal 'powers of association'81—its management and governance structure, and its public- and private-sector partnerships. CGDN's first internal agreement was signed on July 4 1990. Two industry partners were signatories—MDS Health Group Limited and Merck Frosst Canada Inc—as well as the 13 institutional partners referred to earlier.82 Once the formal agreement was in place, funding was released and the network could seek staff to fulfil the non-scientific criteria. The dynamics of network formation came into play here too.
The network's administrative manager was recruited from industry partner Merck Frosst's research planning division in Montreal. She set up the initial systems. Then Dr. David Shindler—a leading science policy advisor—was identified by one of the network researchers as a 'person of interest'. He was recruited from Canada's science secretariat in London as the network's Managing Director. With the two key employees in place, the network's administrative centre was opened at UBC in September 1990.

81 Note that legal powers resided with the host universities.

II. Managing the Network

The history of the NCE program has been described as 'the evolution from free research to managed research to industrial participation' (policy advisor, NCE-MB: 13). The NCE directorate believed that management expertise and governance could 'make or break the networks' and was 'as important as the excellence [of the science]' (program officer; NCE-SD: 3). Management would be one of the key features that distinguished networks from academic science-as-usual. As stated earlier, the NCE program was conceived as large-scale managed research. For a former program officer, now a policy advisor, this was 'a major novelty [and] a shock to many; perhaps it [was] the first culture shock' (NCE-MB: 9). But an Industry Canada bureaucrat views NCEs as simply 'slightly more managed or administered' than is usually the case in academic science; managers simply looked after the paperwork, knocked on doors looking for partners, or otherwise freed researchers from tasks that diminished their productivity (NCE-DH: 14). The two interpretations, 'culture shock' and 'normal practice', reflect the cultural differences between the program's governing agencies: the research councils, on the one hand, and the Ministry, on the other. Stipulations were put in place that all networks would have a board of directors, a scientific committee to organize the research program, and a management team.
Network boards and committees were to be structured to bring the expertise of industrial partners to bear on research management. This industrial representation took time to achieve, however. In Phase I, CGDN's board was heavily weighted to academics, with UBC's Dean of Medicine as Chair.

82 Where researchers worked in university hospitals, both the university and the hospital were named as network partners, making the network appear more extensive than it was.

The federal decision to restructure the selection criteria for the Phase II competition put the scientific and non-scientific mandates on a par. This decision reflected Industry Canada's concern that, in Phase I, too much emphasis had been placed on research excellence and not enough on industrial relevance. In other words, unless formal management structures were given equal status, it was far too easy for a network to allow researchers to do 'science-as-usual', that is, to follow serendipitous directions and do 'more or less what they wished to do' (NCE program officer; CA: 10). The pressure for increased management was also a function of the increasing size of network research programs. Management brought an overall vision, 'the strategic vision for the whole group, which was unusual in academia' (NCE Program Officer, SD: 10). As CGDN interpreted the management mandate, 'some level of cohesion, some level of network identity, some level of management, some level of cooperation' was required (senior executive, PS-DS-24). In a distributed network, where people do not necessarily see each other, 'there has to be some [management] glue at the core; if there's no glue there it ain't going to work' (network administrator, PS-CS). But CGDN scientists were not used to being monitored by managers. At least in Phase I, network funding looked to them like 'just another federal grant; just business as usual' (network administrator, PS-CS-9).
It was the task of management to persuade them otherwise—that not only the standard of excellence but all of the program's mandate requirements had to be met. Managers made the baselines clear.

If you fell down on any one of them, you were finished... We had to be pretty tough and it was hard. It was painful. We had to kick people out of the network when the work wasn't up to scratch. When they didn't maintain their science, or they weren't doing it the way we saw it had to be done. (PS-DS-24)

This level of control over researchers was possible only because the most senior executives held PhDs. Both the original Managing Director and his successor belonged to the scientific culture and enjoyed peer status with network researchers. Their scientific credentials helped to establish their credibility when enforcing accountability. As members of the culture, they understood the competitive nature of scientific careers. While they reinforced high standards and the orientation to excellence, they also sought to encourage researchers to maintain their science and be acknowledged for it. 'It wasn't just about grants. But to be recognized by their peers for the good work that they were doing' (senior executive, PS-DS-17-18). The maintenance of standards paid dividends. By adjusting to the program's changing demands, CGDN won a total of 14 years' funding in all, the maximum allowable. The network was successful in each competition, being renewed for Phase II, in the mid-1990s, and again for Phase III. After Phase I, much of the hierarchical partitioning of researchers disappeared. In subsequent competitions, all the original associates were reclassified as Principal Investigators. Core facility directors were also listed as principals, reflecting the reality that most ran research programs as well as providing a service to other members. The category of 'junior researcher' disappeared.
(Subsequently, promising young researchers were appointed as 'network scholars' on a fixed term.) Network documents83 show 33 Principal Investigators at the start of Phase II, representing nine universities and four related hospitals/institutes. After the Phase III expansion, the network agreement details 50 Principal Investigators at 12 universities and eight related hospitals/institutes.

83 See (CGDN-FP 1993; CGDN-NA 1994; CGDN-FP 1997; CGDN-NA 1998). Often documents disagree. For example, the funding proposal will list more partner institutions than the network agreement. When there is a discrepancy, I take the network agreements to be the more reliable source, since these list only formal signatories. However, the fact of the matter often lies in between.

The hospitals and universities referred to above were rarely enthusiastic signatories to the network agreements. To them, a network was a problematic organizational entity. Given that Ottawa's original intent was to bypass university autonomy, it was little wonder that conflicts occurred between these reluctant 'hosts' and their unwanted 'guests', as the networks established their institutional identity. As Michael Hayden describes the relationship, 'universities didn't trust the networks. They saw us as a power grab. They saw too much power going to the networks away from the universities. And they didn't trust and didn't understand the process' (MH2-1).

Institutional Friction

Universities and hospitals that house network offices and researchers are called 'host institutions' but their hospitality is largely involuntary. The legal status of 'networks' is an important factor in understanding the host/network relationship. Under corporate law, collectives (e.g. societies or associations) hold certain 'powers of association' not available to members as individuals. Those powers are exercised through the association's officers, professional staff, and governance mechanisms.
Legal powers of association, and legal personhood, require incorporation, and CGDN did not incorporate until 1998. Until then, in legal terms, it did not exist.84 As a CGDN manager says, 'these are very fragile organizations; they're built on practically nothing. There is very little holding them together except money' (PS-CS-66). Until 1998, then, CGDN was an 'ephemeral organization' (Lanzara 1983: 88) existing only in the interstices of university accounting systems. Its status in relation to the university was highly ambiguous. Commenting on the network's location on the periphery, Michael Hayden says 'we were federal but we weren't in the mainstream. It was strange' (MH). But from the margins, as 'federal agents', NCEs were able to mobilize significant informal powers of association. In the absence of formal identity, they bound CGDN together with a willed identity.

84 This is one reason Industry Canada, in early planning for the program, wanted to insist on incorporation. As described earlier, the Research Councils resisted.

When you are a network, when you're not incorporated, when you're undefined, when you're an instrument of the university (the universities consider you their instrument even though you are not), and when you're trying to do something in between everybody else, it's very difficult to establish an identity. And we worked hard to create an identity. (Senior Executive, PS-DS-64-65)

As the networks developed distinct identities, two clear sources of friction with host institutions emerged. The first source of friction was the financial cost of hosting networks. Unlike the National Institutes of Health in the United States, Canada has never funded infrastructure costs85 for medical research and only rarely allows researchers to charge their salaries to research grants. Whenever a new program was established, universities had to cover the additional costs.
By any standard, NCE overheads were large and expensive for university budgets to absorb. In effect, these institutions supplied the incubation facilities in which networks could flourish, but received no compensation from the program, or recognition for their contribution. As well, it was a case of 'taxation without representation' since universities had no power to regulate the activities of networks, which were accountable only to Ottawa. Overall public investment in the program from fiscal 1990 to fiscal 2000 exceeded $650 million (see Table 3). But this figure does not include university infrastructure or the salaries and benefits of university researchers. The NCE Directorate conservatively estimated the latter at approximately $100 million a year in 1996 (NCE Annual Report, 1996-97). Using the growth of the program since 1996 as a base for calculation, the annual salary figure has likely doubled to approximately $200 million a year in 2001. According to one federal informant, by absorbing these costs universities have contributed at least as much as the program itself over the years (NCE-SM: 19).

85 Effective July 2001, a white paper was circulating in Ottawa proposing to allocate a standard percentage of research funding to universities for infrastructure.

Acknowledging the historical under-reporting of public support for the program, the Director of the NCE program estimated that 'the additional contributions from both the granting councils86... and the universities tends to almost triple the total amount' (JCG-11). In contrast, the private sector is credited with only $75 million, or approximately 10% of the program's 'official' $730 million cash budget. Even this figure may be overstated due to various reporting anomalies regarding cash contributions that will be discussed later. The same anomalies prevent any reliable estimate of 'in-kind' contributions from industry partners.
Without full estimates of cost, it is hard to calculate the program's cost/benefit ratio.

Table 3: Total cash contributions to NCEs, 1990-2000, in C$M (excludes in-kind gifts and overhead support)

  Agency                                          C$M      %
  NCE Grants                                     509.5    69.9%
  Federal Agencies                                27.3     3.7%
  Administration/sundry                           14.2     1.9%
  Sub-total: Federal                             551.0    75.6%
  Provincial Agencies                             45.8     6.3%
  Sub-total: Government                          596.8    81.9%
  Universities (direct only)                       8.5     1.2%
  Other: hospitals and tax-exempt foundations     48.4     6.6%
  Sub-total: Publicly supported institutions     653.7    89.7%
  Industry contributions                          75.0    10.3%
  Total Cash                                     728.7   100.0%

Source: compiled from NCE annual reports, 1990-2000

86 Comprising prior funding of fundamental research by the research councils, in the form of grants to network researchers for the basic element of their network research.

Perhaps understandably, universities resented their expensive and uncontrollable guests and did what they could to assert their institutional authority. According to one CGDN informant, the initial reaction was 'the government has forced these damned networks on us... why should we even talk to these network guys? What do they bring to the table?' (senior researcher; RW-35). One way for universities to manage the intruders was through bureaucratic controls. As already stated, prior to incorporation CGDN had no legal capacity to hire employees, make contracts, or receive funds. In all such arrangements the host university acted as surrogate, as if the network were a minor child, incapable of forming intent. CGDN's researchers were employed by their individual hospitals and universities; network staff worked for UBC. When CGDN wanted to hire David Shindler as Managing Director in 1990, UBC refused. Michael Hayden recalls:

He was the guy we wanted [but] the only way to hire him was not to hire him but to get him to take a secondment from his current job. We would pay the Ministry of Foreign Affairs and they would pay him. We did that for five years. It took UBC that long to approve the appointment...
[to] become more trusting of the networks. (MH-2)

The universities wanted the networks brought under university control. As one network manager describes the situation, 'this was about power and greed. They wanted control of our budget. They wanted the ability to claim that the networks came under the universities, so that anything the networks accomplished could be attributed to the universities. It's more money for them, it's more profile for them. It's a case of the bigger our basket is, the more of a power base we have' (PS-CS-31). CGDN's principals resisted the administrative blocks imposed by the university. While acknowledging that university budgets were inadequate, they saw no reason to accept the blame and pointed to waste and inefficiencies that 'leaner' structures like networks avoid. 'Universities are underfunded, but they are over-headed', says one founding member. 'There is too much infrastructure. To lay blame onto the networks for some aspect of it is unfortunate and misplaced' (RG-83). On the contrary, he suggests, universities should recognize the networks as assets.

A second cause of friction between host universities and networks relates to the management of intellectual property (IP) generated by network researchers. Both are involved in what Merges (1996) has described as a process of 'creeping propertization', as discoveries that would otherwise have remained in the public domain are 'captured' (privatized) as intellectual property, then exploited for profit. In this drive to propertize the products of science, NCEs and their host universities compete for profits. Each seeks to depict itself as the most legitimate agent and skilled representative in the drive to turn science towards the market. Beyond the drive for profit lie several distinct irritants. First, the 'internal agreements'87 that are supposed to govern intellectual property issues are universally described as 'ugly.'
The program directorate is trying to set up a template to simplify these complex and unmanageable documents. In the meantime, the agreements are supposed to clarify relationships and IP ownership issues, but they do not. This means that each commercialization deal must be treated on a 'one-off', case-by-case basis. Second, over time networks have become more aggressive about intellectual property. As I describe in some detail later, the networks had fairly limited interest during Phase I because program demands in this regard were modest. Phase II brought increased expectations on the part of the program and a matching response from the networks. Since Phase III, the networks have been looking to IP commercialization to carry them beyond the sunset of NCE funding. As one university technology manager comments, 'the networks are really fighting for our intellectual property... the reality is that if they're going to be self-sustaining, they have to insert themselves into the process' (UA-SC-1). Another says, '[these] people are trying to protect their future at our expense' (UBC-CB-2).

87 The NCE Directorate requires such agreements. They govern all aspects of the relationships between a network and its university and industry partners.

Finally, there is a sectoral disparity among the networks in their ability to deliver commercialization services, and in their approach to technology transfer. According to university technology managers, the information technologies and electronics networks tend to be 'fairly hands off and laissez faire', while the life sciences networks like CGDN tend to be proprietary and centralized.
Because life science networks control their boundaries and members, they have been able to make themselves 'obligatory passage points' (Callon 1986: 205) for IP protection in a way that university commercialization offices have not.88 In networks, the processes of interessement and translation ensure that discoveries with commercial potential are disclosed to the network first. Industry Liaison Offices (ILOs) in universities argue that NCEs duplicate existing technology transfer infrastructure and add little value in the process. In turn, the networks point out that historically universities had no incentive to pursue commercialization nor any particular interest in doing so. One of the driving forces behind the establishment of the NCE program, they say, was to 'leach out' technologies otherwise languishing in universities. University technology managers argue that they carry most of the workload for the development of NCE technologies while receiving little credit. 'On any technologies that I've been dealing with NCEs, I would say I've done 80% of the stick handling' (UBC-CB-2). But to a CGDN board member (private sector), university ILOs 'appeared to be uniformly inept or nonexistent or both. The networks were much more competent' (B-MP-6). In comparison to 'Johnny-come-lately' narrowly focused networks, ILOs depict themselves as deeply experienced and possessing a 'whole university' vision. In contrast, networks hold themselves out as fast-moving sectoral specialists, moving strategically to secure IP. They depict ILOs as lumbering, bureaucracy-bound generalists, with no industry experience, trying to handle everything from astrophysics to zoology. According to a former CGDN commercial director, ILO staff just don't develop a good understanding of how industry thinks, so they don't really understand how to find market prospects.

88 See Nelson & Sampat (2001) and Atkinson-Grosjean & Fisher (1999) for more thorough discussions of institutional constraints on ILOs.
'They mean well, and they try hard and they work hard. They often are extremely over-worked for what they get paid. But, you know, we were focused on our own field. And that meant we could specialize' (PS-MarglVl-13). Heroic tales are told about the relative competence and ineptitude of the networks and ILOs. These myths have entered the collective unconscious and seem to be part of the enculturation process. A classic example is CGDN's Alzheimer's Genes Legend, which was repeated to me, in various forms, by board members, researchers, and professional staff. The discovery of two genes for early-onset Alzheimer's disease was a big find. The university was not willing to move fast enough on protecting the technology so the network took the lead, realizing that 'if we didn't patent it—yesterday!—we'd lose it' (Network Manager; PS-CS-45). The legend describes how the heroic managing director got the genes patented within 48 hours, thereby protecting the technology for Canada. As recounted by the network's associate scientific director, the authorized version goes as follows:

This was well into the NCE process, by now we're talking about Phase II and we're into about the winter of 1995. The researcher called me one day and said 'you know, we've got the Alzheimer's gene finally. I've gone to the university and they don't think that it's worth patenting. They don't think that it's worth anything. They don't want to follow-up on it. What should I do? Do you think the network would be interested in helping me to patent it?' So I called the network's managing director in Vancouver five minutes later and said 'you've got to call this guy and talk to him about the patenting. The university is going to be convinced that they need to be involved in the end, but would you take a lead role here and at least make sure that he doesn't go out and publish the stuff before it gets patented?' And the managing director said he would do something.
That was like 5:00 in the afternoon. Ten o'clock the next morning, he phones me back. He is in Toronto, walking down University Avenue, talking to me on his cell phone. He'd flown in on the red-eye overnight, set up a meeting with the researcher for that morning, and by mid-afternoon, they were well on their way to developing the patent position and talking about the whole strategy for exploiting this intellectual property. And of course, as soon as he got involved, the university realized that there really was something there that they should be involved in. And in the end, it worked out well for everybody. But, I think that was the first time I had seen the network really play a catalytic role in making something happen. (RW-37)

Ultimately, this initiative resulted in what was, at the time, the largest IP deal in Canadian university history, between Schering Canada Inc. and the University of Toronto in 1997. Schering's initial $9M funded a three-year research program in the development of drugs and technologies to treat and prevent Alzheimer's. Over the long term, the agreement has a potential value of $34.5M, not including royalties. Despite the sniping about relative levels of commercial competence, network researchers work not in 'networks' but in universities and hospitals, which pay their salaries, provide their lab space, and cover their overhead and operating costs. Resulting technologies are owned by the institutions. Their ownership of IP is 'cast in stone' and they are not about to cede their interests to the networks. Thus networks and universities have to work together or nobody benefits. In game-theory terms, it is a classic prisoner's dilemma. Over time, both have made concessions and a truce of sorts has been worked out. While the chapter so far has described how the network configured a structure and took on an institutional identity, the telling has failed to capture several critical areas.
The report of CGDN's configuration is shot through with power relations and exclusionary criteria. These can best be understood as issues relating to the network's spatial-structural dynamics: the larger 'why' questions of regional distribution, elitism and equity, social reflexivity, and fiscal accountability.

III. Spatial-Structural Dynamics

Regional Distribution

As befits a federal program, success in fostering wide national distribution of networks and resources is a policy concern. However, the experience of CGDN shows this goal may not be realistic. When the program was being planned, the 'network' component appealed to politicians because it offset the elitism implied by 'excellence'. To a Canadian politician, elitism means geographical concentration. The program was sold to Cabinet 'as an economic development package—a regional economic package.' But Cabinet was sold 'a bill of goods' (federal informant, NCE-SS-2). Despite rhetorical claims of national scope, and significant expansion in Phase III, CGDN's main clusters are still at the three original institutions: Vancouver's University of British Columbia, the University of Toronto's Hospital for Sick Children, and McGill University in Montreal. An examination of research and core facility funding allocated to network PIs shows that these three institutions commanded more than 70% of the network's $33.5M research budget in the period from 1991 to 2000 inclusive (see Table 4 below). Looking at the provincial distribution of network funding in the same period, PIs in British Columbia received 22%, those in Ontario got 43%, while researchers in Quebec received 27%. The remaining 8% was allocated across all other provinces.
Table 4: Funding Allocations by Institution, 1991 to 2000

  Institution                                           Totals 1991-00       %
  University of British Columbia, Vancouver, BC              7,076,592   21.1%
  Hospital for Sick Children (University of Toronto),        9,486,464   28.3%
    Toronto, ON
  McGill University, Montreal, PQ                            7,343,483   21.9%
  All Others                                                 9,590,292   28.6%
  Total                                                     33,496,831  100.0%

Source: Compiled from CGDN financial records

These figures indicate that the network is tri-nodal rather than widely distributed. In a sense, the 'network' metaphor is misleading; the dominant image is of 'spokes and hubs' (see Figure 7 below). A Matthew effect (Merton 1968) is at work, favouring those researchers and locations that are already well established.

Figure 7: Tri-Nodal Distribution of Funding

Actor-network theory relates the density of linkages in particular areas to the activities of spokespersons and their success at interessement and enrolment. This is certainly the case. The tacit or embodied aspect—the 'spokesperson factor'—can be clearly seen when established researchers relocate to another university. New clusters begin to form around them, confirming the importance of face-to-face interactions. When Diane Cox relocated from Sick Kids to the University of Alberta in 1996, the university had no network members. Now three PIs are based in Edmonton as well as several associates. In the same year, Ron Worton moved from Sick Kids to Ottawa, where Bob Korneluk was the sole representative of the network. The University of Ottawa now represents a significant node, and StemNet, the new NCE directed by Worton, will be headquartered there. Finally, Leigh Field was for many years the solitary network researcher at the University of Calgary until Roy Gravel moved there from McGill in 1999, followed by Frank Jirik in 2000.89 Other spatial and structural factors must be accounted for as well, for example proximity effects and institutional context.
As Wolfe (2000) points out, economic geographers have long emphasized the significance of space and proximity ('territorialization') in creating the conditions under which resources and tacit forms of knowledge are generated and shared. The phenomenon of regional clustering among researchers, institutions, and firms is well recognized in the literature on industrial districts and regional systems of innovation.90 As Murdoch (1995: 743) notes, 'networks are differentially embedded in particular places and... different forms of organization evolve in different sociocultural contexts.' I suggest that something similar is occurring with CGDN. The combination of inertia and proximity means it is easier to build linkages with researchers in the same or nearby institutions than with those at a distance. The institutional context is another key factor in facilitating clustering. Again, the Matthew effect is at work. One institution begets more. They layer together to create a regional system for the production and exploitation of knowledge. Amin & Thrift (1994) call this 'institutional thickness'. The network's Toronto node is a good example, with six hospitals and the main university campus within steps of each other. But an internal 'thickness' is also important. Kleinman (1998) has shown that laboratory practices are shaped by the university's formal structure and context. This context defines the 'rules of the game'; for example, how university resources are allocated and who can command them. Some institutions focus more power than others and can assign more resources to particular enterprises, providing a hospitable environment for network activities.
89 Field relocated to UBC in 2001.

90 For an authoritative analysis of the former, see Lash and Urry (1994); for a Canadian perspective on the latter, see the articles in Holbrook and Wolfe (2000), also Wolfe (2000) in Rubenson & Schuetze (2000).

In summary, if CGDN is indicative, the NCE program supports the institutional status quo by directing resources to existing research 'centres' while 'peripheries' remain marginalized. However, the embodied nature of knowledge is such that if smaller universities can find the means to attract network researchers and their programs, these people become agents of change that attract others.

Elitism: Norms of Equity and Exclusion

The concept of centres and peripheries is closely linked to that of inclusion and exclusion. Both are cultural oppositions, linked to spatial notions of familiar and strange, presence and absence.91 In this section, I examine the norms guiding enrolment to discover why some 'strangers' became present and included in the network while others remained absent and excluded. To the first international peer review committee, who were 'unapologetically elitist', the term excellence meant that 'we should pull together world class teams of scientists: the very best people who, with support, could pull the rest forward' (NCE-SS-1). Roy Gravel recalls that excellence was defined as the top five percent of scientists in a field, worldwide. Gravel considered that an odd and arrogant statement, 'because science doesn't work that way. That wouldn't be the way you would identify the cream of Canadian science. And that wasn't a Canadian number... so it had no meaning' (RG-8). Nevertheless, given the 'excellence' requirement, the biggest challenge in putting the proposal together was choosing the people. The core-set had to ensure program requirements (for example, geographic distribution) were satisfied, while covering the domains of science that interested them—human genetics, medical genetics, and key technologies.
But the program's preference for Mode-2-type interdisciplinarity was largely ignored.

91 An expanded analysis of these concepts can be found in Rob Shields' (1992) examination of Simmel's (1950) notion of 'the stranger'.

Early in the planning, they decided 'that this would be a network of molecular geneticists. And so anybody who was doing cytogenetics, or biochemical genetics or any other type of genetics were automatically excluded in order to keep it focused' (RW-19). This network would operate almost entirely within traditional disciplinary bounds. Beyond that, a degree of arbitrariness, even capriciousness, surrounded debates about who the core-set did and did not want to work with. Perhaps this was inevitable given the need to select only a couple of dozen people from across the country. However, in designating a handful of people as superior scientists, 'excellent' enough to be in the network, they left an implication that those excluded were somehow inferior. The process left a legacy of ill-feeling. Lap-Chee Tsui still regrets the elitist direction. 'In retrospect,' he says, 'I think we should have included everyone. The whole community is very small, and in the end about 75% became a part of the network. So there was a small number of people who did not get in. I just felt it wasn't really necessary to go through the agony when the numbers were so small' (LCT). As Ron Worton describes the process:

The biggest challenge was not in determining who we should choose, but who we should not choose. We made that determination with difficulty, and somewhat arbitrarily. There were some pretty good scientists in the country that we excluded... For whatever reason. Maybe we felt their publication rate wasn't high enough, or they weren't well enough known, or we didn't like the way they did their science, so we excluded them. And in the early days I got phone calls from some of my friends who said 'I'm really angry that you guys did not include me in the network.
Why did you not include me?' And when you're asked a question like that, it's almost impossible to answer. It's about standards and focus really. (RW-20)

The network is a kind of elite club, where membership is increased by invitation only. The inner circle—the priorities and planning committee—'sits around the table... and throws names on the table and discusses them' (RW-20). Often, names are put forward by other members, but even with those bona fides not all are selected to come in. Few outside the inner circle understand the selection process. Worton says merely that they try to identify people whose research looks really interesting and is complementary to the existing research program. One member, a junior researcher back in 1988, thought the decision to include him in the network was circumstantial. He had trained at Sick Kids under Roy Gravel and was located in Ottawa, which gave the network an opportunity to add a node beyond the Vancouver, Montreal, Toronto triangle. He says, 'they tried to cover all possible aspects. Scientists in different parts of the country. Scientists that were young and scientists with a lot of experience. So when they went down the list, I guess I ended up [included]' (RK-2). He recalls that Mike Hayden used to joke that they needed at least one person in Ottawa to deliver the funding proposals. For a similar reason—to get wider geographical representation—Hayden contacted a researcher at the University of Manitoba who would represent genetics researchers in the prairies. Another, at the University of Calgary, self-selected: 'I heard they were doing this and I wrote them a letter, I guess it was to Mike Hayden, and said I'd like to be part of it. And he said "well send me your CV" and I did and they invited me in' (LF-2). One person from the group at Toronto's Sick Kids remembers 'it was initially extremely exclusive.
And then it widened out a little bit to include those people who had a particularly high ranking in MRC and I was one of those' (DC-9).

In a recent Nature opinion piece, a molecular biologist and a zoologist argued that the life sciences are in danger of losing their originality (Lawrence and Locke 1997). The authors perceived a homogenization of opinion, with fewer independent schools of scientists finding novel approaches to problem solving. Scientists are 'playing safe' by following established lines of inquiry, rather than taking intellectual risks. The authors believe this situation is perpetuated, in part, by the dominance of 'star' scientists at conferences and in the literature, and by the inherent conservatism of the peer review process. In other words, as argued in Chapter 4, by limiting selection to elite scientists, these networks tend to limit the variety that feeds more risky innovation-led research.

Another anomaly relates to gender. Women PIs say that the role of female scientists in the network has always been equivocal [92]. Only five of the original twenty-one members were women, and all the original PIs were men. Of the total research and core facility funding allocated in the period 1991 through 2000, women researchers received 11 percent rather than a proportional 24 percent. The two founder members who were not renewed, in 1996, were both women. As one of the five female founders points out, 'some very senior women scientists were not in the network, at all. They were not invited' (DC-38). The proportion of women has increased slightly over the years. In the Phase III proposal, submitted in 1997, eight were listed as members. In 2001, one of three new PIs was a woman, as were three of five new junior researchers called 'network scholars'. All five women founders were interviewed and all made some reference to gender issues.
Mostly, they saw the problem as systemic rather than specific to the network, but expressed a degree of exasperation at the general lack of concern shown by the network's male core set. More than a frustration with gross numbers was the fact that women were not represented in the power positions. As one says,

It was very strongly male dominated. And we [women] have had little involvement in [running] the organization. I'm not even sure that [the men] notice, particularly. The women used to joke about it. But there's a problem that way in our field, in Canada, in general. There's a core of people who are very supportive of each other, in and out of the network. And it's very difficult because you're not a 'buddy' of the guys. I'm not suggesting it is a major complaint or anything, but it's simply a fact. I think it's better now for the younger investigators in the network... but the senior women are scarce. (DC 38-40)

The network made no serious effort to attract women, says another, 'even though there is a lower percentage of women in the network than is generally the case in human genetics in North America' (LF-32). A recent report by the National Science Foundation tends to support this assertion: unlike in the physical sciences, about half the doctorates in biology are awarded to women. Even in the 1980s, one in three biology doctorates was awarded to a woman [93]. This researcher also finds it curious that all of the individuals dropped from the network have been women. One of those former PIs explains that it is simply much harder for a woman to succeed in medicine and science. 'The nature of the [science] system is that it's run by men. If women ran the system it would be very different. So there is no question there is a sexist component to it. It is just because men make the rules' (CG-15).

[92] Knorr-Cetina (1999) found the same in her ethnography of a molecular biology lab.
The women find the elitist 'invitation only' approach particularly troubling, and complain of a lack of transparency in the selection process. They can find no logical explanation for who is 'in' and who is 'out' of either gender. Names have been proposed, but to little effect. 'I don't exactly know what happened to those suggestions, but apparently they were looked at by the [leaders] who decided not to invite them' (LF-3). Excluding people placed a question mark over their careers, especially as the network grew in academic prestige. 'People began to wonder... why didn't they invite me, you know? It's the coalition of top geneticists in Canada, well why haven't they invited me?' (woman manager, PS-CS-81). When the network was starting out, 'if you were left out, it didn't matter too much. But... the bigger the network got, the worse it was to be left out' (woman PI, DC-50). Certainly several well-known Canadian geneticists have been excluded. One says 'it gave the impression [then], and probably still does today, of being a kind of an elitist club, and one in which I didn't belong' (CG-13).

With the exception of Lap-Chee Tsui, visible minorities are also notable by their absence. The network's board, its scientific and professional leadership, and principal investigators are uniformly white. Whether or not this reflects the field of medical genetics as a whole, the homogeneity of race and gender perhaps indicates a profound social, if not scientific, conservatism at the heart of this network. This conservatism is also reflected in the absence of social reflexivity and public accountability.

[93] Reported in the Chronicle of Higher Education, 23.02.01.

Accountability as Social Reflexivity

According to Gibbons et al. (1994), one of the defining elements of new network forms of organization is their social reflexivity. Rather than being accountable to the community of science, these networks are accountable to the community at large.
It is a pluralist framework, where the pushes and pulls of the agendas of relevant social actors condition the decisions and policies that emerge. Thus, argue Gibbons and colleagues, public interest groups, lawyers, and social scientists, as well as natural scientists, have a voice in the governance of Mode-2 networks and, more controversially, in the composition of research teams. This broad representation is deemed essential because of the risks and issues inherent in contemporary science and technology. Similarly, Callon (1999) has noted the emergence of 'knowledge co-production' models in which patient groups establish themselves as 'partner associations' with research groups, and establish parity between lay and expert knowledges of the disease process.

Bruno Latour also emphasizes the social accountability and reflexivity of 'new' network formations. He argues that in a culture of 'open science', where autonomy is sacrosanct, there is no direct connection between scientific results and the larger societal context. But in the type of culture Callon describes as 'overflowing networks' there is a new deal with society: a type of collective experiment in which science and society are mutually entangled for mutual benefit. He concludes that 'scientists now have the choice of maintaining a 19th century ideal of science or elaborating—with all of us—an ideal of research better adjusted to the collective experiment on which we are all embarked' (1998:209).

Recently, Nowotny, Scott and Gibbons (2001: 258-9) extended the reflexive elements of their original Mode-2 formulation even further, arguing that scientific knowledge must be 'socially robust' as well as conventionally 'reliable'. Whereas reliable knowledge has traditionally been produced in cohesive and restricted scientific communities (Mode-1), social robustness depends on 'sprawling socio-scientific constituencies with open frontiers' (Mode-2).
Socially robust knowledge is superior to reliable knowledge, they argue, first, because it has been tested and retested in contexts of application and, second, because it is the 'underdetermined' outcome of 'intensive (and continuous) interaction between results and their interpretation, people and environments, applications and implications' (258). The more open and 'comprehensive' the knowledge community, the more socially robust the knowledge produced. Further,

public contestation, controversy and conflict... are not to be shunned on grounds of principle. Rather, they are a sign of a healthy body politic and part of the process of democratization... Space has to be made for what people want, what their needs are, and... even contradictory responses and claims (258)

To the extent that the NCE program was apparently seeking to create the type of networks envisioned by Gibbons [94], Callon, and Latour, presumably with a broad understanding of public accountability, Michael Hayden's notion of civic science seems impoverished. As Irwin (2001) has shown, the construction of the scientific citizen is a far more complex process than Hayden suggests. For Hayden, the sub-text seems to be that the public (non-scientists) are useful when mobilized en masse but must otherwise be kept at arm's length, lest their ignorance and/or interests impede the research enterprise. This is a classic example of science/non-science boundary work (Gieryn 1995). As Wynne (1999) points out, the lay public is often assumed to lack the 'epistemic capacity' required to judge science. One of the network's board members commented, for example, that 'the public is generally quite ignorant on the subject of genetics. I don't say that with any negative sort of connotations. It is just a fact of the matter. Why would they not be ignorant? It is a very complex science' (B-MM-14).

[94] Again, note the advisory connection between the NCE program and Gibbons and, to a lesser extent, Callon.
Because of their ignorance of the science, it is assumed the public has nothing to contribute to the network, despite the ethical issues and broad social questions that accompany research in medical genetics [95]. At the same time, the network states that 'no satisfactory policies will emerge if public concerns about genetics in health care are not addressed, and if those concerns are not fully and objectively researched' (website; July 2001). Similar attitudes were found in a study of medical geneticists by Kerr and colleagues [96]. The study showed that these scientists view science as a 'gold standard' that clearly demarcates 'good and value free research from illogical or politically distorted opinion, which they paternally attribute to an undifferentiated lay public' (Glasner 2000:11). More troubling, in giving apparently objective assessments of risks associated with the new genetics, the experts in this study 'simultaneously disguis[ed] the extent of their own social location and vested interests' (ibid.).

The demarcation of lay and expert knowledge and interests can be clearly discerned in the following remarks made by a senior network manager (a science PhD) in response to a question about the potential for appointing a lay member to the board.

I mean what would a lay [board] member do? They would just ask us what we were doing. Well we can't explain that. We don't have time. So we try to pick intelligent members who at least understand the field a little bit. Public interest science is just politics. I want to tell you that right now! That's politics and I don't want politics in my network. If somebody has an agenda about organic foods or genetic engineering, I'm not interested. What I am interested in is: are we curing disease? Are we solving a social problem? We're just as capable of looking at the risks and balances as anybody else. But in the end, would you rather have a cure for Alzheimer's or not? Which is better?
And people agree that, in the end, finding the cure for Alzheimer's is certainly a greater social good than being in favour or against clinical trials, or animal rights, or whatever. The fact is that it would have been very disruptive to have grandstanding on the network board. Of any kind. The interests of the organization have to be paramount, not the individual agendas of board members. And if you have a board that has a bunch of people with individual agendas on it—public agendas, private agendas, political agendas—then you are going to have a dysfunctional board and a dysfunctional organization. You are going to lose that cohesiveness that is so important. You're not going to be able to function. Because they are going to block you and then you're not going to be able to carry out your program. So we had federal program officers on the board; we had foundations, we had industry, we had universities, we had intelligent people—medical people, physicians—that were thinking about all of these things. (PS-DS-54-6)

[95] For example, the goal of integrating genetic therapy into the health care system is to predict and prevent disease; predictive capacity requires population-wide genetic testing and stratification based on genetic variants, an issue that carries significant social 'baggage' in the form of eugenics.
[96] Kerr, et al. 1997, reported in Glasner (2000:11).

Apart from the evident paternalism, this network appears compelled to equate the public's legitimate interest in the conduct of the biosciences with anti-science or fringe activities. The reaction is exaggerated: if the public is given a voice, rationality will be lost; when scientific problems arise, we must 'trust the experts' to solve them. Brian Wynne (1999) calls this approach to problem-solving 'deterministic uncertainty', i.e. when problems caused by science are deemed reducible only by the application of more science.
Categories of 'lay' and 'expert' are intrinsically problematic and socially constructed [97]. Scientific discourses exert normative influence over the public domain and attempt to 'reshape the world in their image'. Wynne (1999) calls it 'a profoundly unaccountable and unreflexive process'. Recent work in the public understanding of science (PUS) shows that exclusionary discourse underpins much of the public's mistrust of scientific expertise. Barnes and Edge (1982:237) suggest that 'the tragedy of expertise' is its ultimate contingency.

In a high-trust, high-risk area like medical genetics, the absence of external voices within the network means the absence of fundamental questioning as to what might be an appropriate place for genetic approaches to illness. As one prominent critic points out, 'it's a major social hazard that nobody is looking at those ethical, legal, and social questions within CGDN. Because there is an implicit assumption that all this will be good for us. And we need to ask: will it be?' (OTHB-21).

Langdon Winner (1993), speaking of the politics of technological change, has raised these questions in a wider context. The 'problem of elitism', according to Winner, is a question of the way powerful actors and groups skew the agenda 'in ways that favor some social interests while excluding others' (1993:370). The powerful define the rules of the game and the allocation of resources. Winner urges those who study the social aspects of science and technology to ask

what about groups that have no voice but that, nevertheless, will be affected by the results of technological change? What of groups that have been suppressed or deliberately excluded? How does one account for potentially important choices that never surface as matters for debate and choice? (369)

Thus, taking CGDN as an example, claims that networks are more publicly accountable appear insupportable.
Although the idea of a 'new deal' between science and society is appealing, it is not apparent that it works. Far from expanding the public sphere, network arrangements can be viewed as contributing to its erosion. In addition to deficiencies in public accountability, deficiencies in fiscal accountability also need to be examined.

[97] For a sample of recent discussions see Epstein 1999; Haraway 1999; Irwin 2001; Yearley 2000.

Accountability as 'value for money'

The NCE program was conceived under a neoliberal agenda of public sector reform that was fuelled by a rhetoric of fiscal accountability. Results- or performance-based approaches tie funded science to key economic and social outcomes. It seems both responsible and logical to account to the public for the use of their funds. But accountability goes beyond use to value. Asking if money is 'well-spent' involves asking if it is effectively spent, and if it could be more effectively spent elsewhere. Put specifically, are programs delivering value for money [98]? Can they demonstrate cost-effectiveness?

The problem of ensuring that public programs remain accountable and return value for money can be understood as a 'principal-agent' problem of delegation and information asymmetry [99]. The state (principal) delegates provision of research that will fuel innovation to university scientists (agents), who are induced by incentives (research funding) to comply with the regime of Strategic Science. But especially in technical areas, agents always know more about delegated tasks than principals. This asymmetry of information makes it difficult for the state to reassure itself of the integrity and productivity of the scientists it is funding (Guston 2000b:33). One solution would be to regulate research performance directly, but that means state control. The neoliberal preference is for refined forms of 'remote control' or steering that induce internalization of the state's expectations.
Through these mechanisms of governmentality (Foucault 1978) [100], 'normalized' subjects come to control themselves according to previously established understandings of what constitutes 'the norm' (Hacking 1990). Governmentality requires fidelity devices that will measure and induce compliance, and provide 'discursive validation' that agents are doing what principals expect them to. Largely, these devices are accounting tools: budgets, cost/benefit analyses, ratios and comparisons, statistics, financial and compliance audits [101]. Accounting tools are far from unproblematic. While appearing impartial, they selectively 'construct the world' from a complex web of social and economic considerations and negotiations. Through its surveillance and control capacities, and its ability to determine financial norms, accounting has the power to create a new 'factual' visibility and discipline performance (Hoskin & Macve 1993; Harris 1998:137). Embedded layers of accounting and accountability induce the required compliance. These types of reporting relationships govern relations between the NCE program (principal) and CGDN's administrators (agent); and between CGDN administrators (principal) and network scientists (agent).

[98] 'Value for money', or 'comprehensive', audits are fundamental to NPM (Power 1995) and have now been adopted, at least in principle, by all federal and provincial auditors general.
[99] For a fuller elucidation see David Guston's recent work, e.g. 1999, 2000a, 2000b.
[100] The post-Foucauldian governmentality literature is extensive, but see, for example, Burchell et al. 1991; Barry et al. 1996; Power 1995; Ericson & Haggerty 1997.
[101] For more on accounting's 'calculative practices' and 'rituals of verification' see Power 1995; Porter 1995; Miller 1994.
The network's head office thus acts as an intermediary that helps assure state-principals that scientist-agents are following the policy agenda [102]. It becomes a 'centre of calculation' (Latour 1987) for the accumulation of facts to send to Ottawa. Following this logic, reported data gather 'positive modalities' and become harder to resist as they move away from their conditions of production (the lab) to the network office (the centre) and then to the program directorate in Ottawa (the 'centre of centres'). At each stage data are recombined and reinscribed. The NCE directorate seeks to control the network by specifying what 'makes up' the numbers (Hacking 1990). But network administrators reinterpret the directions in instructing scientists what information is to be supplied.

To illustrate, CGDN would report as network accomplishments almost everything their (university- and hospital-funded) researchers achieved, from scientific breakthroughs, to publications, external grant funding, and the raising of venture capital by researchers in network spin-offs. This over-reporting was so prevalent that many of the 'official' network statistics I consulted proved unreliable for the purposes of this study, because they failed to conform to the guidelines set down by the NCE Directorate. A serious example is that networks were supposed to report as 'cash contributions' from partners only funding that flows directly through network accounts. In many cases, CGDN reported funding that went directly to network researchers. The network's legitimate interest in those funds was minimal, but because they flowed to members they were reported to Ottawa as contributions received by the network.

[102] In an international comparative study, Atkinson-Grosjean & Grosjean (2000) found that the proliferation of such intermediary agencies was a generalized feature of higher-education systems under neoliberalism.
Also, researchers were asked to report almost all their research activities as network activities, for the annual statistical report. As one complained,

It seems sort of ridiculous, talking about all of these accomplishments, when in fact you know maybe 5% of them were funded by the network. And yet they want to hear about all [of them]. So every year I have the same argument, like: 'What do you want me to do? Write what my student MF did last year? Because that's all that you funded.' And she says 'oh no, put it all in.' And I say 'well, why should I?' And it's gotten, quite frankly, a little bit ridiculous, given the amount of money we get versus the accountability and justification. I mean what do you do? Write your whole program down and attribute it to the network? ... I mean I would say things jokingly like 'I think we should just spend all the money on... having great meetings in ski resorts. I'd get more out of it than you pretending to send me money and pay for my student.' (RK-65-71)

The directorate is not only aware of reporting anomalies but may have contributed to them. As shown earlier, for the program as a whole, additional public funding is under-reported while aggregated private sector contributions, both cash and in-kind, are over-reported. As early as 1938 the US National Resources Committee called such practices 'window dressing' (Godin 2000-3:16). Today, we more often label it 'spin'. The purpose is simply to make results look better than they are, to protect budgetary resources and allocations. The NCE program's first full-time director was appointed in January 2000. He says that the problem has been brought to his attention and agrees that, 'yes, maybe some better discipline should be followed... that's something that we will be looking at' (JCG-14). In September 2000, the Directorate instituted an audit requirement, meaning networks now have to submit externally audited annual reports.
Since that directive, CGDN has restructured its administrative staff. Responsibility for financial and statistical reporting has been assigned to a new staff member with appropriate qualifications.

Summary Discussion

This complex chapter has attempted to capture the way CGDN forged an institutional identity and organizational structure under multiple constraints, including: demands for both scientific excellence and commercial relevance under managed conditions; resistance from local host institutions; the traditional structure of basic research and the conservatism of researchers; and the sheer novelty of doing something that had never been done before. Now, after more than a decade has elapsed, CGDN's successes are clear. But, equally clearly, some have been achieved at the cost of consequences perhaps unintended by the program's architects.

The concentration of resources in CGDN creates a hegemony. The network defines the field of medical genetics in Canada. Non-members are 'othered'. Careers can be affected. Yet no objective criteria for membership exist. Instead, membership is an 'invitation only' affair, within the arbitrary remit of the same elite inner group of scientists that has controlled the network from the start. Power relations are asymmetrical; they concentrate in the most powerful actors and in the centre(s) they control. It is, quite literally, a self-reproducing 'old boys' network. Relatedly, network resources flow to the power centres rather than being distributed to scientists across the country. The consequence of exclusion and concentration is reduced diversity within the Canadian 'science system'. As a concomitant, there is no room in the network for 'lay' representations. The 'public interest' is constructed and defined in the abstract, within expert discourses that exclude the authentic voices of interested publics.
That being the case, and in the spirit of 'value-for-money' accounting, we can ask about the extent of public investment in the network (as well as in the program more generally) and about the returns on that investment. In the ten years from 1990 to 1999, CGDN's six original Principal Investigators received between $1.4 million and $1.8 million each in network funding, while the 15 other founders received on average between $800 thousand and $1 million. These are modest amounts, on an average annual basis, but it must be remembered that network funding is incremental funding. Network researchers also receive direct support from non-profit disease foundations, research councils, and industry contracts, while their home institutions underwrite salary and direct costs. By the time of federal exit, in 2005, CGDN will have received in excess of $60 million in direct NCE program funding. This figure does not include provincial and industry contributions, commercial revenues, or university subsidies to network researchers.

It is impossible to tease out of this complex of funding sources what results are attributable to the network and what would have happened anyway. The same is true of the program as a whole where, as already shown, public investment exceeds $650 million. In other words, there is no reliable way to determine whether or not CGDN and the NCE program deliver direct 'value for money'. But the problem with accountability frameworks is that they seek to capture and evaluate only those dimensions that can be quantified, objectified, and made accountable. Non-quantifiable and less tangible practices are literally not taken into account. At the same time, other elements assume new weight because they can be quantitatively evaluated: quantity (not quality) of research publications; numbers of patents held; dollar value of research contracts.
In short, by focusing on readily quantifiable inputs and outputs we risk neglecting more complex social variables that resist measurement but are, nevertheless, valid outcomes. I am thinking, in particular, of the construction of intangibles such as 'network culture' and 'network science'. The next chapter examines the way the network forged a scientific culture and community, and a scientific legacy.

CHAPTER 5: CULTURE AND SCIENCE

Forms and practices of scientific culture and community [103] were in place well before Robert Boyle convened an 'invisible college' in Oxford and London in the mid-17th century. In 1963, Derek Price borrowed and extended Boyle's metaphor, reminding us that small, informal collectives of closely interacting scientists are the principal means of scientific advance. Subsequently, Diana Crane (1972) defined an 'invisible college' as an informal interpersonal network based on shared scientific interests, rather than geographic proximity. As Philip Agre (1999) points out, 'so-called invisible colleges are in many ways more visible to the researchers than the physical campuses where they organize their places of work'. The distributed and informal nature of scientific interaction is also captured in the term 'communities of practice', which describes self-organizing, self-selecting groups of colleagues whose members are informally bound together by their shared expertise (Lave & Wenger 1991). Note the family resemblance with the scientific 'thought-collectives' identified by Ludwik Fleck (1979). These communities, characterized by intellectual interaction and the mutual exchange of ideas, constitute the 'carriers' of a field's knowledge and culture. Similarly, Knorr-Cetina (1999) speaks of the very different 'epistemic cultures' of molecular biology and high-energy physics.

[103] This section draws in part on Fisher, et al. (2001).
Together with actor-network theory, these concepts, drawn from the wider field of science studies, will help us understand the development of a distinctive culture and community in the Canadian Genetic Diseases Network (Part I of the chapter), and the nature of what might be termed 'network science' (Part II).

I: 'A Nation of Colleagues'

The cooperation and collegiality have just been incredible. It's created a nation of colleagues that is totally unbelievable. (Michael Hayden, Scientific Director, MH2-21)

At the end of its first year of operation, CGDN listed among its achievements the development of 'an ethos and common understanding of what it means to be in a network' (CGDN-AR 1991: 8). The use of the term ethos indicates an interesting ambivalence. It draws around the network the cloak of Mertonian ideals relating to the normative structure of science. But at the same time it invokes the new ideal of 'network science' with its emergent (counter-)norms such as patents and industry partnerships. The rhetorical purpose of the claim was to persuade NCE bureaucrats that CGDN took the program's non-scientific requirements seriously. Another claim about network ethos can be found two years later, in the proposal for the second phase of funding: 'we have created a nationwide department of human molecular genetics' (CGDN-FP 1993, emphasis original). The subtext here is recognition of Ottawa's intent to change the overall research culture in Canada, network by network, by overriding university boundaries and autonomy.

Even if we are to take the idea of a 'network ethos' seriously, the claims were premature to say the least. Ethos can be understood as a cultural achievement, and the development of culture takes time. As well, an interesting question can be posed about whether culture can be induced by the imposition of a network model, or by the provision of funding.
But in examining CGDN's history, we can see that very gradually, and taking on a different tenor in each of the three funding phases, a distinctive ethos or esprit de corps (CGDN-FP 2001) did, in fact, emerge. CGDN's 'induced' epistemic community anchored itself in the production of a discursive space of face-to-face interactions that promoted trust and reduced competition.

Inducing Solidarity

Although socialized in 'invisible colleges', network researchers were confused about, and initially resisted, the whole concept of 'mandatory networking'. No real agreement existed on what that might be, or how it might be accomplished. The network's professional staff had to invent virtual and face-to-face ways of meeting program requirements. They had to grapple with the complexity of somehow linking together a dozen institutions and two dozen principal investigators, as well as postdoctoral fellows and graduate students. And the reporting requirements meant that networks couldn't just say they were doing networking; they had to prove to the NCE directorate that they were doing it. So ways had to be devised of enticing scientists to comply. The method they implemented was to make principal investigators' funding conditional on participation in network activities. Subsequently, it was hoped, PIs would realize the manifold benefits of voluntary participation. Almost all network researchers interviewed commented on this creative relationship between network funding and network-building. For example,

Although the other aspects of the network have been much more important, you wouldn't have pulled the people together without the bait of the funding. We would have said, 'I haven't got time to just go and talk with these people.' But you'll go and talk when you know that if you don't, you won't get your funding. And then you find it is really worth while having talked to them and it is really fun.
BG-17

The biggest value of the network is not the funds that they give us, but the networking opportunities and the collegiality and so on. Although, I have to say that if we didn't have funding for our labs in addition, we'd probably say, 'Oh, I'm so busy, I don't think I'll go to the annual meeting. I don't really need to be there.' Whereas, if we're funded by the network, we have an obligation to be there. RW-17

The network funding was not a significant proportion of a network researcher's total budget. Only a small component of their research program would come into the network, usually the component that would profit best from the collaborative opportunities. Other aspects stayed outside. Even in the early phases, when the network was less extensive, the funding allocated to researchers probably never amounted to much more, on average, than 15% or 20% of their research budget. This would have been enough to support, perhaps, a senior technician or postdoctoral fellow. Put another way, 'out of perhaps 15 to 20 projects in my lab, maybe two or three were covered by network funding, the rest were covered by other kinds of funding' (RW-16). But a moral obligation was attached to the network funding. It got people 'to buy in to the network concept and become part of it' (FJ-21). It helped to overcome the resistance to leaving the lab for yet another meeting. And it was this face-to-face aspect that quickly became far more important than the virtual aspects of networking. The latter soon became taken for granted, an enabling technology[104] to further the personal relationships and community of practice that was being forged. As a policy advisor explains,

The network mechanism... forced people to get together face to face, because of the funding provided... Face-to-face meeting is really important, especially early on. You need a lot of personal interaction to make that networking work.
And after that you can do it by e-mail and telephone and fax and all the rest of it, but in the beginning you really have to have the face-to-face communication. ARA-DR-49

The face-to-face community that became the Canadian Genetic Diseases Network began to take shape in 1991, at the first network meeting.

[104] Another enabling technology is the conference call. Board and committees frequently 'meet' by telephone.

Face-to-Face Community

As the main forum for interactions and exchange, the network's early scientific meetings laid the foundations of network culture and community. Unanimous about the cultural importance of these meetings, scientists considered them one of the main benefits of belonging to the network. The first meeting, held at Whistler, BC, in May 1991, set the format for those that followed. Because of the NCE requirement to dedicate 10% of the budget to networking, full costs of attendance were covered for Principal Investigators and Core Facility Directors. These individuals could, in addition, nominate three members of their teams for full subsidy. For example, students and fellows funded by the network or working on network projects could attend cost-free. In rare cases, a technical support member of the group could be included if their contribution was deemed to constitute fundamental research. In molecular biology, where rewards usually go to lab leaders (Knorr-Cetina 1999), subsidizing conference travel for junior researchers was so unusual as to be unique. Each participant was expected to present and discuss their results, either through a poster (students, fellows) or an overview lecture (PIs). As a result, delegates to the Whistler meeting faced a busy three-day schedule of scientific sessions, workshops, and discussion periods. Approximately 100 participants attended from across Canada, including board members, external collaborators, and industry partners, as well as network researchers and special guests.
Concurrent workshops debated, among other issues, the topics of 'Industrial Relationships' and 'Search for the Gene'. This routine may seem much the same as any scientific meeting or conference. Scientists get together and give papers as a matter of course. But there are significant differences. First, as one of the researchers explains, 'a network provides you with access to a completely different and much broader group of people than you would ordinarily associate with at meetings' (FT-3). Normally, scientific meetings are segregated by narrow research interest. In contrast, network meetings are broad, covering the field of genetics in Canada. Second, from the start, the norm was 'full disclosure'. The meetings were intended to encourage in-depth discussion of interesting, early-stage research results, often prior to journal publication. Sensitivity to priority, if nothing else, would have precluded this level of frankness in a 'normal' scientific meeting. At the same time, however, even in these first meetings, a countervailing force emphasized confidentiality. Unless you were a network principal—that is, a researcher or partner (industrial or institutional) listed in the network's Internal Agreement—you were required to sign a confidentiality agreement. Intellectual property rights had to be preserved in order to fulfill the network's commercial mandate. So those 'full and frank' discussions had to take place behind closed doors; participants were advised that discussing results in a closed forum of colleagues did not constitute disclosure for patent purposes.[105] Even so, researchers were cautioned to apply 'normal discretion in disclosure of scientific data' (CGDN-ASM 1991). In practice, however, it soon became clear that 'normal discretion' was not required.

It is totally different than going to a meeting where you have to be careful what you say because someone will rush off and do your experiment and publish it before you get to it.
BG-20

In the network, you're not in competition. And so you can confide and get some valuable feedback from these people, right? It's nice to get up there and maybe brag a bit about the stuff that you've got before it's published. It isn't like you feel 'I can't say anything because Frank in Vancouver's gonna scoop me.' (RK-65)

It's one of the strengths of this network that we're all in this together. It's difficult out there. The more that you can discuss things, in confidence, the better. You have to be confident that the person you talk to is not going to spill the beans. The trust relationships and the reliance on individual integrity is very important. PS-SH1-20

[105] Debatable, but not tested.

The third factor that marked these meetings as different was the social cohesion they engendered. Despite all the scientific gravitas, the social aspects remain particularly vivid for most people. Asked what she recalls about the first meeting, one of the founders, a distinguished scientist, says, 'we went skiing up on the glacier all together. It was great' (BG-31). The second and third meetings, in May 1992 and June 1993 respectively, were held at the Far Hills Inn, Val Morin, PQ. Again, her recollection of the Quebec meetings is that 'we had afternoons off. We went hiking... We did plays, skits and things. And we had fun' (BG-31). Few more effective ways can be found to build trust and loyalty—and the foundations of future collaborations—than to play together and build personal relationships.

When you know somebody personally, because you've met them at these network meetings, then you are much more liable to approach them, to work with them. It increases the potential for collaboration. LF-42

For me the network has meant a lot of relationships with people that I wouldn't have met otherwise, so I have a whole circle of friends now that I wouldn't have had. That's just on a personal level. FJ-37

I have a strong sense of belonging to the network.
What I do is defined within my grant applications. How I feel is defined in my interactions with the network. RG-38

So the network community was about openness and sharing, on the one hand, and building a sense of solidarity and belonging, on the other. Through the annual scientific meetings, everyone in the network knew something about what the rest were doing, and that facilitated a climate for collaboration.

A Climate for Collaboration

Network scientists became familiar with each other's research from hearing presentations on work-in-progress. This annual 'overhearing' enabled synergies to happen. As one researcher explains, 'going to the network meeting, it's a very easy, fast way of getting a survey of who's doing excellent research in Canada in our field. And that saves a heck of a lot of time for us all' (MW-30). Perhaps listening to somebody talking about a particular gene, a researcher will realize that they have a piece of the same puzzle. Or perhaps they need to find someone with particular skills, to help them with a project. In either case, they can make contact, confident that their overture will not be rejected. In other words, to borrow a felicitous phrase from one PI, the network acts 'kind of like a blanket purchase order on collaboration' (BG-19).

The whole game is sitting open on the table and then you can reach in any direction. Anyone who gets a call from another person within the network has a sense of obligation to talk and participate and collaborate... It is like asking your brother or sister for something as opposed to someone with whom you don't really have the same relationship. They can't say 'sorry, I'm too busy.' Or 'sorry, you're competing with me.' BG-19

I know those people well. I've met them many times at network meetings. I've heard them talk. And if there was anything I needed or wanted, I certainly wouldn't hesitate to pick up the phone and expect that I would get a very positive response.
DC-47

The fostering of trust and reciprocity on this scale was a unique experience for network scientists, who were more used to a culture of competition than one of co-operation. Reducing competition and enhancing the ability of network scientists to work together constituted an advantage for the entire collective. It should be noted, however, that the absence of competition was in part an artifact of the selection process. Researchers were chosen for the complementarity of their programs. No two teams were working on exactly the same thing. So in the network, as one PI says, 'we're not in competition, because we're doing different things. We're tied together with the common interest, but we are distinct' (MW-42).

Being collegial also included working for the common good, and trusting community decisions. Through the years of meetings and network-building, a process of sedimentation took place. CGDN began to settle into the shape it had claimed at the start—a community of colleagues, with a shared ethos and a common understanding of what it means to be in a network. One researcher comments that, 'as a group of geneticists we really got to know each other much better than would have happened otherwise' (LF-13). Another says that the network created value through 'personal contact, personal motivation, driving the science' (RK-61). Over time, members began to identify themselves as network researchers. Almost by accident, they agreed, government had 'got it right' and produced a capacity to do 'national science'. As Hayden comments, 'it's quite unusual to be led from Ottawa. But this was real leadership' (MH2-2). For Tsui, 'whether by design or by accident, the federal government somehow had the foresight to create these kind of networks. [Now] we are leading the world' (LCT-23). The beneficial effects of this foresight on the conduct of science were noted.
Because of the networks, across Canada we are doing science in a manner that I don't think could possibly have happened before... A very large piece of the scientific community is [now] involved in promoting collaboration—inter-university and interdisciplinary, not just geographic. That is a very positive thing. RG-78

The network is like a national lab without the consequences—the bureaucracy, the 9 to 5 mentality. Here, it's academic, competitive, but then we get together and we figure we're all part of this same process. RK-64

We created research groups that would not have existed otherwise, that spanned the country. Or involved different components of the country where we might not otherwise have encountered each other. These are cross-country collaborative interactions. RG-28

However, the network did not evolve quite the way the program's architects envisioned. They had anticipated large-scale, cross-country collaborations. For whatever reason—institutional logistics, egos, distance—that did not happen. And, despite mutual goodwill, the number of researchers who built one-to-one, bench-level collaborations was less than the potential would suggest.

We don't interact on a project by project basis as much as was hoped we would. I think we fail a little bit there, just because there is too much to do and no time. BG-12

There are some collaborative projects within the network. But, it's not as heavily networked as it could be, I think. FJ-40

I have not been one of the ones who has interacted... perhaps as much as some other people. Because I don't really have a collaborative project with anybody in the network... it's not because I'm not interested, it simply hasn't been beneficial. DC-46

Still, by creating the intellectual and collegial infrastructure described above, the network allowed individuals to formulate different questions and approach their science differently.
So even in the absence of hands-on collaborations, researchers benefited from their interactions in the network. Says one network researcher, 'I don't think we would have done that project in quite the same way if it wasn't for the network' (BR-4). Another confirms that

we have changed in the way we ask questions and, therefore, the questions that we answer and what we publish. I know that for me—the kind of science I was doing, the directions I was taking—it's very, very clear that I do things differently than I would have done before. RG-80

But because each phase of funding added new researchers, institutions, and industry partners, the capacity for collaboration and the nature of the network community was not static. The orientation changed over time.

Phase Transitions

When the network was renewed for Phase II, with its enhanced emphasis on commercial results, it meant more industry partners[106] and more emphasis on commercial potential at the annual meetings. Yet the overall ethos stayed much the same. Largely, this was because the core-set remained unchanged and because the expansion had been relatively modest, from 21 to 33 researchers, and from 11 to 13 institutions. So the growth was easy to absorb. That was not the case in the transition from Phase II to Phase III. With the expansion to 50 researchers in more than 20 institutions, intimacy was almost impossible. Almost all founders felt the culture changed radically at that point and that something important was lost. As one comments, 'in the early days I knew everybody and now I don't. That happens when a group gets big enough. It means that we're now more of a conglomerate than a bunch of guys working together' (RG-24). A comparison between Phase I and Phase III follows in Figure 8, showing growth in numbers of investigators and institutional partners.
[106] Details of industry partnerships appear in Chapter 7.

Figure 8: Growth in Partner Institutions and Principal Investigators, Comparing Phase I to Phase III

                                                   Phase III   Phase I
PRINCIPAL INVESTIGATORS                                50         21

UNIVERSITY PARTNERS
  Alberta                                              Y          -
  Calgary                                              Y          Y
  Laval                                                Y          -
  Manitoba                                             Y          Y
  McGill                                               Y          Y
  McMaster                                             Y          -
  Montreal                                             Y          Y
  Ottawa                                               Y          Y
  Queens                                               -          Y
  Toronto                                              Y          Y
  UBC                                                  Y          Y
  UVic                                                 Y          -
TOTAL UNIVERSITIES                                     11         8

HOSPITAL & INSTITUTE PARTNERS
  Biotechnology Research Centre, UBC                   -          Y
  Children's & Women's Health Centre, UBC              Y          -
  Children's Hospital of Eastern Ontario, Ottawa       Y          Y
  Hôpital Sainte-Justine, Montreal                     Y          Y
  Hôpital Saint-François d'Assise, Laval               Y          -
  London Health Sciences Centre                        Y          -
  Mount Sinai Hospital, Toronto                        Y          -
  Hospital for Sick Children, Toronto                  Y          Y
  Montreal Children's Hospital                         Y          -
  Montreal General Hospital                            Y          -
  Ottawa Hospital Research Institute                   Y          -
  Robarts Research Institute, London                   Y          -
  University Hospital, Vancouver                       -          Y
TOTAL HOSPITALS & INSTITUTES                           11         5

Earlier, I discussed how the elite recruitment criteria that were applied in the first two phases caused a fair amount of debate. Many were uncomfortable with the emphasis on exclusivity. However, the wisdom of this approach was that it produced a strong and cohesive culture. As a result, when the approach was reversed in Phase III, it tended to undermine what had been built to that point. One of the founders had spoken strongly in the past about including all qualified scientists. But when that eventually happened, he found the effects disturbing.

We had such stringent criteria in the beginning and then, in order to get the Phase III funding, we had to open it up again. Wide open. That was a most difficult decision for me. I was not very happy about opening the thing wide because it was so indiscriminate. Some people were recruited just for their name. They didn't really have any interest in the community. They are part of the network and as yet I still haven't seen any contribution from these people.
LCT-13

Because so many people and institutions were now members, maintaining the same level of familiarity was impossible. The mechanisms of interaction that worked so well in a relatively small group stalled when numbers grew. People were disappointed that they could no longer get to know each other in the same way. Fear was expressed that a more corporate, commercially oriented style of doing things would undermine collegiality. Even the tenor of the scientific meetings—the great binding mechanism of the past—was affected.

The meetings haven't been great. All scientific talk; no play. This year's meeting was held in the middle of Vancouver, in a small hotel, where there was nothing that you could do together for fun. And it was tied to another huge conference. So everyone had been away from home too long, and were too tired to play together. BG-30

It is immediately obvious when you go to a network meeting, that this is not... the style that we have been used to. These are meetings where the commercial aspect of what we work on is stressed. That's probably the biggest thing. And then the scientific content comes second. MW-6

Not only were the meetings different, the sense of commitment was different. When researchers were recruited for Phase I and Phase II, it was for the long term. Renewal of funding was not guaranteed, of course; competitions were fierce and anxiety on that score was high. But no one sensed a finite horizon. In those early years, funding could be lost in only two ways: either the whole NCE experiment would be cancelled, in which case all the networks were in the same situation; or a network would not be renewed because its proposal would be judged inferior to others, and that was the luck of the draw. No third contingency, no sunset provision, appeared until Phase III.
It came as a complete shock and a bitter irony that when the program was made permanent, in 1997, removing fears of overall cancellation, it was at the cost of continuity for individual networks. Thus researchers recruited for Phase III came in knowing that, at best, they would be with this group for a maximum of seven years. Together with the sheer numbers of new recruits, the sense of finitude limited 'buy-in'. In fact, by this point, several scientists were members of two or even three networks. So the relationships, and the willingness to trust, were not there in the same way. This was manifestly the case in attitudes to the annual scientific meetings. In the past, attendance had been mandatory, not discretionary. But to many of the Phase III recruits it was 'just another meeting'; they did not bother to attend. As one of the managers complains, 'the minimum that we ask is that you come to the annual scientific meeting. The old groups from Phase I and II are always there... [but]... there is a much weaker understanding [among the new recruits] of why they need to be there. Some of them from the new group just didn't come' (PS-CS-80). The funding bait was so diluted, because of the number of researchers, that it no longer offered sufficient inducement. As well, many of the associations written into the Phase III proposal were strategic. The purpose was to simulate dynamic expansion; actual connections were tenuous at best and in some cases divisive. For example, principal investigators had been recruited from Mount Sinai Hospital in Toronto, but historical disagreements marred relations between this team and their neighbours across the street at Sick Kids. The most recent concerned the administration of funds for Genome Canada, the new umbrella body for genome research.

Genome Canada is very much the legacy of CGDN. And we [Sick Kids] worked very, very hard to get the government to do that.
And I think it is just a crying shame that we at this institution, the place where most of the genetic diseases work is done, are not being given the job of making sure the money goes to the right places. It is going to go to Mount Sinai. It has been diverted. There is a lot of political stuff that goes on. If Mount Sinai is going to use the money for genetic disease, that would be great. But it sounds like it is going to be diverted to doing all kinds of rubbish that has got nothing to do with genetic disease. BR-53-7

In a climate of tenuous connections and actual rivalry, the authority to compel attendance was lacking. As a result, enculturation into the network was minimal. At this late stage of the network's development, the best way to describe it may be as an 'imagined community'. Benedict Anderson (1983: 1-7) coined this term in developing a theory of nationality and 'nation-ness', but it provokes some interesting thoughts when applied to this network as it presently stands. Anderson proposes to define nationalism as an imagined political community [that is] imagined as both inherently limited and sovereign (5-6). It is imagined because most members will never meet their fellows, 'yet in the minds of each lives the image of their communion' (6). Anderson suggests that all communities are imagined, once they exceed the possibilities of face-to-face contact achievable in primordial villages. What distinguishes communities is not their reality, he says, but their style of being imagined. It is imagined as limited because it has finite, though elastic, boundaries beyond which lie other nations. It is imagined as sovereign because nations dream of being free. And it is imagined as a community because it is conceived as a deep, horizontal comradeship. I suggest these attributes are applicable to the imagined community that is the Canadian Genetic Diseases Network today. In this section I have explored the idea of the network as a community and a culture.
In the next, I investigate the type of science produced by this community.

II: Network Science?

Grounded in laboratory practices and commercial motivations, molecular biology is an example of a 'practical science'. Divisions between the creation of knowledge (theory) and its applications (practice) are largely rejected. Meaning collapses into application, and truth value collapses into use and exchange values.[107] The focus is converting lab results into profitable new therapies. In this section, I will review what happens when individual research programs in molecular biology (medical genetics) are brought together under the banner of 'network science'.

[107] The phrase 'practical science' was R.G. Collingwood's, and these points were made by Evelyn Fox Keller in a lecture at St John's College, UBC, March 2000. For a political economic perspective see Mackenzie, Keating and Cambrosio (1990).

Science is normally conducted in a highly competitive environment; individual labs are pitted against each other in races for resources and priority (Merton 1957). At the same time, within a laboratory and under the direction of its leader, people co-operate, share resources and ideas, and publish together. In a sense, CGDN extended the boundaries of 'the laboratory' to include everyone (and everything) in the network.[108] All members of the network were considered colleagues; all had access to the network's technologies. In the long run, this 'extended lab' proved 'more important to the scientific enterprise than a lot of the rest of what CGDN does, because this is where the new ideas and approaches that power everything else will be generated' (Expert Panel Report; CGDN-EP 1997: 15). The ethos of trust and cooperation allowed network researchers to reduce competition. They helped each other with scientific problems, reviewed each others' papers, exchanged students, and advised each other at all levels.
These tangible and intangible aspects of belonging made the network a coherent and cohesive entity. It provided an organizational structure, albeit loose, that contributed to the production of first-class science. But whether this science could be described as a distinctive form of 'network science' is an open question. In my initial reading for this study, I found in network and program documents descriptions of a clearly defined network research program, divided into projects and themes, with teams of researchers working together under the direction of project leaders. I imagined the discussions at the start of each phase, about what 'we' were going to do next. I imagined scientists working together, according to plan, to discover genes and therapies. On closer examination, as I will explain, the reality of network science proved elusive. Network science was not where I expected to find it, in the 'network' research program. But it was very much in evidence elsewhere: in the services provided to members by core facilities and their directors. In order to approach these questions, I first needed to develop an understanding of the medical genetics field.

Medical Genetics: An Overview

The science of CGDN is medical genetics, the field that studies the relationship between human genetic variation and disease. Genetic disorders are classified into one of three types: single gene disorders, chromosome disorders, and multifactorial disorders (Prater and Newlands 1999). Single gene defects are caused by mutant genes, usually a single critical error in the genetic code. More than 4,000 single gene disorders have been described. Chromosome disorders are due to an excess or deficiency in the number of genes contained within an entire chromosome. The most common example is Down Syndrome (Trisomy 21), which involves an extra, normal copy of chromosome 21. Multifactorial inheritance is responsible for a wide range of disorders, believed due to multiple genetic mutations.
Some cancers, coronary artery disease and diabetes mellitus are included in this group. A mutation is defined as any permanent change in the nucleotide sequence of DNA. Mutations may occur in somatic or germline cells, but only germline mutations are inherited. Somatic mutations, however, are responsible for many medical problems. For this reason cancer and coronary artery disease are often considered 'genetic' diseases (Prater and Newlands 1999). The practical goal of medical geneticists is to understand the basis for mutations and to use that information to design new therapies for gene-related disorders. The field contains numerous, rapidly advancing areas of interest, such as chromosomal analysis; cytogenetics; biochemical genetics; clinical genetics; population genetics; genetic epidemiology; developmental genetics; immunogenetics; genetic counselling; and foetal genetics. Michael Hayden's research program in Huntington's disease is one example of the type of cross-overs that occur. Hayden's team has identified a marker used in genetic testing for Huntington's disease. As well as researching the genetic basis of the disease and testing for it in patients, they are also involved in pre-natal testing, and in studying the psychological consequences of genetic testing on patients.

[108] Latour (1988) describes a similar effect in The Pasteurization of France.

The history of medical genetics and the history of the gene are intertwined (Childs 1999). Keller (1995) traces an arc through three periods. The early 20th century was dominated by a very powerful discourse of gene action. But the gene itself remained a statistical entity; a black-boxed construct. In general, medical science paid little attention. Interest increased when the physical basis of heredity was established, but mainly among those who studied rare anomalies (Childs 1999). But little progress was made until 1953, when James Watson and Francis Crick described the molecular structure of DNA.
The mid 20th century was the era of early molecular biology, which seemed to provide answers to questions about the nature of the gene and gene action—the 'genetic program'. At this point, according to Childs, medical genetics began in earnest, following the functional definition of one gene-one enzyme. In the 1960s, the development of the structural definition of the gene meant that inborn errors of metabolism could be described in terms of protein differences. The comparative youth of the field can be illustrated by network scientist Charles Scriver, who learned biochemical genetics in its infancy. When Scriver joined the McGill faculty in 1961, he was the first biochemical geneticist in Canada, meaning that he was 'the first one formally trained to do that type of thing and be taken on board as a person who would do biochemical genetics' (CGDN-CS).

In the late 20th century, the molecular definition of the gene led to a technological explosion that moved genetic and molecular analysis beyond rare single-gene disorders to complex, multifactorial diseases. The tools of molecular genetics underwent revolutionary changes. They include the identification and use of restriction enzymes, cloning for recombinant DNA, vectors, probes, the polymerase chain reaction, DNA sequence analysis and protein analysis. The availability of these tools, and the promise of genetics, led to the foundation of the Human Genome Project in the late 1980s. As the project neared completion, molecular biology again changed radically as fields like proteomics and functional genomics came to the fore. The 'new genetics' is revolutionizing medical genetics. It raises the prospect of altering the genome to prevent disease rather than treat disease. Virtually all disease progresses as a combination of environment and genetics ('nature versus nurture'). Medical geneticists believe 'nature' plays the most significant role and act to intervene.
Many believe this prospect raises the spectre of biological determinism and a new eugenics.[109] For others, the new genetics ignores the significance of 'nurture', i.e. the socio-economic determinants of health and disease.[110] While these debates and issues are compelling, except where they impact directly on CGDN they lie beyond the scope of this study. The next section examines issues of space and scale in the molecular sciences and relates these to CGDN.

[109] Richard Lewontin is an authoritative source; see 1991 & 1999.

Space and Scale

In her comparison of high-energy physics and molecular biology, Karin Knorr-Cetina (1999) describes the latter as small-scale 'benchwork science' geared to 'treatment and intervention'. By definition molecular biology manipulates small objects in small labs. This modest scale was illustrated on one of my site visits to a network researcher in Toronto. The team was just setting up a new laboratory in a university annex. The lab, quite literally, came in two cardboard boxes. One contained a powerful PC, pre-loaded with genetic analysis software. The other contained slides, reagents, and biological materials. We laughed about franchising 'Lab-in-a-Box', or 'Lab-to-Go'. Of course, the physical infrastructure of the laboratory is provided by the university, but the space and benches are generic. Beyond unpacking the boxes, nothing special is required. Gieryn (1999a & b) has commented on the standardization of space in these labs and the architectural boundary work they embody. I noted similar effects in my site visits to different locations. The organization of space is predictable. For example, the labs at the Centre for Molecular Medicine and Therapeutics are laid out in such a way that the upper and lower floors are virtually identical. A common room/kitchen is located on each floor, at one end of the hallway. This area is the social focus, with a lot of coming and going.
Signs on cupboard doors advertise meetings, seminars, and social events. Groups of grad students and post-docs chat over coffee and microwaved food at the common table. Overheard conversations: 'I had to sacrifice my first mouse last night'; 'I just found a mouse up my sleeve; its tail was sticking out. I thought I'd lost it'. (The mouse core facility was located at CMMT at this time.) The labs are situated around the circumference of each floor, while the heavy and/or shared equipment is in the centre. Each lab appears to have two working benches in a bay and a computer desk. The building's architectural boundary work discloses no 'public face', not even a functioning reception area. All exterior doors are locked and electronically controlled. None is identified as the main entrance to the building. The most likely candidate carries a sign advising visitors, in no uncertain terms, that they are at 'the wrong place'. 'This is not the hospital', it says. [110: In Canada, note the work of Patricia Baird (e.g. 2000) and Clyde Hertzman (e.g. 1999). Both are members of CIAR.] Those who persist must use the intercom to ask someone to come and physically admit them. Indifference to (or fear of?) public intrusion was a spatial feature of all the network facilities I visited. The sites of knowledge production were not 'open'. These sites, molecular biology labs, house 'biological machines' for the genetic engineering of knowledge. Knorr-Cetina calls these machines 'prolific small-scale factories' for the mass-production of cell-lines, bacteria, viral vectors, and purified mice, like those the grad students were discussing. These were 'knock-out mice', used in the study of oncogenes (cancer), that the network supplies from its mouse core facility. ('We put genes into the mice and then send them off to the investigators' [Mouse Core Facility Director, FJ-16]). Mouse models ('animal helpers') are research tools.
Geneticists engineer them by 'knocking out' particular genes to try to cause cancer. The mice are bred to be exactly the same; injecting genetically modified cells into a blastocyst changes the organism. These mice are not 'natural'; they are constructed in the laboratory. Bruno Latour (1987) talks about the 'purification' of wild nature that takes place in a lab. In her comparison of the cultures of high-energy physics (HEP) and molecular biology, Knorr-Cetina (1999) notes that experimentation in HEP involves large and very expensive experimental devices and hundreds of scientists. These huge investments demand a long-term communitarian orientation to the management of spaces and technologies. Thus 'big science' like HEP is largely a collective enterprise. Publications list hundreds of authors in alphabetical order; discourse is open and free-flowing along 'confidence pathways' that link people together; a variety of spokespersons represent the work. Knorr-Cetina calls this a 'post-traditional communitarian structure'. In contrast, molecular biology's 'lab in a box' has no dominating technical apparatus that would focus a community. Instead, says Knorr-Cetina, individual scientists occupy separate spatial and epistemic lifeworlds. In contrast to HEP, molecular biology is highly individualistic: witness the tradition of naming labs after the leader (the Hayden lab; the Worton lab). As described in Chapter 5, leaders speak for and represent the lab as a whole. They are the focal point for public and scientific recognition. They appear in the media, give papers at conferences, and accept the awards, while those who actually do the work often go unrecognized. Glasner & Rothman (1999; 2000) show that the most prominent and authoritative 'experts' are those who are furthest from bench research. A dual system is at work.
Teams of post-doctoral fellows, graduate students, technicians, and junior faculty do the actual hands-on science under the direction of project leaders, while the lab director attracts the resources and plans the research program. One of the network's core set, Lap Chee Tsui, is chair of CGDN's Scientific Advisory Board and head of the international Human Genome Organisation. He says: 'I'm still in the lab in terms of interactions but not day to day, not hands on anymore. I have to rely on people telling me what is going on. Of course I miss it. But it would be very difficult to go back. Because now I design experiments so complicated I need people to help me out' (LCT-26). Given the dominance of laboratory leaders, and the fragmentation of molecular biology, CGDN's achievements in fashioning 'something like' a communitarian network culture, and 'something like' network science, are worthy of comment. Unable or unwilling to overcome embedded epistemic norms, they were nevertheless able to scale up until the network approximated 'big science'.

Scaling Up

Until quite recently, molecular biology in Canada was a competitive and fragmented world where solitary researchers, in small laboratories, conducted small-scale experiments. Interactions were limited, if nothing else because of the time and costs involved. As one of CGDN's investigators recalls, 'You might see your research colleagues at meetings or even make special trips to go to their lab and discuss research in common. And you might even send some grad students around or a technician to learn a procedure or something. But that was a relatively small number of interactions that each lab would have with another lab... There was [no] money there. You could [not] justify saying well, I would like to go over and see so-and-so do this, [and] take it out of your operating expenses' (Researcher, AD-17).
But as the research issues became more complex, it was increasingly clear that molecular biology could no longer operate effectively at a small scale and remain competitive internationally. By the time Michael Hayden reached out to colleagues across Canada, in 1988, it was already unlikely that a medical geneticist, working alone, would find both the gene and subsequently the cure for a genetic disease. A more likely scenario for that type of advance was the kind of 'heterogeneous engineering' (Law 1992) that combined medical geneticists and other molecular biologists with viral agents, tissues, genetic physicists, pharmaceutical chemists, gene sequencing technologies, 'purified' mice, and bioinformatics. Like high-energy physics, biology was becoming 'big science'. Lap Chee Tsui gave a clear description of the differences: 'The way we do science is definitely different now than it was say 15 years ago. Back then it was all very small experiments. And of course things were very primitive too. Medical research has definitely changed: its scope, the way it approaches things, the knowledge required to run or operate it. It is no longer just a solitary person dreaming up some experiment. It definitely requires quite a lot of help from other people. And if not from other people, from computers and the internet. Before, the literature and meetings were the only things we had. You got all your connections that way. Now the scope has just broadened so much. To undertake a biological question, you need engineers and statisticians to come in. A single person can't operate effectively in biology any more. I don't know how to put it. Compare biology to physics. In physics these days, although a few are still doing investigator-driven research in small laboratories, seeking answers to a few very specific questions, the bulk of the experiments are done by big groups, large-scale networks using central facilities. I think biology is moving towards that model.'
(LCT-21-2)

Through the NCE program, Canadian biologists were able to aspire to the benefits of big science. NCEs helped the Canadian life sciences earn respect and remain internationally competitive in medical genetics, protein engineering, bacterial diseases, neuroscience, respiratory diseases, and other biological areas. As one of CGDN's founders comments, 'the network has been very good for the field of medical genetics in Canada. It has strengthened the discipline. People regard Canada as being a good place to do genetics' (BR-52). Another network researcher compares his experience in CGDN with his experience in the UK: 'In England, I [belonged to] a large collection of scientists working on a similar topic. The group is so big it's like a force of nature. In that type of institute you are immersed in science in a way which we can't do in Canada. We don't have the resources. We can't allocate that much money to do focused research of that type. But that's what we're doing here in the network. We're doing focused research... The network allows us to bring together a critical mass of people who think about medical genetics problems, from different perspectives. And I think that's a real strength' (MW-39). But to begin with, beyond the fact that everyone was doing something to do with human genetics, this 'critical mass of people' was not focused. It took time to develop an understanding of what it meant to have a network research program and to weave together the projects of individual researchers in some way that made sense.

A Network Research Program?

When the founding researchers were recruited in 1988, they were asked to write up a 'wish list' of projects they would choose to undertake were funding available. Brian Robinson recalls that when Ron Worton visited his lab to invite him to join the network, 'he said, well, have you got projects that you are not doing now but you would like to propose? And I said, oh yes. There are always lots of those' (BR-1).
The desiderata of individual researchers were then creatively combined to constitute the network's research program in the funding proposal. To reinforce the point: the 'network research program' was an imaginary, rhetorically constructed from individual research programs for the purposes of obtaining funding. What was proposed was simply a continuation and expansion of ongoing individual studies, with some of the expansion being due to network funding. The overall scientific objective of this composite was to study the molecular basis of genetic disease and the genetic basis for susceptibility to common diseases. The major goal, at that point, was to clone the genes responsible for selected genetic disorders. This would evolve in later phases, but in 1988 geneticists were still preoccupied with 'gene-hunting'. Little changed once the network was operational. Early NCE assessments criticized the emphasis on the individual researcher: 'for the most part, [the science] seems to be too much PI-driven and not enough project-driven' (CGDN-EP 1992: 4). Over the years, however, the network became more astute at shading annual reports and statistical materials to convey the impression of integrated research projects and active lab-to-lab collaborations, despite the relative paucity of both. 'We always said we had research projects, because that's what we were supposed to have, but we didn't really. We had people working on different diseases... So it was pretty hard for us, at the end of the day, just to describe what our projects were' (Manager, PS-CS-74).
In the original proposal, individual projects were loosely grouped under seven themed headings: (1) identification of disease genes based on chromosome location, for example cystic fibrosis, Huntington disease, myotonic dystrophy, and Wilson disease; (2) mutation and functional analysis in Duchenne muscular dystrophy, retinoblastoma, and retinitis pigmentosa; (3) genetics and biochemistry of inborn errors of metabolism, for example in Tay-Sachs and Sandhoff disease; (4) analysis of genetic factors predisposing to common diseases in mice and humans, using recombinant congenic strains in mouse models of human disease, and amplified sequence polymorphisms; (5) the structure of human genetic variation, such as thalassemia in French Canadians, and Tay-Sachs in French Canadians and Ashkenazi Jews; (6) construction of chromosome-specific cDNA maps for specific tissues, including retinal cDNA isolation and mapping, and linkage analysis in diseases affecting the retina; (7) core technology facilities—the nine technologies offered in Phase I are listed in the next section. At the end of Phase I, this research program was assessed by an expert panel, based on self-reports submitted by the network and a two-day site visit by the panel to the network's head office at the end of September 1993.[111] Descriptions of 'themes', 'projects', and 'teams' were accepted at face value as part of an integrated program. The panel recommended terminating some projects, focusing others on more competitive fields of research, and regrouping physicians and scientists into smaller numbers of highly competitive teams (CGDN-EP 1993: 11). But overall, in their estimation, the network had achieved 'outstanding progress'. If there were an international standard in genetic research, they said, CGDN 'might well be on top of such an international comparison' (CGDN-EP 1993: 8). The panel submitted a favourable report to the NCE Directorate on October 25, 1993.
In part, that report read: 'The Site Visit Committee noted the outstanding role played by scientists in this network on the international level with respect to the cloning of disease genes and investigating their functions... The Committee was also impressed by the collegiality and networking established among the investigators of the network and noted the importance of the establishment of the core facilities as a catalyst in this process. The Site Visit Committee, therefore, enthusiastically recommends that the network continue' (CGDN-EP 1993: cover letter). On October 28, 1993, three days after the expert panel had submitted its favourable report, CGDN tendered its proposal for Phase II of the NCE program. While building on what went before, the research program was restructured to accommodate the research interests of new recruits. The research emphasis would now switch to common multigene disorders like Alzheimer's and breast cancer, instead of the rare single-gene disorders that had been the focus of Phase I. According to Ron Worton, this was a pragmatic decision made because 'if we don't get into the complex diseases, the reviewers are going to wonder why and they're not going to give us funding for Phase II' (RW-25). Even more pragmatic was the fact that these were profitable diseases. As another researcher comments, 'the big pharmaceutical companies are interested in these big polygenic diseases... the diabetes, the inflammatory bowel disease, the sort of things that tens of thousands of people suffer from. Because that is where they are going to make their money' (BR-43).
The eight themes for the Phase II research program were: (1) identification of disease-causing genes; (2) genes and phenotypes; (3) dynamic mutations (novel causes of human genetic disease); (4) genetic analysis of complex traits (mouse models of human disease); (5) genetic epidemiology and population genetics; (6) therapeutic interventions for genetic diseases (new theme); (7) applications of molecular genetics to health care (new theme); (8) core facilities. The two new themes emerged from the new emphasis on relevance in the program criteria, which weighted the translation of findings into practice equally with the excellence of fundamental research. Theme 6 was a move into gene-based therapeutics and clinical trials; theme 7 into commercial diagnostics. By the end of Phase II, the network had adopted in its reporting a language of 'key discoveries', 'breakthroughs', and 'commercial impacts'. It maintained metrics on all, claiming 170 discoveries overall in Phase II, of which 100 were related to common, multigene disorders. Twenty 'key discoveries' were highlighted, including the isolation of the first two familial Alzheimer disease genes by a researcher at the University of Toronto in 1996. The discoverer was new to the network that year, recruited when he was close to the breakthrough after working on the project for a number of years. Even though the discoverer was a new member who allocated only 10% of his time to the network, CGDN was able to claim credit because he was a member at the time the genes were cloned. Another of the new Phase II researchers identified breast and ovarian cancer mutations in the genes BRCA1 and BRCA2. These too were claimed as 'network discoveries'. [111: Note that site visits assess all aspects of a network's mandate. In addition to the scientific program, its commercialization activities, partnerships and linkages, management, and training activities are also reviewed.]
On the other hand, it was one of the original PIs—a 1988 'young researcher' who had spent almost his entire career with the network—who discovered a family of proteins that inhibit cell death. This breakthrough was quickly patented and spun out into a company (see Chapter 7). In the new theme concerned with therapeutic interventions (#6), researchers had not yet translated findings into applications; rather they had 'created tools for gene-based therapeutics, setting the stage for therapeutic advances in Phase 3' (CGDN-FP 1997a: 11). Progress had been made on biological problems in hematology that had been barriers to the use of gene therapy for blood diseases, and in the use of herpes simplex virus (HSV) as a vector for gene delivery. The second new theme, Genetics in Health Care (#7), demonstrated much more translational progress. For example, headway had been made towards the identification of a direct genetic marker for osteoporosis risk, based on estrogen receptor variants, and of predisposing genes for risk of coronary artery disease (atherosclerosis). In addition, one of the researchers developed a novel technology for rapid, accurate, and cost-effective DNA sequencing of mutations that was quickly adopted by the Human Genome Project. Also, key advances had occurred in the mutation analysis of the gene for retinoblastoma (Rb), a devastating childhood cancer of the eye. Because each Rb mutation was revealed as virtually unique, efficient methods for mutation analysis were required. This need was translated by the researcher into mutation diagnostic reagents and kits for cost-efficient diagnosis and cascade testing in families. The investigator comments that, without the network, we might never have developed the RB test the way we have. We would have failed, like every other lab in North America, to practically help patients, because the test would have been too expensive, and too difficult.
[Without the network] I don't know where I could have got funding to do that research (BG-44).

The network submitted its progress report on Phase II, together with an application for Phase III funding, on April 29, 1997. In February 1997, the NCE program had been made permanent, but individual networks—including CGDN—had been 'sunsetted'. At that point, the Phase III funding proposal had been in preparation for almost a year. In less than two months, it had to be reoriented towards sustainability beyond the exit of NCE funding. The research program was collapsed into the four elements with the most potential for commercial exploitation:[112] (1) identification of disease-causing genes; (2) pathogenesis and functional genomics; (3) genetic therapies; and (4) genetics and health care. A two-day site visit was arranged for late June. Subsequently, the conclusion of the panel was that funding should continue for the maximum allowable period: until March 31, 2005, subject to mid-term review in 2001. They cited the increasing number of multiple-authored papers across projects as an indicator that 'the group now shows much more evidence of working together as a team', and concluded that the network's evolution had been nothing short of 'remarkable, in that it has not only achieved its stated goals in fulfilling the mandate established for NCEs, but in almost all cases has surpassed them' (CGDN-EP 1997: 15, 17). Weaving together individual strands to give the appearance of coherence, such that reviewers were convinced the network had 'achieved and surpassed' the stated objectives, was a considerable rhetorical achievement. But whether the credit belonged to the network or the individual researchers is an open question. It remains unclear how much of the network's research program would have been achieved in its absence, or how to calculate the incremental value the organization added to existing individual research programs.
Recognizing these ambiguities, CGDN has recently revised its organizational purpose. Until early 2001, the mission was 'to research the diagnosis and treatment of genetic diseases and to help move the resulting discoveries into the health care system' (CGDN-AR 1999: 1). It now defines itself, more accurately, as an 'enabling organization' and a 'catalyst' for research (CGDN-FP 2001: CD1). [112: See Chapter 6 for detailed discussion of the network's commercial activities in all three phases.] But in one aspect of its research program—the provision of Core Facilities—little doubt existed about the network's contribution. Core Facilities are the advanced technologies and technological expertise that helped network investigators speed research progress and 'breakthroughs'. They were the 'enabling technologies' on which the network's research program rested. The Core Facilities are what legitimate the network's claims, and justify the notion of 'network science'.

Core Facilities: 'Where all the spokes converge'

'The core facilities are a kind of network legacy, I think. They are really the axle where a lot of the spokes converge' (FJ-62).

The network's core facilities simulated the technological support infrastructure of 'big science'. Easy access to powerful and expensive technologies allowed relatively small labs to undertake ambitious projects and compete internationally. In a priority race to identify genes, where every additional day matters, and where specialized technologies may not be available at a researcher's home university, they enabled resources to be dedicated to a particular project in order to move it ahead rapidly. The network would fund core facilities when it could balance demand and supply; that is to say, when demand for a novel and/or sophisticated 'leading edge' technology could be matched to a principal investigator, ready to act in the capacity of director, and willing to offer that technology to other members of the network.
As discussed earlier, in defining an NCE the network metaphor itself is less than helpful; the more accurate image is one of 'spokes' and 'hubs'. This was the case with core facilities. Network researchers across Canada (spokes) drew on core facilities and expertise (hubs). The hubs supplied the network's material and intellectual infrastructure. Rather than researcher to researcher, collaboration was between researchers and core facility directors—the network's 'master collaborators'. The nine Core Facilities and their directors in Phase I are listed in Figure 9 below. At McGill, Ken Morgan built databases for analysing population genetics and Charles Scriver maintained a longstanding cell bank holding about 2,100 cell strains. Alessandra Duncan at Queen's provided radioactive detection of short probes. At UBC, Rudi Aebersold supplied protein analysis and developed improved sequencing reagents and protocols; Frank Jirik and Jamey Marth started to create transgenic and knockout strains of mice, while Greg Lee focused on production of monoclonal antibodies.[113] At Sick Kids, the first facility for sequencing small fragments of DNA was set up in Lap Chee Tsui's lab and was heavily utilized from the start. Ron Worton provided somatic cell mapping, to map genes to specific chromosomes. Peter Lea supplied electron microscopy at the University of Toronto.

Figure 9: CGDN's Core Facilities, End of Year One, Phase I (1990-1)

Facility | Director(s) | Institution
Computing and genotyping | Morgan | McGill
Cell Bank | Scriver | McGill
In Situ Chromosome Hybridization | Duncan | Queen's
Protein Analysis | Aebersold | UBC
Transgenic and knockout mice | Jirik and Marth | UBC
Hybridoma | Lee | UBC
Electron Microscopy | Lea | UToronto
DNA Sequencing | Tsui | UT/HSC
Somatic Cell Mapping | Worton | UT/HSC
Source: CGDN-AR 1991; CGDN-EP 1993; CGDN-FP 1988

The status of the core facilities at the end of Phase II and the beginning of Phase III is shown in Figure 10 below.
By this point three DNA sequencing facilities were supported: a new large-scale sequencing site at UBC, a small-fragments core at UVic, plus the original in Toronto. By this time, Francois Oulette, based at CMMT, was offering training in computational biology (bioinformatics) so researchers could develop the skills needed to access the new genomic databases being produced by the Human Genome Project. [113: For the importance of monoclonal antibodies as a research tool, see Mackenzie, Keating and Cambrosio 1990 and Cambrosio and Keating 1998.] At McGill, Emil Skamene screened recombinant congenic strains of mice to identify genes controlling complex traits. Jeremy Squire, at the Ontario Cancer Institute, used FISH techniques to map genes and cDNA to chromosomal regions of human and mouse genomes. Mount Sinai's Joseph Culotti isolated mutated C. elegans gene homologues of human disease genes. Two new facilities for the provision of genetically modified mice, at McMaster and Mt Sinai, eased the load on Frank Jirik's existing facility at UBC. A new immunoprobes facility was established by John Wilkins, at the University of Manitoba, to develop reagents for cell and molecular biology experimentation. At Laval, Rejean Drouin analyzed the physical state of DNA in vivo for information on DNA-protein interactions. Three researchers at the University of Toronto's Banting & Best Institute established a facility to isolate and identify interacting proteins. At Sick Kids, Joanna Rommens identified transcribed sequences in genomic DNA in aid of gene discovery projects.
Figure 10: CGDN's Core Facilities, End of Phase II, Beginning of Phase III (1998)

Facility | Director(s) | Institution
Bioinformatics training | Oulette | UBC/CMMT
Complex Traits Analysis | Skamene | McGill/MGH
Fluorescent In Situ Hybridization | Squire | UT/OCI
DNA Sequencing | Hayden | UBC/CMMT
DNA Sequencing | Scherer | UT/HSC
DNA Sequencing | Koop | UVic
Genome alteration in C. elegans | Culotti | UT/Mt Sinai
Genome alteration in mice | Rudnicki | McMaster
Genome alteration in mice | Jirik | UBC/CMMT
Genome alteration in mice | Nagy & Rossant | UT/Mt Sinai
Core computing and genotyping | Hudson & Morgan | McGill/MGH
Immunoprobes | Wilkins | U.Manitoba
In vivo DNA analysis | Drouin | Laval/SFA
Protein-protein interactions | Friesen, Greenblatt, Pawson | UT/B&B
Transcribed sequence detection | Rommens | UT/HSC
Source: CGDN-AR 1999; CGDN-EP 1997; CGDN-FP 1997

By the time of the Phase III mid-term review (May 2001), the network had instituted a major shift in emphasis. As described earlier, the network's new mission was to be a catalyst for research advances in the wake of the sequencing of the human genome: 'We are now in the post-genomics age. Many genes involved with pathology have been cloned. The focus now shifts to the proteome and pathogenic mechanisms' (CGDN-FP 2001). The core facilities were rationalized into four clusters: (1) Core Technology Platforms: DNA sequence analysis; bioinformatics;[114] (2) Gene Technologies: in vivo DNA analysis; genotyping; transcribed sequence detection; (3) Protein Technologies: immunoprobes; proteomics; (4) Genome Alteration: C. elegans; mouse. As before, the highest demand was for DNA sequence analysis. A partial cost-recovery program shifted some of the burden for facilities maintenance from the network to the users, reflecting the Phase III focus on sustainability. [114: See Keating and Cambrosio 2000 on the significance of platform technologies.] All participants interviewed agreed that the core facilities, and the skills of their directors, represented one of the network's key legacies.

Researcher: The core facilities were a real catalyst for promoting interactions.
We did a lot of cross-country running about among different labs, but a lot of them centred around core facility usage (RG-29).

Researcher: I think a key feature of the network has been the [core] facilities, especially the sequencing facility. There is no way I could have got that sequencing done without the resources of the network (DC-12).

Researcher: For me, the high point of the network has been the core facilities. That's been my favorite component of the network. It's been fantastic (RG-89).

Core Facility Director: If you want something immediately, there is immediate cooperation. When we know that someone is getting close to a gene, and they need this kind of help, we put the secondary requests aside and emphasize this competitive project (LCT-11).

NCE Selection Committee: The committee attributed the success of this network to an exemplary collegial exchange of knowledge and its reliance on and extensive sharing of resources, such as the core facilities. Genetic research, especially human genetics, is extremely costly to perform. The committee considered that the sharing of core facilities alone represents a significant benefit from the investment (NCE-SC 1997: 11).

The added value was in setting up an infrastructure for undertaking the technical work that no single researcher could afford to set up independently, in their own labs, but needed to use sporadically. Gene mapping was an example. When the original core facility was set up, a backlog of demand quickly accumulated. As the director states, 'if somebody wanted something mapped, they just sent it to me, and it was a given that I was going to do it... If they hadn't been part of the network, they would have had to organize for just one little probe to be mapped with somebody else' (AD-21). As technologies like this became more and more central to research progress, and demand for them increased, universities and hospitals started to acquire their own capacity.
At that point, network resources were redirected to other technologies not yet generally available. Core facilities would also be terminated if they were not used enough. For example, as can be seen in the two Figures above, seven of the ten Phase I facilities had been replaced by the first year of Phase III (1998/9), when CGDN offered 11 core technologies in 15 locations. In between, other core facilities had been started and abandoned. Between 1991 and 2000, some $8M—approximately 20% of the network's total program funding—was dedicated to core facilities. The system appears to have been a cost-effective way of sharing resources. Some researchers argued that all the network's resources should be directed into such facilities, rather than into the relatively inconsequential amounts of funding allocated to each researcher. One researcher says, 'I always thought that the majority of [network] activity should go into the maintenance and development of core facilities, to encourage collaboration' (RK-10). Another researcher, the director of a core facility, allocated most of his own network funding towards its support: 'Most of the money I get through the network we've thrown on the core facility—two people and about 600 mice... and various equipment and instruments' (FJ-16). But core facilities were more than just sharing expensive equipment and biological materials; they also represented the pooling and sharing of expertise. They were an efficient way to leverage the productivity of researchers and ongoing research. Rather than duplicating facilities at different sites, resources were concentrated at one site and in one person. As a network researcher explains, 'it is the expertise of the people that is core, rather than the machines' (BG-39). In fact, it is the combination of people and machines that counts: 'The core resource is one thing and the experience of the director... and the people who work there, is another' (LCT-10).
The combining of machines and their directors in this way constituted what Latour (1987) calls a human/non-human hybrid and Pickering (1993: 373) describes as a human-machine interface. Such 'cyborgs' can find answers far more expeditiously than any 'regular' scientist or technician could. The issue is familiarity and the way constant practice refines skills. 'I don't want my technician to have to learn a whole technique to do 10 samples. That is a waste of everyone's time and money, quite apart from the machine' (BG-39). As technical and scientific experts, core facility directors operated the 'mangle of practice' (Pickering 1993) at the intersection of the network's material culture and moral economy. The material culture of a science is its 'tools of the trade': the machinery and methods of knowledge production, its instruments and experimental practices. The moral economy is the social rules and customs that regulate access to the material culture, establish authority over research agendas, and allocate credit. As Robert Kohler points out, 'tools and methods only become productive when they are part of a social system for socializing recruits, identifying doable and productive problems, mobilizing resources, and spreading the word of achievements' (1998: 243). The interesting question, according to Kohler, is how material culture and moral economy operate together to make research productive. Pickering (1993: 374-5) argues that the mechanism is the 'mangling together' of human agency and performative material devices in a dialectic of 'resistance and accommodation'. With this in mind, the following combination of factors in relation to core facilities might be considered salient. (1) The researcher's requirement to have results processed, say genes to be sequenced. (2) The budgetary resources required to mobilize machines and/or technical staff to do the processing.
(3) The power of these machines and technicians to produce inscriptions and standardizations from the data supplied. And (4) the technical and scientific expertise of the core facility director, who manipulates the technologies, even when they resist, to process the experiments. When we relate machines, money, molecules, and magi in this way we are able to perceive modest, 'local' actor-networks of human and non-human elements that become nodes in the larger actor-network that is CGDN. From their location at the nexus of science and technology, knowledge and expertise, core facilities represent as much a form of artisanal or craft 'know-how' as fundamental 'know-that'.115 Earlier, I referred to core facility directors as 'master collaborators'. This was because, by virtue of their position at a hub, they were aware of and participated in the majority of research projects, and could suggest potentially fruitful interactions between researchers who may have been unaware of each other's work. As one director describes, 'in the early days...I was among a small number of people who were actually connected to most other people in the network...Virtually everybody had been storing up a bunch of stuff that they wanted mapped...I interacted with a lot of people' (AD-8). But, more than that, directors could actually steer the direction of a project and the research agenda.

By virtue of running a core facility, I know a lot of things that are going on, like new projects and stuff. And I have had input ability to actually participate and to help steer some of the research. A researcher will come to me and say that they want to do something and I say, well, maybe it wouldn't be good to do it that way, it's better if you do it this way. You see what I mean? I can actually play a role in determining the projects. If you're in a core facility well then, everybody is coming to you and saying 'I want to do this, what do you think?'
And so you have a chance for having input there. (FJ-39-40)

In terms of the communal life of science, Kohler (1998:249) argues that three elements 'seem especially central to its moral economy'. These are access to the material culture; equity in assigning credit for achievements; and authority in setting research agendas and deciding what is actually worth doing. Under this definition, which encompasses rules of mutual obligation, I would argue that the central role of core facility directors makes them responsible for a substantial portion of the network's moral economy.

115 For interesting historical treatments of artisanal knowledge see Eamon (1985) and Jackson (2000).

Conclusion

This chapter has presented two contradictory impressions of CGDN. On one hand is a sense of the chimerical: an 'imagined' community with an 'imaginary' research program; now you see it, now you don't. On the other hand is a sense of real durability: established relationships founded on mutual trust and anchored in significant technologies. Is the black box empty or full? We can approach an answer to that question by looking at the shift between Phases. The addition of new actors into an existing network is always destabilizing. New actors come with their own networks, all with goals of their own. Stability requires the disconnection of alternative associations such that the network becomes the only point of passage. A process of mutual shaping must take place to incorporate the new into the existing actor-network. That integration was successful in the shift from Phase I to Phase II. But in Phase III, the enrolment of new allies (researchers and institutions) seems to have taken place without enough attention to interessement. The latter is where network-builders lock in potential allies by gaining their commitment to a set of goals and a course of action. Enrolment without interessement creates a fragile network that readily fragments.
The Phase III expansion was overwhelmingly strategic, thus translations were incomplete and the voice of the spokesperson no longer spoke for all. When there is 'interpretive flexibility' (Bijker 1994), the system's stability becomes precarious: black boxes open; points of passage are ignored; and ambivalence becomes pervasive. What then to make of strong associations that only seem to strengthen with time? Perhaps we can think of networks within networks; layers of associations like tree rings, showing different stages of expansion. The older layers are the most dense; compacted; difficult to dissociate. The newer layers are more porous; they can be peeled apart, and peeled away. As well, it is clear that materiality makes networks durable and that more-durable materials tend to produce relatively more-stable networks. Ideas and talk are ephemeral; to persist they need to be embodied in inanimate materials like machines, books and buildings (Law 1992). The core facilities thus 'anchor' the network in complex and costly technological tools and in the embodied knowledge of the scientists and technicians who operate them. As Law points out, however, durability itself is a relational effect.

CHAPTER 6: FROM SCIENCE TO COMMERCE

Truth and understanding are not such wares as to be monopolized and traded in by tickets and statutes and standards. We must not think to make a staple commodity of all the knowledge in the land, to mark and license it like our broadcloth and our woolpacks.
John Milton, Areopagitica (1644)

NCEs were funded with the idea that they would, among other benefits, generate products and technologies for profit. Although 'excellence of the research' was the dominant criterion in Phase I selection, and remained a background condition, commercialization and partnerships with the private sector were key to the core mandate.
With the sunset of NCE funding looming, CGDN focused on constructing a portfolio of licensing deals and spin-off companies that would provide a stream of future revenues. All alternative sources of income were investigated. In this chapter, I draw on the metaphor of the 'pipeline' that links the lab and the market. According to a recent description, the process of traversing 'the pipe' is 'arduous, passionate, rich in ritual, and steeped in conflict and controversy'.116 I begin by discussing the nature of the pipe and CGDN's position in relation to it. I then review changes in CGDN's connections with its industry partners.

116 A network of Canadian social scientists has recently begun a SSHRC-funded study (Financing the Pipe) that explores the moral basis of profit when disease is defined as a market opportunity (what I earlier called 'profitable diseases'). Although there are as yet no results or publications from the study, the funding application (supplied to me by the principals and available on the web page) contains powerful and evocative descriptive language.

Next, I map the two major strategic shifts in the network's evolution 'from science to commerce': first, in the mid-1990s, bringing some coherence to the commercial portfolio; second, in the late-1990s, with a focus on network sustainability. In relation to the latter, two new initiatives are discussed that ratchet networking to a higher level by bringing the life-science NCEs together to jointly finance, 'bundle', and market the technologies in a combined pipeline.

I. Understanding the Pipe

The pipeline metaphor originates in the linear understanding of innovation that underpinned the postwar social contract for science.
Even proponents of the 'open science' model now view the linear model as an unrealistic depiction of the public/private, basic/applied relationship, especially in 'forefront' sciences like information technology and molecular biology, which 'overflow' attempts to contain them. Yet the pipeline metaphor survived the collapse of the linear model; it remains ubiquitous in the 'pharmaceutical talk' of molecular biologists, as well as in the policy discourse. As Godin (2000-3:7, fn.31) argues, it is, in fact, 'the spontaneous philosophy of scientists' and has been used in public discourse since the end of the 19th century. Certainly, 'the pipe' accurately represents the realities of commercial development in the life sciences. In this sector, the pipeline is the 10-12 year evolutionary pathway between the discoverer's laboratory bench and the packaged, brand-name drug or testing kit on the pharmacist's shelves. Once a candidate gene or pathway is discovered in the lab, patents are secured.117 The patents are then licensed out to biotechnology companies (sometimes the researcher's own 'start-up'), which raise venture capital on the basis of the intellectual property and then 'add value' to the discovery. After scaling up and early trials are successful, smaller biotechnology companies often merge in order to 'bundle' their candidate technologies and advance them further. Eventually, a partnership will be entered into with a pharmaceutical company large enough to command sufficient resources to navigate the late-stage clinical trials and regulatory approval process.118

117 For an interesting discussion of this process, and the inherent tensions, see Mackenzie, et al (1990).

The length and complexity of the pipe made the NCE program's expectations of commercial prospects unrealistic. Government had a poor understanding of how long it takes to move 'raw science' out into the market. The federal attitude 'was short-termist and linear, very linear.
We will do some research, we will have a result and we will make a product and we will sell it. That type of approach' (Policy advisor, ARA-DR-38). As a senior CGDN scientist comments about anxieties on this score:

We were really very scared that it would be impossible to get renewed if they expected us to produce a line of products and a group of connections in five years...It's taken us into the third term to begin to produce what they thought we were supposed to do from the outset. Which was to create the links with the private sector, to produce the spin-off companies, to generate patents and products. And I think that's just about the right timeframe. Ten to twelve years is the realistic timeframe. (CS-7-11)

There is no shortage of good ideas; good ideas are plentiful. But it takes a great deal of time, money,119 and effort to steer a discovery from the front end of the pipe, through myriad competing ideas, to commercial success at the far end of the pipe. 'Ideas are cheap' (CGDN-PS-RW) but most do not survive. 'For every hundred academics that spot something they think is commercially interesting,' says the network's CEO, 'only one will actually get it together to carry it through to the marketplace. The other 99 ideas just languish. They never happen' (PS-RW1-3). This attrition rate was one reason behind concern at government's expectations. Even the pharmaceutical industry was disturbed at federal misunderstandings of the way 'the pipe' worked.

118 This description draws on the 'financing the pipe' materials referred to earlier.

As one of CGDN's industry partners stated in the network's first annual report:

It is important to realize...what the time frame is likely to be for the emergence of product candidates, especially in the pharmaceutical area. It is important that this [network] research be government funded, and that renewal of funding not depend on the commercialisation of products in academic research centres.
This is the best way to assure that academic research stays at the cutting edge in each field, and generates the unexpected discoveries that can be pursued and developed in strong industrial research centres. (Michael Gresser, Merck Frosst Director of Chemistry, in CGDN-AR 1991, emphasis added)

This is an ardent defence of the division of labour in the linear 'open science' model: government funds science; science publishes results; industry takes up and develops results. Under the 'overflowing' model in the Strategic Science regime, the state hopes universities and research networks will become 'profit centres' by patenting and commercializing their own discoveries. This interferes with the traditional division of labour and increases transaction costs for industry (Rappert & Webster 1997). Because of the risks and costs involved in commercialization, network and university technology managers hedge their exposure by maintaining portfolios of discoveries 'in the pipe', each at a different stage of translation and financing (which are intimately related). In CGDN, recent activities have focused on the far end of the pipe, as the strategic plan moves from translation to speculation; that is, from early-stage scaling up of research results, to speculation in finance and investment vehicles and venture capital funds. In the next part I examine the role CGDN's industry partners play in this process of moving network discoveries along the pipe. After that, I examine the network's trajectory along the pipe 'from science to commerce'.

119 Conventionally estimated, with little supporting evidence, at around $500M to take a new drug through clinical trials and the regulatory approval process.

Industry Partnerships

Industry partnerships are not as extensive as might be thought from a cursory perusal of program or network documents. Many alliances are listed but most involve minimal commitment and funding.
Willingness to sign on to the formal network agreement is an indicator of who is, and is not, a 'real' industry partner. In CGDN's case, only two private-sector partners signed the first formal network agreement: MDS Health Group Limited and Merck Frosst Canada Inc., and only Merck Frosst made a funding commitment: $70,000 a year for three years to provide research fellowships. A third 'industrial partner' signatory, BR Centre Limited, was actually UBC's Biotechnology Research Centre, where three of the network's researchers worked. (This body was also listed as an institutional member.) Calling the Centre an industrial partner was a fiction that helped gloss over the fact that little attention had been paid to the NCE mandate for industrial linkages. Two of the scientific leaders simply imported their longstanding relationships with Merck Frosst and MDS, respectively, into the network. As one of the founders comments:

We didn't know what industry partnerships meant. We needed partners and the government kept saying the partners must contribute in a direct fashion. But, obviously there had to be a desire on the part of industry to participate and some means for them to feel that this is worth their time and effort. They weren't going to join us to make charitable contributions. There was also the concept of in-kind [contributions], which was, in those days, very primitive. We didn't know what in-kind really meant. So this was an extremely difficult thing for us to cope with...nobody knew what the rules should be and nobody knew what the government was looking for. (RG-10)

The Phase I funding proposal also listed Pharmacia (Canada) Inc., Squibb Canada Inc., and an entity called EuGENE Scientific Inc. as potential partners (CGDN-FP: 1988-S4). The majority of discussion in the section of the proposal on 'Potential for New Products and Processes for Commercial Exploitation' relates to EuGENE. The company was to be the network's research and development corporation, a public/private joint venture between the Hospital for Sick Children at the University of Toronto and MDS Health Ventures Inc. Half-a-dozen pages were given over to
The company was to be the network's research and development corporation, a public/private joint venture between the Hospital for Sick Children at the University of Toronto and MDS Health Ventures Inc. Half-a-dozen pages were given over to 181 EuGENE's prospects, products, capitalization, and profile in the network. According to the proposal, Eugene Scientific is a new company which is determining its goals as a direct consequence of the proposed establishment of this network...It is primarily because of the proposed involvement of network investigators that MDS laboratories has agreed in principle to make an investment in the order of $2 to $2.5M to this company (CGDN-FP 1988: 3.7H) Scientists who are part of the network will participate as scientific advisors to EuGENE for development of their gene probes for diagnostic tests [and] diagnostic kits...The scientists in the Network see the establishment of EuGENE as vital to the proper exploitation of their gene probes (CGDN-FP 1988: 3.6G.1) However, EuGENE proved to be a chimera. Between proposal and legal agreement, the company changed its name, then more-or-less disappeared, apparendy despite investments from NRC-IRAP and the MDS investment fund. By the following year no further trace of the company could be found. The Industry Liaison Office at the Hospital for Sick Children believes it ceased operations in 1991 or 1992 (personal communication). Another phantom company haunted the proposal for Phase II funding, submitted in 1993 (CGDN-FP 1993). The industrial linkages section of that proposal was structured around a spin-off called "NGI" (Network Genetics Inc) that had been formed to commercialize network research. The language of justifation on diagnostics and therapeutics was similar to that used for EuGENE. In an effort to create Canadian receptor capacity for CGDN's intellectual property, the network has taken the bold step of launching a new venture [NCE Genetics Inc. 
or NGI], the first Canadian company focused on genetic diagnostics and therapeutics. This is part of a long-term strategy by the network to capture value in Canada and enhance Canadian commercial contributions in this area. (CGDN-FP 1993:1.2)

The ultimate competitive edge for this company is based on its special relationship with network researchers [which] represents an invaluable source of commercial and market intelligence which will assist in ensuring the development of new IP...The new network venture will begin its commercial activity within the next months and start the process of technology transfer. (CGDN-FP 1993:1.8-9)

According to the Phase II proposal, NGI had hired a scientific director. Its financial and business plans would be ready by the end of the year; and the proposal confidently predicted the company would be operational in early 1994. But, as with EuGENE, after the renewal award, further references to NGI ceased. Subsequently, according to network documents, CGDN researchers developed working contacts with some 21 authentic companies or corporate divisions during Phase II, of which three were network spin-offs (see Figure 11 below). As can be seen, of the $10.2 M generated from these contracts and contacts, more than half ($5.6 M) came from two 'big pharmas': Merck Frosst ($2.4 M) and Schering Canada ($3 M). Most of the Merck contribution relates to their support for the new Centre for Molecular Medicine and Therapeutics at UBC, while Schering's investment is for the presenilin genes project (Alzheimer disease).
Figure 11: Industry Relationships, Phase II

Company | Cash Inv (C$K) | Principal Investigators | Project
Amgen | 130 | Dick | Stem cell technology
Apotex | 132 | Gallie | Retinoblastoma protein
ApoptoGen (spin-off) | 1,194 | Korneluk, MacKenzie | Apoptosis/cancer
BioChem Pharma/GeneChem | 328 | Skamene, Gros, Rouleau | BCG therapy, bladder cancer, congenic mice
Connaught | 130 | Morgan, Skamene | TB/BCG genotyping
Glaxo-Wellcome | 25 | Hayden | Huntington's disease treatments
IBEX Technologies Inc | 70 | Scriver | PK treatments
ID Biomedical | 15 | Jirik | Genetic testing technology
INEX Pharmaceuticals | 287 | Cullis, Worton, Tsui, Dick | Liposome carrier therapy
Leo Laboratories | 50 | Rousseau | Psoriasis
MDS-SCIEX | 370 | Dovichi | DNA sequencing technology
Merck Frosst | 793 | Triggs-Raine, Jirik | Yeast 2 hybrid/tyrosine phosphatases
Merck Frosst/CMMT | 2,448 | Hayden, Jirik, Hieter | CMMT
Merck, Sharpe, Dohme | 151 | MacLennan | Phospholamban interactions
Millenium | 48 | Gros | Cloning LPS locus
Myriad Genetics | 60 | Rommens | Breast cancer
NeuroVir (spin-off) | 405 | Tufaro | Neurological/HSV gene therapy
Rhone Poulenc-Rorer | 446 | Hayden | Lipoprotein lipase therapy
Schering Canada | 3,000 | Hyslop | Presenilin genes/Alzheimer
Visible Genetics | 30 | Gallie, MacLennan | Retinoblastoma/malignant hyperthermia
Xenon BioResearch (spin-off) | 80 | Hayden | Gene identification in unique populations

Source: CGDN-FP 1997a: 20

The nature of these relationships, and the degree to which they were attributable to network facilitation, is not clear from the documentation. The network classifies them as 'industry collaborations' for reporting purposes but also refers to them as 'sponsored research' (CGDN-FP 1997a: 20). Apart from the Merck relationship, the majority of these linkages appear to be arrangements whereby network researchers are funded to further develop patented technologies licensed by the company. Where the relationship is with a spin-off company, the amounts reported parallel the funds raised in the investment community to advance the patented technologies.
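The reported totals can be checked against the table. A minimal sketch in Python: the cash values are transcribed from Figure 11, and the grouping of the two largest single commitments is mine, not the network's.

```python
# Cash investments (C$K) transcribed from Figure 11.
investments = {
    "Amgen": 130, "Apotex": 132, "ApoptoGen": 1194,
    "BioChem Pharma/GeneChem": 328, "Connaught": 130,
    "Glaxo-Wellcome": 25, "IBEX Technologies": 70,
    "ID Biomedical": 15, "INEX Pharmaceuticals": 287,
    "Leo Laboratories": 50, "MDS-SCIEX": 370,
    "Merck Frosst": 793, "Merck Frosst/CMMT": 2448,
    "Merck, Sharpe, Dohme": 151, "Millenium": 48,
    "Myriad Genetics": 60, "NeuroVir": 405,
    "Rhone Poulenc-Rorer": 446, "Schering Canada": 3000,
    "Visible Genetics": 30, "Xenon BioResearch": 80,
}

total_k = sum(investments.values())
print(f"Total: C${total_k / 1000:.1f}M")  # ≈ C$10.2M, as the text reports

# The two largest single commitments (Schering, and the Merck support
# for the CMMT) together account for over half of the total.
top_two_k = investments["Schering Canada"] + investments["Merck Frosst/CMMT"]
print(f"Top-two share: {top_two_k / total_k:.0%}")
```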
Despite what we might expect from program and network discourse, little evidence exists of bench-level collaborations between academy and industry researchers working together to advance technologies along the pipe. A CGDN private-sector board member confirms that, to the best of his knowledge, 'there are no network/private sector collaborations in the same sense as there are network/public sector collaborations based on the relationships amongst the scientists' (B-MP-14). The main factor inhibiting bench collaborations is that industry labs are largely concerned with product development while researchers are in the business of knowledge creation. Industry rarely involves itself in collaborative basic or even translational research. With the possible exception of Hayden's work on Huntington Disease with Merck-Frosst, network researchers cite no examples where they have worked directly with researchers in industry. According to a policy analyst, this is the case for the NCE program in general:

Side by side bench collaborations are few and far between. I can't think of an example off hand. I think that it's rare. Collaboration is [defined] much more in terms of planning and monitoring the research and dealing with disclosures and IP issues and training and so forth. I can't off hand think of an example where 2 people actually sat side by side at the bench and did things. (ARA-DR-76)

As mentioned earlier, large pharmaceutical companies tend to wait until small biotechnology start-ups and spin-offs have completed early-stage proof-of-concept and development work, then they buy the company. Their unwillingness to collaborate at more basic levels of the pipeline causes a degree of resentment among researchers:

If you are looking for a disease gene, forget it. Nobody is going to support you in terms of a company, a commercial business. You want to isolate genes for diabetes? They say 'good luck'. But if you already have a gene, then, yeah, they are very interested.
But the support doesn't come until you have a gene. You have to have a result. (LCT-7)

The big ones, the Pfizer's and the Glaxo's of this world, they haven't been anywhere near the network. Despite lots and lots of overtures to try and get them to show some interest...They are not interested in big collaborative projects with basic scientists at all. They want to do clinical trials and they want to do basic research in their own facilities where nobody can see what they are doing and they get all the patents. They don't want to be involved with basic researchers in universities... (BR 35-41)

They will come and pick stuff up. If they see you doing something interesting that they like, they will come and try and pick it off you. But they don't want to work with you on it...If you look at the stuff that they are funding, a lot of it is clinical trials. So what they are basically doing is they are getting the government to help them do their clinical trials. I mean they are laughing all the way to the bank. I am very cynical about this. I have been at it a long time and I have watched this stuff and I have tried to talk to them about doing some basic stuff and they don't want to do it. (BR 35-41)

Even Michael Hayden, an indefatigable booster of industry and a close collaborator with Merck-Frosst, admits that support from big pharma is weak. 'I think industry has a legitimate right to serve their shareholders,' he says, 'but at the same time they should think about how they can invest...more in fundamental research. That's my only real criticism...that not enough has gone into basic research, too much has gone into marketing and that doe