SOCIAL SCIENCE COMPUTER REVIEW Henman / COMPUTER MODELING AND GREENHOUSE GAS POLICY

Computer Modeling and the Politics of Greenhouse Gas Policy in Australia

PAUL HENMAN

Macquarie University, Australia

Computer modeling technologies have increasingly become part of the conduct of science, public policy making, and the practice of politics. Their contribution can be understood in terms of their operation as intellectual, scientific, forecasting, governmental, truth-production, and political technologies. This article focuses on the ways in which computer modeling has reconfigured the world of politics, as illustrated in a case study of the use of economic modeling in the politics of Australia's greenhouse gas emissions policy. From 1997, the Australian government pursued a policy of increased greenhouse gas emissions, supported by computer modeling that forecast significant negative impacts on the Australian economy if emissions were reduced. These modeling results were publicly contested. The case study illustrates how computer models can be used in politics to construct a partisan point of view. It is argued that despite differences in the political use of computer models, their growing complexity constrains the conduct of democratic politics.

Keywords: computer models, politics, economic modeling, greenhouse gas emission policy, public policy

Shortly after taking office, U.S. President George W. Bush announced the withdrawal of the United States from the Kyoto agreement on limiting greenhouse gas emissions. The reason given was that the treaty was not in U.S. national interests. In particular, limiting greenhouse gas emissions would be bad for the U.S. economy. President Bush's views reflected those of the Australian government, which from 1997 emphasized the considerable negative economic impact on Australia's economy of reducing greenhouse gas emissions, as forecast by computer-based economic modeling. Computer modeling may have been a quiet background actor in these two events. However, the increasing use of computer modeling and simulation in public policy making merits analysis of the ways its use might be changing the practice of democratic politics.

This article examines some of the political developments made possible by computer modeling, as illustrated in a case study of the use of computer economic modeling in the formation of greenhouse gas emission policy in Australia. The article begins by outlining a conceptual approach to computer models that highlights how the political use of such models rests on their technological ambiguity. The second section presents the Australian case study, closely analyzing the way computer models were shaped, employed, and contested. The article concludes by analyzing how the development of complex computer models constrains democratic politics.

AUTHOR'S NOTE: This article was researched and written while the author was employed as a Macquarie University Research Fellow. An earlier version was presented in September 2000 at the 4S/EASST conference in Vienna. Particular thanks to Nikó Antalffy for her very helpful and detailed comments on an early version of the article. Acknowledgements to Ross Gittins and The Sydney Morning Herald for permission to reproduce text, and to Greenwood Publishing Group for permission to reproduce a table.

Social Science Computer Review, Vol. 20 No. 2, Summer 2002 161-173
© 2002 Sage Publications

THE CONTRIBUTION OF COMPUTER MODELING

Each artifact introduced into the universe of people and things alters the behaviour of both.
—Henry Petroski (1993, p. 235)

The development of increasingly powerful computer technology at ever cheaper prices has enabled both the creation of computer models of growing complexity and their more widespread use. Scientific work, from biochemical engineering to particle physics, is indebted to computer modeling and simulation,1 which assist in the development and testing of scientific theories and their empirical implications. Executives in private enterprise and economists in central governments use economic modeling to identify economic trends, projected income and expenditure, and the responses they wish to take. City and transport planners use computer models to plan transport corridors and govern air pollution. Private actuaries and welfare state policy makers use computer models to project the implications of demographic change for public and private expenditure. Military uses of computer modeling include war strategy and the development and testing of nuclear weapons. Engineers and architects also use computer modeling to design and test structures. Petroski's (1993) observation, quoted above, suggests that the utilization of computer modeling has altered, though not always imperceptibly, the dynamics of the domains in which it is used.

To appreciate the contribution computer modeling and simulation can make, it is important that we understand the kinds of technologies that computer models are.2 In other words, to what practices does computer modeling contribute, and how?

First, computer models are intellectual technologies. They are tools with which to think (cf. Smith, 1998). Although constructing models is not new, it is with the advent of computing technology that such models can be made practical, testable, and ever more complex (Morgan & Morrison, 1999). The process of constructing complex computer models requires model makers to clarify the relationships they understand to exist between various variables (King & Kraemer, 1993). As models of the world, computer models help their users to identify a predicted outcome (i.e., the value of endogenous variables) given a set of input variables. By comparing a model's predicted results with other known outcomes, the accuracy of the model as a representation of the world can be tested and the model subsequently refined. Through refinement or otherwise, computer models also assist in the development of increasingly complex, representative models.

Second, computer models can be regarded as scientific technologies. They are tools with which to conduct scientific work. Central to the production of science is the development and testing of theories through experimental and empirical observation. Abstract models can often form the bridge between theory and empirical data. Computer modeling of these abstract models helps to assess a theory by providing the means to test whether the theory's and the computer model's predicted outcomes match observed outcomes. Subsequent refinement of the computer model can also assist the development of scientific theory.

Third, computer models with a temporal dimension constitute forecasting technologies. They are tools with which to make forecasts, as for example in meteorological and economic modeling. Such computer models take input variables and predict their value at a later point in abstract time. Applied to data from the present, such models provide forecasts of the future. In doing so, they constitute the future in our present.
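This forecasting logic, in which present values go in, the model steps forward through abstract time, and paths under different policy settings come out, can be sketched as a deliberately toy simulation. All names, growth rates, and coefficients below are hypothetical illustrations, not taken from any model discussed in this article.

```python
# Toy forecasting model (illustrative only): a two-variable economy in
# which GDP growth is slightly dampened by emissions abatement effort.
def forecast(gdp, emissions, abatement, years):
    """Step hypothetical GDP and emissions indices forward year by year."""
    path = []
    for _ in range(years):
        gdp = gdp * (1.03 - 0.1 * abatement)        # invented growth relation
        emissions = emissions * (1.01 - abatement)  # invented emissions relation
        path.append((round(gdp, 1), round(emissions, 1)))
    return path

# Compare business-as-usual with a modest abatement scenario.
bau = forecast(gdp=100.0, emissions=100.0, abatement=0.0, years=5)
cut = forecast(gdp=100.0, emissions=100.0, abatement=0.02, years=5)
print("business as usual:", bau[-1], "with abatement:", cut[-1])
```

Comparing the two end states is exactly the kind of output a forecasting model supplies to its users, and adjusting the coefficients against observed data is the testing-and-refinement loop described above.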


Fourth, forecasting computer models also constitute governmental technologies. They are tools with which to govern, by which I mean to shape or manage the conduct of one's self or others.3 Forecasting models enable those wanting to achieve specific future ends (such as maximized profits or productivity) to manipulate present settings to (hopefully) achieve the desired outcomes. For example, Stephen Levy (1989) observed how the early use of electronic spreadsheet models enabled managers to assess the business implications of various economic settings. The consequent manipulation of present conditions to achieve desirable aims is a form of government. Forecasting models also enable undesirable outcomes (such as global warming or economic recession) to be identified, in response to which governmental action can be taken (hopefully) to avert such outcomes.

Fifth, computer models can also act as truth-production technologies. They are tools with which to construct "truth." The production of truth using computer models depends on the level of agreement about the abstract models that the computer embodies. The model's predicted outcomes may come to be seen as the true (future) state of affairs. Other factors reinforce this truth production. The complexity of models means that without a contrasting model with which to compare results, one is obliged to accept the output of the model due to its superior calculating capacities compared to human intuition. The public image of computers as highly accurate and "rational" in operation reinforces the idea that the "computer must be right." In constructing and constituting truth, computer models thereby act as "independent" experts. To act as experts, the assumptions and values embodied by the computer model are treated as a black box (e.g., Latour, 1987; cf. Breslau & Yonay, 1999).

Finally, computer models may act as political technologies. They are tools with which to conduct politics. Bruno Latour (1988), in rewriting Machiavelli's The Prince for contemporary times, convincingly argued that politics involves the construction of human and nonhuman allies to minimize the power of human and nonhuman opponents. Computer models are readily suited to being co-opted for politics in this manner. This is because computer models can be constructed to embody specific values and assumptions, and the partisan nature of those assumptions can then be "neutralized" through the complexity and the perceived truth-production capability of such models. In short, computer models are constituted as aligned "independent" experts, that is, experts that give independent opinion or advice that aligns with a partisan view. I am not, however, arguing that computer models are an inherently political technology (cf. Winner, 1986, pp. 19-39).

The above characterization of computer modeling can be visualized as a conceptual onion, each layer building on the previous one. This characterization of the contribution of computer modeling is helpful in analyzing the dynamics of the settings in which the technology is used. For example, scientists use computer modeling both as an intellectual and a scientific technology, but it may in turn be invested with truth claims, thereby becoming a truth-production technology. Similarly, when computer models are used as political technologies, they also simultaneously act as truth-production and forecasting technologies. In short, the way computer modeling is used in a given setting may be ambiguous, and it is this very ambiguity that provides fertile ground for computer modeling as a political technology.

COMPUTER MODELING AND THE POLITICS OF AUSTRALIAN GREENHOUSE GAS EMISSIONS POLICY

In 1992, at the UN Framework Convention on Climate Change and the later Earth Summit in Rio de Janeiro, Brazil, Australia was a vocal advocate for securing international agreement on limiting global greenhouse gas emissions. But by 1997, at the Climate Change Conference in Kyoto where national targets were established for developed countries, despite an almost unanimous agreement to reduce national emissions, Australia argued for an 18% increase in its national greenhouse gas emissions from 1990 levels. The explanation for this dramatic shift in Australian policy is a complex political story (Christoff, 1998; Hamilton, 2001), but the focus here is on the role of computer modeling in that story.

Although climate change science makes extensive use of computer modeling to estimate the meteorological effects of increased greenhouse gases (e.g., Bouma, Pearman, & Manning, 1996; Edwards, 1999), economic models of climate change policy are relatively new. The economic models used to estimate the impact on Australia's economy of reducing its greenhouse gas emissions were MEGABARE and GIGABARE, constructed by a "professionally independent" public sector agency, the Australian Bureau of Agricultural and Resource Economics (ABARE), which was then located in the Department of Primary Industry and Energy.4 It was ABARE that provided the principal source of advice to the Australian government on resource economics.

Background

To appreciate the socioeconomic milieu, one must note the importance of the mining and energy-intensive industries to Australia's economy. Mining constitutes 26% of Australia's export industry. About three quarters of Australia's total energy resource production—black coal, crude oil, natural gas, uranium—is exported. Most domestic energy is produced from coal (Australian Bureau of Statistics [ABS], n.d.). Australia also has the second-highest rate of greenhouse gas emissions per capita. Despite this, Australia is regarded as a leader in solar energy research.

Since 1992, when Australia enthusiastically embraced international agreements for reducing greenhouse gas emissions, the Australian political environment has changed. In March 1996, a far-right Coalition government replaced the center-left Labour government, involving a dramatic shift in public policy. The Coalition parties had a poor record on environmental issues, seeing the environment as a left-wing concern, and viewed economic and environmental issues as being in a conflictual, rather than compatible, relationship. Accordingly, the rural constituency and the resource industry were given policy priority at the expense of indigenous Australians and the environment. This agenda was perhaps reinforced by a perceived decline in public interest in environmental policies. During this period, the resource industry also enhanced its political advocacy activities, having felt outmaneuvered by the environmental movement in the early 1990s.

Within this new government, the key ministers involved in greenhouse gas emissions policy reinforced these political dynamics. The then–minister for resources, Senator Warwick Parer, a close friend of the prime minister, had considerable family business investments in the coal industry. Indeed, despite breaching the prime minister's code of conduct on share ownership and perceived conflict of interest, Senator Parer retained the prime minister's "full confidence." This minister had indirect responsibility for ABARE and responsibility for policy on the energy, mining, and other resource industries. The minister for the environment, Senator Robert Hill, who had responsibility for greenhouse gas emissions policy, did not have the political power and influence of Senator Parer. Senator Hill was of a different party faction than the prime minister.

Alongside these political developments, important changes were occurring in public sector administration, which set the scene for its political and industry capture. Although some developments had occurred under the previous government, the new government, which held a deep distrust of the public sector, hastened its politicization. For example, it moved to reduce union membership and to put senior executives on contract. As a result, "serving the minister" became the public sector's primary aim, with any sense that the public service was to serve the public sidelined. This was evident in the increasingly political and unbalanced nature of government statistics and advertisements and the reduced openness of senior public officials in parliamentary accountability committees. The public sector had undergone political capture.

Alongside this, changes were instituted that enabled industry capture of the government bureaucracy. In 1993, the Labour government required ABARE to obtain 40% of its revenues from the private sector. Thus, ABARE decided to meet half of the cost of developing MEGABARE—a general equilibrium economic model to assess the economic impact of reductions in Australia's greenhouse gas emissions—through membership fees for the model's steering committee. From 1996, with the development of the later computer model, GIGABARE, an annual fee of Aus.$50,000 was charged. Despite the "professional independence" of ABARE, this set the stage for industry capture of the agency and the creation of a model inherently biased toward the perspective of greenhouse-gas-producing industries. Indeed, the steering committee membership included representatives from the coal, steel, aluminum, and oil industries.

During 1997, leading up to the December Kyoto conference, the Australian government positioned itself for a policy of increased greenhouse gas emissions relative to 1990 levels. It commissioned a group of skeptical scientists to write a critical report on the international consensus of climate change science. There was also a complete failure to act (or perhaps active discouragement of action) to enhance energy conservation or renewable energies, despite Australia having leading-edge renewable energy research and a great capacity for energy savings. Furthermore, the government began to emphasize the economic cost of greenhouse gas reduction.
It was the MEGABARE computer modeling results from the "independent" ABARE that the Australian government used to highlight the apparently massive economic cost of reducing greenhouse gas emissions. It claimed that Australia's gross domestic product (GDP) would be cut by 2% by 2020, causing each Australian to lose Aus.$9,000. Moreover, tens of thousands of jobs would be lost. The government sought to construct, through the use of "independent" experts, an image of a disastrous economic future for Australia should greenhouse gas emissions be reduced. Not only had the analysis that the government said informed its policy come from a "professionally independent" body, namely ABARE, but ABARE had a computer model—that is, a forecasting expert—that showed beyond doubt Australia's future reality.

The idea that liberal forms of government use experts in order to govern the behavior of their citizens has been widely studied (e.g., Johnson, 1993; Rose, 1993). This strategy succeeds because people regard experts as speaking the truth of the matter due to their specialist knowledge. In enrolling the experts and their truth claims, the actions that politicians seek to promote are given greater gravity than if politicians made the policy statements themselves.

Contesting the Expert

Unfortunately, the apparent predictions of economic doom were misleading. As Hamilton and Quiggan (1997) observed,

    Welfare changes in the MEGABARE model are measured by changes in annual per person real gross national expenditure (GNE). The model results in 1995 indicate that real GNE falls below the "business-as-usual" path by amounts ranging from –0.27% in the year 2000 to –0.49% in 2020. . . . It is most important to recognise that this does not mean that the growth rate of GNE is lower by these amounts, but that the absolute levels of real GNE are lower by these amounts. This is a very small change by any standard. Clearly a projected fall in GNE by half a per cent over a 25 year period will be swamped by many other changes in the economy. (p. 19)
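The distinction drawn in this passage, a lower level of GNE versus a lower growth rate, can be checked with back-of-envelope arithmetic. Only the –0.49% figure comes from the quoted results; the 3% annual baseline growth rate below is an assumption chosen purely for illustration.

```python
# Level effect versus growth-rate effect over the 25 years from 1995 to 2020,
# using the -0.49% level change quoted above and an assumed 3% baseline growth.
growth = 1.03
years = 25

bau = 100.0 * growth ** years        # business-as-usual GNE index in 2020
level_cut = bau * (1 - 0.0049)       # absolute level 0.49% lower, as modeled

# What a 0.49-percentage-point cut in the *growth rate* would mean instead:
rate_cut = 100.0 * (growth - 0.0049) ** years

print(round(bau, 1))        # 209.4
print(round(level_cut, 1))  # 208.4  (barely distinguishable from baseline)
print(round(rate_cut, 1))   # 185.8  (a far larger loss)
```

The modeled result was a level shift of the first kind, which is why Hamilton and Quiggan describe it as "a very small change by any standard."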

In short, the forecasting expert’s assessment of the situation was not anywhere near as bad as the government had made it out to be. Thus, a careful dose of “spin” or “presentational tricks” was ambiguously mixed with the computer model’s results. Furthermore, these presentational tricks were issued in press releases by the so-called professionally independent ABARE. ABARE’s misleading public statements of its calculations of the economic effect of reducing greenhouse gas emissions can only be understood in the context of both a government policy opposing reductions in greenhouse gas emissions and a climate whereby public sector agencies had been captured by the government. ABARE was undoubtedly aware that it was expected to give the best possible scientific argument for government policy. Instead of government ministers’ presenting ABARE’s modeling results in the most favorable manner, the “independent” experts at ABARE did so, thereby further increasing the credibility of the government’s policy approach. Although ABARE’s statement was not an inaccurate representation of the results, it was misleading. In this regard, the government and ABARE made use of another characteristic of expert knowledge—technical complexity. In general, technical complexity means that nonexperts are not in a position to assess the veracity of truth claims and so are likely to accept experts’ claims. In this case, the textual ambiguity of the presentation of ABARE’s result gave, as probably intended, an impression of greater economic cost of greenhouse gas reductions than actually calculated. Social studies of scientific conflicts have observed that the credibility of an expert’s truth claims relies on the extent to which such claims are contestable. In trying to convince their peers, scientists will enroll results from experiments to verify their claims. 
The truth is what is agreed on and has been “black-boxed,” that is, not under investigation or questioned (e.g., Collins & Pinch, 1993; Latour, 1987). The Australian government sought to black-box, through the use of experts and technical complexity, the forecasts of economic doom resulting from reducing greenhouse gas emissions. However, Hamilton and Quiggan’s (1997) critical analysis indicated that the truth claims were contested. But crucially, did their counterview have weight?

Opening the Black Box

ABARE's model was also critically examined. The independent expertise, both of ABARE and of its model, was contested. This was most notable in a public letter signed by 131 professional economists, including 16 economics professors, published in June 1997 in leading national newspapers. The letter, in part, stated that "the economic modeling studies on which the Government is relying to assess the impacts of reducing Australia's greenhouse gas emissions overestimate the costs and underestimate the benefits of reducing emissions" (as quoted in Hamilton & Quiggan, 1997, p. 26). In particular, Hamilton and Quiggan (1997) claimed ABARE's models were limited in numerous ways:

• They did not incorporate the costs of not reducing greenhouse gases, including increased flooding and intensity of tropical cyclones, higher incidence of tropical diseases, increased weed problems, and a greater number of days of extreme fire danger.
• No consideration was given to technological change as a response to greenhouse gas policies, such as improvements in renewable energy production or energy-using processes.
• They assumed that there were no "no regret" energy efficiencies to be made, that is, efficiencies that can be made without overall economic cost. The models assumed, despite significant evidence to the contrary and a poor Australian record of energy conservation, the existence of perfect economic efficiency, meaning such savings had already been made.
• They neglected land clearing, which accounts for about 40% of Australia's total greenhouse gas emissions.
• They did not take account of the likely reduction in coal export value as other countries reduced their reliance on greenhouse-gas-producing energy.
• The ABARE modelers applied carbon taxes in a questionable manner.

This contesting of the model was an explicit challenge to the work by ABARE and the Australian government to create the model as an independent expert by treating it as a black box. To reduce the chances of such criticism, ABARE and the government hid from public scrutiny the assumptions on which the model and its results were based:

    In all the use of the model's results in press releases and pamphlets, there's no acknowledgment of the key assumptions that produce those results. Nor is there any presentation of alternative results that would be produced by different assumptions. (Though, no doubt, the bureaucrats have covered their backsides by including an impenetrable discussion of the model's properties in their full report.) (Gittins, 1997, p. 96)

The failure of ABARE to publish results based on alternative assumptions reinforces the impression of the political capture of ABARE and the ascendant attitude within the public sector of "serving the minister." As results from computer models are often sensitive to initial assumptions, it is normal professional practice to present the results of sensitivity-testing the model. The only reason for not doing so is to construct the single result as absolute truth, thereby minimizing contestation and maximizing the gravity of the government's policy approach. It was only through an analysis of MEGABARE's technical guide that the 131 economists and Hamilton and Quiggan (1997) were able to contest MEGABARE's assumptions and the results it produced.

In this regard, it was the "framing" of the model that was contested. Michel Callon (1988) used the notion of frame to refer to the way economic calculations frame which relations are taken into account and which are not (i.e., externalities) (cf. Goffman, 1971).5 The frame defined by MEGABARE, what was and what was not included, came to be contested. Only certain things were included, either deliberately or unintentionally, and, as Hamilton and Quiggan pointed out, many things were excluded. There is, of course, no correct place to draw the frame; it depends on what one seeks to measure or calculate. For example, given the dominance of the resource industry on the model's steering committee, it is quite appropriate that the committee should only have been interested in the economic effect, on them, of greenhouse gas emission policies. This aside, the claims that ABARE had calculated the economic cost to the Australian nation of a specific greenhouse gas emission policy ought to have rested on a more comprehensive frame.

Due to the complexity and resource-intensive nature of economic modeling, ABARE had a monopoly on resource economic computer modeling.
This monopoly was important in enabling ABARE to construct itself and its models as experts. But as has been seen, claims to expertise are open to contestation. Indeed, the restricted framing of the models was seen as undermining claims to expertise and even as an abuse of professional responsibility as experts in economic modeling. Certainly, this was the view of Gittins (1997), who stated that ABARE "abuse[d] this monopoly [on resource economic computer models], using their models as black boxes with which to dazzle the punters with science" (p. 96).
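The sensitivity-testing practice mentioned above, rerunning a model under alternative assumptions and publishing the spread of results rather than a single figure, can be sketched in a few lines. The cost function, parameter names, and ranges below are all invented for illustration; they merely stand in for the kinds of contested assumptions at issue (carbon price, induced innovation, "no regret" efficiencies) and do not reproduce MEGABARE.

```python
from itertools import product

# Hypothetical cost model (illustrative only): GDP cost of an emissions
# cut as a function of three contested assumptions.
def abatement_cost(carbon_price, tech_improvement, no_regret_share):
    gross = 0.02 * carbon_price / 25.0           # cost rises with the carbon price
    offset = tech_improvement + no_regret_share  # cheap savings offset the cost
    return max(gross - offset, 0.0)

# Sensitivity analysis: sweep each assumption over a plausible range...
carbon_prices = [15, 25, 40]   # $/tonne scenarios
tech_rates = [0.0, 0.005]      # without / with induced innovation
no_regret = [0.0, 0.010]       # without / with "no regret" efficiencies

results = [abatement_cost(p, t, n)
           for p, t, n in product(carbon_prices, tech_rates, no_regret)]

# ...and report the spread, not a single headline number.
print(f"GDP cost: {min(results):.3f}% to {max(results):.3f}%")
```

Publishing such a range makes visible how strongly the headline figure depends on its assumptions, which is exactly what critics argued ABARE's single-number press releases concealed.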

Contesting the Process

During the controversy over the model's assumptions, it also became public knowledge that the Australian Conservation Foundation (ACF), a leading national environmental lobby group, had in May 1997 unsuccessfully sought membership on ABARE's GIGABARE steering committee through a request to waive the membership fee. ACF argued that it had a legitimate interest in the model but few funds and that its participation was in the public interest. In June 1997, ACF approached the Commonwealth Ombudsman to investigate perceived bias in the development of ABARE's economic models. The Commonwealth Ombudsman's (1998) report, issued after the Kyoto conference in February 1998, noted,

    The composition of the MEGABARE and GIGABARE committees did not adequately conform to the characteristics of a government steering committee dealing with an important—and controversial—public policy matter. In particular [the Ombudsman's investigation concluded] that the development of the steering committee did not ensure a balance of views and technical skills. (p. 4)

The complaint by ACF and the subsequent Ombudsman's report constituted yet another domain of contestation of the claims of independent expertise by ABARE and its computer modeling. It also provided an explanation for the framing assumptions of the model. The committee consisted only of representatives from resource and energy-producing companies and Commonwealth government departments. The latter had become aligned with "serving the minister"—that is, government policy—whereas the government was sympathetic to the former. It is hardly surprising, then, that MEGABARE's assumptions and the modeling results were consistent with the worldview of those intimately involved.

Despite the force of the arguments against the economic forecasts (and the institutions and technologies that mutually constituted them) that the Australian government used to justify its greenhouse gas emission policy, there was not sufficient political opposition to overturn the government's policy. At Kyoto, Australia argued for an 18% increase in greenhouse gas emissions. Although there were clear signs that the international community did not accept the technical basis for Australia's position, Australia was able to negotiate an 8% increase in greenhouse gas emissions from 1990 levels. This was principally because the international community was more preoccupied with ensuring that the United States would commit itself to reductions in greenhouse gas emissions (Christoff, 1998); the United States promptly reneged. In the end, ABARE's computer models were tangential both to the international decision-making process at Kyoto and to the formation of the Australian government's emissions policy.

COMPUTER MODELS AND THE PRACTICE OF POLITICS

Science and technology are politics pursued by other means.
—Bruno Latour (1988, pp. 38-39)

Computer modeling technologies have contributed to and intensified the development of increasingly complex quantitative models of the world. This has meant that governments are increasingly required to make decisions based on knowledge generated through the application of computer modeling. But how have these models transformed the practice of politics?

During the early 1980s, two major studies of the role of computer modeling were conducted. Dutton and Kraemer (1985; cf. Danziger, Dutton, Kling, & Kraemer, 1982; King & Kraemer, 1993; Kraemer, Dickhoven, Tierney, & King, 1987) studied the use of fiscal impact computer models in American local government, whereas Kraemer and his colleagues (1987) examined modeling in American national politics. The first study identified four different modes in the political use of computer models: technocratic, rational, partisan, and consensual (see Table 1).

TABLE 1
Political Perspectives on Models

                               Dominant Interests Served
Dominant Locus of Control      Partisan/Self-Interest        Nonpartisan/Pluralistic Interests
Technical experts              Technocratic perspective      Rational perspective
Political elites               Partisan perspective          Consensual perspective

SOURCE: Dutton and Kraemer (1985, p. 5). Copyright © 1985 by Ablex Publishing Corporation, Norwood, New Jersey. Reproduced with permission of Greenwood Publishing Group, Inc., Westport, Connecticut.

The rational perspective involves the use of modeling for "scientific" management and analysis to identify the most appropriate policy and is much in evidence in normative accounts of policy making. The technocratic mode involves models being used by computer modelers and the information elite to promote their position and is more related to internal organizational politics. Partisan approaches involve the use of computer modeling results to legitimize, not formulate, policy, as occurred in the Australian case study. When used consensually, computer modeling is a tool of interactive decision making, negotiating, and bargaining, which occur through involving contestants in the building and use of the model. It was found that this last approach was dominant in (successful) computer modeling in American local government, whereas in American national politics, both partisan and rationalist approaches were prominent.

As all four forms of political conduct involving computer modeling predate computers, one might conclude that computer modeling has not transformed the political domain. Not so.
Dutton and Kraemer (1985) have contended that, through their capacity as intellectual technologies, computer models help to clarify the issues at stake and "change the way in which policy debates are carried out, and thereby fundamentally change the nature of policy making and policy outcomes in the long run" (p. 206; cf. King & Kraemer, 1993). Similarly, Edwards (2000) has observed that computer modeling has helped build modern concepts of the world and, thus, the way we think and act. Thus, as Petroski (1993) would remind us, the advent of computer models has altered the landscape in which political action takes place, including the possible forms of political action. We might then conclude from these earlier American findings that the way computer modeling is used depends entirely on the pattern of motivations among human political actors, that there are, as always, several forms of political action, and that my Australian case study is simply an illustration of one such form. However, this characterization conceals a significant transformation in political conduct as computer modeling technologies have developed.

A lot has changed since these American studies were undertaken in the early and mid-1980s, when personal computers were yet to become commonplace. Since then, the sophistication of computer models has grown, and models have become increasingly complex, owing both to increased computing power and to ongoing organizational investment in such models. Alongside this, the continually falling price of computers has enabled computer modeling to become more widespread. Whereas the former reinforces the importance of technical expertise, the latter provides a democratizing force. It is the former that, I believe, has become ever more salient.

In this account, the complexity of computer models exaggerates the importance and monopoly of expertise, thereby undermining democratic public debate. When policy decision making is based on knowledge that is widespread, the general public has a greater capacity to assess the validity of the truth claims of others and the value of the actions they espouse. In this case, a high level of contestability is possible, with deliberation and debate potentially encompassing the whole public. As knowledge has become increasingly technical, requiring specialized training, the number of people who can competently engage in public policy debate has correspondingly declined. When people do not feel competent or capable of contesting the specialist truth claims of others, that is, of the experts, they rely on other experts to contest expert knowledge. The town hall public debate shrinks to the confines of the academy. The development of highly complex computer models tends to intensify this shift toward greater specialization and technical complexity, a shift reinforced by the cost of the resources required to construct such models. Dutton and Kraemer's (1985) finding of the widespread use of the negotiation form of computer modeling seems to contradict this assertion. However, they also identified that, for models to be conducive to consensus politics, they must be relatively simple and comprehensible (pp. 16-17), characteristics that are now rare.
Thus, when a domain becomes largely defined by the results of complex computer modeling, the scope for public debate narrows. The most common form of contestation in this situation involves the conflict of alternative models. Instead of a battle of the experts, contestation becomes a battle between computer models as experts. Modelers will often seek to obtain copies of their opponents' models. Results from one model can be countered with results from another, thereby moving the debate to a consideration of the details and assumptions of the models. This form of political conduct has been widely observed (Evans, 1999b; King & Kraemer, 1993; Kraemer et al., 1987). Social studies of science have noted that, when technical information conflicts, a resolution is sought through an analysis of the credibility of the contestants (e.g., Collins & Pinch, 1993), in this case the modelers and the models. For instance, governments may argue that Treasury models are more objective and sophisticated than those of the private consultants used by oppositions. This tactic may work by creating uncertainty among the public about what to believe; alternatively, as is often the case, members of the public may disengage and leave matters to the experts. In either case, democracy is not served, because the public is not in a position to engage in the debate.

Contrary to the present account, Evans (1999a, 1999b) has recently argued that competing models can enhance democracy. In a study of macroeconomic modeling in the United Kingdom, he argued that different modeling results highlight the contested nature of macroeconomic theory and forecasting and, in doing so, draw attention to the fact that macroeconomic policy decisions are not simply technical decisions made by experts but also involve political decisions about the kind of society we want.
Edwards (1999) made a similar point regarding climate modeling by arguing that the question of how much risk a society is prepared to take is a political choice, not a technical one. Although this is a possible outcome, the evidence suggests it is unlikely. Indeed, in Evans's studies, the economists minimized public awareness of their deep disagreements by agreeing to speak with one voice to the public and the government regarding policy recommendations. Rather than highlighting the political nature of macroeconomic policy, they reinforced its technical nature.

The lack of public contestability regarding the use of computer models in public policy making is further reinforced when only one computer model of a particular domain exists. Given that computer model construction is quite time- and resource-intensive, this is a common scenario, and it greatly increases the opportunity for monopolizing truth construction. Contestability can be maintained by making public both the details of the computer model's construction and a range of results based on different modeling assumptions. However, because such details are highly complex and take time to examine critically, potential opponents are often disadvantaged in the arena of politics, where short timeframes matter. Accordingly, partisan politicians seek to minimize the amount of information with which one might critically assess modeling results, as was the case in the Australian example.

Advocates of the rationalist perspective of modeling might argue that it does not matter that only one computer model exists for a specific domain. In particular, it might be thought that, if the topic is both highly technical and well defined, there would be little disagreement over the model's construction, the variable settings, and the results. However, computer models do not represent the world in the way that optical microscopes and telescopes give reasonably accurate representations of it. The Australian case study illustrated the importance of framing. Even among experts, what should and should not be included in a model is often contested, as are the formulae governing how the various items interact and the values of the initial variables (cf. Evans, 1999a, 1999b).
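How consequential such contested settings can be is easy to see in a deliberately simple sketch. Everything below is invented for illustration: the functions, growth rates, and assumed "policy penalty" are hypothetical and are not drawn from ABARE's model or any other model discussed in this article. A 20-year GDP projection produces a headline "cost of abatement" figure that roughly doubles when one assumed parameter moves by just 0.2 percentage points:

```python
# Toy illustration of model sensitivity (all numbers hypothetical).

def projected_gdp(base_gdp: float, growth_rate: float, years: int = 20) -> float:
    """Compound GDP forward at a constant annual growth rate."""
    return base_gdp * (1.0 + growth_rate) ** years

def abatement_cost(base_gdp: float, baseline_growth: float,
                   growth_penalty: float, years: int = 20) -> float:
    """Headline 'cost' of a policy assumed to shave growth_penalty off
    annual growth: GDP forgone in the final projection year."""
    business_as_usual = projected_gdp(base_gdp, baseline_growth, years)
    with_policy = projected_gdp(base_gdp, baseline_growth - growth_penalty, years)
    return business_as_usual - with_policy

base = 1000.0  # year-0 GDP in arbitrary index units

# Two modelers who agree on everything except the assumed growth penalty
# (0.2 vs. 0.4 percentage points a year) report very different headline costs:
cost_low = abatement_cost(base, 0.030, 0.002)
cost_high = abatement_cost(base, 0.030, 0.004)
print(round(cost_low, 1), round(cost_high, 1))  # prints: 68.9 135.2
```

The point of the sketch is not that either figure is wrong, but that a parameter choice far too small and technical to attract public scrutiny largely determines the politically decisive number.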
Furthermore, it is commonplace for computer models to be sensitive to what might be regarded as minimal changes to initial variables or formulae. This characteristic is a real boon for those wanting to use a computer model as an advocate, dressed up as an independent expert, for a particular view of the world: a model is simply constructed to reflect the views espoused.

The foregoing focus on contestation highlights how computer models have increasingly become partisan political actors. However, whereas the use of modeling is very often partisan, studies by Kraemer et al. (1987) and others since (e.g., Edwards, 1999; Evans, 1999b) have shown that "rational" uses of computer models as intellectual technologies also often occur. In this case, models provide a means for thinking through policy options by identifying the likely impacts of a policy and its cost to the government. When used in this manner, there may or may not be contestation among the various experts involved. Regardless, this use is generally unseen by the public, which accordingly has little say in the model's influence on policy.

To summarize, the advent of complex computer models has reinforced modes of political action that undermine democratic politics by emphasizing the importance of expertise. Early uses of computer models for consensus politics have given way both to rational uses, in which technical experts formulate policy, and to partisan politics, in which modeling results are used to legitimize policy. In both cases, public debate is minimized. In partisan politics, technology has increased the capacity of those with resources to construct computer models whose results embody and reflect the dreams and schemes of their political authors, a process of "automating bias" (Dutton & Kraemer, 1985; Kraemer et al., 1987). Computer models can be used as a political technology through the enrolling of "independent" experts as allies.
Once the model has been constructed and the required results obtained, the political operator seeks to black-box the construction work, constituting the model as an independent expert that can then be used to support the operator's case. When confronted with the truth claims resulting from such computer modeling, the layperson has little capacity to interrogate them, whereas their complexity may mean that other experts are unable to respond in a timely fashion. Thus, for computer models to operate as political technologies, they must also simultaneously be scientific and truth-production technologies.6

To be sure, computer models are only one small element in the conduct of politics. They may help shape the form it takes, but they do not determine it. They may add weight to the claims and commands of political players, but they cannot resolve political debate. Indeed, in the Australian case study, the results of the computer models, which increasingly failed to bolster the government's case, became marginal. The Australian government was intent on a policy course to be pursued at all costs, and that course was achieved. As always, relative political power remains the key element in determining political disputes.

NOTES

1. Throughout the article, I refer only to computer modeling. Computer simulations are similarly important, particularly as visual tools, which Joerges (1999, p. 430) has referred to as LogIcons, meaning "pictures to think with" (cf. Fyfe & Law, 1988).
2. There are numerous technical publications explaining how computer models are constructed. For an overview of (economic) models that provides a sociological analysis of the process, see Evans (1999b).
3. This notion of government as the "conduct of conduct" has been popularized following the later works of Michel Foucault (e.g., Foucault, 1991; see also Barry, Osborne, & Rose, 1996; Dean, 1999).
4. The key documents consulted include Australian Bureau of Agricultural and Resource Economics (1997), Hamilton and Quiggin (1997), Kinrade (1998), Commonwealth Ombudsman (1998), and various newspapers of the day. See also Hamilton (2001) and Pezzey and Lambie (2001).
5. The idea of boundary objects and boundary devices portrays a similar idea (Star & Griesemer, 1989; cf. Woolgar & Pawluch, 1985).
6. Although this article examines the contribution of computer modeling to the conduct of politics, and others have examined its contribution to the conduct of science (e.g., Sismondo, 1999), the way computer modeling can be used as a political technology in the conduct of science remains to be analyzed.

REFERENCES

Australian Bureau of Agricultural and Resource Economics. (1997). The economic impact of international climate change policy (Research Rep. No. 97.4). Canberra, Australia: Author.
Australian Bureau of Statistics. (n.d.). Australia now—A statistical profile of Australia. Available from http://www.abs.gov.au
Barry, A., Osborne, T., & Rose, N. (Eds.). (1996). Foucault and political reason: Liberalism, neo-liberalism and rationalities of government. London: University College London Press.
Bouma, S. J., Pearman, G. I., & Manning, M. R. (Eds.). (1996). Greenhouse: Coping with climate change. Melbourne, Australia: CSIRO.
Breslau, D., & Yonay, Y. (1999). Beyond metaphor: Mathematical models in economics as empirical research. Science in Context, 12(2), 317-332.
Callon, M. (1998). An essay on framing and overflowing: Economic externalities revisited by sociology. In M. Callon (Ed.), The laws of the markets (pp. 244-269). Oxford, UK: Blackwell.
Christoff, P. (1998). From global citizen to renegade state. Arena Journal, 10, 113-127.
Collins, H., & Pinch, T. (1993). The Golem: What everyone should know about science. Cambridge, UK: Cambridge University Press.
Commonwealth Ombudsman. (1998). Report of the investigation into ABARE's external funding of climate change economic modelling. Canberra, Australia: Author.
Danziger, J. N., Dutton, W. H., Kling, R., & Kraemer, K. L. (1982). Computers and politics: High technology in American local governments. New York: Columbia University Press.
Dean, M. (1999). Governmentality: Power and rule in modern society. London: Sage.
Dutton, W. H., & Kraemer, K. L. (1985). Modeling as negotiating: The political dynamics of computer models in the policy process. Norwood, NJ: Ablex.
Edwards, P. N. (1999). Global climate science, uncertainty and politics: Data-laden models, model-filtered data. Science as Culture, 8(4), 437-472.
Edwards, P. N. (2000). The world in a machine: Origins and impacts of early computerized global systems models. In T. P. Hughes & A. C. Hughes (Eds.), Systems, experts and computers (pp. 221-254). Cambridge, MA: MIT Press.
Evans, R. (1999a). Economic models and policy advice: Theory choice or moral choice? Science in Context, 12(2), 351-376.
Evans, R. (1999b). Macroeconomic forecasting: A sociological appraisal. London: Routledge.
Foucault, M. (1991). Governmentality. In G. Burchell, C. Gordon, & P. Miller (Eds.), The Foucault effect: Studies in governmentality (pp. 87-104). London: Harvester Wheatsheaf.
Fyfe, G., & Law, J. (Eds.). (1988). Picturing power: Visual depictions and social relations. London: Routledge.
Gittins, R. (1997, June 28). Model agencies caught with their pants down. The Sydney Morning Herald, p. 96.
Goffman, E. (1971). Frame analysis: An essay on the organization of experience. Chicago: Northeastern University Press.
Hamilton, C. (2001). Running from the storm: The development of climate change policy in Australia. Sydney, Australia: University of New South Wales Press.
Hamilton, C., & Quiggin, J. (1997). Economic analysis of greenhouse policy: A layperson's guide to the perils of economic modelling (Discussion Paper No. 15). Canberra: Australia Institute.
Joerges, B. (1999). Do politics have artefacts? Social Studies of Science, 29(3), 411-431.
Johnson, T. (1993). Expertise and the state. In M. Gane & T. Johnson (Eds.), Foucault's new domains (pp. 139-152). London: Routledge.
King, J. L., & Kraemer, K. L. (1993). Models, facts, and the policy process: The political ecology of estimated truth. In M. F. Goodchild, B. O. Parks, & L. T. Steyaert (Eds.), Environmental modelling with GIS (pp. 353-360). New York: Oxford University Press.
Kinrade, P. (1998, February). Australia's downhill diplomacy at Kyoto Summit. Habitat Australia, pp. 8-11.
Kraemer, K. L., Dickhoven, S., Tierney, S. F., & King, J. L. (1987). Datawars: The politics of modeling in federal policymaking. New York: Columbia University Press.
Latour, B. (1987). Science in action: How to follow scientists and engineers through society. Cambridge, MA: Harvard University Press.
Latour, B. (1988). The Prince for machines as well as for machinations. In B. Elliott (Ed.), Technology and social process (pp. 20-43). Edinburgh, Scotland: Edinburgh University Press.
Levy, S. (1989). A spreadsheet way of knowledge. In T. Forester (Ed.), Computers in the human context: Information technology, productivity, and people (pp. 318-326). Oxford, UK: Basil Blackwell.
Morgan, M., & Morrison, M. (Eds.). (1999). Models as mediators: Perspectives on natural and social science. Cambridge, UK: Cambridge University Press.
Petroski, H. (1993). The evolution of useful things: How everyday artefacts—from forks and pins to paper clips and zippers—came to be as they are. London: Pavilion.
Pezzey, J., & Lambie, R. (2001). CGE models for evaluating domestic greenhouse policies in Australia: A comparative analysis (Report to the Productivity Commission). Canberra, Australia: AusInfo.
Rose, N. (1993). Government, authority and expertise in advanced liberalism. Economy and Society, 22(3), 283-299.
Sismondo, S. (1999). Models, simulations and their objects. Science in Context, 12(2), 247-260.
Smith, R. (1998). Emergent policy-making with macroeconomic models. Economic Modelling, 15(3), 429-442.
Star, S. L., & Griesemer, J. R. (1989). Institutional ecology, "translations" and boundary objects: Amateurs and professionals in Berkeley's Museum of Vertebrate Zoology, 1907-39. Social Studies of Science, 19, 387-420.
Winner, L. (1986). The whale and the reactor: A search for limits in an age of high technology. Chicago: University of Chicago Press.
Woolgar, S., & Pawluch, D. (1985). Ontological gerrymandering: The anatomy of social problems explanations. Social Problems, 32(3), 214-227.

Paul Henman is currently a postdoctoral research fellow in the Department of Sociology of Macquarie University in Sydney, Australia. His work focuses on the nexus of technology, public policy and administration, and the conduct of government. He is currently preparing a book manuscript on the use of information technologies in Australian and British social security and banking systems since 1880, and he is undertaking a study of e-government in Australia. He may be contacted at the Department of Sociology, Macquarie University, NSW 2109, Australia; telephone: +61 2 9850 8239; fax: +61 2 9850 9355; e-mail: [email protected].
