GOVERNANCE AND PERFORMANCE. A COMPARATIVE PERSPECTIVE ON CONTEMPORARY EUROPE

GOVERNANCE AND PERFORMANCE. A COMPARATIVE PERSPECTIVE ON CONTEMPORARY EUROPE

Notes supporting a presentation to a conference organized by the Norwegian Ministry of Finance and Ministry of Local Government and Modernisation, Oslo, September 24, 2014

Christopher Pollitt
Emeritus Professor, Institute of Public Governance, Katholieke Universiteit Leuven
Past Editor of Public Administration and International Review of Administrative Sciences
Email: [email protected]

Edition 12-9-2014

INTRODUCTION

- Auckland, 1997 (an incident explained in the presentation)
- Basic debates on performance management go back to the mid-1980s (e.g. Pollitt, 1985)
- There is a vast international literature, both practitioner and academic. It includes detailed case studies (e.g. Barber, 2007; Kelman and Friedman, 2009; Propper et al, 2008), analyses of specific policies in particular countries (e.g. Ingraham et al, 2003; Laegreid et al, 2008; Perry et al, 2009), broad international comparisons (e.g. Bouckaert and Halligan, 2008; Pollitt, 2006a) and overall theories of how it works (or doesn't) (e.g. Boyne et al, 2006; Moynihan, 2008a; Talbot, 2010)
- For all its problems, there is a wide consensus that performance management (PM) remains a core element in modern public management (e.g. Ingraham, 2003; Kettl and Kelman, 2007; Talbot, 2010; Van Dooren and Van de Walle, 2008)
- There is still no generic 'best practice' model, but there are reasons for that, as we shall see

See end of this note for a general listing of key performance references.


PRELIMINARIES: 'GOVERNANCE'

- Definitional chaos: 'like trying to nail a pudding on a wall' (Pollitt and Hupe, 2011)
- Inter alia: digital-era governance, network government, network governance, network meta-governance, holistic governance, public value governance and New Public Governance
- Will here take a simple meaning: government as one institution among many involved in governing

PRELIMINARIES: PERFORMANCE MANAGEMENT

Broad lessons from the last 30 years:

- Dreams of comprehensive, standardized national systems have not worked out well (NZ, UK, USA). International comparisons are even more difficult and, despite their popularity in some quarters, have serious technical limitations (Pollitt, 2010; 2011). Even trying to establish a uniform, centralized system for a large municipality presents considerable challenges (Jääskeläinen and Laihonen, 2014).
- Performance-related pay (PRP) in particular has not worked well (e.g. Perry et al, 2009). There is some evidence that difficulty in removing poor managers has a bigger effect on staff morale than monetary rewards for good managers (Brewer and Walker, 2013).
- Ownership is often a problem: if imposed from above, PMSs will often evoke resistance. But if the system reflects the priorities of all stakeholders it may become unfocused and unwieldy.
- Public/citizen interest in performance measurement has, in most but not all cases, been very limited (Martin et al, 2013; Pollitt, 2006b).
- Performance management systems (PMSs) take time (iterations) to settle.
- They are also endogenously dynamic (Kuhlmann and Jäkel, 2013; Martin et al, 2013; Pollitt et al, 2010). One example would be the cycle from few indicators to many; another would be the shift from private/professional to public; a third would be the gradual attachment of incentives to indicators that were originally 'just for discussion and information'.
- Distortions grow easily (Bevan and Hood, 2006; Pollitt, 2013a; Smith, 1995; Van Thiel and Leeuw, 2002).
- PM systems tend to be constitutive (Dahler-Larsen, 2014).
- Different sectors/activities often need (and often actually have) different types of measurement system (Hammerschmidt et al, 2013; Martin et al, 2013; Wilson's 1989 classification).

THREE USEFUL FRAMEWORKS FOR ANALYSIS

Below I briefly summarize three frameworks - each suggested by different authors - for thinking about the design of performance management systems (PMSs). The first concerns basic design decisions that have to be taken whenever any PMS is set up. The second offers four types of governance regime - all of which have been seen at various places and times in the UK over the past two decades. The third is a famous typology of organizations, based on the degree to which their outputs and outcomes can be identified and monitored.

FIRST FRAMEWORK: UNAVOIDABLE CHOICES IN PMS DESIGN (Pollitt, 2013a)

[Diagram: a chain running from ACTIVITY (activities, programmes, policies) through MEASUREMENT (of selected aspects of the activity/activities), DATA (numeric and other), INFORMATION (presentation, including aggregation, composites, weightings) and APPLICATION OF CRITERIA (averages, standards, targets etc.) to USE (decisions or assessments of, or attitudes towards, the activity/activities).]

The following is a minimum list of unavoidable decisions that the designers of performance management systems must, implicitly or explicitly, take:

- Which aspects of which activities are to be measured and which are not? This matters, inter alia, because it is widely believed that what is measured gets more attention than what is not ('what's measured gets managed'). There is nearly always a choice here - and the more complex the service, the more difficult the choice becomes - some aspects have to be left out; not everything can be measured, or the system becomes impossibly heavy.
- Who is going to be responsible for measurement? The service providers themselves? Their superiors? Independent outsiders? Service users? If providers measure themselves there are obvious dangers of bias.
- How will they measure? Using what techniques? Visibly or unobtrusively? At pre-determined times or unexpectedly? How reliable are these measurements likely to be? Who will have access to the raw data? How, if at all, will it be validated?
- Which criteria will be applied? Will a specific performance be compared with a) a standard or threshold (if so, how determined?), b) an average or mean, c) the past performance of the organizational unit in question, d) the best performance of any organizational unit in the set under consideration, either nationally or internationally, or e) some combination of the above? Clearly a particular activity may score well/look good against one of these criteria whilst simultaneously scoring low/looking bad against another.
- How will the information be presented? In particular, will there be processes of aggregation, the construction of indices, weighting or other procedures which move the presented information further away from the raw data? (The sketch after this list illustrates how much such choices can matter.)
- Who will have access to the resulting information so as to be able to use it? Service providers? Supervising ministries and departments? Inspectorates and similar bodies? Legislators? Service users? Members of the general public? As many commentators have observed, it is unlikely that any one set of figures could be equally useful and salient for all these diverse audiences.
- How will it be used? Use could include a) symbolic or ritual use ('look, we have a modern performance management system!'), b) punitive use against low scorers, c) as a basis for resource claims (either 'we have done well, so give us more', or, alternatively, 'we are failing, so give us extra help'), d) to promote organizational learning, e) to support decisions about the continuation/expansion/termination of activities or programmes, f) to advance individual careers, g) to be filed and forgotten, and h) almost any combination of the above.

Notice that many of these questions go well beyond the technical. They involve the weighing of values or choices which can reasonably be thought of as 'political' (one of the classic definitions of politics was 'the authoritative allocation of values'). Thus performance measurement can never be 'purely objective' or a simple, automatic reflection of some external reality. It needs political inputs and it has to be managed.
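To make this concrete, here is a minimal, hypothetical sketch in Python. The units, indicators, scores and weights are all invented for illustration (they are not drawn from Pollitt's framework); the point is simply that the aggregation/weighting choice alone can determine which unit 'performs best'.

```python
# Hypothetical illustration: how the choice of weights in a composite
# indicator can reverse a ranking. All names and numbers are invented.

units = {
    "Unit A": {"waiting_time": 0.9, "clinical_outcome": 0.5},
    "Unit B": {"waiting_time": 0.5, "clinical_outcome": 0.9},
}

def composite(scores, weights):
    """Weighted sum of normalized indicator scores (higher = better)."""
    return sum(weights[k] * scores[k] for k in weights)

# Two equally defensible weighting schemes applied to the same raw data.
weightings = {
    "managerial": {"waiting_time": 0.7, "clinical_outcome": 0.3},
    "clinical":   {"waiting_time": 0.3, "clinical_outcome": 0.7},
}

for label, w in weightings.items():
    best = max(units, key=lambda u: composite(units[u], w))
    print(f"{label} weighting -> best performer: {best}")

# Output:
# managerial weighting -> best performer: Unit A
# clinical weighting -> best performer: Unit B
```

Both weightings are defensible, yet they crown different winners - which is precisely why such presentation choices are political rather than purely technical.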


SECOND FRAMEWORK: THE GOVERNANCE OF PERFORMANCE (after Bevan and Wilson, 2013)

Bevan and Wilson define four models:

1. TRUST AND ALTRUISM (T & A). Just supply comparative information to self-motivating professionals, or enable them to do that themselves. They will improve.
2. HIERARCHY AND TARGETS (H & T). Centralised specification of priorities and measures. Centrally organized rewards and sanctions.
3. CHOICE AND COMPETITION (C & C). Give users choice of provider and provide them with information on service quality. Money follows choice, so unpopular providers will suffer financially and be motivated to improve.
4. TRANSPARENT PUBLIC RANKING (TPR). Name and shame. Has to be easily understood by the public. Can be combined with C & C or operated by itself.

Each of these has different costs and benefits, strengths and weaknesses. Each leaves more control in the hands of some stakeholders and less in the hands of others. Bevan and Wilson found that, using the same indicators and the same services, T&A produced much less performance improvement in Wales than H&T and TPR did in England.


THIRD FRAMEWORK: J.Q. WILSON'S TYPOLOGY OF ORGANIZATIONAL TYPES (1989)

                          OUTPUTS OBSERVABLE?
                          YES                            NO
OUTCOMES      YES         A. Production organizations    C. Craft organizations
OBSERVABLE?   NO          B. Procedural organizations    D. Coping organizations

NOTES
- Examples: A = driving license agency, B = mental health counselling, C = forest rangers, D = diplomatic service.
- An organization may well have several functions, and these may be located in different cells.
- Changes in technology may shift a function from one cell to another by making aspects more observable.
- IMPLICATION: different types of measurement system for different types of organization. Crudely, if outcomes are very difficult to observe (or to attribute), it makes little sense to focus mainly on outcome indicators. If neither outcomes nor outputs are easily identified, then procedural indicators or connoisseurial evaluation may be the best we can hope for (a simple sketch of this mapping follows below).
- There is also the important point that staff can be held responsible for outputs, but often they can only partly - if at all - be held responsible for outcomes (e.g. a government employment service suffers lower rates of placement because of a general economic downturn). Because of this some experts counsel a mixture of process, output and outcome measures, rather than a predominance of just one of these types.
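As a toy illustration (my own sketch, not Wilson's), the typology and the measurement implication above can be written as a simple classification and lookup. The recommended indicator mixes paraphrase the notes; the function itself is an illustrative construction.

```python
# Hypothetical sketch mapping Wilson's (1989) 2x2 typology to the indicator
# mix suggested in the notes above. All recommendation texts are paraphrases.

def classify(outputs_observable, outcomes_observable):
    """Return Wilson's organization type for a given observability profile."""
    if outputs_observable and outcomes_observable:
        return "production"   # e.g. driving license agency
    if outputs_observable:
        return "procedural"   # e.g. mental health counselling
    if outcomes_observable:
        return "craft"        # e.g. forest rangers
    return "coping"           # e.g. diplomatic service

INDICATOR_MIX = {
    "production": "output and outcome indicators are both workable",
    "procedural": "mainly process/output indicators; use outcome data cautiously",
    "craft":      "outcome indicators, since day-to-day outputs are hard to see",
    "coping":     "procedural indicators or connoisseurial evaluation",
}

org_type = classify(outputs_observable=True, outcomes_observable=False)
print(org_type, "->", INDICATOR_MIX[org_type])
# procedural -> mainly process/output indicators; use outcome data cautiously
```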


WESTERN EUROPE

- Huge variation in extent of PMSs (e.g. UK, Belgium). Fairly stable (but different) national trajectories over time (see Kuhlmann and Jäkel, 2013; Pollitt, 2006a; Pollitt and Bouckaert, 2011).
- Apart from Norway, and to a lesser extent Germany and Sweden, austerity is the name of the game at the moment - not performance. Austerity cuts have seldom been driven by performance data: across-the-board approaches have prevailed (Kickert et al, 2013). Sometimes the cuts have actually acted against a performance orientation.
- Cultural and organizational variation in how PMSs are used (e.g. UK vs Nordics) (De Kool, 2008; Pollitt et al, 2004; Pollitt, 2006a).
- Performance measurement has outlived NPM, and seems to be here to stay. It is not just an 'Anglo-Saxon' practice, as Francophones sometimes claim (Kettl and Kelman, 2007; the recent Italian performance-related pay system; the new Austrian performance budgeting system).
- Gradually (but only gradually) being extended southwards and eastwards (Nakrosis, 2008; Nemec, 2007).
- Note, however, that the whole idea of autonomous government agencies is in retreat in some parts of Europe, including Ireland, the Netherlands and the UK (Tonkiss, 2013).


CONCLUDING THOUGHTS

- 'Governance' also means the sharing between stakeholders of the design, construction and operation of PMSs themselves. This is connected to the perennial issue of ownership. Different stakeholders have different priorities and purposes (Barber, 2007; Behn, 2003; Pollitt, 2006a, b; 2013a).
- One stakeholder which has seldom played much role in either the design or the use of performance indicator systems is the public itself (Martin et al, 2013; Pollitt, 2006b). Neither have most legislatures been very interested (studies in Canada, the Netherlands and the UK).
- The tight-loose balance is crucial, and will vary with the activity and the culture (e.g. police versus passport office).
- The question of incentives (by no means necessarily monetary) needs to be addressed. Some research, using a 'natural experiment' comparing English with Welsh performance measurement systems for schools and hospitals, showed that performance improvement was much stronger in the system that had strong incentives (Bevan and Wilson, 2013; Propper et al, 2008).
- The stable/churned balance is also crucial, and will vary with the time scale of the activity, e.g. infrastructural investment (long term) vs. performance of emergency services (short term). Too stable: the 'performance paradox'. Too fast: confusion and cynicism (Pollitt, 2013a; Talbot, 2010).
- Distortions are hardy perennials - to be managed rather than eliminated (Pollitt, 2013a).
- 1969, London, civil service training (anecdote, explained in the presentation).


REFERENCES

Audit Commission (2003) Waiting list accuracy, London, The Stationery Office
Barber, M. (2007) Instruction to deliver: Tony Blair, public services and the challenge of achieving targets, London, Politico's
Behn, R. (2003) 'Why measure performance? Different purposes require different measures', Public Administration Review 63:5, pp586-606
Bevan, G. (2006) 'Setting targets for health care performance: lessons from a case study of the English NHS', National Institute Economic Review 197:1, pp67-79
Bevan, G. (2009) 'Hitting and missing targets by ambulance services for emergency calls: effects of different systems of performance measurement in the UK', Journal of the Royal Statistical Society 172:1, pp161-190
Bevan, G. and Hood, C. (2006) 'What's measured is what matters: targets and gaming in the English public health care system', Public Administration 84:3, pp517-538
Bevan, G. and Wilson, D. (2013) 'Does "naming and shaming" work for schools and hospitals? Lessons from natural experiments following devolution in England and Wales', Public Money and Management 33:3, pp245-252
Bouckaert, G. and Halligan, J. (2008) Managing performance: international comparisons, London, Routledge/Taylor and Francis
Boyne, G.; Meier, K.; O'Toole Jnr., L. and Walker, R. (eds.) (2006) Public service performance: perspectives on measurement and management, Cambridge, Cambridge University Press
Brewer, G. (2008) 'Employee and organizational performance', pp136-156 in Van Dooren, W. and Van de Walle, S. (eds.) Performance information in the public sector: how it is used, Basingstoke, Palgrave Macmillan
Brewer, G. and Walker, R. (2013) 'Personnel constraints in public organizations: the impact of reward and punishment on organizational performance', Public Administration Review 73:1, pp121-131
Christensen, T.; Laegreid, P. and Stigen, I. (2006) 'Performance management and public sector reforms: the Norwegian hospital reform', International Public Management Journal 9:2, pp1-27
Dahler-Larsen, P. (2014) 'Constitutive effects of performance indicators: getting beyond unintended consequences', Public Management Review 16:7, pp969-986
De Bruijn, H. (2002) Managing performance in the public sector, London, Routledge
De Kool, D. (2008) 'Rational, political and cultural uses of performance monitors: the case of Dutch urban policy', pp174-191 in Van Dooren, W. and Van de Walle, S. (eds.) Performance information in the public sector: how it is used, Basingstoke, Palgrave Macmillan
Fitzgibbon, J. (1997) The value added national project: report to the Secretary of State, Durham, University of Durham/School Curriculum and Assessment Authority
Francis, R. (2010) Independent inquiry into the care provided by Mid Staffordshire NHS Foundation Trust January 2005-March 2009, HC375-1, London, The Stationery Office
Goddard, M.; Mannion, R. and Smith, P. (2000) 'The performance framework. Taking account of economic behaviour', pp138-161 in P. Smith (ed.) Reforming markets in health care, Buckingham, Open University Press
Hammerschmidt, G.; Van de Walle, S. and Stimac, V. (2013) 'Internal and external use of performance information in public organizations: results from an international survey', Public Money and Management 34:X, pp261-268
Healthcare Commission (2009) Investigation into Mid Staffordshire NHS Foundation Trust, London, Healthcare Commission, March
Hood, C. (2012) 'Public management by numbers as a performance-enhancing drug: two hypotheses', Public Administration Review (early view online 24 September 2012)
Hood, C. and Dixon, R. (2010) 'The political payoff from performance target systems: no-brainer or no-gainer?', Journal of Public Administration Research and Theory 20, pp i281-i291
Ingraham, P.; Joyce, P. and Donahue, A. (2003) Government performance. Why management matters, Baltimore and London, Johns Hopkins University Press
Jacobs, R.; Goddard, M. and Smith, P. (2006) Public services: are composite measures a robust reflection of performance in the public sector?, University of York, Centre for Health Economics, Research Paper 16 (June)
Jääskeläinen, A. and Laihonen, H. (2014) 'A strategy framework for performance measurement in the public sector', Public Money and Management 34:5, pp355-362
Johnston, C. and Talbot, C. (2008) 'UK parliamentary scrutiny of Public Service Agreements: a challenge too far?', pp157-173 in Van Dooren, W. and Van de Walle, S. (eds.) Performance information in the public sector: how it is used, Basingstoke, Palgrave Macmillan
Kahneman, D. (2012) Thinking, fast and slow, London, Penguin Books
Kelman, S. and Friedman, J. (2009) 'Performance improvement and performance dysfunction: an empirical examination of the distortionary impacts of the emergency room wait-time target in the English National Health Service', Journal of Public Administration Research and Theory 19:4, pp917-946
Kettl, D. and Kelman, S. (2007) Reflections on 21st century government management, Washington DC, IBM Center for the Business of Government
Kickert, W.; Randma-Liiv, T. and Savi, R. (2013) Fiscal consolidation in Europe: a comparative analysis, COCOPS deliverable 7.2, September 2013 (www.cocops.eu, accessed 14 August 2014)
Kuhlmann, S. and Jäkel, T. (2013) 'Competing, collaborating or controlling? Comparing benchmarking in European local government', Public Money and Management 33:4, pp269-275
Laegreid, P.; Roness, P. and Rubecksen, K. (2008) 'Performance information and performance steering: integrated system or loose coupling?', pp42-57 in Van Dooren, W. and Van de Walle, S. (eds.) Performance information in the public sector: how it is used, Basingstoke, Palgrave Macmillan
Martin, S.; Downe, J.; Grace, C. and Nutley, S. (2013) 'New development: all change? Performance assessment regimes in UK local government', Public Money and Management 32:3, pp277-280
Maynard, A. (2008) Payment for performance (P4P): international experience and a cautionary proposal for Estonia, Health Financing Policy Paper, Copenhagen, Division of Country Health Systems, World Health Organization Europe
Meyer, J. and Gupta, V. (1994) 'The performance paradox', Research in Organizational Behavior 16, pp309-369
Minister for Government Policy (2011) Open public services white paper, Cm8145, London, The Stationery Office, July
Moynihan, D. (2008a) The dynamics of performance management, Washington DC, Georgetown University Press
Moynihan, D. (2008b) 'Advocacy and learning: an interactive-dialogue approach to performance information use', pp24-41 in Van Dooren, W. and Van de Walle, S. (eds.) Performance information in the public sector: how it is used, Basingstoke, Palgrave Macmillan
Moynihan, D. (2008c) 'The normative model in decline? Public service motivation in the age of governance', pp247-267 in Van Dooren, W. and Van de Walle, S. (eds.) Performance information in the public sector: how it is used, Basingstoke, Palgrave Macmillan
Nakrosis, V. (2008) 'Reforming performance management in Lithuania: towards results-based government', pp53-108 in G.B. Peters (ed.) Mixes, matches and mistakes: New Public Management in Russia and the former Soviet republics, Budapest, OGI/LGI
Nemec, J. (2007) 'Performance management in the Slovak higher education system: preliminary evaluation', Central European Journal of Public Policy 1:1, pp64-78
Ofqual (2012) GCSE English 2012, Ofqual report 12/5225, Coventry, Ofqual
Pawson, R. (2013) The science of evaluation: a realist manifesto, Los Angeles and London, Sage
Perry, J.; Engbers, T. and Yun Jun, S. (2009) 'Back to the future? Performance-related pay, empirical research and the perils of persistence', Public Administration Review 69:1, pp1-31
Pollitt, C. (1985) 'Measuring performance: a new system for the National Health Service', Policy and Politics, January, pp1-15
Pollitt, C. (2006a) 'Performance management in practice: a comparative study of executive agencies', Journal of Public Administration Research and Theory 16:1, pp25-44
Pollitt, C. (2006b) 'Performance information for democracy: the missing link?', Evaluation 12:1, pp38-55
Pollitt, C. (2010) 'Simply the best? The international benchmarking of reform and good governance', pp91-113 in J. Pierre and P. Ingraham (eds.) Comparative administrative change: lessons learned, Montreal and Kingston, McGill-Queen's University Press
Pollitt, C. (2011) '"Moderation in all things": international comparisons of governance quality', Financial Accountability and Management 27:4, pp437-457
Pollitt, C. (2013a) 'The logics of performance management', Evaluation: an international journal of theory and practice 19:4, pp346-363
Pollitt, C. (ed.) (2013b) Context in public policy and management: the missing link?, Cheltenham, Edward Elgar
Pollitt, C.; Talbot, C.; Caulfield, J. and Smullen, A. (2004) Agencies: how governments do things through semi-autonomous organizations, Basingstoke, Palgrave Macmillan
Pollitt, C. and Bouckaert, G. (2009) Continuity and change in public policy and management, Cheltenham, Edward Elgar
Pollitt, C. and Bouckaert, G. (2011) Public management reform: a comparative analysis (3rd ed.), Oxford, Oxford University Press
Pollitt, C. and Hupe, P. (2011) 'Talking about government. The role of magic concepts', Public Management Review 13:5, pp641-658
Pollitt, C.; Harrison, S.; Dowswell, G.; Jerak-Zuiderent, S. and Bal, R. (2010) 'Performance regimes in health care: institutions, critical junctures and the logic of escalation in England and the Netherlands', Evaluation 16:1, pp13-29
Pollitt, C. and Dan, S. (2011) The impacts of the New Public Management in Europe: a meta-analysis, December (www.cocops.eu)
Pollitt, C. and Dan, S. (2013) 'Searching for impacts in performance-oriented management reform: a review of the European literature', Public Performance and Management Review 37:1, pp7-32
Propper, C.; Sutton, M.; Whitnall, C. and Windmeijer, F. (2008) Incentives and targets in hospital care: evidence from a natural experiment, Bath, CMPO Working Paper 9
Radin, B. (2006) Challenging the performance movement: accountability, complexity, and democratic values, Washington DC, Georgetown University Press
Reay, T. and Hinings, C. (2009) 'Managing the rivalry of competing institutional logics', Organization Studies 30:6, pp629-652
Smith, P. (1995) 'On the unintended consequences of publishing performance data in the public sector', International Journal of Public Administration 18, pp277-310
Smithers, R. (2002) 'Schools cheat to boost exam results', Guardian, p1, 5 June
Sundström, G. (2006) 'Management by results: its origin and development in the case of the Swedish state', International Public Management Journal 9:4, pp399-427
Talbot, C. (2010) Theories of performance, Oxford, Oxford University Press
Thomas, J. (1999) 'Quantifying the black economy: "measurement without theory" yet again', The Economic Journal 109, pp381-398 (June)
Thornton, P.; Ocasio, W. and Lounsbury, M. (2012) The institutional logics perspective: a new approach to culture, structure and process, Oxford, Oxford University Press
Tonkiss, K. (2013) 'Bring up the bodies: the transformation of arm's length relationships', Guardian Professional, 26 April (accessed 2 August 2014)
Townley, B. (1997) 'The institutional logic of performance appraisal', Organization Studies 18:2, pp261-285
Transparency International (2012) Corruption risks in the Visegrad countries: Visegrad integrity system study, Hungary, Transparency International
Van Thiel, S. and Leeuw, F. (2002) 'The performance paradox in the public sector', Public Performance and Management Review 25:3, pp267-281
Van de Walle, S. and Roberts, A. (2008) 'Publishing performance information: an illusion of control?', pp211-226 in Van Dooren, W. and Van de Walle, S. (eds.) Performance information in the public sector: how it is used, Basingstoke, Palgrave Macmillan
Van Dooren, W. and Van de Walle, S. (eds.) (2008) Performance information in the public sector: how it is used, Basingstoke, Palgrave Macmillan
Varone, F. and Giauque, D. (2001) 'Policy management and performance-related pay: comparative analysis of service contracts in Switzerland', International Review of Administrative Sciences 67:3, pp543-565
Verhoest, K.; Van Thiel, S.; Bouckaert, G. and Lægreid, P. (eds.) (2012) Government agencies: practices and lessons from 30 countries, Basingstoke, Palgrave Macmillan
Weick, K. (2001) Making sense of the organization, Thousand Oaks, CA, Sage
Wilson, J.Q. (1989) Bureaucracy: what government agencies do and why they do it, New York, Basic Books
