PERFORMANCE MEASUREMENT AND FEEDBACK IN A PUBLIC SECTOR PROGRAM

Paul Hyland, School of Management, Queensland University of Technology, Email: [email protected], Phone: (07) 3138 2938
Mario Ferrer, Faculty of Business and Informatics, Central Queensland University, Rockhampton, Australia, Email: [email protected], Phone: (07) 4930 9510
Ricardo Santa, School of Law and Business, Charles Darwin University, Email: [email protected]
Phil Bretherton, School of Law and Business, Charles Darwin University, Darwin, Australia, Email: [email protected], Phone: (08) 8946 6108

ABSTRACT

Establishing a framework for measuring the performance of public sector programs is fraught with dangers. Many public sector organisations are satisfied with measuring activity in programs and fail to see the need for a framework that meets the needs of participants and measures outcomes as well as activities. This paper explores how a government department in Queensland went about establishing a performance management framework to measure the outcomes and activities of a program run as a public private partnership. Findings indicate that, using an iterative consultative approach, performance measures can be put in place that are meaningful and that assist participants to review the program.

Keywords: performance measures, feedback, continuous improvement

INTRODUCTION

The goal of a performance measurement system is to communicate and implement strategy [32] and to ensure alignment between actions, objectives and strategies. Consequently, performance measurement systems and frameworks need to represent efforts to measure how activities and processes contribute, separately and jointly, to meeting an organisation's or program's strategic objectives; link operations to strategic goals; focus business activities on the customer; drive future activities and needs; and enhance performance [15]. It is important that any measurement of performance outcomes provides meaningful measures of activities, processes and achievements and allows for feedback between key stakeholders in the organisation. The Mentoring for Growth (M4G) program is a unique public private collaboration in that the private sector is the beneficiary, or customer, of the program and program delivery depends on private sector mentors throughout Queensland. While the M4G program has been in existence for more than six years, it has evolved to suit the needs of businesses and targeted industry sectors in the regions. The program provides suggestions to businesses on issues affecting their growth.

The program is organised and sponsored by the Queensland government through the Capital Raising Unit, and the advice is provided by mentors from the private sector. The program has provided over 600 businesses throughout Queensland with mentoring on strategic growth challenges. Over the last six years there has been regular measurement of M4G activities; however, the Queensland government's Capital Raising Unit recognised a need for a formal process to measure outcomes and provide improved formal feedback to stakeholders. This paper outlines the process involved in developing a performance measurement and feedback framework in a public private partnership and provides some lessons for organisations seeking to include clients' feedback in their performance measurement systems.

Performance Measurement

Performance measurement has increased markedly in public organisations, and Nicholson-Crotty et al [50] maintain that it has generated growing interest. Bouckaert [9] traced the history of performance measurement and demonstrated the value of tracking organisational performance on certain indicators [49, 57]. Nicholson-Crotty et al [50] have also investigated the many obstacles that public organisations face when they try to develop and strategically use performance measures [2]. Some have focused on the ways in which entrepreneurial public managers have taken the lead in performance measurement, whereas others have highlighted the many organisations and programs that still fail to benefit from this growing trend [3, 44]. One consistent theme within this literature is that no one measure is sufficient to answer all the questions that are asked about the performance of an organisation or program (e.g., [36]). Some research has focused on the specific purposes to which performance measures can be put [6, 22] or the different types of measures, including output, outcome, quality, workload and others, that can be used to gather information about different components of public-service delivery [7, 16]. This implies that it is up to managers to select the appropriate performance measure by narrowly defining the activities they want to know more about and the purpose to which they want to put that information. While many public sector organisations are putting considerable energy into measuring performance, the effectiveness of the measures used varies considerably. In designing and implementing any performance measurement system it is vital to address the essentials of performance measurement. Moullin [43] has put forward eight characteristics of effective performance measurement, and Moullin's framework guided the process used in this study. Delivering excellent services requires a high standard of performance on a wide range of factors, so it is important that performance is assessed against a balanced framework reflecting the different areas that are of strategic importance to the organisation. A number of balanced frameworks are available; one is the Public Sector Scorecard [43], which adapts Kaplan and Norton's balanced scorecard for public sector organisations. Many organisations collect a vast amount of information but do not have an effective system for translating this feedback into a strategy for action.

Performance Measurement Frameworks

According to Bititci et al [8] there is widespread recognition of the limitations of financial, internal and historically based performance measures [20, 23, 26, 27, 31, 52]. Since then, a number of frameworks and models have been developed for performance measurement and performance management [8].

These include the strategic measurement and reporting technique [15], the performance measurement matrix [31], the results and determinants framework [19], the balanced scorecard [27-29], the Cambridge performance measurement system design process [47], the integrated performance measurement system reference model [8] and the performance prism [46], as well as various business excellence models, such as the European business excellence model (EFQM, 1999). Holloway [25] argues that much of the research and development effort has been focused on particular models and frameworks for performance measurement, but that little has been done to describe and analyse problems with the application of these models and frameworks. Only a handful of researchers [11, 32, 48] have used action research methods to investigate and study the life-cycle of performance measurement systems. Bourne [10] defines a successful performance measurement implementation as a performance measurement system that is used by the management team on a regular basis to discuss and manage business performance related issues.

Feedback and Measuring Performance

While there are many possible approaches to improving the performance of smaller firms, this can be problematic as they generally have fewer resources [34] and their small number of employees makes it difficult to build economies of scale from formal programs. Although participation in formal courses has benefits, having a mentor and learning from experience is more appropriate in an entrepreneurial environment as it is more cost effective [53]. A number of different theories and research streams support the notion that feedback can motivate and enable people to do their jobs better, including goal setting theory [37] and control theory [13]. The most influential, however, may be Thorndike's law of effect [33]. Thorndike [54] proposed that behavior which results in pleasant outcomes will be repeated, while behavior that results in unpleasant outcomes will not. In other words, when people receive positive feedback in response to their behavior, they tend to do the same thing again, but if they receive negative feedback, they are unlikely to try it again and will probably explore other approaches. The simplicity and practicality of the law has made it appealing to both managers and researchers over the years. The claim that feedback can improve performance is supported by research conducted over at least a century, with a majority of writers concluding that the approach does work [33]. However, feedback on performance does not always work and in some studies actually leads to a drop in performance, which shows that it is important to have the right measures of performance and the right feedback mechanisms. According to London and Sessa [41], while there is a substantial literature on the role of feedback in performance at the individual level [40], the role of feedback given in groups, whether to individual members or to the group as a whole, has not been explored in much depth [5, 12, 35]. Feedback is the transmission of evaluative or corrective information about some sort of action, event or process. "Feedback guides, motivates, and reinforces effective behaviors and reduces or stops ineffective behaviors" [40]. Feedback may be formative, in that it can be used for ongoing development, or summative, in that it is used to evaluate the recipients. In a project or program, feedback can be given to individual members, subsets of members, or the group as a whole. The feedback can come from within the group or from someone outside the group.
Feedback can also be delivered based on objective data or information about behaviors, processes and outcomes. London and Sessa [41] argue that the focus of the feedback can vary: it may be directed at individual members, at subgroups, or at the group as an entity.


Any group usually fulfils its purposes or achieves its outcomes by any number of paths, so it needs and relies on feedback to regulate itself and to ensure that it is on the most appropriate paths. London and Sessa [41] maintain that as soon as a group forms, it acquires direction and momentum, and this momentum is strong even if it is in the wrong direction [4]. Without feedback, a group such as a panel of mentors cannot determine the extent to which it is moving toward its goals or whether it needs to change in some way to achieve those goals. London and Sessa [41] believe there are four ways that feedback helps groups and individuals learn and perform: (a) feedback helps the group regulate actions to achieve the group's goals, (b) it helps members assess and respond to outside influences, (c) it promotes group development and member interdependence, and (d) it helps members formulate a shared conceptualization of the group's distinct identity and purpose. Firstly, feedback adds to the group's understanding of what works, and so should be repeated in the future, and what does not work, and so should be improved or dropped. Feedback allows the group to recognize the effects of its actions and choices and, if need be, to change those actions and choices over time to produce a different outcome. That is, feedback helps the group regulate its work and objectives [56]. At the individual level, it is well accepted that objectives and performance feedback are the most effective interventions available to improve learning and performance [38]. Programs and groups that set goals and receive feedback on their goals are more likely to improve their performance than groups that do not [39, 42, 45]. Secondly, feedback allows the group to assess its openness to outside influences so that this openness can be made more appropriate [1]. For example, the more severe the consequences of an error, the more learning is likely to occur [51]. Furthermore, groups that learn from errors ultimately perform better than groups that are less able to learn from errors [17, 18]. After an error with severe consequences, groups may learn to be more open to errors with slightly less severe consequences in the future. Thirdly, feedback can assist the group through its stages of development and into more interdependent work along the way [30]. The learning that occurs in the group depends on the stage of development of the group [21]. During the early, formative phase, groups need to be motivated if they are to understand and use feedback. It may be the case that different types of feedback need to be obtained at different stages to ensure that the group can use the feedback appropriately. Newcomers to a group can learn from feedback and become more valuable to the group sooner than if they receive little or no feedback. Newcomers need to adapt their behavior to the demands of the group. Newcomers who are able to obtain needed skills and knowledge, express innovative ideas, work well with others, and go above the call of duty are likely to improve their performance more quickly, and feedback aids their ability to adapt [14]. Finally, feedback can help the group and its members clarify how their group is related to other systems (other groups, the organisation of which they are a part, and external groups and organisations). It can help group members recognize that these other systems share a common view of the world. Group feedback may change members' focus from themselves as individuals to their group [24]. Also, group feedback may increase members' sensitivity to the environment by helping them recognize that other members may have made different choices than they did [55].


London and Sessa [41] argue that explicit communication about expectations and goals at the outset promotes the development of a unified mental model, and that feedback about behaviors and performance during the task enhances the development of this model and facilitates coordination and task accomplishment.

METHODOLOGY

To determine what measures were to be collected or reported by mentees, mentors and regional centres, researchers and DTRDI staff conducted focus groups and interviews with Departmental staff, mentors and mentees in Toowoomba, Cairns and the Sunshine Coast. The purpose of visiting the regions and collecting data and input from the people involved was to enhance the trustworthiness of the data obtained. In the data collection process with Departmental officers in Brisbane and the three regions it was proposed that feedback and reporting would occur at several levels; the direction of the proposed feedback and reporting is outlined in Figure 1. Performance measurement is used to provide data that the different stakeholders can use to facilitate collaboration between the different actors. The regional centres and the Capital Raising Unit collect the data and measure the performance of mentors and mentees as well as the government agencies. This data is then analysed to provide feedback on performance that can be used to identify areas for improvement. Once the areas for improvement are identified, the relevant stakeholders work collaboratively to improve their activities. Mentors and mentees report their feedback and outcomes to the Regional Centres and the Capital Raising Unit, who in turn provide feedback to mentors and mentees. The information can then be used to evaluate aspects of the program and develop ways of improving the program and the outcomes for stakeholders.
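As a simple illustration of these reporting and feedback directions (summarised in Figure 1 below), the sketch that follows models stakeholders submitting outcome reports and receiving an aggregated regional summary in return. The stakeholder and measure names follow the paper, but the data structures and functions are hypothetical and are not part of the Department's systems.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: class and function names are illustrative,
# not part of the Department's systems.

@dataclass
class OutcomeReport:
    source: str     # "mentor" or "mentee"
    region: str     # e.g. "Cairns", "Sunshine Coast"
    measures: dict  # e.g. {"satisfaction": 4, "employment": 12}

@dataclass
class FeedbackLoop:
    """Collects reports on behalf of the Regional Centres / Capital Raising Unit."""
    reports: list = field(default_factory=list)

    def submit(self, report: OutcomeReport) -> None:
        # Mentors and mentees report their outcomes upward.
        self.reports.append(report)

    def feedback_for(self, region: str) -> dict:
        # Aggregate the region's reports into a summary that can be
        # returned to mentors and mentees as feedback.
        regional = [r.measures for r in self.reports if r.region == region]
        totals: dict = {}
        for measures in regional:
            for key, value in measures.items():
                if isinstance(value, (int, float)):
                    totals[key] = totals.get(key, 0) + value
        return {"region": region, "reports": len(regional), "totals": totals}

loop = FeedbackLoop()
loop.submit(OutcomeReport("mentee", "Cairns", {"satisfaction": 4, "employment": 12}))
loop.submit(OutcomeReport("mentor", "Cairns", {"satisfaction": 5}))
print(loop.feedback_for("Cairns"))
```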


[Figure 1 depicts the feedback loop between the Capital Raising Unit (providing leadership), the Regional Centres, Mentors and Mentees, together with the performance measures collected: satisfaction, business growth, financial, employment, exports, innovation, R&D activities, business skills and business advice.]
Figure 1: Performance Outcomes and Feedback Framework

The Mentoring for Growth program was formed after the identification of a market failure. The failure of existing market mechanisms to support many SMEs, and the ideas or concepts that would grow businesses and the Queensland economy, meant that Queensland entrepreneurs were not achieving their full potential. It was recognised that a central government agency was needed, with a process enabling businesses to access the range of skills, resources and networks required to grow from an idea to a fully operational business. The Mentoring for Growth program was established in south-east Queensland and rolled out to regional Queensland in 2000. The program involves businesses approaching government agencies such as the Department of State Development for assistance with a business problem. The business is then assessed by a departmental officer with business experience as to its suitability for mentoring. Once a client has been identified as a suitable candidate for mentoring, a mentoring panel is formed. The panel members are selected from a database of registered mentors who give their time on a pro bono basis. The panel composition depends on the needs of the client: once the client's key challenges have been identified, mentors who have skills in those particular areas are invited to the mentoring session. These mentors can then assist the clients with practical advice, as well as providing the clients with links to people who may be able to help through the mentors' personal contacts. Mentors come from diverse backgrounds, including successful entrepreneurs, former company CEOs, investment bankers, accountants, lawyers, marketing professionals and many other fields. Overall, the majority of firms benefited from the trusting environment established by mentors and SDC officers, which provided security and confidence for participants. Another valuable benefit was new knowledge about business issues. The new ideas and suggestions from mentors enabled firms to improve different aspects of their business. The firms obtained valuable business contacts during the program and used these to develop their operational capabilities.

The Mentoring for Growth program also benefited clients by facilitating the process of learning, suggesting improvements to business operations and sharing best practices. Most of the firms were able to prepare for growth and develop management skills as a result of mentoring. Mentors and departmental staff helped respondents to develop a strategic focus and direction in their firms. The firms were able to learn and develop the expertise relevant to their business operations or growth. There were networking opportunities, links to contacts, and expansion opportunities as a result of the mentoring process. Mentors and departmental officers also benefited as they learned from one another and from clients about what helped the firms to grow.

Data was collected using focus groups, face-to-face interviews and telephone interviews. The sampling was purposive to ensure that experienced mentors and departmental officers had input. Similarly, purposive sampling was used to identify mentees who had been mentored in the previous 12 months for interviews. The type and sample of participants is summarised in Table 1. The face-to-face interviews were conducted using a semi-structured interview to ascertain the experience of mentees and Departmental officers and their willingness to engage in a performance feedback system. Telephone interviews were conducted to test the measures using a structured interview. The data was collected over a four-month period in 2008.

Location         Focus groups                        Face-to-face interviews   Telephone interviews
Brisbane         6 Department staff                  Nil                       Nil
Sunshine Coast   8 Department staff, 10 Mentors      5 Mentees                 5 Mentors, 14 Mentees
Toowoomba        6 Mentors                           2 Departmental staff      Nil
Cairns           12 Departmental staff, 10 Mentors   4 Mentees                 5 Mentors, 3 Mentees

Table 1: Data collection and sampling

RESULTS

The focus groups showed that departmental staff and mentors had differing priorities when assessing the performance of businesses in the regions. In some cases departmental staff felt that measures such as profitability and earnings before interest and tax were essential, while mentors were more focused on the mentees' satisfaction with the mentoring sessions. For example, one staff member espoused the view that "Profit is the only worthwhile measure". Mentors, on the other hand, were concerned about how mentees felt about their experience of a mentoring session. In one region, Toowoomba, a mentor took it upon himself to follow up with every mentee after the mentoring sessions. Mentors in Toowoomba also commented that they contributed significant time to the M4G program and that this should be measured. As one mentor suggested, "We recently discussed what we do for mentees and we make a large amount of time available and that's worth money". In focus groups, mentors and departmental staff commented on the quality of mentoring and the interactions between people in the mentoring sessions. When asked for specific examples or evidence to support claims, most participants maintained it was "more a feeling that mentees feel threatened". The focus groups allowed staff and mentors to voice any concerns they had with M4G and gave them the opportunity to suggest measures of performance for mentees, mentors and the Department. There was a general consensus that more feedback was needed and that feedback should occur on a regular basis. The mentors were very interested in finding out whether mentees had implemented any suggestions they had received at mentoring sessions and how the company was faring: had business improved, had any real changes occurred?

Mentors were also asked if they would provide feedback on their own businesses and whether they had benefited in any way from being involved in M4G. In the main, mentors were enthusiastic about giving feedback and performance data. As a Cairns mentor articulated, "mentoring is about helping our region grow and we need to know how well we're doing". Mentees in Cairns and on the Sunshine Coast were interviewed to determine whether they were prepared to provide feedback on the mentoring sessions and whether they would have any problems providing information on their business performance. All but one mentee were effusive in their praise of the mentoring sessions. While they found the sessions challenging, and often felt they were under-prepared, they all found them worthwhile. Mentees had no hesitation in providing information on their business performance. In the case of one Cairns business, when asked if they were prepared to give financial figures the mentee stood and said, "I'll get that for you now!" Mentees in almost all cases were very open about their businesses and were prepared to provide data and feedback at three points in time. The mentees realised they had access to a great resource in the mentors and most wanted to be mentored again. After the interviews and focus groups, the researchers, together with senior officers in the Capital Raising Unit, designed three structured interview schedules which were to be trialled using telephone interviews. One interview sheet was designed to be used with mentors on an annual basis; the other two sets of structured questions were designed to be used six months after a mentoring session and then again 12 months after mentoring. To test these data collection instruments, mentors and mentees in two regions, Cairns and the Sunshine Coast, were contacted and asked to take part in telephone interviews. The items that were reliably answered and provided the data that would allow adequate feedback and measurement are summarised in Table 2. The performance measures identified in Table 2 were based on the strategic objectives of the department and were agreed to through an iterative consensus building process. The performance measures were selected to reflect departmental objectives articulated in the departmental strategic plan. For example, one objective of the department was to "grow emerging globally-focused, high growth and knowledge-based industries", so to measure at least some aspect of this objective the performance measure exports can be used as a proxy for global focus. Similarly, the department was seeking to "increase the application of successful business skills of small and medium enterprises including indigenous enterprises", so after consultation with relevant officers it was agreed that this could be measured by collecting data on business skills and business advice and analysing trends to determine if there was an increase.

Performance Measures — Items for Mentors and Mentees

Satisfaction: Was the M4G Panel Meeting useful? Do you agree that the M4G Panel Meeting allowed you to test and explore some key business issues? Do you agree that the Mentors at the session offered honest and constructive criticism about your business? Did the Mentors at the session provide worthwhile options for your business? Did the M4G Panel provide you with an opportunity to reflect on the performance of your business? Have you acted on feedback received in the M4G Panel meeting? Was the support you received in preparing for your M4G Panel Meeting useful? What future improvements do you think could be made to M4G? What, if any, are the drawbacks of M4G?

Business growth: What is your company's previous financial period (year/month) turnover/sales? Have you raised any additional finance for business growth? Have you received any business referrals or projects from other M4G mentors? How has your own business benefited from your involvement in Mentoring for Growth?

Financial: What is your company's previous financial period (year/month) turnover/sales? What is your company's Profit/EBIT? Have you raised any additional finance for business growth?

Employment: What is the total number of employees in your company, including family members and owners (full-time equivalent)?

Exports: Did you export? If yes, what is your company's revenue from exports in the last financial year?

Innovation and R&D activities: What changes have you introduced to your business? Do you conduct any research and development activities? If yes, what is the nature of the research or R&D collaborations? How much do you think you invest in R&D a year? Have you licensed any of your IP in the last 12 months?

Business skills: What is your company's dollar investment in accessing business advice (e.g. accountants, consultants)? Have you accessed any other advisory programs? Have you taken an active role in any M4G businesses you have mentored?

Business advice: Number of hours of follow-up advice received. Value of advice paid/unpaid. What suggestions have you implemented or acted on? Have you taken an active role in any M4G businesses you have mentored?

Table 2: Performance measures and measurement items
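To illustrate how the Table 2 items could be assembled into the six- and twelve-month follow-up interview records described earlier, the sketch below groups a few abbreviated items under their measures. The structure, function and field names are assumptions for illustration, not the Department's actual instrument.

```python
# Hypothetical sketch: item wording abbreviated from Table 2; the dictionary
# structure, function and field names are assumptions, not the Department's
# actual interview instrument.
INTERVIEW_SCHEDULE = {
    "Satisfaction": [
        "Was the M4G Panel Meeting useful?",
        "Have you acted on feedback received in the M4G Panel meeting?",
    ],
    "Business growth": [
        "Previous financial period turnover/sales?",
        "Have you raised any additional finance for business growth?",
    ],
    "Financial": ["What is your company's Profit/EBIT?"],
    "Employment": ["Total number of employees (full-time equivalent)?"],
    "Exports": ["If you export, what was your export revenue last financial year?"],
    "Innovation and R&D activities": [
        "What changes have you introduced to your business?",
        "How much do you invest in R&D a year?",
    ],
    "Business skills": ["Have you accessed any other advisory programs?"],
    "Business advice": ["What suggestions have you implemented or acted on?"],
}

def build_interview(respondent_id: str, months_after_session: int) -> dict:
    """Assemble an empty follow-up interview record (6 or 12 months after mentoring)."""
    return {
        "respondent": respondent_id,
        "follow_up_months": months_after_session,
        "responses": {measure: {item: None for item in items}
                      for measure, items in INTERVIEW_SCHEDULE.items()},
    }

six_month_interview = build_interview("mentee-042", months_after_session=6)
```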

CONCLUSION

The Mentoring for Growth program will benefit from implementing a performance feedback framework that is designed to assist stakeholders to continuously improve program delivery and to measure the contribution of the mentoring to the growth of Queensland business. The framework presented in this paper allows for multiple feedback loops that assist all stakeholders, including mentors, mentees and departmental staff, to understand the contribution they are making through their involvement in the program. As London and Sessa [41] argue, the focus of feedback varies, and in this case the feedback is used with individual members, the group as an entity, and subgroups. Further, because, as London and Sessa [41] maintain, a group acquires direction and momentum as soon as it forms, and this momentum is strong even if it is in the wrong direction, the performance framework developed in this study will allow monitoring of the direction and performance of groups. The data collected will also be used to continually improve group processes and activities and to provide stakeholders with valid data concerning the performance of groups and the M4G program. To optimise the benefits of the framework, the data is analysed at two levels and presented in aggregated form to protect the confidentiality of mentors and mentees.

Regional data is used to compare the performance of each region to the overall state performance of the M4G program, allowing regional staff to assess how they are performing compared to the rest of the state; regional staff can also compare their performance on a year-on-year basis to ensure they are continuing to improve. This supports Bourne's [10] assertion that a successful performance measurement system needs to be used by the management team on a regular basis and should form the basis of discussions on how to manage business performance related issues. The Capital Raising Unit can report on a state and regional basis and can report on trend data that demonstrates how involvement in the program is enhancing business skills development and assisting businesses to improve their performance and grow. As Moullin [43] suggested, measures of organisational performance encompass five perspectives: the achievement of strategic objectives, service user/stakeholder satisfaction, organisational excellence, financial targets, and innovation and learning. In developing the framework, stakeholders included all of these dimensions. The framework is not a static measurement instrument and will evolve over time as the objectives of the department and regional centres change. As issues such as climate change, financial governance and risk management are incorporated into government priorities, it will be possible to reflect these new objectives in a revised framework. The measures will also change to meet the needs of stakeholders and to provide appropriate measures that reflect the priorities of regional centres. The measurements and data provided by the framework can be used in any decision making in relation to the Mentoring for Growth program. The strength of the framework, as Moullin [43] argued, is that it was developed in consultation with key stakeholders, that it provides data relevant to the operational perspectives of the program, and that the data can be aggregated to provide a snapshot of regional businesses involved in a business improvement program that benefits from a truly public-private partnership. Over time, the information collated through this framework will provide regional centres and departmental staff with a detailed picture of activities and issues facing businesses throughout Queensland.
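As a rough sketch of the two-level analysis described above, the code below compares a region's average score on a measure with the state-wide average and with the same region's previous year. The field names, functions and figures are illustrative assumptions only, not the Department's reporting system.

```python
from statistics import mean

# Hypothetical sketch: field names, functions and figures are illustrative
# assumptions, not the Department's reporting system.

def regional_vs_state(records, region, measure):
    """Average score for one region compared with the state-wide average."""
    regional = [r[measure] for r in records if r["region"] == region]
    statewide = [r[measure] for r in records]
    return mean(regional), mean(statewide)

def year_on_year(records, region, measure, year):
    """Change in a region's average score relative to the previous year."""
    this_year = [r[measure] for r in records
                 if r["region"] == region and r["year"] == year]
    last_year = [r[measure] for r in records
                 if r["region"] == region and r["year"] == year - 1]
    return mean(this_year) - mean(last_year) if last_year else None

# Illustrative, de-identified records only (not program results).
records = [
    {"region": "Cairns", "year": 2008, "satisfaction": 4.2},
    {"region": "Cairns", "year": 2007, "satisfaction": 3.9},
    {"region": "Toowoomba", "year": 2008, "satisfaction": 4.5},
]
print(regional_vs_state(records, "Cairns", "satisfaction"))
print(year_on_year(records, "Cairns", "satisfaction", 2008))
```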


REFERENCES

1. Alderfer, C.P. Consulting to Underbounded Systems, in Advances in Experiential Social Processes, 1980, 267-295.
2. Ammons, D. Productivity Barriers in the Public Sector, in Holzer, M. (Ed.), The Public Productivity Handbook. New York: Marcel Dekker, 1992, 117-136.
3. Ammons, D. Overcoming the Inadequacies of Performance Measurement in Government: The Case of Libraries and Leisure Services. Public Administration Review, 1995, 55(1), 37-52.
4. Arrow, H., McGrath, J.E. & Berdahl, J.L. Small Groups as Complex Systems: Formation, Coordination, Development, and Adaptation. Thousand Oaks, CA: Sage, 2000.
5. Barr, S.H. & Conlon, E.J. Effects of Distribution of Feedback in Workgroups. Academy of Management Journal, 1994, 37, 641-655.
6. Behn, R. Why Measure Performance? Different Purposes Require Different Measures. Public Administration Review, 2003, 63(5), 586-604.
7. Berman, E. & Wang, X. Performance Measurement in U.S. Counties: Capacity for Reform. Public Administration Review, 2000, 60(5), 409.
8. Bititci, U.S., Carrie, A.S. & McDevitt, L.G. Integrated Performance Measurement Systems: A Development Guide. International Journal of Operations & Production Management, 1997, 17(6), 522-535.
9. Bouckaert, G. Public Productivity in Retrospective, in Holzer, M. (Ed.), The Public Productivity Handbook. New York: Marcel Dekker, 1992, 5-46.
10. Bourne, M. Implementation Issues. Handbook of Performance Measurement. London: GEE Publishing Ltd, 2001.
11. Bourne, M. & Neely, A. Why Performance Measurement Interventions Succeed and Fail. In Proceedings of the 2nd International Conference on Performance Measurement, Cambridge, 2000.
12. Bunderson, J.S. & Sutcliffe, K.M. Management Team Learning Orientation and Business Unit Performance. Journal of Applied Psychology, 2003, 88(3), 552-560.
13. Carver, C.S. & Scheier, M.F. Origins and Functions of Positive and Negative Affect: A Control-Process View. Psychological Review, 1990, 19-35.
14. Chen, G. Newcomer Adaptation in Teams: Multilevel Antecedents and Outcomes. Academy of Management Journal, 2005, 48, 101-116.
15. Cross, K.F. & Lynch, R.L. The SMART Way to Define and Sustain Success. National Productivity Review, 1989, 9(1).
16. De Lancer Julnes, P. & Holzer, M. Promoting the Utilization of Performance Measures in Public Organizations: An Empirical Study of Factors Affecting Adoption and Implementation. Public Administration Review, 2001, 61(6), 693-708.
17. Edmondson, A. Psychological Safety and Learning Behavior in Work Teams. Administrative Science Quarterly, 1999, 44, 350-383.
18. Edmondson, A.C., Bohmer, R.M. & Pisano, G.P. Disrupted Routines: Team Learning and New Technology Implementation in Hospitals. Administrative Science Quarterly, 2001, 46, 685-716.
19. Fitzgerald, L., Johnston, R., Brignall, T.J., Silvestro, R. & Voss, C. Performance Measurement in Service Industries. London: CIMA, 1991.
20. Goldratt, E.M. & Cox, J. The Goal: A Process of Ongoing Improvement. New York, NY: North River Press, 1986.
21. Hackman, J.R. & Wageman, R. A Theory of Team Coaching. The Academy of Management Review, 2005, 30(2), 269-288.
22. Hatry, H. Performance Measurement: Getting Results. Washington, DC: Urban Institute, 1999.
23. Hayes, R.H. & Abernathy, W.J. Managing Our Way to Economic Decline. Harvard Business Review, 1980, (July/August), 67-77.
24. Hinsz, V.B., Tindale, R.S. & Vollrath, D.A. The Emerging Conceptualization of Groups as Information Processors. Psychological Bulletin, 1997, 121, 43-64.
25. Holloway, J. Investigating the Impact of Performance Measurement. International Journal of Business Performance Management, 2001, 3(2/3/4), 167-180.
26. Johnson, H.T. & Kaplan, R.S. Relevance Lost: The Rise and Fall of Management Accounting. Boston, MA: Harvard Business School Press, 1987.
27. Kaplan, R.S. & Norton, D.P. The Balanced Scorecard: Measures That Drive Performance. Harvard Business Review, 1992, (January/February), 71-79.
28. Kaplan, R.S. & Norton, D.P. Translating Strategy into Action: The Balanced Scorecard. Boston, MA: Harvard Business School Press, 1996.
29. Kaplan, R.S. & Norton, D.P. Transforming the Balanced Scorecard from Performance Measurement to Strategic Management: Part II. Accounting Horizons, 2001, 15(2), 147-160.
30. Kasl, E., Marsick, V.J. & Dechant, K. Teams as Learners: A Research-Based Model of Team Learning. Journal of Applied Behavioral Science, 1997, 33, 227-246.
31. Keegan, D.P., Eiler, R.G. & Jones, C.R. Are Your Performance Measures Obsolete? Management Accounting, 1989, (June), 45-50.
32. Kennerley, M. & Neely, A. Measuring Performance in a Changing Business Environment. International Journal of Operations & Production Management, 2003, 23(2), 213-229.
33. Kluger, A.N. & DeNisi, A. The Effects of Feedback Interventions on Performance: A Historical Review, a Meta-Analysis, and a Preliminary Feedback Intervention Theory. Psychological Bulletin, 1996, 119, 254-284.
34. Kotey, B. & Slade, P. Formal Human Resource Management Practices in Small Growing Firms. Journal of Small Business Management, 2005, 43(1), 16-40.
35. Kozlowski, S.W.J. & Klein, K.J. A Multilevel Approach to Theory and Research in Organizations: Contextual, Temporal, and Emergent Processes, in Klein, K.J. & Kozlowski, S.W.J. (Eds.), Multilevel Theory, Research, and Methods in Organizations: Foundations, Extensions, and New Directions. San Francisco: Jossey-Bass, 2000, 3-90.
36. Kravchuk, R. & Schack, R. Designing Effective Performance Measurement Systems under the Government Performance and Results Act of 1993. Public Administration Review, 1996, 39(2), 348-358.
37. Latham, G.P. & Locke, E.A. Goal Setting: A Motivational Technique That Works. Organizational Dynamics, 1979, 8(1), 68-80.
38. Locke, E.A. & Latham, G.P. A Theory of Goal Setting and Task Performance. Englewood Cliffs, NJ: Prentice Hall, 1990.
39. Locke, E.A. & Latham, G.P. Comments on McLeod, Liker, and Lobel. Journal of Applied Behavioral Science, 1992, 28, 42-45.
40. London, M. Job Feedback: Giving, Seeking, and Using Feedback for Performance Improvement. 2nd ed. Mahwah, NJ: Lawrence Erlbaum, 2003.
41. London, M. & Sessa, V. Group Feedback for Continuous Learning. Human Resource Development Review, 2006, 5(3), 303-330.
42. McLeod, P.L., Liker, J.K. & Lobel, S. Process Feedback in Task Groups: An Application of Goal Setting. Journal of Applied Behavioral Science, 1992, 28, 15-41.
43. Moullin, M. Evaluating a Health Service Taskforce. International Journal of Health Care Quality Assurance, 2004, 17(5), 248-257.
44. Murphy, D. Presenting Community-Level Data in an "Outcomes and Indicators" Framework. Public Administration Review, 1999, 56(1), 76-82.
45. Nadler, D.A. The Effects of Feedback on Task Group Behavior: A Review of the Experimental Research. Organizational Behavior and Human Performance, 1979, 23, 309-338.
46. Neely, A. & Adams, C. The Performance Prism Perspective. Journal of Cost Management, 2001, 15(1), 7-15.
47. Neely, A., Mills, J., Gregory, M., Richards, H., Platts, K. & Bourne, M. Getting the Measure of Your Business. Manufacturing Engineering Group, Cambridge: University of Cambridge, 1996.
48. Neely, A., Mills, J., Platts, K., Richards, H., Gregory, M. & Bourne, M. Performance Measurement System Design: Developing and Testing a Process-Based Approach. International Journal of Operations & Production Management, 2000, 20(9/10), 1119-1145.
49. Newcomer, K. Using Performance Measures to Improve Programs. New Directions for Program Evaluation, 1997, 75(1), 5-14.
50. Nicholson-Crotty, S., Theobald, N.A. & Nicholson-Crotty, J. Disparate Measures: Public Managers and Performance-Measurement Strategies. Public Administration Review, 2006, 66(1), 101-114.
51. Ramanujam, R. The Effects of Discontinuous Change on Latent Errors in Organizations: The Moderating Role of Risk. The Academy of Management Journal, 2003, 46(3), 608-617.
52. Skinner, W. The Decline, Fall, and Renewal of Manufacturing. Industrial Engineering, 1974, (October), 32-38.
53. Sullivan, R. Entrepreneurial Learning and Mentoring. International Journal of Entrepreneurial Behaviour and Research, 2000, 6(3), 160-169.
54. Thorndike, E.L. The Law of Effect. American Journal of Psychology, 1927, 39, 212-222.
55. Tindale, R.S. Group vs. Individual Information Processing: The Effects of Outcome Feedback on Decision Making. Organizational Behavior and Human Decision Processes, 1989, 44, 454-473.
56. Vohs, K.D. & Ciarocco, N.J. Interpersonal Functioning Requires Self-Regulation, in Baumeister, R.F. & Vohs, K.D. (Eds.), Handbook of Self-Regulation: Research, Theory, and Applications. New York: Guilford, 2004, 392-407.
57. Wholey, J. Performance-Based Management: Responding to the Challenges. Public Productivity and Management Review, 1999, 22(3), 288-307.

