PILOT QUALITY JUDGEMENT FRAMEWORK – EVALUATION
Report for Care and Social Services Inspectorate Wales

Mark Llewellyn · Welsh Institute for Health and Social Care · University of South Wales · April 2014

CONTENTS

INTRODUCTION AND METHODOLOGY
RESEARCH FINDINGS – PROVIDERS
   1. QUESTIONNAIRE FINDINGS
   2. EMERGENT THEMES
RESEARCH FINDINGS – INSPECTORS AND MANAGERS
   1. SURVEY FROM MANAGERS
   2. EMERGENT THEMES FROM INSPECTOR FOCUS GROUPS
AREAS FOR FURTHER CONSIDERATION
APPENDIX I · INTERVIEW SCHEDULES

ACKNOWLEDGEMENTS

Thanks are due to David Francis at CSSIW for commissioning this study and providing direction at the outset. We are particularly grateful for the contribution of Sue Whitson from CSSIW, who provided us with important context and contacts for the successful completion of the study. As with any such project, this study was only possible thanks to the contributions of the participants – in this case the providers, inspectors and their managers across Wales who took part in the pilot. Their willing engagement with the study, openness and honesty are gratefully acknowledged. The report analyses the findings generated during the course of the evaluation. The ‘Areas for Further Consideration’ are based on our understanding of the evidence presented to us by the respondents, and any errors of interpretation are solely due to the authors. We trust that the independent analysis of the responses will help to ensure that the Quality Judgement Framework continues to develop and evolve, and that CSSIW is able to respond to the challenges facing it, and those that it regulates.

Dr Mark Llewellyn
Welsh Institute for Health and Social Care · April 2014

INTRODUCTION AND METHODOLOGY

The study was commissioned by the Care and Social Services Inspectorate Wales (CSSIW) in early 2014. The Welsh Institute for Health and Social Care (WIHSC) at the University of South Wales was asked to undertake an evaluation of the pilot Quality Judgement Framework (QJF) inspection process to reflect on what worked well, what worked less well and what needs to improve. The purpose of the study was to independently examine the process from a variety of different perspectives – from those of the settings (n=43) that took part in the pilot, the inspectors who visited them, and their managers.

THE QUALITY JUDGEMENT FRAMEWORK

By way of introduction, it is worth describing briefly the premise of the QJF and what the pilot was aiming to achieve. Building on the experience of others elsewhere, CSSIW sought to develop a QJF which moves beyond minimum standards and considers the experience of people using services as the basis of the judgement, i.e. the outcomes of the service provision in relation to their day-to-day experience of receiving a service. The framework that was piloted in Spring 2014 utilised the SOFI (Short Observational Framework for Inspections) as a key part of the methodology to deliver a judgement on each of CSSIW’s four themes: quality of life, quality of staffing, quality of leadership and management, and quality of environment. Inspectors during the pilot were not seeking to combine these into a single judgement, but did give added significance and profile to ‘quality of life’. These four ‘judgements’ were offered against three bands for the pilot: good, satisfactory, and poor. Forty-three children’s day care settings were selected as the pilot sites, each of which was scheduled to receive a baseline inspection during Spring 2014.
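To make the shape of these judgements concrete, the following minimal Python sketch captures the structure described above – one band per theme, no single combined score, and added prominence for ‘quality of life’. It is an illustration only: it is not CSSIW’s own tooling, and the class, function and setting names are hypothetical.

    # Illustrative sketch only (not CSSIW tooling): the shape of a pilot QJF judgement.
    from dataclasses import dataclass
    from enum import Enum


    class Band(Enum):
        GOOD = "good"
        SATISFACTORY = "satisfactory"
        POOR = "poor"


    THEMES = (
        "quality of life",          # given added significance and profile in the pilot
        "quality of staffing",
        "quality of leadership and management",
        "quality of environment",
    )


    @dataclass
    class QJFJudgement:
        setting: str
        bands: dict  # theme name -> Band, one entry per theme in THEMES

        def headline(self) -> Band:
            # The pilot did not combine the four themes into a single judgement,
            # but gave extra prominence to 'quality of life'.
            return self.bands["quality of life"]


    # Hypothetical example usage:
    example = QJFJudgement(
        setting="Example Day Nursery",
        bands={theme: Band.GOOD for theme in THEMES},
    )
    print(example.headline())  # Band.GOOD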

EVALUATION METHODOLOGY

The findings are drawn from four principal evaluation methods:

1. Two focus group discussions were held with the inspectors that delivered the pilot. The first group was held in January after their training had been delivered, but before any of the inspections had taken place; and the second group was held in April after all of the inspections had been completed. An indication of the kinds of questions that were asked at the meetings is contained in Appendix 1;

2. A bilingual online survey was sent to all of the pilot settings once their inspection had been completed and their report had been received. These were therefore staggered throughout the pilot period, and Table 1 below describes the number of responses received. Overall, 28 of the 43 settings provided us with their views, a response rate of 65%;

3. In addition to the invitation to all of the settings to complete the survey, a sample of providers was invited to take part in an in-depth interview about their experiences. The interviews (all of which were completed over the telephone) typically lasted up to 30 minutes, although on occasion longer interviews were undertaken. Seven such interviews were completed; the settings were selected entirely independently by the researchers, but covered the three regions of Wales: North, West and South East. Again, a list of the questions asked during the interviews is contained in Appendix 1; and

4. The managers of the inspectors that had taken part in the pilot were invited to complete an online questionnaire. Of the ten managers that were sent the invitation, two responded.

The group discussions and interviews were recorded with the permission of the respondents. The findings were then thematically analysed as per the chapters below.


Table 1 · Responses to the invitation to take part in the ‘provider’ survey

Survey dispatch date | No. issued | No. completed | Response rate (%)
Week 1               | 14         | 12            | 86
Week 2               | 3          | 2             | 67
Week 3               | 8          | 7             | 88
Week 4               | 1          | 0             | 0
Week 5               | 15         | 6             | 40
Week 6               | 2          | 1             | 50
TOTALS               | 43         | 28            | 65
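As a worked illustration of the arithmetic behind Table 1 (added here purely for clarity, not part of the original evaluation), the short Python sketch below reproduces the weekly and overall response rates from the issued and completed counts; the figures are those in the table above.

    # Minimal sketch: reproducing the response-rate arithmetic in Table 1.
    # Rates are rounded to the nearest whole number, as in the table.
    weekly = {  # week number -> (no. issued, no. completed)
        1: (14, 12),
        2: (3, 2),
        3: (8, 7),
        4: (1, 0),
        5: (15, 6),
        6: (2, 1),
    }

    for week, (issued, completed) in weekly.items():
        print(f"Week {week}: {completed}/{issued} = {round(100 * completed / issued)}%")

    total_issued = sum(issued for issued, _ in weekly.values())
    total_completed = sum(completed for _, completed in weekly.values())
    overall = round(100 * total_completed / total_issued)
    print(f"TOTALS: {total_completed}/{total_issued} = {overall}%")  # -> TOTALS: 28/43 = 65%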

The responses reproduced here are anonymous in that names of the participants and identifiable details have been removed from the quotations (in italics) below. That said, in order for the voices of the participants to resonate, the original intonation and sense of their words has been preserved. Quite deliberately this report uses a large number of quotations to ensure their perspectives are heard, and whilst the material is organised in a way that reflects the key themes that emerged, care has been taken not to over-interpret the data.

KEY MESSAGES

Ahead of becoming immersed in the following sections of the report, it is important to bear in mind the following four key contextual points:

1. The report deliberately contains a lot of detail and it should be stated at the outset that, on balance, the following evidence provides much more support for the QJF pilot than criticism. That is not to be blind to the suggestions for improvement (and significant space is given over to ensure that these points of view are ‘heard’ below) but it is to recognise that the majority of providers, inspectors and managers felt this was an important and effective development of the work of CSSIW. In particular, and despite the operational challenges of the pilot, the inspectors who worked on it saw considerable benefit in utilising the QJF approach in these and other settings. They perceived that it enhanced the work they currently do;

2. There are limitations in this, as in any study of this kind. It is important that CSSIW interprets this evidence in context, and uses the opportunity to continue to learn from the experience of providers, inspectors and their managers as they develop the QJF approach in other settings. That said, there is significant merit in having engaged an independent organisation to complete this study, despite the fact that we were working with a relatively small sample, and that there remains a need to draw from a wider evidence base over a longer period of time to be more definitive about the outcomes from this approach;

3. Elements of the QJF method will need to be continually modified in the light of this wider evidence base – it would be foolhardy to conclude that the evidence presented here represents the ‘last word’ in the development of the QJF; and

4. It needs to be borne in mind that a number of the questions were deliberately worded to identify where improvements were needed. As such it is not uncommon in evaluations such as this for people to provide much more detail on the negative than the positive elements of the experience.


STRUCTURE OF THE REPORT

The report is presented in three distinct chapters. The first considers all of the findings from the providers, the second contains the feedback from CSSIW staff – the inspectors and managers, and the third concludes the document with ‘Areas for Further Consideration’ that are drawn from the findings.


RESEARCH FINDINGS – PROVIDERS

The research findings in this chapter are presented in two sections. Firstly, the data from the ‘hard coded’ questions within the questionnaire is presented and discussed. Secondly, the open text answers from the survey and the data from the interviews are combined and organised according to theme – against the broad positives and negatives of the pilot as reported.

1. QUESTIONNAIRE FINDINGS

As described above, 65% (n=28) of the settings that were invited to take part chose to fill in a questionnaire asking about how they felt the process of this pilot was handled, and what they considered to be the strengths and weaknesses of being given a quality judgement. The invitation was sent to the email address held for the setting and did not prescribe exactly which person should fill in the questionnaire. By way of context, Table 2 provides a list of the roles/job titles of those that completed the survey.

Table 2 · Roles / job titles of those completing the survey

Acting Manager (1); Centre Manager (1); Childcare Leader (1); Creche Team Leader/Responsible Person (1); Deputy Manager (1); Manager (6); Nursery Director (1); Nursery Manager (5); Nursery Manager/Person in charge (1); Nursery owner/Manager (1); Owner/Director (1); Owner/Manager (2); Proprietor (1); Registered Person (1); Registered Persons (1); Responsible Person (1); Secretary (1); Temporary Manager (1).

In the first part of the questionnaire, providers were asked their view about how the process had worked in practice. Table 3 presents the findings from the answers given against the four statements. Taking each in turn, answers were firstly very positive in response to whether providers felt that the purpose of the pilot was clear to them, with 26 of the 28 respondents replying positively – the remaining two respondents were on the negative side. There was a similar positive trend, albeit not as clearly expressed, when reflecting on how the pilot had worked in practice. Three-quarters of the respondents either ‘agreed’ or ‘strongly agreed’ that this had been positive, with two respondents replying negatively to the question. There was a much more mixed picture when providers reflected on whether the inspections had lengthened, and the relationship with inspectors. In respect of the former, there was a spread across all answers in the range, and when commenting on the relationship with inspectors, opinion was again divided.


Table 3 · Number and percentage of responses – Process (n=28)

Statements:
a. The purpose of the pilot was clear to me
b. The new way of doing things worked well in practice
c. The pilot took longer to complete than other comparable inspections
d. The relationship with the inspector was better than in previous inspections

Response                   | a          | b          | c          | d
Strongly agree             | 32.1% (9)  | 17.9% (5)  | 21.4% (6)  | 17.9% (5)
Agree                      | 53.6% (15) | 57.1% (16) | 10.7% (3)  | 7.1% (2)
Slightly agree             | 7.1% (2)   | 10.7% (3)  | 7.1% (2)   | 3.6% (1)
Neither agree nor disagree | 0.0% (0)   | 7.1% (2)   | 28.6% (8)  | 50.0% (14)
Slightly disagree          | 7.1% (2)   | 3.6% (1)   | 3.6% (1)   | 7.1% (2)
Disagree                   | 0.0% (0)   | 0.0% (0)   | 17.9% (5)  | 0.0% (0)
Strongly disagree          | 0.0% (0)   | 3.6% (1)   | 3.6% (1)   | 3.6% (1)
Don't know                 | 0.0% (0)   | 0.0% (0)   | 7.1% (2)   | 10.7% (3)
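By way of illustration, the short Python sketch below shows how the percentages in Tables 3 and 4 follow from the raw counts, with n = 28 respondents per statement, using statement (a) from Table 3 as the example. It is a minimal sketch added for clarity and is not part of the evaluation’s own analysis.

    # Minimal sketch: deriving the percentages in Tables 3 and 4 from the raw counts.
    # Counts below are those reported for statement (a) in Table 3; n = 28 respondents.
    N = 28
    statement_a = {
        "Strongly agree": 9,
        "Agree": 15,
        "Slightly agree": 2,
        "Neither agree nor disagree": 0,
        "Slightly disagree": 2,
        "Disagree": 0,
        "Strongly disagree": 0,
        "Don't know": 0,
    }

    assert sum(statement_a.values()) == N  # every respondent is accounted for

    for response, count in statement_a.items():
        print(f"{response}: {100 * count / N:.1f}% ({count})")
    # e.g. "Strongly agree: 32.1% (9)", "Agree: 53.6% (15)", ...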


Half of the respondents neither agreed nor disagreed that the relationship was better, with just over a quarter expressing that they felt things had indeed improved. One in ten perceived that the relationship was worse than in previous inspections.

In the second section of the questionnaire, providers were asked to reflect on being given their quality judgement, and the implications this had for them (see Table 4). The vast majority of respondents (>90%) agreed that the idea of having a judgement was a positive thing for their setting, although when it came to the accuracy of that judgement, there was a much more mixed response. Nearly one in five respondents disagreed with the accuracy of the judgement that they were given (n=5) compared with nearly 40% of those who replied that they strongly agreed with their outcome.[1] There was greater unanimity of view around how settings felt about the opportunity to discuss the judgement they had been given. Only one respondent perceived that they had not had the opportunity to discuss the judgement, with approximately 90% agreeing or strongly agreeing that they had enjoyed the chance to talk this through with the inspectors. Thinking more about the future of the QJF, a majority of respondents (80%) agreed that there was a potential for the process to positively influence standards of care in other regulated settings in the future. Around one in ten was undecided, with two respondents disagreeing with the statement.

Overall, there was a mixture of views represented in these data. Care should be taken not to over-interpret the findings and look too closely at the percentage values given the overall small sample size. That said, the trend is a more positive than negative one, with a degree of unanimity of view expressed on occasion. Perhaps most positive is the fact that 26 of the 28 respondents felt (albeit to varying degrees) that they liked the idea of being given a judgement about the quality of care in their setting.

2. EMERGENT THEMES

The following section draws on data from two sources – the open text questions that formed part of the questionnaire,[2] and the in-depth interviews with seven providers drawn at random from the 43 pilot settings. The data was analysed thematically, and is arranged in two substantive sections below – the broadly positive and broadly negative comments made about the QJF pilot. There is no ‘scientific’ rationale for how the issues are presented, other than to say that they have been arranged to reflect the number and strength of feeling expressed, i.e. the closer to the top of the list the issues appear, the more people had to say about them. It must be noted that there were, overall, fewer positive comments made about the QJF pilot than negative ones. That said, settings did identify a number of constructive outcomes from the pilot, which meant that it was generally well received.

POSITIVE COMMENTS

Despite other comments to the contrary (see below) there was some positive feedback about the fact that providers felt that they knew what was coming in the pilot: Information was provided well in advance about the new process which enabled this to be relayed to all nursery staff members. A range of topics were identified, but the issue that attracted most positive comment from the pilot was that the QJF allowed settings to be more specific in identifying areas for improvement than previous ways of working with CSSIW had permitted. The following sections describe these key themes.

[1] It should be noted that as the researchers were unaware of the judgements given to settings, no correlation analysis was undertaken to test whether the level of the judgements given (good, satisfactory or poor) had any impact on whether respondents felt they were accurate.

[2] The following were the open text questions asked: what comments would you make (either positive or negative) to support your answers?; what are the benefits of receiving a quality judgement in this way and what are the drawbacks?; what were the positives and negatives of this new way of undertaking inspections from your point of view?; what improvements would you suggest to improve the process and the experience of receiving a quality judgement?; and what challenges exist for CSSIW in making this new approach work effectively?


Table 4 · Number and percentage of responses – Receiving a Quality Judgement (n=28)

Statements:
a. I like the idea of being given a judgement about the quality of care in our setting
b. I think that the judgement we were given was accurate
c. I was able to discuss my view about the judgement given
d. The QJF has the potential to positively influence standards of care in all regulated services in the future

Response                   | a          | b          | c          | d
Strongly agree             | 35.7% (10) | 39.3% (11) | 42.9% (12) | 35.7% (10)
Agree                      | 46.4% (13) | 28.6% (8)  | 46.4% (13) | 39.3% (11)
Slightly agree             | 10.7% (3)  | 10.7% (3)  | 7.1% (2)   | 7.1% (2)
Neither agree nor disagree | 0.0% (0)   | 3.6% (1)   | 0.0% (0)   | 10.7% (3)
Slightly disagree          | 0.0% (0)   | 10.7% (3)  | 0.0% (0)   | 3.6% (1)
Disagree                   | 3.6% (1)   | 3.6% (1)   | 3.6% (1)   | 3.6% (1)
Strongly disagree          | 3.6% (1)   | 3.6% (1)   | 0.0% (0)   | 0.0% (0)
Don't know                 | 0.0% (0)   | 0.0% (0)   | 0.0% (0)   | 0.0% (0)


‘The QJF is constructive and you are given the areas that you need to improve...’ Generally, providers were very clear that one of the main benefits of the QJF was that it was a more useful ‘diagnostic’ for where improvement was needed: I liked the new way of doing things particularly because the inspector was very clear about things in the feed-back and I felt I could use that information to move individuals (and our nursery) on in the right direction. This sense was, of course, only effective when, in the words of one provider, there was a detailed set of reasons behind their judgements. The following comments provide an insight into the views of providers as to how the QJF provided ‘added-value’ over the previous approach of CSSIW: It allows the setting to show good practice. Allows the setting to make changes where necessary Parents will have a better understanding of the areas inspected The benefits of this system are that you can pinpoint areas that need addressing - if you agree with the judgement. Once the inspector explains their reasons for the judgement, you can work on improving that area. The nursery can target specific areas that need improving directly by the inspection providing separate judgement for the different areas of provision. This enables strengths and weaknesses to be established and by continued inspections in this manner we can establish whether or not these are being improved on or not. I like the idea of being given a judgement about the quality of care in our setting - we work hard to achieve good standards and it is valuable for this to be recognised and in the future be made public for potential parents to view. I do think the judgements given were accurate - and it was useful when the inspector shared where she thought we were on the scale (i.e. just achieved ‘Good’ or moving over to the other end of ‘Good’ so that if a higher judgement came in at some point in the future we could possibly achieve it). A good manager can use these judgements to help move their setting / the team forward. ‘It was very encouraging to have a stronger emphasis placed on interaction with children...’ The change of emphasis in this inspection, towards closer observation of the child using the SOFI methodology, was noted by providers and generally felt to be one of the positive outcomes of the pilot: Generally this is much better. I liked the fact that it was very much more observation based and the comments in the report mean that they were watching closely what was going on. They probably saw more of the day-to-day experiences of the children than previous inspections have. Providers felt that this was a more appropriate means of undertaking inspections in these kinds of settings: It’s easier on this model as the focus is on trying to get a feel for what’s going on between staff and children, and that’s what parents are interested in as well. I like the fact that there is a greater focus on the childcare and that this is stressed in the report, after all this is the most important thing in a parent/carers’ mind. In the past I’ve felt there’s been a preoccupation with the paperwork which was overly forensic about the nature of very small specific details, which didn’t focus on the nature of the children’s experience – and I can see in this model a real attempt to move on in, and a new way of working. 
Linked to this was the feeling that the QJF pilot would afford parents a much clearer insight into the work of the settings, and how they rated compared to others: It is a direct judgement and parents can see at a glance the standard identified by CSSIW. It is easier for parents to understand the grading and compare reports. This new focus on the child at the centre had not only the potential to impact on parents, but also positively impact on the broader economy in which settings are operating: It will ensure that the general public have a clear understanding of what service they are about to access instead of being bombarded by insufficient and inaccurate information by the facility. This should encourage standards
to rise within our industry and force those not willing to offer a quality service out of the market. ‘The inspection was conducted professionally – I can’t fault the inspectors at all...’ The approach of the inspectors themselves came in for considerable praise. Interestingly whilst there are criticisms of the process in the section that follows, there were no comparable negative comments about the work of the inspectors themselves. Providers reflected positively on the professionalism and general conduct of the inspectors, despite the obvious challenges in undertaking a relatively new task: The inspectors that attended our setting were very fair and approachable. They were very thorough and made the whole experience easier to get through. Before commencing the inspection they went through what they would be looking at and were happy to answer questions I had. We found the inspector to be professional but also friendly and courteous. The relationship with the inspector was the same as with the last inspector. We had seen the last inspector regularly and built a rapport with her, however, we felt that the new inspector was easy to speak to and did not find any problems. It was a pleasure to meet this inspector. I would like to thank [the inspector] for her visit, she was a wonderful representative for CSSIW, she in no way affected the way in which we provided care for our children, and fitted around our routines. She was friendly and warm, and the children and staff interacted well with her, feeling settled and comfortable around her. I find no problems with the QJF and find it informative and beneficial to both the CSSIW and settings. Thank you. For me a key ingredient in a useful inspection is the quality of the people doing the inspection, and I’m happy to say that I had two very professional and capable inspectors, and that hasn’t always been the case. I’ve welcomed the application of a series of judgements was much more applicable. They came in it seemed to me without an agenda, an open mind and a determination to make objective, impartial and evidence-based judgements. ‘I liked the fact we were able to talk about the results with the inspector on the day...’ Providers also reflected positively about the fact that there was an enhanced opportunity within the QJF pilot to talk through the conclusions with the inspectors: Positives were the feedback from the inspector. Hearing about all of the things she thought were good and knowing that she has looked at ALL aspects of our service. In a focused visit often things are missed or you are not given opportunity to report on other areas of the service. It was felt that this was especially valuable given that this created greater parity between the approach of CSSIW and Estyn: The meeting at the end was a positive session and we’re Estyn registered and the meeting felt like one of their meetings which I think is a good thing. That’s a good direction to be moving in if you ask me. We clarified a couple of details and went through each section and they provided us with positive examples of what they had found. I think the inspection worked really well and I think it’s definitely the way forward. It’s very similar to the Estyn style of inspection. Before this inspection there was a tendency to be quite general, whereas this time it was much more specific. I liked the particular nature of the feedback. Other issues - positive Three further issues were identified by respondents as being positive. 
The first centred on the fact that the QJF pilot had the feel of a more holistic way of understanding the work of the settings, rather than somewhat forensically unpicking practice with a ‘tick-box’ mentality: The positives were that it looked at the nursery as a whole, and gave an oversight into positives and negatives. By having two inspectors it gave one the chance to go through office paperwork whilst the other observed staff practices and children. I think all of this new way of undertaking inspections is good as the inspectors focus on all parts of the nursery and not inspect one particular part, which I think is good as it lets me know that I need to achieve high standards in all parts of the nursery.


Secondly, some respondents identified that having new inspectors, as well as a new approach to inspection, was of particular benefit: The inspector was new to myself, and it did help that she was Welsh speaking in a Welsh setting. We’ve had the same inspector coming here for a number of years now and because we had two new inspectors this time it was nice to have a fresh approach. The impetus of new faces who were very professional was a really good thing. It felt very good having new people come along, as when the previous inspector came it was like having a friend pop in for a cup of tea! It was a nice feeling to put ourselves a bit out of our comfort zone, and while I understand that it’s important to build a relationship with an inspector, it was a good experience to ensure that the new inspectors understood and saw all the things that we offer here. Finally, one of the respondents drew on some comparisons between the English and Welsh approaches to judgements, which reflected positively on this pilot: a colleague who had worked in England felt that this judgement framework was better than the English system because it had grey areas within it and that things were not just black and white, even though on the face of it it looks like there are much greater areas of subjectivity in there.

NEGATIVE COMMENTS

The following section provides a counterpoint to what has been presented above and, in some respects, offers a mirror image to some of the positive reflections on the process. Before discussing the specific themes that emerged from the analysis, the following quotation provides some overall context for this section. It is presented neither as representative of all providers nor indeed a majority of providers, but it does provide a powerful insight into the challenges that need to be met with a minority of providers, and speaks very strongly to the need to provide further clarity about the approach: The criteria for each judgment made was worded in such a way that can only be deemed as “good is not good enough!” Although we are above minimum standards, the words “satisfactory” would repel future possible service users, and make those who do use our service lose faith in the good service we provide! Yes, the quality of care that each child receives is highly important, but is not something that can be made from a short observation, and we certainly shouldn't be labelled as such for as long as 12 months based on a small observed “snippet” when certain circumstances are not the norm. It was very clear that in some areas the lines/points between one judgement and the next were very fine indeed, and we should have received higher judgements in ALL areas, mere wording decided otherwise. It appeared as though they had a predetermined number of settings that they were going to place into the “satisfactory” category. ‘We are worth so much more than the judgments made...’ The single biggest criticism of the pilot was that settings felt that the process of being given a judgement did not fully take into account nor reflect their practice.[3] There were a number of specific issues that respondents pointed to. Firstly, some wondered about the nature of the methodology, and in particular the SOFI approach, and whether this is best suited to making such judgements: The drawbacks are that the observations carried out on individual children do not reflect a true picture of a child in a short space of time as an inspector does not have the same knowledge of the child as staff and the child does not always react or behave in the same way on every occasion. The benefits highlight good practice and also areas that can be improved.

[3] As above, it should be noted that as the researchers were unaware of the judgements given to settings, we can draw no inferences about the comments made in this section. It is not possible to test whether the level of the judgements given (good, satisfactory or poor) had any impact on whether respondents felt they were accurate, other than where respondents make this explicit in their own words.


Others felt that there were problems in the amount of time that CSSIW had spent in preparing people to undertake inspections in this way, and whether the current balance between the types of activities that inspectors engage in whilst they are with the setting, and the nature of evidence that they produce is correct: Before our inspection I thought this was a good way of giving parents a more tangible way of comparing child care settings and helping their decision making process. Having received this inspection and the outcomes I am not convinced whether it be the process itself or the people conducting it, but it doesn't seem to me on our experience that it is really going to give a fair reflection of what a nursery has to offer. I think more work needs to be done on the criteria and the evaluation process. The process and its validity depends on the attitude and fairness of the inspector. By its nature, the inspection is a snapshot of the nursery. More credence needs to be given to parents' and the staff's views. Time needs to be taken to assess the conditions of the day and value judgements made using that information. It will not work if inspectors don't listen to a provider's explanation of events. Inspectors can be too critical over minor matters and if a provider tries to express a point of view it can just be brushed aside. In the past we’ve had a good discussion with the inspectors but this time it appeared that it was very rushed, and they had no answers at all about how they had applied the criteria. I was left with many questions about how they had come to their judgements and they were unable to answer them. There was no baseline or benchmarked evidence to support the judgements they had made. I found the process very subjective and I found that I didn’t agree with the judgement made. That’s fair enough but I expected the inspectors to be able to back up the points that they had made. Lots of the statements that they had come to had no corroborating evidence underpinning them. It’s the process that I’m critical of and not the inspectors themselves. The reason for the amount of concern expressed was related to the kinds of consequences that settings perceived when the judgement did not take account of the services they provide: If you got a judgement that was lower than what you'd expect then it could have a very negative impact. As these judgements will be made public in the future - they will be a valuable source of information for potential parents and it could put them off coming to the setting - it could affect the setting’s reputation which in turn could have a detrimental effect on its feasibility. This is all very well if the judgement is fair and really does reflect what is happening in the setting and action plans can be done to improve things but what if it was because the inspector's snap-shot was not a real reflection e.g. they chose to do a SOFI observation on the newest member of the team who was still settling and then used this to make a overall judgement for quality of staff. A good inspector will take everything in to consideration and ensure their snap-shots are important but must be considered amongst all the other evidence (i.e. SOFI observations can take a long time to do which is fine but I don't think it is wise to focus on one area at the expense of spending time evenly around the setting). 
‘All inspectors need an agreed standard mark and expectation, and not interpreting the information on their own...’ Linked in some ways to the issue above were a range of concerns about how far the judgements made were consistent one with another: what are the levels of expectation and how can we make sure the inspectors are working to the same expectations and interpretations. Much of the criticism was borne out of a general lack of information about how the judgements were arrived at and the process that inspectors followed: The quality of care has always been assessed but individual inspectors have individual ideas about quality, which is a subjective notion. I would like more information about SOFI to understand how this copes with subjectivity and objectivity. After such a long inspection, time was a need to reflect
on the process - this was not available after such a long day. I believe that everyone's judgement could differ to the situations, so there are no guidelines on getting a 'good' rating. We don't think that it is a fair process to average out marks into such small categories, poor, satisfactory and good. None of which are particularly positive. I was able to discuss the judgement given however I would have liked some paper feedback even though this may be a pilot. As previously mentioned, all people have different views and 'judgements' about things. Also there were no set guidelines to follow for us as a placement to strive for to gain top marks. This point was made effectively by one respondent who had experienced the very difference in approach as alluded to by others: Obviously for nurseries that work hard and offer a high standard of care the judgement will set them apart from other nurseries. A drawback could be if inspectors have different standards and are not all Inspecting in the same manner. Over the years inspectors have said something is excellent here, then we have a new inspector who says they do not like that method of doing something. ‘We’ve got lots to show inspectors and they perhaps didn’t get around everything that we do...’ Concern was expressed about the overall time allocation that had been made for the pilot inspections. As in the survey, more people suggested that the new form of inspections had taken longer than previously, and the general consensus was that there was not enough time, even with two inspectors, to complete all the tasks that were needed: We weren’t able to get through everything that we wanted with them during their time here which is obviously a bit of a drawback. The pilot did however take longer than other comparable inspections as it finished 20 minutes after the nursery usually closes at 6pm. The inspection did take a long time but I feel this was due to it being the first of its kind for the two inspectors who inspected us on the day. It is difficult when an inspection runs for a long day as staff work shifts etc. Feedback was late in the day and some staff had been working from 7.30am myself included, obviously we wanted to see the outcome of the inspection and waited for the feedback meeting. Staff and myself have families and other commitments, hence why we work to a rota system. It could maybe help if feedback was given on a separate occasion when is convenient for both parties. Both inspectors on the day I am sure would have done this but you worry about asking. This concern was especially acute for those providers in charge of large settings: It would be good to split the inspection up into a morning and an afternoon session. This would ensure that we’d be able to showcase everything that we’re doing. We’ve had successive positive reports and the documents that are produced are of significance to us because we operate within an environment where there is a Governing Body so we want to make sure that all of our good work has been seen. It’s important then that the reports reflect accurately everything that goes on here. In addition, it was noted that disagreements about the judgements reached on the day would extend the amount of time that inspectors would need to complete their work: I feel that it would be a lot of work for one inspector alone to complete, and in a setting disagreed on a judgement I feel that the inspectors would have to strongly evidence and give reasons for their award. 
Other issues - negative Two other substantive issues were raised by the providers in their feedback. The first concerned the lack of information and communication received about the new approach and how it worked in practice. This was a concern because providers felt a little unprepared for what was happening: We did not receive paperwork enough in advance to have a good read through. We didn't appear to see the inspectors doing anything different.


Being honest I thought that when we’d been asked to be part of the pilot I thought we were supposed to be having an invite to come to CSSIW headquarters to clarify what we might expect from the pilot. This didn’t materially impact on the day but it was unclear to me and the staff about what to expect and what might be happening. Staff were very aware that she was taking notes the whole time, with little interaction and no immediate feedback. Staff were unaware of the purpose of the pilot though the owner and I were given information at the beginning of the inspection. She tracked some children but didn't explain to the staff the purpose of what she was doing. It was too much like a judgement. Secondly, and linked to this in some ways, was a perception that the QJF pilot was somewhat less inclusive than the previous approach of CSSIW: Last time we were inspected staff were engaged in conversations, this time they weren’t. They spent a disproportionate amount of time in the office than last time, and the close-up meeting at the end was much shorter. We felt totally excluded from this process.


RESEARCH FINDINGS – INSPECTORS AND MANAGERS

The research findings in this chapter are drawn from two sources – a questionnaire for managers of the inspectors, and two focus groups that were undertaken with the inspectors themselves. There is only a very short section on the feedback from managers as only two of the 10 that were invited to take part offered any information, whilst the majority of the data presented here is taken from the inspector focus groups.

1. SURVEY FROM MANAGERS

As noted above there were two responses to the survey, and interestingly against the four ‘hardcoded’ questions, both managers gave the same responses. They both ‘agreed’ that the purpose of the pilot was clear to them, and ‘agreed’ too that the new way of doing things worked well in practice. A reason as to why was offered by one of the managers: the inspector expressed reservations that although the new way worked well in practice this might be attributed to the higher risk services being inspected early in the year leaving better services to inspect during the pilot. It would be interesting to see how the new way works with poorer services. They ‘strongly agreed’ that the pilot took longer to complete than other comparable inspections – the inspector confirmed that preparation takes longer but this might get better as people become more familiar with the process – but ‘neither agreed nor disagreed’ about the fact that the relationship between the inspector and the setting was better than in previous inspections. In terms of overall positives and negatives they identified the benefit of drilling down on what ‘good’ looks like, and that the use of SOFI was good for identifying good interactions. Also, one of the managers felt that the new inspections had ‘seen’ more of what was going on: more was noted in the inspection with the focus being on quality for people using the service. Two improvements were identified – around the process and training: The methodology needs to be more prescriptive in terms of pulling the information together to inform the judgement; Training for all [is needed] which I believe will be provided. Finally they remarked on what they considered to be the key challenges for CSSIW in making this new approach work effectively: Informing providers of the new way of working and what the scores mean. Training is a must for all staff, along with some protection with case-loads in the early stages while people become familiar with the process. Training for staff, and a review of the time taken to inspect and produce the report. Consistency of judgements will be difficult, alongside responding to challenges when services disagree with the judgement given.

2. EMERGENT THEMES FROM INSPECTOR FOCUS GROUPS

The following sections of this chapter present the findings from the two focus groups with the inspectors that took part in the pilot. There were 11 participants at the group that was held in January before the pilot began, and 12 inspectors were present at the group that was held in March after the pilot had been completed. The discussions were very open ended, but structured around a series of questions that can be found in Appendix 1. The discussions were very frank, honest and candid about the process of engaging in the pilot, and this is reflected in the nature of the comments reproduced below. Not surprisingly there was considerable overlap between the kinds of issues that were discussed in the ‘before’ and ‘after’ groups and as elsewhere in this report, the order in which the topics are presented hereafter reflects the prominence and magnitude of the issue in the discussion i.e. those that were of greatest significance for the discussants are presented first. Ostensibly whilst the same territory was covered in both discussion
groups, there were subtle changes of emphasis and accordingly the order in which the issues are presented differs. It is very important to see the information below in context – in that it represents two very different perspectives – one offered in the nervous atmosphere of stepping into somewhat uncharted territory, and another looking back over a challenging period. What both accounts do is to provide an honest insight into how the inspectors felt at the time, and they provide a number of interesting learning points. The respondents were able to identify a number of issues that need to be dealt with in order to improve this process for others.

BEFORE

Resources and workload The most prominent issue in the first discussion group was a series of concerns over the resources that were available to inspectors and how they would effectively deal with the workload pressures that the pilot was, in their view, inevitably going to bring: We want to do it right – so if they want to learn from it so in order to do it right I wanted to have time on my side that I could sit down and plan. If I’m going into a day care setting I’m going to spend all day there because I need to be confident that when I give my judgment and I leave that I’m not going to come away and feel like I need to change that judgment because that leaves me exposed. It’s not an exact science – the amount of time you spend there is dependent on history and size of the nursery and it depends on what you find when you walk through the door…so it’s not an exact science….but in terms of case load allocation and the tool they use they say that a day nursery should be done in a day and that includes planning, visit and report writing. I just don’t think that’s possible because we have to sit and observe we can’t be doing anything else where previously we could have been checking files and keeping an eye on what was going on. I think it’s really valuable but then if they could resource it to have two inspectors – one doing the observation and one doing the paper checking. I know I’ll need to take the time to take the time while I’m there to make sure I’ve got the evidence. I can’t walk out of there by half past two and say you’re ‘satisfactory’. I know some of them are going to say “what you going at half past two for – you haven’t seen this and you haven’t seen that” and I know some people will challenge me on that. So if that’s the case then the management need to understand that. This point was exacerbated for some by the fact that they were aware that they would have to be visiting very large settings as part of the QJF pilot, which they felt would bring additional pressures to bear: I’ve got a 146 place nursery on my case load – I wouldn’t feel comfortable not doing a full day there and making a judgment. Lack of preparation time Very much linked to the concerns expressed above were a number of comments about the fact that there hadn’t been an enormous amount of time to make preparations for the pilot. The following comments are somewhat blunt and pointed, but do represent a number of the anxieties that were expressed by inspectors ahead of the pilot starting: It just feels a little rushed – we’ve had huge debate this morning – it’s been very interesting, very positive and we’re not negative are we? We’re very committed to pushing up standards and we see the judgement system as a way to do that but it’s rushed. I mean it would have been helpful if we had had all the paperwork but we haven’t. This is why I think it’s all being done in such a rush. It’s like all built on sinking sands, moving sands really because we’ve got the descriptors for ‘good’, we haven’t got them for ‘satisfactory’ so we just have to make
that up, we haven’t got the report structure and we have to make that up too. We haven’t got the papers in place to go out and do it so we are going to need the time to put our own things in place and make sure we do it and we know what we’ve got to do and we’re capable of doing it and we’ve got to do that paperwork ourselves. My feeling is that I thought I was going to come into something that was a little more prepared. When I had the pack handed to me I thought it’d have a draft report template, a draft observation plan and tools to help me do my job. I now realise it’s those sheets that we are going to fill in that are going to shape those tools. And I’ve got that in my head now and I’m ok with it but I’d rather have been where I thought I was going to be. It arms you better. The tools could have been there to make life – not easier – but perhaps more straightforward. Support from CSSIW – managers, peers and others The next most significant issue that emerged centred on the professional support that was available to the pilot inspectors. Positively, the inspectors felt reassured by the support that they felt was on offer from their peers: I feel satisfied with my colleagues. In it all, because I think I would be able to say I am struggling with this the only thing that I struggle with is time. I will do it but in terms of going out there and having the confidence to do the judgement I feel that I would like more but based on we’ve got we’ve got no option it’s case of run with it and come back and think right this is what didn’t work and this is what we need to improve on. What helps me through that the most is that I’ve got good colleagues here and elsewhere to draw upon. That said, there was less confidence expressed in the support that managers might be able to offer for a variety of reasons – whether because of time pressures on those individuals, or because they only had a very limited understanding of the purpose and mechanics of the pilot: My area manager might not be involved in the quality judgment framework and someone else’s might be – so they might have a better understanding and be more supportive to the inspector because they understand what it’s about and someone else might not have that support. And I think all managers need to have the same level of understanding. Do we want to do this right and allocate the time that it needs or do we want to make sure that we hit our targets? I don’t have the support from my manager. My manager is adult focused, so when I make my professional judgments with my day care background, we’re not seeing it in the same way. So when reports are challenged it’s not necessarily going to be whose got the same professional knowledge that are dealing with them which I think is a bit of a problem. I agree that some of the things we’re worrying about might not ever come off but I would still like to know that I have got the confidence there if my manager is off and I go to another manager that I will get the same support, and advice that I need. Making the judgement – evidence and challenge Ahead of the pilot starting, there was a degree of anxiety expressed about the way in which evidence would be collected, and how robustly it would be able to stand up to challenge: I know I sound negative and I am not. I am looking forward to going out and doing it and embracing it and all of that but…we haven’t got any information, proper information to write on. 
All we will be doing is going out and trying to give a judgment on what’s good on you know what information really, giving a professional judgement that’s going to be wide open to challenge. What are the other areas of this you are concerned with? Challenge, and whether our data holds up to the challenge. For one respondent, echoing the views of many others, the issue of how the CSSIW ‘system’ would respond to challenge was a significant concern. It was linked to the comments about support from the organisation (see above) but in some ways was exacerbated by the fact that a professional judgement
would now be the focus of any challenge that was made: Challenge and complaints for inspectors is an occupational hazard that’s not going to go away but I just think precedent shows that managers are too quick to change and tone down inspectors’ judgments and for me there’s not enough information at the moment to show what support systems are going to be in place to make sure that doesn’t happen. As far as I’m aware there’s nothing gone out to providers to indicate what they can challenge and what they can’t challenge. So factual inaccuracies, for example, are fair enough – I’ve said there are 40 children and there are 52 – so we change it. But if I have made a professional judgment based on my observations and what I’m asked to do against the indicators and descriptors and I’ve worked within policy and guidance I would not expect the organisation to even investigate a professional judgment challenge. How can this work otherwise? Making the judgement – inconsistencies Connected to the issue discussed above, the inspectors had a very open conversation about their concerns over ensuring their judgements were consistent one with another. Positively, one of the inspectors noted that current practice gave grounds for optimism: We’ve done an activity where we’ve all given a grade and we’ve been fairly consistent. However, others were a little more worried that inherent differences in approach would be exposed by having to be more explicit about their judgements: I’ve talked to colleagues from other regions and how they work and I don’t recognise some of it, and there’s inconsistencies within our own area as well you know and I think this new judgement framework will just add to those inconsistencies. I thought we were coming together last time to work out the descriptors and for us to be given an opportunity to say what we feel is ‘good’ is so that we could write the descriptors. We arrived yesterday and the descriptors are there so somebody is now telling me what ‘good’ is so I’ve got change and my way might be slightly different. What they say is ‘good’ I think might be satisfactory. So I have to start thinking differently. In addition, the issue of managers being somewhat removed from the detail of the pilot was also identified as potentially problematic: I feel the inconsistencies are at the manager level because I might have an issue and my manager will have my back and a colleague will have exactly the same issue and their manager is saying that things have to be changed. My worry is the consistency at the next level that’s going to have the impact on what we’re doing on the ground and we’re the ones then that have got to go back into that setting a year later and I think they are going to challenge us again. Training Underpinning many of the topics covered was a conversation about the training that was needed to help inspectors to deliver the pilot effectively. There were two distinct perspectives expressed – as exemplified in the quotations below – on the more positive, and less positive sides of the debate: For me the training has been better than I thought it would be and I feel a bit more equipped now and I am happy to be part of the pilot. I am sure my colleagues are going to have phone calls from me whilst I am in the pilot but I think that’s true of more people than me. The materials and training haven’t been robust enough to give you the confidence to say if I followed these processes I will be OK, regardless of whether I think it’s right or wrong. The robustness isn’t there.
Some of the things we’ve heard over the three days, my feeling has been this is basic and I’ve sat there thinking “this is what we do anyway”. There’s a big difference between being confident and assured so I know what I’m doing. I’m walking out of here this afternoon with more doubts in my mind than there are certainties… Making the judgement – relationship with provider There was also a discussion about the relationship that inspectors hold with providers. In part this was Quality Judgement Framework Pilot – Evaluation for CSSIW · April 2014

Page 17

due to the fact that the inspectors perceived that not enough information, especially about the judgement itself to the settings – what would be nice is when they send out the information about being selected for the pilot is having a letter or a statement on that letter that said factual inaccuracies can be challenged and will be changed but the professional judgments may not be changed – but also centred on issues like the timings of visits – it’s also about fitting the day in with the important people that are there because they are working and especially at the end of the day they might be counting numbers, staffing, so you know it’s them making and time and we might have to wait till they finish to feedback. That said, there was a degree of challenge to these worries: I think we are worrying about it a bit too much. I think because we are day care and they are used to Estyn and they are used to Estyn grading I think that they will be more used to it. Overall Notwithstanding all of the challenges identified, the inspectors were asked at the end of the discussion as to how they felt about completing the pilot. Five people indicated that they felt confident enough, whilst six suggested that they were not confident enough at that stage. Despite this, the following comment summarised the spirit of the group: I think as Inspectors, with our background and the type of people we are, it is about our own personal strengths – if I’m chosen to do something I will do it and I will do it to the best of my ability That’s me being professional. So as inspectors you can put us in any position and we might scream and moan to kingdom come – and I’m the first one to moan – however we’ll do the job.

AFTER

As discussed above, the second focus group provided an interesting counterpoint to the first, not least because the inspectors were able to reflect effectively on their recent experiences. They identified a specific set of improvements – written up at the end of this section – as well as covering a number of the themes from the previous section, albeit with a different emphasis on the same issues.

Making the judgement – evidence and challenge
Post-pilot, the inspectors discussed the process of collecting evidence and dealing with challenge at great length. There were a number of dimensions to this, including dealing with probing questions from providers about the new process, and how best to undertake the inspections themselves in such a way that evidence was collected robustly:

When it came to discussing and making the judgements, settings were saying to me time and time again, "right, well you're saying that but show me what satisfactory is, show me how I get a good", and we just couldn't do that. It was a bit like we were struggling in the dark a bit and we need more information about the descriptors, and I know that we're not going to be too prescriptive.

I was struggling at the outset to make sure that all of my evidence was in line with the judgements I was making, but as the pilot developed I got much better at doing this as I went along and was better at seeing the overall outcomes along the way.

When you're looking at the current levels, there's nothing to say what the consequence is for individual things that happen. It's still not exactly clear what we're supposed to do when you see combinations of good and satisfactory in one setting. We were working towards 'best-fit' and working towards that, which was useful, but there still has to be much clearer guidance.

There's an exposure with working in this way because you are literally giving them an overall professional judgement on what you've seen. If this then gets challenged it's a much more direct challenge to you and your professionalism, which is actually quite hard to deal with because it could undermine you and be very personal.

Making the 'poor' judgement was much easier when I made it because it was basically a non-compliance issue and this is fairly straightforward. This is much less clear when it comes to being able to define a 'good' – this is a much more blurred concept.

Despite the concerns they expressed about the process, the inspectors displayed a degree of confidence in looking back over the judgements that they had come to, and felt that if indeed they were challenged, they would stand by the conclusions reached:

I'd not change any of the judgements that I made on reflection – I prepared the judgement independently, and when I was working with one of my colleagues and we went in together we were able to bounce that off each other and confirm that we were happy that we'd got things right.

Having gone through my own analysis of my own judgements, because I've been challenged on the process, I'm happy with the judgement I came to. The process worked and I'm confident that I can stand by these.

Interestingly, there was a discussion about the nature of the first time the QJF is used with settings. This centred on two specific issues – the change of approach needed for the settings, and for the inspectors. The long quotation that follows outlines an interesting argument that should be given some serious consideration when it comes to bringing new settings and new inspectors into the framework:

It'll be better next time round for these settings because they'll have a base, and when they get their next judgement there'll have been something to compare it to and work from, and when it's in the public domain they'll feel better about it because they knew what the last one in private said. For the next round of settings they're not going to get that; their first judgement, which we know is going to be a bit of a culture shock, is going to be straight in the public domain, and I don't think it should be. I think everyone's first judgement should be private and the second one should be public. And that also gives us time to get better at it too, because there's going to need to be a much larger number of inspectors trained who haven't experienced the pilot, and they haven't had the opportunity to learn either. So if the first time they were making judgements they weren't published, like it was for us, that might give them a sense of confidence too that they weren't being thrown straight into the spotlight.

Resources and workload
Unsurprisingly, resources and workload again featured heavily in the group's deliberations. Notwithstanding the fact that the inspectors recognised that there were ways in which their practice would become more efficient as they became used to the new approach, there remained a series of serious concerns about the implications that the QJF has for the organisation of human resources within CSSIW:

It was exhausting. It tended to start early and I was often there still at 6 o'clock, and giving the judgement and explaining that took a long time and was challenging at the end of a long day.

It took longer than normal because we wanted to make sure that everything was covered so that we had the evidence in case we were challenged. You are trained to be proportionate and just look at what needs to be done. Here we were making a judgement on everything that was done, and you have to make sure that you look at everything that went on.

I didn't look at every single file – there was one that had nine rooms and I had about 20 minutes in each, and doing SOFI in that time was challenging.
It is not possible to overlook the impact on the time and resources needed to undertake these new inspections, especially when you are doing very large settings, whether you base this criterion on the number of rooms that they have or the number of children that are in the setting.

It took absolutely ages to write the new style of reports, and yeah, we're going to get quicker about that, but I think that it's still going to be taking loads more time than before. You've got to think much more before you put pen to paper and that means that they take longer.

I don't even know how much the Area Managers were told about this – they might have been aware of it but that's all.


In addition, it was suggested that there may be important repercussions for the QJF going forward arising from the way in which certain parts of Wales organise their activity, an issue touched upon by one of the managers. If this reaches across the whole of Wales, this is clearly a workload pressure that needs to be managed effectively:

One of the issues that we're going to have is that at the beginning of the year, i.e. from April onwards, we're told to do the inspections of the more challenging settings, and to leave the ones that tend to be better towards the end of the year, i.e. in the January–March period. That means that most of the settings in this pilot have been the better settings, and the kind of feedback we've had from them is going to be nothing compared to what we're likely to get from the ones that are not so good, and I can't help but think that the workload is only going to increase when it comes to doing the judgements with poorer settings and then writing the reports.

Making the judgement – relationship with provider
The relationship between the inspectors and the settings again emerged as important in the discussion. Part of this centred on issues raised in the first discussion about the lack of information about the process –

There wasn't enough guidance on the process, for example, on what we can and can't say on the day. The settings didn't have any expectation of the amount of time that we would be with them and they said to me that it would have been really helpful to have known that we would be there for as long as we are going to be. This is something that they need to know ahead of the inspection

– and in part this focused on the 'new' relationship between the inspectors and the staff within the settings:

We need to understand the impact on the nurseries. People said to me that their staff were on edge because of the SOFI. I think it's hard for people to get used to being just observed for such a long time without being interacted with. It's a very different experience for the nurseries, and registered providers and other managers haven't yet worked out how they need to be instructing their staff.

Doing the observations was quite challenging – whilst it's not a massive difference from what we normally do, there are some subtle challenges to you as an inspector of not engaging and remaining detached. One of the consequences for me was that the staff tried to guess which of the children you were watching as part of the observation and then focused much more of their attention on that child.

There were also implications for the amount of time spent in individual settings, based on the intricacies of getting used to a new approach, coupled with the fact that coming to a conclusion could be very challenging:

If it's coming out with a good it's fairly easy to give the feedback and quite short. Anything else takes much longer – you need to present your evidence and then give them the chance to come back and have the discussion, and for me it was a bit of a shock that it took so long to do and so much longer than normal. Most close-up meetings used to take about 15 minutes, but these were taking at least an hour. The feedback felt much more formal and you had to prepare for it.
Sometimes the best fit was really difficult, and this was the thing that meant that the inspections this time were much longer, because you had to reconcile all of those little differences between some things that were satisfactory and some things that were good, and feeding that back to the managers was a challenge because they would ask "well what did they do wrong?" and it's hard to answer that.

Support from CSSIW – managers, peers and others
The inspectors all agreed that undertaking the pilot had been tough, and that without the support of their peers it would have been even more difficult: It would be very hard to do this on our own. As discussed in the first session, the support offered to them by their managers was somewhat suboptimal. This seemed to focus, in particular, on the fact that the managers did not seem to be especially well-informed about what was being undertaken in the pilot:

The managers were told that this was a priority and that it had to be completed, but not much more than that.

There's a real issue with training our managers. In all fairness to them they don't really have the first clue about the detail of what's going on and how these things are developing, and that means that they haven't been able to give much useful support even though they might have wanted to, but going forward that training gap needs to be dealt with, and really quickly I think.

Furthermore, the role of the senior inspector was one which the inspectors reflected upon, and which, they felt, could have been deployed differently during the pilot:

It would have been good if the senior inspectors had come out on inspections with us because I don't think they can really understand this unless they have accompanied us on visits, which they haven't done. And that's a problem because when it comes to the training, assuming that they are going to do the training, how can they do that effectively when they haven't been out with us to see any element of it working in practice? I think that it's a bit farcical to be honest, and a real missed opportunity.

Making the judgement – inconsistencies
In contrast to the first group discussion, much less time was spent talking through the potential for inconsistencies between inspectors. In no small part this was put down to the fact that the inspectors felt that this was not such a big issue after the pilot as they had perceived it might be before the pilot:

I don't feel, from the informal processes we've been through so far, that we're as far apart on these matters as we might have been. There's a degree of reassurance that when we've been able to discuss things we've been closer together. There's always going to be some differences but they are no worse working in this pilot than for other inspections that we do.

That said, there were matters upon which the inspectors reflected there was room for improvement:

I may have been completely wrong but I went in on the premise that satisfactory meant meeting the National Minimum Standards – to get 'good' you were having to do more than that – but I know that not everyone approached it that way and that does create some inconsistency in the way we do our work. This is something that needs to change as we go forward.

Overall
On the basis of a very open second discussion, the inspectors concluded that the pilot had been a challenging process, and one which had provided constant stimulation: I was still learning on the last one. They reflected on the fact that working under the QJF had resulted in a fundamental shift in their thinking and approach to undertaking inspections:

There is also something to be said about how this new way of working has impacted on our expectations of what we do and how we do it – we've made an adjustment in our working and what we do, and that has definitely impacted on my understanding of what doing an inspection means.

In bringing the discussion to a close, the inspectors were asked to think back to the first discussion, before the pilot had commenced. The following quotation summarises neatly the views of all of those who took part:

Thinking back to the last time that we got together as a group and now, it was better than I expected. When I think about how I felt beforehand and how I feel now, it was nowhere near the worst case scenario that I had imagined.

Improvements
This last section provides a series of ideas for improvements that emerged from the discussion, presented against three sub-themes: training, workload, and relationships with providers.
Other than to say that all of these ideas merit serious consideration, no comment is passed about them as the quotations very clearly 'speak' for themselves.

Training

More emphasis needs to be placed on report writing and more time spent in discussing the way this works in practice.

I thought that we would have a resource pack with a series of structured documents that would help us, with the outcomes and a load of information in there, but we didn't, and we sort of had to work our way through all of that with very little to guide us. What's happened is that we've all developed our own versions of those packs so that we can actually get the job done.

We've got loads of experience now and I think it would be a real shame if CSSIW didn't make use of us in some way in training others.

The training has to look more at the real processes that are needed and be more interactive – asking people "what are your fears?" and then dealing with them. There is a wealth of experience in the group of inspectors who've undertaken this pilot and they should be used to help with bringing others into this. If you get the training right you'll improve people's confidence going into this.

Before people come to the training they should be given the specific settings that they are going to work on, so that they can prepare for that setting in the training. That would be real in the sense that it would directly relate to what they're going to have to do. Get everything in place for that setting, and then that person could have some peer support going out of the room so that they're not exposed. We could be used a bit like buddies or mentors, over the phone or whatever.

The outcomes need to be smarter and we need to be issued with guidelines that give us more of a steer on what good is, and where the tipping points are. If I'm feeling that they've been delivering care effectively, what does this mean? If there are some small details that they've overlooked does this mean that they're not engaging effectively, and what is the consequence of this?

Workload

There's just one thing for me that would improve the process – time, time, time. That's the biggest impediment to being able to do this effectively.

It would be good to have a very experienced inspector coming out to shadow you for at least one inspection because that gives you a sense of confidence. We just need to be honest and realistic and say things the way they are and acknowledge the extra work that is needed.

Something that I feel strongly about is that you need to have someone with you when you do one of the new inspections for the first time. Ideally you would want to accompany a new inspector around three times before they do a QJF inspection on their own, but if it has to be the very minimum then the very minimum is that they have one accompanied inspection. It is a false economy to send them out on their own without a second inspector because they will make mistakes and those mistakes will take time to rectify, and most of those mistakes could be avoided if they were working alongside a more experienced QJF inspector.

One system that might work for bringing new people online in three visits is that on the first visit the new inspector goes out with an experienced inspector to observe them do the judgement. On the second visit, the new inspector does the judgement, but an experienced inspector is there with them. And the third time, their manager goes out with them so that they can understand what their staff member is doing and can then support them.

Relationships with providers

The settings need more information at the beginning about the process and what's going to happen, and that is fine. They also want lots of detail about the thresholds and how we make judgements and I don't think they should have that – I think they could tie us up in knots unless we're really careful.
I fear that CSSIW need to be braced to potentially deal with a lot more challenges and possible complaints, because this is entering different territory for all parties.

We need to give them the basic information about how long we're likely to be there and how things will work, ahead of getting there. It's OK to reiterate this on the day but they need to be prepared for this ahead of time.


AREAS FOR FURTHER CONSIDERATION

This final section draws the evaluation evidence together, and identifies a series of 'Areas for Further Consideration' borne out of the data provided during the study. They are contextualised by primary material and a brief commentary about the nature of what needs to be considered. They are not, however, presented in any formal priority order, but do reflect the general weight of opinion about the need to improve these areas. As an over-riding principle it is suggested that, whatever happens, CSSIW recognises the amount of expert knowledge that resides in the experiences of the people who have participated in the pilot and does not miss the opportunity to utilise that resource.

Area for Further Consideration: Extend the judgement scale to include a fourth point, and consider a 'composite' judgement

Indicative supporting evidence:

We believe there should be four categories for the judgements and an overall judgement based on the individual areas. My only concern about the judgement would be that, from a parent's point of view looking for childcare, a setting with 'good' (although this is the highest judgement) does not initially sound like a setting that provides a high standard of care. I feel that the judgements should be excellent, good, satisfactory or poor.

We were disappointed that there was no 'outstanding' judgement as we feel we would have had this in some areas. It will definitely improve standards as no setting would be happy with a 'poor' judgement and should want to improve immediately. We would have liked to have an 'overall' judgement based on the individual judgements – this would then be the status of that setting – good, poor, outstanding, etc.

Practices who receive 'good' judgements may not strive to improve further, as there is no higher judgement to achieve. Settings that receive anything other than 'good' will struggle to attract new service users and may eventually be forced to close, although all points that relate to the word 'satisfactory' should technically be judged as good!

I would prefer more grades to be available to the inspectors. Maybe excellent, good, satisfactory, below, poor.

Commentary about action needed:

Whilst CSSIW have made an undertaking to identify a fourth point on the scale, this now needs to be more broadly communicated. Further, there was some support for a 'composite' score, which may need to be thought through. One further consideration for the pilot sites is how their judgement will compare to others when the four-point scale is issued, even though these pilot judgements were not intended to become public knowledge.

Area for Further Consideration: Provide greater clarity about the thresholds and descriptors that underpin the QJF

Indicative supporting evidence:

Wider and clearer criteria for each judgement made, and clearer wording for each judgement. I feel that CSSIW need to have a set list for what they feel is good, satisfactory or unsatisfactory.

We would like to receive a copy of the guidelines used for each judgement, not only to ensure that each point made is being met, but to have clear goals to work towards to ensure the best possible outcome.

The criteria need to be sent out ahead of the inspection – it's only fair to see what you are going to be judged on. We need greater clarification on where the tipping points are – what would it take to get to 'outstanding'? There is of course a place for the 'tick-box', but there has to be clear added value in this new approach and I think CSSIW has to do better in selling that.

More information is needed on the scheme to be given to the placements. Where will the rating be displayed, and will it be explained how the rating was achieved, and what improvements need to be made to achieve the higher rating?

Commentary about action needed:

There was considerable concern about the lack of clarity about the descriptors and how the judgements are reached. It appears to be in the best interests of all if clarification about the process is issued so that inspectors and providers are able to discuss the judgements in that context.

Area for Further Consideration: Address the perceived difference in approach between inspectors

Indicative supporting evidence:

So much of the judgements will be the inspector's personal opinion. Maybe a checklist should be in place with a scoring system. If you do it or have it – you get a point; if you don't – no points awarded. Thus everyone should have a score based on their operations. Any additional services or areas of excellence could score additional points for the inspector to award as they see the evidence.

Having not seen the actual marking criteria they used I find this hard to comment on. Maybe like a driving test: one breach or 'X' amount of 'bad' practices would signify a 'poor'; a lack of 'best' practices, working to rule, would signify satisfactory; and lots of 'best practices', striving to achieve over and above, would signify a good. There should be an 'excellent' or 'very good' rating as well.

It was clear that this was an attempt to go beyond just reporting on statutory requirements, and I do think that it is difficult territory for them. It's easy to determine issues of compliance, but once you move to more qualitative ways of working you need to work differently.

Commentary about action needed:

There are a series of quality assurance exercises that have followed the pilot, and it is important that CSSIW continues these processes such that they are able to reassure settings in this regard. Again, an open approach to discussing this 'moderation' process may help to inform and give confidence to providers.

Area for Further Consideration: Carefully consider the workload implications of undertaking the QJF inspections

Indicative supporting evidence:

It was effective having two inspectors record information separately; this helped to speed up the process and reduce disruption to the nursery. Equally, having more than one viewpoint is always a bonus and provides a more rounded opinion.

I believe that in facilities of over 30 capacity CSSIW should have two inspectors present to get a more accurate finding on the service.

They do need to ensure that they get the balance right in the amount of time they spend speaking to staff. I think that might need to be re-balanced. Because they spent more time doing observation they didn't seem to chat to the staff as much as they have done previously, and there was one member of staff in particular who found that a bit odd. Once I'd had the chance to explain to the staff the new way of working that was better, and ideally of course you'd brief all the staff before you start, but we've got 35 staff and 106 children here and it's a bit hard to do, I appreciate!

Commentary about action needed:

There are, from both viewpoints, serious challenges in being able to deliver the new QJF inspections within existing time and resource allocations. The evidence presented suggests that this is an important area to resolve given the centrality of being able to comprehensively make a judgement in a fair, balanced and evidence-based manner.

Area for Further Consideration: The Self-Assessment Survey needs to be redesigned to more effectively meet the requirements of the QJF

Indicative supporting evidence:

We do a Self Assessment Survey document each year which is quite lengthy and I don't feel that they use the information that's contained in it much when they come along to do the inspection. We're providing 30 pages of information and I feel like I'm repeating myself every single year. It could be condensed down to asking "what has changed since you last filled this in?" rather than asking the same questions. It's not fit for purpose – it should form the basis of our questioning, they should be saying "on page 7 of your SAS you say..." – but they never do! There is a real opportunity to change this given the new approach to the inspections they are taking.

There needs to be a much closer tie-in between the information provided on the Self Assessment Statement (SAS) and the inspection. The SAS information was supplied last April and was way out of date by the time the inspection took place this February. This needs to be a much better process.

The Self-Assessment template is turgid and having one-size-fits-all is just a nonsense – it rides roughshod over what you're trying to do as a nursery.

Commentary about action needed:

There is a recognition on all sides that the SAS is no longer fit for purpose given this new methodology. There is clearly a need to address this and create a much more meaningful synergy between CSSIW's documentary exercises and face-to-face inspections.

Area for Further Consideration: Look closely at the training needs of all of those involved in the QJF process and deal with any gaps

Indicative supporting evidence:

Training for both inspectors and most definitely for the nurseries. Training of the inspectors to ensure that their own preferences and beliefs do not shade their decision process. For example, just because they like brand spanking new-build child care settings, full of community child play things, doesn't make any nursery that falls short of that a bad nursery!

There is a real training issue in expecting people to slide over from the tick-box mentality towards a more qualitative approach. It's really important to get this right – monitoring staff needs to be even more rigorous now.

There's one issue in our report that needs to be reflected on. We had a good report, and we received four 'goods' in our report, so we didn't expect to have six Recommendations as we had absolutely no deficits at all; making six Recommendations seems to be rather heavy-handed. If they are making more subjective judgements they need to be careful about this kind of thing.

So much of this is dependent upon the training of the inspectors, and the quality of the observational evidence that they are able to produce.

Commentary about action needed:

There are important training considerations for the inspectors, and for the settings they are working with, given the challenges of the QJF. These needs must be taken seriously and met effectively, especially when it comes to inducting new inspectors into using the SOFI methodology and making judgements. There are considerable human and other resources available to CSSIW and these have to be capitalised upon.

Area for Further Consideration: Communicate the implications of being part of the pilot to settings and others

Indicative supporting evidence:

Also, even though it is a pilot, it would be nice to publicise that we are one of the first in Wales to undertake the new system and get so many 'good' judgements. By the time the scheme is rolled out, we will not be due an inspection and other settings will announce that they are the FIRST to complete it. What are the implications for us of having been first but not able to publicise the findings? We'd like to say we've done well.

Commentary about action needed:

An important principle of undertaking a pilot is that there will be a lot to learn about what worked well and what needs to improve. This has implications for all involved in the process, but being open and inclusive about the benefits and drawbacks is an important principle.

Area for Further Consideration: Consider the wording of reports and the impact that they have under the QJF

Indicative supporting evidence:

It's relatively easy to have a conversation when things are positive. There's a real responsibility on the inspectors in the nature of the words that they use in reports, and I'm not sure that they are aware of the inferences within their words. It is often the case that clumsy writing which lacks sensitivity can appear to raise problems that aren't really there. That said, things can be resolved relatively easily and if we're going to have more dialogue this is only likely to improve.

Commentary about action needed:

There is an enhanced burden on inspectors to ensure that the words they use in reports to support the judgements made are fair, and reflect the reality of the situation as they see it.

Area for Further Consideration: Consider a different approach to understanding and judging 'leadership and management', and other roles in these settings

Indicative supporting evidence:

Leadership and management is integral to all of this, but the way in which the inspectors approached it was too admin-based. They were looking at the records and general admin things, and that's absolutely right and proper, and they'll make an assessment on that. But having recently had an Estyn inspection, they look in lots of detail at my role as a leader and manager, and what I'm doing to enhance the quality of the nursery. They have lots of criteria that look at the particular characteristics and qualities of the leaders and managers, and how they manage the staff and what their expectations and support are. From what I saw as part of this pilot, CSSIW aren't looking at any of this and they really should be – you can't have quality in other areas without quality management and leadership.

I think they're not quite assessing the right things – they're fixated a little bit too much on the paperwork and not quite enough on the actual leaders themselves. It's perfectly appropriate to have a sub-section looking at the administration, but CSSIW shouldn't focus too heavily on the process and paperwork, and they need to place much more emphasis on what actually makes a difference to the experience of the children in the nursery. The current system is far too heavily biased towards dropping you down for minor admin trip-ups, which are important, but it's doing this at the risk of missing the big picture. They need to look carefully at the Estyn criteria, and I think there's a fair way to go in evolving this and moving it forward. To be honest, it's a bit misleading to call it leadership and management given that they are only really currently looking at admin, which is only one tiny element of the whole thing.

There are problems for large settings with the likely requirements under the 1:1 ratio for key workers. It's just not possible for us to be able to permanently guarantee that ratio, and if that's one of the requirements to get an 'outstanding' or 'excellent' or whatever it's going to be called, how are we ever going to get there? 1:3 might be possible, so I wonder if that could be reflected in the judgement framework? We need to have obtainable levels in the future.

Commentary about action needed:

There are potentially far-reaching consequences of needing to expand the reach of inspections as dictated by the move to the QJF. One of these is elaborated eloquently in the quotations reproduced here, and speaks to the need for CSSIW to reflect on its role, and what it is seeking to achieve through the course of the new approach.

The evidence of the previous sections led to the nine 'Areas for Further Consideration' set out above. By way of a concluding comment, it is important to see this section, and indeed the report as a whole, in the light of the four Key Messages as outlined on page 2 above:

1. The evidence presented herein provides much more support for the QJF pilot than criticism;
2. There are limitations in this, as in any study of this kind, and there remains a need for CSSIW to gather a wider evidence base over a longer period of time in order to be more definitive about the outcomes from this approach;
3. The QJF method will need to be continually modified in the light of this wider evidence base; and
4. Whilst there is much to learn from here, it is not uncommon in evaluations such as this for people to provide much more detail on the negative than the positive elements of the experience.


APPENDIX I · INTERVIEW SCHEDULES

PROVIDER INTERVIEWS
The following questions were covered in the interviews. The interviews were a very conversational exploration of the issues from the point of view of providers. They mirrored the kinds of questions in the online survey, but the interview format allowed the issues to be explored in much greater depth.

QUESTIONS
1. How do you feel that this new form of inspection worked in practice?
2. Did you understand the reasoning behind the new approach?
3. Did it take longer than normal?
4. Was it a better way of working from your point of view?
5. What did your staff think about the new approach to inspection?
6. How was the relationship with the inspector given the new approach?
7. How did you respond to being given the judgement?
8. Did you think that the judgement was accurate? If so, why? If not, why not?
9. What did you like/dislike about the new form of inspection?
10. What improvements would you suggest?

INSPECTOR FOCUS GROUPS
The following questions were covered during the focus groups, but not necessarily in this order. Inevitably the discussions ranged around these topics, although as far as possible they formed part of both the 'before' and 'after' groups. The questions were generated after listening to an initial conversation with inspectors about the QJF.

QUESTIONS
1. How do you feel that this new way of working will differ in practice from the established ways of working?
2. What do you think will be the easiest parts of this new way of working to complete, and which will be the hardest? Why?
3. What are the consequences of this project for the Self-Assessment Survey (SAS)?
4. What are the workload implications for this type of inspection? Is there enough time available to do what needs to be done?
5. How long do you anticipate needing for these visits – will they be shorter or longer than normal?
6. How will you reconcile needing to do the 'paperwork' and the SOFI observations?
7. How applicable is this approach to other different settings? Will it work in areas other than children's day care?
8. Are you consistent one with another? How are you able to ensure that you will be consistent in the pilot? How might any variation be minimised?
9. What are likely to be the issues around the representations and appeals against the judgements?
10. How confident are you feeling about making the judgements? What do you consider are the likely issues that may arise?
11. What's the nature of the evidence that will be collected? Will it all be observational and then written down afterwards, or is there some scope for using photographic and/or video evidence?
12. How effectively is CSSIW supporting you as the frontline inspector?
13. How will you interpret the results?
14. What are the likely outcomes from the pilot for you, providers and the inspectorate?


Dr Mark Llewellyn · Welsh Institute for Health and Social Care University of South Wales, Glyntaf Campus, Pontypridd, CF37 1DL wihsc.southwales.ac.uk · [email protected] · 01443 483070