funded by: The English National Board for Nursing Midwifery and Health Visiting, London
INTRODUCTION TO THE REPORT
The issue of developing and implementing adequate assessment strategies for nursing and midwifery education programmes has challenged both state bodies and educators across the world for over fifty years. The ACE project was set up to report on current experiences of assessing competence in pre-registration nursing and post-registration midwifery programmes. Nursing and midwifery have undergone rapid and far-reaching changes in recent years, both in initial educational requirements and in the demands being made on professionals in their everyday work. It is intended that the report will contribute to current developments in educational programmes to shape the future of the professions to meet the increasing demands being made upon them.
Decisions that affect both the quality of educational processes and the delivery of care are made at every level of the professions: national, local and in face-to-face practice with clients. This report intends to contribute to the quality of educational decision making at each of these levels. For this reason the report provides both general analyses of structures and processes directed towards policy interests, and also concrete illustrations of the issues, the problems met and the strategies employed by staff and students during assessment events.
CONTEXTUAL INFORMATION ABOUT THE RESEARCH PROJECT
The Setting Up and Operation of the Research
ACE was funded by the ENB during the period July 1991 to June 1993. It was conducted as a joint project between the School of Education of the University of East Anglia, the Suffolk and Great Yarmouth College of Nursing and Midwifery and the Suffolk College of Higher and Further Education.
The focus was on the assessment of competence of students on pre-registration nursing courses (Project 2000 (all branches) and non-diploma) and 18-month post-registration midwifery courses (diploma and non-diploma). The project conducted fieldwork in nine colleges of nursing and midwifery and their associated student placement areas, in the three geographical regions of East Anglia, London and the North East. Appendix A1 provides full details of the conduct of the fieldwork. In brief, there were two phases, and through a process of progressive focusing the issues relevant to the current state of assessment of competence were explored.
During the first phase, data were collected in all nine approved institutions to identify issues of national importance operating in their local contexts. Issues related to the whole of the assessment process were explored, including planning & design, assessment experiences and monitoring & development. At the end of this phase an interim report was written which provided a means of articulating initial findings and of firming up the research questions for the second phase, which then directed the collection of relevant data in greater depth in a smaller number of fieldsites. This approach is generally known as 'theoretical sampling', which produces 'grounded theory' (Glaser and Strauss 1967).
The ACE proposal set out with the following aims:
1. To establish the effectiveness of current methods of assessing competencies and outcomes from education and training programmes for nurses and midwives.
2. To examine the relationship between knowledge, skills and attitudes in the achievement of competencies and outcomes.
3. To establish the extent to which profiles from the assessment of individual competencies adequately reflect the general perception of what counts as professional competence.
4. To investigate the feasibility of simultaneously assessing understanding, the application of knowledge and the delivery of skilled care.
5. To collect perceptions of the usefulness of the UKCC's interpretive principles in helping nurse and midwife educators to assess competencies and outcomes.
Upon inspection it soon becomes clear that the aims overlap: it is difficult to pursue one without also pursuing the others. However, each has its individual emphasis.
Aim one stresses 'effectiveness'. If a mechanism is to be effective, then its intended event must occur. Thus, to be effective, if an assessment procedure designates someone as being competent, then that person must actually be competent. This is quite different from concerns with, say, cost efficiency. A system which correctly identifies 80 or 90 per cent of people as competent may still be considered cost-efficient. It is then a matter of the level of risk that is considered acceptable. In an effective system the level of tolerable risk is zero. However, this may not be accepted as cost-efficient. It is the aim then of this project to critique the assessment of competency from the point of view of effectiveness. This has the advantage of making both the risk and the cost or resource implications clear in any discussion that may then take place on the issue of 'effectiveness' as against 'cost efficiency'.
Aim two stresses the complex interrelations of knowledge, skills and attitudes. If the appropriate competencies and outcomes are to be achieved, then educational and assessment strategies must be attuned to the development of knowledge, skills and attitudes. None of these are simple categories for study. They resist the kind of observation that is appropriate for the study of minerals. Their observable dimensions are highly misleading, and the situation is rather like the iceberg that has nine tenths of its bulk hidden. Human behaviour is managed behaviour. That is to say, it is not open to straightforward interpretation. Impressions are managed by individuals to produce not only unambiguous communications but also multiple levels of possible interpretation and deception. What counts as knowledge to one person may not be considered knowledge at all by another. This is as true for scientific communities as it is for lay people (Kuhn 1970, Feyerabend 1975). Again, there is no easy distinction to be made between 'knowledge' and 'skill'. Knowledge may initially be thought of as 'theoretical' as distinct from practical action or skills. Yet, in professional action, knowledge is expressed in action and developed through action. To analyse professional action into 'skills' and aggregate them into lists required to perform a particular action may well do violence to the knowledge that encompasses and is expressed in the whole action. To see professional action as an aggregate of skills may thus lead to an inappropriate professional attitude. Knowledge, skills, attitudes and the processes of everyday action may in this way be regarded as different faces of the same entity. It is the aim of this project to begin with the experience of professional action through which concepts of 'knowledge', 'skills' and 'attitudes' are expressed and defined in practice.
Aim three stresses the relationship between the assessment process and what it purports to assess. In short, are the assessment profiles that result from the assessment process fit for their purpose? In order to examine this question it is essential that 'what counts as competence' has been identified. It may not be that there is a single 'general perception'. Rather, there may be a range of acceptable variation in what is perceived to be 'competence'. This implies a debate of some kind. One prime intention of this project then is to describe the debate and discuss the extent to which assessment structures and processes fit the purposes that are currently being debated. This in turn refers the discussion back to questions of effectiveness and of the ways through which 'knowledge', 'skills' and 'attitudes' are being identified and defined.
Aim four stresses the feasibility of assessing understanding and the application of knowledge at the same time as delivering care. Effectiveness and feasibility are closely allied. It must be feasible for it to be effective. In short, the aim is directed towards the relationship between educational processes and care processes. This may be seen as presupposing a distinction between the two so that assessing would be an additional burden to be carried at the same time as delivering care. The aim of this project is to explore the professional process in terms of its dimensions of care and education: is the one aggregated to the other, or are they indissoluble faces of the same coin?
Aim five is different in kind from the preceding four. This aim has a survey dimension to it, where the others are interpretive and analytic in orientation. For ease of reference the UKCC's interpretative guidelines are reproduced in appendix C2. It is a straightforward matter of asking a range of individuals in the participant institutions whether the guidelines have been found to be useful. Whilst the UKCC's interpretive principles acted as a focus of this aim, it became apparent from interviewing that the inclusion of comments on the usefulness of national guidance in general (i.e. including ENB guidance) provided a more comprehensive exploration of the issue. Consequently this wider perspective on the usefulness of national guidance was pursued.
A Qualitative Approach for the Study of Qualitative Issues
The project aims define the kind of methodology which is appropriate to their achievement. For example, to identify what counts as an effective method of assessing competencies and outcomes, a structural analysis of cases considered to be effective is required. Before one can begin this, however, it is necessary to define what is to count as 'effectiveness'. This in turn requires the collection of views as to what is to count as competence and as outcomes that signify competence. The initial task then is to conduct a conceptual analysis of these key terms as they are expressed in the appropriate professions. Aim two equally demands a conceptual analysis of the relationship between the key terms 'knowledge', 'skills' and 'attitudes'. Once this has been established, it becomes possible to analyse the structural relationships between assessment procedures and processes, and the real events in which competence is expressed as a professional quality. With some understanding of what is involved in the relationships between the performance of assessment and the delivery of care, aim four can then be explored. The methodology appropriate to these aims is one which identifies those instances in which the necessary features of the key terms are exhibited. Through an analysis of those instances, the structures, mechanisms and procedures through which effective assessment takes place can be identified and described in order to facilitate future planning and design. This essentially fits the approach known as 'theoretical sampling'. It is not a quantitative approach and thus does not result in percentages and tables which illustrate the distribution of variables. Rather, it generates theoretical and practical understandings of systems.
The methodology of the ACE project then, is qualitative, focusing upon structures, processes and practices as these are revealed through documentation, interviews and observations. A full exploration of the methodology can be found in appendix B, but broadly, the task has been to generate an empirical data base. By a process of comparison and contrast, key groups of structures, processes and practices are identified as a basis for the more formal analysis of events.
Alongside the strategies for the gathering of data and their analysis have been strategies to ensure the 'robustness' of the data and their interpretation. These have included the use of an expert 'steering group', dialogue and feedback with participating staff and students, theoretical sampling, the triangulation of perspectives and methods, and reference to research output from other projects. The sensitivity of the methodology, with its emphasis on communication and personal contact, has been a feature, and attention to principles of procedure has facilitated fieldwork relationships.
In summary, methods of data collection were:
• In-depth interviews (individual and group) with students, clinical staff, educators and other key people in the assessment process. Recordings of interviews were transcribed for analysis
• Observation of assessment related events in clinical and classroom settings
• Creation of an archive of assessment related documentation from approved institutions
The result was a large text based archive constructed from interview transcriptions, observational notes and documentation of courses, planning groups and official bodies. The method of analysis involved various strategies of conceptual analysis employing discourse and semiotic approaches to try to pin down the meanings of particular key terms employed by professional and student discourse communities. This in turn provided a means of identifying the institutional, local and national structures necessary for the construction and delivery of assessment. Structural analyses could be made of particular approaches to identify the roles and associated mechanisms and procedures through which events (both intended and unintended) are effected. These events in turn were then analysed into their stages, phases and process features in order to identify what counts as professional competence in action, in situ.
Whilst gathering and analysing the data, it was clearly impossible to understand the experiences of professionals and students without having grasped the contemporary changes taking place in nursing and midwifery. There are thus discourses of reform, of innovation and of change (whether or not perceived as being innovations or reforms) which act as the context for the conceptual, structural and process analyses described above. This context is the subject of the next section.
THE CONTEXT OF REFORM
Professional and Educational Change in a Changing World
By 1991, when the ACE project started its work, a number of significant changes had taken place both within nursing and midwifery education and within the structures of the occupational settings of nursing and midwifery. These changes formed part of a relatively long-term strategy for NHS reform which was to continue to develop and have impact throughout the life of the project. The field of study was, and still is, characterised by the complexity of wide variation, with a differential pace of change across both regional boundaries and local, internal boundaries. This complexity has been further compounded by the regularity with which new demands have been made on participating institutions as NHS reform gathered momentum and concepts such as the regulated internal market (DoH, 1989a) were tested and reformulated in the light of experience. Not only has this climate had an impact on practice and education in nursing and midwifery, but it has also made particular demands on the research methodology. A field of study which is in a constant state of flux and change demands the contextualisation of any account of the assessment of competence.
The move of nurse and midwife education towards full integration with Higher Education institutions has added further complexity to the situational aspects of the assessment of competence. Alongside the strategy for NHS reform there has been a parallel movement towards educational reform which has encompassed the organisation and funding mechanisms of all Higher and Further Education institutions (DES 1987, 1991). Studying nursing and midwifery education during this period has therefore inevitably raised a number of issues which speak directly to the more general issues relating to both the impact of NHS reform and the impact of education reform. Throughout the study, therefore, the fields of nursing and midwifery education faced two challenges: firstly, to prepare practitioners for workplace environments which were themselves experiencing major organisational and ideological change; and secondly, as they moved closer to Higher Education, to contend with the structural changes occurring within those institutions.
It is the intention here to make explicit the main areas of change which were already having some impact at the start of the project and to describe those changes which occurred during the study period in an attempt to set the scene for the arguments and recommendations raised in this report. These areas of change will have inevitably shaped ideas about what midwives and nurses do, what is expected of them, their educational needs and the ways in which competence is defined and assessed.
Since the publication of the government white paper ‘Working for Patients’ (DoH, 1989a), the pace of change within NHS service provision has been relentless, and the impact of the subsequent legislation inescapable. ‘Working for Patients’ arose as part of a major review of NHS provision and was to provide the impetus for extensive NHS reform during the 1990s. The NHS and Community Care Act 1990 was the statutory instrument which finally placed firmly into legislation reforms which were to have far-reaching and on-going impact on virtually all aspects of health service provision.
One of the central stated arguments for reviewing NHS provision, structure and funding has been the need to find economic and ideological solutions to identified changes in the health needs of the population. Demographic and epidemiological trends (HAS 1982, DoH, 1989b) have created new demands on health provision and have influenced recent moves towards a demand-led rather than service-driven health care economy. ‘Working for Patients’ attempted to address the challenge of creating provision on the basis of population need rather than the presence of clinical expertise, by creating a regulated internal market in which Health Authorities purchase services on behalf of their population from a range of potential service providers. The creation of this market has rearranged local provision from a single resource into several separate and semi-autonomous units.
The period of fieldwork undertaken in this study spanned two years of intense activity in relation to the recommendations embedded in ‘Working for Patients’. The first NHS Trusts were approved in 1990, and throughout the study many of the clinical areas served by colleges of nursing and midwifery had gained Trust status or had applications in progress. This separation of purchasing activity from services, and the division of local provision, not only presented challenges for the management of the research, especially in terms of access to clinical areas, but was also evidenced in the data in terms of concerns about the availability of student placement areas, the workload of clinical staff and the potential for even greater variation in expectations about the outcomes of nursing and midwifery courses.
The Changing Roles of the Nurse and Midwife
Any change in the demands which are placed on nurses and midwives within their occupational roles will have an impact on what counts as professional competence and on the way in which competence is assessed.
The Strategy for Nursing (DoH, 1989b) described a range of strategic targets for nursing and midwifery. These responded to changes which had already occurred in service provision and professional practice and anticipated the demands on nursing and midwifery into the next century.
Nurses and midwives had already faced a number of initiatives over the previous decade which would have direct impact on their role and practice. For example, the Griffiths Report (DHSS, 1983) had introduced general management to the health service, and the unquestionable right of nurses or midwives to hold senior generic management positions in hospitals and the like was gone. This left a major gap in opportunities for career advancement outside clinical practice. 1988 saw the achievement of two major initiatives which were intended to raise the value of clinical practice and provide opportunities for career progression through, on the one hand, a new clinical grading structure and, on the other, Project 2000 and academic accreditation of nursing and midwifery courses. These initiatives suggest a trend towards a changing ideology and value base within nursing and midwifery and a re-conceptualisation of professional role and status in relation to other health care workers. For midwives in particular, the last decade has seen a continuation of the strong movement away from their traditionally close identification with nurses and nursing practice. It is a clear reflection of the dynamic and changing nature of the field of study that by the time the ACE fieldwork was complete, a major revision of the Strategy for Nursing had taken place to take account of other fundamental changes within service provision (DoH, 1993).
Other ideological changes were taking root within nursing and midwifery practice. Throughout the 1980s increasing emphasis has been placed on community care (DHSS 1986, DoH, 1990), based on the notion that care in the client's normal everyday surroundings is of greater benefit than institutionalised care. For many community midwives this has meant less emphasis on high-technology births and more emphasis on the individual needs of women and their families. Changes in the location of care have also had significant impact on nurses and nursing practice. Under the NHS and Community Care Act 1990, responsibility for community care was invested in Social Services rather than the Health Service (DoH, 1990), and questions are being raised about both the role and competence of nurses in community settings, and the extent to which health care should, or indeed could, be separated from social care. This change in location of care has created different demands not just in relation to the skills required by nurses in community settings, but also in the demands on nurses in hospital settings where patients require acute care over shorter periods.
In a similar vein there has been an increasing orientation within nursing towards holistic care, the prevention of ill health and health education. Midwives have always worked predominantly with healthy women and as a result have perhaps been better placed to reject a sickness-oriented model of care and adopt an approach centred on health, normality and education. This trend towards a health orientation has mirrored a national concern for health and health promotion over recent years. The Health of the Nation (DoH, 1992) described the government's policy and strategic targets in these areas, and reinforced the demand on nurses and midwives for curricula which were firmly based within a framework of health as well as ill health.
Changes have also occurred in the delivery of care. For more than a decade the trend has been to move away from task-based, routinised systems of care to more individualised, client-centred approaches. Primary nursing and team nursing started to spread throughout the country, and the publication of the Patient's Charter (1991) formally introduced the concept of the ‘named nurse’ for each patient. It can be argued that individualised care, primary nursing and the concept of the named nurse have contributed significantly to a shift towards a model of nursing and midwifery practice in which judgement, assessment, care planning and reflective critical analysis are becoming increasingly valued role components. Where role expectations and values shift, so too should ideas about what counts as competence and how that competence should be assured. A major question therefore must be: to what extent have role expectations, and the values embedded in those expectations, kept pace with changes in policy and legislation? To what extent do practitioners, managers and educators hold onto role expectations which have not yet taken account of major policy shifts? The implication for the research is to uncover and explicate the relationships between role expectation and policy implementation in order to inform possible mismatches between the rhetoric of assessment documents and the realities of assessment experience.
Changes within Education
Although apparently less directly affected by the main thrust of NHS reform, professional education has been in the process of a fundamental transformation. Major changes were taking place within nurse and midwifery education, both in the nature and content of educational programmes and in the structure and organisation of institutions. A subsidiary paper of ‘Working for Patients’, ‘Working Paper 10’, addressed the need to separate education provision from service units and purchasing authorities by investing the relationship between service and education with similar market processes. What followed was a wholesale review of nurse and midwife education across the country and consequent major reorganisation. At the beginning of the ACE project most education institutions had already undergone some form of rationalisation. All approved institutions involved with the study were the products of amalgamations of several small schools of nursing and midwifery, which had traditionally been located on NHS hospital sites, into much larger colleges of nursing and midwifery. Most were therefore multi-site institutions which were in various stages of incorporation.
Later, as the overall intention to embed nurse and midwife education into a HE framework took shape, colleges of nursing and midwifery were to begin the process of wholesale integration with HE institutions. During the period of study, colleges were in various stages of integration ranging from validation-only arrangements through to full integration.
Clearly, given the overall trend towards integration with HE, all fieldsites were experiencing major upheaval in terms of both organisational structures and working arrangements, hard on the heels of one, if not more, previous periods of re-organisation. In one college, senior staff were facing the prospect of re-applying for their jobs for the third time in the space of two years.
Concurrent with these various strands of organisational restructuring, fundamental changes were being implemented to the nature of courses. Project 2000 (UKCC 1986) and moves towards devolved continuous assessment were having a dramatic impact on the nature of pre-registration nursing courses, as were the increasing number of direct-entry midwifery courses and the accreditation of midwifery courses to the level of Diploma in Higher Education.
Project 2000 represents a major move away from the apprenticeship-style training of previous years. One of its fundamental and over-riding stated aims is to provide nurses with the type of preparation which will best meet the changing demands and expectations on qualified nurses in changing contexts of health care delivery. If nurses are to cope with a working environment characterised by its changeability and ideologically committed to the primacy of the individual, then nurses will need new skills to be flexible and adaptable enough to manage the unpredictability of individualised systems of care within a constantly changing professional context. These are the skills most frequently associated with HE. Colleges of nursing have therefore been required to form collaborative links with HE institutions in order to develop and validate Project 2000 courses. The process of conjoint validation between nursing professional bodies and HE institutions placed different and sometimes competing sets of demands on course assessment strategies. On the one hand, professional bodies were concerned that assessment strategies were sensitive to the demands of professional practice; on the other, the HE institutions' concerns focused on academic credibility and the extent to which the assessment design was adequately sensitive to intellectual competence.
Although midwifery education remains separate from Project 2000, a number of direct entry midwifery programmes share components with the Project 2000 Common Foundation Programmes. Even where Project 2000 has not had such a direct impact on midwifery education, there has been a parallel trend within midwifery to incorporate some of the more generic educational principles of Project 2000 within their own curricula.
Project 2000 and diploma-level midwifery education are only one aspect of a broader set of educational initiatives which challenge traditional expectations of what nurses and midwives do, how they interpret their roles and how they should be prepared for practice. PREPP (UKCC, 1990) and the ENB framework and Higher Award (ENB, 1990) address the increasing concern for opportunities for lifelong learning. They imply a distinct move away from a view that nursing or midwifery can draw on discrete, finite and stable sets of knowledge and understanding, and a move towards the notion that maintaining professional competence is more to do with providing skills for continual self-development. Central to these initiatives is the need to demonstrate evidence of continual progression and learning in order to be considered fit and competent to practise.
Changes to the structure, content and philosophy of nurse and midwife education were not occurring in isolation from wider changes which were impinging on HE and FE throughout the period of study (DES, 1987, 1991). Recent legislation (DES, 1992) has brought about a number of changes in the Higher Education institutions into which nurse and midwife education continues to integrate. These changes were heralded by the government as:
far reaching reforms designed to provide a better deal for young people and adults and to increase still further participation in further and higher education.
(Lord Belstead, Paymaster-General, Hansard, H.L. Vol. 532, col. 1022)
Changes to HE included a new system of funding (DES, 1988), which merged the functions of the old Polytechnics and Colleges Funding Council and the Universities Funding Council to form the Higher Education Funding Council. The intention behind this was to introduce greater competition between HE institutions for both students and funds in order to achieve greater cost effectiveness. The act also created opportunities for a wider range of HE institutions to award their own degrees and to include the term 'university' in their titles. The impact on some institutions was experienced as a series of priority changes as the pace of these changes gathered momentum throughout 1992. For institutions seeking to meet the criteria set by the Privy Council to gain university status, the main priority was experienced as a pressure to develop, market and deliver HE courses to increasing numbers of students. Once achieved, many 'new universities' faced new demands for increased research activity in order to benefit in any substantial way from the research assessment exercise which was to determine the allocation of university research moneys.
Although the effect of these changes on the project fieldwork was neither as direct nor as dramatic as the effect of NHS reform, several colleges involved with the study had HE partners which were undergoing fundamental changes as a direct consequence of the above legislation. Some colleges involved in the study started their integration process with polytechnics which have since gained university status. For colleges of nursing and midwifery these changes were not just about nomenclature but were also about the nature, structure and expectations of their relationships with their HE validating body and partner.
In summary, during the period of study a number of pressures upon both the understanding and the administration of the assessment of competence were in operation, which can be categorised into the following groups:
• changes in population health needs
• values about health care and service provision
• political/ideological changes (structural changes)
• educational reform
Each category exerts its own distinct range of changes and pressures upon individuals and groups involved in the assessment process on both personal and professional levels, affecting what counts as competence and the means by which it should be assessed. Consequently this section concludes with a selection of extracts from the data which articulate some experiences of the changing context. Further examples can be found throughout this report.
THE EFFECTS OF CHANGE ‘ON THE GROUND’
Individual Experiences of Change
The research examines the assessment of competence in nursing and midwifery education within the changing context described above. It does so from the perspective of the individuals who deliver the service, upon whom these changes impinge directly, but who are also, as members of a body which has campaigned for a considerable time for the changes, the motivators of the continuing developments. As affectors and affected, people experience change with mixed feelings, which the research has set out to capture. For some, the effects of changes within educational and health care environments are experienced as a continual brake on educational planning:
The Health Authority was in a state of flux and there was a lot of change going on. First we amalgamated with another Health Authority and then second we amalgamated as one college of nursing with other schools of nursing. So every time you thought, "Now we've got some ideas coming on paper," you had to stop and re-evaluate because you got new schools joining and then you had to look at what they were doing.
Organising and guaranteeing a range of clinical experience for students on placements is also difficult in some instances:
I find the clinical areas are changing their speciality month by month. You know you have one area that's doing so and so (...) and then you find that they're no longer doing that because some other consultant has actually gone in there and they're doing something else. It's a constant battle, it really is. (educator)
A prevailing climate of uncertainty makes long term planning difficult and unsettling in many instances:
The whole future's up for grabs. The college may become an independent (...) it may become completely separate, someone may take on a faculty of nursing in Middletown. The next six months should give some indication of...politically...of how things go. (educator)
The cumulative effect of change was highlighted by one educator:
I think it's...not just how it's changed, it's the speed of change. There is more coming on, you just get one set of initiatives finished and then there's another set going through, and another set. And on top of that there's changing the curriculum...there's changes, it's the speed of change. Change has always been there but there's been more time to assimilate it, to take it out there to work out there to change it. Now it's so hard to keep up with the change and take it out there. And a lot of people out there in the clinical field are not really sure what is going on.
Those involved in education are keen to ensure that colleagues in patient care are kept up to date with educational change. Likewise the need to share understandings about developments occurring in service is recognised, but remains a difficult task in a climate of competing demands:
...I think our staff here don't always recognise all the great changes that are happening in education, they see their own changes, changes in technology, the way we're pushing patients through, reducing patients' stays, the way we are changing our structures and our ways of working and contracting, and income comes in and goes out. We don't get a budget any more, we have to earn our income through so many patients we see, and they don't see that the college have got their own stresses and strains. What the college don't see is perhaps the speed at which we're moving forwards and the new language. I'm not convinced that my college friends really have an understanding and grasp of the new NHS. They have not got a grasp of contracts and earning income through numbers of patients. (nurse manager)
The report offers a detailed record of individual perceptions of change and provides an account of the manner in which these have affected, and are likely to continue to affect, the implementation and further development of structures, mechanisms, roles, and strategies for devolved continuous assessment.
Assessment in general has a range of purposes, including the formative ones of diagnosis, evaluation and guidance, and the summative ones of grading, selection and prediction. It is expected to be reliable, valid, fair and feasible, and to offer what is usually called, somewhat mechanistically, ‘feedback’. The assessment of professional competence has additionally to be able to evaluate practical competence in occupational settings, and to determine the extent that appropriate knowledge has been internalised by the student practitioner. Approaches to assessment which lie within the quantitative paradigm, including technicist and behaviourist approaches as well as quantitative approaches proper, are suitable for collecting information about outcomes within highly controllable contexts, and for collecting information which can be measured, or recorded as having been observed. Such approaches are inappropriate for assessing the degree to which the student professional has developed a suitably flexible and responsive set of cognitive conceptual schema that facilitates intelligent independent behaviour in dynamic practical situations. Neither do they take account of the fact that contexts of human work themselves continue to evolve and change, and that therefore the individual’s ability to blend knowledge, skills and attitudes into a holistic construct that informs their practice, is crucial. Assessment from within the educative paradigm, on the other hand, does do these things, whilst also acknowledging that assessment itself is an essential element of the educative process. Educative assessment takes full account of institutional and occupational norms, and of the fact that there are actual individuals involved who are not automatons but people who interpret and make sense in terms of their experience; its structures are generated in response to those features rather than in contradiction of them. 
It offers structures, mechanisms, roles, and relationships that reflect interior processes and take into account the essential ‘messiness’ of the workplace. It does not attempt to impose a spurious logical order on what in practice is complex. In so doing it performs a formative function as it performs the summative one. The one does not follow the other, but happens in parallel. Assessment from the educative paradigm is integral to the learning process that generates individual development. Competency-based education stands provocatively on the bridge between the quantitative paradigm and the educative paradigm, still making up its mind about the direction in which it should move.
THE ASSESSMENT OF COMPETENCE: A CONCEPTUAL ANALYSIS
The study of the assessment of competence would seem straightforward if it were not that considerable controversy and confusion over what is to count as 'competence' takes place at every level in the system. One way of beginning the analysis of the 'assessment of competence' is to ask such questions as:
• what function it serves within a symbolic system or social process
• how it is related to other elements or features
• how it is accomplished as a practical activity
What characterises human activity is its symbolic dimension. That is to say, it is not enough just to observe a behaviour or an action, one has to ask what it means within a complex system of thought and action. Key concepts are regulative agents in a system. In other words, they generate order, they give a pattern to behaviour such that each element is related to each other element. Every element can be analysed for its function in the system. Meaning, however, is not open to inspection like a physical object. What is said is not always what is meant. What one intends to mean is not always interpreted by others in the same way. The intended outcome of an action may have unforeseen consequences because it has been variously interpreted, or because the system is so complex it defies accurate prediction.
The intended outcomes of assessment, for example, are to ensure that certain levels of competence are achieved so that employers and clients can be assured of the quality, knowledge and proficiency of those who have passed. The unintended or hidden purposes may be quite different. For example, educationalists have long referred to the 'hidden curriculum' and its ideological functions in terms of socialising pupils to accept passive roles, gender and racial identities, their position within a social hierarchy as well as social conformity and obedience to those in power. Occupational studies in a range of professions reveal that a similar social process occurs through which students undergoing courses of education into a particular profession become socialised into that profession's occupational culture. In studies of police training, for example (NSW 1990), police trainees talk about the gap between the real world of practice that they experience when on placement in the field and the lack of 'reality' of their academic studies. Similar experiences are recorded in studies of nursing and midwifery (Melia 1987; Davies and Atkinson, 1991). It could be said then that there are hidden processes of assessment where students are assessed according to their ability to 'fit in' to the occupational culture. This hidden process may parallel that of the official or overt forms of assessment. How the two kinds of process interact in the production of the final assessment judgement is a matter of empirical study. The following chapter will set the scene for such empirical analyses by exploring alternative approaches to conceptualising the issues involved in the study of a) assessment, b) competency/competence/competencies, and c) assessment of competency/competence/competencies. It would be artificial to separate completely these strands in the following sections.
Nevertheless, it facilitates the organisation of the argument to emphasise each in turn under the following headings:
• The Purposes of Assessment
• The Professional Mandate
• Approaches to Defining Competence
• From Technicist to Educative Processes in the Assessment of Professional Competence
• Finding a Different Approach
1.1. THE PURPOSES OF ASSESSMENT
1.1.1. From Technicist Purposes to Professional Development
Traditionally, the purpose of assessment is to gauge in some way the extent to which a student has achieved the aims and objectives of a given course of study or has mastered the skills and processes of some craft or area of professional and technical activity. The act of assessment discriminates between those who have and have not passed, and further ranks those who have passed in terms of the value of their pass. The grade or mark awarded not only says something about the work achieved but something about the individual as a person in relation to others and the kinds of other social rewards that should follow. Eisner (1993) traces the relationships between testing, assessment and evaluation from their origins in the scientific purpose 'to come to understand how nature works and through such knowledge to control its operations.' Through the influence of Burt in Britain and Thorndike in America psychological testing was founded upon principles modelled upon the mathematical sciences. During the 1960s, however, new purposes arose: 'For the first time, we wanted students to learn how to think like scientists, not just to ingest the products of scientific inquiry'. This required approaches different from those of the educational measurement movement:
Educational evaluation had a mission broader than testing. It was concerned not simply with the measurement of student achievement, but with the quality of curriculum content, with the character of the activities in which students were engaged, with the ease with which teachers could gain access to curriculum materials, with the attractiveness of the curriculum's format, and with multiple outcomes, not only with single ones. In short, the curriculum reform movement gave rise to a richer, more complex conception of evaluation than the one tacit in the practices of educational measurement. Evaluation was conceptualised as part of a complex picture of the practice of education.
Scriven (1967) introduced the terms formative and summative evaluation, placing attention not simply upon specified outcomes that could be 'measured' but also on the quality and purposes of the processes through which attitudes, skills, knowledge and practices are formed. The focus upon the formative possibilities of evaluation drew attention to the processes of learning, teaching, personal and professional development and the intended and unintended functions of assessment procedures. To address these kinds of processes, methodology shifted from quantitative to qualitative and interpretative approaches which focused upon the lived experiences of classrooms. What was found there was a complexity and an unpredictability that earlier measurement methods had overlooked.
During the mid-1970s to the mid-1980s in America, and broadly from the 1980s to the present day in the UK, concern was expressed regarding the outcomes of schooling. There was a general call from politicians and employers to go 'back to basics'. This call was articulated through increasing political demands for testing and for accountability. However, as Eisner points out, many realised that 'educational standards are not raised by mandating assessment practices or using tougher tests, but by increasing the quality of what is offered in schools and by refining the quality of teaching that mediated it.' In short, 'Good teaching and substantive curricula cannot be mandated; they have to be grown.' Professional development together with appropriate structures and mechanisms for the development of courses and appropriate methods of teaching and learning are thus essential.
With the return to demands for 'basics' and 'accountability' the term assessment has come to supplant that of evaluation in much of the American literature. However, the term is used in a new sense: it does not simply connote the older forms of testing course outcomes and individual performance, but includes much of what has been the province of evaluation. As Eisner concludes, 'we have recognised that mandates do not work, partly because we have come to realise that the measurement of outcomes on instruments that have little predictive or concurrent validity is not an effective way to improve schools, and partly because we have become aware that unless we can create assessment procedures that have more educational validity than those we have been using, change is unlikely.'
Brown (1990) has described the emergence in British education of a multi-purpose concept of assessment 'closely linked to the totality of the curriculum'. The purposes included are: fostering learning, the improvement of teaching, the provision of valid evidence bases about what has been achieved, and enabling decision making about courses, careers and so on. The TGAT report on assessment in the National Curriculum for schools saw information from assessments serving four distinct purposes:
1 formative, so that the positive achievements of a pupil may be recognised and discussed and the appropriate next steps may be planned;
2 diagnostic, through which learning difficulties may be scrutinised and classified so that appropriate remedial help and guidance can be provided;
3 summative, for the recording of the overall achievement of a pupil in a systematic way;
4 evaluative, by means of which some aspects of the work of a school, an LEA or other discrete part of the educational service can be assessed and/or reported upon.
In addressing these concerns, there are four kinds of purposes that need to be considered when thinking about assessment:
• technical,
• substantive,
• social, and
• individual developmental purposes.
1.1.1.a. Technical Purposes
The essential technical purposes of an assessment procedure are that it should be reliable, valid, fair and feasible in what it assesses, and that it should in addition provide feedback to the student, the teacher, the institution, and the national and professional bodies ensuring the quality of the course. To these may be added the six possible purposes of assessment provided by Macintosh and Hale (1976): diagnosis, evaluation, guidance, grading, selection, and prediction.
1.1.1.b. Substantive Purposes
The substantive purpose of a nursing or midwifery course includes a grasp of the appropriate knowledge bases as well as the accomplishment of appropriate degrees of practical competence in occupational settings. The questions of who should decide what counts as an appropriate knowledge base and what counts as an appropriate level of performance are made more complex with the advent of local decision making. At a local level, market forces may lead to the tailoring of courses to meet local needs. The question then arises of how national standards, or levels of comparability, can be maintained. A professional trained to meet the needs of one area may be inadequately trained to meet the needs of another part of the country. Substantive issues are thus vital to maintaining a national perspective not only on professional education but also on professional competence.
1.1.1.c. Social Purposes
At a social and political level the public needs to be assured of the quality of professional education. Assessment in this case serves the function of quality assurance and can contribute to public accountability. However, there are other hidden (even unintended) social functions. To gain a professional qualification means also gaining a certain kind of social status. It means taking on not merely the occupational role, but the social identity of being a nurse, a midwife. With the role goes an aura of expertise, a particular kind of authority that can extend well beyond the field of professional activity into other spheres of social life. On the one hand, it can be argued that through their authority the professions act as agents of social control; equally, it can be argued that they act as change agents, raising awareness of, say, the impact of unemployment or poverty on health.
1.1.1.d. Individual Developmental Purposes
Work is still the dominant social means through which people form a sense of self value, explore their own potential, contribute to the well being of others and feel a sense of belonging. Therefore, becoming qualified to enter a profession marks a stage not only in the social career of the individual but also in the personal development of the individual. Assessment is thus, in its widest sense, about human development, purposes and action.
To become a professional means that the individual has internalised a complex cognitive conceptual schema to respond appropriately to dynamic practical situations. Knowledge, skills and attitudes blend in the person to the extent that the individual's identity is bound up with professional activity. This is what makes both defining competence and its assessment so difficult to achieve. It is not just that the individual is perceived as a professional. The individual is perceived to have a mandate to act.
1.2. THE PROFESSIONAL MANDATE
A mandate to act can be defined at one level as having the legal power to enforce an action. A professional mandate, however, is not limited to this. The mandate arises because the professional has an authority, a social standing, a body of knowledge through which change can be effected. Both nursing and midwifery have during this century undergone changes in status and currently lay claim to a professional identity.
As recently as 1969 Etzioni regarded nursing as a semi-professional occupation because training was too short and nurses were neither autonomous nor fully responsible for their decision making. Not only is nursing perceived by many as subordinate to the medical professions, midwifery is in the process of distinguishing its own professional identity from that of nursing. Whittington and Boore (1988: 112), following their review of the literature, identified the characteristics of professionalism as:
1. Possession of a distinctive domain of knowledge and theorising relevant to practice.
2. Reference to a code of ethics and professional values emerging from the professional group and, in cases of conflict, taken to supersede the values of employers or indeed governments.
3. Control of admission to the group via the establishment, monitoring and validation of procedures for education and training.
4. Power to discipline and potentially debar members of the group who infringe against the ethical code, or whose standards of practice are unacceptable.
5. Participation in a professional sub-culture sustained by formal professional associations.
Hepworth (1989) refers to the fact that nursing has been considered variously as an 'emerging profession, as a semi-profession and as a skilled vocation'. Nevertheless, at first glance at least, nursing and midwifery could be said to be increasingly able to meet the above criteria. Hepworth, however, points back to the underlying uncertainty concerning the status of nursing as a profession and the impact this has upon attempting to assess students when assessors:
are required to assess a student's competence to practice as a professional, when the role of that professional is ambiguous, changing, inexplicit, and subject to a variety of differing perspectives. The effect of this complexity is evident in the anxiety and defensiveness which the subject of professional judgement often raises in both the students and their assessors/teachers, particularly if that judgement is challenged or it is suggested that the process should be examined.
Added to this, the British political context for all the health professions has been, is, and is likely to be for the foreseeable future, one of considerable change where old practices and definitions are replaced by new ones, where professionals often feel under threat and de-skilled by innovations and their demands. In the face of such external pressure, there has never been a greater need for both nursing and midwifery to reflect upon their status as possessors of domains of knowledge and theorizing in order to assert their independence and identities. However, like any complex occupation there is no homogeneous, all embracing view of 'nursing' or 'midwifery' as the basis upon which to construct domains of knowledge. There is rather an agglomeration of spheres each with their own views and associated practices which broadly assembled come under the name of 'nursing' or 'midwifery' (c.f. Melia 1987) .
Project 2000 and direct entry midwifery diplomas each speak to a change in what may be called their appropriate 'occupational mandates'. This mandate includes not only the official requirements as laid down by the ENB, UKCC and EC but also the knowledges, skills, competencies, values, conducts, attitudes, and images of the nurse and the midwife in relation to other health professionals that have developed historically. These interrelated images, ideas and experiences constitute the concept of the competent professional. Change cannot simply be mandated by legislation. The historically developed beliefs and practices of a profession cannot be altered overnight. Of course legislation can force changes. Nevertheless, these may not be in the directions desired. Official changes can be subverted, resisted, or glossed over to hide the extent to which practice has not changed. If real change is desired then it needs to be 'grown' rather than imposed. Project 2000 and the direct entry midwifery diploma may be seen as an attempt to grow change in the professions. In the process, competing definitions of competence emerge, some of which draw upon traditional legacies, others upon official pronouncements and legal texts, and yet others upon the personal and collective experiences of practice. Accordingly, professional competence as a concept is open to variations in definition, many of which are vague.
1.3. APPROACHES TO DEFINING COMPETENCE
1.3.1. Some Approaches to Finding a Definition
For Miller et al (1988) competence can be seen either in terms of performance, or as a quality or state of being. The first is accessible to observation; the second, being a psychological construct, is not. However, it could be argued that the psychological construct should lead to, and therefore can be inferred from, competent performance. Hence, the two definitions of competence are compatible. The question remains, however, how easily and unambiguously can performance signify competence? The breadth of definitions of competence ought to lead researchers to some caution as to the answer to this question. Runciman (1990) draws on two broad definitions of competence:
Occupational competence is the ability to perform activities in the jobs within an occupation, to the standards expected in employment. The concept also embodies the ability to transfer skills and knowledge to new situations within the occupational area .... Competence also includes many aspects of personal effectiveness in that it requires the application of skills and knowledge in organisational contexts, with workmates, supervisors, customers while coping with real life pressures.
(MSC Quality and Standards Branch in relation to the Youth Training Scheme)
(Competence is) The possession and development of sufficient skills, knowledge, appropriate attitudes and experience for successful performance in life roles. Such a definition includes employment and other forms of work - it implies maturity and responsibility in a variety of roles; and it includes experience as an essential element of competence.
(Evans 1987: 5)
Although these definitions offer an orientation towards competence, neither offers sufficient precision to be clear about how such competence can be manifested unambiguously in performance. In order to overcome this, one approach has been to take a strategy of behaviourally specifying individual competencies in the form of learning outcomes and associated criteria or standards of performance, the sum of which is the more encompassing concept of competence. Thus competence is seen as a repertoire of competencies which allows the practitioner to practice safely (Medley 1984). This approach is broadly quantitative. How these competencies may be identified for quantitative purposes is then the next problem.
According to Whittington and Boore (1988) there has been little research into actual nursing practice; thus competencies have generally either been produced in an intuitive, a priori fashion, or have been based upon experts' perceptions of what counts as competence (as in the Delphi or Dacum approaches). However, this criticism is being addressed in the work of Benner (1982, 1983), in qualitative studies such as Melia (1987) and Abbott and Sapsford (1992), and in the increasing interest these kinds of work are stimulating, through which an alternative approach can be developed. In the final section of this chapter, this alternative approach will be discussed in relation to its implications for the development of an educative paradigm through which competent action may be educed and evaluated.
Although it is not the central purpose of this project to explore methods of identifying competence, such methods have direct implications for the forms that assessment processes and procedures take. A predominantly quantitative approach has quite different implications from a largely qualitative approach. Norris and MacLure (1991) provided a summary of approaches following a review of the literature in their study of the relationship between knowledge and competence across the professions. For the purposes of this study their summary has been reframed into two groups together with a minor addition as follows:
[Based on MacLure & Norris, 1991: p39]
There are those, group A, which are essentially a priori and quantitative, seeking measurable agreements, and those, group B, which focus primarily upon the analysis of observation and interview accounts. Group A tends towards the quantitative paradigm, whereas group B tends towards the qualitative paradigm. In group A, it could be argued that expert panels include a high degree of qualitative material, and in group B the McClelland approach results in quantitative criteria. Qualitative approaches do not necessarily exclude the use of quantitative techniques, and quantitative approaches frequently depend upon 'soft' or subjective approaches to develop theory for testing. The difference, in each case, is a difference of value and purpose, the essential difference being that quantitative approaches more highly value measurement and explanation, whereas qualitative approaches tend to value more highly meaning and understanding. The former tends to reinforce technical (and in the extreme, technicist) approaches to training and assessment, whereas the latter tends to reinforce the development of personal and professional judgement. In this latter approach, both training and assessment demand the provision of evidence of critical reflection on practice in which appropriate judgement has been the key issue. Since judgement is context and situation specific it cannot be reduced to behavioural units, but it can be open to public accountability through the discussion of evidence.
The issue for assessment concerns the nature of the domain(s) of knowledge and theorizing relevant to the sub-spheres of practice that is possessed by competent nurses and midwives and which is essential to marking them out as professions. It may be an argument for their status as emergent professions rather than as fully fledged professions, that much of their knowledge is held implicitly or tacitly. Or, it may be that such tacit knowledge is characteristic of any profession. In either case, what approaches to the identification of competence and its assessment are appropriate to such complex fields of action?
1.4. FROM TECHNICIST TO EDUCATIVE PROCESSES IN THE ASSESSMENT OF PROFESSIONAL COMPETENCE
1.4.1. Differentiating Quantitative and Qualitative Discourses
A formal assessment process requires both a social apparatus of roles, procedures, regulations, and also a conceptual structure adequate to generate evidence upon which to base judgements on student achievements. A structure to make this happen can be logically, even scientifically, formulated. However, the actual events that take place as a result may not always be those expected. Events are contingent whereas structures may be rationally determined or legally imposed. Where a role may be rationally defined and related to other roles in a clear structural pattern, the individual who occupies that role is contingent in the sense that it is the role that is necessary to the structure not the individual. Each individual who could occupy a given role brings different individual needs, interests and aspirations as well as abilities, values and experiences which frame how in practice the role is interpreted and realised. Broadly, the assessment process can be analysed according to such structural and contingent aspects. In the development of an assessment structure, the issue is whether the structural aspects are to be imposed upon, or to be derived from actual practice.
In one sense, the process of assessment can be read as an attempt to impose a logical order on the 'messy' reality of actual practice. Its purpose would be to control or regulate processes through well defined mechanisms and procedures to produce outcomes which ensure some comparability and to assure certain standards of quality or attainment. In the second sense, assessment structures, mechanisms and procedures are seen as outcomes generated by reflective feedback on practice. Through reflection structural or common features of practical competence are identified but not to the detriment of specificity, difference and variety. The second is thus sensitive to the dynamics of situations in a way that the first is not.
Generally speaking, quantitative methods are typically employed in approaches which seek to control and hence compel the adoption of a certain kind of structure. The alternative approach which seeks to generate (or grow) structure based upon reflection upon practice, places at the centre of its arguments concepts of 'value', 'meaning' (as distinct from observable and measurable units), attitudes, judgement and other personal qualities - a qualitative approach. The latter approach thus places human action, reflection and decision making at the centre of its discourses whereas the former replaces the human decision maker by instruments which are constructed to measure or calculate and thus reduce 'human judgement' which is seen as a source of potential error.
Each has quite different implications. Firstly, there are implications both for the way education to enter the professions is organised, and for the legitimation and status of a profession in relation to its client groups and its employers. Secondly, there are implications for the principles, procedures and techniques of assessment. For the sake of convenience, the first group of discourses about competence will be referred to as the quantitative, and the second as the educational. The term educational or educative is chosen so as not to reinforce the easy opposition between mathematical approaches and qualitative approaches in the social sciences. Where a quantitative approach, in the interests of 'objectivity', may seek to exclude discourses of value, judgement and human subjectivity, an educational approach values all the power of precision that mathematics and logical forms of analysis can contribute to the full range of human discourse, judgement and action. In this sense, the educative approach is inclusive and action centred, whereas the quantitative approach is exclusive.
1.4.2. The Quantitative Discourses
(With Particular Reference To The Behavioural and Technicist Variants)
Assessment should not determine competence, but rather competence should determine its appropriate form of assessment. How competence is defined depends upon the methods, beliefs and experiences of the professional. Such definitions can be revealed through the kinds of texts they produce and the ways in which they talk about, support and contest meanings of competence. The definitions that emerge or can be drawn out (educed) from the range of texts and discourses provide accounts of how practical competence is seen. Within these discourses, it is frequently the case that quite distinct, even mutually exclusive views can be described. To mark such distinctions the term paradigm is often used.
The term 'quantitative paradigm', as employed here, refers to those discourses of science which involve throwing a mathematical grid upon the world of experience. Logical deductive reasoning, measurement and reduction to formulaic expressions are its features. The technicist paradigm, as employed in this report, is a particular version of the more general quantitative paradigm which seeks measurement, observable units of analysis and logical arrangements. The technicist paradigm is reduced in scope in that it takes for granted its frameworks of analysis and its procedures, employing them routinely rather than subjecting them to the judgement of the practitioner. Although this characterisation is an 'ideal type', it has a basis in the data. Later discussions will report the sense of frustration some assessors and students feel in filling out assessment forms, in trying to interpret the items in relation to practical experience and in accordance with their best judgement. The typical complaint may be summed up as being that the key dimensions of professionality cannot be reduced to observable performance criteria.
The aims of the technicist paradigm can be seen most clearly in the 'scientific management' of Taylor (1947) and the developments in stop watch measurement of performance, the behaviourism of Watson (1931) and later Skinner (1953, 1968), the mental measurement of Burt (1947), Thorndike (1910) and Yerkes (1929), and programmed learning and instructional design (Gagné 1975). Here the emphasis was upon control and predictability through measurement and the reinforcement of appropriate behaviours to produce desired outcomes.
In making such a reduction, the technicist paradigm reinforces a split between theory (or knowledge) and practice by separating out the expert who develops theory (knowledge) from the practitioner who merely applies theory that can be assessed in terms of performance criteria. Also implied in this is a hierarchical relation between the expert and the non-expert whether seen as practitioner or trainee. In addition, within a quantitative/technicist paradigm, skills assessment models, and competency based education each assume the student initially lacks the required skill or competence. Through training a student then acquires the particular skill or competence required. A particular combination or menu of such skills or competencies then defines the general competency of the individual. This is most clearly expressed by Dunn et al in a medical context (1985:17):
... competence must be placed in a context, precise and exact, in order for it to be clear what is meant. To say a person is competent is not enough. He is competent to do a, or a and b, or a and b and c: a and b and c being aspects of a doctor's work.
The essential 'messiness' of everyday action, the complexity of situations, the flow of events, and the dynamics of human interactions make the demand for a context which is 'precise and exact' unrealistic. The operationalisation of such an approach is exemplified in the programmed learning of Gagné (see Gagné and Briggs 1974) or in the exhaustive and seemingly endless lists of Bloom (1954, 1956). The issue raised at this point is not about the value of analysing complex activities and skills, but the use to which such analyses are put in everyday practice.
Schematically the relationship between the quantitative view, the behavioural and the technicist can be set out as follows:
The diagram represents the decreasing scope from quantitative to technicist, which moves from a systematic method of investigating and comprehending the whole world of experience open to thought, to the reduction of scope to observable behaviours (as opposed, say, to felt inner states), and finally to the reduction of methods and knowledge to the limited purposes of social control or the engineering of performance. Thus, in general terms, as Norris (1991) comments, discourse about competence:
has become associated with a drive towards more practicality in education and training placing a greater emphasis on the assessment of performance rather than knowledge. A focus on competence is assumed to provide for occupational relevance and a hardheaded focus on outcomes and products. The clarity of specification, judgement and measurement in competency based training indicates an aura of technical precision. The requirement that competencies should be easy to understand, permit direct observation, be expressed as outcomes and be transferable from setting to setting, suggests that they are straightforward, flexible and meet national as opposed to local standards.
This requirement can be seen in the three dominant approaches in the quantitative paradigm to assessing practical competence:
• Minimum Competency Testing (MCT)
• Competency Based Education (CBE)
• National Vocational Qualifications (NVQs)
Each will be discussed in turn in the sections which follow.
1.4.3. Minimum Competency Testing (MCT)
When policy makers hold some notion of a golden age when life was simpler, forms of assessment can be seen as tools to engineer a return to that state. A particularly reductive form of the behavioural approach is to be seen in Minimum Competency Testing, which exemplifies a minimalist version of the technicist paradigm.
According to Lazarus (1981:2):
Minimum competency testing is an effort to solve certain problems in education without first understanding what the problems are. In medical terms, minimum competency testing amounts to treating the symptom without paying much attention to the underlying ailment. Here the major symptom is a number of high school graduates who cannot read, write, and figure well enough to function adequately in society. No one knows how many there are, though they certainly constitute a small fraction of all high school graduates. The treatment for this symptom? Test all students in the basic skills of reading, writing and arithmetic. Some states go further; they make receipt of a high school diploma conditional on the student's passing the test.
Whether such a diploma sanction applies or not, minimum competency testing is precisely what the name implies: a programme to test students in terms of, and only in terms of, whatever competencies state or local authorities have decided are a minimally acceptable result of an education.
As Lazarus goes on to point out, MCTs feed the test construction industry which in turn 'impede nearly all attempts at educational reform' (p.9). This is because a considerable investment is placed into the construction of tests and thus the investment has to be recovered through sales. Once a test is in place, it defines the curriculum. The curriculum cannot be radically changed without changing the test and the test is concerned only with outcomes, not processes.
By emphasising outcomes rather than processes, schools and colleges become learning delivery systems, where instruction, as Lazarus points out, is an analogue to manufacture (p. 13), aimed at a well defined market.
In 1977 Glass criticised the use of psychological and statistical tools as lending a spurious rationality and precision to the arbitrary criterion levels or standards chosen. Similarly, Norris (1991) comments:
If the assessment of competence presents difficulties of standards setting this is in part because the relationship between standards and good practice or best practice is not at all straight-forward. Like theories standards are always going to be empirically under-determined. What is worrying is the extent to which they are not empirically determined at all, but are rather the product of conventional thought. Even if this were not the case the pace of economic and social change suggest that standards once set might quickly become obsolete.
Competency based education (CBE) seemed to offer an alternative to MCTs.
1.4.5. Competency Based Education (CBE)
Competence should not be equated with behavioural definitions (cf. Grussing 1984). In practice, the relationship between a test outcome and the real competencies involved in the cultural application of a particular complex skill may be tenuous. Competence in everyday life can be defined in terms of a vast range of changing contexts, needs and interests that defy any attempt to formulate a minimum set. Competency Based Education seeks to address the legitimate concern to ensure that professionals are actually safe and competent to practise, not by focusing upon minimum standards but by seeking to ensure that agreed objectives are met. These agreed objectives may be lent weight through drawing upon panels of expert opinion. However, such approaches do not overcome central objections. First, the drive towards consensus to which the Delphi and Dacum approaches are subject filters out the full range of alternative views. Secondly, there is no guarantee that such approaches do not merely reinforce folklore and prejudice. Thirdly, as Benner (1982) points out, it is to be doubted that the appropriate testing technology can actually service the expanded requirement.
According to Fullerton et al (1992), in America norm-referencing has been the basis for pass-fail examinations in midwifery, but there has been an increasing interest in criterion-referencing. They describe their own approach of constructing criterion-referenced essay exams. Still, its main focus is upon producing standardisation across markers rather than upon the nature of competency itself and the relation between competency, the form of assessment, and the process through which formative evaluations can be made in areas of clinical practice. As such, it is a sophisticated form of the technicist approach and one which does not meet Benner's doubts.
Thus the inherent danger of student assessment which follows CBE approaches is that it glosses over central methodological questions to do with the definition of standards. It also glosses over issues concerning what the student knows as distinct from how the student performs. Ticking off an objective achieved is not equivalent to probing the extent to which the student knows and understands. Such issues are often glossed over because they are considered either too hard, or too philosophical and thus impractical. For example, Hepworth (1989) indicates the existence of such problems but then sidesteps them explicitly in a parenthesis, writing that 'it is difficult to see how' such a philosophical discussion of the nature of knowledge 'could provide nurse educationalists with the practical support which is needed now'. However, such a discussion engages directly the alternative paradigms concerning what counts as knowledge of competence through which assessment can take place. Choice of paradigm has vital practical implications concerning what is or is not taken into account in the assessment procedure. The temptation is to slip towards a technicist view which seems to speak directly to the control, surveillance and measurement of performance without having to consider how performance relates to knowledge, understanding and the development of professional judgement. In short, the choice affects not only the way in which data is collected about a student, and upon which pass/fail assessments are made, but also what counts as data.
By not engaging in such a discussion, Hepworth and others, while being aware of alternative methods and their associated problems, do not possess a sufficient framework for development. The assessment of students is in many ways an unsatisfactory game of how to fit the assessor's professional judgement of the student into the appropriate boxes. In this sense, the assessment categories are interpreted in the light of background knowledge concerning 'competence' and concerning the student. This background knowledge may be neither very deep nor made explicit.
The weaknesses of the CBE approach were explored in a three-year project led by Benner (1982) which sought 'to develop follow-through evaluation instruments for schools of nursing and hospitals' and, to this end, to 'develop competency-based examinations that reflected the performance, demands, resources, and constraints of actual nursing practice for new graduates.' It was found that:
These test-development efforts, however, were hindered by the lack of adequate methods for identifying competencies and the lack of adequate pre-existing definitions of competency in nursing. Most efforts to identify competencies in nursing, to date, have been based on expert opinion rather than on observation, description, and analysis of actual nursing performance. Thus, identification of competencies and evaluation of competency-based testing for nursing was undertaken.
In order to pursue the project they undertook their own identification of competencies and consequent construction of a method of assessment. Competency based education, if it is to be of more than ritualistic use, must attempt to predict successful performance in work roles after graduation. However, competency must be distinguished from other work-related qualities an individual may have. Is an attitude a competency? What is the relation between a skill and a competency? Is insight a competency? Rather than attempt to make a wide-ranging set of distinctions at this point, it is useful, at least, to refer to Benner's distinctions between a basic skill, attainment and competence:
A basic skill is the ability to follow and perform the steps necessary to accomplish a well-defined task or goal under controlled or isolated circumstances. In attainment, the desired effects or outcomes are also judged under controlled circumstances. Competency, however, is the ability to perform the task with desirable outcomes under the varied circumstances of the real world.
She provides the following summary of frequently cited elements of competency-based curriculum and testing:
(1) identification of competencies in specified roles, based upon observation and analysis of actual performance in real situations; (2) relating the identified competencies to specific outcomes; (3) establishment of a criterion level for the competence; and (4) derivation of an assessment strategy from the competency statement that is objective and predictive of competent performance in actual performance situations.
This clear summary statement is essentially programmatic. To accomplish the programme is complex and difficult. Benner details six major reasons for the difficulty. The following is an interpretation of these:
1. There is an absence of well defined behaviour domains in nursing. Nursing possesses few identifiable outcomes, since these are largely dependent upon situationally specific interactions and since research and development efforts have been limited.
2. There is confusion between competence, as denoting actual success in a real setting, and objectives that seek to enable a student to improve a particular skill without being placed into a real situation possessing a particular goal and context.
3. All tests have problems with predictive validity. This is particularly so where the behavioural domain is not well defined as in the case of problem solving and clinical judgement.
4. Skills relating to the building of working relationships are not only the most important but also the most difficult to test - e.g., empathy, ability to relate to others. Assessment is only possible in realistic situations.
5. The creation of lists of behaviours set out in incremental steps associated with a task excludes the inherent meanings of the whole performance, comprised of a related set of tasks. There is an absence of guidelines concerning priorities or the relative importance of tasks.
6. The creation of formal lists and sub-lists in task analysis at best may well be an infinite process, at worst an impossible mission. Indeed, it overlooks the way in which specific situations demand a meaningful organisation of responses, not simply the reiteration of procedures.
1.4.6. National Vocational Qualifications (NVQs)
Burke and Jessup (1990:194) provide a detailed account of NVQs, which take a broadly competency based approach, and represent the approach diagrammatically.
NVQs, on this model, seek to combine a wide range of methods to construct an evidence base. At first sight it may seem to offer a step beyond the quantitative paradigm in that it appears to employ forms of assessment not easily reducible to measurable entities (essays, assignments, simulations, reports) alongside those that are (multiple choice questions, skills tests and so on). While the evidence base so constructed is richer than the other methods, the approach shares with them the basic orientation of imposing a pre-determined, standardised structure upon occupational practice. For example, although performance evidence is constructed from 'natural observation in the work place', it is already framed within pre-determined categories of 'Elements of competence with Performance criteria'. It is not a structure that is 'grown' from reflection upon practice. It is thus subject to similar criticisms as Benner lays against competency based education in general.
Technicist and behaviourist approaches to the assessment of competence are predicated on the notion that predictability of outcome is possible in human activity. They assume situations sufficiently controllable to enable learning to be measured in terms of pre-specified outcomes. It is, however, unrealistic to expect contexts to remain stable (i.e. unchanging) and equally unrealistic to believe that the only outcomes of a specified action will be the intended ones. The contexts of human interaction are, in any case, essentially ‘messy’, requiring judgement as much as knowledge and technical skill; judgement is not obviously amenable to assessments which look only for what is directly observable and can be either measured or ‘ticked’ as having been observed. Behind assessment operated according to the quantitative paradigm there is a desire for accountability, but also for an easy way of identifying strengths and weaknesses so that reinforcement can be given to maximise the chance of achieving a desired outcome. There are three major problems with this paradigm as a means of assessing competence in nursing and midwifery. Firstly, it splits the ‘expert’ theorist from the practitioner, who becomes the person who applies theory that can be assessed. Secondly, it places greater emphasis on the assessment of performance criteria than it does on the assessment of knowledge. Thirdly, it fails to take any account of the complexity and dynamism of human interaction and organisational processes.
1.5. FINDING A DIFFERENT APPROACH
1.5.1. Towards Alternative Paradigms
The alternative paradigm begins with actors as agents in their own definitions of and approaches to competence and its assessment. Appropriate structures with their mechanisms and procedures to produce desired outcomes are developed by reflection upon work place practice. Such structures are continually negotiated and redefined because work is both dynamic and situationally specific.
Light is increasingly being thrown upon these structures and processes by qualitative research which has focussed in particular upon the unintended or hidden processes involved in occupational socialisation and learning. For example, Woods' (1979) studies of pupils negotiating workloads with teachers, albeit in schools, are relevant in alerting researchers to how students negotiate what they consider to be appropriate workloads in classrooms and clinical settings and appropriate tasks for assessment. Davies and Atkinson (1991) have identified a number of student midwife coping strategies. The particular students were already qualified nurses who had the added problem of coping with a return to student status. Such coping included 'doing the obs' (that is, observations), which organised their time and allowed them to 'fit in'. It included avoiding certain staff, or 'keeping quiet'. In short, students learnt to manage the kinds of impressions that they were giving to their assessors and other key staff. These may be referred to as student competencies. Having spent many years in student roles (whether in school, in college, or in clinical situations) most are experts, or at least highly proficient, in such roles. Some students are very sensitive to and readily pick up on the cues that staff provide concerning what is or is not acceptable to them. Others are cue-deaf.
Clinical practice for students who have no prior clinical experience is itself a phase of socialisation into work practices. Workplace cultures have their own idiosyncratic practices as well as drawing upon wider, more general professional belief systems and formal and informal codes of conduct (cf. Melia 1987). Students thus have to juggle not only their developing understandings of workplace cultures but also the academic or educational definitions.
Such studies indicate the complexity of the learning process within which assessment takes place. There are quite distinct kinds of competency, which include:
• competency as defined by course and/or official statements
• competency as defined by assessment documentation
• competency as defined by occupational cultures
• student competency to negotiate and manage impressions, workloads and expectations
• tutor/mentor/assessor competency to impose/negotiate practice
These are not meant to be exhaustive but rather illustrative of what may be involved in what seems at first sight a simple act of assessment.
Alternative paradigms attempt to engage with work practices and social and educational interactions rather than impose upon them. The focus is not on the aggregation of elements, but upon processes, relations, and meanings, that is, upon selves in action. Norris (1991), in addition to the behavioural approaches described above, identifies two further views or constructs of competence: the generic and the cognitive. Generic competence 'favours the elicitation through behavioural event or critical incident interviewing of those general abilities associated with expert performers'. Cognitive constructs have reference to underlying mental structures through which activity is organised. However, these alternatives do not exhaust the possibilities. One could refer to theories where intuitive relationships are formed through a combination of experience, intelligence and imagination. One may ask to what extent competence is some product of personality, linguistic habits of thought and discourse repertoires. Such alternatives attempt to grapple with the complexity of those processes through which expertise is accomplished. They mark the difference between painting by numbers and painting from life.
Benner (1982, 1983), drawing on Dreyfus and Dreyfus (cf. 1981), provides one of the most sophisticated attempts in health education to understand competence. It falls within a cognitive approach. She postulates five stages towards expertise: novice, advanced beginner, competent, proficient, expert. What is appropriate for the student nurse, and particularly for the undergraduate as opposed to the Project 2000 diploma nurse or non-Project 2000 nurses?
The Benner model assumes not simply a progression but a qualitative transformation between the way an advanced beginner operates and the way a competent nurse operates, and then a further qualitative transformation in the move towards proficiency. Benner (1983:3) focusses her analyses of nursing upon actual practice situations. The differences:
can be attributed to the know-how that is acquired through experience. The expert nurse perceives the situation as a whole, uses past concrete situations as paradigms, and moves to the accurate region of the problem without wasteful consideration of a large number of irrelevant options (...). In contrast, the competent or proficient nurse in a novel situation must rely on conscious, deliberate, analytic problem solving of an elemental nature.
Such expert knowledge while not amenable to exhaustive analysis can be 'captured by interpretive descriptions of actual practice' (p.4). The task is to make the 'know-how' public. There are six areas of such practical knowledge identified by Benner:
(1) graded qualitative distinctions; (2) common meanings; (3) assumptions, expectations, and sets; (4) paradigm cases and personal knowledge; (5) maxims; and (6) unplanned practices. Each area can be studied using ethnographic and interpretative strategies initially to identify and extend practical knowledge.
These areas are common to most professional action. Such action is not bound by exact mechanical procedures, rather it is framed by judgement, and appropriate actions are dictated by the specifics of the situation. Thus the interpretative strategies employed by experts rather than the procedures become the main focus of analysis. A procedure may be competently, even skilfully executed but if it is not appropriate, it will fail. The vital element is judgement.
Benner reports studies by Herbert and Stuart Dreyfus (1977) which 'demonstrated that only by dropping the rules can one become really proficient' (p. 37). The example given is of undergraduate pilots who had been taught a fixed visual sequence to scan their instruments. The instructors, while issuing the rules, were found not to follow them. Because they did not follow their own rules they were able to find errors much more quickly. Actual practice and official procedures may diverge radically. Indeed, in some circumstances following the rules may be dangerous. If the practice of expert practitioners in nursing and midwifery is under-researched, as many suggest, then upon what is competency based assessment founded?
Ashworth and Morrison (1991), in discussing the assessment of competence, see it as 'a technically oriented way of thinking, often inappropriate to the facilitation of the training of human beings'. It is inappropriate because:
assessing involves the perception of evidence about performance by an assessor, and the arrival at a decision concerning the level of performance of the person being assessed. Here there is enormous, unavoidable scope for subjectivity especially when the competencies being assessed are relatively intangible ones. Moreover, the specification of assessment criteria in competence is unlikely to affect the problem of subjectivity.
Does the approach by Benner offer an alternative method of assessment? Rather than trying to exclude subjectivity, the approach actively involves the subjective experiences of experts in trying to access and build up a body of 'know-how' which can then form the basis for inducting novices into expert practice. The alternative paradigm offered here rests upon being able to access the expertise of the expert. Ethnographic or qualitative forms of research methodology are argued to be the appropriate methods. While these methodologies can provide a detailed data base of professional practice, they do not in themselves provide a method of teaching, a curriculum, or a method of assessment.
Benner (1982) provided illustrative examples of what would be the basis of such a curriculum and later (1983) provided more detailed analyses of appropriate nursing competencies. These point the way towards an educative paradigm. Benner (1983) employs a concept of experience narrower than the commonsense use of the term but appropriate for the development of expert knowledge. For her, experience 'results when preconceived notions and expectations are challenged, refined, or disconfirmed by the actual situation.' It is therefore, as she says, 'a requisite for expertise'. This formulation contains within it the basis for an educative approach in the development of nursing knowledge. It is important for a nurse to be able to read a situation and make this explicit:
The competent nurse in most situations and the proficient nurse in a novel situation must rely on conscious, deliberate, analytical problem solving, which is elemental, whereas the expert reads the situation as a whole, and moves to the accurate region of the problem without wasteful consideration of a large number of irrelevant options. (p. 37)
Unfortunately, as Meerabeau (1992) comments, the knowledge of the expert is typically tacit and as such is a 'methodological headache' because it is very difficult to make explicit. There are two implications. The first is for the development of nursing theory, where knowledge is derived from practice; the second is for nursing education. Although these implications do not yet adequately take into account sociological dimensions of the construction of competence as a social phenomenon, there are nevertheless important implications for assessment, which rest upon the nature of the evidence that must be collected and recorded in order to ground judgements. If increasing professionality depends upon the richness and variety of experience, then assessment should be directed not only towards the student's performance at safe levels of practice, but also towards the student's knowledge of situations, as evidenced through an ability to record, analyse and critique an appropriate range of their own clinical experience and to set this into relationship with the clinical experience and practice of others. The task for the student is to make explicit the personal and shared understandings through which action takes place in given situations, in ways that are open to critique and are knowledgeably argued. This method does not abstract procedures from situations but sets them meaningfully into relationship with personal and shared experience. It is the role of knowledge and evidence relating to performance and technical accomplishment that becomes critical in an educative paradigm.
1.5.2. Towards the Educative Paradigm
The educative paradigm depends upon a structure of dialogue through which competent action, knowledgeability and their evaluation are educed (or drawn out) by students and staff reflecting together upon evidence. Evaluation/assessment under this paradigm seeks to inform the decision making of all parties (student, assessor, teacher, ward staff, clients). The approach does not exclude quantitative and analytical approaches but employs them within a relationship that focuses analytical and critical reflection upon performance in clinical areas.
As discussed in the previous section, the background interpretative strategies of the practitioner are what distinguish the novice from the competent practitioner and from the expert. Since the tacit knowledge of the expert is not freely available to the student, a process is required which sets students and staff into educational relationships. The educative paradigm bases teaching and assessment upon identifying the repertoires of interpretational strategies available to the practitioner. In practice this means that student and staff adopt a standpoint of mutual education, which involves taking research- and inquiry-based strategies to make explicit the assumptions, the values, the rationales for judgement, and the case-lore built of memories possessed by the expert. The educative approach sets theory and practice into a different kind of relationship from the traditional separation of the two. Theory building and practice become two sides of the same action.
Rather than a division between academic and professional competence as implied by crude distinctions between theory and practice, it could be argued that to be a professional requires students to have the special competence to inform practice through academic reflection. This is the perspective of the reflective practitioner (Stenhouse 1975, Dreyfus 1981, Schon 1983, 1987, Elliott 1991, Benner 1982, 1984) where theoretical knowledge, far from being developed independently of practice, is grounded in the experiences of practitioners who test theory through practice and broaden their practical frame of reference through principled application of that theory. Such an approach is founded upon a notion of the mutuality of theory and practice which entails the modification of theory through practice and the modification of practice through theory. It is an approach which demands an appreciation of professional identity which places research at the heart of professional judgement and action. To come to any worthwhile conclusions about the achievability of excellence in both academic standards and professional competence, an evaluation must be able to examine the nature and quality of judgements, and gain access to students' reflections in both the clinical and the classroom environment.
Eisner (1993) in his reconceptualisation of assessment in schools and colleges proposed, in summary form, the following criteria for practice:
• The tasks used to assess what students know and can do need to reflect the tasks they will encounter in the world outside schools, not merely those limited to the schools themselves
• The tasks used to assess students should reveal how students go about solving a problem, not the solutions they formulate
• Assessment tasks should reflect the values of the intellectual community from which the tasks are derived
• Assessment tasks need not be limited to solo performance
• New assessment tasks should make possible more than one acceptable solution to a problem and more than one acceptable answer to a question
• Assessment tasks should have curricular relevance, but not be limited to the curriculum as taught
• Assessment tasks should require students to display a sensitivity to configurations or wholes, not simply to discrete elements
• Assessment tasks should permit the student to select a form of representation he or she chooses to use to display what has been learned
These criteria may provide a point of departure to construct principles of educative assessment for the professions. Although it is not the object of this report to complete such a task, in the succeeding chapters further insights will be explored that may contribute to such an endeavour (see in particular chapters 8, 9, 10).
1.6. THE EDUCATIVE PARADIGM: A SUMMARY
There is a commonly held but spurious distinction between theoretical and practical knowledge. It is a distinction reinforced - albeit unintentionally - by assessment methodologies employing quantitative discourses which adopt behaviourist and/or technicist strategies. Theory and practice become two sides of the same coin in approaches which see knowledge and theory production as being generated through action, and reflection upon the effects of action.
It has been argued that the latter approach more closely fits the needs of contemporary demands being made upon assessment. Assessment is increasingly being seen as having multiple purposes that are integral to the educational process and not additional to it.
Brown (1990) in connection with assessment in schools, sees five emergent themes:
• a broader conception of assessment which fulfils multiple purposes
• an increase in the range of qualities assessed and the contexts in which assessment takes place
• a rise in descriptive assessment
• the devolution of responsibilities for assessment
• the availability of certification to a much greater proportion of young people
The first four of these are clearly relevant to nursing and midwifery assessment, and to them may be added:
• the assessment of professional judgement
• the assessment of professional action and problem solving
• the assessment of team participation
• issues of professional and cross professional dialogue and communication
• issues in the generation of learning environments
There are here issues both of cross disciplinarity and of what counts as knowledge. In particular, assessment must address itself to the question of what constitutes the domain of professional knowledge. Assessment thus becomes integral to the whole question of the development of a professional mandate.
Texts (i.e. written statutory guidelines, approved institution documentation, etc: and spoken advice) about competence and its assessment are interpreted in relation to situationally-specific, ‘real-life’ nursing and midwifery events. As they work in their particular clinical location, individuals strive to understand official regulations in relation to their career-long personal experience. Institutions are made up of such individuals and consequently organisations themselves are dynamically evolving ‘learning institutions’. They are places where the meaning of competence is continually being developed and better understood, and where strategies for implementing devolved continuous assessment are changing in response to that new understanding. Institutions which have (at best) a well-established culture of reflective practice or (at least) an embryonic one are in a strong position to support the introduction of the new responsibilities and roles necessary for this implementation; responsibilities and roles which incorporate reflective practice at their very heart. Where the culture is less established, the guidelines and regulations are experienced as impositions and resistance occurs. Individuals are less able to adjust their concepts of competence, construing knowledge, for instance, as a sort of tool kit of information containing separate items which are useful in themselves but are rarely used as a set because the job for which they are to be used is not yet clearly identified. It is only through the experience of working with concepts of competence and addressing the issue of what counts for assessment purposes, that coherence is achieved. In the course of work people ‘test’ the concepts that frame the statutory texts, through their practice itself and the accounts they share. Concepts of competence are, then, defined from a multi-dimensional perspective and developed through individual reflective practice within a reflexive institutional culture. 
Competence is assessed by strategies that are adapted in operation to accord with the historically embedded perceptions of the purpose of assessment that are peculiar to the particular institution. The national strategy for devolved continuous assessment must, therefore, take account of the variety and range of perceptions and institutional cultures. It must encourage the development of structures for principled movement towards a form of assessment that takes account of the global needs of the profession and the local needs of individual institutions.
THE ASSESSMENT OF COMPETENCE: THE ISSUES DEFINED IN PRACTICE
This chapter explores the issues identified by staff and students in their reflections upon and experiences of the assessment of competence. It is divided into two broad sections. The first section draws upon the ways in which practitioners and students define competence in practice. The second draws upon staff and student experiences of assessment practices as they relate to the traditional forms of assessment and the new demands on them made by devolved continuous assessment.
For some of the students, practitioners and educators involved in the study, it was obvious that they had previously given much thought to the concept of competence and how it should be assessed, but even then had only reached provisional understandings. For many, articulating their thoughts on competence and its assessment was a difficult activity. Their discussions during interview appeared to be first attempts at exploring tacit and unarticulated beliefs. What is described in this chapter, therefore, is the range of developing understandings of competence that people work with on a daily basis. They are separated for convenience in creating a description, but because they are in a state of becoming, they flow into each other in practice.
2.2. HOW PEOPLE DEFINE COMPETENCE IN PRACTICE
2.2.1 Developing Conceptual Maps in Practice
It is important to distinguish between definitions of competence and assessment within academic debates and official texts, and understandings and beliefs concerning competence that arise during the course of practice. The discussions in chapter one provided conceptual maps which arise within different academic or scientific paradigms. In practical day-to-day affairs, understandings concerning competence and its assessment may arise in many different ways. They may be held tacitly, they may be adopted uncritically, they may be part of a strong value or belief system regarding traditional or radical images of how professionals ought to behave. In short, each individual has their own conceptual map of what counts as competence, the competent practitioner, and assessment. These idiosyncratic maps may be informed by academic and official discourses, or unreflectingly adopted through a process of occupational socialisation. The focus in this chapter is on the range of 'maps' that people actually draw upon in their judgements about, and interpretations of, competent practice and student performance.
It is not just a matter of there being different definitions of competence. Each definition is the result of a different kind of discourse, that is, a different way of talking about experience and of providing rationales for action. Paraphrasing Kuhn (1970), adopting one conceptual map as distinct from another involves perceiving the world differently. Different maps define the boundaries between one entity and another differently. Rather like the gestalt figure which can be seen either as a duck or a rabbit, the same material entity (the lines on the page) can be perceptually organised in quite different ways and result in mutually exclusive judgements: to one person it is a duck, to another person it is a rabbit, to a third it is a perceptual illusion or trick! Similarly, one person's conceptual map of professional practice may yield quite different judgements about what procedures should apply in a given situation to the judgements of an individual whose conceptual map is organised in a quite different way. In changing the conceptual structures that define competence or professionality, the mechanisms and procedures which guide actions also change. Thus for example, the perceptions, mechanisms and structures underpinning a ward based practical test used to assess the Rule 18 nursing competencies are not appropriate for the continuous assessment of the broader Rule 18a nursing competencies for Project 2000. In general, the events that are considered significant and valuable under one conceptual map may not be so under another; indeed, an event recognised as having existence under one conceptual scheme may not be considered 'real' under another. It thus matters very much how people come to define competence. This in turn has consequences for the development of professional practice and a body of knowledge.
2.3. UNIDIMENSIONAL APPROACHES TO DEFINING COMPETENCE
2.3.1 Statutory Definitions of Competence
Statutory competencies for nursing and midwifery (Rules 18 and 33 respectively) have been in place since 1983 with the introduction of the Nurses, Midwives and Health Visitors Act. These statutory requirements guide individuals as they attempt to come to an understanding of the concept of competence:
Well, we have a criteria as a midwife, we've got a set of rules, a code of conduct...rules that we have to follow and your practice has to be safe otherwise you wouldn't be following those rules so that's how I judge my competency....this is what you...have to be able to do to be a midwife. (midwife)
As this midwife makes clear, statutory guidelines constrain the way in which a conceptual map can be drawn up; they can also broaden it, however. If we take nursing as an example, we can see how Rule 18a has constructed competence as a broader set of things to be achieved than Rule 18. For instance, under Rule 18 students must:
b) recognise situations that may be detrimental to the health and well being of the individual
However, for students to fulfil this particular focus of competence under Rule 18a, their understanding of factors detrimental to health requires greater scope and detail. They must demonstrate the following:
a) the identification of the social and health implications of pregnancy and child bearing, physical and mental handicap, disease, disability or ageing for the individual, her or his friends, family and community
b) the recognition of common factors which contribute to, and those which adversely affect physical, mental and social well being of patients and clients, and take appropriate action
d) the appreciation of the influence of social, political and cultural factors in relation to health care
These revised statutory competencies for Project 2000 courses incorporate new educational aims, placing greater emphasis on professionality. Nurses and midwives are expected to demonstrate competence through:
c) the use of relevant literature and research to inform the practice of nursing
e) an understanding of the requirements of legislation relevant to the practice of nursing
Statutory definitions of competence provide a way of criterion-referencing assessment and thus ensuring that standards are met. At the same time they offer the individual a set of criteria for formulating their own version of the essential conceptual map.
2.3.2 The Tautology of Statutory Definitions of Competence
Where assessment criteria are constructed narrowly in terms of statutory definitions of competence this can lead to a tautology. Used in this way they can shape the concept of competence in terms of the minimum standard necessary to meet the assessment requirements. Fulfilling the assessment criteria for the course defines competence; anything not defined in the assessment criteria is ‘worth-less’ as far as this particular aim is concerned. Just as IQ has been defined as what IQ tests measure, so competence can be defined as what assessment procedures measure:
We're competent as a qualified nurse because we've satisfied the system that requires us to demonstrate it. We've passed the assessments etc so therefore we are competent. That's one definition. (education manager)
This points to two very real assessment issues. How much of assessment is about bureaucracy rather than judgement or education? How do personal understandings about competence 'fit' with those defined through assessment documentation? The issue of competence as a minimum standard is a live one for curriculum planners and practitioners alike.
2.3.3 Statutory Competencies - a Minimum Standard?
Whilst some interviewees identified the statutory competencies within their own 'maps' of competence, others expressed worries about statutory definitions being perceived by others as 'minimum' standards, where no reflection and development occurred once the 'basics' had been achieved:
...they're a good springboard for further development, but I would challenge anyone who sees them as the be all and end all. Or who always points to the competencies and says as long as a nurse can do that she is therefore a nurse. (educator)
...if it's a statutory requirement then it can not be ignored (...) a course has got to be approved. I think there is a danger that they become the bench mark, and I've got no evidence to support that, this is a personal opinion, I just speculate that it could be a danger, that curriculum development group would say that's what we've got to achieve. (...)I would see it as a minimum gateway that you get through some time during the course, but there's far more to be achieved than that, bearing in mind levels and that we're only looking for a diploma level etc, etc. I'm not advocating that we build in the course any more than that, but I still think there's a great difference between the statutory minimum requirements and what can be achieved even within the diploma level course. And I guess if you went around the country and looked at all the different courses that they all probably fall at different points within that. I'm sure there are...I know there are some on the statutory minimum in a sense and that they just achieve the minimum requirement and that's it, but I know there are others that I'm sure are much more innovative than that.
If statutory definitions can be regarded as a 'minimum' to be attained or as a 'springboard' for development, there is a danger that teaching and learning may be addressed to the minimum rather than to developmental opportunities.
2.3.4 Competence as a Cluster of Components
Interviewees talked about competence in terms of it being broken down into 'components' which are considered to contribute towards the whole. These components clustered into categories, examples of which are outlined below:
• possessing a wide range of skills
• safe practice
• knowledge base which is up to date
• critical thinking
• functions as a member of a team
• professional attitude
• motivation and enthusiasm
This list is not exhaustive, but illustrates the range and kinds of components interviewees described. Each of these components may be further broken down into sub-components in what may well become an indefinitely extendible series of 'bits', as in a broadly behavioural approach (cf. Medley 1984, Evans 1987 and the MSC). However, upon closer inspection many of the 'components' referred to by interviewees seem to resist exact definition. This is demonstrated in the examples of the 'components' in the following figures.
Figure: Components of Competence (Part i)

• ...you've got to keep yourself aware of changes in current thinking...
• ...tremendous self awareness...
• ...good practical skills...
• ...safe...who admits when they don't know something...and goes to the right place and finds out who to ask...
• ...enthusiastic and eager to learn, wanting to develop new skills...
• ...an adequate knowledge base...
• ...personality counts for a lot...
• ...someone who's organised and can think things through logically...doesn't rush things, just stands and thinks for a while or can organise themselves...
• ...the ability to provide a high standard of care...not only meeting the mother's physical need but also her psychological needs as well...
• ...it's being a patients advocate...
• ...motivation...and to be able to take an interest in ward activities. That's very important...
• ...thinking; if you want just one word it's about thinking...

Figure: Components of Competence (Part ii)

• ...able to show empathy with relatives...
• ...you can be competent at knowing what to do, competent at knowing how to do it...but it's really knowing why you're doing it...
• ...using your resources suitably...
• ...good with staff and able to help, see that somebody's drowning under a lot of work (...) to go in and help and guide them through, support...
It is possible to itemise what constitutes 'good practical skills' for an individual. It is even possible to regard keeping up to date with changes in 'current thinking' or having 'an adequate knowledge base' as observable and measurable elements. It is more difficult to define 'tremendous self awareness', regarding oneself as a professional 'first and foremost', or 'thinking for a while' in the same way. These latter point towards a more holistic conception that implies processes of interpretation and judgement. All the so-called components could be reinterpreted as features of a multi-dimensional holistic map, not reducible to bits but integral to the work process as it reveals itself in concrete practice. Nevertheless, a focus upon particular skills and qualities as discernible 'bits' or 'elements' required for good practice is a common way of talking about practice, whether or not that practice can actually be itemised and quantified.
2.4. MULTIDIMENSIONAL APPROACHES TO DEFINING COMPETENCE
2.4.1 The Factors which Contribute to Multi-dimensionality
There are, then, several factors which lead to development and reconceptualisation of competence. One is the changing professional mandate. In nursing, the amended competencies for Project 2000 were described by one educator as:
...a new form of competence. That it's not the skills based competence that we had before, it's much more open, learning, flexible, outcomes type of thing.
There are concomitant changes in the educational process too:
I think I'm taught about what a competent nurse is, is somebody who can maintain the safety of the patients on the ward. But I think there's more to the competencies, I don't think a lot's put down on communication skills which is really what nursing, a lot of it's really about, it's being a patient's advocate, and you can't be a patient's advocate unless you've got really good communication skills. (student)
In addition, there is change motivated by a changing world ‘out there’ to be taken into account:
...there are clinical skills which I think a nurse needs to learn in order to survive in an ever changing world. If you look at technology and things like that, so the whole issue about the nurses role needs constant refining, i.e. can nurses give IV drugs? (educator)
Multi-dimensionality is also encouraged by working practices which continue to change roles. As one midwife, in making a comparison with the 'extended role of the nurse', says:
...they're going on about the extended role of the nurse, I just fall about because our midwives, it's not an extended role, it is their role; for instance they perform episiotomies...they also suture their cuts...they can also give life saving drugs without having to wait for medical staff to get there.
Neither the behavioural nor the legislative conceptualisations of competence address these kinds of issues:
I mean generally people I think in nurse education aren't happy with competency based training. We think it concentrates on performance, skills, the technician... and doesn't take sufficient account of the development of the individual. The cognitive, the intellectual, the reflective practitioner. And certainly this is a worry since one of the major things about Project 2000 (...) at Diploma level, is that it strives to develop cognitive and intellectual skills which enable the person to be reflective, a critical change agent at the point of practice, but also someone who can resource their own learning, their own continuing education and direct that and influence all of those things as an equal partner with all sorts of other professions. And I'm not sure that the competencies necessarily reflect that side of the professional role...either of them (pause) 18a is certainly better but I think the very fact that they are still cold competencies, which has a very clear manual task related definition.
2.4.2 Working Relationships and the Dynamics of Work
In the multidimensional approach, competence is defined by describing the features of the totality of the concept as it is expressed within the context of work. It is not an easy task to convey the variety of highly individual understandings in a way which makes sense of the diversity without over-simplification. Interviewees' reflections on competence appeared frequently to be exploratory in character, often revealing inconclusiveness and a difficulty in articulating the indefinable without resort to concrete instances or events. The following interview extract gives a sense of this:
It's very hard isn't it? Because each individual's probably, you know, different. I think (pause) a competent nurse...(pause) someone who can work in a team, work with other disciplines, I think someone who is aware of current research, doesn't sort of stay stagnant, is always trying to update her knowledge...a person who's approachable, a person who has genuine regard for her patients...maybe has experience of life as well. Erm, can identify maybe mood changes and take into consideration why this happens. A patient maybe has just been given bad news...so can adapt her approach (pause) you don't want a nurse coming in bouncing if her patient has just been told they've got inoperable carcinoma. So you've got to have counselling skills, listening skills. Erm and to have a good rapport with patients, you know for the patient to feel that they are able to come to the nurse, even just to sit in silence and for the nurse to be there and just offer support, to sort of show empathy with her patients. (staff nurse)
At its broadest, competence includes 'life experience'. Most importantly, there is the sense of competence being based in work, and in particular, being based in a working relationship. The features of working relationships are that they are situationally specific and skills required in the situation are shaped or tailored according to specific needs and circumstances. In the conveyor belt technology of car production it is possible to standardise patterns of work and procedures so exactly that a robot can be programmed to perform them. What characterises nursing and midwifery is the opposite. Work situations are dynamic, conditions change, no two situations are identical. Programmed responses of the robotic kind are not merely impossible but undesirable. These concerns are again echoed in the following extract from a student:
It's a hard question. One aspect of it is actually having a good grasp of the kind of nuts and bolts of the job, like when it comes to psychiatric nursing...you should as a competent nurse know the relevance of the sections of the Mental Health Act thoroughly so you're not fumbling around when the situation comes up....and similarly when it comes to carrying out procedures like intramuscular injections dressings and so on....I think when you're at least familiar you're far more competent in things like that and I feel more confident and then I think that sort of flows over you into sort of the other areas. (pause) I find it difficult to put into words, but part of it is a sort of sensitivity to other people because it's very much about personal relationships and building relationships and a rapport with people who you know are in various kinds of mental distress...So to me that's quite an important part of being competent. I mean there's so much involved in that, it's not always what you do it's what you don't do...knowing when to actually say something to somebody, when to get into deep conversation, when to play it cool and when to stop a conversation. (student)
It could be argued that this interviewee while pointing to contextual matters also points to particular 'elements' that are necessary to professional competence. For example, having a grasp of the 'nuts and bolts' seems to imply a kind of tool kit knowledge of the job. Knowing relevant sections of the Mental Health Act is a case in point. This kind of knowledge is not just ‘a bit of knowledge to be added to other bits of knowledge’, however. The Act is itself a complex text that has to be interpreted in relation to situationally specific events. There are two kinds of reading here: a reading of the text, and a reading of the real life situation. This double reading then leads to a decision that certain procedures are required. In attempting to explain how this is done the interviewees give the sense of trying to hold onto the image of a very large picture, while trying to bring into focus each of its details.
In each case, however, there is a structural coherence to this picture. It is a coherence that is provided by the experience of work. Experiences of work provide the materials for accounts and reflections which can be shared with others. Asking what the relationship is between one account of work and a developing body of professional knowledge is rather like asking what the relationship is between the particular and the general. Competence involves acts of generalisation which at one level draw upon the common features of a wide range of experiences and at another level relate those generalisations to other bodies of knowledge or conceptual maps. These acts of generalisation allow the professional to make decisions with respect to the immediate case at hand. Without such acts of generalisation there would be no guidelines for decision making.
In the previous two extracts, both staff nurse and student try to make some generalisations but are well aware of the situational specificity of competence. The student suggests that a procedure that is appropriate to one context, or with one patient is not necessarily appropriate to another apparently similar situation. Her remarks seem to indicate that competence involves the ability to build up a repertoire of experiences and situations that bear some similarity to each other but at the same time reveal significant differences.  To be able to work in a given situation requires an extraordinary sensitivity to its specifics, such as responsiveness to intangibles like mood changes. It is a background knowledge of the effect of context on application that makes the vital difference. The subtlety required in being able to discern which approaches and decision making are required for actions in individual contexts is considerable. This in itself has major implications for learning and professional development, in particular it means that a sharing of personal experience and group reflection on cases is vital to building up an internalised body of case histories relevant to decision making.
2.5. COMPETENCE AS REFLEXIVE KNOWLEDGE EXPRESSED IN WORK
2.5.1 Competence and Professional Practice
The attitude of working to satisfy minimum criteria, from an ethical point of view, cannot be regarded as professional. Mechanisms can be set in place to enable development beyond the acceptable minima. Professional action is effective action. Effectiveness does not work toward, nor is it satisfied with, minimal criteria. Rather it works according to criteria of continual improvement in professional action. In this view, competence is always developmental in orientation, never looking back to the minimal criteria but always looking forward to better performance, improved decision making and greater quality of outcome. This view of competence firmly centres it in 'work' and work relationships. At its broadest, work can be defined as the process of manipulating and transforming physical and social environments for human purposes (Sayer 1993). Work as the dominant means for people to structure their lives, find self value, form a sense of identity and engage in relationships with others is fundamental not only to self development but also to cultural and professional development (Lane 1991). Competence, then, and the forms of knowledge and knowing and acting that underpin it, can be defined as expressions of work.
2.5.2 Competence and the Development of a Reflexive Body of Knowledge
Self and peer review are procedures that contribute to the development of a reflexive body of professional knowledge, grounded in shared experience. They are procedures that facilitate the internalisation of processes of professional judgement and evaluation. The importance of individual understandings of competence is emphasised as these frequently form the basis of self and peer review. Some interviewees focused upon these activities as professional competencies in their own right. They talked about nurses and midwives developing their own standards, and being ready to assess themselves. They saw the development of competence as an on-going process which takes place in a relationship of mutual support and critique with colleagues.
You have to be self judging as well, as well as your peers judging you I suppose...so if you feel you're achieving those competencies yourself, then you're in (a) position to judge someone else I suppose. (midwife)
Such imprecision may disappoint those who want to measure clearly defined signs of competence. Yet what is being judged is imprecisely defined because it is not a single entity but rather a complex of concepts, perceptions, feelings and values that constitute an orientation, a focus and a rationale for acting:
I feel a lot of the time nurses lose sight of the prime reason that they are there, and that is the patient. Because it's so easy to do because there's so much else going on. And a professional person is able to constantly pull themselves and say, "Now hang on, what am I doing? Where am I going?" It's not good enough just to measure myself against the competencies and say, "Oh well I come up to that standard."
In this case then, a defined set of competencies is inadequate to generate the sense of competence that is being described here. A rationale is not a set of competencies but is generative of the criteria and reasons that distinguish between competent and incompetent action. Such a view requires a sense of continuous assessment of action, indeed a sense of mutual assessment:
...I mean I think we all assess ourselves and our work colleagues continuously, all the time anyway. I mean I'm sure as a team leader, the team leader will look at her staff... if she's got a very poorly patient and know who to shout for if there's an emergency. And that way she's assessing her staff isn't she, as she goes on...you assess people by how they go about their work, how organised they are, are they tidy?...Are their patients happy?...Are they lifting patients that are in pain without analgesia and that sort of thing. (student nurse)
The view of assessment as supportive critique is an important one which requires further exploration. It is necessary to discover, for instance, the extent to which that kind of critique is possible given the pragmatic (eg time) and cultural (eg busy-ness) constraints in nursing and midwifery environments, and also the extent that these constraints affect everyday competence. 
Some interviewees suggested that the competence of the qualified practitioner becomes such that individual competencies are no longer distinguishable, having become features of performance in general. They pointed out that these features are so embedded in the practitioner's daily activity that it was often difficult, if not impossible, to articulate them as specific competencies, as in Benner's (1984) work:
...some of the skills come with experiences of life and I think intuitiveness. I could work with a student and I could say, 'What do you see in this patient?' and the student could say, 'Well she looks fine.' Now intuitively I might say, 'Well I don't think she is' and I can't explain why I think that. It's come with experience...so that I don't think can be taught. That is something that has to be acquired throughout as they go on. (staff nurse)
The concept of competence as an expression of work seems inextricably linked also with a concept of continual development.
2.5.3 Competence and Professional Development
As understandings of competence and expectations of role develop, notions of appropriate outcomes of courses need also to change. A newly registered practitioner may be able to take on certain kinds of work expectations but may not be considered fully competent in their new role until they have begun to consolidate their course-based knowledge in relation to an extended period of work. As one educator commented:
...I have got feelings that we are expecting maybe too much to say that at the end of training the student is a competent practitioner. And you would say that perhaps a student becomes a competent practitioner when they have had sufficient time, and that's got to be individual, to consolidate their overall training in a specific area which they choose to work.
Even given time for development, however, there is not necessarily a linear progression in competence from level to level. Development is often not orderly and its pace is certainly not open to external regulation. Seen from a developmental perspective, competence can be conceived in terms of individual readiness for transition to competent practitioner status. This means, as the nurse quoted below suggests, that there is no externally pre-specifiable point at which it is possible to say for each and every individual that competence has been fully accomplished with respect to every aspect of nursing or midwifery.
And I don't think that in three years of training a nurse has even approached perfecting those sorts of skill (interpersonal and communication skills), if she ever perfects them, but she certainly hasn't even begun...So things like the skills you need to be self aware and to have good interpersonal relationship skills takes years and years, if I look at myself (...) to build up, to define and refine.
It also means that the maintenance or development of competence is not guaranteed, and that competence is on a continuum along which, in common with all other continua, it is possible to move backwards as well as forwards:
I think it's transient. I think it (pause) you glide in and out of competence and I don't think you have it for ever and a day erm and yet our notion of it really is that when you're competent that's it and you always are forever more, so it needs nurturing.
If this is the case for registered nurses and midwives, then it is certainly true for students, whose development is taking place within a time-limited Project 2000 programme, and whose learning still remains to be consolidated by extensive experience. Work situations are dynamic, and the situationally specific events of everyday experience are not precisely controllable. The experience of being competent to handle such situations is thus likely to fluctuate according to the sense of being in control, knowing what to do next and handling the unexpected. Thus:
It's not an all or nothing state is it? They (students) are partially competent and I think most practitioners are only partly competent ...
In this view, then, competence is not a steady state; it is a fragile achievement and never a total accomplishment.
I mean to be competent as a nurse do you have to be competent in everything? Because I'm not competent then... Because applying it to my own situation there are a range of skills that I have to bring to my job. There are some I do well I think, there are some I can handle reasonably well, and there are some I'm quite poor at. What I tend to do is delegate the poor ones, avoid them myself. So am I competent or not? I'm competent in the areas in which I practice, but because I avoid the areas where I might not be, does that keep me competent?
But while the professional development view defines competence as something which continues to evolve over time in fits and starts rather than by linear progression, it nevertheless recognises a sense of purposeful direction, a sense of striving to improve action in professional work.
2.6. COMPETENCE: A SUMMARY
The concept of competence resists easy categorisation. There are many different aspects of this wide-ranging and complex concept which have to be taken into account when designing strategies for its assessment. Professional perceptions indicate clearly that there is more to competence than simply what can be easily observed and measured. From the developmental perspective, statutory competencies may be considered as merely an initial framework, or starting point, for professional development. Competence continues to develop and grow as the individual begins to construct a more detailed picture of the general requirements of nursing or midwifery by building up a repertoire of situation-specific experiences. Competence is, therefore, a concept which is worked out and continually reformulated through work itself. Assessment needs to take account of all these complexities.
Devolved continuous assessment was introduced into nursing and midwifery education as a response to the need to assess a wider range of educational purposes and a developing concept of competence. Long-established cultural practices are not changed overnight, however, and the move towards devolution has been marked by a degree of culture clash. Where there is transition there is also variety, as new forms of assessment are introduced to run, for the time being, alongside more traditional forms. Consequently, the experience of the non-continuous, four ward-based assessments has continued to affect perceptions of assessment for some time after the introduction of the more educative approach. Twin-track assessment (i.e. different forms of assessment running in parallel within the same institution) offers both a cultural challenge to the institution, and a potential psychological challenge to the individuals in it. The inevitable pragmatic and conceptual confusions become part of the discourse about what constitutes satisfactory assessment practice. Like competence, therefore, assessment itself is worked out, or constructed, in the process of doing it. The net result is that both the quality of nursing and midwifery being assessed and the instrument for assessing it are defined from multiple perspectives. To discover how far devolved continuous assessment is effective in assessing competence, it is necessary to identify not only the range of views of competence that make up that unstable concept, but also the range of experiences of assessment that have created the culture in which assessment is to take place.
2.8. TRADITIONAL FORMS OF ASSESSMENT
2.8.1 Centralised Periodic Assessment
In 1971, the assessment of nursing practice through classroom based simulations was replaced by a series of four practical tests conducted in clinical settings; a maximum of three attempts at each test was permitted. During placement experiences, the practical tests were supplemented with King's Fund type ward reports, completed by clinical staff. Assessment of theory took the form of a final determining exam, for which students could make three attempts. Assessment for midwifery students consisted of a final qualifying examination, with written and oral components.
The overall assessment systems were the subject of much criticism, which was recognised at all levels in the profession (Gallagher 1985; Aggleton et al 1987; Bradley 1987; ENB 1984, p1; ENB 1986, p1; Lankshear 1990) and stimulated ongoing debate about preferable alternatives. As a consequence of the criticism and debate, continuous assessment remained on the assessment agenda for a number of years. A small number of pilot schemes were operated by the ENB from the 1970s onward; however, such developments were not widespread (Spencer 1985) until the national implementation of continuous assessment.
2.8.2 Practical Assessment
Although many practitioners still cling to some of the ideals of traditional assessment, with the consequence that the nursing and midwifery culture in specific locations within particular institutions has been slow to change, most, when interviewed, were clear about its shortcomings. The apparent contradiction here serves only to illustrate the power of habituated practice to continue to frame corporate action in the face of contrary innovative practice constructed around essentially unstable (in the sense of developing) concepts. The information people offer about their dissatisfaction with piecemeal, periodic forms of assessment is therefore interesting from two points of view. Firstly, it confirms the general sense that nurses and midwives are, in principle, committed to continuous assessment as a 'fairer' means of gauging student competence. Secondly, it provides evidence that people can be aware of shortcomings and yet still continue to work for some time, without complaint, within a culture in which flawed practices persist.
1. One-off practical tests
Interviewees acknowledged that this approach was inadequate for the assessment of practical competence. They commented on performance situations which:
•did not reflect the realities of everyday practice
Interviewees made it clear that assessment focusing on the ability to perform satisfactorily on a one-off occasion within an essentially 'false' situation did not reflect the realities of everyday practice. Staff and students spoke of how a great deal of rehearsal occurred prior to the assessment. Typical of the comments suggesting this performance element were the following:
I don't always feel it's fair to the student really, because it's not real and when they qualify and they get out there it's so different... And I can remember thinking it, "But on my management it wasn't like this!" Everything went so smoothly. (staff nurse)
You tend to do probably extra things that you wouldn't normally do. (student)
..making sure the trolley's perfectly clean and that you've got everything on the bottom, where probably on a normal drug round...day to day you'd just look quickly then rush off with the trolley. (student)
Unsurprisingly, many students expressed anxiety prior to and during assessment events. As one student commented wryly:
I suppose it assesses how to cope with stress...your nerves are just shot to pieces. I'm not a particularly nervous person or highly strung...I'd hate to find someone who gets quite intimidated by it all...it really is horrible.
•assessed limited application of principles to different contexts
Many students were concerned that the focus on one-off performances provided little opportunity to show what they knew about the principles of nursing or midwifery in a variety of settings:
...like aseptic assessments, you can have an assessment on a dressing and can not have gone near a suture line or clips or anything and suddenly you're competent enough to go off with your trolley and take out clips and sutures and God knows what else on the ward.
Others were concerned that success in a one-off situation did not guarantee transfer of competence to a variety of contexts:
...It doesn't suddenly mean that you're labelled safe and it doesn't mean that you can start pulling down the barriers of double checking...
•poorly discriminated levels of performance
The limited capacity of the assessment to pass 'good' students and to detect and, where necessary, fail 'poor' ones was noted by several interviewees:
...You do get referrals to people that shouldn't be referred, and you know are quite capable that have done something amazingly silly. But you know if they'd actually done it on a day to day basis you'd just say, "Well that's stupid, I've got to start all this again"...and then you get those who, you know to be honest don't really put in a great deal more than they have to on a day to day basis that come out with a glowing performance on the day. (educator)
2. Ward Reports
The shortcomings of the King's Fund or 'ward report' forms used on students' clinical placements as an adjunct to practical tests were largely to do with the lack of real evidence that they provided:
...the King's Fund report form is very erm... open and there's not a lot of room on it for comment.. (staff nurse)
Well, the thing is they're never used properly, I mean half the time they just tick and there's never any comment made at all, to reflect what the ticks are actually saying. (educator)
3. Final examinations
Many students had strong views about the inappropriateness of the final summative written exams in nursing, and the final examination papers and orals in midwifery. They were unhappy with their one-off nature and the associated pressures. It was also felt that an exam did not assess their nursing or midwifery competence, and it was therefore seen as unnecessary as well as unwelcome. Typical worries of students were:
I mean that sort of really worries me, the fact that it comes down to an exam paper at the end of the day, you know the final decision. The fact that I've passed the ward-based, the practical assessments and been deemed to be competent or whatever... erm comes down to the fact that at the end of the day, pass the finals.
Personally I'm very bad at exams...I've always had very good or outstanding ward reports but on exams my marks are normally borderline or just over borderline, sort of 50 to 60%...I get into an exam and I know what I want to put and I know it all, well I know quite a lot of it! But putting it down on a piece of paper's just something completely different...everything hinges on that day.
The general attitude towards exams and one-off assessments is summed up by the student who, in advocating a more continuous approach, indicates what is missing from the traditional one:
...you see it as a hurdle...some people thrive on hurdles and jump them. But a lot of people see them as a barrier...you've got to get through that barrier. If it was like continuous assessment erm, I think you'd keep yourself more aware of what you were doing and would be more willing to change your practice, not just as a student but you'd have that sort of framework, so that when you do qualify you're still able to look about at what's happening and change your practice.
He, like the majority of students, practitioners and teachers interviewed, saw examinations as being about something other than competence. Indirectly, such a perception reiterates the view that theory and practice are separate entities.
2.9. DEVOLVED CONTINUOUS ASSESSMENT
2.9.1. The Reality of Continuity
National guidance for devolved continuous assessment was produced by the ENB in its Regulations and Guidelines for the Approval of Institutions and Courses 1990. The strategy requires students to demonstrate the ‘acquisition of knowledge, skills and attitudes of differing complexity and application’, and requires clearly identified summative and formative assessment activities (ENB 1990, p 50). The guidelines place a strong emphasis on formative assessment, reflecting principles which are intended to maximise learning, build on students' strengths, respond to their weaknesses, provide profiles of progress, link the identified 'parts' of the overall strategy and encourage student participation in self assessment (ENB 1990, p 50).
The increased sense of reality that this continuous process can provide is identified by this midwifery teacher:
...the new assessment will be a much...not so much fairer but a more realistic assessment of a student's progress.
The in-depth approaches to the assessment of theory which continuous assessment allows are also welcomed:
...at the General Hospital they do a project where they've got so many weeks to do it, and I think that's a good idea because you can demonstrate your knowledge in a much deeper way...you've got more time and you can spend more care and...it's not as traumatic as doing a big, you know a two, three hour exam.
Insofar as competence was seen as an ability to perform consistently over time, there was a view that continuous assessment facilitated this.
An approach which represents changes in educational aspirations must also map into work contexts, the only scenarios through which professional competence is expressed and realised. The recognition of distribution of responsibilities is therefore essential to the functioning of continuous assessment. The cascade of responsibilities to all levels was recognised by interviewees. Some commented on the effects on students:
I think it makes you more aware for three years, certainly you've got to know what you're doing all the time. (student)
I'm sure that a lot of people believe that continuous assessment is an easy option, and that ultimately it will dilute the performance of the individuals... I believe that it will actually concentrate their minds considerably...(educator)
Others noted that not only did continuous assessment have the potential to concentrate students' efforts but also to foster in them a developmental attitude and an expectation of education as an on-going process.
2.10. INNOVATION IN AN ESTABLISHED WORKPLACE CULTURE
2.10.1. The Persistence of the ‘Old’
Interviews show that perceptions of the weaknesses of single-event assessment and the strengths of continuous assessment do not always lead to different approaches to the activity of assessing. It seems that for many a reconceptualisation is required if assessment is to be part of an educative process. Some interviewees comment that some schedules are still dominated by behavioural formats which do not adequately reflect the complexities of continuous assessment. In them, competencies are broken down into numerous parts or sub-skills in order to measure them on a pass/fail basis. They focus on techniques rather than 'complex' competence. It appears that 'technicist' approaches are still evident in some continuous assessment documentation and hence they do not promote assessment of the 'complex' or 'higher order' competence that typically characterises a professional. The implication seems to be that in evaluating the effectiveness of various forms of assessment, there is a need to consider not only the forms themselves but also the general methodologies employed within them.
In the same way that nursing and midwifery competencies have been statutorily defined but are modified and extended in practice, so the strategies for assessing competence are set down in official procedures but adapted 'on-the-ground' as they are put into operation. There is, for instance, a clear distinction in the Board's regulations between formative and summative assessments, whereas interviewees' comments provide evidence that some assessors employ all assessment diagnostically and formatively. And despite the Board's clear distinction between 'single event' assessment and continuous assessment, people talk about both in ways that suggest both forms of assessment are carried out in similar ways. All the interviewees, however, have definite views about the nature and quality of different forms of assessment, and those views inevitably affect their assessment practices. 
The part played in shaping continuous assessment by 'residual' perceptions, held by those still operating the earlier approach, albeit alongside the new one, and the often cynical attitudes these engender, cannot be ignored. The following comment encapsulates that cynicism.
I'd argue that there's a conspiracy of passing people at the moment because if they judged it by their (clinical assessors)values they (students)shouldn't pass, they're not competent within their values, but they know their values aren't what's being asked for but they have to assess them...to say someone's not competent to do that you've got to know what you're talking about and their training may not have given (them that) ...if you don't know you might as well put a tick because if you put 'no' and they challenge you...
The dilemma is summed up nicely in the following comment, in which the speaker unfavourably compares the preparation offered for the major shift in attitude and practice required by devolved, continuous assessment with the preparation offered to people in industry who are about to undergo an innovation of similar proportions.
...half of the people (assessors) or whatever number have been trained literally as they work to operate within one system. We are then asking them to assess with a different philosophical view...apart from a 998 course here or there, I mean not particular help to do it. If any industry was remodernising it would put in massive resources to change it to operate the new machinery. Nursing somehow hasn't put that (in), and after all assessing that they (students) are competent to practice in this new way when you've never practiced in it (pause) it's like asking me to judge something I know nothing about.
There is clearly an issue of what constitutes an appropriate investment in resources for a major innovation.
2.11. ASSESSMENT: A SUMMARY
Assessment, like competence, is an activity informed by an evolving concept. It is also an activity which is carried out regularly and thus becomes habituated. Where institutions have a culturally embedded reflexivity, their assessment strategies and practices develop gradually and meet the educational needs of the profession. Where the imperative for change is entirely an external one which impinges upon a non-reflective institutional culture, the attempt to accommodate to that change is often traumatic, and the strategies adopted lead to piecemeal adaptation within an ‘unready’ context. By contrast, in a nascent reflective culture there is principled movement towards a form of assessment that satisfies the professional requirement that competence should be founded in the application to practice of appropriate skills, knowledge, and attitudes. Institutional histories are therefore of considerable importance in determining the extent and rate at which a particular assessment strategy will succeed. The success of devolved continuous assessment relies upon the people who operate it having a sense of ownership, without which they will lack the necessary commitment and understanding to exercise effectively the increased responsibility it brings.
A system of assessment which is devolved must be capable of handing responsibility for design and implementation to individual approved institutions without losing the capacity to hold them accountable for meeting national criteria. The institution itself must be able to ensure that it provides a course of professional preparation that equips students with the competence to practice at a professional level. In any assessment system there are, then, internal and external points of reference, and individuals, committees, and planning groups must all respond to both sets of constraints. The devolvement of decision-making scatters the centres of decision; this can lead to the development of either collaborative or competitive relationships between institutions. Where there are well-defined roles and structures that promote partnership between all the interested parties, the personnel involved in the design and subsequent implementation of an assessment policy feel a sense of ownership. Internal partnership rather than external legislative imposition ensures a comfortable fit between the needs of practice, teaching and learning, and assessment. The most facilitative structures enable rather than constrain the process of parallel curriculum and assessment policy design from the earliest possible opportunity. The most supportive roles are defined in a way that encourages dialogue, and brings together different perspectives (e.g. nursing and midwifery) and specialist interests (e.g. branches within a course) in a mutually enabling relationship. Where the assessment histories of merging or merged Schools and Colleges are very different, or these institutions are at different stages in the development of their assessment policies, individuals are obliged to invest large amounts of their personal time and energy to achieve compatibility.
They do this willingly where there are stable structures for dialogue and partnership, but experience frustration where the re-organisation process has resulted in destabilised structures and uncertain role-definitions. This seems to point to the need for a policy co-ordinator and assessment quality assurance role. It is not simply at the local level that clearly-defined assessment roles and structures are appreciated, however; they are also valued at a regional and national level. There are many instances of a happy relationship between ENB Education Officers and their local approved institutions, in which officer and approved institution are able to critique national guidelines as a route towards making sense of them and gaining the ownership mentioned earlier. This relationship recognises the scope for local interpretation of national guidelines, and looks positively for creativity in the approved institution’s operation of them. Where the relationship between the parties is genuinely one of partnership, guidelines are used as an enabling framework and the need for dialogue about different interpretations fully recognised. Without such a relationship, assessment documentation, schedules, and procedures are introduced in which no-one at the approved institution level has any faith. Inevitably these fail to provide reliable or valid assessment information.
DESIGNING DEVOLVED CONTINUOUS ASSESSMENT STRATEGIES
A planning structure must be able to ensure appropriate conceptual frameworks, principles of conduct, role and communications structures, and individual commitment to the enactment of the process. This section will explore the features and experiences of planning in relation to these:
• by giving an overview of the relevant features of the operating context of institutions in terms of 'mapping the system'
• and by describing and analysing the experiences of staff operating within that 'map'
3.1. MAPPING THE SYSTEM
In general terms an institution can be regarded as a system which must meet the demands made upon it both internally and externally if it is to survive and develop to fulfil its intended mandate. Assessment is of course a key function in the mandate of an approved institution to engage in the education of professional nurses and midwives. Assessment is not merely an internal matter. Reference is made to demands from a variety of sources external to the approved institution (ENB, UKCC, EC directives and other appropriate legislation). These reference points provide one set of guidelines common to all by which to construct a map which can then act as a framework for analysis, comparison and contrast. These external points of reference in turn make demands upon and place limits upon the ways in which institutions can organise themselves to meet the demands. There are thus external and internal factors to take into account.
The appropriate structures will of course depend upon local circumstances. However, it is possible to begin the process of analysis by setting out, in the first instance, a simplified schema as follows:
Without appropriate structures and associated role definitions, committees, working groups, planning procedures and communications structures, little is likely to be accomplished. Although oversimplified as a representation, this initial schema does point to two important structural dimensions for the approved institution. There are two directions that it must face: first, 'vertically', towards the national bodies (ENB, UKCC), European directives and local bodies (RHA, Trusts); and second, 'horizontally', towards the clinical areas of the region (purchasers, clinical placements). External demands upon its organisational structure therefore divide into two kinds, and roles must be established to respond to each. In addition, to be properly informed about regional operating conditions, the institution needs access to information regarding the relationship between national bodies and the clinical areas in its region. That the schema is oversimplified becomes clear when the complexities begin to be identified for each broad category of institution. Thus the national-local level can be further amplified:
Clearly, the approved institution must respond not only to national demands but also to local demands. The local context is not a mediating layer between national and institutional levels but rather is symptomatic of global developments throughout society. There is no unambiguous hierarchical 'line-management' relationship running from the national through to the local and then to the approved institution. Devolvement of decision making scatters the centres of decision making, rendering them at least quasi-autonomous. The movement then is into the formation of collaborative and/or competitive relationships where negotiation rather than 'command' is the central operating feature. With the increasing 'marketization' of once nationalised public services, decision making, while made more sensitive to local demands and operating conditions, is at the same time subjected to national demands for 'quality assurance', 'standardisation of outcomes', efficiency, effectiveness and so on. Tailoring services and training to meet local conditions of demand may come into conflict with national demands for consistency, for a common professional education. This particular tension between the local and the national represents a modern feature of society which can be termed the 'global-local' problem (c.f., Harvey 1989).
The implications for the approved institution do not neatly separate into two classes but must rather address the central problem posed by the new operating context with its global-local poles of decision making. This means, in general terms, a need for structures for collaboration, dialogue and information gathering.
The following diagram begins to unpack the internal complexity of the approved institution itself. The internal organisational logic of a given institution has its own historic origins. This provides the internal operating context within which planning takes place. While there may be surface similarities between some institutions, in practice the operating conditions are unique to each institution. In general terms, there are typical operating differences as between the 'new' and the 'old' universities. The new universities come from a polytechnic culture with its CNAA influenced principles and procedures of course development, validation and assessment. The old universities draw upon a quite different culture of autonomy and self-validation. Where the polytechnic culture has expressed itself in terms of strong line management patterns of control (Heads of Department and Deans being seen as professional managers), the old universities typically incline towards individualism, a limited style of line management (Deanships rotating amongst senior academic staff) or a democratic mode of School/Department/Faculty management. With amalgamations or affiliations of colleges of nursing and midwifery different ways of working, different role definitions and different occupational cultures and institutional histories are brought together. Mediating roles and structures not only within the institutions but between the institutions become important.
It is not only the map of the variety of institutions delivering education that is under change but also the 'clinical areas' for student placements.
Each schema provides a way of beginning the process of mapping the range and sources of information necessary to plan the assessment structure.
In order to specify in more detail the roles required for assessment design the structure for implementation purposes must first be identified. For this purpose a further schema can be offered:
The roles that the above schemas have identified in general terms are articulated in practice according to the circumstances faced by each institution. However, if the system is to operate it must meet the functions defined in terms of these roles.
3.2. THE EXPERIENCE OF DESIGNING CONTINUOUS ASSESSMENT STRATEGIES
3.2.1. Operating in the Local Context
Responsibility for the detailed planning of assessment strategies belongs to individual approved institutions. In theory that responsibility is determined by a set of ENB guidelines, UKCC statutory requirements and EC directives. In practice, because every institution has a unique assessment history there are marked differences in the approach each adopts to the design of assessment strategies to meet the Board's most recent requirements. Those who have already been operating continuous assessment informally for several years simply continue the institutionalised process of evolution and development; whereas those for whom the experience is a new one face a considerable challenge of innovation. The following extracts typify the range of experience:
...it's where you are now, what experience, what you've come from, which is what we've tried to do with that profile (...) we know where we are at now is the King's Fund assessment forms. Let's see this as an interim, let's move slowly (...) because it's not just the educationalists in the school, it's the people out there you know, it's like moving where you are now, from where you've been in the past, to moving to the future...it's about development.
The experience of designing a new assessment strategy is most difficult where it represents a major innovation. It is quite different for professionals who, like the members of the group described below, are working in a context that has engaged in a gradual evolution of continuous assessment.
...historically, we have been running continuous assessment here for at least ten years, but initially it was run in conjunction with the statutory mechanisms deemed by what was previously the General Nursing Council and subsequently the ENB and as a result of that obviously we feel we could work through that which in the early days were quite primitive tools for determining the knowledge, skills and values of a student (...) so evolutionary we've moved on and yet in some respects of course we've still got tools that were relatively primitive.
This teacher and her colleagues work in a culture which is familiar with regularly evaluating and developing continuous assessment. For them, each new requirement is an opportunity to upgrade what they have been doing previously.
3.2.2. Ensuring Ownership Through Partnership Structures
Assessment is normally planned in common with other aspects of a course, and the planning team responsible for one is the same group of core educators, practitioners, and (less frequently) students responsible for the other. This partnership ensures that in planning their assessment strategies a team considers both the educational agenda and the agendas of nursing and midwifery practice. Partnership, as the teacher below points out, avoids the pursuit of an ‘ideal’ strategy which is inoperable in practice:
It will involve teachers, managers, practitioners, student representatives and any other person who has specialist knowledge about this particular issue that is going to be written about or discussed. (...) To write a curriculum with purely educationalists, I mean people tend to say that educationalists tend to live in the ideal world, and sometimes don't tend to realise the reality of the situation. It is fine to sit down and write the ideal curriculum, but in practice it can't be implemented. (...) But here we have always taken the view that education is a partnership between the college of nursing and the service area.
Partnership is not always easy, however, and sometimes it can be a little imbalanced, with either the practitioners having slightly more say:
...both the midwife teachers and the clinical midwives, they both need each other to function properly and so it would have been in my opinion very wrong for it to have been as assessment strategy designed by midwife teachers. It's got to be both sides, (...) in fact we had just one midwife teacher heading up a small sub group and there were three preceptors plus her, so in fact...the emphasis within that small team if you like, was more on the practising midwives rather than the midwife teacher.
or the teachers:
...I mean we say we say that the student owns the document, but they can't own it if they've not designed it really to their own use...as long as educationalists are designing the form then there's going to be problems. As soon as the clinicians design the form then there's going to be problems with the educationalists. I think until we can all sort of get together and speak the same language there will be problems won't there?
The issue here is one of ownership. In a satisfactory partnership, because all the partners make a significant contribution to the design of the assessment strategies and documents they eventually use, all partners feel a sense of ‘ownership’. Not all planning groups achieve that sense however. The comment of the educator quoted above reflects a view which was often expressed, and highlights the need for greater dialogue between educators, practitioners and students to promote shared ownership.
In the true partnership between practitioners, teachers, and students, the relationship between planning and assessment is perceived as integral. Far from being something designed separately by educators and imposed on practitioners, or given an undue practical bias by virtue of an over-heavy practitioner input, assessment is seen as fulfilling both a curricular and a practical function. Where all the people involved in the design of the assessment strategy become ‘owners’, as in the instance below, the match between the needs of practice, teaching and learning, and assessment is high.
...our experience has taught us you can’t divorce assessment from the main curriculum planning, and you can’t have curriculum planning taking place and assessment coming later. Our experience has shown us this is not on. So from 1990 you could say that we have had to work very very closely together.
The implicit statement is about the importance of ownership, and the potentiality for alienation where one partner feels they have had something imposed on them which they do not own.
3.2.3. Strategies for Achieving Coherence
Policies, procedures and regulations for continuous assessment must, by virtue of having to meet validation criteria, be designed to reflect a shared institutional philosophy (ENB 1990 p 49). Even if this were not the case, it would make sense for institutions to rationalise their assessment approaches from time to time to reduce the potential for confusion, prevent unnecessary duplication of work, and promote shared understandings. Course planning often occurs on less than ideal timescales, however, a factor which can lead to the relegation of assessment to a low priority vis-a-vis curriculum design. An assessment strategy created in such circumstances can easily be at odds with the curriculum, either because the strategy is poorly designed or because the curriculum itself lacks a coherence whose absence is only revealed once the assessment strategy is in place. Coherence then has to be created at the post-planning stage, with a consequent waste of human and time resources. With a sensible timescale it would be possible to ensure that discussion about curriculum and assessment took place in parallel at the design stage, and to lessen the potential need to find ways of making a course ‘hang together’ post-hoc.
The need here is for structures which enable rather than constrain the development of coherent and integrated assessment policies. This suggests that what is required for all courses is the creation of core assessment teams, headed by a member of staff in an assessment/examinations officer role whose responsibilities would include the provision of general guidance and the facilitation of assessment activities. Where there are clear role definitions for each member of a planning team and a designated responsibility for ensuring that the system operates effectively and individuals are given appropriate support, assessment planning is highly effective:
...we utilised the examination and assessment officer that was within the college as a whole, and we had a small group of midwives and him, who were looking at all our existing assessments and we in fact drew up...certainly the practical assessments based very much on the (...) model that's being used for the Project 2000, because we found it fitted what we were doing anyway. And I don't believe there's any point in re-inventing the wheel if the wheel is...applies to midwifery in a way in which we think we'll be able to utilise, that the students will understand, that the preceptors in the clinical field will understand.
The picture which emerges is of a planning team with well specified roles which enable individuals to work together in a unified way. This again suggests that where assessment roles for team members are clearly defined and supported by a role holder with specific responsibility for co-ordinating all planning activities, the outcome is more satisfactory than when roles are ill-defined, and tasks randomly dispersed amongst a set of individuals who have a range of other equally pressing activities as part of their role. The frustration of simultaneously juggling an assessment planning role with other equally demanding ones is summed up well in the following comment:
When I was saying there are problems, I mean I think it's with the documentation in the first instance. It's difficult to, to take a huge chunk out of people's work time to prepare documents, and you don't know where to start...
The functions of a person in a central planning role, focussing more or less exclusively on assessment planning and implementation, are suggested succinctly in the next remark:
Who works best with whom? Who has got the particular experience? (...)...very early on(...) you must divide, you must create a division of labour, a task, who is going to do what? How is it going to be done? By when is it going to be done? What standard are we expecting? What difficulties may arise? Who will sort out the difficulty? (...) We set that ground very early on so that it is really a partnership.
What becomes increasingly apparent is the need for continuous dialogue, facilitated by a person whose role it is to ensure the coming together of all those working on assessment planning. Where no such role exists dialogue takes place in the margins, and assessment strategies which unite different perspectives and specialist interests (i.e. which bring together branches within a course or, where appropriate, marry assessment principles for nursing and midwifery) are less evident. Without dialogue at all levels planning is carried out in ‘pockets’, and anomalies arise where none should exist. Even where there is dialogue problems are not always easily resolvable, of course, as the comment below from a nurse teacher speaking about another branch of nursing clearly shows. Nevertheless, dialogue offers the best hope of achieving what the speaker calls ‘compromise’.
..it was a question of compromising...they wanted 112 competencies and there was just no way that we were going to...go along with that because it completely went against what we were trying to achieve, which was an individual competency rather than one which followed a rigid pattern.
Dialogue is also essential to minimise the discontinuity created by mergers and amalgamations of the old schools and colleges of nursing and/or midwifery to form new approved institutions. It facilitates course planning, enables different experiences and developments to be successfully incorporated, and assists the redesign of assessment strategies in ways that draw on the best from previous developments. Where the assessment histories of different schools and colleges were not always harmonious, or they were at different stages in developing their assessment policies, the need for individuals to invest large amounts of their personal time and energy to achieve compatibility through dialogue was evident.
Despite an obvious cost in personal time where there is little provision of structured time for discussion and dialogue, most people commit themselves fully to the planning activity necessary to harmonise previously disparate systems, as the quote below indicates.
Ultimately what we finally did was we sort of put together the bits that were already being done here on this site, and quite a bit of work they'd already done at X college when we amalgamated...there were all these issues that we had to take on board... So that's how we came up with the scheme that we have now, with a lot of changes on the way...formats had to be changed, had to be changed to fit in with our current curriculum...
The emphasis is on a complexity which requires ‘quite a bit of work’ if it is to be made sense of without resort to the simplistic. That work is made harder where the composition of teams is unstable because of the re-organisation process, and it becomes necessary to spend even more time ensuring common understandings.
I was representing mental health in the team, and the team itself changed (...) a little bit over the development period because umm, different people came into the college and different people had different jobs and so they came and went.
Dialogue about assessment strategy design in approved institutions where principles and procedures are not yet fully united has the knock-on effect of furthering professional development across the board:
So we have this dilemma in that the college came together as a single institution and inherited various degrees of developed courses, some of which were still summatively assessed and others which were continuously assessed, which meant that you had teachers who were familiar with such processes and teachers who weren't. And the training circuits, in the clinical placement circuits you also had practitioners who were familiar with continuous assessment and those who weren't, which meant there was a need to level off right through the organisation. (educator)
The greater the diversity of the histories an ‘amalgamated’ approved institution brings together, the greater the demand for dialogue to expose and analyse differences of perception, and to promote institution-wide understandings.
3.3. DESIGNING ASSESSMENT IN THE LOCAL CONTEXT: A SUMMARY
The research has identified four key features of assessment planning at approved institution level which together provide an appropriate structure for devolved continuous assessment. It has also revealed how certain characteristics of the most successful roles have facilitated the development of successful assessment strategies even in the most difficult circumstances, where discontinuity has become institutionalised. The first and perhaps most important feature was:
1. the identification of an assessment quality assurance role which incorporates responsibility for the co-ordination of assessment activities, and enables:
• allocation of specific responsibilities to individuals
• monitoring of these responsibilities in action to ensure that they are compatible and do not lead to unnecessary role strain or conflict
• monitoring of these responsibilities in context to identify where additional resources are required
• development of action-oriented responses to assessment issues
Other features included:
2. the full representation of teachers, clinical staff, students, and representatives of other relevant groups on assessment planning teams to ensure partnership
3. the establishment of dialogue structures to facilitate partnership
4. the linking of assessment planning, evaluation, and development
There are clear implications for the development of both roles and structures for assessment planning at the level of approved institution. The following section reveals that there are similar implications for local policy.
3.4. OPERATING AT LOCAL LEVEL WITHIN THE NATIONAL FRAMEWORK
3.4.1. ENB Education Officers as Facilitators
In local contexts there are certain roles and structures which play a key part in facilitating the planning necessary to implement devolved continuous assessment. There are similarly important roles and structures at a national level where guidance for course and assessment design is issued from the ENB (in the form of written guidance and through conversation with education officers) and the UKCC, and at European level by the EC directives. The extent to which an institution acts confidently and independently when designing their local assessment strategies depends a great deal on whether these national (and international) guidelines have been fully explored at the appropriate point in the development process. Crucial to this process of exploration are the ENB’s own education officers.
ENB education officers have a key mediation role, intervening between institutions - in the form of the people who make up the institutions - and the national bodies (see schema 1). A recent review of the role of ENB education officers has established new relationships with approved institutions. Each education officer has two roles, acting on the one hand as a designated education officer with a direct personal relationship to a small number of approved institutions (i.e. liaising with those institutions regarding all courses); and on the other hand in a specialist capacity as a regional source of specialist advice to assist colleagues in their designated officer role. Given what has been said above about dialogue it is clear that the quality of the working relationships formed between approved institutions and ENB education officers will play an important part in the harmonisation between national and local perceptions, and the promotion of a sense of ownership referred to earlier. ENB education officers themselves express the hope that their old 'inspectorial' role has been superseded by a more facilitatory one, as some of their comments illustrate:
We work on the premise that they are the experts out there, they should know what they want for their standards and quality of practitioner.(...)I mean the UKCC produces the content, kind and standard of the programme and what we're looking at (is) does the institution reflect that? And we're actually monitoring the delivery of it, we're not writing the standards for them, they must do that at grass roots.
Professional expertise at local level is acknowledged, and the facilitative nature of the ENB education officer role requires assisting approved institutions at a highly individual level:
I can't tell you how you should programme your computer because I don't know what software you've got, whether it will take it, whether it's this compatible, that compatible. How can I therefore tell them exactly what sort of assessment strategy they should be using, because I don't know what materials they've got to work with, I don't know what time they've got for preparation, I don't know what method they're going to be using, how they are going to link theory with practice, what's their regulations? They must set their own targets. But I will support and discuss it with them, and I have found, albeit that the institutions do that, some people are better at it than others. I can't destroy somebody who is developing a system, because I'm not sure how it is going to work out, it might be comfortable for them. They must have a team of people to prepare it, they must know what their strategy is, there's nothing to stop them reviewing it. I say to them, "OK this is a start off."
This officer also refers to her monitoring role, a role which she sees as centrally involving supportive discussion. It is difficult to avoid the conclusion that where effective liaison happens, discussion has once again played a central part in promoting it.
3.4.2. Establishing a Responsive and Negotiated Relationship
It is essential to the development of a successful assessment strategy within individual approved institutions that the relationship between that institution and the ENB is felt to be a negotiated one in which concerns and understandings are shared. When there is major change, it can be difficult, and therefore take time, for individuals to come to terms with new roles, and sensitive support is essential. In addition, because the facilitation needs of approved institutions are highly individual (as is evident from the previous exploration of local contexts) education officers will need to vary their guidance and support to match these needs. As indicated earlier, education officers like to see themselves as facilitators, but where role-holders are diffident and institutions feel uncertain, an education officer's response must be clear and unequivocal. Where the education officer's response does not match the needs of the institution, the conflicting and insufficient guidance given can be perceived very negatively:
...we've had so much contradictory advice...advice in our first assessment document came from a midwifery officer and then we developed the assessment along those lines, and then we got our generic officer who disagreed entirely with what the midwifery officer said and the whole document was changed.
Contradiction then becomes perceived as confusion and lack of clarity by the Board itself about its own guidelines.
I don't know, I think the Board's reneged on their responsibilities somewhat...they set out you know, however many competencies (...) really they've given you the bare bones but they haven't sort of allowed or given any guidance on the flesh.(...) Now I understand and I recognise that you have to err, develop them to reflect the course, the curriculum, which is correct...and the local area where training or education is taking place, but I think they could have given some more general guidelines, you know clarification under each one of what they, you know, what they expect of a first level nurse.
In such circumstances relationships are characterised by uncertainty and lack of confidence, although in most instances where staff felt unsupported by ENB education officers, they asked for further guidance, rather than prescription.
I'm not suggesting, and I would never suggest and I don't want to suggest that the Board say, "This is what you must do," like the old syllabus, err... or like the old book that the student used to get (...) But I think some, some more general guidance on how the Board perceive, define assessment...
I think because we're in the developmental stages and what seemed OK at the time ...I mean I think people, ENB officers as a rule tend to know a good thing when they see it, but won't or can't, I'm not sure (...) which it is, give you examples to start off with. (...) If they're really going to give advice to me it's, "Well look at this. What do you think about such and such." That might be a starting point. Or (...) very definitely say, "I don't want such and such in," erm, "I want a two point scale as opposed to a multi point scale, let's be clear about that." Or "You'll be unlikely to get it passed if you go for a multi..." I would prefer that, that they came to some agreement about it, if they gave some examples of good practice. So that's been a problem.
Where individuals felt a lack of direction and support in their efforts to satisfy national criteria, 'second guessing' what their ENB education officer would find acceptable as an assessment strategy became the main aim. In such cases, the potential for stifling development and further damaging confidence is clear. Like the following educator, who described how knowing the formula to satisfy the ENB education officer would help her colleagues even though she felt the result was reductionist, staff will prepare assessment materials in which they have no faith:
...I think if our assessment document's been accepted then now our work with nursing colleagues and colleagues at the polytechnic (is) to help them, because I know, I feel that I know exactly what Mrs Smith wants now, so I can help them to achieve...you know, the same in their documentation. Because really what you put into it doesn't matter, it's the way it's set out and how we're actually measuring skills.
The perpetuation of an older hierarchical relationship between education officers and their approved institutions can, where it exists, have the unfortunate consequence of taking away ownership from the institution, which then develops a collective cynicism to compensate.
This present document I feel doesn't measure midwifery at all. The fact that we only measure skills means that only a small percentage of midwifery is actually being assessed which is directly in line with the competencies laid down by the ENB...that I feel (are) restricting...(educator)
Fortunately, however, there are very many instances of a much more equitable relationship in which both education officer and institution are able to reflect critically on (i.e. to critique) national guidelines. ENB education officers and educators alike recognise the scope for local interpretation within the national guidelines and look positively for creativity at the approved institution level.
I mean to give the ENB their due, they've actually said that the way you interpret them can be really broad, but there's nothing in writing that says you can interpret them broadly. (...) We went with the broadest one we possibly could, (...) they were quite happy with that...(Break)... And I think the other issue there of course is that because they are as vague and woolly as they are you find that different education officers interpret them differently. So if you happen to have fairly approachable, realistic education officers who keeps in touch with what is actually going on, and the individuals and their needs, then I suppose in that way they are beneficial. But it is very frustrating when you've been categorically told... I just recall a conversation that I had with someone on Tuesday that they can't do something and I was told we can in our approval document. (educator)
...I think some colleges are very innovative so long as they prove it meets the rules and if there are specific branches they meet what is required of them in that...(education officer)
Where the relationship between an education officer and their approved institutions is genuinely one of negotiation and mutual respect, guidelines are used as an enabling framework and the need for dialogue about different interpretations fully recognised.
I'm hedging while I answer this because I haven't got quite as much experience as other officers, and I suppose yes, one could say that you could interpret it in many ways, but often when you talk it through with them, and often maybe it's what they want to do with it, I'm trying to think of an example. If you take the enrolled nurses, the opportunities. Some people read the regulations and they think, "I can't do that," and you'll say, "Well it does tell you you can." Sometimes they read it, I often feel from my own experiences, that they can't do it, rather than they can. I mean I don't think I've ever come across anybody that says, "Well it says that," and I think, "No it doesn't." It's often been the other way round, they've hedged on the safer side. (education officer)
Education officers are unique in the influence they have over the development of assessment strategies at an approved institution level because they, more than any other person that the members of an institution meet, act as interpreters of ENB guidelines. Variation amongst ENB education officers was noted regarding interpretation of some ENB guidelines where 'personal' views were expressed (and acknowledged as such) in response to the needs of approved institutions for greater clarification:
...the Board has the perception that all education officers work differently. (education officer)
Whilst in many cases such action is doubtless helpful and unproblematic, the issue of varied interpretations between approved institutions did arise:
I think if there is going to be any standardisation (of guidelines), it should be the education officers interpretation of them, because I do think it actually leads to a lot of antagonism between colleges, and between students. (educator)
If education officers are to be effective facilitators of local programmes and to offer advice that is appropriate to the needs of individual approved institutions, they must themselves have opportunities to explore some of the issues that are raised by the ENB’s assessment guidelines. There is clearly a good case to be made out for regular regional and national meetings between ENB education officers solely for the purpose of discussing assessment. There is also a case to be made for education officers and members of approved institutions to meet to discuss what they have learned from their positive developments and their less successful innovations. Because teachers, clinical staff, and education officers are all still learning, there is a need to provide a facilitative framework to foster discussion at every level about development and innovation. It is not always easy for one institution to learn directly from the experiences of other approved institutions, because of the relative lack of experience ‘out there’ to draw on:
..because it's a complex issues in its own right, and when we were looking at continuing assessment there weren't many places who had sort of really got it off to a rolling start in nursing.
And in the new market-oriented business culture there is a growing concern to obtain value for money, which prompts institutions to reclaim the time and resources they have expended on curriculum and assessment planning by making their work available to other approved institutions only in exchange for a cash payment. The education officer, therefore, has a unique opportunity to bring people together to discuss the more general issues they will all have experienced in designing assessment strategies. Amongst the most important issues on the agenda would be several which people have very little official opportunity to discuss elsewhere: the experience of finding that their understanding of a particular assessment principle or practice is no sooner developed than the goal posts change; and the fact that lack of clarity and direction in the initial stages of course development, coupled with the introduction of new 'blueprints' as clarity is developed, makes it difficult to know what 'current thinking' is, let alone meet it.
It's about finding out what they like and don't like, and it's also about being better than so and so was (...) they've actually admitted to me when I said, (...) "It means if you're a demonstration district, you get away with a lot more than we would get away with now." He said, "Oh yeah that's right."
After completion of assessment strategy design, the process of validation provides a further opportunity for dialogue about assessment issues between individuals working in local and national contexts. Experiences of validation continue to develop as a result of recent changes in educational approaches and institutional relationships. As innovations in curriculum and assessment design (now often at diploma level) are created and produced through developing relationships between approved institutions and HE, it is clear that all participants in the process are benefiting from greater understanding through dialogue. Whilst accounts of satisfactory validation relationships did vary, there was a sense of overall progression, as one ENB education officer comments on links with staff in HE:
I think they've come to realise that maybe nursing has a lot to offer, that in the past we were seen as certificate level, but a lot of nurse educators are now at Masters level, and sort of are as articulate and well educated as their colleagues. (...) But yes I think the relationship has got better. I don't feel I'd have any difficulties ringing up my equivalent in a University and having the honest relationship to say, "Well I've not done it before, so how is your protocol, do the protocols work together?"
3.5. IMPLICATIONS OF THE EXPERIENCE OF DESIGNING DEVOLVED ASSESSMENT STRATEGIES FOR FUTURE ACTION
Following a review of literature on competence, Runciman set out a series of suggestions for those involved in course planning and curriculum development, several of which reflect the findings of this research. Runciman suggests, for example, that:
• the nature of competence should be widely discussed
• the extent to which learning outcomes can and/or should be defined in a precise and measurable manner should be debated
• collaboration between educators, learners, employers, and practitioners should be established at an early stage in the planning of programmes and should continue as programmes develop
• the advantages and the limitations of a competence-based approach to education should be debated (Runciman 1990)
The ACE Project has shown that there are so many mediating roles and structures between national, regional, local and individual interests that it is only possible to achieve coherent policies for assessment by ensuring dialogue at all levels. It has revealed the importance of factional needs and interests in shaping the progress of assessment and has demonstrated where roles and structures have helped the development of understanding between factions, and where they have provided constraints. It has demonstrated that interpretation is inevitable, and by implication that opportunities for sharing insights or causes for concern can enable closer understandings and facilitate the development of an approved institution’s assessment strategy. And it has shown that any framework for negotiation and facilitation must be sufficiently sensitive to particular needs and specific local circumstances to be able to cope with a demand for specific and unequivocal information at some points (particularly at the beginning of the design process) and for framework advice at other points. It has revealed in particular the need for important new institutional and regional co-ordinating roles for assessment policy, and for the establishment of regional and national forums for the exploration of assessment issues.
The list of emerging needs with which we conclude this chapter points forward to the recommendations at the end of the Report. It indicates needs identified through emerging themes, and has clear implications for the sorts of mechanisms, structures and roles required to facilitate further development in the assessment of competence at regional and institutional levels. There is an apparent need for:
• national and regional forums for the discussion of assessment issues
• opportunities for all education officers to develop additional specialist assessment knowledge
• further research into the role of the education officer at the interface between assessment planning and development in approved institutions
• continued research into assessment methods, with particular emphasis on the assessment of clinical practice and higher order aspects of competence
• a designated role in every approved institution, carrying responsibility for the co-ordination of assessment planning and development
• protected time, and where appropriate designated resources, for the role-holder to carry out their responsibilities
• more opportunity for ‘piloting’ major innovations before wide scale implementation
Assessment texts ‘refer’ to institutional rationales for assessment, in the sense that the texts are embedded in the educational culture, and both shape and reflect that culture and public statements about it. There is, however, no simple relationship between institutional statements of intention and what actually happens on the ground. In practice, intentions are affected on the one hand by the fact that systems are operated by people who interpret what they have to do, and on the other by the fact that the organisation itself is an interpreting organism that in turn transforms official guidelines and directives. And just as there are many intervening factors that bring about ‘translations’ from philosophy to assessment documentation, so there are a whole range of interpretations that intervene between the drawing up of an assessment system and its use in practice. In the end the success or failure of assessment texts as a means of assessing competence depends upon the extent to which they are able to recognise and take account of the judgements that inform the assessors’ decision-making. The format of these documents (and the other texts that are used to record evidence of competence) is therefore extremely important because formats constrain or enable opportunities for evaluating the reliability and validity of assessor judgements. They also shape the nature of the assessment activity by framing the possibilities for evidence collection, and negotiation over perceptions of the evidence base. Consequently, what emerges as a central issue is whether the formats of assessment documents (together with the mechanisms and procedures for using the documents) allow the independent re-evaluation of evidence, or whether they depend upon the evidence of the ‘accredited witness’ whose judgement is not open to consideration because no evidence is provided.
Those formats which use a priori categories in an effort to achieve a high degree of standardisation offer a tight frame that constrains the relationship between assessor and student within an authority-to-novice dimension. Formats which employ categories derived from reflection and that concur with professional experience are much more loosely framed and provide an independent evidence base that can be examined for reliability. Each format has its typical form. The document which asks for a tick from an accredited witness, although requesting that tick to indicate a whole range of things from the complex to the atomised behavioural skill, provides no evidence of the way in which the decision about competence was reached, nor any indication of the context in which it was achieved. The tick becomes a surrogate for the evidence and, where it has occurred, for the discussion about the evidence. Despite these shortcomings, the tick box approach is often seductive to policy-makers because it purports to offer objectivity through the accredited witness's signature against closely specified categories. Assessors, however, draw on their own experience and make judgements even about such categories, and so the tick box can be seen as no more reliable than any other subjective record of competence. Indeed, it is impossible to remove subjectivity because human beings are interpreting subjects; what is needed is dialogue to accommodate different subjectivities and to enable comparison. If this happens, then assessment enters an educative domain, but it also becomes more reliable assessment. Assessment texts in the educative domain encourage judgements from multiple perspectives and look for interaction between assessors and students to identify and reveal understanding of the important professional qualities that are less easily assessed.
These forms of text, of which the learning contract is a prime example, integrate theory and practice through providing a triangulated evidence base that can continue to be the focus of discussion over time, and thus provide additional evidence of the process of development. To achieve an educative function, assessment texts must document valid as well as reliable evidence of competence by drawing on a wider constituency of witnesses, and by being context-sensitive.
THE FORMAT OF ASSESSMENT TEXTS:
CONSTRUCTING THE EVIDENCE BASE
To accord with national guidance issued by the ENB, assessment strategies designed by approved institutions must ensure that students achieve separate pass criteria for theory and practice, and that the overall assessment strategy reflects a curriculum in which there is an inter-relationship of theory and practice (ENB 1990, p 49; 1993, p 5.15). This chapter seeks to outline the range of assessment texts, the evidence bases produced and the extent to which theory and practice inform each other within assessment texts.
4.1 THE PLACE OF ASSESSMENT TEXTS WITHIN AN ASSESSMENT RATIONALE.
To demonstrate that the ENB's regulations have been satisfied, detailed assessment documentation is produced by each approved institution, giving an overall rationale, role definitions and definitions of the assessment categories employed, together with the required mechanisms and procedures for carrying through the assessments. The rationale offers a justification and a general framework for interpreting assessment events.
Approved institutions must recognise the multipurpose functions of assessment as outlined in national guidance and seek to address the need for quantitative and qualitative approaches. How this is accomplished in practice varies from institution to institution. For example, one institution opens its rationale by stating:
Assessment entails judgement and an attempt to evaluate. It may be characterised quantitatively or qualitatively. Assessment presupposes standards and is necessarily based on criteria, held by the assessor, who may be the student herself or an external agent. It is axiomatic that assessment is necessarily interpretative, and therefore subjective to some degree. Assessment procedures ought to seek to minimise this.
Assessment rationales have to address not only national standards or requirements, but also individual developmental needs. Formative assessment is typically defined as addressing individual student developmental needs, and summative assessment as addressing particular pre-defined standards. Whereas formative assessment is typically not seen as compulsory, summative assessment is compulsory, and failure to reach a specified standard at any point in the course may then, depending upon circumstances, result in failure of the course. Assessment documentation for placement areas combines formative and summative elements to produce a structure that articulates the general rationale for continuous assessment. The intimate relationship between educational philosophy and assessment rationale is clearly illustrated in this example from one institution's assessment and quality assurance handbook:
The curriculum philosophy addresses the benefits of a variety of learning experiences and, therefore, a variety of summative assessment modes is appropriate. The role of the student of nursing is seen to be active in her learning, therefore, formative self assessment and self direction are of paramount importance throughout the course. The student will be greatly involved in her own assessment. Opportunity for improvement in performance will be available to the student as part of the assessment process. This allows for rapid feedback and remedial action should this be necessary. It also enables the student to take on the responsibility for reviewing her own progress, and developing her ability for realistic self appraisal.
The question then arises whether the format of the document, together with the mechanisms and procedures for the implementation of assessment, is appropriate to fulfilling such rationales. For example, to what extent does student self assessment feature within assessment texts, and how far are curricula which integrate theory and practice represented? The focus for discussion in this chapter, then, is on the relationship between the format of assessment texts and assessment purposes, and also on the creation of evidence bases on which to make judgements about developing competence.
4.1.1 Texts as evidence bases.
Assessment texts fall between two extremes: those that are tightly framed in terms of specifying the categories to be assessed, and those that are loosely framed, providing guiding principles instead of specified categories. A tight format for the assessment of practice typically employs a standardised document in which the categories are formed in advance of use, each category being ticked, often to indicate levels of performance or measurements of some kind. At the other extreme, a loose format employs less standardised documentation in which categorisation is determined after considerable reflection upon data derived from practice.
The chosen format of the assessment text itself structures relationships between assessor and assessee. Its principles and categories direct attention and organise the processes of reflection and judgement that have to be made in fulfilling the demands to construct an evidence base. The collection of completed assessment texts constitutes an evidence base to support decision making. It is important to ask about the nature, reliability and validity of the evidence base that is provided by the different kinds of format.
The following diagram formalises the relationship between tightly and loosely framed assessment formats. The vertical axis signifies the continuum of evidence bases from those that are highly open to independent reassessment, that are highly tailored to practical circumstances, and that fit with professional perceptions of competence, to those that are low in each of these regards. The horizontal axis describes the continuum between assessment formats that are tightly prescribed, aiming at standardisation, to those that are loosely prescribed aiming to draw assessment categories from the situation itself. It will be argued that the tighter the frame the more restricted the evidence base, and the more it is standardised the less it provides evidence that is open to independent reassessment. Conversely, the looser the frame, the less it is open to standardisation but the more it accords with professional perceptions of competence in practice.
Figure 1: From Tightly To Loosely Framed Evidence Bases
The nature of assessment evidence is an important issue when seeking to explore, as this study does, the effectiveness of assessment strategies. Consequently it is a guiding theme in this chapter on assessment texts.
4.2 ASSESSMENT OF PRACTICE.
What follows is not an institutional analysis but rather an analysis of the potential of different features of assessment structures currently in existence. The analysis will develop by discussing:
• ticking boxes - the accredited witness
• witness triangulation
• increasing the evidence base
4.2.1 Ticking Boxes - the accredited witness.
No approved institution in this study relied exclusively upon the simplistic tick box; however, the majority did employ this format to some extent. The following extract illustrates the basic features of the format:
The choice, number and wording of areas for assessment, the items contained within each area and specified levels to be achieved varied according to course (and branch) design. Assessment items could be as broad as 'can undertake care plan' or could be focused precisely on a well defined behavioural procedure or aspect of a procedure. The actual examples in illustration A are highly complex if subjected to searching questioning. What, for example, constitutes a source of data? This question alone would be substantial enough to generate a course in research methodology. Then to identify what constitutes a 'level' of achievement in being aware of data would be sufficient for several philosophy seminars.
The format of illustration A structurally limits the kind of evidence that can be collected. First, if it is employed within a behaviouristic framework for the standardisation of analyses of skills and competencies, it results in an atomistic categorisation of units of behaviour. As discussed in chapter 1, such lists can be extended indefinitely and focus attention only upon clearly observable and measurable units. However, once the box is ticked, the statement which gives significance to the tick becomes a surrogate for what actually happened. Even if deep discussion had taken place over these items, it would simply be reduced to a tick in the box. The same tick would be there even if there had been no discussion at all.
What an outside commentator cannot be sure of is whether what actually happened bears any relationship with what the statement alongside the box signifies. That is, there is no primary data of the interactions which led up to the ticking of the box. It is therefore tightly framed, but low on independent primary evidence capable of re-assessment for purposes of moderation.
Nevertheless, the tightly framed format is highly seductive, particularly where other forms of assessment format can be criticised as being too subjective:
Now assessment was the same across the board in theory and practice, it was working quite well but I mean I accepted the ENB officer when they came and said, you know, "A lot of this is open to a great deal of subjectivity in clinical practice." So really they tore our assessment apart. I mean I could see what they were saying and they said...in a clinical area you have to have skills that are measurable that a mentor (sic) doesn't have to think about, they just have to think, "Yes they're achieved." They don't have to think, "Well what level is the student at?" (educator)
This approved institution had been compelled into a tick box approach, an approach that they were not happy with because it overlooked the multi-dimensional and dynamic nature of professional competence.
As already noted, different assessment styles will lead to different kinds of interaction preceding the completion of the document. Of course, not just anybody can tick the document: it is to be presumed that the individual who does so is considered competent to make the appropriate judgement. Such an individual is in effect an accredited witness. Considerable reliance is therefore placed upon the individual who carries out this role, and it is important to ask how much reliability can be placed upon it.
In practice such items as those in illustration A are in fact open to considerable variation in interpretation which at once undermines the reliability of the accredited witness role, and necessarily subverts the very standardisation that is intended.
...each nurse has their own view of looking at it. They could see a different...view point...you know they might bring up something else that another mentor maybe didn't think of. You know say for organisational skills, one person might think she's organised in herself when another person might look at it as how she organises the workload on the ward.
Educators were also aware of this issue and highlighted the importance of increased dialogue about assessment issues as a means of overcoming widely differing interpretations.
...I think a lot of people make the document work and that's why you get so many interpretations on it.
I wondered whether the piece of paper that we've got is inappropriate because it's not, it's got no meaning, it's open to interpretation the meanings are vast so it really is meaningless because it's open to interpretation...
Indeed, there is a danger of the documents being used tautologically. If, according to a particular stage in the course, a student is expected to be at level three, then performance tends to be rated at level three. It is the self-fulfilling prophecy (Rosenthal and Jacobson 19..) familiar to all educational researchers. Additionally, assessors could be misled by the definition of the level in relation to the task to be performed, and thus confused as to which is actually the appropriate box to tick:
...I think everyone finds them quite hard to know, the wording of the assessment is quite hard and people always have trouble knowing which box to put you in, and the boxes don't run on brilliantly, and so it might seem that you should be in box one (when in fact the student should be achieving box three at that stage of the course.)
The kind of judgements which assessors have to make are illustrated by the following three examples.
Figure 2A: Use of Performance Criteria (extract):
• ...from the normal
• Can participate with instructions to assist in abnormal situations
• With assistance recognises deviation from normal and provides appropriate care under supervision
• Is aware of the potential effects of abnormality in midwifery and can initiate appropriate action
• Able to interpret and undertake care prescribed by registered medical practitioner where appropriate
Illustration 2B: An Adaptation of Steinaker and Bell's Taxonomy (Steinaker and Bell, 1979) (extract):
• The student is beginning to relate knowledge and skills to practice but requires constant aid and...
• The student performs with minimal supervision, can explain actions and modify practice utilising...
• The student can manage and... effectively and justify own planning and decision...
Illustration 2C: A Standards of Achievement Model.
Our data suggested some fundamental problems especially with the first of these examples. For many the categories within each box did not always reflect progression as it is actually experienced and expressed by the student on placement. In other words, the first levels of a particular scheme may in fact represent for the assessor and student a level of activity which is not necessarily less complex than higher levels.
Additionally, in adopting an assessment framework which intends to demonstrate student development and progression, many institutions were struggling with the challenge of maintaining a sense of continuity between levels of achievement. Progression through a series of predetermined levels implies an accumulation of experience and skills with each learning experience and assessment event, building on the achievements of previous levels. For some institutions information about previous experience did not flow easily between placements and students were given primary responsibility for maintaining continuity between their placements.
As the following student outlines, assessor interpretation in relation to the different levels of expectation of student attainment was a problem for some:
...I brought this up with the school...what I found with the clinical objectives is that they're very ambiguous, they don't really give a level. Like for instance...on the first ward, one clinical objective would be, 'To be able to measure and record a temperature' for instance. Now one mentor or staff nurse might for instance say, "Well I've seen them take or record a temperature, I've checked it, it's right, it's correct, that's okay." Someone else might see it as the competency of taking a temperature and so you'd be asked about different areas of taking a temperature, the length of time a temperature would be taken, "Do you know the relevant research for taking a temperature? What things are you looking for? When would you worry about a temperature?"...etc, etc etc. When I was asking the school (they said) 'Every mentor does go on a workshop and are (is) told about what levels to go onto." But I've never found that...
It would seem that subjectivity and multiple interpretation are inevitable in assessment:
...it all comes down to what you do on a one to one with somebody...it's an intensive task, it's not something you do on a cursory level, it's something that ultimately comes down to a feeling you have about an individual because you spend so much time with them, and there are markers or indicators within their performance... certain social skills have to be present...after that it becomes a measure of whether or not to put into practice quite subjective components. I'm talking here about mental health perhaps rather than what I call the tricks of nursing which are injections and some of the more practical procedures which are obviously good or bad, because they're observable, measurable...I have...no difficulties with those...it comes down to whether or not you...follow certain patterns of behaviour like when do you give advice? When don't you give advice?...What kind of attitude did you adopt towards somebody? What was your approach? And it has to be something which is analysed afterwards...the process of assessing somebody's level of safety or competence or quality if you like, is through a pretty intensive, interactive process, trying to establish what they were trying to do, how they did it...
In this view, what is overlooked in the so-called objective approach is the educational dimension that is founded upon interactive processes. It is through such interactive processes that the less tangible aspects of competence are determined and assessed.
Attitudes. Hardest thing in the world to measure, it's practically impossible. Err (pause) you can have a feeling about attitudes, right attitude, you assume from your social mores what's a good attitude to the patient...you can measure knowledge. You can measure skills. But the best psychologists around have looked at various tools to measure attitude and have been singularly unsuccessful. There are lots of things that you can use, but the fact that there's so many of them tells you that there's nothing particularly works.
This educator goes on to express her opinions on what is an appropriate form of educational assessment:
You could tighten up the schedule, the actual assessment process 'till it becomes...so specific that there's no variation at all. But I don't think that's really feasible when you're educating professionals, because if you tighten them up so far, then there's no room for judgement in there... so that's another way you can measure validity; it's so tight that there's no deviation. 'You can do it that way and that is the only way'. Now I don't think that's good educationally.
The fully standardised approach to assessing practice is counter-educational. Indeed, the principles of classification differ in each case. Standardisation proceeds by identifying units of behaviour that precisely fit the category box; that is to say, each unit of behaviour placed in a particular category should be identical to any other unit of behaviour in that category. However, when the categories are subject to confusion (for whatever reason) or to alternative viable interpretations, there will be a wide range of possible candidates for inclusion in the category. In these circumstances, it will not be possible to assure homogeneity.
The structure itself also inhibits re-evaluation of assessments because there is insufficient evidence to provide the basis for such re-evaluation. There is nothing either to support or to challenge the tick in the box.
Two possibilities to overcome these inadequacies are:
• To increase the number of independent witnesses who come to an agreement as to a particular judgement
• To ensure the recording of an evidence base for assessment decisions that is sensitive to context rather than being highly pre-determined and structured
These two alternatives will now be explored.
4.2.2 Increasing Witness Reliability.
As the assessment texts discussed so far have demonstrated, where the evidence base is weak, the decisions taken and the criteria underlying those judgements are less open to external scrutiny.
For the assessment of practice, it is the judgement of the professional - in ticking a particular box, or in making a particular interpretation - that is at the heart of assessment.
Most of my colleagues will argue strongly that by the nature of the thing, in terms of practical assessment we have to accept that professional integrity and the professional judgement of the practitioner assessing. And I don't accept that. I work from the other extreme and say that because they are a competent practitioner (that) does not make them a competent assessor. (ENB Representative)
The competence of the assessor to assess is here said to be different from the competence to practice as a clinician. There are distinct types of judgement involved. The assessor's judgement is a judgement about the student's capability to judge, make decisions and act in a given circumstance at a given stage of development in the professional role. The reliability of the witness can therefore be increased by appropriate assessor professional development. Meanwhile:
...I do have concerns about this (assessment skill)...you know, when I'm talking to...staff at audits and these sorts of questions I'm asking. Erm, you know, what is it you're assessing? You know, and they often haven't really broken down these skills and often really thought about it...they haven't really thought about the level of the learner and what their particular needs are...they talk about things like, you know what's the role of the designated nurse in relation to preliminary interview? What are they supposed to be doing?...What information are they gathering about the student at that particular time?
Even if appropriate professional development can be provided there are still likely to be variations in judgement. This issue, it can be argued, can be overcome by taking into account more than one perspective, a process of witness triangulation. The triangulation of views can be employed to draw out both commonalities and points of disagreement. In this way individual bias can be reduced. The range of perspectives that can be set alongside each other includes:
• the assessor
• the student
• the co-assessor
• other clinical colleagues
• the educator acting in a clinical liaison role
In line with the ENB requirement for student contributions to the assessment process, students were called upon to contribute to assessment texts. In the texts we studied, the minimum requirement consisted of the student's signature to witness that discussion had taken place with the assessor at the initial, intermediate and final interviews. It also indicated that there had been an opportunity for the student to respond to the assessor's written comments with their own. A greater degree of self assessment was encouraged in the type of text illustrated below, where the student contributed their own judgement to each assessment item. While not overcoming the problems so far discussed, it does at least offer student self assessment and the potential for the student to challenge a particular judgement.
Figure 3: Midwifery Assessment Schedule
In this example, an agreement between assessor and student defines whether the 'competency' has or has not been met. There is the potential for greater triangulation of views, but once signed, there is no further evidence available for an independent examiner. The validity of the assessment is thus still dependent upon the credibility of accredited witnesses.
In relation to the stress placed on formative assessment in overall assessment rationales, the intermediate and final assessments offer the possibility of facilitating diagnostic development and identifying progression. However, it would be misleading to regard the one-off intermediate assessment as being 'formative' in the fuller sense of implying a process of continuous assessment; it offers an intermediate step towards the final summative assessment during the final interview.
Once again, considerable dependence is placed upon the quality of staff development for the purposes of assessment, and upon staff-student assessment relationships. Theoretically, if it can be confidently maintained that each assessor is interpreting criteria in the same way, and providing high quality feedback via high quality consultation processes, then it may be accepted that the tick in the box, or the signed affirmation that a particular competency has been achieved, actually signifies what it claims to. However, if, as the data outlines, there is considerable variation of practice, multiple interpretations of criteria and limitations on professional development for the assessor and mentoring roles, then consistency of quality cannot be assured. It is not only that consistency may vary between assessors; it may also vary according to circumstance with the same assessor:
(...) my last student, it was fine. I worked with her a lot and I actually saw her getting better so I felt I could write a fair assessment on her. But the student I've got now, I've worked with her three times, four times, and I still have to write an assessment on her. But when I asked her whether she'd worked with anybody else in particular, she hadn't, so I couldn't even go to them and say, "How's she doing? Is she improving?" whatever, "What does she need?" I did in the first talk with her, about what I expected of her and what she hoped to gain, and we went through the things that she had to do, I did that. But like I say, I'm going to have to write another assessment on her and I don't know much about her and how she's progressed. So that's difficult.
In short, there is very little triangulation of assessment required by the formats of documentation discussed so far, and practical circumstances limit their possibility. This creates potential obstacles for monitoring, and if an assessment is contested it remains difficult to see what would form the independent evidence base on which the issues could be resolved. All educators, of course, have some background knowledge of their students, form impressions and listen to the views of others; consequently they are often aware of the difficulties associated with some assessment texts:
...we get these assessment forms back of course here in the college, and we discuss them and we look at the issues raised by them and then we try and iron out difficulties whether that be the student contending what was written on the form or the staff on the ward contending what they've written on a form, and that sometimes will clarify issues related, especially I find if they are issues it's normally related either to professional behaviour or communication skills on the ward...it's normally because...people have different perceptions of...how effective a first year can be for instance in dealing with a bereaved relative, and so those are issues that we have to constantly revisit.
Unless the evidence actually provides a detailed record of 'what happened', judgements remain susceptible to bias, misinterpretation and misunderstanding. These issues point not only to the complexity of the skills involved in assessment but also to the need to ensure that appropriate assessment texts are created. Assessment is not simply a matter of exercising unambiguous judgement in relation to ticking a particular box. A considerable degree of reflection, analysis and dialogue needs to take place before common agreements can emerge as the basis for individual assessments. The philosophy, the principles, the procedures and the structures appropriate to support the assessment process need to be in existence.
4.2.3 Increasing the Evidence Base.
Participating institutions generated evidence beyond the tick and the witnessing signature in one or more of the following ways:
• through a record of formal diagnostic interviews
• through a 'discussion statement' that cited an evidence base
• through learning contracts
An evidence base can be used for a multiplicity of educational purposes. It can therefore begin the process of integrating theory with practice as the assessment text becomes increasingly loosely framed to encompass research based projects and assignments of various kinds. There is a progression in the above list, moving from relatively tightly framed formats towards genuinely open ended research based texts.
• Recording Discussions of Formal Assessment Interviews.
As described in chapter six, three formal interviews are normally planned during each placement: the first two are diagnostic, the final one summative. They act as a minimum requirement for discussion during a process of continuous assessment. These interviews may be used in conjunction with tightly framed assessment formats as described earlier, or with more open assessment formats.
For example, the assessment document may specify that the preliminary interview is to take place in the first week of the placement with the object of including:
• a discussion of the personal objectives of the student
• the learning opportunities available during the placement period
• a discussion of progress to date towards achieving the specified levels
Typically student and assessor are required to sign that these have taken place. As outlined in the previous section on the triangulation of views, the practice involved in recording these discussions varies. In some cases, a blank page is provided with only general advice as to what should take place. In the blank space a record in summary form of the agreed objectives/agenda for the placement may be provided. Other documents may more precisely specify the questions to be addressed in the interview. Similarly, for the intermediate interview the assessment document may include space to record progress to date. At its minimum this may involve no more than pencilling in the ticks in the tick box format. At its maximum it involves a detailed open ended discussion. One typical example of the format for the intermediate interview is as follows:
Figure 4: Placement Review Document
The space given to record the results of the interviews varies from little more than a third of an A4 page to a whole page. At their minimum, formative interviews may be regarded as a record of aspiration, and summative interviews as statements of overall achievement, rather than as constituting an evidence base of learning and accomplishment. In this document they do not yet provide an independent evidence base which can be used for purposes of re-assessment. The external reader of the document would still have to accept the judgements of the accredited witnesses. The evidence is still insufficient to be able to challenge those judgements. Even if a student agrees that certain skills are lacking, there is no evidence base with which to challenge this judgement by the student. Only if a member of staff testifies that the student is being too hard in their judgement can a challenge be accepted. But who is actually right? What evidence would be required to make such a judgement?
• Discussion statements and the citation of evidence
An approach which begins to break down the listing of categories to be ticked (supported by overall comment) is one where broad statements are provided, for which evidence of achievement must be supplied. One example of this approach, used for Project 2000 student community placements, employed broad statements referred to as 'stems', under which possible activities which could be reflected upon were suggested, as follows:
This then is supplemented with a list of sub-statements:
Figure 5: Sections from Documents.
Instructions are given in the student profile document as to how to proceed. A series of interviews is required where 'the learning needs and expected learning outcomes will be discussed and mutually agreed'. Completion of the document requires all experience gained and teaching given during the placement to be recorded in the appropriate section. The approach allows for more detailed evidence to be recorded than in the previous examples.
The approach is worth exploring in further detail because it gives an insight into the increased demands it makes upon the assessing processes. In the approved institution where this is operated, it is seen as an interim step towards a more open ended process of discussion and negotiation. The idea, as the speaker below indicates, is to provide a general area for discussion with students which staff can develop in relation to experience:
What we've got to get them to think about is if we said this is an important area that we want them to assess then they've got to find some experience that makes that live, and if we dictate it what they'll do is try and do what we want as opposed to think about the area. What we want them to look at and find suitable experience and do suitable teaching for (the student) ...
She develops this in relation to Rule 18a:
...we were looking at sort of areas I suppose that, that enliven rule 18A but don't try to fragment it, because I think of these things we get into sometimes is we try and take rule 18a and then we try and break it down into it's bits and sometimes by doing that we actually lose what rule 18a is about which says that it should be all sorts of skills, developing a multitude of areas and the more we try and pull those different skills out the more we fragment it I feel. So we've got things like "the nurse will promote a partnership with the family and the rights of the child and will ensure that she or he's always treated with respect and the dignity is maintained" and we wanted them to actually say, well what does that mean? What does that mean I have to teach? What does that mean I want them to assess? But as I say for the beginning we've done it, such as attention giving to age, gender, sexuality, culture, religion, personal choice, time or belonging, will be respected, empowering the child of the family through the sharing of knowledge, skills and resources, acting as the client's advocate as and when necessary, so we've actually decided those areas this time, but what we'd hope to do eventually is take all that out and just leave the gap.
This type of assessment text moves away from a highly standardised format, and the evidence cited reflects the true contexts and experiences of the student. In addition, evidence of teaching undertaken is recorded.
The further developments required to facilitate this approach clearly depend upon the quality of professional development that can be achieved. It makes demands upon educational assessment expertise which staff do not currently possess but which can be developed over time, given the appropriate structures.
• The learning contract.
Another approach offering a more developed strategy which moves towards tackling some of the inadequacies of the texts previously explored is the learning contract. For example, page one of a contract asked three basic open ended questions:
Page two (and subsequent pages of the document) were structured as:
Figure 6: Learning Contract.
The contract was set out on large A4 pages, the blank areas to be filled in by the student and validated by an assigned member of staff. The space available for the student was sufficient to allow the development of an essay. Objectives and competencies were identified and discussed in relation to resources and strategies. Completed examples showed the inclusion of descriptions of practices and experiences, supported by referencing of relevant literature. As a result, the final product was very close to being a research based assignment. There was a very considerable body of evidence provided as to what exactly a student accomplished, together with integrated understanding of the knowledge informing that practice.
This different documentary format implies different kinds of skills appropriate to the assessment role. Indeed it was the case that at the approved institution from which this example was drawn, assessors were prepared and actively supported by a network of lecturer practitioners. The nature of the lecturer practitioner role, based as it was in clinical settings, differed to that of an educator performing a clinical liaison role, located predominantly in the approved institution setting.
Assessment texts which captured the multi-dimensional nature of practice within a strong evidence base were valued by educators in colleges that were experimenting with learning contract approaches:
I think, I wonder whether it would be more beneficial if we just gave a blank sheet of paper to the staff nurse and said tell us whether this person's a competent nurse. (...) I think that (...) pieces of prose freely given looking at positives and negatives would probably give you more of a picture of competence than something broken down into areas which are open to interpretation and rather vague... (educator)
This kind of thinking suggests a move towards a method of assessment which starts to address the whole issue of assessing theory and practice as an integral whole, in the manner implied in the UKCC interpretive principles.
4.3 TOWARDS THE INTEGRATION OF THEORY AND PRACTICE IN THE ASSESSMENT OF NURSING COMPETENCE
4.3.1 Texts Which Create Binary Opposition or Texts Which Promote Integration?
Theory and practice are often referred to as if they were in binary opposition to each other, the theory separated from and valued differently from practice. In an educative perspective no such discrete binary distinction is implied. Rather, the relation between theory and practice is one of mutuality and of interdependence. While not identical to each other, they can be regarded as two faces of the same coin; as has often been said, there is nothing more practical than a good theory. The question then concerns the extent to which the evidence base drawn from practice can be employed for further educative purposes, including the development of theory derived from practice. Thus what most approved institutions term 'theoretical assessment' will be the focus of discussion for this section, to see the extent to which it can be integrated with evidence derived from practice.
Compared to texts used for the assessment of practice, those used for assessing theory provided a wider range of evidence and were more loosely framed (hence they would be situated to the right hand side of diagram 1 at the start of this chapter). One educator summed up the general trend, in comparison to assessment of practice, as follows:
I think very much clinical assessment is still...it's focusing on competencies and therefore what it assesses is restricted, it's constrained. (...) However we do have the other side of the equation and that is the theoretical assessment schedules within the course which the college manages, in that they certainly (...) give the student opportunity in a variety of ways to demonstrate that they are developing the skills of analysis and synthesis.
The tendency for assessment strategies in practice areas to actively constrain what can count as competence in order to meet the demand for increased standardisation has already been discussed. In contrast the wider range of evidence which can possibly be generated from theoretical assessment strategies suggests the potential for greater incorporation of practical experience within theoretical texts. In the section which follows a range of examples of 'theoretical assessment' is described. These are explored in terms of their sensitivity to issues of practice and the extent to which it is possible to incorporate those issues into a broad theoretical framework.
4.3.2 The Range of Methods Employed.
The range of methods used for the assessment of theory include the following:
• literature searches
• case studies
• care plans
• neighbourhood studies
• poster displays
• personal journals/learning diaries
• critical incident analysis
• research proposals
• seen written examinations
• unseen written examinations
Each method in this list has the potential to contribute either formatively or summatively to the overall assessment process. Decisions about how each method should be used vary from institution to institution. For example, one approved institution may use neighbourhood studies only as a means of formative assessment whereas in another institution this would be a summative piece of work. There are, however, some methods which are more usually assigned to one particular type of assessment. For example, personal journals and poster displays are more likely to form part of the student's formative assessment whereas seen and unseen written examinations are more likely to be used summatively.
• Literature searches
Literature searches are often an integral part of other types of assignment. Typically the student is required to explore the available literature on a given subject and give a written account of the important issues and debates embedded in that body of literature. At its most distant from practice, literature searching is experienced as an academic exercise. When it occurs as part of a problem solving exercise, where students are required to explore the literature relevant to a problem they have experienced in practice and to give an account not only of the literature but of the way in which it informs the area of practice in question, then literature searches start to bring these course elements together.
• Case studies
Case studies provide students with the opportunity to carry out an in-depth study of one particular client/patient in their care. Typically case studies are constructed from the student's actual experience of a client over a period of time and, although students might be expected to draw on relevant theory to inform the case, the essential material used by the student often comes from direct observation as well as historical documentary evidence.
• Care plans
Assignments based on the construction of care plans not only require the student to use observational and documentary evidence in assessing their client, but also require the student to reflect on how that client's needs might be met and kept under continual review. Students might be expected to construct a care plan from scratch, critically evaluate an on-going and pre-existing care plan, or develop part of a care plan. For example, a student in the early part of their course might only have to write about how they have contributed to a client's assessment. Most assignments based on care plans are constructed around the student's own actual experience of a client on placement and, like case studies, draw on relevant theory.
• Neighbourhood studies and community studies.
These typically require the student to explore and describe in detail the characteristics and available resources in a given locality. They may be based on the needs of one particular client or a specific client group. Sometimes these studies are carried out as a group project and in such cases are more likely to be used formatively.
• Poster displays
Like neighbourhood studies, poster displays of project work are often carried out collaboratively in groups. They can reflect project work which is abstract/academic and highly theorised or, in contrast, work which has been focused on practice and student experience.
• Personal journals and learning diaries.
Personal journals and learning diaries typically require the student to record their reflections on their experience throughout the course, and draw heavily on the student's experience of practice. They may require the student to consider personal learning objectives. Journals and diaries can be confidential to the student, open to scrutiny by peers and teachers, or a mixture of both.
• Critical Incident Analysis
Although critical incident analysis can be used as a verbal learning tool in clinical areas and classrooms, it is also used as the focus of written assignments. Critical incident analysis tends to focus on a particular student experience, which may be part of everyday practice or may be unusual in some way. Students are usually required to show a broad understanding of antecedent, contextual and theoretical issues in a written discussion of the incident chosen. Critical incident analysis therefore usually has high potential for assessing the application of theory to practical problems.
• Research proposals
Where students are required to produce a research proposal as part of their assessment, the extent to which the activity draws on students' own experience of practice depends to some extent on the way in which research activity is defined and understood within the approved institution. For example, where research is experienced as a means of enquiring into practical problems which students have encountered through their course, its integrating potential will be high. Where it is taught and experienced at a greater distance from the rest of the curriculum, its integrating potential will be lower.
• Written examinations
The extent to which examinations are able to reflect students' ability to contextualise abstract theory and theorise about practical issues and experience depends both on the style of questioning and on the opportunity they provide for student reflection. Unseen examinations offer less opportunity for reflection than seen papers, and questions which focus on the range of considerations needed for a particular client are more likely to draw on the student's practical experience than questions which focus on theoretical concepts.
4.4 CONSTRUCTING THE EVIDENCE BASE: A SUMMARY
To provide adequate evidence for the purposes of assessing competence as an integral part of an educative activity, texts must be open to independent re-examination, must make possible the monitoring of the assessment processes, and must provide a form of quality assurance. From our examination of the range of assessment texts in operation, this does not always happen.
Critical to the professional role of assessing students' practice is the development of judgement. Assessors need to be skilled enough to form and defend appropriate judgements in order to act professionally:
(assessors) need to realise that it is a crucial stage of training for a nurse that the patient is central, that they must make their decision based on the criteria laid down and that they should stand by what they decide. Because no one has the right to challenge another professional's professional judgement unless it can be proven that that professional judgement is flawed....So I think that's where we need to be more forceful in our teaching, is trying to get across to people that they have a right to make these judgements, they are employed to do that. (educator)
Continuous assessment in placement areas requires a complex evidence base, which reflects the realities of practical experience. It is the contention of this project that the evidence base provided by assessment documentation is frequently not adequate for these purposes.
4.5 CHAPTER EPILOGUE.
AN INDICATIVE LIST OF MECHANISMS & PROCEDURES FOR COLLECTING EVIDENCE.
informational mechanisms, such as:
• the provision of handbooks
• briefing meetings to ensure that students and staff:
° are apprised of the role definitions of link tutors and other staff in the placement area
° understand the purposes of the placement
° understand the purposes of, and the procedures necessary for, the assessment of placement experiences and activities
professional development and student preparation mechanisms to ensure that staff and students are competent in techniques of:
• interviewing each other to identify agendas of needs and interests
• situational analysis to make explicit:
° the wider context of social, political and economic conditions affecting the clinical area
° the professional context
° the structures, principles and procedures underpinning practice in the clinical area
° the particular circumstances of the situation
° the influences on decision making
° the constraints on action
reflection and dialogue structures to encourage
• reflection upon practice to make explicit:
° staff tacit expertise
° comparable and contrasting cases/events that influenced
° student understandings/misconceptions etc
• discussions with a range of staff and students to:
° share experience
° internalise the appropriate values, ways of behaving etc
formative mechanisms to develop
• diagnostic interviewing with mentor/assessor/link tutor
• participation in discussion with care teams
• the provision of accounts of practice, e.g., giving a running commentary during practice, making explicit internal thought processes; writing up accounts of practice for later discussion.
summative mechanisms to ensure
• triangulation of opinions between: assessor, other mentors, colleagues who have worked with the student, the link tutor, the student, other students
• completion of formal assessment documents
monitoring and quality assurance structures to ensure:
• role holders are appropriately qualified and experienced
• roles and procedures are actually carried out
• judgements are backed with appropriate evidence
• judgements are consistent
• assessment structures, mechanisms and procedures consistent with
• appeals procedures available
This chapter has considered a range of so-called 'theoretical assessments' in order to describe their potential for integrating knowledge and understanding learnt outside the practice situation with students' own experiences of practice. It is the contention here that this integrating function should be a high priority when strategies are developed for the assessment of theory.
With the increased amount and complexity of assessment activity that accompanies devolved continuous assessment come pragmatic and conceptual problems for assessors. These give rise to a lack of confidence among the nurses and midwives who are doing the assessing, and especially among those who have actually been prepared for the different role of mentor. Since reflective practice requires assessment that is indivisibly part of the process of action, analysis, critical reflection and further action, the boundaries between assessing and mentoring roles have become blurred. This makes the superficiality of mentor preparation for assessing a particular cause for concern. To cope with such problems, the professional preparation of clinical assessors (including putative assessors in a mentor role) must focus on the development of understanding. Nurses and midwives who are asked to assess practice in terms of knowledge, skills and attitudes must themselves have an opportunity for developing their own competence in each of these. Until approved institutions are able to meet the basic need for understanding that assessors' comments imply, assessment will be perceived primarily as an additional burden. Without that understanding, clinical staff, like students, will continue to seek clear procedural guidance to frame their activities. The three main forms of assessor preparation appear to be meeting assessor needs in part only. The ENB's 997/998 courses and the City and Guilds 730 course are the official preparation for assessors. Where successful they provide both content and experience of the process of exploring educational issues, and equip staff for assessing in the clinical area. There is, however, a question of whether even these courses are sufficiently long to allow the process aspect to develop, and more particularly to give the course members a real sense of being supported in their attempts to come to grips with new ways of thinking and working.
An unintended outcome is that practitioners who have been prepared for the assessment of students' competence become unofficial teachers of colleagues, through an informal cascade model of staff preparation. This is a task for which they feel ill-prepared. Short in-house courses offer update information but are less effective as a way of changing conceptual maps, providing piecemeal knowledge rather than encouraging the kind of paradigm shift in the bigger picture which allows principled new thinking. Staff attend short courses if the course has a clear relationship to immediate patient needs, if it provides them with access to further educational opportunities, or if it offers them a considerable amount of personal support. Even when courses satisfy all these conditions, the pressure and unpredictability of workload, and the inconvenient times and locations at which courses are offered, make staff attendance difficult. The third and final means of preparing assessors is through clinical liaison, by which teachers attend placement areas on a regular basis. This is capable of providing an opportunity for clinical staff to share worries and doubts, to discuss ideas with teachers and to come to a better understanding of assessment criteria. All too often, though, it is an occasion for trouble-shooting, and focuses on the problems the students have rather than the needs of the assessors. This leaves a proportionately small amount of time for the development of mutually beneficial relationships, and the development of dialogue. There appears to be a need to make current professional development structures more effective. Assessors perceive their needs in terms of support and guidance, and look for opportunities to increase their own understanding through dialogue. They need to be able to gain 'safe' experience of the assessment process themselves if they are to understand it from inside.
IMPLEMENTING THE ASSESSMENT PROCESS:
THE PROVISION OF PROFESSIONAL DEVELOPMENT AND SUPPORT
In this chapter we consider the professional development experiences of the students, educators and practitioners involved in the assessment process. The emphasis, however, is on the group which the research data identifies as perceiving itself most in need of professional development and support, namely the nurses and midwives who act as assessors in clinical and community placement environments.
Prior to the introduction of continuous assessment, assessment roles and responsibilities in clinical areas were clear cut. More senior grades of staff were prepared for assessment roles by the ENB 997/998 "Teaching and assessing in clinical practice" courses (for midwifery and nursing respectively), or equivalents such as the City and Guilds 730 Further Education Teachers Certificate course. They were then registered with Health Authorities to assess the 'one off' practical tests undertaken by students in clinical areas and to complete placement reports. With the introduction of continuous assessment considerable changes in the operation of, and philosophy behind, assessment have occurred. In brief:
• The amount of assessing has increased dramatically. Students are assessed throughout the duration of a greater number of placements, and as a consequence the demand for assessors has risen significantly. Meeting this demand has been problematic for some approved institutions.
• The skills required of assessors are more complex and hence re-skilling is of great importance to ensure that activities such as reflection, formative assessment and facilitation of self-assessment occur. In many cases, assessment is now conducted at a greater academic depth, reflecting curriculum developments at diploma level.
• The transitions in educational cultures which have re-shaped approaches to assessment have been predominantly led by (and hence taken on board more quickly in) educational rather than clinical settings. As a consequence, professional development for clinical assessors has extended beyond merely providing information, to ensuring deeper understanding and appropriate execution of assessment intentions within a wider educational framework.
5.1 PROFESSIONAL DEVELOPMENT FOR ASSESSMENT
5.1.1. The Inadequacy of Mentor Preparation to Deal with Assessment Issues
There is one thing about which the people who have responsibility for assessing students are agreed, and that, as the quotation below succinctly indicates, is that they cannot carry out their role satisfactorily without proper preparation.
I think you could have excellent assessment documentation. I think that's by the by. I think the biggest part of assessment is the preparation of assessors. (educator)
There are two roles basic to continuous assessment for which staff require professional development. These are the mentoring and the assessor roles. According to the ENB guidelines (1993: 6.5) a mentor is:
An appropriately qualified and experienced first level nurse/midwife/health visitor who, by example and facilitation, guides, assists and supports the student in learning new skills, adopting new behaviour and acquiring new attitudes.
Again according to the guidelines (6.4) an assessor is:
An appropriately qualified and experienced first level nurse/midwife/health visitor who has undertaken a course to develop his/her skills in assessing or judging the students' level of attainment to the stated learning outcomes.
Mentors are predominantly staff who have at least six months' qualified experience, and have been prepared to take part in mentoring activities through in-house preparation. This is far less extensive than either the ENB 998/997 course or the City & Guilds 730 course, lasting in most cases only a day or two. Often these induction programmes are included in more broadly focused short courses for recently qualified staff nurses. The majority of mentors find this form of preparation entirely inadequate for the assessor role which they may have to take on.
I did a curriculum awareness study day and I did a workshop on mentorship. But I think, sometimes you have no actual awareness of what you're letting yourself in for really. You suddenly go on to the ward and you're told...you're going to be a mentor...and I think people have different attitudes as to what the mentor role should be....what the student perceives (as a) mentor and what actually the mentor feels that she should give as input to the student. (staff nurse)
I know the school says that, "Oh yes, mentors go on a workshop." It's only an afternoon. (student)
Since learning of the kind which leads to reflective practice requires assessment to be indivisibly part of the process of action, analysis, critical reflection and further action which every student will engage in on the journey towards competence, the boundaries between assessing and mentoring roles are blurred. The new demands of continuous assessment relate directly to these changes in education, where the focus is on contextualised problem solving, holistic care and reflective understanding rather than atomised, de-contextualised skills to be performed on demand. Ideally, the people who hold the separate roles of assessor, mentor and teacher are able to co-operate to ensure that assessment and the educational process are integrally related in developing the professional competence of the student. But the ideal is only achieved where there are appropriate structures in place to promote collaboration, and where the role-holders feel confident in those roles. To ensure that this is the case nursing and midwifery staff, whose first job is to care for clients, need adequate preparation for their second role as assessor or mentor.
The superficiality of mentor preparation is a particular cause for concern when the boundaries between mentoring and assessing activities have become so blurred.
5.1.2. Assessor Preparation
If the mentor role is not currently well adapted to the assessment role, the assessor role itself is often not particularly well supported by programmes of preparation. There are three basic forms that professional development for practitioners responsible for clinical area assessment may take. They are:
• ENB 997/998 and City and Guilds 730 courses
• in-house courses or workshops
• clinical liaison meetings
The first is looked for wherever possible, and as time goes on more assessors have had the opportunity to participate in one of these approved courses. The others are offered as an interim measure until such time as it becomes possible to arrange for all assessors to undertake an ENB 997/998 style of course. None is entirely satisfactory in meeting the full range of assessors' needs.
Described below are practitioners' perceptions of the adequacy of their preparation for the role they are already carrying out. The comments reflect a great deal of insecurity in the role, and considerable appreciation of the support and encouragement given where it does actually occur. They also demonstrate that until approved institutions succeed in meeting the very basic requirement that assessors have for opportunities to acquire new knowledge for themselves, and to discuss their doubts and anxieties, they will be perceived as placing a further burden on practitioners instead of helping them to develop their professional expertise. The initiation of mechanisms to encourage the exchange of ideas about competence and the assessment of practice, in a spirit of mutual education, would ensure that preparation for assessment became a truly professional development matter. In the meantime, the support mechanisms that do exist (including courses and liaison provisions) appear to need strengthening.
5.1.3. Preparation Through ENB 997/998 and City & Guilds 730 Courses
Continuous assessment together with changes in educational philosophy has increased the scope of the assessment role and has also increased the demand for assessors. There is, it is argued by many, an overriding need for all members of staff to be appropriately prepared for the role of assessor.
People who become continuing assessment assessors of pre-registration students should ideally be registered for six months or more, ideally they should have completed a 998 course or attended a preceptorship course or maybe being an original ENB or GNC ward based practical assessor and had some kind of adaptation, or to have been prepared in a way as deemed appropriate by the senior management of (the) college. (...) probably almost every registered staff nurse is going to be called upon soon at least to be a preceptor or a mentor as a part of the continuing assessment and they may well be contributing to the formative and the summative continuing practical assessments. They probably wouldn't be the over-riding signature, maybe they should be the countersigned or whatever (...) And this is why I am trying very hard to make myself available whenever possible to actually talk to more and more groups of staff nurses to prepare them for that role. (Educator)
The probability is that it will be some time before there is a full contingent of fully qualified assessors who have attended an approved full-length course. In the meantime, assessment preparation is disseminated using an informal cascade model. There are qualified assessors who see their role solely as involving a direct contribution to the assessment process, but many find themselves acting as supporters and facilitators for colleagues conducting assessments without formal assessing qualifications. The result is a situation for which neither the qualified assessor nor the colleague with whom they are working is properly prepared. In these circumstances there is a clear mismatch between the content of the ENB 997/998 course and the role as it is actually practised, which involves supporting and teaching other members of staff:
...the 998 (it) seems to me, will have to be completely reviewed in view of continuous assessment...it will also have to be reviewed in relation to the supervisory component of a 998 holder...as a 998 holder they'll probably do assessments but they might be responsible as a primary nurse for supervising a colleague, another primary nurse who's doing continuous assessment without a 998. Now that's a whole different ball game. So that needs to be reviewed and relatively quickly. (educator)
Chief among the other perceived shortcomings of the current ENB 997/998 courses is their relative shortness for the task of moving practitioners into a different conceptual territory.
Considering the amount of detailed content such a course needs to encompass to prepare nurses and midwives to meet current assessment demands, the relatively short duration of courses aiming to cover both teaching and assessing is a particular problem. When course time is given over predominantly to providing access to content, the time available for attending to the exploration of educational assessment within professional contexts of practice is proportionately reduced. Some individuals question whether ENB 997/998 courses adequately alert practitioners to both the educational and assessment implications of devolved continuous assessment, suggesting that it is instead a mechanical exercise that does little to support reflective practice:
...the 998 is a mechanical exercise isn't it? It's about getting people through an ENB course, teaching, assessing. I mean it's fine, I'm not knocking the course, the course management team do what they can within what's available. But I mean they can't produce the sort of reflective practice within supervisors that's necessary to encourage it within students. (educational manager)
I know they all do the 997, but sometimes the feeling that I get from them is that it's something to be endured...(student midwife)
The issue here is about whether the courses can deliver what is needed given the constraints of time, resources and so on. Clearly, there is a value to the courses; they meet a real need for those people confronted with students who are products of the new approaches, and with whom they have to work immediately. Those courses which reflect current educational approaches are valued by practitioners not least because they aid their understanding of the experiences faced by students. This sort of understanding (of process, through experience) may well help the longer-term development of a professional approach to assessment as part of the learning process.
It came as quite a shock at first because I'd never come across...I mean it's 17 years since I did my midwifery and then we were just lectured to and it was like a totally different way of teaching (...) they brought out things that I'd never thought about, I mean the teaching, making teaching plans out and things like that. I started to read more than I had done previously, a lot of interesting things, things that I'd never heard of, you know like the first time they said, "We'll brainstorm this." I didn't know what it was! Everybody else did but I didn't, and just little things like that.
I can see from the student's side a bit more from doing the 997, having to go and try and find bits in journals, but of course the journal's been ripped out, the article's missing, the books are not there. It's been really difficult sometimes and I think, students are having to put up with this fifty times more than me because I'm only having to produce three pieces of work. They're having to produce an article a week for this project, for that project.(...) So I feel sorry for them.
Of course, the major function of a course for assessors is to provide an opportunity for nurses and midwives to learn more about that role. It may be the case, as some assert, that there is a dimension of teaching integral to the professions of midwifery and nursing. However, the assumptions that underpin these views of how and what nurses and midwives teach in their everyday contact with clients, colleagues and students remain largely implicit. Courses such as the ENB 997/8 and City & Guilds 730 may contribute to making such assumptions more explicit:
Well I think everyone can teach in this profession but I think having the 997 guides you...it shows you what you can actually use within the area you're working which you probably never realised was there initially. You're doing it but maybe you're enhancing those skills(...) so it does broaden your mind towards, you know, how to get what you want out of the student, how to give them information so they retain it (...) you can actually find out whether or not they're gaining from their experience. So it does help, definitely. I think you've got to do it, I don't think it's umm, like pie in the sky sort of thing, it's something you need to do, it enhances what you probably already do to make you more effective.
Courses which prepare practitioners for their teaching role by raising their awareness of teaching, will only prepare them for the assessor role if the teaching, learning, and assessing functions of programmes of practical professional preparation are seen as part and parcel of a single entity.
5.1.4. Preparation Through In-House Courses and Workshops
All approved institutions organise short sessions to update educational staff on new assessment strategies, especially staff not involved in their design. Educators require an understanding of the totality of assessment strategies if they are to liaise effectively with practitioners and facilitate them in their role of assessing clinical practice. They also need a detailed knowledge of the assessment of theory and the associated marking guidelines.
...we've had marking workshops organised within the college (...) I mean I've never marked work at diploma level. And so it was as new for me as it was for them(...) The tutor who arranged these marking workshops (...) drew up markers guides and sort of made it easier for teachers to be able to look at pieces of work and say, "Well yes, there's a knowledge base here," and going all the way up the taxonomy. (midwifery course leader)
And there have been preparation sessions with staff for the introduction of the assessment strategies(...) again that is something that we felt was important. There should be preparation, even though it was fairly brief...(education manager)
Separate short courses for practitioners are also held. Often lasting a couple of days, they typically focus on specific changes to the assessment system such as new documentation or the process of determining levels of achievement. It is questionable whether courses of such brief duration and narrow focus provide adequate preparation for assessing complex forms of behaviour. It is also doubtful whether they can support the sort of reflective thinking about the relationship between knowledge, skills and attitudes which is necessary. Given that attendance at even the small number of days set aside for in-house courses is not always high, there is a serious question about the success of such courses in reaching their intended audience:
A small number of us from the college's continuing assessment committee, which I chair, set up a series of nine study days, three at (each of the three sites linked to the college). Attendance at those study days was varied, (...) at the three at Raven hospital I would suggest that we had an average of 40 to 50 psychiatric charge nurses or staff nurses at each one, which must have covered almost every qualified member of staff in the unit and I was delighted with the attendance there. There was one study day that I held at Penrose hospital where I had an audience of five, and all five of those people had come across from Thorpe hospital to be there. In fact if I'd relied on the Penrose hospital audience I was meant to be relying on then I'd have had nobody. (educator)
This kind of response from practitioners was found in a range of placement areas. The reported reasons for non-attendance all relate to pressures it would be normal to find in a healthcare environment. It follows, then, that short in-house courses are destined to fail unless some way can be found of moving assessment preparation in from the margins. Practitioners should not have to cope with the added pressure of attending top-up sessions located at some distance from their workplaces, nor of balancing course attendance against the competing priorities of staffing difficulties and the unpredictable workload of client care.
One community educator described the difficulties of providing educational resources to mount a programme and secure the motivation and attendance of practitioners:
...I realised when I first came into post that we were going to have a problem with freeing up sufficient assessors and supervisors (to attend the sessions). Unfortunately it was not a priority for the service side in terms of all the other things they had on their agendas and it didn't matter how many times we raised it, it still wasn't important until it actually came to the crunch about two months ago when we said, "Well look, these students are coming out, can you identify these supervisors?"
This educator managed the situation of competing educational and patient care priorities by staging updating sessions which were not incompatible with workplace priorities:
...it's difficult enough to pull a hundred practitioners off the community all at one time for a study day(...)what we're hoping is if we get sixty then we'll do two half days and pull thirty in at a time, and we've a chance of getting that so it's manageable for them and manageable for us.
When assessment strategies are designed for courses operating in newly merged approved institutions and their associated placement areas, some practitioners are more familiar with continuous assessment than others:
...it's interesting too that the state of play on each site is different in terms of developing practitioners to participate in forms of continuous assessment and indeed even the form of assessment. (...Staff at the General Hospital) are further along the road really in the sense of how you implement Project 2000 assessment and continuous assessment than say other sites are. And the other thing about it is perhaps the environment in which the change is being introduced is one in which teachers and practitioners are already talking to each other about assessment in the clinical situation in a constructive new way. So the ground has been prepared really. Whereas on other sites it starts from scratch. (educator)
There is a particular need to prepare community staff to meet some of the new demands and expectations of Project 2000. One community educator insisted that clinical liaison for placement in the community needed to include a sizeable component about Project 2000, in order to 'sell' it, and discussion about the relationship between Project 2000 ideals and the values of community nurses and midwives. Community-based staff often feel isolated from educational opportunities, and many felt the need to be valued and supported as well as informed about the assessment process:
...we've actually spent a lot of time going into the health centre to do tutorials(...)it's given us access to people to make them familiar, because I think the community quite rightly felt aggrieved that here was the community that's always been looked down on for years and now all of a sudden because Project 2000's come on the horizon, "You want us to move heaven and earth to accommodate these people." (community educator)
5.1.5. Preparation Through Clinical Liaison
All approved institutions have established systems for ensuring liaison between educators and staff in placement areas. In fact, one major purpose of the liaison, perceived by both sides as important, is to ensure that information about assessment procedures and requirements is shared. A second purpose, to support students on placement, is a little more problematic, placing the educator in the position of having to decide from time to time between the learning needs of the student and the developmental needs of the practitioner. For example:
You know some of us have been here for a while, courses aren't available for different reasons, you know, financial or can't be released or lack of places...we are going to actually have recognised teaching sessions on topical developments...like Project 2000, PREP, continuous assessment...I do a lot of work at home and I do try to improve my existing skills but the time involved is difficult. In fact one of the suggestions at our trained staff meetings was that we actually do a piece of research about a topic ourselves and feed it back to the group, in our own time, not using hospital's resources. And the thing is that none of us can actually do it. We have not got the time out of work. And we really don't feel very well supported...(staff nurse)
The quality and quantity of clinical liaison varies. In some areas links are good and ongoing relationships are forged which ensure staff are up to date and well supported regarding educational issues and assessment. More frequently, however, practitioners spoke of their needs not always being met, and educators of their difficulty in fulfilling the kinds of commitments they would like to make to their link areas. Practitioners' needs for clinical liaison, and the importance of the activity, are recognised by many educators:
It's not that we don't want to do them, don't get me wrong, but sometimes you do think well really I'd like more time to give more time to these areas.
I used to be a clinical teacher years ago before I came here, and it could be a full time job just really being on the wards and doing this when we've got heavy commitments in the college (...)I think the majority of teachers (...) would like to give more but it's difficult to do.
Many practitioners and students are aware of the constraints experienced by educators in fulfilling their liaison role. As previously outlined, educators spoke of the pressures of work in approved institutions which frequently take precedence over clinical liaison. As many institutions have merged and cover multiple sites, some educators have difficulties allocating sufficient time to travel to, and spend useful periods on, link areas. This is particularly the case if they have a number of clinical links which are geographically spread.
Some staff juggle their workloads to fulfil the commitments stipulated by approved institutions, although the contact is not always as regular as they would wish:
...well the teaching staff...I mean I'm speaking for myself here obviously as well (...) there really is more and more you know...workload if you like, expected of us, particularly on the clinical areas, and personally, speaking personally I find that weak. I mean we do have this commitment to give, it's a limited commitment but we are supposed to be at least half a day per week to the clinical areas, and if we can't make that up, you know maybe we save up two days in the month.
Unsurprisingly, the result in some areas is that liaison contact is reduced to the minimum, with trouble-shooting being the main function:
...I find that I'm only doing it when people ask. What I should be doing is doing it on a more regular basis and we'd love to do it. Erm, but it's just a time scale. But it's always this business of you're doing it when you need to do it, when there's problems...
Building up relationships that are positive and valued by both practitioners and educators is a considerable task. The need for credibility within the designated link is identified as an important issue by many education staff in order to create relationships of mutual respect:
At the end of the day the relationship is really the most important thing...I spent 18 months on a placement area before I started to feel that I was in a position to actually question what they were doing and suggest changes...before I felt confident or comfortable because their reactions to me started to change (...) initially they saw me as a complete and utter threat, checking up on them (...) I approached them as equals I suppose. I helped them and I think that might have been the best thing because I set up work books, learning aids and things like that on the ward...
...I think the fact that we have now got a fair number of people who have a district nurse and health visitor background who are given that credibility, I think it's completely different from somebody from the institution going out and selling the idea (to community staff).
A small number of interviewees were of the view that clinical liaison was neglected by a minority of educators because they either did not regard it as important, or because they felt professionally insecure in practice settings. Where clinical links were poor for whatever reason, the situation was often acknowledged by those involved. Students were often aware too, and as the following extract highlights, this left some feeling uncomfortable:
Although there is a link teacher at school, there seems to be a bit of a ...I don't know, a bit of animosity between school and the clinical area. I don't know whether it's because of the diploma or what, but there's definitely some aggro there. So I'm a little bit reluctant to ask advice from the link teacher.
Sustaining mutually beneficial relationships requires effort. Even where good links have been established, there is the potential for the liaison role to be eroded by other pressing commitments:
...when you're in an environment like that, you don't actually need to be there. So I have to be careful that I don't give in to the temptation to say, "Look, call me when you need me." I mean when things get rough here and there's a lot of work to do and I can't get over there it's nice to know that it will function on its own, but I like to be there at least once a week.
In light of this general exploration of clinical liaison, it is unsurprising to note that specific assessment-related input to supplement assessors' professional development varies considerably. The following extracts illustrate the typical range:
...we do spend quite a bit of time going out and talking about the documentation to them. I think the midwives are quite threatened by it as well. I tend to go out and say, "Read it and I'll come back to you next week 'cos it's a big document, there's quite a lot to get through," rather than just go to people and going through it because they just don't know, they can't take it all in, it's quite horrendous. So I tend to leave it with them and then go back. (educator)
...link teachers working with practitioners (...) dealing with immediate issues and helping clarify those that then leads to greater clarification of a whole if you like. And there has been quite a lot of that in some areas although you know, in other areas not so much. It's interesting, it's depending upon the strength of the link teacher and that kind of approach. There have been formal sessions initially in order to prepare people to participate in the assessment strategy and I think they've tended to die out in sites A and B, although I'm not sure about site C. I think they're continuing... (educator)
To give an example, the area I'm linked to, I know there are some midwives that aren't that familiar with the course as much as they could be to help these students, and yet I've said it would be useful if we could spend some time together and talk through the course itself and look at this in terms of approaches to the assessment, but it's getting them to take that opportunity up. And that may well be to do with timings, priorities, you know the priority of patient care, students don't come first. And I can understand that, I can understand the difficulties they have, some of the areas in terms of patients come first and that's right, but it creates problems for us...And I'm sure it creates, well problems for the students. (educator)
Some of the problems affecting the development of supportive clinical liaison may be resolved over time; in the meantime the problems have to be taken into account.
5.2. MEETING CURRENT DEMANDS FOR PROFESSIONAL ASSESSMENT
Unfortunately the increased demand for qualified assessors for continuous assessment is not matched by a proportionate rise in the numbers of appropriately qualified assessing staff in all placement areas. A shortage of places on courses for the preparation of qualified assessors is a common problem in many placement areas, recognised by education, clinical and managerial staff alike. One clinical manager described the lack of assessors on her acute unit:
Terrible pressures. Terrible pressures because we haven't had enough 998 places, and we (don't) have I wouldn't say, a terribly high turn over of staff here, we're very lucky, and we don't have agency nurses or anything like that. But we still have a problem because we've gone for a... push, all the G grades, F grades and E grades through, and that is difficult. And we look to the college to find innovative ways of running the 998.
Assessment demands are not always spread evenly throughout health districts. In some districts the allocation of precious places on ENB 997/998 courses does not reflect the concentration of student assessment needs. In areas where assessment demands are high, staff feel aggrieved if the allocation of places is not prioritised according to current assessing need:
And what we whinge on about is the fact that we're desperate and we'll see one of the local health services units who send a part-time staff nurse who may not even have students, or once in a blue moon, taking our place (...) there is tension there. (unit manager)
This was confirmed by a leader of the ENB 997/998 course:
But one of the biggest problems is that it seems to be a kind of conveyor belt, a sausage machine where managers are pushing practitioners onto the 998 course. Now quite a, no I can't say quite a few because I can't substantiate that, but there are certainly people there who don't have students, and in order to fulfil this it's very difficult, so they tend to develop role play situations and that kind of thing, which to my mind is not satisfactory. So we are having practitioners on the course who don't have students, may possibly not have students for a long while and so that's potentially quite difficult and creates some problems.
Difficulty in retaining staff with assessment skills once qualified is a further problem affecting some areas. This is particularly the case in large cities, where a high turnover of qualified staff is reported. Not only is there a lack of qualified assessors to conduct student assessments, but a valuable resource is lost as staff do not develop and pass on their skills and experience to more junior colleagues:
And inevitably we are actually teaching some people, giving them skills and then losing them because that was the very skill that they wanted to (...) get a sister's post (...)we have such a tremendous turnover of qualified staff...(educator)
So the difficulty is then as far as we're concerned, is actually having experienced assessors who have actually done something like the 998 or the preceptor course, but they're not practicing the skills. (...) I mean we've got many people actually doing the course but the numbers of people at the end of the day that you can actually count, who have sometime previously finished the course and now practising and developing those skills, they seem very few and far between. (educator)
The consequences of this shortfall are recognised, in that the quality of student assessment may be put in jeopardy:
...the change over of staff is just phenomenal really. And what concerns me is that erm...basically students are not getting assessed. Students are going through without assessment, with non-assessment. (educator)
A lack of qualified assessors in some placement areas leads to an extension of the mentor role and a change in the role of the ENB 997/998 holder to that of a supervisor:
I haven't been on the 998 course because there's a lack of places available. However as there was only one trained person on my ward who had done the 998 course and (they) worked completely opposite shifts to us, how we got around that with Margaret (the link tutor) was to arrange a co-assessing role for me. (staff nurse)
...ideally you should have a mentor (sic assessor) with...the 998 but...it's impossible. There's only me at the moment on the ward who can assess. The college stipulates that someone should have that course if they're gonna be a mentor (sic assessor), we need to reach that standard, but at the moment we're not. And I know the college are fully aware of it but it's a problem...we have two preceptors linked to each learner...I usually have discussions with them, I countersign their reports and go through it with them beforehand. (ward sister)
Failure to live up to 'ideal' assessment scenarios, and to provide sufficient professional development, is common. Both educators and practitioners acknowledge that the reality of assessment practice is often one of role confusion and blurring:
What is meant to happen and I don't think does happen, purely because there are not enough people for the 998, is that mentors, and there's this confusion between mentors and assessors and what's the difference between the two, is that mentors do assess. However the mentors who assess should hold the 998 certificate, but not all of them do. So that's quite problematic. (ENB 998 course leader)
These, it could be argued, are largely transitional problems. Eventually, all will be appropriately qualified, given sufficient time and resources. Indeed some approved institutions have targeted allocation of ENB 997/998 places according to assessing needs during the duration of the project. Meanwhile, students are meeting some quite unsatisfactory situations. It only takes two or three placements of this kind to represent a significant proportion of a student's practical experience.
Where initial professional development has not been totally successful, some students are, in effect, preparing assessors for their role, because of their greater understanding of assessment requirements:
I've had quite a lot of (assessment) experience 'cos I'm on my last but one placement, so there were problems initially and as we've gone on, as the course has progressed there's a greater understanding of what's expected I think. But still it's not a hundred percent from students and clinical staff. There were a lot of problems with the first one which was our delivery suite allocation. Staff had supposedly been prepared but didn't feel prepared and we weren't told anything 'cos we were sent out with, "The staff know what to do." So it was the blind leading the blind initially, but it's lot better now, eventually. (student)
5.3. ONGOING SUPPORT AND DEVELOPMENT FOR ASSESSMENT
Practitioners develop their confidence as assessors through a gradual learning process, resolving issues of concern slowly as they increase their understanding of the concept of continuous assessment through increasing familiarity with documentation and procedures:
...I think until you actually use the forms and get familiar with the documentation... then you don't feel totally happy with it. But now we've had one set of students I think just from doing their first interview we got to realise what the form was like. (assessor)
Many realise that the period of growth is a long one, recognising that confidence and understanding take time to build up:
I think you get better at it as, as you become more confident in yourself definitely, and your own practice. I'm confident in my own practice now so I feel able to assess someone fairly, whereas initially when I was very first qualified I didn't...when I was assessing someone I found myself thinking, "Am I doing this right?"...questioning all the time rather than looking at it objectively... (staff midwife)
Although the accumulation of experience and confidence in the assessing role is essential, such learning by doing is not of itself a guaranteed method of gaining insight, for experience without reflection does not necessarily yield it. The development of skills and confidence in assessment is enhanced by regular contact with colleagues who can offer advice and a sympathetic ear. The security of knowing that support is at hand can diminish the personal threat that some novice assessors experience:
I'm very frightened...because while we're doing formal assessments who's to say that my standard is too high or too low? Would other people pass a student who I maybe would fail? So that worries me a lot...But at the same time I feel I'm confident enough now to refer somebody if their care was detrimental...I have gained confidence but it still worries me. (assessor)
The need to develop understandings about assessment also applies to students. All student groups receive information about their approved institution's assessment strategy via written guidelines, and there is often a session with an educator who takes them through the overall strategy. Students stress the importance of this activity to them, but at the same time point out their belief that there is a need for more than a one-off session at the start of the course.
Although in the case quoted below the student had the opportunity to clarify issues at a later date, students in the initial intakes of new courses are generally adversely affected by a hurried introduction to assessment:
Our tutor didn't seem terribly familiar with the material and we understood that it had just been landed on her desk more or less, and I think most of the tutors here are in the same position. That might be untrue but it was certainly the impression that we got. We had the examinations officer from the college who came in at the end of our introductory course to talk to us (...) specifically about the assessment schedule, and that was quite helpful in clarifying our minds.
Some students comment that their initial understandings hold until they start to be assessed or to complete written assessments, at which point they realise that they are unsure not only about what they were being assessed for but also how the assessment is supposed to be conducted. Students perceive a need for regular discussion with teachers to clarify procedural issues and develop confidence and understanding of the system:
...we were given our assignment title, we were given guidelines (as) to what the marking criteria would be...but as to writing it and what was really needed, that was up to the individual to go to the tutor and find out really. But the problem with that occurred in the fact that once you went to one tutor you had to stay with that tutor because if you went to another tutor you'd get totally different feedback and they'd expect it to be different which caused a lot of controversy. (student)
It might be argued that in a culture of mutually educative practice this perception would be replaced with another that expressed the need in terms of an opportunity for discussing differences of view with a range of people. In such a culture, differences in advice about procedures would be treated as an opportunity for students and teachers together to identify and come to a better understanding of the principles by which assessment procedures operate. True confidence would be engendered where all parties are able to feel the independence which comes from an awareness of the underlying structure as opposed to the surface procedures. The fact remains, however, that students were rarely sufficiently familiar with these principles and so they did not have the confidence to act independently. Under those circumstances they wanted desperately to be given clear procedural guidance, and to have open access to a teacher who could reassure them about these procedures when there appeared to be some difficulty in operating them.
In the same vein, students sought consistency between educators engaged in facilitating and marking assessments of theory. This time, however, the issue is not simply one of insufficient understanding of the principles of procedure. There is clearly a case to be made for fairness and consistency in assessment criteria. On this matter there is a need for development for all those involved; teachers too benefit from opportunities for dialogue on assessment issues:
...we're continuing to develop staff in relation to marking and so on, so we've had some workshops that people attended in order to try and develop that. Now I think we need to do quite a lot more of that, but at least we haven't forgotten about that. (education manager)
Whilst there is a perceived need for developmental activities and the sharing of ongoing assessment dialogues throughout approved institutions, staff involved with the assessment of practice are identified as having some of the greatest needs. Given their uncertainty about how they should conduct assessments, and the fact that many do not see the assessment activity as an integral part of the clinical placement teaching and learning process, the variability of quality in the professional development they receive is of some concern. Of particular concern is the apparent divergence in practice within approved institutions for developing and supporting practitioners as they move from dependence on the purely informational to an understanding of the complexity of the assessment process:
And if half the people or whatever number have been trained literally as they work to operate within one system, we are then asking them to assess with a different philosophical view...without...apart from a 998 course here or there, I mean not particular help to do it. If any industry was re-modernising it would put in massive resources to change it, to operate the new machinery. Nursing somehow hasn't put that (in), and after all, assessing that they're competent to practice in this new way when you've never practiced in it...it's like asking me to judge something I know nothing about. They would argue that there isn't that much difference, I think there is a tremendous gap. (educator)
And I think the other problem is that practitioners haven't as yet the framework to judge things by. In other words what I mean by that is that they tend to be able to relate clearly to and very easily to, generally speaking, traditional methods of assessment which they have been exposed to themselves but they do have a problem, and understandably a problem, not a criticism, of relating to new methods and with new intentions and I think we've got some work to do in that respect. (educator)
The development of a 'framework to judge things by' is vital if the assessment of competence is to be effective. Where practitioners do not receive regular professional development, new educational approaches, curricular and assessment strategies sometimes reach practitioners via the arrival of 'new' students on placement. The quality of staff responses to students on Dip HE level courses appears to be governed by the strength of practitioners' knowledge, skills, understanding and confidence in both the educational programmes and the assessment processes. The kinds of activities engendered by these educational and assessment approaches, i.e. questioning and reflective practice based on sound knowledge and research evidence, are perceived in some cases as a positive challenge which stimulates and enhances staff skills and knowledge base through dialogue, and in others as a threat to their skills and knowledge base.
But there's a lot of midwives who feel quite threatened by the students, diploma students, (...) they've got the experience of midwifery obviously in lots of practice, but in terms of thinking diploma that's quite difficult for a lot of them, understandably. Many of them haven't done research, students get research in their programme now, a lot of midwives don't know the first thing, you know, critiquing pieces of research, that's a problem isn't it for the student looking at the issues of practice and questioning practice. Sometimes midwives find it very hard to respond to the questioning student. (educator)
I mean a lot of people are very threatened by us, being called a "new fangled student'" and "you're a funny Project 2000 thing aren't you?"...I think a lot of the time they feel very threatened, they think, oh that when we're qualified we're going to step up the ladder quicker, but in practice we're not going to do that. (student midwife)
The need for professional development to extend beyond a small number of informational sessions is clear. Ongoing, regular contact is required to support assessors and develop assessment practices. Dialogue about assessment skills and practices, and the sharing of experiences is identified as a vital ingredient:
...the feedback we get from them (assessing staff) is continuous because you invariably see them at least once a week, once a fortnight maximum (...) as a clinical link. And so you are not only responsible for the student in that clinical area but you're also responsible for the supervisor, you've prepared the supervisor, prepared them for that particular student. (...) So there is a very definite feedback process(...) it's usually about their worries and fears, about a) whether they're doing the right thing, b) how do they actually do the mechanics of the thing and c) the levels at which they're pitching. (educator)
...I mean like...I would like regularly to meet. I mean I go down once a month, I would like regularly to discuss with the unit I liaise with, which I feel I've got a good relationship with, (...) some of the educational issues. I mean it's things like, O.K. we've had continuous assessment now since June. There must be all sorts of things that they want to discuss with us but I don't think anybody's actually yet had a meeting which actually ...we've got together with those staff and say well, "What difficulties have you had so far?" (educator)
In some approved institutions, ongoing support to discuss progress with innovations and address issues such as 'standards' and 'failure' is provided via regular forums. Most approved institutions run forums every two to three months, and qualified assessors are often obliged to attend a minimum number to retain their status as practising assessors. Planned activity which stimulates dialogue and uses a critical incident analysis approach is valued by assessors. It is unfortunate, therefore, that some forums do not focus on assessment experiences, and consequently critical debate and dialogue are not stimulated. Attendance varies considerably too. As was the case with short in-house courses and workshops, whilst some forums are well attended, reports and observations at others reveal that teachers outnumber practitioners. Pressing workplace demands and difficulty attending forums at sites other than the practitioner's workplace are identified as problems for some, as well as lack of enthusiasm and low motivation for forums that do not usefully tackle assessment experiences.
I think the problem always is, where's the time to do all these things and where are all the resources to replace the staff coming off the ward and also from the educational centres, where are the resources? Well I mean you can always say, well it's part of their work anyway, but it's so...in this changing climate there's just so much else to do isn't there? I mean that's a one day mentor workshop and a one day on continuous assessment...I think it's safe to say we need a lot more than that for the issues to be aired properly(...) they could be looked at in assessors meetings, (...) perhaps break into small groups. (educator)
Practitioners acknowledge that the resource implications of professional development and ongoing support and development are not inconsiderable, but remain certain that it is essential to provide support sufficient to ensure that the assessment of competence is carried out in a way that promotes the registration of high quality reflective practitioners.
5.4. PROFESSIONAL DEVELOPMENT AND SUPPORT: A SUMMARY
In a study of the effectiveness of assessment strategies in assessing the competencies of students, it would be easy to perceive the needs of practitioners as something to be attended to once the system is up and running. It is clear from the comments of those interviewed, however, that structures for the preparation of and support for assessors must be in place from the very start so that practitioners can begin to gain confidence in their new role. Without proper professional development, assessors are obliged to 'do the best they can' in moments that they have free from their main nursing or midwifery activities. To be able to operate assessment so that it is integral to the teaching and learning process, assessors must be able to develop their own understanding of the knowledge and skills necessary for competent practice, and have the opportunity to reflect on their own attitudes. Assessors perceive their needs in terms of support and guidance, and look for opportunities to increase their own understanding through dialogue. Currently they express a preference for developing their understanding ahead of discussion with the student. They are not confident about open discussion with the students because their own understanding is at an early stage. This implies a need for practitioners to be able to gain 'safe' experience of the assessment process themselves in order to begin to understand the process from inside. If this experience is to be given, and assessors are to be given the professional development and support they need and deserve, there are a number of issues to be resolved. It is necessary to ensure, amongst other things:
• that the number of qualified assessors is adequate to meet assessment needs
• that courses reach the total population of assessors
• that assessors receive ongoing support to:
- tackle competing priorities in the workplace
- free people to have time for assessment mentoring
- have time to carry out educational liaison
• that the sense of de-skilling and the feelings of threat typically associated with having to handle the new, are overcome.
Opportunities for inter-assessor dialogue and mutual education, as well as an understanding - through experience - that the central processes of assessment are in fact central to the caring process, will increase as a consequence of the introduction of strategies to ensure the factors mentioned.
Learning and assessment take place alongside each other and, in one view, are integrally related within the ‘learning institution’. The institution is the place where education occurs, but is also itself learning; it provides a context and at the same time modifies that context in response to what happens when learning and assessment take place. The ‘learning institution’ has structures and mechanisms for encouraging reflection on these processes and consequently affects the processes themselves. Individual experiences suggest that the institution does not learn at a constant rate and evenly across the board, though, and it is those differences that students and clinical practitioners report when they talk about the problems they experience in doing assessment in the workplace. Staff express their concern in terms of the work conditions they have to contend with and the strength of liaison and support they receive, while students talk about the commitment and availability of their assessors. Each is referring to pragmatic and often institution-wide problems rather than making a criticism of individuals, although of course both realise that the relationship between the individual nurse or midwife and the individual student is of some considerable importance. There are many pragmatic constraints that can turn the link between an assessor and their assessee into a relationship in name only; unpredictable staff or student absence, incompatible shifts, and a general difficulty in finding time to be together can give rise to a situation in which the assessor rarely sees the student at work. In such circumstances it becomes extremely difficult to monitor progress in terms of process, or to learn holistically. The structural problem of the relatively short duration of many placements exacerbates the pragmatic problems of ‘mismatch’, for neither intermittent contact nor short contact is conducive to the development of a holistic perspective on care through reflective practice.
What becomes the over-riding issue is the ‘competition’ between the demands of the work and the demands of assessment, with the needs of the client taking precedence over the needs of the student, so that the latter are placed in opposition to the former and marginalised. This construction of assessment as something you do ‘on top of your job’ if you have the space fails to recognise the importance of dialogue about the very issues of pressure and priorities that feature so centrally in practitioners' accounts. The value placed on good communication by students and assessors makes it clear that they would welcome the further development of supportive relationships through which they could learn to understand better the assessment issues that trouble them most. In the meantime, learning about assessment and its possibilities occurs in the antecedent events, through a hidden curriculum, and at a tacit level. The models that students develop are the result of attempting to ‘do the job’ and build a nursing or midwifery theory. As they observe and work alongside staff, they learn the essential elements of the job, and thus what counts for assessment purposes, discovering in particular that they cannot always take for granted that the assessor’s values will be the same as their own, or even those of another assessor. They learn the importance of reading the situation, and discover the relationship between theory and values. In antecedent events, then, the knowledge and value base that provides the framework for looking at practice is built up. Successful participation in assessment requires an understanding of the ‘rules’ in the first instance. But when the rules have been ‘learned’, formative assessment can be developed. Since formative assessment is closely allied to work processes, its proper use facilitates the internalisation of reflective practice on the one hand, and contributes to summative assessment on the other.
The ‘learning institution’ has a tendency to provide support for formative assessment by offering encouragement and support through the establishment of organisational patterns that allow continuity of contact and time for conversation and reflection. By monitoring its activities it develops a set of principles for learning and assessment that foregrounds interpersonal relationships, discussion, and action.
TOWARDS THE LEARNING INSTITUTION:
THE EXPERIENCE OF ASSESSMENT, LEARNING, AND MONITORING DURING CLINICAL PLACEMENTS.
This chapter examines the close relationship between assessment and learning from the points of view of staff and students. Briefly, the intended aims of student placements in terms of learning and assessment events include:
• providing appropriate learning experiences
• ensuring that students are progressing towards, and achieving 'competence'
• contributing to the integration of theory and practice
• providing formative feedback on students' progress towards achieving competence
• opening the horizons of the students towards future developments in the profession
• and compiling documentary evidence to support summative judgements made regarding the students' levels of achievement
In order to ensure that these aims are met, monitoring must focus upon a) the mesh between course intentions and the contextual conditions within which they are realised, b) the decision making events, and c) the extent to which monitoring structures and strategies reveal and facilitate action upon issues, concerns and suggestions for improvement. In short, structures and strategies for assessment, learning and monitoring comprise what may be termed the 'learning institution', that is, an institution having structures appropriate to the aim of learning from experience and capable of modifying its structures accordingly. Each of these points will be discussed in turn under each of three major sections:
• Contextual Issues
• Assessment and Learning Events
• Monitoring - Towards the Learning Institution
6.1 CONTEXTUAL ISSUES
Occupational cultures form the context to everyday practice. They mediate 'formal conceptual' systems, translating formal concepts into everyday occupational ways of categorising and constructing meaning as the basis for action. When referring to assessment, students and staff talk about 'feedback', 'discussion' and to a lesser extent 'reflection' rather than formative activities; and 'doing' the assessment and filling in documentation, 'forms' or reports rather than summative assessment. The formal concepts are interpreted in the language of everyday activities. The language of 'doing' takes precedence over the language of academic analysis. It is what Schutz (1976) referred to as the primacy of the pragmatic attitude pervading everyday affairs. In the pragmatic attitude the aim is to carry out a particular task 'for all practical purposes'. Placing a tick in the appropriate box, for all practical purposes, signifies the completion of an assessment task. In the pragmatic attitude this task is to be acquitted as quickly and as clearly as possible (cf. Cicourel 1964; Douglas 1970). For example, if a student has completed two weeks in a clinical area without causing any problems, then for all practical purposes that student may be deemed to have successfully completed the placement experience. This may seem cynical. However:
I think it's just like the old system really. If you go about things in the right way, if you're fairly pleasant to the staff and you get on on the ward, you're O.K. kind of thing. It's not a question of whether you're giving the right advice to women because they don't really know what you tell them. It's erm, you're not assessed closely enough or supervised closely enough.
Or, in contexts of pressure:
My preceptor wasn't there, I wasn't working with her for a long time...I'd gone to the sister before and said, "Well, since my preceptor is not here can I have somebody else to do my interview?" and she said, "No you can't, you have to wait for the preceptor." When my preceptor came (back) she (said), "Well I want to find out hows she's been working" and and in the end she ended up saying, "You've got to go to the sister to have it done." So...it was the last day of my placement and in the end the sister...just took the book and she just ticked it. She just wrote a few comments. And some of the things she wrote, I never even did them. (student)
On the other hand, the reflective attitude interrogates the contextual features of practice, contrasting sharply with a non questioning attitude:
..there's certainly no in-depth discussion of like the reason why you've made any decisions or anything like that. It's, "Oh I've been watching you work and I think you work very well." That's about the level it gets to. (There is ) no, "Why did you do this? Why did you do that? Can you explain this? Have you got any questions?" That's about as far as it goes and of course you say no. (student)
Interrogation of the context to the placement experience inevitably raises issues. The main issues raised during interviews can be described in four broad categories:
• approved institution/placement area liaison and continuity of support
• course constraints
◊ short placements
◊ timetable inflexibilities
• placement/work conditions
◊ staff absence (sickness, holidays)
◊ competing placement priorities
• individual commitment to role performance
These will be discussed in turn exploring their implications for the 'learning institution'.
6.1.1. Liaison and Continuity of Support
Educators from approved institutions liaise with practitioners regarding the timing and nature of student placements. Despite periodic educational audits of placement areas, circumstances may alter by the time a student arrives on placement due to changes such as skill mix or the speciality of client care provided. Consequently the need to maintain regular liaison between educators acting in link roles and placement areas was highlighted. Such communication must ensure that placement area staff receive sufficient information about the forthcoming student placement, in good time. The liaison role is thus essential in ensuring that preparatory mechanisms are in place.
For the duration of each student placement, it is intended that formal assessing relationships with clinical staff are arranged. The issue of which staff have official assessor roles is not always clear; however, all students are assigned to one or more named members of staff as their assessor(s). The rationale behind allocation to more than one member of staff is to provide continuity of student assessment and support.
The specified number of shifts during which students and assessors should be working together varied from college to college, in line with directives in policy documents. Although there were accounts by students of well maintained relationships which met or exceeded policy requirements for students and assessors working together, there were also reports of problematic relations by many students, assessors and educators.
There were reported examples where the clinical link was effectively in name only and practitioners did not have regular contact with their link tutor. Where this was the case, it was not conducive to building assessor motivation or confidence. Most interviewees said, however, that they would feel able to contact the tutor if any problems arose during the student's allocation.
6.1.2. Course Constraints
Short placements, especially in the initial stages of courses and sometimes only two weeks long, are typically thought unrealistic for establishing and developing relationships through which formative assessment can occur and summative decisions can be reached. Students commented on their need for an initial settling in period for any allocation to 'find their feet', become familiar with routines and start to develop relationships with colleagues. Students described these activities alone as taking approximately a fortnight, hence expectations of assessment activities occurring within such periods could be argued to be unrealistic:
The other aspect of the whole thing that we've got to try and address I think, and again I think it's very difficult to do with the present course structure, is the length of time over which people are assessed. I mean it is...well the most positive way of putting it is a challenge to people to assess (...) students over two weeks. It's probably not very reliable (...) I think we need to look at when assessment occurs and the length of time that is allowed for that assessment and it may be that in some areas we don't do summative assessment, we do formative. (educator)
The difficulty we've had here is in the allocation of students to the programme. They have only spent two weeks in some places as a placement, well that's five working days and that's impossible. There's no ownership by the student nurse and there is no ownership of her mentor to look after her for five days, ten working days of which at best you might get seven working days together and on day number six she's got to assess that person and that's a nonsense (...) it makes continuous assessment a mockery, I mean you might as well almost, God forbid, go back to the old four ward based assessments because all you're doing then is taking a snap shot. Well that person can perform good, bad or indifferently on a snap shot, but if you're saying as one should with continuous assessment, on a sustained period, if you've hardly got to know what that person's first name is and you are writing an assessment of that person which will give them their registration or not, the task is not only onerous, it becomes an impossibility. (clinical manager)
The brevity of placements not only affects commitment, through a lack of a sense of value of the assessment, but also runs counter to the fundamental principles underlying continuous assessment. Many of these issues are already being tackled by approved institutions, with remedial strategies set in motion. These include increasing placement length, removing some short placements from courses, or substituting formative activities for summative assessment.
Placement periods were also perceived to limit the extent to which a student could develop a holistic sense of care. For example, one manager, speaking about a care of the elderly placement, commented that because students had to leave the placement area for a study day each week at the approved institution, and because of limits on the length of their working day, they could not gain the full experience of care. They therefore missed important elements of the holistic nature of care and its management.
6.1.3. Workplace Demands
The rhythm of everyday work in the placement area does not necessarily mesh smoothly with learning and assessment intentions. To ensure this mesh, the structure necessary to underpin assessment and learning experiences must be clearly and appropriately worked out. Unless this structure is compatible with the structures underpinning the work environment, there are likely to be conflicts.
The main obstacles to the smooth functioning of the placement element of the course experienced by students were:
• incompatible shifts due to holidays, night duty and requests for specific days off
• unpredictability due to sickness or changing workload/priorities (requiring assessors to forfeit working with their student for that shift to comply with care or management priorities)
These are illustrated in the following comments:
Sometimes I've been luckier than some of them. I've managed to work with my mentors quite a lot but I haven't worked with either of them for the last three weeks, which out of an eight week allocation is quite a lot. But prior to that I'd worked with them fairly frequently. (student)
I think that's the ideal. I think it would be nice to erm...say that they do work very closely together (...) but things like holidays, study days, students requesting off duty, sickness on the ward...other people being off, their mentor has to be in charge of the ward or whatever can interfere with the process. (educator)
I know there are difficulties with assigning them to a clinical supervisor/assessor and match up the duty rota so they spend at least two shifts with that person. (educator)
The extent to which responsibility was passed from the named assessor(s) to other members of staff when disruption of student/assessor relationships occurred, varied. Where such back up systems worked or did not need to be activated too often, students felt supported. However some students had more uncomfortable experiences where there was no 'safety net' of continuity:
...you're passed from pillar to post really...you work with one person one day and another person another day and you, I don't know, there doesn't seem to be the time to sit down or you don't feel it's appropriate to say to somebody, "Would you mind sitting down and going through this (the documentation) with me?" Especially when you don't really know them or they don't know what you want from it...
The amount of time that students and assessors spent together was obviously important. When students felt 'passed from pillar to post', it was apparent that opportunities for developing relationships in which quality, meaningful assessment could occur were reduced. Examples such as this reveal the essential problem of making things happen in social contexts. Unlike chemical entities, or programmable robots, the human agent in the system has to be personally committed to making something happen and has to weigh the relative merits of competing commitments in conditions of scarce or limited resources.
6.1.4. Competing Workplace and Assessment Demands
(Clinical staff) do their best and they work very hard, and their commitment and motivation when I audit the wards, they spend their own spare time, practically in every instance. When I audit the areas there'll be a current awareness file, err badly typed up things for the students information...photo-statted sheets out of books. That staff nurses...do this in their own time because their work time is taken up caring for patients, and they are spending their own work time supporting the students, and I think when you go out there they show you this. (educator)
The issue of competing priorities in placement areas was raised by many individuals. The extent of any problems varied. Some areas did not experience difficulty in accommodating both client needs and students' learning and assessing needs; in others, however, the volume and increasingly involved nature of student assessment was placed in direct competition with the fluctuating demands of client care. According to accounts in the data, how staff tackled these conflicts when they arose varied considerably.
It was apparent that client need took priority. This stance was understood and unquestioned by clinicians, students and educators alike. However, the extent to which students' learning and assessing needs were marginalised and placed in opposition to client care needs, rather than staff working with the situation and utilising all available learning opportunities, was an issue for concern in some areas. The course of action appeared dependent upon the motivation, morale and skills of the nursing and midwifery staff. Where staff felt supported and confident with their clinical and assessment skills, and had experience in each, they appeared more able to work with client care demands and meet students' learning and assessing needs. Where this was not the case, the quality of student placements was affected:
And in some ways it's just not a priority and they will say to you, some midwives will actually say to you, we're short staffed, we're busy, we've got two midwives on, we've got three students or four students or whatever, and my priority is the care that I have got to give and therefore, you know, the students will just have to get into that. And O.K. that's the reality of practice today and in some ways the students have to learn to adapt to the realities of practice when these things are like that, that's part of their learning and provision of care, prioritising care, but it's not the idea for a student who is new, trying to learn midwifery practice. And learning doesn't get planned, it happens ad hoc, which if it's persistently like that it's not a very good learning experience. (educator)
The increased demands of assessment, and having to facilitate students on an ongoing basis was stressful to some staff, despite the overall enthusiasm. The effect was illustrated by a midwife who reported that a constant round of student supervision and assessment did not leave much space for her own professional development:
...there's a group that's gone out, the second group and we've tried to make sure that the first people don't have continuous assessment this time, so they get a break from that. Otherwise it means that everyday you go to work and you've always got a student, and that's a bit unfair. You need some time to look at your own practice. (educator)
For the student, placements may be experienced as precious, unique; however for the assessor, the student may be perceived as just another student from yet another course:
You're a named midwife to a particular student, so you might end up with a student like I've had a student on this block, and when this lot go you get another one next block and you just carry on. And you fill out the initial form, going though the areas that you want to develop, and then the middle part of the form where you're going through areas, how they're doing. And then the final bit, how have we done? Have we managed to cover all these areas?
6.1.5. Individual Commitment to Assessing Roles
Commitment to assessment relationships affects their quality and hence the educational outcomes or benefits. From students' accounts we learn what is important and valued by them in the assessor/student relationship. There were accounts of relationships which worked well and where students appreciated the benefits. Such accounts provided insight into the features (including mechanisms and procedures) required for effective relationships; they included continuity and support in facilitating the development of skills leading to independence (while providing appropriate levels of assistance), and giving the kinds of feedback that the students felt they needed. Equally, when there was communication breakdown and responsibilities went unacknowledged, the essential features of an effective system were noticeable by their absence. There is, of course, a sense in which:
...you're going to learn I think wherever you are if you are willing to learn and you've got mental faculties...but it's so helpful if you've got a supportive network around you and people who are encouraging you rather than just leaving you to...discover things for yourself. (student)
Being able to watch others at work and learn from observation is important, however:
...although it's good to work with other people and see how they are doing things I don't feel like I've had anybody to sort of fix to or have a relationship with that I can you know, sort of confide in really, which is a pity.
There is a close relationship between good practice and educational processes:
And again the people I aspire to be as role models also seem to turn out to be the best teachers, they enjoy teaching and they'll ask if you want to go over anything (...) and they'll get you doing the practical...like doing vaginal examinations and asking you what you're finding and making you apply what you're finding, and like, "What does this tell you? What does that tell you?" Whereas some of the others will let you sit there let you do the observations like blood pressure and pulse all afternoon. They'll do the vaginal examinations, you'll ask if you can do one and the answer will be, "No there's no need for you to do one." Or they'll let you do one but they won't discuss it with you which to me... there's no point doing it because it's very difficult to assess what you're feeling when you first start.
The reflective capabilities, motivation and commitment of students as agents in their own learning, while vital, are not sufficient for coming to grips with professional action. The above student comments refer to their learning needs and psychological needs such as 'confidence building', someone to confide in and so on. However a further, educational dimension is needed, so that each student is facilitated in making links between knowledge and its practical application and is not left to 'discover things for yourself'. The educational processes, psychological support and assessment functions must all work together. This involves attempts to 'get inside' professional action for educational purposes. For this, dialogue with experienced professionals is essential. It requires time and commitment and must not be marginalised by shrinking resources, variable levels of commitment and competing priorities.
6.1.6. Contextual Issues: A Summary
Everyday work demands provide the context within which learning and assessment take place. Reflective practice interrogates everyday workplace activities and is the condition for improvement in practice. The processes of education and clinical practice are not mutually exclusive. For the intentions of reflective practice to be realised, an appropriately structured environment must form its context. That environment itself must be capable of responding to needs and modifying its own structures to meet those needs. This includes all the elements of staff time to commit to learning and assessment events, professional preparation, personal commitment and liaison roles. The quality of such elements frames the quality of assessment and learning events.
6.2. ASSESSMENT AND LEARNING EVENTS
There are two kinds of learning and assessment events to distinguish. The first are those which constitute professionality. Professional action involves learning from experience, assessing courses of action and performance, and making judgements about outcomes. The second concerns the learning and assessment events that relate directly to the performance of the student. These latter may either be perceived as an additional burden or as part of the continuum to professionality. In this latter sense, student assessment and learning can be integrated into daily professional activity.
From the view point of the student, any activity which happens prior to an 'official', summative assessment decision can affect students' perceptions of what is important in judgements about nursing or midwifery competence, that is, about what it means to be a professional. Meanings about learning and assessment are constructed in various parts of the placement experience, culminating in the transaction(s) between student and assessor which take place when assessment documentation is completed. 'Official', summative assessment decision making is not a stand-alone occasion, but is embedded in the composite assessment 'event' that runs through time and continuously shapes both the assessor's judgement of the student and the students' values about and attitudes towards particular nursing or midwifery practices. The features of this composite event can be discussed under the following headings:
• Tacit/Hidden Aspects of Learning & Assessment
◊ differing ideals about practice
◊ values and attitudes toward client care
◊ building a knowledge base and rationale for practice
• Handling the Tacit or Hidden Aspects of Learning and Assessment
• Formal Formative Relationships for Assessment
• Reaching Summative Assessment Decisions
• The Scheduled Assessment Events
• Self Assessment
• Questions of Pass and Fail
6.3. ANTECEDENT EVENTS: TACIT OR HIDDEN ASPECTS OF LEARNING AND ASSESSMENT
Learning takes place all the time. As well as the intended processes and outcomes of a particular learning environment, some learning takes place which is not openly expressed, not necessarily intended. This hidden or unintended curriculum largely takes place at a tacit level. It is the kind of learning which occurs when the newcomer begins to 'sus out' the rules of the social situation, negotiates outcomes and learns the appropriate ways of presenting themselves in formal and informal relationships in order to achieve the kind of impressions that they desire. In short, they have to develop a model of 'the way things work in real life'.
The models that students develop are the result of attempting to 'do the job' and build a nursing/midwifery theory. Although one of these activities is usually thought of as practical and the other as intellectual, each is in fact a mixture of both; it is not possible to do the job without a theory of some sort, nor is it possible to develop a satisfactory theory without doing the job.
Insofar as there are some procedures and practices that occur 'universally' within nursing/midwifery as part of daily delivery of healthcare, there is a fair degree of unanimity about the need to ensure that students are capable of carrying them out successfully as part of doing the job. As students observe and work alongside qualified staff, they discover that, for example, explaining a procedure to a client, confirming a client's identity before administering a drug, and ensuring client safety in the bathroom are essential parts of doing the job. They learn these aspects of practice informally and intuitively, through observation and emulation of role models in clinical environments. Doing the job with respect to these things becomes second nature; the theory is embedded within them and rarely drawn out.
However in addition to the agreement about some nursing/midwifery practices, others are in fact the idiosyncratic preferences of individuals and groups; they are based upon differing degrees of experience, knowledge, argument, critique, values and beliefs. During learning and assessment processes, students come face to face with the fact that professional practice contains such complexities. Sometimes they experience this as a surprise, sometimes a disappointment, often a challenge. Students begin to realise that they cannot always take for granted that the value-bases that inform particular choices are agreed within a uniform theory of nursing/midwifery. The student must quickly learn how to read the situations they meet on placement. The diversity students face in attempting to build a coherent theory of nursing/midwifery which is comfortable in workplace settings can be summarised as follows:
• Differing 'Ideals' About Practice
For some students, dilemmas were faced when having to interpret the 'idealised' theory of practice learnt in the classroom in the light of experience of practice and resource limitations. As one student put it:
The major disappointment I've found is when you're in school obviously it's a Utopia they talk about, which is I think good, you've got to have set standards in what you want to do, and you blame yourself when you come to the real world and you've got staffing shortages, people being sick... and you know what you've been taught, research, counselling skills... all these wonderful ideas of actually sitting down and talking to people. And when you come on the wards, you really have to give ...like...second best.
This student learnt to take account of two different views of nursing, a theoretical view and a clinical view. She experienced a polarity between the 'Utopian' theoretical view, and the 'real' world of practice.
Educators were aware that students often faced this dichotomy in practice, and had to develop ways to reconcile both experiences:
The criticism as far as we're concerned is that we're trying to emphasise nursing care whereas they'll turn round and say, "Well, you know what you're saying is unrealistic, they wouldn't possibly be able to do that in (...) the ward, it just wouldn't happen." And I think that's generally quite a criticism. (...) What we actually try to say is, "Well it's an ideal situation, this is what you should be doing." What they're saying is, "There's no way, there just isn't the staff to do it..." (educator)
• Values and Attitudes Towards Client Care
It is not only the relationship between theory-as-knowledge and practice-as-action which students have to work out in the course of observing or working with clinically qualified staff. They also have to learn to resolve the conflicts between theory-as-values and practice-as-action. Students encountered many different attitudes as qualified staff and peers went about their nursing and midwifery care. They spoke of being faced with attitudes that mirrored their ideals or conflicted with their own values in practice situations:
She's the sort of midwife I would like to be. She doesn't rupture membranes and apply scalp electrodes unless there's a need for it. But the women are always given an option, everything is discussed with them, she always finds out what they want first and will always comply with their wishes as far as she can... Things like pain relief you can have a big influence on that by the rapport you establish. I've seen Kay talk to women with first babies who haven't needed any pain relief at all, which to me is pretty impressive. Whereas with other midwives it's, "I think you need an epidural," just because it's easier for them, they don't have to give all the support. (...) Some (midwives) will get in an epidural and wheel in a television so they don't have to talk to them. You just sit there and think, "Oh God, I don't want to be here."
Good and bad role models were identified by students as they experienced working with different staff members during placements. Students appeared to engage in a process of sifting out the practices and styles they wanted to emulate, and those which they found uncomfortable either working with, or pressurised into having to do because of their student status.
• Building a Knowledge Base and Rationale for Practice
Developing practical experience and exploring the theoretical basis for practice is also part of students' learning when on placement. The following example of a staff nurse preparing for the administration of intravenous medication with a student demonstrates the kind of craft knowledge that is often provided as they work together:
...when you put needles in, be careful not to push through there (...) these are designed so that you should be able to get the syringe in and draw it without losing the needle...it doesn't always work...
...then we have a single flush because we use the flush to check that the line is working...so we put a bit of drug in...and also to flush it between drugs, 'cos some drugs don't mix in the vein...
Such teaching and learning is highly valuable, and needs to be backed up with rationale and up to date knowledge. However, problems sometimes occurred when students encountered practices which conflicted with one another, or when the rationales provided by staff were inadequate. These caused confusion and did not facilitate students' learning or their development of practice:
I've been on certain wards and been told not to use a steret when you're taking bloods for a BM, for instance, because it alters the reading. And yet you can be on the same ward with another nurse who says yes you should use that for hygiene reasons. I mean I know what you should be doing because I've looked it up...I was getting so concerned about it I phoned up the diabetic liaison nurse and had a chat with her...that's the difficult bit I find, the continuity of knowledge that we're given isn't there. (student nurse)
I mean I've worked with some people who've told me to do one thing