John Schostak
Terry Phillips
Jill Robinson
Helen Bedford
1994
funded by: The English National Board for Nursing, Midwifery and Health Visiting, London
INTRODUCTION
TO THE REPORT
The issue
of developing and implementing adequate assessment strategies for nursing and
midwifery education programmes has challenged both state bodies and educators
across the world for over fifty years.
The ACE project was set up to report on current experiences of assessing
competence in pre-registration nursing and post-registration midwifery
programmes. Nursing and midwifery
have undergone rapid and far reaching changes in recent years both in initial
educational requirements and in the demands being made on professionals in
their everyday work. It is
intended that the report will contribute to current developments in educational
programmes to shape the future of the professions to meet the increasing
demands being made upon them.
Decisions
are made at every level of the professions, at national, local and in
face-to-face practice with clients that affect both the quality of educational
processes and the delivery of care.
This report intends to contribute to the quality of educational decision
making at each of these levels.
For this reason the report provides both general analyses of structures
and processes directed towards policy interests and also concrete illustrations
of the issues, problems met and the strategies employed by staff and students during
assessment events.
CONTEXTUAL
INFORMATION ABOUT THE RESEARCH PROJECT
The
Setting Up and Operation of the Research
ACE was
funded by the ENB during the period July 1991 to June 1993. It was conducted as
a joint project between the School of Education of the University of East
Anglia, the Suffolk and Great Yarmouth College of Nursing and Midwifery and the
Suffolk College of Higher and Further Education.
The focus was on the
assessment of competence of students on pre-registration nursing courses
(Project 2000 (all branches) and non-diploma) and 18-month post-registration
midwifery courses (diploma and non-diploma). The project conducted fieldwork in nine colleges of nursing
and midwifery and their associated student placement areas, in the three
geographical regions of East Anglia, London and the North East. Appendix A 1 provides full details of
the conduct of the fieldwork. In
brief, there were two phases and through a process of progressive focusing the
issues relevant to the current state of assessment of competence were explored.
During
the first phase, data were collected in all nine approved institutions to
identify issues of national importance operating in their local contexts.
Issues related to the whole of the assessment process were explored including
planning & design, assessment experiences and monitoring &
development. At the end of this
phase an interim report was written which provided a means of articulating
initial findings and of firming up the research questions for the second phase
which then directed the collection of relevant data in greater depth in a
smaller number of fieldsites. This
approach is generally known as 'theoretical sampling' which produces 'grounded
theory' (Glaser and Strauss 1967). [1]
Project
Aims
The ACE
proposal set out with the following aims:
1. To establish the effectiveness of current methods of assessing
competencies and outcomes from education and training programmes for nurses and
midwives.
2. To examine the relationship between knowledge, skills and attitudes
in the achievement of competencies and outcomes.
3. To establish the extent to which profiles from the assessment of
individual competencies adequately reflect the general perception of what
counts as professional competence.
4. To investigate the feasibility of simultaneously assessing
understanding, the application of knowledge and the delivery of skilled care.
5. To collect perceptions of the usefulness of the UKCC's interpretive
principles in helping nurse and midwife educators to assess competencies and
outcomes.
Upon
inspection it soon becomes clear that there is an overlap between each
aim. It is difficult to do one
without also doing the others.
However, they each have their individual stress.
Aim one
stresses 'effectiveness'. If a
mechanism is to be effective, then its intended event must occur.
Thus to be effective, if an assessment procedure designates someone as
being competent, then that person must actually be competent. This is quite different from concerns
with, say, cost efficiency. A
system which correctly designates 80 or 90 per cent of people as competent may still
be considered cost-efficient. It
is then a matter of the level of risk that is considered as being
acceptable. In an effective system
the level of tolerable risk is zero.
However, this may not be accepted as cost-efficient. It is the aim then of this project to
critique the assessment of competency from the point of view of
effectiveness. This has the
advantage of making both the risk and the cost or resource implications clear
in any discussion that may then take place on the issue of 'effectiveness' as
against 'cost efficiency'.
Aim two
stresses the complex interrelations of knowledge, skills and attitudes. If the appropriate competencies and outcomes are to be achieved,
then educational and assessment strategies must be attuned to the development
of knowledge, skills and attitudes. None of these are simple categories for
study. They resist the kind of
observation that is appropriate for the study of minerals. Their observable dimensions are highly
misleading and the situation is rather like the iceberg that has nine tenths of
its bulk hidden. Human behaviour
is managed behaviour. That is to
say, it is not open to straightforward interpretation. Impressions are managed by individuals
to produce not only unambiguous communications but also multiple levels of
possible interpretations and deceptions.
What counts as knowledge to one person may not be considered knowledge
at all by another. This is as true
for scientific communities as it is for lay people (Kuhn 1970 , Feyerabend 1975 ).
Again, there is no easy distinction to be made between 'knowledge' and
'skill'. Knowledge may initially
be thought of as 'theoretical' as distinct from practical action or
skills. Yet, in professional
action, knowledge is expressed in action and developed through action. To analyse professional action into
'skills' and aggregate them into lists required to perform a particular action
may well do violence to the knowledge that encompasses and is expressed in the
whole action. To see professional
action as an aggregate of skills may thus lead to an inappropriate professional
attitude. Knowledge, skills,
attitudes and the processes of everyday action may in this way be regarded as
different faces of the same entity.
It is the aim of this project to begin with the experience of
professional action through which concepts of 'knowledge', 'skills' and
'attitudes' are expressed and defined in practice.
Aim three
stresses the relationship between the assessment process and what it purports
to assess. In short, are the
assessment profiles that result from the assessment process fit for their
purpose? In order to examine this
question it is essential that 'what counts as competence' has been
identified. It may not be that
there is a single 'general perception'.
Rather, there may be a range of acceptable variation in what is
perceived to be 'competence'. This
implies a debate of some kind. One
prime intention of this project then is to describe the debate and discuss the
extent to which assessment structures and processes fit the purposes that are
currently being debated. This in
turn refers the discussion back to questions of effectiveness and of the ways
through which 'knowledge', 'skills' and 'attitudes' are being identified and
defined.
Aim four
stresses the feasibility of assessing understanding and the application of
knowledge at the same time as delivering care. Effectiveness and feasibility are closely allied: an assessment strategy must be feasible if it is to be
effective. In short, the aim is
directed towards the relationship between educational processes and care
processes. This may be seen as
presupposing a distinction between the two so that assessing would be an
additional burden to be carried at the same time as delivering care. The aim of this project is to explore
the professional process in terms of its dimensions of care and education: is the one aggregated to the other, or
are they indissoluble faces of the same coin?
Aim five
is different in kind from the preceding four. This aim has a survey dimension to it where the others are
interpretive and analytic in orientation. For ease of reference the UKCC's
interpretative guidelines are reproduced in appendix C 2. It is a straightforward matter of
asking a range of individuals in the participant institutions whether the
guidelines have been found to be useful.
Whilst the UKCC's interpretive principles acted as a focus of this aim,
it became apparent from interviewing that the inclusion of comments on the
usefulness of national guidance in general ( i.e. including ENB guidance)
provided a more comprehensive exploration of the issue. Consequently this wider perspective on
the usefulness of national guidance was pursued.
METHODOLOGY
A
Qualitative Approach for the Study of Qualitative Issues
The
project aims define the kind of methodology which is appropriate to their
achievement. For example, to
identify what counts as an effective method of assessing competencies and
outcomes, a structural analysis of cases considered to be effective is
required. Before one can begin
this, however, it is necessary to define what is to count as 'effectiveness'. This in turn requires the collection of
views as to what is to count as competence and as outcomes that signify
competence. The initial task then
is to conduct a conceptual analysis of these key terms as they are expressed in
the appropriate professions. Aim
two equally demands a conceptual analysis of the relationship between the key
terms 'knowledge', 'skills' and 'attitudes'. Once this has been established, then it becomes possible to
analyse the structural relationships between assessment procedures and
processes, and the real events in which competence is expressed as a
professional quality. With some
understanding of what is involved in the relationships between the performance
of assessment and the delivery of care then aim four can be explored. The methodology appropriate to
these aims is one which identifies those instances in which the necessary
features of the key terms are exhibited.
Through an analysis of those instances, the structures, mechanisms and
procedures through which effective assessment takes place can be identified and
described in order to facilitate future planning and design. This essentially fits the approach
known as 'theoretical sampling' [2]. It is not a quantitative approach and
thus does not result in percentages and tables which illustrate the
distribution of variables. Rather
it generates theoretical and practical understandings of systems.
The
methodology of the ACE project then, is qualitative, focusing upon structures,
processes and practices as these are revealed through documentation, interviews
and observations. A full
exploration of the methodology can be found in appendix B, but broadly, the
task has been to generate an empirical data base. By a process of comparison and contrast, key groups of
structures, processes and practices are identified as a basis for the more
formal analysis of events.
Alongside
the strategies for the gathering of data and their analysis have been
strategies to ensure the 'robustness' of the data and their
interpretation. These have
included the use of an expert 'steering group', dialogue and feedback with
participating staff and students, theoretical sampling, the application of the
triangulation of perspectives and methods, and reference to research output
from other projects. The
sensitivity of the methodology, with its emphasis on communication and personal
contact has been a feature, and attention to principles of procedure has
facilitated fieldwork relationships.
In summary, methods of
data collection were:
• In-depth interviews
(individual and group) with students, clinical staff, educators and other key
people in the assessment process. Recordings of interviews were transcribed for
analysis.
• Observation of assessment related
events in clinical and classroom settings
• Creation of an archive of
assessment related documentation from approved institutions
The result was a large
text based archive constructed from interview transcriptions, observational
notes and documentation of courses, planning groups and official bodies. The method of analysis involved various
strategies of conceptual analysis employing discourse and semiotic approaches
to try to pin down the meanings of particular key terms employed by
professional and student discourse communities. This in turn provided a means of identifying the
institutional, local and national structures necessary for the construction and
delivery of assessment. Structural
analyses could be made of particular approaches to identify the roles and
associated mechanisms and procedures through which events (both intended and
unintended) are effected. These
events in turn were then analysed into their stages, phases and process
features in order to identify what counts as professional competence in action,
in situ.
Whilst gathering and
analysing the data, it was clearly impossible to understand the experiences of
professionals and students without having grasped the contemporary changes
taking place in nursing and midwifery.
There are thus discourses of reform, of innovation and of change
(whether or not perceived as being innovations or reforms) which act as the
context for the conceptual, structural and process analyses described
above. This context is the subject
of the next section.
THE CONTEXT OF REFORM
Professional and
Educational Change in a Changing World
By 1991, when the ACE
project started its work, a number of significant changes had taken place both
within nursing and midwifery education and within the structures of the
occupational settings of nursing and midwifery. These changes formed part of a
relatively long term strategy for NHS reform which was to continue to develop
and have impact throughout the life of the project. The field of study was and still is characterised by the
complexity of wide variation with differential pace of change across both
regional boundaries and local, internal boundaries. This complexity has been further compounded by the
regularity with which new demands have been made on participating institutions
as NHS reform gathered momentum and concepts such as the regulated internal
market (DoH, 1989a) were tested and
reformulated in the light of experience. Not only has this climate had an
impact on practice and education in nursing and midwifery, but it has also made particular demands
on the research methodology. A
field of study which is in a constant state of flux and change demands the
contextualisation of any account of the assessment of competence.
The move of nurse and
midwife education towards full integration with Higher Education institutions
has added further complexity to the situational aspects of the assessment of
competence. Alongside the strategy for NHS reform there has been a parallel movement
towards educational reform which has encompassed the organisation and funding
mechanisms of all Higher and Further Education institutions (DES 1987, 1991). Studying
nursing and midwifery education during this period has therefore inevitably raised
a number of issues which speak directly to the more general issues relating to
both the impact of NHS reform and the impact of education reform.
It is the intention here
to make explicit the main areas of change which were already having some impact
at the start of the project and to describe those changes which occurred during
the study period in an attempt to set the scene for the arguments and
recommendations raised in this report.
These areas of change will have inevitably shaped ideas about what
midwives and nurses do, what is expected of them, their educational needs and
the ways in which competence is defined and assessed.
NHS Reform
The government
white paper 'Working for Patients' (DoH, 1989a) arose as part of a major review
of NHS provision and was to provide the impetus for extensive NHS reform during
the 1990s. The NHS and Community
Care Act 1990 was the statutory instrument which finally placed firmly into
legislation, reforms which were to have far reaching and on-going impact on
virtually all aspects of health service provision.
One of the central
stated arguments for reviewing NHS provision, structure and funding has been
the need to find economic and
ideological solutions to identified changes in health needs of the population. Demographic and
epidemiological trends (HAS 1982, DoH, 1989b) have created new demands on
health provision and have influenced recent moves towards a demand-led rather
than service-driven health care economy.
'Working for Patients'
attempted to address the challenge of creating provision on the basis of
population need rather than the presence of clinical expertise, by creating a
regulated internal market where Health Authorities purchase services on behalf
of their population from a range of potential service providers. The creation
of this market has rearranged local provision from a single resource into
several separate and semi autonomous units.
The period of fieldwork
undertaken in this study spanned two years of intense activity in relation to
the recommendations embedded in 'Working for Patients'. The first NHS Trusts
were approved in 1990 and throughout the study many of the clinical areas served
by colleges of nursing and midwifery had gained Trust status or had
applications in progress. This
separation of purchasing activity from
services and the division
of local provision not only
presented challenges for the management of the research especially in terms of
access to clinical areas, but was evidenced in the data in terms of concerns
about availability of student placement areas, workload of clinical staff and
the potential for even greater variation in the expectations about the outcomes
of nursing and midwifery courses.
The Changing Roles of
the Nurse and Midwife
Any change in the
demands which are placed on nurses and midwives within their occupational roles
will have an impact on what counts as professional competence and on the way in
which competence is assessed.
The Strategy for Nursing
(DoH, 1989b) responded to changes which had already occurred in service
provision and professional practice and anticipated the demands on nursing and
midwifery into the next century.
Nurses and
midwives had already faced a number of initiatives over the previous decade which would
have direct impact on their role and practice, for example the Griffiths Report
(DHSS, 1983). These initiatives suggest a
trend towards a changing ideology and value base within nursing and midwifery
and a reconceptualisation of professional role and status in relation to other
health care workers. For
midwives in particular the last decade has seen continuation of the strong
movement away from their traditionally close identification with nurses and
nursing practice. It is a clear
reflection of the dynamic and changing nature of the field of study that by the
time the ACE fieldwork was complete, a major revision of the Strategy for
Nursing had taken place to take account of other fundamental changes within
service provision (DoH, 1993).
Other ideological
changes were taking root within nursing and midwifery practice. Throughout the
1980s increasing emphasis was placed on community care (DHSS 1986, DoH,
1990). Changes
in the location of care have had
significant impact on nurses and nursing practice. Under The NHS and Community
Care Act 1990, responsibility for community care was invested in Social
Services rather than the Health Service (DoH, 1990)
and questions are being raised about both the role and competence of nurses in community
settings, and the extent to which health care should, or indeed, could be
separated from social care. This
change in location of care has created different demands not just in relation
to the skills required by nurses in community settings, but also in the demands
on nurses in hospital settings where patients require acute care over shorter
periods.
In similar vein there
has been an increasing orientation within nursing towards holistic care, the
prevention of ill health and health education. Midwives have always worked predominantly with healthy women and as a result have
perhaps been better placed to reject a sickness oriented model of care and
adopt an approach centred on health, normality and education. This trend towards a health orientation
has mirrored a national concern for health and health promotion over recent
years. The Health of the Nation (DoH, 1992)
described the government's policy and strategic targets in these areas, and
reinforced the demand for nursing and midwifery curricula which were
firmly based within a framework of health as well as ill health.
Changes have also
occurred in the delivery of care. For more than a decade the trend has been to
move away from task-based routinised systems of care to more individualised,
client centred approaches. Primary
nursing and team nursing started to spread throughout the country and the
publication of the Patient's Charter (1991)
formally introduced the concept of the 'named nurse' for each patient. It can be argued that individualised
care, primary nursing and the concept of the named nurse have contributed
significantly to a shift towards a model of nursing and midwifery practice in
which judgement, assessment, care planning and reflective critical analysis are
becoming increasingly valued role components. Where role expectations and
values shift, so too should ideas about what counts as competence and how that
competence should be assured. A
major question therefore must be, to what extent have role expectations and
values embedded in those
expectations, kept pace with changes in policy and legislation? To what extent
do practitioners, managers and educators, hold onto role expectations which
have not yet taken account of major policy shift? The implication here for the
research is to uncover and explicate the relationships between role expectation
and policy implementation in order to inform possible mismatches between the
rhetoric of assessment documents and the realities of assessment experience.
Changes within
Education
Although apparently less
directly affected by the main thrust of NHS reform, professional education has
been in the process of a fundamental transformation. Major changes were taking place within nurse and midwifery
education both in terms of the nature and content of educational programmes and
in the structure and organisation of institutions. A subsidiary paper of
'Working for Patients', 'Working Paper 10', addressed the need to separate
education provision from service units and purchasing authorities by investing
the relationship between service and education with similar market
processes. What followed was a
wholesale review of nurse and midwife education across the country and
consequent major reorganisation. At the beginning of the ACE project most
education institutions had already undergone some form of rationalisation. All approved institutions involved with
the study were the products of the amalgamation of several small schools of
nursing and midwifery, which had traditionally been located on NHS hospital
sites, into much larger colleges of nursing and midwifery. Most were therefore multi-site
institutions which were in various stages of incorporation.
Later, as the overall
intention to embed nurse and midwife education into a HE framework took shape,
colleges of nursing and midwifery were to begin the process of wholesale
integration with HE institutions. During the period of study, colleges were in
various stages of integration ranging from validation-only arrangements through
to full integration.
Clearly, given the
overall trend towards integration with HE, all fieldsites were experiencing
major upheaval in terms of both organisational structures and working
arrangements hard on the heels of one, if not more, previous periods of
re-organisation. In one college, senior staff were facing the prospect of
re-applying for their jobs for the third time in the space of two years.
Concurrent with these
various strands of organisational restructuring, fundamental changes were being
implemented to the nature of courses, most notably Project 2000 (UKCC 1986).
Project 2000 represents
a major move away from the apprenticeship style training of previous years. One
of its fundamental and over-riding stated aims is to provide nurses with the type
of preparation which will best meet the changing demands and expectations on
qualified nurses in changing contexts of health care delivery. If nurses are to cope with a working
environment characterised by its changeability and ideologically committed to the primacy of the individual,
then nurses will need new skills to be flexible and adaptable enough to manage
the unpredictability of individualised systems of care within a constantly
changing professional context.
These are the skills most frequently associated with HE. Colleges of nursing have therefore been
required to form collaborative links with HE institutions in order to develop
and validate Project 2000 courses.
The process of conjoint validation between nursing professional bodies
and HE institutions placed
different and sometimes competing sets of demands on course
assessment strategies. On the one
hand professional bodies were concerned that assessment strategies were
sensitive to the demands of professional practice and on the other the HE
institutions' concerns focused on academic credibility and the extent to which
the assessment design was adequately sensitive to intellectual competence.
Although midwifery
education remains separate from Project 2000, a number of direct entry midwifery programmes share
components with the Project 2000 Common Foundation Programmes. Even where Project 2000 has not had
such a direct impact on midwifery education, there has been a parallel trend
within midwifery to incorporate some of the more generic educational principles
of Project 2000 within their own curricula.
Project 2000 and diploma
level midwifery education are only one aspect of a broader set of educational
initiatives which challenge traditional expectations of what nurses and
midwives do, how they interpret their roles and how they should be prepared for
practice. One example is PREPP (UKCC, 1990). These initiatives imply a distinct move away from a view that nursing or
midwifery can draw on discrete, finite and stable sets of knowledge and understanding and move towards the
notion that maintaining professional competence is more to do with providing
skills for continual self development.
Central to these initiatives is the need to demonstrate evidence of
continual progression and learning in order to be considered fit and competent
to practise.
Changes to the
structure, content and philosophy of nurse and midwife education were not
occurring in isolation from wider changes which were impinging on HE and FE
throughout the period of study (DES, 1987, 1991). Subsequent legislation (DES, 1992) brought about a number of changes in the
Higher Education institutions into
which nurse and midwife education continues to integrate. These changes were heralded by the
government as:
far reaching reforms
designed to provide a better deal for young people and adults and to increase
still further participation in further and higher education.
(Lord Belstead,
Paymaster-General, Hansard, H.L. Vol. 532, col. 1022)
Changes to HE included a
new system of funding (DES, 1988) and the possibility for polytechnics to gain university status and include 'university' in their titles. The impact on some institutions was
experienced as a series of priority changes as the pace of these changes gathered momentum throughout
1992. For institutions seeking to
meet the criteria set by the Privy Council to gain university status, the main
priority was experienced as a pressure to develop, market and deliver HE
courses to increasing numbers of students. Once achieved, many 'new
universities' faced new demands for increased research activity in order to
benefit in any substantial way from the research assessment exercise which was
to determine the allocation of university research moneys.
Although the effect of
these changes on the project fieldwork was not as direct nor dramatic as the
effect of NHS reform, several colleges involved with the study had HE partners
who were undergoing fundamental changes as a direct consequence of the above
legislation. Some colleges
involved in the study started
their integration process with polytechnics which have since gained university status. For colleges of nursing and midwifery
these changes were not just about nomenclature but were also about the nature,
structure and expectations of their relationships with their HE validating body and
partner.
In summary, during the
period of study a number of pressures upon both the understanding and the
administration of the assessment of competence were in operation, which can be
categorised into the following groups:
• changes in population health needs
• values about health care and service provision
• political/ideological changes (structural changes)
• educational reform
Each category exerts its
own distinct range of changes and pressures upon individuals and groups
involved in the assessment process on both personal and professional levels,
affecting what counts as competence and the means by which it should be
assessed. Consequently this
section concludes with a selection of extracts from the data which articulate
some experiences of the changing context. Further examples can be found
throughout this report.
THE EFFECTS OF CHANGE
'ON THE GROUND'
Individual
Experiences of Change
The research examines
the assessment of competence in nursing and midwifery education within the
changing context described above. It does so from the perspective of the
individuals who deliver the service, upon whom these changes impinge directly,
but who, as members of a body which has campaigned for a considerable time
for the changes, are also the motivators of the continuing developments. As affectors
and affected, people experience change with mixed feelings, which the research
has set out to capture. For some, the effects of changes within educational and
health care environments are experienced as a continual brake on educational
planning:
The Health Authority was in a state
of flux and there was a lot of change going on. First we amalgamated with another Health Authority and then
second we amalgamated as one college of nursing with other schools of nursing. So every time you thought,
"Now we've got some ideas coming on paper," you had to stop and
re-evaluate because you got new schools joining and then you had to look at
what they were doing.
Organising and
guaranteeing a range of clinical experience for students on placements is also
difficult in some instances:
I find the clinical
areas are changing their speciality month by month. You know you have one area that's doing so and so (...) and
then you find that they're no longer doing that because some other consultant
has actually gone in there and they're doing something else. It's a constant battle, it really is. (educator)
A prevailing climate of
uncertainty makes long term planning difficult and unsettling in many
instances:
The whole future's up
for grabs. The college may become
an independent (...) it may become completely separate, someone may take on a
faculty of nursing in Middletown.
The next six months should give some indication of...politically...of
how things go. (educator)
The cumulative effect of
change was highlighted by one educator:
I think it's...not
just how it's changed, it's the speed of change. There is more coming on, you just get one set of initiatives
finished and then there's another set going through, and another set. And on top of that there's changing the
curriculum...there's changes, it's the speed of change. Change has always been there but
there's been more time to assimilate it, to take it out there to work out there
to change it. Now it's so hard to
keep up with the change and take it out there. And a lot of people out there in
the clinical field are not really sure what is going on.
Those involved in
education are keen to ensure that colleagues in patient care are kept up to
date with educational change.
Likewise the need to share understandings about developments occurring
in service is recognised, but remains a difficult task in a climate of
competing demands:
...I think our staff
here don't always recognise all the great changes that are happening in
education, they see their own changes, changes in technology, the way we're
pushing patients through, reducing patients' stays, the way we are changing our
structures and our ways of working and contracting, and income comes in and goes out. We don't get a budget any more, we have
to earn our income through so many patients we see, and they don't see that the
college have got their own stresses and strains. What the college don't see is perhaps the speed at which
we're moving forwards and the new language. I'm not convinced that my college friends really have an
understanding and grasp of the new NHS.
They have not got a grasp of contracts and earning income through
numbers of patients. (nurse manager)
The report offers a
detailed record of individual perceptions of change and provides an account of
the manner in which these have affected, and are likely to continue to affect,
the implementation and further development of structures, mechanisms, roles,
and strategies for devolved continuous assessment.
CHAPTER
ONE
ABSTRACT
Assessment in general has a
range of purposes, including the formative ones of diagnosis, evaluation and
guidance, and the summative ones of grading, selection and prediction. It is
expected to be reliable, valid, fair and feasible, and to offer what is usually
called, somewhat mechanistically, 'feedback'. The assessment of professional
competence has additionally to be able to evaluate practical competence in
occupational settings, and to determine the extent that appropriate knowledge
has been internalised by the student practitioner. Approaches to assessment
which lie within the quantitative paradigm, including technicist and
behaviourist approaches as well as quantitative approaches proper, are suitable
for collecting information about outcomes within highly controllable contexts,
and for collecting information which can be measured, or recorded as having
been observed. Such approaches are inappropriate for assessing the degree to
which the student professional has developed a suitably flexible and responsive
set of cognitive conceptual schema that facilitates intelligent independent
behaviour in dynamic practical situations. Neither do they take account of the
fact that contexts of human work themselves continue to evolve and change, and
that therefore the individual's ability to blend knowledge, skills and
attitudes into a holistic construct that informs their practice, is crucial.
Assessment from within the educative paradigm, on the other hand, does do these
things, whilst also acknowledging that assessment itself is an essential
element of the educative process. Educative assessment takes full account of
institutional and occupational norms, and of the fact that there are actual
individuals involved who are not automatons but people who interpret and make
sense in terms of their experience; its structures are generated in response to
those features rather than in contradiction of them. It offers structures,
mechanisms, roles, and relationships that reflect interior processes and take
into account the essential 'messiness' of the workplace. It does not attempt to
impose a spurious logical order on what in practice is complex. In so doing it
performs a formative function as it performs the summative one. The one does
not follow the other, but happens in parallel. Assessment from the educative paradigm is integral to the
learning process that generates individual development. Competency-based
education stands provocatively on the bridge between the quantitative paradigm
and the educative paradigm, still making up its mind about the direction in
which it should move.
THE ASSESSMENT OF
COMPETENCE: A CONCEPTUAL ANALYSIS
Introduction
The study of the
assessment of competence would seem straightforward were it not for the
considerable controversy and confusion, at every level of the system, over what
is to count as 'competence'.
One way of beginning the analysis of the 'assessment of competence' is
to ask such questions as:
• what function it serves within a symbolic system or social process
• how it is related to other elements or features
• how it is accomplished as a practical activity
What characterises human
activity is its symbolic dimension.
That is to say, it is not enough just to observe a behaviour or an
action, one has to ask what it means within a complex system of thought and
action. Key concepts are
regulative agents in a system. In
other words, they generate order, they give a pattern to behaviour such that
each element is related to each other element. Every element can be analysed for its function in the
system. Meaning, however, is
not open to inspection like a physical object. What is said is not always what is meant, and what one intends to mean is
not always what others interpret it to mean. The intended outcome of an action may have unforeseen
consequences because it has been variously interpreted, or because the system
is so complex it defies accurate prediction.
The intended outcomes of
assessment, for example, are to
ensure that certain levels of competence are achieved so that employers and
clients can be assured of the quality, knowledge and proficiency of those who
have passed. The unintended or
hidden purposes may be quite different.
For example, educationalists have long referred to the 'hidden
curriculum' and its ideological functions in terms of socialising pupils to
accept passive roles, gender and racial identities, their position within a
social hierarchy as well as social conformity and obedience to those in power. [3] Occupational studies in a range of
professions reveal that a similar social process occurs through which students
undergoing courses of education into a particular profession become socialised
into that profession's occupational culture. In studies of police training, for example (NSW 1990), police trainees talk about the gap between the
real world of practice that they experience when on placement in the field and
the lack of 'reality' of their academic studies. Similar experiences are recorded in studies of nursing and
midwifery (Melia 1987; Davies and Atkinson,
1991). It could be said then that
there are hidden processes of assessment where students are assessed according
to their ability to 'fit in' to the occupational culture. This hidden process may parallel that
of the official or overt forms of assessment. How the two kinds of process interact in the production of
the final assessment judgement is a matter of empirical study. The following chapter will set the
scene for such empirical analyses by exploring alternative approaches to
conceptualising the issues involved in the study of a) assessment, b)
competency/ competence/competencies, and c) assessment of
competency/competence/competencies.
It would be artificial to separate completely these strands in the
following sections. Nevertheless,
it facilitates the organisation of the argument to emphasise each in turn under
the following headings:
• The Purposes of Assessment
• The Professional Mandate
• Approaches to Defining Competence
• From Technicist to Educative Processes in the Assessment of Professional Competence
• Finding a Different Approach
1.1. THE PURPOSES OF
ASSESSMENT
1.1.1. From
Technicist Purposes to Professional Development
Traditionally, the
purpose of assessment is to gauge in some way the extent to which a student has
achieved the aims and objectives of a given course of study or has mastered the
skills and processes of some craft or area of professional and technical activity. The act of assessment makes a
discrimination between those who have or have not passed and further ranks
those who have passed in terms of the value of their pass. The grade or mark awarded not only says
something about the work achieved but something about the individual as a
person in relation to others and the kinds of other social rewards that should
follow. Eisner (1993) traces the relationships between testing,
assessment and evaluation from their origins in the scientific purpose 'to come
to understand how nature works and through such knowledge to control its
operations.' Through the influence
of Burt in Britain and Thorndike in America, psychological testing was founded
upon principles modelled upon the mathematical sciences. During the 1960s, however, new
purposes arose: 'For the first
time, we wanted students to learn how to think like scientists, not just to
ingest the products of scientific inquiry'. This required approaches different to the educational
measurement movements:
Educational evaluation
had a mission broader than testing.
It was concerned not simply with the measurement of student achievement,
but with the quality of curriculum content, with the character of the activities
in which students were engaged, with the ease with which teachers could gain
access to curriculum materials, with the attractiveness of the curriculum's
format, and with multiple outcomes, not only with single ones. In short, the curriculum reform
movement gave rise to a richer, more complex conception of evaluation than the
one tacit in the practices of educational measurement. Evaluation was conceptualised as part
of a complex picture of the practice of education.
(Scriven 1967)
The focus upon the formative
possibilities of evaluation drew attention to the processes of learning,
teaching, personal and professional development and the intended and unintended
functions of assessment procedures.
To address these kinds of processes, methodology shifted from
quantitative to qualitative and interpretative approaches which focused upon
the lived experiences of classrooms.
What was found there was a complexity and an unpredictability that
earlier measurement methods had overlooked.
From the mid-1970s to
the mid-1980s in America, and broadly from the 1980s to the present day in the UK,
concern was expressed regarding the outcomes of schooling. There was a general call from
politicians and employers to go 'back to basics'. This call was articulated through increasing political
demands for testing and for accountability. However, as Eisner points out, many realised that
'educational standards are not raised by mandating assessment practices or
using tougher tests, but by increasing the quality of what is offered in
schools and by refining the quality of teaching that mediated it.' In short, 'Good teaching and
substantive curricula cannot be mandated; they have to be grown.' Professional development together with appropriate
structures and mechanisms for the development of courses and appropriate
methods of teaching and learning are thus essential.
With the return to
demands for 'basics' and 'accountability' the term assessment has come to
supplant that of evaluation in much of the American literature. However, this term is new in that it
does not simply connote the older forms of testing course outcomes and
individual performance, but includes much of what has been the province of
evaluation. As Eisner concludes
'we have recognised that mandates do not work, partly because we have come to
realise that the measurement of outcomes on instruments that have little
predictive or concurrent validity is not an effective way to improve schools,
and partly because we have become aware that unless we can create assessment
procedures that have more educational validity than those we have been using,
change is unlikely.'
Brown (1990) identifies the purposes of
assessment as: fostering learning, the
improvement of teaching, the provision of valid evidence bases about what has
been achieved, and enabling decision making about courses, careers and so on. The TGAT report on assessment in the
National Curriculum for schools saw information from assessments serving four
distinct purposes:
1 formative, so that the positive achievements
of a pupil may be recognised and discussed and the appropriate next steps may
be planned;
2 diagnostic, through which learning
difficulties may be scrutinised and classified so that appropriate remedial
help and guidance can be provided;
3 summative, for the recording of the overall
achievement of a pupil in a systematic way;
4 evaluative, by means of which some aspects of
the work of a school, an LEA or other discrete part of the educational service
can be assessed and/or reported upon.
In addressing these
concerns, there are four kinds of purposes that need to be considered when
thinking about assessment:
• technical,
• substantive,
• social, and
• individual developmental purposes.
1.1.1.a. Technical
Purposes
The essential technical
purposes of an assessment procedure are reliability, validity, fairness and
feasibility in what it assesses, providing, in addition, feedback to the student,
the teacher, the institution, and the national and professional bodies ensuring the
quality of the course. To these
may be added the six possible purposes of assessment provided by Macintosh and
Hale (1976): diagnosis, evaluation, guidance, grading, selection, and
prediction.
1.1.1.b.
Substantive Purposes
The substantive purpose
of a nursing or midwifery course includes a grasp of the appropriate knowledge
bases as well as the accomplishment of appropriate degrees of practical
competence in occupational settings.
The question of who should decide what counts as an appropriate
knowledge base and an appropriate level of performance is made more complex
with the advent of local decision making.
At a local level, market forces may lead to the tailoring of courses to
meet local needs. The question
arises then, concerning how national standards, or levels of comparability can
be maintained. A professional
trained to meet the needs of one area may be inadequately trained to meet the
needs in another part of the country.
Substantive issues are thus vital to maintaining a national perspective
not only on professional education but also on professional competence.
1.1.1.c. Social
Purposes
At a social and
political level the public needs to be assured of the quality of professional
education. Assessment in this case
serves the function of quality assurance and can contribute to public
accountability. However, there are
other hidden (even unintended) social functions. To gain a professional qualification means also gaining a
certain kind of social status. It
means taking on not merely the occupational role, but the social identity of
being a nurse, a midwife. With the
role goes an aura of expertise, a particular kind of authority that can extend
well beyond the field of professional activity into other spheres of social
life. On the one hand, it can be
argued that through their authority the professions act as agents of social
control; equally, it can be argued that they act as change agents, raising
awareness of say the impact of unemployment or poverty on health.
1.1.1.d.
Individual Developmental Purposes
Work is still the
dominant social means through which people form a sense of self value, explore
their own potential, contribute to the well being of others and feel a sense of
belonging. Therefore, becoming qualified
to enter a profession marks a stage not only in the social career of the
individual but also in the personal development of the individual. Assessment
is thus, in its widest sense, about human development, purposes and action.
To become a professional
means that the individual has internalised a complex cognitive conceptual
schema to respond appropriately to dynamic practical situations. Knowledge, skills and attitudes blend
in the person to the extent that the individual's identity is bound up with
professional activity. This
is what makes both defining competence and its assessment so difficult to
achieve. It is not just that the
individual is perceived as a professional. The individual is perceived to have a mandate to act.
1.2. THE PROFESSIONAL
MANDATE
A mandate to act can be
defined at one level as having the legal power to enforce an action. A professional mandate, however, is not
limited to this. The mandate arises
because the professional has an authority, a social standing, a body of
knowledge through which change can be effected. Both nursing and midwifery have during this century
undergone changes in status and currently lay claim to a professional
identity.
As recently as 1969
Etzioni classed nursing among the 'semi-professions'. Not only is nursing perceived by many
as subordinate to the medical professions, but midwifery is also in the process of
distinguishing its own professional identity from that of nursing. Whittington and Boore (1988: 112), following their review of the literature,
identified the characteristics of professionalism as:
1. Possession
of a distinctive domain of knowledge and theorising relevant to practice.
2. Reference
to a code of ethics and professional values emerging from the professional
group and, in cases of conflict, taken to supersede the values of employers or
indeed governments.
3. Control
of admission to the group via the establishment, monitoring and validation of
procedures for education and training.
4. Power
to discipline and potentially debar members of the group who infringe against
the ethical code, or whose standards of practice are unacceptable.
5. Participation
in a professional sub-culture sustained by formal professional associations.
Nevertheless,
at first glance at least, nursing and midwifery could be said to be
increasingly able to meet the above criteria. Hepworth (1989), however, points back to the underlying uncertainty
concerning the status of nursing as a profession and the impact this has upon
attempting to assess students when assessors:
are required to assess a
student's competence to practice as a professional, when the role of that
professional is ambiguous, changing, inexplicit, and subject to a variety of
differing perspectives. The effect
of this complexity is evident in the anxiety and defensiveness which the
subject of professional judgement often raises in both the students and their
assessors/teachers, particularly if that judgement is challenged or it is
suggested that the process should be examined.
Added to this, the
British political context for all the health professions has been, is, and is
likely to be for the foreseeable future, one of considerable change where old
practices and definitions are replaced by new ones, where professionals often feel
under threat and de-skilled by innovations and their demands. In the face of such external pressure,
there has never been a greater need for both nursing and midwifery to reflect
upon their status as possessors of domains of knowledge and theorizing in order
to assert their independence and identities. However, as with any complex occupation, there is no
homogeneous, all-embracing view of 'nursing' or 'midwifery' as the basis upon
which to construct domains of knowledge.
There is rather an agglomeration of spheres each with their own views
and associated practices which broadly assembled come under the name of
'nursing' or 'midwifery' (cf. Melia 1987).
Project 2000 and direct
entry midwifery diplomas each speak to a change in what may be called their
appropriate 'occupational mandates'. This mandate includes not only the official
requirements as laid down by the ENB, UKCC and EC but also the knowledges,
skills, competencies, values, conducts, attitudes, and images of the nurse and
the midwife in relation to other health professionals that have developed
historically. These
interrelated images, ideas and experiences constitute the concept of the
competent professional.
Change cannot simply be mandated by legislation. The historically developed beliefs and
practices of a profession cannot be altered overnight. Of course legislation can force
changes. Nevertheless, these may
not be in the directions desired.
Official changes can be subverted, resisted, or glossed over to hide the
extent to which practice has not changed.
If real change is desired then it needs to be 'grown' rather than
imposed. Project 2000 and the
direct entry midwifery diploma may be seen as an attempt to grow change in the
professions. In the process,
competing definitions as to competence emerge some of which draw upon
traditional legacies, others upon official pronouncements and legal texts and
yet others upon the personal and collective experiences of practice. Accordingly, professional competence as
a concept is open to variations in definition, many of which are vague.[4]
1.3. APPROACHES TO
DEFINING COMPETENCE
1.3.1. Some
Approaches to Finding a Definition
For Miller et al (1988),
competence may be defined both as observable performance and as the underlying
capability that gives rise to it. The first is accessible to observation, the second, being a
psychological construct, is not.
However, it could be argued that the psychological construct should lead
to and therefore can be inferred from competent performance. Hence, the two
definitions of competence are compatible.
The question remains, however, how easily and unambiguously can
performance signify competence?
The breadth of definitions of competence ought to lead researchers to
some caution as to the answer to this question. Runciman (1990) draws on
two broad definitions of competence:
Occupational competence
is the ability to perform activities in the jobs within an occupation, to the
standards expected in employment.
The concept also embodies the ability to transfer skills and knowledge
to new situations within the occupational area .... Competence also includes
many aspects of personal effectiveness in that it requires the application of
skills and knowledge in organisational contexts, with workmates, supervisors,
customers while coping with real life pressures.
(MSC Quality and
Standards Branch in relation to the Youth Training Scheme)
(Competence is) The
possession and development of sufficient skills, knowledge, appropriate
attitudes and experience for successful performance in life roles. Such a definition includes employment
and other forms of work - it implies maturity and responsibility in a variety
of roles; and it includes experience as an essential element of competence.
(Evans
1987: 5)
Although these
definitions offer an orientation towards competence, neither offers sufficient
precision to be clear about how such competence can be manifested unambiguously
in performance. In order to
overcome this, one approach has been to take a strategy of behaviourally
specifying individual competencies in the form of learning outcomes and
associated criteria or standards of performance, the sum of which is the more
encompassing concept of competence.
Thus competence is seen as a repertoire of competencies which allows the
practitioner to practice safely (Medley 1984). This approach is broadly
quantitative. How these
competencies may be identified for quantitative purposes is then the next
problem.
According to Whittington
and Boore (1988), there has been little
research in actual nursing practice; thus competencies have generally either
been produced in an intuitive, a priori fashion, or have been based upon
experts' perceptions of what counts as competence (as in the Delphi[5] or Dacum[6] approaches). However, this criticism is being
addressed in the work of Benner (1982, 1983) ,
qualitative studies such as Melia (1987),
Abbott and Sapsford (1992) and the
increasing interest these kinds of work are stimulating through which an
alternative approach can be developed.
In the final section of this chapter, this alternative approach will be
discussed in relation to its implications for the development of an educative
paradigm through which competent action may be educed and evaluated.
Although it is not the
central purpose of this project to explore methods of identifying competence,
such methods have direct implications for the forms that assessment processes
and procedures take. A predominantly
quantitative approach has quite different implications than a largely
qualitative approach. Norris and
MacLure (1991) provided a summary of
approaches following a review of the literature in their study of the
relationship between knowledge and competence across the professions. For the purposes of this study their
summary has been reframed into two groups together with a minor addition as
follows:
Summary of
methodologies for identifying competence
Group A
• brain-storming/round table and consensus building by groups of experts (eg ETS occupational literature; Delphi and Dacum);
• theorising/model building based on knowledge of field/literature (eg Eraut, 1985; 1990);
• occupational questionnaires;
• knowledge (conceptual framework) elicitation for expert systems through interviewing (eg Welbank, 1983);
• knowledge elicitation through observation in experimental settings (Kuipers & Kassirer, 1984) or modelling of expert judgement via simulated events (Saunders, 1988).
Group B
• post hoc commentaries on practice by expert practitioners, based on recordings/notes/recollections (eg Benner, 1984);
• on-going commentaries on practice (eg Jessup forthcoming);
• practical knowledge/tacit theory approaches (self-reflective study) (Elbaz, 1983; Clandinin, 1985; Schon, 1985);
• critical incident survey and behavioural event interviews (eg McLelland, 1973);
• observation of and inference from practice, based on practitioner-research (eg Elliott, 1991).
[Based on MacLure & Norris, 1991: p39]
There are those, group
A, which are essentially a priori
and quantitative, seeking measurable agreements, and those, group B, which focus
primarily upon the analysis of observation and interview accounts. Group A tends towards the quantitative
paradigm, whereas group B tends towards the qualitative paradigm. In group A, it could be argued that
expert panels include a high degree of qualitative material and in group B the
McLelland approach results in quantitative criteria. Qualitative approaches do not necessarily exclude the use of
quantitative techniques and quantitative approaches frequently depend upon
'soft' or subjective approaches to develop theory for testing. The difference, in each case, is a
difference of value and purpose, the essential difference being that
quantitative approaches more highly value measurement and explanation, whereas
qualitative approaches tend to value more highly meaning and
understanding. The former tends to
reinforce technical (and in the extreme, technicist) approaches to training and
assessment, whereas the latter tends to reinforce the development of personal
and professional judgement. In
this latter approach, both training and assessment demand the provision of
evidence of critical reflection on practice in which appropriate judgement has
been the key issue. Since
judgement is context and situation specific it cannot be reduced to behavioural
units but can be open to public accountability through the discussion of
evidence.
The issue for assessment
concerns the nature of the
domain(s) of knowledge and theorizing relevant to the sub-spheres of practice
that is possessed by competent nurses and midwives and which is essential to
marking them out as professions. It may be an argument for their status as
emergent professions rather than as fully fledged professions, that much of
their knowledge is held implicitly or tacitly. Or, it may be that such tacit knowledge is characteristic of
any profession. In either case,
what approaches to the identification of competence and its assessment are
appropriate to such complex fields of action?
1.4. FROM TECHNICIST
TO EDUCATIVE PROCESSES IN THE ASSESSMENT OF PROFESSIONAL COMPETENCE
1.4.1.
Differentiating Quantitative and Qualitative Discourses
A formal assessment
process requires both a social apparatus of roles, procedures, regulations, and
also a conceptual structure adequate to generate evidence upon which to base
judgements on student achievements.
A structure to make this happen can be logically, even scientifically,
formulated. However, the actual
events that take place as a result may not always be those expected. Events are contingent whereas structures
may be rationally determined or legally imposed. Where a role may be rationally defined and related to other
roles in a clear structural pattern, the individual who occupies that role is
contingent in the sense that it is the role that is necessary to the structure
not the individual. Each
individual who could occupy a given role
brings different individual needs, interests and aspirations as well as
abilities, values and experiences which frame how in practice the role is
interpreted and realised. Broadly,
the assessment process can be analysed according to such structural and
contingent aspects. In the
development of an assessment structure, the issue is whether the structural
aspects are to be imposed upon, or to be derived from actual practice.
In one sense, the
process of assessment can be read as an attempt to impose a logical order on
the 'messy' reality of actual practice.
Its purpose would be to control or regulate processes through well
defined mechanisms and procedures to produce outcomes which ensure some
comparability and to assure certain standards of quality or attainment. In the second sense, assessment structures,
mechanisms and procedures are seen as outcomes generated by reflective feedback
on practice. Through
reflection structural or common features of practical competence are identified
but not to the detriment of specificity, difference and variety. The second is thus sensitive to the
dynamics of situations in a way that the first is not.
Generally speaking, quantitative
methods are typically employed in
approaches which seek to control and hence compel the adoption of a certain
kind of structure. The
alternative approach which seeks to generate (or grow) structure based upon
reflection upon practice, places at the centre of its arguments concepts of
'value', 'meaning' (as distinct from observable and measurable units),
attitudes, judgement and other personal qualities - a qualitative
approach. The latter approach thus
places human action, reflection
and decision making at the centre of its discourses whereas the former
replaces the human decision maker by instruments which are constructed to measure
or calculate and thus reduce 'human judgement' which is seen as a source of
potential error.
Each has quite different
implications. Firstly, there are
implications both for the way education to enter the professions is organised,
and also for the legitimation of and status of a profession in relation to its
client groups and its employers.
Secondly, there are implications for the principles, procedures and
techniques of assessment. For the
sake of convenience, the first group of discourses about competence will be
referred to as the quantitative, and the second as the educational. The term educational or educative is
chosen so as not to reinforce the easy opposition between mathematical
approaches and qualitative approaches in the social sciences. Where a quantitative approach in the
interests of 'objectivity', may seek to exclude discourses of value, judgement
and human subjectivity, an educational approach values all the power of
precision that mathematics and logical forms of analysis can contribute to the
full range of human discourse, judgement and action. In this sense, the educative approach is inclusive and
action centred, whereas the quantitative approach is exclusive.
1.4.2. The
Quantitative Discourses
(With Particular Reference To
The Behavioural and Technicist Variants)
Assessment should not
determine competence, but rather competence should determine its appropriate
form of assessment. How competence
is defined depends upon the methods, beliefs and experiences of professionals. Such definitions can be revealed
through the kinds of texts they produce and the ways in which they talk about,
support and contest meanings of competence. The definitions that emerge or can be drawn out (educed)
from the range of texts and discourses provide accounts of how practical
competence is seen. Within these
discourses, it is frequently the case that quite distinct, even mutually
exclusive views can be described.
To mark such distinctions the term paradigm is often used.
The term 'quantitative
paradigm', as employed here, refers to those discourses of science which
involve throwing a mathematical grid upon the world of experience. Logical deductive reasoning,
measurement and reduction to formulaic expressions are its features. The technicist paradigm, as employed in this report, is a particular version of the more general quantitative paradigm, which seeks measurement, observable units of analysis and logical arrangements. The technicist
paradigm is reduced in scope in that it takes for granted its frameworks of
analysis and its procedures and employs them routinely rather than subjecting
them to the judgement of the practitioner. Although this characterisation is an 'ideal
type' it has a basis in the data.
Later discussions will report the sense of frustration some assessors
and students feel in filling out assessment forms, in trying to interpret the
items in relation to practical experience and in accordance with their best
judgement. The typical complaint
may be summed up as being that the key dimensions of professionality cannot be
reduced to observable performance criteria[7].
The aims of the
technicist paradigm can be seen most clearly in the 'scientific management' of
Taylor (1947[8]), with its developments in stop-watch measurement of performance, and in the behaviourism of Watson (1931). Here the emphasis
was upon control and predictability through measurement and the reinforcement
of appropriate behaviours to produce desired outcomes.
In making such a
reduction, the technicist paradigm reinforces a split between theory (or
knowledge) and practice by separating out the expert who develops theory
(knowledge) from the practitioner who merely applies theory that can be
assessed in terms of performance criteria. Also implied in this is a hierarchical relation between the expert and the non-expert
whether seen as practitioner or trainee.
In addition, within a quantitative/technicist paradigm, skills
assessment models, and competency based education each assume the student
initially lacks the required skill or competence. Through training a student then acquires the particular
skill or competence required. A
particular combination or menu of such skills or competencies then defines the
general competency of the individual.
This is most clearly expressed by Dunn et al (1985:17) in a medical context:
... competence must be
placed in a context, precise and exact, in order for it to be clear what is
meant. To say a person is
competent is not enough. He is
competent to do a, or a and b, or a and b and c: a and b and c being aspects of a doctor's work.
The essential
'messiness' of everyday action, the complexity of situations, the flow of
events, and the dynamics of human interactions make the demand for a context
which is 'precise and exact' unrealistic.
The operationalisation of such an approach is exemplified in the
programmed learning of Gagné (see Gagné and Briggs 1974) or in the exhaustive and seemingly endless lists of Bloom (1954, 1956). The issue raised at this point is not
about the value of analysing complex activities and skills, but the use to
which such analyses are put in everyday practice.
Schematically the
relationship between the quantitative view, the behavioural and the technicist
can be set out as follows:
(Figure 1)
The diagram represents
the decreasing scope from quantitative to technicist which moves from a
systematic method of investigating and comprehending the whole world of
experience open to thought, to the reduction of scope to observable behaviours
(as opposed, say, to felt inner states) and finally the reduction of methods and
knowledge for limited purposes of social control or the engineering of
performance. Thus, in general
terms, as Norris (1991) comments, discourse
about competence:
has become associated
with a drive towards more practicality in education and training placing a
greater emphasis on the assessment of performance rather than knowledge. A focus on competence is assumed to
provide for occupational relevance and a hardheaded focus on outcomes and
products. The clarity of
specification, judgement and measurement in competency based training indicates
an aura of technical precision.
The requirement that competencies should be easy to understand, permit
direct observation, be expressed as outcomes and be transferable from setting
to setting, suggests that they are straightforward, flexible and meet national
as opposed to local standards.
This requirement can be
seen in the three dominant approaches in the quantitative paradigm to assessing
practical competence:
• Minimum Competency Testing (MCT)
• Competency Based Education (CBE)
• National Vocational Qualifications (NVQs)
Each will be discussed in turn in the sections which follow.
1.4.3. Minimum
Competency Testing (MCT)
When some notion of a
golden age when life was simpler is held by policy makers, forms of assessment
can be seen as tools to engineer this state. A particularly reductive form of the behavioural
approach is to be seen in Minimum Competency Testing which exemplifies a
minimalist version of the technicist paradigm.
According to Lazarus
(1981:2)
Minimum competency testing is an effort
to solve certain problems in education without first understanding what the
problems are. In medical terms,
minimum competency testing amounts to treating the symptom without paying much
attention to the underlying ailment.
Here the major symptom is a number of high school graduates who cannot
read, write, and figure well enough to function adequately in society. No one knows how many there are, though
they certainly constitute a small fraction of all high school graduates. The treatment for this symptom? Test all students in the basic skills
of reading, writing and arithmetic.
Some states go further; they make receipt of a high school diploma
conditional on the student's passing the test.
Whether such a diploma
sanction applies or not, minimum competency testing is precisely what the name
implies: a programme to test
students in terms of, and only in terms of, whatever competencies state or local
authorities have decided are the minimally acceptable result of an education.
As Lazarus goes on to
point out, MCTs feed the test construction industry which in turn 'impede
nearly all attempts at educational reform' (p.9). This is because a considerable investment is placed into
the construction of tests and thus the investment has to be recovered through
sales. Once a test is in place, it
defines the curriculum. The
curriculum cannot be radically changed without changing the test and the test
is concerned only with outcomes, not processes.
By emphasising outcomes
rather than processes, schools and colleges become learning delivery systems,
where instruction, as Lazarus points out, is an analogue to manufacture (p.
13), aimed at a well defined market.
Glass (1977) had already questioned how such standards could be set. Similarly, Norris (1991) comments:
If the assessment of
competence presents difficulties of standards setting this is in part because
the relationship between standards and good practice or best practice is not at
all straight-forward. Like theories
standards are always going to be empirically under-determined. What is worrying is the extent to which
they are not empirically determined at all, but are rather the product of
conventional thought. Even if this
were not the case the pace of economic and social change suggest that standards
once set might quickly become obsolete.
Competency based
education (CBE) seemed to offer an alternative to MCTs.
1.4.5. Competency
Based Education (CBE)
Competence should not be
equated with behavioural definitions (cf.
Grussing 1984)[9]. In practice, the relationship between a
test outcome and the real competencies involved in the cultural application of
a particular complex skill may be tenuous. Competence in everyday life can be defined in terms of a
vast range of changing contexts, needs and interests that defy any attempt to
formulate a minimum set.
Competency Based Education seeks to address the legitimate concern to
ensure that professionals are actually safe and competent to practice not by
focusing upon minimum standards but by seeking to ensure agreed objectives are
met. These agreed objectives may
be lent weight through drawing upon panels of expert opinion. However, such approaches do not
overcome central objections. Firstly, the drive towards consensus to which the Delphi and Dacum approaches are subject filters out the full range of alternative views. Secondly, there is no guarantee that such approaches do not merely reinforce folklore and prejudice. Thirdly, as Benner (1982) points out, it is to be doubted that the appropriate testing technology can actually service the expanded requirement.
Fullerton et al (1992) describe their own approach of constructing criterion-referenced essay exams. Still, its main focus
is upon producing standardisation across markers rather than upon the nature of
competency itself and the relation between competency, the form of assessment,
and the process through which formative evaluations can be made in areas of
clinical practice. As such, it is
a sophisticated form of the technicist approach and one which does not meet
Benner's doubts.
Thus the inherent danger
of student assessment which follows CBE approaches is that it glosses over
central methodological questions to do with the definition of standards. It also glosses over issues concerning
what the student knows as distinct from how the student performs. Ticking off an objective achieved is
not equivalent to probing the extent to which the student knows and
understands. Such issues are
often glossed over because they are either considered too hard, or too
philosophical and thus impractical.
For example, Hepworth (1989) indicates the existence of such problems but then explicitly sidesteps them, writing in a parenthesis that 'it is difficult to see how' such a philosophical
discussion of the nature of knowledge 'could provide nurse educationalists with
the practical support which is needed now'. However, such a discussion engages directly the alternative
paradigms concerning what counts as knowledge of competence through which assessment can take
place. Choice of paradigm has
vital practical implications concerning what is or is not taken into account in
the assessment procedure.
The temptation is to slip towards a technicist view which seems to speak
directly to the control, surveillance and measurement of performance without
having to consider how performance relates to knowledge, understanding and the
development of professional judgement.
In short, the choice affects not only the way in which data is collected
about a student and upon which pass/fail assessments are made but also what
counts as data.
By not engaging in such
a discussion, Hepworth and others, while being aware of alternative methods and their associated problems, do not possess a sufficient framework for
development. The assessment of
students in many ways is an unsatisfactory game of how to fit the assessor's professional judgement of the student into the appropriate boxes. In this sense, the assessment
categories are interpreted in the light of background knowledge concerning
'competence' and concerning the student.
This background knowledge may be neither very deep nor made explicit.
The weaknesses of the
CBE approach were explored in a three-year project led by Benner (1982). It was found that:
These test-development
efforts, however, were hindered by the lack of adequate methods for identifying
competencies and the lack of adequate pre-existing definitions of competency in
nursing. Most efforts to identify
competencies in nursing, to date, have been based on expert opinion rather than
on observation, description, and analysis of actual nursing performance. Thus, identification of competencies
and evaluation of competency-based testing for nursing was undertaken.
In order to pursue the
project they undertook their own identification of competencies and consequent
construction of a method of assessment.
Competency based education, if it is to be of more than ritualistic use,
must attempt to predict successful performance in work roles post
graduation. However, competency must be distinguished from
other work-related qualities an individual may have. Is an attitude a competency? What is the relation between a skill and a competency? Is insight a competency? Rather than attempt to make a wide ranging
set of distinctions at this point, it is useful, at least, to refer to
Benner's distinctions between a
basic skill, attainment and competence:
A basic skill is the
ability to follow and perform the steps necessary to accomplish a well-defined
task or goal under controlled or isolated circumstances. In attainment, the desired effects or
outcomes are also judged under controlled circumstances. Competency, however, is the ability to
perform the task with desirable outcomes under the varied circumstances of the
real world.
She provides the
following summary of frequently cited elements of competency-based curriculum
and testing:
(1) identification of competencies in
specified roles, based upon observation and analysis of actual performance in
real situations; (2) relating the identified competencies to specific outcomes;
(3) establishment of a criterion level for the competence; and (4) derivation
of an assessment strategy from the competency statement that is objective and
predictive of competent performance in actual performance situations.
This clear summary
statement is essentially programmatic.
To accomplish the programme is complex and difficult. Benner details six major
reasons for the difficulty. The
following is an interpretation of these:
1. There is an absence of well defined behaviour domains in nursing. Nursing possesses few identifiable outcomes, since these are largely dependent upon situationally specific interactions, and research and development efforts have been limited.
2. There is confusion between competence as denoting actual success in a real setting and objectives that seek to enable a student to improve a particular skill without being placed into a real situation possessing a particular goal and context.[10]
3. All tests have
problems with predictive validity.
This is particularly so where the behavioural domain is not well defined
as in the case of problem solving and clinical judgement.
4. Skills relating
to the building of working relationships are not only the most important but
also the most difficult to test - e.g., empathy, ability to relate to others.[11] Assessment is only possible in
realistic situations.
5. The creation of
lists of behaviours listed in incremental steps associated with a task excludes
the inherent meanings of the whole performance comprised of a related set of
tasks. There is an absence of
guidelines concerning priorities or the relative importance of tasks.
6. The creation of
formal lists and sub-lists in task analysis at best may well be an infinite
process, at worst an impossible mission.[12] Indeed, it overlooks the way in which
specific situations demand a meaningful organisation of responses, not simply
the reiteration of procedures.
1.4.6. National
Vocational Qualifications (NVQs)
Burke and Jessup (1990:194) diagrammatically represent the approach as follows:
NVQs, on this
model, seek to combine a wide
range of methods to construct an evidence base. At first sight it may seem to offer a step beyond the
quantitative paradigm in that it appears to employ forms of assessment not
easily reducible to measurable entities (essays, assignments, simulations,
reports) alongside those that are (multiple choice questions, skills tests and
so on). While the evidence base so
constructed is richer than the other methods, it shares with them the basic
orientation of imposing a pre-determined, standardised structure upon
occupational practice. For
example, although performance evidence is constructed from 'natural observation
in the work place' it is already framed within pre-determined categories of
'Elements of competence with Performance criteria'. It is not a structure that is 'grown' from reflection upon
practice. It thus is subject to
similar criticisms as Benner lays against competency based education in
general.
1.4.7. SUMMARY
Technicist and
behaviourist approaches to the assessment of competence are predicated on the
notion that predictability of outcome is possible in human activity. They
assume situations sufficiently controllable to enable learning to be measured
in terms of pre-specified outcomes. It is, however, unrealistic to expect
contexts to remain stable (i.e. unchanging) and equally unrealistic to believe that the only outcomes of a specified action will be the intended ones. The contexts of human interaction are, in any case, essentially 'messy', requiring judgement as much as knowledge and technical skill; judgement is not obviously amenable to assessments which look only for what is directly observable, and can be either measured or 'ticked' as having been observed. Behind assessment
operated according to the quantitative paradigm, there is a desire for
accountability, but also for an easy way of identifying strengths and
weaknesses so that reinforcement can be given to maximise the chance of
achieving a desired outcome. There are three major problems with this paradigm
as a means of assessing competence in nursing and midwifery. Firstly, it splits the 'expert' theorist from the practitioner, who becomes the person who applies theory that can be
assessed. Secondly, it places greater emphasis on the assessment of performance
criteria than it does on the assessment of knowledge. Thirdly, it fails to take
any account of the complexity and dynamism of human interaction and organisational
processes.
1.5. FINDING A
DIFFERENT APPROACH
1.5.1. Towards
Alternative Paradigms
The alternative paradigm
begins with actors as agents in their own definitions of and approaches to
competence and its assessment.
Appropriate structures with their mechanisms and procedures to produce
desired outcomes are developed by reflection upon work place practice. Such structures are continually
negotiated and redefined because work is both dynamic and situationally
specific.
Light is increasingly
being thrown upon these structures and processes by qualitative research which
has focussed in particular upon the unintended or hidden processes involved in
occupational socialisation and learning.
For example, Woods' (1979) studies of
pupils negotiating workloads with teachers, albeit in schools, are relevant in
alerting researchers to how students negotiate what they consider to be
appropriate workloads in classrooms and clinical settings and appropriate tasks
for assessment. Davies and
Atkinson (1991) have identified a number of
student midwife coping strategies.
The particular students were already qualified nurses who had the added
problem of coping with a return to student status. Such coping included 'doing the obs' (that is, observations)
which organised their time and allowed them to 'fit in'. It included avoiding certain staff, or
'keeping quiet'. In short,
students learnt to manage the kinds of impressions that they were giving to
their assessors and other key staff. These may be referred to as student competencies. Having spent many years in student
roles (whether in school or in college, or clinical situations) most are
experts or at least highly proficient in such roles. Some students are very sensitive to and readily pick
up on the cues that staff provide concerning what is or is not acceptable to
them. Others are cue-deaf.
Clinical practice for
students who have no prior clinical experiences is itself a phase of
socialisation into work practices.
Workplace cultures have their own idiosyncratic practices as well as
drawing upon wider, more general professional belief systems, formal and
informal codes of conduct (cf. Melia 1987).
Students thus have to juggle not only their developing understandings of
workplace cultures but also the academic or educational definitions.
Such studies indicate
the complexity of the learning process within which assessment takes
place. There are quite distinct
kinds of competency, which include:
• competency as defined by course and/or official statements
• competency as defined by assessment documentation
• competency as defined by occupational cultures
• student competency to negotiate and manage impressions, workloads and expectations
• tutor/mentor/assessor competency to impose/negotiate practice
These are not meant to
be exhaustive but rather illustrative of what may be involved in what seems at
first sight a simple act of assessment.
Alternative paradigms
attempt to engage with work practices and social and educational interactions
rather than impose upon them. The
focus is not on the aggregation of elements, but upon processes, relations, and
meanings, that is, upon selves in action.
Norris (1991), in addition to the
behavioural approaches described above identifies two further views or
constructs of competence: the generic and the cognitive. Generic competence 'favours the
elicitation through behavioural event or critical incident interviewing of
those general abilities associated with expert performers'. Cognitive constructs have
reference to underlying mental structures through which activity is
organised. However, these
alternatives do not exhaust the possibilities. One could refer to theories where intuitive relationships
are formed through a combination of experience, intelligence and
imagination. One may ask to what
extent competence is some product of personality, linguistic habits of thought
and discourse repertoires.
Such alternatives attempt to grapple with the complexity of those
processes through which expertise is accomplished. They mark the difference between painting by numbers and
painting from life.
Benner (1982, 1983), drawing on Dreyfus and Dreyfus (cf. 1981), developed a model of skill acquisition which falls within a cognitive approach.
She postulates five stages towards expertise: novice, advanced beginner,
competent, proficient, expert.
What is appropriate for the student nurse, and particularly for the
undergraduate as opposed to the Project 2000 diploma nurse or non-Project 2000
nurses?
The Benner model assumes
not simply a progression but a qualitative transformation between the way an advanced beginner operates and the way a competent nurse operates, and then a further
qualitative transformation in the move towards proficiency. Benner (1983:3) focuses her analyses of nursing upon actual practice situations. The differences:
can be attributed to the
know-how that is acquired through experience. The expert nurse perceives the situation as a
whole, uses past concrete situations as paradigms, and moves to the accurate
region of the problem without wasteful consideration of a large number of
irrelevant options (...). In
contrast, the competent or proficient nurse in a novel situation must rely on
conscious, deliberate, analytic problem solving of an elemental nature.
Such expert knowledge
while not amenable to exhaustive analysis can be 'captured by interpretive
descriptions of actual practice' (p.4).
The task is to make the 'know-how' public. There are six areas of such practical knowledge identified
by Benner:
(1) graded qualitative
distinctions[13]; (2) common meanings;
(3) assumptions, expectations, and sets; (4) paradigm cases and personal
knowledge; (5) maxims; and (6) unplanned practices. Each area can be studied using ethnographic and
interpretative strategies initially to identify and extend practical knowledge.
(p.4)
These areas are common
to most professional action. Such
action is not bound by exact mechanical procedures, rather it is framed by
judgement, and appropriate actions are dictated by the specifics of the
situation. Thus the interpretative
strategies employed by experts rather than the procedures become the main focus
of analysis. A procedure may be
competently, even skilfully executed but if it is not appropriate, it will
fail. The vital element is
judgement.
Benner reports studies by Herbert and Stuart Dreyfus (1977). The example given is of
undergraduate pilots who had been taught a fixed visual sequence to scan their
instruments. The instructors, while issuing the rules, were found not to follow them. Because they did not follow their own rules, they were able
to find errors much more quickly.
Actual practice and official procedures may diverge radically. Indeed, in some circumstances following
the rules may be dangerous. If the
practice of expert practitioners in nursing and midwifery is under researched
as many suggest, then upon what is competency based assessment founded?
Ashworth and Morrison (1991) argue that competency based assessment is inappropriate because:
assessing involves the
perception of evidence about performance by an assessor, and the arrival at a
decision concerning the level of performance of the person being assessed. Here there is enormous, unavoidable
scope for subjectivity especially when the competencies being assessed are
relatively intangible ones.
Moreover, the specification of assessment criteria in competence is
unlikely to affect the problem of subjectivity.
Does the approach by
Benner offer an alternative method of assessment? Rather than trying to exclude subjectivity, the approach
actively involves the subjective experiences of experts in trying to access and
build up a body of 'know-how' which can then form the basis for inducting
novices into expert practice. The
alternative paradigm offered here rests upon being able to access the expertise
of the expert. Ethnographic or
qualitative forms of research methodology are argued to be the appropriate
methods. While these methodologies
can provide a detailed data base of professional practice, they do not in themselves provide a method of teaching, a curriculum, or a method of assessment.
Benner (1982) provided
illustrative examples of what would be the basis of such a curriculum and later
(1983) provided more detailed analyses of appropriate nursing
competencies. These point the way
towards an educative paradigm.
Benner (1983) employs a concept of
experience narrower than the commonsense use of the term but appropriate for
the development of expert knowledge.
For her, experience 'results when preconceived notions and expectations
are challenged, refined, or disconfirmed by the actual situation.' It is therefore, as she says, 'a
requisite for expertise'. This
formulation contains within it the basis for an educative approach in the development
of nursing knowledge. It is
important for a nurse to be able to read a situation and make this explicit:
The competent nurse in
most situations and the proficient nurse in a novel situation must rely on
conscious, deliberate, analytical problem solving, which is elemental, whereas
the expert reads the situation as a whole, and moves to the accurate region of
the problem without wasteful consideration of a large number of irrelevant
options. (p. 37)
There are two implications (cf. Meerabeau 1992). The first is for the development of nursing theory, where
knowledge is derived from practice;
the second is for nursing education. Although these implications do not yet adequately take into
account sociological dimensions to the construction of competence as a social
phenomenon there are nevertheless important implications for assessment which
rest upon the nature of the evidence that must be collected and recorded in
order to ground judgements. If
increasing professionality depends upon the richness and variety of experience,
then assessment should be directed towards not only the student's performance
at safe levels of practice, but also the student's knowledge of situations as
evidenced through an ability to record, analyse and critique an appropriate
range of their own clinical experience and set this into relationship with the
clinical experience and practice of others. The task for the student is to make explicit personal and shared understandings
through which action takes place in given situations in ways that are open to
critique and are knowledgeably argued.
This method does not abstract procedures from situations but sets them
meaningfully into relationship with personal and shared experience. It is the role of knowledge and
evidence relating to performance and technical accomplishment that becomes
critical in an educative paradigm.
1.5.2. Towards the
Educative Paradigm
The educative
paradigm depends upon a structure
of dialogue through which competent action, knowledgeability and their
evaluation are educed (or drawn out) by students and staff reflecting together
upon evidence.
Evaluation/assessment under this paradigm seeks to inform the decision
making of all parties (student, assessor, teacher, ward staff, clients).[14] The approach does not exclude
quantitative and analytical approaches but employs them within a relationship
that focuses analytical and critical reflection upon performance in clinical
areas.
As discussed in the
previous section, the background interpretative strategies of the practitioner
are what distinguishes the novice from the competent and from the expert. Since the tacit knowledge of the
expert is not freely available to the student, a process is required which sets
students and staff into educational relationships. The educative paradigm bases teaching and assessment upon
identifying the repertoires of interpretational strategies available to the
practitioner. In practice this
means that the student and the staff adopt a standpoint of mutual education
which involves taking research and inquiry based strategies to make explicit the
assumptions, the values, the rationales for judgement, the case-lore built of
memories possessed by the expert.
The educative approach sets theory and practice into a different kind of
relationship to the traditional separation of the two. Theory building and practice become the
two sides of the same action.
Rather than a division
between academic and professional competence as implied by crude distinctions
between theory and practice, it could be argued that to be a professional
requires students to have the special competence to inform practice through academic
reflection. This is the
perspective of the reflective practitioner (Stenhouse 1975, Dreyfus 1981, Schon
1983, 1987, Elliott 1991, Benner 1982, 1984)
where theoretical knowledge, far from being developed independently of
practice, is grounded in the experiences of practitioners who test theory
through practice and broaden their practical frame of reference through
principled application of that theory.
Such an approach is founded upon a notion of the mutuality of theory
and practice which entails the
modification of theory through practice and the modification of practice
through theory. It is an approach which demands an appreciation of professional
identity which places research at the heart of professional judgement and
action. To come to any worthwhile conclusions about the achievability of
excellence in both academic standards and professional competence, an
evaluation must be able to examine the nature and quality of judgements, and
gain access to students' reflections in both the clinical and the classroom
environment.
Eisner (1993) proposes the following criteria for new forms of assessment:
• The tasks used to assess what students know and can do need to reflect the tasks they will encounter in the world outside schools, not merely those limited to the schools themselves
• The tasks used to assess students should reveal how students go about solving a problem, not only the solutions they formulate
• Assessment tasks should reflect the values of the intellectual community from which the tasks are derived
• Assessment tasks need not be limited to solo performance
• New assessment tasks should make possible more than one acceptable solution to a problem and more than one acceptable answer to a question
• Assessment tasks should have curricular relevance, but not be limited to the curriculum as taught
• Assessment tasks should require students to display a sensitivity to configurations or wholes, not simply to discrete elements
• Assessment tasks should permit the student to select the form of representation he or she chooses to use to display what has been learned
These criteria may
provide a point of departure to construct principles of educative assessment
for the professions. Although it
is not the object of this report to complete such a task, in the succeeding
chapters further insights will be explored that may contribute to such an
endeavour (see in particular chapters 8, 9, 10).
1.6. THE EDUCATIVE
PARADIGM: A SUMMARY
There is a commonly held
but spurious distinction between theoretical and practical knowledge. It is a distinction reinforced - albeit
unintentionally - by assessment methodologies employing quantitative discourses
which adopt behaviourist and/or technicist strategies. Theory and practice become two sides of the same coin in approaches which see knowledge and theory production as being generated through action, and reflection upon the effects of action.
It has been argued that
the latter approach more closely fits the needs of contemporary demands being
made upon assessment. Assessment
is increasingly being seen as having multiple purposes that are integral to the
educational process and not additional to it.
Brown (1990) identifies the following trends in assessment:
• a broader conception of assessment which fulfils multiple purposes
• an increase in the range of qualities assessed and the contexts in which assessment takes place
• a rise in descriptive assessment
• the devolution of responsibilities for assessment
• the availability of certification to a much greater proportion of young people
The first four of these
are clearly relevant to nursing and midwifery assessment, and to them may be
added:
• the assessment of professional judgement
• the assessment of professional action and problem solving
• the assessment of team participation
• issues of professional and cross-professional dialogue and communication
• issues in the generation of learning environments
There are here issues both of cross-disciplinarity and of what counts as knowledge. In particular, assessment must address
itself to the question of what constitutes the domain of professional
knowledge. Assessment thus becomes
integral to the whole question of the development of a professional mandate.
CHAPTER TWO
ABSTRACT
Texts (i.e. written statutory guidelines, approved institution documentation, etc., and spoken advice) about competence and its assessment are interpreted in relation to situationally-specific, 'real-life' nursing and midwifery events. As they work
in their particular clinical location, individuals strive to understand
official regulations in relation to their career-long personal experience. Institutions
are made up of such individuals and consequently organisations themselves are
dynamically evolving 'learning institutions'. They are places where the meaning
of competence is continually being developed and better understood, and where
strategies for implementing devolved continuous assessment are changing in
response to that new understanding. Institutions which have (at best) a
well-established culture of reflective practice or (at least) an embryonic one
are in a strong position to support the introduction of the new
responsibilities and roles necessary for this implementation; responsibilities
and roles which incorporate
reflective practice at their very heart. Where the culture is less
established, the guidelines and regulations are experienced as impositions and
resistance occurs. Individuals are less able to adjust their concepts of
competence, construing knowledge, for instance, as a sort of tool kit of
information containing separate items which are useful in themselves but are
rarely used as a set because the job for which they are to be used is not yet
clearly identified. It is only through the experience of working with concepts
of competence and addressing the issue of what counts for assessment purposes,
that coherence is achieved. In the course of work people 'test' the concepts
that frame the statutory texts, through their practice itself and the accounts
they share. Concepts of competence are, then, defined from a multi-dimensional
perspective and developed through individual reflective practice within a
reflexive institutional culture. Competence, meanwhile, is assessed by strategies that are adapted in operation to accord with the historically embedded perceptions of the purpose of assessment that are peculiar to the particular institution. The national strategy for devolved continuous assessment must,
therefore, take account of the variety and range of perceptions and
institutional cultures. It must encourage the development of structures for
principled movement towards a form of assessment that takes account of the
global needs of the profession and the local needs of individual institutions.
THE ASSESSMENT OF
COMPETENCE: THE ISSUES DEFINED IN PRACTICE
2.1. COMPETENCE
Introduction
This chapter explores
the issues identified by staff and students in their reflections upon and
experiences of the assessment of competence. It is divided into two broad sections. The first section
draws upon the ways in which practitioners and students define competence in
practice. The second draws upon
staff and student experiences of assessment practices as they relate to the
traditional forms of assessment and the new demands on them made by devolved
continuous assessment.
For some of the
students, practitioners and educators involved in the study, it was obvious
that they had previously given much thought to the concept of competence and
how it should be assessed, but even then had only reached provisional
understandings. For many, articulating their thoughts on competence and its
assessment was a difficult activity.
Their discussions during interview appeared to be first attempts at
exploring tacit and unarticulated beliefs. What is described in this chapter,
therefore, is the range of developing understandings of competence that people
work with on a daily basis. They are separated for convenience in creating a
description, but because they are in a state of becoming, they flow
into each other in practice.
2.2. HOW PEOPLE
DEFINE COMPETENCE IN PRACTICE
2.2.1 Developing
Conceptual Maps in Practice
It is important to
distinguish between definitions of competence and assessment within academic
debates and official texts, and understandings and beliefs concerning
competence that arise during the course of practice. The discussions in chapter one provided conceptual maps
which arise within different academic or scientific paradigms. In practical
day-to-day affairs, understandings concerning competence and its assessment may
arise in many different ways. They
may be held tacitly, they may be adopted uncritically, they may be part of a
strong value or belief system regarding traditional or radical images of how
professionals ought to behave. In short, each individual has their own
conceptual map of what counts as competence, the competent practitioner, and
assessment. These idiosyncratic maps may be informed by academic and official
discourses, or unreflectingly adopted through a process of occupational
socialisation. The focus in this chapter is on the range of 'maps' that people
actually draw upon in their judgements about, and interpretations of, competent
practice and student performance.
It is not just a matter
of there being different definitions of competence. Each definition is the result of a different kind of
discourse, that is, a different way of talking about experience and of
providing rationales for action.
Paraphrasing Kuhn (1970), adopting one conceptual map as distinct from
another involves perceiving the world differently. Different maps define the boundaries between one entity and
another differently. Rather like the gestalt figure which can be seen either as
a duck or a rabbit, the same material entity (the lines on the page) can be
perceptually organised in quite different ways and result in mutually exclusive
judgements: to one person it is a duck, to another person it is a rabbit, to a
third it is a perceptual illusion or trick! Similarly, one person's conceptual map of professional
practice may yield quite different judgements about what procedures should
apply in a given situation to the judgements of an individual whose perceptual
map is organised in a quite different way. In changing the conceptual structures that define competence
or professionality, the mechanisms and procedures which guide actions also
change. Thus for example, the
perceptions, mechanisms and structures underpinning a ward based practical test
used to assess the Rule 18 nursing competencies are not appropriate for the continuous assessment of the
broader Rule 18a nursing competencies for Project 2000. In general, the events that are
considered significant and valuable under one conceptual map may not be so
under another; indeed, an event recognised as having existence under one
conceptual scheme may not be considered 'real' under another. It thus matters very much how people
come to define competence. This in
turn has consequences for the development of professional practice and a body
of knowledge.
2.3. UNIDIMENSIONAL
APPROACHES TO DEFINING COMPETENCE
2.3.1 Statutory
Definitions of Competence
Statutory competencies
for nursing and midwifery (Rules 18 and 33 respectively) have been in place
since 1983 with the introduction of the Nurses, Midwives and Health Visitors
Act.[15] These statutory
requirements guide individuals as they attempt to come to an understanding of
the concept of competence:
Well, we have a
criteria as a midwife, we've got a set of rules, a code of conduct...rules that
we have to follow and your practice has to be safe otherwise you wouldn't be
following those rules so that's how I judge my competency....this is what
you...have to be able to do to be
a midwife.
(midwife)
As this midwife makes
clear, statutory guidelines constrain the way in which a conceptual map can be
drawn up; they can also broaden it, however. If we take nursing as an
example, we can see how Rule 18a has constructed competence as a broader set of
things to be achieved than Rule 18.[16] For instance, under
Rule 18 students must:
b) recognise situations that may be detrimental to the health and well being of the individual
However for students to
fulfil this particular focus of competence under Rule 18a, their understanding
of factors detrimental to health requires greater scope and detail. They must
demonstrate the following:
a) the identification of
the social and health implications of pregnancy and child bearing, physical and
mental handicap, disease, disability or ageing for the individual, her or his
friends, family and community
b) the recognition of
common factors which contribute to, and those which adversely affect physical,
mental and social well being of patients and clients, and take appropriate
action
d) the appreciation of
the influence of social, political and cultural factors in relation to health
care
These revised statutory
competencies for Project 2000 courses incorporate new educational aims, placing
greater emphasis on professionality. Nurses and midwives are expected to
demonstrate competence through:
c) the use of relevant
literature and research to inform the practice of nursing
e) an understanding of
the requirements of legislation relevant to the practice of nursing
Statutory definitions of
competence provide a way of criterion-referencing assessment and thus ensuring
that standards are met. At the same time they offer the individual a set of
criteria for formulating their own version of the essential conceptual map.
2.3.2 The Tautology
of Statutory Definitions of Competence
Where assessment
criteria are constructed narrowly in terms of statutory definitions of competence
this can lead to a tautology. Used in this way they can shape the concept of
competence in terms of the minimum standard necessary to meet the assessment
requirements. Fulfilling the assessment criteria for the course defines
competence; anything not defined in the assessment criteria is 'worth-less' as
far as this particular aim is concerned.
Just as IQ has been defined as what IQ tests measure, so competence can
be defined as what assessment procedures measure:
We're competent as a
qualified nurse because we've satisfied the system that requires us to
demonstrate it. We've passed the assessments etc so therefore we are competent.
That's one definition. (education manager)
This points to two very
real assessment issues. How much of assessment is about bureaucracy rather than
judgement or education? How do personal understandings about competence 'fit'
with those defined through assessment documentation? The issue of competence as
a minimum standard is a live one for curriculum planners and practitioners
alike.
2.3.3 Statutory
Competencies - a Minimum Standard?
Whilst some interviewees
identified the statutory competencies within their own 'maps' of competence,
others expressed worries about statutory definitions being perceived by others
as 'minimum' standards, where no reflection and development occurred once the
'basics' had been achieved:
...they're a good springboard for further development, but I would
challenge anyone who sees them as the be all and end all. Or who always points to the
competencies and says as long as a nurse can do that she is therefore a nurse. (educator)
...if it's a
statutory requirement then it can not be ignored (...) a course has got to be
approved. I think there is a danger that they become the bench mark, and I've
got no evidence to support that, this is a personal opinion, I just speculate
that it could be a danger, that curriculum development group would say that's
what we've got to achieve. (...)I would see it as a minimum gateway that you
get through some time during the course, but there's far more to be achieved
than that, bearing in mind levels and that we're only looking for a diploma
level etc, etc. I'm not advocating that we build in the course any more than
that, but I still think there's a great difference between the statutory
minimum requirements and what can be achieved even within the diploma level
course. And I guess if you went around the country and looked at all the
different courses that they all probably fall at different points within that.
I'm sure there are...I know there are some on the statutory minimum in a sense
and that they just achieve the minimum requirement and that's it, but I know
there are others that I'm sure are much more innovative than that.
If statutory definitions
can be regarded as a 'minimum' to be attained or as a 'springboard' for
development, there is a danger that teaching and learning may be addressed to
the minimum rather than to developmental opportunities.
2.3.4 Competence as a
Cluster of Components
Interviewees talked
about competence in terms of it being broken down into 'components' which are
considered to contribute towards the whole. These components clustered into
categories, examples of which are outlined below:
• possessing a wide range of skills:
  - practical/technical skills
  - communication skills
  - interpersonal skills
  - organisational skills
• safe practice
• knowledge base which is up to date
• critical thinking
• functions as a member of a team
• professional attitude
• motivation and enthusiasm
• confidence
This list is not
exhaustive, but illustrates the range and kinds of components interviewees
described. Each of these components may be further broken down into
sub-components in what may well become an indefinitely extendible series of
'bits', as in a broadly behavioural approach (c.f. Medley 1984, Evans 1987 and
the MSC)[17]. However, upon closer inspection many of
the 'components' referred to by interviewees seem to resist exact
definition. This is demonstrated
in the examples of the
'components' in the following figure.
...professionals first and foremost...
...you've got to keep yourself aware of changes in current thinking...
...tremendous self awareness...
...good practical skills...
...safe...who admits when they don't know something...and goes to the right place and finds out who to ask...
...enthusiastic and eager to learn, wanting to develop new skills...
...an adequate knowledge base...
...personality counts for a lot...
...someone who's organised and can think things through logically...doesn't rush things, just stands and thinks for a while or can organise themselves...
...the ability to provide a high standard of care...not only meeting the mother's physical need but also her psychological needs as well...
...communication skills...
...it's being a patient's advocate...
...motivation...and to be able to take an interest in ward activities. That's very important...
...thinking; if you want just one word it's about thinking...
Figure: Components of Competence (Part i)
...able to show empathy with relatives or carers...
...you can be competent at knowing what to do, competent at knowing how to do it...but it's really knowing why you're doing it...
...using your resources suitably...
...good with staff and able to help, see that somebody's drowning under a lot of work (...) to go in and help and guide them through, support...
...a supporter...
Figure: Components of Competence (Part ii)
It is possible to
itemise what constitutes 'good practical skills' for an individual. It is even
possible to regard keeping up to date with changes in 'current thinking' or
having 'an adequate knowledge base' as observable and measurable elements. It
is more difficult to define 'tremendous self awareness', or regarding oneself
as a professional 'first and foremost', or 'thinking for a while' in the same
way. These latter point towards a more holistic conception that implies
processes of interpretation and judgement. All the so-called components could be reinterpreted as
features of a multi-dimensional holistic map, not reducible to bits but integral to the work process as it
reveals itself in concrete practice.
Nevertheless, a focus upon particular skills and qualities as
discernible 'bits' or 'elements' required for good practice, is a common way of
talking about practice whether or not that practice can actually be itemised
and quantified.
2.4. MULTIDIMENSIONAL
APPROACHES TO DEFINING COMPETENCE
2.4.1 The Factors
which Contribute to Multi-dimensionality
There are, then, several
factors which lead to development and reconceptualisation of competence. One is
the changing professional mandate.
In nursing, the amended competencies for Project 2000 were described by
one educator as:
...a new form of
competence. That it's not the skills based competence that we had before, it's
much more open, learning, flexible, outcomes type of thing.
There are concomitant
changes in the educational process too:
I think I'm taught
about what a competent nurse is, is somebody who can maintain the safety of the patients on the ward. But I think there's more to the
competencies, I don't think a lot's put down on communication skills which is
really what nursing, a lot of it's really about, it's being a patient's
advocate, and you can't be a patient's advocate unless you've got really good
communication skills. (student)
In addition, there is
change motivated by a changing world 'out there' to be taken into account:
...there are clinical
skills which I think a nurse needs to learn in order to survive in an ever
changing world. If you look at technology
and things like that, so the whole issue about the nurses role needs constant refining, i.e. can
nurses give IV drugs? (educator)
Multi-dimensionality is
also encouraged by working practices which continue to change roles. As one
midwife, in making a comparison with the 'extended role of the nurse', says:
...they're going on
about the extended role of the nurse, I just fall about because our midwives,
it's not an extended role, it is their role; for instance they perform episiotomies...they also suture their
cuts...they can also give life saving drugs without having to wait for medical
staff to get there.
Neither the behavioural
nor the legislative conceptualisations of competence address these kinds of
issues:
I mean generally
people I think in nurse education aren't happy with competency based
training. We think it concentrates
on performance, skills, the technician... and doesn't take sufficient account
of the development of the individual. The cognitive, the intellectual, the
reflective practitioner. And certainly this is a worry since one of the major
things about Project 2000 (...) at Diploma level, is that it strives to develop
cognitive and intellectual skills which enable the person to be reflective, a
critical change agent at the point of
practice, but also someone who can resource their own learning, their
own continuing education and direct that and influence all of
those things as an equal partner with all sorts of other professions. And I'm
not sure that the competencies necessarily reflect that side of the
professional role...either of them (pause) 18a
is certainly better but I think the very fact that they are still cold
competencies, which has a very clear manual task related definition.
2.4.2 Working
Relationships and the Dynamics of Work
In the multidimensional
approach, competence is defined by describing the features of the totality of
the concept as it is expressed within the context of work. It is not an easy task to convey the
variety of highly individual understandings in a way which makes sense of the
diversity without over-simplification.[18] Interviewees'
reflections on competence appeared frequently to be exploratory in character,
often revealing inconclusiveness and difficulty in articulating the indefinables
without resort to concrete instances or events. The following interview gives a sense of this:
It's very hard isn't
it? Because each individual's probably, you know, different. I think (pause) a
competent nurse...(pause) someone who can work in a team, work with other
disciplines, I think someone who is aware of current research, doesn't sort of
stay stagnant, is always trying to update her knowledge...a person who's
approachable, a person who has genuine regard for her patients...maybe has
experience of life as well. Erm, can identify maybe mood changes and take into
consideration why this happens. A
patient maybe has just been given bad news...so can adapt her approach (pause)
you don't want a nurse coming in bouncing if her patient has just been told
they've got inoperable carcinoma. So you've got to have counselling skills,
listening skills. Erm and to have a good rapport with patients, you know for
the patient to feel that they are able to come to the nurse, even just to sit
in silence and for the nurse to be there and just offer support, to sort of
show empathy with her patients. (staff
nurse)
At its broadest,
competence includes 'life experience'. Most importantly, there is the sense of
competence being based in work, and in particular, being based in a working
relationship. The features of working relationships are that they are
situationally specific and skills required in the situation are shaped or
tailored according to specific needs and circumstances.[19] In the conveyor belt
technology of car production it is possible to standardise patterns of work and
procedures so exactly that a robot can be programmed to perform them. What characterises nursing and midwifery
is the opposite. Work situations
are dynamic, conditions change, no two situations are identical. Programmed responses of the robotic
kind are not merely impossible but undesirable. These concerns are again echoed in the following extract
from a student:
It's a hard
question. One aspect of it is
actually having a good grasp of the kind of nuts and bolts of the job, like
when it comes to psychiatric nursing...you should as a competent nurse know the
relevance of the sections of the Mental Health Act thoroughly so you're not
fumbling around when the situation comes up....and similarly when it comes to
carrying out procedures like intramuscular injections dressings and so on....I
think when you're at least familiar you're far more competent in things like
that and I feel more confident and
then I think that sort of flows over you into sort of the other areas. (pause)
I find it difficult to put into words, but part of it is a sort of sensitivity
to other people because it's very much about personal relationships and
building relationships and a rapport with people who you know are in various
kinds of mental distress...So to me that's quite an important part of being
competent. I mean there's so much involved in that, it's not always what you do
it's what you don't do...knowing when to actually say something to somebody,
when to get into deep conversation, when to play it cool and when to stop a
conversation. (student)
It could be argued that
this interviewee while pointing to contextual matters also points to particular
'elements' that are necessary to professional competence. For example, having a
grasp of the 'nuts and bolts' seems to imply a kind of tool kit knowledge of
the job. Knowing relevant sections of the Mental Health Act is a case in
point. This kind of knowledge is
not just 'a bit of knowledge to be added to other bits of knowledge', however. The Act is itself a complex text that
has to be interpreted in relation to situationally specific events. There are two kinds of reading here: a
reading of the text, and a reading of the real life situation. This double reading then leads to a
decision that certain procedures are required. In attempting to explain how this is done the interviewees
give the sense of trying to hold onto the image of a very large picture, while
trying to bring into focus each of its details.
In each case, however,
there is a structural coherence to this picture. It is a coherence that is provided by the experience of
work. Experiences of work provide
the materials for accounts and reflections which can be shared with
others. Asking what the
relationship is between one account of work and a developing body of
professional knowledge is rather like asking what the relationship is between
the particular and the general.
Competence involves acts of generalisation which at one level draw upon
the common features of a wide range of experiences and on another level relate those generalisations to other bodies of knowledge or conceptual maps.[20] These acts of generalisation allow the
professional to make decisions with respect to the immediate case at hand. Without such acts of generalisation
there would be no guidelines for decision making.
In the previous two
extracts, both staff nurse and student try to make some generalisations but are
well aware of the situational specificity of competence. The student suggests that a procedure
that is appropriate to one context, or with one patient is not necessarily
appropriate to another apparently similar situation. Her remarks seem to indicate that competence involves the
ability to build up a repertoire of experiences and situations that bear some
similarity to each other but at the same time reveal significant differences.[21] To be able to work in a
given situation requires an extraordinary sensitivity to its specifics, such as
responsiveness to intangibles like mood changes. It is a background knowledge of the effect of context on
application that makes the vital difference. The subtlety required in being able to discern which
approaches and decision making are required for actions in individual contexts
is considerable. This in itself
has major implications for
learning and professional development, in particular it means that a sharing of
personal experience and group reflection on cases is vital to building up an
internalised body of case histories relevant to decision making.
2.5. COMPETENCE AS
REFLEXIVE KNOWLEDGE EXPRESSED IN WORK
2.5.1 Competence and
Professional Practice
The attitude of working
to satisfy minimum criteria, from an ethical point of view, cannot be regarded
as professional. Mechanisms can be
set in place to enable development beyond the acceptable minima. Professional action is effective action. Effectiveness does not work toward, nor
is it satisfied with, minimal criteria.
Rather it works according to criteria of continual improvement in
professional action. In this view, competence is always developmental in
orientation, never looking back to the minimal criteria but always looking
forward to better performance, improved decision making and greater quality of
outcome. This view of competence
firmly centres it in 'work' and work relationships. At its broadest work can be defined as the process of
manipulating and transforming physical and social environments for human
purposes (Sayer 1993[22]). Work as the dominant
means for people to structure their lives, find self value, form a sense of
identity and engage in relationships with others is fundamental not only to
self development but also to cultural and professional development (Lane 1991[23]). Competence, then, and the forms of
knowledge and knowing and acting that underpin it, can be defined as
expressions of work.
2.5.2 Competence and
the Development of a Reflexive Body of Knowledge
Self and peer review are
procedures that contribute to the development of a reflexive body of
professional knowledge, grounded in shared experience. They are procedures that
facilitate the internalisation of processes of professional judgement and
evaluation. The importance of
individual understandings of competence is emphasised as these frequently form
the basis of self and peer review.
Some interviewees focused upon these activities as professional competencies
in their own right. They talked
about nurses and midwives developing their own standards, and being ready to
assess themselves. They saw the development of competence as an on-going
process which takes place in a relationship of mutual support and critique with
colleagues.
You have to be self
judging as well, as well as your peers judging you I suppose...so if you feel
you're achieving those competencies yourself, then you're in (a) position to
judge someone else I suppose. (midwife)
Such imprecision may
disappoint those who want to measure clearly defined signs of competence. Yet, what is being judged is
imprecisely defined because it is not a single entity but rather a complex of
concepts, perceptions, feelings, values that constitute an orientation, a focus
and a rationale for acting:
I feel a lot of the
time nurses lose sight of the prime reason that they are there, and that is
the patient. Because it's so easy
to do because there's so much else going on. And a professional person is able
to constantly pull themselves and say, "Now hang on, what am I doing? Where am I going?" It's not good enough just to measure
myself against the competencies and say, "Oh well I come up to that
standard."
In this case then, a
defined set of competencies is inadequate to generate the sense of competence
that is being described here. A
rationale is not a set of competencies but is generative of the criteria and reasons
that distinguish between competent and incompetent action. Such a view requires a sense of
continuous assessment of action, indeed a sense of mutual assessment:
...I mean I think we
all assess ourselves and our work colleagues continuously, all the time anyway.
I mean I'm sure as a team leader, the team leader will look at her staff... if
she's got a very poorly patient and know who to shout for if there's an emergency. And that way she's assessing her staff
isn't she, as she goes on...you assess people by how they go about their work, how
organised they are, are they tidy?...Are their patients happy?...Are they
lifting patients that are in pain without analgesia and that sort of thing. (student nurse)
The view of assessment
as supportive critique is an important one which requires further exploration.
It is necessary to discover, for instance, the extent to which that kind of
critique is possible given the pragmatic (eg time) and cultural (eg busy-ness)
constraints in nursing and midwifery environments, and also the extent that
these constraints affect everyday competence. [24]
Some interviewees
suggested that the competence of the qualified practitioner becomes such that
individual competencies are no longer distinguishable, having become features
of performance in general. They pointed out that these features are so embedded
in the practitioner's daily activity that it is often difficult, if not
impossible, to articulate them as specific competencies, as in Benner's (1984)
work:
...some of the skills
come with experiences of life and I think intuitiveness. I could work with a
student and I could say, 'What do you see in this patient?' and the student
could say, 'Well she looks fine.' Now intuitively I might say, 'Well I don't
think she is' and I can't explain why I think that. It's come with
experience...so that I don't think can be taught. That is something that has to
be acquired throughout as they go on. (staff nurse)
The concept of
competence as an expression of work seems inextricably linked also with a
concept of continual development.
2.5.3 Competence and
Professional Development
As understandings of
competence and expectations of role develop, notions of appropriate outcomes of
courses need also to change. A
newly registered practitioner may be able to take on certain kinds of work expectations
but may not be considered fully competent in their new role until they have
begun to consolidate their course-based knowledge in relation to an extended
period of work. As one educator commented:
...I have got
feelings that we are expecting maybe too much to say that at the end of training
the student is a competent practitioner.
And you would say that perhaps a student becomes a competent
practitioner when they have had sufficient time, and that's got to be
individual, to consolidate their overall training in a specific area which they
choose to work.
Even given time for
development, however, there is not necessarily a linear progression in
competence from level to level. Development is often not orderly and its pace
is certainly not open to external regulation. Seen from a developmental
perspective competence can be conceived in terms of individual readiness for
transition to competent practitioner status. This means, as the nurse quoted below suggests,
that there is no externally pre-specifiable point at which it is possible to
say for each and every individual that competence has been fully accomplished
with respect to every aspect of nursing or midwifery.
And I don't think
that in three years of training a nurse has even approached perfecting those
sorts of skill (interpersonal and communication skills), if she ever perfects them,
but she certainly hasn't even begun...So things like the skills you need to be
self aware and to have good interpersonal relationship skills takes years and
years, if I look at myself (...) to build up, to define and refine.
It also means that the
maintenance or development of competence is not guaranteed, and that competence
is on a continuum along which, in common with all other continua, it is
possible to move backwards as well as forwards:
I think it's
transient. I think it (pause) you glide in and out of competence and I don't
think you have it for ever and a day erm and yet our notion of it really is
that when you're competent that's it and you always are forever more, so it
needs nurturing.
If this is the case for
registered nurses and midwives, then it is certainly true for students, whose
development is taking place within a time-limited Project 2000 programme, and
whose learning still remains to be consolidated by extensive experience. Work
situations are dynamic, and the situationally specific events of everyday
experience are not precisely controllable. The experience of being competent to handle such situations
is thus likely to fluctuate according to the sense of being in control, knowing
what to do next and handling the unexpected. Thus:
It's not an all or
nothing state is it? They (students) are partially competent and I think most
practitioners are only partly competent ...
In this view, then,
competence is not a steady state; it is a fragile achievement and never a total
accomplishment.
I mean to be
competent as a nurse do you have to be competent in everything? Because
I'm not competent then... Because
applying it to my own situation there are a range of skills that I have to
bring to my job. There are some I do well I think, there are some I can handle
reasonably well, and there are some I'm quite poor at. What I tend to do is
delegate the poor ones, avoid them myself. So am I competent or not? I'm
competent in the areas in which I practice, but because I avoid the areas where
I might not be, does that keep me competent?
But while the
professional development view defines competence as something which continues
to evolve over time in fits and starts rather than by linear progression, it
nevertheless recognises a sense of purposeful direction, a sense of striving to
improve action in professional work.
2.6. COMPETENCE: A
SUMMARY
The concept of
competence resists easy categorisation. There are many different aspects of
this wide-ranging and complex concept which have to be taken into account when
designing strategies for its assessment. Professional perceptions indicate
clearly that there is more to competence than simply what can be easily
observed and measured. From the developmental perspective, statutory
competencies may be considered as merely an initial framework, or starting
point for professional development.
Competence continues to develop and grow as the individual begins to
construct a more detailed picture of the general requirements of nursing or
midwifery by building up a repertoire of situation-specific experiences.
Competence is, therefore, a concept which is worked out and continually
reformulated through work itself. Assessment needs to take account of all
these complexities.
2.7. ASSESSMENT
Introduction
Devolved continuous
assessment was introduced into nursing and midwifery education as a response to
the need to assess a wider range of educational purposes and a developing
concept of competence. Long-established cultural practices are not changed
overnight, however, and the move towards devolution has been marked by a degree
of culture clash. Where there is transition there is also variety, as new forms
of assessment are introduced to run, for the time being, alongside more
traditional forms. Consequently, the experience of the non-continuous system of
four ward-based practical assessments has continued to affect perceptions of
assessment for some time after the introduction of the more educative approach. Twin-track
assessment (i.e. different forms of assessment running in parallel within the
same institution) offers both a cultural challenge to the institution, and a
potential psychological challenge to the individuals in it. The inevitable
pragmatic and conceptual confusions become part of the discourse about what
constitutes satisfactory assessment practice. Like competence, therefore,
assessment itself is worked out, or constructed, in the process of doing it.
The net result is that both the quality of nursing and midwifery being assessed
and the instrument for assessing it are defined from multiple perspectives. To
discover how far devolved continuous assessment is effective in assessing
competence, it is necessary to identify not only the range of views of
competence that make up that unstable concept, but also the range of
experiences of assessment that have created the culture in which assessment is
to take place.
2.8.
TRADITIONAL FORMS OF ASSESSMENT
2.8.1 Centralised
Periodic Assessment
In 1971, the assessment
of nursing practice through classroom based simulations was replaced by a series of four practical tests conducted in clinical
settings[25]; a maximum of three
attempts at each test was permitted.
During placement experiences, the practical tests were supplemented with
King's Fund type ward reports, completed by clinical staff. Assessment of theory took the
form of a final determining exam, for which students could make three
attempts. Assessment for midwifery
students consisted of a final qualifying examination, with written and oral components.
The overall assessment
systems were the subject of much criticism, which was recognised at all levels
in the profession (Gallagher 1985; Aggleton et al 1987; Bradley 1987; ENB 1984,
p1; ENB 1986, p 1; Lankshear 1990) and stimulated ongoing debate about
preferable alternatives. As a
consequence of the criticism and debate, continuous assessment remained on
the assessment agenda for a number of years. A small number of pilot schemes were operated by the ENB
from the 1970s onward; however, such developments were not widespread (Spencer
1985) until the national implementation of
continuous assessment.
2.8.2 Practical
Assessment
Although many
practitioners still cling to some of the ideals of traditional assessment with
the consequence that the nursing and midwifery culture in specific locations
within particular institutions has been slow to change, most when interviewed
were clear about its shortcomings. The apparent contradiction in this serves
only to illustrate the power of habituated practice to continue to frame
corporate action in the face of contrary innovative practice constructed around
essentially unstable (in the sense of developing) concepts. The information
people offer about their dissatisfaction with piecemeal, periodic forms of
assessment is interesting, therefore, from two points of view. Firstly, it
confirms the general sense that nurses and midwives are, in principle,
committed to continuous assessment as a 'fairer' means of gauging student
competence. Secondly, it provides evidence of the fact that people can be aware
of shortcomings and yet still continue to work for some time without complaint
within a culture in which flawed practices persist.
1. One-off
practical tests
Interviewees
acknowledged that this approach was inadequate for the assessment of practical
competence. They commented on
performance situations which:
• lacked reality
Interviewees made it
clear that assessment focusing on the ability to perform satisfactorily on
a one-off occasion within an essentially 'false' situation did not reflect the
realities of everyday practice. Staff and students spoke of how a great deal of
rehearsal occurred prior to the assessment. Typical of comments that suggested
this performance element were the following:
I don't always feel
it's fair to the student really, because it's not real and when they qualify
and they get out there it's so different... And I can remember thinking it,
"But on my management it wasn't like this!" Everything went so
smoothly.'
(staff nurse)
You tend to do
probably extra things that you wouldn't normally do. (student)
..making sure the
trolley's perfectly clean and that you've got everything on the bottom, where
probably on a normal drug round...day to day you'd just look quickly then rush
off with the trolley. (student)
• were stressful
Unsurprisingly, many
students expressed anxiety prior to and during assessment events. As one student commented wryly:
I suppose it assesses
how to cope with stress...your nerves are just shot to pieces. I'm not a
particularly nervous person or highly strung...I'd hate to find someone who
gets quite intimidated by it all... it really is horrible.
• assessed limited
application of principles to different contexts
Many students were
concerned that the focus on one-off performances provided little opportunity to
show what they knew about the principles of nursing or midwifery in a variety
of settings:
...like aseptic
assessments, you can have an assessment on a dressing and can not have gone
near a suture line or clips or anything and suddenly you're competent enough to
go off with your trolley and take out clips and sutures and God knows what else
on the ward.
Others were concerned
that success in a one-off situation did not guarantee transfer of competence to
a variety of contexts:
...It doesn't
suddenly mean that you're labelled safe and it doesn't mean that you can start
pulling down the barriers of double checking...
• poorly
discriminated levels of performance
The limited capacity of
the assessment to pass 'good' students and to detect, and where necessary fail,
'poor' ones was noted by several interviewees.[26]
...You do get
referrals to people that shouldn't be referred, and you know are quite capable
that have done something amazingly silly. But you know if they'd actually done
it on a day to day basis you'd just say, "Well that's stupid, I've got to
start all this again"..and then you get those who, you know to be honest
don't really put in a great deal more than they have to on a day to day basis
that come out with a glowing performance on the day. (educator)
2. Ward Reports
The shortcomings of the
King's Fund or 'ward report' forms used on students' clinical placements as an
adjunct to practical tests were largely to do with the lack of real evidence
that they provided:
...the King's Fund
report form is very erm... open and there's not a lot of room on it for comment... (staff nurse)
Well, the thing is
they're never used properly, I mean half the time they just tick and there's
never any comment made at all, to reflect what the ticks are actually saying. (educator)
3. Final
examinations
Many students had strong views about the
inappropriateness of the final summative written exams in nursing, and the
final examination papers and orals in midwifery. They were unhappy with their
one-off nature and the associated pressures.
It was also felt that an exam did not assess their nursing or midwifery
competence, and it was therefore seen as unnecessary as well as unwelcome. Typical student worries were:
I mean that sort of
really worries me, the fact that it comes down to an exam paper at the end of
the day, you know the final decision. The fact that I've passed the ward-based,
the practical assessments and been deemed to be competent or whatever... erm
comes down to the fact that at the
end of the day, pass the finals.
Personally I'm very
bad at exams...I've always had very good or outstanding ward reports but on
exams my marks are normally borderline or just over borderline, sort of 50 to
60%...I get into an exam and I know what I want to put and I know it all, well I know quite a
lot of it! But putting it down on a piece of paper's just something completely different...everything hinges on that
day.
The general attitude
towards exams and one-off assessments is summed up by the student who, in
advocating a more continuous approach, indicates what is missing from the
traditional one:
...you see it as a
hurdle...some people thrive on hurdles and jump them. But a lot of people see
them as a barrier...you've got to get through that barrier. If it was like
continuous assessment erm, I think
you'd keep yourself more aware of what you were doing and would be more willing to change your
practice, not just as a student but you'd have that sort of framework, so that when you do qualify you're still
able to look about at what's happening and change your practice.
He, like the majority of
students, practitioners and teachers interviewed, saw examinations as being
about something other than competence. Indirectly, such a perception
re-iterates the view that theory and practice are separate entities.
2.9. DEVOLVED CONTINUOUS
ASSESSMENT
2.9.1. The Reality of
Continuity
National guidance for
devolved continuous assessment was produced by the ENB in its Regulations and
Guidelines for the Approval of Institutions and Courses 1990[27]. The strategy requires students to
demonstrate the 'acquisition of knowledge, skills and attitudes of differing
complexity and application', and to include clearly identified summative and
formative assessment activities (ENB, 1990, p 50). The guidelines place a strong emphasis
on formative assessment, reflecting principles which are intended to maximise
learning, build on students' strengths, respond to their weaknesses, provide
profiles of progress, link the identified 'parts' of the overall strategy and
encourage student participation in self assessment (ENB, 1990, p 50).
The increased sense of
reality that this continuous process can provide is identified by this
midwifery teacher:
...the new assessment
will be a much...not so much fairer but a more realistic assessment of a
student's progress.
The in-depth approaches
to the assessment of theory which continuous assessment allows are also
welcomed:
...at the General
Hospital they do a project where they've got so many weeks to do it, and I
think that's a good idea because you can demonstrate your knowledge in a much
deeper way...you've got more time and you can spend more care and...it's not as
traumatic as doing a big, you know
a two, three hour exam.
Insofar as competence
was seen as an ability to perform consistently over time, there was a view that
continuous assessment facilitated this.
An approach which
represents changes in educational aspirations must also map into work contexts,
the only scenarios through which professional competence is expressed and
realised. The recognition of distribution of responsibilities is therefore
essential to the functioning of continuous assessment. The cascade of responsibilities to all
levels was recognised by interviewees.
Some commented on the
effects on students:
I think it makes you
more aware for three years, certainly you've got to know what you're doing all
the time. (student)
I'm sure that a lot
of people believe that continuous assessment is an easy option, and that
ultimately it will dilute the performance of the individuals... I believe that
it will actually concentrate their minds considerably...(educator)
Others noted that not
only did continuous assessment have the potential to concentrate students'
efforts but also to foster in them a developmental attitude and an expectation
of education as an on-going process.
2.10. INNOVATION IN
AN ESTABLISHED WORKPLACE CULTURE
2.10.1.
The Persistence of the 'Old'
Interviews show that
perceptions of the weaknesses of single-event assessment and the strengths of
continuous assessment do not always lead to different approaches to the
activity of assessing. It seems
that for many a reconceptualisation is required if assessment is to be part of
an educative process. Some interviewees comment that some schedules are still
dominated by behavioural formats which do not adequately reflect the
complexities of continuous assessment. In them, competencies are broken down
into numerous parts or sub-skills in order to measure them on a pass/fail
basis. They focus on techniques rather than 'complex' competence. It appears that 'technicist' approaches
are still evident in some continuous assessment documentation and hence they do
not promote assessment of the 'complex' or 'higher order' competence that
typically characterises a professional. The implication seems to be that in
evaluating the effectiveness of various forms of assessment, there is a need to
consider not only the forms themselves but also the general methodologies
employed within them.
In the same way that
nursing and midwifery competencies have been statutorily defined but are
modified and extended in practice, so the strategies for assessing competence are set down
in official procedures but adapted 'on-the-ground' as they are put into
operation. There is, for instance, a clear distinction in the Board's
regulations between formative and summative assessments, whereas interviewees'
comments provide evidence that some assessors employ all assessment
diagnostically and formatively. And despite the Board's clear distinction
between 'single event' assessment and continuous assessment, people talk about
both in ways that suggest both forms of assessment are carried out in similar
ways. All the interviewees, however, have definite views about the nature
and quality of different forms of assessment, and those views inevitably affect
their assessment practices. [28]
The part played in
shaping continuous assessment by 'residual' perceptions, held by those still
operating the earlier approach, albeit alongside the new one, and the often
cynical attitudes these engender, cannot be ignored. The following comment
encapsulates that cynicism.
I'd argue that
there's a conspiracy of passing people at the moment because if they judged it
by their (clinical assessors) values they (students) shouldn't pass, they're not
competent within their values, but they know their values aren't what's being
asked for but they have to assess them...to say someone's not competent to do
that you've got to know what you're talking about and their training may not have given (them that) ...if
you don't know you might as well
put a tick because if you put 'no'
and they challenge you...
The dilemma is summed up
nicely in the following comment, as the speaker unfavourably compares
preparation for the major attitude and practice shift required to facilitate
the radically different form of assessment required by devolution and
continuity, with the preparation offered to people in industry who are about to
undergo an innovation of similar proportions.
...half of the people
(assessors)
or whatever number have been trained literally as they work to operate within
one system. We are then asking them to assess with a different philosophical
view...apart from a 998 course here or there, I mean not particular help to do
it. If any industry was remodernising it would put in massive resources to
change it to operate the new machinery; nursing somehow hasn't put that (in),
and after all assessing that they (students) are
competent to practice in this new
way when you've never practiced in it (pause) it's like asking me to judge
something I know nothing about.
There is clearly an
issue of what constitutes an appropriate investment in resources for a major
innovation.
2.11. ASSESSMENT: A
SUMMARY
Assessment, like
competence, is an activity informed by an evolving concept. It is also an
activity which is carried out regularly and thus becomes habituated. Where
institutions have a culturally embedded reflexivity, their assessment
strategies and practices develop gradually and meet the educational needs of
the profession. Where the imperative for change is entirely an external one
which impinges upon a non-reflective institutional culture, the attempt to
accommodate to that change is often traumatic, and the strategies adopted lead
to piecemeal adaptation within an 'unready' context. By contrast, in a nascent
reflective culture there is principled movement towards a form of assessment
that satisfies the professional requirement that competence should be founded
in the application to practice of appropriate skills, knowledge, and attitudes.
Institutional histories are therefore of considerable importance in determining
the extent and rate at which a particular assessment strategy will succeed. The
success of devolved continuous assessment relies upon the people who operate it
having a sense of ownership, without which they will lack the necessary
commitment and understanding to exercise effectively the increased
responsibility it brings.
CHAPTER THREE
ABSTRACT
A system of assessment which is
devolved, must be capable of handing responsibility for design and implementation
to individual approved institutions without losing the capacity to hold them
accountable for meeting national criteria. The institution itself must be able
to ensure that it provides a course of professional preparation that equips
students with the competence to practice at a professional level. In any
assessment system there are, then, internal and external points of reference,
and individuals, committees, and planning groups must all respond to both sets
of constraints. The devolvement of decision-making scatters the centres of
decision; this can lead either to the development of collaborative or of
competitive relationships between institutions. Where there are well-defined
roles and structures that promote partnership between all the interested
parties, the personnel involved in the design and subsequent implementation of
an assessment policy feel a sense of ownership. Internal partnership rather
than external legislative imposition ensures a comfortable fit between the
needs of practice, teaching and learning, and assessment. The most facilitative
structures enable rather than constrain the process of parallel curriculum and
assessment policy design from the earliest possible opportunity. The most
supportive roles are defined in a way that encourages dialogue, and brings
together different perspectives (e.g. nursing and midwifery) and specialist
interests (e.g. branches within a course) in a mutually enabling relationship.
Where the assessment histories of merging or merged Schools and Colleges are
very different or these institutions are at different stages in the development
of their assessment policies, individuals are obliged to invest large amounts
of their personal time and energy to achieve compatibility. They do this
willingly where there are stable structures for dialogue and partnership, but
experience frustration where the re-organisation process has resulted in
destabilised structures and uncertain role-definitions. This seems to point to
the need for a policy co-ordinator and assessment quality assurance role. It is
not simply at the local level that clearly-defined assessment roles and
structures are appreciated, however; they are also valued at a regional and
national level. There are many instances of a happy relationship between ENB
Education Officers and their local approved institutions, in which officer and
approved institution are able to critique national guidelines as a route
towards making sense of them and gaining the ownership mentioned earlier. This
relationship recognises the scope for local interpretation of national
guidelines, and looks positively for creativity in the approved institution's
operation of them. Where the relationship between the parties is genuinely one
of partnership, guidelines are used as an enabling framework and the need for
dialogue about different interpretations fully recognised. Without such a
relationship, assessment documentation, schedules, and procedures are
introduced in which no-one at the approved institution level has any faith. Inevitably
these fail to provide reliable or valid assessment information.
DESIGNING DEVOLVED
CONTINUOUS ASSESSMENT STRATEGIES
Introduction
A planning structure must be able
to ensure appropriate conceptual frameworks, principles of conduct, role and
communications structures, and individual commitment to the enactment of the
process. This section will explore
the features and experiences of planning in relation to these:
¥ by giving an overview of the relevant features of
the operating context of institutions in terms of 'mapping the system'
¥ and by describing and analysing the experiences of
staff operating within that 'map'
3.1. MAPPING THE
SYSTEM
In general terms an
institution can be regarded as a system which must meet the demands made upon
it both internally and externally if it is to survive and develop to fulfil its
intended mandate. Assessment is of
course a key function in the mandate of an approved institution to engage in
the education of professional nurses and midwives. Assessment is not merely an internal matter. Reference is made to demands from a
variety of sources external to the approved institution (ENB, UKCC, EC
directives and other appropriate legislation). These reference points provide one set of guidelines common to all by which to construct a
map which can then act as a framework for analysis, comparison and contrast. These external points of reference in
turn make demands upon and place limits upon the ways in which institutions can
organise themselves to meet the demands.
There are thus external and internal factors to take into account.
The appropriate
structures will of course depend upon local circumstances. However, it is possible to begin the
process of analysis by setting out, in the first instance, a simplified schema
as follows:
Schema 1
Without appropriate
structures and associated role definitions, committees, working groups,
planning procedures and communications structures, little is likely to be
accomplished. Although
oversimplified as a representation, this initial schema does point to two
important structural dimensions for the
approved institution. There
are two directions that it must face, first 'vertically' towards the national
bodies (ENB, UKCC), European directives and local bodies (RHA, Trusts); and
secondly, 'horizontally' towards the clinical areas of the region (purchasers,
clinical placements). This
therefore divides external demands upon its organisational structure into two
kinds. Roles must be established
to respond to the two kinds of external demand. In addition, to be properly informed about regional
operating conditions, it needs access to information regarding the relationship
between national bodies and the clinical areas in its region. That the schema
is oversimplified is made clear when the complexities begin to be identified
for each broad category of institution.
Thus the national-local level can be further amplified:
Schema 2
Clearly, the approved
institution must respond not only to national demands but also to local
demands. The local context is not
a mediating layer between national and institutional levels but rather is
symptomatic of global developments throughout society. There is no unambiguous hierarchical
'line-management' relationship running from the national through to the local
and then to the approved institution.
Devolvement of decision making scatters the centres of decision, making
them at least quasi-autonomous.
The movement then is into the formation of collaborative and/or
competitive relationships where negotiation rather than 'command' is the
central operating feature.
With the increasing 'marketization' of once nationalised public
services, decision making, while made more sensitive to local demands and
operating conditions, is at the same time subjected to national demands for
'quality assurance', 'standardisation of outcomes', efficiency, effectiveness
and so on. Tailoring services and
training to meet local conditions of demand may come into conflict with
national demands for consistency, for a common professional education. This particular tension between the
local and the national represents a modern feature of society which can be
termed the 'global-local' problem (cf. Harvey 1989).
The implications for the
approved institution do not neatly separate into two classes but must rather
address the central problem posed by the new operating context with its
global-local poles of decision making.
This means, in general terms,
a need for structures for collaboration, dialogue and information
gathering.
The following diagram
begins to unpack the internal complexity of the approved institution
itself. The internal
organisational logic of a given institution has its own historic origins. This provides the internal operating
context within which planning takes place. While there may be surface similarities between some
institutions, in practice the operating conditions are unique to each
institution. In general terms,
there are typical operating differences as between the 'new' and the 'old'
universities. The new universities
come from a polytechnic culture with its CNAA[29] influenced principles
and procedures of course development, validation and assessment. The old universities draw upon a quite
different culture of autonomy and self-validation. Where the polytechnic culture has expressed itself in terms
of strong line management patterns of control (Heads of Department and Deans
being seen as professional managers), the old universities typically incline
towards individualism, a limited style of line management (Deanships rotating
amongst senior academic staff) or a democratic mode of
School/Department/Faculty management.
With amalgamations or affiliations of colleges of nursing and midwifery,
different ways of working, different role definitions and different
occupational cultures and institutional histories are brought together. Mediating roles and structures, not only
within the institutions but between them, become important.
Schema 3
It is not only the map
of the variety of institutions delivering education that is under change but
also the 'clinical areas' for student placements.
Schema 4
Each schema provides a
way of beginning the process of mapping the range and sources of information
necessary to plan the assessment structure.
In order to specify in
more detail the roles required for assessment design, the structure for
implementation must first be identified. For this purpose a further schema can be offered:
Schema 5
The roles that the above
schemas have identified in general terms are articulated in practice according
to the circumstances faced by each institution. However, if the system is to operate it must meet the functions
defined in terms of these roles.
3.2. THE EXPERIENCE
OF DESIGNING CONTINUOUS ASSESSMENT STRATEGIES
3.2.1. Operating in
the Local Context
Responsibility
for the detailed planning of assessment strategies belongs to individual
approved institutions. In theory that responsibility is determined by a set of
ENB guidelines, UKCC statutory requirements and EC directives. In practice,
because every institution has a unique assessment history there are marked
differences in the approach each adopts to the design of assessment strategies
to meet the Board's most recent requirements. Those who have already been
operating continuous assessment informally for several years simply continue
the institutionalised process of evolution and development; whereas those for
whom the experience is a new one face a considerable challenge of innovation.
The following extracts typify the range of experience:
...it's where you are
now, what experience, what you've come from, which is what we've tried to do
with that profile (...) we know where we are at now is the King's Fund
assessment forms. Let's see this as an interim, let's move slowly (...) because
it's not just the educationalists in the school, it's the people out there you
know, it's like moving where you are now, from where you've been in the past,
to moving to the future...it's about development.
The experience of
designing a new assessment strategy is most difficult where it represents a
major innovation. The experience is quite different for those professionals
who, like the members of the professional group described below, are working in
a context that has engaged in a gradual evolution.
...historically, we
have been running continuous assessment here for at least ten years, but
initially it was run in conjunction with the statutory mechanisms deemed by
what was previously the General Nursing Council and subsequently the ENB and as
a result of that obviously we feel we could work through that which in the
early days were quite primitive tools for determining the knowledge, skills and
values of a student (...) so evolutionary we've moved on and yet in some
respects of course we've still got tools that were relatively primitive.
This teacher and her
colleagues work in a culture which is familiar with regularly evaluating and
developing continuous assessment. For them, each new requirement is an
opportunity to upgrade what they have been doing previously.
3.2.2. Ensuring
Ownership Through Partnership Structures
Assessment is normally
planned in common with other aspects of a course, and the planning team that
does one is the same group of core educators, practitioners, and (less
frequently) students, responsible for the other. This partnership ensures that
in planning their assessment strategies a team considers both the educational
agenda and the agendas of nursing and midwifery practice. Partnership, as the teacher below
points out, avoids the pursuit of an 'ideal' strategy which is inoperable in
practice:
It will involve
teachers, managers, practitioners, student representatives and any other person
who has specialist knowledge about this particular issue that is going to be
written about or discussed. (...) To write a curriculum with purely
educationalists, I mean people tend to say that educationalists tend to live in
the ideal world, and sometimes don't tend to realise the reality of the
situation. It is fine to sit down and write the ideal curriculum, but in
practice it can't be implemented. (...) But here we have always taken the view
that education is a partnership between the college of nursing and the service
area.
Partnership is not
always easy, however, and sometimes it can be a little imbalanced with either
the practitioners having slightly more say:
...both the midwife
teachers and the clinical midwives, they both need each other to function
properly and so it would have been in my opinion very wrong for it to have been
an assessment strategy designed by midwife teachers. It's got to be both sides,
(...) in fact we had just one midwife teacher heading up a small sub group and
there were three preceptors plus her, so in fact...the emphasis within that
small team if you like, was more on the practising midwives rather than the
midwife teacher.
or the teachers:
...I mean we say we
say that the student owns the document, but they can't own it if they've not
designed it really to their own use...as long as educationalists are designing
the form then there's going to be problems. As soon as the clinicians design
the form then there's going to be problems with the educationalists. I think
until we can all sort of get together and speak the same language there will be
problems won't there?
The issue here is one of
ownership. In a satisfactory partnership, because all the partners make a
significant contribution to the design of the assessment strategies and
documents they eventually use, all partners feel a sense of 'ownership'. Not
all planning groups achieve that sense however. The comment of the educator quoted above reflects a view
which was often expressed, and highlights the need for greater dialogue between
educators, practitioners and students to promote shared ownership.
In the true partnership
between practitioners, teachers, and students, the relationship between
planning and assessment is perceived as integral. Far from being something
designed separately by educators
and imposed on practitioners, or given an undue practical bias by virtue of an over-heavy
practitioner input, assessment is seen as fulfilling both a curricular and a
practical function. Where all the people involved in the design of the
assessment strategy become 'owners', as in the instance below, the match
between the needs of practice, teaching and learning, and assessment is high.
...our experience has
taught us you can't divorce assessment from the main curriculum planning, and
you can't have curriculum planning taking place and assessment coming later.
Our experience has shown us this is not on. So from 1990 you could say that we
have had to work very very closely together.
The implicit statement
is about the importance of ownership, and the potentiality for alienation where
one partner feels they have had something imposed on them which they do not
own.
3.2.3. Strategies for
Achieving Coherence
Policies, procedures and
regulations for continuous assessment must, by virtue of having to meet
validation criteria, be designed to reflect a shared institutional philosophy
(ENB 1990 p 49). Even if this were not the case, it would make sense for
institutions to rationalise their
assessment approaches from time to time to reduce the potential for
confusion, prevent unnecessary duplication of work, and promote shared
understandings. Course planning occurs to less than ideal time scales, however,
a factor which can lead to the relegation of assessment to a low priority
position vis-a-vis curriculum design. An assessment strategy created in such
circumstances can easily be at odds with the curriculum, either because the
strategy is poorly designed or because the curriculum itself lacks a coherence
whose absence is only revealed once the assessment strategy is in place. Coherence then
has to be created at the post-planning stage, with a consequent waste of human
and time resources. With a sensible timescale it would be possible to ensure
that discussion about curriculum and assessment took place in parallel at the
design stage, and to lessen the potential need to find ways of making a course
'hang together' post-hoc.
The need here is for
structures which enable rather than constrain the development of coherent and
integrated assessment policies. This suggests that what is required for all
courses is the creation of core assessment teams, headed by a member of staff in
an assessment/examinations officer role whose responsibilities would include
the provision of general guidance and the facilitation of assessment
activities. Where there are clear role definitions for each member of a
planning team and a designated responsibility for ensuring that the system
operates effectively and individuals are given appropriate support, assessment
planning is highly effective:
...we utilised the
examination and assessment officer that was within the college as a whole, and
we had a small group of midwives and him, who were looking at all our existing
assessments and we in fact drew up...certainly the practical assessments based
very much on the (...) model that's being used for the Project 2000, because we
found it fitted what we were doing anyway. And I don't believe there's any
point in re-inventing the wheel if the wheel is...applies to midwifery in a way
in which we think we'll be able to utilise, that the students will understand,
that the preceptors in the clinical field will understand.
The picture which
emerges is of a planning team with well specified roles which enable
individuals to work together in a unified way. This again suggests that where
assessment roles for team members are clearly defined and supported by a role
holder with specific responsibility for co-ordinating all planning activities,
the outcome is more satisfactory than when roles are ill-defined, and tasks
randomly dispersed amongst a set of individuals who have a range of other
equally pressing activities as part of their role. The frustration of
simultaneously juggling an assessment planning role with other equally
demanding ones is summed up well in the following comment:
When I was saying
there are problems, I mean I think it's with the documentation in the first
instance. It's difficult to, to take a huge chunk out of people's work time to
prepare documents, and you don't know where to start...
The functions of a
person in a central planning role, focussing more or less exclusively on
assessment planning and implementation, are suggested succinctly in the next
remark:
Who works best with
whom? Who has got the particular experience? (...)...very early on(...) you
must divide, you must create a division of labour, a task, who is going to do
what? How is it going to be done? By when is it going to be done? What standard
are we expecting? What difficulties may arise? Who will sort out the
difficulty? (...) We set that ground very early on so that it is really a
partnership.
What becomes
increasingly apparent is the need for continuous dialogue, facilitated by a
person whose role it is to ensure the coming together of all those working on
assessment planning. Where no such role exists dialogue takes place in the
margins, and assessment strategies which unite different perspectives and
specialist interests (i.e. which bring together branches within a course or,
where appropriate, marry assessment principles for nursing and midwifery) are
less evident. Without dialogue at all levels planning is carried out in
'pockets', and anomalies arise where none should exist. Even where there is
dialogue problems are not always easily resolvable, of course, as the comment
below from a nurse teacher speaking about another branch of nursing clearly
shows. Nevertheless, dialogue offers the best hope of achieving what the
speaker calls 'compromise'.
..it was a question
of compromising...they wanted 112 competencies and there was just no way that
we were going to...go along with that because it completely went against what
we were trying to achieve, which was an individual competency rather than one
which followed a rigid pattern.
Dialogue is also
essential to minimise the discontinuity created by mergers and amalgamations of
the old schools and colleges of nursing and/or midwifery to form new approved
institutions. It facilitates course planning, enables different experiences and
developments to be successfully incorporated, and assists the redesign of
assessment strategies in ways that draw on the best from previous developments.
Where the assessment histories of different schools and colleges were not always
harmonious, or they were at different stages in developing their assessment
policies, the need for individuals to invest large amounts of their personal
time and energy to achieve
compatibility through dialogue was evident.
Despite an obvious cost
in personal time where there is little provision of structured time for
discussion and dialogue, most people commit themselves fully to the planning
activity necessary to harmonise previously disparate systems, as the quote
below indicates.
Ultimately what we
finally did was we sort of put together the bits that were already being done here on this site, and quite
a bit of work they'd already done at X college when we amalgamated...there were
all these issues that we had to take on board... So that's how we came up with
the scheme that we have now, with a lot of changes on the way...formats had to
be changed, had to be changed to fit in with our current curriculum...
The emphasis is on a
complexity which requires 'quite a bit of work' if it is to be made sense of
without resort to the simplistic. That work is made harder where the
composition of teams is unstable because of the re-organisation process, and it
becomes necessary to spend even more time ensuring common understandings.
I was representing
mental health in the team, and the team itself changed (...) a little bit over
the development period because umm, different people came into the college and
different people had different jobs and so they came and went.
Dialogue about
assessment strategy design in approved institutions where principles and
procedures are not yet fully united, has the knock-on effect of furthering
professional development across the board:
So we have this
dilemma in that the college came together as a single institution and inherited
various degrees of developed courses, some of which were still summatively
assessed and others which were continuously assessed, which meant that you had
teachers who were familiar with such processes and teachers who weren't. And
the training circuits, in the clinical placement circuits you also had
practitioners who were familiar with continuous assessment and those who
weren't, which meant there was a need to level off right through the
organisation. (educator)
The greater the
diversity of the histories an 'amalgamated' approved institution brings
together, the greater the demand for dialogue to expose and analyse differences
of perception, and to promote institution-wide understandings.
3.3. DESIGNING
ASSESSMENT IN THE LOCAL CONTEXT: A SUMMARY
The
research has identified four key features required to ensure assessment
planning at an approved institution level which offers an appropriate structure
for devolved continuous assessment. It has also revealed how certain
characteristics of the most successful roles have facilitated the development
of successful assessment strategies in even the most difficult circumstances
where discontinuity has become institutionalised. The first and perhaps most
important feature was:
1 the identification of an assessment quality
assurance role which incorporates responsibility for the co-ordination of
assessment activities, and enables:
• allocation of specific responsibilities to individuals
• monitoring of these responsibilities in action to ensure that they are compatible and do not lead to unnecessary role strain or conflict
• monitoring of these responsibilities in context to identify where additional resources are required
• development of action-oriented responses to assessment issues
Other features included:
2. the
full representation of teachers, clinical staff, students, and representatives
of other relevant groups on assessment planning teams to ensure partnership
3. the
establishment of dialogue structures to facilitate partnership
4. the linking of assessment
planning, evaluation, and development
There are clear
implications for the development of both roles and structures for assessment
planning at the level of approved institution. The following section reveals that
there are similar implications for local policy.
3.4. OPERATING AT
LOCAL LEVEL WITHIN THE NATIONAL FRAMEWORK
3.4.1. ENB Education
Officers as Facilitators
In local contexts there
are certain roles and structures which play a key part in facilitating the
planning necessary to implement devolved continuous assessment. There are similarly important roles and
structures at a national level where guidance for course and assessment design
is issued by the ENB (in the form of written guidance and through
conversation with education officers) and the UKCC, and, at European level,
through the EC directives. The extent to which an institution acts confidently and
independently when designing their local assessment strategies depends a great
deal on whether these national (and international) guidelines have been fully
explored at the appropriate point in the development process. Crucial to this
process of exploration are the ENBÕs own education officers.
ENB education officers
have a key mediation role, intervening between institutions - in the form of the
people who make up the institutions - and the national bodies (see schema
1). A recent review of the role of
ENB education officers has established new relationships with approved
institutions. Each education officer has two roles, acting on the one hand as a
designated education officer with a direct personal relationship to a small
number of approved institutions (i.e. liaising with those institutions regarding
all courses); and on the other hand in a specialist capacity as a regional
source of specialist advice to assist colleagues in their designated officer
role. Given what has been said above about dialogue it is clear that the
quality of the working relationships formed between approved institutions and
ENB education officers will play an important part in the harmonisation between
national and local perceptions, and the promotion of a sense of ownership
referred to earlier. ENB education officers themselves express the hope that
their old 'inspectorial' role has been superseded by a more facilitatory one,
as some of their comments illustrate:
We work on the
premise that they are the experts out there, they should know what they want
for their standards and quality of practitioner.(...)I mean the UKCC produces
the content, kind and standard of the programme and what we're looking at (is)
does the institution reflect that? And we're actually monitoring the delivery
of it, we're not writing the standards for them, they must do that at grass
roots.
Professional expertise
at local level is acknowledged, and the facilitative nature of the ENB
education officer's role requires assisting approved institutions at a highly
individual level:
I can't tell you how
you should programme your computer because I don't know what software you've
got, whether it will take it, whether it's this compatible, that compatible.
How can I therefore tell them exactly what sort of assessment strategy they should
be using, because I don't know what materials they've got to work with, I don't
know what time they've got for preparation, I don't know what method they're
going to be using, how they are going to link theory with practice, what's
their regulations? They must set their own targets. But I will support and
discuss it with them, and I have found, albeit that the institutions do that,
some people are better at it than others. I can't destroy somebody who is
developing a system, because I'm not sure how it is going to work out, it might
be comfortable for them. They must
have a team of people to prepare it, they must know what their strategy is,
there's nothing to stop them reviewing it. I say to them, "OK this is a
start off."
This officer also refers
to her monitoring role, a role which she sees as centrally involving supportive
discussion. It is difficult to avoid the conclusion that where effective
liaison happens, discussion has once again played a central part in promoting
it.
3.4.2. Establishing a
Responsive and Negotiated Relationship
It is essential to the
development of a successful assessment strategy within individual approved
institutions that the relationship between that institution and the ENB is felt
to be a negotiated one in which concerns and understandings are shared. When
there is major change, it can be difficult, and therefore take time, for
individuals to come to terms with new roles, and sensitive support is
essential. In addition, because the facilitation needs of approved institutions
are highly individual (as is evident from the previous exploration of local
contexts) education officers will need to vary their guidance and support to
match these needs. As indicated earlier, education officers like to see
themselves as facilitators, but where role-holders are diffident and
institutions feel uncertain, an education officer's response must be clear and
unequivocal. Where the education officer's response does not match the needs of
the institution, conflicting or insufficient guidance can be
perceived very negatively:
...we've had so much
contradictory advice...advice in our first assessment document came from a
midwifery officer and then we developed the assessment along those lines, and
then we got our generic officer who disagreed entirely with what the midwifery
officer said and the whole document was changed.
Contradiction then
becomes perceived as confusion and lack of clarity by the Board itself about
its own guidelines.
I don't know, I think
the Board's reneged on their responsibilities somewhat...they set out you know,
however many competencies (...) really they've given you the bare bones but
they haven't sort of allowed or given any guidance on the flesh.(...) Now I
understand and I recognise that you have to err, develop them to reflect the
course, the curriculum, which is correct...and the local area where training or
education is taking place, but I think they could have given some more general
guidelines, you know clarification under each one of what they, you know, what
they expect of a first level nurse.
In such circumstances relationships
are characterised by uncertainty and lack of confidence, although in most
instances where staff felt unsupported by ENB education officers, they asked
for further guidance, rather than prescription.
I'm not suggesting,
and I would never suggest and I don't want to suggest that the Board say,
"This is what you must do," like the old syllabus, err... or like the
old book that the student used to get (...) But I think some, some more general
guidance on how the Board perceive, define assessment...
I think because we're
in the developmental stages and what seemed OK at the time ...I mean I think
people, ENB officers as a rule tend to know a good thing when they see it, but
won't or can't, I'm not sure (...) which it is, give you examples to start off
with. (...) If they're really going to give advice to me it's, "Well look
at this. What do you think about such and such." That might be a starting
point. Or (...) very definitely say, "I don't want such and such in,"
erm, "I want a two point scale as opposed to a multi point scale, let's be
clear about that." Or
"You'll be unlikely to get it passed if you go for a multi..." I
would prefer that, that they came to some agreement about it, if they gave some
examples of good practice. So that's been a problem.
Where individuals felt a
lack of direction and support in their efforts to satisfy national criteria,
'second guessing' what their ENB education officer would find acceptable as an
assessment strategy became the main aim. In such cases, the potential for
stifling development and further damaging confidence is clear. Like the
following educator, who felt that knowing the formula to satisfy her ENB
education officer would help colleagues even though she considered the result
reductionist, individuals will prepare assessment materials in
which they have no faith:
...I think if our
assessment document's been accepted then now our work with nursing colleagues
and colleagues at the polytechnic (is) to help them, because I know, I feel that
I know exactly what Mrs Smith wants now, so I can help them to achieve...you
know, the same in their documentation. Because really what you put into it
doesn't matter, it's the way it's set out and how we're actually measuring
skills.
The perpetuation of an
older hierarchical relationship between education officers and their approved
institutions can, where it exists, have the unfortunate consequence of taking
ownership away from the institution, which, to compensate, develops a collective
cynicism.
This present document
I feel doesn't measure midwifery at all. The fact that we only measure skills
means that only a small percentage of midwifery is actually being assessed
which is directly in line with the competencies laid down by the ENB...that I
feel (are) restricting...(educator)
Fortunately, however,
there are very many instances of a much more equitable relationship in which
both education officer and institution are able to reflect critically on (i.e.
to critique) national guidelines. ENB education officers and educators alike
recognise the scope for local interpretation within the national guidelines and
look positively for creativity at the approved institution level.
I mean to give the
ENB their due, they've actually said that the way you interpret them can be
really broad, but there's nothing in writing that says you can interpret them
broadly. (...) We went with the broadest one we possibly could, (...) they were
quite happy with that...(Break)... And I think the other issue there of course
is that because they are as vague and woolly as they are you find that
different education officers interpret them differently. So if you happen to
have fairly approachable, realistic education officers who keeps in touch with
what is actually going on, and the individuals and their needs, then I suppose
in that way they are beneficial. But it is very frustrating when you've been
categorically told... I just recall a conversation that I had with someone on
Tuesday that they can't do something and I was told we can in our approval
document.
(educator)
...I think some
colleges are very innovative so long as they prove it meets the rules and if
there are specific branches they meet what is required of them in that...(education officer)
Where the relationship
between an education officer and their approved institutions is genuinely one
of negotiation and mutual respect, guidelines are used as an enabling framework
and the need for dialogue about different interpretations fully recognised.
I'm hedging while I
answer this because I haven't got quite as much experience as other officers,
and I suppose yes, one could say that you could interpret it in many ways, but
often when you talk it through
with them, and often maybe it's what they want to do with it, I'm trying
to think of an example. If you take the enrolled nurses, the opportunities.
Some people read the regulations and they think, "I can't do that,"
and you'll say, "Well it does tell you you can." Sometimes they read
it, I often feel from my own experiences, that they can't do it, rather than
they can. I mean I don't think I've ever come across anybody that says,
"Well it says that," and I think, "No it doesn't." It's
often been the other way round, they've hedged on the safer side. (education officer)
Education officers are
unique in the influence they have over the development of assessment strategies
at an approved institution level because they, more than any other person the
members of an institution meet, act as interpreters of ENB guidelines.
Variation amongst ENB education officers was noted regarding interpretation of
some ENB guidelines where 'personal' views were expressed (and acknowledged as
such) in response to the needs of approved institutions for greater
clarification:
..the Board has the
perception that all education officers work differently. (education officer)
Whilst in many cases
such action is doubtless helpful and unproblematic, the issue of varied
interpretations between approved institutions did arise:
I think if there is
going to be any standardisation (of guidelines), it should be the education
officers interpretation of them, because I do think it actually leads to a lot
of antagonism between colleges, and between students. (educator)
If education officers
are to be effective facilitators of local programmes and to offer advice that
is appropriate to the needs of individual approved institutions, they must
themselves have opportunities to explore some of the issues that are raised by the
ENB's assessment guidelines. There is clearly a good case to be made for
regular regional and national meetings between ENB education officers solely
for the purpose of discussing assessment. There is also a case to be made for
education officers and members of approved institutions to meet to discuss what
they have learned from their positive developments and their less successful
innovations. Because teachers, clinical staff, and education officers are all
still learning, there is a need to provide a facilitative framework to foster
discussion at every level about development and innovation. It is not always easy
for one institution to learn directly from the experiences of other approved
institutions, because of the relative lack of experience 'out there' to draw
on:
..because it's a
complex issue in its own right, and when we were looking at continuing
assessment there weren't many places who had sort of really got it off to a
rolling start in nursing.
And in the new
market-oriented business culture there is a growing concern to get
value-for-money, which prompts institutions to reclaim the time and resources
they have expended in curriculum and assessment planning by only allowing their
work to be made available to other approved institutions in exchange for a cash
payment.[30] The education officer,
therefore, has a unique opportunity to bring people together to discuss the
more general issues they will all have experienced in designing assessment
strategies. Amongst the most important issues on the agenda would be several which
people have very little official opportunity to discuss elsewhere, including
the experience of finding that their understanding of a particular assessment
principle or practice is no sooner developed than the goal posts change, and
that lack of clarity and direction in the initial stages of course development
coupled with the introduction of new 'blue prints' as clarity is developed
makes it difficult to know what 'current thinking' is, let alone meet it.
It's about finding
out what they like and don't like, and it's also about being better than so and
so was (...) they've actually admitted to me when I said, (...) "It means
if you're a demonstration district, you get away with a lot more than we would
get away with now." He said, "Oh yeah that's right."
After completion of
assessment strategy design, the process of validation provides a further
opportunity for dialogue about assessment issues between individuals working in
local and national contexts.
Experiences of validation continue to develop as a result of recent changes
in educational approaches and institutional relationships. As innovations in curriculum and
assessment design (now often at diploma level) are created and produced through
developing relationships between approved institutions and HE, it is clear that
all participants in the process are benefitting from greater understanding
through dialogue. Whilst accounts
of satisfactory validation relationships did vary, there was a sense of overall
progression, as one ENB education officer comments on links with staff in HE:
I think they've come
to realise that maybe nursing has a lot to offer, that in the past we were seen
as certificate level, but a lot of nurse educators are now at Masters level,
and sort of are as articulate and well educated as their colleagues. (...) But
yes I think the relationship has got better. I don't feel I'd have any difficulties ringing up my
equivalent in a University and having the honest relationship to say,
"Well I've not done it before, so how is your protocol, do the protocols
work together?"
3.5. IMPLICATIONS OF
THE EXPERIENCE OF DESIGNING DEVOLVED ASSESSMENT STRATEGIES FOR FUTURE ACTION
Following a review of
literature on competence, Runciman set out a series of suggestions for those
involved in course planning and curriculum development, several of which
reflect the findings of this research. Runciman suggests, for example, that:
• the nature of competence should be widely discussed
• the extent to which learning outcomes can and/or should be defined in a precise and measurable manner should be debated
• collaboration between educators, learners, employers, and practitioners should be established at an early stage in the planning of programmes and should continue as programmes develop
• the advantages and the limitations of a competence-based approach to education should be debated (Runciman 1990)
The ACE Project has
shown that there are so many mediating roles and structures between national,
regional, local and individual interests that it is only possible to achieve
coherent policies for assessment by ensuring dialogue at all levels. It has
revealed the importance of factional
needs and interests in shaping the progress of assessment and has
demonstrated where roles and
structures have helped the development of understanding between factions, and
where they have provided constraints. It has demonstrated that interpretation
is inevitable, and by implication that opportunities for sharing insights or
causes for concern can enable closer understandings and facilitate the
development of an approved institution's assessment strategy. And it has shown that any framework for
negotiation and facilitation must be sufficiently sensitive to particular needs
and specific local circumstances to be able to cope with a demand for specific
and unequivocal information at some points (particularly at the beginning of the design process) and
for framework advice at other points. It has revealed in particular the need
for important new institutional and regional co-ordinating roles for assessment
policy, and for the establishment of regional and national forums for the
exploration of assessment issues.
The list of emerging
needs with which we conclude this chapter points forward to the recommendations
at the end of the Report. It indicates needs identified through emerging
themes, and has clear implications for the sorts of mechanisms, structures and
roles required to facilitate further development in the assessment of
competence at regional and institutional levels. There is an apparent need for:
• national and regional forums for the discussion of assessment issues
• support for all education officers in developing additional specialist assessment knowledge
• further research into the role of the education officer at the interface between assessment planning and development in approved institutions
• continued research into assessment methods, with particular emphasis on the assessment of clinical practice and higher order aspects of competence
• a designated role in every approved institution, carrying responsibility for the co-ordination of assessment planning and development
• protected time, and where appropriate designated resources, for the role-holder to carry out their responsibilities
• more opportunity for 'piloting' major innovations before wide scale implementation
CHAPTER FOUR.
ABSTRACT.
Assessment texts
'refer' to institutional rationales for assessment, in the sense that the texts
are actually embedded in the educational culture that shapes and reflects
both the culture and public statements about it. There is, however, no simple
relationship between institutional statements of intention and what actually
happens on the ground. In practice, intentions are affected on the one hand by the
fact that systems are operated by people who interpret what they have to do,
and on the other by the fact that the organisation itself is an interpreting
organism that in turn transforms official guidelines and directives. And just
as there are many intervening factors that bring about 'translations' from
philosophy to assessment documentation, so there are a whole range of
interpretations that intervene between the drawing up of an assessment system
and its use in practice. In the end the success or failure of assessment texts
as a means of assessing competence depends upon the extent to which they are
able to recognise and take account of the judgements that inform the assessors'
decision-making. The format of these documents (and the other texts that are
used to record evidence of competence) is therefore extremely important because
formats constrain or enable opportunities for evaluating the reliability and
validity of assessor judgements. They also shape the nature of the assessment
activity by framing the possibilities for evidence collection, and negotiation
over perceptions of the evidence base. Consequently, what emerges as a central
issue is whether the formats of assessment documents (together with the
mechanisms and procedures for using the documents) allow the independent
re-evaluation of evidence, or whether they depend upon the evidence of the 'accredited witness' whose
judgement is not open to consideration because no evidence is provided. Those formats
which use a priori categories in an effort to achieve a high degree of
standardisation offer a tight frame that constrains the relationship between
assessor and student within an authority-to-novice dimension. Formats which
employ categories derived from reflection and that concur with professional
experience, are much more loosely framed and provide an independent evidence
base that can be examined for reliability. Each format has its typical form.
The document which asks for a tick from an accredited witness, although
requesting that tick to indicate a whole range of things from the complex to
the atomised behavioural skill, provides no evidence of the way in which the
decision about competence was reached, nor any indication of the context in
which it was achieved. The tick becomes a surrogate for the evidence and, where
it has occurred, the discussion about the evidence. Despite these shortcomings, the tick box approach is often
seductive to policy-makers because it purports to offer objectivity through the
accredited witness's signature to closely specified categories. Assessors, however, draw on their own
experience and make judgements even about such categories and so the tick box
can be seen as no more reliable than any other subjective record of competence.
Indeed, it is impossible to remove subjectivity because human beings are
interpreting subjects; what is needed is dialogue to accommodate different
subjectivities and to enable comparison. If this happens, then assessment
enters an educative domain, but it also becomes more reliable assessment. Assessment texts in the educative
domain encourage judgements from multiple perspectives and look for interaction
between assessors and students to
identify and reveal understanding of the important professional qualities that
are less easily assessed. These
forms of text, of which the learning contract is a prime example, integrate
theory and practice through providing a triangulated evidence base that can
continue to be the focus of discussion over time, and thus provide additional
evidence of the process of development. To achieve an educative function,
assessment texts must document valid as well as reliable evidence of competence
by drawing on a wider constituency of witnesses, and by being context-sensitive.
THE FORMAT OF
ASSESSMENT TEXTS:
CONSTRUCTING THE
EVIDENCE BASE
Introduction
To accord with national
guidance issued by the ENB, assessment strategies designed by approved
institutions must ensure that students achieve separate pass criteria for
theory and practice, and that the overall assessment strategy reflects a
curriculum in which there is an inter-relationship of theory and practice (ENB,
1990 p 49; 1993 p 5.15). This
chapter seeks to outline the range of assessment texts, the evidence bases
produced and the extent to which theory and practice inform each other within
assessment texts.
4.1 THE PLACE OF ASSESSMENT TEXTS WITHIN AN
ASSESSMENT RATIONALE.
To demonstrate that the
ENB's regulations have been satisfied, detailed assessment documentation is
produced by each approved institution, giving an overall rationale, role
definitions, definitions of assessment categories employed together with the
required mechanisms and procedures for carrying through the assessments. The rationale offers a justification
and a general framework for interpreting assessment events.
Approved institutions
must recognise the multipurpose functions of assessment as outlined in national
guidance and seek to address the need for quantitative and qualitative
approaches. How this is
accomplished in practice varies from institution to institution. For example, one institution opens its
rationale by stating:
Assessment entails
judgement and an attempt to evaluate.
It may be characterised quantitatively or qualitatively. Assessment presupposes standards and is
necessarily based on criteria, held by the assessor, who may be the student
herself or an external agent. It
is axiomatic that assessment is necessarily interpretative, and therefore subjective
to some degree. Assessment
procedures ought to seek to minimise this.
Assessment rationales
have to address not only national standards or requirements, but also
individual developmental needs. Formative assessment is typically defined to
address individual student developmental needs, and summative assessment is
defined as addressing particular pre-defined standards. Where formative assessment is typically
not seen as compulsory, summative assessment is compulsory and failure to reach
a specified standard at any point in the course may then, depending upon
circumstances, result in failure
of the course. Assessment
documentation for placement areas combines formative and summative elements to
produce a structure that articulates the general rationale for continuous
assessment. The intimate relationship
between educational philosophy and assessment rationale is clearly illustrated
in this example from one institution's assessment and quality assurance
handbook:
The curriculum
philosophy addresses the benefits of a variety of learning experiences and,
therefore, a variety of summative assessment modes is appropriate. The role of the student of nursing is
seen to be active in her learning, therefore, formative self assessment and
self direction are of paramount importance throughout the course. The student will be greatly involved in
her own assessment. Opportunity
for improvement in performance will be available to the student as part of the
assessment process. This allows
for rapid feedback and remedial action should this be necessary. It also enables the student to take on
the responsibility for reviewing her own progress, and developing her ability
for realistic self appraisal.
The question then arises
whether the format of the document, together with the mechanisms and procedures
for the implementation of assessment, is appropriate to fulfilling such
rationales. For example, to what extent does student self assessment feature
within assessment texts, and how far are curricula which integrate theory
and practice represented? The
focus for discussion in this chapter then, is on the relationship between the format of assessment texts
and assessment purposes, and also on the creation of evidence bases on which to
make judgements about developing competence.
4.1.1 Texts as evidence bases.
Assessment texts lie within a range between two extremes:
those that are tightly framed in terms of specifying the categories to
be assessed; and those that are loosely framed, providing guiding principles
instead of specifying categories.
A tight format for assessment of practice typically employs a
standardised document where the categories are formed in advance of use, with
each category being ticked, often indicating levels of performance or
measurements of some kind. At the
other extreme, a loose format employs less-standardised documentation where the
categorisation is determined after considerable reflection upon data derived
from practice.
The chosen format of the
assessment text itself structures relationships between assessor and
assessee. Its principles and
categories direct attention and organise the processes of reflection and
judgement that have to be made in fulfilling the demands to construct an
evidence base. The
collection of completed assessment texts constitutes an evidence base to
support decision making. It is
important to ask about the nature, reliability and validity[31] of the evidence base
that is provided by the different kinds of format.
The following diagram
formalises the relationship between tightly and loosely framed assessment
formats. The vertical axis
signifies the continuum of evidence bases from those that are highly open to
independent reassessment, that are highly tailored to practical circumstances,
and that fit with professional perceptions of competence, to those that are low
in each of these regards. The
horizontal axis describes the continuum between assessment formats that are
tightly prescribed, aiming at standardisation, to those that are loosely
prescribed aiming to draw assessment categories from the situation itself. It will be argued that the tighter the
frame the more restricted the evidence base, and the more it is standardised
the less it provides evidence that is open to independent reassessment. Conversely, the looser the frame, the
less it is open to standardisation but the more it accords with professional
perceptions of competence in practice.
Figure 1: From Tightly to Loosely Framed Evidence Bases
The nature of assessment
evidence is an important issue
when seeking to explore, as this study does, the effectiveness of assessment
strategies. Consequently it is a
guiding theme in this chapter on assessment texts.
4.2 ASSESSMENT OF PRACTICE.
What follows is not an
institutional analysis but rather an analysis of the potential for different
features of assessment structures that are currently in existence. The analysis will develop by
discussing:
• ticking boxes - the accredited witness
• witness triangulation
• increasing the evidence base
4.2.1 Ticking Boxes - the accredited witness.
No approved institution
in this study relied exclusively upon the simplistic tick box; the majority,
however, did employ this format to some extent. The following extract illustrates the basic features of the
format:
A. Mental Health Branch

(Area)                                        (level of achievement)
ASSESSMENT                                    1     2     3     4     5
(Items: 3 selected from 13)

Awareness of sources of data                  |.....|.....|.....|.....|.....|
Considers socio-economic, cultural,
physical and psychological factors
when carrying out assessment                  |.....|.....|.....|.....|.....|
Utilizes theoretical knowledge during
the assessment of client                      |.....|.....|.....|.....|.....|
The choice, number and
wording of areas for assessment, the items contained within each area and specified
levels to be achieved varied
according to course (and branch) design. Assessment items could be as broad as 'can undertake care
plan' or could be focused precisely on a well defined behavioural procedure or
aspect of a procedure. The actual
examples in illustration A are highly complex if subjected to searching
questioning. What, for example,
constitutes a source of data? This
question alone would be substantial enough to generate a course in research
methodology. Then to identify what
constitutes a 'level' of achievement in being aware of data would be sufficient
for several philosophy seminars.
The format of
illustration A structurally limits the kind of evidence that can
be collected. First, if it is
employed within a behaviouristic framework for the standardisation of analyses
of skills and competencies, it results in an atomistic categorisation of units
of behaviour[32]. As discussed in chapter 1 such lists
can be extended indefinitely and focus attention only upon clearly observable
and measurable units.
However, once the box is ticked, the statement which gives significance
to the tick becomes a surrogate for what actually happened. Even if deep discussion had taken place
over these items, they would simply be reduced to a tick in the box. The same tick would be there even if
there were no discussions at all.
What an outside
commentator cannot be sure of is whether what actually happened bears any
relationship with what the statement alongside the box signifies. That is, there is no primary data of
the interactions which led up to the ticking of the box. It is therefore tightly framed, but low
on independent primary evidence capable of re-assessment for purposes of
moderation.
Nevertheless, the
tightly framed format is highly seductive, particularly where other forms of
assessment format can be criticised as being too subjective:
Now assessment was
the same across the board in theory and practice, it was working quite well but
I mean I accepted the ENB officer when they came and said, you know, "A
lot of this is open to a great deal of subjectivity in clinical
practice." So really they
tore our assessment apart. I mean I could see what they were saying and they
said...in a clinical area you have to have skills that are measurable that a
mentor (sic) doesn't have to think about, they just have to think, "Yes
they're achieved." They don't have to think, "Well what level is the
student at?" (educator)
This approved
institution had been compelled into a tick box approach, an approach that they
were not happy with because it overlooked the multi-dimensional and dynamic
nature of professional competence.
As already noted, there
are different assessment styles which will lead to different kinds of
interaction preceding the completion of the document. Of course, not just anyone can tick the document. It is to be presumed that the
individual who does this is considered competent to make the appropriate
judgement. Such an individual is
in effect an accredited witness.
A considerable reliance is therefore placed upon the individual who
carries out this role. It is
important to ask how much reliability can be placed upon this role.
In practice such items
as those in illustration A are in fact open to considerable variation in
interpretation which at once undermines the reliability of the accredited
witness role, and necessarily subverts the very standardisation that is
intended.
...each nurse has
their own view of looking at it. They could see a different...view point...you
know they might bring up something else that another mentor maybe didn't think
of. You know say for organisational skills, one person might think she's
organised in herself when another person might look at it as how she organises
the workload on the ward.
Educators were also
aware of this issue and highlighted the importance of increased dialogue about
assessment issues as a means of overcoming widely differing interpretations.
...I think a lot of
people make the document work and that's why you get so many interpretations on it.
I wondered whether
the piece of paper that we've got is inappropriate because it's not, it's got
no meaning, it's open to interpretation the meanings are vast so it really is meaningless because
it's open to interpretation...
Indeed, there is a
danger of the documents being used tautologically. If, according to a particular stage in the course, a student
is expected to be at level three, then performance tends to be rated level three. It is the self-fulfilling prophecy
(Rosenthal and Jacobson 19..) familiar to all educational researchers. Additionally, assessors could be
misled by the definition of the level in relation to the task to be performed
and thus confused as to which is actually the appropriate box to tick:
...I think everyone
finds them quite hard to know, the wording of the assessment is quite hard and
people always have trouble knowing which box to put you in, and the boxes don't
run on brilliantly, and so it might seem that you should be in box one (when in fact the student
should be achieving box three at that stage of the course.)
The kind of judgements
which assessors have to make are illustrated by the following three examples.
Item: Deviation from the normal

Level 1: Can participate with instructions to assist in abnormal situations
Level 2: With assistance recognises deviation from normal and provides appropriate care under supervision
Level 3: Is aware of the potential effects of abnormality in midwifery and can initiate appropriate action
Level 4: Able to interpret and undertake care prescribed by registered medical practitioner where appropriate

Illustration 2A: Use of Performance Criteria
Level 1: Exposure
Level 2: Participation
Level 3: Identification
Level 4: Internalisation
Level 5: Dissemination

Illustration 2B: An Adaptation of Steinaker and Bell's Taxonomy
(Steinaker and Bell, 1979)
Standard A: The student is beginning to relate knowledge and skills to practice but requires constant aid and supervision
Standard B: The student performs with minimal supervision, can explain actions and modify practice utilising theoretical knowledge
Standard C: The student can manage and evaluate practice effectively and justify own planning and decision making

Illustration 2C: A Standards of Achievement Model
Our data suggested some fundamental
problems, especially with the first of these examples. For many, the categories
within each box did not always reflect progression as it is actually
experienced and expressed by the student on placement. In other words, the
first levels of a particular scheme may in fact represent for the assessor and student a level of activity which is not
necessarily less complex than higher levels.
Additionally, in
adopting an assessment framework which intends to demonstrate student
development and progression, many institutions were struggling with the
challenge of maintaining a sense of continuity between levels of achievement. Progression through a series of
predetermined levels implies an
accumulation of experience and skills with each learning experience and
assessment event, building on the
achievements of previous levels. For some institutions information about
previous experience did not flow easily between placements and students were
given primary responsibility for maintaining continuity between their
placements.
As the following student
outlines, assessor interpretation in relation to the different levels of
expectation of student attainment was a problem for some:
...I brought this up
with the school...what I found with the clinical objectives is that they're
very ambiguous, they don't really give a level. Like for instance...on the
first ward, one clinical objective would be, 'To be able to measure and record
a temperature' for instance. Now one mentor or staff nurse might for instance
say, "Well I've seen them take or record a temperature, I've checked it, it's right, it's correct,
that's okay." Someone else might see it as the competency of taking a
temperature and so you'd be asked about different areas of taking a
temperature, the length of time a temperature would be taken, "Do you
know the relevant research for
taking a temperature? What things are you looking for? When would you worry
about a temperature?"...etc,
etc etc. When I was asking the
school (they said) 'Every
mentor does go on a workshop and are (is) told about what levels to go
onto." But I've never found that...
It would seem that
subjectivity and multiple interpretation are inevitable in assessment:
...it all comes down
to what you do on a one to one with somebody...it's an intensive task, it's not
something you do on a cursory level, it's something that ultimately comes down
to a feeling you have about an individual
because you spend so much time with them, and there are markers or
indicators within their performance... certain social skills have to be
present...after that it becomes a
measure of whether or not to put into practice quite subjective
components. I'm talking here about
mental health perhaps rather than
what I call the tricks of nursing which are injections and some of the more
practical procedures which are obviously good or bad, because they're
observable, measurable...I have...no difficulties with those...it comes down to
whether or not you...follow
certain patterns of behaviour like when do you give advice? When don't you give advice?...What kind
of attitude did you adopt towards somebody? What was your approach? And it has to be something which is
analysed afterwards...the process of assessing somebody's level of safety or
competence or quality if you like, is through a pretty intensive, interactive
process, trying to establish what they were trying to do, how they did it...
In this view, what is
overlooked in the so called objective approach is the educational dimension
that is founded upon interactive processes. It is through such interactive processes that the less
tangible aspects of competence are determined and assessed.
Attitudes. Hardest
thing in the world to measure, it's practically impossible. Err (pause) you can
have a feeling about attitudes, right attitude, you assume from your social mores what's a good
attitude to the patient...you can measure
knowledge. You can measure
skills. But the best psychologists around have looked at various tools to measure attitude and
have been singularly unsuccessful. There
are lots of things that you can use, but the fact that there's so many
of them tells you that there's nothing particularly works.
This educator goes on to
express her opinions on what is an appropriate form of educational assessment:
You could tighten up the schedule,
the actual assessment process 'till it
becomes...so specific that there's no variation at all. But I don't
think that's really feasible when you're educating professionals, because if
you tighten them up so far, then
there's no room for judgement in
there...
so that's another way you can measure validity; it's so tight that there's
no deviation. 'You can do it that
way and that is the only way'. Now I don't think that's good
educationally .
The fully standardised
approach to assessing practice is counter-educational. Indeed, the principles of
classification differ in each case.
Standardisation proceeds by identifying units of behaviour that precisely
fit the category box. That is to
say each unit of behaviour placed in a particular category should be identical
to any other unit of behaviour in that category. However, when the categories are subject to confusions
(for whatever reason) or to alternative viable interpretations, there will be a
wide range of possible candidates for inclusion in the category. In these circumstances, it will not be
possible to assure homogeneity[33].
The structure itself
also inhibits re-evaluation of assessments because there is insufficient
evidence to provide the basis for such re-evaluations. There is nothing either to support or
to challenge the tick in the box.
Two possibilities to
overcome these inadequacies are:
• To increase the number of independent witnesses who come to an agreement as to a particular judgement
• To ensure the recording of an evidence base for assessment decisions that is sensitive to context rather than being highly pre-determined and structured
These two alternatives
will now be explored.
4.2.2 Increasing Witness Reliability.
As has been demonstrated
by the assessment texts discussed so far, where the evidence base is weak, the
decisions taken and the criteria for those judgements are less open to external
scrutiny.[34]
For the assessment of
practice, it is the judgement of the professional - in ticking a particular
box, or in making a particular interpretation - that is at the heart of
assessment.
Most of my colleagues
will argue strongly that by the nature of the thing, in terms of practical assessment we have
to accept that professional integrity and the professional judgement of the
practitioner assessing. And I
don't accept that. I work from the other extreme and say that because they are a competent practitioner (that) does not make them
a competent assessor. (ENB Representative)
The competence of the
assessor to assess is here said to be different from the competence to practice
as a clinician. There are distinct
types of judgement involved. The
assessor's judgement is a judgement about the student's capability to judge,
make decisions and act in a given circumstance at a given stage of development
in the professional role. The
reliability of the witness can therefore be increased by appropriate assessor
professional development.
Meanwhile:
...I do have concerns
about this (assessment skill)...you know, when I'm talking to...staff at audits and these sorts of
questions I'm asking. Erm, you know, what is it you're assessing? You know, and
they often haven't really broken down these skills and often really thought
about it...they haven't really thought about the level of the learner and what
their particular needs are...they talk about things like, you know what's the
role of the designated nurse in relation to preliminary interview? What are they supposed to be
doing?...What information are they gathering about the student at that
particular time?
Even if appropriate
professional development can be provided there are still likely to be
variations in judgement. This
issue, it can be argued, can be overcome by taking into account more than one
perspective, a process of witness triangulation. The triangulation of views can be employed to draw out
both commonalities and points of disagreement. In this way individual bias can be reduced. The range of perspectives that can be
set alongside each other includes:
• the assessor
• the student
• the co-assessor
• other clinical colleagues
• the educator acting in a clinical liaison role
In line with ENB
requirements for contributions to the assessment process, students were called
upon to contribute to assessment texts.
In the texts we studied, the minimum requirement consisted of the
student's signature to witness that discussion had taken place with
the assessor at the initial, intermediate and final interviews. It also
indicated that there had been an opportunity for the student to respond to the
assessors' written comments with their own. A greater degree of self assessment
was encouraged in the type of text illustrated below, where the student
contributed their own judgement to each assessment item. While not overcoming
the problems so far discussed, it does at least offer student self assessment
and the potential for the student to challenge a particular judgement.
(Performance criteria)

Performs a post natal examination, explains the significance of the findings to the midwife, gives correct advice to the mother in relation to:

(S = student, M = midwife)

                          Intermediate interview        Final interview
                          (formative)                   (summative)
                          achieved | not achieved       achieved | not achieved
                           S  | M  |  S  | M             S  | M  |  S  | M
signed                        |    |     |                  |    |     |

breasts  - prevention of nipple damage, suppression of lactation
abdomen  - involution of the uterus, abdominal tone
perineum - healing, hygiene
lochia   - changes
legs     - oedema, tenderness

competency achieved:
signature:......................
date:...............................

Figure 3: Midwifery Assessment Schedule
In this example, an
agreement between assessor and student defines whether the 'competency' has or
has not been met. There is the
potential for greater triangulation of views, but once signed, there is no
further evidence available for an independent examiner. The validity of the assessment is thus
still dependent upon the credibility of accredited witnesses.
In relation to the
stress placed on formative assessment in overall assessment rationales the
intermediate and final assessments offer the possibility of facilitating
diagnostic development and identifying progression. However, it would be misleading to regard the one-off
intermediate assessment as being 'formative' in the fuller sense of implying a
process of continuous assessment; it offers an intermediate step towards the
final summative assessment during the final interview.
Once again, considerable dependence is placed
upon the quality of staff development for the purposes of assessment, and upon
the staff-student assessment relationships. Theoretically, if it can be confidently maintained that each
assessor is interpreting criteria in the same way, and providing high quality
feedback via a high quality consultation process, then it may be accepted
that the tick in the box, or the signed affirmation that a particular
competency has been achieved, actually signifies what it claims to be. However, if, as the data outlines, there
is considerable variation of practice, multiple interpretations of criteria and
limitations on professional development for the assessor and mentoring roles,
then consistency of quality cannot be assured. It is not only that consistency may vary between assessors
but also according to circumstance with the same assessor:
(...) my last
student, it was fine. I worked
with her a lot and I actually saw her getting better so I felt I could write a
fair assessment on her. But the
student I've got now, I've worked with her three times, four times, and I still
have to write an assessment on her.
But when I asked her whether she'd worked with anybody else in
particular, she hadn't, so I couldn't even go to them and say, "How's she
doing? Is she
improving?" whatever,
"What does she need?" I
did in the first talk with her, about what I expected of her and what she hoped
to gain, and we went through the things that she had to do, I did that. But like I say, I'm going to have to
write another assessment on her and I don't know much about her and how she's
progressed. So that's difficult.
In short, there is very
little triangulation of assessment required by the formats of documentation
discussed so far, and practical circumstances limit its possibility. This creates potential obstacles for monitoring, and if an
assessment is contested it remains difficult to see what would form the
independent evidence base on which the issues could be resolved. All educators, of course, have some
background knowledge of their students, form impressions and listen to the
views of others; consequently they are often aware of the difficulties
associated with some assessment texts:
...we get these assessment forms back of course here in the college, and
we discuss them and we look at the issues raised by them and then we try
and iron out difficulties whether that be the student contending what was
written on the form or the staff on the ward contending what they've
written on a form, and that sometimes will clarify issues related,
especially I find if they are issues it's normally related either to
professional behaviour or communication skills on the ward...it's
normally because...people have different perceptions of...how effective a
first year can be for instance in dealing with a bereaved relative, and
so those are issues that we have to constantly revisit.
Unless the evidence
actually provides a detailed record of 'what happened', judgements remain
susceptible to bias, misinterpretations and misunderstandings. These issues point not only to the
complexity of the skills involved in assessment, but also to the need to
ensure that appropriate assessment texts are created. Assessment is not simply a matter of
exercising unambiguous judgement in ticking a particular box. A considerable degree of reflection,
analysis and dialogue needs to take place before common agreements can emerge
as the basis for individual assessments.
The philosophy, the principles, the procedures and the structures
appropriate to support the assessment process need to be in place.
4.2.3 Increasing the Evidence Base.
Participating
institutions generated evidence beyond the tick and the witnessing signature in
one or more of the following ways:
• through a record of formal diagnostic interviews
• through a 'discussion statement' that cited an evidence base
• through learning contracts
An evidence base can be
used for a multiplicity of educational purposes. It can therefore begin the process of integrating theory
with practice as the assessment text becomes increasingly loosely framed to encompass
research based projects and assignments of various kinds. There is a kind of progression in the
above list as it moves from relatively tightly framed formats towards genuinely
open ended research based texts.
• Recording Discussions of Formal Assessment Interviews.
As described in chapter
six, three formal interviews are normally planned during each placement; the
first two are diagnostic, the final is summative. They act as a minimum requirement for discussion during a process
of continuous assessment. These interviews may be used in conjunction with
tightly framed assessment formats as described earlier, or with more open
assessment formats.
For example, the
assessment document may specify that the preliminary interview is to take place
in the first week of the placement with the object of including:
• a discussion of the personal objectives of the student
• the learning opportunities available during the placement period
• a discussion of progress to date towards achieving the specified levels
Typically
student and assessor are required to sign that these have taken place. As outlined in the previous section on
the triangulation of views, the practice involved in recording these
discussions varies. In some
cases, a blank page is given, with only general advice as to what should take
place. In the blank space
a record in summary form of the agreed objectives/agenda for the placement may
be provided. Other documents may
more precisely specify the questions to be addressed in the interview. Similarly, for the intermediate
interview the assessment document may include space to record progress to
date. At its minimum this
may involve no more than pencilling in the ticks in the tick box format; at its maximum it involves a detailed
open ended discussion. One typical example of the format for the intermediate interview is as follows:
PLACEMENT
MID-WAY PROGRESS REVIEW
Name of Student
.................................
Placement area
..................................
Start date
...........................................
1. Identify problems in meeting learning objectives
2. Describe modifications to be made to the preliminary learning plan to overcome the identified problems
3. Check your level of development against the expected levels in the assessment document
4. Identify areas requiring further help
5. General comments:
a) Link tutor
b) Student
c) Mentor/Assessor
Signature and date:
link teacher .......................................
assessor .............................................
Student ..............................................
Figure 4: Placement
Review Document
The space given to
record the results of the interviews varies from little more than a third of an
A4 page, to a whole page. At their
minimum, formative interviews may be regarded as a record of aspiration, and summative
interviews as statements of overall achievement rather than constituting an
evidence base of learning and accomplishment. In this document they do not yet provide an independent
evidence base which can be used for purposes of re-assessment. The external reader of the document
would still have to accept the judgements of the accredited witnesses. The evidence is still insufficient to
be able to challenge the judgements.
Even if a student agrees that certain skills are lacking, there is no
evidence base against which that judgement could be challenged. Only if a member of staff testifies
that the student is being too hard on themselves can a challenge be accepted. But who is actually right? What evidence would be required to make
such a judgement?
• Discussion statements and the citation of evidence
An approach which begins
to break down the listing of categories to ticks (supported by overall comment)
is one where broad statements are provided, against which evidence of
achievement must be supplied. One
example of this approach, for Project 2000 student community placements, used
broad statements referred to as 'stems', under which possible activities to
be reflected on were suggested, as follows:
The nurse assists the
client, as an individual or in partnership with the family, with daily living
skills and activities.
This then is
supplemented with a list of sub-statements:
• maintaining high standards of nursing care
• assisting with the activities of living to promote self-care and reduce dependence
• (others are listed)
Figure 5: Sections
from Documents.
Instructions are given
in the student profile document as to how to proceed. A series of interviews is required where 'the learning needs
and expected learning outcomes will be discussed and mutually agreed'. Completion of the document requires all
experience gained and teaching given during the placement to be recorded in the
appropriate section. The approach
allows for more detailed evidence to be recorded than in the previous examples.
The approach is worth
exploring in further detail because it gives an insight into the increased
demands it makes upon the assessing process. In the approved institution where this is operated, it is
seen as an interim step towards a more open ended process of discussion and
negotiation. The idea, as the
speaker below indicates, is to provide a general area for discussion with
students which staff can develop in relation to experience:
What we've got to get
them to think about is if we said this is an important area that we want them
to assess then they've got to find some experience that makes that live, and if
we dictate it what they'll do is try and do what we want as opposed to think
about the area. What we want them
to look at and find suitable experience and do suitable teaching for (the
student) ...
She develops this in
relation to Rule 18a:
...we were looking at
sort of areas I suppose that, that enliven rule 18A but don't try to fragment
it, because I think of these things we get into sometimes is we try and take
rule 18a and then we try and break it down into it's bits and sometimes by doing
that we actually lose what rule 18a is about which says that it should be all
sorts of skills, developing a multitude of areas and the more we try and pull
those different skills out the more we fragment it I feel. So we've got things like
"the nurse will promote a partnership with the family and the rights of
the child and will ensure that she or he's always treated with respect and the
dignity is maintained" and we
wanted them to actually say, well what does that mean? What does that mean I have to teach? What does that mean I want them to
assess? But as I say for the
beginning we've done it, such as attention giving to age, gender, sexuality,
culture, religion, personal choice, time or belonging, will be respected,
empowering the child of the family through the sharing of knowledge, skills and
resources, acting as the client's advocate as and when necessary, so we've
actually decided those areas this time, but what we'd hope to do eventually is
take all that out and just leave the gap.
This type of assessment
text moves away from a highly standardised format, and the evidence cited
reflects the true contexts and experiences of the student. In addition, evidence of teaching
undertaken is recorded.
The further developments
required to facilitate this approach clearly depend upon the quality of
professional development that can be achieved. It makes demands upon an educational assessment expertise
which staff do not currently possess, but which can be developed over time and
with the appropriate structures in place.
• The learning contract.
Another approach,
offering a more developed strategy which moves towards tackling some of the
inadequacies of the texts previously explored, is the learning contract. For example, page one of a contract asked three basic open
ended questions:
What have I achieved to
date that is relevant to this module?
What do I need to
achieve during this module?
What did I achieve
during this module?
Page two (and subsequent
pages of the document) were structured as:
objectives to include and  | Resources      | Evidence of
standards for competencies | and strategies | achievement
                           |                |
                           |                | Reflection
                           |                |
                           |                |

Figure 6: Learning Contract.
The contract was set out
on large A4 pages, the blank areas to be filled in by the student and validated
by an assigned member of staff.
The space available for the
student was sufficient to allow the development of an essay. Objectives and competencies were
identified and discussed in relation to resources and strategies. Completed examples showed the inclusion
of descriptions of practices and experiences, supported by referencing of
relevant literature.
As a result, the final product was very close to being a research based
assignment. There was a very considerable body of evidence
provided as to what exactly a student accomplished, together with integrated
understanding of the knowledge informing that practice.
This different
documentary format implies different kinds of skills appropriate to the
assessment role. Indeed it was the
case that at the approved institution from which this example was drawn,
assessors were prepared and actively supported by a network of lecturer
practitioners. The nature of the
lecturer practitioner role, based as it was in clinical settings, differed from
that of an educator performing a clinical liaison role, located predominantly
in the approved institution setting.[35]
Assessment texts which
captured the multi-dimensional nature of practice within a strong evidence base
were valued by educators in colleges that were experimenting with learning
contract approaches:
I think, I wonder
whether it would be more beneficial if we just gave a blank sheet of paper to
the staff nurse and said tell us whether this person's a competent nurse. (...) I think that (...) pieces of
prose freely given looking at positives and negatives would probably give you
more of a picture of competence than something broken down into areas which are
open to interpretation and rather vague... (educator)
This kind of thinking
suggests a move towards a method of assessment which starts to address the
whole issue of assessing theory and practice as an integral whole, in the
manner implied in the UKCC interpretive principles.
4.3 TOWARDS THE INTEGRATION OF THEORY AND
PRACTICE IN THE ASSESSMENT OF NURSING COMPETENCE
4.3.1 Texts Which Create Binary Opposition or
Texts Which Promote Integration?
Theory and practice are
often referred to as if they were in a binary opposition with each other; the
theory separated from and valued differently from practice. In an educative perspective no such
discrete binary distinction is implied.
Rather the relation between theory and practice is one of mutuality and
of interdependence. While not
identical to each other they can be regarded as two faces of the same
coin. It has often been said that
there is nothing more practical than a good theory. The question then concerns
the extent to which the evidence base drawn from practice can be employed for
further educative purposes, which include the development of theory derived from
practice. Thus what most approved
institutions term 'theoretical assessment' will be
the focus of discussion for this section, to see the extent to which it
can be integrated with evidence derived from practice.
Compared to texts used
for the assessment of practice, those used for assessing theory provided a
wider range of evidence and were more loosely framed (hence they would be
situated to the right hand side of diagram 1 at the start of this chapter). One educator summed up the general
trend, in comparison to assessment of practice, as follows:
I think very much
clinical assessment is still...it's focusing on competencies and therefore what
it assesses is restricted, it's constrained. (...) However we do have the other
side of the equation and that is the theoretical assessment schedules within
the course which the college manages, in that they certainly (...) give the
student opportunity in a variety of ways to demonstrate that they are
developing the skills of analysis and synthesis.
The tendency for assessment strategies in practice areas
to actively constrain what can count as competence in order to meet the demand
for increased standardisation has already been discussed. In contrast the wider range of
evidence which can possibly be generated from theoretical assessment strategies
suggests the potential for greater incorporation of practical experience within
theoretical texts. In the section
which follows a range of examples of 'theoretical assessment' is described. These are explored in terms
of their sensitivity to issues of practice and the extent to which it is
possible to incorporate those issues into a broad theoretical framework.
4.3.2 The Range of Methods Employed.
The range of methods
used for the assessment of theory include the following:
• literature searches
• case studies
• care plans
• neighbourhood studies
• poster displays
• personal journals/learning diaries
• critical incident analysis
• research proposals
• seen written examinations
• unseen written examinations
Each method in this list
has the potential to contribute either formatively or summatively to the
overall assessment process. Decisions about how each method should be used
vary from institution to institution.
For example, one approved institution may use neighbourhood studies only
as a means of formative assessment whereas in another institution this would be
a summative piece of work. There are however some methods which are more
usually assigned to one particular type of assessment. For example, personal
journals and poster displays are more likely to form part of the student's
formative assessment whereas seen and unseen written examinations are more
likely to be used summatively.
• Literature searches
Literature searches are
often an integral part of other types of assignment. Typically the student is required to explore the available
literature on a given subject and give a written account of the important
issues and debates embedded in the body of literature. At its most distant from
practice, literature searching is experienced as an academic exercise. When it
occurs as part of a problem solving exercise, where students are required to
explore the literature relevant to a problem they have experienced in practice
and to give an account not only of the literature but of the way in which it
informs the area of practice in question, then literature searches start to
bring these course elements together.
• Case studies
Case studies provide
students with the opportunity to carry out an in depth study of one particular
client/patient in their care. Typically
case studies are constructed from the student's actual experience of a client
over a period of time and, although students might be expected to draw on
relevant theory to inform the case, the essential material used by the
student often comes from direct observation as well as historical
documentary evidence.
• Care plans
Assignments based on the
construction of care plans not only require the student to use observational
and documentary evidence in assessing their client, but also require the student to reflect on how that client's
needs might be met and kept under continual review. Students might be expected to construct a care plan
from scratch, critically evaluate an on-going, pre-existing care
plan, or develop part of a care
plan. For example, a student in the early part of their course might only have
to write about how they have contributed to a client's assessment.
Most assignments based on care plans are constructed around the student's own actual
experience of a client on placement and, like case studies, draw on relevant
theory.
• Neighbourhood studies and community studies
These typically require the student to explore and
describe in detail the characteristics and available resources in a given
locality. They may be based on the needs of one particular client or a specific
client group. Sometimes these studies are carried out as a group project and in
such cases are more likely to be used formatively.
• Poster displays
Like neighbourhood
studies, poster displays of project work are often carried out collaboratively
in groups. They can reflect
project work which is abstract/ academic and highly theorised or in contrast
can reflect work which has been focused on practice and student experience.
• Personal journals and learning diaries
Personal journals and
learning diaries typically require the student to record their reflections on
their experience throughout the course and draw heavily on the student's
experience of practice. They may require the student to consider personal
learning objectives. Journals and diaries can be confidential to the
student, open to scrutiny by peers and teachers, or a mixture of both.
• Critical incident analysis
Although critical
incident analysis can be used as a verbal learning tool in clinical areas and
classrooms, it is also used as the focus of written assignments. Critical
incident analysis tends to focus on a particular student experience, which may
be part of everyday practice or unusual in some way. Students are
usually required to show a broad understanding of antecedent, contextual and
theoretical issues in their written
discussion of the incident chosen. Critical incident analysis therefore usually
has high potential for assessing the application of theory to practical
problems.
• Research proposals
Where students are
required to produce a research proposal as part of their assessment, the extent
to which the activity draws on students' own experience of practice depends
to some extent on the way in which
research activity is defined and understood within the approved institution.
For example, where research is experienced as a means of enquiring into
practical problems which students have encountered through their course, its
integrating potential will be high.
Where it is taught and experienced at a greater distance from the rest
of the curriculum, its integrating potential will be lower.
• Written examinations
The extent to which
examinations are able to reflect students' ability to contextualise abstract
theory, and to theorise about practical issues and experience, depends both on the style of questioning and on the
opportunity they provide for student reflection. Unseen examinations offer less opportunity for reflection
than seen papers, and questions which focus on the range of considerations
needed for a particular client are more likely to draw on
students' practical experience than questions which focus on theoretical concepts. [36]
4.4 CONSTRUCTING THE EVIDENCE BASE: A
SUMMARY
To provide adequate
evidence for the purposes of assessing competence as an integral part of an
educative activity, texts must be open to independent re-examination, must make
possible the monitoring of the assessment processes, and must provide a form of
quality assurance. From our examination of the range of assessment texts in
operation, this does not always happen.
Critical to the
professional role of assessing students' practice is the development of
judgement. Assessors need to be
sufficiently skilled to form and defend appropriate judgements if they are to act
professionally:
(assessors) need to realise that it is a crucial stage of training for a
nurse that the patient is central, that they must make their decision
based on the criteria laid down and that they should stand by what they
decide. Because no one has the right to challenge another professional's
professional judgement unless it can be proven that that professional
judgement is flawed....So I think that's where we need to be more
forceful in our teaching, is trying to get across to people that they
have a right to make these judgements, they are employed to do that.
(educator)
Continuous assessment in
placement areas requires a complex evidence base, which reflects the realities
of practical experience. It is the
contention of this project that the evidence base provided by assessment
documentation is frequently not adequate for these purposes.
4.5 CHAPTER EPILOGUE.
AN INDICATIVE LIST OF
MECHANISMS & PROCEDURES FOR COLLECTING EVIDENCE.
PRE ASSESSMENT.
informational mechanisms, such as:
• the provision of handbooks
• briefing meetings to ensure students and staff are apprised of role definitions of:
   ◦ assessors
   ◦ mentors
   ◦ link tutors
   ◦ other staff in the placement area
   ◦ students
• understand the purposes of the placement
• understand the purposes of and procedures necessary for the assessment of placement experiences and activities

professional development and student preparation mechanisms to ensure that staff and students are competent in techniques of:
• interviewing each other to identify agendas of needs and interests
• situational analysis to make explicit:
   ◦ the wider context of social, political, economic conditions affecting the clinical area
   ◦ the professional context
   ◦ the structures, principles, and procedures underpinning practice in the clinical area
   ◦ the particular circumstances of the situation
   ◦ the influences on decision making
   ◦ the constraints on action

reflection and dialogue structures to encourage:
• reflection upon practice to make explicit:
   ◦ staff tacit expertise
   ◦ comparable and contrasting cases/events that influenced decision making
   ◦ student understandings/misconceptions etc
• discussions with a range of staff and students to:
   ◦ share experience
   ◦ internalise the appropriate values, ways of behaving etc
DURING ASSESSMENT.
formative mechanisms to develop:
• diagnostic interviewing with mentor/assessor/link tutor
• participation in discussion with care teams
• the provision of accounts of practice, e.g., giving a running commentary during practice, making explicit internal thought processes; writing up accounts of practice for later discussion

summative mechanisms to ensure:
• triangulation of opinions between: assessor, other mentors, colleagues who have worked with the student, the link tutor, the student, other students
• completion of formal assessment documents
POST ASSESSMENT.
monitoring and quality assurance structures to ensure:
• role holders are appropriately qualified and experienced
• roles and procedures are actually carried out
• judgements are backed with appropriate evidence
• judgements are consistent
• assessment structures, mechanisms and procedures are consistent with rationales
• appeals procedures are available
This chapter has
considered a range of so-called 'theoretical assessments' in order to describe
their potential for integrating knowledge and understanding learnt outside the
practice situation with students' own experiences of practice. It is the
contention here that this integrating function should be a high priority when
strategies are developed for the assessment of theory.
CHAPTER FIVE
ABSTRACT
With the increased
amount and complexity of assessment activity that accompanies devolved
continuous assessment come pragmatic and conceptual problems for assessors.
These give rise to a lack of confidence among the nurses and midwives who are
doing the assessing, and especially among those who have actually been prepared
for the different role of mentor. Since reflective practice requires assessment
that is indivisibly part of the process of action, analysis, critical
reflection and further action, the boundaries between assessing and mentoring
roles have become blurred. This makes the superficiality of mentor preparation
for assessing a particular cause for concern. To cope with such problems the
professional preparation of clinical assessors (including putative assessors in
a mentor role) must focus on the development of understanding. Nurses and
midwives who are asked to assess practice in terms of knowledge, skills, and
attitudes must themselves have an opportunity for developing their own
competence in each of these. Until approved institutions are able to meet the
basic need for understanding that assessors' comments imply, assessment will be
perceived primarily as an additional burden. Without that understanding,
clinical staff, like students, will continue to seek clear procedural guidance
to frame their activities. The three main forms of assessor preparation appear
to meet assessor needs only in part. The ENB's 997/998 courses and the
City and Guilds 730 course are the official preparation for assessors. Where
successful they provide both content and experience of the process of exploring
educational issues, and equip staff for assessing in the clinical area. There
is, however, a question of whether even these courses are sufficiently long to
allow the process aspect to develop, and more particularly to give the course
members a real sense of being supported in their attempts to come to grips with
new ways of thinking and working. An unintended outcome is that practitioners
who have been prepared for the assessment of students' competence become
unofficial teachers of colleagues, through an informal cascade model of staff
preparation. This is a task for which they feel ill-prepared. Short in-house
courses offer update information but are less effective as a way of changing
conceptual maps, providing piecemeal knowledge rather than encouraging the kind
of paradigm shift in the bigger picture which allows principled new thinking.
Staff attend short courses if the course has a clear relationship to immediate
patient needs, if it provides them with access to further educational
opportunities, or if it offers them a considerable amount of personal
support. Even when courses satisfy
all these conditions, the pressure and unpredictability of workload, and the
inconvenient times and locations at which courses are offered, make
staff attendance difficult. The third and final means of preparing assessors is
through clinical liaison, by which teachers attend placement areas on a regular
basis. This is capable of providing an opportunity for clinical staff to share
worries and doubts, to discuss ideas with teachers and to come to a better
understanding of assessment criteria. All too often, though, it is an occasion
for trouble-shooting, and focuses on the problems the students have rather than
the needs of the assessors. This leaves a proportionately small amount of time
for the development of mutually beneficial relationships, and the development
of dialogue. There appears to be a need to make current professional
development structures more effective. Assessors perceive their needs in terms
of support and guidance, and look for opportunities to increase their own
understanding through dialogue. They need to be able to gain 'safe' experience
of the assessment process themselves if they are to understand it from the inside.
IMPLEMENTING THE ASSESSMENT PROCESS:
THE PROVISION OF
PROFESSIONAL DEVELOPMENT AND SUPPORT
Introduction
In this chapter we
consider the professional development experiences of the students, educators
and practitioners involved in the assessment process. The emphasis, however, is
on the group which the research data identifies as perceiving itself most in need
of professional development and support, namely the nurses and midwives who act
as assessors in clinical and community placement environments.
Prior to the
introduction of continuous assessment, assessment roles and responsibilities in
clinical areas were clear cut.
More senior grades of staff were prepared for assessment roles by the
ENB 997/998 "Teaching and assessing in clinical practice" courses
(for midwifery and nursing respectively), or equivalents such as the City and
Guilds 730 Further Education Teachers Certificate course. They were then
registered with Health Authorities to assess the 'one off' practical tests
undertaken by students in clinical areas and to complete placement reports.
With the introduction of continuous assessment considerable changes in the
operation of, and philosophy behind, assessment have occurred. In brief:
• The amount of assessing has
increased dramatically. Students are assessed throughout the duration of a
greater number of placements, and as a consequence the demand for assessors has
risen significantly. Meeting this
demand has been problematic for some approved institutions[37].
• The skills required of
assessors are more complex, and hence re-skilling is of great importance to
ensure that activities such as reflection, formative assessment and the
facilitation of self-assessment occur. In many cases, assessment is now
conducted at greater academic depth, reflecting curricular developments at
diploma level.
• The transitions in
educational cultures which have re-shaped approaches to assessment have been
predominantly led by (and hence taken on board more quickly in) educational
rather than clinical settings. As a consequence, professional development for
clinical assessors has extended beyond merely providing information, to
ensuring deeper understanding and appropriate execution of assessment
intentions within a wider educational framework.
5.1 PROFESSIONAL
DEVELOPMENT FOR ASSESSMENT
5.1.1. The Inadequacy
of Mentor Preparation to Deal with Assessment Issues
There is one thing on which the
people who have responsibility for assessing students are agreed,
and that, as the quotation below succinctly indicates, is that they cannot carry
out their role satisfactorily without proper preparation.
I think you could
have excellent assessment documentation. I think that's by the by. I think the
biggest part of assessment is the preparation of assessors. (educator)
There are two roles
basic to continuous assessment for which staff require professional
development. These are the
mentoring and the assessor roles. According to the ENB guidelines (1993: 6.5) a
mentor is:
An appropriately
qualified and experienced first level nurse/midwife/health visitor who, by
example and facilitation, guides, assists and supports the student in learning
new skills, adopting new behaviour and acquiring new attitudes.
Again according to the
guidelines (6.4) an assessor is:
An appropriately
qualified and experienced first level nurse/midwife/health visitor who has
undertaken a course to develop his/her skills in assessing or judging the
students' level of attainment to the stated learning outcomes.
Mentors are
predominantly staff who have at least six months' qualified experience, and who
have been prepared to take part in mentoring activities through in-house
preparation. This is far less extensive than either the ENB 997/998 courses or
the City & Guilds 730 course, lasting in most cases only a day or two.
Often these induction programmes are included in more broadly focused short
courses for recently qualified staff nurses. The majority of mentors find this
form of preparation entirely inadequate for the assessor role which they may
have to take on.
I did a curriculum
awareness study day and I did a workshop on mentorship. But I think, sometimes
you have no actual awareness of what you're letting yourself in for really. You
suddenly go on to the ward and you're told...you're going to be a mentor...and
I think people have different
attitudes as to what the mentor role should be....what the student
perceives (as a) mentor and what actually the mentor feels that she should give
as input to the student. (staff nurse)
I know the school
says that, "Oh yes, mentors go on a workshop." It's only an
afternoon. (student)
Since learning of the
kind which leads to reflective practice requires assessment to be indivisibly
part of the process of action, analysis, critical reflection and further action
which every student will engage in on the journey towards competence, the boundaries
between assessing and mentoring roles are blurred. The new demands of
continuous assessment relate directly to these changes in education, where the
focus is on contextualised problem solving, holistic care and reflective
understanding rather than atomised, de-contextualised skills to be performed on
demand. Ideally, the people who
hold the separate roles of assessor, mentor and teacher are able to co-operate
to ensure that assessment and the educational process are integrally related in
developing the professional competence of the student. But the ideal is only
achieved where there are appropriate structures in place to promote collaboration,
and where the role-holders feel confident in those roles. To ensure that this
is the case nursing and midwifery staff, whose first job is to care for
clients, need adequate preparation for their second role as assessor or mentor.
The superficiality of
mentor preparation is a particular cause for concern when the boundaries
between mentoring and assessing activities have become so blurred.
5.1.2. Assessor
Preparation
If the mentor role is
not currently well adapted to the assessment role, the assessor role itself is
often not particularly well supported by programmes of preparation. There are
three basic forms that professional development for practitioners responsible
for clinical area assessment may take. They are:
• ENB 997/998 and City and Guilds 730 courses
• in-house courses or workshops
• clinical liaison meetings
The first is looked for
wherever possible, and as time goes on there are more assessors who have had
the opportunity to participate in one of these approved courses. The others are
offered as an interim measure until such time as it becomes possible to
arrange for all assessors to undertake an ENB 997/998 style of course. None is
entirely satisfactory in meeting the full range of needs of assessors.
Described below
are practitioners' perceptions of the adequacy of their preparation for the
role they are already carrying out. The comments reflect a great deal of
insecurity in the role, and considerable appreciation of the support and
encouragement they are given where it does actually occur. They also demonstrate that until
approved institutions succeed in meeting the very basic requirement that
assessors have for opportunities to acquire new knowledge for themselves, and
to discuss their doubts and anxieties, they will be perceived as placing a
further burden on practitioners instead of helping them to develop their
professional expertise. The initiation of mechanisms to encourage the exchange
of ideas about competence and the assessment of practice in a spirit of mutual
education would ensure that preparation for assessment became a truly
professional development matter. In the meantime, the support mechanisms there
are (including courses and liaison provisions), appear to need strengthening.
5.1.3. Preparation
Through ENB 997/998 and City & Guilds 730 Courses
Continuous assessment
together with changes in educational philosophy has increased the scope of the
assessment role and has also increased the demand for assessors. There is, it is argued by many, an overriding
need for all members of staff to be appropriately prepared for the role of
assessor.
People who become
continuing assessment assessors of pre-registration students should ideally be
registered for six months or more, ideally they should have completed a 998
course or attended a preceptorship course or maybe being an original ENB or GNC
ward based practical assessor and had some kind of adaptation, or to have been
prepared in a way as deemed appropriate by the senior management of (the)
college. (...) probably almost every registered staff nurse is going to be
called upon soon at least to be a preceptor or a mentor as a part of the
continuing assessment and they may well be contributing to the formative and
the summative continuing practical assessments. They probably wouldn't be the
over-riding signature, maybe they should be the countersigned or whatever (...)
And this is why I am trying very hard to make myself available whenever
possible to actually talk to more and more groups of staff nurses to prepare
them for that role. (Educator)
The probability is that
it will be some time before there is a full contingent of fully qualified
assessors who have attended an approved full-length course. In the meantime,
assessment preparation is disseminated using an informal cascade model. There are
qualified assessors who see their role solely as involving a direct
contribution to the assessment process, but many find themselves acting as
supporters and facilitators for colleagues conducting assessments without
formal assessing qualifications. The result is a situation for which neither
the qualified assessor nor the colleague with whom they are working is properly
prepared. In these circumstances
there is a clear mismatch between the content of the ENB 997/998 course and the
role as it is actually practised, which involves supporting and teaching other
members of staff:
...the 998 (it) seems to me, will have to be
completely reviewed in view of continuous assessment...it will also have to be
reviewed in relation to the supervisory component of a 998 holder...as a 998
holder they'll probably do assessments but they might be responsible as a
primary nurse for supervising a colleague, another primary nurse who's doing
continuous assessment without a 998. Now that's a whole different ball game. So
that needs to be reviewed and relatively quickly. (educator)
Chief among the other
perceived shortcomings of the current ENB 997/998 courses is their relative
shortness for the task of moving practitioners into different conceptual
territory.
Considering the amount
of detailed content such a course needs to encompass to prepare nurses and
midwives to meet current assessment demands, the relatively short duration of
courses aiming to cover both teaching and assessing is a particular problem. When course time is given over predominantly to providing
access to content, the time available for attending to the exploration of
educational assessment within professional contexts of practice is
proportionately reduced. Some
individuals question whether ENB 997/998 courses adequately alert practitioners
to both the educational and assessment implications of devolved continuous
assessment, suggesting that they are instead mechanical exercises that do
little to support reflective practice:
...the 998 is a
mechanical exercise isn't it? It's about getting people through an ENB course,
teaching, assessing. I mean it's fine, I'm not knocking the course, the course
management team do what they can within what's available. But I mean they can't
produce the sort of reflective practice within supervisors that's necessary to
encourage it within students. (educational manager)
I know they all do
the 997, but sometimes the feeling that I get from them is that it's something
to be endured...(student midwife)
The issue here is about
whether the courses can deliver what is needed given the constraints of time,
resources and so on. Clearly,
there is a value to the courses; they meet a real need for those people confronted
with students who are products of
the new approaches, and with whom they have to work immediately. Those courses
which reflect current educational approaches are valued by practitioners not
least because they aid their understanding of the experiences faced by
students. This sort of understanding (of process, through experience) may well
help the longer-term development of a professional approach to assessment as
part of the learning process.
It came as quite a
shock at first because I'd never come across...I mean it's 17 years since I did
my midwifery and then we were just lectured to and it was like a totally
different way of teaching (...) they brought out things that I'd never thought
about, I mean the teaching, making teaching plans out and things like that. I
started to read more than I had done previously, a lot of interesting things,
things that I'd never heard of, you know like the first time they said,
"We'll brainstorm this." I didn't know what it was! Everybody else
did but I didn't, and just little things like that.
I can see from the
student's side a bit more from doing the 997, having to go and try and find
bits in journals, but of course the journal's been ripped out, the article's
missing, the books are not there. It's been really difficult sometimes and I
think, students are having to put up with this fifty times more than me because
I'm only having to produce three pieces of work. They're having to produce an
article a week for this project, for that project.(...) So I feel sorry for
them.
Of course, the major
function of a course for assessors is to provide an opportunity for nurses and
midwives to learn more about that role. It may be the case, as some assert,
that there is a dimension of teaching integral to the professions of midwifery
and nursing. However, the assumptions that underpin these views of how and what
nurses and midwives teach in their everyday contact with clients, colleagues
and students remain largely implicit.
Courses such as the ENB 997/8 and City & Guilds 730 may contribute
to making such assumptions more explicit:
Well I think everyone can teach in this profession but I think having the 997 guides you...it shows you what you can actually use within the area you're working which you probably never realised was there initially. You're doing it but maybe you're enhancing those skills(...) so it does broaden your mind towards, you know, how to get what you want out of the student, how to give them information so they retain it (...) you can actually find out whether or not they're gaining from their experience. So it does help, definitely. I think you've got to do it, I don't think it's umm, like pie in the sky sort of thing, it's something you need to do, it enhances what you probably already do to make you more effective.
Courses which prepare
practitioners for their teaching role by raising their awareness of teaching,
will only prepare them for the assessor role if the teaching, learning, and
assessing functions of programmes of practical professional preparation are seen
as part and parcel of a single entity.
5.1.4. Preparation
Through In-House Courses and Workshops
All approved
institutions organise short sessions to update educational staff on new
assessment strategies, especially for those not involved in their design.
Educators require an understanding of the totality of assessment strategies to
liaise effectively with practitioners and to facilitate them in their role of
assessing clinical practice. They also need a detailed knowledge of the
assessment of theory and of the marking guidelines.
...we've had marking
workshops organised within the college (...) I mean I've never marked work at
diploma level. And so it was as new for me as it was for them(...) The tutor
who arranged these marking workshops
(...) drew up markers' guides and sort of made it easier for teachers to be able
to look at pieces of work and say, "Well yes, there's a knowledge base
here," and going all the way up the taxonomy. (midwifery course leader)
And there have been
preparation sessions with staff for the introduction of the assessment
strategies(...) again that is something that we felt was important. There should be preparation, even
though it was fairly brief...(education manager)
Separate short courses
for practitioners are also held.
Often lasting a couple of days, they typically focus on specific changes
to the assessment system such as new documentation or the process of
determining levels of achievement.
It is questionable whether courses of such brief duration and
specificity provide adequate preparation for assessing complex forms of
behaviour. It is also doubtful whether they can support the sort of reflective
thinking about the relationship between knowledge, skills, and attitudes which
are necessary. Given that attendance at even the small number of days set aside
for in-house courses is not always high, there is a serious question about the
success of such courses in reaching their intended audience:
A small number of us
from the college's continuing assessment committee, which I chair, set up a
series of nine study days, three at (each of the three sites linked to the
college). Attendance at those study days was varied, (...) at the three at
Raven hospital I would suggest that we had an average of 40 to 50 psychiatric
charge nurses or staff nurses at each one, which must have covered almost every
qualified member of staff in the unit and I was delighted with the attendance
there. There was one study day that I held at Penrose hospital where I had an
audience of five, and all five of those people had come across from Thorpe
hospital to be there. In fact if I'd relied on the Penrose hospital audience I
was meant to be relying on then I'd have had nobody. (educator)
This kind of response
from practitioners was found in a range of placement areas. The reported
reasons for non-attendance are all related to pressures it would be normal to
find in a healthcare environment. It follows, then, that short in-house courses
are destined to fail unless some way can be found of moving assessment
preparation in from the margins. This would
ensure that practitioners do not have to cope with the added pressure of
attending top-up sessions located at some distance from their workplaces,
or of balancing the course against competing priorities such as staffing
difficulties and the unpredictable workload of client care.
One community educator
described the difficulties of providing educational resources to mount a
programme and secure the motivation and attendance of practitioners:
...I realised when I
first came into post that we were going to have a problem with freeing up
sufficient assessors and supervisors (to attend the sessions). Unfortunately it
was not a priority for the service side in terms of all the other things they
had on their agendas and it didn't matter how many times we raised it, it still
wasn't important until it actually came to the crunch about two months ago when
we said, "Well look, these students are coming out, can you identify these
supervisors?"
This educator managed
the situation of competing educational and patient care priorities by
staging updating
sessions which were not incompatible with workplace priorities:
...it's difficult
enough to pull a hundred practitioners off the community all at one time for a
study day(...)what we're hoping is if we get sixty then we'll do two half days
and pull thirty in at a time, and we've a chance of getting that so it's
manageable for them and manageable for us.
When assessment
strategies are designed for courses operating in newly merged approved
institutions and their associated placement areas, some practitioners are more
familiar with continuous assessment than others:
...it's interesting
too that the state of play on each site is different in terms of developing
practitioners to participate in forms of continuous assessment and indeed even
the form of assessment. (...Staff at the General Hospital) are further along
the road really in the sense of how you implement Project 2000 assessment and
continuous assessment than say other sites are. And the other thing about it is
perhaps the environment in which the change is being introduced is one in which
teachers and practitioners are already talking to each other about assessment
in the clinical situation in a constructive new way. So the ground has been
prepared really. Whereas on other sites it starts from scratch. (educator)
There is a particular
need to prepare community staff to meet some of the new demands and
expectations of Project 2000. One
community educator insisted that clinical liaison for placement in the
community needed to include a sizeable component about Project 2000, in order
to 'sell' it, and discussion about the relationship between Project 2000 ideals
and the values of community nurses and midwives. Community-based staff often
feel isolated from educational opportunities, and many felt the need to be
valued and supported as well as informed about the assessment process:
...we've actually spent a lot of time going
into the health centre to do tutorials(...)it's given us access to people to
make them familiar, because I think the community quite rightly felt aggrieved
that here was the community that's always been looked down on for years and now
all of a sudden because Project 2000's come on the horizon, "You want us
to move heaven and earth to accommodate these people." (community educator)
5.1.5. Preparation
Through Clinical Liaison
All approved
institutions have established systems for ensuring liaison between educators
and staff in placement areas. In
fact, one major purpose of the liaison, perceived by both sides as important,
is to ensure that information about assessment procedures and requirements is
shared. A second purpose, to support students on placement, is a little more
problematic, placing the educator in the position of having to decide from time
to time between the learning needs of the student and the developmental needs
of the practitioner. For example:
You know some of us have been here for a while, courses aren't available for different reasons, you know, financial or can't be released or lack of places...we are going to actually have recognised teaching sessions on topical developments...like Project 2000, PREP, continuous assessment...I do a lot of work at home and I do try to improve my existing skills but the time involved is difficult. In fact one of the suggestions at our trained staff meetings was that we actually do a piece of research about a topic ourselves and feed it back to the group, in our own time, not using hospital's resources. And the thing is that none of us can actually do it. We have not got the time out of work. And we really don't feel very well supported...(staff nurse)
The quality and quantity
of clinical liaison varies. In
some areas links are good and ongoing relationships are forged which ensure
staff are up to date and well supported regarding educational issues and assessment. However, more frequently practitioners
spoke of their needs not always being met, and educators of their difficulty in
fulfilling the kinds of commitments they would like to make to their link
areas. Practitioners' needs for
clinical liaison, and the importance of the activity, are recognised by many
educators:
It's not that we don't want to do them, don't get me wrong, but sometimes you do think well really I'd like more time to give more time to these areas.
I used to be a clinical teacher years ago before I came here, and it could be a full time job just really being on the wards and doing this when we've got heavy commitments in the college (...)I think the majority of teachers (...) would like to give more but it's difficult to do.
Many practitioners and
students are aware of the constraints experienced by educators in fulfilling
their liaison role. As previously
outlined, educators spoke of the pressures of work in approved institutions
which frequently take precedence over clinical liaison. As many institutions have merged and
cover multiple sites, some educators have difficulties allocating sufficient
time to travel to, and spend useful periods on, link areas. This is
particularly the case if they have a number of clinical links which are
geographically spread.
Some staff juggle their
workloads to fulfil the commitments stipulated by approved institutions,
although the contact is not always as regular as they would wish:
...well the teaching staff...I mean I'm speaking for myself here obviously as well (...) there really is more and more you know...workload if you like, expected of us, particularly on the clinical areas, and personally, speaking personally I find that weak. I mean we do have this commitment to give, it's a limited commitment but we are supposed to be at least half a day per week to the clinical areas, and if we can't make that up, you know maybe we save up two days in the month.
Unsurprisingly, the
result in some areas is that liaison contact is reduced to the minimum, with
trouble-shooting being the main function:
...I find that I'm only doing it when people ask. What I should be doing is doing it on a more regular basis and we'd love to do it. Erm, but it's just a time scale. But it's always this business of you're doing it when you need to do it, when there's problems...
Building up
relationships that are positive and valued by both practitioners and educators
is a considerable task. The need
for credibility within the designated link is identified as an important issue
by many education staff in order to create relationships of mutual respect:
At the end of the day
the relationship is really the most important thing...I spent 18 months on a
placement area before I started to feel that I was in a position to actually
question what they were doing and suggest changes...before I felt confident or
comfortable because their reactions to me started to change (...) initially
they saw me as a complete and utter threat, checking up on them (...) I
approached them as equals I suppose. I helped them and I think that might have
been the best thing because I set
up work books, learning aids and things like that on the ward...
...I think the fact that we have now got a fair number of people who have a district nurse and health visitor background who are given that credibility, I think it's completely different from somebody from the institution going out and selling the idea (to community staff.)
A small number of
interviewees were of the view that clinical liaison was neglected by a minority
of educators because they either did not regard it as important, or because
they felt professionally insecure in practice settings. Where clinical links were poor for
whatever reason, the situation was often acknowledged by those involved. Students were often aware too, and as
the following extract highlights, this left some feeling uncomfortable:
Although there is a link teacher at school, there seems to be a bit of a ...I don't know, a bit of animosity between school and the clinical area. I don't know whether it's because of the diploma or what, but there's definitely some aggro there. So I'm a little bit reluctant to ask advice from the link teacher.
Sustaining mutually
beneficial relationships requires effort.
Even where good links have been established, there is the potential for
the liaison role to be eroded by other pressing commitments:
...when you're in an environment like that, you don't actually need to be there. So I have to be careful that I don't resist the temptation to say, "Look, call me when you need me." I mean when things get rough here and there's a lot of work to do and I can't get over there it's nice to know that it will function on its own, but I like to be there at least once a week.
In light of this general
exploration of clinical liaison, it is unsurprising to note that specific
assessment-related input to supplement assessors' professional development
varies considerably. The following
extracts illustrate the typical range:
...we do spend quite
a bit of time going out and talking about the documentation to them. I think
the midwives are quite threatened by it as well. I tend to go out and say,
"Read it and I'll come back to you next week 'cos it's a big document,
there's quite a lot to get through," rather than just go to people and
going through it because they just don't know, they can't take it all in, it's
quite horrendous. So I tend to leave it with them and then go back. (educator)
...link teachers
working with practitioners (...) dealing with immediate issues and helping
clarify those that then leads to greater clarification of a whole if you like.
And there has been quite a lot of that in some areas although you know, in
other areas not so much. It's interesting, it's depending upon the strength of
the link teacher and that kind of approach. There have been formal sessions
initially in order to prepare people to participate in the assessment strategy
and I think they've tended to die out in sites A and B, although I'm not sure
about site C. I think they're continuing... (educator)
To give an example, the area I'm linked to, I
know there are some midwives that aren't that familiar with the course as much
as they could be to help these students, and yet I've said it would be useful
if we could spend some time together and talk through the course itself and
look at this in terms of approaches to the assessment, but it's getting them to
take that opportunity up. And that may well be to do with timings, priorities,
you know the priority of patient care, students don't come first. And I can
understand that, I can understand the difficulties they have, some of the areas
in terms of patients come first and that's right, but it creates problems for
us...And I'm sure it creates, well problems for the students. (educator)
Some of the problems
affecting the development of supportive clinical liaison may be resolved over
time; in the meantime the problems have to be taken into account.
5.2. MEETING CURRENT
DEMANDS FOR PROFESSIONAL ASSESSMENT
Unfortunately the
increased demand for qualified assessors for continuous assessment is not
matched by a proportionate rise in the numbers of appropriately qualified
assessing staff in all placement areas.
A shortage of places on courses for the preparation of qualified assessors is
a common problem in many placement areas, recognised by education, clinical
and managerial staff alike. One clinical manager described the situation of a
lack of assessors on her acute unit:
Terrible pressures.
Terrible pressures because we haven't had enough 998 places, and we (don't)
have I wouldn't say, a terribly high turnover of staff here, we're very lucky,
and we don't have agency nurses or anything like that. But we still have a
problem because we've gone for a... push, all the G grades, F grades and E
grades through, and that is difficult. And we look to the college to find
innovative ways of running the 998.
Assessment demands are
not always spread evenly throughout health districts. In some districts the allocation of precious places on ENB
997/998 courses does not reflect the concentration of student assessment needs. In areas where assessment demands are
high, staff feel aggrieved if the allocation of places is not prioritised
according to current assessing need:
And what we whinge on
about is the fact that we're desperate and we'll see one of the local health
services units who send a part-time staff nurse who may not even have students,
or once in a blue moon, taking our place (...) there is tension there. (unit manager)
This was confirmed by a
leader of the ENB 997/998 course:
But one of the
biggest problems is that it seems to be a kind of conveyor belt, a sausage
machine where managers are pushing practitioners onto the 998 course. Now quite
a, no I can't say quite a few because I can't substantiate that, but there are
certainly people there who don't have students, and in order to fulfil this[38] it's very difficult,
so they tend to develop role play situations and that kind of thing, which to
my mind is not satisfactory. So we are having practitioners on the course who
don't have students, may possibly not have students for a long while and so that's
potentially quite difficult and creates some problems.
Difficulty in retaining
staff with assessment skills once qualified is a further problem affecting some
areas. This is particularly the
case in large cities, where a high turnover of qualified staff is reported. Not only is there a lack of qualified
assessors to conduct student assessments, but a valuable resource is lost as
staff do not develop and pass on their skills and experience to more junior
colleagues:
And inevitably we are
actually teaching some people, giving them skills and then losing them because
that was the very skill that they wanted to (...) get a sister's post (...) we
have such a tremendous turnover of qualified staff...(educator)
So the difficulty is
then as far as we're concerned, is actually having experienced assessors who
have actually done something like the 998 or the preceptor course, but they're
not practising the skills. (...) I mean we've got many people actually doing the
course but the numbers of people at the end of the day that you can actually
count, who have sometime previously finished the course and now practising and
developing those skills, they seem very few and far between. (educator)
The consequences of this
shortfall are recognised, in that the quality of student assessment may be put
in jeopardy:
...the changeover of
staff is just phenomenal really. And what concerns me is that erm...basically
students are not getting assessed. Students are going through without
assessment, with non-assessment.
(educator)
A lack of qualified
assessors in some placement areas leads to an extension of the mentor role and
a change in the role of the ENB 997/998 holder to that of a supervisor:
I haven't been on the
998 course because there's a lack of places available. However as there was
only one trained person on my ward who had done the 998 course and (they)
worked completely opposite shifts to us, how we got around that with Margaret
(the link tutor) was to arrange a co-assessing role for me. (staff nurse)
...ideally you should
have a mentor (sic assessor) with...the 998 but...it's impossible. There's
only me at the moment on the ward who can assess. The college stipulates that
someone should have that course if they're gonna be a mentor (sic assessor), we need to
reach that standard, but at the moment we're not. And I know the college are
fully aware of it but it's a problem...we have two preceptors linked to each
learner...I usually have discussions with them, I countersign their reports and
go through it with them beforehand. (ward sister)
Failure to live up to
'ideal' assessment scenarios, and to provide sufficient professional
development, is common. It is acknowledged by both educators
and practitioners that the reality of assessment practices is often role
confusion and blurring:
What is meant to
happen and I don't think does happen, purely because there are not enough
people for the 998, is that mentors, and there's this confusion between mentors
and assessors and what's the difference between the two, is that mentors do
assess. However the mentors who assess should hold the 998 certificate, but not
all of them do. So that's quite problematic. (ENB 998 course leader)
These, it could be
argued, are largely transitional problems. Eventually, all will be
appropriately qualified, given sufficient time and resources. Indeed some
approved institutions have targeted the allocation of ENB 997/998 places according
to assessing needs over the duration of the project. Meanwhile, students are meeting some
quite unsatisfactory situations.
It only takes two or three placements of this kind to represent a
significant proportion of a student's practical experience.
Where initial professional development has not been totally successful, some students are, in effect, preparing assessors for their role, because of their greater understanding of assessment requirements:
I've had quite a lot of (assessment) experience 'cos I'm on my last but one placement, so there were problems initially and as we've gone on, as the course has progressed there's a greater understanding of what's expected I think. But still it's not a hundred percent from students and clinical staff. There were a lot of problems with the first one which was our delivery suite allocation. Staff had supposedly been prepared but didn't feel prepared and we weren't told anything 'cos we were sent out with, "The staff know what to do." So it was the blind leading the blind initially, but it's lot better now, eventually. (student)
5.3. ONGOING SUPPORT
AND DEVELOPMENT FOR ASSESSMENT
Practitioners develop
their confidence as assessors through a gradual learning process, resolving
issues of concern slowly as they increase their understanding of the concept of
continuous assessment through increasing familiarity with documentation and procedures:
...I think until you actually use the forms and
get familiar with the documentation... then you don't feel totally happy with it. But now we've had one set
of students I think just from doing their first interview we got to realise what
the form was like. (assessor)
Many realise that the
period of growth is a long one, recognising that confidence and understanding
take time to build up:
I think you get better at it as, as you become
more confident in yourself definitely, and your own practice. I'm confident in
my own practice now so I feel able to assess someone fairly, whereas initially
when I was very first qualified I didn't...when I was assessing someone I found
myself thinking, "Am I doing this right?"...questioning all the time
rather than looking at it objectively... (staff midwife)
Although the
accumulation of experience and confidence in the assessing role is
essential, such learning by doing
is not of itself a guaranteed method of gaining insight, for experience
without reflection does not necessarily improve insight. The development of
skills and confidence in assessment is enhanced by regular contact with
colleagues who can offer advice and a sympathetic ear. The security of knowing
that support is at hand can diminish the personal threat that some novice
assessors experience:
I'm very frightened...because while we're doing
formal assessments who's to say that my standard is too high or too low? Would
other people pass a student who I maybe would fail? So that worries me a
lot...But at the same time I feel I'm confident enough now to refer somebody if
their care was detrimental...I have gained confidence but it still worries me. (assessor)
The need to develop
understandings about assessment also applies to students. All student groups receive information
about their approved institution's assessment strategy via written guidelines,
and there is often a session with an educator who takes them through the
overall strategy. Students stress
the importance of this activity to them, but at the same time point out their
belief that there is a need for more than a one-off session at the start of the
course.
Although in the case
quoted below, the student had the opportunity to clarify issues at a later
date, students who are the initial intakes on new courses are generally
adversely affected by a hurried introduction to assessment:
Our tutor didn't seem
terribly familiar with the material and we understood that it had just been
landed on her desk more or less, and I think most of the tutors here are in the
same position. That might be untrue
but it was certainly the impression that we got. We had the examinations
officer from the college who came in at the end of our introductory course to
talk to us (...) specifically about the assessment schedule, and that was quite
helpful in clarifying our minds.
Some students comment
that their initial understandings hold until they start to be assessed or to
complete written assessments, at which point they realise that they are unsure
not only about what they were being assessed for but also how the assessment is
supposed to be conducted. Students
perceive a need for regular discussion with teachers to clarify procedural
issues and develop confidence and understanding of the system:
...we were given our
assignment title, we were given guidelines (as) to what the marking criteria
would be...but as to writing it and what was really needed, that was up to the
individual to go to the tutor and find out really. But the problem with that occurred in the fact that once you
went to one tutor you had to stay with that tutor because if you went to
another tutor you'd get totally different feedback and they'd expect it to be
different which caused a lot of controversy. (student)
It might be argued that
in a culture of mutually educative practice this perception would be replaced
with another that expressed the need in terms of an opportunity for discussing
differences of view with a range of people. In such a culture, differences in
advice about procedures would be treated as an opportunity for students and
teachers together to identify and come to a better understanding of the
principles by which assessment procedures operate. True confidence would be
engendered where all parties are able to feel the independence which comes from
an awareness of the underlying structure as opposed to the surface procedures.
The fact remains, however, that students were rarely sufficiently familiar with
these principles and so they did not have the confidence to act independently.
Under those circumstances they wanted desperately to be given clear procedural
guidance, and to have open access to a teacher who could reassure them about
these procedures when there appeared to be some difficulty in operating them.
In the same vein,
students sought consistency between educators engaged in facilitating and
marking assessments of theory. This time, however, the issue is not simply one
of insufficient understanding
of the principles of procedure. There is clearly a case to be made for fairness
and consistency in assessment criteria. On this matter there is a need for
development for all those involved; teachers too benefit from opportunities
for dialogue on assessment issues:
...we're continuing
to develop staff in relation to marking and so on, so we've had some workshops
that people attended in order to try and develop that. Now I think we need to do quite a lot
more of that, but at least we haven't forgotten about that. (education manager)
Whilst there is a perceived need for developmental activities and the sharing of ongoing assessment dialogues throughout approved institutions, staff involved with the assessment of practice are identified as having some of the greatest needs. Given their uncertainty about how they should conduct assessments, and the fact that many do not see the assessment activity as an integral part of the clinical placement teaching and learning process, the variability of quality in the professional development they receive is of some concern. Of particular concern is the apparent divergence in practice within approved institutions for developing and supporting practitioners as they move from dependence on the purely informational to an understanding of the complexity of the assessment process:
And if half the people or whatever number have been trained literally as they work to operate within one system, we are then asking them to assess with a different philosophical view...without...apart from a 998 course here or there, I mean not particular help to do it. If any industry was re-modernising it would put in massive resources to change it, to operate the new machinery. Nursing somehow hasn't put that (in), and after all, assessing that they're competent to practise in this new way when you've never practised in it...it's like asking me to judge something I know nothing about. They would argue that there isn't that much difference, I think there is a tremendous gap. (educator)
And I think the other
problem is that practitioners haven't as yet the framework to judge things by.
In other words what I mean by that is that they tend to be able to relate
clearly to and very easily to, generally speaking, traditional methods of assessment
which they have been exposed to themselves but they do have a problem, and
understandably a problem, not a criticism, of relating to new methods and with
new intentions and I think we've got some work to do in that respect. (educator)
The development of a
'framework to judge things by' is vital if the assessment of competence is to
be effective. Where practitioners do not receive regular professional
development, new educational approaches, curricular and assessment strategies
sometimes reach practitioners via the arrival of 'new' students on placement.
The quality of staff responses to students on Dip HE level courses appears to
be governed by the strength of practitioners' knowledge, skills, understanding
and confidence in both the educational programmes and the assessment processes.
The kinds of activities engendered by these educational and assessment
approaches, i.e. questioning and reflective practice based on sound knowledge
and research evidence, are perceived in some cases as a positive challenge
which stimulates and enhances staff skills and knowledge base through dialogue,
and in others as a threat to their skills and knowledge base.
But there's a lot of midwives who feel quite threatened by the students, diploma students, (...) they've got the experience of midwifery obviously in lots of practice, but in terms of thinking diploma that's quite difficult for a lot of them, understandably. Many of them haven't done research, students get research in their programme now, a lot of midwives don't know the first thing, you know, critiquing pieces of research, that's a problem isn't it for the student looking at the issues of practice and questioning practice. Sometimes midwives find it very hard to respond to the questioning student. (educator)
I mean a lot of people are very threatened by us, being called a "new-fangled student" and "you're a funny Project 2000 thing aren't you?"...I think a lot of the time they feel very threatened, they think, oh that when we're qualified we're going to step up the ladder quicker, but in practice we're not going to do that. (student midwife)
The need for
professional development to extend beyond a small number of informational
sessions is clear. Ongoing, regular contact is required to support assessors
and develop assessment practices. Dialogue about assessment skills and
practices, and the sharing of experiences is identified as a vital ingredient:
...the feedback we
get from them (assessing staff) is continuous because you invariably see them
at least once a week, once a fortnight maximum (...) as a clinical link. And so
you are not only responsible for the student in that clinical area but you're
also responsible for the supervisor, you've prepared the supervisor, prepared
them for that particular student. (...) So there is a very definite feedback
process(...) it's usually about their worries and fears, about a) whether
they're doing the right thing, b) how do they actually do the mechanics of the
thing and c) the levels at which they're pitching. (educator)
...I mean like...I would like regularly to meet. I mean I go down once a month, I would like regularly to discuss with the unit I liaise with, which I feel I've got a good relationship with, (...) some of the educational issues. I mean it's things like, O.K. we've had continuous assessment now since June. There must be all sorts of things that they want to discuss with us but I don't think anybody's actually yet had a meeting which actually ...we've got together with those staff and say well, "What difficulties have you had so far?" (educator)
In some approved
institutions, ongoing support to discuss progress with innovations and address
issues such as 'standards' and 'failure' is provided via regular forums. Most
approved institutions run forums every two to three months, and qualified
assessors are often obliged to attend a minimum number to retain their status
as practising assessors. [39] Planned activity which
stimulates dialogue and uses a critical incident analysis approach is valued by
assessors. It is unfortunate, therefore, that some forums do not focus on
assessment experiences and consequently critical debate and dialogue are not
stimulated. Attendance varies
considerably too. And as was the case with short in-house courses and
workshops, whilst some forums are well attended, reports and observations at
others reveal that teachers outnumber practitioners. Pressing workplace demands
and difficulty attending forums at sites other than the practitioner's workplace
are identified as problems for some, as well as lack of enthusiasm and low
motivation for forums that do not usefully tackle assessment experiences.
I think the problem always is, where's the time to do all these things and where are all the resources to replace the staff coming off the ward and also from the educational centres, where are the resources? Well I mean you can always say, well it's part of their work anyway, but it's so...in this changing climate there's just so much else to do isn't there? I mean that's a one day mentor workshop and a one day on continuous assessment...I think it's safe to say we need a lot more than that for the issues to be aired properly(...) they could be looked at in assessors meetings, (...) perhaps break into small groups. (educator)
Practitioners
acknowledge that the resource implications of professional development and
ongoing support and development are not inconsiderable, but remain certain that
it is essential to provide support sufficient to ensure that the assessment of
competence is carried out in a way that promotes the registration of high
quality reflective practitioners.
5.4. PROFESSIONAL
DEVELOPMENT AND SUPPORT: A SUMMARY
In a study of the
effectiveness of assessment strategies in assessing the competencies of
students, it would be easy to perceive the needs of practitioners as something
to be attended to once the system is up and running. It is clear from the
comments of those interviewed, however, that structures for the preparation of
and support for assessors must be in place from the very start so that
practitioners can begin to gain confidence in their new role. Without proper
professional development, assessors are obliged to 'do the best they can' in
moments that they have free from their main nursing or midwifery activities. To
be able to operate assessment so that it is integral to the teaching and
learning process, assessors must be able to develop their own understanding of
the knowledge and skills necessary for competent practice, and have the
opportunity to reflect on their own attitudes. Assessors perceive their needs
in terms of support and guidance, and look for opportunities to increase their
own understanding through dialogue. Currently they express a preference for
developing their understanding ahead of discussion with the student. They are
not confident about open discussion with the students because their own
understanding is at an early stage. This implies a need for practitioners to be
able to gain 'safe' experience of the assessment process themselves in order to
begin to understand the process from inside. If this experience is to be given,
and assessors are to be given the professional development and support they
need and deserve, there are a number of issues to be resolved. It is necessary
to ensure, amongst other things:
• that the number of qualified assessors is adequate to meet assessment needs
• that courses reach the total population of assessors
• that assessors receive ongoing support to:
- tackle competing priorities in the workplace
- free people to have time for assessment mentoring
- have time to carry out educational liaison
• that the sense of de-skilling and the feelings of threat typically associated with having to handle the new are overcome.
Opportunities for
inter-assessor dialogue and mutual education, as well as an understanding -
through experience - that the central processes of assessment are in fact
central to the caring process, will increase as a consequence of the
introduction of strategies to ensure the factors mentioned.
CHAPTER SIX.
ABSTRACT.
Learning and assessment
take place alongside each other and, in one view, are integrally related within
the 'learning institution'. The institution is the place where education
occurs, but is also itself learning; it provides a context and at the same time
modifies that context in response to what happens when learning and assessment
take place. The 'learning institution' has structures and mechanisms for
encouraging reflection on these processes and consequently affects the processes
themselves. Individual experiences suggest that the institution does not learn
at a constant rate and evenly across the board, though, and it is those
differences that students and clinical practitioners report when they talk
about the problems they experience in doing assessment in the workplace. Staff express
their concern in terms of the work conditions they have to contend with and the
strength of liaison and support they receive, while students talk about the
commitment and availability of their assessors. Each is referring to pragmatic
and often institution-wide problems rather than making a criticism of
individuals, although of course both realise that the relationship between the
individual nurse or midwife and the individual student is of some considerable
importance. There are many pragmatic constraints that can turn the link between
an assessor and their assessee into a relationship in name only; unpredictable
staff or student absence, incompatible shifts, and a general difficulty in
finding time to be together can give rise to a situation in which the assessor
rarely sees the student at work. In such circumstances it becomes extremely
difficult to monitor progress in terms of process, or to learn holistically.
The structural problem of the relatively short duration of many placements
exacerbates the pragmatic problems
of 'mismatch', for neither intermittent contact nor short contact is conducive
to the development of a holistic perspective on care through reflective
practice. What becomes the over-riding issue is the 'competition' between the
demands of the work and the demands of assessment, with the needs of the client
taking precedence over the needs of the student so that the latter are placed
in opposition to the former and marginalised. This construction of assessment
as something you do 'on top of your job' if you have the space, fails to
recognise the importance of dialogue about the very issues of pressure and
priorities that feature so centrally in it. The value placed on good communication
by students and assessors makes it clear that they would welcome the further
development of supportive relationships through which they could learn to
understand better the assessment issues that trouble them most. In the
meantime, learning about assessment and its possibilities occurs in the
antecedent events, through a hidden curriculum, and at a tacit level. The
models that students develop are the result of attempting to 'do the job' and
build a nursing or midwifery theory.
As they observe and work alongside staff, they learn the essential
elements of the job, and thus what counts for assessment purposes, discovering
in particular that they cannot always take for granted that the assessor's
values will be the same as their own, or even those of another assessor. They learn the importance of reading
the situation, and discover the relationship between theory and values. In
antecedent events, then, the knowledge and value base that provides the
framework for looking at practice is built up. Successful participation in
assessment requires an understanding of the 'rules' in the first instance. But
when the rules have been 'learned', formative assessment can be developed.
Since formative assessment is closely allied to work processes, its proper use
facilitates the internalisation of reflective practice on the one hand, and
contributes to summative assessment on the other. The 'learning institution'
has a tendency to provide support for formative assessment by offering
encouragement and support through the establishment of organisational
patterns that allow continuity of contact and time for conversation and
reflection. By monitoring its activities it develops a set of principles for
learning and assessment that foregrounds interpersonal relationships,
discussion, and action.
TOWARDS THE LEARNING
INSTITUTION:
THE EXPERIENCE OF
ASSESSMENT, LEARNING, AND MONITORING DURING CLINICAL PLACEMENTS.
Introduction
This chapter examines
the close relationship between assessment and learning from the points of view
of staff and students. Briefly,
the intended aims of student placements in terms of learning and assessment events include:
• providing appropriate learning experiences
• ensuring that students are progressing towards, and achieving, 'competence'
• contributing to the integration of theory and practice
• providing formative feedback on students' progress towards achieving competence
• opening the horizons of the students towards future developments in the profession
• compiling documentary evidence to support summative judgements made regarding the students' levels of achievement
In order to ensure that
these aims are met, monitoring must focus upon a) the mesh between course
intentions and the contextual conditions within which they are realised, b) the
decision making events, and c) the extent to which monitoring structures and
strategies reveal and facilitate action upon issues, concerns and suggestions
for improvement. In short,
structures and strategies for assessment, learning and monitoring comprise what
may be termed the 'learning institution', that is, an institution having
structures appropriate to the aim of learning from experience and capable of
modifying its structures accordingly.
Each of these points will be discussed in turn under each of three major
sections:
• Contextual Issues
• Assessment and Learning Events
• Monitoring - Towards the Learning Institution
6.1 CONTEXTUAL ISSUES
Occupational cultures
form the context to everyday practice.
They mediate 'formal conceptual' systems, translating formal concepts
into everyday occupational ways of categorising and constructing meaning as the
basis for action. When referring
to assessment, students and staff talk about 'feedback', 'discussion' and to a
lesser extent 'reflection' rather than formative activities; and 'doing' the assessment and filling in
documentation, 'forms' or reports rather than summative assessment. The formal concepts are interpreted in the language of
everyday activities. The language of 'doing' takes precedence over the language
of academic analysis. It is what
Schutz (1976) referred to as the primacy of the pragmatic attitude pervading
everyday affairs. In the pragmatic
attitude the aim is to carry out a particular task 'for all practical
purposes'. Placing a tick in the
appropriate box, for all practical purposes, signifies the completion of an
assessment task. In the pragmatic
attitude this task is to be acquitted as quickly and as clearly as possible
(cf. Cicourel 1964; Douglas 1970).
For example, if a student has completed two weeks in a clinical area without
causing any problems, then for all practical purposes that student may be deemed to have
successfully completed the placement experience. This may seem
cynical. However:
I think it's just
like the old system really. If you go about things in the right way, if you're
fairly pleasant to the staff and you get on on the ward, you're O.K. kind of
thing. It's not a question of whether you're giving the right advice to women
because they don't really know what you tell them. It's erm, you're not assessed
closely enough or supervised closely enough.
Or, in contexts of
pressure:
My preceptor wasn't there, I wasn't working with her for a long time...I'd gone to the sister before and said, "Well, since my preceptor is not here can I have somebody else to do my interview?" and she said, "No you can't, you have to wait for the preceptor." When my preceptor came (back) she (said), "Well I want to find out hows she's been working" and and in the end she ended up saying, "You've got to go to the sister to have it done." So...it was the last day of my placement and in the end the sister...just took the book and she just ticked it. She just wrote a few comments. And some of the things she wrote, I never even did them. (student)
On the other hand, the
reflective attitude interrogates the contextual features of practice,
contrasting sharply with a non-questioning attitude:
..there's certainly
no in-depth discussion of like the reason why you've made any decisions or
anything like that. It's, "Oh I've been watching you work and I think you
work very well." That's about the level it gets to. (There is) no,
"Why did you do this? Why did you do that? Can you explain this? Have you
got any questions?" That's about as far as it goes and of course you say
no. (student)
Interrogation of the context of the placement experience inevitably raises issues. The main issues raised during
interviews can be described in four broad categories:
• approved institution/placement area liaison and continuity of support
• course constraints
  - short placements
  - timetable inflexibilities
• placement/work conditions
  - shifts
  - staff absence (sickness, holidays)
  - competing placement priorities
• individual commitment to role performance
These will be discussed in turn, exploring their implications for the 'learning institution'.
6.1.1. Liaison and Continuity of Support
Educators from approved
institutions liaise with practitioners regarding the timing and nature of
student placements. Despite
periodic educational audits of placement areas, circumstances may alter by the
time a student arrives on placement due to changes such as skill mix or the
speciality of client care provided.
Consequently, the need to maintain regular liaison between educators acting in link roles and placement area staff was highlighted. Such communication must ensure that
placement area staff receive sufficient information about the forthcoming
student placement, in good time.
The liaison role is thus essential in ensuring that preparatory mechanisms
are in place.[40]
For the duration of each
student placement, it is intended that formal assessing relationships with
clinical staff are arranged. The issue of which staff have official assessor roles is not always clear; however, all students are assigned to one or more named members of staff as their assessor(s). The rationale behind allocation to more than one member of staff
is to provide continuity of student assessment and support.
The specified number of
shifts during which students and assessors should be working together varied
from college to college, in line with directives in policy documents. Although there were accounts by
students of well maintained relationships which met or exceeded policy
requirements for students and assessors working together, there were also
reports of problematic relations by many students, assessors and educators.
There were reported
examples where the clinical link was effectively in name only and
practitioners did not have regular contact with their link tutor. Where this was the case, it was
not conducive to building assessor motivation or confidence. Most interviewees said, however, that
they would feel able to contact the tutor if any problems arose during the
student's allocation.
6.1.2. Course Constraints
Short placements,
especially in the initial stages of courses and sometimes only two weeks long,
are typically thought unrealistic for establishing and developing relationships
through which formative assessment can occur and summative decisions can be
reached. Students commented on
their need for an initial settling in period for any allocation to 'find their
feet', become familiar with routines and start to develop relationships with
colleagues. Students described
these activities alone as taking approximately a fortnight, hence expectations
of assessment activities occurring within such periods could be argued to be
unrealistic:
The other aspect of
the whole thing that we've got to try and address I think, and again I think
it's very difficult to do with the present course structure, is the length of
time over which people are assessed. I mean it is...well the most positive way
of putting it is a challenge to people to assess (...) students over two weeks.
It's probably not very reliable (...) I think we need to look at when
assessment occurs and the length of time that is allowed for that assessment
and it may be that in some areas we don't do summative assessment, we do
formative. (educator)
The difficulty we've had here is in the allocation of students to the programme. They have only spent two weeks in some places as a placement, well that's five working days and that's impossible. There's no ownership by the student nurse and there is no ownership of her mentor to look after her for five days, ten working days of which at best you might get seven working days together and on day number six she's got to assess that person and that's a nonsense(...)it makes continuous assessment a mockery, I mean you might as well almost, God forbid, go back to the old four ward based assessments because all you're doing then is taking a snap shot. Well that person can perform good, bad or indifferently on a snap shot, but if you're saying as one should with continuous assessment, on a sustained period, if you've hardly got to know what that person's first name is and you are writing an assessment of that person which will give them their registration or not, the task is not only onerous, it becomes an impossibility. (clinical manager)
The brevity of
placements not only affects commitment through a lack of a sense of value of
the assessment, but also runs counter to the fundamental principles underlying
continuous assessment. Many of
these issues are already being tackled by approved institutions and remedial
strategies set in motion. These include increasing placement length, removing some short placements from courses, or substituting formative activities for summative assessment.
Placement periods were
also perceived to limit the extent to which a student could develop a holistic
sense of care. For example, one manager, speaking about a care of the elderly placement, commented that because students had to leave the placement area for a study day each week at the approved institution, and because of limits on the length of their working day, they could not gain the full experience of care.
They therefore missed important elements of the holistic nature of care
and its management.
6.1.3. Workplace Demands
The rhythm of everyday
work in the placement area does not necessarily mesh smoothly with learning and
assessment intentions. To ensure this mesh, the structure necessary to underpin assessment and learning experiences must be clearly and appropriately worked out. Unless this structure is compatible
with the structures underpinning the work environment there are likely to be
conflicts.
The main obstacles to
the smooth functioning of the placement element of the course experienced by
students were:
• incompatible shifts due to holidays, night duty and requests for specific days off
• unpredictability due to sickness or changing workload/priorities (requiring assessors to forfeit working with their student for that shift to comply with care or management priorities)
These are illustrated in
the following comments:
Sometimes I've been
luckier than some of them. I've managed to work with my mentors quite a lot but
I haven't worked with either of them for the last three weeks, which out of an
eight week allocation is quite a lot. But prior to that I'd worked with them
fairly frequently. (student)
I
think that's the ideal. I think it would be nice to erm...say that they do work
very closely together (...but ) things
like holidays, study days, students requesting off duty,
sickness on the ward...other people being off, their mentor has to be in charge
of
the ward or whatever can interfere with the process. (educator)
I know there are
difficulties with assigning them to a clinical supervisor/assessor and matching up the duty rota so they
spend at least two shifts with that person. (educator)
The extent to which
responsibility was passed from the named assessor(s) to other members of staff
when disruption of student/assessor relationships occurred, varied. Where such back up systems worked or
did not need to be activated too often, students felt supported[41]. However some students had more
uncomfortable experiences where there was no 'safety net' of continuity:
...you're passed from
pillar to post really...you work with one person one day and another person
another day and you, I don't know, there doesn't seem to be the time to sit
down or you don't feel it's appropriate to say to somebody, "Would you
mind sitting down and going through this (the documentation) with me?"
Especially when you don't really know them or they don't know what you want from it...
The amount of time that
students and assessors spent together was obviously important. When students
felt 'passed from pillar to post', it was apparent that opportunities for
developing relationships in which quality, meaningful assessment could occur
were reduced. Examples such
as this reveal the essential problem of making things happen in social
contexts. Unlike chemical
entities, or programmable robots, the human agent in the system has to be
personally committed to making something happen and has to weigh the relative
merits of competing commitments in conditions of scarce or limited resources.
6.1.4. Competing Workplace and Assessment
Demands
(Clinical staff) do their best and they work very hard, and their commitment and motivation when I audit the wards, they spend their own spare time, practically in every instance. When I audit the areas there'll be a current awareness file, err badly typed up things for the students information...photo-statted sheets out of books. That staff nurses...do this in their own time because their work time is taken up caring for patients, and they are spending their own work time supporting the students, and I think when you go out there they show you this. (educator)
The issue of competing
priorities in placement areas was raised by many individuals. The extent of any problems did
vary. Some areas did not
experience difficulty in accommodating client needs and students' learning and
assessing needs, however in others, the volume and increasingly involved nature of student assessment[42] was placed in direct
competition with the fluctuating demands of client care needs. How staff tackled these conflicts whenever they arose varied considerably, according to accounts in the data.
It was apparent that
client need took priority. This
stance was understood and unquestioned by clinicians, students and educators
alike. However, the extent to which students' learning and assessing needs were marginalised and placed in opposition to client care needs, rather than staff working with the situation and utilising all available learning opportunities, was an issue of concern in some areas.
The course of action appeared dependent upon the motivation, morale and skills
of the nursing and midwifery staff.
Where staff felt supported and confident with their clinical and
assessment skills, and had experience in each, they appeared more able to work
with client care demands and meet students' learning and assessing needs. Where this was not the case, the
quality of student placements was affected:
And in some ways it's just not a priority and they will say to you, some midwives will actually say to you, we're short staffed, we're busy, we've got two midwives on, we've got three students or four students or whatever, and my priority is the care that I have got to give and therefore, you know, the students will just have to get into that. And O.K. that's the reality of practice today and in some ways the students have to learn to adapt to the realities of practice when these things are like that, that's part of their learning and provision of care, prioritising care, but it's not the ideal for a student who is new, trying to learn midwifery practice. And learning doesn't get planned, it happens ad hoc, which if it's persistently like that it's not a very good learning experience. (educator)
The increased demands of
assessment, and having to facilitate students on an ongoing basis was stressful
to some staff, despite the overall enthusiasm. The effect was illustrated by a midwife who reported that a constant round of student
supervision and assessment did not leave much space for her own professional
development:
...there's a group that's gone out, the
second group and we've tried to make sure that the first people don't have
continuous assessment this time, so they get a break from that. Otherwise it
means that everyday you go to work and you've always got a student, and that's
a bit unfair. You need some time to look at your own practice. (educator)
For the student,
placements may be experienced as precious, unique; however for the assessor,
the student may be perceived as just another student from yet another course:
You're a named
midwife to a particular student, so you might end up with a student like I've
had a student on this block, and when this lot go you get another one next
block and you just carry on. And
you fill out the initial form, going through the areas that you want to develop,
and then the middle part of the form where you're going through areas, how
they're doing. And then the final
bit, how have we done? Have we managed to cover all these areas?
6.1.5. Individual Commitment to Assessing
Roles
Commitment to assessment
relationships affects their quality and hence the educational outcomes or
benefits. From students' accounts
we learn what is important and valued by them in the assessor/student
relationship. There were accounts of relationships which worked well and where students appreciated the benefits. Such
accounts provided insight into the
features (including mechanisms and procedures) required for effective
relationships; they included continuity and support in facilitating the
development of skills leading to independence (while providing appropriate levels
of assistance), and giving the kinds of feedback that the students felt they
needed. Equally, when there was
communication breakdown and responsibilities went unacknowledged, the essential
features of an effective system were noticeable by their absence. There is, of course, a sense in which:
...you're going to
learn I think wherever you are if you are willing to learn and you've got
mental faculties...but it's so helpful if you've got a supportive network
around you and people who are encouraging you rather than just leaving you
to...discover things for yourself. (student)
Being able to watch
others at work and learn from observation is important, however:
...although it's good
to work with other people and see how they are doing things I don't feel like
I've had anybody to sort of fix to or have a relationship with that I can you
know, sort of confide in really, which is a pity.
There is a close
relationship between good practice and educational processes:
And again the people
I aspire to be as role models also seem to turn out to be the best teachers,
they enjoy teaching and they'll ask if you want to go over anything (...) and
they'll get you doing the practical...like doing vaginal examinations and
asking you what you're finding and making you apply what you're finding, and
like, "What does this tell you? What does that tell you?" Whereas
some of the others will let you sit there let you do the observations like
blood pressure and pulse all afternoon. They'll do the vaginal examinations,
you'll ask if you can do one and the answer will be, "No there's no need
for you to do one." Or
they'll let you do one but they won't discuss it with you which to me...
there's no point doing it because it's very difficult to assess what you're
feeling when you first start.
The reflective
capabilities, motivation and commitment of students as agents in their own
learning, while vital, are not sufficient in coming to grips with professional
action. The above student comments
refer to their learning needs and psychological needs such as 'confidence
building', someone to confide in and so on. However a further, educational dimension is
needed, so that each student is
facilitated in making links between the pratical application of knowledge and
not left to 'discover things for yourself'. The educational processes, psychological support and
assessment functions must all work together. This involves attempts to 'get
inside' professional action for educational purposes. For this, dialogue with experienced professionals is
essential. It requires time and
commitment and must not be marginalised by shrinking resources, variable levels
of commitment and competing priorities.
6.1.6 CONTEXTUAL ISSUES: A SUMMARY.
Everyday work demands
provide the context within which learning and assessment takes place. Reflective practice interrogates
everyday workplace activities and is the condition for improvement in practice. The processes of education and clinical
practice are not mutually exclusive.
For the intentions of reflective practice to be realised an
appropriately structured environment must form its context. That environment itself must be capable
of responding to needs and modifying its own structures to meet those
needs. This includes all the elements of staff time to commit to learning and assessment events, the professional preparation, personal commitment and liaison roles. The quality of such elements frames the quality of assessment and learning events.
6.2. ASSESSMENT AND LEARNING EVENTS
There are two kinds of learning and assessment events to distinguish. The first are those which constitute professionality. Professional action involves learning from experience, assessing courses of action and performance, and making judgements about outcomes. The second concerns the learning and assessment events that relate directly to the performance of the student. These latter may either be perceived as an additional burden or as part of the continuum to professionality. In this latter sense, student assessment and learning can be integrated into daily professional activity.
From the viewpoint of the student, any activity which happens prior to an 'official', summative assessment decision can affect students' perceptions of what is important in judgements about nursing or midwifery competence, that is, about what it means to be a professional. Meanings about learning and assessment are constructed in various parts of the placement experience, culminating in the transaction(s) between student and assessor which take place when assessment documentation is completed. 'Official', summative assessment decision making is not a stand-alone occasion, but is embedded in the composite assessment 'event' that runs through time and continuously shapes both the assessor's judgement of the student and the student's values about and attitudes towards particular nursing or midwifery practices. The features of this composite event can be discussed under the following headings:
Antecedent Events
• Tacit/Hidden Aspects of Learning & Assessment
  - differing ideals about practice
  - values and attitudes toward client care
  - building a knowledge base and rationale for practice
• Handling the Tacit or Hidden Aspects of Learning and Assessment
• Formal Formative Relationships for Assessment
Reaching Summative Assessment Decisions
• The Scheduled Assessment Events
• Self Assessment
• Questions of Pass and Fail
6.3. ANTECEDENT EVENTS: TACIT OR HIDDEN
ASPECTS OF LEARNING AND ASSESSMENT
Learning takes place all the time. As well as the intended processes and outcomes of a particular learning environment, some learning takes place which is not openly expressed, not necessarily intended. This hidden or unintended curriculum largely takes place at a tacit level. It is the kind of learning which occurs when the newcomer begins to 'sus out' the rules of the social situation, negotiates outcomes and learns the appropriate ways of presenting themselves in formal and informal relationships in order to achieve the kind of impressions that they desire. In short, they have to develop a model of 'the way things work in real life'.
The models that students
develop are the result of attempting to 'do the job' and build a
nursing/midwifery theory. Although
one of these activities is usually thought of as practical and the other as
intellectual, each is in fact a mixture of both; it is not possible to do the
job without a theory of some sort, nor is it possible to develop a satisfactory
theory without doing the job.
Insofar as there are
some procedures and practices that occur 'universally' within nursing/midwifery
as part of daily delivery of healthcare, there is a fair degree of unanimity
about the need to ensure that students are capable of doing the job successfully
by carrying them out. As
students observe and work alongside qualified staff, they discover that for
example, explaining a procedure to a client, confirming a client's identity
before administering a drug, and ensuring client safety in the bathroom are
essential parts of doing the job. They learn these aspects of practice
informally and intuitively, through observation and emulation of role models in
clinical environments. Doing the job with respect to these things becomes
second nature, the theory is embedded within them and rarely drawn out.
However in addition to the agreement about some nursing/midwifery practices, others are in fact the idiosyncratic preferences of individuals and groups; they are based upon differing degrees of experience, knowledge, argument, critique, values and beliefs. During learning and assessment processes, students come face to face with the fact that professional practice contains such complexities. Sometimes they experience this as a surprise, sometimes a disappointment, often a challenge. Students begin to realise that they cannot always take for granted that the value-bases that inform particular choices are agreed within a uniform theory of nursing/midwifery. The student must quickly learn how to read the situations they meet on placement. The diversity students face in attempting to build a coherent theory of nursing/midwifery which is comfortable in workplace settings can be summarised as follows:
• Differing 'Ideals' About Practice
For some students, dilemmas were faced when having to interpret the 'idealised' theory of practice learnt in the classroom in the light of experience of practice and resource limitations. As one student put it:
The major disappointment I've found is when you're in school obviously it's a Utopia they talk about, which is I think good, you've got to have set standards in what you want to do, and you blame yourself when you come to the real world and you've got staffing shortages, people being sick... and you know what you've been taught, research, counselling skills... all these wonderful ideas of actually sitting down and talking to people. And when you come on the wards, you really have to give ...like...second best.
This student learnt
to take account of two different views of nursing, a theoretical view and a
clinical view. She experienced a
polarity between the 'Utopian' theoretical view, and the 'real' world of
practice.
Educators were
aware that students often faced this dichotomy in practice, and had to develop
ways to reconcile both experiences:
The criticism as far as we're concerned is
that we're trying to emphasise nursing care whereas they'll turn round and say,
"Well, you know what you're saying is unrealistic, they wouldn't possibly
be able to do that in (...) the ward, it just wouldn't happen." And I think that's generally quite a
criticism. (...) What we actually try to say is, "Well it's an ideal situation, this is what you should
be doing." What they're saying is, "There's no way, there just isn't
the staff to do it..." (educator)
• Values and Attitudes Towards Client Care
It is not only the
relationship between theory-as-knowledge and practice-as-action which students
have to work out in the course of observing or working with clinically qualified
staff. They also have to learn to resolve the conflicts between
theory-as-values and practice-as-action.
Students encountered many different attitudes as qualified staff and
peers went about their nursing and midwifery care. They spoke of being faced
with attitudes that mirrored their ideals or conflicted with their own values
in practice situations:
She's the sort of midwife I would like to be. She doesn't rupture membranes and apply scalp electrodes unless there's a need for it. But the women are always given an option, everything is discussed with them, she always finds out what they want first and will always comply with their wishes as far as she can... Things like pain relief you can have a big influence on that by the rapport you establish. I've seen Kay talk to women with first babies who haven't needed any pain relief at all, which to me is pretty impressive. Whereas with other midwives it's, "I think you need an epidural," just because it's easier for them, they don't have to give all the support. (...) Some (midwives) will get in an epidural and wheel in a television so they don't have to talk to them. You just sit there and think, "Oh God, I don't want to be here."
Good and bad role
models were identified by students as they experienced working with different
staff members during placements.
Students appeared to engage in a process of sifting out the practices
and styles they wanted to emulate, and those which they found uncomfortable
either working with, or pressurised into having to do because of their student
status.
• Building a Knowledge Base and Rationale for Practice
Developing
practical experience and exploring the theoretical basis for practice is also
part of students' learning when on placement. The following example of a staff nurse preparing for the
administration of intravenous medication with a student demonstrates the kind
of craft knowledge that is often provided as they work together:
...when you put needles in, be careful not to push through there (...) these are designed so that you should be able to get the syringe in and draw it without losing the needle...it doesn't always work...
...then we have a single flush because we use
the flush to check that the line is working...so we put a bit of drug in...and
also to flush it between drugs, 'cos some drugs don't mix in the vein...
Such teaching and
learning is highly valuable, and needs to be backed up with rationale and up to
date knowledge. However sometimes
problems occurred when students encountered practices which either conflicted,
or the rationales provided by staff were inadequate. These caused confusion and
did not facilitate students' learning and their development of practice:
I've been on certain wards and been told not to use a steret[43] when you're taking bloods for a BM[44], for instance, because it alters the reading. And yet you can be on
the same ward with another nurse who says yes you should use that for hygiene
reasons. I mean I know what you should be doing because I've looked it up... I
was getting so concerned about it I phoned up the diabetic liaison nurse and
had a chat with her... that's the difficult bit I find, the continuity of
knowledge that we're given isn't there. (student nurse)
I mean I've worked with some people who've told me to do one thing one way and I've been quite happy doing it and it seemed right, and you'll go and do a delivery with another midwife who'll bawl you out for doing something and she'll say, "No you shouldn't do it like that," then, "Who's shown you to do that?" I don't personally like to say, "So and so said I should do it like that," and they don't seem to take that into consideration (...) some midwives are very good and they say, "As long as you're doing it safely then it's fine, there's no problem." But some are very sort of into doing things their way. (student midwife)
As is highlighted
from the above extracts, conflicting practices and procedures can create
pressures on students as they attempt to develop their practice and extend
their knowledge base on placements.
6.3.1. Handling the Tacit or Hidden Aspects of
Learning and Assessments
Students must find ways of either resolving these conflicts, putting them 'on hold' or learning to live with them. It may be that students themselves have to seek to supply their own rationale and continuity. This places a great deal of responsibility on the student who is in any case in a vulnerable position as the one being assessed.
Going against the norms of a ward or of a colleague was one option but this can, in turn, bring difficulties for the individual:
I wonder if I ought to just do some reading and look at the procedures and then make up my own mind about what I should be doing. But that's probably a good way to alienate everybody at once.
This student was clearly interested in developing her own understanding, but was nevertheless bothered by the probability of being shunned by staff because of her wish to base her practice on enquiry and research. Others less certain about their direction may be more easily deterred from educative activities when clinical staff discount 'book knowledge' and prefer their own 'folk knowledge'. Students may be faced with choosing between innovating, despite the probability of opposition, or 'making the best of the situation' through accepting that for assessment purposes it is necessary to carry out impression management:
I will be more careful who I listen to and what conclusions I come to...in terms of practice I will follow the procedure...which is ambiguous, yes. But if I am told by a particular member of staff to do this or do that ... I will want a rationale for that.
We've all sort of found you just adapt to what the midwife wants and you console yourself with the fact that, "When I'm qualified I'll do it how I want to do it."
...you really have to learn to give the best care you can actually give practically on the ward, and that's the hardest thing I've found (...) I feel what I do is spread myself so thin, and it does get upsetting...
In practice, however, the extent to which students exercise individuality depends upon the degree of match between their own values and those of their colleagues in particular clinical environments. To discover the latter, students have to engage in an on-going monitoring of the variety and range of practices, procedures and theories they encounter, and arrive at interpretations of their own. In this way they learn to be competent in ways that fit with what is required of them by a range of people from a variety of clinical and classroom environments.
6.3.2. ANTECEDENT EVENTS: A SUMMARY.
Part of successful participation in assessment requires an understanding of the 'rules'. It is also impossible to acquire those rules without at the same time acquiring some knowledge of values. One of the key things a summative assessment may assess is the extent to which a novice has internalised the rules and values of those assessing. In the course of their observation and work alongside colleagues who model behaviour and attitudes, students have to take account of the professional's expectations and values. What gets assessed on formal assessment occasions (i.e. the two or three occasions when mentors or supervisors sit down with a student to fill in their assessment schedule document) may be influenced by what students have learned informally about assessment while they have been doing things with or chatting to the staff. It may also depend upon the student's perception of what it is 'safe' to challenge; challenging the attitudes and work patterns of others involves personal risk.
6.4. ANTECEDENT EVENTS: THE FORMAL FORMATIVE
RELATIONSHIP
A successful formative
relationship not only focuses on but also is integral to the processes of
action, reflection and development inherent in work practices. Isolating instances of assessment is
thus inappropriate. How then, is
formative assessment as a formal requirement accomplished? A key procedure is the generation of
feedback. This in itself depends
upon some way of recording and analysing 'what happened' in order to think
about 'how to improve, develop, change'.
The kind of evidence base that is required to underpin formative
assessment events is one that is responsive to processes and situations. It should provide evidence of
reflection within the processes of practice as well as reflection on the
processes of action at some later date.
Journals written by
students provide one means of doing this.
They are often built into courses, either as personal records for the
student, as documents open to inspection, or both. The quality of their use varies, depending upon the extent to which students are
inducted into strategies for observation and record making, and the sustenance
of motivation thereafter. There is
a difference between a journal entry which notes the occurrence of an event,
and another which describes in detail what, when, how and who was involved,
what was said, what was considered in forming judgements and decisions about
what to do and what the outcomes were together with critical commentary on what
might be improved. The quality of
a journal thus depends upon 'frameworks for seeing'. Individuals who are not skilled in a given area do not 'see'
as much, nor quite the same things, as an expert does. Formative assessment is all about how
students begin to internalise expert ways of seeing as a critical step towards
making evaluations, decisions and carrying out actions. Continuous assessment means that
occasions for such internalisation to take place are pervasive, not
'one-off'. In this case feedback
becomes a part of general interaction:
I don't usually set
an appointment for that, but I think you're almost doing it on a daily basis
without really thinking about it. You know a situation will occur on the
ward...or they observe a situation and you put it back to them, "How would
you handle it? How do you feel about that?" Or "I think you did that
really well, what do you think?"...I think it's more informal then. But yeah
there is opportunity to do it. (staff nurse)
I get them to...try
and keep a learning journal and then we can reflect on, maybe on a day to day
basis or maybe once a week. (staff nurse)
I'd say most of it
was on an informal basis, if you sit in the office and say, "Did I make a
mess of that? Is that OK?" and they'll say, "Oh I think you
shouldn't, perhaps it's a good idea to do it that way, and why don't you try it
this way?" And sort of advice and prompting. Although you have your own
primary nurse everybody takes you under their wing because you're a first year
student. (student)
These approaches to
feedback and reflection parallel the kind of learning that 'naturally' takes
place in any socialisation process.
For that reason they enable the student to internalise habits of
reflection, analysis and evaluation.
Nevertheless, the kinds of activity outlined in the previous extracts
represent the features of a continuous approach to assessment which is still
new, and remains alien, to many.
The new is often interpreted in relation to the old, and this is
understandable when recalling the lack of professional development received by
some assessors[45]. As a consequence, assessment can be
perceived like assessments of old, and treated in the same way:
I think we've
adjusted to it more but I think the clinical staff find it difficult...I think
she (the assessor) finds it quite difficult, the importance. (It) actually goes
in front of the examining board and they have to pass it, whereas...I think
there's still the old feeling that it's just a ward report and somebody looks
at it and says "Oh pat you on the back." And the amount of time we
actually spend...doing your assessment. It's not "I'll take it home and
bring it back," it's, "Sit down and let's do it for an hour,"
which is very difficult on the wards, especially if they're busy. Yesterday we
sat down and we, I set down my objectives or what I wanted to do up here and it
took us nearly two hours. (student midwife)
Continuous assessment is
not a short cut. It demands substantial time, effort and understanding, carried
out on a regular basis. Good
continuous assessment is not guaranteed
and is often difficult to come by.
A few placement experiences, poorly supported in terms of assessment
feedback, can have a disproportionate effect on particular students'
experiences:
It doesn't happen
very often that you do actually get very good constructive criticism. I've had
it once or twice, but you tend to get more of the bland sort of comments most
often, which I think probably is a problem...
...I felt she was
patronising me slightly by saying, "Oh you're very good at doing the
paperwork aren't you," that sort of thing, and I was thinking, "I
don't need that sort of feedback, it doesn't interest me."
I think sometimes
people don't like to tell you that you're not very good, even if you ask them.
I don't know if they think that you're going to burst into tears or get really
offended and walk out, but as I say, (...) I find I benefit from it as long as
it's constructive criticism.
During a discussion on feedback the following
midwifery student was asked whether she had received any good experiences:
I can't say that I
have! Is that good criticisms? I don't know. I think everywhere that I've been,
because we're taught to ask, "How did you think I was doing that
delivery?" or "How was I doing that post-natal check?" And they
are just sort of, "You're fine, you're fine, you're doing O.K." And
that's the end of it. They (the assessors) are just not taught I don't think,
they are just not supported enough to know what to say...or they just don't
realise what the philosophy of the course is (...)sort of reflective learning,
they just don't seem to understand.
The following frank
statement from a staff midwife illustrates what can be a real problem for
assessors:
...I shy away from having to give
criticism anyway. I'll always go
to great lengths not to give criticism, so I'm not a very good assessor from
that point of view as I'll always highlight the positive aspects and I'll tend
erm, not to go into too many details if a student isn't doing terribly well in
certain areas. (staff midwife)
It is difficult to
compel formative relationships; rather, they have to be grown through sharing
good assessor practice during, for example, professional development
meetings. Thus it is difficult for
education staff to be certain about the overall quality of students'
experiences whilst on placement.
There is still very much the sense that the process is in transition and
the quality of experience is thus necessarily variable:
...I'm sure it does
go on, on an informal basis, it would have to in some ways to help them with
their learning experience. But again, whether it does or not is very
individual, depending on the person who's planning the programme and assessing
and monitoring them through that experience. (...) But I'm sure there are
some midwives who will get the students to reflect on what they are doing, what
the basis is for the management of their care(...) I think reflective practice
and being critically reflective is something quite new in terms of the concepts
taken on board by clinical midwives. Because I think one of the difficulties
we've got is that we're educating these students towards diploma status but
they're working alongside midwives who have all been at certificate level and
haven't been used to that way of thinking and working. And that's a difficulty,
I think that's big personally. And of course we can't get all midwives to
diploma status overnight. So there's a gap there, you know between what we are
asking the students to achieve and how to achieve it and the way the midwives
actually see it. (educator)
The time required to
fulfil current assessment demands is significant, and students' assessment and
learning needs are sometimes subordinate to client care demands. Consequently,
taking up those opportunities which do exist depends on the extent of
assessor motivation, the quality and quantity of appropriate professional
development, and the conceptual framework, skills and confidence to work
closely with students.
It is understandable in
conditions of limited staff resources that students' assessment and learning
needs can become marginalised by patient care priorities. It is also understandable that staff
who have not been involved with extended forms of professional development will
find it difficult to recognise and understand the importance of and need for
formative strategies. Where
such formative strategies are particularly lacking, students report that
personal reflection and self assessment become their main sources of progress:
...you know yourself
that you've progressed...
I mean obviously if
it was something major, if you were practising dangerously or breaching
confidentiality then other people would pick up on that. But the more,
intricacies that you know you're weak on and you need to develop, I mean really
you're probably the best person in some respects to identify that sort of thing. (student)
However, such self assessment cannot be a complete
substitute for good formative assessment provided by staff:
Now as we've progressed and you've matured yourself and progressed through
the course more so, but still not completely because if you're completely
lacking in some area you might be so lacking in it that you don't realise that
it exists, and its importance (...) So I mean you do need outside input as
well, definitely. (student)
6.4.1. THE FORMAL FORMATIVE RELATIONSHIP: A
SUMMARY.
Since formative
assessment is closely allied to work processes, its proper use facilitates the
internalisation of reflective practice on the one hand, and contributes to
summative assessment events on the other.
However, formative activities can only do this if a) they occur, b)
there is an appropriate evidence base to draw upon, c) staff have received
appropriate professional development and d) staff are fully informed of the course
aims and intentions for the placement experience of the student.
These conditions are not always
present, and the current state is one of variation in the quality of 'formal'
formative activity.
6.5. REACHING SUMMATIVE ASSESSMENT DECISIONS
The assessment of
practice contributes significantly to the overall assessment strategy, and
summative decisions determine access to the status of professional
registration. Consequently the
effectiveness and quality of summative assessment decisions are vital.
The issue of determining
when summative decisions are made during the
placement period is a difficult one due to the great variety of observation,
judgement, discussion and reflection which may occur between students and members
of staff throughout the placement.
The making of summative decisions is different from completing
assessment documentation during the final assessment interview.
This section focuses on
the summative decision making which results from the informal and formal
antecedents of clinical assessment.
Assessment of competence, it is argued, refers to a developmental
process where notions of self reflective critique, dialogue and negotiation are
integral. These will be discussed
in relation to the formal events, the role of self assessment and questions of
pass and fail. Failure, in this
context, can be seen in terms of the absence of, or inability to, progress
rather than a failure to meet a minimum (see also Appendix F).
6.5.1. The Scheduled Assessment 'Events'
The importance attached
to the three assessment 'interviews' varied. In some assessment relationships
the documents were used at regular intervals, more frequently than the three
scheduled times, to check progress
made towards achieving stated aims.
As the standard is achieved
the document is completed gradually, or, as areas of weakness are identified,
planning takes place to meet those aims.
The documentation thus acts as a constant source of reference. In this
way, the foci of the three interviews become subsumed within a more continuous
assessment process. This is in the
spirit of the continuous assessment philosophy, where one intention is that
there should be no division between student learning and assessment. The potential danger lies however in
the three interviews becoming the focus of the assessment, with little or no
formative activities occurring between these three isolated events.
The intention for
continuity of assessment activity between formal assessing occasions needs to
be explicit as the following educator outlines:
...we have a
preliminary, and intermediate and a final interview...but again we try to
emphasise to the midwives and to the students that there should still be the
discussion on-going even between those interview times. That if you like, is
just a sort of...to highlight the importance of talking to the student and
formalising what you're saying (...) they've accepted those interviews, they're
done well and for the purposes of identifying any weak areas for the student to
work on...But also I think in many ways this is more important, to pick out
what the student's strengths are and to, you know, encourage her and say,
"We'll work on those." Because I think students do much better if
they're encouraged as well as, you know, the areas where they need sort of,
err...remedial action.
However, even the
minimum of three interviews cannot be guaranteed. They do not always occur as
planned:
I think what it's
supposed to be is that you go on the ward and they introduce you, and then you
get your interview after you've learnt say maybe about the ward...but I think
because of the short staff you can never work with your mentor(...)in the end probably
you won't have your second interview, intermediate interview until the last
day, you have both of them (intermediate and final) at the same time. And at
times you find you have other things that you would have liked to achieve, but
because you never have the interviews, you don't. (student)
As a consequence,
opportunities which could maximise learning on a placement are lost and weak
areas of a student's performance may go undetected.
Adherence to the formal
events neither guarantees the quality of decision making nor its
appropriateness to the development of learning. Nor does a completed document imply that the required
activities have, or have not, taken place. The following comment is from a student at her intermediate
stage:
...I've worked with
Sally an awful lot so she knows exactly where I'm up to...we've talked about
what I actually need, although we haven't actually filled the forms in. But
that doesn't mean that we haven't done anything.
Where students worked
with other staff in addition to their assessor to provide continuity and
overcome the kinds of practical disruptions to the assessment process
previously discussed, there was a need for these staff to come together and
pool their experiences relating to the student. Where this happened, students
frequently expressed their satisfaction with the decision making process, for
example:
...my experience on the whole has been pretty good in that
area...they've made it clear to me that they are seeking the opinions of other
staff about the way I'm working...you know that's happening and that's fine as
far as I'm concerned.
(student)
Many educators
recognised that this was not always the case:
...we've said that
it's important for them to get the experience of other midwives apart from the
one who is assigned as their assessor because of the variety of approaches and
experience that's available in any area. Now ideally I would like to see those
midwives linking up in terms of the experience they've had with that particular
student. In reality I don't think that is something that happens from my own
sort of queries and questions about it in the clinical area and with the
students.
(educator)
Even when staff involved in working with the student do liaise regarding the assessment, it was a common feeling that the quality or depth of consultation was not always satisfactory:
That's our meat and
drink, these forms. But as regards to assessment in the clinical area it's word
of mouth entirely. So if I'm away and I come back and I'm just going to finish
off the final assessment with this student, just leaving the area, I'll go
around asking people who've worked with her. So it is very, it's a bit basic
really isn't it? We don't go by, we don't test on how to do a VE or how to do a
delivery. It's just a general, "Oh yes I've found her to be very hard
working and very good." (assessor)
There was here a sense of assessment as a routine activity, not helped by this assessor's poor understanding of the intentions behind assessment. In her view, since nothing is 'tested' in a one-off situation or performance, only character and general motivation formed the basis of the continuous assessment process.
In general, the formally scheduled events should allow a triangulation of views, a sharing of experiences, and the diagnosis necessary for forward planning. In particular they should create the framework for the kind of internalisation of judgement that allows self assessment to become the basis for future professional development.[46]
6.5.2. Self Assessment
In one view, the activity of self assessment was recognised as a challenging, but essential part of the process of learning, development and taking responsibility:
...it's amazing how students see themselves...they say, "I can't do that! Oh God this is really difficult!" But it's amazing what you get back and it's a great basis for discussion when a student rates themselves half way. (staff nurse)
I found it intimidating for the first time I did it, I think most people probably tend to under rate themselves. (student)
I think it's got to be a two-way process...
you don't always get true feedback from the student because they're afraid of
what's going to happen. Even with continuous assessment there is this element
of it being punitive, or potentially punitive, and I think that's very sad. (educator)
...again we talk about this endlessly...the need for the student to be involved in their own assessment, and if there are any areas that they need to develop...being involved in the documentation...the formulation of an action plan which the student's actually involved in...you seem sometimes as if you're having to say this so endlessly and then things go wrong and you realise again, it hasn't been done. (educator)
Essentially, self
assessment parallels formative assessment.
It can be seen as the internalisation of the latter and is an essential
ingredient of the development of professional judgement. From students' comments it was apparent
that some staff were either unaware of the need for a substantial element of
student reflection and self assessment, or that they applied out of date
approaches to assessment and failed to consult students over their assessment
decisions; such assessors just went through the motions of talking about what
they had written down, with little room for student negotiation:
...the first two reports that I had, it was
more or less a fait accompli..."This is what I've decided," and
although we did discuss it (pause) it was made clear that there weren't gonna
be major changes to what the primary nurse decided. (student)
This student went on to suggest that this non-negotiated approach had the potential advantage of raising students' awareness in the early stages of training, when staff were in a better position to gauge students' practice because of their greater experience. If such relationships continue, however, how students then escape this 'dependency culture' remains an issue.
6.6. SUMMATIVE DECISIONS: ISSUES OF SUCCESS
AND FAILURE
If asked what the key
criterion for pass or failure is, most people - whether professionals or lay -
resort to a statement that includes 'safety to practice'. At first sight this seems a clear
criterion. Fuller reflection soon
reveals its complexity.
A considerable amount of
data was collected on the assessment decisions made about weak
students and whether these decisions of student success or failure were
reliable. The data relating to non-continuous
assessments were informative in providing a context of assessment issues through
which the effectiveness of continuous assessment strategies could be
considered. A selection of this
data and a summary of issues is contained in Appendix D.
Continuous assessment
was welcomed as a means of promoting greater confidence in decision making,
particularly regarding weak students, because the continuity it could provide
would offer a greater opportunity to judge the consistency of student performance
over a wider range of situations and contexts than could one-off forms of
assessment. Indeed, continuous
assessment was more appropriate to the flow and dynamics of the work situation
than a one-off assessment could be.
In addition, by reinforcing the responsibilities of clinical assessors,
it was envisaged that greater confidence could be placed in their professional
judgements. This rationale is
clearly expressed in the following comments on the change to continuous
assessment:
...the likelihood of
people failing may not necessarily be stronger but the responsibility definitely
rests with the staff, the service side, whereas before it tended to rest with
the educational staff. And more people are likely to fail as a consequence of
poor clinical practice than poor educational practice because we've reduced the
amount of educational tests to a
minimum in an attempt to make them more purposeful and meaningful, so they're
longer. But because they can be supervised and controlled, sorry, they can
control their educational development, they're less likely to fail them. But in
the clinical area if they fail, it will be because a staff nurse has said
they're not good enough, and if they only need to fail one...they only need to
get one minimum level out, you know they don't achieve the full package for
each part, then that's it, that's their course finished, and that's, you know,
that's quite a dramatic event. Whereas as it stands at the moment they can fail
three components of the course, three consecutive units. So they could fail a
whole year and a bit and still take their finals. Here they just have to fail
one small practical piece, moving say from a supervised, or a directed to a
supervised capacity on a skill and that's it.
As is
clear from the data on non-continuous assessment strategies[47], assessors require
considerable skill and confidence to handle the assessment of weak students
effectively; such situations were often experienced as great challenges both
professionally and personally.
Consequently professional development and support are vital,
particularly when assessors also have to adapt to such demanding and sensitive
issues combined with new approaches to assessment. Staff who received professional education under the previous
system need re-skilling to feel confident to handle weak individuals amongst a
new generation of students assessed on the principles of continuous
assessment. Other practitioners
who find themselves in assessing roles for which they received only limited
assessor preparation can feel
vulnerable carrying such a vicarious responsibility:
If I find a student
that I feel is not, because I am after all, although I am a staff midwife and
I've worked here a long time and I have a lot of experience I'm still only a
particular grade so I think people who are getting a higher grade should carry
the can for certain things. So if
I find a student that is so bad and I don't feel as if I'm equal to be able to
give her criticism because it would be unfair on her and unfair on me, then I
will try and pass the buck upwards, that way.
Professional development
to ensure understanding of, and action within, new approaches to assessment was
identified as essential for all staff involved in
assessment activities.
Unfortunately some educators were aware that this was not always the
case, and that this could have serious repercussions on the detection and
action taken on weak students:
I mean you've got to
have the knowledge base and the skills to say (...) someone's not
competent. To do that you've got
to know what you're talking about and their training might not have given (them
that). If you don't know, you might as well put a tick because if you put a
'no' and they challenge you...
I'd argue that
there's a conspiracy of passing people at the moment because if they judged it
by their values they shouldn't
pass, they're not competent within their values, but they know their values
aren't what's being asked for but they have to assess them.
When assessment was
poor, students were often acutely aware of the importance and significance of
what was being neglected:
...it's 50/50. This
goes towards the diploma at the end of it and I don't think the clinical staff
feel supported enough to say fail somebody on the clinical assessment.
(student)
Where doubts about the
quality of students are expressed (either formally or informally) the
implications of them "slipping through the net" to become registered
practitioners can be grave. Users
of health care must be assured that systems of professional education possess
rigorous assessment procedures which do not allow unsatisfactory students to
progress undetected.
Although it was clear
that poor students were typically in a minority, the issue is still an
important one. Any student who is
either not progressing or failing to meet the required standard needs
identification by assessment systems. Diagnostic activity must be carried out so that
students have opportunities to improve.
Not only does this reiterate the need for professional development and
support for assessors but it highlights the importance of assessment texts to
provide structures for the production of an evidence base which can be reviewed
and reflected on regularly.
Development needs can be identified quickly, detailed action plans can
be negotiated and thorough evaluations should result. If, despite remedial
action, development does not occur and standards are not achieved, then failure
decisions can be made fairly and on the basis of a fully documented evidence
base.
6.7. MONITORING - TOWARDS THE LEARNING
INSTITUTION
Monitoring
is the way that institutions, like individuals, learn, develop, adapt to
changes. The structures,
mechanisms and roles through which these activities are executed within
approved institutions share the common purpose of bringing about a closer mesh
between course intentions and the structures, mechanisms and procedures
employed in practice. However, the
maturity of both the assessment strategies and the monitoring and development
systems associated with them varies considerably.
With the
devolution of assessment from national to local level, approved institutions
now carry the responsibility for monitoring, reviewing and evaluating the
effectiveness of all aspects of the assessment process (ENB 1990, p 52). Also
explicit in overall assessment functions is the responsibility for development
of assessment strategies on the basis of monitoring, review and evaluative
activities.
Examples of the kinds of
mechanisms developed to meet these responsibilities are:
Assessment/examination boards
Assessment/examination sub-groups
Meetings/forums for assessors of practice
Course management meetings
End of placement/part/course discussions/written reviews & comments
Clinical liaison role of educational staff
Educational audit of clinical placements
Reports from external examiners
Agreed notes from students' reflective discussions
Monitoring
of local strategies at a national level occurred through links with ENB
education officers and the submission of an annual report on assessment
to the ENB.
There was a perception
that growing familiarity and confidence with assessment strategies had ironed
out initial problems in some areas:
...I mean you even hear positive statements as well these days! (laughs) So yes I mean I think it has got easier. More and more people get to know the system, more and more people have used it because of the numbers of students going through wards, so certainly people are more familiar with the documents, more familiar with the expectations that are sort of required of them really in a sense and more sure of their role really, so yes it has improved. It has got better and the indicators for that are that they're becoming critical of performance perhaps (...) they're talking more positively about it in an informal sense and there are less cries for help. I mean they're pretty crude measures but (...) there is an improvement.
However from further
comments made by this educator and others, it was clear that becoming familiar
with the system and getting used to the documents was insufficient. The need for active and effective
monitoring throughout systems was recognised, particularly for first
experiences of new courses:
...it'll be
interesting to look at the evaluations (...) we haven't really got enough
experience of it yet to see what the difficulties of it are and what things are
good about it. (...) ...I think
staff will have to look really across the evaluations to see what's going off
and where we can make changes. I
think there are a lot of challenges, clinically and theoretically to this
course, and educationally...(educator)
The issue of the
effectiveness of current strategies for monitoring assessment was raised in the
data. Some parts of the assessment system were more amenable to monitoring than others. In summary form:
• Written
assessments
Written forms of
assessment appeared to be more open to scrutiny in terms of opportunities and
activities for monitoring their quality, validity and reliability. Double
marking by assessors, marking guidelines and criteria and detailed comments
sheets made decision making for written assessments explicit during regular
assessment board meetings.
• Assessments
of practice
In comparison to written
forms of assessment, the monitoring of assessments of practice using existing
strategies appeared to be considerably weaker and less effective. The following themes have been derived
from project data:
Firstly, the ongoing, continuous
process was a difficult and time-consuming activity which many educators did
not, or could not, participate in and monitor (to any great extent) as part of
their clinical liaison role.
Consequently monitoring predominantly occurred after the assessment,
using completed documentation for review during exam board meetings, and in
some instances, during meetings between educators and students. Only a small
number of educators spoke of being able to attend assessment decision making
occasions with assessors and students on a regular basis.
Secondly, approaches to
assessment of practice and the formats for the majority of documentation used
in the approved institutions in the study did not facilitate scrutiny of:
• the quality of the assessment
processes culminating in assessment decisions
• details of the student's
performance, progress and development
This was because the
recording of assessment decisions relied heavily on the assessor as 'accredited
witness' rather than the production and recording of an evidence base amenable
to independent assessment.[48]
Consequently, completed
documentation from two similar student placements had the potential to 'hide'
the quality of the assessment processes, their validity and reliability and
also the quality of student performances.
Two sets of identical ticks and similar comments could conceal two very
different assessment processes.
For example in one instance an ongoing, well developed assessment
relationship between student and assessor where constant dialogue, feedback,
questioning and reflection were prominent features could be reduced to a series
of ticks and brief written comments because of the requirements of the
assessment documentation; in the other instance a poorly developed assessing
relationship where there had been little continuity, discussion or reflection
between assessor and student may well have given rise to an identical set of
ticks and similar brief written comments.
Thirdly the success of the other forms of
monitoring varied considerably in their ability to:
• highlight
and explore assessment issues
• act
on the issues raised as a basis for development
The evidence base for
assessment should aid monitoring processes. Assessment decisions and the criteria on which they are made
must be explicit and open to scrutiny via monitoring strategies; the quality of
the assessment process needs to be open to similar scrutiny. The task of the monitoring system is to
reveal the issues.
6.7.1. Revealing the issues
What follows is an illustration of issues arising from the experience of assessment of practice, and the extent to which they may be amenable to monitoring systems. The case highlights the attempts of a group of midwifery students to air concerns about variations in the quality of practice assessments as a basis for future action. It represents many of the conflicting issues which other staff and students experienced in different contexts.
In the absence of quality assessment on some placements, students found themselves performing a mediating role of translating the assessment discourses, informing poorly prepared mentors how to interpret assessment guidelines and complete documentation. This put students in a very difficult position. One student described her experiences:
Well I think it makes a mockery of the assessment really, because I mean they say, "What do I write here? What do you want me to put here?" I mean it's not really how they should be done, yet I don't think it's up to us to like correct it. Like Sue (the assessor) said "I want to write it like this, I don't know how to write it in this educational jargon." It's not up to the students to tell her how to write it, you know there is a problem there. I know the kind of thing they probably want, but it's not up to me to say "You can't write that, you must write this" because it makes it even worse then. (...) I think they need so much support in knowing what to write. We've been doing this for a year now and they are still asking, "What do you want me to do? What do I do? Do I tick or do I write A, B or C?" It's erm, it's not very good really when it's continuous assessment.
The above student's
assessor voiced some of the difficulty she experienced knowing what was
required of her due to the change in clinical assessments which now required a
more in depth approach:
To me, a few words, rather than me writing all that down, if I could just put, "Pauline has been an excellent student and has achieved all her aims and outcomes," that would have done for me, but it won't for them (education staff), or will it?
Similarly, a midwife
educator pointed out:
And you sometimes find the situation where the students are actually explaining their assessment package to the midwife who has got the responsibility for it. You know I have difficulty with that one!
Many students were unhappy to lead their assessments if assessors lacked the required skills, as they regarded it, quite rightly, as inappropriate to their status as learners. However tackling the situation was problematic. Some students, such as the one above, spoke of their difficulty being empowered enough to challenge the situation 'face to face' because of the potential threat and personal consequences that may result:
I mean in school
we're forever being told to question and challenge, but it's easier said than
done. You get in the clinical area and it's like the old reports, you think,
"I'd better not say anything because I don't want to blot my copy book, I
want to keep in here and be friends with people and not rock the boat too
much." So what do you do? I
feel like I want to be more assertive, and turn round and say, "I don't
agree with this, or this isn't right," or whatever, but I don't think I
can.
Others recognised the sensitivity required to change assessors' perceptions of the assessment:
We're frustrated
because we can't really explain to them what we have to say, because you can't
just put, "Blah, blah is a very nice person," because that isn't what
they want to know.
Some students assisted
poorly prepared assessors, feeling that it was partly their responsibility, but
found the ambiguities of this role difficult to reconcile:
...I think it should be everybody's responsibility if you've got students in an area then it's up to you, you should be familiar with the assessment forms, perhaps it's the fault of school as well, I mean they do have a link teacher but I don't know how much that link teacher is used. (...) I think it's everybody's responsibility. I mean I don't think it's not my responsibility obviously. In a way I should be assertive and say, "I want to achieve so and so today." Perhaps it's probably a fault on my part as well.
Students were not always happy tackling the situation via more formal structures. Some felt they did not want to be seen as 'trouble makers' and also spoke of not wanting to hurt the feelings of others by 'going public'. One student gave an example of a typical comment on her assessment which read, "Overall working with S has been very good," but she hesitated to take up the issue of the inadequacy of her assessment:
I don't want to put it
on record, 'cos she's excellent at teaching and everything and she's been a
really good supervisor/assessor, well supervisor, but maybe she's not used to
the assessment forms or to writing in the sort of terms that educationalists
expect now.
Despite
the rhetoric about open forums for effective monitoring of assessment
activities, individuals' experiences were not always favourable. For some, the
personal and professional skills and confidence required to engage in such
activity effectively did not come easily.
Likewise the creation of a non-threatening atmosphere in which
discussion could take place was not always attained. In addition to the need to feel secure and comfortable
during monitoring and review sessions, feeling that one's comments are valued
and will be acted on wherever appropriate and possible was also raised as an
issue. The midwifery students
discussed their frustration when asked to give feedback on their clinical
assessment:
Well I said to them (tutors) they (the assessments) are not taken seriously enough and that they're just on an ad hoc basis. But it's not what they want to hear really. They want to hear that you've had appointments with your clinical supervisor and that it's been taken very seriously, but they're just not, which is, it's a problem.
...comments that we've made, quite constructive, we haven't you know, rocked the boat or anything...and it's disappointing to see that eighteen months on, the same mistakes are being repeated and groups are making exactly the same suggestions and complaints that we made, which is a shame really.
An educator on this course was aware of the difficulties faced by these students, and that as a consequence little action resulted:
Whenever students come with problems I will say to them, "What have you done about it? Have you actually addressed it in any way? Have you actually talked it through?" Now that's where the difficulty comes in I think (...) there are many things that prevent students from being open and honest about how they feel. I mean sometimes I think they don't always have the skills in terms of expressing it in a way that will be received well with clinical staff. And I think clinical staff are also very threatened sometimes. But when it relates to something that's happened in the clinical area they tend not to do anything about it because their comments will be, "Well it's not going to change anything if we do, it'll only make my situation worse. (...) I will be isolated out as a 'problem student' in inverted commas, and then I might be treated differently." So rather than deal with it they tend to accept it, not do anything about it and put up with it, which has implications for their learning.(...) And another thing is they talk about the grapevine, you know, "If I do anything about it, say anything, it will be round the unit in no time," which is quite sad really isn't it.
Some students had become
resigned to the 'quiet life' approach, no longer feeling willing or able to take further action
on the issue, when previous attempts had been unsuccessful:
I think it is
difficult to criticise yourself and if somebody's quite willing to write,
"A nice person. Settled into the ward well," it's easier to leave it
at that really. It's more challenging if you have to sort of say, "Where
are my weaknesses? Record them so that I know them to progress." So I
think it does put more responsibility onto the student.
Despite the educator's
'informal' understanding of the situation, the unacknowledged problem
remained. Action was not taken to
overcome the stalemate that had been reached. Both formal and more direct personal approaches were viewed
as either too uncomfortable, too threatening or ineffectual. This appeared to be strongly linked to
the lack of dialogue and debate about assessment between the education and
clinical departments. No formal
forums existed for the midwifery staff involved in assessing to raise and
discuss any problems they were experiencing, and likewise for educators to
explore and receive feedback on issues such as the quality with which the
assessments were conducted. Because of pressures on educators and clinicians,
clinical liaison relationships were reported to be poor, and hence the
provision of more informal support was limited.
In this instance, as in
other examples from the data there was a need for monitoring strategies which
provided supportive environments, backed up by commitment and resources to take
action as a result. Failure to
engage in monitoring activities and act on the results maintained the status
quo and meant that some poor quality assessment practices, resulting from
deficits in the skills and professional development of assessors, remained. At best, some average or good students
may miss out on development during some placements, and at worst some poor
students may 'progress' through assessment undetected and unassisted, with the
possibility that erroneous summative assessment decisions may be made regarding
their level of competence.
In general terms the
task is to create an institution or a system that learns.
6.7.2. Principles for Learning from Monitoring
In summary form the
principles required for effective monitoring and development include:
• Ensuring
involvement of all relevant groups participating in the assessment process
As with assessment
strategy design, involvement of practitioners in monitoring and development
activities occurs, but there is a tendency for educators to dominate the
scene. The occurrence and
frequency of forums for practitioners involved in assessment (eg assessors
meetings or end of placement/unit discussions) varied widely, between approved
institutions, as did the attendance at sessions and the quality of activities
that occurred during them. Regular
attendance by practitioners at discussion forums or feedback sessions was not
always easy due to workplace commitments, but where it occurred it made a vital
contribution, generating essential feedback from those
directly involved:
In fact we've already changed the profile from the common foundation programme as a result of feedback from clinical staff. Certainly from the feedback we're getting back now it is clearer. I don't think it was user friendly enough and perhaps some of the aspects were a bit generalised.
Likewise the need to
ensure that students are included in such activities is important. During
forums about assessment experiences on placements, good practices, difficulties
and problems can be raised and discussed, so that development can result.
• Appropriate
support
...we have the end of course feedback from the whole group on everything in general and we also have a group of service staff and educational staff chaired by the education who are the assessing panel and they constantly monitor the assessments. But it's a bit...it's a bit...that is the formal mechanism, I'm not convinced that there is a great deal of what I would call individual feedback, "Do you want some help with this sister because perhaps we could help you improve on so and so."
...from my own personal point of view I
think the evaluation that we're doing really needs to be taken on board in more
detail, and really looked at, the issues underlying some of the evaluation
material that's coming up. (educator)
Many staff recognised the
need for more targeted, individually tailored activity in addition to more
'formal' structures. However, being
able to devote sufficient time to carrying out those activities which are
the most helpful, and having the space to act on them, was often difficult
within existing clinical liaison roles.[49]
• Putting
discussion about assessment on the agenda
As monitoring and
feedback activities compete with other activities, it is easy for them to be
marginalised by other workplace demands.
Whilst this is understandable in the current climate of competing
priorities, there is a need to air and discuss assessment issues that have been
identified informally:
I think again there have been some midwives who have informally mentioned it. You know if you get the opportunity to get a midwife to talk to you when you go, inevitably it will come into the discussion, but I think it's a bit ad hoc, rather than a formal network that is available for that. And I suppose in part that's as much to do with the teacher as well, teacher responsibility as well as midwife responsibility.
• Need for
Action Based on Discussion
When assessment issues
form the basis of discussion, and actions are identified to improve practices,
mechanisms need to be available to put developments into practice. Many developments have occurred in
approved institutions, however the following issues were highlighted relating
to development:
• Limitations on
development
For a variety of
reasons, it was not always possible to implement changes and development to the
assessment strategy within a short space of time, and in some cases not at
all. Implementing some changes
which fundamentally altered the assessment system required re-validation. More minor alterations to the
assessment system can often be implemented without requiring re-validation of
the assessment strategy, but even then they may take a while to occur. Where immediate change is not possible,
the reasons for this should be communicated to the relevant parties, so that
frustration and lack of faith in a system which is perceived to be static do
not occur:
...I think we
should value students, I think they have opinions, we may not always be able to
say, "Yes, we'll change it if that's what you want..." because that
may take away the essence of the course...(educator)
• The pace of change was a limitation on some developments. Staff noted that the pace
of change must be carefully planned and monitored so that those involved are
not overwhelmed by a constant round of change, which may be alienating:
...(assessments) are
monitored, I think they could be monitored a little bit to produce a higher
quality and I think that's where the college needs to be working with us a
little bit closer. But I'm not looking for that at the moment because I don't
want to push the staff any harder. But I think it needs to be a gradual
increase and pressure put on the service side to improve the quality of those
assessments. (clinical manager)
...in talking about the assessment strategy one has to be very careful even bringing about change that you don't actually invalidate what went before, because that raises questions about their qualification doesn't it? So it needs very delicate handling. (educator)
• If action does result from monitoring activities,
the manner in which this is conducted and the outcomes which result are
vital. For example sensitivity is
an important factor when tackling issues of poor practice. One group of students' confidence was shattered by what they felt was
poor handling of a fellow student's comments. After expressing negative views about a placement experience
with tutors in confidence (as they perceived it) at a review session, the
student and her peers were upset to discover that this issue had been taken up
with clinical staff without their knowledge and in a way which they felt
implicated the student directly.
As a result the students lacked faith in the system, believing that
their experiences would not be handled sensitively and expressed reluctance to
be so open in future.
• Development on a National Level
The devolution of
planning and developmental activities from national to local level which has
occurred in recent years has created a growing climate of competitiveness and
secrecy between approved institutions.
Many educators remarked upon what they perceived as a climate of less
sharing.
But what we need are systems in which we can share ideas and also cross fertilise everything. And I mean colleges of nursing, to be frank, by and large are notoriously defensive about sharing anything and I think that hasn't been improved by Project 2000 I don't think. It hasn't been improved by the White Paper, in fact I think it will make it worse. If we're in competition with each other then we don't want to give too many of our secrets away and I think that that may make it more difficult to ensure equity and parity and all those kinds of things between the levels of qualification.
Considerable effort is
invested in the developmental activities of approved institutions, and the
reluctance to share and 'cross fertilise' ideas if relationships are not
mutually beneficial is
understandable. However in light
of the learning curves currently experienced by approved institutions as they
design[50] and develop assessment
strategies, there appears to be a place for greater dialogue at regional or
national level, where assessment issues can be debated and understandings
shared to facilitate progress.
6.7.3. THE LEARNING INSTITUTION: A SUMMARY.
A learning institution
or system is one which has in place the structures and associated mechanisms
and procedures which generate reflection, facilitate the sharing of ideas and
experiences, and organise feedback appropriate for decision making and for action.
The institution or
system provides the context within which the student's own learning and
assessment takes place. The
quality of the contextual arrangements affects both the quality of the
student's experience and the quality of the judgements made about the student's
ability to practice. There is no
short cut to quality.
CHAPTER SEVEN
ABSTRACT
The relationship
between theory and practice, it has been argued, is shaped by the way in which
knowledge is used and assessed in the clinical environment. It is also affected
by the conceptions of it that are constructed, through action and dialogue, in
teaching and learning that takes place away from that environment. In
classrooms, where the focus of talking, writing, and doing is more upon theory
than practice, the development of ideas about what counts as knowledge forms a
major implicit agenda. In the classroom, knowledge is presented and employed in
ways that construct it either as theory divorced from practice, or as theory to
be applied to practice, or as knowledge from practice. Here, theory becomes a
discipline-based body of basic information to be stored until needed in
practice. It is also a collection of up-to-date practice-focused information to
be taken and applied to practice. Alternatively, (or sometimes in addition) it
is a synthesis of 'know-how' that is collected post-experience to form a
composite picture, like the bits of a jig-saw. And in some instances, knowledge
is the framework that enables analysis and critique. These four rather
different views of knowledge have implications for the status and value of
theory vis-a-vis practice. In the first conception, theory becomes something
'out there' in books and in the head of the 'expert', consisting of important
terminology and key facts to be slotted into a broader schema at some later
date. This knowledge is decontextualised and all-embracing, and as such can
easily come to be perceived as peripheral to practice but central for
assessment purposes. In the second conception, knowledge is judged by its
immediate relevance to practical tasks, and the extent to which it meets the
day-to-day demands of the job as known through experience. It is a piecemeal
theory consisting of highly specific information about particular nursing and
midwifery issues (such as 'lifting') of the kind that can be applied directly
to practice. It is often not perceived as theory because it is so clearly about
practice. The third construction of knowledge as synthesised 'know-how', allows
location-specific, task-particular theories generated in practice to be gathered
together to form a larger picture. Theory here is more closely related to
practice, and is associated with learning from a corporate set of experiences.
The distinction is between individual knowledge learned on the job in a single
placement, and corporate knowledge developed through synthesising information
from a range of such placements. Although it inspects routines and procedures,
it constructs a preliminary knowledge base for principled action, and is thus a
first step towards dissolving the barriers between theory and practice. The
final construction of knowledge as a framework for analysis and critique brings
about a paradigm shift in understanding. Here theory is something that grows
out of practice, is implicated in practice, and can be reflected upon and
critiqued from both a practical and a theoretical perspective. It involves
judgement and reflection upon values. Even in this conception knowledge can be
constructed as being open to classification through neutral, descriptive, and
technical analysis, rather than subject to interrogation through reflection.
Where reflection is based on critique, however, theory and practice become
indivisible; the one arises from the other and each is 'tested' in the light of
the other. The fact that there are multiple forms of knowledge suggests that
the question of how theory and practice should be assessed relates to where the
construct of knowledge to be assessed fits in the discourse about knowledgeable
doing and reflective practice, and depends upon how individuals, organisations,
and the professions resolve the debates about incremental knowledge and
holistic knowledge, and a priori theory and grounded theory. Each of these
types of knowledge and theory demands a different form of assessment.
Discipline knowledge can be assessed through writing that requests skilful
recall and is a form of memory test; evidence of piecemeal, up-to-date
knowledge can only be seen in
practice. The two further forms of knowledge bring theory and practice into a
relationship which makes it possible to assess them in either theory or
practice, or both. In these two cases,
the importance of the dialogue that leads up to and follows the
assessment activity, is undeniable; without it there is difficulty in
formulating satisfactory criteria for assessing such complex relationships.
BRINGING TOGETHER
THEORY AND PRACTICE:
DOING ASSESSMENT IN
THE CLASSROOM
Introduction
This chapter focuses
both on knowledge and assessment. It begins by examining in some detail the
role of classroom-based learning in shaping students' perceptions of knowledge,
and considers some of the implications of the various conceptions of theory
constructed in the classroom environment. It discusses, for instance, the
concept of knowledge as background information that is 'out there' in books and
academic 'disciplines', and looks at how this places the student in the role of
information collector and internaliser. It also considers the view of knowledge
as up-to-date information about a range of particular practical matters, and
how this positions the student as bringer of theory to practice. And it looks
at the construct of knowledge as accumulated practical know-how, a view which
casts the student as drawer-out and synthesizer of 'workaday' theory. Finally, it
examines the view of knowledge as understanding achieved through the analysis
of experience and the critical review of theory, and explores how that focus
juxtaposes all the previously mentioned forms of knowledge, and dissolves many
of the boundaries between theory and practice. It notes, however, that several
well-intentioned attempts to reconstruct knowledge as understanding have failed
because of insufficiently well constructed programmes for developing student
expertise in analysis and critique. It indicates that this has happened, on the
one hand where teachers have given undue attention to the taxonomic aspects of
chosen frameworks for looking (treating them as information to be memorised)
rather than to either their analytical power or the evidence they reveal, and
on the other hand, where they have allowed reflection to be insufficiently
critical (treating it as an occasion for synthesis rather than
theory-building). The second part of the chapter examines the way that
knowledge is assessed in classroom-based written assessment. Evidence is
offered of the range of strategies employed by approved educational
institutions to assure quality in the marking of written assessments. These
strategies, together with the criteria used to determine the level of
understanding demonstrated by the students in their assignments, are evaluated
in terms of their effectiveness in facilitating the interpenetration of theory
and practice. The chapter as a whole addresses the problem of assessing
knowledge as an aspect of competence away from the clinical or community
environment.
7.1. CONSTRUCTIONS OF
KNOWLEDGE AND THEORY
7.1.1. Knowledge As
'Basic Facts'
In many of the
classrooms observed students learned that knowledge is a discipline-based body
of information which it was essential for them to acquire and store. In three
separate classes on a single day in one institution, for instance, lessons
consisted of lectures about human physiology and psychology where the emphasis
was on what was termed 'the basic facts'.
The students were asked to answer questions put by the tutors and to
which these tutors already knew the answers. They were also encouraged to write
down a number of terms, which the tutors either wrote out on the board as they
spoke, pointed to on pre-written overhead projector sheets, or drew attention
to through emphasis in their speech. In all three lessons the tutors spoke for
the entire hour and a quarter session, interrupting the information-giving
occasionally to elicit answers to questions about what was learned in a
previous lesson. The students were
being offered essential but largely academic information by teachers who had
clearly spent considerable time organising their material and polishing their
presentation skills. The net effect was to place the emphasis on the teaching
rather than the learning, and to construct knowledge as something to be found
in books (or in the teacher's notes), memorised, and kept on stand-by in case
it might prove useful later. Knowledge was informally and intermittently tested
by the teachers through the questions they put to the students as the lesson
progressed. Terminology and
factual information were foregrounded, along with the studentsÕ ability to
recall what had been taught.
Many of the lessons observed
were organised as lectures, a form which encourages the presentation of theory
in finite pre-packaged bundles. A strong disadvantage of this approach is that
theory often only makes sense and becomes assimilated into the learner's
conceptual schema at a much later date. Although lectures were only part of the
overall teaching programme in the institutions visited, they provided messages
about the place of knowledge in competence which contrasted strongly with the
ones that came from seminar discussions which addressed actual information
needs identified through clinical experience. The data provides a picture of
the balance of information-gathering to information use, and reveals an overall
bias towards the former. This is not only bound to affect students' ability to
assimilate theory and make sense of it in terms of practice but is also likely
to cause them to see theory as a peripheral part of competence and therefore
something which is to be learnt primarily for assessment purposes.
The focus on terminology
was evident even in lessons where the teacher's declared intention was to raise
awareness of professional issues. In one class where the topic was the need for
professionals to take responsibility for their own actions, for instance, the
teacher made reference to 'feedback', 'rationalisation', and 'positive
strokes' using intonation and repetition in a way that suggested he wanted
these terms to be noted, remembered and quoted. In another class the
terminology was more medical with words such as macrocytic and normochromic being introduced and explained by means
of a chart which underlined the importance of the term and gave its everyday
equivalent. Sometimes terms were introduced with an additional cue to their
importance:
Here's
another psychological term to throw in at you.
The focus on factual
information was evident in many lessons where detailed presentations of
material from approved bodies of knowledge were made. It was apparent from the
way this information was treated that students were expected to store it in
their personal data-bank for use later. Students were left in no doubt about
the 'official' status of this information, which was prominently linked to key
'authorities' in the field. Quotations from adult educators such as Rogers and
Maslow were read out slowly for copying down. Taxonomies designed to aid
enquiry were presented in a form which suggested the taxonomy itself was more
important than its purpose. In one classroom, for example, the Likert
Introspection Assessment Scale was introduced, attention was drawn to a
detailed version of it on the overhead projector, and the students were
encouraged to scribble furiously in an effort to get down the main
categories whilst the lecturer was speaking.
Paradoxically, in the
process of foregrounding terminology and factual information, some teachers
made incidental comments to their classes which suggested they had doubts about
the 'real-world' value of the sort of information they were offering. In one
class the teacher made several throwaway remarks which could have been
interpreted as suggesting that factual information should always be treated
with a degree of scepticism. During a lecture on haematology, in a statement
about the probable cause of leukaemia, he commented in passing:
as with everything
from a broken leg to appendicitis they say it might be caused by a virus
And at the end of the
same lesson he reinforced this sceptical view with a jokily expressed comment:
Now you know all
about it so you'll be able to say 'Don't worry about this sister, just leave it
to me'
Such comments were
relatively small in number, but experience in a wide variety of mainstream
classrooms teaches us that students who are cue sensitive pick up the sub-texts
embedded in this sort of remark. If, as in this case, the sub-text is casting
doubt on the thesis in the main text, then students are faced with a dilemma of
interpretation.
Emphasis on knowledge as
a store of academic information constructs professional competence as practice
founded on skilful recall. Skilful recall involves the
ability to bring to mind select pieces of information by reference to the
complete body of knowledge. This in turn requires the ability to successfully
store large quantities of information in a decontextualised form for long
periods of time, until such moments as some part of it may be needed for a
particular purpose. Although the
student who is accomplished at skilful recall has the potential for integrating
understanding and action, it is a strong possibility that for many such
students theory will remain 'out there' and 'academic' for considerably longer
than is helpful to their development as 'knowledgeable doers' and 'reflective
practitioners'. Successful professional practice depends upon the coming
together of theory and practice; skilful recall may be an important preliminary
step on the way to the reflective practice the profession wishes to
promote, but it is not of itself
sufficient to ensure that.
Normally the kind of
lessons we have described occur in the earlier part of a course and are a
precursor to a programme organised to be more responsive to knowledge needs.
They are therefore representative of a stage in a series of classroom
activities rather than typical of the whole set. Nevertheless, because students
have substantial early experience of knowledge as decontextualised and
all-encompassing data, some of them are likely to continue to see it as
something which is peripheral to practice but central for assessment purposes.
Indeed, it is clear from student comments that they see the heavy information
input at the beginning of their courses as intended to give them an 'academic'
background, and that they resent the implication that they need to have a
complete repertoire of so-called 'academic' knowledge, encompassing all that
there is to know about a discipline, topic, or subject, before they can practice
competently. It is also clear, however, that the front-loading of basic
theoretical knowledge leads to such a pernicious perception of knowledge as
something abstract and discipline-related, that when theoretical frameworks of
various kinds are employed later to encourage students to conceptualise areas
of competence, these too are treated as information to be assimilated.
7.1.2. Knowledge as
Up-To-Date Information
We have shown that most
courses have sizeable components where knowledge is treated as a priori information, and that many teachers
inadvertently encourage students to consider such knowledge as academic theory
whose main function is to inform writing done for assessment purposes. On many of those courses, however,
there are parts of the programme where students are introduced to more
practice-related information which they are encouraged to 'try out' and in so
doing, to test. Students can, of course, take parts of the 'discipline'
knowledge they have acquired and apply it to practice, especially if they have
begun to internalise the information from the bodies of knowledge to which they
have been exposed, and can carry out skilful recall. But students appear more
inclined to apply piecemeal, task-relevant information to practice than to
seek links between inclusive abstract discipline-knowledge and the practical
context they may find themselves in. In addition, some courses give
considerable status to this sort of knowledge by first introducing recent work
by expert practitioner/writers who have examined the effectiveness of certain
approaches to practical care, and then providing opportunities in the classroom
environment itself for the student to put it into practice. This they do through
offering simulations which allow the student to use new knowledge about skills,
such as communication skills, and demonstrations (of lifting and artificial
respiration, for example) which give the student a chance to practice the most
up-to-date techniques. Where activities of this kind do not happen in the
classroom, and they are relatively rare occurrences, information about the most
recent advances in nursing and midwifery knowledge is kept on hold, like the
more 'academic' information, until the student is on placement. Students see
it as something they will take with them to the clinical or community area and
apply to their nursing or midwifery practice as soon as the opportunity presents
itself. As one interviewee said:
As a practitioner I should be able to operationalise the theory or
what's the point in having it
Qualified staff are
often very grateful for what they call 'the latest' on areas which are by their
very nature constantly being upgraded, areas such as drugs, theatre
technologies, and care techniques for specific conditions (e.g. the treatment
of bedsores, and the administration of client-controlled pain-relief).
Sometimes, though, they reject so-called 'new theories' on the grounds that
their experience has taught them that something else is better. In either case,
this piecemeal task-relevant form of theory is constructed as something which
comes from the classroom and is applied to the practice environment, where it
is tested out and either accepted or rejected. Unlike 'academic' or
'discipline' knowledge, it is perceived as potentially useful, but like it, it
is rejected if it does not meet the demands of the job as perceived by
qualified practitioners. Because up-to-date task-relevant knowledge is so
important in practice, both qualified practitioners and students value it
highly. A striking account of its use is provided by a midwifery student in her
spoken account of clinical work she undertook as part of a fieldwork study.
This student talks about the way that she was able to use recent 'theory' about
foetal heart monitoring to identify potential delivery problems:
foetal heart
monitoring you know sort of during labour..I would come back on to the delivery
suite and actually look at some..hm.. monitorings that had undergone that
morning sort of like try and identify problems..so it was just taking theory
back to practice (midwifery student)
Interestingly, though,
student interviews reveal that up-to-date knowledge is rarely perceived as
theory, perpetuating the myth still common in the professional culture
that theory is something apart from practice.
7.1.3. Knowledge As
Synthesized 'Know-How'
The majority of courses,
then, 'construct' knowledge as a mixture of storable data and of up-to-date
information to be applied to practice. However, most students also have a sense
of it as a broad spectrum of information that comes from reviewing and synthesizing practical experience.
Significantly they value this kind of knowledge highly, too. In their remarks
about post-placement discussions, students favour learning from each other's
experience above book learning, and they particularly appreciate the
opportunity to pull together knowledge which has been gained in a piecemeal way
during practice.
it's through your own
observations you pick things up..it's from your own observations not from books
because we've done very little practical (in the classroom)..we learn
when we come back from a placement.. you go on your ward placement and you come
back then you know your practical theory when you come back..you don't learn
anything practical going out, you learn when you come back (nursing student)
you can see a midwife
give a lady a good experience and then somebody else can give them a bad
experience (--) and it's something we all commented on within the first..about
the first two months so the tutor, you know, asked what we thought about
midwives (midwifery
student)
There is an implicit
recognition that the broader picture created when a range of experiences is
shared by a group can lead to insights that go beyond the particular:
it was quite good
because we're split sites, some are at X and we're here and we have like
reflective sessions because we're usually on the same areas at the same time
and we have two totally different experiences (midwifery student)
The distinction here is
between individual knowledge which is learned on the job in a single clinical
placement, and corporate knowledge which is developed through synthesizing
information from a whole range of such placements in the classroom after they
have been completed. The former includes knowledge about routines and skills in
particular contexts and in relation to the needs of specific clients, the
latter extends beyond that context-related knowledge to generate a corporate
knowledge which is more comprehensive. This newly synthesized knowledge,
although not yet quite a fully developed and explicit theory, is nevertheless
closer to the 'global' or generic knowledge which forms the framework that
competent professionals use alongside context-related knowledge to inform their
practice. It can be thought of as preliminary knowledge on which to build a
professional theory that will facilitate action, provide a rationale for change
in practice, and lead ultimately to evaluation and revision of the theory
itself. The notion of theory as a working hypothesis rather than a truth to be
validated is not yet widely adopted in the classroom.
7.1.4. Knowledge As A
Framework For Analysis And Critique
The types of knowledge
described above can be thought of as level one knowledge, that is, knowledge
which provides the foundation for a piecemeal theory but does not yet
constitute a coherent and unified theory. A unified theory does not necessarily
come about as a consequence of comprehensive coverage, as we have indicated
when referring to discipline or subject theory. A unified theory is constructed through reflection,
analysis, and critique which together lead to a paradigm shift in
understanding, and the development of an all-pervasive attitude of enquiry.
Indeed, comments from one group interview made it plain that their teachers had
been so successful in involving them in using their knowledge to guide their
way of looking at the world, that informed observation had become second
nature. According to their account they would now find it difficult not to look
inquisitively at what was going on around them, as the following remark from one student indicates:
you go out and you
think 'why does she wear that' you know, and you think 'where does she come
from' and what possessed her to wear ..you know it's wrong ..that's sitting
next to somebody and you think 'oh well he's looking to go out with her and
she's looking such and such..she doesn't like, he's sitting this way ...it
becomes your whole way of thinking after a while ..it is.. you just adjust to
it ..it just takes you over really
Elsewhere there were
students who spoke of their experience in using their knowledge to present a
persuasive case for or against a course of action. One group of students, for
instance, in talking about a Patients' Home Needs Study which built upon the
Neighbourhood Study they carried out in the first year of their Project 2,000
course, commented on the difference between their own expectations in this
respect and the expectations of students following more traditional courses.
The student quoted below offers a sophisticated perception of what constitutes
analysis:
traditional students
on the same ward as us said 'we had that to do as well' she says 'do you want
to have a look at mine' and I looked through it and I mean it was so different
..there was no argument whatsoever..it was just explanation, explanation.. (--)
what they did was explained what each service was..whereas if we did that we'd
be failed outright (--) (we had to) give a rationale for it, evaluate the
effectiveness of it, and discuss the whole situation, you know,...and look at
the long-term the alternatives
Clearly, if students are
able to remain inquisitive and enthusiastic throughout the front-loaded
discipline-based component of a course, they begin to develop a perception of
knowledge as integral to practice. This expectation is reinforced by the assessment
activities they encounter, through which they learn to integrate the specific
and concrete with the general and slightly more abstract. The quoted student's
expectation that she should 'analyse' her experience in the light of her developing theory base,
rather than merely offer an 'explanation' of it, reveals an understanding of
the need for analysis to be supplemented by a critique, or what she calls an
'argument', that allows her to conceive of alternatives. On most courses,
analysis and critique are different. The first is usually a neutral and
technical activity, involving the deconstruction of an event or text. The
latter normally involves judgement and the adoption of a position. Analysis which relies on categorisation
into taxonomies without comment does not lead to critique. However, in some of
the classes where 'analysis' is done, a number of students expect to take part
in some sort of critique of practice. Several speak about this
critiquing in terms of "asking questions" about current practice.
Their comments show how classroom rhetoric has impinged on their thinking, and
how they act in response to encouragement from tutors.
they want us to
question things more and you do..I mean you really do. ---no you do you do question things more.. you look
at different..you look at alternatives (nursing student)
you know you take
everything into account then you start to question all of this.. why this is
there..and just do a lot more questioning (nursing student)
I know when we were
on Delivery Suite they found it quite difficult because we were the first
course through and they said they couldn't believe that we asked..you know..
questioning so much (midwifery student)
at X it's the
paediatrician who will dictate some of the care so..I mean it's quite
conflicting if you've been taught something in school and then you go on to
clinical area and you think 'but I thought that went out about twenty years
ago' it's difficult and you tend to find that they find it really difficult
because they were questioning ..like..why things were done that way (midwifery student)
Students regularly talk
about their reluctance to apply the rhetoric of the classroom (using 'rhetoric'
in its classical rather than its pejorative sense) to the clinical setting,
commenting on the 'danger' of challenging established practices in that
environment and citing the possibility that they would be adversely assessed as
a consequence of any challenge they might make. Nevertheless, in the interim,
they are ready to evaluate theory by themselves in the classroom:
I mean it's only
recently that I've understood like that .. to rationalise something to give a
rationale ..I never understood what that meant but it was like 'give an
explanation of what you think' .. but it takes you time, you know, you have to
make mistakes and realise, you know, that you must .. I just did the knowledge
and discussion for the first few projects but after a while you realise there
are things and you evaluate
it and rationalise (i.e. provide a rationale for) it (nursing student)
They are also keen to
invite others into the classroom to find out why certain things happened in
practice. The following comment about how an obstetrician was gently quizzed on
his theoretical basis for making certain kinds of interventionist decisions in
birthing situations provides an example of this:
he (the obstetrician)
was invited to come and see us ..so you know we can say 'why do you do this?'
'why has midwifery research shown this?'(midwifery student)
7.2. ASSESSING
DIFFERENT FORMS OF KNOWLEDGE IN WRITING
7.2.1. Assessing the
Knowledge Necessary for 'Knowledgeable Doing'
It is sometimes assumed
that the main difficulty in assessing studentsÕ knowledge in the classroom
context is the fact that the situation forces a focus on ÔacademicÕ theory.
What has emerged is that things are a great deal more complex than commentators
have previously thought. Nursing and midwifery theory exists at two levels, the
level of information (or ÔbasicÕ knowledge) and the level of organising
discourse (or understanding) [51]. First-level knowledge
is usually introduced during the earlier part of a course. It is apparent from
observation and interview that even at this level knowledge is multi-faceted.
Informational knowledge manifests itself as data for retrieval: something which
is already known to others who have packaged it into 'bodies of knowledge' that
have to be learned in case some part of one of them is needed in a practical
context at an unspecified time in the future. This type of knowledge has little
direct application to practice and so can be appropriately assessed in writing
outside the clinical or community context. In another manifestation it is
information for immediate application to practice: up-to-date knowledge about physiology, drugs, physical care
techniques, and communication 'skills'. Student awareness of this second type
of knowledge can only be assessed
in the placement area where it can be observed in action. And in its third
form, knowledge is the 'broader picture' built up when individuals share their
separate experiences; information
they have 'found out' from a range of practical contexts and put into a common
pool. This form can be assessed either in subsequent practice, or in writing,
and in both cases it will represent an incomplete 'workaday' theory rather than
a total and coherent grand theory.
What distinguishes the
three forms of first-level knowledge from each other is their relationship to
practice. Data, or book knowledge, exists prior to practice and is organised
within a 'subject' discipline. Up-to-date knowledge is contemporaneous with practice
and is organised within the rather more eclectic disciplines of nursing and
midwifery. Synthesized knowledge is created after practice, and like the previous
type, it forms part of the newly developing disciplines of nursing and
midwifery. All of these forms of knowledge are assessed in writing in the
classroom, although the second less appropriately than the others. The fact
that there are three forms of knowledge found predominantly in the earlier part
of most courses means that within any single course there is an inbuilt
discourse about knowledge itself. The discourse here is the discourse of
'knowledgeable doing', which includes debates about the relative value of
different forms of knowing, and about the nature of nursing and midwifery
theory. There are central debates about the value of incremental (or piecemeal)
knowledge vis-a-vis holistic (or broad picture) knowledge; the value of a
priori
theory vis-a-vis grounded theory; and the value of subject knowledge vis-a-vis
nursing knowledge. Contingent on this discourse are a number of assessment
issues. Should early assignments be focused on discipline knowledge and later
ones focused more heavily on theory-out-of-practice, for instance? If they are, then should there be
different assessment criteria used to assess theory in different parts of the
course, with the assessment of theory and practice together being a major
criterion in the latter part only?
Given that certain forms of knowledge cannot be assessed in the
classroom context, should teachers spend more time in the clinical placement
area expressly for the purpose of assessing these? And should there be more
attention paid to assessing the process by which a student moves from a narrow
knowledge base to a broader one?
7.2.2. Assessing the
Knowledge Necessary for 'Reflective Practice'
In most approved
education institutions first-level knowledge and knowledgeable doing are
treated as prerequisites for reflective practice. The pattern of learning
development most prevalent in classroom-based nursing and midwifery education
is one in which students move from information collection to the application
of knowledge, from application to synthesis, from synthesis to analysis, and finally from analysis to
critique. Students progress from having a hesitant command of theory which they
apply more or less uncritically to particular clinical events, to full command
of a critically informed theory which has become robust enough to take account
of physiological, psychological, and sociological factors across a wide range
of environments. By the time these students reach the latter stage of their
three year course, the best of them are able to reflect on the relationship
between particular and
context-related events (referred to elsewhere in this report as the 'local'
aspects of nursing and midwifery) and on the more general and generic issues of
nursing and midwifery (referred to elsewhere in the report as the 'global' or
'universal' aspects). In a small
number of cases individual teachers encourage reflective practice (that is,
practice informed by analysis and critique) from the outset of a course. In
these classes first year students are able to construct an argument which sets
out alternatives and makes a case for a chosen option. At whatever stage it is
developed, however, second-level knowledge is the holistic knowledge which
permits a competent student to recognise complexity in a situation, make
informed judgements, take difficult decisions where necessary, and critique the
values that underpin those judgements and decisions.
As with first-level
knowledge there are a number of discourses revealed by the different types of
second-level knowledge. This time the major discourse is the discourse of
reflective practice, in which is embedded a debate about the relative
importance of explanatory analysis and critical analysis. This incorporates the
debate about the relationship between action (which by necessity has to be
immediate and narrowly focused) and reflection (which if effective recognises
complexity and attempts to prevent closure in thinking). This second debate is
essentially the debate about the interdependence of theory and practice itself.
Although the contending voices in the discourse are apparently strongly
opposed, the essence of effective reflective practice is that it leads to
action. In the clinical placement area reflection can follow action if there is
an appropriate set of procedures. In the classroom, however, reflective
practice inevitably concentrates on reflection rather than practice. This seems
to indicate that the building of a holistic picture through analysis that
brings the local into relationship with the global, and the construction of a
critique of practice in relation to theory (and vice versa), are the most
appropriate professional development activities for learning in the classroom.
Once again there are assessment issues contingent on this discourse. In
considering written assessment it is worth noting the need for well-defined
criteria for differentiating types of reflective practice so that the
analytical explanation can be distinguished from the critique(al) argument. We
also have to examine the structures that institutions have for bringing teachers
together to discuss those criteria, because it is well recognised that
processes such as judgement are notoriously difficult to assess in a way that
guarantees unanimity between assessors.
Finally, we have to ask how the 'problem' of assessor subjectivity can
be minimised.
7.3. CONSTRUCTING
CRITERIA FOR THE ASSESSMENT OF KNOWLEDGE
7.3.1. Preparation
The discussion so far
has been about the various forms of knowledge presented and assessed in
classrooms, and has examined some of the ways in which cue-sensitive students
respond to informal messages about the value of theory. It might appear from
such an account that all issues to do with the nature and function of knowledge
and theory are dealt with solely within the hidden curriculum. This is not so,
for in the classrooms that we observed there were occasions when discussion was
explicitly focused on assessment issues, and especially on the matter of what
constitutes relevant knowledge for the achievement of a given standard of
written work. There were lessons set aside regularly for preparatory discussion
about particular coursework assignments, and in many of those lessons the
teacher talked about the type of knowledge the students were expected to
include in their assignments. In addition, there were often individual
preparation tutorials where a similar conversation took place. Students were
apprised of the criteria which would be used to differentiate between
satisfactory and excellent work, and in some instances were provided with a written
version of these criteria. Observation in lessons and tutorials indicated that
in the earlier part of their course students were extremely anxious about their
capacity to demonstrate comprehensive knowledge but, because of their lack of
awareness of the need to do so, were much less anxious about their ability to
show critical depth. There was,
however, a change in emphasis over time as students experienced a number of
whole class discussions in which teachers discussed completed written
assignments at length. And where longer assignments carrying a substantial
percentage of the overall course marks were used, as was the case in two
institutions where the students carried out Neighbourhood Studies during
community placements, there was an increase in student-teacher discussion
expressly about assessment criteria. There was also greater student awareness
of the value attached to different types of knowledge in those classes where,
irrespective of the part of a course being assessed or the means by which the
assessment was being carried out, a teacher-completed feedback sheet was
offered to each student when they received back their work. In those courses
where there was regular targeted feedback students felt they approached
subsequent assignments with more understanding of how to show their knowledge
and understanding.
Although careful teacher
preparation of students for assignment writing, and targeted,
criterially-explicit feedback on the completed assignment, are certainly of
value in promoting students' understanding of the task they have to complete,
they do not necessarily develop their understanding of the knowledge itself.
If the explicit advice proposed in preparation and feedback sessions differs
markedly from the 'advice' implicit in the hidden curriculum of other lessons,
students are liable to believe what they have experienced rather more than what
they have been told. Where teaching concentrates on first-level knowledge
rather than second-level, assignments are likely to include quantities of 'bolted-on' (i.e. non-integrated)
information.
Sometimes, the explicit
criteria offered in preparation sessions themselves perpetuate a focus on level
one knowledge, particularly when
teachers represent the assignment task as presenting level one knowledge
(the 'basic information'). Typical of the arrangements made for preparation and
feedback were those in the class we observed where the students met their
course teachers to discuss an imminent written assignment. The two teachers
concerned had taught some but not all of the material to be assessed, and
perceived their function as co-ordinators. Discussion took place first in small
groups where issues were identified, and was then followed by a class
discussion about what was required to write a paper on one of five topics
offered. One major group of
queries was about what content was relevant and whether or not it would be
possible to remember in detail what had been said in lectures two months ago. Behind
these questions, of which the one quoted below is a good example, was a clear
assumption that there was a correct content, and that questions would be about
the core professional facts:
can you just go over
what we should know about body language?
The teachers did not
challenge this assumption, but 'mapped' briefly the general territories for
which facts should be learned in detail. They also responded with an offer of a
group tutorial on the respiratory system for those who had missed out last time
or who were a bit unsure. There was a clear indication that it would be wise to
focus on what had been taught, rather than what might have been learnt:
I guess you're all
going to be doing number three, we've done a lot on that
tell me what you're
going to need..give me two or three from what we've done.
Interviews with the
teachers made it plain that they were 'learning by the seat of their pants' and had to rely on
their folk versions of what assignments should show. The preparation lesson
itself was 'slotted in' at the last moment, and consequently the teachers came
to it relatively unprepared.
Later, their institution became aware of the fact that teacher
preparation for assessment was inadequate and that student preparation was too
often happening in the margins, and developed procedures for dealing with both
these issues. In the meantime, however, students had been taught and had
learned that for assessment purposes it is better to be safe than to venture on
to new ground, and that accuracy in factual recall is paramount.
Discussions often generated
questions that were implicitly about the nature and range of ways in which
theory could be made to relate to practice. In one preparation lesson, two
students asked respectively:
can we focus in on
just one aspect, you can't do much in just two thousand words?
can you provide a
small range of examples?
These were really
questions about the relationship between what we have called the 'global' and
the 'local', or between the context-specific and the generic, but this went
unrecognised. The questions were
answered as if they were merely procedural; the teachers suggested
that examples had to be selected carefully to fit into the word limit of the
assignment. There was no discussion
about other possible criteria on which examples might be provided. No-one mentioned the value of
presenting a range of views through these examples, for instance. Indeed, it
was suggested that hypothetical examples might be used to make the point, implying
that the purpose of examples was to illustrate theory, and foreclosing on the
possibility that practice might either generate the questions to be answered
from a theoretical perspective, or provide the data with which to critique
theory.
By not exploring some of
the assumptions behind the different responses to questions asked, teachers
lose an opportunity to bring the theory-practice relationship to the front of
the curriculum agenda. In this respect, what is left unsaid is often as significant
as what is said.
In many of the lessons
observed, what was shown to count for assessment purposes was first and
foremost the ability to demonstrate familiarity with theory, and to make that
familiarity evident to the reader.
Secondly, it was the ability to show that the writer could think of some
instances where the theoretical point had been demonstrated. This approach
seems to be a response to the Board's directive that students should
demonstrate an ability to bring together theory and practice but that the two
should remain separate for assessment purposes. Where assessment criteria are
fully explored by students and teachers in preparation for writing, and where
they match the criteria used in learning at other times, writing for assessment
can become an educative activity. Unfortunately, where there is a mismatch it
becomes a largely irrelevant time-consuming imposition which reinforces the
message that theory has little to do with the ÔrealÕ world of practice.
7.3.2. 'Feedback'
The argument above
focuses on what we have called the 'construction' of concepts of knowledge and
contingent attitudes towards theory. 'Construction' takes place in the course
of repeated classroom interactions; it involves not only pre- and post-assessment-event
comment that is focused centrally on assessment issues, but also
all the 'throwaway remarks' and subversive discourses within the hidden
curriculum. In every institution, however, there is a set of explicit
written-down assessment criteria which also contributes towards this
construction process. A survey of the marking criteria used in a number of
approved education institutions has revealed that they are intuitively divided
into 'lower order' and 'higher order' meta-categories [52] which parallel the first
and second knowledge levels suggested above, with the higher order set of
categories including the use of analysis, judgement, and critique, and the
lower order set centring around knowledge-collection, synthesis, and display.
Teacher comments related to these criteria are the main focus of feedback
sessions; the feedback sheets through which teachers convey their first
responses act, therefore, in both a formative and a summative capacity.
A typical feedback
sheet, such as the one outlined below, offers guidance on presentation,
structure, and content, but gives little indication of the cognitive processes
the student will be expected to demonstrate.
• Presentation
• Structure
  + introduction
  + development
  + conclusion
• Content
  + relevance to the set question
  + evidence of breadth of reading
  + use of references
  + accuracy and relevance of reported criteria
  + going beyond the reading
In this example, the
Content section mentions the importance of cover ('breadth' of reading) and of
accurate recall of what has been read ('accuracy', 'use of references'); it
does not, however, differentiate mere wide cover from useful breadth, or indicate
what makes the inclusion of a particular reference relevant and lifts its
mention out of the realm of 'name-dropping'. Given schedules like this one, it
is not surprising that students often concentrate on providing evidence that
they have collected and synthesized data rather than on exploring the issues
raised by the lack of fit between what they have read, what they have
experienced, and what they have begun to formulate as an overall nursing or
midwifery theory. Student writing seems to suggest that students interpret the
criteria as encouraging a display of what they know about the area of concern, that knowing
about is perceived as more important than knowing why,
and that having knowledge is thought to be more important than critiquing knowledge. Certainly
most students offer very little explanation about why they have included some
things and excluded others. On the whole, knowledge is treated axiomatically in
feedback sessions, when it might well be more useful if all knowledge was interrogated
(i.e. if questions were asked about its content and its status). In such an
interrogation the full range of paradigmatic alternatives selected out of the
knowledge bank in order to arrive at the current presentation, would be
re-instated and become the subject of discussion about the criteria used for
exercising judgement and making choices. Feedback sessions can give precise,
targeted information on the quality of the knowledge presented in a written
assignment; they can also provide a major opportunity for students to develop
the cognitive processes necessary for successful judgement and decision-making.
7.3.3. Marking
Schedules
Every approved education
institution is obliged to have a set of marking criteria which is available for scrutiny by external
examiners. This schedule is perhaps the best indicator of the institution's
'official' view of what constitutes relevant nursing and midwifery knowledge,
and of the relative weightings given to various forms of knowledge by the
institution. Schedules are often constructed by either a single person or a
committee whose major responsibility is in the area of assessment; for most of
the users of the schedule it comes from 'above'. In using any form of guideline
to which they have not been a party, people interpret what they have before
them in the light of their previous understanding and of their personal
experience. They also tend to persist with their previous perceptions of
knowledge until they 'own' the criteria in the schedule. They therefore often
present the students with two contradictory versions of what is valued. On the
one hand they present the version offered in the assessment schedule, while on
the other they present them with a more subtle and insidious version in what
they do and say in the classroom as part of their teaching. Given the
interpretive tendency of human beings, the only possible way of achieving some
degree of comparability between these two versions is through dialogue about
the differences between the 'ideal', as represented by the official assessment
schedule, and the 'real world', as represented by the range of individual
schedules enacted in the course of classroom interactions.
In the certain knowledge
that all schedules are 'ideal' and that therefore anything written in them
indicates only an intention, we offer examples of two such schedules to
illustrate the range of assessment criteria they employ, and to show the extent
of their acceptance of the multi-dimensionality of knowledge.
Example 1. (complete
set of criteria)
Grade A The student has
demonstrated an exceptional standard of work, achieving all levels of cognitive
skills, i.e. knowledge, comprehension, application, analysis, synthesis, and
evaluation.
Grade B The student has
adequately demonstrated knowledge, comprehension, application; has made some
good attempts at analysis, and synthesis, but some conclusions reached are not
substantiated by the arguments/discussions raised.
Grade C The student has
adequately demonstrated knowledge, comprehension, application; and has made
some weak attempts at analysis and synthesis.
Grade D The student has made a weak
attempt at knowledge, comprehension, application; but has failed to demonstrate
depth of knowledge, and answer contains some irrelevant material and/or
inaccuracies.
Grade F The student has failed
to demonstrate knowledge of the subject; material is largely unrelated to their
subject or contains major inaccuracies or omissions.
In this schedule there
is a clear valuing of analysis, and of the development of an argument, although
argument appears to be conceived of largely in terms of ideas derived from
reading rather than from a self-developed critique of what has been read.
The second schedule has
an even stronger message about the value placed on critical analysis.
Example 2. (Selection
from the full set)
Grade A. Clear evidence of wide
reading and understanding supported by (correct) appropriate use of reference;
clear evidence of relevant research; clear expression, debate and argument
indicating some critical appraisal of facts, comparing and contrasting of issues
and the relating of theory to practice where appropriate.
Grade B. Some evidence of relevant research; sound knowledge and understanding of
issues being addressed; some attempt made at using knowledge to draw
conclusions and explore relevant ideas, relating theory to practice where
appropriate.
Grade C. Largely descriptive with
little discussion and interpretation; some application of theory to practice
where appropriate.
Grade D. The application of knowledge
to situations is restricted; superficial handling of the issues discussed;
descriptive account showing little evidence of reading; no significant errors
or inaccuracies which might imply dangerous practice/judgement.
Grade E. Inadequate levels of
knowledge; minimal evidence of understanding; many unsupported statements;
presence of errors and/or inaccuracies which may imply dangerous
practice/judgement.
Here value is attributed
to critical appraisal, to consideration of alternatives, and to the application
of knowledge to situations. The use of reading and reference is to be
appropriate, suggesting a concern that the students' judgement should be
evaluated. It also challenges the value of unsupported assertion, and insists
upon the (partial) integration of theory and practice.
Even within this small
sample, it can be seen that schedules offer a complex picture of knowledge.
These schedules will give credit to students for what they know about - the terms they are
familiar with, the taxonomies they can repeat, the authorities and models they
can quote - and for what they can do with that knowledge - their ability to see
issues, to differentially apply theories in relation to contexts, and to
critique practice and theory itself.
They can achieve a pass mark by providing evidence of basic knowledge as
much as they can by demonstrating understanding and reflection. What students
come to value depends, therefore, much more on the values and emphases they
receive from the hidden curriculum of daily classroom interaction than on the
official schedule itself.
7.3.4. Marking
Workshops
It has been suggested
several times that the most effective mechanism for achieving some form of
comparability between the criteria applied by different assessors is to provide
increased opportunity for inter-assessor dialogue. A dialogic culture
flourishes best where time is structured in for the dialogue to happen. Several
institutions have created a culture of this kind and hold termly markers'
workshops in which teachers mark and discuss summative written assignments
together. Written comments are compared, and where different teachers value
different forms of knowledge in different ways, there is the basis for a
dialogue between the holders of the different views. More importantly, there is
the chance that a closer understanding could be achieved. The three comments
below are typical, ranging from those that address the overall structure of the
assignment, through those about the disciplines 'covered', to comments on the
use of problem-solving and research.
good evidence of
introductions, main points, arguments and conclusions. Good indication of
having read adequately. I detected a sense of becoming more hurried towards the
end.
all perspectives have
been considered but sociological and psychological perspectives rather vague.
Important aspects such as ...(...)..are not included;
very good. a
problematic approach is taken relating all issues to nursing care, usually
research-based, and professionalism (sic!).
In this institution
further marking workshops led to a revision of the feedback sheets, with the
result that more higher order criteria were included. The application of
knowledge to problems became recognised as important, whilst analysis appeared
for the first time. Some anomalies remained (e.g. the separation of application
and analysis) but the process of reflection through dialogue increased
collective understanding of the relationship between theory and practice and
facilitated the introduction of a marking schedule that rewarded understanding
of theory 'applied' via analysis. Markers' subsequent written feedback included
comments such as:
Questions approached
with set ideas and perhaps not a full understanding of what was asked for
Very good; a
problematic approach is taken relating all issues to nursing care, usually
research-based and professionalism (sic)
indicating a shift away
from knowledge display and coverage and towards critique. Dialogue between
markers was the single most effective mechanism by which better teacher
understanding of the criteria employed to assess student knowledge was
developed. In a second approved education institution, the nine teachers and
their facilitator raised and 'answered' (i.e. came to a working position on)
some core assessment questions. It was agreed, for instance, that references to
'authorities' should only be credited where the use of the authority genuinely
helped the case being put. Similarly, quotations of information from the
literature should only be credited where it had been shown that the information
had been appropriately applied to the student's own study. In the same workshop
it was also decided that descriptive accounts were to be valued a great deal
less than those where a degree of synthesis had occurred, and these in turn
were valued less than ones where analysis had taken place.
This group also
distinguished between analysis and what they termed 'evaluation' and we have
been calling 'critique'. When several sample scripts were marked in the group,
it was agreed that one that
provided evidence of synthesis and analysis in the final paragraph but no
evaluation anywhere, should be marked lower than another in which the student
had drawn out a number of conclusions and made a summative value
judgement. The focus here is on
the ability to identify issues, make judgements and be explicit about the
values on which they are predicated,
consider alternatives, and
offer a critique.
It is interesting to
note that in one case studied, the teacher and students were able to examine
the criteria their writing was to be judged by (i.e. criteria based on Bloom's
Taxonomy) before they embarked on the writing task. They were then able to
take the criteria into account whilst doing their writing. The discussion of
general criteria before a piece of work is carried out empowers students to
shape the immediate task to take account of those criteria. Discussion after a
piece is completed only allows consideration of what could have been done
better, and where things might change in the future.
7.4. ASSESSING
KNOWLEDGE: AN OVERVIEW
7.4.1. The Current
Situation
The classroom component
of nursing and midwifery education has a potentially important role to play in refocussing student expectations
about clinical competence. In particular it has the potential to enable
students to both value and become competent at using experiential and
theoretical data, at doing analysis, at critiquing, and at seeking
alternatives. Students from some classes are more aware of the need for
reflection and critique, and more used to doing them, than students from other
classes. These differences appear to be influenced by:
• individual staff approaches to teaching
• discussion of assessment criteria with students
• assessment training programmes for staff
• opportunities for teacher-teacher dialogue
Before teaching staff
can help students reach a clear understanding of the criteria for assessment
that will be applied to written work
they must have a degree of agreement amongst themselves. There is little
reason to believe that in midwifery and nurse education institutions there is
likely to be any more immediate agreement about those criteria than in any
other professional education organisation. It is more likely that over time,
through working together and discussing together, teachers will eventually come
to a degree of common understanding about what criteria to apply to the
assessment of knowledge. Data from two institutions suggests that where either
informal teacher discussions or more structured workshops occur on a regular
basis, analysis, argumentation, and critique become foregrounded as assessment
criteria.
7.4.2. Strategies for
Moving Forward
Although there is a wide
variety of perceptions of what the assessment of theory should focus upon, and
therefore a considerable range of things that get emphasised in the
interactions that take place before and after the assignments are written,
institutions often have written criteria that point towards an enlightened
approach to assessment. The problem is not the presence or absence of those
criteria, but the degree to which they are understood and internalised by the
teaching staff who do the marking. That understanding can only be achieved
through regular discussion about differences in perception, and about the
values which inform judgements. There will always be some areas of disagreement
where judgement is involved, but reliability is improved by strategies such as:
• double-marking followed by discussion
• sample monitoring to establish marker comparability
• marker workshops to clarify criteria
• course grading meetings to scrutinise the form of the assessment
Strategies like these
encourage discussion between markers, and bring them closer in their
understanding of what constitutes appropriate assessment criteria.
CHAPTER EIGHT
ABSTRACT
Nursing and midwifery
are professions in which action informed by relevant knowledge is central. The
knowledge can be private and in the head of the individual practitioner, or
public and accessible to comment and reflection. If it is to be assessed, it must
become public through dialogue which makes possible comparison, exploration, and
critique. In a dialogic culture, practice is treated as something about which
it is legitimate to argue, ask questions, and explore alternative
possibilities. In such a culture, monologue-with-an-audience, in which what
counts as competence is presented for students to copy, is replaced by a
sharing of ideas and understandings about it that draw on accounts from
theoretical perspectives, and both generalised and situation-specific contexts
of practice. The conversations which are an integral part of a dialogic culture
make it possible for people to consider the values and theories embodied in the
habituated practices of their working lives. All organisations have a variety
of communication networks which have the potential to promote learning
conversations. Hierarchical, line-management networks, with their largely
univocal, uni-directional forms of communication, offer prominence to the
official 'voice' and seek to promote a single perspective vis-a-vis issues with
an organisation-wide significance. Flattened hierarchies and peer group
networks strive more often to ensure understanding in terms of multiple
perspectives and location-specific events. Dialogue about assessment occurs in
both contexts and consequently has to handle the interplay between the two
different types of perspective.
Assessment events and the conversations that contribute to them are
places where the situation-specific, the organisational (or institution-wide),
and the 'global' are brought into juxtaposition, and where, as a consequence,
what is done habitually in separate specific contexts can be brought to a level
of awareness that is not available in the midst of practice. Through dialogue, the
internalised criteria which have enabled the student and practitioner to
reflect in action and so act on a day to day, situation by situation basis, are
brought to a level of consciousness that makes it possible to generalise that
knowledge and use it in analogous situations. Reflection-in-action is
situation-specific, time-limited, and action-oriented; because a dialogic
culture encourages discussion before, during, and after action it promotes
reflection-on-action that contemplates possibilities, alternatives, and general
principles. It also allows a move beyond the level of reflection-on-action to
critical (or critique(al)) review in which situations are analysed, judgements
made, and understanding constructed. At this level it facilitates a move to a
higher level of individual and group awareness where conceptual maps are
reorganised to take account of complexity, and critique is developed through
reflection. This level of reflection can only occur, however, if there is a
well-documented evidence base collected from multiple sources to provide a
focus. Dialogic assessment relationships and structures bring collaborative
meaning-making from the margins, where it happens regularly but stays
unrecorded and unacknowledged, to the centre of assessment activity.
A CONCEPTUAL
FRAMEWORK FOR CONSIDERING THE USE OF DIALOGUE IN ASSESSMENT
8.1. THE FUNCTION OF
DIALOGUE IN LEARNING & ASSESSMENT
8.1.1. The
Constitutive and Educative Function of Dialogue
All nursing and
midwifery, being about action informed by relevant knowledge, requires
practitioners to reflect on what they are doing as they do it. However, this thinking-whilst-doing,
which involves assessing the evidence immediately available and acting on it,
normally takes place privately, in the head of the individual nurse or midwife.
In addition, this 'practical thinking', or reflection in practice (Carr and
Kemmis, 1987), is situation-specific and therefore context-dependent. A
competent practitioner must be able to operate across contexts and to apply
principles in a context-sensitive way. It is generally believed, therefore,
that reflection on practice (Carr and Kemmis,
1987), where practical thinking done in one situation is made explicit, built
on, and used in another, should be developed through discussion which takes
place outside the arena of immediate activity. To move, however, from the status of
novice, who assesses each clinical situation as if it were a one-off, to that of
competent practitioner, who
perceives nursing or midwifery holistically, requires a paradigm shift
(Benner, 1984), and this is not necessarily
achieved through discussion which simply looks back on and reviews particular
practice events. In fact, far from requiring post-event discussion which is
exclusively situation-bound, the different way of seeing the world which
constitutes a 'paradigm shift', demands event-related dialogue situated both in
and around a wide and varied range of practical situations. Such dialogue
promotes reflection before, during, and after action, and often leads to change
in practice over time. Dialogue,
then, brings together action and reflection in a mutually constitutive
relationship. Or, to put it another way, through dialogue action and reflection
impinge upon each other and over time result in changes in both the practice
('what' is done and 'how' it is done) and the theoretical framework for that
practice ('why' it is done).
Dialogic conversations are mutually educative: they bring about new
understandings on the part of both interlocutors. Mutually educative
conversation is integral to the development of competence in the caring
professions.
Dialogue is an essential
condition of institutional life, whether or not it is encouraged. Where
dialogue is actively encouraged, however, it promotes new forms of
institutional understanding which can in turn establish a permanent dialogic
culture. In such a culture the differences between understandings are
recognised from the outset, the ethical and practical issues those differences
raise are made a major focus, and the possibility of multiple voices is
carefully considered. Any assessment scheme which excludes the possibility of
dialogue about the values that inform professional judgements, or the
likelihood of discussion about the principles by which one part of the
knowledge base becomes privileged over another in constructing a course of
action, is an incomplete scheme. It is also an unnatural scheme.
8.1.2. The Assessment
Function of Dialogue
"Validity,
reliability, and practicality have been widely regarded as the criteria for
successful assessment. We should add a fourth attribute - the consequence of
that assessment. This over-arching criterion requires that we plan, from the
outset, to appraise the actual use and consequence of any assessment" (Harden,
1992)
Through discussion about
a particular event, students can demonstrate the knowledge, understanding and
values that have informed their actions in the clinical area on a given
occasion, enabling assessors to test their own observation-based judgements
about the quality of those actions. The great advantage of assessment dialogue,
as opposed to simple discussion, is that it facilitates learning as part of the
assessment process. It not only provides an opportunity for the student and
their assessor to make their understanding accessible to each other (and
therefore available for evaluation) but also helps to develop both the theory
and the practice out of each other. For in the course of doing assessment
according to dialogic principles, students and assessor are obliged to consider
the competing values and theories embodied in the habituated practices on which
they draw, and are able to present their individual perspectives for
comparison, critique, and exploration. This is very different from the
situation where the assessor 'knows' authoritatively and expects the student to
match that knowledge with their own. In a dialogic situation, instead of simply
ticking a chart to indicate what has been agreed upon, students and their
assessors can indicate where there have been disagreements and doubts, offer
information about their various views, and document the process by which
decisions - including any decision to recognise that consensus could not be reached - were
arrived at. Successful assessment dialogue is founded on a recognition that
nursing and midwifery are complex practical activities informed by a theory
which is multi-dimensional and so, like other theories, includes within it
several potentially competing views of the world.
"the only values, the
only ideas, the only concepts, the only form of existence which will be truly
stable and coherent will be one in which opposition is included rather than
kept out; all such notions, from the standpoint of traditional logic, will appear
paradoxical and absurd." (Rowan, 1981, p. 131)
"contradictions are
never 'resolved'; there is a movement between opposites as an inevitable part
of the human condition: 'we can no longer talk about simple "growth" as the
basic need of the human being, for growth is always within a dialectical
relationship in a dilemma which is never fully resolved'" (May, 1967, p. 19)
8.1.3. Exploration of
the 'Organisational' and 'Situational'
To function effectively
an institution must work through a system of networks. A network is essentially
a communication system, a mechanism for 'exchanging' information, ideas,
opinions, and values. There are official networks built on management
structures where communication is often between role-holders in a hierarchical
line-management relationship. In such cases communication is treated officially
as a functional activity to ensure the smooth-running of the organisation
through facilitating its line-management accountability; its purpose is to
promote the exchange of institution-wide, univocal (i.e. single perspective)
information that pertains to the wider working context. There are also a number
of other official networks built on workplace groupings where communication is
between individuals within a flattened hierarchy, and a set of unofficial
networks founded on friendship groupings, where it is between peers. The main
purpose of both these alternative types of network is to promote
situation-relevant understandings of the policies of the organisation. Local
networks, although concerned with systems-maintenance information from time to
time, provide a major opportunity for conversations in which people in their
capacity as individuals can share ideas and opinions, shape attitudes and
feelings, and explore information which they as yet only partially understand.
Together, organisational and situational networks contribute to the construction
of institutional and individual meanings, and activate an interplay between
them through dialogue.
Embedded within this
system of dialogic networks is the student-assessor 'network', where assessment
discussion takes place. Like all other networks, it provides an opportunity for
the exchange of information. But also like them it provides a focal point for
the creation of new meanings or the modification of existing ones. This
mini-network is not just a structure for assessment, but simultaneously a location
for the exploration of competing interests, ideologies, and values; it is one
of the places where the pluralistic, complex and changing world of nursing and
midwifery is 'constructed'. It is where the student discovers what knowledge
'counts' and what actions are 'approved', and where they learn what being
competent really means (as opposed to what their curriculum documents say it
means). It is also the 'network' where the meaning of the assessor and student
roles themselves and the assessment system per se is developed, and where the
meaning of what it is to be a 'knowledgeable doer' or a 'reflective
practitioner', continues to evolve.
Through their separate
but parallel actions, their interactions and their discussions, the student and
their assessor experience first-hand the dynamic interplay between the
organisational and the situational, and begin to construct a better understanding of the 'global',
that is, of everything that is generic, and applicable across a wide spectrum
of professional activity beyond the individual organisation. Through this
interplay they simultaneously reveal and help to shape the culture - including
the ideas and values - which pervades professional perceptions of what nursing
and midwifery essentially are. And they do this not in one but in two
ways. Firstly, as they work and talk together they define, 'test' and reshape
their assessment roles, relationships, and responsibilities, finding out
through a tacit negotiation what makes the greatest sense for them personally
in the particular situation in which they find themselves. Secondly, as they
'do' their caring in ways that respond to the needs of clients and their
families, and to the local constraints of their workplace, they cannot
avoid taking account of the global expectations set out in legal frameworks and
non-statutory guidelines, or fail to accommodate their global theories of
nursing and midwifery. It is in
the nature of practical professional preparation for nursing and midwifery that
it has to manage the interplay between the situational, the organisational, and
the global.
8.2. CONSTRUCTING A
DIALOGIC FRAMEWORK
8.2.1. Developing the
Notion of Assessment Itself Through Dialogue
'Assessment' is not a
unified concept. What it means to one person, and therefore how it is 'done' by
them, may differ considerably from what it means to someone else because each
may have a quite different lived experience of it. That experience includes the
activity of being assessed or doing assessing, with its role conflicts and
pragmatic constraints. Indivisibly part of the same experience are the
conversations which occur before, during, and after assessment. Because part of
any situation is the location where it occurs, the situational is everything
that is location-specific, including those conversations. The events that
happen in particular locations are microcosms of the global culture; what
distinguishes them from the 'global' is the fact that they are formulated in
response to the immediate and the pragmatic first and the generic and
theoretical second. The conversations about them encompass global,
organisational, and situational issues. What 'assessment' means for any
individual depends firstly upon the understanding they have developed through
applying and discussing assessment in their local workplace, secondly upon
their understanding of the official assessment processes and procedures
operating in their institution (following the DoH, UKCC, and ENB legal and
non-statutory guidelines), and finally upon their handling of the contests
between the two. In the course of 'doing assessment', differences in
individuals' perceptions of the function of assessment, and of the way in which
the assessor role is to be operated, reveal themselves. Dialogue is one
powerful element in the construction of both the role of the assessor and the
function of assessment, for it is through discussion that perceptions become a
focus for the construction of new understandings.
Dialogue offers
opportunities for the development of understanding through a series of
'levels'. At one level there is the understanding which stems from habitual
'doing', at another the more holistic understanding which comes from an
awareness of beliefs, procedures, and ideas which emerges from involvement in a
wide range of location-specific 'doings'. At another level again there is the
rich and complex understanding which comes from the re-organisation of an
individual's conceptual map to take account of the critical reflections they
have engaged in with others. Through discussion, an individual can move from
one level to another (see Diagram: 'Towards Critical Reflection') by first
identifying and internalising criteria of judgement for effective action that
will improve professional practice, and then making explicit, through
reflection in and on those criteria, the rationales of judgement and action.
Dialogue, therefore,
provides the framework within which individuals can develop their knowledge
base at one and the same time as their competence, allowing them to continually
re-evaluate and express them both through their practice. In addition, because
dialogue enables movement between levels[53] it also facilitates the
assessment of knowledge at those levels by providing the means by which it is
made explicit and public through critical reflection.
8.2.2. Moving Towards
Critical Reflection
In a dialogic context assessors and students will not only be involved as
active subjects in the business of assessing competence, as described above,
but will be actively encouraged to take up that role. In any learning context
where practical competence is being developed there will be dialogue which
accompanies action and functions as a form of reflection-in-action. As
suggested earlier, this type of discussion enables the student and their
assessor to engage in mutual real-time reflection on what they are doing. What
is talked about is situation-specific, time-rationed, and action-oriented. It
is concerned with what is taking place then and there, that is, with what can
be directly observed in situ, including the skills and attitudes demonstrated.
But in a specifically dialogic context there is also discussion before,
during, and after an activity, through which the activity is reviewed and
analysed. Such dialogue
provides a vehicle for formative reflection-on-action in which student and
assessor are able to explore possibilities, examine alternatives, attempt to
make sense of experiences participated in intuitively, and consider alternative
courses of action. Through this post-event dialogue the participants can begin
to locate their situation-specific behaviour in a broad theoretical framework
and examine the relationship between general principles and specific actions.
Where reflection-on-action takes place in a truly dialogic context, that is,
where there is an acknowledged framework for discussing differences in
perceptions and underlying values rather than one designed to immediately
homogenise them, reflection becomes targeted on the problematic, and becomes
constitutive of the situation. For those things which are 'problematic' are
essentially the ones which oblige both parties to make public their knowledge
and assumptions, and only when they are made explicit can theory and practice
be assessed at one and the same time.[54] Once the process of
making knowledge, understanding, and values public through the problematic has
been institutionalised, professional level competence can be assessed as part
of the learning process itself as student and assessor formulate together the
principles which apply in the present case, speculate about the way in which
those principles would apply in analogous future cases, and develop a
critique(al)[55] approach to the
examination of nursing and midwifery practice in general.
A reflective
professional practitioner without a critique is neither reflective nor
professional. But reflective practice with an analytical cutting edge, carried
out collaboratively with colleagues who form a critical communicative
community, has the power to transform practice. Dialogue is instrumental in
moving student-assessor discussion about competence from mere review of the
activities undertaken to what Kress calls 'critical discourse'. According to
Kress, all discourse is predicated on difference, for within every theory or
belief-system there are competing and even contradictory sub-discourses (Kress
1985). Where there is an assumption either that there is only one legitimate
set of procedures for delivering care or only one persuasive theory of nursing
or midwifery, reflection is largely uncritique(al), descriptive and anodyne.
In organisations
as complex as those in the Health Service, attempts to promote a single,
unified perspective on what it means to provide health care inevitably prove
unsatisfactory. Indeed, Health Service institutions are essentially places
where discourses are acted out and competing value-systems cohabit. It follows
that in the course of their daily clinical work nurses and midwives
will constantly have to identify and organise apparently contradictory
information, analyse that information critically, make judgements on the basis
of the analysis, and take appropriate action. Indeed, they can only be
effective carers if their competence includes the skills, knowledge, values,
and understanding necessary to do these things. It is essential, therefore,
that the assessment of competence examines them. Post-event dialogue fosters
the development of high quality decision-making procedures and the formulation
of theoretically-informed principles for practical action in 'problem'
situations. Where it focuses on evidence collected during the event and is
itself recorded in some way for further consideration it also makes accessible
the knowledge and values employed in constructing those theories and action
plans. By exploring the differences between their own version of what counts as
competence and the version that others hold, students discover the dilemmas
within their own professional rhetoric and the contradictions within their own
practice. Through what Carr and Kemmis call 'committed, interested reflection'
(Carr and Kemmis 1986) and what we are calling 'critical dialogue', the
assessment activity is moved from being a survey of activities 'done' and
skills 'covered' to a collaborative, analytical discussion about practice. Out
of critique(al) dialogue informed by more than one analytical framework can
come dialogic imagining about future action (see Diagram).
Figure: Towards Critical Reflection.
Reflective dialogue as a means of generating
explanation about what is happening and what has happened is, we are arguing,
essentially a co-operative activity designed to enable both analysis - which is
reductionist, and understanding - which is holistic. On the one hand it invites
the student and the assessor to 'examine the parts' of what they are doing,
that is, the component skills, attitudes, and concepts. On the other hand it
encourages them to resituate those parts within some form of coherent theory,
however informal. Critique(al) dialogue, as suggested above, ensures an
interplay between the parts and the whole by moving student and assessor beyond
review to critique that examines underlying values and ideas. This is the
activity of 'theorising'. [56] Because it is a process
which parallels what happens when professionals consciously enter a mutually
educating relationship, it can act as an induction into that process. It is
worth noting, then, that a comfortable and everyday medium through which many
people in the practical professions reflect, construct holistic accounts, and
further imagination, is story. Stories do not explain what is happening or has
happened, but reveal meaning. They illuminate understanding.
(Stories) are analogical
and symbolic; they do not point out meaning directly; they demonstrate it by
recreating pattern in metaphorical shape and form
(Reason and Hawkins 1988 p.81)
Stories allow the teller
to move from situationally-located account, through metaphor, to archetype,
mirroring the move from the 'local' to the global' described earlier. They also
permit movement in the other direction so that an archetypical instance can be
understood in terms of a particular experience (Greenall 1982; Reason and
Hawkins 1988). Appropriate stories are,
therefore, important both as evidence of attitudes, knowledge, and
understanding, and as a means by which reflection on these can happen.
Students' stories, possibly collected together in a diary of workplace
observations, can provide an assessor with valuable insights into the same
sorts of areas that critical dialogue does; if treated appropriately they can
also contribute to the development of critique. They can be primary data in the
same way as negotiated analytical accounts or criterion-related checklists, and
as such can be the object of reflection. But they also embody the dialogic
process within themselves, dealing with contradictions in experience through
shaping them, and revealing (without necessarily spelling out) analysis,
critique, and values. Stories constitute dialogic expression and provide a form of
evidence of natural dialogue. [57]
Although all
professionals know that they work with contradictions, ambiguities and
uncertainties, publicly they conspire to maintain a pretence that they have a
unified view of their purpose and the means of achieving it. To quote Cox et
al:
By reflecting in a
committed way we may come to see that many of our deepest beliefs about our
nursing worlds may be contradicted in the ways that we think and act; and we
may discover that it is not through external forces unrelated to ourselves that
we are prevented from meeting our ideals, but the ways we perceive ourselves,
our actions, and our worlds. ... critical reflection exposes ... new ways
of knowing and new ways of acting within those worlds. (Cox,
Hickson, and Taylor 1991 p.387)
In dialogue built around the notion of critique(al) reflection, student and
assessor have what Hirschman calls 'voice' (Hirschman 1970, reported in West
1993).
8.3. DIALOGUE AND DIALOGIC STRUCTURES: A SUMMARY
To summarise, then,
dialogue is a powerful assessment tool which can be shown to actively promote
the development of learning. Through dialogue students can achieve new
understanding which can change what Maruyama refers to as their 'mindscape'
(Maruyama 1979), and accounts derived from particular, situation-specific
practice contexts.
8.4. STRUCTURES &
MECHANISMS FOR DIALOGIC ASSESSMENT
8.4.1. Recognising
the Inherently Dialogic Structure of Institutions
Discussion is in part shaped by the participants' occupational culture and
their own subcultural
expectations, to the extent that some regularly occurring forms of discussion
are highly ritualised. Ritualised discussions
that conform to a particular schematic structure institutionalise ways of
thinking. Through the setting up
of appropriate routines it is possible to institutionalise the process of
seeking for new understandings. Among those routines must be those which move
discussion from the margins to the centre. Assessment is not a simple business
for which there is a universal set of do's and don'ts that can be applied
across all contexts. It is a complex social activity in which interpretation
plays a major part. Student and assessor have to interpret and make sense of
each other's behaviour and to understand the assessment event itself. In many
ways an assessment event is primarily a communication event.
Many organisational
cultures institutionalise disempowerment whilst paying lip-service to nursing
and midwifery as activities which depend upon the quality of the carers as
decision-makers. If we accept that critical dialogue and the actions which
follow from it are two of the things which distinguish professional
nursing/midwifery from other forms of caring, then we must look to
institutionalise discussions where there is an opportunity for an interplay
between a focus on the micro-issues of particular clients or specific caring
practices, and the macro-issues of policy, resources, and values. Because
improved communication between student and assessor is related to the
conversational context as a whole, where it is common practice to treat
conversations as occasions for collaborative meaning-making, assessment can
have an important formative function. A dialogic conversational context
encourages the exploration and deconstruction of habitual practice, and this in
turn results in an institutionalisation of critically powerful reflective
practice. [58]
8.4.2. Implementing
Dialogic Assessment Procedures
The case for critical
dialogue as a key element of assessment is a strong one. The case for building
that dialogue around the careful analysis of varieties of well-documented
evidence from multiple sources is equally strong. It is essential, therefore, to
find appropriate strategies for bringing dialogue from the unrecorded (and
mostly unacknowledged) margins to the centre of the assessment process. To be
appropriate these strategies must be achievable in the multi-dimensional
occupational context as it currently exists, and must ultimately have greater
benefits than costs. Consequently they will have to contribute to quality in
nursing and midwifery in the short or medium term as well as providing in the
longer-term a better qualified initial entry to the profession. The strategies
which will ensure all three will be ones which enhance the analysis of evidence
and the making of informed judgements that take account of the broadest possible
set of relevant contextual factors. The analysis of evidence and informed
decision-making are, as we have suggested throughout this report, central
components of nursing and midwifery practice.
Opportunities for
dialogue between student and assessor are most likely to develop where the
evidence about the student's competence is contributed by more than one
accredited witness. Multi-source
evidence automatically invites the making of comparisons and the addressing of
contrasts. The criteria for selection of one piece of evidence as opposed to
another become a central issue for debate. So too do the potential
contradictions between the various discourses that become apparent when
evidence from a range of sources is juxtaposed. To ensure analytical dialogue,
however, it is necessary to have a framework for discussion which avoids
analytical criteria that are either too bland and over-general, or too
specific. That framework will require student and assessor to consider the
situation-specific in the context of the generic and 'global', and will invite
them to formulate questions about theories of nursing and midwifery that can be
explored further in the light of evidence from a variety of other situations.
The procedures for achieving a dialogic review of practice will
include the following:
• the discussion of criteria used to identify skills and attitudes
• the negotiation of principles for including items in a portfolio of evidence
• the evaluation of evidence in terms of comparative comment that identifies the issues raised by differences in perspective
• the development of responses to concrete and context-related evidence in terms of a theoretical framework
• the identification of development in knowledge, attitudes (values), and understanding
• the agreement of an account of the issues raised
• the agreement of an account of the development in understanding etc. that has occurred
• the written documentation of agreed accounts
To move the dialogue
from analytical review to critique, it would be necessary to instigate a
further set of procedures. Dialogic critique builds on the analysis
done at the review stage but carries it out as an exploratory/critical activity
rather than a definitive one. A definitive analysis is predicated on the notion
that the analytical instrument is an all-embracing and authoritative version of
what constitutes competent practice in nursing and midwifery. An analytical
framework is, of course, a theoretical construct, and like all other constructs
is open to re-construction. In dialogue which focuses on critical analysis the
assumptions underpinning particular analytical frameworks used are treated as
open to examination, thus ensuring an interplay between theory and practice. In
this form of assessment nothing is taken-for-granted or axiomatic; everything
is open to interrogation. That is, discussion is conducted on the principle
that it is legitimate to ask persistent questions and offer critique.
Definitive analysis, on the other hand, is capable of ensnaring the student
within a categorical and a priori theory; it begins with 'the knowledge' and
evaluates practice in terms of it.[59] If theory and practice
are to be assessed together then students must generate theory out of practice
and, on the basis of their own emergent theory, critique the theories of
others. The purpose of
critique(al) dialogue, then, is to promote a constant interplay between theory
and practice, and to enable each to be constitutive of the other. It is that
second-level reflection-on-practice, what we have referred to earlier as
reflection that is constitutive of
practice, that is the business of critique(al) dialogue. The procedures for
achieving dialogic critique of practice will include the
following:
• the agreement of ground rules for interrogating the taken-for-granted
• the discussion of frameworks for looking at competence, in the light of practice
• the writing of suggested modifications to frameworks, with the justifications for change made explicit
• the incorporation into assessment schedules of 'why' questions that seek to discover the options considered by a student when faced with a problem in practice, and the reasons for the choice finally made
• the employment of student self-assessments which require the student to discuss particular instances of their practice in terms of general principles
• the agreement of an account of the development in understanding etc. that has occurred
• the written documentation of agreed accounts
8.4.3. Ensuring
Continuity in the Development of Assessment Level
Assessors' perceptions of competence are shaped by the clinical context in
which they work, by their understanding of the relationship between that
context and nursing or midwifery theory, and by their experience of working
with students at different levels of competence. Inter-assessor dialogue can
help individuals situated in different contexts to better understand one
another's perceptions. At the same time, provided assessors can avoid the
temptation to represent their views as more homogeneous than they actually
are, dialogue can be a mechanism for putting a range of
subjective views into relationship with each other, thus exposing for
discussion the practical and intellectual theories on which those views are
based. The establishment in this way of a mechanism for comparing and
developing perceptions of competence across clinical placements, can provide
the foundation for ensuring that there is greater continuity in the practice of
assessment from one placement to another. This, in turn, can facilitate
assessment that addresses the student's needs at an appropriate level. Where
there is no dialogue between an earlier assessor and the current one, the
student can be assumed to have little or no experience of doing things in
which they have in fact demonstrated considerable competence on a previous
occasion. This can result in the student repeating the previous experience
rather than developing their competence further. The mechanisms for ensuring
that a student's competence is assessed at the appropriate level will include
the following:
• time for a new assessor to read a new student's portfolio of evidence from earlier placements
• regular meetings between education and clinical staff for the latter to share their experiences of the assessment process
• occasional workshops for examining the nature and quality of the evidence of competence collected in different placement contexts
• dedicated time for teachers to explore assessment issues with clinical staff across environments.
Devolved continuous assessment of competence can
only be effective where it takes place as part of a network of dialogic
structures within a dialogic culture. Through dialogue, assessment can move
students from review through analysis to critique. The necessary continuity to
ensure this move from one level to another requires the further development of
some existing structures and roles, and the provision of a number of new ones. [60]
CHAPTER NINE
ABSTRACT
Human beings are 'interpreting subjects' who make sense of the world through
thinking and
reflecting about it. It follows that a structural analysis of human activity
can never give a complete picture of what is taking place because it is unable
to take account of this individual capacity for reflexivity. To begin to gain
access to the thoughts of the interpreting subject it is necessary to examine
the discourses that they speak and write about, and that they enact in their
daily interactions. A discourse analysis reveals the internal and external
debates about the values and alternative conceptions of 'reality' that occur
during the course of daily interactions, and which are either reiterated or
reconstructed in the practices of work. The key discourses to emerge from such
an analysis are those of education and training, theory and practice,
reflection and action, ownership and authority, effectiveness and efficiency,
and market economics and public service. Each of these is in a complex
relationship with the others, none can be considered without at the same time
bringing into focus a whole range of sub-discourses. The discourse of theory
and practice, for example, cannot be considered apart from the discourse of
education and training. On the one hand, the historical concept of caring as subservient
to curing has left traces of practices associated with skilled 'doing' as part
of the discourse, and that in turn has resulted in implicit theories about the
competent application of know-how; both notions belong within the training
discourse. On the other hand, the modern concept of flexible,
innovative, and complementary caring invites discourses about cognitive
flexibility, knowledge and understanding, and practice informed by theory; all
of which are sub-discourses of the major discourse of education. Other
discourses are similarly complex and interrelated. The discourse of reflective
practice is intertextually linked with the previous one through the ongoing
contest between 'knowing about', 'knowing how to', and 'knowing why'. At the same
time it points to further discourses about the quality of client care and its
relationship to professional development, in which the question of improvement
for clients in the future has to be balanced against their immediate needs. In
a context of limited resources, the discourse about reflective practice gives
way to a dialogue about efficiency versus effectiveness. It makes clear the
fact that reflective practice depends upon appropriate material structures, and
shows the pivotal role of the assessor who can, irrespective of material
structures, welcome or discourage reflective practice by suggesting, through
their words and their actions, its worth. The discourse of ownership includes
the sub-discourses of subjectivity and objectivity, self-esteem, and mutual
education. The first leads right back to the earlier concerns with theory and
practice, and raises questions about the nature of theory-from-practice, which
is relatively underexplored in that discourse. It also raises important issues about the
nature and quality of evidence. The second explores the risk to self-esteem
when nurses and midwives who have expertise in their own professional field are
asked to take on a new, and possibly alien, role as assessors. The third raises
questions about the sorts of communication that will promote educative liaison,
and suggests a relationship between personal and professional development. The
authority discourse addresses the issue of how the nature and form of authority
can affect the success of innovation. As the assessor moves back and forth
between the role of diagnostician and accredited witness, they enact the
discourse about the relationship between authority, power, and support.
Formative and informal assessment constructs authority in a non-authoritarian
way, but both formative midpoint placement assessment occasions and summative
end of placement assessment occasions construct it differently. Unless, that
is, the authority is based on evidence that is open to inspection, and can be
judged for fairness (rather than objectivity). All these discourses exist in a
complex relationship to one
another in relation to the wider discourses of nursing and midwifery education
and the assessment of competence, and come together through the 'interpreting
subjects' who constitute the organisations that provide healthcare.
EXPLORING THE
DISCOURSE(S) OF ASSESSMENT PRACTICE
9.1. WHY FOCUS ON DISCOURSE?
9.1.1. The
'Interpreting Subject'
Although this research
study looks closely at the roles, structures and mechanisms by which the
assessment of competence in nursing and midwifery is carried out at
institutional, local, and national levels, it is not a straightforward structural analysis of either competence or assessment. Such an analysis is able to describe
many of the elements of the context that contribute towards the doing of
assessment. It is not capable, however, of providing a complete account of the
relationship between the human subject and their context, because it is unable
to address the issue of human reflexivity. A study of human practices must seek
to unearth the discourses, or debates, that these subjects are engaged in as a
natural part of their working lives. It must also represent those discourses in
a way that maintains their dynamic quality and recognises the active,
interpreting person at the centre of the process. The term 'discourse' refers
to the implicit and explicit debates about alternative conceptions of reality
and values, that the members of any social group (including professional
groups) conduct many times during their daily interactions. These ideas and
attitudes are either reiterated or reconstructed in the work practices of teachers, students,
and qualified nurses and midwives, and in the interactions between members of
those groups as they attempt to bring their differing knowledge to bear on common problems. They are also shaped within any one of those groups
by the interplay of expectations brought by individuals who come with
significantly different biographies, and therefore different expectations. By allowing
the key debates about the assessment of nursing and midwifery competence to
surface through the words and actions of the main players, it is possible to
preserve the richness and variability of these discourses. Awareness of those
debates allows us to make sense of the behaviour of the people who live and
work in the institutions where competence is developed and assessment carried
out.
Nursing and midwifery
are grounded in interactive social practices and mediated by professional
dialogue; to fail to capture the competing discourses embedded in those
practices or to neglect to examine the dialogue would be to leave out at least
half the picture. In the Introduction we say (with reference to competence) 'it
may not be that there is a single general perception'. The research data makes
it very clear that there are indeed a number of perceptions of both assessment
and competence held by all the parties involved in the professional preparation
of nurses and midwives. Discourses are about differences of perception; they
are also instrumental in shaping those perceptions, for such is the nature of
reflexivity. Discourses are also, inevitably, about differences in values. This
chapter sets out the key areas of discourse identified by interviewees, enacted
in clinical and classroom environments, and encapsulated in institutional
documentation, and introduces the range of alternative accounts of reality
brought together in each discourse.
It illustrates the complex interrelations between knowledge, skills, and
attitudes, and the inter-connectedness within these of discourses about
education and training, theory and practice, reflection and action, ownership
and authority, effectiveness and efficiency, and market economics and public
service. Reflecting the complexity still further, it notes the discourses about
knowledge that juxtapose 'knowing about' with 'knowing how to' and 'knowing
why'; examines how these contribute to the discourse about theory and
practice, in which 'getting on with the job' is set alongside knowledgeable
doing, which in turn is set against reflective practice; and explores the
extent to which these are embedded in the discourses about assessment that
bring into contention the legislative, the technicist, the diagnostic, and the
educative.
The brief discourse analysis which follows - which includes an element
of concept analysis
- enables
us to project forward to consider potential new roles, structures, and
mechanisms for assessing competence, in terms of their ability to take account
of the interpreting subject.
9.2. KEY DISCOURSES
IDENTIFIED
9.2.1. The Education
& Training Discourse
The first and most
immediately obvious discourse that emerges from this study is the discourse of
'education and training'. It is the major discourse implicit in the research
aims quoted in the Introduction, and is embedded in the meta-discourse of
innovation and change[61] (see the Introduction to this study). As with most
discourses, this one intersects with others. It is, for instance, impossible to
consider education without looking closely at the discourse of 'theory', or to
discuss training without examining the discourse of 'practice'. These touch on
other discourses about 'knowledge' and 'work'. Each of these discourses is
motivated by the conflict between a historical concept of the caring
professions as subservient to the curing professions and a modern conception of
them as flexible, innovative and complementary. The two concepts currently
co-exist because of the persistence within the professional culture of a view
which sees nurses and midwives as skilled doers who can be inducted by being
'shown the ropes', and which remains in place alongside a new culture of
thoughtful, well-informed practice that requires the development of
competencies such as judgement and career-long commitment to learning.
The discourse about
education and training is predicated on an assumption of fundamental difference
between the two. Education (as defined by the writers of validation documents
and by teachers of current courses) is about the development of concepts,
ideas, and cognitive flexibility. It requires both a knowledge base and understanding. It also
requires a coherent theoretical framework that can lead to understanding and
action. Education is about processes and outcomes. Training (defined by the
same sources) is about the development of workplace skills and practical
flexibility. It requires the competent application of know-how more than
understanding. And it requires the ability to assimilate programmes of
instruction, and to produce the 'best' practice advocated in them. Training is
primarily about achieving outcomes without the need for an explicit theory.
These definitions construct 'ideal' versions of education and training, and
they are, of course, artificially absolute. The discourse about the
relationship between knowledge, skills, and attitudes, and the attendant
debate about the possibility of assessing both together, is, in fact, the
argument about where various ideas currently clustered under one or other of
the two headings actually belong. Because it starts from the assumption that
education is linked with theory whereas training is linked with practice, and
thus constructs educational and training activities dichotomously, it ignores
practice as the source of theory, and marginalises opportunities for
interaction between experiential knowledge and 'discipline' knowledge (cf.
Benner and Wrubel 1982; Gray and Forsstrom 1991). The design and operation of
programmes that assume the need to bring theory and practice together, and take
no account of the fact that they may be two sides of the same coin, makes it
clear that the central discourse is about the struggle to reconstruct training
to an educative paradigm, rather than about the relationship between education
and theory per se. This has implications for the provision and design of
programmes to meet the professional development needs of practitioners who see
themselves as under-prepared for their assessment role. [62] In particular, it
places a focus on how far the education and training discourse can be
satisfactorily addressed in programmes of preparation, and how far the
programmes themselves can resist a training imperative and engender educative
priorities.
9.2.2. The Reflective
Practice Discourse
The second discourse at
the centre of this study is the discourse of reflective practice. Whereas the
discourse of education and training was evidenced in documents and interviews,
this one is worked out primarily in action - it is, quite literally, enacted.
For most nurses and midwives practice is very much about 'doing', and about
delivering hands-on care. It is a busy activity leaving little time to sit and
reflect. If and when they find themselves with
time, nurses and midwives feel guilty that they are not 'gainfully' employed.
Even those nurses and midwives we spoke to who valued the opportunity to think about their work made
it clear that they placed clients' needs before their own professional
development needs or the needs of students. Educators did not dispute the
rightness of this, and in any case there could be no ethical justification for
neglecting client needs in order to talk or read. The problem here is that the
discourse of reflective practice becomes about how to achieve the space to
carry out reflection that might lead to improved care in the future, and how to
avoid jeopardising the immediate care needs of clients. Or alternatively it
becomes about 'what you do in the classroom but couldn't possibly do in the
"real" world of clinical practice'.
The economic context in
which most caring takes place tends to frame the discourse of reflective
practice as a rhetoric without material structures to support it. The net
effect of this is to subvert the discourse of reflective practice and
reconstruct it in terms of the very different discourse of resources and
resource management, with its own internal debates about value-for-money and
the relative importance of time as a cost element in the equation. This is not to suggest that the debate
is not held, or that reflective practice never occurs as an acknowledged part
of a course, but to propose that where it does occur there is usually a 'price
to pay', and the paying of that price itself features in the discourse. When a midwife says,
we'll have to talk
about this one over lunch, there's no time now
or a staff nurse suggests,
I think it would be a
good idea if I filled your assessment form in at home tonight because otherwise
I won't... we'll keep getting interrupted and I'll miss something out
they are speaking about
the price of reflective practice and indicating that for them it is too great.
They have conceived the debate not in terms of knowledge and action but in
terms of pragmatics and action. Like much of the discourse in this area, they
are addressing issues of efficiency and effectiveness in a discourse which sets
investment of resources to promote longer-term understanding and improved care,
in competition with attempts to introduce efficiency by providing resources
sufficient only for covering immediate needs. What we find underpinning the
discourse of reflective practice is a debate which extends well beyond the
boundaries of this one issue to consideration of the dilemma of how to provide
professional development in a context resourced for systems maintenance.
It is worth noting that
the discourse of reflective practice takes on more than one sub-discourse. In
addition to the oppositions outlined above it also addresses the tension
between habitual practice and innovative practice, where innovation is
represented by the ability to be flexible in the face of the novel and
unexpected, and habitual practice by 'what we have always done' (cf. Bourdieu's
notion of 'habitus'; Bourdieu 1976, 1988). To be flexible across a variety
of contexts, the competent practitioner must be able to understand the
principles of particular practices, and perceive how the particular relates to
the general (or generic). It is sometimes difficult for those who are steeped
in a culture to stand outside it a little in order to see how it might look to
others. It is equally difficult for those who come into that culture from the
outside to introduce new practices before they have earned acceptance by
demonstrating their ability to work within the habitual practices and
procedures of the group. It is apparent from this that in the context of
innovation and change, the discourse of reflective practice incorporates
discourses about personal security and confidence, group solidarity and group openness,
and the credibility of the innovator. In many ways these are all issues of
power; the power of the individual vis-a-vis the power of the group; the power
of knowledge vis-a-vis the power of established practice; and the power of
innovation vis-a-vis the power of resistance.
By identifying the
connection between reflective practice and power, we begin to see more clearly
the pivotal nature of the role of the assessor in professional development.
Assessors are established practitioners who are members of a
culturally-established group. They can welcome the person who comes with new
knowledge and exchange it for some of their own, or they can offer resistance
and suggest the worthlessness of theory. If they do the former they encourage
reflective practice because they demonstrate a willingness to reconsider in the
light of the new. If they adopt the latter stance they are by implication
supporting uncritical habitual practice. In so far as the person who holds the
assessor role can accredit or discredit a student and can affect their
self-esteem for good or ill, they can prevent the student from building on any
reflections they may make privately either by refusing to engage in dialogue or
by being insufficiently prepared for doing so. This makes the point, again,
that the discourse of reflective practice is worked out in practice, and indicates
that there is a symbiotic relationship between it and the assessment discourse.
9.2.3. The Ownership
Discourse
A third discourse is
concerned with the concept of 'ownership' as it applies to the assessment of
competence. This debate is framed in course committee meetings, 997/998
programmes, and teacher-practitioner interactions. It is also a very powerful
focus in 'formal' clinical placement assessment meetings between students and
their nurse or midwife assessors. In each of these locations there are
occasions when people speak and act in ways that make apparent their feelings
of being divorced from the process by which the criteria for assessment were
designed. They make comments that indicate the difficulty they have in seeing
the relevance of the criteria in the schedules with which they are expected to
work, and mention the problems they have in making sense of the language that
is used. To an outsider, that language is often less obscure than their
comments seem to suggest, making it appear that their concern is in fact less
about the clarity of the text than about the alienation they feel from its
process of composition. A comment such as:
I don't really know
what to put here...what do they want to hear do you think
from an assessor talking
to a student at the end of her placement as she was filling in the assessment
schedule supplied by the approved education institution, typifies this concern.
And a remark from notes taken at a meeting of a Core Curriculum Group of
teaching staff who were preparing validation documents for their new course,
shows that the concern is more than an individual one:
the trouble is the
Board hand you down all this stuff about how to assess and they're always
changing things so you have to make it up to get through the validation but
then you go ahead and do what makes sense anyway
Coming in a forum whose
purpose was to collate and represent the institutional perspective, this remark
reveals that the discourse about ownership is worked out at all levels in the
institution. It is a discourse which pervades the conversation of teachers,
nurses and midwives, and students.
The central concern of
this research to discover the effectiveness of devolved continuous assessment
is inextricably linked with the discourse of ownership. Successful devolved
assessment depends in part upon the appropriateness of the mechanisms employed
to ensure the successful devolution of responsibility. The extent to which that
success is achieved depends in turn upon the sense of commitment to the
processes and documents that teachers and assessors have. The extent and range
of comments that contribute to the ownership discourse leave us in no doubt that
where these things have been constructed by committees and specified
post-holders without the full involvement of the people who will have to use
them or work within them, they will be less effective than where they are the
product of extensive and on-going evaluation by all the parties concerned.
These comments make it clear that in the minds of those who have to carry out
devolved assessment, the concern that 'others' (whoever they might be) have
with issues of standardisation and objectivity is subsumed within the discourse
of ownership. It seems likely, therefore, that even if it was not the case that
a range of individual perceptions brought into juxtaposition and checked
against one another is more reliable than a single allegedly objective view, in
relation to the discourse of ownership it would be necessary to consider ways
of giving greater credence to the subjectively constructed perceptions of those
who are closest to the student and their practice.
An approach that seeks
evidence from a range of subjective perspectives will be responsive to the fact
that assessors are in a position to see evidence of the student's competence
across a wide range of contexts, and able to consider it in more ways than an
assessment document can capture when it is organised under discrete headings.
At the same time it will recognise the fierceness of the feelings expressed
about ownership and accommodate to them. In this discourse, then, there are
three main sub-texts. Firstly, there is a discourse about subjectivity and
objectivity - with the valuing of subjective assessment based on professional
expertise as a confirmation of ownership, and abstract language acting as a
symbol for attempted objectivity and consequent alienation. Secondly there is a
discourse about self-esteem, and the risk to which it is subjected when nurses
and midwives who have expertise in their own professional field are asked to
take on responsibility for assessment, which many see as someone else's
expertise. Thirdly, there is a discourse about mutuality in education, with its
own internal argument about the kind of communication and dialogue that will
promote educative liaison between parties with similar but different interests
in both the promotion and assessment of competence.
Each of these sub-texts
reveals the importance of the ownership discourse with respect to the design of
appropriate assessment instruments and procedures. The first, for instance,
raises questions about the nature and quality of evidence, and presents a problem
for a body that wishes to ensure some kind of comparability of standards across
institutions and regions. This discourse explores the relationship between
scientific objectivity of the sort that can be applied to the study of inert
materials [63] and human subjectivity,
and moves the discussion towards consideration of how to construct reliable
judgements that draw upon a range of subjective perceptions of the same
situation by a variety of people (i.e. to take advantage of what social
scientists call multiple subjectivities). The second once again explores the
relationship between theory and practice, and poses the question of how to
access practitioners' unique knowledge in ways that make it obvious that
assessment is not a 'bolt-on' activity to be carried out separately from
nursing and midwifery proper. The discourse here is about whether or not these
professionals are, as a consequence of their caring activities, already
sufficiently skilled in carrying out the assessment activities of gathering
evidence, analysing that evidence, and critically reflecting on it. If they
are, then there is no justification for setting up an assessment system from
which they feel alienated; if they are not then no amount of well-intentioned
involvement of the practitioners in the development of assessment schedules
will, of itself, lead to better assessment practices. Which is not to claim
that the discourse excludes consideration of better and more mutually
satisfying liaison processes, but to point out that it also encompasses a
discourse about the nature of the difference between nursing and midwifery
experience on which assessment
might be built and that on which it could not. And the remaining sub-text
explores the
relationship between
personal and professional development, examining it almost surreptitiously in
an attempt to make sense of the concepts and values in the assessment system in
terms of individual feelings about it and the way those feelings compare with
other people's. The essential role of dialogue is made evident in all three
discourses within the ownership discourse from which it emerges as a
consideration to be taken fully into account as possible structures and
mechanisms for supporting assessors and monitoring assessment standards are
explored.
9.2.4. The Authority
Discourse
Closely related to the
ownership discourse is a discourse about authority. This discourse is inherent
in the concept of knowledgeable doer, where detailed knowledge gives the nurse
or midwife one kind of authority and experience gives them another. More
directly relevant to the study of the assessment of competence, however, is the
fact that it is also inherent in the concept of devolved continuous assessment
itself. Like reflective practice, it is a discourse worked out in action rather
than spoken interaction. Because devolved continuous assessment includes both
formative and summative events, it carries contradictory notions of authority.
Most of the time, the nurses and midwives with whom students work give advice
and help as the need for it arises in the clinical or community environment.
Their knowledge and skills are offered in informal ways as they work alongside
the students whose comments make clear that they value enormously their
qualified colleagues' extensive and valuable professional know-how.
Nevertheless, students report that there are times when they feel the advice
they have been given is either misleading or - in the light of research that
they have encountered - just wrong. They describe the situation where they have
to consider challenging the practitioner,
as 'threatening', and a typical comment came from one who spoke about:
getting on with the
job and keeping your head down because after all they're the one who's going to
write your placement report
It is apparent from
these remarks that the students are conscious of the authority of the assessor,
however covert the nurse or midwife in that role is in exercising their
authority. The generous help and
support that most practitioners willingly give when assessment is not officially being done constructs
the discourse in terms of non-authoritarian authority where partnership and
openness are feasible positions to adopt. Because both the midpoint formative
and the endpoint summative assessments of practice are so often obviously conducted
from an alternative position, however, uncertainty arises in the mind of the
student about the authenticity of the first position. In doing formative
assessment the assessor is expected to assume a diagnostic role, using their
authority as an experienced practitioner to indicate to the student where they
can make progress by targeting their efforts in a particular direction. In
doing summative assessment they take on the role of accredited witness, signing
to say that the student has successfully completed what was asked of them to
demonstrate competence, or has failed to meet the standard of competence
required. In both these instances the student is dependent for validation of
their practice on the authority of the assessor, who can choose either to look
in detail at what they have achieved or to carry out a purely perfunctory
'interview'. Even though the assessor's choice may be constrained by pressure
on time, their authority is reinforced by their ability to make the choice, a
right which is denied to the student. And if they choose to eschew the
collection of detailed evidence of the student's performance, the assessor
further reinforces their own authority by making themselves the sole source of
evidence about the student's competence.
In a teaching and
learning context it would be impossible to ignore the issue of authority, and
foolish to pretend it was a matter of no consequence. Indeed, teachers have
authority invested in them to develop students' knowledge, and clinically
qualified practitioners are given authority by virtue of their status as
assessors to judge the students' success in applying that knowledge. The
parameters of the discourse are therefore circumscribed in a way that several
of the others are not, and what becomes the issue of contention is the form of authority. It
therefore becomes a discourse about power (invested in individuals who hold
roles which carry authority -eg the assessor role and the teacher role), and a
discourse about evidence [64]. Where authority is
invested in a role it is both personal and positional (Bernstein 1976).
An alternative form of
authority is found in a comprehensive evidence base which is open to scrutiny
by the student and, if necessary, a third party. The authority of the
comprehensive evidence base lies in the fact that it can be verified and is
open to inspection. It is also bound to the fact that it can be judged for its
fairness. Evidence of skills and attitudes can be gathered from a number of
separate contexts and combined into a unified account, which can be compared
with the student's own evidence. It can in fact consist of evidence of
knowledge contributed directly by the student in the form of explicit
references to both the internalised knowledge and the book knowledge on
which they have drawn to broaden their understanding. The question of what
constitutes reliable evidence is of course part of a discourse which extends
well beyond nursing and midwifery into the whole field of phenomenology. The
discourse which surrounds that fundamental question is concerned with the
position of the subject vis-a-vis the object, and the intervening influence of
language. In a sense, all these
issues from the wider debate are of direct concern in the discourse about authority
in assessment. There is an interpenetration of discourses which emphasises the
need to tackle both issues together through the vehicles in which the two types
of authority are vested; namely, the role of the assessor as accredited
witness, and the function of assessment documentation as evidence base.
9.3. MOVING FORWARD
9.3.1. Designing
Assessment That Recognises Discourse
Bannister (1981), following Kelly (1955), argues for research conducted with
rather than on its subjects. Just as
research that is conducted conjointly with the subjects of the research is more
likely to deliver valid data because it offers the interpretations of the
people who are closest to the context of research, so the implementation of a
new system to develop the concepts, mechanisms and structures for devolved
continuous assessment is more likely to be effectively implemented if the
interpretations and the perceptions of the people who have to execute it are
taken account of. The fact that their interpretations are not all the same, and
that there is an on-going discourse about many of the key issues that surround
assessment practice, makes it even more essential that planners should become
aware of the discourse, and find ways of addressing the discourse itself
through the programme of assessment. Dialogue about the nature of education and
training, reflective practice, ownership, and authority will certainly persist,
and each of these discourses will continue to be worked out in social practices
that form part of the work culture. To attend to these discourses, with all
their interweaving sub-discourses, is, in a paraphrase of Bannister:
to draw on and make
explicit, (the subject's) personal experience, (and) to enjoy the wisdom and
companionship of your 'subject'
(Bannister 1981)
From such a position it
is possible to advocate structures, roles, and mechanisms for development that
take full account of these personal experiences and draw on the wisdom of the
research subjects. The recommendations
we make for:
• the collection and discussion of evidence
• the promotion of inter-assessor and student/assessor dialogue
• the use of assessment documents to promote analytical reflection
• the assurance of continuity
• the professional development of assessors
• the design of assessment documentation
• the assessment of written work
• the regulation of detection of 'cause for concern'
• the integration of theory and practice
• the assurance of quality in assessment
• further research and development [65]
take full account of the major and minor discourses explored, developed, and given significance by the people who work as nurses, midwives, and teachers, and consequently will, if adopted as national policy, have a better chance of being successfully implemented within individual institutions and specific clinical locations than policies which do not do so.
CHAPTER TEN
ABSTRACT
In an extension of
the argument in Chapter Nine, the final chapter examines the discourse of
theory, knowledge and practice through the discourse of work and
professionalism. The 'interpreting subject', who is also the 'knowing subject',
operates within a social, political, and economic context; they are a self in
the market place. They have to contend with the dilemmas created by the fact
that caring is a service for which the quality of the outputs (rational market
model) depends entirely on the quality of the processes (professional, or
developmental process model) and that therefore the two discourses, although
apparently diametrically opposed are in fact interdependent. The market
discourse envisages a perfect product, requiring comprehensive surveillance
systems to assess the performance of the producers and to measure the standard
of the individual components, skills, competencies, or outcomes. In this
discourse, the professional like any other worker is said to be motivated by consumer
demand for a better product, and by economic recompense. In the rational output
discourse, the job of the professional is to make judgements about
cost-efficiency, suspending personal knowledge if the intended outcome is
perceived as too costly. The professional discourse, by contrast, puts quality
of experience at the centre, and people work for satisfaction from the work. As
a consequence, the professional discourse values professional judgement and
decision-making, both the product of reflective practice and critical
dialogue. These are complex acts involving consideration of values and
knowledge, and therefore motivate a reconceptualisation of both. What counts as
theory and what counts as practice has to be re-evaluated. In the professional
development discourse, knowledge, competence, and understanding are seen as
socially and interactively constructed in both the classroom and the workplace,
and in this sense all assessment is formative as it shapes attitudes, beliefs,
and even competence itself. What follows from the debate about market models
and professional development process models of work, when that work is the
professional work of nurses and midwives, is a discourse about knowledge
itself. In preparation for that work, knowledge is central; when the knowledge
takes the form of a priori theory, the discourse constructs theory as 'unreal'
and practice as 'real'; when it is seen either as arising from practice or as
an hypothesis to be tested in practice, the discourse constructs it as integrated
'knowing and doing'. In this second model competence is not a thing, nor an
outcome, but a multi-faceted process that mirrors and expresses work itself.
The professional process discourse recognises the uniqueness of individuals and
their organisations and avoids generalisations. Instead it seeks to establish
structural dialogue and the research-based cycle of assessment and education.
This involves the collaborative collection of evidence, dialogic reflection
upon that evidence, interpretation of these reflections in terms of the
learning, occupation, and education discourses, placing caring itself back at
the centre of the agenda.
EXPLORING THE
DISCOURSE(S) OF THEORY, KNOWLEDGE AND PRACTICE
Introduction
The 'interpreting
subject' has knowledge in two domains: an inner knowledge of their own
personal intentions, feelings, and experiences, and an outer knowledge of the
world. In each case the individual claims a particular knowledge base or
conceptual framework capable of discriminating between 'fact' and 'fiction',
'real' and 'unreal'. Whether a professional or a client, a person's judgements
about situations and appropriate actions depend on the knowledge they claim,
and the extent to which those claims are believed or can be enforced.
The respective
discourses of professionals and clients occur within a social, political and
economic framework. That is to
say, they are part of a wider complex of systems and are subject to their laws
and conditions. In particular,
'interpreting subjects' are subjects in a market place, their discourses
employed in networks conditioned by the demands, opportunities and constraints
of markets. This has profound implications for the notion of 'knowledgeable
doer' described throughout this report.
Healthcare market
discourses are currently pervaded by two distinct conceptions of the market and
its role in the provision of services.
The dominant of the two has traditionally focused on products or outputs
rather than processes. These,
however, cannot be arbitrarily separated since the quality of a product or a
service depends upon the quality of the processes undergone. A further problem arises when
considering 'professional services'.
Unlike the production of cars or other concrete objects, the typical
output of a profession is often a process or a quality not reducible to a
concrete object or quantitative difference. The 'outputs' may be seen in terms of a changed 'way
of life', an experienced
'improvement' in the quality of life.
A professional is expected to handle complex information and make
appropriate judgements. These are
highly complex outcomes not amenable to simple procedures of
classification. Thus concepts of
knowledge and theory that are directed towards outcomes rather than processes
are likely to prove inadequate.
The two kinds of market
discourses having quite distinct implications for the integration of theory and practice can be referred to as the
rational outputs model, and the developmental process model.
10. THE RATIONAL
OUTPUTS MODEL
The rational model
dominates current thinking, deriving from the classical theories of economics
developed from the late eighteenth century. Three key assumptions of the classical model are of
importance in discussing knowledge, theory, competence and their
assessment: a) that all participants in the market place have 'perfect
knowledge' of all conditions, products, services and changes; b) that people
seek to maximise their satisfaction through consumption - a situation summed up
in the term 'consumer sovereignty'; and c) that goods and services are
homogeneous. While it has always been recognised that 'perfect knowledge' does
not obtain, it has always been considered as an ideal towards which the market
must move[66]. Although the ideal is not achievable in
practice, the rational market metaphor applied to health service practices and
processes does place on 'producers' and 'consumers' demands for detailed and
weighty audits, comprehensive surveillance systems for the assessment of performance
and the equally comprehensive documentation of competency in terms of
'competencies', or 'skills', or 'behavioural outcomes' or in the case of the
educational process 'learning outcomes'.
Implicit in all such demands is the desire for a kind of certainty in a
realm of action in which the dominating features are that of indeterminacy and
uncertainty.
Lane (1991) notes that where consumption and leisure are prioritised over
work, work becomes defined as a 'disutility'. People only engage in additional
work to the extent that the
disutility is compensated for by the extra income received. The consequence for any profession is
to shift the focus away from professional commitment as the motivation behind
professional action to a) monetary reward as an incentive; and b) the purchaser
as the one who values and through consumer demand defines not only what that
activity should be but also the level of quality to which it should be
delivered. The professional, in
this view, is an employee, hired
like any other employee to do a job defined by market demands. The metaphor of
the rational market, however,
presents a misleading logic of collective action and cultural and social
production (Lane 1991: 47). It ignores the centrality of work
experiences in framing collective action, generating a sense of self worth,
contributing to the development of a sense of identity, and it ignores that
work is the major lifetime experience of individuals providing for them a sense
of purpose, of belonging to a community.
That is, it ignores work as a process central to human development and
culture.
The product of the work
of professionals is not amenable to the same kind of standardisation as say
production-line cars. The kind of
service given cannot be standardised into a routine set of movements and stock
phrases as can be the case in a fast food restaurant chain. Standardisation is, in effect, the
opposite to professional standards[67].
The achievement of
professional status gives a mandate to exercise professional judgement.
Effectiveness is its standard.
Where the professional mandate articulates the need for effectiveness,
the rational outputs model shifts attention to cost efficiency. Cost efficiency involves a level of
risk, weighing the cost of an action against the probability of a desired
outcome or effect. A 90% success rate
may be considered a socially acceptable risk if the financial cost of a 100%
success rate is deemed too expensive.
In this kind of case the professional may be asked or expected to
suspend knowledge and standards in the interests of cost-efficient
standardisation of practice. Cost
efficiency and effectiveness are
thus quite distinct and can be in opposition to such an extent that ethical
dilemmas for the professional are raised.
In order to act the subject must not only interpret, but also come to a
decision. This decision is at the
interface of a multiplicity of discourses which include considerations of
knowledge, values, practicality, authority, efficiency, effectiveness. In short, professional decision making
as a product of reflective practice formed through critical dialogue is a
complex act involving valuing and knowing. Knowing and valuing are integral to judgement and are
constitutive of the professional mandate.
10.1 Moving From the
Rational Model to the Professional Mandate.
As discussed in chapter
1, section 1.2, a professional mandate is more than that provided by a legal
status. Nor can it be simply
compelled through legislation. It has
to be 'grown' through internalising professional discourses.
A recent study of
professional socialisation in nurse education in Northern Ireland (O'Neill,
Morrison and McEwen 1993:5[68]) provides an
illustration of this. It found
that 'There is some evidence that, despite the best efforts of the college,
traditional students come to see the nurse as someone who performs a series of tasks as opposed to someone
who thinks about the care he or she
is delivering.' This focus on a
prescribed series of tasks echoes the rational outcomes model of the
'scientific management' of workplace activities espoused by Taylor (1947). In
comparison, Project 2000 nurses:
eventually define their
role as a nurse in terms of a series of tasks. However, while traditional students define the role of the
nurse in terms of physical, low-level tasks such as bed-making, feeding and
washing patients, the tasks listed by Project 2000 students consist of
high-level interpersonal skills such as counselling, listening and explaining.
(p.7)
This points to the
distinction between the professional and the non professional mandate that is
being encouraged within Project 2000, focusing on the kind of holistic processes
espoused by Lane (1991) in his developmental approach to work. Two kinds of tasks can thus
be identified, orienting on the one hand the professional and on the other
the non-professional (or perhaps sub-professional). Together with this is a difference of philosophy, in fact a
paradigmatic difference:
Project 2000 students
see themselves as health promoters, an expression rarely used by traditional
students. This reflects Project
2000's health-based rather than disease-based philosophy.
(p.7)
The traditional and the
Project 2000 discourses signal the contemporary move from sub-professional to
professional work practices, from theory as something that is developed by
experts outside the workplace to be applied to the workplace, to something that
is formed by professionals in the workplace itself. The implications of this move for courses of education are
profound. What counts as theory,
and what counts as practice have to be reconceptualised, leading to a
reconceptualisation in turn of pedagogic and assessment practices.
The diagram approaches
this reconceptualisation by sketching a relationship between 'theory' and
'practice' which recognises that practice is itself suffused with theory,
albeit 'common-sense' or taken for granted theories as well as those which are
both critical and systematic.[69] In addition, the apparent or perceived
'theory'/'practice' opposition is represented.
Key:
t.f.g. =
taken-for-granted
A.R. = Action Research.
Figure 1: Forms Of
Theory
The gap between theory
and practice is most explicit when theories are conceived as a priori, as
developed independently of practice at a level of 'ideals', or as a conceptual
taxonomy of 'skills', 'aims', and 'objectives'. The gap is also very apparent
when practice is conceived as 'reality', which in turn makes 'theory' seem
'cold', 'distant', 'impractical', 'utopian'. The gap between 'theory and practice' may be reinforced by
the distinct sites in which they arise:
theory at the college; practice in the reality of the clinical area.
In this relationship
work, being the realm of practical everyday affairs, takes precedence. It is here that the material and human
world is manipulated and transformed.
It is here, therefore, that judgements are made about what is or is not
real. Thus, through the discourses
of work, theory, knowledge and
practice are placed into relation.
On the one hand, the relation may be perceived as being divided in terms
of theory being 'unreal' and practice being 'reality', on the other as
inextricably bound, theory being the ever provisional product of the critical
reflection on practice and in practice.
Alternatively, from the point of view of the traditional model, theory,
being the product of reason, should impose itself upon reality, making reality
itself more rational.
Where a gap is
perceived, people construct bridges between theory and practice. One person may seek to bridge the gap
through trying to achieve accommodations between the demands of reality and the
ideals of theory; another may seek to resist what is felt to be an obstructive
system in order to go some way to meet some dearly felt ideals. In either case, curriculum development
and assessment are seen as college based activities quite separate from
practice but imposing on practice, leading to all the issues of competing
demands.
Where theory and
practice are perceived as integrated,
theorising is grounded in experience (Glaser and Strauss 1967) and action plans based upon emergent theory are
formulated both to test theory and to innovate and bring about improvements in
action (Elliott 1991). A
'grounded' theory is one which is based on what has been observed, and checked
out, over an extended period of
time. Both curriculum development
and assessment practices therefore find their basis in work practice. A curriculum becomes a hypothesis to be
tested out in practice (cf. Stenhouse 1975). Assessment is then as much an
evaluation of curriculum as it is of student performance (cf. MacDonald 1970;
Eisner 1993). The student is as centrally involved in curricula and
assessment matters as is the teacher in a relation of mutuality (cf. Schostak
1988; 1991).
10.1.2. Assessing
Integrated Knowledge, Understanding, and Competence
Given the evidence
throughout this report that knowledge, understanding, and competence are
socially and interactively constructed in both the classroom and the clinical
environment, it follows that what is valued through being made the focus of
assessment and learning in one or other of those locations becomes thought of
as 'worth learning', and what is excluded (by box-ticking, for instance)
becomes 'not worth learning'. In
this sense, all assessment is formative, as it shapes attitudes, beliefs, and
even competence itself. The
assessment task is to promote activities associated with professional
judgement.
This section sets out
the research team's attempt to construct a theoretical model of the relationship between knowledge,
skills, measurement, and judgement. The model remains tentative in its outlines:
it is an attempt to explore rather than to draw up a definitive
statement. The ideas around which the model is formed all derive from
interview, observation and documentary data.
Figure 2: Knowledge, Information, and Judgement.
The triangle, in figure
two, reading down from the top, is divided into three parts. The first provides a narrow definition
of competency, restricting it to those elements or features that are observable,
can be reduced to behavioural or discrete elements, and can be sequenced into
procedures. This level corresponds
also with what can be called 'information'. This kind of information can be stored on the 'human
computer' and can be reproduced, in an examination for instance, in a
computer-like fashion.
The second level in the
triangle represents a step away from that which can readily be measured and
proceduralised and includes elements of judgement. Where the first level is abstractable from concrete situations,
the second is situationally specific.
At this level, the individual must discriminate between procedures and
select those appropriate to a given situation.
The third level of the
triangle corresponds to the level of competence that allows an individual to
transcend the situationally specific and set the specific into relationship
with a 'greater picture' concerning client care.
The move from the
'narrow definition' towards the 'broad definition' signals also a move from
'mere' information held much
like a computer holds information, towards knowledge. Because in this model, the
definition of knowledge includes the notion of the 'knowing subject who acts in
the world', knowledge here is essentially integrated with 'practice' and
'action'. 'Skills' at level one may be 'measured' in terms of the observable,
or behaviourally reduced elements that compose the 'skill'. However, this remains mere 'information'.[70] Only by the move to level two do
'skills' take on a situational meaning in terms of 'action'. Being an interpreting subject means
not just being able to access a data-base of information and skills but being
able to interpret, select and prioritise according to situational demands and
the needs of the client. At level
three, the interpreting subject - who by now has become the knowing subject -
generates not simply a situational gestalt but draws upon past experiences,
projected possibilities based upon 'situations like this', and recognitions of
difference and novelty. At this level
competence involves 'case-sensitivity' and the ability to act in novel
situations, or situations unlike any the individual has met before. This is not to say that measurement is
excluded, but that it is subjected to professional judgement.
"...we actually
generate theoretical assessment out of learning to nurse in the placement. And
that...is then assessed not by the theoretician in the school, or the
practitioner on the ward, but by both....to ensure that the teacher stamps
academic diploma level credibility on it, in terms of the cognitive and
intellectual development, and the theory base, and the application of research
and the vocabulary..., but that the clinician stamps clinical credibility on it
by saying that is what happens, that describes a real situation..."
The circle represents
the flow or interrelationship between the narrow definition and the broad
definition. Judgement without
operational skills is as useless in practical situations as operational skills
without the discriminating judgement of when to use them.
The aim of the assessment
process is not only to make judgements about the performance of students but
also to sensitise the student to the work situation. Care is a specialised sphere of work. It is the professional's work as
'knowing subject' to draw on past experiences in relation to projected
possibilities and make judgements about how best to manipulate and transform
objects in the social and physical world to create appropriate conditions of
care. Formative assessment
processes draw students into the ambit of work of the professional, enabling
the student to reflect with the
experienced professional at the same time as events are happening and also,
afterwards, to reflect on, critique and learn from what happened. This model of assessment as being
intricately tied with the processes of work and education contributes to the
conceptualisation of what it means to be a competent practitioner. Throughout the report the data supports
a conceptualisation of competence as not a 'thing', not an 'outcome' but rather
a multifaceted process.
Competence, it has been argued, is an expression of work where work
itself includes the whole complex of processes through which individuals learn,
judge, make decisions and act to manipulate and transform the world for human
purposes, then critically reflect on the results and begin again the process of
work.
10.2. TOWARDS
RECOMMENDATIONS: A REVIEW AND A PREVIEW.
The institutional
structures required to implement this conception of the process have been
progressively explored throughout the report. Since no one institution is identical to another, no general
assertion of what is required can be made. However, it is possible to contribute to the debate by
making observations about structural needs. This is done in detail in the recommendations which follow
this chapter. These
recommendations are underpinned by a conception of the entire process through
which institutional structures, work processes and reflective practitioners are
all integrated in a research based cycle of assessment and education:
Figure 3:
The diagram focuses on
the cycle for the collection of assessment evidence through the structures and
mechanisms established by an institution.
These are then reflected upon by a) the student, b) the mentor, and c)
the tutor. Each interprets the
evidence according to their own particular discourses - the learning discourses
of the student, the practice/work discourses of the mentor, the educational
discourses of the tutor. If these
are not placed into dialogue then each will take their interpretations
individually and no integration will take place. If there is dialogue then this too can feed into the
monitoring processes through which the assessment structures and mechanisms can
be evaluated in terms of their efficacy for forming an appropriate evidence
base. And the cycle begins again.
Essential to the process
is structured dialogue. At the
heart of the recommendations to come from this report is the view that both
dialogue and the creation of evidence for critical reflection must be
structured into the system. Taking
a more general interpretation of the model, it points to the research process
as underlying the processes of learning, assessment and professional activity.
In a sense, this chapter
marks a return. In Part A of the
report the assessment of competence was explored at a conceptual level, first
through the literature and then in the discourses of academics and practitioners
as they reflected upon their work in relation to students. The research has been informed by a
view of the interrelationship between concepts, structures and events in the
world (Sayer 1993). Change any one of these and there are
implications for the others.
Nursing and midwifery are undergoing radical changes on a multiplicity
of fronts. These make considerable
demands upon the professional as 'knowing subject' who must assess, make
decisions and 'act in a manner so as to promote and safeguard the interests and
well being of patients and clients (and) maintain and improve (your) professional
knowledge and competence' (UKCC Code of Conduct, 1992).
This is recognised in
the Taskforce[71] report on research
(1993). It considered that
'Education and training are vital to the dissemination of knowledge and the
generation of critical and enquiring approaches to practice' (2.2.7).
The general aim is to
refine the judgement of the knowing subject through research and education to
meet the changing demands on the professions. More generally, what is required is the knowing subject as a
reflective practitioner within a community of reflective practitioners each
engaged in critique to improve practice.
This, effectively, is a definition of research based professional
practice.
The fuller implications
of this are far reaching both for clinical and educational practice in nursing
and midwifery. The recommendations
that complete this report are intended to contribute to the debate on the
appropriate structures, mechanisms and procedures that can be employed to
fulfil the vision.
RECOMMENDATIONS
FOR ACTION TO DEVELOP THE ASSESSMENT OF COMPETENCE IN NURSING AND MIDWIFERY
EDUCATION, AND TO BETTER ENABLE THE INTEGRATED ASSESSMENT OF KNOWLEDGE, SKILLS,
ATTITUDES, AND UNDERSTANDING.
This document suggests
recommendations for changes in the assessment of competence in nursing and
midwifery education that are based directly on the research findings of the ACE
Project. An outline of the relevant findings is provided under each heading,
and a key recommendation made (shown in bold typeface). Following the key
recommendation, there is a list of contingent recommendations which offer
suggestions about the structures and mechanisms by which the key recommendation
may be achieved. In the spirit of the argument presented in the main research
document, these contingent recommendations are presented as propositions for
discussion.
The ACE research shows that
policy-makers in organisations providing nursing and midwifery education
interpret the original ENB framework for assessment in a variety of ways. It
also shows that the organisation's own guidelines and procedures are then re-interpreted
by the individuals who carry out the policy. The pragmatic and cultural
practices in an institution impose further constraints upon what happens. The
consequence is that there is often a considerable gap between the rhetoric of
documentation and policy and the reality of what happens in practice. Many of
the recommendations which follow acknowledge the existing intentions but aim to
suggest appropriate structures and mechanisms for closing the gap between
intention and reality. Some of the recommendations add another dimension by
focusing on dialogue, analysis, and educative practice, and propose further
procedures for their implementation.
• THE COLLECTION AND DISCUSSION
OF EVIDENCE OF COMPETENCE. [72]
The research demonstrates that the
central assessment device of student-assessor meetings (of which there are
normally three) to discuss progress in the clinical placement area is
potentially a valuable one. It also shows, however, that where these meetings
do not function as effectively as they might it is because they are hurriedly
carried out, the student and the assessor discuss the student's competence
primarily in terms of 'coverage', and there is limited close analysis of the
evidence that demonstrates the student's competence. We recommend that to
redress these shortcomings the following strategies should be instituted
nationally for all clinical placements. [73]
1. The assessment of clinical
practice should require the collection of a range of forms of evidence to serve
as the basis for student/assessor discussion about knowledge, skills,
attitudes, and understanding.
• 1.1. Protected (i.e. timetabled) time should be allocated for mid-placement discussion between student, assessor, and other appropriate staff.
• 1.2. During the mid-placement assessment, time should be allocated for carrying out the procedures for collecting, analysing, and reflecting on evidence defined in recommendations 1.3-1.7 below.
The formative assessment should be used as follows:
• 1.3. to review the student's activity to date in terms of issues;
• 1.4. to offer the student targeted, evidence-based feedback on how their present understanding of the issues is perceived;
• 1.5. to outline an action plan for studying the issues in the second part of the placement;
• 1.6. to discuss and agree the principles on which an action plan for student development is to be formulated [74];
• 1.7. to record in writing the evidence offered in discussion, the feedback advice given, the issues raised, and the agreed principles for shaping the action plan.
• 1.8. During the final summative assessment, additional time should be allocated to carry out the procedures for reviewing current and preparing for future placements defined in recommendations 1.9-1.11 below.
The summative assessment should be used as follows:
• 1.9. to discuss the formative assessment and any subsequent evidence relating to it explicitly in terms of knowledge, skills, attitudes, and understanding;
• 1.10. to record the issues from the discussion in an agreed format which will provide the focus for later study;
• 1.11. to construct a portfolio of evidence from the placement as a whole which will be available to assessors for subsequent placements.
• CONTINUITY AND PROGRESSION IN THE DEVELOPMENT AND ASSESSMENT OF COMPETENCE.
The research indicates that assessors often
experience difficulty in ensuring that students are advised and assessed at a
level commensurate with their (the student's) practical placement history and
level of theoretical understanding. One of the reasons for this is the
prevalence of short placements which make it difficult for assessors to (a)
gather evidence of complex competencies, (b) carry out diagnostic assessment,
and (c) ensure student progress with respect to identified needs. A further
reason is that many assessors operate within a narrow theoretical framework
founded primarily on their current clinical context and leaving out of account
the need for development across contexts. [75] Most assessors have had
only limited training for their role, and are unused to discussion of the
relationship between the principles of practice they apply in their own
clinical area and the operation of those principles elsewhere. Indeed, students
are often left to construct their own continuity between placements, a
situation which places a considerable burden of responsibility on them. To
ensure continuity and progression of student learning, assessors must be able
to carry out appropriate analysis of student competence in context and be able
to comment on the issues raised with reference to the wider context of
nursing/midwifery. In the light of this we recommend the following.
2. There should be a clear procedural
framework for ensuring that assessment provides continuity of reflective
practice based on the collection of evidence, the carrying out of informed
analysis, and the construction of a theoretically informed action plan.
• 2.1. Assessors should indicate in documentation and discussion the level at which they are expecting students to address knowledge issues. [76]
• 2.2. Assessors should gather and record evidence of development in the level of a student's competencies between the outset of a placement and its conclusion.
• 2.3. Placements should be long enough to enable evidence of overall progress to be reliable and progress in particular areas of competence to be differentiated according to level. [77]
• 2.4. Time should be provided at the beginning of a placement, and at weekly intervals thereafter, for at least two people to study the student's portfolio of evidence and discuss it together.
• 2.5. There should always be available to the student a person other than the main assessor who is familiar with their portfolio and able to discuss progress with them on the basis of the evidence in it. [78]
• 2.6. Placements should be long enough to enable the collection of multiple sources of evidence and to facilitate fair judgements.
• 2.7. Assessment documentation included in a student's portfolio of evidence of knowledge, skills, attitudes, and understanding should be made available to an assessor who is about to receive that student.
• 2.8. There should be increased opportunities for assessors to meet together to discuss continuity, with particular reference to the notion of levels of competence and the criteria that attach to them.
• THE PROFESSIONAL DEVELOPMENT OF ASSESSORS.
It is apparent from the research that while most
clinical staff are comfortable discussing craft knowledge with students, many
feel insufficiently prepared to carry out the critical analysis necessary for
helping students focus on particular strengths and weaknesses; consequently
they lack the confidence to formulate a set of individual targets for a
student to meet. The data also reveals a wide range of practices with respect
to liaison between staff in the clinical areas and those in the approved
education institution, and indicates a dissatisfaction on both sides with the
quality of such liaison. To be able to assess students' competence on the basis
of collection, analysis, and reflection on multiple sources of evidence, in a
manner that ensures the development of contextualised theory and clinical
competence, all assessors must themselves have an appropriate level of
understanding of both theory and practice. The following recommendations suggest how this might be
assured.
3. The preparation of clinical area assessors
should develop assessors' competence in collecting evidence, analysing data,
and developing frameworks for discussion.
• 3.1. Assessor preparation should include familiarisation with 'frameworks for looking at practice'.
• 3.2. Assessor preparation should include extensive discussion about frameworks which offer a range of alternative perspectives, and should provide experience of handling the arguments and issues involved.
• 3.3. Assessor preparation should include guidance on the collection of evidence.
• 3.4. Assessor preparation should include a focus on the identification of issues which affect the construction of an action plan for student development in the clinical area.
• 3.5. Assessor preparation should explore ways of ensuring that students' previous experience and the level of their theoretical knowledge are both taken into account in assessing their competence.
• 3.6. Assessor preparation should include guidance on the development of a critique of contextualised practice.
• 3.7. The role of link teachers should formally include fully negotiated responsibilities for ongoing liaison with, and support and development for, clinical assessors.
• PROMOTING INTER-ASSESSOR AND STUDENT/ASSESSOR DIALOGUE.
The research shows that assessors' perceptions of
competence are shaped by the clinical context in which they work, by their
understanding of the relationship between that context and nursing theory, and
by their experience of working with students at differentiated levels of
competence. It also shows that where individual criteria for judgements are the
subject of discussion and dialogue between groups of assessors, common
understanding increases. There is no evidence, however, that this greater
common understanding leads to a single 'objective' view; rather, it helps to
make clearer the basis for a range of subjectivities, and thus to encourage
debate about theory in practice.
The following recommendations suggest how dialogue might be promoted.
4. To ensure assessment which looks at theory
and practice at one and the same time, mechanisms should be introduced to
facilitate regular dialogue between clinical assessors, and between clinical
assessors and their approved education institution colleagues.
• 4.1. Each approved education institution should institute regular forums for clinical and education staff to:
• promote shared understandings of the assessment process in that institution;
• discuss the nature and quality of evidence on which judgements will be made;
• identify areas of strength and weakness in current assessment practice.
• 4.2. In each placement area, the assessor(s) in consultation with the student should compile a portfolio of evidence of the student's knowledge, skills, attitudes, and understanding, and should identify in writing issues to be explored further in discussion.
• 4.3. Discussion and documentation for both formative and summative assessments should include evidence from more than one source. The evidence should be compared, differences noted, and an account given of the analysis of those differences.
• 4.5. Evidence for assessment in the clinical area should be supplied by at least two assessors who would sign the assessment documentation after discussion between themselves and the student concerned. The two assessors may:
• both be placement area staff;
• be an assessor from the placement area and a link teacher from the approved education institution;
• be an assessor from the placement area and a person with a lecturer-practitioner role which includes assessment.
• 4.6. It is possible that dialogue between clinical practitioners and educators may increase naturally as a consequence of the ENB directive that 0.2 WTE of education staff time be spent in the clinical area. We would strongly recommend that assessment be a declared orientation for all staff for a specified percentage of this time.
• 4.7. Where it is not already the case, link teachers should ensure that their liaison activity includes discussion with clinical staff about the relationship between theory and practice, offering papers where appropriate.
• ASSESSMENT DOCUMENTATION.
The research data reveals that
clinical practice assessment documents take a wide variety of forms, ranging
from tick-charts (with minimal space for comments) that indicate behaviour
observed and skills achieved, through schedules which embody lists of criteria
that indicate generic competencies and include generalised statements that many
assessors find 'too vague', to diaries or learning portfolios in which there
are records of experience gained, evidence of the learning which has taken
place, and reflective discussions of the theoretical issues raised. The data
also suggests that discussion between assessor(s) and student about theoretical
issues arising from practice is most likely to occur where documentation
provides evidence which can form the focus of a dialogue. The following
recommendations suggest how assessment documents might be structured to promote
such discussion.
5. Assessment documentation
should be broadened to include evidence contributed by more than one accredited
witness and used as the basis of dialogue about knowledge, skills, attitudes,
and understanding.
• 5.1. Assessment documents should include an indication of the areas of competence undertaken by a student and a criterion-referenced account of the quality of the student's skills and attitudes in relation to those competences.
• 5.2. Assessment documents should include details of the issues raised in discussions between student and assessor(s).
• 5.3. Assessment documents should either provide full and sufficient space for the proper documentation of evidence from several sources, or should clearly indicate in the instructions to assessors that they are to be compiled and read in conjunction with an accompanying portfolio.
• 5.4. Assessment documents should include more than one source of evidence about the matters in 5.1. and 5.2.
• 5.5. Assessment documents should include details of the action plan devised between assessor(s) and student during the mid-placement formative assessment.
• 5.6. Where assessment schedules contain categorical lists, these should include categories that reflect situationally grounded theory.
• 5.7. A high percentage of assessment documentation should employ a process model of professional development in which the assessment activity itself promotes learning; the formative function should be central.
• REGULATING THE EFFECTIVENESS
OF EARLY DETECTION OF 'CAUSE FOR CONCERN'.
Somewhat paradoxically, the
research shows that not only do many nurses and midwives find their current
assessment schedules less than helpful as a framework for analysing students'
ability to see the particular in relation to a holistic context, but also that
they find them unhelpful as a way of providing a mechanism for failing 'poor'
students. The way that documents are structured, and more particularly the way
they are used, often means that failure is identified on the basis of the
authority of the accredited witness (i.e. the clinical assessor who signs the
assessment schedule), rather than on the basis of carefully documented evidence
from a range of sources. There were a number of clinical practitioners who were
convinced at an intuitive level that the student they were assessing would make
a poor nurse, but were nevertheless hesitant about signing the student into
fail status. Some assessors expressed concern about their own ability to make
that final decision, and showed a perennial optimism about their ability to
help the student improve. [79] The problem, then, is
not related to the absence of 'failure' mechanisms but to the lack of
appropriate monitoring of their use. The following recommendations
address the points raised by 'failure to fail'.
6. Every institution should review the
operational effectiveness of its mechanisms for early identification of
students presenting cause for concern, ensuring that the procedures for
evidence collection and case discussion are adequately regulated.
• 6.1.
The use of the 'cause for concern' section of assessment documents should be regularly monitored to ensure that concern is recorded at an early stage, and certainly no later than the point at which formative mid-placement assessment takes place.
• 6.2.
In each approved educational institution a named person with an assessment quality assurance role (see below) should be responsible for ensuring that each placement area is aware that documented 'cause for concern' is to be delivered by an agreed time, and for sending a reminder no later than the point at which formative mid-placement assessment takes place.
• 6.3.
On receiving notice of 'cause for concern', the assessment quality assurance officer should initiate a meeting between the assessors and the student to consider the evidence which has led to concern, and to arrange for the student to be provided with a targeted action plan.
• 6.4.
The assessment quality assurance officer should not arbitrate on the fairness of the assessment at this stage, but should arrange for the student to be re-assessed within two weeks. [80]
• 6.5.
The re-assessment should examine the strategies the student has used to improve their understanding, the quality of their reflection on the issues which the action plan was introduced to examine, and the way that they have carried out any programme of practical activity.
• 6.6.
Details of the 'cause for concern', together with accounts from the assessors and the student of the action taken, should be available to any future assessors.
• THE USE OF ASSESSMENT DOCUMENTS TO PROMOTE ANALYTICAL REFLECTION.
It is clear that documentation which requires discussion between assessors on the one hand, and between the assessor(s) and student on the other, is most likely to promote contextualisation of practice within a theoretical framework and to promote understanding of site-particular issues in relation to the wider institutional, professional, and educational context. [81] It is also clear that structures to support dialogue facilitate integration between theory and practice. [82] The following recommendations indicate some of the ways that assessment documents might be used to increase dialogue.
7. Documents should require written accounts
of analytical reflection on the relationship between particular clinical events
and general nursing or midwifery principles.
• 7.1.
Documentation should be used to support situational analysis.
• 7.2.
All the issues discussed between assessors and students during a placement should be recorded in assessment documents and should be the focus of a mid-placement overview of the theoretical issues raised during the placement to date, together with the possible implications for practice.
• 7.3.
Accounts of discussions held during one placement should be available in written form to the assessors for subsequent placements.
• 7.4.
Assessors for a placement should ensure that they discuss with the student at the earliest possible opportunity the account of issues raised on a previous assessment, with a view to setting a programme for the current placement which will further develop that student's knowledge and understanding.
• THE INTEGRATION OF THEORY AND PRACTICE, WITH REFERENCE TO KNOWLEDGE, SKILLS, ATTITUDES, AND UNDERSTANDING.
The research provides evidence that theory is more often assessed separately from practice than 'at one and the same time' as practice. It does show, however, that there are occasions when theory drawing directly on practice is assessed in the classroom. Typically, students may complete a neighbourhood profile which may include an account of practical activity they themselves have undertaken in the community, the context in which it occurred, and the theoretical framework within which the study was constructed. Additionally, immediately after a placement block they may discuss the theoretical issues which have emerged from their practice. More frequently, however, theory is perceived as a priori knowledge which has to be drawn upon or 'applied' in practice. To promote the bringing together of theory and practice in a way that fosters genuine integration rather than pragmatic 'bridging', and at the same time permits appropriate assessment of both components of the whole, we recommend the following.
8. Summative assessment should take account of
multiple sources of evidence of knowledge, skills, attitudes, and
understanding.
• 8.1.
Assessment documentation should include evidence of skills demonstrated, collected in a form which makes it possible to identify the range and type of context in which the skill has been used, and the range of different assessors who have seen it used.
• 8.2.
Assessment documentation should enable students to provide evidence of practical and theoretical knowledge and to demonstrate that they are able to formulate theory through reflection on practice.
• 8.3.
There should be a range of documents from which evidence is taken to determine the quality of a student's understanding and the application of that understanding to the achievement of competence. Such documents would require the student to take a reflective stance.
• 8.4.
All the documents examining a student's practice, and the theoretical context in which it has been considered, should contribute to a portfolio to be assessed as an entity for evidence of the extent to which the student has indeed integrated theory and practice. [83]
• 8.5.
A portfolio should include several of the following items which bring together theory and practice, and provide evidence of student knowledge, attitudes, and understanding:
• case studies of projects undertaken [84]
• accounts of a practical/professional problem identified and explored by the student
• student diaries (or significant extracts and stories from these) constructed during placement and open to inspection and comment
• learning contracts in which there are
– records of what has been identified from practice as needing to be explored within a theoretical framework
– discussions of the issues raised
– references to the reading drawn on
• 8.6.
The portfolio of evidence should be used to determine the student's ability to examine critically the particular and situation-specific in terms of the general context of nursing and midwifery.
• ASSESSMENT OF THEORY AND PRACTICE THROUGH WRITTEN COURSEWORK.
The Project's research in classrooms indicates that more opportunities are created for nurse and midwife teachers to meet together to agree their assessment criteria than for clinical practitioners to do so. This is partly because of the teachers' very different work pattern, and partly because they are obliged to operate within a set of procedures for the assessment of written work which have developed out of a long tradition of such assessment. Although there were disagreements between assessors about the precise level at which a piece of work had been executed, and therefore the mark it should be given, disputes were resolved by dialogue between assessors who often met together as a course team. In some institutions there were 'marking workshops' where assessors marked specimen scripts and discussed their criteria.
Assessors experienced greatest difficulty in evaluating new forms of coursework such as projects, often varying considerably in the marks they allocated. In particular they sometimes experienced difficulty because the student had supplied an inadequate evidence base. In some instances problems arose where there was a focus on taxonomies as frameworks for understanding, and where those taxonomies were treated as unproblematic. The fact that theory was often not 'problematised' led in some cases to students perceiving it as something to apply uncritically to practice, rather than something to be reflected on in the light of practice. Where the quality of teacher contact with clinical areas was good, there was increased awareness on both sides of the fit between theory and particular contextualised occurrences; there were, however, few instances of the generation of theory directly out of practice, or of the planning and implementation of a piece of small-scale research into nursing or midwifery practice per se. [85] The following recommendations build on the strengths of assessment in the approved education institution and suggest ways in which it might be further developed.
9. Assessment of written
coursework should ensure a focus on the development of a coherent and explicit
theory of nursing/midwifery which takes account of the complexity of practice,
and encourages the development of critique of both practice and theory.
• 9.1.
A dossier of 'moves towards the formulation of a nursing/midwifery theory' should be compiled for each student. This dossier should be used to provide evidence of an individual student's development of a theory of nursing/midwifery practice, and should be available for formative and summative assessment.
• 9.2.
A dossier should include several of the following items which bring together theory and practice, and provide evidence of student knowledge, attitudes, and understanding:
– reports on a student's area of interest in post-placement discussions in the classroom
– issue papers presented to the class for discussion
– issue-based compilations of evidence from a student's portfolio
– reports on post-placement classroom simulations of dilemmas experienced in practice
• 9.3.
Assessment criteria for written coursework should be made explicit at the outset of each course, and students offered an opportunity to evaluate the criteria in the light of their practical experience.
• 9.4.
Coursework which involves small-scale research into practice should be encouraged, and the way in which the criteria for assessment differ from those for library-based assignments should be made explicit.
• 9.5.
Wherever appropriate, clinical assessors should be invited to read and comment on assignments completed by students who have drawn on data from their clinical context. [86]
• 9.6.
Coursework should encourage the comparison of theories, and the development of a critique where appropriate.
• 9.7.
The measures recommended from the perspective of clinical assessment for ensuring liaison and dialogue between assessors in the clinical placement area and teachers who are also assessors should be reciprocally applied from the perspective of those involved in coursework assessment. [87]
• QUALITY ASSURANCE IN ASSESSMENT.
Perhaps the most significant factor affecting the effectiveness of assessment of both theory and practice is the extent to which assessors apply the principles and procedures set out in their institution's validation documents. All the validation documents we studied espoused the rhetoric of reflective practice in their Philosophy section. The section on Assessment in the same documents usually had clear statements that formative assessment was to take place regularly and was to involve discussion between assessors and students. The documents also set out a number of structures through which formative assessment was to take place. In the context of the classroom it was, in most cases, to be dealt with through individual tutorial and post-placement feedback. In the clinical context it was invariably to be tackled through three assessment-schedule-completion events involving both student and assessor. Many clinical practitioners were constrained in the way they assessed students by the fact that they (the practitioners) had intensely busy schedules, shift and off-duty patterns of work that were incompatible with the students', and insufficient preparation for carrying out effective analysis and discussion. Many lecturers were constrained by their heavy face-to-face teaching loads, by the fact that they were teaching and carrying out assessment across a wide range of programmes, and by the limited opportunity they had for visiting students in the clinical placement area. The consequence was that there were many occasions when assessment was hurried and feedback given without explicit discussion of the evidential and value bases that informed the assessor's judgements. Given that there is also a natural tendency for institutions providing a service to be responsive to immediate demands and to downgrade time for planning and reflection, it is clear that there is a need for a further set of mechanisms to assure the effectiveness of formative assessment. The rigorous application of assessment quality assurance mechanisms may have some small additional initial cost implications; because the form of assessment recommended throughout is a constantly developing, mutually educative one, however, that initial investment of time by practitioners will be repaid from savings achieved later as they come to understand the competence and assessment issues better. The following recommendations offer a way forward.
10. Every institution should be required to rigorously apply the ENB's requirement for approved institutions to "monitor, review, and evaluate the progress, development, and effectiveness of all aspects of the assessment process" in terms of the structures, roles, and mechanisms employed for carrying out continuous assessment.
• 10.1.
A designated member of the clinical staff in each placement area should be responsible for carrying out an annual review of the professional development needs of staff involved in assessment.
• 10.2.
A designated team of both clinical staff and education staff should be responsible for carrying out an annual review of the quality of evidence offered in assessment documents.
• 10.3.
There should be light-sampling monitoring of mid-placement discussions to ensure that they are happening at the appropriate moment during a placement, that they are of the duration necessary to permit reasonable discussion, that discussion is satisfactorily recorded, and that practical competence is being assessed in relation to a theoretical framework.
• 10.4.
Where not already in place, an assessment quality assurance role should be created in each approved education institution. The person appointed to this role should be known as the Assessment Quality Assurance (or AQA) Officer.
• 10.5.
The responsibilities of the AQA Officer should include the following:
– monitoring of mid-placement assessments
– receiving representations from students who believe they have a cause for complaint with respect to the time allocated to them for discussion of their knowledge, skills, attitudes, and understanding
– co-ordinating liaison between clinical placement areas
– promoting continuity between assessment 'centres' through the development of shared understanding of the assessment process as a whole
– ensuring the development of assessment that brings together the clinical and the educational
– ensuring the provision of assessor forums and workshops
• 10.6.
The AQA Officer should be allocated time and resources to ensure planning and development.
• 10.7.
The AQA Officer should have an ex officio role on all Examination Boards.
• 10.8.
There should be national and/or regional ENB-approved courses in the philosophy and practice of assessment in the workplace as it applies to nursing and midwifery. These courses should deal with assessment and learning at a level beyond the current 997/998 courses, and should be offered to Assessment Quality Assurance Officers and the Regional Education Officers of the Board in the first instance.
• RESEARCH AND DEVELOPMENT.
The ACE project research shows that the focus in nursing and midwifery education on 'research awareness' has led many clinical practitioners and educators to conceive of research-led nursing and midwifery primarily in terms of the application to practice of findings that exist 'out there' in research papers or books. It is clear that there is a need for nurses, midwives, and educators to 'own' research for themselves through participation in forms of research which focus on practice from the perspective of the practitioner. It is also clear from the project data that there is a need to develop programmes that consider the findings of mainstream assessment research in the context of nursing and midwifery. The following recommendations suggest ways in which the research focus may be extended.
11. The current nursing and midwifery focus on research as 'research awareness' should be extended to include a focus on research process, particularly as that process applies to practice at local level and policy at regional and national levels.
• 11.1.
There should be a concerted push to induct nurses, midwives, and educators into the processes through which they can examine their own practice and become familiar with the collection and analysis of, and reflection upon, evidence. The focus should be on the development of research programmes which can be undertaken in the workplace as an aspect of quality assurance, and not upon the de-contextualised transmission of research techniques. [88]
• 11.2.
Approved education institutions should be encouraged to seek research funding to look specifically at the operational effectiveness of their current assessment policies.
• 11.3.
Approved education institutions should be encouraged to seek research funding to look specifically at the effectiveness of assessor preparation.
• 11.4.
Future innovations in nursing or midwifery education should be piloted and thoroughly researched before the innovation is adopted on a national scale, with a view to establishing the most appropriate strategies for assessing any new competence involved. Wherever possible that research should involve some groundwork research by a number of individuals from innovating institutions. [89]
The rationale for each set of recommendations is set out briefly above, but reference to the main body of the Research Report will provide a much fuller argument. This argument, and therefore the recommendations that follow from it, is itself open to critique.
REFERENCES [90]
ACE Project (1992) The
Assessment of Competencies in Nursing and Midwifery Education and Training, Interim Report, School
of Education, University of East Anglia
Aggleton, P., Allen, M. and Montgomery, S. (1987) 'Developing a System for the Continuous Assessment of Practical Skills', Nurse Education Today, 7, 158-164
Akinsanya, J. A. (1990)
'Nursing Links with Higher Education: a Prescription for Change in the 21st
Century', Journal of Advanced Nursing, 15, 744-754
Andrusyszyn, M. A.
(1988) 'Clinical Evaluation of the Affective Domain', Nurse Education Today, 9, 75-81
Ashworth, P. and
Morrison, P. (1991) 'Problems of
Competence-based Education', Nurse Education Today, 11, 256-260
Ball, S. (1981)
Beachside Comprehensive: A case study of secondary schooling, Cambridge: Cambridge University Press
Bannister, D. (1970) Perspectives in Personal Construct
Theory,
London: Academic Press.
Belstead, Lord, Hansard, H.L. vol. 532, col. 1022, 9
Benner, P. and Wrubel, J. (1982) 'Skilled Clinical Knowledge: the Value of Perceptual Awareness' (Part 2), Journal of Nursing Administration, June, 28-33
Benner, P. (1984) From Novice to Expert: Excellence and Power in Clinical Nursing Practice, California: Addison-Wesley
Benner, P. (1982)
'Issues in Competency-Based Testing', Nursing Outlook, May, 303-309
Benner, P. (1983) 'Knowledge in Clinical Practice', Image: The Journal of Nursing Scholarship, XV, 2, 36-41
Bereiter, C. (1979)
'Development in Writing' in: Gregg, L. and Steinberg, E (eds) Cognitive
Processes in Writing, Hillsdale N.J.: Erlbaum.
Bernstein, B. (1971)
Class, Codes, and Control: Volume 1. London: Routledge and Kegan Paul
Bloom, B. S. (ed.)
(1954) Taxonomy of Educational
Objectives. Handbook I: Cognitive
Domain,
New York: D. McKay &
Co. Inc.
Bloom, B. S. (ed.)
(1956) Taxonomy of Educational
Objectives. Handbook II: Affective
Domain,
New York: D. McKay &
Co. Inc.
Bourdieu, P. and Wacquant, L.J.D.(1992) An
Invitation to Reflexive Sociology, Cambridge: Polity Press
Bourdieu, P. (1977) Outline
of a Theory of Practice, Cambridge: Cambridge University Press
Bradley, J. (1987) 'Is
Clinical Progress Assessment Fair?' Senior Nurse, Nov 5th, 7, 5, 14-16
Brown, S. (1990) 'Assessment: A
Changing Practice', in: Horton, T. (ed)
Assessment Debates, London/Sydney:
Hodder and Stoughton
Burke, J. and Jessup, G.
(1990) 'Assessment in NVQs: Disentangling Validity from Reliability in NVQs',
in Horton, T. (ed) Assessment Debates, London, Sydney: Hodder and Stoughton
Burns, S. (1992)
'Grading Practice', Nursing
Times,
Jan 1st, 88, 1, 40-42
Burt, C. Sir (1947) Mental
and Scholastic Tests, London: Staples Press
Carr, W. and Kemmis. S.
(1986) Becoming Critical; Knowing Through Action Research, Geelong: Deakin
University Press
Cicourel, A. V.
(1964) Method and Measurement
in Sociology, New York: Free Press;
London: Collier-Macmillan
Clandinin, D.J. and Connelly, F.M. 'Narrative and Story in Practice and Research' in D. Schon (ed) The Reflective Turn: Case Studies In and On Educational Practice, New York: Teachers College Press
Coates, V.E. and Chambers, M. (1992) 'Evaluation of Tools to Assess Clinical Competence', Nurse Education Today, 12, 122-129
Cohen, M. R. (1944) A
Preface to Logic, New York: Dover
publications
Cole, N. (1990) 'Conceptions of Educational Achievement', Educational Researcher, April, 2-7
Cox, H., Hickson, P. and Taylor, B. (1991) 'Exploring Reflection: Knowing and Constructing Practice' in: Gray, G. and Pratt, R. (eds) Towards a Discipline of Nursing, Melbourne, Edinburgh, London.
Cumberlege, J. (1987)
'Cumberlege: One Year On', Community Outlook, June, 24-25
Darbyshire, P., Stewart,
B., Jamieson, L. and Tongue, C. (1990) 'New Domains in Nursing', Nursing
Times,
July 4th, 86, 27, 73-75
Davies, R. M. and
Atkinson, P. (1991) 'Students of Midwifery: 'Doing the Obs' and Other Coping
Strategies', Midwifery, 7, 113-121
Dawson, K. P. (1992)
'Attitude and Assessment in Nurse Education', Journal of Advanced Nursing, 17, 4, 473-479
Department of Education and Science (1987) Higher Education: Meeting the Challenge, London: HMSO
Department of Education and Science (1991) Higher Education: A New Framework, London: HMSO
Department of Health (1989) A Strategy for Nursing, London: DoH
Department of Health (1989a) Working for Patients, London: HMSO
Department of Health (1989b) A Strategy for Nursing: A Report of the Steering Committee, Department of Health Nursing Division, London: DoH
Department of Health (1990) National Health Service and Community Care Act 1990, London: HMSO
Department of Health (1991) The Patient's Charter, London: HMSO
Department of
Health (1993) A Vision for the
Future, NHSME London
Department of Health and
Social Security (1983) National Health Service Management Inquiry, London: DHSS
Department of Health and
Social Security (1986) Neighbourhood Nursing: A Focus for Care, The Report of the
Community Nursing Review, Chair: J Cumberlege, London: HMSO
Department of Health and
Social Security (1986) Mix & Match: A Review of Nursing Skill Mix, London: DHSS
Douglas, J. D. (1970) The Social Meanings of Suicide, Princeton, N.J.: Princeton University Press
Dreyfus, S. E. (1981) 'Formal Models vs. Human Situational Understanding: Inherent Limitations on the Modelling of Business Expertise', US Air Force Office of Scientific Research, Contract No. F49620-79-C-0063, mimeo, University of California, Berkeley
Dunn, D. (1986)
'Assessing the Development of Clinical Nursing Skills', Nurse Education
Today,
6, 28-35
Dunn, D. (1984) 'Have you Done My Report Please,
Sister?', Nursing Times, April 4th, 56-60
Dunn, W. R., Hamilton,
D. D. and Harden, R. M. (1985)
'Techniques of Identifying Competencies Needed of Doctors', Medical Teacher, 7, (1), 15-25
Eisner, E. W. (1993) 'Reshaping
Assessment in Education: Some
Criteria in Search of Practice', Journal
of Curriculum Studies, 25, 3, 219-233
Elbaz,
F. (1983) Teacher Thinking: A
Study of Practical Knowledge, London: Croom Helm
Elkan, R. and Robinson,
J. (1991) The Implementation of Project 2000 in a District Health Authority:
the effect on the nursing service, University of Nottingham
Elliott, J. (1991) Action Research for Educational
Change,
Milton Keynes: Open University Press
English National Board
(1984) Dear Colleague Letter 21st Nov, Examinations (Basic general nurse
training/education), London: ENB
English National Board
(1986) Circular 1986 (16) ERBD (March), Continuing Assessment and the
Present Position Regarding the Final Written Examination, London: ENB
English National Board
(1990) Regulations and Guidelines for the Approval of Institutions and
Courses,
London: ENB
English National Board
(1993) Regulations and Guidelines for the Approval of Institutions and
Courses,
London: ENB
English National Board
(1990) Framework for Continuing
Professional Education and Training for Nurses Midwives and Health Visitors, London: ENB
Eraut, M. (1985) 'Knowledge Creation and Knowledge Use
in Professional Contexts', Studies
in Higher Education, 10, 2: 117-133
Eraut, M. (1990)
'Identifying the Knowledge which Underpins Performance', in: Black and Wolf (eds) Knowledge and Competence: Current Issues in Training and
Education, Employment Department
Group/COIC
Etzioni, A. (1969) The
Semi-professions and their Organisation: Teachers, Nurses and Social Workers, New York: Free Press
Evans, D. (1990)
Assessing Students' Competence to Practice in College and Practice Agency, London: Central Council
for Education and Training in Social Work
Evans, N. (1987) Assessing
Experiential Learning. A review of Progress and Practice, Longman for Further
Education Unit Productions
Feyerabend, P. (1975) Against
Method,
London: NLB
Freire, P. (1973) Education:
The Practice of Freedom, London: Writers and Readers Publishing Cooperative.
Fullerton, J. T.,
Greener, D. L., and Gross, L. J. (1992) 'Scoring and Setting Pass/Fail
Standards for an Essay Certification Examination in Nurse-Midwifery', Midwifery, 8, 31-39
Gagné, R. M. (1968) The Technology of Teaching, New York: Appleton-Century-Crofts
Gagné, R. M. (1975) Essentials of Learning for Instruction, Hinsdale, Ill.: Dryden Press
Gagné, R. and Briggs, L. (1974) Principles of Instructional Design, New York: Holt, Rinehart & Winston
Gallagher, P. (1983)
'Why Do We Fail Them?', Nursing Mirror, Feb 2nd, 16
Gibbs, I. et al (1991)
'Skill Mix in Nursing: a Selective Review of the Literature' Journal of
Advanced Nursing 16: 242-249
Giddens, A. (1984) The
Constitution of Society. Outline
of the Theory of Structuration, Cambridge: Polity Press
Girot, E. A. (1993) 'Assessment of Competence in Clinical Practice: a Review of the Literature', Nurse Education Today, 13, 83-90
Girot, E. A. (1993)
'Assessment of Competence in Clinical Practice: a Phenomenological Approach', Journal
of Advanced Nursing, 18, 114-119
Glaser, B. G. and
Strauss, A. L. (1967) The Discovery of Grounded Theory. Strategies for
Qualitative Research, Aldine: Atherton
Glaser, R. (1963) 'Instructional Technology and the
Measurement of Learning Outcomes: Some Questions', American Psychologist, 18, 519-21
Glass, G. (1978)
'Standards and Criteria', Journal of Educational Measurement, 21, 215-238
Gray, J. and Forsstrom, S. (1991) 'Generating Theory from Practice: the Reflective Technique' in G. Gray and R. Pratt (eds) Towards a Discipline of Nursing, London, Edinburgh, Melbourne: Churchill Livingstone
Greenall, A. J. (1982) (Untitled
Seminar Paper on Storytelling), School of Management, University of Bath
Grussing, P. G. (1984) 'Education and
Practice: is Competency-Based
Education closing the gap?' American Journal of Pharmaceutical Education, 48: 2, 117-124
Hargreaves, D. H. (1967)
Social Relations in a Secondary School, London: Routledge and Kegan Paul
Harvey, D. (1989) The condition of Modernity: An Enquiry
into the Origins of Cultural Change, Cambridge, MA, Oxford: Blackwell
Hayne, Y. (1992) 'The
Current Status and Future Significance of Nursing as a Discipline', Journal
of Advanced Nursing 17, 104-107
Health Advisory Service
(1982) The Rising Tide: Developing Services for Mental Illness in Old Age, HAS
Hepworth, S. (1989)
'Professional Judgement and Nurse Education', Nurse Education Today, 9, 408-12
Horton, T. (ed) (1990) Assessment
Debates,
London/Sydney: Hodder and
Stoughton
Jackson, P. W. (1968) Life
in Classrooms, New York: Holt, Rinehart
& Winston
Kelly, G. (1963) A
Theory of Personality: the Psychology of Personal Constructs, New York, London:
W.W.Norton
Kress, G. (1988) Linguistic
Processes in Sociocultural Practice, Geelong/ Oxford: Deakin University
Press and Oxford University Press.
Kuhn, T. (1970) The Structure of Scientific
Revolutions, (2nd edition), Vols. I and II. Foundations of the Unity of Science, Chicago: University of Chicago Press
Kuipers, B. and
Kassirer, J. P. (1984) 'Causal
Reasoning in Medicine: Analysis of
a protocol', Cognitive Science, 8, 363-385
Lacey, C. (1971) Hightown
Grammar,
Manchester: Manchester University Press
Lane, R. E. (1991) The Market Experience, Cambridge: Cambridge University Press
Lankshear, A. (1990)
'Failure to Fail: the Teacher's Dilemma', Nursing Standard, Feb 7th, 4, 20, 35-37
Lazarus, M. (1981) Goodbye
to Excellence. A Critical Look at
Minimum Competency Testing, Boulder,
Colorado: Westview Press
MacDonald, B. (1970)
'Briefing Decision-makers: The Evaluation of the Humanities Curriculum Project', in: Hamingson (ed) (1973) Towards Judgement, Occasional Publication No.
1. CARE, University of East
Anglia, Norwich
MacDonald, B. (1987) 'Evaluation and the Control of
Education', in: Murphy, R. and Torrance, H. (eds) Evaluating Education: Issues and Methods, London: Harper and Row
( in association with the Open University)
MacIntosh, H. G. and
Hale, D. E. (1976) Assessment and the Secondary School Teacher, London: Routledge and
Kegan Paul
Manning, P. K. (1987) Semiotics
and Fieldwork, Qualitative Research Methods, 7, Beverly Hills, London: Sage
Marris, P. (1984) Loss and Change, Routledge & Kegan
Paul, London
Maruyama, M. (1979) 'Mindscapes', World Future Society Bulletin, 13, 5, 13-23
May, R. (1967) Psychology and the Human Dilemma, Princeton, NJ: Van Nostrand
McClelland, D. C. (1973) 'Testing for Competence Rather Than for "Intelligence"', American Psychologist, 28, 1: 1-14
Medley, D. (1984)
'Teacher Competency Testing and the Teacher Educator', in: Katz, L.G. and Raths, J. D. (eds), Advances in teacher education, vol 1, New Jersey, Ablex
Meerabeau, L. (1992)
'Tacit Nursing Knowledge: an
Untapped Resource or a Methodological Headache?', Journal of Advanced
Nursing,
17,
108-112
Melia, K. (1987) Learning
and Working, London: Tavistock
Merton, R. K. (1957) Social
Theory and Social Structure, New York: Free Press
Miles, B. (1988)
'Competence and the General Nurse', Senior Nurse, 8, 1, 27- 30
Miller, C. et al (1988) Credit
where credit's due. The report of the accreditation of work-based learning
project,
SCOTVEC
Needham, R. (1983) Against
the Tranquility of Axioms, Berkeley, London: University of California Press
Norris, N. and MacLure,
M. (1991) Knowledge Issues and Implications for the Standards Programme at
Professional Levels of Competence,
Department of Employment:
CARE, Norwich
Norris, N. (1991) 'The
Trouble with Competence', Cambridge Journal of Education, 21, 3, 331-41
Novak, J. D. and Gowin,
D. B. (1984) Learning How to Learn, Cambridge: Cambridge University Press
NSW Project (1990) Studies in Professional Training: An Evaluation of Police Recruit Education in New South Wales (Australia), No. 2, CARE, University of East Anglia
O'Neill, E., Morrison,
H. and McEwen, A. (1993) Professional Socialisation and Nurse
Education: An Evaluation, The National Board for Nursing,
Midwifery and Health Visiting for Northern Ireland; The Queen's University of
Belfast
Phillips, T. (1993a) 'The Dead Spot in Our Struggle for Meaning: Learning and Understanding through Small Group Talk', in Wray, D. (ed) Primary English: the State of the Art, London: Heinemann
Phillips, T. (1993b) 'The Professions Working Together: Learning To Collaborate' in Lyndsay, B. (ed) A Reader in Childcare Nursing, London: Baillière Tindall
Phillips, T. (in press)
'Exploratory Talk: A Critical Learning Discourse' to appear in Scrimshaw, P.
(ed) Spoken Language and New
Technology,
London: Longman
Ragin, C. C. and Becker,
H. S. (1992) What is a Case? Exploring the Foundations of Social Inquiry, Cambridge: Cambridge University Press
Reason, P. and Hawkins,
P. (1988) 'Storytelling as Inquiry', in Reason, P. (ed) Human Inquiry in
Action: Developments in New Paradigm Research, London: Sage
Robinson, J. E. (1991)
'Project 2000: the Role of Resistance in the Process of Professional Growth', Journal
of Advanced Nursing, 16, 820-824
Rowan, J. (1981) 'A
Dialectical Paradigm for Research', in Reason, P. and Rowan, J. (eds) Human
Inquiry: a Sourcebook of New Paradigm Research, Chichester, Brisbane, Toronto: John
Wiley and Sons
Runciman, P. (1990) Competence-Based
Education and the Assessment and Accreditation of Project 2000 Programmes of
Nurse Education - a Literature Review, Edinburgh: The National Board for
Nursing, Midwifery and Health Visiting for Scotland
Saunders, B. (1988)
'Professional Skills Assessment in Medicine: A Description and Critique', paper presented to the annual
meeting of the BERA, 31 Aug-3 Sept. University of East Anglia, Norwich
Sayer, A. (1993) Method
in Social Science. A Realist
Approach,
London, New York: Routledge
Schon, D. (1983) The
Reflective Practitioner, London: Temple Smith
Schon, D. (1987) Educating
the Reflective Practitioner, London: Jossey-Bass
Schostak, J. F. (1983) Maladjusted Schooling: Deviance,
Social Control and Individuality in Secondary Schooling, London, Philadelphia:
Falmer
Schostak, J. F. (1986) Schooling the Violent
Imagination, London, New York: Routledge and Kegan Paul.
Schostak, J. F. (ed.) (1988) Breaking into the
Curriculum: the Impact of Information Technology on Schooling, London, New York: Methuen
Schostak, J. F. (1993) Dirty Marks: The Education of
Self, Media and Popular Culture, London,
Boulder: Pluto
Schutz, A. (1976) The
Phenomenology of the Social World, tr. G. Walsh and F. Lehnert, London: Heinemann
Scriven, M. (1967) The Methodology of Evaluation, AERA Monograph Series,
Perspectives on Curriculum Evaluation, Chicago: Rand McNally
Shor, I. (1980) Critical
Teaching and Everyday Life, Boston: South End Press
Skinner, B. F. (1953) Science
and Human Behaviour, New York:
Macmillan
Spencer, A. (1985) 'A
Move in the Right Direction?', Nursing Times, April 10th, 43-48
Steinaker, N. W. and
Bell, M. R. (1979) The Experiential Taxonomy, London: Academic Press
Stenhouse, L. (1975) An
Introduction to Curriculum Research and Development, London: Heinemann
Strauss, A. L. (1987) Qualitative Analysis for Social
Scientists, Cambridge: Cambridge
University Press
Taskforce (1993) Report
of the Taskforce on the Strategy for Research in Nursing, Midwifery and Health
Visiting,
Department of Health, April
Taylor, F. (1947) Scientific
Management: Comprising Shop Management, the Principles of Scientific Management
[and] Testimony Before the Special House Committee; with a foreword by Harlow S.
Person. London: Harper Row
Thorndike, E. L. (1910)
'The Contribution of Psychology to Education', Journal of Educational Psychology, 1, 5, 5-12
Torrance, H. (1986)
'Expanding School-based Assessment: Issues, Problems and Future Possibilities',
Research Papers in Education, 1,
1, 48-59
Torrance, H. (1989) 'Ethics
and Politics in the Study of Assessment', in Burgess, R. (ed) The Ethics of
Educational Research, Barcombe:
Falmer Press
Torrance, H. (1989)
'Theory, Practice and Politics in the Development of Assessment', Cambridge
Journal of Education, 19, 2, 183-191
Tyne, A. & O'Brien,
J. (1981) The Principle of Normalisation: A Foundation for Effective
Services,
London: CMH
UKCC (1986) Project
2000: A New Preparation for Practice, London: UKCC
UKCC (1990) The
Report of the Post-registration Education and Practice Project, London: UKCC
UKCC (1991) Report
on Proposals for the Future of Community Education and Practice, London: UKCC
UKCC (1992) Code of Professional Conduct, London: UKCC
Waller, W. (1932) The
Sociology of Teaching, New York: John Wiley/London: Chapman and Hall
Watson, J. B.
(1931) Behaviourism, London: Kegan Paul,
Trench, Trubner
West, S. (1993) Values
in Management, London: Kogan Page
White Paper (1989) Working for Patients, London: HMSO
Whittington, D. and
Boore, J. (1988) 'Competence in Nursing', in: Ellis, R. (ed) Professional Competence and Quality
Assurance in the Caring Professions, London, New York:
Croom Helm
Willis, P. (1977) Learning to Labour, Farnborough: Saxon
House
Woods, P. (1979)
'Negotiating the Demands of Schoolwork', Journal of Curriculum Studies, 10, 4, 309-27
Wright, S. (1992) 'The
Case for Confidentiality', Nursing Standard, 6,19, 52-53
Yerkes, R. (1929) Army
Mental Tests, New York: Henry Holt
[1] See also appendix B
[2] See appendix B
[3] There is now a
substantial body of literature exploring the issues involved in the concepts of
latent functions, unintended functions and the hidden curriculum. The lines of reasoning can be traced
through Waller 1932, Merton 1957, Jackson 1968, Hargreaves
1967, Lacey 1970, Willis 1977, Ball 1980, Schostak 1983, 1986. The action responses to these forms of
analysis led to a focus upon the processes of teaching and learning from which
have emerged the concepts of the reflective practitioner (Schon 1983, 1987),
critical teaching (Freire 1973, Shor 1980), democratic education and evaluation
(Stenhouse 1975, MacDonald 1987, Elliot 1991) and of education as a perspective
capable of critiquing all social forms and drawing out alternative forms of
curricula for social expression and enaction (Schostak 1988, 1991, 1993).
[4]
In a
review of the literature Runciman (1990) states that 'the literature on
competence is confusing and contradictory'. This is true also of definitions of competence
drawn from interview data discussed in the next chapter.
[5] Developed in the 1950s, it employs
panels of experts to achieve consensus on issues (see Miller et al 1988).
[6] Experts during a day's workshop brainstorm key tasks and categories of roles and personal qualities required by role holders.
[7]
See
in particular the discussions in chapters 4 and 6
[8] Taylor (1856-1915) presented this report to the House of
Representatives in America, but it was first published in 1947.
[9] Runciman (1990) quotes the AORN Journal
(1986, 43 (1), 235): "A competency is the same as a behavioural
objective. It describes the
performance expected and the level of acceptability or adequacy and the
condition under which it should be performed."
[10] 'One way to determine whether a
competency or an enabling skill is being measured is to ask the question: To what end and under what
circumstances is the behaviour being done? If the goal and context are not stated, then an enabling
skill is being measured rather than a competency.' (Benner: 305)
[11] Although Benner instances these as
'skills' it may be that some other terminology is preferable. Although there may be some identifiable
behaviours or procedures associated with those considered to be highly
empathetic, the learning of these does not guarantee empathy. The empathetic therapist is an easy
butt for comedy: the pose, the form of questioning and the enabling strategies
can, with little modification, be utilised not only for comic effect but also
to acutely embarrassing effect.
[12] Philosophically, no exhaustive account
can be given of a particular social task.
Each sub-element divides into further sub-elements. The boundaries between stages in a
process are always subject to dispute.
Hence the possibility of drawing unambiguous distinctions is itself
highly disputable.
[13] To explain: 'Graded qualitative distinctions can be elaborated and
refined only as nurses compare their judgements in actual patient care
situations. For example, intensive
care nursery nurses compare their assessments of muscle tone so that they can
come to consistent appraisals of tonicity.' (p. 5)
[14] The development of this
kind of approach will be discussed in relationship to the data in chapters 4,
6, 7, 8, 9, 10 of the report.
[15] See
Appendix C 1 for the full set of competencies.
[16] Student midwives are
required to demonstrate a different range of competencies: See Appendix C 1 for
Rule 33 and Midwives Directives.
[17] See chapter one
[18] See Chapters 9 and 10, which discuss the complex relationship between a range of central discourses.
[19] See Chapter 8 on the relationship between the situation-specific and the wider context.
[20] See Chapter 1 and appendix B
[21] See Chapter 7
[22] See appendix B.
[23] See Chapter 10
[25] Further details on the
practical tests can be found in "Regulations and guidelines for the
approval of institutions and courses 1990." (Section 5, "Regulations
and guidelines for the conduct of existing courses leading to parts 1-6 and 8 of
the professional register.")
[26] See chapter 6 and appendix E
[27] This has now been
superseded by an updated version: see ENB, 1993, section 5.
[28] In Chapters 4, 5 and 6 we report typical interviewee comments about the strategies employed in their particular environment for carrying out assessment, indicate briefly a range of views on the merits of these strategies, and record the comments related specifically to the issue of 'ownership' of continuous assessment agendas.
[29] Council for National Academic Awards.
[30] As
the following comment from an educational manager indicates, there are often
far-reaching implications for individuals and institutions when they take part
in innovations. In addition to the disturbance caused by new roles and
responsibilities, financial implications were recognised:
Well it's been horrific! (laughs) Well I mean no, that's probably overstating it. Very cleverly done actually, the English National Board have off loaded responsibility...well the whole process to colleges and I mean that has educational and financial implications as well. Again another thing that I don't think the college has taken on board really is the cost of actually managing your own assessment.
[31]
There are many definitions of reliability and validity. Typically reliability refers to the
consistency of the assessment tool in use. That is, items and procedures should be interpreted
consistently in any context of use and by any user. Validity typically refers to the extent to which the tool actually
assesses what it claims to assess.
That is, if it claims to assess an aspect of professional judgement,
then those individuals the tool identifies as possessing that aspect should be
recognised as possessing it by their peers. If not, then the tool is invalid.
[32]
This in itself raises the question of whether a 'level of awareness'
can be reduced to observable behavioural units.
[33]
In the discussion of methodology (Appendix B), two kinds of classification
are identified: monothetic and
polythetic. While the behaviourist
format discussed here can be seen to be essentially monothetic, the latter is
not polythetic. Rather, this
latter process of classification is essentially confused by the range of
possible subjective interpretations of the category that are involved.
[34] See chapter 6.
[35] See chapter 5 for an exploration of
clinical liaison and professional development/support of clinical staff by
educators.
[36]
One
interesting observation can be made in relation to all the above methods of
assessment. Although most have the potential to include student practical
experience, assessment judgements were almost exclusively the domain of
teachers in cases of summative assessment and occasionally included student
peers where assessments were used formatively. We found no examples of practitioner involvement in
assessment of theory, even where assignments drew heavily on students' placement
experience.
[37] At the time of writing,
however, this is being affected in some instances by a fall in student
populations.
[38] The practical component of the course
requires the would-be assessor to conduct a specified number of student
assessments under the supervision of a qualified assessor.
[39] No assessors' forums were operating for practitioners involved with one course we studied.
[40] For more detailed discussion of the role and related issues, see ch ... sections ....
[41] In midwifery, the value
of working with a range of midwives to witness different approaches was
mentioned by students, clinicians and educators alike. However this did not negate the need
for continuity for learning and assessing.
[42] It was
noted that total assessment demands within many placement areas had
increased. Many nursing staff were
involved in assessing other health care personnel such as Health Care
Assistants (via The National Vocational Qualification system) and paramedics.
[43] Alcohol skin wipe.
[44] Measurement of blood glucose level.
[45] See chapter 5.
[46] See also chapter 4, on assessment texts and
the construction of evidence bases for assessment of practice.
[47] See appendix D.
[48] See chapter 4.
[49] See chapter 5, on clinical liaison.
[50] See chapter 3.
[51] The
term 'level' is used to draw a distinction between qualitatively different
forms of knowing. It is employed because it mirrors the popular perception that
one form of knowledge is a pre-requisite to another. In some ways, however, the
notion of 'levels' is an unfortunate one, because - as we have indicated below
- there is no obvious reason why reflection, etc. should not take place in the
early stages of a course. The dilemma in offering a structuralist framework for
an essentially gradualist process is one that occurs in a number of accounts of
learning development in other areas (e.g. Bereiter and Scardamalia, 1978). We
use the term level with caution, therefore.
[52] The term 'order' is less
problematic than the term 'level' used to describe knowledge. Nevertheless, we
use it with the same reservations (see the previous note).
[53] The concept of levels is a problematic one in an analytic account which attempts to describe both structural and conceptual development from a perspective which gives full recognition to the 'interpreting subject'. We use the term here because it carries a notion of progress and is therefore helpful in the construction of an assessment strategy whose main purpose is to evaluate progress. Strictly speaking, however, it would be better to describe the movement from one way of conceptualising the world to another way as a movement from 'first encounter' to 'second encounter' and so on, emphasising the cyclical process of action, analysis, reflection, and action.
[54] But see the comments later about story as a way of expressing and shaping experience. Stories do not necessarily make the storyteller's thinking explicit, but they nevertheless reveal it.
[55] The term 'critical' is almost invariably used in nursing and midwifery circles to mean something negative and carping. Its original meaning, however, is 'providing a critique'. Critique is based on careful analysis and reflection that assumes the need to interrogate (i.e. ask questions about) the taken-for-granted. The word 'critique(al)' has been coined by Phillips (Phillips, 1993) to recover this meaning, and to emphasise the positive and developmental role of critical dialogue.
[56] See Chapter 10 for further discussion of the discourses about the relationship between Theory, Knowledge, and Action.
[57] At the same time, they constitute an evidence base that can be developed into intensively researched Case Studies.
[58] This parallels the move from Level 1 to Level 2 suggested in the diagram on p. 150.
[59] See Chapter 10, p.170
[60] See Recommendations.
[61] See the Introduction to this study.
[62] See chapter 5
[63] See Methodology Appendix
[64] See Chapters 1, 5, and 6.
[65] See Recommendations at end of this report
[66] The
starting point for theoretical models
of competition is to be found in the ideal type of perfect competition,
which is built upon the following assumptions: 1) there is perfect knowledge throughout the market of all
changes and information relating to market conditions; 2) there are many firms
such that no one firm has a size advantage over any other firm; 3) there are
many consumers such that no single consumer has a wealth advantage over any
other; 4) the product or service is homogeneous; 5) firms maximise profits as
consumers maximise utility. The
consumer is said to be sovereign.
Only effective demand is considered. That is, people must not only be willing to demand
something but able to back this up with money. Furthermore, money itself is assumed to be neutral and an
adequate measure of the strength of people's wants. While in the more complex versions of market behaviour these
assumptions are modified to account for, for example, monopoly practices, the ideal of competition guides policy
making with regard to the formation of health markets. Lane (1991) makes a powerful and increasingly
influential case that the model is a bad guide precisely because it is highly misleading with
regard to actual market experiences.
[67] See in particular the discussion in
chapter 2.
[68] O'Neill, E., Morrison, H. and McEwen,
A. (1993) Professional Socialisation and Nurse Education: An Evaluation, School of Education, The Queen's University
Belfast; and, The National Board for Nursing, Midwifery and Health Visiting for
Northern Ireland.
[69] This diagram should be read in
conjunction with the model offered on p. 150. Level 1 in that diagram can be read as the taken for granted
level of activity (see also chapter 6).
When critical reflection is brought to bear, a reconceptualisation takes
place at level 2 which provides the new theoretical frame for action. This new level becomes the taken for
granted level of operation that then has to be subjected to further critical
reflection at a later date to produce the next level of reconceptualisation. And so on.
[70] At this level, a 'knowing subject' is
not necessary to the reproduction of information, or the execution of
routinised skills or procedures. A
computer based 'expert system' or programmable robot would do as well - an
aspiration motivating contemporary theories of computer science.
[71] Report of the Taskforce on the Strategy for Research in Nursing, Midwifery and Health Visiting, Department of Health, April 1993
[72] The definition of competence here is one which perceives it as evidenced in the knowledge, skills, attitudes, and understanding demonstrated in clinical practice and in the construction of an evidence base, the informed analysis of that evidence base, and the critical reflection upon the issues raised by this analysis, as demonstrated in writing and dialogue. Competence is, however, as the research shows, an evolving concept that needs to be continually evaluated. The working definition we provide here is, like all others, open to development through testing in practice. Recommendations 1.1 - 1.11 deal specifically with competence in the clinical location.
[73] Recommendations 1.1 - 1.11 deal specifically with competence demonstrated in the clinical location. Other recommendations consider competence demonstrated elsewhere.
[74] Examples of such principles include: (a) a
principle about the distribution of responsibility in collecting evidence for
assessment, (b) a principle about which aspects of practice and learning are to
be prioritised, and (c) a principle about how decisions about priorities are to be
decided.
[75] In most cases ENB 997/998, whilst not ignoring assessment, concentrates primarily on teaching. In addition, assessment is looked at more in terms of outcome measurement than in terms of the learning process and the student's level of understanding. Little, if any, training is provided in the use of dialogue and other mechanisms for assessing the latter.
[76] The concept of 'level' is not entirely unproblematic. It is probable that agreement about what constitutes a 'level' will be achieved slowly through the dialogue about evidence, analysis, and reflection recommended elsewhere in this set of proposals.
[77] This is unlikely to be achieved in a placement that is shorter than four weeks. Several of the institutions visited were already successfully operating with clinical/community placements of four weeks or more. Several of the others were reviewing their placements in the light of difficulties caused by the brevity of some of them.
[78] In a few institutions, despite off-duty arrangements and the fact that there was often only a single assessor available, clinical staff worked hard to ensure that they held regular discussions with students about particular strengths and weaknesses. They were often hampered in doing this at an appropriate level by (a) the length of placement, (b) the quality of documentary evidence, and (c) the training and support they themselves had received.
[79] There are a number of additional reasons why assessors feel unable to take the decision to fail students, including the feeling many of them have that they are ill-prepared. Many of the recommendations suggested in other sections will help to redress these other causes of failure to fail.
[80] Notwithstanding the considerable difficulty likely to be experienced in finding an additional appropriate person in many clinical areas, it would be sensible wherever possible to involve a third, independent assessor in the re-assessment. The research shows that in clinical areas where there are several appropriately qualified assessors there is considerable advantage to the student in being seen by as many of them as possible.
[81] See Chapter 6 of Report
[82] See Chapter 8 of Report
[83] See Evans D.1990 p. 51 for an outline of the advantages of a portfolio of evidence for the assessment of the competence of social work students.
[84] The term 'Case Study' is used here to refer to the study of an instance (using observation, collection of evidence, analysis, reflection, and construction of an action plan for further exploration) which is commonly adopted as a methodology for Action Research. It does not refer to the taking of a 'case' in the more usual nursing/midwifery sense.
[85] An interesting exception was the community studies undertaken in several institutions. These sometimes involved a small piece of research which necessitated observing, analysing, and commenting on the practice of other (i.e. non-nursing/midwifery) workers in the community. The value given to such studies by some institutions contrasts strongly with that given in others, where placements in the non-healthcare sector were perceived as peripheral.
[86] Another advantage is that it may involve the student in 'clearing the data', i.e. checking their account with that of participating practitioners. This may not necessarily lead to an agreed account, but it will help the student to understand the need for recording any differences of perception.
[87] Many of the recommendations for assessment in the clinical placement area apply equally to assessment in the classroom. Unless it is evident that they should not be applied to coursework assessment, it should be assumed that the suggestions apply within the education institution too.
[88] cf. Report of the Taskforce on the Strategy for Research in Nursing, Midwifery, and Health Visiting. 1993.
[89] This was, of course, the pattern with several of the Project 2000 Demonstration Districts. In that case, because the research was not completed before the innovation was adopted on a wider scale, many courses that were established subsequently were themselves, in all but name, pilot studies. This was most true with respect to assessment.
[90] Includes a selection of relevant literature consulted.