Proceedings of the AQ 97 Conference
Winchester, 2 December 1997
Evaluation and enhancement of student experience
Brenda Barrett
Middlesex University Business School
[email protected]
Students enter university at a certain stage in their personal
development in order to achieve the exit level necessary to 'graduate' - using
that word in a wide sense - from the programme of their choice. We have to
ensure that the learning experience which we offer them between entry and exit
is appropriate to the journey that they have to make.
Collecting and interpreting informed criticism is one of the most challenging tasks facing those of us who are concerned with the quality of higher education. If we are to improve the quality of the student experience we must know how others, not least the students themselves, perceive that experience. Doing this has always been important, but it is likely to be even more so now that the Dearing Report has appeared.
We are in an era of consumerism and students, like other consumers, are very aware of their rights. They now have their 'charter' expressly spelling out - in the spirit of the times - what they instinctively believe is their entitlement. The Dearing Report has to be seen in that context and the students' own view of their experience has already gained great significance. As students in higher education start to contribute to the cost of their tuition, they are likely to become yet more vocal and to measure their satisfaction in terms of their actual or anticipated exit grades.
This paper does not report a solution to the problem of how to evaluate and enhance student experience appropriately. Instead, it attempts an analysis of the problem and, in the process, mentions some of the issues which have to be addressed and suggests ways of addressing them. The theme of this meeting is collaboration; this paper is therefore intended to provoke debate so that we may clarify our ideas for tackling a problem which we all share.
How can we evaluate student experience? There are two major ways of evaluating the experience of students, which may be applied equally to the totality of their experience in the institution where they are studying, to their experience in relation to the programme on which they are enrolled, or to a single module within their programme. The first method is to ask the students themselves; the second is to evaluate the professionalism of those who are providing the experience.
Student feedback
How do we obtain student feedback? The new universities, brought to maturity in the CNAA tradition, have many years of experience of collecting and using student feedback in their monitoring and review procedures. The traditional ways of collecting student views, and their respective strengths and weaknesses are:
Informal discussion with the class
This enables the lecturer to identify the students' problems in coping with the content and delivery of the syllabus and may resolve many matters of concern to the group. The rapport between teacher and taught is likely to be improved, but it is not self-evident that the standards and coverage intended in the formal syllabus will necessarily be better achieved.
Complaints to the teacher's manager
This can enable the manager to discuss the perceived problem with the teacher concerned. On the face of it this is a very constructive way of dealing with problems, but its success has to depend on the rapport between the manager and the teacher. It is very easy for the manager to do no more than convey to the teacher the impression that the teacher must be at fault, without conducting any enquiry and offering neither support nor guidance.
Staff-student committee meetings
Unfortunately students rarely make agenda items out of what they regard as the good features of their experience. There is a danger that such meetings may become confrontational and distressing to the staff involved.
Questionnaires
This method appears to produce an accurate documented record, but there are a number of problems. As with all questionnaires there are problems of structure. Further, such questionnaires can all too easily give students the opportunity to register dissatisfaction with what they have received from the teacher, while giving little indication of their personal preparation for, and commitment to, the classroom experience. Questionnaires completed on the eve of assessment can also be regarded as an insurance policy for the eventuality of failure. A member of teaching staff who gets a bad rating in the questionnaire can be heartily thanked by students when the pass list is published: but such expressions of thanks are not on the record! There is also a problem of response rate: if only a small proportion of students complete the questionnaires it may well be that the feedback is not representative.
Meetings with review panels
A full statistical analysis of student feedback is rarely available for the period under consideration, but the review panel normally meets with some students. There may be doubts as to whether these students are representative of the cohort; and, if so, whether they are given - and take - the opportunity to provide fair feedback on their experiences.
Assessment results
These are rarely taken into account in the spectrum of student feedback, since the focus tends to be on the students' opinion of the experience rather than on their performance. Yet in some ways assessment results are one of the most important aspects of student feedback. It is, however, too naive to suppose that good or bad assessment outcomes necessarily reflect the quality of the delivery of the programme. Moreover, it is not helpful to a cohort of students to look at their poor results and conclude that they must have had a poor learning experience.
Is student feedback a good gauge of the learning experience? It is suggested that student feedback is not necessarily indicative either of programme objectives being appropriate, or of their having been met. Good feedback may show that the teacher is popular because he or she does not stretch the student. On the other hand, poor feedback may reflect the curriculum or the environment in which the programme is delivered, rather than the attitude or performance of either teacher or students.
Peer evaluation
Professional evaluation can be at institutional, programme or module level. It may occur in any of the following forms (although this list may not be exhaustive):
Accreditation by an external body
This evaluation may be by a professional body, such as the Law Society, by a quasi-governmental body, such as the Quality Assurance Agency, or by an advisory body selected by the institution itself.
Programme validation and review
This again may follow a system developed within the institution or one operated by an external agency. In recent years, since the demise of the CNAA, institutions have largely operated internally devised systems of programme validation and review developed from the CNAA model. At the same time, programme areas have been reviewed externally to HEFCE and HEQC agendas, which have been subject-orientated but have sought to establish national norms.
External examining
The work of students on modules and programmes is subject to consideration by assessment boards, on which external examiners have always been deemed to play a crucial role both in ensuring justice as between individual students in the cohort, and in ensuring that the institution operates to national norms. Whether this system continues to operate satisfactorily is a matter of debate.
Observation of the individual teacher
This may be a part of institutional review by an external agency, or it may be a matter of individual staff appraisal carried out by the employing institution. Monitoring of the individual by the employer has traditionally been fought by staff, on the grounds that the observer is unlikely to have the same specialist expertise as the observed and that the exercise comes close to interference with academic freedom. Recent moves to evaluate performance stressing the manner of delivery, rather than the content, have strengthened the case for such staff appraisal.
Is peer evaluation a good gauge of the learning experience? The focus of such evaluation has traditionally been on how the evaluators themselves rank the situation; some attention may be given to student views, but this is a relatively small component in the exercise. The evaluators tend to prioritize the achievement of exit standards. Where the evaluators are educationalists their criteria may be based on national norms: and these standards may shift over time. Where the evaluators are professionals, who may also be potential employers, their criteria may be the standards in knowledge and skills which are necessary to a practitioner: meeting such requirements may be in conflict with providing a good learning experience.
Comment
One institutional nightmare is to have students who are dissatisfied: a group of dissatisfied students can attract bad publicity; a dissatisfied individual may litigate. Another nightmare is to be deemed unsatisfactory by a professional body. Fears such as these are passed down the structure of an institution to the individual teacher. Even if such extreme trials never materialize, receiving poor feedback is a most demoralizing experience for the teacher, and one rendered worse if a 'blame culture' allows the buck to rest at that level.
What does poor feedback really indicate? Several possibilities need to be considered.
The professions, in the narrower sense of the term, have changing needs which may no longer be - if they ever were - reconcilable with what can be offered in a degree course, given the student intake and the resources now available. Dearing did not attempt to define a degree; his Report charges external examiners and the Quality Assurance Agency with ensuring that universities maintain proper standards, but fails to raise the fundamental question of what a degree should be in the year 2000 and beyond.
Evaluation tends to focus on the student-teacher relationship. In fact, this is only a small part of the student's university experience. Class contact hours have been so reduced and staff-student ratios so increased that there is little personal relationship between individual student and teacher: the teacher is unlikely to be able to link the names on the printed class list to individuals in the sea of faces in the lecture hall. In these circumstances it is not appropriate to attribute an unsatisfactory programme evaluation entirely, or even principally, to the teacher. Any total quality control exercise needs to recognize the importance of the many other factors that shape the student experience.
But it should also be borne in mind that matters of great dissatisfaction may not be matters of great importance: students may deplore the standards of campus catering but not allow this to determine their choice of university.
Universities, like other large organizations, have responded to straitened financial resources by shedding staff. This has very often resulted in the expectation that the staff retained will be less specialized, and will take on a wider range of tasks. Poor feedback may be an indication that staff expertise is not being deployed to best advantage. Holding the attention of a mass audience throughout a lecture period may not demand the same skills as giving a one-to-one tutorial.
Conclusions
No doubt there is much which may be done to improve the quality and reliability of feedback. Part of the solution may be to consider more carefully the purpose of seeking the feedback; and then to ask questions in circumstances conducive to getting these issues fairly addressed. However, a more fundamental problem is to identify appropriate responses to feedback: there should be neither complacency that good feedback is proof that there is no cause for concern about the quality of education being provided nor an automatic assumption that poor feedback indicates poor performance of teaching staff.
Those currently working in higher education have personal perceptions of what constitutes higher education in general and an undergraduate programme in particular, both in terms of curriculum and standards; but these perceptions are based on their own experience as students. A generation ago, undergraduates were only a small percentage of the 18-21 age group: currently the intention is to open higher education to the majority of the population.
A major question for consideration must therefore be the extent to which the product on offer should be modified to suit the market. Dearing suggests that any further expansion of higher education is likely to be at sub-degree level. Will the consumers who now perceive higher education and undergraduate studies as virtually synonymous 'buy into' sub-degree programmes? Should those in higher education re-define the degree so that its curriculum gives satisfaction to the mass market, or should admission to undergraduate programmes be confined to an elite group of high fliers? This will not be determined by individual universities in competition to achieve their recruitment targets.
The matter is complex; perhaps more so than management - concerned with short-term solutions - may be prepared to acknowledge.
© Brenda Barrett 1997
Published by Information Geometers Ltd