A few weeks back the Times Higher published an article on student survey fatigue. Students fill in so many surveys, including the National Student Survey, module surveys and institution-wide surveys (coincidentally, some of the questions on the latter surveys are similar or identical to those on the NSS). I suggested on Twitter that the ineffectiveness of current surveys means that we want to do more surveys to fix the shortcomings of the existing surveys in order to give a ‘truer’ picture. Much of the reason that we are considering a national survey of language students stems from the shortcomings of the NSS outlined in our recent report. Should this new national survey go ahead, I’m sure that it too will have its own shortcomings.
Legg and Wilson’s recent paper in Assessment and Evaluation in Higher Education on the reliability of www.ratemyprofessors.com vis-à-vis in-course evaluations is interesting in its own right (their title "RateMyProfessors.com offers biased evaluations" deserves an award for clarity). But at least one thing can be said in defence of www.ratemyprofessors.com: it is unambiguous about who is the target of the evaluation, namely the teacher. With most other surveys it is unclear whether the students are being asked about the course, the teacher, the content, the university or the programme of study. The students don't know while they are answering the questions, and we don't know while we are analysing their answers.
HESA has published an interesting document on the definition of the word ‘course’. The report outlines the differing definitions of ‘course’ used by the Higher Education Statistics Agency (HESA), the Universities and Colleges Admissions Service (UCAS), the Quality Assurance Agency (QAA) and other players in higher education. As I noted in my previous posts, this poses many difficulties for those of us seeking the answer to the basic question: how many people are studying ‘X’?
Only last week the Times Higher Education ran an article under the headline ‘1 in 4 new undergraduate courses fails to attract any students’. However, the truth here is that courses are particular combinations of modules and subjects, not distinct entities. A university can offer a ‘course’ in Criminology, Accountancy and Canadian Studies simply on the grounds that it is a feasible combination in terms of the timetable. Whether any students choose this ‘course’ is a different question from whether the programmes offered by the Criminology, Accountancy or Canadian Studies departments are viable.
In May this year, it was reported that London Metropolitan University was cutting 400 courses. Whilst it should be acknowledged that departments and jobs have been threatened at the institution, the number of ‘courses’ cut is quite a different issue from the number of subjects on offer and the number of staff there will be to teach them.
Interestingly, the collection of data for the Key Information Set (KIS) could lead to the end of certain offerings – not because they are bad courses, are expensive to run or even have low student numbers – but because of the expense of reporting on each combination. It appears that each ‘course’ will require its own KIS (which will report on employability, salary outcomes etc.). So a BA (Hons) ‘course’ in Criminology, Accountancy and Canadian Studies would require its own KIS, which would be different from those for each of those subjects as a single honours subject or in combination with other subjects. If this is indeed the case, then Malcolm Gillies, quoted in the THE, sums up the situation poignantly:
…the new student information requirements would also have an impact on decisions about which courses to keep. Why keep running courses that don't attract many students, especially now that you will have the cost of producing a Key Information Set for every course you offer? If you have only one student or even zero enrolment, [how will you record employability] for that course in the KIS? Such requirements would force universities to weed those courses out.
Could an unintended consequence of the KIS be the death of joint and combined degrees, or at least the end of ‘non-cognate’ combinations? For those of us working in languages, where joint and combined courses are particularly common, this could become a new threat to the subject.
I have written a new article for the LLAS blog (in a personal capacity).
Some rise by sin and others by virtue fall. William Shakespeare, Measure for Measure
Statistics are everywhere in education. We have the National Student Survey (NSS), the First Destinations Survey, newspaper league tables and the Times World University Rankings, among others. From 2012, universities are required to publish ‘Key Information Sets’ (KIS). The KIS has data from the NSS (the higher the agreement percentages the better), the cost of university accommodation (presumably the lower the better), fees (the lower the better), graduate employment rates (the higher the better), the percentage of assessment by written exams (depends on the student) and the number of ‘contact’ hours (again, depends on the student). In short, if it can be measured, the data is out there. And if it can’t be measured, we’ll find a way to measure it anyway (research impact, anyone?). Add to all this the information that students get from visit days, Facebook, Twitter, online student forums and friends, and the phrase ‘information overload’ comes to mind. In his report Dimensions of Quality, Graham Gibbs warns us about that immeasurable factor, reputation, which can override any real measure of quality. I suspect that all this information only serves to make reputation all the more important.
Read the full article on the LLAS news blog