Category Archives: assessment

New publication: Assessment as a wicked problem

My article with Juliet Eve has now been published.

Canning, J. and Eve, J. (2020) ‘Assessment in higher education: the anatomy of a wicked problem’. In Masika, R. (ed.) Research Matters: The Pedagogic Research Conference and Articles 2019, pp.51-60. Brighton: University of Brighton. Available at: https://doi.org/10.17033/UOB.9781910172223 (Open access)

How do we select the right 11-year-olds for the right schools? Can assessment be tutor-proof?

A new Prime Minister and a new education policy. Not content with merely continuing the long-established journey towards the privatisation of ‘state’ schools by stealth (i.e. academisation), Theresa May is keen on the idea of reintroducing grammar schools.

This is not a post about whether selection by ability is a good thing or not, though that may come in a future post. Instead I want to ponder how we decide which children will go to the grammar schools and which ones will not.

Selective schools, both state and private, already exist of course. These schools have to have a means to identify which pupils to take and which pupils not to take. The traditional method of selection is a one-off 11-plus exam. Proponents of grammar schools argue, publicly at least, that grammar schools offer a better chance of academic achievement for bright children from poorer/ disadvantaged backgrounds. However, grammar schools always were, and still are, disproportionately filled with middle-class children. Alongside the many advantages of coming from a better-off/ more advantaged home, there is a whole industry around tutoring/ training to pass the grammar school entry exam. My eldest son is now in year 6 and if we lived in a grammar school area I would now be throwing the proverbial kitchen sink at him to ensure that he got into a grammar school -- after all, bringing back grammar schools for the ‘brightest’ (say 25%) children also means bringing back secondary modern schools for the other 75%. I went to a comprehensive school but know a lot of people who talk about ‘passing’ or ‘failing’ the 11-plus. The so-called ‘failure’ seems to have had a lifelong effect on the post-war generation.

This brings us to the crux of the matter. The twittersphere has been abuzz with the need to ensure that the ‘tests’ (I use the word here to cover any sort of assessment) are tutor-proof. In other words, is it possible to design an assessment that will reliably discriminate between ‘able’ and ‘less able’ without discriminating on the grounds of previous experience, background or performance? Is there a means of assessment to prevent a less able child getting into grammar school because she has had private tutoring to help her pass the test, at the expense of an able child who does not enjoy these advantages? This takes us into the even more dangerous territory of ‘innate’ intelligence that can be separated from previous experience and from teaching and learning. Is there a test that can separate the disadvantaged child who may not have performed well in primary school and may not have ‘engaged in education’, but would benefit from a grammar school education, from a pupil who does well because she works hard and has a supportive (or pushy) home environment?

To be totally reliable such a test would need to be:

1. Impossible to game through studying or the practice of learning and teaching. (Despite what proponents of IQ-type tests say, as with any test, the more you practise the better you get.) The tutor-proof assessment goes against the whole point of assessment, which is to evaluate whether learning has taken place. Assessment might be able to predict future learning achievements, but only on the basis of past learning.

2. Be culture/ value free. The tests would need to ensure that children were not disadvantaged by going to the wrong school, growing up in the wrong type of household or having parents from a different culture to the prevailing local culture. For the most part cultural bias can be taken into account, but not eliminated completely. Cultural bias can take many forms, and assessment can reward knowledge of both so-called ‘high’ and ‘low’ culture. The belief that cultural bias can be eliminated completely is more dangerous than cultural bias itself.

3. Not rely on luck. There is a subtext in the grammar schools debate that grammar schools are/ will be good schools and other schools not so good. If, in attempting to address the above problems, we end up with an assessment which is only slightly more reliable than a coin toss, a dice throw or a game of snakes and ladders, then the whole point of selective schools is undermined.

4. Be transparent and fair. If those taking the test do not know how marks are allocated or exactly what is being assessed, then the assessment is neither transparent nor fair. Yet once teachers/ tutors/ parents know what kind of questions are asked on a test, they can learn how to do better on the exam, and this leads to teaching to the test.

I’ll write in more detail my thoughts about selection at age 11 in a later post, though you’ve probably guessed I’m not very keen on the idea.

Book review: 53 interesting ways to assess your students, 3rd Edition.

Book review1: Victoria Burns (ed.) (2015) 53 interesting ways to assess your students. 3rd Edition.2 Newmarket: The Professional and Higher Partnership. £19.81 (RRP). ISBN 978-1-907076-52-7

I wouldn't usually start a book review with a personal point of context, but when the first edition of this book by Gibbs, Habeshaw and Habeshaw was published in 1986 I was still in primary school. While many early 21st-century books look decidedly dated, the '53 ways' series is sufficiently enduring that 30-year-old copies of the various '53 ways' books remain on the shelves of our Centre for Learning and Teaching library and are still consulted by early career lecturers taking the PGCert in Learning and Teaching in Higher Education course.

Each '53 ways' book consists of 53 'ideas' of 2-3 pages each. For example, in 53 interesting ways to assess your students way 1 is actually an introduction to choosing assessment methods, way 2 is 'the standard essay', way 20 is 'writing for the Internet' and way 36 is the 'seen exam'. These ways are grouped together in chapters; for example, Chapter 1 (ways 2-4) is called 'Essays' and Chapter 9 (ways 33-38) is 'Examinations'. Each assessment way is then described and explained, and the strengths and limitations of each form of assessment are briefly considered. Strictly speaking there are more than 53 assessment ways, as many ways have variations on the theme.

As with other '53 ways' books, this volume can be read from beginning to end, flicked through or dipped in and out of at the reader's pleasure. New and experienced lecturers alike will find treasures here; I thought the 'learning archive' (way 29), whereby students are set the same question in years 1, 2 and 3 and are given the opportunity to reflect on their intellectual development, particularly interesting. Framed in the context of the Equality Act 2010, way 51 on inclusive assessment and equal opportunities is useful for UK readers, but will no doubt be helpful to others too. It was also positive to see a chapter of ways devoted to feedback.

Inevitably, every reader will identify omissions. Many of our PGCert participants write about Objective Structured Clinical Examinations (OSCEs), and although a fairly specialist form of assessment discipline-wise, they are probably worthy of a place in the book and could fit nicely into the chapters on authentic assessment or problem-based assessment. Similarly, field trips/ visits might have been included, but perhaps they didn't sit well in a publication aimed at a general academic audience, or may have made the '53' difficult to achieve. '53 ways' books are not, and do not purport to be, in-depth theorisations of their subjects; when introducing assessment and feedback I like to 'drill deep' into the principles and purposes of assessment with other texts. I see '53 ways' as a good-quality accompaniment to a module on assessment and feedback rather than a core text.

For the benefit of readers familiar with previous editions, the publisher's foreword (p. ix) helpfully outlines the connections between Burns' editorial work and the previous work of Gibbs and his colleagues. A balance has been nicely struck between producing a work which is fit for purpose in the second decade of the 21st century and maintaining the approach and appeal of the earlier editions, which lies in the accessibility, diversity and brevity of the 53 ways. A balance has also been struck between maintaining content from previous editions and introducing new material, the most notable development between the second and third editions being the small matter of the World Wide Web! Not only have new assessment ideas such as 'Writing for the Internet' and 'Designing Multimedia materials' been added, but a substantial amount of the material is new material developed by Burns and her team.

In conclusion I highly recommend that lecturers at any stage of their career take time to look at '53 interesting ways to assess your students'. Although I suspect many of its readers will be academics at the beginning of their careers I particularly hope it will challenge experienced lecturers who have long relied on traditional staples such as unseen exams and set essays to see the rich possibilities of assessment.

Notes:

  1. This review was carried out at the request of the publisher who sent me a review copy of the book.
  2. First edition 1986, Second Edition 1988. This third edition is long overdue!

13 wicked problems in assessing students in higher education

The concept of ‘wicked problems’ is often used to refer to complex problems such as climate change or social inequality. ‘Wicked’ does not mean ‘evil’ here, but is set in contrast to ‘tame’ problems, which are potentially solvable even if they are very complex. 2 Rittel and Webber (1973 – open access) 1 outline 10 characteristics of ‘wicked problems’:

  1. There is no definitive formulation of a wicked problem.
  2. Wicked problems have no stopping rule.
  3. Solutions to wicked problems are not true-or-false, but good or bad.
  4. There is no immediate and no ultimate test of a solution to a wicked problem.
  5. Every solution to a wicked problem is a "one-shot operation"; because there is no opportunity to learn by trial and error, every attempt counts significantly.
  6. Wicked problems do not have an enumerable (or an exhaustively describable) set of potential solutions, nor is there a well-described set of permissible operations that may be incorporated into the plan.
  7. Every wicked problem is essentially unique.
  8. Every wicked problem can be considered to be a symptom of another problem.
  9. The existence of a discrepancy representing a wicked problem can be explained in numerous ways. The choice of explanation determines the nature of the problem's resolution.
  10. The social planner has no right to be wrong (i.e., planners are liable for the consequences of the actions they generate).

Here are 13 questions we face regarding the assessment of students in higher education -- this list is by no means exhaustive. If you are convinced any of these are not ‘wicked problems’ I’d love to hear from you. Some of these are UK-specific, but every country will have its own version of the problem. The same problems are true of other sectors of education as well.

  1. Is the UK degree classification system fit for purpose?
  2. Should/ can student work be assessed anonymously?
  3. Are some courses under-assessed or over-assessed?
  4. Is a degree from one university the same standard as the same class of degree from another UK university?
  5. Is a degree from a UK university equal to a degree (in the same subject) from a university in another country?
  6. What say should students have in how they are assessed?
  7. (When) does an assessment accommodation (e.g. for disability) provide an advantage? E.g. how much extra time in exams is needed to gain an unfair advantage?
  8. Could a student object to a form of assessment for moral, ethical or religious reasons? How should they be accommodated (if at all)?
  9. Are assessment regulations across a university consistent? Should they be?
  10. Are students able to avoid particular topics or types of assessment through strategic module choice?
  11. Are too many students getting ‘good degrees’? Why is the growth in the number of students getting good degrees often cited as evidence of falling standards?
  12. Why (in the UK) do we call marks ‘percentages’ when we rarely give marks above 80 or below 30?
  13. Are we under-assessing formatively and over-assessing summatively? (From Juliet Eve)

Notes:

  1. Rittel, H. W. J. and Webber, M. M. (1973) ‘Dilemmas in a general theory of planning’. Policy Sciences 4, pp. 155-169.
  2. I don’t know if people still say ‘wicked’ to mean ‘cool’ or ‘great’, but it doesn’t mean that either.

The 'crit'

This year I'm hearing a lot about 'the crit' from participants enrolled on the Assessment and Feedback module I teach as part of the PGCert course. 'Crit' is associated with stress, fear and anxiety, yet is evidently part of the culture of Art and Design subjects.

I found this video online. I do feel very sorry for the student.

It's not what you know, it's where you know.

Our brains respond differently to a painting if we are told it is genuine, according to a study by academics at Oxford University.

Fourteen participants were placed in a brain scanner and shown images of works by 'Rembrandt' -- some were genuine, others were convincing imitations painted by different artists. Neither the participants nor their brain signals could distinguish between genuine and fake paintings. However, advice about whether or not an artwork is authentic alters the brain's response; this advice is equally effective, regardless of whether the artwork is genuine or not.

I wonder if academics’ brains would undergo the same process if told that an article was published in Nature (or whatever the ‘top’ journal in your discipline might be) as opposed to being posted on some random website or published in a low-ranking journal (however defined). For the sake of argument I am assuming that the academics would only be looking at work which was good (the imitation Rembrandts were good paintings by all accounts). I’m not a scientist, but I know that Nature is good – at least, that is where to publish if you want Radio 4 to notice your work.

Last week I attended a workshop for Islamic Studies PhD students in my capacity as Acting Academic for the HEA Network. A business academic told me about the Association of Business Schools’ journal guide. Each journal has been classified as 1 to 4 star (in parallel with the UK’s Research Excellence Framework). It isn’t my place to comment on the policies of disciplines in which I do not have expertise, but this strikes me as a highly transparent way of assessing the quality of research—if you publish in a 4-star journal the article must be good, if a 3-star journal not so good, and so on. No arguments: the publication is the judge.

However, this puts some topics off limits to academics wanting to publish in the top journals. I understand the top business journals publish little about Islamic Finance—if this is your topic then you cannot publish in the top journals. A humanities academic from an Eastern European country recently informed me that research impact in his country involves publishing in ‘top journals’ – in short, journals written in English. Linguistic issues aside, one of the consequences is that he and his fellow academics have to write about the sorts of topics Anglophones (or, more accurately, Americans) think are important – therefore fewer academics are writing about their own country; they write about the USA.

In science, PLOS One is an open-access venue which is unrestricted by topic or by what editors think would be expedient to publish (important and popular not being the same thing).

This is one of the great advantages of the internet—we can have peer-reviewed, open-access research which is not restricted to certain topics.

Under the current system the journal an article is published in is our equivalent of a genuine Rembrandt. It would be interesting if all inputs to the REF had to be submitted as plain text files, to see if the efforts of the Rembrandts and of artists of lesser reputation could really be distinguished. Brain responses might be the fairest method of evaluation we have available to us.

The UK Citizenship test: Making sure that all new citizens have a good short-term memory.


Valid assessment is about measuring that which we should be trying to measure.

Phil Race, Making Learning Happen

The Guardian website quiz ‘Life in the UK: could you pass the citizenship test?’ has been provoking a lot of discussion amongst my friends. None of my friends, UK citizens or otherwise, have been able to pass the citizenship test yet.

I suspect that the Guardian has selected some ‘greatest hits’ from among the questions and that the most obscure questions have been deliberately chosen. But if the citizenship test is really about assessing British values, British history and British culture, it is a total failure. We can’t be sure that new British citizens are able to participate fully in British society, appreciate British history and understand British customs, but we can be sure that all our new citizens are successful learners of trivia.

Does it measure what we are trying to measure? The Home Office needs to read Phil Race.

A critical examination of proofreading (from latest edition of Studies in Higher Education)

I find proofreading difficult, especially proofreading my own work. I’ve long taken the view that proofreading my own work is beyond my abilities, particularly when a manuscript has gone through multiple drafts. Friends and colleagues generally concur; “You’re too close to the text,” they sometimes say. I’m always grateful to the professionals who perform this service on my journal articles.

Joan Turner’s critical examination of the nature of proofreading in the most recent edition of Studies in Higher Education is the first treatment of the subject I have come across (not that I have especially been looking out for an article like this, but it caught my attention when the e-mail alert from the journal came into my inbox). Student support centres which provide guidance on writing often emphasise that they are NOT a proofreading service. She writes:

 Such services offer some analysis of issues of style, grammar or rhetorical organisation that students should be aware of and attempt to resolve in their own writing, but they do not provide a 'clean' copy or 'proof' that the student can immediately submit for assessment (p. 427).

The article engages the question of proofreading from different angles. For example:

  1. Is proofreading a skill which all students should acquire—particularly students whose first language is not English? Is it part of learning to write well?
  2. There is an ambiguity between teaching writing skills and proofreading.
  3. There is a moral question about whether getting someone to read an assessed paper is unfair. And is there an ethical difference between asking a friend to read your work and paying a professional (or non-professional) proof-reader?
  4. Will a proof-reader ‘just’ improve the writing, or will they also improve the content of the text? At what point does using a proof-reader become cheating? What is being assessed—the writing or the content? Is it even possible to separate writing style from content?
  5. Does use of/ overreliance on a proof-reader lead to lower standards? Does it prevent students from learning how to write well?

Article reference

Turner, J. (2011) ‘Rewriting writing in higher education: the contested spaces of proofreading’. Studies in Higher Education 36(4), pp. 427-440.
