Although Graham Gibbs's article in the THE last week is ostensibly about fees, ...
"Higher fees should reflect an institution's quality, rather than status, so we should start measuring it, argues Graham Gibbs"
... it is actually an argument about how to measure the likely quality of the student experience in a university degree. Gibbs says that much of what is currently measured in the various UK league tables is merely a set of proxies for reputation:
"Input variables, such as resources, do not predict outcomes, such as degree results and employability, as much as you would expect. And the modest extent to which input variables do predict outcomes results largely from reputation. Input variables are even worse at predicting educational gains – the difference between students at the start and on graduation – than they are at predicting outcomes. Outcome measures such as employability tell us little about the institution, other than about their reputation and the quality of students they can attract, and so outcome measures are also not as helpful as one might hope as indicators of quality."
He argues that what can more validly predict educational gains are process measures: what institutions do with whatever students they have, using whatever resources are available. He goes on to say that thirty years of research has identified which process variables best predict educational gains. They include:
class size; cohort size; who does the teaching; the volume, promptness and usefulness of feedback on student work; the extent of close contact with academics; and the extent of collaborative learning – along with the extent of student engagement that results from these variables.
Gibbs says that key aspects of engagement include how much time students spend on their studies, and the extent to which they take a deep approach (attempting to understand) or a surface approach (attempting only to reproduce). He says that all these variables are measurable, and that institutions that have improved on these process variables have been shown to increase student engagement and learning gains without increasing resources. Gibbs notes that the Quality Assurance Agency does not ask institutions to provide information about these educational characteristics, nor is it captured by the National Student Survey or the National Union of Students' student satisfaction ratings.
Whilst there is much to ponder here, and even agree with, my feeling is that a Gibbsean revolution in the way in which "quality" is measured might not make all that much difference to who comes out well or badly in these measures. It seems highly unlikely that the first shall be last, and the last first.
Whilst it might be quite interesting to find out, I'm sure institutions will say they cannot afford to. They could, of course, replace all those quality assurers and managers they now employ with real teachers who know something about both their subject and teaching.