Well, it is that time of year again, when we need to evaluate our services. For us, this year will also include an institutional review of e-submission, e-grading and e-return, as well as the start of an initial review of our VLE.
Like many Heads of e-Learning across UK HEIs, I’ll be spending lots of time thinking about effective evaluation methodologies. Something that strikes me as I plot is this: are we actually the right people to review these institutional services? It’s not that I don’t want to do the work; I’m more concerned that we are the same people who manage the services, so perhaps we are just too close. I’m aware we try to minimise the bias by using members of other teams as interviewers. For instance, we (elevate) will work with members of the library team to author the semi-structured interview and focus group questions; however, it is the Library Team who undertake the data collection, and we then analyse it with them collectively.
However, I’m wondering whether we need a fresh pair of eyes when developing the questions and evaluation methodologies. For instance, should we take a lead from the management literature on what diverse leadership offers business (1)? This literature explores the value of difference and ways of eliminating bias in organisations. We know unconscious bias exists in all of us, and I’d argue that the membership of most central team / professional services task and finish groups is drawn from a small pool of people. This must create bias and encourage the emergence of ‘group think’. The outcome is that we might be missing out on lots of opportunities, and on getting a truer picture of what service users need.
So, if we are going to change things, what might this mean in practice? Perhaps we should include within these task and finish groups a stage where we invite a small number of researchers from within UCS to draw up the evaluation framework. We could also include postgraduate researchers, allowing them to further develop their skills by undertaking the data collection and analysis phases.
In terms of our annual service reviews, the process is more clearly defined. For instance, we produce an annual report per service, e.g., on the use of clickers at UCS: http://ucselevate.blogspot.co.uk/2014/06/this-blog-post-is-annual-report-on-use.html
This type of work would be enhanced by a more collaborative working relationship with the Research Office. One exciting option would be for a number of researchers to participate in a short “evaluation sprint”, facilitated by the service lead, i.e., the Head of e-Learning. This would last two days and focus on defining the service we provide, who we provide it to, how we have evaluated it in the past, and how the researchers think we should evaluate it. The output would be an appropriate evaluation framework, with a schedule, and potentially some researchers to undertake the work.
This would skill up the e-learning team, improve the quality of our evaluations, and give us a truer picture from our service users.
(1) Shah, A. &amp; Scott, R. (2012) Boardroom Diversity – The Opportunity. Mazars LLP.
Image Source (with thanks): http://upload.wikimedia.org/wikipedia/commons/a/a8/PDCA_Process.png