What impact will TEF have on TEL?

This is a cut & paste from my LinkedIn account where I tend to publish my posts. However, given the topic I’d be interested in gathering views and thoughts.

Question: What impact will the UK's Teaching Excellence Framework (TEF) have on Technology Enhanced Learning (TEL) in UK HEIs?

A colleague of mine asked if I could provide some input on this question, and I said, “I’ve a three hour train ride home, so I’ll put the thinking hat on”. The following is the outcome of that long, lonely train ride home. As many in this area will know, there is no simple answer to this, and there is no evidence to draw upon. Therefore, I’d love to read / hear what other people think the likely impact will be. Don’t hold back, get those comments flying ….

Key takeaway: it will have a positive impact on the adoption of TEL; however, the determinants of effective adoption of TEL within an institution are varied. In the initial period, TEF will highlight the need for more effective embedding of TEL. I would strongly expect a number of institutions to use TEF as the driver to embed TEL more effectively.

The three core elements of TEF (NSS, Retention & Employability) are strongly influenced by the use of TEL when combined with improved monitoring and intervention strategies. The impact of TEF on TEL is going to be very mixed, and we’d expect many institutions to undergo little change. However, there will be a number where TEF will drive a re-think of the role of TEL, and TEL will become an enabler for improved TEF ratings.

TEF provides a rationale and urgency for an institution to re-visit the effective use of TEL within its wider innovative teaching and learning agenda. The effective embedding of TEL within learning models has the potential to become a differentiator between institutions, and to provide the mechanism by which an institution can improve its TEF rating. For instance, the simple scenario of delivering blended (flipped) learning to enhance large-group, face-to-face teaching should improve NSS Assessment & Feedback ratings and provide more real-time data to inform personal tutor and intervention strategies (a positive impact on retention and attainment), both of which will improve the core TEF components. In addition, a TEL-focussed learning and teaching model will provide an easier mechanism for institutions to surface excellent practice to include within their Institutional statements.

As stated, there is likely to be a positive correlation between changing TEF performance and TEL adoption. However, institution-wide academic adoption of TEL is strongly influenced by a number of factors, including institutional culture, commitment and robust infrastructure, as well as academic and student support, development and engagement models. Therefore, for some institutions the likely impact on broad adoption of TEL will be low due to wider institutional barriers and challenges. The likelihood is that institutions will start to set clearer academic adoption KPIs (both in terms of level and type of adoption) to align with their emerging “teaching USP”, and be much more strategic in aligning their limited curriculum support and development resources (teams) to ensure maximum impact. Therefore, it is likely TEF will drive a re-think in how TEL is supported and developed within an institution.

The institutional requirements for TEL are also likely to change over time in response to TEF. For instance, an emerging institutional requirement is the need to improve reporting and monitoring. This will enable institutions to identify opportunities to improve TEF performance through more proactive interventions, and is likely to follow a similar pattern to the institutional response to the introduction of the National Student Survey: a consequence of introducing the NSS question set for final year students has been that most institutions now include the same questions within their internal surveys for pre-final year students and, depending on the outcomes, Departments may introduce action planning to improve the experience of the student cohort as they progress into the final year. The expected outcome is improved NSS scores. This required institutions to implement more robust and standardized reporting and monitoring.

The size of the impact will depend strongly on Senior Managers’ ability to drive institutional change so that decision-making processes become more data informed, in response to the need to align enhancement programmes, i.e., curriculum redesign or better TEL support. The need for data-informed decision making around TEF is likely to drive two related changes in TEL: firstly, a consolidation onto a core set of e-learning tools which will provide better quality data sets through more sophisticated data integrations; and secondly, improved reporting tools which can surface reports from the core e-learning tools.

Will the impact of TEF on TEL be large and quick? I would expect not initially, due to resistance to change being relatively high in the sector given the nature of TEF’s current implementation and relative immaturity (it is only on phase 2). I am not sensing a significant level of dissatisfaction among staff with the current situation, nor a strong business case and vision from the sector. The expectation is this will change as TEF becomes more established, and the rewards for a strong TEF rating start to emerge.

How can you drive Learning Analytics within your Institution? #blackboardEMEA

The question “how can you drive Learning Analytics within your Institution” was the one I set myself to try to answer when I presented at the recent Gregynog Colloquium. The link to the slides illustrates how I tried to facilitate the audience in finding an answer. The audience were IT Services staff from across various Welsh Higher Education Institutions.

The approach was to (i) make them aware of the cross-institutional nature of Learning Analytics, and therefore that all stakeholders have a role, (ii) touch on how we are all change agents, and (iii) provide some practical steps for promoting the change.

As you’ll see from the slide deck, I drew upon a methodology we use to develop our use cases, and encouraged them to reflect on how their role fits within these use cases. The final step was to raise awareness of the factors which determine institutional readiness.

Gregynog Blackboard Analytics June 16:

Achieving institutional adoption. ITTHE Session

The attached slide deck (see below) was used at a recent presentation (ITTHE 2016 Conference, Kadir Has University, Istanbul) focussing on achieving institution-wide adoption of learning technology.

The key message was that institution-wide adoption of learning technology has little to do with technology, and a lot to do with managing institutional change.

The audience was predominantly lecturers (faculty members) with a sprinkling of Senior Managers and those in professional services. Given this audience, the intended aims were to raise awareness of Blackboard’s ebook – 6 characteristics to increase technology adoption – and to provide an insight into the required alignment between leadership, institutional commitment & investment, and faculty training models for institution-wide adoption. The session included a number of activities to allow participants the opportunity to reflect on where their institution currently resides on an adoption matrix.

The slides (PDF) are available from: ITTHE Istanbul Adoption Jun 16 final PDF

A wider observation from the day was the significant interest around the effectiveness of flipped classroom learning models. A number of the research questions which presenters were addressing included:

  • Has the flipped classroom increased students’ engagement with the course?
  • Do students actually engage with the materials before class? What motivates them?
  • Do students prefer the flipped classroom approach compared to the traditional lecture?
  • What are the least favourite elements of a flipped classroom?
  • What problems do students encounter during the flipped classroom?
  • What factors reduce student motivation in the flipped classroom?
  • How would students design a flipped classroom activity?

Audience Task: Grade Journey Presentation #bbtlc16

As you may be aware, Stephen Bryne and I are presenting on “Optimising the Online Feedback and Grades Journey Experience” on Wednesday at the Blackboard TLC in Groningen.

The advertised session aims are:

  • What is the Grade Journey?
  • How will it help you achieve your strategic aims?
  • What lessons can we learn when implementing Grade Journey?
  • How might Blackboard help?

However, the unadvertised session aim is to encourage those attending to write an elevator pitch for a DVC/PVC/Rector or Provost in response to …

Your DVC asked the following question in a recent Education Committee Meeting:

Given all our pressing priorities, why should the institution invest the resource and effort to implement an enhanced grade journey?

Can you develop an elevator pitch to suggest … why?

You can write, video or record these wherever you like; just share them using the tags: #bbtlc16 #gjelevator – for more info on elevator pitches, see

 

What is the grade journey? #bbtlc16

The intention of this post is to answer the question, what is the grade journey?

This is the second post in a series which are based on our presentation at Blackboard’s Teaching and Learning Conference in Groningen.

The short answer would be that the Grades Journey enables an institution to manage the exchange of grade-related data between Blackboard Learn and any Student Information System more effectively and efficiently.

The previous definition illustrates that it sits within a broader process: it is a small part of a much wider solution. For instance, the grade journey would be part of a wider set of electronic management of assessment solutions:

  • Ensure there is a method of submitting coursework electronically through a single interface.
  • Enable Academics to mark work and to annotate comments on students’ electronic scripts
  • Provide the ability to run electronically submitted coursework through plagiarism detection (SafeAssign), whether all submissions, individual pieces of coursework, or batches of coursework.
  • Enable rubrics & define metrics to assure the quality of marking
  • Improve students’ understanding of their feedback and mark allocations, with a means to understand how they can improve their mark for the next piece of summative coursework

Figure 1 illustrates the end-to-end process, from the initial creation and publication of the assessment task, driven by information within the Student Record System, to the publication of the final grade from the Student Record System.

Figure 1: Grade Journey as an end to end process


Those who are aware of the manually intensive processes for e-submission, e-marking and e-return will be familiar with how error-prone and time-consuming they can be, and how many manual steps are needed to synchronize assessment data effectively. Against this backdrop, the grade journey synchronization offers a number of key enhancements (a rough sketch of the flow follows the list):

  • Automated grade column creation based on records within your Student Information System, with flexibility in terms of which columns are displayed and how they are associated with gradeable content items (assignments, tests etc.)
  • Marks (grades) approval and release. There is flexibility in determining when grades are released and what the approval process looks like, so these can align with your institutional approval workflows, including a disaggregated approval process through role settings for authorized users.
  • Automated grade exchange from Blackboard to your SIS, with the functionality to lock down the Grade Centre as read-only once the data has been synchronised.
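
To make the flow above concrete, here is a minimal, hypothetical sketch of a synchronisation cycle in Python. It is not Blackboard’s actual integration API; the record structures, function names and approver roles are assumptions for illustration only.

```python
# Hypothetical sketch of a grade journey synchronisation cycle.
# The SIS records, function names and approver roles are illustrative
# assumptions, not Blackboard's actual integration API.
from dataclasses import dataclass, field

@dataclass
class AssessmentRecord:          # as held in the Student Information System
    assessment_id: str
    title: str
    max_mark: int

@dataclass
class GradeColumn:               # as created in the VLE grade centre
    assessment_id: str
    title: str
    grades: dict = field(default_factory=dict)   # student_id -> mark
    approved: bool = False
    read_only: bool = False

def create_columns(sis_assessments):
    """Automated column creation driven by SIS assessment records."""
    return {a.assessment_id: GradeColumn(a.assessment_id, a.title)
            for a in sis_assessments}

def approve_and_release(column, approver_role):
    """Approval/release step; only authorised roles may approve."""
    if approver_role in ("exam_board", "module_leader"):
        column.approved = True
    return column.approved

def sync_to_sis(column, sis_gradebook):
    """Push approved grades back to the SIS, then lock the column read-only."""
    if not column.approved:
        return False
    sis_gradebook[column.assessment_id] = dict(column.grades)
    column.read_only = True
    return True

# Example cycle
sis_assessments = [AssessmentRecord("ASMT-101", "Essay 1", 100)]
sis_gradebook = {}
columns = create_columns(sis_assessments)
columns["ASMT-101"].grades.update({"s001": 68, "s002": 55})
approve_and_release(columns["ASMT-101"], "module_leader")
sync_to_sis(columns["ASMT-101"], sis_gradebook)
print(sis_gradebook)   # {'ASMT-101': {'s001': 68, 's002': 55}}
```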

So, where might you start to think through how the grade journey applies within your institutional context? An important first step is to map out what you are currently doing, and who is responsible for the different parts of the grade journey end to end process. A good place to start is to work through the following visualisation (from Sheffield Hallam University), identifying what each section means within your institutional context, and who is responsible.

Figure 3: Assessment Lifecycle (Sheffield Hallam University)


Source: http://repository.jisc.ac.uk/6287/1/SHU_Assessment_Journey_Programme_Overall_Lifecycle.pdf

How does the Grade Journey help you achieve your strategic goals? #bbtlc16

I’m off to the Blackboard Teaching & Learning Conference (as you do), and I’m presenting on “optimizing the online feedback and grades journey experience”. As part of the positioning piece I thought it would be useful to share what I’ll be talking about around a Q&A. The first question is … how does the grade journey help you achieve your strategic goals?

If we define the Grade Journey as an end-to-end process for enhancing the efficiency and effectiveness of exchanging grade-related data between Blackboard Learn and the Student Information System, then the question must be: how does this granular, strategic design focus map back to the institution achieving its strategic aims?

This can be answered through a number of different perspectives.

The first is to place the Grade Journey within the broader context of an institution moving from vision and strategy to implementation. This is illustrated in Figure 1, which shows a potential process by which the institutional strategy is achieved through a Grade Journey project.

Figure 1: Grade Journey (Strategic Design) within wider context


Assume an institutional assessment and feedback review identifies the following key requirement: “students will experience flexible and innovative approaches to learning and assessment informed by contemporary pedagogical practice”, with a priority “to promote quality in all aspects of assessment and feedback”. Through a Strategic Review and Planning engagement, a number of short and medium term operational objectives are identified to deliver these strategic requirements.

One of these, in the short term, is implementing a Grade Journey project as part of a wider electronic management of assessment change programme.

From the institutional perspective a number of cost (time) savings are expected;

  • Realise cost savings through introduction of more efficient administrative workflow
  • Remove duplication of effort by using a single system for marking and submission of marks
  • Enable a consistent approach for all staff entering marks, and increase the opportunity for greater use of technology for feedback
  • Decrease the amount of time spent by academics on the administration of marks
  • Improve potential for errors to be identified and corrected before board processing by providing early and sustained visibility of provisional marks for both staff and students

Outside of these efficiency gains, there are likely to be a number of enhancement gains associated with the standardisation process, which will also deliver a number of wider strategic aims. These include:

  • Assessment criteria can be presented explicitly via an interactive rubric with component marks and feedback
  • Enriched feedback through margin notes, summative comments and rubrics and increased use of a wider range of assessment types
  • Provide academics with greater control over the assessment process through more visible monitoring
  • A diverse and wider range of assessment methods which can be embedded into the teaching and learning journey – including journals, wikis and automated testing
  • Feedback can be more easily provided in a variety of formats (comments box, attached document, annotation, audio, video, rubrics)
  • Feedback tools are embedded as part of the process, to allow more detailed feedback against assessment criteria

The next question is, in reality, have significant efficiency gains been realized? The JISC EMA Programme identified that they have, with significant potential resource savings (https://www.jisc.ac.uk/guides/electronic-assessment-management):

  • At Queen’s University Belfast the School of English moved to e-submission and marking which saved 20 working days per year in administrative staff time (in a school with c.900 students).
  • At the University of Huddersfield a time and motion study showed that an administrator dealing with 1,620 students saved 137 hours per year or 3.7 weeks based on a 37 hour working week.
  • At the University of Dundee time saved through e-submission and e-tutoring works out at around 20 mins per assignment submitted. The centralised e-tutoring approach at the University of Dundee has also generated further efficiencies in that tutor time spent responding to emails is 60 minutes per week as opposed to 176 minutes on a similar programme that does not employ the system.

The question to ask yourself is, what are your strategic goals around Assessment & Feedback, and how well do they align to the direct and indirect benefits associated with the Grade Journey?

The intention of a later post is to answer: what is the grade journey?

Notes from JISC’s 5th Learning Analytics Network Meeting (UEL)

The following are a few notes I made when attending the JISC 5th Learning Analytics Network Meeting at UEL on the 3rd Feb.

The primary reason for me to be there was to sit on a panel session on reviewing the institutional readiness discovery assessments which Blackboard have been commissioned to support (http://bit.ly/LADiscovery). My notes for specific questions are below;

Can you outline briefly the process you’ve gone through with each institution?

The discovery exercise is focussed on the four “pillars of readiness”: culture; people; processes & terminology; and infrastructure.

The assessment is made against 20-plus evidence statements which are mapped to a number of rubrics. The outcome is reported by pillar and category, and we categorise each as Ready, Ready with recommendations, or Not Ready with recommendations.

Data is collected through a number of different routes – Pre-onsite (questionnaire, doc review) & Onsite (forum, interviews/open groups, workshops)

The second deliverable is an indicative feature set and data improvement cycle. This is informed by the topic / theme. The aim is to give an indication of what the indicators might be, and the pilot design.
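
As a purely illustrative aside, the rubric-based scoring can be imagined along the following lines; the evidence statements, scores and thresholds below are invented for the sketch and are not the actual Blackboard rubrics.

```python
# Illustrative sketch only: scoring evidence statements by readiness pillar.
# The statements, scores and thresholds are invented; the real assessment
# uses 20+ evidence statements mapped to Blackboard's own rubrics.
PILLARS = ["culture", "people", "processes & terminology", "infrastructure"]

# evidence: (pillar, statement, rubric score 0-2)
evidence = [
    ("culture", "Senior sponsorship for learning analytics exists", 2),
    ("culture", "Staff share a common definition of learning analytics", 1),
    ("infrastructure", "VLE and SIS data can be joined on a student ID", 2),
    ("people", "Personal tutors have capacity to act on alerts", 0),
]

def categorise(avg_score):
    if avg_score >= 1.5:
        return "Ready"
    if avg_score >= 0.75:
        return "Ready with recommendations"
    return "Not Ready with recommendations"

for pillar in PILLARS:
    scores = [s for p, _, s in evidence if p == pillar]
    if not scores:
        print(f"{pillar}: no evidence collected")
        continue
    avg = sum(scores) / len(scores)
    print(f"{pillar}: {categorise(avg)} (avg rubric score {avg:.2f})")
```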

How much similarity have you found across the institutions you’ve visited so far?

The context >> I was expecting there to be only a few similarities across the five institutions, because of the different types of institutions (Russell Group, post-1992), plus a very small sample.

However, I have found a number of similarities:

  • A strong vision / belief in the importance of learning analytics from senior managers, around enhancing the student experience
  • Interest in enhanced reporting and dashboards, compared to predictive analytics
  • Common themes around progression & attainment
  • The agent (vehicle) of change is primarily the personal tutor mediating the data / intervention, not automated or large-scale unmediated student-facing dashboards

Overall what are the main areas which institutions need to tackle to move forward?

Given the small sample it is very difficult not to generalise – I have tried to organise these by theme.

The focus is on the “big challenges”; I have selected three, as I view technology infrastructure as easier to overcome.

Culture >> need to widen and deepen the conversation with staff – what is learning analytics? What does it look like? How can it improve learning and teaching at the institution?

Process >> Exploring interventions for retention / attainment – how are you going to create frequent, actionable data? A key question is whether current curriculum development processes & staff development models will generate what is needed at the programme level.

People >> capacity – looking at workflows for LA with interventions makes this a very large, cross-institutional project. All institutions are already undergoing significant change programmes, so in the short term the question is capacity to handle more change. In the longer term, this links into culture and the need to reduce resistance to change across all stakeholders.

Notes from the day

JISC Update (Paul & Michael)

  • Student app will be available in App stores – invite only by April
  • Code of practice, need to tie this in, and listen to the podcasts (5 in total, rolled out individually – first next week) – http://analytics.jiscinvolve.org/wp/

Gary: UEL

What they are doing

  • Student engagement for personal tutors – developed spring 2015
  • Developing a student engagement metric – working up a model as an enhancement of the student engagement for personal tutors work – this will provide the student with an indicator of their own level of engagement
  • Developing a student facing app – includes position in terms of student engagement compared to peers, also visualization of distance to travel

Their research

  • Evidence of a very strong correlation between attendance and average module marks (a sketch of this kind of analysis follows after this list)
  • Use the data / research to inform the metric and the importance of the various indicators, i.e., use your own data to identify significant factors
  • Identifying a group of students where interventions will have the biggest effect. Sounds similar to work from John Whitmer
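
To illustrate the kind of analysis behind the first bullet, here is a hedged sketch; the attendance and mark figures are made up, and a real institution would pull these from its own attendance monitoring and SIS data.

```python
# Illustrative sketch: correlating attendance with average module marks.
# The numbers are invented; real data would come from institutional systems.
from math import sqrt

attendance_pct = [95, 88, 72, 60, 98, 45, 80, 67, 91, 55]   # per student
avg_module_mark = [72, 68, 58, 52, 75, 40, 64, 55, 70, 48]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(attendance_pct, avg_module_mark)
print(f"Pearson r between attendance and average module mark: {r:.2f}")
```

A correlation like this can then be used to weight attendance within a composite engagement metric, or to target interventions at the group of students where they are likely to have the biggest effect.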

Micro Projects

Open University – Student workload tool. The problem: how to manage the workload for online learners, especially workload distribution. Uneven workload distribution seems to be negatively correlated with student outcomes, so there is a need to ensure workload is spread evenly. They use an activity taxonomy to inform a web-based workload assessment tool. Road ahead … a hosted version and a download version. It should be available within the next month – Action :: follow up at end of March 2016 – this would be really useful to have in the back pocket for institutions when onsite and during programme development.
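
As a rough illustration of what a taxonomy-driven workload check might look like (the activity types, minutes-per-activity estimates and evenness threshold below are my assumptions, not the OU tool’s actual taxonomy):

```python
# Illustrative sketch only: estimating weekly workload from an activity
# taxonomy and flagging uneven weeks. Activity types, minute estimates and
# the threshold are assumptions, not the OU tool's actual taxonomy.
TIME_PER_ACTIVITY_MIN = {
    "reading_page": 3,        # minutes per page
    "video_minute": 1.5,      # minutes per minute of video (incl. pausing)
    "forum_post": 20,
    "quiz_question": 4,
    "assignment_hour": 60,
}

# week -> list of (activity_type, quantity)
course_plan = {
    1: [("reading_page", 40), ("video_minute", 30), ("quiz_question", 10)],
    2: [("reading_page", 20), ("forum_post", 2)],
    3: [("reading_page", 60), ("assignment_hour", 8), ("quiz_question", 20)],
}

def weekly_hours(activities):
    minutes = sum(TIME_PER_ACTIVITY_MIN[a] * qty for a, qty in activities)
    return minutes / 60

hours = {week: weekly_hours(acts) for week, acts in course_plan.items()}
avg = sum(hours.values()) / len(hours)
for week, h in hours.items():
    flag = "  <-- uneven, review" if abs(h - avg) > 0.25 * avg else ""
    print(f"Week {week}: {h:.1f} study hours{flag}")
```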

Edinburgh – cognitive presence coding – supporting traditional DE/OL. Lots of scary stuff around the coding of cognitive presence. It would allow easier identification and intervention within MOOCs and online courses. Potential down the line to provide visualisation and monitoring of engaged learners. Watch this space …

University of Greenwich – Survey data as an indicator of learning outcomes? They are trying to leverage more out of the data they collect, following Sheffield Hallam’s work in this area. Greenwich stuff <<< http://analytics.jiscinvolve.org/wp/2016/01/30/guest-post-can-survey-data-be-used-as-an-indicator-of-learning-outcomes/ >>>

Workshop Outputs: Blackboard User Group: Durham University #durbbu

The following is a transcript of the tasks we ran in the workshop at the Blackboard User Group hosted by Durham University (6th Jan). Malcom, please share ….

The background to the workshop tasks was institutional readiness for Learning Analytics. The tasks were a little rushed due to timing issues (not my fault :-)).

Thanks to the attendees for participating, and I’ll leave it to them to interpret the outcomes.

The aim of the first task was to think about the attributes, qualities and characteristics of an engaged learner, and then explore the likely indicators they might use to measure engagement. The exercise fits into the Strategic Review / Planning as we seek institutional consensus around terminology, and understanding.

>> Could you describe the qualities, characteristics and attributes of an engaged learner at your institution?

• Attends all types of timetable sessions. They take part in assessment activities. Performing to their capability. Making progress.
• Enthusiasm, interest in their subject, and willingness to share in the whole experience
• An active learning student (high attendance in teaching and learning), with regular contact, successful outcomes and is happy
• accessing reading material (in VLE course library etc.,), watching lecture recordings, submitting work in time, completing formative assessment, engagement in tutorials, leading in group work & discussions, and joining clubs
• Attends sessions, use of available resources, intrinsically motivated, participates in discussions and peer groups, gets desired results.
• Participates in sessions, activities, collaborates and communicates. Asks questions, is inquisitive and uses feedback to improve work

>> Based on your description, how would you measure student engagement? What are the top five indicators of an engaged student?

Group 1

1. Attendance
2. Assessment – completion
3. Assessment – grades

Group 2

1. Completion of assessments
2. Extra curricular activities, including external commitments
3. Self declared data

Group 3

1. Attendance
2. Submissions of assignments
3. Quality of communication
4. Pass / fail of outcomes
5. Happiness
6. Support requested / received

Group 4

1. VLE data
2. Door access data
3. SIS data

Group 5

1. Attendance
2. Submission
3. Contribution to online and/or face to face sessions
4. Access to resources (library and online)
5. Evidence of critical thinking

Group 6

1. Library usage
2. Attendance
3. Grade history
4. Wifi tracking
5. Course triggers

Group 7

1. Attendance
2. Activity completion (formative assessment)
3. Activity on the course (discussion boards, forums)
4. Achieving desired grades

The “elevator pitch to a PVC within your institution on the benefits of Learning Analytics within teaching, learning and assessment” task was not so easy to analyse (or actually write up !!! – as I mentioned in the session, but would anyone listen to me). An emerging theme was to pitch the concept that we have lots of data, so what questions do you want answered?

One group did take a slightly different approach;

Healthy Body, Healthy Mind: use on-campus purchases of food and drink (including alcohol) and compare this to academic performance.

Learning Analytics: So, what is an effective intervention?

I have just read some really interesting work at San Diego State University on triggers and intervention strategies. It was forwarded to me by John Whitmer at Blackboard (thanks John). The work was reported at LAK 2015 in a report by Dodge, Whitmer, and Frazee (2015), but I can’t find a public link to the SDSU report. So, sorry … you’ll need to hunt it down yourself 🙂

Their work is based on a number of large courses, over a long period of time. From my perspective, the pilot introduced weekly trigger events through the semester. These combined triggers included (a sketch of how such triggers might be combined follows the list):

  • Classroom voting / clickers – if there was no score for a session, the student was deemed not to have attended
  • Regular Blackboard quizzes – if a student did not complete one, a trigger was activated
  • Blackboard logins – if they had not logged into Blackboard for over a week, a trigger was activated
  • Scores / grades on course tests / assignments – low scores on assignments and tests activated a trigger
  • Cumulative average grade – a trigger was activated if it was just below passing as they neared the end of the semester
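
Here is a minimal sketch of how such weekly triggers might be combined into a per-student count; the field names and thresholds are assumptions, not SDSU’s actual implementation.

```python
# Minimal sketch of combining weekly trigger events into a per-student count.
# Field names and thresholds are assumptions, not SDSU's actual implementation.
from datetime import date, timedelta

def weekly_triggers(student, week_start):
    """Return the list of triggers a student activated in a given week."""
    triggers = []
    if student["clicker_score"] is None:          # no score -> did not attend
        triggers.append("missed_class_voting")
    if not student["quiz_completed"]:
        triggers.append("missed_weekly_quiz")
    if (week_start - student["last_login"]) > timedelta(days=7):
        triggers.append("no_vle_login_for_a_week")
    if (student["latest_assignment_score"] is not None
            and student["latest_assignment_score"] < 50):
        triggers.append("low_assignment_score")
    if student["weeks_remaining"] <= 3 and student["cumulative_avg"] < 50:
        triggers.append("cumulative_average_below_pass")
    return triggers

student = {
    "clicker_score": None,
    "quiz_completed": False,
    "last_login": date(2016, 1, 4),
    "latest_assignment_score": 42,
    "cumulative_avg": 47,
    "weeks_remaining": 2,
}
fired = weekly_triggers(student, week_start=date(2016, 1, 18))
print(f"{len(fired)} triggers this week: {fired}")
# A running count of triggers across the semester is then the indicator of
# who needs support (one trigger: fine; ten or more: very likely to fail).
```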

The work drew a number of important findings;

1) the deployment of regular (weekly) triggers through a range of technologies and meaningful learning tasks is achievable (I do know this, but I am often told it is unlikely to happen given Institutional culture). Therefore, if you can drive the required changes in curriculum design across a programme, the learning experience for certain students can be improved.

2) these triggers are significant predictors of the likelihood of passing the course. They found none of the students who activated only one trigger needed to repeat the course, while almost all the students who activated ten or more triggers in the semester needed to retake the course. Therefore, the number of trigger events is a good indicator of who needs support.

3) intervention strategies based on emails, including links to multimedia-based support materials etc., did not have any statistical impact on the behavior of those who received them. They conclude, “while the triggers may indicate a problem, students may not have the skills to address the cause of the problem despite awareness and the will to do so”. The article suggests a more human intervention strategy based on supplemental instruction by peer (student) mentors. The interesting challenge for SDSU was the issue of achieving impact with tight resources; given student intakes of over 400 on the course, scalability is critical.

These outcomes resonate with some of the discussions I have with UK HEIs around effective intervention strategies. It also supports the position held by many on the need for a mediated intervention, as opposed to large-scale student dashboards and auto-generated emails. To further support this position, the question I’m interested in answering is: is there any evidence that student-facing dashboards have a demotivating impact on the learner?

What is a proxy for student engagement? It appears access data to learning resources doesn’t cut the mustard

I was traveling to an onsite visit, so I had a great opportunity to catch up on some reading. Top of my Learning Analytics reading list was Colthorpe, Zimbardi, Ainscough and Andersen (2015) – “Know thy student! Combining learning analytics and critical reflections to increase understanding of students’ self regulated learning in an authentic setting”.

I was particularly interested in this article as I’m starting to explore what appropriate data triggers for interventions might be.

The key message I’ve taken from the article is the potential to include non-discipline questions within low-stakes summative assessment, and that early submission of these meta-learning assessments is a very good indicator of academic achievement. Meanwhile, if we consider the strategies of the self-regulated learner, our use of access logs to learning resources can be a misleading trigger for an intervention.

The intention of this post is to partially unpack the above.

Colthorpe et al. (2015) applied Zimmerman’s three-stage self-regulated learner model (forethought, performance, reflection). Their analysis identified that student access to learning resources (in this case the lecture recording) was not a good indicator of academic achievement, and would be a poor intervention trigger.

The explanation is that a high-performing self-regulated learner has set their goals and motivations in the forethought stage. Consequently, if they access the lecture recording and do not achieve their desired learning outcome, they’ll shift to alternative sources, e.g., textbooks. This can be contrasted with an individual who has low self-regulation, as measured by them not reflecting on the effectiveness of their previous learning strategies and, consequently, not having developed a range of effective approaches. In this scenario, they will simply continue to access the lecture recording, as they have no previous experience to draw upon to seek alternative sources. The above illustrates two extremes, but it suggests access clicks on resources may not be a good indicator of a struggling student. For instance, even an effective self-regulated learner may return to re-watch sections of the lecture recording.

The article evidences that a good indicator of academic engagement is the early submission of meta-learning assessment tasks. These tasks are scheduled around the timetable of discipline-specific assignments, and their structure is designed around Zimmerman’s self-regulated learning model. The submission date (time from deadline) and the quality of the responses are the triggers for intervention.

I’ll admit the term “meta-learning assessment tasks” does sound rather alarming, and abstract to the majority of Academics / Faculty. However, within Colthorpe et al.’s model it breaks down into a relatively straightforward design.

They have four tasks evenly spaced throughout the course; each task contains six short questions, and altogether they contribute around 12% to the overall grade.

The tasks include:

  • articulating the study strategies they had used in the past and identifying hindrances to their learning
  • articulating strategies they may use to improve their learning and promote effective study before an exam (mid semester)
  • reflecting on the strategies they used for their (mid semester) exam, identifying how effective they were and how to improve

A message I took from the article was the importance of the forethought stage for academic achievement. They suggested this is a critical stage for success as it concerns organization and planning, motivation, beliefs and personal goal setting.

So, not wishing to critique the whole article (I’ll leave that to you), it would be good to consider how a faculty member/academic might need to change practice in light of this article and my previous posts around assessment patterns. I’d suggest these ideas could be easily incorporated within curriculum learning models. For instance, the inclusion of non-discipline short-answer questions is easily accommodated within the VLE quiz engine and the existing summative assignment diet. Therefore, apart from the allocation of marks for completion, it would be relatively straightforward to implement.

The use of marking rubrics should allow a quick assessment by the faculty/academics. The outcome is that the intervention decision is informed by:

  • have they “successfully” completed the low-stakes assessment?
  • was their submission “timely”?
  • did their submission demonstrate the “attributes of a self-regulated learner”?

The words in quotation marks need to be further defined within the institutional and course culture, and the threshold levels need to be informed by analysing the data.
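
As a closing illustration, the intervention decision above could be sketched as follows; the threshold values and rubric scale are assumptions that would need to be calibrated against institutional data.

```python
# Hedged sketch of the intervention decision described above. The threshold
# values and rubric scale are assumptions to be calibrated against local data.
from datetime import datetime

TIMELY_HOURS_BEFORE_DEADLINE = 48      # assumption: "timely" = 2+ days early
MIN_RUBRIC_SCORE = 2                   # assumption: rubric scored 0-4

def needs_intervention(submission, deadline):
    """Decide whether to flag a student for a tutor-mediated intervention."""
    if submission is None:                                 # not completed
        return True
    hours_early = (deadline - submission["submitted_at"]).total_seconds() / 3600
    timely = hours_early >= TIMELY_HOURS_BEFORE_DEADLINE
    self_regulated = submission["rubric_score"] >= MIN_RUBRIC_SCORE
    return not (timely and self_regulated)

deadline = datetime(2016, 3, 14, 23, 59)
submission = {"submitted_at": datetime(2016, 3, 14, 22, 30), "rubric_score": 1}
print(needs_intervention(submission, deadline))   # True -> flag for follow-up
```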