Workshop Outputs: Blackboard User Group: Durham University #durbbu

The following is a transcript of the tasks we ran in the workshop at the Blackboard User Group hosted by Durham University (6th Jan). Malcom, please share ….

The background to the workshop tasks was institutional readiness for Learning Analytics. The tasks were a little rushed due to timing issues (not my fault :-)).

Thanks to the attendees for participating, and I’ll leave it to them to interpret the outcomes.

The aim of the first task was to think about the attributes, qualities and characteristics of an engaged learner, and then to explore the likely indicators institutions might use to measure engagement. The exercise fits into the Strategic Review / Planning phase, as we seek institutional consensus around terminology and understanding.

>> Could you describe the qualities, characteristics and attributes of an engaged learner at your institution?

• Attends all types of timetabled sessions. They take part in assessment activities. Performing to their capability. Making progress.
• Enthusiasm, interest in their subject, and willingness to share in the whole experience
• An active learning student (high attendance in teaching and learning), with regular contact, successful outcomes and is happy
• accessing reading material (in VLE course library etc.), watching lecture recordings, submitting work on time, completing formative assessment, engagement in tutorials, leading in group work & discussions, and joining clubs
• Attends sessions, use of available resources, intrinsically motivated, participates in discussions and peer groups, gets desired results.
• Participates in sessions, activities, collaborates and communicates. Asks questions, is inquisitive and uses feedback to improve work

>> Based on your description, how would you measure student engagement? What are the top five indicators of an engaged student?

Group 1

1. Attendance
2. Assessment – completion
3. Assessment – grades

Group 2

1. Completion of assessments
2. Extra curricular activities, including external commitments
3. Self declared data

Group 3

1. Attendance
2. Submissions of assignments
3. Quality of communication
4. Pass / fail of outcomes
5. Happiness
6. Support requested / received

Group 4

1. VLE data
2. Door access data
3. SIS data

Group 5

1. Attendance
2. Submission
3. Contribution to online and/or face to face sessions
4. Access to resources (library and online)
5. Evidence of critical thinking

Group 6

1. Library usage
2. Attendance
3. Grade history
4. Wifi tracking
5. Course triggers

Group 7

1. Attendance
2. Activity completion (formative assessment)
3. Activity on the course (discussion boards, forums)
4. Achieving desired grades

The “elevator pitch to a PVC within your institution on the benefits of Learning Analytics within teaching, learning and assessment” task was not so easy to analyse (or actually write up!!! – as I mentioned in the session, but would anyone listen to me). An emerging theme was to pitch the concept that we have lots of data, so what questions do you want answering?

One group did take a slightly different approach:

Healthy Body, Healthy Mind: use on-campus purchases of food and drink (including alcohol) and compare these to academic performance.


Learning Analytics: So, what is an effective intervention?

I just read some really interesting work from San Diego State University on triggers and intervention strategies. It was forwarded to me by John Whitmore at Blackboard (thanks John). The work was reported at LAK 2015 by Dodge, Whitmore, and Frazee (2015), but I can’t find a public link to the SDSU report. So, sorry … you’ll need to hunt it down yourself 🙂

Their work is based on a number of large courses, over a long period of time. From my perspective, the pilot introduced weekly trigger events through the semester (a sketch of how such trigger counting might look follows the list). These combined triggers included:

  • Classroom voting / clickers – if there was no score for a session, the student was treated as not having attended
  • Regular Blackboard quizzes – if a student did not complete one, a trigger was activated
  • Blackboard logins – if a student had not logged into Blackboard for over a week, a trigger was activated
  • Scores / grades on course tests / assignments – low scores on assignments and tests activated a trigger
  • Cumulative average grade – if the running average was just below passing as the end of the semester approached, a trigger was activated
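
To make the mechanics concrete, here is a minimal sketch of how weekly trigger counting of this kind might be implemented. The data structure, field names and thresholds are my own assumptions for illustration; the SDSU report describes the trigger types, not this code.

# A minimal sketch of weekly trigger counting, loosely modelled on the
# trigger types listed above. All names and thresholds are assumptions.

from dataclasses import dataclass
from typing import Optional

@dataclass
class WeeklyActivity:
    clicker_score: Optional[float]     # None = no score recorded, i.e. did not attend
    quiz_completed: bool               # completed this week's Blackboard quiz?
    days_since_login: int              # days since the last Blackboard login
    assignment_score: Optional[float]  # latest assignment/test score (0-100), if any
    cumulative_average: float          # running course average (0-100)

def triggers_for_week(w: WeeklyActivity,
                      low_score: float = 50.0,
                      pass_mark: float = 60.0) -> list:
    """Return the triggers activated by one week's activity."""
    fired = []
    if w.clicker_score is None:
        fired.append("no_clicker_score")        # treated as non-attendance
    if not w.quiz_completed:
        fired.append("quiz_not_completed")
    if w.days_since_login > 7:
        fired.append("no_login_for_over_a_week")
    if w.assignment_score is not None and w.assignment_score < low_score:
        fired.append("low_assignment_or_test_score")
    if w.cumulative_average < pass_mark:
        fired.append("cumulative_average_below_pass")
    return fired

def semester_trigger_count(weeks: list) -> int:
    """Total trigger events across the semester for one student."""
    return sum(len(triggers_for_week(w)) for w in weeks)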

The work drew out a number of important findings:

1) the deployment of regular (weekly) triggers through a range of technologies and meaningful learning tasks is achievable (I do know this, but I am often told it is unlikely to happen given Institutional culture). Therefore, if you can drive the required changes in curriculum design across a programme, the learning experience for certain students can be improved.

2) these triggers are significant predictors of the likelihood of passing the course. They found none of the students who activated only one trigger needed to repeat the course, while almost all the students who activated ten or more triggers in the semester needed to retake the course. Therefore, the number of trigger events is a good indicator of who needs support.
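
If you wanted to check this relationship against your own data, a hedged sketch of the analysis might look like the following; the DataFrame columns ('trigger_count', 'passed') are assumptions for illustration, not the SDSU data.

# Sketch: checking how course outcome varies with total trigger count.
# Assumes a DataFrame with one row per student and columns
# 'trigger_count' and 'passed' (True/False); both column names are assumptions.

import pandas as pd

def pass_rate_by_trigger_band(df: pd.DataFrame) -> pd.Series:
    """Pass rate within bands of total trigger count (0, 1-4, 5-9, 10+)."""
    bands = pd.cut(df["trigger_count"],
                   bins=[-1, 0, 4, 9, float("inf")],
                   labels=["0", "1-4", "5-9", "10+"])
    return df.groupby(bands)["passed"].mean()

# Usage with your own institutional data:
# print(pass_rate_by_trigger_band(students_df))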

3) intervention strategies based on emails, including links to multimedia-based support materials etc., did not have any statistically significant impact on the behavior of those who received them. They conclude, “while the triggers may indicate a problem, students may not have the skills to address the cause of the problem despite awareness and the will to do so”. The article suggests a more human intervention strategy based on supplemental instruction by peer (student) mentors. The interesting challenge for SDSU was achieving impact with tight resources; given course intakes of over 400 students, scalability is critical.

These outcomes resonate with some of the discussions I have with UK HEIs around effective intervention strategies. They also support the position held by many on the need for a mediated intervention, as opposed to large-scale student dashboards and auto-generated emails. To further support this position, the question I’m interested in answering is: is there any evidence that student-facing dashboards have a demotivating impact on the learner?

What is a proxy for student engagement? It appears access data to learning resources doesn’t cut the mustard

I was traveling to an onsite visit, so I had a great opportunity to catch up on some reading. Top of my Learning Analytics reading list was Colthorpe, Zimbardi, Ainscough and Anderson (2015) – “Know thy student! Combining learning analytics and critical reflections to increase understanding of students’ self regulated learning in an authentic setting”.

I was particularly interested in this article as I’m starting to explore what appropriate data triggers for interventions might be.

The key message I’ve taken from the article is the potential to include non-discipline questions within low-stakes summative assessment; the early submission of these meta-learning assessments is a very good indicator of academic achievement. Meanwhile, if we consider the strategies of the self-regulated learner, our use of access logs for learning resources can be a misleading trigger for an intervention.

The intention of this post is to partially unpack the above.

Colthorpe et al. (2015) applied Zimmerman’s three-stage self-regulated learner model (forethought, performance, reflection). Their analysis identified that student access to learning resources (in this case the lecture recording) was not a good indicator of academic achievement, and would be a poor intervention trigger.

The explanation is that a high-performing self-regulated learner has set their goals and motivations in the forethought stage. Consequently, if they access the lecture recording and do not achieve their desired learning outcome, they will shift to alternative sources, i.e., textbooks. This can be contrasted with an individual who has low self-regulation, evidenced by their not reflecting on the effectiveness of their previous learning strategies and, consequently, not having developed a range of effective approaches. In this scenario, they will simply continue to access the lecture recording, as they have no previous experience to draw upon to seek alternative sources. The above illustrates two extremes, but it suggests access clicks on resources may not be a good indicator of a struggling student. For instance, even an effective self-regulated learner may return to re-watch sections of the lecture recording.

The article evidences that a good indicator of academic engagement is the early submission of meta-learning assessment tasks, where these tasks are scheduled around the timetable of discipline-specific assignments. The structure of the tasks is designed around Zimmerman’s self-regulated learning model. The submission date (time from deadline) and the quality of the responses are the triggers for intervention.
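
As a small, hedged illustration of turning “time from deadline” into a data point (the dates and names below are made up for the example, not taken from the article):

# Sketch: deriving a 'time from deadline' indicator for a meta-learning task.
# The example dates are hypothetical.

from datetime import datetime

def hours_before_deadline(submitted_at: datetime, deadline: datetime) -> float:
    """Positive = submitted early; negative = submitted late."""
    return (deadline - submitted_at).total_seconds() / 3600.0

deadline = datetime(2016, 3, 18, 23, 59)
submitted = datetime(2016, 3, 15, 20, 30)
print(f"Submitted {hours_before_deadline(submitted, deadline):.1f} hours before the deadline")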

I’ll admit the term “meta-learning assessment tasks” does sound rather alarming, and abstract, to the majority of Academics / Faculty. However, within Colthorpe et al.’s model it breaks down into a relatively straightforward design.

There are four tasks evenly spaced throughout the course; each task contains six short questions, and altogether they contribute around 12% to the overall grade.

The tasks include:

• articulating the study strategies they had used in the past and identifying hindrances to their learning
• articulating strategies they might use to improve their learning and to promote effective study before an exam (mid-semester)
• reflecting on the strategies they used for their exam (mid-semester), identifying how effective they were and how to improve

A message I took from the article was the importance of the forethought stage to academic achievement. They suggested this is a critical stage for success as it concerns organization and planning, motivation, beliefs and personal goal setting.

So, not wishing to critique the whole article (I leave that to you), it would be good to consider how a faculty member/academic might need to change practice in light of this article and my previous posts around assessment patterns. I’d suggest these ideas could easily be incorporated within curriculum learning models. For instance, the inclusion of non-discipline short-answer questions is easily accommodated within the VLE quiz engine and the existing summative assignment diet. Therefore, apart from the allocation of marks for completion, it would be relatively straightforward to implement.

The use of marking rubrics should allow a quick assessment by the faculty/academics. The outcome is that the intervention decision is informed by:

• have they “successfully” completed the low-stakes assessment?
• was their submission “timely”?
• did their submission demonstrate the “attributes of a self-regulated learner”?

The words in quotation marks need to be further defined within the institutional and course culture, and the threshold levels need to be informed by analyzing the data.
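
To make that concrete, here is a minimal sketch of such a decision rule; the thresholds and field names are placeholders that, as noted above, would need to be defined within the institutional and course culture and informed by the data.

# Sketch of an intervention decision informed by the three questions above.
# All thresholds and field names are placeholders to be set locally.

from dataclasses import dataclass
from typing import Optional

@dataclass
class MetaTaskSubmission:
    rubric_score: float           # 0-100, from the marking rubric
    hours_before_deadline: float  # positive = early, negative = late
    srl_rating: float             # 0-5 rubric rating of self-regulated learning attributes

def needs_intervention(sub: Optional[MetaTaskSubmission],
                       pass_mark: float = 40.0,
                       timely_hours: float = 24.0,
                       srl_threshold: float = 2.0) -> bool:
    """Flag a student if the submission is missing, unsuccessful, late,
    or shows weak self-regulated learning attributes."""
    if sub is None:                               # no submission at all
        return True
    if sub.rubric_score < pass_mark:              # not "successfully" completed
        return True
    if sub.hours_before_deadline < timely_hours:  # not "timely"
        return True
    if sub.srl_rating < srl_threshold:            # weak SRL attributes
        return True
    return False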

Visualisation of Learning Analytics V2

Having talked it through with a few people, and tried to use it as a discussion aid at a recent event, I’ve tweaked my visualisation of learning analytics slightly, and included the new version below.

The tweaks are because I wanted to illustrate not just the technology infrastructure an institution is likely to need, but also the people who are likely to be involved in the process, and to provide an indicator of their likely required skill sets.

It does illustrate just how learning analytics touches the majority of the institution. It also allows us to look at the visualisation and ask whether we recognise these roles within our own institutions.

Note – this only uses a subset of the potential data sources and feedback loops (to senior managers, etc.).

People & Technology Infrastructure- Generic

What might the Learning Analytics process look like from my perspective?

The starting point for this post is firstly to admit I’m not technical, and secondly to admit I’m no expert on the nuances of Learning Analytics. However, I am asked about it quite regularly, and I do try to explain what is required from various perspectives. The question is, do I help? I’m not as effective as I’d like to be 🙂 I feel I need a graphical means of enabling the people around the table to have a meaningful conversation.

The following is an attempt to visualise conversations I’ve had with colleagues, for people who aren’t immersed in learning analytics. I’d love feedback …

The context of use is to start a conversation around a feature set an institution might use as part of a pilot study. Therefore, I’d talk through a general use case, and start to identify the likely indicators used to identify the degree of student engagement. For instance, in a previous post on this blog, I’ve discussed a use of learning analytics to nudge curriculum design to generate timely, actionable data. A use case might be around retention and the need for enhanced reporting on educational data to action appropriate intervention strategies. The proxies for the level of student engagement could be a combination of educational data: LMS/VLE (formative assessments – quizzes, discussion boards, and login data), SIS (summative assessment record), lecture attendance, and engagement with classroom technologies (clickers).

An indicative feature set for an institution to enable this is visualised below,

Indicative Feature Set- Overview (1)

Where

  • Extract, Transform and Load (ETL) refers to a process in database usage, and especially in data warehousing, that extracts data from homogeneous or heterogeneous data sources, transforms the data into the proper format or structure for querying and analysis, and loads it into the target data store.
  • OLAP is an acronym for Online Analytical Processing. OLAP performs multidimensional analysis of business data and provides the capability for complex calculations, trend analysis, and sophisticated data modeling.
  • SIS is the student information system
  • LMS / VLE is the learning management system / virtual learning environment
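
As an illustration of the ETL step in the feature set, the sketch below extracts hypothetical VLE and SIS exports, transforms them into a per-student summary, and loads the result into a local SQLite store. The file names, column names and target store are all assumptions made for the sake of a runnable example.

# Minimal ETL sketch: extract from hypothetical VLE and SIS exports,
# transform into a per-student engagement summary, and load into SQLite.
# All file and column names are assumptions.

import sqlite3
import pandas as pd

def extract(vle_csv: str, sis_csv: str):
    """Extract: read raw exports from the source systems."""
    return pd.read_csv(vle_csv), pd.read_csv(sis_csv)

def transform(vle: pd.DataFrame, sis: pd.DataFrame) -> pd.DataFrame:
    """Transform: aggregate VLE activity per student and join the SIS record."""
    activity = (vle.groupby("student_id")
                   .agg(logins=("login_timestamp", "count"),
                        quizzes_completed=("quiz_completed", "sum"))
                   .reset_index())
    return sis.merge(activity, on="student_id", how="left").fillna(0)

def load(summary: pd.DataFrame, db_path: str = "engagement.db") -> None:
    """Load: write the summary table into a queryable store."""
    with sqlite3.connect(db_path) as conn:
        summary.to_sql("student_engagement", conn, if_exists="replace", index=False)

# Usage with your own exports:
# vle, sis = extract("vle_export.csv", "sis_export.csv")
# load(transform(vle, sis))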

I’d suggest a visualisation approach for lay people is a really useful way to move the conversation between the bigger picture of interlinked determinants, such as culture, people and processes, and the required technical infrastructure.

It also provides a framework to drive out the questions which would need answering within a learning analytics pilot. This is through exploring the big picture, and how the individual components function. For instance:

  1. What does Learning Analytics mean within your context?
  2. What are the specific questions you’d like answering?
  3. What are the data points and indicators you need to answer these questions?
  4. Where does that data reside? Where and how would it be best stored?
  5. Who can access the data? What is the reporting process?
  6. Is your current curriculum development model generating consistent, actionable learning data?
  7. What are the most appropriate intervention strategies and work flows?
  8. What are the ethical considerations around this use of Learning Analytics?
  9. What changes, if any, do you need to consider in Policy and Procedures for this use of Learning Analytics?
  10. What is the required evaluation framework and enhancement cycle for Learning Analytics?

Plus many, many more.  So, what should I add to this to make it work for you?

With Thanks – Image – http://www.hightekknowledge.com/wp-content/uploads/2015/03/cube-.jpg

Competency based learning: have you any examples outside health studies?

I’ll be off on my travels soon to present at a Teaching and Learning Forum. The topic they’d like me to focus on is competency based learning. The audience is very diverse, ranging from senior managers and academics to support teams (curriculum design) and sys admins. So, I’m thinking the following should resonate in part with all of them (40 minutes listening to me ranting about a topic close to my heart, but not theirs, might be a tall order for some in the audience).

However, what I do need are some research findings and case studies from non-medical / health discipline areas. For instance, is competency based learning being used to teach undergraduate social scientists?

So if you know of any materials, links please ping them to me – thanks 🙂

The Abstract:

Competency based learning has a rich history in medical and health education. The key characteristic is student progression based on demonstrating proficiency and/or mastery of specific skills or abilities, as measured through assessments. This contrasts with the more traditional learning design based on a fixed-time model (a semester or year). Advocates of competency based learning suggest student engagement is higher through the creation of personalized learning pathways which are tailored to students’ unique needs.

Recent innovations in Technology Enhanced Learning have reduced the barriers to designing, implementing and monitoring a competency based learning model. For instance, consider the ease with which Goals and Achievements are set within Blackboard Learn, and the enhanced reporting available through Learning Analytics. Therefore, a number of perceived barriers have disappeared, which enables us to re-visit the possibilities of this learning model.

The aim of this presentation is to discuss the potential of adopting competency based learning within the context of the individual faculty member, and the wider institution. This will be achieved through answering five questions:

1. What are the characteristics of Competency Based Education (Learning)?

2. In what educational contexts has it been used effectively?

3. How might Competency Based Learning transfer to other disciplines?

4. From a Faculty perspective, what do the technology enhanced learning activities look like? What is good practice around how we design these activities?

5. From a Senior Manager’s perspective, how might we scale up Competency Based Education across the institution?

—————————–

With Thanks – Image – https://upload.wikimedia.org/wikipedia/commons/thumb/9/9a/Target_logo.svg/2000px-Target_logo.svg.png

Using Learning Analytics to “nudge” students to take more ownership over their learning

Becoming an independent or self-regulated learner is an important part of becoming a more effective learner. This type of learner is “distinguished by their responsiveness to feedback regarding the effectiveness of their learning, and by their self perceptions of academic accomplishment” (Zimmerman, 2000: 14). It requires not only the learner taking responsibility for their weaknesses, but also Faculty giving them the opportunity to identify, correct and improve upon these weaknesses (Fritz, 2013).

The following discussion has two aims. Firstly, to outline how a Faculty member can redesign their learning activities around tools within the virtual learning environment to provide richer feedback and trigger dialogue amongst students, with the intention of nudging them to become more effective self-regulated learners. Secondly, to outline the case for Senior Managers to drive this redesign process, enabling the institution to take better advantage of learning analytics and data-driven decision making.

Davenport et al., (2000) define learning analytics as “the application of analytic techniques to analyze educational data, including data about the learner and teacher activities, to identify patterns of behaviors and provide actionable information to improve learning and learning related activities”. An important aspect within this definition is the concept of the data being “actionable”, by either the learner, teacher or another stakeholder group.

To gather data and take action requires the learning model to move away from an orthodox approach of a few high-stakes summative assessments (typically one essay and an unseen exam) towards one which provides more frequent feedback opportunities and learning loops for the student, but which is sustainable and scalable for the Faculty member. The problem with the orthodox design is that it cannot easily develop self-regulated learners because there is too little reliable and actionable data. Therefore, if frequent actionable data is required, the re-designed approach should include using the VLE quiz engine, the submission of short online writing tasks (with defined marking criteria) and, potentially, classroom voting technologies. In this case the learning activity will generate more feedback and enhanced reporting for the individual learner.

For instance, you might deploy a number of five-question online tests to be completed by all students. This would generate significant amounts of actionable data without significant work. Also, by using a variety of different question types you’ll easily be able to tease out higher-order learning skills. Question types might include Likert scale (to what extent do you agree …) and short free-text responses (in no more than 150 words). The individual can access their score and feedback online, and compare themselves to the average grade. The Faculty member can dedicate a proportion of the next face-to-face teaching session to providing additional feedback on the questions. This approach aligns with all seven principles of good feedback practice (Nicol & Macfarlane-Dick, 2006).
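
As a rough, hedged illustration of the kind of actionable data such a test yields, the sketch below compares each student’s score on a five-question test with the class average and flags non-completers and low scorers for follow-up in the next face-to-face session. The column names and follow-up threshold are my assumptions.

# Sketch: turning scores from a short online test into actionable data.
# Assumes a DataFrame with columns 'student_id' and 'score' (out of 5),
# where a missing score means the student did not complete the test.
# Column names and the follow-up threshold are assumptions.

import pandas as pd

def summarise_test(results: pd.DataFrame) -> pd.DataFrame:
    """Add a class-average comparison and a follow-up flag per student."""
    class_average = results["score"].mean()
    summary = results.copy()
    summary["class_average"] = class_average
    summary["difference_from_average"] = summary["score"] - class_average
    # Flag non-completers and those well below the class average for
    # additional feedback in the next face-to-face session.
    summary["follow_up"] = summary["score"].isna() | (summary["score"] < class_average - 1)
    return summary

results = pd.DataFrame({
    "student_id": ["s1", "s2", "s3", "s4"],
    "score": [4, 2, None, 5],
})
print(summarise_test(results))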

So, why should Senior Managers encourage Faculty to enhance their technology-based learning designs? The simple answer is that, from the institutional perspective, the use of the VLE to generate and share data underpins the effective adoption of learning analytics. These re-designs by individual Faculty will act “as a series of small [steps] designed to gain experience and make the case that data based decisions have enhanced value” (Bichsel, 2012: 26). For this to work, the Institution needs to be aware that Learning Analytics does not require perfect or near-perfect data, and that the institution needs to walk before it can run. Therefore, a number of small-scale pilots around enhanced reporting and effective intervention strategies will enable the institution to improve its readiness for Learning Analytics. The pilots would instigate conversations around institutional culture, people and processes, as well as technology infrastructure. This will add more value than exploring correlations in large data sets between VLE access data (as a proxy for student engagement) and final grade performance, as is often discussed in the literature.

Where next for you? If you are Faculty, a good starting point would be to explore the functionality and possibilities of the online quiz tool and grade center. If you are a Senior Manager, a good starting point would be to explore effective academic adoption models within the context of your institution.

————————————————

Bichsel, J., (2012) ECAR Study of Analytics in Higher Education, available at: http://www.educause.edu/library/resources/2012-ecar-study-analytics-higher-education (Accessed October 2015)

Fritz, J., (2013) Using Analytics at UMBC, Educause Center for Applied Research, Research Bulletin, available at https://net.educause.edu/ir/library/pdf/ERB1304.pdf

Nicol, D., & Macfarlane-Dick, D., (2006), Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199-218

Nussbaumer, A., Hillemann, E-C., Gutl, C., and Albert, D., (2015), “Competence-based Service for Supporting Self-Regulated Learning in Virtual Environments”, Journal of Learning Analytics, 2(1), 101-133

Zimmerman, B., (1990), “Self-Regulated Learning and Academic Achievement: An Overview” in Educational Psychologist, 25(1), 3-17