Wednesday 18 April 2018

Food for Thought about Online Teaching and Learning: A D2L Design Story from the University of Calgary - Danielle Dore


I recently attended the D2L Connection: Alberta event hosted by SAIT, which provided a wonderful opportunity to connect with other institutions about their practices and insights into using D2L. Representatives from D2L were also on hand to talk about new Daylight features and implementation. One particularly interesting session I attended was Changing the Conversation about Online Teaching and Learning: A D2L Design Story, presented by the University of Calgary’s Taylor Institute for Teaching and Learning.
This session centered on what was learned about increasing learner success and engagement in an online course. Over two years of learner and instructor surveys, feedback, and subsequent changes, the Educational Development Unit proposed and implemented a number of changes to their online courses. What I found quite interesting is that many of these changes are quick and easy to implement, and many are modifications made directly in D2L.
Changes made include:
- Elimination of submodules – Learner feedback suggested that content was difficult to locate when learners had to navigate into submodules.

- Cut content – Learners found it hard to prioritize and process information when too much content was presented. Course content should include only what is essential for learning; supplementary resources, or resources for extension, should be selectively released to individual learners as needed. D2L provides many options for selective “Release Conditions” that can release additional resources and content to learners based on scores on Dropbox rubrics, grade rubrics, quiz totals, and/or even specific quiz questions. This keeps D2L uncluttered while providing targeted resources to particular learners; a rough sketch of the idea follows below.
*For more information about Release Conditions in D2L, please see Lorna Malanik’s post: Personalize Learning with Intelligent Agents and Release Conditions
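To make the idea concrete, here is a minimal sketch of the selective-release logic in Python. It models only the behaviour you configure through the D2L interface; it is not D2L’s actual API, and every name and threshold below is invented for illustration.

```python
# Toy model of selective release: each hidden resource carries a
# condition, and a learner "sees" it only when the condition is met.
# This mimics behaviour configured in D2L; it is not D2L's real API,
# and all names and thresholds are hypothetical.

learner_record = {
    "quiz_total": 0.45,         # fraction correct on the unit quiz
    "rubric_level": 2,          # level achieved on a Dropbox rubric
    "missed_questions": {3, 7}, # quiz questions answered incorrectly
}

extra_resources = {
    "Extra practice: unit review":        lambda r: r["quiz_total"] < 0.5,
    "Worked examples for the assignment": lambda r: r["rubric_level"] <= 2,
    "Mini-lesson on question 7":          lambda r: 7 in r["missed_questions"],
    "Enrichment reading":                 lambda r: r["quiz_total"] >= 0.9,
}

# Only resources whose condition holds are "released"; everything
# else stays hidden, keeping the course uncluttered.
for title, condition in extra_resources.items():
    if condition(learner_record):
        print("Released:", title)
```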

- Naming modules by week instead of “Module X” – Naming modules by week gave learners another way of keeping on track in their courses. Feedback also suggested that the word “module” itself presented a barrier to some learners, who did not fully understand what the term referenced. When the content/topic of a module is referenced, it should be named with an action verb. For example:
  - “Week 1: Applying Reading Strategies” vs. “Module 1: Reading Strategies”

- Online environment as the third teacher – Significant changes were made to “humanize” the online environment to increase learner engagement and reduce anxiety through:

  - Use of videos, GIFs, holiday greetings, and video greetings from instructors. *On an interesting side note, feedback suggested that learners found having a video greeting from their instructor to be very important; however, many learners did not actually watch it. It seems that just having the option to watch the instructor’s video was enough for them.
  - Use of learner pictures.
  - Groupings of learners within the course.

- Implement mental health strategies – The need to normalize mental health and wellness was recognized as an important consideration for course development. It was implemented in the course through:
  - Extending the length of the course to include a “Break Week,” giving learners (and instructors) time to incorporate mental health/wellness strategies and access mental health resources.
  - Embedding self-care strategies through:
    - Mindfulness/meditation activities.
    - Use of D2L Intelligent Agents, followed by personal email, to check in with learners.
    - Course cafés (opportunities for learners in a course to connect with each other in ways that are not necessarily content related).

- Role-based discussion – Each learner is expected to perform each of the following roles at some point during the course:
  - Director/facilitator – oversees the discussion and keeps it relevant and on track
  - Connector – makes real-life connections to discussion content
  - Wordsmith – researches to support connections
  - Reporter – presents the discussion to the whole class

By providing learners with discussion roles and expectations, all aspects of an effective discussion can be covered while learners retain the flexibility to choose when, and in which discussions, they take on a specific role. It also lets learners decide which weeks might be better or worse suited to performing certain roles. For example, if a learner has a more demanding week in other courses, the group may agree that the learner will take on a less demanding role for that discussion.

- “Veedback” – Video feedback, in which the instructor provides learners with feedback as a video in place of written comments. Learners reported that they preferred, and more often actually reviewed, this mode of feedback compared with written feedback. This strategy also contributes to humanizing the course.

- Develop inquiry skills – Learners are presented with a discussion topic in the form of a driving question, explored over time through discussion threads. This allows learners to “connect the dots” of content and see the bigger context of what they are learning. It also allows the instructor to trace a learner’s developing understanding of a topic over time and to support, enhance, direct, or redirect the learner’s exploration of it.

One of the most impactful messages I took from this session was the importance of continually surveying and gathering feedback from both learners and instructors to inform modifications for course improvement. All courses should be seen as continually under development, with room to enhance learner success and engagement.

Tuesday 17 April 2018

The Importance of Accurate Student Assessment - Take-Aways from the Calgary Regional Consortium’s “Interpreting Diploma Exam Results: A Formative Tool to Enhance Instructional Practices” Professional Learning Session


As the name suggests, the focus of this session was to consider a multitude of factors in order to appropriately interpret diploma exam results, particularly discrepancies between an individual learner’s school-awarded mark (SAM) and diploma exam mark (DIP), as well as province-wide averages of SAMs and DIPs.  Participants were even provided with several Excel programs used to input data and produce enhanced, visual interpretations of diploma exam results.  While the focus was on high school, the take-aways can apply to teachers at any level, because it matters to all of us that a learner’s grades accurately reflect his or her skills.

In relation to the reports provided by Alberta Education, it is interesting to look at provincial averages in comparison to our learners’ averages.  But before throwing our hands in the air in defeat, or alternatively jumping for joy, we should consider that our small sample size at BVC doesn’t give us very much data to work with in any particular exam sitting.  Instead of looking at a single year, we should consider trends through the multi-year reports and also reflect on whether the demographics of our learners are representative of the demographics of the province.
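To put the small-sample caution in rough numbers (the figures here are made up, and this is a back-of-the-envelope normal approximation, not anything from Alberta Education’s reports), the margin of error on a class average shrinks only with the square root of the class size:

```python
# Back-of-the-envelope margin of error on a class-average exam mark,
# using a normal approximation. The spread of individual marks
# (15 percentage points) and the class sizes are invented.

import math

sd = 15.0  # assumed standard deviation of individual exam marks

for n in (10, 30, 100, 1000):
    margin = 1.96 * sd / math.sqrt(n)  # approximate 95% margin of error
    print(f"n={n:4d}: class average pinned down only to ±{margin:.1f} marks")
```

With a class of 10, the class average can sit roughly nine marks away from the provincial average by chance alone, which is why a single sitting at a small institution says very little on its own.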

What we can consider at BVC for a particular exam sitting is any discrepancies between a learner’s SAM and DIP.  The province-wide trend is for DIPs to be lower than SAMs.  But why?  

Tim Coates, a former Social Studies teacher who served as the Director of the Diploma Examination Program Branch at Alberta Education from 2005 to 2014, facilitated the session and offered some points for consideration.

  • A bad day at the exam.  A learner may simply have a bad day at the exam and underperform.
  • Grades for non-outcomes.  Awarding or reducing grades for attendance, participation, attitude, effort, or other behaviours such as turning work in early/late can impact learners’ grades substantially regardless of their ability in a course.  While instructors may be frustrated that a learner does not take his or her work seriously, or delighted that he or she makes a tremendous effort, these things should not directly inform any part of a learner’s grade.
  • Grades for “polluted” data.  Coates provided a memorable example.  Imagine you need a lab test.  You go to a lab and provide your sample.  The technician then tells you, “we’ll give you your results just as soon as we have samples from three other people; we’re going to mix your samples, test the results, and then distribute the results among the group.”  Clearly, this would not work to measure an individual’s lab results, and it doesn’t work to measure an individual learner’s ability in a group work situation, either.  We need “clean” data.
  • Grades for formative assessment and resubmission of assignments.  We can agree that learning happens through practice and assessing this practice serves as formative assessment for learning, and therefore it should not be counted as a weighted grade.  But what about allowing a learner to take feedback from a summative assessment, make improvements, and then resubmit the assessment?  Clearly the learner’s grade will improve.  Good.  But is this an actual representation of a learner’s ability to do the work him or herself?  Coates emphasized that this method of resubmission and reassessment should be considered formative, not summative, and therefore there should be no weight associated with the grade.  For learners to demonstrate their mastery of a skillset, they need a new task to independently show what they know.
  • Assessments are unreliable.  Coates explained at considerable length the process of assessing the reliability of exams and showed a variety of calculations we can make to determine both the difficulty and the discrimination of our exams.  (This is not for an ELA teacher to attempt to explain, but a rough sketch of the item calculations follows this list.)  Through this, he stressed that we consider the purpose of our assessments: through our assessments, can we determine the difference between an 80% and a 90% learner, or a 30% and a 50% learner?  Does the assessment make it possible for everyone to perform reasonably well despite not knowing much, or to perform only average in spite of knowing a lot?  In other words, does our assessment discriminate?  If the assessments we provide our learners throughout the course cannot really showcase what they do or do not know, their SAMs are not going to correlate with their DIPs.
    •  As a side note, this is the purpose of analytics programs such as Form Return (which we used in our department for a short time) and Smarter Grades (a newer program we are investigating) that can make these calculations for multiple-choice and short-answer tests and show the results, even aggregating data from multiple classes, terms, or years.  Although it would mean considerable backend work, using such a program to analyze the reliability of unit and equivalency exams could usefully inform the development of better exams.
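For the curious, here is a minimal sketch of the two classic item statistics Coates described: difficulty (the proportion of learners answering an item correctly) and discrimination (here, the point-biserial correlation between success on an item and total exam score). The response data are fabricated, and real item-analysis tools apply refinements this sketch omits.

```python
# Minimal item analysis for a 0/1-scored exam: difficulty and
# point-biserial discrimination per item. Data are made up.

def item_difficulty(scores):
    """Proportion of learners answering the item correctly.
    Near 1.0 = easy item; near 0.0 = hard item."""
    return sum(scores) / len(scores)

def point_biserial(item_scores, totals):
    """Correlation between getting this item right and overall score.
    A discriminating item is one strong learners tend to get right
    and weak learners tend to get wrong (value well above 0)."""
    n = len(item_scores)
    p = item_difficulty(item_scores)
    mean_t = sum(totals) / n
    sd_t = (sum((t - mean_t) ** 2 for t in totals) / n) ** 0.5
    if sd_t == 0 or p in (0, 1):
        return 0.0  # no variance: the item tells us nothing
    mean_correct = sum(t for s, t in zip(item_scores, totals) if s) / sum(item_scores)
    return (mean_correct - mean_t) / sd_t * (p / (1 - p)) ** 0.5

# Five learners x four items (1 = correct, 0 = incorrect).
responses = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 1],
    [1, 0, 0, 0],
    [0, 1, 1, 1],
]
totals = [sum(row) for row in responses]
for i in range(4):
    item = [row[i] for row in responses]
    print(f"Item {i + 1}: difficulty {item_difficulty(item):.2f}, "
          f"discrimination {point_biserial(item, totals):.2f}")
```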

Unfortunately, at the time I attended the session, our DIP reports were not available to me, but I am looking forward to analyzing my own learners’ diploma examination results in this new light and I hope to have reasonable access to the data in a timely manner after future exam sittings.  I think looking at the results within subject discipline groups will foster useful discussion about exam and assessment reliability and consistency between classes.

This session proved to be a good reminder that all assessments should reflect the outcomes, the entire set of outcomes, and only the outcomes of the course.  It also brought me back to the old conflict between what I feel is philosophically sound and what I feel is practically and/or usefully managed.  For instance, how do we survive the marking load from multiple learners who fail to complete assessments on time and then submit a pile of them in the last week of the term?  Coates reasons that it is not our job to teach the learner a lesson in responsibility and that our obligation is to provide the most accurate report of learning; therefore, no assignment is too late and no marks should be deducted for lateness.  So if we do accept late assignments and allow them full grades as Coates suggests, do we also laboriously make comments on the late work, or can we say the point of providing feedback is now lost?  What about formative assessments (e.g., a writing assignment completed for feedback rather than a grade) where the associated summative assessment is already past due?  Or what about denying access to a discussion forum after a new forum has started and most of the class has moved on?

What do you think?  Do you face similar conflicts?  How have you resolved them?

Lorna

Monday 16 April 2018

Take-aways from the D2L Connection: Alberta Event - Personalize Learning with Intelligent Agents and Release Conditions


Have you struggled with knowing what information to include in D2L and which to leave out?  You know that some learners need lots of support, and you know that other learners don’t need nearly as much.  So what can you do?  Do you include everything in your unit folders “just in case”?  You could.  But this can become overwhelming and confusing for learners to navigate.

Did you know that you can use D2L to release materials selectively to learners on a need-to-know basis?

You may wonder, why would I want to limit the information I provide to all learners?  Well, simply, to make things simpler.  Your course can be streamlined, and as a bonus, you can increase the sense of instructor presence by allowing specific material to be released to meet individual learners’ needs.

Here’s a scenario: you have a learner who has just flunked a quiz or other assessment.  So you, brilliant, ever-present instructor, use Release Conditions to automatically make additional support available and use an Intelligent Agent (a personalized email sent automatically based on criteria you have established) to inform this learner of the material that has been made available specifically to address the issue and provide more practice with the concept.  A rough sketch of this pattern appears below.
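To show how the two features fit together, here is a minimal model of that scenario in Python. It is emphatically not D2L’s actual API; the threshold, the `Learner` class, and the two helper functions are hypothetical stand-ins for what you would configure through the D2L interface.

```python
# Illustrative model of the Release Condition + Intelligent Agent
# pattern. NOT the D2L/Brightspace API: QUIZ_THRESHOLD, Learner,
# release_support, and send_agent_email are invented stand-ins
# for behaviour configured in the D2L interface.

from dataclasses import dataclass

QUIZ_THRESHOLD = 0.5  # release support to anyone scoring below 50%

@dataclass
class Learner:
    name: str
    email: str
    quiz_score: float  # fraction correct on the quiz

def release_support(learner):
    # Stands in for the Release Condition: hidden "extra practice"
    # content becomes visible only to this learner.
    print(f"Releasing extra practice module to {learner.name}")

def send_agent_email(learner):
    # Stands in for the Intelligent Agent: a personalized,
    # automatically triggered email pointing at the new material.
    print(f"Emailing {learner.email}: new practice material is available")

def run_agent(learners):
    for learner in learners:
        if learner.quiz_score < QUIZ_THRESHOLD:  # the agent's criterion
            release_support(learner)
            send_agent_email(learner)

run_agent([
    Learner("Sam", "sam@example.com", 0.35),    # gets support + email
    Learner("Priya", "priya@example.com", 0.90) # sees nothing extra
])
```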

What’s the point?  If you have sufficient opportunities for formative assessment, you can have learners see only the vital information in your courses and keep the extra stuff hidden until a Release Condition has been met.  Your courses will be streamlined and your learners will know that each file is important and can’t be skipped.  It will eliminate the temptation to have folders such as “Extra Material” or “Additional Handouts” or “Supplementary Resources” that are rarely accessed by the learners who need them most.  

Click here for a video that shows how Release Conditions and Intelligent Agents can be used together to personalize learning.

Lorna